Nowhere is the debate on AI more polarised than between the evangelists who see the technology as humanity's next great leap and the sceptics who warn of its deep limits. Two recent pieces – Sam Altman's characteristically bullish blog post and Apple's quietly devastating research paper, "The Illusion of Thinking" – offer a fascinating window into this divide. As we stand on the threshold of a new technological era, it is worth asking: what should we really fear, and what is mere hype? And for a country like India, what path does wisdom suggest?
Sam Altman, CEO of OpenAI and a central figure in the AI revolution, writes with the conviction of a true believer that AI will soon rival, if not exceed, human reasoning. Altman's vision is seductive. After all, he argues, AI can be a genuine partner in solving the world's hardest problems, from disease to climate change. His argument is not merely about technological possibility but about inevitability. In Altman's world, the march towards artificial general intelligence (AGI) is not just desirable – it is unstoppable.
But then comes Apple's "The Illusion of Thinking", a paper that lands like a bucket of cold water on the AI enthusiasm. Apple's researchers conducted a series of controlled experiments, pitting state-of-the-art large language models (LLMs) against classic logic puzzles. The results punctured the excitement around artificial general intelligence (AGI). While these models impressed at low and medium complexity, their performance collapsed as the puzzles grew harder. AI is not really "thinking" but merely extending patterns. Faced with problems that demand genuine reasoning, the gaps remain. Apple's work is a much-needed corrective to the narrative that we are on the verge of achieving AGI.
So who is right? The answer, as is often the case, lies somewhere in between. Altman's optimism is not entirely misplaced. AI has already transformed industries and will continue to do so, particularly in fields where pattern recognition and data synthesis are increasingly useful. But Apple's critique exposes a fundamental flaw in the current trajectory: the conflation of statistical capability with genuine understanding or reasoning. There is a world of difference between a machine that can predict the next word in a sentence and one that can reason through the Tower of Hanoi or make sense of a complex, real-world dilemma.
What should the world fear? The real danger is not that AI suddenly becomes superintelligent and takes over, but that we place too much trust in systems whose limitations are poorly understood. Imagine deploying these models in healthcare, infrastructure or governance, only to discover that their intelligence is not really intelligence at all. The risk is not Skynet, but systemic failure born of misplaced faith. Billions could be wasted chasing the chimaera of AGI while urgent, solvable problems are neglected. There is often waste in processes of innovation. But the scale of resources being deployed for AI dwarfs other examples, and therefore demands a different kind of prudence.
However, there are also fears we can safely discard. The existential risk posed by current AI models is, for now, more science fiction than science. These systems are powerful, but they are not autonomous agents plotting humanity's downfall. They are tools – impressive, but fundamentally limited. The real threat, for the moment, is not malicious machines but human hubris.
Are there lessons here for India? The country stands to gain a great deal from AI, especially in areas such as language translation, agriculture and the delivery of public services. Here, building on the strengths of today's AI – pattern recognition, automation and data analysis – the technology can be used to meet local, real-world challenges, which is largely what India has tried to do. But India must resist the temptation to march in step with the hype. Instead, it should invest in human-in-the-loop systems, where AI assists rather than replaces human judgment, especially in domains where a high degree of discretion is exercised at the point of contact with people and where the stakes are high. Human judgment is still ahead of AI, for now, so make sure to use it.
There is also a deeper lesson here, one drawn from control theory. Real control – over machines, systems or societies – requires the ability to adapt, to reason, to respond dynamically to feedback. Current AI models, for all their power, lack this flexibility. They cannot adjust their approach when complexity exceeds their training. More data and more compute will not solve this problem. In this sense, the illusion of AI control is as dangerous as the illusion of AI thinking.
The future will be shaped neither by those blind in their faith in AI, nor by those who see only its limits, but by those who can navigate the space between the two. For India and for the world, the challenge is to harness AI's real strengths while remaining clear-eyed about its weaknesses. The real danger is not that machines will outthink us, but that we will stop thinking for ourselves. Related to this was an interesting brain-scan study of ChatGPT users by the MIT Media Lab, which suggests that AI does not make us more productive. It may instead harm us cognitively. That is what we should worry about, at least for now.
The writer is a research analyst in the high-tech geopolitics programme at the Takshashila Institution