📰 Summary
🚫 Skepticism Mounts Against Superintelligence Hype
Despite bold claims by AI leaders such as Sam Altman (OpenAI), Demis Hassabis (Google DeepMind), and Dario Amodei (Anthropic) that superintelligent AI is imminent, many AI researchers argue the reality is far more limited.
Even Meta has joined the race, investing $14 billion to chase Mark Zuckerberg’s vision of human-surpassing AI.
🧠 Apple’s ‘The Illusion of Thinking’ Sparks Industry-Wide Debate
A new paper from Apple titled “The Illusion of Thinking” challenges claims that advanced AI models truly “reason.”
The research evaluated reasoning models from OpenAI, Anthropic, and others, concluding that their accuracy collapses when they face even moderately complex problems.
The paper finds that these models often perform worse than simpler predecessors and fail at puzzle tasks, such as the Tower of Hanoi, that a child could solve.
⚠️ Critical Limitations of Today’s AIs
Apple and Salesforce researchers identified fundamental flaws:
Inability to follow instructions
Breakdown on complex logic tasks
Overreliance on pattern-based responses from training data
These issues suggest that current “reasoning” models are associative pattern generators rather than logical thinkers.
🧩 Real-World Implications
Experts such as Gary Marcus argue the research shows that today’s reasoning AIs are not a step toward AGI but possibly a dead end.
Critics warn against making policy, investment, or business decisions based on hype, as AI performance still falls short of real-world needs.
🗣️ AI Maximalists vs. Skeptics
OpenAI maintains that its models are progressing toward autonomous agents capable of tool use and decision-making.
Others contend that the belief that scale alone will yield human-level intelligence is “probably false,” since models fail basic reliability tests.
🧩 Market and Strategic Insight
AI remains a powerful tool for specific tasks and productivity gains (e.g., ChatGPT’s 500M+ users).
However, claims of imminent superintelligence are overstated, and the industry should proceed with measured expectations and empirical evaluation.