AI’s future looks impressive but is constrained. Models will get faster, cheaper, and more deeply integrated into daily work, but none of that changes what they are. AI is an optimizer, not a thinker: it maps inputs to outputs without understanding them. The human qualities we see in it are our own projection; the system itself does not respond with cognition. As scale increases, returns diminish: each gain demands more data, more compute, and more energy for increasingly marginal improvements. Capability will keep climbing, but the underlying mechanism stays the same. [1]
A useful way to think about AI’s progress is as a curve that approaches the x-axis asymptotically but never touches it. The x-axis represents genuine understanding, agency, or even consciousness. AI can get closer through scale and refinement, but it remains bounded by its training data and objective functions. It can simulate reasoning, but simulation is not the thing itself. Errors and biases don’t disappear at scale; they compound, forcing tighter constraints that further limit progress. The math may improve, but the ontology does not change. [2]
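The asymptote metaphor can be sketched numerically. Under an illustrative power-law assumption (the function and constants below are hypothetical, loosely inspired by empirical scaling-law curves, not any real model’s numbers), each tenfold increase in scale buys a smaller absolute improvement, and the loss never actually reaches zero:

```python
# Illustrative sketch of asymptotic scaling.
# loss() and its constants are hypothetical, chosen only to show the shape
# of a power-law curve that approaches zero without ever reaching it.
def loss(n, a=1.0, b=0.3):
    """Power-law loss: decreases with scale n but stays strictly positive."""
    return a * n ** -b

scales = [10 ** k for k in range(1, 7)]
losses = [loss(n) for n in scales]

# Each 10x in scale shrinks the loss by a constant *factor*,
# so the absolute gain per step keeps getting smaller.
gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]

assert all(g > 0 for g in gains)                                     # still improving...
assert all(gains[i] > gains[i + 1] for i in range(len(gains) - 1))   # ...but by less each step
assert all(l > 0 for l in losses)                                    # and never crossing zero
```

The point of the sketch is only the shape: monotone improvement, shrinking increments, a floor that is approached but never crossed.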
Sentience is not on the table. There is no inner life, no persistence of identity, no wants, just statistical continuation. AI has no agency or intent, no curiosity to drive discovery, and no self-correcting epistemology to distinguish truth from plausibility on its own. Without the ability to set goals, question its premises, or care whether it is wrong, it remains a tool rather than a mind, no matter how close the curve gets to the axis. [3]
References:
[1] There is no such thing as conscious artificial intelligence. https://www.nature.com/articles/s41599-025-05868-8
[2] Reasoning skills of large language models are often overestimated. https://computing.mit.edu/news/reasoning-skills-of-large-language-models-are-often-overestimated/
[3] Why AI still struggles to tell fact from belief. https://news.stanford.edu/stories/2025/11/ai-language-models-facts-belief-human-understanding-research