“After poring over a century of varied conceptualizations, I’ll write out my current stance, half-baked as it is:
I think “AGI” is better understood through the lenses of faith, field-building, and ingroup signaling than as a concrete technical milestone. AGI represents an ambition and an aspiration; a Schelling point, a shibboleth.
The AGI-pilled share the belief that we will soon build machines more cognitively capable than ourselves—that humans won’t retain our species’ hegemony over intelligence for long. Many AGI researchers view their project as something like raising a genius alien child: We have an obligation to be the best parents we can, instilling knowledge and moral guidance in the model while recognizing the limits of our understanding and control. The specific milestones aren’t important; what unites them is a feeling of existential weight.
However, the definition debates suggest that we won’t know AGI when we see it. Instead, it’ll play out more like this: Some company will declare that it has reached AGI first, maybe an upstart trying to make a splash or raise a round, maybe after acing a slate of benchmarks. We’ll all argue on Twitter over whether it counts, and the argument will be fiercer if the model is internal-only and/or not open-weights. Regulators will take a second look. Enterprise software will be sold. All the while, the outside world will look basically the same as it did the day before.
I’d like to accept this anticlimactic outcome sooner rather than later. Decades of contention will not be resolved next year. AGI is not like nuclear weapons, where you either have it or you don’t; it’s more like electricity, and even electricity took decades to diffuse. Current LLMs have already surpassed the first two levels on OpenAI’s and DeepMind’s progress ladders. A(G)I does matter, but it will arrive—no, is already arriving—in fits and starts.”
https://jasmi.news/p/agi