"Are we in an AGI bubble?" is an ill-posed question. The vast majority of companies riding the current wave of buzzwords are selling "GenAI" (generative) and proposing that general intelligences are on the horizon, be it near or far. GenAI is a bubble, AGI is vaporware.
I’ve created an interactive site showcasing the results of an online survey about AI. My hope is that this site could serve as a snapshot of public sentiment about AI, a sort of historical artifact for our time.
So far, only 15 random people have responded, but I thought it might be interesting to hear perspectives from the HN community as well! If you have a moment, I’d love for you to check it out and share your thoughts.
I’m also open to suggestions on how to improve the site or what else I could add to make it more engaging or insightful.
Is there even an accepted AGI promise against which to judge a bubble? The premise of the question seems inconsistent with what we actually have now, afaict.
Who gets to decide or define what 'AGI' even is? I doubt even OpenAI, Anthropic, or DeepMind can define it or give an accurate date for when 'AGI' will be achieved; they have already given conflicting timelines.
Because of this, 'AGI' can mean anything, and it is currently (ab)used as a vehicle to raise billions of dollars from VCs to 'achieve' it, even as these companies lose billions a quarter.
With reports of diminishing returns from training new models, each costing hundreds of millions of dollars, the AGI bubble is already shaping up to be a massive scam.