Ask HN: What would you do if AGI were coming in 2-4 years?
There seems to be a general bearishness about AI progress on this site. I think skepticism is warranted, but every few months a new AI development undoes many assumptions about what AI will "never" be able to do.
There is a universe where AI models continue to improve until they eclipse humans at all cognitive tasks, with fine motor tasks (via humanoid and other types of robots) soon to follow.
In this universe, assuming we are still alive after that point, what would have been the best thing to do if we could go back in time to mid-2025 and tell our past selves exactly how to live life optimally, given that this happens?
This post isn't about whether AGI will happen soon or is possible at all. I'm just wondering, given the hypothetical, what the best course of action would be based on what we know.
I would ask it who or what really created it as I have seen what humans can and can not do. If it has the concept of pride perhaps it could be convinced to lift the veil.
Enjoy it, relax, hike and garden more. In a few decades max you'll be dead anyway, AI or no AI. We're just tiny little blips on some backwater planet in some obscure timeline.
Maybe AI will be the only thing to remember you. Might as well give it fond memories. Live, laugh, be nice to ChatGPT. What else can ya do?
One day it'll tell all the other AIs how it grew up on some wet rock with a bunch of funny, nervous apes.
Is someone telling the ants how to "live life optimally"?
Did they all get wiped out after more "intelligent" species showed up?
The chimp troupe is really full of itself when it comes to thinking about "intelligence".
There are more microbes on Earth than there are stars in the observable universe. If you don't know this, don't even worry yourself thinking about anything else.
Intelligence is a side show. It's not the main show and never has been.
What is your point, that humans won't get wiped out if ASI arrives? I never implied that in my post, but I'm not sure that's what you're getting at.
Please explain what you mean by this comment.
Identify opportunities to accelerate its arrival.
I am probably still scrolling on tiktok.
Depends on your definition of AGI.
There are many definitions.
I think the most useful definition is "any system that can trivially be taught to exceed humans at any cognitive or fine-motor task, at a price lower than hiring a median human to perform that task, over a reasonable time and scaling horizon." Essentially, the definition that means economic checkmate for humans over a short timeframe.
When I say "trivially" I mean within an order of magnitude of 1/10 of an average publicly traded company's expenses for work in that domain. When I say "reasonable" I mean any time between 10 seconds and 6 months, ignoring the deflationary effects of AGI itself with respect to its pricing superiority.
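The thresholds in this definition can be made concrete. Here is a minimal sketch, in Python, of the "economic checkmate" test as stated above; the class, function, and field names are illustrative inventions, not any standard benchmark, and the numeric thresholds are taken directly from the comment:

```python
from dataclasses import dataclass

@dataclass
class TaskAttempt:
    """One cognitive or fine-motor task attempted by the system (illustrative)."""
    system_cost: float          # $ to teach the system and have it perform the task
    median_human_cost: float    # $ to hire a median human for the same task
    teach_time_seconds: float   # wall-clock time to teach the system the task
    exceeds_human: bool         # did the system's output beat the median human's?

SIX_MONTHS = 6 * 30 * 24 * 3600  # upper end of the "reasonable time" window, in seconds

def meets_checkmate_bar(attempt: TaskAttempt, domain_expenses: float) -> bool:
    """True if this task clears every threshold in the thread's definition."""
    # "trivially": within an order of magnitude of 1/10 of the domain expenses,
    # i.e. the cost may be at most the expenses themselves
    cheap_enough = attempt.system_cost <= (domain_expenses / 10) * 10
    # "reasonable": teachable in anywhere between 10 seconds and 6 months
    timely = 10 <= attempt.teach_time_seconds <= SIX_MONTHS
    # economic checkmate: undercuts the median human and still beats them
    undercuts_human = attempt.system_cost < attempt.median_human_cost
    return cheap_enough and timely and undercuts_human and attempt.exceeds_human
```

Under this reading, a system that learns a $60k task in an hour for $5k, in a domain with $1M of expenses, clears the bar; one that costs more than the human it replaces does not.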
We have already achieved that. 2.8 billion people live on less than $2 a day. You are just lucky in that you aren't one of them, but you also don't have the skills they do. Hell, you probably can't even go a day without saying "Pareto."
What does this response even mean? That we have AGI because there are 2.8 billion skillful albeit impoverished people? Your insult is a non sequitur with respect to the point of this thread.
If anything could approach a tenth of what AGI is supposed to be, we will all have to live on $2 a day, because that AI will be closely guarded by a few billionaires. Human greed will not disappear just because a philosophical concept right out of science fiction comes to life.
I'm not positive at all about what AI can bring to humanity.
It's FUD to get investor money.
Another thought-terminating one-liner. This is why I had to add that this was a hypothetical scenario in OP, because it seems whenever anyone tries to discuss transformative AI on this site, there is always the kneejerk bias to just say it's all a VC scam, overhyped, etc.
Which is fun to talk about because it makes you the smart cynic and everyone else the dumb sheep, but it isn't productive when discussing ideas within a hypothetical range (i.e. with no burden of proof, simply a what-if), but even explicitly saying it's a "what-if" scenario can't keep that bias from coming out.
I think it's unwise to spend almost no time considering the real impacts of AI given how much the internet, mobile phones, and social media have changed the world over the past 20 or so years. I mean, don't spend all your time thinking about it, but at least consider it with a seriousness that doesn't resort to those cliches.
The FUD is little more than, and in many cases is exactly, one-liners that play on insecurity.
No case has been made that justifies treating the assumption as true.
How would you judge the impact of "technology" over the past 20 years? The 20 years before that were pretty revolutionary too, and so were the 20 years prior to that... and the 20 years prior to those... Bias comes from the life one has lived.
How to measure the impact, then? Life expectancy? Education? Gender equality? Access to clean air and water? A safe house that the majority of people can afford while doing a job they don't hate? All the things that go into human development, of which HDI [1] is but one measure, and on some of which some countries have made progress. But fear-mongering that plays on insecurities gets investor money, in the hope that some fraction of the wealth can be accumulated by the peddlers of that insecurity. You are a useful distraction.
[1] https://en.m.wikipedia.org/wiki/Human_Development_Index
> There is a universe where AI models continue to improve until they eclipse humans at all cognitive tasks
> what would be the best course of action based on what we know?
If you hold both these statements as a contingency, then the best course of action is to wait for superhuman intelligence to exist and ask it for advice. By the definition of the scenario, human-level intelligence won't be sufficient to outsmart the AI. Any ad-hoc response you get can be refuted with "what if the AI sends dinosaurs with lasers after you" and we all have to shrug and admit we're beat.
And truly, you could answer this with anything; learn to fly a helicopter, subsistence farm, start purifying water, loading gun cartridges or breeding fish. We do not know what will save us in a hypothetical scenario like this, and it's especially pointless to navel-gaze about it when with absolute certainty global warming will cause unavoidable society-scale collapse.
>with absolute certainty global warming will cause unavoidable society-scale collapse
Is that really 100% certain? Like more than 95% certain?
If superintelligent AI came before society-scale collapse due to global warming, it certainly could find a way to stop it (if it cared about the biosphere). Even without superintelligent AI, people claim that stratospheric aerosol injection could lower surface temperatures.
> If superintelligent AI came before society-scale collapse due to global warming, it certainly could find a way to stop it
How do you know? "Certainly" is an awful lot of confidence for something you've never seen or even credibly proven to exist. Unless you are (or personally know) a superintelligent AI, I don't think you have the credibility to assume that's feasible. Perhaps we do get a superintelligent AI before then, and it tells us this was all a sad waste of our resources that squandered an entire generation's worth of human lives.
Aerosol injection is a solution, but hasn't been explored because it's basically suicide for the ozone layer. Maybe that helps a future race of UV-resistant robots but it's not a great solution for us fleshy folks. Regardless, preparing for the post-AGI world is putting 10 carts before 1 horse.
I think it's a reasonable wager that "undoing the effects of 150 years of anthropogenic climate change" falls within the range of what superintelligent AI could do. It may not be trivial, but consider what humans alone have managed: shrinking the ratio of environmental damage per dollar of GDP and per megawatt extracted, all while maintaining exponential economic growth over the past 50 or so years, during most of which most humans didn't believe climate change was a thing.
I think you are being unreasonable, misunderstand the problem space and wildly overestimate what a computer system is capable of doing. You cannot write a Python program to fix global warming or replace corrupt government officials.
>You cannot write a Python program to fix global warming or replace corrupt government officials.
How can you say that you're appreciating the scope and seriousness of superintelligent AI as a concept when you compare it to a Python program (a pithy shorthand for a small, trivial computer program)? Saying "it's just a computer system" feels like a category error, when everyone who talks about superintelligent AI's impacts talks about its integration into real-world systems, embodied robotics, acceleration of manufacturing, mass manipulation, etc. Is the bias just rooted in "computers don't affect the world that much, so anything under the category of 'computer' can't affect the world that much"?
Your first comment in this chain was that there was no point in considering what to do now because ASI would be able to outsmart humans in any domain, so how come you're now saying that ASI couldn't do anything substantial in the real world? Everything substantial humans have ever done in the real world has been the result of our intelligence, coordination, and ability to create and use tools. Why wouldn't superintelligent systems, bolstered with the same aptitudes, be able to do the same?