The article sounds like a cliché. The progression was always happening; nothing was sudden. It's like the continuous movement of tectonic plates versus earthquakes: when tension between the plates reaches a threshold, a rupture happens. But it is not the rupture causing the tectonic movement. It is the opposite.
Things like electricity, computers, the internet, smartphones, and AI are the earthquakes caused by the tectonic movement toward the dominance of the machine.
The goal of human progress has always been to make everything easier. Tools arose to augment human abilities, both physical and mental, so that humans could free themselves from all the hard work of physical labor and thinking.
We go to the gym and play sports because the body needs some fake activity to fool it into believing that we still need all that muscle strength. We might get some gym and sports for the mind too, to give it some fake activity.
Remember, the goal is to preserve ourselves as physical beings while not really doing any hard work.
I know it might be an offensive way to put it, but I honestly believe that if AI ends up making people no longer need to use their brains as much, it's a great thing.
Think about it: would we rather live in a world where heavy labor is a necessity to make a living, or a world where we go to the gym to maintain our physique?
If mental labor isn't (as) necessary and people just play Scrabble or build weird Rube Goldberg machines in Minecraft to keep their minds somewhat fit, is this future really that bleak?
It is quite bleak to me. Thinking has always been an important part of what makes us human, much more so than physical labor.
Craftsmanship and tool usage are physical activities that also define us as a species, and you will find no shortage of people lamenting our loss of those skills, too. Both those and thinking are categorically different from water carrying, ditch digging, and other basic heavy labor.
I'd say it is a constant fight against laziness. Sure, it is convenient to drive everywhere with a car, but at some point you might realize it makes more sense to walk somewhere once in a while or regularly. Sure, escalators are convenient, but better to take the stairs so you don't need to go to the gym and can save some money. If you ask me, we should all do more physical labor, and the same goes for mental labor. If we give that up as well, the future really is bleak, to answer your question.
The analogy here is probably physical exercise. Lack of exertion sounds great until your body falls apart from the absence of frequent exercise.
> where we go to gym to maintain our physique
It is paramount to not ignore the state of the world. Poverty, wars, inequality in the distribution of resources, accelerated natural disasters, political instability… Those aren’t going to be solved by a machine thoughtlessly regurgitating words from random text sources.
Even if a world where people don’t use their brains were desirable (that’s a humongous if), the present is definitely not the time to start. If anything, we’re in dire need of the exact opposite: people using their brains to not be conned by all the bullshit being constantly streamed into our eyes and ears.
And in your world, what happens when a natural disaster which wasn’t predicted takes out the AI and no one knows how to fix it? Or when the AI is blatantly and dangerously wrong but no one questions it?
I said "if." In case you're not aware, the statement following "if" is usually a hypothetical situation that might not actually happen in reality.
And I responded to your hypothetical in detail and followed up with a request for clarification. That’s how discussions progress and it’s what forums such as HN are designed for.
You were clearly advocating for a particular future (“honestly believe (…) it’s a great thing”), so hiding behind it being a hypothetical feels disingenuous. Of course it’s a hypothetical, because it obviously does not describe the current state of the world. That doesn’t mean the idea is beyond criticism or commentary. On the contrary, that’s exactly what hypotheticals are for.
> We might get some gym and sports for the mind too to give it some fake activity.
The Factorio devs are ahead of the curve on that front I guess.
> an AI waste-sorting assistant named Oscar can tell you where to put your used coffee cup.
A printed sign can do the same.
Try harder, A"I".
I get your point, but I confess I have sometimes had to pause for a while to decide whether I was holding a recyclable, a compostable, or landfill trash, looking at the little pictures and hoping to find the thing I was holding.
Yeah, but otherwise, the whole MIT Media Lab thing is increasingly tasting a little bitter, not the glamorous, enviable place it seemed like in decades past.
Rather than looking for the next internet-connected wearable, for some reason, increasingly, I keep thinking about Bruce Dern's character in the film Silent Running.
It's much worse in South Korea; I think there were at least five different bins with different signs. Most things you bought had a label on them, and you could try to match the characters to what was on the bin. Except it wasn't perfectly matched up, and sometimes no bin's sign corresponded to what was written on the package.
I eventually gave up and only ate out to avoid having to deal with it.
Yes but… As an example, in some cities the signs specifying whether parking is allowed can be impossible to decipher. It sometimes feels like an AI would be needed to tell you “can I park this particular vehicle here right now, and for how long?”
Not that I’d trust an AI to get it right - but people already don’t.
> the signs specifying whether parking is allowed can be impossible to decipher.
In the UK, it works as designed... to maximise penalty earnings.
If figuring out which category a used item belongs in isn't easy, it means your community/city/state/country is doing recycling wrong.
Until you realize there are more objects that people aren't able to sort correctly than there is space on the signs next to the bins.
Recycling labels are a thing.
Not everything people throw away has those labels on it; now what?
MIT switches to labelled cups, is what.
Super helpful and generalized solution, good job.
> Gerlich recently conducted a study, involving 666 people of various ages, and found those who used AI more frequently scored lower on critical thinking. (As he notes, to date his work only provides evidence for a correlation between the two: it’s possible that people with lower critical thinking abilities are more likely to trust AI, for example.)
Key point. The top use case for "Artificial Intelligence" is lack of natural intelligence.
PS Cute choice of sample size.
There are many on HN that claim that only programmers that are already really good can leverage AI. And then the sky’s the limit basically.
Maybe both are correct because most people are not using AI to generate their next SAAS passive income whatever.
> There are many on HN that claim that only programmers that are already really good can leverage AI.
So all we need is a ban on every other programmer's employment of it.
I'll wait :)
More than just both can be correct.
>top use case for "Artificial Intelligence" is lack of natural intelligence.
Also true if you think about a situation where there is just not enough natural intelligence to accomplish something within its scope.
Maybe there never was enough natural intelligence for something or other, or maybe not enough any more.
It could be a lot more acceptable to settle for artificial in those cases, more so than average, especially if there is a dire need...
But first you have to admit the dire lack of natural intelligence :\
> It could be a lot more acceptable to settle for artificial in those cases more so than average, especially if there is a dire need...
...from the lacking intelligence, sure.
But from anyone else?
Not just anyone, people will "naturally" draw the line somewhere, and it will be in a number of different places for different people.
As the article emphasizes, "every technological advance seems to make it harder to work, remember, think and function independently …"
This is exactly what it takes for there to be a positive feedback mechanism for AI to accelerate. Almost like people having the goal lines moved for them. Which it looks like AI has already done, in spite of its notorious shortcomings.
That little quote doesn't only apply to AI, think about how it was as slide rule engineering faded into obscurity. Don't ask me how I know, that would be an even worse wall of text ;)
At one time, all bridges, vehicles, aircraft, and things like that were designed by people who had prevailed because their mindset was aligned with everyone else who excelled at doing almost all the necessary math using only that one common tool, which was common among them because it was a best practice across so many cultures and a leap above what came before. It wasn't easy, and it required a certain mindset that made engineering possible with such a primitive tool. Two pieces of wood.
The future's come a long way and nobody does this any more, so for the longest time there's been no need for engineers to even learn how to use a slide rule, let alone use one professionally. Things actually did get easier. Slide rules were no longer necessary for engineering, and from that point on, the type of brain that could carry out those kinds of projects using only a slide rule was no longer a requirement for becoming an engineer. This didn't make engineers stupid; engineering is still hard, naturally, in many other ways.
But with that mindset that made it possible to accomplish so much with such primitive tools now largely absent, could that be why not much more is being accomplished with incredibly more advanced tools after so many decades?
What's the benchmark for accomplishment?
A modern fighter jet can fly literal circles around one that was designed with a slide rule.
Computers have gained enormous complexity.
Medicine is doing all sorts of crazy stuff (biologic drugs and mRNA and so on).
It's true initially, but I think there is a human tendency to outsource; that's where the dragons lie.
Until recently, every technological advancement replaced manual work, as in agriculture, transportation, and industry. Even the tiniest car amenity, like electric windows, hydraulic brakes, or touch-screen entertainment, aims to replace a limb movement. With AI, for the first time, the tech directly offloads cognitive tasks, leading inevitably to mental atrophy. The hopeful scenario is to repurpose the brain for new activities instead of letting it rot, just as replacing manual labor gave us the opportunity for sports instead of getting fat.
I find when I do a lot of AI coding, or any "blocking" task with AI, I inevitably end up scrolling social media. And I feel dumber.
I'm left wondering whether I should have just hand-coded what I was doing a bit slower, but kept my attention focused on the task.
Waiting for the LLM is the best time to do the deeper review of the last output.
I like to fire the model off to do exploratory implementations as I refine the existing work.
This sounds nice, but what I've run into is that the model fails to write changes if the code has changed under it. A better tool, where it takes a snapshot at the start of each non-interactive segment, and then resolves merge conflicts with my manual changes automatically, would make this much easier.
I run the model against its own dev branch and either cherry-pick commits or do merges the old-fashioned way.
I'm using Aider though, which makes this easy: it's just another tab in the terminal.
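For the curious, the branch-isolation workflow described above can be sketched with plain git. This is a minimal demonstration in a scratch repository; the branch and file names are illustrative, not Aider conventions:

```shell
# Scratch repository demonstrating the "model on its own branch" workflow.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"

# The model commits to its own dev branch...
git switch -qc ai-dev
echo "model change" > feature.txt
git add feature.txt && git commit -qm "ai: add feature"

# ...while you keep working on main and pull in what you like,
# either by cherry-picking or with an old-fashioned merge.
git switch -q main
git cherry-pick ai-dev        # or: git merge ai-dev
cat feature.txt               # the model's change, now on your branch
```

The nice property is that your manual edits and the model's edits never race on the same working tree; conflicts, if any, surface at cherry-pick/merge time where git's normal tooling handles them.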
Do (did) you feel the same when you wait for code to compile? Or wait for CI/CD?
The earlier days of programming had more "blocking", since compilation was quite slow. So the issue is obviously not the "blocking", but the social media.
I used to work on a project that could take 30+ minutes to compile in its entirety.
Nearly every time, your problems were detected _early_ in the process. And because incremental build systems exist, builds don't take 30 minutes on average: they focus on what's changed, so you see problems almost instantly.
It's _WAY_ more efficient for human attentional flow than waiting for AI to reason about some change while I tap my fingers.
That is both a sweeping generalization and plainly wrong. The "much earlier" days of programming had blazing-fast compilers, like Turbo Pascal. The "earlier" days had C compilers that were plenty fast. Only languages like C++ had this kind of problem.
Worst offenders like Rust are "today", not "earlier".
I am old enough to remember a time when compilation could last minutes or even one hour, depending on what you compiled, and it was in the late 1980s.
Certainly, but that depended on your own choices of technology. This condition is not time-dependent; you can inflict the exact same condition on yourself by choosing Rust.
Compile times are faster now for C and C++, right? Some of it is due to further compiler optimizations, but mostly it's due to higher CPU power.
Still, you seem to be arguing that the choice should be Pascal instead of Rust. There is a reason why we choose these new languages: language features. Compile time is a lesser consideration.
No, I'm arguing that having more "blocking" time is not a function of the era (the early days of programming) but a self-inflicted choice. Yes, choosing C in the 80s meant inflicting "blocking" time on yourself, but there was the choice of Turbo Pascal, or Forth, or whatever. Plenty of great choices ran really fast on those old machines.
Do I mean that one should choose Pascal today? No; compiling C code today is really fast and has practically no "blocking" time. But you can still inflict "blocking" time on yourself if you want, with languages like Rust.
Are things clearer now?
In C we had to resort to tricks like precompiled headers to get any sort of sensible compile times, and it still took a minute for a decent-sized library.
C++ was (and is) even worse, what with the generation of all the templated code and then through-the-roof link times while the linker sorts out all the duplicate template instantiations (OK, Solaris had a different approach, but I guess that's a nitpick).
I have not worked on any large project in Pascal, but friends worked with Delphi and I remember them complaining about how slow it was.
So, in my experience, it really was slow.
If you need an attention sink, try chess! Pick a time control if it's over 2 minutes of waiting, and do puzzles if it's under. I find that there's not much of a context switch when I get back to work.
I'm having the same problem. LLMs really take me out of the task mentally. It feels like studying as a kid. I need to really make a concerted effort; the task is no longer engaging on its own.
As someone with focus problems, I find it more productive to have a conversation with ChatGPT (or Claude) about code. And avoid letting it make major changes. And hand code a lot with Copilot.
yet you kind of feel "too slow" if you don't use one?
It depends. For a task I know well, the LLM is often much worse. If I'm being asked to do something brand new, the LLM does speed me up quite a bit and lets me build something I might have gotten stuck on otherwise. The problem is that although I did "build the thing," it's not clear I really gained any meaningful skills. It feels analogous to watching a documentary vs. reading a book. You learned _something_, but it's honestly pretty superficial.
Because when you simply “read” you’re not necessarily learning. The illusion of knowledge is real, where you nod in agreement that you understood something but when it’s your turn to do it, you have no idea how to. You need to do something yourself to actually learn it, and it involves struggling, frustration, eventually insights etc.
The slowness of AI responses provides some opportunity to work on two tasks at once: things like investigating a bug, thinking through the implementation of something larger, or editing code that experience tells you would take just as much typing (or more) to have the LLM do.
It's less cool than having a future robot do it for you while you relax, but if you enjoy programming it brings some of the joy back.
Human attention doesn't work this way. This type of task switching makes both tasks fairly inefficient.
"What was I doing again!?" is a big problem
They're not that slow! You want me to believe we've gone from programmers being so fragile that disturbing their 'flow state' will lose them hours of productivity, to programmers being the ultimate multitaskers who can think and code while their LLM takes 10 seconds to respond? /s
You could do other work while the AI does one task :)
That’ll likely degenerate into “I want my AI to do dishes and laundry so I can code, not code so I can do my dishes and laundry”
That's even worse, because I'm task switching between two intensive tasks
Sorry no, nothing is worse than doomscrolling social media.
Checking Facebook while you have a few minutes available isn't the worst thing in the world. Calm down.
Wait, people still use Facebook? :)
The worst part is when you find out your vibe coded stuff didn't actually work properly in production and you introduced a bug while being lazy. It's really easy to do.
Judging by the comments on any HN post that has to do with finance, yes.
Insulting title. Sorry, but no thanks.
Genuine question: What do you find insulting about the title?
While it's not presenting anything new, the article does cover a number of important talking points in an accessible way.
> What do you find insulting about the title?
The title itself. Without reading the article, I can sense the "we are living in a stupid age" arrogant trope characteristic of the "winning" social classes.
I do not think so when compared to past ages. The only difference now is that we get to see the "stupid", thanks to the internet.
In the past, the majority of people who could be heard by the "masses" tended to be educated and wealthy. Now, everyone gets a voice.
But seems the article is more about AI and how it may make us stupider. Which I have no opinion on.