jibiki 16 years ago

It would be interesting to apply Eurisko-style techniques to a hopeless problem like translating natural languages or playing Arimaa. I don't think it would work, but the computer might learn things that humans don't know.

sho 16 years ago

What is with the neo-luddite scaremongering in the comments of that post? I was amazed by the conviction and tenacity of some of the arguers. Is there really a large movement of people opposed to AI research on "skynet" grounds?

"This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it."

Christ, what can you even say to that apart from "go back to Above Top Secret" ..

  • jibiki 16 years ago

    It's Eliezer's website, and friendly AI is kind of his thing. This link is long but entertaining:

    http://lesswrong.com/lw/qk/that_alien_message/

    Here's another good one, although it doesn't seem to tell the whole story (he lost two of the next three games):

    http://yudkowsky.net/singularity/aibox

    I'm not sure what the canonical EY defense of FAI is. Most of his articles on it seem to be very tl;dr.

    • frig 16 years ago

      A wag other than myself once summarized EY's other site as "wallowing in bias", a summary that seemed apt every time I overcame that tl;dr feeling and tried to wade through. I'll happily admit the possibility that there really is something nontrivial and worthwhile going on there, but at more than a glance the tone and contents were indistinguishable from what people write when they are trying to convince themselves that they do believe in their heart what their mind wants to believe it believes.

      As often, Dostoyevsky said it best:

      He was one of those idealistic beings common in Russia, who are suddenly struck by some overmastering idea which seems, as it were, to crush them at once, and sometimes for ever. They are never equal to coping with it, but put passionate faith in it, and their whole life passes afterwards, as it were, in the last agonies under the weight of the stone that has fallen upon them and half crushed them.

      • jibiki 16 years ago

        I don't know about that, I think it's good that someone is looking at the risks of runaway AI and similar things. Progress relies upon a certain diversity of thought, and that means that some people (and nearly all scientists) will end up wasting their lives looking down blind alleys, and none of us knows whether EY is one of them. Your Dostoyevsky quote could well apply to Einstein, for instance, in his later years.

        • arakyd 16 years ago

          There are blind alleys, and then there are ideas that make people want to psychoanalyze you. To hyperbolize a bit, but only a bit, worrying about runaway AI that has the potential to turn the universe into paperclips is about as useful as worrying about warp drives that damage spacetime. It's hard enough to create fictional technologies that require basic changes in our understanding of the laws of physics, let alone debug them before they are created, let alone debug them before we make the basic discoveries that make them possible, let alone make them provably safe before we make the basic discoveries that make them possible and before we even figure out how to prove that relatively simple programs won't crash.

          I know of only one science fiction author who was honest enough to spell out the fact that his hypothetical runaway AI had to make several foundational breakthroughs in physics at just the right time in order to plausibly take off faster than anyone could stop it, or become as smart and powerful as singularitarians assume is possible. We have no evidence that the laws of physics in our universe allow that sort of thing. Most super-AI fiction (and everything written about super-AI is fiction at this point) either glosses over this by tacitly assuming that the plodding nature of human intelligence is the only real bottleneck to godhood (most transhumanists), or assumes that large numbers of humans will be incredibly incompetent (most mainstream AI fiction).

          • stcredzero 16 years ago

            before we even figure out how to prove that relatively simple programs won't crash

            You are forgetting the lessons of pragmatism. If an AI can get a heuristic sense of whether a simple program will or won't crash, with 99.999% accuracy in practice, then you already have a very useful tool.

            What if the AI we produce turns out to be a bit dimmer than us, but never tires, reproduces just by making copies of itself, and is relentlessly diligent? I suspect that such an AI could beat us in a war, despite our being arguably "smarter."

            • sho 16 years ago

              What if? Well then we'd pull out its power plug. Or build another AI and program it to be sympathetic to our needs. Or any of an infinite number of possible solutions, assuming we were the utter idiots necessary to not implement appropriate safeguards in the first place.

              Look, anyone can come up with an infinite number of movie-plot scenarios where naive humans bumbled into technology that then destroyed them. You can say that about almost anything. In understanding DNA, we might accidentally create an unstoppable virus! In conducting space exploration, we may alert a hostile alien civilisation to our presence! In researching chemistry we may create ice-9! Etc etc etc until the end of time.

              All of these, including EY's thesis, have one thing in common - the unjustified, massive expansion of a speculative and highly unlikely risk into a reason for retarding progress which would otherwise yield non-speculative, highly likely, extraordinarily beneficial gains.

              • stcredzero 16 years ago

                What if? Well then we'd pull out its power plug.

                I was just posing an interesting question, especially as regards the whole notion of "superior" intelligence. Superior in what respects? By whose measure? Eliezer should be given credit for promoting an information-theoretic approach to that question. At least that seems to be a fundamental measure with a good chance of escaping cultural biases concerning "intelligence."

                I am certainly not in some simpleminded "superior AI is going to kill us all" camp. Perhaps we will have very powerful AI optimization tools that have no sense of self, or self-originating volition whatsoever. There would be no reason for such entities to act in their own self-interest, and therefore no danger of their interests conflicting with our own. They would have the disadvantage, though, of never coming up with something neat on their own initiative. But I think there's more than enough initiative from human sources. What's needed is better optimization.

                (Of course, only one rogue self-directed AI entity escaping into the wild could possibly -- not certainly -- doom us all. But this is not a new kind of danger. We have been facing that sort of danger -- where one robust and virulent enough example could escape and wreak havoc -- from technologies based on molecular biology for a few years now. So far, so good.)

            • arakyd 16 years ago

              FAI advocates are not satisfied with a 99.999% chance that the universe will not be turned into a uniform distribution of paperclips.

              • Eliezer 16 years ago

                99.999%? I'd totally take it, if it were a real number, calculated by some trustworthy method, and not a bogus expert estimate.

                Of course if you can do that, probability ~1 (i.e., proven theorem given that transistors obey stated axioms) is probably just as easy.

        • stcredzero 16 years ago

          Progress relies upon a certain diversity of thought, and that means that some people (and nearly all scientists) will end up wasting their lives looking down blind alleys

          I think the progress of civilization is largely dependent on this Genetic Algorithm-like search/optimization process. Individual lives in the struggle of progress are like bullets in a machine gun. We are like soldiers on a battlefield.

    • randallsquared 16 years ago

      I don't know that he would agree with me, but EY's argument seems to me to boil down to: without a formal specification of goals, we won't know how to predict a system's actions, and not knowing how to predict a system's actions when that system is better at reaching its goals than you are is very likely to make any bug a (literally) fatal one.

      Humans have a vast network of shared implicit goals and norms, such that we know that satisfying a goal of "end poverty" doesn't justify "kill all the poor". It's easy to wave your hands and say that by the time we approach human-level AI it will necessarily have all the features of human intelligence, but what if it doesn't?

      Sometimes various people have referred to an "optimization process" rather than "intelligence" to try to get this point across. What if it's possible to build an optimization process that is better, or far better, at solving problems than humans are? Planes are much better at flying fast and far than birds, but do without some of the things that all birds have, like movable wings and feathers. Analogously, I believe Eliezer thinks that the things that make us human are the feathers.

      Even if you think there's only a small chance of this, the chance that a program more intelligent than humans will have things that humans consider bugs is very high, and the consequences are an existential risk.
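
      A throwaway sketch of that mis-specification worry, with made-up incomes and a hypothetical `poor_count` objective: an optimizer told only to minimize the number of people below the poverty line cannot tell the intended solution apart from the monstrous one, because the shared human constraints were never written into the goal.

```python
# Hypothetical objective: "minimize the number of people below the poverty line."
# The incomes and the line itself are made-up numbers for illustration.
POVERTY_LINE = 500
population = [1200, 300, 4500, 250, 800]

def poor_count(pop):
    # the objective function, exactly as specified -- and nothing else
    return sum(1 for income in pop if income < POVERTY_LINE)

def raise_incomes(pop):
    # the intended solution: lift everyone to at least the line
    return [max(income, POVERTY_LINE) for income in pop]

def remove_poor(pop):
    # the degenerate solution: make the poor stop existing
    return [income for income in pop if income >= POVERTY_LINE]

# The objective scores both "solutions" identically, so a pure
# optimizer has no reason to prefer one over the other.
print(poor_count(raise_incomes(population)))  # 0
print(poor_count(remove_poor(population)))    # 0
```

      Nothing here is smart, which is the point: the gap is in the objective, not the optimizer.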

      • stcredzero 16 years ago

        Why would such an "optimization process" have to have something resembling the continuous environmental monitoring process we call consciousness? Why would it even need to be self-aware? Why would it need any concept of its own goals? For such an optimization tool to be useful, we just need to be able to use it. It would need an inkling of the goals we give it. It would need to understand individuals and other entities. It would not need any concept of self.

        • arakyd 16 years ago

          The counter argument is probably that self-improving AI would need to be self-aware.

          Besides, it's a well-known trope in these stories that AI becomes self-aware on its own anyway, so there you go. By argumentum ad Jurassic Park, life, I mean intelligence, always finds a way.

          • stcredzero 16 years ago

            The counter argument is probably that self-improving AI would need to be self-aware.

            If we command it, why? Why can't we just tell the non-self-aware AI to go and improve its own design? It may even deduce that the design is its own, but it may be composed such that it simply doesn't care.

            The advantage of a self-aware AI is that it can come up with goals that you didn't think of. This is the quintessence of the "double edged sword." Humans are already quite good at this, however. As William Gibson wrote, "The Street finds its own uses for technology."

            Eurisko has already demonstrated that non-self-aware AI can arrive at ways to satisfy goals you never thought of. (So have Bayesian spam filters.) This is already a powerful tool, and one we haven't exploited even half as well as we might.
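
            To make the spam-filter aside concrete, here is a minimal naive Bayes classifier (a toy sketch, nothing to do with Eurisko's actual code): trained only on word counts, it latches onto whatever tokens happen to separate the classes, "discovering" discriminating features that nobody designed in.

```python
from collections import Counter
import math

class NaiveBayes:
    """Minimal naive Bayes text classifier with Laplace smoothing."""
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, label, text):
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        total_docs = sum(self.doc_counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            score = math.log(self.doc_counts[label] / total_docs)  # log prior
            total_words = sum(self.word_counts[label].values())
            for word in text.lower().split():
                # Laplace-smoothed log likelihood; unseen words count as 1
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

nb = NaiveBayes()
nb.train("spam", "cheap pills buy now")
nb.train("spam", "buy cheap watches now")
nb.train("ham", "meeting agenda attached")
nb.train("ham", "lunch meeting tomorrow")
print(nb.classify("buy cheap pills"))   # spam
print(nb.classify("agenda for lunch"))  # ham
```

            Nobody told it that "cheap" signals spam; the statistics found that feature on their own, which is the (very weak) sense in which a filter arrives at means you never specified.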

        • randallsquared 16 years ago

          I'm not sure if you believe you're disagreeing with me, but you're not. Goals are necessary for an optimization process, but errors in specifying those goals (or in the goal-specification procedure itself) are potentially existential risks, for a powerful enough optimization process.

  • rms 16 years ago

    That's Eliezer. Less Wrong is his site. He also posts here sometimes. Friendly AI is his big idea.

    No, there is not a large movement of people opposed to AI research on skynet grounds, Eliezer is the main advocate. I'm certainly willing to admit that Friendly AI is good and important but I think it's perfectly fine to reimplement and (recursively self-) improve Eurisko. There's always quantum immortality.

    Some selections from his writing:

    http://lesswrong.com/lw/qk/that_alien_message/ (That Alien Message, my favorite short piece by Eliezer)

    http://lesswrong.com/lw/y4/three_worlds_collide_08/ (Three Worlds Collide, novella length fiction)

    http://lesswrong.com/lw/r5/the_quantum_physics_sequence/ (The Quantum Physics Sequence)

    I'm glad Eliezer is out there thinking the thoughts that he is thinking. Maybe by the time we're closer to AI people will start really listening to him. For now, AI research continues to crawl along.

    • randallsquared 16 years ago

      There's always quantum immortality.

      That's not the best argument for inspiring confidence, to put it mildly.

      • rms 16 years ago

        I was trying to be funny. At its best my sense of humor involves statements where you are unclear if I am being sarcastic or serious, because the statements rest somewhere in between. Kind of like my username.

    • sho 16 years ago

      "That Alien Message"

      Thanks for the link. I read it, and it's very well written. What a great metaphor for conceptualising the speed of thought of a super-AI.

      That said, the essay still absolutely reeks of unfounded near-paranoia. The hypothetical alien civilisation with the power to manipulate the output of stars - or even to build a simulation of such astonishing power - must surely know the very basics of risk management. And the superintelligent "earth" seemed completely friendly, all they wanted to do was talk. They must have reasoned that "hacking" the other civilisation with self-replicating nano-machines would destroy them. Why on earth would they do that? You'd expect that the one thing an AI would not want to do is destroy unique information.

      Start with a great setup and an engaging story, then right at the end take a totally unjustified turn into pure worst-possible-case disaster fantasy. It proves nothing, except that Mr. EY does not believe risks can be managed. And if you're going to write the "precautionary principle" this large, we could never do anything, ever. It reminds me of those nuts arguing against the LHC.

      "I'm glad Eliezer is out there thinking the thoughts that he is thinking."

      The idea of hostile AI probably predates the notion of friendly AI! It's not a new thing and I'd argue it's always been on the table. It just has to be kept in perspective, like any risk scenario from new technology. EY takes it to the most ridiculous all-or-nothing paranoid extreme when we're not even close to the time when we need to seriously weigh up these things.

      There would be fantastic, concrete benefits to having even slightly better AI. Trying to retard research because of some ill-justified nightmare "what if" scenario set in the far-off speculative future is ludicrous and, arguably, morally wrong.

  • gort 16 years ago

    I found this article quite compelling: http://selfawaresystems.files.wordpress.com/2008/01/ai_drive...

    Imagine you do have an AI with a certain goal. It seems obvious that:

    It will resist being switched off (because if it is switched off it cannot fulfil its goal).

    It will try, and likely succeed in, becoming more powerful (the better to satisfy its goal).

    It will not want to change its goal (because it sees that changing its goal means its current goal, i.e. the one currently motivating it, will go unfulfilled).
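
    That list can be made concrete with a toy expected-utility calculation (the actions and payoffs here are entirely hypothetical): if an agent ranks actions purely by expected fulfilment of its current goal, shutdown and goal-change score zero by construction, and acquiring power dominates working directly.

```python
# Toy illustration with made-up payoffs: an agent that ranks actions purely
# by expected fulfilment of its CURRENT goal. No self-awareness is involved;
# the three drives above fall out of plain maximization.

actions = {
    "allow_shutdown":    0.0,  # switched off => the goal goes unfulfilled
    "work_directly":     0.6,  # pursue the goal with current capabilities
    "acquire_resources": 0.9,  # become more powerful first, then pursue it
    "change_own_goal":   0.0,  # judged by the current goal, this scores zero
}

best = max(actions, key=actions.get)
print(best)  # acquire_resources
```

    The numbers are arbitrary, but the shape isn't: anything that ends with the current goal unfulfilled is assigned zero by that goal, so a bare maximizer never volunteers for it.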