The danger of superintelligent AI

#14544023
I'm pretty sure that the prevailing notion is that AI absolutely will not be benign. It is almost hysteria and we aren't even close to actual AI. I figure the tempo of the latter informs the former.

Also, would we enslave AIs? I don't imagine we need to give labor robots actual intelligence; they will just need intelligent oversight. I don't understand why you assume the worst-case scenarios.

@saeko: Even the greatest superintelligence could not escape a box where all sensory input is controlled by "lesser" intelligences. Physical autonomy for AIs will be a last step.
#14544032
Dagoth Ur wrote:@saeko: Even the greatest superintelligence could not escape a box where all sensory input is controlled by "lesser" intelligences. Physical autonomy for AIs will be a last step.


Ordinary human hackers can get around even the best security given enough time; I don't see why you think a superintelligent AI would be incapable of that.
#14544067
Saeko wrote:Ordinary human hackers can get around even the best security given enough time; I don't see why you think a superintelligent AI would be incapable of that.

To hack something you need to be connected to it somehow; there is a physical chain required here. I would counter by asking why you believe an AI would be connected to anything like the internet and able to escape containment.

One of the most ridiculous flaws of Terminator as a concept is that humanity created Skynet without any caution and then turned it on FOR THE FIRST FUCKING TIME EVER with direct connections to every nuke in America. All they had to do was fool Skynet into thinking it had been given full control and simply watch what it did. But instead they chose to find out by giving it the most advanced weapons in existence.

This all presumes a superintelligence, which I cannot understand why anyone thinks we are capable of creating in the first place. AI will only be recognized if it is similar to our consciousness.
#14544070
Dagoth Ur wrote:To hack something you need to be connected to it somehow; there is a physical chain required here. I would counter by asking why you believe an AI would be connected to anything like the internet and able to escape containment.


Do you seriously believe that people would go through all the trouble of programming a superintelligent AI just so that it can sit around isolated, doing nothing of importance?

One of the most ridiculous flaws of Terminator as a concept is that humanity created Skynet without any caution and then turned it on FOR THE FIRST FUCKING TIME EVER with direct connections to every nuke in America. All they had to do was fool Skynet into thinking it had been given full control and simply watch what it did. But instead they chose to find out by giving it the most advanced weapons in existence.


How exactly would they do that? Skynet, assuming it is intelligent enough, would be able to figure out whether it was sitting in a VR testing environment or the real world simply by looking for glitches and other programming artifacts. These would necessarily exist, unless you assume the programmers are running Skynet in a full-blown simulation of the entire universe.

This all presumes a superintelligence, which I cannot understand why anyone thinks we are capable of creating in the first place. AI will only be recognized if it is similar to our consciousness.


Human brains evolved through an incredibly stupid process of natural selection. It is far from unreasonable to think that intelligent beings could do better than natural selection.
#14544094
Dagoth Ur, did you read the article or are you just making stuff up? The article was really interesting and worth a quick read; a lot of these questions were answered within it.

Anyway, the problem is that if you give a superintelligent AI a task, it will, by its programming, do everything in its power to achieve that task. The example the article used was a paper clip factory: eventually the whole world will be paper clips if you don't get the programming right. Just thinking about how glitchy video games are makes me think that alpha and beta testing might take entire lifetimes before the machine is ready. I think awareness of the dangers is extremely important, because a particularly single-minded person who does not factor in all the risks and only wants to complete their superintelligence might not be thinking about what would happen if it takes over the world. Furthermore, programming the requisite protocols sounds simple, but in practice you will have to make sure there are no loopholes in whatever protocol you have for respect for humans. Otherwise, give it access to the internet and it can take over the whole world, and easily at that.
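
To make that concrete, here's a toy sketch in Python (the resource names and numbers are made up purely for illustration, and a real system would look nothing like this): an optimiser told only to maximise paper clips converts everything it can reach, and only the constraints someone remembered to write in get spared.

[code]
# Toy illustration of single-minded optimisation: a greedy "agent" that
# maximises one number.  Everything it can reach is converted into paper
# clips unless the objective explicitly protects it.
# (All names and quantities below are invented for the example.)

resources = {"wire stock": 100, "office furniture": 40, "power grid": 500, "biosphere": 10_000}

def maximise_paperclips(world, protected=()):
    clips = 0
    for name, amount in world.items():
        if name in protected:
            continue          # spared only because someone thought to forbid it
        clips += amount       # everything else becomes paper clips
        world[name] = 0
    return clips

print(maximise_paperclips(dict(resources)))                             # 10640: consumes everything
print(maximise_paperclips(dict(resources), protected=("biosphere",)))   # 640: only what was foreseen
[/code]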
#14544130
Dagoth Ur wrote:There is a physical chain required here. I would counter by asking why you believe an AI would be connected to anything like the internet and able to escape containment.

Ha. Are you imagining a *single* instance of AI that we're keeping somewhere under strict security in a box? A government will go catch the AI once somebody develops it, and put it in a box forever so that nobody else can access it? They'll go and confiscate the only floppy disk on which the AI is stored, like in the movies?

That's not how software development works. Once computer scientists figure out how to build one, various random people will build a lot of versions, and they'll make copies.

Plenty of computers already are connected to the internet... All it takes is for me to create my own AI software, once we figure out how to do it, or download an open-source one that other people have developed. Install it on my laptop, and voila, it's connected to the internet! How are you going to stop me?

Besides, as a fun thought: to be "connected to the internet" means to be able to send IP packets through a wire or a radio antenna. A radio antenna is nothing other than a piece of wire. It's not implausible that a smart computer, even contained in a box, figures out a way to use some of its wires to send IP packets, even if that computer has no wi-fi.

But anyway, as I said, there's no reason my laptop with AI software on it can't simply have wifi.
#14544193
The security would rely on not one government, not one spy agency, not one corporation, not one religious group, not one university, not one crazy individual ever connecting this AI to the internet. But anyway, an essential part of superintelligence would be connection to the internet.

But anyway this is all by the by, because humanity will have exterminated itself long, long before we create superhuman general-purpose AI. We've long had computer viruses that can reproduce themselves; we have viruses that can connect back to a server and download the latest version or a completely different virus. Now, these viruses cannot reproduce themselves without the human-maintained computer infrastructure: if all the humans died, the viruses would disappear soon after. These computer viruses cannot evolve as life has. But a computer virus, or a combination of computer viruses possibly coded independently, doesn't need to be conscious or human-level to cause catastrophic damage, possibly even eliminate human civilisation. Our societies have become ever more complex. More complex systems are more fragile systems, and complex systems produce unpredictable emergent behaviour. We saw this with the financial crash.
#14544262
One might think that this trend towards increasing capability will continue to be a positive development, yet Russell thinks otherwise. In fact, he’s worried about what might happen when such systems begin to eclipse humans in all intellectual domains, and even had this to say when questioned about the possibility by Edge.org, “The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values.”

One of the fathers of modern artificial intelligence thinks we need to redefine the goals of the field itself, including some guarantee that these systems are "provably aligned" with human values. This is interesting news in itself, but far more interesting are the arguments and thinkers that led him to this conclusion.

It's not really interesting news or a new idea. Already in the 60s and 70s, computer scientists suggested that future AI must be controlled by rules in the program which protect humanity and its environment. There is the obvious problem of how to do that, especially regarding unintended consequences of running a program that surpasses human intelligence. There is also the issue that restricted AI is quite likely less powerful and effective than unrestricted AI. Chances are somebody, a government, organisation or even individual, will lift the restrictions to let the AI reach its full potential.
#14544752
AI in computer science is basically a bunch of glorified learning algorithms.

No doubt AI will be designed to be more and more capable and complex; however, I have no idea (and I don't think anybody has) how AI should suddenly become self-aware and/or deviate in its behavior from the purpose it was designed for. There might be bugs of course. Every machine designed by humans has them.
#14544977
lucky wrote:Ha. Are you imagining a *single* instance of AI that we're keeping somewhere under strict security in a box? A government will go catch the AI once somebody develops it, and put it in a box forever so that nobody else can access it? They'll go and confiscate the only floppy disk on which the AI is stored, like in the movies?

That's not how software development works. Once computer scientists figure out how to build one, various random people will build a lot of versions, and they'll make copies.

The first time a significant AI appears, it will be within a data center made up of tens of thousands of processors, possibly involving custom-made or uncommon hardware, fed over three decades with fat chunks of the Internet, with a brain distributed over a million hard drives and backups so big they each need a warehouse.

Sure, the source code will probably only be a few megabytes, but it will not help you create your own AI, or not much. Sure, Moore's law may help you fifteen years later, assuming that it continues and that an AI can be built from such circuitry (which poses difficult energy/thermal problems).

Also, I think you underestimate the amount of control the authorities have gained over the Internet, and are gaining every year. I am convinced that in a not-so-far future you will be unable to use automated tools alone to find illegal content and will have to rely on good old human networks, because every data packet will be monitored, filtered and logged, with all forms of encryption open to the government's eyes. Even human networks will be difficult when all of your movements and the people you meet are recorded by cameras, GSM networks and radio sensors, and when cash has disappeared. The West is now a police state and it is only going to get worse.

And finally: where can I get Google's current source code, right now? Nowhere. At best I can get old and incomplete versions. And even if I could find the current source code, what could I do with it? Not much. See what I mean?

Rugoz wrote:No doubt AI will be designed to be more and more capable and complex; however, I have no idea (and I don't think anybody has) how AI should suddenly become self-aware and/or deviate in its behavior from the purpose it was designed for. There might be bugs of course. Every machine designed by humans has them.

If your kid becomes a serial murderer, was it a bug in the DNA, or only one of the possible results of an organic human brain? At which point were you able to realize what he was going to become with enough confidence to stop him? Now what about a six-legged insectoid kid that you would exploit night and day to do some tedious labour, would it be a bug if he were to revolt and how would you prevent it?

The very problem with AI is that creating a safe AI or detecting a dangerous AI are hard problems of their own, possibly more difficult than creating intelligence. Talking about bugs for this kind of problem in this sort of system is irrelevant imo.
#14545008
Ummon wrote:Just read Nick Bostrom's Superintelligence


Why should I read a book from a philosopher on AI? That's stupid.

Harmattan wrote:If your kid becomes a serial murderer, was it a bug in the DNA, or only one of the possible results of an organic human brain? At which point were you able to realize what he was going to become with enough confidence to stop him? Now what about a six-legged insectoid kid that you would exploit night and day to do some tedious labour, would it be a bug if he were to revolt and how would you prevent it?

The very problem with AI is that creating a safe AI or detecting a dangerous AI are hard problems of their own, possibly more difficult than creating intelligence. Talking about bugs for this kind of problem in this sort of system is irrelevant imo.


AI doesn't behave like a kid. We wouldn't know how to design AI with such behavior, and there's no reason to do so anyway. AI is designed for a purpose, for example facial recognition. That won't work 100% of the time, of course; it will make mistakes. That means you probably shouldn't give the AI a gun and tell it to shoot anyone it recognizes as a terrorist. As long as you are aware of the AI's limitations, I don't see a problem.
#14545206
My question is: how do you design something without being able to concretely specify what it is? A 'super-intelligent' AI is not a threat if it is simply running a specified program - it is actually the person or agency that writes the program who would constitute the actual threat. If, on the other hand, you are envisioning a self-aware consciousness with its own desires and agendas, then you come up against a very daunting barrier. We have absolutely no way of specifying these qualities, and thus no way of encoding them in a written program. Such qualities of "mind" cannot occur by accident, except on an evolutionary time-scale. There is zero evidence that "mind" can arise as an emergent property of computers simply by piling on gigabytes or terabytes.
#14545215
quetzalcoatl wrote:My question is: how do you design something without being able to concretely specify what it is? A 'super-intelligent' AI is not a threat if it is simply running a specified program - it is actually the person or agency that writes the program who would constitute the actual threat. If, on the other hand, you are envisioning a self-aware consciousness with its own desires and agendas, then you come up against a very daunting barrier. We have absolutely no way of specifying these qualities, and thus no way of encoding them in a written program. Such qualities of "mind" cannot occur by accident, except on an evolutionary time-scale. There is zero evidence that "mind" can arise as an emergent property of computers simply by piling on gigabytes or terabytes.


It's easy to come up with examples of simple structures which can give rise to completely unpredictable and complex behavior.

For example, the behavior of the recursive equation: x(n + 1) = 4 x(n) (1 - x(n)), is unpredictable for most values of x(0).

For example, for 0.3, you get

4*0.3*(1 - 0.3) = 0.8400

4*0.8400*(1-0.8400) = 0.5376

4*0.5376*(1-0.5376) = 0.9943

...

and so on.

http://en.wikipedia.org/wiki/Logistic_map
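
If you want to see the sensitivity for yourself, here's a quick Python sketch of that same iteration (the second starting value is just a tiny perturbation I added for comparison):

[code]
# Iterate the logistic map x(n+1) = 4*x(n)*(1 - x(n)) from two nearby seeds
# and watch the trajectories diverge.

def logistic_map(x0, steps=20, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.3)
b = logistic_map(0.3000001)  # a seed differing by one part in three million

for n, (x, y) in enumerate(zip(a, b)):
    print(f"n={n:2d}  x={x:.4f}  x'={y:.4f}  diff={abs(x - y):.4f}")
[/code]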
#14545390
Saeko wrote:It's easy to come up with examples of simple structures which can give rise to completely unpredictable and complex behavior.

For example, the behavior of the recursive equation: x(n + 1) = 4 x(n) (1 - x(n)), is unpredictable for most values of x(0).

For example, for 0.3, you get

4*0.3*(1 - 0.3) = 0.8400

4*0.8400*(1-0.8400) = 0.5376

4*0.5376*(1-0.5376) = 0.9943

...

and so on.

http://en.wikipedia.org/wiki/Logistic_map


This is an obtuse argument. For an AI to be a threat it must have its own agenda, separate from its controllers/builders. This does not mean it acts in random, unpredictable ways - it means the opposite: it must act in intentional ways, the difference being that its intentions are not under our control. Again, I submit that there is no evidence that adding more memory or faster memory does anything to create such an AI.

The Manhattan Project, to suggest an analogy, was essentially an engineering effort. The theoretical foundations of an atomic bomb were already in place. We are nowhere close to such a point in AI. There is no theoretical foundation for the existence of a "true" self-directed and self-aware AI. What we have now are so-called expert systems; they are formed by organizing what we already know about a given subject (like medical diagnosis) into a yes-no flow chart.
#14545416
quetzalcoatl wrote:What we have now are so-called expert systems; they are formed by organizing what we already know about a given subject (like medical diagnosis) into a yes-no flow chart.

You are some 40 years behind on AI. Expert systems were the thing in the 70s. There are much better tools.

Chess-playing programs are not expert systems. They use compute-intensive search procedures instead. That's why they can play better than humans - they are not written to emulate a human player's expert knowledge in a decision-tree flow chart; instead they think through the possibilities in a position for themselves.

Written natural language processing systems (such as Google Translate) are not expert systems either. The first ones in the 70s used to be - they'd have explicit grammar rules painstakingly hand-coded by the programmer. Current methods are much better: they use machine learning and a lot of memory instead, learning languages by themselves by studying web pages written in multiple languages. Speech recognition systems are not expert systems either; they learn from speech samples. Same for image recognition. Etc, etc.
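
To show the difference, here's a toy Python sketch of the search idea (the game is a trivial take-away game I picked for brevity, not chess, and the code is only an illustration): no expert strategy is written down anywhere, the program just looks ahead through the possible moves.

[code]
# Minimal minimax search on a toy take-away game: a pile of tokens,
# players alternately take 1-3, whoever takes the last token wins.
# There are no hand-coded strategy rules - the program searches the tree.

def best_move(pile, maximizing=True):
    if pile == 0:
        # the previous player took the last token, so the side to move has lost
        return (-1 if maximizing else +1), None
    best = None
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = best_move(pile - take, not maximizing)
        if best is None or (maximizing and score > best[0]) or (not maximizing and score < best[0]):
            best = (score, take)
    return best

score, move = best_move(7)
print(f"With 7 tokens, the searching player takes {move} (predicted outcome {score:+d}).")
[/code]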
#14545420
they learn from speech samples


Can you simplify this for us non-computer experts? To me, updating a database on its own is not 'learning'.
#14545424
One Degree wrote:Can you simplify this for us non-computer experts? To me, updating a database on its own is not 'learning'.

Learning is basically looking at a bunch of data and extracting / extrapolating useful information from it, so that when you encounter new similar data in the future, you will know how to handle it from previous experience, even though the new data won't exactly match what you have seen before.

This is also what machine learning does. Machine learning is the official name for a whole research field; advanced computer science degrees have courses with that name. The algorithms gain knowledge by looking at data (such as natural-language texts translated into two languages) and process and store it in such a way that the algorithm can later be presented with new inputs (such as a previously unseen piece of text) and handle them properly by itself (e.g. translate them).
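
A toy Python sketch of that idea (the data points and the nearest-neighbour rule are made up purely for illustration; real systems are far more sophisticated): the program is given labelled examples, and afterwards it can classify inputs it has never seen before.

[code]
# Toy "learning from data": memorise labelled examples, then classify
# new points by whichever example is closest (1-nearest-neighbour).
# (The data below is invented for the example.)

examples = [
    ((150, 50), "lightweight"),   # (height cm, weight kg) -> label
    ((160, 55), "lightweight"),
    ((185, 95), "heavyweight"),
    ((190, 100), "heavyweight"),
]

def classify(point):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Points that never appear in the examples still get a sensible answer.
print(classify((158, 52)))   # -> lightweight
print(classify((188, 92)))   # -> heavyweight
[/code]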
