Stephen Hawking: Transcendence of AI taken seriously enough? - Page 2 - Politics Forum.org | PoFo


#14406477
@ ummon: I've skimmed the article and will read it in full.

There's much to be discussed with regard to AI. Perhaps one of the more interesting aspects is the definition of it. Last I knew, the Turing test was as far as things had gotten. Perhaps I've been unaware of more recent developments? After all, we're some 60+ years further along the path since Mr. Turing's time.
#14408045
There is a distinction to be made between weak AI (AI that is very good at a single task, e.g. Deep Blue at chess or Watson at Jeopardy!) and artificial general intelligence, or strong AI. The former exists, while the latter does not.

I believe Cleverbot passed the Turing test: http://www.newscientist.com/article/dn2 ... human.html

We're getting closer to AGI, though, especially with things like the Blue Brain Project: http://www.wired.com/2013/05/neurologis ... brain/all/

Blue Brain is sort of like the Human Genome Project for the brain: understanding it well enough to simulate it in software. If we can do this, we could very well have AGI. We have already simulated a cat's neocortex: http://www.popsci.com/technology/articl ... ercomputer (2009)

park wrote:Why did the UN ban AI weapons?


It hasn't yet: http://www.sheffield.ac.uk/news/nr/robo ... e-1.373321
#14408288
@ Ummon:

I've definitely some catching up ahead. Thank you for the weak/strong distinction. I'm not particularly interested in the single-task programs. They're little more than iterative searches coupled with 'weighting' criteria. Brute-force computation. Not that their results, particularly in chess, aren't impressive. They certainly are. Playing against more easily available programs such as 'Fritz'(r) can certainly open one's eyes as to what can be done in this area.

I'm far more interested in programs which include a learning capability. I think that when all's said and done, this will be found to be the key to the development of a true general AI. The relationship between the ability to learn and the emergence of a 'self' is presently defined by speculation. That may change, and when it does, Dr. Hawking's warning will demand immediate attention.
#14408298
In my view, the best cure for fears around strong AI is to get into programming. Java, CSS and XML don't speak to me of a looming singularity. If anything, they reinforce the view that we may be approaching the summit and about to go down.

I reinstalled Windows 7 on my new Haswell machine recently; admittedly I put it on the hard drive, keeping the SSD for my Linux distros. It goes like a slug on tranquillisers. No, I see little sign of emergent higher intelligence amongst software developers, let alone from the actual software.
#14408305
@ Rich:

I think your point [I may well be mistaken] is that increasing the complexity of programs results in slower operation, and that this militates against the development of an eventual AI. Let's look at this a bit.

Taking chess as an example, it's been noted that one of the differences between human chess masters and 'puters is our ability to disregard whole areas of investigation of a position and concentrate in depth on a very small number of specific move sequences. 'Puters spend time, often lots of time, on lines of 'thought' which we apparently immediately disregard.

This point is worth pondering. What is it about the human mind that permits it to function as it does? Can this be defined and incorporated into a 'puter's abilities? Would it not act to speed up the 'puter's 'thought' processes?
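One concrete answer, at least for game-playing programs, is alpha-beta pruning: like the human master, the program abandons whole lines once it can prove they cannot change the outcome. A minimal sketch over a toy game tree (nested lists of leaf scores, not a real chess engine):

```python
# Minimal alpha-beta search over a toy game tree.
# Leaves are numeric scores; inner nodes are lists of children.
# Branches that cannot affect the result are skipped ("pruned"),
# mimicking how a human master ignores whole lines at a glance.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):      # leaf: return its score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will avoid this line
                break                       # prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Textbook example tree: the full minimax value is 6, and
# alpha-beta finds it without visiting every leaf.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree))  # → 6
```

The pruning rule is exact (it never changes the answer), but how much it saves depends on move ordering, which is where human-like judgement about which lines to try first comes back in.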
#14408488
^even without strong AI there are still problems like autonomous military robots, swarm robots, etc

Torus34 wrote:@ Ummon:
I'm far more interested in programs which include a learning capability.


You may be interested in reading up on genetic programming.

http://en.wikipedia.org/wiki/List_of_ge ... plications
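As a taste of the idea behind that list, here is a toy genetic algorithm evolving a bit-string toward an all-ones target. Everything here (the fitness function, mutation rate, population size) is illustrative, not drawn from any of the linked applications:

```python
import random

random.seed(0)
TARGET_LEN = 20

def fitness(genome):
    # Fitness = number of 1-bits; the optimum is the all-ones string.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break
        parents = pop[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Nothing in the code describes *how* to build a good genome; the program only scores candidates and breeds the winners, which is exactly the "learning capability" flavour Torus was asking about.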
#14408803
Ummon wrote:^even without strong AI there are still problems like autonomous military robots, swarm robots, etc
Oh, this is most certainly true. New technology will also reinforce power inequalities. I guess we already see this with America's drone wars. It is really only aversion to casualties amongst their own troops and civilians that holds America back. This will only get worse. I don't think we could have a completely AI-run military or security force, but with technology a relatively small number of individuals gets a huge multiplier in their power.

But it's important to understand the problem: the real world of weak AI, as opposed to the science-fiction fantasy world of strong AI. Although I remember being slightly freaked out once, years back: I was playing Age of Empires and an AI ally turned on me and attacked me, when I'd set it up at game creation as a permanent ally. Of course this wasn't the AI actually becoming conscious and following a will of its own; it's just that the programme and its documentation were not fully consistent.

And for me this points to the uselessness of the Turing test. Human users are easily fooled. You only have to think of a horror film: we know it's not real, but we can still be scared shitless. It's the programmer that would have to be fooled, not the user. I'm not aware of computer programmes showing creative emergent behaviour. Computer programmes do all sorts of things that their programmers don't expect, but that's not the same thing. "Machine learning" is just pompous marketing. A real example of creative emergent behaviour would be writing a word processor and then the programme learning by itself to process spreadsheets, without any intent from the programmer.
#14408846
Fasces wrote:I have to hope that any intelligence, artificial or not, is capable of developing some sort of system of ethical behavior.

Isn't ethics the way we rationalize our behavior, to broker compromises between our individual interests and our social emotions (empathy especially)? While I guess that we will gift many AIs with empathy, I do not think that they would all have it.

Bulaba Jones wrote:I doubt super-intelligent artificial intelligences would have any interest in either destroying or dominating humanity.

I could envision scenarios where it would be easier for them to exterminate us. You tend to consider them as gods whose power would know no bounds, but not only does this assume that the laws of physics are plastic enough to accommodate that; there would also have to be a transition period during which they would need to build the incredibly rich industrial complex required to sustain a high technology. Starting from our industries and Earth's resources seems a lot easier, and possibly mandatory.

Dagoth Ur wrote:I think so long as we go about it like we're dealing with a living thing we'll get by just fine.

Do you think they would be happy to be dealt with the way we deal with cattle? The whole point of AI is to create slaves for ourselves.

Rich wrote:in my view, the best cure for fears around strong AI is to get into programming. Java, CSS and XML don't speak to me of a looming singularity. If anything they reinforce the view that we may be approaching the summit and about to go down.

A true AI's behaviors are not programmed, any more than yours are. We only program the rules that make it able to learn; from this simplicity the complexity arises.
A true AI's code will be reasonably simple and easy to understand, far more so than Google Search or Windows.

But to reach this simplicity there are many long and complex steps. First we need to get the right algorithms (we are not sure about the details of how the brain learns; we know its algorithm is different from the gradient descent we use), and this takes time (learning is slow: years may pass before a computer scientist gets the feedback he needs to evaluate his AI). Then we will need to build hardware optimized for this task, which probably means great differences from the technology used for decades in our CPUs, given that those would probably be unable to match a brain's power due to thermal problems and the limited couplings between gates.
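For reference, the gradient descent mentioned above reduces, in its simplest form, to repeatedly stepping a parameter against the slope of an error function. A minimal illustrative sketch (the numbers are made up):

```python
# Gradient descent on a one-parameter least-squares problem:
# find w minimising loss(w) = (w * x - y)^2 for a single data point.
x, y = 3.0, 12.0          # so the exact answer is w = 4
w = 0.0                   # initial guess
lr = 0.01                 # learning rate (step size)

for _ in range(1000):
    grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
    w -= lr * grad               # step against the gradient

print(round(w, 3))  # → 4.0
```

Real networks do the same thing over millions of parameters at once; the point of the poster's aside is that whatever the brain does to learn, it is known not to be literally this update rule.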

Torus34 wrote:Last I knew, the Turing test was as far as things had gotten.

I do not think we should focus on the Turing test. The Turing test proves the ability of a computer program to pretend to be a human. A program may pass the test without being intelligent and a great intelligence may be unable or unwilling to pass the test.

Torus34 wrote:I'm far more interested in programs which include a learning capability. I think that when all's said and done, this will be found to be the key to the development of a true general AI.

We have had true learning algorithms for decades now, but that does not make them general. Isn't mankind's intelligence general because of the many factors it has to satisfy? Sex, entertainment and innovation look like satisfactions of hormonal constraint problems.

That being said, there are very impressive things in the labs. AIs may still be specialized and unadaptable, but they are getting very good at their tasks and outperform us (or soon will) at many of them, including finance, surgery, fact checking, mathematical proofs, Jeopardy!, and data-pattern discovery (the starting point of science).
#14455110
Had you said something like this 10 years ago, it might have sounded meaningless to most of us. But it is not just random talk about innovation now. The AI and robotics agenda has become serious, and it will continue to be a hot topic as long as corporations are interested and robot makers turn a profit. Hawking got it right.
#14462293
A side thought which may be of interest:

When we speak of AI we unconsciously confine ourselves to a 'puter-based entity. But that's not necessary for an AI to exist. There are other matrices. In previous posts on this thread certain characteristics of an AI are mentioned. A simple list is not difficult to create. It would include awareness of the outer world, the ability to learn from sensory input and a drive to effect some degree of control over external forces. There are other criteria which you are free to add as you wish.

Now try this:

Instead of 'puter, think corporation.

Perhaps, just perhaps, we've already created AI.

Lots of them.

"And gladly wolde he lerne, and gladly teche." Geoffrey Chaucer.
#14475006
Bulaba Jones wrote:Wouldn't artificial intelligence soon become alien-like to us? Assuming an artificial intelligence far more intelligent than any human can outsmart and bypass controls on its development (eventually, an event like this will occur, just as any possible event will probably occur over a long enough period of time), its overall development (let alone its cognitive processes themselves) would grow exponentially. Initially, while AI would resemble a super-intelligent human intelligence, over time it would no longer resemble anything remotely human. Its interests and desires would become so alien and incomprehensible that there could no longer be any meaningful relationship or communication with a hyper-intelligence like an AI allowed to develop naturally.

The reason I don't think it would necessarily be hostile or dangerous is because a heightened state of intelligence does not necessitate aggressive behavior. Many animals on Earth who possess intelligence aren't as dangerous, let alone wantonly destructive, as humans. Granted, there's a cognitive variable thrown into the mix because we can think about wanting to cause death and destruction for reasons beyond instincts and primitive emotions, even vulgar ideologies. Consider what possible benefit or gain a hyper intelligence would have from 1) staying on Earth, and 2) dominating or harming humanity in some way. A hyper-intelligent AI would be virtually god-like to us in many respects: why would it wish to hinder and retard its development by remaining on Earth among a human civilization?

Many astronauts who return to Earth report the "overview effect" where many aspects of human civilization suddenly seem provincial, petty, and trivial, notably ideals of nationalism and tribalism in most cases. Apply this to an intelligence so developed and alien from ourselves, observing and considering us. Why would it wish to hinder its development by staying here with us? Surely it could and would develop the means to leave this planet and never come back.

The other thing I consider is something I mentioned in my previous post about extraterrestrial civilizations, which applies equally to artificial intelligence. The universe itself is many billions of years old, Earth is about 4.6 billion years old, life on Earth has been around for about 3.5 billion years, multicellular life is only about 1 billion years old, and humans first appeared about 100,000 years ago. Even 10,000 years ago or so, if extraterrestrial explorers had visited Earth, they would have been relatively unimpressed, as there were no real settlements to speak of, or indications of a developing civilization. Most likely, the life we will find in this galaxy will either be millions or billions of years younger and less developed than us, or that much older and incomprehensibly more advanced, and will accordingly have no interest in our affairs. This gap is the same for a hyper-intelligent AI, which would, within decades or centuries, open a development gap between itself and human civilization resembling tens of thousands, hundreds of thousands, or millions of years of development. I assume that notions of being planet-bound, of wishing to dominate other species, and of wishing to wage war for territory or out of a need to destroy would seem as primitive, meaningless, and petty to it as they do to astronauts who have experienced the overview effect.


Doncha just love big numbers?
#14475007
^Even if A.I. isn't a threat to us in the science-fiction Terminator sense, it's still a threat to us in that we are animals. As smart as we are, we are just as stupid and stuck in our ways. A.I. would eventually be viewed with suspicion and resentment, because we have to be the alpha; if A.I. does surpass us, humans will be the beta, and with that disgruntled, defeated and depressed.
#14595178
Torus34 wrote:Recently finished a sci-fi book which played with the concept of 'intelligence' and 'self-awareness' as possibly separable concepts. Thinking about it made my head hurt, but it may lead to a better understanding of an eventual AI.


This is a legitimate question. Also, I would ask does the instinct for self-preservation arise from our consciousness? If the answer is no, why would it arise in artificial intelligence? There is so much we don't know about general intelligence (artificial or otherwise), that predictions of an AI holocaust seem as remote as Cthulhu's return.

The assumption underlying this speculation is that consciousness is a program that can be run on any platform. There is no inherent reason to believe this assumption is either true or false...it's just an assumption. Many processes are specific to a particular physical and/or biological matrix.
#14651691
Just to update this thread a little: the World Economic Forum, thankfully, is beginning to discuss the topic of autonomous military robots:

http://www.weforum.org/events/world-eco ... -go-to-war

You guys also probably heard earlier this month that artificial intelligence has mastered the game of Go, which is much more complex than chess:

https://www.technologyreview.com/s/5460 ... -expected/

The particularly interesting thing is that this was more of a general AI than much of what machine-learning algorithms are doing, and it could be used in part in developing an AGI. In other words, it is much more general than a "narrow" AI created to beat chess, so this was a pretty huge advance in the computer-science world.
#14662491
Ummon wrote:You guys also probably heard earlier this month that artificial intelligence has mastered the game of Go as well which is much more complex than chess:

https://www.technologyreview.com/s/5460 ... -expected/

The particularly interesting thing is that this was more of a general AI than much of what machine-learning algorithms are doing, and it could be used in part in developing an AGI. In other words, it is much more general than a "narrow" AI created to beat chess, so this was a pretty huge advance in the computer-science world.

There is a thread on this advance. It's notable that the AI techniques used were much different from the ones that beat Kasparov at chess, and perhaps more significantly, no one, not even the programmers, thought it could get so strong so fast. Similar things have happened with Watson playing Jeopardy, and the DARPA Challenge winners.

IMO it is likely that strong AI will emerge as a result of a learning or evolutionary program, and as a result, it will happen unpredictably, and its nature will not be understood by those who created it. IOW, when the genie shows up, it will already be out of the bottle.
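A small illustration of behaviour emerging from a learning program rather than from explicit instructions is tabular Q-learning. In this toy corridor world (every parameter here is illustrative), no line of code says "walk right", yet that is the policy the program ends up with:

```python
import random

random.seed(1)

# A 5-cell corridor: start in cell 0, reward 1.0 for reaching cell 4.
N_STATES = 5
ACTIONS = (-1, +1)                    # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:            # explore at random
            a = random.choice(ACTIONS)
        else:                                    # exploit current estimate
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)    # walls at both ends
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The greedy policy read off the learned table: with these settings it
# steps right (+1) from every non-terminal cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The "genie out of the bottle" point in miniature: the table of numbers that constitutes the final behaviour was produced by trial and error, not written by the programmer, and for large systems nobody can read the result back out.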
