Hong Wu wrote: They say this when they talk about self-developing "neural networks" and so on, but the processing speeds of cutting edge robots like Boston Dynamics' "dogs" appear to be on par with the nervous system of a cockroach, so I get the impression that there's a long way to go. I would be interested in reading your analysis though.
You seem to be very hung up on the mechanical engineering side of all of this. That's not really the challenge. The mechanical issues will get resolved with time, it just requires better/more sensing technology on the robot, and better/faster processing of the data the sensors output. Basically, it's a control systems problem, which is a very mature field of engineering. From where we were 20 years ago to today, the mechanics of machines/robots have improved massively, and it will continue to improve. There aren't any major challenges in this. It's just a matter of time. The mechanics of robots simply isn't where the challenge is for AI.
The challenge is in the neural networks. There are a lot of algorithms that work well in many situations, but no one has yet figured out the holy grail algorithm that works in all situations. Also, no one has put together an algorithm that is truly self-replicating and capable of unrestricted evolution yet. That doesn't mean it's not possible, most people believe it's very possible, it's just going to require more time and research. This field has made HUUUUGE strides in the last 5-10 years, but it's still in its absolute infancy.
The general premise of how AI works is this:
You build an algorithm that can build algorithms.
For example, maybe you want an AI that can identify cars within pictures. Basically, it just needs to answer the question "Does this image have a car in it? If so, what kind of car?" What you do is, you start with some basic building blocks (think of them as Lego blocks you can then use to build a house). In the case of image processing, the blocks would be various kinds of image processing techniques like edge detection, template matching, sharpening, blurring, corner detection, scaling, rotation, etc. There are literally thousands of tools/techniques/building blocks for image processing.

You then make an algorithm that attempts to combine all of these tools in different ways and in different quantities. For example, it can try sharpening, scaling, template matching, scaling again, and template matching again as one combination. Then it can make another one that's just edge detection and template matching. Etc. etc. Basically, it will put together millions of combinations of these basic building blocks.

After it does this, it will run these combinations of building blocks against a set of images. The answers to the question for these images are already known. This is called a key. When the AI runs all the combinations it created against the images, it gets back a percentage of how good each one was at detecting cars and identifying what type of car is in each image. The AI will then keep the combinations that did well and throw away the ones that did badly. After this, it will further refine the surviving combinations, build even more complex chains, and try again, refining itself each round. This is called training. Do this long enough, and you get an AI. Once it has learned sufficiently and become good at detecting cars, it can be deployed on images it doesn't know the answer to. This is called inference. It's very similar to how evolution works in biology.
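That combine-score-cull-refine loop can be sketched in a few lines of Python. To keep it runnable here, this is a toy stand-in, not real image processing: the "building blocks" are simple arithmetic steps instead of edge detectors, and the "key" is a list of inputs whose correct answers are known (the hidden rule is y = 2x + 2). The names (BLOCKS, KEY, train, etc.) are all made up for illustration, but the cycle is the one described above.

```python
import random

# Toy "building blocks", standing in for image-processing steps
# like edge detection or template matching in the real case.
BLOCKS = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
    "hlv": lambda x: x / 2,
}

# The "key": inputs with known answers. The hidden rule the
# search must discover is y = 2x + 2 (e.g. [inc, dbl] solves it).
KEY = [(x, 2 * x + 2) for x in range(-5, 6)]

def run(pipeline, x):
    # Apply each block in the chain, in order.
    for name in pipeline:
        x = BLOCKS[name](x)
    return x

def score(pipeline):
    # Total error against the key; lower is better.
    return sum(abs(run(pipeline, x) - y) for x, y in KEY)

def mutate(pipeline):
    # Randomly add, drop, or swap one block in the chain.
    p = list(pipeline)
    op = random.choice(["add", "drop", "swap"])
    if op == "add" or not p:
        p.insert(random.randrange(len(p) + 1), random.choice(list(BLOCKS)))
    elif op == "drop":
        p.pop(random.randrange(len(p)))
    else:
        p[random.randrange(len(p))] = random.choice(list(BLOCKS))
    return p

def train(generations=200, pop_size=30, keep=10):
    # Start with random one-block combinations.
    pop = [[random.choice(list(BLOCKS))] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score)          # rank every combination against the key
        pop = pop[:keep]             # keep the ones that did well
        while len(pop) < pop_size:   # refine survivors into new combinations
            pop.append(mutate(random.choice(pop[:keep])))
    return min(pop, key=score)

best = train()                       # "training"
print(best, score(best))
run(best, 100)                       # "inference" on an unseen input
```

Real systems (and modern neural networks in particular) adjust continuous weights with gradient descent rather than shuffling discrete blocks, but the train-on-a-key, keep-what-works structure is the same idea.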
Life works similarly:
Look at a single-celled organism. A cell is something that most people would say is "alive". However, take a closer look at the cell, and you will start to realize that it's actually kind of hard to pinpoint what exactly about the cell is alive. If you look at the individual components of a cell, each of those components is actually dead. For example, the cell wall is just a collection of 'dead' molecules. Yet, when you put all the 'dead' pieces together, you get something we would call "alive": a cell. Over time, these cells reproduce, evolve, and become more complex. The best variations are kept alive through the process of natural selection. They continue to change as they reproduce, and those changes are again tested by natural selection. The process goes on, and the creatures refine themselves over time. Do this long enough, and eventually you get humans (or a dog, or a cat, whatever). This is basically what AI is trying to do.
Comparing AI and life... they look crazy similar in how they work. AI works much the same way life does. All the components of an AI (say, the edge detection building block) are 'dead', but as with life, if you put enough 'dead' stuff together, you somehow get something that acts 'alive'. This is the eerie, weird thing about both life itself and AI.
Lastly, as I said before, death by AI will not come in the form of physical robots; it will come from AIs that control the internet and all our automated systems around the planet.