JohnRawls wrote: We are way more than a "few decades" away from having AI like in the movies.
Maybe. But it is in the nature of technological progress to be unpredictable. We don't know what advances will be made, or when, or what their effects will be. Before AlphaGo, the best estimates were that superhuman Go play was at least a decade away. It wasn't.
One of the fundamental problems is that we don't know how the brain, or human cognition in general, works in this regard. Modern "AI" is just an attempt to use mathematical techniques to solve problems, which might eventually lead to an AI, but it amounts to trying things over and over again and expecting intelligence to emerge.
Superhuman AI (SAI) is very unlikely to work the same way the human brain works. We don't know how it will work, but once it gets close to human level, it could find a more efficient way for the next generation to work, and the process of AI improvement could then snowball very quickly.
In my experience, something as complicated as an AI will not be created without a clear theoretical understanding.
Probably new methods and even new paradigms will be needed. But a lot of very smart people are thinking about this problem because it is The Prize: SAI is the ultimate winner-take-all objective. The fact that it is also extremely dangerous will not stop people from pursuing it because intelligence is power, and if there is one thing people want, it is power.
It is like creating a nuclear bomb without knowing anything about E=mc^2.
In fact, one could do that. All that is necessary is to know that bringing certain isotopes into close proximity and keeping them there long enough will release a large amount of energy. At that point, it's just an engineering problem.