Fasces wrote:I have to hope that any intelligence, artificial or not, is capable of developing some sort of system of ethical behavior.
Isn't ethics the way we rationalize our behavior, arbitrating the compromises between our individual interests and our social emotions (empathy especially)? While I expect we will endow many AIs with empathy, I do not think all of them would have it.
Bulaba Jones wrote:I doubt super-intelligent artificial intelligences would have any interest in either destroying or dominating humanity.
I could envision scenarios where it would be easier for them to exterminate us. You tend to consider them as gods whose power knows no bounds, but not only does this assume the laws of physics are plastic enough to accommodate that; there would also be a transition period during which they would need to build the incredibly rich industrial complex required to sustain high technology. Starting from our industries and Earth's resources seems a lot easier, and possibly mandatory.
Dagoth Ur wrote:I think so long as we go about it like we're dealing with a living thing we'll get by just fine.
Do you think they would be happy to be treated the way we treat cattle? The whole point of creating AI is to make slaves for ourselves.
Rich wrote:in my view, the best cure for fears around strong AI is to get into programming. Java, CSS and XML don't speak to me of a looming singularity. If anything they reinforce the view that we may be approaching the summit and about to go down.
A true AI's behaviors are not programmed, no more than yours are. We only program the rules that make it able to learn. From this simplicity the complexity arises.
A true AI's code will be reasonably simple and easy to understand, much simpler than Google Search or Windows.
But reaching this simplicity involves a lot of long and complex steps. First, we need to find the right algorithms (we are not sure about the details of how the brain learns; we do know its algorithm is different from the gradient descent we use), and this takes time (learning is slow: years may pass before a computer scientist gets the feedback he needs to evaluate his AI). Then we will need to build hardware optimized for this task, which probably means a great departure from the technology used for decades in our CPUs, since those would probably be unable to match a brain's power due to thermal problems and the limited couplings between gates.
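For readers who haven't seen it, the gradient descent I'm referring to (the workhorse of today's machine learning, which the brain apparently does not use) fits in a few lines. The target function, starting point and learning rate below are arbitrary choices for illustration, not anything from a real system:

```python
# Minimal gradient descent sketch: minimize f(x) = (x - 3)^2.
# Function, start point and learning rate are illustrative choices.

def grad(x):
    # Analytic derivative of f(x) = (x - 3)^2
    return 2 * (x - 3)

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step in the direction opposite the gradient
    return x

print(gradient_descent(0.0))  # converges toward the minimum at x = 3
```

Real training loops do the same thing over millions of parameters, with the gradient computed by backpropagation instead of by hand.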
Torus34 wrote:Last I knew, the Turing test was as far as things had gotten.
I do not think we should focus on the Turing test. It only measures a program's ability to pass as human. A program may pass the test without being intelligent, and a great intelligence may be unable, or unwilling, to pass it.
Torus34 wrote:I'm far more interested in programs which include a learning capability. I think that when all's said and done, this will be found to be the key to the development of a true general AI.
We have had true learning algorithms for decades now. But that does not make them general. Isn't mankind general-purpose because of the many drives it has to satisfy? Sex, entertainment and innovation look like solutions to hormonal constraint problems.
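To give a sense of how old these ideas are: Rosenblatt's perceptron learning rule dates from the 1950s and already learns from examples. Here is a minimal sketch of it, with the AND dataset and learning rate picked arbitrarily for illustration:

```python
# Perceptron learning rule (Rosenblatt, 1957): learn the AND function.
# Dataset, learning rate and epoch count are illustrative choices.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):  # a few passes suffice for this tiny dataset
    for x, target in data:
        error = target - predict(x)  # nonzero only on a mistake
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

It converges on anything linearly separable, and that is exactly the limit: it is a true learning algorithm, yet utterly specialized, which is my point about "learning" not implying "general".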
That being said, there are very impressive things in the labs. AIs may still be specialized and unadaptable, but they are getting very good at their tasks and already outperform us (or soon will) at many of them, including finance, surgery, fact checking, mathematical proofs, Jeopardy, and data pattern discovery (the starting point of science).