lucky wrote:Ha. Are you imagining a *single* instance of AI that we're keeping somewhere under strict security in a box? A government will go catch the AI once somebody develops it, and put it in a box forever so that nobody else can access it? They'll go and confiscate the only floppy disk on which the AI is stored, like in the movies?
That's not how software development works. Once computer scientists figure out how to build one, various random people will build a lot of versions, and they'll make copies.
The first time a significant AI appears, it will be inside a data center made up of tens of thousands of processors, possibly involving custom-made or uncommon hardware, fed for three decades with fat slices of the Internet, with a brain distributed over a million hard drives and backups so large that each needs its own warehouse.
Sure, the source code will probably weigh only a few megabytes, but it won't help you create your own AI, or not much. Sure, Moore's law may help you fifteen years later, assuming it continues and that an AI can even be built from such circuitry (which poses difficult energy/thermal problems).
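To put a rough number on that fifteen-year point: under the classic Moore's-law assumption of a doubling every two years (the doubling period is my assumption; the post doesn't specify one), the capability gain compounds like this:

```python
# Rough Moore's-law arithmetic -- an illustrative sketch, not a claim
# from the post. Assumes hardware capability doubles every 2 years.
def moores_law_factor(years, doubling_period=2.0):
    """Approximate hardware-capability multiplier after `years`."""
    return 2 ** (years / doubling_period)

# After fifteen years: 2^(15/2) = about 181x the hardware for the
# same cost -- still far short of a warehouse-scale data center on
# a desktop, if the original build really needed one.
print(round(moores_law_factor(15)))  # ~181
```

So even granting the optimistic doubling rate, fifteen years buys you a factor of a couple hundred, not the many orders of magnitude separating a hobbyist from a three-decade data-center build.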
Also I think you underestimate the amount of control the authorities have gained over the Internet, and are gaining every year. I am convinced that in the not-so-distant future you will be unable to rely on automated tools alone to find illegal content, and will have to fall back on good old human networks, because every data packet will be monitored, filtered, and logged, with every form of encryption open to the government's eyes. Even human networks will be difficult when all your movements and the people you meet are recorded by cameras, GSM networks, and radio sensors, and when cash has disappeared. The West is now a police state, and it is only going to get worse.
And finally: where can I get Google's current source code, right now? Nowhere. At best I can get old and incomplete versions. And even if I could find the current source code, what could I do with it? Not much. See what I mean?
Rugoz wrote:No doubt AI will be designed to be more and more capable and complex, but I have no idea (and I don't think anybody does) how an AI should suddenly become self-aware and/or deviate in its behavior from the purpose it was designed for. There might be bugs, of course. Every machine designed by humans has them.
If your kid becomes a serial murderer, was it a bug in the DNA, or just one of the possible outcomes of an organic human brain? At what point could you have realized, with enough confidence to stop him, what he was going to become? Now what about a six-legged insectoid kid that you exploited night and day for tedious labour: would it be a bug if he revolted, and how would you prevent it?
The very problem with AI is that creating a safe AI, or detecting a dangerous one, are hard problems in their own right, possibly harder than creating intelligence in the first place. Talking about "bugs" for this kind of problem in this sort of system is irrelevant, imo.