quetzalcoatl wrote: AI as a project is not very old in an evolutionary sense.
Our current AIs are already more intelligent than any life form that existed for the first hundreds of millions of years after unicellular organisms appeared. And those AIs are already thousands of times more intelligent than those we had fifteen years ago. You cannot compare biological and silicon timescales.
Directed engineering toward a specific goal is not at all the same as organic evolution within an ecological niche. A directed engineering project to achieve AI can hardly depend on awareness being born of a chaotic system; that is not how engineering works.
This IS how engineering works when the only way to reach your goal is to introduce a large and currently unpredictable amount of evolutionary freedom. Intelligence is always defined in terms of evolution: if your AI cannot evolve, it is useless. I do not think intelligence and awareness are as distinct as you believe.
Maybe there is a way to confine this evolution within boundaries that would prevent the emergence of awareness (hopefully without impairing the AI's efficiency). But as I said, such boundaries are unknown today, and we cannot guarantee that awareness will not emerge from our current designs. We are in almost uncharted territory, on both the theoretical and empirical sides.
Human invention short-circuits the evolutionary process to achieve a quick result, but some basic theoretical framework must already be in place. Without the understanding that electric current produces heat, and heat produces light, the light bulb would not have been achievable. There is, to my knowledge, no actionable equivalent in AI research.
You are arguing that without understanding awareness it is not possible to create awareness.
I claim the opposite: without understanding awareness, it is not possible to guarantee its absence.
Il Doge wrote: There are three problems that need to be overcome, as far as I can see:
1. Silicon processors aren't fast enough.
2. Quantum computers need to be machine-coded to their equations, which makes a self-programming quantum computer either impossible or very, very slow: it would need access to an advanced system of factories, and by default the ability to keep them operating, so that it could make new machine-coded parts for itself.
3. A self-programming system needs to somehow be barred from de-programming its own fundamental purposes and thereby disabling itself. If such a limitation is possible from a programming standpoint, it should logically follow that intelligent computers barred from ever threatening people can be built, since an objective not to harm people isn't fundamentally different from any other cognizable objective.
1. Individual processors are not enough, but some companies already provide AI researchers with datacenters as powerful as a human brain. Something else is missing:
1.1 Our algorithms are still not good enough: they produce brains whose potential is limited, that learn more slowly than equivalent organic brains, and that are not very reliable (similar inputs sometimes yield very different results - some mediocre, some excellent). Making an intelligent network is not trivial, even with sufficient computing power.
1.2 We probably need a revolutionary shift toward hardware architectures better suited to neural network simulation. Or algorithms able to create intelligence from many interconnected little brains rather than a single one. I am not sure that any significant intelligence can practically emerge from a cluster of loosely interconnected computers, whatever their total computing power: its training may take too long on a human timescale. But I am speculating and may be wrong.
2. Forget quantum computing. Aside from cryptography and a few other specific needs, quantum computers will not be useful for anything in any foreseeable future (and maybe ever). Right now they are still nothing but buzz. I will be dead before they become significant.
PS: some quantum computers are actually programmable.
3. As I said earlier, an advanced diagnostic AI is not programmed for medicine; it is simply programmed to be able to learn, and then fed medical data. When you look at the most advanced projects, their purpose is not programmed, only their means is. But trying to constrain what such an AI can think seems like a circular problem: you would first need to understand what it thinks, and that may be a problem as hard as creating intelligence - probably harder. Maybe you could create an AI able to understand and monitor another AI's thoughts, but who will watch the watchmen?
Given our limited understanding of awareness, I suspect the only reasonable route is to constrain an AI at its interface with the real world: its inputs and outputs. This would not prevent awareness from appearing, but it could make it harmless, at the cost of also impairing the AI's effectiveness. And it would amount to the use of torture or drugs (good/bad stimuli - pleasure/pain - sensory deprivation, caging, etc.). Or we could do what we did with animals and plants: breed many, select the docile and interesting varieties, and clone/reproduce them.
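To make the interface-level idea concrete, here is a toy sketch in Python. Every name in it (OutputGate, untrusted_model, the allowlist) is hypothetical and invented for illustration; a real gate would need to be vastly more sophisticated. The only point is that the check happens at the I/O boundary, never inside the model:

```python
# Toy sketch of the "constrain it at the interface" idea: the model's
# internals are an opaque black box, and all control is applied to what
# goes in and what comes out. All names here are hypothetical.
from typing import Callable

class OutputGate:
    """Passes a black-box model's answers through an allowlist filter.

    The gate never inspects the model's internals - only its I/O."""

    def __init__(self, model: Callable[[str], str], allowed_words: set) -> None:
        self.model = model
        self.allowed_words = allowed_words

    def ask(self, prompt: str) -> str:
        answer = self.model(prompt)
        # Crude interface-level check: block any answer containing a
        # word outside the allowed vocabulary.
        if not all(word in self.allowed_words for word in answer.split()):
            return "[blocked]"
        return answer

# A stand-in "model" whose inner workings we pretend not to understand.
def untrusted_model(prompt: str) -> str:
    return "diagnosis pneumonia" if "cough" in prompt else "launch missiles"

gate = OutputGate(untrusted_model, {"diagnosis", "pneumonia"})
print(gate.ask("patient has a cough"))   # diagnosis pneumonia
print(gate.ask("what should we do?"))    # [blocked]
```

Note that this also illustrates the cost mentioned above: the gate cannot tell a harmful answer from a merely unexpected one, so the AI's effectiveness is impaired along with its dangerousness.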