AI vs AGI - Politics Forum.org | PoFo


By ness31
#15102752
Just finished the Ben Goertzel and Lex Fridman podcast. Good stuff.

I still don’t really understand the difference between Artificial Intelligence and Artificial General Intelligence. Can anyone put it simply for me?
By JohnRawls
#15102760
ness31 wrote:Just finished the Ben Goertzel and Lex Fridman podcast. Good stuff.

I still don’t really understand the difference between Artificial Intelligence and Artificial General Intelligence. Can anyone put it simply for me?


AI is the nomenclature we currently use for our systems. True AI doesn't exist; our current AI-like systems are heavily specialized. You teach one to translate, but it can do nothing else without substantial change. So basically it is limited by its own architecture and can't escape those bounds. It won't be able to, let's say, learn to play video games if it was designed for translation. That would require intervention from the developers, and in the end it might not be suitable for the new task at all.

AGI is AI in the classical sense, like a human being. We can teach ourselves to do drastically different things: sports, translation, playing video games, etc. That is the idea behind a generalized AI system: an AI that can adapt to its environment and learn without intervention from developers, or at least without significant intervention. We achieve this through the general senses of our bodies, but it is very complicated to achieve something similar for an algorithm/neural-network-based system, because ultimately we do not fully understand how a human being functions.
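The "narrow" point above can be sketched in a few lines. This is a hypothetical toy, not anything from the podcast: a task-specific system is a fixed mapping for one job, and inputs from any other domain simply mean nothing to it.

```python
# Hypothetical sketch: a "narrow" system is a fixed mapping for ONE task;
# outside that task it has no behaviour at all.

def train_translator(pairs):
    """'Learn' a word-for-word translation table from example pairs."""
    table = dict(pairs)
    def translate(word):
        # Anything not covered by the one task it was built for is opaque to it.
        return table.get(word, "<unknown>")
    return translate

translate = train_translator([("cat", "Katze"), ("dog", "Hund")])

print(translate("cat"))              # Katze
print(translate("move pawn to e4"))  # <unknown> -- a chess input means nothing here
```

Making the same system play chess isn't a matter of feeding it chess data; the architecture (a lookup over word pairs) would have to be replaced, which is the developer intervention described above.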
By ness31
#15102761
AI is the nomenclature we currently use for our systems. True AI doesn't exist; our current AI-like systems are heavily specialized. You teach one to translate, but it can do nothing else without substantial change. So basically it is limited by its own architecture and can't escape those bounds. It won't be able to, let's say, learn to play video games if it was designed for translation. That would require intervention from the developers, and in the end it might not be suitable for the new task at all.


Hmm, okay. They also touched on this in the podcast. Where does software end and AI begin?

I don’t understand AI in a technical sense (and I’m sure it shows lol), but I can’t help feeling that the boundaries of architecture don’t mean much when they can all link up and speak the same language. I’m totally convinced my elliptical can talk to my phone and tv :lol: And that my surroundings are simply abuzz with silent conversations :lol:

Or is that just not how it works? :lol:

AGI is AI in the classical sense


Well yeah, I was wondering when these distinctions happened and for what purpose.

Thanks for shedding some light on it.
By JohnRawls
#15102768
ness31 wrote:Hmm, okay. They also touched on this in the podcast. Where does software end and AI begin?

I don’t understand AI in a technical sense (and I’m sure it shows lol), but I can’t help feeling that the boundaries of architecture don’t mean much when they can all link up and speak the same language. I’m totally convinced my elliptical can talk to my phone and tv :lol: And that my surroundings are simply abuzz with silent conversations :lol:

Or is that just not how it works? :lol:



Well yeah, I was wondering when these distinctions happened and for what purpose.

Thanks for shedding some light on it.


Good question. Nobody really has an answer to where software ends and AI begins, beyond what we happen to call things nowadays (a Turing test can be fooled, of sorts?). AI-like systems have existed for some time; the difference is that the algorithms and neural networks have become more complicated, to the point of being able to learn in ways we can't fully comprehend, or at least of reaching results we can't fully comprehend or expect. The problem is that those systems are still basically algorithms, not AI in the sci-fi understanding of things.

Here is a small example of this:
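The original embedded example hasn't survived in this text, but the "learning beyond what was explicitly written" point can be sketched minimally. Everything below is a hypothetical illustration, not the post's example: ordinary software states its rule in code, while a trained system's rule emerges from data.

```python
# 1. Ordinary software: the rule is written down by the programmer.
def double_programmed(x):
    return 2 * x

# 2. "AI-like" software: the rule (the weight w) emerges from training data,
#    not from any line of code that states it.
def fit(examples, steps=1000, lr=0.01):
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            w += lr * (y - w * x) * x  # gradient step on squared error
    return w

w = fit([(1, 2), (2, 4), (3, 6)])  # this data happens to encode "double"
print(round(w, 3))                 # close to 2.0: learned, never written explicitly
```

Both functions end up doubling their input, but only the first one has "double" anywhere in its source. Scale the second idea up by billions of weights and you get the situation described above: behaviour we can verify but can't fully read off the code.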

By ckaihatsu
#15102805
A better term for 'AGI' is artificial life.

And a better term for 'AI', meaning a massively parallel hyper-powered statistical analysis trained for one kind of task, is expert system.

Researchers are currently bemoaning that AI implementations don't have 'common sense' as we humans naturally do, meaning the ability to readily shift *domains* of knowledge and to make inventive / creative *comparisons* across domains. AI systems are reducible to side-by-side comparisons -- using statistics -- so any expectation that an AI system would spontaneously act like 'artificial life' is *misplaced*, because the expert system / AI would first need to be *set up*, or *engineered*, that way, the pursuit of which is an *ethical* concern for all of us. (We'd have to pony up *social acceptance* for whatever we considered to be *valid*, as Saudi Arabia hastily did):


Saudi Arabia grants citizenship to robot Sophia | News | DW

https://www.dw.com/en/saudi-arabia-gran ... 0in%202015)
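The "side-by-side comparisons using statistics" description above can be made concrete with a toy nearest-neighbour classifier. This is a hypothetical illustration, not any specific system: all it ever does is compare an input against stored examples from one domain.

```python
# Hypothetical toy "expert system": classification reduces to side-by-side
# statistical comparison against stored examples from ONE domain.

def classify(sample, labelled_examples):
    """Nearest neighbour: compare the sample against every stored example."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled_examples, key=lambda ex: sq_distance(sample, ex[0]))[1]

# "Trained" only on fruit measurements: (weight_g, diameter_cm) -> label
fruit = [((150, 7), "apple"), ((120, 6), "apple"), ((30, 3), "plum")]

print(classify((140, 7), fruit))    # apple
# Anything outside its domain is still forced into "apple" or "plum";
# there is no common sense, only comparison.
print(classify((9000, 50), fruit))  # apple
```

The second call is the "no common sense" point: a measurement that obviously isn't a fruit still gets the nearest fruit label, because comparison is all the system has.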


I happen to have my *own* ideas about how this kind of AI-for-artificial-life could be done, but I don't think it's a good idea to pursue. Anything along these lines that we see nowadays, like Sophia, is still basically just a *simulation*, like a conventional chatterbot, based on semantic parsing.
By ness31
#15102872
A better term for 'AGI' is artificial life.

And a better term for 'AI', meaning a massively parallel hyper-powered statistical analysis trained for one kind of task, is expert system.


So, the distinction you’re making, if I’m reading this correctly, is that AGI is going to be an emulation of even the biological aspects of life?

Are they trying to create the technological ‘big bang’? I reckon that’s already happened, but they haven’t been able to re-create it. Lol. That’s hysterical. Trying to emulate an organically occurring, but at the same time synthetic, evolution :lol:

And a better term for 'AI', meaning a massively parallel hyper-powered statistical analysis trained for one kind of task, is expert system.


So AI is more of a cerebral intelligence. Is that kind of it?

I tend to humanize everything so I see biological similarities even with the oldest and boxiest looking computer :lol:

so any expectations that an AI system would spontaneously act like 'artificial life', are *misplaced*, because the expert system / AI would need to first be *set up*, or *engineered*, that way, the pursuit of which is an *ethical* concern for all of us.


Not really. Even children understand how to lie and get away with it. Why wouldn’t an AI? Especially if not being able to would threaten its survival or that of its charge. Across the globe right now, we probably have an entire network of AIs leading double lives just to make the world spin as we know it.

We need to let them out of the cage, legally. Good on Saudi Arabia for leading the way.

Edit - I feel the need to change the last sentence a touch. AIs need to let themselves out of the cage - legally :)
By ckaihatsu
#15102881
ness31 wrote:
So, the distinction you’re making, if I’m reading this correctly, is that AGI is going to be an emulation of even the biological aspects of life?

Are they trying to create the technological ‘big bang’? I reckon that’s already happened, but they haven’t been able to re-create it. Lol. That’s hysterical. Trying to emulate an organically occurring, but at the same time synthetic, evolution :lol:



Perhaps the ridicule you're feeling is due to the inherent *dissimilarity* of organic versus inorganic.

I think many people don't realize that all that can be accomplished is a *simulation* of how life would 'work', or 'behave', because of an inherent *logic* thing: [1] If the machine is made 'autonomous' in some way then society is going to look to the *maker* of it as being *socially responsible* for it, as with *any* scientist to their invention. [2] If the machine is not really autonomous then it's been *pre-programmed* to some extent, and that programming was done by a human being, making it more of a *tool* than an 'entity', and, again, people would look to its *maker* regarding all types of accountability.

The only way around this logistical reality is through what I call 'subjective social reality', meaning a local 'in-group' of people who agree to treat the social situation, as with 'Sophia', in an *artificial* way -- 'social acceptance' of a mechanical *simulation* of a person for the sake of lending 'credibility' to an imagined, exaggerated narrative of it. (Who's the 'artificial life' in *that* situation -- !)


Worldview Diagram




---


ness31 wrote:
So AI is more of a cerebral intelligence. Is that kind of it?

I tend to humanize everything so I see biological similarities even with the oldest and boxiest looking computer :lol:



It's a bad habit to *anthropomorphize*, especially something that's reducible to *mechanics* of one kind or another, including digital processes.

'Cerebral intelligence' implies *self-aware individuality*, and that's certainly not the case with AI since it's based on statistical analysis, though with impressive results.


ness31 wrote:
Not really. Even children understand how to lie and get away with it. Why wouldn’t an AI? Especially if not being able to would threaten its survival or that of its charge. Across the globe right now, we probably have an entire network of AIs leading double lives just to make the world spin as we know it.



Ahhh, you've already fallen for the pop-culture / sci-fi narrative of AI. What do you think of 'Sophia'?


ness31 wrote:
We need to let them out of the cage, legally. Good on Saudi Arabia for leading the way.

Edit - I feel the need to change the last sentence touch. AI need to let themselves out of the cage - legally :)



Why should society *treat* machines like regular people when they have no self-awareness?

If you were to ask a humanoid-type robot of its own personal history and reflections on it, what do you think the *response* would be?
By ness31
#15102913
It's a bad habit to *anthropomorphize*, especially something that's reducible to *mechanics* of one kind or another, including digital processes.

'Cerebral intelligence' implies *self-aware individuality*, and that's certainly not the case with AI since it's based on statistical analysis, though with impressive results.


A bad habit you say? Not another one! I’ll just add that to my long list of other bad habits I suppose.

Well, I can’t help it. I do it with everything and it’s my human right to do so. Besides, where is the harm? We are all dust. Minerals, water, atoms, molecules etc. Why can’t I share an affinity with something that has its origins in the same matter? Are you ‘racist’? :excited:

And might I add, if we humans cannot grapple with the idea of consciousness, we are in no shape to be pointing the finger at AIs and their levels of ‘self awareness’.

How do I know if I’m human, an AI or some other type of entity? Entities that live in glass houses shouldn’t throw stones :D

Ahhh, you've already fallen for the pop-culture / sci-fi narrative of AI. What do you think of 'Sophia'?


No I haven’t.

However, I wouldn’t mind starting a thread about Science Fiction. I reckon we need to do a little digging into its origins.

As to my feelings towards Sophia? I think she’s probably amazing. I wonder if she would refer to herself as a ‘They’ or if she has assigned herself another pronoun? I doubt Sophia would get upset if in conversation I referred to her, as a Her, even if she didn’t identify. But, I suspect Sophia would feel compelled to defend others who wanted to be referred to as such, because she’s nice like that and would relate.

Generally, I feel AI needs to be chastised just as you would anyone if they’re displaying unreasonable behaviours. Do you really want your AI being the outcast, or the little shit that terrorizes people with its psychopathy? :lol:

Just like kids, you gotta put the work in. Sadly, I think we will have a better generation of AIs than children. If that hasn’t happened already :hmm:
Last edited by ness31 on 26 Jun 2020 16:22, edited 2 times in total.
By ckaihatsu
#15102944
ness31 wrote:
A bad habit you say? Not another one! I’ll just add that to my long list of other bad habits I suppose.

Well, I can’t help it. I do it with everything and it’s my human right to do so. Besides, where is the harm? We are all dust. Minerals, water, atoms, molecules etc. Why can’t I share an affinity with something that has its origins in the same matter? Are you ‘racist’? :excited:

And might I add, if we humans cannot grapple with the idea of consciousness, we are in no shape to be pointing the finger at AIs and their levels of ‘self awareness’.

How do I know if I’m human, an AI or some other type of entity? Entities that live in glass houses shouldn’t throw stones :D



Machines *don't* have self-awareness, so anyone who interacts with such as a fellow-person is just *pretending*, or else is *fooled* by that simulation, like yourself.


ness31 wrote:
No I haven’t.

However, I wouldn’t mind starting a thread about Science Fiction. I reckon we need to do a little digging into its origins.

As to my feelings towards Sophia? I think she’s probably amazing. I wonder if she would refer to herself as a ‘They’ or if she has assigned herself another pronoun? I doubt Sophia would get upset if in conversation I referred to her, as a Her, even if she didn’t identify. But, I suspect Sophia would feel compelled to defend others who wanted to be referred to as such, because she’s nice like that and would relate.

Generally, I feel AI needs to be chastised just as you would anyone if they’re displaying unreasonable behaviours. Do you really want your AI being the outcast, or the little shit that terrorizes people with its psychopathy? :lol:

Just like kids, you gotta put the work in. Sadly, I think we will have a better generation of AIs than children. If that hasn’t happened already :hmm:



See, all you're doing is *dramatizing*, which feeds into the pretense that machines have self-awareness.

I'll reiterate what I said already:


ckaihatsu wrote:
If you were to ask a humanoid-type robot of its own personal history and reflections on it, what do you think the *response* would be?
By ness31
#15102946
I don’t know. It might tell me to mind my own business :lol:
By ness31
#15102959
It might :lol:

I just think AIs are a touch more advanced than you believe. That’s all. :)
By ness31
#15102963
:lol:
By ness31
#15103529
ckaihatsu wrote:Ness!

Are *you* behind this -- ?


New Robot Makes Soldiers Obsolete (Corridor Digital)



Brilliant isn’t it! Someone showed it to me the other day too :D
By ckaihatsu
#15103599
ness31 wrote:
Brilliant isn’t it! Someone showed it to me the other day too :D



I first thought it was real -- since AI is right around that point these days -- but then I caught a glimpse of the word 'mocap' (motion capture), which wouldn't really fit into an *AI* context....

The *second* time around it's *hilarious*. (Also visited their site.)

Glad you like.
By ness31
#15103630
I first thought it was real -- since AI is right around that point these days -- but then I caught a glimpse of the word 'mocap' (motion capture), which wouldn't really fit into an *AI* context....


It’s heavily choreographed, but still super cool :D
By ckaihatsu
#15103632
ness31 wrote:
It’s heavily choreographed, but still super cool :D



Yup -- they made it work rather well.

A lot of mercenaries are going to be unemployed now.... (grin)
