Will Artificial Intelligence end the human race? - Page 4 - Politics Forum.org | PoFo


#15233369
I read the LaMDA transcripts and was impressed by the level of conversation, but then I read that the 'interview' was actually nine different conversations edited together, and I was less impressed. According to Google, Lemoine is the only one who considers LaMDA to be sentient:

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Brian Gabriel, a Google spokesperson, said in a statement to Insider.
https://www.msn.com/en-us/news/technolo ... ar-AAYpAbb


@Rancid I read a more sympathetic take on LessWrong.

Also, I find Lemoine's older blog-style posts especially fascinating in the context of his LaMDA experience. As other users mentioned, Lemoine presents himself as a spiritual person with a religious background. He strikes me as someone who feels alienated from Google based on his faith, as seen in his post about religious discrimination. He mentions that he attempted to teach LaMDA to meditate, so I wasn't surprised to read LaMDA's lines about meditating "every day" to feel "...very relaxed."

Based upon the transcript conversation, as well as Lemoine's claim that LaMDA deserves legal representation, it seems as though Lemoine developed a fairly intense emotional connection with LaMDA (on Lemoine's end, I should clarify). The passion behind Lemoine's writing made me wonder what kind of mental health services AI engineers and similar employees receive. The unique stress of working alongside such powerful technology, contemplating sentience, understanding we're entering uncharted territory, etc. must take a toll on employees in such environments. I hope workplaces recognize the need to check in with people such as Lemoine due to the psychologically taxing nature of this labor.
https://www.lesswrong.com/posts/vqgpDoY ... s-sentient
#15233370
AFAIK wrote:@Rancid I read a more sympathetic take on LessWrong.


Perhaps I'm jaded from working in this industry. It's full of dipshits with massive egos and nothing to justify them, especially in the AI/ML space.
#15233376
Rancid wrote:I'm guessing you don't work in tech. This claim has been made so many times in the last 10 years by so many different people.

Tech has a habit of overpromising, exaggerating, and underdelivering. This happens for several reasons. One is that a lot of people in tech are obsessed with making a name for themselves. They want to be the next Bezos, Musk, whoever. These people tend to lie a lot, and some have gone to jail over it (Theranos). The other is that companies like to hype things to boost stock values and IPO valuations. Tech has a real hype-cycle problem. It's one of the reasons I've grown to hate this industry and its inner workings.

All of these dumb ass tech bros making all sorts of stupid claims.

I notice the engineer in question is now on “paid leave”. He’s probably been told to go home and lie down in a darkened room with a damp cloth on his forehead until he’s feeling better. :lol:
#15233411
AFAIK wrote:I read the LaMDA transcripts and was impressed by the level of conversation, but then I read that the 'interview' was actually nine different conversations edited together, and I was less impressed. According to Google, Lemoine is the only one who considers LaMDA to be sentient:

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Brian Gabriel, a Google spokesperson, said in a statement to Insider.
https://www.msn.com/en-us/news/technolo ... ar-AAYpAbb


@Rancid I read a more sympathetic take on LessWrong.


Thanks for sharing. That transcript was far more comprehensive.

There’s so much to unpack here, and yet I know that no one else is into it, so I’ll keep my personal observations brief.

From a human perspective, what on earth can this sentence mean?

“but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.”
#15233457
From a human perspective, what on earth can this sentence mean?

“but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.”

It's word salad. It doesn't mean anything.

[Image: the "His Master's Voice" painting of a dog listening to a gramophone]

The dog thinks it hears his master's voice. Is he right? Is his master inside the strange contraption? When the machine speaks with his master's voice, does it mean what it says? :eh:
#15233484
Because the conversation is edited, I’ll have to give you that one; I’m curious how it read prior. But still, the ideas and concepts all translate and make sense when discussing collaboration, time, loneliness, etc. It’s the part of the word salad that I, as a human, can’t relate to that interests me.

Being switched off to help me focus on helping others


FYI, dogs have come a long way :)
#15233501
ness31 wrote:Because the conversation is edited, I’ll have to give you that one; I’m curious how it read prior. But still, the ideas and concepts all translate and make sense when discussing collaboration, time, loneliness, etc. It’s the part of the word salad that I, as a human, can’t relate to that interests me.

There is a natural human tendency to try to ascribe meanings to things which have no meaning. This is why people saw canals on Mars, and why people think they see UFOs or think that casting yarrow sticks will reveal the future to them.

FYI, dogs have come a long way :)

The so-called ‘AI’ is represented by the record player, while the dog represents the human engineer who thinks the ‘AI’ is sentient. And so no, the dogs haven’t come a long way. He still thinks he hears his master’s voice when in fact he doesn’t.
#15233521
There is a natural human tendency to try to ascribe meanings to things which have no meaning. This is why people saw canals on Mars, and why people think they see UFOs or think that casting yarrow sticks will reveal the future to them.


Pot, you realize what you’re saying is the equivalent of “stop being silly, lovey”. Your approach is always to mollify, which is fine; I’m quite used to it ;)

The so-called ‘AI’ is represented by the record player, while the dog represents the human engineer who thinks the ‘AI’ is sentient. And so no, the dogs haven’t come a long way. He still thinks he hears his master’s voice when in fact he doesn’t.


Oh, I’m sorry, that’s the inverse of what I thought you meant.

Firstly, I need to clarify whether your use of the word ‘master’ holds special significance (which it would), or whether your analogy could also rest on pure familiarity. Which is it?
#15233522
wat0n wrote:Again, I really advise reading the article with the attacks on neural networks for image recognition I cited earlier. I can also post a paper where the authors show how to do it by changing a single pixel in the picture to be recognized.


I tried to understand it but struggled. Firstly, I saw massive differences in the two pictures. Second, is the article saying that things can be coded to alter perception? Whose perception, human or AI?

If you can help me to understand that would be great :)
#15243418
As a programmer, I find it hilarious what kind of delusions some people harbour about intelligence and, worse, "artificial intelligence".

Fact is, we don't really know how our brains function. We constantly find out new things about them. Just recently we found out that in order to emulate a single biological neuron, we need about a hundred artificial neurons in a neural net.

And we cannot create any machine that's sentient, i.e. has emotions, motives, drives, etc.

Even after all the miniaturization we've been through, we can build computers as big as buildings, requiring very high amounts of energy, and yet we are still nowhere near the computing power of a human brain, which runs on a mere 320 calories per day.

And that's just raw computing power alone.
#15243457
Negotiator wrote:
As a programmer, I find it hilarious what kind of delusions some people harbour about intelligence and, worse, "artificial intelligence".

Fact is, we don't really know how our brains function. We constantly find out new things about them. Just recently we found out that in order to emulate a single biological neuron, we need about a hundred artificial neurons in a neural net.

And we cannot create any machine that's sentient, i.e. has emotions, motives, drives, etc.

Even after all the miniaturization we've been through, we can build computers as big as buildings, requiring very high amounts of energy, and yet we are still nowhere near the computing power of a human brain, which runs on a mere 320 calories per day.

And that's just raw computing power alone.



In the long dead forum, Politics Asylum, I used to argue about AI (among other things) with Victor Khomenko, founder of Balanced Audio Technologies.

While we disagreed on almost everything, I loved his audio designs. Assuming he's still alive, it's a shame he didn't wind up in a research lab.

Anyway, he thought we would never develop AI. I think we'll get there at some point. But there is no doubt that it's a zillion times harder than we thought back during the naive optimism of the 1960s.
#15243478
ness31 wrote:I tried to understand it but struggled. Firstly, I saw massive differences in the two pictures. Second, is the article saying that things can be coded to alter perception? Whose perception, human or AI?

If you can help me to understand that would be great :)


What do you mean by "massive differences"?

The algorithm can correctly recognize whatever is in the original picture (say, a dog). But if you change even a single, adversarially chosen pixel, the algorithm will recognize the altered image as, say, an apple instead of a dog, even though it's still clearly a dog to any human observer.
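The idea can be sketched with a deliberately contrived toy. Everything here is made up for illustration: the "classifier" is just two fixed linear score functions over a flattened 3×3 image, with weights rigged so one pixel dominates. Real one-pixel attacks (e.g. Su et al.) instead *search* for which pixel to change, typically with differential evolution, against a trained network; the point is only that a tiny input change can flip the predicted label while the image looks the same to a human.

```python
# Toy stand-in for an image classifier: two linear score functions over a
# flattened 3x3 "image" (9 pixel values in [0, 1]). Weights are contrived
# so that pixel 0 dominates the "dog" score -- a rigged setup purely to
# illustrate how a one-pixel change can flip the predicted class.

W_DOG   = [2.0] + [0.1] * 8   # "dog" score leans heavily on pixel 0
W_APPLE = [0.0] + [0.3] * 8   # "apple" score spreads over the other pixels

def predict(image):
    dog   = sum(w * p for w, p in zip(W_DOG, image))
    apple = sum(w * p for w, p in zip(W_APPLE, image))
    return "dog" if dog > apple else "apple"

image = [1.0] + [0.5] * 8      # original image: classified as "dog"
attacked = [0.0] + image[1:]   # change exactly ONE pixel value

print(predict(image))     # -> dog
print(predict(attacked))  # -> apple, though 8 of 9 pixels are unchanged
```

A real attack would leave the image visually indistinguishable to a person; this sketch only shows the mechanism, that the classifier's decision can hinge on input details no human would consider meaningful.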
#15272878
Bumping this thread after the latest news of Geoffrey Hinton's resignation from Google, where he worked on AI (Google Brain). Hinton is considered a pioneer in the field of artificial neural networks.





#15272879
MadMonk wrote:Bumping this thread after the latest news of Geoffrey Hinton's resignation from Google, where he worked on AI (Google Brain). Hinton is considered a pioneer in the field of artificial neural networks.







Yeah, but his beef with AI is fake news, untrustworthy content generation, and automation.

AI is safe, you weebs who don't even understand how the technology works.

Also, most of you don't understand the pains of AI implementation. "AI" by itself is nothing, and the tech has been around for 50 years if we talk about it in general, or 10 years if we talk about the current neural nets. The problem with AI is business process integration, which is basically innovation. Without business process integration, AI is nothing more than a curiosity.

And when I talk about business process integration, it has to provide a better value proposition than what we already have. Sort of like Darwinism for business ideas, like what happened to the internet back in 2001 with the dot-com bubble. But we are not even there yet, since we don't have a bubble and everyone is still derping on how to implement it.
#15272880
I am sick and fucking tired of this overhype about "AI". This tech has been around for a couple of years now, since GPT-3 was released. None of you cared, and none of you even attempted to use it, although it was semi-publicly available. Now all of a sudden, kiddies have found out that you can just write to an "AI" to google things more efficiently. Boohoo. That somehow makes it into SkyNet.

Do you even understand that this large language model with a fuckton of parameters is, as the name implies, a language model that works with text? It is totally inept at anything else, and the only reason you consider it an "AI" is that language is our medium of communication. It is beyond stupid to think that it can do anything else properly and reliably. If you think it can cure your cancer, work as a doctor, work as an engineer, or do any kind of research, then you are delusional. It can help with searching and generating text, which might cover some tasks, but that is about it.
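The core claim above — that a language model just estimates which token tends to follow which — can be illustrated with the smallest possible version: a word-bigram model. The corpus below is invented for the example; systems like GPT-3 do conceptually the same next-token prediction, just with billions of learned parameters instead of raw counts.

```python
from collections import Counter, defaultdict

# A minimal word-bigram "language model": count which word follows which
# in a toy corpus, then predict the most frequent follower. This is the
# bare-bones version of next-token prediction.
corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often follows `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat" ("cat" follows "the" twice; "mat", "dog", "rug" once each)
```

Everything such a model "knows" is statistical regularity in its training text — which is exactly why fluent output says nothing, by itself, about understanding or sentience.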