Will Artificial Intelligence end the human race? - Page 3 - Politics Forum.org | PoFo


#15226785
JohnRawls wrote:We are way more than "few decades" away from having AI like in the movies.

Maybe. But it is in the nature of technological progress to be unpredictable. We don't know what advances will be made, or when, or what their effects will be. Before AlphaGo, the best estimates were that superhuman go playing was at least a decade away. It wasn't.
One of the fundamental problems is that we don't know how the brain, or humans in general, work in this regard, so modern "AI" is just an attempt to use mathematical techniques to solve problems that might lead to an AI. But it is like trying things over and over again and expecting to create AI.

Superhuman AI (SAI) is very unlikely to work the same way the human brain works. We don't know how it will work, but once it gets close to human level, it could find a more efficient way for the next generation to work, and the process of AI improvement could then snowball very quickly.
In my experience, something as complicated as an AI will not be created without a clear theoretical understanding.

Probably new methods and even new paradigms will be needed. But a lot of very smart people are thinking about this problem because it is The Prize: SAI is the ultimate winner-take-all objective. The fact that it is also extremely dangerous will not stop people from pursuing it because intelligence is power, and if there is one thing people want, it is power.
It is like creating a nuclear bomb without knowing anything about E=MC^2.

In fact, one could do that. All that is necessary is to know that bringing certain isotopes into close proximity and keeping them there long enough will release a large amount of energy. At that point, it's just an engineering problem.
#15227419
The fears surrounding AI are that the machine will do statistical analysis, come to the realization that the world requires humanity's eradication, and thus motivate itself to seek the best means to achieve this. Machines can do the statistical analysis part, but can a machine ever truly motivate itself, or would it have to be designed to proceed in whatever direction achieves the best possible outcome for its analysis? And if it were designed to carry that out, would it even need to be true AI?
I personally don't entirely believe that a true AI will ever exist. But it is possible that machines could wipe us out today, if someone wanted to put the resources into making it so, for that direct (or indirect) purpose. How likely is it that someone will invest the resources in building a machine that, once its course has been set, cannot be interfered with by its programmer? Highly unlikely, unless some wealthy psychopath was bored and wanted to go out with a bang and take the whole world with them.
#15227424
froggo wrote:The fears surrounding AI are that the machine will do statistical analysis, come to the realization that the world requires humanity's eradication, and thus motivate itself to seek the best means to achieve this. Machines can do the statistical analysis part, but can a machine ever truly motivate itself, or would it have to be designed to proceed in whatever direction achieves the best possible outcome for its analysis? And if it were designed to carry that out, would it even need to be true AI?
I personally don't entirely believe that a true AI will ever exist. But it is possible that machines could wipe us out today, if someone wanted to put the resources into making it so, for that direct (or indirect) purpose. How likely is it that someone will invest the resources in building a machine that, once its course has been set, cannot be interfered with by its programmer? That is highly unlikely,


:)
#15227495
After doing its statistical calculation, it will more than likely come to the conclusion that some human models of doing things need eradication. Like corporations. Not humans themselves.


But we didn’t need AI to tell us that :|
#15227498
ness31 wrote:After doing its statistical calculation, it will more than likely come to the conclusion that some human models of doing things need eradication. Like corporations. Not humans themselves.

But we didn’t need AI to tell us that :|

You clearly need someone to tell you that corporations are Not The Problem. The fact that you think corporations are the problem makes you Part Of The Problem.
#15227990
ness31 wrote:I’m listening.

The problem is privilege: legal entitlements to benefit from the abrogation of others' rights without making just compensation. Corporate limited liability is a privilege, but a rather minor one. The most important (i.e., valuable and harmful) ones are private titles of ownership to land, IP monopolies, bank licenses, oil and mineral rights, and broadcast spectrum allocations. The fact that most such privileges are owned by corporations doesn't make corporations the problem, any more than most alcoholics being male makes males the problem.
#15228162
How is limited liability a minor problem? The eschewing of accountability and the deliberate opaqueness for which the corporate model was created are at the root of why people aren’t able to reconcile just how far along we are in the progression of Artificial Intelligence.
I’ve not given it a huge amount of thought or done any research on the matter, but at a glance the corporate model seems to imitate military hierarchy *shrugs*
#15228710
Potemkin wrote:Roko’s Basilisk.

There. I’ve doomed us all.

Mwuhahahahahahaha!!! :muha2: :muha2: :muha2:


Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.

Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus ensuring that Roko’s Basilisk would become the stuff of legend.


:lol: Nerd rage never gets old.
#15228712
Our relationship with computers is symbiotic, and will remain that way for the foreseeable future.

Because computers don't eat or breathe, when they do get supersmart, they may just go live on the Moon, or Mars, or a dozen other places where we won't bother them.

A little history: arguably the first science fiction novel was Frankenstein. It played on our fear of the new. But electricity is not new, and you never give it a second thought. Likewise, surgery is not new, and if you do give it a second thought, it's to worry about finding a good surgeon.

Don't let your limbic system run you around.
#15233237
ness31 wrote:Google engineer claims AI technology LaMDA is sentient

https://www.abc.net.au/news/2022-06-13/ ... /101147222


I'm guessing you don't work in tech. This claim has been made so many times in the last 10 years by so many different people.

Tech has a habit of overpromising, exaggerating, and underdelivering. This happens for several reasons. One is that there are a lot of people in tech who are obsessed with making a name for themselves. They want to be the next Bezos, Musk, whoever. These people tend to lie a lot, and some have gone to jail over it (Theranos). The other reason is that companies like to hype things to boost stock values and IPO valuations. Tech has a real hype-cycle problem. It's one of the reasons why I've grown to hate this industry and its inner workings.

All of these dumb ass tech bros making all sorts of stupid claims.