Which organization will develop AI sentience first? - Politics Forum.org | PoFo


First AI achieved by?

Elon Musk / SpaceX / Tesla: No votes (0%)
Microsoft Corporation, Cortana: 1 (5%)
Apple, Siri: No votes (0%)
Google X, DeepMind: 3 (14%)
NSA, US intelligence cryptanalysts: 4 (19%)
Cyberdyne Systems, HAL: No votes (0%)
Dan Dennett, Doug Hofstadter, Bostrom: No votes (0%)
Stephen Wolfram, Wolfram Alpha: 1 (5%)
IBM, Deep Blue, Watson: 3 (14%)
Toyota, Asimo: No votes (0%)
DARPA / Boston Dynamics, Atlas: No votes (0%)
Other, please elaborate: 9 (43%)
#14693105
Mossad already have one in the underground labs and have had for centuries. What do you think controls all the western media and makes it take the same line? You think that happens by accident? You think that's air you're breathing?
#14693165
mikema63 wrote:Other, I suspect that developing true AI is going to take so long that it may not be any of these groups at all.

Other, this.

AI is old wine in new bottles.

There was an AI fad in the 1980s and even in the 1950s.

We have more processing power now and a better grasp of machine learning algorithms, but let's just say that's a few prawns short of a galaxy.
#14693604
^ No, it doesn't. Moore's law is dying as we speak. Unless a major breakthrough happens in computing, the rate of development of processing power is going to drastically fall.

As per my answer, what Mike said.
#14693778
Sphinx wrote:I tend to believe that AI is not a well-defined term.


This is the problem. Sentience, consciousness, self-awareness, etc. are not concrete goals toward which you can advance. Our understanding of these phenomena is at a standstill, from both a theoretical and an applied standpoint.

We don't even know if consciousness is platform-independent (Penrose). If it is not, that would essentially doom any hope of AGI - at least until we are able to specify what about the human brain, as a physical platform, enables consciousness.
#14693808
Computer power improves exponentially and continues to do so. Current estimates for the AI singularity are for around 2050.


What Fuser said is true, we are reaching the physical limit to how small we can make transistors that are still functional.

I would also add that computing power by itself isn't what makes humans self-aware and intelligent in the sense we are discussing for computers. It's hard to really compare the brain to a computer because of how differently they operate, but we do already have supercomputers that can theoretically outperform the human brain in terms of memory and processing power (though we process information in such a radically different way from computers that the comparison isn't worth much).

The problem with creating a machine consciousness is that we have no idea how consciousness operates, or even what exactly it is. When we talk about creating an AI, we are just stumbling around in the dark, hoping we accidentally hit on the right method. We will no doubt create very interesting programs that are very convincing, but ultimately I don't think we will produce true consciousness by this method.
#14695626
I think it will come about from data analysis software (probably many modules forced together by some machine learning software) built for advertising, and that we will only realize it when a user interface (perhaps one responding to admin/root inquiries) starts acting intelligently, but not in the expected way.
#14695676
Thunderhawk wrote:I think it will come about from data analysis software (probably many modules forced together by some machine learning software) built for advertising, and that we will only realize it when a user interface (perhaps one responding to admin/root inquiries) starts acting intelligently, but not in the expected way.

I did write the first GPAI in 1999: it was an Excel sheet aimed at optimizing my laundry fees according to the weather forecasts, the CAC40 index and quantum variations.

It scared me so I deleted it: I wasn't ready to become a father then.


@quetzalcoatl
They may not be well-defined, but they are nevertheless concrete goals with teams advancing towards them, especially self-awareness, as it is very important for learning. And I do not believe for an instant in Penrose's hypothesis.

As for the original question, the big companies do not look like they are truly working towards a GPAI. Their work focuses more on statistical buyer detection and kitten tagging, and I doubt this will be of much use for a GPAI.
#14695774
Other: Schlumberger

They're already one of the largest data processors in the world, analyzing geological data to drill for oil. Possibly on par with other massive data processors like the NSA or Google. They will be the first to develop true AI in order to find oil.

(Also I'm being contrarian and picking a highly controversial dark horse for this race. 8) )
