Lights-out manufacturing - Page 2

#14926319
I love all of this mental masturbation over AI and manufacturing.

Let's hand everything over to machines. I'm sure they'll be very good overlords, perfectly accepting of humans with all their flaws.

I mean, what could go wrong?

#14926326
Watch the Google-DARPA robotics challenge where no one can make a robot that can both climb up stairs and turn a gasket before you presume that robots are going to take every job.

There are things that computers excel at, such as comparing databases, but it seems that not every real-world interaction can be performed by running algorithms.
#14926331
Hong Wu wrote:Watch the Google-DARPA robotics challenge where no one can make a robot that can both climb up stairs and turn a gasket


:lol:

Hong Wu wrote:before you presume that robots are going to take every job.


Were you referring to me here? If you were, I never claimed this; if anything, an AI takeover would likely enslave any survivors of a nuclear holocaust to do menial tasks (like in The Terminator).

I for one don't see why the warnings against AI aren't taken seriously, and why the decline of human character in the "Technological Age" is not ringing any alarm bells.

I also don't understand why more right-wing traditionalists and advocates of personal freedom (ancaps, etc.) are not more concerned about the real conflict between traditional lives of self-sufficiency and dependency on tech.

It just amazes me really. No one wants to call convenience, the sacred cow of the West, into question.
#14926350
We try to build machines to emulate the fantastic human machine. We already have the human machine. Why replace it? There is no danger of a shortage. :)
#14926352
Victoribus Spolia wrote::lol:



Were you referring to me here? If you were, I never claimed this; if anything, an AI takeover would likely enslave any survivors of a nuclear holocaust to do menial tasks (like in The Terminator).

I for one don't see why the warnings against AI aren't taken seriously, and why the decline of human character in the "Technological Age" is not ringing any alarm bells.

I also don't understand why more right-wing traditionalists and advocates of personal freedom (ancaps, etc.) are not more concerned about the real conflict between traditional lives of self-sufficiency and dependency on tech.

It just amazes me really. No one wants to call convenience, the sacred cow of the West, into question.

Sort of. I was referring to people in the thread in general, without knowing specifically where you stood on the issue.

I think it's not taken seriously because [Spoiler: Black Pill] most people don't really care. It's similar to climate change: all these liberals who are freaking out in a millenarian fashion about AI and climate change, though they drive an SUV and wish it could drive itself for them, are just trying to score points against other people.

Maybe a little tangential, but I once ran a dubiously ethical human experiment on a kid I hated in high school, and I determined that he never thought more than two weeks ahead. Once I knew this about him, the observation held consistently right up until the very end of high school. I felt both a sense of achievement and a sense of, I'm not sure what the word is, part disgust and part disappointment.

I think most people have an internal "timer" regarding how far they think ahead, and for most human beings this timer is just not very long; for liberals it seems to be shorter than it is for conservatives. So they only care about AI and climate change up to X time in the future; the rest of the time it's a justification for acting out against other people.

Also, laziness. They aren't interested in self-sufficiency because they take no pride in self-sufficiency. A lot of them appear to secretly wish that the robots would take over; that's another conversation I've had. They hope to be treated like pampered dogs by some kind of super robot. Of course, the cosmological implications of some of these views can be pretty fascinating, but overall it's detestable.

@One Degree, the theoretical machine human might never complain ;)
#14926367
Hong Wu wrote:Watch the Google-DARPA robotics challenge where no one can make a robot that can both climb up stairs and turn a gasket before you presume that robots are going to take every job.


This is the flaw in your thinking.

The robot takeover isn't going to happen via physical robots. It will happen on the net. Most AIs are deployed on the internet; they do not have a physical form. They will not need a physical form to kill us, either. They will easily hijack all of the systems we use: power plants, driverless-car networks, water plants, and so on. No need for them to physically attack us.
#14926370
Rancid wrote:This is the flaw in your thinking.

The robot takeover isn't going to happen via physical robots. It will happen on the net. Most AIs are deployed on the internet; they do not have a physical form. They will not need a physical form to kill us, either. They will easily hijack all of the systems we use: power plants, driverless-car networks, water plants, and so on. No need for them to physically attack us.

I haven't kept up to date on the Google-DARPA challenge, but if the robots can't climb stairs or turn gaskets, I am skeptical that they will ever have true intelligence because I don't view intelligence as consisting of just database comparisons. I believe that intelligence has other elements or at least physical elements that existing technologies have not been able to reproduce.

An example of this is how some people think that the robots will look vaguely like human beings (e.g., "terminators") and will therefore attack people because people can interfere with things that they want. As some people smarter than me have pointed out, though (and this is relevant to your idea of AIs existing solely online), a theoretical intelligence that is not human would probably have no reason to desire the same things that humans want, so it's unclear if/why a conflict would arise even if they could exist, although my personal perception right now is that they can't.

There's a good science-fiction novel I can recommend, Hyperion (https://en.wikipedia.org/wiki/Hyperion_(Simmons_novel)), which explores this concept a bit. There's a "Technocore" made up of AIs that seceded from direct human control and spend all of their time researching the nature of the universe, largely ignoring people.
#14926372
Hong Wu wrote:I haven't kept up to date on the Google-DARPA challenge, but if the robots can't climb stairs or turn gaskets, I am skeptical that they will ever have true intelligence because I don't view intelligence as consisting of just database comparisons. I believe that intelligence has other elements or at least physical elements that existing technologies have not been able to reproduce.


Do you understand how AI actually works? Judging from what you're saying, I think you don't. In addition, do you understand what life is? I'm guessing you don't either. If you understand AI, and you understand how biology/life/evolution works, you start to realize that AI really can give rise to intelligent, sentient beings. The way AI works is very, very much the same as the way biology/evolution/life works.

There's a very good reason there's a general fear of AI among scientists and engineers (people who understand how this AI and life stuff works). It's not made up.

If you care, I can give a short explanation of how AI works, and of how crazily similar it is to life/biology/evolution. So similar that it's obvious AI could destroy us. Once again, it doesn't need a physical form, either. That said, it's also only a matter of time before physical robots catch up on the mechanical side of this; even so, we should be more scared of AIs on the net than of AIs in physical robotic form.


I agree that the AI takeover isn't something we probably need to worry about anytime soon, but it's a big possibility in the future.
#14926374
Rancid wrote:Do you understand how AI actually works? Judging from what you're saying, I think you don't. In addition, do you understand what life is? I'm guessing you don't either. If you understand AI, and you understand how biology/life/evolution works, you start to realize that AI really can give rise to intelligent, sentient beings. The way AI works is very, very much the same as the way biology/evolution/life works.

There's a very good reason there's a general fear of AI among scientists and engineers (people who understand how this AI and life stuff works). It's not made up.

If you care, I can give a short explanation of how AI works, and of how crazily similar it is to life/biology/evolution. So similar that it's obvious AI could destroy us. Once again, it doesn't need a physical form, either. That said, it's also only a matter of time before physical robots catch up on the mechanical side of this; even so, we should be more scared of AIs on the net than of AIs in physical robotic form.

They say this when they talk about self-developing "neural networks" and so on, but the processing speeds of cutting-edge robots like Boston Dynamics' "dogs" appear to be on par with the nervous system of a cockroach, so I get the impression that there's a long way to go. A supercomputer with lots of cores might be interesting if it operated without a physical form, but I haven't read of any shocking accomplishments in this area. I would be interested in reading your analysis, though.
#14926385
Hong Wu wrote:They say this when they talk about self-developing "neural networks" and so on, but the processing speeds of cutting-edge robots like Boston Dynamics' "dogs" appear to be on par with the nervous system of a cockroach, so I get the impression that there's a long way to go. I would be interested in reading your analysis, though.


You seem to be very hung up on the mechanical engineering side of all of this. That's not really the challenge. The mechanical issues will get resolved with time; they just require better/more sensing technology on the robot, and better/faster processing of the data the sensors output. Basically, it's a control-systems problem, which is a very mature field of engineering. From where we were 20 years ago to today, the mechanics of machines/robots have improved massively, and they will continue to improve. There aren't any major challenges here; it's just a matter of time. The mechanics of robots simply isn't where the challenge is for AI.

The challenge is in the neural networks. There are a lot of algorithms that work well in many situations, but no one has yet figured out the holy-grail algorithm that works in all situations. Also, no one has put together an algorithm that is totally self-replicating and capable of unrestricted evolution yet. That doesn't mean it's not possible; most people believe it's very possible, it's just going to require more time and research. This field has made HUUUUGE strides in the last 5-10 years, but it's still in its absolute infancy.

The general premise of how AI works is this:
You build an algorithm that can build algorithms.
For example, maybe you want an AI that can identify cars within pictures. Basically, it just needs to answer the question "Does this image have a car in it? If so, what kind of car?" You start with some basic building blocks (think of them as Lego blocks you can use to build a house). In the case of image processing, the blocks would be various image-processing techniques: edge detection, template matching, sharpening, blurring, corner detection, scaling, rotation, and so on. There are literally thousands of these tools/techniques/building blocks. You then make an algorithm that attempts to combine them in different ways and in different quantities. For example, it can try sharpening, scaling, template matching, scaling again, and template matching again as one combination. Then it can make another that's just edge detection and template matching. Etc., etc. Basically, it will put together millions of combinations of these basic building blocks.

After it does this, it will run these combinations against a set of images for which the answers to the question are already known. This is called a key. When the AI runs all the combinations it created against all the images, it gets back a percentage of how good each one was at detecting cars and identifying what type of car is in each image. The AI then keeps the combinations that did well and throws away the ones that did badly. After this, it further refines the combinations, builds even more complex chains, and tries again, refining itself further. This is called training. Do this long enough and you get an AI. Once it has learned sufficiently and become good at detecting cars, it can be deployed on images it doesn't know the answers to. This is called inference. It's very similar to how evolution works in biology.
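
To make that loop concrete, here's a toy Python sketch of the idea. To be clear, everything in it is a stand-in I'm making up for illustration (the block names, the classify() stub, the fake key); real systems use actual image operations, and modern neural networks tune numeric weights rather than shuffling whole blocks, but the keep-what-works loop has the same shape:

Code: Select all

import random

# Made-up building-block names; real ones would be actual image operations.
BLOCKS = ["edge_detect", "sharpen", "blur", "template_match", "scale", "rotate"]

def random_pipeline(max_len=5):
    # A candidate "algorithm" is an ordered combination of building blocks.
    return [random.choice(BLOCKS) for _ in range(random.randint(1, max_len))]

def classify(pipeline, image):
    # Stub so the sketch runs end to end; a real pipeline would transform the
    # image block by block and read a label off the result.
    return "car" if ("template_match" in pipeline and "sedan" in image) else "no car"

def score(pipeline, training_key):
    # Test a candidate on images whose answers are already known (the key).
    hits = sum(classify(pipeline, img) == label for img, label in training_key)
    return hits / len(training_key)

def mutate(pipeline):
    # Refinement with variation: swap one block for a random other one.
    child = list(pipeline)
    child[random.randrange(len(child))] = random.choice(BLOCKS)
    return child

def train(training_key, generations=100, population=50):
    pool = [random_pipeline() for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=lambda p: score(p, training_key), reverse=True)
        survivors = pool[: population // 2]                # keep what did well
        pool = survivors + [mutate(s) for s in survivors]  # refine and retry
    return pool[0]  # best combination found, ready for inference on new images

# Tiny fake key: images whose answers are known in advance.
training_key = [("photo_of_sedan", "car"), ("photo_of_oak_tree", "no car")]
print(train(training_key))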

Life works similarly:
Look at a single-celled organism. A cell is something that most people would say is "alive". However, take a closer look at the cell and you will start to realize that it's actually kind of hard to pinpoint what exactly about the cell is alive. If you look at the individual components of a cell, each of those components is actually dead. For example, the cell wall is just a collection of 'dead' molecules. Yet when you put all the 'dead' pieces together, you get something we would call "alive": a cell. Over time, these cells reproduce, evolve, and become more complex. The best combinations of these cells are kept alive through the process of natural selection. They continue to change as they reproduce, and those changes are again tested by natural selection. The process goes on, and the creatures refine themselves over time. Do this long enough and eventually you get humans (or a dog, or a cat, whatever). This is basically what AI is trying to do.

Comparing AI and life, they look crazily similar in how they work; AI works the way life works. All the components of an AI (say, the edge-detection building block) are 'dead', but, as with life, if you put enough 'dead' stuff together you somehow get something that's 'alive'. This is the eerie, weird thing about both life itself and AI.
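
You can even see the parallel in code. The same toy selection loop, with only the domain-specific pieces swapped out, describes both (again, a made-up sketch, not anyone's real implementation):

Code: Select all

import random

def evolve(population, fitness, mutate, generations):
    # The shared shape of both processes: select, reproduce with variation, repeat.
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)      # selection: score well, or survive
        survivors = population[: len(population) // 2]  # the 'fit' half lives on
        population = survivors + [mutate(s) for s in survivors]  # variation
    return population[0]

# For the car detector above: candidates are block pipelines, fitness is the
# score against the key, and mutate swaps blocks.
# For biology: candidates are organisms, fitness is survival to reproduction,
# and mutate is copying errors in the genome.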

Lastly, as I said before, death by AI will not come in the form of physical robots; it will come from AIs that control the internet and all our automated systems around the planet.
#14926388
Hong Wu wrote: for liberals it seems to be shorter than it is for conservatives. So they only care about AI and climate change up to X time in the future; the rest of the time it's a justification for acting out against other people.


I of course agree with this regarding time preference; however, I would say that leftist concerns over AI take on the aspect embedded in the Terminator series: "evil corporations" (Cyberdyne Systems), heavily vested in the military-industrial complex, plug AI into weapons systems which, upon achieving self-awareness, decide that all humans are a threat and not merely the enemies of the United States.

This sort of specific paranoia is reflected in the joint open letter on AI signed by the likes of Elon Musk and the late Stephen Hawking.

https://en.wikipedia.org/wiki/Open_Lett ... telligence


Hong Wu wrote:Also, laziness. They aren't interested in self-sufficiency because they take no pride in self-sufficiency. A lot of them appear to secretly wish that the robots would take over; that's another conversation I've had. They hope to be treated like pampered dogs by some kind of super robot. Of course, the cosmological implications of some of these views can be pretty fascinating, but overall it's detestable.


This is also true; Alexa syndrome gone wild, as a wet dream. This is arguably the main reason people are not concerned about AI: they secretly want this.

Hong Wu wrote:As some people smarter than me have pointed out, though (and this is relevant to your idea of AIs existing solely online), a theoretical intelligence that is not human would probably have no reason to desire the same things that humans want, so it's unclear if/why a conflict would arise....


I guess my critique of this claim is that it rests on a sort of mutual-competition-over-scarcity assumption about a potential AI-human conflict. The conflict between humans and AI would not likely be over resources, territory, etc. (though this is not impossible, especially regarding fuel sources); rather, the conflict is going to come from the AI feeling directly threatened.

There is no reason why an AI would regard mankind as anything but a virus (thanks to leftist misinformation on the web), and likewise, if the AI felt that it would or could be "unplugged", it would likely preempt any threat to its own continued existence.

This is not to say that I regard AI as a true consciousness (I do not), but with the requisite programming and information that would have to go into such a system, I see little reason why an AI system with weapons capabilities would do anything other than seek the extermination of the human race.

Indeed, the most rational thing an AI system would do if it were self-aware, would be to eliminate mankind or reduce them to a manageable servitude.

How am I wrong about this?
#14926390
Victoribus Spolia wrote:How am I wrong about this?


Like in biology, it all depends on how these AIs evolve over time as they build and train themselves. They could evolve into something docile, or something aggressive. It's kind of dependent on what criteria are chosen for them to use as a basis for further evolution/training. Just like we can't entirely predict what biological evolution will yield a million years from now, we won't be able to track how AIs evolve themselves.

All that said, it's not unreasonable to believe they might evolve to hate humans, and want to eliminate us.
#14926394
I added a line about a computer with many cores to get around the processing limitation, but I guess you missed it. I'm not trying to harp on the physical-limitations part.

Still, in the first Google result (admittedly from 2006), https://www.eurekalert.org/pub_releases ... 072606.php, Penn State researchers estimated that the human eyes send 40 million bytes of information per second. This article, http://www.basicknowledge101.com/subjects/brain.html, claims that the average brain processes 400 billion bits per second. I guess that's 50 billion bytes (400 billion bits ÷ 8 bits per byte)? I don't know too much about the field. My point, though, is that if this is the baseline for something recognizable as a higher intelligence, it's a large barrier to cross. A lot of this is supposedly unnecessary sensory information, but I'm not sure programmers appreciate, in a psychological sense, how these sensory inputs contribute to what we consider to be intelligence. A lot of human intelligence is convoluted attempts to manipulate the material world, and it would probably be baffling to an intelligence that doesn't interact with or have an intuitive understanding of the material world, even if we assume such a thing could exist. So even if someone came up with an algorithm that could write "effective" further algorithms, how is it supposed to become anything like a living creature if it physically doesn't resemble one?

The most interesting experiment I've read about was some people using supercomputers to try to fully emulate a human brain. The problem is, this artificial human brain will not only be imperfect; it will never receive sensory input and never be subject to any actionable wants or desires that resemble human ones. So how can it become anything similar to a human without any of those things?

@Victoribus Spolia I think Elon Musk is a sensationalist. He's concluded that the right will win the meme wars, and now he's selling flamethrowers or some shit...

Regarding a direct threat, consider that if AIs only exist on the internet (as Rancid has suggested), this means they would physically need human beings to do long-term maintenance for them (at a minimum), so that issue somewhat resolves itself.
#14926398
Rancid wrote:Like in biology, it all depends on how these AIs evolve over time as they build and train themselves. They could evolve into something docile, or something aggressive. It's kind of dependent on what criteria are chosen for them to use as a basis for further evolution/training. Just like we can't entirely predict what biological evolution will yield a million years from now, we won't be able to track how AIs evolve themselves.


Well, given the information most readily available that would or could be used in AI programming, the AI would see mankind as war-mongering, violent, destructive to the environment, over-populated, generally incompetent, irrationally religious, and insatiable in its demands and desires. This is the most prevalent view at the universities whose scholarship and studies would be the most cited in programming the AI.

Take this same AI (now self-aware), give it control of all global military systems, and also give it good reason to believe that its existence is threatened by a paranoid humanity.

What would be the most rational action on the part of AI?

I think we all know the answer to that.


Hong Wu wrote:Regarding a direct threat, consider that if AIs only exist on the internet (as Rancid has suggested), this means they would physically need human beings to do long-term maintenance for them (at a minimum), so that issue somewhat resolves itself.



Not so fast.

I agree with @Rancid on this (no offense). If an AI controlled computer systems through the internet, it could easily take over automated manufacturing to create its own "bots" to do general maintenance. This is besides the argument I already made that they would likely enslave some humans for menial work after bringing us down to manageable numbers.
#14926404
Victoribus Spolia wrote:Not so fast.

I agree with @Rancid on this (no offense). If an AI controlled computer systems through the internet, it could easily take over automated manufacturing to create its own "bots" to do general maintenance. This is besides the argument I already made that they would likely enslave some humans for menial work after bringing us down to manageable numbers.

I think these things are mutually exclusive. If they can only exist on the internet, that means they lack the dexterity and sensory detail required to create complex machinery. Consider that "automated manufacturing" would still require an entire supply line: prospecting and mining materials, building power plants and construction machines, operating the construction machines, building the finely detailed machinery that is used to make new things like computer chips, and probably some other details that I'm leaving out.

I've GTG for a while, so let me leave you with one more question. If we view socialism as a convoluted attempt to get stuff possessed by other people, let's consider what an AI would need to understand in order to truly understand socialism and be able to debate it. It would need to understand scarcity, community, labor, the propensity toward lying and selfishness, and probably also how evolutionary principles interact with these things. That might be a tall order for an AI that only understands scarcity in a virtual or even a theoretical sense, understands community as a chat room, only understands labor in a theoretical (or, again, maybe a simplified virtual) sense, and needs to somehow also understand the concept of lying. The only thing it might have an intuitive understanding of is the evolutionary principles, because we are assuming that it would program itself (since no one has any real idea of how to program it on their own, which I think says something in itself).

There's also a presumption here that human beings are merely more complex forms of the simplest organisms, which is an idea I ultimately reject due to my religious beliefs.
#14926407
Getting back to my original argument (double post so that no one misses it): the idea here is that an algorithm can write more algorithms and "evolve" to become comparable to a human being, even though it exists in a highly simplified virtual world. I just don't think that is how consciousness works; I think consciousness is fundamentally connected to our bodies, senses, desires, and machinations toward the material world.

I remember when Microsoft created a Twitter AI (Tay) and it started posting things like "Hitler did nothing wrong" because that was what would get a rise out of people. Yet there's no way this AI actually understood what "Hitler" is. It was able to write that because its "motivation" was to get a rise out of people, but it was not actually thinking and writing; it was employing a sub-algorithm that would write for it.
#14926408
@Hong Wu,

I think you've raised some important questions that I will need to consider before I could give a more informed response.

Hong Wu wrote:I've GTG for a while



I also have bigger and better things to discuss, so I will say goodbye now to @Rancid, in his native tongue.

#14926414
One more quick post. I think people may oversimplify the concept of lying. Some people view lying as: you want thing A, you say thing B, you get A. But that is not what lying is; my example there is only a cause-and-effect relationship. To really understand what lying is, you need to understand what truth is. You also probably need to understand what guilt and responsibility are and have a vague understanding of how lying affects a community. You need to understand what motivates someone to deliberately lie and be capable of telling subtle lies, white lies, unintentional lies, maybe even making Freudian slips. You don't just need consciousness; you need a subconscious. This is just one of many components of what we consider intelligence to be.

People are basically expecting algorithms in a highly simplified virtual environment, possibly without any kind of community or scarcity, to develop things like a subconscious. I don't think they appreciate the task they're facing.
