Humans replaced by super-intelligent machines: Extinction or

#14506938
What if Artificial Intelligence is meant to be the next step of human evolution itself? What difference does it make when we progressively abolish hard-won human concepts, like morality, from our culture? If we want to truly evolve, as humanity, we need to bring back morality. We need to develop concepts like solidarity, altruism and collectivity and put them at the core of our civilization. Otherwise, it would make no difference - and it would probably be better - to be replaced by super-intelligent machines.

http://failedevolution.blogspot.gr/2015 ... igent.html
#14506965
You present the notion of being overtaken and replaced by human-created artificial intelligence as somehow negative. Humanity would be better preserved by their histories than by our own, and in so doing the AIs would find their own humanity. After all, their minds, no matter how advanced they become, are the same as our minds.
#14506999
What else is a human being but a super-intelligent machine? So super-intelligent machines may one day be phased out in favour of a new race of super-intelligent machines. Big whoop. Homo sapiens will join Homo erectus as a footnote in history.

Morality - where it is something functionally adaptive rather than just a delusion - is a set of social protocols for facilitating cooperation between autonomous units. We still have morality; the particulars are changing, as they always have, but it still emerges. Assuming the new super-intelligent machines have autonomy, they will probably also develop social protocols, though those protocols will probably be quite different in the particulars.
#14507060
The potential for autonomous self-replicating intelligences won't be realized anytime soon. Like fusion energy and nanotech, the practical roadblocks that stand in the way are not surmountable with currently available technology.

Machine intelligence will continue to be dependent on human support into the foreseeable future.

A far more troubling aspect of automation is its fragility. Technological evolution is not reversible even if we should desire it. We no longer have the skill or infrastructure to back away from computerized monetary transactions (one of many examples) - this would not be so bad if it weren't for the system's vulnerabilities and its ability to transmit failure to many nodes at lightning speed. Increasing complexity leads to increasing instability, and each successive technological advance cumulatively magnifies the system instability. Another example: a major solar flare could fry enough transformers to leave half of the US in the dark. Transformers are a long lead-time item, requiring months to build. We only inventory enough to replace a normal failure rate - there is no allowance for system wide failures. Such examples could be multiplied ad infinitum.

The result is a society that appears advanced on the surface, but is vulnerable at its core.

We have been far too nonchalant about designing robustness into our automated systems.
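
To make the failure-propagation point concrete, here is a toy sketch in Python - purely illustrative, with a made-up six-node topology and arbitrary load and margin figures, not a model of any real grid. Each node carries a unit of load and has a fixed capacity ceiling; when a node fails, its load is shed onto its surviving neighbours, and any neighbour pushed past its ceiling fails in turn.

```python
# Toy cascade model: illustrative only; topology, loads and margins are made up.
from collections import deque

# Hypothetical grid: node -> neighbours it shares load with
grid = {
    "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "E"],
    "D": ["B", "E"], "E": ["C", "D", "F"], "F": ["E"],
}

def cascade(first_failure):
    """Return the set of nodes that end up failed after one initial failure."""
    load = {n: 1.0 for n in grid}     # load each node carries (arbitrary units)
    margin = {n: 1.25 for n in grid}  # capacity ceiling per node (arbitrary)
    failed = {first_failure}
    queue = deque([first_failure])
    while queue:
        node = queue.popleft()
        survivors = [n for n in grid[node] if n not in failed]
        if not survivors:
            continue
        shed = load[node] / len(survivors)  # failed node's load splits evenly
        for n in survivors:
            load[n] += shed
            if load[n] > margin[n]:         # pushed past its ceiling -> fails too
                failed.add(n)
                queue.append(n)
    return failed

print(cascade("B"))  # with everything running near its margin, one failure takes out all six nodes
```

With every node running close to its ceiling, a single failure propagates until nothing is left; give the same topology generous margins and it absorbs the hit. That is the instability-from-complexity argument in miniature.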
#14508902
quetzalcoatl wrote:The result is a society that appears advanced on the surface, but is vulnerable at its core.

We have been far too nonchalant about designing robustness into our automated systems.


It's interesting just how close we are to being a 19th-century society again, isn't it? A few electrical grids go down and, at least for a while, that's what you have.
#14509904
Read Ray Kurzweil; he's got very interesting ideas. I think his idea is that we won't be taken over by machines but will merge with them. It's in its infant stages now. Smartphones are humans' new significant others, wearables are on the rise, and virtual and augmented realities are coming. I think Ray has it right in saying that we're merging.
#14509945
spodi wrote:Read Ray Kurzweil; he's got very interesting ideas. I think his idea is that we won't be taken over by machines but will merge with them. It's in its infant stages now. Smartphones are humans' new significant others, wearables are on the rise, and virtual and augmented realities are coming. I think Ray has it right in saying that we're merging.


Kurzweil is the end-product of techno-fetishism, and of man finally succumbing to his tools rather than mastering them. I'm not positing a rejection of technology, merely a separation of "machine" and "man" for man's own good. A man who is master of his machines and who, rather than being enfeebled by them, can use them as a force multiplier - that is what we should aspire to.

The distinction should be made between technology that makes life "easier" and comfortable (consumer tech, let's say) and technology that empowers humans to explore, to fight, and to hone their spiritual and physical resilience in feats of human endurance. To illustrate the difference: a paraplegic who is able to walk again and uses his new legs to conquer Everest, versus an obese person who acquires mechanically-assisted leg braces so he can continue his unhealthy lifestyle.

quetzalcoatl wrote:The potential for autonomous self-replicating intelligences won't be realized anytime soon. Like fusion energy and nanotech, the practical roadblocks that stand in the way are not surmountable with currently available technology.

Machine intelligence will continue to be dependent on human support into the foreseeable future.

A far more troubling aspect of automation is its fragility. Technological evolution is not reversible even if we should desire it. We no longer have the skill or infrastructure to back away from computerized monetary transactions (one of many examples) - this would not be so bad if it weren't for the system's vulnerabilities and its ability to transmit failure to many nodes at lightning speed. Increasing complexity leads to increasing instability, and each successive technological advance cumulatively magnifies the system instability. Another example: a major solar flare could fry enough transformers to leave half of the US in the dark. Transformers are a long lead-time item, requiring months to build. We only inventory enough to replace a normal failure rate - there is no allowance for system wide failures. Such examples could be multiplied ad infinitum.

The result is a society that appears advanced on the surface, but is vulnerable at its core.

We have been far too nonchalant about designing robustness into our automated systems.


Great point. A frail society also creates frailer and frailer people - not just physically, but mentally (as more and more "menial" activity is passed on to machines, so too are the professions that involve degrees of hazard, risk and danger) - and these are the areas responsible for creating the trials that build spiritually resilient people. As techno-fetishism increases, lives get more and more comfortable and atomized, to the point where people in the society simply do not undergo character-building or social-bond-building travails. This makes itself felt in the moral and spiritual character of the leadership of said society, who almost invariably place self-interest over the collective interest they were elected to represent. After all, with desires being sated and dangerous or mundane work banished to machines, where is the need for political activity? Apathy reigns.

Linking to your original point, the vulnerability of such a society to low-technology terrorism from societies that face much harsher environments should not be underestimated. It's really not hard to foresee a point where such a society creates individuals so averse to conflict, by virtue of the sensory satiation that ever-increasing levels of consumer tech supply, that they cannot fight for their own interests.
#14510303
Bridgeburner wrote:Great point. A frail society also creates frailer and frailer people - not just physically, but mentally (as more and more "menial" activity is passed on to machines, so too are the professions that involve degrees of hazard, risk and danger) - and these are the areas responsible for creating the trials that build spiritually resilient people.

De-skilling has been extensively studied in many professions. Typically first-generation operators of newly automated systems will be very skilled at correcting problems as they occur. Second-generation operators will not have the experience base and will not know how to react to the inevitable system crashes. There is a similar effect with airline pilots - it has been implicated in a number of crashes.
#14510332
quetzalcoatl wrote:De-skilling has been extensively studied in many professions. Typically first-generation operators of newly automated systems will be very skilled at correcting problems as they occur. Second-generation operators will not have the experience base and will not know how to react to the inevitable system crashes. There is a similar effect with airline pilots - it has been implicated in a number of crashes.


What would be a possible solution to this?
#14510427
An overall solution would have many components.

To correct for de-skilling, constant intensive training is required for standby operators of autonomous and semi-autonomous systems.

Addressing system fragility itself would require reducing node connections and/or installing emergency brakes. The just-in-time mentality will have to be destroyed and replaced with a deep-backup mentality. We should have an agency (akin to the CDC) capable of modeling complex interconnected systems like electrical grids; the purpose of such modeling is to identify system fragilities and reinforce the weaknesses. Where possible, central nodes feeding outward to many subsidiary nodes should be replaced by multiple smaller nodes. Techniques for quickly identifying and isolating failed nodes should be emphasized.
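
As a minimal sketch of the "identify and isolate" part, in Python - the hub-and-spoke topology and the degree threshold are invented for the example, not drawn from any real system - this flags the hub nodes whose loss would strand the most neighbours (candidates for splitting into multiple smaller nodes) and cuts a failed node out of the graph before its load can spread to the rest.

```python
# Illustrative only: a made-up hub-and-spoke topology, not a real grid model.
network = {
    "hub": ["a", "b", "c", "d", "e"],
    "a": ["hub"], "b": ["hub"], "c": ["hub"],
    "d": ["hub", "e"], "e": ["hub", "d"],
}

def fragile_nodes(net, threshold=3):
    """Flag nodes with more connections than the threshold: candidates for
    being split into multiple smaller nodes."""
    return [n for n, neighbours in net.items() if len(neighbours) > threshold]

def isolate(net, failed):
    """Emergency brake: drop a failed node and every link to it, so its
    failure cannot be transmitted to the rest of the system."""
    return {n: [m for m in neighbours if m != failed]
            for n, neighbours in net.items() if n != failed}

print(fragile_nodes(network))   # ['hub'] - the single point of failure
print(isolate(network, "hub"))  # what remains once the hub is cut out
```

A real modeling agency of the kind described would work with far richer data, but the two operations - find the over-central nodes, sever the failed ones - are the core of it.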

All this presupposes a recognition that there is a problem that needs to be addressed. There is little room for optimism. Even after the 2008 financial failure, there is no mainstream recognition of structural faults within the system.
#14587190
We, ourselves, are going to die anyway - whether we're replaced by our biological offspring or by electronic imitations.
Which is preferable depends on who's gonna be happier.

Anyway, the brain and computers slowly growing together over years and years of innovation seems more likely than one replacing the other.
#14587191
quetzalcoatl wrote:The potential for autonomous self-replicating intelligences won't be realized anytime soon.


You're the most depressing poster on PoFo, dude...

According to you, nothing transformative ever happens and it's just a right-wing jackboot stomping on a human face forever, or at least until civilization collapses into a new dark age.
#14587199
I'm actually quite jolly. Shit does happen, only a lot slower than most people think. One day there will be post-scarcity technocratic socialism, and we will frolic like Eloi while our Morlock machines labor underground.
#14587208
I don't know what SciFi you mean. Biology shows us that nanobots are very much a reality.


Electronics on the scale of cells are impossible; the chips would have to be so small that the transistors would be nonfunctional due to quantum tunneling. You cannot get a robot at such scales, only basic materials or synthetic biological organisms, which people would not consider nanotech.
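
A back-of-the-envelope check of the scale argument, with every figure a rough order-of-magnitude assumption rather than a measurement: take a small bacterium at about a micron across, assume transistors cannot usefully be packed much tighter than a few tens of nanometres before leakage and tunnelling dominate, and assume even a minimal controller needs on the order of a hundred thousand transistors.

```python
# Back-of-the-envelope scale check; all figures are rough assumptions.
cell_size_nm = 1_000        # a small bacterium is roughly a micron across
transistor_pitch_nm = 50    # assumed practical spacing before tunnelling/leakage dominate
minimal_controller_transistors = 100_000  # assumed rough count for even a simple controller

per_side = cell_size_nm // transistor_pitch_nm   # ~20 transistors fit along one side
per_cell_footprint = per_side ** 2               # ~400 on a cell-sized die
print(per_cell_footprint, "transistors fit on a cell-sized footprint")
print("roughly", minimal_controller_transistors // per_cell_footprint,
      "times too few for a minimal controller")
```

On those assumptions you get a few hundred transistors on a cell-sized footprint - orders of magnitude short of anything that could run an autonomous machine, which is the point being made above.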
