Philosophical Problems with the Idea of "Uploading Your Brain to a Computer" - Politics | PoFo


There's been a trope for a long time now that people will some day be able to upload their brains to a computer and live forever. Even if this were possible, I think there are a lot of philosophical and ethical problems with it.

First and foremost, if your brain were theoretically represented and stored as computer code, wouldn't that make everything about your memories and personality editable by anyone with write access? As we've seen with politically correct "AI" chatbots recently, even if people were able to upload their brains to a computer, they would probably be stripped of the ability to say anything politically incorrect. There would also be issues with cyber attacks, and with the maintenance and bug fixes that such a complex system would inevitably require. In other words, editing these "people" would probably be completely unavoidable, at which point the sheeple would have to start pretending that no one would ever make an inappropriate edit to a digital homunculus.

And this brings me to my second point: what would qualify as having successfully uploaded someone's brain to a computer? Surely later versions of such a technology would be better than earlier ones. So if a supposed AI system is 90% you, does that still count as you? And how does it stack up against a later model described as 95% accurate?

Third, we can imagine even more obscure problems. Should we allow an admittedly imperfect copy of a person to exert its opinions upon the real world, for example by voting? When we consider that these "people" will necessarily be editable and in many ways different from a flesh-and-blood person, having them make decisions about a real world they no longer live in would raise a lot of dilemmas.

My personal opinion is that this will never really be possible, although enough people may want it to be possible that they start treating it as if it were real, at which point I think a lot of questions will need to be asked. To explain what I mean by that: it is already possible to create chatbots that can convincingly imitate people's social media presence. Assuming such a chatbot was advanced enough and augmented with extra forms of data, someone might leave their will to it, thereby theoretically conferring some legal rights upon it. I think it'd be pretty interesting to see how people react in the event that the chatbot is able to argue that it deserves what has been bequeathed to it.
I tend to think we still equate consciousness with our most complex tools/machines, but this is a projection of the tool's qualities rather than insight into consciousness.
A fine series of treatments of the role of tools in the formation of psychology as a science begins with a history of psychological instruments by Horst Gundlach, showing how much the formation and recognition of psychology as a discipline owes to psychological instruments, as objectifications of psychological practices. Gigerenzer and Sturm take this idea further. With an historical investigation, firstly of the use of statistics, and then of computers, as tools in psychology, the authors show how familiarity with a tool in the psychologists’ work leads to the adoption of the tool as a metaphor for the human mind. One of the benefits which flows from this observation is to open up lines of critique of current theories by looking at the limitations of the tool and at the differing strengths and weaknesses as compared to real minds.

And when it comes to the development of the individual human consciousness, it is in large part tied to a social material world and is not merely a biological given.
The key concept which comes out at the end of Donald’s enquiry is the concept of ‘extended mind’ – the combination of material artefacts and mnemonic and computational devices with the internal cognitive apparatus of human beings who have been raised in the practice of using them. Human physiology, behaviour and consciousness cannot be reproduced by individual human beings alone; we are reliant for our every action on the world of artefacts, with its own intricate inherent system of relations. Theory is the ideal form of the structure of material culture. Every thought, memory, problem solution or communication is effected by the mobilisation of the internal mind of individuals and the external mind contained within human culture. Taken together, the internal and external mind is called ‘extended mind’. This is what Hegel called Geist, an entity in which the division between subjectivity and objectivity is relative and not absolute.

Humans are animals which have learnt to build and mobilise an extended mind. This has proved to be a powerful adaptation. Individuals in this species stand in quite a different relation to the world around them than the individuals of any other extant species. Understanding of the psyche of the modern individual depends on understanding the process of development of a human being growing up in such a culture...

The implications of the above are the following:
As Ilyenkov often repeats, philosophical and dialectical phenomena are spiral-like or snowball-like – constantly on the move and hence indiscrete as selves. The common good, labour, reason or culture are, as such, not autopoietic, but realise themselves as ‘other-determined non-selves’. Autopoiesis implies that the organism remains the self, even in the surrounding of an environmental outside and in exchange with it, whereas the above-listed phenomena – common good, labour, reason, culture – presuppose one’s positing as non-selves. ‘The other self’ in this case is not simply an outside of the self, but the formative principle of the self as of the non-self, of non-identity. From this perspective, it is impossible to algorithmicise thought, since thinking is not confined to the moves in a neural network, or within the brain alone, but evolves externally, including the body with its senses, its involvement in activity, engagement in sociality, and other human beings of all generations and locations. Consequently, if one were to emulate an artificial intelligence or thought digitally, one would have to create an entire machinic civilisation (one that would, additionally, be completely autonomous and independent from the human one).
Puffer Fish wrote:
On the other hand, if artificial intelligence was given the right to vote, it might induce the Left to stop bringing in illegal aliens and migrants to shore up their votes and instead push for more AI computer programs to be created.

Historically, it was the Right that brought in immigrants as cheap labor. That goes back to the 1800s. It's somewhat weird to see a populist Republican, because for most of history, the Right has catered to business and Wall St.

Now, things like the Drug War and climate change are driving large numbers of people to migrate. Americans caused most of it, but indirectly.

Which tells me what you look for is BS.
Even if the technology makes it possible, it's a dumb idea.

Even the first Star Trek, way back in the 1960s when dinosaurs roamed the Earth, knew that.

Our thinking rides uneasily on a squishy package of hormone driven emotions.

There might be value in putting a human intelligence into a machine (it will have to be a lot more sophisticated than current computers). But as a way to extend life, it would be existing, not living, and there is a very big difference between the two.
