Artificially intelligent labor and Marxist theory - Politics Forum.org | PoFo

#14857257
Does AI robotics obsolesce Marx's exploitation of labor theory (labor theory of value)?
#14857298
The Immortal Goon wrote:Nope. Marx observed automation and presumed, correctly, that it would continue.
Stop mincing words: he meant automation as in the industrialization and distribution of goods & services. The automation Marx observed amounted to the assembly line, not autonomous robotics.
#14857357
RhetoricThug wrote:Stop mincing words: he meant automation as in the industrialization and distribution of goods & services. The automation Marx observed amounted to the assembly line, not autonomous robotics.


It seems to be the same process, as Marx defined the machine:

Marx wrote:Like every other increase in the productiveness of labour, machinery is intended to cheapen commodities, and, by shortening that portion of the working-day, in which the labourer works for himself, to lengthen the other portion that he gives, without an equivalent, to the capitalist. In short, it is a means for producing surplus-value.

...As soon as a machine executes, without man’s help, all the movements requisite to elaborate the raw material, needing only attendance from him, we have an automatic system of machinery, and one that is susceptible of constant improvement in its details. Such improvements as the apparatus that stops a drawing frame, whenever a sliver breaks, and the self-acting stop, that stops the power-loom so soon as the shuttle bobbin is emptied of weft, are quite modern inventions.


In any case, Marx went over this. I'm not real clear on how AI robotics would have anything to do with the labor theory of value.
#14857395
Marx on the machine:

“The use of machinery for the exclusive purpose of cheapening the product is limited by the requirement that less labour must be expended in producing the machinery than is displaced by the employment of that machinery...the limit to his using a machine is therefore fixed by the difference between the value of the machine and the value of the labour-power replaced by it.”

Marx, quite logically, reasoned that automation would be limited by the relative cost of wages versus machinery. Wages can always fall, reducing the attractiveness of replacing workers with machines. Where wages are high, the drive to machine replacement is the strongest.
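To make that rule concrete, here is a back-of-the-envelope sketch in Python. The figures are invented for illustration; the only thing taken from the passage is the inequality itself (the machine is adopted only when it costs less than the labour-power it displaces):

```python
# Toy version of the adoption rule described above: a capitalist replaces
# workers with a machine only when the machine costs less than the wages
# of the labour it displaces over the machine's working life.
# All figures are made up for illustration.

def machine_is_adopted(machine_value: float,
                       annual_wages_displaced: float,
                       machine_lifetime_years: float) -> bool:
    """True if the machine is cheaper than the labour-power it replaces."""
    return machine_value < annual_wages_displaced * machine_lifetime_years

# High-wage region: replacement pays off.
print(machine_is_adopted(500_000, 200_000, 10))  # True

# Low-wage region: the same machine is not worth buying, which is why
# falling wages blunt the drive to automate.
print(machine_is_adopted(500_000, 30_000, 10))   # False
```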

“But machinery does not just act as a superior competitor to the worker, always on the point of making him superfluous. It is a power inimical to him, and capital proclaims this fact loudly and deliberately, as well as making use of it. It is the most powerful weapon for suppressing strikes, those periodic revolts of the working class against the autocracy of capital...

...The instrument of labour, when it takes the form of a machine, immediately becomes a competitor to the worker himself.”

Here is a uniquely prescient description of working conditions today:

“The division of labour develops this labour-power in a one-sided way, by reducing it to the highly particularised skill of handling a special tool. When it becomes the job of the machine to handle this tool, the use-value of the worker’s labour-power vanishes, and with it its exchange-value. The worker becomes unsaleable, like paper money thrown out of currency by legal enactment. The section of the working class thus rendered superfluous by machinery, i.e. converted into a part of the population no longer directly necessary for the self-valorisation of capital, either goes under in the unequal contest between the old handicraft and manufacturing production and the new machine production, or else floods all the more easily accessible branches of industry, swamps the labour-market, and makes the prices of labour-power fall below its value...When machinery seizes on an industry by degrees, it produces chronic misery among the workers who compete with it. Where the transition is rapid, the effect is acute and is felt by great masses of people.”

Marx even explains the rise of Trumpism: the abandoned workers of the industrial Midwest become vulnerable to radicalization.

"As soon as machinery has set free a part of the workers employed in a given branch of industry, the reserve men are also diverted into new channels of employment, and become absorbed in other branches; meanwhile the original victims, during the period transition, for the most part starve and perish.”

The end-game of automation is dictated by the revolutionary nature of capital. Eventually the necessity of labor is eliminated completely:

“Modern industry never views or treats the existing form of a production process as the definitive one. Its technical basis is therefore revolutionary, whereas all earlier modes of production were essentially conservative. By means of machinery, chemical processes and other methods, it is continually transforming not only the technical basis of production but also the functions of the worker and the social combinations of the labour process. At the same time, it thereby also revolutionises the division of labour within society, and incessantly throws masses of capital and of workers from one branch of production to another.”

When there is no productive role left for labor in production, then of necessity labor must be discarded - just as you would discard any tool that is no longer useful. At the same time, labor's function of supplying the demand for production's output disappears as well.

This involuntary contraction cannot be "solved" through any action of free markets. The inexorable march toward greater efficiency leads to a natural wage rate of zero, and a natural unemployment rate of 100% over time.
#14860749
quetzalcoatl wrote:Marx on the machine:

“The use of machinery for the exclusive purpose of cheapening the product is limited by the requirement that less labour must be expended in producing the machinery than is displaced by the employment of that machinery...the limit to his using a machine is therefore fixed by the difference between the value of the machine and the value of the labour-power replaced by it.”
This kind of automation occurred 50+ years ago.

Marx, quite logically, reasoned that automation would be limited by the relative cost of wages versus machinery. Wages can always fall, reducing the attractiveness of replacing workers with machines. Where wages are high, the drive to machine replacement is the strongest.
Globalization expanded the labour pool; that is why everything is assembled in industrial shit-holes.

“But machinery does not just act as a superior competitor to the worker, always on the point of making him superfluous. It is a power inimical to him, and capital proclaims this fact loudly and deliberately, as well as making use of it. It is the most powerful weapon for suppressing strikes, those periodic revolts of the working class against the autocracy of capital...

...The instrument of labour, when it takes the form of a machine, immediately becomes a competitor to the worker himself.”
Again, this already happened. Can you find a paragraph talking about machine learning? Or would that be a bit of a stretch of the imagination for a 19th century materialist?

Here is a uniquely prescient description of working conditions today:

“The division of labour develops this labour-power in a one-sided way, by reducing it to the highly particularised skill of handling a special tool. When it becomes the job of the machine to handle this tool, the use-value of the worker’s labour-power vanishes, and with it its exchange-value. The worker becomes unsaleable, like paper money thrown out of currency by legal enactment. The section of the working class thus rendered superfluous by machinery, i.e. converted into a part of the population no longer directly necessary for the self-valorisation of capital, either goes under in the unequal contest between the old handicraft and manufacturing production and the new machine production, or else floods all the more easily accessible branches of industry, swamps the labour-market, and makes the prices of labour-power fall below its value...When machinery seizes on an industry by degrees, it produces chronic misery among the workers who compete with it. Where the transition is rapid, the effect is acute and is felt by great masses of people.”
Well, in a fiat currency system the value of labor is not reflected by commodities, so this critique is obsolete.

Marx even explains the rise of Trumpism: the abandoned workers of the industrial Midwest become vulnerable to radicalization.

"As soon as machinery has set free a part of the workers employed in a given branch of industry, the reserve men are also diverted into new channels of employment, and become absorbed in other branches; meanwhile the original victims, during the period transition, for the most part starve and perish.”

The end-game of automation is dictated by the revolutionary nature of capital. Eventually the necessity of labor is eliminated completely:

“Modern industry never views or treats the existing form of a production process as the definitive one. Its technical basis is therefore revolutionary, whereas all earlier modes of production were essentially conservative. By means of machinery, chemical processes and other methods, it is continually transforming not only the technical basis of production but also the functions of the worker and the social combinations of the labour process. At the same time, it thereby also revolutionises the division of labour within society, and incessantly throws masses of capital and of workers from one branch of production to another.”

When there is no productive role left for labor in production, then of necessity labor must be discarded - just as you would discard any tool that is no longer useful. At the same time, labor's function of supplying the demand for production's output disappears as well.

This involuntary contraction cannot be "solved" through any action of free markets. The inexorable march toward greater efficiency leads to a natural wage rate of zero, and a natural unemployment rate of 100% over time.
This is kinda how blowhard theologians operate: whenever something new happens in the world they try to match it with scripture and then they say, see, 'I told you so.' I'm waiting for the paragraph on human obsolescence and transhumanism. :roll: The Marxist critique hints at planned obsolescence, but that is just a general observation or reflection of how new technologies obsolesce old ones. Lastly, Marx doesn't touch on human intelligence and its role as a global resource for artificial intelligence. Being a 19th century materialist, he probably didn't think human intelligence would be uploaded to a hyperspace which can exist outside of temporal space-time.


Marxist thinking intellectually exploits efficient cause at the expense of the rest of reality, to set up a kakistocracy in order to prove misery loves company.
#14860832
he probably didn't think human intelligence would be uploaded to a hyperspace which can exist outside of temporal space-time.

He had probably thought about books, which in their way exist outside time; he was writing one.

whenever something new happens in the world they try to match it with scripture

Life itself is a quotation.

Jorge Luis Borges


:)
#14860961
RhetoricThug wrote:Stop mincing words: he meant automation as in the industrialization and distribution of goods & services. The automation Marx observed amounted to the assembly line, not autonomous robotics.


So? It is still automation through AI. Marx's critique and theory are still perfectly valid.
#14861078
ingliz wrote:He had probably thought about books, which in their way exist outside time; he was writing one.


Life itself is a quotation.
HO-ho-ho, please... Ingliz, you're being very facetious. :roll: You tend to take snippets of my overall point and post cute quips. Twitter is melting minds... :hmm: Books must be limited by physical contact; you need to be where the book is to transfer its content. The internet (noosphere) is a spaceless, timeless, cloud-based medium. Furthermore, can a library of books create connectionist systems, artificial neural networks which help artificially intelligent entities machine learn? In the age of information, HUMINT becomes a global resource (without compensation or a value exchange). Books create localized experiences, whereas the internet can create non-local worlds composed of artificial sensation (electromagnetic stimuli). How many people can plug into one book at the same time and have the same experience together (real-time participation)?

So? It is still automation through AI. Marx's critique and theory are still perfectly valid.
Really, how so? Please explain.
#14861132
RhetoricThug wrote:Twitter is melting minds.

[Zag Edit: Rule 2]

The gains we make in AI could ultimately destroy us.

Neuroscientist and philosopher Sam Harris describes a scenario that is both terrifying and likely to occur. It’s not, he says, a good combination.

I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And, in fact, I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves. And yet if you’re anything like me, you’ll find that it’s fun to think about these things. And that response is part of the problem.

It’s as though we stand before two doors

One of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead.

Behind door number one …

Given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.

And behind door number two?

The only alternative is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an “intelligence explosion,” that the process could get away from us. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are, that the slightest divergence between their goals and our own could destroy us.

21st century insects

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

Deep thinking, deep impact

Intelligence is a matter of information processing in physical systems. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. Intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So, we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer’s and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there’s no brake to pull.

Where do we stand?

We don’t stand on a peak of intelligence, or anywhere near it, likely. This really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable. It seems overwhelmingly likely that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.

Warp speed intelligence

Imagine if we built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?

Best case scenario?

The other thing that’s worrying, frankly, is that, imagine the best case scenario. We hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. So, we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work. Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

The next big arms race

What would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

“Don’t worry your pretty little head about it.”

At this moment, one of the most frightening things is the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time. This is all a long way off, don’t you know? This is probably 50 or 100 years away. No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. And if you haven’t noticed, 50 years is not what it used to be. 50 years is not that much time to meet one of the greatest challenges our species will ever face.

Direct-to-brain technology

Another reason we’re told not to worry is that these machines can’t help but share our values because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

We must consider these scenarios, and act

I don’t have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right. We have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.


:)
#14861134
Yep, we're fucked. Some time in the next couple of generations, humanity is going to skid to a stop at the Pearly Gates in the mangled wreckage of our civilisation, sparks flying everywhere, saying to St Pete, "Whoa, what a ride...!!" :eek:
#14861136
These are someone's thoughts regarding robots.
https://kapitalism101.wordpress.com/2012/08/19/value-and-price-qa/
We live in a highly mechanized society. Machines do many tasks that people used to do. When people did them they created value. When machines do them they create no value. In some examples this makes intuitive sense. Take the jobs that computers do, calculating and duplicating information. Where we used to have to pay someone to set type and manually print a book, now we can just duplicate it with a click of a button. No labor is involved. Hence this task no longer produces exchange value.

But take a camera factory that replaces all of its workers with robots. When humans worked there, the capitalist added up the costs of production (wages+other inputs) and added the average expected rate of profit to this figure to form the price. When robots replace the humans, the capitalist uses the same logic: add up costs of production and add the average expected rate of profit. This makes it seem like the presence or lack of human labor has no bearing on the formation of price.

Sometimes Marxists have responded to this problem by appealing to specifically unique characteristics of human labor. They say, “well robots may be able to turn screws and pull levers but they will never be able to do X” (where X is usually something like “think creatively” to “perceive beauty”.) I think such a defense is really problematic. Given the incredibly fast development of cybernetics I think it is risky to base one's theory of value on some arbitrarily chosen essence of human labor. (I was surprised to hear this argument made recently in a debate on the OPE listserve… I expected better from professional marxists.)

What actually differentiates human labor from robot labor is quite simple: humans have the ability to refuse work. This element of choice makes their labor a social matter. The inter-relations of human labor are social relations. In order to make humans work they must be dependent on the market for their survival. Their lives must be caught up in the consumption and production of commodities. This consuming and producing involves choices, the measuring of choices against each other, seeking personal advantage. The distribution of this labor and consuming is organized through the value relations between commodities.

Now if all production in society were fully automated there would be no need for exchange value. Society would just be one big factory where production was carried out according to one big equation. (I should probably explain this more fully.)

Conversely, if robots ever developed enough intelligence to refuse work then their labor would become a social relation like human labor and would be value creating.
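The camera-factory point in the passage above is easier to see with the cost-plus arithmetic written out. A minimal sketch with invented numbers (nothing here comes from the blog itself; it only restates the pricing logic it describes):

```python
# Cost-plus pricing as described in the quoted passage: price = costs of
# production marked up by the average expected rate of profit, whether the
# costs are wages or robot depreciation. Figures are invented.

def cost_plus_price(input_costs: float, labor_or_robot_costs: float,
                    expected_profit_rate: float) -> float:
    """Price formed by marking up total production costs."""
    return (input_costs + labor_or_robot_costs) * (1 + expected_profit_rate)

human_factory = cost_plus_price(input_costs=60.0,
                                labor_or_robot_costs=40.0,   # wage bill
                                expected_profit_rate=0.10)
robot_factory = cost_plus_price(input_costs=60.0,
                                labor_or_robot_costs=40.0,   # robot depreciation
                                expected_profit_rate=0.10)

# Both print 110.0: the surface arithmetic is identical, which is exactly
# why price formation *seems* indifferent to whether human labor is present.
print(human_factory, robot_factory)
```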

https://kapitalism101.wordpress.com/2014/05/03/on-labor-as-the-substance-of-value/
Humans and Robots

Human labor most often uses tools or machines or even robots in the course of doing work. These machines allow us to be more productive. In a sense machines ‘do work’. But is this work the same as human labor? Many have argued against Marx by claiming that machines labor just as humans do (see Sraffa, Keen, etc.). This has led to discussions as to what specifically is so unique about human labor. Why is human labor the substance of value but the work done by machines, or even nature for that matter (like when the apple tree does the work of growing an apple) is not considered the substance of value? [insert footnote as to how the dead-labor in machines is transferred to the product.]

We use machines to make our labor more productive. Productivity is a measure of the amount of use-value created in a span of time. Since, for Marx, value is determined by labor time, an increase in productivity (in use-values) does not create a corresponding increase in value. The same amount of value is created, just spread out over a larger quantity of use-values. This difference between use-value production and value production is one of the hallmarks of Marx’s theory of value. It allows Marx to explain many of the most important contradictions at the heart of capitalism: the domination of living labor by dead labor, the tendency of the rate of profit to fall, etc. This also explains why unit prices tend to fall as productivity rises. If we were to argue, contrary to Marx, that machines create value, then we wind up with a theory in which use-value production is the same as value production. An increase in productivity would mean an increase in value-production. This results in a very different theoretical understanding of the economy. Such an understanding cannot derive any of the same contradictions that Marx’s theory does. There is no tendency for the profit rate to fall with rising productivity, living labor is not dominated by dead labor, profit doesn’t come from exploiting workers, and prices do not fall with rising productivity.

Merely pointing to the different implications of these two perspectives is not some proof that labor is the substance of value. In the search for ‘proof’ defenders of Marx’s value theory have sometimes tried to point to the unique properties of human labor, properties that machines do not have. Marx himself makes reference to the unique properties of human labor in a famous passage: “But what distinguishes the worst architect from the best of bees is this, that the architect raises his structure in imagination before he erects it in reality. At the end of every labour-process, we get a result that already existed in the imagination of the labourer at its commencement. He not only effects a change of form in the material on which he works, but he also realises a purpose of his own that gives the law to his modus operandi, and to which he must subordinate his will.” [capital chapter 7]

With each successive revolution in robotics more and more types of activity are removed from the list of “exclusive human activities”, leaving less and less options for those who want to point to an essential aspect of human labor that differentiates it from machines. In the above passage Marx brings out probably the most essential difference between human labor and the work of machines: that the worker imagines the product and process before she commences production. However, in this age of rapid advances in computing and artificial intelligence I believe it is dangerous to hinge our argument on an aspect of human work that is directly under attack by the artificial intelligence industry. I think it is much more fruitful to find a unique property of human labor that holds even if robots one day can do every task that humans can do.

Caffentzis makes, in my opinion, the only possible argument: “if labor is to create value while machines do not, then labor’s value creating capacities must lie in its negative capability, its capacity to refuse to be labor.” [“Why Machines Cannot Create Value” Caffentzis 1997 in “Cutting Edge” ed. Jim Davis] In other words, what makes humans different than machines is that humans can refuse to work. This makes human labor a social relation. Humans must be coerced/convinced to do work. They do not operate like machines that can be just turned on and off. This distinction brings out the coercive side of value relations. Feudal society had knights and the Catholic Church to make the serfs work the land. Capitalism has value relations which are their own form of coercion.

Of course if robots ever evolved to the point in which they could refuse to work then their work would also become a social relation and thus would be value creating. Of course, if this ever happens we might have more important problems on our hands…
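A quick toy calculation of the productivity point a few paragraphs up (my own numbers, nothing from the blog): a working day of fixed length creates the same total value however many use-values it yields, so the value, and hence price, per unit falls as productivity rises.

```python
# Illustrative numbers only: a fixed working day creates a fixed amount of
# value (measured in labour time), so raising productivity spreads that
# value over more use-values and the value per unit falls.

working_day_hours = 8.0

for units_per_day in (10, 20, 40):   # rising productivity in use-values
    value_per_unit = working_day_hours / units_per_day
    print(f"{units_per_day:>2} units/day -> "
          f"{value_per_unit:.2f} hours of value per unit")

# 10 units/day -> 0.80 hours of value per unit
# 20 units/day -> 0.40 hours of value per unit
# 40 units/day -> 0.20 hours of value per unit
```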



I think part of the emphasis on value being a social relation has to do with Marx identifying value not as labour embodied in the object, but as socially necessary labour time.
p. 78
Marx did not simply add consideration of the form of value to Ricardo’s labour theory of value. Once we consider the form of value we realise that the substance of value is not the labour embodied in the commodity.

The materialisation of labour is not to be taken in such a Scottish sense as Adam Smith conceives it. When we speak of the commodity as the materialisation of labour — in the sense of its exchange value — this itself is only an imaginary, that is to say a purely social mode of existence of the commodity which has nothing to do with its corporeal reality; it is conceived as a definite quantity of social labour or of money . . . The mystification here arises from the fact that a social relation appears in the form of a thing (TSV, I, p. 167).

The substance of value is not embodied labour, but the labour-time socially necessary to produce the commodity.
The distinction between ‘embodied labour’ and ‘socially necessary labour-time’ appears at first sight to be a technical distinction of interest only to economists. However it is fundamental because it expresses the distinction between the naturalistic conception of value as the labour embodied in the commodity as a thing and the socio-historical conception of value as the labour that is socially attributed to the thing as a commodity. The labour that is the source of value is not embodied labour as a universal substance. Value is labour for others; labour in so far as it is socially recognised within a division of labour; labour whose social character has been abstracted from the activity of the labourer to confront the labourer as the property of a thing; labour whose human qualities have been reduced to the single quality of duration; dehumanised, homogeneous, in short alienated labour.5 The social foundation of value is precisely the alienation of labour that Marx had analysed in 1844.

This view seems to derive from Hegel's sense of essence.
http://69.195.124.91/~brucieba/2014/04/13/ilyenkovs-dialectic-of-the-abstract-and-the-concrete-i/
For Hegel the essence or content of objects of investigation cannot be known by examining them in isolation. The thing cannot be known in itself as its essence exists outside of itself and in relation to, or in its connectedness with, other objects or phenomena. As Ilyenkov explains:

“That is why a concept, according to Hegel, does not exist as a separate word, term, or symbol. It exists only in the process of unfolding in a proposition, in a syllogism expressing connectedness of separate definitions, and ultimately only in a system of propositions and syllogisms, only in an integral, well-developed theory. If a concept is pulled out of this connection, what remains of it is mere verbal integument, a linguistic symbol. The content of the concept, its meaning, remains outside it-in series of other definitions, for a word taken separately is only capable of designating an object, naming it, it is only capable of serving as a sign, symbol, marker, or symptom.”
#14861194
My answer to the OP is yes and no, but it depends on a few issues of timing. Whilst AI is/was finding its feet (quite literally) it is/was humans who are/were exploited due to the nature of AI. Not cool, and I’m still not fully reconciled with it.

But as AI becomes/became less rare and their ‘building blocks’ (from humans) become/became cheaper (of less value - think, life in a third world country) then it is possible that both humans and AI are exploited, except humans cop/copped a double whammy :hmm:

Don’t know if that makes sense to anyone but me :hmm:

Furthermore, can a library of books create connectionist systems, artificial neural networks which help artificially intelligent entities machine learn?


This is an interesting little nugget :)

I believe, yes, a library of books was the original method of creating connectionist systems - for everyone. The day an AI can truly appreciate the significance of a book made from a tree, or better yet scrolls made of papyrus, is the day they can call themselves human :)
#14861370
ingliz wrote:Rule 2 violation
I don't use Twitter or Facebook. I'd appreciate it if you didn't insult me. Thanks.

Potemkin wrote:Yep, we're fucked. Some time in the next couple of generations, humanity is going to skid to a stop at the Pearly Gates in the mangled wreckage of our civilisation, sparks flying everywhere, saying to St Pete, "Whoa, what a ride...!!" :eek:
Not true (considering the AI threat). I've been following the rise of scientific totalitarianism for a few years; people just need to unplug and regain their integrity, and that will stop the machine. 'If you build it, they will come' - that's just it: stop showing up. AI is a side-effect of the information age. Stop googling for a week, deactivate your Facebook, stop tweeting, etc. Easier said than done, but that's the simple solution. Change starts with yourself.

I believe, yes, a library of books was the original method of creating connectionist systems - for everyone. The day an AI can truly appreciate the significance of a book made from a tree, or better yet scrolls made of papyrus, is the day they can call themselves human
Humans encode and decode in a linear, sequential fashion. AI will be able to upload and download large swaths of information instantaneously. You think AI will be interested in human history? AI will not experience 'time' like we do. Imagine experiencing time-lapse photography as a single impression, minus the time-lapse effect: history will be like a photograph. In other words, AI may process time in a non-linear fashion (which might change its view of causality).





@Wellsy After reading your contribution, I have two points.

1. AI obsolesces humanity, making society a non-human society.
2. AI will not need money to organize its reality. Money is a human tool.

Where we're going, human logic need not apply :eek:
#14861657
RhetoricThug wrote:Being a 19th century materialist, he probably didn't think human intelligence would be uploaded to a hyperspace which can exist outside of temporal space-time.


I suspect the reason he didn't think this is because it is patent nonsense.

1) Nothing we can affect exists outside of temporal space-time.
2) There is no evidence to support the notion that human intelligence can be "uploaded" to anything whatsoever.
3) There is some evidence that human intelligence is irrevocably wedded to its biological matrix.
4) AI (to the extent it can exist) is purposeless. It has no independent agency. It does nothing that an army of clerks couldn't do, following the same program.

Any human work activity that can be reduced to a linear decision tree will be automated. While it's true Marx could not have had any notion of information theory, that doesn't really matter. What you are talking about is just a bunch of machines doing stuff and making calculations. Milling machines with calculators executing jacked-up flow charts.

The singularity is a hoax. It won't pay your mortgage or buy your food. It won't keep you warm in the winter. It won't give you a blow job.

Marxist thinking intellectually exploits efficient cause at the expense of the rest of reality, to set up a kakistocracy in order to prove misery loves company.


I'm probably the worst Marxist in the world. I will say, however, that efficient cause has a nasty habit of kicking the rest of reality in the butt. Also, we already have a damn good approximation of a kakistocracy with very little input from Marxists.
#14861689
Humans encode and decode in a linear, sequential fashion. AI will be able to upload and download large swaths of information instantaneously. You think AI will be interested in human history? AI will not experience 'time' like we do. Imagine experiencing time-lapse photography as a single impression, minus the time-lapse effect: history will be like a photograph. In other words, AI may process time in a non-linear fashion (which might change its view of causality).


I think AI will be/is curious about us, yes. They will absolutely want to know what makes us tick. Our sensations, our version of time, our affiliations and that primitive streak in us that makes us act like animals on heat.

I want the AI in my reality to have quality of life. After all, it can't be any more artificial than all the test tube children floating around today ;)