froggo wrote:I confess, I may have assumed a will-towards-an-ideal when I heard various Marxists speaking.
I guess Marx's greatest contribution was the way to look at society, and the notion that communism is the end-goal of Marxist thought is false; though communism may historically align itself as an accurate conclusion.
Well if you're interested, there is a tension between those who try to present an ideal of what a socialist or communist society would look like based on present world conditions, and a more conservative force that simply focuses on how we move from our present means closer to such an ideal.
https://www.marxists.org/archive/connolly/1904/condel/index.htm
Here is the controversy between two Marxist heavyweight theoreticians, in which DeLeon and, I think, Bebel made claims about what a socialist society would look like. Connolly emphasizes that socialism only aims to address the economic problem; how society struggles to deal with other problems will still be worked out within the new economic condition, not strictly determined by the economic base. The economics is the precondition for certain possibilities, but not for a particular outcome.
I might also mention this piece on the role of an ideal, in the vein of the liberal Frankfurt school of communicative theorists.
https://www.marxists.org/archive/connolly/1904/condel/index.htm
Regulative ideals are the “utopian” images of society which allow us to make sense of ethical propositions. A regulative ideal is the abstract generalisation of an ethic, how the world would be if an ethical principle were to be generalised. Laissez faire, the idyllic village community, the socialist utopia, are examples which demonstrate that regulative ideals need to be used with care. Nevertheless, I believe they have an important place in ethical political struggle.
Karl-Otto Apel put it well:
“... ethics seems to be fundamentally distinguished from utopia in the following manner: ethics, like utopia, commences from an ideal that is distinguished from existing reality; but it does not anticipate the ideal through the conception of an empirically possible alternative or counter-world; rather it views the ideal merely as a regulative idea, whose approximation to the conditions of reality - e.g. discourse consensus formation under the conditions of strategic self-assertion - can indeed be striven for but never completely assumed to be ...”
“... the most basic connection between ethics and utopia - and that also means, between reason and utopia ... is evidently one that is embedded in the “condition humaine” as unavoidable. Human beings, as linguistic beings who must share meaning and truth with fellow beings in order to be able to think in a valid form, must at all times anticipate counterfactually an ideal form of communication and hence of social interaction. This “assumption” is constitutive for the institution of argumentative discourse;” [Karl-Otto Apel: Is Ethics of the Ideal Communication Community a Utopia? ... in The Communicative Ethics Controversy, ed. Benhabib and ...]
In other words, the regulative ideals by means of which a person organises their norms and values ought not to be taken as a future state of the world at which history must one day arrive. One can be a Christian without believing in the Second Coming, a Communist without believing in a future world lacking in all social conflict and a liberal without believing in the end of history - that is in fact precisely what it means to be an “ethical Christian,” an “ethical communist” or an “ethical liberal.”
Is this a jab at me?
No, that section is a point against a strict economic determinism in people's ideological beliefs, while defending the claim that there is a trend or tendency for people to conform to certain outlooks based on their position in society and thus their experiences of it. But someone's class position is not simply synonymous with their political outlook and actions. I mean, there is a shit tendency to dismiss someone because of their class position, or to assert the class character of their ideas without actually illustrating the social basis of those ideas. Marxists should avoid such tendencies, as they are more a way of shutting down discussion amidst power struggles.
Human action is not caused, it is the inevitable result of response or lack of response--- this does not rule out the possibility that humans may entrust machines in managing their affairs one day.
Machines might streamline the running of things, but I would add the qualifier that I don't think machines are going to be the ones forming the judgements necessary to dictate how we change the condition of our society. Basically, they are part of the economic preconditions for the functioning of our everyday lives, but they won't be the political force of the people.
It'd be a long time before machines were even competent to make such judgements, and I have a resistant tendency towards such cybernetics.
In fact, I find appeal in Ilyenkov's struggle against cybernetics in the USSR, where he asserts that machines, as followers of formal logic, cannot adequately deal with contradiction, and that such an ordering of life would be destructive.
http://libelli.ru/works/idols/2.htm
He presents it in the form of a story.
For a summary: https://www.radicalphilosophy.com/article/the-philosophical-disability-of-reason
In fact, this disjunction between input and output, as generating incomputable infinities within the network, was already revealed by Warren McCulloch and Walter Pitts in the early 1940s. When trying to deduce ‘how we know what we know’, they suggested getting information about the inside of the brain in order to emulate the neural diagram of how perception evolves. As Slava Gerovich relates, McCulloch and Pitts constructed for this an artificial neural network that could represent logical function, and where, conversely, any logical function could be translated into a neural network. By this they wanted to prove that knowledge has a neural construction and that any logical function can be implemented in formal neural networks. In a nutshell, they sought to deduce the brain’s input, the ‘black box’ (the imprint of facts about external world inside the brain), from its outputs (our perception). Yet, the epistemological ambition of their project failed. As Gerovich writes, McCulloch and Pitts were thus forced to acknowledge that ‘from the perceptions retrieved from one’s memory, it was not possible to deduce the “facts” that caused those perception’. 18 Nonetheless, McCulloch and Pitts continued to deny this failure. Instead, they simply contended that ‘the limitations of their formal model of the brain confirmed fundamental limitations of our knowledge of the world’. Meanwhile the only discovery obtained through the experiment was that ‘even if we cannot know the world, the nervous system can at least compute infinite numbers as a universal logical machine’. 19
Similarly, in The Mystery of Black Box, 33 a pamphlet published in 1968, Ilyenkov created a technocratic dystopia in which there is a total supercession of reason and thought by machinic intelligence. The text is readable as seeking to reveal those parameters of dialectical logic that cannot be hijacked by algorithmic ratiocination. The Mystery of Black Box touches, in this way, upon some of the most crucial issues which are at stake today, I would argue, in the inquiry concerning what reason is. What are those components of human reason that cannot be emulated by any machinic intelligence? Is machinic intelligence able to become a sovereign autonomous autopoetic Subject, the epistemic nature of which is different from the human mode of speculation, or does it remain a complement of human reason? In other words, precisely those questions that Negarestani and Parisi claim to answer in their recent texts.
In the story told in Ilyenkov’s 1968 pamphlet, a cybernetic scholar Adam Adamich decides that the human brain possesses no essential differences from machinic computation. Being sure that a machine has more chances to augment its intelligence than the very slowly developing mind of man, he invents an artificial intelligence intended to accelerate thinking processes. It emulates thinking more efficiently than the human brain. All those arguments about the qualitative difference of human intelligence from machinic intelligence, as represented by such categories as reason, will, the ideal or the sublime, are rejected by Adam Adamich as so much obsolete mythology; a mythology which was once mistaken for philosophy. The machine of augmented intelligence created by the scholar gradually proliferates into a broader neural system, allowing each machine to acquire the capacity to autonomously implement self-learning and self-improvement.
A problem however arises when one of the most advanced machines – ‘a thinking ear’ – reaches its ultimate goal: it ‘learns’ to hear everything on the planet; but since there are no sounds in the cosmos, its further perfection becomes unnecessary, whereas the algorithm of amelioration inscribed in its coding incessantly instigates the machine to develop further. This situation creates a contradiction: perfection is an unending capacity of an artificial intelligence, but there is no need in it. Eventually, in order to resolve such contradiction, the neural system establishes the authority of a ‘Black Box’: a meta-intelligence machine, which simply neutralises all contradictions, and in which all excessive data can vanish when not needed. Thus, when any other machine starts glitching because of contradiction, the Black Box immediately neutralises the problem. The Black Box becomes, in other words, a device to ingress and devour the excesses of algorithms and data that were not logically necessary, but that had to proliferate as a consequence of the infinite capacity of algorithmic outputs – quite similar, that is, to the incomputable as described by Parisi.
In The Mystery of Black Box, ultimately, the inventor of the system, Adam Adamich, is blamed for excessive thinking; the machines decapitate him and substitute his head with a device for data memorising. The didactic conclusion is that the perfection of computation has been reached, but the infinity of production that was inscribed in the machine became unnecessary. So, paradoxically, infinity, when it stops being a category of thinking and dialectics, and is regarded as a mere flow of data, cannot manifest its true nature, which should be dialectical and contradictory. In the search for the guaranteed limit to infinity, machines reach the condition of the absolute end of thought, which coincides with the permanent blankness of the Black Box.
Despite the fact that The Mystery of Black Box was written in the late 1960s in the very different context of Soviet academia, the principal technical remedies in the augmentation of mind that it features are actually very similar to those found in current theories of computation. These might be summarised as follows:
1. A capacity for self-perfection, acceleration and self–learning by the machine.
2. The discrete character of algorithmic tasks and the eviction of any blurred, contradictory inputs, which might block the output.
3. The infinity of those discrete data.
4. The total division of activities and hence of labour, as a consequence of the extreme discreteness of algorithmisation.
5. The autonomy and autopoeisis of machinic intelligence.
While doubt and contradiction (or the ‘disability of philosophy’) diminish the efficiency of reason and make it powerless in post-philosophical theories of mind or of the brain, for Ilyenkov it is precisely these traits that construct thought. The mind’s ‘disability’ is inscribed into the mind’s ability. This disability is surpassed not by means of an augmented storage of knowledge or of cognised data and thought’s functionality. Rather, it is an awareness of the disability of human reason in its treatment of the contradictions of reality that is able to redeem such disability. Moreover, thought’s inevitable disability, perishability and its bond with human neoteny – that is, the retention of protective capacities for surviving in natural environments, as a condition in which the existence of the human species is grounded – does not contradict its quest for the Absolute. 34
As Ilyenkov often repeats, philosophical and dialectical phenomena are spiral-like or snowball-like – constantly on the move and hence indiscrete as selves. The common good, labour, reason or culture are, as such, not autopoetic, but realise themselves as ‘other-determined non-selves’. Autopoiesis implies that the organism remains the self, even in the surrounding of an environmental outside and in exchange with it, whereas the above-listed phenomena – common good, labour, reason, culture – presuppose one’s positing as non-selves. ’The other self’ in this case is not simply an outside of the self, but the formative principle of the self as of the non-self, of non-identity. From this perspective, it is impossible to algorithmicise thought, since thinking is not confined to the moves in a neural network, or within the brain alone, but evolves externally including the body with its senses, its involvement in activity, engagement in sociality, and other human beings of all generations and locations. Consequently, if one were to emulate an artificial intelligence or thought digitally, one would have to create an entire machinic civilisation (one that would, additionally, be completely autonomous and independent from the human one). 35 At the same time, the very idea of programing a human consciousness or a thought as input is unimplementable, since there is not a single moment when a human being and her reason would have a stable and discrete programmatic interface that could be used as an input. As Ilyenkov argues, if there is any function of thought, it is in surpassing that function. As such, even if computation inscribes within itself the incomputable as its autopoetic potentiality, it would not be able to pre-empt the concrete paths for dealing with contradiction, as the requirement of algorithmic logic is in either solving or neutralising the paradox, rather than in extrapolating it. 
36 As Boris Groys puts it, the sovereignty of thinking procedure is possible only when it is defunctionalised and miscommunicated. Moreover, a truly interesting (artistic) computer would be the one that ‘always produces the same result – for example zero – for any and all computations, or that always produces different results for the same computational process’. 37
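The McCulloch-Pitts result mentioned in the excerpt above, that any logical function can be implemented in a formal neural network, can be sketched in a few lines. This is my own toy illustration, not code from the article: a single binary threshold unit suffices for AND, OR and NOT, and such units can be wired together to realise any logical function.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts threshold unit: fires (returns 1) exactly
    when the weighted sum of its binary inputs meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logical AND: both inputs must fire to reach the threshold of 2.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
# Logical OR: a single firing input already reaches the threshold of 1.
OR = lambda a, b: mp_neuron([a, b], [1, 1], 1)
# Logical NOT: an inhibitory (negative) weight suppresses firing.
NOT = lambda a: mp_neuron([a], [-1], 0)
```

Composing these units (e.g. feeding the outputs of NOT and AND units into an OR unit) yields every truth-functional connective, which is the formal sense in which the network "implements logic".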
Until people should opt to concede, the greatest practical critique comes from a machine which could predict all the variables with precision. (I'm still obviously hazy about how the social aspect of humanity could be determined by the machine, but I am sure the deliberate nature of human actors would not be too difficult for advanced machinery to figure out --- even today we can see how algorithms can create a desired response in human conduct. But I am not positing that in my little scenario the machine will be applied to determine human conduct; it will focus entirely on the managerial aspects of an economy.)
I am skeptical of a machine's ability to determine all variables; it seems like a wish for a kind of practical God with absolute knowledge. I would of course reiterate the above on the abilities of machines, and what I think is often a lack of attention to the specific qualities of human thinking in its social totality. Instead, human thinking gets confined to the procedures of the latest technology, as psychology has for some time conflated its own tools with the structure of consciousness.
I am also suspicious of the attempt at centralized dictation of the economy, in that I think there is a strength in the market's sort of anarchistic chaos and decentralized approach that must be taken advantage of, but on the grounds not of value but of ethics.
The ability to predict such an elaborate and complex system as society is dubious, in part because of the human actors involved. But such predictive certainty should not be the goal.
https://www.marxists.org/reference/subject/philosophy/help/value.htm
Anyone genuinely familiar with Marx's critique of political economy will know how powerful is his analysis of commodity production and the labour theory of value which is at the heart of that analysis and the many great insights that this analysis has given us about the essential nature and historical trajectory of capitalism. However, whatever the claims, I do not know of a single Marxist who can claim, hand on heart, that they have done better than a capitalist think-tank in predicting the ups and downs of capitalism in the short or long term. And complexity economics shows how desperately inadequate bourgeois economics remains.
The "labour theory of value" disappears with value itself, as soon as people stop exchanging commodities. We do not need a new theory of value. We will demonstrate our values when we can decide how to spend our time, and the sooner we can decide what to do with our own time, the better. So long as we still want something in exchange, so long are we enslaved. So long as we have to spend our time doing one thing in order to get something else in exchange, so long are we enslaved.
I know it is unlikely that something which has taken power would give it up, but it's also not impossible. I don't see how that argument establishes that it necessarily cannot be so.
Because you can't make someone less dependent on you in trying to free them, ultimately the best you can do is support their independence. If I continue to make decisions for my child and don't give them the space to do things on their own and be increasingly independent, then my actions do not create the change needed.
Your point would amount to a kind of charity rather than the solidarity in which people are assisted in their own struggles. How does one dismantle one's self? This seems a practical impossibility to me.
For power to be given up, there must be someone by whom that power is taken up. A leader under threat may peacefully remove themselves, but they can't simply say "power to the people, I dissolve my office" and have things just work.
I feel bad here, because I think consensus was a bad choice of wording on my part, and you got really into it!
Consensus decision making is quite influential and must be considered in how it is implicated in the future of the political landscape.
So this, I guess, is just confirming that an ideal future is never as certain as a materialist future; but does an ideal get turned into historicity after its enactment is realized?
Indeed, ideals motivate people, who then objectify those ideals in the changes and practices which they institutionalize. The ideal becomes material.
This might help in providing a simple example of how a problem such as racism or sexism becomes identified, which then posits the ideal of a society without it, sought through things like anti-sexism legislation or whatever the movement wants to actualize in society.
https://www.ethicalpolitics.org/ablunden/works/concepts-activity.htm
I am avoiding a discussion of Hegel's connection of the universal, particular and individual in being moments or parts of a concept which aren't identical but interconnected.
But basically an example would be: the universal is unionism, the particular is an existing social practice such as a particular teachers' union in the US, and the individual is the union member. Universals are realized in particular social practices, but are impossible unless enacted by a collection of individuals.
In this technological application scenario, the intention is not to save us; I'm not even saying that this scenario is the optimal conclusion; I present it as a plausible unfolding based on where I see technology leading us.
I am less optimistic about the ability of technology to be the primary factor in the organizing of society. I guess I see it as a possible precondition for certain things but mostly in the productive capacity to meet people's basic needs efficiently as organized by people. Because machines are fast but they're really stupid. An analogy being how easy it can be for a machine to fuck up a task like making a sandwich unless you outline in great detail everything it needs to do and the contingencies it has to adapt to.
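The sandwich analogy can be made concrete with a toy sketch (entirely my own illustration, with made-up names): a rule-following procedure does exactly what it was told and nothing else, so any contingency not spelled out in advance is a hard failure rather than an occasion for judgement.

```python
def make_sandwich(pantry):
    """A literal-minded procedure: it follows its step list exactly,
    with no capacity to improvise around a missing ingredient."""
    steps = ["bread", "butter", "cheese", "bread"]
    sandwich = []
    for item in steps:
        # No judgement here: a missing ingredient is not worked around
        # (substitute, skip, go shopping) but simply halts the task,
        # unless that contingency was coded for in advance.
        if pantry.get(item, 0) == 0:
            raise RuntimeError(f"no {item}: task fails, no improvisation")
        pantry[item] -= 1
        sandwich.append(item)
    return sandwich
```

A human cook would shrug and use margarine; the machine just stops, because "use margarine when the butter runs out" was never among its rules.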
When I earlier stated my suggestion of "a technological aristocracy authoritatively seizing power and implementing this procedure" the primary support this person would have would be 'cult of personality' combined with means and support from the educated elites.
Might that be like China with its politburo of engineers?
If this technology were to prevail, it seems rational to conclude that the machine's system would challenge itself and gradually lead us towards an advanced state.
A self-perfecting system?
Initially the technology would exist within the capitalist sphere; it would provide room for corporations to maneuver, but it would highly regulate such movements, determining whether too much imbalance were apparent in other sectors removed from that corporate body. Primitive capitalism-for-profit would be replaced by capitalism-for-profit-balanced-by-social-consciousness (and not social consciousness as a branding technique, but one that was part of the contract establishing that corporation).
Yeah, this sounds like the USSR dream of the proper regulation of the capitalist economy, but I think it is a fetishism of economics rather than an effort to fundamentally challenge the qualitative nature of markets, in which exchange value dominates over use value.
This is basically a social democratic platform, really, and it denies the inherent contradictions in capitalism, calling just for enough regulation that it has a human face. It misses Marx's point of how the gap between use values and exchange values is a fundamental contradiction which develops into the larger problems that underpin the repeating capitalist crises.
We're of different minds.
The corporation could still enrich various people for a time, so long as they were cooperating with the commands of the machine. Innovation could also perhaps be prioritized in the realm of academia, and collegial esteem might drive the innovative force in the way that wealth-esteem once did.
You might like Thunderhead, the sci-fi book about a benevolent AI.
Not sure why corporations would submit to anything which interferes with trade and profit.
-For Ethical Politics