The Market Mind Hypothesis, page 60
  its pervasiveness and computational potential, and its ability to pose new kinds of challenges not just to rationality but to consciousness in general, including the experience of selfhood, the power of reason, and the evolutionary costs and systemic blindnesses of consciousness. (Hayles, 2014, p. 199)

  Importantly, humans understand the context of AI, be it cultural, economic, or historical. AI itself does not. Through scientific and technological discoveries, that is, by luck and wisdom, we drove automation, and it became part of our history. And while we ourselves are driven by survival in that process, AI is not (although some believe we can program it to be). I will have more to say about this evolutionary angle in a later Cognitive Note.

  Whereas the first two industrial revolutions focussed on replacing largely unskilled physical work, automation has more recently shifted to replacing our mental work. And this is no longer restricted to repetitive and menial tasks. Humans are replaced because we have weaknesses: we get ill, injured, or tired. But mechanical devices are vulnerable too. Computers crash and machines break down. They suffer wear and tear, and they are arguably more vulnerable to online attacks. Modern machines have grown so complex that they require a team of experts to handle them. No individual fully understands an airplane, satellite, or even a car anymore. In fact, we increasingly rely on computers and AI not only to run them, but also to tell us how to fix and maintain them. You can see the problem here: soon we will not sufficiently understand these either. AI is able to execute tasks that traditionally required advanced mental effort and intelligence. In a growing number of cases it does so more efficiently. As just mentioned, depending on which type of machine learning is involved, it is not always clear how it achieves this. Crucially, this may also hide any errors it makes, including biases. While we are generally able to retrace and identify errors made by humans or simple machines, this is increasingly impossible for AI.

  There is another, human side to this mental replacement. When physical labour is taken away from manual workers, it is not just their income that is removed. Their intentionality, in terms of both what directs and what occupies their minds, is also replaced. The same happens to mental workers. This marginalisation changes their behaviour as market participants (limited to the extent that they can afford to continue as such). The CVC, with its physical and mental constraints, showed this in a different context and in dramatic fashion.

  One example of a human trait which seems hard to replicate is imagination. It is the reverse-engineering capability of the human mind: we see a desired future and then determine the steps by which we could get there. AI lacks imagination; it can only do “normal” engineering, determining the path based on past data. The key aspect that truly makes the difference is the palette of sensations that humans feel when imagining. It is this aspect that delivers the mental causation in human behaviour to create that future.

  More broadly, what is generally acknowledged as missing from AI is consciousness (see Figure A.2). Some hesitate to consider computation capable of instantiating consciousness because it opens the door to panpsychism, for example. Still, artificial consciousness or machine consciousness is a growing field within AI (e.g. Chrisley, 2009; Stuart, 2011). For reasons I already mentioned, in my opinion artificial consciousness will remain distinct from human consciousness. Specifically, as discussed in Chapter 7, it is the non-axiomatic A-ha experience (in the eureka moment) that sets human discovery apart from artificial discovery and thus, by extension, their respective consciousness.

  Figure A.2: AI and existentialism.66

  Depending on how AI matures and what level of artificial consciousness it reaches, its rights enter the equation (just like animal rights). Conscience could also become a key issue. The latest developments in that regard include machine learning algorithms that self-evolve (e.g. Real et al., 2020). One of the claimed benefits is that the lack of instructions supposedly removes human biases from the resulting algorithms.67

  We can connect this to the mechanical approach in economics where there is the distinction between empirical input (e.g. data), practical output (e.g. policies) and theoretical instructions (e.g. models). Although input and output can vary, the instructions that relate the two are pre-determined (or automatic) and form a fixed set. As I have argued, the reason is that mechanical economics has made an ontological commitment that is mechanical at all levels.

  On that note, in March 2023 several AI experts (both academics and practitioners) signed an open letter which stated, among other things, that “Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable”. While admirable, it is naïve to believe this will be accepted worldwide (which is required for it to be effective). It will thus unfortunately fail. The genie is out of the bottle … but in a different way than you might think. It brings me to the mechanisation trap the global men-of-system—in both the East and West—have set. On a few occasions in this book I warn that “mechanisation begets mechanisation”. Take the example of the growing number of totalitarian regimes that, as the name implies, wish to be in total control. AI can help them with this, and they will consequently never abide by the commitment suggested by the letter’s signatories. Nor will they care much about the ethics of AI, for that matter. To be clear, the naivety started decades ago and was economic, when these regimes were welcomed to join the global economic system as if they were plug-ins: “let’s add the production engines of the East to the machine”. What was not realised is that manipulated or stymied markets can no longer provide the hoped-for discipline to change such regimes according to the principles of a ‘global free economy’; instead, those regimes have become its Trojan horse. To wit, IT knowledge and tools were naively exported and/or simply stolen. Democracies are now served a banquet of consequences, including their own growing repression ‘to keep control’. In short, this is not about AI versus humanity. Nor is it about capitalism versus socialism. This is about freedom versus control, which affects both markets and minds.

  How AI will develop further remains to be seen. There are risks. First, a general one concerning mechanisation. The difference between man and machine is that man is not a machine. This also explains the difference between valuing and pricing. Machines may be very efficient at the latter, but only humans can value. Machines cannot value because they do not care. Our 4E cognitive economics setting carries a warning about mechanisation encroaching further into our personal and professional lives: if technologies like AI replace rather than augment labour, we risk losing skills that help us with valuing in the economic system. Hubert Dreyfus and others have (implicitly or explicitly) warned of this.

  More specifically for AI, in a statement to the New York Times in May 2023, Geoffrey Hinton, considered the godfather of AI, announced his resignation from Google, adding that he now regrets much of his work and fears potential abuse by bad actors. Jan Leike, then head of alignment at OpenAI, tweeted in March 2023: “Before we scramble to deeply integrate LLMs everywhere in the economy, can we pause and think whether it is wise to do so? This is quite immature technology and we don’t understand how it works. If we’re not careful, we’re setting ourselves up for a lot of correlated failures”. In economic terms, the risk is that we are creating another systemic externality. An important dimension is the culture in which AI evolves. What is acceptable in one country is not acceptable in another (e.g. Lee, 2018). The optimistic view is that AI can complement and thus enhance human intelligence (HI). This is when coordination means that AI and HI cooperate and become a coupled cognitive system, in the spirit of 4E cognition. The wider question is thus not whether human minds are the only kind of minds. Rather, it is whether human minds can extend and connect to form a different mind. This is also where AI and robots play a role, namely to support human minds in forming and enhancing that mind.

  However, this book is not about AI, so I will not spend more time on it here. Fortunately others have written volumes on AI and economics. For a general overview, see Buchanan (2019). For a more particular and critical view of AI and financial risk management, see Danielsson, Macrae and Uthemann (2017).

  In summary, AI is currently about intelligence, not consciousness. Humans have been able, for a while now, to wonder “what it is like”, for example, “to be a bat” (Nagel, 1974). We do not know the answer, but the question remains legitimate because it is instigated by the fact that there is something it is like to be you or me. That is, we are kind of familiar with a self, even though we don’t exactly know it, and we suspect other animals may have similar experiences. We do not ask that question about AI. In general, my scepticism regarding AI is largely confined to the pretence of it reaching human consciousness. I believe the role of evolution is hugely underestimated, as I will explain next (Cognitive Note: AI and Existentialism).

  Cognitive Note AI and Existentialism

  I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die. (Rutger Hauer (RIP), Blade Runner)

  In this note I combine some reflections on intentionality and AI. It should be read keeping in mind the metaphysical framework I set up earlier. I limit it to entities with a highly developed intentionality, viz. humans and (intelligent) machines, like a robot.

  Biocentrism is warranted because there will always be a difference between organic cognition and artificial cognition. The reason is what I call 4E evolutionary awareness: recognising in oneself and others the purpose of the urge to survive, i.e. the survival instinct, in a 4E context.68 A threat to one’s being is sensed via the fight, flight, and freeze experiences, including all the associated mentality, varying from S1’s emotions (e.g. fear) to S3’s awareness (e.g. attention). What is involved includes the environment, one’s body, and others. For humans, what sets this experience apart from, say, smelling freshly brewed coffee is an awareness of the meaning of death, i.e. ending it all forever, that accompanies the unconscious instinct.

  In terms of my earlier comments on qualitative valuation: it is the valuation of the self under extreme uncertainty. Specifically, it is the realisation, impressed as deep sadness, that the self is at risk of total loss and what that would mean, not just for you but also for those you leave behind. Crucially, while valued we do not exactly know what that self is. It could be an illusion (e.g. Metzinger, 2010). In any case, there is an evolved subliminal but unknown link between the selfish gene and the self. Because AI ‘skips’ evolution and, instead, is created by us (even if we eventually delegate such creation to future AI-versions) it will never be able to experience evolutionary awareness, especially about extinction. As Jonas observes: “A feedback mechanism may be going, or may be at rest: in either state the machine exists. The organism has to keep going, because to be going is its very existence—which is revocable—and, threatened with extinction, it is concerned in existing” (Jonas, 1966, p. 117). Specifically, AI cannot simulate—let alone instantiate—such awareness because, again, we do not know how to ‘pass it on’. (Of course, this does not mean that AI is ‘thus’ inferior to us in surviving).

  I will now link this to intention. I always felt that the ‘intentional stance’ towards intentionality, particularly when used to equate artificial and human mentality, has cracks in its foundation. Specifically, it starts to become wobbly once you consider that the rational/goal-directed behaviour it pretends to explain is ultimately aimed at self-preservation. Surely intentionality counts most when one’s existence is at stake, i.e. when it is existential and there is only one goal: to survive?

  So, let’s view intentionality from our ‘4E evolutionary stance’: “As far as science is concerned, the acceptance of evolution meant that the world could no longer be considered merely as the seat of activity of physical laws but had to incorporate history and, more importantly, the observed changes in the living world in the course of time. Gradually the term ‘evolution’ came to represent these changes” (Mayr, 2001, p. 3). From that angle, the difference between a human and a robot must be based on those 200 million plus years of 4E evolution. It is not about intentionality per se, which both (can) show. The difference is threefold.

  First, what its own intentionality means to the entity itself in terms of being and survival. Again, evolutionary awareness is the understanding of the difference between existence and non-existence (in human terms: life or death). Goal-directed behaviour in the case of survival only becomes convincingly, e.g. recognisably, rational69 if the entity expressing it is aware of what it means if the goal is not achieved, namely that its behaviour is permanently terminated and it ceases to be. At that moment, past and future come into very sharp contrast. For humans, although evolutionary awareness is general, that realisation is deeply personal as well as emotional, with ellipsis kicking in. For example, “My family heritage is threatened”, or “Everything I worked for will have been for nothing”. But also “I won’t fulfil my dream” and “I won’t see my kids growing up”, even to the point of “I’ll sacrifice myself for my kids/country/cause”. The words of Viktor Frankl seem appropriate here:

  The fact, and only the fact, that we are mortal, that our lives are finite, that our time is restricted and our possibilities are limited, this fact is what makes it meaningful to do something, to exploit a possibility and make it become a reality, to fulfil it, to use our time and occupy it. Death gives us a compulsion to do so. Therefore, death forms the background against which our act of being becomes a responsibility. (Frankl, 2014, p. 108)

  In other words, that wider 4E-realisation is, informationally, extremely meaningful and makes a Batesonian difference in human goal-directed behaviour. Of course, for humans this is co-mingled with another evolutionary influence on intentionality, consisting of the survival instincts of which we are not aware. This started with our ancestors:

  Once we see a couple of bears eat our relatives, the whole species gets a bad reputation. Then … when we spot a huge shaggy animal with large, sharp incisors, we don’t hang around gathering more data; we act on our automatic hunch that it is dangerous and move away from it. (Mlodinow, 2012, p. 146)

  In short, Mother Nature (via a spontaneous process) created us and we are programmed to survive for our own intentions. That program is ancient, and we do not have the code. In the end we experience intentionality existentially as a rich history threatened by no future. It becomes a kind of meta-intention to preserve intentionality. I doubt this can be replicated (except in movies).

  Second, how this meaning of survival is subsequently attributed to the respective entity as part of its intentionality. For example, the small triangle in the Heider and Simmel animation was made to look as if it were trying to survive the aggression of the larger triangle. However, although that struggle is expressed convincingly, upon reflection we must conclude that the meaning of its survival cannot be attributed to the small triangle itself (even if it understood that meaning). In fact, to us it is clear that the attribution is indirect in the sense of being programmed by the producers. Similarly, science fiction offers plenty of examples of intelligent machines striving for survival. Ironically, the goal of that survival is often to continue to threaten human lives. But let’s give them the benefit of the doubt and accept that they reflect more benignly on their survival. They will then eventually come to realise not only that humans are the origin of their existence, but also that we created them basically to help us survive. Consequently, the meaning of their survival is ‘indirect’.

  Finally, how this survival intentionality is then recognised by the other entity, i.e. the one taking the traditional ‘intentional stance’. In principle we judge a desperate fight for survival as rational because we recognise, via empathy, the evolutionary origin. However, that changes when we know that an entity has no embodied evolutionary awareness. (I am excluding ethical questions here). We can interpret the famous cartoon above along these lines: the point is not the computer’s lack of awareness of the plug’s existential role. The point is the computer’s lack of awareness of the purpose of its intentionality. Whereas Mother Nature designed us, we designed computers, each with different initial building blocks or ingredients. Even if, in future, computers design computers, they will miss the rich (and enriching) evolutionary history.

  As an aside, there must be a nice challenge here to toughen Turing’s test, starting with asking (some AI): “What would you do when your existence is threatened and why?” and then to drill it further. Turning the test’s original purpose upside-down would be to let this AI explain consciousness better than we understand it now. Success, I guess, would be philosophy’s AlphaGo moment. But, in agreement with Soddy, I suspect it will not arrive soon (if at all).

  A lot has been written about the threat of AI and the potential demise of the human race, including the arrival of the singularity (i.e. the moment AI takes over the world). In the final analysis, if pressed I believe Andrew McAfee put it succinctly: “People will rise before machines do”. In the current environment (where, per the MMH, imbalances and stress are driven by ongoing mechanisation, surveillance and so on) that moment may be approaching.

  B. Economic Science

  Throughout this book the term economics includes finance as a nested discipline. Still, I will briefly discuss them separately below. I will also explain the economic system, made up of the real and financial economies. However, I will start with investing (including trading) and what it means for this book.

  B1. Investing

  Investing is the process of allocating/withdrawing money to/from a particular asset class by buying/selling it. Whatever money in a portfolio is not allocated is called cash (held in currencies). I will discuss money separately in Appendix B5, as well as in the main text. The main asset classes are bonds, commodities, equities, and real estate. There are various types of financial market participants like brokers, market makers, private individuals, pension funds, and hedge funds. If they have a short-term focus, they are often called speculators or traders. For simplicity I will refer to them collectively as investors, unless specified differently.

 
