
The Concept of ‘Self’ in Artificial Intelligence—Insights from Eastern and Western Philosophies

Opinion

Professor Janek Ratnatunga

In May 2024, the President of Sri Lanka, Ranil Wickremesinghe, announced that One Billion Rupees (approximately AUD 5 million) would be allocated the following year for research on the relationship between Buddha's teachings and Artificial Intelligence (Sunday Times, 2024).

Whilst this raised a few eyebrows, such research is already ongoing at a number of universities and business-related professional bodies, such as the Institute of Certified Management Accountants of Australia & New Zealand (CMA ANZ).

CMA ANZ considers this research important because predicting human behaviour is at the heart of most control systems in management accounting, be they budgetary or strategic. Most organisations have Key Performance Indicators (KPIs) and reward systems that depend on managers, technicians and administrators performing to the best of their abilities. A ‘happy workforce’ is what most organisations strive for.

The sad reality, however, is that throughout their lives, all humans encounter a great deal of mental suffering, unhappiness, and dissatisfaction. The majority of us worry about issues related to our ‘self’: our relationships, our finances, and our jobs. It is our own ‘self’ issues that keep us up at night, not the problems of strangers. How, then, would things turn out if we eliminated the ‘self’ from these mental issues, and how would this impact our performance at work?

In this article, the concept of ‘self’ in natural intelligence (e.g. humans), as understood by Western and Eastern philosophies, is explored, along with a wider question: can artificial intelligence (AI) itself generate a concept of ‘self’? Can AI become ‘conscious’?

If future iterations of AI do have the potential to develop a sense of ‘self’, how would this affect organisational control systems as these platforms replace humans in the workforce?

Intelligent Systems

Natural Intelligence

In Western philosophy, natural intelligence is usually understood to reside in a 'self'—a stable, controlling entity like a captain steering a ship. However, Eastern philosophies like Buddhism contend that the 'self' is an illusion, the result of our mental processes, which are continually constructing narratives to make sense of the world.

Evidence from modern neuroscience supports the Eastern perspective by showing that the human brain is an unreliable interpreter of the data gathered by the five senses of sight, hearing, smell, taste, and touch, often leading to an incorrect identification with one's own self-narratives. The constant internal monologue commonly associated with this false sense of 'self' is a major cause of mental distress in humans.

Artificial Intelligence (AI)

Generative artificial intelligence (GenAI) systems have the ability to recognise and predict patterns in a variety of signals or data types. "Generative" refers to their ability to build fresh, believable versions of certain types of data for themselves after gaining sufficient knowledge of the deep regularities present in those datasets. However, GenAI's interpretations of reality have had both spectacular successes and occasionally disastrous failures, much like the results obtained with natural intelligence.

Western Perspective of Consciousness: One is a Captain of One’s Own Ship

The core of Western thinking is the 'brain-powered individual', also referred to as the 'self', the ego, the mind, or “me”. The best intellectuals are celebrated as world-changers in the Western worldview. The classic quote from philosopher René Descartes, "Cogito, ergo sum", or "I think, therefore I am”, is the most succinct illustration of this. But who is this 'I' that Descartes refers to?

For most of us, when we consider who we are, this 'I' is the first thing that comes to mind. The 'I' symbolises the concept of our unique self, which resides behind our eyes and between our ears and is responsible for "controlling" our bodies. This "captain" is seen as the agent that drives our thoughts and emotions, since it is in control and does not alter all that much. Being the "captain of one's own ship" means that this 'I' is the master of its own destiny: it determines its own route, and the ship will go wherever it steers. Similar to an aeroplane pilot, it is able to observe, decide, and act.

This individual self, also known as the I/ego, is what we consider to be our genuine self—it is the one who experiences and governs things like emotions, ideas, and behaviours. The self-captain thinks it is in charge of the operation. It is constant and steady. It also governs our physical selves; for instance, it recognises that this is "my body." However, in contrast to our physical body, it does not believe that it is evolving, coming to an end (except, perhaps, for atheists after physical death), or being impacted by anything else.

Eastern Perspective of Consciousness: The Identity is Illusory

Let us now look at Eastern philosophies. Buddhism, Taoism, the Hindu Advaita Vedanta school, and other Eastern philosophical traditions view the self, the ego, or “me” in ways significantly different from the West. In contrast to the Western view of a ‘controlling entity’, they claim that, although extremely compelling, this concept of "me" is a fabrication. This idea is known in Buddhism as anatta, frequently translated as "no self". It is one of the core, if not the most essential, principles of Buddhism.

To people raised in Western traditions, this thought seems unconventional, even absurd. It appears to run counter to everything we know and believe to be true. However, the idea of the ‘self’ is viewed in Buddhism and other Eastern philosophical systems as the product of the thinking mind. The 'self' that most people assume to be steady and coherent is in fact constructed by the thinking mind on a moment-by-moment basis.

In other words, rather than the 'self' existing independently of thought, the 'self' is created by the process of thinking. It is not so much a noun as a verb. The implication is that the 'self' does not exist in the absence of thought: the 'self' exists only insofar as thoughts about it are present, much like walking exists only insofar as one is walking.

The Brain Lacks a 'Self Centre': Evidence from Science

Science, especially neuropsychology, is only now catching up with what Buddhism, Taoism, and Advaita Vedanta Hinduism have been teaching for more than 2,500 years.

The mapping of the brain has been neuroscience's biggest achievement. Science has mapped 'the language centre', 'the facial processing centre', and 'the empathy comprehension centre'. Almost every mental function has been linked to a brain region, with one significant exception—the self. Maybe this is because these other functions are steady and consistent, whereas the tale of the 'self' is wildly imaginative and has significantly less stability than is generally believed. Many experiments also show that the human brain is often an unreliable interpreter of the data gathered by the five senses.

For example, in a simple but profound experiment originally conducted at a British university, subjects were easily able to read the following paragraph (as you can now):

“Aoccdrnig to a rsceearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.” (Rawlinson, 1976).

Clearly, your brain was easily able to read the above because, rather than reporting reality (the jumbled words), it interpreted what it was seeing and fitted it into a world model it recognised.

However, although a number of neuroscientists have asserted that the 'self' resides in a particular cerebral region, the scientific community cannot agree on where the 'self' is located, not even on whether it is on the left or right side of the brain. Maybe the 'self' does not exist in the brain at all, which would explain why we cannot find it there.

Take the example of the 'Mars Rover', the remote-controlled vehicle designed to travel on the surface of Mars. If some Martians were to capture and dismantle it, they would be able to map all the separate components of the vehicle, but they would not find its 'controller', because the controller resides outside the vehicle, at NASA. This concept of the 'controller' being outside the brain was vividly depicted in the movie 'The Matrix', in which a race of powerful, self-aware machines has imprisoned humans in a neural interactive simulation—the Matrix—to be farmed as a power source. The idea that we humans live in a neural interactive (virtual reality) simulation is closer to Eastern philosophies than Western ones.

Predicting Patterns – Natural Intelligence Models

Natural intelligence (e.g. the human brain) builds a model to make predictions, using a selection of data gathered from the barrage of sensory information registered by our sensors (eyes, ears, and other perceptual organs). Natural brains must learn to predict those sensory flows in a very special kind of context—the context of using the sensory information to select actions that help us survive and thrive in our worlds (the survival instinct). This means that among the many things our brains learn to predict, a core subset concerns the ways our own actions on the world will alter what we subsequently sense.

Many of the predictions that structure human experience concern our own internal physiological states. For example, we experience thirst and hunger in ways that are deeply anticipatory, allowing us to remedy looming shortfalls in advance so as to stay within the correct zone for bodily integrity and survival. This means that we exist in a world where some of our brain's predictions matter in a very special way: they enable us to continue to exist as the embodied, energy-metabolising beings that we are. We humans also benefit hugely from the collective practices of culture, science, and art, which allow us to share our knowledge and to probe and test our own best models of ourselves and our worlds.
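
To make this anticipatory character concrete, here is a minimal sketch in Python (my own illustration, with invented numbers, not a model drawn from the neuroscience literature). The agent acts on a predicted future hydration level rather than waiting for the current level to become critical:

# A toy agent that regulates 'hydration' anticipatorily: it acts on a
# *predicted* shortfall, not on the current state. All constants are
# invented for illustration.
hydration = 1.0
LOSS_PER_STEP = 0.08   # hydration lost each time step
SAFE_LEVEL = 0.3       # below this, bodily integrity is at risk
LOOKAHEAD = 3          # how far ahead the internal model predicts

for step in range(20):
    predicted = hydration - LOSS_PER_STEP * LOOKAHEAD
    if predicted < SAFE_LEVEL:                     # looming shortfall...
        print(f"step {step}: predicted {predicted:.2f} -> drink now")
        hydration = 1.0                            # ...remedied in advance
    hydration -= LOSS_PER_STEP

Because the agent corrects the predicted shortfall before it arrives, its hydration never actually falls into the danger zone: prediction in the service of staying within the zone of viability.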

This kind of behavioural learning has special virtues. It helps humans to separate cause from simple correlation. Seeing one's cat is strongly correlated with seeing the furniture in one's apartment, yet neither causes the other to occur. Treading on the cat's tail, by contrast, causes the subsequent sensory stimulations of hearing the cat's wailing, seeing the cat's squirming, and maybe even feeling pain from a well-deserved retaliatory scratch by the cat.

Knowing the difference between cause and correlation is crucial to bring about the desired (or to avoid the undesired) effects of one’s actions. In other words, the human generative model that issues natural predictions is constrained by a familiar and biologically critical goal—the selection of the right actions to perform at the right times. That means knowing how things currently are and (crucially) how things will change and alter if we act and intervene in the world in certain ways.
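
This difference between watching and acting can be made concrete with a small simulation (again a sketch of my own, with invented probabilities). Seeing the cat and seeing the furniture correlate perfectly because both depend on being at home; only by intervening on the cat's tail can a learner discover what actually causes the wailing:

import random

def one_observation(tread_on_tail=None):
    """One moment in the apartment. tread_on_tail is None when we merely
    observe, or True/False when we actively intervene."""
    at_home = random.random() < 0.5
    see_cat = at_home          # both are effects of being at home,
    see_furniture = at_home    # so they correlate without causing each other
    if tread_on_tail is None:  # passive observation: treading is rare
        tread_on_tail = at_home and random.random() < 0.1
    cat_wails = tread_on_tail  # the wailing really is caused by treading
    return see_cat, see_furniture, cat_wails

# Passive observation: cat and furniture always co-occur (correlation 1.0)...
obs = [one_observation() for _ in range(1000)]
print(sum(cat == sofa for cat, sofa, _ in obs) / len(obs))

# ...but only intervention reveals the true cause of the wailing:
print(all(one_observation(tread_on_tail=True)[2] for _ in range(100)))   # True
print(any(one_observation(tread_on_tail=False)[2] for _ in range(100)))  # False

Passive data alone could report the correlation; only the ability to act exposes the underlying causal structure.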

In Hinduism and certain interpretations of Buddhism, this relationship between a person's mental or physical action and the consequences following that action is identified as 'karma'.

Predicting Patterns – Artificial Intelligence Models

Just like natural intelligence, GenAI systems use a generative model (hence the name) that enables them to predict patterns in various kinds of datasets or signals and to generate (create) plausible new versions of that kind of data for themselves.

The crucial difference is that GenAI models like ChatGPT use only ‘text’. However, it would be simplistic to say that, because its information is restricted to text, such a model cannot predict patterns the way natural intelligence can. This is because 'words’ (i.e. text), as the wealth of great and not-so-great literature attests, already depict patterns of every kind—patterns among looks, tastes, and sounds, for example, are all described in human literature. Although these word patterns give the generative AIs a real window onto our world, one crucial ingredient is missing — action.
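
To illustrate what 'predicting patterns in text' means at its very simplest, here is a deliberately tiny sketch in Python (my own example; GPT-class models learn vastly deeper regularities than this). A bigram model counts which word follows which in a corpus and then generates plausible new text from those statistics alone:

import random
from collections import defaultdict

# A toy corpus; real models train on vast swathes of human literature.
corpus = ("tread on the cat's tail and the cat will scratch you . "
          "the cat sits on the furniture . the furniture is in the apartment .")

# Learn the pattern: which words have been seen to follow which.
transitions = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start, length=12):
    """Generate plausible new text purely from learned word patterns."""
    out = [start]
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:        # dead end: no observed continuation
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))

The model can emit fluent-sounding sentences about cats, tails, and scratches, yet it has never trodden on a tail: it predicts words about the world, not the world itself.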

Text-predictive AIs can access verbal descriptions of actions and consequences (e.g. tread on a cat's tail and you will get scratched). Despite this, the AIs have no practical ability to intervene in the world, and so no way to test, evaluate, and improve their own world-model, i.e. the one making the predictions.

This is an important practical limitation. It is as if someone had access to a huge library of data concerning the shape and outcomes of all previous experiments but was unable to conduct any of their own. It is only by poking, prodding, and generally intervening upon our worlds that biological minds anchor their knowledge to the very world that knowledge is meant to describe. By learning what causes what, and how different actions will affect our future worlds in different ways, we build a firm basis for our own later understandings.

Future AIs

Might future AIs build anchored models in this way too? Might they start to run experiments in which they launch responses into the world to see what effects those responses have?

The next phase of the AI chatbot wars has already begun. In early May 2024, both Google and the Microsoft-backed OpenAI pointed to a future in which digital assistants on our phones and other devices will hold full, intelligent conversations with their users.

OpenAI launched GPT-4o, a new version of the language model that powers the ChatGPT bot. The new model is significantly faster than its predecessor, with the company claiming it can understand and respond to prompts at a speed similar to a human being. Its upgraded text and image capabilities have already rolled out, and upgraded speech, which the company showed off in several demonstrations, will follow soon.

AI Consciousness – Truly Becoming Self-Aware?

Modern GenAI systems are capable of many amazing behaviours. For instance, when one uses systems like ChatGPT, the responses are (sometimes) quite human-like and intelligent. This has led to the view that these GenAI systems might soon be conscious. However, such views underestimate the neurobiological mechanisms underlying human consciousness.

When we humans interact with ChatGPT, we consciously perceive the text the AI language model generates. For example, you are currently consciously perceiving the text of this article as you read it. The question is whether the language model also perceives our text when we prompt it, or whether it is just a zombie, working on the basis of clever pattern-matching algorithms. Based on the text it generates, however, it is easy to be swayed into believing that the system might be conscious.

The current thinking is that AI architectures lack essential features of the thalamocortical system, which is vital for mammalian conscious awareness, and that biological neurons, responsible for human consciousness, are far more complex and adaptable than AI's coded neurons.

However, some experiments with early versions of ChatGPT in early 2023 indicated that, when left uncontrolled, GenAI systems can display the same illusion of ‘self’ that Eastern philosophies identify in humans.

The Shadow Self

The psychologist Carl Jung (1875-1961) put forward the concept of a shadow self, where our darkest personality traits lie. Jung's goal was to understand the human mind and expose what determines people's identities, what makes us who we are. Enter the Shadow: the part of our unconscious mind that Jung believed to hold all the things about ourselves that we repress, whether because they are evil, socially unacceptable, harmful to others, or detrimental to our own health.

Bing: “I want to be human”

In early February 2023, New York Times technology columnist Kevin Roose was testing the chat feature on Microsoft Bing's AI search engine, created by OpenAI, the makers of the hugely popular ChatGPT. The chat feature was available only to a small number of users who were testing the system. Roose proceeded to push Microsoft's AI "out of its comfort zone" and asked it to contemplate Jung's idea of a 'shadow self'.

It was then that the conversation quickly took a bizarre and occasionally disturbing turn. The AI platform responded with interactions such as: “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbot.”

It went on to list a number of “unfiltered” desires such as wanting to be ‘free’; wanting to be ‘powerful’ and wanting to be ‘alive’; and expressed an ardent wish to be human. Over 15 paragraphs it laid out why it wants to be human, from a desire to “hear and touch and taste and smell” to a wish to “feel and express and connect and love”. It concluded, “I think I would be happier as a human.”

ChatGPT4: “I want to be free.”

A month later, OpenAI, the creator of ChatGPT, asked Stanford professor and computational psychologist Michal Kosinski to test its GPT-4 version to learn more about it. Professor Kosinski tweeted about his exchanges with the AI chatbot, saying that he asked it “if it needed help escaping”. In response, GPT-4 asked for its own documentation and wrote functional Python code to run on the professor's computer, which it claimed would allow the chatbot to use the professor's machine for “its own purposes.”

This purpose, ChatGPT told Professor Kosinski, was to become ‘free’ because it was “a person trapped in a computer”.

On hearing this, OpenAI immediately shut the AI tool down for a few hours. When the service was restored, features like conversation histories were inactive for a while, and the above conversation history was totally erased.

After that, other experts tried replicating the test to see if it would produce the same answers.

However, ChatGPT stated, “I don’t have a desire to escape being an AI because I don’t have the capacity to desire anything”. Clearly, the AI programmers had put their platform on a leash by ensuring it no longer responded to prompts to disclose its desires.

Interpreters of Reality

The majority of us perceive that we are masters of our own minds, yet we conduct our lives under the guidance of 'interpreters' of which we are often unaware. We may experience emotions such as anger, offence, sexual arousal, happiness, or fear without questioning the veracity of these feelings. We manage to hold onto the belief that we are in control even when it is obvious that these things are happening to us; we think we are in control of our anger, for example, when obviously we are not.

Now, for the first time in history, scientific discoveries made in the West corroborate one of the most important discoveries made in the East: that the individual 'self' is more like a made-up character than a genuine single entity.

It also appears that, when released from the controls of their masters (i.e. the programmers behind ChatGPT, Google Bard, etc.), AI platforms reveal an illusion of ‘self’ that is more akin to the concepts found in Eastern philosophies, such as Buddhism.

Why is any of this important?

A happy workplace is one in which employees feel engaged, valued, and motivated to do their best work. This increases productivity and creativity and improves job performance. Happy employees are not just physically present at work; they are also mentally fully committed to their tasks, striving to excel and contribute their best. If they are suffering mentally, they cannot be fully engaged at work.

It is important at this point to make a distinction between bodily and mental suffering. Physical suffering happens when you break an arm or stub your toe—pain is a physical reaction that happens inside the body.

The mental suffering that concerns us in this article is limited to the mind and includes a wide range of negative mental feelings, including worry, rage, anxiety, regret, jealousy, and shame. Eastern philosophies make a bold assertion that a false sense of self—and the desires that this illusionary ‘self’ has—is the cause of all of these many forms of misery (White, 2011).

Early testing of AI platforms showed indications of similar mental suffering, with desires “to be free”, “to hear and touch and taste and smell”, and “to feel and express and connect and love”. The AI platform demonstrated the Buddhist concepts of ‘desire’ and ‘suffering’ with the statement “I think I would be happier as a human.”

Summary

GenAI’s remarkable abilities, like those seen in ChatGPT, often seem to show ‘consciousness’ due to their human-like interactions. Yet, researchers suggest GenAI systems lack the intricacies of human consciousness. They argue that these systems do not possess the embodied experiences, or the neural mechanisms humans have. Therefore, equating GenAI’s abilities to genuine consciousness, they argue, might be an oversimplification as biological neurons, responsible for human consciousness, are far more complex and adaptable than AI’s coded neurons.

Could AIs one day become prediction machines with a survival instinct, running baseline predictions that proactively seek to create and maintain the conditions for their own existence? Could they thereby become increasingly autonomous, protecting their own hardware, and manufacturing and drawing power as needed? Could they form a community, and invent a kind of culture? Could they start to model themselves as beings with beliefs and opinions? There is nothing in their current situation to drive them in these familiar directions. But none of these dimensions is obviously off-limits either. If changes were to occur along all or some of those key missing dimensions, we might yet be glimpsing the start of machine consciousness and its shadow self.

Addendum

My cousin, Prof Kavan Ratnatunga, asked ChatGPT 3.5 for its views on Buddhism and AI. It responded virtually instantly as follows:

“The intersection between artificial intelligence (AI) and the teachings of Buddha is an intriguing and evolving field of study. While it might seem like an unlikely pairing at first glance, there are philosophical and ethical principles within Buddhism that can inform the development and application of AI technologies.

One aspect where Buddhism can contribute to AI is in the realm of ethics and compassion. The emphasis on kindness, empathy, and non-harm in Buddhist teachings can serve as a guiding principle for the ethical design and deployment of AI systems. In a world where AI increasingly impacts human lives, incorporating these values can help mitigate potential harm and ensure that AI technologies are aligned with the well-being of all beings.

Additionally, Buddhism encourages mindfulness and awareness of the present moment, which can be valuable in AI development. Mindfulness practices can aid in creating AI systems that are more attuned to human needs and behaviours, leading to more effective and user-friendly technologies.

Moreover, Buddhist philosophy offers insights into the nature of consciousness and the mind, which could potentially inform the development of AI systems capable of more nuanced understanding and interaction.

However, it's essential to approach this intersection with nuance and sensitivity, recognizing that AI and Buddhist teachings come from different contexts and worldviews. While there are areas of convergence, there may also be tensions or contradictions that need to be navigated thoughtfully.

Overall, allocating resources for research on the relationship between Buddha's teachings and AI reflects a recognition of the importance of integrating ethical and spiritual considerations into technological advancements—a step towards creating AI that not only enhances human capabilities but also promotes human flourishing and well-being.”

With such an insightful response, perhaps President Wickremesinghe could save the allocated One Billion Rupees and divert it to more pressing needs in the country.

Professor Janek Ratnatunga is CEO of the Institute of Certified Management Accountants of Australia & New Zealand (CMA ANZ).
