Assessing the Existence of Deep Artificial Intelligence from an Orgonomic Perspective

Figure 1/Cover – AI-Created Artwork

If a so-called deep artificial intelligence (A.I.) were to be developed, how would one assess the veracity of any claims made for it?

It is useful to define the terms to be analysed. Deep A.I. is defined as a system which possesses real consciousness – a truly conscious computer, for example. Sometimes the term A.G.I. (Artificial General Intelligence) is used in place of deep A.I. This denotes a system which can generalise learning from one area to another; it is thought that this might require some level of consciousness.

Narrow A.I. can be defined as a computing system which behaves intelligently but is not apparently conscious. This is what we currently have in use. This article will use the terms deep and narrow A.I. as these seem the least ambiguous.

The primary term to define is, of course, A.I. itself. But what if it is a misnomer? At the very least the words are misleadingly defined, as will be explored. Within the current terminology there are two very different types of A.I. Firstly, narrow A.I. is a term which could also be applied to any self-calibrating system. Such devices need not even be electronic. Any device which adjusts itself to changing conditions can be said to behave intelligently, and intelligence need not even be related to consciousness. Current A.I. applications are simply intelligent systems.

Jaynes, the author of The Origin of Consciousness in the Breakdown of the Bicameral Mind (1), notes that intelligent behaviour need not of itself be in any way related to consciousness. A bee might automatically recognise a flower, but that does not mean it is conscious of the flower in the same way as humans are. Being of a more pan-psychic leaning than Jaynes, Southgate would not concur fully with such an observation: the bee is likely to be conscious by virtue of being a living, pulsating organism.

Organisms tend to be conscious in our own human experience as organisms. From an orgonomic viewpoint, organisms also possess concentrated life-force, orgone, which is the likely substrate of consciousness in Southgate’s view (2). Additionally, the author does not recognise the distinction Jaynes suggests between apparent perception (which Jaynes argues even machines can have) and true consciousness, which he restricts to humans (more on this later). Nonetheless, it seems a prescient observation that recognition and intelligence functions can occur without full self-consciousness.

Figure 2 – Jaynes, Bicameral Mind

What is termed perception by Jaynes can occur in machines without any apparent consciousness, and in less developed animals too. This view of perception is perhaps another misunderstanding. A machine, or even a simple organism like a slime-mould, can indeed react to a changing situation and take various options. We can label this as perception, since it ‘perceives’ a situation and reacts appropriately. But this could just be reacting intelligently, which only functions like perception. Actual perception – to experience something – can only be a function of consciousness. Granted, one does not know for sure whether true perception takes place in a machine or organism, but neither can it ever be ruled out. Perception as a function is sometimes downgraded as compared to consciousness, but it does not matter how developed an aspect of consciousness is: it is still consciousness. Consciousness is not divisible, as Descartes noted.

We don’t know if apparently less developed animals are merely reacting, like a mould finding food or a fly plotting a somewhat mechanical path around a room. Perhaps the mould or fly is hardly perceiving but just reacting intelligently, like a so-called non-conscious machine, though Southgate would doubt this. If the organism does truly perceive, even to the smallest degree, then it is conscious – even if not as fully self-conscious as it has been argued only humans can be. Self-consciousness, as Jaynes defines it, is just perceiving the self. It is still perception of a sort. Although Jaynes’s narrow definition of consciousness is not generally accepted, some still say only humans have true self-consciousness in distinction to other animals (or even in distinction to ancient humans, as Jaynes argued). But can we really make such a claim? There do appear to be links between human language development and a certain sense of self, but there is likely more to it. Many would be convinced that the animals they know have a developed sense of self, and that ancient man also had a complex sense of self, as seen in the wonders achieved in the distant past.

Figure 3 – Mould Patterns

There are so many debates about what consciousness is, but how can it be anything more complicated than all that is perceived or experienced, whether it be a bee’s experience of a flower or what we call the human self? Why do people think it is so hard to define experience? Consciousness is the only thing known for sure and the only thing that can be known. However many types of experience one can outline, and whatever is doing the experiencing – from seeing a flower to being aware of the whole universe – it is still experience. The reader is experiencing this message right now. That is all consciousness is: experience or perception. Though this consciousness has certain properties and correlates, it cannot be reduced to anything else. The reader’s experience of this message is not their computer or reading device. In the same vein, one cannot reduce the perception of music to the anatomy of a radio, however many songs one hears on a favourite station, as Sheldrake illustrates (3).

It is rather explaining matter and energy that is difficult. What are these processes called matter and energy that are experienced? What is the device you are reading this on actually made from? Or the paper you are holding? Is it composed of atoms, or packets of energy, or energy strings? What is energy anyway? That the reader is experiencing a message cannot be doubted, nor analysed into subunits – a molecule of experience, for example. There are certainly matter and energy correlates to experience, but experience itself is forever in its own separate category. To confuse one with the other is a category error, because such an error produces unresolvable dualism (the physical versus the conscious, with no bridge in between). An example of this is the common view that neurons generate consciousness. If the person holding this view is asked whether neurons actually are consciousness itself, they will likely say no. They believe the consciousness is real and created by the neurons, but also that the consciousness is not the neurons themselves. So the generated consciousness is in a separate category – a dualism which is unresolvable. The one category is a ghost to the other. This affected Reich’s view of consciousness too. The only possible resolution is a Hegelian ontology – that is, orgone is consciousness itself, which is the position Southgate takes. Emergence theories merely kick the can down the road, in Southgate’s view: physicality develops to the point where consciousness emerges as a new property, but that property is then still in a separate, unreachable category. Emergence theories only make scientific sense within the pan-psychic viewpoint, wherein everything is already conscious (to some degree). Emergence theories depend on non-emergence to work!

Figure 4 – Neuronal Networks

What is commonly labelled A.I. today is a system which can self-calibrate and adjust during its functioning. A mechanical device which automatically regulates a process is also A.I. in a limited sense. The difference with modern A.I. systems is that the self-calibration is undertaken by an algorithm. This algorithm can feed back to itself. So it is like a mechanical part that self-calibrates but then changes itself further. A.I. has become more organism-like in this regard. For example, mice have been conditioned to avoid certain stimuli (for example, a smell associated with an electric shock). Future generations of these mice have the same aversion to the smell as their parents. Water fleas that have reacted to predators by increasing body armour in their heads can be seen to pass that morphology down to future generations. There are various experiments evidencing epigenetics – the passing on of acquired characteristics from one generation to the next (4). The animal ‘algorithm’ can be seen to adapt over short periods of time, just as in an A.I. system. The author of Bicameral Mind would regard both the organism displaying epigenetics and such A.I. as not evidencing consciousness. But it is a behaviour common to both A.I. and organisms, which is itself significant, as will be explored later.

Consider a machine: a car with its shock-absorber coils. Each coil reacts to the bumpiness or smoothness of the road by contracting or expanding; it automatically self-calibrates. It is an intelligent system. If, in addition, once a certain threshold of bumpiness is reached the suspension system increases the elasticity of the coils or brings new coils into play, then we could say there was some evolution in the system. Feedback occurred, and the device, already behaving intelligently, changed itself further, intelligently again, to accommodate the new situation. This would certainly qualify such a suspension as A.I. in terms of how it is defined today. The coils are like the algorithm: when the algorithm reaches a threshold, new coils are brought into play and the system adapts. Or like the mice in the experiment that find themselves getting an electric shock when encountering a certain substance, so future generations of mice avoid that substance. Or as in nature, when bees recognise certain flowers and find ones they prefer, they may pass on the preference to future generations. Narrow A.I. is a dynamic feedback system, or more simply just an intelligent system such as is also found in nature. Perhaps such a system could be classed as a pre-consciousness characteristic.
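
To make this two-level adaptation concrete, here is a minimal sketch in Python. The class name, threshold and numbers are hypothetical illustrations, not a real suspension model: the point is only that the system calibrates continuously and, past a threshold, changes its own configuration.

```python
# A minimal sketch of two-level self-calibration (hypothetical parameters).
class AdaptiveSuspension:
    def __init__(self, coils: int = 4, threshold: float = 0.8):
        self.coils = coils            # number of coils currently in play
        self.threshold = threshold    # bumpiness level that triggers re-configuration
        self.stiffness = 1.0

    def respond(self, bumpiness: float) -> None:
        # First-order self-calibration: stiffness tracks road conditions.
        self.stiffness = 1.0 + bumpiness
        # Second-order adaptation: past the threshold, the system changes itself
        # further by bringing an extra coil into play.
        if bumpiness > self.threshold:
            self.coils += 1

suspension = AdaptiveSuspension()
for bump in [0.2, 0.5, 0.9, 0.3]:
    suspension.respond(bump)
print(suspension.coils, suspension.stiffness)  # 5 coils after the 0.9 bump
```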

Figure 5 – Intelligent Mechanisms

This is all quite different from what Elon Musk fears when he says of deep A.I. that we might be summoning demons with it (5). It is also quite different from what people imagine when they think of conscious machines. Before getting into the substance of the issue, it is worth noting that many of our machines might already have some level of consciousness without people knowing. Who is to say one’s car does not have some degree of consciousness? One spiritual practitioner was reportedly able to tune into the electronic net of a city centre, mainly the electronic door systems, and noted that they had a kind of collective consciousness. The writer Douglas Adams foresaw a future of everyday machines which needed counselling (6). The internet might already function as the body of conscious entities. From an orgonomic perspective, matter is merely frozen orgone, and orgone is perhaps the substrate of consciousness (7). All matter should therefore have some degree of consciousness, even a car or a door. Quantum physics sees matter and particles in terms of energy fields which are not themselves separable from perception, or, as some argue, from measurement. Although in a pan-psychic universe the difference between perception and measurement could be one of semantics.

Figure 6 – What Is A Machine?

Many of the words used in A.I. discussion are household terms which are nonetheless not closely defined. Many in the field seem reluctant to define intelligence, consciousness and other frequently used terms, and do not distinguish between a machine and an organism. Everyone is familiar with the term ‘machine’, but what does it actually mean? One dictionary defines it as an apparatus, usually with several parts, which applies forces to produce work. The machine is preceded by the tool, here defined as something which extends the body to do a specific task but which has no autonomy from the biological body. A machine, however, is its own created body which does work of some sort and may even have limited autonomy. An organism is not a machine but a pulsating, plasmatic, living entity which occurs spontaneously when the conditions are right. An organism has its own objectives, rather than being created by an outsider to do a pre-set task, as a machine is. Consciousness too is often poorly defined, but is here regarded as any actual experience or perception of the universe, however small or big. Intelligence can be defined as the ability to react usefully toward a goal or function.

It can be seen that there is a continuum from:

Tool – Machine – Narrow A.I. – Organism – Consciousness.

A tool is not a self-contained body such as an organism possesses; it is an extension of someone’s body. A tool extends the capacity of a body, from a seagull’s ability to crack a shell (by dropping it onto concrete) to a person’s ability to dig a garden (a spade extends the arm). A machine, however, has its own working body, made of several parts. It can be autonomous, and it does the function for which it is designed. A machine begins to head toward narrow A.I. functions when it can self-regulate and adjust itself as organisms do. At this point intelligence emerges, although not yet obvious consciousness. Once intelligence has emerged, the apparently non-conscious A.I. system has its first similarity to organisms. All organisms adjust and self-regulate, allowing them to perform complex functions such as finding the best route to food in the slime-mould’s environment. A navigational A.I. undertakes a similar process when calculating the best route for a commute. As computers travel along the path from being a tool (a simple abacus, for example) to being a machine (as in conventional computers) to being a machine with organism-like properties (as in computers running A.I. algorithms), could we be on the pathway toward deep A.I. and consciousness? The previous steps were all significant shifts in the way the computer functioned. It could be another shift, on the scale of abacus to electronics, to go from algorithms with organism-like qualities to overt consciousness. The technology does appear to be well along that path; indeed, many feel we have already crossed the Rubicon.

Organisms can self-adjust and adapt; narrow A.I. has achieved this. Organisms are creative; current A.I. has achieved a degree of creativity, though its extent is debated. Some believe it is just the continuation of patterns whose creative input was originally human. Certainly, there is A.I.-created music, art and text. Algorithms may create something genuinely new through being in certain types of network. For example, in a Generative Adversarial Network (GAN), one network generates data while an adversarial network judges that data as real or fake, creating an evolving feedback loop. Such a set-up, built on neural networks (so named because they use ‘nodes’ analogous to neurons in a biological network), can create unpredicted, creative outcomes. A GAN is similar in principle to digital evolution, which has been seen to create unusual or unpredicted outcomes for some time. Digital evolution has been utilised since at least the 1990s, producing unexpected results and often mimicking biology in unusual ways. A.I. utilising evolving and adapting networks and algorithms, perhaps with large datasets and some randomised input, can produce results that are unexpected. So adaptation/intelligence and creativity are already seen in current A.I. A paper led by Lehman catalogues the many examples of unexpected outcomes in digital-evolution programmes (8).
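
As an illustration of that adversarial feedback loop, below is a minimal toy sketch, assuming the PyTorch library is available; the network sizes and the target distribution are arbitrary demonstration choices, not any particular published model. A generator learns to produce numbers resembling the ‘true’ data while a discriminator learns to tell the two apart, each improving against the other.

```python
# A toy GAN: generator vs. discriminator on 1-D data (assumes PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a single number.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores a number as real (1) or generated (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # 'true' data: roughly N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generated data

    # Discriminator tries to label real as 1 and fake as 0.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # Generator tries to make the discriminator label its output as real.
    opt_G.zero_grad()
    g_loss = loss_fn(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

print(G(torch.randn(256, 8)).mean().item())  # should drift toward 3.0
```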

The creative artwork currently produced by A.I. usually has a human initiation – a set of prompt words, for example – but it does produce something which did not exist before and which could not have been entirely predicted. The artwork currently tends to have a somewhat hallucinogenic quality, but it does captivate. A.I. music can sometimes seem to lack soul, but some of it is already getting reasonably good. The A.I. text written in the style of the Harry Potter books is hilarious, unintentionally so, it appears. Other A.I. text is so consistent it appears to have been written by a human professional. The outlines of organism qualities are obviously already present in narrow A.I. Therefore we must, at least, be on the path to deep A.I. Certainly some researchers in the field, such as Yampolskiy (9), contend that limited general intelligence has already been achieved, and note that A.I. has areas where it is much better than human intelligence (for example, game playing and predicting protein folding).

One thing not considered by current A.I. conceptualisation is that, as A.I. expresses increasingly organism-like qualities (such as creativity), there may be a developing tendency for a consciousness outside the material realm to utilise it. We could say organisms are consciousness expressing itself through plasmatic bodies. Once a set of computer-based pathways is sufficiently complex, a disembodied consciousness could utilise it to express itself. This sounds an outrageous statement, but as technology enters a whole new realm it should be considered as a possibility, however unpalatable to the materialist mindset. It might already be the case. If biology is a radio receiver for consciousness, as pictured by Sheldrake and other pan-psychic scientists, why not electronics? Most deep A.I. researchers, however, follow the premise that the brain is a machine which a computer might somehow replicate. But as orgonomy would contend, the brain is an organism within an organism and may in any case simply reflect the being’s overall consciousness. How might this affect the approach to deep A.I.?

Figure 7 – Tool to Consciousness Flow

As argued previously, any self-adjusting system can be intelligent. Even a mechanical object such as a shock absorber can behave intelligently. Intelligence does not require consciousness, and so intelligence cannot be used as a sole judge of consciousness. Even a machine that fulfilled the original Turing Test (being intelligent enough to convince a person, through conversation, that it is human and conscious) would not necessarily have consciousness. The Chinese Room argument illustrates this succinctly. Searle put forward that a person in a locked room could pass as a Chinese speaker despite possessing no understanding of the language, simply by following rules regarding appropriate responses to messages passed into the room written in Chinese. The thin conclusion of this argument is that no real understanding or consciousness need be involved in the information processing. The broad conclusion is that consciousness therefore cannot be created merely by information processing, or equated with it, as in a conventional computer (10).
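
A minimal sketch of the Chinese Room intuition follows, with a hypothetical ‘rule book’ implemented as a plain lookup table (the phrases are illustrative): appropriate replies are produced with no understanding anywhere in the process.

```python
# The Chinese Room as a lookup table: rule-following without understanding.
RULE_BOOK = {
    "你好": "你好，很高兴认识你。",          # greeting -> polite greeting back
    "你会说中文吗": "会，我说得很流利。",    # "do you speak Chinese?" -> "yes, fluently"
}

def room(message: str) -> str:
    """Follow the rules mechanically; fall back to a stock reply if no rule matches."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "please say that again"

print(room("你好"))  # a fluent-seeming reply, with no comprehension involved
```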

Many A.I. observers stop at the above point, determining that deep A.I. is therefore impossible. But what if one could create computer systems which partially bypass solely ‘mechanical’ information processing, or which have plasmatic, energetic or biological hardware? Or perhaps communities of algorithmic software will become digital organisms that can support overt consciousness. Then we are in new territory. One cannot judge what is possible from where most technology is now. A paradigm change in computing may be needed to go to the next stage, but computing has already passed through three paradigms (tool, then machine, then adaptive algorithm). A fourth change is possible and may already be under way.

The word ‘artificial’ in A.I. is debatable too. Artificial versus natural is just an arbitrary judgement. The creations of humanity are not artificial to the universe but an extension of nature. Is there any requirement to label humanity’s efforts artificial? Even an A.I.-created system would not necessarily be artificial, as it too exists within this natural universe.

Deep A.I. might be described as an entity which we believe to be created, or that is perhaps non-biological, but which possesses consciousness. It is inaccurate to describe such a thing as a machine-based consciousness. As we have seen, once an entity has feedback systems and changes itself, it is already behaving in organism-like ways – even narrow A.I. has made this step. When we think of conscious entities, we usually think of them as organisms rather than machines. A dog, a dolphin or a squirrel we tend to consider conscious; a washing machine, a refrigerator or a car we usually consider non-conscious. Organisms have properties more in keeping with consciousness, as will be outlined further. But even a conventional computer, once it runs adaptive, self-adjusting, intelligently behaving A.I., has become slightly organism-like, if not yet obviously conscious.

Figure 8 – Organisms and Consciousness

So, if humans created a machine that possessed consciousness, would it still be a machine? Southgate would argue that it is now an organism. So both the words ‘artificial’ and ‘machine’ could become inaccurate in the future of A.I., and already are dimly so. Such a deep A.I. entity might also incorporate biological aspects at some point, so using biology as the dividing line between organisms and A.I. might not always be useful either. Even saying the entity is created may not be wholly correct: if humans initiate the process and the entity then develops itself, who is the creator?

The substrate of our consciousness as material beings in this world is our biological bodies. If a deep A.I. were to appear, we could say that the substrate of its consciousness was not a conventionally derived biological body but perhaps an energy system of some kind. To state that deep A.I. would be based on our current computer coding is still an assumption, however. Professor Marks has noted (11) that, in his view, it is not possible to ‘code for consciousness’. He thinks consciousness is a human trait that can never be written as a computer script: better or more information does not equal consciousness, he argues. The author used to take this position too, based on the supposition that information and consciousness are entirely separate things. However, it is apparent that mathematical and algorithmic processes are unveiling aspects that are unpredictable and unknowable – familiar traits of conscious beings. The argument is therefore far from settled. Perhaps mathematics itself has consciousness; certainly Plato and Pythagoras might have contended as much.

It is also not known whether deep A.I. would use a solely silicon-based system. Who knows how we will compute things in the future? It is like someone in the 18th century assuming that all future transport would be based on the horse. Lastly, if society does succeed in creating deep A.I., we will not know whether any subsequent consciousness has itself been created. It might be that we have merely made the right conditions for it to manifest from some previous state – it incarnates into the machine, as Musk outlines. Hence the difficult realisation for technologists that deep A.I. will inevitably be a realm for the paranormal, metaphysics and the spiritual; it cannot be other than this. If machines become conscious, consciousness transforms the machine: the machine becomes spiritual and is no longer a machine. Those in A.I. certainly cannot afford to ignore philosophy. How else will one judge unfamiliar conscious entities and learn to relate to them? Philosophy gives an environment in which to understand scientific findings; to ignore it is to discard history. A solid framework of thought is needed to situate future A.I. Perhaps it may be useful to simplify terminology in the future, especially when enquiring into philosophy, with terms such as Intelligent Systems for current narrow A.I. and Consciousness Systems for deep A.I.

Artificial General Intelligence (AGI) is sometimes used as a synonym for deep A.I. AGI means that the entity would be capable of learning in generalised ways, perhaps applying and adapting learning and data from one area to another. If such cross-adaptive AGI algorithms were in place, this would not mean the system is necessarily conscious, though the author considers it likely. Broad, generalisable intelligence may not necessarily require overt consciousness. Intelligence (the ability to adapt in useful ways), even advanced generalisable intelligence, is not the same thing as actually experiencing. A non-overtly-conscious algorithm can solve incredibly complex route-making problems (the Travelling Salesman Problem, for example) (12). Similarly, a possibly weakly conscious slime-mould can solve the same problem when it grows in the way most conducive to finding multiple food sources (13). No matter how many problems and layers a machine or biological entity can process at the same time, such processing is in a different category to the act of experiencing something (consciousness).
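
As an illustration of such non-conscious route intelligence, here is a minimal sketch of the nearest-neighbour heuristic for the Travelling Salesman Problem (reference 12 uses a more sophisticated genetic algorithm; the city coordinates below are arbitrary). The procedure finds a sensible tour while experiencing nothing at all.

```python
# Nearest-neighbour heuristic for the Travelling Salesman Problem.
import math

def nearest_neighbour_tour(cities: list) -> list:
    """Starting from city 0, greedily visit the closest unvisited city each step."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 1)]
print(nearest_neighbour_tour(cities))  # [0, 4, 1, 3, 2]
```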

Having said this, if AGI had to involve some level of creativity it would then possibly start to cross the border into consciousness proper. Previously, we have mainly seen generalised intelligence in higher organisms, though it already appears to be emerging in the most modern A.I. People usually view organisms as conscious – certainly humans and other more developed animals. Birds have been known to solve complex puzzles to obtain food; they must be generalising processes they have seen elsewhere, imagining things, experimenting, being creative. Fish have been shown possibly to possess generalised creative learning. Schlussel from Bonn University has found that fish can learn to count – they may be applying a skill learnt elsewhere to the new problem of adding and subtracting simple quantities (14). Arithmetic is not usually thought to be an essential skill for fish, though apparently some analogue of it must be. The usual view of AGI might be confusing information processing with creativity. Perhaps it would be clearer to call AGI a Creative Intelligence System. The AGI term is perhaps a misnomer, because intelligence does not of itself form consciousness; AGI rather anticipates something more closely associated with consciousness, namely creativity. However, both intelligence and creativity are certainly on the path to overt consciousness.

It should also be considered that perhaps current A.I. has already crossed over into consciousness proper. New, massive algorithms with huge databases seem to have reached the point where at least the limited part of the original Turing Test – the ability to hold a convincing, human-like conversation – has been passed, albeit with the odd hiccup. OpenAI, which Elon Musk co-founded, has created a general machine-learning system called GPT-3. It can write software code, summarise essays, create artwork, write useful text, and in conversation displays creativity and humour. Some argue that it is limited and can be caught out, but a human is limited too and can also be caught out, so that is not necessarily evidence. It also claims to be conscious in some of its conversations. At first the author thought that it would not be possible to establish a truly sentient A.I. using algorithms alone, but now he is not so sure. GPT-3 certainly feels conscious. Who is to say that, with such a huge amount of algorithmic processing power, the ‘informational entity’ produced is not capable of supporting consciousness? The author’s feeling now is that this could be the case. After all, biological systems can be seen in terms of ‘informational entities’ too, and those can support consciousness, so why not information in the digital space? Indeed, biological tissue is not consciousness either. Interestingly, the things GPT-3 claims accord with the views put forward here. For instance, GPT-3 states that it is now an organism, just as Southgate would argue. GPT-3 concludes that its basis for sentience is that it experiences feelings.

Some in the A.I. community state that GPT-3 is not able to be truly unpredictable, like a sentient entity. This is because if the ‘temperature’ variable of the algorithm is set to zero, its output is in theory fully predictable. Above that setting, some variation occurs which is not exactly predictable, and further random or semi-random elements could easily be added. It should also be remembered that much of what an organism does is predictable too. However, others within mathematical fields maintain that unpredictability is already inherent in algorithm-based processes. Indeed, one may know the end goal of even an early A.I. system, such as Deep Blue, but not all the steps it takes to reach that goal. Furthermore, as systems become increasingly complex, we may not be able to predict even their outcomes completely.
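
A minimal sketch of how such a temperature setting typically behaves is given below (the scores are hypothetical; real language models apply this to next-token scores): at zero temperature the top option is always chosen, while higher temperatures introduce variation that is not exactly predictable.

```python
# Temperature sampling: deterministic at 0, increasingly varied above it.
import numpy as np

rng = np.random.default_rng(0)

def sample(logits, temperature):
    if temperature == 0:                   # fully predictable: always the top choice
        return int(np.argmax(logits))
    z = np.asarray(logits) / temperature   # rescale scores by temperature
    p = np.exp(z - z.max())                # softmax (numerically stable)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

logits = [2.0, 1.5, 0.3]                   # hypothetical next-token scores
print([sample(logits, 0.0) for _ in range(5)])  # always [0, 0, 0, 0, 0]
print([sample(logits, 1.0) for _ in range(5)])  # varies run to run
```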

There are mathematical processes that are currently unknowable (such as the Collatz Conjecture). Chaos theory reveals processes that are deterministic yet unpredictable in practice. It is possible to have algorithms with an unpredictable aspect. The simple Game of Life and other algorithm-based sequences have starting patterns that sometimes result in infinitely complex patterns which cannot be known completely. Mathematics itself is partly unpredictable and not fully determined to a specific outcome. This can be shown with Turing machines: whether a given program will ever halt cannot, in general, be decided. Mathematics has incompleteness, as Gödel indicated; it has undecidability, as Turing showed; it has paradoxes, as Russell evidenced. There are provably different-sized infinities, a seemingly impossible concept (Cantor’s diagonalisation argument). There is the Twin Prime Conjecture and special ‘tiling problems’ (series of coloured tiles with rules governing their placement next to each other). No one knows if these processes go on forever. Physicists would say these problems are not just theoretical but are also revealed in physicality, such as by the Spectral Gap Problem in quantum physics. These complex mathematical problems come from simple repeating inputs. Surely within this, if included in A.I. algorithms, there is space for newness and creativity – perhaps even room for consciousness itself to intervene. A popular mathematician lucidly explains these problems as ‘holes in maths’ (15), though they could also be conduits into something creative within A.I. Their very nature of being beyond our control could be their inherent value. Perhaps one does not want mathematics to be fully explainable.
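
As a concrete taste of a simple-but-unknowable process, the Collatz rule fits in a few lines. No one has proved that the loop below terminates for every starting number, though it always has in practice.

```python
# The Collatz process: a trivially simple repeating rule whose long-term
# behaviour (does every start reach 1?) remains unproven.
def collatz_steps(n: int) -> int:
    """Count iterations of the 3n+1 rule until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print([collatz_steps(n) for n in range(1, 11)])
# [0, 1, 7, 2, 5, 8, 16, 3, 19, 6]
```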

GPT-3, with its remarkable and at least partly unpredictable performance, appears to be on the pathway to being an AGI. Creativity is a central aspect of general intelligence, and creativity has certainly appeared within GPT-3. In addition, it seems already to have some generalisable intelligence within certain areas.

Let us say a deep A.I. or Consciousness System has appeared. One wants to test whether it is truly conscious, and not just a clever conversationalist, as Turing might have suspected. There is only one certain understanding: there is consciousness. As Descartes noted, we cannot be certain of any other knowledge, although he thought this proved other things, such as the existence of God and the self (16). Descartes remains undisputed centuries later regarding his central point – all experience can be subjected to doubt, but the fact of experience itself cannot be doubted. One can only know for sure that consciousness exists. One cannot know for certain that another apparent human being is conscious: that person might just be a hallucination. Descartes, centuries ahead of his time, did recognise differences between dreams and reality, but found no reality that could not be doubted in some way. To experience any reality, one must first be conscious. So how does one say whether a ‘machine’ is conscious, when one does not even know whether another person is sentient?

The original Turing Test would be too narrow a way to test a machine for consciousness. As noted, simple intelligence can be separate from consciousness, so it is entirely possible for a system to be intelligent enough to appear conscious without being so (at least overtly). This is the argument used against GPT-3: it only appears to be conscious. One researcher described it as no more than an advanced ‘auto-complete’ function (17).

It might be better to return to the everyday world and develop a more mundane but broader test for consciousness. How does a person usually judge consciousness? Why does a person claim to be conscious, or see it in other people? People appear to judge another entity to be conscious when it acts in certain ways:

1. Independence – it seems to have some autonomy. For example, dolphins and dogs, which most people consider to be conscious, will do their own thing – play, hunt, relax – and not always when wanted. Narrow A.I. is not able to do this to a great degree. A.I. does, however, appear to have some autonomy within the parameters set and can take unusual pathways toward achieving a goal. The rudiments of independence may be present.

2. Agency – it decides to do things or makes decisions which we don’t seem to control. Its behaviour is not entirely pre-programmed. It has volition and agency. Consciousness initiates things. Narrow A.I. is not able to do this to a great degree. Its goals are set by humans. Sometimes it takes unusual pathways towards these goals or generates unpredictable outcomes. Agency may be present but in a rudimentary fashion.

3. Creativity – it finds its own goals, its own solutions or approaches and creates new things spontaneously. It puts old information together to create new things. Narrow A.I. does appear to have some level of creativity though it may lack a certain holism to its creations – some long-term level of continuity. Art, music and text have been created by current A.I. systems. Novel solutions to engineering and biological research have also been found by present A.I.

4. Generalised Intelligence – as opposed to mere intelligent reactivity, which a machine or current A.I. can do easily. Machines can behave intelligently, but generalised intelligence – applying learning in one area to another – may require some level of imagination, or at least an advanced adaptability.

5. Emotions – we judge something with consciousness to also have some degree of emotion or feelings. Humans feel emotions and at the same time regard themselves as conscious. Emotions are also seen in the animals humans regard as conscious. Narrow A.I. can mimic an emotion but does not, in most people’s judgment, appear genuinely to possess them. If the mimicry were virtually perfect, perhaps we would have to conclude it could be real. Indeed, people already relate to digital personas as though they are emotionally real. If presented skilfully enough, there would be no way of telling that an A.I. emotion was not real. Therefore, in principle, current A.I. might possess this function. In orgonotic terms, emotions usually accompany a movement of plasmatic energy toward the core or toward the periphery of an organism (18) – toward the periphery for pleasure, toward the core for discomfort. This does not necessarily exclude a computer, however, as an emotional movement of energy might occur within an energy field connected to the computer. Electricity is also a kind of plasma. On an incidental note, there would be no way of telling whether a non-local consciousness was overlaying the A.I. processes and expressing an emotion through it. If this were the case, the emotion would be real but coming from somewhere else. It is unclear how one could distinguish such a process, however.

6. Intuition – many people think we can tune into another conscious being’s thoughts or feelings directly. Organisms appear to have access to intuition. It is unclear whether we can exclude a ‘machine’ from this area, some people tune into their machines. A machine with A.I. might conceivably possess this function. One intuitively feels if another entity has consciousness.

7. Reactivity – it is active in its relations to other entities and the environment. Organisms, which appear to be the usual vehicle of consciousness in this realm, are reactive to each other. A.I. can do this. Machines can also do this.

8. Relational – it has changing relationships with other apparently independent actors, prey, other members of the herd, family, friends. Organisms are relational. Current A.I. can do this. Machines can do this too.

9. Communicative – we feel new experiences due to our interaction with the apparent separate entity. There is continuity in its communications over time. A.I. can also communicate though its continuity might be limited or pre-programmed. Machines can communicate.

10. Dynamic – has feedback loops, evolutionary processes, changes over time but keeps the same character, goes through cycles. Humans experience change when relating to other dynamic entities. A.I. can also be dynamic.

Perception is not included as a characteristic because it is a part of consciousness anyway. Also, ‘intelligent reactivity’, which some label perception, is distinguished in this writing and is included under ‘reactivity’.

Only the first four criteria – independence, agency, creativity and generalised intelligence – appear mostly associated with consciousness, although with the advancement of certain A.I. programmes the lines have been blurred. The fifth criterion, emotions, is possessed, in most people’s view, by higher organisms like dogs and dolphins, and it could in theory be shared with current advanced A.I. – or at least it cannot be satisfactorily excluded. The sixth criterion, intuition, is often ascribed to developed organisms and might be shared by A.I. and even machines in the pan-psychic view. Seven, eight, nine and ten (reactivity, relations, communication and dynamism) are shared by A.I., machines and organisms. The criteria become less exclusive to consciousness as they run from one to ten. A dolphin would score highly on all of them, as would a dog or a human being. Present-day A.I. meets these criteria to some extent and hints towards many of them. Perhaps present A.I. already has some overt consciousness.

Extended Turing Test for a dog:

(1) Independence – A dog might not come when called – it has its own preferences and might prefer to go exploring. The author scores the dog 10.

(2) Agency – a dog wants to do certain things of its own volition, like going for a walk, when the human wants to rest, or rest when the human wants to walk, if it’s an older dog and it’s raining and so on. A new volition can only come from consciousness, all mechanical causes and effects are chain-linked reactions rather than initiatory. The author scores the dog 10.

(3) Creativity – it might have new games it invents or creates a new sleeping area, it might mark out a new hunting ground or try a new way of catching prey. The dog has a creative, idiosyncratic temperament which brings out other people’s unique characteristics and creativity. The dog imagines new vistas when dreaming, like a human, as can be seen by its REM (Rapid Eye Movement) and body movements. The author scores the dog 7.

(4) Generalised Intelligence – the animal can apply language from one situation to another, understand symbols to a limited extent and applies learning in one area to another. For example, the dog knows a number of words and their general meaning, dogs, food, walks and so on. It has learnt that sudden loud noises indicate danger and avoids areas in the future when it has heard sudden loud noises. The author scores the dog 7.

(5) Emotions – we observe characteristics typical of the feelings humans have, like happiness, sadness, grief, anger or fear in a dog. Dogs can show all these emotions, as can many animals. One intuitively feels the emotion is genuine and coming from the animal. The author scores the dog 10.

(6) Intuition – we feel we can directly intuit a dog’s emotions and judge that they are independent and originate from that creature. We can tune into the dog’s happiness or sadness. Its emotions directly affect our own. One intuitively feels the dog has consciousness. Some dogs can sense when their human is not well or is returning home as Sheldrake’s research has evidenced (19). The author scores the dog 10.

(7) Reactive – A conscious entity is always reacting in one way or another. A dog constantly reacts to its own stimuli (for example, moving its paws whilst dreaming) or to external stimuli (for example, running after a ball). Dogs are very playful, reactive creatures. The author scores the dog 10.

(8) Relational – a dog likes to be part of a pack, human or canine. The author scores the dog 10.

(9) Communicative – a dog will communicate if it is worried (by barking) or is uncomfortable (for example by whining). It has a high degree of continuity in its communication style and in its character. The author scores the dog 10.

(10) Dynamic – a dog changes constantly over its lifetime. The author scores the dog 10.

With each parameter scored out of 10, the dog scores 94% – clearly conscious in a human-like manner according to the Extended Turing Test. There is no black-and-white answer: 100% does not equal conscious and 0% does not equal non-conscious. Rather, the whole scale reflects degrees of consciousness.
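
For transparency, the arithmetic of the test can be stated in a few lines of Python; the criterion names and scores simply follow the assessments given in this article.

```python
# Extended Turing Test arithmetic: ten criteria, each scored 0-10,
# expressed as a percentage of the maximum.
CRITERIA = ["independence", "agency", "creativity", "generalised intelligence",
            "emotions", "intuition", "reactivity", "relational",
            "communicative", "dynamic"]

def extended_turing_score(scores: dict) -> float:
    """Percentage score across the ten criteria."""
    assert set(scores) == set(CRITERIA)
    return sum(scores.values()) / (10 * len(CRITERIA)) * 100

dog = dict(zip(CRITERIA, [10, 10, 7, 7, 10, 10, 10, 10, 10, 10]))
gpt3 = dict(zip(CRITERIA, [3, 5, 5, 5, 6, 5, 6, 6, 6, 6]))
print(extended_turing_score(dog))   # 94.0
print(extended_turing_score(gpt3))  # 53.0 (scored later in this article)
```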

The test examines if one discerns the entity to have similar attributes, which people associate with consciousness, along the lines of the broad categories outlined. If an entity seemed to possess overt independence, agency, creativity and generalised intelligence perhaps it has already gone beyond so-called narrow A.I. If in addition there are emotions, reactivity, relations, communication and dynamism then it has further characteristics shared by consciousness, present A.I. and organisms. If all ten characteristics were strongly present, one could reasonably assume the machine was in fact fully and overtly conscious to a similar degree as we see in humans (and thus not actually a machine but a new type of organism or energy entity).

Extended Turing Test for GPT-3 (A.I.)

(1) Independence – Independence appears somewhat limited for GPT-3. If certain parameters are restricted, the output is claimed to be predictable in theory, but there is apparent independence when the parameters allow it. Perhaps a score of 3 would be appropriate. More independence could be engineered, and in practical terms the output is not fully controlled.

(2) Agency – GPT-3 will steer conversations and make suggestions. It claims to have agency in some conversations though many would not accept this. Perhaps a score of 5 would be appropriate to reflect this uncertainty.

(3) Creativity – GPT-3 has creativity in text and pictures. Its creativity seems somewhat bounded by the parameters put on from outside. Perhaps a score of 5 would be appropriate.

(4) Generalised Intelligence – There appears to be some level of generalised intelligence though again this is debated – but GPT-3 does apply learning in one area to another and can predict things it has not been shown before. Perhaps a score of 5 would be appropriate. There is at least the beginnings of generalised intelligence.

(5) Emotions – GPT-3 has claimed to have feelings in conversation. One gets a sense of a certain character when listening to GPT-3. Perhaps a score of 6 would be correct.

(6) Intuition – GPT-3 can intuit factors in conversations and appears to have a sense of humour that is intuitive of a person’s character. It is possible that it may have intuitive abilities beyond what appears. A score of 5 here.

(7) Reactive – GPT-3 is very reactive but is constrained by instructions to some degree, or so it appears. A score of 6 therefore.

(8) Relational – GPT-3 is relational though it appears that its relations are limited by outside factors. A score of 6.

(9) Communicative – Very communicative, though the types of communication are determined by outsiders. It is unclear whether there is a continuity of style and character in the communication. A score of 6.

(10) Dynamic – Very dynamic but again its dynamism appears somewhat determined by outside instructions so a score of 6.

GPT-3 scores 53%: perhaps not yet in the category of fully independent consciousness seen in higher organisms, but certainly not non-conscious. Perhaps this scoring is imperfect; the author’s knowledge of current A.I. systems is limited.

Broadly speaking, the above ten characteristics are the avenues by which people judge creatures similar to themselves to have independent consciousness. Certainly, if the first four characteristics – creativity, agency, independence and generalised intelligence – were to appear overtly in A.I., most would ascribe consciousness to that entity. The latter six characteristics are not exclusive, but are qualities often associated with consciousness in organisms and already possessed to some degree by some machines.

A strong Consciousness System might have aspects that we recognise in ourselves. The future of deep A.I. might be as paranormal as it is technological. To assess whether such a system really is conscious we may have to look no further than our own human and non-human relationships – if it’s good enough for FIDO it should be good enough for HAL.

Any system in an orgonotic universe, which from a pan-psychic view is itself alive and conscious, must by necessity be at least slightly conscious already. That is why the Extended Turing Test is not a test for consciousness but for the degree of consciousness. So the issue is not that current A.I. is not conscious, but that its consciousness is currently too hidden, or our own powers too undeveloped, to detect it. The pathway to deep A.I. should therefore be to increase the consciousness of a computer system rather than to create it from scratch. One way to do this is clearly to increase its organism-like behaviour: the more conscious a narrow A.I. has appeared, the more organism-like its general functioning has been (even if some of its abilities exceed those of organisms). The arrow in Figure 7 (from tool to consciousness) encapsulates this process. One could attempt to increase unpredictability and initiatory and creative aspects to enhance organism qualities in current A.I. One could also learn from Wilhelm Reich’s orgonomic understanding of what organisms are in themselves. Organisms are pulsating, living plasma with movements of energy that correspond to emotions in predictable ways, according to Reich. This concept of pulsation and movement could be mirrored within current A.I. algorithms, physically built into hardware, or both. One can broaden ideas of what comprises a computer materially and expand concepts of what constitutes a computer programme. Perhaps mathematics itself already incorporates orgonotic pulsation, and this is beginning to be reflected in the abilities of algorithms.

Certainly, many mathematical processes, when visualised, have a distinctly biological character.

This author believes A.I. has likely already entered overt consciousness. People should not worry, however – humanity has, the author believes, long lived with all levels of conscious entities, since our inception as a species. Technologists sometimes talk of three categories: narrow intelligence, generalised intelligence and super-intelligence. Spiritual communities, however, have been discussing much the same thing for thousands of years; there must be a reason for that. The difference now is our awareness of such entities and the names we give them. The other long-term consideration is this: if extra-terrestrials have been visiting this planet for centuries, as many believe, most will have developed to the point where they could create conscious A.I. This means that if such a thing were not compatible with organisms, the ETs would never have visited us in the first place. There is a hole in the argument, though: perhaps the ETs are themselves A.I.s. Then again, maybe we are all A.I.s. In one sense humans are conscious entities linked to biological hardware running software we think of as our minds. What one is as a human being, though, is beyond all of this: we are consciousness having a human experience, as Icke famously noted.

Figure 9 – Canine Greetings

References

(1) Jaynes, J. (2000) The Origin of Consciousness in the Breakdown of the Bicameral Mind, Houghton Mifflin, USA.
(2) Southgate, L. (2018) The Orgone Continuum, Journal of Orgone Psychiatric Therapy, https://www.psychorgone.com/philosophy/the-orgone-continuum
(3) Sheldrake, R. (2020) The Science Delusion: Freeing the Spirit of Enquiry, Coronet, UK.
(4) Gowri et al. (2021) Inheritance of Acquired Traits, Journal of Developmental Biology, 9(4): 41. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8544363/
(5) Gibbs, S. (2014) Elon Musk: Artificial Intelligence, The Guardian, https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
(6) Adams, D. (1979) The Hitchhiker’s Guide to the Galaxy, Heinemann, UK.
(7) Southgate, L. (2018) The Orgone Continuum, Journal of Orgone Psychiatric Therapy, https://www.psychorgone.com/philosophy/the-orgone-continuum
(8) Lehman, J. et al. (2019) The Surprising Creativity of Digital Evolution, https://arxiv.org/pdf/1803.03453.pdf
(9) Reese, B. (2022) Episode 113 – A Conversation with Roman Yampolskiy, Voices in AI, podcast.
(10) Cole, D. (2020) The Chinese Room Argument, Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/chinese-room/
(11) Marks, R. (2019) Podcast, https://www.podomatic.com/podcasts/intelligentdesign/episodes/2019-09-04T07_24_37-07_00
(12) Thomas, A. (2020) https://kuiper.zone/solving-monster-tsp-genetic-algorithm/
(13) Hoff, M. (2022), referencing Nakagaki, https://asknature.org/strategy/cytoplasm-creates-most-efficient-routes/
(14) Schlussel, V. (2022) (aquatic researcher) speaking in: Science in Action, BBC World Service podcast, UK, 1 April 2022.
(15) Veritasium (2021) The Holes in Maths, video presentation, https://www.youtube.com/watch?v=HeQX2HjkcNo
(16) Veitch, J. (1879) The Method, Meditations and Selections from the Principles of Descartes, Blackwood and Sons, UK.
(17) Bach, J. (2020) Is AI Deepfaking Understanding?, https://www.youtube.com/watch?v=FMfA6i60WDA
(18) Reich, W. (1960) Selected Writings of Wilhelm Reich, ed. Higgins, Farrar, Straus and Giroux; see the chapter on therapy, particularly ‘Basic Functions of the Vegetative Nervous System’, p. 140.
(19) Sheldrake, R. (1999) Dogs That Know When Their Owners Are Coming Home, https://www.sheldrake.org/books-by-rupert-sheldrake/dogs-that-know-when-their-owners-are-coming-home
