Context
Consciousness
Dualist theory
Dualism is famously associated with the philosopher René Descartes. He proposed that the mind is a non-material, thinking substance, while the body is a material, non-thinking substance. This theory suggests that consciousness resides in the immaterial mind, which is distinct from the physical brain. In his book Hare Brain, Tortoise Mind: How Intelligence Increases When You Think Less (1999), Guy Claxton summed up this view in this way:
“Descartes’ legacy to the twentieth century is an image of the mind as ‘the theatre of consciousness’, a brightly illuminated stage on which the action of mental life takes place; or perhaps as a well-lit office in which sits an intelligent manager, coolly weighing evidence, making decisions, solving problems and issuing orders.”
The philosophical "hard problem" of consciousness, announced by Chalmers, arises from this dualistic perspective because it challenges us to understand how a non-physical mind can experience conscious awareness and how it can possibly interact with the physical body. Consciousness is often associated with the mind in dualism and is considered a non-physical phenomenon that cannot be fully explained by physical processes alone.
Over the past half century the science of consciousness falls into two periods. From the 1960s until about 1990 it was a fringe interest, a legacy of the long dominance of behaviourism. After 1990 came a flood of research into the brain basis of consciousness, mostly through neuroimaging. MRI scanning and EEG, together with invasive neurophysiology in non-human primates, underpinned a pragmatic approach to consciousness grounded in a materialist philosophy.
Materialist theory
In contrast to dualism, materialism affirms that everything that exists, including consciousness, is fundamentally physical and can be explained in terms of matter and its properties. Instead of confronting the "hard problem" posed by Chalmers, of how consciousness could arise from matter, neuroscientists have focused on the link between brain regions and conscious experiences.
Criticising the materialist approach, Schwartz, a US psychiatrist, wrote:
“The rise of modern science in the seventeenth century — with the attendant attempt to analyse all observable phenomena in terms of mechanical chains of causation — was a knife in the heart of moral philosophy, for it reduced human beings to automatons.”
Emergentism
The philosophical and scientific theory of emergentism argues that conscious experience results from the interactions between simpler brain elements. Consciousness is not a material property of individual neurons; it emerges only when certain conditions of organisation and interaction are met.
Daniel Siegel, clinical professor of psychiatry at the UCLA School of Medicine and the executive director of the Mindsight Institute, explains that
"Our mental lives — awareness, subjectivity, and the regulatory facet of the mind — are emergent processes that arise from both neural and relational processes and their interface with each other.”
Panpsychism
The philosophical theory of panpsychism affirms that some form of mental experience is a fundamental and ubiquitous feature of the universe: the material world enjoys some degree of consciousness, and even the smallest particles possess an inherent mental aspect.
Orch-OR (Orchestrated Objective Reduction)
This theory, proposed by Roger Penrose and Stuart Hameroff, suggests that quantum mechanics may have a role in consciousness. The idea is that awareness may arise from orchestrated quantum processes within microtubules inside neurons.
AI
Intelligence and Consciousness
One of the questions posed by AI is that of the definition of intelligence. Traditionally intelligence has been linked to the human cognitive capabilities of reasoning, learning, and problem-solving. AI has challenged this outlook by showing that machines can perform many tasks of these kinds. In 1950 Alan Turing proposed a test: if a machine could converse in a way indistinguishable from a human, it could be considered intelligent. The test has spurred advances in natural language processing. Critics assert that passing it is not equivalent to genuine comprehension, since machines might simulate conversational abilities without subjective experience or awareness.
John Searle’s Chinese Room thought experiment also challenges the idea that a computer programme can possess understanding or consciousness. Searle imagines himself in the computer's place, following a set of rules for manipulating Chinese symbols. By applying purely syntactic rules he can manipulate the Chinese characters to produce appropriate responses, though he does not actually understand Chinese. The point is that a machine could converse in correct Chinese with no true understanding or consciousness. Critics (the "systems reply") contend that understanding emerges from the interactions of the whole system, not from any individual component. Others say the experiment illustrates the need for a tighter synthesis of syntax and semantics in machine learning.
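As a toy illustration, a few lines of Python can make the syntactic point vivid (the rulebook and phrases below are invented for this sketch, not taken from Searle): the programme pairs symbol strings with responses by pure lookup, and meaning is represented nowhere in it.

    # A toy "Chinese Room": replies are produced by pure symbol lookup.
    # The rulebook is an invented example; the programme only matches and
    # returns strings, and no meaning is represented anywhere in it.
    RULEBOOK = {
        "你好吗": "我很好，谢谢",      # "How are you?" -> "I'm fine, thanks"
        "你会说中文吗": "会一点",      # "Do you speak Chinese?" -> "A little"
    }

    def room(symbols: str) -> str:
        # Apply the syntactic rules; fall back to a stock string otherwise.
        return RULEBOOK.get(symbols, "请再说一遍")   # "Please say that again"

    print(room("你好吗"))   # produces a fluent reply with zero understanding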
Weak/Strong AI
Narrow, or weak, AI refers to systems designed to perform specific tasks and solve well-defined problems. Such systems have no general cognitive abilities. Alexa and Siri are examples.
Strong, or general, AI describes systems able to understand, learn, and apply intelligence across a range of tasks, mirroring human cognitive abilities. Such an AI would not only perform tasks but also possess awareness and be self-conscious. It could plan, reason and grasp abstract ideas in a way comparable to humans.
Some researchers believe that advances in AI will lead to strong AI; others argue that human consciousness involves non-computational processes that machines cannot replicate. The pursuit of general AI raises new moral questions, such as the nature of machine consciousness and the risks of self-governing machines.
Ethics and AI
The moral considerations of AI include the social effects of the technology. As AI systems become embedded in healthcare, finance, and law enforcement, their ethical ramifications grow more important. One concern is that biases already contained in the data the systems are trained on are repeated and amplified, aggravating existing injustices. One example is facial recognition technology, which has higher error rates for people of colour, raising serious concerns about its use in surveillance and law enforcement. Another ethical concern is the autonomy of AI decision-making and its accountability: if an autonomous vehicle causes an accident, who is responsible, the manufacturer, the programmer or the system itself?
The Putnam thought experiment
Hilary Putnam popularised the brain-in-a-vat thought experiment, which poses questions about reality, perception and knowledge. In this Matrix-style scenario a human brain is removed from the body and placed in a vat of sustaining fluid, connected to a supercomputer that simulates reality through electrical impulses. The brain could not know that its experiences were generated artificially. The experiment asks whether we can be certain that our perceptions correspond to an external reality rather than a simulation. The idea echoes Descartes' hypothesis of the evil demon in the first of his Meditations on First Philosophy, where the demon presents a complete illusion of an external world, putting the reliability of our sensory experiences in doubt. It also raises ethical questions about AI's capacity to create and manipulate perceptions, blurring the distinction between what is real and what is artificial.
The Penrose Problem
Penrose argues that human consciousness is not computable and so cannot be replicated by any machine that relies on algorithmic processes. His suggestion is that the human mind exploits principles of quantum mechanics that lie outside the capabilities of modern computer systems. He bases his thesis on Gödel's incompleteness theorems, which show that any sufficiently powerful consistent formal system contains true mathematical statements that cannot be proven within that system. From this he concludes that human consciousness transcends known computational models. Critics reply that there is no empirical support for this view and that the connection between quantum mechanics and consciousness is speculative.
Summary
The Emperor’s New Mind by Roger Penrose was published in 1989.
1. Can computers have a mind?
"It is our present lack of understanding of the fundamental laws of physics that prevents us from coming to grips with the concept of ‘mind’ in physical or logical terms."
Penrose challenges Strong AI, which asserts that sufficiently advanced computers will achieve consciousness. He argues that our present understanding of physics is not enough to understand the mind.
The Turing Test is useful, but not sufficient to rule on true consciousness. A computer that passes the test shows only that it can mimic human responses; that is not true understanding or subjective experience. Achieving those would require something more than algorithmic computation.
If a machine passed the Turing Test and demonstrated behaviours linked to consciousness, it would bring up ethical questions about human responsibilities towards the machine. Penrose underlines the need to think through the moral implications of designing machines that could be conscious.
2. Algorithms and Turing Machines
"What all this shows is that the quality of understanding is not something that can ever be encapsulated in a set of rules."
The book explains that algorithms are step-by-step procedures for solving problems. Drawing on Turing's notion of a universal machine that can compute anything computable given the correct programme, the author analyses the theoretical equivalence of such machines. While recognising the power of algorithms, Penrose argues that human comprehension and consciousness lie beyond computation.
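A minimal sketch of the idea in Python (the machine and its rule table are invented for illustration, not taken from the book): a Turing machine is just a finite rule table acting on a tape, and the simulator below runs a table that adds one to a binary number. Any such table can be executed by the same simulator, which is the sense in which one universal machine can compute whatever any other can, given the right programme.

    # A minimal Turing machine simulator: a finite rule table acting on a tape.
    # This example machine adds one to a binary number written on the tape.
    RULES = {
        ("right", "0"): ("right", "0", +1),   # scan right to the end
        ("right", "1"): ("right", "1", +1),
        ("right", "_"): ("carry", "_", -1),   # found the blank: start carrying
        ("carry", "1"): ("carry", "0", -1),   # 1 + carry = 0, carry continues
        ("carry", "0"): ("done",  "1",  0),   # absorb the carry
        ("carry", "_"): ("done",  "1",  0),   # overflow into a new digit
    }

    def run(tape: str) -> str:
        cells, head, state = dict(enumerate(tape)), 0, "right"
        while state != "done":
            symbol = cells.get(head, "_")                 # blank cells read "_"
            state, write, move = RULES[(state, symbol)]   # apply one rule
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    print(run("1011"))  # prints 1100 (11 + 1 = 12)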
3. Mathematics: Discovery or Invention?
Penrose is awed both by the material world and by the Platonic sphere of pure mathematics. He explores the question of whether mathematical notions are inventions of the mind or discoveries of existing truths. The author leans towards the Platonic idea that mathematical objects exist independently of human thinking.
He presents the Mandelbrot set, a fractal generated by iterating a simple mathematical function, as an example of a mathematical structure whose innate complexity and beauty were waiting to be discovered, not a human creation.
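The generating rule really is that simple. A brief Python sketch (the escape radius of 2 is standard; the iteration cap of 100 is an arbitrary practical choice): a point c belongs to the set if the orbit of z under z -> z*z + c, starting from z = 0, never escapes to infinity.

    # Membership test for the Mandelbrot set: iterate z -> z*z + c from z = 0
    # and see whether the orbit escapes. |z| > 2 guarantees escape; the
    # iteration cap is a practical cut-off, not part of the mathematics.
    def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
        z = 0
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return False      # the orbit escapes: c is outside the set
        return True               # no escape detected: c is (probably) inside

    print(in_mandelbrot(0))        # True: the origin is in the set
    print(in_mandelbrot(1 + 1j))   # False: this point escapes quickly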
The book presents mathematical truths as existing in a timeless and independent Platonic realm that mathematicians access through intuition and insight. This outlook contrasts with formalism, in which mathematical statements are syntactic forms that have no meaning unless they are given an interpretation: in short, meaningless games of symbol manipulation.
4. Gödel's Theorem
"Our insights enable us to transcend the limited procedures of ‘proof that we had allowed ourselves previously."
Gödel's theorem shows that any sufficiently rich consistent formal system, such as one capable of expressing arithmetic, will necessarily contain true statements that cannot be proven within the system. This contradicts the notion that all mathematical truths can be derived from a fixed set of rules and axioms.
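Schematically, in its standard textbook form (rather than Penrose's own wording): for any consistent formal system F rich enough to express arithmetic, one can construct a "Gödelian" sentence G_F asserting, in effect, its own unprovability in F, so that

    F \nvdash G_F, \qquad \text{yet } G_F \text{ is true of the natural numbers.}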
Penrose argues that Gödel's theorem underlines the limitations of algorithmic thought and the importance of human insight in mathematics. He believes that mathematicians can recognise the truth of Gödelian statements even though those statements cannot be proven.
The capacity to comprehend mathematical truths that are beyond the grasp of formal systems suggests to the author that non-computational processes take place in human consciousness. This is a direct challenge to Strong AI which claims that minds are complex algorithms.
5. Classical Physics
"What distinguishes the person from his house is the pattern of how his constituents are arranged, not the individuality of the constituents themselves."
Newtonian mechanics offers a deterministic worldview in which the past determines the future. This determinism influences our thinking about free will and consciousness.
Classical physics has had great successes, but its failure to explain phenomena such as the stability of atoms and the spectrum of black-body radiation led to the development of quantum theory.
Penrose takes up the hardware/software analogy to frame the relationship between the brain and the mind: the physical brain is the hardware and the mind is the software, presented as a key to comprehending mental activity.
6. Quantum Theory
Quantum theory, which deals with the microscopic world, has introduced uncertainty and indeterminacy into physics. It challenges the classical picture of a predictable universe through wave-particle duality, superposition, entanglement and the uncertainty principle.
Quantum theory divides into two processes: deterministic unitary evolution, governed by the Schrödinger equation, and state-vector reduction, which is probabilistic and occurs during measurement.
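In standard notation (Penrose labels the two processes U and R): unitary evolution is governed by the Schrödinger equation, while reduction picks one outcome with the Born-rule probability,

    \text{U:}\quad i\hbar\,\frac{\partial}{\partial t}\,|\psi\rangle = \hat{H}\,|\psi\rangle
    \qquad
    \text{R:}\quad |\psi\rangle = \sum_k c_k\,|k\rangle \;\longrightarrow\; |k\rangle \ \text{with probability } |c_k|^2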
The book explores the measurement problem in quantum mechanics, the question of how a superposition of states gives way to a single classical outcome:
- The Copenhagen interpretation involves the belief that quantum mechanics is intrinsically indeterministic. Probabilities are calculated using the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously.
- The many-worlds interpretation asserts that the universal wave function is objectively real and that there is no wave function collapse. This means that all possible outcomes of quantum measurements are physically realised in different "worlds". The evolution of reality as a whole in this theory is strictly deterministic and local.
7. Time
Time appears to flow in one direction, from the past to the future. This is connected to the second law of thermodynamics, which asserts that entropy (disorder) has a tendency to increase over time.
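Entropy here can be read in Boltzmann's statistical sense (a standard formula, added for orientation): it counts the number W of microscopic arrangements compatible with a macrostate,

    S = k_B \ln W,

so disordered macrostates, compatible with vastly more arrangements, become overwhelmingly more probable as time runs forward.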
Penrose explores the cosmological rise of entropy from its far lower state in the early universe. Big Bang theory is discussed, along with how gravity shapes the structure of the universe.
The book examines black holes and suggests that the Weyl curvature hypothesis may explain the second law. Penrose links the initial low entropy of the universe to the vanishing of the Weyl curvature tensor of the cosmological gravitational field near the Big Bang. From then on its dynamical influence gradually increased, driving the growth of entropy in the universe and inducing a cosmological arrow of time running from past to future.
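In compact form (the standard statement of the hypothesis, not a quotation from the book): the Weyl tensor C_{abcd}, the part of space-time curvature not determined by local matter, is constrained to vanish at the initial singularity,

    C_{abcd} \to 0 \quad \text{at the Big Bang},

while it grows without bound at final singularities such as those inside black holes, which is what gives gravitational degrees of freedom their entropy gradient.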
8. Quantum Gravity
The author argues that a theory of quantum gravity is needed to harmonise quantum mechanics with general relativity. This may require a modification of the measurement process of quantum mechanics.
He suggests that the correct theory of quantum gravity could be time-asymmetric, which would reflect the asymmetry in the universe. This contradicts the traditional view that the laws of physics are time-symmetric.
9. Brains
The book offers a view on the structure and function of the key areas of the brain. It discusses the roles of different parts in processing sensory information, motor control and cognitive functions.
Computer models of the brain are examined, but doubts arise about whether these models can capture the fundamental characteristics of consciousness and intelligence.
The possibility that quantum mechanics might play a role in brain activity is explored. Penrose discusses possible quantum effects in neural processes while confronting the difficulty of maintaining quantum coherence in the warm, wet environment of the brain.
10. Consciousness
The author challenges the notion that consciousness is a by-product of computation, however complex. He argues that mathematical insight and creative thinking are non-algorithmic processes that machines cannot replicate. The suggestion is that consciousness has elements that are beyond computation.
The writer speculates that consciousness could involve contact with a deeper level of reality, possibly Plato's realm of mathematical Forms. Human minds may be capable of accessing truths that are inaccessible through algorithms.
Themes
Singularity Theorems
In the theory of general relativity, singularities are points of space-time where physical quantities such as curvature or energy density diverge. In the 1930s Lemaître noted that there was a singularity in Einstein's equations: at its origin in time, a homogeneous, isotropic universe had infinite curvature. Others argued that such theoretical singularities would disappear in more general solutions.
In 1965-66 Penrose and Hawking proved that, under physically reasonable conditions, singularities arise in broad classes of solutions to Einstein's equations, not just in specially symmetric ones. Cosmologists believe that singularities are not material objects, but mark places where general relativity ceases to model reality. They are thus prime candidates for investigations into quantum gravity.
Quantum measurement
The measurement problem is the problem of definite outcomes: quantum systems exist in superpositions, yet quantum measurements only ever give one definite result.
The Schrödinger equation says that in quantum mechanics the wave function evolves deterministically as a linear superposition of different states. However, when measured, the physical system is always found in a definite state. The problem is how to establish a correspondence between quantum reality and classical reality.
Since the 1990s Penrose's solution to this problem has been a form of objective collapse theory. The wave function is a physical wave, which undergoes collapse as a physical process, with observers playing no special role. Penrose theorises that a superposition cannot be sustained beyond a certain energy difference between the quantum states, for which he gives an approximate value, the "one-graviton level", and he hypothesises that this energy difference causes the wave function to collapse to a single state. The one-graviton criterion thus provides an objective trigger for wave function collapse. Despite the difficulty of specifying this rigorously, he proposes that the states into which collapse takes place are described mathematically by the stationary solutions of the Schrödinger–Newton equation.
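For reference, in their standard forms from the literature (not quoted from Penrose's text): the Schrödinger–Newton equation supplements ordinary quantum evolution with a Newtonian potential sourced by the wave function itself, and the associated collapse timescale is set by the gravitational self-energy E_G of the difference between the superposed mass distributions,

    i\hbar\,\frac{\partial \psi}{\partial t}
      = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi
        \;-\; G m^{2}\!\int \frac{|\psi(\mathbf{r}',t)|^{2}}{|\mathbf{r}-\mathbf{r}'|}\,d^{3}r'\,\psi,
    \qquad
    \tau \approx \frac{\hbar}{E_G}.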
Cyclic cosmology
Penrose popularised the theory of Conformal Cyclic Cosmology in his book Cycles of Time: An Extraordinary New View of the Universe (2010). It describes a cosmological model within general relativity in which the present universe is only one of a possibly infinite sequence of universes, or aeons: the universe passes through endless cycles, with the remote future of each aeon becoming the Big Bang of the next.