
2026-03-30

Keynote Speech by Academician Seeram Ramakrishna of the Chinese Academy of Engineering at the 3rd World Conference on Artificial Consciousness

At the 3rd World Conference on Artificial Consciousness, held in Shenzhen on March 21, 2026, the World Academy of Artificial Consciousness presented certificates to Professor James J. Heckman, recipient of the 2000 Nobel Prize in Economic Sciences, and Professor Seeram Ramakrishna, Foreign Member of the Chinese Academy of Engineering. The conference was themed "Fundamental Theories and Practical Exploration of Artificial Consciousness, and Artificial Intelligence Empowering Proactive Health Medicine."

Certificate presentation at the conference

The following is the keynote speech manuscript delivered by Seeram Ramakrishna, Foreign Member of the Chinese Academy of Engineering, at the 3rd World Conference on Artificial Consciousness. In this address, Ramakrishna discusses artificial consciousness through the lenses of materials science, systems research, interdisciplinary collaboration, and responsible technological development.

It is my honor and pleasure to address this conference on one of the most complex and profound topics before humanity: consciousness and mind. At Tsinghua University, our work seeks to develop physical systems that not only model intelligent behavior, but also provide tools for a deeper understanding of mind and matter. These are extraordinarily difficult questions, and they cannot be approached from a single discipline alone.

Conference screen and event image

My own perspective comes from materials science and systems research. Yet any serious discussion of consciousness must also engage philosophy, neuroscience, biology, medicine, and the social sciences. Only through such an interdisciplinary effort can we make genuine progress in understanding what consciousness is, what mind is, and how future intelligent systems may relate to them.

Across the world, billions of dollars are now being invested in artificial intelligence, and much of the public discussion focuses on rapid technological progress and future economic growth. We hear frequent claims about superintelligence, advanced robots, human-like abilities, and the possibility that large language models may eventually evolve toward forms of artificial consciousness.

At the same time, AI is changing not only industrial systems, but also human life and human self-understanding. We are now forced to ask broader questions about cognition, emotion, human agency, and the future quality of mental life. These are not purely technical questions. They are civilizational questions.

Two words are central to this discussion: mind and consciousness. Mind is often understood as the non-physical aspect of a person, the composite of many mental functions. From a scientific standpoint, however, mind requires a physical basis. For mind to exist, there must be a material substrate, namely the brain, with its neurons and synapses. Some traditions hold that mind can exist independently of physical form, but scientific research generally seeks its basis in physical processes.

Intelligence, in turn, is a functional capacity. It depends on memory, experience, learning, and the ability to process information. If we want to cultivate intelligence in human beings, or simulate aspects of it in AI, we must understand how experience is accumulated, how memory is organized, and how learning transforms stored information into capability.

This leads naturally to cognition and metacognition. Cognition refers to the process of thinking, feeling, perceiving, and interpreting information. Metacognition is the ability to be aware of one's own thinking and to regulate it. It is one of the capacities education seeks to develop in young people, and it may become highly relevant as we consider the design of artificial mental systems in the future.

Consciousness has been discussed for thousands of years, and its meaning varies across traditions. In many philosophical or spiritual frameworks, consciousness is understood as a state of being or a reality that transcends the merely physical. In science, by contrast, consciousness is commonly defined in more operational terms: the capacity for subjective awareness of internal processes, such as thoughts and feelings, and of external stimuli in the surrounding world.

These two broad traditions do not always converge. That is precisely why an international conference on artificial consciousness is so necessary. Consciousness means different things to different scholars, and responsible progress requires us to confront these differences directly rather than ignore them.

Once we examine the many definitions proposed by philosophers, scientists, clinicians, and engineers, we immediately see that consciousness cannot be treated lightly or reduced to a single slogan. Different intellectual traditions begin from different assumptions. Some insist on a material basis; others reject the possibility that artificial systems could ever possess the same kind of consciousness attributed to human beings.

For that reason, the discussion of artificial consciousness is inherently challenging. It demands conceptual humility and sustained dialogue. The point of a conference such as this is not to pretend that the issue is simple, but to create a serious platform where different traditions can compare their ideas and gradually move toward greater clarity.

Over the past several decades, science has made substantial progress in studying consciousness through the brain. Human brains contain enormous numbers of neurons and synaptic connections, and these biological processes underlie emotion, thought, and mental activity. Technologies such as functional MRI, EEG, and other measurement systems are helping us understand the molecular and systems-level processes associated with brain function.

In our own work, we put forward a hypothesis that the brain should not be regarded only as a processor. It should also be understood as a sensor. Human beings possess the classical senses, but the brain itself also participates in sensing, filtering, integrating, and prioritizing information. This viewpoint may be important for future artificial systems, because it suggests that intelligent architectures must do more than compute: they must perceive, select, and organize meaningful signals.
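The "brain as sensor" hypothesis above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the channel names, the naive salience score, and the threshold are hypothetical, not part of the speaker's research.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    channel: str     # e.g. "vision", "sound", "interoception" (illustrative names)
    value: float     # raw reading
    salience: float  # estimated importance, 0..1

def perceive(raw_readings):
    """Wrap raw channel readings as signals with a naive salience score."""
    return [Signal(ch, v, min(abs(v) / 10.0, 1.0)) for ch, v in raw_readings]

def select(signals, threshold=0.3):
    """Filter out low-salience signals instead of processing everything."""
    return [s for s in signals if s.salience >= threshold]

def organize(signals):
    """Prioritize what remains: most salient signals first."""
    return sorted(signals, key=lambda s: s.salience, reverse=True)

readings = [("vision", 9.0), ("sound", 1.5), ("interoception", 4.0)]
prioritized = organize(select(perceive(readings)))
for s in prioritized:
    print(s.channel, round(s.salience, 2))
```

The point of the sketch is the shape of the pipeline, not the numbers: an architecture that perceives, selects, and organizes signals does meaningful work before any computation over them begins.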

I do not wish merely to repeat the many familiar narratives about artificial intelligence. Instead, I want to focus on artificial consciousness and how we may approach it scientifically. In our view, progress depends on bringing together sensing systems, computational models, affective computing, and advanced algorithms capable of integrating data related to multiple mental functions.

Today, a range of core technologies is already used in hospitals and research environments to observe mental and neural processes at different spatial and temporal resolutions. These tools remain limited, but they are giving us increasingly fine-grained windows into the physical processes associated with mental function.

A deeper understanding of brain, mind, and consciousness can produce significant practical benefits. It can help us develop better methods and devices to support people who face anxiety, tension, stress, sleep disorders, and other mental-health-related challenges. This is one reason the field matters not only scientifically, but socially.

Wearable systems are evolving rapidly. Devices that already measure blood pressure, oxygen levels, glucose, movement, and other signals may, in the future, provide richer forms of self-knowledge and health monitoring. In medical settings, such systems may reduce the burden on patients by sensing and communicating relevant information directly to nurses, physicians, and family members.
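The monitoring-and-forwarding idea above can be sketched as a simple rule: flag any reading that falls outside a normal range. The signal names and ranges below are illustrative assumptions for the sketch, not a clinical specification.

```python
# Illustrative normal ranges only -- not medical guidance.
NORMAL_RANGES = {
    "spo2": (94, 100),         # blood oxygen saturation, %
    "systolic_bp": (90, 130),  # systolic blood pressure, mmHg
    "glucose": (70, 140),      # blood glucose, mg/dL
}

def flag_readings(readings):
    """Return the signals outside their normal range -- the information a
    device might forward to a nurse, physician, or family member."""
    alerts = []
    for name, value in readings.items():
        low, high = NORMAL_RANGES.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append((name, value))
    return alerts

alerts = flag_readings({"spo2": 91, "systolic_bp": 118, "glucose": 155})
print(alerts)  # spo2 and glucose fall outside their illustrative ranges
```

A real system would of course need validated thresholds, trend analysis, and clinical oversight; the sketch only shows how sensing can reduce the reporting burden on the patient.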

Our research also explores the embedding of intelligence into materials and systems. The long-term goal is to integrate as much sensory information as possible across multiple channels, combine it with large language models, AI, and big data, and move toward architectures that may one day support artificial consciousness-like functionality.

This vision involves not only sensing, but also self-powered systems, actuators, efficient computing processes, and new forms of material design. The broader scientific motivation touches several hard questions: the origin of life, the embedding of intelligence into physical systems, and the possibility of richer forms of communication, including the idea of mind-to-mind interaction. These remain difficult questions, but interdisciplinary work may bring meaningful advances in the years ahead.

Artificial consciousness carries vast implications. Its development therefore requires careful navigation, responsible governance, and sustained discussion among scientists, engineers, medical researchers, legal thinkers, and policymakers. This is not a field that can be guided by technical ambition alone.

My hope is that through interdisciplinary collaboration we will deepen our understanding of mind, matter, intelligence, and consciousness, while also ensuring that the technologies we develop are beneficial to humanity. Thank you very much for the opportunity to share these thoughts.