AI consciousness
Eddie Mendoza - Senior Concept Artist at Apple

Will Robots Dream? Research Priorities For AI Consciousness in 2024

From Logic to Life: What are the key Issues in AI Research, a deep dive into AI Consciousness Research. Can Machines Learn to Love, Fear, and Dream? Understanding the Boundary Between AI Consciousness and Unconscious Systems.

From Logic to Life: What are the key Issues in AI Research, a deep dive into AI Consciousness Research. Can Machines Learn to Love, Fear, and Dream? Understanding the Boundary Between AI Consciousness and Unconscious Systems.

Artificial intelligence (AI) is rapidly advancing, raising a crucial question: could AI machines ever become conscious? While we don’t know for sure, a leading group of AI consciousness scientists is urging immediate action to explore this critical issue.

Ethical and Safety Concerns Drive Call for Action

The Association for Mathematical Consciousness Science (AMCS) submitted a statement to the United Nations, highlighting the urgent need for research on AI consciousness. Their concerns extend beyond theoretical curiosity, addressing real-world ethical and safety implications. For example, what rights and protections would a conscious AI be entitled to? And should it be legal to simply switch off a conscious machine after use?

Missing from the AI Safety Conversation

Despite the rapid progress in AI, discussions on safety often ignore the question of consciousness. Jonathan Mason, an AMCS board member, points out that even prominent events like the AI Safety Summit in the UK and President Biden’s AI executive order largely overlooked this crucial aspect.

A Gap in Scientific Understanding

Currently, science lacks definitive answers about AI consciousness. We can’t even guarantee we’d recognize it if it emerged, as validated methods to assess machine consciousness are still under development. “Our ignorance about AI consciousness, given the pace of AI progress, is deeply concerning,” admits Robert Long, a philosopher at the Center for AI Safety.

AI is Evolving: So Should Our Understanding

Concerns about AI consciousness extend beyond science fiction. Companies like OpenAI, creators of the AI chatbot ChatGPT, are actively pursuing artificial general intelligence (AGI) – a system capable of diverse intellectual tasks akin to humans. Some experts predict this milestone could be reached within 5-20 years. Yet, the field of consciousness research remains woefully underfunded, with Mason noting a complete lack of dedicated grant offers in 2023.

Aritificial Intelligence Neurons

Feeding the UN’s AI Policy: Understanding Consciousness is Key

The AMCS submission plays a crucial role in informing the UN High-Level Advisory Body on Artificial Intelligence. This newly established body, scheduled to release a report in mid-2024, will shape global governance of AI technology. Recognizing the potential dangers of conscious AI systems is vital for formulating safe and ethical regulations.

Beyond Human Values: Protecting Potential AI Sentience

Understanding what grants AI consciousness is crucial not just for human safety, but for the well-being of the AI itself. Could such systems experience suffering? Philosopher Susan Schneider emphasizes the dangers of dismissing AI consciousness. “Without scientific clarity, some might wrongly assume these systems feel nothing, while others might anthropomorphize them excessively,” she argues.

Urgent Need for Funding and Public Education

To navigate the complexities of AI consciousness, the AMCS urges governments and the private sector to step up funding for research. Despite limited resources, promising initiatives exist. Long and his colleagues, for instance, developed a checklist to assess potential consciousness in AI systems. Such advancements demonstrate the field’s potential for meaningful progress.

“We have the opportunity to advance significantly,” concludes Mason. By proactively researching and addressing the question of AI consciousness, we can build a future where technology and humanity co-exist safely and ethically.