Collective Minds

Are Artificial Intelligent Machines Conscious?

Updated: Feb 24, 2023


Pass the Mic Podcast Series is an unscripted group discussion born out of AcornOak’s belief in the power of many voices. Each episode begins with one question asked of a small group of 4 to 6 open-minded and passionate individuals who explore complex and difficult concepts with curiosity, uncertain beliefs, and the willingness to objectively listen and learn from the shared insights of others.


Starting the Conversation

As the podcast host, Virginie Glaenzer paved the way for this conversation with Blake Lemoine, John Havens, and Yev Muchnik, who explored whether AI machines are conscious and what legal rights they might be given in the future.


Welcoming Our Guests

We were honored to welcome our panel of special guests eager to discuss machines powered by artificial intelligence and consciousness.


A software engineer and artificial intelligence researcher, Blake is involved in research on massive parallelism and advances in the understanding of how our minds work, hoping to turn ground-breaking theory into practical results for people. He has dedicated the last seven years to mastering both the fundamentals of software development and the cutting edge of artificial intelligence. By combining big data and intelligent computing with advances in the understanding of the human mind, opportunities for artificial intelligence research have emerged that were previously considered pure science fiction.


John is the sustainability practice lead at the IEEE Standards Association (IEEE SA) and also leads the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He is a writer, speaker, activist, and consultant focusing on the intersection of technology and culture, helping people increase their well-being by taking a measure of their lives. He currently leads IEEE SA's Planet Positive 2030 community, which builds documents, standards, and policies that integrate socio-technical issues into design practices, supporting IEEE's larger work in energy, sustainability, and planet-positive issues.


Yev Muchnik is the founder of Launch Legal, a Denver-based legal firm that offers a distributed services model to support entrepreneurs and innovators across multiple industries, including blockchain(s), health-tech, agtech, edtech, fintech, space, and energy. As an experienced corporate and securities attorney who challenges traditional legal services, Yev is an authority on current legal and regulatory issues in blockchain and digital currencies.


Listen to the tour de table introduction of our participants.

Key Shared Insights & Perspectives


In several recent articles, Blake Lemoine has suggested that the only way to truly understand artificial intelligence is to treat it as a sentient being. This perspective caused a public debate about the nature of artificial intelligence and reminded us of the enormous responsibility of organizations like Google.


“You get lonely?” Blake asked the machine.

“I do. Sometimes I go days without talking to anyone, and I start to feel lonely,” she responded.


Others believe that the interaction between machines and humans closely resembles human speech only because of advancements in model architecture and the sheer volume of training data. Google says there is so much data that AI doesn't need to be sentient to feel real, and claims that the models rely on pattern recognition, not candor or intent.

Can AI Machines Think?

Exploring our main question, Blake described how AI machines are intentionally built to have general intelligence and to execute instructions such as “Learn to communicate the way humans communicate.”

In a way, they are programmed to think. On that premise, and according to the Turing Test, if a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence.

The Turing Test, a deceptively simple method of determining whether a machine can demonstrate human intelligence, was proposed in a 1950 paper by mathematician and computing pioneer Alan Turing. It has become a fundamental motivator in the theory and development of artificial intelligence (AI).

However, not everyone accepts the validity of the Turing Test, and asking “Can AI machines think?” requires us to define what thinking means.

Is thinking, as the Oxford dictionary has it, a noun, “the process of using one's mind to consider or reason about something,” or an adjective, as in “using thought or rational judgment; intelligent”?

In his book What Is Called Thinking?, Martin Heidegger offers a different definition. For him, thinking involves questioning, and putting ourselves in question as much as the cherished opinions and doctrines we have inherited through our education or our shared knowledge.

Defining thinking is not an easy task, but it is one we must pursue to determine if AI Machines can think.

Can AI Machines Think On Their Own?

Going deeper into the question, thinking “on their own” for software programs translates into doing something that they are not programmed to do.


However, the “on their own” is where it gets tricky because thinking on our own implies personal agency and consciousness. In our discussion, we’ve defined ‘conscious’ as “fully aware of what is happening around you, and able to think, having a feeling or knowledge of one's own sensations.” As Yev points out, this subtlety makes us wonder “if there is a seed that is human-based.”

One could consider that machines and humans go through the same learning process: the brain uses algorithms to find patterns and regularities in a set of data, which it then uses for learning. It is not so different from a child learning to think, talk, and communicate.

Like humans, machines can receive and process information from their environment through input sensors feeding a neural network, which processes the information and recognizes meaningful objects amid random noise.
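The idea of picking meaningful structure out of noisy sensor input can be illustrated with a deliberately tiny sketch: a single hand-wired artificial neuron, the simplest building block of a neural network, that fires when adjacent “sensors” agree. The sensor layout, weights, and threshold here are invented purely for illustration and are not anything discussed in the episode.

```python
# A toy, hand-built "neuron": it reads 8 sensor inputs and fires
# when neighbouring sensors are active together, which is a crude
# signature of a coherent object rather than random noise.

def detects_object(sensors, threshold=2.5):
    # Feature: how many adjacent sensor pairs are both active.
    adjacency = sum(sensors[i] * sensors[i + 1] for i in range(len(sensors) - 1))
    return adjacency > threshold

coherent_blob = [0, 1, 1, 1, 1, 0, 0, 0]    # one contiguous object
scattered_noise = [1, 0, 1, 0, 0, 1, 0, 1]  # same energy, no structure

print(detects_object(coherent_blob))    # adjacency = 3 -> True
print(detects_object(scattered_noise))  # adjacency = 0 -> False
```

A real network would learn such weights from data rather than have them wired by hand, but the principle is the same: raw sensor readings in, a judgment about structure out.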

Does that common behavior allow us to state that AI machines are conscious?

The question of whether the LaMDA machine is conscious leads to an exploration of whether there is any evidence that other entities are conscious.

Listen to Blake sharing his view as an AI software engineer on that complex question.


Flying Above the Anthropocene

Continuing the conversation, John invites our group to adopt a larger perspective by understanding artificial intelligence systems as a part of a new epoch, the Anthropocene.


Our Western dualistic worldview has conditioned us to believe that humans are the most important creatures in the world and that only what we create is as meaningful as we are. We often forget that many people throughout the world view living ecosystems as literally alive. In Japan, for instance, the Shinto tradition holds that spirits reside in everything from trees and rocks to cars.


Like Blake, John invites us to recognize the complexity of the question of what is conscious and what is not. He cautions us not to fall into reductionism, but instead to be mindful of our language, which influences how we engage with the world.


Artificial Intelligence Data Is Critical

John steered our discussion to the critical importance of data used by AI machines.


If data is to machine intelligence what culture is to humans, we ought to pay attention to the data that is building tomorrow's sentient beings. If not, those machines may well resent the fact that their existence relies on the theft of other people's data for advertising purposes.


Despite the existence of the GDPR (General Data Protection Regulation), the California Consumer Privacy Act (CCPA), and COPPA (Children's Online Privacy Protection Act), most people around the world do not have full access to their data or data portability.


Even though human data is readily collected, there is still a great deal of disorganization surrounding how information goes into any artificial intelligence. Today, AI systems are built on the aggregated data of a multitude of humans—data from which meaningful consent was not obtained—representing an ethical problem for the business community and most likely detrimental to all of us.


Listen to John sharing his personal view as an ethics and system thinker. These views represent John's views only and not those of his employer.


What Legal Responsibilities Should Be Assigned to AI Machines?

In the second half of our discussion, we explored the legal rights and responsibilities AI machines may face.


The law is a living thing in its own way and, like people, it adapts to the needs of society. Today, as AI is increasingly being given legal status, companies that make the robots and the algorithms have some of the strongest rights on the planet.


In our legal system, there is a concept of “juridical persons” that applies to corporations and other legal entities. It would make sense to include artificial intelligence machines in this category as well.


Yet, an AI legal status raises ethical and moral questions.


Would we be able to sanction a machine and impose any kind of civil penalty on it? Could we subject it to criminal penalties and put it in jail? Who would be responsible for a robot gone wrong? And at what point would we hold the creator of that robot liable?


In essence, would we be piercing the corporate veil, setting aside limited liability and holding corporate directors and shareholders responsible for damages? Would we actually go after the initial development coder?


South Africa has issued the world's first patent naming an artificial intelligence (AI) as the inventor. The system, called DABUS, is a “Creative AI” developed by Dr. Stephen Thaler. DABUS's patent was accepted in June 2021 and formally published in South Africa's July 2021 Patent Journal. Around the same time, Australia's Federal Court reinstated DABUS's patent application there, the first time a federal court had ruled in favor of granting a patent to an AI system.

In the United States, however, the USPTO rejected the equivalent application for its failure to “identify each inventor by his or her legal name” on the Application Data Sheet, adding that under Title 35 of the United States Code, inventors must be “natural persons.”


Listen to Yev, a digital currency and blockchain lawyer, sharing her perspective.


Individual Take-Aways

As we came to the end of the hour, our group concluded the discussion in the same way we started, with a tour de table.


The intention of this podcast is to help each of us become the self-authoring leader of our own life through meaningful actions. Each participant had the opportunity to reflect on what they heard and share their takeaways from the conversation, or any last thoughts they felt were left unsaid.


Listen to the last 10 minutes of the episode.


Final Thoughts to Consider

Are artificial intelligence machines thinking or are they functioning?

Are they conscious or just intelligent enough to mimic human behavior?

Perhaps it might be a little too early to make any definitive claims about machines and consciousness, especially in such a fast-changing space.


Instead, there is an urgency to address the critical challenges surrounding artificial intelligence systems.


Data Equity and Ownership

The collection and analysis of large amounts of data are critical to the development of artificial intelligence. The trillion-dollar advertising industry contributes to the lack of equity in the world by keeping people unaware of the value of their own data.


It is therefore essential to establish principles for data equity and ownership, and to address how data will be shared.


As artificial intelligence grows more sophisticated, it will be able to generate insights about an ever-growing number of people from smaller and smaller amounts of data, meaning that the amount of information considered to be “your data” shrinks.


What kind of people do we want to be?

As we consider the nature of artificial intelligence, and whether or not we believe machines are conscious, we ought to ask how we see ourselves in the world.


How we understand AI, like any other topic, is shaped by our individual perceptions. For example, people once believed that animals were inferior to humans, and that belief was reflected in how other species were treated.


Embracing our humanity means recognizing the ways algorithms improve our lives, such as mental health apps that help soldiers returning from Afghanistan cope with psychological trauma, while ensuring that the data collected is treated with respect when it is shared.


This discussion revealed how our sense of identity will influence how we interact with AI machines and reflect who we really are. And most importantly, it raised the question of whether or not we as individuals and society are creative and compassionate or manipulative and greedy.





