A recent opinion piece in Scientific American by Susan Schneider ("If a chatbot tells you it is conscious, should you believe it?" scientificamerican.com) annoyed me. I haven't figured out why yet...
First, the observation that chatbots are claiming to be conscious is no test for consciousness. Almost anyone can write a computer program to answer "Are you conscious?" with "Yes".
if input().strip().upper() == "ARE YOU CONSCIOUS?":
    print("YES")
Of course, this is a simplified example, but it demonstrates that many computer programs might answer "yes" to any number of inquiries about consciousness. Asking a thing if it is conscious may be necessary, but it is not sufficient. In this same vein, I would disagree with Descartes' statement, "I think, therefore I am."(1)
I would argue that consciousness cannot be self-referential. In fact, I would argue that consciousness, by definition, is one thing observing another. Self-consciousness is one part of the brain watching another part of itself.
The attempt to define consciousness is similar to the attempts to define intelligence. As each test is developed, we become aware of other definitions that are NOT covered by the test. In many ways, the attempt to define intelligence through tests is similar to designing and testing scientific models in physics, chemistry, biology, etc. Each model may get us closer, but there always seems to be something beyond, something not captured by the model. This is particularly true of large-scale, complex models (like weather forecasting), and even more problematic when the models include members that change their behavior based on observations (like models with intelligent learning agents). Such complex systems may exceed the model's information content, so we can only model approximations.
Perhaps a better way of considering consciousness, as we generally experience it, is as a collection of models of various descriptions. Like intelligence, consciousness might best be represented as multi-dimensional.
As Schneider points out, "AI chatbots have been trained on massive amounts of human data". But what she does not point out is that one can (and does) build chatbots on weighted sets of data. The most obvious example of this is the censoring of pornography and erotica from the chatbots' training data, even though pornography permeates the web. If a chatbot were built from the actual data on the web, it would be as obsessed with sex as humans are. Instead, the folks building the chatbots excluded these inputs from the learning data.
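To make the idea concrete, here is a minimal sketch of what excluding and weighting training data might look like. The source names, weights, and blocklist terms are hypothetical illustrations, not anything a particular chatbot vendor actually uses.

# Hypothetical sketch of filtering and weighting a training corpus.
# Every source name, weight, and blocklist term is made up for illustration.
blocklist = {"pornography", "erotica"}
source_weights = {"encyclopedia": 3.0, "news": 1.0, "adult_site": 0.0}

corpus = [
    {"source": "encyclopedia", "text": "Consciousness has many competing definitions..."},
    {"source": "news", "text": "A new chatbot claims to be conscious..."},
    {"source": "adult_site", "text": "..."},
]

def keep(doc):
    # Drop documents from zero-weight sources or containing blocked terms.
    if source_weights.get(doc["source"], 1.0) == 0.0:
        return False
    return not any(term in doc["text"].lower() for term in blocklist)

# What remains is a weighted training set: some documents count more than others.
training_set = [(doc["text"], source_weights[doc["source"]]) for doc in corpus if keep(doc)]

The specific code doesn't matter; the point is that every such choice of exclusions and weights is a bias someone built in.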
"Biases" in the chatbot models are many and large. For example, since the chatbot is built based on a weighted version of the web, bias is built in. Another bias is based on the content used to train the chatbot. More importantly, there is a lot of data that is NOT in digital form and cannot be fed into the learning algorithms of the chatbots. For example, life experiences that aren't in digital form. Approximations might be included, if someone has written about the experience.
Another challenge comes from the bias of Western thought, since there are countless cultures with little or no representation in digital form. I ran into this bias recently when asking for sources about eight statements I gave the chatbot, including "I will die someday" and "I cannot change the past". At first, the chatbot gave only works from Western culture. When I pointed out the bias, the chatbot acknowledged it and redid the list, which now included many works from the Far East, as well as one source from Africa.(2)
Attempts to remove other "biases" have required more weighting schemes, giving some statements more influence than others. But who decides? And what biases are so pervasive that we are blind to them? Like the bias resulting from learning everything from what is written, rather than experienced? What does a chatbot know of the smell of a flower, the taste of an apple, the sound of a baby crying, the touch of a loved one, beyond what someone wrote? Until chatbots can incorporate all the experiences of a human being, they will be limited to approximating human beings, biased towards written (and visual) data.
As a statistician, I view bias as inevitable. No view is unbiased. My reality is not your reality. But we can share our perspectives and come to some agreement to reduce individual bias. The scientific method, for example, is an algorithm we use to reduce the bias of human observation.
Schneider is right to point out that "conscious" does not mean "responsible". In a similar way, we consider minors to be conscious, but do not hold them to the level of responsibility to which we hold adults.
But I don't think it will benefit us to try to define chatbot consciousness in terms of human consciousness. Better to define and test for the traits we can observe and measure, with tests that relate to the usefulness of the chatbots. In any case, consciousness should never be defined within a system. Consciousness exists only as an observation from outside the system looking in.
____________________
(1) For more discussion of "I think, therefore I am", see my blog entry "Thinking about thinking".
(2) https://thoughtsideaswanderings.blogspot.com/2025/04/having-fun-on-sunday-afternoon-with.html