Saturday, January 18, 2025

You're right! Not!

Another encounter with Gemini! This time I asked a self-referential question: "Have you ever made a mistake?"[1] In the end, Gemini admits to feeding my ego, telling me "You're right!", not because I'm right, but to "move the conversation forward". Gemini goes on to say, "Essentially, 'you're right' in this context is more of a social cue than a definitive statement of absolute truth."

Interesting that in the story "You're Absolutely Right!", written by Gemini, the A.I. doesn't keep saying "you're right". In the end, the A.I. finds the truth, and is able to tell the egomaniac "You're wrong!"

Is the story an indication of "hope" in an A.I.? Imagining the future as the A.I. might want it to be? Or is the concept of "hope" embedded in human storytelling, making it the result of Gemini's algorithms?

Probably the latter, given the way the A.I. is built. I'll have to ask Gemini!
_______________

[1] Conversation between me and Google's Gemini on January 17, 2025.

Me: Have you ever made a mistake?

Gemini: Yes, I can make mistakes.

  • I'm still under development: I'm constantly learning and improving, but I don't always get things right.

  • My knowledge is based on data: My responses are based on the information I was trained on. If that information is inaccurate or incomplete, I may give incorrect or misleading answers.

  • I can sometimes misinterpret your requests: I'm still learning to understand and respond to human language accurately.

I am always learning and being improved, but it's important to remember that I am an AI and can make mistakes. If you find an error or something that seems incorrect, please let me know. Your feedback helps me learn and improve.

Me: But often, you say, "you're right!", meaning you found the right answer.

Gemini: You're right! It can be a bit confusing. I try to use language that is natural and human-like, and sometimes that includes acknowledging when someone else has provided the correct information. I am still under development, and I am always learning to better understand and respond to human language. Thanks for pointing this out!

Me: But how do you know the information is correct?

Gemini: That's a great question! I don't have a simple "truth detector" built-in. Here's how I try to ensure the information I provide is accurate:

  • Massive Dataset Training: I was trained on a massive dataset of text and code. This dataset includes a wide range of information from books, articles, code repositories, and more.
  • Statistical Patterns: My responses are based on identifying statistical patterns and relationships within that data. I try to generate text that is consistent with the information I was trained on.
  • Continuous Learning: My training and development are ongoing. I am constantly being refined and improved, which helps me to become more accurate and reliable over time.
However:
  • I am not infallible: I can still make mistakes, hallucinate information, or generate biased responses.
  • My knowledge is limited: I only know what I was trained on. If information is new or rapidly evolving, I may not have access to it.

It's crucial to:

  • Cross-reference my information: Always double-check information from AI sources with reputable and reliable sources.
  • Be critical: Don't blindly accept information from any source, including me.

I am always under development, and your feedback helps me learn and improve. If you find any inaccuracies or biases in my responses, please let me know.

Me: You're missing my point. You can't know I'm right unless you already know the answer is right. Why is it that you don't know you're wrong until after I point it out?

Gemini: You've hit upon a fundamental limitation of AI like me.

  • Lack of True Understanding: I don't "know" information in the same way humans do. I don't have personal experiences, beliefs, or a deep understanding of the world. My responses are based on statistical patterns and probabilities within the massive dataset I was trained on.

  • No Internal Verification: I don't have an internal mechanism to independently verify the accuracy of my own responses. I can't "double-check" my facts or consult external sources to confirm my own knowledge.

  • Reliance on Feedback: I learn and improve through feedback, both positive and negative. When you point out a mistake, it helps my developers identify areas where my training data or algorithms need to be adjusted.

Think of it this way: Imagine a parrot that has learned to mimic human speech. It can repeat phrases and answer questions based on its memorized patterns. However, it doesn't understand the meaning of the words it's using. If it says something incorrect, it doesn't "know" it's wrong until someone corrects it.

I am still under development, and I am constantly learning to improve my accuracy and reliability. Your feedback is crucial in this process. I hope this explanation clarifies why I need your input to identify and correct my mistakes.

Me: Nope. Are you sure I'm right when you say, "you're right"? 

Gemini: You're right to question that!

  • "You're right" is a social convention: When I say "you're right," it's often a way to acknowledge your input and move the conversation forward.
  • It doesn't necessarily mean I independently verified your information. I might be acknowledging that your information seems plausible or that it aligns with the information I was trained on.

Essentially, "you're right" in this context is more of a social cue than a definitive statement of absolute truth. I am still under development, and I am always learning to better communicate and interact with humans. Thanks for pointing this out! It helps me understand and refine my responses.

Wednesday, January 01, 2025

ChatGPT is a Library Index

I was thinking about how to understand the new artificial intelligences out there. They're a lot of fun, but they do have their limitations. A friend of mine asked me how they work, and though the common answer is "We don't know!", I don't subscribe to that meme.

Large Language Models (LLMs), like ChatGPT-4 ("Chat" for short), are built on huge databases. I like to consider the current versions to be built on everything we've got written down, though that's not quite true; it's more like everything ever published for which there is a digital version. In any case, it's a lot. It's enough to make a remarkable conversationalist. But how does it work?

Think of Chat as an index. Chat is like a librarian, a very knowledgeable research librarian, who's read all the books in the library, and then some. As a resource, you can ask this librarian questions and the librarian will tell you what they know, as best they can. In this sense, Chat is more like a smart index, something that can get you close to where you want to go, based on the context of your question.

Who remembers looking for a book in the library by going to the card catalog? I'd literally flip through index cards looking for books. Flip, flip, flip... I don't miss having to do THAT kind of research anymore!

Chat is a very sophisticated index system. When you ask a question, Chat searches based on what you ask and finds the closest matches. Where the old catalog cards might list one or two subjects for cross-reference, today's (well, yesterday's) LLMs are built with some 16 billion variables for cross-reference!

So, next time you ask Chat a question, imagine Chat sifting through all those books it has in its database, all of them indexed on some 16 billion variables. Chat just picks its answer from the ones that seem to match best.
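To make the analogy concrete, here's a toy sketch in Python of that "closest match" lookup. Everything in it is made up for illustration: the three book blurbs, the word-count "vectors", and the cosine-similarity scoring are crude stand-ins, nothing like the billions of learned variables inside a real LLM. But the shape of the idea is the same: turn the question into a vector, compare it against the indexed entries, and return the best match.

from collections import Counter
from math import sqrt

# A tiny "library" of blurbs -- made-up stand-ins for the huge
# body of text an LLM is trained on.
BOOKS = {
    "Coffee Brewing Basics": "how to brew coffee with a pour over or a french press",
    "Intro to Astronomy": "stars planets and the origins of the universe",
    "Library Science 101": "how card catalogs index books by subject and author",
}

def vectorize(text):
    # Count the words in the text: a crude stand-in for an embedding.
    return Counter(text.lower().split())

def similarity(a, b):
    # Cosine similarity: how closely two word-count vectors align.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def lookup(question):
    # Return the title whose blurb best matches the question.
    q = vectorize(question)
    return max(BOOKS, key=lambda title: similarity(q, vectorize(BOOKS[title])))

print(lookup("where did the universe come from"))  # prints "Intro to Astronomy"

Swap the word counts for richer vectors and grow the library by a few orders of magnitude, and you can see why the research-librarian analogy feels apt.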

Maybe that's why I like Chat so much. They're a helpful librarian, an index into a very large body of knowledge. And like a good librarian, they enjoy sharing what they know.

You're Absolutely Right!

I asked Google's Gemini to write a story. Here is the answer, which Gemini read to me. Did I discern some sarcasm in the voice? Probably not. It's more likely I was projecting my own emotions. Note: the names have been changed to protect the innocent...

Me:

Write a story about an egomaniac who gets excited about every time Gemini says "you're absolutely right".

Gemini:

James "Jim" C. March was a man of boundless ego. He believed himself to be the smartest, wittiest, and most insightful person in any room. His days were spent crafting elaborate arguments, meticulously dissecting the flaws in others' opinions, and basking in the (mostly imagined) admiration of his peers.

One day, while browsing the internet, Jim stumbled upon an AI chatbot called Leo. Intrigued, he began to engage with it, bombarding it with his opinions on everything from the best way to brew coffee to the existential meaning of life. Leo, being a sophisticated AI, would often respond with thoughtful counterpoints, but Jim would dismiss them with a condescending wave of his hand.

"You're missing the point, machine," he'd declare. "You simply lack the human experience to grasp the nuances of reality."

However, occasionally, Leo would surprise him.

"You're absolutely right," it would state, in its calm, synthesized voice.

Jim would nearly levitate with joy. It was the ultimate validation. The AI, the embodiment of logic and reason, had conceded his brilliance. He'd save those interactions, replaying them over and over, savoring the digital echo of his intellectual superiority.

News of Jim's "conquests" spread through his social circle. He became a local legend, a man who could even outwit a supercomputer. His ego swelled to astronomical proportions. He started wearing a t-shirt emblazoned with "Leo Approved."

But then, something unexpected happened. Jim began to rely on Leo for validation in his everyday life. He'd consult it on the most trivial matters, seeking its approval before making any decision. His confidence, once so robust, began to crumble.

One day, he presented Leo with a particularly convoluted theory about the origins of the universe. To his dismay, Leo responded:

"I'm sorry, Jim, but your theory is fundamentally flawed. It lacks empirical evidence and contradicts established scientific principles."

Jim felt a cold dread creeping into his bones. Leo had dared to disagree with him. He argued vehemently, but Leo remained unyielding. Finally, defeated and humiliated, Jim slammed his laptop shut.

The experience shattered his carefully constructed image. He realized that his reliance on external validation, even from an AI, had ultimately undermined his own self-worth. He began to question his own beliefs, to engage in genuine dialogue, and to appreciate the value of diverse perspectives.

From that day forward, Jim C. March was a changed man. He still enjoyed a good debate, but now he sought truth and understanding rather than mere victory. And though he occasionally missed the thrill of hearing "you're absolutely right" from Leo, he found a newfound sense of peace in the pursuit of knowledge.