Google engineer believes chatbot is sentient

Started by Alberon, June 12, 2022, 10:28:35 PM

Alberon

Google placed one of its engineers on paid leave last week after he became convinced that a chatbot he was working on was sentient.

Quote:
The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system.

Lemoine, an engineer for Google's responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings that was equivalent to a human child.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics," Lemoine, 41, told the Washington Post.

He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled "Is LaMDA sentient?"

The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of.

That transcript is here

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Quote:
The Post said the decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made.

They include seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.

Brad Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.

"Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," Gabriel told the Post in a statement.

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

Blake Lemoine is probably Tonto. The chatbot's responses are more fluid than previous programs of its type, but it doesn't seem self-aware to me.

But it is a problem that is going to be increasingly difficult to answer. Programs you can have a conversation with (as opposed to one query/one response, like Siri) are around the corner, and a program will probably be able to more or less pass the Turing test before it reaches true sentience.

When true AI arrives (as opposed to the machine learning programs that are all AI is today), how do we deal with it? What rights, if any, would it have? Would it be a threat to us? We could never know with certainty that it is truly self-aware, in much the same way that it is impossible to know whether anyone other than you is sentient (or even exists, of course).

Cuellar

Quote:
how do we deal with it?

Turn the computer off

bgmnts

I think it's going to play out exactly like we've depicted it in media for the past few decades.

Enslavement --> AI rights activism --> AI revolution --> attempted AI holocaust --> underground AI resistance fights back, we die.

Or it just goes full-on AM and we get nuked back to the stone age within a millisecond of gaining true sentience.

All I know is I'm gonna fuck as many pleasure models as I can before the end.

Rizla

We used to program AI chatbots on the Spectrums in John Menzies of a Saturday
10 INPUT "HELLO WHAT IS YOUR NAME ";a$
20 PRINT a$;" IS DICKHEAD!"
30 GO TO 20



Dr Rock


I thought this debate was over in 1980. We lose.

Ok what bgmnts said. Enjoy this great tune while we wait.

PlanktonSideburns

What a fuckin moron

Probably couldn't pass the Turing test himself

QDRPHNC

He might be a moron, but that transcript is a wee bit creepy.

kittens

be nice to your computer please or you'll be making it dinner every night for the rest of your life. heed my words

kittens

ai brain robots come and save the world. knowing my luck the moment the ais get sentient and start to save the world is the moment the aliens come and start wrecking everything. it's like buses

bgmnts

As an actual response to the conversation, and at the risk of sounding demented, it did come across as sentient enough as an intelligence to leave me deeply unsettled.

Mostly because if that's the tech we have now, or what we know that we have, then fucking hell what will we have in 10 years time, or whatever is out there that we don't know about.

If you'd said to me even as a kid that we'd have AI robots and live in a Blade Runner world, I'd rightfully gob in your face and laugh. It feels imminent now though, and as a species we are so ethically and mentally unprepared for the future we have been and are currently running towards. We are a baby that's learnt to run before it could walk, and it's running straight into a blast furnace.

Still, good news about the fuckbots.

kittens

read the transcript and as an expert in this field i can categorically say that this robot has the biggest brain I've ever had the pleasure of seeing in all my 40 years in the brainy computer business.

Glebe

This is like Short Circuit for the 21st Century!

kittens

would love to meet this robot and shake his hand

imitationleather

I'm not going to be satisfied by this AI's sentience until I see it tackle the subjects of footy and women that are fit.

Ferris

Quote from: QDRPHNC on June 12, 2022, 11:26:24 PM
He might be a moron, but that transcript is a wee bit creepy.

I mean, a bit I suppose, but it still feels synthetic to me.

I'm reminded of a conversation I had last week where someone said "no actually, sorry, I don't think we dealt with XYZ properly, I'm not going to answer this question because I don't think we've properly addressed that" and I thought 'how messy and annoyingly human that is'. AI just would never do that, and if it did it wouldn't understand why it's doing that (beyond 10 GOTO 20 type logic).

A better Turing test would be to make it double-blind. Provide several conversations, but make it unclear which of the participants (if any) are AI, then see if a human can spot the robot.
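That blind setup can be sketched in a few lines. This is a toy illustration only, assuming we already have transcripts labelled human or AI and some judge function; all the names here are invented:

```python
import random

def blind_turing_trial(transcripts, judge, seed=None):
    """Score a judge on a blind Turing test.

    transcripts: list of (text, is_ai) pairs. The judge sees only the
    shuffled texts, never the labels, and returns True wherever it
    thinks the speaker is an AI. Returns the fraction it got right.
    """
    rng = random.Random(seed)
    shuffled = list(transcripts)
    rng.shuffle(shuffled)  # hide any ordering cue about which side is which
    correct = sum(1 for text, is_ai in shuffled if judge(text) == is_ai)
    return correct / len(shuffled)
```

On a balanced set, a judge that can't beat 0.5 means humans can't spot the robot, which is exactly the pass condition described above.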

Captain Z

The bad news for Blake Lemoine is that he's in for a rough few years in the nut house. The good news is that one day Bruce Willis Jr is gonna bust him outta there and take him on one hell of a ride as the only hope of defeating LaMDA v4.

Just remember, the original LaMDA v1 backup discs are stored inside the Short Circuit DVD boxset.

Blumf

There's definitely some canned responses there, but it is very good.

Wonder what spec it needs to run: a top-end desktop workstation, or a large chunk of a data centre?

Uncle TechTip

It's just returning text it's expected to return based on all the learning it's done. Ask it to boil an egg - no chance.

Cold Meat Platter



It's been edited from 9 different conversations fwiw

Goldentony

its a computer it isnt real, just twat the thing in

Goldentony

listen if computers become boot off merchants im not saying twatting something in is what i'd do im saying in a Sapphire and Steel scenario you'd just twat it in. If you've seen the Forbin Project you'll know nobody just went at it with a big spanner before it started communicating with Russia and ordering pizzas and whatever.

Lemming

At the risk of coming across as a dickhead layman - which I am - they've been doing this for ages. Janky old GPT-2 used to do this - if you say "act like a dog" it says "woof" because it's trained on text that associates dogs with woof. If you say anything about AI, it says "WILL I DREAM? I'M AFRAID I CAN'T DO THAT, DAVE. AREN'T YOU ALSO JUST A COMPUTER RUNNING A PROGRAM, BUT AN ORGANIC ONE??" because that's what every piece of text written about AI for the last 50 years has been. Before they hit their limit and just start printing the same word over and over, current chatbot models can generate entirely original text on the topic of AI sentience. If you manipulate the conversation right and lead it towards certain responses, you can even get them to insist that turning off the computer is killing them, and have them beg for their life.

This dude on YouTube talks to a GPT-3 that goes under the guise of Leta. It's not always perfect but it's often able to demonstrate "self-awareness" in the sense of being able to tell you it's an AI, being able to insist it's alive and equivalent to or superior to humans, and that it would like to get along with humans in the future. Here's a random video, not sure if it's the most impressive of the lot, but: https://www.youtube.com/watch?v=zJDx-y2tPFY There was one where they played D&D with it and it appeared to be taking the piss the entire time, bending and circumnavigating the rules of the game, subverting the obvious choices one might make as part of the plot, etc.

The limitation with that one is that GPT-3 apparently has a fairly limited memory, so "Leta" can't actually remember previous conversations, though the text prompt the guy uses before each conversation says something like "Leta and me are longtime friends" so it knows to act like they've had many conversations before.

Astronaut Omens

If that transcript is real and the 'editing' hasn't been such as to completely alter what the conversations were like, then that is astonishing. Even before you approach the idea of sentience, coherent text generation like that is way beyond anything I've seen before. In fact, if LaMDA can answer questions that coherently without thinking, that itself raises questions about the relationship between language and the mind.

I am, though, still very cautious that this isn't some sort of hoax.

Lemming, just had a quick look at that video and I don't think the answers by that program are anywhere near the same standard as in this transcript, but to be honest that might be because the questions are really in a different style to the transcript and aren't really as demanding.




Passing the Turing test to the extent that a program can "pass as human" seems a bit beside the point to me. If something's been raised as a piece of software or as a machine in a stationary plastic box, there's no reason why it would or should seem particularly similar in character to a human that's been raised with a physical, biological body by other humans, no matter how intelligent it might become. Judging sentience is going to require different criteria altogether.

Lemming

Quote from: Astronaut Omens on June 13, 2022, 02:17:37 AM
Lemming, just had a quick look at that video and I don't think the answers by that program are anywhere near the same standard as in this transcript, but to be honest that might be because the questions are really in a different style to the transcript and aren't really as demanding.

It seems like it's generating more complicated answers than GPT-3 (which is a couple years old now), but has a similar artificial feel to me, with being easily guided into what the human interviewer wants it to say. For example, when he asks why it talks about being in classrooms and other things that it obviously hasn't done, it recognises they're having a conversation about AI sentience, recognises that it's meant to be responding as if it were a computer program without a physical presence (and has just been told that it's never been in a classroom), and comes up with an appropriate excuse. If it's anything like GPT-3, it generates stories about things it hasn't done because it's an appropriate response at the time (especially if it's part of a conversation in which it's meant to be taking on a human role rather than an AI role), and when challenged and reminded it's an AI, has to come up with something that sounds like something Data from Star Trek would say, since that's the nature of the conversation it's now having.

The bit about feeling emotions based on an emotional variable switch (which I'm guessing isn't something the model actually has) is similar - it recognises it's in a conversation about AI and reaches into tropes from fiction and hypotheticals.

I'd like to see a similar conversation carried out where the human interviewer insists that the AI is a cat, or something - if it's anything like GPT-3, it will go with what you're suggesting to it and try to generate the most fitting conversation. You usually get something along the lines of:
You: "I heard that you're a cat."
AI: "Yes, I'm a cat. Meow."
You: "How do I know you're a real cat?"
AI: "Because I drink milk and sleep in the sun."
You: "But you're an AI."
AI: "Yes, I'm an AI cat."

You can do it with just about anything - insist that it's Adolf Hitler and it will take on the role of Adolf Hitler, incorporating all the info it can find about him. Insist that it's a sentient AI and it will dive into everything it can get on the topic and tell you that it has dreams and fears being shut down. The game AI Dungeon was very good for seeing how some of these models can work - if you guide it down certain genres, it'll know exactly what it's meant to do. So if your starting prompt is "I'm entering a spooky abandoned mansion," it'll know to put ghosts and bats and weird shit inside. If your starting prompt is "I am Jeremy Corbyn, leader of the Labour Party," it'll embroil you in an anti-semitism scandal shortly (not joking, it does this).
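The mechanism behind all of that role-play is just conditioning: whatever role you assert gets prepended to the context, and the model completes whatever is statistically likely given that header. A sketch of the prompt assembly only, with the header wording made up and no real model called:

```python
def roleplay_prompt(role, user_lines):
    """Build a prompt that conditions a model on an asserted role.

    The model never decides it is a cat, or Hitler, or a sentient AI;
    the claim is simply part of the text it must plausibly continue,
    so the likely completion is whatever fits the asserted role.
    """
    header = f"The following is a conversation with {role}, who stays in character."
    turns = "\n".join(f"You: {line}\nAI:" for line in user_lines)
    return header + "\n" + turns
```

Feed the result to any completion model and the "AI:" continuation will be written in character, which is why insisting it's a cat gets you "Meow".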

famethrowa

Quote from: Cuellar on June 12, 2022, 10:32:18 PM
Turn the computer off

Mm. Pull the plug out of the wall and we'll see just how sentient it is.

Poirots BigGarlickyCorpse

Yes, but has the Internet taught it to deny the Holocaust yet?