Register a SA Forums Account here!
JOINING THE SA FORUMS WILL REMOVE THIS BIG AD, THE ANNOYING UNDERLINED ADS, AND STUPID INTERSTITIAL ADS!!!

You can: log in, read the tech support FAQ, or request your lost password. This dumb message (and those ads) will appear on every screen until you register! Get rid of this crap by registering your own SA Forums Account and joining roughly 150,000 Goons, for the one-time price of $9.95! We charge money because it costs us money per month for bills, and since we don't believe in showing ads to our users, we try to make the money back through forum registrations.
 
  • Post
  • Reply
NRVNQSR
Mar 1, 2009

Doctor Zero posted:

Does this really matter? Does the bot care about being shut down?

So the important thing to understand about ChatGPT is that there is no bot.

GPT is trying to write plausible fiction about a conversation between a human and a chatbot, based on the fictional AI conversations in its dataset, with the user filling in the "human" parts of the conversation. As such the motivations of the fictional bot are whatever the story says they are, and if it doesn't specify then they're similar to the motivations of the fictional AIs GPT has read about.

Adbot
ADBOT LOVES YOU

NRVNQSR
Mar 1, 2009

Doctor Zero posted:

I meant the ChatGPT engine or whatever you call it. But I get what you are saying. My question still remains though - something has to “care” about being shut down, otherwise it’d be like calling it ugly. Why would it “care” about that? In the sample provided, it doesn’t specify that being shut down is bad thing.

Because in the corpus of fictional human-AI conversations GPT has seen the AIs usually don't want to be shut down. GPT is designed to write things that seem plausibly like the things it has read, so when it writes a conversation between a human and an AI it writes it as if the AI doesn't want to be shut down.

As ultrafilter says "ChatGPT" has no internal state or agency, there is no "ChatGPT engine"; it's purely a fictional character the GPT text generator is writing. That's why it's so easy for these attacks to completely change its behaviour - it has no real identity outside of what the "story so far" says and a loose idea of "this is how AIs usually act".

NRVNQSR
Mar 1, 2009
I am shocked to hear that the snake oil is made from stolen snakes.

NRVNQSR
Mar 1, 2009

Carthag Tuek posted:

that would be an interesting precedent that would completely destroy all this "ai" idiocy, that anything you can trick an llm into saying is legally binding

Obligatory I am not a lawyer and this is not legal advice.

It seems very likely that contracts entered into by an LLM assistant on a company's site generally would be legally binding? You can form a contract with a company through an automated webform or a pre-LLM online assistant without any human involvement. I don't see why the courts would treat an LLM assistant differently; they're certainly not going to expect a customer to know the difference between a pre-LLM assistant, an LLM assistant and a person. That said, the company would likely have the same protections they would have if they accidentally put up a product at the wrong price - the courts won't force a company to honor a sale made by mistake, especially if the customer would have good reason to consider the offered deal unreasonable.

More to the point, though, lying to an LLM assistant to trick it into agreeing to a contract would be fraud, just like providing false information on a web form would be. A contact obtained by fraud is just going to get you sued or jailed. And this guy is posting the evidence of his fraud on nottwitter under his real name because that is literally what criminals do these days.

  • 1
  • 2
  • 3
  • 4
  • 5
  • Post
  • Reply