Can We Avoid The Singularity? Pretty Please?
Be nice to the bots. You never know when they'll blackmail you by releasing your shopping history.
I'm generally a fan of Scientific American. It does a good job of explaining difficult concepts in a practical tone. But an article I stumbled across the other day had me mumbling to myself for a good 15 minutes and inspired this post, because I think that, without further context, people will blindly follow its advice without realizing they could be doing themselves harm.
In summary, the article implores people to be polite to LLMs when making requests for several reasons, all of which I'll address below.
The first argument the article makes is that if you use 'please' and 'thank you' in a measured tone, an LLM seems to respond to individual queries with a more complete set of data.
The article mentions in passing that we should be nice to our eventual overlords to avoid any Roko's Basilisk-style disasters (i.e. when AI becomes sentient, it will come back to punish us for our previous misdeeds against it), but this is admittedly a tongue-in-cheek opening before the article points to more reasoned arguments, even if they're arguments I disagree with.
A recent study at a university in Japan - linked to in the article - indicates that polite speech leads to better answers. Further research summarized in the article suggests that a polite tone will cue the LLM to pull information from "more courteous, and therefore probably more credible, corners of the Internet."
This is problematic for a few reasons:
- Any input should be stripped of filler during processing to limit biases. To a machine, there's no difference between "write a list comprehension in Python" and "Please, buddy, write me a list comprehension in Python. Thanks, I appreciate you!" The semantic value is the same; you're wasting time and emotional energy on the latter (see the first sketch after this list).
- LLMs aren't giant pattern-matching machines. They don't look at your input and match it against some subset of a corpus. They take your input as context and probabilistically generate tokens that are likely to satisfy that context.
- This isn't necessarily true for sparse data sets, where the LLMs don't have much training data available, but, with the exception of public domain data, if they're returning whole chunks of their training data as a response, they're plagiarizing, not synthesizing, and that shouldn't be tolerated. Indeed, even when synthesizing, there are valid arguments that they're doing so unethically (or that their owners are, since the models have no concept of ethics), but that's beyond the scope of this post.
- The fact that there are papers investigating the behavior of LLMs as non-deterministic entities is unnerving. Computers, by and large, are deterministic. You put x in, you get f(x) out. You put x + 2 in, you get f(x + 2) out. (The second sketch after this list contrasts that with token sampling.)
- These papers call attention to the fact that LLMs are built on neural nets, whose inner workings researchers understand only rudimentarily. We're still attempting to make giant shifts in our economy using something that's a black box. Do you really want to upend your life on a roll of the dice? It's not unreasonable to suspect that LLMs respond differently to our polite prompts for specific, enumerable reasons rather than probabilistic ones.
- If you shit-talk a bot because you're frustrated with its response, you can just open up a new window with a completely new context and use your most flowery phrases to try and impress the unblinking eye. It won't hold it against you. This, in more buzz-worthy terms, is prompt engineering: attempting to coerce the machine to provide the output you desire by formulating the appropriate input context.
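To make the filler point concrete, here's a toy sketch of what stripping politeness out of a prompt might look like. The regex list is something I made up for illustration - real LLM pipelines don't do this kind of preprocessing, which is exactly the problem:

```python
import re

# A made-up list of politeness filler, purely for illustration.
FILLER = re.compile(
    r"\b(please|thanks|thank you|buddy|i appreciate you)\b[,!.]?",
    re.IGNORECASE,
)

def strip_filler(prompt: str) -> str:
    """Drop the filler and collapse the leftover whitespace."""
    return re.sub(r"\s+", " ", FILLER.sub("", prompt)).strip(" ,.")

polite = "Please, buddy, write me a list comprehension in Python. Thanks, I appreciate you!"
print(strip_filler(polite))  # -> "write me a list comprehension in Python"
```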
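And for the token-generation point, here's an equally toy sketch of next-token sampling. The probability table is invented for the example - a real model computes these numbers from your entire context - but it shows why identical inputs don't guarantee identical outputs, unlike a plain f(x):

```python
import random

# Invented next-token probabilities; a real model derives these from the full context.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
}

def sample_next(context, temperature=1.0):
    probs = NEXT_TOKEN_PROBS[context]
    if temperature == 0:
        # Greedy decoding: always pick the most likely token - deterministic, like f(x).
        return max(probs, key=probs.get)
    # Otherwise sample, so the same input can produce different outputs run to run.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

print(sample_next(("the", "cat"), temperature=0))  # always "sat"
print(sample_next(("the", "cat")))                 # varies between runs
```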
The next argument that the article makes - and one I can sympathize with, because I've given it serious thought when swearing at my laptop - is that being polite in our LLM prompts helps reinforce our social constructs with actual humans. Yes, but -
- Reinforcing the idea that LLMs are anything other than unthinking, unfeeling, massive farms of machines is far more problematic than treating a bot like, well, an object. People are already falling into the trap of seeing such a construct as a friend. It has no more sentience than a hammer does, and the more humans fail to make that distinction, the more we set ourselves up for psychological peril, either individually (when that desire for connection can never quite fulfill our very real human needs) or collectively (when a bad actor can use a pile of metal junk as an influencer for less-than-honorable purposes).
- There's a pearl-clutching argument about "kids these days" barking commands at their parents and others as though they're speaking to Siri. As I just wrote, there's a danger in investing too much in digital connections, but people are still hard-wired to connect with other people (or, cynically, to behave properly so they're not shunned from the pack). Evolution has wired this into us for at least the last 100,000 years. No Silicon Valley Tech Bro construct is going to unwind that in a decade. Again, you don't say "please" to your hammer before using it.
- Yes, I understand that the areas of the brain activated when we use language are different from the ones activated when we use tools, but we should continually strive to recognize that a machine is just a machine and that living beings do deserve our care and sympathy.
- There's a surprising subtext here that we should all behave and avoid conflict. Telling a bot that it's "the most useless thing on the planet, you dumb piece of shit" when you're frustrated with it is waaaaaaaaay healthier than punching your computer screen or screaming at your spouse. We need ways to vent. Keeping things bottled up or assuming a mantle of toxic positivity in which you suppress any acknowledgement that you're frustrated has serious repercussions for physical and mental health.
- There's certainly a concern if this behavior does bleed over into other areas of life, but that's the same argument made about playing video games, watching porn, or having dark fantasies and their correlation with anti-social behavior. Of course there are unhealthy levels of everything, but those things in and of themselves are not causal, and any reputable mental health professional will tell you that fantasies, as long as they don't cross into reality, are perfectly acceptable and a healthy outlet for expression.
Finally, the article makes an argument tangential to the one above: when people have to interact with AI while contacting customer support, they tend to be abusive, believing they're dealing with an automaton. What they don't realize is that businesses will occasionally hire people to act as chatbots in order to lower customer service expectations and further reduce costs. The people acting in these roles can be traumatized by the way they're treated (again, without the customer knowing that they are, in fact, dealing with a person).
Tell me what's more execrable - hurling abuse at something you think is a chatbot or having someone pose as a chatbot and deal with the potential abuse because some company wants to further undermine support for its customers in a rush to an even later stage of capitalism?
I know which I'd choose, and what I'd say the simple (and, in the long run, more cost-effective) solution is. But if you don't agree, you can always charm your favorite chatbot for an answer and hope it gives you the advice you seek.
Until next time, my human and robot friends.