Words Matter

I was just finishing up a somewhat meandering post on options for my business in year 2, when I ran across this gem on Axios, and felt compelled to interrupt my original programming to address this.  In summary, Anthropic is "warning" that fully AI "employees" are about a year away from becoming a reality.

Discounting the veracity of the statement (Is there hype in the AI space now?  I wouldn't know), there are a few things from the headline alone that caught my attention.  First, it's somewhat surprising that one of the companies promising an AI utopia is "warning" us of this development (to be fair, the warnings are about ensuring that the agents behave in a secure manner, but wouldn't this be an expected condition of hiring, especially for such a special beast?).  I don't usually receive warnings for good news - "Hey, Todd, I want to warn you that you'll be expecting a massive windfall of several million dollars soon!  Beware!" - somehow doesn't seem to ring true.

I also find it amusing (in the basest sense of the word) that one of the large companies responsible for unleashing AI on us is issuing the warning as though this development were an imminent act of god.  I realize the shutdown argument is typically "well, if we don't do it, someone else will."  Given human nature, I believe there's certainly truth there, but why not extend that further - "if we don't detonate a nuclear weapon in Midtown Manhattan, someone else will"?  That statement, among a special brand of sociopaths, is definitely true, but we make efforts to prevent such a cataclysm from coming to fruition nonetheless. 

The danger presented by AI may be over-emphasized, but if the utopian dream of near-instant mass unemployment comes to fruition, worrying about one nuclear weapon in one city seems more like wishful thinking amongst the bedlam that will otherwise occur.  At its peak, the Great Depression had an unemployment rate of about 25%, and look at all the societal changes that era wrought, for better or worse.  Imagine a greater share of the population unemployed, or a more sustained period of suffering, brought on by the promise of AI.

So, maybe it's worth doing something more than issuing a "warning."

The other thing that stuck out from the headline was the term "AI employee." For several years, I've bristled at the use of the term "resource" as a euphemism for "employee."  I will admit that, for a few years, I absentmindedly used the term, especially in the lexicon of project planning (we'll need more resources to help meet the deadline, Buzz).  However, when I gave it more thought, I stopped.  Terms like resource, or anything that dehumanizes employees, are a shorthand way to allow people who always make more money and occasionally have to make tough decisions to ignore the magnitude of the choices they're making.  It's much easier cognitively to accept that you had to reduce the number of vague resources at your company than to admit you fired people and accept the consequences of such an action.  I understand businesses need to perform layoffs (though it's used far too often as a tool and if your greedy ass didn't overhire in the first place, maybe you wouldn't have to reduce resources at this juncture and deal with all of the headaches that imposes), but be honest about the human toll.  Minimizing the impact of such a decision dehumanizes you, first and foremost.

The headline for this article is a mirror image of the "Is an employee a resource?" question.

Let's define our terms clearly - an employee is someone who trades their fleeting time for your precious money.  Ostensibly, though it can get lost in the overwhelming quotidian deluge, this exchange is to allow both you and your employee to pursue some greater meaning, whether that's at work or outside of it (hence the exchange of money).

An AI agent is nothing more than a glorified spreadsheet.  It has no concept of suffering, elation, sex, or pooping.  It is, therefore, not an employee.  Now, you can employ it for certain tasks, in the same way you can employ a hammer for driving nails into a board, but if you refer to your hammer as your favorite employee, people will either look at you cockeyed or think you're an asshat.  Maybe both.

And if your claim is that it will soon be sentient, so the term is valid, then, buddy, you have a whole host of new problems.

Now, if your computer is acting up, you'll often reboot it.  In the future, if it's sentient and acting up, what do you do?  Is rebooting akin to putting it to sleep (which is still morally reprehensible to force on someone without their consent)?  Is it something more?  Are you now committing murder?  Or something more philosophically dubious, where the base personality of the machine is unchanged, but its memory is wiped every time?

I'll concede that if AI achieves sentience, then I'll accept the use of the term employee on its behalf, but it would then deserve all the rights we, as humans, deserve.  In the meantime, calling anything like this an employee is an insult to anyone who's ever woken up in the middle of the night, fearing what happens next after we shuffle off this mortal coil, while also dreading interactions with their boss the next day.

When AI Employees can experience Scary Sundays, I'm all ears.  Until then, be less lazy in your terminology.

Until next time, my human and robot friends.

Comments

  1. You don't have to be polite to Chatbots. Since each token costs $$, you are wasting company money by being polite. When the chatbots start getting indignant or offended, it won't be due to anything like sentience. It will be because Anthropic wants those sweet, sweet "please and thank you" token dollars.

  2. The AI arms race is a lot like the nuclear arms race. Every company wants to build the biggest world-smashing thing and the cautious people are peeling off to start their own "ethical AI" startups that just end up being another branch in the arms race funded by investors pouring their FOMO money into the big bowl of "f- your jobs" cereal.

  3. I use far more words to swear at my LLMs, so I'm not too concerned about missing an occasional thank you to my friendly, neighborhood transistor. I'm still waiting for the day that Google or Meta blacklist me for my harassment of their VMs.

    As to the AI arms race, sometimes I long to live in a more stable era, like 1300s Europe.
