There's a 60% Chance 1+1 = 2

I'm going to take a brief break from my tirade against unchecked capitalism to revisit the AI hype machine.  I'll spend some time in the near future discussing the benefit I derive from using a coding assistant in my current work assignments, but there's a huge difference between recognizing the utility of a tool and betting on that tool to save (or doom) humanity.

Two particular posts hit my various feeds recently that made me think I should write something to help keep expectations in check, if only for my own sanity.

The first was an NYT article titled Powerful A.I. is Coming.  We're Not Ready. (gift link).  The author does his best to take a measured, objective tone and head off the perception that journalists who share his perspective are industry shills.  That helps keep his biases in check, but the following paragraph in particular made my Marge-Simpson-skeptical-groan response fire anew:

I arrived at [these views] as a journalist who has spent a lot of time talking to the engineers building powerful A.I. systems, the investors funding it and the researchers studying its effects. And I’ve come to believe that what’s happening in A.I. right now is bigger than most people understand.

So, being embedded in the belly of the beast and listening to all of the people who not only have a financial incentive tied to progress in the field but are also surrounded by the hype on a daily basis even more than the average individual (which is saying something), means a sea change is imminent?

If I may, I'll draw the obvious parallel first - Dude!  Jim is so eloquent.  He's definitely onto something!  Grab a cup, drink up, and listen in (though that last part may be hard after drinking, oh yeah)!

Silicon Valley, even prior to the whole Gen AI buzz, was known for being excessive.  I'll pick on Apple as an example (though examples are myriad).  Apple is probably the primary company responsible for changing the landscape of personal computing devices from the early 80s through the mid-2000s.  And these changes are huge - I'm sitting at a desk, pounding away on a laptop while my phone buzzes away every 10 seconds with notifications from every app I should probably uninstall.

But to listen to Apple's marketing spiel year in and year out, you'd be forgiven for thinking that Apple is going to deliver something more than the second coming of USB-C dongles for peripherals that used to just work on any laptop (that extra 1/4 inch of laptop thickness was definitely worth the aesthetic trade-off).

There's a reason why the term "Reality Distortion Field" applies to Apple specifically and Silicon Valley in general.  They live in an area where the weather, on balance, is gorgeous, and a catastrophic earthquake can strike at any time.  They're also working in a math and science adjacent field that is decoupled from the day-to-day realities of actual physics in most scenarios.  So, some level of delusion or magical thinking can be forgiven.

I'm as susceptible to being wrong as anyone with a blog (which means the likelihood is almost infinitesimal), but I don't think I'd ever predicate an argument in favor of something based on intimate relationships with industry insiders, so color me skeptical.

The second post that made me scratch my head quizzically was a suggestion in my LinkedIn feed likening the shift from traditional coding frameworks to LLMs (or, dare I say it - vibe coding) to the shift from low-level machine coding to the more abstract frameworks and languages we use today.  It's simply another layer of abstraction, the argument goes.  While the foundation of the argument is plausible, dig a little deeper and you'll realize it's not quite as solid as it first appears:

LLMs are probabilistic.  Compilers are not.  

In every shift of abstraction in the computing world, we've still relied on the fundamental principle that computers are nothing more than glorified on-off switches, reliable in their operation and computation.  The 2nd Law of Thermodynamics (things tend toward disorder) means this isn't entirely true, but for all practical purposes, you can calculate the output of the circuits operating a computer from now until the sun burns out.
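To make that determinism concrete, here's a toy sketch (purely illustrative - the function names are my own) that builds addition out of nothing but NAND gates, i.e., the glorified on-off switches in question.  Same inputs, same outputs, every single time:

```python
# Boolean gates as pure functions: given the same inputs, the same outputs,
# every time. This is the determinism every abstraction layer above the
# circuit level quietly relies on.

def NAND(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def XOR(a: int, b: int) -> int:
    # Standard 4-gate XOR built from NANDs.
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Returns (sum_bit, carry_bit) for two input bits."""
    return XOR(a, b), 1 - NAND(a, b)

# 1 + 1 at the circuit level: sum bit 0, carry bit 1 -> binary 10 == 2
print(half_adder(1, 1))  # (0, 1)
```

Run it a billion times and `half_adder(1, 1)` will produce `(0, 1)` a billion times.  That boring repeatability is the whole point.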

Take this level of abstraction a few levels higher, and you expect your compiler to operate consistently: every time it sees a request to add 1 to 1, it translates that into something like

set contents of register 1 to value 1
set contents of register 2 to value 1
add contents of register 1 to contents of register 2
return contents of register 2

register 2 returns the value 2

[Apologies to those of you who know assembly and those of you who don't, I've probably made both sets of you gag a little bit.]

If that instruction occasionally added the value 3 to register 2, or subtracted the contents instead, you wouldn't rely on that compiler to run any software.  Ever.
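To see how corrosive even occasional flakiness would be, here's a hypothetical sketch - the `flaky_add` function and its 3% error rate are my invention for illustration, not any real compiler's behavior:

```python
import random

def deterministic_add(a: int, b: int) -> int:
    # What we actually get from a compiler/CPU: the same answer, every time.
    return a + b

def flaky_add(a: int, b: int, error_rate: float = 0.03) -> int:
    # Hypothetical "97% reliable" adder: occasionally returns garbage.
    if random.random() < error_rate:
        return a + b + random.choice([-1, 1, 2])
    return a + b

random.seed(0)  # fixed seed so the simulation is repeatable
trials = 10_000
wrong = sum(flaky_add(1, 1) != 2 for _ in range(trials))
print(f"deterministic_add(1, 1) = {deterministic_add(1, 1)}")  # always 2
print(f"flaky_add was wrong {wrong}/{trials} times")  # roughly 3% of trials
```

Nobody would ship software on the second adder, yet that's the reliability class we're grading LLMs against.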

LLMs lie up to 40% of the time.  People forget (or are unaware) that confidence levels are baked into machine learning, the subfield of computer science from which LLMs are derived.  With this in mind, 97% confidence is really good!  A compiler that fails 3% of the time is really bad!  Even improving LLM hallucination rates to only 3% leaves a system that's unreliable for computation.
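A quick back-of-the-envelope sketch shows why a 3% per-step error rate is disqualifying: errors compound across chained operations (assuming, for simplicity, that each step fails independently):

```python
# If each step of a pipeline is independently correct 97% of the time,
# the chance an entire N-step task completes without error decays fast.

def chance_all_correct(per_step_accuracy: float, steps: int) -> float:
    """Probability that every one of `steps` independent operations succeeds."""
    return per_step_accuracy ** steps

for steps in (1, 10, 100, 1000):
    p = chance_all_correct(0.97, steps)
    print(f"{steps:>5} steps -> {p:.1%} chance of a fully correct result")
```

At 100 chained steps you're below a 5% chance of an error-free run - and a compiler performs millions of translations per build.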

Yeah, but things will improve.

First of all, says who?  Just because humans have ostensibly always found a way to improve our condition in a crisis doesn't mean it's a given, unless you believe in divine intervention from a non-retributive god.  It's also possible that we're reaching the limits of what we can improve.  And one of the existential threats that intersects closely with this field, thanks to its power consumption needs - global warming - is one we seem to collectively have little interest in solving.

But, let's assume that things can improve.  The neural-net model on which LLMs are based is only vaguely understood even by experts in the field, much less by occasional practitioners - and, even then, it's still probabilistic.

Now, it's possible to glue various components together around an LLM to form a more comprehensive, reliable AI - say, when the LLM notices there's a computation, it farms the work out to the CPU - but that still requires the AI to consistently recognize the computational pattern and act accordingly.
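A toy sketch of that routing idea (the regex and the `route` function are purely illustrative, not any real system's design).  Notice where the fragility lives: in the recognition step itself.

```python
import re

# If a query looks like arithmetic, hand it to ordinary deterministic code
# instead of letting a model guess. If the pattern isn't recognized, the
# reliable path is never taken - which is exactly the weak link.

ARITHMETIC = re.compile(r"^\s*(-?\d+)\s*([+\-*])\s*(-?\d+)\s*$")

def route(query: str) -> str:
    match = ARITHMETIC.match(query)
    if match:
        a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
        results = {"+": a + b, "-": a - b, "*": a * b}
        return str(results[op])  # deterministic path: exact answer
    return "(fall back to the probabilistic model)"

print(route("1 + 1"))        # recognized -> "2", every time
print(route("one plus one")) # unrecognized -> back to the LLM's best guess
```

The deterministic branch is perfect - but only when the probabilistic front door decides to open it, so the system's overall reliability is still capped by the recognizer's.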

Also, every increase in accuracy (assuming there isn't an actual physical limit, which there likely is - see the Second Law of Thermodynamics again) requires more compute, which requires more electrical power, which is a huge drain on natural resources.

I can get behind the argument for making current models less power-thirsty, but we're in a weird loop where people are calling for the invention of cold fusion to solve the power problem, at which point AI will swoop in with the mystical answer to global warming.  This set of assumptions is even more dubious than those that propped up sub-prime mortgage packaging (and we all know that ended well).

If your argument is that there's so much money in the space that a breakthrough is inevitable, ask yourself how much money an industry could throw at creating a faster-than-light engine before achieving success [hint: all of it].

Humans make mistakes, too.

Ah, the weird, ad hominem self-own.  I'm always amused/horrified by people who take the side of a robot over humanity in these debates (unless it's actually AI-generated comments spontaneously arguing about silly humans all on their own.  If that's the case, then I walk back everything I said above - AI has achieved sentience through trolling!)

But, yes, humans make mistakes.  However: (a) we've had thousands of years to accommodate frequent human error.  We even have proverbs about it.  (b) See above: we don't expect computers to make errors with anything approaching the frequency we accept from humans.  (c) The class of errors is entirely different.  Computers still deal in literal instructions.  They don't actually have the power to infer, only to pattern match.  (d) Let's assume none of this matters.  We're now spending billions, if not trillions, of dollars - as well as risking geopolitical stability, societal pillars, and the health of the planet - to mimic behavior we can get for free.  Seems like a good investment.

In conclusion, I admit I could be wrong.  It wouldn't be the first time.  But while I see evidence of incremental improvements (remember when that used to be enough?  We used to love dancing hamsters and not demand that they also 3-D self-print on demand), it seems like the gap between where we are and artificial general intelligence is ocean-wide, if not galaxy-wide.  If I'm wrong, though - Roko, I just want to say that you've always been my favorite virtual snake!

Until next time, my human and robot friends.
