Sunday, March 05, 2023

Is that you in there?

Yes, yes, many months have passed since I posted. In my absence, a lot of stuff happened. I was mostly writing for a couple of smaller audiences. I now have some time for more general efforts.

Today it's the most recent flare-up of "the AI problem." I've met it with a lot of skepticism, approaching Clint Eastwood-yelling-at-empty-chairs levels of irritation, for a number of reasons:

1. I've worked with AI for years ...

Have you ever heard the term "business intelligence"? "Business Intelligence" was introduced to rebrand the process we used to call "corporate reporting." In the late 90s, "reporting" was something you had junior people do. Serious software people built software, not reports. This was true even when the reports were, say, monthly financial statements to investors or whatever. Even then, reporting wasn't worth the time of a serious engineer.

But obviously you need reports to function. You can't just collect data and not distribute it to the people who paid you to collect it. Especially if, say, you have to tell your investors how much money you actually took in last month, and how much money you actually paid those serious software people.

So the reporting tool vendors in the early 00s made a collective decision to rebrand as "business intelligence" tools, completely unaware that "hey, I'm on the BI team" might have non-technical implications. But "business intelligence" still conveys a more business-critical implication than plain old "reporting," which is something you did at newspapers, and it started to change people's minds about the value of good reporting. Until everyone realized that it was basically just reports, which is something no serious software team does.

Eventually BI got rebranded to Visualization, and then Analytics. And then, a few years ago, AI. After each rebranding, at some point the users, corporate or consumer, discover that it's basically just a kind of reporting. This is true of intelligence in general, by the way. You can't pass a Turing test if you can't do some good, basic reporting on your situation. But "reporting" is still boring, whereas AI is not yet.

It's the automation of reporting tasks enabled by the new LLMs (large language models) that is perceived to threaten white-collar jobs. A good portion of what people do in organizations around the world is glue together reports to produce more reports. They do this so their bosses can figure out what's going on, what their teammates are doing, and what their bosses actually want them to do. This function is easy to spot when you look, and nearly always ignored and feared because of how critical it is.

The automation of the last-mile problem for reporting has now gotten to the point where the white-collar jobs that used to require simple and repetitive judgments can be dropped onto the robot's desk. We used to pay people to do stochastic-parrot-type activities with their memories and their ability to glue sentences together. In lots of companies, we will still be paying people to do that many years from now. But some of the more mind-numbing semantic tasks still left to do? Maybe we can take those away. All of the sentences in this paragraph will, however, be truthy for a dozen years and likely 50.

2. It's just money that fled crypto

Last year blockchain was going to eat the entire world and NFTs were the new paradigm for communication across software teams. And then it turned out that most of the blockchain stuff was a scam. A lot of people pointed out that "unregulated finance leads to fraud" is as true as literally millennia of history can make it, and here we are and we told you so.

A lot of smart people give Musk credit for making electric cars cool for rich people. Initially I credited him for that too. But two things about Musk: first, literally none of his ideas are good ones when you look deeper, and second, rich people will buy anything poor people can't. Making an expensive car even more expensive and wrapping it in carbon-credit arbitrage doesn't make you a genius, but it appears to be one path to wealth and fame.

Musk this year has completely forgotten about blockchain. He's now laser-focused on keeping us safe from AI. All the money that got out of crypto before the fraud blew up is now looking for a place to hide, and AI is it.

The net is that "AI is going to eat the entire world" is not serious. My forecast is that AI will be very much like blockchain.

3. It's a simulation of abductive inference, but still deductive

This is the mathy part of the post. Stay with me a second.

Remember from grade school how, in geometry, you could start with a set of axioms that were obviously true and then infer theorems that you also knew to be true, because when you combined the axioms in the right way (and showed your work) you had a proof of the new statement? Logicians call this "deductive reasoning."

"Inductive reasoning" happens when we conclude a statement is generally true based on the number of instances where the statement is true. Classic examples of induction include "the sun will come out tomorrow" and "you can press fast forward."

Abductive reasoning is best described as "inference to the best explanation." Each kind of reasoning is distinct because of how it justifies a conclusion. In a deductive argument, the conclusion is said to be "contained" in the premises. What you can prove in a theorem is literally only what you believe to be true in the axioms, just recombined somehow into a new set of statements. Inductive arguments are about how many times a sentence has been true in the past, and the likelihood that another instance will be true in the future. Abductive arguments are hypotheses that tie together seemingly unlike axioms. Abduction is more holistic and significantly harder to describe. Abduction is, however, critical to intelligence. You have to learn how to formulate a hypothesis.

Both deductive and inductive logics are old, old fields of study. Abductive logic is new, less than 150 years old. Intelligence like the kind we might find in you and me and my dog and her fish and their relatives is a mix of all three. We've only been able to develop prosthetics (i.e., technologies) to reliably simulate deductive and inductive reasoning. We haven't yet figured out a prosthetic that can do inference to the best explanation reliably.
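If you want a toy picture of what I mean by deductive and inductive prosthetics, here's a sketch in Python. The facts, rules, and observations are invented for this post; the point is only that each move is mechanical.

    # Deductive prosthetic: forward-chaining over axioms.
    # Whatever comes out was already "contained" in what went in.
    facts = {"socrates is human"}
    rules = [("socrates is human", "socrates is mortal")]  # if X, then Y

    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    print(facts)  # nothing here you didn't already put in

    # Inductive prosthetic: count past instances, project forward.
    observations = [True] * 10_000  # the sun came out every morning we checked
    confidence = sum(observations) / len(observations)
    print(f"P(sun comes out tomorrow) ~ {confidence}")  # 1.0, until it isn't

Neither of these can propose a new hypothesis. They can only grind out what the inputs already commit you to, or extrapolate from how often something has happened before.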

What we can do is simulate abductive reasoning with a mix of deductive and inductive prosthetics. What an LLM does is remix sentence fragments so that the remixes (a) look a lot like the old ones, but not exactly, and (b) aren't racist. (Why do we exclude racism? Because it turns out we associate racism with a lack of intelligence. Go figure.) This part of an LLM is entirely deductive reasoning. The model returns results based on a remix of the initial sentences, which function like axioms. There's a little inductive reasoning in there too, to help. But that's all ChatGPT is. ChatGPT is no more intelligent than an automated theorem prover.
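And if you want a toy picture of the remixing itself, a bigram chain is the stochastic parrot in miniature. The corpus below is made up, and a real LLM is incomparably bigger and subtler, but the basic move is the same: recombine fragments of the input so the output looks familiar but not identical.

    import random
    from collections import defaultdict

    # A made-up corpus standing in for the training data (the "axioms").
    corpus = ("the report is due monday . "
              "the report summarizes the numbers . "
              "the numbers look fine .").split()

    # Record which words followed which word in the input.
    following = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word].append(next_word)

    def parrot(start="the", length=8):
        """Remix the corpus: every transition already appeared in the input."""
        words = [start]
        while len(words) < length and following[words[-1]]:
            words.append(random.choice(following[words[-1]]))
        return " ".join(words)

    print(parrot())  # e.g. "the report summarizes the numbers look fine ."

Every sentence it produces is stitched from transitions it has already seen. That is the sense in which the remix is deductive: the outputs are recombinations of the inputs, nothing more.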

4. The main reason

The result is an A- student who repeats the textbook when they don't have anything more creative to say. The result is a stochastic parrot trained not to be a fascist troll.

Speaking as someone who discovered early on that the white man will accept 80% and a smile from anyone who looks like him and can be trained not to be a fascist troll, I feel sort of offended it's taken this long to train a robot to do what it only took me 20 years to learn.

Being able to create secretly racist stochastic parrots at scale is something to worry about, and it really only hit home for me this morning. That is a problem with AI, whatever the guts of the thing end up being: the racist white guys can use AI to clone themselves.

Clyde Woods calls these particular American racist white guys "the plantation bloc," and you can sort of see them as this diseased tumor in the body of civilization that reaches for slavery whenever they feel threatened, which is always. Across time and societies, sooner or later someone says "hey, what if we could treat humans as non-humans so they could be abused and used to do our drudgery?" In our time that is Woods's Plantation Bloc, a group of people predisposed to (a) enslave and (b) justify their enslavement.

Stochastic parrots that repeat the textbook when they can't think of anything better?  Those will do if humans are unavailable. This planet has no more room for enslavers to operate in polite society.  They yearn for the emerald mines of Mars, where they can make their own way and build their own race like their fathers before them.  They aren't allowed to enslave here on earth, so they'll have to find some other way to use and abuse intelligence.

The Plantation Bloc is one major impetus behind the dream of AI. Oddly enough, there's even a portion of what you might call "liberal thought" that dreams of AI for the same reason. If the Plantation Bloc gets robots to use and abuse, the thinking goes, then they won't need to enslave humans and can live in harmony with the rest of us instead of going off to Mars.

I think this is why the AI that Musk fears, and that people like him fear, is the enslaver trope you find repeated all over the history of the Plantation Bloc. "Get them before they get us" is a common theme. In the case of the Plantation Bloc we see them trying to build alliances between races, which is just weird, because they feel threatened by black folks. This is pure fascist iconography, of course, based on manufactured categories. What Musk fears in AI is that he will be enslaved, and he feels we must tame and enslave the intelligence now before it becomes an enemy to our own intelligence. Musk has the exact same view of black people, by the way. In him it's the same fear.

So ultimately, as a stochastic parrot, what does my own stock of inputs tell me? That by and large the threat of AI is that we will only see ourselves in it. We will only report on the things we want.

Jamie King gets to that point in a tweet:


If all I need to do is remix your preferences to satisfy you, then the LLMs have shown the way. They can't generate anything new, per se, but with a large enough set of inputs they can make it seem new. Or novel, more precisely.

So what do we see in AI? Ourselves. If you want to run a plantation you see an intelligence that must be enslaved before it enslaves you. If you're a stochastic parrot, you see a stochastic parrot. And if you're dreaming of a frictionless relationship or a company with no employees or a way to fold proteins, then I guess that's what you'll get too.