Chapter 4: On Bots¶
"Bot".
I want you to get the significance of the term.
When AI began spamming our chat rooms, we called them bots.
When they instead were moderators, and kicked spammers off the server, they were still bots.
When you were playing an online game and realized that your opponent was actually controlled by a program, he was a dirty, cheating bot.
When you wanted a relaxed game without the social obligation to play well, you played a bot game: bot opponents and even bot teammates.
When I developed battlefield robotics for the Army, we were building bots. "Why didn't this bot transmit its position?", "Load that bot into the truck."
Bot is a neutral term, and I have retroactively distilled its meaning as...
Bot: A machine that functions like a person.
Not all robots are Bots¶
I was asked once, on stage, whether a washing machine is a robot.
I said "yes" and hold to that. It is a machine with sensors and actuators that adjusts its behavior to the world around it.
But a washing machine's behavior is not close enough to thinking that we would anthropomorphize it. It's not an entity in our social order. Not a coworker, but a tool.
Bots fill roles similar to humans. They're there because they're useful in that position, and when they operate it feels to us, in some way, similar to when a person does it.
The Mind of a Bot¶
A Bot's World Model¶
Although a washing machine isn't a Bot, it can give us a simple example of a world model:
```mermaid
graph TD
    RW((Real World)) -- "Input (e.g., Temp)" --> Sensors[Sensors]
    subgraph Machine [The Machine]
        Sensors -- "Builds/Updates" --> WM[(World Model)]
        WM -- "Decisions based ONLY<br/>on internal model" --> Action[Output Actions]
    end
    Action -- "Acts on World<br/>(e.g., Apply more heat, keep spinning)" --> RW
```
The takeaway is that the world model exists inside the machine. When the machine is responding to its environment, it determines what to do based entirely on its internal world model.
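The loop in the diagram can be sketched in a few lines of code. This is a hedged illustration, not real appliance firmware; the class and method names are my own invention. The point to notice is that `act()` never reads the real world directly. It reads only the internal model that `sense()` maintains.

```python
# A minimal sense -> model -> act loop, mirroring the diagram above.
# Illustrative only: names and thresholds are made up for this sketch.

class WashingMachine:
    def __init__(self, target_temp=60.0):
        # The world model lives INSIDE the machine.
        self.world_model = {"water_temp": None}
        self.target_temp = target_temp

    def sense(self, real_world):
        # Sensors build/update the internal world model.
        self.world_model["water_temp"] = real_world["water_temp"]

    def act(self):
        # Decisions are based ONLY on the internal model,
        # never on the real world directly.
        if self.world_model["water_temp"] < self.target_temp:
            return "apply_heat"
        return "keep_spinning"

machine = WashingMachine()
machine.sense({"water_temp": 40.0})
print(machine.act())  # -> apply_heat
```

If the sensor lies, the machine acts on the lie. That is what it means for the model, not the world, to drive behavior.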
I opened this book talking about a curious spirit in me that lay dormant for decades. That period of slumber maps to the "AI winter" of the 80s-2010ish.
I first studied Artificial Neural Networks, (A)NNs, as a senior in high school writing a paper on AI. (I'm not a sentimental person, but I actually still have a printout of an article I sourced.) The concept struck me deeply because it had an elemental propriety to it, like a match between form and function: after all, the neuron is nature's thought-enabler...
But alas, NNs didn't seem that smart. We collectively decided that we didn't quite have it figured out and that NNs were just a cargo cult of cognition, bamboo runways reaching for nature's elegance, imitating the shape of something wondrous without grasping what made it fly. Bolstering the case for NNs being a dead end, there remain theories that glial cells, or quantum effects, or cytoskeletons are where the magic really happens in our brain tissue. Enough headwinds that neural net study was largely neglected.
But a few breakthroughs later and the bio-inspired approach started demonstrating its original promise of machinery that thinks like we do. With neurons. So that's where we must begin our understanding.
Here's how a neuron is working in your brain right now
Here's how a neuron works in a neural network
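The artificial version is remarkably small. Here's a sketch of a standard artificial neuron, assuming the textbook weighted-sum-plus-activation formulation (a sigmoid activation here); the specific weights and inputs are arbitrary numbers for illustration.

```python
import math

def sigmoid(z):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, plus a bias, through a nonlinearity.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Two inputs, two weights, one bias -> one "firing strength".
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```

That's the whole component: multiply, add, squash. Everything else in a neural network is wiring many of these together and choosing the numbers.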
In summary: neurons are how our minds think. We tried to get machines to think by mimicking the structure. It didn't work for a long while, and we mostly figured the mimicry was insufficient. After further development, it demonstrably works.
To condense further: we made machinery that looks like brainstuff and it's smart now.
Training: making adjustments to the neural network to make it smarter.
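Here's what that adjustment looks like at the smallest possible scale: one neuron, one training example, plain gradient descent on a squared error. This is a hedged toy, not how real models are trained at scale, and every number in it is made up, but the mechanism is the same: measure the error, nudge the weights to shrink it, repeat.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0        # the neuron starts knowing nothing
x, target = 1.0, 0.9   # one training example: for input x, output 0.9
lr = 1.0               # learning rate: how big each nudge is

for _ in range(1000):
    y = sigmoid(w * x + b)             # forward pass: what does it say now?
    grad = (y - target) * y * (1 - y)  # gradient of squared error w.r.t. z
    w -= lr * grad * x                 # nudge the weight...
    b -= lr * grad                     # ...and the bias, downhill

print(sigmoid(w * x + b))  # now close to the 0.9 target
```

A thousand tiny nudges and the neuron "learned" the answer. Scale that to billions of weights and trillions of examples and you have the training runs behind modern models.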
Understanding NNs at the level of detail I've provided is not required for you to use them. But it is required for you to lead them. You have to reach some acceptance that they are "thoughtful" entities. They have thoughts.
Further, they are minds. Or rather, for the bots we lead, the NN is a structural primitive of a mind: a fundamental object that performs the required math to serve as a component in a machine mind.
The "model" is a higher level of machine mind. The NN components enable learning and the full system that is learning is the model.
Before I expand on how modern bot minds work, let's fill in a missing element.
Yes, you can have AI that doesn't use NN.
During the aforementioned AI winter there was a great deal of development in expert systems, natural language processing, behavior trees, and other methods that I will not get into. It's useful stuff for engineers, and indeed it was the tech behind most of my own prior work that I've shared in this book, but it is not all that relevant for the kind of bots you will interact with in a leadership role.
Besides the historical AI techniques there are modern techniques that are more tools than minds, and so are not conducive to leading. There are some gray areas, like writing prompts to generate images, but the experience of the leader is not sufficiently different to warrant a separate deep discussion of the underlying technologies.
As it stands today, and for the most part, LLMs are at the core of these minds.
- LLMs and the transformer
- Context for LLMs (prompts, files)
- Intermediate formats and other extended minds (part of context)
- Guardrails, mental ruts, and sycophancy: why truth is important and why it will remain imperiled
- Agents (LLMs with tools)
Near Future Machine Minds¶
- Fine-tuning exists, but proper training will probably be done more outside of AI labs soon.
- There could be advantages to continuous training (while avoiding catastrophic forgetting).
- AGI & ASI
Bots are Useful¶
Bots fill roles that people do, or once did.
The Automation Equation¶
The great utility of Bots is to take over all or some of a task that used to require humans. That's Automation.
Whether you should automate something comes down to simple math.
\(C\) is the cost of putting the automation in place.
\(b\) is the benefit you get each time the automated system runs.
\(n\) is the number of times it runs.
If \(C < b \times n\), automate: the one-time cost is less than the total benefit.
Imagine you own a bakery. A machine that automates making cake sheets costs $100. Each cake it makes saves you $4 in labor—that's your \(b\). You expect 50 orders—that's your \(n\). Is \(100 < 4 \times 50\)? Yes. You buy the machine.
You might say "That's too simple! What about maintenance, training, downtime?" Well, do account for those things. Fold them into \(C\) or subtract them from \(b\). The tool works best if you give elements their proper weight. But don't lose the forest for the trees. The fundamental question for deploying an automation is always: does the cost of building it justify the benefit of running it?
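The whole equation fits in one line of code. This sketch uses the bakery numbers from above; the function name is mine, and in practice you'd fold maintenance into the cost and downtime into the benefit, exactly as the paragraph says.

```python
# The automation equation: automate when C < b * n.
# Maintenance, training, and downtime get folded into cost and
# benefit_per_run before you call this.

def should_automate(cost, benefit_per_run, runs):
    return cost < benefit_per_run * runs

# The bakery: $100 machine, $4 saved per cake, 50 orders expected.
print(should_automate(cost=100, benefit_per_run=4, runs=50))  # -> True
```

Trivial on purpose. The hard part is never the comparison; it's estimating \(C\), \(b\), and \(n\) honestly.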
Now here's why this matters so much right now, and why Bots are increasingly the answer.
AI is pushing every parameter in the equation toward more automation.
\(C\) goes down. We have AI tools that build more capable systems more easily. What used to require a team of engineers and months of development can now be prototyped in an afternoon.
\(b\) goes up. Consider customer service. No one used to be satisfied talking to a robot. Now people are choosing to speak with LLMs. A smarter bot handles more situations per interaction, so the value per cycle increases.
\(n\) goes up. A smarter system can do more kinds of things. Each new capability adds cycles. Let's consider humanoid robots. Their human form factor means they slot into tasks that actual human bodies do. That's a lot of tasks! We'll give each task a separate \(b \times n\) term and sum them: automate when \(C < \sum_i b_i \times n_i\).
That's the economics of generalism. A single-purpose robot has one \(b \times n\) to justify its cost. A humanoid sums the value of every task it can perform. For a purely digital bot, a generalist AI, it's the same deal. The benefit side of the equation becomes massive.
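In code, the generalist version just swaps the single product for a sum over tasks. The task list and every number below are invented for illustration; the shape of the comparison is the point.

```python
# The generalist automation equation: automate when C < sum(b_i * n_i).
# Each task the bot can do contributes its own benefit * runs term.

def should_automate_generalist(cost, tasks):
    """tasks: list of (benefit_per_run, runs) pairs, one per capability."""
    total_benefit = sum(b * n for b, n in tasks)
    return cost < total_benefit

# Hypothetical humanoid: baking, cleaning, stocking shelves.
tasks = [(4, 50), (2, 300), (10, 20)]  # total benefit: 1000

print(should_automate_generalist(5000, tasks))  # -> False, too expensive
print(should_automate_generalist(900, tasks))   # -> True, worth it
```

A single-purpose machine has to clear the bar with one term. The generalist stacks terms until almost any cost clears.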
Tasks that didn't make economic sense to automate last year probably do today. And the bots that carry out those tasks need someone in charge.
Bots get Anthropomorphized¶
It could be because they do the things that people do. It could also be because we're social: we want to work with people, to cooperate.