The Next American Century Podcast

Big Government's Big AI Excuse

Edited Transcript below.

This morning, scanning the Wall Street Journal, I saw another round of stories about how the government wants to do two things: regulate artificial intelligence so that it won't be dangerous, and subsidize artificial intelligence so that ours will be as good as everyone else's, including that of most-dreaded China.

This is silly for the big reason that big AI, by its very bigness, is doomed to fail.

No one seemed to care very much about artificial intelligence until generative AI came along.

Yes, there were occasional protests against the use of big data for marketing purposes. Bizarrely, some people imagined Big Data infringed on their privacy, as if being one byte among several trillion could possibly infringe on anyone's privacy.

Generally speaking though, no one was up in arms about the AI threat until it seemed as if AI could write a poem, write a novel, write a phony media story, be used to slander public figures, and generally imitate human beings.

Now the AIs that can do this, the big generative AIs, are seen as opportune targets for government regulation, mostly because they are so, so big. They are trained, and largely run, in gigantic data centers.

This, first, gives the government a location, a place where it can regulate; and second, it allows the government to stoke fear and resentment of megacorporations. At the same time, being the top AI nation seems like one more national security imperative. (Lost count? Maybe we need an AI for that.)

The problem is that the very bigness of generative AI guarantees its failure.

We all know that AI isn't perfect, that it makes mistakes, that it has hallucinations, that it generates false data. This is so well known that now most AIs will, at the end of some complicated conversation, tell you, ‘by the way, I could be wrong about all that.’

This cannot be fixed. It can't be fixed because big AI is necessarily a black box. It cannot verify its answers because it doesn't know why it answers as it does.

Of course, it doesn't really know anything. AIs aren't intelligent. But because AI works by summing probabilities over millions, billions, now trillions of instances and parameters, it cannot go back and say, ‘The reason I told Sally the answer was X is that I observed A, B, and C.’ It can't do that.

That matters because it means that generative AI on that scale will never be reliable. The “I can't tell you why” problem is not merely a next step on a technological pathway, one that will be solved as we keep getting bigger and bigger and bigger. Bigger isn't better, so it can't be solved.

There are AIs that can tell you why. They are generally small. A company I'm interested in and have written about that does small AIs is One AI, which, you will not be surprised to hear, is an Israeli company. One AI proceeds on the idea that if your AI is smarter than an intern, it's a problem.

That's because what AIs mostly do for business is intern work.

What One AI does is provide a platform with about 40 skills you can choose from: the skill of summarizing a conversation, the skill of evaluating a customer's actual interest in buying your product, the skill of outlining a conversation.

These 40 or so skills comprise tasks to which you could typically assign a bright intern. You would hand the intern, say, a bunch of transcripts from customer calls and conversations and ask him to pull out certain tendencies. Not that hard. It's just time consuming, right?

So that's important. Whenever an AI is genuinely useful, it's not because it's doing something that is hard in any one instance. It's because it's doing a lot of simple things very, very fast.

One AI's conclusion is that what a business wants in an AI is not something really smart, but about 40 million interns.  That’s what it provides.
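As a purely illustrative sketch of that "platform of skills" idea (the skill names and the run_skills helper below are my inventions, not One AI's actual interface), the pattern might look something like this:

```python
# Hypothetical sketch of a "pick your skills" platform. The skill
# names and run_skills() are invented for illustration; they are
# not One AI's real interface.

def summarize(text: str) -> str:
    # Stand-in: a real skill would run a small task-specific model.
    return text.splitlines()[0][:80]

def outline(text: str) -> list[str]:
    # Stand-in for a conversation-outlining skill.
    return [line[:40] for line in text.splitlines()]

SKILLS = {"summarize": summarize, "outline": outline}

def run_skills(text: str, requested: list[str]) -> dict:
    """Run each requested skill over the input, like handing the
    same transcript to several interns at once."""
    return {name: SKILLS[name](text) for name in requested}

call = "Customer asked about pricing.\nRep promised a quote by Friday."
print(run_skills(call, ["summarize", "outline"]))
```

Each skill is simple on its own; the value is in running many of them, over many transcripts, very fast.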

Because the models that can do these specific skills are small in scale, they can be verified. You can ask your intern, your AI, why it reached that conclusion. You can ask it to show you the data it used. Because it can be verified, it's useful. It's also cheap compared to a big generative AI model.
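To make that concrete, here is a minimal sketch of what a verifiable "intern" could look like. Everything in it (the phrase list, the scoring) is an assumption of mine for illustration, not how One AI actually works; the point is only that a small model can hand back the exact evidence behind its answer:

```python
# Minimal sketch of a verifiable skill: it scores a call transcript
# for purchase intent and returns the exact lines it relied on.
# The phrases and weights are invented for this illustration.

INTENT_PHRASES = {"pricing": 2, "send a quote": 3, "next steps": 2,
                  "not interested": -3, "no budget": -2}

def purchase_intent(transcript: str) -> dict:
    """Return a verdict, a score, and the evidence that produced it."""
    score, evidence = 0, []
    for line in transcript.lower().splitlines():
        for phrase, weight in INTENT_PHRASES.items():
            if phrase in line:
                score += weight
                evidence.append((weight, phrase, line.strip()))
    verdict = "interested" if score > 0 else "not interested"
    return {"verdict": verdict, "score": score, "evidence": evidence}

call = """Customer: What would pricing look like for 50 seats?
Rep: I can send a quote today.
Customer: Great, let's talk next steps."""

result = purchase_intent(call)
print(result["verdict"], result["score"])
for weight, phrase, line in result["evidence"]:
    print(f"  {weight:+d} for '{phrase}' in: {line}")
```

Ask this intern why it reached its conclusion and it can tell you; ask a trillion-parameter model the same question and it can only guess.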

Now, these so-called small language models do use large language models to enable them.  The large language model doesn't become obsolete. Rather, it serves as a sort of foundation for a verifiable small language model.
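How does a large model serve as a foundation for a small one? The interview doesn't spell it out, but one common pattern, offered here only as an assumption, is distillation: the big model labels example data once, offline, and a small, cheap, inspectable model is trained on those labels. A rough sketch:

```python
# Rough sketch of the "LLM as foundation" pattern, assuming a simple
# distillation setup. big_model_label() is a stand-in for a real,
# expensive LLM call; the small model that results is cheap to run
# and its reasons can be read off directly.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def big_model_label(text: str) -> int:
    # Stand-in for an LLM tagging a line as a buying signal (1) or not (0).
    return 1 if ("quote" in text or "pricing" in text) else 0

transcripts = ["please send a quote", "just browsing, thanks",
               "what is your pricing?", "cancel my account",
               "can we get a quote for 50 seats", "no budget this year"]

labels = [big_model_label(t) for t in transcripts]  # big model used once

vec = TfidfVectorizer()
X = vec.fit_transform(transcripts)
small_model = LogisticRegression().fit(X, labels)   # the cheap "intern"

# Unlike the big model, the small model's reasons are inspectable:
for word, weight in zip(vec.get_feature_names_out(), small_model.coef_[0]):
    if abs(weight) > 0.05:
        print(f"{word}: {weight:+.2f}")
```

In this pattern the big model never runs in production at all; it just teaches, once, and the small verifiable model does the daily work.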

As the founder and CEO of One AI, a man named Amit Ben, said to my colleague John Schroeter and me in a recent interview: ‘Why, if you want an intern to track the import of a sales call, would you also teach it to write the Iliad or Paradise Lost, or to distinguish between a Rembrandt and a Van Gogh?’

It's not just that training the intern in classical lit or art would get expensive. The intern might get ideas. It might start sending you memos in the style of Homer or Milton, with illustrations of starry nights. That wouldn't be good. That would be the opposite of good. Small AI is useful AI because it is verifiable AI.
