Gabriel René, the visionary founder and CEO of Verses AI (VRSSF), offers a provocative critique of the current state of generative AI technologies, including ChatGPT. Far from being the latest development in an information economy for an information age, these systems, he argues, are “machine intelligence from the machine age”: a retrogression in the evolution of technology, echoing the industrial era’s emphasis on conformity and interchangeable parts.
The industrial era was the age of beneficent conformity. Its essence was the interchangeable part that slotted perfectly into an assembly of other interchangeable parts. Even the great contribution of the machine age to information technology, movable type, was, as he points out in a brilliant interview with George that you can find here, an exercise in conformity, each letter slotting in seamlessly with the others. “The machine age,” says René, “started with machine printing and is now dying in machine learning.”
As Claude Shannon, MIT’s father of information theory, discovered, only surprising statements count as information. If the receiver could have faultlessly predicted the message, as in “the three-letter word beginning with ‘th’ is ___,” then the missing bit is not information. We didn’t need to be told.
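Shannon’s idea can be put in one line: the information content, or “surprise,” of a message with probability p is log2(1/p) bits. A minimal sketch (the probabilities here are illustrative, not from the source):

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information in bits: the surprise of a message with probability p."""
    return math.log2(1 / p)

# A message the receiver could predict with certainty carries no information at all.
certain = self_information(1.0)      # 0.0 bits: "the" after "the three-letter word beginning with 'th'"

# A message the receiver gave only 1-in-256 odds carries eight bits of surprise.
unlikely = self_information(1 / 256)  # 8.0 bits
```

The more predictable the message, the closer its information content falls to zero, which is the sense in which a perfectly predictable answer tells us nothing.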
If you find ChatGPT predictable and boring, that is because it is designed to be so. It generates nothing. The choices it makes are the most probable answers based on historical data. When its answers surprise, they are glitches, defects in the system: “hallucinations.”
When, as in the case of Google’s Gemini disaster, the answers are further constrained by ideology, they become both more predictable and, at least the first time, funnier. (Both Matt Taibbi and Ann Coulter have recorded their own comical experiences of asking Gemini to tell them about themselves. But the joke gets tired quickly.)
This is the inevitable result of the way current AIs are built. The current method, which Verses AI will overthrow, is to separate “training” from “inference.” During training, which is immensely expensive and practical only in “hyperscale” data centers, the AI model “learns” how to react to inquiries (“dog or cat?”, “Richard or George?”) by being exposed to millions or billions of instances, each time taking into account thousands, millions, or billions of parameters. When the model is fully trained, it is ready for “inferencing”: answering questions like “Richard or George?” as they come up out in the world. Inferencing is far more economical and is coming to your PC this year.
Economical or not, this two-step process dooms the AI to a mere regurgitation of historical averages, with the most popular result the most favored. No surprise, no information.
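The two-step scheme can be caricatured in a few lines: “training” reduces to tallying historical answers, and “inference” to always returning the most frequent one. This is a deliberate toy simplification of the author’s point, not how any production model literally works; the corpus is invented:

```python
from collections import Counter

# Toy "training" corpus: the historical answers the model was exposed to.
history = ["dog", "dog", "dog", "cat", "cat", "ferret"]

# "Training": tally the historical frequencies.
counts = Counter(history)

def infer() -> str:
    """Greedy 'inference': always return the historically most frequent answer."""
    return counts.most_common(1)[0][0]

answer = infer()  # "dog" every time: the historical favorite, never a surprise
```

However many times you call `infer()`, it returns the same historical favorite, which is the sense in which the output carries no surprise and hence, by Shannon’s measure, no information.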
Because surprise can be subjective—if I did not know the Earth revolves around the Sun, and you tell me so, I am surprised and informed—this recitation of past knowledge is not useless. If I want the history of an idea, ChatGPT might give it to me. Gemini will go a step further and deliver only those parts of the history free of micro-aggressions that might trigger unhappy feelings. It will keep me safe from Coulter or Taibbi.
The real danger, and futility, of ChatGPT et al. is that they must rely on a centralized consensus. This is in the very nature of their learning process. It is the whole deal behind supposedly invaluable “big data.”
Centralized learning is what made the Gemini comic opera. Centralized learning is why we keep hearing cries for regulating AI, with its pretensions to authority. Centralized learning will make AI a political battleground because the payoff for conquering a centralized system always appears so great (and sometimes is, as witness the American public school system).
Verses AI promises to overthrow the entire scheme, replacing the combination of historically trained models and static inference with a far more economical and dynamic system called “active inference.”
Based on research into human brain function led by the eminent Karl Friston, one of the world’s most frequently cited scientists, active inference helps answer the great question “how does the human brain do so much with so little?” So little energy compared to a data center; so little information compared to a trained AI model.
Friston, now Chief Scientist at Verses AI, answers that the brain focuses on change and its probabilities. Humans build models of our immediate environment accompanied by a set of possible changes ranked by probability. If a change occurs, we remodel and react, the more drastically if the change carried a low probability. We are educated over time but continue to learn in the present tense.
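One way to make “remodel and react” concrete is a toy Bayesian belief update, in which an improbable observation forces a drastic revision of the agent’s model. This is only an illustrative sketch of the general principle, not Verses AI’s actual active-inference machinery; the states and probabilities are invented:

```python
def update(prior: dict, likelihood: dict) -> dict:
    """Bayesian update: posterior is proportional to prior times likelihood, normalized."""
    unnorm = {state: prior[state] * likelihood[state] for state in prior}
    total = sum(unnorm.values())
    return {state: value / total for state, value in unnorm.items()}

# The agent's model of its immediate environment, with probabilities attached.
prior = {"quiet": 0.9, "intruder": 0.1}

# Likelihood of hearing a loud noise under each state of the world.
noise_likelihood = {"quiet": 0.05, "intruder": 0.8}

# A low-probability observation arrives; the agent remodels drastically.
posterior = update(prior, noise_likelihood)
# posterior: quiet = 0.36, intruder = 0.64
```

The bigger the mismatch between the observation and the prior, the larger the update, which captures the sense in which such an agent “continues to learn in the present tense” rather than relying on a fixed, pre-trained model.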
Active inference models will free AI from both historical averages and centralization. Active inference not only can be decentralized, it must be, just as “large language model AI” must be centralized. Active inference agents are, by definition, spread through the environment to which they must adjust. Not only do they not “require 40,000 Nvidia computers,” as René remarks; where would you put them in an Internet of Things sensor that must be smaller than a matchbox?
Verses is an AI for the Information Age. ChatGPT is stuck in the Age of Steel and Gemini in one of Mao’s backyard steel mills.
George and Richard are both investors in Verses AI. If you are interested, check out VRSSF on OTCQB or VERS on CBOE Canada.
George, Richard, and the rest of the Gilder Technology team write about breakthrough companies like Verses AI in George Gilder’s Moonshots. To learn more, go here.