Imagine a conversation with one of these newly released AI chatbots. You ask it to solve a tricky math problem. It responds with “That seems kind of hard. Give me some time to think.” After a few minutes it comes back with “I haven’t solved it yet, and I am not sure I can. Would you like me to continue working on it?” Another few minutes pass and then it comes back with “Aha! I figured it out!” and proceeds to explain a neat and creative solution.
This scenario can never occur with PaLM, Bard, GPT-4, or any of the other transformer-based large language models that are thought to be on the path to general intelligence. In all of these models, each word of the machine’s response is produced in a fixed amount of time. The model cannot go away and “think” for a while. This is one of the reasons why I believe a solely transformer-based model can never be “intelligent”. (If you disagree with my characterization of transformers here, see section 4 and also this post).
Summary: I argue here that intelligence requires the ability to explore “trains of thought” that are potentially never-ending. One cannot know a priori whether a certain train of thought will lead to a solution or whether it is futile. The only way to find out is to actually explore, and this kind of exploration comes with the risk of never knowing whether you are on the path to a solution or whether your current path will go on forever. Intelligence involves problem-solving, and problem-solving requires arbitrary amounts of time. If a computer program is bound to finish quickly by virtue of its architecture, it cannot possibly be capable of general problem-solving.
In the summary paragraph above, I appealed to a number of intuitive notions (e.g. “train of thought”, or “exploration”, or “problem-solving”). In order to make my argument rigorous, I have to first introduce a few concepts rooted in classical theory of computation. In section 1, I will introduce three types of computer programs. In section 2, I describe what an unintelligent problem-solver can look like. In section 3, I describe what is needed to make the unintelligent problem-solver intelligent. In section 4, I explain why transformers can never be general problem-solvers. In section 5, I briefly discuss what I think needs to be done to address this problem.
1. There are Three Types of Computer Programs
First, let us establish that there are three types of computer programs:
(A) Programs that eventually finish executing (or “halt”)
(B) Programs that provably never halt
(C) Programs that never halt but for which there is no proof that they never halt
For our purposes, we are talking about computer programs that take no inputs but may produce an output. Let’s call these kinds of programs “standalone programs”. For instance, take the standalone program below, which tries to find a Pythagorean triple:
Program I
1: start with S = 3
2: for all positive integers a, b, and c where a+b+c=S
3: if a² + b² = c²
4: return tuple (a, b, c) and halt
5: end for loop
6: increment S by 1
7: go to line 2
We know this program will eventually output the tuple (3, 4, 5) because 3² + 4² = 5² and these three numbers have the smallest sum among all Pythagorean triples. So this program is in category (A). It’s a program that eventually halts.
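If it helps to see this concretely, here is one way Program I might be rendered in Python (a sketch; the exact enumeration order of a, b, and c is an arbitrary choice, and any exhaustive order works):

def program_one():
    # Search for a Pythagorean triple by increasing sum S = a + b + c.
    S = 3
    while True:
        for a in range(1, S - 1):
            for b in range(1, S - a):
                c = S - a - b  # c is at least 1 by the loop bounds
                if a * a + b * b == c * c:
                    return (a, b, c)  # halts here with (3, 4, 5)
        S += 1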
What about category B? What is an example of a program that we know never halts? Here is a simple one:
Program II
1: start with x = 1
2: while x is not divisible by 3
3: if x is odd
4: increment x by 1
5: otherwise if x is even
6: decrement x by 1
7: end while loop
8: return x and halt
We can prove that this program never stops: it revisits the same state and falls into an infinite loop. The variable x starts at 1, then becomes 2, then 1, then 2, and this goes on forever.
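A Python rendering of Program II makes the infinite loop easy to see (a sketch; running it will, of course, spin forever):

def program_two():
    # x alternates 1, 2, 1, 2, ... and is never divisible by 3,
    # so this loop provably never exits.
    x = 1
    while x % 3 != 0:
        if x % 2 == 1:  # x is odd
            x += 1
        else:           # x is even
            x -= 1
    return x  # unreachable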
The program above revisits the same states over and over. But not all category B programs have to repeat themselves. For example, take the following program, which attempts to find a solution to aⁿ + bⁿ = cⁿ for some n ≥ 3.
Program III
1: start with S = 6
2: start with n = 3
3: while n < S
4: for all positive integers a, b, and c where a+b+c+n=S
5: if aⁿ + bⁿ = cⁿ
6: return tuple (a, b, c, n) and exit
7: end for loop
8: increment n by 1
9: end while loop
10: increment S by 1
11: go to line 2
This program never revisits the same state. It keeps trying new values for a, b, c, and n, using larger and larger numbers as it progresses. But it also never halts. We are now certain it never halts because someone proved Fermat’s Last Theorem less than 30 years ago. So this program is also in category B.
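Here is a Python sketch of Program III along the same lines:

def program_three():
    # Search for a counterexample to Fermat's Last Theorem,
    # enumerating candidates by increasing S = a + b + c + n.
    S = 6
    while True:
        for n in range(3, S):
            for a in range(1, S - n - 1):
                for b in range(1, S - n - a):
                    c = S - n - a - b  # at least 1 by the loop bounds
                    if a**n + b**n == c**n:
                        return (a, b, c, n)  # never reached, by Wiles's proof
        S += 1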
So far we have examples for categories A and B. Now here is the strange thing. It is logically impossible for me to give you an example of a program in category C. In order to show that a program is in category C, we must somehow know that it never halts. But if I can prove that a program never halts, then it falls in category B, not C. Category C consists of programs that never halt but cannot be proved never to halt.
But category C is not empty. We know that there are programs that fall in category C. In fact, we can prove it! We just can’t give a specific example. I won’t explain how we know this, but suffice it to say that we have known it for close to a hundred years, from the work of Turing, Church, Gödel, and the other logicians/mathematicians of the 1930s.
Consider the following program:
Program IV
1: start with n = 7
2: while n is a new number never before encountered
3: if n is even
4: replace n with n/2
5: otherwise
6: replace n with 5n+1
7: end while loop
8: return n and halt
This program produces the Collatz 5n+1 sequence, which goes like 7, 36, 18, 9, 46, 23, 116, 58, 29, … Currently, no one knows its fate. It might end by finally reaching a repeated number (making it a category A program), or it may go on forever, never repeating the same number twice. If it does go on forever, it may be a category B program (meaning that we can somehow prove it goes on forever), or it might be unprovably never-ending, in which case it falls in category C.
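A Python sketch of Program IV, using a set to remember the values encountered so far:

def program_four():
    # The Collatz 5n+1 sequence starting at 7; halts only if a value repeats.
    n = 7
    seen = set()
    while n not in seen:
        seen.add(n)
        n = n // 2 if n % 2 == 0 else 5 * n + 1
    return n  # no one knows whether this line is ever reached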
If Collatz 5n+1 is a category A or B program, there is some hope of finding that out. But if it is a category C program, we will never know whether it is in A, B, or C! The fate of category C programs will always remain a mystery to humanity. Executing a category C program will forever generate novel patterns and relationships that we could not have predicted before running it. (See “Novelty generator” and “Collatz 5n+1 Simulator” on LifeWiki if you found this intriguing.)
At every moment in time, we have programs that we know eventually halt (category A), programs that we know go on forever (category B), and programs that we have not yet categorized (which can be in category A, B, or C). Over time, as we make progress, programs that had an unknown category fall into A or B (but never C). For example, the third program I introduced above had an unknown category 30 years ago, until Andrew Wiles came up with an ingenious proof and we discovered it was a category B program. This process can go on forever. We will never be done categorizing unknown programs; otherwise we could say “all that remains are category C programs”, and we would then have found examples of category C programs, which we established is impossible. (This is why mathematics is a never-ending endeavor. We will never be done proving all provable theorems.)
2. Let’s Build an Unintelligent General Problem Solver
Determining whether a program halts is such a general problem that you can take any other well-defined problem and reformulate it in terms of halting [and finding the final state or the output of the program if it halts]. For example, I can rewrite a mathematical statement as a program such that halting is equivalent to proving that statement true. The program iterates through all possible strings of text, in order of length, checking whether each text is a valid proof of the statement in question. If the program finds a valid proof it halts. Otherwise it goes on forever.
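To sketch this construction in Python: is_valid_proof below stands in for a mechanical proof checker for some fixed formal system (tedious to write, but entirely possible, and not implemented here), and all_strings enumerates every candidate text in order of length:

from itertools import count, product

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 ,.()+-*/=<>"

def all_strings():
    # Yield every text over ALPHABET, shortest first.
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def prove(statement):
    # Halts if and only if `statement` has a proof in the fixed formal
    # system. `is_valid_proof` is a hypothetical mechanical checker.
    for text in all_strings():
        if is_valid_proof(statement, text):
            return text  # found a proof: halt
    # if no proof exists, the loop above simply never ends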
But the halting problem is unsolvable. If it were solvable, there would be no category C programs. So there is no program that can accurately solve the generic problem of “does this program halt?”.
Now we are going to do something that may at first sound ambitious. We will sketch out a program that can take any standalone program as input and try to determine whether it halts. I will call this program “BruteBot”. Remember that the only acceptable inputs to BruteBot are standalone programs, i.e. programs that do not take an input. BruteBot is not a standalone program itself, because it takes inputs.
BruteBot is so named because it uses the most unintelligent brute-force strategy possible. Given program X, it searches through all possible texts, and for each text checks whether it is a valid proof that program X does or does not halt. In theory, BruteBot can solve any solvable halting problem. If X is a program that eventually halts, BruteBot will eventually run into a text describing the progression of program X all the way to its end, which is a valid proof that X halts. (Note that BruteBot can also find the output of any category A program.) If X is a category B program, there is a proof (expressible in text form) that X never halts, and BruteBot will eventually run into it, given enough time and resources.
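Reusing the all_strings enumerator from the sketch above, BruteBot might look like this; proves_halts and proves_never_halts are again hypothetical mechanical proof verifiers:

def brute_bot(program_x):
    # Brute-force halting solver: try every text as a candidate proof.
    for text in all_strings():
        if proves_halts(program_x, text):
            return "halts"        # X is in category A
        if proves_never_halts(program_x, text):
            return "never halts"  # X is in category B
    # if X is in category C, neither proof exists and this loop never
    # exits: brute_bot(X) is then itself a category C program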
The obvious trouble here is that BruteBot is practically useless. It is like the proverbial monkey at a typewriter that hits random keys and will eventually type up Hamlet. It needs years, perhaps millennia, to stumble upon even the shortest texts that constitute valid proofs. And for difficult problems, like proving Fermat’s Last Theorem (or, equivalently, proving that Program III never halts), there is no hope that it will ever stumble upon a valid proof given our time and resource constraints.
But there is something very important to understand about BruteBot. When given a program in category A or category B, BruteBot eventually halts. But when BruteBot is given a category C program, it cannot possibly ever halt, because there is no valid proof for it to discover. So BruteBot(X) halts if and only if X is a category A or B program. If the input X is a category C program, then BruteBot(X) is also a category C program. I next argue that this feature is not limited to BruteBot: any general problem-solver, intelligent or not, must have it. A solver that is guaranteed to distinguish between category A and category B programs can only do so if it never halts when given a category C program.
3. General Problem-Solving Requires Unbounded Exploration with the Risk of Never Ending
Now let us imagine another general problem-solver: GeniusBot. Like BruteBot, GeniusBot takes a standalone program as input and determines whether it halts. If it halts, GeniusBot also returns the output of that program. We don’t know how this program works. Maybe it’s based on neural nets. Maybe it uses a strange algorithm that its creators do not want to disclose. All we know is that it runs on a computer and that it is strictly better than BruteBot. That means GeniusBot not only eventually solves any solvable problem, it is also fast and practical. When we give it Program III (above), it solves Fermat’s Last Theorem on its own, never having seen Wiles’s proof.
Without knowing the inner workings of GeniusBot, we are going to prove something about it: that it requires unbounded time to solve solvable problems. Assume the contrary: say GeniusBot is able to solve any solvable problem within a certain amount of time, say, less than a minute. That means for every category A or B input, it answers within a minute. But what happens when we give it a category C input? Category C programs are unsolvable. GeniusBot cannot report back that the input program is unsolvable, because it would then have given us an example of a category C program, and that is impossible! Alternatively, if it gets stuck on category C programs, that is also self-contradictory, because we could then use GeniusBot to determine whether a program is in category C: all we need to do is wait for a minute (or whatever time bound within which it can solve all solvable problems), and if GeniusBot has not responded by then, we know it was given a category C program. Remember, we cannot ever determine the fate of a category C program (see section 1). So our initial assumption, that GeniusBot solves all solvable problems within a fixed amount of time, cannot be true.
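To make the contradiction concrete, here is a sketch of the impossible detector we could build if the one-minute assumption held; run_with_timeout is a hypothetical helper that runs a function and returns None if it misses the deadline:

def detect_category_c(program_x):
    # Only "works" under the (false) assumption that GeniusBot answers
    # every solvable instance within 60 seconds.
    answer = run_with_timeout(genius_bot, program_x, seconds=60)
    if answer is None:
        return "category C"  # contradiction: we just exhibited a category C program!
    return answer            # "halts" (category A) or "never halts" (category B)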
What this means is that 1) there is no cap on how long GeniusBot needs to solve problems, and 2) there are some problems for which GeniusBot will go on forever, and we cannot possibly know whether it will never end or whether it just needs a little more time. (This latter conclusion means that if input X is a category C program, GeniusBot(X) is also a category C program).
Taking time to think isn’t just something humans do. Any intelligent problem-solver, biological or artificial, needs time to “think”, because problem-solving fundamentally takes time; it is like exploring unknown territory. We proved this with minimal assumptions about how our general problem-solver, GeniusBot, works. Our only assumptions are that it can be run on a computer, that it can solve any solvable problem given sufficient resources (just as BruteBot can), and that it doesn’t give us incorrect answers.
People have come up with all sorts of definitions of intelligence (see this paper for a comprehensive review). I won’t add another formal definition to the list here. But for me, intelligence is all about problem-solving. In my view, the quest to formalize intelligence is about transforming something like BruteBot into something like GeniusBot: a program that is smart and efficient in how it searches for solutions to general problems. We defined our problems in terms of halting, but AI doesn’t have to be explicitly about the halting problem. If you define “general problem-solving” broadly enough, it will inevitably include unprovably unsolvable problems (like category C programs). With a sufficiently general definition, any problem is reducible to the halting problem and, vice versa, the halting problem is reducible to problems of that type. For example, any mathematical problem can be reduced to the halting problem, and likewise the halting problem for any program can be reduced to a problem in mathematics.
When I say intelligent problem-solving risks never ending, that doesn’t mean it has to get stuck working on a single problem forever. That wouldn’t be very useful. Rather, the program can temporarily take a break from working on a certain problem and decide to go back to it later, perhaps once it has gained insight from another, related problem. It might end up never returning to that problem. Ideally, AI can get better with experience, drawing on acquired knowledge and using search heuristics that have worked well for it in the past. But no matter what it does, it cannot escape the fact that it needs unbounded time to search for solutions and that it will never know which problems are unprovably unsolvable. (This is not too different from how mathematicians do math, as individuals or even as a whole community.)
4. Why Transformers Alone Cannot Solve General Intelligence
Transformers are a feed-forward neural network architecture that has become one of the state-of-the-art methods in contemporary AI. They have led to a wave of impressive language models that have stirred up excitement, controversy, debate, and, in some cases, fear.
There is a prevailing belief that transformers are sufficient for performing any computational task, or even for achieving general intelligence. A number of papers have claimed that transformers are Turing-complete, meaning that they can simulate any computer program (including category C programs). Those claims are demonstrably false; I explain why in a separate post. Apart from these papers, people often cite the universal approximation theorem (UAT) to support the idea that transformers are capable of computing any [computable] function. UAT has a misleading name, and it does not imply the ability to compute every computable function. (I also wrote about this in another post you can read here.)
Let us think about transformers in the context of all that we have learned so far. Can a transformer be generally intelligent? We established that a generally intelligent program (like GeniusBot of the previous section) must take arbitrary amounts of time to solve problems, and for some inputs it will keep trying forever. How can a transformer work like that? Transformers are feed-forward networks, so they produce each token within a fixed amount of time. If we require the transformer to respond only with the final answer, that certainly won’t work, because it would mean attempting to solve problems within a capped time limit, which we showed is impossible (section 3).
Another strategy is to allow it to “think out loud” through chain-of-thought (CoT) prompting (see this paper and this other paper). With this method the transformer does not have to respond with the final answer immediately. It can produce a series of tokens that move toward the answer step by step. The architecture now includes a feedback loop and is not strictly feed-forward anymore; it is recurrent. We can even modify it to hide the intermediate tokens and keep its thought process to itself. At every step the transformer can use any of the previously produced tokens as input, creating something like a memory scratchpad. This allows it to run for arbitrary amounts of time, even allowing it, in theory, to fall into an infinite loop! Have we solved the problem of unbounded time, then? No, we have not.
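Written out as a sampling loop, the recurrence is easy to see. In this sketch, next_token stands in for a fixed-time forward pass through some transformer and STOP for its end-of-response token (both hypothetical names):

def chain_of_thought(prompt_tokens):
    # Autoregressive loop: each call to next_token takes bounded time,
    # but the loop itself has no built-in bound on iterations.
    context = list(prompt_tokens)
    while True:
        token = next_token(context)  # hypothetical fixed-time forward pass
        if token == STOP:            # hypothetical end-of-response token
            return context           # the model decided it was done
        context.append(token)
        # in a real transformer, `context` is truncated to a fixed
        # window here, which is precisely the limitation discussed next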
Even a recurrent language model with CoT prompting does not fulfill the requirements outlined above, despite the fact that it can theoretically run forever on some inputs. A generally intelligent problem-solver must have the following property: for some inputs, it must become a category C program (a program that never halts but cannot be proved never to halt). But the input (context) size is fixed for a given transformer model. If the size of the input is fixed and there is a limit to the number of possible tokens (words, or even numbers with bounded precision), then the state of the transformer at any given moment is describable with a string of fixed length. So the only way for a transformer to go on forever is to revisit the same state, and that makes it a category B program: we can prove that it never halts. There is no way for it to become a category C program. Computer programs in general can be category C programs because their states may need to be described by strings of unbounded length. Some programs, unlike transformers, have no inherent limit on the number of reachable states; their memory usage can grow without bound as they run. (A program’s memory usage is proportional to the length of the string that describes its state.)
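The pigeonhole argument in the paragraph above can itself be turned into a program. In this sketch, step stands in for one deterministic update of a bounded, hashable state (for a transformer with a hidden scratchpad, the contents of its fixed-size context), and returning None signals halting:

def classify_finite_state_run(initial_state, step):
    # Because there are only finitely many possible states, this
    # watcher always terminates: the run either halts (category A)
    # or revisits a state, proving it loops forever (category B).
    # Category C is impossible for a finite-state process.
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return "category B"
        seen.add(state)
        state = step(state)
    return "category A"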
Ultimately, artificial intelligence must be formalized as a computer program. It’s fine if that program uses transformers to solve problems. That program can improve itself over time, and its problem-solving strategies don’t have to stay constant. It can be highly dynamic and interactive, using feedback from its environment. But however it works, its state must not be describable with a string of fixed length. If the number of reachable states for a program is limited, independent of the input, then that program cannot become a category C program for any input, and it therefore cannot engage in general problem-solving. The AI program must be able to follow paths of state changes (analogous to “trains of thought”) with no limit on the number of states in them. And that comes with the risk of being on a never-ending path without being able to know it. This is not something that transformers, with their fixed size, can do.
One might argue that our brains are describable with a string of fixed length, so my perspective on intelligence precludes humans. But that would be mistaking computer programs for the physical devices that run them. Dijkstra once said, “Computer science is no more about computers than astronomy is about telescopes”, and the same can be said about thought and the brain. My perspective on intelligence does not exclude humans. We are intelligent and we are capable of general problem-solving. I believe our physically limited brains execute programs whose states cannot be described with a string of fixed length. (Remember, “fixed” or “bounded” is not the same as “finite”. You cannot describe an arbitrary integer with a fixed number of digits, but any integer can be described with a finite number of digits.) When our problem-solving programs hit biological limits of memory, we somehow recognize it, and we then recruit resources outside our brains for memory (e.g. fingers, or pebbles, or more commonly writing on paper). And before our problem-solving programs hit biological limits of time (e.g. the limited life spans of individuals), we communicate information about the state of our programs so that others can continue from where we left off. That is how mathematics, biology, physics, computer science, and many other problem-solving endeavors work. I do not see this as fundamentally different from taking a computer program from a resource-constrained computer and continuing its execution on another computer, or having a computer program use cloud storage when it runs out of space on a local hard drive. These are all ways of dealing with resource constraints, independent of the computational problem that hits the constraints.
5. What needs to be done?
I hope to have convinced you that an intelligent general problem-solving program must have the property that it takes unbounded amounts of time to run, and that for some problems it will never halt, though we will not be able to tell whether it will eventually end (category C programs). But transformers currently lack this property. It may be tempting to try to solve this problem by hooking up a model like GPT-4 to an interface that allows it to run programs for unbounded lengths of time. I am skeptical that this approach can take us toward general intelligence.
The non-halting property is a necessary but not sufficient condition for intelligence. The unintelligent problem-solver (BruteBot) that we imagined above fulfills this necessary condition, yet is unintelligent. You could address the criticism I raise here by augmenting a transformer with something like an unbounded stack or memory pad that lets it run arbitrarily long programs. That would fulfill the condition, but it would not necessarily produce intelligence. It would be as if you pointed out that cars cannot fly because flying requires upward lift, and someone responded by installing a powerful fan on the roof of a car. They may be able to lift the car, but flying requires more than just getting off the ground. It may be wiser to forget about modifying the car and start over with something else.
The problem raised here is not just an abstract one. Transformer-based models are failing at intelligence benchmarks that other approaches are succeeding on, like the Abstraction and Reasoning Corpus (ARC). I personally think program synthesis is the way to go, and I am currently working on developing an approach to ARC. But I also think we should continue using large language models as tools. In the car metaphor: it would be unwise to design an airplane by modifying cars, but it would also be unwise not to use cars as transportation vehicles that can help us reach the goal of building an airplane.
The three images in this article were all created with the help of AI models. From top to bottom: Midjourney, DALL·E 2, and DreamStudio.