In the movie Oppenheimer, Niels Bohr challenges the physicist early in his career: “Algebra is like sheet music. The important thing isn’t ‘Can you read music?’ It’s ‘Can you hear it?’ Can you hear the music, Robert?” “Yes, I can,” Oppenheimer replies. I can’t hear the algebra, but I feel the machine.

I felt the machine even before I touched a computer. In the 1970s I awaited the arrival of my first one, a Radio Shack TRS-80, imagining how it would function. I wrote some simple programs on paper and could feel the machine I didn’t yet have processing each step. It was almost a disappointment to finally type in the program and just get the output without experiencing the process going on inside.

Even today, I don’t visualize or hear the machine, but it sings to me; I feel it humming along, updating variables, looping, branching, searching, until it arrives at its destination and provides an answer. To me, a program isn’t static code; it’s the embodiment of a living creature that follows my instructions to a (hopefully) successful conclusion. I know computers don’t physically work this way, but that doesn’t stop my metaphorical machine.

Once you start thinking about computation, you start to see it everywhere. Take mailing a letter through the postal service. Put the letter in an envelope with an address and a stamp, drop it in a mailbox, and somehow it ends up in the recipient’s mailbox. That is a computational process: a series of operations that move the letter from one place to another until it reaches its final destination. This routing process is not unlike what happens with electronic mail or any other piece of data sent through the internet.
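To make the analogy concrete, here is a toy sketch in Python; the offices and the forwarding table are invented for illustration. Each office knows only the next hop toward each destination, and the sequence of hops is the computation.

```python
# Toy hop-by-hop routing: each office forwards the letter one step
# closer to its destination until it arrives. Offices and routes are
# invented for illustration.
NEXT_HOP = {
    "Chicago":   {"Boston": "Cleveland"},
    "Cleveland": {"Boston": "Albany"},
    "Albany":    {"Boston": "Boston"},
}

def deliver(start, destination):
    """Forward the letter one hop at a time until it reaches its destination."""
    stop, path = start, [start]
    while stop != destination:
        stop = NEXT_HOP[stop][destination]  # one routing operation
        path.append(stop)
    return path

print(deliver("Chicago", "Boston"))  # ['Chicago', 'Cleveland', 'Albany', 'Boston']
```

The same pattern, a table lookup repeated until arrival, is roughly how a packet finds its way across the internet, one router at a time.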

This innate sense of a machine at work can lend a computational perspective to almost any phenomenon, even one as seemingly inscrutable as randomness. Something as apparently random as a coin flip can be fully described by a complex computational process that yields an unpredictable outcome of heads or tails. The outcome depends on myriad variables: the force and angle and height of the flip; the weight, diameter, thickness, and distribution of mass of the coin; air resistance; gravity; the hardness of the landing surface; and so on.
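A deliberately crude sketch can make that determinism visible. The physics below is simplified and the parameters are made up for illustration, but the point survives: fix every input and the “random” flip lands the same way every time, while a change of a few millimeters per second in the launch speed can flip the result.

```python
G = 9.8  # gravitational acceleration, m/s^2

def coin_flip(launch_speed, spin_rate):
    """Crude deterministic model of a coin flip.

    launch_speed: upward velocity in m/s
    spin_rate: full rotations per second
    Same inputs always produce the same outcome.
    """
    airtime = 2 * launch_speed / G              # seconds until the coin lands
    half_turns = int(2 * spin_rate * airtime)   # completed half-rotations
    return "heads" if half_turns % 2 == 0 else "tails"

print(coin_flip(2.449, 38.0))  # tails, every single time
print(coin_flip(2.451, 38.0))  # heads: 2 mm/s more and the outcome flips
```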

The idea goes back centuries. In 1814, in his Philosophical Essay on Probabilities, Pierre-Simon Laplace described an intellect, now known as Laplace’s demon, that, knowing the precise state of every particle in the universe, could predict all such outcomes. The reverse implication is that to anyone lacking so vast an intellect, processes such as a coin flip appear random. The language of computation lets us formalize this connection.

Earlier this year, Avi Wigderson received the Turing Award, the “Nobel Prize of computing,” partly for formally connecting randomness with mathematical functions that are hard to compute. He and his colleagues created a process that takes a suitably complex function and outputs “pseudorandom” bits that can’t be efficiently distinguished from truly random bits. Randomness, it seems, is just computation we cannot predict.
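What follows is not Wigderson’s construction, only a toy sketch of the flavor of the idea, with SHA-256 standing in for a suitably hard function. A short seed is deterministically stretched into a long string of bits; anyone who knows the seed can reproduce every bit, yet the output looks statistically random.

```python
import hashlib

def pseudorandom_bits(seed: bytes, n_bits: int) -> str:
    """Toy pseudorandom generator: stretch a short seed into many bits
    by hashing the seed with a running counter. SHA-256 is a stand-in
    for a hard-to-compute function; this illustrates the flavor, not
    the actual Nisan-Wigderson construction."""
    bits = []
    counter = 0
    while len(bits) < n_bits:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        for byte in block:
            bits.extend(format(byte, "08b"))
        counter += 1
    return "".join(bits[:n_bits])

print(pseudorandom_bits(b"short seed", 64))  # fully determined, yet looks random
```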

Do we have a way to manage this randomness and complexity? Recent progress in artificial intelligence through machine learning gives us a glimpse of what it would mean to do just that. Information can be split into a structured part and a random part, and recent advances in machine learning let us take random samples and recover much of the structure underneath.
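Here is a minimal sketch of that split, under the assumption that the structured part is a straight line and the random part is Gaussian noise. An ordinary least-squares fit recovers the structure from the noisy samples.

```python
import random

def sample(x):
    """Hidden structure y = 3x + 2, plus a random part."""
    return 3 * x + 2 + random.gauss(0, 1)

xs = [i / 10 for i in range(200)]
ys = [sample(x) for x in xs]

# Ordinary least squares: recover the structured part from the samples.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(f"recovered structure: y = {slope:.2f}x + {intercept:.2f}")  # close to y = 3x + 2
```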

Consider the problem of translation. A skilled human translator, call her Sophie, converts English into French fluently without being able to fully articulate how she does it. Machine learning takes a similar approach, training language models on large amounts of data. When properly trained, the neural net predicts the probability of each possible next word in the French translation of an English sentence.

While we typically cannot understand the inner workings of a trained neural net any more than Sophie understands her complete translation process, we can easily simulate that process to get the probability of the next word. Just as Wigderson connected hard-to-compute functions and pseudorandomness, predicting the probabilities of the next word lets us capture the complex calculations behind it.
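As a stand-in for a neural net, here is a toy bigram model, built from counts over a tiny invented corpus, that shows what “predicting the probability of the next word” means operationally: given a context word, the model returns a probability for each word that might follow.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for training data. A real system
# trains a neural net on vast text, but the interface is the same:
# context in, next-word probabilities out.
corpus = "the cat sat on the mat the cat ate the fish the dog sat".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word_probs(word):
    """Probability of each word that follows `word` in the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# {'cat': 0.4, 'mat': 0.2, 'fish': 0.2, 'dog': 0.2}
```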

The learning algorithms themselves are computational processes. Machine learning models are still prone to mistakes and misinformation, and they still struggle with basic reasoning tasks. Nevertheless, we’ve entered an era in which we can use computation itself to help us manage the randomness that arises from complex systems.