For its 125th anniversary, Science magazine posed its Top 25 science questions, one of which falls in my area of expertise. That question: What Are the Limits of Conventional Computing?

At first glance, the ultimate limit of computation seems to be an engineering issue. How much energy can you put in a chip without melting it? How fast can you flip a bit in your silicon memory? How big can you make your computer and still fit it in a room? These questions don't seem terribly profound.

Unless, of course, you've actually been involved, as I have, in semiconductor design. It always amazes us that any of this stuff works. With time, the physics gets nastier and nastier. When I started my career we were designing 4-micron gates (the dimension here is the length of the transistor's gate). Now we are working on 90 nm gates, and next year it will be 65 nm. We now have to worry about capacitance, resistance, inductance, setup times, hold times, on-chip variation, power consumption, crosstalk avoidance, electromigration, latch-up, charge antennas, planarization, leakage, and electrostatic discharge (and these are just the ones I can think of off the top of my head).

In fact, computation is more abstract and fundamental than figuring out the best way to build a computer. This realization came in the mid-1930s, when Princeton mathematicians Alonzo Church and Alan Turing showed--roughly speaking--that any calculation involving bits and bytes can be done on an idealized computer known as a Turing machine. By showing that all classical computers are essentially alike, this discovery enabled scientists and mathematicians to ask fundamental questions about computation without getting bogged down in the minutiae of computer architecture.

This is true. In fact, the basic architecture of a computer has not fundamentally changed since Turing described his universal machine, von Neumann laid out the stored-program architecture, and Shannon developed information theory in the years surrounding World War II.

For example, theorists can now classify computational problems into broad categories.

P problems are those, broadly speaking, that can be solved quickly, such as alphabetizing a list of names. NP problems are much tougher to solve but relatively easy to check once you've reached an answer. An example is the traveling salesman problem, finding the shortest possible route through a series of locations. All known algorithms for getting an answer take lots of computing power, and even relatively small versions might be out of reach of any classical computer.

Mathematicians have shown that if you could come up with a quick and easy shortcut to solving any one of the hardest type of NP problems, you'd be able to crack them all. In effect, the NP problems would turn into P problems. But it's uncertain whether such a shortcut exists--whether P = NP. Scientists think not, but proving this is one of the great unanswered questions in mathematics.

First, some definitions. P stands for polynomial time and NP for non-deterministic polynomial time. A problem in P is the kind you want: given a fast enough traditional computer, it can be solved in reasonable time. An NP-complete problem, by contrast, is in essence intractable for a classical computer once the input grows large. To see how many everyday problems are NP-complete, check out this non-exhaustive list. If a mathematician can show that P equals NP, or that P does not equal NP, it will be an instant Fields Medal for that individual.
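The solve-versus-check asymmetry is easy to see in code. Here is a toy traveling salesman instance (the city names and distances are made up for illustration): finding the best tour means trying all (n-1)! orderings, while checking a claimed answer is a single O(n) pass.

```python
from itertools import permutations

# Symmetric distance matrix for 4 hypothetical cities (made-up numbers).
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def route_length(route):
    """Total length of a closed tour visiting every city once."""
    return sum(DIST[a][b] for a, b in zip(route, route[1:] + route[:1]))

def brute_force_tsp(n):
    """Try every tour: O((n-1)!) work -- the 'hard to solve' direction."""
    best = min(permutations(range(1, n)),          # fix city 0 as the start
               key=lambda p: route_length([0] + list(p)))
    return [0] + list(best)

def verify(route, claimed_length):
    """Checking a proposed answer is cheap: O(n) -- the 'easy to check' direction."""
    return route_length(route) == claimed_length

best_tour = brute_force_tsp(4)        # [0, 1, 3, 2], length 18
```

At 4 cities the brute force is instant; at 25 cities the same loop would need on the order of 10^23 tour evaluations, which is the "out of reach of any classical computer" the article refers to.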

If it is the latter we may need to abandon the von Neumann architecture that has served us so well. But, how would we do that?

But there is a realm beyond the classical computer: the quantum. The probabilistic nature of quantum theory allows atoms and other quantum objects to store information that's not restricted to only the binary 0 or 1 of information theory, but can also be 0 and 1 at the same time. Physicists around the world are building rudimentary quantum computers that exploit this and other quantum effects to do things that are provably impossible for ordinary computers, such as finding a target record in a database with too few queries. But scientists are still trying to figure out what quantum-mechanical properties make quantum computers so powerful and to engineer quantum computers big enough to do something useful.

By learning the strange logic of the quantum world and using it to do computing, scientists are delving deep into the laws of the subatomic world. Perhaps something as seemingly mundane as the quest for computing power might lead to a newfound understanding of the quantum realm.
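The "too few queries" feat mentioned above is Grover's search algorithm, which finds a target among N entries in roughly sqrt(N) queries instead of the classical N/2. A toy state-vector simulation (pure Python, illustrative only -- a real quantum computer would not store the amplitudes explicitly) shows the sign-flip-and-invert-about-the-mean trick:

```python
import math

def grover_search(n_items, target, iterations):
    """Toy simulation of Grover's algorithm over n_items database entries."""
    # Start in a uniform superposition: equal amplitude on every index.
    amp = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amp[target] = -amp[target]            # oracle: flip the target's sign
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]     # inversion about the mean
    return [a * a for a in amp]               # measurement probabilities

# For N = 4, a single iteration concentrates all probability on the target.
probs = grover_search(4, target=2, iterations=1)
```

Each iteration amplifies the marked entry at the expense of the others; for N = 4 one iteration suffices, and in general about (pi/4)*sqrt(N) iterations are needed.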

The reason quantum computing could be so powerful is the difference between a bit and a qubit. A bit is either 0 or 1. A qubit is a superposition of 0 and 1, and a register of n entangled qubits is a superposition of all 2^n binary strings. Thus we can, in principle, do in parallel what we currently do serially. One of the current challenges is maintaining what is known as coherence: any interaction with the environment, including an attempt to observe the state of a qubit, collapses the superposition and destroys the computation.
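The bit/qubit distinction can be made concrete with a few lines of (classical, illustrative) simulation: a qubit is just a pair of amplitudes, and the Hadamard gate is the standard way to turn a definite 0 into an equal superposition of 0 and 1.

```python
import math

# A qubit is a pair of amplitudes (a, b) for the states |0> and |1>,
# normalized so that a^2 + b^2 = 1. Measuring yields 0 with probability a^2.
def hadamard(q):
    """Hadamard gate: maps |0> to an equal superposition of |0> and |1>."""
    a, b = q
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)                            # a classical bit: definitely 0
plus = hadamard(zero)                        # now 0 and 1 at the same time
probs = (plus[0] ** 2, plus[1] ** 2)         # 50/50 on measurement

# An n-qubit register carries 2**n amplitudes, one per binary string --
# the source of the exponential parallelism described above.
def register_size(n_qubits):
    return 2 ** n_qubits
```

The catch, as noted above, is that measurement collapses the superposition to a single string, so quantum algorithms must arrange for the amplitudes to interfere such that the answer you want dominates before you look.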

The first half of my career has been marked by doing the same thing but making it smaller and smaller, and faster and faster. We've ridden Moore's Law far longer than I had personally anticipated. The question for the second half of my career is whether we will see Python's (Monty) Law: "And now for something completely different."