Labs around the world are racing to develop new computing and sensing devices that operate on the principles of quantum mechanics and could offer dramatic advantages over their classical counterparts. But these technologies still face several challenges, and one of the most significant is how to deal with “noise”: random fluctuations that can erase the delicate information stored in such devices.
A new approach developed by researchers at MIT could provide a significant step forward in quantum error correction. Rather than casting a broad net to try to catch every possible source of disturbance, the method fine-tunes the error-correction scheme to the kinds of noise that are most likely to occur in a given system.
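The team's concrete construction is beyond the scope of this article, but the underlying principle, spending redundancy on the dominant error channel instead of on all conceivable errors, shows up in a standard textbook example. The sketch below is a generic illustration of that idea, not the researchers' actual method: a short Python Monte Carlo estimate of the logical error rate of a three-qubit phase-flip repetition code, a code tailored to dephasing (phase-flip) noise, which dominates in many solid-state qubits.

```python
import random

def logical_dephasing_rate(p_z: float, shots: int = 200_000) -> float:
    """Estimate the logical error rate of a 3-qubit phase-flip
    repetition code under independent dephasing of strength p_z.

    Majority-vote correction fails when 2 or more of the 3 qubits
    suffer a phase flip, so the logical rate scales as ~3 * p_z**2.
    """
    fails = 0
    for _ in range(shots):
        flips = sum(random.random() < p_z for _ in range(3))
        if flips >= 2:
            fails += 1
    return fails / shots

for p in (0.01, 0.05, 0.10):
    print(f"physical p_z = {p:.2f}: uncoded = {p:.3f}, "
          f"tailored 3-qubit code ~ {logical_dephasing_rate(p):.4f}")
```

Because this toy code protects only against the dominant phase flips, it gets by with three qubits, whereas guarding against arbitrary single-qubit errors requires at least five. That trade-off, fewer resources in exchange for targeting the most probable noise, is the same kind of bargain a noise-tailored scheme exploits.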
The analysis is described in the journal Physical Review Letters, in a paper by MIT graduate student David Layden, postdoc Mo Chen, and professor of nuclear science and engineering Paola Cappellaro.
“The main issues we now face in developing quantum technologies are that current systems are small and noisy,” says Layden. Noise, meaning unwanted disturbances of any kind, is especially vexing because most quantum systems are inherently highly sensitive to their surroundings; that very sensitivity underlies some of their potential applications, such as ultraprecise sensing.