It figures: just when I’m congratulating myself on discovering exciting news, I go a whole weekend without reading any, or without the time to write about it.

No matter; now I’ve caught up. And the first fascinating thing I read today was that researchers at the National Institute of Standards and Technology have created a way to simulate the behavior of hundreds of qubits working together. There’s a more detailed SciGuru article here.

This is a little mind-boggling, mainly because I don’t have the background to grasp the physics behind this. I’m not sure how modeling the qubits in this fashion would produce an accurate prediction; this line, in particular: “Although the two systems may outwardly appear dissimilar, their behavior is engineered to be mathematically identical. In this way, simulators allow researchers to vary parameters that **couldn’t be changed in natural solids**, such as atomic lattice spacing and geometry.” How would these parameters help in the building of qubits?
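The analog-simulator idea, two outwardly dissimilar systems engineered to obey identical math, has a familiar classical cousin: a mass on a spring and an LC circuit satisfy the same differential equation, x'' = -ω²x, so measuring one tells you about the other. Here's a toy sketch of that correspondence (my own illustration, not anything from the article; the specific m, k, L, C values are arbitrary, chosen so the two frequencies match):

```python
import math

def simulate(omega, x0, v0, dt=1e-3, steps=5000):
    """Integrate x'' = -omega**2 * x with semi-implicit Euler."""
    x, v = x0, v0
    trace = []
    for _ in range(steps):
        v -= omega ** 2 * x * dt  # acceleration step
        x += v * dt               # position step
        trace.append(x)
    return trace

# A mass on a spring: omega = sqrt(k/m)
m, k = 0.5, 2.0
spring = simulate(math.sqrt(k / m), x0=1.0, v0=0.0)

# An LC circuit: omega = 1/sqrt(L*C), with L and C chosen so the
# frequency matches the spring's -- the "engineered to be identical" part
L, C = 0.5, 0.5
circuit = simulate(1.0 / math.sqrt(L * C), x0=1.0, v0=0.0)

# Same equation, same parameters -> the traces coincide, so tuning the
# easy-to-adjust system (the circuit) probes the hard-to-adjust one
match = max(abs(a - b) for a, b in zip(spring, circuit)) < 1e-12
```

In a real solid the lattice spacing is fixed by chemistry, but in the trapped-atom analog the corresponding coupling is just a knob you turn, which is presumably the point of the quoted passage.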

The article also mentions that NIST uses weak interactions between their atoms: “In the NIST benchmarking experiments, the strength of the interactions was intentionally weak so that the simulation remained simple enough to be confirmed by a classical computer. Ongoing research uses much stronger interactions.” How would this difference in strength affect the accuracy of the predictions? Unless, of course, the mathematical predictions of the system allow the entire thing to act like a black box, where the parameters are input, tuned, and then the final mathematical result is verified by a classical computer.
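One plausible reading of the weak-interaction choice: a classical computer can only brute-force a quantum system while the state vector stays tractable, and that vector grows as 2^n in the number of qubits. Here's a toy statevector simulator (entirely my own sketch, not NIST's setup) that makes the scaling concrete: ten qubits already means 1,024 complex amplitudes, and each added qubit doubles the cost. The small angle in `zz_phase` stands in, loosely, for a "weak" coupling:

```python
import cmath

def hadamard(state, target):
    """Apply a Hadamard gate to one qubit of a full state vector."""
    s = 1 / 2 ** 0.5
    bit = 1 << target
    new = state[:]
    for i in range(len(state)):
        if i & bit == 0:
            j = i | bit
            a, b = state[i], state[j]
            new[i] = s * (a + b)
            new[j] = s * (a - b)
    return new

def zz_phase(state, q1, q2, theta):
    """Apply the diagonal interaction exp(-i*theta*Z_q1*Z_q2)."""
    new = []
    for i, amp in enumerate(state):
        parity = ((i >> q1) & 1) ^ ((i >> q2) & 1)
        eig = 1 if parity == 0 else -1  # eigenvalue of Z*Z on this basis state
        new.append(amp * cmath.exp(-1j * theta * eig))
    return new

n = 10
state = [0j] * (2 ** n)   # 2**n amplitudes: the exponential wall
state[0] = 1 + 0j         # start in |00...0>
for q in range(n):
    state = hadamard(state, q)
for q in range(n - 1):
    state = zz_phase(state, q, q + 1, 0.05)  # weak nearest-neighbor coupling
norm = sum(abs(a) ** 2 for a in state)
```

So the black-box picture in the paragraph above seems roughly right for the benchmarking phase: keep the system small and weakly coupled, check the analog simulator against an exact classical calculation like this one, then crank up the interactions into the regime no classical machine can follow.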

Hmm. More research (on my part) might be called for.