Misconceptions, misjudgments, and poor use of technology

September 2, 2025

Would you print out your emails just to be faster? Even if that means ignoring the potential of your shiny new inbox, and even if that inbox still needs fixing? That's the kind of trade-off I'm facing today. And it's exactly the problem we have to deal with in quantum machine learning.

Dear Quantum Machine Learner,

Misconceptions can be costly. One of my misconceptions was that integrating new features into this blog would be quick. Estimating effort was the topic of my doctoral thesis, so I should have known better. Nevertheless, I underestimated the actual amount of work involved in what appeared to be minor cosmetic changes from the outside.

Most of my recent efforts have been behind the scenes and are not visible to you. Some effects, however, are visible: more images, new illustrations, clickable keywords. These make the posts richer. But this richness comes at a price. Writing in this new way is slower, not faster, as I had hoped. I thought I would be able to publish more frequently; reality has proven otherwise. Just maintaining my old rhythm is already a challenge.

Here, the parallel to quantum machine learning becomes apparent. The purpose of a parameterized quantum circuit is not to offer marginal acceleration compared to a classical model. Its potential lies in opening up a feature space whose dimension grows exponentially with the number of qubits. This exponential expansion is the structural reason why an advantage can be expected.

But there is a catch: access to a Hilbert space of dimension 2^n is meaningless if that space is not actually used. A quantum kernel that simply embeds data naively in this huge space collapses to forms that classical kernels can approximate just as well. The exponential potential shrinks to a classical baseline: no advantage, just overhead.
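A minimal NumPy sketch of this collapse (the function names are my own illustration, not from any quantum library): a naive product-state angle encoding maps each feature to its own qubit, so the state formally lives in a 2^n-dimensional space, yet the resulting fidelity kernel reduces to a product of cosines that a classical computer evaluates in O(n).

```python
import numpy as np
from functools import reduce

def feature_map(x):
    """Naive angle encoding: each feature x_i becomes one qubit in the
    state (cos(x_i/2), sin(x_i/2)); the full state is the tensor
    product, a vector of length 2^n."""
    qubits = [np.array([np.cos(xi / 2), np.sin(xi / 2)]) for xi in x]
    return reduce(np.kron, qubits)

def quantum_kernel(x, y):
    """Fidelity kernel |<phi(x)|phi(y)>|^2, computed via the full
    2^n-dimensional state vectors."""
    return np.abs(feature_map(x) @ feature_map(y)) ** 2

def classical_equivalent(x, y):
    """The same kernel factorizes into a product of cos^2 terms,
    computable in O(n): the exponential space is never exploited."""
    return np.prod(np.cos((np.asarray(x) - np.asarray(y)) / 2) ** 2)

x, y = np.random.default_rng(0).uniform(0, np.pi, (2, 6))
print(np.isclose(quantum_kernel(x, y), classical_equivalent(x, y)))  # True
```

Because the encoding never entangles the qubits, the "quantum" kernel is exactly the classical product kernel; an entangling feature map is what would break this factorization.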

Figure 1. Naive use of technology does not exploit its full potential.

That's exactly how I feel when I think about going back to the "old-fashioned" posts. Publishing on this new platform but ignoring its capabilities would be like inserting a quantum circuit into a machine learning loop without using its parameterization. Or like printing out your emails instead of searching them. Technically possible, but conceptually wasteful.

So the way forward is neither to abandon the new features nor to pretend they aren't slowing me down. The compromise is a format still in progress: shorter posts, but not weaker ones. The quality stays the same, even if length and frequency change. Just as in quantum machine learning, the trick is to actually use the structure instead of pretending it comes for free.

I am convinced that the investment will pay off. Step by step, I will refine not only my texts and their presentation, but also the process itself, so that creating this content becomes faster without losing depth.

So let's begin our search for the essence of quantum machine learning.

Seeker Greeting

—Frank Zickert, Author of PyQML