
Why hardware is not the solution
In quantitative finance, there is a familiar pattern. When performance starts to lag, the instinct is to buy faster machines, rent cloud servers, or throw GPUs at the problem. The narrative is that computation is hard, so the solution must be hardware. It sounds reasonable, but it is often the wrong approach. In my experience, performance issues usually have little to do with hardware and everything to do with mathematics and design.
The problem with convenience layers and Python dependency
At Algorithmica, where we have been building quantitative systems since 1994, this lesson has repeated itself many times. Over the years, I have seen a shift toward quick fixes and convenience layers, a tendency to assemble Python libraries and call it a day. Python is a wonderful language for research and experimentation, but it has also created a generation of quants who are further from the machine than ever before. They know how to use libraries, but not necessarily why they work. They can import a Fourier transform but cannot explain what it really does or how memory access patterns affect performance.
Qlang and LLVM: performance through design
In Quantlab, our in-house platform, we chose another path. We built our own language, Qlang, which compiles directly to machine code through LLVM. It is as expressive as C++, yet avoids manual memory management and the traps that come with it. The idea was simple: give quants and developers full control of performance, but keep the productivity of a higher-level language.
Case study: speeding up Heston calibration with the COS method
That control matters. We recently implemented a calibration of a Heston volatility surface to S&P index options using the COS method. The mathematics behind the COS algorithm relies on Fourier transforms, concepts that date back to the 19th century but remain as elegant and powerful as ever. By combining a clean mathematical formulation with careful attention to vector operations and data locality, we reduced calibration time from twelve seconds to three-tenths of a second. No GPUs. No parallelisation. No external speed-up libraries. Just solid mathematics and good engineering.
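Quantlab's Qlang implementation is proprietary, so as a concrete illustration here is a minimal, self-contained C++ sketch of the pricing step such a calibration rests on: a European call under Heston, priced with the COS method of Fang and Oosterlee (2008) as a truncated Fourier-cosine series against the model's characteristic function. The parameter values, the crude truncation range, and the single-strike interface are illustrative assumptions, not our production code; a calibration would wrap a pricer like this in an optimiser over the whole surface.

```cpp
// cos_heston.cpp -- illustrative sketch, not Quantlab/Qlang production code.
// European call under Heston via the COS method (Fang & Oosterlee, 2008).
// Build: g++ -O2 -std=c++17 cos_heston.cpp
#include <cmath>
#include <complex>
#include <cstdio>

using cd = std::complex<double>;

struct Heston {
    double kappa;  // mean-reversion speed
    double theta;  // long-run variance
    double sigma;  // volatility of variance
    double rho;    // spot/variance correlation
    double v0;     // initial variance
};

// Characteristic function of ln(S_T) in the stable "little trap" form.
cd heston_cf(double u, double S0, double r, double T, const Heston& p) {
    const cd i(0.0, 1.0);
    cd iu = i * u;
    cd xi = p.kappa - p.rho * p.sigma * iu;
    cd d  = std::sqrt(xi * xi + p.sigma * p.sigma * (iu + u * u));
    cd g  = (xi - d) / (xi + d);
    cd e  = std::exp(-d * T);
    cd C  = r * iu * T + p.kappa * p.theta / (p.sigma * p.sigma)
            * ((xi - d) * T - 2.0 * std::log((1.0 - g * e) / (1.0 - g)));
    cd D  = (xi - d) / (p.sigma * p.sigma) * (1.0 - e) / (1.0 - g * e);
    return std::exp(C + D * p.v0 + iu * std::log(S0));
}

// COS price of a European call: N cosine terms on a truncation range [a, b].
double cos_call(double S0, double K, double r, double T, const Heston& p,
                int N = 256, double L = 12.0) {
    const double PI = std::acos(-1.0);
    const cd i(0.0, 1.0);
    // Crude truncation from the first two cumulants of y = ln(S_T / K);
    // production code would use the full c1/c2/c4 bounds from the paper.
    double c1 = std::log(S0 / K) + (r - 0.5 * p.theta) * T;
    double c2 = p.theta * T;
    double a = c1 - L * std::sqrt(c2), b = c1 + L * std::sqrt(c2);

    double sum = 0.0;
    for (int k = 0; k < N; ++k) {
        double w = k * PI / (b - a);
        // Analytic cosine coefficients V_k of the payoff K(e^y - 1)^+ on [0, b].
        double chi = (std::cos(w * (b - a)) * std::exp(b) - std::cos(-w * a)
                      + w * (std::sin(w * (b - a)) * std::exp(b) - std::sin(-w * a)))
                     / (1.0 + w * w);
        double psi = (k == 0) ? b : (std::sin(w * (b - a)) - std::sin(-w * a)) / w;
        double Vk = 2.0 / (b - a) * K * (chi - psi);
        // CF of y = ln(S_T / K): shift the CF of ln(S_T) by exp(-i w ln K).
        cd phi = heston_cf(w, S0, r, T, p) * std::exp(-i * w * std::log(K));
        sum += (k == 0 ? 0.5 : 1.0) * std::real(phi * std::exp(-i * w * a)) * Vk;
    }
    return std::exp(-r * T) * sum;
}

int main() {
    Heston p{2.0, 0.04, 0.5, -0.7, 0.04};  // illustrative parameters, not fitted values
    std::printf("COS Heston call: %.6f\n", cos_call(100.0, 100.0, 0.02, 1.0, p));
}
```

The structure is the point: the whole loop is effectively a dot product between characteristic-function evaluations and precomputable payoff coefficients, exactly the kind of regular, cache-friendly computation that rewards the attention to vector operations and data locality described above.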
Mathematics before hardware
This is not a story about technology, but about mathematics. The foundation of performance lies in how well the problem itself is understood and formulated. Once the maths is right, the implementation becomes straightforward. Throwing hardware at an inefficient model is like adding horsepower to a car with flat tyres. You move faster for a while, but the fundamentals are still wrong.
When hardware optimisation actually matters
That is not to say hardware has no place. In high-frequency trading or large-scale investment-bank systems, additional performance can be unlocked by parallelisation, vectorisation, or dedicated hardware. But these techniques only matter when built on a solid mathematical core. Without it, optimisation becomes noise.
Real performance comes from understanding the machine
The point is simple. Real performance comes from understanding the mathematics and the machine. It comes from reading academic papers, challenging assumptions, and building tools that fit the problem instead of gluing together generic solutions. It comes from hiring engineers who can explain both the algorithm and the architecture.
Conclusion: precision thinking beats raw power
In an age where technology is celebrated for its own sake, it is worth remembering that good mathematics still wins. Hardware amplifies; it does not create. The real leverage lies in thinking clearly, designing carefully, and coding with intent.
Sometimes, old-school mathematics, applied with modern precision, is still the fastest way forward.