When theory meets practice—and they argue productively

May 25, 2025

Welcome to the second entry in my Holiday in Sicily series. Today, I reflect on the interplay between algorithms and theory—how practice often leads, how theory follows, and how the two shape each other in a dynamic loop that drives much of the progress in applied mathematics.

Introduction

In many fields of applied mathematics and engineering, the relationship between theory and practice isn’t a one-way street. It’s a conversation. At its best, it’s a lively argument.

Often, the most exciting developments come from this back-and-forth—a dynamic feedback loop between algorithms and theory. Practice speaks first: a new method appears, driven by need or intuition. Then theory replies, asking tough questions. Does it work? When? Why?

Sometimes, theory pushes back, exposing flaws or failure modes. Practice listens, adjusts, and counters with a better design. And so the exchange continues—design, analysis, failure, improvement. A real dialogue. Each side learning, challenging, shaping the other.

The pattern

This pattern can be broken down into five stages:

  • It starts with an algorithm.
    Someone proposes a new method. Maybe it’s computationally efficient, maybe it’s inspired by physics, maybe it’s a neural network architecture copied from another domain. The key point: it’s motivated by what works, not necessarily by what can be proved.
  • Then comes theory.
    After initial empirical validation, theoretical work seeks to rigorously understand the algorithm: Does it converge? Under what conditions? How stable is it? Is it optimal? These proofs don’t just validate the method—they clarify its domain of applicability and often reveal surprising subtleties.
  • Theory explains failure modes.
    Theory doesn’t just confirm what works. It explains why it fails under certain conditions. It identifies the edge cases: when the matrix is ill-conditioned, when the PDE becomes singular, when the neural network can’t generalize. These insights often reveal flaws in the original method—and point the way forward.
  • Failures lead to new designs.
    Armed with new theoretical insight, practitioners redesign the algorithm. They regularize it. They precondition it. They add skip connections, or stabilize the discretization, or adjust the loss function. Now there’s a new method—a fix, a patch, or sometimes a breakthrough.
  • Theory circles back.
    The cycle completes as the community begins to rigorously analyze these new approaches, proving convergence, stability, or optimality. The feedback loop spins again—and the field takes another step forward.

This cycle is more than just a historical curiosity. It reflects a healthy research ecosystem where algorithms and theory co-evolve, each pushing the other forward.

Classic example: the finite element method

In the 1960s and 1970s, engineers began using the finite element method (FEM) for solving PDEs in structural mechanics. Early algorithms were developed based on physical intuition and computational feasibility, often with minimal theoretical backing.

As usage spread, mathematicians stepped in to analyze convergence, consistency, and error bounds. This theoretical work revealed the importance of mesh quality, element choice, and function spaces—leading to remedies such as adaptive mesh refinement and stabilized formulations. These in turn became subjects of rigorous mathematical analysis, culminating in the powerful FEM theory we now rely on.
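
To make the setting concrete, here is a minimal sketch of the kind of algorithm this history is about: a linear finite element solver for the one-dimensional Poisson problem -u'' = f on [0, 1] with homogeneous Dirichlet boundary conditions. It is written in Python with NumPy; the function name, uniform mesh, and midpoint-rule load assembly are illustrative choices for this post, not taken from any particular FEM code.

    import numpy as np

    def fem_poisson_1d(f, n_elements=10):
        """Solve -u'' = f on [0, 1] with u(0) = u(1) = 0 using linear elements."""
        n_nodes = n_elements + 1
        x = np.linspace(0.0, 1.0, n_nodes)
        h = x[1] - x[0]

        K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
        b = np.zeros(n_nodes)              # global load vector

        for e in range(n_elements):
            i, j = e, e + 1
            # local stiffness matrix of a linear element of length h
            K[i, i] += 1.0 / h
            K[i, j] -= 1.0 / h
            K[j, i] -= 1.0 / h
            K[j, j] += 1.0 / h
            # element load contribution via the midpoint rule
            fm = f(0.5 * (x[i] + x[j]))
            b[i] += 0.5 * h * fm
            b[j] += 0.5 * h * fm

        # homogeneous Dirichlet conditions: restrict the system to interior nodes
        u = np.zeros(n_nodes)
        u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
        return x, u

    # Example: f = pi^2 sin(pi x), whose exact solution is u = sin(pi x)
    x, u = fem_poisson_1d(lambda s: np.pi**2 * np.sin(np.pi * s), n_elements=32)
    print(np.max(np.abs(u - np.sin(np.pi * x))))  # error shrinks as the mesh is refined

Even this toy solver touches the themes above: the error decreases under mesh refinement, and it is the theory that tells us at what rate, and under which regularity and mesh-quality assumptions.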

Modern example: neural networks

The past decade has seen a similar cycle in machine learning. Deep neural networks first exploded in popularity due to empirical success in computer vision and natural language processing. The original architectures—like AlexNet—worked well, but few could explain why.

Soon after, theoretical work began to address generalization, expressivity, and training dynamics. Researchers identified failure modes: vanishing gradients, overfitting, sensitivity to initialization. Each insight led to remedies like residual connections, batch normalization, and dropout.
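
To illustrate what those remedies look like in code, here is a minimal sketch of a generic residual block combining a skip connection, batch normalization, and dropout. It assumes PyTorch; the class name, layer sizes, and dropout rate are arbitrary illustrative choices, not a specific published architecture.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """A small residual block: the skip connection keeps gradients flowing,
        batch normalization stabilizes activations, dropout combats overfitting."""
        def __init__(self, channels: int, p_drop: float = 0.1):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn2 = nn.BatchNorm2d(channels)
            self.drop = nn.Dropout2d(p_drop)
            self.act = nn.ReLU()

        def forward(self, x):
            y = self.act(self.bn1(self.conv1(x)))
            y = self.drop(self.bn2(self.conv2(y)))
            return self.act(x + y)  # skip connection: output = input + learned residual

    block = ResidualBlock(channels=16)
    out = block(torch.randn(8, 16, 32, 32))  # spatial shape and channels are preserved
    print(out.shape)

The skip connection gives gradients a direct path through the block, which is one reason such designs ease the vanishing-gradient problem that theoretical analysis had pinpointed.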

These new components became standard architecture elements, and new theoretical frameworks arose to understand them—such as neural tangent kernels, overparameterization theory, and mean-field limits.

Why it matters

Understanding this pattern helps us better appreciate the non-linear path of scientific progress. It reminds us that:

  • Empirical success can precede theory.
    Not every good idea starts with a theorem.
  • Failures are fertile ground.
    When algorithms fail, those failures often spark the most productive theoretical advances.
  • Theory isn’t just for validation.
    It can actively shape new methods, sometimes in surprising ways.

If you’re a theorist, don’t wait for the perfect question. Look at what practitioners are doing—it’s full of open problems.

If you’re a practitioner, don’t be afraid to work with unproven methods. The theory might catch up—and when it does, it’ll make your work even better.

