May 31, 2025
Welcome to the fifth entry in my Holiday in Sicily series. As I sip a strong espresso under the Etna sun, I can’t help but feel it: the eruption is getting closer. Not just from Etna—though it’s rumbling again—but from the growing volcano in computational science. I’m talking about Scientific Machine Learning (SciML) and its most explosive crater: Physics-Informed Neural Networks (PINNs).
At its core, SciML is the blend of traditional scientific computing (think numerical linear algebra, differential equations, optimization) with machine learning techniques. The goal? Learn from data and physics. It’s a natural step forward for computational scientists in the era of data-driven models.
Among its many algorithms, PINNs are arguably the flashiest. Introduced by Raissi et al. in 2019, PINNs solve PDEs by turning them into loss functions for neural networks. Instead of assembling a stiffness matrix or computing fluxes across volumes, you just minimize a loss that penalizes deviation from the PDE—using automatic differentiation and PyTorch or TensorFlow.
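To make that concrete, here is a minimal sketch for a toy 1D Poisson problem, -u''(x) = π² sin(πx) on (0, 1) with homogeneous Dirichlet conditions (my own illustrative setup, not code from the original paper): the network stands in for the solution, and the loss is just the mean squared PDE residual at random collocation points plus a boundary penalty.

```python
import torch
import torch.nn as nn

# Toy 1D Poisson problem: -u''(x) = pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x). The network stands in for u.
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pinn_loss():
    x = torch.rand(128, 1, requires_grad=True)      # random interior collocation points
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = -d2u - torch.pi**2 * torch.sin(torch.pi * x)   # PDE residual
    xb = torch.tensor([[0.0], [1.0]])                          # boundary points
    return (residual**2).mean() + (net(xb)**2).mean()          # PDE penalty + BC penalty

for step in range(5000):
    opt.zero_grad()
    loss = pinn_loss()
    loss.backward()
    opt.step()
```

No mesh, no stiffness matrix: the two derivatives come straight from automatic differentiation, which is exactly the appeal.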
How hot is it? PINNs are scorching. They’ve already surpassed the Fast Multipole Method (FMM) in citations and are catching up to the Fast Fourier Transform (FFT)—two of the most celebrated algorithms of the 20th century, and personal favorites of mine.
But it’s not just citations. On LinkedIn, debates over PINNs are as passionate as they are polarizing. You’ll find postdocs promoting “PINN 2.0” architectures, industry labs claiming real-world applications, and grumpy seniors complaining about the method’s instability.
Let’s try to be rational. Here’s my take:
In a world where success is often measured by adoption and citations, PINNs seem to be a win. But that brings us to a deeper question: what is science for? Is it about writing technically flawless, Bourbaki-style papers that are hard to read and understand, and that few people actually read? Or is it for spreading ideas, teaching, and energizing the community? PINNs undeniably brought people in. That matters.
The reality is more complex. Despite the buzz, PINNs often fail to deliver on benchmarks. A recent methodical benchmarking study highlighted serious issues: weak baselines, inconsistent reporting, and disappointing performance compared to traditional solvers. Or consider this honest blog post by a grad student in astrophysics who fell for the hype—then ran into a wall. Following that post, things got especially heated on LinkedIn, with people openly clashing.
And yet, the papers keep piling up—like villagers reinforcing old walls, hoping they’ll hold when the eruption comes. One patch after another on an architecture many suspect is fundamentally unstable.
Let’s not throw everything away. PINNs shine in the classroom. We still teach finite difference (FD) methods for space discretization, even though few people actually use them in practice. Why not teach PINNs as well?
I came to SciML not via PDEs but through approximation theory. It started with approximations in Korobov spaces, inspired by Yarotsky’s work in Sobolev spaces. Later, I explored the Kolmogorov–Arnold superposition theorem.
These works felt theoretical at the time, but they later inspired practical architectures like KAN, which, like PINNs, exploded in popularity almost overnight and generated heated debates. (This is an example where theory precedes practice, and the two talk to each other.) I missed the practical trick that made implementation efficient: stacking multiple layers and iterating. As a numerical analyst, I should’ve guessed; iteration is the heart of so many methods.
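For what it’s worth, here is a toy sketch of that trick, under my own simplifications: each edge of the layer carries a learnable univariate function (a small sine basis here, whereas the actual KAN implementation uses B-splines), and the efficiency comes from composing several such layers rather than stopping at the classical two-level superposition.

```python
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    """Toy Kolmogorov–Arnold-style layer: every (input, output) edge carries its
    own learnable univariate function, here a small sine basis rather than the
    B-splines used in the actual KAN implementation."""
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.register_buffer("freqs", torch.arange(1, n_basis + 1).float())
        self.coeffs = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))

    def forward(self, x):                                   # x: (batch, in_dim)
        basis = torch.sin(x.unsqueeze(-1) * self.freqs)     # (batch, in_dim, n_basis)
        return torch.einsum("oik,bik->bo", self.coeffs, basis)

# The "trick" is plain composition: stack layers and iterate instead of stopping
# at the two-level superposition of the classical theorem.
model = nn.Sequential(ToyKANLayer(2, 16), ToyKANLayer(16, 16), ToyKANLayer(16, 1))
y = model(torch.rand(64, 2))
```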
Then, this year, I launched a SciML course at École Polytechnique. It was a rewarding experience. Today, I’m working with a brilliant PhD student, Victor, on regularizing the Linear Sampling Method using neural operators, a class of SciML algorithms designed to learn mappings between function spaces. It works surprisingly well. Victor’s writing up his first paper now.
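To give a flavor of what “learning mappings between function spaces” looks like in code, here is a minimal FNO-style Fourier layer, a generic sketch of one popular neural-operator building block (not Victor’s method or our actual implementation): the input is a function sampled on a grid, and the learned weights act on its lowest Fourier modes, which is what makes the layer largely independent of the discretization.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Minimal FNO-style Fourier layer: FFT the input function, keep the lowest
    modes, mix channels with learned complex weights, inverse FFT. Working in
    frequency space is what makes the layer largely discretization-independent."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weights = nn.Parameter(
            (1.0 / channels) * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                  # x: (batch, channels, grid points)
        x_ft = torch.fft.rfft(x)           # (batch, channels, grid // 2 + 1)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))

layer = SpectralConv1d(channels=4, modes=12)
v = layer(torch.rand(8, 4, 256))           # one sampled input function per batch row
```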
In science, we must remain rational, not emotional. Some people are irrationally for PINNs—hyped beyond evidence. Others are irrationally against—rejecting anything AI-flavored on principle.
At work, I often hear: “AI consumes too much electricity! ChatGPT is catastrophic!” But let’s put things in context.
Say a single ChatGPT query gives you the answer you need. It consumes about 1.5 Wh on the server side. (This is a very conservative estimate—GPT-4o appears to be significantly more efficient.) If it takes 10 seconds to read and process on your laptop (drawing 50 W), that adds around 0.14 Wh. Total: ~1.64 Wh.
Now compare that with Google. Each search uses about 0.3 Wh server-side. Suppose you need 3 queries and spend 3 minutes browsing—totaling 0.9 Wh server-side, plus 2.5 Wh from your laptop. Total: ~3.4 Wh.
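For transparency, here is the back-of-the-envelope arithmetic behind those totals; every figure is the rough assumption stated above, not a measurement.

```python
# Back-of-the-envelope check of the totals above; every figure is a rough assumption.
LAPTOP_W = 50.0                                  # assumed laptop power draw in watts

def laptop_wh(seconds, watts=LAPTOP_W):
    return watts * seconds / 3600.0              # watt-seconds -> watt-hours

chatgpt = 1.5 + laptop_wh(10)                    # one query + 10 s of reading
google = 3 * 0.3 + laptop_wh(3 * 60)             # three searches + 3 min of browsing

print(f"ChatGPT ~{chatgpt:.2f} Wh, Google ~{google:.2f} Wh, ratio ~{google / chatgpt:.1f}x")
# ChatGPT ~1.64 Wh, Google ~3.40 Wh, ratio ~2.1x
```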
Used efficiently, ChatGPT can be up to twice as energy-efficient for certain tasks. The real issue isn’t the tool—it’s the usage. If you fire off 10 prompts a minute, yes, it adds up. But thoughtful use can save both time and energy.
More broadly, let’s consider history. The printing press democratized access to information. The Industrial Revolution increased productivity through mechanization. The telegraph and telephone collapsed distances and accelerated communication. The Internet transformed how we share and retrieve knowledge. Now, AI is speeding up technical workflows—from writing code and analyzing data to formulating and testing scientific models. It’s not an anomaly; it’s a natural continuation.
Yes, AI must be efficient and responsible—but we shouldn’t dismiss progress just because it’s imperfect. Going back to SciML, I do believe that neural operators can make a real impact. That said, too many papers and talks still avoid fair comparisons: they often benchmark against single-instance solvers like FEM, rather than their true competitor—Model Order Reduction.
So, here I am in Sicily, watching both Etna and SciML rumble. Both are beautiful, unstable, and sometimes dangerous. Both inspire awe and fear. And both may shape the future—even if we don’t know how.
Let’s stay curious, critical, and open. Not irrationally for, not irrationally against. Just rationally engaged.