Adaptive Space-Time Finite Element Solver

Developed as a capstone project for the “Numerical Methods for Partial Differential Equations” course at Politecnico di Milano, this repository implements an optimized finite element solver for the 2D heat equation. The system leverages the deal.II C++ library and MPI to dynamically resolve localized, impulsive physical phenomena.

The Challenge

In time-dependent PDEs, sharp spatial gradients often appear only in highly localized regions and during brief time intervals. Using a uniformly fine mesh and a uniformly small time step across the entire domain is computationally wasteful. The goal was to build a solver that concentrates computational effort (degrees of freedom, or DoFs, and temporal resolution) exactly where and when the physics demand it.

Architectural & Mathematical Overview

The solver simulates a heat equation driven by a separable forcing term containing a spatial Gaussian pulse and oscillatory temporal impulses.
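In symbols, a separable forcing of this kind takes the form below; the pulse center $\boldsymbol{x}_0$ and width $\sigma$ are illustrative placeholders, not the project's actual parameter values:

$$ \partial_t u - \Delta u = g(\boldsymbol{x})\,h(t), \qquad g(\boldsymbol{x}) = \exp\!\left(-\frac{\lVert \boldsymbol{x} - \boldsymbol{x}_0 \rVert^2}{2\sigma^2}\right), $$

with $h(t)$ the oscillatory impulse train in time. The separability means the forcing is sharply peaked in space while switching on and off in time, which is precisely the regime where space-time adaptivity pays off.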

1. Spatial Discretization & AMR

  • Finite Elements: The spatial domain is discretized using continuous $Q_1$ finite elements on a quadrilateral mesh.
  • Adaptive Mesh Refinement (AMR): The solver dynamically refines and coarsens the grid using the Kelly error estimator—a residual-based indicator that analyzes the jumps in the normal derivative across cell faces. Solution transfer mechanisms seamlessly interpolate states between adapting meshes.

2. Time Integration & Adaptivity

  • $\theta$-Method: Uses the unconditionally stable, second-order Crank-Nicolson scheme ($\theta=0.5$) for time stepping.
  • Rannacher Smoothing: To prevent spurious oscillations from rough initial data, the solver employs Rannacher smoothing, temporarily switching to the L-stable implicit Euler scheme ($\theta=1.0$) for the first few steps to damp the high-frequency modes that Crank-Nicolson fails to dissipate.
  • Time-Step Controllers: Implements step-doubling (Richardson extrapolation) error estimation, paired with a PI controller that smoothly adjusts step sizes based on the error history, alongside a lower-cost heuristic controller.

3. Linear Algebra

  • At each step, the resulting Symmetric Positive Definite (SPD) system is solved using the Preconditioned Conjugate Gradient (PCG) method, significantly accelerated by a Trilinos Algebraic Multigrid (AMG) preconditioner.

Distributed Computing & MPI Parallelization

To handle the massive computational load generated by the fully adaptive space-time simulations, where the number of degrees of freedom can grow rapidly during refinement, the solver is fully parallelized using the Message Passing Interface (MPI).

  • The workload and grid are distributed across multiple processors using deal.II’s parallel distributed triangulation and Trilinos Wrappers for distributed linear algebra.
  • The entire architecture was successfully deployed, benchmarked, and stress-tested on the High Performance Computing cluster at the Politecnico di Milano Mathematics Department, demonstrating decent scalability and robust memory management across distributed nodes.

Results & Performance Analysis

A comprehensive benchmark was conducted to evaluate the cost-accuracy trade-offs of four configurations: Fixed/Fixed, Adaptive Space/Fixed Time, Fixed Space/Adaptive Time, and Fully Adaptive.

  • Spatial Dominance: Spatial adaptivity proved to be the dominant accuracy lever, consistently reducing the $L^2$ error by over an order of magnitude compared to uniform grids.
  • The “Fully Adaptive” Overhead: While coupling AMR with step-doubling time adaptivity achieved the highest theoretical fidelity ($e \approx 10^{-7}$), it introduced severe computational overhead. The time integrator would often misinterpret the interpolation noise generated by mesh refinement as temporal error, leading to excessive step rejections.
  • Optimal Compromise: The study concluded that for this class of problems, an Adaptive Space / Fixed Time configuration represents the best engineering compromise, delivering a 20x speed-up over the fully adaptive method with only a minor penalty in accuracy.

Source Code: View on GitHub