Exascale CFD/CSM Coupling: Partitioned vs. Monolithic Solvers, Load Balancing, and I/O at Scale

Authors

    Dima Haddad, Department of Computer Engineering, Hashemite University, Zarqa, Jordan. dima.haddad@hu.edu.jo

Keywords:

Exascale computing, CFD/CSM coupling, partitioned solver, monolithic solver, load balancing, parallel I/O, multiphysics, high-performance computing, scalability, resilience

Abstract

This review synthesizes and critically evaluates recent advances in coupling computational fluid dynamics (CFD) and computational structural mechanics (CSM) at exascale, focusing on solver paradigms, load balancing, algorithmic scalability, and data-management challenges in massively parallel environments. A qualitative systematic review design was employed to consolidate insights from cutting-edge studies on exascale multiphysics coupling. Sixteen peer-reviewed articles published between 2018 and 2025 were selected from Scopus, Web of Science, IEEE Xplore, and ScienceDirect using keywords such as “exascale CFD,” “CSM coupling,” “monolithic solver,” “partitioned framework,” “load balancing,” and “parallel I/O.” Data collection was conducted exclusively through literature analysis, and coding was performed in NVivo 14. Thematic analysis followed open, axial, and selective coding to extract conceptual relationships among solver architectures, scalability bottlenecks, and I/O strategies. Analytical saturation was reached after the sixteenth study, indicating thematic convergence across the dataset. Five dominant themes emerged: (1) solver coupling paradigms, (2) load balancing and parallel scalability, (3) I/O and data management, (4) algorithmic and numerical scalability, and (5) emerging trends and future directions. Results indicate that partitioned solvers provide modularity and flexibility but suffer from communication overhead at large node counts, while monolithic frameworks achieve greater numerical robustness at higher computational cost. Dynamic load balancing and hybrid MPI + OpenMP or GPU parallelism were identified as key enablers of exascale scalability. Efficient I/O frameworks such as ADIOS2 and HDF5, together with in-situ data processing and hierarchical storage, proved critical to sustaining performance. The integration of machine learning, fault tolerance, and hybrid coupling strategies defines the next frontier of CFD/CSM research. Exascale CFD/CSM coupling requires co-designed strategies that integrate solver stability, load adaptivity, and efficient data movement. The review underscores that exascale readiness is less a matter of hardware scale than of algorithmic intelligence, communication efficiency, and workflow resilience.
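The contrast between the two coupling paradigms can be made concrete on a toy two-field linear system. The sketch below is purely illustrative (a hypothetical 2-DOF "fluid" and "structure" pair, not any surveyed framework's API): the monolithic path assembles one block matrix and solves it in a single shot, while the partitioned path alternates single-field solves and exchanges interface data each iteration, which is exactly the communication step that accumulates overhead at large node counts.

```python
import numpy as np

# Toy two-field coupled system (hypothetical 2-DOF "fluid" and "structure" blocks):
#   [A_f  C_fs] [u_f]   [b_f]
#   [C_sf A_s ] [u_s] = [b_s]
A_f  = np.array([[4.0, 1.0], [1.0, 3.0]])
A_s  = np.array([[5.0, 0.5], [0.5, 4.0]])
C_fs = np.array([[0.3, 0.0], [0.0, 0.2]])   # interface coupling blocks
C_sf = np.array([[0.1, 0.0], [0.0, 0.4]])
b_f  = np.array([1.0, 2.0])
b_s  = np.array([0.5, 1.5])

def solve_monolithic():
    """Assemble and solve the full block system in one shot."""
    K = np.block([[A_f, C_fs], [C_sf, A_s]])
    b = np.concatenate([b_f, b_s])
    x = np.linalg.solve(K, b)
    return x[:2], x[2:]

def solve_partitioned(tol=1e-10, max_iter=100):
    """Block Gauss-Seidel: alternate single-field solves, exchanging
    interface data every sweep (the communication step at scale)."""
    u_f = np.zeros(2)
    u_s = np.zeros(2)
    for it in range(max_iter):
        u_f_new = np.linalg.solve(A_f, b_f - C_fs @ u_s)
        u_s_new = np.linalg.solve(A_s, b_s - C_sf @ u_f_new)
        if np.linalg.norm(u_f_new - u_f) + np.linalg.norm(u_s_new - u_s) < tol:
            return u_f_new, u_s_new, it + 1
        u_f, u_s = u_f_new, u_s_new
    return u_f, u_s, max_iter

uf_m, us_m = solve_monolithic()
uf_p, us_p, iters = solve_partitioned()
print("monolithic :", uf_m, us_m)
print("partitioned:", uf_p, us_p, f"({iters} iterations)")
```

With weak interface coupling the partitioned sweep converges to the monolithic answer in a handful of iterations; as the coupling strengthens, the iteration count grows, mirroring the robustness-versus-modularity trade-off the review describes.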
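Dynamic load balancing can likewise be sketched in miniature. The following is a simplified stand-in, assuming per-task costs measured from the previous time step's timers: it redistributes tasks with a greedy longest-processing-time heuristic, whereas production multiphysics codes would invoke a graph partitioner (e.g. ParMETIS) on the mesh connectivity; all names here are illustrative.

```python
import heapq

def rebalance(task_costs, n_ranks):
    """Greedy longest-processing-time (LPT) redistribution: assign the
    most expensive tasks first, each to the currently least-loaded rank."""
    heap = [(0.0, r) for r in range(n_ranks)]   # min-heap of (load, rank)
    heapq.heapify(heap)
    assignment = {r: [] for r in range(n_ranks)}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(heap)
        assignment[rank].append(task)
        heapq.heappush(heap, (load + cost, rank))
    loads = [sum(task_costs[t] for t in ts) for ts in assignment.values()]
    imbalance = max(loads) / (sum(loads) / n_ranks)  # 1.0 == perfectly balanced
    return assignment, imbalance

# hypothetical costs measured from the previous step's per-patch timers
costs = {f"patch{i}": c for i, c in enumerate([9, 7, 6, 5, 5, 4, 3, 2, 2, 1])}
assignment, imbalance = rebalance(costs, 3)
print(assignment, f"imbalance={imbalance:.3f}")
```

The imbalance ratio (max rank load over mean rank load) is the quantity a dynamic balancer tries to drive toward 1.0 between solver phases.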
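Finally, the in-situ data-processing idea can be shown, under simplifying assumptions, as a consumer that reduces each solver step to summary statistics instead of writing full field snapshots to disk. Real pipelines would stage this through ADIOS2 or a similar in-situ API; this toy pure-Python version only demonstrates the data-movement principle.

```python
import math

def in_situ_statistics(field_steps):
    """Consume solver output step by step and keep only reductions
    (min/max/mean/std), so full snapshots never touch the filesystem."""
    stats = []
    for step, field in enumerate(field_steps):
        n = len(field)
        mean = sum(field) / n
        var = sum((x - mean) ** 2 for x in field) / n
        stats.append({"step": step, "min": min(field), "max": max(field),
                      "mean": mean, "std": math.sqrt(var)})
    return stats

# toy "solver": a decaying sine wave sampled on 1000 points per step
steps = ([math.exp(-0.1 * t) * math.sin(0.01 * i) for i in range(1000)]
         for t in range(5))
summary = in_situ_statistics(steps)
print(summary[0]["max"], summary[-1]["max"])
```

Because the generator is consumed lazily, only one step's field is ever resident, which is the same memory-footprint argument made for in-situ analysis at exascale.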




Published

2024-08-01

Submitted

2024-05-24

Revised

2024-07-12

Accepted

2024-07-25

Issue

Section

Articles

How to Cite

Haddad, D. (2024). Exascale CFD/CSM Coupling: Partitioned vs. Monolithic Solvers, Load Balancing, and I/O at Scale. Multidisciplinary Engineering Science Open, 2, 1-13. https://jmesopen.com/index.php/jmesopen/article/view/19