<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://ecfd.coria-cfd.fr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tberthelon</id>
		<title>Extreme CFD workshop - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://ecfd.coria-cfd.fr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tberthelon"/>
		<link rel="alternate" type="text/html" href="https://ecfd.coria-cfd.fr/index.php/Special:Contributions/Tberthelon"/>
		<updated>2026-05-16T05:58:14Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.26.2</generator>

	<entry>
		<id>https://ecfd.coria-cfd.fr/index.php?title=Ecfd:ecfd_8th_edition&amp;diff=782</id>
		<title>Ecfd:ecfd 8th edition</title>
		<link rel="alternate" type="text/html" href="https://ecfd.coria-cfd.fr/index.php?title=Ecfd:ecfd_8th_edition&amp;diff=782"/>
				<updated>2025-02-10T11:12:40Z</updated>
		
		<summary type="html">&lt;p&gt;Tberthelon: /* N5 - Local timestep. T. Berthelon (LEGI), M. Bernard (LEGI), G. Balarac (LEGI) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE: ECFD workshop, 8th edition, 2025}}&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| align=&amp;quot;right&amp;quot; style=&amp;quot;text-align:center;&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
| [[File:Logo_ECFD8.png | center | thumb | 350px | ECFD8 workshop logo.]]&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
* Event from '''27th of January to 7th of February 2025'''&lt;br /&gt;
* Location: [https://www.sport-normandie.fr/le-centre/le-site-de-houlgate Centre Sportif de Normandie], Houlgate, near Caen (14)&lt;br /&gt;
* Two types of sessions:&lt;br /&gt;
** common technical presentations: roadmaps, specific points&lt;br /&gt;
** mini-workshops. Potential workshops are listed below&lt;br /&gt;
* Free of charge&lt;br /&gt;
* Participants from academia, HPC centers/experts and industry are welcome&lt;br /&gt;
* The number of participants is limited to 68.&lt;br /&gt;
&lt;br /&gt;
* Objectives &lt;br /&gt;
** Bring together experts in high-performance computing, applied mathematics and multi-physics CFD&lt;br /&gt;
** Identify the technological barriers of exaflopic CFD via numerical experiments&lt;br /&gt;
** Identify industrial needs and challenges in high-performance computing&lt;br /&gt;
** Propose action plans to add to the development roadmaps of the CFD codes&lt;br /&gt;
* Organizers &lt;br /&gt;
** Guillaume Balarac (LEGI), Simon Mendez (IMAG), Pierre Bénard, Vincent Moureau, Léa Voivenel (CORIA). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ecfd8.png|600px|link=https://ecfd.coria-cfd.fr/index.php/Ecfd:ecfd_8th_edition]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Acknowledgments_ecfd8.png|text-bottom|600px]]&lt;br /&gt;
&lt;br /&gt;
== News ==&lt;br /&gt;
&lt;br /&gt;
* 23/10/2024: First announcement of the '''8th Extreme CFD Workshop &amp;amp; Hackathon'''!&lt;br /&gt;
* 22/11/2024: Deadline to submit your project&lt;br /&gt;
&lt;br /&gt;
== Thematics / Mini-workshops ==&lt;br /&gt;
&lt;br /&gt;
The list of mini-workshops may change and cover more or fewer topics. This page will be adapted according to your feedback.&lt;br /&gt;
&lt;br /&gt;
To come...&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
=== Hackathon GENCI - P. Begou, LEGI ===&lt;br /&gt;
This ECFD8 GENCI Hackathon was a rich event, involving 4 different CFD codes (AVBP, ParaDIGM, SONICS and YALES2) using various paradigms (C++/CUDA/HIP, Fortran/OpenMP/OpenACC) with several SDKs (AMD, Cray/HPE, Nvidia, GNU) on a large range of GPU architectures (Nvidia A100, GH100, AMD Instinct MI210, MI250, MI300). This two-week event benefited from high-level support from three HPC mentors, two on-site from AMD (J. Noudohouenou and A. Tsetoglou) and one remote from CINES (M. Boudaoud). &lt;br /&gt;
&lt;br /&gt;
==== H1 - ParaDIGM and SONICS on GPU, B. Maugars, G. Staffelbach, R.Cazalbou and B. Michel (ONERA)====&lt;br /&gt;
&lt;br /&gt;
==== H2 - AVBP GPU offloading based on OpenMP, M.Ghenai, L. Legaux and A. Dauptain (CERFACS) ====&lt;br /&gt;
 &lt;br /&gt;
==== H3 - YALES2 GPU from OpenACC to OpenMP, P. Bégou (LEGI), V. Moureau, G. Lartigue (CORIA) and R. Dubois (IMAG) ====&lt;br /&gt;
This Hackathon focuses on running the YALES2 code on the AMD Instinct MI250 and MI300 GPUs of the Adastra supercomputer (CINES).&lt;br /&gt;
Previously, a first solver of the YALES2 CFD code was successfully ported to the GPU accelerators of the Jean-Zay supercomputer (IDRIS) using the Nvidia SDK, but difficulties remain on the Adastra AMD GPUs, mainly related to the available development tools. High compilation times and the impossibility to use debug flags at compile time as soon as OpenACC is enabled are a real challenge when tracking errors. The current project is to evaluate a version of the AMD Fortran compiler freshly deployed at the beginning of the workshop. This requires moving to the OpenMP paradigm, starting from scratch since the OpenACC branch has largely diverged from the master one, while tracking spurious remaining bugs.&lt;br /&gt;
While the AMD compiler is able to build the CPU version of YALES2 &amp;quot;out of the box&amp;quot; (which is not the case for Cray Fortran), the compilation time for each file is significantly higher. However, setting up a two-stage dynamic compilation process allows for a degree of parallelism that is not possible with Cray Fortran 18, and the library build time drops from nearly 2 hours (Cray Fortran 18) to 17 minutes (AMD Fortran compiler).&lt;br /&gt;
Large kernels have been ported from OpenACC to OpenMP, raising some difficulties when offloading intrinsic functions or using structure attributes in kernel loops. These limitations were also known from the previous OpenACC work. The goal was mainly to check the correctness of the results. The offloading of the complex data structures of the YALES2 code was then investigated. Here again, some limitations of the &amp;quot;young&amp;quot; compiler were discovered and workarounds were implemented. Several reproducers were built during this ECFD8 and provided to the developers by the two on-site AMD engineers.&lt;br /&gt;
Preliminary tests on micro-applications show good performance of the generated binaries, proving that this compiler could be a serious alternative on AMD GPUs. The goal is now to focus on this SDK in an OpenMP strategy while checking the portability of this new implementation in the Nvidia, Cray/HPE (and GNU?) environments.&lt;br /&gt;
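&lt;br /&gt;
To illustrate the directive translation at stake, a minimal sketch is given below (the arrays and the kernel itself are illustrative, not actual YALES2 code):&lt;br /&gt;
 ! Minimal sketch of the OpenACC to OpenMP translation pattern.&lt;br /&gt;
 subroutine update_nodes_acc(n, vol, rhs, phi)&lt;br /&gt;
   integer, intent(in) :: n&lt;br /&gt;
   real(8), intent(in) :: vol(n), rhs(n)&lt;br /&gt;
   real(8), intent(inout) :: phi(n)&lt;br /&gt;
   integer :: i&lt;br /&gt;
   !$acc parallel loop present(vol, rhs, phi)&lt;br /&gt;
   do i = 1, n&lt;br /&gt;
     phi(i) = phi(i) + rhs(i)/vol(i)&lt;br /&gt;
   end do&lt;br /&gt;
 end subroutine update_nodes_acc&lt;br /&gt;
&lt;br /&gt;
 subroutine update_nodes_omp(n, vol, rhs, phi)&lt;br /&gt;
   integer, intent(in) :: n&lt;br /&gt;
   real(8), intent(in) :: vol(n), rhs(n)&lt;br /&gt;
   real(8), intent(inout) :: phi(n)&lt;br /&gt;
   integer :: i&lt;br /&gt;
   !$omp target teams distribute parallel do map(to: vol, rhs) map(tofrom: phi)&lt;br /&gt;
   do i = 1, n&lt;br /&gt;
     phi(i) = phi(i) + rhs(i)/vol(i)&lt;br /&gt;
   end do&lt;br /&gt;
 end subroutine update_nodes_omp&lt;br /&gt;
The difficulties mentioned above (intrinsic functions and structure attributes inside kernel loops) appear as soon as the loop body is less regular than this one.&lt;br /&gt;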
&lt;br /&gt;
=== Mesh adaptation - A. Grenouilloux, ONERA &amp;amp; G. Balarac, LEGI ===&lt;br /&gt;
&lt;br /&gt;
=== Numerics - M. Bernard, LEGI &amp;amp; G. Lartigue, CORIA ===&lt;br /&gt;
&lt;br /&gt;
==== N1 - Traction open boundary condition  ====&lt;br /&gt;
&lt;br /&gt;
==== N2 - Treatment of Inlet conditions in High-Order solver. M. Bernard (LEGI), Ghislain Lartigue (CORIA), Guillaume Balarac (LEGI) ====&lt;br /&gt;
In the context of the node-centered Finite Volume Method, the spatial accuracy of a numerical scheme depends on the ability to accurately evaluate the fluxes through the interfaces of each control volume (CV). Such an accurate evaluation is not straightforward, especially when dealing with distorted grids. This project follows the work of [1], where the fluxes use pointwise quantities, which are reconstructed from the integrated quantities advanced in time. During the previous edition of the ECFD, a new data structure was developed to store data at the location of the boundary-condition facelets, with application to wall boundary conditions. During this 8th edition of the ECFD, we used the same data structure, but dedicated to the treatment of inlet conditions.&lt;br /&gt;
The inlet condition is then either imposed directly at the facelet centers, or imposed at the node positions and then extrapolated to the facelet centers by means of a Taylor expansion. For the latter solution, the high-order treatment requires the successive derivatives to be computed in the plane of the boundary condition. This is not done yet, leading for the moment to low-accuracy results, but the framework is ready for the upcoming implementation.&lt;br /&gt;
&lt;br /&gt;
[1] ''A framework to perform high-order deconvolution for finite-volume method on simplicial meshes, Bernard et al., IJNMF 2020''&lt;br /&gt;
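&lt;br /&gt;
As a minimal sketch of the extrapolation step (the names are illustrative, not the actual data structure), the first-order Taylor expansion from a node to a facelet center reads:&lt;br /&gt;
 ! Hedged sketch: extrapolate a nodal value u_n to the facelet center xf&lt;br /&gt;
 ! with a Taylor expansion, u_f = u_n + grad(u)_n . (xf - xn).&lt;br /&gt;
 ! Higher-order terms would require the in-plane successive derivatives&lt;br /&gt;
 ! mentioned above, which are not computed yet.&lt;br /&gt;
 pure function facelet_value(u_n, grad_u_n, xn, xf) result(u_f)&lt;br /&gt;
   real(8), intent(in) :: u_n, grad_u_n(3), xn(3), xf(3)&lt;br /&gt;
   real(8) :: u_f&lt;br /&gt;
   u_f = u_n + dot_product(grad_u_n, xf - xn)&lt;br /&gt;
 end function facelet_value&lt;br /&gt;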
&lt;br /&gt;
==== N3 - Conservative mesh-to-mesh interpolation. M. Bernard (LEGI), Ghislain Lartigue (CORIA), Guillaume Balarac (LEGI) ====&lt;br /&gt;
&lt;br /&gt;
Mesh-to-mesh interpolations occur quite often in CFD simulations: in the context of adaptive mesh convergence studies or in the case of dynamic mesh adaptation, for example.&lt;br /&gt;
The quality of the solution on the destination grid depends on the characteristics of the interpolation method.&lt;br /&gt;
In this project, we did not focus on the accuracy of the interpolation method but rather on its conservativity characteristics.&lt;br /&gt;
A conservative interpolation ensures that the integral of the data on the source grid is exactly retrieved on the destination grid. &lt;br /&gt;
This property is highly interesting when dealing with scalar quantities or phase indicators, whose values should remain bounded.&lt;br /&gt;
In the context of node-centered Finite Volume schemes, the methodology we used consists in (i) reconstructing element quantities from the averaged nodal quantities on the source grid.&lt;br /&gt;
Then, for each cell of the destination mesh, (ii) computing the geometrical intersection between the cells of the source and destination meshes to evaluate the share of the quantities they exchange. &lt;br /&gt;
Eventually, (iii) redistributing the solution from the elements to the control volumes of the destination mesh.&lt;br /&gt;
The overall process is fully conservative as it is based on the geometrical intersection of locally integrated quantities.&lt;br /&gt;
The methodology has been implemented and tested on a few basic configurations, and the conservativity is retrieved.&lt;br /&gt;
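&lt;br /&gt;
A minimal 1D illustration of the conservativity argument is given below (a toy sketch, not the YALES2 implementation): destination cell averages are assembled from the exact geometric intersections with the source cells, so the global integral is preserved by construction.&lt;br /&gt;
 ! Hedged 1D sketch of conservative remapping between two grids.&lt;br /&gt;
 program conservative_remap_1d&lt;br /&gt;
   implicit none&lt;br /&gt;
   real(8) :: xs(0:4), us(4)   ! source grid faces and cell averages&lt;br /&gt;
   real(8) :: xd(0:3), ud(3)   ! destination grid faces and cell averages&lt;br /&gt;
   real(8) :: lo, hi, w&lt;br /&gt;
   integer :: i, j&lt;br /&gt;
   xs = [0.d0, 1.d0, 2.d0, 3.d0, 4.d0]&lt;br /&gt;
   us = [1.d0, 3.d0, 2.d0, 5.d0]&lt;br /&gt;
   xd = [0.d0, 1.5d0, 2.5d0, 4.d0]&lt;br /&gt;
   ud = 0.d0&lt;br /&gt;
   do j = 1, 3&lt;br /&gt;
     do i = 1, 4&lt;br /&gt;
       lo = max(xs(i-1), xd(j-1)); hi = min(xs(i), xd(j))&lt;br /&gt;
       w  = max(0.d0, hi - lo)   ! measure of the geometric intersection&lt;br /&gt;
       ud(j) = ud(j) + w*us(i)&lt;br /&gt;
     end do&lt;br /&gt;
     ud(j) = ud(j)/(xd(j) - xd(j-1))&lt;br /&gt;
   end do&lt;br /&gt;
   ! both integrals (average times cell size) are identical&lt;br /&gt;
   print *, sum(us*(xs(1:4)-xs(0:3))), sum(ud*(xd(1:3)-xd(0:2)))&lt;br /&gt;
 end program conservative_remap_1d&lt;br /&gt;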
&lt;br /&gt;
&lt;br /&gt;
==== N4 - Determination of timestep in semi-implicit solver. T. Berthelon (LEGI), G. Balarac (LEGI), H. Lam (LEGI), M. El Moatamid (CORIA) ====&lt;br /&gt;
In order to reduce the computation time associated with incompressible LES simulations, an implicit time integration, based on BDF schemes, has been developed within the ICS solver. This integration eliminates the stability constraints associated with explicit schemes, and therefore opens up the question of the appropriate choice of time step. &lt;br /&gt;
In parallel, recent work has been carried out on meshing criteria in LES. The strategy in place consists of adapting the mesh by distinguishing two zones:&lt;br /&gt;
* &amp;quot;DNS&amp;quot; zones, where the meshing criterion is based on an estimate of the non-dimensionalized spatial error.&lt;br /&gt;
* &amp;quot;LES&amp;quot; zones, where the meshing criterion is based on Kolmogorov theory.&lt;br /&gt;
During this project, the spatial criteria were extended to include temporal criteria. In the &amp;quot;DNS&amp;quot; zones, the time step is chosen using an estimate of the temporal error of the BDF scheme judiciously scaled to match the spatial error. In the &amp;quot;LES&amp;quot; zones, the time step is chosen using a scaling law associated with fully developed turbulence.&lt;br /&gt;
The new time step selection strategy has been tested on the case of a turbulent jet and leads to an accuracy equivalent to the explicit case while reducing the simulation return time by a factor of nearly 3.&lt;br /&gt;
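&lt;br /&gt;
A minimal sketch of the zone-dependent selection could look like the following (names and scalings are illustrative; the actual criteria are those described above):&lt;br /&gt;
 ! Hedged sketch of the time step selection. In DNS zones, the BDF temporal&lt;br /&gt;
 ! error model err_t ~ c_bdf*dt**p is inverted to match the spatial error&lt;br /&gt;
 ! target; in LES zones, a turbulence scaling law sets the time step.&lt;br /&gt;
 function select_dt(is_les, err_target, c_bdf, p, dx, u_prime) result(dt)&lt;br /&gt;
   logical, intent(in) :: is_les&lt;br /&gt;
   real(8), intent(in) :: err_target   ! target non-dimensional error&lt;br /&gt;
   real(8), intent(in) :: c_bdf        ! estimated BDF error constant&lt;br /&gt;
   integer, intent(in) :: p            ! order of the BDF scheme&lt;br /&gt;
   real(8), intent(in) :: dx, u_prime  ! local mesh size, velocity scale&lt;br /&gt;
   real(8) :: dt&lt;br /&gt;
   if (is_les) then&lt;br /&gt;
     dt = dx/u_prime                   ! eddy-turnover-like scaling (assumption)&lt;br /&gt;
   else&lt;br /&gt;
     dt = (err_target/c_bdf)**(1.d0/p) ! invert the temporal error estimate&lt;br /&gt;
   end if&lt;br /&gt;
 end function select_dt&lt;br /&gt;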
&lt;br /&gt;
Another aspect of this project was to integrate certain implicit temporal schemes (Crank-Nicolson and SDIRK) recently developed by M. El Moatamid into the incompressible solver.&lt;br /&gt;
&lt;br /&gt;
==== N5 - Local timestep. T. Berthelon (LEGI), M. Bernard (LEGI), G. Balarac (LEGI) ====&lt;br /&gt;
RANS modelling has recently been developed within the YALES2 library. With this modelling strategy, the objective is to reach a steady state as quickly as possible.&lt;br /&gt;
During this project, we investigated the use of a local time step to reduce the time to solution of steady computations in the incompressible solver. &lt;br /&gt;
This implies solving a variable-coefficient Poisson equation. Encouraging results were obtained in the simple case of a plane Couette flow artificially constrained by a mesh variation. Indeed, the use of a local time step drastically reduces the time to solution on this configuration. The method now needs to be tested on a real RANS case.&lt;br /&gt;
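&lt;br /&gt;
As a minimal 1D sketch of why the Poisson equation becomes variable-coefficient (illustrative only): with a local time step dt(i), the projection step leads to d/dx( dt dp/dx ) = rhs, discretized here with face-averaged coefficients and one Jacobi sweep.&lt;br /&gt;
 ! Hedged 1D sketch: one Jacobi sweep on the variable-coefficient Poisson&lt;br /&gt;
 ! problem arising from a local time step dt(i), on a grid of spacing h.&lt;br /&gt;
 subroutine jacobi_sweep(n, h, dt, rhs, p, pnew)&lt;br /&gt;
   integer, intent(in) :: n&lt;br /&gt;
   real(8), intent(in) :: h, dt(n), rhs(n), p(n)&lt;br /&gt;
   real(8), intent(out) :: pnew(n)&lt;br /&gt;
   real(8) :: ae, aw&lt;br /&gt;
   integer :: i&lt;br /&gt;
   do i = 2, n-1&lt;br /&gt;
     ae = 0.5d0*(dt(i)+dt(i+1))/h**2   ! face coefficient, east&lt;br /&gt;
     aw = 0.5d0*(dt(i)+dt(i-1))/h**2   ! face coefficient, west&lt;br /&gt;
     pnew(i) = (ae*p(i+1) + aw*p(i-1) - rhs(i))/(ae + aw)&lt;br /&gt;
   end do&lt;br /&gt;
   pnew(1) = pnew(2); pnew(n) = pnew(n-1)   ! homogeneous Neumann ends&lt;br /&gt;
 end subroutine jacobi_sweep&lt;br /&gt;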
&lt;br /&gt;
==== N7 - Implicit time advancement for low-Reynolds number flows with particles. S. Mendez, C. Raveleau (IMAG), M. El Moatamid, V. Moureau (CORIA) ====&lt;br /&gt;
IMAG runs numerous simulations of red blood cells under flow. Those simulations are at low Reynolds numbers (0.001 to 1.0, typically). A splitting of the time advancement is used to treat the diffusion terms implicitly, albeit with an important numerical cost: implicit diffusion accounts for 50 to 60% of the computational cost. Recently, M. El Moatamid implemented a general framework to deal with implicit time advancement for scalars. In this project, the general method has been transposed to the advancement of the velocity field in the ICS and RBC solvers of YALES2/YALES2BIO. This enables testing various linear solvers (GMRES-based). However, such solvers do not decrease the CPU time compared to the existing implementation. While working on this, though, it was identified that residual recycling was not activated in the current implementation of the implicit diffusion. Activating it sped up the implicit diffusion by 35%, for a total gain of 20% for the computation. In addition to this achievement, moving to the framework coded by M. El Moatamid will have other beneficial side effects: we anticipate simplifying the implementation, with an easier merging between YALES2BIO and YALES2. The method will also be implemented in the electrostatic solver, for which the Poisson problem should benefit from the new GMRES-based solvers. In addition, this project highlights the importance of improving the treatment of stiff source terms in the red blood cell simulations, to be able to overcome the current limitation in time step due to those terms and have a chance to benefit from higher-order time schemes, efficient at high Fourier numbers.&lt;br /&gt;
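&lt;br /&gt;
The recycling effect can be illustrated with a toy fixed-point solve (a sketch, unrelated to the actual GMRES implementation): when the right-hand side varies slowly between time steps, starting from the previous solution converges much faster than starting from zero.&lt;br /&gt;
 ! Hedged toy demonstration of solution recycling between time steps.&lt;br /&gt;
 program recycle_demo&lt;br /&gt;
   implicit none&lt;br /&gt;
   integer, parameter :: n = 64&lt;br /&gt;
   real(8) :: b1(n), b2(n), x(n)&lt;br /&gt;
   integer :: it1, it2&lt;br /&gt;
   call random_number(b1)&lt;br /&gt;
   b2 = b1 + 1.d-3            ! slowly varying RHS, as in implicit diffusion&lt;br /&gt;
   x = 0.d0&lt;br /&gt;
   call jacobi(b1, x, it1)    ! cold start from zero&lt;br /&gt;
   call jacobi(b2, x, it2)    ! warm start from the previous solution&lt;br /&gt;
   print *, 'cold:', it1, '  warm:', it2&lt;br /&gt;
 contains&lt;br /&gt;
   subroutine jacobi(b, x, iters)&lt;br /&gt;
     ! Jacobi iterations on the SPD system 2.5*x_i - x_(i-1) - x_(i+1) = b_i&lt;br /&gt;
     real(8), intent(in) :: b(n)&lt;br /&gt;
     real(8), intent(inout) :: x(n)&lt;br /&gt;
     integer, intent(out) :: iters&lt;br /&gt;
     real(8) :: xn(n), r&lt;br /&gt;
     iters = 0&lt;br /&gt;
     do&lt;br /&gt;
       xn(1) = (b(1) + x(2))/2.5d0&lt;br /&gt;
       xn(n) = (b(n) + x(n-1))/2.5d0&lt;br /&gt;
       xn(2:n-1) = (b(2:n-1) + x(1:n-2) + x(3:n))/2.5d0&lt;br /&gt;
       r = maxval(abs(xn - x)); x = xn; iters = iters + 1&lt;br /&gt;
       if (r &amp;lt; 1.d-10) exit&lt;br /&gt;
     end do&lt;br /&gt;
   end subroutine jacobi&lt;br /&gt;
 end program recycle_demo&lt;br /&gt;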
&lt;br /&gt;
=== Turbulence - L. Voivenel, CORIA &amp;amp; P. Bénard, CORIA ===&lt;br /&gt;
&lt;br /&gt;
==== T1 - FSI-1D strategy for internal flows====&lt;br /&gt;
&lt;br /&gt;
==== T2 - Dynamic Smagorinsky in Dorothy ====&lt;br /&gt;
&lt;br /&gt;
==== T3 - Turbulence injection strategy for compressible flows ====&lt;br /&gt;
&lt;br /&gt;
==== T4 - Improve wind farm modeling and simulation workflow ====&lt;br /&gt;
The YALES2 library includes an advanced modular implementation of the Actuator Line Method (ALM). This approach remains state-of-the-art when performing an LES-based analysis of a wind turbine wake. The method also provides an accurate assessment of the aerodynamic loads applied on the turbine. Still, applying this method to investigate a wind farm flow can be challenging, both in terms of computational cost and simulation setup. For instance, inadequate management of the individual wind turbine modeling parts in an HPC context can end up being the main bottleneck of the simulation. From another perspective, a wind farm is usually composed of more than 50 wind turbines. For such a case, setting up all the required YALES2 inputs manually can be very tedious and error-prone. This project thus mainly aimed to optimize the YALES2 ALM implementation and the user experience around it. Additionally, a cost-effective alternative to the ALM when modeling wind farm flows, namely the Rotating Actuator Disk Method (ADM-R), has been implemented for further investigations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WP1''': Improve Actuator set rotor modelling&lt;br /&gt;
* Parallel processing of the ''actuator sets'' used to model the wind turbines&lt;br /&gt;
  (Felix)&lt;br /&gt;
&lt;br /&gt;
* Rotating Actuator Disk Model (ADM-R):&lt;br /&gt;
According to the usual guidelines, the mesh requirements of the ALM, to benefit fully from its achievable accuracy, can be difficult to achieve or even unaffordable when simulating a wind farm flow, especially from the industrial point of view. Alternatives are available in the literature for this kind of application; the methods from the Actuator Disk family are likely the most prominent ones. Several kinds of implementation exist, which mostly differ by their capability to include the wake rotation. During the workshop, a new method of the Rotating Actuator Disk kind has been implemented and has undergone an early validation on a single-turbine setup; see the sketch below. Applications to wind farm flows will follow. &lt;br /&gt;
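&lt;br /&gt;
A minimal sketch of the force projection behind such methods (illustrative names; the actual ADM-R computes the axial and tangential force densities from blade-element data):&lt;br /&gt;
 ! Hedged sketch: body force exerted by a rotating actuator disk at a grid&lt;br /&gt;
 ! node, regularized across the disk plane by a 1D Gaussian kernel of&lt;br /&gt;
 ! width eps. The tangential component models the wake rotation.&lt;br /&gt;
 function disk_force_at_node(xnode, xdisk, ndisk, radius, eps, f_ax, f_tg) result(f)&lt;br /&gt;
   real(8), intent(in) :: xnode(3), xdisk(3), ndisk(3) ! node, disk center, unit axis&lt;br /&gt;
   real(8), intent(in) :: radius, eps, f_ax, f_tg      ! geometry and force densities&lt;br /&gt;
   real(8) :: f(3), d(3), dn, dr(3), rhat(3), tang(3), eta&lt;br /&gt;
   d  = xnode - xdisk&lt;br /&gt;
   dn = dot_product(d, ndisk)       ! signed distance to the disk plane&lt;br /&gt;
   dr = d - dn*ndisk                ! in-plane offset from the axis&lt;br /&gt;
   eta = exp(-(dn/eps)**2)/(eps*sqrt(acos(-1.d0)))   ! Gaussian regularization&lt;br /&gt;
   f = 0.d0&lt;br /&gt;
   if (norm2(dr) &amp;gt; 0.d0 .and. norm2(dr) &amp;lt;= radius) then&lt;br /&gt;
     rhat = dr/norm2(dr)&lt;br /&gt;
     tang(1) = ndisk(2)*rhat(3) - ndisk(3)*rhat(2)   ! axis x radial direction&lt;br /&gt;
     tang(2) = ndisk(3)*rhat(1) - ndisk(1)*rhat(3)&lt;br /&gt;
     tang(3) = ndisk(1)*rhat(2) - ndisk(2)*rhat(1)&lt;br /&gt;
     f = eta*(f_ax*ndisk + f_tg*tang)&lt;br /&gt;
   end if&lt;br /&gt;
 end function disk_force_at_node&lt;br /&gt;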
&lt;br /&gt;
&lt;br /&gt;
'''WP2''': Improve tools User Experience&lt;br /&gt;
&lt;br /&gt;
Three Python tools have been developed or improved:&lt;br /&gt;
* The first tool is the wind farm previsualization tool, 'y2_wind_previsualization', which is used before the calculation run. It provides an interactive HTML interface for viewing global data for each turbine of the farm (position, hub height, yaw angle, etc.). The tool plots all of these by parsing the input file. &lt;br /&gt;
* The second tool duplicates rotor templates for a wind farm (`y2_wind_duplication`). This tool was developed during the previous ECFD, but it has now been refactored and incorporated into the y2tools package.&lt;br /&gt;
* The third and final tool is a post-processing tool for the temporal processing of global wind turbine simulation metrics (thrust, power, etc.), `y2_post_wind`. It generates an interactive HTML plot of time-dependent global quantities.&lt;br /&gt;
&lt;br /&gt;
==== T5 - Improve atmospheric inflow turbulence ====&lt;br /&gt;
Atmospheric inflow turbulence is generated using the precursor database method. A half-channel flow driven by a pressure gradient is used to generate the inflow, which is then used as the inlet boundary condition for the wind turbine simulation domain. This project aimed at improving the whole methodology, from generation to injection.&lt;br /&gt;
&lt;br /&gt;
* WP1: Improve inflow generation&lt;br /&gt;
Anand: pressure controller&lt;br /&gt;
&lt;br /&gt;
* WP2: Improve injection methodology (method A)&lt;br /&gt;
The previous workflow used plane probes in ASCII format to sample the flow. The COWIT2 toolbox was used to convert the files into a turbulence box (.man format). While functional, this methodology had two major flaws. First, the probe files are heavy, ~O(10 GB). Second, the method requires a lot of human effort, introducing numerous sources of error.&lt;br /&gt;
During this workshop, a new methodology has been developed. First, the probes are generated using the HDF5 format (now available for all probe types), leading to lighter files, ~O(1 GB). Second, Y2_tools is used to read the HDF5 format (working for probes and temporals). The HDF5 file is then converted into a look-up table. Finally, the look-up table is read directly by YALES2 as a boundary condition.&lt;br /&gt;
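&lt;br /&gt;
A minimal sketch of how such a look-up table may be sampled at the boundary (the layout and names are assumptions, not the actual YALES2 reader):&lt;br /&gt;
 ! Hedged sketch: linear interpolation in time of precomputed inflow planes.&lt;br /&gt;
 ! u_lut(1:3, 1:npts, 1:ntimes) holds the velocity planes at instants t_lut(:).&lt;br /&gt;
 subroutine sample_inflow(t, t_lut, u_lut, u_bc)&lt;br /&gt;
   real(8), intent(in) :: t, t_lut(:), u_lut(:,:,:)&lt;br /&gt;
   real(8), intent(out) :: u_bc(:,:)&lt;br /&gt;
   integer :: k&lt;br /&gt;
   real(8) :: w&lt;br /&gt;
   k = max(1, min(size(t_lut)-1, count(t_lut &amp;lt;= t)))  ! bracketing interval&lt;br /&gt;
   w = (t - t_lut(k))/(t_lut(k+1) - t_lut(k))           ! linear weight in time&lt;br /&gt;
   u_bc = (1.d0 - w)*u_lut(:,:,k) + w*u_lut(:,:,k+1)&lt;br /&gt;
 end subroutine sample_inflow&lt;br /&gt;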
&lt;br /&gt;
* WP3: Improve injection methodology (method B)&lt;br /&gt;
Even though the improvements achieved in WP2 prove to be very handy and remove many potential human errors, injecting a turbulent inflow through wind boxes (the 'offline' precursor approach) can sometimes remain cumbersome for several reasons: (1) no periodicity is enforced in the streamwise direction of those boxes, (2) the memory consumption can be high, and (3) the boxes need to be moved to other cores whenever a mesh adaptation occurs. An alternative consists in co-simulating the precursor flow and the flow of interest (referred to as the 'successor' simulation) at the same time (the 'online' precursor approach). The inlet boundary condition for the successor flow is then obtained by mapping the outflow of the precursor domain. During the workshop, some work has been initiated to implement this kind of coupling using the CWIPI library, for which YALES2 already provides an interface.&lt;br /&gt;
&lt;br /&gt;
==== T6 - FSI model in Dorothy ====&lt;br /&gt;
&lt;br /&gt;
=== Two Phase Flow - J. Leparoux, SAFRAN &amp;amp; J. Carmona, CORIA ===&lt;br /&gt;
&lt;br /&gt;
==== TP1 - Towards very small contact angles in Nucleate boiling ====&lt;br /&gt;
&lt;br /&gt;
Participants: Henri Lam (LEGI), Mohammad Umair (LEGI), Manuel Bernard (LEGI), Robin Barbera (LEGI) and Giovanni Ghigliotti (LPSC)&lt;br /&gt;
&lt;br /&gt;
==== TP2 - Modeling spray-film interactions ====&lt;br /&gt;
&lt;br /&gt;
Participants: Nicolas Gasnier (EM2C-SafranTech), Julien Leparoux (SafranTech), Mehdi Helal (CORIA-SafranTech) and Julien Carmona (CORIA)&lt;br /&gt;
&lt;br /&gt;
==== TP3 - High-fidelity two-phase flow simulations of the purge of a fuel feed line ====&lt;br /&gt;
&lt;br /&gt;
Participants: Thomas LAROCHE (Safran HE), Romain JANODET (Safran AE), Julien Leparoux (Safran Tech) and Melody Cailler (Safran Tech)&lt;br /&gt;
&lt;br /&gt;
==== TP4 - Volume of Fluid solver in YALES2 ====&lt;br /&gt;
&lt;br /&gt;
Participants: Léa Voivenel (CORIA), Julien Carmona (CORIA), Mehdi Helal (CORIA), Pierre Portais (CORIA), Julien Leparoux (Safran Tech), Mélody Cailler (Safran Tech) and Nicolas Gasnier (EM2C / Safran Tech)&lt;br /&gt;
&lt;br /&gt;
==== TP5 - Implement a local operator to distribute the solid volume of a particle over multiple cells ====&lt;br /&gt;
&lt;br /&gt;
Participants: Théo Ndereyimana (Université de Sherbrooke), Stéphane Moreau (Université de Sherbrooke)&lt;br /&gt;
&lt;br /&gt;
==== TP6 - Complex thermodynamics in sloshing tanks ====&lt;br /&gt;
&lt;br /&gt;
Participants: C. Merlin (AGS), D. Fouquet (CORIA), V. Moureau (CORIA), J. Carmona (CORIA) and G. Lartigue (CORIA)&lt;br /&gt;
&lt;br /&gt;
=== Combustion - Y. Bechane, CORIA &amp;amp; S. Dillon, SAFRAN &amp;amp; K. Bioche, CORIA ===&lt;br /&gt;
&lt;br /&gt;
==== C1 - LES of the thermal degradation of a composite material ====&lt;br /&gt;
Participants: A. Grenouilloux (ONERA), K. Bioche (CORIA), N. Dellinger (ONERA) and R. Letournel (SafranTech)&lt;br /&gt;
&lt;br /&gt;
==== C2 - Flame stabilization by NRP plasma discharge ====&lt;br /&gt;
&lt;br /&gt;
==== C3 - Extending and validating a generalized formalism of virtual chemistry ====&lt;br /&gt;
&lt;br /&gt;
==== C4 - Turbulent combustion model for NOx prediction ====&lt;br /&gt;
&lt;br /&gt;
==== C5 - Towards 3D simulation of detonation combustion ====&lt;br /&gt;
&lt;br /&gt;
==== C6 - Flame stability of flame-holders under reheat conditions ====&lt;br /&gt;
&lt;br /&gt;
==== C7 - Thermal radiation in oxyflames ====&lt;br /&gt;
&lt;br /&gt;
==== C8 - A first step toward hybrid CPU / GPU for reactive flow in YALES2 ====&lt;br /&gt;
&lt;br /&gt;
Participants: M. Laignel (CORIA), G. Lartigue (CORIA), K. Bioche (CORIA) and V. Moureau (CORIA)&lt;br /&gt;
&lt;br /&gt;
In numerical simulations of reacting flows, one of the most computationally intensive tasks is the evaluation of the source terms resulting from chemical reactions in the species transport equations. This step can account for up to 90% of the total simulation cost, depending on the complexity of the kinetic mechanism involved. To reduce this cost, various techniques such as mechanism reduction, virtual chemistry, etc. have been explored. However, the emergence of GPUs as powerful accelerators offers a promising alternative by providing massive parallelism. Despite their potential, GPUs often require a significant adaptation of CPU-based codes. This project aims to address this challenge by taking a first step towards a hybrid CPU/GPU framework for reactive flow simulations. Specifically, the focus is on coupling Y2 with the updated version of the stiff time integration solver (CVODE), which is compatible with GPUs (CUDA, HIP, OpenMP). The ultimate goal is to establish a foundation for hybrid computations by implementing and testing the updated solver on simplified test cases.&lt;br /&gt;
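&lt;br /&gt;
A minimal sketch of the intended hybrid layout (the per-cell integrator below is a toy stand-in for the CVODE call; in practice the GPU-enabled SUNDIALS vectors batch many cells per kernel):&lt;br /&gt;
 module chem_sketch&lt;br /&gt;
   implicit none&lt;br /&gt;
 contains&lt;br /&gt;
   subroutine integrate_cell(nspec, dt, y)&lt;br /&gt;
     !$omp declare target&lt;br /&gt;
     ! Toy stand-in for the stiff integrator (CVODE in the actual project):&lt;br /&gt;
     ! a single linear-decay step, for illustration only.&lt;br /&gt;
     integer, intent(in) :: nspec&lt;br /&gt;
     real(8), intent(in) :: dt&lt;br /&gt;
     real(8), intent(inout) :: y(nspec)&lt;br /&gt;
     y = y*exp(-dt)&lt;br /&gt;
   end subroutine integrate_cell&lt;br /&gt;
&lt;br /&gt;
   subroutine advance_chemistry(ncell, nspec, dt, y)&lt;br /&gt;
     integer, intent(in) :: ncell, nspec&lt;br /&gt;
     real(8), intent(in) :: dt&lt;br /&gt;
     real(8), intent(inout) :: y(nspec, ncell)&lt;br /&gt;
     integer :: ic&lt;br /&gt;
     ! each GPU thread advances the species state of one cell over dt&lt;br /&gt;
     !$omp target teams distribute parallel do map(tofrom: y)&lt;br /&gt;
     do ic = 1, ncell&lt;br /&gt;
       call integrate_cell(nspec, dt, y(:, ic))&lt;br /&gt;
     end do&lt;br /&gt;
   end subroutine advance_chemistry&lt;br /&gt;
 end module chem_sketch&lt;br /&gt;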
&lt;br /&gt;
==== C9 - Soot numerical modeling ====&lt;br /&gt;
&lt;br /&gt;
==== C10 - TECERACT: Tabulated chemistry generator for aeronautical combustion ====&lt;br /&gt;
&lt;br /&gt;
==== C11 - Exploring efficient tabulation strategies for detailed chemistry ====&lt;br /&gt;
&lt;br /&gt;
==== C12 - Dynamic sub-grid-scale modelling of multi-regime flame wrinkling ====&lt;br /&gt;
&lt;br /&gt;
==== C13 - LES of a semi-industrial burner using a non-adiabatic virtual chemical scheme ====&lt;br /&gt;
&lt;br /&gt;
=== User Experience &amp;amp; Data -  L. Korzeczek, GDTECH ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== U1 - Low-fidelity (RANS) rotor/stator simulations, application to Kaplan Turbine - Y. Lakrifi, G. Balarac (LEGI),  R. Mercier (SAFRAN), V. Moureau (CORIA) ====&lt;br /&gt;
&lt;br /&gt;
==== U2 - Coupling PyTorch/YALES2, combustion cartesian look-up tables - J. Leparoux, N. Treleaven, S. Dillon (SAFRAN), K. Bioche, G. Lartigue (CORIA) ====&lt;br /&gt;
&lt;br /&gt;
==== U3 - Yales2 Trame Editor, toward a fully featured graphical user interface for YALES2 - L. Korzeczek, S. Meynet (GDTECH), J. Leparoux, M. Cailler (SAFRAN) ====&lt;br /&gt;
&lt;br /&gt;
Participants: Julien Leparoux (Safran Tech), Kévin Bioche (CORIA), Ghislain Lartigue (CORIA), Nicholas Treleaven (Safran Tech)&lt;br /&gt;
&lt;br /&gt;
Neural networks offer a promising alternative to Cartesian look-up tables for combustion simulations, reducing the memory footprint. In this project, we investigated how to integrate an NN model for real-time inference in the YALES2 platform, exploring two approaches: a Python interface and a Fortran Torch binding (using FTorch[https://github.com/Cambridge-ICCS/FTorch]). We validated that the model remains accurate when embedded online and identified improvements for robustness. Inference costs were evaluated on a Mac M3 and on the Austral cluster, revealing a strong dependency on the data volume. To optimize efficiency, we propose grouping cells at the processor level.&lt;br /&gt;
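&lt;br /&gt;
A minimal sketch of the proposed grouping (nn_infer_batch is a hypothetical wrapper around the Python or FTorch binding, not an existing routine):&lt;br /&gt;
 ! Hedged sketch: one batched inference call per processor instead of one&lt;br /&gt;
 ! call per cell, to amortize the per-call overhead observed in the tests.&lt;br /&gt;
 subroutine infer_grouped(ncell, nin, nout, features, results)&lt;br /&gt;
   integer, intent(in) :: ncell, nin, nout&lt;br /&gt;
   real(8), intent(in) :: features(nin, ncell)   ! packed inputs of all local cells&lt;br /&gt;
   real(8), intent(out) :: results(nout, ncell)  ! packed NN outputs&lt;br /&gt;
   call nn_infer_batch(features, results)        ! hypothetical binding wrapper&lt;br /&gt;
 end subroutine infer_grouped&lt;br /&gt;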
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--  Masqué&lt;br /&gt;
&lt;br /&gt;
== Communications related to ECFD8 ==&lt;br /&gt;
&lt;br /&gt;
=== Conferences ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Publications ===&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tberthelon</name></author>	</entry>

	<entry>
		<id>https://ecfd.coria-cfd.fr/index.php?title=Ecfd:ecfd_7th_edition&amp;diff=607</id>
		<title>Ecfd:ecfd 7th edition</title>
		<link rel="alternate" type="text/html" href="https://ecfd.coria-cfd.fr/index.php?title=Ecfd:ecfd_7th_edition&amp;diff=607"/>
				<updated>2024-02-05T14:50:30Z</updated>
		
		<summary type="html">&lt;p&gt;Tberthelon: /* Numerics - S. Mendez, IMAG &amp;amp; G. Balarac, LEGI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE: ECFD workshop, 7th edition, 2024}}&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| align=&amp;quot;right&amp;quot; style=&amp;quot;text-align:center;&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
| [[File:Logo_ECFD6.png | center | thumb | 350px | ECFD6 workshop logo.]]&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
* Event from '''22nd of January to 2nd of February 2024'''&lt;br /&gt;
* Location: [https://www.hotelclubdelaplage.com Hôtel Club de la Plage], Merville-Franceville, near Caen (14)&lt;br /&gt;
* Two types of sessions:&lt;br /&gt;
** common technical presentations: roadmaps, specific points&lt;br /&gt;
** mini-workshops. Potential workshops are listed below&lt;br /&gt;
* Free of charge&lt;br /&gt;
* More than 70 participants from academia, HPC centers/experts and industry.&lt;br /&gt;
&lt;br /&gt;
* Objectives &lt;br /&gt;
** Bring together experts in high-performance computing, applied mathematics and multi-physics CFD&lt;br /&gt;
** Identify the technological barriers of exaflopic CFD via numerical experiments&lt;br /&gt;
** Identify industrial needs and challenges in high-performance computing&lt;br /&gt;
** Propose action plans to add to the development roadmaps of the CFD codes&lt;br /&gt;
&lt;br /&gt;
[[File:ecfd7.png|600px|link=https://ecfd.coria-cfd.fr/index.php/Ecfd:ecfd_6th_edition]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:sponsor_ecfd7.png|text-bottom|600px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== News ==&lt;br /&gt;
&lt;br /&gt;
* 19/07/2022: First announcement of the '''6th Extreme CFD Workshop &amp;amp; Hackathon''' !&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Agenda ==&lt;br /&gt;
&lt;br /&gt;
[[File:agenda_ecfd7.png|text-bottom|600px]]&lt;br /&gt;
&lt;br /&gt;
== Thematics / Mini-workshops ==&lt;br /&gt;
&lt;br /&gt;
The list of mini-workshops may change and cover more or fewer topics. This page will be adapted according to your feedback.&lt;br /&gt;
&lt;br /&gt;
To come...&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
=== Hackathon GENCI - P. Begou, LEGI ===&lt;br /&gt;
The '''GENCI Hackathon''' will be devoted to porting two CFD codes to the MI250 GPUs of the Adastra supercomputer deployed by GENCI at CINES.&lt;br /&gt;
&lt;br /&gt;
For the '''YALES2''' code, the goal is to obtain a first reference version giving the expected results and then, if possible, to start its optimization to gain performance. The approach is OpenACC-based, with the objective of an implementation that is as unintrusive as possible in the existing code and that remains portable with the work done on the Nvidia GPUs of the Jean-Zay supercomputer at IDRIS.&lt;br /&gt;
&lt;br /&gt;
The porting of the '''AVBP''' code is more advanced with a prototype already functional on Adastra but &amp;quot;hard-coded&amp;quot;. The objective is to rationalize this first implementation, to integrate the latest developments in the code, to centralize memory management (host and device), to work on porting the Lagrangian part of the code and, of course, to improve the global performance.&lt;br /&gt;
&lt;br /&gt;
This Hackathon is supported by GENCI, HPE, AMD and CINES, with the presence on site of several development experts on AMD GPUs.&lt;br /&gt;
&lt;br /&gt;
=== Mesh adaptation - R. Letournel, Safran ===&lt;br /&gt;
&lt;br /&gt;
==== M1: ASMR for reheat chamber applications - Paul Pouech (CERFACS), Thibault Duranton, Luis Carbajal Carrasco (Safran) ====&lt;br /&gt;
&lt;br /&gt;
Combustion in reheat chambers features a wide range of length scales. Mesh refinement is thus mandatory to capture the flow characteristics within a reasonable CPU cost for LES computations using the AVBP code. The purpose of this project is to consolidate the mesh refinement criteria and strategy on an academic reference case. The retained workflow is supported by the [https://lemmings.readthedocs.io/en/latest/readme_copy.html Lemmings] code, which calls the Tékigô wrapper for the mesh adaptations. During the ECFD7, the convergence time needed to obtain significant distributions of the quantities of interest was analysed. An optimum runtime, based on a characteristic flow time-scale, was thus identified and led to a reduced running time for each adaptation step. As a second step, discussions with the ECFD7 participants led to the identification of interesting refinement criteria, namely the flame sensor or the Mach RMS for instance. A parametric analysis showed the robustness of the workflow based on a weighting of the different criteria, as sketched below. Finally, in order to facilitate the use of the workflow, efforts were made to improve the user experience by making it more human-readable.&lt;br /&gt;
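&lt;br /&gt;
A minimal sketch of such a criteria weighting (illustrative names; the actual workflow drives Tékigô with the AVBP fields):&lt;br /&gt;
 ! Hedged sketch: blend normalized refinement criteria with user weights.&lt;br /&gt;
 subroutine combined_criterion(n, flame_sensor, mach_rms, w_flame, w_mach, crit)&lt;br /&gt;
   integer, intent(in) :: n&lt;br /&gt;
   real(8), intent(in) :: flame_sensor(n), mach_rms(n), w_flame, w_mach&lt;br /&gt;
   real(8), intent(out) :: crit(n)&lt;br /&gt;
   ! normalize each criterion to [0,1] before weighting&lt;br /&gt;
   crit = w_flame*flame_sensor/max(maxval(flame_sensor), 1.d-12)&lt;br /&gt;
   crit = crit + w_mach*mach_rms/max(maxval(mach_rms), 1.d-12)&lt;br /&gt;
   crit = crit/max(maxval(crit), 1.d-12)   ! final refinement indicator in [0,1]&lt;br /&gt;
 end subroutine combined_criterion&lt;br /&gt;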
&lt;br /&gt;
==== M2: Parallel remeshing - B. Andrieu, C. Benazet, K. Hoogveld, B. Maugars, E. Quémerais (ONERA) ====&lt;br /&gt;
&lt;br /&gt;
Mesh adaptation is a crucial tool in order to automate industrial RANS numerical simulations. To meet this need, mesh adaptation must be carried out as quickly as possible by setting up an efficient, parallel solution. To this end, we have explored two avenues: a parallel edge-splitting algorithm that was recently initiated in the ParaDiGM library, and a solution based on [https://github.com/nasa/refine the refine library] for adapting meshes with an MPI implementation. On the one hand, we fixed several bugs in our split operator and validated it on test cases of increasing complexity with a node-centered solver. In addition, we added interfaces to refine so as to avoid using files and to call it directly in library mode. We also investigated geometric projection issues during the mesh adaptation procedure, notably by looking at solutions such as EGADS, which offers a simplified API for CAD interrogation. We finally implemented metric gradation (in serial), metric intersection and complexity computations. All the ingredients we tested give us a clearer picture of the entire mesh adaptation process.&lt;br /&gt;
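&lt;br /&gt;
As a minimal sketch of the split criterion typically used in metric-based adaptation (a common convention, not necessarily the exact ParaDiGM operator):&lt;br /&gt;
 ! Hedged sketch: an edge is a split candidate when its length measured in&lt;br /&gt;
 ! the metric exceeds sqrt(2), a usual threshold in metric-based remeshing.&lt;br /&gt;
 function edge_needs_split(xa, xb, metric) result(split)&lt;br /&gt;
   real(8), intent(in) :: xa(3), xb(3)   ! edge end points&lt;br /&gt;
   real(8), intent(in) :: metric(3,3)    ! SPD metric at the edge midpoint&lt;br /&gt;
   logical :: split&lt;br /&gt;
   real(8) :: e(3), lm&lt;br /&gt;
   e  = xb - xa&lt;br /&gt;
   lm = sqrt(dot_product(e, matmul(metric, e)))   ! edge length in the metric&lt;br /&gt;
   split = lm &amp;gt; sqrt(2.d0)&lt;br /&gt;
 end function edge_needs_split&lt;br /&gt;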
&lt;br /&gt;
=== Numerics - S. Mendez, IMAG &amp;amp; G. Balarac, LEGI ===&lt;br /&gt;
&lt;br /&gt;
==== N1: Treatment of boundary conditions for high-order schemes - M. Bernard &amp;amp; G. Balarac (LEGI), G. Lartigue (Total Energies) ====&lt;br /&gt;
&lt;br /&gt;
In the context of the Finite Volume Method, the spatial accuracy of a numerical scheme depends on the ability to accurately evaluate the fluxes through the interfaces of each control volume (CV).&lt;br /&gt;
Such an accurate evaluation is not straightforward, especially when dealing with distorted grids.&lt;br /&gt;
This project follows the work of [1], where fluxes use pointwise quantities that are reconstructed from the integrated quantities advanced in time.&lt;br /&gt;
During the workshop, the effort was dedicated to the treatment of '''inlet''' boundary conditions (BC) and '''non-planar walls'''.&lt;br /&gt;
For inlet BCs, the key resides in the spatial integration of the convective flux over the discrete faces of the CVs touching the boundary.&lt;br /&gt;
This treatment leads to an exact integration for linear inlet profiles and to a large error reduction for other profiles.&lt;br /&gt;
Concerning non-planar walls, the adopted strategy consists in enforcing the BC on each discrete face by modifying the normal component of the wall gradient in order to accurately evaluate the diffusive flux.&lt;br /&gt;
Again, a large reduction of the error has been observed.&lt;br /&gt;
&lt;br /&gt;
[1] Bernard et al., ''A framework to perform high-order deconvolution for finite-volume method on simplicial meshes'', IJNMF, 2020.&lt;br /&gt;
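&lt;br /&gt;
As a side note, the exactness reported for linear inlet profiles follows from a standard quadrature argument, sketched here in generic form rather than as the exact implemented formulas: the convective flux is integrated face by face, and a one-point centroid rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\int_F \phi\,(\mathbf{u}\cdot\mathbf{n})\,\mathrm{d}S \;\approx\; |F|\,\big(\phi\,\mathbf{u}\cdot\mathbf{n}\big)(\mathbf{x}_F)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;\mathbf{x}_F&amp;lt;/math&amp;gt; the centroid of the face &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt;, is exact whenever the integrand varies linearly over the face, consistent with the exact integration reported for linear inlet profiles.&lt;br /&gt;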
&lt;br /&gt;
&lt;br /&gt;
==== N2: Implementation of linearised implicit time integration in ALE solver - T. Berthelon &amp;amp; G. Balarac (LEGI) ====&lt;br /&gt;
&lt;br /&gt;
A linearised implicit time integration has recently been developed in the incompressible solver of YALES2. This new integration scheme allows the use of time steps larger than those constrained by the classic stability criteria inherent to explicit time integration methods, which reduces the restitution time of Large Eddy Simulations [1].&lt;br /&gt;
The objective of this project was to implement this new time integration in the ALE solver, in order to reduce the restitution time of moving-mesh configurations.&lt;br /&gt;
&lt;br /&gt;
The developments were validated on a scalar advection case and on a rotor-stator interaction case. Although the results appear to be in line with those of the explicit integration methods, the second-order temporal convergence remains to be demonstrated.&lt;br /&gt;
&lt;br /&gt;
[1] Berthelon et al., ''Toward the use of LES for industrial complex geometries. Part II: Reduce the time-to-solution by using a linearised implicit time advancement'', JoT, 2022.&lt;br /&gt;
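&lt;br /&gt;
For context, a linearised implicit advancement of &amp;lt;math&amp;gt;\mathrm{d}\mathbf{u}/\mathrm{d}t = \mathbf{R}(\mathbf{u})&amp;lt;/math&amp;gt; replaces the full non-linear implicit solve by a single linear system per time step; the generic form below is a sketch consistent with [1], not the exact YALES2 discretization:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left(\mathbf{I} - \Delta t\,\left.\frac{\partial \mathbf{R}}{\partial \mathbf{u}}\right|_{\mathbf{u}^n}\right)\Delta\mathbf{u} = \Delta t\,\mathbf{R}(\mathbf{u}^n),\qquad \mathbf{u}^{n+1} = \mathbf{u}^n + \Delta\mathbf{u}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;\Delta t&amp;lt;/math&amp;gt; can exceed the explicit stability limit at the cost of one linear solve per step.&lt;br /&gt;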
&lt;br /&gt;
==== N5: Optimization of the RBC solver - F. Rojas &amp;amp; S. Mendez (IMAG) ====&lt;br /&gt;
&lt;br /&gt;
==== N6: Electrodeformation of red blood cells, extension to 3D and improved accuracy at the membrane - A. Spadotto &amp;amp; S. Mendez (IMAG), M. Bernard (LEGI) ====&lt;br /&gt;
The Leaky Dielectric Model is a popular framework to describe electric stresses over micro-scale membranes. We have adopted it to simulate the effect of a DC electric field on a red blood cell using the YALES2BIO solver. The goal of the project is to reproduce the electric charging process of the membrane, as well as the resulting stresses, which may lead to an electrodeformation of the cell. From the point of view of the implementation, the membrane is represented by a 2D surface mesh embedded in a 3D Eulerian grid. The need to make variables stored on the surface interact with quantities stored on the Eulerian grid calls for a proper bidirectional 2D-membrane/3D-grid dynamic connectivity. The progress made on this task during this ECFD has led to the first 3D simulation of the charging of a fixed spherical shell. Moreover, the estimation of grid variables on elements cut by the membrane has been improved thanks to a high-order extrapolation, which has been successfully tested on 2D configurations. The project opens the way for a series of validation tests. In particular, future work will demand the treatment of instabilities emerging in symmetrical configurations.&lt;br /&gt;
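&lt;br /&gt;
For reference, in the Leaky Dielectric Model the membrane behaves as a capacitor charged by the normal ohmic currents on both sides; a standard form of the charging condition, given here for context rather than as the exact YALES2BIO implementation, reads&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;C_m \frac{\partial V_m}{\partial t} + G_m V_m = \sigma_i\,\nabla\varphi_i\cdot\mathbf{n} = \sigma_e\,\nabla\varphi_e\cdot\mathbf{n},\qquad V_m = \varphi_i - \varphi_e&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;C_m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;G_m&amp;lt;/math&amp;gt; the membrane capacitance and conductance, &amp;lt;math&amp;gt;\sigma_{i,e}&amp;lt;/math&amp;gt; the inner and outer conductivities, and &amp;lt;math&amp;gt;\varphi_{i,e}&amp;lt;/math&amp;gt; the corresponding electric potentials.&lt;br /&gt;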
&lt;br /&gt;
=== Turbulence - P. Benard, CORIA &amp;amp; L. Bricteux, UMONS ===&lt;br /&gt;
&lt;br /&gt;
==== T4: Atmospheric solver ====&lt;br /&gt;
Wind turbines, which keep growing in size, are now influenced by atmospheric flows. An atmospheric solver has already been developed in YALES2 to represent some of these effects (Coriolis, veer, thermal stratification). In this continuity, the project has been divided into two work-packages.&lt;br /&gt;
* Work-package 1: use of the Variable Density Solver (VDS).&lt;br /&gt;
Before ECFD7, thermal stratification was taken into account using the Boussinesq buoyancy approximation within the incompressible solver framework. Now the VDS can be used, taking all thermal effects into account. Results are promising.&lt;br /&gt;
* Work-package 2: wall-law velocity filtering.&lt;br /&gt;
Wall laws use the velocity at the first grid node to compute the wall shear stress. Before ECFD7, the atmospheric wall law used the local instantaneous velocity, sometimes leading to convergence errors. Now a gather-scatter filter can be used to average the velocity (and temperature) at the first grid node, as sketched below.&lt;br /&gt;
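&lt;br /&gt;
For context, the rough-wall logarithmic law ties the wall shear stress to the velocity at the first grid node through the friction velocity; with the new filtering, the local instantaneous velocity is simply replaced by its gather-scatter average &amp;lt;math&amp;gt;\langle u(z_1)\rangle&amp;lt;/math&amp;gt; (standard log-law form, sketched here, not necessarily the exact YALES2 variant):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;u_\tau = \frac{\kappa\,\langle u(z_1)\rangle}{\ln(z_1/z_0)},\qquad \tau_w = \rho\, u_\tau^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;\kappa&amp;lt;/math&amp;gt; the von Kármán constant, &amp;lt;math&amp;gt;z_1&amp;lt;/math&amp;gt; the height of the first grid node and &amp;lt;math&amp;gt;z_0&amp;lt;/math&amp;gt; the roughness length.&lt;br /&gt;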
&lt;br /&gt;
=== Two Phase Flow - M. Cailler, Safran Tech &amp;amp; V. Moureau, CORIA ===&lt;br /&gt;
&lt;br /&gt;
==== P3: Blood platelets adhesion model - C. Raveleau, S. Mendez, F. Nicoud (IMAG) ====&lt;br /&gt;
&lt;br /&gt;
Medical devices in contact with blood (e.g. artificial valves) are used to treat various cardiovascular diseases, but their thrombogenicity remains the main unresolved issue in their development. A numerical model of blood platelets is being constructed to help understand the effect of microstructuring on the thrombogenicity of artificial surfaces. The Force Coupling Method (FCM) was previously implemented and allows the modelling of ellipsoidal particles and of their interaction with the surrounding fluid. During the workshop, the particle model was extended to include adhesive and repulsive interactions with walls or with other particles. The adhesive bonds are modelled with springs that form when the distance between a node of a particle surface and a node of the wall or of another particle is smaller than a given threshold. The stiffness of the bond is increased after a given formation time, to mimic the 2-step adhesion process of platelets to von Willebrand Factor. A Lennard-Jones potential is used to model the collision of particles. Future work will aim at generalizing these implementations to an arbitrary number of particles (they currently only work for 2 particles) and at ensuring that the interactions are unaltered by the crossing of a periodic boundary.&lt;br /&gt;
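&lt;br /&gt;
The two ingredients above (threshold-based bond formation with delayed stiffening, plus a Lennard-Jones collision force) can be sketched as follows; this is a generic illustration with hypothetical names and parameters, not the YALES2BIO implementation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
typedef struct { int active; double age, rest_len; } bond_t;&lt;br /&gt;
&lt;br /&gt;
/* Scalar force along the node-node direction (negative = attractive). */&lt;br /&gt;
/* A bond forms below d_form and stiffens after t_mature, mimicking    */&lt;br /&gt;
/* the 2-step adhesion to von Willebrand Factor.                       */&lt;br /&gt;
double bond_force(bond_t *b, double dist, double dt, double d_form,&lt;br /&gt;
                  double k_young, double k_mature, double t_mature)&lt;br /&gt;
{&lt;br /&gt;
    if (!b-&amp;gt;active &amp;amp;&amp;amp; dist &amp;lt; d_form) {   /* bond formation */&lt;br /&gt;
        b-&amp;gt;active = 1; b-&amp;gt;age = 0.0; b-&amp;gt;rest_len = dist;&lt;br /&gt;
    }&lt;br /&gt;
    if (!b-&amp;gt;active) return 0.0;&lt;br /&gt;
    b-&amp;gt;age += dt;&lt;br /&gt;
    double k = (b-&amp;gt;age &amp;lt; t_mature) ? k_young : k_mature;&lt;br /&gt;
    return -k * (dist - b-&amp;gt;rest_len);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/* Truncated, purely repulsive Lennard-Jones force magnitude (WCA). */&lt;br /&gt;
double lj_repulsion(double dist, double eps, double sigma)&lt;br /&gt;
{&lt;br /&gt;
    double rc = pow(2.0, 1.0 / 6.0) * sigma;   /* cut-off radius */&lt;br /&gt;
    if (dist &amp;gt;= rc) return 0.0;&lt;br /&gt;
    double sr6 = pow(sigma / dist, 6.0);&lt;br /&gt;
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / dist;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;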
&lt;br /&gt;
&lt;br /&gt;
=== Combustion - K. Bioche, CORIA &amp;amp; R. Mercier, Safran ===&lt;br /&gt;
&lt;br /&gt;
=== User Experience &amp;amp; Data -  L. Korzeczek, GDTech ===&lt;br /&gt;
&lt;br /&gt;
==== U4: CWIPI 1.0 porting - N. Dellinger, B. Andrieu, K. Hoogveld, E. Quémerais (ONERA), A. Grenouilloux (CORIA), R. Letournel (Safran Tech) ====&lt;br /&gt;
&lt;br /&gt;
Coupling is a cornerstone of numerical simulation, especially for addressing multi-physics problems with highly-specialized solvers for each phenomenon. The CWIPI library, developed at ONERA for coupling codes in a massively parallel environment, has been used in YALES2 for many years for internal and external coupling.&lt;br /&gt;
Significant developments have been carried out in recent years to improve the performance and usability of CWIPI, resulting in the release of version 1 in July 2023. This version features a completely revised API to overcome the limitations of version 0.12 and to offer more possibilities to users.&lt;br /&gt;
The goal of this project was to support users in their transition to version 1. A training course based on Jupyter Notebooks was first organized. Assistance was then provided to successfully port MoDeTheC's and YALES2's internal couplings to the new version. Some fixes were made in CWIPI along the way and will be included in a new patched version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Communications related to ECFD6 ==&lt;br /&gt;
&lt;br /&gt;
=== Conferences ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Publications ===&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tberthelon</name></author>	</entry>

	</feed>