ECFD workshop, 5th edition, 2022

Revision as of 11:41, 28 January 2022 by Mehdicizeron



Description

ECFD5 workshop logo.
  • Event from the 23rd to the 28th of January 2022
  • Location: Centre Bonséjour, Merville-Franceville, near Caen (14)
  • Two types of sessions:
    • common technical presentations: roadmaps, specific points.
    • mini-workshops. Potential workshops are listed below.
  • Free of charge
  • More than 50 participants from academics (CERFACS, CORIA, IMAG, LEGI, EM2C, UMONS, UVM, VUB, UCL, TUDelft), HPC center/experts (GENCI, AMD, CINES, CRIANN) and industry (Safran, Ariane Group, Siemens-Gamesa).
  • Objectives
    • Bring together experts in high-performance computing, applied mathematics and multi-physics CFD
    • Identify the technological barriers of exascale CFD via numerical experiments
    • Identify industrial needs and challenges in high-performance computing
    • Propose action plans to add to the development roadmaps of the CFD codes

News

  • 03/11/2021: First announcement of the 5th Extreme CFD Workshop & Hackathon!

Banniere ECFD5 sponso.png

  • 13/01/2022: After discussions with the participants, the 5th Extreme CFD Workshop & Hackathon is maintained as an in-person event! It will also be possible to attend the plenary sessions and participate in the workshop remotely.
  • 14/01/2022: The ECFD5 program is online! The plenary sessions will be announced soon!
  • 20/01/2022: The plenary sessions are now defined:
    • P1 - 24/01/2022: GPU porting challenges and quantum computing by G. Staffelbach (CERFACS) + presentation of the new CINES cluster, Adastra, by C. Andrieu (CINES)
    • P2 - 25/01/2022: News, perspectives and future of GPU computing applied to CFD by A. Toure (AMD)
    • P3 - 26/01/2022: Theory, applications and perspectives of the Lattice-Boltzmann Method by P. Boivin (M2P2)
    • P4 - 27/01/2022: Concepts and notions of mesh adaptation by C. Dapogny (LJK)

Agenda

ECFD5 program.png

Thematics / Mini-workshops

These mini-workshops may change and cover more or fewer topics. This page will be adapted according to your feedback.

Combustion - K. Bioche, VUB

Static and dynamic mesh adaptation - G. Balarac, LEGI

Multi-phase flows - M. Cailler, SAFRAN TECH

D3: convergence of the interface curvature computation. In a levelset framework, the interface curvature is classically computed as the divergence of the normalized gradient of the levelset function. Since this function is only computed at 2nd order, one obtains a zeroth-order (O(0)) accurate curvature, meaning that the error does not decrease with mesh refinement. We have implemented in YALES2 a strategy proposed by Emmanuel Maître and collaborators for a finite element method, based on the regularization (filtering) of the levelset gradient and curvature. This strategy has been tested on the simple case of a static circular interface, with two types of filters (a simple gather-scatter, or the bilaplacian developed by Lola Guedot in her 2015 PhD thesis) on different mesh types (split quadrilaterals, isotropic triangular mesh, unstructured triangular mesh). The results are encouraging, since first-order (O(1)) convergence is obtained in all cases. Further work is needed to tune the filter properties (amplitude and size) for different spatial resolutions and levelset "narrow band" widths.

Numerics - G. Lartigue, CORIA

Turbulent flows - P. Bénard, CORIA

  • Sub-project 1: Optimization of the actuator set for several wind turbines in YALES2 (F. Houtin Mongrolle, S. Gremmo, E. Muller & B. Duboc)


  • Sub-project 5: TBLE wall model for LES with pressure gradient on a simple turbomachinery geometry (M. Cizeron, N. Odier, R. Vicquelin)

Wall modeling is often used in LES to alleviate the computational cost that would be required to resolve all length scales down to the solid boundaries of the domain. The classical approach is to use an algebraic model to provide the wall friction and heat flux, coupled to the LES solver at the first off-wall nodes. Such wall models were derived from the RANS equations under strong assumptions (planar flow, equilibrium, no pressure gradient) that are often far from true in real applications, e.g. in turbomachinery, where the use of a wall model is mandatory due to the size of the computation. During this workshop, a wall model relying on the resolution of the Thin Boundary Layer Equations (TBLE), previously implemented by EM2C, was studied. A pressure-gradient term was added to these equations and tested, first in the standalone 1D wall-model solver, then on a 3D turbulent channel. It remains to be tested on a diffuser configuration featuring a real pressure gradient to quantify the effect of the new wall model. The influence of the point at which the LES and the wall model are coupled (i.e. its distance to the wall) was also tested, for both the TBLE and the original algebraic model, showing that coupling farther from the wall yields better results and reduces the so-called log-layer mismatch.

  • Sub-project 6: Tools for rough wall modelling (A. Barge, S. Meynet)

Compressible - L. Bricteux, UMONS

User experience - J. Leparoux, SAFRAN TECH

Hackathon - G. Staffelbach, CERFACS

AMD GPU hardware is still relatively unknown in our CFD community. This hackathon was the opportunity to dive deep into the AMD development environment to prepare for the arrival of Adastra at CINES. Both YALES2 and AVBP were ported to the AOMP framework using ROCm 4.5 on the GRID5000 Neowise system. CPU execution posed no issues, so we were able to focus on GPU offloading using OpenMP. On the YALES2 side, a mini-app encompassing the typical YALES2 structure hierarchy and loop execution was extracted from the code to evaluate different porting strategies; on the AVBP side, the current OpenACC GPU offloading was translated to OpenMP, focusing on the viscosity computation kernel. We learnt that the OpenMP standard currently supported by ROCm 4.5 does not allow direct offloading of values referenced inside a derived-type structure, but it was possible to use aliases such as pointers or flat array copies to do the job; this should be solved with the support of OpenMP 5.0. Another troublesome issue was the lack of support for offloading array vector operations (e.g. array(:) = 1.0), making it mandatory to write these as explicit loops.
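A minimal C sketch of the two workarounds (the actual codes are Fortran and the names here are hypothetical): map a local pointer alias of the structure member rather than the member itself, and write the assignment as an explicit loop rather than array syntax:

```c
#include <stdlib.h>
#include <assert.h>

/* A structure standing in for a Fortran derived type (hypothetical names). */
typedef struct { int n; double *val; } field_t;

/* Offloading a reference to f->val directly was not possible with ROCm 4.5,
 * so a local pointer alias is mapped instead; likewise, the Fortran array
 * statement val(:) = 1.0 had to become an explicit loop. */
void field_set_one(field_t *f)
{
    int n = f->n;
    double *v = f->val;  /* flat alias of the derived-type member */
    #pragma omp target teams distribute parallel for map(tofrom: v[0:n])
    for (int i = 0; i < n; i++)
        v[i] = 1.0;      /* explicit loop instead of array syntax */
}

/* Small driver: allocate a field, run the kernel, return the sum of entries. */
double field_sum_after_set(int n)
{
    field_t f = { n, malloc((size_t)n * sizeof(double)) };
    field_set_one(&f);
    double s = 0.0;
    for (int i = 0; i < n; i++) s += f.val[i];
    free(f.val);
    return s;
}
```

When OpenMP offloading is not enabled at compile time the pragma is simply ignored and the loop runs on the host, so the sketch degrades gracefully.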

Some bugs remain, and it is encouraged to use the latest compiler version when working on the porting (the release of flang 14.0.1 saved us a lot of time, as we had started with 14.0.0). Offloading the YALES2 mini-app yielded a 60x acceleration of the kernel, whereas offloading the viscosity model in a full AVBP simulation yielded a 7x performance gain when comparing one core to one GPU. These results are to be taken with a grain of salt, but they are really encouraging.

For the next steps, a porting strategy for both codes will be implemented (depending on OpenMP 5 support), and discussions are underway with CINES and other partners to offer the best possible experience to both codes' communities on Adastra at its release.