ECFD workshop, 3rd edition, 2020

Sponsors

[Image: Ecfd3 sponsors.png]


Participants

[Image: Ecfd3 participants.png]

Flyer

Presentations


Project achievements

Project #1: Hackathon GENCI/ATOS/AMD/CERFACS on AVBP

C. Piechurski (GENCI), S. Jauré (ATOS), B. Pajot (ATOS), P.-A. Harraud (AMD), P. Mohanamuraly (CERFACS), G. Staffelbach (CERFACS), J. Legaux (CERFACS)

We ported the AVBP solver to the AMD Rome system available at GENCI-TGCC (Irène Joliot-Curie). Characterisation of the application on this architecture showed that performance depends for about one third on memory bandwidth and for two thirds on compute. Strong scaling was measured up to 130k cores with OpenMPI and provided an acceleration of 75% without optimisations. Weak scaling up to 32k MPI ranks suggests that decimating the processes by a factor of 2 improves computational efficiency by up to 30%. A trade-off between MPI imbalance and decimation is therefore possible: when the time lost to imbalance exceeds the roughly 30% efficiency recovered by decimation, running on half the ranks improves the time to solution.

Currently, OpenMPI offers the best performance; IntelMPI is still somewhat unstable.

During the hackathon we also introduced colour-based cache blocking in the code, using ColPack, in order to use OpenMP without critical sections. On a 2x18-core Skylake processor, the new implementation offered a similar speedup with full threading as with full MPI, the best trade-off being 4 MPI ranks with 9 threads each. On AMD Rome, full threading did not offer much acceleration and still needs to be investigated, but 8 MPI ranks with 16 threads each seems quite promising.
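
The principle behind the colouring is that mesh entities sharing a node receive different colours, so that all entities of one colour can be scattered to the nodes in parallel without atomics or critical sections. Below is a minimal C++ sketch of the resulting loop structure, with hypothetical names and data layout; in AVBP the colouring itself is delegated to ColPack.

    #include <cstddef>
    #include <vector>

    // Sketch of a colour-based scatter without critical sections.
    // colour_groups[c] holds the element ids of colour c, precomputed
    // (e.g. with ColPack) so that no two elements of the same colour
    // share a node.
    void scatter_residuals(const std::vector<std::vector<int>>& colour_groups,
                           const std::vector<std::vector<int>>& elem_nodes,
                           const std::vector<double>& elem_res,
                           std::vector<double>& node_res) {
        for (const auto& group : colour_groups) {        // colours run one by one
            #pragma omp parallel for
            for (std::size_t i = 0; i < group.size(); ++i) {
                int e = group[i];
                for (int n : elem_nodes[e])              // elements of one colour
                    node_res[n] += elem_res[e];          // never share a node, so
            }                                            // this update is race-free
        }
    }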

Final presentation of project #1

Project #2: Hackathon GENCI/ATOS/AMD/CORIA on YALES2

C. Piechurski (GENCI), S. Jauré (ATOS), P.-A. Harraud (AMD), P. Mohanamuraly (CERFACS), G. Lartigue (CORIA), F. Gava (CORIA), P. Begou (LEGI)

Final presentation of project #2

Project #3: Implementation of a secondary atomization model in YALES2

Final presentation of project #3

Project #4: Application to combustion and lubrication applications

Final presentation of project #4

Project #5: Jet-in-crossflow with a diffuse interface method

Final presentation of project #5

Project #6: Accurate numerical prediction of vortical flows using AMR

Final presentation of project #6

Project #7: Wall modeling for large-eddy simulation

Final presentation of project #7

Project #8: Implementation of the computation of the distance to a liquid-gas interface near a wall on a 3D unstructured mesh with YALES2

Final presentation of project #8

Project #9: Remeshed particle method at high Schmidt and Reynolds number

S. Santoso (LJK), J.-B. Lagaert (Math Orsay), G. Balarac (LEGI)

We study the advection of a scalar in turbulent flows with a multi-mesh method. The finite-volume method is used to solve the Navier-Stokes equations on an unstructured mesh (YALES2), while the advection equation is solved with a remeshed particle method on a Cartesian mesh. In the context of parallel computing, the problem is strongly unbalanced, since a large number of particles is created in the finely meshed zones. Our strategy to load-balance the problem is to assign to every element group a weight equal to its particle density.
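
As a minimal sketch of this weighting strategy (with a hypothetical data layout, not the actual YALES2 structures), the weight of each element group can be computed as its particle count per unit volume and handed to the partitioner as a vertex weight:

    #include <cstddef>
    #include <vector>

    // Each element group is weighted by its particle density so that
    // particle-rich groups are spread evenly across the MPI ranks.
    struct ElementGroup {
        std::size_t n_particles; // particles currently carried by the group
        double      volume;      // geometric volume of the group
    };

    std::vector<double> partition_weights(const std::vector<ElementGroup>& groups) {
        std::vector<double> weights;
        weights.reserve(groups.size());
        for (const auto& g : groups)
            weights.push_back(static_cast<double>(g.n_particles) / g.volume);
        return weights; // e.g. used as vertex weights by a graph partitioner
    }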

Final presentation of project #9

Project #10: Remaillage dynamique pour la combustion turbulente prémélangée

W. Agostinelli, O. Dounia, T. Jaravel, O. Vermorel

The objective of the project was to evaluate the potential of adaptive mesh refinement (AMR) for premixed combustion in unsteady systems. Three target cases were identified: a semi-vented deflagration with laminar-to-turbulent transition, a planar detonation wave, and a bluff-body stabilized burner subjected to thermoacoustic oscillations. The simulations are performed with AVBP coupled to the AMR implementation of YALES2. Several metrics and remeshing criteria were developed to identify and correctly resolve both the combustion wave front and the turbulent flow. The comparison of the numerical results with reference simulations showed that the main features of the physics could be recovered with a significant speed-up in terms of computational cost.
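
As an illustration of what such a remeshing criterion can look like (a hedged sketch with hypothetical field names and thresholds, not the project's actual code), the target cell size can be blended between a coarse background size and a fine flame-resolving size according to a front sensor:

    #include <algorithm>

    // Target cell size for the mesh adaptation: refine where the
    // progress-variable gradient flags a combustion front, relax back
    // to the coarse size elsewhere.
    double target_cell_size(double grad_c,   // |grad(c)| at the node
                            double h_coarse, // background cell size
                            double h_fine)   // flame-resolving cell size
    {
        const double grad_threshold = 1.0e2; // hypothetical detection level
        double sensor = std::min(1.0, grad_c / grad_threshold); // 0 .. 1
        return h_coarse + sensor * (h_fine - h_coarse);
    }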

Final presentation of project #10

Project #11: Multiphysics coupling for wind turbine wake modeling

Final presentation of project #11

Project #12: Stability of a semi-implicit compressible cavitation solver

Final presentation of project #12

Project #13: DNS of droplet dynamics and evaporation: comparison between structured and unstructured solvers

Final presentation of project #13

Project #14: High-order method

M. Bernard (LEGI), G. Lartigue (CORIA), G. Balarac (LEGI), V. Moureau (CORIA)

Final presentation of project #14

Project #15: Use of second-order finite elements in the SMS

T. Fabbri (LEGI), G. Lartigue (CORIA), G. Balarac (LEGI), V. Moureau (CORIA)

Final presentation of project #15

Project #16: Development of a RANS solver in YALES2

Project #17: Coupling of a fluid plasma solver with a Lagrangian solver for the modeling of dusty plasmas

Project #18: L'Evaporo O Maître

Project #19: The Clone Wars

Project #20: Stiff complex fluid simulation with YALES2

Sam Whitmore, Yves Dubief, M2CE, University of Vermont

The objective was to simulate (1) ionized gases and (2) polymer solutions in flows using YALES2. Both problems are challenging owing to their stiff thermodynamics (1) or stiff polymer dynamics (2). Significant gains were achieved in the implementation of the respective models thanks to the stiff-integrator library CVODE. The plasma flow demonstrated an increase in time step of two orders of magnitude compared to the previous implementation of the plasma chemistry in the variable-density solver. Polymer models are notoriously prone to numerical instability; again, the use of CVODE showed equivalent, if not superior, stability of the solution at a fraction of the cost of the algorithms commonly employed to address the stiffness of the problem.
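
For reference, the basic calling sequence of CVODE for a stiff system looks like the following minimal serial sketch (written against the SUNDIALS 5.x C API with a hypothetical one-equation right-hand side; the actual coupling with YALES2 is of course more involved):

    #include <cvode/cvode.h>
    #include <nvector/nvector_serial.h>
    #include <sunlinsol/sunlinsol_dense.h>
    #include <sunmatrix/sunmatrix_dense.h>

    // Hypothetical stiff right-hand side, standing in for plasma chemistry
    // or polymer source terms.
    static int rhs(realtype t, N_Vector y, N_Vector ydot, void* user_data) {
        NV_Ith_S(ydot, 0) = -1.0e4 * NV_Ith_S(y, 0); // placeholder kinetics
        return 0;
    }

    int main() {
        const sunindextype neq = 1;
        N_Vector y = N_VNew_Serial(neq);
        NV_Ith_S(y, 0) = 1.0;                        // initial condition

        void* cv = CVodeCreate(CV_BDF);              // BDF: stiff integrator
        CVodeInit(cv, rhs, 0.0, y);
        CVodeSStolerances(cv, 1.0e-6, 1.0e-10);

        SUNMatrix A = SUNDenseMatrix(neq, neq);
        SUNLinearSolver LS = SUNLinSol_Dense(y, A);
        CVodeSetLinearSolver(cv, LS, A);             // dense Newton solver

        realtype t = 0.0;
        CVode(cv, 1.0e-3, y, &t, CV_NORMAL);         // advance to t = 1e-3

        CVodeFree(&cv);
        SUNLinSolFree(LS);
        SUNMatDestroy(A);
        N_VDestroy(y);
        return 0;
    }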

Project #21: AVBP Dense Gases

Project #22: Numerical prediction of wind turbine wakes using AMR