Deep Reinforcement Learning for the Management of the Wall Regeneration Cycle in Wall-Bounded Turbulent Flows
Cavallazzi, G. M. ORCID: 0000-0001-7529-3256, Guastoni, L., Vinuesa, R. & Pinelli, A. ORCID: 0000-0001-5564-9032 (2024). Deep Reinforcement Learning for the Management of the Wall Regeneration Cycle in Wall-Bounded Turbulent Flows. Flow, Turbulence and Combustion, doi: 10.1007/s10494-024-00609-4
Abstract
The wall cycle in wall-bounded turbulent flows is a complex turbulence regeneration mechanism that remains not fully understood. This study explores the potential of deep reinforcement learning (DRL) for managing the wall regeneration cycle to achieve desired flow dynamics. To create a robust framework for DRL-based flow control, we have integrated the Stable-Baselines3 DRL libraries with the open-source direct numerical simulation (DNS) solver CaNS. The DRL agent interacts with the DNS environment, learning policies that modify the wall boundary conditions to optimise objectives such as the reduction of the skin-friction coefficient or the enhancement of certain features of the coherent structures. The implementation makes use of message-passing interface (MPI) wrappers for efficient communication between the Python-based DRL agent and the DNS solver, ensuring scalability on high-performance computing architectures. Initial experiments demonstrate the capability of DRL to achieve drag-reduction rates comparable with those achieved via traditional methods, although limited to short time intervals. We also propose a strategy to enhance the coherence of the velocity streaks, on the assumption that maintaining straight streaks can inhibit their instability and further reduce the skin friction. Our results highlight the promise of DRL in flow-control applications and underscore the need for more advanced control laws and objective functions. Future work will focus on optimising the actuation intervals and on exploring new computational architectures to extend the applicability and the efficiency of DRL in turbulent flow management.
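The agent–environment coupling described in the abstract (observations of the near-wall flow, wall-actuation actions, and a drag-based reward) can be sketched with a toy, dependency-free interaction loop. The environment class, its friction proxy, and the random policy below are illustrative stand-ins only, not the paper's actual CaNS/Stable-Baselines3/MPI implementation:

```python
import random


class WallActuationEnv:
    """Toy stand-in for a DNS environment (hypothetical names and dynamics;
    the paper instead couples a Python DRL agent to the CaNS solver via MPI)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.cf = 1.0  # normalised skin-friction coefficient (proxy)

    def reset(self):
        # Observation: here just the current friction proxy; a real setup
        # would return sampled near-wall velocity fields.
        self.cf = 1.0
        return [self.cf]

    def step(self, action):
        # Action: wall blowing/suction amplitude in [-1, 1], applied as a
        # boundary condition. The linear response below is purely illustrative.
        self.cf = max(0.0, self.cf + 0.1 * action + 0.01 * self.rng.uniform(-1, 1))
        # Reward: negative skin friction, so reducing drag increases reward.
        reward = -self.cf
        done = self.cf <= 0.0
        return [self.cf], reward, done


# Random-policy rollout; a trained DRL policy (e.g. from Stable-Baselines3)
# would replace the random action choice.
env = WallActuationEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(10):
    action = env.rng.uniform(-1.0, 1.0)
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
```

In the actual framework, `reset` and `step` would trigger DNS advancement on the solver side through MPI messages, which is what makes the approach scalable on HPC architectures.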
Publication Type: Article

Additional Information: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher Keywords: Flow control, Drag reduction, Direct numerical simulation, Deep reinforcement learning

Subjects: T Technology > TA Engineering (General). Civil engineering (General); T Technology > TL Motor vehicles. Aeronautics. Astronautics

Departments: School of Science & Technology > Engineering