Speaker
Description
Modern particle accelerators operate in highly complex, nonlinear, and time-varying regimes, where optimal performance relies on the coordinated tuning of many coupled parameters under uncertainty and noise. Traditional control and optimization strategies based on physics models, linearization, or manual tuning often struggle to adapt in real time to changing beam conditions, hardware drifts, and incomplete diagnostics.
These challenges are particularly relevant at the CERN Linear Electron Accelerator for Research (CLEAR) facility, which supports a wide range of experiments requiring diverse beam configurations. Among these, medical irradiation experiments demand dedicated settings, including configurations that produce flat and uniform transverse beam profiles at the sample location using a dual-scattering system. Establishing and maintaining such beam conditions requires significant machine time and careful manual tuning, yet stable and reproducible beam parameters are essential for experimental reliability.
Reinforcement Learning (RL) offers a promising framework for autonomous accelerator operation by enabling control agents to learn optimal tuning policies through interaction with the machine or high-fidelity simulations, with the potential to reduce setup time and improve beam stability. To address these challenges, an RL-based beam-flattening algorithm is being developed to autonomously optimize the beam profile by tuning quadrupole and corrector magnets that steer and shape the beam onto the scattering system. The approach has been implemented and validated using a simulation model of the CLEAR beamline and is planned for deployment during the 2026 experimental run.
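To make the setup concrete, the sketch below shows a toy, gym-style environment for such a beam-flattening task. It is an illustrative stand-in only, not the CLEAR simulation model: the beam-optics response (quadrupole strength setting the spot size, corrector kick shifting the centroid), the region of interest, and the noise level are all invented for the example. The reward penalizes non-uniformity of the profile inside the region of interest, so a wide, centred, flat profile scores highest.

```python
import numpy as np

class ToyFlatteningEnv:
    """Toy stand-in for a beam-flattening RL task (hypothetical physics model).

    Action: [quad_strength, corrector_kick] in normalised units, clipped to [-1, 1].
    Observation: transverse beam profile sampled on a screen (1-D pixel array).
    Reward: negative relative non-uniformity inside a region of interest (ROI),
    so a wide, uniform, centred profile is rewarded.
    """

    def __init__(self, n_pixels=64, roi=(16, 48), seed=0):
        self.x = np.linspace(-1.0, 1.0, n_pixels)   # screen coordinate
        self.roi = slice(*roi)                       # pixels scored for flatness
        self.rng = np.random.default_rng(seed)

    def step(self, action):
        quad, kick = np.clip(action, -1.0, 1.0)
        # Invented response model: quad strength sets the spot size,
        # the corrector kick shifts the beam centroid on the screen.
        sigma = 0.2 + 0.6 * (quad + 1.0) / 2.0
        profile = np.exp(-0.5 * ((self.x - 0.5 * kick) / sigma) ** 2)
        profile += 0.01 * self.rng.normal(size=self.x.size)  # diagnostic noise
        # Flatness metric: relative spread of the profile inside the ROI.
        roi = profile[self.roi]
        reward = -float(np.std(roi) / (np.mean(roi) + 1e-9))
        return profile, reward

env = ToyFlatteningEnv()
# A wide, centred beam is "flatter" than a narrow, offset one:
_, r_flat = env.step(np.array([1.0, 0.0]))
_, r_narrow = env.step(np.array([-1.0, 0.8]))
```

In the real application the analytic response model would be replaced by the CLEAR beamline simulation (and eventually the machine itself), and an off-the-shelf RL agent would interact with the environment through this same action/observation/reward interface.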
| Student | Yes |
|---|---|