Celeritas¶
Celeritas is a Monte Carlo particle transport code for simulating High Energy Physics (HEP) detectors on general-purpose GPUs. Originally motivated by the massive computational requirements of the High Luminosity upgrade to the Large Hadron Collider, the code’s goal is to accelerate the most computationally challenging simulation problems in HEP.
- Release:
0.6.0-42+develop.e99da9f5
- Date:
Jun 02, 2025
Introduction¶
New projects in High Energy Physics (HEP) and upgrades to existing ones promise new discoveries, but at the cost of increased hardware complexity and data readout rates. Deducing new physics from detector readouts requires a proportional increase in computational resources. The High Luminosity Large Hadron Collider (HL-LHC) detectors will require more computational resources than are available with traditional CPU-based computing grids. For example, the CMS collaboration forecasts [CMS Offline Software and Computing, 2021] that without substantial research and development improvements, computational resource requirements will exceed availability by more than a factor of two once the upgrade is brought online, with Monte Carlo (MC) detector simulation accounting for about 40% of the projected load.
Celeritas [Johnson et al., 2023, Johnson et al., 2025, Johnson et al., 2024, Johnson et al., 2023, Johnson et al., 2022, Johnson et al., 2023] is an MC particle transport code designed for high-performance simulation of complex HEP detectors on GPU-accelerated hardware. Its immediate goal is to simulate electromagnetic (EM) physics for HL-LHC detectors with no loss in fidelity, acting as a plugin that accelerates existing Geant4 [Allison et al., 2016] workflows by “offloading” selected particles to Celeritas for transport on GPU.
Background¶
Note
This background section is largely a quote of the Celeritas R&D report [Johnson et al., 2024].
The first investigation of using GPUs to accelerate Geant4 computing was a tangential part of the GeantV project [Amadio et al., 2018], whose primary goal was to use CPU SIMD hardware to accelerate detector simulation. Toward the end of that project, a follow-up GeantX group [Canal, 2019] brought Fermilab and ORNL computational physicists together with computing experts from NERSC and Argonne to brainstorm pathways to exascale for detector simulation. This largely informal collaboration was funded by ECP and inspired by the success of the ExaSMR code [Merzari et al., 2023], which developed new algorithms for MC neutronics in nuclear reactors. The broad scope of “implementing Geant4 on GPU” led to many useful discussions but ultimately proved intractable as a starting point.
Celeritas grew out of those discussions as an entirely new project with the goal of incrementally developing GPU-targeted transport algorithms specifically for computationally intensive LHC simulations. The target of LHC production use is motivated by the HL-LHC upgrade, which will drive simulation requirements well beyond the projected computing capacity of traditional multicore CPU hardware [CMS Offline Software and Computing, 2021, The ATLAS Collaboration, 2020].
As the LHC demands more compute capacity, the HPC landscape has shifted so that GPUs, because of their energy efficiency, supply an increasing share of total processing power [Khan et al., 2021]. As machine learning tools become more widespread across all scientific disciplines, GPU uptake will only continue to grow. In this landscape, the primary goal of Celeritas is to enable HEP simulation to take advantage of the increasing supply of GPU hardware.
Celeritas also strives for higher simulation throughput per unit power on GPUs than on a CPU-only machine. Because detector simulation is only a fraction of the experiment toolchain, and the initial capabilities of Celeritas will accelerate only a fraction of that, it is unreasonable to assume that the hypothetical power efficiency of Celeritas alone will drive architectural purchasing decisions for new WLCG hardware. However, as more components of experiment toolchains use GPUs to accelerate numerical simulations, reconstruction, and machine learning models, the economic considerations will likely shift to favor an increasing fraction of machines with heterogeneous architectures.
Overview¶
This user manual is written for three audiences with different goals: Geant4 toolkit users for integrating Celeritas as a plugin, advanced users for extending Celeritas with new physics, and developers for maintaining and advancing the codebase.
Installation and usage¶
The Installation section describes how to obtain and set up a working copy of Celeritas. Once installed, Celeritas can be used as a software library integrated directly into experiment frameworks and user applications, or its front-end applications can be used to evaluate performance benchmarks and perform simple analyses.
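For the library-integration path, the most common entry point is the Geant4 offload interface from the accel library. The following is a minimal, illustrative sketch based on the SimpleOffload helper; treat the exact signatures here as assumptions that may vary between Celeritas versions:

```cpp
// Hedged sketch of offloading EM tracks from a Geant4 application via the
// accel library. Class names follow the accel API; signatures are
// illustrative, not authoritative.
#include <G4Types.hh>
#include <accel/LocalTransporter.hh>
#include <accel/SetupOptions.hh>
#include <accel/SharedParams.hh>
#include <accel/SimpleOffload.hh>

// One SetupOptions and SharedParams per process; one LocalTransporter and
// SimpleOffload per worker thread.
celeritas::SetupOptions setup_options;
celeritas::SharedParams shared_params;
G4ThreadLocal celeritas::LocalTransporter local_transporter;
G4ThreadLocal celeritas::SimpleOffload simple_offload;

// Inside a G4VUserActionInitialization subclass:
//   BuildForMaster(): simple_offload.BuildForMaster(&setup_options,
//                                                   &shared_params);
//   Build():          simple_offload.Build(&setup_options, &shared_params,
//                                          &local_transporter);
// User run/event/tracking actions then forward their Geant4 callbacks to
// simple_offload, which sends e-, e+, and gamma tracks to Celeritas for
// transport on GPU.
```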
GPU usage¶
Celeritas is designed to use GPUs for simulation. When built with CUDA or HIP support, the code automatically copies problem data to the device during construction. See System for details on initializing and accessing the device.
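As a minimal sketch (assuming the activate_device and device helpers documented in the System section; the exact signatures may differ by version), device initialization in a standalone application might look like:

```cpp
// Minimal sketch of device setup using the corecel system helpers; see the
// System section for the authoritative interface.
#include <corecel/sys/Device.hh>

int main()
{
    // Activate a GPU if the build supports one; in a CPU-only configuration
    // this is effectively a no-op and execution falls back to host code.
    celeritas::activate_device();

    if (celeritas::device())
    {
        // Problem data constructed from here on is mirrored to the device.
    }
    return 0;
}
```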
Geometry¶
Celeritas has two main choices for model geometry representation and navigation. VecGeom [Apostolakis et al., 2015] is a CUDA-compatible library for navigation on Geant4 detector geometries. ORANGE is a work-in-progress, surface-based geometry navigation engine that is “platform portable”, i.e., able to run on GPUs from multiple vendors.
Celeritas wraps both geometry packages with a uniform interface for changing and querying the geometry state.
Units¶
The Celeritas default unit system is Gaussian CGS, but it can be configured to use SI or CLHEP unit systems as well. A compile-time metadata class allows safe interoperable use of macroscopic-scale units and atomic-scale values such as MeV. For more details, see the Units and constants section of the API documentation.
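As a rough illustration of this pattern (with hypothetical names, not the actual Celeritas classes), a compile-time unit tag lets a value carry its unit in the type system and convert explicitly to the native unit system:

```cpp
// Illustrative sketch only: a quantity stores its value in the unit named
// by its compile-time tag and converts explicitly at the boundary.
template<class UnitT>
struct Quantity
{
    double value;  // stored in units of UnitT
};

struct Mev
{
    // Conversion factor to the native Gaussian CGS energy unit (erg):
    // 1 MeV = 1.602176634e-6 erg
    static constexpr double to_native() { return 1.602176634e-6; }
};

using MevEnergy = Quantity<Mev>;

// Conversion to the native unit system is always explicit:
constexpr double native_value(MevEnergy e)
{
    return e.value * Mev::to_native();
}

// Usage: MevEnergy gap{1.5}; double erg = native_value(gap);
```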
EM Physics¶
Celeritas implements physics processes and models for transporting electrons, positrons, and gammas. Initial support is being added for muon EM physics. Implementation details of these models and their corresponding Geant4 classes are documented in EM Physics.
Optical Physics¶
Optical physics is being added to Celeritas to support various astroparticle, high energy physics, and nuclear physics experiments including LZ, Calvision, DUNE, and ePIC. See the Optical physics section of the implementation details.
Stepping loop¶
In Celeritas, the core algorithm is a loop interchange between particle tracks and steps. Traditionally, in a CPU-based simulation, the outer loop iterates over particle tracks, while the inner loop handles steps. Each step includes actions such as evaluating cross sections, calculating distances to geometry boundaries, and managing interactions that produce secondaries.
Celeritas vectorizes this process by reversing the loop structure on the GPU. The outer loop is over step iterations, and the inner loop processes track slots, which are elements in a fixed-size vector of active tracks. The stepping loop in Celeritas is thus a sorted loop over actions, with each action typically corresponding to a kernel launch on the GPU (or an inner loop over tracks when running on the CPU).
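The following self-contained sketch (illustrative stand-in types, not the actual Celeritas kernels) contrasts the two loop orders:

```cpp
// Schematic comparison of track-major (CPU) vs. step-major (GPU) transport.
#include <cstddef>
#include <functional>
#include <vector>

struct Track
{
    bool alive = true;
};

void do_step(Track& t) { t.alive = false; /* placeholder physics */ }

// Traditional CPU transport: outer loop over tracks, inner loop over steps.
void transport_history(std::vector<Track>& tracks)
{
    for (Track& track : tracks)
    {
        while (track.alive)
        {
            do_step(track);  // cross sections, geometry distance, interaction
        }
    }
}

// Interchanged transport: outer loop over step iterations; each action
// applies one operation to every track slot, analogous to one kernel launch
// per action per step. (Actions must eventually kill all tracks.)
using Action = std::function<void(std::vector<Track>&)>;

void transport_stepwise(std::vector<Track>& track_slots,
                        std::vector<Action> const& sorted_actions)
{
    auto num_alive = [&] {
        std::size_t n = 0;
        for (Track const& t : track_slots)
            n += t.alive;
        return n;
    };

    while (num_alive() > 0)
    {
        for (Action const& action : sorted_actions)
        {
            action(track_slots);  // one kernel launch on GPU
        }
    }
}
```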
See Stepping mechanics for implementation details on the ordering of actions and the status of a track slot during iteration.
Usage¶
Celeritas includes a core set of libraries for internal and external use, as well as several helper applications and front ends.
Using Celeritas
Implementation¶
The bulk of Celeritas’ code is organized into several libraries intended for use by external users and application developers. Currently, the most stable and user-ready component of Celeritas is its Geant4 interface for offloading. This section includes detailed descriptions of the physics model implementations and high-level summaries of the Celeritas Application Programming Interfaces (APIs). Cursory documentation for many of the classes and other data constructs is included in this user manual; further details for developers can be found in the full Doxygen-generated developer documentation.
The Celeritas codebase lives under the src/ directory and is partitioned into several libraries of increasing complexity:
- corecel for GPU/CPU abstractions,
- geocel for geometry interfaces and wrappers to external libraries,
- orange for the ORANGE platform-portable geometry implementation,
- celeritas for the GPU implementation of physics and MC particle tracking, and
- accel for the Geant4 integration library.
Additional top-level files provide access to version and configuration attributes.
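As a hedged illustration of the layering, downstream code includes headers under a subdirectory matching the library name. The paths below are representative assumptions, not an exhaustive or version-exact listing:

```cpp
// Each library builds on the ones listed before it.
#include <corecel/Types.hh>        // corecel: GPU/CPU abstractions
#include <geocel/Types.hh>         // geocel: geometry interfaces/wrappers
#include <orange/OrangeTypes.hh>   // orange: platform-portable geometry
#include <celeritas/Types.hh>      // celeritas: physics and MC tracking
#include <accel/SimpleOffload.hh>  // accel: Geant4 integration
```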
Note
When building Celeritas, regardless of the configured dependencies, all of the documented API code in corecel, orange, and celeritas (except possibly headers ending in .json.hh, .device.hh, etc.) will compile and can link to downstream code. However, some classes will throw celeritas::RuntimeError if they lack the required functionality.
If Geant4 is disabled, the accel library will not be built or installed, because every component of that library requires Geant4.
Implementation details
Development Guide¶
The agility, extensibility, and performance of Celeritas depend strongly on software infrastructure and best practices. This section describes how to modify and extend the codebase.
Examples¶
A few standalone codes demonstrate how to use Celeritas as an app and as a library, in independent and Geant4-integrated contexts.
Acknowledgments¶
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of High Energy Physics, Scientific Discovery through Advanced Computing (SciDAC) program.
This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.
This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
Finally, thanks to the many contributors to Celeritas, and to the greater HEP software community for the numerous interactions over the years.
References¶
John Allen and Ken Kennedy. Automatic Loop Interchange. In SIGPLAN Notices, volume 19, 233–246. June 1984.
J. Allison, K. Amako, J. Apostolakis, P. Arce, M. Asai, T. Aso, E. Bagli, A. Bagulya, S. Banerjee, G. Barrand, B.R. Beck, A.G. Bogdanov, D. Brandt, J.M.C. Brown, H. Burkhardt, Ph. Canal, D. Cano-Ott, S. Chauvie, K. Cho, G.A.P. Cirrone, G. Cooperman, M.A. Cortés-Giraldo, G. Cosmo, G. Cuttone, G. Depaola, L. Desorgher, X. Dong, A. Dotti, V.D. Elvira, G. Folger, Z. Francis, A. Galoyan, L. Garnier, M. Gayer, K.L. Genser, V.M. Grichine, S. Guatelli, P. Guèye, P. Gumplinger, A.S. Howard, I. Hřivnáčová, S. Hwang, S. Incerti, A. Ivanchenko, V.N. Ivanchenko, F.W. Jones, S.Y. Jun, P. Kaitaniemi, N. Karakatsanis, M. Karamitros, M. Kelsey, A. Kimura, T. Koi, H. Kurashige, A. Lechner, S.B. Lee, F. Longo, M. Maire, D. Mancusi, A. Mantero, E. Mendoza, B. Morgan, K. Murakami, T. Nikitina, L. Pandola, P. Paprocki, J. Perl, I. Petrović, M.G. Pia, W. Pokorski, J.M. Quesada, M. Raine, M.A. Reis, A. Ribon, A. Ristić Fira, F. Romano, G. Russo, G. Santin, T. Sasaki, D. Sawkey, J.I. Shin, I.I. Strakovsky, A. Taborda, S. Tanaka, B. Tomé, T. Toshito, H.N. Tran, P.R. Truscott, L. Urban, V. Uzhinsky, J.M. Verbeke, M. Verderi, B.L. Wendt, H. Wenzel, D.H. Wright, D.M. Wright, T. Yamashita, J. Yarba, and H. Yoshida. Recent developments in Geant4. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 835:186–225, November 2016. doi:10.1016/j.nima.2016.06.125.
G Amadio, Ananya, J Apostolakis, M Bandieramonte, S Behera, A Bhattacharyya, R Brun, P Canal, F Carminati, G Cosmo, V Drogan, L Duhem, D Elvira, K Genser, A Gheata, M Gheata, I Goulas, F Hariri, V Ivantchenko, S Jun, P Karpinski, G Khattak, D Konstantinov, H Kumawat, G Lima, J Martínez-Castro, P Mendez Lorenzo, A Miranda-Aguilar, K Nikolics, M Novak, E Orlova, W Pokorski, A Ribon, R Sehgal, R Schmitz, S Sharan, O Shadura, S Vallecorsa, and S Wenzel. GeantV alpha release. Journal of Physics: Conference Series, 1085:032037, September 2018. doi:10.1088/1742-6596/1085/3/032037.
J Apostolakis, M Bandieramonte, G Bitzes, R Brun, P Canal, F Carminati, G Cosmo, J C De Fine Licht, L Duhem, V D Elvira, A Gheata, S Y Jun, G Lima, T Nikitina, M Novak, R Sehgal, O Shadura, and S Wenzel. Towards a high performance geometry library for particle-detector simulations. Journal of Physics: Conference Series, 608:012023, May 2015. doi:10.1088/1742-6596/608/1/012023.
D. Attwood, P. Bell, S. Bull, T. McMahon, J. Wilson, R. Fernow, P. Gruber, A. Jamdagni, K. Long, E. McKigney, P. Savage, M. Curtis-Rouse, T.R. Edgecock, M. Ellis, J. Lidbury, W. Murray, P. Norton, K. Peach, K. Ishida, Y. Matsuda, K. Nagamine, S. Nakamura, G.M. Marshall, S. Benveniste, D. Cline, Y. Fukui, K. Lee, Y. Pischalnikov, S. Holmes, and A. Bogacz. The scattering of muons in low-Z materials. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 251(1):41–55, September 2006. doi:10.1016/j.nimb.2006.05.006.
H. A. Bethe. Molière's Theory of Multiple Scattering. Physical Review, 89(6):1256–1266, March 1953. doi:10.1103/PhysRev.89.1256.
Simon Blyth. Opticks: GPU Optical Photon Simulation for Particle Physics using NVIDIA OptiX. EPJ Web of Conferences, 214:02027, 2019. doi:10.1051/epjconf/201921402027.
René Brun, F Bruyant, Federico Carminati, Simone Giani, M Maire, A McPherson, G Patrick, and L Urban. GEANT: Detector Description and Simulation Tool; Oct 1994. W5013, CERN, 1993. doi:10.17181/CERN.MUHF.DMJ1.
J.C. Butcher and H. Messel. Electron number distribution in electron-photon showers in air and aluminium absorbers. Nuclear Physics, 20:15–128, October 1960. doi:10.1016/0029-5582(60)90162-0.
A.V. Butkevich, R.P. Kokoulin, G.V. Matushko, and S.P. Mikheyev. Comments on multiple scattering of high-energy muons in thick layers. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 488(1-2):282–294, August 2002. doi:10.1016/S0168-9002(02)00478-3.
Philippe Canal. Geant Exascale Pilot Project. September 2019. URL: https://indico.cern.ch/event/825306/contributions/3568404/.
R. Chytracek, J. Mccormick, W. Pokorski, and G. Santin. Geometry Description Markup Language for Physics Simulation and Analysis Applications. IEEE Transactions on Nuclear Science, 53(5):2892–2896, October 2006. doi:10.1109/TNS.2006.881062.
Joseph C. Collins. Testing, Selection, and Implementation of Random Number Generators. Technical Report, Defense Technical Information Center, Fort Belvoir, VA, July 2008. doi:10.21236/ADA486379.
C. J. Everett and E. D. Cashwell. A Monte Carlo Sampler. Technical Report LA-5061-MS, Los Alamos Scientific Laboratory, 1972. URL: https://doi.org/10.2172/4589395.
Hiroshi Haramoto, Makoto Matsumoto, Takuji Nishimura, François Panneton, and Pierre L'Ecuyer. Efficient Jump Ahead for F₂-Linear Random Number Generators. INFORMS Journal on Computing, 20(3):385–390, August 2008. doi:10.1287/ijoc.1070.0251.
Richard H. Helm. Inelastic and Elastic Scattering of 187-Mev Electrons from Selected Even-Even Nuclei. Physical Review, 104(5):1466–1475, December 1956. doi:10.1103/PhysRev.104.1466.
Seth R. Johnson, Stefano Castro Tognini, Elliott Biondo, Thomas Evans, Julien Esseiva, Philippe Canal, Amanda Lund, Ben Morgan, Soon Yung Jun, Guilherme Lima, Marcel Demarteau, and Paul Romano. Celeritas R&D Report: Accelerating Geant4. Technical Report ORNL/TM-2023/3204, Oak Ridge National Laboratory, January 2024. doi:10.2172/2281972.
Seth R. Johnson, Rob Lefebvre, and Kursat Bekar. ORANGE: Oak Ridge Advanced Nested Geometry Engine. Technical Report ORNL/TM-2023/3190, Oak Ridge National Laboratory, 2025.
Seth R. Johnson, Guilherme Lima, and Ben Morgan. G4VG 1.0. Github, January 2025. doi:10.5281/ZENODO.15450226.
Seth R. Johnson, Amanda Lund, Julien Esseiva, Elliott Biondo, Stefano Tognini, Tom Evans, Guilherme Lima, Hayden Hollenbeck, Soon Yung Jun, Andrey Prokopenko, Ben Morgan, Philippe Canal, and Paul Romano. Celeritas 0.4. Github, November 2023. doi:10.5281/ZENODO.15175889.
Seth R. Johnson, Amanda Lund, Julien Esseiva, Philippe Canal, Elliott Biondo, Hayden Hollenbeck, Stefano Tognini, Lance Bullerwell, Soon Yung Jun, Guilherme Lima, Damien L-G, Sakib Rahman, Ben Morgan, and Paul Romano. Celeritas 0.6. Github, April 2025. doi:10.5281/ZENODO.15281109.
Seth R. Johnson, Amanda Lund, Julien Esseiva, Soon Yung Jun, Guilherme Lima, Stefano Tognini, Ben Morgan, Hayden Hollenbeck, Vidor Heli Lujan Montiel, Philippe Canal, Elliott Biondo, Shane Hart, Damien L-G, Peter Heywood, and Tom Evans. Celeritas 0.5. Github, October 2024. doi:10.5281/ZENODO.15175891.
Seth R. Johnson, Amanda Lund, Julien Esseiva, Stefano Tognini, Elliott Biondo, Philippe Canal, Soon Yung Jun, Ben Morgan, Guilherme Lima, and Paul Romano. Celeritas 0.3. Github, June 2023. doi:10.5281/ZENODO.15175887.
Seth R. Johnson, Amanda Lund, Soon Yung Jun, Stefano Tognini, Paul Romano, Philippe Canal, Guilherme Lima, Vincent R. Pascuzzi, Ben Morgan, Tom Evans, and Doaa Deeb. Celeritas 0.1. Github, August 2022. doi:10.5281/ZENODO.15175721.
Seth R. Johnson, Amanda Lund, Stefano Tognini, Soon Yung Jun, Elliott Biondo, Philippe Canal, Guilherme Lima, Julien Esseiva, Ben Morgan, Paul Romano, Damien L-G, and Tom Evans. Celeritas 0.2. Github, January 2023. doi:10.5281/ZENODO.15175723.
Iwan Kawrakow and Alex F. Bielajew. On the condensed history technique for electron transport. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 142(3):253–280, 1998. doi:10.1016/S0168-583X(98)00274-2.
Awais Khan, Hyogi Sim, Sudharshan S. Vazhkudai, Ali R. Butt, and Youngjae Kim. An Analysis of System Balance and Architectural Trends Based on Top500 Supercomputers. In The International Conference on High Performance Computing in Asia-Pacific Region, 11–22. Virtual Event Republic of Korea, January 2021. ACM. doi:10.1145/3432261.3432263.
Donald Ervin Knuth. The Art of Computer Programming. Addison-Wesley Series in Computer Science and Information Processing. Addison-Wesley, Reading, Mass. Munich, 1968. ISBN 978-0-201-48541-7 978-0-321-75104-1.
L Landau and Isaak Pomeranchuk. Limits of applicability of the theory of bremsstrahlung electrons and pair production for high energies. Doklady Akademii Nauk SSSR, 92:535–536, 1953. URL: https://books.google.com/books/download/Limits_of_Applicability_of_the_Theory_of.pdf?id=2ii4aFRzEUkC&output=pdf&sig=ACfU3U0e3nqy1CL4oklQ1tAdKTPDkdsL9w.
Lev Davidovich Landau and Evgeny Mikhailovich Lifshitz. Electrodynamics of Continuous Media. Volume 2. Pergamon Press, Oxford, 1984.
Claude Leroy and Pier-Giorgio Rancoita. Principles of Radiation Interaction in Matter and Detection. World Scientific, New Jersey, 4th edition, 2016. ISBN 978-981-4603-20-1. URL: https://doi-org.ezproxy.cern.ch/10.1142/9167.
Teng Lijian, Hou Qing, and Luo Zhengming. Analytic fitting to the Mott cross section of electrons. Radiation Physics and Chemistry, 45(2):235–245, February 1995. doi:10.1016/0969-806X(94)00063-8.
Leif Lönnblad. CLHEP—a project for designing a C++ class library for high energy physics. Computer Physics Communications, 84(1-3):307–316, November 1994. doi:10.1016/0010-4655(94)90217-8.
George Marsaglia. Xorshift RNGs. Journal of Statistical Software, 8:1–6, 2003.
George Marsaglia and Wai Wan Tsang. A simple method for generating gamma variables. ACM Transactions on Mathematical Software, 26(3):363–372, September 2000. doi:10.1145/358407.358414.
Makoto Matsumoto, Isaku Wada, Ai Kuramoto, and Hyo Ashihara. Common defects in initialization of pseudorandom number generators. ACM Transactions on Modeling and Computer Simulation, 17(4):15, September 2007. doi:10.1145/1276927.1276928.
Elia Merzari, Steven Hamilton, Thomas Evans, Misun Min, Paul Fischer, Stefan Kerkemeier, Jun Fang, Paul Romano, Yu-Hsiang Lan, Malachi Phillips, Elliott Biondo, Katherine Royston, Tim Warburton, Noel Chalmers, and Thilina Rathnayake. Exascale Multiphysics Nuclear Reactor Simulations for Advanced Designs. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC23. Denver, CO, USA, 2023. Association for Computing Machinery. doi:10.1145/3581784.3627038.
A. B. Migdal. Bremsstrahlung and Pair Production in Condensed Media at High Energies. Physical Review, 103(6):1811–1820, September 1956. doi:10.1103/PhysRev.103.1811.
Stephen M. Seltzer and Martin J. Berger. Bremsstrahlung spectra from electron interactions with screened atomic nuclei and orbital electrons. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 12(1):95–134, August 1985. doi:10.1016/0168-583X(85)90707-4.
Stephen M. Seltzer and Martin J. Berger. Bremsstrahlung energy spectra from electrons with kinetic energy 1 keV–10 GeV incident on screened nuclei and orbital electrons of neutral atoms with Z = 1–100. Atomic Data and Nuclear Data Tables, 35(3):345–418, November 1986. doi:10.1016/0092-640X(86)90014-8.
Lawrence F Shampine. Some Practical Runge-Kutta Formulas. Mathematics of Computation, 46(173):135–150, January 1986. doi:10.2307/2008219.
Todor Stanev, Ch. Vankov, R. E. Streitmatter, R. W. Ellsworth, and Theodore Bowen. Development of ultrahigh-energy electromagnetic cascades in water and lead including the Landau-Pomeranchuk-Migdal effect. Physical Review D, 25(5):1291–1304, March 1982. doi:10.1103/PhysRevD.25.1291.
Eite Tiesinga, Peter J. Mohr, David B. Newell, and Barry N. Taylor. CODATA Recommended Values of the Fundamental Physical Constants: 2018. Journal of Physical and Chemical Reference Data, 50(3):033105, September 2021. doi:10.1063/5.0064853.
Yung-Su Tsai. Pair production and bremsstrahlung of charged leptons. Reviews of Modern Physics, 46(4):815–851, October 1974. doi:10.1103/RevModPhys.46.815.
László Urbán. A Model for Multiple Scattering in Geant4. Technical Report CERN-OPEN-2006-077, CERN, Geneva, Switzerland, 2006. URL: https://cds.cern.ch/record/1004190/.
Bureau International des Poids et Mesures. The International System of Units. Technical Report, International Bureau of Weights and Measures, 2019. URL: https://www.bipm.org/en/publications/si-brochure.
CMS Offline Software and Computing. Evolution of the CMS Computing Model towards Phase-2. CMS Note 2021/001, CERN, Geneva, Switzerland, January 2021. URL: https://cds.cern.ch/record/2751565/files/NOTE2021_001.pdf.
COPE Council. COPE Discussion Document: Authorship. Technical Report, Committee on Publication Ethics, September 2019. doi:10.24318/cope.2019.3.3.
J. M. Fernández-Varea, R. Mayol, J. Baró, and F. Salvat. On the theory and simulation of multiple elastic scattering of electrons. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 73:447–473, 1993. doi:10.1016/0168-583X(93)95827-R.
D. F. Hollenbach, L. M. Petrie, and N. F. Landers. KENO-VI: A Monte Carlo Criticality Program with generalized quadratic geometry. In Physics and Methods in Criticality Safety. Nashville, TN, September 1993.
The ATLAS Collaboration. ATLAS HL-LHC Computing Conceptual Design Report. Technical Report CERN-LHCC-2020-015/LHCC-G-178, CERN, November 2020. URL: https://twiki.cern.ch/twiki/bin/view/AtlasPublic/ComputingandSoftwarePublicResults.
The Geant4 Collaboration. Geant4 physics reference manual. 2023. URL: https://geant4-userdoc.web.cern.ch/UsersGuides/PhysicsReferenceManual/html/index.html.
Release History¶
Release history
License¶
Celeritas is copyrighted and licensed under the following terms and conditions.