Installation

Celeritas is designed to be easy to install for a multitude of use cases. Most users should install Celeritas with Spack or another package manager; developers should install the software dependencies themselves, then configure and build.

If not using a package manager, you should download an archive of the latest release or of the develop branch, or clone the repository with Git:

$ git clone https://github.com/celeritas-project/celeritas.git

Then, with the necessary dependencies installed, you can configure and build.

$ cd celeritas
$ mkdir build && cd build
$ cmake -G Ninja ..
$ ninja
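
For example, assuming all dependencies are discoverable in your environment, a manual configure might disable one optional feature and enable debug assertions; the CELERITAS_USE_Geant4 and CELERITAS_DEBUG cache variables are described under Configuring Celeritas below:

$ cmake -G Ninja -DCELERITAS_USE_Geant4=OFF -DCELERITAS_DEBUG=ON ..
$ ninja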

The following section describes details of configuration and more advanced use cases for building.

Installing with Spack

Celeritas is available through the Spack package manager, which is designed for HPC environments and scientific software, including HEP packages. The Celeritas Spack package supports a wide range of configuration options, including GPU acceleration (CUDA and HIP), geometry backends (VecGeom and ORANGE), I/O implementations (ROOT, HepMC3), and profiling (Perfetto), making it easy to install Celeritas with the exact feature set needed for the user's application. The typical HEP use case, which requires Geant4 and VecGeom, is built by default:

# Use an Nvidia A100 GPU with VecGeom
$ spack install celeritas +cuda cuda_arch=80 +vecgeom
# Use an AMD MI250x
$ spack install celeritas +rocm amdgpu_target=gfx90a
# Add celeritas to your $PATH
$ spack load celeritas
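
To see which versions and build variants the Spack package provides before installing, you can query the package description:

# Show available versions and variants (cuda, vecgeom, root, ...)
$ spack info celeritas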

Software dependencies

Celeritas is built using modern CMake. It has multiple dependencies to operate as a full-featured code, but each dependency can be individually disabled as needed.

The code requires external dependencies to build with full functionality, but no single dependency is strictly required: most can be omitted entirely to enable limited development on experimental HPC systems or personal machines with fewer available components. Items with an asterisk * in the category column below will be fetched from the internet if required but not available on the user's system.

Component     Category      Description
------------  ------------  ------------------------------------------------------
CLI11         Runtime*      Command line parsing
CUDA          Runtime       GPU computation
DD4hep        Runtime       HEP detector framework integration
Geant4        Runtime       Physics data and user integration
G4EMLOW       Runtime       EM physics model data
G4VG          Runtime       Geant4-to-VecGeom translation
HepMC3        Runtime       Event input
HIP           Runtime       GPU computation
LArSoft       Runtime       LAr detector framework integration
libpng        Runtime       PNG output for raytracing
nljson        Runtime*      Simple text-based I/O for diagnostics and program setup
Open MPI      Runtime       Distributed-memory parallelism
ROOT          Runtime       Input and output
VecGeom       Runtime       On-device navigation of GDML-defined detector geometry
Breathe       Docs          Generating code documentation inside user docs
Doxygen       Docs          Code documentation
Sphinx        Docs          User documentation
sphinxbib     Docs          Reference generation for user documentation
clang-format  Development   C++ code formatting
codespell     Development   Spell checking
CMake         Development   Build system
Git           Development   Repository management
pre-commit    Development   Formatting enforcement
GoogleTest    Development*  Test harness
Perfetto      Development*  CPU profiling

Ideally you will build Celeritas with all dependencies to gain the full functionality of the code, but there are circumstances in which you may not have (or want) all the dependencies or features available.

Note

The LArSoft metapackage in Celeritas looks for a few specific components of the full LArSoft stack: cetmodules, art, and LArSoft’s data object model.

Installing dependencies with Spack

When building locally for development or deployment, the recommended way to install dependencies is with Spack. Celeritas includes Spack environment files at scripts/spack/env-NAME.yaml for development and execution. To install these dependencies for basic use with an Nvidia GPU:

  • Clone and load Spack following its getting started instructions.

  • Run spack external find cuda to inform Spack of the existing CUDA installation.

  • Create the Celeritas development environment with spack env create celeritas scripts/spack/env-cuda.yaml

  • Activate the environment with spack env activate celeritas

  • Tell Spack to default to building with CUDA support with the command spack config add "packages:all:prefer:cuda_arch=<ARCH>", where <ARCH> is the numeric portion of the CUDA architecture flags.

  • Install all the dependencies with spack install.
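
Under the assumptions that Spack is already installed and you are targeting an A100-class GPU (so the numeric architecture is 80, from sm_80), the steps above can be sketched as a single shell session:

$ spack external find cuda
$ spack env create celeritas scripts/spack/env-cuda.yaml
$ spack env activate celeritas
$ spack config add "packages:all:prefer:cuda_arch=80"
$ spack install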

The dependency requirements for Celeritas are:

packages:
  all:
    prefer:
    - "generator=ninja build_system=cmake"
    - "build_type=Release"
    - "cxxstd=20"
  cli11:
    require: '@2.4:'
  cmake:
    require: '@3.21:'
  nlohmann-json:
    require: '@3.7.0:'
  geant4:
    require: '@10.5:'
    prefer:
    - '@11.3'
  googletest:
    require: '@1.10:'
  py-identify: # needed by py-pre-commit
    require: '@2.6:'
  python:
    require: '@3.9:'
  covfie:
    require: '@0.13:'
  g4vg:
    require: '@1.0.3:'
  vecgeom:
    require:
    - '@1.2.10:'
    - +gdml
    prefer:
    - '@1'
  root:
    require:
    - '@6.28:'
    prefer:
    - ~aqua ~davix ~examples ~opengl ~x ~tbb ~webgui +root7
  dd4hep:
    require:
    - '@1.33:'
    prefer:
    - ~utilityapps ~ddeve

and the full list of packages used by Celeritas is:

  specs:
    # Packages required for a *truly minimal* build
    - cli11
    - cmake
    - nlohmann-json
    # ... plus packages required for *basic* realistic development
    - geant4
    - git
    - googletest
    - py-pre-commit
    # ... plus *recommended* options
    - covfie
    - g4vg
    - hepmc3
    - ninja
    - vecgeom
    - root
    # documentation
    - doxygen
    - py-breathe
    - py-furo
    - py-sphinx
    - py-sphinxcontrib-bibtex
    # experimental
    - dd4hep
    - mpi
    # R&D
    - git-lfs
    - gh
    # CI
    - py-gcovr

With this environment (with CUDA enabled), all Celeritas tests should be enabled and all should pass. Celeritas is build-compatible with older versions of some dependencies (e.g., Geant4@10.6 and VecGeom@1.2.7), but some tests may fail, indicating a change in behavior or a bug fix in that package.

Once the Celeritas Spack environment has been installed, set your shell’s environment variables (PATH, CMAKE_PREFIX_PATH, …) by activating it with spack env activate celeritas.

Developer build script

Celeritas includes a build script, scripts/build.sh, that configures and builds the code on development machines. It includes environment files for quickly getting started on systems including NERSC’s Perlmutter, ORNL’s ExCL and Frontier systems, and ANL’s JLSE.

It intelligently configures the build environment by:

  • detecting the system hostname and loading the corresponding environment file from scripts/env/HOSTNAME.sh (if available)

  • detecting and loading apptainer-specific setups using environment variables

  • linking the appropriate CMake user presets from scripts/cmake-presets/HOSTNAME.json

  • detecting and enabling ccache for faster rebuilds

  • configuring, building, and testing, and

  • installing pre-commit hooks for code quality checks.

The script accepts a preset name as the first argument, followed by any additional CMake configuration arguments. For example:

$ ./scripts/build.sh default -DCELERITAS_DEBUG=ON

Common presets include minimal (fewest dependencies), default (detecting dependencies from the environment), and full (all optional features enabled). System-specific presets such as reldeb-orange may also be available.
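
For instance, assuming those presets exist on your system (the extra arguments are ordinary CMake cache settings), invocations might look like:

# Smallest possible build
$ ./scripts/build.sh minimal
# Full-featured build with documentation also enabled
$ ./scripts/build.sh full -DCELERITAS_BUILD_DOCS=ON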

The provided environment files for certain shared HPC systems may change key variables such as XDG_CACHE_HOME. In such cases, the build script will modify the shell’s rc file to source the environment script at login.

Configuring Celeritas

By default, the CMake code in Celeritas queries available packages and sets several CELERITAS_USE_package options based on what it finds, so you have a good chance of successfully configuring Celeritas on the first go. Some optional features will fail during configuration if their required dependencies are missing, but they will update the CMake cache variable so that the next configure will succeed (with that component disabled).

The interactive ccmake tool is highly recommended for exploring the Celeritas configuration options, since it provides both documentation and an easy way to toggle through all the valid options.

Additionally, CMake presets are included for both general and machine-specific cases. These presets bundle sets of useful options and compiler flags.

CMake variables

CELERITAS_USE_package

Enable features of the given dependency. The configuration will fail if the dependent package is not found.

CELERITAS_BUILTIN_package

Force a package to be built from an internally downloaded copy (when true/on) or externally installed code (when false/off).

CELERITAS_BUILD_DOCS|TESTS

Build optional documentation and/or tests.

CELERITAS_CORE_GEO

Select the geometry package used by the Celeritas stepping loop. Valid options include VecGeom, Geant4, and ORANGE. There are limits on compatibility: Geant4 is not compatible with GPU-enabled or OpenMP builds, and VecGeom is not compatible with HIP.

CELERITAS_CORE_RNG

Select the pseudorandom number generator. Current options are platform-dependent implementations of XORWOW.

CELERITAS_DEBUG

Enable detailed runtime assertions. These will slow down the code considerably. A separate CELERITAS_DEBUG_DEVICE option toggles debug checking inside device code, since device-side assertions substantially increase kernel size and build time in addition to degrading runtime performance.

CELERITAS_OPENMP

Choose between no multithreaded OpenMP parallelism (disabled), event-level parallelism for the celer-sim app, and track-level parallelism. OpenMP should be disabled with multithreaded Geant4 but will work correctly with single-threaded applications.

CELERITAS_REAL_TYPE

Choose between double and float real numbers across the codebase. This is currently experimental.

CELERITAS_RESEED

Choose when the random number generator is reseeded. Valid options include trackslot and track. With trackslot, each trackslot gets a unique random number generator. With track, every particle track gets a unique random number generator. The track option provides improved reproducibility at greater computational expense.

CELERITAS_UNITS

Choose the native Celeritas unit system: see the unit documentation.

CELERITAS_CODATA

Choose the default set of experimentally measured CODATA constants: see the constants documentation.

Celeritas libraries (generally) use CMake-provided default properties. These can be changed with standard CMake variables such as BUILD_SHARED_LIBS to enable shared libraries, CMAKE_POSITION_INDEPENDENT_CODE, etc.
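
As an illustrative (not definitive) example, several of the variables above can be combined in a single configure command:

$ cmake \
    -DCELERITAS_CORE_GEO=ORANGE \
    -DCELERITAS_REAL_TYPE=float \
    -DCELERITAS_DEBUG=OFF \
    -DBUILD_SHARED_LIBS=ON \
    ..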

CMake presets

To manage multiple builds with different configure options (debug or release, VecGeom or ORANGE), you can use the CMake presets provided by Celeritas via the CMakePresets.json file for CMake 3.21 and higher:

$ cmake --preset=default

The three main options are “minimal”, “default”, and “full”, which all set different expectations for available dependencies.

Note

If your CMake version is too old, you may get an unhelpful message:

CMake Error: Could not read presets from celeritas: Unrecognized "version"
field

which is just a poor way of saying that the version field in the CMakePresets.json file is newer than that CMake release knows how to handle.

If you want to add your own set of custom options and flags, create a CMakeUserPresets.json file or, if you wish to contribute on a regular basis, create a preset at scripts/cmake-presets/HOSTNAME.json and call scripts/build.sh {preset} to create the symlink, configure the preset, build, and test.
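
A minimal CMakeUserPresets.json might look like the following sketch; the "default" preset it inherits from is provided by the CMakePresets.json file, while the preset name and cache variables here are illustrative:

{
  "version": 3,
  "configurePresets": [
    {
      "name": "mydebug",
      "inherits": ["default"],
      "cacheVariables": {
        "CELERITAS_DEBUG": "ON",
        "CMAKE_BUILD_TYPE": "Debug"
      }
    }
  ]
}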

UPS for LArSoft

Since LArSoft and DUNE (see LArSoft for DUNE) require many infrastructure components specific to the Fermilab UPS packaging system and art framework, setting up a LArSoft build environment on a typical user system is difficult. However, the necessary dependencies are available as "build products" via the DUNE/LArSoft/Fermilab CVMFS distribution and built through a standard Fermilab-provided Apptainer image. Building Celeritas for LArSoft is trivial once the LArSoft development environment has been set up.

Note

UPS and these images are in the process of being replaced with a Spack toolchain. If you are using a Spack-based distribution of larsoft/dunesw already, you should be able to install Celeritas with the standard instructions above.

Apptainer

UPS-based builds always happen within a containerized system. These instructions demonstrate container execution for two use cases: using CUDA on the ExCL milan2 system, and without CUDA on Fermilab’s scisoftbuild01 machine.

To enable CUDA, launch the fnal-dev-sl7:latest Apptainer image, stored on CVMFS, with CUDA forwarding enabled (and the CUDA directory forwarded via -B):

APPTAINER_DIR=/usr
IMAGE_DIR=/cvmfs/singularity.opensciencegrid.org/fermilab
IMAGE=fnal-dev-sl7:latest
exec $APPTAINER_DIR/bin/apptainer \
  shell --shell=/bin/bash \
  -B /cvmfs,$CUDA_HOME,$SCRATCHDIR,$HOME \
  --nv --ipc --pid  \
  ${IMAGE_DIR}/${IMAGE}

This command is wrapped into the apptainer-fnal shell command when scripts/env/excl.sh is sourced.

The --ipc --pid options ask Apptainer to give the container isolated interprocess communication and process ID namespaces for VM-like process isolation.

On Fermilab machines, most of which require Kerberos authentication and do not have CUDA support, omit the --nv flag and forward the host's network configuration files.

APPTAINER_DIR=/cvmfs/oasis.opensciencegrid.org/mis/apptainer/current # codespell:ignore
IMAGE_DIR=/cvmfs/singularity.opensciencegrid.org/fermilab
IMAGE=fnal-dev-sl7:latest
exec $APPTAINER_DIR/bin/apptainer \
  shell --shell=/bin/bash \
  -B /cvmfs,$SCRATCHDIR,$HOME,$XDG_RUNTIME_DIR,/opt,/etc/hostname,/etc/hosts,/etc/krb5.conf  \
  --ipc --pid  \
  ${IMAGE_DIR}/${IMAGE}

This script forwards:

  • the cvmfs and /opt directories to provide build tools and products,

  • the higher-performance temporary build directories in $SCRATCHDIR,

  • the home directory for source code and shell scripts,

  • $XDG_RUNTIME_DIR to allow ssh-agent forwarding, and

  • network configuration files.

Important

Because the fnal-dev-sl7 image uses a very old operating system, the default LArG4 installation will likely fail to load when enabling CUDA with the --nv flag, which forwards a number of host libraries into the container. If this happens, you will see an error:

Unable to load requested library .../liblarg4_Services_LArG4Detector_service.so
/lib64/libc.so.6: version 'GLIBC_2.38' not found (required by /.singularity.d/libs/libGLX.so.0)

This is due to Geant4’s visualization functionality (which uses OpenGL). It can be fixed by commenting out the lines in /etc/apptainer/nvliblist.conf that start with libGL and libgl.

Tip

Apptainer overrides the $PS1 shell prompt variable even if the shell configuration in your forwarded home directory sets it. To override it inside the container, define the APPTAINERENV_PS1 environment variable on the bare-metal machine (i.e., the login node). For example:

APPTAINERENV_PS1='\D{%b %d %H:%M:%S} \u@\h|$APPTAINER_NAME:\w\n$ '

UPS and MRB

To set up Celeritas dependencies for minimal LArSoft development:

. /cvmfs/larsoft.opensciencegrid.org/setup_larsoft.sh
setup larsoft v10_20_01 -q e26:prof
setup cmake v3_27_4
setup cetmodules v3_24_01

The -q qualifiers denote the compiler version and flags. These dependencies are loaded automatically when using the build.sh script inside the Apptainer image.

Tip

Use the command ups list -aK+ package to list available packages.

Alternatively, for integration into the DUNESW development environment:

$ source /cvmfs/dune.opensciencegrid.org/products/dune/setup_dune.sh
Setting up larsoft UPS area... /cvmfs/larsoft.opensciencegrid.org
Setting up DUNE UPS area... /cvmfs/dune.opensciencegrid.org/products/dune/
$ setup dunesw v10_20_00d00 -q e26:prof

If using MRB with at least one repository (i.e. you called mrb g ...), cmake will be available in your $PATH.

Installing Celeritas

Celeritas does not currently have a UPS package. Instead, build and install it like any other CMake package: use the build script, use the LArSoft preset (which activates CELERITAS_USE_LArSoft and disables unneeded software dependencies), or configure manually:

$ git clone https://github.com/celeritas-project/celeritas.git
$ cd celeritas
$ cmake --preset=larsoft
$ cmake --build --preset=larsoft --target install

On some machines such as Perlmutter, which has Nvidia’s HPC SDK installed, you may need additional setup inside a container to configure Celeritas with CUDA:

function export-native-cuda() {
    HPCSDK_DIR="/opt/nvidia/hpc_sdk/Linux_x86_64/25.5"
    export CUDA_HOME="$HPCSDK_DIR/cuda/12.9"
    export PATH="$CUDA_HOME/bin":$PATH
    export CUDACXX="$CUDA_HOME/bin/nvcc"
    export CUDAARCHS=80 # For Nvidia A100

    export CPATH="$HPCSDK_DIR/math_libs/12.9/include:$CPATH"
    export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$CUDA_HOME/nvvm/lib64:$CUDA_HOME/extras/Debugger/lib64:$HPCSDK_DIR/math_libs/12.9/lib64:$LD_LIBRARY_PATH"
}
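
Assuming the function above has been sourced and that matching configure and build presets named default exist, a typical session inside the container might be:

$ export-native-cuda   # sets CUDA_HOME, CUDACXX, CUDAARCHS, and library paths
$ cmake --preset=default
$ cmake --build --preset=default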