A novel matrix method for determining time offsets between multiple detectors in high-energy experiments has been presented. Extensive simulations have demonstrated that this method outperforms the traditional linear approach in approximately 90% of the cases, yielding more precise time synchronization.

“With High energy comes great precision” - Abhishek Anil Deshmukh

Introduction

In the field of high-energy physics, precision is paramount. Setting up an experiment in this field is a herculean effort, requiring collaboration between multiple universities, experts, industries, and, most importantly, funders. These experiments are generally made up of different detectors which all measure the same event. Since the time precision of these detectors is extremely high, getting all the detectors synchronised is far from trivial.

Traditionally, researchers have relied on a linear method to synchronise detectors: the offset of the first detector with respect to each of the others is measured, and those offsets are used as the corrections. While this approach has served its purpose, it often falls short of the desired level of precision. To address this challenge, we developed a novel method, the matrix method, designed to optimize the time offsets between multiple detectors.
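
To make the comparison concrete, here is a minimal Python sketch of how the linear method could look, assuming the pairwise offset measurements are collected in a matrix `C` with `C[i, j]` being the measured offset of detector `j` relative to detector `i` (the function name and interface are illustrative, not the mCBM code):

```python
import numpy as np

def linear_method(C):
    """Linear-method sketch: take the measured offsets of every detector
    relative to detector 0 (the first row of the offset matrix) and use
    them directly as the corrections."""
    # C[0, j] is the measured offset of detector j relative to detector 0
    return np.asarray(C, dtype=float)[0]
```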

In this blog post, we will delve into the intricacies of detector synchronisation, explore the limitations of the linear method, and unveil the potential of the matrix method. We will share our simulation results, demonstrating the superior performance of our approach.

The Matrix Method: A Novel Approach

Imagine a group of people trying to clap in perfect sync. It's challenging, right? Now, imagine those people are high-tech detectors trying to capture particles moving at the speed of light. The stakes are even higher!

This is where the matrix method comes in. Instead of relying on a simple pairwise comparison of detector timings, we consider all detectors simultaneously. We construct a matrix representing the time offsets between every pair of detectors. By minimizing a carefully chosen loss function, we optimize all time offsets collectively.

Think of it as a complex puzzle where each piece influences the others. By solving the puzzle as a whole, we achieve a more accurate and robust solution.

The correlation matrix can be defined as follows $$ C_{ij} = \alpha_j - \alpha_i $$ Here, $\alpha_i$ is the time offset of the $i$-th detector. One needs a way to measure these entries: in mCBM we look at time-correlation plots between pairs of detectors and read off the offset from the position of the peak. Since these entries are the actual measurements, this is where the experimental uncertainty on the offsets enters.
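
As a small illustration (not the mCBM code), the noise-free correlation matrix for a known set of offsets can be written down directly with numpy:

```python
import numpy as np

def correlation_matrix(alpha):
    """Ideal (noise-free) correlation matrix C_ij = alpha_j - alpha_i
    for a vector of detector offsets alpha (e.g. in ns)."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha[None, :] - alpha[:, None]

# Example: three detectors with offsets 0, 5 and -3 ns
C = correlation_matrix([0.0, 5.0, -3.0])
# C[0, 1] == 5.0: detector 1 is measured 5 ns late relative to detector 0
```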

A good set of offsets should make all $C_{ij}$ vanish once corrected. So we minimise the sum of the corrected entries (in absolute value), which we define as the loss function:

$$\mathcal{L} = \sum_{ij}\left|C_{ij} + \alpha'_i - \alpha'_j\right|$$

Where $\alpha'_{i}$ is the current estimate of the offsets. To find the best set of offsets, we use gradient descent to minimize the loss function above. This already gives a good solution, but after testing a number of loss functions, the one that provides the best results is the following:

$$ \mathcal{L} = \sum_{ij}{\frac{\left|C_{ij} + \alpha'_i - \alpha'_j\right|}{\sqrt{\sigma_i^2 + \sigma_j^2}}} $$ Where $\sigma_i$ is the time resolution of the $i$-th detector. It is hard to say whether there exists a better one, but of the loss functions we tested, this weighted form performed best.
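
Below is a minimal numpy sketch of this minimisation. It assumes the absolute-residual form of the loss written above and uses plain gradient descent; the function name, learning rate, iteration count, and the choice of detector 0 as the reference are my own illustrative choices, not the actual mCBM implementation:

```python
import numpy as np

def matrix_method(C, sigma, lr=0.1, n_steps=10_000):
    """Matrix-method sketch: find offsets alpha' that minimise the
    resolution-weighted absolute residuals of the correlation matrix
    with plain gradient descent."""
    C = np.asarray(C, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n = C.shape[0]
    # per-pair weight sqrt(sigma_i^2 + sigma_j^2)
    w = np.sqrt(sigma[:, None] ** 2 + sigma[None, :] ** 2)
    alpha = np.zeros(n)                       # current solution alpha'
    for _ in range(n_steps):
        # residual r_ij = C_ij + alpha'_i - alpha'_j, zero for a perfect solution
        r = C + alpha[:, None] - alpha[None, :]
        g = np.sign(r) / w                    # weighted subgradient of |r_ij|
        # gradient w.r.t. alpha'_k: contributions from row k minus column k
        grad = g.sum(axis=1) - g.sum(axis=0)
        alpha -= lr * grad
    return alpha - alpha[0]                   # only differences matter: pin detector 0 to zero
```

Since only offset differences enter the loss, the overall constant is not fixed by the minimisation; pinning detector 0 to zero is simply one convenient convention.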

Simulation Results

To test the efficacy of the matrix method, we conducted extensive simulations. We compared its performance against the traditional linear method under various conditions, such as different numbers of detectors and different detector time resolutions.

In each simulation, we start with a predefined set of detector offsets and calculate the correlation matrix from them. Noise is then added to this matrix, with an amplitude given by the combined resolution of the two detectors involved, $\Delta C_{ij} = \sqrt{\sigma_i^2 + \sigma_j^2}$, where $\sigma_i$ is the time resolution of the $i^{\text{th}}$ detector. Both the linear and the matrix method are then applied to this noisy correlation matrix to recover the original offsets.
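
Reusing the `correlation_matrix`, `linear_method` and `matrix_method` sketches from above, one round of such a simulation could look like this (the offset range and the assumption of independent Gaussian noise per matrix entry are my own simplifications):

```python
import numpy as np

rng = np.random.default_rng(42)

n_det = 7
sigma = np.full(n_det, 7.0)                    # 7 ns time resolution per detector
alpha_true = rng.uniform(-50.0, 50.0, n_det)   # true offsets, illustrative range
alpha_true -= alpha_true[0]                    # detector 0 as the reference

# ideal correlation matrix plus noise of amplitude sqrt(sigma_i^2 + sigma_j^2) per pair
C = correlation_matrix(alpha_true)
noise = rng.normal(0.0, np.sqrt(sigma[:, None] ** 2 + sigma[None, :] ** 2))
np.fill_diagonal(noise, 0.0)                   # a detector is trivially in sync with itself
C_noisy = C + noise

alpha_lm = linear_method(C_noisy)              # linear-method prediction
alpha_mm = matrix_method(C_noisy, sigma)       # matrix-method prediction
```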

Below are 4 examples of cases with 7 detectors and a time resolution of 7 ns on each detector.

The matrix method (MM, in green) consistently predicts offsets closer to the true offsets (in blue) than the linear method (LM, in orange).

To look at the simulation results collectively, let us define a distance from the true solution.

$$ D = \sum_{i}{|\alpha_i - \alpha_i'|} $$ Where $D$ is the distance, $\alpha_i$ are the true offsets, and $\alpha'_i$ are the offsets predicted by the method in question.
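
In code, this distance is just the summed absolute difference between true and predicted offsets (continuing the simulation sketch above):

```python
import numpy as np

def distance(alpha_true, alpha_pred):
    """Summed absolute difference between true and predicted offsets."""
    return np.sum(np.abs(np.asarray(alpha_true) - np.asarray(alpha_pred)))

D_lm = distance(alpha_true, alpha_lm)   # distance of the linear-method solution
D_mm = distance(alpha_true, alpha_mm)   # distance of the matrix-method solution
```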

The following plot summarises $10^4$ simulation results for the same parameters as above.

  • X-axis: Distance between linear method solution and true offset.
  • Y-axis: Distance between matrix method solution and true offset.
  • Color: Number of occurrences (log scale).
  • Diagonal line: Represents equal performance between both methods.
  • Top right: Percentage of cases where the matrix method outperformed the linear method.
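
A comparison plot of this kind can be produced from the collected distances roughly as follows (a matplotlib sketch; `D_lm_all` and `D_mm_all` stand for the arrays of distances gathered from the repeated simulations and are illustrative names):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

def compare_methods(D_lm_all, D_mm_all):
    """2D histogram of matrix-method vs linear-method distances,
    one entry per simulated configuration."""
    D_lm_all = np.asarray(D_lm_all)
    D_mm_all = np.asarray(D_mm_all)
    plt.hist2d(D_lm_all, D_mm_all, bins=100, norm=LogNorm())
    lim = max(D_lm_all.max(), D_mm_all.max())
    plt.plot([0, lim], [0, lim], "k--", label="equal performance")
    plt.xlabel("distance of linear-method solution from true offsets [ns]")
    plt.ylabel("distance of matrix-method solution from true offsets [ns]")
    plt.colorbar(label="number of occurrences")
    better = 100 * np.mean(D_mm_all < D_lm_all)
    plt.title(f"matrix method closer to truth in {better:.1f}% of cases")
    plt.legend()
    plt.show()
```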

As you can see, the matrix method consistently outperforms the linear method in determining accurate time offsets. In approximately 90% of our simulations, the matrix method yielded more precise results. This improvement is particularly significant when dealing with a large number of detectors or when the time offset errors exhibit complex correlations.

The Challenge: Implementation in mCBM

The mCBM experiment at the SIS18 accelerator of GSI/FAIR is a precursor to the larger CBM experiment. It serves as a testbed for the data acquisition system, online reconstruction, and analysis techniques that will be employed in CBM. As a PhD student, I am contributing to this ambitious project by developing novel methods to enhance the precision of our experimental setup.

While the matrix method shows promising results in our simulations, its direct application to the mCBM experiment proved to be challenging. The mCBM collaboration already employs a time offset correction method, and further refinements at the nanosecond level were deemed unnecessary at the current stage of data analysis.

This decision was primarily driven by the complexities involved in incorporating such precise timing information into the existing data processing pipeline. Additionally, the expected impact on the overall experiment's physics goals was considered to be relatively minor.

However, we believe that the matrix method holds significant potential for future applications in high-energy physics. As detector technology continues to advance and experimental requirements become more stringent, methods like ours could play a crucial role in achieving optimal performance.

Conclusion

Achieving precise synchronization between multiple detectors is a critical challenge in high-energy physics experiments. The traditional linear method, while serviceable, often falls short of the required accuracy. Our proposed matrix method offers a significant improvement, demonstrating superior performance in determining time offsets through extensive simulations.

While the immediate application of the matrix method to the mCBM experiment is currently hindered by existing data processing constraints, its potential benefits for future experiments and refined analysis techniques cannot be overstated.

We believe that the matrix method represents a valuable contribution to the field of high-energy physics and encourage further exploration of its applications.