1 Introduction

By the start of the third long shutdown of the LHC in late 2023, the accumulated radiation damage to the current endcap calorimeters of the CMS detector is expected to be so severe that new endcap calorimeters must be installed. In addition, to cope with the increase in luminosity proposed by the high-luminosity LHC (HL-LHC) project, with the resulting large number of simultaneous interactions at each bunch-crossing (pile-up) and increased radiation damage, the High Granularity Calorimeter (HGCAL) project [1] is developing a sampling calorimeter based on the CALICE/ILC concepts [2], but tailored to the CMS/LHC conditions. Compared to the CALICE design, the HGCAL will include high-precision time-of-arrival (TOA) measurement capability as a means of reducing the impact of the high pile-up, as well as circuits for generating reduced-resolution energy measurement data for the trigger, and it must respect a tight power budget to keep the cooling requirements of the detector manageable. Although the HGCAL will use both silicon sensors and scintillator+silicon-photomultiplier sensors, read out by very similar front-end electronics, this paper will focus on the electronics requirements for the more challenging silicon sensors.

2 Front-End Electronics

A sketch of the currently envisioned architecture for the HGCAL frontend is shown in Fig. 1. To enable calibration using minimum ionizing particles (MIPs), which deposit about 3.6 fC in a 300 \(\mu \)m sensor, an equivalent input charge noise of less than 0.32 fC (2000 e\(^-\)) is specified. To reach the 15-bit dynamic range required to cover the specified 10 pC full-scale range, a time-over-threshold (TOT) circuit will extend the measurement range of the charge-preamplifier and Analog-to-Digital Converter (ADC) path beyond the 11-bit range of the ADC. To simplify the logic for summing (nominally four) sensor cells to form the reduced-resolution trigger data, as well as to simplify the offline (but still time-critical) high-level trigger analysis, the linear regions of the ADC and TOT paths should exhibit some overlap. A high dynamic-range charge injection circuit with high relative accuracy will be used for transferring the MIP-based calibration to the TOT range.
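As a simple cross-check of these numbers (an illustrative calculation only, not taken from the HGCAL design documents), the noise floor, the MIP signal and the full-scale charge can be related to the quoted bit counts as follows; the assumption that one ADC least-significant bit corresponds roughly to the noise floor is made purely for illustration.

```python
# Illustrative arithmetic only; the charge values are taken from the text,
# the derived bit counts are approximate.
import math

mip_charge_fC = 3.6     # MIP deposit in a 300 um silicon sensor
noise_fC      = 0.32    # specified equivalent input charge noise (~2000 e-)
full_scale_fC = 10e3    # 10 pC full-scale requirement
adc_bits      = 11      # resolution of the ADC path

# Total dynamic range: full scale referenced to the noise floor.
dynamic_range_bits = math.log2(full_scale_fC / noise_fC)
print(f"required dynamic range ~ {dynamic_range_bits:.1f} bits")  # ~14.9, i.e. 15 bits

# Charge reachable by the ADC path alone, ASSUMING one LSB ~ the noise floor
# (an illustrative assumption, not a design statement); the TOT path must
# cover the remainder up to 10 pC, with some overlap between the two.
adc_reach_fC = noise_fC * 2**adc_bits
print(f"ADC path reaches ~ {adc_reach_fC/1e3:.1f} pC of the 10 pC full scale")
```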

To enable the required 20 ps TOA resolution, the front-end must have a large gain-bandwidth product, and a phase-stable reference clock signal must be distributed to each front-end ASIC. The shaping time of the front-end is a trade-off between electrical noise and measurement corruption due to the slow decay of charge deposited in preceding bunch-crossings. As shown in Fig. 2, digital filtering [3] can mitigate the latter, and would thus allow the shaper time constant to be increased from 10–15 ns to 30 ns. Finally, the whole analog front-end must respect a power budget of only 10 mW per channel.
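The digital filtering mentioned above effectively deconvolves the slow shaper tail. For a CR-RC shaper with equal time constants, the impulse response sampled once per bunch-crossing has a double pole, which a 3-tap FIR filter can cancel. The sketch below illustrates the principle; the normalization and sampling phase are simplifying assumptions, and the coefficients are not the actual HGCAL filter values.

```python
# Minimal sketch of the tail-cancellation idea behind a 3-tap FIR filter
# for a CR-RC shaper with equal time constants (illustrative only).
import numpy as np

tau = 30.0   # shaper time constant in ns (value discussed in the text)
T   = 25.0   # LHC bunch-crossing spacing in ns
p   = np.exp(-T / tau)

# CR-RC impulse response h(t) = (t/tau)*exp(-t/tau), sampled once per bunch crossing.
n = np.arange(12)
h = (n * T / tau) * p**n

# The sampled response has a double pole at z = p, which produces the long tail;
# the FIR (1 - p z^-1)^2 cancels it, confining the response to a single sample.
fir = np.array([1.0, -2.0 * p, p**2])
h_filtered = np.convolve(h, fir)[:len(n)]

np.set_printoptions(precision=3, suppress=True)
print("before filtering:", h)
print("after filtering: ", h_filtered)
```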

Fig. 1.

The current concept for the HGCAL frontend. Charge deposited in the sensor cell on the left is converted to a voltage by the charge-sensitive preamplifier, before being digitized either using the ADC or the TOT circuit. At the same time the time of arrival is measured by a TOA circuit. Not shown: circuits for sensor leakage-current compensation, and circuits that enable sensors of either polarity to be used. From CMS-CR-2017-156. Published with permission by CERN.

Fig. 2.

Left: An illustration of the equivalent input charge noise as a function of shaping time for CR-RC shaping with equal time constants, based on a simulation with an input voltage noise of 0.5 nV/\(\sqrt{\mathrm{Hz}}\) and a total sensor+PCB+frontend capacitance of 80 pF. 10-bit digitization after the first amplification stage is assumed. Right: The system impulse response before and after digital (FIR) filtering using 3 coefficients for \(\tau \)=30 ns. From CMS-CR-2017-156. Published with permission by CERN.

3 Data Readout and Processing

An overview of the current system design concept for the HGCAL detector is shown in Fig. 3. At the full 40 MHz acquisition rate, with 6 million channels and about 34 bits per channel, the raw data rate generated in the detector will be just above 8 Pbit/s, which would require some 800,000 10 Gbit/s optical links to read out. To keep the cost of the system acceptable, this number has to be reduced by almost two orders of magnitude. This will be achieved using on-detector buffer memory capable of storing 12.5 \(\mu \)s worth of data, to be read out in response to the CMS trigger at a rate of up to 750 kHz. These data are also zero-suppressed by omitting charge deposits close to the noise level, and by only transmitting the TOT and TOA data fields when these contain meaningful results. These measures are expected to reduce the triggered data bandwidth enough to fit into about 6000 10 Gbit/s lpGBT [4] links.
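The headline numbers above follow from straightforward arithmetic, reproduced in the sketch below; the per-channel event size and usable link bandwidth are the approximate values quoted in the text, and the resulting link counts are rough estimates rather than final system figures.

```python
# Back-of-the-envelope reproduction of the raw and triggered data rates
# quoted in the text (illustrative only).
n_channels       = 6e6     # total number of readout channels
bits_per_channel = 34      # approximate raw event size per channel
bx_rate_hz       = 40e6    # LHC bunch-crossing rate
l1_rate_hz       = 750e3   # maximum L1 trigger (readout) rate
link_bw_bps      = 10e9    # nominal 10 Gbit/s per lpGBT link

raw_rate_bps = n_channels * bits_per_channel * bx_rate_hz
print(f"raw rate      : {raw_rate_bps/1e15:.1f} Pbit/s "
      f"(~{raw_rate_bps/link_bw_bps:,.0f} links)")           # ~8.2 Pbit/s, ~816,000 links

# The L1 trigger alone reduces the rate by the ratio of trigger rate to
# bunch-crossing rate; zero suppression of the charge, TOT and TOA fields
# then brings the link count down to the ~6000 quoted in the text.
triggered_rate_bps = n_channels * bits_per_channel * l1_rate_hz
print(f"triggered rate: {triggered_rate_bps/1e12:.0f} Tbit/s "
      f"(~{triggered_rate_bps/link_bw_bps:,.0f} links before zero suppression)")
```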

Fig. 3.

The signals deposited in each of the approximately 6M sensors cells, distributed over about 600 m\(^2\) of sensor wafers, are digitized by frontend ASICs located on sensor PCBs that connect to the sensor wafers using through-hole wire bonds. The digital data streams from the frontend ASICs will be sent over electrical links to the panel PCBs where streams from multiple sensors are aggregated in concentrator ASICs before being sent off-detector using lpGBT-based optical links. From CMS-CR-2017-156. Published with permission by CERN.

At the same time, the HGCAL detector must contribute real-time information to the L1 trigger decision. As mentioned above, the spatial granularity of the trigger data will be reduced by a factor of four within the front-end ASICs by summing adjacent cells into trigger cells, and only half of the layers will be used for generating trigger data. Additionally, only trigger cells, or possibly regions of trigger cells, in which substantial deposits have been detected will be transmitted. All in all, we expect to use on the order of 8000 optical links to bring the trigger information out of the detector.
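A rough estimate of the trigger-path bandwidth can be obtained in the same spirit; the number of bits per trigger cell and the occupancy after the threshold cut used below are assumptions made purely for illustration, chosen only to show that the resulting link count lands in the same ballpark as the figure quoted above.

```python
# Purely illustrative estimate of the trigger-path bandwidth; bits_per_tc and
# occupancy are ASSUMED values for illustration, not HGCAL design figures.
n_trigger_cells = 6e6 / 4   # 4:1 summing of sensor cells into trigger cells
trigger_layers  = 0.5       # only half of the layers provide trigger data
bits_per_tc     = 8         # ASSUMED reduced-resolution energy word
occupancy       = 0.3       # ASSUMED fraction of trigger cells above threshold
bx_rate_hz      = 40e6
link_bw_bps     = 10e9

trigger_rate_bps = (n_trigger_cells * trigger_layers * occupancy
                    * bits_per_tc * bx_rate_hz)
print(f"trigger data rate ~ {trigger_rate_bps/1e12:.0f} Tbit/s "
      f"(~{trigger_rate_bps/link_bw_bps:,.0f} links, before protocol overhead)")
```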

Once out of the detector, the energy measurements from the individual trigger cells must be merged into 3-dimensional clusters. The current approach is based on finding connected regions around “seed” hits (cells exceeding a comparatively high energy threshold) on each layer to form 2D clusters, and then, in a second processing step, merging these into 3-dimensional clusters that correspond to incoming particles. To meet throughput and latency requirements, these operations will be implemented in a bank of FPGAs, likely housed on ATCA boards about 70 m from the detector.
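The following sketch shows one possible, much simplified software realization of this two-step approach; the thresholds, distance cuts and data structures are illustrative assumptions, and the production system will implement the algorithms in FPGA firmware rather than in Python.

```python
# Toy two-step clustering: 2D seeded clusters per layer, then 3D merging.
# All parameters below are illustrative assumptions, not HGCAL algorithm values.
from dataclasses import dataclass

@dataclass
class Hit:
    layer: int
    x: float       # position in the layer plane (mm)
    y: float
    energy: float  # reduced-resolution energy (arbitrary units)

SEED_THRESHOLD = 5.0   # ASSUMED seed energy threshold
CLUSTER_RADIUS = 30.0  # ASSUMED 2D association radius (mm)
MERGE_RADIUS   = 20.0  # ASSUMED transverse distance for 3D merging (mm)

def cluster_layer(hits):
    """Step 1: form 2D clusters around seed hits on a single layer."""
    seeds = sorted((h for h in hits if h.energy > SEED_THRESHOLD),
                   key=lambda h: h.energy, reverse=True)
    clusters, used = [], set()
    for seed in seeds:
        if id(seed) in used:
            continue
        members = [h for h in hits if id(h) not in used
                   and (h.x - seed.x)**2 + (h.y - seed.y)**2 < CLUSTER_RADIUS**2]
        used.update(id(h) for h in members)
        e = sum(h.energy for h in members)
        clusters.append((sum(h.x * h.energy for h in members) / e,
                         sum(h.y * h.energy for h in members) / e, e, seed.layer))
    return clusters  # (x, y, energy, layer) tuples

def merge_3d(clusters_2d):
    """Step 2: merge 2D clusters from different layers that line up transversely."""
    clusters_3d = []
    for x, y, e, layer in sorted(clusters_2d, key=lambda c: -c[2]):
        for c3d in clusters_3d:
            if (x - c3d["x"])**2 + (y - c3d["y"])**2 < MERGE_RADIUS**2:
                c3d["energy"] += e
                break
        else:
            clusters_3d.append({"x": x, "y": y, "energy": e})
    return clusters_3d

# Toy usage: two layers containing one genuine shower plus a low-energy stray hit.
layer_hits = {
    0: [Hit(0, 10, 10, 12.0), Hit(0, 15, 12, 3.0), Hit(0, 200, 50, 1.0)],
    1: [Hit(1, 11,  9,  9.0), Hit(1, 14, 11, 2.0)],
}
all_2d = [c for hits in layer_hits.values() for c in cluster_layer(hits)]
print(merge_3d(all_2d))
```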

4 Conclusions and Future Work

While the high luminosity planned for the HL-LHC will speed up the accumulation of statistics substantially, it has significant implications for the new endcap calorimeters of CMS. To date, rapid progress has been made by leveraging experience and R&D from the CALICE project. Nevertheless, significant work remains to develop a detector that can meet the requirements at CMS during the HL-LHC era in terms of pile-up rejection, power consumption and data throughput. A first testbeam experiment in 2016 [5] delivered promising results, and a second testbeam during the summer of 2017 will explore TOT and TOA measurements using a front-end ASIC made specifically for this test. Two sets of prototype circuits for the HGCAL frontend have been manufactured, with a third prototype due to be submitted for manufacturing in June 2017.

Looking ahead, the HGCAL project is due to deliver a Technical Design Report by November 2017, with full-architecture system tests planned for 2018. This rapid pace is necessary as production needs to start in 2020 in order to have the final detector ready for installation in 2024–2025.