The rapid advancement of cloud computing, artificial intelligence, and data analytics has led to a surge in demand for higher interconnect bandwidth in data centers and high-performance computing systems. This escalating demand necessitates increased per-channel data rates for serial links that enable communication between ICs in these systems.
To meet these challenges, serial link receiver front-ends utilize an analog-to-digital converter (ADC) combined with digital signal processing (DSP), as shown in Fig. 1. This architecture enables inter-symbol interference (ISI) cancellation and symbol detection in the digital domain, leveraging the advantages of technology scaling. However, it imposes stringent requirements on the ADC, including an extremely high sampling rate and a wide Nyquist bandwidth, to support the increasing data rates.
With the rapid advancement of technology, the demands on ADCs in serial link systems have grown significantly, with required sampling rates reaching tens or even hundreds of gigasamples per second. At the same time, the increasing complexity of digital circuits, driven by the widespread adoption of DSP, has accelerated the transition to advanced process technologies.
This research explores innovative channel ADC architectures and speed enhancement techniques to achieve extremely high sampling rates while maintaining 5–6 bits of effective resolution (ENOB) in advanced CMOS technologies, with the goal of optimizing power and silicon area efficiency. Increasing the speed of a single-channel ADC lowers the required time-interleaving factor, which in turn relaxes the demands on the front-end circuits. This work aims to identify the optimal design trade-offs among performance, power, and area efficiency. The performance targets for the channel ADCs are illustrated in Fig. 2.
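The resolution and efficiency metrics used to set such targets can be made concrete with the standard definitions of ENOB and the Walden figure of merit. The sketch below uses the conventional formulas ENOB = (SNDR − 1.76 dB)/6.02 and FoM = P/(2^ENOB · fs); the numerical values are purely illustrative assumptions, not design results from this work.

```python
def enob_from_sndr(sndr_db: float) -> float:
    """Effective number of bits from measured SNDR (standard definition)."""
    return (sndr_db - 1.76) / 6.02

def walden_fom(power_w: float, enob: float, fs_hz: float) -> float:
    """Walden figure of merit in J/conversion-step: P / (2^ENOB * fs)."""
    return power_w / (2 ** enob * fs_hz)

# Illustrative example (assumed numbers, not results from this work):
# a 5.5-ENOB channel ADC sampling at 20 GS/s while dissipating 10 mW.
enob = enob_from_sndr(34.87)          # ~5.5 bits
fom = walden_fom(10e-3, enob, 20e9)   # joules per conversion step
print(f"ENOB = {enob:.2f} bits, FoM = {fom * 1e15:.1f} fJ/conv-step")
```

Such a calculation is often how channel ADC targets like those in Fig. 2 are compared against published designs on a common power-efficiency axis.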