
5G Timing Requirements and the Importance of GNSS Antennas/Receivers


by Mark Miller, Product Manager, L-com


Next-generation cellular wireless standards have progressively tightened timing and synchronization requirements for the various 5G application classes: eMBB, mMTC, and uRLLC. Cooperative radio techniques have brought another level of constraint to the synchronization chain, where standards such as eCPRI impose additional restrictions stemming from the intra-PHY split for inter- and intra-band carrier aggregation (CA), MIMO, downlink Coordinated Multi-Point transmission/reception (CoMP), and uplink CoMP. The GNSS satellite constellations, with their highly accurate rubidium clocks, stand at the source of the timing chain. Understanding the general workings of GNSS receivers, and how they serve as the master clocks for tight synchronization chains, helps illuminate the timing requirements of 5G.

Stringent Timing Requirements for 5G

Backhaul infrastructure is an evolving problem in the attempt to meet the stringent latency requirements of the 3GPP cellular standards and, ultimately, the key performance indicators (KPIs) for 5G. Capacity, reach, and end-to-end (E2E) latency are all tightening across the various 5G applications, including massive machine-type communications (mMTC), enhanced mobile broadband (eMBB), and ultra-reliable low-latency communications (uRLLC). The major topology change for a 4G base station (eNodeB) split the radio and processing functions by moving the remote radio head (RRH) from the base of the tower up to the antenna, while the compute-intensive functions remained with the baseband unit (BBU). In 5G, a two-tier architecture splits the baseband processing and radio functions between three types of units: the remote radio unit (RRU), the distributed unit (DU), and the centralized unit (CU). Low-latency services fall under the distributed tier, while compute-intensive applications are performed in the centralized tier [1].

The separation between the RRU and the DU, often known as fronthaul, exposes the CPRI protocol, while backhaul connects the 5G base station (gNB) to the 5G core network. Point-to-point fiber optic propagation time averages 5 µs per km, so in order to maintain the strict 5G NR timing specifications (refer to Table 1), most fronthaul connections will not reach beyond 10 km, whereas backhaul connections have less strict timing requirements and can go beyond 20 km. The network implementations of the 5G use cases listed in Table 1 vary; for instance, an mMTC application might use a closed, plant-based network, while multiple eMBB services can share the same equipment/hardware with different fronthaul transport services [2]. Regardless of the hardware or services used for the fronthaul network, baseline E2E delays are necessary to meet NR and ITU 5G standards. Meeting latency requirements on existing infrastructure is also modeled on the different classes of service (CoS) for mobile traffic from the DU or RU; this type of scheduling can vary the amount of allocated bandwidth based on a "high priority" CoS or a "best-effort" CoS. In the expected fronthaul network for 5G schemes with an intra-PHY split (eCPRI), a "High" CoS use case includes uRLLC (fast user plane) applications with a maximum one-way frame delay of 25 µs to 500 µs, while a "Low" CoS may tolerate a one-way frame delay of up to 100 ms [3].

Table 1: General data rate and latency expectations for various 5G use cases [2]
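The propagation figure above translates directly into a reach budget. A minimal sketch (the 5 µs/km delay and the budget values are taken from the text; the helper names are my own):

```python
# Sketch: checking fiber reach against a one-way delay budget, using the
# ~5 µs/km one-way fiber propagation delay cited in the text.
PROP_DELAY_US_PER_KM = 5.0

def one_way_delay_us(distance_km: float, processing_us: float = 0.0) -> float:
    """One-way frame delay: fiber propagation plus any fixed processing delay."""
    return distance_km * PROP_DELAY_US_PER_KM + processing_us

def max_reach_km(budget_us: float, processing_us: float = 0.0) -> float:
    """Largest fiber span that still fits within the delay budget."""
    return max(0.0, (budget_us - processing_us) / PROP_DELAY_US_PER_KM)

# The tightest "High" CoS uRLLC budget of 25 µs limits reach to ~5 km of fiber,
# while a full 10 km fronthaul span alone already consumes 50 µs one-way:
print(max_reach_km(25.0))      # 5.0
print(one_way_delay_us(10.0))  # 50.0
```

This is why the strictest fronthaul use cases keep the RU close to the DU: propagation alone can consume the entire budget before any queuing or processing delay is counted.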

How the Various Functional Splits Affect Timing

As shown in Figure 1, one-way latency requirements grow tighter for lower functional splits, where the intra-PHY splits found in option 7 are specified by four different groups: 3GPP (option 7-1/7-2/7-3, option 7a/7b/7c), eCPRI (ID, IID, Iu), Small Cell Forum (I, II, III, IIIb), and xRAN/O-RAN (split 7-2x).

Figure 1: CU, DU, and RU functions according to functional splits found in 4G, (a) high layer (F1), (b) low layer (Fx), and (c) cascaded split [2]

In order to best understand the implications of tighter latency and synchronization requirements, it makes sense to analyze the most stringent timing requirements, those found in option 7. These timing requirements significantly alter the design considerations for xhaul as well as the synchronization chain from the air interface to the packet network/5G core. The specified 3GPP time alignment error (TAE) for various cellular features is listed in Table 2, along with the time errors (TE) at the user network interface (UNI). The UNI is the physical demarcation point between the RU and the transport network, i.e., the point at which responsibility passes to the subscriber or service. The relative time error (|TE|relative) is then the difference in time between the UNIs of the local cluster; the end application can either use a telecom time slave clock (T-TSC) integrated with the RU, or one provided by 1PPS or a similar interface at the edge of the transport network (UNI). The TAE is defined as the time error between antenna ports, while the absolute time error (|TE|absolute) is the difference in time between the reference clock (PRTC/T-GM) and the local clock. As Table 2 makes apparent, the time error requirements between UNIs grow tighter with complex 5G technologies such as MIMO.

Table 2: eCPRI timing error requirements for various 5G features
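The relationship between the absolute and relative time errors defined above can be sketched in a few lines (the 40 ns/-30 ns offsets and the 65 ns limit are hypothetical illustration values, not figures from Table 2):

```python
# Sketch: relating |TE|absolute and |TE|relative per the definitions in the text.
def absolute_te_ns(local_time_ns: int, prtc_time_ns: int) -> int:
    """|TE|absolute: offset of a local clock from the PRTC/T-GM reference."""
    return local_time_ns - prtc_time_ns

def relative_te_ns(te_a_ns: int, te_b_ns: int) -> int:
    """|TE|relative: time difference between two UNIs in the same local cluster."""
    return te_a_ns - te_b_ns

# Two RUs whose clocks sit +40 ns and -30 ns from the PRTC differ by 70 ns,
# which would already violate a (hypothetical) 65 ns relative-TE limit even
# though each absolute error looks small on its own.
te_a = absolute_te_ns(1_000_040, 1_000_000)
te_b = absolute_te_ns(999_970, 1_000_000)
print(abs(relative_te_ns(te_a, te_b)))  # 70
```

The point of the sketch: relative error budgets constrain the *pair* of clocks, so two individually acceptable absolute errors of opposite sign can still break a MIMO or CA feature.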

Timing from Edge to Core

From the edge to the centralized 5G core, the network can use one of several synchronization protocols, including Precision Time Protocol (PTP), Synchronous Ethernet (SyncE), and Network Time Protocol (NTP). All of these protocols enable the synchronization of slave clocks down the line to a highly stable master clock (Figure 2). The slave clocks are numerous and can include the telecom boundary clock (T-BC), the telecom transparent clock (T-TC), and the telecom time slave clock (T-TSC). The master clock, or Primary Reference Time Clock (PRTC), is most often disciplined by the GNSS satellite constellations.

Figure 2: Two different synchronization chain topologies for mobile traffic with the various master and slave clocks used
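At the heart of PTP is a simple two-way timestamp exchange. A minimal sketch of the standard IEEE 1588 offset/delay calculation, which assumes a symmetric path delay (the timestamp values below are illustrative):

```python
# Sketch: the core IEEE 1588 (PTP) offset and delay computation that the
# master/slave clock chains above rely on. Assumes a symmetric path.
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it.
    Returns (slave offset from master, mean one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# A slave clock running 100 ns ahead of the master over a 500 ns path:
offset, delay = ptp_offset_and_delay(t1=0.0, t2=600.0, t3=1000.0, t4=1400.0)
print(offset, delay)  # 100.0 500.0
```

Asymmetric paths break the symmetry assumption and show up directly as time error, which is one reason boundary (T-BC) and transparent (T-TC) clocks are deployed at intermediate hops.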

GNSS: GPS Background

Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS), the Quasi-Zenith Satellite System (QZSS), BeiDou, Galileo, and GLONASS provide the backbone for accurate timing and location/positioning in systems all over the globe. Of these, GPS is the oldest and most widely used navigation system. The GPS architecture includes three segments: the space, control, and user segments. In the space segment, a constellation of roughly 32 satellites in MEO (~26,500 km altitude) continually transmits L-band signals centered at 1575.42 MHz (L1) and 1227.60 MHz (L2) to the control and user segments, comprised of ground control stations and military/civilian receivers, respectively.

By design, the transmitted radio signals include time patterns referenced to the satellite's onboard precision rubidium clock, from which relative ranges can be computed. From this data, position can be determined, via trilateration, with a varying level of accuracy. This is accomplished through the use of two different signal modulations: the Coarse/Acquisition (C/A) code and the Precision (P) code. Both signals include pseudo-random digital codes that can be compared against the receiver's locally generated replicas to derive the pseudoranges, carrier phase, and Doppler.
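How pseudoranges become a position and a clock bias can be illustrated with a bare-bones least-squares solver. This is a simplified, illustrative sketch on synthetic data; a real receiver adds ionospheric, tropospheric, and relativistic corrections, and the satellite geometry here is invented:

```python
# Sketch: estimating receiver position and clock bias from pseudoranges by
# iterative (Gauss-Newton) least squares. Illustrative only.
import numpy as np

def solve_position(sats, pseudoranges, iterations=10):
    """sats: (N,3) satellite positions; pseudoranges: (N,) measured ranges (m).
    Returns estimated receiver position (3,) and clock bias in metres."""
    x = np.zeros(4)  # state: [x, y, z, clock_bias_m]
    for _ in range(iterations):
        ranges = np.linalg.norm(sats - x[:3], axis=1)
        residual = pseudoranges - (ranges + x[3])
        # Jacobian: negated line-of-sight unit vectors, plus 1 for the bias term
        H = np.hstack([-(sats - x[:3]) / ranges[:, None],
                       np.ones((len(sats), 1))])
        x += np.linalg.lstsq(H, residual, rcond=None)[0]
    return x[:3], x[3]

# Synthetic test: four satellites, receiver ~origin, 1 km clock bias.
sats = np.array([[20e6, 0, 0], [0, 20e6, 0], [0, 0, 20e6], [15e6, 15e6, 15e6]])
truth = np.array([1000.0, 2000.0, 3000.0])
bias = 1000.0
pr = np.linalg.norm(sats - truth, axis=1) + bias
pos, b = solve_position(sats, pr)
print(np.round(pos), round(b))
```

Note that the clock bias is solved *jointly* with position, which is exactly why at least four satellites are needed, and why a GNSS fix yields precise time as a byproduct, the property the 5G timing chain exploits.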

The GNSS Front-End

As shown in Figure 3, the GNSS receiver RF signal chain is more or less typical: antenna, prefilter, LNA, frequency downconversion, and analog-to-digital conversion before the digital signal processing. Oftentimes, the filter and preamplifier are housed within the antenna to deliver a clean signal for downconversion and signal processing. This process relies heavily on several factors: proper filtering, a low-noise LNA, and stable, clean clock source(s).

Figure 3: Functional block diagram of GNSS receiver

The Antenna Unit

Because the received signals are extremely low power, on the order of -160 dBW, the GNSS receiver is particularly susceptible to interference and must, by design, have high sensitivity and a low noise figure. It is especially difficult to obtain proper reception in urban environments, which are typically loaded with interference and obstacles that cause multipath and fading effects. The filtering accomplished in the antenna housing is vital for rejecting out-of-band signals, reducing noise, mitigating the impact of aliasing, and providing burnout protection by removing high-power RFI found within multi-located base station environments. The typical design parameters used in assessing GNSS antennas are as follows [4]:

  • Operational frequency band
  • Polarization
  • Gain and beamwidth (half power beamwidth)
  • Phase center stability 
  • Axial Ratio, cross-polarization discrimination, and multipath discrimination
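The -160 dBW figure above is worth putting in perspective against the thermal noise floor. A quick sketch, assuming a 2 MHz front-end bandwidth and a 290 K noise temperature (both assumed values, not from the text):

```python
# Sketch: thermal noise floor (kTB) versus the ~-160 dBW GNSS received power.
import math

k = 1.380649e-23  # Boltzmann constant, J/K
T = 290.0         # assumed reference noise temperature, K
B = 2.0e6         # assumed front-end bandwidth, Hz

noise_floor_dbw = 10 * math.log10(k * T * B)  # ~-141 dBW
signal_dbw = -160.0
snr_db = signal_dbw - noise_floor_dbw         # negative: signal is below the noise

print(round(noise_floor_dbw, 1), round(snr_db, 1))  # -141.0 -19.0
```

The received signal sits roughly 19 dB *below* the noise floor in this sketch; it is the despreading gain of the pseudo-random codes that recovers it, which is why clean filtering and a low-noise front end matter so much.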

Most of these parameters help to mitigate the detrimental effects of multipath at the receiver. Both GNSS transmit and receive antennas are right-hand circularly polarized (RHCP), since linearly polarized signals typically experience polarization rotation while traveling through the ionosphere due to Earth's magnetic field (the Faraday effect). A linearly polarized system would require tightly aligned feeds to overcome this rotation, and slight misalignments would readily degrade signal integrity at the receiver. Moreover, an RHCP signal has inherently higher immunity to multipath: a signal reflected once off an obstacle such as rain, sea water, or a building arrives as left-hand circular polarization (LHCP) and is rejected by the receive antenna.

As shown in Equation 1, the axial ratio (|R|), the ratio of the major to the minor axis of the polarization ellipse, describes the complete polarization state of an antenna. When |R| is 1, the polarization ellipse becomes circular, whereas an infinite |R| describes a linearly polarized radiation pattern. The cross-polarization discrimination and cross-polarization angle are directly correlated to the axial ratio, and all are functions of the elevation and azimuth angles. In general, these parameters indicate a level of polarization efficiency and, ultimately, the ability of the antenna to reject single-bounce multipath, the strongest multipath signal a receiver is likely to encounter. Typically, axial ratios for GPS antennas do not exceed 3.0 dB.
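The standard relation between axial ratio and cross-polarization discrimination (XPD) makes the 3.0 dB figure concrete (the function name is my own; the formula is the usual antenna-theory relation, see e.g. [5]):

```python
# Sketch: cross-polarization discrimination (XPD) from axial ratio.
# XPD = 20*log10((r + 1)/(r - 1)), with r the linear (voltage) axial ratio.
import math

def xpd_db(axial_ratio_db: float) -> float:
    """XPD in dB for a given axial ratio in dB (must be > 0 dB)."""
    r = 10 ** (axial_ratio_db / 20)  # dB axial ratio -> voltage ratio
    return 20 * math.log10((r + 1) / (r - 1))

# A perfectly circular antenna (0 dB axial ratio) rejects the cross-polarized
# (single-bounce multipath) component completely; at the typical 3 dB limit,
# roughly 15 dB of cross-polar rejection remains.
print(round(xpd_db(3.0), 1))  # 15.3
```

So even at the worst-case 3.0 dB axial ratio, an LHCP multipath reflection arrives some 15 dB weaker than the direct RHCP signal.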

The gain pattern must also exhibit a minimum of radiation (a null) below the horizon in order to prevent the adverse effects of multipath, interference, and jamming. Rolling the pattern off at a minimum elevation angle, however, can attenuate GPS satellite signals at low to moderate elevation angles. The half-power beamwidth (HPBW) is the angle over which the relative power is more than 50% of the peak power, and it is a relevant parameter as it relates to the receiver's access to orbiting satellites. A GNSS antenna therefore requires a high enough HPBW (~100°) and gain to acquire satellites in orbit, together with deep nulls below the horizon to mitigate interference.
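The HPBW definition above can be demonstrated numerically on a toy pattern (the cos² power pattern is a hypothetical illustration, not a real GNSS antenna model; it yields a 90° beamwidth, comparable to the ~100° the text cites):

```python
# Sketch: numerically finding the half-power beamwidth of a pattern function.
import math

def hpbw_deg(pattern, step=0.01):
    """Full angle (degrees) over which power stays above half the boresight peak,
    assuming a pattern symmetric about boresight (theta = 0)."""
    peak = pattern(0.0)
    theta = 0.0
    while pattern(theta) > peak / 2:
        theta += step
    return 2 * theta  # symmetric: full beamwidth is twice the half angle

def cos2(theta_deg: float) -> float:
    """Hypothetical cos^2 power pattern versus angle off boresight."""
    return math.cos(math.radians(theta_deg)) ** 2

print(round(hpbw_deg(cos2)))  # 90
```

A broader pattern raises satellite visibility at low elevations but erodes the below-horizon null, which is exactly the trade-off described above.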


In order to meet 5G timing accuracy requirements, GNSS receivers will likely need to achieve nanosecond precision with multi-band technologies over multigigabit Ethernet interfaces that support PTP, NTP, or SyncE. This is highly reliant not only on the receiver signal chain but also on the performance of the GNSS antenna. Naturally, this includes adequate filtering, often included in the antenna housing, as well as the appropriate radiation pattern and gain for satellite reception.


1. https://5g-ppp.eu/wp-content/uploads/2019/07/5G-PPP-5G-Architecture-White-Paper_v3.0_PublicConsultation.pdf

2. ITU Series G Supplement 66, “Transmission Systems and Media Digital Systems and Networks,” July 2019

3. http://www.cpri.info/downloads/Requirements_for_the_eCPRI_Transport_Network_V1_2_2018_06_25.pdf

4. https://www.intechopen.com/books/multifunctional-operation-and-application-of-gps/antennas-and-front-end-in-gnss

5. Rao, B. Rama. GPS/GNSS Antennas. Artech House, 2013.