by Steven Pong, Product Manager, Pasternack Enterprises
Interference with commercial, industrial, public safety, and defense-related systems that use the electromagnetic spectrum is severe and getting worse, rising in direct relation to the number of emitters crowded into the most widely used frequencies: roughly 700 MHz to 3 GHz and 2.4 to 6 GHz. Modern systems employ a variety of “virtual” techniques to mitigate the effects of interference, but RF and microwave filters remain a formidable tool for the most difficult challenges. To understand why these measures are required, it helps to look at the major contributors to the problem and at the emerging services likely to make it worse in the future.
In the early days of wireless communications, there were comparatively few sources of interference, other than those created by the system itself. Today, there are more services than there is spectrum to accommodate them, at least in the most desirable portion of the spectrum in which signal propagation characteristics are favorable. Consequently, regulatory agencies such as the Federal Communications Commission in the U.S. have shuffled services around through “rebanding,” created spectrum-sharing schemes, carved out slices of spectrum between services, and reduced guard bands between channels. The result of these efforts is, inevitably, interference.
In addition, cellular, public safety, and unlicensed devices are crammed into slivers of little more than 2 GHz of total spectrum, which increases the possibility of interference. New services are also either being proposed or about to enter service within the next few years as the federal government moves to provide wireless access to the Internet in rural areas. Further, the government has already allocated new services at 3.5 GHz, has its eyes on spectrum around 5 GHz, and is seeking recommendations on how best to utilize spectrum at other frequencies as well.
The situation is no better for the Department of Defense, which has the onerous task of managing the spectrum it uses at thousands of sites in the U.S. and throughout the world. In addition to protecting their own services, the Army, Air Force, Navy, Marine Corps, Coast Guard, CIA, Department of Homeland Security, and other agencies must ensure their equipment doesn’t interfere with other services, which vary from country to country, and must be able to fend off jamming attempts by state actors, non-state actors, and even individuals. All this is occurring while DoD is attempting to better integrate the terrestrial and satellite communications, radar, and positioning, navigation, and timing systems used by its services to create a networked battlefield environment. In short, the interference challenges faced by DoD are arguably greater than those of any other organization in the world.
Consider, for example, the Arleigh Burke destroyer, which is crammed with emitters ranging from ELF through millimeter wavelengths for radar, EW, and communications, the antennas for which can often be only a few inches from each other. Combine this with the fixed wing, helicopter, and UAV aircraft on board and interference becomes a virtual certainty. When even a single system is updated, the change affects every other RF and microwave system on board, and filters are usually a solution to the interference.
The problem is so severe that the Office of Naval Research created the high-level Integrated Topside (InTop) program with the goal of reducing the number of antennas to dramatically reduce the potential for interference. It’s been supplemented by the Electromagnetic Command and Control (EMC2) program, which expands this goal to ships, ground vehicles, and terrestrial sites. The program, which could ultimately reach $800 million over five years, teams 12 contractors with the task of allowing multiple services and operating frequencies to be accommodated by a single antenna. Both programs have formidable goals, but no matter how successful they are, interference will still be a problem.
The Looming Presence of IoT
The Internet of Things sounds innocent enough, but because it requires connecting a huge number of devices, many in close proximity, to each other and to the Internet, and because so many of them operate at 2.45 GHz, interference will almost certainly occur somewhere in the network. Short-range wireless solutions such as Bluetooth®, Wi-Fi, ZigBee, Thread, and Z-Wave are limited to very low RF output power by design. Put enough of them together, however, and the electromagnetic environment becomes so dense that filters with extremely high levels of rejection will be required. The PE8701 bandpass filter is such a device: it has a passband of 2.4 to 2.5 GHz with insertion loss of 1 dB and 50 dB of rejection outside the passband, and it handles 5 W CW (Figure 1).
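To put the quoted figures in perspective, the sketch below converts the filter's 1 dB insertion loss and 50 dB rejection into linear power ratios. This is ordinary dB arithmetic, not data from a datasheet beyond the numbers already cited above.

```python
def db_to_power_ratio(db):
    """Convert a dB figure to a linear power ratio: 10^(dB/10)."""
    return 10 ** (db / 10.0)

# In-band: 1 dB insertion loss means ~79% of the desired
# signal power still reaches the receiver.
passband_fraction = 1 / db_to_power_ratio(1.0)

# Out of band: 50 dB rejection means only 1 part in 100,000
# of an interferer's power leaks through the filter.
stopband_fraction = 1 / db_to_power_ratio(50.0)

print(f"Passband: {passband_fraction:.3f} of input power delivered")
print(f"Stopband: {stopband_fraction:.0e} of interferer power leaks through")
```

In other words, a 2.45 GHz interferer arriving at the same level as the desired signal emerges from the filter 100,000 times weaker, which is why high-rejection filters remain effective even in a dense electromagnetic environment.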
There are already examples of IoT devices interfering with radar detectors and garage door openers, the latter increasingly using wireless communications between the control board and keypads. Radar detectors already rely on proprietary techniques to filter out signals from the collision avoidance radar systems that are becoming standard even on lower-priced vehicles. Even a so-called “smart” LED light bulb requires an integrated electronic circuit, which generates interfering signals that can defeat the garage door opener’s communications path. There are many more examples—and IoT is only in its earliest stages.
An Accident Waiting to Happen?
Although typically not considered an IoT application, the looming autonomous vehicle onslaught will be one of the most complicated communications (and IoT) applications ever devised. As these vehicles ultimately roam the streets and highways throughout the world, they’ll be connected to one of two networks:
The Dedicated Short-Range Communications (DSRC) band at 5.9 GHz allocated for this purpose by the FCC in the 1990s and made an IEEE standard (802.11p).
A cellular network using either of the two low-power, low-data-rate IoT protocols, LTE-M or NB-IoT.
At the moment there is no definitive answer as to which of these solutions will be used in the U.S., even though DSRC was supposed to be the de facto choice. The cellular industry, however, has recently stepped up its marketing and political efforts to make its solution, which it calls cellular V2X, the winner. In practice it matters little which one prevails: both will operate at 5.9 GHz, the frequency already allocated for intelligent transportation systems, and both will require connectivity between vehicles, to and from external sensors (primarily cameras), and with GPS, Wi-Fi, and whatever cellular standard is in use once driverless vehicles are deployed. If DSRC is chosen, tens or even hundreds of thousands of Roadside Units (RSUs) will also be needed for the vehicles to communicate with.
Unlike many other communications applications, intelligent transportation systems like autonomous vehicles cannot tolerate even minimal degradation or failure of data links, as vehicles must be updated in near real time. This obviously presents an enormous challenge in itself, but if interference is present (an inevitability), the results could be not just a dropped call but a fatal accident.
Public Safety Communications
Beginning in 2004, the FCC began its 800 MHz “rebanding” initiative to ensure that public safety operation would be less affected (i.e., interfered with) by nearby cellular services by moving operating frequencies of narrowband and broadband services away from each other. At that time, the issue was protecting the incumbent public safety land mobile radios from interference from Nextel Communications’ iDEN systems.
Nextel (and later Sprint, which acquired the company) ultimately agreed to make the move. The process was supposed to be completed by the end of 2008 but took nearly a decade longer. Rebanding is now almost complete, but it apparently hasn’t entirely solved the problem: reports indicate that interference persists in some scenarios.
However, even before the hands-on work got underway, it was realized that the move alone would not solve the problem, and it didn’t. It should be noted that a solution proposed by Jay Jacobsmeyer, owner of Pericle Communications, was to use filters to keep out-of-band emissions in check, and he even described a ceramic filter placed between the radio and antenna that could make a significant improvement. This simple, inexpensive solution appears never to have been widely adopted.
Although Verizon and other carriers use LTE at 700 MHz, carriers will also be deploying it around 800 MHz, which once again poses the possibility of interference. Fortunately, public safety is not to be trifled with for obvious reasons and the cellular industry, FCC, and public safety organizations have been somewhat less at odds since rebanding began.
There is also FirstNet, the first broadband data (and possibly voice) network ever deployed nationwide, which will allow information sharing via LTE for voice, data, video, images, and text without concerns about network congestion. AT&T was awarded the contract to build and maintain the network, which operates at 700 MHz, for 25 years. Deployment is likely to be completed sooner than expected, with at least 10,000 towers, more than a third of the total, in place by the end of the year. The possibility of interference exists in this system as well.
One of the latest ways for the FCC to “create new spectrum” is through spectrum sharing, in which new services can operate on the same frequencies as incumbent ones. None of the ways to achieve this are simple, and to date there have been no successful shared deployments, although the use of TV “white spaces” in the VHF and UHF spectrum is ongoing, and sharing is the core premise of the new Citizens Broadband Radio Service (CBRS) at 3.5 GHz.
CBRS requires an elaborate scheme called a Spectrum Access System (SAS) to ensure that new services do not interfere with the incumbents, most of which are coastal radars; white-space devices rely on a similar geolocation database. It doesn’t take much imagination to realize that there is probably no way interference will be eliminated using this approach.
An Old Interferer Revisited
Very-low-frequency noise is produced by virtually every system powered from the AC line: every digital clock, set-top box, TV set, microwave oven, fluorescent or LED light bulb, and any device using a switching power supply creates interference that can swamp a receiver. In addition, although computing equipment must meet emissions rules through shielding, it still emits prodigious amounts of noise.
Electrical interference detectable by a receiver has been around since the earliest days of radio. When modulation techniques were analog, it was easy to spot: just listen to an AM radio. Today’s transmission schemes are digital, and a smartphone won’t audibly demonstrate interference until the signal-to-noise ratio is so low that a call or data session drops. Even when this noise is imperceptible, it is silently degrading receiver performance.
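The silent degradation described above is easy to quantify: noise powers add in linear units (milliwatts), not in decibels, so even an interferer well below the signal can noticeably erode SNR. The sketch below uses illustrative levels chosen for the example, not figures from the article.

```python
import math

def dbm_to_mw(dbm):
    """Convert a power level in dBm to milliwatts."""
    return 10 ** (dbm / 10.0)

def mw_to_dbm(mw):
    """Convert a power level in milliwatts to dBm."""
    return 10.0 * math.log10(mw)

# Assumed example levels (illustrative only):
signal_dbm = -90.0          # received signal
thermal_noise_dbm = -110.0  # receiver noise floor
interference_dbm = -105.0   # broadband switching-supply noise in-band

# Noise contributions must be summed in linear units, then
# converted back to dBm.
total_noise_dbm = mw_to_dbm(
    dbm_to_mw(thermal_noise_dbm) + dbm_to_mw(interference_dbm)
)

snr_clean = signal_dbm - thermal_noise_dbm   # 20 dB without interference
snr_degraded = signal_dbm - total_noise_dbm  # several dB worse with it

print(f"SNR without interference: {snr_clean:.1f} dB")
print(f"SNR with interference:    {snr_degraded:.1f} dB")
```

With these assumed numbers, an interferer 15 dB below the signal still costs the receiver roughly 6 dB of SNR, which is exactly the kind of invisible margin loss a digital link quietly absorbs until the call drops.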
Filters to the Rescue?
Add up these problems, of which the above are only the top tier, and the result is an electromagnetic blanket covering every square inch of populated places. Wherever interference rears its head, the solution will be provided by RF and microwave filters, which remain the most viable tools for mitigating interference in deployed systems. They’ve saved countless base stations, radars, and assorted other systems faced with errant signals from being degraded or rendered useless. Now they’ll be called on to deliver the same benefits to a new family of systems facing a very old menace.