Although it is often easier simply to buy off-the-shelf equipment to test the synchronism of two independent systems, here we discuss the measures of synchronicity, i.e. the accuracy of time. Having gone through the synchronization basics, the timing protocols, and the technology comparison, the reader should then be able to choose among the products available in the market and appreciate them better. Two measures of the quality of any single oscillator or clock are jitter and wander. The measures of synchronization achieved between two different clocks or oscillators, by whatever means, are TIE, MTIE, TDEV, etc. These are explained below.
Quality of an Oscillator
Jitter is any deviation in the rise or fall instant of a pulse, or the change in pulse width introduced by such a deviation. The best way to visualize jitter is to superimpose several clock cycles or pulses on one another: for a highly jittery clock, the resulting pattern loses its sharp edges, compared to a clock with less jitter. Jitter, however, does not mean that the frequency is even marginally different. A 10 MHz oscillator with high jitter still oscillates 10 million times per second; however, not all of those cycles are of exactly the same width (or duration). A few are a little shorter while others are a little longer, but together they divide one second into 10 million "almost" equal parts. The precision of this "almost" is the jitter characteristic of the oscillator.
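To make this concrete, the idea can be sketched as a small simulation (the 10 MHz frequency and 5 ps RMS figure below are illustrative assumptions, not values from any standard): Gaussian timing noise is added to each ideal edge instant, and the spread of the resulting periods is the period jitter, while the average frequency stays at 10 MHz.

```python
import random

def jittered_edges(freq_hz, n_cycles, jitter_rms_s, seed=1):
    """Ideal edge instants i/freq plus Gaussian timing noise on each edge.
    The average frequency is unchanged; only the individual edges move."""
    random.seed(seed)
    period = 1.0 / freq_hz
    return [i * period + random.gauss(0.0, jitter_rms_s) for i in range(n_cycles)]

edges = jittered_edges(10e6, 1000, jitter_rms_s=5e-12)   # 10 MHz, 5 ps RMS (assumed)
periods = [b - a for a, b in zip(edges, edges[1:])]
pk_pk_jitter = max(periods) - min(periods)               # peak-to-peak period jitter
mean_period = sum(periods) / len(periods)                # still ~100 ns on average
```

Note that `mean_period` stays essentially at the nominal 100 ns even though individual periods differ, which is exactly the point made above.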
Figure: Non-aligned edges represent Jitter
In the above figure, both clocks have the same frequency, but the jittered clock has its edges displaced at several instants. Jitter can be caused by power-supply noise or by the heat generated by the oscillator itself, and it can affect any electronic circuit, system, network, or application that requires precise instants to function, including VoIP and IEEE 1588. The PDV that affects IEEE 1588 is nothing but jitter; we simply call it packet delay variation in the context of packet-switched networks. Jitter is, in essence, a statistical dispersion in the delay of events (clock pulses or packets). The dispersion can be Gaussian (random), or deterministic, e.g. duty-cycle dependent or due to interference. Jitter is specified in Unit Intervals (UI), such that one UI of jitter equals one data bit-width, irrespective of the data rate. For example, at a data rate of 2048 kbit/s one UI is equivalent to 488 ns, whereas at a data rate of 155.52 Mbit/s one UI is equivalent to 6.4 ns. Jitter causes bit errors in networks.
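The UI figures quoted above follow directly from the bit period, as a quick sketch shows:

```python
def ui_duration_ns(bit_rate_bps):
    """One Unit Interval (UI) is one bit period: 1 / bit-rate, here in ns."""
    return 1e9 / bit_rate_bps

print(ui_duration_ns(2048e3))     # E1 at 2048 kbit/s: ~488.3 ns per UI
print(ui_duration_ns(155.52e6))   # STM-1 at 155.52 Mbit/s: ~6.43 ns per UI
```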
Wander is a low-frequency variation of the clock signal with respect to a precise reference clock. By convention, low-frequency jitter is known as wander, and it is measured in nanoseconds.
Wander manifests as a phase shift of the clock over periods greater than one second. Unlike jitter, wander is easier to imagine as slight, gradual sways in the speed (ticks) of a clock, to and fro, sometimes faster and sometimes slower (due to aging or temperature changes). Because wander is accumulative in nature, it can only be partly filtered out, and it causes incorrect synchronization or even total loss of synchronization. Voice calls (fixed or cellular) get dropped, fax machines misprint, and data is lost or frequently retransmitted; reason enough to be studying synchronization here.
Besides jitter and wander there is "drift" in clock frequency, which may be unidirectional, bidirectional, or cyclical. Imagine drift as the change in position of an unanchored ship near a coast: it may drift away with the wind or the waves, or come closer. Here the coast and the ship are analogous to two different (independent) clocks, which, if not anchored (synchronized), drift apart. For example, in a radio transmitter, frequency drift can cause a station to drift into an adjacent channel, causing unlawful interference.
Jitter and wander both have an amplitude and a frequency. Wander variations occur over periods greater than 0.1 s (i.e., below 10 Hz) and vary over time; therefore, wander measurements must be performed over a long period (e.g., 24 h). Unlike jitter testing, wander testing requires a very stable reference clock (i.e., a cesium or rubidium clock).
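Since the conventional boundary between jitter and wander is 10 Hz, a measured TIE record can be split roughly into the two components. The sketch below uses a plain moving average as the low-pass filter; a real test set applies the standardized measurement filters instead, so this is only a rough illustration of the idea:

```python
def split_jitter_wander(tie, sample_rate_hz, cutoff_hz=10.0):
    """Crude split of a TIE record: a moving average whose window spans
    about 1/cutoff seconds keeps only the slow component (wander);
    the residual is the fast component (jitter)."""
    win = max(1, int(sample_rate_hz / cutoff_hz))
    wander = []
    for i in range(len(tie)):
        lo = max(0, i - win + 1)                  # growing window at the start
        wander.append(sum(tie[lo:i + 1]) / (i + 1 - lo))
    jitter = [t - w for t, w in zip(tie, wander)]
    return wander, jitter
```

A constant TIE record, for instance, comes out as pure wander with zero jitter, as expected.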
Synchronous Ethernet standards have requirements for DPLLs, with strict guidelines on jitter tolerance and jitter generation, met by employing anti-jitter circuitry in the hardware. In IEEE 1588, the clock-servo algorithms must provide the necessary jitter and PDV filtering; only then can the time-stamps be used to synchronize precisely.
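As an illustration of what such servo filtering might look like, here is a toy proportional-integral step combined with a minimum filter over a burst of offset measurements; the gains, the structure, and the min-filter heuristic are assumptions for this sketch, not anything mandated by IEEE 1588:

```python
def servo_step(offset_samples, state, kp=0.1, ki=0.01):
    """One iteration of a toy PI clock servo.  The min() acts as a simple
    PDV filter: extra queuing delay inflates the measured offset, so the
    smallest sample in a burst is taken as the least-corrupted one
    (a common 'lucky packet' heuristic, used here purely for illustration)."""
    offset = min(offset_samples)
    state["integral"] += ki * offset
    return kp * offset + state["integral"]

state = {"integral": 0.0}
correction = servo_step([12.0, 10.0, 50.0], state)   # the PDV spike (50) is ignored
```

Real servos are considerably more elaborate, but the two ingredients shown here, outlier rejection followed by loop filtering, are the ones the text refers to.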
Quality of Synchronization
Time Interval Error (TIE, usually presented as a time plot) is the phase difference between the measured signal and the reference signal, in nanoseconds.
Your watch is sometimes a few nanoseconds ahead of mine and sometimes behind, due to jitter, synchronization latency, or some other reason. When this difference is plotted against time, it looks something like the following figure. The smaller the spread and the amplitude of the spikes in this graph, the better the synchronization. If you can draw a trend line through this data, it denotes wander and drift in the clock. The whole idea of the 1588 synchronization algorithm is to process the available time-stamps and figure out this trend line in the presence of network anomalies and impairments.
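Extracting that trend line is, in essence, a least-squares fit: the slope is the fractional frequency offset between the two clocks, and whatever structure remains in the residuals indicates drift. A minimal sketch (the synthetic TIE numbers are made up for illustration):

```python
def fit_trend(t, tie):
    """Least-squares line tie = a + b*t through a TIE record.
    The slope b is the frequency offset between the clocks; curvature
    left in the residuals would indicate drift."""
    n = len(t)
    mt, my = sum(t) / n, sum(tie) / n
    b = sum((ti - mt) * (yi - my) for ti, yi in zip(t, tie)) \
        / sum((ti - mt) ** 2 for ti in t)
    return my - b * mt, b

# Synthetic TIE: 3 ns initial offset plus 2 ns/s of frequency offset.
t = [0.1 * i for i in range(100)]
tie = [3.0 + 2.0 * ti for ti in t]
a, b = fit_trend(t, tie)   # recovers a close to 3.0 and b close to 2.0
```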
Maximum Time Interval Error (MTIE) is a quantitative measure of the worst-case phase variation of a signal with respect to a perfect signal over a given period of time. It denotes peak-to-peak wander and is a monotonically non-decreasing graph, usually plotted on a log scale. In simple terms, for a given window size, the maximum TIE within the window is recorded; the window keeps moving forward as time elapses, and any new maximum (worst case) is plotted. Eventually the graph stabilizes at the maximum worst-case error seen in that time. The telecom standards provide MTIE masks that define an upper limit for this error, for selecting an oscillator or testing the quality of a synchronized clock.
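The sliding-window computation just described can be sketched directly (a real analyzer evaluates many window sizes and compares the resulting curve against the standard masks; the sample values here are made up):

```python
def mtie(tie, window):
    """MTIE for one window size: the largest peak-to-peak TIE excursion
    seen in any run of `window` consecutive samples."""
    worst = 0.0
    for i in range(len(tie) - window + 1):
        chunk = tie[i:i + window]
        worst = max(worst, max(chunk) - min(chunk))
    return worst

tie = [0.0, 3.0, 1.0, 2.0, 5.0, 4.0]
curve = [mtie(tie, w) for w in range(2, len(tie) + 1)]   # non-decreasing in w
```

Because a larger window can only contain a larger (or equal) excursion, the curve never decreases as the window grows, which is why the MTIE plot is monotonic.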
Time Deviation (TDEV) is a little complex to explain in a few short sentences. Mathematically, it is the root mean square of the band-pass-filtered TIE, with the filter centered on the frequency 0.42/t. A measurement period of at least 3t is required, and 12t is recommended, for the RMS averages to settle down. It is a measure of wander that characterizes its spectral content.
The principal message that TDEV conveys relates to the stability of a clock (or oscillator) or of synchronization. Suppose the clock is used to measure an event of duration T, and this is done many times. One source of error is the frequency offset, which introduces a fixed (constant) error every time the measurement is made. The clock noise introduces a random error each time the measurement is made. TDEV(T) is the standard deviation of this random component. [Source: SyncUniversity.org]
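The usual estimator computes TDEV at tau = n*tau0 from N phase (TIE) samples spaced tau0 apart by averaging squared second differences; the sketch below follows the textbook formula (as given, e.g., in ITU-T G.810), and the linear-ramp example at the end shows the point made above: a pure frequency offset contributes nothing, only noise does.

```python
def tdev(x, n):
    """TDEV at tau = n*tau0, from phase samples x spaced tau0 apart,
    via the second-difference estimator:
    TDEV^2 = 1/(6 n^2 (N-3n+1)) * sum_j [ sum_{i=j..j+n-1}
             (x[i+2n] - 2 x[i+n] + x[i]) ]^2"""
    N = len(x)
    if N < 3 * n:
        raise ValueError("need at least 3n samples")
    total = 0.0
    for j in range(N - 3 * n + 1):
        s = sum(x[i + 2 * n] - 2.0 * x[i + n] + x[i] for i in range(j, j + n))
        total += s * s
    return (total / (6.0 * n * n * (N - 3 * n + 1))) ** 0.5

# A pure frequency offset is a linear ramp in phase; its second
# differences vanish, so it does not contribute to TDEV at all.
ramp = [2.0 + 0.5 * i for i in range(30)]
print(tdev(ramp, 3))   # 0.0
```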
There are other measures, like ADEV (Allan deviation), MDEV, and MADEV, that are used in advanced frequency-stability tests and that you can read about elsewhere.