VNAs achieve much of their accuracy by applying a set of error corrections to a measurement data set.
The error corrections are obtained by making ‘raw’ measurements of a set of known parts, most commonly a short circuit, open circuit and load resistor (the OSL parts). The correction data may assume each of these parts is ideal, or it may provide for a more sophisticated model of their imperfections. This process is known as calibration of the instrument and test fixture. nanovna-Q appears to apply some fixed compensation for the departures of the supplied SMA cal parts, which is less suited to other test fixtures.
So, when you make a measurement at some frequency, the correction data for THAT frequency is retrieved and used to correct the measurement.
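At each calibration frequency, one common scheme is a three-term error model solved from the three OSL measurements. A minimal sketch, assuming ideal standards (short = −1, open = +1, load = 0) and hypothetical function names; this is not the actual nanoVNA firmware:

```python
import numpy as np

# One-port 3-term error model: Gm = e00 + (e10e01 * Ga) / (1 - e11 * Ga)
# Rearranged, it is linear in e00, e11 and de = e00*e11 - e10e01:
#   Gm = e00 + (Ga * Gm) * e11 - Ga * de

def solve_error_terms(gm, ga):
    """gm: measured reflections of the 3 standards; ga: their assumed actuals."""
    a = np.array([[1.0, g_a * g_m, -g_a] for g_m, g_a in zip(gm, ga)],
                 dtype=complex)
    e00, e11, de = np.linalg.solve(a, np.array(gm, dtype=complex))
    return e00, e11, de

def correct(gm, e00, e11, de):
    """Invert the error model to recover the actual reflection coefficient."""
    return (gm - e00) / (gm * e11 - de)
```

With per-frequency error terms stored from the calibration run, each subsequent raw measurement at that frequency is passed through correct().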
What if there is no correction data for THAT frequency? There are two approaches:
- a calibration run is required for exactly the same frequency range and steps (linear, logarithmic, size) as the intended measurement; or
- existing calibration data is interpolated to the frequency of interest.
The interpolation method is convenient, but adds uncertainty (error) to the measurement. Some commercial VNAs will NOT interpolate.
The nanoVNA will interpolate, and with interpolation comes increased uncertainty.
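A rough feel for the size of interpolation error can be had by sampling a hypothetical error term whose phase rotates with frequency (e.g. due to electrical length in the bridge) on a ~15MHz calibration grid and interpolating linearly between points; the magnitude and delay used are illustrative assumptions only:

```python
import numpy as np

# Hypothetical error term: fixed magnitude, phase rotating with an assumed
# 1 ns delay. Cal grid: 101 points over 10-1500 MHz (~15 MHz apart).
f_cal = np.linspace(10e6, 1500e6, 101)
tau, mag = 1e-9, 0.1
e_cal = mag * np.exp(-2j * np.pi * f_cal * tau)

# Measurement frequencies fall between cal points; interpolate linearly
# in real and imaginary parts, as simple firmware might.
f = np.linspace(10e6, 1500e6, 2001)
e_true = mag * np.exp(-2j * np.pi * f * tau)
e_interp = np.interp(f, f_cal, e_cal.real) + 1j * np.interp(f, f_cal, e_cal.imag)

worst = np.max(np.abs(e_interp - e_true))  # worst-case interpolation error
```

For this smooth term the chord-vs-arc error is small; sharper frequency structure in the real bridge makes it correspondingly larger.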
An uncorrected sweep of a reasonably well known DUT is revealing of the instrument’s inherent error.
The DUT is a 12m length of LMR400.
Let’s first estimate how it should behave.
The VNA contains a directional coupler nominally designed / calibrated for Zo=50+j0Ω, and VNAs are invariably used to display measurements in terms of some purely real impedance, commonly 50Ω.
Though the DUT characteristic impedance (Zo) is nominally 50Ω, it is not EXACTLY 50+j0Ω, so the values displayed wrt 50Ω depart from what would be seen in terms of the actual Zo.
We can calculate the magnitude of Gamma for our 12m OC section of LMR400 over a range of frequencies.
|Gamma| vs frequency is a smooth decaying curve, a consequence of line attenuation increasing with frequency. Because of the small departure of the actual Zo from 50Ω, |Gamma| wrt 50Ω has a small decaying oscillation superimposed.
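That expectation can be sketched numerically. The velocity factor and the simple √f matched-loss fit below are rough assumptions for illustration, not vendor data for LMR400:

```python
import numpy as np

c0 = 299792458.0                     # free-space speed of light, m/s
length, vf, ro = 12.0, 0.85, 50.0    # line length, assumed vf, nominal Zo
f = np.linspace(1e6, 900e6, 4001)

# Assumed matched loss ~2.2 dB/100m at 100 MHz, scaling with sqrt(f)
alpha = (0.022 * np.sqrt(f / 100e6)) / 8.686    # Np/m
beta = 2 * np.pi * f / (vf * c0)                # rad/m

# Low-loss approximation to the (slightly complex) characteristic impedance
zo = ro * (1 - 1j * alpha / beta)

# Open-circuit termination: Gamma=1 at the far end, transformed to the input
g_in = np.exp(-2 * (alpha + 1j * beta) * length)    # wrt Zo
z_in = zo * (1 + g_in) / (1 - g_in)
g50 = (z_in - 50) / (z_in + 50)                     # re-referenced to 50+j0
mag_db = 20 * np.log10(np.abs(g50))
```

|g50| decays smoothly with the doubled (there-and-back) line loss, with a small oscillation superimposed because zo is not exactly 50+j0Ω.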
The phase angle of Gamma is also of interest, but it is best viewed as calculated Group Delay. Group Delay is proportional to the negative of the slope of phase vs frequency, and is measured as time.
Above is the expected Group Delay characteristic. You will note that it is an oscillation about a constant time, the actual propagation delay of the DUT. The oscillation is again a consequence of the small departure of the actual Zo from the coupler’s 50Ω. Note the oscillation is approximately constant amplitude over the frequency range.
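Group Delay can be computed numerically as the negative slope of the unwrapped phase, scaled to time; a minimal sketch (for a reflection measurement the result is the there-and-back transit, i.e. twice the one-way delay):

```python
import numpy as np

def group_delay(f, gamma):
    """Group delay (seconds) from complex reflection samples over frequencies f.

    gd = -(1/2pi) * d(phase)/df, with the phase unwrapped first so that
    wraps at +/-pi do not appear as spikes in the finite-difference slope.
    """
    phase = np.unwrap(np.angle(gamma))
    return -np.gradient(phase, f) / (2 * np.pi)
```

The frequency step must be small enough that the true phase changes by well under 180° per step, or the unwrap is ambiguous.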
Let’s look at some raw measurements and compare them with the expected behavior.
Above is the raw |s11|dB, somewhat equivalent to the calculated |Gamma|dB plotted earlier, but with some offset due to the bridge loss.
Comparing the two, we find:
- there is an underlying convex up curve (rather than a very slight convex down);
- the oscillation amplitude is relatively larger; and
- the oscillation amplitude increases with frequency (rather than decreasing).
Conveniently, nanoVNA MOD v3 is capable of direct display of Group Delay. We observe that over this frequency range it is a somewhat irregular oscillation of growing amplitude superimposed on an approximately constant term, the nominal propagation delay of the line.
The differences between the last two plots and the expected behavior are all hints of imperfection in the Port 1 measurement bridge and signal processing.
That is not reason to condemn the instrument; the correction algorithms, based on a calibration measurement of known parts, come to our aid and can to a large extent correct the imperfect raw measurement.
Firstly, the correction process itself is based on a measurement which has uncertainty, so there is measurement ‘noise’ introduced in the correction process. VNAs commonly average measurements made during the calibration process to reduce the measurement noise inherent in the calibration data set. It does not seem that the firmware used does such averaging.
The bigger issue is the uncertainty introduced by interpolation where that is used (eg when the sweep frequency set is not exactly the same as the calibration data set). As mentioned, this does not happen in many commercial instruments: as soon as sweep frequency parameters are changed, they become uncalibrated and measurements cannot be made until they are re-calibrated… inconvenient, inflexible, but more accurate.
If you have in mind a ‘universal’ calibration from 0.01-1500MHz, the 101 calibration data points are about 15MHz apart, and given the bridge imperfections indicated earlier, interpolation is likely to introduce significant error.
A more accurate calibration
The most accurate measurements start with accurate calibration and no interpolation, so calibrate for exactly the frequency range to be measured.
Sure, use interpolation as a convenience, but when final accurate figures are needed, calibrate for the intended measurement sweep.
Sweeping across the transition from fundamental to harmonic mode (or higher harmonic modes) is fraught with risk of glitches.