WSPR for A/B tests – a discussion – part 3

Continuing from WSPR for A/B tests – a discussion – part 2.

Other tests for normality

Above is a frequency histogram of the experiment log.

I used the Shapiro-Wilk test for normality earlier. It is one of many such tests, and they each have strengths and weaknesses, or if you like, sensitivities to particular types of non-normality.
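As a sketch of the issue (assuming scipy is available; the data here is simulated, not the actual experiment log), the Shapiro-Wilk test can be run on a normal sample and on the same sample rounded to 1dB to see how the rounding affects the statistic:

```python
# Sketch: effect of 1 dB rounding on the Shapiro-Wilk statistic.
# The data is simulated, not the real experiment log.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
raw = rng.normal(loc=0.0, scale=3.0, size=2000)  # hypothetical SNR differences, dB
rounded = np.round(raw)                          # WSPR-style 1 dB granularity

w_raw, p_raw = stats.shapiro(raw)
w_rnd, p_rnd = stats.shapiro(rounded)
print(f"raw:     W={w_raw:.4f} p={p_raw:.3g}")
print(f"rounded: W={w_rnd:.4f} p={p_rnd:.3g}")
```

A low p-value for the rounded sample would flag the quantisation, not any underlying non-normality.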

Chi-squared test for normality

We could shop for a normality test that is less bothered by the rounded data. Pearson’s Chi-squared test is an obvious choice as it compares the frequency histogram on chosen classes with the expected distribution if the data were normal. So if we cleverly make the classes 1dB wide, we might have a test that is not sensitive to the rounded data. Continue reading WSPR for A/B tests – a discussion – part 3
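The idea above can be sketched as follows (assuming scipy; the data is simulated and the class pooling is simplistic). Each 1dB class is centred on an integer dB value, and the expected counts come from a normal model fitted with the sample mean and standard deviation:

```python
# Sketch: Pearson chi-squared normality test on 1 dB-wide classes.
# Simulated data, not the real experiment log; small-expected-count
# classes would normally be pooled before applying the test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
snr_diff = np.round(rng.normal(0.0, 3.0, size=4000))  # 1 dB-rounded data

mu, sigma = snr_diff.mean(), snr_diff.std(ddof=1)
# Class edges at half-dB points so each 1 dB class is centred on an integer.
edges = np.arange(snr_diff.min() - 0.5, snr_diff.max() + 1.5, 1.0)
observed, _ = np.histogram(snr_diff, bins=edges)

# Expected counts under the fitted normal model, folding the tails
# into the outer classes so the totals match the observed counts.
cdf = stats.norm.cdf(edges, mu, sigma)
p = np.diff(cdf)
p[0] += cdf[0]
p[-1] += 1.0 - cdf[-1]
expected = p * len(snr_diff)

# ddof=2 because two parameters (mu, sigma) were estimated from the data.
chi2, p_value = stats.chisquare(observed, expected, ddof=2)
print(f"chi2={chi2:.1f} classes={len(observed)} p={p_value:.3g}")
```

Because the class boundaries fall between the quantised values, the rounding alone should not inflate the statistic.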

Adjusting modulation level on FM mobiles etc.

One frequently hears FM radios on the VHF bands that are high or low in modulation level, which exacerbates the problem of copying stations whilst mobile.

The defence often given is that it is so hard to measure frequency modulation, that it takes an expensive deviation meter, and they are scarce.

This article explains how to make accurate measurements using equipment often found around ham shacks, and which could certainly be cobbled together from the resources of a few ham shacks. The figures and example given apply to nominal 25kHz channelled radios; adjustments are needed for narrow channel radios.

There are three steps where calibration is progressively transferred through a measurement chain:

  1. calibrate a modulator (an ordinary FM transmitter);
  2. calibrate a demodulator (an ordinary FM receiver) using the calibrated modulator;
  3. measure the unknown transmitter using the calibrated modulator.


1. Calibrate a modulator

The usual method of calibrating a modulator is to use the spectral properties of an FM signal.

One could use a spectrum analyser to find the calibration point, adjusting the modulation level and detecting the null of the carrier or sidebands according to the Bessel function.

Since the instrumentation is used to detect the null of a carrier or sideband component, and the null is very sensitive, a narrow band receiver can be used for the calibration procedure.

A practical approach

This is a procedure to calibrate a frequency modulator at a single modulating frequency using an SSB receiver to detect the first carrier zero.

  1. Prepare to modulate the carrier source (the transmitter) with a 1kHz (exactly) sine wave modulation source, adjust to zero modulation level and key the transmitter up.
  2. Couple a small amount of the carrier to an SSB receiver and tune in the carrier to a beat note of about 800 Hz.
  3. Slowly increase the modulation until you hear the carrier beat disappear. Carefully find this null position of the carrier beat note. Note that you will also hear one or more sidebands when the modulation is applied, ignore these and just listen for the null of the carrier.

The modulation index is now 2.405 (the first zero of the Bessel function J0), and since the modulating frequency is 1kHz, the deviation is 2.405kHz, or approximately 2.4kHz.
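The numbers behind this can be checked with a short calculation (assuming scipy): the carrier amplitude of an FM signal is proportional to J0(β), and its first zero fixes the modulation index at the null, so deviation follows from the modulating frequency:

```python
# Sketch: the FM carrier amplitude is proportional to J0(beta);
# at the first carrier null, beta is the first zero of J0.
from scipy.special import j0, jn_zeros

beta = jn_zeros(0, 1)[0]             # first zero of J0
print(f"beta = {beta:.4f}")          # -> 2.4048

f_mod = 1000.0                       # modulating frequency, Hz
deviation = beta * f_mod             # peak deviation at the carrier null, Hz
print(f"deviation = {deviation:.0f} Hz")
```

The sensitivity of the null follows from the slope of J0 near its zero, which is close to its steepest, so small changes in deviation move the carrier sharply through the null.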

The technique is very sensitive and very accurate; error will mostly be attributable to the accuracy of the modulating frequency.

Having read about it, click to listen to a demonstration. This demonstration uses an SSB receiver with a 3.5kHz IF bandwidth, but I have used the technique with receivers with a 10kHz IF bandwidth; you just hear more of the sidebands, but concentrate on the carrier beat and null it out. The test receiver could be a high quality communications receiver or a scanner with a BFO. You could sample the modulated signal at the carrier frequency, or by sniffing some signal from the IF of a super-heterodyne receiver.

2. Calibrating a demodulator

Having calibrated a modulator, we can set a receiver up to demodulate that signal and calibrate its output voltage against the known deviation of the source.

Above, an oscilloscope is connected to the receiver output and the volume control is adjusted until the peak voltage is 2.4 divisions, corresponding to peak deviation of 2.4kHz. Continue reading Adjusting modulation level on FM mobiles etc.
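The transfer of calibration above is simple proportion, sketched here with hypothetical scope readings (the 2.4 divisions for 2.4kHz deviation is from the procedure above; the measured reading is an example):

```python
# Sketch: transferring calibration from the Bessel-null source to the
# demodulator, then measuring an unknown transmitter. Numbers for the
# unknown transmitter are hypothetical.
cal_deviation_khz = 2.4       # known deviation of the calibrated modulator
cal_peak_divisions = 2.4      # volume set so the scope peak reads this

khz_per_division = cal_deviation_khz / cal_peak_divisions  # 1 kHz/div here

measured_peak_divisions = 4.8  # example scope reading on the unknown tx
deviation_khz = measured_peak_divisions * khz_per_division
print(f"{deviation_khz:.2f} kHz peak deviation")  # -> 4.80 kHz peak deviation
```

With the volume set for 1kHz per division, the scope reads peak deviation directly.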

WSPR for A/B tests – a discussion – part 2

Continuing from WSPR for A/B tests – a discussion – part 1.

Above is a frequency histogram of the experiment log.


The histogram uses 1dB intervals for the bars, so it chunks the data into discrete bands. That hides an important issue with WSPR SNR data: its granularity is 1dB, so it is a very coarse measure given the spread of the data.

Let’s compare the probability distribution of the measured difference data with an ideal normal distribution.

Above is a quantile-quantile (Q-Q) plot of the raw data and an ideal response with the same standard deviation as the raw data. The data is for 4508 points, so these dots each typically represent a large number of observations, more so in the middle region. Continue reading WSPR for A/B tests – a discussion – part 2
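A Q-Q comparison of this kind can be sketched with scipy’s probplot (simulated, 1dB-rounded data here, not the actual log; the fitted slope and intercept estimate the standard deviation and mean, and r near 1 indicates the quantiles track the ideal normal line):

```python
# Sketch: Q-Q comparison of 1 dB-rounded data against an ideal normal,
# using scipy.stats.probplot. Simulated data, not the real log.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
snr_diff = np.round(rng.normal(0.0, 3.0, size=4508))  # 4508 points, as in the log

(theoretical_q, ordered_vals), (slope, intercept, r) = stats.probplot(
    snr_diff, dist="norm")
print(f"fit: slope={slope:.2f} intercept={intercept:.2f} r={r:.4f}")
# slope estimates the standard deviation, intercept the mean.
```

On rounded data the plotted points form a staircase, while the overall fit can still appear close to the ideal line.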

WSPR for A/B tests – a discussion – part 1

The WSPR  User Manual sets out the purpose of WSPR:

The WSPR software is designed for probing potential radio propagation paths using low power beacon-like transmissions.

Though that talks about measuring radio paths, it is often used to compare transmitters or receivers over radio paths.

WSPR SNR measurements include the end to end radio path, which on some bands is highly variable, so using WSPR reported SNR values to compare two transmitters can be quite challenging.

A/B tests

We are all familiar with ad-hoc tests where a station might switch between two antennas and ask for comparative reports from receiving stations. At times when the radio path characteristics change greatly, changes at the transmitter are often masked or confused by path variation.

Of course some practitioners will conduct several so-called A/B changes, perhaps as many as five, and someone (at the receiver or transmitter) makes an informal judgement of the central tendency of the observations. The observations might be given in quite subjective terms, or in quantitative terms, possibly from an S meter of unknown calibration.

Normal distribution

Repeated measurements of the same thing, or same type of thing (eg 10 measurements of 1 new dry cell, or one measurement each of 10 new dry cells) tend to yield a set of slightly different observations.

For a lot of common physical things, the distribution of repeated measurements follows a bell shaped probability curve.

Most things that we repeatedly measure will return slightly different results from observation to observation due to various contributions in an imperfect world.

Above is a plot of the probability distribution of a normally distributed random variable with mean=1 and variance=1 (standard deviation=1). Continue reading WSPR for A/B tests – a discussion – part 1
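The curve plotted above can be reproduced numerically (assuming scipy); a few values of the pdf for mean=1, variance=1 show the familiar bell shape peaking at the mean:

```python
# Sketch: values of the normal pdf with mean 1 and variance 1,
# matching the curve plotted above.
from scipy.stats import norm

mu, sigma = 1.0, 1.0
for x in [-1.0, 0.0, 1.0, 2.0, 3.0]:
    print(f"pdf({x:+.1f}) = {norm.pdf(x, mu, sigma):.4f}")
# The peak is at x = mu, where the pdf is 1/sqrt(2*pi) ~ 0.3989.
```

About 68% of observations fall within one standard deviation of the mean, which is why a single A/B observation can easily mislead.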