Continuing from WSPR for A/B tests – a discussion – part 3.
Another technique for exploring the relationship between the paired variables is a regression model. For these experiments, a good candidate is the simple linear model SNR_B=m*SNR_A+b. A straightforward approach is to find m and b that minimise the sum of squared errors between the predicted SNR_B and the measured SNR_B.
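As a sketch of the idea, an ordinary least squares fit of this model can be done with numpy. The data below is made up for illustration (the noise level and offset are assumptions, not Richard's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative paired data: SNR_B tracks SNR_A with a small offset plus noise.
snr_a = rng.uniform(-28, 0, 5000)
snr_b = snr_a - 0.2 + rng.normal(0, 2, snr_a.size)

# Least squares fit of SNR_B = m*SNR_A + b (degree-1 polynomial).
m, b = np.polyfit(snr_a, snr_b, 1)
print(f"m={m:.3f} b={b:.3f}")
```

With real spot data, snr_a and snr_b would instead be the paired reports for each receiver and timeslot.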
Above is a frequency distribution of data extracted from a month studied in 2011. There are almost half a million spots on 40m contributing to this analysis, so it covers a wide range of propagation conditions during the month, and includes all stations spotted by all stations.
Thinking about the experiment, you might expect that with low tx power, poor SNRs result from weaker signals and are more common than high SNRs. The chart above shows that below an SNR of about -22dB, the probability of decoding falls off. Importantly, in a paired experiment, a decode at say -25dB is more likely to have a higher paired value than one at higher SNR. A similar effect occurs at very high SNR due to saturation of the detector. The shape of the probability curve warns us that although the expected value of SNR_B is close to SNR_A, bias will be introduced if SNR_A is below about -18dB or above about -2dB.
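This selection bias can be illustrated with a toy simulation (the hard decode threshold of -25dB and the 2dB measurement noise are assumptions for illustration, not the real decoder behaviour): a pair is only recorded when both decodes succeed, so near the threshold the surviving SNR_B values are skewed high relative to SNR_A.

```python
import numpy as np

rng = np.random.default_rng(2)

# True channel SNR, plus independent measurement noise at each decode.
true_snr = rng.uniform(-32, 2, 400_000)
snr_a = true_snr + rng.normal(0, 2, true_snr.size)
snr_b = true_snr + rng.normal(0, 2, true_snr.size)

# A spot pair is only recorded if BOTH decodes exceed the threshold.
threshold = -25
ok = (snr_a >= threshold) & (snr_b >= threshold)
diff = snr_b[ok] - snr_a[ok]
snr_a = snr_a[ok]

# Mean difference near the threshold vs well inside the range.
near = (snr_a >= -25) & (snr_a < -23)
mid = (snr_a >= -14) & (snr_a < -10)
print(f"near threshold: {diff[near].mean():+.2f}dB, mid range: {diff[mid].mean():+.2f}dB")
```

The mean difference is strongly positive just above the threshold and close to zero in the middle of the range, the bias discussed above.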
The chart above shows the mean, standard deviation and count of observations in Richard’s test data against SNR_A. The blue curve shows departures at the high and low end, evidence of the effects discussed above.
So, what if we discard observations outside the range -18&lt;=SNR_A&lt;=-2?
We have discarded about half of the observations; the remaining difference data still fails normality tests (1dB rounding is a killer), and we obtain the model SNR_B=1.007*SNR_A-0.194. We could calculate standard errors on these coefficients, but since they are based on an assumption of normality, they are technically invalid.
Note that the slope is 1.007. Ideally it should be 1.000. Can we tweak the regression to make slope=1 and find the intercept b?
Yes, that can be done, and it is equivalent to finding the mean of the differences (SNR_B-SNR_A), which is what we did earlier, but this time with filtered data to reduce the bias at the top and bottom of the SNR range. The mean of the differences is -0.194dB, against -0.09dB calculated earlier for the full data set, and equal to the regression intercept of -0.194 found above.
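To see why, note that with the slope fixed at 1 the sum of squared errors is Σ(SNR_B-SNR_A-b)², and setting its derivative with respect to b to zero gives b=mean(SNR_B-SNR_A). A quick numerical check (on illustrative data, not the test data):

```python
import numpy as np

rng = np.random.default_rng(3)
snr_a = rng.uniform(-18, -2, 1000)
snr_b = snr_a - 0.2 + rng.normal(0, 1, snr_a.size)

# Brute force: minimise the sum of squares over b with the slope fixed at 1.
bs = np.linspace(-1, 1, 20001)
sse = [np.sum((snr_b - snr_a - b) ** 2) for b in bs]
b_best = bs[np.argmin(sse)]

# Closed form: the mean of the differences.
b_mean = np.mean(snr_b - snr_a)
print(f"grid search: {b_best:.4f}, mean of differences: {b_mean:.4f}")
```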
Improving the data
Higher resolution data
The simplest step to improve the value of the data is to give it a chance of being normally distributed. Experimental data is commonly reduced in precision when recorded, but when the discrete steps become too large, the data suffers. An experiment was conducted with a discrete scalar normal variable over a range of N and SD values, and normality tests performed. It turns out that for large experiments (say more than 100 observations), the discrete step size needs to be smaller than SD/50.
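A sketch of that kind of experiment, using scipy's Shapiro-Wilk normality test (the SD of 2 and step of 1 loosely mirror the WSPR case; N=2000 is an arbitrary choice):

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(4)
sd = 2.0
x = rng.normal(0, sd, 2000)

# The same sample, recorded at full precision vs rounded to steps of 1 (SD/2).
p_fine = shapiro(x).pvalue
p_coarse = shapiro(np.round(x)).pvalue
print(f"unrounded: p={p_fine:.3g}, rounded to 1: p={p_coarse:.3g}")
```

The rounded sample is rejected decisively (tiny p-value) even though the underlying variable is exactly normal, while the full-precision sample passes.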
In this case, that would require WSPR SNR data reported to 0.01dB. You might think such high resolution measurements can't be that accurate, that they will incorporate measurement noise. That is true, but the statistical analysis can discover the underlying relationship, and a whole raft of techniques becomes available if the data then turns out to be normally distributed.
But for a number of reasons, this is not likely to happen.
Taking more observations helps to make the sample more representative of the underlying population from which it is drawn.
If the data is normally distributed, the width of the confidence interval is proportional to N^-0.5 for large N.
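That is, halving the confidence interval requires four times as many observations. A minimal check of the scaling (the 1dB standard deviation is an assumption for illustration):

```python
import numpy as np

def ci_half_width(n, sd=1.0):
    """95% confidence interval half-width for the mean of n observations,
    assuming normally distributed data with standard deviation sd."""
    return 1.96 * sd / np.sqrt(n)

for n in (100, 400, 1600):
    print(n, round(ci_half_width(n), 4))
# Each 4x increase in n halves the interval: width is proportional to n**-0.5.
```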
The variance of the data may be reduced by manually searching for observers that make the greatest contribution to variance (eg highly variable local noise), and excluding them.
Avoiding observers at certain distances, bearings and times may reduce the contribution to variance due to increased sensitivity to propagation conditions, polarisation, etc.
To be continued at owenduffy.net.