The worst one was disassembled. These are built a bit like mobile phones; the manufacturer did not want them being disassembled.
Here are pics of the evidence of the problem.
Above is the component side of the PCB, the right hand side is opposite to the rotary encoder for the jog wheel. The PCB around the hand soldered terminals of the encoder has a large amount of flux residue which is usually corrosive to a greater or lesser extent. Over time with atmospheric humidity, this spreads wider and wider, and in this case has penetrated both the encoder and the switch which is to the right of the PCB opening.
Above is a pic of flux residue / corrosion where the LDR is hand soldered to the board.
This corrosion is a systemic problem: it is about the quality of the flux used for hand soldering.
This is not economically repairable, at a minimum it needs procurement of replacement LDR, encoder and tactile switch, cleaning the PCB and replacement of the parts. There may be other parts affected.
Above, a peek into the second unit shows the same corrosive residue under the rotary encoder. It probably has other instances, certainly the jog switch is unreliable.
These are destined for the bin, there is no point trying to repair them.
It speaks volumes for ISDT who are probably a fabless designer and chose poor quality manufacture.
I don’t see myself risking purchase of any more ISDT product.
Above is the graph scaled R/ω and jX/ω, with an untitled X axis, though it would appear to be frequency in Hz (scaled by the M multiplier).
I had difficulty reconciling the Y values plotted for R/ω and jX/ω with the displayed R,jX values.
David F4HTQ offers the following explanation online.
I add some explanations.
I asked Rune if he could add this graphic because it is very useful.
It display curves that have exactly the same shape as the complex permittivity curves (µ’r and µ”r) of the ferrite datasheets. The values do not match those of the constructor curve (to have the right value the software might know the exact geometry of the inductor), but the shape is absolutely identical.
This allow to easy identify unknown ferrite core, and to better understand how to use it in a RF device.
He says permittivity… but he is talking about permeability.
The quote seems to say the Y axis scale is worthless?
In any event, the underlying R,X data only follows µ at frequencies well below the self resonant frequency (SRF) of the inductor.
I have a small ferrite cored inductor, 4t on a FB-43-2402, which I will measure with s11 over 1–10MHz. A marker was set at nominally 1.5MHz; it is actually 1.54MHz due to the scan set. Normally one would use the least number of turns for good measurement, but the 4t inductor happened to be at hand and suits this study.
Firstly, let’s estimate the permeability of the core material.
Note that ferrite has a wide tolerance range, and is temperature sensitive.
Now let’s estimate Z @ 1.54MHz, ignoring the effects of self resonance, ie Cs=0.
So, we estimate Z=33+j252.
Looking at the first marker report at left, Z=31.4+j198. That is in the ballpark of the estimate Z=33+j252, so the measurement looks valid.
Now let’s focus on the graph of R/ω and jX/ω above. I have scaled values by eye and tabulated them along with the reported f,R,X values.
I have also calculated R/ω and jX/ω from the measurement data, and the ratio of the displayed values with the calculated values, and the ratio is consistently around 6.3e1, probably actually 2πe1 if the plotted values were captured more accurately.
So, it does not look like the plotted values are actually R/ω and jX/ω at all, but the result of some untidy mathematics and a failure to test the solution, which detracts from the value of the underlying concept.
Recalling that the apparent inductance of a toroidal inductor of medium to high permeability well below SRF is \(L=N^2 \mu \sum \frac{A}{l}\) where l is the path length \(l=2 \pi r\), N is the number of turns and A is the cross section area, and noting that µ is a complex value for ferrite, jωL has a real component which models core loss.
The quantity \(\sum \frac{A}{l}\) captures the core geometry, and it or its transforms are often given in datasheets, eg Fairrite often gives the inverse in /cm, \(\sum \frac{l}{A}\).
So, we can say that \(Z=R+\jmath X=\jmath \omega N^2 (\mu^{\prime}-\jmath \mu^{\prime\prime}) \sum \frac{A}{l}\) and therefore \(\frac{R+\jmath X}{\omega}=\jmath N^2 (\mu^{\prime}-\jmath \mu^{\prime\prime}) \sum \frac{A}{l}\) and rearranging that, \(\mu^{\prime}+\jmath \mu^{\prime\prime}=\frac{X+\jmath R}{\omega} \frac{1}{N^2 \sum \frac{A}{l}}\).
We can factor permeability of free space out so that we see relative permeability: \(\mu_r^{\prime}=\frac{X}{\omega} \frac{1}{\mu_0N^2 \sum \frac{A}{l}}\) and \(\mu_r^{\prime\prime}=\frac{R}{\omega} \frac{1}{\mu_0N^2 \sum \frac{A}{l}}\).
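The relationships above are easily evaluated in a few lines of Python. The sketch below uses illustrative values only (ΣA/l=2.2e-3m is a guess for a small bead, and Z=31.4+j198 is the marker value above); it is not a substitute for proper datasheet figures.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def relative_permeability(f, r, x, n, sigma_a_l):
    """Estimate complex relative permeability (mu'r, mu''r) from measured
    series R, X of an n turn winding on a core of geometry factor
    sigma_a_l = sum(A/l) in metres. Valid only well below the SRF."""
    w = 2 * math.pi * f
    k = 1 / (MU0 * n ** 2 * sigma_a_l)
    return x / w * k, r / w * k  # (mu'r from X/w, mu''r from R/w)

# Illustrative: 4t winding, assumed sum(A/l)=2.2e-3 m, Z=31.4+j198 at 1.54MHz
mu_re, mu_im = relative_permeability(1.54e6, 31.4, 198.0, 4, 2.2e-3)
print(mu_re, mu_im)
```

With these assumed inputs the result is of the order expected for a medium-µ ferrite at this frequency.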
So as the quote states, the shape of the R/ω and jX/ω curves does follow that of relative permeability, but only well below SRF, and the constant of proportionality is \(\frac{1}{\mu_0 N^2 \sum \frac{A}{l}}\).
Is it a magic view (even if implemented accurately)? You decide.
Above is the schematic of the amplifier; analysis here is of the 25W configuration using a 2N5591.
The figure above shows the details of T1, a Ruthroff 1:4 unun.
The initial question was whether this would work as an air cored structure… but the question seemed motivated by difficulty in getting the amplifier to work properly.
So, let’s review the matching scheme. It is a combination of three components, T1, C4 and C5.
Consulting the datasheet, we see that the recommended load for the 2N5591 for 25W out on 12.5V at 175MHz is 4+j2Ω. That will be a little different at 144MHz due to the transistor capacitance having different susceptance at the lower frequency, but not greatly; it is a good place to start.
As mentioned, there are three components in the matching network, but the operation of T1 is far from a nominal 1:4: for a transformation from 4Ω to 16Ω you would choose a line with Zo=8Ω, which is not practicable, so there will be standing waves on that line section and therefore significant impedance transformation.
Since there is significant impedance transformation on the line, the characteristics of the line become important.
The originally specified #20 (0.81mm) was not on hand but some 0.71mm is available and will be used.
Minimum enamel thickness specified for 0.7mm wire ranges 30–80µm; let’s assume the medium covering of 53µm. Average cover may be a little more. The wire measures 0.755mm overall, but that alone does not imply the enamel thickness.
Using TWLLC, we can get a ballpark estimate of Zo using a guess of vf=0.7 based on experience.
0.71mm ECW twisted pair 

Parameters  
Conductivity  5.800e+7 S/m 
Rel permeability  1.000 
Diameter  0.000710 m 
Spacing  0.000763 m 
Velocity factor  0.700 
Loss tangent  0.000e+0 
Frequency  146.000 MHz 
Twist rate  100 t/m 
Length  1.000 m 
Results  
Zo  33.50-j0.68 Ω 
Velocity Factor  0.7000 
Twist factor  0.9725 
So, Zo in the range 30–35Ω is likely.
A test section of 255mm length was made and measured with SC and OC terminations using a VNWA3E.
Above are the s11 measurements for SC and OC.
From that dataset we can calculate Zo.
Calculation of Zo over most of this range looks ok, it has the typical turn up at low frequencies, and there is a problem measuring close to its quarter wave resonance. Around 150MHz, Zo is around 33Ω, quite close to expectation.
We can also calculate vf.
vf is 0.665 around 150MHz, so the earlier guess was not too far off the mark.
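For completeness, the underlying arithmetic is simple: Zo is the geometric mean of the SC and OC input impedances, and vf follows from the observed quarter wave resonance. A minimal Python sketch with invented illustrative values (not the VNWA3E data):

```python
import cmath

def zo_from_sc_oc(z_sc, z_oc):
    """Characteristic impedance from short circuit and open circuit
    input impedances: Zo = sqrt(Zsc * Zoc)."""
    return cmath.sqrt(z_sc * z_oc)

def vf_from_quarter_wave(f_qw, length_m):
    """Velocity factor from the quarter wave resonant frequency (Hz)
    of a line section of the given physical length."""
    return 4 * length_m * f_qw / 299792458.0

# Invented example values, roughly consistent with a 33 ohm line
print(zo_from_sc_oc(complex(0.5, 25.0), complex(0.8, -44.0)))
print(vf_from_quarter_wave(195e6, 0.255))
```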
Let’s build a Simsmith model to find a matching solution and explore the sensitivity to component values.
Taking the target load impedance for the source to be 4+j2Ω, we can use Simsmith to model the network and tweak it for a match to a 4+j2Ω generator.
Element D models the Ruthroff 1:4 unun transmission line transformer. Lcm is a calculated value for the common mode inductance of the transmission line section, a two turn solenoid to accommodate the length of the transmission line section.
It is not very convenient to work with a Smith chart with complex reference impedance, as can be seen it warps the Z space.
Instead, let’s add an element so that we can use a purely real Zo.
Above, Z1 offsets the j2 component of the desired network input impedance so that the Zo is 4Ω for a ‘normal’ Smith chart scaling. Z1 is not part of the actual network, but purely a fixup.
If you follow the impedance changes at each element of the Simsmith model, C2 and D are the most significant (excluding from the dummy Z1). Impedance transformation in D is mostly due to transmission line effects.
Not surprisingly, matching is very sensitive to C2 and length, vf and Zo of the transformer D. As it turns out, the common mode inductance Lcm is not very critical, hence no need for a magnetic core.
ARRL. 1977. The Radio Amateur’s Handbook. ARRL. p453.
(Austin 1987) described a multiband HF antenna that is very popular with hams some thirty years later.
In his article, Austin explained the characteristic of a single wire multiband antenna with a series section matching transformer. The geometry is quite similar to the G5RV with hybrid open wire and coax feed, but Austin pursued lengths of the dipole legs, and matching section length and Zo to optimise VSWR(50).
The design was never an ‘all band’ antenna, but rather a multiband antenna with low feed point VSWR(50) on several bands. Austin tabulated the frequency relationship of the optimised bands for the case of a 400Ω matching section, and they were in the ratio of 1:1.97:2.52:3.47:4.04. If the first frequency was chosen to be 7.2MHz, the other centre frequencies would be 14.2, 18.1, 25.0 and 29.1MHz.
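Austin’s tabulated ratios make the band centres a one-liner; the small Python sketch below simply scales the published ratios by the chosen first frequency:

```python
# Austin's optimised frequency ratios for a 400 ohm matching section
ratios = [1.0, 1.97, 2.52, 3.47, 4.04]
f1 = 7.2  # MHz, chosen centre of the first band
centres = [round(f1 * r, 1) for r in ratios]
print(centres)  # -> [7.2, 14.2, 18.1, 25.0, 29.1]
```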
To give insight into the behaviour of the ZS6BKW, I have built an NEC-4.2 model of a ZS6BKW with a dipole of 28.5m (L1) of 2mm dia copper wire at a height of 10m above ‘average’ ground (σ=0.005, εr=13), and 13.44m (L2) electrical length of 400Ω lossless transmission line. L2 was tweaked to optimise alignment of the VSWR(50) response with the ham bands. The model assumes no feedline common mode current.
Above is the VSWR(50) response of the model from 3–30MHz. Minimum VSWR near the nominated five bands is quite low. Note that VSWR(50) at 80m is quite poor.
Note that the ‘notches’ of minimum VSWR are quite narrow. It is perhaps naive to think of building this without fine tuning of L1 and L2 to optimise VSWR(50) for the installation scenario.
Above is a Smith chart of the NEC model Zin from 3–30MHz, and it can be seen that the impedance falls within the Target Area (TA) identified by Austin and highlighted here in cyan. The model reconciles with and confirms Austin’s design method. The cursor is at 3.6MHz (the intersection of the left outer red line and the green radial) and it can be seen that it falls well outside the TA; in fact Zin @ 3.6MHz is 8.09+j38.08Ω which gives VSWR(50)=9.8.
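The VSWR(50) figure quoted is easily checked from Zin; a small Python sketch:

```python
def vswr(z, zo=50.0):
    """VSWR of impedance z against a purely real reference impedance zo."""
    gamma = abs((z - zo) / (z + zo))
    return (1 + gamma) / (1 - gamma)

# Zin at 3.6MHz from the NEC model
print(round(vswr(complex(8.09, 38.08)), 1))  # -> 9.8
```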
(Austin 2007) gave a chart that shows the range of combinations of L1 and L2 that are likely to give a good five band response. The dimensions used for the NEC model are shown at the red dot.
The target of Austin’s design was low VSWR(50) on five bands which might allow direct connection to modern transceivers that require a fairly low VSWR(50) load (VSWR<2).
The other benefit of low VSWR(50) is reduction of loss due to standing waves on 50Ω feed line.
A practical issue is that the ‘sweet’ spot is quite narrow, and not necessarily aligned as one might want on each band.
Measuring VSWR and the frequencies of minimum VSWR can inform efforts to tweak L1 and L2 to better fit the installation scenario. A flash analyser is not needed for this; VSWR is the optimisation objective, and hams pursuing other objectives like Xin=0 are on the wrong tram. Obviously an analyser that can capture a VSWR sweep and save it is a productivity tool, but you can do this job with an MFJ259B and pencil and paper (as we did once upon a time).
Measurements should be made with the feed line common mode current path as it will be when the antenna is in use. That means that if you take the coax off the back of the transmitter or ATU to make the measurements, that you bond the coax shield to the transmitter or ATU chassis with a very short wire to maintain the common mode current path and the effect it may have on measurements.
Transmitters with wider range matching such as older valve transmitters with Pi networks, or modern transmitters with internal tuners, or use of an external tuner can make operation to band edges more practical.
So, it may be worthwhile to choose a coax with fairly low matched line loss so that even if operating outside of the sweet spot, that radiation efficiency remains fairly good.
Minimisation of common mode current is important to achieving VSWR response close to prediction. Symmetry is important, and it is prudent to include an effective common mode choke at either end of the coax, or wherever else works best.
Don’t overlook inclusion of the correct velocity factor in calculating the length L2.
Whilst discussing optimisation, loss in the open wire line should not be ignored. Though very popular, I would not be using the common windowed ladder line using copper clad steel, especially the multi stranded type as they do not deliver copper like performance on the lower bands.
There have been various recipes for extending the common ZS6BKW to 80m, one of the most popular is that proposed by W5DXP. His method inserts a 500pF capacitor in series with one leg of the open wire line at the transmitter end.
Whilst there is little doubt that it worked for him, is it a solution for the scenario modelled above?
Above, a Simsmith model of the series capacitor shows that in this scenario, a rather good but practical ZS6BKW, it makes VSWR(50) worse. There is no value of series capacitance that will match 8.09+j38.08Ω to 50Ω, or indeed of any single component.
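The claim is easily verified numerically: sweeping a series capacitor over a wide range at 3.6MHz, the best achievable VSWR(50) (when the capacitive reactance cancels the +j38.08) is about 50/8.09≈6.2, still far from a match. A Python sketch:

```python
import math

def vswr(z, zo=50.0):
    """VSWR of impedance z against a purely real reference impedance zo."""
    gamma = abs((z - zo) / (z + zo))
    return (1 + gamma) / (1 - gamma)

f = 3.6e6
z_in = complex(8.09, 38.08)  # modelled feed point impedance at 3.6MHz

# Sweep series capacitance 100-5000pF and find the best achievable VSWR(50)
best_vswr, best_c = min(
    (vswr(z_in - complex(0, 1 / (2 * math.pi * f * c * 1e-12))), c)
    for c in range(100, 5001, 10)
)
print(round(best_vswr, 2), best_c)  # best case is still VSWR around 6.2
```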
If 500pF or any other capacitance works for you, it is because your ZS6BKW departs significantly from the scenario discussed here.
Though often referred to as an optimised G5RV, there is no similarity to the G5RV beyond the somewhat similar geometry.
The optimisation objective is VSWR(50) at the frequencies of interest.
There is scope to tweak the lengths L1 and L2 to optimise the VSWR(50) curve in an individual installation.
The DfuSe package has dependencies, including Visual C++ 2013 runtime.
The highlighted DLL is from Visual C++ 2013.
There are periodic updates to various Microsoft DLLs, and they are not necessarily included in Windows Update.
There was an issue with some app installations causing damage to earlier versions of Visual C++ 2013 runtime, so whilst it might have worked at one time, it could be broken.
To ensure that you have the latest version of the runtime, Google for “Update for Visual C++ 2013 and Visual C++ Redistributable Package”.
WARNING: Smart people download this ONLY from the Microsoft site, getting it somewhere else risks malicious downloads directly or indirectly.
Try that and your DfuSeDemo may work properly.
The screenshot above demonstrates its use where the DUT and Att12 are cryogenically cooled.
For most applications, the default value of 290K is appropriate, so though the form has a few more fields, there isn’t more data entry for most usage.
The calcs have not changed, just replacement of a global Tatt with a T for each instance. The input form and output form have been reformatted to suit.
I have deferred trying the new Antscope2 until now to allow it to reach some maturity.
This article is a brief review of Antscope2 v1.0.10, brevity driven by the need to cut losses and run.
The first thing I noted is the difficulty in reading some textual data due to low contrast. The mid blue on mid grey above is very hard to read and would be even harder outdoors if measurements were being made in that environment. I did not search for alternative themes, none jumped out, but out of the box, this is very limiting. FAIL.
First up I tried to ‘open’ an archived Antscope file… but Antscope2 does not offer backwards compatibility. If you have saved .antscope files for historical, documentation etc reasons, you cannot open them directly to view in Antscope 2. FAIL.
Don’t underestimate the value of saved measurements, especially where they cannot conveniently be repeated, or for instance are required as a baseline in a now and then measurement (Baselining an antenna system with an analyser). So, don’t uninstall Antscope, you are going to need it to access archived measurements.
I did open a .s1p file that I had saved with Antscope, it was a measurement of the common mode impedance of a choke balun.
Above is the archived .s1p file with R, X scales set to maximum. As you can see, in the area of interest (1–15MHz), the traces are off scale (though you can see the spot measurement reported in the cursor information area, even if hard to read due to low contrast). FAIL.
Above is the original plot from Antscope v4.2.57.
Since I use a back-level version of Antscope to obtain more useful graphs, on the basis of these initial tests of Antscope2 v1.0.10 I will not waste further time on it.
In my experience, the software is a really important part of exploiting the AA600 and if Antscope2 is Rig Expert’s direction, then another analyser is the answer.
It is certainly an interesting subject to most hams with a deep interest in antenna systems.
So called A/B comparisons of antennas are as old as ham radio itself, and experienced hams know that they are quite flawed.
Because ionospheric propagation paths vary from moment to moment, the challenge is to make a measurement that is directly comparable with one made at a slightly different place, frequency or time. Accuracy is improved by making several measurements and finding a central value; more observations tend to reduce uncertainty in that estimate of the population central value.
The challenge is finding that central tendency.
There are three common methods of estimating the central tendency of a set of figures:
The mean is a popular and well known measure of central tendency. It is a very good estimate of the central tendency of Normally distributed data, and in that case, we can compare means and calculate confidence levels for assertions about the difference between means. The mean is very susceptible to errors due to outliers, and skewed distributions.
The median is usually a better measure for skewed data.
The mode is, if you like, the most frequent or popular value, and it has a great risk of being quite misleading on this type of data.
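The three measures are easy to demonstrate with Python’s statistics module; the SNR values below are invented for illustration (a skewed sample with a couple of low outliers), not the article’s data:

```python
import statistics as st

# Hypothetical skewed set of SNR reports (dB)
snr = [-5, 3, 10, 18, 18, 20, 20, 20, 22, 23]
print(st.mean(snr), st.median(snr), st.mode(snr))
# the mean is dragged well below the median by the two low outliers
```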
A recent article (Appleyard 2018) in Radcom provides a useful example for discussion.
Appleyard gives a summary table where he shows means of a set of RBN measurements of signals from two stations observed at 21 remote stations, and differences in those means.
There are some inconsistencies between the text and data recorded in the RBN database on the day.
It appears likely that callsign MX0NCA was used for the inland station, and the RBN shows 10 reports by DF7GB on that morning.
By eye, the full set of 10 observations does not appear to be Normally distributed, and in fact the IQR at 10.0dB is some 40% wider than would be expected of Normally distributed data. A more sophisticated test for Normality is the Shapiro-Wilk test, and it gives a probability of falsely rejecting Normality of 0.96%; in plain speak, it is very unlikely that the 10 observations were drawn from a Normally distributed population. For this reason, the mean is not a very good estimator of its central tendency, and operations like finding the difference of the means (as shown in the table) are not valid.
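The IQR screen used above is simple to reproduce: for Normally distributed data the IQR is about 1.349σ, so a sample whose IQR is well above 1.349s is suspect. A Python sketch with an invented sample (not the RBN data):

```python
import statistics as st

def iqr_vs_normal(data):
    """Ratio of the sample IQR to the IQR expected of Normal data
    (1.349 standard deviations). Well above 1.0 suggests non-Normality;
    a crude screen only - a Shapiro-Wilk test is the better tool."""
    q = st.quantiles(data, n=4)  # quartiles (default exclusive method)
    iqr = q[2] - q[0]
    return iqr / (1.349 * st.stdev(data))

# Invented, roughly bimodal sample for illustration
sample = [2, 3, 4, 5, 14, 15, 16, 17, 18, 19]
print(round(iqr_vs_normal(sample), 2))
```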
For this data set, the mean is 15.5, median is 18 and mode is 20. What do you think the central tendency is of the graphed data? The median is probably a better estimate of the central tendency of this data (but note that there is no basis for taking the difference of the medians).
It appears that some data may have been excluded from the table summary as the given mean value of 14.8 is different to that of the full set of 10 observations.
An important attribute of Normally distributed data is that the mean of the sum (or difference) of two normally distributed variables is the sum (or difference) of the means of each.
The most important consequence of this is that since the antenna means are of non-Normal data, the calculated difference in the third column is not a valid indicator of the difference in the antennas observed at that receiver.
Whilst it is not valid to find the difference of the means of non-Normal data, the individual paired SNR observations may reveal a strong relationship between the two antennas.
Appleyard gives another summary table where he shows means of a set of RBN measurements of signals from four stations observed at 16 remote stations, and differences in those means.
The question again arises whether the observations are normally distributed, whether the mean is a good measure of central tendency. It is a rather complex two way table of measurements, one that probably cannot use methods that depend on Normal data.
The observations by each of the 16 remote sites could be seen as independent clusters of measurements of each of the transmitters.
Above, a plot of the means (even if they are not a good measure) doesn’t suggest a clear winner, and it can be seen that a transmitter that is clearly better at some remote sites is not at others. Are the apparent differences due mainly to the antennas, or are they obfuscated by other variables?
The data defies a quantitative measure of the differences of the antennas with declared confidence limits.
So, you might ask, “what does this tell you about the relative merits of the various antennas tested?”
Appleyard states:
In terms of the short-term variation in S/N, we have found that the averaging of at least three successive reports mostly takes out these perturbations.
The analysis above of the set of 10 where 50% of the S/N observations were spread over 10dB (the IQR) contradicts that position.
Too few observations gives very wide uncertainty in any conclusions, and it turns out that for non-parametric analyses, even more observations are necessary.
Nonparametric studies comparing S/N observations in a two way analysis in another context become of useful accuracy with hundreds of paired observations, and that would seem to be impractical for RBN sourced observations.
Good experiments don’t usually happen by accident. The questions to be answered (the null hypotheses in statistical terms) need to be thought through and the experiment designed to capture enough data to hopefully provide valid results.
Capturing field data is an expensive process, and ability to do a first pass analysis while the experiment is set up can help avoid a wasted venture.
When the observation data cannot be shown to be Normally distributed, means are not a good measure of central tendency, and the whole raft of parametric statistical techniques premised on Normal distribution are unavailable.
It is likely that nonparametric techniques are needed for analysis, and the sheer volume of observations might not be practical from RBN.
Designing with magnetics can be a complicated process, and it starts with using reliable data and reliable relationships, algorithms, and tools.
Be very wary of:
One reputable manufacturer of a wide range of ferrite cores is Fairrite. Let’s use their databook as an example of design data.
A ferrite cored toroidal inductor has important characteristics that make design a challenge:
(1) is dealt with by using the correct complex permeability in calculations.
(2) has little effect at less than say one tenth of the lowest self resonance frequency, and up to about half that first self resonant frequency can be modelled reasonably well by a small equivalent shunt capacitance.
Let’s work through two different formats of specification data; the first is common for ‘ordinary’ toroids, the second for ‘suppression sleeves’.
Let’s look at the entry for a 5943003821, which is known commonly in ham circles as a FT240-43. Here is a clip from Fairrite’s catalogue 17th Ed.
Let’s find the impedance of a 3t winding on this core at 3.6MHz, firstly ignoring self resonance.
Let’s use Calculate ferrite cored inductor (from Al).
From the datasheet, Σl/A is 920/m (multiply the /cm value by 100 to convert).
Let’s use Calculate ferrite cored inductor – ΣA/l or Σl/A.
The results reconcile well with the previous case.
From the datasheet, dimensions are 62.8×34.2×13.7mm.
Let’s use Calculate ferrite cored inductor – rectangular cross section.
The result is close to the previous cases, but a tiny bit higher as this model assumes sharp edges on the toroid whereas they are chamfered and that slightly reduces the cross section area. The error is small in terms of the specified tolerance of the cores, so it is inconsequential.
Let’s look at the entry for a 2643625002. Here is a clip from Fairrite’s catalogue 17th Ed; in this case the format is that used for many cores classed as suppression cores.
From the datasheet, dimensions are 16.25×7.9×14.3mm.
Let’s use Calculate ferrite cored inductor – rectangular cross section.
Al is the inductance of a single turn at a frequency where µ=µi (µi is the initial permeability, permeability at the lowest frequencies.)
Al is usually calculated from measurement of impedance or inductance with a small number of turns at around 10kHz.
It can also be estimated from initial permeability (µi) and dimensions, or Σl/A or ΣA/l.
Taking the last example, let’s calculate the impedance at 10kHz.
Above, Ls is 1.65µH, so Al=1650nH. The calculator also conveniently gives ΣA/l=0.00164m, and of course Σl/A is the inverse, 610/m.
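That cross-check between Al, µi and the geometry factors is a one-liner; the sketch below assumes µi=800 for 43 material (the datasheet initial permeability) and reproduces the ΣA/l figure above:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

mu_i = 800    # initial permeability assumed for 43 material (datasheet)
al = 1650e-9  # H/turn^2, from the 10kHz measurement above

sigma_a_l = al / (MU0 * mu_i)  # sum(A/l) in metres
sigma_l_a = 1 / sigma_a_l      # sum(l/A) in /m
print(round(sigma_a_l, 5), round(sigma_l_a))
```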
If you measure L and divide by n^2, be careful that the measurement is at a frequency where µ=µi.
As mentioned earlier, these devices are really resonators and exhibit self resonance. Up to about half the first self resonant frequency these effects can be modelled reasonably well by a small equivalent shunt capacitance.
So the first step is to carefully measure the first self resonant frequency, carefully meaning to ensure that the test fixture is not disturbing the thing being measured.
Above is a plot of calculated impedance for 11t on the 5943003821 used above.
Above is the same scenario with Cs=2pF to calibrate the self resonant frequency to measurement of a prototype.
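The shunt Cs model is simply the winding impedance in parallel with the equivalent capacitance. A Python sketch with invented values (a bare winding of Z=50+j500 at 10MHz is illustrative only, not the 5943003821 data):

```python
import math

def z_with_cs(z_l, cs, f):
    """Impedance of winding z_l in parallel with equivalent shunt
    capacitance cs (F) at frequency f (Hz); a usable model up to
    roughly half the first self resonant frequency."""
    z_c = 1 / complex(0, 2 * math.pi * f * cs)
    return z_l * z_c / (z_l + z_c)

# Invented example: bare winding Z=50+j500 at 10MHz, Cs=2pF
print(z_with_cs(complex(50, 500), 2e-12, 10e6))
# both R and X rise as the parallel resonance is approached
```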
The first experiment was a calibration run, if you like, to explore the nature of simultaneous WSPR SNR reports for two transmitters using different call signs on slightly different frequencies, simultaneously feeding approximately the same power to the same antenna.
This article is about the second test which he describes:
The second test uses a WSPRlite directly feeding the same stacked Yagis, and the second WSPRlite feeding nearly identical stacked Yagis that point directly through the other stack located four wavelengths directly in front. Power at each antenna was about 140 milliwatts for each WSPRlite.
The data for the test interval was extracted from DXplorer, and the statistic of main interest is the paired SNR differences, these are the differences in a report from the same station of the two signals in the same measurement WSPR interval.
There is an immediate temptation to compare the average difference; it is simple and quick. But it is my experience that WSPR SNR data are not normally distributed, and applying parametric statistics (ie statistical methods that depend on knowledge of the underlying distribution) is seriously flawed.
We might expect that whilst the observed SNR varies up and down with fading etc, that the SNR measured due to one antenna relative to the other depends on their gain in the direction of the observer. Even though the two identical antennas point in the same direction for this test, the proximity of one antenna to the other is likely to affect their relative gain in different directions.
What of the distribution of the difference data?
Above is a frequency histogram of the distribution about the mean (4.2). Each of the middle bars (±0.675σ) should contain 25% of the 815 observations (204). It is clearly grossly asymmetric and most unlikely to be normally distributed. A Shapiro-Wilk test for normality gives a probability that it is normal of p=4.3e-39.
So let’s forget about parametric statistics based on the normal distribution; means, standard deviation, Student’s t-test etc are unsound for making inferences here because they depend on normality.
Differently to the first experiment, where both transmitters fed the same antenna and we might expect simultaneous observations at each station to be approximately equal, in this case there are two apparently identical antennas, one close to and pointing through the other, and the question is whether they are in fact identical in performance or whether there is some measurable interaction.
So, let’s look at the data in a way that might expose their behaviour.
Above is a scatter chart of the 815 paired SNR reports (where an individual station simultaneously decoded both transmitters). Note that many of the dots account for scores of observations, all observations are used to calculate the trend line.
In contrast to the previous test, there is quite a spread of data, but a simple least squares linear regression returns an R² result that indicates a moderately strong model with a Y intercept of 3.3dB (ie that there is a 3.3dB difference between the systems).
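The regression itself needs nothing exotic; an ordinary least squares fit in plain Python is sketched below with invented paired SNR data (not the 815 observations):

```python
def linreg(xs, ys):
    """Ordinary least squares fit y = a*x + b, returning (a, b, r2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx
    b = my - a * mx
    return a, b, sxy * sxy / (sxx * syy)

# Invented paired SNR reports (dB): transmitter A vs transmitter B
snr_a = [-20, -15, -10, -5, 0, 5]
snr_b = [-24, -18, -13, -9, -3, 2]
print(linreg(snr_a, snr_b))
```

With this invented data the fit shows a slope near 1 and a negative Y intercept, the pattern one would look for if one antenna were consistently a few dB down on the other.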
We can reasonably draw the conclusion that there is a significant interaction between the otherwise identical antennas.
In fact, subsetting the data to select reports that were within ±5° of boresight, the difference was more like 5dB.
This raises the question of the design of an experiment, the hypothesis to be tested and then designing the experiment to collect unbiased observations that should permit a conclusion to be drawn.
One has little control of the location of observers in WSPR, their appearance is for the most part random. However, one can fairly easily filter the observations collected to excise observations outside a given azimuth range, and distance range (which might imply elevation of the propagation path). Filtering in this way ensures that the data is more relevant to the hypothesis being tested, and that should result in better correlation, less uncertainty in the result.