The DfuSe package has dependencies, including Visual C++ 2013 runtime.

The highlighted DLL is from Visual C++ 2013.

There are periodic updates to various Microsoft DLLs, and they are not necessarily included in Windows Update.

There was an issue with some app installations damaging earlier versions of the Visual C++ 2013 runtime, so whilst it might have worked at one time, it could now be broken.

To ensure that you have the latest version of the runtime, Google for “Update for Visual C++ 2013 and Visual C++ Redistributable Package”.

**WARNING: Smart people download this ONLY from the Microsoft site; getting it anywhere else risks malicious downloads, directly or indirectly.**

Try that and your DfuSeDemo may work properly.

The screenshot above demonstrates its use where the DUT and Att_{12} are cryogenically cooled.

For most applications, the default value of 290K is appropriate, so though the form has a few more fields, there isn’t more data entry for most usage.

The calculations have not changed, just replacement of a global T_{att} with a T for each instance. The input form and output form have been reformatted to suit.


I have deferred trying the new Antscope2 until now to allow it to reach some maturity.

This article is a brief review of Antscope2 v1.0.10, brevity driven by the need to cut losses and run.

The first thing I noted is the difficulty in reading some textual data due to low contrast. The mid blue on mid grey above is very hard to read and would be even harder outdoors if measurements were being made in that environment. I did not search for alternative themes, none jumped out, but out of the box, this is very limiting. FAIL.

First up I tried to ‘open’ an archived Antscope file… but Antscope2 does not offer backwards compatibility. If you have saved .antscope files for historical or documentation reasons, you cannot open them directly to view in Antscope2. FAIL.

Don’t underestimate the value of saved measurements, especially where they cannot conveniently be repeated, or for instance are required as a baseline in a now and then measurement (Baselining an antenna system with an analyser). So, don’t uninstall Antscope, you are going to need it to access archived measurements.

I did open a .s1p file that I had saved with Antscope, it was a measurement of the common mode impedance of a choke balun.

Above is the archived .s1p file with R, X scales set to maximum. As you can see, in the area of interest (1-15MHz), the traces are off scale (though you can see spot measurement reported in the cursor information area, even if hard to read due to low contrast). FAIL.

Above is the original plot from Antscope v4.2.57.

Since I use a back level version of Antscope to obtain more useful graphs, on the basis of the initial tests of Antscope2 v1.0.10 I will not waste further time on it.

In my experience, the software is a really important part of exploiting the AA-600 and if Antscope2 is Rig Expert’s direction, then another analyser is the answer.

It is certainly an interesting subject to most hams with a deep interest in antenna systems.

So-called A/B comparisons of antennas are as old as ham radio itself, and experienced hams know that they are quite flawed.

Because ionospheric propagation paths vary from moment to moment, the challenge is to make a measurement that is directly comparable with one made at a slightly different place, or frequency, or time. Accuracy is improved by making several measurements and finding a central value; more observations tend to reduce uncertainty in that estimate of the population central value.
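The effect of averaging can be sketched numerically: the standard error of the mean falls as 1/√n. A minimal sketch with invented SNR figures (not actual observations):

```python
import statistics

# Illustrative SNR observations (dB) -- invented values for demonstration.
snr = [12, 15, 9, 14, 11, 16, 10, 13, 12, 14]

n = len(snr)
mean = statistics.mean(snr)
sd = statistics.stdev(snr)      # sample standard deviation
sem = sd / n ** 0.5             # standard error of the mean

# Quadrupling the number of observations halves the standard error.
sem_4n = sd / (4 * n) ** 0.5
print(f"mean={mean} sem={sem:.2f} sem(4n)={sem_4n:.2f}")
```

The point is simply that more observations narrow the uncertainty of the estimated central value, provided the estimator suits the distribution.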

The challenge is finding that central tendency.

There are three common methods of estimating the central tendency of a set of figures:

- mean (or average);
- median (or middle value); and
- mode (or most common value).

The mean is a popular and well known measure of central tendency. It is a very good estimate of the central tendency of Normally distributed data, and in that case, we can compare means and calculate confidence levels for assertions about the difference between means. The mean is very susceptible to errors due to outliers, and skewed distributions.

The median is usually a better measure for skewed data.

The mode is, if you like, the most frequent or popular value, and it has a great risk of being quite misleading on this type of data.

A recent article (Appleyard 2018) in Radcom provides a useful example for discussion.

Appleyard gives a summary table where he shows means of a set of RBN measurements of signals from two stations observed at 21 remote stations, and differences in those means.

There are some inconsistencies between the text and data recorded in the RBN database on the day.

It appears likely that callsign MX0NCA was used for the inland station, and the RBN shows 10 reports by DF7GB on that morning.

By eye, the full set of 10 observations does not appear to be Normally distributed, and in fact the IQR at 10.0dB is some 40% wider than would be expected of Normally distributed data. A more sophisticated test for Normality is the Shapiro-Wilk test, and it gives 0.96% as the probability of falsely rejecting the hypothesis that the data is drawn from a Normally distributed population; in plain speak, it is very unlikely that the 10 observations were drawn from a Normally distributed population. For this reason, the mean is not a very good estimator of its central tendency, and operations like finding the difference of the means (as shown in the table) are not valid.
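The Shapiro-Wilk test is readily available in scipy; a minimal sketch on an invented, deliberately skewed sample (not the actual RBN observations discussed here):

```python
from scipy import stats

# Invented, deliberately skewed sample of 10 "SNR" values (dB) --
# NOT the actual RBN observations discussed in the text.
sample = [2, 3, 3, 4, 4, 5, 5, 6, 7, 50]

w, p = stats.shapiro(sample)
# A small p rejects the hypothesis that the sample is drawn from a
# Normal population; here the outlier at 50 drives p very low.
print(f"W={w:.3f} p={p:.2e}")
```

A p below the chosen significance level (commonly 0.05) means the Normality assumption, and hence parametric methods such as the t-test, should not be relied upon.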

For this data set, the mean is 15.5, median is 18 and mode is 20. What do you think the central tendency is of the graphed data? The median is probably a better estimate of the central tendency of this data (but note that there is no basis for taking the difference of the medians).
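Python's statistics module shows how the three measures diverge on skewed data; the values below are invented to reproduce the summary figures quoted above (mean 15.5, median 18, mode 20), not the actual RBN reports:

```python
import statistics

# Invented values chosen to reproduce the summary statistics quoted in
# the text (mean 15.5, median 18, mode 20) -- not the actual RBN reports.
snr = [-10, 3, 10, 16, 17, 19, 20, 20, 20, 40]

print(statistics.mean(snr))    # 15.5
print(statistics.median(snr))  # 18.0
print(statistics.mode(snr))    # 20
```

The low outlier drags the mean well below the median, illustrating why the mean misrepresents the central tendency of skewed data.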

It appears that some data may have been excluded from the table summary as the given mean value of 14.8 is different to that of the full set of 10 observations.

An important attribute of Normally distributed data is that the sum (or difference) of two Normally distributed variables is itself Normally distributed, with mean equal to the sum (or difference) of the individual means.

The most important consequence of this is that since the antenna means are of non-Normal data, calculated difference in the third column is not a valid indicator of the difference in the antennas observed at that receiver.

Whilst it is not valid to find the difference of the means of non-normal data, the individual paired SNR observations may reveal a strong relationship between the two antennas.

Appleyard gives another summary table where he shows means of a set of RBN measurements of signals from four stations observed at 16 remote stations, and differences in those means.

The question again arises whether the observations are normally distributed, and whether the mean is a good measure of central tendency. It is a rather complex two way table of measurements, one that probably cannot use methods that depend on Normal data.

The observations by each of the 16 remote sites could be seen as independent clusters of measurements of each of the transmitters.

Above, a plot of the means (even if they are not a good measure) doesn’t suggest a clear winner, and it can be seen that a transmitter that is clearly better at some remote sites is not at others. Are the apparent differences due mainly to the antennas, or are they obfuscated by other variables?

The data defies a quantitative measure of the differences of the antennas with declared confidence limits.

So, you might ask, “what does this tell you about the relative merits of the various antennas tested?”

Appleyard states:

In terms of the short-term variation in S/N, we have found that the averaging of at least three successive reports mostly takes out these perturbations.

The analysis above of the set of 10 where 50% of the S/N observations were spread over 10dB (the IQR) contradicts that position.

Too few observations give very wide uncertainty in any conclusions, and non-parametric analyses tend to need even more observations.

Non-parametric studies comparing S/N observations in a two way analysis in another context become of useful accuracy with hundreds of paired observations, and that would seem to be impractical for RBN sourced observations.

Good experiments don’t usually happen by accident. The questions to be answered (the null hypotheses in statistical terms) need to be thought through and the experiment designed to capture enough data to hopefully provide valid results.

Capturing field data is an expensive process, and ability to do a first pass analysis while the experiment is set up can help avoid a wasted venture.

When the observation data cannot be shown to be Normally distributed, means are not a good measure of central tendency, and the whole raft of parametric statistical techniques premised on Normal distribution are unavailable.

It is likely that non-parametric techniques are needed for analysis, and the sheer volume of observations might not be practical from RBN.

- Appleyard, S. Jun 2018. Using the reverse beacon network to test antennas. In Radcom.

Designing with magnetics can be a complicated process, and it starts with using reliable data and reliable relationships, algorithms, and tools.

Be very wary of:

- published data, especially on sellers’ websites, which often contain significant errors;
- application specific calculators, most are not suitable for ferrite cored inductors at RF; and
- bait and switch where the seller pretends to sell brand name product, but ships a substitute that may or may not comply with specifications.

One reputable manufacturer of a wide range of ferrite cores is Fair-rite. Let’s use their databook as an example of design data.

A ferrite cored toroidal inductor has important characteristics that make design a challenge:

- ferrite permeability is a complex value that is frequency dependent; and
- the ‘inductor’ is more completely a resonator.

The first is dealt with by using the correct complex permeability in calculations.

The second has little effect below say one tenth of the lowest self resonant frequency, and up to about half that first self resonant frequency it can be modelled reasonably well by a small equivalent shunt capacitance.
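Below self resonance, the impedance follows from the complex permeability as Z = jωµ0(µ′ − jµ″)n²ΣA/l. A sketch of that relationship; the µ′ and µ″ figures are rough illustrative values for a #43-like material at 3.6MHz, read approximately from published curves, not exact datasheet values:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

def ferrite_z(f, n, sigma_a_l, mu_re, mu_im):
    """Impedance of a ferrite cored inductor, ignoring self resonance.

    f          frequency (Hz)
    n          turns
    sigma_a_l  core geometry constant SigmaA/l (m)
    mu_re, mu_im  real and imaginary parts of complex permeability
    """
    w = 2 * math.pi * f
    return 1j * w * MU0 * (mu_re - 1j * mu_im) * n ** 2 * sigma_a_l

# FT240-43 style core: Sigma l/A = 920/m, so SigmaA/l = 1/920 m.
# mu' = 500, mu'' = 250 are illustrative values only.
z = ferrite_z(3.6e6, 3, 1 / 920, 500, 250)
print(f"Z = {z.real:.1f} + j{z.imag:.1f} ohms")
```

Note that the µ″ term produces the resistive component of Z, which is why a lossy ferrite inductor is not simply jX.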

Let’s work through two different formats of specification data; the first is common for ‘ordinary’ toroids, the second for ‘suppression sleeves’.

Let’s look at the entry for a 5943003821, which is known commonly in ham circles as a FT240-43. Here is a clip from Fair-rite’s catalogue, 17th Ed.

Let’s find the impedance of a 3t winding on this core at 3.6MHz, firstly ignoring self resonance.

Let’s use Calculate ferrite cored inductor (from Al).

From the datasheet, Σl/A is 920/m (multiply the /cm value by 100 to convert).

Let’s use Calculate ferrite cored inductor – ΣA/l or Σl/A.

The results reconcile well with the previous case.

From the datasheet, dimensions are 62.8×34.2×13.7mm.

Let’s use Calculate ferrite cored inductor – rectangular cross section.

The result is close to the previous cases, but a tiny bit higher as this model assumes sharp edges on the toroid whereas they are chamfered and that slightly reduces the cross section area. The error is small in terms of the specified tolerance of the cores, so it is inconsequential.

Let’s look at the entry for a 2643625002. Here is a clip from Fair-rite’s catalogue, 17th Ed; in this case the format is that used for many cores classed as suppression cores.

From the datasheet, dimensions are 16.25×7.9×14.3mm.

Let’s use Calculate ferrite cored inductor – rectangular cross section.

Al is the inductance of a single turn at a frequency where µ=µi (µi is the initial permeability, the permeability at the lowest frequencies).

Al is usually calculated from measurement of impedance or inductance with a small number of turns at around 10kHz.

It can also be estimated from initial permeability (µi) and dimensions, or Σl/A or ΣA/l.

Taking the last example, let’s calculate the impedance at 10kHz.

Above, Ls is 1.65µH, so Al=1650nH. The calculator also conveniently gives ΣA/l=0.00164m, and of course Σl/A is the inverse, 610/m.
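As a cross-check, Al can be estimated as Al = µ0·µi·ΣA/l. Back-solving from the figures above implies µi ≈ 800 for this material; that µi is an inference from the quoted numbers, not a datasheet value:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

sigma_a_l = 0.00164   # SigmaA/l from the calculator (m)
mu_i = 800            # implied by Ls=1.65uH for one turn; an inference, not a datasheet figure

al_nh = MU0 * mu_i * sigma_a_l * 1e9   # single turn inductance in nH
print(f"Al = {al_nh:.0f} nH")          # ~1650 nH, consistent with Ls above

# Inductance scales with turns squared: L = Al * n^2.
n = 3
print(f"L({n}t) = {al_nh * n ** 2 / 1000:.1f} uH")
```

This reconciles with the Al=1650nH quoted above to within rounding of the inputs.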

If you measure L and divide by n^2, be careful that the measurement is at a frequency where µ=µi.

As mentioned earlier, these devices are really resonators and exhibit self resonance. Up to about half the first self resonant frequency these effects can be modelled reasonably well by a small equivalent shunt capacitance.

So the first step is to carefully measure the first self resonant frequency, carefully meaning to ensure that the test fixture is not disturbing the thing being measured.

Above is a plot of calculated impedance for 11t on the 5943003821 used above.

Above is the same scenario with Cs=2pF to calibrate the self resonant frequency to measurement of a prototype.
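The equivalent shunt capacitance model is simply the inductor impedance in parallel with Cs; a sketch using an illustrative inductor impedance (not the 11t measurement above) and the Cs=2pF figure:

```python
import math

def with_shunt_c(z_l, f, cs):
    """Combine inductor impedance z_l with an equivalent shunt capacitance cs (F)."""
    z_c = 1 / (1j * 2 * math.pi * f * cs)   # impedance of the shunt C
    return z_l * z_c / (z_l + z_c)          # parallel combination

# Illustrative inductor impedance at 3.6MHz, well below self resonance.
z_l = 69.5 + 139.0j
z = with_shunt_c(z_l, 3.6e6, 2e-12)
# Below resonance the shunt C slightly increases the apparent impedance.
print(f"{z.real:.1f} + j{z.imag:.1f}")
```

As frequency approaches the self resonant frequency the correction grows rapidly, which is why the simple model is only trusted to about half the SRF.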

- Duffy, O. 2015. A method for estimating the impedance of a ferrite cored toroidal inductor at RF. https://owenduffy.net/files/EstimateZFerriteToroidInductor.pdf.
- Snelling, E C. 1969. Soft ferrites: properties and applications. Iliffe Books.


The first experiment was a calibration run, if you like, to explore the nature of simultaneous WSPR SNR reports for two transmitters using different call signs on slightly different frequencies, simultaneously feeding approximately the same power to the same antenna.

This article is about the second test which he describes:

The second test uses a WSPRlite directly feeding the same stacked Yagis, and the second WSPRlite feeding nearly identical stacked Yagis that point directly through the other stack located four wavelengths directly in front. Power at each antenna was about 140 milliwatts for each WSPRlite.

The data for the test interval was extracted from DXplorer, and the statistic of main interest is the paired SNR differences, these are the differences in a report from the same station of the two signals in the same measurement WSPR interval.

There is an immediate temptation to compare the average difference; it is simple and quick. But it is my experience that WSPR SNR data are not normally distributed, and applying parametric statistics (ie statistical methods that depend on knowledge of the underlying distribution) is seriously flawed.

We might expect that whilst the observed SNR varies up and down with fading etc, that the SNR measured due to one antenna relative to the other depends on their gain in the direction of the observer. Even though the two identical antennas point in the same direction for this test, the proximity of one antenna to the other is likely to affect their relative gain in different directions.

What of the distribution of the difference data?

Above is a frequency histogram of the distribution about the mean (4.2). Each of the middle bars (±0.675σ) should contain 25% of the 815 observations (204). It is clearly grossly asymmetric and is most unlikely to be normally distributed. A Shapiro-Wilk test for normality gives the probability that it is normal as p=4.3e-39.
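The ±0.675σ figure comes from the Normal quantiles: the quartiles of a Normal distribution sit at ±0.6745σ, so each band from the mean out to a quartile holds 25% of observations. A quick check:

```python
from scipy.stats import norm

q75 = norm.ppf(0.75)     # upper quartile of the standard Normal
iqr_sigma = 2 * q75      # IQR of Normal data in units of sigma

print(f"quartile = {q75:.4f} sigma, IQR = {iqr_sigma:.3f} sigma")
# Fraction of a Normal population between the mean and +0.675 sigma:
print(f"{norm.cdf(q75) - norm.cdf(0):.2f}")   # 0.25
```

A histogram whose middle bands hold far more or fewer than 25% of observations is a quick visual warning of non-normality, before any formal test.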

So let’s forget about parametric statistics based on the normal distribution; means, standard deviations, Student’s t-test etc are unsound for making inferences here because they depend on normality.

Differently to the first experiment, where both transmitters fed the same antenna and we might expect that simultaneous observations at each station would be approximately equal, in this case there are two apparently identical antennas, one close to and pointing through the other, and the question is whether they are in fact identical in performance or there is some measurable interaction.

So, let’s look at the data in a way that might expose their behaviour.

Above is a scatter chart of the 815 paired SNR reports (where an individual station simultaneously decoded both transmitters). Note that many of the dots account for scores of observations, all observations are used to calculate the trend line.

In contrast to the previous test, there is quite a spread of data, but a simple least squares linear regression returns an R^2 result that indicates a moderately strong model with a Y intercept of -3.3dB (ie that there is a -3.3dB difference between the systems).
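The regression here is an ordinary least squares fit of one station's reported SNR against the other's. A sketch with idealized invented paired data (antenna B exactly 3.3dB below A), not the 815 actual observations:

```python
from scipy.stats import linregress

# Idealized invented paired SNR reports (dB): B exactly 3.3dB below A.
snr_a = [-20, -15, -10, -5, 0, 5, 10]
snr_b = [a - 3.3 for a in snr_a]

fit = linregress(snr_a, snr_b)
# For ideal data: slope ~1, intercept ~-3.3 (the systematic offset), R^2 ~1.
print(fit.slope, fit.intercept, fit.rvalue ** 2)
```

With real data the scatter lowers R², and the intercept (with the slope near 1) estimates the systematic difference between the two systems.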

We can reasonably draw the conclusion that there is a significant interaction between the otherwise identical antennas.

In fact sub-setting the data to select reports that were within +/- 5° of boresight, the difference was more like -5dB.

This raises the question of the design of an experiment, the hypothesis to be tested and then designing the experiment to collect unbiased observations that should permit a conclusion to be drawn.

One has little control of the location of observers in WSPR; their appearance is for the most part random. However, one can fairly easily filter the observations collected to excise observations outside a given azimuth range and distance range (which might imply elevation of the propagation path). Filtering in this way ensures that the data is more relevant to the hypothesis being tested, and that should result in better correlation and less uncertainty in the result.
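Such filtering is straightforward once azimuth and distance have been computed for each report; a sketch with hypothetical field names and thresholds (the record layout here is an assumption for illustration, not the actual WSPR/DXplorer export format):

```python
# Hypothetical observation records; field names and values are assumptions
# for illustration, not the actual WSPR/DXplorer export format.
obs = [
    {"call": "AA1AA", "azimuth": 62, "distance_km": 800, "snr_diff": -3},
    {"call": "BB2BB", "azimuth": 65, "distance_km": 14200, "snr_diff": -5},
    {"call": "CC3CC", "azimuth": 140, "distance_km": 1200, "snr_diff": 2},
]

BORESIGHT = 65   # antenna boresight azimuth (degrees) -- illustrative

def selected(o, az_tol=5, min_km=500, max_km=3000):
    """Keep reports near boresight and within a distance window
    (distance loosely implying elevation of the propagation path)."""
    az_err = abs((o["azimuth"] - BORESIGHT + 180) % 360 - 180)  # wrap-safe
    return az_err <= az_tol and min_km <= o["distance_km"] <= max_km

kept = [o for o in obs if selected(o)]
print([o["call"] for o in kept])
```

The azimuth error is computed modulo 360 so that bearings either side of north compare correctly.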

Above is a plot from that article. I cannot be sure what version of Antscope was used to create the graph, but it was no later than v4.2.57, as one of the ‘improvements’ of v4.2.62 and v4.2.63 was to reduce zooming of the Z scales to a maximum of 600Ω.

Above, the graph zoomed to the improved maximum of 600Ω. At this scale, you can also see one of the other defects of Antscope in that the graph is not properly windowed, eg X at the cursor is -1121Ω, not -600Ω as shown by the green plot line.

I did contact Rigexpert support regarding the disabled zoom but was summarily dismissed by Oleg with the advice to use the earlier versions.

So, if you want the convenience of zooming out to the extent available in v4.2.57, then you need to downgrade to it (note the old location for the cable data and ini files etc). I could not find disclosure of what changes were made in subsequent versions, so there is risk that an important fix or feature is lost in the quest to restore the old zoom capability.

The other option of course is to export the data and plot it in Excel or some other graphics package, but that denies the great utility of previewing the data immediately that a measurement is made and being able to conveniently and quickly redo the scan if changed parameters are needed.

In my experience, the software supplied with most ham grade analysers and VNAs is pretty shabby and lets the overall package down. Antscope is no exception.

To the proposition put to me that the AA-600 is not capable of measuring the common mode impedance of a broadband current balun (as described at Measuring balun common mode impedance – #1), I respond that it is capable of measuring R and X to several thousand ohms and of correctly measuring the sign of X, but the latest software (v4.2.63) has been changed to deny convenient and meaningful direct graphic presentation of the measured data; you need to export the data and use another tool for presentation and analysis.

One could obsess over the capacity to measure Z in the range 5,000 to 50,000Ω and the appropriateness of reflection measurement technique, but the key parameters for broadband baluns are not the maximum impedance components but the minimum impedance over the desired operating range, and measurement to 5,000Ω is quite adequate for that task in most cases. Note that Rigexperts (or other analysers) with fewer ADC bits than the AA-600 (16) might be quite challenged for this application.

Since it is only addition of a signature, the versions have not been updated, and the update will not trigger the new version detection built into the applications.

See Digital document signatures for information on getting the CA certificate for which you will then want to edit the trust settings.

There was a quite recent update to FSM v1.11.0, and a more recent update to add the signatures.

The signatures give you confidence about the origin of the installer, and that it has not been intercepted by one of the download sources that wrap the software in an adware enabled installer (eg OpenCandy). Always download my software from my site, there are NO authorised distributors!


I used an AIMuhf for Measuring balun common mode impedance – #2 using the SOL calibration facility.

AIM also claims to have a means of backing out a known transmission line between the reference plane and the DUT. This article discusses use of AIM’s Refer to Antenna facility.

AIM’s developer recently said of AIM’s Refer to Antenna facility:

Version 882 does have a problem with the Refer to Antenna function. Version 865A can be used for this function.

This function does have it’s limits though. It should only be used for good quality coax. The impedance and velocity factor of coax is not constant over the whole length and this limits the accuracy. Also the impedance may not be equal to the “nominal” impedance in the catalog. The impedance of 50 ohm cable can vary quite a bit. AC6LA.com has some interesting data showing how coax parameters vary with frequency.

Custom cal is much better when it is possible to put the cal loads at the far end of the transmission line. This takes into account variations in impedance, velocity factor, and loss and it can be used when there is coax and ladder line in one transmission line system.

This article looks at use of AIM’s Refer to Antenna facility in AIM 865A to measure a choke at the end of 0.93m of RG58.

Above is the result of a scan with the AIM calibrated at its coax socket.

Above, the same DUT with Refer to Antenna turned on prior to the scan. The results are plainly nonsense, the negative resistance values are not physically possible on such a DUT.

Above, the result of saving and reloading the graph. Now the resistance is positive, but the reactance below choke self resonance is negative whereas it should be positive, nonsense again. Then there are the glitches at 18 and 28MHz. The Refer to Antenna facility is seriously broken and appears to have been released without having been adequately tested.

To the latter part of the developer’s comment:

This function does have it’s limits though. It should only be used for good quality coax. The impedance and velocity factor of coax is not constant over the whole length and this limits the accuracy. Also the impedance may not be equal to the “nominal” impedance in the catalog. The impedance of 50 ohm cable can vary quite a bit. AC6LA.com has some interesting data showing how coax parameters vary with frequency.

Custom cal is much better when it is possible to put the cal loads at the far end of the transmission line. This takes into account variations in impedance, velocity factor, and loss and it can be used when there is coax and ladder line in one transmission line system.

That is all true, but the question is whether AIM 865A achieves what can be achieved within those limitations.

Above is the data entry screen for Refer to Antenna. On it, the loss at 1MHz is specified, and AIM extrapolates that to the measurement frequency. The algorithm is not exposed, but with only one loss parameter, it is likely that it calculates loss(f)=loss@1MHz*fMHz^0.5. So, for the above case, it would extrapolate the loss/100m of RG58 at 1000MHz to be 1.332*1000^0.5=42.1dB, whereas the loss of RG58 at 1000MHz is more like 70.8dB, almost 30dB higher!
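The suspected extrapolation and its error at 1GHz can be checked in a few lines; the √f law is the conjecture about AIM's internals described above, not a documented algorithm:

```python
import math

def loss_sqrt_f(loss_1mhz_db, f_mhz):
    """Conjectured AIM extrapolation: matched line loss scaling with sqrt(frequency)."""
    return loss_1mhz_db * math.sqrt(f_mhz)

loss_1mhz = 1.332                 # dB/100m for RG58 at 1MHz (from the entry screen)
est = loss_sqrt_f(loss_1mhz, 1000)
actual = 70.8                     # published RG58 loss at 1000MHz, dB/100m

print(f"extrapolated {est:.1f} dB vs actual {actual} dB "
      f"({actual - est:.1f} dB under-estimate)")
```

The √f law captures only conductor loss with well developed skin effect; dielectric loss grows roughly linearly with frequency, which is why the error balloons at UHF.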

- AIMuhf and AIM865A as a measurement system produce inconsistent and unreliable results; the Refer to Antenna facility is severely broken.
- The model for line loss is probably oversimplified and not very accurate at higher frequencies; it could be improved with a better model.

At Measuring balun common mode impedance – #2 I mentioned a glitch on the AIMuhf scan that appears to be a defect of the instrument / client software and that it undermines confidence in the system.

The article documents a test of a known load to attempt to prove the measurement system good.

Note that AIM 865A is not the current version, but problems with the current version were described at AIM 882 produces internally inconsistent results.

The scenario is:

- AIMuhf with a NM to SMA-F adapter, and 1m of RG58 SOL calibrated;
- test load comprising 25+j0Ω at the end of 1.14m of RG58C/U.

I note though that attempts to SOL calibrate to this test fixture produced a warning no matter how many times the process was tried, connectors cleaned and fastened to specification torque.

Above is the scan using AIM865A. Note the glitches around 46 and 140MHz, these are artifacts of the measurement system, they are not expected and do not appear on a measurement of the test load using a Rigexpert AA-600.

Exploring the glitches reveals another defect of the AIM software in the magnified view of the cursor in the previous screen shot. Rs,Xs at the cursor is given as 82.686,-0.254 in the right hand side of the display, but the magnified view of the curve indicates more like 60,5… there appears to be some misalignment of the blue cursor line and ‘logical’ cursor.

Above is a measurement of the same test load connected to the instrument using a N-M to BNC-F adapter and the instrument SOL calibrated at its N connector. No glitches evident. This scan is consistent with expectation, and with the Rigexpert AA-600 measurement.

The “Refer to antenna” function does not work. Presumably it estimates line loss on the assumptions that it is entirely conductor loss and that skin effect is well developed at 1MHz… both dubious assumptions, so you might not want to use it far from 1MHz anyway.

- AIMuhf and AIM865A as a measurement system produce inconsistent and unreliable results on a simple known load.
