- RDS1021i; and
- VDS1022i.

The RDS1021i is a single channel ‘pen scope’ with 25MHz bandwidth, and the i suffix denotes isolation of the USB ground from the instrument ground.

The supplied application software is fairly good, but has some usability issues:

- the application opens with a fixed (ie non configurable) window size that may be larger than the physical screen size;
- the application does not persist the window size to new application sessions;
- the supplied driver (libusbK) and application combination are very flaky: the driver often disconnects, and the scope has to be unplugged and reinserted to restore operation;
- USB operation was less reliable on a cascaded USB3 hub;
- on two of three hosts tested, the RDS1021i is identified as “Unknown device #1” in Windows;
- true triggered operation is not possible with the timebase set below 50ms/div (1s total sweep); the application forces auto trigger mode, disabling true triggered display and, as a result, disabling single sweep.

A good time base trigger facility is essential to effective utilisation of any DSO, virtual or not.

Experimentation with libusb v1.2.6 seemed to resolve the USB driver flakiness; it can easily be installed using Zadig.

The roller ball device is rather dicky to use.

The VDS1022i is a two channel scope with 25MHz bandwidth, and the i suffix denotes isolation of the USB ground from the instrument ground.

The supplied application software is fairly good, but has some usability issues:

- the application opens with a fixed (ie non configurable) window size that may be larger than the physical screen size;
- the application does not persist the window size to new application sessions;
- the supplied driver (libusbK) and application combination are very flaky: the driver often disconnects, and the scope has to be unplugged and reinserted to restore operation;
- USB operation was less reliable on a cascaded USB3 hub;
- on two of the hosts tested, the VDS1022i is identified as “Unknown device #1” in Windows;
- true triggered operation is not possible with the timebase set below 50ms/div (1s total sweep); the application forces auto trigger mode, disabling true triggered display and, as a result, disabling single sweep.

A good time base trigger facility is essential to effective utilisation of any DSO, virtual or not.

Experimentation with libusb v1.2.6 seemed to resolve the USB driver flakiness; it can easily be installed using Zadig.

A test was conducted to see if both devices and their respective applications are compatible for concurrent operation.

On one of three hosts tested, the RDS1021i and VDS1022i both identified as “Oscilloscope” in Windows; they did not identify by model number. The applications did not automatically select the compatible device type, and presented a list of identically named devices from which the user had to choose for each one.

It would be nicer if a single application supported both hardware types and effectively mapped the devices to different channels, but that is not so. Diagnostic messages suggest that each application, although they look the same, loads its own device-specific FPGA image to the attached device (rather than recognising the device type and loading the relevant image).

Assuming that the issues that exist with these two products exist also with the VDS 100MHz 4 channel devices, which are priced at $800+, I would not even be considering them unless there was some specific need for the PC integration.

A good time base trigger facility is essential to effective utilisation of any DSO, virtual or not… and both devices fall well short of the mark. Lack of performance in the timebase area makes channel performance somewhat irrelevant.

The application software and firmware on the tested devices is a year or more old, which hints at a lack of interest by the manufacturer in fixing significant problems. Their web site hints at a lack of support resources.

There is no doubt that they are both much better than commonly available sound card oscilloscopes with or without an interface unit, and these low end models at modest prices might be good value for some applications, but they fall a long way short of a good quality DSO.

Above is a frequency histogram of the experiment log.

I used the Shapiro-Wilk test for normality earlier; it is one of many, and each has its strengths and weaknesses, or if you like, sensitivities to particular types of non-normality.

We could shop for a normality test that is less bothered by the rounded data. Pearson’s Chi-squared test is an obvious choice as it compares the frequency histogram on chosen classes with the expected distribution if the data was normal. So if we cleverly make the classes 1dB, we might have a test that is not sensitive to the rounded data.

Above is a plot of the count for each 1dB bin against the expected count if the data was normally distributed.

It is not visible on this chart, but there is one observation at -10dB, just one. But that one observation causes the Chi-squared test to reject the hypothesis that this is normally distributed data. One observation in 4508: the Chi-squared test is tolerant of rounding (where the classes are chosen in a complementary way), but it is very sensitive to outliers.
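
The arithmetic behind that sensitivity can be sketched in a few lines of Python. The sample size matches the experiment (4508), but the normal parameters and the near-mean bin count below are illustrative assumptions, not the experiment's data:

```python
# Why one far outlier dominates Pearson's Chi-squared statistic: each
# class contributes (observed - expected)^2 / expected, and a class whose
# expected count under the fitted normal is nearly zero, but which holds
# even one observation, contributes an enormous term.
from scipy.stats import norm

n = 4508            # observations, as in the experiment log
mu, sd = 0.0, 1.5   # illustrative normal parameters (assumed)

def bin_contribution(lo, hi, observed):
    """Chi-squared contribution of one 1 dB class."""
    expected = n * (norm.cdf(hi, mu, sd) - norm.cdf(lo, mu, sd))
    return (observed - expected) ** 2 / expected

# A 1 dB class near the mean with a plausible count barely moves the
# statistic, but the single observation at -10 dB dominates it.
print(bin_contribution(-0.5, 0.5, 1180))   # small
print(bin_contribution(-10.5, -9.5, 1))    # enormous
```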

Experimental data will occasionally contain outliers: data points that are distant from the rest of the data points.

If it can be shown that they are erroneous, then discarding them is ethical.

Discarding them for convenience is of doubtful ethics.

The dataset above has just one outlier that swings a Chi-squared normality test from very weak to very strong, but other more robust normality tests choke on the rounded data. So by shopping for a more friendly test, and excising outliers, it could be argued that this data is normal, and the strong parametric conclusions given in the last article could be held to apply.

The experiment was one of four at 0, -3, 3, and 6dB differences, and none of the other experiments can reasonably be argued to be normally distributed.

Continued at WSPR for A/B tests – a discussion – part 3.

The defence often given is that it is so hard to measure frequency modulation that it takes an expensive deviation meter, and they are scarce.

This article explains how to make accurate measurements using equipment often found around ham shacks, and that could certainly be cobbled together from the resources of a few ham shacks. The figures and example given apply to nominal 25kHz channelled radios; adjustments are needed for narrow channel radios.

There are three steps where calibration is progressively transferred through a measurement chain:

- calibrate a modulator (an ordinary FM transmitter);
- calibrate a demodulator (an ordinary FM receiver) using the calibrated modulator;
- measure the unknown transmitter using the calibrated demodulator.

The usual method of calibrating a modulator is to use the spectral properties of an FM signal.

One could use a spectrum analyser to find the calibration point, adjusting the modulation level and detecting the null of the carrier or sidebands according to the Bessel function.

Since the instrumentation is used to detect the null of a carrier or sideband component, and the null is very sensitive, a narrow band receiver can be used for the calibration procedure.

This is a procedure to calibrate a frequency modulator at a single modulating frequency using an SSB receiver to detect the first carrier zero.

- Prepare to modulate the carrier source (the transmitter) with a 1kHz (exactly) sine wave modulation source, adjust to zero modulation level and key the transmitter up.
- Couple a small amount of the carrier to an SSB receiver and tune in the carrier to a beat note of about 800 Hz.
- Slowly increase the modulation until you hear the carrier beat disappear. Carefully find this null position of the carrier beat note. Note that you will also hear one or more sidebands when the modulation is applied, ignore these and just listen for the null of the carrier.

The modulation index is now 2.4, and therefore the deviation is 2.4kHz.
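
The null point can be computed rather than looked up: the carrier amplitude of an FM signal is proportional to J0(β), and the first zero of J0 falls at β ≈ 2.4048, so nulling the carrier with 1kHz modulation implies about 2.405kHz peak deviation. A short check with scipy:

```python
# Bessel-null arithmetic: carrier amplitude is proportional to J0(beta),
# where beta = peak deviation / modulating frequency. Find the first zero
# of J0 numerically and convert to deviation for 1 kHz modulation.
from scipy.optimize import brentq
from scipy.special import j0

beta = brentq(j0, 2.0, 3.0)   # first carrier null, ~2.4048
f_mod = 1000.0                # modulating frequency, Hz
deviation = beta * f_mod      # peak deviation at the null, Hz

print(f"modulation index {beta:.4f}, peak deviation {deviation:.0f} Hz")
```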

The technique is very sensitive and very accurate, and error will mostly be attributable to the accuracy of the modulating frequency.

You have read about it, click to listen to a demonstration. This demonstration uses an SSB receiver with a 3.5kHz IF bandwidth, but I have used the technique with receivers with a 10kHz IF bandwidth; you just hear more of the sidebands, but concentrate on the carrier beat and null it out. The test receiver could be a high quality communications receiver or a scanner with a BFO. You could sample the modulated signal at the carrier frequency, or by sniffing some signal from the IF of a super-heterodyne receiver.

Having calibrated a modulator, we can set a receiver up to demodulate that signal and calibrate its output voltage against the known deviation of the source.

Above, an oscilloscope is connected to the receiver output and the volume control is adjusted until the peak voltage is 2.4 divisions, corresponding to peak deviation of 2.4kHz.

The instrument does not need to be a real scope, it could be a soundcard scope like Soundcard Oscilloscope, or sound recording software like Audacity.

Having calibrated the receiver and display, we can go on to measure the unknown transmitter(s).

Above is a capture of speech input to the transmitter under test. It can be seen that most of the time, peaks are reading the equivalent of 3kHz, very occasionally reaching 4kHz.

Well, although everyone refers to it as FM, it is FM with 6dB/octave pre-emphasis over the entire speech spectrum, which gives it the characteristic of PM. The demodulation process approximately equalises the pre-emphasis so the audio channel is nearly flat from end to end.

To a certain extent then, it is a nonsense to talk of peak deviation of frequency in kHz when in fact it is PM and the peak deviation is phase in radians. Nevertheless, that is the convention.

The process above adjusts the transmitter for speech drive equivalent to 3kHz peak deviation at 1kHz. Many test sets used in the Land Mobile Radio field measure deviation in a non de-emphasised demodulator, some offer a choice of with or without de-emphasis. In practice, it makes little difference on voice tests of male speakers.

The procedure above was compared with that obtained using a Motorola R2009D Communications Monitor, and the results are consistent.

It is important to understand that FM Land Mobile Radios often (usually) include a peak limiter in the tx audio path, and there is likely to be two adjustments, one prior to the limiter often labelled “mic gain” and another between limiter and modulator often labelled “deviation”. The deviation control is usually adjusted so the peak deviation does not exceed 3-4kHz under the loudest speech peaks, and the mic gain is adjusted so that normal speaking levels cause 3kHz peak deviation (equivalent).

If stations report your deviation is high or low, firstly make sure you address the microphone properly and speak with a consistent and strong voice. If it turns out your voice is unusually soft or loud, it is probably the mic gain control that requires adjustment rather than the deviation control, yet people tend to think “low deviation – wind the deviation control up”.

This is not the only way to achieve the outcome, but an example to show that practical tests of good quality can be designed to exploit available equipment. This is after all, the role of the measurement technician.

I finally removed the back and visually inspected it.

It is a bit agricultural, but let's press on.

Above, the hairspring hints at the cause of the accuracy issues.

The end curve of the hairspring between the end collet (upper left) and regulator arm and through to the dog leg formed in the spring is not a constant radius from the jewel, and further, the dog leg is badly formed and distorts the hairspring coils off the balance pin centre. If you compare the space between the outer spring turns at the bottom of the pic and top of the pic, the off centre distortion is evident.

When the watch stabilises and amplitude maximises, the inner part of the dog leg section actually touches the adjacent coil, which of course will compromise rate.

The measured beat error (a measure of oscillator asymmetry) was 7ms, huge, and probably the main contribution to fairly poor positional error, and observed rate change with main spring wind.

A couple of minutes with tweezers to carefully reshape the hairspring in situ gave a much more pleasing shape.

The hairspring end curve was fairly easy to reshape, so it probably isn’t steel. Nevertheless, the watch was demagnetised as a precaution.

Since for timing longer intervals, the stopwatch is almost entirely used flat on its back and with a nearly fully wound spring, the rate was adjusted in that condition. As it turned out, beat error was less than 0.3ms as a result of the spring reshaping and there was no need to tweak the isochronism adjustment.

Above, a pic of the timegrapher. The watch runs like this hour in hour out, and when bumped, it stabilises without the glitch caused by the touching turns.

The stopwatch was replaced with a Casio MS-70W which has been an excellent tool, and apart from the rubber ring that holds the strap on, has required no maintenance. So, whilst the mechanical stopwatch is not ‘needed’, it now works well and is an acceptable reserve that does not require a battery.

Above is a frequency histogram of the experiment log.

The histogram uses 1dB intervals for the bars, so it chunks the data into discrete bands, and that hides an important issue with WSPR SNR data: its granularity is 1dB, a very coarse measure given the spread of the data.

Let's compare the probability distribution of the measured difference data with an ideal normal distribution.

Above is a quantile-quantile (Q-Q) plot of the raw data and an ideal response with the same standard deviation as the raw data. The data is for 4508 points, so these dots each typically represent a large number of observations, more so in the middle region.

There are two main departures:

- the response is a staircase rather than a straight line;
- the response departs from a straight line by curving to higher slope at the low end and high end.

(1) is due to the 1dB granularity of WSPR reports. The representation of an underlying continuous variable as an integer adds statistical noise and causes the data to fail tests of normality (so invalidating parametric methods dependent on normality).
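
The effect of (1) is easy to reproduce with synthetic data: rounding an underlying continuous normal variable to 1dB steps is enough for Shapiro-Wilk to reject normality at this sample size. The parameters here are illustrative assumptions, not the experiment's data:

```python
# Rounding an underlying continuous normal variable to integers (1 dB
# granularity, as WSPR reports) makes Shapiro-Wilk reject normality at
# large N, even though the underlying variable is perfectly normal.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.5, 4508)      # continuous SNR difference, dB (assumed)

p_continuous = shapiro(x).pvalue            # typically unremarkable
p_rounded = shapiro(np.round(x)).pvalue     # collapses towards zero

print(p_continuous, p_rounded)
```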

(2) is a characteristic of the measurement system which exhibits some non-linearity of response at the low and high ends of the WSPR detector range. The effect of this defect is diminished by the lower number of observations in the lower and upper tails of the data.

So, to the eye, the data might at first look normally distributed but it fails normality tests. The mean is -0.09dB, the median is 0dB, and a Shapiro-Wilk normality test gives a probability 2.62e-37 that the data is normally distributed… extremely unlikely.

We have a set of observations with a mean of -0.09dB, and the question arises whether there is in fact a difference between transmitters A and B, or was this mean a result of chance.

In statistical speak, we want to test the null hypothesis Ho: there is no difference between A and B.

If this were normally distributed data, we could use a paired Student’s t test to test that hypothesis, and further, we could use the properties of a normal distribution to set confidence limits to the calculated difference.

It is not normally distributed data, so we could apply a non-parametric test for Ho.

The Wilcoxon signed rank test is suitable, and the calculated probability that Ho is true (the data are not different) is 2.9e-24. It is extremely unlikely that A and B are the same, or that B-A=0 (ie that this result occurred by chance). Although we can say that with conviction, we cannot set confidence limits on the calculated mean (-0.09dB).
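
For readers who want to reproduce the method, here is a sketch using scipy on synthetic paired differences. The offset and spread are assumed for illustration only; they merely resemble the experiment's B-A data:

```python
# Wilcoxon signed-rank test of Ho: median(B - A) = 0 on synthetic,
# 1 dB rounded paired differences. The -0.09 dB offset and 1.5 dB spread
# are assumed, standing in for the 4508 real observations.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
d = np.round(rng.normal(-0.09, 1.5, 4508))  # synthetic B-A differences, dB

# The classic test discards zero differences.
res = wilcoxon(d[d != 0])
print(res.pvalue)
```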

We can observe that mean of the measurements was -0.09dB and 95% of the measurements fell within the range -3.0 to 3.0dB.

Were the data normally distributed, we could calculate a confidence interval based on SD and N and say that the difference of B and A is -0.09dB +/-0.070dB with 95% confidence.
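
The arithmetic of that parametric interval is half-width = 1.96·SD/√N. The SD below is an assumption, back-derived so that it reproduces the quoted ±0.070dB; it is not a figure from the experiment log:

```python
# Parametric 95% confidence interval half-width: z * SD / sqrt(N).
# SD ~ 2.4 dB is assumed here purely for illustration; it is roughly the
# value that reproduces the +/-0.070 dB quoted above.
import math

N = 4508
sd = 2.4      # assumed sample standard deviation, dB
z95 = 1.96    # two-sided 95% point of the normal distribution

half_width = z95 * sd / math.sqrt(N)
print(f"95% CI half-width ~ {half_width:.3f} dB")
```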

The latter is a stronger statement as it makes inference about B wrt A whereas the statement before that simply reports measurements.

Continued at WSPR for A/B tests – a discussion – part 3.

The WSPR software is designed for probing potential radio propagation paths using low power beacon-like transmissions.

Though that talks about measuring radio paths, it is often used to compare transmitters or receivers over radio paths.

WSPR SNR measurements include the end to end radio path, which on some bands is highly variable, so using WSPR reported SNR values to compare two transmitters can be quite challenging.

We are all familiar with ad-hoc tests where a station might switch between two antennas and ask for comparative reports from receiving stations. At times when the radio path characteristics change greatly, changes in transmitter are often masked or confused by path variation.

Of course some practitioners will conduct several so-called A/B changes, perhaps as many as five, and someone (at the receiver or transmitter) makes an informal judgement of the central tendency of the observations. The observations might be given in quite subjective terms, or in quantitative terms, possibly from an S meter of unknown calibration.

Repeated measurements of the same thing, or same type of thing (eg 10 measurements of 1 new dry cell, or one measurement each of 10 new dry cells) tend to yield a set of slightly different observations.

For a lot of common physical things, the distribution of repeated measurements follows a bell shaped probability curve.

Most things that we repeatedly measure will return slightly different results from observation to observation due to various contributions in an imperfect world.

Above is a plot of the probability distribution of a normally distributed random variable with mean=1 and variance=1 (standard deviation=1).

There is a wealth of statistical techniques that can be applied to normally distributed data.

Whilst the normal distribution is very common, some phenomena exhibit a distribution where the log of the variable is normally distributed: a log-normal distribution.

G3CWI recently conducted an experiment where two WSPR transmitters were combined to a single antenna, and observations collected from receivers that decoded both transmissions in a WSPR 2 minute measurement interval. There are more than 4000 paired observations of A, B and B-A.

In fact, the difference data B-A contains more information than the sets A and B in isolation, the pairing of the observations makes for increased statistical power and reduced confounding effects.

Above is a frequency histogram of the experiment log. You might notice a resemblance to the normal curve shown earlier. The underlying S/N response is in fact approximately log-normal, but S/N expressed in dB is approximately normal.

The parametric statistical methods that can be used for normally distributed data can be used with log-normal distributions (with appropriate log adjustments).

We will consider the WSPR SNR in dB to be approximately normally distributed (though the underlying SNR is log-normal), which leads to the question “how approximately?”

Continues at WSPR for A/B tests – a discussion – part 2.

It is certainly an interesting subject to most hams with a deep interest in antenna systems.

So called A/B comparisons of antennas are as old as ham radio itself, and experienced hams know that they are quite flawed.

Because ionospheric propagation paths vary from moment to moment, the challenge is to make a measurement that is directly comparable with one made at a slightly different place, or frequency or time. Accuracy is improved by making several measurements, and finding a central value; more observations tend to reduce uncertainty in that estimate of the population central value.

The challenge is finding that central tendency.

There are three common methods of estimating the central tendency of a set of figures:

- mean (or average);
- median (or middle value); and
- mode (or most common value).

The mean is a popular and well known measure of central tendency. It is a very good estimate of the central tendency of Normally distributed data, and in that case, we can compare means and calculate confidence levels for assertions about the difference between means. The mean is very susceptible to errors due to outliers, and skewed distributions.

The median is usually a better measure for skewed data.

The mode is, if you like, the most frequent or popular value, and has a great risk of being quite misleading on this type of data.

A recent article (Appleyard 2018) in Radcom provides a useful example for discussion.

Appleyard gives a summary table where he shows means of a set of RBN measurements of signals from two stations observed at 21 remote stations, and differences in those means.

There are some inconsistencies between the text and data recorded in the RBN database on the day.

It appears likely that callsign MX0NCA was used for the inland station, and the RBN shows 10 reports by DF7GB on that morning.

By eye, the full set of 10 observations do not appear to be Normally distributed, and in fact the IQR at 10.0dB is some 40% wider than would be expected of Normally distributed data. A more sophisticated test for Normality is the Shapiro-Wilk test, and it gives a probability of 0.96% of falsely rejecting the hypothesis that the data is drawn from a Normally distributed population; in plain speak, it is very unlikely that the 10 observations were drawn from a Normally distributed population. For this reason, the mean is not a very good estimator of its central tendency, and operations like finding the difference of the means (as shown in the table) are not valid.
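
The IQR screen used above rests on a property of the normal distribution: its interquartile range is about 1.349 standard deviations. A quick sketch (the sample SD below is assumed for illustration, not taken from the RBN data):

```python
# For a normal distribution, IQR = (Phi^-1(0.75) - Phi^-1(0.25)) * SD
# ~ 1.349 * SD. Comparing a sample's IQR with 1.349 * its SD is a quick
# screen for non-normality.
from scipy.stats import norm

iqr_factor = norm.ppf(0.75) - norm.ppf(0.25)   # ~1.349

# Hypothetical sample SD (assumed): the expected IQR if the data were
# normal, versus the observed 10.0 dB.
sd_sample = 5.3
expected_iqr = iqr_factor * sd_sample
print(f"expected IQR {expected_iqr:.1f} dB, observed 10.0 dB "
      f"({10.0 / expected_iqr - 1:.0%} wider)")
```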

For this data set, the mean is 15.5, median is 18 and mode is 20. What do you think the central tendency is of the graphed data? The median is probably a better estimate of the central tendency of this data (but note that there is no basis for taking the difference of the medians).

It appears that some data may have been excluded from the table summary as the given mean value of 14.8 is different to that of the full set of 10 observations.

An important attribute of Normally distributed data is that the mean of the sum (or difference) of two normally distributed variables is the sum (or difference) of the means of each.

The most important consequence of this is that since the antenna means are of non-Normal data, calculated difference in the third column is not a valid indicator of the difference in the antennas observed at that receiver.

Whilst it is not valid to find the difference of the means of non-normal data, the individual paired SNR observations may reveal a strong relationship between the two antennas.

Appleyard gives another summary table where he shows means of a set of RBN measurements of signals from four stations observed at 16 remote stations, and differences in those means.

The question again arises whether the observations are normally distributed, whether the mean is a good measure of central tendency. It is a rather complex two way table of measurements, one that probably cannot use methods that depend on Normal data.

The observations by each of the 16 remote sites could be seen as independent clusters of measurements of each of the transmitters.

Above, a plot of the means (even if they are not a good measure) doesn’t suggest a clear winner, and it can be seen that a transmitter that is clearly better at some remote sites, is not at others. Are the apparent differences due mainly to the antennas, or are they obfuscated by other variables?

The data defies a quantitative measure of the differences of the antennas with declared confidence limits.

So, you might ask, “what does this tell you about the relative merits of the various antennas tested?”

Appleyard states:

In terms of the short-term variation in S/N, we have found that the averaging of at least three successive reports mostly takes out these perturbations.

The analysis above of the set of 10 where 50% of the S/N observations were spread over 10dB (the IQR) contradicts that position.

Too few observations gives very wide uncertainty in any conclusions, and it tends to be the case that for non-parametric analyses, even more observations are necessary.

Non-parametric studies comparing S/N observations in a two way analysis in another context achieve useful accuracy only with hundreds of paired observations, and that would seem to be impractical for RBN sourced observations.

Good experiments don’t usually happen by accident. The questions to be answered (the null hypotheses in statistical terms) need to be thought through and the experiment designed to capture enough data to hopefully provide valid results.

Capturing field data is an expensive process, and ability to do a first pass analysis while the experiment is set up can help avoid a wasted venture.

When the observation data cannot be shown to be Normally distributed, means are not a good measure of central tendency, and the whole raft of parametric statistical techniques premised on Normal distribution are unavailable.

It is likely that non-parametric techniques are needed for analysis, and the sheer volume of observations might not be practical from RBN.

- Appleyard, S. Jun 2018. Using the reverse beacon network to test antennas In Radcom.

Designing with magnetics can be a complicated process, and it starts with using reliable data and reliable relationships, algorithms, and tools.

Be very wary of:

- published data, especially on sellers’ websites, as it often contains significant errors;
- application specific calculators, most are not suitable for ferrite cored inductors at RF; and
- bait and switch where the seller pretends to sell brand name product, but ships a substitute that may or may not comply with specifications.

One reputable manufacturer of a wide range of ferrite cores is Fair-rite. Let's use their databook as an example for design data.

A ferrite cored toroidal inductor has important characteristics that make design a challenge:

- ferrite permeability is a complex value that is frequency dependent; and
- the ‘inductor’ is more completely a resonator.

(1) is dealt with by using the correct complex permeability in calculations.

(2) has little effect at less than say one tenth of the lowest self resonant frequency, and up to about half that first self resonant frequency it can be modelled reasonably well by a small equivalent shunt capacitance.

Let's work through two different formats of specification data; the first is common for ‘ordinary’ toroids, the second for ‘suppression sleeves’.

Let's look at the entry for a 5943003821, which is known commonly in ham circles as a FT240-43. Here is a clip from Fair-rite’s catalogue 17th Ed.

Let's find the impedance of a 3t winding on this core at 3.6MHz, firstly ignoring self resonance.

Let's use Calculate ferrite cored inductor (from Al).

From the datasheet, Σl/A is 920/m (multiply the /cm value by 100 to convert).

Let's use Calculate ferrite cored inductor – ΣA/l or Σl/A.

The results reconcile well with the previous case.
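
For readers who want to check such results by hand, the calculation reduces to Z = jωµ0(µ′−jµ″)n²/(Σl/A), ignoring self resonance. The complex permeability figures below are illustrative values for #43 mix near 3.6MHz, read roughly from the datasheet chart; use the actual chart for design work:

```python
# Impedance of a ferrite cored inductor from core geometry and complex
# permeability, ignoring self resonance:
#   Z = j * omega * mu0 * (mu' - j*mu'') * n^2 / (sum(l/A))
import math

f = 3.6e6                 # Hz
n = 3                     # turns
sum_l_on_A = 920.0        # /m, from the datasheet (per-cm value * 100)
mu = complex(600, -250)   # mu' - j*mu'' for #43 near 3.6 MHz (assumed)
mu0 = 4e-7 * math.pi      # permeability of free space

Z = 1j * 2 * math.pi * f * mu0 * mu * n**2 / sum_l_on_A
print(f"Z = {Z.real:.1f} + j{Z.imag:.1f} ohm")
```

Note that the real part comes from µ″ (core loss) and the reactance from µ′, which is why using a frequency-independent inductance is misleading for ferrite at RF.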

From the datasheet, dimensions are 62.8×34.2×13.7mm.

Let's use Calculate ferrite cored inductor – rectangular cross section.

The result is close to the previous cases, but a tiny bit higher as this model assumes sharp edges on the toroid whereas they are chamfered and that slightly reduces the cross section area. The error is small in terms of the specified tolerance of the cores, so it is inconsequential.

Let's look at the entry for a 5943003821, which is known commonly in ham circles as a FT240-43. Here is a clip from Fair-rite’s catalogue 17th Ed; in this case the format is that used for many cores classed as suppression cores.

From the datasheet, dimensions are 62.8×34.2×13.7mm.

Let's use Calculate ferrite cored inductor – rectangular cross section.

Al is the inductance of a single turn at a frequency where µ=µi (µi is the initial permeability, permeability at the lowest frequencies.)

Al is usually calculated from measurement of impedance or inductance with a small number of turns at around 10kHz.

It can also be estimated from initial permeability (µi) and dimensions, or Σl/A or ΣA/l.

Taking the last example, let's calculate the impedance at 10kHz.

Above, Ls is 1.65µH, so Al=1650nH. The calculator also conveniently gives ΣA/l=0.00164m, and of course Σl/A is the inverse, 610/m.

If you measure L and divide by n^2, be careful that the measurement is at a frequency where µ=µi.
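
The reconciliation of Al and ΣA/l above can be checked from Al = µ0·µi·ΣA/l. With µi = 800 for #43 mix and the ΣA/l figure from the calculator, the quoted Al falls out:

```python
# Al (inductance per turn^2 at mu = mu_i) from initial permeability and
# core geometry: Al = mu0 * mu_i * sum(A/l).
import math

mu0 = 4e-7 * math.pi
mu_i = 800.0           # initial permeability of #43 mix
sum_A_on_l = 0.00164   # m, the figure from the calculator above

Al = mu0 * mu_i * sum_A_on_l   # H per turn^2
print(f"Al = {Al * 1e9:.0f} nH")
```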

As mentioned earlier, these devices are really resonators and exhibit self resonance. Up to about half that first self resonant frequency these effects can be modelled reasonably well by a small equivalent shunt capacitance.

So the first step is to carefully measure the first self resonant frequency, carefully meaning to ensure that the test fixture is not disturbing the thing being measured.

Above is a plot of calculated impedance for 11t on the 5943003821 used above.

Above is the same scenario with Cs=2pF to calibrate the self resonant frequency to measurement of a prototype.
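
The shunt capacitance model is simply the calculated core impedance in parallel with Cs. A minimal sketch; the example core impedance is hypothetical, but Cs = 2pF is the figure calibrated to measurement above:

```python
# Equivalent shunt capacitance model of self resonance: below about half
# the first self resonant frequency, the inductor is approximated as the
# calculated core impedance Zcore in parallel with a small Cs.
import math

def z_with_cs(zcore, f, cs=2e-12):
    """Parallel combination of core impedance and shunt Cs at frequency f."""
    zc = 1 / (1j * 2 * math.pi * f * cs)   # impedance of the shunt Cs
    return zcore * zc / (zcore + zc)

# e.g. a hypothetical core impedance at 10 MHz: below self resonance the
# shunt C raises |Z| relative to the uncompensated core impedance.
print(z_with_cs(complex(500, 1500), 10e6))
```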

- Duffy, O. 2015. A method for estimating the impedance of a ferrite cored toroidal inductor at RF. https://owenduffy.net/files/EstimateZFerriteToroidInductor.pdf.
- Snelling, E C. Soft ferrites properties and applications. Iliffe books 1969.

I have been offered input VSWR curves for such a configuration, and they are impressive… but VSWR curves do not address the question of loss / efficiency.

Note that building loss into antenna system components is a legitimate and common method of taming VSWR excursions, eg TTFD, CHA250, many EFHW transformers, but in some applications, users may prioritise radiated power over VSWR.

Objectives are:

- used with a load such that the input impedance Zin is approximately 50+j0Ω, Gin=0.02S;
- broadband operation from 3.5-30MHz;
- VSWR < 2 with nominal 3200Ω load; and
- transformer efficiency > 90% at 3.6MHz.

The following describes such a transformer using a Fair-rite 2643625002 core (16.25×7.29×14.3mm #43).

I mentioned in the reference article that the metric ΣA/l captures the geometry, the larger it is, the fewer turns for same inductance / impedance. ΣA/l for the chosen core is 3.5 times that of a FT82-43 yet it is only 1.6 times the mass.

The transformer is wound as an autotransformer, 3+21 turns, ie 1:8 turns ratio.

Firstly, let's estimate at 3.6MHz the minimum number of turns to ensure that magnetising conductance is less than about 0.002S (for better than 90% core efficiency).

Above, 3t on the primary delivers Gcore<0.002S.

Above is a sweep of the uncompensated prototype with a 3220+50Ω load.

Let's work through a loss analysis.

Because of the division of power between the 3220Ω resistor and VNA input, there is effectively an attenuator of -10*log(50/(50+3220))=18.16dB, so |S21| has a component due to this division. Lets call this element the LoadAttenuator.

Zin=46.52+j6.72Ω. From that we can find Mismatch Loss.

MismatchLoss is 0.03dB.

Loss (to mean PowerIn/PowerOut) can be calculated in dB as -S21-LoadAttenuator-MismatchLoss=18.64-18.16-0.03=0.45dB, or an efficiency of 10^(-0.45/10)=90.2%.
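
The loss arithmetic can be verified in a few lines, using only the figures quoted in the text (S21 = -18.64dB, the 3220Ω series resistor, and Zin = 46.52+j6.72Ω):

```python
# Transformer loss from a VNA S21 measurement through a series resistor:
# subtract the power division of the resistor/port divider and the
# mismatch loss at the input from the measured |S21|.
import math

def db(x):
    return 10 * math.log10(x)

load_attenuator = db((3220 + 50) / 50)        # power division, ~18.16 dB

zin = complex(46.52, 6.72)                    # measured input impedance
gamma = (zin - 50) / (zin + 50)               # reflection coefficient
mismatch_loss = -db(1 - abs(gamma) ** 2)      # ~0.03 dB

s21 = -18.64                                  # measured, dB
loss = -s21 - load_attenuator - mismatch_loss # transformer loss, dB
efficiency = 10 ** (-loss / 10)

print(f"loss {loss:.2f} dB, efficiency {efficiency * 100:.1f}%")
```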

Note that there is some uncertainty in the measurements, but we can be confident that the loss is nowhere near the figure estimated for the FT82-43 design.

A 100pF silvered mica was connected in shunt with the transformer primary. This is not an optimal value, benefit may be obtained by exploring small changes to that value.

Above is a sweep of the roughly compensated transformer. The capacitor makes very little difference to the low frequency behavior, but it reduces the input VSWR significantly at the high end. VSWR<1.8 over all of HF.

This transformer has more surface area than a FT82-43 based one, so it has higher capacity to dissipate heat, and it is more efficient, so it will have higher power capacity than the FT82-43 based one.

The tests here were using a dummy load on the transformer, and that did allow confirmation of the design and expected loss at 3.6MHz.

Real end fed antennas operated harmonically do not present a constant impedance, not even in harmonically related bands. Note that the resonances do not necessarily line up harmonically, there is commonly some enharmonic effect.

Being a more efficient design than some, it might result in a wider VSWR excursion than those others, as transformer loss can serve to mask the variations in the radiator itself.

Well, in ham radio, everything works. But systems that work better increase the prospects of contacts.

- FT82-43 matching transformer for an EFHW
- Find |Z|,R,|X| from VSWR,|Z|,R,Ro
- A new impedance calculator for RF inductors on ferrite cores
- Calculate ferrite cored inductor (from Al)
- Calculate VSWR and Return Loss from Zload (or Yload) and Zo
- Duffy, O. 2015. A method for estimating the impedance of a ferrite cored toroidal inductor at RF. https://owenduffy.net/files/EstimateZFerriteToroidInductor.pdf.
- ———. 2006. A method for estimating the impedance of a ferrite cored toroidal inductor at RF. VK1OD.net (offline).

URLs are automatically rewritten in most if not all cases, provided you have not disabled redirects / rewrites of URLs.

Some older browsers may not follow the rewrites… so if you are using XP or older and IE, things might not work for you.

Of course you can manually edit any bookmarks you have to change the protocol prefix from http:// to https://. If your link does not have the prefix, it will need https:// ahead of the host name, either manually or an automated rewrite / redirect from owenduffy.net.

Off site links to non SSL sites may cause warnings in your browser to the effect that you are now entering an unprotected site; they refer to where you are going rather than where you have been.

Some software authored by Owen Duffy has a facility to check for updates (eg nfm, fsm). It is advisable to exercise that function as it will store the new check URL. The workaround to allow this to continue to happen is temporary and may be disabled in the future.
