The SARK-100 is one of these low end analysers; it and its many close derivatives are marketed under various model names.

The article The sign of reactance discusses a major weakness of these and many other low end instruments: they do not ‘measure’ the sign of reactance, but display the magnitude of reactance, leaving it to the user to solve the sign problem.

SM6WHY is one of many who have produced software for the SARK-100 that purports to solve the sign of reactance problem. He gives this graphic on his website to demonstrate the capability of his software used with a SARK-100 (which does not sense the sign of reactance).

Above is part of the graphic he offers. Though the image is poor quality, the VSWR plot appears smooth and quite typical of that which might be obtained by measuring an antenna system near its VSWR minimum.

However the accompanying Smith chart plot, which has points plotted with both negative and positive reactance, is inconsistent with the VSWR plot and appears flawed.

Remember that VSWR along the Smith chart curve is simply determined by the distance to the chart centre, so as you ‘travel’ along the Smith chart curve, the distance of each point from the centre should follow the VSWR plot.
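That relationship is easily sketched in code (my own illustration, not from the article): VSWR is a function only of |Γ|, the distance from the chart centre, so two points differing only in the sign of X yield identical VSWR.

```python
# Sketch: VSWR from impedance via the reflection coefficient, wrt real Zo.
def gamma(z, z0=50):
    """Complex reflection coefficient of impedance z (ohms) wrt real Zo."""
    return (z - z0) / (z + z0)

def vswr(z, z0=50):
    rho = abs(gamma(z, z0))   # distance from the Smith chart centre
    return (1 + rho) / (1 - rho)

# Opposite signs of X lie at the same distance from the chart centre,
# so they imply exactly the same VSWR (illustrative values):
print(round(vswr(complex(39, 20)), 3))
print(round(vswr(complex(39, -20)), 3))   # identical to the line above
```

This is why the VSWR plot alone cannot resolve the sign ambiguity, and why consistency between the VSWR and Smith chart plots is a useful sanity check.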

Smith chart curves of real systems resemble spirals where short sections of the spiral resemble circular arcs about some centre (not necessarily the chart centre), and those arcs are traversed clockwise with increasing frequency.

So, the fact that the two ends of the curve here end in anticlockwise arcs is a warning that something is very wrong.

The graphic above is the same graphic with the contrast improved to see the SWR curve more clearly, and my annotation of some Smith chart points for discussion.

If you assume that the plotted R value is approximately correct, and that the magnitude of X is approximately correct, you can then consider the alternate sign of X for each of the plotted points to see what is needed to produce a curve that obeys the ‘rules’ of real systems.

One solution, and the most likely, is that all the points highlighted in green are of the opposite sign. It is quite a different plot with those changes applied.

- Reliably determining the sign of reactance from measurements of R,|X| is challenging.

- The sign of reactance
- [SARK-100 Pc Scan] SWR Analyzer Software (accessed 28/05/2017)

The obvious source is eBay, which means running the gamut of Chinese sellers, sellers who rarely understand the product they sell and probably expect the same of buyers.

Component sales tend to fall into categories:

- those with headline descriptions that have a very brief description of characteristics;
- those whose descriptive content claims well known part numbers for which datasheets can separately be found; and
- those with detailed specifications offered.

In the case of category 1, it is very hard to have confidence that the components will deliver the required performance, and headline descriptions on eBay are often used as competitive search keywords that may not apply to the goods on offer. These are probably best skipped unless they are the only option.

Category 2 provides a better option, and the question then on delivery is whether the goods are compliant with the part number offered. There is a considerable risk of counterfeit or fake parts that are not equivalent to the claimed part number, even where brand names are cited.

The third category can provide suitable product, but it takes some leg work, more than routine ‘due diligence’: checking the description for consistency, forming an idea about its reliability and fit to the requirements, and then value for money, seller reputation etc. This can be a lot of work for a few dollars worth of parts, but it is a better option than category 1.

An example of the process for category 3 checks follows.

Above are the specifications of a seller’s offering for “3W super bright LEDs”. Of course there are many sellers offering exactly the same product with the same description, many at the same price.

But, since my need was for white LEDs, let’s extract the key information from the table. Headline figures are 3W, 3.1V, 0.75A, 390,000mcd, 130°.

Let’s feed them into a calculator (Calculate luminous flux (lm) from luminous intensity (cd) and apex angle (°)) and find the luminous efficiency. Practical budget LEDs typically fall into the range 50-150lm/W, usually toward the lower end.

Wow, the data gives a luminous efficiency of 609lm/W. This is unbelievable, ten times what is believable. Note too that the power input is less than 80% of the headline rating, but even if it were 3W, the calculated luminous efficiency would still be ridiculously high. It would be high risk to proceed with purchase; the LEDs cannot meet all of the stated specifications.
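The arithmetic behind that check can be sketched as follows (my own working, assuming the calculator’s model of uniform intensity within the beam cone):

```python
import math

# Sketch of the calculator's method (assumption: intensity is uniform
# within the beam cone defined by the apex angle).
def luminous_flux_lm(intensity_cd, apex_deg):
    solid_angle_sr = 2 * math.pi * (1 - math.cos(math.radians(apex_deg / 2)))
    return intensity_cd * solid_angle_sr

# The suspect listing: 390,000 mcd = 390 cd, 130 deg beam, 3.1 V x 0.75 A.
flux = luminous_flux_lm(390, 130)   # ~1415 lm
power = 3.1 * 0.75                  # 2.325 W, under 80% of the 3 W headline
print(round(flux / power))          # 609 lm/W: not believable
```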

Now as I mentioned, there will be scores of sellers using exactly the same description and the same graphic, most at similar prices.

Above is the calculator result for another product offering. The calculated luminous efficiency is about as high as typical of most product, but believable. Again the power input is less than the headline 3W description, which highlights the need to select on luminous output rather than input power.

The project requirement is for an illuminance of 10lx at 2m, and the calculator shows that should be achieved.
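That last check is simple inverse square law arithmetic (my own sketch, assuming a point source viewed on axis):

```python
# Sketch (assumption: point source, on-axis, inverse square law):
# illuminance (lx) = luminous intensity (cd) / distance (m) squared.
def illuminance_lx(intensity_cd, distance_m):
    return intensity_cd / distance_m ** 2

# 10 lx at 2 m requires at least 10 * 2**2 = 40 cd on axis.
print(illuminance_lx(40, 2))   # 10.0
```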

None of this is to imply that there aren’t good quality products offered on eBay, but there is a lot of product with fraudulent claims whether the seller understands them or not.

Buying success usually needs due diligence in carefully checking all aspects of product offering, and then preparedness to pursue a fair outcome if the product does not meet its claims.

The original poster clearly had the impression that this improvement of the original VSWR=1.3 would make a large difference.

The only other option for me is to remove the shunt and set my swr back to 1.3:1 and not be able to communicate.

It is difficult to predict the difference in power delivered to the antenna when perfectly matched (including the effect of matching loss) and that delivered to the antenna as is.

Before you jump to the conclusion that the classic formula for Mismatch Loss applies: it applies only where the source is well represented as a Thevenin equivalent linear circuit, and typical ham transmitters for 80m are not well represented by that model, so the model is not applicable.

The difference can be measured in the actual implementation by using a directional wattmeter calibrated for any real Zo to measure:

- the forward and reverse power to the poorly matched antenna and finding the difference which is the power delivered to the antenna system (including matching network); and
- the power delivered to a matched load.

The basis of this is sound, see Power in a mismatched transmission line for proof.

It is worth noting that a practical transceiver specified to operate into a nominal load of 50+j0Ω may well deliver maximum power into a slightly different load.

Transmitters are designed to deliver a given output power into a nominal load with some limits on distortion, heating, operation of active devices within ratings etc.

Whilst it is unwise to operate a transmitter into a load outside of its ratings, the original poster’s case of VSWR=1.3 would be within the ratings of most practical transmitters and the transmitter is likely to work substantially as specified.

The shunt matching solution does improve VSWR, but it also has narrower bandwidth than the ‘unmatched’ vertical which is already much narrower on 80m than is convenient to most operation.

A broadband autotransformer would be a better solution, albeit with some loss, potentially making it less efficient than the ‘unmatched’ antenna.

A practical loaded mobile whip on 80m is likely to have a radiation efficiency in the region of a tenth of that of a good half wave dipole antenna system, so expectation of results needs to be tempered by that.

An observation: VSWR=1.3 is a little lower than would commonly be the case for this type of antenna, and might well indicate a problem.

- Any degradation of power output into the original poster’s VSWR=1.3 antenna is likely to be very small and insignificant, the transmitter is likely to be operating substantially as specified in every respect.
- The shunt match narrows the VSWR bandwidth of what is, in this case, an already narrow antenna.
- Other matching arrangements may have advantages, but are also potentially less efficient than the ‘unmatched’ antenna, which in this case is not a particularly poor match.

This article presents a derivation of the power at a point in a transmission line in terms of ρ (the magnitude of the complex reflection coefficient Γ) and Forward Power and Reflected Power as might be indicated by a Directional Wattmeter. Mismatch Loss is also explained.

We start by deriving the apparent power P_{a} in terms of transmission line parameters.

P_{a} has both reactive and real components. We are only interested in the real component of power.

The quantity |V_{f}|^{2}/2Z_{0} is commonly known as P_{fwd}, and |V_{r}|^{2}/2Z_{0} is commonly known as P_{ref}, but P_{r}=P_{fwd}-P_{ref} is true **only** when Z_{0} is real.

Mismatch Loss is a measure of the reduction of power in a load due to mismatch. From the above, it can be seen that MismatchLoss=-10log(1-ρ^{2}). The definition of Mismatch Loss says nothing of any change in loss or dissipation inside the source, just the reduced power available in the mismatched load. Likewise, the calculation of power and Mismatch Loss from ρ as derived above is valid **only** if the underlying Z_{0} is real and the source impedance is exactly Z_{0}. Mismatch Loss is widely misused; if the underlying criteria are not satisfied, the basis for the calculation does not exist and the figure obtained is in error.
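Under those criteria (real Zo, and for Mismatch Loss a source impedance equal to Zo), the relationships can be sketched as:

```python
import math

def power_w(p_fwd, p_ref):
    """Power past the measurement point; valid only where Zo is real."""
    return p_fwd - p_ref

def mismatch_loss_db(rho):
    """Valid only where Zo is real and source impedance equals Zo."""
    return -10 * math.log10(1 - rho ** 2)

# Using the Bird 43 figures given later (Pfwd=100W, Pref=10W):
print(power_w(100, 10))                  # 90 W
rho = math.sqrt(10 / 100)                # rho^2 = Pref/Pfwd
print(round(mismatch_loss_db(rho), 2))   # 0.46 dB, if the criteria hold
```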

It is the impedance Zo for which the Directional Wattmeter is calibrated that is important, not the Zo of the transmission line external to the instrument, or, to a certain extent, even the internal line (neither of which are perfectly real). If the Directional Wattmeter’s sampling element is calibrated for zero Pref on a practical resistive termination at the end of a very short transmission line, the error in assuming that Pr=Pfwd-Pref is very small, insignificant with respect to the typical standard error of RF power measurement.

For example, if a Bird 43 calibrated for 50Ω is inserted in a 75Ω line, Pfwd reads 100W and Pref reads 10W, the power (ie rate at which energy flows past that point) is 90W.

Obviously, as Pref approaches Pfwd, the standard error of the calculated power increases, and the technique is of limited use in extreme VSWR cases.

- Mismatch Loss (-10log(1-ρ^{2})) dB is the reduction of power in a load due to mismatch when:
- Zo is real (purely resistive); and
- the equivalent source impedance is equal to Zo.
- It implies nothing about dissipation within the source.

- A Directional Wattmeter can be used to determine power in a transmission line:
- if the calibration impedance is real (purely resistive);
- even if the impedance at the point it is sampling is not equal to its calibration impedance;
- a known resistive load is not necessary; and
- power equals Forward Power less Reflected Power.

How to get the most out of an 80 mobile antenna?…I am using a hustler antenna and I had the swr down to 1.3:1. I started researching how to make the antenna better and it seems that maybe an inductive shunt at the base of the antenna to ground would help. I don’t have the equipment to analyze the antenna and the shunt reactance. I made a 9 turn coil 1″ in diameter and 1″ long using n0. 12 awg thnn wire. I installed the coil at the base of the antenna and now the best swr that I can get is 1.8:1. So is there a way that I can set up the coil and antenna using only an swr meter?…

After 25 responses, none of the online experts have offered a direct answer or explanation.

The coil inductance is too low, try a solenoid of 13 turns, 40mm diameter and 40mm length.

An antenna of this type at minimum VSWR will have a feed point impedance of near zero reactance and resistance equal to 50 divided by the measured VSWR, so in this case 39Ω.

A characteristic of this type of antenna is that near resonance (ie near zero reactance), reactance changes with frequency much more quickly than resistance, so much so that we can treat the resistance as approximately constant for the purposes of solving the matching network.

A strategy for matching is to create an L network being a capacitive reactance in series with the base of the antenna, and a shunt inductance. The capacitive reactance can be obtained by ‘detuning’ (shortening) the whip if it has a suitable adjustment.

The chart above from one of my articles provides help in designing a suitable shunt inductor. With series R of 39Ω, the required shunt reactance is around 98Ω. At 3.7MHz, that is an inductance of 4.2µH. The calculator at http://hamwaves.com/antennas/inductance.html can be used to explore coil parameters to get in the ball park of 4.2µH.
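For comparison, the textbook L-network equations (my own working, not the article’s chart) give a similar answer:

```python
import math

# Textbook L-network sketch: match the antenna's R=39 ohms to Zo=50 ohms
# with a series capacitive reactance (the shortened whip) and a shunt
# inductor across the 50 ohm side.
def l_network(r_low, r_high):
    q = math.sqrt(r_high / r_low - 1)
    return q * r_low, r_high / q   # (series X, shunt X) in ohms

def inductance_uh(x_ohms, f_hz):
    return x_ohms / (2 * math.pi * f_hz) * 1e6

x_series, x_shunt = l_network(39, 50)
print(round(x_shunt, 1))                        # ~94 ohms, near the chart's ~98
print(round(inductance_uh(x_shunt, 3.7e6), 2))  # ~4.05 uH, in the 4.2 uH ballpark
```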

Adjustment is done by connecting the 4.2µH coil and adjusting the whip tip shorter until minimum VSWR is found. Iteratively changing the inductor a little and re-trimming the whip should allow finding a perfect match (ie VSWR=1).

So is there a way that I can set up the coil and antenna using only an swr meter?

QED.

His experiment connected a WSPR modulated RF source directly to an SDR receiver, and he recorded WSPR’s receive SNR reports vs input attenuation and configured SDR receiver bandwidth. The direct connection means the test is not subject to normal radio path effects like fading.

The table above is derived from Talbot’s; his information about the RF source (-30dBm) and attenuator settings is converted to receiver input power (dBm).

Above is the same data charted. A linear fit to the 300Hz data is also included; it is a very good fit. The issue that Talbot raised is that the reported SNR is quite dependent on receiver bandwidth.

The chart above shows the sensitivity to bandwidth here at -120dBm input power.

The table above is a calculation of receiver noise figure implied by the first table. The mean of the measurements with 300 and 500Hz bandwidth is 27.2dB, and the cells between 26 and 28dB are highlighted in green. The receiver noise figure should be largely independent of the receiver bandwidth setting.
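A sketch of how such an implied noise figure might be computed from a reported SNR (my assumption: WSPR SNR reports are referenced to a 2500 Hz noise bandwidth, and the measurement system is linear):

```python
import math

# Sketch: infer receiver noise figure from input power and reported SNR
# (assumption: SNR referenced to a 2500 Hz bandwidth).
def implied_nf_db(p_in_dbm, snr_db, ref_bw_hz=2500):
    noise_floor_dbm = p_in_dbm - snr_db          # noise in the reference bw
    ktb_dbm = -174 + 10 * math.log10(ref_bw_hz)  # thermal noise floor
    return noise_floor_dbm - ktb_dbm

# Illustrative: a -120 dBm input reported at SNR=-7 dB implies NF ~27 dB,
# near the 27.2 dB mean noted above.
print(round(implied_nf_db(-120, -7), 1))
```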

It is apparent that the results derived from reported SNR show that bandwidths below 200Hz return incorrect SNR, and the right hand column suggests significant non-linearity at SNR=2dB, a hint of some non-linearity of the measurement system (here with only one signal).

Talbot, A. Feb 2010. WSPR reported S/N measurements. http://www.g4jnt.com/WSPR_SNR_test.pdf (accessed 21 May 2017)


The first experiment was a calibration run, if you like, to explore the nature of simultaneous WSPR SNR reports for two transmitters using different call signs on slightly different frequencies, simultaneously feeding approximately the same power to the same antenna.

This article is about the second test which he describes:

The second test uses a WSPRlite directly feeding the same stacked Yagis, and the second WSPRlite feeding nearly identical stacked Yagis that point directly through the other stack located four wavelengths directly in front. Power at each antenna was about 140 milliwatts for each WSPRlite.

The data for the test interval was extracted from DXplorer, and the statistic of main interest is the paired SNR differences, these are the differences in a report from the same station of the two signals in the same measurement WSPR interval.

There is an immediate temptation to compare the average difference; it is simple and quick. But it is my experience that WSPR SNR data are not normally distributed, and applying parametric statistics (ie statistical methods that depend on knowledge of the underlying distribution) is seriously flawed.

We might expect that whilst the observed SNR varies up and down with fading etc, that the SNR measured due to one antenna relative to the other depends on their gain in the direction of the observer. Even though the two identical antennas point in the same direction for this test, the proximity of one antenna to the other is likely to affect their relative gain in different directions.

What of the distribution of the difference data?

Above is a frequency histogram of the distribution about the mean (4.2). Each of the middle bars (±0.675σ) should contain 25% of the 815 observations (204). It is clearly grossly asymmetric and is most unlikely to be normally distributed. A Shapiro-Wilk test for normality gives a probability that it is normal of p=4.3e-39.
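For readers wanting to repeat such a test, scipy provides the Shapiro-Wilk test; the sketch below uses synthetic skewed data as a stand-in, not the 815 WSPR observations:

```python
import numpy as np
from scipy import stats

# Sketch of the normality test used above; synthetic skewed data stands in
# for the real WSPR difference observations.
rng = np.random.default_rng(1)
sample = rng.exponential(scale=3.0, size=815)   # grossly asymmetric sample
w_stat, p_value = stats.shapiro(sample)
print(p_value < 0.05)   # True: normality is rejected
```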

So let’s forget about parametric statistics based on the normal distribution; means, standard deviations, Student’s t-test etc are unsound for making inferences here because they depend on normality.

Differently to the first experiment, where both transmitters fed the same antenna and we might expect simultaneous observations at each station to be approximately equal, in this case there are two apparently identical antennas, one close to and pointing through the other, and the question is whether they are in fact identical in performance or whether there is some measurable interaction.

So, let’s look at the data in a way that might expose their behaviour.

Above is a scatter chart of the 815 paired SNR reports (where an individual station simultaneously decoded both transmitters). Note that many of the dots account for scores of observations, all observations are used to calculate the trend line.

In contrast to the previous test, there is quite a spread of data, but a simple least squares linear regression returns an R² result that indicates a moderately strong model with a Y intercept of -3.3dB (ie there is a -3.3dB difference between the systems).

We can reasonably draw the conclusion that there is a significant interaction between the otherwise identical antennas.

In fact sub-setting the data to select reports that were within +/- 5° of boresight, the difference was more like -5dB.

This raises the question of the design of an experiment, the hypothesis to be tested and then designing the experiment to collect unbiased observations that should permit a conclusion to be drawn.

One has little control of the location of observers in WSPR, their appearance is for the most part random. However, one can fairly easily filter the observations collected to excise observations outside a given azimuth range, and distance range (which might imply elevation of the propagation path). Filtering in this way ensures that the data is more relevant to the hypothesis being tested, and that should result in better correlation, less uncertainty in the result.

Firstly, let’s describe a loop for study: a square diamond with sides of 760mm (30″) of 2mm diameter copper, fed in one corner at 7.1MHz.

Calculate small loop Antenna Factor models a small loop in free space (therefore does not include ground losses).

Above is the calculator result, the key figures are Antenna Factor 31.75dB and Gain -44.5dBi.

An NEC-4.2 model was constructed with external excitation (1V/m) incident on the loop which has a 50+j0Ω load inserted at the feed point to represent the receiver load.

Here is the model source.

CM Small square untuned loop
CM NEC-4.2
CM
CM 1. Plane wave excitation
CM
CM Owen Duffy
CM Note: rotations might not work properly in various NEC-2 versions, beware of segment size issues in NEC-2.
CE
GW 1 5 -0.38 0 -0.38 0.38 0 -0.38 0.001
GM 1 3 0 90 0 0 0 0 1
GM 0 0 0 90 0 0 0 2 1
GE 0
LD 5 0 0 0 58000000
LD 4 1 1 1 50 0
GN -1
EK
EX 1 1 1 0 45 0 0 0 0 0
FR 0 0 0 0 7.1 0
EN

The key result to be extracted from the model run is the current in the 50Ω resistor in segment 1 of wire 1. The magnitude of the current is 5.1204E-04A, so the voltage developed in the resistor is V=5.1204E-04×50=0.02560V. Antenna Factor is the ratio of the E field excitation to the terminal voltage of the receiver, so in dB it is 20*log(1/0.02560)=31.83dB/m.
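That arithmetic can be checked with a few lines:

```python
import math

# The Antenna Factor arithmetic from the model run: 1 V/m excitation,
# 50 ohm receiver load, load current magnitude from the NEC output.
def antenna_factor_db(e_v_per_m, i_load_a, r_load=50):
    v_terminal = i_load_a * r_load
    return 20 * math.log10(e_v_per_m / v_terminal)

print(round(antenna_factor_db(1.0, 5.1204e-4), 2))   # 31.83 dB/m
```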

The NEC model’s 31.83dB/m is close to the calculator prediction of 31.75dB/m.

Likewise, Gain calculated from the NEC Antenna Factor of 31.83dB/m is -44.6dBi, within about a tenth of a dB of the original calculator prediction.

Of course, transmission line loss to receiver needs to be factored in separately.

Within the stated limits of the models, valid models should provide consistent results, and they do in this case.

Results should be validated by measurement, and whilst I have not measured this particular loop, I have validated a slightly smaller loop (600mm square) that I use regularly for field strength measurement and so have confidence in the modelling tools for this application.

My correspondent’s report of a 30″ square untuned loop with gain of -10dBi on 7MHz suggests a misunderstanding, or that the online expert’s model is seriously flawed.

It is common that extravagant claims are made for small loops by would-be aficionados; be wary.


The first experiment was a calibration run, if you like, to explore the nature of simultaneous WSPR SNR reports for two transmitters using different call signs on slightly different frequencies (19Hz in this case), feeding approximately the same power to the same antenna.

The first test uses two WSPRlites feeding the same antenna through a magic-T combiner producing a data set consisting of 900 pairs of SNR reports from Europe with only about 70 milliwatts from each WSPRlite at the antenna feed.

The data for the test interval was extracted from DXplorer, and the statistic of main interest is the paired SNR differences, these are the differences in a report from the same station of the two signals in the same measurement WSPR interval.

There is an immediate temptation to compare the average difference; it is simple and quick. But it is my experience that WSPR SNR data are not normally distributed, and applying parametric statistics (ie statistical methods that depend on knowledge of the underlying distribution) is seriously flawed.

We might expect that whilst the observed SNR varies up and down with fading etc, that the SNR measured due to one transmitter is approximately equal to that of the other, ie that the simultaneous difference observations should be close to zero in this scenario.

What of the distribution of the difference data?

Above is a frequency histogram of the distribution about the mean (0). Interpretation is frustrated by the discrete nature of the SNR statistic (1dB steps); it is asymmetric, and a Shapiro-Wilk test for normality gives a probability that it is normal of p=1.4e-43.

So let’s forget about parametric statistics based on the normal distribution; means, standard deviations, Student’s t-test etc are unsound for making inferences here because they depend on normality.

Nevertheless, we might expect that there is a relationship between the SNR reports for the two transmitters; we might expect that SNR_W3GRF=SNR_W3LPL.

So, let’s look at the data in a way that might expose such a relationship.

Above is a 3D plot of the observations which shows the count of spots for each combination of SNR due to the two transmitters. The chart shows us that whilst there were more spots at low SNR, the SNRs from both are almost always nearly the same.

A small departure can be seen where a little ridge exists in front of the main data.

Let’s look at it in 2D.

Above is a 2D chart of the same data. Note that there are 902 observations, so many of the dots account for scores of observations as can be seen in the 3D chart above.

The outliers can be seen more clearly, but it isn’t so obvious that there is only a small number of them. In fact, examining the data showed that the outliers came from only one observer, and all of their observations (about 15) were outliers. There is a strong case to exclude them as anomalous.

Above, after stripping the 15 anomalous records, there is an obvious trend; Excel has been used to add a linear trend line to the data. Remember that individual dots may account for scores of observations; all observations are used to calculate the trend line.

Above is a more detailed regression result using Excel’s LINEST() function.

The coefficients tell us that the slope of the curve fit is almost exactly 1 (0.9975), so the system appears quite linear over the range tested. The intercept is -0.1684; it is the difference in SNR between the two transmitters and, as might be expected, it is fairly close to zero.
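The same kind of fit can be reproduced outside Excel; the sketch below uses synthetic data of similar shape (slope and intercept as found above, plus noise), since the raw spot data is not included here:

```python
import numpy as np

# Reproducing a LINEST()-style linear fit with numpy on synthetic data
# (slope 0.9975, intercept -0.1684 as found above, plus 1 dB noise).
rng = np.random.default_rng(0)
x = rng.integers(-28, 0, size=887).astype(float)         # SNR reports, dB
y = 0.9975 * x - 0.1684 + rng.normal(0.0, 1.0, size=887)
slope, intercept = np.polyfit(x, y, 1)
print(slope)       # close to 1: the system is quite linear
print(intercept)   # close to the small difference between transmitters
```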

Data obtained for any experiment needs careful review. There are a host of problems which influence WSPR data quality, some inherent in the system, some related to the end stations, some in the data archive. In my experience, WSPR data deserves much greater attention to identify and excise anomalous records.

The experiment described here assumes a single ‘spot’ record lodged by each station hearing each of the transmitters and is spoiled if there is more than a single record for each. It has been observed that there can be more than a single record (eg if a call sign was simultaneously active on more than one receiver on that frequency), and those records should be excised to improve data quality (DXplorer contains a facility to do that and an override switch).

- The observed SNR difference was not normally distributed and therefore unsuitable for parametric statistical analysis based on normal distribution.
- Careful examination of the data highlighted a very small number of outliers which were excised to improve the model quality.
- A linear regression is a non-parametric analysis that produced a very low error model explaining the dataset.
- The actual power output of each transmitter was not measured but estimated and that contributes to measured SNR difference.
- The test results were that the differences between simultaneous measurements of SNR for each transmitter at a number of observing stations was -0.24dB with standard error of 0.1dB.
- The results are quite consistent with almost equal transmitters feeding the same antenna, and suggest that the method might lend itself to comparison of two different antennas using two WSPRlites.

I have a 30″ square loop of #12 wire that I use for receiving, and when I attach it to the receiver on 40m, the audio output voltage goes up three times or more. Do I need an amplifier, or will it worsen things?

It is possible to determine the ambient noise temperature from the true noise power change over that of a matched termination.

The equivalent noise temperature of the receiver is implied by its Noise Figure when it is terminated with a matched termination. Noise due to an open circuit or short circuit input is not defined.

The correspondent re-measured with a termination, and as it turned out, the results were much the same, so lets work the case of voltage increasing by a factor of three.

Without going any further, we can calculate the degradation in External S/N by the receiver: total noise power is proportional to 3²=9 times internal noise, so S/N degradation is 10*log(9/(9-1))=0.51dB… very little.

It is true that an amplifier is unlikely to improve things and will be likely to degrade things because of intermodulation distortion that is inherent in them, more so if it overloads on broadband signal input.

But let’s go on to estimate the ambient noise figure Fa.

It is really important for this process that the AGC does not change the receiver gain, and that there is no overload or clipping. This means DO NOT SWITCH THE AGC OFF; if the S meter deflects, the AGC is reducing gain and you need extra input attenuation to keep things linear.

Now let’s assume the receiver has a Noise Figure of 6dB (most modern HF transceivers are in that ball park).

We need to estimate the gain of the antenna; we will use Calculate small loop Antenna Factor.

Ok, terminated in 50Ω, the untuned small loop has a gain of -43.4dBi. So it captures only a very small portion of the external noise, but even so it delivers sufficient power to the receiver to increase the output voltage by a factor of 3.

The noise floor of a 2kHz effective noise bandwidth receiver with a noise figure of 6dB is -135dBm. The total noise equivalent input power with output voltage raised by a factor of 3 is -135+20log(3)=-125.5dBm. If we allow for the antenna gain of -43.4dBi, the equivalent input power with a lossless isotropic antenna would be around -125.5-(-43.4)=-82.1dBm, which is about S7.5 on the common ham scale… a quite high noise level.
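The chain of arithmetic in the preceding paragraphs can be sketched as (values as stated above):

```python
import math

# Sketch (assumptions as stated: NF=6 dB, 2 kHz noise bandwidth,
# output voltage ratio 3, loop gain -43.4 dBi).
nf_db, bw_hz, v_ratio, gain_dbi = 6, 2000, 3, -43.4
noise_floor_dbm = -174 + nf_db + 10 * math.log10(bw_hz)  # ~-135.0 dBm
total_noise_dbm = noise_floor_dbm + 20 * math.log10(v_ratio)
isotropic_dbm = total_noise_dbm - gain_dbi               # ~-82 dBm
snr_degradation_db = 10 * math.log10(v_ratio**2 / (v_ratio**2 - 1))
print(round(noise_floor_dbm, 1))      # -135.0
print(round(isotropic_dbm, 1))        # near the -82.1 dBm above
print(round(snr_degradation_db, 2))   # 0.51 dB S/N degradation
```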

Let’s use the Ambient noise calculator to find the ambient noise.

Above is the input form for the scenario.

Above is the result, Fa is 58.4dB which turns out to be a pretty high noise level.

From the above chart (ITU-R P.372-12 (7/2015)), we can see that the predicted ambient noise figure Fa in business precincts is around 55dB, and at the lower limit, Galactic noise is about 35dB.

So, the correspondent’s ambient noise is at the high end of expectations, indeed higher than you would expect in most residential areas, so it begs the question of whether there is some strong local noise source that can be reduced.

A small passive loop may be sufficient to achieve small S/N degradation on low HF bands in scenarios where the ambient noise level is high to extreme.

If the passive loop is sufficient to obtain small S/N degradation, then an amplifier may well worsen things.

The real problem may be the quite high ambient noise level, and that may be resolvable.

- ITU-R. Jul 2015. Recommendation ITU-R P.372-12 (7/2015) Radio noise.
- owenduffy.net/files/NoiseAndReceivers.pdf
- Receiver sensitivity metric converter
- Calculate small loop Antenna Factor
- Convert Antenna Factor and Gain
- RxActiveNoise spreadsheet (zipped)
