The earliest method of locating a cable fault was a binary chop. That meant deploying a cable ship, grappling for the cable, hauling it to the surface with a special grapnel that severed the cable if tension was too great, buoying off one end, and steaming back to the other end to haul it on board, clean it up and test to the far cable station. New cable was spliced in, and the cable ship steamed back to find the buoy, pulled that end on board, cleaned it and tested to the other end. This was done to localise the fault, and eventually to replace the faulty section of cable. During this longish period, the cable was out of service.

Oliver Heaviside, a telegraphist and self-taught mathematician, applied his mind to predicting the distance to a fault based on resistance measurements from a land end, applying Blavier’s test to a submarine cable. Blavier’s test assumes a single ground fault, and a test from both ends can improve confidence that this is likely to be the case.

Let u be the end to end resistance of the installed conductor based on length and resistance per km: \(u=666.7 \cdot 3.24=2160 \Omega\).

Let x be the resistance from Newbiggen-by-the-sea (local) to the fault. The resistance of the remaining cable conductor is u-x.

Let y be the measured resistance from the fault point to ground (the sea, or cable armour if present).

If the remote end (Sondervig) is grounded, the expected resistance w is given by \(w=x+\frac1{\frac1y+\frac1{u-x}}=x+\frac{y(u-x)}{y+u-x}\) and was measured to be 970Ω.

If the remote end is open, the resistance v was measured at 1040Ω; now \(v=x+y\) so \(y=v-x\), which we can substitute.

\(w=x+\frac1{\frac1y+\frac1{u-x}}=x+\frac{y(u-x)}{y+u-x}=x+\frac{(v-x)(u-x)}{(v-x)+u-x}\)

Now it is in terms of the three known values u, v, w and the unknown x.

\(w(v-2x+u)=(v-2x+u)x+(v-x)(u-x)\)

\(x=w- \sqrt{(w-v)(w-u)}\)

So when measurements gave \(v=1040 \Omega\) and \(w=970 \Omega\), we can calculate that the distance to the fault is given by the lesser root, 210.3km from Newbiggen-by-the-sea. (The greater root would imply a negative value for y, which is not physically possible.)
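
Blavier’s calculation can be checked with a short script using the values above (the function names here are just illustrative):

```python
import math

def blavier_distance(u, v, w, r_per_km):
    """Blavier's test: u = end to end conductor resistance,
    v = resistance measured with the remote end open,
    w = resistance measured with the remote end grounded.
    Returns (x, distance_km) where x is resistance to the fault."""
    # lesser root of x^2 - 2wx + wv + wu - uv = 0
    x = w - math.sqrt((w - v) * (w - u))
    return x, x / r_per_km

u = 666.7 * 3.24                 # 2160 ohm end to end
x, d = blavier_distance(u, v=1040, w=970, r_per_km=3.24)

# sanity check: substituting x back should reproduce the grounded-end measurement w
y = 1040 - x                     # fault resistance to ground
w_check = x + y * (u - x) / (y + u - x)
```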

The single most common factor in their cases is an attempt to use the TDR mode of the VNA.

Well, hams do fuss over the accuracy of quarter wave sections used in matching systems when they are not all that critical… but if you are measuring the tuned line lengths that connect the stages of a repeater duplexer, the lengths are quite critical if you want to achieve the best notch depths.

That said, only the naive think that a nanoVNA is suited to the repeater duplexer application where you would typically want to measure notches well over 90dB.

The VNA is not a ‘true’ TDR, but an FDR (Frequency Domain Reflectometer) where a range of frequencies are swept and an equivalent time domain response is constructed using an Inverse Fast Fourier Transform (IFFT).

In the case of a FDR, the maximum cable distance and the resolution are influenced by the frequency range swept and the number of points in the sweep.

\(d_{max}=\frac{c_0 \cdot vf \cdot (points-1)}{2(F_2-F_1)}\)

\(resolution=\frac{c_0 \cdot vf}{2(F_2-F_1)}\)

where c0 is the speed of light, 299792458m/s, and vf is the velocity factor of the line.

Let’s consider the hand held nanoVNA which has its best performance below 300MHz and sweeps 101 points. If we sweep from 1 to 299MHz (to avoid the inherent glitch at 300MHz), we have a maximum distance of 33.2m and resolution of 0.332m.
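
Those figures can be reproduced from the formulas above; a velocity factor of 0.66 (typical of solid PE coax) is assumed here, since vf is not restated at this point:

```python
C0 = 299792458.0  # speed of light, m/s

def fdr_limits(f1_hz, f2_hz, points, vf):
    """Return (d_max, resolution) in metres for an FDR sweep."""
    span = f2_hz - f1_hz
    resolution = C0 * vf / (2 * span)   # one-way distance per time step
    d_max = resolution * (points - 1)
    return d_max, resolution

# nanoVNA: 101 points, 1-299MHz sweep, assumed vf=0.66
d_max, res = fdr_limits(1e6, 299e6, 101, vf=0.66)
```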

Here is such a sweep of a cable of length around 1.2m.

The marker is close to the apparent peak of the response at about 11.8ns (1.17m), and each step of the marker is 1.3ns (0.129m).

If we sweep to 900MHz, we do get better resolution (albeit for shorter dmax).

The resolution improves to 0.435ns (0.043m).

If you want mm resolution for short line sections, you need a VNA that sweeps a much wider frequency range and / or many more sweep points.

Above, nanoVNA-saver results on the same DUT with smoothing of 100 sweeps produces a nice clean looking graph and a calculated distance to fault of 1.222m, mm resolution implied by the number format… but are you misled?

We can do an s11 sweep of a short circuit or open circuit line section (just as in the FDR / TDR case), but make the sweep quite narrow (ie high resolution) around a quarter wave or half wave resonance.

Above is a very narrow sweep with 1kHz resolution at 40MHz, ie 0.0025% resolution. From the interpolated resonance frequency of 40.4MHz and previously measured vf, we can calculate the physical length to be 1.224m… with resolution of 0.0000306m.
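
The length calculation from the resonance measurement can be sketched as below; vf=0.66 is assumed here in place of the previously measured value, which is not restated:

```python
C0 = 299792458.0  # speed of light, m/s

def quarter_wave_length(f_res_hz, vf):
    """Physical length of a line section that is an electrical
    quarter wave at its measured resonant frequency."""
    return vf * C0 / (4 * f_res_hz)

# interpolated resonance of 40.4MHz, assumed vf=0.66
length = quarter_wave_length(40.4e6, vf=0.66)
```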

Many analysers and VNAs sport a Distance to Fault mode, and it is commonly a FDR implementation. These can be very effective productivity tools in identifying not just cable opens and shorts, but loose connectors, pinched cable etc.

The foregoing discussion shows that FDR / Distance to Fault may not be adequate for tuning of critical line sections, but it often has sufficient resolution for identifying the locality and severity of a fault.

Things have come a long way in the roughly 150 years since Oliver Heaviside successfully applied his mind to location of faults in submarine telegraph cables.

Whilst the TDR mode of a VNA looks an appealing way to measure line length, with low end instruments like the nanoVNA it does not have adequate resolution for demanding applications.

“NEC says”, “according to NEC”, and the like.

Readers should take this to mean that the author denies their own contribution in making assumptions and building the model, and its influence on the stated results.

It is basically a disclaimer that disowns their work.

In a past life I wrote and maintained software tools used for design of buildings covering a wide range of disciplines (eg CAD, structure strength, power distribution, heating and cooling loads, passive solar design etc).

The people who used the tools were qualified engineers and architects who took total responsibility for their design work, even when using tools created and tested by others. It was at all times the responsibility of the user to validate designs; they owned the design and were held responsible and accountable.

When it was discovered that a certain series of Intel Pentium chips had a defect in floating point calculations, it sent us into a spin to find whether it impacted the accuracy of tools, particularly to compare test suite results with calculations on other unaffected processors… but it was made clear that at the end of the day, responsibility for a design lay entirely with the engineer or architect signing off on the work.

But in amateur radio forums, the accepted thing is to deny personal contribution to the building of NEC models and the oft unstated assumptions, and to blame it all on NEC. Certainly gives meaning to amateur.

A first observation on listening to an SSB telephony signal is an excessive low frequency rumble from the speaker, indicative of a baseband response extending to quite low frequencies, much lower than needed or desirable for SSB telephony.

The most common application of such a filter is reception of A1 Morse code.

Above is a screenshot of the filter settings.

Above is a plot of the response of the filter. It is hardly an idealised rectangular filter response. Though the response might be well suited to Morse code reception, it is an issue when measurements make assumptions about the equivalent noise bandwidth (ENB). The response is not well suited to narrowband data such as RTTY etc.

A summary of the filter response follows.

Locut 0Hz.

sox: bin_width_hz=10.000Hz

Filter -6dB response: 460-770Hz=310Hz.

ENB=224Hz with respect to gain at 610Hz (passband centre frequency).

ENB=222Hz with respect to gain at 590Hz (max gain frequency).

ENB=222Hz with respect to gain at 600Hz.

If we take the gain reference frequency to be 600Hz, there is 3.5dB less noise admitted by this filter than an idealised rectangular filter. Measurements such as the ARRL MDS that might assume 500Hz bandwidth will have 3.5dB error.
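
The 3.5dB figure follows from the ratio of assumed to actual ENB, here the assumed 500Hz against the measured 222Hz:

```python
import math

def bandwidth_error_db(assumed_enb_hz, actual_enb_hz):
    """Error in a noise-based measurement (eg MDS) when the assumed
    equivalent noise bandwidth differs from the actual ENB."""
    return 10 * math.log10(assumed_enb_hz / actual_enb_hz)

err = bandwidth_error_db(500, 222)   # the CW filter case above
```

The same calculation gives the 0.97dB and 0.27dB figures quoted for the wider filters.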

A 1000Hz filter might be well suited to narrow band data reception, many of the so-called ham digital modes.

Above is a screenshot of the filter settings.

Above is a plot of the response of the filter. It is fairly close to an idealised rectangular filter response.

There appears to be no means to offset the filter at baseband frequency.

A summary of the filter response follows.

Locut 0Hz.

sox: bin_width_hz=10.000Hz

Filter -6dB response: 110-950Hz=840Hz.

ENB=823Hz with respect to gain at 530Hz (passband centre frequency).

ENB=716Hz with respect to gain at 200Hz (max gain frequency).

ENB=800Hz with respect to gain at 500Hz.

If we take the gain reference frequency to be 500Hz, there is 0.97dB less noise admitted by this filter than an idealised rectangular filter.

Above is a screenshot of the filter settings.

Above is a plot of the response of the filter. It is fairly close to an idealised rectangular filter response.

There appears to be no means to offset the filter at baseband frequency.

A summary of the filter response follows.

Locut 0Hz.

sox: bin_width_hz=10.000Hz

Filter -6dB response: 110-2350Hz=2240Hz.

ENB=2353Hz with respect to gain at 1230Hz (passband centre frequency).

ENB=1829Hz with respect to gain at 210Hz (max gain frequency).

ENB=2255Hz with respect to gain at 1000Hz.

If we take the gain reference frequency to be 1000Hz, there is 0.27dB less noise admitted by this filter than an idealised rectangular filter.

SDR# does not appear to have a convenient facility to shift or offset the baseband response.

Above is the baseband response in 2400Hz USB mode as shown in the SDR# window. Note that the response rolls off below 100Hz, whereas good conventional SSB Telephony receivers would have a 6dB response from say 250-2750Hz for an ENB of 2400Hz. The lower -6dB point for this response is 110Hz.

This leads to a substantial low frequency component that is not a priority for SSB telephony, and in the case where the transmitter is band limited to 300-2700Hz, the filter admits unnecessary noise at the low end and cuts off a little of the high end. It is a hammy sammy approach where recognised speech characteristics, conventions and compatibility between transmitter and receiver are jettisoned.

The basic 1000Hz USB filter provides a response close to ideal, centred around 530Hz, and its ENB is 800Hz (-0.97dB on 1000Hz).

There appears no facility in SDR# to save a number of filter settings for later recall, so the process of configuring SDR# for measurement is a bit tedious.

My attention has been drawn to the facility to drag the upper and lower limits of the IF passband, thanks Martin.

Above is an example where a 500Hz passband is centred on 1500Hz at baseband.

As soon as another mode is selected, the setting is lost and there appears no facility to save a set of settings for later recall. Note the inconsistency between the two displayed bandwidth figures.

Yes, it works but it is not convenient and not practical for save / recall of a standardised set of measurement or reception conditions.

Having selected a candidate core, two main questions need to be answered:

- how many turns are sufficient for acceptable InsertionVSWR at low frequencies and core loss; and
- what value of shunt capacitance best compensates the effect of leakage inductance at high frequencies?

Let’s look at a simplified equivalent circuit of such a transformer; all components are referred to the 50Ω input side of the transformer.

Above is a simplified model that will illustrate the issues. For simplicity, the model is somewhat idealised in that the components are lossless.

- L1 represents the leakage inductance;
- L2 represents the magnetising inductance; and
- C1 is a compensation capacitor.

Since the magnetising inductance is assumed lossless, this article will not address design for core loss.

So, it is obvious that the InsertionVSWR curve is pretty poor at both the high and low ends.

Let’s look at a Smith chart presentation of the same information, it is so much more revealing.

Above is the Smith chart plot. Remember that the points go clockwise on the arc with increasing frequency, and that InsertionVSWR is a function of the distance from the centre to the point on the locus… we want to minimise that distance. Remember also that the circles that are tangential to the left hand edge are conductance circles, the locus of constant G.

Now let’s analyse the response.

Note that from 1 to 3MHz, the shape of the response tends to a circle tangential to the left hand edge, ie a constant G circle. So, G is constant but susceptance B is frequency dependent and -ve. This is the response of a constant resistance R in parallel with a constant inductance (\(B=\frac {-1} {2 \pi f L}\), \(Y= G + jB = \frac 1 R - \frac {j} {2 \pi f L}\)). A part of that susceptance (shunt inductance) is due to the magnetising inductance L2, which contributes to the poor InsertionVSWR at low frequencies.

Note that from 12 to 15MHz, the shape of the response tends to a circle tangential to the left hand edge, ie a constant G circle. So, G is constant but susceptance B is frequency dependent and +ve. This is the response of a constant resistance R in parallel with a constant capacitance (\(B=2 \pi f C\), \(Y= G + jB = \frac 1 R + j 2 \pi f C\)). A part of that susceptance (shunt capacitance) is due to the compensation capacitor C1, which contributes to the poor InsertionVSWR at high frequencies.

Let’s adjust L2 and C1 for a better InsertionVSWR response.

Above is the response with L2=12µH and C1=80pF. Note that the distance to the centre is reduced (and therefore InsertionVSWR is improved). The kink in the response is common; that is typically the mid region where InsertionVSWR is minimum.

It is still not a good response, the InsertionVSWR at the high end is too high, and compensation with C1 does not adequately address the leakage inductance. So, as a candidate design, this one has too much leakage inductance which might be addressed by improving winding geometry and increasing core permeability.

As mentioned, real transformers using ferrite cores have permeability that is complex (ie includes loss) and dependent on frequency (ie inductance is not constant).

Above are measurements of a real transformer from 1-11MHz with nominal resistance load and three compensation options:

- cyan: 0pF, too little compensation;
- magenta: 80pF, optimal compensation; and
- blue: 250pF, too much compensation.

It should be no surprise that 80pF is close to optimal. Susceptance B at the cyan X is -0.00575S, and broadly, we want to cancel that with the compensation capacitor, so we come to \(C=\frac{B}{2 \pi f}=\frac{0.00575}{2 \pi \cdot 11 \cdot 10^6}=83pF\).
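
That estimate can be reproduced directly:

```python
import math

def compensation_c(susceptance_s, f_hz):
    """Shunt capacitance that cancels a given inductive susceptance
    magnitude at frequency f."""
    return susceptance_s / (2 * math.pi * f_hz)

c = compensation_c(0.00575, 11e6)   # |B| at the cyan X, at 11MHz
c_pf = c * 1e12                     # in pF
```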

With optimal compensation (80pF in this case), the InsertionVSWR at 3MHz is 1.8, probably acceptable for this type of transformer, but it is still quite high (4.3) at 11MHz, which hints that leakage inductance needs to be addressed by improving winding geometry and possibly increasing permeability.

Keep in mind that measurements with a nominal resistive load are a guide, measurements with the real antenna wire are very important.

A common piece of advice is to visualise the capture area of the individual Yagi, and to stack them so that their capture areas just touch… with the intimation that if they overlap, then significant gain is lost.

Above is a diagram from F4AZF illustrating the concept. Similar diagrams exist on plenty of web sites, so it may not be original to F4AZF.

Now Capture Area or Effective Aperture Ae is a well known concept in industry and explained in most basic antenna text books. In concept, the power available from a plane wave to an antenna is given by \(P=S A_e\) where S is the power density of the wave (W/m^2) and Ae is the Effective Aperture. We can calculate \(A_e = \frac{G {\lambda}^2}{4 \pi}\).

So, let’s consider a 17 element DL6WU for 144MHz, with a gain of 16.7dB (G=46.5) and optimal stacking distances of 4.133m and 4.332m (Estimating Beamwidth of DL6WU long boom Yagis for the purpose of calculating an optimum stacking distance).

We can calculate Ae to be \(A_e = \frac{G {\lambda}^2}{4 \pi}=16.0m^2\).

Let’s calculate the area of a rectangular stacking box \(A_{sb}=4.133 \cdot 4.332 = 17.9m^2\).

So, the 16.0m^2 capture area is of similar order to the 17.9m^2 stacking box, but the largest ellipse that fits within the box has an area of only \(\frac{\pi}{4} \cdot 17.9 = 14.1m^2\)… the neat touching ellipses of the diagram cannot actually contain the capture area.
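
These numbers are easily checked; the sketch below uses the textbook relationship \(A_e=\frac{G \lambda^2}{4 \pi}\) and the stacking distances quoted above:

```python
import math

C0 = 299792458.0  # speed of light, m/s

def effective_aperture(gain_linear, f_hz):
    """Effective aperture Ae = G * lambda^2 / (4*pi), in m^2."""
    wavelength = C0 / f_hz
    return gain_linear * wavelength**2 / (4 * math.pi)

ae = effective_aperture(46.5, 144e6)    # 17 el DL6WU, 16.7dB gain
a_box = 4.133 * 4.332                   # rectangular stacking box
a_ellipse = math.pi / 4 * a_box         # largest ellipse inscribed in the box
```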

Clearly, the concept is flawed. It is another of those simplistic explanations that is appealing at first glance… but deeply flawed… specious!

Popularity does not determine fact… well in science anyway.

The datasheet contains some specifications that should allow calculation of S/N degradation (SND) in a given ambient noise context (such as ITU-R P.372). Of particular interest to me is the frequency range 2-30MHz, but mainly 2-15MHz.

The specifications would appear to be based on models of the active antenna in free space, or measurements of the device using a dummy antenna. So, the challenge is to derive some equivalent noise estimates that can be compared to P.372 ambient noise, and with adjustment for the likely effects of real ground.

Key specifications:

- plot of measured output noise of the amplifier, and receiver noise in 1kHz ENB;
- Antenna Factor (AF) from a simulation.

Above are the published noise measurements at the receiver input terminals. The graph was digitised and then a cubic spline interpolation used to populate a table.

Above is the assumed test configuration. We will assume that the receiver is accurately calibrated (both power and bandwidth), and that the noise power due to internal noise in the amplifier is the reported noise (the orange curve) less the receiver internal noise (the blue curve) measured with a 50Ω termination on the input. Of course these measurements need to be converted to power to perform the subtraction, and as part of the calculation, power in 1kHz will be transformed to power/Hz because Noise Power Density (NPD) is easier to work with.

From the NPD of the amplifier internal noise at the output terminals, we can calculate component equivalent Noise Figure (NF) and equivalent noise temperature which are both frequency dependent. The output terminals of the amplifier are the reference terminals at which we will compare external noise and total internal noise, both referred to that reference point.

We can then build a more complex model incorporating the feed line loss (10m of CAT6 FTP) and a receiver of given NF, find the ambient noise referred to the amplifier output terminals and solve for SND. We will assume that the loss in the balun unit is so small that relocating it to after the CAT6 feed line does not introduce significant error.

Recall that Gain and AF are related, every one dB increase in Gain corresponds to exactly one dB decrease in AF.

It is the Average AF that is used to calculate ambient noise capture (assuming it is from all directions). We can calculate frequency dependent Average Gain from Average AF, and use that to calculate how much of the P.372 ambient noise appears at the reference terminals.

We will assume that the specification AF is given at maximum response (the usual convention), and that the Directivity of a short dipole in free space is 1.76dB, so the Average AF would be 3.76dB/m.

So, we will calculate Tamb’ being Tamb/Gain, and Tint’ being the sum of the internal noise contributions of the receiver, lossy feed line, and amplifier, all referred to the reference terminals. SND is then simply \(SND=10 log \frac{T_{amb}’+T_{int}’}{T_{amb}’}\).
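
The final step is simple arithmetic once both temperatures are referred to the same terminals; a minimal sketch with placeholder temperatures (not values from this analysis):

```python
import math

def snd_db(t_ambient_k, t_internal_k):
    """S/N degradation in dB; both equivalent noise temperatures must be
    referred to the same reference terminals (the amplifier output here)."""
    return 10 * math.log10((t_ambient_k + t_internal_k) / t_ambient_k)

# illustrative values only, not from the analysis above
snd = snd_db(t_ambient_k=5e6, t_internal_k=1e6)
```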

Above is the table of calculations.

Above is a graphic summary of the analysis, the key metric being SND. Now P.372 is based on a survey with short vertical monopoles, so it probably overestimates noise captured by the short horizontal dipole by some dB.

The assumed Directivity and radiation efficiency based on a model at 7MHz are good to perhaps 15MHz, at which point the length of the dipole and its height become more significant in terms of electrical length, and the pattern changes.

Note that this analysis assumes a linear receive chain, it does not include the effects of IMD.

So, whilst active short dipole antennas are not very popular in the ham world, they are popular in commercial and military applications, and in this instance, the AAA-1C would appear to perform quite well. This is of course only a desk study, the final test is of the real antenna system… though that is a little way off as post from Bulgaria to Australia is currently suspended.

Discussion with Chavdar (LZ1AQ) suggests that the assumptions made in this article are reasonable.

In that instance, the design approach was to find a loop geometry that, when combined with a practical amplifier of given (frequency independent) NoiseFigure (NF), would achieve a given worst case S/N degradation (SND). Whilst several options for amplifier Rin were considered in the simple analytical model, the NEC model of the antenna in the presence of real ground steered the design to Rin=100Ω.

A question that commonly arises is that of Rin, there being two predominant schools of thought:

- Rin should be very low, of the order of 2Ω; and
- Rin should be the ‘standard’ 50Ω.

Each is limiting… often the case with simplistic Rules of Thumb (RoT).

Let’s plot loop gain and antenna factor for two scenarios, Rin=2Ω and Rin=100Ω (as used in the final design) from the simple model of the loop used at Small untuned loop for receiving – a design walk through #2.

Above, loop gain is dominated by the impedance mismatch between the source with Zs=Rr+jXl and the load being Rin. We can see that the case of Rin=100Ω achieves higher gain at the higher frequencies by way of less mismatch loss than the Rin=2Ω case.

Above is a plot of AF for the two cases. Recall that AF is the ratio of the electric field strength to the loaded loop terminal voltage. Note that the Rin=2Ω case has almost flat AF from 0.2MHz up, whereas the Rin=100Ω case is only flattening towards 10MHz. A very flat AF response is a desirable feature of a field strength measuring instrument, but it has much less value for a conventional receiving system.

Looking back at the gain plot, it is evident that the flat AF response comes at the cost of considerably lower gain at the higher frequencies. The effect of that is that receiver internal noise becomes more limiting unless that gain shortfall can be made up with low loss amplification, and therein lies the challenge.

The approach discussed at Small untuned loop for receiving – a design walk through #1 was not a design for constant AF, the main design objective was SND in a given ambient noise context… and that objective is directly relevant to ordinary receivers.

(Ikin 2016) proposes a different method of measuring noise figure NF.

Therefore, the LNA noise figure can be derived by measuring the noise with the LNA input terminated with a resistor equal to its input impedance. Then with the measurement repeated with the resistor removed, so that the LNA input is terminated by its own Dynamic Impedance. The difference in the noise ref. the above measurements will give a figure in dB which is equal to the noise reduction of the LNA verses thermal noise at 290K. Converting the dB difference into an attenuation power ratio then multiplying this by 290K gives the LNA Noise Temperature. Then using the Noise Temperature to dB conversion table yields the LNA Noise Figure. See Table 1.

The explanation is not very clear to me, and there is no mathematical proof of the technique offered… so a bit unsatisfying… but it is oft cited in ham online discussions.

I have taken the liberty to extend Ikin’s Table 1 to include some more values of column 1 for comparison with a more conventional Y factor test of a receiver’s noise figure.

Above is the extended table. The formulas in all cells of a column are the same, the highlighted row is for later reference.

A test setup was arranged to measure the noise output power of an IC-7300 receiver, which has a sensitivity specification that hints it should have NF≅5.4dB. The relative noise output power for four conditions was recorded in the table below.

Ikin’s method calls for calculating the third minus second rows, -0.17dB, and looking it up in his table. In my extended table LnaNoiseDifference=-0.17dB corresponds to NF=3.10dB.

We can find the NF using the conventional Y factor method from the values in the third and fourth rows.

The result is NF=5.14dB (quite close to the expected value based on the sensitivity specification).

Ikin’s so called dynamic impedance method gave quite a different result in this case, 3.10 vs 5.14dB, quite a large discrepancy.

The chart above shows the relative level of the four measurements. The value of the last two is that they can be used to determine the NF using the well established theory explained at AN 57-1.

The values in the first columns are dependent on the internal implementation of the amplifier, and cannot reliably be used to infer NF.

- Hewlett Packard. Jul 1983. Fundamentals of RF and microwave noise figure measurement. AN 57-1
- Ikin, A. 2016. Measuring noise figure using the dynamic impedance method.

Let’s review the concepts of noise figure, equivalent noise temperature and their measurement.

Firstly, let’s consider the nature of noise. The noise we are discussing is dominated by thermal noise, the noise due to random thermal agitation of charge carriers in conductors. Johnson noise (as it is known) has a uniform spectral power density, ie a uniform power/bandwidth. The maximum thermal noise power density available from a resistor at temperature T is given by \(NPD=k_B T\) where Boltzmann’s constant k_{B}=1.38064852e-23 (and of course the load must be matched to obtain that maximum noise power density). Temperature is absolute temperature, measured in Kelvins; 0°C≅273K.
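
As a quick check of this expression, kT at the reference temperature of 290K comes to the widely quoted -174dBm/Hz:

```python
import math

K_B = 1.38064852e-23  # Boltzmann's constant, J/K

def npd_dbm_per_hz(t_k):
    """Available thermal noise power density kT, expressed in dBm/Hz."""
    return 10 * math.log10(K_B * t_k / 1e-3)

npd_290 = npd_dbm_per_hz(290)
```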

Noise Figure NF by definition is the reduction in S/N ratio (in dB) across a system component. So, we can write \(NF=10 log \frac{S_{in}}{N_{in}}- 10 log \frac{S_{out}}{N_{out}}\).

One of the many methods of characterising the internal noise contribution of an amplifier is to treat it as noiseless and derive an equivalent temperature of a matched input resistor that delivers equivalent noise, this temperature is known as the equivalent noise temperature Te of the amplifier.

So for example, if we were to place a 50Ω resistor on the input of a nominally 50Ω input amplifier, and raised its temperature from 0K to the point T where the noise output power of the amplifier doubled, we could infer that the internal noise of the amplifier could be represented by an input resistor at temperature T. Fine in concept, but not very practical.

Applying a little maths, we do have a practical measurement method which is known as the Y factor method. It involves measuring the ratio of noise power output (Y) for two different source resistor temperatures, Tc and Th. We can say that \(NF=10 log \frac{(\frac{T_h}{290}-1)-Y(\frac{T_c}{290}-1)}{Y-1}\).

AN 57-1 contains a detailed mathematical explanation / proof of the Y factor method.

We can buy a noise source off the shelf, they come in a range of hot and cold temperatures. For example, one with specified Excess Noise Ratio (a common method of specifying them) has Th=9461K and Tc=290K. If we measured a DUT and observed that Y=3 (4.77dB) we could calculate that NF=12dB.
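
A minimal sketch of that worked example using the Y factor formula above:

```python
import math

T0 = 290.0  # reference temperature, K

def nf_y_factor(t_hot_k, t_cold_k, y_linear):
    """Noise figure (dB) from the Y factor method."""
    f = ((t_hot_k / T0 - 1) - y_linear * (t_cold_k / T0 - 1)) / (y_linear - 1)
    return 10 * math.log10(f)

# ~15dB ENR source (Th=9461K, Tc=290K), measured Y=3 (4.77dB)
nf = nf_y_factor(9461, 290, 3)
```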

This method of noise figure measurement is practical and used widely. Note that the DUT always has its nominal terminations applied to the input and output, the system gain is maintained, just the input equivalent noise temperature is varied.

Some amplifiers are not intended to be impedance matched at the input (ie optimised for maximum gain), but are optimised for noise figure by controlling the source impedance seen at the active device. Notwithstanding that the input is not impedance matched, noise figure measurements are made in the same way as for a matched system, as the figures are applicable to the application where, for example, the source might be a nominal 50Ω antenna system.

So, NF is characterised for an amplifier with its intended / nominal source and load impedances.

Nothing about the NF implies the equivalent internal noise with a short circuit SC or open circuit OC input. The behaviour of an amplifier under those conditions is internal implementation dependent (ie variable from one amplifier design to another) and since it is not related to the amplifier’s NF, it is quite wrong to make inferences based on noise measured with SC or OC input.

So this raises the question of NF measurements made with a 50Ω source on an amplifier normally used with a different source impedance, and possibly a frequency dependent source impedance. An example of this might be an active loop amplifier where the source impedance looks more like a simple inductor.

Well clearly the measurement based on a 50Ω source does not apply exactly as amplifier internal noise is often sensitive to the source impedance, but for smallish departures, the error might be smallish.

A better approach might be to measure the amplifier with its intended source impedance. In the case of the example active loop antenna, the amplifier could be connected to a dummy equivalent inductor, all housed in a shielded enclosure, and the output noise power measured with a spectrum analyser to give an equivalent noise power density at the output terminals. Knowing the AntennaFactor of the combination, that output power density could be referred to the air interface. This is often done, and the active antenna internal noise expressed as an equivalent field strength in 1Hz, eg 0.02µV/m in 1Hz. For example, the AAA-1C loop and amplifier specifies “Antenna Factor Ka 2 dB meters-1 @ 10 MHz” and “MDS @ 10MHz 0.7 uV/m, Noise bandwidth = 1KHz”, which we take to mean equivalent internal noise 0.022µV/m in 1Hz @ 10MHz at the air interface. 0.022µV/m in 1Hz infers Te=6.655e6K and NF=43.608dB, again at the air interface. These figures can be used with the ambient noise figure to calculate the S/N degradation (SND).
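
The 0.022µV/m in 1Hz figure is just the 1kHz MDS scaled to 1Hz bandwidth (noise voltage scales with the square root of bandwidth):

```python
import math

def fs_per_root_hz(e_uv_per_m, bandwidth_hz):
    """Equivalent noise field strength in 1Hz from a value specified
    in a wider noise bandwidth."""
    return e_uv_per_m / math.sqrt(bandwidth_hz)

e_1hz = fs_per_root_hz(0.7, 1000)   # the AAA-1C MDS specification
```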

A spectrum analyser or the like can be used to measure the total noise power density at the output of the loop amplifier with the input connected to a dummy antenna network (all of it shielded) and to calculate the equivalent noise temperature and noise figure at that point. For example, if we measured -116dBm in 1kHz bandwidth, Te=1.793e+5K and NF=27.9dB. Knowledge of the gain from air interface to that reference point is needed to compare ambient noise to the internal noise and to calculate SND, that knowledge might come from published specifications or a mix of measurements and modelling of the loaded antenna.
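
A sketch of that conversion (small differences from the quoted figures are rounding):

```python
import math

K_B = 1.38064852e-23  # Boltzmann's constant, J/K
T0 = 290.0            # reference temperature, K

def te_and_nf(p_dbm, enb_hz):
    """Equivalent noise temperature and noise figure from total measured
    noise power in a known equivalent noise bandwidth."""
    p_w = 10 ** (p_dbm / 10) * 1e-3     # dBm to watts
    te = p_w / enb_hz / K_B             # equivalent temperature at the terminals
    nf = 10 * math.log10(1 + te / T0)
    return te, nf

te, nf = te_and_nf(-116, 1000)   # -116dBm in 1kHz bandwidth
```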

The mention of a spectrum analyser invites the question about the suitability of an SDR receiver. If the receiver is known to be calibrated, there is no non-linear process like noise cancellation active, and the ENB of the filter is known accurately, it may be a suitable instrument.

In both cases, the instruments are usually calibrated for total input power, ie external signal and noise plus internal noise, so to find the external noise (ie from the preamp), allowance must be made for the instrument NF (ie it needs to be known if the measured power is anywhere near the instrument noise floor).

Field strength / receive power converter may assist in some of the calculations.

The foregoing discussion assumes a linear receiver, and does not include the effects of intermodulation distortion IMD that can be hugely significant, especially in poor designs.

Part of the problem of IMD is that the effects depend on the individual deployment context, one user may have quite a different experience to another.

There are a huge number of published active loop antenna designs and variants, and a smaller number of commercial products. Most are without useful specifications, which is understandable since most of the market is swayed more by anecdotal user experiences than by theory based metrics and measurement.

- Hewlett Packard. Jul 1983. Fundamentals of RF and microwave noise figure measurement. AN 57-1