PERL vs Python pre / post processing NEC

I have used PERL to script NEC runs, and then to read the huge volume of output to produce simpler summary tables. This has provided the facility to run a very large number of models with some variation in one or more model parameters. One of the early published web articles was Feeding a G5RV, published in 2005, but I had been using PERL for that purpose for quite some time before that, and in my ‘day job’ since the early 1990s. Ham projects led to the development of some application-specific libraries to model transmission lines and ATUs.

Like PERL, Python had its origins in the late 1980s, but it has really only come of age in recent years with the release of v3. Python runs on all sorts of things from microcontrollers up, and is probably the most popular scripting language today.

I have been using Python quite a lot over the last five years, and on a sleepless night I converted a recent pair of PERL v5.28 scripts for NEC modelling to Python v3.9. It was not an automated conversion; they are both quite quirky languages, and the conversions were hand crafted… not a lot of code, but performance is important. The test project did not use the custom libraries; they have not been converted yet.

One script generates tailored NEC card decks and runs NEC. This is not very intensive work; it basically iterates a loop over some parameter variable(s) and does symbolic substitution into a template to produce the set of .nec files… and executes NEC on those files. Most of the run time is the NEC execution time.
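For illustration, here is a minimal Python sketch of that first step. It is not the actual script: the template file name, the ${height} placeholder, the swept parameter and the nec2c-style command line are all assumptions made for the example.

```python
# Minimal sketch of the deck-generation step, not the actual script.
# Assumptions: a template file "model.template.nec" containing a ${height}
# placeholder, and a command line NEC executable invoked as "nec2c -i in -o out".
import subprocess
from string import Template

with open("model.template.nec") as f:
    template = Template(f.read())

heights = [h / 10 for h in range(10, 106, 5)]   # 1.0, 1.5, ... 10.5 (example sweep)

for i, height in enumerate(heights):
    name = f"model{i:03d}"
    with open(name + ".nec", "w") as f:
        f.write(template.substitute(height=f"{height:.3f}"))
    # most of the elapsed time is spent in this call, not in the Python code
    subprocess.run(["nec2c", "-i", name + ".nec", "-o", name + ".out"], check=True)
```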

The second script reads the NEC output and extracts quantities of interest to produce a summary table. This is more intensive; it is mostly about parsing the output file and extracting data from certain records, and relies to a fair extent on regular expression (RE) processing for convenience… and the latter can be compute intensive. NEC output usually has fine detail pattern results, so the files typically run to tens of thousands of records each. A run every 0.1MHz from 1 to 30MHz creates 1GB of output in total.
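As a rough illustration of the shape of that second script (not the script itself), the sketch below streams a NEC output file and picks out one kind of record with a precompiled RE; the record layout matched and the single quantity extracted are assumptions for the example, and the real script pulls out rather more.

```python
# Rough sketch of the post-processing pattern, not the actual script.
# It streams each output file line by line and applies a precompiled RE to
# pull out the frequency records; the assumed layout is "FREQUENCY= x MHZ".
import re
import sys

FREQ_RE = re.compile(r"FREQUENCY\s*=\s*([0-9.Ee+-]+)\s*MHZ")  # assumed record layout

def summarise(path):
    """Return the frequencies (MHz) found in one NEC output file."""
    freqs = []
    with open(path) as f:
        for line in f:                      # stream, rather than slurp hundreds of MB
            m = FREQ_RE.search(line)
            if m:
                freqs.append(float(m.group(1)))
    return freqs

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, len(summarise(path)), "frequency records")
```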

The Python script is structurally similar to the PERL script, and pretty much uses the same balance of RE and substring processing of the records… but the run times are stunningly different, and somewhat surprising.

On a study project of a small transmitting loop, 96 model files were generated and processed by NEC to produce 96 output files of 33,001 lines each for a total of 3.1 million lines (384MB).

PERL processed the output data in 3.2s and Python in 38s, over ten times as long.

No attempt was made to optimise or tune either of the programs, just good code written in the ordinary way, and in fact the Python explicitly used precompiled RE objects, which ought to save significant time.
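If one wanted to check how much the precompiled RE objects actually buy, a quick micro-benchmark along these lines would do; the pattern and the sample record are made up for illustration.

```python
# Quick check of compiled vs. uncompiled RE matching with timeit.
# Note the re module also caches recently used patterns, so the gap may be modest.
import re
import timeit

record = "   FREQUENCY=  7.1000E+00 MHZ"
pattern = r"FREQUENCY\s*=\s*([0-9.Ee+-]+)\s*MHZ"
compiled = re.compile(pattern)

print("uncompiled:", timeit.timeit(lambda: re.search(pattern, record), number=100_000))
print("compiled:  ", timeit.timeit(lambda: compiled.search(record), number=100_000))
```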

I have not been enthusiastic about PERL v6, now called RAKU… but on the basis of this test, perhaps I should look more closely. A great value in PERL is the extensive libraries developed by the community, so the utility of RAKU depends, for me, on compatibility with the PERL libs.