The dark arts of technology
Techno Talk
From physics experiments to smartphone sensors, reverse engineering,
obfuscation and generative AI—an exploration of how abstraction
explains, empowers and occasionally conceals modern technology.
When I was in high school, in
those days of yore, the sort of experiments we performed were, in hindsight,
laughable in their simplicity. We’re
talking about trolleys rolling down
inclines, falling weights and pulleys,
vibrating strings and tuning forks,
ripple tanks and water waves, and…
need I say more?
The rudimentary sensors at our disposal were along the lines of mercury
thermometers and stopwatches. Our
data recorders were plain paper and
graph paper. No accelerometers, no velocity plots, no real-time graphs. Just
paper, pencils and patience.
The overarching reality of 1970s high
school physics was that sensors were
expensive and rare (a single analog
meter or a monstrous CRT-based oscilloscope often served an entire class). Data
was sparse, slow and hand-recorded.
Experiments focused on validating
theories, not exploration; students
adapted to the equipment, not the
other way around.
The PhyPhox app
Returning to the present day, smart-
phones boast many advanced sensors:
accelerometers, gyroscopes, magnetometers, cameras, microphones, a pressure
sensor, light sensors, temperature sensors,
satellite navigation… the list goes on.
Matching the sensors found in a
modern smartphone would have required a fully equipped university lab
in my school days, and that’s assuming such sensors were available at all!
This leads us to a free app called
PhyPhox (phyphox.org), which turns
one’s smartphone into a portable physics lab, using its built-in sensors to run
science experiments and collect real-world data. I’m trying to imagine the
expression on my physics teacher’s
face if I were to travel back in time
and demonstrate this to him.
How low can you go?
At the lowest level of abstraction, digital computers run machine code, the
most basic form of instruction they can
understand. This code consists of binary
numbers (strings of 0s and 1s) that tell
the processor exactly what to do, such
as moving data, adding two values, or
jumping to another location in memory.
Each instruction directly controls the
hardware, with no translation needed.
In the very early days of computing,
circa the late 1940s and early 1950s,
programs were written in machine code
that precisely matched the processor’s
instruction set. On machines like ENIAC,
this could even mean setting physical
switches or using plugboards to define
how the machine was to behave.
The next level up the abstraction
ladder is assembly language. This language uses short mnemonic words (like
ADD, LOAD and JMP) and symbolic
names for memory locations. In a sense,
assembly language is machine code
written with words instead of numbers.
The first assembly-language programs
were written on paper and translated by
hand into machine code. Later, programs
called assemblers automated this process, taking assembly language as their
input and producing machine code.
Early assemblers were themselves written in machine code or hand-translated
from assembly, after which no sane programmer wanted to go back.
Max the Magnificent
It wasn’t long before we climbed still
higher up the abstraction ladder, in the
form of languages like FORTRAN in
the late 1950s and C in the early 1970s.
Moving to higher levels of abstraction shifts our focus from mechanism
to intent. It reduces the cognitive load
while programming, improves our
ability to visualise and explore the
solution space, accelerates iteration
and enables portability (ie, code is no
longer tied to a single machine and its
instruction set). Still, this can come at
the cost of some loss of low-level visibility and control.
Programs called compilers take these
high-level representations as input and
generate assembly or machine code as
output. What most people don’t realise
is that compilers almost invariably generate an intermediate representation.
Initially, this was assembly language,
but most modern compilers translate
source code into one or more intermediate representations (IRs), perform
analysis and optimisation on those IRs,
then produce the final machine code.
One advantage of this is that, by separating the compiler into a front end
(specific language to IR) and a back
end (IR to assembly/machine code),
the same back end can serve compilers for many different languages, and the same front end, paired with different back ends, can target many different machine architectures.
Foiling reverse engineering
‘Reverse engineering’ is the analysis of
a finished system to infer its structure,
function and design without access to the
original specifications, drawings or code.
Whether applied to hardware or
software, reverse engineering by bad
actors sits at the heart of supply-chain
attacks, malware, intellectual property
theft, product cloning and contemporary cyber warfare.
The word “obfuscation” comes from
the Latin “ob” (against, toward, in the
way of, over) and “fuscāre” (to darken).
So “obfuscāre” literally meant “to darken
over” or “to obscure”. This word entered
the English language in the early 1500s.
In the context of computer programming, ‘obfuscation’ typically refers to
deliberately making source code hard
to read or understand while keeping
it functionally correct.
Practical Electronics | March | 2026
Consider a function called IsEven()
that accepts an integer (whole number)
value x and returns true if the number
is even, or false if it isn’t. This could
be implemented as “return (x % 2) ==
0”, for example.
Renaming this function to something
like aj53t() and changing the code to
“return !((x & 1) ^ 0)” will cause people’s
eyes to glaze over while they try to wrap
their brains around what’s going on.
This leads us to the competition
called The International Obfuscated
C Code Contest (ioccc.org). Since the
1960s, people have used obfuscation to
protect their source code. Techniques
include deliberately awful and/or misleading variable and function names,
excessive use of macros and indirection,
confusing control flows and even self-
modifying code (yes, really).
All this makes reverse engineering
painful and debugging one’s code… let’s
say, “character building”. The irony was
that, until recently, most obfuscation was
relatively shallow; skilled programmers
could still unravel it, but the original
authors often couldn’t understand their
own work six months later.
As a result, source-code-level obfuscation was frequently more effective
against maintenance than against piracy.
Yet another consideration is that
modern compilers are ruthlessly clever.
If any obfuscation that’s been applied
doesn’t change program semantics,
can be proven redundant, or produces
predictable results, the optimiser will
happily remove it in the name of speed,
size, or clarity (well, the optimiser’s
idea of clarity).
From the compiler’s point of view,
obfuscation often looks just like bad
code. In this context, most of my own
code is self-obfuscating, which means I
can boast a hitherto unrecognised talent
(my mum will be so proud).
Things get worse because there are
disassemblers that can convert machine
code into human-readable assembly
code, and decompilers that can accept
assembly code and generate easy-to-understand pseudo source code.
As one example, consider the opensource software reverse-engineering
(SRE) suite Ghidra (ghidra.net). This
little scamp can analyse compiled
executables (machine code) and reconstruct a human-readable representation
of how they work.
Darkness as a defence
From ancient Greek mythology, Nyx is
the goddess and personification of night.
She isn’t merely darkness; she represents
obscurity, concealment, and what lies
beyond ordinary perception. In many
mythological tales, night wasn’t just the
absence of light; it offered protection,
uncertainty, and strategic advantage.
That which couldn’t be clearly seen
couldn’t be easily challenged.
This metaphor carries surprisingly
well into modern software security.
An example of the result of decompiling binary code without obfuscation (top; quite readable) and after using Nyx (bottom; useless).
Recently, I was chatting with Dr. Nils
Albartus, Embedded Security Specialist
and Technical Solutions Director at
Emproof (emproof.com). The company’s flagship product, Nyx, is designed
to make software behave a little like
its mythological namesake: make it
harder to see, harder to understand,
and therefore harder to attack.
Emproof Nyx offers both passive
and active protection techniques and
technologies. In the case of passive
protection, Nyx can read a program
in machine code and obfuscate it
into complex, misleading forms that
confuse or break disassemblers and
decompilers like Ghidra.
Nyx can also augment the code with
active protections, including runtime
guards that can detect breakpoints, tracing hooks and virtualised execution.
If the code detects tampering, it can
halt execution, revert to a safe path,
or (very sneakily) continue running,
but with misleading data.
Marching to a different beat
I’ve long been a fan of XMOS’s multicore XCORE devices (xmos.com). Each
physical core runs multiple hardware-scheduled threads, so a single core
behaves like several ‘virtual’ cores running in parallel. These XCORE chips
offer clock-cycle-accurate determinism and reaction times measured in
nanoseconds, making them ideal for
real-time, timing-critical applications
where predictability is king.
However, XCORE devices demand
that you use a parallel programming
mindset. Developers must think in
terms of concurrent tasks, message
passing and precise timing. These
devices are technically well-suited
to a far broader range of applications
than they’re currently used for, but the
learning curve has acted as a throttle.
I was recently chatting with Mark
Lippett, president and CEO of XMOS.
Mark told me that the need to overcome
this learning curve prompted the folks
at XMOS to introduce GenSoC. This tool
provides a generative-AI interface that
allows developers to specify an XCORE-based design using natural language
rather than low-level parallel code.
For example, a user can describe requirements like, “I want to create an
audio pipeline with USB input, I²S
output, sample-rate conversion and deterministic low latency”, and GenSoC
will generate a complete, working
XCORE-based SoC design in seconds.
And so we find ourselves armed with
pocket laboratories, AI design assistants, and tools that can turn darkness
into clarity (or vice versa). I don’t know
about you, but I’m almost scared to think
what we’ll discover next.
PE