ACOUSTIC IMAGING
Image source: https://unsplash.com/photos/empty-chairs-in-a-room-3rW1HAakg8g
By Dr David Maddison, VK3DSM
Those of us lucky enough to still have good hearing in both ears can
instinctively tell where sound is coming from. However, some sounds can be
difficult to locate; sometimes, doing so is a matter of life and death! That is
where technology comes to the rescue, with Acoustic Imaging Systems.
Wouldn't it be nice to locate the source of a sound that we can hear but can't see or locate precisely?
Depending on their level, frequency
and spectra, sounds are not as easy
to locate as certain other phenomena,
such as light leaking into a darkened
room.
Seeing sounds as an image is not
altogether unusual. Animals such as
bats and dolphins use sound to ‘see’
(see Fig.1). The same can be said for
medical ultrasounds and submarine
sonar.
With active sonar, a sound wave is
emitted and its reflection from the target is analysed to form an image. Alternatively, for passive sonar, no sound is
emitted by the sonar; instead, it listens
to sound waves emitted or reflected by
objects being surveilled.
Directional or stereo microphones,
or our ears, can give some cues as to
the location of a sound based on differential timing, frequency shaping (due
to the shape of the ear and head) and
so on. However, it can be difficult to
locate a sound precisely; sometimes
we only know the general area.
At times, sounds can appear to come
from one place but are really coming
from another, perhaps due to reflections, refraction, standing waves or
other phenomena.
However, there is a way to visualise
the source of sounds precisely, making them visible to us in the same way
as we can see the source of light leaking into a darkened room. The source
of the sound can be rendered visible
by a device called an acoustic imaging camera.
In contrast with the active sonar
mentioned above, where acoustic
signals are reflected back to form an
image, in acoustic imaging, signals are
only received from an external source.
Like passive sonar, acoustic imaging
relies on detecting sounds directly
from the source, but it visualises sound
fields for applications like industrial
monitoring, setting it apart from passive sonar’s underwater tracking role.
With an acoustic imaging camera,
sound waves are detected and pinpointed using a microphone array
for precise location. The sounds are
overlaid in real time (or sometimes
later) onto a digital camera image of
the scene of interest. Acoustic imaging can also detect sounds inaudible
to the human ear (eg, infrasound or
ultrasound).
It should be noted that, confusingly,
there are other devices also called
acoustic cameras that emit acoustic
signals for tracking like active sonar.
In this article, unless stated, we are
only describing the passive devices.
The core of acoustic imaging lies
in beamforming, a technique that
electronically shapes received sound
(or radio) signals into focused beams
by adjusting their timing (phase)
and strength (amplitude) to enhance
sounds from specific directions while
reducing others.
We previously mentioned beamforming in our September 2020 article about 5G Networks (siliconchip.au/Article/14572).
Visualising sounds with acoustic imaging is just one application of this
technology. Others include acoustic
microscopy, ultrasound imaging, photoacoustic imaging and thermoacoustic imaging, as well as sonar, which
will not be discussed in this article.
Sonar was already described in some
detail in our June 2019 article on that
topic (siliconchip.au/Article/11664).
How it is used
Examples of the use of acoustic cameras include locating the source of an
unwanted sound to rectify it, such as
reducing noise in prototype motor
vehicles, aircraft, trains or other vehicles. They can also be used to locate a gas
leak in a chemical plant, which often
can be hard to detect otherwise (eg, if
it’s a clear gas escaping).
Alternatively, we might want to
analyse the frequency spectrum of
sounds emanating from certain locations for various diagnostic or suppression purposes. We can also map traffic
noises or locate the origins of noises
from wildlife. It could also be used to
analyse the source of noise entering a
building from outside, so that soundproofing can be installed.
In fact, just about anywhere there is
a sound that needs to be eliminated,
located or analysed, there is an application for the acoustic camera.
We previously published a review
of the CAE SoundCam (October 2020;
siliconchip.au/Article/14610). It was
one of the first commercial devices
on the market and took ~15 years to
develop. In this article, we will go into
more detail about the theory of operation of such devices and the latest
developments.
History of acoustic imaging
Developments leading up to acoustic imaging included the following discoveries regarding the behaviour of sound and developments in beam-forming:
6th century BCE Pythagoras studied musical sounds from vibrating strings.
4th century BCE Aristotle suggested that sound propagates as motion through air.
1st century BCE Vitruvius contributed to the acoustic design of theatres and determined the correct mechanism of sound wave transmission.
6th century CE Boethius documented a link between pitch and frequency.
17th century CE Sir Isaac Newton attempted to measure the speed of sound and understood sound to be a wave like a water wave.
1626 Sir Francis Bacon emphasised
the importance of investigating “the
nature of sounds in general” which he
called “acoustica”. His observations
and experiments on sounds were published posthumously in 1627, in Sylva
Sylvarum (siliconchip.au/link/ac95).
He observed “frisk and sprinkle” when
he rubbed the rim of a glass of water.
1671 Robert Hooke saw patterns
on a flour-covered plate along which
a violin bow was drawn.
1787 Ernst Chladni repeated and
enhanced Hooke’s work and developed a method to show the various
modes of vibration of rigid plates.
1877-1878 Lord Rayleigh laid the
foundations for the theory of the
behaviour of sound waves in his treatise, “The Theory of Sound”.
19th century Hermann von Helmholtz made substantial contributions
to acoustics.
20th century Microphones and
oscilloscopes greatly facilitated the
study of acoustics.
1910s to 1920s Sonar was developed
for imaging underwater.
1917 Nobel Prize winner Jean-
Baptiste Perrin invented the télé-
sitemètre for the French military, for
the acoustic detection of enemy aircraft. In 1917, it was said to be able
to detect aircraft 7-8km away with an
angular error of 2-3°.
It used two sets of sub-arrays of listening horns, grouped together and combined via acoustic waveguides to a listening point at each of the observer's ears. It was a type of acoustic beamforming before its modern implementation with computers and signal processing. A version appeared on the cover of a 1930 issue of Popular Mechanics (Fig.2). According
to the magazine, that version “automatically registers their flying speed,
altitude and distance from the finder”.
1930s to 1940s Directional microphone arrays emerged for sound ranging during World War II, advancing
multi-microphone techniques. Phased
array antennas were used similarly
for radar.
1940s to 1950s Phased arrays of
hydrophones were used for sonar.
Sonar principles were applied in the
development of medical ultrasound.
1960s to 1970s acoustic methods
were developed for non-destructively
testing materials, eg, looking for cracks
in aircraft parts or other critical components. Beamforming techniques
were used in medical ultrasound.
Fig.1: an image of a man as seen by a dolphin's natural sonar. Source: www.speakdolphin.com/pressRelease/Press_Release_what_the_dolphin_saw.pdf

Fig.2: the cover of Popular Mechanics from 1930 shows a version of Jean-Baptiste Perrin's télésitemètre.
Fourier Transforms for Dummies
Fourier transforms let us view signals in terms of their frequencies rather than time; a bit like
turning a recording of a song into its individual notes. Fourier theory says that any waveform
can be represented as the sum of sinewaves of different frequencies, phases and amplitudes.
If you are not familiar with a Fourier transform, it may seem like a complex and exotic mathematical concept that you are unlikely to ever fully understand. However, it actually turns out
to be relatively simple when you think about it the right way.
One way to approach it is to consider the inverse Fourier transform first. If a Fourier transform
turns regularly sampled time-domain amplitude data into frequency/phase data (as a complex
number, but don’t worry about that now), the inverse Fourier transform turns frequency/phase
data back into a set of points sampled at fixed intervals in time. Its output is exactly the input
of the original Fourier transform.
The frequencies that we’re breaking the signal down into are at fixed intervals (eg, DC, 100Hz,
200Hz, 300Hz etc), so the output of the Fourier transform is simply a series of amplitudes and
phases, with each frequency ‘bin’ allocated a scaling factor and phase offset.
We can easily visualise how to reverse the Fourier transform. You take a sinewave at each
frequency, scale it by the corresponding amplitude value, shift it by the phase shift, and add
the lot together. Voilà, you have your original waveform back.
Mathematically, this is just a linear operation – a kind of matrix multiplication – where each
row represents one sinewave at a different frequency.
After all, a sinewave of a specific frequency sampled at specific intervals is simply a set of
numbers between -1 and +1 calculated using the sin(ωt) function. If we expand that function
to Asin(ωt + φ), where A is the amplitude scaling
factor and φ is the phase shift, we get our original
sinewave back. Then we just need to add them
up, giving us the final formula:

xn = (1/N) × Σ(k=0 to N−1) Xk × e^(i2πkn/N)

In this formula: xn is the nth time-domain sample; N is the total number of samples in the transform; k is the frequency bin index; and Xk is the complex value for frequency bin k. If you haven't studied high-level maths, that may look like gobbledegook, but it's essentially just performing the sum-of-scaled-and-phase-shifted-sinewaves mentioned above, with some normalisation applied so the magnitude of our result matches the original scale.
Now, through the lovely properties of linear algebra, it turns out that the forward Fourier transform has almost exactly the same formula, with just a
sign change and the removal of the scaling factor (as
per convention). It is:

Xk = Σ(n=0 to N−1) xn × e^(−i2πkn/N)
How can our sum-of-sinewaves algorithm break
down a time-domain signal into its constituent sinewaves? It makes sense if you think of it this way: what a Fourier transform is essentially doing
is calculating the correlation between the input signal and each sinewave at a different frequency. A correlation is a statistical calculation that tells you how similar two sets of data are,
with a larger result meaning they are more similar. Its formula is quite simple:

correlation = Σ(n=0 to N−1) an × bn
In other words, the correlation between two sets of discrete data is simply the sum of the products of corresponding data points. If you think about it, if your data rises and
falls at a similar rate to the sinewave you’re correlating it with,
you’re going to get a large resulting sum. If they are not synchronised, the products are going to essentially be random and cancel out when you sum them.
So, the scary-looking Fourier transform formula above is basically just doing this correlation
with a set of sinewaves at different frequencies, and out pop the correlated sinewave amplitudes.
By using complex numbers, the transform simultaneously captures both amplitude and phase;
the magnitude of the complex number gives the amplitude, while its angle gives the phase.
Finally, to resolve any confusion over how the use of complex numbers gives us the phase shift, there is a simpler, geometric way to think of what we're doing.
Effectively, we are correlating the input signal with each sinewave along with its corresponding cosine wave, ie, the same sinewave phase shifted by 90°. The cosine component
(the real part) measures how much the input aligns with a zero-phase reference wave. The
sine component (the imaginary part) measures how much it aligns with a 90°-shifted version
of the same frequency.
Together, these two numbers form a 2D vector: one axis for cosine, one for sine. That vector’s angle gives you the phase of that frequency in the signal, ie, how far along the cycle your
signal’s version of that frequency is compared to the reference cosine. The length (magnitude)
of that vector gives you the amplitude, or how strongly that frequency appears in your signal.
In summary, the Fourier transform is a set of two orthogonal correlations, with sine and
cosine waves, at various frequencies, producing vectors where the angle represents phase shift
and the length, amplitude. So while it’s advanced mathematics, it’s also incredibly elegant once
you understand what’s going on.
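If you would like to see that correlation view in action, the short sketch below (our own illustration, written in Python and assuming the NumPy library; it is not from the magazine) computes a single frequency bin as two correlations, one against a cosine and one against a sine, then checks the answer against NumPy's built-in FFT:

import numpy as np

# Test signal: 3 units of 50Hz at a 30 degree phase shift, plus 1 unit of 120Hz,
# sampled at 1kHz for one second.
fs = 1000
t = np.arange(fs) / fs
signal = 3 * np.sin(2 * np.pi * 50 * t + np.radians(30)) + np.sin(2 * np.pi * 120 * t)

def dft_bin(x, k):
    # One bin of the DFT, computed as two correlations:
    # cosine correlation = real part, sine correlation = imaginary part.
    n = np.arange(len(x))
    cos_ref = np.cos(2 * np.pi * k * n / len(x))   # zero-phase reference
    sin_ref = np.sin(2 * np.pi * k * n / len(x))   # 90-degree-shifted reference
    return complex(np.sum(x * cos_ref), -np.sum(x * sin_ref))

X = dft_bin(signal, 50)                            # bin 50 = 50Hz here (1Hz per bin)
print("amplitude:", 2 * abs(X) / len(signal))      # ~3.0
print("phase:", np.degrees(np.angle(X)))           # ~-60 deg: a sine at +30 deg is a cosine at -60 deg
print("same as NumPy's FFT:", np.allclose(X, np.fft.fft(signal)[50]))  # True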
Fig.3: the concept of beamforming.
The beam is electronically scanned
to capture the signal from various
parts of a soundscape, producing a
sound map.
1970s the first experimental acoustic imaging systems emerged, using
arrays to map sound sources, influenced by sonar and ultrasound. In
1974, John Billingsley invented the
first “acoustic telescope”, a precursor
to the acoustic camera.
1976 Billingsley and Roger Kinns developed a full-scale acoustic telescope system to analyse sounds from the Rolls-Royce Olympus engine used
in the Concorde. It used 14 condenser
microphones, with signals digitised
with 8-bit resolution at a sampling
rate of 20kHz. The computer used
had a memory of 48kiB and data was
stored on floppy disks with a capacity of 300kiB. The processed data was
displayed on a colour TV.
This was the basis of modern systems, and in the following decades,
improvements were made in the sampling rate, number of microphones,
digitisation resolution, software and
size and portability of the equipment.
This was also the first time a real engineering problem, the determination
of noise sources from the engine, had
been analysed with acoustic imaging
techniques.
1980s to present digital signal processing methods were developed, and
high-speed computers enabled realtime beam-forming.
1997 a reporter coined the term
“acoustic camera”.
2001 the first commercial acoustic
camera was introduced by GFaI tech
GmbH (www.gfaitech.com). The introduction of commercial devices marked
the transition from research to practical tools, integrating digital signal processing (DSP) and array technology.
Fig.4: beamforming in the time domain using the delay-and-sum technique. Original source: www.gfaitech.com/knowledge/faq/delay-and-sum-beamforming-in-the-time-domain

Fig.5: how a Fourier transform converts data between the time and frequency domains. Original source: https://visualizingmathsandphysics.blogspot.com/2015/06/fouriertransforms-intuitively.html
2000s to present advances in array
design and software have refined
acoustic imaging for industrial and
environmental use.
How they work
An acoustic imaging camera uses
an array of multiple microphones
to detect the source of a sound. One
microphone cannot locate the source
of a sound; two microphones can to a
certain extent, like our ears, but even
that does not give precise locations.
For example, the shape of our ears combined with our brain's processing is how we determine where sound is coming from. If you were to change the shape of your ears, it would take some time for your brain to readjust, and until it did, you would not be able to precisely pinpoint where sounds were coming from.
An array of microphones, often 64 or more, is necessary so that triangulation and advanced mathematical techniques can be used to locate the source of a sound very precisely, while also filtering sounds by frequency. The microphones may be sensitive to frequencies from around 2kHz to 100kHz (the upper end well above what we can hear, ie, ultrasound).
The precise method used to locate
sounds is called beamforming, a signal processing technique also used
for radio waves. It is how a mobile
phone tower focuses its radio lobe
directly at your phone to maximise
the signal it receives while using minimal power and not interfering with
other devices.
In acoustic imaging, beamforming
works differently. The camera, acting
as a receiver, focuses on acoustic
energy naturally emitted by a sound
source, enhancing sounds from specific directions while ignoring others.
Essentially, it is the reverse of the process used for transmitting signals.
Acoustic beam-forming
The microphones of an acoustic imaging camera are arranged in a geometric array. Sound waves reaching individual microphones are processed in such a way that some sounds
from particular directions are selectively reinforced while others from
different directions are attenuated by
adjusting their relative amplitudes
and phases.
The ‘sound field’ is scanned either
sequentially or digitally all at once,
similar to how a spectrum analyser
can be swept or a ‘snapshot’ processed
using a Fourier transform. This amplifies and reinforces sounds from particular directions while attenuating others, thus building up an image showing intensity and frequency of sounds
from particular areas – see Fig.3.
Methods of acoustic beamforming
using microphone arrays to produce
directional images include:
Delay-and-sum technique
This is one of the simplest and most
common methods of acoustic beamforming. Consider a microphone array
that is picking up sound waves from
multiple directions.
Because sound waves travel at a
more-or-less constant, finite speed
(about 343m/s in air at sea level with
average pressure, temperature and
humidity), the sound waves from a
specific direction will arrive at each
microphone at a slightly different
time.
That time difference is determined
by the distance between the microphone and the sound source. Delay-and-sum adjusts for these time differences in software by delaying the
signal from each microphone so that
waves from the desired direction align
exactly when it adds them together.
If a desired sound wave comes from
straight ahead, the closest microphone
will receive it first; others will be
slightly delayed. The software of the
signal processor will delay the signal
of the first (closest) microphone the
most, and the others less so. When the
signals are summed, the desired signal from straight ahead is reinforced,
while others from undesired directions are attenuated or cancelled.
Since this technique focuses on one
direction at a time, it is repeated across
the entire sound field, thus building an
image. It is computationally straightforward, making it suitable for realtime imaging. This is less effective
than other techniques in noisy environments or in complex sound fields,
though. It generates a sound intensity
map only, and does not separate individual sound frequencies.
The beamforming and acoustic
map generation process seems complicated, but it is simple in principle
(although more complex in practice).
Fig.4 shows an example with two
sound sources, Source 1 (red) and
Source 2 (blue), and four microphones
(yellow circles). The steps are:
1. Signal acquisition: microphones
record the sounds from a sound field
of interest; four waveforms recorded
are shown at the bottom. The plots
show sound pressure (vertical axis)
vs time (horizontal axis). The relative
positions (in time) of the red and blue
waveforms vary for each microphone
based on its relative proximity to the
sound source.
2. A time delay is added: each waveform is offset along the time axis (horizontal) according to its microphone's distance from the source. The actual delays can be worked out from the distances between the microphones and the sound sources, and the speed of sound.
We are interested in mapping
Source 1 (Source 2 can be mapped at
another time on another part of the
sound field scan). A variable time
delay indicated by ∆tx is added to each
microphone waveform so the signals
from Source 1 (red) for each microphone are aligned.
3. Signal summing: the signals with
the time delays ∆t1, ∆t2, ∆t3 and ∆t4
are summed, resulting in a combined
waveform where the signals from
Source 1 are strengthened and those
from Source 2 are not.
4. Signal normalisation: the signals
are then normalised based on the number of microphones. The time delay
to the largest peak is a measure of the
position of the sound source in the
sound field.
5. Mapping: the process is repeated
over the entire sound field to create
an acoustic map, showing the sound
intensity at different locations.
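To make steps 1 to 5 concrete, here is a simplified delay-and-sum sketch in Python with NumPy (our own illustration, not code from any acoustic camera; the eight-microphone linear array, 30mm spacing, 48kHz sample rate and 25° source direction are made-up example values, and a real system would scan a 2D grid of look directions with many more microphones):

import numpy as np

C = 343.0        # speed of sound in air, m/s
FS = 48_000      # sample rate, Hz
N_MICS = 8       # hypothetical linear array
SPACING = 0.03   # 30mm between adjacent microphones
mic_x = np.arange(N_MICS) * SPACING

def plane_wave_delays(angle_deg):
    # Arrival delay (seconds) at each microphone for a plane wave from angle_deg
    # (0 degrees = straight ahead of the array).
    return mic_x * np.sin(np.radians(angle_deg)) / C

def simulate(angle_deg, freq=2000.0, n=4096):
    # Step 1 stand-in: the same tone reaching each microphone with the
    # appropriate delay, plus a little uncorrelated noise.
    t = np.arange(n) / FS
    rng = np.random.default_rng(0)
    clean = np.array([np.sin(2 * np.pi * freq * (t - d)) for d in plane_wave_delays(angle_deg)])
    return clean + 0.1 * rng.standard_normal((N_MICS, n))

def delay_and_sum(recordings, look_angle_deg):
    # Steps 2-4: delay each channel so the look direction lines up, sum and normalise.
    shifts = np.round(plane_wave_delays(look_angle_deg) * FS).astype(int)
    aligned = [np.roll(ch, -s) for ch, s in zip(recordings, shifts)]
    return np.mean(aligned, axis=0)

# Step 5: scan the sound field (here just a fan of angles) and map the output power.
recordings = simulate(angle_deg=25.0)
angles = np.arange(-60, 61)
power = [np.mean(delay_and_sum(recordings, a) ** 2) for a in angles]
print("strongest direction:", angles[int(np.argmax(power))], "degrees")  # near the true 25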
Frequency-domain
beamforming
This technique processes sound
in the frequency domain rather than
the time domain. Thus, the frequency
spectrum of each sound source can
be analysed. It allows the determination of which frequencies come from
which directions so that acoustic maps
of both sound intensity and frequency
can be created.
It uses beamforming techniques on
each frequency band. It is computationally intensive and is often performed by post-processing data rather
than in real time.
Fig.6: delay-and-sum beamforming in the frequency domain.

Frequency domain beamforming is shown in Fig.6. In the approach described here, it is based on delay-and-sum beamforming. The steps are as follows:
1. Signal acquisition: identical to
the delay-and-sum technique.
2. Fourier transformation: the ‘Fourier transform’ is a powerful mathematical tool that converts a signal such
as sound pressure over time, known
as the time domain, into its underlying frequency components and their
amplitudes, represented in the frequency domain (see panel).
It decomposes a signal into a combination of sinewaves that represent
both the amplitude and phase angle
for each frequency component in the
signal. Plots of amplitude vs frequency
and phase angle vs frequency can be
made from this information.
This offers two views of the same
data, revealing, for example, which
frequencies dominate (see Fig.5). For
instance, just as a piano chord can be
separated into individual notes, the
transform can break down the hum of
machinery into its distinct frequency
parts, aiding acoustic imaging analysis.
3. Phase vs frequency determination (Fig.6): Fourier analysis is applied to the amplitude vs time signal from each microphone to give a spectrum showing phase vs frequency for the signals received at each of the four microphones. Each microphone's signal can be seen to have a different phase angle as a function of frequency.
4. Phase adjustment: a time delay
correction aligns the phases for Source
1, making its red signals in phase,
while Source 2’s blue signals remain
out of phase. This is evident in the
lower middle graphs of Fig.6, where
red signals align at the same phase
angle, and blue ones diverge.
5. Summing: the adjusted signals are
summed and normalised by the number of microphones. The in-phase red
signals of Source 1 strengthen (overlapping as a single peak), while the
out-of-phase blue signals interfere
destructively, reducing their strength.
6. Mapping: the summed values for
each frequency can be plotted on an
acoustic map, with the positions of
the sources of each frequency being
determined from the time delay and
phase angle information, resulting in
a “heat map” of sound intensity and
frequency.
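As a rough illustration of steps 2 to 5, the sketch below (again our own Python/NumPy example with made-up geometry and source frequencies, reusing the same hypothetical linear array as the earlier delay-and-sum sketch) transforms each microphone signal into the frequency domain, applies a per-bin phase correction for the chosen look direction and sums the spectra; looking towards each source brings up that source's frequency:

import numpy as np

C, FS = 343.0, 48_000
N_MICS, SPACING = 8, 0.03
mic_x = np.arange(N_MICS) * SPACING

def plane_wave_delays(angle_deg):
    return mic_x * np.sin(np.radians(angle_deg)) / C

def simulate(angle_deg, freq, n=4096):
    # One tone arriving from angle_deg, recorded by every microphone.
    t = np.arange(n) / FS
    return np.array([np.sin(2 * np.pi * freq * (t - d)) for d in plane_wave_delays(angle_deg)])

def beamformed_spectrum(recordings, look_angle_deg):
    # Step 2: Fourier transform each channel (time domain -> frequency domain).
    spectra = np.fft.rfft(recordings, axis=1)
    freqs = np.fft.rfftfreq(recordings.shape[1], d=1 / FS)
    # Step 4: a delay of tau seconds is a phase factor of exp(j*2*pi*f*tau) per bin,
    # so this lines up the phases for the look direction.
    phase = np.exp(2j * np.pi * np.outer(plane_wave_delays(look_angle_deg), freqs))
    # Step 5: sum across microphones and normalise; return power per frequency bin.
    return np.abs(np.mean(spectra * phase, axis=0)) ** 2

# Two sources at different angles and frequencies (made-up values).
recordings = simulate(-30.0, 750.0) + simulate(+20.0, 3000.0)
freqs = np.fft.rfftfreq(4096, d=1 / FS)
for look in (-30.0, +20.0):
    peak = freqs[int(np.argmax(beamformed_spectrum(recordings, look)))]
    print(f"looking at {look:+.0f} degrees -> strongest frequency ~{peak:.0f}Hz")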
Phased-array technique
The phased-array technique is a
beam-forming method that uses precise control of the phase, the position
of each acoustic signal’s sinewave
cycle received by microphones, to
electronically steer the listening beam
across the sound field (see Fig.7).
Unlike delay-and-sum, it adjusts
the phase of each microphone’s signal, causing acoustic wavefronts to
interfere constructively and reinforce
sounds from the target direction while
destructively cancelling others. This
offers excellent directional precision,
ideal for imaging dynamic sources,
but demands computationally intensive processing and careful equipment
calibration.

Fig.7: phased-array beam-forming. The signals from each microphone (p1, p2 & p3) are phase-shifted into alignment and summed for each look direction to maximise signal strength. Source: https://dspace.mit.edu/handle/1721.1/154270
Adaptive beam-forming
Adaptive beamforming (Fig.8)
adjusts to challenging sound environments by modifying delays and microphone weightings (amplification) in
real time to suppress noise or interference, such as from a specific direction. This dynamic approach requires significant processing power, but it is ideal for complex acoustic imaging tasks.

Fig.8: adaptive beam-forming; the reception pattern of the lobes of the microphone array is shown. Undesired signals coming from directions other than the main beam are nulled in the signal processing. Original source: www.researchgate.net/publication/283639759
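The article does not name a particular adaptive algorithm; a classic example is the minimum-variance (Capon/MVDR) beamformer, sketched below in Python/NumPy with hypothetical narrowband sources, array geometry and noise levels (all our own assumptions). The microphone weights are derived from the measured covariance of the array signals, so energy arriving from directions other than the one being examined is automatically suppressed:

import numpy as np

C = 343.0                      # speed of sound in air, m/s
N_MICS, SPACING = 8, 0.03      # hypothetical 8-element linear array, 30mm spacing
mic_x = np.arange(N_MICS) * SPACING

def steering_vector(angle_deg, freq):
    # Expected relative phases across the array for a plane wave from angle_deg.
    tau = mic_x * np.sin(np.radians(angle_deg)) / C
    return np.exp(-2j * np.pi * freq * tau)

def narrowband_snapshots(freq, sources, n_snap=200, noise=0.1, seed=1):
    # Simulated complex microphone amplitudes at one frequency over many short
    # observation windows, for a list of (angle_deg, amplitude) sources.
    rng = np.random.default_rng(seed)
    x = np.zeros((N_MICS, n_snap), dtype=complex)
    for angle, amp in sources:
        s = amp * np.exp(2j * np.pi * rng.random(n_snap))   # random source phase
        x += np.outer(steering_vector(angle, freq), s)
    return x + noise * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))

def mvdr_power(x, angle_deg, freq):
    # Capon/MVDR estimate of the power arriving from one look direction.
    R = x @ x.conj().T / x.shape[1]                  # spatial covariance matrix
    a = steering_vector(angle_deg, freq)
    return 1.0 / np.real(a.conj() @ np.linalg.solve(R, a))

freq = 2000.0
snapshots = narrowband_snapshots(freq, sources=[(-40.0, 1.0), (10.0, 1.0)])
angles = np.arange(-80, 81)
spectrum = np.array([mvdr_power(snapshots, a, freq) for a in angles])
for true_angle in (-40, 10):
    window = (angles > true_angle - 10) & (angles < true_angle + 10)
    print(f"peak near {true_angle:+d} degrees found at {angles[window][np.argmax(spectrum[window])]} degrees")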
Acoustic imaging system
configurations
Acoustic imaging cameras come
either as fully integrated all-in-one
units (handheld) or as separate micro-
phone and camera arrays, data acquisition units and a laptop computer (see
Fig.9). The sound map being recorded
and processed here is shown in Fig.10.
Handheld acoustic imaging
cameras
For industrial inspection purposes,
it is often more convenient to use an
all-in-one handheld acoustic camera
rather than separate system components.
The SoundCam Ultra is a handheld
unit that images audible sound and
ultrasound (see siliconchip.au/link/ac97). It is used for compressed air/gas
leak localisation, vacuum leak localisation, partial discharge localisation,
condition-based monitoring, animal
studies and non-destructive testing.
Another example is the GFaI tech
Mikado. It uses an array of 96 digital
MEMS microphones and a Microsoft
Surface Pro tablet as its data processing
and display unit – see Fig.11.

Fig.11: the GFaI tech Mikado. The object behind the device is the microphone array (the video camera is not visible). Source: www.gfaitech.com/products/acoustic-camera/handheld-soundcam-mikado
Acoustic microphone arrays
Separate microphone arrays are also
available for use with the separate cameras, data recording units and a computer with the appropriate software.

Fig.9: a GFaI tech acoustic imaging camera system with separate components (microphone array, data recorder and computer) recording sounds from a sewing machine. Source: www.gfai.de/fileadmin/user_upload/GFaI_product_sheet_acoustic_camera_en.pdf

Fig.10: a sound map from the sewing machine being recorded in Fig.9. Source: www.gfai.de/fileadmin/user_upload/GFaI_product_sheet_acoustic_camera_en.pdf
The spacing and relative location of
microphones in an acoustic imaging
array are crucial, carefully designed to
optimise goals like resolution (clarity
of sound sources), side-lobe suppression (reducing unwanted beams) and
spatial aliasing reduction (avoiding
imaging artefacts).
These microphone arrays can be linear, 2D grids (square or rectangular), circular, random, or even follow a Fibonacci spiral pattern, similar to a sunflower. Various 3D arrangements are also possible.
A key design rule is that the microphone spacing should be less than
half the wavelength of the highest frequency to prevent aliasing (derived
from the Nyquist-Shannon sampling
theorem). The relevant equation is d = v ÷ (2 × fmax), where d is the spacing in metres, v is the speed of sound in air (343m/s), and fmax is the maximum frequency to be imaged.
For example, to image up to 5kHz
(a wavelength of 68.6mm), the spacing should be about 34mm; for up to
20kHz (a wavelength of 17.15mm), it
should be around 8.6mm.
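As a quick check of those numbers, the short snippet below (our own, in plain Python; not from any manufacturer's documentation) evaluates the spacing rule for the two frequencies quoted above:

V_SOUND = 343.0   # speed of sound in air, m/s

def max_spacing_mm(f_max_hz):
    # d = v / (2 x fmax), converted to millimetres
    return 1000 * V_SOUND / (2 * f_max_hz)

for f in (5_000, 20_000):
    wavelength_mm = 1000 * V_SOUND / f
    print(f"fmax = {f} Hz: wavelength {wavelength_mm:.2f}mm, max spacing {max_spacing_mm(f):.1f}mm")
# fmax = 5000 Hz: wavelength 68.60mm, max spacing 34.3mm
# fmax = 20000 Hz: wavelength 17.15mm, max spacing 8.6mm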
One example of a 2D microphone
array is the SoundCam Octagon
(Fig.12), which has 192 MEMS microphones along with an integrated camera, data recorder and notebook computer running suitable software. The
large number of microphones allows
very high resolution imaging and
acoustic holography (more on that
later).

Fig.12: the SoundCam Octagon has an integrated camera and data recorder. Source: www.gfaitech.com/products/acoustic-camera/all-in-one-soundcam-octagon
An example of a 3D microphone array is GFaI tech's Sphere48 AC Pro, a 48-channel system for acoustic measurements in 2D and 3D with 48 electret condenser microphones (see siliconchip.au/link/ac96). It has
a frequency response from 20Hz to
20kHz. It is designed for sound localisation in confined spaces such as a
motor vehicle.
It is used with NoiseImage software that allows sound sources to be
isolated, localised and analysed with
respect to both frequency and time
response. It also allows a 3D acoustic
map to be produced, and imagery is
provided by an integrated Intel RealSense Depth Camera to record depth
information.
Suggested uses include noise, vibration and harshness (NVH) analysis in
cars, trains and aeroplanes; location of
squeaks and rattles in vehicles; leakage detection; and sound design and
analysis of building acoustics.
An additional example of a 2D array
is the Fibonacci120 AC Pro (Fig.13), a
120-element microphone array in the
form of a Fibonacci pattern. It allows
near-field and far-field measurements
and, according to the manufacturer,
the spiral pattern gives the “highest
possible spatial resolution and the best
possible map dynamics”.

Fig.13: the GFaI tech Fibonacci120 AC Pro. Source: www.gfaitech.com/fileadmin/gfaitech/documents/datasheets/acoustic-camera-fibonacci-array-120datasheet-20.pdf
A further example of a microphone
array is the GFaI tech Star48 AC Pro
(siliconchip.au/link/ac9c). It is optimised for mid-range frequency measurements of outdoor objects like aircraft flyovers or the observation of
large wildlife, like elephants.
Applications
In this section, we will discuss various applications of acoustic imaging.
Acoustic detection of drones
Hostile drones pose risks to military
and civilian people and infrastructure;
therefore, their detection is extremely
important. Drones can be flown autonomously or controlled via fibre-optic cables, without RF communications, making their detection even more difficult. Their
small size can also make radar detection difficult.
Airspeed Electronics Ltd (www.
airspeed-electronics.com) has developed passive acoustic imaging arrays
to detect drones (Fig.14), which can
each detect small quadcopters at a
range of 200-300m. Each sensor can
be integrated into a network to make
a fully scalable array connected by
wireless mesh radio.
Multiple sensors enable accurate
target location via triangulation. A
drone’s acoustic signature also provides valuable information such as
the number of rotors, pitch imbalances
and rapid pitch variations. These allow the drone class to be identified, its payload mass to be estimated (weight can affect the rotor pitch) and manual or autonomous control to be distinguished.
Airspeed’s microphone arrays
use phased-array signal processing
to help separate drone sounds from
other background noises. Electret
condenser microphones are used in
Airspeed’s microphone arrays as they
have superior performance to MEMS
microphones, according to the company.
Airspeed performs its own in-house
modelling and performance evaluation of microphones; a simulation of
a microphone array is shown inset in
Fig.14. Fig.15 shows the dashboard
from a sensor array tracking a small
drone.
Aircraft
An example of the acoustic analysis of a business jet is shown in Fig.16. The image represents a spectral analysis for the one-third octave band of 315Hz at 53dBA (“A-weighted decibel”, a sound measurement weighted
to reflect human hearing).
The hardware setup is the same as
described below for the car measurements. The software used was Photo
3D and Spectral Analysis 3D for precalculated narrowband analysis to create acoustic photos from a spectrum.
Building acoustics
Acoustic imaging can be used in
concert halls and other large interior spaces to optimise acoustics. It
can diagnose and correct acoustic
problems such as undesirable echoes
(reflections), absorption of sounds, or
differential absorption or reflections of
sounds of different frequencies.
Acoustic imaging can be used to
optimise ‘acoustic comfort’ in buildings by detecting the source of sound
leaks or the effectiveness of various
acoustic treatments. For example, this
video (https://youtu.be/ykchSQX-sfg)
shows a Sorama CAM iV64 being used
to detect sound leaks around a window frame.
Fig.14: a network of Airspeed’s TS-16 acoustic remote sensors at the British Army’s AWE-24 exercise, Salisbury Plain, UK. Inset: a simulated beam pattern from a microphone array at 1.2kHz. Source: www.airspeed-electronics.com/technology

Fig.15: drone tracking by Airspeed using an acoustic image array. The image at upper left shows the target drone location by azimuth and elevation. At upper right is a polar plot, while the lower left shows a view from the target drone; at lower middle is a spectrogram of the target, and the lower right shows the predicted target type based on spectral information. Source: www.airspeed-electronics.com/technology
Cars
Automotive and other engineers
strive to minimise NVH (noise, vibration & harshness) in vehicles (or other
machines). In cars, NVH is perceived by passengers and drivers as unwanted and unpleasant. These sounds
may originate from the engine, drivetrain, suspension, tyres, road, air conditioning, wind noise etc.
One way to locate the source of these
noises is through the use of acoustic imaging cameras. Some examples
of locating such noises are shown in
Figs.17 & 18. The experimental setup
to obtain those images comprised the
GFaI tech Sphere48 AC Pro microphone array mapping frequencies from
291Hz to 20kHz.
Fig.16: acoustic measurement and location-finding in a Bombardier BD-700-1A10 business jet. Source: www.gfaitech.com/applications/aircraft-interior
Figs.17 & 18: analysing and locating noise sources in VW interiors with a microphone array. Source: www.gfaitech.com/applications/vehicle-interior
Also used were an mcdRec data
recorder with a sampling rate of
192kHz and a depth of 32 bits, and
NoiseImage software with the Acoustic Photo 2D and Acoustic Photo
3D modules for mapping the sound
sources onto a common interior or
exterior CAD model. Other software
modules used include the Record
Module, Spectral Analysis, Advanced
Algorithms and Project Manager.
Cooling fans
Acoustic imaging technology can be
used to develop quieter cooling fans in
electronic equipment. For example, PC
fan manufacturer Cooler Master uses
this technology, as shown in Fig.19.
Noctua also uses similar technology to develop its fans, including acoustic imaging to map noise (siliconchip.au/link/ac99).
Drone-based acoustic imaging
Acoustic imaging cameras can be
mounted on drones (see Fig.21) for
various purposes such as industrial
inspection, natural disaster response
or security. The obvious problem
of self-induced drone noise can be
reduced by spectral (Fig.20) and other
methods, such as making sure the
beam-forming direction ignores any
part of the drone’s airframe.
Fig.19: Cooler Master computer fans are developed with a Sorama acoustic camera. Source: https://youtu.be/0UFli2BUCL4

Fig.20: a block diagram of a spectral ‘denoising’ scheme to remove self-generated noise from a drone-mounted acoustic camera. Original source: https://doi.org/10.3390/drones5030075

Fig.21: the Crysound (www.crysound.com) CRY2626G is the first drone-mounted acoustic camera designed for detecting pressurised system leaks and electrical partial discharge. Source: https://sdtultrasound.com/products/crysound/cry2626g/
Echoes in rooms
The sampling rates for acoustic
imaging can be as high as 200kHz.
Thus, it is possible to watch echoes
bounce around a room, as shown
in Fig.22. The picture shows all the
bounces, but in reality they happen
sequentially.
Electrical discharge inspection
Detecting high-voltage partial discharges from insulation and corona
discharges is a necessary task to prevent dangerous or expensive problems
in high-voltage installations.
Techniques such as infrared thermography are not always reliable for detecting them, because certain types of discharges might not cause a significant temperature rise, and thermography might not pinpoint the exact location of the problem.
In addition, in a high-voltage installation, heat may be generated for other
reasons. It is also often difficult to
detect the sounds that these discharges
make using the ear or microphones.
Thus, acoustic imaging can be a good
tool to detect such problems.
Fig.22: echoes bouncing around a room. Sources: https://petapixel.com/2023/03/23/how-acoustic-cameras-can-see-sound/ & https://youtu.be/QtMTvsi-4Hw
Acoustic imaging is also potentially
safer in the hazardous environment
of high-voltage installations, as it can
be used from further away than some
other techniques.
An example of discharge detection
is shown in Fig.23. The instrument
used is the Fluke ii915. Research has
shown that the frequency of sound
emissions from electrical discharges
is mostly in the range of 20-110kHz,
with 95% of the acoustic energy in the range of 48kHz to 100kHz and a peak frequency of 68.3kHz.
Thus, this instrument is optimised
for detection at those frequencies.

Fig.23: detecting high-voltage electrical discharges using the Fluke ii915. Source: www.seesound.com.au/partialdischarge
Fixed or mobile applications
The Sorama L642 (https://sorama.eu/products/l642-acoustic-monitor)
can be permanently mounted on a pole
or placed on a mobile robot for continuous monitoring or inspections. It
can be used indoors or outdoors, in a
factory environment or even an urban
environment to monitor noises and
their sources.
One application is to detect noisy
vehicles, as shown in Fig.24 and
https://youtu.be/fQEkkFGPbU8

Fig.24: detecting a noisy vehicle exhaust with a Sorama L642. Source: https://sorama.eu/solutions/vehicle-detection-system
Gas leak detection
Acoustic imaging can be used
for gas leak detection and is able to
detect leaks that people cannot even
hear. This method of gas leak detection is considered superior to, or at
least supplemental to, gas detectors,
because acoustic imaging can detect a
small leak before there is a substantial
buildup of gas (see Fig.25).
Fig.25: gas leak detection using a Sorama acoustic imager. Source: https://sorama.eu/solutions/gas-leak-inspection

General environmental monitoring
The Sorama L642 series can be used for noise measurements and anomaly detection in urban environmental monitoring, such as identifying the location of inappropriately lit fireworks, noisy vehicles or alarms going off.
Hydrogen leak inspection
Finding hydrogen leaks is difficult,
as hydrogen can escape from the smallest openings. Acoustic cameras such as
those from Sorama have been designed
specifically to be able to detect hydrogen leaks from tanks, pipes and valves
– see Fig.26.

Fig.26: detecting a hydrogen leak from a valve using a Sorama camera. Source: https://sorama.eu/solutions/hydrogen-leak-inspection
Mechanical inspection
Acoustic imaging can discover
defective parts of machinery, such as
a defective robot joint that has developed a squeak.
Mining equipment
Sounds from mining equipment can
be identified and appropriate action
taken. These sounds can indicate a
possible occupational safety concern.
One example is abnormal noise from
the conveyor bridge of an excavator
(see siliconchip.au/link/ac99).
Road noise management
We already mentioned the Sorama
L642, but other companies make
devices for monitoring noisy vehicles.
Noisy vehicle detection technologies
are already on trial in Australia:
siliconchip.au/link/ac91
siliconchip.au/link/ac92
Editor’s note – there are several large
boxes in the middle of Foreshore Road
near Port Botany in Sydney, powered
by solar panels, that appear to be used
to monitor noise from the many trucks
on that road.
Apart from Sorama, companies that
make noisy vehicle detection systems
include SoundVue (https://soundvue.
com – used in Australia), General
Noise (www.generalnoise.co.uk) and
acoem (www.acoem.com/en).
Fig.27: an overhead view of Philips Stadion with acoustic camera data overlaid. Source: https://sorama.eu/fan-behavior-analytics-with-acoustic-data-engaginginsights-for-sports

Fig.28: an acoustic image of a high-speed train. Source: www.gfaitech.com/knowledge/faq/passby-2d-integration-time

Fig.29: studying elephant vocalisations in Nepal. Source: https://youtu.be/Xl7LnAob2T8
In stadiums
Acoustic imaging is used to analyse,
map and localise cheers from fans in
stadiums. Competitions can be organised to enhance fan engagement so that
the loudest and proudest fans win.
The noisiest fans, or real-time noise production by fans, can be determined with a “SoundSurface map” display on the large screen, shared in real time at the stadium and on social media – see Fig.27. The
noise level changes second by second
and corresponds to events happening
within the game being observed, such
as scoring a goal.
There are two different Sorama
acoustic camera systems installed at
the Philips Stadion in the Netherlands.
One is the Sorama L642XL, which
is equipped with 64 microphones
arranged in a sunflower pattern to provide seat-level accuracy, right down to
individual fan reactions.
The other system uses 30 Sorama
L642 cameras, covering all seats, to
observe crowd behaviour at a higher
level. The system can also detect
unwanted chanting, shouting, slurs
or breaking glass.
Trains
Investigating noises emanating from
trains was one of the first commercial usages of acoustic imaging. An
acoustic image of a high-speed train
is shown in Fig.28. Not surprisingly,
the wheels seem to be the main source
of noise, but there was also noise from
the pantograph. This discovery led to
design efforts to minimise noise from
that source.
Vibration analysis
Vibration analysis can be used as
a supplementary technique to acoustic imaging. It is performed optically,
using a camera and software to detect
small variations in an image due to
vibrations. GFaI tech offers the WaveCam software for this purpose.
Figs.30 & 31 show some examples
of such vibration analysis. A combination of both vibration analysis
and acoustic imaging can be used
to give a deeper understanding of
a vibration and noise problem, as
shown in the video at https://youtu.be/0Z7E5Ql7Xiw

Figs.30 & 31: examples of vibration analysis using the GFaI tech WaveCam software on large structures such as a wind turbine and tower, and smaller structures such as a car engine. Source: www.gfaitech.com/products/structural-dynamics/vibration-analysis-with-wavecam
Vacuum cleaner development
Perhaps one of the noisiest domestic appliances is the vacuum cleaner,
so it is not surprising that considerable efforts are made to quieten these
machines. Figs.32 & 33 show frame
grabs from Steve Mould’s video at
https://youtu.be/QtMTvsi-4Hw showing sources of sound from a vacuum
cleaner; one at 400Hz, the other at
7000Hz.
Wildlife
Acoustic imaging cameras have
been used to study wildlife vocalisations, such as elephant sounds, including infrasound – see Fig.29. A better
understanding can thus be gained of
how animals communicate and the
parts of the body involved in generating various sounds.
Acoustic holography
Acoustic holography is a specialised technique that reconstructs the
entire sound field (a 3D representation
of the distribution of sound waves),
including amplitude and phase over a
surface or volume, based on measurements taken at a limited set of points.
It uses wave propagation principles
to create a ‘holographic’ representation, akin to optical holography, but
with sound waves. It uses some of the
same techniques as acoustic imaging,
such as acoustic wave analysis, microphone arrays and signal processing,
and can be seen as an extension of
acoustic imaging.
It has niche applications in research,
requiring extremely advanced mathematical models. Acoustic imaging
maps sound sources using beamforming, while acoustic holography
extends this by reconstructing the
full sound field, including phase, for
a detailed analysis. Acoustic imaging can be seen as a ‘snapshot’, while
acoustic holography is a complete 3D
model of sound.
The future of acoustic imaging
Over the last few years, the cost of
acoustic imaging has gone down, and
the capabilities have gone up. Possible
or likely developments in the future
include higher-resolution microphone
arrays, integration with AI for automated source detection, plus cheaper
and more portable designs.
Challenges include improving low-
frequency detection, reducing setup
complexity (although existing handheld units are virtually ‘plug & play’),
and handling reverberation, where the
sound reflects off multiple surfaces
even after the source has stopped.
Research trends include advanced
signal processing, wearable sound
cameras (possibly with military applications) and multi-modal imaging
(say, measuring vibration and sound
at the same time by the same device).
Future applications include use in robotic imaging, smart cities and consumer products. SC
Figs.32 & 33: frame grabs from https://youtu.be/QtMTvsi-4Hw showing noise from a vacuum cleaner. On the left, it shows
the 400Hz noise from the tube, while on the right, the 7kHz noise is coming exclusively from the motor.