Circuit Surgery
Regular clinic by Ian Bell
LTspice – Using behavioural sources
Last month we started looking at LTspice sources, having previously used behavioural sources to
draw waveforms to illustrate the article on
class D, G and H amplifiers, and because
I had been asked about getting waveform
data in and out of LTspice for use with
other applications. We looked at the basics
of sources and the details of data input/
export, including the use of WAV files.
WAV files are a useful format for recording and importing waveforms for a wide range of
circuits in LTspice – not just audio – but
for audio circuits they have the advantage
of enabling us to listen to the simulated
waveforms. Following on from last month’s
introduction, this month we will look at
behavioural sources in more detail. To
make this more fun we will make use of
the WAV export capability, illustrating
behavioural sources by getting LTspice
to synthesise a musical scale.
Recap – behavioural sources and WAV files
As discussed last month, behavioural
sources (LTspice BV and BI elements)
facilitate the use of a large range of
mathematical expressions to define their
output. These expressions can involve time
as well as the circuit voltages on any wire (to
ground), voltage differences (between two
wires) and the current in any element. Last
month, we just used simple expressions,
for example, the following source value
equation will output a voltage which is
5000 times the current in resistor R1.
V=-5000*I(R1)
The LTspice behavioural sources provide
basic operators such as addition and
multiplication, logic operations (eg, AND
and OR), conditional operations such as less
than, and over 50 mathematical functions,
including trigonometric functions, min/
max, delays and random numbers.
The waveforms from WAV files can be
input to an LTspice simulation by placing a
voltage or current source on the schematic
and setting the value of the source to, eg:
wavefile=filename chan=channel
Practical Electronics | August | 2020
Here, channel is the channel number of
the waveform in the WAV file. The filename
can include the full path if the WAV file is
not to be in the same folder as the LTspice
schematic. To export waveforms, place a
.wave directive on the schematic (using
the .op toolbar button):
.wave filename nbits samplerate
V(net) …
Here, filename is the name of the WAV file to be written to, nbits is the number of bits used for the wave data values (from 1 to 32), and samplerate is the sample rate of the file, in Hz, from 1 to
4,294,967,295. The settings are followed
by the list of net voltages to include in
the file (the number listed determines the
number of channels). You can think of this directive as creating a virtual analogue-to-digital converter which writes to the file.
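The quantisation the .wave directive performs can be mimicked outside LTspice; here is a rough Python sketch using only the standard-library wave module (the filename, 440Hz tone and settings are illustrative, not taken from the article):

```python
import math
import struct
import wave

# Write one second of a 440Hz sine to a 16-bit, 44.1kHz mono WAV file,
# much as .wave quantises a listed net voltage into samples.
SAMPLE_RATE = 44100                # samplerate
NBITS = 16                         # nbits
FULL_SCALE = 2 ** (NBITS - 1) - 1  # 32767 for 16-bit samples

frames = bytearray()
for n in range(SAMPLE_RATE):
    v = math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)  # 'net voltage', ±1
    frames += struct.pack("<h", int(v * FULL_SCALE))   # quantise to 16 bits

with wave.open("sine.wav", "wb") as wav:
    wav.setnchannels(1)            # one channel per listed net voltage
    wav.setsampwidth(NBITS // 8)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```

The resulting file can be opened directly in Audacity, just like a WAV exported from LTspice.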
In last month’s article we also discussed
the requirements for using WAV files
correctly. The amplitude of the waveforms
from WAV files is limited to ±1V, which
means it is often necessary to scale the
waveforms to match them to/from the
circuit’s signal levels. This can be done
using behavioural sources, as demonstrated
last month. Inputs are straightforward as the WAV ±1V range is known, but for outputs it may be necessary to run the simulation twice – the first time to measure the signal's peak and the second with the scaling correctly set.
It is useful to be able to view and
manipulate the WAV files outside of
LTspice; for example, to check that the
waveforms in the WAV are as expected
and correctly match the simulation. A
useful tool for this is Audacity, a free audio
editor and recorder. Audacity can display
waveforms and play the audio content. It
also has numerous processing capabilities,
including signal normalisation, which is
useful in preparing inputs to simulations
to cover the full WAV signal range.
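Normalisation itself is just a scaling; here is a minimal Python sketch of the idea (assuming the samples are already floats in the WAV's ±1 range):

```python
def normalise(samples, target=1.0):
    """Scale float samples so the largest magnitude just reaches target."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    return [s * target / peak for s in samples]

# A waveform peaking at 0.25 is stretched to the full ±1 WAV range.
print(normalise([0.1, -0.25, 0.2]))  # → [0.4, -1.0, 0.8]
```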
Using behavioural sources
The most straightforward use of behavioural sources is in creating complex waveforms as inputs to your circuits, where the basic sources are unable to generate
them. Another common use is as part of
accurate models of circuits like op amps
(macromodels) that do not reveal all the
details of the component level design –
this is often done by device manufacturers.
A third use is in the process of creating
a new design – to model the ideas at an
abstract level, rather than designing a
full schematic. Such behavioural design
is common in professional engineering and in some cases, such as digital circuits described in languages such as VHDL and Verilog, the detailed design can be created from the behavioural design automatically by software.
ADSR idea
As an example of a behavioural design,
but mainly just to illustrate some of the
capabilities of these behavioural sources,
we will describe the creation of an LTspice
simulation that represents the behaviour
of a sound synthesis circuit similar to
that which you might find in an analogue
musical synthesiser. Regular readers will
recall the MIDI Ultimate Synthesiser
project (PE, February to July 2019). This
synthesiser is typical in using an ADSR
(Attack Decay Sustain Release) circuit
to shape the envelope of the sounds it
creates. The envelope (see Fig.1) is the
variation of the amplitude of a single
musical note, or percussive sound, with
time. The amplitude initially rises to a peak (attack) and then decreases (decay) to a level which remains constant for a while (sustain), until the amplitude falls to zero when the sound completes (release). Suitable choice of the speed of attack, decay and release, and the level of sustain, allows
Fig.1. ADSR amplitude envelope.
Fig.2. Music synthesiser ADSR envelope concept circuit.
Fig.3. The resistor, switch and capacitor network for the ADSR circuit.
the desired quality of sound to be achieved;
for example, to help mimic a physical
musical instrument.
The ADSR circuit in the MIDI Ultimate
Synthesiser uses over 60 components and
also requires further circuits to generate
the sound signal, apply the envelope to
the signal and create the trigger signal
that indicates when a note is played.
Imagine that the idea of an ADSR circuit
was new and you wanted to develop it
from scratch – there would be no existing
circuits to borrow from. You could try to
build a complete prototype, maybe with a few hundred components. If it did not work well – maybe there were problems with the basic design concepts, or the detailed implementation – then debugging could be very difficult. However, if you do not mind working with some mathematics, then the design could be developed first in behavioural form to evaluate and hone the basic ideas before creating the full circuit.

Fig.4. Gate signal which produces twelve gate (key-pressed) pulses.
ADSR concept circuit
A concept schematic of the ADSR circuit is shown in Fig.2. We will use a mixture of real 'components' (resistors, capacitors and switches) and behavioural sources to implement the circuit – the choice is based on finding the easiest way to model the behaviour. The sound signal source, control circuit, comparator and voltage-controlled amplifier (VCA) will be largely implemented with behavioural sources.

Fig.5. The attack signal.
Our ADSR circuit model follows the
same basic principle as the one in the
MIDI Ultimate Synthesiser, although
slightly simplified. It operates as follows.
The timing of the attack, decay and release phases of the envelope is controlled by the charging or discharging of the capacitor (C) via the variable resistors RA, RD and RR respectively (see Fig.2). The voltage on
C (labelled ‘ADSR’ in the figure) is used
to control the envelope of the sound via
the VCA. When no note is being played
the key pressed signal will be off (logic 0),
which will cause the controller to switch
on its release output (and turn the attack
and decay outputs off). This will turn
on switch SR, discharging C through RR.
After the release time, the ADSR voltage
will be close to zero and no signal output
will occur. When the next note is played
the key-pressed signal goes high, causing
the controller to switch the release output
off and the attack output on, switching on
SA and charging C through RA towards the
supply voltage. When the ADSR voltage
reaches its maximum value (set by Vmax)
the comparator will switch, causing the
controller to switch the attack output
off and the decay output on. C will then
discharge towards Vsustain via SD and RD. If
the key is pressed for sufficiently long, the
ADSR voltage will level off towards Vsustain.
This brings us back to the point where the
key is released and C is discharged from
whatever voltage it is currently holding
via SR and RR. The shape of the envelope
is controlled by the three resistors, the
sustain voltage and length of key-press. In
the original design C can also be switched
to extend the range of timing.
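This charge/discharge behaviour can be prototyped in a few lines before any circuit is drawn. The following is a rough discrete-time Python sketch of the capacitor voltage; the levels, time constants and step size are illustrative assumptions, not the article's component values:

```python
# Rough discrete-time sketch of the ADSR capacitor voltage.
# Levels and time constants below are illustrative assumptions.
VSUPPLY, VSUSTAIN, VMAX = 12.0, 5.0, 10.0
TAU_A, TAU_D, TAU_R = 0.02, 0.05, 0.1  # RA*C, RD*C, RR*C in seconds
DT = 1e-4                              # simulation time step

def adsr(gate_on_time, total_time):
    """Sampled envelope voltage on C for a single key press."""
    v, out, phase = 0.0, [], "attack"
    for n in range(int(total_time / DT)):
        t = n * DT
        if t >= gate_on_time:
            phase = "release"          # key released: discharge via RR
        elif phase == "attack" and v >= VMAX:
            phase = "decay"            # comparator trips at VMAX
        target, tau = {"attack": (VSUPPLY, TAU_A),
                       "decay": (VSUSTAIN, TAU_D),
                       "release": (0.0, TAU_R)}[phase]
        v += (target - v) * DT / tau   # one RC charge/discharge step
        out.append(v)
    return out

env = adsr(gate_on_time=0.25, total_time=0.5)
```

Plotting env reproduces the general shape of Fig.1, and swapping the constants reshapes the envelope just as adjusting RA, RD and RR would.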
Creating the simulation
To create this simulation, we have to decide
what ‘music’ the simulated synthesiser is
going to ‘play’ – we will get it to create a
twelve-note chromatic scale starting from
the commonly used 440Hz reference note
(A above middle C). The notes will all last a
quarter second and be played every half second.
We start by drawing the resistor, capacitor and switch network using basic components (see Fig.3). This is derived directly from the relevant parts of Fig.2. The
switches use SPICE S elements, for which we have to provide a
model using a .model statement:
.model Eswitch SW(Ron=.01 Roff=100Meg Vt=0.5)
This defines a model called Eswitch (electronic switch) that is
close to ideal in that it has an on resistance of 0.01Ω and an off
resistance of 100MΩ. The switch is off when the control input
voltage is below the threshold (Vt) of 0.5V, and on when it is
above the threshold.
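The switch model is easy to restate numerically; here is a tiny Python sketch of the S-element as a two-state idealisation (it ignores any smoothing LTspice may apply around the threshold):

```python
RON, ROFF, VT = 0.01, 100e6, 0.5  # values from the Eswitch .model line

def switch_resistance(vctrl):
    """Two-state idealisation of the Eswitch model."""
    return RON if vctrl > VT else ROFF

print(switch_resistance(1.0))  # on: 0.01 ohm
print(switch_resistance(0.0))  # off: 100 Mohm
```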
The release switch (SR) is controlled directly by the gate (key
pressed) signal. When the gate is off the release switch is on and
vice versa – this is a logic NOT operation which is implemented
using LTspice’s behavioural logic elements. These may be another
topic for another month, for now we just need to know they
provide standard logic functions, do not need supplies and by
default use a 1V logic signal – which is why the switches were
configured with a 0.5V threshold (half the logic voltage).
The timing of the twelve regularly spaced notes can be created
using a standard voltage source configured in pulse mode (see
Fig.4). The pulses are 1V with a period of 0.5s and an on time
of 0.25s. The first pulse goes high after a delay of 0.25s. The
rise and fall times of transitions are 100µs.
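As a cross-check, the same idealised pulse train is easy to express numerically; here is a small Python sketch (edges are treated as instantaneous, which ignores the 100µs transitions):

```python
def vgate(t, period=0.5, width=0.25, delay=0.25):
    """Idealised gate: 1V pulses of the given width every period seconds,
    the first starting after delay (the 100us edge times are ignored)."""
    if t < delay:
        return 0.0
    return 1.0 if (t - delay) % period < width else 0.0

# The first note's gate runs from 0.25s to 0.5s, the second from 0.75s to 1.0s.
```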
Control of the attack and decay switches (SA and SD) is a little more complex and is where we will start to see use of behavioural sources. We also need another logic element – a
set-reset flip-flop (see Fig.5), which also features in the design
of the original synthesiser. When the gate signal switches on,
we set the flip-flop, which in turn activates the attack switch.
When the ADSR voltage reaches the maximum value (Vmax in
Fig.2) the attack phase is stopped by resetting the flip-flop. This
requires that we generate two signals: one to start the attack
phase and another to end it.
Edge detection
To start the attack phase, we need to detect the positive edge (0 to 1 change) of the gate signal. There are different ways to do this in a real circuit (flip-flop or RC circuit) but with behavioural sources we do not have to worry about the implementation, just the function required. When a positive edge occurs, there is a positive rate of change of the signal voltage. Mathematically, 'rate of change' is found by differentiating a function, and LTspice provides a time-derivative function (ddt) to calculate this. We set the value of a behavioural source to:
V=ddt(V(gate))
Now the source will output a voltage equal to the rate of change of the gate signal. It will be zero except when the pulse is changing – detecting the edges.
With the setup of the Vgate pulse source described above, the edge change is 1V in 100µs, which is 10kV/s, so this source will output ±10kV pulses for the 100µs periods while the gate pulse is switching. This will not prevent the circuit working – the behavioural flip-flop is not real and will work fine with a 10kV input, but we want to keep the control logic to 1V for consistency. Also, we have negative pulses (for the 1 to 0 changes), which again will not affect the flip-flop, but which we do not need. There is an LTspice function called 'unit step' (u) which is defined as outputting 1V if the input is greater than 0, otherwise it outputs zero. If we apply the derivative of the gate voltage to this, we will get 1V when the positive edge is occurring and zero at all other times. So, the source function becomes:
V=u(ddt(V(gate)))
(See Fig.5.)
The signal to end the attack phase is simpler. For the circuit in Fig.2 we see this occurs when the ADSR voltage is greater than Vmax (detected by the comparator). In the original circuit the supply was 12V (as it is in Fig.3) and Vmax was 10V. We can create a signal that is 1V when the ADSR voltage is above 10V (0 otherwise) using a 'greater than' conditional operator (>):
V=V(adsr) > 10
as shown in Fig.5. There are four conditional operators you can use (> < >= <=).
Delays
Control of the decay switch seems straightforward. The decay switch is on when the gate signal is on (a key is pressed) AND we are not in the attack phase. We can detect the two conditions with V(attack) < 0.5 and V(gate) > 0.5. LTspice has logical operators (AND: &, OR: |, XOR: ^ and NOT: !) so we can write the full condition for the decay signal as:
V=V(attack) < 0.5 & V(gate) > 0.5
Unfortunately, this will cause a simulation failure. The problem is due to the attack signal being controlled by the gate signal, which is in turn controlled by attack. With these functions having zero delay (unlike anything in a real circuit) the simulation can lock up. The solution is to introduce a delay – LTspice has a delay function to achieve this. If we have a voltage V(sig) we can create a version of this signal delayed by time tdelay using:
V=delay(V(sig), tdelay)
Using delay tends to slow down the simulation, particularly if the delay time is short compared with other activity. For this circuit 300µs worked. See Fig.6, where the source expression is:
V=delay(V(attack) < 0.5 & V(gate) > 0.5, 300u)

Fig.6. The decay signal.

Running a simulation with all the elements from Fig.3 to Fig.6 produces the results in Fig.7, which shows a single ADSR cycle. We can play with the values of R1, R2, V3 and R3 (Fig.3) and the on time of the Vgate pulse source (Fig.4) to change the envelope shape.

Fig.7. Simulation results showing the control signals and resulting ADSR envelope.

Fig.8. Stepped waveform using V=ceil(time).
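The u(ddt(...)) edge detector can also be tried numerically; here is a hedged Python sketch using a finite difference in place of ddt (the sample interval is an arbitrary assumption):

```python
def u(x):
    """Unit step as in LTspice: 1 if x > 0, otherwise 0."""
    return 1.0 if x > 0 else 0.0

def edges(samples, dt):
    """Sample-by-sample u(ddt(v)) using a finite difference for ddt."""
    out = [0.0]
    for n in range(1, len(samples)):
        ddt = (samples[n] - samples[n - 1]) / dt  # crude derivative
        out.append(u(ddt))
    return out

gate = [0, 0, 1, 1, 1, 0, 0, 1, 1]
marks = edges(gate, dt=1e-4)
# marks is 1.0 only where gate rises 0 -> 1; falling edges give a
# negative derivative, which u() suppresses.
```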
We now have an envelope but no tone signal to apply it to.
In the real synthesiser this may come from a voltage-controlled
oscillator (VCO). We can do something similar with behavioural
sources – that is, create a wave whose frequency is controlled
by a voltage. We will start with a sinewave as this provides a
single fundamental frequency (the pitch of a musical note). We
could extend this to create waveforms with different sound
qualities (timbre in musical terms) by adding harmonics
(multiples of the fundamental frequency).
Sines and times
The sine function is familiar to many from school trigonometry.
In that context we typically take an angle θ and find its sine,
written as sin(θ), in which the angle is measured in degrees. If
Fig.9. Creating the tones of the chromatic scale.
46
we plot a graph of sin(θ) against θ we get
the familiar sinewave shape, which repeats
every 360°. However, the degree is not
the only unit for measuring angles and in
mathematics, and LTspice’s trigonometric
functions, angles are measured in radians.
The conversion is straightforward: 360° is 2π radians, so sin(θ) repeats every 2π with θ in radians (see https://en.wikipedia.org/wiki/Radian if you are new to radians).
For generating a sine waveform we
need to apply a value to the sine function
which varies with time. For a frequency of
f Hz we need the sine function to repeat
every 1/f seconds. If we use sin(t), where t is time, the wave will repeat every 2π seconds. If we multiply time by 2π, that is use sin(2πt), the wave will repeat every second (when t = 1 we evaluate sin(2π)). To set a different frequency we multiply 2πt by the frequency, that is use sin(2πft). For example, for 100Hz we have sin(2π × 100 × t); we evaluate sin(2π) at t = 1/100s, so the waveform repeats every 1/100th of a second as required.
Translating sin(2πft) into LTspice syntax, we can generate a 100Hz sinewave with a behavioural source using:
V= sin(2*pi*100*time)
Note that pi and time are keywords recognised in LTspice
expressions; time is a value equal to the current simulation
time. We can use a voltage to set the frequency simply by
replacing the fixed frequency value in the above expression
with a reference to that voltage, for example:
V=sin(2*pi*time*V(freqcontrol))
Scales
Given that we are aiming for a chromatic scale of musical
notes we need to generate signals of the correct frequencies.
The frequency of a musical note, fn, which is N steps away from a reference frequency f0, can be found using the formula:
fn = f0 × 2^(N/12)
We can write this formula in LTspice syntax. The multiply
and to-the-power-of operators are * and ** respectively, in
common with many programming languages. With f0 = 440Hz
we could use:
V=440*2**(N/12)
Fig.10. Final output source – this takes on the role of the VCA.

This expression creates a voltage numerically equal to the frequency we want. To make this work we need a value of
N, which must be an integer (whole
number); for this we can use an integer
voltage value (1V, 2V…). Given that we
want to run up the musical scale we need
a voltage which steps through integer
voltages with time. We can create a
voltage source which directly depends
on time:
V=time
This will create a linear ramp increasing
at 1V/s (volt per second), but not the
steps we need. To create the steps, we
can round the voltage of this ramp up to an integer using the LTspice ceil(x) function, which outputs the smallest integer greater than or equal to x. Using the following we get the waveform shown in Fig.8:
V=ceil(time)
For our chromatic scale (12 notes in
six seconds) we need to step at twice
this speed, which we can do using 2 ×
time. Also, to coordinate with the note
changes, which start at 0.25s, we need
to shift the waveform in time, which
can be done by adding or subtracting a
value from time. Specifically, we want
the fundamental note (N = 0) to start at
0.25s. If we subtract 1.5 from 2 × time
the value from the ceil function will
start at –1 (at time = 0) and switch to
0 at 0.25s. So, for our N value we can
use the following expression:
V=ceil(2*time-1.5)

Fig.11. Complete chromatic scale of notes with ADSR envelopes.
We do not have to use a separate voltage
source for this. We can substitute this
expression into the frequency-control
voltage expression above to get:
V=440*(2**(ceil(2*time-1.5)/12))
The above discussion leads to us adding
two voltage sources to the schematic –
one to generate the frequency control
voltage and the other to produce the
musical tones. This is shown in Fig.9.
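The two source expressions translate directly into ordinary code; here is a Python sketch evaluating the same formulas at chosen times (the 440Hz reference and note timing follow the article):

```python
import math

def note_index(t):
    """N = ceil(2*time - 1.5): -1 before 0.25s, then stepping every 0.5s."""
    return math.ceil(2 * t - 1.5)

def note_freq(t):
    """f = 440 * 2**(N/12): one semitone per half-second step."""
    return 440 * 2 ** (note_index(t) / 12)

print(round(note_freq(0.3), 1))  # first note (N = 0): 440.0
print(round(note_freq(0.8), 1))  # second note (N = 1): 466.2 (A#)
```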
Envelope
We now have the tone signal and need
to apply the envelope to it (the function
of the VCA in the real system). This is
easy to do by multiplying the tone by the
ADSR signal. Given that the maximum
amplitude of the ADSR signal is 10V
and the maximum of the sine function
is 1, this will result in a 10V peak tone-with-envelope signal. It would be fun
to output the signal to a WAV file so we
can listen to the effects of changing the
ADSR parameters. As discussed last
month, voltages written to the WAV file
need to be limited to ±1V, so we need
to divide the 10V signal by 10. From this, the expression for the final output voltage source is:
V=V(tone)*V(adsr)/10

Fig.12. Zoom-in to show tone signal and part of one note envelope.
This is shown in Fig.10, which also
includes the .wave and .tran directives.
Putting all parts from Fig.3 to Fig.6, and
Fig.9 and Fig.10 on one schematic, and
simulating produces the results shown
in Fig.11. The tone signal details cannot
be seen at this scale so, Fig.12 shows a
zoom-in to part of one note. We can listen
to the scale using an audio player and
load it into Audacity to check the WAV
file waveform is correct.
This simulation was contrived to
illustrate use of behavioural sources, so
the approach may not be the best way
of doing things. For example, changing
it to produce different note sequences
would be difficult – it may be better for
the notes to be defined by a PWL input
file. The simulation can be extended
and improved in other ways too; for
example, adding harmonics to the tone
waveform for timbre and using LTspice’s
random number functions to create
noise waveforms for percussive sounds.
With these improvements in mind, a
creative reader could produce a musical
composition synthesised by LTspice!
Simulation files
Most months, though not every month, LTspice is used to support the descriptions and analysis in Circuit Surgery.
The examples and files are available
for download from the PE website.