SYNTHESIZERS – A LANGUAGE FOR SOUND DESIGN

Roland JP-8080 overhead view of front panel

The layout of controls on the front panel of a classic synth like the one above shows much of what can be done in synthesis.  It’s great fun to interact with the controls in real time and hear what each one does.  The interactions between controls can make it hard to isolate what any single one is doing, but you will soon find your way around without too much trouble.

Sounds in the real world have characteristics, and the synthesizer describes them using terminology that reflects that real world as well as jargon that only exists in the virtual one.  Learning the jargon is well worth it.  It will increase your studio chops no end, as well as adding countless sonic possibilities to your production palette.

Sound has three very helpful properties – it has frequency, it has amplitude, and it has phase, all of which affect the sound you hear.  I find phase confuses more people than amplitude or frequency.  I’ll deal with that in a moment.

Frequency is easy – it’s almost another name for the pitch of a sound, but not quite.  A sound does not need to produce the sensation of a fixed pitch to have frequency content: every sound has frequency components, whether or not it carries recognizable pitch information.

On a synth, you can either play a specific pitch, or you can glide towards it, ideally over a user-determined amount of time.  Some sounds do not have the property of pitch.

Sound pressure moving through space is alternately positive and negative, meaning the air is either being compressed or rarefied as the wave passes.  The size of that pressure swing is the amplitude.  The number of complete swings in one second, from positive to negative and back again, determines the frequency, and with it any pitch information.  If you get ten wavefront cycles in a second, then you have a sound at ten cps (cycles per second).  The cps measure came to be called Hertz, after the physicist Heinrich Hertz, so the ten-cycles-per-second wave is defined as 10 Hz.  The lowest notes in a mix you are likely to reproduce on your home speakers are probably around 35 Hz if you are lucky, and certainly won’t be anywhere near as low as 10 Hz.  If you add a subwoofer to your system, you may reach the lowest sub-bass notes, but below about 20 Hz you will have trouble perceiving pitch reliably at all.

Amplitude is, casually speaking, how loud a sound is, and it says a lot about how much energy the sound carries.  More precisely, it describes how far the sound pressure wavefront moves away from its resting point (the centre line in the waveform display).

In both cases the phenomenon changes over time, so one can graph the progress of the sound on paper.  The simplest example to look at is the sine wave, since it follows a fixed, simple pattern of repeating amplitude cycles at whatever frequency is selected.  Combining sine waves of different frequencies and amplitudes provided the basis for the earliest forms of additive synthesis, and it gave us a good understanding of the physical laws of how sounds behave.
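To make that concrete, here is a minimal sketch in Python with NumPy (the frequencies and amplitudes are arbitrary examples, not anything prescribed) showing that a sine wave is fully described by amplitude, frequency and phase, and that summing a few of them is the essence of additive synthesis:

    import numpy as np

    SAMPLE_RATE = 44100  # samples per second, a common audio rate

    def sine_wave(freq_hz, amplitude, duration_s, phase_deg=0.0):
        # One sine wave, defined entirely by amplitude, frequency (Hz) and starting phase (degrees).
        t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
        return amplitude * np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_deg))

    # Crude additive synthesis: sum a handful of harmonics of 110 Hz.
    partials = [(110, 1.0), (220, 0.5), (330, 0.33), (440, 0.25)]
    tone = sum(sine_wave(f, a, duration_s=1.0) for f, a in partials)
    tone = tone / np.max(np.abs(tone))  # normalise so the peak sits at digital full scale

Write that array out to a WAV file or plot it and you will see exactly the kind of repeating waveform described here.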

The waveform of a sound is displayed in DAWs routinely, and you’ll see it on SoundCloud as well.  There is plenty to be learned from a critical look at waveforms.  They are a two-dimensional graphical representation of a sound moment by moment along the timeline, at least in terms of energy (amplitude).

The centre line of the waveform, running horizontally left to right along the timeline, marks the point of zero amplitude.  In normal circumstances the amplitude moves through this centre line and emerges on the other side, continuing on its way to a peak position, whether negative or positive.  From there it returns the way it came, pausing for a moment of inertia, if you will, at maximum amplitude before changing direction.  In reality, of course, it would be more accurate to think of sound in three dimensions moving against a timeline, but that makes visualization far more difficult and explanations much harder to convey.

We spend time on sine wave functions in high school mathematics.  These are familiar shapes to the majority of people.

What many of us are often unaware of is that the movement of a speaker cone during playback is a direct analogue of the waveform shown onscreen.  When the cone moves outwards, the wave’s amplitude is heading in the positive (upwards) direction, and when it returns from its point of maximum forward travel to its most recessed inward position, the amplitude is heading in the negative (below the line) direction.  You literally have a graphic representation of the behaviour of your speakers when the waveform is played back.

In an empty waveform (a recording made without an input signal applied), the centre line is shown but there is no audio going above and below the line.  The centre line is the point of zero amplitude, and correlates with the speaker at rest.  If you play the empty file, that’s exactly what you will get from the speaker – silence, and no movement.

It’s important to note that loudness is not amplitude – they are related concepts, but only relatively recently have loudness measurements become possible against an agreed international standard.   Today, there is legislation in place in Europe and elsewhere that makes broadcasters measure and control their loudness, with limits on how loud programme material can get relative to a reference level.  In practice this is measured with loudness meters built to standards such as ITU-R BS.1770 and EBU R128, and the same thinking underpins calibrated metering schemes like the mastering guru Bob Katz’s K-System.

If you set your system up to work with the K-System, loudness metering plug-ins from Waves and others will make your life much easier in terms of managing crest factor – the difference between average and peak programme levels in your masters.  Modern loudness in mastering is largely a matter of maximizing average levels as musically and cleanly as possible, while controlling peak levels so they don’t exceed the digital clipping limit of 0 dBFS.  You do not want to exceed that limit even for the briefest moments: between individual samples, the reconstructed waveform can contain intersample peaks that exceed digital maximum and push playback into the deeply unpleasant digital clipping zone.
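As a rough illustration of what crest factor means – a sketch only, assuming a signal held as a NumPy array of floating-point samples in the range -1.0 to 1.0 – you can compare peak and average (RMS) levels like this:

    import numpy as np

    def crest_factor_db(samples):
        # Difference between peak level and average (RMS) level, in dB.
        peak = np.max(np.abs(samples))
        rms = np.sqrt(np.mean(samples ** 2))
        return 20 * np.log10(peak) - 20 * np.log10(rms)

    # A pure sine wave peaks about 3 dB above its RMS level; heavily limited
    # "loud" masters can end up with crest factors well under 10 dB.
    t = np.arange(44100) / 44100
    print(round(crest_factor_db(np.sin(2 * np.pi * 440 * t)), 1))  # ~3.0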

Prior to the concept of calibrating monitoring levels to an agreed standard, there was no way to measure loudness in a meaningful way that legislators could rely upon.  This progress means we can finally make it illegal for advertisements on television to be deafening in comparison to the program material we actually pay our money to enjoy.  Hallelujah!

Before that, it was not possible to measure loudness in the scientific sense because it is actually a relative term – in other words, louder means louder than something specific.   Amplitude, however, is an absolute scientific measure.

In the physical world, sound rarely occurs as a simple sine wave.  Sounds are usually made up of a complex pile of waves, each with differing properties from moment to moment.  Some may last longer than others, some may be louder, some may be pitched apart, but all may be occurring at once – a complicated situation to face when trying to generate sounds that have musical uses.

Thankfully, Jean-Baptiste Joseph Fourier, a French mathematics whiz working around the turn of the 19th century, showed that any complicated sound wavefront can be described as a sum of simple sine waves.   Fourier analysis is the basis of synthesis and sampling.  There’s no need to use the mathematics yourself here, so don’t worry!

When we see a waveform display in a DAW, we are seeing a composite mugshot of the various sine waves that have been summed together to produce the audio.
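If you want to peek at those component sine waves yourself, here is a small NumPy sketch (the frequencies are invented for the example) that builds a complex tone and then asks the FFT which frequencies it contains:

    import numpy as np

    SAMPLE_RATE = 44100
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of samples

    # A made-up complex tone: three sine waves summed together.
    signal = (1.0 * np.sin(2 * np.pi * 110 * t)
              + 0.5 * np.sin(2 * np.pi * 220 * t)
              + 0.25 * np.sin(2 * np.pi * 330 * t))

    # Fourier analysis: which frequencies are present, and how strongly?
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)

    # The three strongest bins land at the frequencies we put in.
    strongest = sorted(freqs[np.argsort(spectrum)[-3:]].tolist())
    print(strongest)  # [110.0, 220.0, 330.0]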

Phase, a critically important topic in recording, matters here.  When two sine waves are combined in the real world, they can either sum together constructively or cancel each other out destructively.  This is literally a matter of degree.

High frequencies have waveform cycles that are spaced very closely together, and low frequencies have more widely spaced cycles.  This is why low numbers like 10 Hz are lows, and high numbers like 20,000 Hz are highs – also written 20 kilohertz, 20 kHz, or casually just 20k.  It’s the number of cycles per second again.

In the real world, a sub-bass note can have a fundamental frequency (the primary tone component) whose wavelength is over 50 feet – in other words, a single cycle of the wavefront occupies more than 50 feet of air as it propagates outwards from the speaker reproducing it.

Highs carry a lot less energy, and their cycles are packed so closely together that the highest sounds we humans can hear are only millimetres in wavelength.   The distances are small enough that tiny movements of your head in the listening position will affect your perception of the high-frequency content of your audio.  Play a single 10 kHz test tone (a simple sine wave at 10,000 cycles per second) and you will find you can move your head a few inches in any direction and hear all kinds of subtle changes in the perceived level of the tone.
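Those wavelengths follow directly from the speed of sound; a back-of-the-envelope calculation, assuming roughly 343 metres per second in air at room temperature, looks like this:

    SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at ~20 degrees C

    def wavelength_m(freq_hz):
        # Wavelength in metres of a sound wave at the given frequency.
        return SPEED_OF_SOUND_M_S / freq_hz

    print(wavelength_m(20))      # ~17.2 m (about 56 feet) – a deep sub-bass fundamental
    print(wavelength_m(10_000))  # ~0.034 m (about 34 mm) – the 10 kHz test tone
    print(wavelength_m(20_000))  # ~0.017 m (about 17 mm) – the top of human hearing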

Phase arises as an issue when dealing with the lower frequencies, those below about 4 kHz but especially bass sounds like kick, bass, floor tom, etc.  Anything with significant low-frequency components, in fact.

If two frequencies are a long way apart, their phase relationship changes very quickly, many times during a single cycle of whichever is the lower frequency.  The relationship shifts too fast to be meaningful, and it rarely causes problems in everyday recording practice.

The closer together they are, the more slowly the phase relationship changes over time.

As they get very close, just like when you tune a note on the piano or guitar relative to other notes, the phase relationship becomes very audible in a way we call “beating”.  The frequency of this beating effect will be the difference between the two frequencies, so if you combine two sine waves of 100 Hz and 104 Hz, you get the beating at 4Hz, which is the difference.

It’s called the difference tone, technically, but if it’s lower than about 20Hz it appears to be a pulsing quality in the sound rather than a discrete pitched note.

In the above example, you would hear the 100 Hz and 104 Hz components, while the 4 Hz difference appears not as a separate note but as the pulsing or beating of those two tones against each other.  If the difference between the two is greater than about 20 Hz, the result can instead be perceived as a new frequency component in the sound: mix 250 Hz with 350 Hz and a 100 Hz difference tone can become audible too.
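Here is a small sketch, using the 100 Hz and 104 Hz figures from above, showing that their sum is a single tone whose loudness swells and fades four times per second:

    import numpy as np

    SAMPLE_RATE = 44100
    t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE  # two seconds

    # Two sine waves 4 Hz apart, as in the example above.
    combined = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 104 * t)

    # Track the loudness in 50 ms windows: it swells and fades four times
    # per second – the beating at the 4 Hz difference frequency.
    window = int(0.05 * SAMPLE_RATE)
    envelope = [round(float(np.max(np.abs(combined[i:i + window]))), 2)
                for i in range(0, len(combined) - window, window)]
    print(envelope[:10])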

Deliberate beating of frequencies is a common musical effect, particularly in the higher frequency sounds, because low frequencies are much more likely to cancel each other out to some degree when summed together, which is unwanted 95% of the time.

This is where the PHASE button on your consoles and virtual mixers comes into play.

Summing low frequencies together causes sum and difference behaviours depending on the frequencies so combined.

When one frequency is played, it completes a fixed number of cycles per second, as we discussed already – its Hertz value.

If you combine another sound at a different Hertz value, the two are unavoidably out of sync at times.  The same thing occurs if you combine two copies of the same frequency that began playing at different points in the timeline.  In the latter case, the phase relationship is fixed, since they share the same frequency – the same cps or Hertz value.  In the former case, the phase relationship shifts continuously over time.

At the times when waves sum, the sound gets louder, due to increased amplitude.  When they cancel, the sound gets quieter, since amplitude is lessened.  We measure the wave cycle in degrees, with a rotation of 360 degrees being needed to reach the start of the next “peak to peak via trough” cycle in any wave, no matter the frequency.

When two waves are very close in frequency, their relative phase drifts slowly through the full 360 degrees of rotation, and they will beat.

When one wave is at the exact opposite point of the cycle from the other – 180 degrees away – the two waves cancel each other out completely (assuming equal amplitude).

When they are at the same point in the wave cycle, they will sum completely, doubling in amplitude.
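A tiny sketch of both cases – two equal 100 Hz sine waves, one copy shifted by 0 or by 180 degrees (the figures are illustrative only):

    import numpy as np

    SAMPLE_RATE = 44100
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

    def tone(freq_hz, phase_deg):
        return np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_deg))

    in_phase = tone(100, 0) + tone(100, 0)        # same point in the cycle: amplitudes add
    out_of_phase = tone(100, 0) + tone(100, 180)  # 180 degrees apart: complete cancellation

    print(np.max(np.abs(in_phase)))      # ~2.0 – double the amplitude
    print(np.max(np.abs(out_of_phase)))  # ~0.0 – silence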

All this happens in the air, too, and we hear the results around us all the time.

Sound pressure waves from different sources reach a microphone diaphragm at different times, depending on each source’s distance, and the resultant sound is combined as it arrives at the diaphragm.  What you capture is the acoustic summing and cancellation in air of the various sources you can hear, frequency component by frequency component, at the microphone position.  This encourages you to consider where you put a microphone fairly carefully, especially with low-end sounds, the ones most susceptible to phase issues.

The PHASE button lets you make a primitive but effective decision about the relative polarity of a signal – whether the electronics treat the start of each wave cycle as positive or negative.  You can start at zero degrees or at 180 degrees, in practice an arbitrary choice.  You simply combine the two sounds into a mono speaker and listen while trying both positions of the PHASE button.  On the inferior setting – the one with the worse cancellation – you will hear a loss of bass in playback: thinner, less solid, and generally weedy in comparison.  Sometimes it is hard to detect any difference at all, which usually means the sounds were relatively in phase to begin with.  That is sheer luck a lot of the time.

Pick your setting, PHASE button in or out, and away you go.  The drums and bass give a clear indication, usually.
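If you ever wanted to make that judgement numerically rather than by ear, one rough approach – a sketch only, assuming two mono tracks already loaded as equal-length NumPy arrays – is to compare the energy of the combined signal with the button out and with it in:

    import numpy as np

    def rms_db(samples):
        return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

    def compare_polarity(track_a, track_b):
        # Which polarity setting for track_b gives the stronger combined signal?
        normal = rms_db(track_a + track_b)   # PHASE button out (0 degrees)
        flipped = rms_db(track_a - track_b)  # PHASE button in (polarity inverted)
        return ("normal", normal) if normal >= flipped else ("flipped", flipped)

    # Hypothetical example: a kick close mic and an overhead that partially cancels it.
    t = np.arange(44100) / 44100
    kick = np.sin(2 * np.pi * 60 * t)
    overhead = -0.8 * np.sin(2 * np.pi * 60 * t)  # arrives inverted relative to the close mic
    print(compare_polarity(kick, overhead))       # ('flipped', ...) – flip the overhead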

If you try to choose a PHASE setting for a mono sound, you will hear no difference until you try to combine it with another sound.  Phase is a relative term, relative to a point in a cycle.  The differences cause the problems, due to the cancelling or summing of the component waves.  This translates in practice to things getting louder or softer note by note.  All the D’s are too loud, all the C’s are too soft.  The effect is most prominent at the low end, as I said, so typically it’s an uneven bass response you get as a result.

Today, there are far more flexible tools, such as phase rotation plug-ins that allow you to choose exactly how many degrees you wish to rotate the phase, rather than settling for the better of zero degrees or 180 degrees.

Creative phase adjustment can make the most of a multi-miked drum kit recording, because of the many microphones typically used, the many signals that get combined, and their generous low-frequency content in most cases.  This is where the phase rotator really comes into its own.
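For the curious, one way an arbitrary-angle rotation can be done is via the analytic signal; this is only an illustrative sketch using SciPy’s Hilbert transform, not how any particular plug-in works:

    import numpy as np
    from scipy.signal import hilbert

    def rotate_phase(samples, degrees):
        # Shift every frequency component of the signal by the same phase angle.
        analytic = hilbert(samples)  # real part = original signal, imaginary part = its Hilbert transform
        theta = np.deg2rad(degrees)
        return np.cos(theta) * analytic.real - np.sin(theta) * analytic.imag

    # Rotating a 100 Hz sine by 90 degrees turns it into a cosine – the same sound,
    # but aligned differently against anything it gets summed with.
    t = np.arange(44100) / 44100
    rotated = rotate_phase(np.sin(2 * np.pi * 100 * t), 90)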

Well, the language of synths is complicated and covers a lot of ground.  There are many components, such as filters, envelopes, oscillators, amplifiers, and so on.  There are pitch wheels and modulation wheels, and vast hordes of MIDI control signals.

Join me tomorrow for the next instalment, “SYNTHESIZERS – COMMUNICATING IDEAS WITH OTHER MUSICIANS”, when we will look at filters and envelopes in particular as a means of sharing your more nebulous sonic ideas with others in language they can all agree upon and understand once they learn the jargon.

It’ll be fun.  We’ll sync up tomorrow!  Bleep!

