This week’s lectorial touched on the fundamentals of sound. Having studied music and audio technology back at Singapore Polytechnic, I felt very much back in my element. It was a good refresher after a few weeks of going through various topics, mostly to do with filmmaking or media in general.
When our lecturer, Rachel, flashed John Cage’s 4′33″ on the screen, I chuckled to myself, reminiscing about the times back in poly when my course mates and I would fool around pretending to perform 4′33″. We found it a convenient excuse for submitting work that might not please our lecturers or that fell just short of the assignment criteria.
I first heard of John Cage in a MIDI, Synthesis and Composition class where the lecturer was teaching composition techniques known as “formalism” and “chance”. Writing music can be left to the roll of a die, seeing which number lands face up, or to software that is given a set of boundaries and parameters and allowed to generate notes from there. At first glance it might seem random, but how you alter the boundaries and parameters does affect how the music is produced or played; therefore it is not random but, instead, left to chance. It is quite a complex theory to get your head around, but once you get it, it is quite simple. “Minimalist” music can be derived from this, where the concept of less is more applies. By narrowing the choice of notes and how often each note is played, the software composes a piece in which it is again left to chance which note follows which.
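To make the idea concrete, here is a minimal Python sketch of chance composition as I understood it in that class: the composer fixes the boundaries (which pitches and durations are allowed, how long the phrase is) and leaves the individual choices to chance. The pitch set, durations and phrase length below are illustrative assumptions, not any particular piece or piece of software.

```python
import random

# A minimal sketch of "chance" composition: the composer sets the boundaries
# (which pitches are allowed, which durations, how many notes), then leaves
# the actual note choices to chance. All values here are made up for
# illustration; narrowing the lists gives the piece a more minimalist feel.

ALLOWED_PITCHES = ["C4", "D4", "E4", "G4", "A4"]   # a deliberately narrow palette
ALLOWED_DURATIONS = [0.5, 1.0, 2.0]                # note lengths in beats
NUM_NOTES = 16                                     # length of the generated phrase

def chance_phrase(seed=None):
    """Generate a phrase whose notes are left to chance within fixed boundaries."""
    rng = random.Random(seed)
    return [(rng.choice(ALLOWED_PITCHES), rng.choice(ALLOWED_DURATIONS))
            for _ in range(NUM_NOTES)]

if __name__ == "__main__":
    for pitch, beats in chance_phrase(seed=42):
        print(f"{pitch} for {beats} beat(s)")
```

Change the boundaries (say, allow only two pitches, or only long durations) and the character of the output changes with them, which is exactly why the result is left to chance rather than being purely random.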
Sound is such a broad topic that you can complete an entire bachelor’s programme on it. I graduated with a Music and Audio Technology diploma and decided to enrol in the Bachelor of Communications (Media) to broaden my skill set as well as build my network with people abroad. Sound covers everything from sound effects to music, dialogue to monologue, diegetic to non-diegetic sound, and the list goes on, not forgetting the technology and production side of it, from recording to mixing and processing. It is true that sound can make or break a film. How can you tell if a film’s budget is high or low? Just listen to the sound. If the sound is boomy, unrefined, echoey, jarring or muddy, the budget of that particular film is probably pretty low. Unlike video, where you can make minor corrections and edits if there is a flaw in the visuals, you can never really fix a waveform once you have recorded your source along with other sounds that didn’t come from it.
The art of recording is deceptively simple: just point a microphone at the source. How hard can that be? Well, there are many things that need to be taken into consideration before you hit that red dot, otherwise known as the record button. The level of the incoming signal, making sure it’s not too low or too high (otherwise clipping or distortion might occur). The ambient sound that might be bleeding into the microphone. If you’re miking a musical instrument, you have to decide on the type of mic (condenser, dynamic or ribbon; different makes and models shape your recording as well) and the miking technique (XY, MS, 120°, mono). And of course, making sure the microphones you’re using aren’t phasing, though that is one of the few things you can correct when you’re mixing.
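The “not too low, not too high” point is easy to show in code. Below is a rough Python sketch of a level check: given a block of samples normalised to -1.0..1.0, it reports the peak in dBFS and flags likely clipping or an overly quiet take. The -1 dBFS and -30 dBFS thresholds are my own illustrative assumptions, not a standard.

```python
import numpy as np

def check_levels(samples: np.ndarray) -> str:
    """Report the peak level of a block of samples (normalised to -1..1) in dBFS."""
    peak = float(np.max(np.abs(samples)))
    if peak == 0.0:
        return "Silence - is the mic plugged in?"
    peak_dbfs = 20 * np.log10(peak)          # 0 dBFS = digital full scale
    if peak_dbfs > -1.0:                      # assumed "too hot" threshold
        return f"Peak {peak_dbfs:.1f} dBFS - too hot, clipping likely. Back the gain off."
    if peak_dbfs < -30.0:                     # assumed "too quiet" threshold
        return f"Peak {peak_dbfs:.1f} dBFS - very quiet, the noise floor will show when boosted."
    return f"Peak {peak_dbfs:.1f} dBFS - healthy recording level."

if __name__ == "__main__":
    t = np.linspace(0, 1, 48000)
    quiet_take = 0.01 * np.sin(2 * np.pi * 440 * t)                     # far too quiet
    hot_take = np.clip(1.5 * np.sin(2 * np.pi * 440 * t), -1.0, 1.0)    # clipped
    print(check_levels(quiet_take))
    print(check_levels(hot_take))
```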
Now to mixing. As straightforward as mixing sounds, just making sure nothing’s too loud, nothing’s too soft, everything sits at the right level and is panned left and right for a good stereo image, it is still very hard to find that sweet spot where nothing is fighting with anything else. Human hearing covers roughly 20 Hz to 20 kHz, and anything in that range is audible to the human ear, so mix engineers have that bandwidth to play around with: which frequencies to boost, which to cut, and so on. Mixing is one of the key stages in producing an artist’s album or a feature film. The best way to tell a good mix from a bad one is that nothing stands out from the mix. Everything should be blended together nice and smooth. Nothing too loud, nothing too soft.
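As a toy illustration of those two basic moves, level and pan, here is a small Python sketch that sums a few mono tracks into a stereo mix. The gain and pan settings are made up, and a real mix obviously also involves EQ, compression and so on; this is only a sketch of the idea.

```python
import numpy as np

def db_to_linear(db: float) -> float:
    """Convert a gain in decibels to a linear multiplier."""
    return 10 ** (db / 20)

def pan_gains(pan: float) -> tuple[float, float]:
    """Constant-power pan law: pan = -1.0 (hard left) .. +1.0 (hard right)."""
    angle = (pan + 1) * np.pi / 4
    return np.cos(angle), np.sin(angle)

def mix(tracks: list[tuple[np.ndarray, float, float]]) -> np.ndarray:
    """tracks: list of (mono_samples, gain_db, pan). Returns a stereo (N, 2) array."""
    length = max(len(samples) for samples, _, _ in tracks)
    out = np.zeros((length, 2))
    for samples, gain_db, pan in tracks:
        g = db_to_linear(gain_db)
        left, right = pan_gains(pan)
        out[:len(samples), 0] += samples * g * left
        out[:len(samples), 1] += samples * g * right
    return out

if __name__ == "__main__":
    t = np.linspace(0, 1, 48000)
    vocal = np.sin(2 * np.pi * 220 * t)      # stand-in for a vocal take, centred
    guitar = np.sin(2 * np.pi * 330 * t)     # stand-in for a guitar take, panned right
    stereo = mix([(vocal, -3.0, 0.0), (guitar, -6.0, 0.4)])
    print("Peak of mix:", float(np.abs(stereo).max()))
```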
One thing I realised was not mentioned in the lectorial was the cocktail party effect. Wikipedia defines it as “the phenomenon of being able to focus one’s auditory attention on a particular stimulus while filtering out a range of other stimuli, much the same way that a partygoer can focus on a single conversation in a noisy room. This effect is what allows most people to ‘tune into’ a single voice and ‘tune out’ all others. It may also describe a similar phenomenon that occurs when one may immediately detect words of importance originating from unattended stimuli, for instance hearing one’s name in another conversation.” Don’t you ever wonder why, when you’re in a packed train with people talking, a baby crying in a pram and someone’s earphones blasting heavy metal, you are still able to have a conversation with your friend? That’s the cocktail party effect at work. Our minds have a built-in mixing console that helps us filter out the sounds we don’t want and turn up the ones we are paying attention to.
I could go on and on about sound and how it affects our everyday lives, our music, our TV and film, simply because it is one of my interests and what I studied prior to this. In a nutshell, sound is everywhere and its presence can not only be heard but felt. It can touch people and spread a message to thousands, or millions.