For the first workshop of the week, we were sent down to the dungeons of building 9 (the edit suites) to work on the footage we shot in week #2. I blogged about how that went here: https://www.mediafactory.org.au/wen-sem-chin/2017/03/09/true-to-form-sounds-like-teen-spirit-week-2-2/. Today we started editing some of that footage and, not forgetting, mixing in some of the sounds we recorded earlier last week too.

Paul gave us a very in-depth and detailed lecture on file management and all the necessary routines (can’t think of a better word) that everyone should go through before they even come close to cutting up and editing raw footage. It was a good thing that Paul brought this up during the lecture, and I can’t stress enough how important it is to manage our files properly, neatly, and with full-blown OC(Deeee) levels of organisation, as it is that much easier to keep track of things once we’re down the road of actual editing. However, I don’t think a single lecture can cultivate this habit in people, as it is really hard to appreciate unless we are in the middle of an editing session, or have done it “X” number of times until it becomes second nature. Just like athletes developing muscle memory through drills and practice, I believe this is one of those things we have to go through step by step to develop the same “muscle memory”, except maybe not on a physical level.
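I didn’t note down the exact folder layout Paul showed us, but a minimal sketch of the kind of routine he meant might look like the little Python script below: it builds the same empty folder tree for every new project, so the rushes, the Zoom F4 audio and the project files always land in predictable places. The folder names here are my own guess at a sensible layout, not the one from the lecture.

```python
from pathlib import Path

# Hypothetical layout -- my own assumption, not Paul's exact structure.
FOLDERS = [
    "01_footage/camera_a",
    "02_audio/zoom_f4",
    "03_project_files",
    "04_exports",
    "05_graphics",
]

def make_project(root: str) -> None:
    """Create a consistently named, empty folder tree for a new edit."""
    for folder in FOLDERS:
        Path(root, folder).mkdir(parents=True, exist_ok=True)

if __name__ == "__main__":
    make_project("week2_exercise")
```

The point is less the script itself than the habit: if every project starts from the same skeleton, you never have to hunt for where a clip or a WAV file ended up.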

One new thing I learned from today’s lecture was that you can sync sound in Premiere without having to do it the old-school way with a slate: you can use “Merge Clips”, and it automatically recognises the waveforms from the onboard camera microphone and from the microphone we recorded into the Zoom F4, then creates a new merged clip with the two sound sources synced, so you can have a mix of both. I thought that was really neat, as it would have saved a lot of time syncing sound. However (call me old), I would still sync sound with a slate or a clap, as that is an idiot-proof method, and sometimes the computer might stuff things up without you knowing; midway through an edit you might realise your audio is not synced with the visuals, or that you synced the wrong audio track at the very beginning.
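Premiere doesn’t say exactly how it matches the waveforms, but the general idea behind this kind of sync is cross-correlation: slide one recording against the other and find the offset where they line up best. A rough sketch of that idea in Python is below, assuming two mono-ish WAV files at the same sample rate (the file names are made up, and a real NLE is far more robust to noise, drift and length mismatches than this).

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def find_offset(camera_wav: str, recorder_wav: str) -> float:
    """Rough estimate of how many seconds later the external recording
    starts relative to the camera clip, by lining up their waveforms."""
    sr_cam, cam = wavfile.read(camera_wav)
    sr_rec, rec = wavfile.read(recorder_wav)
    assert sr_cam == sr_rec, "sketch assumes matching sample rates"

    # Collapse both tracks to mono float signals.
    cam = np.atleast_2d(cam.T).mean(axis=0).astype(np.float64)
    rec = np.atleast_2d(rec.T).mean(axis=0).astype(np.float64)

    # The cross-correlation peak gives the shift (in samples) that best
    # lines the recorder track up with the camera's scratch audio.
    corr = correlate(cam, rec, mode="full", method="fft")
    lag = int(np.argmax(corr)) - (len(rec) - 1)

    # Positive lag: nudge the recorder clip later by this many seconds.
    return lag / sr_cam

# e.g. print(find_offset("camera_scratch.wav", "zoom_f4_take1.wav"))
```

Seeing it spelled out like this also explains why the automatic sync can get it wrong: if the camera scratch track is very noisy or the takes overlap ambiguously, the “best” peak might not be the right one, which is exactly why I’d still want the slate as a backup.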

We split back into the groups we worked with last week and started editing the footage we shot previously. I decided to explore the effect of taking the sounds we recorded earlier in week #2 and laying them over the visuals we shot later in the week. As discussed in my earlier post, I juxtaposed the vision of walking barefoot on metal steps with the sound of walking in rubber-soled shoes. I manually synced the sound to the steps one by one, and the effect was quite intriguing: it plays with my mind, as if the subject’s feet were made of something hard, or the stairs were made of a different material instead of metal. It was an interesting exercise. The next scene was the subject walking, again barefoot, along a carpeted corridor. I decided to lay in the sounds of running water and of my buddy stepping on a pile of leaves. This made the scene look and sound as if the carpet were wet, and the footsteps on leaves gave a very soft timbre instead of the usual hard, clacky sound you get from hard-soled shoes on concrete.

This exercise and experiment made me realise how much we concentrate on capturing the most perfect, correct and true sound we can to represent what happens in the vision, when messing around with different sounds can also “bluff” our minds into thinking that not everything in the vision is what it seems. Unfortunately, because of the time we spent discussing shots and how to shoot different ideas, plus some unforeseen circumstances, we were left with little time for the actual shooting, and the entire video added up to only 13 seconds.