Prior to this lecture I had little knowledge of D.I.Y. electronics, but it has always piqued my interest because it would be a great skill to apply to my work as an artist. In this lecture, we were given a 9V battery, crocodile clips, breadboards, and loudspeakers. I will focus on my work with the loudspeakers. We were given loudspeakers in various sizes, and the size affects the sound when the speaker is manipulated with two paperclips placed on top of it. Here is a clip below of my classmate Rysia Kaczmar creating some speaker noise.
If you manage to position the paperclips correctly, the speaker starts producing harsh feedback, which changes as you move the paperclips around. You can also change the sound by putting materials on top of the speaker, e.g. sand. As an artist who works heavily in noise, I found this fascinating and something I am very keen to explore in the future. It could be used in a live improvisational set-up: different sized loudspeakers give different tones and feedback, so using a range of speakers in performance could be very interesting, especially if they were mic'd up into an FX unit, turning this into a musique concrète noise performance.
In our lecture, we learnt about Pure Data (often referred to as PD) and how to use it. PD is an open-source piece of software that can be used in multiple ways, from recording and generating sound to controlling lighting rigs and animation. There are different versions of Pure Data with varying levels of user-friendliness. There are five 'main flavours' of PD: vanilla, ceammc, L2Ork, purr data, and plug data. Each does something unique, and which one you choose will depend on what you need and how well you know PD. I have only used vanilla, as it is the simplest for beginners.
We learnt how to build oscillators in PD, which showed me just how much you can do with the software. You can build as many oscillators as you want, and with enough research and dedication you could build a whole synthesiser. Before this lecture, I didn't know this was possible. We were also taught how to use objects, which are the building blocks you use to make things in PD. We also learnt that the tilde (~) is very important in PD, as it marks an object as working at signal rate. For example, you cannot just type the word dac (digital-to-analogue converter); it will not work unless you put a tilde after it, so it should look like this: dac~.
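To make that idea a bit more concrete, here is a rough sketch in Python of what a simple patch like [osc~ 440] connected to [dac~] is computing: a 440 Hz sine wave sent to the audio output. This is only an illustration of the principle, not Pd itself; it assumes numpy is installed, and it renders a few seconds of the tone to a WAV file (the filename is just an example) rather than playing it live.

# A rough sketch of what an [osc~ 440] -> [dac~] chain computes:
# a 440 Hz sine wave. Pure Data does this in real time; here we just
# render a few seconds to a WAV file.
import wave
import numpy as np

SR = 44100          # sample rate in Hz
FREQ = 440.0        # oscillator frequency, like [osc~ 440]
DURATION = 3.0      # seconds of audio to render

t = np.arange(int(SR * DURATION)) / SR
signal = 0.2 * np.sin(2 * np.pi * FREQ * t)   # keep the level modest

# Convert to 16-bit integers and write a mono WAV file
pcm = (signal * 32767).astype(np.int16)
with wave.open("sine_440.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())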
While using PD I realised how many possibilities it has, so when I went home I installed some pre-made PD patches to experiment with. I installed noise and drone generators, and after playing with them I decided to find out how they were made by opening the patches and looking at how they were built. I found them hard to understand at first, but this is something I want to learn more about in the future.
Mixing is one of the most crucial stages of recording. This is where your sound starts to take its final shape: where you choose how loud sounds are, when to bring sounds in and out, and what direction the sound will come from (panning). Originally I planned to mix The Alphabet in 7.1.4 (Dolby Atmos). I did some research on Dolby Atmos and thought it was a great idea. This decision was made in week 5, just over the halfway mark. At that point I was already making the sound for The Alphabet and thought I would have enough time to mix in Dolby Atmos. But as the weeks went on, the sound was still evolving and growing and not nearing the mix stage, and the longer that went on, the less time I'd have left to mix in Dolby Atmos.
I decided to scrap mixing in Dolby Atmos due to time constraints. It is something I'd be very interested to revisit, as I really enjoyed researching it and learning what Dolby Atmos is capable of. I had also never mixed in surround sound before, let alone something like Dolby Atmos, so I think choosing to scrap it was a smart decision.
Once I scrapped mixing in Dolby Atmos, I decided to mix in 5.1 surround sound instead.
(5.1 Surround Diagram)
5.1 surround sound is one of the most common and accessible surround formats you will find, used both in professional workspaces and in home sound systems. I decided to use 5.1 because it was my first time mixing in surround sound, and it was a good starting place: basic enough for me to learn and mix in a short amount of time. I ended up mixing in the performance lab at LCC, over two 3-hour sessions. I originally planned to mix in the composition room, but it was consistently booked up.
The sound art technicians and I patched the performance lab sound system to be configured for 5.1. I wrote this patch down so I could remember it, patch it myself, and teach myself how to use the room. Here is the document I wrote the patch down in. I attempted to use Pro Tools to mix in 5.1, but after lots of technical difficulties and help from the technicians, we decided Logic would be the best option. Considering I only had about six hours over two days to mix, I didn't have a lot of time to troubleshoot. Luckily, Logic makes mixing in surround sound very simple, which was a relief.
I started experimenting with how I would pan my sounds. In The Alphabet there are specific sounds that are intentionally placed in certain positions: the sounds from the girl and the figure that you hear in the middle of the piece, the gasps from the girl and the figure, and the scream of the figure dying. These are panned to specific places so that the sound of each of these figures consistently comes from the same position. It shows that there is intention behind where these sounds are coming from, and it works well.
Mixing in 5.1 was a very informative experience. The first session was spent experimenting with the 5.1 set-up and what it is capable of, and during the second session I started auto-panning sounds around the speakers and automating the mix. I made a big mistake once I bounced my mix: I did not realise that a 5.1 mix has to be bounced differently from a normal stereo mix. The other mistake I made was that instead of sending my original stems to five different buses, I panned and mixed them all individually, which is also incorrect. These mistakes, combined with having no studio time left booked to correct them, meant I had to submit my stereo mix instead. Even though it is unfortunate that I could not submit my 5.1 mix, it was still a very informative experience, and I will take what I learnt from it and push on with it in the future.
My stereo mix was mixed on my monitors in my flat in Ableton Live 11. This mix is what I've been working on since I started the proper score after my demo. I have played it through the composition room, the performance lab, and the small sound room, and made changes where needed, so I feel very confident in its quality. I have also shown people my work and received some great feedback. It was suggested that when she wakes up from the dream I should try to distance the sound from the rest of the piece. I did attempt to do this while still keeping a bit of that dreamlike quality, as I find the ending even more of a nightmare than the actual dream.
I was also told that the scream could be changed and EQ'd to sound like she is trying to scream but cannot. I found this idea great and quite scary, but with the attempts I made using EQ and effects I could not get it to sound how I would like it to, so I left the original scream in.
Very early on in this project, once I had decided what I wanted to create the sound for, I made a demo project where I could experiment with different sounds and concepts to see what works and what does not.
The idea of a demo for me is to have a space to experiment with multiple ideas. I think this is a very important thing to do, especially before starting a long project like this unit.
Before starting the demo I knew I wanted to experiment with drones; creating low, brooding drones has been part of this process since day one. I also experimented with some foley recordings: at the 01:40:00 mark you can hear a scraping sound panned to the right. This is a recording of me scraping my student I.D. on the metal grates near the sound arts rooms. It was recorded with a Zoom H5 recorder, pitched up and left warped. Ableton's warp feature lets you time-stretch audio to the tempo of the project; I personally like to use it as a way to glitch a track and make it trip over itself.
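For anyone curious what this kind of treatment looks like outside of Ableton, here is a small sketch in Python using librosa that pitch-shifts and then time-stretches a recording in a similar spirit. It is only an illustration of the technique, not the exact processing I did, and the filenames are placeholders rather than my actual files.

# Pitch a foley recording up and stretch it out, roughly in the spirit
# of what Ableton's transpose and warp controls do.
# "id_scrape.wav" is a placeholder filename.
import librosa
import soundfile as sf

y, sr = librosa.load("id_scrape.wav", sr=None)     # keep the original sample rate

pitched = librosa.effects.pitch_shift(y, sr=sr, n_steps=7)    # up 7 semitones
stretched = librosa.effects.time_stretch(pitched, rate=0.5)   # twice as long, same pitch

sf.write("id_scrape_processed.wav", stretched, sr)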
Looking back on the demo as I write this (after I mixed in 5.1 surround), it is not good, but it clearly contains ideas, such as drones and vocal manipulation, that carry through into the final mix of The Alphabet. Having a demo project where you can make as many mistakes as you want and experiment as much as you want before creating your proper session is very beneficial for me.
I've decided for this project to analyse films that are about dreams, are dreamlike, or follow a child's point of view. This will be good for my analysis skills, helping me understand the effect a film has on people and how the sound in that film can also affect the viewer.
What films?
I've chosen two films to analyse:
The Wizard Of Oz – (1939)
Mulholland Drive – David Lynch (2001)
The first film I chose was The Wizard Of Oz, which is a classic example of a dreamlike movie. It also comes from a child's perspective, which is very important for me to analyse. Given the film's age, a lot of the effects and the overall look make the whole film feel even more dreamlike. I think this is a crucial film to talk about when discussing dreamlike films.
The second film I chose was Mulholland Drive. Since I am already working on The Alphabet, which was created by Lynch, I thought it would be good to revisit some of his filmography. I re-watched a compilation of his short films, Eraserhead, and Mulholland Drive. I found Mulholland Drive to be a very interesting examination of dreams, people, and the film industry. Analysing this film will be good for both my analysis skills and this project.
You can find my analysis of both these films in my blog titled ‘Dreamlike Film Analysis’.
Blauvelt, C. (2020). David Lynch Has Always Understood That Sound Is Key to Immersion. [online] IndieWire. Available at: https://www.indiewire.com/influencers/twin-peaks-director-david-lynch/.
The Paris Review (2014). David Lynch on Alan Splet. [online] YouTube. Available at: https://www.youtube.com/watch?v=nSkyGRyUIEM.
Hood, B. (2018). David Lynch knows he's not the best dad. [online] Page Six. Available at: https://pagesix.com/2018/06/25/david-lynch-knows-hes-not-the-best-dad.
Eraserhead (1977) is David Lynch's debut feature film. It follows a naive young man called Henry, who lives in an industrial, noisy world. Henry accidentally gets his girlfriend, Mary X, pregnant, and Mary gives birth to a mutated baby. After living with Henry and the baby, Mary can't take it anymore and leaves Henry to take care of the child. Following this, we see Henry deal with sex, fatherhood, adulthood, and mental health. The themes of sex and children are also present in The Alphabet, which Lynch made almost a decade before Eraserhead was released.
The overall sound of Eraserhead is industrial, noisy, and claustrophobic. The sound design was created by David Lynch and Alan Splet. Splet and Lynch sat down and listened through stock sound effects; however, they decided to record the sound from scratch instead.
In an interview, Lynch says “picture dictates sound,” but he also says that it can work the other way round and that “sounds will conjure an image.” Both ideas ring very true in Eraserhead, as the sound design helps the viewer feel more immersed in the film.
Eraserhead had a very low budget of $10,000. Eraserhead started as a student film when Lynch was studying at AFI in Los Angeles. Soon, the budget had run out and Lynch had to gather money in other ways. This happened multiple times, and every time the budget ran out, production would halt until they could afford to start again. Because of this, Lynch and Splet had a very small amount of money to use, so they had a D.I.Y. mindset while creating the sound.
I found Eraserhead's sound design very inspirational while working on The Alphabet. Both the final product and the process Lynch and Splet used to create the sound were interesting and helpful to me, because they had to use whatever was available to them. I took inspiration from this process by using LCC's foley and composition rooms to record vocals and foley and to mix the audio. I also used sounds from around my flat, recording my sink, boiler, and gas pipes with contact microphones and hydrophones.
One recurring sound throughout Eraserhead is the sound of wind, as heard in this scene (1:58 – 2:11). The wind in this movie was recorded by Splet in a village in Scotland called Findhorn. I found the way they take wind, a normal everyday sound, and make it disturbing and overbearing really striking. In The Alphabet, I used wind as the sound of a moving object. Instead of recording natural wind, I decided to create this sound from another source: the Moog Mother 32, a semi-modular analogue synthesiser. The synthesiser has a noise source that creates a wind-like sound. The reason I used the Mother 32 instead of real wind was that I could match the wind to the picture better than if I had done it practically with real wind foley.
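The basic idea behind using a synth noise source as wind is simple: take noise, low-pass filter it so it sounds breathy rather than hissy, and sweep the level slowly to suggest gusts. Below is a small Python sketch of that principle using numpy and scipy; it is only an illustration of the technique, not a recreation of the Mother 32 or of my actual patch, and the output filename is just an example.

# Wind-like sound from filtered noise: white noise, low-passed,
# with slow "gust" movement in the level.
import wave
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100
DURATION = 10.0
n = int(SR * DURATION)

noise = np.random.uniform(-1.0, 1.0, n)

# Low-pass the noise so it sounds breathy rather than hissy
b, a = butter(2, 800, btype="low", fs=SR)
wind = lfilter(b, a, noise)

# Slow amplitude "gusts": a couple of very low-frequency sine waves summed
t = np.arange(n) / SR
gusts = 0.5 + 0.25 * np.sin(2 * np.pi * 0.10 * t) + 0.25 * np.sin(2 * np.pi * 0.23 * t)
wind = wind * gusts

wind = 0.8 * wind / np.max(np.abs(wind))       # normalise
pcm = (wind * 32767).astype(np.int16)
with wave.open("wind_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())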
While conceptualising the sound for The Alphabet, the most important part for me was sonically creating a child's nightmare. I thought scavenging charity shops for children's toys and instruments would be an interesting way to create that sound. I walked through some charity shops in Woolworth after a university lecture, unsure what exactly I was looking for. The final shop I went into was full of children's toys that spoke or played songs. What caught my eye were two cassette tapes of children's nursery rhymes. They were priced at 50p each, so I bought them and started thinking about how they could be used.
During the first scene of The Alphabet, there is a collage of letters, coloured polka dots, and nature imagery. This is the only scene in the film that doesn't visually look like it belongs in a horror film. I thought this would be a perfect opportunity to use my cassette player.
One of the cassettes was broken but one was fully intact. The cassette featured nursery rhyme classics like ‘London Bridge Is Falling Down’, ‘Twinkle Twinkle Little Star’, and ‘My Fair Lady’, the latter being the song sampled for the piece.
I recorded the tapes into Ableton and started experimenting with them by using the tape machine as an instrument. My tape machine has a pause button which, if you hold it down slightly, will manipulate the tape and speed up the song. You can hear what that sounds like unedited here.
I was left with this recording and thought, "How could I make this scary?". I transposed the track down, which started to create an eerie atmosphere. I used a plugin called Backmask, which reverses your audio and cuts between the reversed and non-reversed versions, giving the track an abrupt, complex texture that really enhanced it. I also used MISHBY, a broken-tape-machine plugin that specialises in distorting, detuning, and warping tracks; this gave the track a bit more edge. You can hear the final edit of this here.
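As a rough illustration of those two treatments, the downward transposition and the cutting between reversed and non-reversed audio, here is a small Python sketch using librosa and numpy. It is a DIY stand-in for what Backmask does rather than the plugin itself, and the filenames are placeholders.

# Transpose a recording down an octave, then randomly flip short
# chunks backwards, roughly in the spirit of the Backmask treatment.
# "nursery_tape.wav" is a placeholder filename.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("nursery_tape.wav", sr=None)

# Transpose the whole recording down an octave for an eerie feel
dark = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)

# Chop into half-second chunks and reverse roughly half of them
chunk = int(sr * 0.5)
rng = np.random.default_rng(0)
pieces = []
for start in range(0, len(dark), chunk):
    piece = dark[start:start + chunk]
    if rng.random() < 0.5:
        piece = piece[::-1]
    pieces.append(piece)

sf.write("nursery_tape_dark.wav", np.concatenate(pieces), sr)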
During the end credits of The Alphabet you can hear more tape manipulation: I resampled the track you hear earlier in the film and transposed it down to make it even more warped and dark. I thought this would be an interesting motif, as the tape manipulation is one of the first sounds you hear and also the last.
Mulholland Drive is David Lynch's ninth feature film. We follow Rita, a survivor of a car crash who is stricken with amnesia after the accident. She finds herself in the house of aspiring actress Betty's aunt, where Betty finds her and agrees to help her recover her identity.
At heart, Mulholland Drive is a critique of the film industry, and in particular of how women are treated within it: how the industry will make or break a woman's career depending on whether she does something she is uncomfortable with, and the shady behind-the-scenes world of controlling producers and egotistical directors.
The film is also incredibly surreal and dreamlike. Mulholland Drive constantly has a dreamlike aura to it; the way the film is shot and edited, and how the music and sound design are used, blend together to create a nightmarish tone.
The film's composer is Angelo Badalamenti, a frequent collaborator of David Lynch who worked on many of his other projects, such as Twin Peaks and Blue Velvet. Badalamenti creates a score full of deep, brooding drones; the track Mr. Roque Betty's Theme is a great example of this.
The track is built around a constant drone with sounds drifting in and out. The main drone sounds airy, engulfed in a beautiful, long reverb. I found this really interesting and important to my work on The Alphabet, and I incorporated heavily reverbed drones into my piece; drones like these can be found throughout most of Lynch's films. The piece then breaks into a soaring, beautiful synth-string passage, which makes for a very emotional piece of music.
Film 2 – The Wizard Of Oz
The Wizard Of Oz is the 1939 adaptation of the book of the same name by Lyman Frank Baum. The film starts with our main protagonist Dorothy (Judy Garland) living a normal 'black and white' life in Kansas; she has a hard-working family and a dog called Toto. After the attempted euthanasia of her dog, Dorothy's house is swept up in a tornado, and when the house lands she walks outside into the land of Oz.
The Wizard Of Oz is very much a surreal film: the sets, characters, music, and special effects all feel surreal and dreamlike. The film also feels haunted; everything that could have gone wrong on a film set went wrong on The Wizard Of Oz. Directors came and went throughout filming, leaving the finished work directed by four different directors. Actress Margaret Hamilton was badly burned by a malfunction on set, and asbestos was used for some of the special effects, e.g. the snow. All of this makes watching the film feel very uneasy and uncomfortable at points, especially when you can point out where these things happened.
I've always been mildly creeped out by The Wizard Of Oz, but I believe it to be a surrealist masterpiece. The way the characters are introduced felt very appropriate and works to create the dreamlike feeling the film is going for. The film is very colourful, in particular with the colours green, yellow, and blue. The way the colours are very bright and still, and are used so heavily, makes the film feel hypnotic.
The Wizard Of Oz has apparently also been a big inspiration for David Lynch; recently a film called Lynch/Oz has been doing the film festival circuit to great reviews. I cannot find much information on David Lynch's relationship with The Wizard Of Oz, but it's clear how the dreamlike feeling of the film could have inspired him as a child and even as an adult.
I collaborated with my friend Rysia Kaczmar to work on vocals for this project. Rysia is my bandmate and a very versatile vocalist. We booked the composition and foley rooms at LCC to record lots of vocal tracks for the piece. I showed Rysia The Alphabet and explained the concept of a child's nightmare; she liked the concept, and I started working on a script for her.
I wrote a script for Rysia consisting of lines from The Alphabet I wanted to keep, like “Please remember you are dealing with the human form” and the children reciting their ABCs. Rysia is a very performative and intense vocalist and is very good at improvisation, which I think worked well for the piece because her expressive, intense style adds to the eerie atmosphere of the short.
The script I wrote for Rysia was done in my notes, which I really regret. I later went on to write a proper script with timestamps marking where the vocal lines should go, and I wish I had done that before we recorded; now I know to do it beforehand. The reason I kept it loose in my notes was that Rysia is a very improvisational and versatile vocalist, so I thought the fewer instructions the better. I agree with that to an extent, but I do wish I had written the script earlier.
The composition room gave us some technical difficulties when we started, with no sound coming out, but once that was troubleshot and solved we went straight into recording. I recorded into Ableton, as I was creating my score in Ableton, though I plan to mix in Pro Tools rather than Ableton. All the lines were recorded dry, with no effects on the vocals, as that would all be done in post by me.
This recording session taught me how important vocal and foley recording is to creating the overall sound of a film. Having this hands-on experience with recording was very beneficial to me. I learnt a lot about setting up microphones, routing them to the computer, troubleshooting issues, monitoring vocals, giving Rysia vocal direction, recording vocal foley, and working in a space like this.
This last paragraph was written just after the final edit was done before mixing.
The vocal sounds I was left with after this session were incredible and very important to the sound of the film. After some experimentation with effects and techniques like transposing, I found how the vocals should sound. I did lots of manipulation with Rysia's vocals: I used them for the gasp of the figure at the 02:25:26 mark, and Rysia's imitation of a baby crying was used at the 01:00:00 mark. The crying sound was transposed down 12 semitones and run through a plug-in called MISHBY, which is a tape machine emulator that specialises in detuning, distorting, and warping sounds. I found it worked really well on the baby crying, and it is also used as a distortion on most of Rysia's vocal tracks.
The alphabet recital at the end of The Alphabet was performed excellently by Rysia; it synced up perfectly with no editing, and I only edited out the breaths between words. These vocals are run through a Soviet wire recorder emulator called Wires. This gives the vocals a crackly, distorted sound and also detunes Rysia's vocals slightly, creating more of an uneasy atmosphere.
Here are some of Rysia's vocals dry, then with effects.