Audio post production is much more than simply adjusting volume levels and mixing tracks. Transforming production sound into a powerful soundtrack requires time, technical skill, creative vision and execution, as well as a full set of professional audio tools. The good news is that DaVinci Resolve 15 includes the tools to create a professional soundtrack from start to finish. Before you dive into the following audio chapters, it’s a good idea to understand the audio post production process and workflow.
Keep in mind that many elements affect the workflow you’ll use: the type of project, budget, format, length, deliverables, and distribution methods often dictate the size of the post audio team, the amount of time, and the tools available to get the job done. For this introduction, let’s focus on the fundamental post production audio processes necessary for both narrative and documentary style projects. The following pages explain the different jobs and stages in audio post production.
What is Audio Post Production?
Let’s start with a few basic terms. Audio post production refers to the process of making a soundtrack for moving images. Notice the use of “moving images,” which encompasses all projects great and small from movie theaters to streaming videos and everything in between. A soundtrack is simply the audio that accompanies a finished project.
How your audience experiences the finished project is greatly influenced by the soundtrack. In fact, a well-executed soundtrack may go unnoticed for hours while the audience is immersed in the show. On the other hand, it takes only a few seconds of an amateurish or sloppy soundtrack to pull the audience not only out of the story, but possibly out of the theater or off to a different channel.
If you’ve ever recorded or watched a home movie, especially one shot at an exciting public place such as the beach or an amusement park, then you have firsthand experience with some of the inherent challenges in both recording and listening to natural production sound. All of those excess environmental sounds and distractions create the need for audio post production to transform raw sound into a successful soundtrack with clear dialogue, realistic effects, and lush acoustic soundscapes wrapped in an emotionally powerful score.
What is the Audio Post Production Workflow?
Since the advent of synced sound in motion pictures, the first rule of audio post has been, “Never start working on audio until the picture is locked.” Locked suggests that there will be no more changes to the picture edit from this point forward.
In reality, changes always happen. Why does this matter? Because soundtracks need to maintain a frame-accurate relationship with the picture to stay in sync. If they are off by as little as one or two frames, the sight and sound will be noticeably out of sync, a situation that is distracting, unprofessional, and likely to lose your audience.
In a traditional post production workflow, changes to locked picture have a cascading snowball effect on audio post. But when you’re working in DaVinci Resolve, the only professional editing software that includes a full digital audio workstation (DAW), you can update your project immediately and efficiently no matter what editing changes are made. This gives you tremendous creative flexibility if you are working on your own, because you can go back and forth between picture editing, audio work, and color correction as often as needed.
Now let’s break down the different phases and jobs in a traditional audio post production workflow.
Spotting the Soundtrack
A spotting session is when the supervising sound editor and the sound designer
(often the same person) sit down with the director, editor and composer to look for soundtrack elements that need to be added, fixed or re-recorded. Notes from a spotting session are combined into a spotting list that details music cues, important sound effects, dialogue fixes, and additional audio notes.
Production Dialogue Editing
Dialogue editing is the tedious, behind-the-scenes task of splitting dialogue into separate tracks, removing unwanted sounds, replacing individual words or phrases for clarity, and balancing separate clip audio levels for consistency. Why go to all that trouble?
Because spoken words are as important to a soundtrack as the lead vocals in a hit song. Keep in mind that dialogue editors are responsible for all spoken words, including dialogue, narration, and voiceover.
Production dialogue editing starts with creating a separate track for each character, then moving each character’s dialogue clips onto that track. This crucial step is necessary because each voice in a production is different and, therefore, needs to be processed individually with volume normalization, equalization, and effects specific to that voice.
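The normalization step mentioned above can be sketched in a few lines of code. This is a minimal illustration of peak normalization under stated assumptions, not how Resolve implements it; the function name, target level, and sample values are hypothetical.

```python
def normalize_peak(samples, target_db=-3.0):
    """Scale a clip so its loudest sample peaks at target_db dBFS.

    samples: sequence of floats in the range [-1.0, 1.0].
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)              # silent clip: nothing to scale
    target_amplitude = 10 ** (target_db / 20)
    gain = target_amplitude / peak
    return [s * gain for s in samples]

# A quiet dialogue clip peaking at 0.2 (about -14 dBFS) is raised to -3 dBFS.
quiet = [0.1, -0.2, 0.05]
loud = normalize_peak(quiet, target_db=-3.0)
```

In practice each character’s track would get its own target and its own EQ and effects chain, which is exactly why the clips are separated by voice first.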
Next, the dialogue editor cleans up the tracks and removes any unwanted human sounds (like tongue clicks and lip smacks). If a distracting sound can be physically cut out, this is the time to do it. Plug-ins and effects can help eliminate unwanted clicks, pops, and noise automatically; but be aware that any processing you add to a clip can affect the voice as well.
After the dialogue is cleaned up, the volume levels are balanced to be consistent on each dialogue track. If dialogue can’t be used because it is damaged, noisy, or unclear, it must be replaced with audio from other takes or re-recorded. The process of re-recording production dialogue is called automatic dialogue replacement (ADR) or looping.
Sound Design and Sound Effects Editing
Once the dialogue editing is finished, the creative process begins! The sound designer’s creative input to the soundtrack is similar to that of the director of photography for the picture. Sound designers are responsible for the overall acoustic experience for the audience.
They also oversee the many individual tracks of sound and music that comprise the soundtrack. These audio tracks include dialogue, ambience, hard sound effects, and foley sounds.
Not only do sound designers determine the aural illusion and mood of the soundtrack, they also create, record, and enhance sound elements that only exist in their imaginations. After all, many projects need sound effects that don’t exist in the real world.
Where do you go to record dragons, aliens, or zombies?
Those sounds must be created or designed from scratch using a combination of real sounds, simulated sounds, and a lot of processing and effects.
While the sound designer determines the depth and detail of the sound effects tracks, the sound effects editor places each sound effect in corresponding tracks. Sound effects fall into four main categories:
Natural sound, also known as Nat sound or production sound, is anything other than dialogue recorded by a microphone on location during the shoot.
Ambience, or ambient sound, is the realistic conglomerate of sounds that establish a location, such as waves rhythmically crashing and sea birds chattering for remote seaside ambience.
Hard sound effects are so named because they need to be physically synced to picture and are necessary for the story or scene.
Foley sound consists of any character-driven sound effects caused by characters interacting with their onscreen environments. Foley sounds are named after Jack Foley, a legendary sound editor at Universal Studios, who originally developed the technique of recording reenactments on a stage. Foley sound replaces the original production audio for everything from fist fights to footsteps and clothing movement.
Music Editing
Music editing involves placing different music elements into the soundtrack to enhance the mood or story.
All soundtrack music falls into one of two categories:
Music occurring within the scene that the characters can hear is called source or diegetic music; non-diegetic music is added in post for the benefit of the audience, such as the background score.
Diegetic music needs special attention to make sure that the volume levels, placement, effects and presence fit the context of the scene.
Non-diegetic music added in post production for emotional effect or impact includes the score, stingers, and stabs. Stingers are singular notes or chords that build tension and suspense. Stabs are quick bursts of music that work like an exclamation point to draw attention to something or someone in the story or narration.
Enhancing and Sweetening Tracks
Once the dialogue tracks are edited and the sound effects and music are added, it’s time to make subtle improvements to the sound of each track so that it works in context with the other tracks in the mix.
The tools used to improve the sound in a track are similar in many ways to the tools colorists use to improve individual shots within a scene.
For all intents and purposes this process could be called audio correction. You manipulate four fundamental elements to enhance or “sweeten” audio tracks so they work together as intended in the final mix: volume level, dynamics, equalization and pan.
Volume controls adjust the loudness of a track on a decibel scale. Volume is similar to luminance (brightness): both are governed by strict broadcast standards, and both are usually the first thing the audience notices in each scene. Volume levels can be adjusted on each clip, each track, and the main output, just as luminance (black and white levels) can be adjusted on individual clips, scenes, and output.
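The decibel scale mentioned above is logarithmic: a change of about 6 dB doubles or halves the signal amplitude. The conversion can be sketched as follows (the function names are mine for illustration, not part of any Resolve API):

```python
import math

def db_to_gain(db):
    """Convert a fader change in dB to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain):
    """Convert a linear amplitude ratio back to decibels."""
    return 20 * math.log10(gain)

# Pulling a fader down 6 dB roughly halves the amplitude.
half = db_to_gain(-6.0)        # about 0.501
# Doubling the amplitude is a boost of about 6 dB.
boost = gain_to_db(2.0)        # about 6.02
```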
Dynamics controls adjust the dynamic range, which is the difference between the loudest and quietest peaks in a track. A track’s dynamic range is very similar to contrast within a shot.
A track with a high dynamic range has very loud and quiet elements within the track, such as a character whispering and then screaming in the same scene.
A low dynamic range would be fairly flat, such as a commercial voiceover in which the volume level of the talent is very even from start to finish.
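Under those definitions, a track’s dynamic range can be expressed as the decibel difference between its loudest and quietest peaks. A rough sketch (the sample amplitudes are hypothetical):

```python
import math

def dynamic_range_db(loud_peak, quiet_peak):
    """Decibel difference between the loudest and quietest peaks.

    Both arguments are linear amplitudes in the range (0, 1].
    """
    return 20 * math.log10(loud_peak / quiet_peak)

# A scream peaking at 0.9 against a whisper peaking at 0.01:
# roughly 39 dB of range, a highly dynamic track.
wide = dynamic_range_db(0.9, 0.01)

# A steady commercial voiceover varies far less:
flat = dynamic_range_db(0.5, 0.4)
```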
Pan controls place the sound of a track within a panoramic stereo field. These controls are used to compose the acoustic experience just as a cinematographer composes the visuals of a shot. Tracks can be precisely located anywhere from left to right to sound as if they come from an offscreen source, or somewhere within the frame.
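A common way to implement stereo placement is the constant-power pan law, which keeps perceived loudness steady as a sound moves across the field. The sketch below illustrates the general technique under the usual convention (pan = -1 is hard left, 0 is center, +1 is hard right); it is not Fairlight’s exact internal pan law.

```python
import math

def constant_power_pan(pan):
    """Return (left_gain, right_gain) for pan in [-1.0, 1.0]."""
    theta = (pan + 1) * math.pi / 4    # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

left, right = constant_power_pan(0.0)   # center: both about 0.707 (-3 dB)
hard_left = constant_power_pan(-1.0)    # (1.0, 0.0)
```

Because cos² + sin² = 1, the total power stays constant at every pan position, so a sound sweeping from left to right does not appear to dip in loudness at the center.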
Equalization (EQ) controls manipulate specific frequencies to enhance the overall sound, and are just like working with color, saturation, and hue in color correction.
For example, the human voice is built on a fundamental frequency shared by millions of people; the additional frequencies add tonal qualities that “color” the voice and make it unique and recognizable.
The primary function of equalization is to attenuate the frequencies that detract from the voice and boost the frequencies that flatter it, improving the overall sound.
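Boosting or cutting a specific band is typically done with a peaking-EQ filter. The sketch below uses the well-known biquad formulas from Robert Bristow-Johnson’s Audio EQ Cookbook; it illustrates the math behind an EQ band, not Fairlight’s implementation.

```python
import math
import cmath

def peaking_eq(sample_rate, center_hz, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients (b, a) that boost or cut a band."""
    amp = 10 ** (gain_db / 40)              # square root of the linear gain
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return b, a

def magnitude_db(b, a, sample_rate, freq_hz):
    """Filter gain in dB at a single frequency."""
    z = cmath.exp(2j * math.pi * freq_hz / sample_rate)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return 20 * math.log10(abs(num / den))

# Boost a 1 kHz band by 6 dB at a 48 kHz sample rate:
b, a = peaking_eq(48000, 1000, 6.0)
```

Evaluating the magnitude response confirms the behavior: the filter applies the full +6 dB at the center frequency and leaves frequencies far from the band essentially untouched.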
Mixing and Mastering
The last step of audio post is mixing the tracks and mastering the output. Assuming that all of the other steps were completed prior to the mix, the process is fairly straightforward.
The goal of mixing and mastering is to balance the levels coming from each track so they sound good as a whole. This is accomplished by making subtle changes to the track levels, or combining similar tracks into submixes to make them easier to control with one fader.
The final master needs to sound great and meet delivery standards for loudness.
Fortunately, the Fairlight page includes everything you need to mix your tracks, including loudness meters to make sure the levels are right on target.
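Broadcast loudness standards are measured in LUFS with gated meters (ITU-R BS.1770), which takes more than a few lines of code, but the underlying idea of averaging signal power can be sketched with a simple RMS level in dBFS. This is an illustration only; a real delivery check would use a compliant loudness meter.

```python
import math

def rms_dbfs(samples):
    """Average (RMS) level of a clip in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A full-scale sine wave averages about -3 dBFS.
sine = [math.sin(2 * math.pi * n / 100) for n in range(100)]
level = rms_dbfs(sine)
```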
Now that you understand some of the technical steps and creative tools that are essential in an audio post production workflow, you can dive in to the upcoming lessons and start putting them to use on your own projects!