Thursday, October 28, 2010

Structure Free is an instrument plug-in that does not require a MIDI controller to function. The default setting is a sine wave patch. To load a patch, click on the browser tab (Applications > Digidesign > Pro Tools Creative Collection > Structure Free). There are six smart knobs and a master fader, with useful parameters pre-assigned at the factory: Chorus Mix, Reverb Mix, Cutoff (applies an EQ filter), Resonance (controls the resonance of the cutoff), Attack, Release, and Master control. Some keys loop patches and create variations of the groove; these key switches are velocity sensitive, and they show up green when in use and blue when not. You can change the pitch and tempo of the drum kits. Key switches do not produce pitch; they send information that triggers a variation in the loop of the patch (see the sketch at the end of this entry). Pull up a loop, note where the key switches are, and while recording, play the loop with your left hand while hitting the key switches with your right. If you bring up the MIDI edit window and pull the key switches out of their range, they become pitches. The patch module contains the patch list, where you can create, MIDI-assign, mix, select, route, and group patches. By clicking on a patch, you have the option to load a new patch, add a patch, duplicate it, or remove one or all patches.
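As a side note, it helps to see what a key switch actually transmits. Here is a minimal sketch in Python using the mido library (assuming it is installed; the note numbers are hypothetical, chosen so the key switch sits below a typical playable range):

import mido

# A key switch is an ordinary note-on whose note number falls outside
# the patch's playable range; the sampler reads it as "pick a loop
# variation" rather than "play this pitch".
keyswitch = mido.Message('note_on', note=24, velocity=100)  # low C: selects a variation
played = mido.Message('note_on', note=60, velocity=100)     # middle C: actually sounds

# Both messages carry the same kind of data. Drag the note at 24 up into
# the playable range in the MIDI editor and it produces a pitch instead.
print(keyswitch)
print(played)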

Friday, October 22, 2010

Vacuum and Boom

Vacuum is a monophonic digital synthesizer that generates tones. It is monophonic because it can only play one MIDI note at a time; polyphonic synths allow you to play chords. The most important parts of Vacuum are the oscillators and the envelopes, which is where the majority of the sound is shaped. The two top-left modules are VTOs 1 and 2 (vacuum tube oscillators), where the sound begins to generate. Range knobs let you change which overtones are emphasized. In WIDE mode, the fine knob has a five-octave range; when WIDE is off, it has a range of plus or minus seven semitones. LO mode turns VTO 2 into a low frequency oscillator, or LFO. The VTOs also switch between blending octaves; there are subtle changes when switching through these modes, and they can create thick, detuned sounds. There are different wave shapes to choose from on the VTOs: triangle, sawtooth, and the pulse waves PW0 and PW50. The Envelope to Shape knob controls the modulation of the current VTO shape via envelope 1. A mixer next to the VTOs blends the two together. The ring modulator takes a variable amount of VTO 1 and 2, heterodynes the two waves, and outputs the sum and the difference of their frequencies (see the sketch at the end of this entry). There are high-pass filters and low-pass filters, and both do exactly what they sound like: a high-pass filter lets all of the high frequencies pass through while keeping out the lows, and a low-pass filter does the opposite, keeping the highs out and letting the lows through. These filters have a slope that sets the curve, which changes how many dB per octave are attenuated; a steeper slope on a high-pass filter cuts out more low end. They have cutoffs as well, which determine where the frequencies roll off within the audible spectrum, 20 Hz to 20 kHz. The resonance setting affects the filter resonance, and the saturation knob distorts the resonant frequency. The envelopes change the shape of the sound over time. The attack is the amount of time it takes for a sound to reach full volume; the decay is where the sound decreases in amplitude; the sustain is the level the sound holds after the attack; and the release is how the sound dies away once the note ends. Some sound may persist past the release, but that is reverberation, and its length and character vary with the size and type of room.

In the Boom drum machine plug-in, there are ten different kits and ten channels, each of which can hold a different piece of the kit. Each channel has options to change pan, level, tone, and decay, plus solo and mute buttons. The matrix on the left provides a visual overview of the currently selected drum pattern; each row corresponds to a channel. Clicking the little red dots cycles them brighter and darker: off is like a rest, bright red is a loud hit, and the dimmer dots are softer hits. This lets you put feel into a beat by shaping its dynamics. There are sixteen steps available to make a beat, which works out to one bar of 16th notes in 4/4. You can copy a pattern to any of the sixteen pattern slots, or delete any pattern.
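The heterodyning described above falls straight out of a trigonometric identity: multiplying two sine waves produces their sum and difference frequencies. A minimal Python/NumPy sketch (the oscillator frequencies are arbitrary stand-ins for VTO 1 and VTO 2):

import numpy as np

fs = 44100                       # sample rate in Hz
t = np.arange(fs) / fs           # one second of time values

f1, f2 = 440.0, 300.0            # stand-in frequencies for VTO 1 and VTO 2
vto1 = np.sin(2 * np.pi * f1 * t)
vto2 = np.sin(2 * np.pi * f2 * t)

# Ring modulation is multiplication. By the product-to-sum identity,
# sin(a) * sin(b) = 0.5 * (cos(a - b) - cos(a + b)), so the output holds
# only the difference (140 Hz) and the sum (740 Hz); neither original
# pitch survives, which is why ring mod sounds so metallic.
ring = vto1 * vto2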

Thursday, October 14, 2010

A few more things to know about Pro Tools LE: markers, crash-course automation tips, and MIDI control assignments.

Markers make your workflow more efficient and help keep your sessions visually organized. Where it says Markers at the top of your tracks in the edit window, click the triangle "∇" icon to make the ruler appear across the screen. This way you can see the markers that you place down for the different scenes in your movie scoring project. Markers make it easy to see where certain actions or sound effects need to happen, and they are also useful for other editing notes or reminders. You can enter them while your movie plays by pressing ENTER+ENTER ...not return+return!!! If you are fast, you can type the name for the marker as the movie plays. The key command "decimal point + the marker # on the numeric keypad + decimal point" recalls a marker, and pressing Command + numeric keypad 5 brings up the Memory Locations window, where you can view all of your markers in a column list and set the counter options for them. You can select to view Bars/Beats, Min/Sec, or Samples, and a sub-counter gives a secondary view with any of those same three options. There is a comments section for typing notes, and a sort-by-time option that sorts the markers in the list by time.

Automation is fantastic but a slippery slope when it comes to mixing. It should really be one of the last things you do in a mix, the main issue being that it can turn into a huge waste of time. Automation is a really cool way to change, manipulate, or affect every single available parameter on the inserts, sends, and dynamics and effects plug-ins. Say we have a delay on a track and we want it in one part of a project but not in another. First, in the mix window, click on the processor you want to automate. There is an AUTO button at the top of the window; clicking it brings up a list of all of the parameters on one side. If you have multiple inserts and sends, they appear in the middle, and you can select which one you want to enable. Select the parameter you want to automate and click Add. This way is good for learning the names of the parameters on the plug-ins you are automating. Once you get to know the names, there is a quicker way to enable these parameters: holding down all three modifier keys (control+option+command) and clicking directly on the little green light (dim when automation is disabled) by the parameter enables it for automation. Also, control-click shows a menu where you can choose "plug-in automation enable". In the edit window, go to where the flashing fader is on the track you want to affect, and click on the track-view selector that Pro Tools defaults to WAVEFORM. There will be options for volume, pan, mute, and any of the parameters you enabled. A fader is static and does not move by itself in a mix, so if we want a part turned down every time we play a song, we automate the volume of the track. Be in smart tool mode to manipulate automation. When you press control, the pencil tool allows you to draw lines and different user-definable shapes; a drop-down menu by the pencil tool lets you choose from freehand, triangle, square, or random shapes. Parabolic and s-curve are two shapes that I believe only Pro Tools HD will let you use. By using automation, you can change countless parameters and create some really cool effects without having to use too many tracks. The downside is that once you write automation, you are stuck with it until you manually delete it from the automation section. This can make things really complicated, so be sure you are committed to the effect. Press and hold command and you can click to draw in nodes, then drag a node up and down for the effect that you want. Set up a drum track with an EQ on an insert and follow those steps: boost the gain on one of the seven bands, make sure the Q is more of a notch shape, then automate that band's frequency parameter up and down in Hz. The resulting effect is a frequency sweep, a cool swishing effect that moves the boosted range up and down the frequency spectrum in a linear fashion (see the sketch at the end of this entry). Very cool!! In drum machine plug-ins, you can also automate the envelope of a sound: attack, decay, sustain, and release. These are cool ways to add odd, rather awesome drum effects to mixes.

Any patch from any synthesizer can be assigned to a certain range of the keyboard, wherever you want it. For this purpose I will talk about Xpand2 in Pro Tools LE. In the mix window of this synthesizer, there are four slots called A, B, C, and D, and in each of these slots you can have a patch of your choice loaded. Each slot can store up to 500 parts or presets in its bank, and each has individual mix, MIDI, arpeggiator, modulation, and effects settings. You can load any four parts in to create a patch, which becomes a multi-layered, complex synth sound. It is possible to save the combinations you create as patches, and they will be stored in the presets menu. What is cool about this is you can load the patches into the same instrument plug-in on a different Pro Tools system. When assigning parameters to MIDI controllers and the MOD wheel, control-click and select "Learn MIDI CC". Turning the MOD wheel, which sits next to the pitch bend wheel on the controller, will then manipulate the parameter you have selected to learn. To un-learn the control, control-click again and select the "forget" option. You can use the MOD wheel for the fader, the panoramic potentiometer (pan pot), FX1, FX2, and the Master volume on the insert. The smart knobs on the top row function for all four slots at once (globally), or you can choose individual slots and edit their own smart knob parameters to really blend sounds together. Both FX columns at the bottom have identical options, each containing three different categories of effects: a handful of reverbs, delays, and modulation effects with choruses, phasers, and flangers. Each slot can have two effects at the same time, or just one, and enabling Learn MIDI CC puts these effects on the MOD wheel for manual use. What is cool is you can record the MOD wheel; this is basically live automation, and you are now using the MOD wheel as its own instrument! For modulation of the slot itself, click on MOD on the right, and you can change wave types and where the range of the slot sits on the controller. By using the HI/LO function, you can set the slots to begin and end on certain keys, or overlap on certain keys. The arpeggiator needs to be powered on for it to work. You can edit its rate/mode by different note values: 8ths, 8th triplets, 16ths, and 16th triplets, just to name a few. Latch mode endlessly loops the notes that you press until other keys are pressed.
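Here is that frequency sweep as a rough Python/SciPy sketch: a narrow peaking filter is re-designed block by block while its center frequency climbs, standing in for the automated EQ band. The block size, Q, and sweep range are arbitrary choices here, and a real automation pass would move far more smoothly:

import numpy as np
from scipy import signal

fs = 44100
audio = np.random.randn(fs * 2)            # two seconds of stand-in material

block = 1024                               # re-design the filter every 1024 samples
centers = np.geomspace(200, 8000, len(audio) // block)   # sweep 200 Hz up to 8 kHz
swept = np.zeros_like(audio)
for i, f0 in enumerate(centers):
    b, a = signal.iirpeak(f0, Q=8, fs=fs)  # narrow, notch-shaped band at f0
    seg = slice(i * block, (i + 1) * block)
    swept[seg] = signal.lfilter(b, a, audio[seg])
# "swept" now swishes upward as the emphasized band moves through the spectrum.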

Friday, October 8, 2010

Over the last weekend, we had to choose a video of our choice to score, and we also had to make a projected 6-week production schedule. Over this last week, I got a big start on my project, doing most of my work with the MIDI and audio tracks using Logic Pro 8. I am very pleased with its extensive library of synths and all kinds of instrument and effects plug-ins. I am going with a video that is a CGI adventure in space, floating away from planet Earth: the intro scene from the movie Contact. The instrumentation will be drums, a harp, a string section, and a guitar. I want to make these instruments sound like they're floating in space, and I want different instruments to represent the different planets and themes that happen in the video. Adding the right amount of the right type of reverb will help define the spatial atmosphere. I came up with many great ideas this week. As planets go by the screen and float away, I want them to have a huge rumbling sound. I knew that either a pre-recorded sound effect or a lot of reverb would do the trick, and I decided to create my own sound effect with a lot of reverb. I was messing with a kick drum with a reverb on it: I set the reverb to an infinite length, turned the input of the dry kick signal all the way down, and fed the reverb a 100% wet signal. The resulting sound was an explosion with the attack of the kick, followed by a tail and sustain of very low sub-bass frequencies. This sound was going to be perfect for the planets going by, as I can automate the bass sound to swell up in volume as the planets come closer and fade down when they drift away. I recorded the MIDI kick and its reverb to an audio track, and that was my sample. I finished the "rumbling" automation for the whole video, which is 2:45 in length. A few meteor showers fly by and I wanted to capture a sound effect for those too, so I played around with some synthesizers in Logic Pro 8 and found a cool air swell patch that was just what the video needed. I played around with a few chords on the harp patch I was going to use, and decided to base the music for my movie around the Csus4 chord. To me this suspended chord creates a sense of tension and release, and is always very close to a solid resolution, keeping the emotion moving and interesting for this particular video. On the keyboard, I recorded a phrase in 11/4 for the harp. I then programmed a drum beat to that, using a standard drum kit with kick, snare, crash, hi-hat, ride, and toms. In order to make the kit sound like it was in space and not in a studio or quiet room, I bussed the output of the MIDI tracks to a single aux track as a drum mix, then created a send from that channel to an additional aux track to be used for effects processing on the drum kit. I introduce the music as a fade-in representing planet Earth, and as Earth floats off into the distance, I automated the volume parameter with a downward slope and gradually increased the wet mix setting on the reverb, which gave a really cool effect. There is a part in the video that looks like space is being engulfed by flames, and I wanted to find a huge, fiery sound. I decided to take a rain patch from a synth in Pro Tools and created a swell, pushing more notes down as the fire got thicker. It sounded too much like rain, so I added a reverb and a distortion plug-in to make it sound like a raging firestorm.
What I like to call little blue ice stars fly by at one point. In this part of the video, I wanted everything to be nearly silent, more resembling a space atmosphere. I was looking for a wind-swishing type of sound, and used the same air swell sound that introduces the film. A very short guitar riff with a spacey reverb really fills things out. I set up auxiliary tracks for every instrument so each could have a separate channel for all of the effects and automation. The strings fill out a nice layer and are characteristic of floating in the clouds. I added a really high-pitched choir patch for a "wonder-invoking" effect, and kept it really low in the mix. All in all, I have a total of 27 tracks being used for this score.

Friday, October 1, 2010

This week we were assigned a three-part vocal/creative music project, where we were required to record our voices reading a text excerpt regarding the Apollo 11 crew, who recorded "live" footage of planet Earth from space. There was a debate because Armstrong claimed that the craft was 130,000 miles from Earth, but at the end of the footage, blue sky was revealed; blue sky would not have been visible had the spacecraft been as far from Earth as they said. I used an SM58 to record the vocals. For some reason, the input signal wasn't very hot, so we had to crank the input on the Mbox full blast to even get a medium signal level. It was enough to get a recording, but there was hiss from the input over the whole track when I listened back. I put a denoiser plug-in on the track and tweaked the parameters to get it sounding clear and normal, and I changed the gain on the entire audio file as well, which helped bring up the overall volume. The second part of the assignment was to chop up the original audio and rearrange the words into the order given on the instructions. We also had to reverse a few words, using the reverse function in the AudioSuite menu. The third part was to do something creative with the audio. Some people just went crazy with effects, pitch shifting, and other processing; others added music to their projects. I decided to write a little 45-second jam, and edited my voice in parts to make it sound like it was skipping. I also matched my talking to the beat, and kept it in rhythm throughout the track. Working from home for the music part of it, I was using Logic Pro 8. I really liked some of the synth patches that I came across and wanted to use them. I decided to use three layers: a drumbeat, a wide, open ambient string patch with delay, phase, and cutoff processing, and a melody line with a sawtooth synth. So I programmed a drumbeat using the Ultrabeat drum synth, added an ambient patch from the ES2 synthesizer, and picked a monophonic patch from the same synth for the melody. After getting all of these set up as MIDI tracks, I ultimately wanted to work with audio, so I sent the output of the MIDI tracks to the input of a new record-enabled audio track and recorded the MIDI onto audio after I was done quantizing and everything was set. Now I had three stereo tracks to work with. I didn't use mono tracks because a lot of the processing on the patches I used had stereo effects. I then brought five audio tracks into a Pro Tools session: the original vocal track, the rearranged vocal track, the drums, the ambience, and the melody. I added three more tracks because I was planning on splicing the vocals and words onto tracks with various effects and panning. I did this because I wanted a designated track for each pan position, and didn't want to worry about automating the panning. One of the tracks was panned hard right, another hard left, and the third dead center. I used the left side for audio that would be pitch shifted down 1 to 15 semitones, the right track for audio pitch shifted up 1 to 15 semitones, and the center for all of the vocals that were to remain at the same pitch. Sometimes I would chop up a syllable from a word and repeat it to simulate a delay, and other times I would automate a delay plug-in on and off. The project ended up turning out cool, and I had a lot of fun doing it.

I also researched and wrote a paper for the make-up test this week:


A sampling rate is how many times per second a snapshot of the audio is taken; a sample is really just a digital snapshot of the audio at a point in time. A few sample rate choices available today are 44.1 kHz and 48 kHz (44,100 or 48,000 little snapshots of audio per second). I think of this as a flipbook where there are 44,100 pages with little pictures on them: flipping through the entire book in one second is roughly what a sample rate does in the computer in one second of time. Sample rate affects the analog-to-digital conversion process. If the sample rate is too low, it can produce the wrong tones, or tones that aren't coming from the actual source (aliasing). I found this out the other day, when I recorded myself speaking with the sample rate set at 48 kHz and loaded the file into a session whose sample rate was set at 44.1 kHz. When I played back the audio, my voice sounded pitched down, much lower than I normally speak. Remembering that I originally recorded at 48 kHz, I switched the sample rate setting in my DAW from 44.1 kHz back to the rate the audio was recorded at, and that solved the problem: the audio played back at normal pitch! Different sampling rates are used for various types of media, because the rate affects the overall frequency response. A sampling rate of 44.1 kHz is used mainly for music recording, since most music sits in the normal hearing range of 20 Hz to 20 kHz. The 48 kHz sample rate is used for more professional audio and film, while lower sample rates are used for voice applications. A higher sample rate results in a greater file size, so you should use the lowest sample rate that still gives the audio quality the project needs, to avoid unnecessarily large files.
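That mismatch is easy to quantify: playing a 48 kHz file back at 44.1 kHz slows it down, and the pitch drop falls out of the ratio. A quick check in Python:

import math

rec_rate = 48000                 # rate the file was recorded at
play_rate = 44100                # rate the session played it back at

speed = play_rate / rec_rate                 # ~0.919: playback runs slower
shift = 12 * math.log2(speed)                # ~-1.47 semitones, hence the deeper voice
print(f"playback speed x{speed:.3f}, pitch shift {shift:+.2f} semitones")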
Bit depth uses the binary digits 0 and 1 to encode the value of each sample; these digits are not to be confused with the decimal numbers one and ten. Another name for bit depth is word length, with respect to the binary code. Bit depth affects the resolution of amplitude and the dynamic range: it determines how many discrete levels each sample's amplitude can be quantized to. Common bit depth values for audio recording are 16 and 24 bits. The higher the bit depth, the better the audio quality. However, just like sample rate, the higher the value, the more hard-drive space the audio will take up. If a high enough bit depth is not chosen, the visual depiction of the audio will appear pixelated, and the audio itself will sound rigid. The old Nintendo system from the late 1980s is a great example of that type of sound quality, because it used an 8-bit system. One bit can hold either a '0' or a '1', so it represents two possible values. Given that bit depth works exponentially, a bit depth of 16 gives 2^16 = 65,536 possible amplitude levels for each sample. A higher bit depth means more levels, so each sample lands closer to the true amplitude of the signal, and the result is smoother audio from the computer.
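The arithmetic is quick to verify in Python:

for bits in (8, 16, 24):
    levels = 2 ** bits           # possible amplitude values per sample
    dyn_range = 6.02 * bits      # rule-of-thumb dynamic range in dB
    print(f"{bits}-bit: {levels:,} levels, ~{dyn_range:.0f} dB dynamic range")
# 16-bit gives 65,536 levels (~96 dB); 24-bit gives 16,777,216 (~144 dB).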
Several folders and files are created when you save a Pro Tools session. There is an audio files folder, in which any recorded audio is saved as its own file. The two typical audio file formats are WAV and AIFF, but WAV is more universal and is typically the default setting for Mac and Windows. A Pro Tools session file is also created. It is the documentation of the session, encompassing all of the audio tracks, settings, audio and video files, and edits that are part of it. This allows you to save different versions of your project without changing the actual audio, because you can change the edit information or the input/output assignments. These files show up in .ptf format. A wave cache file is another component created with a new session. Similar to the session file, the wave cache file stores data, but only the waveform display data, or waveform overview, for the tracks. A fade files folder holds the separate files created when you add fades to regions. This way, if fade files go missing from a session, they can be easily located in their own folder and reapplied to the session being worked on. Another folder created for a new session is a MIDI files folder, which saves any MIDI files that were exported. When doing a project using MIDI or instrument tracks, the files will not be saved as MIDI files in this folder unless they are manually exported; after being exported, the MIDI files are typically named after the session. A region groups folder will also be created, and as with the MIDI files folder, if you do not manually export any region groups in your session, the folder stays empty and eventually deletes itself because it is not being used. A video files folder can be created for, of course, video! When you import a QuickTime movie or a video file that is already in digital form, Pro Tools remembers where the video is stored on the computer and will not store it in the video files folder; if you use a digital converter when importing a video file, Pro Tools saves it in the video files folder. The session file backups folder is only used if you enable the AutoSave function, in which case your session files are saved there automatically. The rendered files folder is used by Elastic Audio processing; it will not keep files once rendered Elastic Audio is committed to an audio file, because a new file is created and stored in the audio files folder instead.
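Pieced together, the layout described above looks roughly like this (a sketch only; exact file and folder names vary by Pro Tools version):

My Session/
    My Session.ptf            (the session file)
    WaveCache.wfm             (waveform overview data)
    Audio Files/              (each recorded take as its own WAV or AIFF)
    Fade Files/               (files created when fades are added)
    MIDI Files/               (filled only when MIDI is manually exported)
    Region Groups/            (filled only when region groups are exported)
    Video Files/              (used only when video is digitized on import)
    Session File Backups/     (used only when AutoSave is enabled)
    Rendered Files/           (temporary Elastic Audio renders)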
The smart tool is an option that allows the use of multiple editing tools at once, depending on where the cursor sits within a region. Clicking the bar at the top of the three editing tools enables the smart tool. This lets you use the grabber, the selector, the trimmer, and a fade tool without having to switch back and forth with key commands or the mouse; moving the mouse over different areas of a region enables the different editing tools. The grabber looks like a little hand and allows you to grab regions and move them anywhere in the arrange window. You find this tool by floating the cursor over the bottom half of any audio region. The selector is the "I-beam" or standard cursor that allows you to select an edit or cut point in a region. This is also useful because it allows playback to start wherever the selection was made. This tool is accessed in the upper half of any region. The trimmer tool turns the cursor into brackets and enables the extension or retraction of regions. This makes it easy to quickly show all or part of the audio file with a simple click and drag; move the cursor to the outer right or left edges, on the bottom half of the region, to use it. The fade tool is enabled at the top left and right corners of a region. It looks like a little square, diagonally sectioned off and colored half gray, half white. By clicking and dragging, you can draw a fade of the desired length. Pro Tools applies the default fade settings, so if the fade is too slow or too fast, you may have to go in and change them. Fades are used for eliminating digital clicks or pops that can result from a cut edit. The smart tool does not repair pops or clicks within the audio itself; that is up to the pencil tool, which is not a part of the smart tool.
The edit modes in Pro Tools give user-definable control over the specific or free placement of any region or clip. There are four edit modes (Shuffle, Slip, Spot, and Grid), plus a Snap to Grid option. Slip is a very high-resolution edit mode that shifts an audio region in increments as small as samples or ticks. It is useful if a region is a few milliseconds off and needs a slight adjustment. Be careful when working in Slip mode, since the regions move so freely: if you are clicking on a lot of regions and moving quickly, you may accidentally shift a region by only a few milliseconds, and the resolution is so fine that the region may not appear to have moved when in fact it has. However, it is great for editing without time or grid restrictions, and overlapping or completely covering a region is possible in Slip mode. Grid mode locks regions onto a specific grid, defined by your selected time scale and grid resolution. If the grid is set at a 16th-note resolution, the region or MIDI notes will snap to the nearest 16th note, and the same applies for bars and beats. Shuffle mode moves all the regions to the right of an edit: if you cut audio from a region in Shuffle mode, everything to the right snaps left to close the gap that would have been created, and if a segment is added into a region, everything shifts right to make room for the new section. This mode is great for avoiding overlapping regions and ensures that regions line up right next to each other. Spot mode is for placing a region at a specific destination. By clicking on a region, a box pops up that lets the user define the exact point in time where the region should start or be moved to. This mode is useful for voice-over and film scoring, where the exact location of an audio region matters. Snap to Grid allows you to keep the grid active while in Spot, Slip, or Shuffle mode: it moves regions by the grid, but edit selections comply with the primary edit mode.
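The snapping arithmetic behind Grid mode is plain rounding. A small Python sketch, assuming Pro Tools' resolution of 960 ticks per quarter note (that tick value is an assumption here):

PPQN = 960                     # assumed ticks per quarter note
SIXTEENTH = PPQN // 4          # a 16th note is 240 ticks at that resolution

def snap_to_grid(pos_ticks, grid=SIXTEENTH):
    """Snap a tick position to the nearest grid line, as Grid mode does."""
    return round(pos_ticks / grid) * grid

print(snap_to_grid(1015))      # -> 960: pulled back onto the nearest 16th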
MIDI is not audio, and audio cannot pass through a MIDI track. MIDI is a way for electronic instruments, computers, and controllers to communicate with each other. It runs on a system of 16 channels and 128 note numbers. MIDI is an information protocol that sends a series of numbers and control messages to an interface. Pushing down a key on a MIDI controller and hearing a piano sound is very misleading as to what MIDI actually is: by pushing a key and letting it go, the keyboard sends MIDI information telling the computer that the note is "on", and then "off". Instructions for pitch, volume, velocity, duration, and the order of the notes to be played are included in MIDI data. A specific audio sample, such as a patch, a snare drum, or a single guitar note, needs to be assigned to that MIDI channel or note for it to produce a sound. Audio is transmitted as an electrical signal and is represented graphically as a waveform, while MIDI is transmitted as data, appears as blocks in the piano roll, and is defined as numbers in the event list. MIDI is the primary control source in drum machines, synthesizers, and software instruments. Audio itself is a change in atmospheric pressure; the waveform is its graphical representation. Audio begins its digital life with an analogue-to-digital conversion, whereas MIDI begins as a digital format. MIDI files are stored as .SMF, for Standard MIDI File, while audio files are commonly stored as .wav or .aiff file types. Audio is created by an analogue signal being sent through a microphone and into a series of conversions. Transduction takes place in the capsule (more specifically the diaphragm) of the microphone, transforming changes in atmospheric pressure into an electrical current. The electrical signal is then converted into digital numerical data (the precision depending on the sample rate and bit depth settings) that is sent to the computer for us to visualize as a digital waveform; that is the point where the analogue-to-digital conversion happens. After being processed in the computer, the digital audio leaves the computer and returns to the interface, where a digital-to-analogue conversion happens. The electrical signal is then sent to a speaker, for us to perceive as real audio.
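Those "on" and "off" messages are only a few bytes each. A minimal Python sketch of what a controller actually sends down the wire (no library needed):

channel = 0        # channels are numbered 0-15 on the wire (shown as 1-16 to users)
note = 60          # middle C; note numbers run 0-127
velocity = 100     # how hard the key was struck, also 0-127

note_on = bytes([0x90 | channel, note, velocity])   # key down
note_off = bytes([0x80 | channel, note, 0])         # key up
print(note_on.hex(), note_off.hex())                # 903c64 803c00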