My video project has been in its final stages for a little while now, and I have really had some time to mix everything and make the journey through space a believable one. I honestly am super disappointed in what they did with the original score; it seems really boring to me. I found a sound in Reason earlier this week that would fit the score, and blended that in. I am excited to see everyone’s videos!
This week we started using Sibelius, a music notation program. On a grand staff, we notated a simple version of Deck the Halls. You can write lyrics to go along with the notes, and you can write the chords along the top. Digital notation programs are super time-saving tools, and I'd bet almost no one actually writes out notation by hand anymore. After using this program, I am definitely less scared of scoring and charting songs.
Friday, December 3, 2010
Friday, November 19, 2010
To open an empty rack: Open – Reason folder – Templates – Empty rack. To add a piece of gear to the rack you can double-click, click and drag, or click Create.
In Subtractor, click the folder icon by the patch window to load a new patch. After selecting a group of patches, you can use the little arrows to browse through them.
Difference between insert effects and send effects:

Insert effects send 100% of the signal through the device: Device – Insert Effect – Mixer.

Send effects send only a portion of the signal: Device – Mixer – Aux Send – Effect – Aux Return. The aux level knob controls how much signal goes to the effect.

With the UN-16, the send starts as a mono output going into the left input, but is processed into stereo and comes out L and R. So a send leaves the auxiliaries in the mixer as a mono signal, and returns from the UN-16 as a stereo signal into the aux returns L and R.
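The routing difference can be sketched in a few lines of Python (a toy model, not anything from Reason itself; the function names are made up for illustration):

```python
def insert_effect(signal, effect):
    # Insert routing: 100% of the signal passes through the device.
    return [effect(s) for s in signal]

def send_effect(signal, effect, send_level):
    # Send routing: the dry signal reaches the mixer untouched; a copy,
    # scaled by the aux level knob, is processed and summed back in.
    return [s + send_level * effect(s) for s in signal]

double = lambda x: 2 * x          # stand-in "effect": a simple gain boost
print(insert_effect([1.0, 0.5], double))        # [2.0, 1.0]
print(send_effect([1.0, 0.5], double, 0.25))    # [1.5, 0.75]
```

With an insert, turning the effect off changes the whole sound; with a send, the dry signal is always there and the knob only changes how much wet signal gets blended in.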
ALT and DUB
When you’re recording, pressing ALT after a take creates another lane above it, with the first take automatically muted.

DUB also creates a new lane, but it doesn’t mute the previously recorded take, so you hear the prior take along with what you are recording.
You can adjust pitch without affecting tempo, and adjust tempo without affecting pitch. You can also shape the envelope of the loop and use filtering.
Toggle between Song and Edit windows. Song mode shows the regions you are working on, and Edit mode switches the view to the piano roll and the lanes where automation can be drawn in.
Friday, November 12, 2010
REASON
In Reason’s preferences, under Keyboards and Control Surfaces, click Add to add the MIDI controller you are using. We are using the M-Audio Keystation 61es, so select the correct manufacturer and model. For the IN, select the Mbox.
The top part of the window is called the rack. Hitting TAB toggles between the front and back of the digital gear you are using. On the back there are virtual wires that you can draw signal paths with; they are used to make connections between devices, effects (as sends or inserts), and mixers. At the bottom is the transport with stop, record, rewind, and play. You can adjust the tempo, do overdubs, and tap tempo, and there is a pre-roll option as well. ReWire allows Reason to be used like a plug-in with Pro Tools, with up to 64 available channels into Pro Tools. Above the transport is the sequencer window. Each instrument has its own track, and each track can have many lanes; with lanes you can record many takes on one track. The track list is where you can mute, solo, and record-enable. The tool window has 4 modes: Device Palette, Sequencer Tools, Groove Settings, and Song Samples.
Propellerhead is the company that makes Reason, and they also make ReCycle. ReCycle chops audio samples into discrete slices, and the whole file is saved as a REX file. You can expand or contract the REX file, time-stretch it, and change pitch while maintaining tempo.
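The slicing idea behind REX files can be sketched in Python. This is a simplification: ReCycle finds slice points at transients, while this toy version just cuts a loop into equal pieces and re-spaces them at a new tempo.

```python
def slice_loop(samples, num_slices):
    # Cut a loop into equal slices (real slice points fall on transients).
    n = len(samples) // num_slices
    return [samples[i * n:(i + 1) * n] for i in range(num_slices)]

def play_at_tempo(slices, original_spacing, tempo_ratio):
    # Re-space the slice onsets for the new tempo. Slower tempo: slices
    # start farther apart, padded with silence. Faster tempo: slices are
    # truncated to fit. Each slice itself plays back unchanged, which is
    # why the pitch of the material is unaffected.
    spacing = int(original_spacing / tempo_ratio)
    out = []
    for s in slices:
        frame = list(s[:spacing])
        frame += [0] * (spacing - len(frame))   # pad with silence
        out.extend(frame)
    return out

beats = slice_loop([1, 2, 3, 4, 5, 6, 7, 8], 4)   # [[1, 2], [3, 4], ...]
print(play_at_tempo(beats, 2, 0.5))  # half tempo: gaps between slices
```

At half tempo the output is [1, 2, 0, 0, 3, 4, 0, 0, ...]: the same audio, spread out in time, with no pitch change.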
All the instruments run through the channel mixer, then to the Hardware Interface.
The Combinator is similar to Xpand and Structure, but you can combine many more devices than those can hold.
The mixer has 14 stereo channels, 4 stereo effects sends, and a 2-band EQ.
The Line Mixer 6:2 has 6 lines summed to a two channel stereo output.
The Subtractor is a modeled polyphonic analog synth.
Thor is a polysonic synth.
The Malström – a polyphonic graintable synth
NN19 digital Sampler – Allows you to sample and load samples, and create multisample patches.
NN-XT Digital Sampler – a little more advanced than the NN19: sample, load preset and custom samples, and modify them with the built-in synthesizer controls.
Dr. OctoRex is a Loop Player.
ReDrum – 8 programmable patterns in each of 4 banks; you can program up to 64 steps per pattern.
Kong – is a drum synth, with 16 different instrument channel options.
There are also effects devices: reverbs, phasers, and delays.
SHIFT-RETURN brings you back to where you started, and pressing it twice in a row sends the playhead back to the beginning. The space bar is play and stop.
Thursday, October 28, 2010
Structure Free is an instrument plug-in that does not require a MIDI controller to function. The default setting is a sine wave patch. To load a patch, click on the browser tab: Applications – Digidesign – Pro Tools Creative Collection – Structure Free. There are 6 smart knobs and a master fader, with useful parameters pre-assigned at the factory: Chorus Mix, Reverb Mix, Cutoff (applies an EQ filter), Resonance (controls the resonance of the cutoff), Attack, Release, and Master control.

There are key switches on the keyboard that loop patches and create variations of the groove. These key switches are velocity sensitive, and you can change the pitch and tempo of the drum kits. Key switches do not produce pitch; they send information. The keys are green when used and blue when not used. Pull up a loop, note where the key switches are, then play the loop with your left hand while you trigger key switches with your right while recording. If you bring up the MIDI edit window and pull the key switches out of their range, they will become pitches.

The patch module contains the patch list, where you can create, MIDI-assign, mix, select, route, and group patches. By clicking on a patch, you have the option to load a new patch, add a patch, duplicate, or remove one or all patches.
Friday, October 22, 2010
Vacuum and Boom
Vacuum is a monophonic digital synthesizer that generates tones. It is monophonic because it can only play one MIDI note at a time; polyphonic synths allow you to play chords. The most important parts of Vacuum are the oscillators and the envelopes, which is where the majority of the sound is shaped. The two top-left modules are VTOs 1 and 2 (vacuum tube oscillators), where the sound begins to generate. There are range knobs that let you change which overtones are emphasized. In WIDE mode, the fine knob has a 5-octave range; when not in WIDE mode it has a range of plus or minus 7 semitones. LO mode turns VTO 2 into a low-frequency oscillator, or LFO. VTO also switches between blending octaves. There are subtle changes when switching through these modes, and they can create thick, detuned sounds. There are different wave shapes to choose from on the VTOs: triangle, sawtooth, and pulse waves PW0 and PW50. The envelope-to-shape knob controls the modulation of the current VTO shape via envelope 1. There is a mixer next to the VTOs that blends the two together.

The ring modulator takes a variable amount of VTO 1 and 2. It heterodynes the two waves and outputs the sum and the difference of the waves.

There are high-pass filters and low-pass filters, and they both do exactly what they sound like. A high-pass filter lets all of the high frequencies pass through while keeping out the lows; a low-pass filter does the opposite, keeping the highs out and letting the lows through. These filters have a slope that sets the curve, which changes how many dB per octave are attenuated; a steeper slope on a high-pass filter cuts out more low end. They have cutoffs as well, which determine where the frequencies roll off within the audible spectrum, 20 Hz-20 kHz. The resonance setting affects the filter resonance, and the saturation knob distorts the resonant frequency. The envelopes change the shape of the sound over time.
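The ring modulator's sum-and-difference behavior falls straight out of the trig identity sin(a)*sin(b) = 0.5*(cos(a-b) - cos(a+b)). A toy version is easy to sketch in Python; this is just the math, not Vacuum's actual code:

```python
import math

def ring_modulate(f1, f2, sample_rate=48000, num_samples=512):
    # Multiply two sine waves sample by sample; by the product-to-sum
    # identity, the result contains only the sum (f1+f2) and
    # difference (f1-f2) frequencies.
    return [math.sin(2 * math.pi * f1 * t / sample_rate) *
            math.sin(2 * math.pi * f2 * t / sample_rate)
            for t in range(num_samples)]

out = ring_modulate(440, 100)
# Verify one sample against the identity: 340 Hz and 540 Hz components.
t, sr = 100, 48000
expected = 0.5 * (math.cos(2 * math.pi * 340 * t / sr)
                  - math.cos(2 * math.pi * 540 * t / sr))
print(abs(out[t] - expected) < 1e-9)  # True
```

Ring-modulating 440 Hz with 100 Hz gives you 340 Hz and 540 Hz, neither of which was in the input, which is why ring mod sounds metallic and inharmonic.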
The attack of a sound is the amount of time it takes to reach full volume. The decay is where the sound decreases in amplitude, and the sustain is the persistence of the sound after the attack. The release is the point at which the sound stops; some sound may persist at the end, but that is considered reverberation, and its character and length vary with the size and type of the room.

In the Boom drum machine plug-in, there are 10 different kits and ten different channels, each of which can hold a different piece of the kit. Each channel has options to change pan, level, tone, and decay, plus a solo and mute button. The matrix on the left provides a visual overview of the currently selected drum pattern; each row corresponds to a channel. You can click the little red dots to make them brighter or darker: off is like a rest, bright red is a loud hit, and the dimmer ones are softer hits. This lets you put feel into a beat by changing its dynamics. There are 16 steps available to make a beat, which works out to one bar of 4/4. You can copy a pattern to any one of the 16 pattern slots, or delete any pattern.
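A piecewise-linear ADSR envelope like the one described above can be sketched in Python (times in samples, levels from 0.0 to 1.0; this is a generic illustration, not any particular plug-in's implementation):

```python
def adsr_envelope(num_samples, attack, decay, sustain_level, release):
    # Attack: ramp 0 -> 1.  Decay: fall 1 -> sustain_level.
    # Sustain: hold.        Release: ramp sustain_level -> 0.
    env = []
    for i in range(attack):
        env.append(i / attack)
    for i in range(decay):
        env.append(1.0 - (1.0 - sustain_level) * i / decay)
    env.extend([sustain_level] * (num_samples - attack - decay - release))
    for i in range(release):
        env.append(sustain_level * (1.0 - i / release))
    return env

env = adsr_envelope(100, attack=10, decay=10, sustain_level=0.5, release=20)
print(env[0], env[10], env[50], env[99])  # 0.0 1.0 0.5 0.025
```

Multiplying a raw oscillator signal by this envelope, sample by sample, is what turns a constant tone into a note with a beginning, middle, and end.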
Thursday, October 14, 2010
A few more things to know about Pro Tools LE: markers, crash-course automation tips, and MIDI control assignments.
Markers make your workflow more efficient and help keep your sessions visually organized. Where it says Markers at the top of your tracks in the edit window, click the triangle “∇” icon to make the ruler appear across the screen. This way you can see the markers that you place down for the different scenes in your movie scoring project. Markers make it easy to see where certain actions or sound effects need to happen, and they are also useful for other editing notes or reminders. You can enter them while your movie plays by pressing ENTER+ENTER …not return+return!!! If you are fast, you can type the name for the marker as the movie plays. The key command “decimal point + the marker # on the numeric pad + decimal point” brings up the Memory Locations window, where you can view all of your markers in a column list. Pressing Command + numeric 5 brings up the window for the counter options on your markers. You can select to view Bars/Beats, Min/Sec, or Samples, and there is a sub-counter that gives a secondary view with any of those same 3 options. There is a comments section for typing notes, and a sort-by-time option that sorts the markers in the list by time.

Automation is fantastic but a slippery slope when it comes to mixing. It should really be one of the last things you do in a mix, to avoid many issues, the main one being a huge waste of time. Automation is a really cool way to change/manipulate/affect every single available parameter on the inserts/sends and dynamics and effects plug-ins. Say we have a delay on a track and we want it in one part of a project but not in another. First, in the mix window, you have to click on the processor you want to automate. There’s an AUTO button at the top of the window. Clicking that brings up a list of all of the parameters on one side. If you have multiple inserts and sends, they will appear in the middle and you can select which one you want to enable.
Select the parameter you want to automate and click add. This way is good for learning the names of the parameters on the plug-ins you are automating. Once you know the names, there is a quicker way to enable these parameters: holding down all three modifier keys (control+option+command) and clicking directly on the little green light (dim when automation is disabled) by the parameter enables it for automation. Also, control-click shows a menu where you can choose “plug-in automation enable”. In the edit window, go to the track you want to affect and click on the bar that Pro Tools defaults to WAVEFORM. There will be options for volume, pan, mute, and any of the parameters you enabled. A fader is static and doesn’t move by itself in a mix, so if we want a part turned down every time we play the song, we automate the volume of the track. Be in smart tool mode to manipulate automation. When you press control, the pencil tool lets you draw lines and different user-definable shapes; there’s a drop-down menu by the pencil tool that lets you choose from freehand, triangle, square, or random shapes. Parabolic and s-curve are two shapes that I believe only Pro Tools HD will let you use. With automation you can change countless parameters, and it can create some really cool effects without having to use too many tracks. The downside is that once you write automation, you are stuck with it until you manually delete it from the automation section. This can make things really complicated, so be sure you are committed to the effect. Press and hold command and you can click to draw in nodes; drag a node up and down for the effect that you want. Set up a drum track with an EQ in the insert and follow those steps: boost the gain on one of the 7 bands and make sure the Q is more of a notch shape, then automate that band’s frequency parameter up and down in Hz. The resulting effect is a frequency sweep.
This is a cool swishing effect that sweeps the boosted frequency range you selected up and down the frequency spectrum in a linear fashion. Very cool!! In drum machine plug-ins, you can automate the envelope of a sound: attack, decay, sustain, and release. These are cool ways to add odd, rather awesome drum effects to mixes. Any patch from any synthesizer can be assigned to certain ranges of the keyboard, wherever you want them. For this purpose I will talk about Xpand2 in Pro Tools LE. In the mix window of this synthesizer there are 4 slots, called A, B, C, and D. In each of these slots you can load a patch of your choice, and each slot can store up to 500 parts or presets in its bank. Each slot has individual mix, MIDI, arpeggiator, modulation, and effects settings. You can load any four parts to create a patch: a multi-layered, complex synth sound. It is possible to save the combinations you create as patches, and they will be stored in the presets menu. What is cool about this is that you can load the patches on the same instrument plug-in on a different Pro Tools system. When assigning parameters to MIDI controllers and the MOD wheel, control-click and select “Learn MIDI CC”. Turning the MOD wheel next to the pitch-bend wheel on the controller will then manipulate the parameter you selected. To un-learn the control, control-click again and select the “forget” option. You can use the MOD wheel for the fader, the pan pot, FX1, FX2, and the master volume on the insert. The smart knobs on the top row function for all 4 slots at once (globally), or you can choose individual slots and edit their own smart-knob parameters to really blend sounds together. Both FX columns at the bottom have equal options, each containing 3 different categories of effects: a handful of reverbs, delays, and modulation effects with choruses, phasers, and flangers.
Each slot can have 2 effects at the same time, or just one. Enabling Learn MIDI CC maps these effects to the MOD wheel for manual use. What is cool is that you can record the MOD wheel; this is basically live automation, and you are now using the MOD wheel as its own instrument! For modulation of the slot itself, click on MOD on the right, and you can change wave types and set where the slot’s range sits on the controller. Using the HI/LO function, you can set the slots to begin and end on certain keys, or overlap on certain keys. The arpeggiator needs to be powered on for it to work. You can edit its rate/mode by different note values: 8ths, 8th triplets, 16ths, and 16th triplets, just to name a few. Latch mode endlessly loops the notes that you press until other keys are pressed.
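The node-based automation described earlier (command-click to drop nodes, drag them up and down) boils down to breakpoints with linear interpolation between them. A sketch in Python, using a made-up frequency-sweep lane as the example:

```python
def automation_value(nodes, t):
    # nodes: (time, value) breakpoints, like the dots drawn on an
    # automation lane. Linearly interpolate between them; hold the
    # edge values outside the first and last node.
    nodes = sorted(nodes)
    if t <= nodes[0][0]:
        return nodes[0][1]
    if t >= nodes[-1][0]:
        return nodes[-1][1]
    for (t0, v0), (t1, v1) in zip(nodes, nodes[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# An EQ-band frequency sweep: 200 Hz up to 8 kHz and back over 4 seconds
sweep = [(0.0, 200.0), (2.0, 8000.0), (4.0, 200.0)]
print(automation_value(sweep, 1.0))  # 4100.0, halfway up the sweep
```

Evaluating this function once per sample (or once per block) and feeding the result into the EQ band's frequency parameter is all a frequency-sweep automation pass really is.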
Friday, October 8, 2010
Over the last weekend, we had to choose a video to score. We also had to make a 6-week production schedule of a projected plan. Over this last week, I got a big start on my project, doing most of my work with the MIDI and audio tracks in Logic Pro 8. I am very pleased with the extensive library of synths and all kinds of instrument and effects plug-ins. I am going with a video that is a CGI adventure in space, floating away from planet Earth. It is the intro scene from the movie Contact. The instrumentation will be drums, a harp, a string section, and a guitar. I want to make these instruments sound like they’re floating in space, and I want to have different instruments represent different planets and themes that are happening in the video. Adding the right amount of the right type of reverb will help define the spatial atmosphere. I came up with many great ideas this week for my video. As planets are going by the screen and floating away, I want them to have a huge rumbling sound. To get this effect I knew that either a pre-recorded sound effect or a lot of reverb would do the trick. I decided to create my own sound effect with a lot of reverb. I was messing with a kick drum with a reverb on it. I set the reverb to have an infinite length, turned the input of the dry kick signal all the way down, and fed the reverb with a 100% wet signal. The resulting sound was an explosion with the attack of the kick, followed by a tail and sustain of very low sub-bass frequencies. This sound was going to be perfect for the planets going by, as I can automate the bass sound to swell up in volume as the planets are closer and fade down when they drift away. I recorded this sound (the MIDI kick and its reverb) to an audio track, and that was my sample. I finished the “rumbling” automation for the whole video, which is 2:45 in length.
A few meteor showers fly by and I wanted to capture a sound effect for those too, so I played around with some synthesizers in Logic Pro 8. I found a cool air swell patch that was just what the video needed. I played around with a few chords on the harp patch I was going to use, and decided to base the music for my movie around the Csus4 chord. To me this suspension chord creates a sense of tension and release, and is always very close to a solid resolution, keeping the emotion moving and interesting for this particular video. I recorded a phrase in 11/4 for the harp, on the keyboard. I then programmed a drum beat to that, using a standard drum kit with kick/snare/crash/hi-hat/ride/toms. In order to make the kit sound like it was in space and not in a studio or quiet room, I bussed the output of the MIDI tracks to a single aux track as a drum mix. Then I created a send from that channel to an additional aux track to be used for effects processing on the drum kit. I introduce the music as a fade-in representing planet Earth, and as Earth floats off in the distance, I automated the volume parameter with a downward slope and gradually increased the wet mix setting on the reverb, and it gave a really cool effect. There is a part in the video that looks like space is being engulfed by flames, and I wanted to find a huge, fiery sound. I decided to take a rain patch from a synth in Pro Tools and created a swell, pushing more notes down as the fire got thicker. It sounded too much like rain, so I added a reverb and a distortion plug-in to make it sound like a raging firestorm. What I like to call little blue ice stars fly by at one point. In this part of the video, I want everything to be silent, more resembling a space atmosphere. I was looking for a wind-swishing type sound, and used the same air-space sound that I used to introduce the film. A very short guitar riff that has a space reverb as well really fills things out.
I set up auxiliary tracks for every instrument so they could have a separate channel for all of the effects and automation. The strings fill out a nice layer and they are characteristic of floating in the clouds. I added a really high-pitched choir patch for a “wonder-invoking” effect, and kept it really low in the mix. All in all I have a total of 27 tracks being used for this score.
Friday, October 1, 2010
This week we were assigned a 3-part vocal/creative music project, where we were required to record our voices reading a text excerpt regarding the Apollo 11 crew, who recorded “live” footage of planet Earth from space. There was a debate because Armstrong claimed that the craft was 130,000 miles from Earth, but at the end of the footage, blue sky was revealed. Blue sky would not have been seen had the spacecraft been as far away from Earth as they said. I used an SM58 to record the vocals. For some reason, the input signal wasn’t very hot, so we had to crank up the input on the Mbox full blast to even get a medium signal level. It was enough to get a recording, but there was hiss from the input over the whole track when I listened back to it. I put a denoiser plug-in on the track and tweaked the parameters to get the track sounding clear and normal. I changed the gain on the entire audio file as well, and that helped bring up its overall volume. The second part of the assignment was to chop up the original audio and rearrange the words into the order given in the instructions. We also had to reverse a few words, using the reverse function in the AudioSuite menu. The third part was to do something creative with the audio. Some people just went crazy with effects, pitch shifting, and other processing. Others added music to their projects. I decided to write a little 45-second jam, and edited my voice in parts to make it sound like it was skipping. I also matched my talking to the beat, and kept it in rhythm throughout the track. Working from home for the music part of it, I was using Logic Pro 8. I really liked some of the synth patches that I came across and wanted to use them. I decided to use 3 layers: a drumbeat, a wide/open ambient string patch with delay, phase, and cutoff processing, and a melody line with a sawtooth synth. So I programmed a drumbeat, using the Ultrabeat drum synth.
I added an ambient patch from the ES2 synthesizer, and picked a monophonic patch from the same synth for the melody. After getting all of these set up as MIDI tracks, I wanted to ultimately work with audio. I sent the output of the MIDI tracks to the input of a new audio track that was record-enabled, and recorded the MIDI onto the audio tracks once I was done quantizing and everything was set. Now I had 3 stereo tracks to work with. I didn’t use mono tracks because a lot of the processing on the patches I used had stereo effects. I then brought 5 audio tracks into a Pro Tools session: the original vocal track, the rearranged vocal track, the drums, the ambience, and the melody. I added 3 more tracks because I was planning on splicing the vocals and words onto tracks with various effects and panning. The reason I did this was that I wanted a designated track for each pan position, and didn’t want to worry about automating the panning. One of the tracks was panned hard right, another hard left, and the third in the center. I used the L track for audio that would be pitch-shifted down 1–15 semitones, and the R track for audio pitch-shifted up 1–15 semitones. The center was used for all of the vocals that were to remain at the same pitch. Sometimes I would chop up a syllable from a word and repeat it to simulate a delay, and other times I would automate a delay plug-in in and out. The project ended up turning out cool, and I had a lot of fun doing it.
I also researched and wrote a paper for the make up test this week:
A sampling rate is the number of times per second that a snapshot of the audio is taken. A sample is really just a digital snapshot of the audio at a point in time. Two common sample rates today are 44.1kHz and 48kHz (44,100 or 48,000 little snapshots of audio per second). I think of this as a flipbook where there are 44,100 pages with little pictures on them: flipping through the entire book in one second is similar to what the sample rate does in the computer during one second of audio. Sample rate affects the analog-to-digital conversion process. If a sample rate is too low, it can produce the wrong tones, or tones that aren’t coming from the actual source. I found this out the other day, when I recorded myself speaking at 48kHz and loaded the file into a session whose sample rate was set at 44.1kHz. When I played back the audio, my voice sounded pitched down, much lower than I normally speak. Remembering that I originally recorded at 48kHz, I switched the sample rate setting in my DAW from 44.1kHz back to the rate the audio was recorded at, 48kHz. This solved my problem and played back the audio at normal pitch! Different sampling rates are used for various types of media, because the rate affects the overall frequency response. A sampling rate of 44.1kHz is used mainly for music recording, since most music falls within the normal sound spectrum, 20Hz – 20kHz. The 48kHz sample rate is used for more professional audio and film, and lower sample rates are used for voice applications. A higher sample rate results in a greater file size, so you should use the lowest sample rate that still gives the audio quality the project needs, to avoid unnecessarily large files.
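The pitch drop I heard can actually be computed: playing audio recorded at one rate back at a lower rate scales every frequency by the ratio of the rates. A quick Python check (the helper function here is my own, just for illustration):

```python
import math

def playback_pitch_shift(recorded_rate, playback_rate):
    # Every frequency gets scaled by playback_rate / recorded_rate;
    # convert that ratio to semitones with 12 * log2(ratio).
    return 12 * math.log2(playback_rate / recorded_rate)

shift = playback_pitch_shift(48000, 44100)
print(round(shift, 2))  # -1.47: about a semitone and a half lower
```

That roughly one-and-a-half-semitone drop matches what I heard: clearly lower, but still recognizable speech.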
Bit depth uses binary code, 0’s and 1’s, to encode the value of each sample. Another name for bit depth is word length, with respect to the binary code. Bit depth affects the resolution of amplitude and the dynamic range: it sets how many discrete levels each sample’s amplitude can be quantized to. Common bit depths for audio recording are 16 and 24 bits. The higher the bit depth, the better the audio quality. However, just like sample rate, the higher the value, the more hard-drive space the audio will take up. If a high enough bit depth is not chosen, the visual depiction of the audio will appear pixelated, and the audio itself will sound rigid. The old Nintendo system from the late 1980’s is a great example of that type of sound quality, because it used an 8-bit system. A single bit can take two values, ‘0’ and ‘1’, and each additional bit doubles the number of possible combinations, so bit depth is an exponential function: with a bit depth of 16, the number of levels the samples can be quantized to is 2^16 = 65,536. With more levels available at a higher bit depth, each sample lands closer to its true amplitude, and the result is more smoothly rendered audio from the computer.
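Quantization at a given bit depth can be sketched in Python: map each sample in the -1.0 to 1.0 range onto one of 2^bits levels and back, and the rounding error is the lost resolution (a toy model of an A/D converter, not any real one):

```python
def quantize(sample, bits):
    # 2**bits levels spread across the -1.0..1.0 range;
    # snap the sample to the nearest level.
    step = 2.0 / (2 ** bits)
    return round(sample / step) * step

print(2 ** 16)                        # 65536 levels at 16-bit
print(abs(0.3 - quantize(0.3, 16)))   # tiny error at 16 bits
print(abs(0.3 - quantize(0.3, 3)))    # 0.05: audible error at 3 bits
```

At 16 bits the rounding error is at most half a step (about 0.0000153 here), which is why higher bit depths sound smoother: every sample sits closer to its true amplitude.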
When opening a Pro Tools session, several folders are created when you save a session. There is an audio files folder, in which any recorded audio is saved as its own file. The two typical audio file formats are WAV and AIFF, but wave is more universal and is typically the default setting for MAC and Windows. A Pro Tools session file is also created. It is a documentation of the session. This encases all of the audio tracks, settings, audio and video files, and edits that are part of the session. This allows you to save different versions of your project without changing the actual audio, because you can change the edit information or the input/output assignments. These files show up as .ptf format. A wave cache file is another component that is created with a new session. Similar to the pro tools session file, the wave cache file stores data, but it stores only the waveform display data or waveform overview for the tracks. A fade files folder creates separate files when you add fades to regions. This way, if you lose fade files is a session, they can be easily located in their own folder, and reapplied to the session that is being worked on. Another folder created for a new session is a MIDI files folder, which saves any MIDI files that were exported. When doing a project using MIDI instrument tracks, the files will not be saved as MIDI files in the specific folder if they weren’t manually exported. After being exported, the MIDI files are typically named after the session. A region groups folder will also be created, and same with the MIDI files folder, if you do not manually export any regions as a group in your session, the folder will stay empty, and eventually delete itself because it is not being used. A video files folder can be created for use of, video! When you import a QuickTime movie file or a video file that is already in digital form, Pro Tools remembers where the video is stored on the computer, and will not store it in the video files folder. 
If you use a digital converter when importing a video file into Pro Tools, it will save it in the video files folder. The session file backups folder will only be effective if you enable the AutoSave function. Then your session files will be saved automatically. The rendered files folder is enabled when using elastic-audio processing. The folder will not contain any files if rendered audio-elastic is committed to an audio file. A new file will be created and stored in the audio files folder.
The smart tool is an option that allows the use of multiple editing tools at once, depending on where the cursor is on a region. By clicking the bar at the top of the 3 editing tools, you enable the smart tool. By doing this, you can use the grabber, the selector, the trimmer, and a fade tool without have to switch back and forth with key commands of the mouse. Moving the mouse over various areas in a region enables the different editing tools. The grabber looks like a little hand, and allow you to grab onto regions and move them around anywhere in the arrange window. You can find this tool by floating the cursor over the bottom half of any audio region. The selector is the “I beam” or standard cursor that allows you to select an edit or cut point in a region. This is useful because also it allows playback to start wherever the selection was made. This tool is accessed in the upper half of any region. The trimmer tool turns the cursor into brackets, and this enables the extension or retraction of regions. This makes it easy to quickly see all or a part of the audio file that we are dealing with, with a simple click and drag. Moving the cursor to the outer right or left edges and on the bottom half of the region allows for the use of the trimmer tool. The fade tool is enabled at the top left and right corners of a region. It looks like a little square, diagonally sectioned off and colored half gray, half white. By clicking and dragging, you can draw a fade of desired length. Pro Tools uses the default fade settings, so if the fade is to slow or too fast, you may have to go in and change the fade settings. This is used for eliminating digital clips or pops that can result from a cut edit. The smart tool does not relieve pops or clicks in audio. That is up to the pencil tool, which is not a part of the smart tool.
The edit modes in Pro Tools allow user definable parameters for the specific or unspecific placement of any region or clip. There are four different edit modes: Slip, Grid, Shuffle, Spot, and Snap to Grid. Slip is a very high-resolution edit mode, and when enabled shifts an audio region in increments as small as samples or ticks. It is useful if a region is a few milliseconds off and needs a slight adjustment. Be careful when working in slip mode, since the regions move so freely. If you are clicking on a lot of regions and moving quickly, you may accidentally shift a region by only a few milliseconds. The resolution is so fine that a region may not appear to have moved, when in fact it may have. However it is great for editing without time/grid restrictions. Overlapping of regions or completely covering a region is possible in slip mode. Grid mode locks the regions into a specific grid, defined by your selected time scale and grid resolution. If the grid is set at a 16th note resolution, the region or MIDI notes will snap to the nearest 16th note. The same applies for bars and beats. Shuffle mode moves all the regions to te right of an edit over. If you cut audio from a region in shuffle mode, everything to the right will snap over to the left, closing the gap that would have been created. If a segment is added into a region, everything will shift to the right to make room for the new section. This mode is great for avoiding overlapping of regions, and ensures that regions line up right next to each other. Spot mode is for specific destination of a region. By clicking on a region, a box will pop up that allows the user to define at what point in time they want the region to start or be moved to. This mode is useful for voice over, and film scoring for the exact location of where an audio region should start. Snap to grid allows you to be in grid mode at the same time you are in spot, slip, or shuffle mode. 
It will move regions by grid, but edit selections will comply with the primary edit mode.
MIDI is not audio, and audio cannot pass through a MIDI track. It is a way to enable electronic instruments, computers, and controllers to communicate with each other. MIDI runs on a 16-channel, 128 note system. MIDI is an information process that sends a series of numbers and control messages to an interface. Pushing down a key on a MIDI controller and hearing a piano sound is very misleading to what MIDI actually is. By pushing a key and letting it go, the keyboard will send MIDI info to tell the computer if the MIDI channel is “on”, and then “off”. Instructions for pitch, volume, velocity, duration, and the order of notes being to be played is included in MIDI data. A specific audio sample, such as a patch, snare drum, or a single guitar note needs to be assigned to that MIDI channel or note for it to produce a sound. Audio is transmitted as an electric signal and is represented graphically as waveform, while MIDI is transmitted as data or information and appears as blocks in the piano roll, and are defined as numbers in the event zone. MIDI is primary source in drum machines, synthesizers, and software instruments. Audio is a graphical representation of changes in atmospheric pressure. Audio begins its process with an analogue – digital conversion, where as MIDI begins as a digital format. MIDI files are stored as .SMF for Standard Midi File. Audio files are commonly stored as .wav or .aiff file types. Audio is created by an analogue signal being sent through a microphone and into many different conversions. Transduction takes place in the capsule (more specifically the diaphragm) of the microphone and transforms these changes in atmospheric pressure into a flux, or an electrical current. The electrical signal is then converted by using variations of digital numerical data (depends on sample rate and bit depth settings) that is sent to the computer for us to visualize as a digital waveform. That is the point of where an analogue/digital conversion happens. 
After being processed into the computer, the digital audio leaves the computer and returns to the interface, where a digital/analogue conversion happens. The electric signal is then sent to a speaker, for us to perceive as real audio.
This week we were assigned a three-part vocal/creative music project, where we were required to record our voices reading a text excerpt regarding the Apollo 11 crew, who recorded "live" footage of planet Earth from space. There was a debate because Armstrong claimed that the craft was 130,000 miles from Earth, but at the end of the footage, blue sky was revealed. Blue sky would not have been seen had the spacecraft been as far away from Earth as they said. I used an SM58 to record the vocals. For some reason, the input signal wasn't very hot, so we had to crank the input on the Mbox full blast just to get a medium signal level. It was enough to get a recording, but there was hiss from the input over the whole track when I listened back. I put a denoiser plug-in on the track and tweaked the parameters to get the track sounding clear and normal. I also raised the gain on the entire audio file, which helped bring up the overall volume. The second part of the assignment was to chop up the original audio and rearrange the words into the order given on the instructions. We also had to reverse a few words, using the reverse function in the AudioSuite menu. The third part was to do something creative with the audio. Some people just went crazy with effects, pitch shifting, and other processing. Others added music to their projects. I decided to write a little 45-second jam, and edited my voice in parts to make it sound like it was skipping. I also matched my talking to the beat and kept it in rhythm throughout the track. Working from home for the music part of it, I used Logic Pro 8. I really liked some of the synth patches I came across and wanted to use them. I decided to use three layers: a drumbeat, a wide/open ambient string patch with delay, phase, and cutoff processing, and a melody line with a sawtooth synth. So I programmed a drumbeat using the Ultrabeat drum synth.
I added an ambient patch from the ES2 synthesizer, and picked a monophonic patch from the same synth for the melody. After getting all of these set up as MIDI tracks, I ultimately wanted to work with audio. I sent the output of the MIDI tracks to the input of a new, record-enabled audio track, and recorded the MIDI onto the audio tracks once I was done quantizing and everything was set. Now I had three stereo tracks to work with. I didn't use mono tracks because a lot of the processing on the patches I used had stereo effects. I then brought five audio tracks into a Pro Tools session: the original vocal track, the rearranged vocal track, the drums, the ambience, and the melody. I added three more tracks because I was planning on splicing the vocals and words onto tracks with various effects and panning. I did this because I wanted a designated track for each of the different pans and didn't want to worry about automating the panning. One track was panned hard left, another hard right, and the third stayed centered. I used the left track for audio that would be pitch-shifted down 1–15 semitones, and the right track for audio pitch-shifted up 1–15 semitones. The center was used for all of the vocals that were to remain at the same pitch. Sometimes I would chop a syllable out of a word and repeat it to simulate a delay, and other times I would automate a delay plug-in on and off. The project ended up turning out cool, and I had a lot of fun doing it.
I also researched and wrote a paper for the make up test this week:
A sampling rate is how many times per second a snapshot of the audio is taken. A sample is really just a digital snapshot of the audio at a point in time. Two common sample rate choices available today are 44.1kHz and 48kHz (44,100 or 48,000 little snapshots of audio per second). I think of this as a flipbook where there are 44,100 pages with little pictures on them: flipping through the entire 44,100 pages in one second is similar to what the sample rate does in the computer during one second of audio. Sample rate affects the analog-to-digital conversion process. If the sample rate is too low, it can produce wrong tones, or tones that aren't coming from the actual source (aliasing). I found out how much the rate matters the other day, when I recorded myself speaking with the sample rate set at 48kHz and loaded the file into a session whose sample rate was set at 44.1kHz. When I played back the audio, my voice sounded pitched down and much lower than I normally speak. Remembering that I had originally recorded at 48kHz, I switched the sample rate setting in my DAW from 44.1kHz back to the rate the audio was recorded at, 48kHz. This solved my problem and played the audio back at normal pitch! Different sampling rates are used for various types of media, because the rate affects the overall frequency response. A sampling rate of 44.1kHz is used mainly for music recording, since most music falls within the normal hearing range of 20Hz–20kHz. The 48kHz sample rate is used for professional audio and film, and lower sample rates are used for voice applications. A higher sample rate results in a greater file size, so you should use the lowest sample rate that gives the audio quality the project needs, to avoid having unnecessarily large files.
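The mismatch I describe above can be worked out with a little arithmetic: playing 48kHz samples at 44.1kHz slows everything down by the ratio of the two rates, which also lowers the pitch. A quick sketch of that calculation:

```python
import math

# If 48 kHz audio is played back in a 44.1 kHz session, each sample
# lasts longer than intended, so playback is slower and lower in pitch.
recorded_rate = 48_000   # Hz, the rate the file was recorded at
playback_rate = 44_100   # Hz, the mismatched session's rate

speed_factor = playback_rate / recorded_rate       # ~0.919x normal speed
semitone_shift = 12 * math.log2(speed_factor)      # ~ -1.47 semitones

print(f"Playback speed: {speed_factor:.3f}x")
print(f"Pitch shift: {semitone_shift:.2f} semitones")
```

So the voice comes back almost a semitone and a half flat, which matches how noticeably "lower" it sounded.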
Bit depth is the number of binary digits (0's and 1's) used to encode the value of each sample. These digits are not to be confused with the decimal numbers one and ten. Another name for bit depth is word length, with respect to the length of the binary word. Bit depth affects the resolution of the amplitude measurement, and therefore the dynamic range: it divides the amplitude range into a fixed number of discrete levels, and each sample is rounded to the nearest level. Common bit depth values for audio recording are 16 and 24 bits. The higher the bit depth, the better the audio quality. However, just like sample rate, the higher the value, the more hard-drive space the audio will take up. If a high enough bit depth is not chosen, the visual depiction of the audio will appear pixelated, and the audio itself will sound rigid. The old Nintendo system from the late 1980s is a great example of that type of sound quality, because it used an 8-bit system. One bit in the binary system can hold a '0' or a '1', so it represents two possible values. Since the number of values grows exponentially with the number of bits, a bit depth of 16 gives 2^16 = 65,536 possible amplitude levels for each sample. With more levels available at a higher bit depth, each sample lands closer to its true amplitude, and the result is smoother-sounding audio from the computer.
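The exponential growth of levels with bit depth is easy to tabulate; a useful rule of thumb is that each bit adds about 6dB of dynamic range, which is why 8-bit systems like the old Nintendo sound so grainy next to 16- or 24-bit audio:

```python
import math

# Each sample is stored as an n-bit binary number, giving 2**n possible
# amplitude levels; 20*log10(levels) is the theoretical dynamic range,
# which works out to roughly 6.02 dB per bit.
for bits in (8, 16, 24):
    levels = 2 ** bits
    dynamic_range_db = 20 * math.log10(levels)
    print(f"{bits}-bit: {levels:,} levels, ~{dynamic_range_db:.1f} dB of range")
```

Running this shows 8-bit topping out near 48dB while 16-bit reaches about 96dB, roughly the span of a quiet room to a loud concert.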
When you save a Pro Tools session, several folders and files are created. There is an audio files folder, in which any recorded audio is saved as its own file. The two typical audio file formats are WAV and AIFF, but WAV is more universal and is typically the default on both Mac and Windows. A Pro Tools session file is also created. It is the document that describes the session: it encompasses all of the audio tracks, settings, references to audio and video files, and edits that are part of the session. This allows you to save different versions of your project without changing the actual audio, because only the edit information and the input/output assignments change. These files use the .ptf format. A wave cache file is another component created with a new session. Like the session file, the wave cache file stores data, but it stores only the waveform display data, or waveform overview, for the tracks. A fade files folder holds the separate files created when you add fades to regions. This way, if fade files go missing from a session, they can easily be located in their own folder and reapplied to the session being worked on. Another folder created for a new session is a MIDI files folder, which saves any MIDI files that are exported. In a project using MIDI or instrument tracks, nothing will be saved as MIDI files in this folder unless you manually export it. After being exported, the MIDI files are typically named after the session. A region groups folder will also be created, and as with the MIDI files folder, if you do not manually export any region groups from your session, the folder will stay empty and eventually be removed because it is not being used. Finally, a video files folder can be created, for video! When you import a QuickTime movie or another video file that is already in digital form, Pro Tools remembers where the video is stored on the computer, and will not copy it into the video files folder.
If you capture video into Pro Tools through a digital converter, however, it will be saved in the video files folder. The session file backups folder is only used if you enable the AutoSave function, in which case copies of your session file are saved there automatically. The rendered files folder is used during Elastic Audio processing. It will not hold files permanently: once a rendered Elastic Audio region is committed to an audio file, a new file is created and stored in the audio files folder instead.
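To keep the folder roles straight, here is a small sketch of the layout described above as it might appear on disk. The exact names are from my notes on Pro Tools around version 8 and should be treated as illustrative, not an exhaustive or authoritative listing:

```python
from pathlib import Path

# Hypothetical sketch of what a Pro Tools session folder contains and
# what each item is for (names illustrative, per the notes above).
session_root = Path("My Session")
layout = {
    "My Session.ptf": "the session document (tracks, edits, I/O settings)",
    "Audio Files": "each recorded take saved as its own WAV/AIFF file",
    "Fade Files": "rendered fade files, easy to relocate if lost",
    "MIDI Files": "MIDI you manually export, named after the session",
    "Session File Backups": "auto-saved copies (only if AutoSave is on)",
    "Rendered Files": "temporary Elastic Audio renders before committing",
    "WaveCache": "waveform overview data for faster redraws",
}
for name, purpose in layout.items():
    print(f"{session_root / name}: {purpose}")
```

Seeing the tree this way makes it clear why deleting, say, the fade files folder is recoverable while deleting the audio files folder is not.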
The smart tool is an option that allows the use of multiple editing tools at once, depending on where the cursor is over a region. Clicking the bar at the top of the three editing tool buttons enables the smart tool. By doing this, you can use the grabber, the selector, the trimmer, and a fade tool without having to switch back and forth with key commands or the mouse. Moving the mouse over different areas of a region activates the different editing tools. The grabber looks like a little hand, and allows you to grab regions and move them anywhere in the arrange window. You can find this tool by hovering the cursor over the bottom half of any audio region. The selector is the "I-beam" or standard cursor that allows you to select an edit or cut point in a region. This is also useful because it allows playback to start wherever the selection was made. This tool is accessed in the upper half of any region. The trimmer tool turns the cursor into brackets, and this enables the extension or retraction of regions. This makes it easy to quickly reveal all or part of the audio file we are dealing with, with a simple click and drag. Moving the cursor to the outer right or left edges, on the bottom half of the region, gives you the trimmer tool. The fade tool is enabled at the top left and right corners of a region. It looks like a little square, diagonally sectioned off and colored half gray, half white. By clicking and dragging, you can draw a fade of the desired length. Pro Tools uses the default fade settings, so if the fade is too slow or too fast, you may have to go in and change them. Fades are used for eliminating the digital clicks or pops that can result from a cut edit. The smart tool does not repair pops or clicks within the audio itself, though; that is up to the pencil tool, which is not part of the smart tool.
The edit modes in Pro Tools give user-definable rules for the precise or free placement of any region or clip. There are four main edit modes: Shuffle, Slip, Spot, and Grid, plus a Snap to Grid option. Slip is a very high-resolution edit mode, and when enabled it shifts an audio region freely, in increments as small as individual samples. It is useful if a region is a few milliseconds off and needs a slight adjustment. Be careful when working in Slip mode, since the regions move so freely: if you are clicking on a lot of regions and moving quickly, you may accidentally shift a region by a few milliseconds. The resolution is so fine that a region may not appear to have moved, when in fact it has. However, it is great for editing without time or grid restrictions, and overlapping regions or completely covering a region is possible in Slip mode. Grid mode locks regions onto a specific grid, defined by your selected time scale and grid resolution. If the grid is set at a 16th-note resolution, the region or MIDI notes will snap to the nearest 16th note; the same applies for bars and beats. Shuffle mode moves all the regions to the right of an edit over. If you cut audio from a region in Shuffle mode, everything to the right will snap over to the left, closing the gap that would have been created. If a segment is added into a region, everything will shift to the right to make room for the new section. This mode is great for avoiding overlapping regions, and ensures that regions line up right next to each other. Spot mode is for placing a region at a specific destination. By clicking on a region, a box pops up that allows the user to define the exact point in time at which the region should start or be moved to. This mode is useful for voice-over and film scoring, where the exact start location of an audio region matters. Snap to Grid allows grid behavior to operate at the same time you are in Shuffle, Slip, or Spot mode.
It will move regions by grid, but edit selections will comply with the primary edit mode.
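To make the grid idea concrete, here is a little Python sketch of the snapping math. This is my own illustration, not actual Pro Tools code, though the 960-ticks-per-quarter-note resolution is the one Pro Tools uses.

```python
# Sketch of Grid mode's snapping logic (my own illustration).
# Pro Tools uses 960 MIDI ticks per quarter note.
TICKS_PER_QUARTER = 960
TICKS_PER_16TH = TICKS_PER_QUARTER // 4  # 240 ticks

def snap_to_grid(position, grid=TICKS_PER_16TH):
    """Snap a region or note start position to the nearest grid line."""
    return round(position / grid) * grid

print(snap_to_grid(500))  # 480: the nearest 16th-note grid line
```

With a finer grid value you get the same behavior at a higher resolution, which is essentially what changing the grid setting in the toolbar does.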
MIDI is not audio, and audio cannot pass through a MIDI track. It is a way to enable electronic instruments, computers, and controllers to communicate with each other. MIDI runs on a 16-channel system with 128 notes per channel. MIDI is an information protocol that sends a series of numbers and control messages to an interface. Pushing down a key on a MIDI controller and hearing a piano sound is very misleading as to what MIDI actually is. By pushing a key and letting it go, the keyboard sends MIDI info telling the computer that the note is "on", and then "off". Instructions for pitch, volume, velocity, duration, and the order of the notes to be played are included in MIDI data. A specific audio sample, such as a patch, a snare drum, or a single guitar note, needs to be assigned to that MIDI channel or note for it to produce a sound. Audio is transmitted as an electric signal and is represented graphically as a waveform, while MIDI is transmitted as data: it appears as blocks in the piano roll and is defined as numbers in the event list. MIDI is the primary control source in drum machines, synthesizers, and software instruments. Audio itself is changes in atmospheric pressure; the waveform is just a graphical representation of those changes. Audio begins its process with an analogue-to-digital conversion, whereas MIDI begins as a digital format. MIDI files are stored in the Standard MIDI File (SMF) format. Audio files are commonly stored as .wav or .aiff file types. Audio is created by an analogue signal being sent through a microphone and into several conversions. Transduction takes place in the capsule (more specifically the diaphragm) of the microphone, which transforms those changes in atmospheric pressure into an electrical current. The electrical signal is then converted into digital numerical data (depending on the sample rate and bit depth settings) that is sent to the computer for us to visualize as a digital waveform. That is the point where the analogue/digital conversion happens. After being processed in the computer, the digital audio leaves the computer and returns to the interface, where a digital/analogue conversion happens. The electric signal is then sent to a speaker, for us to perceive as real audio.
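Since MIDI notes are just numbers 0-127, it is the receiving instrument that decides what they sound like. As a quick illustration, here is the standard equal-temperament mapping from a MIDI note number to a frequency (this is the usual convention, A4 = note 69 = 440 Hz, not anything specific to one program):

```python
# Standard equal-temperament mapping from MIDI note number to pitch.
# Convention: A4 = MIDI note 69 = 440 Hz.
def midi_to_freq(note):
    return 440.0 * 2 ** ((note - 69) / 12)

print(round(midi_to_freq(60), 2))  # middle C, about 261.63 Hz
```

The MIDI data itself never contains a frequency, only the note number; the synth or sampler does this kind of lookup when it produces actual audio.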
I also researched and wrote a paper for the make-up test this week:
A sampling rate is the number of times per second that a snapshot of audio is taken. A sample is really just a digital snapshot of the audio at a point in time. A few sample rate choices available today are 44.1kHz and 48kHz (44,100 or 48,000 little snapshots of audio per second). I think of this as a flipbook where there are 44,100 pages with little pictures on them: flipping through all 44,100 pages of the book in one second is similar to what a sample rate does in the computer, in one second of time. Sample rate can affect the analogue-to-digital converting process. If a sample rate is too low, it can produce the wrong tones, or tones that aren't coming from the actual source. I found this out the other day, when I recorded myself speaking with the sample rate set at 48kHz and loaded the file into a session whose sample rate was set at 44.1kHz. When I played back the audio, my voice sounded pitched down and much lower than I normally speak. In figuring out what the problem might be, and remembering that I originally recorded at 48kHz, I switched the sample rate setting in my DAW from 44.1kHz back to the rate the audio was recorded at, the 48kHz setting. This solved my problem and played back the audio at normal pitch! Different sampling rates are used for various types of media, because the rate affects the overall frequency response. A sampling rate of 44.1kHz is used mainly for music recording, since most music sits in the normal sound spectrum, 20Hz - 20kHz. The 48kHz sample rate is used more for professional audio and film. Lower sample rates are used for voice applications. A higher sample rate results in a greater file size, so you should use the lowest sample rate that still gives the audio quality the project needs, to avoid having unnecessarily large files.
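The pitched-down voice I described has simple math behind it. Playing 48kHz audio back at 44.1kHz holds each sample slightly longer, so everything comes out slower and lower. A quick Python check shows how big the drop is:

```python
import math

# Playing 48 kHz audio back at 44.1 kHz: every sample is held slightly
# longer, so the audio plays slower and lower in pitch.
recorded_rate = 48000
playback_rate = 44100

speed = playback_rate / recorded_rate  # about 0.919 of the original speed
shift = 12 * math.log2(speed)          # pitch change in semitones
print(round(shift, 2))                 # about -1.47 semitones
```

A drop of almost a semitone and a half is easily audible on a voice, which matches what I heard before switching the session back to 48kHz.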
Bit depth uses binary digits (0's and 1's) to encode the value of each sample; those digits are binary, not to be confused with the decimal numbers zero and one. Another name for bit depth is word length, with respect to the binary code. Bit depth affects the resolution of amplitude and the dynamic range: each sample's amplitude is quantized to one of a fixed number of levels. Common bit depth values for audio recording are 16 and 24 bits. The higher the bit depth, the better the audio quality. However, just like sample rate, the higher the value, the more hard-drive space the audio will take up. If a low bit depth is chosen, the visual depiction of the audio will appear pixelated, and the audio itself will sound rigid. The old Nintendo system from the late 1980's is a great example of that type of sound quality, because it used 8-bit audio. A single bit can take two values, 0 or 1, so n bits can represent 2^n different values. That means that with a bit depth of 16, each sample can land on one of 2^16 = 65,536 possible amplitude levels. More levels mean finer steps between amplitudes, and the result is smoother, more accurate audio from the computer.
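The relationship between bits, levels, and dynamic range is easy to compute. The 6.02 dB-per-bit figure is the standard rule of thumb for quantization, not something specific to any one converter:

```python
# Bit depth sets how many amplitude levels each sample can take, and
# (roughly) the dynamic range: about 6.02 dB per bit is the usual rule of thumb.
def quantization_levels(bits):
    return 2 ** bits

def dynamic_range_db(bits):
    return 6.02 * bits

print(quantization_levels(16))         # 65536 levels
print(round(dynamic_range_db(16), 1))  # about 96.3 dB
```

Running the same functions with 8 bits gives 256 levels and roughly 48 dB, which is why that old Nintendo sound is so grainy, and 24 bits gives over 16 million levels and about 144 dB.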
Several folders are created when you save a Pro Tools session. There is an Audio Files folder, in which any recorded audio is saved as its own file. The two typical audio file formats are WAV and AIFF, but WAV is more universal and is typically the default setting on Mac and Windows. A Pro Tools session file is also created; it is the documentation of the session. It encases all of the audio tracks, settings, audio and video files, and edits that are part of the session. This allows you to save different versions of your project without changing the actual audio, because you can change the edit information or the input/output assignments. These files show up in .ptf format. A WaveCache file is another component that is created with a new session. Similar to the session file, the WaveCache file stores data, but it stores only the waveform display data, the waveform overview, for the tracks. A Fade Files folder holds the separate files created when you add fades to regions. This way, if fade files go missing from a session, they can be easily located in their own folder and reapplied to the session being worked on. Another folder created for a new session is a MIDI Files folder, which saves any MIDI files that were exported. When doing a project using MIDI or instrument tracks, the files will not be saved as MIDI files in that folder unless they are manually exported. After being exported, the MIDI files are typically named after the session. A Region Groups folder will also be created; as with the MIDI Files folder, if you do not manually export any region groups in your session, the folder stays empty and is eventually removed because it is not being used. A Video Files folder can be created for, you guessed it, video! When you import a QuickTime movie file or a video file that is already in digital form, Pro Tools remembers where the video is stored on the computer, and will not store it in the Video Files folder.
If you use a digital converter when importing a video file into Pro Tools, it will be saved in the Video Files folder. The Session File Backups folder is only used if you enable the AutoSave function; then your session files will be saved there automatically. The Rendered Files folder is used for Elastic Audio processing. The folder will not keep a file once the rendered Elastic Audio is committed to an audio file; at that point a new file is created and stored in the Audio Files folder.
Friday, September 24, 2010
This week there was a two-part written/lab test on terms and Pro Tools. Some terms we were responsible for knowing were sample rate, which is the number of times per second the audio signal is sampled (a few options are 44.1kHz and 48kHz), and bit depth, which is the number of bits used to encode the amplitude of each sample. The lab portion of the test consisted of setting up the MIDI/AUX/AUDIO track matrix and recording some MIDI to audio. Then we were to step-input a scale with the MIDI controller. There was a creative mix part of the assignment: rearrange the original track, add effects, and play with different software instruments. We listened to these in class and there were some good and interesting remixes. A couple of them sounded like a ton of random, non-musical tones. Some important things to be aware of when composing your own piece are to use some repetition and references; we as humans are interested in patterns and familiarity. Repetition and monotony are two different characteristics of music. You can have repetition without monotony, and it takes making slight differences in the loops to add spice but keep familiarity. My final remix ended up being completely different from the original, chord progressions and all, but I found myself on a creative kick and just had some musical ideas in my head that I needed to record. Being a drummer, I was thinking of a simple groove in 4/4 using only a kick, snare, and hi-hat. I recorded it by playing the MIDI controller with my hands and using the BOOM instrument plug-in in Pro Tools for a tight, electronic-sounding kit. I looped this groove, loaded an instrument track with piano, and began playing to it. I started in C minor like the original composition did, but went to the bVII instead of the ii chord. I soon found a melody that I really liked, so I laid that down, and in listening to the mix, thought about what other instruments I could bring in to benefit the overall jam.
There wasn't any bass yet, and listening to the beat against the piano, I added a 16th-note pulsing bass synth sound to make the music feel like it's moving rhythmically and to give it some bottom end. I used the digital Vacuum tube plug-in, activated the LFO cutoff parameter, and synced it to a 16th-note value. Now that I had the rhythm section in place and a nice single melody on top, it still needed to be filled out, so I added a spacey warped strings patch to give a constant ambiance. I arpeggiated the same chord tones that I was using for the melody, backed off the attack in the sound's envelope editor so it created a swell, and put a flanger on it. I played all of these parts live, recording without quantize on. Then I went into the piano roll editor and made any necessary corrections. I like doing things in layers, and subdividing beats to give one 4-bar loop many different feels. Another in-class assignment this week was to record a simple drum beat at 100bpm and an A minor scale played melodically on the piano. Then we were to record ourselves singing along to the scale, saying a-b-c-d-e-f-g along with it. After getting that recorded, we grouped the two tracks together because we needed to section out the different letters and piano notes we recorded. We rearranged them into the C major scale c-d-e-f-g-a-b, and then into a-c-e-g-b-d-f-c, and d-f-a-g-b-d-c. Then we went into AUDIOSUITE > Pitch Shift. We doubled the vocal track and started to pitch the notes. We did a minor 3rd up, and a diminished 5th up on a third track, creating a diminished chord. We then played with reversing the sound in AudioSuite's Other section.
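Those pitch-shift intervals correspond to simple frequency ratios in equal temperament: an interval of n semitones multiplies the frequency by 2^(n/12). A quick sketch (this is the general math, not the actual AudioSuite implementation):

```python
# Equal-temperament frequency ratios for the intervals we pitched:
# shifting by n semitones multiplies the frequency by 2**(n/12).
def semitone_ratio(n):
    return 2 ** (n / 12)

minor_third = semitone_ratio(3)       # about 1.189
diminished_fifth = semitone_ratio(6)  # about 1.414, the tritone
print(round(minor_third, 3), round(diminished_fifth, 3))
```

Stacking the original note, the minor 3rd, and the diminished 5th is exactly the 1 - b3 - b5 spelling of a diminished triad, which is why the three layered tracks came out sounding diminished.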
Friday, September 17, 2010
MIDI and Instrument Tracks
Here's a little info on the menus at the top of Pro Tools. The FILE menu allows you to create new sessions or load new templates; you can save your sessions, save copies of your session, and quit the program. Importing files is another valuable selection under FILE. In EDIT you can copy and paste information, but it is much better to learn the quick keys. The VIEW menu lets you select what control windows you want to see; you can also change the number of tracks displayed. The TRACK menu lets you create and edit audio tracks, click tracks, and instrument tracks. In REGION you can loop, group, rename, and use Elastic Audio properties. EVENT is for all MIDI editing and functions. AUDIOSUITE has all the dynamics and effects processing. OPTIONS lets you select play and record modes. SETUP is for hardware and software settings, and the WINDOW menu allows you to bring up different editing windows (with the use of key commands) and edit the way your transport window looks. These menus are really important for an efficient workflow. There are a few ways to get MIDI into Pro Tools, and to have MIDI tracks as well as audio tracks recorded from MIDI. Create 3 tracks in a new session: a MIDI track, an aux track, and an audio track. In the inserts section of the aux track, pick a drum machine instrument plug-in. Aux tracks do not generate audio or MIDI; they are simply a way to route audio to any destination. Instead of setting up the 3 tracks, a single instrument track will also do: it can send and receive MIDI and audio on the same channel, and it can route like an audio channel, with sends. Set the output of the MIDI track to the appropriate instrument. Send the aux track via bus 1 to the audio track and change the audio track's input to bus 1. Make another set of tracks and put a piano on this time. Record-enable the drum track and find the sample you want to use. I recorded a beat earlier this week with just a simple kick, snare, and hi-hat pattern. I could have used Step Input, but I didn't.
That would have taken me a little longer, because when I hear beats in my head it is much easier for me to play them than to notate them. I am much more a performer than a composer, but I enjoy both. It's great to compose what I perform! The channel strips in Pro Tools have 10 inserts, 10 sends, an input/output selector, panning, muting, soloing, and a fader. On the transport there is a cool Wait for Note button that, if engaged, will start recording right as a key on the controller is pressed. Pro Tools also has many templates for different styles of music already set up, to save time when creating an ideal session; ins, outs, and sends are all in place. Name all your tracks when you set up a session to keep things clear and locatable. Quantizing after recording and during recording are both options. Quantize snaps the MIDI information to the grid, and it produces a more rhythmically exact result. Looping playback while recording and enabling MIDI Merge is good to do when you want to record single parts at a time and go back and record another layer. The key command Option+3 allows you to Input Quantize, enabling quantize in record mode. Make sure you choose the correct destination track and value, because you don't want to quantize the wrong track at the wrong value. The first assignment of the week was to turn a notated drum groove and piano chord progression into a MIDI session. I used the same process as I did to make my own little piece of music while working in the lab. It was a 28-measure piece, with a repeat of measures 5-16. We decided to record the drum parts all at the same time and the piano after. We quantized while we recorded, and it pretty much came out locked in the first time around.
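The quantize idea above can be sketched in a few lines of Python. This is my own illustration of the concept, not Pro Tools code; real quantize settings also have strength and swing controls, so I included a simple strength parameter (1.0 means a hard snap):

```python
def quantize(note_starts, grid=240, strength=1.0):
    """Pull each note start (in ticks) toward the nearest grid line.
    strength=1.0 is a hard snap; lower values keep some human feel.
    Illustration only; here 240 ticks stands for one 16th note."""
    quantized = []
    for t in note_starts:
        target = round(t / grid) * grid
        quantized.append(round(t + (target - t) * strength))
    return quantized

print(quantize([10, 250, 475]))                 # hard snap: [0, 240, 480]
print(quantize([10, 250, 475], strength=0.5))   # half way: [5, 245, 478]
```

A partial strength is often what you want when a take feels good but is slightly loose; a hard snap is what gave our recorded drum parts that locked-in feel.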
Friday, September 3, 2010
Anatomy of an Mbox Interface
Let's go over the surface anatomy of a two-channel analogue Pro Tools LE Mbox. A green LED indicates that the Mbox is on. It is powered via USB and plugs directly into the computer. There is a ¼ inch stereo headphone jack that uses a TRS connection: on a standard ¼ inch TRS jack, a tip, a ring, and a sleeve make up the connector. There are 5 knobs on the front of this interface. One controls the volume of the headphones. Another controls the monitoring system, such as a set of speakers or a PA. The third is a MIX dial, which gives variable control between audio that is being recorded and audio that is already recorded and in the mix. By turning the dial all the way to one side, we hear only what is being recorded; dialed all the way to the other side, we hear only the mix, and not what is currently being tracked. You can also blend the two signals, getting a mix of both while recording, if necessary. There is a MONO button that you can toggle to hear a mono or stereo mix. This is a nice way to check for phasing in your mix. Another button is 48V, or phantom power. There are 3 different types of microphones: dynamic, condenser, and ribbon microphones. Dynamic mics do not need phantom power, but are not harmed if it is enabled. Condenser mics need phantom power, because the capacitor-based capsule requires a polarizing charge. Ribbon mics do not need phantom power, and some can be destroyed if they are given the 48V charge, because these are very sensitive microphones. Another knob on the front of this interface is a gain stage, and controls the amount of input received by channel 1. For a very hot input that is clipping, indicated by the red PEAK LED, pushing in the button that says PAD will attenuate the input by 20dB. A button for selecting DI (direct input/injection) or mic is also there. DI would be an electric guitar or bass, or an electric keyboard. The mic setting is for just about any kind of microphone.
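That -20dB pad figure translates to a simple amplitude scaling. The dB-to-ratio formula below is the standard one for amplitude (dividing by 20, not 10, since power uses 10):

```python
# Convert a decibel change to an amplitude ratio: ratio = 10 ** (dB / 20).
def db_to_amplitude(db):
    return 10 ** (db / 20)

print(db_to_amplitude(-20))  # 0.1: the pad scales the signal to one tenth
```

So engaging the PAD knocks the incoming signal down to a tenth of its amplitude, which is usually enough headroom to stop a hot source from lighting the PEAK LED.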
On the back of the Mbox interface, there is a female XLR input for each of the 2 channels available to record on simultaneously. There are also ¼ inch DI and line inputs for both channels, two ¼ inch monitor outs for right and left speakers, and a digital input (S/PDIF) for digital information. There is a MIDI input/output section for a MIDI controller to send and receive digital information as well. MIDI is not audio; it doesn't become audio until it triggers a sound source, and what comes out of the speakers can't be called MIDI anymore. To get a first signal through the interface and talking to Pro Tools, set up a new session and create a mono audio track. In the insert section of the virtual channel strip, select Plug-in > Other > Signal Generator. Select from the various waveforms for any tone, just to make sure audio is coming out of the headphones. In the I/O section of the channel strip, select input 1 for the audio input path selector. Remove the insert from the channel strip. Plug a microphone into channel 1 and arm the track with the red R button on the channel strip. Record some audio onto the track. Create a second track and make sure its input and record enable are set. Record another bit of audio. Now pan both tracks opposite ways and take a listen in the headphones. Toggle the MONO button to hear the mono/stereo effect. Let's get MIDI into Pro Tools. Create a MIDI track and a stereo aux track. On the aux track, in the inserts section, select Instrument > piano. Set the input on the aux channel to the correct setting, and see if the MIDI controller is talking to Pro Tools.
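To demystify what a signal generator actually produces, here is a bare-bones sine tone in Python. The 1kHz frequency, 44.1kHz rate, and -20dBFS-ish level are my own parameter picks for illustration, not the plug-in's actual defaults:

```python
import math

# A minimal stand-in for a Signal Generator plug-in: a list of sine samples.
# Parameter choices (1 kHz, 44.1 kHz, level 0.1) are illustrative assumptions.
def sine_tone(freq=1000.0, rate=44100, seconds=0.01, level=0.1):
    n = int(rate * seconds)
    return [level * math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

samples = sine_tone()
print(len(samples))  # 441 samples for 10 ms of audio at 44.1 kHz
```

That list of numbers is all a digital tone is; the interface's D/A converter and the speakers turn it into the sound you hear in the headphones.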