Friday, October 1, 2010

This week we were assigned a three-part vocal/creative music project, where we were required to record our voices reading a text excerpt about the Apollo 11 crew, who recorded "live" footage of planet Earth from space. There was a debate because Armstrong claimed that the craft was 130,000 miles from Earth, but at the end of the footage, blue sky was revealed. Blue sky would not have been visible had the spacecraft been as far from Earth as they said. I used an SM58 to record the vocals. For some reason, the input signal wasn't very hot, so we had to crank the input on the Mbox up full blast just to get a medium signal level. It was enough to get a recording, but when I listened back there was hiss from the input over the whole track. I put a denoiser plug-in on the track and tweaked the parameters until the track sounded clear and normal. I also changed the gain on the entire audio file, which helped bring up the overall volume.

The second part of the assignment was to chop up the original audio and rearrange the words into the order given in the instructions. We also had to reverse a few words, using the reverse function in the AudioSuite menu.

The third part was to do something creative with the audio. Some people just went crazy with effects, pitch shifting, and other processing. Others added music to their projects. I decided to write a little 45-second jam, and edited my voice in parts to make it sound like it was skipping. I also matched my talking to the beat and kept it in rhythm throughout the track. Working from home for the music part, I used Logic Pro 8. I really liked some of the synth patches I came across and wanted to use them. I decided on three layers: a drumbeat, a wide/open ambient string patch with delay, phase, and cutoff processing, and a melody line with a sawtooth synth. I programmed the drumbeat using the Ultrabeat drum synth, added an ambient patch from the ES2 synthesizer, and picked a monophonic patch from the same synth for the melody. After getting all of these set up as MIDI tracks, I ultimately wanted to work with audio, so I sent the output of the MIDI tracks to the input of a new record-enabled audio track and recorded the MIDI onto it once I was done quantizing and everything was set. Now I had three stereo tracks to work with. I didn't use mono tracks because a lot of the processing on the patches I used had stereo effects.

I then brought five audio tracks into a Pro Tools session: the original vocal track, the rearranged vocal track, the drums, the ambience, and the melody. I added three more tracks because I was planning on splicing the vocals and words onto tracks with various effects and panning. I did this because I wanted a designated track for each of the different pans, and didn't want to worry about automating the panning. One of the tracks was panned hard right, another hard left, and the third center. I used the left track for audio that would be pitch shifted down 1–15 semitones, and the right track for audio pitch shifted up 1–15 semitones. The center track was used for all of the vocals that were to remain at their original pitch. Sometimes I would chop up a syllable from a word and repeat it to simulate a delay, and other times I would automate a delay plug-in in and out. The project ended up turning out cool, and I had a lot of fun doing it.
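Side note on those semitone shifts: every semitone is a fixed frequency ratio (the twelfth root of two), so it's easy to work out how drastic a 15-semitone shift really is. Here's a tiny sketch of that math — the function name is just mine, for illustration:

```python
def semitone_ratio(semitones):
    """Frequency ratio for a pitch shift of n semitones.
    Each semitone multiplies the frequency by 2^(1/12)."""
    return 2.0 ** (semitones / 12.0)

# The hard-left track was shifted down, the hard-right track up:
print(semitone_ratio(-15))  # ~0.42 -> 15 semitones down, less than half the frequency
print(semitone_ratio(15))   # ~2.38 -> 15 semitones up, well over an octave
```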

I also researched and wrote a paper for the make-up test this week:


A sampling rate is the number of times per second that a snapshot of the audio is taken. A sample is really just a digital snapshot of the audio at a point in time. A few sample rate choices available today are 44.1kHz and 48kHz (44,100 or 48,000 little snapshots of audio per second). I think of this as a flipbook where there are 44,100 pages with little pictures on them. Flipping through all 44,100 pages of the book in one second is an analogy for what a sample rate does in the computer in one second of time. Sample rate can affect the analog-to-digital conversion process. If a sample rate is too low, it can produce the wrong tones, or tones that aren't coming from the actual source (this is called aliasing). I found this out the other day, when I recorded myself speaking with the sample rate set at 48kHz and loaded the file into a session whose sample rate was set at 44.1kHz. When I played back the audio, my voice sounded pitched down and much lower than I normally speak. In figuring out what the problem might be and remembering that I originally recorded at 48kHz, I switched the sample rate setting in my DAW from 44.1kHz back to the rate the audio was recorded at, the 48kHz setting. This solved my problem and played back the audio at normal pitch! Different sampling rates are used for various types of media, because the rate affects the overall frequency response. A sampling rate of 44.1kHz is used mainly for music recording, since most music falls within the normal hearing range, 20Hz–20kHz. The 48kHz sample rate is used for more professional audio and film. Lower sample rates are used for voice applications. A higher sample rate results in a greater file size, so you should use the lowest sample rate that still gives the audio quality the project needs, to avoid unnecessarily large files.
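Out of curiosity, here's the math behind that pitched-down playback. Playing a 48kHz file back at 44.1kHz stretches every sample out slightly, slowing the audio by a fixed factor and dropping the pitch by a predictable number of semitones. A quick sketch, not anything Pro Tools exposes:

```python
import math

# When a 48 kHz recording is played back in a 44.1 kHz session without
# conversion, every sample lasts slightly longer than intended, so the
# audio comes out slower and lower in pitch.
recorded_rate = 48000
playback_rate = 44100

speed_factor = playback_rate / recorded_rate  # ~0.919x speed
pitch_shift = 12 * math.log2(speed_factor)    # ~-1.47 semitones

print(f"Playback speed: {speed_factor:.3f}x")
print(f"Pitch shift: {pitch_shift:.2f} semitones")
```

About a semitone and a half down — which matches how noticeably lower my voice sounded.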
Bit depth is the number of binary digits (0's and 1's) used to encode the value of each sample. These are binary values, so a written '10' is the number two, not ten. Another name for bit depth is word length, with respect to the binary code. Bit depth affects the resolution of amplitude and the dynamic range: it determines how many discrete levels each sample's amplitude can be quantized to. Common bit depth values for audio recording are 16 and 24 bits. The higher the bit depth, the better the audio quality. However, just like sample rate, the higher the value, the more hard-drive space the audio will take up. If a high enough bit depth is not chosen, the visual depiction of the audio will appear pixelated, and the audio itself will sound rigid. The old Nintendo system from the late 1980s is a great example of that type of sound quality, because it used an 8-bit system. One bit can be either a '0' or a '1', so it can represent two different values. Each added bit doubles the number of possible combinations, which makes bit depth an exponential function: with a bit depth of 16, the number of discrete levels that sample values can be quantized to is 2^16 = 65,536. With more levels available, each sample lands closer to its true amplitude, so there is less rounding (quantization) error. The result is smoother, more accurate audio from the computer.
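Here's a quick sketch of that exponential relationship, along with the common rule of thumb that each bit adds roughly 6 dB of dynamic range:

```python
# Number of discrete amplitude levels and approximate dynamic range
# for common bit depths. Each extra bit doubles the number of levels
# and adds roughly 6 dB of dynamic range (~6.02 dB per bit).
for bits in (8, 16, 24):
    levels = 2 ** bits
    dynamic_range_db = 6.02 * bits
    print(f"{bits}-bit: {levels:,} levels, ~{dynamic_range_db:.0f} dB")

# 8-bit:  256 levels, ~48 dB   (the Nintendo sound)
# 16-bit: 65,536 levels, ~96 dB
# 24-bit: 16,777,216 levels, ~144 dB
```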
When you save a Pro Tools session, several folders and files are created alongside it. There is an audio files folder, in which any recorded audio is saved as its own file. The two typical audio file formats are WAV and AIFF, but WAV is more universal and is typically the default setting on both Mac and Windows. A Pro Tools session file is also created. It is the documentation of the session: it stores all of the audio tracks, settings, and edits, along with references to the audio and video files that are part of the session. This allows you to save different versions of your project without changing the actual audio, because you can change the edit information or the input/output assignments. These files show up with the .ptf extension. A wave cache file is another component that is created with a new session. Similar to the Pro Tools session file, the wave cache file stores data, but it stores only the waveform display data, or waveform overview, for the tracks. A fade files folder holds the separate files Pro Tools creates when you add fades to regions. This way, if fade files go missing from a session, they can be easily located in their own folder and reapplied to the session being worked on. Another folder created for a new session is a MIDI files folder, which saves any MIDI files that were exported. When doing a project using MIDI or instrument tracks, the files will not be saved as MIDI files in that folder unless they are manually exported. After being exported, the MIDI files are typically named after the session. A region groups folder will also be created, and as with the MIDI files folder, if you do not manually export any regions as a group in your session, the folder will stay empty and eventually delete itself because it is not being used. A video files folder can be created for use with, what else, video. When you import a QuickTime movie or a video file that is already in digital form, Pro Tools remembers where the video is stored on the computer and will not copy it into the video files folder. If the video has to be converted when importing it into Pro Tools, the result will be saved in the video files folder. The session file backups folder only comes into play if you enable the AutoSave function; then your session files will be saved there automatically. The rendered files folder is used when working with Elastic Audio processing in rendered mode. The folder will not keep a file once the rendered Elastic Audio is committed to an audio file; at that point a new file is created and stored in the audio files folder.
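Just to visualize that layout, here's a rough sketch that checks a session folder for those standard subfolders. The folder names are my assumptions based on the Pro Tools defaults described above, and the example path is hypothetical:

```python
import os

# Standard subfolders created alongside a Pro Tools (.ptf) session,
# per the description above. Names assumed from Pro Tools 8-era defaults.
EXPECTED = [
    "Audio Files",
    "Fade Files",
    "MIDI Files",
    "Region Groups",
    "Video Files",
    "Session File Backups",
    "Rendered Files",
]

def check_session_folder(path):
    """Report which standard subfolders exist in a session folder."""
    for name in EXPECTED:
        sub = os.path.join(path, name)
        status = "found" if os.path.isdir(sub) else "missing (may not have been needed yet)"
        print(f"{name}: {status}")

# Hypothetical usage:
# check_session_folder("/Users/me/Sessions/MySong")
```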
The smart tool is an option that allows the use of multiple editing tools at once, depending on where the cursor is within a region. By clicking the bar at the top of the three editing tools, you enable the smart tool. By doing this, you can use the grabber, the selector, the trimmer, and a fade tool without having to switch back and forth with key commands or the mouse. Moving the mouse over different areas of a region enables the different editing tools. The grabber looks like a little hand, and allows you to grab onto regions and move them around anywhere in the arrange window. You can find this tool by floating the cursor over the bottom half of any audio region. The selector is the "I-beam" or standard cursor that allows you to select an edit or cut point in a region. This is also useful because it allows playback to start wherever the selection was made. This tool is accessed in the upper half of any region. The trimmer tool turns the cursor into brackets, and this enables the extension or retraction of regions. This makes it easy to quickly reveal all or part of the audio file we are dealing with, with a simple click and drag. Moving the cursor to the outer right or left edges on the bottom half of the region brings up the trimmer tool. The fade tool is enabled at the top left and right corners of a region. It looks like a little square, diagonally sectioned off and colored half gray, half white. By clicking and dragging, you can draw a fade of the desired length. Pro Tools uses the default fade settings, so if the fade is too slow or too fast, you may have to go in and change the fade settings. Fades are used for eliminating the digital clicks or pops that can result from a cut edit. The smart tool does not repair pops or clicks within the audio itself, though; that is up to the pencil tool, which is not part of the smart tool.
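To see why a fade at a cut point gets rid of the click: a cut can leave the waveform sitting at a nonzero value, and that sudden jump to or from silence is what we hear as a pop. Here's a minimal sketch (the function and its parameters are just for illustration) that ramps the edges of a region to zero:

```python
import numpy as np

def apply_edit_fades(audio, sample_rate, fade_ms=5.0):
    """Apply short linear fades at the start and end of a region.

    A cut edit can leave the waveform at a nonzero value; the sudden
    jump to or from silence is heard as a click or pop. Ramping the
    first and last few milliseconds to zero removes the discontinuity.
    """
    n = int(sample_rate * fade_ms / 1000.0)
    out = audio.copy()
    ramp = np.linspace(0.0, 1.0, n)
    out[:n] *= ramp         # fade in
    out[-n:] *= ramp[::-1]  # fade out
    return out

# Hypothetical usage: a 440 Hz tone cut mid-cycle at 44.1 kHz
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
region = apply_edit_fades(tone, sr)
```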
The edit modes in Pro Tools give you user-definable behavior for the specific or free placement of any region or clip. There are four edit modes: Shuffle, Slip, Spot, and Grid, plus a Snap to Grid option that combines Grid with the others. Slip is a very high-resolution edit mode; when enabled, it shifts an audio region in increments as small as samples or ticks. It is useful if a region is a few milliseconds off and needs a slight adjustment. Be careful when working in Slip mode, since the regions move so freely: if you are clicking on a lot of regions and moving quickly, you may accidentally shift a region by a few milliseconds. The resolution is so fine that a region may not appear to have moved when in fact it has. However, it is great for editing without time/grid restrictions. Overlapping regions, or completely covering a region, is possible in Slip mode. Grid mode locks the regions to a specific grid, defined by your selected time scale and grid resolution. If the grid is set at a 16th-note resolution, regions or MIDI notes will snap to the nearest 16th note. The same applies for bars and beats. Shuffle mode moves all the regions to the right of an edit over. If you cut audio from a region in Shuffle mode, everything to the right will snap over to the left, closing the gap that would otherwise have been created. If a segment is added into a region, everything will shift to the right to make room for the new section. This mode is great for avoiding overlapping regions, and it ensures that regions line up right next to each other. Spot mode is for placing a region at a specific destination. When you click on a region, a box pops up that lets you define the exact point in time where the region should start or be moved to. This mode is useful for voice-over and film scoring, where an audio region needs to start at an exact location. Snap to Grid allows you to be in Grid mode at the same time you are in Spot, Slip, or Shuffle mode. It will move regions by the grid, but edit selections will follow the primary edit mode.
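The grid math itself is simple. With the grid set to 16th notes, the grid spacing is one quarter of a beat, and a region start just rounds to the nearest grid line. A little sketch of that, assuming a plain time-in-seconds model rather than anything Pro Tools actually exposes:

```python
def snap_to_grid(start_seconds, bpm=120.0, grid_fraction=16):
    """Snap a region start time to the nearest grid line.

    With the grid set to 16th notes, the spacing in seconds is the
    length of one beat (60/bpm) divided by 4, since there are four
    16th notes per quarter-note beat.
    """
    beat = 60.0 / bpm
    grid = beat * (4.0 / grid_fraction)
    return round(start_seconds / grid) * grid

# A region dropped at 1.01 s in a 120 bpm session snaps to 1.0 s,
# the nearest 16th-note line (grid spacing = 0.125 s).
print(snap_to_grid(1.01))  # 1.0
```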
MIDI is not audio, and audio cannot pass through a MIDI track. It is a way to enable electronic instruments, computers, and controllers to communicate with each other. MIDI runs on a 16-channel system with 128 possible notes. MIDI is an information protocol that sends a series of numbers and control messages to an interface. Pushing down a key on a MIDI controller and hearing a piano sound is very misleading as to what MIDI actually is. By pushing a key and letting it go, the keyboard sends MIDI information telling the computer that the note is "on", and then "off". Instructions for pitch, volume, velocity, duration, and the order of the notes to be played are included in MIDI data. A specific audio sample, such as a patch, a snare drum, or a single guitar note, needs to be assigned to that MIDI channel or note for it to produce a sound. Audio is transmitted as an electrical signal and is represented graphically as a waveform, while MIDI is transmitted as data: it appears as blocks in the piano roll and is defined as numbers in the event list. MIDI is the primary format used in drum machines, synthesizers, and software instruments. Audio is, at its source, changes in atmospheric pressure; the waveform is a graphical representation of those changes. Audio begins its process with an analogue-to-digital conversion, whereas MIDI begins as a digital format. MIDI files are stored in the Standard MIDI File (SMF) format, while audio files are commonly stored as .wav or .aiff file types. Audio is created by an analogue signal being sent through a microphone and into several different conversions. Transduction takes place in the capsule (more specifically the diaphragm) of the microphone, which transforms the changes in atmospheric pressure into an electrical current. The electrical signal is then converted into digital numerical data (whose resolution depends on the sample rate and bit depth settings) that is sent to the computer for us to visualize as a digital waveform. That is the point where the analogue-to-digital conversion happens. After being processed in the computer, the digital audio leaves the computer and returns to the interface, where a digital-to-analogue conversion happens. The electrical signal is then sent to a speaker, for us to perceive as real audio.
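To make the "series of numbers" idea concrete, here's a sketch of the raw bytes in the note-on and note-off messages that pressing and releasing a key actually sends. The helper functions are just mine for illustration, but the status bytes (0x90 and 0x80) and the 0–127 ranges come straight from the MIDI spec:

```python
# A minimal sketch of raw MIDI note-on/note-off messages, showing
# the 16-channel, 128-note system described above.

def note_on(channel, note, velocity):
    """Build a note-on message: status byte 0x90 plus the channel (0-15),
    then the note number and velocity (each 0-127, i.e. 7 bits)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Build a note-off message: status byte 0x80 plus the channel."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Pressing and releasing middle C (note 60) on channel 1:
print(note_on(0, 60, 100).hex())  # '903c64' -> key down
print(note_off(0, 60).hex())      # '803c00' -> key up
```

Three bytes per event — no sound at all until an instrument or sampler is assigned to turn those numbers into audio.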
