I really love the ending of "Black Man" by Stevie Wonder, as he essentially re-sings the first verse and does some ad-libs on the talkbox. However, there's a slightly annoying "History Lesson" at the end of the track. So I ran the track through Audacity to make the backing talkbox track more prominent:
More NJS-era talkbox layers, this time from talkbox maestro Fingazz of The Untouchables, done in the style of Teddy's "new jack jazz" with the upright bass:
INTRO
Come Inside
(Fingazz Mix)
Fingazz's YouTube channel with talkbox-making vids in the studio:
He did the little dance-break part of "Remember The Time"... love it. Too bad there isn't a 12" version of RTT out there. I've got the remixes and ripped the video track, but it's just not the same.
Current Blue Note jazz pianist Robert Glasper's new(ish) album featuring this mint Herbie cover with some revamping of the talkbox and gorgeous drum and bass work.
Watch out for his background layering of multiple talkbox harmonies.
Not every lead singer is blessed with perfect pitch. This incredible tool helps correct vocal imperfections by automatically adjusting the vocal track to the intended notes. If automatic mode isn't for you, you can also dive in and tweak notes one by one with a virtual keyboard. There's even a graphical interface that allows you to visualize the vocal track and bump notes up and down in a way that isn't disruptive or obvious to the listener.
Although the program is best known for the singing-through-a-fan, robotic vocal style that has dominated pop radio in recent years with stars like Lady Gaga, T-Pain and countless others, Auto-Tune is in fact widely used in the studio and at concerts to make artists sound pitch-perfect.
"Quite frankly, [use of Auto-Tune] happens on almost all vocal performances you hear on the radio," said Marco Alpert, vice president of marketing for Antares Audio Technologies, the company that holds the trademark and patent for Auto-Tune.
The beauty of Auto-Tune, Alpert said, is that instead of an artist having to sing take after take, struggling to get through a song flawlessly, Auto-Tune can clean up small goofs.
"It used to be that singers would have to sing a song over and over, and by that time you've lost the emotional content of the performance," Alpert said. "Auto-Tune is used most often for an artist who has delivered a fabulous performance emotionally and there may be a few pitch problems here and there . . . [the software] can save a once-in-a-lifetime performance."
How it works
Auto-Tune users set a reference point – a scale or specific notes, for example – and a rate at which deviations from this point will be digitally corrected. This rate can be carefully calibrated so a voice sounds "natural," by easing the voice smoothly back to the reference pitch. Or, artists can make the correction happen quickly and artificially, which results in the warbling, digitized voices now all the rage in pop, hip-hop, reggae and other types of music.
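The correction loop described above can be sketched in a few lines: find the nearest note in the reference scale, then glide some fraction of the way toward it each frame. This is a minimal illustration of the idea, not Antares' actual algorithm; the function names, the C-major default scale, and the `retune_speed` parameter are all my own stand-ins.

```python
import math

A4 = 440.0  # reference tuning

def freq_to_midi(f):
    """Convert a frequency in Hz to a (fractional) MIDI note number."""
    return 69 + 12 * math.log2(f / A4)

def midi_to_freq(n):
    return A4 * 2 ** ((n - 69) / 12)

def nearest_scale_note(midi, scale={0, 2, 4, 5, 7, 9, 11}):
    """Snap a fractional MIDI note to the closest pitch class in the
    reference scale (C major by default)."""
    candidates = [n for n in range(int(midi) - 2, int(midi) + 3)
                  if n % 12 in scale]
    return min(candidates, key=lambda n: abs(n - midi))

def correct_pitch(detected_hz, retune_speed=0.3):
    """One frame of correction: move a fraction of the way toward the
    target note. retune_speed=1.0 snaps instantly (the robotic effect);
    small values ease the voice back and sound natural."""
    midi = freq_to_midi(detected_hz)
    target = nearest_scale_note(midi)
    corrected = midi + retune_speed * (target - midi)
    return midi_to_freq(corrected)
```

With `retune_speed=1.0`, a slightly flat 430 Hz note snaps straight to A4 (440 Hz); with a gentler setting it only moves part of the way each frame, which is why the same parameter can produce either transparent correction or the exaggerated warble.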
Auto-Tune's invention sprang from a quite unrelated field: prospecting for oil underground using sound waves. Andy Hildebrand, a geophysicist who worked with Exxon, used a signal-processing technique called autocorrelation to interpret these waves. During the 1990s, Hildebrand founded the company that later became Antares, and he applied his tools to voices.
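Autocorrelation measures how well a signal matches a delayed copy of itself; for a voiced sound, the delay that matches best is the pitch period, which is what makes the technique useful for detecting a singer's pitch. A toy illustration of the idea (the search range and test tone are my choices, not anything from Hildebrand's work):

```python
import math

def autocorrelation_pitch(samples, sample_rate, fmin=60.0, fmax=500.0):
    """Estimate fundamental frequency by finding the lag at which the
    signal best correlates with a shifted copy of itself."""
    min_lag = int(sample_rate / fmax)   # shortest period to consider
    max_lag = int(sample_rate / fmin)   # longest period to consider
    n = len(samples)
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, min(max_lag, n // 2)):
        score = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag

# A 220 Hz sine sampled at 8 kHz should be detected near 220 Hz.
rate = 8000
wave = [math.sin(2 * math.pi * 220 * t / rate) for t in range(2048)]
```

A real detector adds windowing, normalization and sub-sample interpolation, but the core is just this search for the best-matching lag.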
The recording industry pounced on the technology, and the first song credited with (or blamed for) introducing Auto-Tune to the masses was Cher's 1998 hit "Believe."
Although a success with audio engineers, Auto-Tune remained largely out of sight until 2003 when rhythm and blues crooner T-Pain discovered its voice-altering effects.
Considered the first electrical speech synthesizer, the VODER (Voice Operation DEmonstratoR) was developed by Homer Dudley at Bell Labs and demonstrated at both the 1939 New York World's Fair and the 1939 Golden Gate International Exposition. The Voder synthesized human speech by imitating the effects of the human vocal tract. The operator selected one of two basic sounds with a wrist bar: a buzz tone generated by a relaxation oscillator produced the voiced vowels and nasal sounds, with the pitch controlled by a foot pedal, while a hissing noise produced by a gas discharge tube created the sibilants (voiceless fricative sounds).

These source sounds were passed through a bank of 10 band-pass filters selected by keys; their outputs were combined, amplified and fed to a loudspeaker. By working the keys and the foot pedal, the operator converted the hisses and tones into vowels, consonants, and inflections. Additional special keys produced the plosive sounds such as "p" or "d", and the affricate sounds of the "j" in "jaw" and the "ch" in "cheese". This was a complex machine to operate; after months of practice, a trained operator could produce recognizable speech.
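The Voder's scheme (buzz or hiss excitation, shaped by a bank of band-pass filters whose gains the operator's keys set) is the classic source-filter model, and it can be sketched digitally. The biquad design below is the standard Audio EQ Cookbook band-pass; the band centre frequencies, Q, and gains are illustrative stand-ins, not the Voder's actual 10 bands.

```python
import math
import random

def bandpass(samples, center_hz, q, rate):
    """Biquad band-pass filter (0 dB peak gain form, Audio EQ Cookbook)."""
    w0 = 2 * math.pi * center_hz / rate
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def voder_frame(voiced, pitch_hz, band_gains, rate=8000, n=2048):
    """One 'frame' of Voder-style synthesis: pick buzz (voiced) or hiss
    (unvoiced) excitation, shape it with a band-pass bank, sum the bands."""
    if voiced:
        # A sawtooth stands in for the relaxation-oscillator buzz;
        # pitch_hz plays the role of the foot pedal.
        period = rate / pitch_hz
        source = [2.0 * ((t % period) / period) - 1.0 for t in range(n)]
    else:
        rng = random.Random(0)
        source = [rng.uniform(-1, 1) for _ in range(n)]  # hiss
    mix = [0.0] * n
    for center_hz, gain in band_gains:  # the 'keys' set these gains
        band = bandpass(source, center_hz, q=5.0, rate=rate)
        mix = [m + gain * s for m, s in zip(mix, band)]
    return mix

# An 'ah'-like vowel: 110 Hz buzz through two formant-ish bands.
vowel = voder_frame(True, 110, [(700, 1.0), (1100, 0.5)])
```

The same split into an excitation source and a controllable filter bank is what later vocoders, and the talkbox itself (with the player's mouth as the filter), build on.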