Cabbers producing music

Started by popcorn, October 30, 2018, 12:03:56 PM


popcorn

Quote from: NoSleep on December 09, 2018, 12:34:57 PM
The synths will be used for filtering and gating. Then some of that is pitched up or down an octave using a pitchshifter or octave divider. They say they split it up into three sections, one at the original pitch, one up an octave and one an octave below (or maybe two octaves up, dunno, not heard it). Putting the same guitar through each sequenced section would result in three separate tracks sounding like one complete take.

OK, that makes sense... kinda. Though I'm still not sure why exactly the synths were necessary, listening to the sound.

The fun thing about the riff is that the pitch-shifting effect is sequenced. So actually it's maybe not exactly what you're imagining - you can only hear one note (original pitch or pitch-shifted) at a time, creating an arpeggiated effect.

NoSleep

You need the synths because they're filtering the results a bit and gating the notes. Three separate sequences that only fill all the gaps when played at once.
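The interlocking-gates idea NoSleep describes can be sketched in a few lines of Python. The step pattern and voice names below are invented for illustration; the point is just that three gate sequences, each open on its own steps, combine into one complete line.

```python
# Sketch of the gating idea above: three octave voices (down, original,
# up), each opened only on its own steps of a shared sequence, so
# together they sound like one continuous take. Pattern is made up.

def gate_masks(pattern, voices=("down", "orig", "up")):
    """Return one on/off gate sequence per voice.

    pattern -- which voice is open at each step, e.g. ["orig", "up", ...]
    """
    return {v: [1 if step == v else 0 for step in pattern] for v in voices}

# A made-up 8-step octave-hopping pattern:
pattern = ["orig", "up", "down", "orig", "up", "orig", "down", "up"]
masks = gate_masks(pattern)

# At every step exactly one gate is open, so the three gated tracks
# interlock into a single complete line when mixed together.
assert all(sum(m[i] for m in masks.values()) == 1 for i in range(len(pattern)))
```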

popcorn

Yes, that makes sense. Thanks for explaining.

I'd love to try this. I wonder if there's a way to use soft synths to process audio signals in Ableton. I've never tried that.

NoSleep

This would be piss-easy in a DAW using gates and filterbanks.

NoSleep

Something like Camel Audio's Alchemy would do the trick almost instantly except Apple bought them up and incorporated Alchemy into Logic. There must be some other companies doing similar (or an old [k] of Alchemy somewhere out there).

popcorn

Quote from: NoSleep on December 09, 2018, 12:43:06 PM
This would be piss-easy in a DAW using gates and filterbanks.

Yeah, I could just take a guitar sound and gate and filter and split it in the normal ways, I'm just wondering if you can actually plug it into a softsynth. Sounds like a laugh...

buzby

Quote from: Darles Chickens on October 30, 2018, 02:13:31 PM
One disadvantage of digital over analogue is that you'll be up against the problem of aliasing. Essentially the problem stems from the idea that a digital waveform is not a perfect representation of the original analogue sound, but is built from samples taken at a constant rate (44.1kHz or whatever) which approximates the sound. This sample rate is normally fixed, regardless of how you manipulate it later.

Imagine if you want to reproduce the effect of playing a tape at 1.25x speed, but digitally.  You can't change the digital sample rate, so instead what you do is you step through the original digital wavetable at 1.25x speed and build a new wavetable from the points that you hit.  This can never be as precise as just playing the original analogue data at 1.25x speed because:

  • The original digital source is itself lossy and can't provide a precise value for the wave at a given point when the point you hit is between two samples.
  • The new wavetable is sampled at the same rate as the original, and so many of its values have to be approximated by some kind of best guess, given the two neighbouring samples.

In practice, signal degradation in analogue processing probably leads to a worse result than digital manipulation, but you can sometimes still hear the effects of aliasing in very high-frequency sounds like cymbals which have been manipulated, particularly if the software uses naive algorithms to approximate the value of the wave between samples. The problem is normally compounded if you manipulate an already-manipulated piece of sound.
The quantisation effect can be compensated for by recording or generating the original waveform at a higher sampling frequency (e.g. 96kHz), so that when slowing it down for playback more of the original information is preserved. This is basically the same thing that is done to produce smooth slow-motion shots on film - the camera is overcranked when filming, so when the shot is played back at 24fps the motion is still smooth.
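The "step through the wavetable at 1.25x and best-guess between samples" process described above can be sketched directly. This is a minimal linear-interpolation resampler, not any particular product's algorithm; the sine input and 1.25 rate are example values.

```python
import numpy as np

# Sketch of the varispeed-by-resampling idea: read positions advance
# by 1.25 samples per output sample, and values that fall between two
# stored samples are linearly interpolated ("best guess given the two
# neighbouring samples").

def varispeed(wavetable, rate):
    """Resample a 1-D signal by stepping through it at `rate` speed."""
    positions = np.arange(0, len(wavetable) - 1, rate)
    idx = positions.astype(int)      # sample to the left of each position
    frac = positions - idx           # how far into the gap we landed
    return (1 - frac) * wavetable[idx] + frac * wavetable[idx + 1]

# 1000 samples of a sine, "played back" at 1.25x speed:
x = np.sin(2 * np.pi * 50 * np.arange(1000) / 1000)
y = varispeed(x, 1.25)

# Same sample rate, faster playback, so the output is shorter
# (and the pitch correspondingly higher):
assert len(y) == 800
```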

On magnetic tape, the speed the tape passes the record and playback heads determines how much bandwidth can be recorded without loss of fidelity. The tape has a fixed number of magnetic particles per square inch, so if you can get more inches of tape per second (IPS) past the heads you have more headroom and a wider frequency bandwidth available. That can be done by either using wider tape (1/2" instead of 1/4") or increasing the speed (or both - the wider tape format also has the benefit of a lower noise floor). Reel-to-reel decks have fixed speeds based on successive halving - 30 IPS, 15 IPS, 7.5 IPS (standard play) and 3.75 IPS (long play) - with a corresponding degradation in bandwidth as the speed decreases. It's a lot more complex than that in reality, as variations in tape formulations, recording bias settings, EQ and so on make a lot of difference.

The varispeed controls on most analogue multitrack transports didn't usually have enough range for frequency response loss due to the reduction in bandwidth to become an issue. The main uses for them were things like compensating for an instrument being out of tune, or pitching backing tracks into a lower or higher key if they were outside the comfortable range of the singer. That is one of the techniques Martin Hannett used when recording Ian Curtis's vocals - he wanted Curtis to sound more like Jim Morrison, but that register was below Curtis's comfortable range, so the backing track was pitched up a couple of semitones to record the vocals, and when it was played back at normal speed he sounded more of a baritone.

buzby

Quote from: popcorn on December 09, 2018, 12:26:49 PM
OK, thanks... but I'm still a bit confused about the process here.

So they program a sequence to modulate the guitar signal rhythmically, pitching the octave up and down at different points. That makes 100% sense. But are they repitching it using the synths? And if not, I'm not sure what they used the synths to do. To affect the filter sound? Add distortion? etc.

There's a video showing how to do it using pedals - one is a very simple MIDI step sequencer which is programmed to drive a whammy pedal that's doing an octave up and octave down split. For the recording they fed the guitar into a synth to do the same thing as the whammy pedal (and used the synth's own filter to process the guitar).

Feeding external inputs through a synth's filters is as old as the hills - it's what Brian Eno did in early Roxy Music with his VCS-3. It was a lot easier on older analogue synths, as they either used patch cords so you could plug the audio straight into the filter section, or had an external input jack specifically for the purpose.

popcorn

Quote from: buzby on December 09, 2018, 01:30:30 PM
There's a video showing how to do it using pedals - one is a very simple MIDI step sequencer which is programmed to drive a whammy pedal that's doing an octave up and octave down split. For the recording they fed the guitar into a synth to do the same thing as the whammy pedal (and used the synth's own filter to process the guitar).

Yes, I understand the midi/pedal stuff, I'm just not 100% on how the synths were involved. I don't know how you use a synth as a pitch shifter, for example. Unless they just used the synths to filter and maybe gate(?) the sound and the pitch shifting was done with a separate device.

buzby

Quote from: popcorn on December 09, 2018, 01:40:27 PM
Yes, I understand the midi/pedal stuff, I'm just not 100% on how the synths were involved. I don't know how you use a synth as a pitch shifter, for example. Unless they just used the synths to filter and maybe gate(?) the sound and the pitch shifting was done with a separate device.
In Muse's case they were using 3 modular synths to do the job of the distortion pedal in the video I linked to. They used a separate pitchshifter to get 2 outputs an octave above and below the input from the guitar, then used the 3 signals as inputs for the filter sections of 3 modular synths (an ARP 2600, Korg MS20 and EMS Synthi). The synth filters were then used to manipulate the audio signals, and a sequencer hooked up to all three was basically used to gate notes from each synth in turn to get the octave-hopping pattern (the synths weren't doing any pitch manipulation - that was done by the external pitch shifter). The 3 outputs would then have been recorded as 3 separate tracks and combined into the stereo mix (from what I remember one octave is panned left, the other is panned right and the original note is in the centre).

popcorn

Buzby, as ever your music production knowledge is a treat. Thank you so much.

hummingofevil

Re: the original question about slowing tape down and aliasing.

The Nyquist frequency is the upper limit on the frequencies a digital recording can capture. The principle is that to record a frequency f, you need a sample rate of at least 2f - if you sample any slower than that you basically miss the peaks and troughs of the original signal (it's easy to show in a diagram but hard to describe).
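The missed-peaks-and-troughs effect can be shown numerically rather than in a diagram. A sketch, with frequencies chosen purely for illustration: a 30 kHz cosine sampled at 44.1 kHz yields exactly the same sample values as a 14.1 kHz cosine, which is what aliasing means in practice.

```python
import numpy as np

# A cosine above the Nyquist frequency (fs/2 = 22.05 kHz) is
# indistinguishable, once sampled, from its alias at fs - f.

fs = 44100.0
t = np.arange(64) / fs                        # 64 sample instants

above = np.cos(2 * np.pi * 30000 * t)         # 30 kHz, above Nyquist
alias = np.cos(2 * np.pi * (fs - 30000) * t)  # 14.1 kHz, what you hear

# The sampler can't tell them apart - the peaks and troughs of the
# 30 kHz wave fall between the sample points:
assert np.allclose(above, alias)
```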



popcorn

It's time for another round of popcorn's "How did they done that then?" Today's subject: the keys in Everything In Its Right Place by Radiohead.

The received wisdom is that they did it on a Rhodes piano (which is how they played it live for 15 years), but that definitely ain't a Rhodes. It's a synth - you can hear the filters opening up in the second half of the song.

There's good reason to believe it was a Prophet-5, as Radiohead were definitely using that synth on Kid A and Amnesiac, and here's this quote from Thom Yorke:

Steve Lamacq: And originally the same sort of keyboard was used on...
Thom: (clearing throat deliberately) Japan. They used to use it an awful lot to get all those wacky sort of like well, tin drum sounds.

It's not completely obvious unless you know the band, but he's referring to the album Tin Drum by Japan, which according to my research used the Prophet-5. And as this quick YouTube video demonstrates, it's simple to recreate on the Arturia Prophet-5 clone.

What puzzles me is the stereo effect. You can recreate that by putting a stereo effect on the channel, but if you listen to the track, you can hear they're actually different. The filter opens up in different places.

Did they record midi, then feed it back into the synth while fucking with the filters, creating two different takes? If so, they must have had a modded Prophet-5 as it was made pre-midi. Is there another way they could have done this?

BTW, I'm not trying to recreate any of these sounds to use in my own music, I'm just fascinated to know what the process was. It's educational.

NoSleep

From one listen:

I think the stereo effect may be ADT (Automatic Double Tracking), as the right-hand channel seems to dominate, which suggests it's the original signal - our ears locate a sound by the earliest-arriving copy of it. It may be that two extra signals are being generated from the original and panned towards the centre of the mix, and the emphasis toward the right-hand side is because the generated signals are staggered in time behind it. It's also possible that they have been detuned very slightly to make it shimmer some more.

I don't think the filtery synthy sounds are necessarily from the same keyboard, although it's possible that they are. If so, then they could be completely different patches. It's possible that we are sometimes hearing three instances of the same synth placed Left - Mid - Right and sometimes they change the patch to another sound, as the electric piano sound just seems to drop back at these times, rather than disappear completely (although it disappears from that position).

popcorn

I don't think it's just ADT, as the left and right channels are noticeably different at several points. This is very easy to notice from about 3:40.

My suspicion is that they used the same midi data to play the track twice, and recorded different takes while screwing with the filter. And then used ADT or a similar effect (i.e. delaying one of them by a few milliseconds) to widen the stereo image.
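The delay-one-copy widening trick is easy to sketch. A minimal version, assuming a mono take, a 44.1 kHz rate and a 5 ms offset (all example values, not a claim about the actual session):

```python
import numpy as np

# Sketch of the widening trick: duplicate a mono take, delay one copy
# by a few milliseconds, and hard-pan the two copies left and right.

def widen(mono, fs=44100, delay_ms=5.0):
    """Return (left, right) with the right channel delayed slightly."""
    d = int(fs * delay_ms / 1000)              # delay in samples (220 here)
    left = np.concatenate([mono, np.zeros(d)]) # original, padded to length
    right = np.concatenate([np.zeros(d), mono])# delayed copy
    return left, right

mono = np.random.randn(44100)                  # one second of test signal
left, right = widen(mono)

# Both channels carry the same take, just offset by 220 samples:
assert np.array_equal(left[:44100], mono)
assert np.array_equal(right[220:], mono)
```

Because the two channels are identical apart from the offset, the ear fuses them into one wide source pulled towards the earlier side, which is the ADT-style effect being discussed.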

the

Is it remarkable how or why? Plinky plonky plinky plonky, copy Aphex Twin. Fuck 'em. Squinty privileged warble tossbag

Edit: bit pissed obviously :D :D :D fuck you

popcorn


buzby

I think virtually every Prophet 5 still in existence has probably had either the Kenton or Wine Country MIDI retrofit kit installed. The Wine Country kit is a reproduction of the original SCI factory MIDI retrofit offered for the final Rev 3 Prophet 5s. The Kenton kit has the advantage of being able to route the MIDI velocity data to control the VCA or VCF input.

Radiohead have 2 Prophet 5s (one was used instead of a piano to record the track at the suggestion of Nigel Godrich), and they were occasionally used live instead of the Rhodes, but since 2016 they have used a Prophet 08 to play EIIRP live instead of the P5 or Rhodes (it's a lot less hassle to keep running in a live environment than a P5 is nowadays).
Glastonbury 2017 - Prophet 08

popcorn

Buzby, what's your source on how many Prophet-5s Radiohead own? That's the sort of thing that's so specific I just have to ask about it.

One of them went kaput during their 2001 South Park gig, prompting them to say fuck it and play Creep instead.

buzby

Quote from: popcorn on January 09, 2019, 10:58:08 AM
Buzby, what's your source on how many Prophet-5s Radiohead own? That's the sort of thing that's so specific I just have to ask about it.

One of them went kaput during their 2001 South Park gig, prompting them to say fuck it and play Creep instead.
From the King Of Gear:
Thom's keyboards
Johnny's keyboards
(Thom and Johnny both use Prophet 08s on different songs)

popcorn

Ah right, the King of Gear. I know that guy. Good chap, great site, but he's not 100% on everything. For a very long time he said it was a Rhodes on that track...

buzby

Quote from: popcorn on January 09, 2019, 11:45:07 AM
Ah right, the King of Gear. I know that guy. Good chap, great site, but he's not 100% on everything. For a very long time he said it was a Rhodes on that track...
Some people think it was a Crumar DP50 or 80 (an old analogue piano synth with controllable filters), but there's no hard evidence for that, whereas there's actual interview comments, studio pictures and live performances that show them using Prophet 5s.

popcorn

Quote from: buzby on January 09, 2019, 12:15:04 PM
Some people think it was a Crumar DP50 or 80 (an old analogue piano synth with controllable filters), but there's no hard evidence for that, whereas there's actual interview comments, studio pictures and live performances that show them using Prophet 5s.

Aye, I think the fact that they used a Prophet-5 is indisputable.

buzby

Quote from: popcorn on January 08, 2019, 04:17:16 PM
I don't think it's just ADT, as the left and right channels are noticeably different at several points. This is very easy to notice from about 3:40.

My suspicion is that they used the same midi data to play the track twice, and recorded different takes while screwing with the filter. And then used ADT or a similar effect (i.e. delaying one of them by a few milliseconds) to widen the stereo image.

I stuck this on when I got home to have a listen through headphones. You are right that there are 2 separate tracks of Prophet 5 playing the same thing (presumably through MIDI), though the Rhodes-style patches they are playing have slightly different filter or EQ settings from the get-go (the track on the left seems to have more bass). They are also hard-panned left and right, and to increase the stereo field the track panned to the right seems to have a very short pre-delay on it (like Martin Hannett's signature sound). At various points through the track the filter resonance/cutoff are manipulated on each of the two tracks.

At the time of the recording they may have only had one P5 which would have meant running through the track twice playing back the MIDI data (which was likely recorded through a master keyboard), but with two P5s you could do it in real time, particularly if they both had the Kenton MIDI kit as you could map a MIDI CC value on a knob controller to Velocity and use that to record the filter tweaking via MIDI too (the wild fast filter tweaks in the last 30 seconds of the track have a bit of a 'granular' sound to them, like they would if they were produced from quantised to 0-127 velocity step values rather than an analogue potentiometer sweep).
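The "granular" quality buzby attributes to quantised velocity values can be demonstrated numerically. A sketch with invented cutoff values: however smooth the original sweep, squeezing it through a 0-127 MIDI value leaves only 128 distinct steps.

```python
import numpy as np

# A smooth filter sweep quantised to 7-bit MIDI resolution moves in
# audible steps rather than a continuous pot sweep. The 200 Hz - 8 kHz
# range and 1000-point sweep are example values.

sweep = np.linspace(200.0, 8000.0, 1000)   # smooth cutoff sweep in Hz

# Quantise to an integer 0-127 value, then map back to the cutoff range:
cc = np.round((sweep - 200.0) / (8000.0 - 200.0) * 127)
stepped = 200.0 + cc / 127 * (8000.0 - 200.0)

# Only 128 distinct cutoff values survive, however fine the original:
assert len(np.unique(stepped)) == 128
```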

In that Glastonbury 2017 video I posted earlier, it looks like the double tracking and filter manipulation are being done by Ed using his pedal board. There's definitely a delay in there, and he may have a MIDI controller to control the filter on the P08, as it's not Thom doing it, and Johnny's doing his KAOSS pad "sample and hold" thing.

popcorn

Thanks a lot for that, buzby. Didn't know any of that stuff about the midi kits. The observation about the filter sweep is neat too. It does so interest me how sounds are achieved.

Wish I had titled this thread "How did they done that then?" and made it into a proper thread about reckoning how sounds were made, as we have a few clued-in people here.

Golden E. Pump

Does anyone know how to speed up a vocal track using Logic Pro X once it's been recorded (without using the chipmunk sound effect mic track thing)? Similar to how Prince used to speed his vocals up to make his voice higher, aka the Camille voice.

NoSleep

There is a plugin in Logic called Vocal Transformer which separates the Mickey Mouse from the pitch change so that you can play around to your heart's content. Basically you can pitch up a vocal without it sounding mickey mouse or you can keep the same pitch and mickey mouse it. This all works pitching down and making a voice sound like Barry White, too.

NoSleep

Let me know if that isn't what you meant. There's some other pitch changing plugs and possibilities in Logic.

buzby

Prince used both tape varispeed (literally the chipmunk effect, slowing down the tape during recording so the vocals are pitched up when played back) and an Eventide H3000 Harmonizer (multi effects processor with pitch correction function) to manipulate his vocals. The plugins NoSleep is talking about will do a similar job to the Eventide's pitch correction algorithm.
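The arithmetic behind the tape-varispeed half of that trick is simple: track with the tape slowed to some fraction of normal speed, and full-speed playback raises the pitch by the inverse ratio. A sketch (the 89.09% figure is just an example, not a claim about Prince's actual settings):

```python
import math

# Tape varispeed arithmetic: recording with the tape slowed to a
# fraction of normal speed raises the played-back pitch by the
# inverse ratio; 12 * log2(ratio) converts that to semitones.

def semitones_up(record_speed_fraction):
    """Pitch rise, in semitones, when a recording made at the given
    fraction of normal tape speed is played back at full speed."""
    return 12 * math.log2(1 / record_speed_fraction)

# Halving the tape speed gives the full-octave chipmunk effect:
assert abs(semitones_up(0.5) - 12.0) < 1e-9

# Tracking at ~89.09% of normal speed raises the vocal close to
# two semitones on playback:
assert abs(semitones_up(0.8909) - 2.0) < 0.01
```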

NoSleep

The more standard pitch shifter, Pitch Shifter II, is the one for doing what the H3000 is famous for (it increasingly did more through the various models).