
No, I’m not talking about snake oil audio jitter that can be improved by buying $1500 cables hand-plated with gold between the thighs of virgins. I’m talking about the real thing — the kind that has been pissing me off since roughly 1991 and the demise of computers that were capable of decent timing accuracy.


Way back in the late ’80s, I had a commercial studio. We had a lot of synths and various outboard, most of which was set up so that it could be sequenced via honest-to-goodness 5-pin DIN MIDI. Our master sequencer was none other than an Atari ST. Spectacularly basic by modern standards, but, funny story: this thing was bang-on, dead nuts accurate with regard to timing, both running off its internal clock and synchronized with our 2″ 24-track tape recorder via an SMPTE timecode box.


Back then, if you hit record on the sequencer and played something, when you played it back it came back pretty much identical to what you played in the first place. A common technique (that I still use to this day) when writing was doing everything real-time with MIDI, then when the time came to track things ‘for-real’, running the sequence one track at a time to get the best possible results. Timing was accurate enough that if I accidentally ended up playing back from tape at the same time as having the ST locked to SMPTE, you’d hear the audio flanging.


Fast forward to the mid ’90s, when I started trying to put a home studio together following the demise of my original studio, a master’s degree and an aborted attempt at a PhD. This time around, I had much better computer hardware, running Windows. Everything you’d read, all the glossy mags, all thought that this was the bee’s knees, but when I tried to record anything I started wondering if I’d lost my ability to play anything in time. I’d complain about it, and people would tell me I was stupid, being too picky, or that I just had bad hardware. Trust me, I spent a LOT on getting my hardware right, but my timing still sucked donkey’s balls.

Though I really wanted to be able to use sequencers for composition, I think everything I recorded was played real time pretty much up to about 3 or 4 years ago. Same problem — I’d play something and it would come back wrong. Sure, I could quantize it, which would reduce (but not eliminate) the suck, but I couldn’t really understand why nobody else was bothered by this.

Many years ago I had an email conversation with the head of the MIDI standards committee (or whatever it called itself at the time). His response was basically: yes, we know, but we’re not going to do anything about it because nobody cares, and our members are all from hardware companies who don’t give a rat’s ass about professional musicians and only want to sell cheap sound cards. At the time I was trying to persuade him to lobby for the introduction of timestamped record and playback of MIDI, preserved through record and playback driver chains on Windows and Macs, but it fell on deaf ears. From reading I’ve done more recently, this has indeed been introduced, but many drivers and hardware devices just ignore the timestamps, as does some sequencing software.


Fast forward a bit more, up to about 3 years ago when I started recording the couple of albums I have out right now. They were both recorded mostly using Ableton and softsynths. Yes, timing still sucked on the record side, horribly so in terms of latency and pretty badly in terms of jitter, but since I was sequencing within Ableton and the softsynths’ timing was essentially sample-accurate, I could quantize things back into some semblance of not sucking so badly that I wanted to chainsaw my investment in equipment. But it still wasn’t right. I still couldn’t do things that I took for granted in 1988. Sound quality was greatly superior, but what use is that when I can’t play in time any more? Seriously?


Not too long ago I pulled one of my old hardware synths, an Ensoniq SQ-80, out of storage with a view to getting it working. I hooked it up via MIDI to my M-Audio (now Avid) Fast Track Pro interface, using Ableton as a sequencer. The SQ-80 has a neat feature that lets you internally disconnect the keyboard from its sound generation for exactly this purpose, so I set up Ableton to loop what I was playing straight back into it. I will not be polite here: the results were beyond awful. We’re not talking about me being an old-timer prima donna with golden ears whining about minutiae — the latency was so bad that it was unplayable, but worse (even) than that, I couldn’t play more than a few notes without notes getting stuck on. Playing slowly, this would happen every minute or two, but play a fast run and you could pretty much guarantee an immediate failure. It wasn’t even close to working, let alone usable.


Before anyone has a go at me for having a broken SQ-80, no, it’s not. I hooked a MIDI cable from the Out back to the In, and it played fine, no noticeable latency, just as if I was running it conventionally. Hooking up the MIDI out from Ableton to a Vermona MIDI-to-CV interface in my modular synth also gave the same stuck notes from hell and horrific latency and jitter. I tried sending MIDI clock — it was all over the place, terrible, unusable.


OK, so what are latency and jitter, and why do they suck so much?


Latency is easy to explain — it’s just delay. Notes go in, they get delayed, and they come out again. You hit a note and it sounds slightly late. This is unpleasant for musicians to deal with because it makes the instrument feel dead. English doesn’t have good ways of describing it, but ‘spongy’ or even ‘rubbery’ come to mind. I end up subconsciously hitting the notes harder to compensate. If you practice long enough, your brain starts to compensate for the delay, but it’s not an ideal situation.


Jitter is much, much worse. It’s like latency in principle, except that the delay varies randomly. This can’t be compensated for or learned around because it’s not predictable. If there’s a lot of it, it can make a seasoned pro sound like a 3-year-old’s first glockenspiel session.
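To make the distinction concrete, here’s a little thought experiment in code. It’s mine, not anything from a real MIDI stack, and the delay and jitter figures are made up for illustration:

    # Contrast latency with jitter. Figures are hypothetical.
    import random

    random.seed(1)

    intended = [i * 125.0 for i in range(8)]   # 16th notes at 120 BPM, in ms

    LATENCY_MS = 30.0                          # hypothetical constant delay
    JITTER_MS = 10.0                           # hypothetical +/- timing error

    latent = [t + LATENCY_MS for t in intended]
    jittery = [t + LATENCY_MS + random.uniform(-JITTER_MS, JITTER_MS)
               for t in intended]

    # Subtracting the known delay fully recovers the purely latent notes...
    print([t - LATENCY_MS for t in latent] == intended)        # True

    # ...but the jittery ones stay wrong by an unpredictable residue.
    residue = [t - LATENCY_MS - i for t, i in zip(jittery, intended)]
    print([round(r, 2) for r in residue])      # random errors remain

The constant delay disappears completely once you know what it is; the jitter is still there after you’ve corrected for everything you can measure, which is exactly why it can’t be played around.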


Most people probably can’t consciously hear the difference. I have to assume this, because otherwise the purveyors of modern music gear would have torch-wielding peasants camped outside their design centres as we speak. However, if you learn pretty much any musical instrument of the old-fashioned kind, you have to train your ears to be sensitive to timing in order to learn how to actually play in time accurately. Some instruments favour this more than others — bass guitar, drums and percussion being probably the most prominent. I’m unfortunate enough to play two of the three, so my ears scream, “NOOOOOOOOO!” at things that most people might just find a little sloppy or unimpressive.


Innerclock Systems have recently published the results of a very detailed study into the timing behaviour of music gear, both vintage and new. These numbers make fascinating reading if you’re just the right kind of obsessively nerdy.


So these fancy computer-based sequencers have been around for a long time now, but it’s interesting that the most significant beat-driven music genres of the last couple of decades haven’t really been based on them. Rather, house music was the TR-808 and TB-303, techno was more the TR-909, and electro and hip-hop were heavily driven by the Akai MPC series. What do all of these systems have in common?


Better than 1ms latency and jitter, often MUCH better.


What does Ableton have? Maybe 30 milliseconds of latency on a relatively fast, well-set-up system. The best I’ve been able to manage with solid reliability is 57.2ms, though I’ve been able to unreliably manage about 15ms of output latency. This is just the audio drivers — the VST and AU instruments add their own latency and jitter, as does USB if that’s the route by which note information finds its way inside. I’ve not tested it, but it wouldn’t surprise me if I often see worse than 100ms of latency when I’m playing a relatively complicated setup. At 120 beats per minute, this is 20% of the length of a quarter note, or nearly a whole 16th!
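For anyone who wants to check that arithmetic, it’s just this (tempo and latency figures as quoted above):

    # How much musical time a given latency eats at a given tempo.
    def latency_in_note_lengths(latency_ms: float, bpm: float) -> None:
        quarter_ms = 60_000.0 / bpm              # one quarter note in ms
        sixteenth_ms = quarter_ms / 4.0
        print(f"{latency_ms} ms at {bpm} BPM = "
              f"{latency_ms / quarter_ms:.0%} of a quarter note, "
              f"{latency_ms / sixteenth_ms:.0%} of a 16th")

    latency_in_note_lengths(100.0, 120.0)   # 20% of a quarter, 80% of a 16th
    latency_in_note_lengths(30.0, 120.0)    # 6% of a quarter, 24% of a 16th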


The reason this happens is the nature of Windows and macOS. They are multitasking, multithreaded operating systems tweaked to give a good user experience first, not for accurate timing at the sub-millisecond level. Since lots of tasks are always competing for processor time, there is no guarantee that the code necessary to deal with incoming or outgoing audio data or MIDI information will actually get executed when it really needs to be. Consequently, it’s necessary to use buffering in both directions to take up the slack — you basically need enough buffering to cope with the uncertainty in the response time. Buffering adds a delay, queuing up enough data that when the interrupt routine actually executes it won’t run out of information and cause a dropout in the audio. This is where the latency comes from. Some software, like Ableton, lets you tweak the buffer size so that it is just big enough to prevent dropouts. Faster computers with more CPUs can often manage with smaller buffers, but this isn’t guaranteed.

Audio is relatively easy to buffer because it’s a simple stream of numbers at a continuous rate. MIDI data is basically just note-on and note-off switching information, at a much, much lower data rate, but no less timing-sensitive than audio. Ideally, incoming and outgoing notes should be timestamped, so you get consistent delays rather than jitter, but it seems that even now this is usually not bothered with. I actually suspect that Ableton, as used in my test case mentioned above, wasn’t even sending out MIDI data in the order that it arrived, so note-offs were going out before their corresponding note-ons, resulting in stuck notes.
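As a rough sketch of how the buffer size turns into latency, assuming simple double buffering (a simplification of what real drivers do):

    # Audio buffer size -> latency. Double buffering is assumed here;
    # buffer sizes are the typical knobs in a DAW's audio settings.
    def buffer_latency_ms(buffer_samples: int, sample_rate: int,
                          num_buffers: int = 2) -> float:
        """One direction's worth of buffering, in milliseconds."""
        return 1000.0 * buffer_samples * num_buffers / sample_rate

    for size in (64, 256, 1024):
        one_way = buffer_latency_ms(size, 44_100)
        print(f"{size:5d} samples: {one_way:6.2f} ms one way, "
              f"{2 * one_way:6.2f} ms round trip")
    # 64 samples is ~2.9 ms one way; 1024 is ~46 ms one way, and that's
    # before any softsynth, plugin or USB MIDI delay gets added on top.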


The use case that I really would love Ableton for would be playing it via an external MIDI controller, maybe a keyboard, maybe an Eigenharp, then having it do clever things with that MIDI data and send it out to other instruments, including my modular synth. Nope. It kinda sorta works a bit, but it’s not really acceptable for serious use.


I think the music equipment business has been doing the “Nothing to see here, move along now!” trick for a long time. There are workarounds for some of this. If you’re recording external audio, so long as you’re not trying to monitor what you’re playing through the system, you can get away with a lot of latency because you can simply delay the audio after the fact so that everything lines up. Ableton does this, as do Logic, Pro Tools and the rest. If you’re playing a softsynth, you only get the outgoing half of the latency, which tends to be relatively consistent rather than jittery, so it’s not too horrible unless you’re playing something very fast or percussive. I once tried playing percussion real-time via USB MIDI and a softsynth (Battery). Um… no. It was not a pleasant experience. I suppose that some people manage to practice enough that they adjust.

Try this: look at some videos of people playing softsynths or triggering samples via something like an Ableton Push or one of the many USB MIDI based MPC clones — look really closely, and you’ll always see them hitting the pads noticeably before you hear the sound. Then look at videos of people playing actual drums. Yep, it’s enough to be visible if you know what you’re looking for.


I think the industry’s main Hail Mary these days is the fact that relatively few people actually learn keyboards traditionally, so they depend on step sequencing and quantization. That’s how I got through my last two albums, at least for the parts I didn’t just play real-time. If you’re inside the sequencer’s world and have advance knowledge of when to trigger a note, you can send it out via a softsynth essentially dead-on accurately.
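A sketch of why that works; the names are illustrative, not any real plugin API. With the note time known before the audio buffer is rendered, the host can pin the note to an exact sample:

    # Sequenced notes can be sample-accurate: the host places each event
    # at an exact sample offset inside the buffer it is about to render.
    SAMPLE_RATE = 44_100
    BUFFER_SIZE = 512

    def sample_offset(note_time_s: float, buffer_start_sample: int):
        """Return the note's sample offset within this buffer, or None if
        it doesn't fall inside it."""
        note_sample = round(note_time_s * SAMPLE_RATE)
        offset = note_sample - buffer_start_sample
        return offset if 0 <= offset < BUFFER_SIZE else None

    # A note on beat 2 at 120 BPM (t = 0.5 s) lands in the buffer starting
    # at sample 22016, at offset 34: accurate to 1/44100th of a second.
    print(sample_offset(0.5, 22_016))          # 34

A note played in from outside gets none of this, because the host only finds out about it after the fact.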


So how do you fix this?


There are a couple of companies out there making a business out of this. There’s Innerclock Systems, mentioned previously, and Expert Sleepers, both of whom essentially send out sync data over audio and convert it to MIDI in hardware. Expert Sleepers can also output CV/gate or MIDI note information as well as just clocks. This helps, and might even be a complete solution if I weren’t really a keyboard player: they solve the accurate MIDI timing problem for Ableton on the outgoing side, but they can’t help with latency, so for real-time playing they fall short.
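Some rough numbers on why audio-rate sync is so much tighter than MIDI clock (my own back-of-envelope arithmetic, not vendor specs):

    # A clock edge rendered into the audio stream is positioned with
    # single-sample resolution, and the hardware box just follows it.
    SAMPLE_RATE = 44_100

    print(f"one sample: {1000.0 / SAMPLE_RATE:.4f} ms")          # ~0.0227 ms

    # For comparison, MIDI clock ticks at 24 ppqn, 120 BPM arrive every:
    print(f"one MIDI clock tick: {60_000.0 / 120 / 24:.3f} ms")  # 20.833 ms
    # Even a little jitter on those ticks is audible, whereas sample-
    # accurate edges sit orders of magnitude below anything you can hear.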


The Akai MPC Renaissance is an interesting beast. It looks like an older MPC, but it actually uses software running on a Mac or PC. It does have its own audio and MIDI I/O and seemingly supports MIDI time code and MIDI clocks. I’d like to believe that it might solve the problem, but I have my doubts.


Other than building a studio from all pre-1990 gear (not something that’s really feasible due to the demise of tape), here’s what I’m probably attempting:



  1. Using Ableton and a Mac as a last resort. It still wins for mixing and mastering, and that’s unlikely to change, but I’m unlikely to use it for tracking or live use any more.

  2. Moving to for-real, 5-pin DIN MIDI and not using USB MIDI at all. Old-fashioned MIDI might be crusty and slow, but it has essentially zero latency or jitter unless it’s overloaded (see the back-of-envelope numbers after this list).

    1. Which has the corollary: don’t overload MIDI interfaces, so I need a bunch of them.



  3. Find a way to do sequencing that isn’t Ableton and is probably hardware based. Favourite is probably an older Akai MPC from the hip hop era, or maybe something like a Cirklon, though they are relatively spendy (this is a Sarahism for ‘not available cheap on eBay’), and the Cirklon doesn’t have the kind of interface I really want. I like modular-style sequencers (I have 3 of them in Euro format!), but they are a ‘thing’ in their own right and not something I’d want to use to put together a whole track.

  4. Find a way to synchronize that sequencer to a digital recorder (which probably won’t be Ableton either — I’m experimenting with using a Tascam DP32SD instead).
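On point 2, the wire timing of classic DIN MIDI is fixed by its 31,250 baud rate, and the back-of-envelope numbers look like this:

    # 5-pin DIN MIDI: 31,250 baud, 10 bits on the wire per byte
    # (8 data + start + stop).
    BAUD = 31_250
    BITS_PER_BYTE = 10

    byte_ms = 1000.0 * BITS_PER_BYTE / BAUD
    note_on_ms = 3 * byte_ms               # status + note number + velocity
    print(f"one byte: {byte_ms:.2f} ms, one note-on: {note_on_ms:.2f} ms")
    # 0.32 ms and 0.96 ms: slow as a data link, but well under 1 ms per
    # note and, crucially, constant. It only goes bad when too many
    # messages share one cable, hence the corollary about needing several
    # interfaces.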


This would potentially make for a very fun and fast workflow — mess around with the MPC and the modular to make something interesting and then basically just hit record on the Tascam. Rinse, repeat. Then, later, dump the audio from the Tascam into Ableton for editing and mixing, which is something that Ableton does supremely well.


By way of an experiment, I was playing around with my Korg Electribe yesterday. I was clocking it directly from the clock output on my Eurorack Trigger Riot module, with some hard, snappy bass sequenced with an 8-step analog sequencer, audio coming from a couple of analog sawtooth oscillators into a clone Moog System 55 low-pass filter. All I can say is, I think I’d forgotten what that kind of really tightly synchronized thwack-you-in-the-bum beat was all about.


I’m not in love with the Electribe. I don’t hate it, but it has a soggy feel, which I’m putting down to latency between hitting the pads and the audio. It’s not terrible, and I can play it. Sequenced, it’s fine, but I like to finger drum, so timing is important to me.

I’m starting to think that my best bet might be to find an old Akai MPC, a couple of which had built-in SMPTE timecode reader-generators. The way this would work is you ‘stripe’ a spare track on the multitrack (the DP32 has thirty-two of them, so that’s no big deal), then hook it up to a spare output (probably an FX send) so that when you hit play, the timecode streams back into the MPC, causing it to jump to the right part of the song and start playing. I could then sync up my modular and/or other hardware synths and have the timing dead nuts. A good-condition MPC 2000XL or MPC 4000 looks like it might do the trick, these being the only MPCs that (to my knowledge) included built-in SMPTE LTC. They actually do a decent job of sequencing MIDI from external keyboards, though they are better known for drums and sampling, obviously. I’ll miss the softsynths, but they could still be used to add some overdubs once the mix ends up back in Ableton, so that’s not so much of a big deal. That said, the possibilities of the Eurorack modular are nothing short of astonishing, so it wouldn’t hurt to be able to concentrate on that.
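For the curious, here are the numbers behind LTC striping (standard SMPTE figures, nothing specific to the MPC):

    # LTC is an audio-rate biphase signal, 80 bits per frame, so any
    # spare audio track can carry it.
    FRAMES_PER_SECOND = 30      # SMPTE non-drop; 25 for EBU
    BITS_PER_FRAME = 80

    print(f"bit rate: {FRAMES_PER_SECOND * BITS_PER_FRAME} bits/s")  # 2400
    print(f"one frame every {1000.0 / FRAMES_PER_SECOND:.1f} ms")    # 33.3
    # Frames are ~33 ms apart, but a reader locks to the biphase edges
    # and interpolates between frames, which is how the MPC can chase to
    # the right bar and then hold tight sync against the multitrack.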


I have some eBay cheapassing in my future, I think…




Please note: this was cross-posted from my main blog at http://www.mageofmachines.com/main/2015/09/13/things-that-keep-me-awake-at-night-356-audiomidi-timing-jitter-and-latency/ -- If you want me to definitely see your replies, please reply there rather than here.

#MoMBlog, #Musings, #Recording

I kind of got all enthused today and started designing a modular synthesizer.


Note to future self: I apologize, I knew it was going to be a pain in the ass, but it’ll be awesome. Honest!


I’ve wanted a modular since I was in the single-digits-of-age. Specifically, I wanted a Moog modular. A System 55. A really really big System 55. I just wanted to get that out there right away, so you all realize that the crazy started really young.


In more recent times, two albums ago realistically, I started using softsynths in a big way. I have a fairly chunky investment in Native Instruments and East West plugins as well as licenses for Logic, Ableton Live and so forth. Don’t get me wrong — softsynths are awesome and are utterly unapproachable for sheer awesomeness-of-noise-per-buck. The only problem is that they drive me batshit freaking nuts because of the delay and latency associated with doing digital audio with an operating system that really wasn’t designed for it.


I’m not a classically trained player, but I’m probably describable as traditionally trained, in the sense that I put in the years of no friends and 8 hours a day and the bleeding fingers. I can, and actually usually do, play my instruments real-time and have the skills to pull that off. This also comes with an annoying tendency to be able to hear when timing or pitch is off when any normal sane human would be totally fine with it and already on the way to the pub, thank you very much. Pitch isn’t so much of an issue with digital gear, but AARGH FREAKING BASTARD timing kind-of is. I can hear when something is a few milliseconds off — not only does it make things sound wrong, but it also throws off my playing. If you can tell the difference between a really good wired gaming mouse and the nasty Bluetooth mouse that came with your computer, it’s like that, but much worse.


I’m wanting to start on another album, but I really want to fix this before I get into it because it’s just way too frustrating otherwise. I’d also like to find a solution for gigs that gives me rock solid timing — I’ve done some live performances with softsynths, but it is quite challenging to pull off playing anything fast when you can’t quite hear exactly what you’re playing when you’re playing it.


Old skool analog has no timing problems whatsoever. Response time is basically on the order of single audio cycles, not the time it takes to queue up a couple of buffers and hoof them out over USB to an audio interface. Faster than I can hear, which is fast enough anyway. The undisputed Kings and Queens of analog are of course modular synthesizers. Happily, these days they are merely really expensive and complicated, rather than the price of a Ferrari and complicated. I’m not so great about the really expensive. I’m OK with the complicated.


[Image: a ridiculously awesome A-100 system that does not exist]

If you want to lose the rest of your day, go to Analog Haven and have a play with their modulargrid tool. This thing lets you spec out an arbitrarily enormous modular in one of several rack formats with modules from about 30 or so manufacturers. I had a go with it and came up with a truly epic modular configuration that would have cost >$10k. Basically I set out to spec a synth that could do the kind of things I typically do real-time with softsynths, except (nearly) all analog and fully real time. I went for the Doepfer A-100 Eurorack format because of the large variety of third-party modules available off-the-shelf and the relatively low cost compared with some of the other formats. That, and the smaller module size means I could cram more awesome into the lack of space that I haven’t quite got available.

The picture above (you may have to go over to the original post on mageofmachines.com in order to see all the images if you don’t see them) is not actually real — I made it with modulargrid. It’s basically three synthesizers in one, maybe four, kind-of. There are enough VCOs, Moog-clone ladder filters, VCAs and ADSRs to manage a 4-voice polyphonic pad, simultaneously with a 2-oscillator bass monosynth, a completely separate 2-voice Karplus-Strong plucked string synth, a 4-operator monophonic FM synth, digital delay, digital reverb and a 16-channel CV, 16-channel gate and 8-channel MIDI interface that will let the whole lot be run from Ableton Live. That’s a LOT of stuff in about the same physical space as a midrange Moog modular, for about a fifth the price. That said, a fifth of the price of a really nice car is still more than I want to pay, and I kind of have the bug to actually design some of this stuff.


There are a few things that give me the heebie jeebies. I’m not so keen on doing my own scratch-built VCOs because getting them dialed in so their tuning tracks accurately without drifting with temperature is the kind of problem I could solve, but I’d kind of rather it was someone else’s problem, if you see what I mean.


So basically, to sum up the stuff in the rack, there are a lot of little 4-into-1 audio mixers. These things are pretty much ubiquitous because there are so many cases where several signals need to be combined before moving on to the next stage. I picked out the modules so I hopefully wouldn’t run out of mixing capability if I tried to patch just about everything all at once. I think I counted 7 of those. I threw in a 4×4 matrix mixer too, because it looked like a nice thing to have. Mixers aren’t too difficult to design — not necessarily trivial, because there is still a fair bit you have to get right, but it seemed like a tractable place to start.


This morning I had at LTSpice and came up with a basic circuit for a module that loosely corresponds to a Doepfer A-138a Linear Mixer. Since I seem to need a boatload of these things, it’s low-hanging fruit that’s worth going after, I think.


[Image: M001 LTSpice model]

This is what I knocked up fairly quickly. A word of warning — I’ve not built one yet, so if you pick up on this circuit and find it doesn’t work, then sorry, but not my fault. :-) It’s a bit messy, but basically it models a 4-input mixer with a pot for each channel and a master gain pot. There is some extra voodoo in there for ESD protection on the inputs and outputs and for band-limiting — it should have a -3dB point at about 75kHz and optionally also at about 5Hz. There’s a not-drawn bypass switch that bridges out the cap linking the summing bus to the final gain stage, so you can choose whether to run this DC or AC coupled. The idea is to have an essentially flat response from 20Hz to 20kHz, but not much response outside that band, because modulars have lots of wire everywhere whose main object in life is to receive the transmissions of passing taxi drivers.

I also separately came up with a really simple analog 8-0-8 LDO reg and dual pi filter arrangement that should squash any hum on the power well below the noise floor. Yeah, NE5532 opamps, old skool, I know, but there is plenty of drooled-over classic gear stuffed full of those things and they are both multiple-sourced and really cheap. Input impedance should be very close to 100k, with output impedance close to 1k, as per the Doepfer standard. I have to say, it’s really nice to do some old skool dual-rail analog again after all these years. Running 3.3V single-supply is all the rage, but it’s just not the same somehow. :-)
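As a sanity check on those corner frequencies, here’s the usual RC corner formula inverted. The resistor value is a hypothetical stand-in; I’m assuming the Doepfer-standard 100k input impedance sets the scale, not quoting actual component values from the model:

    # Invert f = 1 / (2*pi*R*C) for the -3dB corners quoted above.
    from math import pi

    def cap_for_corner(r_ohms: float, f_hz: float) -> float:
        """Capacitance giving a -3dB corner at f_hz with resistance r_ohms."""
        return 1.0 / (2.0 * pi * r_ohms * f_hz)

    # Low-pass corner at ~75 kHz against a 100k resistance:
    print(f"{cap_for_corner(100e3, 75e3) * 1e12:.0f} pF")   # ~21 pF
    # Optional high-pass corner at ~5 Hz (the AC-coupling cap):
    print(f"{cap_for_corner(100e3, 5.0) * 1e9:.0f} nF")     # ~318 nF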


[Image: M001 schematic]

I redrew the schematic in EAGLE next. It was actually pretty straightforward — for once, I could find all the parts I needed in the EAGLE library. Normally this never freaking happens, so I have to spend hours creating the parts from scratch. But anyway, here it is. It looks a little odd because things that I’d normally draw as variable resistors are drawn as header connectors because, at least for the prototypes, the connectors and pots are going to be hand-wired rather than soldered directly to the PCBs. This is actually standard practice for most modules of this kind, but to be honest, if I ever have to make a lot of these things I’ll be looking to eliminate that wiring because it’s the most likely single point of failure.


[Image: M001 PCB layout]

Next up was a PCB. This was also pretty straightforward. I went for 0402 surface mount for most of the passives, but chose to go old skool for the regs because of easy availability, low cost and extreme reliability. I went for through-hole inductors because I won’t know exactly what I’ll be dealing with until I measure a real synth power bus, so it may be that something as simple and cheap as a wire link with a ferrite bead threaded on it will be sufficient. If not, I can chuck in a 1 microhenry inductor with decently low series resistance easily enough. The ESD protection is a bunch of surface mount Schottky diodes that clamp the inputs and outputs around the power rails — to maintain their own survivability, they are at the inboard end of series resistors and have low-value ceramic caps across them. Seems like overkill somehow — this board is about 80% power filtering, ESD protection and band limiting, and about 20% actually doing stuff.


[Image: M001 panel artwork mockup]


So anyway, I think the panel artwork will look something like this. Kind of like the Doepfer module that inspired it, but with the AC/DC coupling switch. I kind of want to go for a minimalist black panel look too, because Modular Moog (and end of argument). Since I have the CNC equipment for it, I’ll most likely engrave the panels from sign material — this stuff has a white core with a shiny black surface laminate, so I think it’ll look really good. It’ll be rather slow to machine, but at least all the holes and the outer cutout can be done at the same time (though not with the same cutter, naturally!).


I’m not quite done poking at this yet. I think I’m going to swap out the 3-pin power connector for one that matches the Doepfer standard directly, to make cabling easier. I also need to give the gain pot circuit a bit of attention to limit the maximum gain. Channel activity blinkenlights would be nice too. Hmmm…


I’m intending to write up the designs as I go along. I’m going to open-source the design. I’ll probably end up selling partial and/or complete kits, or maybe even prebuilt modules if people are interested, but first time around this is for my own use.


So what do you think? Am I as nuts as I think I am for attempting this?




Please note: this was cross-posted from my main blog at http://www.mageofmachines.com/main/2015/02/08/oops-i-appear-to-be-designing-a-modular-synthesizer/ -- If you want me to definitely see your replies, please reply there rather than here.

#Electronics, #ModularSynthModules
