Please note: this was cross-posted from my main blog at http://www.mageofmachines.com/main/
Many apologies if you wanted to come and see me perform on Saturday at the Analog Ladies event in SF. As most of you probably know already if you read my previous post, I’m about halfway through my notice period as I leave NASA to go to work at Google. One of my projects, a space camera for the SOAREX-9 spacecraft, is running late, so I need to work through the weekend to deliver it in time for launch. It’s not my fault it’s late, but launch deadlines are what they are, so needs must.
I can say with absolute truthfulness that I will not miss this kind of last minute crazy, but a part of me will miss it forever.
I’m sending this out via my blog because it’s probably the quickest way to reach everyone. Today I just accepted a job at the Google Mountain View campus working in the Security and Privacy Group.
This is… kind of huge. The money and benefits are way better, but the real reason I’m making the move is that I’ve found it harder and harder to deliver on projects in recent times due to ever-tightening budget restrictions and negligible access to resources. I’m looking forward to working on new, interesting problems that have more of a direct impact on people’s lives — the group I’ll be joining has, amongst other things, the job of keeping our email out of the hands of people we’d rather not have reading it, as well as doing a lot of other stuff to protect privacy. Kind of the sharp end of the not-being-evil bit, I think.
My last day at Ames will be January 21st. I’ll be starting at Google on the 25th — first week will be all the usual Noogler orientation stuff. And yes, apparently, I really do get a hat with a propeller on it.
This is all pretty difficult to wrap my head around. It’s been 10 years, more than twice as long as any other job I’ve ever had (not counting the various inter-contractor transfers).
Boggle. I can’t really believe I just pressed send on the bunch of emails telling people I’m leaving.
Many thanks are due to Maria Vorontsova, who kindly gave me a copy of my old album, Element 115, which dates to the late 1980s. I’m intending to release it via the usual iTunes/Spotify/Google Play/whatever route, but I want to do some mastering on it first. In the meantime, here are some (unmastered) mixes from the depths of digital time to keep you going.
There’s also a previously unreleased remix I did (with permission) for the folk god, Martin Carthy. Well… if you ask me to remix a folk track, you should expect the results to be a bit off-kilter. Anyway, here it is:
For the nerdy out there, the synth you hear on the album is mostly a Korg Z1, with a few things from my old Ensoniq SQ-80. The Martin Carthy remix was done entirely using Symbolic Sound’s Kyma.
This here thingummy is a Lexicon Vortex. It’s a rather rare effects processor dating to the late ’80s/early ’90s; the chips in mine seem to indicate it’s roughly ’93 vintage. I bought it as non-working, sans power supply. The eBay listing said it wasn’t working, that one of the knobs made the numbers change wildly on the display, and that the level knob was scratchy.
The power supply issue was annoying — it requires a 9V AC supply, something rather rare. I was fishing around online looking for one, then noticed I actually had one. Sitting on my lab bench. D’oh. Anyway, I plugged it in and everything lit up. The level knob was indeed scratchy, but about 5 minutes of turning it back and forth fixed that. As for the other fault, it was a simple case of the person who had it previously not actually reading the manual. It had me confused briefly, long enough to actually take the thing apart and put it back together again (I wanted to check the caps anyway, so that’s my excuse and I’m sticking to it).
So it seems I have a fully working unit, at maybe a fifth of the going rate. It’s in pretty good condition for its age — near mint.
As for what it sounds like… It’s basically a bit like a delay/chorus/early reflections/random delay kind of thing, but with some rather odd weirdness thrown in. I can’t say it sounds exactly like anything else. A bit dark and dirty, maybe. Right up my street, actually! It occurs to me that it would be interesting to patch this inline with a more conventional reverb or delay to dirty up the tail and add a bit of interest.
Next repair is going to be a DBX 160A compressor that needs a couple of replacement switches, once they arrive from Harman.
In other news, the studio build is going, albeit slowly. I have nearly everything I need now, barring some cables on order that should be here next week. There’s a large box sitting there containing half a dozen 16-way TRS snakes and a vast quantity of TRS patch cables, plus another box with 16 TRS-to-XLR-female cables, another 16 TRS-to-XLR-male, various MIDI cables, etc. One disappointment: I was hoping to use a couple of Behringer Ultrapatch Pro patchbays, which are actually really well made, but on testing they turned out not to be TRS, i.e., they are unbalanced only, so I had to order a couple more Nady PB48 patchbays instead. That should bring me to a total of 6, which is probably enough for all the gear — if I’ve counted properly.
Tonight I had a bit of determination come upon me, so I decided to have a go at a couple of fixes on two newly acquired pieces of studio gear: an Ensoniq DP Pro and a Yamaha SPX 900. Both are digital reverb/multi-effects processors that sat in the middle of the market in their time. Both are long obsolete, but they (now) work great and I’m happy to have them.
OK, so first up was the SPX 900. This was an easy one — just a dead battery. Like most pieces of studio gear from that era, this device uses battery backed static RAM to store user configuration information and patches, so when that battery dies, usually after about 15-20 years, it’s time for a new one.
Popping the lid off shows some admittedly fairly old technology, but this thing is built like a tank. Surprisingly, the power supply on the left appears to be linear — no switching regulators in sight! Badass. Anyway, it just took a couple of minutes to desolder the battery (you can see it as an orange ring with a silver center on the bottom right) and solder in a replacement.
On first power-up I got a message saying that memory corruption had been detected (well, duh); on the second power cycle it came up fine, straight to patch 1, like it’s supposed to. Before the battery swap, it was giving a battery-low warning on power-up. It didn’t have any user patches in it that I cared about, so I didn’t bother saving them (e.g. via MIDI sysex) or doing something heroic with a lab power supply.
Next up was an Ensoniq DP Pro. I just got it a couple of days ago — it was seemingly working, but the LEDs on the left of the front panel didn’t seem to be doing anything, nor did the 4-digit 7-segment display. I did a minor gulp when the person who sold it to me mentioned that he’d had his tech replace the battery — this always makes me worry, because I’ve had way more beyond-economic-repair situations caused by hamfisted repair attempts than by actual failures. True to form, on investigation, someone had made like a gorilla with a couple of ribbon cables that normally run between the main board and a 2-board stack that includes all the displays that weren’t working (bottom left on the disassembled view). Not only had they ripped them out, they just tucked the damned things in so it looked like they were connected. ‘Ere, it just came off in me ‘and, Guv. Anyway, after a faffy bit of taking the front panel assembly apart so I could get at the connectors, the fine-pitch ribbon connectors did look slightly damaged, but they still worked. Similarly, the cables had seen better days, but worked fine once I reassembled everything.
Yaay, lighty up blinkenlightythings!
Anyway, a quick word about these two processors. I must say, they both punch above their weight. The SPX 900 really does feel like the classic SPX-90 on steroids — a much cleaner sound, though not quite as buttery smooth as a REV-5. It has many more algorithms, though, including a couple heading in the direction of Eventide’s Black Hole algorithm. The reverbs are typically Yamaha: not as colored as a Lexicon’s, nor quite as weird as an Eventide’s, but very usable indeed. The DP Pro was a bit of a surprise — it really is excellent. I’d put its sound somewhere between a Lexicon and an Eventide, in the sense that it sounds good and dense, but it’s capable of plenty of batshit. It has a dual-algorithm structure, where the algorithms can be set up to work in series or in parallel, rather like an Eventide Eclipse. I think this thing is an unheard-of classic waiting to happen, so I’m glad I got my hands on one before someone famous decides they’re cool.
Fixing things always cheers me up — I’ve no idea why.
Rather than waiting for the next album to come along, here’s the first track I’ve completed in a long while. Here goes nothing:
Much more chilled out and even (gasp, dare I say it) New Agey than my previous work. I hope you like it. If you do, please share!
I was wondering whether to actually post about this or not, but like it or not, I think I pretty much have to do so.
I’m not exactly sure what caused this most recent transphobic crapstorm, but I suspect it was most likely the ugly defeat of the City of Houston’s HERO ballot measure. This was a right-wing photo opportunity for hate groups to openly walk around wearing T-shirts emblazoned with their transphobic crapulence. Not entirely surprisingly, the ballot measure was defeated. Let’s keep this in perspective: this was one city, admittedly a relatively progressive one, within a sea of good old boys, guns, oversized trucks and Confederate flags. This should really be an object lesson in exactly why it is that civil rights should not be decided by the popular vote, because as with so many similar cases in the past, all this proves is that the majority of people who can be whipped up sufficiently to rise from their pools of acquiescence to actually vote are mostly bigots.
Following shortly thereafter, a disgusting poll on change.org (no, I’m not linking it here because it doesn’t need my contribution to its Google page rank) went up that was exhorting major LGBT organizations to drop the T, becoming specifically LGB groups and explicitly excluding transgender people. I strongly suspect that its proposer is a right-wing sock puppet, but there are plenty of queer bigots out there so this may not necessarily be the case.

This is really beyond belief. Trans people have had a long history of being in the front line of the fight for queer rights — we don’t get to hide or slink back into the woodwork like so many cisnormative LGBs, so for us all-too-often we have to fight or die, in many cases literally. The T has always been the poor cousin of the LGBT world — personally I don’t have much to do with lesbian and gay events because frankly so many people can be total bastards to trans people within those communities, so it just isn’t worth the risk in return for any meager benefit.

I’m seeing lots of TERF language showing up — members of the T community have ‘different concerns’ than those of the LGB community, so we’re better off separate. BULL FUCKING SHIT. So what if we have different concerns? Lesbians and gay men have different concerns, so why wouldn’t they prefer to part ways? The hidden agenda here isn’t so much that they want to kick out the T. They want to disenfranchise the people who check more than one box — people who are both transgender and lesbian, gay or bisexual are really the people they want rid of. Why? Quite simply because they are transphobic bigots who don’t want nasty icky trans people anywhere near them, or (particularly) calling them out on their bigotry.
I am over, seriously over, commentary from people who want the T people out saying that, ‘we stand for trans people, but we want our own separate space from them.’ Yeah, right. As I said previously, BULL FUCKING SHIT. What you really don’t want is to confront your own bigotry, it’s really that simple. ‘Oh, I’m not a bigot, I find it really upsetting to be called a bigot, all I want is…’ BULLSHIT. If you hate trans people, at least have the fucking decency to admit to it. Maybe there should be a symbol, or a particular way of dressing you might want to adopt so we can easily spot you and avoid the hell out of you — you don’t want us anywhere near you, so why not?
In the pagan community, a number of elders recently publicly supported the kick-out-the-T measure. Aline O’Brien (aka Macha Nightmare), Ruth Barrett and Luisah Teish all signed the kick-out-the-T petition. Macha Nightmare has subsequently backpedaled and is now claiming that she disavows the petition, though I’m given to wonder why the hell she even thought for one second that signing it in the first place was any way appropriate. Luisah Teish is also backpedaling in her own not-actually-apologizing kind of way. Ruth Barrett is doubling down and crying victim, just like she always has. There has been quite a bit of discussion about whether or not these people deserve to be regarded as Pagan elders as a consequence. I find this a little ridiculous, because realistically eldership really just means that you happen to have a number of students and/or friends and followers who regard you as an elder, which is clearly the case for all of these people. None of them are my elders, and they will have my respect when hell freezes over.
Over the last couple of years or so I’ve equipped a pretty decent electronics lab for cents on the dollar relative to how much ‘real people’ would spend — I’m now having a go at doing the same thing for building a hopefully pretty decent home studio setup for music production purposes.
More than one person has asked me how I manage to do this — I’ve picked up some truly ridiculous bargains — so I must be doing something right.
I hope this helps, and good luck cheapassing your way to success! 😉
Thanks to the wonders of eBay bottom-feeding, I’ve welcomed home an old friend this weekend. It’s a slightly beaten up but fully working Yamaha REV-5 digital reverb of late 1980s vintage. It’s not one of the better known or most sought after reverbs from that period — for that you’re looking at a Lexicon, probably a PCM70 or PCM80 — but the REV-5 is an overlooked gem in my not so humble opinion. It’s a truism that everyone hates everything on the internet, so you can find plenty of people dissing this particular device, but I think that’s unfair.
Back when I had my for-real studio in the late ’80s, a REV-5 was my main reverb, with a couple of SPX-90s and an Alesis as backup. For the uninitiated, the REV-5 is a bit like an SPX-90 on steroids — much, much cleaner sounding and capable of being dialed up to super-dense or down to being very thin. It does all the things you’d want of a high-end reverb, realistically. You can dial in early reflections separately from the reverb tail, with direct control over the first three initial reflections. You can edit the amount of diffusion, and dial between a very even sounding (Yamaha-ish) tail and a more coloured Lexicon-ish tail. It can also do most of the SPX-90 tricks like chorus, flanging, symphonic, delays, gated and reverse reverbs, pitch shifting, etc., but at far higher audio quality. The front panel has a 3-band parametric (peaking, fixed Q, variable frequency) EQ that makes it trivial to fiddle with the frequency response. The user interface adds a lot more buttons relative to the SPX series, making it possible to directly enter parameters. You can also get one-button access to the 7 most commonly used factory or user patches.
I got change out of $100 for this thing. Just as the turn of the ’90s was the right time to buy analog synths, right now is the right time to build up a collection of rack gear as all of the larger old-skool studios are closing and/or shifting over to in-the-box or hybrid ProTools setups. I doubt the recording world will flip back to the old days of large consoles, but I have a gut feeling that as more and more people (like me, as it happens) have tried a heavily in-the-box approach for a while, they will want to go back to using more hardware.
I have to admit that part of this is because of bit rot. If you have an in-the-box recording setup, you’re stuck with the computer industry’s obsolescence cycle, so you can expect your investment to be largely obsolete within 5 years and probably completely unusable in 10. However, here I am fiddling around with a nearly 30-year-old rackmount reverb unit that not only works perfectly but sounds better than just about any plugin-based reverb I’ve used.
I’m not really entirely buying the whole analog summing thing — though it may make a (very) small difference in some cases, I don’t really subscribe to the idea that it’s a good idea to drop a ton of money on something you can barely hear. Far better to spend that on something you really can hear, which is why ancient outboard gear is such a steal right now, particularly if you’re looking to avoid sounding exactly like everyone else with a copy of Ableton Live and NI Massive.
PS: I still want a Lexicon PCM70 or PCM 80. The eBay bottom feeding continues…
Yeah, yeah, I know. It’s PC vs Mac all over again. This has been a bugbear of mine for years. I’ll come on to that, but for now, here’s a link to an LA Times article by Chris Kornelis that I think does a really good job at talking about this issue, including some lesser-known history:
Go and read the article. It’s actually pretty good.
I’ve talked about this previously elsewhere, most recently on Quora. For the can’t-bother-to-click-challenged, here’s a cut & paste of the text of my reply:
You are actually asking the wrong question. You should really be asking which of vinyl or digital has the better mastering engineers. From experience, that would be vinyl. They have to be, mostly because vinyl is actually pretty awful by default. It has relatively poor dynamic range, poor stereo separation, difficulty representing low frequencies accurately — the list goes on. If the engineer gets it wrong when cutting the master, the stylus on a record player can distort or even completely jump out of a groove. They have to match the track spacing to the amount of bass signal, or adjacent tracks bleed into each other. Signal-to-noise ratio isn’t particularly good either — it’s a little better than consumer/prosumer tape, but nowhere near as good as half-inch 30ips, and nowhere even vaguely close to 24-bit digital. Contrary to what another poster said, phase response is weird going on awful, because of the precompensation filtering that is necessary to get any bass response at all.

On the other hand, most digital formats have pretty good phase response, an essentially flat frequency response and, if you are using decent equipment, very little coloration of any kind. Technically, any way you compare the formats, digital wins by a huge margin.

So why does vinyl often sound better, with better stereo imaging and a warmer, livelier sound? Quite simply because mastering for digital does not require anything like the skill of mastering for vinyl. A really good mastering engineer can turn a good mix into something that sounds truly amazing — there are mastering engineers who can do digital really well, but they are rare. Vinyl mastering engineers, in many cases, are true masters of their art, no pun intended.
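An aside on that precompensation: it’s the standard RIAA equalization. Here’s a little sketch of my own (built from the published RIAA time constants, not anything from the thread) that computes the playback de-emphasis curve; the cutting side is simply the inverse, which is exactly why deep bass is such a groove-width problem on the lacquer:

```python
import math

# Standard RIAA time constants, in seconds.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_gain_db(freq_hz):
    """Magnitude of the RIAA playback (de-emphasis) curve in dB,
    normalized so that 1 kHz reads 0 dB."""
    def mag(f):
        w = 2 * math.pi * f
        # One zero at 1/T2, poles at 1/T1 and 1/T3.
        num = math.hypot(1.0, w * T2)
        den = math.hypot(1.0, w * T1) * math.hypot(1.0, w * T3)
        return num / den
    return 20 * math.log10(mag(freq_hz) / mag(1000.0))

# Playback boosts bass (~+19 dB at 20 Hz) and cuts treble (~-20 dB at
# 20 kHz); the cutter head is driven with the mirror image of this.
for f in (20, 100, 1000, 10000, 20000):
    print(f"{f:>6} Hz: {riaa_playback_gain_db(f):+6.1f} dB")
```

The roughly 40 dB of total tilt across the audio band is what the mastering engineer is fighting, and it’s also where the phase weirdness comes from.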
It was interesting seeing the reference to Bob Clearmountain complaining of the awfulness of vinyl, back in the day. My own experience was pretty similar to his — I remember having to tweak and tweak and tweak on mixes to keep bass under control. I’d start with something earthshaking in the studio’s control room, but end up sending out a master that was at best describable as polite. It’s fascinating, now, seeing that CD is essentially in its death throes as a format while vinyl does better and better numbers year on year. I wouldn’t rule out doing vinyl again, but do I have to? Really? Such a pain in the ass!
One modifier on this: I’m really talking about uncompressed digital formats here. Though higher bit rate MP3 and MP3-like formats sound fine to me, lower bit rate MP3 encodings can sound really, really bad on some material. The acid test is something with sharp clicks — low bit rate MP3 renders them as really brief farts. Pff! Pff! Or sometimes ffP! ffP! ffP!, which can be a little more distracting.
In other news, I just picked up (very cheaply via eBay) an Akai MPC 1000 drum machine. I knew it had some issues, but it looks like I’ll need to do a pretty extensive refurb on it to get it going. It’s missing a fader knob, has a damaged switch (which still works, but is wonky), another that is sticky like someone spilled something into it, and seemingly most of the pads are either dead or so worn that I just about have to karate-punch them to get them to respond. Thanks to mpcstuff.com, I have a replacement pad sensor board, a set of replacement pads (modified to be more sensitive than standard), a replacement switch assembly for the broken one, and a couple of replacement fader knobs on the way. I also have (from elsewhere) a RAM upgrade on the way, and will probably put JJOS on it. The other thing that just showed up is a JL Cooper SMPTE linear timecode sync box, so I’ll most likely be trying that out over the weekend. My quest for timing that doesn’t suck continues!
PS: Yes, it’s my birthday today. And International Talk Like a Pirate day. Y’arr, maties!
I’ve written about the Keithley before, but the HP 6612B power supply has been sitting in a box for a few days waiting on me getting time to unpack it and check it out. I’m very pleased. Can we say dead nuts? It’s just as accurate in voltage mode. This will be a very useful addition to the lab because it’ll save my usual spaghetti of wiring to monitor voltage and current with two separate bench multimeters. I also rather like the fact that it has GPIB, which, given its built-in measurement capabilities, has Possibilities. It’s not an SMU, but it’s a credible quarter-of-an-SMU, and seemingly accurate enough to do duty as a voltage/current standard if I only need 0.1%-ish precision.
If anyone gets the SMU joke, I’ll be impressed! 😉
Not. Actually. Satire.
Help us, Eb Metasonix! You’re our only hope!
No, I’m not talking about snake oil audio jitter that can be improved by buying $1500 cables hand-plated with gold between the thighs of virgins. I’m talking about the real thing — the kind that has been pissing me off since roughly 1991 and the demise of computers that were capable of decent timing accuracy.
Way back in the late ’80s, I had a commercial studio. We had a lot of synths and various outboard, most of it set up so that it could be sequenced via honest-to-goodness 5-pin DIN MIDI. Our master sequencer was none other than an Atari ST. Spectacularly basic by modern standards, but, funny story: this thing was bang-on, dead-nuts accurate with regard to timing, both running off its internal clock and synchronized with our 2″ 24-track tape recorder via an SMPTE timecode box.
Back then, if you hit record on the sequencer and played something, when you played it back it came back as pretty much identical to what you played in the first place. A common technique (that I still use to this day) when writing was doing everything real-time with MIDI, then when the time came to track things ‘for-real’, running the sequence one track at a time to get the best possible results. Timing was accurate enough that if I accidentally ended up playing back from tape at the same time as having the ST locked to SMPTE, you’d hear the audio flanging.
Fast forward to the mid ’90s, when I started trying to put a home studio together following the demise of my original studio, a master’s degree and an aborted attempt at a PhD. This time around I had much better computer hardware, running Windows. Everything you’d read, all the glossy mags, said this was the bee’s knees, but when I tried to record anything I started wondering if I’d lost my ability to play in time. I’d complain about it, and people would tell me I was stupid, or too picky, or that I just had bad hardware. Trust me, I spent a LOT on getting my hardware right, but my timing still sucked donkey’s balls. Though I really wanted to be able to use sequencers for composition, I think everything I recorded was played real time pretty much up until about 3 or 4 years ago. Same problem: I’d play something and it would come back wrong. Sure, I could quantize it, which would reduce (but not eliminate) the suck, but I couldn’t really understand why nobody else was bothered by this.

Many years ago I had an email conversation with the head of the MIDI standards committee (or whatever it called itself at the time). His response was basically: yes, we know, but we’re not going to do anything about it, because nobody cares, and our members are all from hardware companies who don’t give a rat’s ass about professional musicians and only want to sell cheap sound cards. At the time I was trying to persuade him to lobby for the introduction of timestamped record and playback of MIDI, preserved through the driver chains on Windows and Macs, but it fell on deaf ears. From reading I’ve done more recently, this has indeed been introduced, but many drivers and hardware devices just ignore the timestamps, as does some sequencing software.
Fast forward a bit more, up to about 3 years ago, when I started recording the couple of albums I have out right now. They were both recorded mostly using Ableton and softsynths. Yes, timing still sucked on the record side, horribly so in terms of latency and pretty badly in terms of jitter, but since I was sequencing within Ableton and the softsynths’ timing was essentially sample-accurate, I could quantize things back into some semblance of not sucking so badly that I wanted to chainsaw my investment in equipment. But it still wasn’t right. I still couldn’t do things that I took for granted in 1988. Sound quality was greatly superior, but what use is that when I can’t play in time any more? Seriously?
Not too long ago I pulled one of my old hardware synths, an Ensoniq SQ-80, out of storage with a view to getting it working. I hooked it up via MIDI to my M-Audio (now Avid) Fast Track Pro interface, with Ableton as the sequencer, set up to loop what I was playing straight back into the SQ-80 (the synth has a neat feature that lets you internally disconnect the keyboard from its sound generation for exactly this purpose). I will not be polite here: the results were beyond awful. We’re not talking about me being an oldtimer primadonna with golden ears whining about minutiae — the latency was so bad that it was unplayable, but even worse than that, I couldn’t play more than a few notes without notes getting stuck on. Playing slowly, this would happen every minute or two, but play a fast run and you could pretty much guarantee an immediate failure. It wasn’t even close to working, let alone usable.
Before anyone has a go at me for having a broken SQ-80, no, it’s not. I hooked a MIDI cable from the Out back to the In, and it played fine, no noticeable latency, just as if I was running it conventionally. Hooking up the MIDI out from Ableton to a Vermona MIDI-to-CV interface in my modular synth also gave the same stuck notes from hell and horrific latency and jitter. I tried sending MIDI clock — it was all over the place, terrible, unusable.
OK, so what is jitter and latency and why does it suck so much?
Latency is easy to explain — it’s just delay. Notes go in, they get delayed, and they come out again. You hit a note and it sounds slightly late. This is unpleasant for musicians to deal with because it makes the instrument feel dead. English doesn’t have good ways of describing it, but ‘spongy’ or even ‘rubbery’ come to mind. I end up subconsciously hitting the notes harder to compensate. If you practice long enough, your brain starts to compensate for the delay, but it’s not an ideal situation.
Jitter is much, much worse. It’s like latency in principle, except that the timing of the delay varies randomly. This can’t be compensated for or learned around because it’s not predictable. If there’s a lot of it, it can make a seasoned pro sound like a 3-year old’s first glockenspiel session.
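A toy simulation (my own sketch, not measurements from any real interface) makes the difference concrete: a fixed delay can be subtracted out exactly, but a random one leaves an error on every single note no matter what you do:

```python
import random

random.seed(1)

# Intended note times for straight 16ths at 120 BPM (125 ms apart).
intended = [i * 0.125 for i in range(16)]

LATENCY = 0.030  # 30 ms of constant delay
JITTER = 0.010   # up to +/-10 ms of random timing error

delayed = [t + LATENCY for t in intended]
jittered = [t + LATENCY + random.uniform(-JITTER, JITTER) for t in intended]

# Fixed latency is recoverable: subtract the constant offset and the
# groove comes back (give or take float rounding).
recovered = [t - LATENCY for t in delayed]
print(max(abs(a - b) for a, b in zip(recovered, intended)))  # ~0

# Jitter is not: even after removing the average offset, a random
# residual remains on every note, and nothing can predict it.
mean_offset = sum(j - t for j, t in zip(jittered, intended)) / len(intended)
residual = max(abs(j - mean_offset - t) for j, t in zip(jittered, intended))
print(f"worst-case residual error: {residual * 1000:.1f} ms")
```

This is exactly why a player’s brain can learn around latency but never around jitter.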
Most people probably can’t consciously hear the difference. I have to assume this, because otherwise the purveyors of modern music gear would have torch-wielding peasants camped outside their design centres as we speak. However, if you learn pretty much any musical instrument of the old-fashioned kind, you have to train your ears to be sensitive to timing in order to learn how to play in time accurately. Some instruments favour this more than others — bass guitar, drums and percussion are probably the most prominent. I’m unfortunate enough to play two of the three, so my ears scream, “NOOOOOOOOO!” at things that most people might just find a little sloppy or merely unimpressive.
Innerclock Systems have recently published the results of a very detailed study into the timing behaviour of music gear, both vintage and new. These numbers make fascinating reading if you’re just the right kind of obsessively nerdy.
So these fancy computer-based sequencers have been around for a long time now, but it’s interesting that the most significant beat-driven music genres of the last couple of decades haven’t really been based on them. Rather, house music was TR808 and TB303, techno was more 909, and electro and hip-hop were heavily driven by the Akai MPC series. What do all of these systems have in common?
Better than 1ms latency and jitter, often much better.
What does Ableton have? Maybe 30 milliseconds of latency on a relatively fast, well set up system. The best I’ve been able to manage with solid reliability is 57.2ms, though I’ve been able to unreliably manage about 15ms of output latency. This is just the audio drivers — the VST and AU instruments add their own latency and jitter, as does USB if that is the route by which note information is finding its way inside. I’ve not tested it, but it wouldn’t surprise me if I often see worse than 100ms of latency when I’m playing a relatively complicated setup. At 120 beats per minute, this is 20% of the length of a quarter note, or nearly a whole 16th!
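The arithmetic behind that last claim is worth spelling out, since it’s the difference between “a bit sloppy” and “musically broken.” A quick sketch:

```python
def note_ms(bpm, division):
    """Length in ms of one 1/division note at the given tempo
    (a quarter note is division=4)."""
    quarter_ms = 60000.0 / bpm
    return quarter_ms * 4.0 / division

latency_ms = 100.0            # a plausible worst case from the text
q = note_ms(120, 4)           # 500 ms quarter note at 120 BPM
s = note_ms(120, 16)          # 125 ms sixteenth note

print(f"{latency_ms / q:.0%} of a quarter note")  # 20% of a quarter note
print(f"{latency_ms / s:.0%} of a sixteenth")     # 80% of a sixteenth
```

So 100 ms at 120 BPM really is four-fifths of a sixteenth note — enough to push a played note almost onto the next grid line.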
The reason this happens is the nature of Windows and MacOS. They are multitasking, multithreaded operating systems tweaked to give a good user experience first; they aren’t really tuned for accurate timing at the sub-millisecond level. Since lots of tasks are always competing for processor time, there is no guarantee that the code that deals with incoming or outgoing audio data or MIDI information will actually get executed when it needs to be. Consequently, it’s necessary to use buffering in both directions to take up the slack — you basically need enough buffering to cope with the uncertainty in the response time. Buffering queues up data so that when the audio interrupt routine finally runs, it won’t find itself short of samples and cause a dropout, and that queue is exactly where the latency comes from. Some software, like Ableton, lets you tweak the buffer size so that it’s just big enough to prevent dropouts. Faster computers with more CPUs can often manage with smaller buffers, but this isn’t guaranteed.

Audio is relatively easy to buffer because it’s a simple stream of numbers at a continuous rate. MIDI data is basically just note-on and note-off switching information, at a much lower data rate, but it’s no less timing sensitive than audio. Ideally, incoming and outgoing notes should be timestamped, so you get a consistent delay rather than jitter, but it seems that this is even now still usually not bothered with. I actually suspect that Ableton, as used in my test case mentioned above, wasn’t even sending out MIDI data in the order it arrived, so note-offs were going out before their corresponding note-ons, resulting in stuck notes.
The use case that I really would love Ableton for would be playing it via an external MIDI controller, maybe a keyboard, maybe an Eigenharp, then have it do clever things with that MIDI data and send it out to other instruments, including my modular synth. Nope. It kinda sorta works a bit, but it’s not really acceptable for serious use.
I think that the music equipment business has been doing the “Nothing to see here, move along now!” trick for a long time. There are workarounds for some of this. If you’re recording external audio, so long as you’re not trying to monitor what you’re playing through the system, you can get away with a lot of latency because you can simply delay the audio after the fact so that everything lines up. Ableton does this, as do Logic, ProTools, etc. If you’re playing a softsynth, you only get the outgoing half of the latency, which tends to be relatively consistent rather than jittery, so it’s not too horrible unless you’re playing something very fast or percussive. I once tried playing percussion real-time via USB MIDI and a softsynth (Battery). Um… no. It was not a pleasant experience. I suppose that some people manage to practice enough that they adjust. Try this: look at some videos of people playing softsynths or triggering samples via something like an Ableton Push or one of the many USB MIDI based MPC clones — look really closely, and you’ll always see them hitting the pads noticeably before you hear the sound. Then look at videos of people playing actual drums. Yep, it’s enough to be visible if you know what you’re looking for.
I think the industry’s main Hail Mary these days is the fact that relatively few people actually learn keyboards traditionally, so they depend on step sequencing and quantization. That’s how I got through my last two albums, at least for the parts I didn’t just play real-time. If you’re inside the sequencer’s world and have advance knowledge of when to trigger a note, you can send it out via a softsynth essentially dead-on accurately.
So how do you fix this?
There are a couple of companies out there making a business out of this. There’s Innerclock Systems, mentioned previously, and Expert Sleepers, both of whom essentially send sync data out over audio and convert it to MIDI in hardware. Expert Sleepers can additionally output CV/Gate or MIDI note information as well as just clocks. This helps, and might even be a complete solution if I weren’t really a keyboard player: they solve the accurate MIDI timing problem for Ableton on the outgoing side, but they can’t do anything about latency.
The Akai MPC Renaissance is an interesting beast. It looks like an older MPC, but it actually uses software running on a Mac or PC. It does have its own audio and MIDI I/O and seemingly supports MIDI time code and MIDI clocks. I’d like to believe that it might solve the problem, but I have my doubts.
Other than building a studio from all-pre-1990 gear, not something that’s really all that feasible due to the demise of tape, here’s what I’m probably attempting:
This would potentially make for a very fun and fast workflow — mess around with the MPC and the modular to make something interesting and then basically just hit record on the Tascam. Rinse, repeat. Then, later, dump the audio from the Tascam into Ableton for editing and mixing, which is something that Ableton does supremely well.
By way of an experiment, I was playing around with my Korg Electribe yesterday. I was clocking it directly from the clock output on my Eurorack Trigger Riot module, with some hard snappy bass sequenced with an 8 step analog sequencer, audio coming from a couple of analog sawtooth oscillators into a clone Moog System 55 low pass filter. All I can say is, I think I’d forgotten what that kind of really tightly synchronized thwack-you-in-the-bum beat was all about.
I’m not in love with the Electribe. I don’t hate it, but it has a soggy feel, which I’m putting down to latency between hitting the pads and the audio. It’s not terrible, and I can play it. Sequenced, it’s fine, but I like to finger drum, so timing is important to me.

I’m starting to think that my best bet might be to find an old Akai MPC, a couple of which had built-in SMPTE timecode reader-generators. The way this would work is you ‘stripe’ a spare track on the multitrack (the DP32 has thirty-two of them, so that’s no big deal), then hook it up to a spare output (probably an FX send) so that when you hit play, the timecode streams back into the MPC, causing it to jump to the right part of the song and start playing. I could then sync up my modular and/or other hardware synths and have the timing dead nuts. A good-condition MPC 2000XL or MPC 4000 looks like it might do the trick, these being the only MPCs that (to my knowledge) included built-in SMPTE LTC. They actually do a decent job of sequencing MIDI from external keyboards, though they are better known for drums and sampling, obviously. I’ll miss the softsynths, but they could still be used to add overdubs once the mix ends up back in Ableton, so that’s not much of a big deal. That said, the possibilities of the Eurorack modular are nothing short of astonishing, so it wouldn’t hurt to be able to concentrate on that.
I have some eBay cheapassing in my future, I think…
I just picked up a very cheap Keithley 2015 THD multimeter from South Korea via eBay. It showed up today in very good condition — it could do with calibration, but everything seems to work fine. The display is quite dim, as is common with older vacuum fluorescent displays. The good news, though, is that this display module is still carried in stock and can be ordered (new!) from the Tektronix spares department for the princely sum of $66. Score! Even with the not-cheap shipping, a new display and a cal, this still comes out as a pretty good buy.
On the calibration side, I suspect it would cost about $120 to send it out to Simco or some such, which is probably well worth the money. Alternatively, I might actually send out my big HP, get that cal’d, then use that to calibrate the Keithley using a transfer standard. I need to get at least one of these meters done soon anyway.
Anyway, about the Keithley. It’s a full-featured 6.5-digit digital multimeter with the usual complement of DC and AC voltage and current measurement, 2- and 4-wire resistance measurement, and a few extra niceties like support for thermocouples, frequency and period measurement, diode checking, continuity, etc. DC/AC volts has a resolution of 0.1 microvolts, current 0.01 microamps. The continuity beep has user-selectable sensitivity and responds extremely fast. Of particular interest is a BNC on the back that outputs a reference audio signal which, when analyzed by the main inputs, can be used to measure total harmonic distortion in an audio circuit. It’s possible to do THD measurement with a spectrum analyzer or a scope with FFT capabilities, but this thing is essentially a 1-button test. Other than this, it’s built as a system DMM, so it has an extra set of probe connections on the back along with pretty extensive GPIB and front-panel programmability. It’s not quite as cool as the current-model Keysight and Keithley meters with graphical displays and built-in data logging, but for the price I’ll certainly not be complaining.
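For the curious, a THD measurement is conceptually simple: compare the power in the harmonics against the fundamental. Here’s a numerical sketch in Python with a synthetic signal. To be clear, this is just an illustration of the concept, not the Keithley’s actual algorithm:

```python
# Back-of-envelope THD: ratio of harmonic energy to the fundamental,
# read off an FFT. Synthetic test signal; not how the meter does it.
import numpy as np

def thd_percent(signal: np.ndarray, sample_rate: float, fundamental_hz: float,
                n_harmonics: int = 5) -> float:
    """THD = sqrt(sum of harmonic amplitudes^2) / fundamental amplitude."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    bin_width = sample_rate / len(signal)
    def peak(freq):  # look in a few bins around the expected frequency
        centre = int(round(freq / bin_width))
        return spectrum[max(centre - 2, 0):centre + 3].max()
    fund = peak(fundamental_hz)
    harmonics = [peak(fundamental_hz * k) for k in range(2, n_harmonics + 2)]
    return 100.0 * np.sqrt(sum(h * h for h in harmonics)) / fund

# 1 kHz tone with a 1% second harmonic mixed in:
fs, n = 48000, 48000
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)
print(f"THD: {thd_percent(sig, fs, 1000):.2f}%")
```

The meter wraps all this (plus a clean source on that rear BNC) into one button press, which is the whole appeal.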
NASA Edge (part of NASA Public Affairs, I think, I’m not certain) has released a rather cool photo of the Resource Prospector rover busy doing its stuff at Johnson Space Center as part of the RP15 rover tests. As some of you already know, I designed a camera that looks out of the bottom at the soil below, right at the point where a drill impacts the surface and pulls up material from below.
It’s often frustrating not being able to freely share details of what I do, but for once this picture is fair game, so have at!
This is a high-resolution image, so if you want the full-res version, click and it shall be yours.
I’d like to join a gym again. I like working out. I get very little exercise right now, and weight machines and treadmills are a couple of the things that do work for me without causing damage (iffy tendons due to some unspecified variant of arthritis). I’d like to be able to go swimming again, something I’ve not really done more than a couple of times in 20 years.
So why don’t I just go and do that?
Well, the reason is pretty simple: fear.
I’m a member of the T in LGBT. This means that I’m essentially at risk anywhere that might be transphobic by policy. If a gym or pool manager decided to call the cops because, oh, you know, I’d taken a pee in the bathroom, I stand a pretty decent chance of ending up dead, incarcerated, deported or even ‘just’ traumatized to the point where I won’t want to leave the house for a couple of years. I would also be at risk if some random asshole-of-the-public decided to call 911, too, but I have less control over that.
Today Gina and I were talking about the possibility of joining a gym, ideally one with a pool, that would let me work out while Gina swims. 24 Hour Fitness seemed like a good option, being that they have a location fairly close to us. I decided to check them out. I’d been a member a few years ago for a while, but let it lapse. I’d never had problems whilst there, but I avoided the locker room and was close enough to home that I could get away with not changing in the gym.
This time around, I wanted to really know what their policies were. From their web site (24 Hour Fitness Membership Policies):
24 Hour seeks, enrolls and maintains memberships without regard to race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, marital status, sex, sexual orientation or age. It is further club policy that no circumstance or conduct undertaken by club personnel shall have the effect of discrimination on the basis of any of the aforementioned classifications. All club members shall have full and equal access to the club facility. All members with disabilities shall be entitled to reasonable accommodations for their physical and mental impairments. Any member who believes that he/she is/has been treated unfairly on any of the aforementioned matters should first report to club management or to 24 Hour at 1 (800) 432-6348.
At first sight, this looks like a really good statement. However, there is one thing missing. Exactly one thing missing. I’m not going to belabor the point by spelling it out, but whenever I see this, I can’t help translating this as:
24 Hour welcomes absolutely everyone. NO WAIT, NOT YOU, YOU CAN FUCK OFF. It is further club policy that no circumstance or conduct undertaken by club personnel shall have the effect of discrimination on the basis of any of the aforementioned classifications, BUT PEOPLE LIKE YOU ARE FAIR GAME. All club members shall have full and equal access to the club facility (REMEMBER, NOT YOU). All members with disabilities shall be entitled to reasonable accommodations for their physical and mental impairments (BUT NOT YOU). &c, blah blah blah.
Yeah, that’s nice. Makes you feel all fuzzy and welcome, yes?
I’ve been making a bit of an effort to get things-GPIB-related working a bit better around the lab. I have been hair-tearing a bit with my Prologix ethernet-GPIB interface recently. I seem to be able to get it to mostly work fine with single instruments, but it’s not happy with some of them in combination, particularly when my HP5342A microwave frequency counter is on the bus. This could of course be the counter, but anyway, I was fiddling with my HP8591A spectrum analyzer from LabVIEW via the Prologix with some level of success. I did, however, remember a bit of software I picked up some time ago — KE5FX’s HP7470 plotter emulator. This thing works great with the HP speccy!
Anyway, here is a plot from it:
You can initiate the plot from within the plotter emulator by selecting the GPIB device from a pulldown menu. It takes about 2 or 3 seconds to pull the data down, then the plot appears. The trick to get it looking sexy like this one is to set the image size as big as possible, swap to a black background and the alternate colors, then save the image as a BMP, load it in Photoshop, scale it back down a bit and bring up the levels. OK, I know not everyone is as much of a Photoshop junkie as me, but Gimp will do this sort of thing just fine too. Then again, if you’re not too finicky, you might find the raw output in default mode just fine anyway.
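If you’d rather script this sort of thing yourself, talking to an instrument through the Prologix Ethernet adapter from Python is just a raw TCP socket (the adapter listens on port 1234) plus the adapter’s `++` configuration commands. The IP address, GPIB address and the `ID?` query below are placeholders — check your adapter’s settings and your instrument’s programming manual before trusting any of them:

```python
# Minimal sketch: one query to a GPIB instrument via a Prologix
# GPIB-Ethernet adapter. Addresses and the query string are assumptions.
import socket

def prologix_query(ip: str, gpib_addr: int, query: str, port: int = 1234) -> str:
    """Connect to the adapter, address the instrument, send one query,
    and return the instrument's reply as text."""
    with socket.create_connection((ip, port), timeout=5) as s:
        def send(line: str):
            s.sendall((line + "\n").encode("ascii"))
        send("++mode 1")             # act as the bus controller
        send(f"++addr {gpib_addr}")  # address this instrument
        send("++auto 1")             # read back automatically after a query
        send(query)
        return s.recv(4096).decode("ascii", "replace").strip()

# Example (placeholder IP and address -- adjust for your own bench):
# print(prologix_query("192.168.1.50", 18, "ID?;"))
```

Nothing the plotter emulator does is magic, in other words — it’s just doing this with the analyzer’s plot output and rendering the HPGL.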
So, KE5FX, if you’re Googling yourself and spot this post, many thanks and 73s de NQ6K.
PS: The plot is a QPSK signal (just random data from a PRBS generator) at 250 MHz with a 5 MHz symbol rate, 100 averages.
As some of you may know, but most won’t, I’ve been a user of Omnifocus through various versions for several years now. At a superficial level, it’s a to-do listing app that cloud-syncs across Macs, iPads and iPhones, so your to-do items can follow you from device to device. Integration with Siri on mobile devices also works out nicely, letting you say ‘Siri, remind me to buy the cat a new Ferrari,’ which will automagickally create a reminder to bat the car a new ferrite, or something.
If you look at Omnifocus as ‘just’ a to-do list app, you’re not quite getting the point. I’m in way over my head on multiple projects at once much of the time. I literally have so many to-do items that it’s impossible to remember them, let alone track them, and I’m well into the territory where a linear list would be long enough that finding anything in it isn’t really feasible.
Yes, I’ve read Getting Things Done, by David Allen. I found many of his ideas really interesting, and I think I’m now using most of them.
So what’s this GTD thing all about?
Well, the basic idea is that it isn’t sufficient to just divide your to-do items into projects — rather, you also divide them into contexts, giving you a second view into the mess of items. What’s meant by a project is pretty obvious — something like ‘Remodel the kitchen’ would be a great example. Individual tasks should be things like ‘Order a new stove’: a single action that doesn’t break down any finer than that. Importantly, tasks should not be split up between personal and work — the system really works best when you glom all of your tasks into it. Contexts indicate where the task is to be carried out (with a loose definition of ‘where’). Email, Home Depot, In The Garage, At Work, etc., are simple examples of contexts. I like to break down both projects and contexts hierarchically. Breaking down projects makes immediate sense, e.g.:
Home : Kitchen Remodel
Work : Project Alpha : Presentations : How to Pickle your Ooblefetzer
Some real(ish) examples of broken-down contexts would be:
Computer : Internet : Facebook
Work : Bldg 123 : Conference Room 6
Computer : Purchasing : Amazon
Outdoors : Mall : Home Depot
Outdoors : Mall : Safeways
What this lets you do is things like deciding to head to Home Depot and then easily pick up a list of everything you need to do while you’re there, even if those things are spread across many projects. That’s the Getting Things Done level. But you can kick it up a notch — if you are going to the mall, you can easily see everything that needs doing under every context that derives from that. Really, GTD and GTD-like systems can’t ever make time where none exists, but they are brilliant at not forgetting things and avoiding wasting time repeating things that didn’t really need to be repeated.
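The mechanics are easy to sketch in code. Here’s a toy Python model (all task names invented) where each task carries a hierarchical project path and a hierarchical context path, and a context query matches everything at or below that node:

```python
# Toy projects-plus-contexts model. A context query like
# ("Outdoors", "Mall") matches that node and all its children.
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    project: tuple  # e.g. ("Home", "Kitchen Remodel")
    context: tuple  # e.g. ("Outdoors", "Mall", "Home Depot")

def in_context(task: Task, query: tuple) -> bool:
    """True if the task's context is the query node or any child of it."""
    return task.context[:len(query)] == query

tasks = [
    Task("Order a new stove", ("Home", "Kitchen Remodel"),
         ("Computer", "Purchasing", "Amazon")),
    Task("Buy cabinet hinges", ("Home", "Kitchen Remodel"),
         ("Outdoors", "Mall", "Home Depot")),
    Task("Pick up groceries", ("Home", "Errands"),
         ("Outdoors", "Mall", "Safeways")),
]

# Heading to Home Depot: one context, tasks from any project.
print([t.title for t in tasks if in_context(t, ("Outdoors", "Mall", "Home Depot"))])
# Going to the mall at all: everything under the parent node.
print([t.title for t in tasks if in_context(t, ("Outdoors", "Mall"))])
```

The prefix match on the context path is the whole trick: the parent query sweeps up every child context for free.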
Another thing GTD is awesome at is C. A. R. Hoare’s concept of ‘waiting faster.’ Everyone hates waiting — I’m sure I, like most people, feel like I waste half my life waiting for things: stuff to be delivered, other people to reply to emails, applications to be processed, etc. Tony Hoare (admittedly in the context of the mathematics of concurrent processes, but hey, I’ll steal anything that works!) suggested that by waiting for as many things as possible at once, then acting on whichever one completes first, you end up waiting as little as possible and being as efficient as possible. I use Omnifocus to track everything and everyone I’m waiting on, which means I can ask lots of people for lots of things all at once without getting stressed out. The difference this makes to my effectiveness is pretty surprising.
Omnifocus also lets you tag every task with your estimate of the time it will take. This takes a bit of work to maintain, but it gives you a very important third route into the data. I use this to create a ‘Fast Attack’ view of my to-dos, which cuts across all my projects and contexts, limited to tasks taking no more than an hour and sorted so that the fastest things come first. With this, if for example I’m told I have half an hour before a takeaway shows up, I can rattle off a few emails or update my timesheet or whatever with time I’d otherwise probably spend staring mindlessly at Facebook.
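Conceptually, the ‘Fast Attack’ view is just a filter and a sort. A toy sketch, with invented task names and estimates:

```python
# 'Fast Attack' in miniature: keep only tasks estimated at an hour or
# less, shortest first, ignoring project and context entirely.
tasks = [
    {"title": "Update timesheet", "estimate_min": 10},
    {"title": "Reply to vendor email", "estimate_min": 5},
    {"title": "Draft design doc", "estimate_min": 180},
    {"title": "File expense report", "estimate_min": 25},
]

fast_attack = sorted(
    (t for t in tasks if t["estimate_min"] <= 60),
    key=lambda t: t["estimate_min"],
)
print([t["title"] for t in fast_attack])
```

The long-haul items drop out entirely, which is exactly what you want when you only have half an hour.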
Setting deadlines on tasks is really important. It’s a GTD principle, but Omnifocus does this really well. You can defer a task, which means that it will be hidden until a specific date and time, or set a due date, which will start warning you when it’s coming up and nag you when the date has passed. From personal experience, I have learned only to ever set due dates when there really is a due date for the task — if I ever get carried away and start creating a schedule for myself, all that happens is that everything gets out of hand and nothing really gets done, and I’m too scared to open OmniFocus because there are 58 red tasks staring me in the face. No, don’t do that. If it’s something like a paper that’s due on a particular date and time, go for it. That’s what this is for. But don’t ever use due dates when there isn’t really a hard deadline, or you’re missing the point of the system. Omnifocus has some very nice features for creating repeating tasks — I can, for example, have it remind me to suggest going to see a film. If I check this off, the reminder goes away for 2 weeks, then starts popping up again. The other kind of repeating tasks have a hard interval, so I have reminders to submit my time sheets, do my weekly and monthly reporting, pay my rent, etc.
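The two repeat flavours behave quite differently, and the difference is worth spelling out. A little Python sketch (the dates are made up):

```python
# Two kinds of repeat: one measured from when you *complete* the task
# (the 'see a film' reminder slides two weeks from whenever you last
# checked it off), versus a fixed schedule (rent is due on the 1st no
# matter when you actually paid last month).
from datetime import date, timedelta

def next_after_completion(completed_on: date, interval_days: int) -> date:
    """Repeat-from-completion: the next occurrence slides with you."""
    return completed_on + timedelta(days=interval_days)

def next_fixed(scheduled_for: date, interval_days: int) -> date:
    """Fixed-interval repeat: the schedule doesn't care when you finished."""
    return scheduled_for + timedelta(days=interval_days)

# Saw the film on June 10th -- next nag is two weeks after *that*:
print(next_after_completion(date(2015, 6, 10), 14))  # 2015-06-24
# Rent scheduled for June 1st -- next due date is fixed regardless:
print(next_fixed(date(2015, 6, 1), 30))              # 2015-07-01
```

(The fixed example uses a crude 30-day interval for simplicity; a real monthly repeat would anchor on the calendar day.)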
Omnifocus implements GTD’s recommendation to regularly review your task lists. You can set, per project, an interval at which you want to review everything. Some people like to set this to 1 week, but I actually like to do it daily. If I don’t have time, it can wait until tomorrow, but going through my task lists once a day, even very cursorily, ruthlessly putting projects on hold if I can’t work on them yet, deferring tasks until later when I can, and fixing things up as plans change, is really the only way I can keep everything on track.
So far, this is all standard(ish) Omnifocus and GTD. I have some of my own tweaks and brain-hacks, however.
My Omnifocus Kanban hack
One other feature I’ve had a love/hate relationship with in Omnifocus is flags. You can flag an item, which visibly shows its importance, and flags can be sorted on or shown in their own query. I find this psychologically bad — if I have flagged items, it stresses me out, and I also don’t necessarily make good decisions about what to work on if something is nagging at me. Flags are an invitation to procrastinate, in my opinion. Instead, I abuse the flag system for something completely different — Kanban. The Kanban idea comes from manufacturing, where you have a board with (nominally) 3 columns — the left column is things to do, the right column is things that are completed, and the middle column is things that are in progress. So far, so obvious, but Kanban’s magic special sauce is that only a fixed number of things at most are allowed into the middle column at once. This stops manufacturing processes from getting gridlocked or producing lots of stuff that isn’t really needed yet.

Omnifocus doesn’t really support Kanban, but it’s possible to abuse the flag system for it. Basically, if something isn’t flagged, it’s in the ‘left’ column. If it’s flagged, it’s in the middle column. If I’ve already checked it off, it’s in the right column, logically speaking, though I never actually get to see something that looks like a traditional Kanban board. So basically, I let myself flag 3 to 5 things I’m ‘doing’ at once. Even this is really too many, but what it does is give me a one-button view of the stuff I Really Am Getting On With Right Now. My ‘Fast Attack’ query covers all the little faffy short tasks that aren’t really even worth flagging because they get done really quickly anyway. Between those two, and just these two, I know what I should be doing, and don’t forget anything.
Psychologically, this really helps, because these lists never have more than 4 or 5 items in the Flagged/In Progress view and maybe a couple of dozen in the fast attack view, so it doesn’t get overwhelming.
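The essence of the hack is just a hard work-in-progress limit on the flagged list. A toy sketch (task names invented; note that OmniFocus itself won’t enforce the limit — that discipline is entirely manual):

```python
# Flags-as-Kanban: flagging a task moves it to the 'in progress'
# column, and a hard WIP limit refuses the flag once the column is full.
WIP_LIMIT = 5

def flag(task: str, flagged: list) -> bool:
    """Flag a task unless the in-progress column is already full."""
    if len(flagged) >= WIP_LIMIT:
        return False  # finish something first
    flagged.append(task)
    return True

in_progress = []
for t in ["mix album", "fix camera firmware", "write blog post",
          "calibrate meter", "patch modular", "one task too many"]:
    flag(t, in_progress)

print(in_progress)       # the sixth task was refused
print(len(in_progress))  # 5
```

Refusing the flag instead of queuing the task is the point: the only way to start something new is to finish something old.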
The Input/Output Hack
This one is due to me personally as best I can tell. I had 3 or 4 false starts implementing GTD which kind of worked but always ended up failing. In a couple of cases, the amount of stuff just got out of hand and I couldn’t really cope with it, to the point that the system just fell apart. In a couple of cases, it worked so well that I ran myself into physical exhaustion that took weeks to recover from. This is the most recent version of my personal system that, so far, seems to be working really well for me.
I have a very strong work ethic. In work time I tend to do work stuff. That means that I tend to prioritize things that I need to deliver to someone or do for someone extremely highly, to the extent that this dominates. In extremis, I’ve found myself working crazy hours on a project and literally only eating, sleeping or doing work directly on that project, never allowing myself to prioritize anything else. As a consequence, I tend to build up what I have come to call infrastructure debt. By never really allowing myself time to build infrastructure — to do the things that allow everything else to get done — I’m always way more stressed than is OK, and tend to be cobbling together ways of working rather than having everything to hand. It occurred to me that I needed a brain hack to fix this, and it was going to take something like Omnifocus to pull it off. Thing is, I have no difficulty figuring out what needs done to put all this infrastructure in place; it’s just that, normally, I wasn’t allowing myself to spend any time on it. The mythical ‘free time’ never actually occurred, because I was either working or flat-out exhausted.
Here’s the hack. I think it’s pretty cool.
I now divide all my projects, without exception, into the following four categories:
These are the four folders at the top of my project hierarchy. Work projects mostly go into Output. Personal projects that create something also go into Output. Stuff I need to do so that I can effectively work on Output tasks goes into Input. I can use the perspectives feature in Omnifocus (Pro version only, but well worth the $$$) to create myself a set of 3 buttons:
So basically, on a morning, I can decide. Am I exhausted? Then I should click Rest Day, use that as a suggestion for something to do and a reminder of things that Must Get Done Or Else. If I’m feeling really ‘On’, I’ll click Output Day, which houses the tasks that typically need the most braining. If I’m kind of in the middle, not really feeling focused enough for detailed work, I’ll click Input Day, whose tasks tend more toward the physical. My work ethic guilt makes it hard to hit anything other than Output Day, but I know the consequences of that all too well. In all cases, if I decide to do a task that’s really brief, I’ll do it and just check it off. If it’s something more substantial (more than an hour typically), I’ll flag it and add it to my Kanban-hack-repurposed Flagged list — by keeping this list to no more than 3 – 5 items, it stops me from being overambitious and running myself into the ground with overwork. Also, I know I really suck at multitasking, so the best hack for dealing with that is to only do things one at a time, which is kind of the point of all this.
Summarizing my System
To sum up, the way I work this is each morning, with my coffee, I generally do a daily review of all my tasks, so by the end of that I have checked off anything I missed and have a pretty good idea where I’m at. I mercilessly put projects on hold if I can’t work on them because I’m waiting for something — this is key to keeping things manageable, as is using Defer to throw something forward in time to pick up on again later. By looking at my Flagged/In Progress button, I can remind myself what I’m in the middle of, and add one or two more things to that list from my Input or Output perspectives. If I have a few minutes to spare, I can use my Fast Attack perspective to kick out a few emails or whatever. I capture new tasks straight into Omnifocus wherever possible, but I do heavily use the ability to create tasks via email otherwise, then I file that task appropriately next time I do a review.
Right now, I have 3 concurrent major projects, a fourth semi-unpaid work project, a musical personal project, social stuff and other stuff I wouldn’t mention here all going on at once, and amazingly it’s not really stressing me too much and I’m pretty much staying on top of it all. Considering that I am someone who always regarded themselves as really sucking at this kind of being-organized, this says a lot.
YMMV. IANAL. I am not your mother. GTD doesn’t work for everyone, particularly if you don’t have much leeway in organizing your time. GTD has a cultlike following, for sure, but I’m not a true believer — I junked it several times before hitting on this approach, particularly the Input / Output hack. I am not inherently awesome, and do screw up sometimes.
I seem to be getting asked my opinion about Caitlyn Jenner’s Vanity Fair debut quite a lot right now. Rather than repeat myself all over social media, I thought it might make sense to write something about it here.
Firstly, this is complicated. Gender is complicated. I barely understand my own, so I definitely don’t understand anyone else’s. People seem to be looking for an explanation, nevertheless, so I’ll do my best.
OK, so where do we start? The first thing to understand is that Caitlyn most likely identified as female way back, certainly for years, probably right back to childhood. She was an Olympic athlete – this is something that I saw the edges of, back in my late teens when I was involved in yacht racing. The level of singular purpose, focus and dedication that getting to the Olympics requires is quite beyond anything 99.999% of us will ever see. You can take it from this that she has at least the capability of a singularity of purpose and drive to succeed that most of us have no concept of.
She’s not dumb. That much seems clear. Few people involved in sport at that level are, in my experience. A decision to transition would not have been easy – it never is – but doing so when you’re already very much in the public eye is pretty much unprecedented. The closest previous case might be Lana Wachowski, but she was nowhere near as much in the public eye as Caitlyn.
Transitioning is by nature a slow and difficult process. Having access to the money for surgery, particularly facial feminization surgery, makes a huge difference to both the timeline and ultimate outcome, but still, it’s not an instantaneous thing. Yet the media was carefully managed to report this as a BOOM! MIC DROP! event. Caitlyn has basically gone at transitioning like someone wanting to win an Olympic gold, and given the photos, seriously nailed it. It’s like she chose femininity as a metric and decided to max it out, to the extent that maybe 0.01% of cisgender women could even get close.
I don’t know if any of this is a good or bad thing for transgender people. Only time will tell. For sure, it’s got people talking to an extent that has barely happened previously, with the possible exception of Laverne Cox’s Time Magazine interview. None of this would have been imaginable 20-odd years ago when I transitioned. Not 10 years ago, or even for that matter 5 years ago. The mainstream media are being positive about transgender issues. The orthodoxy has flipped. Transphobia is now being clearly called out as hate – everyone who has posted transphobic opinion about Caitlyn’s coming out seems to be getting eviscerated by public opinion. It seems that wrong-gendering or deadnaming Caitlyn is a one-way ticket to douchenozzleville, population you. Fox News is having a predictable shit fit, but just seems to be digging itself in deeper.
My gut tells me this is probably all for the good.
Only time will tell.