Yup, I use Max (in its M4L incarnation) all the time...right alongside the Hewlett-Packard sine generators, the 1960s beatboxes, the wall of processors, the modular sandbox, the cool polysynths, etc etc etc etc...

A lot of the reason WHY I have all of this stuff in one room is that, when I was still in academic study, I ran across two different professors who insisted that you had to keep all of these different electronic music media separate. And frankly, I didn't see any rationale for that.

In one case, at the University of Tennessee, I got tasked with doing a semester final project purely on the Synclavier. By that point, I knew how...well, ANAL...the prof in question was about the parameters of his assignments. But I also knew he was one of these guys who claimed he knew exactly what you were doing in a piece when, clearly, he didn't. The other thing he insisted on was that I had to use the "snazzy" new Yamaha automated mixer...piece...of...crap in full automation, synced to a set of visuals (slides, locked to a "buzz track").

Yeah, right. OK...first thing was, turn OFF the hideous moving-fader nonsense. I'd been mixing without automation for about a decade at that point, and that was on big desks like MTSU's Harrison MR3.

Then there was the Synclavier itself. First up, said prof made a BIG point of noting that Synclaviers have no noise generation capabilities...only pure sines and harmonics. Yeah, right. So, if I set the fundamentals for a patch at 1, 2, and 3 Hz, then start combining partials above the 16th harmonic at 100% level...oh, LOOK! NOISE! Sorta...but it needed a "touch", so I dragged the EML 200 out of the "analog" studio into the "digital" one and used it to nudge the FM pandemonium into the right "feel". I needed some delay as well...supposedly, I was to use resources in the Synclavier patch, but that noise-band thing really ate up the cycles. So, in yet another no-no, I pressed the studio's PrimeTime II delay into service and futzed with the EQ to make it a tiny bit more "brittle".
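(For the curious: you can hear roughly what that trick does without a Synclavier in the room. Here's a toy numpy sketch of the principle...my own reconstruction, NOT the Synclavier's actual partial-timbre engine, and the partial counts and random phases are my choices for the demo. Stack equal-level sine partials above the 16th harmonic of 1, 2, and 3 Hz fundamentals, and the components land only 1-3 Hz apart...so close together that the ear gives up and hears a noise band.)

```python
import numpy as np

SR = 48_000                       # sample rate in Hz
DUR = 4.0                         # seconds of audio
t = np.arange(int(SR * DUR)) / SR
rng = np.random.default_rng(1)

def partial_stack(f0, lo=17, hi=400):
    """Sum equal-amplitude sine partials of f0, starting just above the
    16th harmonic. Random phases keep the sum from collapsing into a
    periodic click train, which is what same-phase harmonics would do."""
    out = np.zeros_like(t)
    for n in range(lo, hi):
        out += np.sin(2 * np.pi * f0 * n * t + rng.uniform(0, 2 * np.pi))
    return out

# Subsonic fundamentals at 1, 2, and 3 Hz: their upper partials interleave
# into one dense spectral cluster instead of three audible pitches.
mix = sum(partial_stack(f0) for f0 in (1.0, 2.0, 3.0))
mix /= np.abs(mix).max()          # normalize before writing/monitoring
```

Run that out a speaker and it reads as a noise band...which is exactly what the "no noise source" machine supposedly couldn't do.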

In short, I broke pretty much EVERY parameter in the assignment. And what happened? Well...

I got a huge "A" on this, and the prof was utterly floored at my "command of the Synclavier". And at that point, I changed composition studios and kicked HIS sorry ass to the CURB.

It wouldn't have been possible to get that "A" had I followed his dicta. And I got it by doing these "forbidden" things, most notably dragging bits of one studio into another, where they presumably weren't supposed to be. Or at least, according to that clown, they weren't. That was Clue #1.

Clue #2 came when, upon arriving at Illinois, I discovered the same sort of "media separations" in the Experimental Music Studios, but taken to an even more fanatical degree. In fact, one night in the Moog studio I had to deal with some utter batshit insanity that went like this...

ME (talking to prof, who has just interrupted my session work for no good reason): Uh...about that pair of Symetrix gates. Where are the patchpoints for those?

PROF: Oh, you don't know how to use those.

ME: Excuse me? I've used things that're far more complex than those for years...

PROF: No...you DON'T know how to use those.

ME: [blank stare typically seen on my face when dealing with blithering idiots, followed by...] OK, right. Tell ya what...if I figure out where the patchpoints are, I'm going to use them ANYWAY, and I dare you to figure out where I did that.

PROF: [shocked look due to being unable to process dealing with person with real-world audio engineering experience]

Did I ever use them? Heh...but anyway, this nonsense was typical. It was SO typical, in fact, that Sal Martirano (who I was studying composition with while there, then later privately after I'd given up on academic composition) had found it necessary to set up a totally separate studio in the Comm West building about 1/4 mile away, largely because HIS explorations involved mixing the must-never-touch-each-other media to find out how primitive AI-type structures could be used for "directed improvisation". This was in early 1992, mind you; the only things like that were M, Max (which was still really only on the NeXT as part of the ISPW rig), and things cobbled up by intrepid souls like...well, Sal. And it was Sal who encouraged me to combine as many working paradigms in one studio as possible. Even HE wondered what the results would be, and I'm glad I got to play him some of my very early efforts in that direction before he died in 1995.

So, for 25+ years now, my reasoning behind all of this gear has been that there ARE things to be gained from combining all of these sonic vectors at will. OK, fine...this mid-60s Bruel & Kjaer filter isn't supposed to have a Roland TR-606 fed thru it...but what if you DO that? And of course, the results are very, very cool. Then whip that into Ableton, slap some Max-driven processing using a Lorenz attractor on it...yeah, baybee...filter it all through these Krohn-Hite scientific-grade tube bandpass mo'fos and pump it into that cool new Neve-equipped Steinberg A-D to the 2-track (which isn't a 2-track, because it's not even an effin' tape machine!).
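(The Max patch itself stays in Max, obviously, but if you're curious what's under the hood of a Lorenz-attractor modulator, here's the math as a toy Python sketch...my own illustration, with the names and the cutoff mapping invented for the example. You integrate the three Lorenz equations and rescale one coordinate into a control signal.)

```python
import numpy as np

def lorenz_mod(n, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with a simple Euler step and return
    the x trajectory rescaled to 0..1 for use as a control signal."""
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)          # dx/dt = sigma * (y - x)
        dy = x * (rho - z) - y        # dy/dt = x * (rho - z) - y
        dz = x * y - beta * z         # dz/dt = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        out[i] = x
    return (out - out.min()) / (out.max() - out.min())

# Map the chaotic trajectory to, say, a filter cutoff in Hz:
ctl = lorenz_mod(10_000)
cutoff = 200.0 * (2.0 ** (ctl * 5.0))   # sweeps ~200 Hz to ~6.4 kHz
```

Feed that cutoff stream to whatever filter you like at control rate. The trajectory never settles and never exactly repeats, which is precisely the appeal over a garden-variety LFO.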

THAT is how to do this. Careful combinations, like knowing when, how, and how much of a certain spice to use if you were a chef. But like I noted before, it really takes a lot of restraint to avoid wanting to slap every sonic generator and processor onto things. Some of that comes from knowing, simply, that doing so would be a hellacious amount of WORK. Yeah...uh, no. But it also comes from knowing that that's not really a workable choice: (futilely) trying to bring in ALL the possibilities AT ONCE exhausts them far faster than tracking a few specific, well-crafted sounds does. Not a good idea. But just like you don't play every string on a violin at the same time whenever the instrument makes a sound, you come to understand that there ARE limits inherent in a huge rig like this. It doesn't really want you to connect everything to everything else to generate...well, something dense and impenetrable that would probably suck on an epic scale. Instead, you learn...or infer...what combinations TO use for just the right touch. And that's what makes this a lot like using a large-scale modular.

In fact, it sort of resembles one, when you take into account all of the routing patchbays in use in here. That's something from a different academic studio, though...specifically, the original one at Syracuse, designed by some guy who knew that this was the way to make that open architecture work, the way to allow all that interesting interconnectivity...

...Bob Moog. I ain't gonna argue with that.