I’ve been dabbling in modular for quite a while, but up until recently I mostly treated my system as a collection of independent monosynth voices. Lately I’ve started taking modular much more seriously, and my goal has shifted toward building a performance-oriented system capable of generating full, semi-generative tracks — IDM, acid, techno, etc.

What I’m aiming for is less of a traditional “play notes on a synth” workflow and more of a living system where sequencing, modulation, probability, and interaction drive the music forward. I want the patch to behave like an ecosystem — patterns evolving, rhythms mutating, voices influencing each other — rather than a set of isolated voices running static loops.

As you can see, I already have a wide range of sound sources, but I suspect what I really need now is more plumbing — routing, modulation infrastructure, logic, interaction, and system-level control. The RackBrute sits horizontally at the base of the Mega Rack and functions as a kind of control surface. That’s where I keep most of my logic and trigger processing. I’m still very much learning, but I’m trying to move toward a setup where everything interacts dynamically instead of operating independently.

I’d love to hear ideas from people who think in terms of systems rather than individual modules. What would help transform this into a more cohesive, interacting ecosystem? Where are the likely weak points? What kinds of “plumbing” tend to unlock the most potential in large, performance-focused modular systems?

For recording, I plan to capture evolving takes as stems into a Bluebox and then develop them further outside the rack.

I’m approaching this with a beginner’s mindset and fully aware there’s a lot I don’t yet understand, so I’d really appreciate any guidance, architectural thoughts, or suggestions on how to make the system more cohesive, playable, and alive.

Here is the RackBrute that sits at the base of the mega rack:
ModularGrid Rack

Here is the mega rack so far:
ModularGrid Rack

For some reason, the thumbnail of the mega rack isn't loading correctly, so you'll have to click through for an accurate view.

Any help or advice would be greatly appreciated.


I've been patching since 2017, but I realize I'm still pretty much a noob and need some advice. Here is my studio rack so far: ModularGrid Rack

I had AI help me explain what I'm trying to do, so please pardon some of the phrasing lol

My rack feels like a collection of great-sounding monosynths that don’t really talk to each other. I patch and record one voice at a time instead of letting things evolve together. I want to move toward an ecosystem with controlled variation—probability, conditional behavior, subtle randomness—while keeping things musical and intentional.
What I’m after
A semi-autonomous system I can steer and perform with. I want controlled randomness that creates variation within boundaries, not full generative chaos or “set and forget” patches. The rack should feel alive without constant repatching.
Clocking and sequencing
Everything runs from one master clock. I’m into variation, swing, and probability, but not free-running drift. Considering Metropolix as the sequencing core, but I’m trying to avoid a setup where one sequencer just drives a bunch of static voices.
Where it falls apart
Voices don’t influence or condition each other. No probability or decision-making in gates or modulation. No internal cause-and-effect. Everything behaves deterministically unless I manually intervene.
What I think is missing
Probably not sound sources, but a control layer. Things like probability-based gate behavior, modulation that only sometimes applies, CV influencing other CV (not just audio), utilities like attenuverters, VCAs for CV, S&H/T&H, logic, comparators.
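To make "probability-based gate behavior" concrete, here is a minimal software sketch of the idea (function names are mine, not any module's): each incoming trigger either passes or is dropped with some probability, which is exactly what Bernoulli-gate-style modules do to a steady clock.

```python
import random

def bernoulli_gate(triggers, p):
    """Pass each incoming trigger with probability p (0..1).
    The same trigger stream yields a different rhythm on every
    pass, while the underlying clock stays rock solid."""
    return [t if random.random() < p else 0 for t in triggers]

# A steady 16-step trigger pattern...
pattern = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

# ...thinned probabilistically: p near 1.0 keeps the groove,
# lower p opens gaps without changing the master clock.
random.seed(1)  # seeded only so the sketch is repeatable
varied = bernoulli_gate(pattern, 0.7)
```

The key property is that triggers are only ever removed, never added, so the result always stays inside the original grid; patching the probability input from an LFO or envelope is the "CV influencing other CV" idea in rhythm form.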
What I’m asking
How do you introduce controlled randomness without losing musical focus? How do you design interaction between voices instead of just parallel modulation? What architectural approaches make a rack feel like a system rather than separate instruments? What patch concepts encourage evolution while staying steerable?
If you’ve moved from “great-sounding voices” to “interesting behavior,” I’d love to hear how you approached it.


Hey Chris, thanks so much for your detailed response. I'm checking out all of the modules you recommended; that was very helpful.

One thing I didn’t originally ask in my post (and probably should have) is about audio routing and multitracking. My ideal end goal is to be able to record each drum voice separately into my DAW so I can do detailed mixing and processing there, rather than committing to a summed mix inside the rack.

Because of that, I’m a bit unsure how modules like Aikido (which I do like conceptually) fit into that picture, since they seem more performance / submix-oriented. I did try going the ES-9 route for multitrack capture, but I’m already using an SSL 18 as my main interface and couldn’t get the two to play nicely together, so I’m rethinking the overall approach.

So I’m curious how you handle this in practice:
• Do you typically multitrack individual drum voices into a DAW, or commit to submixes?
• Are you using an interface module, external interface inputs, or something else entirely?
• If you’re recording drums, where in your system does “mixing” actually happen for you: in-rack, in the DAW, or both?

I’m trying to avoid overbuilding something that’s great for jamming but awkward for recording, so any insight into how you balance sequencing/logic fun with clean multitrack capture would be hugely appreciated.

Thanks again — your reply already helped me reframe a lot of this.


IDM / generative drum rack with Metron — learning probability, CV processing, and controlled chaos

Hello everyone,

I recently moved my main system into an Erica Synths Mega Case, which leaves me with a spare Pittsburgh Modular EP-270 that I’d like to turn into a dedicated drum rack.

I produce IDM / electronic / acid / techno, and I want this case to be a performance-focused drum instrument with probability, glitchy rhythms, and evolving patterns. I’m intentionally trying to learn how CV processing, logic, and generative systems work rather than just assembling a static drum machine.

My plan is to use Metron as the primary sequencer, handling core patterns and probabilities, but then heavily process its outputs to introduce variation, fills, density changes, and “controlled chaos.” I’m interested in both per-step probability and CV-driven probability/modulation.

Here’s the rack concept I’m currently sketching:
ModularGrid Rack

Drum voices I’m planning around so far:
• Battering Ram
• Jomox ModBase 09 MkII
• Trinity 2.0
• Plonk
• Tiptop Audio 808 Hats
• Tiptop Audio 909 Hats
• Tiptop Audio 808 Snare
• Tiptop Audio 909 Cymbal

Metron would be the main sequencer, and I want to process it with IDUM (and likely other modules), but I’m not yet sure what the right supporting ecosystem looks like.

Things I’m specifically hoping to learn / get advice on:
• What kinds of CV processors, logic, switches, random sources, and utilities pair best with Metron for IDM-style drums?
• Where do people typically introduce probability and variation:
at the sequencer, via trigger processing, via CV modulation, or all three?
• How do you add glitch, fills, and rhythmic mutation without the groove falling apart?
• What’s a good balance between drum voices vs. “plumbing” in a dedicated drum case?
• Any recommended patching concepts for steering generative drums live (macros, performance gestures, density control, etc.)?
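On the "density control" question, one common pattern (borrowed from Grids-style sequencers, sketched here in code with made-up names and weights) is to give every step a fixed weight and let a single macro knob decide how many steps fire, strongest accents first:

```python
def density_pattern(step_weights, density):
    """One 'density' macro steers how busy a pattern is:
    each step carries a fixed weight, and a step fires when
    its weight >= (1 - density). Sweeping one knob from 0 to 1
    adds hits in a stable order, strongest accents first, so
    the groove fills in and thins out without ever re-rolling."""
    threshold = 1.0 - density
    return [1 if w >= threshold else 0 for w in step_weights]

# Hypothetical per-step weights for an 8-step kick pattern:
weights = [1.0, 0.2, 0.5, 0.1, 0.9, 0.3, 0.6, 0.4]

sparse = density_pattern(weights, 0.2)  # only the strongest steps fire
busy = density_pattern(weights, 0.8)    # most steps fire
```

Because the weights are fixed, this stays deterministic and steerable: the same density value always yields the same pattern, which is what keeps it a performance gesture rather than chaos.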

This rack will usually be clocked externally rather than acting as the master.

I’m very open to being told I’m overthinking things or missing obvious fundamentals — the main goal here is to actually understand how generative / CV-driven drum systems work in practice.

Thanks in advance, and I really appreciate any guidance or patching philosophy you’re willing to share.


Thanks all — super helpful points.

To clarify my setup/goals:

Sequencing = Digitakt II + Bloom v1
Digitakt is usually the master clock, but I also use Ableton for studio work
Goal = IDM/acid performance rig
Want: Atlantix + one extra voice (the CS-L sounds so amazing), modular percussion accents, hands-on modulation

Totally hear you on utilities / “plumbing” — that’s what I’m focusing on dialing in:

Buffered mults for pitch
Attenuverters/offsets
VCAs for modulating the modulation
Small sub-mixers + performance mixer
Clean clock + reset routing
Mutes + FX send/return
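"VCAs for modulating the modulation" can be sketched in a few lines (a software analogy, not any specific module): the depth of one modulator is itself under control of another signal, so movement fades in and out instead of sitting static.

```python
import math

def modulated_lfo(t, lfo_hz, depth_env):
    """The LFO's depth is controlled by a second signal
    (any 0..1 control evaluated at time t), which is what
    patching an LFO through a VCA whose CV input is an
    envelope does: wobble that breathes instead of droning."""
    lfo = math.sin(2 * math.pi * lfo_hz * t)  # bipolar LFO, -1..1
    return lfo * depth_env(t)                 # depth under CV control

# Hypothetical slow ramp as the depth envelope (0 -> 1 over 4 s):
ramp = lambda t: min(t / 4.0, 1.0)
# At t=0 the LFO contributes nothing; by t=4 s it is at full depth.
```

This is arguably the single biggest "Autechre-ish movement without chaos" trick: nothing is random, but no modulation amount is ever constant.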

Drums plan = Digitakt as backbone + modular glitch layer
Leaning toward Modbap Trinity 2.0 for playable modular percussion — if anyone’s used it in a Digitakt setup, curious how it felt.

If you have favorite utility chains / patch habits for performance (think Autechre-ish movement without chaos), I'd love to hear them.


Hey everyone,

I’m trying to build a coherent performance system for IDM / acid using a Pittsburgh Modular EP-270 case. This is mainly a practice live-set system before I eventually move to an Intellijel Palette performance case.

I have a big “parts bin” of modules and want to form them into a focused, playable, musically coherent live instrument — not just fill space. I really value performability, hands-on modulation, and classic IDM drum programming (Aphex / Autechre-ish).

I’m thinking of adding the Modbap Trinity 2.0 for drum duties (I love the classic glitchy/IDM drum vibe), but I am open to other ideas for drums. I really don't like the Sample Drum and was planning on selling it.

Goals

Live IDM / acid performance (sequenced + hands-on modulation)

Punchy, glitchable drums

Melodic + bassline voices with motion

Deep modulation playground, but not chaotic

Good mixing / utilities for performance

"Playable", not academic — I want muscle memory and flow

Gear Info

Here’s the rack of modules I currently own:
ModularGrid Rack

I’d love suggestions for:

Which modules to include

Row layout for best playability

Which utilities / modulators to prioritize

Anything in my rack that is better left out for a live setup

Optional: if anyone feels like actually building me a rack mockup from my available modules, that would be amazing.

Thanks for helping me whip this chaos into a real performance instrument. Looking forward to your ideas.


I’m still having a lot of trouble with external pitch tracking.

The Hector doesn't track pitch well. The tracking is so far off that I can't use it unless the CV comes from inside Hector itself.

Even after calibrating Hector using an Ornament and Crime, it won’t play in tune when receiving external CV. Notes are noticeably off, even within a single octave. I’ve confirmed that the CV signal from my Bloom sequencer is accurate—other oscillators in my system track perfectly—but Hector does not track properly unless I route the signal through a quantizer.

While that workaround helps, it uses additional CPU, which I’d prefer to avoid. Pitch tracking seems fine when patching internally within Hector—it’s only with external CV that I’m having this issue.
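For anyone wondering why a "pitch cal" stage is needed at all, here is the underlying math as a sketch (my own function names, and the error figures are illustrative, not Hector's actual specs): a two-point calibration measures the raw readings at two known voltages and derives the linear map that recovers true volts.

```python
def make_cv_calibrator(read_at_0v, read_at_1v):
    """Two-point 1V/oct calibration: feed known 0 V and 1 V into
    the input, record the raw readings, and build the linear map
    that recovers true volts from any later reading. This is
    roughly what a pitch-cal stage does, and why an uncalibrated
    input sounds audibly sour: a 5% gain error is 0.05 octave per
    octave, i.e. 0.6 semitone off by the top of one octave."""
    gain = 1.0 / (read_at_1v - read_at_0v)
    offset = read_at_0v

    def to_volts(raw_reading):
        return (raw_reading - offset) * gain

    return to_volts

# Hypothetical input with 3% gain error and 20 mV offset:
cal = make_cv_calibrator(read_at_0v=0.020, read_at_1v=1.050)
volts = cal(0.535)  # a raw mid-scale reading maps back to ~0.5 V true
```

The same logic applies in reverse on outputs, which is why an output path that was never calibrated (or that loses its calibration on power-down) will always sequence an external oscillator out of tune.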

Also, when I output pitch from a quantizer app, the destination synth doesn't track it in tune.

For simplicity's sake: if I take an LFO, feed it into a quantizer (everything internal to Hector), route the quantizer to output 1, and then patch output 1 into an external oscillator, it is always out of tune.

I have emailed Poly Effects and they have not been helpful whatsoever. I am not buying anything from Poly Effects in the future because there was so little support. I watched the videos and I still don't understand.

"You'll need to run into the pitch cal modules first for external pitch CV. This is because we use the same inputs for CV and audio, so we can't calibrate them by default as it would distort the audio. So you need to run into pitch cal and then get your fixed up pitch from that. There's instructions in the video series where I talk about those modules."

I have done this, and it works for incoming external pitch CV, but I have to recalibrate literally every time I turn it on, and I don't know how to send pitch CV out of it. It's not very useful if I can't output pitch CV or play it without recalibrating.
Is there any way to get accurate pitch CV out of it to sequence an external module?
is there any way to get accurate pitch cv coming out of it to sequence an external module?