This winter I went down this rabbit hole and back up, trying to approximate some acoustic instrument sounds. I went through a succession of different workflows, spending a couple of weeks on each.

Workflow 1: feed a sawtooth oscillator through filters controllable with knobs. Eventually I realized that it will always sound mechanical, no matter how many filters you stack.

Workflow 2: feed a sawtooth oscillator through a convolution reverb that uses a custom impulse response. For impulse responses, use random sounds downloaded from the internet (like wood strikes), or mixtures of existing instrument sounds.

Workflow 3: generate an impulse response wav file with Python, and use that in a convolution reverb to filter the sawtooth. This gave me some more interesting and configurable echoes, but then why start with an oscillator at all?

Workflow 4: write code to generate the sound on the fly, as a sequence of samples. This way I can mimic some nice properties of physical sounds, like "the nth harmonic has initial amplitude 1/n and decay time 1/n". I can also get inharmonicity, smearing of harmonics (as in the PADsynth algorithm), and other nice things that are out of reach if you start with periodic oscillators.

If I could go back and give advice to my months-younger self, I'd tell myself to skip oscillators and filters and jump straight into generating the sound with code. You need to learn some math about how sound works, but then you'll be unstoppable. It's much simpler than doing the same with prebuilt bricks. For example, here's a short formula I came up with a month ago for generating a pluck-like sound: ![]()

The whole experience made me suspect that there's an alternative approach to building modular synths, based on physical facts about sound (as opposed to either starting with oscillators, or going all the way to digital modeling of strings and bows). It would be similar to physically based rendering in graphics: for example, it would enforce a physically correct relationship between how high a harmonic is and how long it rings, and maybe some other relationship about what happens to harmonics at the start of the sound, etc.
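The Workflow 4 recipe — "the nth harmonic has initial amplitude 1/n and decay time 1/n" — is just a sum of damped sine waves. Here's a minimal sketch in Python; the function name, parameter values, and the optional inharmonicity term `B` (which stretches partials the way stiff piano strings do) are my own illustration, not the post's actual formula:

```python
import numpy as np

def pluck(f0=220.0, sr=44100, dur=2.0, n_harmonics=20, decay=1.0, B=0.0):
    """Sum of damped sinusoids: the nth harmonic starts at amplitude 1/n
    and rings for 1/n of the fundamental's decay time. B > 0 stretches
    the partials (f_n = n*f0*sqrt(1 + B*n^2)) for piano-like inharmonicity."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        f_n = n * f0 * np.sqrt(1.0 + B * n * n)  # partial frequency
        amp = 1.0 / n                            # initial amplitude 1/n
        tau = decay / n                          # decay time 1/n
        out += amp * np.exp(-t / tau) * np.sin(2 * np.pi * f_n * t)
    return out / np.abs(out).max()               # normalize to [-1, 1]

samples = pluck()
```

Convert `samples` to 16-bit PCM and write it with the stdlib `wave` module to listen to the result.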
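Workflow 3's Python-generated impulse response can be as simple as exponentially decaying noise written to a wav file; convolving a sawtooth with it smears each harmonic into a short echo. A sketch of the idea — the filename, decay constant, and noise recipe are my guesses, not the post's actual script:

```python
import wave
import numpy as np

def write_impulse_response(path, sr=44100, dur=0.5, tau=0.1, seed=0):
    """Write a toy impulse response (exponentially decaying white noise)
    as a 16-bit mono wav, ready to load into a convolution reverb."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(sr * dur)) / sr
    ir = rng.standard_normal(len(t)) * np.exp(-t / tau)
    ir /= np.abs(ir).max()                     # normalize to [-1, 1]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                      # 16-bit samples
        w.setframerate(sr)
        w.writeframes((ir * 32767).astype("<i2").tobytes())

write_impulse_response("ir.wav")
```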
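The "smearing of harmonics" is the core trick of Paul Nasca's PADsynth algorithm: build a magnitude spectrum where each harmonic is a Gaussian bump instead of a single bin, attach random phases, and inverse-FFT the result into a smooth, seamlessly looping sound. A rough sketch in that spirit (the bandwidth handling is simplified relative to the real algorithm, and all parameter values are mine):

```python
import numpy as np

def padsynth_like(f0=220.0, sr=44100, n=2**16, n_harmonics=16, bw=12.0, seed=0):
    """Spread each harmonic's energy into a Gaussian bump (wider for higher
    harmonics), randomize phases, and inverse-FFT to the time domain."""
    rng = np.random.default_rng(seed)
    bins = np.arange(n // 2 + 1) * sr / n        # frequency of each FFT bin
    spectrum = np.zeros(n // 2 + 1)
    for h in range(1, n_harmonics + 1):
        centre = h * f0
        width = bw * h                           # wider smear higher up
        spectrum += (1.0 / h) * np.exp(-((bins - centre) / width) ** 2)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
    out = np.fft.irfft(spectrum * np.exp(1j * phases))
    return out / np.abs(out).max()               # normalize to [-1, 1]

pad = padsynth_like()
```

Because the phases are random and the spectrum is smooth, the resulting buffer loops without clicks, which is what makes PADsynth-style pads sound lush rather than mechanical.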