"And so there was a piece coming up, and I was hired to help work on it," Puckette explained. "The piece was by Philippe Manoury, and it was called 'Pluton'; it was for piano and electronics. The premiere was July 1988, and so it was time to write another program, and this time I decided to make something that was actually reusable." Max would be the second program Puckette wrote for IRCAM, but the first to be indefinitely reusable. While many of the underlying ideas behind Max arose in the rich atmosphere of the MIT Experimental Music Studio in the early 1980s, Max took its form during a heated, excited period of brainstorming and experimentation among a small group of researchers, musicians, composers, and performers at IRCAM between 1985 and 1990.
It scrambles and deforms and filters a complex landscape of sound sources, and yet it can be used in an intuitive way. Its simple form belies its most innovative qualities; as its creator Miller Puckette once noted, "Most of what is essentially Max lies beneath the surface." Talking to Puckette in San Diego, where he teaches at UCSD, he explained that Max was developed in 1988 at IRCAM because he had a concert scheduled. "I was being paid by IRCAM, the Institut de Recherche et Coordination Acoustique/Musique, which is a research institution in Paris that was run by Pierre Boulez. They had the hardware all very well down, but they didn't actually have any idea how to write software for doing real-time music performances, and so every time anyone did a piece of electronic music at IRCAM, they would have to have the software custom-written for that particular piece."
It can connect anything to anything else. Its interface is minimal yet utilitarian, with a protocol for scheduling control and audio sample computations, an approach to modularization and component intercommunication, and a graphical representation and editor for patches.

What math do you want to do? Why can't you do it on the signal?

The short answer is this: signal messages happen at audio rate, which is 44,100 times a second (unless you set it to something higher because you're a dick and want to render super-high-quality sine waves at 20 kHz. I'm just kidding, relax; other sample rates are great), while ordinary messages happen WAY slower. snapshot~ just gives you a snapshot of the signal at whatever you set the polling rate to. So say your sample rate is 44,100 Hz and you have your snapshot~ object set up to give you an output every ten milliseconds: you're really only getting a small fraction of the samples you want to operate on (100 out of every 44,100, if I'm not mistaken), and then when you use sig~ to turn it back into audio, you just get the samples you operated on, which is nowhere close to all of them. So if you want to do math on a signal, you're gonna want to do that math on every single sample; if you don't do it on every single sample, you'll just get a weird, super-lofi approximation of the signal + math. I am so fucking drunk, sorry if that didn't make much sense.
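Max patches aren't text, so here's a minimal Python sketch of the effect described above. The `gain` function, the 440 Hz test tone, and the 10 ms polling interval are arbitrary choices for illustration; the point is just to compare per-sample math against a snapshot~/sig~-style sample-and-hold version of the same math.

```python
import math

SR = 44100                       # audio sample rate (Hz)
POLL_MS = 10                     # illustrative snapshot~-style polling interval
STEP = SR * POLL_MS // 1000      # 441 samples between polls

def gain(x):
    """The 'math' we want to apply to the signal (here: halve it)."""
    return x * 0.5

# one second of a 440 Hz sine wave
signal = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]

# signal-rate math: apply gain() to every single sample
per_sample = [gain(x) for x in signal]

# control-rate math: poll the signal every POLL_MS, do the math on that
# one sample, then hold the result until the next poll (snapshot~ -> sig~)
control_rate = []
held = 0.0
for n, x in enumerate(signal):
    if n % STEP == 0:
        held = gain(x)           # only 100 updates per second
    control_rate.append(held)

# how far the held, lofi version strays from the per-sample version
err = max(abs(a - b) for a, b in zip(per_sample, control_rate))
print(f"max error of control-rate version: {err:.3f}")
```

Because the sine completes about 4.4 cycles between polls, the held value is badly out of date almost immediately, so the error approaches the full amplitude of the processed signal, which is the "weird super lofi approximation" the answer above warns about.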