Yes! I want this for MainStage -- this would allow me to automate my weekly template setup for playing at my church. Each week before practicing I look up the songs in Planning Center and create a new MainStage concert file with one patch per song, and add notes to each patch screen with the song's key, etc. Automating this would save me the time of doing the busy work and get right to practicing.
Things I would use AI for in music production:
1. Generating track layouts (add tracks + empty audio/midi clips throughout)
2. Generating MIDI sequences
3. Generating Serum patches
4. Extracting stems from existing audio
5. Automating common workflows (eg sidechaining)
6. Semantic search of sample library
That being said, I don't think I want a full agentic workflow for vibe-producing. Point solutions seem like a better fit for me, personally.
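Of the items above, MIDI generation (item 2) is the easiest to sketch without any DAW in the loop. Here's a minimal, stdlib-only Python sketch that writes a type-0 Standard MIDI File containing a C-major arpeggio; the filename, notes, and velocity are arbitrary choices for illustration:

```python
import struct

def vlq(n):
    # MIDI variable-length quantity: 7 bits per byte, high bit set on
    # every byte except the last.
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def midi_file(notes, ticks_per_beat=480):
    """notes: list of (pitch, duration_in_ticks). Returns type-0 SMF bytes."""
    events = b""
    for pitch, dur in notes:
        events += vlq(0) + bytes([0x90, pitch, 100])  # note on, velocity 100
        events += vlq(dur) + bytes([0x80, pitch, 0])  # note off after dur ticks
    events += b"\x00\xff\x2f\x00"                     # end-of-track meta event
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    track = b"MTrk" + struct.pack(">I", len(events)) + events
    return header + track

# C major arpeggio, one beat per note
data = midi_file([(60, 480), (64, 480), (67, 480), (72, 480)])
with open("arpeggio.mid", "wb") as f:
    f.write(data)
```

The resulting file imports into any DAW; an LLM only has to emit the `(pitch, duration)` list, which is exactly the kind of constrained output they're good at.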
I’ve had some fun building simple instruments in the browser using AI and piping midi to Live, then munging from there [0]. The whole principle of fully AI generated music leaves me cold but AI as a sort of sidechain to the creative process seems potentially interesting.
[0] https://variousbits.net/2026/02/22/building-generative-music...
Generating MIDI sequences
https://www.muse.art/home
For me, the point of making music is making it myself. If I wanted something done for me, I could just play someone else's record and pretend I made it.
Welcome to the era of instant gratification.
This is the age-old music parochialism thing. "Oh, he's just in a cover band, he doesn't write anything" / "Oh, she's just a composer, she can't even play the stuff she writes" / "Oh, he writes and plays his own stuff but knows fuck all about theory so it's not real music" / etc.
Me, I'm having a blast with claude code, MCP, and Ableton. I'm directing harmony and asking for arrangements and variations in rhythm, mixing, and production. Don't know if that counts as "making it myself", but then I was writing music before I could actually play any instrument at all, so :shrug:
Previous generations might have said the same thing about Ableton itself, vs playing a physical instrument. In that regard, AI might become just another power tool for creative expression.
Agree to some extent. At some point, though, we cross the thin line between creative expression and… magic?
Like if at some point I can just say “Generate a song similar to Smooth Criminal, different enough to not trigger copyright claims” and it just works, and everyone loves it… well is that creative thinking?
I get why people make gut statements like this, and to me something does feel different about AI.
But I realize I have not seen any criticisms of AI generated music that are meaningfully different from criticisms I've heard of other advances/changes in music technology, whether performance or recording.
Sampling, scratching, drum machines, autotune, electric guitars even.
I will caveat my first comment by also noting that I am well versed in computer music history, and read many many papers in CMJ[1] and elsewhere about generative and automatic composition tools such as Emily Howell[2]. I do NOT have a problem with generative, algorithmic and automatic composition in this sense, as an extension of the creative intentions of the human composer, in the right context. See also Autechre[3] for what can be done with Markov chains and good taste. What we are discussing here is the musical equivalent of a dishwasher.
[1] http://www.computermusicjournal.org/
[2] https://en.wikipedia.org/wiki/David_Cope#Emily_Howell
[3] http://autechre.ws/
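On the Markov-chain point: the core idea is tiny, which is part of why taste matters so much. A sketch in Python, with a hand-written transition table purely for illustration (a real system would learn the table from a corpus or build it by ear):

```python
import random

# First-order Markov melody generator. TRANSITIONS maps a MIDI pitch to
# the pitches allowed to follow it -- these entries are made up for the
# example, not derived from any real corpus.
TRANSITIONS = {
    60: [62, 64, 67],
    62: [60, 64],
    64: [62, 65, 67],
    65: [64, 67],
    67: [64, 65, 72],
    72: [67],
}

def markov_melody(start=60, length=16, seed=None):
    rng = random.Random(seed)  # seedable for reproducible output
    note = start
    melody = [note]
    for _ in range(length - 1):
        note = rng.choice(TRANSITIONS[note])
        melody.append(note)
    return melody

print(markov_melody(seed=42))
```

Everything interesting lives in how you build the table and what you layer on top, which is exactly where the "good taste" part comes in.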
Addendum: I would highly recommend the Margaret Boden book referenced in the wiki on David Cope/Emily Howell, which is an absolutely fascinating read and was incredibly far-sighted in its enquiries on this topic.
Can I ask what the specific markers / qualifiers are for you to consider (let's call them) 'classical' generative and algorithmic techniques fair game in creative composition, but LLM agent based techniques not so?
To me, it seems like the "do it for me" aspect is similar, just at different levels of abstraction.
Very cool! I posted my own experiments in this area a few months back, which were an iteration on an existing Ableton MCP. It's great to see more people experimenting at the intersection of complex application interfaces and music production.
https://news.ycombinator.com/item?id=46428922
Does anyone know of other MCP servers for similar creative music tools? I'm interested in things like sonic-pi, strudel.cc and orcas, but very open to anything. I think there is a good opportunity for kids to learn using these tools, especially if I can wire it into my mycroft.ai/neon device.
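For Sonic Pi specifically, you may not even need a full MCP server to start: it listens for OSC over UDP, so a tool can push code at it with a few lines of stdlib Python. Note the port (4560) and the `/run-code` address are assumptions based on Sonic Pi 3.x; check your version's settings before relying on them:

```python
import socket
import struct  # not used below, but typical for OSC messages with numeric args

def osc_string(s):
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, arg):
    # Minimal OSC message with a single string argument (type tag ",s").
    return osc_string(address) + osc_string(",s") + osc_string(arg)

def send_to_sonic_pi(code, host="127.0.0.1", port=4560):
    # Port and /run-code address assumed from Sonic Pi 3.x -- verify locally.
    msg = osc_message("/run-code", code)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(msg, (host, port))
    finally:
        sock.close()

if __name__ == "__main__":
    send_to_sonic_pi("play 60")
```

An MCP server wrapping Sonic Pi would mostly be a thin tool definition around a sender like this.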
Ever wanted to control Ableton with just your voice? Me too! I made this MCP server so I could just ask Codex to do anything in Ableton Live for me, while I was nap-trapped by my baby.
The chat messages I sent to Codex to make this:
in ableton, make a self reflective song, with audio vocals (via macos say) and chip tunes and 80's drum machines. should be a real edm banger
i want midi for everything but vocals please, with ableton devices. not prerendered audio for instruments
needs some fills
and should hit way harder after "3-2-1 i become the sound"
the vocals are squished too much (read too quickly), give them a little more length
add some dynamics, the song is basically one volume. and some pumping side chain
improve dynamics of the clap, seems a bit flat and indistinguished, want it harder after the 3-2-1 drop
introduce a new element on a new track after the 3-2-1 drop, that comes in but then recedes before the final exit
doesn't seem like the new thing has any notes
the element is a bit muddy/indistinct. perhaps it needs simplification and more space, different instrument choice, i dunno
> Ever wanted to control Ableton with just your voice?
Never.
I guess the guidelines don't apply to you, as long as you disagree vehemently enough with the OP's basic intent.
No amount of guidelines will make me lie in my replies.
> should be a real edm banger
I'm afraid Codex ignored that one.
If you’ve gone to the trouble of setting up Ableton MCP, you’ve already worked harder than Suno requires to make a banger
Honestly, music generation is solved. You don't get fine-grained control at this point, but people are no longer able to tell the difference. There are tons of YouTube videos with blind tests between real artists and AI, and people have no idea.
The Matrix wants everything solved, doesn't it?
Solved !== Good
XD
is this using M4L or the LOM?
Object model