An Approach to Sound Synthesis with L-Systems (ho.name)
90 points by sargstuff on Jan 10, 2024 | hide | past | favorite | 20 comments


It’s cool people are messing with this stuff, but for me the results of modulating an oscillator with it don’t sound that interesting unless you’re into modular synths with chaos-oscillator-type stuff. The most fitting use of L-systems seems to be 3D plant geometry generation.


Non-standard take on MIDI / portability / automatically generating a PostScript-like description.

Similar to CNC G-code / the PostScript printing language. Just a more strictly math-based view.

A text way of conveying a non-linear-to-linear transform in a portable format without using musical notation, aka raw non-linear piano-string vibration to a linear 'piano key'.

Debatable, but a programming book on sed and how to use regular expressions to manipulate the ASCII L-system sounds would seem to be a bit lighter reading/learning than multiple semesters of engineering-level calculus to learn the mathematical equations for manipulating oscillations.
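The "manipulate the string, not the math" point can be sketched in a few lines: an L-system generation is just simultaneous text substitution, the same kind of rewriting sed does. A minimal sketch (the algae rules here are the classic Lindenmayer example, used purely for illustration, not anything from the article):

```python
# Minimal L-system rewriter: each generation replaces every symbol
# according to its rule (symbols without a rule are copied through).
def rewrite(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "AB", "B": "A"}      # Lindenmayer's algae system
print(rewrite("A", rules, 5))      # string lengths follow the Fibonacci sequence
```

The output string could then be fed to any downstream mapping (notes, oscillator parameters, turtle moves) without touching calculus.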

A high-school-level introduction to L-systems, presented as Logo programming. [0][1]

----

[0] : Turtle Geometry: The Computer as a Medium for Exploring Mathematics (Artificial Intelligence), by Harold Abelson and Andrea diSessa

[1] : https://en.wikipedia.org/wiki/Logo_(programming_language)


On L-systems, "The Computational Beauty of Nature" teaches you this, along with Mandelbrot (and more fractals), how Scheme (Lisp) works, and the dynamics of natural systems.

After that, I would suggest everyone read "Gödel, Escher, Bach" alongside SICP, using Chicken Scheme to do the exercises.

Then switch back and forth between The Computational Beauty of Nature and GEB.

My ~/.csirc:

    (import scheme)
    (import (srfi 203))
    (import (srfi 216))

Run "chicken-install srfi-203 && chicken-install srfi-216" as root too.


The similarity is likely due to the fact that they were both early standards built for limited environments.

I find systems like this are always tighter than systems designed after the WWW became most people's computer experience.


Nice! I did this back in 2003 for my MEng, and took a similar approach but using MIDI and musical notes/scales rather than direct waveforms.

I did get some promising results - what are fugues but fractal music? - but was never musical enough to do anything more with it.


I don’t see how fugues are fractal. Care to explain?


Focusing on just the music, sounds like a difference in song complexity / embellishment spin-offs.

A different approach: contrasting the tree generated by the room-sonar example[0] with the Salieri/Mozart song[1].

The initial Salieri song is a simple 'tree' per single-source mono sounding.

Mozart 'stereoized' Salieri's initial song via embellishment/counterpoint, aka the more detailed 'tree' spin-offs.

---

[0] : http://sites.music.columbia.edu/cmc/courses/g6610/fall2003/w...

[1] : Mozart insults Salieri by playing his own piece better than he did : https://www.youtube.com/watch?v=9jlQiHHMlkA


I didn't quite understand your first link, or what it has to do with music. But the second link points to something that is neither a fugue nor fractal in any way. The Mozart piece played there has no fractal properties, and deviates completely from the original Salieri melody. Perhaps you're confusing fractality with regular counterpoint?

To clarify, a temporally fractal musical composition would be self-similar on different time scales. For example, if we consider a 7-note melody, then we could, in the simplest case, make it fractal by inserting the same 7-note melody (perhaps in a different key, or even a slight variation, but still easily recognizable as the original melody) between two selected notes (preferably neighboring) of the original melody.
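The insertion scheme described above can be sketched directly; the melody, insertion point, and transposition below are arbitrary placeholders, chosen only to show one level of the construction:

```python
# "Fractal by insertion": embed a (transposed) copy of the whole melody
# between two neighbouring notes of the original. Notes are MIDI numbers.
def insert_copy(melody, position, transpose=0):
    """Insert a transposed copy of `melody` right after index `position`."""
    copy = [n + transpose for n in melody]
    return melody[:position + 1] + copy + melody[position + 1:]

theme = [60, 62, 64, 65, 67, 65, 64]          # a 7-note melody
level1 = insert_copy(theme, 2, transpose=7)   # same melody a fifth up,
print(level1)                                 # nested after the 3rd note
```

Applying `insert_copy` again to `level1` (and so on) yields self-similarity at each time scale, which is the property the comment is describing.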

Still, I find your remark "what are fugues but fractal music?" interesting, because fugues might be the most suitable musical structures to have a fractal property. In a complex 4 or 5 voice fugue, as the voices partially overlap, sometimes a higher level melody emerges, and this melody could, in principle, be constructed to mimic the common theme. I have never noticed any fugues to be fractal, but I wouldn't be surprised if a genius like Bach had tried to compose such a fugue, and it would be interesting to hear it.


What about comparing fractal scale differences between a song done in two different composition styles (where scales don't quite match up)?

Medieval style 'Take on Me' : https://www.youtube.com/watch?v=yC7KlX9B2Q8

Modern pop style 'Take on Me' : https://www.youtube.com/watch?v=3bT5bFzQInQ


Ok, was viewing it more from a waveform point of view, aka attack, decay, sustain, release (ADSR), where each component of ADSR is a "voice" track. Put each ADSR component in one of 4 Euclidean quadrants for 'time reference'.

Would suggest Lewis Carroll's 'Through the Looking-Glass' per quaternions. Unfortunately, Lewis Carroll didn't recognize time as an aspect of quaternions (and time is a big part of music). From a waveform aspect, changing scale (mapping scale) would produce a richer form of information, vs. non-fractal, which would result in just zooming in to a specific proportion of interest.

Ignoring the fact that the human voice can't generate 'pure tones', four voice parts impart additional harmonics not noted on the page, just from the 'instrument' variation: Beatles songs pre-Stratocaster vs. post-Stratocaster.

Another way to look at it: a single voice part, progressing left to right. A non-fractal note is strictly a single tone, no sub-harmonics. A fractal note is everything associated with a single tone within a given tone range (harmonic resonance). L-system branches are equivalent to sub-harmonics, where each sub-branch is an additional harmonic.

A simple graph combo of ADSR/4 voices gives a smooth flow/progression, vs. adding spin-offs to any of the ADSR/4 voices (either directly, or because combining them produces additional harmonics beyond the original ADSR/4 voices) would give 'fractal results'.

ADSR quadrant -- a simple non-fractal "pure" tone would have a simple circle in each of the 4 Euclidean quadrants. Additional subtones would be additional circle(s) in the relevant quadrant, where circle size relates to the reference tone (smaller for sub-harmonics, larger for overtones). No decay -> no circle in the corresponding decay quadrant. Aka, the wider the harmonics range looked at, the more detail & more additive harmonics; the narrower the range, the less detail / fewer additive harmonics to plot. More vocal voice parts, more potential for additional harmonic combinations.

Simple instrument example -- Beatles songs before the Stratocaster guitar vs. post-Stratocaster. The latter have much fuller tones & less distinct harmonics.

From a complete-composition view: Mozart Heroes playing Metallica : https://www.youtube.com/watch?v=UBfsS1EGyWc


Interestingly, I use a phasor to control a buffer reader, and I can get somewhat similar results.

https://youtu.be/UZxZ8vpBOPo?si=hZKTb3nWhpj4uY-z
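For readers unfamiliar with the technique the parent mentions: a phasor is a ramp from 0 to 1 at some frequency, and using it as the read position into a buffer turns the buffer into a wavetable. A minimal offline sketch (the frequencies and buffer contents here are arbitrary placeholders, not the parent's patch):

```python
import numpy as np

def phasor(freq, sr, n):
    """0..1 ramp repeating `freq` times per second, n samples long."""
    return (freq * np.arange(n) / sr) % 1.0

# a one-cycle sine wavetable as the buffer to be read
buffer = np.sin(2 * np.pi * np.linspace(0, 1, 1024, endpoint=False))

ramp = phasor(110.0, 44100, 44100)       # one second of phasor at 110 Hz
idx = (ramp * len(buffer)).astype(int)   # map phase to a buffer index
out = buffer[idx]                        # the buffer-reader output
```

Replacing the linear ramp with something L-system-driven is what would give the "somewhat similar results" the parent describes.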


Yeah, but fractal garbage collection is a bit messy/unbounded.


I really enjoyed the pulsar synthesis sounds: https://nathan.ho.name/posts/pulsar-synthesis/

Very Aphex Twin.


Amazing stuff. I'm going to have to start messing with SuperCollider now :)


The sounds produced remind me very much of early Human League, specifically the track 'The Dignity of Labour Pts. 1-4' (1979), with parts 2 and 4 sticking out more in the déjà vu. https://youtu.be/Yfh8zsF5308?si=PlYXLZb3Jqks7U_Y


There have also been some papers on L-systems for composition leveraging a traditional scale: https://www-users.york.ac.uk/~ss44/bib/ss/nonstd/eurogp05.pd...


It would be interesting to see this done on the resulting waveform itself instead of on modulations, i.e. direct generation of waveforms using L-systems.
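One toy way to do what the comment suggests: expand an L-system string, then walk it like a one-dimensional turtle where 'F' steps the amplitude in the current direction and '+'/'-' reverse it, emitting one sample per step. The symbol meanings and rules below are invented for illustration; they are not from the article:

```python
def expand(axiom, rules, n):
    """Apply the L-system rules n times (symbols without a rule pass through)."""
    for _ in range(n):
        axiom = "".join(rules.get(c, c) for c in axiom)
    return axiom

def to_samples(lstring, step=0.05):
    """1-D turtle: 'F' moves amplitude by `step` in the current direction,
    '+' and '-' reverse direction; every 'F' emits one sample."""
    amp, direction, out = 0.0, 1.0, []
    for ch in lstring:
        if ch == "F":
            amp = max(-1.0, min(1.0, amp + direction * step))
            out.append(amp)
        elif ch in "+-":
            direction = -direction
    return out

s = expand("F", {"F": "F+F-F"}, 4)   # self-similar symbol string
samples = to_samples(s)              # the waveform, generated directly
```

Because the string is self-similar, the resulting waveform inherits structure at several time scales, which is exactly the "direct generation" idea.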


Somehow, I don't think Lisp Fourier series via UTF character fitting was a defining motivation for being able to stack UTF chars into a single char space.

Perhaps a Nyquist plot[2], NyquistGUI[1] or "Inverse Procedural Modeling of Branching Structures by Inferring L-Systems"[0] discusses an approach.

A more 'standard' approach would be to have the sound image in SVG format, then convert the SVG image to G-code, using image scaling to filter out higher/lower-level harmonics.

The G-code used in CNC / laser-cutter machines is similar to an L-system's working/local coordinate system.

A lot more verbose, but it could likely be culled & transformed into a slimmer 'L-system'-like take.

Inkscape is a graphics program with the ability to convert SVG to G-code.

----

[0] : https://news.ycombinator.com/item?id=38945108

[1] : https://lpsa.swarthmore.edu/Nyquist/NyquistGui.html#Introduc...

[2] : https://lpsa.swarthmore.edu/Nyquist/Nyquist.html


Reminds me of speech inflection.


A vocoder is more representative of the ADSR approach.[0]

Although, adding the orchestral music score, the single-track equivalent of speech production looks about the same.[1]

Space / technical complexity notwithstanding, one could generate a full choir & full orchestra with 1 vocoder per singer / 1 vocoder per orchestra instrument.

Suppose ChatGPT could assist with reducing/minimizing the number of vocoders needed via waveform-overlap reductions, at the expense of being specific to a given set of song(s).

----

[0] : https://www.youtube.com/watch?v=TsdOej_nC1M

[1] : https://www.youtube.com/watch?v=JManm091qWI



