logo by @camiloferrua
Repository
https://github.com/to-the-sun/amanuensis
The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go.
If you're interested in trying it out, please get a hold of me! Playtesters wanted!
New Features
- What feature(s) did you add?
At this point the fundamental functionality of The Amanuensis has been developed to a sufficient extent, and the most pertinent task at hand is now simply making the system smarter: better at determining which portions of your playing are "good" and which are "bad". Currently this analysis is all about rhythm. This is the work that really interests me and I've been waiting a long time to get to this point. The first upgrade in this respect has just been implemented.
Obviously hand-picked examples are merely anecdotal, but I believe the following sample song illustrates what this update is trying to achieve. Notice the rhythmic consistency and the ease with which you can begin to bob your head to it:
Contrast that with the older example below, which has rhythmic consistency within many of its parts, but feels much more amorphous overall:
The old algorithm calculated all of the intervals between an incoming beat and all past beats and then projected them forward in time, summing them into predicted moments of greater likelihood for the occurrence of future beats. Each incoming beat was then compared with this graph to determine whether it was a "hit" or a "miss".
the "graph" I'm referring to can be seen along the bottom in this old demo video. As they develop, the spikes in it are moments of greater likelihood for beats to come in
The new algorithm still projects surrounding intervals out into time, but rather than adding them all together, it does so individually and in the same moment as the incoming beat, thereby retaining more information. In particular, it can differentiate beats that are "connected" through repeated intervals with the extant song, already recorded and playing back. The idea is that everything captured will be forced to adhere and connect to a sort of rhythmic lattice of patterned intervals.
The screenshot below is of a test patch I put together in the course of designing this new upgrade. Here you can see the "lattice" I'm talking about. The white ticks in the first row are all of the beats captured by the old algorithm across a span of a chosen song. Each row beyond the second shows one rhythmic pattern that was found. Columns with no blue line running through them are not connected through any pattern of intervals to any other beats and therefore should be excluded by the new algorithm. Upon listening to the song, I felt that these areas did correlate with the more arrhythmic moments.
The above visualization is not yet part of The Amanuensis proper, but it may be integrated in some form in the future. The new method also exposes a greater number of parameters and statistics, which could be displayed in real time on a future GUI to facilitate greater understanding of the analysis being conducted.
- How did you implement it/them?
If you're not familiar, Max is a visual language, and textual representations like those shown for each commit on GitHub aren't particularly comprehensible to humans; you won't find any of the commenting there either. Therefore, the work completed will be presented using images instead. Read the comments in those to get an idea of how the code works. I'll keep my description here focused on the process of writing that code.
These are the primary commits involved:
In deciding which of the user's input notes are connected rhythmically to the already-recorded song, it was first necessary to document the beats in the song as millisecond timestamps which could be compared to each other. Since all of the heavy lifting needs to be done within a gen object (which executes at a low level in C), it was necessary to use a buffer~ to store them, as this is the only sort of "array" accessible by gen.
p document_song in consciousness.maxpat, complete with commenting
In order to compare song beats with the beats of the user's recent playing, they needed to be converted from beat position in the song to real-time millisecond timestamps (or "frames"). At the beginning of each loop of the song the frame is now sampled and stored with the following line added to progression.gendsp:
starting_frame.poke(stats.peek(0), 0);
This allows for conversion, as well as signaling when the play_head index should return to 0 and begin overwriting entries in the buffer~.
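A minimal sketch of what this conversion amounts to, written as standalone C with plain arrays standing in for the buffer~ objects (the names mirror the patch, but the code itself is hypothetical; the real logic lives in progression.gendsp):

// Hypothetical C stand-ins for the buffer~ objects used by gen.
#include <stdio.h>

long song[1024];          /* beat positions within the song, in ms */
long starting_frame = 0;  /* real-time frame at which the current loop began */

/* Called at the top of each loop of the song, mirroring
   starting_frame.poke(stats.peek(0), 0); this is also the cue for
   play_head to return to 0 and begin overwriting entries. */
void on_loop_start(long current_frame)
{
    starting_frame = current_frame;
}

/* Convert a beat's position within the song into a real-time millisecond
   timestamp ("frame"), comparable against the user's recent playing. */
long beat_to_frame(int beat_index)
{
    return starting_frame + song[beat_index];
}

int main(void)
{
    song[0] = 0;
    song[1] = 500;             /* two beats, half a second apart */
    on_loop_start(120000);     /* the loop restarts 120 s into the session */
    printf("%ld\n", beat_to_frame(1)); /* prints 120500 */
    return 0;
}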
Previously, the method of incorporating song beats into the rhythmic analysis relied on resampling the actual audio envelopes and sending them in for analysis as if they were essentially also being played in real time by another user. The new method is much less sloppy and allows for analysis of song beats into the future as well, since they're stored for the length of the song in an iterable list.
the genexpr code inside the gen in the subpatcher document_song, complete with commenting
With all of this setup taken care of, the meat of this update is a complete reworking of the analysis at the very heart of The Amanuensis. The following is the all-new code that supplanted large portions of rhythm.gendsp in consciousness.maxpat. The rest of the genexpr code found there was also heavily modified, and many now-extraneous portions of the rest of consciousness.maxpat were trimmed away as well.
/*
The essential calculation made by the script is determining the "likelihood" of a played note, essentially whether
it is a "hit" or a "miss". Every interval between the incoming note and both the user's recent playing and the beats
already in the song is checked to see if at least one more matching interval exists further out into the past or future.
If so, the played note is determined to be "patterned". If this pattern of at least 3 notes includes an already
established song beat, it is also determined to be "connected". In this way, played notes are considered hits if they
are "connected" to the existing song through some sort of pattern. Before the song begins, simply being "patterned"
is enough to begin the song with a click track equal to the interval of the pattern.
*/
likelihood = 0;
//out7 = rhythm.peek(timestamp % dim(rhythm));
connected = 0;
patterned = 0;
needle = play_head.peek(0); //needle is at most recent song beat (in the past)
//now = timestamp - starting_frame.peek(0) + stats.peek(6); //add click because the song starts on beat one
for(j = timestamps; j >= 0; j -= 1) { //reverberate from the past to incoming timestamp through recent playing
    past_timestamp = playing.peek(j); //starting at the biggest intervals and working down for the sake of inserting
    interval = now - past_timestamp; //current timestamp as easily as possible (at index 0). for() starts at last
    if(interval <= wake + atom) { //timestamp for sake of moving all timestamps and because a pattern might still be found
        target = past_timestamp - interval; //in greater song from that interval. EDIT: 1 PAST last timestamp because with 0 timestamps
        patterned = 0; //1 still needs to be inserted. Interval should safely be impossibly large in this case
        l = needle; //external declaration allows duplicate while()s to iterate without repetition
        if(lock) {
            check = song.peek(l); //1st checks song back into time
            if(check) { //assumes there might be a point briefly when the song locks but has no recitation
                while(check >= target - tolerance && l >= 0) {
                    if(abs(check - target) <= tolerance) { //success
                        patterned = interval;
                        connected = 1;
                        //involvements += 1;
                        break;
                    }
                    l -= 1;
                    check = song.peek(l);
                }
            }
        }
        if(!connected) { //(save processing where possible) then checks recent playing
            for(k = j + 1; k < timestamps; k += 1) { //this loop still works up through remaining timestamps (even though
                check = playing.peek(k); //the outer loop is unintuitively working down)
                if(abs(check - target) <= tolerance) { //tentative success
                    patterned = interval; //patterned documents interval of pattern. This should be the only one conveyed to click assignment
                    //involvements += 1;
                    target -= interval; //still needs to find a connection in song
                }
                if(check < target - tolerance) {
                    break;
                }
                if(lock) {
                    check = song.peek(l); //duplicate while() iterates further if target has been extended
                    if(check) { //assumes there might be a point briefly when the song locks but has no recitation
                        while(check >= target - tolerance && l >= 0) {
                            if(abs(check - target) <= tolerance) { //success
                                patterned = interval;
                                connected = 1;
                                //involvements += 1;
                                break;
                            }
                            l -= 1;
                            check = song.peek(l);
                        }
                    }
                }
            }
        }
        if(patterned) {
            //patterns += 1;
            //involvements += 2; //+2 for beat and future_beat
        }
        playing.poke(past_timestamp, j + 1); //move this timestamp up one spot
        //timestamps = max(timestamps, j + 2); //j is the index, timestamps is the quantity
    }
    else if(timestamps) { //clean up this timestamp
        playing.poke(0, j);
        timestamps -= 1;
    }
    if(!j) { //when at the beginning, insert current timestamp
        playing.poke(now, 0);
        timestamps += 1;
    }
}
if(lock && song.peek(0)) { //assumes there might be a point briefly when the song locks but has no recitation
    song_size = song_beats.peek(0);
    for(j = needle + 1; j < song_size - 1; j += 1) { //reverberate forward through song
        future_beat = song.peek(j);
        interval = future_beat - now;
        if(interval <= tolerance) { //success; played beat coincides with a song beat
            patterned = interval;
            connected = 1;
            //involvements += 2;
            break;
        }
        else if(interval <= wake + atom) {
            target = future_beat + interval;
            patterned = 0;
            for(k = j + 1; k < song_size; k += 1) { //check remaining beats
                check = song.peek(k);
                if(abs(check - target) <= tolerance) { //success
                    patterned = interval;
                    connected = 1;
                    //involvements += 1;
                    //target += interval; //no need to keep the chain going until more in-depth stats are desired
                }
                if(check > target + tolerance) { //overshot
                    break;
                }
            }
            if(patterned) {
                //patterns += 1;
                //involvements += 2; //+2 for beat and future_beat
                break; //for now, only point is to determine connected: all loops can end after success
            }
        }
        else {
            break;
        }
    }
    for(j = needle; j >= 1; j -= 1) { //reverberate backward through song
        past_beat = song.peek(j);
        interval = now - past_beat;
        if(interval <= tolerance) { //success; played beat coincides with a song beat
            patterned = interval;
            connected = 1;
            //involvements += 2;
            break;
        }
        else if(interval <= wake + atom) {
            target = past_beat - interval;
            patterned = 0;
            for(k = j - 1; k >= 0; k -= 1) { //check remaining beats
                check = song.peek(k);
                if(abs(check - target) <= tolerance) { //success
                    patterned = interval;
                    connected = 1;
                    //involvements += 1;
                    //target += interval; //no need to keep the chain going until more in-depth stats are desired
                }
                if(check < target - tolerance) {
                    break;
                }
            }
            if(!connected) { //save processing where possible
                for(k = 0; k < timestamps; k += 1) {
                    check = playing.peek(k);
                    if(abs(check - target) <= tolerance) { //success
                        patterned = interval;
                        connected = 1; //results in a connection because past_beat is part of song
                        //involvements += 1;
                        //target += interval; //no need to keep the chain going until more in-depth stats are desired
                    }
                    if(check < target - tolerance) { //overshot
                        break;
                    }
                }
            }
            if(patterned) {
                //patterns += 1;
                //involvements += 2; //+2 for beat and future_beat
                break; //for now, only point is to determine connected: all loops can end after success
            }
        }
        else {
            break;
        }
    }
}
if(connected) {
    likelihood = 1;
}
if(!lock && patterned) { //finding any pattern is the criteria for song starting
    stats.poke(1, 10); //lock
    lock = 1;
    likelihood = 1;
}
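To see the new hit/miss rule on concrete numbers, here is a toy, self-contained C rendition of the core test. It is heavily simplified relative to the genexpr above: fixed arrays instead of buffer~ peek/poke, no wake window, and backward-looking checks only. The point is just the rule itself: a played note is "patterned" if some interval between it and a neighboring beat repeats at least once more, and "connected" (a hit) if that chain of three beats touches the existing song.

// Toy illustration of "patterned" vs. "connected", not the actual patch code.
#include <stdio.h>
#include <stdlib.h>

#define TOLERANCE 30  /* ms: how close two beats must be to "match" */

/* Does any beat in arr[] fall within TOLERANCE of target? */
static int has_beat_near(const long *arr, int n, long target)
{
    for (int i = 0; i < n; i++)
        if (labs(arr[i] - target) <= TOLERANCE)
            return 1;
    return 0;
}

/* A note at `now` is "connected" if some interval between it and another
   beat repeats once more further back, and the resulting chain of three
   beats includes a beat already in the song. */
static int is_connected(long now,
                        const long *playing, int n_playing,
                        const long *song, int n_song)
{
    /* intervals between now and the user's recent playing: the third beat
       must be found in the song for the pattern to connect */
    for (int i = 0; i < n_playing; i++) {
        long interval = now - playing[i];
        if (interval > 0 &&
            has_beat_near(song, n_song, playing[i] - interval))
            return 1;
    }
    /* intervals between now and song beats: the middle beat is already in
       the song, so a third beat in either list connects the pattern */
    for (int i = 0; i < n_song; i++) {
        long interval = now - song[i];
        if (interval <= 0) continue;
        long target = song[i] - interval;
        if (has_beat_near(song, n_song, target) ||
            has_beat_near(playing, n_playing, target))
            return 1;
    }
    return 0;
}

int main(void)
{
    long song[]    = { 0, 500, 1000 }; /* beats already recorded, in ms */
    long playing[] = { 1500 };         /* the user's recent playing */
    /* 2000 extends the 500 ms lattice through 1500 back into the song */
    printf("%d\n", is_connected(2000, playing, 1, song, 3)); /* 1: hit  */
    printf("%d\n", is_connected(2222, playing, 1, song, 3)); /* 0: miss */
    return 0;
}

A note at 2000 ms forms the chain 1000 → 1500 → 2000 at a repeated 500 ms interval, reaching an established song beat, so it is connected; a note at 2222 ms repeats no interval and is a miss.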
GitHub Account
https://github.com/to-the-sun
To see a full history of updates, blog posts, demo songs, etc., check out my Steemit blog @to-the-sun.
Until next time, farewell, and may your vessel reach the singularity intact
To the Sun
Your contribution has been evaluated according to Utopian policies and guidelines, as well as a predefined set of questions pertaining to the category.
To view those questions and the relevant answers related to your post, click here.
Need help? Chat with us on Discord.
[utopian-moderator]
Thank you for your review, @helo! Keep up the good work!
I don't understand all the code mumbo jumbo. But from what I could understand, the project sounds interesting. I'd love to try.
Great! What's your instrument of choice? If you look at the GitHub readme there are some tutorials that will help you get up and running. If anything gives you trouble, don't hesitate to get a hold of me. I'd be glad to walk you through it. Are you on Discord? I'm @to_the_sun#5590.
@harry-heightz if you run into a problem where it's really hard to capture more than one beat in your song, I just pushed an update last night that should fix it.
Hey, @to-the-sun!
Thanks for contributing on Utopian.
We’re already looking forward to your next contribution!
Get higher incentives and support Utopian.io!
Simply set @utopian.pay as a 5% (or higher) payout beneficiary on your contribution post (via SteemPlus or Steeditor).
Want to chat? Join us on Discord https://discord.gg/h52nFrV.
Vote for Utopian Witness!