The discussion about technological growth and the ever-evolving field of Artificial Intelligence has revolved around many areas such as vehicles, mobile communication devices, imaging applications like cameras and so on. Now that path has made its way into the audio industry as well, in the form of third-party plugins. Let's take a look at some of the companies and the plugins they have already released for commercial use, and how they affect the industry and the professionals in it.
Before we get into the audio side, let's breeze through Artificial Intelligence, or A.I. We know it was created to help mankind live a more technologically advanced and easier life, where many of the things we do can be done by machines for us. That's the most basic and easiest way to explain A.I too. A.I comes in a few forms, as most of us know by now. The basic form is "Machine Learning". Machine learning is a subfield of A.I based on algorithms that are set to "learn" and perform the tasks they are given. Technically, the machine does not know or understand "why" or "how" it is doing them; it just follows the programmed algorithm and performs. This form of A.I keeps humans in a fairly safe spot, because we can still reason about and understand our actions. The broader ambition of A.I builds on machine learning at a bigger scale, where the machine is expected to understand, reason and create its own knowledge about the actions it takes for the tasks assigned. That is basically developing machines that can act like "humans". Now that we know a little about A.I, it's easier to see and understand how companies have taken this technology into the audio industry. I'll be looking at plugins, which are the most common application of audio software for 99% of people, though I'm sure there are other fields in professional audio where A.I technology has been implemented. That could get more complex and drawn out, so we'll stick with plugins for today.
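To make that concrete, here's a tiny, hypothetical sketch (Python with scikit-learn) of machine learning in that narrow sense: the model learns to map made-up sample features to labels, but it has no idea why a low, loud thump is a kick.

```python
# A minimal sketch of "machine learning" in the narrow sense described above:
# the model fits a mapping from inputs to labels, but it has no notion of
# "why" a loud, low-frequency sound is a kick drum.
from sklearn.tree import DecisionTreeClassifier

# Toy features: [peak level in dB, dominant frequency in Hz] (made-up values)
X = [[-6, 60], [-8, 80], [-10, 2500], [-12, 3000], [-9, 70], [-11, 2800]]
y = ["kick", "kick", "snare", "snare", "kick", "snare"]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[-7, 75]]))   # -> ['kick'] for a new, unseen example
```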
Accusonus Drumatom is believed to be one of the first music processing tools of its kind developed on "machine learning". It was released in 2014, mainly targeted at reducing microphone bleed in drum recordings. From the phrase "machine learning", you already know that it depends on what it receives and what it is told to do, and does not know the logic behind it. What Drumatom does is analyze a multitrack drum recording imported into it and, through some complex processing, come up with a calculation that identifies the microphone bleed within the individual drum tracks, which the engineer can then fine-tune with only two knobs to reduce that bleed. How is that possible? As we discussed before, Drumatom uses a machine learning algorithm, so it "learns" from the multitrack drum recording from the several microphones you import into it and, in effect, guesses which sounds are bleed and which are the focused source. The downside of it being a machine learning tool, though, is that it does not really understand what a microphone is the way we do; it only follows the algorithm, so it may misclassify bleed and non-bleed. That requires the engineer to experiment with the software across the different drum tracks to get the most desirable outcome. The behaviour of the application varies depending on the input signal it receives, so choosing the right tracks can be crucial for a proper result.
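To give a feel for the general idea, here's a very crude Python sketch. This is not Accusonus' algorithm, just a simple envelope-based approach that ducks one close mic whenever another mic is clearly the louder source, on the assumption that those moments are mostly bleed:

```python
# A crude illustration of the *idea* of bleed reduction (not Accusonus'
# actual algorithm): attenuate a close mic whenever another mic's short-time
# energy dominates, assuming those moments are mostly bleed.
import numpy as np

def rms_envelope(x, win=1024, hop=512):
    """Short-time RMS envelope of a mono numpy signal."""
    frames = range(0, len(x) - win, hop)
    return np.array([np.sqrt(np.mean(x[i:i + win] ** 2)) for i in frames])

def reduce_bleed(target, other, amount=0.7, win=1024, hop=512):
    """Duck 'target' in the frames where 'other' is clearly the louder source."""
    env_t = rms_envelope(target, win, hop)
    env_o = rms_envelope(other, win, hop)
    gains = np.where(env_o > env_t, 1.0 - amount, 1.0)   # per-frame gain
    # Stretch frame gains back to sample length (simple nearest-frame mapping)
    gain_per_sample = np.repeat(gains, hop)[:len(target)]
    out = target.copy()
    out[:len(gain_per_sample)] *= gain_per_sample
    return out
```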
Accusonus have also developed another piece of software called Regroover. By definition, Regroover was created to un-mix drum and beat loops. Huh? Calm down. What that means is that it uses A.I to analyze any single-file drum, beat or percussion loop and split it into multiple tracks, effectively separating the various elements or instruments within the loop onto their own tracks, so musicians can isolate the specific elements in a loop that they want. The purpose is to give musicians more freedom and creativity in manipulating drum and beat loops beyond their original form.
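Regroover's engine is proprietary, but non-negative matrix factorization (NMF) on a magnitude spectrogram is a well-known textbook way to "un-mix" a loop into layers, and it gives a rough feel for what a tool like this does. The file names below are hypothetical:

```python
# NMF-based loop "un-mixing" sketch (a common textbook technique, not
# Regroover's proprietary engine).
import numpy as np
import librosa
import soundfile as sf
from sklearn.decomposition import NMF

y, sr = librosa.load("loop.wav", sr=None)          # hypothetical input loop
S = librosa.stft(y, n_fft=2048, hop_length=512)
mag, phase = np.abs(S), np.angle(S)

n_layers = 4                                       # e.g. kick / snare / hats / perc
model = NMF(n_components=n_layers, init="nndsvd", max_iter=400)
W = model.fit_transform(mag)                       # spectral templates
H = model.components_                              # activations over time

# Resynthesize each layer with the original phase (a rough approximation)
for k in range(n_layers):
    layer_mag = np.outer(W[:, k], H[k])
    layer = librosa.istft(layer_mag * np.exp(1j * phase), hop_length=512)
    sf.write(f"layer_{k}.wav", layer, sr)
```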
There are other applications, such as Superior Drummer 3 from Toontrack, that also include microphone bleed reduction. Superior Drummer 3 is basically a professional drum programming application that lets musicians program live-sounding drums inside the software, drums that sound and feel organic and natural because it contains thousands of live drum sounds, all professionally recorded and processed by some of the best engineers in the world. It allows musicians to program custom parts, replace their own drum recordings with samples from the application's library, or use their own samples, to create cleaner and more professional-sounding drums. It uses machine learning throughout these features to get the best results. Musicians can also direct the application to listen to one particular track among all the drum tracks and use it as a reference to reduce microphone bleed. This gives the engineer more focused options and a better result as a whole when done right.
Keeping the drums theme going, let's take a look at Atlas by Algonaut. This is a very recent release (June 2018). Atlas is a drum sampler that uses complex A.I technology developed by Algonaut called SoundClass. According to Algonaut, the SoundClass A.I is built on modified audio analysis methods, deep learning and customized neural networks, and it can listen to and learn any audio introduced to it and determine which category it belongs to, such as kick, snare, clap, hats and so on. The idea is to let musicians import their own sets of samples, or use the samples within the software, and create custom drum loops that Atlas assembles at the click of a button. The goal is to cut down the time spent hunting for the right samples out of the thousands one may have; instead, you simply audition them as Atlas switches between samples each time the user clicks the loop-creation button. It maps all the sounds onto a unique dotted map within the software and categorizes them by their characteristics. When you zoom into the map, the dots get smaller and more dots appear, representing all the imported samples, all placed according to how they sound and what they are. Musicians can drag and drop any sample into the sequencer section and keep things moving as they audition until they find the sound they like. There are many more functions, such as locking a sample into place so it won't be switched out while the others are swapped during auditioning. Users can also "like" or "dislike" the samples Atlas chooses: "disliked" samples won't be chosen again, and samples that sound similar to them will also be avoided while auditioning, while Atlas stays close to the sounds the user "liked". This in particular has taken A.I technology to the next step in drum programming. For musicians who like to work fast, this application will certainly serve them well. According to Algonaut, they are not aiming to replace musicians with A.I but rather to let the two work together to remove the problems that block creativity during production.
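Algonaut hasn't published how SoundClass works, but the "dotted map" idea can be approximated with standard tools: describe each sample with MFCC features, then project the collection onto two dimensions so similar-sounding hits land close together. A rough sketch, assuming a folder of a few dozen hypothetical one-shot samples:

```python
# A rough approximation of a 2D "sample map" (not Algonaut's SoundClass):
# MFCC features per sample, projected to 2D with t-SNE so that similar
# sounds end up near each other.
import glob
import numpy as np
import librosa
from sklearn.manifold import TSNE

features, names = [], []
for path in glob.glob("samples/*.wav"):            # hypothetical sample folder
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    features.append(mfcc.mean(axis=1))              # one fixed-length vector per sample
    names.append(path)

# Assumes at least a few dozen samples (perplexity must be < number of samples)
coords = TSNE(n_components=2, perplexity=15).fit_transform(np.array(features))
for name, (x_pos, y_pos) in zip(names, coords):
    print(f"{name}: ({x_pos:.1f}, {y_pos:.1f})")    # position on the 2D sample map
```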
iZotope is, in my opinion, currently pioneering A.I technology in the audio software world. They have successfully released several products that use A.I in the audio restoration, mixing and mastering process. Their main applications that use A.I are RX Audio Editor, Neutron and Ozone, and recently they released Nectar Elements. We'll briefly go over these applications.
iZotope RX is a professional all-in-one audio restoration application that allows engineers to fix almost any problem in an audio track, from vocals, sound effects and live recordings to virtual instruments. RX can correct issues like noise, unwanted sounds, hum, hiss, plosives, rustle, wind, breath control, ambience matching, bleed and much more, all in a single application. Every processing module programmed into RX is highly intelligent and efficient. Depending on how its parameters are set, RX can take audio restoration to a level that was never achievable before. Truly remarkable. Some might not agree that this application uses A.I, but I certainly think it does. (It's my personal favorite application too.)
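RX's modules are obviously far more sophisticated, but classic spectral subtraction illustrates the basic principle behind "learn the noise, then remove it". A minimal sketch, assuming a hypothetical mono file whose first half second is room tone only:

```python
# Classic spectral subtraction (a textbook technique, not RX's algorithms):
# estimate a noise profile from a noise-only region, then subtract it from
# the magnitude spectrogram of the whole file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

sr, audio = wavfile.read("noisy_take.wav")          # hypothetical mono file
audio = audio.astype(np.float64)

f, t, S = stft(audio, fs=sr, nperseg=2048)
mag, phase = np.abs(S), np.angle(S)

# Assume the first 0.5 s contains only noise (room tone)
noise_frames = int(0.5 * sr / (2048 // 2))
noise_profile = mag[:, :noise_frames].mean(axis=1, keepdims=True)

# Subtract the profile, keeping a small floor to avoid musical-noise artifacts
clean_mag = np.maximum(mag - 1.5 * noise_profile, 0.05 * mag)
_, clean = istft(clean_mag * np.exp(1j * phase), fs=sr, nperseg=2048)
wavfile.write("denoised_take.wav", sr, clean.astype(np.int16))
```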
iZotope Neutron is an all-in-one audio mixing tool. It was developed to create "starting points" for engineers to mix from. The idea is that when you open Neutron on a track, for example a vocal track, it asks you to play a portion of the track so Track Assistant can analyze it. Once the analysis is done, the A.I automatically determines what kind of track it is (kick, snare, vocal and so on) and sets the parameters on processors like EQ, Compressor, Gate, Exciter and Transient Shaper based on what it "thinks" is best for the track, depending on the analysis method chosen and the signal it was fed. This practically "mixes" the track for you. However, it's not always perfect. Neutron creates a starting point, and depending on your decisions you can tweak the settings further to get what you want. iZotope took the technology further with inter-plugin communication. If there are several instances of Neutron or Ozone in your session, they can communicate with each other as far as settings and parameters go, which allows cross-referencing, analysis and processing between the plugins on different tracks. A small example would be kick and bass. If you have Neutron on both tracks and you open the one on the bass, there is an option called "Masking". Once clicked, it shows all the available instances of Neutron within the session in a dropdown menu. If you select the Neutron on the kick track, you see a split view of both the current bass track's analyzer and the kick track's analyzer in the same plugin window. When the tracks play, Neutron highlights the areas where both tracks are fighting for the same frequencies, letting you easily correct them with your preferred processing. This in particular has made a lot of engineers happy, since they can easily detect frequency-based problems within their session. Neutron certainly took mixing to the next level, in a way allowing new engineers, or those with no engineering knowledge, to mix with a single click.
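The Masking meter itself is iZotope's technology, but the core idea can be approximated by comparing the average spectra of two tracks and flagging the bands where both carry significant energy. A rough sketch, assuming hypothetical kick and bass bounces:

```python
# A rough approximation of masking detection (not iZotope's Masking meter):
# flag frequency bins where both tracks sit near their own peak level.
import numpy as np
import librosa

def average_spectrum(path, n_fft=4096):
    y, sr = librosa.load(path, sr=44100, mono=True)
    S = np.abs(librosa.stft(y, n_fft=n_fft))
    return S.mean(axis=1), librosa.fft_frequencies(sr=sr, n_fft=n_fft)

kick_spec, freqs = average_spectrum("kick.wav")     # hypothetical bounces
bass_spec, _ = average_spectrum("bass.wav")

kick_db = 20 * np.log10(kick_spec + 1e-9)
bass_db = 20 * np.log10(bass_spec + 1e-9)

# Flag bins where both tracks are within 6 dB of their own maximum
mask = (kick_db > kick_db.max() - 6) & (bass_db > bass_db.max() - 6)
for f_hz in freqs[mask]:
    print(f"Potential masking around {f_hz:.0f} Hz")
```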
iZotope Ozone is pretty straightforward. It's an iconic mastering plugin, with A.I added in the latest version under the name Master Assistant. All you have to do is bring your song in and have it analyzed for the platform you're releasing on, either streaming or CD. Alternatively, you can import a reference track for Ozone to refer to when mastering your song. Once the analysis is done, similarly to Neutron, Ozone chooses the processing it finds fitting from EQ, Tape Emulation, Exciter, Compressor, Spectral Shaper, Stereo Imager and Limiter, and sets the parameters on the chosen processors based on what it "thinks" is appropriate for the song. And of course you can tweak the parameters further if you don't like what it did, or leave them if you do. You can master more than one song within the application.
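Master Assistant is proprietary too, but the reference-track part of the idea can be sketched as a simple "matching EQ": compare the long-term average spectrum of your mix with that of the reference and derive a smoothed, gentle gain curve that nudges the mix toward the reference tonality. File names here are placeholders:

```python
# A simple "matching EQ" sketch (not Ozone's Master Assistant): derive a
# smoothed gain curve from the difference between reference and mix spectra.
import numpy as np
import librosa
from scipy.ndimage import uniform_filter1d

def long_term_spectrum(path, n_fft=8192):
    y, sr = librosa.load(path, sr=44100, mono=True)
    return np.abs(librosa.stft(y, n_fft=n_fft)).mean(axis=1)

mix = long_term_spectrum("my_mix.wav")               # hypothetical files
ref = long_term_spectrum("reference.wav")

gain_db = 20 * np.log10((ref + 1e-9) / (mix + 1e-9))
gain_db = uniform_filter1d(gain_db, size=64)          # smooth the curve
gain_db = np.clip(gain_db, -6, 6)                     # keep corrections gentle
print("Suggested EQ correction (dB) per FFT bin:", gain_db.round(1))
```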
Finally from iZotope, their latest release, Nectar Elements. There isn't much to say because it's extremely straightforward. Nectar Elements is a vocal processing and mixing tool designed specifically for vocals. Once it's opened on a vocal track, it asks the user to select the analysis method, similar to Neutron and Ozone. Once you've selected one, it asks you to play a portion of the vocal and analyzes it under the hood. After the analysis is done, Nectar Elements presents you with six parameters: Pitch, Clarity, De-ess, Dynamics, Tone and Space, each with only a single vertical fader for you to tweak if you choose to. Compared to Neutron and Ozone, Nectar Elements does not expose detailed parameters for users to tweak. That's probably because it's the "Elements" edition, and most probably an "Advanced" edition will be released soon. This plugin was created to mix vocals, be it in a song or dialogue, quickly and with little effort. That's pretty much it for Nectar Elements. Yeah... cute little plugin.
The last company I want to look at is Zynaptiq. As far as I know, there are two plugins Zynaptiq has developed that use A.I technology. I'll breeze through them to keep the post focused.
Adaptiverb is an extremely unconventional reverb plugin. Unlike common reverb plugins, Adaptiverb uses A.I technology to add ambience to a sound according to the sound itself as it appears at the plugin's input, aimed mainly at sound design and film scores. The A.I learns from the input signal and applies ambience according to what it "thinks" is best. But that's not all Adaptiverb does. It has more functions the user can tweak to further shape the sound, such as richness, pitch randomization and harmonic filtering, which also uses A.I, and much more. All around, this is really a masterpiece of a reverb plugin, especially if you're into film score mixing or sound design, because the A.I constantly "adapts" to the incoming signal and creates amazing ambience without sounding too muddy or washed out.
Unveil is a focus/de-reverberation plugin that uses A.I. It's pretty simple and straightforward to use. It allows the user to reduce the ambience in a sound and focus on the main element, or the other way around: reduce the main element and bring up its ambience. This can be very useful for both film post-production and music production. Think of reducing the reverberation of foley recorded in the middle of a room to focus on the recorded sound, increasing the ambience of big machinery or a synth while reducing the main sound to create dynamic scene ambience for films or games, or simply removing ambience from drums on a track. This opens up a wide range of possibilities, especially for sound design.
A final mention goes to ToneBoosters Equalizer 4, a dynamic EQ that uses machine learning and A.I to suggest EQ settings for a signal, similar to iZotope Neutron.
With all that said, what does this mean for the audio industry? Will A.I replace musicians and engineers in the near future? At the moment it won't, because all these tools are only as good as what they're programmed to do. The vibe, the out-of-the-box thinking, the creative processes and the "breaking the rules" to get an amazing sound are core factors humans possess, and those are the factors that make songs sound out of the ordinary. A.I is getting better by the day. In a few years I can definitely see A.I mixing and mastering a whole song, heck, even composing one, all with a few clicks. That may create a clean, "perfect" song, but it would still miss the human element. That doesn't mean there aren't any possibilities, though. Technology is unpredictable; anything can happen, so we'll just have to wait and see. For now, these are excellent tools to support the creative process, so we might as well make use of them. For young engineers who learn off the internet: don't get too comfortable with these tools. I'd highly suggest you learn at least the basics so you have a solid foundation and understand the processes that take place in audio engineering. As I always say, be a knowledgeable person.
Until next time, have a great day #steemians!
Follow me on other social media.
Twitter
Instagram
Check out these amazing Steemit initiatives for musicians as well!
Hip Hop The Blockchain
Record Pool
Block Mountain
Out of this list I like the iZotope RX and Unveil... definitely going to check those two out. Good stuff! AI and mixing! Machine learning! Almost like a personal assistant in a way, if done right!
RX is definitely a tool I use a lot. Especially recording in my room, it's super easy to clean up my recordings. And you're right, the technology behind this new stuff coming out is great. For people who know what they're doing, these tools can really be a creative spark.