My second post here. This entry includes the word “animation”, though the word is applied quite loosely to the content. That is to say, “animation” can mean many things and admits many permutations. All graphics running on a computer are animations in some sense of the word - the machine delivers one frame at a time to the screen, usually 30 or 60 of these frames per second.
The artist or musician often has use for constructing visual plays within performances or installations. It is also (in the experience of the author) interesting to think about how to create small movies that are a version or prototype of an installation or performance.
- Transparent PNG component of the end result.
To create a movie that retains user interaction is quite a challenging task. There are various ways to approach it. One can make a movie with some kind of screen-capture software. This is fine up to a point, but sometimes the image quality is not what one might wish. Another approach is to capture each frame of the interaction as a PNG, JPEG or PDF. This has some advantages and is very useful for capturing a stream of images from an animated graphic piece. However, depending on the complexity of the piece, the CPU can find it hard going to save images while handling the other rendering tasks. One way to overcome this is to use separate threads to manage the different activities. Both OpenFrameworks (C++) and Processing (Java) offer this kind of multi-threading.
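As a minimal sketch of the single-threaded version of this idea in Processing (the drawing is just a placeholder), saveFrame() can simply be called at the end of draw():

```
// Minimal Processing sketch: write every frame of the animation to disk
// as a numbered PNG. The ellipse is a stand-in for the real animation.
void setup() {
  size(1280, 720);
}

void draw() {
  background(0);
  ellipse(width / 2.0 + 200 * sin(frameCount * 0.05), height / 2.0, 80, 80);

  // saveFrame() blocks the draw loop while the PNG is written, which is
  // where the frame-rate problems mentioned above come from on complex pieces.
  saveFrame("frame-####.png");
}
```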
A question raised here: is the end result a high-definition movie? Short answer: in my experience it is a good way of prototyping an idea (an installation or animation).
Specifically, we are taking a series of PNG files and putting them back together (in Final Cut Pro, Blender or Processing) to create a movie. We are focusing on this procedure as a way of prototyping an idea. In the present case the “final product” (that which we ultimately have in our sights) is centred upon a real-time computer process: an installation, for example, that will actually be a computer running the different screens, interactions and animations in a gallery space. The capture process described here merely provides a good blueprint of the ideas. That said, the process itself is quite useful and gives a highly usable result.
Creating a stream of PNG files from a graphic context is not difficult. In Processing we make use of save() or saveFrame(). In OpenFrameworks we use ofSaveScreen(). To create a multi-threaded image saver there is an add-on available for OpenFrameworks called “ofxImageScreenSaver”. In Processing there are some ways of doing it without multi-threading, but to actually multi-thread one needs to make use of code found in the forum discussions or build it from scratch. Either way one will need to sub-class Java's Thread class and work inside that space.
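A hedged sketch of what that sub-classing can look like in Processing follows; the class name, queue and file naming are my own rather than taken from any particular forum post or library:

```
import java.util.concurrent.LinkedBlockingQueue;

// A saver thread that receives copies of frames from draw() and writes
// them to disk, so the PNG encoding does not stall the animation.
class FrameSaver extends Thread {
  LinkedBlockingQueue<PImage> queue = new LinkedBlockingQueue<PImage>();
  int count = 0;

  void run() {
    while (true) {
      try {
        PImage img = queue.take();  // wait for the next frame
        img.save(sketchPath("frame-" + nf(count++, 4) + ".png"));
      } catch (InterruptedException e) {
        return;
      }
    }
  }
}

FrameSaver saver;

void setup() {
  size(1280, 720);
  saver = new FrameSaver();
  saver.start();
}

void draw() {
  background(0);
  ellipse(width / 2.0 + 200 * sin(frameCount * 0.05), height / 2.0, 80, 80);
  saver.queue.offer(get());  // hand a copy of this frame to the saver thread
}
```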
In OpenFrameworks (hereafter OF) it is relatively simple to modify the image add-on so that the interaction (for example the music) starts together with the recording sequence. Once one has a stream of graphics that coincides with the user interaction and the sound interaction, one presses another button to release the buffer of images, which is then written to disk.
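The same start / buffer / release pattern can be sketched in Processing as an analogue of the OF approach just described. Here ‘r’ starts the recording (the point at which the music would also be started) and ‘w’ releases the buffer to disk; the key bindings and drawing are placeholders:

```
ArrayList<PImage> buffer = new ArrayList<PImage>();
boolean recording = false;

void setup() {
  size(1280, 720);
}

void draw() {
  background(0);
  ellipse(width / 2.0, height / 2.0 + 150 * sin(frameCount * 0.1), 60, 60);
  if (recording) {
    buffer.add(get());  // keep a copy of this frame in memory
  }
}

void keyPressed() {
  if (key == 'r') {
    // start the music here (e.g. a sample player or the interaction itself)
    recording = true;
  } else if (key == 'w') {
    recording = false;
    for (int i = 0; i < buffer.size(); i++) {
      buffer.get(i).save(sketchPath("frame-" + nf(i, 4) + ".png"));
    }
    buffer.clear();
  }
}
```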
In the present case an extended experiment was not conducted; rather, a single prototype sound-art piece was made, and that process is described here. Naturally this single example can only stand in for the process as a whole: some aspects are left unexplored and some components leave questions unanswered. It does, however, give a good overview of one particular implementation.
The first thing that is required is a kind of sketch of the overall form of the work. This is like a “storyboard” that we will refer to. In the present case, a series of quick drawings of what was required in each section was produced. An overview such as this allows the artist to get a high-level perspective on what will be required. Below are the sections mapped out in our basic storyboard / narrative.
1). The first section will consist of a “broken up star-field” that pulses and spins with the beat of a techno kick. An arpeggiated chord (actually a series of scaled “harmonics”) will be triggered also but will not affect the motion triggered by the kick. These sounds are created in SuperCollider.
2). The second section will continue this idea with some dynamic changes to different components - a “landscape” aesthetic is introduced with the addition of a moon.
3). The next section will be a close-up of the moon with some colour changes and tinting dynamically applied. A granulated sound cloud “texture” will overlay this section (it will start in the previous section and bridge across).
4). Section 4 will consist of coloured bands that stretch across the screen pulsing in time with the SuperCollider tones.
5). Section 5 is a kind of “star-scape” made from OpenFrameworks code. The moon appears here also (different phase and colour).
The first section is created using an FFT class inside OF to create different types of spin within the graphic components. The components themselves are built from a kind of circular structure (similar to a flower, with arms arranged from 0 to two pi). That circular structure is then deconstructed and broken up to achieve a particular aesthetic goal. The result is an abstract visual field that moves and changes with the kick drum.
The kick drum is added as a .wav file to the OF program. There is no complex interaction in this initial prototype. One could quite easily change some of the components to reflect some user interaction (the intention is to incorporate such interaction in the final product).
The kick drum is then passed through the FFT, and the different frequency bins modulate the graphic components at different levels and by different amounts.
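The piece itself does this with an FFT class inside OF. Purely as a rough analogue, the same idea in Processing with the Minim library (“kick.wav” standing in for the actual sample, and the arm count and scalings being arbitrary) looks something like this:

```
import ddf.minim.*;
import ddf.minim.analysis.*;

// Rough Minim analogue of the OF approach described above: a circular
// "flower" of arms from 0 to two pi, each arm modulated by an FFT bin
// of the looping kick-drum file. "kick.wav" is a placeholder.
Minim minim;
AudioPlayer kick;
FFT fft;
int arms = 64;

void setup() {
  size(800, 800);
  minim = new Minim(this);
  kick = minim.loadFile("kick.wav", 1024);
  kick.loop();
  fft = new FFT(kick.bufferSize(), kick.sampleRate());
}

void draw() {
  background(0);
  fft.forward(kick.mix);               // analyse the current audio buffer
  float spin = fft.getBand(1) * 0.02;  // low bins drive the overall spin
  translate(width / 2.0, height / 2.0);
  rotate(frameCount * spin);
  stroke(255);
  for (int i = 0; i < arms; i++) {
    float angle = TWO_PI * i / arms;
    float len = 50 + 4 * fft.getBand(i);  // each arm follows its own bin
    line(0, 0, len * cos(angle), len * sin(angle));
  }
}
```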
Section 3 uses a granulation process developed in OpenFrameworks. There is no overt interaction but the elements are applied by ear and eye. Different elements are added to Final Cut Pro and arranged to fit a looser overall aesthetic scheme.
Section 4 extends this approach with a few changes. The coloured bands are created in Processing with a multi-threaded approach that also keeps the background transparent. This allows the bands to be inserted over any background we might wish to use, so the moving, dynamically coloured bands can be layered into the overall scheme.
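A minimal sketch of the transparent-background part, assuming an off-screen PGraphics (band geometry and colours are placeholders; its frames could equally be handed to a saver thread like the one sketched earlier):

```
PGraphics bands;

void setup() {
  size(1280, 720);
  bands = createGraphics(width, height);
}

void draw() {
  bands.beginDraw();
  bands.clear();                        // fully transparent background
  bands.noStroke();
  for (int i = 0; i < 5; i++) {
    bands.fill(40 * i, 120, 200, 180);  // dynamically coloured band
    float y = i * height / 5.0 + 20 * sin(frameCount * 0.05 + i);
    bands.rect(0, y, width, 40);
  }
  bands.endDraw();

  background(30);                       // preview over a dummy background
  image(bands, 0, 0);
}

void keyPressed() {
  if (key == 's') {
    bands.save(sketchPath("band-" + nf(frameCount, 5) + ".png"));  // PNG keeps the alpha channel
  }
}
```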
In the present case the coloured bands are simply imported into Final Cut Pro and then moved around according to the aesthetic target. Here the SuperCollider arpeggios seem to trigger the appearance of a band and then, in the next instant, to change that relationship. Towards the end the sounds seem to be triggered by a more complex interaction between two colour bands, making the viewer search a little for the key or “solution”. Such complexities of rhythm or dynamic interaction create a space that invites longer participation (it takes longer to discern how the elements are relating).
The “hacks” described here do not really reflect actual interaction of the kind provided in section 1, but they do provide a good basis for structuring more refined interaction later. The prototype, in such cases, points us towards a complex type of interaction that then needs to be achieved programmatically.
Other such hacks include adding sounds to a piece in Logic Pro. This allows one to sync up filter sweeps (for example) to a particular motion in the animation. I find this useful for creating gestural motion and sweeps within the works. Logic Pro is suited to the purpose because you can add a movie file straight into the timeline of your project and work with it in real time.
YouTube Link:
Sound-art prototype: end result