Conference Programme - Clár na Comhdhála
Day 1 - Friday
Location: Bewerunge Room
9:45 Conference Welcome
Music Programming Languages - 10:00 Invited Session: What's New in FAUST
Music Programming Languages - 10:30 Invited Session: Introducing Kronos - A Novel Approach to Signal Processing Languages
Music Programming Languages - 11:00 Invited Session: Running Csound in Parallel
Music Programming Languages - 11:45 Invited Session: Pd (aka Pure Data)
Music Programming Languages - 12:15 Semantic Aspects of Parallelism for SuperCollider
Supernova is a new implementation of the SuperCollider server scsynth, with a multi-threaded audio synthesis engine. To make use of this thread-level parallelism, two extensions have been introduced to the concept of the SuperCollider node graph, exposing parallelism explicitly to the user. This paper discusses the semantic implications of these extensions.
Sound Synthesis - 14:00 Mimesis and Shaping - Imitative Additive Synthesis in Csound
This paper introduces a realisation of Imitative Additive Synthesis in Csound, which can be employed for the realtime analysis of the spectrum of a sound suitable for additive synthesis. The implementation described here can be used for analysing, re-playing and modifying sounds in a live situation, as well as for saving the analysis results for further use.
Sound Synthesis - 14:45 Particle synthesis - a unified model for granular synthesis
The article describes an implementation of a synthesis module capable of performing all known types of time-based granular synthesis. The term particle synthesis is used to cover granular synthesis and all its variations. An important motivation for this all-inclusive implementation is to facilitate interpolation between the known varieties of particle synthesis. The requirements, design and implementation of the synthesis generator are presented and discussed. Examples of individual varieties are implemented, along with a longer interpolated sequence morphing between them. Finally, an application, the Hadron Particle Synthesizer, is briefly presented.
Sound Synthesis - 15:30 Audio Plugin development with Cabbage
This paper describes a novel approach to developing audio plugins with Csound. It begins with a short historical overview of the projects that led to the development of Cabbage as it exists today, and continues with a more detailed description of Cabbage and its use with digital audio workstations. The paper concludes with an example of an audio effect plugin and a simple MIDI-based plugin instrument.
Systems and Language - 16:30 Poing Impératif: Compiling Imperative and Object Oriented Code to Faust
This paper presents a new compiler called Poing Impératif, which extends Faust with imperative and object-oriented features. These features make it easier to start using Faust without having to think in fully functional terms from the outset, and may also enable semi-automatic translation of imperative and object-oriented code to Faust. Performance appears equal to that of pure Faust code when one of Faust's own delay operators is used instead of the array functionality provided by Poing Impératif.
Systems and Language - 17:15 Compiling Signal Processing Code embedded in Haskell via LLVM
We discuss a programming language for real-time audio signal processing that is embedded in the functional language Haskell and uses the Low-Level Virtual Machine as back-end. With that framework we can code with the comfort and type safety of Haskell while achieving maximum efficiency of fast inner loops and full vectorisation. This way Haskell becomes a valuable alternative to special purpose signal processing languages.
Workshops & Events - 14:00 Configuring your system for low-latency real-time audio processing - Workshop
Just installing GNU/Linux and a real-time kernel does not immediately turn your system into a low-latency real-time environment. A properly configured system demands further modifications beyond what most distributions set up by default, and in many cases a real-time kernel is not even necessary. Besides going through some of the most important ways to improve the performance of your system, this workshop may also debunk some tenacious myths.
16:30 Transmittance - Workshop
Transmittance explores artistic and scientific collaboration that is local, global, networked and broadcast. It aims to research the performative possibilities of streaming, broadcasting and telepresence, forging new types of performance and audience with a focus on critical and socially aware artistic languages. The Transmittance context: free streaming technology and telepresence software have opened up affordable new spaces for performance. In late capitalism we are bombarded with images of the body; we are manipulated through the visual with notions of freedom and pleasure. It is an environment that increasingly segregates people by way of ethically questionable free-market principles. Free software and open standards encourage innovation, cooperation and solidarity in the developed and developing worlds. Processing the audio/visual and the corporeal on a stage is an important strategy to subvert the numbing and blinding effects of contemporary life. Let's invent our own ways to position our perceptions of self in relation to freedom, pleasure and the society of (promised) wealth. Let's open new spaces of visibility for performance and new media art. We need to actively bypass the power structures that have a hold on physical spaces of artistic representation. Let's forge connections to new online audiences and collide the different specifics of various artistic media.
19:30 Blue Water - Concert
This piece is an attempt to imagine life inside the water. Life inside the water is shown as a surreal world full of reflected lights, strange codes and games of colours. Objects and people reflected in water pass into a new dimension, where one realises that water is life, purity and vital energy, but also a destructive natural force. All the sounds have been processed with Csound.
19:30 Superficies de Placer - Concert
19:30 Aeolus Study - Concert
The composition is both a self-similar or mensuration canon, in which a copy of the motive is shrunk and stacked on top of each note of the motive, and so on for four layers (i.e., generated the same way as a Koch curve in fractal geometry), and also a regular canon, at the octave, of the self-similar canon. -- The composition was realized using the composer's Silencio system for algorithmic composition in Lua, which runs on Android smartphones as well as personal computers.
19:30 Music for Flesh II - Concert
Music for Flesh II is a solo piece for augmented muscle sounds, which aims to demonstrate an experimental coupling between theatrical gesture and the biological sounds of the human body. The composition was developed as a first milestone of the Xth Sense project, a broader ongoing research effort which investigates exploratory, open source applications of biophysical sound design for musical performance and responsive milieux. Presently the Xth Sense technology consists of a low-cost, biosensing wearable device and a Pure Data-based framework for capture, analysis and real-time processing of biological sounds. The performer's voluntary muscle contractions produce kinetic energy and an acoustic sound, which is captured by the biosensor and deployed as the only sonic source. At the same time, the biological signal undergoes feature extraction, which provides several control parameters for the real-time sound processing. The performer's body is not only a controller but, more importantly, becomes a real musical instrument; the performer is capable of actually crafting music by exciting his muscle fibres. This paradigm attempts to inform classical gestural control of music; during the concert both performer and listeners can perceive an authentic auditory and cognitive intimacy, and the neat, natural responsiveness of the system prompts a suggestive and unconventional sound-gesture coupling.
19:30 Refract.ed - Concert
The sound material of 'refract.ed' consists of audified and, in part, subsequently signal-processed seismometer recordings of the 2008 Sichuan earthquake in China. Following 'underground sounds' - an electroacoustic piece also dealing with the audification of earthquake data - 'refract.ed' focuses even more strongly on working with and listening to very specific sounds of an earthquake's process, in order to further explore certain sonic textures. The entire piece is therefore based on a single sound file of 20 seconds, in this case representing slightly less than half an hour of earthquake activity. The data for 'refract.ed' was provided via a real-time data server belonging to the GEOFON network of seismic stations and converted to audio data using programmes specifically developed for this purpose in the course of working on 'underground sounds'. Special thanks to Anna Saranti.
19:30 Convergences - Concert
21:30 Little Soldier Joe - Concert
Oyvind Brandtsegg, Carl Haakon Waadeland
Linux Sound Night
21:30 Sublimation Revision/Transmittance - Concert
Luka Princic // Nova DeViator
Linux Sound Night
21:30 Notstandskomitee - Concert
Linux Sound Night
21:30 Improv Set - Concert
Linux Sound Night
21:30 The Infinite Repeat - Concert
Linux Sound Night
21:30 Jazz Bending - Concert
Linux Sound Night
21:30 Improv - Concert
Jesse Ronneau, Piaras Hoban
Linux Sound Night
Day 2 - Saturday
Location: Bewerunge Room
10:00 Keynote (cancelled) -- Lightning talks
(60 min) Fons Adriaensen
Keynote cancelled (presenter held up); instead: "lightning talks" and "news in Linux-Audio".
Audio Infrastructure and Broadcast - 11:00 OpenDAW - a reference platform for GNU/Linux Audio
Not being able to give a definitive answer to the question "Will my application or hardware work reliably with GNU/Linux?" presents a barrier to adoption by pro audio users. The OpenDAW platform offers a potential solution.
Audio Infrastructure and Broadcast - 11:45 PulseAudio on Mac OS X
PulseAudio is becoming the standard audio environment on many Linux desktops. As it offers network transparency as well as other interesting features that OS X does not offer its users natively, it is time to take a closer look at how to port this piece of software to yet another platform. In recent months, I have put some effort into porting PulseAudio to Mac OS X, aiming for cross-platform interoperability between heterogeneous audio host networks and enabling other features PulseAudio offers, such as uPnP/AV streaming. This paper first gives a short overview of how CoreAudio, the native audio system on Mac OS X, is structured and of the various ways to plug into this system, and then focuses on the steps it takes to port PulseAudio to Mac OS X.
Audio Infrastructure and Broadcast - 12:30 Airtime: Scheduled Audio Playout for Broadcast Radio
We present Airtime, a new open source web application targeted at broadcast radio for automated audio playout. Airtime's workflow is adapted to a multi-user environment found at radio stations with program managers and DJs. Airtime is written in PHP and Python and uses Liquidsoap to drive the audio playout.
Interfaces for Music Instruments - 14:15 A position sensing MIDI drum interface
This paper describes my attempts at making a cheap, playable MIDI drum interface which is capable of detecting not only the time and velocity of a strike, but also its location on the playing surface, so that the sound can be modulated accordingly. The design I settled on uses an aluminium sheet as the playing surface, with piezo sensors in each corner to detect the position and velocity of each strike. I also discuss the electronics (Arduino-based), the driver software, and the synths I have written which use this interface.
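The abstract does not spell out the paper's position algorithm, but the four-corner piezo layout it describes suggests a simple illustration: estimate the strike point as an amplitude-weighted centroid of the corner sensor readings. The sketch below is a hypothetical example under that assumption, not the interface's actual driver code; the corner coordinates and normalisation are invented for the illustration.

```python
# Hypothetical sketch (not the paper's actual algorithm): estimate the
# strike position on a rectangular surface from the peak amplitudes of
# four corner piezo sensors, using an amplitude-weighted centroid.
# Assumed corner layout, in normalised units:
#   a00 at (0,0), a10 at (1,0), a01 at (0,1), a11 at (1,1)

def strike_position(a00, a10, a01, a11):
    """Return an estimated (x, y) in [0,1]^2 from corner peak amplitudes."""
    total = a00 + a10 + a01 + a11
    if total == 0:
        raise ValueError("no signal on any sensor")
    # Sensors on the x=1 (resp. y=1) edge pull the estimate toward that edge.
    x = (a10 + a11) / total
    y = (a01 + a11) / total
    return x, y

# A strike near the (1,1) corner excites that sensor most strongly,
# so the estimate lands in the upper-right quadrant:
print(strike_position(0.1, 0.2, 0.2, 0.9))
```

In a real device the raw piezo peaks would first need calibration, since mechanical coupling to the aluminium sheet is rarely uniform across corners.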
Interfaces for Music Instruments - 15:00 Towards new mapping strategies for digital musical instruments
This paper contends that the design of digital musical instruments for live performance and composition has been hindered by the tendency to create novel applications which fail to offer musicians access to the more perceptually significant aspects of electronic music. Therefore, as a means of addressing this problem, this paper promotes the establishment of a more intelligent approach to the construction of digital musical instruments: one that is informed by relevant studies in human-computer interaction, cognitive psychology and product design.
Systems and Language - 16:00 An LLVM bitcode interface between Pure and Faust
This paper discusses the new LLVM bitcode interface between Faust and Pure which allows direct linkage of Pure code with Faust programs, as well as inlining of Faust code in Pure scripts. The interface makes it much easier to integrate signal processing components written in Faust with the symbolic processing and metaprogramming capabilities provided by the Pure language. It also opens new possibilities to leverage Pure and its LLVM-based JIT (just-in-time) compiler as an interactive frontend for Faust programming.
Systems and Language - 16:45 Python For Audio Signal Processing
This paper discusses the use of Python for developing audio signal processing applications. Overviews of the Python language, NumPy, SciPy and Matplotlib are given, which together form a powerful platform for scientific computing. We then show how SciPy was used to create two audio programming libraries, and describe ways that Python can be integrated with the SndObj library and Pure Data, two existing environments for music composition and signal processing.
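To give a flavour of audio processing in Python, the following self-contained sketch synthesises a sine tone and runs it through a one-pole low-pass filter. It deliberately uses only the standard library for portability; the paper itself builds on NumPy and SciPy, which would vectorise these loops.

```python
import math

# Illustrative audio DSP in plain Python: synthesise a 440 Hz sine and
# filter it with a one-pole low-pass (y[n] = (1-c)*x[n] + c*y[n-1]).

SR = 8000  # sample rate in Hz

def sine(freq, n, sr=SR):
    """Generate n samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def one_pole_lowpass(signal, coeff=0.9):
    """Simple recursive low-pass filter; larger coeff = lower cutoff."""
    y, out = 0.0, []
    for x in signal:
        y = (1.0 - coeff) * x + coeff * y
        out.append(y)
    return out

tone = sine(440.0, 1024)
filtered = one_pole_lowpass(tone)
# The filter attenuates the 440 Hz tone, so the peak amplitude drops:
print(max(abs(v) for v in filtered) < max(abs(v) for v in tone))
```

With NumPy the signal would be an array and the filter a call such as `scipy.signal.lfilter`, but the underlying arithmetic is the same.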
Systems and Language - 17:30 An Interface for Realtime Music Using Interpreted Haskell
Graphical sequencers have limits in their use as live performance tools. It is hypothesized that those limits can be overcome through live coding or text-based interfaces. Using a general purpose programming language has advantages over a domain-specific language. However, a barrier for a musician wanting to use a general purpose language for computer music has been the lack of high-level music-specific abstractions designed for realtime manipulation, such as those for time. A library for Haskell was developed to give computer musicians a high-level interface for a heterogeneous output environment.
Workshops & Events - 14:00 PulseAudio Developer's Meetup and Working Session - Workshop
(120 min) Kurt Taylor, Lennart Poettering » Location: O'Callaghan Room
PulseAudio Developer's Meetup and Working Session - Developer introductions - Release content and schedule - Technical discussions - Coding breakouts
14:15 A musician's workflow: composing, recording and arranging with qtractor and friends - Workshop
The GNU/Linux audio environment is very much based on modularity, as opposed to the monolithic approach found on other platforms. As a result it is very flexible and can be considered more an extension of an analogue set-up, with its intrinsic pros and cons. This is a totally different paradigm from that used on other platforms, so musicians coming from those platforms can have a hard time adapting to the GNU/Linux modular approach. The aim of this workshop is to show, by means of a musician's workflow, how you can benefit from the countless possibilities such a modular environment has to offer.
16:00 DIY Radio Station Storage, Scheduling and Streaming with Airtime - Workshop
This workshop covers installation, configuration and use of Airtime for an FM, DAB or online streaming radio station. Participants should bring a laptop with Ubuntu 10.04 'Lucid' LTS or Debian Squeeze installed, if they wish to follow the installation steps.
20:00 Ambi-Schlagfluss - Concert
This piece is based on my imaginings of a fluid made of sound, nervous and excited. The elements of this fluid are small percussive sounds. The different branches of the fluid are performing different expressive movements and shapes. Sometimes they seem almost recognizable, but this remains uncertain because of their inherent restlessness. Sound spatialization is essential for the piece. The listener is inside of the sound and its different movements. I used ambisonic tools; I found them very suitable for my goals here. They were coded in Csound, based on the works at the ICST in Zurich (special thanks to Johannes Schuett). For controlling the sound fluid, I developed my own software instrument, with Lisp as programming language and Csound as audio engine.
20:00 Spherical Voices - Concert
The musical material of the piece is formed of the opening chord from Chopin's nocturne in D flat major, which is the subject of transformations, mainly spectral shifting. The chord forms the sonority of the piece, and, contrary to Chopin, stays unresolved till the end. The harmonies begin with sharp attack of the piano and gradually transform into voices moving around. Several sound transformation techniques have been used - frequency shifting, granular synthesis, convolution, all realized with Supercollider.
20:00 Rrumpff tillff toooo? - Concert
In the composition "Rrummpff tillff toooo ?" the composer used different speech fragments of the "Sonate in Urlauten" from the dadaist Kurt Schwitters and transformed these fragments by using different technics of digital sound synthesis as granular synthesis and phase vocoding. The software he used to construct his algorithmic framework for the transformation of speech fragments is the computer language scheme and the sound synthesis software CSound.
20:00 El Sonido Electrico - Concert
"The daily electric sound, immersed in the atmosphere of the concert hall" is a purely conceptual work: a call for deep listening to the very sounds that surround the music to be heard. The title of the work is its concept. The work is completed only when reproduced, as it incorporates the sounds occurring at the same time, making it always fresh and open to the well-known "noises" inside or outside the concert hall: familiar situations and sound scenarios that go far beyond a ringing cell phone or a sneeze, down to the sweet or simple everyday electric sound of the place. It suggests a new approach in which the composer's role is closer to that of a free improviser who listens carefully before playing.
20:00 Circus - Concert
This piece arose out of curiosity about the sound of a spinning coin. The material consists of direct, so to speak microscopic, recordings of spinning objects on various smooth surfaces. The sounds of timpani are used in contrast, as a spatial component. Both the structure of the movement and the sounds of the objects and the surface are treated as themes, which are varied during the piece. The palette of variations ranges from the structural to the sonic and includes variation of speed, reverse playback, movement in space, gathering and separation of parts, filtering, convolution and more. The listener is encouraged to try to follow the evolution of the different elements throughout the piece.
20:00 Earth Songs - Concert
This is the first version of a series of pieces for processed sounds and computer that relate to creation, creatures and their meditation on existence. "Earth Songs", the first segment of a larger work, is a slow meditation on creation with a relentless metric structure that remains mostly unchanged throughout the piece. "Earth Songs" is rendered through analysis and resynthesis of voice and percussive sounds, with particular inflections created through SuperCollider code, including data-scanning functions driven by Gendy unit generators. The piece is played in real time, with the performer executing pre-composed segments of SC code that represent utterances, phrases and rhythmic structures, and is spatialized around the audience using Ambisonics. -- In memory of Max Mathews, who led the way in teaching computers how to sing.
Day 3 - Sunday
Location: Bewerunge Room
Audio Programming - 10:30 FluidSynth real-time and thread safety challenges
FluidSynth takes soundfonts and MIDI data as input, and gives rendered audio samples as output. On the surface this might sound simple, but doing it with hard real-time guarantees, perfect timing and thread safety is difficult. This paper discusses the different approaches that have been used in FluidSynth to solve this problem in the past and present, as well as suggestions for the future.
Audio Programming - 11:15 Web-enabled Audio-synthesis Development Environment (WADE)
Collaboration and education in the world of digital audio synthesis have been facilitated in many ways by the internet and its World Wide Web. Numerous projects have endeavoured to progress beyond text-based applications in order to facilitate wider modes of collaborative activity, such as networked musical performance and composition. When developing a software application one is faced with technology decisions which limit the scope for future development. Similarly, the choice of one audio synthesis language or engine over another can limit the potential user base of the application. This paper describes how, through the WADE system architecture, it is possible to integrate existing software in a highly scalable and deployable way. The current incarnation of the WADE system uses the Csound synthesis engine and audio synthesis language as its audio server.
Audio Programming - 12:00 Low-Latency Audio on Linux by Means of Real-Time Scheduling
In this paper, we propose to use resource reservations scheduling and feedback-based allocation techniques for the provisioning of proper timeliness guarantees to audio processing applications. This allows real-time audio tasks to meet the tight timing constraints characterizing them, even if other interactive activities are present in the system. The JACK sound infrastructure has been modified, leveraging the real-time scheduler present in the Adaptive Quality of Service Architecture (AQuoSA). The effectiveness of the proposed approach, which does not require any modification to existing JACK clients, is validated through extensive experiments under different load conditions.
Audio Programming - 12:45 Loudness measurement according to EBU R-128
This paper introduces a new Linux application implementing the loudness and level measuring algorithms proposed in recommendation R-128 of the European Broadcasting Union. The aim of this proposed standard is to ease the production of audio content having a defined subjective loudness and loudness range. The algorithms specified by R-128 and related standard documents, and the rationale for them, are explained. In the final section of the paper some implementation issues are discussed.
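For orientation, the core of the measurement that R-128 builds on (ITU-R BS.1770) reduces a block of K-weighted audio to a mean-square energy expressed in LUFS. The sketch below shows only that final step; the K-weighting filter, channel weighting and the R-128 gating stages that a real meter also needs are deliberately omitted.

```python
import math

# Highly simplified sketch of the BS.1770 loudness formula underlying
# EBU R-128: loudness in LUFS = -0.691 + 10*log10(mean square) of the
# K-weighted signal. K-weighting and gating are omitted here.

def momentary_loudness(samples):
    """Loudness of one (already K-weighted) mono block, in LUFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10.0 * math.log10(mean_square)

# A full-scale square wave has mean square 1.0, so it measures -0.691 LUFS:
print(round(momentary_loudness([1.0, -1.0] * 100), 3))
```

A compliant meter would apply this over overlapping 400 ms blocks (momentary) and 3 s blocks (short-term), then gate and average the blocks to obtain the integrated loudness.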
Environments and Composition - 14:30 Medusa - A Distributed Sound Environment
Nowadays the use of computers to replace specialized (and expensive) audio equipment in studios, live concerts and composing is a reality. This paper introduces Medusa, a distributed sound environment that allows several machines connected in a local area network to share multiple streams of audio and MIDI, and to replace hardware mixers and also specialized multi-channel audio cables by network communication. Medusa has no centralized servers: any computer in the local environment may act as a server of audio/MIDI streams, and as client to remote audio/MIDI streams. We discuss the implementation of Medusa in terms of desirable features, and report user experience with a group of composers from the University of São Paulo/Brazil.
Environments and Composition - 15:15 Xth Sense: researching biological sounds of the human body for an experimental paradigm of musical performance
This paper seeks to outline the methods underlying the development of the Xth Sense project, an ongoing research effort which investigates exploratory applications of biophysical sound design for musical performance and responsive milieux. Firstly, an exploratory study of the acoustics of body sounds, namely muscle sounds, is illustrated. I describe the development of an audio synthesis model for muscle sounds which offered a deeper understanding of the body's sound matter and provided the ground for further experimentation in signal processing and composition. Then follows a description of the development and design of the Xth Sense, a wearable hardware sensor device for capturing biological body sounds, which was implemented in the realization of Music for Flesh I, a first attempt at musical performance. Eventually I elaborate on the relations between the role of sound in cognitive experience and the acoustic capabilities of biological body sounds, and try to establish an array of principles which will provide the backbone of the next stage of my inquiry.
Environments and Composition - 16:00 Composing a piece for piano and electronics on Linux
In this paper the author reports on his experience with a rather complex music-creation scenario using Linux: the successful composition of a piece for piano and electronics using FLOSS software. The whole workflow, from composition to recording and final production of a high-quality printed score, is presented, describing the chosen tools, the adopted strategies, the issues encountered and the overall experience.
16:45 Closing Ceremony
Workshops & Events - 14:30 Cancelled: Din - Workshop
(90 min) S. Jagannathan » Location: New Music Room
din is a free software musical instrument that uses Bezier curves for waveforms, gating, FM & AM, delays and effects. Originally designed to play Indian classical music using the computer mouse like a bow, it is now moving steadily towards a general purpose sound composition environment (live visual sound editing plus Tcl scripting, IRC bot control, etc.).
The schedule is a guideline only; there is no guarantee that events will take place at the announced time slot.