Linux Audio Conference 2015
The Open Source Music and Sound Conference
April 9-12 @ Johannes Gutenberg University (JGU) Mainz, Germany

Conference Schedule

During the conference, live A/V streams of the main track are available in two variants: High Quality and Low Bandwidth.

Remote participants are invited to join the #lac2015 IRC channel to take part in the discussions, ask questions, and get technical assistance in case of stream problems.

Conference Material can be found on the Download Page.

All times are CEST = UTC+2


Day 1 - Thursday, April 9

Main Track

9:30 Conference Welcome
Slides  Video  
(30 min)  
Univ.-Prof. Dr. Mechthild Dreyer, Vice President of the JGU: Welcome
Univ.-Prof. Dr. Klaus Pietschmann, IK: Welcome
Albert Gräf, LAC 2015 Chair: Overview of the conference, organizational remarks

Paper Presentation
10:00 Toolbox for acoustic scene creation and rendering (TASCAR): Render methods and research applications
(45 min)  Giso Grimm, Joanna Luberadzka, Tobias Herzke, Volker Hohmann
TASCAR is a toolbox for creation and rendering of dynamic acoustic scenes that allows direct user interaction and was developed for application in hearing aid research. This paper describes the simulation methods and shows two research applications in combination with motion tracking as an example. The first study investigated to what extent individual head movement strategies can be found in different listening tasks. The second study investigated the effect of presentation of dynamic acoustic cues on the postural stability of the listeners.

Paper Presentation
10:45 Cancelled: An Update on the Development of OpenMixer
(45 min)  Elliot Kermit-Canfield, Fernando Lopez-Lezcano
This paper serves to update the community on the development of OpenMixer, an open source, multichannel routing platform designed for CCRMA. Serving as the replacement for a digital mixer, OpenMixer provides routing control, Ambisonics decoders, digital room correction, and level metering to the CCRMA Listening Room's 22.4 channel, full sphere speaker array.

14:00 Keynote: Emerging Technologies for Musical Audio Synthesis and Effects
Slides  Video  
(90 min)  Julius Orion Smith III
Note: Due to unforeseen circumstances, Julius O. Smith III won't be able to attend the conference. His keynote lecture will be presented by Romain Michon from CCRMA instead.

Lightning Talk
15:30 Live 3D Sound Spatialization with 3Dj
Slides  Video  
(15 min)  Andres Perez-Lopez
3Dj is a SuperCollider library for real-time interactive 3D sound spatialization. It arises from the need for a free, multiplatform, Linux-compatible framework for sound spatialization. 3Dj is implemented in a modular way. On the one hand, the Scene Simulator provides convenience abstractions for handling sound objects in a virtual environment (physical modeling, predefined motions, visualization). This information is sent through OSC to the Spatial Render, which synthesizes the resulting audio. Rendering can be done in Higher Order Ambisonics (up to 3rd order), VBAP or HRTF. Communication is based on the SpatDIF protocol, facilitating the use of alternative simulators/renderers. SpatDIF recording and playback capabilities are also provided. 3Dj is implemented in SuperCollider, and makes use of several free software audio projects (Ambisonics Toolkit, AmbDec, Jack).
The talk will be complemented with a demonstration of the software in the installation space.
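As a rough illustration of the kind of OSC traffic the Scene Simulator and Spatial Render exchange, the sketch below hand-encodes a minimal OSC message in stdlib Python. The address shown is illustrative, not necessarily the exact SpatDIF address space, and a real project would typically use an OSC library rather than encoding by hand:

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float arguments.

    OSC strings are NUL-terminated and padded to a multiple of
    4 bytes; floats are 32-bit big-endian (per the OSC 1.0 spec).
    """
    def osc_string(s):
        data = s.encode("ascii") + b"\x00"
        pad = (4 - len(data) % 4) % 4
        return data + b"\x00" * pad

    typetags = "," + "f" * len(floats)
    payload = b"".join(struct.pack(">f", f) for f in floats)
    return osc_string(address) + osc_string(typetags) + payload

# A position update in the spirit of SpatDIF (the address is
# illustrative, not necessarily the exact SpatDIF namespace):
msg = osc_message("/spatdif/source/violin/position", 1.0, 0.5, 0.0)
# The resulting bytes could then be sent over UDP with the socket module.
```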

Lightning Talk
15:45 NixOS for Musicians
Slides  Video  
(15 min)  Bart Brouns, Cillian de Róiste
Computer audio users and developers need a reliable system, yet they also need to be able to explore new tools without compromising stability. Nix, the package manager, and NixOS, the distribution, provide exactly that:

NixOS is a declarative system. This means you define your entire system in a single text file. When you deploy such a configuration, it doesn't matter what the previous state was; you get exactly what you defined.

* No more re-installs, no matter how wild you hack! Don't worry about upgrading or changing your system; if anything goes wrong you can boot into an older system.

* Backup, version and share the configuration of your entire system. This is also great for fault finding, for example on IRC.

Nix is a purely functional language; it treats packages like immutable values. This has the following advantages:

* Package isolation: Each package is only aware of its own dependency tree, so there is no danger of dependency hell. You can have multiple versions of libraries installed at once.

* Snapshots: Export just the parts of the system (i.e. a set of binaries plus their configuration) that you need to work on a music project so you can return to that project again in years to come without having to recreate the whole system.

* Collaboration: Share exactly the same tools with a collaborator so that you can work together without the risk of running into incompatibilities or corrupted files. Nix, the package manager, can even do this across distros.

Lightning Talk
16:00 Cancelled: OMPitchField -- an OpenMusic Library
(15 min)  Anna Terzaroli
OMPitchField/OMPF is a library of functions for the computer-assisted composition and programming environment OpenMusic, created by Paul Nauert. The basic functions are "t-primeform", "incl-vec" and "set-complement", modeled on the concepts of "prime form", "interval vector" and "complement" that characterize the "pitch-class set theory" developed by Allen Forte. In OMPF, there are also additional functions useful for performing some operations (mathematical and otherwise) on sets. These operations help to build relationships, but also to find existing relationships between sets, so OMPitchField is usable both in musical composition and in musical analysis. In this connection, we can consider the following functions: "find-pc-in-field", "find-bounded-chords-in-field", "make-cyc-pfield", "make-bounds-test". These functions, based on the concept of "pitch class", help composers to organize the pitch range of their own pieces. We can demonstrate this by programming a patch in OpenMusic that uses the functions mentioned above, included in OMPitchField.

Lightning Talk
16:00 The LAC2014 Percussion Combo
Slides  Video  
(15 min)  Frank Neumann
During the excursion on the last day of LAC2014, some random sample sounds were captured outside Schloß Bruchsal, played by various artists.
This lightning talk presents the results of creating a small percussion sample set library from these recordings.

Paper Presentation
16:30 Postrum II: A Posture Aid for Trumpet Players
(45 min)  Mat Dalgleish, Chris Payne, Steve Spencer
While brass pedagogy has traditionally focussed on sound output, the importance of bodily posture to both short-term performance and longer-term injury prevention is now widely recognized. Postrum II is a Linux-based system for trumpet players that performs real-time analysis of posture and uses a combination of visual and haptic feedback to try to correct any posture issues that are found. Issues underpinning the design of the system are discussed, the transition from Mac OS X to Ubuntu detailed, and some possibilities for future work suggested.

Workshops & Events

10:00 Composing Game Soundtracks - Demonstration and Discussion - Workshop
(100 min)  Nils Gey » Location: Workshop I (P2)
This 100-minute workshop consists of two parts, a demonstration followed by a moderated analysis and discussion. It will be shown how to produce a soundtrack for a fictional computer game. The goal is to have a realized piece of music at the end of the workshop. Typical phases of a production will be shown and elaborated. Only fully digital resources (synthesizers, sample libraries) with a free license will be used: first for pedagogical reasons, so the audience can follow and reproduce the steps at home, and second as a challenge.
The second part will be analysis and discussion. The audience and the author can share their expectations, experiences, and the pros and cons of a Linux-based environment for game-soundtrack production. This discussion will be moderated, and the author intends to feed the resulting feedback and wishes back into the software and audio development scene.

10:00 Publishing plugins on the MOD Cloud - Workshop
(120 min)  MOD Team » Location: Workshop II (P5)
One of the most important components of the MOD cloud services is the LV2 plugin store. With it, developers will be able to publish their plugins both to the devices (MOD Duo and MOD Quadra) and to desktop users.

This workshop is intended to explain the proposed system in detail with a hands-on approach, covering the whole publishing process, including:

- creating a request for publishing
- source code verification
- plugin building for:
  - Intel Atom (MOD Quadra)
  - ARMv7 (MOD Duo)
  - Generic x86 (MOD Desktop App)
  - Generic amd64
- bundle generation, verification and publishing

MOD Team members from all elements of the pipeline will be present to explain technical details and collect feedback and demands from the developers.

The licensing model propositions will also be discussed, with special care given to the distinctions between commercial vs. non-commercial and open-source vs. non-open-source.

Presenters: Alexandre Cunha, Luis Fagundes, Filipe Coelho, Bruno Gola and Ricardo Crudo.

15:30 Audio measurement workshop (part 2) - Workshop
Slides  Video  
(150 min)  Fons Adriaensen » Location: Workshop I (P2)
The continuation of the audio measurements workshop at LAC 2014, which ran out of time. Contents:

- Let participants perform actual measurements on real hardware, and review and discuss the results.
- Introduction to using Python, pyjacktools, etc. to create and automate audio measurement tools.
- Introduction to matplotlib, pyqtgraph, etc. for data visualisation.
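As a taste of what such tools automate, the stdlib-only Python sketch below generates a calibrated test tone and measures its RMS level in dBFS, the kind of elementary building block the workshop builds on. The code is illustrative and does not use pyjacktools:

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of samples, in dB relative to full scale
    (full scale = a sine with peak amplitude 1.0, i.e. RMS 1/sqrt(2))."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / (1 / math.sqrt(2)))

# One second of a 1 kHz test tone at -20 dBFS, 48 kHz sample rate
# (pyjacktools would play this through JACK; here we only generate it):
fs, f, amp = 48000, 1000.0, 10 ** (-20 / 20)
tone = [amp * math.sin(2 * math.pi * f * n / fs) for n in range(fs)]

print(round(rms_dbfs(tone), 2))  # close to -20.0
```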

Target audience: anyone using Linux and wanting to perform measurements on audio hardware and/or software

NOTE: Slides for part 1 of this workshop at LAC 2014 are available here:

16:30 Open Educational Resources: Remixing Manuals and Tutorials for Linux Audio - Workshop
(90 min)  Bruno Ruviaro » Location: Workshop II (P5)
When preparing to teach a course using free and open source music software, it is important to be able to freely remix existing materials in an easy and flexible way. There are good Linux Audio manuals and tutorials available online, but often one would like to reorder their chapters, rewrite or update text and screenshots, add new material, and so on. Sometimes an existing online resource is potentially interesting, but if it is too outdated or unfit for a specific course plan, a teacher may simply decide not to use it at all.

In this workshop, I would like to open a discussion on current platforms (such as GitHub Pages and FLOSS Manuals) available for writing and sharing educational materials. With hands-on demonstrations of how to remix them, I will illustrate the discussion with three recent projects I have worked on:

1) A Gentle Introduction to SuperCollider
2) Ardour 3 Beginner's Tutorial
3) Open Music Theory

15:30 Interactive virtual audio-visual concert simulation with TASCAR
(150 min)  Giso Grimm, Joanna Luberadzka, Tobias Herzke, Volker Hohmann » Location: Installation Space
TASCAR is a toolbox for acoustic scene creation and rendering. In this demonstrator three concert stages can be interactively explored in a virtual audio-visual environment. On the first stage, a time-varying physical feedback model with three moving microphones and three simulated loudspeakers is placed. This simulation without any external sounds results in a sound experience very similar to Steve Reich's "Pendulum Music". On the second stage a jazz band is playing. The third musical event is a contemporary piece with moving virtual sources, performed by a viola da gamba ensemble.
The interactive virtual acoustic environment is rendered in real-time and illustrates different aspects of the rendering software as well as the effect of binaural hearing. The simulation is rendered in 3rd order horizontal Ambisonics and will be reproduced on an 8-channel loudspeaker setup. This demonstration complements the TASCAR paper in the main track.

20:00 Topos Concrete - Concert
  Clemens von Reusner » Location: Roter Saal
The territory (Gr. topos) is a rough and harsh landscape with mountains, valleys, canyons and plains, sand and stones, though it appears even and smooth. The color is grey. The size is about 30 square meters. It is the floor of a garage and it is made of concrete (English) / Beton (German).
Concrete is a building material, a kind of unshaped dry powder made of sand, granulated stones and cement, dusty and chaotic. Mixed with water it becomes flexible and fluid and goes through a metamorphosis to become dry again, static and resistant and of any wanted shape. Aspects of working with native granularity and fluidness as well as stiffness, and different kinds of acoustic spaces, were the leading ideas of the composition. Software: Csound, SoX, SuperCollider. The Csound 3rd-order Ambisonics opcodes by Jan Jacob Hofmann were used for multichannel spatialization. -- Premiere --

20:00 Klinga - Concert
  Helene Hedsund » Location: Roter Saal
The sound material in Klinga is made up of recordings of an old saw blade being hit and scraped by various objects. The piece, and all of its sound processing, was made in an instrument I built myself in SuperCollider.

20:00 Benjolin - Concert
  Patrick Gunawan Hartono » Location: Roter Saal
Benjolin is an electroacoustic composition for an eight-channel speaker system, in which all of the sound material used is the output of the Benjolin synthesizer designed by synth pioneer Rob Hordijk.

The compositional structure is intuitive, based on gradual and dramatic dynamic changes, which are represented by the random pulses and square waves the Benjolin offers.

An open-source 3D sound algorithm was implemented for the spatialization structure of this piece, which has been realized in the SuperCollider environment.

20:00 Voce231114 - Concert
  Massimo Fragalà » Location: Roter Saal
All the sounds that form this composition derive from the elaboration of the word "splash", which I recorded myself. Starting from this sample I tried to change its physical characteristics in order to generate a range of sounds more or less removed from the original. This was possible using particular sound-processing techniques such as waveset distortion, brassage stretching, segmenting the sound and reassembling the segments, reverberation, etc. This composition was realized on Linux (KXStudio).

Day 2 - Friday, April 10

Main Track

Paper Presentation
10:00 Faust audio DSP language in the Web
(45 min)  Stephane Letz, Sarah Denoux, Yann Orlarey, Dominique Fober
With the appearance of HTML5 and the Web Audio API, a high-level JavaScript API for processing and synthesizing audio, new interesting Web applications can now be developed. The Web Audio API offers a set of native and fast C++ audio nodes, and a generic ScriptProcessor node, allowing the developer to add his own specialized node in the form of pure JavaScript code. Several projects are developing abstractions on top of the Web Audio API to extend its capabilities, and offer more complex unit generators, DSP effects libraries, or adapted syntax.
This paper brings another approach based on the use of the Faust audio DSP language to develop additional nodes to be used as basic audio DSP blocks in the Web Audio graph. Several methods have been explored: going from an experimental version that embeds the complete Faust native compilation chain (based on libfaust + LLVM) in the browser, to more portable solutions using JavaScript or the much more efficient asm.js version. Embedding the Faust compiler itself as a pure JavaScript library (produced using Emscripten) will also be described. The advantages and issues of each approach will be discussed and some benchmarks will be given.

Paper Presentation
10:45 AVTK - the UI Toolkit behind OpenAV Software
(45 min)  Harry van Haaren
AVTK is a small, lightweight user interface toolkit designed for building custom graphical user interfaces. It was conceived particularly for building LV2 plugin GUIs, but is also usable for large and complex standalone programs. Focusing on user experience, AVTK promotes ease of use for novices yet affords power-users the most efficient interactivity possible. The author feels this is particularly important in live-performance software, where user experience and creativity are in close proximity.

Paper Presentation
11:30 Ingen: A Meta-Modular Plugin Environment
(45 min)  David Robillard
This paper introduces Ingen, a polyphonic modular host for LV2 plugins that is itself an LV2 plugin. Ingen is a client/server system with strict separation between client(s) and the audio engine. This allows for many different configurations, such as a monolithic JACK application, a plugin in another host, or a remote-controlled network service. Unlike systems which compile or export plugins, Ingen itself runs in other hosts with all editing facilities available. This allows users to place a dynamic patching environment anywhere a host supports LV2 plugins. Graphs are natively saved in LV2 format, so users can develop and share plugins with others, without any programming skills.

Lightning Talk
14:00 Testing audio plugins and UIs with Carla
Slides  Video  
(15 min)  Felipe 'falktx' Coelho

Lightning Talk
14:15 RTML: a Real-Time Machine Listening Framework
Slides  Video  
(15 min)  Andres Perez-Lopez
Music Information Retrieval is a field with strong potential for artistic and performative uses. However, existing MIR tools are often interfaced through code, thus limiting access for people without programming knowledge. We present RTML, a graphical framework for Real-Time Machine Listening written in SuperCollider. RTML provides compact and simplified access to low- and mid-level descriptors, and convenient utilities to map descriptor outputs to external applications through OSC.
The talk also serves as a background for the author's "Listening Lights" performance at the Linux Sound Night.
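RTML itself is written in SuperCollider, but the kind of descriptor it exposes can be illustrated in a few lines of stdlib Python. The sketch below computes one common low-level descriptor, the spectral centroid, via a direct (slow) DFT; it illustrates the concept and is not RTML's actual implementation:

```python
import cmath, math

def spectral_centroid(frame, fs):
    """Spectral centroid (Hz) of one frame via a direct DFT.
    O(N^2) -- fine for illustration, not for real-time use."""
    n = len(frame)
    mags = []
    for k in range(n // 2 + 1):          # non-negative frequencies only
        bin_sum = sum(frame[i] * cmath.exp(-2j * math.pi * k * i / n)
                      for i in range(n))
        mags.append(abs(bin_sum))
    freqs = [k * fs / n for k in range(len(mags))]
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# A pure 1 kHz sine: its centroid should sit at about 1 kHz.
fs, n = 8000, 256
frame = [math.sin(2 * math.pi * 1000 * i / fs) for i in range(n)]
print(round(spectral_centroid(frame, fs)))
```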

Lightning Talk
14:15 Cancelled: Writing Qt JACK applications with QJack
(15 min)  Jacob Dawid
The purpose of QJack is to make it easy to interface with a JACK audio server from within a Qt application. JACK (JACK Audio Connection Kit) is the de-facto standard for professional audio processing on GNU/Linux, a low-latency audio server that runs on top of numerous sound systems. Each JACK application is able to interface with any other JACK application by offering virtual in- and outputs through a standardized interface, just as you would connect audio devices with cables. For maximum compatibility, JACK offers a C-style API found in libjack. QJack tries to be a more convenient solution by wrapping all the C code and offering a convenient C++/Qt API.
The lightning talk will give you a small insight into QJack from its developer. With a simple Qt application, we are going to read and play back an mp3 file via JACK in about 100 lines of well-formatted code.

Lightning Talk
14:30 Improvisation: FFT Analysis/Resynthesis
(15 min)  Marc Groenewegen
Improvised talk about an interesting way (and software) to do resynthesis by performing an FFT on an entire piece of music.
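The underlying idea — treat the whole piece as one analysis frame, modify the spectrum, then resynthesize — can be sketched in stdlib Python with a direct DFT. This is illustrative only; for an actual piece of music an FFT library would be required:

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n)) for k in range(n)]

def idft(bins):
    n = len(bins)
    return [sum(bins[k] * cmath.exp(2j * math.pi * k * i / n)
                for k in range(n)).real / n for i in range(n)]

# Analyse a (tiny) "piece", zero the upper half of the spectrum
# (a crude brick-wall low-pass), then resynthesize:
signal = [math.sin(2 * math.pi * 3 * i / 64) +
          0.5 * math.sin(2 * math.pi * 20 * i / 64) for i in range(64)]
spectrum = dft(signal)
for k in range(8, 64 - 7):      # keep bins 0-7 and their mirrors 57-63
    spectrum[k] = 0.0
lowpassed = idft(spectrum)      # only the 3-cycle component survives
```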

Paper Presentation
14:45 Segment Synthesizer
(45 min)  Andre Sklenar, Michal Sykora, Amir Hammad, Martin Patera
This work aims to describe the Segment synthesizer and its development process, covering the original Segment Synthesis and its properties, advantages and limitations. It also clarifies how the hardware controls and the user interface were designed, as well as Segment's connectivity and its audio features. It further covers the accompanying set of software features -- the software and the cloud -- explains what licensing was chosen and why, and briefly covers the tools used to develop the product.

Paper Presentation
15:30 Sound Synthesis with Periodic Linear Time-Varying Filters
(45 min)  Antonio Goulart, Marcelo Queiroz, Joseph Timoney, Victor Lazzarini
In this survey we explore implementations of recently proposed distortion synthesis techniques, namely the Feedback Amplitude Modulation (1st and 2nd order cases) and Allpass filter coefficient modulation. These techniques are based on Periodic Linear Time-Varying systems, in which we operate by finding a suitable modulation function to obtain a desired spectrum. In order to illustrate this survey and encourage exploration of these new techniques we present examples in the Csound language.
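The paper's examples are in Csound; as a language-neutral illustration, here is a stdlib-Python sketch of one such PLTV system, a first-order allpass whose coefficient is modulated periodically. The parameter values are arbitrary and not taken from the paper:

```python
import math

def modulated_allpass(x, fs, mod_freq, depth):
    """First-order allpass  y[n] = a*x[n] + x[n-1] - a*y[n-1]
    with a time-varying coefficient a(n) = depth * sin(2*pi*fm*n/fs).
    With a fixed coefficient this filter is unity-gain; modulating the
    coefficient periodically creates sidebands (distortion synthesis)."""
    y, x1, y1 = [], 0.0, 0.0
    for n, xn in enumerate(x):
        a = depth * math.sin(2 * math.pi * mod_freq * n / fs)
        yn = a * xn + x1 - a * y1
        y.append(yn)
        x1, y1 = xn, yn
    return y

fs = 8000
carrier = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
out = modulated_allpass(carrier, fs, mod_freq=110.0, depth=0.9)
```

With depth set to 0 the structure reduces to a one-sample delay, which is a convenient sanity check on the difference equation.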

16:30 Bitwig Studio
(90 min)  Heiko Weinen
This workshop is a short introduction to Bitwig Studio's most important concepts and how to set it up on Linux, with a few examples of the possibilities the Linux audio community already provides towards a better music production experience.

The workshop includes a Q&A section as well as a hands on part if time permits - so if you can, bring your computer and headphones.

Workshops & Events

10:45 A Musician's View: Exploring common features of Yoshimi and ZynAddSubFX - Workshop
(60 min)  Will Godfrey » Location: Workshop I (P2)
This workshop starts with the basics of just using the synth and supplied voice patches, progressing through an understanding of the structure of the synth, to editing and creating new instrument voices.
Topics Covered:
Setting up
Exploring the supplied voices
Top level controls
Multiple parts
Split keyboards
The three synth engines (in outline)
Voice kits
A more detailed look at the synth engines in turn -- degree of detail depending on time remaining and interest of the audience.

14:45 Arch Linux as a lightweight audio platform - Workshop
Slides  Video  
(90 min)  David Runge » Location: Workshop I (P2)
This workshop discusses several purposes of, and reasons for, choosing a rolling-release distribution over a fixed-release one, reasons which translate very well to pro-audio applications:
- Higher possibility for configurability
- Software versions are up-to-date
- Newer kernels guarantee (or at least raise the possibility for) better compatibility with hardware (old and new)
- Easier to build own packages, if not limited by outdated core components
- Distribution documentation is up-to-date (less possibility for ambiguity)
- Better chance for lightweight customization (if that's your thing)

In this tutorial I will elaborate further on the concepts and environments I chose to create concert- and installation-ready environments. This will include (but is not limited to):
- Arch Linux package management (pacman and aura)
- Arch Audio
- Systemd
- Awesome WM
- Realtime kernels
- Custom JACK2 startup scripts

14:45 Building HTML plugin interfaces for the MOD Web GUI - Workshop
(90 min)  MOD Team » Location: Workshop II (P5)
The remote GUI offered by the MOD platform is based entirely on web technologies - HTML, CSS and JavaScript - and is implemented as an LV2 extension. This workshop will cover the details of the system implementation and the steps required to build a MOD GUI for an LV2 plugin. The MOD SDK can provide a basic interface very quickly and will be used as a starting point. From there, some of the possible further customizations will be explained. There will also be a discussion about extending plugin GUIs with widgets.

Presenters: Luis Fagundes, Bruno Gola and Filipe Coelho

10:00 The Sound Of People
(135 min)  David Runge » Location: Installation Space
Single-nucleotide polymorphisms (SNPs) are molecular DNA markers that are actively being researched all over the world. Companies like 23andMe (and others) offer to sequence parts of their customers' genomes (and afterwards send that collection of data to them). To free these data sets for use in open science, openSNP started its work on collecting, indexing and making available its users' uploads. The Sound of People was one of the first attempts at synthesizing sounds from them. It was written in the SuperCollider audio synthesis language and has been developed with the help of the Electronic Studio of TU Berlin. The software creates a unique audio experience for up to twelve speakers.

14:00 Cancelled: acoustic cluster (video presentation)
(135 min)  Mari Ohno » Location: Installation Space
A number of pipes of different lengths suspended within a space each contain a microphone and are equipped with a freely movable speaker assembly beneath them. The distance between each speaker assembly and microphone is expressed in the "howling" acoustic response. Having divided the space with pipes, moving the speaker assemblies closer to the spaces within the pipes amplifies the otherwise insignificant howling in the space outside the pipes, producing a sound like that of a wind instrument. The pitch of these responses varies with the spatial properties of each pipe. This series of phenomena seeks to make audible the normally inaudible material of space.
(This is a video presentation of the physical installation.)

20:00 speaking clock - Concert
  Mari Ohno » Location: Roter Saal
This work is an electroacoustic composition created with recordings of speaking clocks in various sites around the world. A speaking clock is a tool for the sonification of "time", a phenomenon people cannot hear. It has various expressions of time depending on the country or region. In this work, the music mixes various expressions of time, based on the concept of "the expression of time perception". Through this work, I attempt to give listeners curious and unique feelings from the same sound experience, depending on their cultural background.

20:00 Selva di varie intonazioni - Concert
  Michele Del Prete » Location: Roter Saal
Csound, tape music, 8 channels, 9'54", 2013
The piece is based on concrete sounds I recorded in the Frari church in Venice, a building that hosts two notable organs of the XVIII century. I have recorded their stops (as well as key noises) individually and in several combinations, thus already obtaining a very rich timbral material I have later worked on exclusively in Csound.

20:00 The Hidden and Mysterious Machinery of Sound - Concert
  Fernando Lopez-Lezcano » Location: Roter Saal
I had been thinking about a piece that used simple sine oscillators in complex ways for a long time. Going back to basics, in a sense. With that idea in mind I dived into the innards of Bill Schottstaedt's new Scheme-based version of the CLM synthesis language and its s7 interpreter, and stumbled into new ugens that allowed me to pile sinewaves in many different ways. The "imaginary machines" examples I found there also helped shape the first code fragments I experimented with. What remains of many lines of discarded Scheme code is the program that writes this piece. It creates fractal machines that manipulate clockwork mechanisms, big and small "virtual gears" that interlock, work without pause, and drive the basic sound synthesis instruments. This universe of miniature machines is spread over 3D space using Ambisonics, and the resulting soundscape is made of interlocking patterns of sound that drift through space. WARNING: the piece contains repetitive phrases of sound (also known as "rhythms"), and is only intended for immature audiences.

20:00 Coloured Dots And The Voids In Between - Concert
  Jan Jacob Hofmann » Location: Roter Saal
In the piece "Coloured Dots And The Voids In Between", spatial textures of dot-like sounds occur. The fields thus created expand and evolve in space and time. Important are not only the sound events themselves but also the spaces in between them, which expand in different dimensions spatially and temporally, overlap, and thus create the actual space. All sounds have been generated using solely the "pluck" opcode, which simulates the sound of a plucked string.

The piece is spatially encoded in 3rd order Ambisonic and has been created with the program "Csound" and "Cmask" along with Steven Yi's environment for composition "blue" using the self-conceived editor for spatial composition "Spatial Granulator".

21:30 Embedded Artist - Concert
(30 min)  Wolfgang Spahn, Malte Steiner » Location: Installation Space
Industrial noise and digital sound processing meets multi space projections: "Embedded Artist" is a media performance by Malte Steiner and Wolfgang Spahn.
The performance combines four different layers of vision that merge into one visual Gesamtkunstwerk: 3D models, video scratching, live camera, and mechanical effects are all projected within the space. "Embedded Artist" not only projects all over the walls, it also films the audience and the space and re-projects those images. Moreover, the light beam of each projector is fractionized by a prism and therefore casts broken images onto the walls. For the performance both artists developed a system of multiple embedded systems. To achieve this, the following software and hardware are used: Pure Data, Raspberry Pi, Raspbian, and Python. Several Raspberry Pis were combined with Paper-Duino-Pis and remote-controlled via OSC from the performers' laptops.

Day 3 - Saturday, April 11

Main Track

Paper Presentation
10:00 Low Delay Audio Streaming for a 3D Audio Recording System
(45 min)  Marzena Malczewska, Tomasz Żernicki, Piotr Szczechowiak
This paper presents a prototype of a 3D audio recording system which uses Wireless Acoustic Sensors to capture spatial audio. The sound is recorded in real-time by microphones embedded in each sensor device and streamed to a Processing Unit for 3D audio compression. One of the key problems in such systems is audio latency; therefore this paper focuses on optimising the audio streaming time. Experiments with the prototype system have shown that it is possible to achieve an end-to-end audio delay of 10 ms or less using the Opus codec.
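A 10 ms end-to-end figure only works if every stage of the chain is kept small. The back-of-envelope budget below uses illustrative component values (assumptions, not measurements from the paper); the one hard fact it relies on is that Opus supports frame sizes down to 2.5 ms:

```python
# Hypothetical end-to-end latency budget for a wireless sensor ->
# processing-unit link. The component values are illustrative
# assumptions, not measurements from the paper.
budget_ms = {
    "capture buffer (one Opus frame)": 2.5,
    "Opus encode": 1.0,
    "wireless network transit": 2.0,
    "jitter buffer": 2.5,
    "Opus decode + playout": 1.5,
}
total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:35s} {ms:4.1f} ms")
print(f"{'total':35s} {total:4.1f} ms")  # 9.5 ms, under a 10 ms target
```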

Paper Presentation
10:45 Timing issues in desktop audio playback infrastructure
(45 min)  Alexander Patrakov
In 2008, a feature named "timer-based scheduling" (also known as "glitch-free") was introduced into PulseAudio in order to solve the conflicting requirements of low latency for VoIP applications and a low amount of CPU time wasted on handling interrupts while playing music. The novel (at that time) idea was to use timer interrupts instead of sound card interrupts in order to overcome the limitation that the ALSA period size cannot be reconfigured dynamically. This idea turned out to hit some corner cases, and workarounds had to be added to PulseAudio. Despite its age, the implementation of the idea is still not 100% correct. This talk explains why that is the case and what can be done to improve the situation.
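The core idea of timer-based scheduling reduces to a small calculation: keep a large buffer filled and sleep until shortly before it would underrun, rather than waking on every sound-card period interrupt. A toy stdlib-Python sketch, with all numbers illustrative rather than PulseAudio's actual defaults:

```python
fs = 48000                      # sample rate (Hz)
buffer_frames = 96000           # 2 s of buffer at the server
watermark_frames = 4800         # wake when 100 ms are left

def next_wakeup_ms(fill_frames):
    """Milliseconds until the fill level drains to the watermark,
    assuming playback consumes frames at the nominal sample rate.
    A real implementation must also track drift between the system
    timer and the sound card clock, which is exactly where the
    corner cases mentioned in the talk live."""
    return max(0.0, (fill_frames - watermark_frames) * 1000.0 / fs)

print(next_wakeup_ms(buffer_frames))     # full buffer: sleep ~1900 ms
print(next_wakeup_ms(watermark_frames))  # at the watermark: wake now
```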

Paper Presentation
14:00 Embedded Sound Synthesis
(45 min)  Victor Lazzarini, Joseph Timoney, Shane Byrne
This article introduces the use of the Intel Galileo development board as a platform for sound synthesis and processing, in conjunction with the Csound sound and music computing system. The board includes an Arduino-compatible electronics interface, and runs an embedded systems version of the Linux operating system. The paper describes the relevant hardware and software environment. It introduces a port of Csound, which includes custom frontends that take some advantage of the board capabilities. As a case study, a MIDI synthesizer is explored as one of the many potential applications of the system. Further possibilities of the technology for Ubiquitous Music are also discussed, which use the various interfacing facilities present on the Galileo.

The demo video from the talk is available here:

Paper Presentation
14:45 MobileFaust: a Set of Tools to Make Musical Mobile Applications with the Faust Programming Language
(45 min)  Romain Michon, Julius Orion Smith III, Yann Orlarey
This work presents a series of tools for turning Faust code into various elements, ranging from fully functional applications to multi-platform libraries for real-time audio signal processing on iOS and Android. Technical details about their use and function are provided, along with audio latency and performance comparisons and examples of applications.

Paper Presentation
15:30 A Taste of Sound Reasoning
(45 min)  Emilio Jesús Gallego Arias, Olivier Hermant, Pierre Jouvelot
We address the question of what software verification can do for the audio community by showcasing some preliminary design ideas and tools for a new framework dedicated to the formal reasoning about Faust programs. We use as a foundation one of the strongest current proof assistants, namely Coq combined with Ssreflect. We illustrate the practical import of our approach via a use case, namely the proof that the implementation of a simple low-pass filter written in the Faust audio programming language indeed meets one of its specification properties.
The paper thus serves three purposes: (1) to provide a gentle introduction to the use of formal tools to the audio community, (2) to put forward programming and formal reasoning paradigms we think are well suited to the audio domain and (3) to illustrate this approach on a simple yet practical audio signal processing example, a low-pass filter.
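The paper's actual filter and proof are carried out in Faust and Coq; as an informal illustration of the kind of property being verified, here is a numerical check (not the paper's code, and the two-point averaging filter is an assumed stand-in for their "simple low-pass") that such a filter passes DC unchanged and fully attenuates the Nyquist frequency:

```python
import cmath
import math

def moving_average_response(omega):
    """Frequency response H(w) of the averaging low-pass
    y[n] = (x[n] + x[n-1]) / 2, evaluated at angular frequency w."""
    return (1 + cmath.exp(-1j * omega)) / 2

dc = abs(moving_average_response(0.0))        # gain 1 at DC
nyquist = abs(moving_average_response(math.pi))  # gain ~0 at Nyquist
```

A formal proof establishes such properties for all frequencies at once, rather than sampling them numerically as this sketch does.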

16:30 First Faust Open-Source Software Award
Video  Site  
(45 min)  Yann Orlarey
The Faust Open-Source Software Competition is intended to promote innovative, high-quality free audio software developed with the Faust programming language. The winning software will receive a €2000 prize, awarded to the best submission by an international committee of leading experts in the field. The competition is sponsored by Grame, centre national de création musicale. The deadline for submissions is March 15; please check the linked website for details. The results of the competition will be presented in this session.

17:15 Quo Vadis? Open Discussion
Slides  Video  
(45 min)  
This is the wrap-up session, in which there will be an opportunity to discuss all things related to LAC, make announcements concerning the date and organizer of the next LAC conference (if already known), etc.

Poster Presentations

Poster Session
11:30 Xenakis: from analyses to performance strategies - Poster Presentation
(60 min)  Massimo Vito Avantaggiato » Location: Foyer
This poster provides a short analysis of Orient/Occident, commissioned by UNESCO as music for a film by E. Fulchignoni which describes the development of civilizations. The cultural backgrounds are evoked by Xenakis through various timbres and specific rhythms: a wide range of unusual sound sources is used, from the sound of a violin bow drawn over various objects to sounds from the ionosphere, as well as excerpts from other works by Xenakis, such as Pithoprakta, Concret PH and Diamorphoses, and from other GRM compositions.
We studied the historical and technological context in which this work was conceived, drawing on historical sources, and highlighted some specific ideas of the composition and its originality within Xenakis' production and the history of electroacoustic music. We investigated correspondences among the constituent sound materials, illuminating the temporal relationships among them, exploring sound-identity correspondences and variations, and providing a taxonomy of recurrent phenomena to help rationalize compositional structuring processes. In this way we can point out the relationship between microstructure and macroform, underlining the progressive aggregation process. This was achieved by means of: a) a genetic analysis based on the study of the composer's sketches; b) a listening analysis following different musicological approaches. For each technique we made graphic representations and coordinated them, dividing the work into sections characterized by a certain homogeneity of timbral and dynamic profile.
We also studied possible strategies for a live performance (after A. Vande Gorne).

Poster Session
11:30 Auphonic: web-based automatic audio mixing and post production system - Poster Presentation
(60 min)  Georg Holzmann » Location: Foyer
Auphonic is a Linux-based web service that automates audio post production and mixing using a combination of music information retrieval, machine learning and signal processing algorithms. Users upload their multi- or single-track recordings; Auphonic analyzes the audio, applies all processing automatically, and keeps learning and adapting to new data every day.
The system includes an Adaptive Leveler, which balances levels between speakers and music and applies dynamic range compression; Loudness Normalization to new broadcast standards (EBU R128, ATSC A/85, ...); a True Peak Limiter; Automatic Ducking, Noise Gate and Crosstalk Removal algorithms for multitrack recordings; Automatic Noise and Hum Reduction; and much more.
Furthermore, it can create multiple audio file formats with correct metadata and distribute all results to various destinations (FTP, SFTP, HTTP, WebDAV, YouTube, SoundCloud, Dropbox, Google Drive, S3, ...). The algorithms can be used directly in the web interface, through an API, or in a separate desktop version.
The whole system is based on open source libraries and runs on Linux; most of our algorithms are implemented in Python/NumPy/C.
We have introduced a freemium model to finance the service, which allows us to work on new algorithms every day.

Poster Session
11:30 Interactive virtual audio-visual concert simulation with TASCAR - Poster Presentation
(60 min)  Giso Grimm, Joanna Luberadzka, Tobias Herzke, Volker Hohmann » Location: Foyer
TASCAR is a toolbox for acoustic scene creation and rendering. In this demonstrator, three concert stages can be explored interactively in a virtual audio-visual environment. On the first stage, a time-varying physical feedback model with three moving microphones and three simulated loudspeakers is placed; this simulation, without any external sounds, results in a sound experience very similar to Steve Reich's "Pendulum Music". On the second stage, a jazz band is playing. The third musical event is a contemporary piece with moving virtual sources, performed by a viola da gamba ensemble.
The interactive virtual acoustic environment is rendered in real time and illustrates different aspects of the rendering software as well as the effect of binaural hearing. The simulation is rendered in third-order horizontal Ambisonics. (A more extensive demonstration using an 8-channel loudspeaker setup takes place on Thursday in the installation space.)
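For readers unfamiliar with the format: horizontal (2D) Ambisonics of order M encodes a source into 2M + 1 channels, so the third-order simulation above uses seven. A minimal encoder sketch in Python, assuming an unweighted cosine/sine channel convention; this is illustrative only, not TASCAR's implementation, and real systems differ in channel ordering and normalization:

```python
import math

def encode_2d(sample, azimuth, order=3):
    """Encode a mono sample at `azimuth` (radians) into the
    2*order + 1 channels of horizontal Ambisonics (unweighted)."""
    channels = [sample]  # W: omnidirectional zeroth-order component
    for m in range(1, order + 1):
        channels.append(sample * math.cos(m * azimuth))
        channels.append(sample * math.sin(m * azimuth))
    return channels

front = encode_2d(1.0, 0.0)  # a source straight ahead: all sine channels are zero
```

Higher orders add cosine/sine pairs of increasing angular frequency, which is what sharpens the spatial resolution of the rendered sources.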

Workshops & Events

10:00 Ten years of Qstuff*: Is it good enough already? - Workshop
Slides  Video  
(120 min)  Rui Nuno Capela » Location: Workshop I (P2)
This workshop serves as a continuation of the sessions presented at LAC2013@IEM-Graz and LAC2014@ZKM-Karlsruhe: an informal hands-on conversation, open discussion and impromptu demonstration of the relevant (and not so obvious) aspects of the Qstuff* software suite. The main subject will be the Qtractor audio/MIDI multi-track sequencer project, which is celebrating its tenth anniversary. Both developers and users in general are invited and welcome to attend, discuss and exchange their thoughts about the specifics and the future of Qtractor and the other Qstuff*, and about how and what can be tackled, tamed and improved to push each one's objectives a little bit closer.

10:00 Building custom hardware controllers using the chain controller interface and the Arduino microcontroller - Workshop
Video  Site  
(120 min)  MOD Team » Location: Workshop II (P5)
MOD users, of both devices and desktops, can easily create external "real life" hardware controllers for their plugins using the Control Chain interface, an LV2 ControlPort-oriented protocol with an Arduino shield and a library for the Arduino IDE.

This workshop will guide attendees through all the steps necessary to create a working controller, using practical existing examples. Both the electronics and the coding will be covered.

Presenters: Ricardo Crudo and Lucas Takejame

14:00 MIDI Player Pro - Workshop
Video  Site  
(60 min)  Hans Petter Selasky » Location: Workshop I (P2)
MidiPlayerPro (MIDIPP) is the engineer's answer to playing a digital piano.
Playing a piano is a three-dimensional problem: you need to know when to play a key, how hard to press it and which key to hit. MIDIPP removes the last dimension, leaving you as the musician with a two-dimensional problem. In fact, playing the piano becomes so easy that you only need a single hand to perform the most common tasks an old-style musician needs two hands for.
The software allows you to play the piano in many new and exciting ways, for your own and others' pleasure. It can also be used to generate simple lyrics and chord handouts when playing in a band, and has many tools for creating music and finding chords.
MIDIPP is fully open source and is available for Linux, FreeBSD, iOS and Mac OS X.

15:30 Playing live with Carla - Workshop
Slides  Video  
(45 min)  Filipe Coelho » Location: Workshop I (P2)
This workshop showcases the upcoming Carla 2.0 by playing with it live. Topics covered:
Demonstrate how to use several plugins and sound banks together to create nice sounds.
Record notes and play them back in Carla via MIDI.
Automate plugin parameters via MIDI CC.
Control Carla remotely via OSC.

11:30 3Dj (software demonstration)
(45 min)  Andres Perez-Lopez » Location: Installation Space
This is a demonstration of the 3Dj SuperCollider framework for real-time sound spatialization. In the demonstration we will explore several metaphors for sound diffusion, such as algorithmic spatialization, direct control of sound position through orientation sensors, and spatialization based on Music Information Retrieval. The sonic material will be both existing fixed media and live-produced electroacoustic sound.

14:00 C/K/P
(135 min)  Adam Neal » Location: Installation Space
In C/K/P, you will see three presentations of three video 'panels,' which change position in each presentation. The sounds associated with each panel are present throughout, but are brought forward in the mix when their panel takes the center position.

20:00 Spielzeug #1 - poco a poco accelerando al sinus - for two Wii-Remotes - Concert
  Jonghyun Kim » Location: Roter Saal
The main concept behind this piece is changing the repetition speed. I have taken a sound file and cut it into sections. The excerpts are repeated at different playback rates. This affects the sound quality and pitch. When the repetition rate is extremely fast, the output changes dramatically. When each repetition is under 10 milliseconds in length, the original sound is no longer recognizable, and only a sine wave-like timbre remains. This process modifies the micro-structure of the sound.
Technical Summary:
This is a Wii-Remote live performance using granular synthesis. The granular synth and its algorithms were programmed in Pure Data. The performer holds a Wii-Remote in each hand and swings them along the pitch and roll axes (gyroscope). The granular synthesis algorithm receives motion data from these movements of the controllers and produces sound in real time. The buttons on the Wii-Remotes also trigger samples and change performance modes.
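The 10-millisecond threshold mentioned above follows from simple arithmetic: once the grains repeat fast enough, the repetition rate itself is heard as a fundamental pitch rather than as rhythm. A small illustration (a hypothetical helper, not part of the performance patch):

```python
def repetition_pitch_hz(period_ms):
    """Fundamental frequency implied by repeating a grain
    every `period_ms` milliseconds."""
    return 1000.0 / period_ms

# At a 100 ms period the repetitions are heard as rhythm (10 Hz);
# at a 10 ms period the repetition rate itself is an audible tone.
slow = repetition_pitch_hz(100.0)  # 10 Hz
fast = repetition_pitch_hz(10.0)   # 100 Hz
```

Shortening the period further raises this tone, which is why the original sound's identity dissolves into a sine-like timbre in the piece.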

20:00 Cancelled: Duo Improvisation for Live-Coding and Motion Gesture-Following - Concert
  Jonghyun Kim, Sungmin Park » Location: Roter Saal
Duo Improvisation for Live-Coding and Motion Gesture-Following

The Linux Computer Ensemble is a Seoul-based laptop performance group founded by the Open Source Art Forum in Korea. They compose and perform with Linux computers and open source applications such as Pure Data, SuperCollider and OpenFrameworks. Their performances include works for live coding, motion gesture-following, audiovisual art and live electronics.

20:00 TBA - Concert
  Matthias Grabenhorst, Jörn Nettingsmeier » Location: Roter Saal
Two improvisation sketches, played by Matthias and spatialized by Jörn. The plan is to do one free improvisation and one more song-oriented jazz tune, played on an electric guitar equipped with a hex pickup to allow for individual spatialization of each string, both to give the guitarist an extra layer of freedom in the treatment of melody, and to allow us to "unwrap" complex voicings by spreading them out in space.

20:00 Dance I - Concert
  Jaeseong You » Location: Roter Saal
The Dance Music series now continues with an alphabetical index. More recent pieces tend to conform faithfully to the IDM genre rather than simply borrowing its idioms and materials. In Dance I, the samples are initially put together in a random assemblage, and musical narratives and gestures are gradually extracted from this disorder. Some of the samples were created in SuperCollider on Linux (Ubuntu) and others were collected from both open source and commercial libraries.

Sound Night
22:00 Vagabundo Barbudo meets Listening Lights - Concert
  Andres Perez-Lopez » Location: Baron
Vagabundo Barbudo is an electro-experimental dance music project. The music is produced with free software tools and distributed under copyleft licenses; the "source" tracks are also distributed, in order to encourage modifications. Listening Lights is a project for automatic, music-driven lighting. The core of the project is RTML, a graphical SuperCollider framework for real-time Music Information Retrieval.
In the performance, the music from Vagabundo Barbudo will be played, along with automatic reactive lighting provided by RTML.

Sound Night
22:00 Hallogenerator - Concert
  Jakub Pisek, Roman Lauko » Location: Baron
The performance holds up a mirror to producers, DJs and pop singers who want to enhance their vocals digitally, while at the same time bringing to the stage the essence of entertainment and intellectual content in musical form. It also peacefully attacks the technological literacy of the nation, the jazz police, and computer reliability.

This live performance runs on open source software (Arch Linux, the radeon driver, Pure Data, Arduino, ...).

Sound Night
22:00 Superdirt² - cello & live linux electro - Concert
  Vincent Rateau, Daniel Fritzsche » Location: Baron
Superdirt² - fascinating electro beats mixed with virtuosically performed cello, resulting in a never-before-achieved danceability! With Ras Tilo at the synthesizers and Käpt'n Dirt on the cello, the duo provides a musical experience situated between drum'n'bass, house, dub, dubstep and far beyond...

The Band:
The two independent musicians know each other from many musical projects and as flatmates, but come from totally different musical genres. While Ras Tilo (Vincent Rateau) established himself as a music producer and multi-instrumentalist, Käpt'n Dirt (Daniel Fritzsche) continued his classical cello studies with an open mind and open ears for new musical genres.
They mix musical genres, looping, synthesizing and improvising, and never tire of pushing beyond the boundaries of music.
Their first full-length album is called "Algoriddims" and is licensed under a Creative Commons license.

Sound Night
22:00 visinin - Concert
  William Light » Location: Baron
Synthetic, electronic club music with an organic edge. Written and performed with Renoise, Monomes, and a variety of other software, both off-the-shelf and custom.

Sound Night
22:00 Pjotr & Bass - Concert
  Pjotr Lasschuit, Bass Jansson » Location: Baron
A live performance where the interaction between live-coded visuals and a hybrid trumpet is the key element. The software used includes Pure Data, Faust and OpenFrameworks.

Day 4 - Sunday, April/12

Workshops & Events

10:00 Excursion - (misc event)
(360 min)  
On Sunday, April 12 there will be an excursion to Rüdesheim am Rhein in the lovely Rhine valley. Please check the Excursion page for details (click on the "Site" link above). The bus departs at 10 a.m. from the "Muschel", the shell-like lecture building at Johann-Joachim-Becher-Weg 23 on the campus, not far from the conference venue.

The schedule is only a guideline; there is no guarantee that events will take place at the announced timeslots.
This page was last modified: Monday, May 18 2015 07:16 UTC - Albert Gräf



LINUX® is a registered trademark of Linus Torvalds in the USA and other countries.
Hosting provided by the Virginia Tech Department of Music and DISIS.
Design and implementation by RSS.