Paper Presentations

Thursday 16 April


Morning session


09:30

Registration desk open


10:30

Opening


11:20

Stéphane Letz et al.

What’s new in JACK2?

JACK2 is the future JACK version, based on the C++ multiprocessor Jackdmp implementation. This paper presents recent developments: the D-Bus based server control system, NetJack2 (the redesigned network components for JACK), and profiling tools developed during the port to Solaris.


12:00

Juuso Alasuutari

LASH Audio Session Handler: Past, Present, and Future

The LASH Audio Session Handler is a framework for saving and recalling the combined state of a collection of running audio programs. It is a central part of the Linux audio desktop, and its success or failure may well shape the way Linux audio software is received by a larger demographic of users. During the past year, LASH has become more tightly integrated with modern desktop environments and continues to evolve in this direction.


Thursday 16 April


Afternoon session


14:30

Jörn Nettingsmeier

Shake, rattle and roll – An attempt to create a “spatially correct” Ambisonic mixdown of a multi-miked organ concert

In the summer of 2008, the world’s first wave field synthesis live transmission (of Olivier Messiaen’s “Livre du Saint Sacrement”) took place between Cologne Cathedral and the WFS auditorium at Technische Universität Berlin. The music was captured by 20 microphones using a mixture of spot miking and the Hamasaki square technique, i.e. without a dedicated “Hauptmikrofon” (main microphone array), as this was found desirable for the intended reconstruction on a WFS system.
This paper describes an attempt to reconstruct the “original” spatial distribution of the three widely separated organs from the concert recordings using Ambisonic encoding. The actual azimuths and elevations of the spot mikes were measured relative to a carefully chosen “virtual” listening position and recreated with Ambisonic panning. The toolkit used for post-production (and for the live transmission setup as well) consists exclusively of free software, centered around Ardour and JACK on a Linux system.
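The panning step the abstract describes — turning a measured azimuth and elevation into Ambisonic gains — can be sketched with the standard first-order (B-format) encoding equations. This is a generic illustration, not the production setup; the sqrt(1/2) weighting on W is the common convention rather than anything specified in the paper:

```python
import math

def encode_b_format(sample, azimuth, elevation):
    """First-order Ambisonic (B-format) encoding of a mono sample.

    azimuth/elevation are in radians; returns the (W, X, Y, Z)
    channel gains.  W carries the conventional sqrt(1/2) weighting.
    """
    w = sample * math.sqrt(0.5)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A spot mike measured at 30 degrees left, 10 degrees up:
w, x, y, z = encode_b_format(1.0, math.radians(30), math.radians(10))
```

Applied per spot mike, this places each source at its measured direction; summing the per-mike B-format signals yields the combined Ambisonic mix.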


15:10

Jan Jacob Hofmann

Crafting Sound In Space

This presentation/paper will briefly discuss Ambisonics, analyze requirements for working with Ambisonics for composition, and will discuss different strategies of working with blue and Csound to compose using Ambisonics for spatialisation of sound.


16:00

Pau Arumí et al.

3D audio with CLAM and Blender’s Game Engine

Blender can be used as a 3D scene modeler, editor, animator, renderer and game engine. In this paper, we describe how it can be linked to a 3D sound platform working within the CLAM framework, with special emphasis on a specific application: the recently launched “Yo Frankie!” open content game for the Blender Game Engine. The game was modified to interact with CLAM, implementing spatial scene descriptors transmitted over the Open Sound Control protocol, which allows experimentation with many different spatialization and acoustic simulation algorithms.
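The transport the abstract mentions — spatial scene descriptors sent over Open Sound Control — can be illustrated by hand-assembling a minimal OSC message per the OSC 1.0 specification. The `/source/1/position` address is a hypothetical descriptor for illustration, not one documented by the paper:

```python
import struct

def osc_message(address, *floats):
    """Build a minimal OSC message with float32 arguments.

    Per the OSC 1.0 spec, strings are null-terminated and padded to
    4-byte boundaries, and floats are big-endian IEEE 754.
    """
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    typetags = "," + "f" * len(floats)
    return pad(address) + pad(typetags) + b"".join(
        struct.pack(">f", f) for f in floats)

# A hypothetical spatial descriptor: source 1 at position (x, y, z)
msg = osc_message("/source/1/position", 1.0, 0.5, -2.0)
```

In practice one would send such datagrams over UDP to the audio engine; real deployments typically use an OSC library rather than hand-rolled encoding.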


16:40

Michael Chapman

Ambisuite/Ambman: A utility for transforming Ambisonic files

The creation of a utility for converting and transforming Ambisonic files is described. The challenges of making such a utility as independent as possible of the user’s platform, and of adding a GUI to what was initially a command line tool, are detailed. Though the original version was released over a year ago, a greatly revised version of Ambisuite (ver. 0.6.0) is due to be released to coincide with LAC2009.


Friday 17 April


Morning session


10:00

Giso Grimm et al.

Application of Linux Audio in Hearing Aid Research

Development of algorithms for digital hearing aid processing involves many steps, from the first algorithmic idea and implementation up to field tests with hearing impaired patients. Each of these steps places its own requirements on the development environment; nevertheless, a common platform throughout the whole development process is desirable. This paper gives an overview of the application of Linux Audio in the hearing aid algorithm development process. The performance of portable hardware in terms of delay, battery runtime and processing power is investigated.


10:50

Simone Campanini, Angelo Farina

A new Audacity feature: room objective acoustical parameters calculation module

Audacity is a popular and powerful platform for audio editing. It has many useful features, a flexible plug-in subsystem, and is multiplatform too. The latter characteristic greatly facilitates the development of portable acoustic applications that will work regardless of the system employed by the final user. This led us to start porting Angelo Farina’s Aurora plug-in family from Adobe Audition to Audacity, beginning with the Acoustical Parameters module. In this article we describe this development work and the results obtained.


11:30

John ffitch et al.

The Imperative for High-Performance Audio Computing

It is common knowledge that desktop computing power is now increasing mainly by the change to multi-core chips. This is a challenge for the software community in general, but is a particular problem for audio processing. Our needs are increasingly towards real-time and low latency. We propose a number of possible paths that need investigation, including multi-core and special accelerators, which may offer useful new musical tools. We define this as High-Performance Audio Computing, or HiPAC, in analogy to current HPC activity, and indicate some on-going work.


12:10

Yann Orlarey et al.

Automatic Parallelization of Faust Code

Starting with version 0.9.9.5, Faust provides automatic parallelization features based on OpenMP, a standard API for shared-memory parallel applications. This paper describes the generation of parallel code and gives a detailed report of the performance that can be achieved with such code.


Friday 17 April


Afternoon session


14:30

Maurizio De Cecco

jMax Phoenix: le jMax nouveau est arrivé [The new jMax has come]

The reports of jMax’s death have been greatly exaggerated. Free software never dies; it just sleeps for a while. Almost nine years after the release of the project under a free license, and six years after the end of development by the institution that created it, some of the original project developers decided to revive it from its ashes: jMax Phoenix was born.


15:10

Hans Wilmers

Bowsense – A minimalistic Approach to Wireless Motion Sensing

A novel wireless motion sensing device is presented that can be used to control realtime electronics. The device consists of a small (20×40 mm) circuit board including all circuitry and the battery, and contains a complete inertial measurement unit (IMU) measuring both acceleration and angular velocity in three dimensions. Data is transmitted wirelessly via a Bluetooth link and can be read by applications either using Bluetooth serial emulation, or using a small standalone server application that converts incoming data and sends the reformatted data out via OSC.
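The server’s conversion step can be sketched as a frame parser. The 12-byte frame layout below — six big-endian signed 16-bit values, acceleration then angular velocity — is purely an assumption for illustration; the abstract does not document Bowsense’s actual wire format:

```python
import struct

def parse_imu_frame(frame):
    """Parse a hypothetical 12-byte IMU frame into accel/gyro triples.

    Assumes six big-endian signed 16-bit values:
    ax, ay, az (acceleration), gx, gy, gz (angular velocity).
    The real Bowsense wire format may differ.
    """
    ax, ay, az, gx, gy, gz = struct.unpack(">6h", frame)
    return (ax, ay, az), (gx, gy, gz)

# Raw counts straight off the (assumed) serial link:
accel, gyro = parse_imu_frame(struct.pack(">6h", 100, -50, 1024, 1, 2, 3))
```

A forwarding server would then scale these raw counts to physical units and resend them as OSC messages.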


16:00

Christophe Daudin

The Guido Engine: A toolbox for music scores rendering

The Guido Music Notation format (GMN) is a general purpose formal language for representing score level music in a platform independent plain text and human readable way.
Based on this music representation format, the Guido Lib provides a generic, portable library and API for the graphical rendering of musical scores. This paper gives an introduction to the music notation format and to the Guido graphic score rendering engine. An example application, the GSC, is then presented.


16:40

Fernando Lopez-Lezcano

The Quest for Noiseless Computers

When The Knoll, the building that houses CCRMA, was completely renovated in 2004, new custom workstations were designed and built with the goal of being both fast and completely noiseless, to match the architectural and acoustical design of the building.


Saturday 18 April


Morning session


10:00

Torben Hohn et al.

Netjack – Remote music collaboration with electronic sequencers on the Internet

The JACK audio server, with its ability to process audio streams of numerous applications with realtime priority, is of major significance for audio processing on Linux-driven personal computers. Although the Soundjack and Jacktrip projects already use JACK for remote live music collaboration, there is currently no technology available that supports the interconnection of electronic music sequencers.
This paper introduces the Netjack tool, which achieves sample-accurate timeline synchronization by applying the delayed feedback approach (DFA), and in turn represents a first solution towards this goal.


10:40

Free Ekanayaka, Daniel James

Transmission – Linux Audio goes mobile

Transmission is a custom GNU/Linux distribution designed for real-time audio production on mobile computing platforms, including Intel Ultra-Mobile devices. It was developed on behalf of Trinity Audio Group by 64 Studio Ltd. and was released to the public in the summer of 2008.


11:30

John ffitch – Keynote Speech


12:10

IOhannes Zmölnig

Reflection in Pure Data

One of the more prominent realtime computer music languages on Linux is Pure Data. While Pure Data provides ample constructs for signal domain specific programming, it has only limited capabilities for metaprogramming. In this paper we present “iemguts”, a collection of objects that add (among other things) reflection capabilities to Pure Data.


Saturday 18 April


Afternoon session


14:30

Albert Graef

Signal Processing in the Pure Programming Language

This paper introduces the author’s new functional programming language Pure and discusses its use as a scripting language in signal processing applications. Pure has a JIT compiler based on the LLVM compiler framework, which makes execution reasonably fast and interfacing to C very easy. A built-in GSL matrix type makes it possible to handle numeric signals efficiently, and a Pure plugin for Miller Puckette’s Pd provides the necessary infrastructure for realtime signal processing. The paper gives a brief overview of the language and the Pd interface, and presents some examples.


15:10

Victor Lazzarini

A Distortion Synthesis Tutorial

In this article, we survey the area of distortion synthesis, using the Csound language to provide tutorial examples of the different techniques. The text discusses various methods, from the classic algorithms to newer approaches, and concentrates on less well-known techniques as well as more recent developments that have been explored less often in the literature. The main aim of the article is to provide a general overview of the area, with tutorial implementations of the various techniques discussed.
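One classic technique such a survey covers is Chebyshev waveshaping: driving the n-th Chebyshev polynomial with a full-amplitude cosine yields exactly the n-th harmonic, so the polynomial coefficients become harmonic levels. A minimal sketch (in Python rather than the article’s Csound):

```python
import math

def cheby_shaper(coeffs):
    """Return a waveshaping function built from Chebyshev polynomials.

    Since T_n(cos t) = cos(n t), coeffs[n] directly sets the level of
    harmonic n when the shaper is driven by a full-amplitude cosine.
    """
    def t(n, x):
        # Chebyshev polynomial of the first kind, via the recurrence
        # T_0 = 1, T_1 = x, T_{n+1} = 2x T_n - T_{n-1}
        a, b = 1.0, x
        for _ in range(n):
            a, b = b, 2.0 * x * b - a
        return a
    return lambda x: sum(c * t(n, x) for n, c in enumerate(coeffs))

# A shaper producing only the 2nd harmonic from a cosine input:
shape = cheby_shaper([0.0, 0.0, 1.0])
```

Feeding `shape` with cos(θ) returns cos(2θ), i.e. the pure second harmonic; mixed coefficient lists give arbitrary static spectra.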


16:00

Jürgen Reuter

Considering Transient Effect in Spectrum Analysis

Signal processing with the discrete Fourier transform (DFT) works well in standard settings, but is unsatisfactory for rapid changes in signal spectra. We illustrate and analyze this phenomenon, develop a novel transform, and prove its close relation to the Laplace transform. We then deploy our transform to derive a replacement for the sliding window DFT. Our approach features a transient effect and hence shows a more natural response to rapid spectral changes.
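One plausible reading of an exponentially-weighted alternative to the rectangular sliding window is a “leaky” sliding DFT, whose decay factor relates it to sampling the Laplace transform along a vertical line. This is an illustrative guess at the general idea, not the authors’ actual transform:

```python
import cmath
import math

def leaky_sliding_dft(samples, n_bins, decay=0.99):
    """Exponentially-weighted sliding DFT.

    Each bin follows S_k <- decay * exp(-2j*pi*k/N) * S_k + x[n], so
    old samples fade out geometrically instead of dropping off a
    rectangular window edge -- a discrete analogue of evaluating the
    Laplace transform at s = -ln(decay) + 2j*pi*k/N.
    """
    bins = [0j] * n_bins
    rot = [decay * cmath.exp(-2j * math.pi * k / n_bins)
           for k in range(n_bins)]
    for x in samples:
        bins = [r * s + x for r, s in zip(rot, bins)]
    return bins

# A pure tone at bin 3 of a 16-bin analysis:
tone = [math.cos(2 * math.pi * 3 * n / 16) for n in range(256)]
spectrum = leaky_sliding_dft(tone, 16)
```

For a real cosine, energy concentrates in bins 3 and 13 (the conjugate pair), and after a spectral change the old content decays smoothly rather than persisting for a full window length.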


16:40

Hans Fugal

Optimizing the Constant-Q Transform in Octave

The constant-Q transform calculates the log-frequency spectrum and is therefore better suited to many computer music tasks than the DFT, which computes the linear-frequency spectrum. The direct calculation of the constant-Q transform is computationally inefficient, so the efficient constant-Q transform algorithm is often used. Both the constant-Q transform and the efficient constant-Q transform algorithm are implemented and benchmarked in Octave. The efficient algorithm is not faster than the direct algorithm for a low minimum frequency; additional vectorization of the algorithm is slower under most circumstances, and using sparse matrices gives only marginal speedup.
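The direct (inefficient) constant-Q calculation the abstract refers to can be sketched for a single bin. The window length shrinks as frequency rises, so every bin spans the same number of cycles; Q ≈ 17 (semitone spacing) and the Hamming window follow the common formulation, and the sketch is in Python rather than Octave:

```python
import cmath
import math

def cqt_bin(samples, f_k, sr, q=17.0):
    """Direct constant-Q transform coefficient for one bin.

    The window length N_k = ceil(q * sr / f_k) shrinks as frequency
    rises, so every bin spans roughly q cycles (constant Q).
    A Hamming window is applied, as is conventional.
    """
    n_k = int(math.ceil(q * sr / f_k))
    acc = 0j
    for n in range(n_k):
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / n_k)
        acc += w * samples[n] * cmath.exp(-2j * math.pi * q * n / n_k)
    return acc / n_k

# A 440 Hz tone analysed at an on-pitch and an off-pitch bin:
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
a4 = abs(cqt_bin(tone, 440.0, sr))
c5 = abs(cqt_bin(tone, 523.25, sr))
```

Computing every bin this way costs one fresh inner product per bin per hop, which is exactly the inefficiency the efficient (FFT-kernel) algorithm addresses.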


Sunday 19 April


Morning session


10:30

Richard Spindler

The Current State of Linux Video Production Software and its Developer Community

According to Linux and Open Source/Free Software thought leaders, the current situation in Linux video editing and production software still leaves a lot to be desired. Here I give an overview of the software that already exists for Linux video editing or is currently being worked on. I introduce the developers and development communities behind these projects, and explain how collaboration between projects works. I mention interesting events and places where Linux video developers meet, which groups might be interested in our work, and which ought to be involved in specific aspects of it. I outline where projects still lack support for specific use cases, and why this matters. Furthermore, I discuss open problems as well as opportunities for participation and collaboration, and make some predictions about the future, based on the work that has already been done and is currently being tackled.


11:20

Ilias Anagnostopoulos

The Otherside: Web-based Collaborative Sound Synthesis System

The Otherside is an advanced open-source multimedia system, designed to run as a server application. It is essentially a sound synthesis system that allows for the creative collaboration of a number of people or machines over computer networks. Unlike most similar attempts to create network-based audio applications, the Otherside does not require any specialized audio software or massive amounts of computer processing power. The audio processing is done on the server side and directed to listeners using a simple Internet Radio protocol. Any listener can then choose to participate in the sound creation process by using a simple web-browser-based chatroom, or by utilising their own advanced systems for network control.


12:00

Martin Rumori

GROMA – Programming a permanent multichannel sound installation

In this paper I address some of the more complex issues of developing the software for GROMA, a permanent urban sound installation that incorporates environmental sounds from two of Cologne’s partner cities, Rotterdam and Liège. The installation was inaugurated in 2008 at the site of Europe’s largest underground parking lot. The programming environment SuperCollider was used to realize the algorithmic rules of the score and a flexible routing scheme for multichannel sound distribution.


Sunday 19 April


Afternoon session


14:30

Workshop reports


14:45

Announcements and discussion


15:15


Panel discussion:
Linux (Audio) in science and research
 


16:30

Closing