\documentclass[12pt,a4paper,oneside,titlepage]{paper}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{textcomp}
\usepackage{listings}
\lstdefinelanguage{Ini}{basicstyle=\ttfamily\tiny, columns=fullflexible, tag=[s]{[]}, tagstyle=\color{blue}\bfseries, usekeywordsintag=true }[html]
\lstdefinelanguage{bash}{basicstyle=\ttfamily\tiny}
\usepackage{ulem}
\usepackage{lmodern}
\usepackage{multirow}
\usepackage{url}
\usepackage{graphicx}
\usepackage{pdfpages}
\usepackage{float}
\floatstyle{boxed}
\restylefloat{figure}
\usepackage{color}
\usepackage{hyperref}
\hypersetup{hidelinks, colorlinks = false}
\usepackage[font=scriptsize]{caption}
\usepackage[authoryear]{natbib}
\graphicspath{{../images//}}
\begin{document}
\begin{titlepage}
\centering
\includegraphics[width=0.3\textwidth]{tu-berlin-logo.pdf}\par\vspace{1cm}
{\scshape\LARGE Technische Universität Berlin\par}
\vspace{1cm}
{\scshape\Large Master Thesis\par}
\vspace{1.5cm}
{\huge\bfseries A Networking Extension for the SoundScape Renderer\par}
\vspace{2cm}
{\Large\itshape David Runge\par}
\href{mailto:dave@sleepmap.de}{dave@sleepmap.de}
\vfill
supervised by\par
Henrik von Coler and Stefan Weinzierl
\vfill
{\large \today\par}
\end{titlepage}
\pagestyle{empty}
\section*{Statutory Declaration}
\vspace{1cm}
I hereby declare that I have authored this work independently and by my own hand, without unauthorized outside help and using only the sources and aids listed.\\
Berlin, \today\\
\vspace{2cm}
\noindent\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\\
David Runge
\begin{abstract}
Wave Field Synthesis (WFS) has been an established technological concept for many years now, and institutions all over the world run setups ranging from single loudspeaker lines to large scale installations comprising several hundred loudspeakers.\\
The still evolving implementations are driven
by several rendering engines, of which two free and open-source ones, namely sWONDER and the SoundScape Renderer, have been (partially) developed at TU Berlin.\\
Due to its current design, the latter is not yet able to render for large scale setups, i.e.\ those in which, because of the high number of channels, several computers together render audio for one loudspeaker setup.\\
Its solid codebase, which additionally offers a framework for many more rendering types, and its ongoing development, however, make further work on this application a worthwhile investment.\\
This work covers the extension of the SoundScape Renderer's functionality, turning it into a networked application for large scale WFS setups.
\end{abstract}
\setcounter{tocdepth}{4}
\tableofcontents
\clearpage
\pagebreak
\pagestyle{headings}
\setcounter{page}{1}
\section{Introduction}
Wave Field Synthesis (WFS) describes a spatial audio rendering technique. As such, it aims at synthesizing a desired sound field in a given listening area, assuming planar reproduction to be suitable for most applications.\\
WFS is typically implemented using a curved or linear loudspeaker array surrounding the listening area.\\
Several free and open-source renderer applications exist for WFS environments, at varying stages of feature richness.\\
The proposed work will focus on one of them and its extension towards WFS on large scale systems.
\section{Spatial audio renderers and their application}
\subsection{Wave Field Synthesis}
\subsection{Higher Order Ambisonics and Vector Amplitude Panning}
\subsection{Binaural synthesis}
\section{Free and open-source spatial audio renderers}
To date, three free and open-source spatial audio renderers are known to exist, all of which are \href{http://jackaudio.org/}{JACK Audio Connection Kit (JACK)} \citep{website:jackaudio2016} clients:
\begin{itemize}
\item \href{https://sourceforge.net/projects/swonder/}{sWONDER} \citep{website:swonder2016}, developed by Technische Universität Berlin, Germany
\item \href{https://github.com/GameOfLife/WFSCollider}{WFSCollider} \citep{website:wfscollider2016}, developed by the \href{http://gameoflife.nl/en}{Game Of Life Foundation} \citep{website:gameoflife2016}, The Hague, Netherlands
\item \href{http://spatialaudio.net/ssr/}{SoundScape Renderer (SSR)} \citep{website:ssr2016}, developed by the Quality \& Usability Lab, Deutsche Telekom Laboratories and TU Berlin, and the Institut für Nachrichtentechnik, Universität Rostock
\end{itemize}
Currently, only WFSCollider and the SSR are actively maintained and developed; sWONDER, although still used in some setups, is thus losing significance. The three renderers follow different concepts, which are briefly explained in the following sections.
\subsection{sWONDER}
sWONDER \citep{baalman2007} consists of a set of C++ applications that provide binaural and WFS rendering.
In 2007 it was specifically redesigned \citep{baalmanetal2007} to cope with large scale WFS setups, in which several computer nodes, each providing several loudspeakers, drive a system together.\\
In these setups, each node redundantly receives all available audio streams (each representing one virtual audio source), and a master application signals which node is responsible for rendering which source on which loudspeaker.\\
It uses Open Sound Control (OSC) for messaging between its parts and for setting its controls. Apart from that, it can be controlled through a Graphical User Interface (GUI) that was specifically designed for it. Unfortunately, sWONDER has not been actively maintained for several years, has a complex setup chain and many bugs that are unlikely to be fixed any time soon.
\subsection{HOA-Pd}
\subsection{WFSCollider}
WFSCollider was built on top of \href{https://supercollider.github.io}{SuperCollider} 3.5 \citep{website:supercollider2016} and is also capable of driving large scale systems. It takes a different approach in doing so, though: whereas sWONDER distributes all audio streams to each node, WFSCollider typically keeps the audio files on all machines, plays them back simultaneously and synchronizes playback between the machines.\\
It has a feature-rich GUI in the ``many window'' style, providing timelines and movement of sources by building on what sclang (the SuperCollider programming language) has to offer.\\
As WFSCollider basically is SuperCollider plus extra features, it is also an OSC-enabled application and can thus also be used for mere multi-channel playback of audio.\\
Although it has many useful features, it requires Mac OS X to run (a Linux version is still untested), is built upon a rather old version of \href{https://supercollider.github.io}{SuperCollider} and, due to many changes to its core, is unlikely ever to be merged back into it.
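Both sWONDER and WFSCollider are controlled via OSC messages. To make the wire format of such messages concrete, the following sketch encodes a minimal OSC message by hand. It is a simplified illustration only (it handles just float arguments), and the address pattern \texttt{/source/1/position} and its arguments are made up for this example rather than taken from either renderer:

\begin{lstlisting}[language=Python, basicstyle=\ttfamily\footnotesize]
import struct

def osc_pad(data):
    # OSC strings are null-terminated and padded to a multiple of four bytes
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address, *args):
    # Encode an OSC message; for brevity only float32 arguments are supported
    typetags = "," + "f" * len(args)
    msg = osc_pad(address.encode("ascii")) + osc_pad(typetags.encode("ascii"))
    for arg in args:
        msg += struct.pack(">f", arg)  # OSC numbers are big-endian
    return msg

# A hypothetical message moving virtual source 1 to x=1.5 m, y=0.25 m:
msg = osc_message("/source/1/position", 1.5, 0.25)
\end{lstlisting}

The resulting datagram would typically be sent over UDP to the renderer's control port; real implementations additionally support further argument types (integers, strings, blobs) and message bundles.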
\subsection{SoundScape Renderer}
The SoundScape Renderer (SSR), also a C++ application, running on Linux and Mac OS X, is a multi-purpose spatial audio renderer: it is capable not only of binaural synthesis and WFS, but also of Higher-Order Ambisonics and Vector Base Amplitude Panning.\\
It can be used with a GUI or headless (without one); the GUI depicts the virtual sources, their volumes and positions, alongside the loudspeakers currently used for rendering a selected source. The SSR uses TCP/IP sockets for communication and is therefore not directly OSC-enabled. This functionality can nevertheless be achieved by combining it with other applications such as \href{http://puredata.info}{PureData} \citep{website:puredata2016}.\\
Unlike the two renderers above, the SSR is not able to run large scale WFS setups, as it lacks the features for instances of itself, each serving a subset of the available loudspeakers, to communicate across several computers.
\section{Extending SoundScape Renderer functionality}
The SSR, due to its diverse set of rendering engines, which are made available through an extensible framework, and its relatively clean codebase, is a good candidate for future large scale WFS setups. This type of feature is not yet implemented, though, and will need testing.\\
Therefore I propose the implementation and testing of said feature, making the SSR capable of rendering on large scale WFS setups with many nodes, controlled by a master instance.\\
The sought implementation is inspired by the architecture of sWONDER, but instead of creating many single-purpose applications, the master/node feature will be made available through command line flags passed to the ssr executable on startup. This mechanism is already actively harnessed, e.g.\ for selecting one of the several rendering engines.
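To make the intended master/node division of labor more concrete, the following toy sketch shows how a node that knows the full loudspeaker setup could restrict itself to its own output channels. All names here are hypothetical; none of this is taken from the SSR codebase, and it only sketches the intended behavior:

\begin{lstlisting}[language=Python, basicstyle=\ttfamily\footnotesize]
from dataclasses import dataclass

@dataclass
class Loudspeaker:
    channel: int  # output channel within the full reproduction setup
    x: float      # position in metres
    y: float

def node_subset(setup, first_channel, channel_count):
    # Every node knows the full setup, but only serves the contiguous
    # range of output channels assigned to it by the master.
    last = first_channel + channel_count
    return [ls for ls in setup if first_channel <= ls.channel < last]

# A toy linear array of eight loudspeakers, split across two nodes:
setup = [Loudspeaker(ch, x=0.1 * ch, y=0.0) for ch in range(8)]
node_a = node_subset(setup, first_channel=0, channel_count=4)
node_b = node_subset(setup, first_channel=4, channel_count=4)
\end{lstlisting}

With such an arrangement, the master only needs to distribute source updates; which loudspeakers a node actually drives follows from its channel assignment.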
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.9, trim = 31mm 190mm 24mm 8mm, clip] {ssr-networking.pdf}
\caption{A diagram displaying the SSR master/node setup with TCP/IP socket connections over the network (green lines), audio channels (red dashed lines) and an OSC connection (blue dashed line). Machines are indicated as red dashed rectangles; the connections of SSR nodes to their audio hardware outputs are shown as black lines below them.}
\label{fig:ssr-networking}
\end{figure}
While the SSR already has an internal logic that determines which loudspeaker is used for which virtual audio source, this logic will have to be extended so that it also knows which renderer node has to render which source on which loudspeaker (see Figure~\ref{fig:ssr-networking}). To achieve the above features, the SSR's messaging (and thus also settings) capabilities have to be extended alongside its internal logic for selecting output channels (and the master-to-node notification thereof). To introduce as little redundant code as possible, a ``client knows all'' setup is most likely desirable, in which each node knows about the whole setup, but is configured to serve only its own subset of loudspeakers. This ensures that the rendering engine also remains functional in a small scale WFS setup.\\
The lack of direct OSC functionality, as provided by the two other renderers, will not be problematic, as master and nodes can communicate directly through their built-in TCP/IP sockets, and the master can, if needed, be controlled via OSC.
\subsection{Preliminaries}
In preparation for this exposé I attempted a side-by-side installation, using Arch Linux, on a medium scale setup: the WFS system of the Electronic Studio at TU Berlin.
Unfortunately, the proprietary Dante driver used in that system is very complex to build, as well as underdeveloped, and thus keeps the system from being easily updated. Updates, however, are needed for testing purposes (finding a suitable real-time, low-latency Linux kernel), for trying out new software features, for building new software and for keeping a system secure. The driver will most likely require changes to the hardware, due to hardware branding implemented by the vendor, and thorough testing before use.\\
Although eventually using a proper WFS setup for testing will be necessary, it is luckily not needed for implementing the features, as they can already be worked out using two machines running Linux, JACK and the development version of the SSR.\\
The hardware of the large scale setup at TU Berlin in room H0104 is currently about to be updated and is therefore a valuable candidate for testing the sought-after SSR features.
\subsection{Outline}
\subsubsection{Remote controlling a server}
\subsubsection{Remote controlling clients}
\subsubsection{Rendering only on dedicated speakers}
\subsection{Publisher/Subscriber interface}
\subsection{IP interface}
\subsubsection{PureData integration}
\subsection{OSC interface}
\subsubsection{liblo}
\subsubsection{Client-Server setup}
\subsubsection{Multi-layered clients}
\subsubsection{Message interface}
\section{Future Work}
\subsection{Stress testing the OSC interface}
\subsection{Implementing a NullRenderer}
\subsection{Implementing AlienLoudspeaker}
\subsection{Interpolation of moving sources}
\pagebreak
\listoffigures
\pagebreak
\listoftables
\pagebreak
\bibliographystyle{plainnat}
\bibliography{../bib/ssr-networking}
\end{document}