Shintaro Miyazaki: [Trans]-Sonic Archeology of Computational Assemblages
oscillation series no.2 at general public, 05.09.10
Thinking about sonic theory, the following short contribution addresses a key issue of the subject, namely the use of sonic methods for understanding media technology. For this endeavor I suggest two complementary methods: first a hardware-based and second a software-based approach for audifications and sonifications of the processes of storage, transmission and calculation which happen inside the networks of computational assemblages. These experimental investigations shall reveal the rhythmical and sonic qualities of many technological events and processes which underlie our daily (inter)actions without being noticed.
Hardware-based methods:
At the beginning of the second decade of the 21st century, we are surrounded by inaudible electromagnetic waves and their modulations, which transport digital data for our convenience. These wireless communication systems – Wi-Fi, Bluetooth, UMTS, EDGE, HSPA, GSM, GPS and others – can mutate into sonic instruments. The digitally modulated electromagnetic data-fields and their signals, produced in urban areas by ubiquitous base transceiver stations (BTS) and all the mobile network devices, can be made audible by simple signal-to-sound transformation (audification) using self-made high-frequency detectors, which work from 100 MHz up to 2.4 GHz. Here are some examples:
Because all the examples above were made indoors, the BTS signals are inaudible, but you can clearly hear the background noise of household Wi-Fi networks besides the rhythms created by each of the specific signals. Here is a recording of an ordinary walk through urban space, where you can clearly hear the strong signal of a base transceiver station. There are more recordings at the website detektors.org (in collaboration with Martin Howse).
Not only intended but also unintended emissions of electronic communication devices can be made audible. For this it is recommended to use shielded power inductors (e.g. 150 mH), which transform changes in their surrounding electromagnetic field into changes in electric current; connected to an audio amplifier, these currents produce sound.
On 27 September 2007 a secret 1972 paper from the National Security Agency's in-house journal Cryptologic Spectrum, titled "TEMPEST: A Signal Problem", was declassified. "To state the general problem in brief: Any time a machine is used to process classified information electrically, the various switches, contacts, relays, and other components in that machine may emit radio frequency or acoustic energy." This means that it is possible to relate the processes of storage, transmission and calculation in machinic media assemblages to their side-channel effects, and thus to eavesdrop on and reconstruct leaking signals via unintended paths. This has epistemic value as well.
This holds, first of all, if we know the rules by which those signals are created. If we know how the devices work in a technical and media-archeological sense, then we can analyze the media processes by listening to their emissions and mapping them onto the rules and protocols of their sources. If musicologists know the score, they can analyze the chords, melodies or rhythms by hearing an audio recording of a performance of a given musical piece. This way of analyzing music is far more concrete than just reading the score and more accurate than playing the piece oneself on a piano. In a similar fashion, research comes closer to its material when it is possible to conduct experiments with the investigated media assemblages and to hear the dynamic rhythms and the character of these otherwise very dry technological artefacts. This was done with the GSM protocol stack and with the 10Base-T Ethernet protocol. (A pre-version of a publication explaining a media archeology of the Ethernet was published under the title "AlgoRHYTHMS Everywhere – a heuristic approach to everyday technologies".) Here are some sound examples:
The first one is a slowed-down recording of a controlled Ethernet transmission. The high-resolution recording (approx. 10 MHz sampling rate) was made directly, by merely hooking up one pair of the 10Base-T cable to an audio mixing device with preamplifiers and using a very fast digital sampling device. A similar situation can be seen and heard in the following video. You can hear the continuous stream of small micro-rhythmical clicks, each of which corresponds to one Ethernet frame.
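The slow-down effect itself can be sketched in a few lines of code. Direct audification in this sense does no resampling at all: it simply reinterprets the high-rate capture as a stream of audio samples, so a recording made at roughly 10 MHz, played back at 44.1 kHz, is stretched by a factor of about 227. A minimal sketch (the sample data here is hypothetical, standing in for a real capture):

```python
import numpy as np

def audify(samples, capture_rate, playback_rate=44100):
    """Direct audification: reinterpret high-rate samples as audio samples.

    No resampling is done; each captured sample becomes one audio sample,
    so the signal is slowed down by capture_rate / playback_rate."""
    audio = np.asarray(samples, dtype=float)
    peak = np.abs(audio).max()
    if peak > 0:
        audio = audio / peak          # normalize to [-1, 1] for playback
    slowdown = capture_rate / playback_rate
    return audio, slowdown

# A hypothetical 10 MHz capture of one millisecond of line activity:
capture = np.sin(2 * np.pi * 1e6 * np.arange(10_000) / 1e7)
audio, slowdown = audify(capture, capture_rate=10e6)
# One millisecond of Ethernet traffic becomes roughly a quarter second of sound.
```

Writing `audio` out as a 44.1 kHz file then yields the slowed-down clicks described above.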
Above are some more complex examples made with a power coil (AC-71/3.5MM by Monacor), which cannot be fully understood and therefore have only an indirect epistemic value for a sonic theory.
Software-based methods:
The possibilities of software-based methods for understanding computational processes are vast. Basically, all kinds of lists and data arrays can be sonified. I would like to focus on one case study I conducted: analyzing sorting algorithms and creating a sonification software which makes their structural algorithmic processes audible.
Sorting algorithms are very basic methods, which a student of computer science learns at a very early stage of their college years. They sort lists of numbers into ascending order. Two very different algorithms, bubble sort and merge sort, shall be focused on here. Bubble sort is a very inefficient, simple-minded algorithm, and you can easily hear its simple structure. Here is a sorting process of 30 items with a calculation speed of 75 ms per operation and a frequency mapping range of 200–3000 Hz:
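A minimal sketch of how such a sonification might be structured: a linear mapping of the values 1..30 onto the 200–3000 Hz range, and a bubble sort that emits one frequency per comparison. (The function names and the exact mapping are my assumptions for illustration, not the code of the original software; the 75 ms pacing would be applied when the events are rendered as tones.)

```python
import random

def value_to_freq(value, n, f_min=200.0, f_max=3000.0):
    # Linear mapping of a value in 1..n onto the 200-3000 Hz range.
    return f_min + (value - 1) * (f_max - f_min) / (n - 1)

def bubble_sort_events(items):
    """Bubble sort that logs one frequency (in Hz) per comparison.

    In the sonification, each event would be played as a short tone,
    paced at e.g. 75 ms per operation."""
    items = list(items)
    n = len(items)
    events = []
    for i in range(n):
        for j in range(n - 1, i, -1):
            events.append(value_to_freq(items[j], n))
            if items[j] < items[j - 1]:       # smaller values bubble forward
                items[j], items[j - 1] = items[j - 1], items[j]
    return items, events

data = random.sample(range(1, 31), 30)        # 30 items, values 1..30
sorted_items, events = bubble_sort_events(data)
# Every comparison produced an event: 30*29/2 = 435 tones between 200 and 3000 Hz.
```

The monotonous, densely repetitive event stream is exactly what makes bubble sort's rhythm so easy to recognize by ear.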
Compare that with the following merge sort run under the same conditions.
Merge sort is a recursive algorithm which divides a list into two halves, sorts them and merges them back together. It does this recursively until a sublist has only two items to sort; then the sorting is very simple. Compared with that, bubble sort performs a lot of unnecessary steps. Bubble sort starts at the end of a list, compares the last number with the second-last and swaps them if the second-last number is bigger than the last. This process is repeated until bubble sort reaches the beginning of the list, and then it starts over until no pair of numbers remains in which the first is bigger than the second. This can take a lot of time when a big number sits in the middle of the list, because it needs to bubble up until it reaches the top, respectively the end, of the list.
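The recursive halve-and-merge structure can be sketched in the same event-logging style as the bubble sort above (again an illustrative sketch, not the original software): each merge comparison appends one event that could be mapped to a tone.

```python
def merge_sort_events(items, events=None):
    """Merge sort that logs the smaller value at every merge comparison.

    The recursion halves the list until sublists are trivially sorted,
    then merges pairs of sorted halves back together."""
    if events is None:
        events = []
    if len(items) <= 1:
        return list(items), events
    mid = len(items) // 2
    left, _ = merge_sort_events(items[:mid], events)
    right, _ = merge_sort_events(items[mid:], events)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        events.append(min(left[i], right[j]))  # one sonic event per comparison
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, events

result, events = merge_sort_events([5, 3, 8, 1, 9, 2, 7, 4, 6, 10])
# Far fewer comparisons than bubble sort: O(n log n) instead of O(n^2).
```

Audibly, this produces a shorter, hierarchically phrased event stream, whose nested grouping reflects the recursion depth, instead of bubble sort's long uniform sweeps.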
The project AlgorhythmicSorting (in collaboration with Michael Chinen) was a proof of concept that algorithms create different rhythmical properties and that it makes sense to sonify their crucial processes, such as their loop structures and operations. It is the opposite direction of investigation compared with a hardware-based archeology: more a modelling and synthesis of sound than an analysis of the emissions of communication systems. It is about creating new systems which enable one to simulate different processes and their rhythms. More algoRhythms can be heard at the SoundCloud website of sonicarcheology.
A possible follow-up and synthesis of both the hardware- and the software-based methods is planned, but the outcome possibly lies in the realm of artistic speculation. The intention is – in collaboration with Michael Chinen – to program a software which allows us to sonify basic computational operations at a fairly low level of computer code, close to machine code. Open-source operating-system debugging and analysis software shall be modified and re-used to sonify operation codes, thread states and memory space at runtime. On the hardware side we are looking for methods to audify the same processes, using power coils to detect the emissions of the CPU or using power consumption analysis. More information will follow soon.
To sum up the epistemological value of the media-archeological experimental set-up and the theorization of its outcomes described here: it can be claimed that digital media are dynamic, time-critical systems which do not operate completely immaterially – as some media theorists might suggest – but on a trans-material level, and which can be understood by combining trans-sonic with conventional visualization methods. This difficult endeavor requires not only an interest in the technical side of information technology, but also an open-minded ethico-aesthetic attitude as a state and disposition of mind for "deep listening".
Profile of Shintaro Miyazaki
Born 1980 in Berlin, Shintaro Miyazaki grew up in Basle, Switzerland, and studied Media Theory, Musicology and Philosophy at the University of Basle, Humboldt University Berlin, the Technical University Berlin and the Free University of Berlin (M.A. in Basle, 2007). Since summer 2007 he has been an independent PhD researcher at the Chair for Media Theory of Humboldt University Berlin (Wolfgang Ernst).