
8.1 Introduction

The seminal book Making Things Talk provides instructions for projects that connect physical objects to the internet, such as a pet’s bed that sends text messages to a Twitter tag (Igoe 2007). Smart Things, such as LG’s recently released Wifi Washing Machine, can be remotely monitored and controlled with a browser on a Smart Phone. The Air-Quality Egg (http://airqualityegg.com) is connected to the Internet of Things, where air-quality data can be shared and aggregated across neighborhoods, countries or the world. However, as Smart Things become more pervasive, there is a problem: the interface is separate from the thing itself. In his book The Design of Everyday Things, Donald Norman introduces the concept of the Gulf of Evaluation, which describes how well an artifact supports the discovery and interpretation of its internal state (Norman 1988). A Smart Kettle that tweets the temperature of the water as it boils has a wider Gulf of Evaluation than an ordinary kettle that sings as it boils, because of the extra levels of indirection required to access and read the data on a mobile device compared to hearing the sound of the whistle.

In this chapter we research and develop interactive sonic interfaces designed to close the Gulf of Evaluation in interfaces to Smart Things. The first section describes the design of sounds to provide an emotional connection with an interactive couch created in response to the Experimenta House of Tomorrow exhibition (Hughes et al. 2003), which asked artists to consider whether “the key to better living will be delivered through new technologies in the home”. The design of the couch to provide “both physical and emotional support” explores the principle of the hedonomics movement, which proposes that interfaces to products should provide pleasurable experiences (Hancock et al. 2005). In the course of this project we developed a method for designing sounds that link interaction with emotional responses, modeled on the non-verbal communication between people and their pets (Barrass 2013). The popularity of ZiZi the Affectionate Couch in many exhibitions has demonstrated that affective sounds can make it pleasurable to interact with an inanimate object.

The next section explores the aesthetics of sounds in interfaces to Smart Things designed for outdoor sports and recreational activities. The first experiment investigates preferences between six different sonifications of the data from an accelerometer on a Smart Phone. The results indicate that recreational users prefer more musical sonic feedback, while more competitive athletes prefer more synthetic, functional sounds (Barrass et al. 2010). The development of interactive sonifications as interfaces to Smart Things was explored further in prototypes that synthesized sounds on Arduino microprocessors with various kinds of sensors and wireless communications. These prototypes also explored different materials that float and can be made waterproof, shapes that cue different manual interactions, and sonifications that convey different kinds of information. Technical issues with parallel data acquisition and sound synthesis on the Arduino microprocessor were addressed by extending the open source software to allow continuous real-time sound synthesis without blocking the simultaneous data acquisition from sensors (Barrass and Barrass 2013).

The final section describes the Mozzi software that grew out of these experiments. Mozzi can be used to generate algorithmic music for an artistic installation, wearable sounds for a performance, or interactive sonifications of solar panel sensors, on a small, modular and cheap Arduino, without the need for additional circuitry, message passing or external synthesizer hardware (Barrass 2013). The software architecture and Application Programming Interface (API) of Mozzi are described, with examples from an online tutorial that can guide further learning. The section closes with examples of what can be done with Mozzi, including artistic installations and scientific applications created by the Mozzi community.

8.2 Affective Sounds

The call for the Experimenta House of Tomorrow exhibition asked artists to question the effects of technology on domestic life in the future (Hughes et al. 2003). This led us to ask how the beeps of the microwave, dishwasher, and robot vacuum cleaner could be designed to be more pleasurable, engaging and communicative. What if things around the house that normally sit quietly made sounds? What kinds of sounds would furniture make, and why? To find out, we made a couch that whines for attention, yips with excitement, growls with happiness, and purrs with contentment. Would these interactive sounds increase the pleasure of sitting on it, or increase the irritation we feel when bombarded with meaningless distractions? ZiZi the Affectionate Couch is an ottoman covered in striped fake fur, shown in Fig. 8.1.

Fig. 8.1
figure 1

ZiZi the Affectionate Couch at the Experimenta House of Tomorrow exhibition. Photo courtesy of Experimenta Media Arts

A motion detector inside the couch senses movement up to 3 m away. Bumps with fur tufts on the surface suggest patting motions. The sensor signal varies in strength and pattern with the distance and kind of motion. The signal is analyzed by a PICAXE microprocessor to identify four states of interaction—no-one nearby, sitting, patting and stroking. When there is no-one nearby the couch is bored, and whines occasionally to attract attention. When someone sits on it, the couch expresses excitement with short yips, modeled on the response of a dog when it greets you. When the couch is patted, it expresses happiness through longer growls. When the couch is stroked at length, it expresses contentment by purring like a tiger. The design of sounds to convey these emotions was based on studies in which participants rated 111 sounds on an Affect Grid, which has axes of Valence and Arousal (Bradley and Lang 2007). Valence is a rating from displeasure to pleasure, whilst Arousal is rated from sleepy to excited. The four states of interaction are plotted on the Affect Grid according to the emotion that should be conveyed, as shown in Fig. 8.2. The transition between states is shown by a path with a bead on it marking the current state. The sound palette is also plotted on the Affect Grid, and samples that fall within the shading convey the affective response to that interaction. This Interactive Affect Design Diagram provides a new way to design sounds that convey emotion in response to interaction states. In future work the Interactive Affect Design Diagram will be tested as a way to design more complex sonic characters where positive valency sounds encourage some kinds of interaction, and negative valency sounds discourage others.
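The logic of the four interaction states and their target emotions can be sketched in plain C++. The classification thresholds and affect-grid coordinates below are illustrative assumptions, not the values used in the exhibited couch.

```cpp
#include <cassert>

// Hypothetical sketch of ZiZi's four interaction states and the
// affect-grid coordinates (valence, arousal) each is designed to convey.
enum class State { Bored, Excited, Happy, Content };

struct Affect { float valence; float arousal; };  // both in [-1, 1]

// Classify a motion-sensor level (0 = no-one nearby) and an interaction
// duration in seconds into one of the four states. Thresholds assumed.
State classify(int motionLevel, float strokeSeconds) {
    if (motionLevel == 0) return State::Bored;        // whine for attention
    if (strokeSeconds > 5.0f) return State::Content;  // purr
    if (strokeSeconds > 1.0f) return State::Happy;    // growl
    return State::Excited;                            // yip on sitting
}

// Target emotion for each state, plotted on the Affect Grid (assumed).
Affect target(State s) {
    switch (s) {
        case State::Bored:   return {-0.3f, -0.2f};
        case State::Excited: return { 0.6f,  0.8f};
        case State::Happy:   return { 0.7f,  0.4f};
        default:             return { 0.8f, -0.3f};  // Content
    }
}
```

A sound palette can then be chosen by picking samples whose rated position on the Affect Grid falls near the target point for each state.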

Fig. 8.2
figure 2

Designing interactive affective sounds using the affect grid

The microprocessor is connected to a RAM chip that stores 16 mono sound samples. A four-channel mixer and amplifier allows up to four of the samples to be combined to create more complex sounds and reduce repetition. The sounds are played through nine sub-woofer rumble-packs to produce purring vibrations and effects. The sounds can be heard at http://stephenbarrass.com/2010/03/02/sounds-of-zizi/
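The effect of combining samples can be illustrated in software. This is a generic sketch of four-channel mixing with saturation, not the couch's analog mixer circuit:

```cpp
#include <cstdint>

// Illustrative four-channel sample mix: sum up to four signed 8-bit
// samples, then clamp to the 8-bit range so that loud combinations
// saturate instead of wrapping around and distorting harshly.
int8_t mix4(int8_t a, int8_t b, int8_t c, int8_t d) {
    int sum = int(a) + int(b) + int(c) + int(d);
    if (sum > 127) sum = 127;
    if (sum < -128) sum = -128;
    return int8_t(sum);
}
```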

8.3 Aesthetics of Sonic Interfaces

Mobile Apps that track health and fitness in outdoor activities are popular on Smart Phones. However, the touchscreen interface is distracting and demanding during eyes-busy, hands-busy, outdoor activities. Although speech interfaces can provide alarms and summaries, they cannot convey continuous multi-sensor data in real time. The sensors, data processing and communications built into Smart Devices led us to develop an App to explore the idea that data sonifications could provide continuous feedback for sports and recreational activities. The Sweatsonics App, shown in Fig. 8.3, was designed to study aesthetic preferences between six different sonifications of the 3-axis accelerometer on a Smart Phone. The phone was strapped to the arm of the participant, who listened through headphones whilst involved in a fitness activity of their own choosing, such as walking, jogging or tai-chi. During the activity, the time and duration of selections of different sonifications were logged.
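One plausible mapping of the kind Sweatsonics compared takes the overall movement energy from the 3-axis accelerometer and drives the pitch of a single voice. The 0–3 g input range and 220–880 Hz output range here are assumptions for illustration, not the App's actual mapping:

```cpp
#include <cmath>

// Illustrative accelerometer sonification: movement magnitude (in g)
// is mapped linearly onto a two-octave frequency span.
float accelToFrequency(float ax, float ay, float az) {
    float mag = std::sqrt(ax * ax + ay * ay + az * az);  // magnitude in g
    if (mag > 3.0f) mag = 3.0f;                          // clamp hard shakes
    return 220.0f + (mag / 3.0f) * 660.0f;               // 220..880 Hz
}
```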

Fig. 8.3
figure 3

Sweatsonics App on a smart phone

The results showed preferences for the sine wave sonification (which sounds like a synthesizer or theremin), and a sonification with 3 musical instruments. The results from this pilot study raised many more questions about the aesthetics of sonification in sports and fitness. Were more competitive users choosing the sine wave, which sounds like a medical device, whilst the more recreational users chose the more musical sonification? Was there a relationship with the energy of the activity, e.g. jogging compared to tai-chi? Were there age or gender differences? What is clear is that many factors affect preferences between sonifications of the same sensor data with different users in different activities and contexts.

8.4 The Sonification of Things

During the experiments with the Sweatsonics App we found that users often knocked or dropped the Smart Phone, which is expensive to fix or replace, and that watery environments like rowing on the lake were particularly risky. Rugged casings can increase robustness, but it can be difficult to retrieve a Smart Thing that has been dropped in the lake. The PICAXE microcontroller system with the additional sample player that we made for the couch is larger than a Smart Phone, and too unwieldy to embed in a hand-sized object. However, the open source Arduino microprocessor has the capability to synthesize audio tones with variable frequency and duration. Arduino clones of diminishing size are designed to be embedded in smaller objects. This led to the choice of the Arduino in the next series of experiments with embedded sonifications as interfaces to Smart Things. The three prototypes, called Flotsam, Jetsam and Lagan, have a nautical theme motivated by our observations of the risk of dropping a Smart Phone in the lake in earlier experiments. Each prototype combines and explores a mixture of different materials, shapes, technologies, sounds, algorithms, and metaphors with a focus on robustness, waterproofing and floatability in a wet outdoor environment.

The first design was inspired by discussions about rowing, where the turning points in graphs of the acceleration of the skiff are used to understand and improve the smoothness and power of the stroke technique. The first prototype, called Flotsam, synthesises a “clicking” sound triggered by turning points in acceleration. The rate of clicks increases with the jerkiness of the movement, and stops when the movement is continuous, or stationary. This specialized Smart Thing is made of wood, which floats and protects it from knocks. The two hollowed-out halves are screwed together through a rubber gasket to make it watertight, as shown in Fig. 8.4.

Fig. 8.4
figure 4

Smart Flotsam

Inside is an Arduino clone, called the Boarduino, which has a smaller 75 × 20 × 10 mm footprint. There are no knobs, buttons or screens. The synthesized sounds are transmitted to the outside with a Sparkfun NS73M FM radio chip tuned between 87 and 108 MHz. A 30 cm antenna wire wrapped around the inside of the casing broadcasts the signal 3–6 m. Flotsam could allow a team of rowers to listen to the turning points in the motion of their skiff through radio headphones, potentially providing a common signal for synchronizing the stroke. Although the sounds come from the headphones, rather than the object itself, the direct relationship between the clicking and movement creates a sense of causality.

The click is synthesized with the built-in tone() command, set at 1 kHz frequency for 10 ms duration. In designing sounds on the Arduino it is important to understand that the tone() command blocks all other data acquisition and control functions until it completes. In testing we found that a click of 10 ms duration is audible without causing noticeable interference with the other routines. In trials the FM transmission frequency drifted over time, which required the receiver to be re-tuned after 10–20 min, and sometimes it was difficult to find the signal again. However, the biggest problem was interference from commercial radio channels with much stronger signals scattered across the FM spectrum. Unscrewing the halves of the shell to re-tune the transmitter is inconvenient, but an external knob could compromise the robustness and waterproofing. The 9 V alkaline battery lasts 10 h, but unscrewing the halves to replace it is also inconvenient.

Flotsam demonstrated that useful sonifications can be designed with the tone() command on an Arduino that is small and cheap enough to be embedded in things used in demanding outdoor activities. It raised issues with the range of sounds that can be produced, and the technical problems of transmitting sounds from inside the object to the outside.

The next design iteration was motivated by the observation that it can be difficult to tell someone how to hold a piece of sporting equipment, such as an oar or a tennis racquet. Could the Smart Thing itself guide the orientation and position of the user’s grip? Jetsam has a twisted shape that can be held in different ways and orientations. This time we investigated haptic vibration as a mode of information feedback that allows the object to be sealed, waterproof and floatable.

Jetsam was carved from pumice, a natural material formed from lava foam aerated with gas bubbles, which can be found floating in the sea or lakes near volcanoes, as shown in Fig. 8.5. The exterior was sealed with several layers of polyester to make a smooth and toughened surface. The halves were hollowed out to make space for an Ardweeny clone, whose 40 × 14 × 10 mm footprint is half the length of the Boarduino. The 3-axis accelerometer was soldered to the analog inputs on the Ardweeny, which was then inserted in the bottom half, while the 9 V battery was inserted into the top half. The two halves are connected by a screw-top (made from a plastic milk bottle) that seals but can be quickly and easily opened to replace the battery and re-program parameters on the microprocessor.

Fig. 8.5
figure 5

Smart Jetsam

Mobile phone vibrators are glued inside at the top, on one side in the middle, and at the bottom, so that the haptic vibrations are localizable. The vibration is subsonic in the range from 0 to 20 Hz, after which the feeling becomes continuous. The first experiments with the tone() command to generate vibration effects used frequencies from 0–10 Hz, but these could not be easily felt. This led to the development of a “pulse train” algorithm where each pulse was generated by a 1 kHz tone() with a duration of 50 ms, which enabled the perception of variation in pulse rate between 0 and 10 Hz. The top vibrator pulses faster in linear steps as the x-axis orientation goes from 0 to ±90°. The bottom end pulses in response to y-axis orientation, and the middle pulses with the z-axis. If the object is held vertical then the top pulses at the fastest rate. A slight horizontal tilt causes the bottom to pulse slowly as well. A tilt in all three directions produces pulses at different rates at the top, bottom and middle. The pattern of haptic sensation changes as the object is held in different orientations. This prototype raises the question of how accurately a user could learn to estimate the orientation of the object from the 3 localized pulses.
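The tilt-to-pulse-rate mapping described above can be sketched as a simple linear function. The exact curve used in Jetsam is not documented, so this is an illustrative version with the 0–10 Hz range from the text:

```cpp
#include <cmath>

// Illustrative Jetsam mapping: an axis tilt in the range ±90 degrees
// drives one vibrator's pulse rate linearly between 0 and 10 Hz.
float pulseRateHz(float tiltDegrees) {
    float t = std::fabs(tiltDegrees);
    if (t > 90.0f) t = 90.0f;       // clamp out-of-range readings
    return (t / 90.0f) * 10.0f;     // 0 deg -> 0 Hz, 90 deg -> 10 Hz
}
```

Each of the three vibrators would apply this mapping to its own axis, producing the localized pulse pattern described above.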

Although only one tone() command can be used at a time, and it blocks all other routines, the 3 simultaneous pulse trains can be programmed using the millis() function, which allows the tracking of time in the control loop. However, the scheduling of the control loop varies with CPU load. At orientations with higher pulse rates in all 3 directions the pulses become irregular and glitchy due to the technical limitations of the Arduino in sustaining the timing of the control loop at real-time rates. The trials also showed practical problems with the screwcap attachment. After several openings the wires from the battery became tangled and eventually detached, whilst multiple removals of the Ardweeny from the cavity detached the solder connections to the vibrators.
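The millis()-based scheduling of multiple pulse trains can be sketched host-side as follows. Each train keeps its own next-due time and fires without blocking the others; the struct and names are illustrative, not Jetsam's actual code:

```cpp
#include <cstdint>

// A non-blocking pulse train scheduled against a millis()-style clock.
struct PulseTrain {
    uint32_t periodMs;  // interval between pulses
    uint32_t nextDue;   // next time (ms) a pulse should fire
};

// Returns true (and schedules the next pulse) if the train is due at
// time 'now'. Three of these can be polled in one control loop, one per
// vibrator, without any call blocking the others.
bool firePulse(PulseTrain& t, uint32_t now) {
    if (t.periodMs == 0 || now < t.nextDue) return false;
    t.nextDue += t.periodMs;  // advance by the period to avoid drift
    return true;
}
```

The glitching observed at high pulse rates corresponds to the control loop being polled too slowly for the shortest periods, so due times pile up faster than they can be serviced.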

Jetsam demonstrated that haptic vibrations could also be an interface to Smart Things. This experiment also reiterated technical issues around the blocking behavior of the tone() command that affect multichannel sonic and haptic feedback on the Arduino.

The next iteration, called Lagan, is crafted from a cuttlefish backbone, a natural material that floats and provides shock-proofing, with the benefit that it is lighter than wood. Two half shells were hollowed out and lacquered with polyester to seal them, as shown in Fig. 8.6. The problems of twisted connections in Jetsam led us to reuse the sealed gasket approach from Flotsam. This time we trialed an Arduino Nano, which has a 43 × 19 × 10 mm footprint similar to the Ardweeny, with the added advantage of a lower input voltage of 6 V, allowing a further reduction in size through the use of a flat Li-ion battery.

Fig. 8.6
figure 6

Smart Lagan

The sonification was designed to have a sea wave-like sound to match the organic shape and material. The sound was synthesized by varying the amplitude of white noise loaded into a cycling buffer and sent to the PWM analog audio output. The need to synthesize a continuous sound without blocking data acquisition and control routines required the default timer behavior to be overridden. The variation in the acceleration in 3 directions is sonified by continuous variation in the amplitude of the synthesized sea-wave sound. The random number generator on the Arduino was too slow to interactively synthesize 3 channels of noise, so the number of channels was reduced to two. A pair of speakers was constructed on the object by gluing piezo buzzers against holes drilled through the housing, sealed with thin flexible plastic to make them waterproof. The sound is audible but an amplifier could improve the dynamic range.
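The cycling-noise-buffer idea can be sketched host-side. The buffer size and gain range here are assumptions for illustration, not Lagan's actual firmware:

```cpp
#include <cstdint>
#include <cstdlib>

// Sketch of a sea-wave voice: white noise cycles through a small buffer
// and a sensor-driven gain scales its amplitude per sample.
const int kBufSize = 256;

struct NoiseVoice {
    int8_t buffer[kBufSize];
    int pos = 0;
};

// Fill the buffer once with pseudo-random noise; cycling a pre-filled
// buffer avoids calling the slow random generator at audio rate.
void fillNoise(NoiseVoice& v, unsigned seed) {
    std::srand(seed);
    for (int i = 0; i < kBufSize; ++i)
        v.buffer[i] = int8_t(std::rand() % 256 - 128);
}

// Next output sample: cycle the buffer and scale by a gain in [0, 1],
// where the gain would track the acceleration magnitude.
int nextSample(NoiseVoice& v, float gain) {
    int8_t raw = v.buffer[v.pos];
    v.pos = (v.pos + 1) % kBufSize;
    return int(raw * gain);
}
```

Pre-filling the buffer is the key trade-off: it trades a slight periodicity in the noise for the speed needed to keep the audio output continuous.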

The development of a continuous sound for the Lagan prototype required changes to the low level workings of the Arduino software. This development adds a new capability to synthesize continuous sonic feedback without blocking the data acquisition and control. Building on this technical breakthrough we developed a continuous noise synthesizer that can be interactively varied in amplitude in response to a continuous stream of real-time data from a sensor.

Flotsam, Jetsam and Lagan were curated as an installation for the Conference on New Interfaces for Musical Expression (NIME) in Sydney in 2010 (Barrass and Barrass 2010), shown in Fig. 8.7.

Fig. 8.7
figure 7

Flotsam, Jetsam and Lagan at NIME 2010

These prototypes provided insights into the design space of embedded sonifications as interfaces to Smart Things. We found that the Arduino microprocessor can be an alternative to consumer mobile devices as a platform for sonification in outdoor activities. However, the tone() command limits the range of sounds to beeps and sine tones, hindering the development of anything more than very simple interfaces. Through these experiments it became clear that a naive approach to programming audio on Arduino would not provide satisfactory real-time performance. However, we were able to take over the timers on the Arduino to program a custom synthesis algorithm, opening up the potential to design more complex sonifications.

8.5 Mozzi: An Embeddable Sonic Interface to Smart Things

The low level modifications to the Arduino required to synthesise the sea-sound sonification for Lagan provide a foundation for other sonic metaphors and synthesis algorithms. This led to the idea of developing a general purpose synthesis library to enable a much wider and richer range of sonifications. Mozzi is an open source software library that enables the Arduino microprocessor to generate complex and interesting sounds using familiar synthesis units including oscillators, samples, delays, filters and envelopes. Mozzi is modular and can be used to construct many different sounds and instruments. The library is designed to be flexible and easy to use, while also aiming to use the processor efficiently, which has been one of the hurdles preventing this kind of project from succeeding until now. To give an idea of Mozzi’s ability, one of the example sketches which comes with the library demonstrates fourteen audio oscillators playing simultaneously while also receiving real-time control data from light and temperature sensors without blocking.

Mozzi has the following features:

  • 16384 Hz audio sample rate, with almost-9-bit STANDARD and 14-bit HIFI output modes.

  • Variable control rate from 64 Hz upwards.

  • Familiar audio and control units including oscillators, samples, filters, envelopes, delays and interpolation.

  • Modules providing fast asynchronous analog to digital conversion, fixed point arithmetic and other cpu-efficient utilities to help keep audio running smoothly.

  • Readymade wave tables and scripts to convert sound files or generate custom tables for Mozzi.

  • More than 30 example sketches demonstrating basic use.

  • Comprehensive API documentation.

  • Mozzi is open source software and easy to extend or adapt for specific applications.

Mozzi inherits the concepts of separate audio and control rate processes directly from Csound (Vercoe 1993) and Pure Data (Puckette 1996). The interface between Mozzi and the Arduino environment consists of four main functions startMozzi(), updateAudio(), updateControl() and audioHook(), shown in Fig. 8.8. All four are required for a Mozzi sketch to compile.

Fig. 8.8
figure 8

Mozzi architecture

startMozzi(control_rate) goes in Arduino’s setup(). It starts the control and audio output timers, given the requested control rate in Hz as a parameter.

updateControl() is where any analog input sensing code should be placed, and where relatively slow changes such as LFOs or frequency changes can be performed. An example of this is shown in Sect. 8.5.1.

updateAudio() is where audio synthesis code should be placed. This runs on average 16384 times per second, so code here needs to be lean. The only other strict requirement is that it returns an integer between −244 and 243 inclusive in STANDARD mode, or −8192 to 8191 in HIFI mode.

audioHook() goes in Arduino’s loop(). It wraps updateAudio() and takes care of filling the output buffer, hiding the details of this from user space.

Mozzi uses hardware interrupts on the processor which automatically call interrupt service routines (ISR) at regular intervals. startMozzi() sets up two interrupts, one for audio output at a sample rate of 16384 Hz and a control interrupt which can be set by the user at 64 Hz or more, in powers of two.

In STANDARD mode, the internal timers used by Mozzi on the ATmega processors are the 16 bit Timer 1 for audio and 8 bit Timer 0 for control. HIFI mode additionally employs Timer 2 with Timer 1 for audio. Using Timer 0 disables Arduino time functions millis(), micros(), delay() and delayMicroseconds(). This saves processor time which would be spent on the interrupts and the blocking action of the delay() functions. Mozzi provides an alternative method for scheduling (see EventDelay() in the API).

Audio data is generated in updateAudio() and placed in the output buffer by audioHook(), in Arduino’s loop(), running as fast as possible. The buffer has 256 cells, which equates to a maximum delay of about 15 ms, to give leeway for control operations without interrupting audio output. The buffer is emptied behind the scenes by the regular 16384 Hz audio interrupt.
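The 15 ms figure follows directly from the buffer size and the audio rate, as a small calculation shows:

```cpp
#include <cstddef>

// The loop() side fills the output buffer as fast as possible; the
// 16384 Hz audio interrupt drains one sample per tick. A full 256-cell
// buffer therefore buys 256 / 16384 s of leeway before an underrun.
const size_t kBufferCells = 256;
const unsigned kAudioRate = 16384;  // Hz

// Maximum leeway in milliseconds before the buffer runs dry.
double maxLeewayMs() {
    return 1000.0 * double(kBufferCells) / double(kAudioRate);
}
```

This evaluates to 15.625 ms, the "about 15 ms" quoted above: any control-rate work that stalls loop() for longer than this will audibly interrupt the output.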

Mozzi employs pulse width modulation (PWM) for audio output. This allows a single Arduino pin to be allocated for output, requiring minimal external components. Depending on the application, the output signal may be adequate as it is. Passive filter designs to reduce aliasing and PWM carrier frequency noise are available on the Mozzi website if required. Mozzi has an option to process audio input. The incoming sound is sampled in the audio ISR and stored in a buffer where it can be accessed with getAudioInput() in updateAudio().
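The idea behind PWM audio on a single pin can be illustrated by the centring arithmetic that shifts a signed sample into a non-negative duty value. This is a generic sketch of the technique using the STANDARD-mode output range, not Mozzi's internal code:

```cpp
#include <cstdint>

// Map a signed STANDARD-mode sample (-244..243) onto a PWM duty value
// (0..487). The waveform rides on the duty cycle; the high-frequency
// PWM carrier is removed later by a passive filter (or the speaker
// itself acting as a low-pass filter).
uint16_t sampleToDuty(int sample) {
    if (sample < -244) sample = -244;  // clamp out-of-range samples
    if (sample > 243) sample = 243;
    return uint16_t(sample + 244);     // shift to a non-negative duty
}
```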

8.5.1 Application Programming Interface (API)

Mozzi has a growing collection of classes for synthesis, modules containing useful functions, commonly used wave tables, and sampled sound tables. Mozzi includes Python scripts to convert raw audio files and templates which can be used to generate other custom tables.

Descriptions of the classes currently available are shown in Table 8.1. Modules are described in Table 8.2. Comprehensive documentation of the library is available online at http://sensorium.github.com/Mozzi/

Table 8.1 The current collection of Mozzi classes, with descriptions and the update rates of each
Table 8.2 Modules and descriptions

Bare bones example: playing a sine wave

This section explains a minimal Mozzi sketch step by step. The sketch plays a sine wave at a specified frequency. Although there are abundant instances online of Arduino sketches performing this task, this example illustrates the structure and gist of a bare-bones Mozzi sketch. It does not assume much previous experience with Arduino programming.

First include MozziGuts.h. This is always required, as are headers for any other Mozzi classes, modules or tables used in the sketch. In this case an oscillator will be used, and a wavetable for the oscillator to play:

figure a

The oscillator needs to be instantiated using literal numeric values as template parameters (inside the < > brackets). This allows the compiler to do some of the Oscil’s internal calculations at compile time instead of slowing down execution by repeating the same operations over and over while the program runs. An oscillator is declared as follows:

figure b

The table size must be a power of two, typically at least 256 cells and preferably larger for lower aliasing noise. This Oscil will be operating as an audio generator. AUDIO_RATE is internally defined by Mozzi, and provided here so the Oscil can calculate frequencies in relation to how often it is updated. The table_data is an array of byte-sized cells contained in the table file included at the top of the sketch.

The audio sine tone oscillator is created like this:

figure c
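Why the table size must be a power of two, and why it appears together with the update rate, can be seen in a host-side sketch of how a wavetable oscillator of this kind works. The names and internals below are illustrative, not Mozzi's actual Oscil implementation:

```cpp
#include <cstdint>

const unsigned kUpdateRate = 16384;  // updates per second (audio rate)

// A wavetable oscillator whose table size is a compile-time template
// parameter. Because TableSize is a power of two, wrap-around indexing
// is a cheap bit-mask, and the compiler can fold the constant into the
// generated code instead of computing it at run time.
template <unsigned TableSize>
struct WavetableOsc {
    int8_t table[TableSize];
    float phase = 0.0f;
    float increment = 0.0f;

    // Phase increment in table cells per update for a given frequency.
    void setFreq(float hz) { increment = hz * TableSize / kUpdateRate; }

    int8_t next() {
        int8_t out = table[unsigned(phase) & (TableSize - 1)];
        phase += increment;
        if (phase >= TableSize) phase -= TableSize;
        return out;
    }
};
```

Mozzi's Oscil uses fixed-point phase arithmetic rather than floats for speed, but the structure (table, phase, increment derived from frequency and update rate) is the same.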

The control rate, like the audio rate, must be a literal number and power of two to enable fast internal calculations. It is not necessary to define it as follows, but it helps to keep programs legible and simple to modify.

figure d

Now to the program functions. In Arduino’s setup() routine goes:

figure e

This sets up one timer to call updateControl() at the rate chosen and another timer which works behind the scenes to send audio samples to the output pin at the fixed rate of 16384 Hz.

The oscillator frequency can be set in a range of ways, but in this case it will be with an unsigned integer as follows:

figure f

Now Arduino’s setup() function looks like this:

figure g

The next parts of the sketch are updateControl() and updateAudio(), which are both required. In this example the frequency has already been set and the oscillator just needs to be run in updateAudio(), using the Oscil::next() method which returns a signed 8 bit value from the oscillator’s wavetable. The int return value of updateAudio() must be in the range −244 to 243.

figure h

Finally, audioHook() goes in Arduino’s loop().

figure i

This is where the sound actually gets synthesised, running as fast as possible to fill the output buffer, which gets steadily emptied at Mozzi’s audio rate. For this reason, it’s best to avoid placing any other code in loop().

It’s important to design a sketch with efficiency in mind in terms of what can be processed in updateAudio(), updateControl() and setup(). Keep updateAudio() lean, put slow changing values in updateControl(), and pre-calculate as much as possible in setup(). Control values which directly modify audio synthesis can be efficiently interpolated with a Line() object in updateAudio() if necessary.

The whole sketch is shown in Program 1.

Program 1.

Playing a sine wave at 440 Hz.

figure j

Vibrato can be added to the sketch by periodically changing the frequency of the audio wave with a low frequency oscillator. The new oscillator can use the same wave table, but this time it is instantiated to update at control rate. The naming convention of using a prefix of k for control and a for audio rate units is a personal mnemonic, influenced by Csound.

figure k

This time the frequency can be set with a floating point value:

figure l

Now, using variables for depth and centre frequency, the vibrato oscillator can modulate the frequency of the audio oscillator in updateControl(). kVib.next() returns a signed byte between −128 and 127 from the wave table, so depth has to be set proportionately.

figure m
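The depth-scaling arithmetic can be checked with a small host-side function. This is an illustration of the calculation, not the sketch's actual code; the variable names are assumed:

```cpp
#include <cmath>

// The LFO sample is a signed byte (-128..127), so dividing by 128
// normalizes it to roughly -1..1 before scaling by the vibrato depth
// in Hz and adding the result to the centre frequency.
float vibratoFreq(float centreHz, float depthHz, int lfoSample) {
    return centreHz + depthHz * float(lfoSample) / 128.0f;
}
```

With a 440 Hz centre and a 10 Hz depth, the output frequency sweeps between roughly 430 and 450 Hz as the LFO cycles.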

The modified sketch complete with vibrato is listed in Program 2.

Program 2.

Playing a sine wave at 440 Hz with vibrato.

figure n

While this example uses floating point numbers, it is best to avoid their use for intensive audio code which needs to run fast, especially in updateAudio(). When the speed of integer maths is required along with fractional precision, it is better to use fixed point fractional arithmetic. The mozzi_fixmath module has number types and conversion functions which assist in keeping track of precision through complex calculations.
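The fixed-point idea can be illustrated with a minimal Q8.8 type: 8 integer and 8 fractional bits packed into a 16-bit integer, so fractional maths runs at integer speed. These helpers are a generic sketch of the technique, not mozzi_fixmath's actual API:

```cpp
#include <cstdint>

// Q8.8 fixed point: the value x is stored as round(x * 256).
typedef int16_t q8n8;

q8n8 toQ8n8(float x)    { return q8n8(x * 256.0f); }
float fromQ8n8(q8n8 x)  { return float(x) / 256.0f; }

// Multiply two Q8.8 values: widen to 32 bits so the intermediate
// product does not overflow, then shift back down by 8 bits.
q8n8 mulQ8n8(q8n8 a, q8n8 b) { return q8n8((int32_t(a) * b) >> 8); }
```

Keeping track of where the binary point sits through a chain of such operations is exactly the bookkeeping that the mozzi_fixmath number types and conversion functions take care of.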

The Mozzi software was developed over a two year period, with many optimisation problems teased out one by one. The project has found solutions to the problems of affordable and easily embedded audio synthesis, and has broken out of the sample-playback, single-wave beeping paradigm widely accepted as embedded audio to date. This opens the way to increased creative uses of the Arduino and other compatible platforms. The first release of the library was made available on Github in June 2012. The range of potential applications has yet to be explored; however, some examples which have appeared so far include:

A musical fruit fly experiment for a science fair at The Edge in the State Library of Queensland. Kinetic and electronic artists Clinton Freeman , Michael Candy , Daniel Flood and Mick Byrne worked with Dr Caroline Hauxwell from the Queensland University of Technology to produce an interactive installation where people could play chords which represented the different resistances of a group of pieces of fruit infested with fruit flies. According to the project documentation, resistance is sometimes used as a measure of fruit quality (Freeman 2013).

B.O.M.B.- Beat Of Magic Box -, a palm-sized interactive musical device by Yoshihito Nakanishi , designed for cooperative performance between novice participants. The devices communicate wirelessly and produce related evolving harmonic and rhythmic sequences depending on how they are handled (Nakanishi 2011).

There are several MIDI-based synthesisers using Mozzi as a synthesis engine. One example is the ^[xor] synth by Václav Peloušek, founder of the Standuino hand-made electronic music project, with six voice polyphony, velocity sensitivity, envelopes, selectable wavetables, modulation and bit-logic distortions (Peloušek 2013). Others include Arduino Mozzi synthesizer vX.0 by e-licktronic, a mono synth with selectable wavetables, LFO and resonant filtering (e-licktronic 2013), and the ironically humorous FM-based CheapSynth constructed and played by Dave Green and Dave Pape in a band called Fakebit Polytechnic (Fakebit Polytechnic 2013).

Mozzi is relatively young yet ripe for a community of open source development and practice to emerge around it. It’s easy to write new classes and to construct composite instrument sketches. Feedback from educators has shown that the library is able to be used by children. There is the potential for porting the library to new Arduino platforms as they become available. As it is, there is already a long to-do list, including a variety of half-finished sound generators and instruments, and the ever-receding lure of creative work beyond the making of the tool.

Mozzi expands the possibilities for sonification and synthesis in new contexts away from expensive hardware, cables and power requirements. The low cost and accessibility of Mozzi synthesis on open source Arduino provides a way to create applications and compositions adapted to a wide range of localized conditions. We hope that Mozzi will contribute towards a future where Smart Things sound smart, through sonic interfaces that growl, purr and sing, rather than simply beep or tweet.