I’m just finishing up a trip to IPSN’12 in Beijing this week where I presented a demonstration of my current research project. It’s a pretty cool piece of work. We wanted to develop a device that plugs into the headset port of a mobile phone and allows for capture of analog signals. To that end we developed a hardware and software system called AudioDAQ that does just that.
The system consists of a small hardware board, roughly one square inch in size, and server-side software for processing. The hardware encodes arbitrary analog waveforms in the audible range and exports them over the microphone line, drawing its power entirely from the microphone bias voltage. This means data can be captured with the phone's built-in voice recording application, making the system compatible with virtually every handset on the market today, with no software needed on the phone side.
Data captured in a voice recording is sent to a remote server, where an algorithm extracts the original signal and produces a plot of the data, along with a comma-separated-value file of the data points.
I’ll dive into a little bit of the design of this system, which I think is a really good example of something cleverly simple. Despite that simplicity, we encountered significant design challenges while fine-tuning system parameters to optimize power transfer and data recovery.
Delivering Power
The AudioDAQ hardware consists of a small number of analog and digital components that require very little power to operate. We also wanted to provide enough power for small active sensors. Most transducers need only a few active op-amps which, if efficiently designed, will draw only a couple hundred microwatts of power. This makes the microphone bias voltage a suitable candidate for powering the system.
The microphone bias voltage is traditionally used to power the small amount of amplification circuitry inside the microphones found in modern hands-free headsets. It has been found to be around 2 V DC on most surveyed handsets, and sits behind a high-impedance resistor that limits how much current can flow (R1 in the photo above). Because of this resistor the line can only supply a limited amount of power, usually on the order of hundreds of microwatts.
We feed the microphone bias voltage into a small ultra low dropout linear regulator to ensure it is a consistent 1.8V. This becomes important because we also use this voltage as a reference voltage for our system.
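To get a feel for the numbers, here is a back-of-the-envelope power budget in Python. The bias resistor value is an illustrative assumption (component values vary by handset and are not taken from our measurements); the point is just that a ~2 V source behind a few kilohms tops out in the hundreds-of-microwatts range, matching what we observed.

```python
# Rough power budget for the microphone bias line.
# V_BIAS and R_BIAS are illustrative assumptions, not measured values.
V_BIAS = 2.0      # typical microphone bias voltage, volts
R_BIAS = 2.2e3    # assumed bias resistor (R1), ohms

# Maximum power transfer occurs when the load matches the source resistor:
# P_max = V^2 / (4 * R)
p_max_uw = (V_BIAS ** 2) / (4 * R_BIAS) * 1e6
print(f"max deliverable power: {p_max_uw:.0f} uW")  # ~455 uW
```

Everything the hardware does (the regulator, the multiplexer, and any attached sensor) has to fit inside that budget.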
Capturing Data
Next we must capture data for recording. The microphone port cannot simply be fed a DC-valued analog signal: a high-pass filter, formed by C1 and R2 in the first diagram, prevents DC values from making their way through the system. To overcome this we use a simple, effective solution: an analog multiplexer that switches between system ground and the signal at a rate within the audible passband.
This multiplexer creates a square wave whose amplitude reflects the DC value of the original signal. However, phone analog front ends all have different characteristic gains, and while the recorded magnitude is proportional to the voltage of the analog sensor signal, it carries no implicit scale. To fix this we additionally export the reference voltage across the microphone port. By switching between ground, the signal, and the reference voltage, we can determine where the signal sits between ground and the reference voltage, and scale accordingly.
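The scaling step can be sketched in a few lines of Python. This is a toy illustration of the idea, not the actual server code; the function name and the example amplitudes are made up. Given the recorded levels for the ground, signal, and reference segments, the absolute voltage falls out of a simple ratio:

```python
# Toy sketch of recovering absolute scale from the three multiplexed levels.
# Names and amplitudes are illustrative, not from the AudioDAQ codebase.
def scale_sample(measured_signal, measured_ground, measured_vref, vref_volts=1.8):
    """Map a recorded signal level to volts using the ground and reference levels."""
    span = measured_vref - measured_ground
    return (measured_signal - measured_ground) / span * vref_volts

# A signal recorded halfway between the ground and reference levels
# corresponds to half of the 1.8 V reference, regardless of the phone's gain:
v = scale_sample(measured_signal=0.25, measured_ground=-0.25, measured_vref=0.75)
print(v)  # 0.9
```

Because the ratio cancels out whatever gain the phone's front end applies, the same recording decodes correctly on any handset.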
To further extend the system we can add multiple channels of inputs as shown in the diagram above. This gives us the ability to simultaneously capture multiple analog signals.
An interesting design challenge was managing the tradeoff between signal fidelity and energy delivery. The amplitude of a microphone signal is approximately 10 mV, which is quite small. Adding a linear regulator and a small amount of capacitance can easily drown out that signal by injecting noise and attenuating it. To mitigate this we installed an additional resistor between the linear regulator and the microphone line. Sizing it carefully yields a good tradeoff between power delivery and isolation of the microphone line from regulator noise.
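A rough divider estimate shows why the isolation resistor helps. All component values below are illustrative assumptions (not our actual part values), and the model is deliberately crude: the harvesting branch is treated as the isolation resistor in series with the regulator's input capacitance, loading a mic line with some source impedance.

```python
# Back-of-the-envelope look at the isolation resistor tradeoff.
# Every value here is an illustrative assumption, not a measured part.
import math

R_ISO = 10e3    # assumed isolation resistor, ohms
C_REG = 1e-6    # assumed capacitance at the regulator input, farads
R_SRC = 2.2e3   # assumed source impedance of the mic line, ohms
F_SIG = 1e3     # representative audio frequency, Hz

# At F_SIG the harvesting branch presents roughly this impedance to the line:
z_cap = 1 / (2 * math.pi * F_SIG * C_REG)   # capacitor's impedance magnitude
z_branch = math.hypot(R_ISO, z_cap)         # |R_ISO + 1/(jwC)|, series branch

# Fraction of the signal surviving the loading (simple voltage-divider estimate):
survive = z_branch / (z_branch + R_SRC)
print(f"branch impedance ~{z_branch:.0f} ohm, signal retained ~{survive:.0%}")
```

Without the resistor the capacitor's ~160 Ω impedance would shunt most of the 10 mV signal; with it, the bulk of the signal survives, at the cost of some voltage drop on the DC power path.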
Data is captured with the voice recording app on the phone and e-mailed to our server for decoding. Because almost every recently manufactured phone has a built-in voice memo application, the AudioDAQ platform is compatible with a large base of existing devices.
Processing the Data
Finally we must reconstruct the signal. Currently a small piece of Python code does this: it takes the multiplexed, encoded audio data and extracts the framing information from it. You can see the intermediate steps of the algorithm in the figure. It first detects the edges, computes a mean value to represent each step between the edges, measures these values, and finally reconstructs the original signal from them. In practice this works wonderfully, with the original signal being easily recovered.
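The core of the edge-then-mean idea fits in a few lines. This is a toy reimplementation to show the structure of the algorithm, not the actual AudioDAQ decoder; the function name, threshold, and sample values are all made up for illustration.

```python
# Minimal sketch of the decoding idea: find edges in the recorded waveform,
# then average each flat step between consecutive edges.
# Toy reimplementation; not the actual AudioDAQ server code.
def decode_steps(samples, edge_threshold):
    """Return the mean value of each flat segment between detected edges."""
    edges = [0]
    for i in range(1, len(samples)):
        # An edge is any sample-to-sample jump larger than the threshold.
        if abs(samples[i] - samples[i - 1]) > edge_threshold:
            edges.append(i)
    edges.append(len(samples))

    means = []
    for start, end in zip(edges, edges[1:]):
        segment = samples[start:end]
        if segment:
            means.append(sum(segment) / len(segment))
    return means

# A square-wave-like capture: ground, signal, ground, reference levels.
wave = [0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 0.0, 0.0, 1.0, 1.0]
print(decode_steps(wave, edge_threshold=0.25))  # [0.0, 0.5, 0.0, 1.0]
```

Averaging across each whole step, rather than sampling a single point, is what makes the recovery robust to the noise the phone's audio path adds.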
In Closing
AudioDAQ works really well and has been a big focus of my work for the past few months. You can read the published demo paper here. Signal reconstruction obtains good quality results and the technology is fairly well developed. If you have any interest in using AudioDAQ in your projects, feel free to contact me and I’d be more than happy to send you some schematics and code!