Arduino MIDI Drum .002

After several iterations I’ve come up with the basis for a functional trigger circuit.

The piezo element (across a load resistor) runs into the Arduino’s analog pin 0, and also feeds a comparator whose reference is supplied by a variable resistor (which also runs to the Arduino’s A5 for monitoring). The output of the comparator drives an interrupt routine attached to the Arduino’s digital pin 2 (interrupt 0). That interrupt routine polls the piezo’s value at A0, then returns to the main loop, which does nothing right now but is where MIDI and other processing will go.
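In sketch form, the firmware side of that is tiny. Something like the following, with pin choices matching the description above (the hit flag is just a stand-in for the eventual MIDI work):

```
// Comparator-triggered capture: the comparator output on D2 fires an
// interrupt, and the service routine grabs the piezo level from A0.
const int PIEZO_PIN = A0;     // piezo element, across the load resistor
const int THRESH_PIN = A5;    // comparator reference pot, monitored only
const int TRIGGER_PIN = 2;    // comparator output -> interrupt 0

volatile int piezoValue = 0;
volatile boolean hit = false;

void onTrigger() {
  piezoValue = analogRead(PIEZO_PIN);  // poll the piezo on trigger
  hit = true;
}

void setup() {
  pinMode(TRIGGER_PIN, INPUT);
  attachInterrupt(0, onTrigger, RISING);  // interrupt 0 = digital pin 2
  Serial.begin(9600);
}

void loop() {
  if (hit) {                 // MIDI and other processing will go here
    hit = false;
    Serial.println(piezoValue);
  }
}
```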

The comparator provides trigger thresholding in the analog domain, relieving the Arduino of the overhead of doing it in software.

It turns out that the acquisition of the piezo value is fast enough that I can grab a significant series of data points into an array and print the array out to see how the thing actually responds to various events. My first discovery is that the first sample isn’t the peak, and that the element really does ‘ring’. This confirms my hunch about a likely peak-detection algorithm: run through the samples until I see a falling value, at which point the prior sample is the peak.
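In code, that rule might look like this (just a sketch; the burst size and the single A0 channel are placeholders):

```
// Grab a rapid burst of readings, then run through them until a value
// falls: the sample just before the first fall is taken as the peak.
const int SAMPLES = 64;
int burst[SAMPLES];

int capturePeak() {
  for (int i = 0; i < SAMPLES; i++) {
    burst[i] = analogRead(A0);
  }
  for (int i = 1; i < SAMPLES; i++) {
    if (burst[i] < burst[i - 1]) {
      return burst[i - 1];     // first falling value: prior sample is the peak
    }
  }
  return burst[SAMPLES - 1];   // still rising at the end of the burst
}

void setup() { Serial.begin(9600); }

void loop() { Serial.println(capturePeak()); }
```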

I tried sending the array of sampled values to Processing for graphing, but am having port-reading problems on the Processing side.

The other thing I saw is that, across a 1 megohm resistor, the piezo maxes out the Arduino’s input very quickly under very small impacts, and that dropping to a 100k resistor brings the response into a more reasonable range. This suggests that replacing the fixed resistor with a variable one would let me adjust the overall sensitivity of the final instrument. But I don’t want a pot control for every piezo element, so I’ll use a set of digital potentiometers under the control of a single rotary encoder. The comparators for each element can all share the same voltage reference, so only a single pot is still required for that function.
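Something like the following is what I have in mind for the ganged control; the MCP41010 digital pot and the SPI wiring here are assumptions on my part, not a settled parts choice:

```
#include <SPI.h>

// Ganged sensitivity: one encoder-derived value pushed to several SPI
// digital pots (MCP41010 assumed), one per piezo's load resistance.
const int NUM_POTS = 4;
const int POT_CS_PINS[NUM_POTS] = {7, 8, 9, 10};  // one chip select per pot

void setAllPots(byte level) {
  for (int i = 0; i < NUM_POTS; i++) {
    digitalWrite(POT_CS_PINS[i], LOW);
    SPI.transfer(0x11);      // MCP41xxx command: write to wiper 0
    SPI.transfer(level);     // wiper position, 0-255
    digitalWrite(POT_CS_PINS[i], HIGH);
  }
}

void setup() {
  SPI.begin();
  for (int i = 0; i < NUM_POTS; i++) {
    pinMode(POT_CS_PINS[i], OUTPUT);
    digitalWrite(POT_CS_PINS[i], HIGH);
  }
  setAllPots(128);           // start at mid-scale sensitivity
}

void loop() {
  // read the rotary encoder here and call setAllPots() on change
}
```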

So, next steps: set up my first multiplexed piezo array, implement what I now think is the appropriate peak sensing algorithm, and collect the components necessary for my ganged sensitivity controller.

For my work so far, thanks are due to the following sources on Arduino drumkit building and basic comparator use:

http://www.electronicdrums.com/pads/pads2.htm
http://spikenzielabs.com/SpikenzieLabs/DrumKitKit.html
http://todbot.com/blog/2006/10/29/spooky-arduino-projects-4-and-musical-arduino/
http://www.facstaff.bucknell.edu/mastascu/elessonshtml/Interfaces/ConvComp.html
http://home.cogeco.ca/~rpaisley4/Comparators.html

Arduino MIDI Drum .001

I am an inveterate tabletop drummer, so when I first heard about the Zendrum my gearlust fizzed, only to be decarbonated by the price of what is really just a bunch of piezo triggers mounted on a nice piece of wood.

So, now that I’ve got the Arduino bug (my first circuit used a potentiometer to control the brightness of an LED [chuff, preen]), all things seem possible and I think to myself, “self… we can do one of these.”

Having obtained a fistful of piezo discs and a MUX board, I’m now faced with the task of sampling the peak values of those discs faster than I can thwap them and converting their data to MIDI events.

Now, I’m obviously not the first person to come up with this idea (here, let me Google that for you), so I can vicariously consider the problems of peak detection, sample rate and so on, before I ever touch jumper wire to breadboard.

For now, I’m going to see if I can get away without external peak detection or sample/hold circuitry. I’m going to write a simple state loop with no delays and see whether it serves the purpose without ornamentation. Something like this (a rough sketch only, with one channel and a placeholder threshold for now):
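```
// One non-blocking state machine per channel: idle until the reading
// crosses the threshold, chase it upward, report the peak the moment
// it falls, then hold off briefly so the ring-down doesn't retrigger.
enum TrigState { IDLE, TRACKING, GUARD };

const int CHANNELS = 1;             // a single piezo for now; MUX later
const int THRESHOLD = 40;           // placeholder noise floor
const unsigned long GUARD_MS = 50;  // retrigger holdoff

TrigState state[CHANNELS];
int peak[CHANNELS];
unsigned long guardStart[CHANNELS];

void sendHit(int ch, int peakValue) {
  // MIDI event generation will go here
}

void setup() {}

void loop() {
  for (int ch = 0; ch < CHANNELS; ch++) {
    int v = analogRead(A0);         // MUX channel select would go here
    switch (state[ch]) {
      case IDLE:
        if (v > THRESHOLD) { peak[ch] = v; state[ch] = TRACKING; }
        break;
      case TRACKING:
        if (v >= peak[ch]) {
          peak[ch] = v;             // still rising
        } else {
          sendHit(ch, peak[ch]);    // first fall: prior sample was the peak
          guardStart[ch] = millis();
          state[ch] = GUARD;
        }
        break;
      case GUARD:
        if (millis() - guardStart[ch] > GUARD_MS) state[ch] = IDLE;
        break;
    }
  }
}
```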

The MIDI processing itself is dirt simple (the actual I/O will be handled by another board), so it shouldn’t eat up a lot of time. And if either acquisition or output winds up being slower than I think, I can offload either or both processes to auxiliary microcontrollers.
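For the record, a note-on is just three bytes down the wire; the channel and note numbers below are only the usual General MIDI percussion conventions:

```
// MIDI out: three bytes per event at 31250 baud. Status 0x99 is
// note-on for channel 10, the conventional percussion channel.
void setup() {
  Serial.begin(31250);
}

void sendNoteOn(byte note, byte velocity) {
  Serial.write(0x99);              // note-on, channel 10
  Serial.write(note & 0x7F);       // e.g. 38 = acoustic snare in GM
  Serial.write(velocity & 0x7F);   // 1-127, mapped from the peak value
}

void loop() {
  sendNoteOn(38, 100);             // thwap
  delay(1000);
}
```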

If all this works reasonably well, then I can look at adding a ‘programming’ state (probably switched to by interrupt) which would allow me to assign MIDI key values, modify thresholds, etc.

I wonder just how much of this stuff you can cram into 32K anyway?

FollowBot: First Thoughts

When I decided to attend DragonCon this year I started thinking about elaborations of the idea of ‘costuming’ that went beyond attire and accessories, to environmental effects and devices. I hit on the idea of a “followbot,” based on the Star Wars mousebot which you see zipping around the Imperial corridors making electronic squeaky noises.

Such a mousebot would be Arduino-based, and consist of a motor system, steering servos, and some means of controlling direction. I’ve considered making a radio-controlled or even autonomous mousebot, but in this case I want one that will actually track me and stay at heel.

I don’t want to use any optical method of motion tracking, because that would require that my own costume have special features to enable visual recognition. Neither do I want to use infrared (IR) for pretty much the same reasons… even an IR beacon would have to maintain line-of-sight, and so would necessarily have to be a costume feature – not to mention potential interference problems.

That leaves me with radio frequency (RF) tracking, and the problem of determining direction and distance from the platform to me.

My first idea is to mount a circular array of 6 or 8 RF receivers on the platform and use received signal strength indication (RSSI) to determine the angle from it to a pinging beacon I could carry hidden. The antenna with the highest signal level would be pointing at me, and I could continuously scan the array to track my movement and drive the steering motors. Initial reading on this topic suggests that RSSI may be unreliable, since signal strength doesn’t correlate dependably with distance… in fact, the strength may even drop as the transmitter comes closer in some instances. I suspect this has to do with measuring wavefronts with a single antenna, and I’m going to see if using the circular array lets me compensate for that, perhaps by calculating some product of the measurements from adjacent receivers.
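To make the adjacent-receiver idea concrete, here is roughly the scan I imagine; everything in it is hypothetical, from the six analog RSSI outputs on A0-A5 to the simple blend of neighboring readings:

```
// Hypothetical scan of a six-receiver ring, each receiver presenting
// an analog RSSI level on A0-A5. The strongest receiver gives a coarse
// bearing; leaning toward the stronger neighbor interpolates between.
const int RECEIVERS = 6;
const int rssiPins[RECEIVERS] = {A0, A1, A2, A3, A4, A5};

float scanBearing() {
  int rssi[RECEIVERS];
  int best = 0;
  for (int i = 0; i < RECEIVERS; i++) {
    rssi[i] = analogRead(rssiPins[i]);
    if (rssi[i] > rssi[best]) best = i;
  }
  int left  = (best + RECEIVERS - 1) % RECEIVERS;
  int right = (best + 1) % RECEIVERS;
  // Positive lean: target sits between 'best' and 'right';
  // negative lean: between 'left' and 'best'.
  float lean = (float)(rssi[right] - rssi[left]) /
               (float)(rssi[left] + rssi[best] + rssi[right] + 1);
  float sector = 360.0 / RECEIVERS;   // receivers 60 degrees apart
  return best * sector + lean * sector;
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(scanBearing());      // degrees clockwise from antenna 0
  delay(100);
}
```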

So far, my only other idea is to mount a kind of ‘doppler array’ of only 3 or 4 receivers and measure the time differences between wavefront arrivals. That would require the transmitted signal to be coded, so that the processing program would know which ‘pings’ to compare. Distance measurement would probably then involve a ‘pingback’ of some kind, again for timing comparison. This is a sufficiently complex task that I think I’m going to stick with the RSSI method for now.

Clearly, none of this is happening for this year’s DragonCon. This little project is going to take a while.

Gestures Above the Fold

Back when wombats roamed the earth, web pages were lengthy. Content flowed freely from top to abyssal bottom, tremendous grey trunks of text broken but rarely by an image or block. This was when “hypertext” was still the paradigm, when every term and notion merited a bit of blue underline guiding the patient reader to wider reaches of raw, seething information.

Then the class of User Analysts arrived and convinced the Clients that everything important needed to be Above the Fold. Thus began the dual age of the Splash Page and the Ridiculously Dense Landing Page: competing theories of attention retention, both derived from the mandate that scrolling is bad, that the user needs to see everything now, and that she would rather click her way through to More than grab that scrollbar and actually navigate the browser.

In effect, this was a gestural mandate at least as much as a visual one. Consider that Mac mice didn’t (and still mostly don’t) have scrollwheels, so rather than simply folding an index finger, one must manipulate wrist and arm to browse a window. If Macs had scrollwheels, would The Fold have become as essential, bearing as it does the borrowed context of newsprint?

In any case, larger screens and resolutions and better design finally loosened the tyranny of The Fold, and clients became less squeamish about using all that lovely real estate afforded by modern digital media. Sure, the top splash carousel became de rigueur, but no longer was it assumed that anything more than 400 pixels from the top of the screen would vanish utterly from the user’s eye.

And then came the Pad. And here we are again.

By now, you’ve probably seen Gawker Media’s new layout. It is very much a design of the Pad era. The fold is back with a bang, and the sidebar is a gestural scrolling runway. At a stroke, Apple’s technologies have (again) both constrained and leapfrogged convention, and its metaphors are going to determine how we think about web architecture.

The medium is the technology. The technology determines the shape of information and the range of one’s interactions with it. You flip a book’s page. You click an e-reader’s pager. You scroll a mouse. You flick a pad. Each means of presentation determines information esthetics, shapes editorial decisions, adds to or subtracts from the semiotic experience… each, therefore, affects what that information means.

We’re on a cusp, I think. There seems to be an uneasy conversation among the Pad Web, the Screen Web, and the Mobile Web, each elbowing the others for consideration in the plans of digital architects. I think that in the long run more convergence is likely, rather than less, and that a new dominant metaphor will emerge for the next decade’s information.

And then of course, it will all change again.