Mobile Platform Acoustic-Frequency Environmental Tomography

Where we're at, 2009 Feb 5

We have Sarah's speaker-to-mic recordings, plus the dimensions and positions of the room, mics, and speakers:

  • raw .wav files and deconvolved .mat files
  • MLS and chirp deconvolutions
  • from each of 4 speakers, to each of 40 mic positions
  • from some speaker-pairs, to each of 24 mic positions

Speaker-pair recordings are incomplete (only 4 of the 6 possible pairs). But we could use them as sanity checks on the single-speaker recordings, instead of as primary data: by linearity, a pair recording should match the sum of the two corresponding single-speaker recordings (see the sketch below).
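
A minimal sketch of that check, assuming the deconvolved responses load as 1-D arrays from the .mat files (the filenames and the "h" key below are invented, not the corpus's actual layout):

```python
import numpy as np
from scipy.io import loadmat

# Hypothetical filenames and .mat key; the real corpus layout may differ.
h1  = loadmat("spk1_mic07.mat")["h"].ravel()    # single-speaker deconvolved response
h2  = loadmat("spk2_mic07.mat")["h"].ravel()
h12 = loadmat("pair12_mic07.mat")["h"].ravel()  # speaker-pair deconvolved response

n = min(len(h1), len(h2), len(h12))
residual = h12[:n] - (h1[:n] + h2[:n])          # linearity: pair ~ sum of singles
err_db = 10 * np.log10(np.sum(residual**2) / np.sum(h12[:n]**2))
print(f"pair-vs-sum mismatch: {err_db:.1f} dB") # strongly negative = sane recording
```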

The plywood cube (actually particleboard with 2x4 framing) has been demolished, as have the thin-glass parts of the speakers.

ISL still has the amplifiers, speaker drivers, and mics. One of the two Earthworks omnidirectional mics is malfunctioning and would need replacing if we want stereo recording.

ISL's multichannel recording PC, fruitfly.isl.uiuc.edu, has moved south along with its 8-channel I/O interface.

If we rebuild a plywood cube, prefer conventional speakers mounted flush with the walls over the original design of glass speakers inserted through slits in the cube wall.

What we might publish (how much work still to do)

Corpus

Like AVICAR, but for validating room-response models: no room rebuilding, no further "research." Mention image-source as well as several other algorithms.

Refine image-source

Add frequency dependence to wall reflection and/or air transmission, plus other subtle refinements as the data suggest. We should look at CATT and other commercial packages for architectural acoustics; they include, e.g., hybrid image-source/ray-tracing room responses, with the frequency response of different materials applied at each reflection.
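
For reference, a minimal sketch of the frequency-dependent idea (not the eventual implementation): run the Allen-Berkley image-source computation once per octave band with that band's wall reflection coefficient, band-pass each result, and sum. The room geometry, band coefficients, and image order below are invented placeholders; commercial packages apply a material filter at each reflection rather than this per-band shortcut.

```python
import itertools
import numpy as np
from scipy.signal import butter, sosfilt

fs, c = 16000, 343.0                       # sample rate (Hz), speed of sound (m/s)
room = np.array([3.0, 4.0, 2.5])           # room dimensions (m), placeholder
src  = np.array([1.0, 1.5, 1.2])           # source position (m), placeholder
mic  = np.array([2.0, 2.5, 1.2])           # mic position (m), placeholder
bands = [(125, 0.95), (500, 0.90), (2000, 0.80)]  # (center Hz, reflection coeff) guesses
order, n_samp = 8, int(0.3 * fs)           # max image order, RIR length in samples

def band_rir(beta):
    """Image-source RIR with a single reflection coefficient beta on all walls."""
    h = np.zeros(n_samp)
    idx = list(itertools.product(range(-order, order + 1), (0, 1)))
    for (nx, px), (ny, py), (nz, pz) in itertools.product(idx, repeat=3):
        img = np.array([(1 - 2*px)*src[0] + 2*nx*room[0],
                        (1 - 2*py)*src[1] + 2*ny*room[1],
                        (1 - 2*pz)*src[2] + 2*nz*room[2]])
        d = np.linalg.norm(img - mic)
        t = int(round(d / c * fs))         # arrival time in samples
        if t < n_samp:
            refl = abs(2*nx - px) + abs(2*ny - py) + abs(2*nz - pz)  # wall hits
            h[t] += beta**refl / (4 * np.pi * max(d, 1e-3))          # 1/r spreading
    return h

# Sum octave-band-filtered RIRs, one per band's reflection coefficient.
rir = np.zeros(n_samp)
for fc, beta in bands:
    sos = butter(4, [fc / np.sqrt(2), fc * np.sqrt(2)],
                 btype="bandpass", fs=fs, output="sos")
    rir += sosfilt(sos, band_rir(beta))
```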

When we discussed this in early 2008, Mark guessed at least 12 months until a "good-sounding" room inverse (40 dB, not just Bowon's 10 dB) in simulation; demonstrating that in simulation is warranted before sawing more particleboard.

Mask the reverberant tail by adding noise at 10 dB SNR, since later echoes may overlap too much to cancel rigorously.
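
Setting that masking level is just power arithmetic; a minimal sketch (white noise assumed here, though any masker at the right level would do):

```python
import numpy as np

def add_noise_at_snr(x, snr_db=10.0, seed=0):
    """Add white noise scaled so that signal x sits snr_db above the noise floor."""
    noise = np.random.default_rng(seed).standard_normal(len(x))
    # scale noise power to signal_power / 10^(snr_db/10)
    noise *= np.sqrt(np.mean(x**2) / (10**(snr_db / 10) * np.mean(noise**2)))
    return x + noise
```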

Validate room response models

Play sounds convolved with the plywood cube's computed inverse impulse response, and compare the recorded results to the original unconvolved sounds, either in simulation or with a fresh plywood cube.
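
The notes above don't fix the inversion method; one standard choice is Tikhonov-regularized inversion in the frequency domain, sketched below (the regularization constant eps and FFT padding are guesses). The residual between the recording and the original, in dB, is then the figure of merit (the 40 dB target above).

```python
import numpy as np

def inverse_filter(h, n_fft=None, eps=1e-3):
    """Regularized inverse of impulse response h: conj(H) / (|H|^2 + eps)."""
    n_fft = n_fft or 4 * len(h)            # zero-pad so the inverse has room to ring
    H = np.fft.rfft(h, n_fft)
    return np.fft.irfft(np.conj(H) / (np.abs(H)**2 + eps), n_fft)

# Test loop: convolve speech with inverse_filter(h), play it through the room
# (or, in simulation, convolve with h again), then measure the residual
# against the original speech in dB.
```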

A wood "phonebooth" would fit almost anywhere. Camille can imagine a larger phonebooth at ISL, though we'd have to sell Hank on building such a contraption, and we'd want to operate it remotely since it's not walking distance.

Robot?

Can we put a speaker or MLS generator (espresso machine?) next to a microphone, and use it to map out spaces? It could have two modes:

  • fast mode: catch just the first two or three echoes to find the two nearest surfaces, and match those against objects seen by the video camera to determine the space's geometry (see the sketch below)
  • slow mode: measure the detailed room response at a few different locations (by moving the microphone), and use this together with video to build up and test hypotheses about the room geometry
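
For the fast mode, the signal processing could be as simple as peak-picking the measured impulse response. A sketch, assuming the speaker and mic are effectively co-located so each echo's delay is a round trip (the 10% peak threshold is a guess):

```python
import numpy as np
from scipy.signal import find_peaks

def nearest_surfaces(h, fs, c=343.0, n_echoes=3):
    """One-way distances (m) to the surfaces behind the first few echoes."""
    direct = np.argmax(np.abs(h))                 # direct-path arrival sample
    tail = np.abs(h[direct + 1:])
    peaks, _ = find_peaks(tail, height=0.1 * np.abs(h[direct]))
    delays = (peaks[:n_echoes] + 1) / fs          # seconds after the direct sound
    return delays * c / 2                         # co-located: halve the round trip
```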

Application: we could work with IFSI to test this in their collapsed-building simulator; a small robot rolls through the collapsed building and maps it before the firefighters go in, saving firefighter lives.

Two extensions of Lae-hoon's Jan 30 paper review

1. Remove the assumption of a time-invariant RIR, because listeners' heads and ears move enough to degrade performance at high frequencies.

2. Extend their simulation to experiment with real microphones.

For each mic in an array, model:

  • nonuniform frequency response
  • nonuniform spatial ("off-axis") response
  • nonuniform accuracy of measurement of spatial position
  • nonuniform accuracy of measurement of orientation, if mic isn't "omnidirectional"
  • nonuniform SNR
  • correlated inter-mic noise (not independent Gaussians) from multichannel preamplifier
  • actual crosstalk between channels, again from preamp
  • noises in domains other than amplitude-vs-time

At some point, even if mics cost nothing, these inaccuracies imply that adding more mics would degrade rather than improve performance.

A sensitivity analysis of these effects could be done entirely in simulation, as a quickly publishable result; a second paper could then test the predictions with actual experiments. A sketch of one such simulation follows.
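
As one concrete shape for such a study (all numbers invented): Monte Carlo a narrowband delay-and-sum beamformer whose steering delays come from erroneous mic positions, and tabulate mean array gain against mic count. This toy models only the position-accuracy item; with it alone the gain shows diminishing returns rather than outright degradation, and it is the correlated-noise and crosstalk items above that could push extra mics into negative territory.

```python
import numpy as np

c, f = 343.0, 4000.0                       # speed of sound (m/s), test frequency (Hz)
k = 2 * np.pi * f / c                      # wavenumber
rng = np.random.default_rng(0)

def mean_gain_db(n_mics, pos_err_m, trials=200):
    """Mean array gain of a delay-and-sum beamformer whose steering uses mic
    positions corrupted by Gaussian error of std pos_err_m."""
    gains = []
    for _ in range(trials):
        true_pos = rng.uniform(0, 0.5, (n_mics, 3))           # mics in a 0.5 m box
        assumed  = true_pos + rng.normal(0, pos_err_m, true_pos.shape)
        src = np.array([5.0, 0.0, 0.0])                       # source 5 m away
        d_true    = np.linalg.norm(true_pos - src, axis=1)
        d_assumed = np.linalg.norm(assumed  - src, axis=1)
        w = np.exp(1j * k * d_assumed) / n_mics               # steering weights
        x = np.exp(-1j * k * d_true)                          # actual arrival phases
        gains.append(np.abs(np.sum(w * x))**2 / np.sum(np.abs(w)**2))
    return 10 * np.log10(np.mean(gains))

for n in (2, 4, 8, 16, 32):                # ideal gain would be 10*log10(n) dB
    print(n, f"{mean_gain_db(n, pos_err_m=0.02):.1f} dB")     # 2 cm position error
```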
