Dollbrain project

The MIFAVO project over the period 2007-2010 aimed to develop micropower integrated face and voice detection technology. Please contact Tobi Delbruck for questions about this project.

Motivation and goals

Natural human-machine interaction that relies on vision and audition requires state-of-the-art technology that burns tens of watts, making it impossible to run the necessary algorithms continuously under battery power. It would be desirable, both economically and ecologically, to burn full power only when the presence of a human desiring interaction is detected. The dollbrain project takes its inspiration from nature, where interactions among species require high effort (and high energy consumption) only during limited periods of engagement; at all other times, the organisms remain alert for general classes of stimuli while their effort is low and their energy is conserved.

We are addressing this capability by developing integrated micropower face and voice detection technology that together can provide a reliable and power-efficient wake-up cue for higher-level processing. We are investigating both algorithms and technology.

A specific goal of this project is to build a demonstrator that uses an integrated face and speech detector chip developed within the project.
Dollbrain demonstrator concept

The project started in March 2007 and concluded March 2010.

We have fabricated a 'dollbrain10' chip aimed at integrated face detection, and subsequently three more chips for color detection. We also fabricated, tested, and published a speech detection chip.

The speech detector circuits were a design project for our VLSI design course in 2005, before the project started, and Thomas Koch finished the design of the speech detector in his master's project. SpeechDetector1 was fabricated and characterized, and a number of design errors were discovered. The main problem was a violation of the rule that all butted contacts must be shorted; another was a violation of a constraint in a current-ratio extraction circuit. These errors were corrected, and a new version, SpeechDetector2, was sent for fabrication in September 2008.

The color pixels are based on the 'vertacolor' principle of color separation, which exploits the fact that the absorption depth of photons in silicon depends on their wavelength: shorter (blue) wavelengths are absorbed near the surface, while longer (red) wavelengths penetrate deeper.
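The wavelength-dependent separation can be illustrated with the Beer-Lambert law, I(z) = I0 exp(-alpha(lambda) z): photons collected in stacked junctions at different depths carry color information. The sketch below uses rough, illustrative absorption coefficients and hypothetical junction depths (not values from the dollbrain chips) to show how each color deposits most of its photons in a different layer.

```python
import math

# Illustrative absorption coefficients for silicon in 1/um; rough textbook
# numbers, not values from the dollbrain chips.
ALPHA = {"blue_450nm": 2.5, "green_550nm": 0.7, "red_650nm": 0.25}

def absorbed_fraction(alpha, z_top, z_bottom):
    """Fraction of incident photons absorbed between depths z_top and
    z_bottom (um), from Beer-Lambert: I(z) = I0 * exp(-alpha * z)."""
    return math.exp(-alpha * z_top) - math.exp(-alpha * z_bottom)

# Hypothetical depth ranges (um) of three stacked collection regions.
LAYERS = {"shallow": (0.0, 0.5), "mid": (0.5, 2.0), "deep": (2.0, 8.0)}

for color, alpha in ALPHA.items():
    fractions = {name: absorbed_fraction(alpha, a, b)
                 for name, (a, b) in LAYERS.items()}
    peak = max(fractions, key=fractions.get)
    summary = ", ".join(f"{n}={f:.2f}" for n, f in fractions.items())
    print(f"{color}: {summary}  -> collected mostly in the {peak} layer")
```

With these numbers, blue light is absorbed mostly in the shallow region, green in the middle region, and red in the deep region, which is the separation the vertacolor pixels exploit.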


Partners and support

This project was a cooperation between the hardware group of Tobi Delbruck at the Institute of Neuroinformatics (INI) in Zurich and the group of Hynek Hermansky at the Center for Speech and Language Processing at Johns Hopkins University.

Dollbrain was supported by the Swiss National Science Foundation (SNF) under the project Micropower integrated face and voice detection, grant number 200021-112354/1.

Additional support was provided by the University of Zurich. Nova Sensors (through our friend J.P. Curzon, whom we met at the Telluride Workshop on Neuromorphic Engineering) provided the silicon area that enabled the first vertacolor test chip.

start.txt · Last modified: 2011/05/24 22:18 by tobi