machine learning + visualization + connectomics

Rainbow Brain Chip Brain

In connectomics, we create a map of the mammalian brain at nano-scale. To do this, we acquire image stacks of rat brain tissue using electron microscopy. The resolution of these images is so high that individual neurons (nerve cells) and their connections (synapses) are visible. Machine learning algorithms then classify cell structures and connections in these extremely large images (terabytes to petabytes), since manual processing is impossible.
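Because the image stacks are far too large to fit in memory, classification like this is typically run block by block. Here is a minimal, hypothetical sketch of that idea in plain Python (the volume is flattened to a 1-D pixel list and the "classifier" is a toy intensity threshold, not the real model):

```python
def classify_blockwise(pixels, block_size, classify):
    """Process an image volume (here flattened to a 1-D pixel list) in
    fixed-size blocks, so the whole dataset never has to be in memory
    at once -- a stand-in for how terabyte-scale stacks are handled."""
    labels = []
    for start in range(0, len(pixels), block_size):
        chunk = pixels[start:start + block_size]
        labels.extend(classify(chunk))
    return labels

# Toy stand-in "classifier": mark bright pixels as cell membrane (label 1).
toy_classify = lambda chunk: [1 if p > 128 else 0 for p in chunk]

pixels = list(range(256))  # fake 1-D "volume" with intensities 0..255
labels = classify_blockwise(pixels, 64, toy_classify)
```

Each block is processed independently, so the same loop scales from a laptop test volume to a petabyte store by swapping the list for chunked reads from disk.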

Neurons in Rat Brain
A slice of rat brain

The ultimate goals are to understand the wiring of the brain, to cure mental and neurological diseases, and also to derive new artificial intelligence methods. These goals are still far away; the next two milestones are a fully processed 100 micron cube and then the extension to a 1 millimeter cube of brain tissue. The video on the right shows selected neurons in a 100 micron cube.

[1] A. Suissa-Peleg, D. Haehn, S. Knowles-Barley, V. Kaynig, T.R. Jones, A. Wilson, R. Schalek, J.W. Lichtman, H. Pfister: Automatic Neural Reconstruction from Petavoxel of Electron Microscopy Data, Microscopy and Microanalysis, 2016.
[2] R. Schalek, D. Lee, N. Kasthuri, A. Suissa-Peleg, T.R. Jones, V. Kaynig, D. Haehn, H. Pfister, D. Cox, J.W. Lichtman: Imaging a 1 mm³ Volume of Rat Cortex Using a MultiBeam SEM, Microscopy and Microanalysis, 2016.


The automatic classification of cells and connections is far from perfect. Humans are needed to double-check the results. This task is called proofreading. In 2014, we published Dojo, an interactive proofreading software.

Interactive Proofreading using Dojo
The Dojo proofreading software to fix automatic labelings

Dojo enables proofreading even by completely untrained people recruited from the street. The data and results from the published user study are available as The Proofreading Benchmark.

We found that the majority of time during interactive proofreading is spent looking for errors.

To reduce this time, we developed the Guided Proofreading system. Artificial intelligence suggests potential errors and corrections to the user, which speeds up the proofreading task. Our results show that the trained classifier is also able to perform proofreading automatically - up to a certain threshold (and better than using Dojo :D). The video on the left shows the Guided Proofreading user interface in action, reducing proofreading to simple yes/no decisions.
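The split between automatic corrections and yes/no questions can be sketched in a few lines. This is a hypothetical illustration, not the published classifier: the scores, threshold value, and candidate names are all made up.

```python
def guided_proofread(candidates, auto_threshold=0.95):
    """Rank candidate corrections (score, description) by classifier
    confidence. Corrections above the threshold are applied
    automatically; the rest become yes/no questions for a human.
    Threshold and scores here are illustrative only."""
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    auto = [desc for score, desc in ranked if score >= auto_threshold]
    review = [desc for score, desc in ranked if score < auto_threshold]
    return auto, review

# Toy candidates: (classifier score, proposed correction)
candidates = [(0.99, 'merge A+B'), (0.70, 'split C'), (0.97, 'merge D+E')]
auto, review = guided_proofread(candidates)
```

Raising the threshold trades automation for safety: more corrections go to the human, but fewer mistakes are applied unreviewed.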

[3] D. Haehn, S. Knowles-Barley, M. Roberts, J. Beyer, N. Kasthuri, J.W. Lichtman, H. Pfister: Design and Evaluation of Interactive Proofreading Tools for Connectomics, IEEE Transactions on Visualization and Computer Graphics, 2014.
[4] D. Haehn, V. Kaynig, J. Tompkin, J.W. Lichtman, H. Pfister: Guided Proofreading of Automatic Segmentations for Connectomics, CoRR, 2017.


Brain data is beautiful - not only at nano-scale. In 2012, we developed XTK, the first web-based visualization framework for medical imaging data such as MRI scans.

My Brain visualized using XTK
A glimpse into my brain rendered from an MRI scan using XTK

Slice:Drop is a web-based viewer for many medical imaging formats. It is based on XTK and used by clinicians, researchers, and patients every day. The software visualizes data without requiring any server uploads.

The Slice:Drop viewer
The Slice:Drop viewer supports different visualizations

MRI data, as visualized by XTK and Slice:Drop, is much smaller than connectomics data. To visualize brains at nano-scale, we developed the MBeam viewer for the ZEISS MultiSEM 505 microscope. Using this viewer, neuroscientists are able to view high-resolution images immediately after acquisition.

The ZEISS MultiSEM 505 microscope with Jeff Lichtman in front of it The MBeam viewer
The ZEISS MultiSEM 505 microscope (with Jeff Lichtman in front of it) and the MBeam viewer

From a software engineering standpoint, the MBeam viewer and Dojo provide overlapping functionality. In particular, the logic to cut out parts of the data for transfer and visualization is implemented in both products.
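That shared cutout logic amounts to extracting a region of interest from a 3-D volume so that only the requested part is transferred and rendered. A minimal sketch in plain Python (nested lists stand in for the real on-disk volume; function name and bounds are illustrative):

```python
def cutout(volume, z0, z1, y0, y1, x0, x1):
    """Extract the subvolume volume[z0:z1, y0:y1, x0:x1] from a 3-D
    volume stored as nested lists. Only this region needs to be sent
    to the client for visualization."""
    return [[row[x0:x1] for row in plane[y0:y1]]
            for plane in volume[z0:z1]]

# 4x4x4 toy volume with a unique value per voxel: 100*z + 10*y + x
volume = [[[100 * z + 10 * y + x for x in range(4)]
           for y in range(4)] for z in range(4)]

sub = cutout(volume, 1, 3, 0, 2, 2, 4)  # a 2x2x2 region of interest
```

Centralizing this in one place is exactly the duplication that a shared data-management layer removes.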


Therefore, we now develop Butterfly, a system for scalable data management and visualization, which provides data access and visualization as separate, reusable modules. Using Butterfly, new interactive visualizations of large scientific datasets can be developed rapidly.

[5] D. Haehn, N. Rannou, B. Ahtam, E. Grant, R. Pienaar: Neuroimaging in the browser using the X Toolkit, Frontiers in Neuroinformatics, 2014.
[6] D. Haehn, N. Rannou, E. Grant, R. Pienaar: Slice:Drop: Collaborative Medical Imaging in the Browser, ACM SIGGRAPH Computer Animation Festival, 2013.
[7] D. Haehn, J. Hoffer, B. Matejek, A. Suissa-Peleg, A.K. Al-Awami, L. Kamentsky, F. Gonda, E. Meng, W. Zhang, R. Schalek, A. Wilson, T. Parag, J. Beyer, V. Kaynig, T.R. Jones, J. Tompkin, M. Hadwiger, J.W. Lichtman, H. Pfister: Scalable Interactive Visualization for Connectomics, Informatics, 2017.

further reading

Machine Intelligence from Cortical Networks (MICrONS)

Terascale Neuroscience

AMI.js: Medical Imaging JavaScript ToolKit

A Transfer Function Editor for Browsers

Neuroglancer: a WebGL-based viewer for volumetric data

Machine Learning for humans

The Visual Computing Group @ Harvard

Written on March 2, 2017