photograph by Andrew Cutraro

SmartScopes

Bridging computer science and neuroscience.

Even as he launched his career as an engineer and computer scientist, Hanchuan Peng was drawn to the beauty of biology. Now, at the Janelia Farm Research Campus, he is a leader in developing sophisticated ways to make sense of biological images. His current focus is the brain’s wiring. Peng’s toolkit might someday even help the Federal Reserve.

How do you help biologists make sense of images?

In the last 20 years, scientists and engineers have come up with super-powerful imaging systems to visualize the three-dimensional (3D) structure of a sample. With a 3D image, you’re going to have a lot of data. Confocal microscopes, for instance, can produce images with 1,000 or 10,000 times more pixels than a picture taken with an ordinary camera.

Data sets are often so large, and the incoming speed of new data is so much faster than the processing speed, that you’ll never be able to process the data manually. Therefore, we need to design a smarter way to deal with those data. It’s not just doing it faster; we have to come up with whole new ways to process data that are meaningful for biology.

In 3D microscopy, we let the computer pick out interesting objects, like particular neurons, and find the association between the cells, such as the potential connectivity between neurons. We call this automated process “image mining.”

Can you walk us through how you sift through the data?

First, you need to define the meaningful objects or patterns. We start there and consider the biologists’ knowledge. How do they define a cell, a neuron? How do they describe the cell population? We use their definitions to train our computer to recognize objects and help scientists do this large-scale, complicated job automatically.
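To make that training step concrete, here is a minimal, hypothetical sketch of how a computer could learn a biologist's definition of an object from labeled examples. The feature names (volume, intensity, roundness) and the labels are illustrative assumptions, not Peng's actual pipeline; the sketch simply uses an off-the-shelf random forest classifier on synthetic measurements.

```python
# Hypothetical sketch: train a classifier on biologist-labeled objects.
# Feature names and labels are illustrative, not Peng's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row describes one segmented object: [volume, mean_intensity, roundness].
# In practice these would be measured from the 3D image; here they are synthetic.
n = 200
features = rng.normal(loc=[500.0, 0.6, 0.8], scale=[150.0, 0.2, 0.1], size=(n, 3))

# Labels encode the biologists' definition, e.g. 1 = "this object is a neuron soma".
labels = ((features[:, 0] > 450) & (features[:, 2] > 0.75)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```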

Once we have the objects, we need to find associations among the objects—for example, which neuron connects to which other neurons. That takes several steps. First, we standardize the data into a common space so that we can directly compare all these objects. Some people are taller. Some people are shorter. How do you compare their intrinsic features? You might compare the size of the head or the body with respect to overall height. We do similar things for our image data. We normalize objects so that they can be compared to each other directly. Then we produce quantitative measurements for the objects in the image to compare their various dimensions and look for patterns.
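The normalization he describes can be illustrated with a generic sketch (not his actual method): express each measurement relative to the object's overall size, then standardize each feature so that different dimensions become directly comparable.

```python
# Hypothetical sketch of normalizing object measurements before comparison.
import numpy as np

# Rows = objects, columns = raw measurements: [overall_length, head_size, body_size].
raw = np.array([
    [180.0, 24.0, 95.0],
    [160.0, 22.0, 83.0],
    [175.0, 23.5, 90.0],
])

# Step 1: make measurements scale-free by dividing by each object's overall size,
# analogous to comparing head size *relative to* height.
relative = raw[:, 1:] / raw[:, [0]]

# Step 2: z-score each feature so all dimensions are on a comparable scale.
normalized = (relative - relative.mean(axis=0)) / relative.std(axis=0)

print(normalized)
```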

How have you applied your image analysis tools?

Before I arrived at Janelia, I worked on mapping gene expression in the fruit fly embryo. The data sets were mostly two-dimensional, and we realized that we really needed 3D data for gene expression. We participated in several projects to measure gene expression in the fruit fly embryo and C. elegans at single-cell resolution in 3D.

These projects got us interested in the fruit fly’s nervous system, where cells’ identities are well-defined and where we could study how neurons are connected throughout the brain. We started building a 3D digital map of the entire fruit fly brain, generating a huge amount of 3D image data and then doing the association image mining.

We designed computer programs to align images and identify and trace neurons. The tools we develop are useful for more than one particular problem. For instance, we could apply similar techniques to the mouse brain and even the mouse lung.

We don’t just develop new tools for specialists in certain areas of biology. Recently, I started to combine the tools from my lab with other tools contributed by my colleagues into a new visualization and analysis platform to analyze a diversity of large 3D images. This new system, called Vaa3D, has attracted tens of thousands of downloads from all over the world.

How did you get interested in analyzing biological image data? Why not analyze data from, say, the stock market?

I’ve been working on biology and biomedicine-related problems from the very beginning. I did my Ph.D. on artificial neural networks, so it was very natural for me to ask questions about real biology, actual brains. What does a real neural network look like, and what do we need to help us decode it?

The engineering and computer science needed to analyze stock market data and those needed to analyze biological data are not dramatically different. They just involve different types of signals. Actually, I was once approached by someone from the Federal Reserve, who asked about using my machine-learning technique to predict the stock market. I probably should follow up on that.

What's the ideal training for someone who works in your lab?

I want someone with an appreciation for the beauty of biology. It would be really challenging to have a postdoc who is a computer scientist with no appreciation for biology’s importance. Equally challenging would be a biologist who comes to work with me and doesn’t want to learn the computer science but just wants to take advantage of the computing tools. But it’s easier to teach a biologist computer science than to teach a computer scientist biology. It’s not as easy to establish a deep appreciation of biology.

What new projects are you especially excited about?

The most exciting project right now is to make our microscopes smart. A microscope is just like a camera—our eyes into biology. Microscopes are already well developed, and they are going to get even better in the foreseeable future. They will be faster, have more sensors, and be able to probe deeper and see a wider breadth of a specimen. But all these developments are predictable.

The new dimension will be in giving the microscope a brain to tell it where the eyes should look. Artificial intelligence, computer vision, and machine learning will enable the microscope to analyze some of the pictures intelligently and quickly and then find an interesting area and take higher-resolution pictures. The microscope could even do certain manipulations of the sample.
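As a rough sketch of that feedback loop, and not a description of the actual SmartScope prototype, the toy example below simulates acquisition with random arrays: survey the whole field cheaply, score tiles by mean brightness as a stand-in for "interesting", and re-image only the winning region at higher resolution.

```python
# Hypothetical sketch of a "smart" acquisition loop: survey at low resolution,
# find an interesting region, then re-image that region at higher resolution.
import numpy as np

rng = np.random.default_rng(1)

def acquire(region, resolution):
    """Simulate imaging a (y0, y1, x0, x1) stage region at `resolution` pixels per axis.

    This toy version ignores where the region is and returns a noisy image with
    one bright patch, so the detection step has something to find.
    """
    image = rng.random((resolution, resolution)) * 0.2
    image[resolution // 3: resolution // 2, resolution // 3: resolution // 2] += 0.8
    return image

def find_interesting_region(image, region, tiles=4):
    """Pick the brightest tile of the survey image and map it back to stage coordinates."""
    step = image.shape[0] // tiles
    y0, y1, x0, x1 = region
    best, best_tile = -1.0, (0, 0)
    for ty in range(tiles):
        for tx in range(tiles):
            mean = image[ty * step:(ty + 1) * step, tx * step:(tx + 1) * step].mean()
            if mean > best:
                best, best_tile = mean, (ty, tx)
    ty, tx = best_tile
    dy, dx = (y1 - y0) / tiles, (x1 - x0) / tiles
    return (y0 + ty * dy, y0 + (ty + 1) * dy, x0 + tx * dx, x0 + (tx + 1) * dx)

# Survey the whole sample cheaply, then spend imaging time only where it matters.
full_field = (0.0, 1.0, 0.0, 1.0)
survey = acquire(full_field, resolution=64)
roi = find_interesting_region(survey, full_field)
detail = acquire(roi, resolution=512)
print("re-imaging region", roi, "at", detail.shape, "pixels")
```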

I think this is the only way to handle the enormous complexity and very large scale of the nervous system. This technique is not just for neurobiology, but also for cell biology, developmental biology, structural biology—basically for everything.

I want to build an on-board system that combines image analysis, artificial intelligence, machine learning, and also automated imaging—what we call a SmartScope. I have already built a prototype in my lab at Janelia Farm.

Hanchuan Peng is a senior computer scientist at Janelia Farm Research Campus.
