Department News

Smarter Microscopes, Faster Maps: Lichtman Lab’s “SmartEM” Brings AI to Connectomics

A new paper in Nature Methods from the labs of MCB’s Jeff Lichtman and Aravinthan Samuel, of the Department of Physics and the Center for Brain Science, introduces an artificial intelligence–driven approach that dramatically speeds up one of neuroscience’s most data-heavy processes: mapping the brain’s wiring.

Dubbed SmartEM, the new system enables electron microscopes to conduct a rapid primary scan—recognizing important cellular structures in real time and adapting their imaging behavior accordingly—to identify the subset of regions that need a full (slow) scan. The result is a major leap forward in connectomics, the field devoted to charting neural connections at synapse-level resolution.

“Connectomics is all about building wiring diagrams of the brain—whether in mice, worms, or humans,” explained Yaron Meirovitch, a postdoctoral researcher in the Lichtman Lab and co-first author of the paper, SmartEM: Machine-learning guided electron microscopy. “To understand how neurons connect and communicate, we need extremely detailed images, and those images generate massive amounts of data. SmartEM helps make that process faster and more intelligent.”

Teaching the Microscope to See 

Traditionally, electron microscopy in connectomics has been a brute-force process: a microscope scans every patch of a tissue sample at high resolution, regardless of what’s in the field of view. This exhaustive method produces petabytes of images that must later be stitched, segmented, and proofread—a painstaking, time-consuming process that can take years to complete even for a cubic millimeter of brain tissue. 

Inspired by the efficiency of human vision, where the eye uses rapid movements, or saccades, to focus on salient regions, SmartEM implements a similar “foveated” imaging strategy. Instead of scanning everything with uniform effort, the microscope first performs a quick, low-resolution pass across the sample area. A machine learning algorithm then identifies regions where automated segmentation is most likely to fail, such as complex cell boundaries, and directs the microscope to capture only those areas in higher detail. 
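
To make the scanning policy concrete, here is a minimal sketch of what such a foveated acquisition loop could look like in Python. Everything in it is illustrative rather than the authors’ actual software: the microscope and predictor objects, their methods (scan_tile, failure_probability), the dwell times, and the rescan threshold are all assumed names and values.

```python
# Minimal sketch of SmartEM-style foveated acquisition. The microscope and
# predictor interfaces below are hypothetical placeholders, not the paper's
# real API; dwell times and the threshold are assumed values.

FAST_DWELL_NS = 50      # short dwell time for the rapid primary pass (assumed)
SLOW_DWELL_NS = 400     # long dwell time for difficult regions (assumed)
RESCAN_THRESHOLD = 0.5  # rescan tiles whose predicted failure risk exceeds this

def foveated_scan(microscope, predictor, n_rows, n_cols):
    """Fast pass over every tile; slow pass only where segmentation may fail."""
    tiles = {}
    for r in range(n_rows):
        for c in range(n_cols):
            # 1) Rapid, noisy primary scan of this tile.
            quick = microscope.scan_tile(r, c, dwell_ns=FAST_DWELL_NS)
            # 2) A trained model estimates how likely automated segmentation
            #    is to fail on the quick image (e.g., ambiguous cell boundaries).
            risk = predictor.failure_probability(quick)
            # 3) Revisit only the difficult tiles at high quality.
            if risk > RESCAN_THRESHOLD:
                tiles[(r, c)] = microscope.scan_tile(r, c, dwell_ns=SLOW_DWELL_NS)
            else:
                tiles[(r, c)] = quick
    return tiles
```

The design choice mirrors the saccade analogy: the expensive resource (beam dwell time) is spent only where the cheap first look suggests it will change the final reconstruction.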

“We made a microscope that behaves like the human eye attached to a brain: it scans very quickly and dismisses things that it doesn’t think are salient, then spends a lot of time on what is important,” says Lichtman. “That’s a far faster and smarter way to image the world than scanning every pixel with equal effort.”

“The microscope becomes data-aware,” Meirovitch adds. “Instead of wasting time on easy-to-image areas, it learns to predict difficult structures and dwells longer to acquire high-quality data before moving on. In this way, we can reduce the time the electron beam spends on a sample by up to sevenfold without sacrificing the accuracy of the final map.”
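
The arithmetic behind a claim like this is easy to check with a back-of-envelope budget. The fractions below are assumptions chosen for illustration, not measurements from the paper: if the quick pass costs a tenth of a conventional slow scan and only about five percent of the sample is flagged for slow rescanning, the beam-time saving lands near the quoted figure.

```python
# Back-of-envelope beam-time budget. All numbers are illustrative assumptions,
# not figures from the paper.
slow_full = 1.0                     # conventional full slow scan (normalized)
fast_pass = 0.10 * slow_full        # rapid primary pass over the whole sample
slow_rescans = 0.05 * slow_full     # slow rescans of the ~5% flagged as hard
speedup = slow_full / (fast_pass + slow_rescans)
print(f"speedup = {speedup:.1f}x")  # -> 6.7x, near the sevenfold figure above
```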

A Self-Learning System 

The system doesn’t just accelerate imaging; it also performs on-the-fly segmentation—coloring each cell and synapse in real time, offering researchers an immediate preview of the connectome as it’s being captured. “For Jeff, one of the most exciting aspects is that you can put tissue in the microscope and already see cells in different colors,” said Meirovitch. “That visual segmentation isn’t just for show—it’s part of how the microscope learns.”

“It’s a dynamic process where the system sets a quality standard and then optimizes its own performance to meet it,” Meirovitch explained. “The on-the-fly segmentation provides a rapid ‘draft’ of the neural wiring. A second neural network then inspects this draft to predict where it is likely wrong and directs the microscope to go back and rescan those specific areas more slowly.” 
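
That closed loop can be sketched in a few lines. The segmenter, inspector, and microscope objects below are hypothetical stand-ins for the paper’s neural networks and scan hardware, and the quality threshold and round limit are assumed parameters.

```python
# Sketch of the "draft, inspect, rescan" loop described above. All interfaces
# are hypothetical placeholders; target_error is an assumed quality bar.
def acquire_to_quality(microscope, segmenter, inspector, region,
                       target_error=0.02, max_rounds=3):
    image = microscope.scan(region, speed="fast")        # rapid first pass
    for _ in range(max_rounds):
        draft = segmenter.segment(image)                 # on-the-fly draft wiring
        error_map = inspector.predict_errors(image, draft)
        if error_map.mean() <= target_error:             # quality standard met
            return draft
        for patch in inspector.worst_regions(error_map):
            # Revisit only the spots the inspector flags as likely mistakes.
            image[patch] = microscope.scan(patch, speed="slow")
    return segmenter.segment(image)                      # best effort after budget
```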

The team validated the system using tissue samples from worms, mice, and humans. Depending on the sample, SmartEM achieved imaging speedups ranging from fivefold to sevenfold while maintaining the same segmentation quality as conventional methods. 

In connectomics, where mapping even a small region of a mouse brain can take years, such an acceleration is transformative. “Even if our system takes 50 hours to train, the time savings and scientific payoff are enormous,” Meirovitch adds.

Democratizing Connectomics 

Beyond technical performance, Meirovitch sees SmartEM as a step toward democratizing connectomics—making advanced neural mapping accessible to labs around the world. 

“Right now, only a few well-funded institutions can afford to do large-scale connectomics,” he said. “But if we can make imaging several times faster—and eventually two orders of magnitude faster—then even smaller labs with limited access to electron microscopes can perform their own connectomic studies of brain regions relevant to their neuroscientific questions. That’s really exciting.”

The project, originally conceived through a collaboration between Harvard’s Lichtman Lab and MIT’s Nir Shavit, reflects the growing intersection between neuroscience and computer science. Meirovitch, who previously completed a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) before joining Lichtman’s group, sees this cross-disciplinary approach as essential to the field’s future. “Connectomics can’t advance using only traditional biology tools,” he said. “We need algorithms, computation, and now, intelligent hardware. SmartEM is a beautiful example of how those worlds come together to make something fundamentally new.”

With its publication in Nature Methods, SmartEM represents years of collaboration and development. The team hopes the method will not only shorten the timeline for large-scale brain projects—such as the effort to map the entire mouse brain—but also empower smaller labs to perform their own local connectomic studies.

“Ultimately, we want every neuroscience lab to be able to do connectomics,” said Meirovitch. “This is just the beginning of that transformation. The next step is to extend this foveated imaging strategy into the third dimension, allowing a 3D-aware AI to rapidly scan a tissue volume and then intelligently direct the electron beam back to specific sub-volumes that require more detail for an accurate reconstruction.”

The paper’s co-first authors also include Ishaan Singh Chandok and Core Francisco Park, both of the Samuel lab, and Pavel Potocek of Thermo Fisher Scientific, whose collaboration and microscope technology were instrumental to the project. The team also included researchers at Johns Hopkins University, including Brock Wester.

12.30.2025 Nature Methods Research Briefing by Yaron Meirovitch

(l to r) Aravinthan Samuel, Ishaan Chandok, Yaron Meirovitch, and Jeff Lichtman