Sachin Mehta*,1, Ezgi Mercan*,1, Jamen Bartlett2, Donald Weaver2, Joann Elmore1, and Linda Shapiro1
1 University of Washington, Seattle, WA, USA
2 University of Vermont, Burlington, VT, USA
Figure: CNN architecture for segmenting whole slide breast biopsy images. Our CNN architecture incorporates the following: (1) input-aware residual convolutional units, (2) dense connections between encoding and decoding blocks, (3) multiple decoders, and (4) multi-resolution input. See the paper for more details.
Abstract
We trained and applied an encoder-decoder model to semantically segment breast biopsy images into biologically meaningful tissue labels. Since conventional encoder-decoder networks cannot be applied directly to large biopsy images, and the differently sized structures in biopsies present novel challenges, we propose four modifications: (1) an input-aware encoding block to compensate for information loss, (2) a new dense connection pattern between encoder and decoder, (3) dense and sparse decoders to combine multi-level features, and (4) a multi-resolution network that fuses the results of encoder-decoders run on different resolutions. Our model outperforms a feature-based approach and conventional encoder-decoders from the literature. Feeding the semantic segmentations produced by our model into an automated diagnosis task yields higher accuracies than a baseline approach that uses an SVM for feature-based segmentation, with both pipelines computing the same segmentation-based diagnostic features.
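The sketch below is a minimal, hypothetical illustration in PyTorch (not the released source code) of two of these ideas: connections from every encoder stage to the decoder, and fusion of predictions computed at two input resolutions. Module names, channel widths, and the additive fusion rule are placeholder assumptions; the input-aware residual units and the separate dense/sparse decoders described in the paper are omitted for brevity.

```python
# Illustrative sketch only: a tiny encoder-decoder with skip connections
# from every encoder stage to the decoder, plus fusion of predictions from
# two input resolutions. All names and channel sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class TinySegNet(nn.Module):
    """Encoder-decoder where each encoder stage also feeds the decoder."""

    def __init__(self, num_classes=8):
        super().__init__()
        self.enc1 = conv_block(3, 16)
        self.enc2 = conv_block(16, 32)
        self.enc3 = conv_block(32, 64)
        self.dec2 = conv_block(64 + 32, 32)  # decoder stage also sees enc2 features
        self.dec1 = conv_block(32 + 16, 16)  # ... and enc1 features
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        e3 = self.enc3(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        return self.head(d1)  # per-pixel class scores


class MultiResSeg(nn.Module):
    """Run a segmenter at full and half resolution and fuse the scores."""

    def __init__(self, num_classes=8):
        super().__init__()
        self.full = TinySegNet(num_classes)
        self.half = TinySegNet(num_classes)

    def forward(self, x):
        out_full = self.full(x)
        x_half = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)
        out_half = self.half(x_half)
        out_half = F.interpolate(out_half, size=out_full.shape[-2:],
                                 mode='bilinear', align_corners=False)
        return out_full + out_half  # simple additive fusion of the two resolutions


if __name__ == "__main__":
    model = MultiResSeg(num_classes=8)
    patch = torch.randn(1, 3, 256, 256)  # one RGB patch cropped from a whole slide image
    print(model(patch).shape)            # -> torch.Size([1, 8, 256, 256])
```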
Downloads
Learning to Segment Breast Biopsy Whole Slide Images
Sachin Mehta*, Ezgi Mercan*, Jamen Bartlett, Donald Weaver, Joann Elmore, and Linda Shapiro. IEEE Winter Conference on Applications of Computer Vision (WACV'18). [Paper] [Source Code]
Results
Figure: Quantitative comparison of different methods on the Breast Biopsy dataset.
This page is adapted from PSPNet.