Abstract

The locality and spatial field of view of image operators have played a major role in image analysis, from hand-crafted to deep learning methods. In Convolutional Neural Networks (CNNs), the field of view of individual kernels is traditionally set to very small values (e.g. 3 × 3 pixels) and grown throughout the network by cascading layers. Using large kernels instead makes it possible to automatically learn or adapt the best spatial support of the kernels, but due to the computational requirements of standard CNN architectures, this approach has received little attention in the literature. Yet, if a task requires large receptive fields to capture wider contextual information, they could be learned from the data. Obtaining an optimal receptive field with few layers is particularly relevant in applications with a limited amount of annotated training data, e.g. in medical imaging. We show that CNNs (2D U-Nets) with large kernels outperform similar models with standard small kernels on the task of nuclei segmentation in histopathology images. We observe that the large kernels mostly capture low-frequency information, which motivates both the need for large kernels and their efficient compression via the Discrete Cosine Transform (DCT). Following this idea, we develop a U-Net model with wide, DCT-compressed kernels that achieves performance and trends similar to those of the standard U-Net, at reduced complexity. Visualizations of the kernels in the spatial and frequency domains, as well as of the effective receptive fields, provide insights into the models’ behaviors and the learned features.
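The DCT compression idea mentioned above can be illustrated with a minimal sketch: a large spatial kernel is parameterized by a small block of low-frequency DCT coefficients, which is zero-padded and inverse-transformed to obtain the full kernel. This is an assumption-laden illustration of the general technique, not the authors' implementation; the function name `dct_kernel` and the 4 × 4 / 15 × 15 sizes are hypothetical choices for the example.

```python
import numpy as np
from scipy.fft import idctn  # inverse 2D DCT (type II, orthonormal)

def dct_kernel(coeffs, size):
    """Expand a small block of low-frequency DCT coefficients into a
    full (size x size) spatial convolution kernel.

    coeffs: (n, n) array with n <= size -- the learnable parameters.
    """
    n = coeffs.shape[0]
    padded = np.zeros((size, size))
    # Low frequencies occupy the top-left corner of the DCT spectrum;
    # all higher-frequency coefficients are fixed to zero.
    padded[:n, :n] = coeffs
    return idctn(padded, norm="ortho")

# Example: a 15 x 15 kernel parameterized by only 4 x 4 = 16 coefficients,
# instead of 225 free weights.
k = dct_kernel(np.random.randn(4, 4), 15)
print(k.shape)  # (15, 15)
```

Because only the low-frequency block is learned, the resulting kernel is smooth by construction, matching the observation that large learned kernels mostly capture low-frequency content.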
