Semantic instance segmentation, that is, the delineation of individual object instances together with their semantic categories, has seen drastic improvements through the application of deep neural networks. This success has been demonstrated on large annotated datasets of natural images. In biomedical research, when studying specialized sample types by means of microscopy, such large annotated datasets cannot feasibly be generated for every new type of sample, let alone for every microscopy modality, including upcoming techniques. Hence, to date, there are no sufficiently accurate methods for performing the desired image analyses without extensive manual input. This holds in particular for microscopy data of cells in heterogeneous tissue, where the cost of accurately outlining cell boundaries, be it as part of a manual analysis or as part of generating training data for deep learning methods, restricts the feasibility of high-content studies.
In this project, we aim to overcome this restriction by leveraging “sparse” annotations for training deep neural networks for pixel-accurate semantic instance segmentation. We will develop a model that learns pixel-accurate instance segmentation purely from center point annotations, which remains an unsolved problem for clusters of densely packed objects such as cells in tissue. Beyond center point annotations, we will investigate alternative forms of sparse annotation, such as image-level labels, in terms of their potential to be generated by crowd workers.