Deep learning algorithms have proven successful in applications such as speech recognition and computer vision. A key requirement for high performance is access to a large amount of labeled data. When the labeled training examples are too few, incomplete, or even non-existent, a manual labeling process is required, which is laborious, time-consuming, and may require expert knowledge. This is especially true for medical applications, where the data is protected and difficult for non-experts to annotate.
In this family of projects, we aim to use deep learning as a general-purpose tool both for assisting the human in the labeling process and for analysing the input data (images, videos, or 3D data). The model used for the analysis so far is a deep Convolutional Neural Network (CNN). The interaction between the human user and the deep learning algorithm ensures that the algorithm is provided with trustworthy, custom, human-decided, labeled training examples. In return, the algorithm iteratively assists the human in the labeling process and gives insight into the learning process, which saves time and increases the transparency and acceptance of the output results.
So far, this approach has been used in the following research areas:
– Computer aided diagnosis (CAD) for kidney stone detection and lung disease classification in 3D CT scans
– Labeling and object classification of satellite images
– Semantic segmentation of outdoor scenes and facial images
The video below showcases the process of manual and tool-assisted labeling for some of the above-mentioned projects. The images are first partially labeled. Meanwhile, a CNN is trained on the labeled examples and then provides high-confidence suggestions for additional labels. The human verifies the results, makes corrections, and provides additional labeled examples for problematic classes or areas.
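The train-suggest-verify loop described above can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation: a nearest-centroid classifier stands in for the CNN, a simple oracle function stands in for the human verifier, and the confidence threshold of 0.9 is an assumed value.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # assumption: only suggestions above this reach the user


def train_centroids(X, y, n_classes):
    """Stand-in for CNN training: one centroid per class."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])


def predict_proba(centroids, X):
    """Softmax over negative centroid distances as a confidence proxy."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    logits = -d
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


def labeling_round(X_labeled, y_labeled, X_pool, oracle, n_classes):
    """One iteration: train, suggest high-confidence labels, human verifies.

    `oracle` plays the role of the human, accepting or correcting each
    confident suggestion; low-confidence samples stay in the pool for
    manual labeling in a later round.
    """
    centroids = train_centroids(X_labeled, y_labeled, n_classes)
    proba = predict_proba(centroids, X_pool)
    confident = proba.max(axis=1) >= CONFIDENCE_THRESHOLD
    verified = oracle(X_pool[confident])  # human-verified labels
    X_new = np.concatenate([X_labeled, X_pool[confident]])
    y_new = np.concatenate([y_labeled, verified])
    return X_new, y_new, X_pool[~confident]


# Toy usage: two well-separated clusters, two seed labels per class.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
seed = np.array([0, 1, 20, 21])
pool = np.setdiff1d(np.arange(40), seed)
oracle = lambda Xq: (Xq.mean(axis=1) > 2.5).astype(int)  # hypothetical "human"
X_lab, y_lab, X_rest = labeling_round(X[seed], y[seed], X[pool], oracle, 2)
```

After one round, most of the pool is confidently labeled and verified, and only ambiguous samples remain for manual annotation; in practice the CNN would be retrained and the loop repeated.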
Interactive Deep Learning for Image Labeling and Analysis is a project funded by Nyckelfonden and VINNOVA.