Clearing the way in capsule endoscopy with deep learning and computer vision
- NOORDA, REINIER ALEXANDER
- Vicente Pons Beltrán, Supervisor
- Valeriana Naranjo Ornedo, Supervisor
Defending university: Universitat Politècnica de València
Date of defence: 30 May 2022
- M. Carmen Benítez Ortuzar, President
- Vicente Traver Salcedo, Secretary
- Beatriz Marcotegui, Rapporteur
Type: Thesis
Abstract
Capsule endoscopy (CE) is a widely used, minimally invasive alternative to traditional endoscopy that allows visualisation of the entire small intestine, which more invasive procedures cannot easily achieve. However, those traditional methods are still commonly the first choice for gastroenterologists, as important challenges remain in the field of CE. Among others, these include the time-consuming video diagnosis following the procedure, the fact that the capsule cannot be actively controlled, the lack of consensus on good patient preparation, and the high cost. In this doctoral thesis, we aim to extract more information from capsule endoscopy procedures to help alleviate these issues, from a perspective that appears to be under-represented in current research.

First, and as the main objective of this thesis, we aim to develop an objective, automatic cleanliness evaluation method for CE procedures to aid medical research into patient preparation methods. Although adequate patient preparation can help to obtain a cleaner intestine, and thus better visibility in the resulting videos, studies on the most effective preparation method are conflicting, precisely because such an objective evaluation method has been lacking. We therefore aim to provide such a method, capable of presenting results on an intuitive scale, with a relatively lightweight, novel convolutional neural network architecture at its core. We trained this model on an extensive data set of over 50,000 image patches, collected from 35 different CE procedures, and compared it with state-of-the-art classification methods. From the patch classification results, we developed a method to automatically estimate pixel-level probabilities and deduce cleanliness evaluation scores through automatically learnt thresholds. We then validated our method in a clinical setting on 30 newly collected CE videos, comparing the resulting scores to those independently assigned by human specialists. The proposed method obtained the highest classification accuracy (95.23%), with significantly lower average prediction times than the second-best method. In the clinical validation, we found acceptable agreement with two human specialists relative to the inter-human agreement, showing its validity as an objective evaluation method.

Additionally, we aim to automatically detect and localise the tunnel in each frame, in order to help determine the capsule orientation at any given time. For this purpose, we trained a CNN-based model, namely the lightweight YOLOv3 detector, on a total of 1385 frames extracted from CE procedures of 10 different patients, achieving a precision of 86.55% and a recall of 88.79% on our test set. Building on this, we additionally aim to visualise intestinal motility in a manner analogous to traditional intestinal manometry, based solely on the minimally invasive technique of CE, by aligning frames with similar orientation and using the bounding box parameters to derive adequate parameters for our tunnel segmentation method. Finally, we calculate the relative tunnel size to construct an equivalent of an intestinal manometry from visual information.

Since the conclusion of this work, our method for automatic cleanliness evaluation has been used in a still ongoing, large-scale study in which we actively participate.
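The abstract describes a relatively lightweight convolutional neural network for classifying CE image patches, but does not give the architecture itself. The following is therefore only a minimal, illustrative sketch in PyTorch of what a lightweight patch classifier could look like; the class name `PatchNet`, the 64x64 patch size and the two-class (clean vs. dirty) setup are assumptions, not the model from the thesis.

```python
# Minimal sketch of a lightweight CNN patch classifier (PyTorch).
# The actual architecture, patch size and class definitions used in the
# thesis are not specified in the abstract; everything below is illustrative.
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    """Small CNN that classifies CE image patches, e.g. clean vs. dirty."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global pooling keeps the model light
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    model = PatchNet()
    patches = torch.randn(8, 3, 64, 64)         # batch of 64x64 RGB patches (assumed size)
    logits = model(patches)
    probs = torch.softmax(logits, dim=1)         # per-patch class probabilities
    print(probs.shape)                           # -> torch.Size([8, 2])
```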
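The abstract also mentions estimating pixel-level probabilities from the patch classification results and deducing cleanliness scores through automatically learnt thresholds. The NumPy sketch below shows one plausible way to do this, by averaging overlapping patch probabilities back onto the pixel grid and thresholding the result; the averaging scheme, the fixed 0.5 threshold and the 0-10 scale are placeholders for illustration, not the learnt thresholds or the scale from the thesis.

```python
# Illustrative sketch (NumPy) of turning per-patch probabilities into a
# pixel-level cleanliness map and an overall frame score via a threshold.
import numpy as np

def pixel_probability_map(patch_probs, frame_shape, patch_size, stride):
    """Spread per-patch 'clean' probabilities onto the pixel grid by
    averaging the probabilities of all patches covering each pixel."""
    acc = np.zeros(frame_shape, dtype=np.float64)
    count = np.zeros(frame_shape, dtype=np.float64)
    idx = 0
    for y in range(0, frame_shape[0] - patch_size + 1, stride):
        for x in range(0, frame_shape[1] - patch_size + 1, stride):
            acc[y:y + patch_size, x:x + patch_size] += patch_probs[idx]
            count[y:y + patch_size, x:x + patch_size] += 1.0
            idx += 1
    return acc / np.maximum(count, 1.0)

def cleanliness_score(prob_map, threshold=0.5):
    """Fraction of pixels considered clean, mapped to a 0-10 scale here
    purely to illustrate reporting on an intuitive scale."""
    return 10.0 * float(np.mean(prob_map >= threshold))

if __name__ == "__main__":
    h = w = 256
    patch, stride = 64, 32
    n_patches = ((h - patch) // stride + 1) * ((w - patch) // stride + 1)
    patch_probs = np.random.rand(n_patches)      # stand-in for classifier output
    prob_map = pixel_probability_map(patch_probs, (h, w), patch, stride)
    print(round(cleanliness_score(prob_map), 2))
```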
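Finally, the abstract derives a manometry-like motility visualisation from the relative tunnel size over aligned frames. The sketch below assumes a detector (such as the YOLOv3 model mentioned above) has already produced per-frame bounding boxes and simply plots bounding-box area relative to frame area over time on synthetic data; the exact definition of relative tunnel size, the frame alignment and the segmentation step used in the thesis are not reproduced here.

```python
# Illustrative sketch of deriving a manometry-like motility trace from
# per-frame tunnel bounding boxes. Detection is assumed to have already
# produced (x, y, w, h) boxes; the relative-size definition below is an
# assumption made only for this example.
import numpy as np
import matplotlib.pyplot as plt

def relative_tunnel_size(boxes, frame_area):
    """Relative tunnel size per frame: bounding-box area / frame area."""
    boxes = np.asarray(boxes, dtype=np.float64)      # rows of (x, y, w, h)
    return (boxes[:, 2] * boxes[:, 3]) / frame_area

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in detections for 300 frames of a 336x336 CE video (synthetic).
    widths = 60 + 25 * np.sin(np.linspace(0, 12 * np.pi, 300)) + rng.normal(0, 3, 300)
    boxes = np.stack([np.full(300, 100), np.full(300, 100), widths, widths]).T
    trace = relative_tunnel_size(boxes, frame_area=336 * 336)

    plt.plot(trace)
    plt.xlabel("frame")
    plt.ylabel("relative tunnel size")
    plt.title("Manometry-like trace from tunnel detections (synthetic data)")
    plt.show()
```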
While much research focuses on the automatic detection of pathologies such as tumours, polyps and bleeding, we hope our work can make a significant contribution to extracting more information from CE in other, often overlooked areas as well.