Abstract
This paper describes the ImageCLEF 2019 Concept Detection Task. This is the 3rd edition of the medical caption task, after it was first proposed in ImageCLEF 2017. Concept detection from medical images remains a challenging task. In 2019, the format changed to a single subtask, and it is part of the medical tasks, alongside the tuberculosis and visual question answering tasks. To reduce noisy labels and limit variety, the data set focuses solely on radiology images rather than biomedical figures, extracted from the biomedical open access literature (PubMed Central). The development data consist of 56,629 training and 14,157 validation images, with corresponding Unified Medical Language System (UMLS®) concepts extracted from the image captions. Participation in 2019 was higher, both in the number of participating teams and in the number of submitted runs. Several approaches were used by the teams, mostly deep learning techniques. Long short-term memory (LSTM) recurrent neural networks (RNNs), adversarial auto-encoders, convolutional neural network (CNN) image encoders, and transfer learning-based multi-label classification models were the most frequently used approaches. Evaluation uses F1-scores computed per image and averaged across all 10,000 test images.
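The evaluation measure can be illustrated with a minimal sketch: for each test image, an F1-score is computed between the set of ground-truth UMLS concepts and the set of predicted concepts, and these per-image scores are then averaged over all test images. The function names, the handling of empty concept sets, and the toy concept identifiers below are assumptions for illustration, not the official evaluation script.

def image_f1(gold: set, predicted: set) -> float:
    """F1-score between the gold and predicted concept sets for one image."""
    if not gold and not predicted:
        return 1.0  # assumption: both empty counts as perfect agreement
    if not gold or not predicted:
        return 0.0  # one side empty: no overlap possible
    tp = len(gold & predicted)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def mean_f1(gold_by_image: dict, pred_by_image: dict) -> float:
    """Average the per-image F1 over all test images; missing predictions count as empty."""
    scores = [image_f1(gold, pred_by_image.get(image_id, set()))
              for image_id, gold in gold_by_image.items()]
    return sum(scores) / len(scores)

# Example usage with hypothetical image IDs and UMLS concept IDs (CUIs):
gold = {"img1": {"C0040405", "C0817096"}, "img2": {"C1306645"}}
pred = {"img1": {"C0040405"}, "img2": {"C1306645", "C0040405"}}
print(round(mean_f1(gold, pred), 4))

Averaging per image (rather than pooling all concepts globally) gives every test image equal weight in the final score, regardless of how many concepts it carries.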