
Abstract

The current gold standard for interpreting patient tissue samples is the visual inspection of whole-slide histopathology images (WSIs) by pathologists, who generate a pathology report describing the main findings relevant for diagnosis and treatment planning. Searching repositories for similar cases to support differential diagnosis is often not done, owing to a lack of efficient strategies for medical case-based retrieval. This paper describes a patch-based multimodal retrieval strategy that retrieves similar pathology cases from a large data set by fusing visual and text information. By fine-tuning a deep convolutional neural network, an automatic representation is obtained for the visual content of weakly annotated WSIs (using only a global cancer score and no manual annotations). The pathology text report is embedded into a category vector of pathology terms, also in a non-supervised approach. A publicly available data set of 267 prostate adenocarcinoma cases with their WSIs and corresponding pathology reports was used to train and evaluate each modality of the retrieval method. A MAP (Mean Average Precision) of 0.54 was obtained with the multimodal method on a previously unseen test set. The proposed retrieval system can help in the differential diagnosis of tissue samples and in the training of pathologists, exploiting the large amount of pathology data already existing in digital hospital repositories.
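
The multimodal retrieval and the MAP evaluation can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes pre-computed visual embeddings (e.g., from a fine-tuned CNN) and text category vectors, fuses their cosine similarities with a hypothetical weight `alpha`, and computes average precision for one query; the toy data and all names are illustrative only.

```python
# Illustrative sketch (not the paper's code): late fusion of visual and text
# similarities for case retrieval, scored with average precision.
# The fusion weight `alpha`, feature dimensions, and toy data are assumptions.
import numpy as np

def cosine_sim(query, database):
    """Cosine similarity between one query vector and each database row."""
    q = query / (np.linalg.norm(query) + 1e-12)
    d = database / (np.linalg.norm(database, axis=1, keepdims=True) + 1e-12)
    return d @ q

def multimodal_scores(q_vis, q_txt, db_vis, db_txt, alpha=0.5):
    """Weighted late fusion of visual and text similarities."""
    return alpha * cosine_sim(q_vis, db_vis) + (1 - alpha) * cosine_sim(q_txt, db_txt)

def average_precision(relevant, ranking):
    """AP for one query: `relevant` is a set of case ids, `ranking` is ordered ids."""
    hits, precisions = 0, []
    for rank, case_id in enumerate(ranking, start=1):
        if case_id in relevant:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

# Toy example: 5 database cases with visual embeddings (dim 4) and text category vectors (dim 3).
rng = np.random.default_rng(0)
db_vis, db_txt = rng.normal(size=(5, 4)), rng.normal(size=(5, 3))
q_vis, q_txt = rng.normal(size=4), rng.normal(size=3)

scores = multimodal_scores(q_vis, q_txt, db_vis, db_txt, alpha=0.6)
ranking = np.argsort(-scores)                     # best-matching cases first
ap = average_precision({0, 3}, ranking.tolist())  # pretend cases 0 and 3 are relevant
print("ranking:", ranking, "AP:", round(ap, 3))   # MAP = mean of AP over all queries
```

In this sketch the reported MAP would be obtained by averaging the per-query AP over every query case in the test set; the choice of fusion weight is a free parameter, not a value given in the abstract.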
