Abstract

As part of their daily workload, clinicians examine patient cases in the process of formulating a diagnosis. The large multimodal patient datasets stored in hospitals could help retrieve relevant information for a differential diagnosis, but they are currently not fully exploited. The VISCERAL Retrieval Benchmark organized an evaluation of medical case-based retrieval algorithms using multimodal (text and visual) data from radiology reports. The common dataset contained patient CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) scans and RadLex term anatomy–pathology lists from the radiology reports. A content-based retrieval method for medical cases that uses both textual and visual features is presented. It defines a weighting scheme that combines the anatomical and clinical correlations of the RadLex terms with local texture features obtained from the region of interest in the query cases. The visual features are computed using a 3D Riesz wavelet texture analysis performed on a common spatial domain, so that images can be compared in analogous anatomical regions of interest across the dataset. The proposed method obtained the best mean average precision in 6 out of 10 topics and the highest number of relevant cases retrieved in the benchmark. Given its robust results across various pathologies, it could be further developed to perform medical case-based retrieval on large multimodal clinical datasets.
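The fusion of textual and visual evidence described above can be sketched as a weighted combination of two similarity scores: one over RadLex term weight vectors and one over texture feature vectors. This is an illustrative sketch only, not the authors' implementation; the cosine/distance measures, the `alpha` fusion weight, and all function and field names are assumptions for demonstration.

```python
import numpy as np

def case_similarity(query_terms, case_terms, query_tex, case_tex, alpha=0.5):
    """Illustrative fusion of textual and visual similarity for case retrieval.

    query_terms / case_terms: dicts mapping RadLex term IDs to weights
    query_tex / case_tex: 1-D texture feature vectors (e.g. wavelet energies)
    alpha: weight given to the textual score (hypothetical parameter)
    """
    # Textual score: cosine similarity over the union of RadLex terms
    vocab = sorted(set(query_terms) | set(case_terms))
    q = np.array([query_terms.get(t, 0.0) for t in vocab])
    c = np.array([case_terms.get(t, 0.0) for t in vocab])
    denom = np.linalg.norm(q) * np.linalg.norm(c)
    text_score = float(q @ c / denom) if denom else 0.0

    # Visual score: turn the distance between texture feature vectors,
    # extracted from analogous anatomical regions, into a similarity in (0, 1]
    dist = float(np.linalg.norm(np.asarray(query_tex, dtype=float)
                                - np.asarray(case_tex, dtype=float)))
    visual_score = 1.0 / (1.0 + dist)

    return alpha * text_score + (1.0 - alpha) * visual_score

def rank_cases(query, cases, alpha=0.5):
    """Rank dataset cases by fused similarity to the query case."""
    scored = [(cid, case_similarity(query["terms"], c["terms"],
                                    query["texture"], c["texture"], alpha))
              for cid, c in cases.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

A case sharing both the query's RadLex terms and a nearby texture signature ranks above one matching on neither, mirroring the idea that anatomical/clinical term correlations and local texture features jointly drive the retrieval ranking.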
