Medical image retrieval can assist physicians in finding information that supports their diagnosis and fulfils their information needs. Systems that allow searching for medical images need to provide tools for quick and easy navigation and query refinement, as the time available for information search is often short. Relevance feedback is a powerful tool in information retrieval. This study evaluates relevance feedback techniques with regard to the content they use. A novel relevance feedback technique is proposed that uses both the text and the visual information of the results. The two information modalities from the image examples are fused either at the feature level using the Rocchio algorithm or at the result-list fusion step using a common late fusion rule. Results on the ImageCLEF 2012 benchmark database for medical image retrieval show the potential of relevance feedback techniques in medical image retrieval. Mean average precision (mAP) is used as the evaluation metric, and the proposed method outperforms commonly used methods. The baseline without feedback reached a mAP of 16%, whereas relevance feedback with 20 images reached up to 26.35% after three steps, and with 100 images up to 34.87% after four steps. Most of the improvement occurs in the first two feedback steps, after which results flatten out. This might be due to using only positive feedback, as negative feedback often continues to improve results over additional steps. The effect of relevance feedback on automatically spell-corrected and translated queries is investigated as well. Queries without spelling mistakes performed better than spell-corrected ones, but spelling correction more than doubled performance compared with non-corrected retrieval. Multimodal relevance feedback has thus been shown to help visual medical information retrieval.
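The two fusion strategies named above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact configuration: the weight values (alpha, beta, gamma) are common textbook defaults, and CombSUM with min-max normalisation is assumed as the "common late fusion rule", since the specific rule is not stated here.

```python
import numpy as np

def rocchio(query, relevant, non_relevant=None,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Feature-level fusion via Rocchio: move the query vector toward
    the centroid of relevant feedback examples and, optionally, away
    from the centroid of non-relevant ones. With positive-only
    feedback (as in the study above), non_relevant stays None."""
    updated = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        updated += beta * np.mean(np.asarray(relevant, dtype=float), axis=0)
    if non_relevant is not None and len(non_relevant):
        updated -= gamma * np.mean(np.asarray(non_relevant, dtype=float), axis=0)
    return updated

def combsum(text_scores, visual_scores):
    """Late fusion of two ranked result lists: min-max normalise each
    modality's scores per document, then sum them (CombSUM rule,
    assumed here). Missing documents contribute a score of 0."""
    def norm(scores):
        vals = np.array(list(scores.values()), dtype=float)
        lo, hi = vals.min(), vals.max()
        span = (hi - lo) or 1.0
        return {doc: (s - lo) / span for doc, s in scores.items()}
    t, v = norm(text_scores), norm(visual_scores)
    return {doc: t.get(doc, 0.0) + v.get(doc, 0.0) for doc in set(t) | set(v)}
```

In the feature-level variant, text and visual feature vectors of the feedback images would each be updated with `rocchio` before re-querying; in the late-fusion variant, each modality retrieves independently and `combsum` merges the two score lists.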
Next steps include integrating semantics into relevance feedback techniques to benefit from the structured knowledge of ontologies, and further experimenting with the fusion of text and visual information.