1. Introduction
The explosion of medical information on the Internet over the last decade has made information seeking for both textual and visual objects a very active research topic. In the medical domain in particular, the vast volumes of visual information produced every day in hospitals, together with the spread of digital Picture Archiving and Communication Systems (PACS), make advanced ways of searching imperative, i.e., moving beyond conventional text-based searching towards combining both text and visual features in search queries. Indeed, biomedical information comes in several forms: as text in scientific articles and social networks, and as images or illustrations in databases and Electronic Health Records (EHR). Although many methods and tools have been developed, we are still far from an effective solution, especially in the case of image retrieval from large and heterogeneous databases. One way to improve current retrieval facilities is data fusion. Data fusion is generally defined as the use of techniques that combine data from multiple sources in order to achieve inferences that are more efficient and accurate than those obtained from a single source.
It is evident from the literature that there is considerable room for improvement in image retrieval. For example, annotating images with semantic information is an active research topic. Furthermore, given that the text accompanying an image is usually a short paragraph, document and query expansion techniques may be needed to overcome language ambiguity, such as polysemy and synonymy.
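The kind of query expansion mentioned above can be sketched as follows. This is a minimal, illustrative example with a hand-built synonym map; a real medical retrieval system would typically derive synonyms from an ontology such as MeSH or UMLS rather than a hard-coded dictionary.

```python
# Minimal sketch of synonym-based query expansion (illustrative only).
# The synonym map is hand-built here; a real system would derive it
# from a medical ontology such as MeSH or UMLS.
SYNONYMS = {
    "tumor": ["neoplasm", "mass"],
    "xray": ["radiograph", "roentgenogram"],
}

def expand_query(query: str) -> list[str]:
    """Return the query terms plus any known synonyms for each term."""
    expanded = []
    for term in query.lower().split():
        expanded.append(term)
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand_query("chest xray tumor"))
# ['chest', 'xray', 'radiograph', 'roentgenogram', 'tumor', 'neoplasm', 'mass']
```

Expanding the query in this way increases recall by matching captions that use a synonym of the user's term, at the cost of some precision, which is why expansion is usually combined with term weighting in the underlying search engine.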
This article is an overview of the experience we have gained through our participation in ImageCLEF over the last two years. In particular, we present ways to improve retrieval performance by exploiting textual as well as visual information. This information is extracted from the image itself and from textual descriptions, such as the caption or the references to the image in the body of an article, as well as from ontologies. To achieve our goal, we combine techniques from information retrieval, content-based image retrieval (CBIR) and natural language processing (NLP). Our objective is to aid diagnosis by finding similar cases for a patient using several resources from the literature and from EHR databases. We conducted experiments on the ImageCLEF collections of 2015 and 2016 to evaluate our approach against the ground truth.
To demonstrate our techniques, we have developed our own search engine, a hybrid system that uses both visual and textual resources. Our framework is built upon the Lucene search engine and provides several ways to combine textual and visual search results. The system is capable of: (i) starting a visual search (query by example) and applying relevance feedback with textual features that accompany an image; and, (ii) merging the results of independent text and image searches. The retrieved results can be viewed as thumbnails in a grid view sorted by relevance (Figure 1). Such a system may be used for computer-aided diagnosis, medical education and research purposes.
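The second capability, merging the results of independent text and image searches, is a form of late fusion. The sketch below illustrates one common strategy (weighted CombSUM over min-max-normalized scores); the score values and image identifiers are invented for illustration, and the actual combination method used by our system may differ. In practice the text scores would come from Lucene and the visual scores from the CBIR component.

```python
# Minimal sketch of late fusion for merging independent text and image
# search results: weighted CombSUM over min-max-normalized scores.
# Scores and image IDs below are illustrative only.
def normalize(scores: dict[str, float]) -> dict[str, float]:
    """Min-max normalize a result list's scores into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
    return {doc: (s - lo) / span for doc, s in scores.items()}

def fuse(text_scores, image_scores, w_text=0.5):
    """Combine the two modalities; missing documents contribute a score of 0."""
    t, v = normalize(text_scores), normalize(image_scores)
    docs = set(t) | set(v)
    fused = {d: w_text * t.get(d, 0.0) + (1 - w_text) * v.get(d, 0.0)
             for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

text = {"img1": 12.3, "img2": 8.1, "img3": 3.4}    # e.g. Lucene scores
image = {"img2": 0.91, "img3": 0.88, "img4": 0.40}  # e.g. visual similarity
print(fuse(text, image))
# img2 ranks first because it scores well in both modalities
```

Normalization is essential here: raw Lucene scores and visual similarity scores live on different scales, so combining them without rescaling would let one modality dominate.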
In Section 2, we give an illustrative example of how ambiguity in image interpretation can lead to harmful consequences. To address this problem, it has become important to create a collaborative space for doctors; in Section 3 we explain its advantages and review recent work on medical social networks. In Sections 4, 5 and 6 we describe our method for searching multimodal information (text + image), followed by a section presenting our experimental results; finally, conclusions are drawn together with proposals for further work.
Figure 1. Our medical image retrieval system on a text + image query
2. The Medical Image Analysis Problem
A radiologist is an expert in the domain of radiology and can interpret any medical image. However, radiology residents who are still in the learning phase are sometimes confronted with interpretation problems. For this reason, it has been proposed at Hôpital Charles-Nicolle to create an environment for sharing experience among doctors in order to obtain correct analyses. Figure 2 shows an example of subjective observation by two radiologists, revealing a significant difference in the treatment to be followed.
Figure 2. Example of subjective analysis of a chest X-ray