Introduction
When a picture is presented to different people, each of them describes its contents differently, yet the underlying meaning remains the same. The same cannot be said of a computer, since the way a system reads an image differs fundamentally from the way a human does: a computer lacks the human intuition and intellect to identify objects and describe them freely in a given language. The recent development of deep neural networks, however, allows image captioning to be performed by computers with more ease than ever before. When image captioning is mentioned, it is important to understand its various perceived applications:
- Self-driving cars: Captioning the environment of the car while it drives itself can improve the efficiency of the self-driving system.
- Aids for the visually impaired: One popular application of deep learning is assisting the visually impaired, for example by increasing their safety while crossing roads. The environment around them can be captured, converted into text, and further converted into audio.
- CCTV cameras: Since CCTV cameras are so widely deployed, any improvement to them has a large impact. Enabling them to generate relevant captions would help report malicious activity more accurately, thus curbing it.
- Image search: An image captioning system could help bring Google image search on a par with Google search, by generating a caption for each image and performing the search on the resulting caption.
The computer tries to comprehend images by extracting features from them. Two families of methods are used to obtain these features: (1) traditional machine learning based techniques and (2) deep machine learning based techniques. Traditional techniques rely on manually engineered features, which is time consuming; they are also task specific, which makes them unsuitable for extracting features from larger and more diverse datasets. Since deep learning can handle large amounts of data comprising diverse sets of images and videos, it is the preferred technique for feature extraction. It has also proved that it can handle the various complexities and challenges posed by image captioning reasonably well.
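As a toy illustration of the traditional, manually engineered approach, the sketch below computes a colour histogram as an image feature vector. The synthetic image, bin count, and normalisation are illustrative assumptions and not part of any specific captioning pipeline described in this paper; deep techniques would instead learn such features from data.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Hand-crafted feature: a per-channel colour histogram.

    image: H x W x 3 uint8 array. Returns a flat, L1-normalised
    feature vector of length 3 * bins.
    """
    features = []
    for channel in range(3):
        hist, _ = np.histogram(image[:, :, channel],
                               bins=bins, range=(0, 256))
        features.append(hist)
    feat = np.concatenate(features).astype(float)
    return feat / feat.sum()  # normalise so images of any size compare

# A synthetic 4x4 "image" stands in for real data.
img = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3),
                                        dtype=np.uint8)
vec = color_histogram(img)
print(vec.shape)  # (24,)
```

Because the feature is fixed by hand, it captures only colour distribution; this is exactly the task-specificity that makes manual features brittle on diverse datasets.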
The existing image captioning work discussed further in this paper falls under the categories of (1) type of learning, (2) architecture, and (3) feature mapping. Although there are many variations of machine learning techniques, the commonly implemented ones are comprehensively summarised.
Figure 1. Variation of ML techniques
Literature Review
This paper uses a novel dataset of human-annotated images. The dataset is used in a system that links a score to each image, which is then used to generate a sentence describing the image's contents concisely. Among the various methods of linking images and sentences, the paper cites Barnard et al., who note that such methods typically have two applications: illustration, where one finds pictures suggested by text (this text might also be part of a collection); and annotation, where one finds preceding annotations for the images (perhaps to allow keyword search to find more images) (Barnard, Duygulu, & Forsyth, 2001).
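The idea of linking a score to image–sentence pairs can be sketched with a toy retrieval example. The feature vectors and the cosine-similarity scoring below are illustrative assumptions; they do not reproduce the actual model of Barnard et al., only the general pattern of scoring pairs and picking the best-scoring sentence (the "annotation" direction).

```python
import numpy as np

def cosine_score(image_feat, sentence_feat):
    """Score an image-sentence pair by cosine similarity of their
    feature vectors (a stand-in for a learned compatibility score)."""
    num = float(np.dot(image_feat, sentence_feat))
    den = float(np.linalg.norm(image_feat) * np.linalg.norm(sentence_feat))
    return num / den

# Toy features: one image vector and two candidate sentence vectors,
# assumed to live in a shared embedding space.
image = np.array([1.0, 0.0, 1.0])
sentences = {
    "a dog on grass":  np.array([0.9, 0.1, 0.8]),  # close to the image
    "a city at night": np.array([0.0, 1.0, 0.1]),  # far from the image
}

# "Annotation": pick the sentence whose score against the image is highest.
best = max(sentences, key=lambda s: cosine_score(image, sentences[s]))
print(best)  # a dog on grass
```

Running the same scoring in the other direction, fixing a sentence and ranking images, gives the "illustration" application mentioned above.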