Increasing the Precision of Image Captioning


Bhargavi Sundar, Ashwin Karayil Ashokan, Nikhil Lingam
Copyright: © 2021 | Pages: 16
DOI: 10.4018/IJCVIP.2021070104

Abstract

In our everyday life, we come across various media that facilitate communication. Photography is one such medium used for visual communication. Although it is easy for human beings to look at a picture and describe it, it is often hard for a computer to generate a caption automatically when a photograph is fed to it. Recent developments in deep learning and neural networks have made this problem easier to work on, especially when relevant datasets are available. This paper attempts to comprehensively summarize, and present a unique perspective of, the prevalent systems developed to address the problem of image captioning.

Introduction

When a picture is presented to different people, each describes its contents very differently, yet the underlying meaning remains the same. The same cannot be said of a computer, since the method a system uses to read an image differs from that of a human. A computer therefore lacks the human intuition or intellect needed to identify objects and express them freely in a given language. The recent development of deep neural networks gives more freedom and allows computers to perform image captioning with more ease than ever before. When image captioning is mentioned, it is important to understand its various perceived applications:

  • Self-driving cars: Captioning the environment of the car while it drives on its own can increase the efficiency of the self-driving system.

  • Aids for the visually impaired: One very popular application of deep learning is an aid for the visually impaired that increases their safety while crossing roads. The environment around them can be described by first capturing it, converting it into text, and then into audio.

  • CCTV cameras: Since CCTV cameras are so widely deployed, any improvement to them has a large impact. Enabling them to generate relevant captions would help report malicious activity more accurately, thus curbing it.

  • Image search: An image captioning system could bring Google image search on a par with Google text search by generating a caption for each image and performing the search over the resulting captions.

The computer tries to comprehend images by extracting features from them. Two families of methods are used to obtain these features: (1) traditional machine learning based techniques and (2) deep machine learning based techniques. Traditional machine learning based techniques use hand-crafted feature extraction, which can be time consuming. They are also task specific, which makes them unsuitable for extracting features from larger and more diverse datasets. Since deep learning can handle large amounts of data comprising diverse sets of images and videos, it is the preferred technique for feature extraction. It has also proved that it can handle the various complexities and challenges posed by image captioning reasonably well.
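To make the deep-learning pipeline above concrete, the sketch below is a toy encoder-decoder in NumPy. It is an illustrative assumption, not code from any surveyed system: a random projection stands in for a trained CNN encoder, and a single linear layer with greedy argmax stands in for a trained RNN/Transformer decoder.

```python
import numpy as np

# Toy sketch of an encoder-decoder captioner (illustrative only; a real
# system would use a trained CNN encoder and an RNN/Transformer decoder).
rng = np.random.default_rng(0)

vocab = ["<start>", "a", "dog", "runs", "<end>"]
V, D = len(vocab), 8  # vocabulary size, feature dimension

def encode(image):
    """Stand-in for a CNN encoder: flatten the image and project it
    with a (random, untrained) weight matrix to a D-dim feature vector."""
    W = rng.standard_normal((image.size, D))
    return image.flatten() @ W

def greedy_decode(feat, W_out, max_len=5):
    """Toy greedy decoder: at each step, score the vocabulary with a
    linear projection of the current state and pick the argmax word."""
    caption, state = [], feat
    for _ in range(max_len):
        logits = state @ W_out                     # (V,) word scores
        idx = int(np.argmax(logits))
        if vocab[idx] == "<end>":
            break
        caption.append(vocab[idx])
        state = state + 0.1 * W_out[:, idx]        # crude state update
    return caption

image = rng.standard_normal((4, 4))                # fake 4x4 "image"
W_out = rng.standard_normal((D, V))                # untrained output layer
print(greedy_decode(encode(image), W_out))
```

With untrained random weights the output words are meaningless; the point is only the data flow — image to feature vector to a word sequence — that the deep-learning approaches surveyed here all share.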

The image captioning articles discussed in this paper fall under the categories of (1) type of learning, (2) architecture, and (3) feature mapping. Although there are many variations and types of machine learning techniques, the commonly implemented ones are comprehensively summarised.

Figure 1. Variation of ML techniques

Literature Review

  • [Every Picture Tells a Story: Generating Sentences from Images] (Farhadi et al., 2010)

This paper uses a novel dataset of human-annotated images. The system links a score to each image, which is then used to generate a sentence that concisely describes the image's contents. Among various methods of linking images and sentences, the paper cites Barnard et al., who note that such methods typically have two applications: illustration, where one finds pictures suggested by text (this text might itself be part of a collection); and annotation, where one finds annotations for images (perhaps to allow keyword search to find more images) (Barnard, Duygulu, & Forsyth, 2001).
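The score-linking idea can be sketched as follows. This is a hypothetical toy approximation, not the authors' code: Farhadi et al. actually learn potentials over a shared ⟨object, action, scene⟩ meaning space, whereas here the score is simply the number of agreeing slots between the triplet predicted from the image and the triplet of each candidate sentence.

```python
# Toy approximation of image-sentence scoring via a shared
# <object, action, scene> meaning space (illustrative only).
def triplet_score(image_triplet, sentence_triplet):
    """Score a pair by counting matching slots between the two triplets."""
    return sum(a == b for a, b in zip(image_triplet, sentence_triplet))

img = ("dog", "run", "park")  # hypothetical triplet predicted from an image
candidates = {
    "A dog runs in the park.": ("dog", "run", "park"),
    "A cat sleeps indoors.":   ("cat", "sleep", "room"),
}

# Pick the candidate sentence whose triplet best matches the image's.
best = max(candidates, key=lambda s: triplet_score(img, candidates[s]))
print(best)  # -> A dog runs in the park.
```

Ranking candidate sentences by a learned version of this score is what lets the system output a concise description without generating language from scratch.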
