Some Fuzzy Tools for Evaluation of Computer Vision Algorithms

Andrey Osipov
Copyright: © 2018 | Pages: 14
DOI: 10.4018/IJCVIP.2018010101

Abstract

This article considers some issues related to the performance evaluation of computer vision algorithms within the direct empirical supervised evaluation framework developed at SRISA RAS. This approach partly relies on elements of fuzzy set theory, in particular fuzzy similarity measures and fuzzy reference ground truth images. Some known measures of segmentation quality are considered, and extensions of them that constitute fuzzy similarity measures are offered. As an example, the author applies fuzzy ground truth images and fuzzy similarity measures, including some newly introduced ones, to the evaluation of face recognition algorithms.
Article Preview

Introduction

Over more than half a century of computer vision research, thousands of algorithms have been proposed. Many of them have multiple software implementations (e.g., the famous Canny edge detector). As a result, the developer of a computer vision system faces the complicated task of choosing the algorithms most appropriate for a specific purpose. For many reasons (see, e.g., Wirth et al. (2006)), there is no unique method for evaluating image processing and analysis algorithms for practical purposes. To date, several attempts to classify the existing evaluation methods have been made. In particular, the following classification of evaluation methods for image segmentation was offered in Zhang et al. (2008):

  1. Subjective evaluation;
  2. Objective evaluation:
     a. System level evaluation;
     b. Direct evaluation:
        i. Analytical methods;
        ii. Empirical methods.

In turn, the empirical methods can be divided into supervised and unsupervised ones. In principle, the same classification applies to evaluation methods for other classes of computer vision algorithms (e.g., edge detectors).

Supervised methods are also known as empirical discrepancy methods. The latter term is probably more apt, since such methods compare a processed image (the algorithm's output) against a reference image, often referred to as ground truth, using quantitative evaluation criteria called similarity measures. Ground truth images are often created manually and contain the features that are ideal from the evaluator's viewpoint. For instance, when evaluating edge detectors, every test image has a matching ground truth image containing the ideal (user-defined) edges.
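
To make this concrete, here is a minimal Python sketch of such a discrepancy computation for binary images. It uses the well-known Dice coefficient as the similarity measure; the code and function names are our illustration only, not taken from PICASSO or from the measures discussed in the article.

    import numpy as np

    def dice_similarity(output: np.ndarray, ground_truth: np.ndarray) -> float:
        # Dice coefficient between a binary algorithm output and a binary
        # ground truth image (1 = feature pixel, 0 = background).
        out = output.astype(bool)
        gt = ground_truth.astype(bool)
        intersection = np.logical_and(out, gt).sum()
        total = out.sum() + gt.sum()
        if total == 0:
            return 1.0  # both images empty: treat as a perfect match
        return 2.0 * intersection / total

    # Example: a detector's edge map scored against a hand-drawn reference.
    detected = np.array([[0, 1, 1],
                         [0, 1, 0],
                         [0, 1, 0]])
    reference = np.array([[0, 1, 0],
                          [0, 1, 0],
                          [0, 1, 0]])
    print("Dice similarity:", round(dice_similarity(detected, reference), 3))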

For the purpose of empirical supervised evaluation of image processing and analysis algorithms, we have developed a software system named PICASSO. Originally it was designed to compare edge detection algorithms on a set of artificial 2D images. Its current version evaluates a wider range of algorithms, including image segmentation algorithms, image restoration methods, and texture analysis algorithms (see Gribkov et al. (2005)). The testing technique has also been improved; in particular, it now includes some elements of fuzzy logic. This inclusion is justified by the growing number of image processing and analysis methods that rely on fuzzy set theory (see Bezdek et al. (1999) and Tizhoosh & Haussbecker (2000)). For example, many PCA-based face recognition methods (see Yang et al. (2010) and references therein) use the fuzzy k-nearest neighbor algorithm proposed in Keller et al. (1985) to build up scatter matrices. Likewise, in the processing of remote sensing images, the insufficient resolution of the sensor often makes it difficult to assign a pixel to one pure class (e.g., “forest”, “water”, or “urban land”). This uncertainty has led to the idea of applying elements of fuzzy set theory to such tasks (a thorough discussion is contained in Lu & Weng (2007)). Obviously, a comparative evaluation technique must be able to handle these “fuzzy” methods and to compare “fuzzy” and “non-fuzzy” algorithms simultaneously.
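
As a hedged illustration of how a crisp similarity measure extends to the fuzzy setting (this is a standard fuzzy-set construction, not necessarily one of the measures introduced in the article), the Jaccard index can be applied to fuzzy membership maps by replacing set intersection and union with the pointwise minimum and maximum:

    import numpy as np

    def fuzzy_jaccard(output: np.ndarray, ground_truth: np.ndarray) -> float:
        # Fuzzy Jaccard similarity between two membership maps with values
        # in [0, 1]: |A ∩ B| / |A ∪ B|, where intersection is the pointwise
        # minimum, union is the pointwise maximum, and cardinality is the
        # sum of memberships.
        intersection = np.minimum(output, ground_truth).sum()
        union = np.maximum(output, ground_truth).sum()
        if union == 0:
            return 1.0  # both maps are empty everywhere: perfect match
        return intersection / union

    # A fuzzy ground truth assigns graded memberships instead of hard labels,
    # e.g. for remote-sensing pixels that belong to no single pure class.
    fuzzy_output = np.array([[0.9, 0.2],
                             [0.7, 0.1]])
    fuzzy_gt = np.array([[1.0, 0.0],
                         [0.5, 0.0]])
    print("Fuzzy Jaccard:", round(fuzzy_jaccard(fuzzy_output, fuzzy_gt), 3))

With crisp {0, 1} inputs the formula reduces to the ordinary Jaccard index, which is what allows fuzzy and non-fuzzy algorithms to be scored on a common scale.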
