Modeling Confidence for Assistant Systems

Roland Kaschek
DOI: 10.4018/978-1-59140-878-9.ch004

Abstract

An intelligent assistant system is supposed to aid a master relatively autonomously in problem solving. The approach favored in this chapter involves master-assistant communication through which the assistant is exposed to a goal the master wants to achieve. The assistant, if possible, then creates problem-solution procedures whose implementation is supposed to result in the goal being achieved. The assistant chooses an implementation that can be expected to fit its master's dispositions well. An assistant thus needs to know parts of its master's cognitive structure and to be capable of reasoning about it. The chapter proposes using models as composite verdictive entities by means of which a master may refer to a domain of individuals. The concept of judgment is taken as the simple verdictive entity out of which models are composed. Models are used to represent the cognitive states of masters. In particular, model extensions are considered, that is, models that are closed with respect to predicate negation of judgments, conjunction of judgments, and conditional judgments. The concept of confidence is formalized and applied to model extensions to derive quantitative assertions about judgments. The chapter briefly discusses how the proposed theory of confidence in judgments can be used in intelligent assistant systems.
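The abstract's notions of judgments, model extensions, and confidence can be illustrated with a minimal sketch. This is not the chapter's formalization: the representation of judgments as tagged tuples, the one-step closure, and the combination rules for confidence (complement for negation, product for conjunction, a material-conditional reading for conditionals) are all illustrative assumptions.

```python
from itertools import combinations

# Hypothetical encoding: atomic judgments are strings; compound
# judgments are tagged tuples built by the constructors below.
def neg(j):
    return ("not", j)

def conj(a, b):
    return ("and", a, b)

def cond(a, b):
    return ("if", a, b)

def extend(model):
    """One closure step toward a model extension: add the predicate
    negation of each judgment and the conjunctions and conditionals
    of each pair of judgments in `model`."""
    ext = set(model)
    for j in model:
        ext.add(neg(j))
    for a, b in combinations(model, 2):
        ext.add(conj(a, b))
        ext.add(cond(a, b))
    return ext

def confidence(j, base):
    """Derive a confidence value in [0, 1] for a judgment from the
    confidences `base` assigned to atomic judgments. The combination
    rules here are placeholder assumptions, not the chapter's theory."""
    if isinstance(j, tuple) and j[0] == "not":
        return 1.0 - confidence(j[1], base)
    if isinstance(j, tuple) and j[0] == "and":
        return confidence(j[1], base) * confidence(j[2], base)
    if isinstance(j, tuple) and j[0] == "if":
        # material-conditional style: max(1 - c(a), c(b))
        return max(1.0 - confidence(j[1], base), confidence(j[2], base))
    return base[j]
```

For example, with base confidences `{"p": 0.9, "q": 0.6}`, the extension of `{"p", "q"}` contains the negations, the conjunction, and the conditional of the two judgments, and `confidence` assigns each of these a derived value.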
