Agent-Based Tutoring Systems by Cognitive and Affective Modeling

Rosa Maria Viccari (Federal University of Rio Grande do Sul, Brazil), Patricia Augustin Jaques (University of Vale do Rio dos Sinos (Unisinos), Brazil) and Regina Verdin (Federal University of Rio Grande do Sul, Brazil)
Indexed In: SCOPUS
Release Date: May 2008 | Copyright: © 2008 | Pages: 392
DOI: 10.4018/978-1-59904-768-3
ISBN13: 9781599047683 | ISBN10: 1599047683 | EISBN13: 9781599047706 | ISBN13 Softcover: 9781616926915
Description & Coverage
Description:

In recent years, many educational systems, especially intelligent tutoring systems, have been implemented according to the agent paradigm. Moreover, researchers in education believe that educational computing environments would be more pedagogically effective if they had mechanisms to recognize and show the student's emotions.

Agent-based Tutoring Systems by Cognitive and Affective Modeling intends to present a modern view of intelligent tutoring, focusing mainly on the conception of these systems according to a multi-agent approach and on the affective and cognitive modeling of the student in this kind of educational environment. Providing researchers, academicians, educators, and practitioners with a critical mass of research on the theory, practice, development, and implementation of tools for knowledge representation and agent-based architectures, this Premier Reference Source is a must-have addition to every library collection.

Coverage:

The many academic areas covered in this publication include, but are not limited to:

  • Affective modeling
  • Affective tactics
  • Agent modeling
  • Agent-based architectures
  • Animated agents
  • Cognitive Informatics
  • Conversational ecological agent
  • Human-Computer Interaction
  • Intelligent learning environments
  • Intelligent learning objects
  • Intelligent Tutoring
  • Knowledge Representation
  • Multi-agent architecture
  • Pedagogical agents
  • Pedagogical notation
  • Software Engineering
  • Solidarity assimilation groups
Reviews and Testimonials

This book presents a modern view of ITS, focusing mainly on the conception of these systems according to a multiagent approach, and on the affective and cognitive modeling of the student, also based on agent technologies.

– Rosa Maria Viccari, Federal University of Rio Grande do Sul, Brazil

This is heavily illustrated, and includes screen shots for popular software, and the references and resources are comprehensive. Advances are coming quickly, but this should prove to be helpful in planning technical or long distance pedagogy long after the hardware and software is replaced by the latest and greatest.

– Book News Inc. (February 2009)

Providing researchers, academicians, educators, and practitioners with a critical mass of research on the theory, practice, development, and implementation of tools for knowledge representation and agent-based architectures, this source is a valuable addition to teaching libraries.

– APADE (2008)
Editor Biographies
Rosa Maria Viccari received her PhD in Electrical Engineering and Computers from the University of Coimbra in 1990. Currently, she is an associate professor at the Federal University of Rio Grande do Sul. She has published 20 articles in specialized journals, 159 works in conference proceedings, 10 book chapters, and 7 books, and has supervised 24 master's theses and 10 PhD theses in Computer Science. Her research interests are Intelligent Tutoring Systems, Multi-Agent Systems, Distance Education, and Affective Computing.
Patrícia Augustin Jaques earned a Ph.D. degree in Computer Science from the Federal University of Rio Grande do Sul (Brazil) in 2004. During her PhD, she spent one year researching at the Leibniz Laboratory in France. Currently, she is Associate Professor in the Master Program in Computer Science of University of Vale do Rio dos Sinos. Her research interests are Intelligent Tutoring Systems, Multi-Agent Systems, and Affective Computing.
Regina Verdin holds a Master's in Psychology (PUCRS) and a Doctor of Science degree (UFRGS) from the Computer and Education Program. Her research areas are Cognitive Psychology and Artificial Intelligence, Scientific Methods, and Distance Education.

Preface

The main topic addressed in this book is that Intelligent Tutoring Systems (ITS) models and architectures can be designed and implemented based on the agent paradigm. More specifically, this book aims at addressing three main questions related to the agent paradigm: (i) which agent technologies and methodologies can be used to design and implement ITS; (ii) how this paradigm can be useful for creating affective and cognitive student models; and (iii) how the agent approach can be employed to model and implement distributed and open ITS architectures. Hence, this chapter introduces the matter in order to provide a general context to the reader: a background on Cognitive Artificial Intelligence, the BDI (Beliefs, Desires, and Intentions) architecture, cognitive models, and architectures for ITS. It is important to point out that the purpose of this chapter is not to provide the reader with cues on how to apply these methods or theories; application examples are presented in the chapters that compose this book. Finally, this chapter presents a brief description of each chapter in this book, relating them to the topics with which this book is concerned.

Artificial Intelligence (AI) is probably the most anthropomorphic of all Computer Science research areas, and this fact should not be considered a weak point; on the contrary, it is an advantage when handling certain complex tasks or domains. It is a common implicit assumption of the Natural and Applied Sciences (and also of most of Computer Science) that any theoretical analysis or experimentation about concrete phenomena must avoid anthropomorphic, subjective, or emotional aspects, focusing only on objective and measurable characteristics that are not subject to different interpretations by different observers of the same phenomena. On the other hand, AI, or at least the Cognitive Science branch of AI (Simon, 1981), here called Cognitive AI (or Symbolic AI), is probably the research field of Computer Science most akin to the Social Sciences. The ultimate research goal of AI is to create computer systems that reproduce intelligent behaviour (Russell & Norvig, 1995). However, the only decision process known to date that is able to decide whether some behaviour is intelligent is its comparison with some form of intelligent human behaviour, with all the subjectivity and anthropomorphic features associated with this kind of behaviour. This explains the anthropomorphic properties of AI in general and of Cognitive AI in particular.

Certainly, the main goal of AI must be pursued keeping the scientific parameters of accuracy, precision, and reproducibility at the highest possible levels. However, it is not possible to ignore that this task requires the analysis of phenomena related to human cognition and to human subjective experience or emotions. In one way or another, all subfields of AI must answer how they will try to emulate this kind of behaviour. Some fields, like Neural Networks (Haykin, 1999) and Multiagent Systems (MAS) composed of swarms of ant-like agents (Dorigo, Maniezzo, & Colorni, 1996), take an indirect approach to this subject, based on emergent properties of complex systems. Neural Networks is a traditional research line of AI that starts from the assumption that modelling the processing cells of the human brain, and simulating subsystems of these cells, will reproduce some important part of intelligent behaviour (effective and efficient real-time vision and voice recognition systems, for example). Ant-like agents and Swarm MAS are recent AI research lines that, inspired by the highly effective and apparently intelligent way in which colonies of social insects (bees and ants, hence the "ant-like" agents) manipulate the environment, assume that a MAS emulating the behaviour of complex societies composed of less complex agents is one way to keep scientific accuracy and precision.

Some drawbacks of dealing with agents have recently been identified in the research area of Agent-Oriented Software Engineering (AOSE). AOSE started with works by Wooldridge and Jennings (1999), Petrie (2000), and Jennings (2001b), and currently presents several distinct methodologies for software development (see Zambonelli & Omicini (2004) and Henderson-Sellers & Gorton (2002)). The problem was stated by DeWolf and Holvoet at the OOPSLA 2005 conference: "Agent-oriented methodologies today are mainly focused on engineering the microscopic issues, i.e. the agents, their rules, how they interact, etc., without explicitly engineering the required macroscopic behavior [...]. Engineering the macroscopic behavior is currently done in an ad-hoc manner because there is no well-defined process that guides engineers to address this issue." (DeWolf & Holvoet, 2005, p. 145). The "macroscopic behaviour" cited by these authors is the desired emergent property of the MAS.

The approach taken by Cognitive AI to handle this issue is more direct and explicit. The goal is to analyse and propose cognitive models that present: (a) viable computational interpretations, (b) epistemological and psychological foundations, and (c) precise formal specifications. Each condition of this complex and difficult goal has its justification. The cognitive models should be computational, at least from the theoretical point of view; otherwise, Cognitive AI (or AI) cannot be considered part of Computer Science. In order not to reinvent the wheel, the concepts used in these models should not be based only on naive intuition or common-sense psychology, but should be rooted in explicit epistemological and psychological foundations. The formal specification is the answer to avoid excessive anthropomorphism: the precise formal definition of any concept is independent of subjective belief, perception, or emotion about this concept, even when the concept being formalized is that of "subjective belief," "perception," or "emotion."

Cognitive AI is far from achieving its objectives. However, since the foundation of this research area around 1980 (Simon, 1981), some important results have been achieved, mainly related to BDI models for agents and Multiagent systems (Rao & Georgeff, 1991a).

One active research line of Cognitive AI is centred on the creation of Intelligent Tutoring Systems (Self, 1998) and Intelligent Learning Environments (ILE) (Fernandez-Manjon, Cigarran, Navarro, & Fernandez-Valmayor, 1998). This research field has contributed a very important Multiagent paradigm: the Student Model representation. The idea is that the Student Model representation and related ITS MAS concepts, together with BDI models for agents, could find profitable application in the AOSE research area.

The motivation is that the use of the weak notions of agency commonly accepted in AOSE processes eventually leads to the "macroscopic behaviour" problem (DeWolf & Holvoet, 2005). On the other hand, when the design of a MAS starts from a strong notion of agency, the problem of achieving desired high-level properties becomes an integral part of the design process. Cognitive abstractions such as beliefs, goals, and intentions, and social abstractions like cooperation, competition, and negotiation, provide the ground on which these high-level properties can be intuitively understood and enunciated as system requirements. High-level agent architectures and models derived from ITS/ILE research, such as student models and pedagogical negotiation processes, can be used as a design framework for complex applications. Formalisms, such as modal logics, provide a way to state these requirements in a precise and non-ambiguous form. Recent developments in the field of logic programming languages, like X-BDI (Móra, Lopes, Viccari, & Coelho, 1998) or AgentSpeak(L) (Rao, 1996), can transform these requirements into prototypes, at least for certain kinds of applications. Using the spiral model for software development (Boehm, 1988), these prototypes can be used in the test and validation phases of the initial stages of evolutionary development cycles of complex MAS. They are proof-of-concept systems able to exhibit the autonomous behaviour required of the final system, though not necessarily with its real-time performance.

ITS GENERAL ARCHITECTURE

Since the 1970s, researchers in Computer Science for Education have observed the need to use AI techniques in the development of educational systems. One of AI's promises for ITS is the possibility of making software more flexible and adaptable to users' needs.

The traditional ITS architecture is composed of modules such as Domain Knowledge (in some cases an Expert System), the User Model, and the Pedagogical Model. This traditional organization is still in use. However, in recent years, many systems for educational purposes have adopted the agent paradigm to explore the interaction and dynamic changes in teaching-learning environments. A system with more than one agent is known as a MAS. Despite the change of software engineering paradigm, the basic components (domain knowledge base, user model, and pedagogical strategies) still compose the ITS kernel. Hence, in this book, we are mainly interested in the use of agent technology to design and implement ITS.

Adapting the actions of a tutoring system to the student's needs is a complex process that requires a variety of knowledge, expertise, problem-solving capacities, and strategies for human-computer interaction, evaluation, pedagogy, and presentation of multimedia information. Breaking this process into appropriate components, which are autonomous, proactive, and flexible, can reduce the complexity of constructing a tutor, as the sketch below illustrates.
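To make this decomposition concrete, here is a minimal Python sketch of the traditional ITS kernel split into three components with narrow interfaces. It illustrates the idea only and is not code from the book; all names, numbers, and update rules are hypothetical.

```python
# A minimal sketch (not the book's implementation) of the traditional ITS
# kernel decomposed into independent components; all names are hypothetical.

class DomainKnowledge:
    """Holds the subject matter to be taught."""
    def __init__(self):
        self.topics = {"fractions": ["add", "multiply"]}

    def exercises_for(self, topic):
        return self.topics.get(topic, [])


class StudentModel:
    """Tracks what the system believes the student knows."""
    def __init__(self):
        self.mastery = {}  # topic -> estimated mastery in [0, 1]

    def update(self, topic, correct):
        old = self.mastery.get(topic, 0.5)
        # crude running estimate: move toward 1 on success, toward 0 on failure
        self.mastery[topic] = old + 0.2 * ((1.0 if correct else 0.0) - old)


class PedagogicalModel:
    """Chooses a teaching action from domain knowledge and the student model."""
    def choose_action(self, domain, student, topic):
        if student.mastery.get(topic, 0.5) < 0.7:
            return ("practice", domain.exercises_for(topic))
        return ("advance", None)


# The components interact only through narrow interfaces, which is what makes
# it natural to later wrap each one as an autonomous, proactive agent.
domain, student, tutor = DomainKnowledge(), StudentModel(), PedagogicalModel()
student.update("fractions", correct=False)
print(tutor.choose_action(domain, student, "fractions"))
```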

Dillenbourg and Self's (1992) work describes a formal abstract framework that shows how the basic entities (modules or agents) of an ITS, such as the tutor/domain module and the student model module, can be structurally organised in several abstraction layers. This framework also shows the relationships that take place between the entities of each layer and what kind of knowledge is related to each of these relationships. The abstraction layers of the framework form the "vertical" dimension. They are based upon the computational distinction among concrete behaviour, behavioural knowledge, and conceptual knowledge (see Figure 1). The basic relationship in the vertical dimension is consistency between levels, that is, the consistency between the learner's real behaviour and the knowledge about possible behaviours, and the consistency between this knowledge and the conceptual knowledge related to the learning domain. The entity subclassification used in the framework forms the "horizontal" dimension. It assumes the existence of three entities: the "system," the "learner," and the "system's representation of the learner" (the student model). The identification of discrepancies between these entities forms the basic relationship among them (see Figure 1). The interaction between the learner and the system is clearly contextualized as a search-space problem. Methods for establishing the search space for learner models and for carrying out the search process were also reviewed in their work.

In effect, these interactions of a pedagogical nature are the most important elementary units for the analysis of the teaching and learning process. The challenge lies in the search for symmetry between human and machine. Such symmetry gives the user and the system the same possibilities for action and symmetric rights in decision making. In an asymmetric mode, one agent always has the final say, so there is no space for real negotiation. In the symmetric mode, there is no predefined winner, as conflicts need to be solved through negotiation. Precisely the cognitive processes that trigger an explicit declaration justifying an argument or refuting the partner's point of view are most likely to explain why collaborative learning is more efficient than solitary learning.

The main functions of ITS (explanation, education, and diagnosis) are traditionally implemented as one-way mechanisms, meaning that the system has total control. Recent work (Flores, Seixas, Gluz, Patrício, Giacomel, Gonçalves, & Vicari, 2005), however, tries to treat them as bilateral processes. The model is built collaboratively, and there are moments of negotiation. Clearly, for a negotiation to take place there must be some latitude available to the agents; otherwise, nothing can be negotiated. Discussions about the use of negotiation mechanisms in learning environments are not recent. According to Self (1992), there are two major motivations for the use of negotiation in ITS: (i) to make it possible to foster discussions about how to proceed, which strategy to follow, and which example to look for, in an attempt to decrease the control that is typical of ITS; and (ii) to give room for discussions that yield different viewpoints (different beliefs), provided that the agent (tutor) is not infallible. Dillenbourg and Self (1992) say that human partners do not negotiate a single shared representation, but actually develop several shared representations, that is, they move in a mosaic of different negotiation spaces.

DISCUSSIONS AND CURRENT WORK ABOUT ITS DEVELOPMENT USING AGENT ARCHITECTURE

Following the approach taken by Object-Oriented design and development processes, where the computational notion of "object" is a central concept, we will start our terminology with the computational notion of an "agent." Agents are the basic element of computation in the domains we are aiming at, and MAS are simply systems formed by several agents working together. Adapting the most commonly used definition of agent (Jennings, 2001b) to incorporate the notion of design purpose, we assume that an agent is a computational process situated in an environment that is designed to achieve a purpose in this environment through autonomous and flexible behaviour. From this point of view, the environment is the application domain where the agent will work to achieve its purposes.

An individual agent is an autonomous entity that has metalevel knowledge about itself and about the other agents in the society and can, therefore, collaborate with them to reach a common goal in the same environment.

Agents detect properties of the environment, or more commonly changes in these properties, through perceptions. These changes may happen independently of the agent, or may occur in response to actions executed by the agent or by other agents, but the only way the agent can detect them is through perceptions. This characteristic is very important for ITS, in particular for the development of the student model. As presented before, the student model represents, in the computer, the human student's cognitive (and also affective) state regarding the subject. Teaching actions intend to transform the student's cognitive state in order to promote learning. Then, if the ITS is modelled using agent technology, the sensors can detect these changes and the student model will be updated, as the sketch below illustrates.
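The perceive-update cycle just described can be pictured with a short sketch. This is our illustrative reading of the mechanism, not the book's implementation; the event format and the update rule are assumptions.

```python
# A minimal sketch of the perceive-update cycle: the agent only learns of
# changes in the environment through perceptions, and each perception
# updates the student model. Event format is a hypothetical (topic, correct).

def perceive(event_queue):
    """Return the next observed event, or None when nothing is pending."""
    return event_queue.pop(0) if event_queue else None

def update_student_model(model, perception):
    """Fold one perception into the represented cognitive state."""
    topic, correct = perception
    hits, tries = model.get(topic, (0, 0))
    model[topic] = (hits + (1 if correct else 0), tries + 1)

events = [("equations", True), ("equations", False), ("graphs", True)]
student_model = {}
while True:
    p = perceive(events)
    if p is None:
        break
    update_student_model(student_model, p)
print(student_model)  # {'equations': (1, 2), 'graphs': (1, 1)}
```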

Pedagogical agents and multiagent ITS

Within the ITS area, we can consider agents as pedagogical agents. These kinds of agents incorporate multiple features of Entertainment Agents and Artificial Life. They have a set of normative teaching goals, plans to achieve these goals (e.g., teaching strategies), and associated resources in the learning environment. Pedagogical agents can be divided into goal-driven (tutor, mentor, and assistant) and utility-driven (MOO agents - virtual environments on the network where people can meet and communicate - and Web agents, for example). Goal-based agents decide their actions (to achieve the goal) based on information describing desirable situations. Utility-driven agents are used for pedagogical support purposes, such as labour agents and agents that help students find things.

As mentioned before, some researchers who use agents to build ITS adopt a mentalistic approach (Bratman, 1990), where the term agent means a computer system that can be viewed as consisting of mental states such as beliefs, intentions, motives, expectations, obligations, and so on. In other cases, authors adopt a generic organization to build each agent. In this organization, the design of each agent is divided into three distinct levels:

  • Decision level: models the agent's decision processes according to the evidence received, whether from artificial agents or from students.
  • Operational level: where decisions taken at the previous level are transformed into real actions and operations in the environment. This level is also responsible for gathering and organizing the evidence and information necessary for decision making.
  • Communication level: responsible for the interaction among agents, and between agents and the external world. This level is responsible for sending/receiving messages among agents, and also for the interaction with users.

The levels compose the plan of construction (or functioning) of each agent, since at each level there are specific functionalities that must be modelled and implemented (modules, functions, processes, or software components). On the other hand, agents must be able to share knowledge and information, because the social issues of the multiagent environment must be handled. This implies the need for an ontology that defines the meanings of the messages, information, and decisions taken by each agent.
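The following sketch arranges the three levels as separate classes wired together in a pipeline. The level names come from the list above; the method names and message format are assumptions made for illustration only.

```python
# A hedged sketch of the three-level agent organization; the level names come
# from the text, everything else (method names, message format) is assumed.

class CommunicationLevel:
    def receive(self, inbox):
        # gather raw messages from other agents or from the user interface
        return list(inbox)

    def send(self, message):
        print("outgoing:", message)


class OperationalLevel:
    def gather_evidence(self, raw_messages):
        # organize incoming information into evidence for the decision level
        return [m for m in raw_messages if m.get("kind") == "evidence"]

    def execute(self, decision):
        # turn an abstract decision into a concrete action in the environment
        return {"action": decision, "status": "done"}


class DecisionLevel:
    def decide(self, evidence):
        # decide the next move from the evidence received
        return "give_hint" if evidence else "wait"


comm, ops, dec = CommunicationLevel(), OperationalLevel(), DecisionLevel()
inbox = [{"kind": "evidence", "payload": "wrong answer"}]
evidence = ops.gather_evidence(comm.receive(inbox))
result = ops.execute(dec.decide(evidence))
comm.send(result)
```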

Pedagogical Agents can be modelled as

    1. Cooperative agents who work in the background as part of the architecture of the educational system, or as
    2. Personal and animated agents that interact with the user.

In the first case, the educational system is modelled and implemented using a Multiagent approach, where each agent has a specific function in the system. These agents act in the background, are transparent to the user, and exchange information among themselves in order to carry out actions appropriate for improving learning. Within this approach, we highlight the works by Silveira and Viccari (1999); Frasson et al. (Frasson, Chaffar, Abdel Razek, & Ochs, 2005); and Bica et al. (Bica, Verdin, & Viccari, 2006). According to Giraffa and Viccari (1998), architectures based on this approach are variations of the traditional, functional architecture of an ITS (domain knowledge base, student model, and teaching strategies), where one or more agents implement each function of the tutor. Control is distributed among the agents. However, the user sees the system as a single entity, while internally it is composed of a society of agents.

In the second case, Animated Pedagogical Agents are human-like animated agents, represented by a character that interacts with the student through voice, emotional attitudes, and gestures. Some examples of animated pedagogical agents are Vincent (Paiva & Machado, 2002), Steve (Rickel & Johnson, 1998), and Cosmo (Lester et al., 1999).

To clarify the classification of agents and pedagogical agents, Figure 2 illustrates the agents' taxonomy in the context of ITS (Giraffa & Viccari, 1998).

The two major advantages of using agents in the conception of educational systems are modularity and openness. As agents are independent, they are a powerful tool for making the tutoring system modular. Some efforts have been carried out towards the construction of tutor components as agents that can be joined to form an ITS (Silveira & Viccari, 1999). Moreover, since each agent is an exclusive module independent of the others, it is easier to add agents that carry out new functionalities to these systems. They do not need to know the specific details of communicating with every agent, making the society more flexible and extensible. As agents are autonomous, they only need to know how to interact with the other agents (what type of new information the system expects the agent to send, for example).

The Multiagent system's modularity also makes it possible to handle bigger and more complex problems: each agent can be specialized in its own tasks in the problem space (in terms of knowledge and abilities to solve problems). This modularity simplifies the design and development of the educational system. The developer can concentrate on the knowledge representation, granularity analysis, and ways of reasoning that differ for the functionality of each agent. This modularity also permits the reuse of components in different systems.

Besides, the distributed nature of Multiagent architectures allows the functionality of an educational system to be distributed over a computer network and across different platforms. This distribution allows the tutoring system to be constructed from several components residing on different platforms, allowing the use of appropriate tools without worrying about the platform. The distributed nature of these architectures also allows partial parallel processing.

The use of BDI to model and develop ITS

If the ITS is modelled using agent technology and, in particular, the BDI model, we can assume that the purpose of an agent can be fully specified by the definition of its beliefs and desires, and that the behaviour of this agent is implied by its intentions; that is, we are considering the BDI cognitive model for our agents.

The use of BDI models in AI is not a simple application of naive concepts of belief, desire, and intention to software programming. Cognitive AI makes use of precise formal concepts that, in some cases, make it possible to pass directly from formal design to software implementation by using logic programming languages like AgentSpeak(L) and X-BDI. BDI models can also be implemented by several successful and powerful agent architectures (Bordini, Hubner, & Vieira, 2005; Mora et al., 1998). The use of agent and Multiagent technologies represents a new way to design and implement ITS, which is why we dedicate considerable space to this subject in this book.
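As a rough picture of how a BDI agent turns beliefs and desires into behaviour, here is a minimal deliberation loop in Python. It only gestures at the semantics of dedicated languages such as AgentSpeak(L) or X-BDI; the predicates, desires, and plan bodies are invented for illustration.

```python
# A minimal BDI-style deliberation loop, sketched in Python rather than in a
# dedicated agent language; structure and names are illustrative assumptions.

beliefs = {"student_struggling": True, "topic": "algebra"}
desires = ["student_masters_topic", "keep_student_motivated"]

def options(beliefs, desires):
    """Filter desires down to those still worth pursuing under current beliefs."""
    blocked = {"advance_topic"} if beliefs["student_struggling"] else set()
    return [d for d in desires if d not in blocked]

def deliberate(candidates):
    """Commit to one desire as an intention (here: simply the first viable one)."""
    return candidates[0] if candidates else None

def plan(intention, beliefs):
    """Map the committed intention to a concrete plan of actions."""
    if intention == "student_masters_topic" and beliefs["student_struggling"]:
        return ["present_example", "give_easier_exercise"]
    return ["present_next_lesson"]

intention = deliberate(options(beliefs, desires))
for action in plan(intention, beliefs):
    print("executing:", action)
```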

New abstractions are being added to the traditional BDI model, allowing "outward-looking" social concepts to be integrated with the traditional "inward-looking" mental-state concepts commonly associated with BDI models. Mental states related to expectations, confidence, planning, and emotions have brought BDI modelling closer to human behaviour. Social concepts like cooperation, competition, and negotiation (among others) are fundamental to defining complex coordinated behaviours and coalition work, and to defining the role and scope of organizations, institutions, and societies.

It is possible to see how these concepts and abstractions are interrelated. Traditional works in Theories of Agency by Rao and Georgeff (1991a, 1991b), Cohen and Levesque (1990, 1997), Sadek (1992), and others show how knowledge is related to beliefs (provisional knowledge), how beliefs are related to commitments and intentions, and how these are related to objectives, goals, actions, and communication. A more recent work by Bordini and colleagues (2002) shows how goals are related to planning in the case of resource-limited agents, and how reasoning and planning can be related to search heuristics. In our research, we are starting to integrate, in a single conceptual and formal framework (Probabilistic Modal Logics), probabilistic subjective beliefs (Bayesian beliefs) and Bayesian Networks (BN) with the other concepts of the BDI model (Gluz, Vicari, Flores, & Seixas, 2006a, 2006b). This is a small step towards decreasing the gap between probabilistic and purely logical knowledge representation and reasoning methods.

Such research and its recent results are bringing about a computationally reproducible understanding of human cognitive and psychological behaviour. In these fields, a Multiagent modelling paradigm was developed with interesting applications and implications for the Software Engineering (SE) and AOSE research areas: the Student Model paradigm.

The main idea behind this paradigm is that ITS agents must create internal models of the subjects with whom they interact. These subjects are purposeful and intelligent entities that may be artificial agents or human beings. This is the difference between ITS agents and other kinds of agents. ITS agents are not only continuously trying to understand the environment where they "live," but they also need to know, theorize about, plan for, and understand several aspects of the other intelligent entities that inhabit the same environment. Without this psychological, subjective knowledge, they are unable to actually teach anything. This is not a requirement for all sorts of MAS or agents, but it is the specialty of ITS agents.

The interaction of the agent with its environment happens through actions and perceptions. An action is an alteration of the external environment caused directly by the agent. From an intentional point of view, it also represents a way to attain an end (an intention). Therefore, internally the agent should know (believe) the basic effects produced by its possible actions and how these actions relate to its intentions.

As mentioned, a particular intention is pursued through a plan of action composed of a set of actions structured by sequence, iteration, and test/choice order relations (operators). Plans are specified through planning inference processes that use a base of beliefs about which kinds of strategies and tactics should be applied to achieve an intention. Plans of action do not need to be fully specified from the beginning. They can be partial, and the agent can start to follow the plan and reassess or complete it during execution.
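One possible encoding of the sequence, iteration, and test/choice operators is as plain nested data interpreted by a small executor, which also makes it easy to leave a plan partial and refine it later. This representation is an assumption of the sketch, not the book's formalism.

```python
# A sketch of plan operators as nested tuples: ("seq", ...), ("while", ...),
# ("if", ...), and ("act", fn). The encoding itself is our own assumption.

def run(plan, state):
    op = plan[0]
    if op == "seq":                       # execute sub-plans in order
        for sub in plan[1:]:
            run(sub, state)
    elif op == "while":                   # iteration guarded by a test
        _, test, body = plan
        while test(state):
            run(body, state)
    elif op == "if":                      # test/choice between branches
        _, test, then_p, else_p = plan
        run(then_p if test(state) else else_p, state)
    elif op == "act":                     # primitive action
        plan[1](state)

def give_exercise(s): s["solved"] += 1
def praise(s): print("well done after", s["solved"], "exercises")

plan = ("seq",
        ("while", lambda s: s["solved"] < 3, ("act", give_exercise)),
        ("if", lambda s: s["solved"] >= 3, ("act", praise), ("act", give_exercise)))
run(plan, {"solved": 0})
```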

An agent's perceptions produce updates to its belief base. However, the exact update produced by a particular perception depends on the current state of the agent's beliefs. Indeed, perceptions can be classified as expected or unexpected and, similarly to actions, should be related to the current set of intentions and their correlated plans.

The notions presented up to now provide a common set of concepts necessary to understand the internal (mental) states of individual agents; the relationships between these states and the agent's behaviour; and the effects of such behaviour on the interaction of the agent with its environment. These notions are established in a well-known Theory of Agency and can be thought of as a conceptual framework for understanding the kind of agent we intend to work with.

Those concepts are enough to analyse and describe applications based on single autonomous and deliberative agents. An application aimed at helping users accomplish some particular task or activity can be thought of in terms of a help agent that will aid users in concluding this task, for example. The purpose of this agent is to be helpful to users in accomplishing the task, and to be able to detect when it is necessary to interfere (and when it is not, so as not to be annoying). The belief base of this agent will be composed of knowledge related to how to represent the task, which perceptions and actions are related to the task and to the users, and similar matters. Problem-solving knowledge will be composed of methods that can be used to accomplish the task, and inference processes that can identify how the user is proceeding in the resolution of the task and detect whether or not to interfere. The BDI machinery will then be responsible for combining these purposes, belief bases, and problem-solving knowledge into appropriate intentions that will result in the behaviour of the agent.
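The "when to interfere" decision of such a help agent might reduce to a test over its belief base, as in this hedged sketch; the thresholds and the belief structure are assumptions.

```python
# A sketch of the help-agent example, with the "when to interfere" test made
# explicit; thresholds and belief structure are illustrative assumptions.

class HelpAgent:
    def __init__(self):
        # belief base: how much idleness or failure counts as being stuck
        self.beliefs = {"idle_limit": 60, "error_limit": 3}

    def should_interfere(self, idle_seconds, recent_errors):
        """Interfere when the user seems stuck; stay quiet otherwise,
        so as to be helpful without being annoying."""
        stuck = idle_seconds > self.beliefs["idle_limit"]
        failing = recent_errors >= self.beliefs["error_limit"]
        return stuck or failing

agent = HelpAgent()
print(agent.should_interfere(idle_seconds=90, recent_errors=0))  # True
print(agent.should_interfere(idle_seconds=10, recent_errors=1))  # False
```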

This conceptual framework for individual agents can be extended in several ways, depending on the properties of the environment in which the agent will live or, equivalently, the characteristics of the application domain where the agent will work.

One basic extension is to consider that all planning and reasoning processes of the agent are necessarily limited by bounded rationality, which forces the agent to plan and act rationally with respect to its purposes as far as its knowledge and abilities (its resources) permit. Another possibility is to include a set of assumptions in the agent's planning processes to solve the classical frame problem in planning, allowing the agent to solve planning problems without having to worry about all the elements of the environment that are not directly related to the solution search process.

On the other hand, if the domain (environment) requires non-monotonic changes in the belief state that can produce contradictions, then it is important to explicitly handle these contradictions and the non-monotonic reasoning behind them. The work carried out with Extended Logic Programming (ELP) with Well-Founded Semantics with eXplicit negation (WFSX) (Mora et al., 1998) has shown how to handle these kinds of situations in certain domains through the use of logic programming.

Another usual extension is to add concepts related to time, in order to allow the agent to reason about and understand notions like timeouts, instantaneous events, and duration, which are very common in a wide range of applications, in particular in ITS development. These notions of time are already implicit in the framework, because the definitions of intentions and plans are intended to achieve possible future goals. The idea here is to allow the explicit use of these notions in other kinds of reasoning.

An interesting further possibility is to consider domains that require degrees of belief. These kinds of beliefs are usually represented by subjective probabilities (Bayesian probabilities), and the conditional relations between these beliefs are expressed by Bayesian Networks (Cowell, Dawid, Lauritzen, & Spiegelhalter, 1999; Pearl, 1986, 1993). When this kind of knowledge representation is used, a single Bayesian Network usually represents the entire belief base of the agent. Decision and planning problems are represented by extensions of Bayesian network diagrams, like Influence Diagrams (Shachter, 1986). If the uncertainty also pervades the actions and perceptions (observations) of the agent, then the entire decision problem of planning the agent's actions becomes a Partially Observable Markov Decision Process (POMDP). In this case, recent works (Hui & Boutilier, 2006) have shown that Dynamic Bayesian Networks (DBN) can be successfully applied to represent these problems and to specify the belief base and inference processes of the agent.
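Degrees of belief can be illustrated with the smallest possible Bayesian network: a hidden "knows" node and an observed "answers correctly" node, with the degree of belief updated by Bayes' rule after each observation. The probabilities below are invented for illustration.

```python
# A minimal sketch of degrees of belief: a two-node Bayesian network
# (knows -> answers_correctly) updated by Bayes' rule after each observation.
# All numbers are illustrative assumptions.

p_knows = 0.5                 # prior degree of belief that the student knows
p_correct_given_knows = 0.9   # 1 minus an assumed "slip" probability
p_correct_given_not = 0.2     # an assumed "guess" probability

def update(p_knows, answered_correctly):
    """Posterior P(knows | evidence) by Bayes' rule."""
    if answered_correctly:
        num = p_correct_given_knows * p_knows
        den = num + p_correct_given_not * (1 - p_knows)
    else:
        num = (1 - p_correct_given_knows) * p_knows
        den = num + (1 - p_correct_given_not) * (1 - p_knows)
    return num / den

for obs in [True, True, False]:
    p_knows = update(p_knows, obs)
    print(f"after observing correct={obs}: P(knows) = {p_knows:.2f}")
```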

The use of Multiagent Systems to model and develop ITS

In terms of agent communication and Multiagent system research, the study and application of social concepts to understand and model the communication and coordination of groups (teams or societies) of agents is a relatively recent field of research. The work of Singh (1998), which identifies the limits that mental-state concepts present when used to give the semantics of agent communication, and of Castelfranchi and Falcone (1998, 1999), who study the characteristics of social relationships based on the notion of trust, were important starting points for this new research line. Some examples of this line are the work of Colombetti and Verdicchio (2002), which models social interactions through commitment acts executed by agents, using a new logic of social interaction based on principles of cooperation; the SPIRE system of Grosz, Kraus, and others (Grosz, Kraus, Sullivan, & Das, 2002), which studies how rational and collaborative agents can adapt their goals and intentions to work in a group; and the work of Fischer and Ghidini (2002), which proposes a logic of abilities, beliefs, and confidence to model the behaviour of, and interaction among, agents.

The concepts presented here should be regarded as open research topics, as should other concepts associated with individual and communicative agents. They are subject to discussion and revision as the research evolves. Nonetheless, the social concepts presented here are based on the research literature and fit very well with the experience we have accumulated with our ITS/ILE systems.

The most basic social concept is that of a social relationship, which is a relationship that occurs among two (or more) members of a group, team, or society. In our case, these members are agents, and the societies, groups, or teams are systems of agents. Therefore, social relationships are simply any possible relationships that can be formed among two (or more) agents of the system. In general, these relationships can be based on any perceived element of the environment, or any noticeable event occurring in the environment, that can be related to the action of other agents. Nevertheless, taking into account that our agents are purposeful and intentional agents, these relationships are more frequently based on some issue or matter that happens in the environment but depends on the purposes and intentions of the agents.

From this point of view, one dividing social issue is whether or not the agents have conflicting purposes in common. If so, they can be considered possible competitive agents. Otherwise, they can be held to be cooperative agents if they have common non-conflicting purposes, or indifferent agents if they share no common purpose at all. The conflicting-purposes issue defines the competitive/cooperative social relationship among agents. It is important to note that these social issues are profoundly related to the individual and communicative aspects of the agents, in the sense, for example, that agents are truly competitive only if they know, from previous communication or interactions, that they have conflicting purposes, and they know that this conflict affects their current intentions (or already known future intentions).

Two or more agents are not always entirely competitive or entirely cooperative, because they can share some common non-conflicting purposes and, at the same time, have other conflicting purposes. Recalling that purposes are represented by high-priority desires of the agent, this relationship can change over time, because it is possible to temporarily align the desires or beliefs of agents. Following the model of Jennings et al. (Jennings, Faratin, Lomuscio, Parsons, Wooldridge, & Sierra, 2001a) for automatic agent negotiation, it is possible to consider the process of discovering and establishing common purposes and desires among two or more agents as a negotiation interaction process when the desires are not subject to change, or as an argumentation interaction process when it is possible to change the desires of the agents.
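The competitive/cooperative/indifferent classification described above can be summarized by a small function over the agents' purposes. Representing conflicts as pairs of purposes is our own simplifying assumption, not a published algorithm.

```python
# A sketch of the competitive / cooperative / indifferent classification by
# purpose overlap; the pairwise-conflict representation is an assumption.

def classify(purposes_a, purposes_b, conflicts):
    """conflicts: a set of frozensets of mutually conflicting purposes."""
    for pa in purposes_a:
        for pb in purposes_b:
            if frozenset((pa, pb)) in conflicts:
                return "competitive"      # conflicting purposes in common
    if purposes_a & purposes_b:
        return "cooperative"              # common non-conflicting purposes
    return "indifferent"                  # no common purpose at all

conflicts = {frozenset(("maximize_own_score", "maximize_peer_score"))}
print(classify({"learn_topic"}, {"learn_topic"}, conflicts))                  # cooperative
print(classify({"maximize_own_score"}, {"maximize_peer_score"}, conflicts))  # competitive
print(classify({"learn_topic"}, {"grade_essays"}, conflicts))                # indifferent
```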

Besides basic competition/cooperation relationships, agents can establish other sorts of social relationships. For example, an agent can establish dependence/independence relationships with environment elements that are necessary (or not) to achieve its goals. The social version of this relationship is established in terms of whether or not other agents are required to execute particular actions (or hold particular intentions) for the agent to achieve its goals. As with competitive relationships, dependence relationships give rise to specific forms of interaction among agents. In particular, the existence of dependence relationships among cooperative agents makes it necessary to create coordination interaction mechanisms between these agents, to allow them to achieve their purposes.

Another example of a social relationship is the confidence/doubt relationship that can be established between one agent and another in terms of the expected behaviour of the other agent in some future situation or state. In this perspective, one agent trusts, or establishes a confidence relationship with, another agent with respect to certain situations if it believes that the other agent will behave properly (as expected) should these situations occur. The relationship of confidence is the basis for any kind of socially mediated agreement or socially mediated commitment, which can be established among agents based on socially constructed sets of customs, practices, and behaviours.

The list of social relationships presented earlier is not intended to be an exhaustive list of the possible social relationships among agents. Although incomplete, this list depends on several open research issues that imply different assumptions that can be made about agent societies and Multiagent systems.

Probably the most important open question related to social relationships and interaction mechanisms in agent societies is whether these relationships and mechanisms pre-exist the system, with their corresponding rules and beliefs needing to be incorporated into the agents before the system starts operating, or whether they arise and evolve during the lifetime of the system from the knowledge and behaviour of individual agents not necessarily related to social issues. These approaches are not necessarily mutually exclusive. They compose the spectrum of possible solutions for designing and implementing social interactions in Multiagent systems. Related to this issue are the problems of how agents will know whether they will compete with, cooperate with, or be indifferent to other agents; what kinds of dependencies an agent has on the other agents; and whether or not an agent will trust other agents in particular situations.

The approach taken by a particular Multiagent system to solve these matters is expressed by a set of social and communication principles to be adopted by the agents of the system. It is a common assumption that the agents of ITS Multiagent systems are cooperative agents; this is a design principle for these agents, whose purposes are non-conflicting by design. The same assumption is held by coordinated teams of agents seeking to solve a distributed problem, which is common in Multiagent systems aimed at distributed problem solving. The problem in these teams of agents is to identify the dependence relationships among the agents. Several solutions can be used to solve this problem. Simple systems can assume that this knowledge is incorporated by design into the belief base of each agent. Complex systems can use interaction protocols (like blackboards and contract nets) to solve the matter. In this case, only the knowledge of how to use these protocols needs to be incorporated into the agents by design.

Summarising the discussion: for modelling and developing ITS, we need a methodology with a strong and powerful set of high-level agent abstractions. It is also important that these abstractions be considered from the beginning of the software engineering process, including the requirements engineering phase. The definition of agent used in the initial works of Wooldridge and Jennings on AOSE, as an "...encapsulated computer system situated in some environment and capable of flexible, autonomous action in that environment in order to meet its design objectives" (Jennings, 2001b, p. 36), may induce the reader to adopt the weaker, non-mentalistic notion of agency. It is an easier notion to adopt for someone without a Cognitive AI background, but that was the point of Jennings' paper: to sell the idea of the applicability of agents in Software Engineering processes with minimum AI support.

COGNITIVE AI MODELING APPLIED TO ITS

In computational cognitive modelling, we hypothesize internal mental processes underlying human cognitive activities and express such activities as computer programs. Such computational models often consist of many components and aspects. Claims are often made that certain aspects play a key role in modelling, but such claims are sometimes not well justified or explored.

Computational cognitive modelling is an important aspect of cognitive science because it plays a central role in the computational understanding of the mind. A common methodology of cognitive science is to express a theory about human cognition in a computer program and compare its behaviour with human cognitive behaviour.

Cognitive models can be of two major types: one type consists of computational process models, and the other of (behavioural) mathematical models.

The former seek to capture the internal computational processes that generate cognitive behaviour (Corbett & Anderson, 1992). The latter work by measuring a number of behavioural parameters (such as recall rates, response times, or learning curves) in a precise way through mathematical equations, as in the work developed by Coombs (1970). If we can find suitable measures, and relations between these measures, that reveal fundamental regularities, mathematical models can be used as tools to gain insights into cognitive processes. They can also serve as a kind of abstract ideal that computational models try to match at the outcome level.
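As a concrete illustration of the mathematical kind of model, consider the power law of practice, which predicts response time as RT = a * N^(-b) for trial number N. The parameter values below are invented for illustration, not taken from the cited studies.

```python
# A tiny example of a behavioural/mathematical model: the power law of
# practice, RT = a * N**(-b). Parameter values are hypothetical.

a, b = 12.0, 0.4  # assumed initial response time (s) and learning rate

def predicted_response_time(trial):
    return a * trial ** (-b)

for n in (1, 5, 25):
    print(f"trial {n:2d}: predicted response time = {predicted_response_time(n):.1f} s")
# The equation summarizes the learning curve precisely, but says nothing about
# the internal processes that produce it - that is the computational model's job.
```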

Computational models complement mathematical models by providing detailed internal process descriptions that reveal the underlying mechanisms of behaviour. Computational modelling opens up the black box, although it is usually done in a highly descriptive way. As in most cases we do not have enough cognitive data to lead directly to a computational model, numerous assumptions need to be made and parameters need to be set. Thus, models are often underconstrained by data, even with the methodologies of protocol analysis and other stylized procedures. Another shortcoming of computational models is that they often fail to account for individual differences and, thus, describe only an "average subject," which is nonexistent and ultimately meaningless. Given the shortcomings of each of the two approaches, it is clear that some combination of the two might be useful or even necessary (Anderson, 1993).

Abstract models can serve as a compromise between mathematical and computational models. In abstract models, assumptions and details not essential to the phenomena are omitted, yet computer programs are still a good way to generate model behaviour.

There is still a huge gap in our knowledge of how the natural mind works. However, by looking at the structure of mental states, we may build replicas of these systems to help us understand how the biological systems work. Such computer models are unlocking the secrets of a variety of artificial reactions. This may be the breakthrough we have been waiting for to design cognitive agents focused on particular problem domains.

The Mental States approach allows us to trace more precisely the intrinsic dynamics of an interaction between tutor and students. These results can be used to improve future modelling and help us build better student and tutor models. The Mental States approach has much more to offer, and it will demand more research to improve its properties and to develop ways to represent teaching/learning situations.

BDI modelling, which is a Mental States approach, allows a more fine-grained representation of the student's mental states during problem-solving activities. For this reason, it is appropriate for student model representation. Moreover, the BDI approach includes proactive mental states (desires and intentions). Therefore, the student model can better represent the student's actions in the ITS environment, and the student's actions guide the tutor's pedagogical tactics towards more effective teaching. This provides the student model with knowledge for action, and allows this part of the ITS to be modelled and developed as a knowledge-intensive agent that also communicates this knowledge to the other agents in the environment. In some cases, as in the MCOE, PAT, and AMPLIA applications (see the next chapters of this book), the student model can also make decisions about the student's actions. This is an important aspect that distinguishes our work from others. Indeed, the use of BDI architectures appears to be adequate for building teaching and learning agents because the proactive mental states potentially lead agents into action. Another important feature is that desires may be contradictory (with other desires and with the agent's beliefs), while beliefs allow the agent to constantly update its view of the world. These characteristics of mental states are very important for better representing the choreography that occurs during a teaching and learning interaction.

We have been using the concepts of agency and mental states as abstractions to describe, design, and build systems. As usual, architectures are quite implementation-oriented: they provide schemas for building agents and systems. Nevertheless, they are insufficient as an analysis tool, especially in the case of pedagogical agents. It is not easy to respect the pedagogical theoretical foundations that are the basis for such agents while building them. Formal models come into place when we are interested in both describing and analysing the autonomous behaviour of (artificial and natural) agents in an ITS.

The agent's behaviour follows from the fact that the only thing the programmer has to do is specify the agent's mental states, a high-level form of program development. The inclusion of expectations in traditional BDI architectures enables greater flexibility and more complex behaviours. The ideas behind this social approach introduce promising concepts for understanding how to construct agents with special abilities to learn and teach, for instance.

Trying to go beyond the notion of behaviour, and knowing which mental states allow us to model knowledge and reasoning, we have been updating these architectures to represent some affective aspects of learners, such as effort, confidence, and independence. The affective information improves the system and allows it to provide more adequate help for the student. In other words, we believe that the use of both mental states and affective aspects permits a more accurate selection of pedagogical strategies (see Chapters 6 and 9).

The long-term purpose of our research is to define a Multiagent environment where there is no explicit tutor or learner, but only a set of mental states and affective aspects that generate learning and teaching attitudes, which can be assumed by any of the agents that compose the environment. The acquisition of knowledge by the agents in this Multiagent society emerges from their interactions, in which each agent can behave as a tutor or as a learner. To do so, an agent needs only to change its mental states (at one moment assuming the learner's set of mental states, at another the tutor's).

STUDENT MODEL

The student model remains the weak part of such systems. This situation imposes a great restriction on designing better tutoring systems that help the student build his or her own knowledge. The strongest restriction, however, comes from our imprecise knowledge of the mental activities of students during the teaching/learning process. In order to build an ITS with a good student model, we need to better understand what is happening in the student's mind during the interaction. We need to understand the process and reproduce it in the machine. Much has been done to understand this process from different points of view: psychological, educational, and computational.

The simplest student model covers basic profile characteristics, normally provided by the students themselves through a form, together with information on their level of knowledge in the domain. The construction of the student model is, therefore, an important aspect to take care of in the pedagogical profile of the ITS.

Some models, such as those in the works of Conati et al. (1997) and Gertner et al. (Gertner, Conati, & VanLehn, 1998), are based on the student's previous knowledge and are brought up to date in accordance with the detected progress and the student's probable future actions. Others, such as Mayo and Mitrovic (2001), direct the model toward following the student as he or she answers questions. Studies carried out by Bull and Pain (1995a) consider the importance of the student's participation, in collaboration with the ITS, in the construction of the student model (the open student model). The environment developed by Collins (Bull, Pain, & Brna, 1995b) is a good example of these ideas. Kay (2001) presented a student model controlled by the student that also considered the social context, such as collaborative work among students. Indeed, the construction of the student model is a complex task, involving a diversity of variables that, moreover, must constantly be brought up to date in accordance with the software development process. Hence, a good approach to dealing with this complex task is to use formal specifications to design the student model.

The application of formal approaches to understanding or conceptualising aspects of educational processes brought ITS research closer to cognitive agent models. Works by Self (1990, 1994) and Dillenbourg and Self (1992) present a solid foundation for the application of formal methods to the analysis of student models. In particular, the formal analysis presented in Self (1994) clearly shows that there is a profound relationship between several areas of AI, like machine learning and cognitive-agent modelling, and ITS research. The formal model defined by Self is derived from various areas of theoretical artificial intelligence, particularly from epistemic/doxastic modal logics (logics of knowledge/belief) and from the BDI logics used for agent cognitive and communication modelling.

The student model can be defined as the representation of certain characteristics and attitudes of the learners, which are used to achieve an individualised and appropriate interaction between the computational environment and the student. Its objective is to understand the exploratory behaviour of the learner in order to offer any necessary support, whilst also maintaining the learner's sense of control and freedom.

This model constitutes a description of student knowledge, learning skills, strengths, and weaknesses. In addition, the model can also take into consideration the domain of the problem being taught and the student's learning process. It should be updated whenever more information on the student, such as affective aspects, is obtained.

In the systems currently developed in our group (see Chapters 6, 7 and 9), the general cognitive agent structure is formed by ⟨B, D, I, T⟩ tuples, where B is the set of the agent's beliefs, D is the set of the agent's desires, I is the set of the agent's intentions, and T is the set of time axioms.

The desires of the agent are a set of sentences DES(Ag,P,Atr), where Ag is an agent identification, P is a property, and Atr is a list of attributes. Desires are related to the state of affairs that the agent eventually wants to bring about; but desires, in the sense usually presented, do not necessarily drive the agent to act. That is, the fact of an agent having a desire does not mean it will act to satisfy it. It means, instead, that before such an agent decides what to do, it will be engaged in a reasoning process, confronting its desires (the state of affairs it wants to bring about) with its beliefs (the current circumstances and constraints the world imposes). The agent will choose those desires that are possible, according to some criteria.
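To fix these notions, a minimal sketch in Python is given below. It is purely illustrative: the class and field names are our assumptions, standing in for the logic-based representation actually used in the systems described in the chapters.

```python
from dataclasses import dataclass, field

# A purely illustrative sketch of the <B, D, I, T> structure and of the
# DES(Ag, P, Atr) sentences described above; the actual systems use a
# logic-based representation, and these class/field names are assumptions.

@dataclass
class Desire:
    agent: str        # Ag: the agent identification
    prop: str         # P: the property the agent wants to bring about
    attributes: dict  # Atr: the list of attributes, e.g. {"importance": 3}

@dataclass
class BDIAgent:
    beliefs: set = field(default_factory=set)        # B: sentences about the world
    desires: list = field(default_factory=list)      # D: DES(Ag, P, Atr) sentences
    intentions: list = field(default_factory=list)   # I: committed choices
    time_axioms: list = field(default_factory=list)  # T: temporal axioms
```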

Beliefs constitute the agent's information attitude. They represent the information agents have about the environment and about themselves. The set B contains sentences describing the problem domain using ELP (extended logic programming). An agent Ag believes a property P holds at a time T if, from B and the time axioms in T, it can deduce BEL(Ag, P) for the time T. We assume that the agent continuously updates its beliefs to reflect changes it detects in the environment, and that, whenever a new belief is added to the belief set, consistency is maintained.
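The following toy sketch illustrates belief lookup and consistent update. Note the assumption: full ELP deduction is reduced here to simple membership in the belief set, which is only a stand-in for the actual deductive machinery.

```python
# Toy stand-in for ELP deduction: BEL(Ag, P) at time T is reduced to
# membership of (P, T) in the belief set B. Real deduction is richer.

def negation(prop: str) -> str:
    """Return the explicit negation of a property literal."""
    return prop[4:] if prop.startswith("not_") else "not_" + prop

def believes(beliefs: set, prop: str, time: int) -> bool:
    """BEL(Ag, P) holds at time T if (P, T) is 'deducible' from B."""
    return (prop, time) in beliefs

def add_belief(beliefs: set, prop: str, time: int) -> None:
    """Maintain consistency: adding P at T retracts not_P at T, if present."""
    beliefs.discard((negation(prop), time))
    beliefs.add((prop, time))
```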

Intentions are characterised by a choice of a state of affairs to achieve, and a commitment to this choice. Thus, an intention is a commitment the agent assumes to a specific possible future. This means that, differently from desires, an intention may not contradict other intentions, as it would not be rational for an agent to act in order to achieve incompatible states. Intentions should also be supported by the agent's beliefs; that is, it would not be rational for an agent to intend something it does not believe possible. Once an intention is adopted, the agent will pursue it, planning actions to accomplish it, replanning when a failure occurs, and so forth. The agent must also adopt as intentions the actions it uses as means to achieve its intentions.

The definition of intentions enforces their rationality constraints: an agent should not intend something at a time that has already passed; an agent should not intend something it believes is already satisfied, or that will be satisfied with no effort by the agent; and an agent only intends something it believes achievable, that is, something for which it believes there is a course of actions leading to the intended state of affairs. When designing an agent, we specify only the agent's beliefs and desires; it is up to the agent to choose its intentions appropriately from its desires. These rationality constraints must also be guaranteed during this selection process.
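Read operationally, the three constraints amount to a single admissibility test over a candidate intention, as the following sketch shows; here `satisfied` and `achievable` are assumed predicates standing in for deduction over the belief set.

```python
# Illustrative admissibility test mirroring the three rationality
# constraints above. 'satisfied' and 'achievable' are assumed predicates.

def rational_to_intend(prop, time, now, beliefs, satisfied, achievable):
    if time < now:                     # (1) cannot intend something in the past
        return False
    if satisfied(prop, beliefs):       # (2) already holds, or will hold effortlessly
        return False
    return achievable(prop, beliefs)   # (3) must believe a course of action leads to it
```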

Agents choose their intentions from two different sources: from their desires, and as refinements of other intentions. By definition, there are no constraints on the agent's desires; an agent may therefore have conflicting desires, that is, desires that are not jointly achievable. Intentions, on the other hand, are restricted by the rationality constraints, so the agent must select only desires that respect them. First, it is necessary to determine the subsets of the desires that are relevant according to the agent's current beliefs. Afterwards, it is necessary to determine which desires are jointly achievable. In general, there may be more than one subset of the relevant desires that is jointly achievable; therefore, we must somehow indicate which of these subsets should preferably be adopted as intentions. This is done through a preference relation defined on the attributes of desires. In our applications, the agent prefers to satisfy the most important desires first and, among the most important ones, to adopt as many desires as possible. The selection is made by combining the different forms of non-monotonic reasoning provided by the logical formalism.
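A possible rendering of this selection process is sketched below. It replaces the non-monotonic reasoning of the formalism with explicit subset enumeration; the simplified `Desire` record, the precondition-based relevance check, and the `achievable` joint-achievability test are all assumptions for illustration.

```python
from itertools import combinations
from collections import namedtuple

# Illustrative stand-in for selecting intentions from desires: relevance is
# reduced to a precondition check against the beliefs, and 'achievable' is
# an assumed test of joint achievability for a set of desires.

Desire = namedtuple("Desire", "prop precondition importance")

def select_intentions(desires, beliefs, achievable):
    relevant = [d for d in desires if d.precondition in beliefs]
    candidates = [list(s)
                  for r in range(1, len(relevant) + 1)
                  for s in combinations(relevant, r)
                  if achievable(s)]
    # Prefer the most important desires; among equals, adopt as many as possible.
    return max(candidates,
               key=lambda s: (sum(d.importance for d in s), len(s)),
               default=[])
```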

Once the agent adopts its intentions, it starts planning how to achieve them. During planning, the agent forms intentions that are relative to preexisting ones; that is, it refines its existing intentions. This can happen in various ways: for instance, a plan may include an action that is not directly executable and must be elaborated by specifying a particular way of carrying it out, or a plan may include a set of actions that is elaborated by imposing a temporal order on the set. Since the agent commits to the adopted intentions, these previously adopted intentions constrain the adoption of new ones: during the elaboration of plans, a potential new intention is adopted only if it does not contradict the existing intentions and beliefs.
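This commitment can be sketched as a simple filter on candidate refinements, as below; `contradicts` is an assumed pairwise incompatibility test standing in for deduction in the formalism.

```python
# Illustrative adoption of a refinement: the candidate intention is added
# only if it contradicts neither the existing intentions nor the beliefs.
# 'contradicts' is an assumed pairwise incompatibility test.

def adopt_refinement(intentions, beliefs, candidate, contradicts):
    if any(contradicts(candidate, i) for i in intentions):
        return intentions                 # rejected: clashes with a prior commitment
    if any(contradicts(candidate, b) for b in beliefs):
        return intentions                 # rejected: not believed possible
    return intentions + [candidate]       # adopted as a new (relative) intention
```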

The next step is to define when the agent should perform all this reasoning about intentions. We argue that it is not enough to state that an agent should revise its intentions when it believes a certain condition holds, such as believing that an intention has been satisfied or that it is no longer possible to satisfy it, as this would require the agent to verify its beliefs constantly. Instead, we take the stance that it is necessary to define, along with those conditions, a mechanism that triggers the reasoning process without imposing a significant additional burden on the agent. Our approach is to encode the conditions that make the agent start reasoning about intentions as constraints over its beliefs. Recall that we assume the agent must keep its beliefs consistent whenever new facts are incorporated. When the agent revises its beliefs and one of the conditions for revising intentions holds, a contradiction is raised; the intention revision process is triggered when one of these constraints is violated (see Figure 3).
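A rough sketch of this trigger mechanism follows: the revision conditions are encoded as constraints checked only when the belief set changes, so the agent never has to poll its intentions. The `constraints` predicates and the `revise_intentions` procedure are assumed components, not the actual implementation.

```python
# Illustrative trigger: intention revision runs only when a belief update
# violates one of the constraints encoding the revision conditions.
# 'constraints' and 'revise_intentions' are assumed components.

def on_belief_update(agent, new_belief, constraints, revise_intentions):
    agent.beliefs.add(new_belief)   # consistency maintenance assumed, as sketched above
    violated = [c for c in constraints if not c(agent.beliefs, agent.intentions)]
    if violated:                    # a contradiction was raised
        revise_intentions(agent)    # e.g. drop satisfied or impossible intentions
```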

ORGANIZATION OF THE BOOK

In this book, we intend to present a modern view of ITS, focusing mainly on the conception of these systems according to a multiagent approach, and on the affective and cognitive modeling of the student, also based on agent technologies. The book consists of 14 chapters organized into three major sections. In the first section, we present some notions about intelligent agents and multiagent systems that are necessary to understand learning environments modelled according to the agent approach, as well as some agent technologies and examples of their application for educational purposes, such as animated pedagogical agents, embodied conversational agents, and intelligent learning objects. This first section is composed of four chapters. In the second section, we present some works on the affective and cognitive modeling of students. This section consists of six chapters that give the reader a view of the state of the art in the area, and also describe some works that exemplify the concepts presented. The next four chapters, which compose the third section, present works that profit from the modularity and openness of multiagent systems to implement decentralized and distributed ITS architectures, also known as Intelligent Learning Environments (ILEs).

A brief description of each of the chapters follows:

Chapter 1 aims at providing the reader with an overview of the agent paradigm. The chapter introduces the concept of agent and describes reactive (the weak notion of agency, according to Wooldridge & Jennings, 1999) and cognitive (the strong notion) agent architectures, focusing on the cognitive BDI model. Some challenges in the area are also addressed.

Chapter 2 addresses the reuse of learning objects to create ILEs. The authors are mainly concerned with improving interoperability among Learning Objects in agent-based learning environments by integrating Learning Object technology and the multiagent systems approach. To achieve this goal, they propose the development of learning objects based on agent architectures: the Intelligent Learning Objects approach.

Chapter 3 presents some basic guidelines for designing and evaluating Animated Pedagogical Agents. Some background in the area is also introduced. In addition, the authors discuss limitations and problems in the field that motivate their proposed guidelines.

Chapter 4 introduces MagaVitta, an embodied conversational agent inserted into a multiuser intelligent learning environment, called CIVITAS, oriented to the construction and simulation of virtual cities. The authors also present the educational theories that ground CIVITAS modelling, as well as the pedagogical methodology developed for training teachers and for working with students in the educational environment.

Chapter 5 reviews Bayesian technologies for modelling students in learning environments. The author states that Bayesian networks offer a mathematically sound mechanism for representing and reasoning about students under uncertainty, a topic also addressed in the chapter. An example of the application of a Bayesian network for modelling uncertain information about the student in an ITS is also presented in order to illustrate the concepts covered.

Chapter 6 addresses the modelling of collaborative ITS using a multiagent architecture and a mental-states approach. To exemplify this idea, the authors introduce MCOE, an ecological collaborative educational game implemented according to these notions. The authors also compare MCOE with Ecologist, an earlier non-agent version of MCOE, to illustrate the benefits of using a multiagent and mental-states approach for modelling and implementing ITS.

Chapter 7 introduces Pat, an animated pedagogical agent that infers the student's emotions in order to adapt the system to the student's affective states. The authors discuss the use of a BDI approach to infer emotions from the student's actions in the ILE interface, using an appraisal-based psychological model of emotion. The chapter also lists Pat's affective tactics, which are expressed through the emotional behaviour and encouragement messages of the agent's lifelike character. An evaluation of Pat's affective tactics with pedagogues and psychologists is also described in the chapter.

Chapter 8 is concerned with modelling the student's self-efficacy. The authors present a fuzzy-logic approach for inferring and modelling student self-efficacy, and illustrate their work by presenting IntelliWeb, an intelligent e-learning system that covers the domain of Vegetal Anatomy for undergraduate students of a Biological Sciences course. This environment is composed of two agents: (i) SEM, conceived for modelling the student's self-efficacy; and (ii) Pat, the animated pedagogical agent presented in Chapter 7, introduced in IntelliWeb to apply affective tactics that aim at increasing the student's self-efficacy. An evaluation of IntelliWeb is also presented by the authors.

Chapter 9 discusses the use of affective information about students to support an ITS's decisions about establishing pedagogical actions. The chapter is mainly concerned with inferring the student's motivation to learn. A prototype developed to test some of these ideas was implemented as an agent that is modelled through mental states and is responsible for inferring the student's affective states and choosing the pedagogical actions. This agent was introduced into the ILE Eletrotutor III for the evaluation that is also described in the chapter.

Chapter 10 presents a proposal for using probabilistic networks to model affective information about students. This model is used by an affective agent, the Social Agent, in the AMPLIA ILE in order to promote collaboration between students. AMPLIA, a probabilistic multiagent environment for the medical area, is also described in this chapter. The authors argue that probabilistic networks, that is, Bayesian networks, are a powerful tool for dealing with complex and uncertain domains, such as the medical domain. They also present experiments and results obtained by considering emotions to promote collaboration in AMPLIA.

Chapter 11 presents a set of tools for constructing multiagent-based ITS and describes a methodology for guiding ITS development. The main goal is to make the development of multiagent-based ITS more efficient and useful for both developers and authors. The chapter also describes the MATHEMA ILE as a reference model for the construction of ITS. Two case studies are presented to illustrate the advantages of the proposal.

Chapter 12 describes Leibniz and E-M@T. Leibniz is a pedagogical agent inserted in E-M@T, an ILE developed to support Calculus classes in engineering courses. The authors also review the Solidarity Assimilation Group theory that grounded the design of E-M@T. Initial results obtained from applying the prototype in real classrooms, as well as future perspectives of this work, are also presented.

Chapter 13 discusses some guidelines for modelling and designing ITS for distributed learning. The author also identifies trends that are influencing the development of intelligent tutoring systems and suggests topics for future research and development in the area.

Chapter 14 presents a multiagent approach for implementing ILEs. The proposed architecture exploits the assumption that each teaching subject can be regarded as the synthesis of elementary pieces of knowledge, each of which can be presented by an independent expert. The authors also present two implementations of the proposed architecture and discuss the advantages of using it.