Coordinating Nomadic Evaluation Practices by Supporting the Emergence of Virtual Communities

Marianne Laurent
DOI: 10.4018/978-1-60960-869-9.ch003

Abstract

Research and development on spoken dialog systems embraces technical, user-centered, and business-related perspectives. It brings together stakeholders from distinct job families who are therefore prone to different traditions and practices. When assessing their contributions, as well as the final solution, they follow highly nomadic evaluation protocols. As a result, the field is eager to establish evaluation norms, and contributions toward this goal abound. However, despite standardization exercises, we believe that the absence of common conceptual foundations and of dedicated “knowledge creation spaces” frustrates the effort of convergence. This chapter therefore presents an application framework meant to rationalize the design of evaluation protocols within and across project teams. The Multi Point Of VieW Evaluation Refine Studio (MPOWERS) enforces common models for the design of evaluation protocols. It aims to facilitate, on the one hand, the individual evaluator-user's task and, on the other hand, the emergence of (first virtual, then perhaps real) communities of practice and multidisciplinary communities of interest. It illustrates how shared knowledge frameworks and a vocabulary for unambiguous asynchronous discussions can support the emergence of such virtual communities.
Chapter Preview

Introduction

Need for a Convergence of Evaluation Practices

The success of a product or service design generally cannot rely on the sole accumulation of isolated elementary contributions. In his analysis of the Renault Twingo's groundbreaking project, Midler (1995) illustrates that the exterior and interior designers were not solely responsible for its success. While the latter provided key ingredients of the car's personality, the various engineers and stylists, the purchasers and suppliers involved in the design-to-cost operation, and the industrial and commercial teams that came up with original production and distribution processes are also accountable for the successful outcome. As a matter of fact, the design of products and services brings together stakeholders with various expertise, roles, and therefore points of view on the project. They need to measure their contribution to the system design, both between versions and against competing solutions. They conduct instrumented evaluations (e.g., noise and consumption are measured and compared against requirements) in parallel with experimental setups and questionnaires that allow stakeholders to express their subjectivity (e.g., presentation of models, prototype trials, project reviews). These methods encompass technical, user-centered, and business-related outlooks. Accordingly, Midler warns that this diversity of coexisting evaluation practices may disappoint anyone looking for a straightforward recipe for project evaluation.

Multidisciplinary projects bring together very different dictates of evaluation, inferred and generalized from the team members' past experience. Unlike traditional hierarchical organizations, transverse project groups cannot abide by established rules inherited from their respective job-family silos. Instead, they combine and accommodate various traditions. This requires recognizing the coexisting norms and policies, understanding why they are endorsed, and determining to what extent they can be negotiated.

Meanwhile, such nomadism of practices leads to poor reusability of evaluation protocols from one project to another, difficult comparison of performance across projects, and a lack of credibility when communicating about systems' performance. The field therefore calls for a convergence of practices toward more transparent and widely accepted metrics, which would both serve as an authority for service commensurability and lower the evaluation effort so that teams can concentrate on service design.
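To make the idea of commensurable metrics concrete, the following minimal sketch illustrates how a shared vocabulary for evaluation metrics could let two project teams identify which measures they can legitimately compare. It is not taken from the chapter or from MPOWERS itself; the class and field names (Perspective, Metric, EvaluationProtocol) are illustrative assumptions only.

# Hypothetical sketch (not the MPOWERS model): a minimal shared vocabulary
# for describing evaluation metrics, so that protocols can be reused and
# compared across projects. All names and fields are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class Perspective(Enum):
    """Point of view from which a metric is defined."""
    TECHNICAL = "technical"
    USER_CENTERED = "user-centered"
    BUSINESS = "business"


@dataclass
class Metric:
    """A named measure with an explicit perspective and unit."""
    name: str
    perspective: Perspective
    unit: str
    higher_is_better: bool = True


@dataclass
class EvaluationProtocol:
    """A reusable protocol: a project name plus the metrics it measures."""
    project: str
    metrics: list[Metric] = field(default_factory=list)

    def shared_metrics(self, other: "EvaluationProtocol") -> list[str]:
        """Return metric names defined in both protocols, i.e. the only
        measures on which the two projects can be directly compared."""
        return sorted({m.name for m in self.metrics} & {m.name for m in other.metrics})


# Usage example: two project teams can only be compared on metrics they share.
p1 = EvaluationProtocol("voice-banking", [
    Metric("task success rate", Perspective.USER_CENTERED, "%"),
    Metric("word error rate", Perspective.TECHNICAL, "%", higher_is_better=False),
])
p2 = EvaluationProtocol("travel-assistant", [
    Metric("task success rate", Perspective.USER_CENTERED, "%"),
])
print(p1.shared_metrics(p2))  # ['task success rate']

The point of such a shared descriptor, under these assumptions, is that comparison across projects becomes a mechanical lookup over a common vocabulary rather than a negotiation repeated for every evaluation.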
