Coordinating Nomadic Evaluation Practices by Supporting the Emergence of Virtual Communities

Marianne Laurent
ISBN13: 9781609608699|ISBN10: 1609608690|EISBN13: 9781609608705
DOI: 10.4018/978-1-60960-869-9.ch003

MLA

Laurent, Marianne. "Coordinating Nomadic Evaluation Practices by Supporting the Emergence of Virtual Communities." Virtual Community Building and the Information Society: Current and Future Directions, edited by Christo El Morr and Pierre Maret, IGI Global, 2012, pp. 29-48. https://doi.org/10.4018/978-1-60960-869-9.ch003

APA

Laurent, M. (2012). Coordinating Nomadic Evaluation Practices by Supporting the Emergence of Virtual Communities. In C. El Morr & P. Maret (Eds.), Virtual Community Building and the Information Society: Current and Future Directions (pp. 29-48). IGI Global. https://doi.org/10.4018/978-1-60960-869-9.ch003

Chicago

Laurent, Marianne. "Coordinating Nomadic Evaluation Practices by Supporting the Emergence of Virtual Communities." In Virtual Community Building and the Information Society: Current and Future Directions, edited by Christo El Morr and Pierre Maret, 29-48. Hershey, PA: IGI Global, 2012. https://doi.org/10.4018/978-1-60960-869-9.ch003


Abstract

Research and development on spoken dialog systems embraces technical, user-centered, and business-related perspectives. It brings together stakeholders from distinct job families, each prone to different traditions and practices. When assessing their contributions, as well as the final solution, they conduct highly nomadic evaluation protocols. As a result, the field is eager to establish norms for evaluation, and contributions toward this goal abound. However, despite standardization exercises, we believe that the absence of common conceptual foundations and of dedicated “knowledge creation spaces” frustrates the effort of convergence. The chapter therefore presents an application framework meant to rationalize the design of evaluation protocols within and across project teams. This Multi Point Of VieW Evaluation Refine Studio (MPOWERS) enforces common models for the design of evaluation protocols. It aims at facilitating, on the one hand, the individual evaluator-user’s task and, on the other hand, the emergence of (first virtual, then perhaps real) communities of practice and multidisciplinary communities of interest. It illustrates how implementing shared knowledge frameworks and a common vocabulary for unambiguous asynchronous discussions can support the emergence of such virtual communities.
