Case Study: Managing Content Quality and Consistency in a Collaborative Virtual World

Kent Taylor
DOI: 10.4018/978-1-60566-996-4.ch015

Abstract

The application of quality management tools in the content development process provides a range of benefits to writing, production, and program teams. This case study of a Natural Language Processing (NLP)-based information quality management solution developed by acrolinx® GmbH describes the results that real-world virtual collaborative writing practitioners have realized, and provides a roadmap for applying quality management strategies within writing organizations. When information products have consistent style, voice, terminology, and brand identification no matter where, when, or by whom the materials are written, they are easier to read, understand, translate, and use. Quality management tools support collaboration within writing teams by centralizing access to the standards as writers are creating content, and by providing objective quality metrics and reports at handoff points in the information supply chain. This process ensures consistency and clarity across information products, which makes them easier for writers to develop and for customers to use.

Introduction

First, a bit of personal background will set the stage for this chapter and offer some perspective. An engineer by birth defect, a manager by accident, and a quality zealot by choice, I am currently a recovering Tech Pubs manager, with 20 years of experience at a large equipment manufacturing company and six years consulting in enterprises with large publishing operations. I have experienced just about every conceivable way to do everything associated with information development and delivery: the good, the bad, and the ugly. I have instigated and lived through a number of traumatic transitions, starting with the shift from pencils and blue-line pads to green-screen input terminals, up through large-scale, all-electronic, multimedia, multi-language simultaneous shipment (sim-ship) operations. In the course of these transitions, I have deployed generic coding (GenCode), Standard Generalized Markup Language (SGML), Extensible Markup Language (XML), machine translation, and processes based on the ISO 9000 quality management and quality assurance standards, along with those related to content management systems (CMSs) and single sourcing. While the transitions to some of these technologies were more difficult than others, nearly all of them eventually resulted in improvements in productivity and throughput.

The deployment of ISO 9000 in the information development arena was an exception, however. After months of planning and startup activity, we had a general sense that quality was improving, but we could not demonstrate any significant productivity gains. This surprised me, as I had seen stunning results in a previous assignment managing a large, complex manufacturing operation. In the manufacturing environment, we were able to establish strict quality standards, measure conformance to those standards objectively at every workstation, provide feedback to the operators (quality assurance, or QA), and employ an independent quality department to test a statistical sample of our output against the standards (quality control, or QC). Quality, productivity, cycle time, and yield all improved shortly after we started providing feedback to the operators; scrap and unit cost both plummeted.

This positive outcome did not occur when implementing ISO 9000 in the information development environment because, in my opinion, the writers lacked real-time objective, actionable metrics and feedback. In the manufacturing environment, employees got real-time feedback about how their work conformed to standards—generally automated dimensional and electrical measurements. And managers got summary reports indicating the types and numbers of defects encountered each day; if a specific type of error peaked on any given day, the cause could be identified, and corrective action taken to resolve the issue.

This level of monitoring and tracking was not feasible in the information development environment of the early 1990s. While our editing team provided feedback to the writers in the form of marked-up documents, it generally came weeks after the writing activity, and it was subjective, actionable only in the sense that the editors' comments, additions, changes, and deletions had to be incorporated into the documents. Summary reports to managers consisted only of the number of documents and pages processed, the time spent, and the associated costs. Such activities are examples of the serial and parallel collaboration described in Chapter 1 of this book.

It appeared that the triad of cost, timeliness, and quality was impossible to achieve. Given real-world constraints, writing managers invariably focus on cost and timeliness and sacrifice quality, making uneven quality a fact of life in the world of information development; the cost of manually collecting actionable metrics at the individual document and writer levels was simply prohibitive. That reality was 15 years ago, however, and breakthroughs in Internet-based (sometimes called cloud) computing and the field of Natural Language Processing (NLP) have changed the game. NLP employs sophisticated algorithms to analyze language at the word and sentence level. Software systems based on NLP technology can analyze text much the way a human copy editor does, checking it for conformance to established standards for spelling, grammar, style, and terminology. Thus, writers can receive real-time feedback on language rules via spell-check-like highlighting and guidance in their native authoring environment, and managers can get aggregated summary reports on demand.
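To make the idea concrete, the Python sketch below shows rule-based language checking in miniature. It is a minimal illustration, not the acrolinx implementation: the terminology pairs, the sentence-length threshold, and the check_text function are hypothetical, and a commercial NLP engine applies far deeper linguistic analysis than these surface-level pattern matches.

    import re

    # Hypothetical, simplified rules; a real NLP engine parses the text
    # linguistically rather than matching surface patterns.
    TERMINOLOGY = {
        "e-mail": "email",          # deprecated term -> preferred term
        "log in to": "sign in to",
    }
    MAX_SENTENCE_WORDS = 25         # style rule: flag overly long sentences

    def check_text(text):
        """Return a list of (rule type, message) findings for one passage."""
        findings = []
        # Terminology check: flag deprecated terms and suggest replacements.
        for deprecated, preferred in TERMINOLOGY.items():
            if re.search(r"\b" + re.escape(deprecated) + r"\b", text, re.IGNORECASE):
                findings.append(("terminology",
                                 f"use '{preferred}' instead of '{deprecated}'"))
        # Style check: flag sentences that exceed the length threshold.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            count = len(sentence.split())
            if count > MAX_SENTENCE_WORDS:
                findings.append(("style",
                                 f"{count}-word sentence exceeds the "
                                 f"{MAX_SENTENCE_WORDS}-word limit"))
        return findings

    print(check_text("Send an e-mail to the team before you log in to the console."))

In a deployed system, findings like these would be highlighted inline in the writer's authoring environment, and the same counts would be rolled up into the on-demand summary reports that managers see.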
