Human factors assessment is a set of methods used to determine whether a product, service, or system meets the needs of its end users. These needs are measured along three dimensions: effectiveness (can the user actually accomplish the task at hand?), efficiency (can the user accomplish the task with a minimum of effort?), and satisfaction (is the user satisfied with his or her interaction with the product?).

Multimedia technology requires significantly more attention to human factors and usability because interactions among multiple modes create a more complex operating environment for the end user. This complexity can make such systems difficult for consumers to learn and use, reducing both user satisfaction and users' willingness to purchase or use similar systems in the future.

It is critically important to assess the usability of a product from the outset of the project. Although it is common to perform a summative human factors assessment at the end of development, by that point it is typically too late to act meaningfully on the results because of the cost of changing a complete or nearly complete design. It is most beneficial to begin a full human factors assessment during the concept generation phases, so that fundamental limitations of human perception and cognition can be considered before designs have been established, and to continue the assessment throughout the project lifecycle. Rigorous application of these methods helps ensure that the resulting product will achieve high user acceptance because of its superior ease of use.
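The three dimensions above can each be operationalized as a simple metric. The following sketch (not from the chapter; all participant data is invented for illustration) shows one common way to compute them from usability-test records: task completion rate for effectiveness, mean time on task for efficiency, and a mean rating for satisfaction.

```python
# Illustrative sketch: computing the three assessment dimensions
# from hypothetical usability-test data. The numbers are invented.

# One record per participant: whether the task was completed,
# time taken in seconds, and a 1-5 satisfaction rating.
sessions = [
    {"completed": True,  "time_s": 95,  "satisfaction": 4},
    {"completed": True,  "time_s": 120, "satisfaction": 5},
    {"completed": False, "time_s": 240, "satisfaction": 2},
    {"completed": True,  "time_s": 110, "satisfaction": 4},
]

# Effectiveness: proportion of participants who completed the task.
effectiveness = sum(s["completed"] for s in sessions) / len(sessions)

# Efficiency: mean time on task among successful completions only.
times = [s["time_s"] for s in sessions if s["completed"]]
efficiency = sum(times) / len(times)

# Satisfaction: mean rating across all participants.
satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"effectiveness={effectiveness:.2f}, "
      f"efficiency={efficiency:.1f}s, satisfaction={satisfaction:.2f}")
```

In practice each metric would be compared against a target set before testing (e.g., a required completion rate), but the targets themselves are a project decision, not part of the computation.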
Methods Of Human Factors Assessment
There are three major methods for gathering data for the assessment of a product: inquiry, inspection, and usability testing.
Each method has unique advantages and disadvantages that require it to be employed carefully during the project lifecycle. Specific submethods within each of these major categories are described in the following sections.
Methods Of Inquiry
Inquiry methods are those in which users of a product are asked about their experiences. If the product is already available, then inquiry methods tend to focus on the users’ previous experience with the product, especially areas in which the user feels that there are deficiencies. Ideally, however, inquiry methods are employed early in the concept design phase in order to gauge what users want and need in a particular product, as well as what they may dislike in similar or competing products. Four commonly used inquiry methods include contextual inquiry, interviews, surveys, and self-report.
In contextual inquiry, the participant is observed using the product in its normal context of use, and the experimenter interacts with the participant by asking questions prompted by that use. It is important to let the participant “tell the story” and to ask questions only to clarify or expand on behaviors of interest. Ideally, data collection takes place with the product in the environment in which the participant would actually use it, so that other relevant connections (i.e., the context of use) can be made. Bailey, Konstan, and Carlis (2001) used contextual inquiry to assess a tool that multimedia designers were using in their day-to-day development work. Their assessment found that the current tools did not support multimedia designers in the way they actually worked. Applying the lessons learned through this analysis, they developed specialized software specifically for multimedia designers. For a complete description of the general method, see Beyer and Holtzblatt (1998).
Interviews are a popular method of obtaining information from a set of users. Interviews are best done when contextual inquiry is impractical or cost-prohibitive. For example, it’s difficult to perform contextual inquiry with a participant who is immersed in a fast-paced multimedia game. In this case, pre-use and post-use interviews would be a better choice. Additional information about interview techniques can be found in Weiss (1995).
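Survey-based inquiry often relies on standardized questionnaires rather than ad hoc questions, since standardized instruments allow comparison across products and studies. As a hedged illustration (the chapter itself does not name a specific instrument), the sketch below scores the System Usability Scale (SUS), a widely used ten-item survey: odd-numbered items are positively worded and contribute (response − 1), even-numbered items are negatively worded and contribute (5 − response), and the raw sum is scaled by 2.5 to yield a 0–100 score. The example responses are invented.

```python
# Hedged illustration: scoring the System Usability Scale (SUS),
# a standard ten-item usability questionnaire. Responses invented.

def sus_score(responses):
    """responses: ten integers from 1 (strongly disagree) to
    5 (strongly agree), in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses):
        # Index 0, 2, ... are the positively worded odd-numbered items;
        # index 1, 3, ... are the negatively worded even-numbered items.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

A single SUS score says little on its own; it is typically averaged across participants and compared against published norms or against scores for earlier versions of the same product.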
Key Terms in this Chapter
Checklist: A predetermined list of usability criteria that can be used to perform a human factors assessment.
Ethnography: A method of collecting user data in which users are observed using the product in their natural environment. Ethnography differs from contextual inquiry in that there is limited interaction with the user in ethnography.
Cognitive Walkthrough: A method of evaluating the usability of a product in which a human factors expert reviews the steps and processes of completing a specific task and notes deficiencies in both the interface and the sequence for that task.
Coding Scheme: A technique that allows a researcher to quantify qualitative data in a form that lends itself to quantitative analysis. The technique is frequently used in cases where verbal data needs to be analyzed.
Usability: A term that describes the ability of a human to make a product or system perform the required functions with sufficient efficiency, effectiveness, and end user satisfaction.
Contextual Inquiry: A method of gathering data in which a human factors expert interacts with a user and product in the actual context of use of that product. It differs from ethnography in that there is significant interaction between the user and the expert in contextual inquiry.
Usability Testing: A method of evaluating the usability of a product in which participants are observed in a laboratory using the product while they try to complete specific tasks.
Task Analysis: A method of determining how a multimedia product works by determining the exact procedural steps that must be performed in order to complete a given task.
Telemetry: A method of collecting user-generated data that does not directly involve watching participants use a product. In telemetry, data is collected automatically and remotely and then reviewed at a later time.
Heuristic Evaluation: A method of determining the usability of a product by having a human factors expert review the product against a set of known usability principles.