This chapter describes the state of the art in testing GUI-based software. Traditionally, GUI testing has been performed manually or semi-manually, with the aid of capture-replay tools. Since this process may be too slow and ineffective to meet the demands of today’s developers and users, recent research in GUI testing has pushed toward automation. Model-based approaches are being used to generate and execute test cases, implement test oracles, and perform regression testing of GUIs automatically. This chapter shows how research to date has addressed the difficulties of testing GUIs in today’s rapidly evolving technological world, and it points to the many challenges that lie ahead.
A GUI provides a visual front-end through which a user can interact with a software application. Although there are various models for GUI design, the most commonly used in practice and in software-testing research—and hence the model assumed in this chapter—is the WIMP model with windows, icons, menus, and pointing devices (Nielsen, 1993). The GUI is made up of widgets—such as buttons, text boxes, and labels—that the user can manipulate to send input to the underlying software and the software can, in turn, manipulate to send output to the user. Each widget has a set of properties—for example, “font”, “width”, “enabled”—each of which has some value—for example, “Helvetica”, “100”, “true” (Yuan & Memon, 2007).
Widgets are contained in windows, which may either be modal or modeless. A modal window blocks the user’s interaction with other windows while it is active, whereas a modeless window imposes no such restrictions. A window’s state at any particular time is the set of all triples (w,p,v) such that w is a widget in the window, p is a property of w, and v is the value of p. The GUI state then consists of the state of all windows in the GUI (Yuan & Memon, 2007).
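The triple-based state model described above can be sketched in code. The following is a minimal illustration, not part of the original chapter; the widget and property names are hypothetical, chosen to match the examples in the text.

```python
# A window's state is the set of all (widget, property, value) triples;
# the GUI state maps each window to its window state.

def window_state(widgets):
    """Return the state of one window as a set of (w, p, v) triples.
    `widgets` maps a widget id to a dict of its properties."""
    return {(w_id, prop, value)
            for w_id, props in widgets.items()
            for prop, value in props.items()}

def gui_state(windows):
    """Return the GUI state: a mapping from window name to window state."""
    return {name: window_state(widgets) for name, widgets in windows.items()}

# Example (hypothetical names): one window containing one button widget.
main = {"okButton": {"font": "Helvetica", "width": 100, "enabled": True}}
state = gui_state({"Main": main})
# state["Main"] contains the triple ("okButton", "enabled", True), among others
```

Under this representation, a test oracle can compare two GUI states simply by set equality on each window's triples.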
As the user interacts with the GUI, the state of both the GUI and the underlying software can change. When the user performs an event on the GUI—such as clicking a button or typing in a text box—a piece of application code called an event handler is executed. The event is the basic unit of interaction with a GUI. To accomplish a task, a user typically must perform multiple events in sequence. Hence, a GUI test case consists of a sequence of events (Yuan & Memon, 2007).
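The event-sequence view of a test case can be made concrete with a short sketch. This is an assumed illustration (the event handlers and state keys are hypothetical): each event triggers a handler that transforms the GUI state, and executing a test case means firing the handlers in sequence.

```python
# Hypothetical event handlers: each takes the current GUI state and
# returns the state that results from performing the event.

def type_name(state):
    # Simulates typing into a text box.
    return {**state, "name": "Alice"}

def click_ok(state):
    # Simulates clicking a button that opens a dialog.
    return {**state, "dialogOpen": True}

# A GUI test case is a sequence of events.
test_case = [type_name, click_ok]

def execute(test_case, state):
    """Perform each event in order, threading the GUI state through."""
    for event in test_case:
        state = event(state)
    return state

final = execute(test_case, {"dialogOpen": False})
# final == {"dialogOpen": True, "name": "Alice"}
```

Note that the outcome can depend on event order: performing `click_ok` before `type_name` would exercise a different path through the application code, which is precisely why model-based techniques reason about permissible event orderings.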
Key Terms in this Chapter
System-Interaction Event: An event that either closes a window or performs some action without opening or closing any windows or menus.
Graphical User Interface (GUI): A visual front-end through which a user can interact with a software application.
GUI Test Case: A sequence of events to be performed on the GUI.
Probabilistic Event-Flow Graph (PEFG): A graph representation of a GUI that consists of an EFG whose paths are annotated with probabilities of traversal by users.
Event Semantic Interaction Graph (ESIG): A graph representation of a GUI in which vertices represent system-interaction events and an edge from event e1 to event e2 signifies that performing e1 followed by e2 results in a GUI state that is qualitatively different from the state that would have resulted had e1 and e2 been performed in isolation.
GUI State: The collection of states of all windows in the GUI, where a window’s state is the set of all triples (w, p, v) such that w is a widget in the window, p is a property of w, and v is the value of p.
Event-Flow Graph (EFG): A graph representation of a GUI in which vertices represent events and an edge from event e1 to event e2 signifies that e2 can be performed immediately after e1.
Event-Interaction Graph (EIG): A graph representation of a GUI in which vertices represent system-interaction events and an edge from event e1 to event e2 signifies that there is a path from e1 to e2 in the EFG that contains no system-interaction events other than e1 and e2.
Event: The basic unit of input to a GUI, triggered by such user actions as clicking a button or typing in a text box.
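The EFG and EIG definitions above can be illustrated with a short sketch. This is an assumed implementation, not from the chapter: the EFG is a plain adjacency map, the event names are hypothetical, and an EIG edge e1 → e2 is added whenever some EFG path from e1 to e2 passes through no system-interaction events other than e1 and e2.

```python
def eig_edges(efg, is_system_interaction):
    """Derive EIG edges from an EFG.

    efg: dict mapping each event to the set of events that may
         immediately follow it (the EFG adjacency map).
    is_system_interaction: predicate marking system-interaction events.
    """
    edges = set()
    for e1 in (e for e in efg if is_system_interaction(e)):
        # Search forward from e1 through non-system-interaction events only.
        frontier, seen = list(efg[e1]), set()
        while frontier:
            e = frontier.pop()
            if e in seen:
                continue
            seen.add(e)
            if is_system_interaction(e):
                edges.add((e1, e))       # reached e2 with no other sys event between
            else:
                frontier.extend(efg[e])  # traverse through menu/window events
    return edges

# Example (hypothetical events): "menu" opens a menu containing two
# system-interaction events, "cut" and "copy", each of which returns
# control so the menu can be opened again.
efg = {"menu": {"cut", "copy"}, "cut": {"menu"}, "copy": {"menu"}}
edges = eig_edges(efg, lambda e: e in {"cut", "copy"})
# "cut" and "copy" reach each other (and themselves) through "menu",
# so the EIG contains all four ordered pairs over {"cut", "copy"}.
```

Collapsing intermediate structural events this way is what lets the EIG focus test generation on the events most likely to interact through the underlying application code.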