Building systems that are correct by design has always been a major challenge of software development. Typical software development approaches, and interactive systems development approaches in particular, are based on the notion of prototyping and testing. However, except for simple systems, testing cannot guarantee the absence of errors, and, in the case of interactive systems, testing with real users can become extremely resource-intensive and time-consuming. Additionally, by the time a system reaches a prototype stage amenable to testing, many design decisions have already been made and committed to. Indeed, in an industrial setting, user testing can become useless if it is carried out when time or money is no longer available to substantially change the design. To address these issues, a number of discount techniques for usability evaluation of early designs have been proposed; two examples are heuristic evaluation and cognitive walkthroughs. Although their effectiveness has been a subject of debate, reports show that they are used in practice. These are largely informal approaches that do not scale well as the complexity of the system (or of the interaction between system and users) increases. In recent years, researchers have started investigating the applicability of automated reasoning techniques and tools to the analysis of interactive systems models, in the hope that these tools will enable more thorough analysis of designs. The challenge is how to fold human factors issues into a formal setting such as that created by the use of such tools. This article reviews some of the work in this area and presents some directions for future work.