RDMTk: A Toolkit for Risky Decision Making

Vinay Gavirangaswamy, Aakash Gupta, Mark Terwilliger, Ajay Gupta
DOI: 10.4018/IJCINI.2019100101

Abstract

Research into risky decision making (RDM) has become a multidisciplinary effort. Conversations cut across fields such as psychology, economics, insurance, and marketing. This broad interest highlights the necessity for collaborative investigation of RDM to understand and manipulate the situations within which it manifests. A holistic understanding of RDM has been impeded by the independent development of diverse RDM research methodologies across different fields. There is no software specific to RDM that combines experimental paradigms and analytical tools built on recent developments in high-performance computing technologies. This paper presents a toolkit called RDMTk, developed specifically for the study of risky decision making. RDMTk provides a free environment that can be used to manage globally based experiments while fostering collaborative research. The incorporation of machine learning and high-performance computing (HPC) technologies in the toolkit opens up further possibilities, such as scalable algorithms for the big data problems arising from global-scale experiments.

1. Introduction

Decision Making (DM), and in particular Risky Decision Making (RDM), has become a cross-disciplinary field of interest. Many people ask, “Why do some people have better life outcomes than others?” The economist might ask, “Why do some people make certain financial decisions?” Psychologists wonder, “Why do certain people have a higher appetite for risk than others?” The organizational behaviorist asks, “How will these decisions affect the organization?” Neuroscientists wonder, “Are there certain areas of the brain that are tied to risky decision-making processes?” The computer scientist asks, “Can networks predict when actors will make risky decisions?” These are just a few of the many ways in which risky decision making has become a widely studied phenomenon. Unfortunately, this analysis is often done independently across different fields, using disparate toolkits and disparate analysis methods, with only moderate cross-disciplinary cross-pollination. There are no universally accepted tools and measures for risky decision making. Historically, assessments of risky decision making were made using self-reports as measuring instruments.

As the field has evolved, data from previously published research has shown that not all individuals possess the ability to assess and accurately report on their own behavior. Our assumption is that, under uncertain situations, even the most sophisticated response formats, including multiple-choice, multiple-selection, short and extended constructed-response, and performance tasks, are inadequate to account for the complex cognitive processes involved in the decision. Diagnostic instruments constructed in laboratories offered limited relief, at the cost of smaller test-subject pools. Since then, the use of experimental manipulation has become prevalent. For example, psychologists would often present participants with hypothetical decision games (Cronbach & Meehl, 1955). Economists began to present participants with lists of hypothetical scenarios (Kahneman & Tversky, 1977). These experimental manipulations naturally found their way to computerized interfaces, which enabled ease of use in manipulation, data collection, and scale. More recent developments show that test takers prefer internet-based assessments over paper-based ones (Schweizer & DiStefano, 2016, Chapter 7). Virtualized versions of in-laboratory measurement techniques, built on computer technologies, offer cost-effective and enhanced replicas of those instruments.

Reservations about incorporating computer and information technologies into psychometric measurement were much debated during the 1980s and 1990s, particularly concerning what was possible and how accurate the results were compared to paper-based methods. The situation has changed today, as computers are now available in far more varied forms, such as smartphones, tablets, and surface and touch devices, than in previous decades (Schweizer & DiStefano, 2016). With the widespread acceptance of computer usage, the International Test Commission, Inc. (ITC) (International Test Commission, Inc., 2017) has developed guidelines for computer-based and internet-based testing in psychometric assessments.

As a result, the number of techniques and packages exploded. E-Prime (Schneider & Zuccoloto, 2007), Inquisit 4 (Draine, 1998), MouseLabWeb (Willemsen & Johnson, 2004), PEBL (Mueller, 2009), PsychMate (Eschman et al., 2005), PsychoPy (Peirce, 2007), Presentation (NeuroBehavioralSystems, 2015), SuperLab, SurveyWiz and FactorWiz (Birnbaum, 2000), Visual DMDX (Garaizar & Reips, 2015), Webexp2 (Keller et al., 1998), and WEXTOR (Reips & Neuhaus, 2002) all provide tasks that can be used for RDM analysis experiments. There are also several laboratories, such as the Laboratory for Cognitive and Decision Sciences (Pleskac, 2015), the Laboratory of Biological Dynamics and Theoretical Medicine (Paulus, 2015), and The Brain and Mind Research Institute, that have historically focused on developing such tools. Experimenters often pair these tools with specialized add-ons. For instance, The Black Box Toolkit (Plant, 2015) is designed to give psychology researchers precise timing control and tracking in order to remove timing errors.
