Computers are often viewed as machines that run algorithms. This view covers a vast range of devices, from simple computers that perform limited and specific computations (for example, the computer found in a wristwatch) to supercomputers, which are groups of computers linked together in homogeneous and heterogeneous clusters to serve a vast array of computational needs. In between these extremes lie a variety of machines, from personal computers to embedded devices dedicated to serving a variety of functions. Let us call this perspective of computers as machines that run algorithms “mechanistic.” Through the mechanistic lens, the computer is an artifact, a word derived from the Latin arte, meaning “skill” or “craft,” and facere, meaning “to do or make.” An artifact is a thing that is skillfully or artfully made; the computer, then, is a skillfully made machine for purposes of computation.
While the mechanistic view is irrefutable, it is also incomplete because it fails to consider contextual definitions of technology. One way of seeing the context of computing is to broaden the definition to consider the initial need for the innovation, as well as consequent processes such as the conceptualization, design, development, implementation, use, diffusion, adaptation, evolution, maintenance, and disposal of computing. We will call this view the “social context influencing technology” perspective, shown in figure 1. This view assumes that such innovation processes arise from social needs and therefore positions computing as intrinsically social and humanistic. It also suggests that technology occurs in a social milieu – a context – wherein the context is a set of interrelated conditions, including social, cultural, and physical elements, that form an environment, a circumstance, if you will. For example, in a society where efficiency and productivity are highly valued norms, one would expect to see different technology innovations and adoptions than in a society less concerned with efficiency and productivity (Bimber, 1990). The environment and its constituent elements in which technology is conceived, designed, developed, implemented, used, and evolved thus become factors that shape those very processes.
An excellent example of how social context shapes technology innovation is provided by Cowhey, Aronson, and Richards (2009), who describe how the political climate changed the US information and communication technology (ICT) architecture. The social context that Cowhey, Aronson, and Richards describe highlights how the division of powers, the majoritarian electoral system, and federalism made possible the formulation of a strong competition policy. The effects of this on the ICT architecture were threefold: (1) it enabled the architectural principle of “modularity,” as multiple companies entered the marketplace, each making a “portion” of the goods that today comprise the Internet; (2) it created multiple network infrastructures for telecommunications, in contrast to other countries that either tried to retain a monopoly infrastructure or purposefully limited the number of competitors; and (3) it propelled both a particular architecture for computing (intelligence at the edge of the network) and the full realization of the potential benefits of the Internet (Cowhey, Aronson, and Richards, 2009).
In contrast to the “social context influencing technology” perspective is the view that technology shapes or influences social context. In this view (figure 2), technology is an agent that possesses active power, and perhaps even cause, and is capable of producing effects. The “technology influencing social context” perspective focuses on the manner in which technologies function as agencies in social functioning, change, and structure.
In an article ahead of its time, Moor (1985) offers an example of how technology can influence social context when he discusses how a program written to computerize airline reservations favored American Airlines by suggesting its flights first, even when the American Airlines flight was not the best flight available. This example highlights how technology may effect social change in a manner that advantages American Airlines but disadvantages the consumer. However, there are numerous examples where technology acts as a social agent for the betterment of human life. Take, for example, advances in health care. Today, information technology is being used to provide telecare and telehealth services to citizens in their homes. These technologically mediated solutions promise many potential benefits, including improved quality of life, cost savings, quality of service, and accessibility of service. Telecare, for example, offers elderly persons (1) the opportunity to age in place, which is widely known to be preferred by most older persons, (2) increased independence for the individual, and (3) an expansion of the possible caregiving group to more easily include friends, family, and neighbors, as well as health care staff. Telecare also holds promise for a variety of cost savings, such as reduced travel costs for caregivers, where the time saved can be redirected toward offering improved care.
The perspectives in figures 1 and 2 are useful in helping us conceive of our world. Using models such as these to partition and delineate relationships helps us order and make sense of the world around us. These are sense-making tools. Practically speaking, such delimitations are necessary so that the human mind can explore, define, and analyze phenomena. However, we need to be mindful that such partitions are artificial. Like technology, these boundaries are also artifacts. They do not reflect reality; rather, they are conceptual boundaries that we impose (whether in our minds or on paper) owing to our own limitations in comprehending totality. This book aims to cultivate awareness and questioning of these conceptual boundaries in readers’ minds. Greater awareness should result in better preparation of information assurance and security professionals and consequently enable them to contribute to more socially robust and responsible endeavors.
This book is for students and practitioners in the rapidly growing field of information assurance and security. Early in the germination process, this book was going to catalogue the ethical issues that are of importance in today’s online environment, e.g., privacy, access, ownership, security, cybercrime, and so on. However, several other books have taken this approach, focusing on ‘what’ the issues are and how they are exacerbated by the ubiquity and pervasiveness of information technologies (for example, see Johnson, 2000; Tavani, 2006). It was not my desire to duplicate what others have already done masterfully. Instead, this book deliberates on some of the ethical and social issues in information assurance and security. Chapters in this book address issues of privacy, access, safety, liability, and reliability in a manner that asks readers to think about how the social context is shaping technology and how technology is shaping social context and, in so doing, to rethink conceptual boundaries.
This book assumes a complex adaptive systems perspective regarding these ethical issues in information assurance and security by elucidating ways in which ethical issues (such as privacy, access, and ownership) lie at the intersection of information technology, policy, culture, and economics – all of which are systems with several associated subsystems. What this book aims to inculcate in the minds of readers is that issues of information assurance and security ethics are (1) co-constitutive, i.e., technology and social context co-adapt; (2) complex, meaning there are actually several arrows of influence rather than one; and (3) emergent, which suggests that these relationships are dynamic in uncertain ways.
Information assurance and security is inherently normative, dealing with weighty social and ethical issues. The core of information assurance and security ethics includes questions such as these: What ought systems do in order to preserve privacy? To whom should access be granted? Who should be responsible for the harm and risk of software security flaws? Should we have predictive insider threat monitoring? However, I need to be perfectly clear: this book will not offer answers to any of these questions. Not because answers are not desired, but because answers are not easy to come by. The social implications of questions such as these demand deliberative participation, which occurs slowly. Social decisions about multiple goals call for participatory control, which needs to occur transparently. Furthermore, in the large sense, what “ought to be” is akin to a journey without a destination. And while there is no preformulated state of balance, no foregone conclusion, the ideals of the common good and human flourishing are undeniable and ageless.
At the nexus of information assurance and security ethics are several complex systems. This book aims to reveal some of this complexity in the belief that more fully comprehending the problem space is more important than moving prematurely toward naïve solutions. This book asks readers to contemplate the role of existing norms in influencing what should be moving forward. This book extends beyond technical systems to include how, for example, political, cultural, and economic systems shape and interact with technical systems, and what this suggests for information assurance and security ethics. It is my hope that in reading this book, readers will question – and then question again – where system boundaries lie. I hope that readers come to understand, for example, that the responsibility for the risk and harm of software security flaws is as much an economic challenge as a technical one. I hope that readers reflect on how peer-to-peer networks are acting as agents of social change in the intellectual property milieu and contemplate how the field of information assurance and security will change given advances in pharmacogenomics and personalized medicine. If at the end of this book readers feel that information assurance and security ethics is messier and more vexing than originally perceived, then this book will have achieved its goal. If at the end of this book readers feel more committed to why they chose information assurance and security as a field of study and their professional calling, then this book will have exceeded its goal.
As readers, you need to know that this book is grounded in constructivism, as opposed to rationalism or empiricism. What does this mean? Epistemologically, rationalists hold that knowledge is true or verifiable when what one knows corresponds to objective reality. For example, human beings breathe oxygen and exhale carbon dioxide. This is an objective reality. I can teach my child that humans breathe oxygen and exhale carbon dioxide and then test her knowledge thereof. If she knows this, then her knowledge reflects the world; ergo, it is rational. Empiricism holds that knowledge is true when it can be observed through the senses. For example, my daughter is getting ready to start her seventh-grade science fair project testing the effects of acid rain on aquatic plants. She will conduct her experiment by attempting to cultivate various aquatic plants in water with increasing doses of acid and observing the effects. The difference between rationalism and empiricism is, in effect, the degree to which sense experience is the source of knowing. Rationalists contend that some knowledge cannot be perceived through the senses, yet irrefutably exists.
The chapters in this book partly rely on rationalism and empiricism; yet we all need to remain mindful that full knowledge of objective reality is unattainable – a key tenet of constructivism.
Constructivism recognizes that even the most elaborate theories of objective reality are through the mind’s eye. As Piaget (1954, p. 400) noted, human “intelligence organizes the world by organizing itself.” Our observations can never be independent of us. The interesting twists come when the observations we wish to make are observations about ourselves, what should be, and aspirations of one’s contribution toward what should be. Here reality takes a different form; it is what we are working toward, not what is. This isn’t to say that people do not work from experiences in formulating ideas about what should be – we do. It is just that our ideas about what should be can never fit reality – we humans are perpetually in the act of becoming. Here constructivism suggests that knowledge needs to be relevant and fitting to the context and circumstances, which are both external and internal. Questions of ethics require learning about our own ethics as context and circumstance, as well as the external context and circumstance. This book asks readers to engage in this reflection.
It might be useful for readers of this book to adopt a figure-ground practice with regard to their perception. You likely know of figure-ground phenomena; the concept of figure-ground is perhaps best known in the field of visual perception. In vision, figure-ground is a type of perceptual organization that involves the assignment of edges to regions for purposes of shape determination, determination of depth across an edge, and the allocation of visual attention. One of the best-known examples of figure-ground in vision is the faces-vase drawing popularized by Gestalt psychologist Edgar Rubin (see figure 3).
What is figural (either the faces or the vase) at any one moment depends on patterns of sensory stimulation and on the momentary interests of the perceiver. If the edges (the boundary) are perceived inward, then the perceiver sees a white vase against a black background. In contrast, if the edges are perceived outward, then the perceiver sees two black profiles against a white background. Both are valid.
Because they so aptly convey the human condition, figure-ground phenomena are also present in music and literature, including folklore. Consider this Russian joke: a guard at the factory gate saw a worker walking out with a wheelbarrow full of straw at the end of every workday. And every day the guard thoroughly searched the contents of the wheelbarrow but never found anything but straw. One day he asked the worker, “What do you gain by taking home all that straw?” and the worker replied, “The wheelbarrow.” The illusion, you see, is that we are accustomed to thinking of the load of straw as the “figure.” At first consideration, one assumes that the wheelbarrow is only an instrument, and therefore it is relegated to the “ground” in the mind.
Figure-ground relationships are an important element of the way we organize reality in our awareness, which is at the heart of this book. This book then is about straw and wheelbarrows, about shifting attention from figure to ground or, rather, about turning into figure what is usually perceived as ground and then back again. Question your assumptions about figure and ground vigilantly – as they pertain to the world, to yourself, and to you in the world.
Chapter one of this book aims to help readers think about and develop a conceptual framework that serves the purpose of helping readers construe, question, and reconstrue their interpretive system about what should be. As chapter one positions questions of ethics in the individual mind, chapter two offers readers an historical view and in so doing serves to remind us all that ethics has a long and rich foundation from which one can build. Chapter three reinforces Harter’s point in chapter two that while ethics is very old, it is forever new and invites readers to join the dialog.
Chapter four serves to remind readers that our own individual experiences are not the only filter we use in formulating judgments about right and wrong. In a study of international ethical attitudes and behaviors, Yates and Harris probe the role of culture in shaping perceptions about right and wrong use of information and information systems. While their findings are discussed in the context of information security policies for multinational organizations, the chapter’s second purpose is to acknowledge that notions of right and wrong are often solidly grounded in group norms. Chapters five through nine present the following contemporary and emerging issues in information assurance and security: peer-to-peer networking, software security, predictive insider threat monitoring, behavioral advertising, and pharmacogenomic testing. Each chapter offers a critical analysis of ethical issues by looking at the interplay of technology, policy, and economic systems.
Chapters 10 and 11 offer two different glimpses of public sector involvement. Chapter 10 considers the competing interests of privacy versus public access to e-government services and public information. As information technology has become more ubiquitous and pervasive, assurance and security concerns have escalated; in response, we have seen noticeable growth in public policy aimed at bolstering cyber trust. With this growth in public policy, questions arise regarding the effectiveness of these policies. While public policy aims to ameliorate a social problem or need, public policy does not occur in a vacuum; it arises in a context that has implications for the policy outcomes we observe. Chapter 11 offers a retrospective and prospective look at data breach disclosure laws in the United States as a way of introducing readers to the broader context of public policy in information assurance and security.
I hope to create for readers a space where they reflect on the role of ethics in information assurance and security in the absence of certainty. This book asks the reader to engage in a conversation about the mutually adaptive relationships of information and information technology with ethics, morality, and emotional life, and to consider these entities in the context of their vitality to sustaining society. This book seeks to shed light on false, insufficient, and/or useless distinctions between science and humanistic endeavors. Its goal, instead, is to provide a lens by which the adaptive relationships between information technology and human flourishing can be considered in meaningful and sustainable ways.
A final word: the subject of this book – information assurance and security ethics in complex systems – requires patience. Considering complex adaptive systems requires taking multiple vantage points and necessitates tolerance for uncertainty because adaptive systems are dynamic by nature – they vary, they evolve, and they emerge. For the reductionist or the rationalist, this can be frustrating. Impatience with messiness and imperfection has no seat at this table. Engaging in analysis of complex and adaptive systems does not reduce the number of questions one asks and attempts to answer. On the contrary, it produces more questions. The healthier mindset, then, is to abandon the quest for certainty and adopt a learning mentality, at both the individual and analytical levels, where the former is perhaps requisite to the latter. Knowing is an ongoing adaptive process in which objectivity and subjectivity emerge and continually evolve. The knower, like complexity itself, can be characterized as non-linear, sensitive to contextual conditions, and unpredictable. Your epistemology is not only a part of the complexity; it is also a part of the dynamic interactions. The nature of reality and its dynamics are complex, and the knower is a part of that complexity. It is not the quest of this book to divorce subjectivity from objectivity in pursuit of the latter. As the essayist Henry David Thoreau said, “Live your beliefs and you can turn the world around.”
Bimber, B. (1990). Karl Marx and the three faces of technological determinism. Social Studies of Science, 20(2), 333-351.
Cowhey, P., Aronson, J., and Richards, J. (2009). Shaping the architecture of the U.S. information and communication technology architecture: A political economic analysis. Review of Policy Research, 26(1-2), 105-125.
Johnson, D. (2000). Computer ethics. Upper Saddle River, NJ: Prentice Hall.
Moor, J. (1985). What is computer ethics? Metaphilosophy, 16(4), 266-275.
Piaget, J. (1954). The construction of reality in the child (M. Cook, Trans.). New York: Ballantine.
Tavani, H. (2006). Ethics and technology: Ethical issues in an age of information and communication technology. Hoboken, NJ: Wiley.