Technological Social-ism

Judson Wright (Pump Orgin Computer Artist, USA)
DOI: 10.4018/978-1-60566-352-4.ch020


Culture is a byproduct of our brains. Moreover, we’ll look at ways culture also employs ritual (from shamanistic practices to grocery shopping) to shape neural paths, and thus to shape our brains. Music has a definite, well-researched role in this feedback loop. The ear learns how to discern music from noise in the very immediate context of the environment. This serves more than entertainment purposes, however. At a glance, we can often discern visual noise from images, nonsense from words. The dynamics are hardly unique to auditory compositions. There are many kinds of compositional rules that apply to all of the senses and well beyond. The brain develops these rule sets specific to the needs of the culture, and in order to maintain it. These rules, rarely articulated, are stored in the form of icons: somewhat abstracted, context-less abbreviations open to wide interpretation. It may seem amazing that we can come up with compatible rules by reading these icons from our unique personal perspectives. And often we don’t, as we each have differing tastes and opinions. However, “drawing from the same well” defines abstract groupings, to which we choose to subscribe. We both subscribe to and influence the rule-sets we use to filter our perceptions and conclusions. But the way we (often unconsciously) choose is far more elusive and subtle.
Chapter Preview


Language may have both a hard-wired component in our DNA and a learned component (Chomsky, 1977). Neither is operable without the other. Or at least we don’t get language without both. This is a debatable theory, yet very useful to us. If spoken languages can be thus constructed and understood, it seems sensible that non-verbal languages could also follow this organization. Fundamentally, each is a means of using symbols to represent ideas we want to transfer from our minds into another’s (Calvin, 1996a).

Furthermore, it appears likely that where music operates neurologically on a (non-verbal) linguistic level, it too is organized in this dual fashion. Music, too, obeys fundamental laws while being influenced by the immediate culture, and influencing it in turn. Music serves cultural cohesion on a neuro-level, as shown in many modern fMRI studies (Levitin, 2006; Doidge, 2007). In many cases, music, culture, and the neural result are inextricable (Huron, 2008).

The way humans use musical instruments extends well beyond providing entertainment. We choose to employ a drum to speak to our own minds and the minds within others (more about this later), even if we do so with no knowledge of it, or no intention of manipulating brain waves. But essentially, we mustn’t forget that these instruments are tools at our disposal; they are also technology, whether old or new. However we use the net, it is also a tool. Its ultimate product, “cyberspace,” shares many important features with music and cultural cohesion as well. Thus our big question becomes: if culture informs the web and vice versa, to what neurological end is it being used?

In other words, asking a tribal member why a certain drumbeat is used in a ceremony gets you one answer. It is not at all the wrong answer (Narby, 2001). But asking an anthropologist or neurologist who studies the effects of “deep listening”, gets a very different answer. The beat ultimately is used to hold the culture together hypnotically.

Let’s consider the modern equivalent of a drum, though. It is a tool, one that has neurological effects, which may be one reason we use it. Oddly, though, we often do not employ computers as tools but as human substitutes; the tasks for which computers are commonly employed are strangely inappropriate. We pretend they function as specialized brains.

Human brains accomplish most thinking (perceptive and conceptual) by switching logically between inductive and deductive reasoning (Dewey, 1910; Fodor, 2000; Hawkins, 2005). Computers, with no means of comprehending or creating anything remotely like context, accomplish tasks using only a limited version of deduction¹. Some inductive reasoning can be simulated on a computer by iterating through every single possibility (that is, by making the question deductive). But in real life this is absurd: real problems have either infinite unknown possibilities or at least unpredictable ones. Computers simply cannot solve many of the things humans can, and we still have no clue as to how we do it.
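
The “iterate through every single possibility” strategy can be sketched in a few lines. The linear-rule guessing game below is my own illustrative assumption, not an example from the chapter; it shows how an inductive question (“what rule fits these cases?”) is smuggled into a purely deductive check over a pre-defined, finite space.

```python
# A computer mimics induction only by deductively testing every candidate
# in a finite, pre-enumerated space. Toy task: which rule y = a*x + b
# fits the observed pairs? (Hypothetical example.)

observations = [(0, 1), (1, 3), (2, 5)]  # secretly generated by y = 2x + 1

def exhaustive_fit(obs, search_range=range(-10, 11)):
    """Brute-force every (a, b) pair: deduction in disguise."""
    for a in search_range:
        for b in search_range:
            if all(y == a * x + b for x, y in obs):
                return (a, b)
    return None  # the answer lay outside the enumerated possibilities

print(exhaustive_fit(observations))  # (2, 1)
```

The `return None` branch is the point: if the real rule lies outside the enumerated space, the machine has no recourse, whereas a human would simply reframe the question.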

John Dewey (usually required reading for educational studies) published a very good account of “How We Think” in 1910. It happens to stand as a very good description of how computers don’t think. Boolean logic (Hillis, 1998) and modus ponens have been common subjects in philosophy, logic, psychology, and cognitive science for much longer than computers have been on the open market. But no one questions that there is more to human brains and thought than these formalizations. In other words, this is old news from rather common sources. So why have we resisted what should plainly be ingrained into our habits of thought, just to bang our heads against the wall?
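
Modus ponens is trivially formalizable, which is exactly the point: it is the part of thought computers do well. The facts and if-then rules below are hypothetical toy examples of mine, chained mechanically until nothing new follows.

```python
# Forward chaining: apply modus ponens ("if P then Q; P; therefore Q")
# over and over until no new fact can be derived. Toy facts and rules.

facts = {"it_rains"}
rules = [("it_rains", "street_is_wet"),       # if it rains, the street is wet
         ("street_is_wet", "shoes_get_wet")]  # if the street is wet, shoes get wet

def forward_chain(facts, rules):
    """Repeat modus ponens until the set of known facts stops growing."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['it_rains', 'shoes_get_wet', 'street_is_wet']
```

Everything here is closed-world and mechanical; the machine never wonders whether “street” was the right concept in the first place, which is the part Dewey was describing.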

On the flip-side, though most of us may want computers to accomplish human functions (as in security facial recognition or recognition of written words), we aren’t all actually working on these things firsthand. Instead, most of us use these same machines primarily to store and send strings of text (email, the web, word processing, spreadsheets, …). The processor is minimally involved, merely delivering a copy from one terminal to another. This is hardly a harmful or bad use. But it certainly doesn’t warrant the fancy hardware. Not even close.

Key Terms in this Chapter

Digital: This has to be one of the most abused buzzwords ever. It has a very definite meaning that gets lost in a hazy fog. People often think it means something like “newer and better.” Actually, it just means “less accurate (for a good reason, though).” Theoretically, an analog value can be anything from 0 to the maximum: an infinite gradation of grays. In reality, we are limited by human perception and frustrated by mechanical limitations. Digital means the same reading is reduced to discrete steps, at the extreme just 0 and 1, black or white. There is no question this is an imperfect alternative. What is useful, though, is that when done enough times, we can surpass previous limitations (though we don’t always). Creating copies is far more reliable, since there are so few possible states per pixel (see below). In fact, whereas “copy” and “original” were once useful ideas, now they really don’t apply. The thing we play back is always a copy. The original, which existed only as electrical pulses, is discarded as soon as it is saved. But there is no longer any reason to distinguish, since the copies are identical clones.
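
The trade described above, quantization loss in exchange for perfect copies, can be sketched directly. The 4-level step size is an arbitrary assumption of mine, chosen only to make the coarseness visible.

```python
# Digitizing rounds an "analog" value to the nearest representable step.
# Fidelity is lost once, up front; after that, every copy is an exact clone.

def digitize(value, levels=4, lo=0.0, hi=1.0):
    """Quantize a value in [lo, hi] to one of `levels` discrete steps."""
    step = (hi - lo) / (levels - 1)
    return round((value - lo) / step) * step + lo

analog = [0.12, 0.50, 0.91]               # continuous readings
digital = [digitize(v) for v in analog]   # coarser: e.g. 0.12 collapses to 0.0
copy_of_copy = list(list(digital))        # copying loses nothing further

print(digital)
print(copy_of_copy == digital)  # True: "copy" and "original" are indistinguishable
```

With `levels=2` this becomes the black-or-white extreme in the definition; raising `levels` (“when done enough times”) is how digital systems surpass the old analog limits.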

Pixel: Usually this word is used in the technical sense (and probably one you are well familiar with). But the general concept is often a hazy one. The term, popularly bandied about with careless excitement, refers to digitality. A pixel is the smallest indivisible element in a collection of like elements that comprise an entity. “All in all it’s just another brick in the wall.” You could say the places on a checkerboard are pixels. But they needn’t even be square. City blocks may be considered pixels. But they needn’t be configured in any orderly fashion. Cells in a cluster of tissue could be pixels. The audio equivalent of resolution is called the sample rate. Usually, to call something a pixel, there needs to be an encompassing (explicit or implied) system. A colored dot of ink is not necessarily a pixel, until you consider it part of a larger picture, say in a newspaper.
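
In code, a pixel is just an entry in such an encompassing system. The 4×4 toy image and the block-averaging scheme below are my own illustration of how resolution is a property of the grid, not of any single element.

```python
# A pixel only exists relative to a surrounding grid (checkerboard, city
# blocks, audio samples...). Toy 4x4 "image" of black (0) and white (1).

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]

def downsample(img, factor=2):
    """Halve the resolution: each new pixel averages a factor x factor block."""
    size = len(img) // factor
    return [[sum(img[r * factor + i][c * factor + j]
                 for i in range(factor) for j in range(factor)) / factor ** 2
             for c in range(size)] for r in range(size)]

print(downsample(image))  # [[0.0, 1.0], [1.0, 0.0]]
```

The same function applied to a list of audio samples instead of image rows would be lowering the sample rate: the concept is identical, only the encompassing system differs.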

Network Behavior: Individual entities (nodes) can be grouped to communicate with each other in a myriad of configurations. Two cans and a string is the simplest networking model, where there is only one link between the nodes. The phone system is a complex network, allowing any node to relay to any other. Or you can think of social networks, as in “she told two friends, then she told two friends, and so on, and so on.” In networks, tiny events can take on enormous proportions. Where a germ is passed between nodes, the epidemic, charted over time, does not look like a gradual slope; it stays low, then suddenly explodes. The features of the network beyond the sum of its nodes are its emergent properties. For instance, telephones are only so handy in isolation, but become something more by being connected in a network. That’s a simple emergent idea, but there are more complex ones. For instance, you can scream at one end and raise a person’s pulse at the other.
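
That “stays low, then suddenly explodes” curve is easy to reproduce in a toy simulation. The population size, fan-out, and step count below are my own assumptions, chosen only to make the shape visible.

```python
# Each infected node "tells two friends" per step, picked at random from the
# population. Absolute growth starts tiny, then explodes, then saturates.

import random

def spread(population=10_000, fan_out=2, steps=14, seed=1):
    """Return the number of infected nodes after each step."""
    random.seed(seed)           # fixed seed for a repeatable run
    infected = {0}              # patient zero
    history = []
    for _ in range(steps):
        for node in list(infected):
            for _ in range(fan_out):
                infected.add(random.randrange(population))
        history.append(len(infected))
    return history

print(spread())  # a hockey-stick curve, not a straight line
```

Plotting `history` gives the epidemic chart from the definition: nearly flat early steps, because few nodes are relaying, then a sudden takeoff once the relays themselves multiply.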

Consciousness: This word (often called “the C-word”) gets people all flustered and surely needs definition. Many want to believe it is uncannily elusive. I disagree, and Jeff Hawkins seems to as well; he articulates it quite nicely: “Consciousness is not a big problem. I think consciousness is simply what it feels like to have a cortex” (Hawkins, 2004, p. 194). Of course, “feels like” is even more problematic. I would alter this slightly: consciousness is the result of whatever the neo-cortex does. We don’t precisely know all the details, but whatever they end up being, we can just call them “consciousness.”

Turing Machine: Probably, if you bought this book, you know more about Alan Turing’s ideas than I do. It is a very popular concept in computing, and for good reason. Turing’s original example has been applied in countless variations, in countless disciplines, over the years. The Turing Machine, from 1936, is a proposed abstract device (never actually built as such): a tape of symbols, a read/write head, and a finite table of rules dictating what to write, which way to move, and which state to enter next. Turing showed that this simple arrangement can carry out any computation a computer can. Almost all modern computing rests on this idea in some way.
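
The tape-head-and-rule-table picture fits in a dozen lines of Python. The bit-inverting rule set below is a toy example of my own, not one of Turing’s.

```python
# A minimal Turing machine: a tape, a head position, a current state, and a
# table of rules (state, symbol) -> (write, move, next_state).
# This toy machine inverts a binary string, then halts at the blank.

def run_turing(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        symbol = tape[head] if 0 <= head < len(tape) else blank
        if (state, symbol) not in rules:
            break                       # no applicable rule: the machine halts
        write, move, state = rules[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

invert = {("start", "0"): ("1", "R", "start"),
          ("start", "1"): ("0", "R", "start")}

print(run_turing("0110", invert))  # 1001
```

Everything the machine “knows” lives in the rule table; swapping in a different table gives a different computation, which is the sense in which one simple mechanism covers them all.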

Surface Grammar and Deep Structure: These are terms coined by the linguist Noam Chomsky. Much of this essay is predicated on an understanding of them. It hardly matters whether the theory proves correct for language; it is nonetheless a very useful way of looking at the issues. If you aren’t familiar with the terms, and many aren’t, we’ll try here. Chomsky grants that (non-human) animals certainly communicate, but insists these communications cannot really be called language, because the animals lack an essential mental feature: a unique apparatus he calls the Language Acquisition Device. He has a point. It’s a fine one and could easily be overturned some day. A bee dance or a bird call serves linguistic purposes, but bees’ and birds’ brains don’t have enough parts. Each human language obeys a somewhat unique set of conventions. “Noam talks about language” is not the same as “Language talks about Noam”; the difference in meaning is derived from their specific surface grammar. But the fact that we can put words together in some order and convey a message at all is deep structure. Though it is clearly impossible that all the specifics of a language are in our genes (no one is born speaking fluently; we need to learn), it is also unlikely we could learn so many detailed rules, so proficiently, with almost no trial-and-error for many aspects, in the few years during which we acquire language. “Colorless green ideas sleep furiously,” Chomsky’s famous example, feels like it should make sense but does not. Possibly this is because, while we may figure out by “looking up” the references we have learned that the sentence is meaningless, we are “hard-wired” for the rules by which a language’s rules are concocted. The brain doesn’t actually know syntax from chaos; it simply applies this pattern recognition to whatever stimuli are input. Over the ages we have gradually adapted this mental tool to the task of communicating via words.

Neural Net(work): A computer system designed to mimic the workings of the brain, primarily for the task of statistical learning. The term is a bit misleading, since it is not at all likely that the brain learns in even a similar way to how computers (even networked ones) save and retrieve data from memory. Nonetheless, whether accurately depictive or not, it is a remarkable programming technique. In very generalized terms, data comes from several distinct sources (often with their own processors analyzing input). Each source is called a neuron and has a weight of influence; the data is scaled according to the neuron’s current weight. The central computer determines how successful this pool was and updates the weights accordingly. Thus it seems to learn which processes influence the outcome more heavily in complex tasks. Of course, this implies fore-knowledge of the goal, which brains don’t seem to need and computers can’t do without. However, the pooling and dynamic weighting process is no less effective once the analogy to the brain is discarded.
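
The pooling-and-weight-updating loop just described can be shown as a bare perceptron, the simplest neural net. The toy AND data, learning rate, and epoch count are illustrative assumptions of mine; note that the “goal” (the target column) must be known in advance, exactly as the definition says.

```python
# A single-layer perceptron: inputs ("neurons") each carry a weight of
# influence; after each guess, the weights are nudged toward the known goal.

def train_perceptron(samples, rate=0.1, epochs=50):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # weighted pool of the input neurons
            pooled = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if pooled > 0 else 0
            error = target - output
            # update each neuron's weight of influence
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(AND)
predict = lambda x: 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0
print([predict(x) for x, _ in AND])  # [0, 0, 0, 1]
```

The `target` column is the fore-knowledge: remove it and the update rule has nothing to steer by, which is the asymmetry with brains the definition points out.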
