This chapter reviews the issue of information overload, introducing the concept of “infoxication 2.0” as one of the main downsides of Web 2.0. On the one hand, the chapter describes some of the potential effects of this overload on the learner; on the other, it puts forward some solutions to deal with the informational and communicational barrage worsened by the Web 2.0 plethora. The review reveals that although the problem of information overload has existed for many years, the massive abundance of fragmented Web 2.0 informational and communicative resources may become an obstacle for the language learner: it is often difficult to find what is useful. Two kinds of solutions are identified: those based on common sense and time management, and those based on technological agents such as RSS readers and, especially, the coming generation of RSS mash-up tools. Emphasis is placed on the role of the teacher as a facilitator who provides the know-how on these tools.
The idea that computer technology introduced the age of information is misleading: the printing press began that age (Dewar, 1998; Borgman, 2000; Darnton, 2000a), but computer technologies have enlarged it exponentially. One of the most overwhelming features of present Western society is the rapid sequence in which events, thoughts, and products occur as a result of technological progress (Bolter, 1984). If Google struggles to process exabytes of information, then users who both consume and produce information (i.e. prosumers) are overwhelmed by the time it takes to absorb, and in the process purge, gigabytes of it. After all, searching for information really means filtering content in order to keep only what is interesting or what one agrees with, whatever is being processed: audio, text or video; a conversation, a newspaper article or a TV documentary. The human brain, whose mechanisms science would like to emulate, is then responsible for processing, tagging and storing information on our cognitive servers.
But there is so much to see and read on the Web, and time is short. No Web 2.0 site gives out vouchers for free extra time. Learners need to handle everything that draws their attention on Web 2.0 without feeling dizzy or overwhelmed by their own informational and communicational eagerness. This eagerness to know more is not new. As Shenk (1997) explained, human beings have always pursued information and contact; nowadays, however, the problem is not so much getting hold of information as differentiating among everything we expose ourselves to. It is that ancestral desire to know more and to communicate with others that brought society to its current situation. Thus, the stimulus is not new, as will be seen later, but the available answers to that stimulus are indeed new in terms of quantity, quality and accessibility. In the current information glut, learners have to differentiate what is useful from what is not. At this point, it should be emphasized that this chapter is not concerned with deontological distinctions such as “what is good vs. what is bad,” for who can define the inherent “goodness” of information? From a pragmatic viewpoint, this chapter refers to the sort of information that is somehow useful to language teachers and learners. It is concerned not with access to information per se, but with the means of access through which useful knowledge, whatever it may be, can be found.
In a normal day of study, a learner will have to pick up calls, read emails, read the press, chat through an instant messenger, answer SMS, read Web feeds and carry out their work, as well as attend to their social and personal life. And although there are mechanisms, discussed below, to help with some of these tasks, there is no way to control this flood of data, which increasingly arrives as a commodity. As Postman noted, “information is now a commodity that can be bought and sold, or used as a form of entertainment, or worn like a garment to enhance one’s status. It comes indiscriminately, directed at no one in particular, disconnected from usefulness; we are glutted with information, drowning in information, have no control over it, do not know what to do with it” (Postman, 1990, para. 27). What would Postman’s view be now, 18 years later, with millions of Web pages, blogs, wikis and social networks?
The University of California, Berkeley (Lyman, 2003) attempted to quantify in bytes the information available in our society. Its first attempt dates from 2000 (with data from 1999) and its most recent from 2003 (with data from 2002). It would be interesting to know whether the reason no further attempts have been made was the tsunami of information caused by the wide adoption of blogs, a signature Web 2.0 application, in 2004. In any case, the figures in the 2003 study are already staggering: all the information produced in various formats during 2002 would occupy a trillion and a half gigabytes of storage, or about 250 MB per person. However, of the information produced in 2002, “only” 1.75% came from Web pages; email, for instance, generated far more, at 8% of the total. But although such figures make an impression on us, they do not help us see the whole picture (Brown & Duguid, 2000), because “storage” does not mean importance, nor “volume” value. Sometimes figures lead to “tunnel vision.”
Key Terms in this Chapter
Web 3.0: Probably another buzzword coined, like Web 2.0, for marketing purposes. Web 3.0 is often referred to as the Semantic Web, in which the Web itself will be used as a database, with more intelligent search engines, tag-based filtering, and widgetized information.
Feed: A feed refers to syndicated website content. A feed is an XML-based document including a headline, a short summary of the content and a link back to the page on the original website where the content resides (if it is a partial feed), or the full article/content (if it is a full feed). Orange or gray icons on websites indicate that the site’s content is available as a feed and can therefore be syndicated (or subscribed to using an RSS reader).
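The structure described above can be illustrated with a minimal sketch using Python’s standard library. The sample XML below is a hypothetical partial feed, invented for illustration; each item carries a headline, a short summary and a link back to the original website.

```python
# Sketch of parsing a minimal RSS 2.0 feed with the standard library.
# The feed content below is invented for illustration purposes.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Language Learning Notes</title>
    <item>
      <title>Five podcasts for ESL listening practice</title>
      <description>A short round-up of free listening resources.</description>
      <link>http://example.com/esl-podcasts</link>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return a list of (headline, summary, link) tuples from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append((item.findtext("title"),
                      item.findtext("description"),
                      item.findtext("link")))
    return items

for headline, summary, link in parse_feed(SAMPLE_RSS):
    print(headline, "->", link)
```

An RSS reader applies essentially this parsing step to every subscribed feed, which is what makes aggregation of many fragmented sources practical.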
Beta Version: A stage of the software release cycle. A beta version is the first version released outside the organization or community that develops the software, for the purpose of evaluation or debugging. In the world of Web 2.0, the beta stage is almost a must: Web 2.0 tools tend to remain in “perpetual beta,” developed in the open.
Information Fatigue Syndrome (IFS): The cognitive inability to keep up with the ever-increasing amounts of available information.
Tag: A tag is a keyword or label. People can tag their posts, photos, videos and any content uploaded to Web 2.0 services with any keyword that makes sense to them. While categories tell users where a given piece of content is located, tags indicate what that content is about. Tags offer another way to navigate the content of a site, showing how popular different keywords are: in a tag cloud, tags displayed in larger type are used frequently, while smaller tags have been used only a few times.
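The tag-cloud sizing just described can be sketched as a simple frequency mapping. The tag names, counts and pixel range below are illustrative, not taken from any real site.

```python
# Sketch of tag-cloud sizing: frequently used tags get a larger font
# size, rarely used tags a smaller one. Counts and sizes are invented.
from collections import Counter

def tag_cloud_sizes(tags, min_px=10, max_px=32):
    """Map each tag to a font size proportional to how often it occurs."""
    counts = Counter(tags)
    lo, hi = min(counts.values()), max(counts.values())
    sizes = {}
    for tag, n in counts.items():
        if hi == lo:  # all tags equally frequent: use the middle size
            sizes[tag] = (min_px + max_px) // 2
        else:
            # linear interpolation between the smallest and largest size
            sizes[tag] = min_px + round((n - lo) * (max_px - min_px) / (hi - lo))
    return sizes

posts_tags = (["web2.0"] * 8) + (["rss"] * 4) + ["wiki"]
print(tag_cloud_sizes(posts_tags))
```

Here “web2.0”, used eight times, receives the maximum size, while “wiki”, used once, receives the minimum, which is exactly the visual cue a tag cloud gives the reader.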
Long Tail: An expression first coined by Chris Anderson in an October 2004 Wired magazine article. Although intended as a business principle, the Long Tail is also used in discussions of information retrieval on the Internet to emphasize that information is being fragmented across thousands of blogs, feeds, social networks, etc.
Infoxication 2.0: Infoxication 2.0 is a viral process, a ripped, mixed and burned virus arising from our most essential needs (information and communication), exponentially worsened by the myriad of Web 2.0 communication and networking possibilities. It refers to an intoxication by excessive informational and communicational demands.
WYSIWYG: An acronym for What You See Is What You Get, an interface in which content during editing appears as the final product.