Deconstructing Smart Cities

Michael Batty
DOI: 10.4018/978-1-4666-4349-9.ch001

Abstract

This chapter defines the smart city in terms of the process whereby computers and computation are being embedded into the very fabric of the city itself. In short, the smart city is the automated city, where the goal is to improve the efficiency with which the city functions. These new technologies tend to improve the performance of cities in the short term, over minutes, hours, or days, rather than over years or decades. After establishing definitions and context, the author explores questions of big data. One important challenge is to synthesize or integrate the many different data about the city's functioning, a task that presents many obstacles to producing coherent solutions to diverse urban problems. The chapter augments this argument with ideas about how the emergence of widespread computation provides a new interface to the public realm, through which citizens might participate far more fully and richly than hitherto in various kinds of decision-making about the future city. The author concludes with some speculations as to how the emerging science of smart cities fits into the wider science of cities.

The Context

Back in 1971, when the microprocessor or ‘computer on a chip’ was first developed, several commentators close to the industry proclaimed that it marked the real transition of digital computing into the ‘universal machine’. The history of computing before this date was marked by ideas, brought together in work pioneered by Turing and Shannon amongst others, demonstrating the universality of computation: that anything which could be reduced to binary code could in some way be computed. But it took the development of the transistor at Bell Labs in 1947 for the physics of modern computation to be established in a way that would lead to ever smaller computers, with the consequent possibility that they could eventually be embedded into virtually anything and anybody.

By 1971, with the development of the microprocessor, it looked to those close to the cutting edge of miniaturization that Vannevar Bush’s (1945) vision of the home office, of computers being personal, would be borne out despite proclamations to the contrary. Some, however, also close to the industry, still imagined that such pervasive applications were fanciful. In 1973 the personal computer was invented, and by the late 1970s the home computer had become a reality in the form of PCs from Apple and, quite soon after, from IBM. Software became a distinct industry on the back of the PC, while local area networking proceeded apace and in parallel, beginning at places like Xerox PARC. Local and wide area networks then merged with inter-network technologies, and by the early 1990s the Web had been invented. All these developments led to networked computing and graphical interfaces which, helped by the falling cost of memory, made computers much more accessible to a wider public. The last decade has been dominated by further miniaturization and by the development of smartphones and all kinds of computable devices at different scales, which have firmly established a world that is as much digital as it is material.

In this context, it is quite logical that computation should continue to be embedded into every aspect of modern life. Just as the home and the office, and then the supermarket and the car, are increasingly computable, the next great frontier has become the city, in particular the smart city, where many public and private functions involving the citizenry at large and the use of urban services have become computable. The next decade will certainly be the decade of the smart city, while in parallel, computers used to improve medicine, and then actually to reengineer biological systems, ourselves included, represent other frontiers already in prospect.

So it is no surprise that cities have become the ‘next new thing’ in the all-pervasive dissemination of computation through contemporary society. This has, however, produced an even greater level of surprise about universality than in most other domains to date. Essentially, computers were first used with respect to cities to build models that enabled computable predictions: simulations of the present state and structure of cities which could then be used to make conditional, if not absolute, predictions of their futures. This idea of using the computer to further our understanding of cities is the traditional activity of computable planning, and the sketch below illustrates what such a conditional prediction involves. But in the interim, computers have actually become embedded into the city itself, with the consequent confusion of computers being used to build models of systems that are themselves becoming computable. This interesting recursion and simultaneity is part of the increasing complexity that cities clearly exhibit (Batty & Hudson-Smith, 2007).
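To make the distinction between conditional and absolute prediction concrete, here is a minimal sketch of the kind of simulation alluded to above: the same model of the present state of a city, run forward under alternative assumptions, yields alternative futures. The zonal structure, the logistic growth rule, and all the numbers are hypothetical, invented purely for illustration, and are not drawn from the chapter or from Batty's own models.

```python
# A toy "conditional prediction": simulate zonal population growth under
# alternative policy scenarios. All zones, capacities, and rates are
# hypothetical values chosen for illustration only.

def simulate(populations, capacities, growth_rate, years):
    """Logistic growth of each zone's population toward its capacity."""
    pops = list(populations)
    for _ in range(years):
        pops = [p + growth_rate * p * (1 - p / k)
                for p, k in zip(pops, capacities)]
    return pops

# Present state of three hypothetical zones.
current = [50_000, 120_000, 30_000]
capacity_baseline = [80_000, 150_000, 60_000]

# The "conditional" part: the same model, rerun under a scenario in which
# a planning policy raises the third zone's capacity.
capacity_policy = [80_000, 150_000, 90_000]

for label, caps in [("baseline", capacity_baseline),
                    ("policy", capacity_policy)]:
    result = simulate(current, caps, growth_rate=0.03, years=20)
    print(label, [round(p) for p in result])
```

The prediction is conditional in the sense that each output is contingent on the scenario fed in; the model offers no single absolute future, only futures relative to assumptions, which is precisely the sense in which the paragraph above uses the term.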
