11 reasons computers can’t understand or solve our problems without human judgement

(Photo by Matt Gidley)

Why data is uncertain, cities are not programmable, and the world is not “algorithmic”.

Many people are not convinced that the Smart Cities movement will result in the use of technology to make places, communities and businesses in cities better. Outside their consumer enjoyment of smartphones, social media and online entertainment – to the degree that they have access to them – they don’t believe that technology or the companies that sell it will improve their lives.

The technology industry itself contributes significantly to this lack of trust. Too often we overstate the benefits of technology, or play down its limitations and the challenges involved in using it well.

Most recently, the idea that traditional processes of government should be replaced by “algorithmic regulation” – the comparison of the outcomes of public systems to desired objectives through the measurement of data, and the automatic adjustment of those systems by algorithms in order to achieve them – has been proposed by Tim O’Reilly and other prominent technologists.

These approaches work in many mechanical and engineering systems – the autopilots that fly planes or the anti-lock braking systems that we rely on to stop our cars. But should we extend them into human realms – how we educate our children or how we rehabilitate convicted criminals?

It’s clearly important to ask whether it would be desirable for our society to adopt such approaches. That is a complex debate, but my personal view is that in most cases the incredible technologies available to us today – and which I write about frequently on this blog – should not be used to take automatic decisions about such issues. They are usually more valuable when they are used to improve the information and insight available to human decision-makers – whether they are politicians, public workers or individual citizens – who are then in a better position to exercise good judgement.

More fundamentally, though, I want to challenge whether “algorithmic regulation” or any other highly deterministic approach to human issues is even possible. Quite simply, it is not.

It is true that our ability to collect, analyse and interpret data about the world has advanced to an astonishing degree in recent years. However, that ability is far from perfect, and strongly established scientific and philosophical principles tell us that it is impossible to definitively measure human outcomes from underlying data in physical or computing systems; and that it is impossible to create algorithmic rules that exactly predict them.

Sometimes automated systems succeed despite these limitations – anti-lock braking technology has become nearly ubiquitous because it is more effective than most human drivers at slowing down cars in a controlled way. But in other cases they create such great uncertainties that we must build in safeguards to account for the very real possibility that insights drawn from data are wrong. I do this every time I leave my home with a small umbrella packed in my bag despite the fact that weather forecasts created using enormous amounts of computing power predict a sunny day.

(No matter how sophisticated computer models of cities become, there are fundamental reasons why they will always be simplifications of reality. It is only by understanding those constraints that we can understand which insights from computer models are valuable, and which may be misleading. Image of Sim City by haljackey)

It is only by understanding these limitations that we can judge where an “algorithmic” approach can be trusted; where it needs safeguards; and where it is wholly inadequate. Some of the limitations are practical, set by the sensitivity of today’s sensors and the power of today’s computers. But others are fundamental laws of physics and limitations of logical systems.

When technology companies assert that Smart Cities can create “autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits” (as London School of Economics Professor Adam Greenfield rightly criticised in his book “Against the Smart City”), they are ignoring these challenges.

A blog post recently published by the highly influential magazine Wired made similar overstatements: “The Universe is Programmable” argues that we should extend the concept of an “Application Programming Interface (API)” – a facility usually offered by technology systems to allow external computer programmes to control or interact with them – to every aspect of the world, including our own biology.

To compare complex, unpredictable, emergent biological and social systems to the very logical, deterministic world of computer software is at best a dramatic oversimplification. The systems that comprise the human body range from the armies of symbiotic microbes that help us digest food in our stomachs to the consequences of using corn syrup to sweeten food to the cultural pressure associated with “size 0” celebrities. Many of those systems can’t be well modelled in their own right, let alone deterministically related to each other; let alone formally represented in an accurate, detailed way by technology systems (or even in mathematics).

We should regret – and avoid – the hubris that creates distrust of technology by overstating its capability and failing to recognise its challenges and limitations. That distrust is a barrier that prevents us from achieving the very real benefits that data and technology can bring, and that have been convincingly demonstrated in the past.

For example, an enormous contribution to our knowledge of how to treat and prevent disease was made by John Snow who used data to analyse outbreaks of cholera in London in the 19th century. Snow used a map to correlate cases of cholera to the location of communal water pipes, leading to the insight that water-borne germs were responsible for spreading the disease. We wash our hands to prevent diseases spreading through germs in part because of what we would now call the “geospatial data analysis” performed by John Snow.

Many of the insights that we seek from analytic and smart city systems are human in nature, not physical or mathematical – for example, identifying when and where to apply social care interventions in order to reduce the occurrence of emotional domestic abuse. Such questions are complex and uncertain: what is “emotional domestic abuse”? Is it abuse inflicted by a live-in boyfriend, or by an estranged husband who lives separately but makes threatening telephone calls? Does it consist of physical violence or bullying? And what is “bullying”?

(John Snow’s map of cholera outbreaks in 19th century London)

We attempt to create structured, quantitative data about complex human and social issues by using approximations and categorisations; by tolerating ranges and uncertainties in numeric measurements; by making subjective judgements; and by looking for patterns and clusters across different categories of data. These techniques can be very powerful; but the controversies that regularly arise around “who knew what, when?” whenever there is a high-profile failure in social care or another public service illustrate just how difficult it is to agree what those conventions and interpretations should be.

These challenges are not limited to “high level” social, economic and biological systems. In fact, they extend throughout the worlds of physics and chemistry into the basic nature of matter and the universe. They fundamentally limit the degree to which we can measure the world, and our ability to draw insight from that information.

By being aware of these limitations we are able to design systems and practices to use data and technology effectively. We know more about the weather through modelling it using scientific and mathematical algorithms in computers than we would without those techniques; but we don’t expect those forecasts to be entirely accurate. Similarly, supermarkets can use data about past purchases to make sufficiently accurate predictions about future spending patterns to boost their profits, without needing to predict exactly what each individual customer will buy.

We underestimate the limitations and flaws of these approaches at our peril. Whilst Tim O’Reilly cites several automated financial systems as good examples of “algorithmic regulation”, the financial crash of 2008 showed the terrible consequences of risk management systems that were thoroughly inadequate for the complexity of the system the world’s financial institutions sought to profit from. The few institutions that realised that market conditions had changed and that their models for risk management were no longer valid relied instead on the expertise of their staff, and avoided the worst effects. Others continued to rely on models that had started to produce increasingly misleading guidance, leading to the recession that we are only now emerging from six years later, and that has damaged countless lives around the world.

Every day in their work, scientists, engineers and statisticians draw conclusions from data and analytics, but they temper those conclusions with an awareness of their limitations and any uncertainties inherent in them. By taking and communicating such a balanced and informed approach to applying similar techniques in cities, we will create more trust in these technologies than by overstating their capabilities.

What follows is a description of some of the scientific, philosophical and practical issues that lead inevitably to uncertainty in data, and to limitations in our ability to draw conclusions from it.

But I’ll finish with an explanation of why we can still draw great value from data and analytics if we are aware of those issues and take them properly into account.

Three reasons why we can’t measure data perfectly

(How Heisenberg’s Uncertainty Principle results from the dual wave/particle nature of matter. Explanation by HyperPhysics at Georgia State University)

1. Heisenberg’s Uncertainty Principle and the fundamental impossibility of knowing everything about anything

Heisenberg’s Uncertainty Principle is a cornerstone of Quantum Mechanics, which, along with General Relativity, is one of the two most fundamental theories scientists use to understand our world. It defines a limit to the precision with which certain pairs of properties of the basic particles which make up the world – such as protons, neutrons and electrons – can be known at the same time. For instance, the more accurately we measure the position of such particles, the more uncertain their speed and direction of movement become.
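
For readers who like to see the formal statement, the standard textbook form of the principle for position and momentum is shown below; the symbols are the conventional ones, not anything specific to this article.

```latex
% Heisenberg's Uncertainty Principle for position (x) and momentum (p):
% the product of the two uncertainties can never be smaller than a fixed constant.
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
% \hbar is the reduced Planck constant, roughly 1.05 x 10^-34 joule-seconds -
% tiny, which is why the effect only matters at the scale of atoms and electrons.
```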

The explanation of the Uncertainty Principle is subtle, and lies in the strange fact that very small “particles” such as electrons and neutrons also behave like “waves”; and that “waves” like beams of light also behave like very small “particles” called “photons“. But we can use an analogy to understand it.

In order to measure something, we have to interact with it. In everyday life, we do this by using our eyes to measure lightwaves that are created by lightbulbs or the sun and that then reflect off objects in the world around us.

But when we shine light on an object, what we are actually doing is showering it with billions of photons, and observing the way that they scatter. When the object is quite large – a car, a person, or a football – the photons are so small in comparison that they bounce off without affecting it. But when the object is very small – such as an atom – the photons colliding with it are large enough to knock it out of its original position. In other words, measuring the current position of an object involves a collision which causes it to move in a random way.

This analogy isn’t exact; but it conveys the general idea. (For a full explanation, see the figure and link above). Most of the time, we don’t notice the effects of Heisenberg’s Uncertainty Principle because it applies at extremely small scales. But it is perhaps the most fundamental law that asserts that “perfect knowledge” is simply impossible; and it illustrates a wider point that any form of measurement or observation in general affects what is measured or observed. Sometimes the effects are negligible,  but often they are not – if we observe workers in a time and motion study, for example, we need to be careful to understand the effect our presence and observations have on their behaviour.

2. Accuracy, precision, noise, uncertainty and error: why measurements are never fully reliable

Outside the world of Quantum Mechanics, there are more practical issues that limit the accuracy of all measurements and data.

(A measurement of the electrical properties of a superconducting device from my PhD thesis. Theoretically, the behaviour should appear as a smooth, wavy line; but the experimental measurement is affected by noise and interference that cause the signal to become “fuzzy”. In this case, the effects of noise and interference – the degree to which the signal appears “fuzzy” – are relatively small compared to the strength of the signal, and the device is usable)

We live in a “warm” world – roughly 300 degrees Celsius above what scientists call “absolute zero“, the coldest temperature possible. What we experience as warmth is in fact movement: the atoms from which we and our world are made “jiggle about” – they move randomly. When we touch a hot object and feel pain it is because this movement is too violent to bear – it’s like being pricked by billions of tiny pins.

This random movement creates “noise” in every physical system, like the static we hear in analogue radio stations or on poor quality telephone connections.

We also live in a busy world, and this activity leads to other sources of noise. All electronic equipment creates electrical and magnetic fields that spread beyond the equipment itself, and in turn affect other equipment – we can hear this as a buzzing noise when we leave smartphones near radios.

Generally speaking, all measurements are affected by random noise created by heat, vibrations or electrical interference; are limited by the precision and accuracy of the measuring devices we use; and are affected by inconsistencies and errors that arise because it is always impossible to completely separate the measurement we want to make from all other environmental factors.
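
As a purely illustrative sketch (the signal and noise levels below are invented, not taken from any real sensor), the following Python snippet simulates a smooth signal corrupted by random noise and estimates its signal-to-noise ratio – the kind of figure engineers quote when describing how far a measurement can be trusted:

```python
import math
import random

# Illustrative only: a smooth "true" signal (a sine wave) corrupted by random
# Gaussian noise, standing in for thermal jiggling and electrical interference.
random.seed(42)

true_signal = [math.sin(2 * math.pi * t / 100.0) for t in range(1000)]
measured = [s + random.gauss(0.0, 0.2) for s in true_signal]   # noise level is an assumption

# Signal-to-noise ratio: how large the signal is compared to the noise it contains.
signal_power = sum(s * s for s in true_signal) / len(true_signal)
noise_power = sum((m - s) ** 2 for m, s in zip(measured, true_signal)) / len(true_signal)
snr_db = 10 * math.log10(signal_power / noise_power)

print(f"Estimated signal-to-noise ratio: {snr_db:.1f} dB")
```

The higher that ratio, the more the measurement can be trusted – exactly the judgement made about the superconducting device in the figure above.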

Scientists, engineers and statisticians are familiar with these challenges, and use techniques developed over the course of more than a century to determine and describe the degree to which they can trust and rely on the measurements they make. They do not claim “perfect knowledge” of anything; on the contrary, they are diligent in describing the unavoidable uncertainty that is inherent in their work.

3. The limitations of measuring the natural world using digital systems

One of the techniques we’ve adopted over the last half century to overcome the effects of noise and to make information easier to process is to convert “analogue” information about the real world (information that varies smoothly) into digital information – i.e. information that is expressed as sequences of zeros and ones in computer systems.

(When analogue signals are amplified, so is the noise that they contain. Digital signals are interpreted using thresholds: above an upper threshold, the signal means “1”, whilst below a lower threshold, the signal means “0”. A long string of “0”s and “1”s can be used to encode the same information as contained in analogue waves. By making the difference between the thresholds large compared to the level of signal noise, digital signals can be recreated to remove noise. Further explanation and image by Science Aid)

This process involves a trade-off between the accuracy with which analogue information is measured and described, and the length of the string of digits required to do so – and hence the amount of computer storage and processing power needed.

This trade-off can be clearly seen in the difference in quality between an internet video viewed on a smartphone over a 3G connection and one viewed on a high definition television using a cable network. Neither video will be affected by the static noise that affects weak analogue television signals, but the limited bandwidth of a 3G connection dramatically limits the clarity and resolution of the image transmitted.

The Nyquist–Shannon sampling theorem defines this trade-off and the limit to the quality that can be achieved in storing and processing digital information created from analogue sources. It determines the quality of digital data that we are able to create about any real-world system – from weather patterns to the location of moving objects to the fidelity of sound and video recordings. As computers and communications networks continue to grow more powerful, the quality of digital information will improve,  but it will never be a perfect representation of the real world.
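
A minimal sketch of the trade-off the theorem describes is shown below; the frequencies and sampling rates are arbitrary choices for illustration. A wave sampled faster than twice its frequency is captured faithfully, whilst one sampled too slowly is “aliased” – its samples become indistinguishable from those of a slower wave.

```python
import math

def sample(frequency_hz, sample_rate_hz, n_samples=8):
    """Sample a pure cosine wave of the given frequency at the given rate."""
    return [math.cos(2 * math.pi * frequency_hz * k / sample_rate_hz)
            for k in range(n_samples)]

signal_hz = 3.0   # a 3 Hz tone (arbitrary illustrative choice)

# Sampling well above the Nyquist rate (2 x 3 Hz = 6 Hz) captures the wave faithfully...
well_sampled = sample(signal_hz, sample_rate_hz=30.0)
print([round(x, 2) for x in well_sampled[:4]])   # smoothly varying samples

# ...but sampling at only 4 Hz "folds" the 3 Hz tone down to 1 Hz: its samples are
# identical to those of a genuine 1 Hz wave, so the two cannot be told apart.
aliased = sample(signal_hz, sample_rate_hz=4.0)
looks_like_1_hz = sample(1.0, sample_rate_hz=4.0)
print(all(abs(a - b) < 1e-9 for a, b in zip(aliased, looks_like_1_hz)))   # True
```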

Three limits to our ability to analyse data and draw insights from it

1. Gödel’s Incompleteness Theorem and the inconsistency of algorithms

Kurt Gödel’s Incompleteness Theorem sets a limit on what can be achieved by any “closed logical system”. Examples of “closed logical systems” include computer programming languages, any system for creating algorithms – and mathematics itself.

We use “closed logical systems” whenever we create insights and conclusions by combining and extrapolating from basic data and facts. This is how all reporting, calculating, business intelligence, “analytics” and “big data” technologies work.

Gödel’s Incompleteness Theorem proves that any closed logical system can be used to create conclusions that  it is not possible to show are true or false using the same system. In other words, whilst computer systems can produce extremely useful information, we cannot rely on them to prove that that information is completely accurate and valid. We have to do that ourselves.

Gödel’s theorem doesn’t stop computer algorithms that have been verified by humans using the scientific method from working; but it does mean that we can’t rely on computers to both generate algorithms and guarantee their validity.

2. The behaviour of many real-world systems can’t be reduced analytically to simple rules

Many systems in the real-world are complex: they cannot be described by simple rules that predict their behaviour based on measurements of their initial conditions.

A simple example is the “three body problem“. Imagine a sun, a planet and a moon all orbiting each other. The movement of these three objects is governed by the force of gravity, which can be described by relatively simple mathematical equations. However, even with just three objects involved, it is not possible to use these equations to directly predict their long-term behaviour – whether they will continue to orbit each other indefinitely, or will eventually collide with each other, or spin off into the distance.
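
The only way to find out what happens is to advance the system one small step at a time and watch – which is why long-range predictions accumulate error. The sketch below is a deliberately crude illustration of that approach; the masses, positions and gravitational constant are made-up values, and a real orbital simulation would use a far more careful integration method.

```python
# A crude sketch, not a production integrator: three bodies under Newtonian gravity,
# advanced step by step because no general closed-form solution exists.

G = 1.0        # gravitational constant in made-up units
DT = 0.001     # size of each time step

# Each body is [mass, x, y, vx, vy] - a "sun", a "planet" and a "moon".
bodies = [
    [100.0,  0.0, 0.0, 0.0, 0.0],
    [  1.0, 10.0, 0.0, 0.0, 3.0],
    [  0.1, 11.0, 0.0, 0.0, 4.0],
]

def step(bodies, dt):
    """Advance the whole system by one small time step using simple Euler integration."""
    accelerations = []
    for i, (_, xi, yi, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, xj, yj, _, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            dist = (dx * dx + dy * dy) ** 0.5
            ax += G * mj * dx / dist ** 3      # gravitational pull of body j on body i
            ay += G * mj * dy / dist ** 3
        accelerations.append((ax, ay))
    for body, (ax, ay) in zip(bodies, accelerations):
        body[3] += ax * dt                     # update velocity...
        body[4] += ay * dt
        body[1] += body[3] * dt                # ...then position
        body[2] += body[4] * dt

for _ in range(20_000):                        # simulate 20 "time units" of motion
    step(bodies, DT)

print([(round(b[1], 2), round(b[2], 2)) for b in bodies])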

(A computer simulation by Hawk Express of a Belousov–Zhabotinsky reaction, in which reactions between liquid chemicals create oscillating patterns of colour. The simulation is carried out using “cellular automata”, a technique based on a grid of squares which can take different colours. In each “turn” of the simulation, like a turn in a board game, the colour of each square is changed using simple rules based on the colours of adjacent squares. Such simulations have been used to reproduce a variety of real-world phenomena)

As Stephen Wolfram argued in his controversial book “A New Kind of Science” in 2002, we need to take a different approach to understanding such complex systems. Rather than using mathematics and logic to analyse them, we need to simulate them, often using computers to create models of the elements from which complex systems are composed, and the interactions between them. By running simulations based on a large number of starting points and comparing the results to real-world observations, insights into the behaviour of the real-world system can be derived. This is how weather forecasts are created, for example. 

But as we all know, weather forecasts are not always accurate. Simulations are approximations to real-world systems, and their accuracy is restricted by the degree to which digital data can be used to represent a non-digital world. For this reason, conclusions and predictions drawn from simulations are usually “average” or “probable” outcomes for the system as a whole, not precise predictions of the behaviour of the system or any individual element of it. This is why weather forecasts are often wrong; and why they predict likely levels of rain and windspeed rather than the shape and movement of individual clouds.

(A simple and famous example of a computer programme that never stops running because it calls itself. The output continually varies by printing out characters based on random number generation. Image by Prosthetic Knowledge)

3. Some problems can’t be solved by computing machines

If I consider a simple question such as “how many letters are in the word ‘calculation’?”, I can easily convince myself that a computer programme could be written to answer the question; and that it would find the answer within a relatively short amount of time. But some problems are much harder to solve, or can’t even be solved at all.

For example, a “Wang Tile” (see image below) is a square tile formed from four triangles of different colours. Imagine that you have bought a set of tiles of various colour combinations in order to tile a wall in a kitchen or bathroom. Given the set of tiles that you have bought, is it possible to tile your wall so that triangles of the same colour line up with each other, forming a pattern of “Wang Tile” squares?

In 1966 Robert Berger proved that no algorithm exists that can answer that question. There is no way to solve the problem – or to determine how long it will take to solve the problem – without actually solving it. You just have to try to tile the room and find out the hard way.

One of the most famous examples of this type of problem is the “halting problem” in computer science. Some computer programmes finish executing their commands relatively quickly. Others can run indefinitely if they contain a “loop” instruction that never ends. For others which contain complex sequences of loops and calls from one section of code to another, it may be very hard to tell whether the programme finishes quickly, or takes a long time to complete, or never finishes its execution at all.

Alan Turing, one of the most important figures in the development of computing, proved in 1936 that a general algorithm to determine whether or not any computer programme finishes its execution does not exist. In other words, whilst there are many useful computer programmes in the world, there are also problems that computer programmes simply cannot solve.
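
Turing’s argument can be sketched in a few lines of code. Suppose someone claimed to have written a general halts() function; the contrarian() programme below (both names are mine, purely for illustration) does the opposite of whatever halts() predicts it will do, which is the contradiction at the heart of the proof.

```python
def halts(program, program_input):
    """Hypothetical oracle: returns True if program(program_input) eventually finishes.
    Turing proved no such general-purpose function can exist; this is just a stand-in."""
    raise NotImplementedError("impossible in general")

def contrarian(program):
    """Does the opposite of whatever halts() predicts the program will do to itself."""
    if halts(program, program):
        while True:           # halts() said it finishes, so loop forever instead...
            pass
    return "finished"         # ...and if halts() said it loops forever, finish at once.

# Now ask: does contrarian(contrarian) halt? Whatever answer halts() gives,
# contrarian does the opposite - a contradiction. So halts() cannot exist.
```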

(A set of Wang Tiles, and a pattern of coloured squares created by tiling them. Given any random set of tiles of different colour combinations, there is no set of rules that can be relied on to determine whether a valid pattern of coloured squares can be created from them. Sometimes, you have to find out by trial and error. Images from Wikipedia)

Five reasons why the human world is messy, unpredictable, and can’t be perfectly described using data and logic

1. Our actions create disorder

The 2nd Law of Thermodynamics is a good candidate for the most fundamental law of science. It states that as time progresses, the universe becomes more disorganised. It guarantees that ultimately – in billions of years – the Universe will die as all of the energy and activity within it dissipates.
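
Stated formally (this is the standard textbook formulation, not anything specific to cities), the law says that the total entropy – the physicist’s measure of disorder – of an isolated system can never decrease:

```latex
% Second law of thermodynamics: for an isolated system such as the universe as a whole,
% the total entropy S stays the same or increases as time passes.
\Delta S_{\text{total}} \;=\; \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \;\geq\; 0
```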

An everyday practical consequence of this law is that every time we act to create value – building a shed, using a car to get from one place to another, cooking a meal – our actions eventually cause a greater amount of disorder to be created as a consequence – as noise, pollution, waste heat or landfill refuse.

For example, if I spend a day building a shed, then to create that order and value from raw materials, I consume structured food and turn it into sewage. Or if I use an electric forklift to stack a pile of boxes, I use electricity that has been created by burning structured coal into smog and ash.

So it is literally impossible to create a “perfect world”. Whenever we act to make a part of the world more ordered, we create disorder elsewhere. And ultimately – thankfully, long after you and I are dead – disorder is all that will be left.

2. The failure of Logical Atomism: why the human world can’t be perfectly described using data and logic

In the 20th Century two of the most famous and accomplished philosophers in history, Bertrand Russell and Ludwig Wittgenstein, invented “Logical Atomism“, a theory that the entire world could be described by using “atomic facts” – independent and irreducible pieces of knowledge – combined with logic.

But despite 40 years of work, these two supremely intelligent people could not get their theory to work: “Logical Atomism” failed. It is not possible to describe our world in that way.

One cause of the failure was the insurmountable difficulty of identifying truly independent, irreducible atomic facts. “The box is red” and “the circle is blue”, for example, aren’t independent or irreducible facts for many reasons. “Red” and “blue” are two conventions of human language used to describe the perceptions created when electro-magnetic waves of different frequencies arrive at our retinas. In other words, they depend on and relate to each other through a number of sophisticated systems.

Despite centuries of scientific and philosophical effort, we do not have a complete understanding of how to describe our world at its most basic level. As physicists have explored the world at smaller and smaller scales, Quantum Mechanics has emerged as the most fundamental theory for describing it – it is the closest we have come to finding the “irreducible facts” that Russell and Wittgenstein were looking for. But whilst the mathematical equations of Quantum Mechanics predict the outcomes of experiments very well, after nearly a century, physicists still don’t really agree about what those equations mean. And as we have already seen, Heisenberg’s Uncertainty Principle prevents us from ever having perfect knowledge of the world at this level.

Perhaps the most important failure of logical atomism, though, was that it proved impossible to use logical rules to turn “facts” at one level of abstraction – for example, “blood cells carry oxygen”, “nerves conduct electricity”, “muscle fibres contract” – into facts at another level of abstraction – such as “physical assault is a crime”. The human world and the things that we care about can’t be described using logical combinations of “atomic facts”. For example, how would you define the set of all possible uses of a screwdriver, from prising the lids off paint tins to causing a short-circuit by jamming it into a switchboard?

Our world is messy, subjective and opportunistic. It defies universal categorisation and logical analysis.

(A Pescheria in Bari, Puglia, where a fish-market price information service makes it easier for local fishermen to identify the best buyers and prices for their daily catch. Photo by Vito Palmi)

3. The importance and inaccessibility of “local knowledge” 

Because the tool we use for calculating and agreeing value when we exchange goods and services is money, economics is the discipline that is often used to understand the large-scale behaviour of society. We often quantify the “growth” of society using economic measures, for example.

But this approach is notorious for overlooking social and environmental characteristics such as health, happiness and sustainability. Alternatives exist, such as the Social Progress Index, or the measurement framework adopted by the United Nations 2014 Human Development Report on world poverty; but they are still high level and abstract.

Such approaches struggle to explain localised variations, and in particular cannot predict the behaviours or outcomes of individual people with any accuracy. This “local knowledge problem” is caused by the fact that a great deal of the information that determines individual actions is personal and local, and not measurable at a distance – the experienced eye of the fruit buyer assessing not just the quality of the fruit but the quality of the farm and farmers that produce it, as a measure of the likely consistency of supply; the emotional attachments that cause us to favour one brand over another; or the degree of community ties between local businesses that influence their propensity to trade with each other.

“Sharing economy” business models that use social media and reputation systems to enable suppliers and consumers of goods and services to find each other and transact online are opening up this local knowledge to some degree. Local food networks, freecycling networks, and land-sharing schemes all use this technology to the benefit of local communities whilst potentially making information about detailed transactions more widely available. And to some degree, the human knowledge that influences how transactions take place can be encoded in “expert systems” which allow computer systems to codify the quantitative and heuristic rules by which people take decisions.

But these technologies are only used in a subset of the interactions that take place between people and businesses across the world, and it is unlikely that they’ll become ubiquitous in the foreseeable future (or that we would want them to become so). Will we ever reach the point where prospective house-buyers delegate decisions about where to live to computer programmes operating in online marketplaces rather than by visiting places and imagining themselves living there? Will we somehow automate the process of testing the freshness of fish by observing the clarity of their eyes and the freshness of their smell before buying them to cook and eat?

In many cases, while technology may play a role introducing potential buyers and sellers of goods and services to each other, it will not replace – or predict – the human behaviours involved in the transaction itself.

(Medway Youth Trust use predictive and textual analytics to draw insight into their work helping vulnerable children. They use technology to inform expert case workers, not to take decisions on their behalf.)

4. “Wicked problems” cannot be described using data and logic

Despite all of the challenges associated with problems in mathematics and the physical sciences, it is nevertheless relatively straightforward to frame and then attempt to solve problems in those domains; and to determine whether the resulting solutions are valid.

As the failure of Logical Atomism showed, though, problems in the human domain are much more difficult to describe in any systematic, complete and precise way – a challenge known as the “frame problem” in artificial intelligence. This is particularly true of “wicked problems” – challenges such as social mobility or vulnerable families that are multi-faceted, and consist of a variety of interdependent issues.

Take job creation, for example. Is that best accomplished through creating employment in taxpayer-funded public sector organisations? Or by allowing private-sector wealth to grow, creating employment through “trickle-down” effects? Or by maximising overall consumer spending power as suggested by “middle-out” economics? All of these ideas are described not using the language of mathematics or other formal logical systems, but using natural human language which is subjective and inconsistent in use.

The failure of Logical Atomism to fully represent such concepts in formal logical systems through which truth and falsehood can be determined with certainty emphasises what we all understand intuitively: there is no single “right” answer to many human problems, and no single “right” action in many human situations.

(An electricity bill containing information provided by OPower comparing one household’s energy usage to their neighbours. Image from Grist)

5. Behavioural economics and the caprice of human behaviour

“Behavioural economics” attempts to predict the way that humans behave when making choices that have a measurable impact on them – for example, whether to put the washing machine on at 5pm when electricity is expensive, or at 11pm when it is cheap.

But predicting human behaviour is notoriously unreliable.

For example, in a smart water-meter project in Dubuque, Iowa, households that were told how their water conservation compared to that of their near neighbours were found to be twice as likely to take action to improve their efficiency as those who were only told the details of their own water use. In other words, people who were given quantified evidence that they were less responsible water users than their neighbours changed their behaviour. OPower have used similar techniques to help US households save 1.9 terawatt hours of power simply by including a report based on data from smart meters in a printed letter sent with customers’ electricity bills.

These are impressive achievements; but they are not always repeatable. A recycling scheme in the UK that adopted a similar approach found instead that it lowered recycling rates across the community: households who learned that they were putting more effort into recycling than their neighbours asked themselves “if my neighbours aren’t contributing to this initiative, then why should I?”

Low carbon engineering technologies like electric vehicles have clearly defined environmental benefits and clearly defined costs. But most Smart Cities solutions are less straightforward. They are complex socio-technical systems whose outcomes are emergent. Our ability to predict their performance and impact will certainly improve as more are deployed and analysed, and as University researchers, politicians, journalists and the public assess them. But we will never predict individual actions using these techniques, only the average statistical behaviour of groups of people. This can be seen from OPower’s own comparison of their predicted energy savings against those actually achieved – the predictions are good, but the actual behaviour of OPower’s customers shows a high degree of apparently random variation. Those variations are the result of the subjective, unpredictable and sometimes irrational behaviour of real people.

We can take insight from Behavioural Economics and other techniques for analysing human behaviour in order to create appropriate strategies, policies and environments that encourage the right outcomes in cities; but none of them can be relied on to give definitive solutions to any individual person or situation. They can inform decision-making, but are always associated with some degree of uncertainty. In some cases, the uncertainty will be so small as to be negligible, and the predictions can be treated as deterministic rules for achieving the desired outcome. But in many cases, the uncertainty will be so great that predictions can only be treated as general indications of what might happen; whilst individual actions and outcomes will vary greatly.

(Of course it is impossible to predict individual criminal actions as portrayed in the film “Minority Report”. But it is very possible to analyse past patterns of criminal activity, compare them to related data such as weather and social events, and predict the likelihood of crimes of certain types occurring in certain areas. Cities such as Memphis and Chicago have used these insights to achieve significant reductions in crime)

Learning to value insight without certainty

Mathematics and digital technology are incredibly powerful; but they will never perfectly and completely describe and predict our world in human terms. In many cases, our focus for using them should not be on automation: it should be on the enablement of human judgement through better availability and communication of information. And in particular, we should concentrate on communicating accurately the meaning of information in the context of its limitations and uncertainties.

There are exceptions where we automate systems because of a combination of a low-level of uncertainty in data and a large advantage in acting autonomously on it. For example, anti-lock braking systems save lives by using automated technology to take thousands of decisions more quickly than most humans would realise that even a single decision needed to be made; and do so based on data with an extremely low degree of uncertainty.

But the most exciting opportunity for us all is to learn to become sophisticated users of information that is uncertain. The results of textual analysis of sentiment towards products and brands expressed in social media are far from certain; but they are still of great value. Similar technology can extract insights from medical research papers, case notes in social care systems, maintenance logs of machinery and many other sources. Those insights will rarely be certain; but properly assessed by people with good judgement they can still be immensely valuable.

This is a much better way to understand the value of technology than ideas like “perfect knowledge” and “algorithmic regulation”. And it is much more likely that people will trust the benefits that we claim new technologies can bring if we are open about their limitations. People won’t use technologies that they don’t trust; and they won’t invest their money in them or vote for politicians who say they’ll spend their taxes on them.

Thank you to Richard Brown and Adrian McEwen for discussions on Twitter that helped me to prepare this article. A more in-depth discussion of some of the scientific and philosophical issues I’ve described, and an exploration of the nature of human intelligence and its non-deterministic characteristics, can be found in the excellent paper “Answering Descartes: Beyond Turing” by Stuart Kauffman, published by MIT Press.

Can Smarter City technology measure and improve our quality of life?

(Photo of Golden Gate Bridge, San Francisco, at night by David Yu)

Can information and technology measure and improve the quality of life in cities?

That seems a pretty fundamental question for the Smarter Cities movement to address. There is little point in us expending time and money on the application of technology to city systems unless we can answer it positively. It’s a question that I had the opportunity to explore with technologists and urbanists from around the world last week at the Urban Systems Collaborative meeting in London, on whose blog this article will also appear.

Before thinking about how we might approach such a challenging and complex issue, I’d like to use two examples to support my belief that we will eventually conclude that “yes, information and technology can improve the quality of life in cities.”

The first example, which came to my attention through Colin Harrison, who heads up the Urban Systems Collaborative, concerns public defibrillator devices – equipment that can be used to give an electric shock to the victim of a heart attack to restart their heart. Defibrillators are positioned in many public buildings and spaces. But who knows where they are and how to use them in the event that someone nearby suffers a heart attack?

To answer those questions, many cities now publish open data lists of the locations of publicly accessible defibrillators. Consequently, SmartPhone apps now exist that can tell you where the nearest one to you is located. As cities begin to integrate these technologies with databases of qualified first-aiders and formal emergency response systems, it becomes more feasible that when someone suffers a heart attack in a public place, a nearby first-aider might be notified of the incident and of the location of a nearby defibrillator, and be able to respond valuable minutes before the arrival of emergency services. So in this case, information and technology can increase the chances of heart attack victims recovering.
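
The core of such an app is strikingly simple. The sketch below is illustrative only – the locations are invented, and a real service would use a live open-data feed and proper routing rather than straight-line distance – but it shows how little is needed to turn an open data list into potentially life-saving information:

```python
import math

# Invented example data: (name, latitude, longitude) of publicly listed defibrillators.
defibrillators = [
    ("Town Hall",       52.4862, -1.8904),
    ("Central Library", 52.4796, -1.9026),
    ("Railway Station", 52.4778, -1.8985),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two points, in kilometres."""
    radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def nearest_defibrillator(lat, lon):
    """Return the closest listed defibrillator to the given location."""
    return min(defibrillators, key=lambda d: haversine_km(lat, lon, d[1], d[2]))

print(nearest_defibrillator(52.4814, -1.8998))
```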

(Why Smarter Cities matter: “Lives on the Line” by James Cheshire at UCL’s Centre for Advanced Spatial Analysis, showing the variation in life expectancy across London and its correlation to child poverty. From Cheshire, J. 2012. Lives on the Line: Mapping Life Expectancy Along the London Tube Network. Environment and Planning A. 44 (7). Doi: 10.1068/a45341)

In a more strategic scenario, the Centre for Advanced Spatial Analysis (CASA) at University College London have mapped life expectancy at birth across London. Life expectancy across the city varies from 75 to 96 years, and CASA’s researchers were able to correlate it with a variety of other issues such as child poverty.

Life expectancy varies by 10 or 20 years in many cities in the developed world; analysing its relationship to other economic, demographic, social and spatial information can provide insight into where money should be spent on providing services that address the issues leading to it, and that determine quality of life. The UK Technology Strategy Board cited Glasgow’s focus on this challenge as one of their reasons for investing £24 million in Glasgow’s Future Cities Demonstrator project – life expectancy at birth for male babies in Glasgow varies by 26 years between the poorest and wealthiest areas of the city.
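
The kind of analysis described here is, at its simplest, a correlation between two sets of figures. The sketch below uses invented ward-level numbers (not CASA’s or Glasgow’s real data) purely to show the shape of the calculation:

```python
# Invented ward-level figures, for illustration only.
child_poverty_pct = [12, 18, 25, 31, 38, 44, 51]   # percentage of children in poverty
life_expectancy   = [86, 84, 82, 80, 79, 77, 76]   # life expectancy at birth, in years

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists of numbers."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    spread_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    spread_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return covariance / (spread_x * spread_y)

# A value close to -1 indicates that higher child poverty goes with lower life expectancy.
print(round(pearson_correlation(child_poverty_pct, life_expectancy), 2))
```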

These examples clearly show that in principle urban data and technology can contribute to improving quality of life in cities; but they don’t explain how to do so systematically across the very many aspects of quality of life and city systems, and between the great variety of urban environments and cultures throughout the world. How could we begin to do that?

Deconstructing “quality of life”

We must first think more clearly about what we mean by “quality of life”. There are many needs, values and outcomes that contribute to quality of life and its perception. Maslow’s “Hierarchy of Needs” is a well-researched framework for considering them. We can use this as a tool for considering whether urban data can inform us about, and help us to change, the ability of a city to create quality of life for its inhabitants.

(Maslow’s Hierarchy of Needs, image by Factoryjoe via Wikimedia Commons)

But whilst Maslow’s hierarchy tells us about the various aspects that comprise the overall quality of life, it only tells us about our relationship with them in a very general sense. Our perception of quality of life, and what creates it for us, is highly variable and depends on (at least) some of the following factors:

  • Individual lifestyle preferences
  • Age
  • Culture and ethnicity
  • Social standing
  • Family status
  • Sexuality
  • Gender
  • … and so on.

Any analysis of the relationship between quality of life, urban data and technology must take this variability into account; either by allowing for it in the analytic approach; or by enabling individuals and communities to customise the use of data to their specific needs and context.

Stress and Adaptability

Two qualities of urban systems and life within them that can help us to understand how urban data of different forms might relate to Maslow’s hierarchy of needs and individual perspectives on it are stress and adaptability.

Jurij Paraszczak, IBM’s Director of Research for Smarter Cities, suggested that one way to improve quality of life is to reduce stress. A city with efficient, well-integrated services – such as transport and the granting of business permits – will likely cause less stress, and offer a higher quality of life, than a city whose services are disjointed and inefficient.

One cause of stress is the need to change. The physicist Geoffrey West is one of many scientists who have explored the roles of technology and population growth in speeding up city systems; as our world changes more and more quickly, our cities will need to become more agile and adaptable – technologists, town planners and economists all seem to agree on this point.

The architect Kelvin Campbell has explored how urban environments can support adaptability by enabling actors within them to innovate with the resources available to them (streets, buildings, spaces, technology) in response to changes in local and global context – changes in the economy or in cultural trends, for example.

“Service scientists” analyse the adaptability of systems (such as cities) by considering the “affordances” they offer to actors within them. An “affordance” is a capability within a system that is not exercised until an actor chooses to exercise it in order to create value that is specific to them, and specific to the time, place and context within which they act.

An “affordance” might be the ability to start a temporary business or “pop-up” shop within a disused building by exploiting a temporary exemption from planning controls. Or it might be the ability to access open city data and use it as the basis of new information-based business services. (I explored some ideas from science, technology, economics and urbanism for creating adaptability in cities in an article in March this year).

(Photo by lecercle of a girl in Mumbai doing her homework on whatever flat surface she could find. Her use of a stationary tool usually employed for physical mobility to enhance her own social mobility is an example of the very basic capacity we all have to use the resources available to us in innovative ways)

Stress and adaptability are linked. The more personal effort that city residents must exert in order to adapt to changing circumstances (i.e. the less that a city offers them useful affordances), then the more stress they will be subjected to.

Stress; rates of change; levels of effort and cost exerted on various activities: these are all things that can be measured.

Urban data and quality of life in the district high street

In order to explore these ideas in more depth, our discussion at the Urban Systems Collaborative meeting explored a specific scenario systematically. We considered a number of candidate scenarios – from a vast city such as New York, with a vibrant economy but affected by issues such as flood risk; through urban parks and property developments down to the scale of an individual building such as a school or hospital.

We chose to start with a scenario in the middle of that scale range that is the subject of particularly intense debate in economics, policy and urban design: a mixed-demographic city district with a retail centre at its heart spatially, socially and economically.

We imagined a district with a population of around 50,000 to 100,000 people within a larger urban area; with an economy including the retail, service and manufacturing sectors. The retail centre is surviving with some new businesses starting; but also with some vacant property; and with a mixture of national chains, independent specialist stores, pawnshops, cafes, payday lenders, pubs and betting shops. We imagined that local housing stock would support many levels of wealth from benefits-dependent individuals and families through to millionaire business owners. A district similar to Kings Heath in Birmingham, where I live, and whose retail economy was recently the subject of an article in the Economist magazine.

We asked ourselves what data might be available in such an environment; and how it might offer insight into the elements of Maslow’s hierarchy.

We began by considering the first level of Maslow’s hierarchy, our physiological needs; and in particular the availability of food. Clearly, food is a basic survival need; but the availability of food of different types – and our individual and cultural propensity to consume them – also contributes to wider issues of health and wellbeing.

(York Road, Kings Heath, in the 2009 Kings Heath Festival. Photo by Nick Lockey)

Information about food provision, consumption and processing can also give insights into economic and social issues. For example, the Economist reported in 2011 that since the 2008 financial crash, some jobs lost in professional service industries such as finance in the UK had been replaced by jobs created in independent artisan industries such as food. Evidence of growth in independent businesses in artisan and craft-related sectors in a city area may therefore indicate the early stages of its recovery from economic shock.

Similarly, when a significant wave of immigration from a new cultural or ethnic group takes place in an area, then it tends to result in the creation of new, independent food businesses catering to preferences that aren’t met by existing providers. So a measure of diversity in food supply can be an indicator of economic and social growth.
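
One simple way to turn “diversity in food supply” into a number is an entropy-style diversity index over the categories of food business in an area. The sketch below uses invented counts; a real analysis might draw its categories from local business listings or directories such as the one mentioned below:

```python
import math

# Invented counts of food businesses by category in a district high street.
food_businesses = {
    "supermarket": 2,
    "independent cafe": 6,
    "caribbean takeaway": 3,
    "polish delicatessen": 2,
    "artisan bakery": 1,
    "south asian grocer": 4,
}

def shannon_diversity(counts):
    """Shannon diversity index: higher values mean a more varied mix of categories."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

# Tracked over time, a rising index may signal the kind of economic and social
# change described above; a falling one, a high street losing its variety.
print(round(shannon_diversity(list(food_businesses.values())), 2))
```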

So by considering a need that Maslow’s hierarchy places at the most basic level, we were able to identify data that describes an urban area’s ability to support that need – for example, the “Enjoy Kings Heath” website provides information about local food businesses; and furthermore, we identified ways that the same data related to needs throughout the other levels of Maslow’s hierarchy.

We next considered how economic flows within and outside an area can indicate not just local levels of economic activity; but also the area’s trading surplus or deficit. Relevant information in principle exists in the form of the accounts and business reports of businesses. Initiatives such as local currencies and loyalty schemes attempt to maximise local synergies by minimising the flow of money out of local economies; and where they exploit technology platforms such as Droplet’s SmartPhone payments service, which operates in London and Birmingham, the money flows within local economies can be measured.

These money flows have effects that go beyond the simple value of assets and property within an area. Peckham high street in London has unusually high levels of money flow in and out of its economy due to a high concentration of import/export businesses, and to local residents transferring money to relatives overseas. This flow of money makes business rents in the area disproportionately high compared to the value of local assets.

Our debate also touched on environmental quality and transport. Data about environmental quality is increasingly available from sensors that measure water and air quality and the performance of sewage systems. These clearly contribute insights that are relevant to public health. Transport data provides perhaps more subtle insights. It can provide insight into economic activity; productivity (traffic jams waste time); environmental impact; and social mobility.

My colleagues in IBM Research have recently used anonymised data from GPS sensors in SmartPhones to analyse movement patterns in cities such as Abidjan and Istanbul on behalf of their governments and transport authorities; and to compare those movement patterns with public transport services such as bus routes. When such data is used to alter public transport services so that they better match the end-to-end journey requirements of citizens, an enormous range of individual, social, environmental and economic benefits are realised.
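
The first step of that kind of analysis is simply to aggregate anonymised journeys into origin-destination counts on a coarse grid, which can then be compared with the routes that buses actually follow. The sketch below is a toy version with invented coordinates; it is not IBM Research’s method, just an indication of the basic idea:

```python
from collections import Counter

# Invented, anonymised journeys: (origin_lat, origin_lon, dest_lat, dest_lon).
journeys = [
    (5.3364, -4.0267, 5.3097, -4.0127),
    (5.3360, -4.0260, 5.3090, -4.0130),
    (5.3480, -3.9880, 5.3097, -4.0127),
]

def grid_cell(lat, lon, cell_size=0.01):
    """Snap a coordinate to a coarse grid cell (roughly one-kilometre squares)."""
    return (round(lat / cell_size), round(lon / cell_size))

# Count journeys between pairs of grid cells to find the busiest origin-destination flows.
od_counts = Counter((grid_cell(o_lat, o_lon), grid_cell(d_lat, d_lon))
                    for o_lat, o_lon, d_lat, d_lon in journeys)

for (origin, destination), count in od_counts.most_common():
    print(origin, "->", destination, ":", count, "journeys")
```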

(The origins and destinations of end-to-end journeys made in Abidjan, identified from anonymised SmartPhone GPS data)

Finally, we considered data sources and aspects of quality of life relating to what Maslow called “self-actualisation”: the ability of people within the urban environment of our scenario to create lifestyles and careers that are individually fulfilling and that reward creative self-expression. Whilst not direct, measurements of the registration of patents, or of the formation and survival of businesses in sectors such as construction, technology, arts and artisan crafts, relate to those values in some way.

In summary, the exercise showed that a great variety of data is available that relates to the ability of an urban environment to provide Maslow’s hierarchy of needs to people within it. To gain a fuller picture, of course, we would need to repeat the exercise with many other urban contexts at every scale from a single building up to the national, international and geographic context within which the city exists. But this seems a positive start.

Recognising the challenge

Of course, it is far from straightforward to convert these basic ideas and observations into usable techniques for deriving insight and value concerning quality of life from urban data.

What about the things that are extremely hard to measure but which are often vital to quality of life – for example the cash economy? Physical cash is notoriously hard to trace and monitor; and arguably it is particularly important to the lives of many individuals and communities who have the most significant quality of life challenges; and to those who are responsible for some of the activities that detract from quality of life – burglary, mugging and the supply of narcotics, for example.

The Urban Systems Collaborative’s debate also touched briefly on the question of whether we can more directly measure the outcomes that people care about – happiness, prosperity, the ability to provide for our families, for example. Antti Poikola has written an article on his blog, “Vital signs for measuring the quality of life in cities”, based on the presentation on that topic by Samir Menon of Tata Consultancy Services. Samir identified a number of “happiness indices”, including the one proposed by the UK Prime Minister, David Cameron, the European Quality of Life Survey, the OECD’s Better Life Index, and the Social Progress Index created by economist Michael Porter. Those indices generally attempt to combine a number of different quantitative indicators with qualitative information from surveys into an overall score. Their accuracy and usefulness are the subject of contentious debate.

As an alternative, Michael Mezey of the Royal Society for the Arts recently collected descriptions of attempts to measure happiness more directly by identifying the location of issues or events associated with positive or negative emotions – such as parks and pavements fouled by dog litter or displays of emotion in public. It’s fair to say that the results of these approaches are very subjective and selective so far, but it will be interesting to observe what progress is made.

There is also a need to balance our efforts between creating value from the data that is available to us – which is surely a resource that we should exploit – with making sure that we focus our efforts on addressing our most important challenges, whether or not data relevant to them is easily accessible.

And in practice, a great deal of the data that describes cities is still not very accessible or useful. Most of it exists within IT systems that were designed for a specific purpose – for example, to allow building owners to manage the maintenance of their property. Those systems may not be very good at providing data in a way that is useful for new purposes – for example, identifying whether a door is connected to a pavement by a ramp or by steps, and hence how easy it is for a wheelchair user to enter a building.

(Photo by Closed 24/7 of the Jaguar XF whose designers used “big data” analytics to optimise the emotional response of potential customers and drivers)

Generally speaking, transforming data that is useful for a specific purpose into data that is generally useful takes time, effort and expertise – and costs money. We may desire city data to be tidied up and made more readily accessible; just as we may desire a disused factory to be converted into useful premises for shops and small businesses. But securing the investment required to do so is often difficult – this is why open city data is a “brownfield regeneration” challenge for the information age.

We don’t yet have a general model for addressing that challenge, because the socio-economic model for urban data has not been defined. Who owns it? What does it cost to create? What uses of it are acceptable? When is it proper to profit from data?

Whilst in principle the data available to us, and our ability to derive insight and knowledge from it, will continue to grow, our ability to benefit from it in practice will be determined by these crucial ethical, legal and economic issues.

There are also more technical challenges. As any mathematician or scientist in a numerate discipline knows, data, information and analysis models have significant limitations.

Any measurement has an inherent uncertainty. Location information derived from smartphones is usually accurate to within a few metres when GPS services are available, for example; but only to within a few hundred metres when derived by triangulation between mobile transmission masts. That level of inaccuracy is tolerable if you want to know which city you are in; but not if you need to know where the nearest defibrillator is.
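
One simple way to act on that point is to compare the typical uncertainty of a data source against the accuracy a given purpose requires, and to refuse to rely on a fix that is not good enough. The figures in this sketch are illustrative assumptions, not specifications of any particular device or network:

```python
# A sketch of "fitness for purpose" checks on location data. The accuracy
# figures are illustrative assumptions, not specifications of any device.

TYPICAL_ACCURACY_M = {
    "gps": 5.0,                   # assumed: a few metres outdoors with good signal
    "cell_triangulation": 300.0,  # assumed: a few hundred metres
}

REQUIRED_ACCURACY_M = {
    "which_city_am_i_in": 10_000.0,
    "nearest_defibrillator": 25.0,
}

def usable(source, purpose):
    """True if the source's typical error is within what the purpose tolerates."""
    return TYPICAL_ACCURACY_M[source] <= REQUIRED_ACCURACY_M[purpose]

print(usable("cell_triangulation", "which_city_am_i_in"))     # True
print(usable("cell_triangulation", "nearest_defibrillator"))  # False
print(usable("gps", "nearest_defibrillator"))                 # True
```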

These limitations arise both from the practical limitations of measurement technology; and from fundamental scientific principles that determine the performance of measurement techniques.

We live in a “warm” world – roughly 300 degrees Celsius above what scientists call “absolute zero“, the coldest temperature possible. Warmth is created by heat energy; that energy makes the atoms from which we and our world are made “jiggle about” – to move randomly. When we touch a hot object and feel pain it is because this movement is too violent to bear – it’s like being pricked by billions of tiny pins. This random movement creates “noise” in every physical system, like the static we hear in analogue radio stations or on poor quality telephone lines.

And if we attempt to measure the movements of the individual atoms that make up that noise, we enter the strange world of quantum mechanics in which Heisenberg’s Uncertainty Principle states that the act of measuring such small objects changes them in unpredictable ways. It’s hardly a precise analogy, but imagine trying to measure how hard the surface of a jelly is by hitting it with a hammer. You’d get an idea of the jelly’s hardness by doing so, but after the act of “measurement” you wouldn’t be left with the same jelly. And before the measurement you wouldn’t be able to predict the shape of the jelly afterwards.
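
One way to see what this noise means in practice is to simulate repeated measurements of a single quantity: averaging many readings narrows the uncertainty, but never removes it entirely – which is why honest experimental data carries error bars like those in the graph below. This is a minimal sketch; the “true” value and noise level are arbitrary assumptions.

```python
import random

# A sketch of measurement noise: repeated readings of a quantity whose true
# value is 20.0, each disturbed by random "thermal" noise. Averaging narrows
# the spread, but the estimate never becomes exact.

random.seed(42)
TRUE_VALUE = 20.0
NOISE_SD = 0.5  # assumed noise level, arbitrary units

def measure():
    """One noisy reading of the underlying quantity."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

for n in (1, 10, 100, 1000):
    readings = [measure() for _ in range(n)]
    mean = sum(readings) / n
    # The standard error of the mean shrinks as 1/sqrt(n) but never reaches zero.
    std_err = NOISE_SD / n ** 0.5
    print(f"{n:5d} readings: estimate {mean:6.3f} ± {std_err:.3f}")
```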

(A graph from my PhD thesis showing experimental data plotted against the predictions of an analytic model. Notice that whilst the theoretical prediction (the smooth line) is a good guide to the experimental data, each actual data point lies above or below the line, not on it. In addition, each data point has a vertical bar expressing the level of uncertainty involved in its measurement. In most circumstances, data is uncertain and theory is only a rough guide to reality.)

Even if our measurements were perfect, our ability to understand what they are telling us is not. We draw insight into the behaviour of a real system by comparing measurements of it to a theoretical model of its behaviour. Weather forecasters predict the weather by comparing real data about temperature, air pressure, humidity and rainfall to sophisticated models of weather systems; but, as the famous British preoccupation with talking about the weather illustrates, their predictions are frequently inaccurate. Quite simply this is because the weather system of our world is more complicated than the models that weather forecasters are able to describe using mathematics; and process using today’s computers.

This may all seem very academic; and indeed it is – these are subjects that I studied for my PhD in Physics. But all scientists, mathematicians and engineers understand them; and whether our work involves city systems, motor cars, televisions, information technology, medicine or human behaviour, when we work with data, information and analysis technology we are very much aware and respectful of their limitations.

Most real systems are more complicated than the theoretical models that we are able to construct and analyse. That is especially true of any system that includes the behaviour of people – in other words, the vast majority of city systems. Despite the best efforts of psychology, social science and artificial intelligence we still do not have an analytic model of human behaviour.

For open data and Smarter Cities to succeed, we need to openly recognise these challenges. Data and technology can add immense value to city systems – for instance, IBM’s “Deep Thunder” technology creates impressively accurate short-term and short-range predictions of weather-related events such as flash-flooding that have the potential to save lives. But those predictions, and any other result of data-based analysis, have limitations; and are associated with caveats and constraints.

It is only by considering the capabilities and limitations of such techniques together that we can make good decisions about how to use them – for example, whether to trust our lives to the automated analytics and control systems involved in anti-lock braking systems, as the vast majority of us do every time we travel by road; or whether to use data and technology only to provide input into a human process of consideration and decision-making – as takes place in Rio when city agency staff consider Deep Thunder’s predictions alongside other data and use their own experience and that of their colleagues in determining how to respond.

In current discussions of the role of technology in the future of cities, we risk creating a divide between “soft” disciplines that deal with qualitative, subjective matters – social science and the arts for example; and “hard” disciplines that deal with data and technology – such as science, engineering, mathematics.

In the most polarised debates, opinion from “soft” disciplines is that “Smart cities” is a technology-driven approach that does not take human needs and nature into account, and does not recognise the variability and uncertainty inherent in city systems; and opinion from “hard” disciplines is that operational, design and policy decisions in cities are taken without due consideration of data that can be used to inform them and predict their outcomes. As Stephan Shakespeare wrote in the “Shakespeare Review of Public Sector Information“, “To paraphrase the great retailer Sir Terry Leahy, to run an enterprise without data is like driving by night with no headlights. And yet that is what government often does.”

There is no reason why these positions cannot be reconciled. In some domains “soft” and “hard” disciplines regularly collaborate. For example, the interior and auditory design of the Jaguar XF car, first manufactured in 2008, was designed by re-creating the driving experience in a simulator at the University of Warwick, and analysing the emotional response of test subjects using physiological sensors and data. Such techniques are now routinely used in product design. And many individuals have a breadth of knowledge that extends far beyond their core profession into a variety of areas of science and the arts.

But achieving reconciliation between all of the stakeholders involved in the vastly complex domain of cities – including the people who live in them, not just the academics, professionals and politicians who study, design, engineer and govern them – will not happen by default. It will only happen if we have an open and constructive debate about the capabilities and the limitations of data, information and technology; and if we are then able to communicate them in a way that expresses to everyone why Smarter City systems will improve their quality of life.

(“Which way to go?” by Peter Roome)

What’s next?
It’s astonishing and encouraging that we could use a model of individual human needs to navigate the availability and value of data in the massively collective context of an urban scenario. To continue developing an understanding of the ability of information and technology to contribute to quality of life within cities, we need to expand that approach to explore the other dimensions we identified that affect perceptions of quality of life: culture, age and family status, for example; and within both larger and smaller scales of city context than the “district” scenario that we started with.

And we need to compare that approach to existing research work such as the Liveable Cities research collaboration between UK Universities that is establishing an evidence-based technique for assessing wellbeing; or the IBM Research initiative “SCRIBE” which seeks to define the meaning of and relationships between the many types of data that describe cities.

As a next step, the Urban Systems Collaborative attendees suggested that it would be useful to consider how people in different circumstances in cities use data, information and technology to take decisions:  for example, city leaders, businesspeople, parents, hostel residents, commuters, hospital patients and so forth across the incredible variety of roles that we play in cities. You can find out more about how the Collaborative is taking this agenda forward on their website.

But this is not a debate that belongs only within the academic community or with technologists and scientists. Information and technology are changing the cities, society and economy that we live in and depend on. But that information results from data that in large part is created by all of our actions and activities as individuals, as we carry out our lives in cities, interacting with systems that from a technology perspective are increasingly instrumented, interconnected and intelligent. We are the ultimate stakeholders in the information economy, and we should seek to establish an equitable consensus for how our data is used; and that consensus should include an understanding and acceptance between all parties of both the capabilities and limitations of information and technology.

I’ve written before about the importance of telling stories that illustrate ways in which technology and information can change lives and communities for the better. The Community Lovers’ Guide to Birmingham is a great example of doing this. As cities such as Birmingham, Dublin and Chicago demonstrate what can be achieved by following a Smarter City agenda, I’m hoping that those involved can tell stories that will help other cities across the world to pursue these ideas themselves.

(This article summarises a discussion I chaired this week to explore the relationship between urban data, technology and quality of life at the Urban Systems Collaborative’s London workshop, organised by my ex-colleague, Colin Harrison, previously an IBM Distinguished Engineer responsible for much of our Smarter Cities strategy; and my current colleague, Jurij Paraszczak, Director of Industry Solutions and Smarter Cities for IBM Research. I’m grateful for the contributions of all of the attendees who took part. The article also appears on the Urban Systems Collaborative’s blog).

Can cities break Geoffrey West’s laws of urban scaling?

(Photo of Kowloon by Frank Müller)

As I mentioned a couple of weeks ago, I recently read Geoffrey West’s fascinating paper on urban scaling laws, “Growth, innovation, scaling and the pace of life in cities“.

The paper applies to cities techniques for studying the emergent properties of self-organising complex systems that I recall from my doctoral studies in the Physics and Engineering of Superconducting Devices.

Cities, composed of hundreds of thousands or millions of human beings with free will who interact with each other, are clearly examples of such complex systems; and their emergent properties of interest include economic output, levels of crime, and expenditure on maintaining and expanding physical infrastructures.

It’s a less intimidating read than it might sound, and draws fascinating conclusions about the relationship between the size of city populations; their ability to create wealth through innovation; sustainability; and what many of us experience as the increasing speed of modern life.

I’m going to summarise the conclusions the paper draws about the characteristics and behaviour of cities; and then I’d like to challenge us to change them.

Professor West’s paper (which is also summarised in his excellent TED talk) uses empirical techniques to present fascinating insights into how cities have performed in our experience so far; but as I’ve argued before, such conclusions drawn from historic data do not rule out the possibility of cities achieving different levels of performance in the future by undertaking transformations.

That potential to transform city performance is vitally important in the light of West’s most fundamental finding: that the largest, densest cities currently create the most wealth most efficiently. History shows that the most successful models spread, and in this case that could lead us towards the higher end of predictions for the future growth of world population in a society dominated by larger and larger megacities supported by the systems I’ve described in the past as “extreme urbanism“.

I personally don’t find that an appealing vision for our future, so I’m keen to pursue alternatives. (Note that Professor West is not advocating limitless city growth either; he’s simply analysing and reporting insights from the available data about cities, and doing it in an innovative and important way. I am absolutely not criticising his work; quite the opposite – I’m inspired by it).

So here’s an unfairly brief summary of his findings:

  • Quantitative measures of the creative performance of cities – such as wealth creation, or the number of patents and inventions generated by city populations – grow disproportionately quickly as city size increases: doubling a city’s population more than doubles these outputs.
  • Quantitative measures of the cost of city infrastructure grow more slowly than city size, because bigger cities can exploit economies of scale to grow more cheaply than smaller cities. (A minimal numerical sketch of these two regimes follows this list.)
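
Here is that sketch. The exponents used – roughly 1.15 for socio-economic outputs and 0.85 for infrastructure costs – are the values commonly quoted in discussions of West’s results; the prefactors and units are arbitrary assumptions, so only the trends in the output matter:

```python
# A minimal sketch of superlinear and sublinear urban scaling. The exponents
# are commonly quoted approximations; the prefactor and units are arbitrary.

def scaled_output(population, prefactor, exponent):
    """Power-law scaling: output = prefactor * population ** exponent."""
    return prefactor * population ** exponent

for population in (100_000, 1_000_000, 10_000_000):
    wealth = scaled_output(population, 1.0, 1.15)          # superlinear
    infrastructure = scaled_output(population, 1.0, 0.85)  # sublinear
    print(
        f"population {population:>10,}: "
        f"wealth per person {wealth / population:.2f}, "
        f"infrastructure per person {infrastructure / population:.2f}"
    )
```

Per-capita outputs rise with population while per-capita infrastructure costs fall – exactly the combination that makes the largest cities so economically attractive.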

West found that these trends were incredibly consistent across cities of very different sizes. To explain the consistency, he drew an analogy with biology: for almost all animals, characteristics such as metabolic rate and life expectancy vary in a very predictable way according to the size of the animal.

(Photo by Steve Jurvetson of Geoffrey West describing the scaling laws that determine animal characteristics). Note that whilst the chart focusses on mammals, the scaling laws are more broadly applicable.

The reason is that the thermodynamic, cardio-vascular and metabolic systems that support most animals work in the same way, and their performance is affected by size. For example, geometry determines that the surface area of small animals is larger relative to their body mass than that of large animals. So smaller animals lose heat through their skin more rapidly than larger animals. They therefore need faster metabolic systems that convert food to replacement heat more rapidly to keep them warm. This puts more pressure on their cardio-vascular systems and in particular their heart muscles, which beat more quickly and wear out sooner. So mice don’t live as long as elephants.
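
To make the geometric point concrete, here is a trivial sketch that idealises an animal as a sphere of uniform density – obviously a simplification, and the sizes chosen are only loosely indicative – showing that surface area per unit mass falls as body size grows:

```python
import math

# A trivial sketch of the geometry behind metabolic scaling: idealising an
# animal as a sphere of uniform density, surface area per unit mass falls
# as 1/radius, so small bodies lose heat relatively much faster.

DENSITY = 1000.0  # kg per cubic metre, roughly that of water

for radius_m in (0.03, 0.3, 1.5):   # mouse-ish, dog-ish, elephant-ish scales
    surface_area = 4 * math.pi * radius_m ** 2
    mass = DENSITY * (4 / 3) * math.pi * radius_m ** 3
    print(f"radius {radius_m:4.2f} m: {surface_area / mass:.4f} m^2 of surface per kg")
```

The ratio falls roughly as one over the body’s linear size, which is the geometric root of the biological scaling laws shown in the chart above.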

Other, more complex mechanisms are also involved; but they don’t contradict the idea that the emergent properties of biological systems are determined by the relationship between the scale of those systems and the performance of the processes that support them.

Professor West hypothesised that city systems such as transportation and utilities, as well as characteristics of the way that humans interact with each other, would similarly provide the underlying reasons for the urban scaling laws he observed.

Those systems are exactly what we need to affect if we are to change the relationship between city size and performance in the future. Whilst the cardio-vascular systems of animals are not something that animals can change, we absolutely can change the way that city systems behave – in the same way that as human beings we’ve extended our life expectancy through ingenuity in medicine and improvements in standards of living. This is precisely the idea behind Smarter cities.

(A graph from my own PhD thesis showing real experimental data plotted against a theoretical prediction similar to a scaling law. Notice that whilst the theoretical prediction (the smooth line) is a good guide to the experimental data, each actual data point lies above or below the line, not on it. In most circumstances, theory is only a rough guide to reality.)

The potential to do this is already apparent in West’s paper. In the graphs it presents, which plot the performance of individual cities against the predictions of urban scaling laws, every city varies slightly from the law. Some cities outperform, and some underperform. That’s exactly what we should expect when comparing real data to an analysis of this sort. Whilst the importance of these variations – both in biology and in cities – is hotly contested, personally I think they are crucial.
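
Those deviations can be quantified by comparing each city’s actual performance with the value a fitted scaling law predicts for its population. Here is a minimal sketch of that comparison; the city names, populations, outputs and fitted constants are all invented for illustration, not taken from West’s datasets:

```python
import math

# A sketch of measuring how individual cities deviate from a fitted scaling
# law. City names, populations, outputs and the fit itself are invented.

EXPONENT = 1.15   # assumed fitted exponent
PREFACTOR = 2.0   # assumed fitted prefactor, arbitrary units

cities = {
    "Alphaville": (1_200_000, 22_000_000),
    "Betaburgh":  (3_500_000, 61_000_000),
    "Gammaton":   (  450_000, 11_500_000),
}

for name, (population, actual_output) in cities.items():
    predicted = PREFACTOR * population ** EXPONENT
    # Log-ratio: positive means the city outperforms the scaling law.
    deviation = math.log(actual_output / predicted)
    verdict = "outperforms" if deviation > 0 else "underperforms"
    print(f"{name:10s} {verdict} the scaling law by {deviation:+.2f} (log units)")
```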

In my view, such variations suggest that the best way to interpret the urban scaling laws that Professor West discovered is as a challenge: they set the bar that cities should try to beat.

Cities everywhere are already exploring innovative, sustainable ways to create improvements in the performance of their social, economic and environmental systems. Examples include:

(Photograph by Meshed Media of Birmingham’s Social Media Cafe, where individuals from every part of the city who have connected online meet face-to-face to discuss their shared interest in social media.)

In all of those cases, cities have used technology effectively to disrupt and transform the behaviour of urban systems. They have all lifted at least some elements of performance above the bar set by urban scaling laws. There are many more examples in cities across the world. In fact, this process has been taking place continuously for as long as cities have existed – see, for example, the recent Centre for Cities report on the development and performance of cities in the UK throughout the 20th Century.

That report contains a specific challenge for Birmingham, my home city. It shows that in the first part of the 20th Century, Birmingham outperformed many UK cities and became prosperous and successful because of the diversity of its industries – famously expressed as the “city of a thousand trades”. In the latter part of the Century, however, as Birmingham became more dependent on an automotive industry that subsequently declined, the city lost a lot of ground. Birmingham is undertaking some exciting regenerative initiatives at present – such as the City Deal that increases its financial independence from Central Government; the launch of a Green Commission; and investments in ultra-fast broadband infrastructure. They are vitally important in order for the city to re-create a more vibrant, diverse, innovative and successful economy.

As cities everywhere emulate successful innovations, though, they will of course reset the bar of expected performance. Cities that wish to consistently outperform others will need to constantly generate new innovations.

This is where I’ll bring in another idea from physics – the concept of a phase change. A phase change occurs when a system passes a tipping point and suddenly switches from one type of behaviour to another. This is what happens when the temperature of water in a kettle rises from 98 to 99 to 100 degrees Centigrade and water – which is heavy and stays in the bottom of the kettle – changes to steam – which is light and rises out of the kettle’s spout. The “phase change” in this example is the transformation of a volume of water from a liquid to a gas through the process of boiling.

So the big question is: as we change the way that city systems behave, will we eventually encounter a phase change that breaks West’s fundamental finding that the largest cities create the most value most efficiently? For example, will we find new technologies for communication and collaboration that enable networks of people spread across thousands of miles of countryside or ocean to be as efficiently creative as the dense networks of people living in megacities?

I certainly hope so; because unless we can break the link between the size and the success of cities, I worry that the trend towards larger and larger cities and increasing global population will continue and eventually reach levels that will be difficult or impossible to maintain. West apparently agrees; in an interview with the New York Times, which provides an excellent review of his work, he stated that “The only thing that stops the superlinear equations is when we run out of something we need. And so the growth slows down. If nothing else changes, the system will eventually start to collapse.”

But I’m an optimist; so I look forward to the amazing innovations we’re all going to create that will break the laws of urban scaling and offer us a more attractive and sustainable future. It’s incredibly important that we find them.

(I’d like to thank Dr. Pam Waddell, the Director of Birmingham Science City, for her helpful comments during my preparation of this post).

The world is at our children’s fingertips; and they will change it

(Image by TurkleTom)

Several of my recent posts to this blog have been concerned with two sides of the same coin: the importance of science and technology skills to our societies and economies; and the importance of making technology and information consumable and accessible.

But this is the first time I’m putting those concerns to the test in the very act of writing my blog – which I’m doing using the iPad that arrived 3 days ago.

My last purchase from Apple – a company whose controlling approach to technology and media ecosystems I don’t admire – was a 3rd generation iPod; it’s now so unusually old that I’m often asked if it’s some strange *new* gadget. I was very unimpressed by the speed with which that iPod’s battery deteriorated, and by the impossibility of replacing it. So I needed some considerable persuasion to shell out several hundred pounds on an iPad.

That persuasion came from my 3 year old son. On the (very rare, if you’re my boss reading this) occasions that I work from home, I sometimes share my laptop screen with him. My side has my e-mail on it; his side has Thomas the Tank Engine on YouTube (he gets the better deal). Often when I launch a new window, it pops up on his side of the screen, obscuring whatever’s going on on Sodor. His immediate and instinctive reaction is to touch the screen and try to drag the obstruction out of the way.

(I heard an amazing corollary to this from a contact at Birmingham City Council yesterday – she’s seen her toddler drag her fingers apart on the surface of a paper magazine in an attempt to “zoom” the pictures in it!)

I’ve just written an article that repeats an often quoted though hard to source statistic that 90% of the information that exists in the world today was created (or more accurately recorded) in the last 5 years.

That made me think: every fact in the world is literally at the fingertips of our children.

You can argue about whether that’s literally true, and whether it’s equally true for all the children in the world (it’s clearly not); but there’s a deep and fundamental truth in the insight it suggests: however much we think the technologies we use today have already changed the world, it’s absolutely nothing compared to the utter transformation that will be created by the real “information natives” that our very young children will become.

That’s why I shelled out for an iPad this week. Love Apple or loathe them, they are creating technologies that offer us – if we explore and engage with them – a window into an important part of the future. And if we want to help our children, our schools, our businesses and our cities prepare for that future, then we had better do our best to get to grips with them ourselves.

How will the UK create the skills that the economy of 2020 will need?

(Photo by Orange Tuesday)

I’ve been reading Edward Glaeser’s book “Triumph of the City” recently. One of his arguments is that the basis of sustainable city economies is the presence of clusters of small, entrepreneurial businesses that constantly co-create new commercial value from technological innovations.

Alan Penn, Dean of the Bartlett Faculty of the Built Environment at UCL, made similar comments to me recently. Interestingly, both Alan and Edward Glaeser identified Birmingham, my hometown, as an example of a city with such an innovative, marketplace economy, along with London. They also both identified Manchester as a counter-example of a city overly dependent on commoditised industries and external investment.

Cities are fundamentally important to the UK economy; more than 90% of the UK population lives in urban areas. But many – or perhaps most – UK cities are not well placed to support innovative, marketplace-based, high-technology economies (see my recent post on this topic). For example, e-Skills UK report that less than 20% of people hired into information technology positions in the UK acquired their skills in the education system; and I agree strongly with Seth Godin’s views as expressed by the “Stop Stealing Dreams” manifesto that we need to question and change the fundamental objectives around which our education system is designed.

To create and / or sustain economies capable of organic innovation and growth, cities need a particular mixture of skills: entrepreneurial skills; commercial skills; operational skills; technology skills; and creative skills. The blunt truth is that our education system isn’t structured to deliver those skills to city economies with this objective.

Whilst the opinions I’ve expressed here are personal, I’ll shortly be launching a project at work for my employer, IBM, to look at the challenges in this space. IBM’s business interest is our need to continue hiring smart, skilled people in the UK; what additionally motivates individuals in IBM’s technical community to commit their time to the project is a personal passion for technology and education.

I’m enormously aware that I’m not the first person to whom these thoughts have occurred; and I know that I and my colleagues in IBM don’t have all the answers.

So if this topic interests you and you’d like to share your insight with the project I’m going to run this year, please let me know. I’d very much appreciate hearing from you.

Who will be the next generation of technology millionaires?

(Image: “IT is innovation” by Frank Allan Hansen)

A few years ago I attended a dinner debate hosted by the British Computer Society about the future of technology careers in the UK. At the time, I’d recently written a report for IBM UK on the subject. The common motivation was to explore the effect of globalisation on the UK’s IT industry.

Despite the continuing emergence of high quality technology industries around the world, the local demand for technology skills in the UK was then, and is now, increasing. The secret to understanding the seeming contradiction is twofold.

Firstly, consider which specific skills are required, and why. To cut a long story short, the ones that are needed on-shore in countries with high wages such as the UK are the ones most closely tied to agile innovation in local economic and cultural markets, or to the operation of critical infrastructures (such as water, roads and energy) or operations (such as banking and law enforcement).

Secondly, the more fundamental point is that we’re living through an Information Revolution that is increasing in pace and impact. That means the demand for science, technology, mathematics and information skills is going through the roof across the board. As  evidence, consider this article from McKinsey on the hidden “Information Economy”; or the claim that 90% of the information in the world was created in the last two years (widely referenced, e.g. by this article in Forbes); or that IBM now employs more mathematics PhD holders than any other organisation in the world.

At the BCS debate, a consultant from Capgemini introduced the evening by describing his meeting that morning with a group of London-based internet entrepreneurs. These people were young (20-25), successful (owning and running businesses worth £millions), and fiercely technology literate.

Today, I wonder whether the same meeting would be held with internet entrepreneurs. In ten years’ time, I certainly don’t think it will be – they’ll be genetic engineers, nano-technologists, or experts in some field we can’t imagine yet. Of course, there are already many early entrepreneurs exploring those fields, as was shown in Adam Rutherford’s recent BBC Horizon documentary “Playing God” (see this video or this review).

I’ve blogged recently about the importance of skills, education and localism to the future of our cities’ and country’s economies. This leads me to believe that more important than addressing the UK’s shortfall in IT skills (as reported by e-Skills last year) is understanding how to systematically integrate the teaching of technology, science, creative and business skills across schools, universities and vocational education. Further, that needs to be done in a way that’s responsive to the changes that will come to the sciences and technologies that have the most power to complement the unique economy, geography and culture of the British Isles.

This is already a problem for the UK economy. The e-Skills report found that UK businesses are nearly 10% less productive than US ones, and that 80% of that gap is down to less effective use of technology. Their research predicts that closing the technology gap could contribute £50bn to the UK economy over 5-7 years. But their finding that the British education system provides less than 20% of the technology skills we need today means that closing the gap will be hard.

As the information revolution proceeds, the problem will get worse. And unless we do something about it in an enlightened way that recognises that the science and technology skills we’ll need in 10 years time are not the IT skills that are familiar to us today, we’ll fail to address it.

I was born in 1970; for me, the Tandy TRS80 computer my family bought in 1980 was a technological marvel, with its 16k RAM and graphic resolution of 128×48 pixels (all of them green). Today, my 3 year old son is growing up with a high resolution smartphone touchscreen as an unremarkable part of his world. By the time he’s of working age, the world will be unrecognisable – as will the skills he’ll require to be successful in it.

From the earliest years, we need to be exciting children about the mixture of creativity, abstract thinking and modelling, mathematics, technology, art and entrepreneurialism that is apparent now in forums such as TED (www.ted.com). Whatever their interests and acumen, we need to give them the opportunity to find their own niche in that range of cross-disciplinary skills that will be economically valuable in the future. If we don’t, they won’t be ready to find jobs in the industries of the future when the computer programming industry, and others as we know them today, disappear.
