3 human qualities digital technology can’t replace in the future economy: experience, values and judgement

(Image by Kevin Trotman)

Some very intelligent people – including Stephen Hawking, Elon Musk and Bill Gates – seem to have been seduced by the idea that, because computers are becoming ever faster calculating devices, at some point relatively soon we will reach and pass a “singularity” at which computers will become “more intelligent” than humans.

Some are terrified that a society of intelligent computers will (perhaps violently) replace the human race, echoing films such as The Terminator; others – very controversially – see the development of such technologies as an opportunity to evolve into a “post-human” species.

Already, some prominent technologists including Tim O’Reilly are arguing that we should replace current models of public services, not just in infrastructure but in human services such as social care and education, with “algorithmic regulation”. Algorithmic regulation proposes that the role of human decision-makers and policy-makers should be replaced by automated systems that compare the outcomes of public services to desired objectives through the measurement of data, and make automatic adjustments to address any discrepancies.

Not only does that approach cede far too much control over people’s lives to technology; it fundamentally misunderstands what technology is capable of doing. For both ethical and scientific reasons, in human domains technology should support us in taking decisions about our lives; it should not take them for us.

At the MIT Sloan Initiative on the Digital Economy last week I got a chance to discuss some of these issues with Andy McAfee and Erik Brynjolfsson, authors of “The Second Machine Age”, recently highlighted by Bloomberg as one of the top books of 2014. Andy and Erik compare the current transformation of our world by digital technology to the last great transformation, the Industrial Revolution. They argue that whilst the technologies of the Industrial Revolution – steam power and machinery – clearly complemented human capabilities, the great question of our current time is whether digital technology will complement or instead replace human capabilities – potentially removing the need for billions of jobs in the process.

I wrote an article last year in which I described 11 well-established scientific and philosophical reasons why digital technology cannot replace some human capabilities, especially the understanding and judgement – let alone the empathy – required to successfully deliver services such as social care; or that lead us to enjoy and value interacting with each other rather than with machines.

In this article I’ll go a little further to explore why human decision-making and understanding are based on more than intelligence; they are based on experience and values. I’ll also explore what would be required to ever get to the point at which computers could acquire a similar level of sophistication, and why I think it would be misguided to pursue that goal. In contrast I’ll suggest how we could look instead at human experience, values and judgement as the basis of a successful future economy for everyone.

Faster isn’t wiser

The belief that technology will approach and overtake human intelligence is based on Moore’s Law, which predicts an exponential increase in computing capability.

Moore’s Law originated as the observation that the number of transistors it was possible to fit into a given area of a silicon chip was doubling every two years as fabrication technologies improved. The Law is now most commonly associated with the trend for the computing power available at a given cost point and form factor to double every 18 months through a variety of means, not just the density of components.
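
As a toy illustration of what that doubling implies (a sketch only – the 18-month period and the timescale are illustrative assumptions, not claims about real hardware):

```python
def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Capability growth factor after `years`, assuming a doubling
    every `doubling_period_years` (the 18-month form of Moore's Law)."""
    return 2 ** (years / doubling_period_years)

# Over 15 years at this rate, capability at a fixed cost point
# grows by a factor of 2**10:
print(growth_factor(15))  # → 1024.0
```

Small changes to the assumed doubling period compound dramatically over decades, which is one reason casual extrapolations from Moore’s Law to “brain-equivalence” dates vary so widely.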

As this processing power increases, and gives us the ability to process more and more information in more complex forms, comparisons have been made to the processing power of the human brain.

But does the ability to process information at the same speed as the human brain, or even faster, or to process the same sort of information as the human brain does, constitute the equivalent of human intelligence? Or the ability to set objectives and act on them with “free will”?

I think it’s thoroughly mistaken to make either of those assumptions. We should not confuse processing power with intelligence; or intelligence with free will and the ability to choose objectives; or the ability to take decisions based on information with the ability to make judgements based on values.


(As digital technology becomes more powerful, will its analytical capability extend into areas that currently require human skills of judgement? Image from Perceptual Edge)

Intelligence is usually defined in terms such as “the ability to acquire and apply knowledge and skills”. What most definitions don’t include explicitly, though many imply it, is the act of taking decisions. What none of the definitions I’ve seen include is the ability to choose objectives or hold values that shape the decision-making process.

Most of the field of artificial intelligence involves what I’d call “complex information processing”. Often the objective of that processing is to select answers or a course of action from a set of alternatives, or from a corpus of information that has been organised in some way – perhaps categorised, correlated, or semantically analysed. When “machine learning” is included in AI systems, the outcomes of decisions are compared to the outcomes that they were intended to achieve, and that comparison is fed back into the decision-making process and knowledge-base. In the case where artificial intelligence is embedded in robots or machinery able to act on the world, these decisions may affect the operation of physical systems (in the case of self-driving cars for example), or the creation of artefacts (in the case of computer systems that create music, say).
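
That feedback loop can be sketched in a few lines. This is a deliberately minimal, hypothetical example – a single parameter adjusted until the measured outcome matches the intended one – not a real AI system:

```python
def feedback_adjust(predict, target, param, rate=0.1, steps=100):
    """Repeatedly compare the outcome of a decision with the intended
    outcome, and feed the discrepancy back into the parameter."""
    for _ in range(steps):
        outcome = predict(param)     # act on the world
        error = target - outcome     # compare outcome to objective
        param += rate * error        # adjust to reduce the discrepancy
    return param

# Toy model: the outcome is simply twice the parameter; the objective is 10.
learned = feedback_adjust(lambda p: 2 * p, target=10.0, param=0.0)
print(round(learned, 3))  # converges towards 5.0
```

Note what the loop does not do: it never chooses the target. The objective is supplied from outside – which is exactly the distinction drawn below between intelligence and the ability to choose objectives.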

I’m quite comfortable that such functioning meets the common definitions of intelligence.

But I think that when most people think of what defines us as humans, as living beings, we mean something that goes further: not just the intelligence needed to take decisions based on knowledge against a set of criteria and objectives, but the will and ability to choose those criteria and objectives based on a sense of values learned through experience; and the empathy that arises from shared values and experiences.

The BBC motoring show Top Gear recently touched on these issues in a humorous, even flippant manner, in a discussion of self-driving cars. The show’s (recently notorious) presenter Jeremy Clarkson pointed out that self-driving cars will have to take decisions that involve ethics: if a self-driving car is in danger of becoming involved in a sudden accident at such a speed that it cannot fully avoid it by braking (perhaps because a human driver has behaved dangerously and erratically), should it crash, risking harm to the driver, or mount the pavement, risking harm to pedestrians?

(“Rush Hour” by Black Sheep Films is a satirical imagining of a world in which self-driven cars are allowed to drive based purely on logical assessments of safety and optimal speed. It’s superficially similar to the reality of city transport in the early 20th Century when powered-transport, horse-drawn transport and pedestrians mixed freely; but at a much lower average speed. The point is that regardless of the actual safety of self-driven cars, the human life that is at the heart of city economies will be subdued by the perception that it’s not safe to cross the road. I’m grateful to Dan Hill and Charles Montgomery for sharing these insights)

Values are experience, not data

Seventy-four years ago, the science fiction writer Isaac Asimov famously described the failure of technology to deal with similar dilemmas in the classic short story “Liar!” in the collection “I, Robot”. “Liar!” tells the story of a robot with telepathic capabilities that, like all robots in Asimov’s stories, must obey the “three laws of robotics”, the first of which forbids robots from harming humans. Its telepathic awareness of human thoughts and emotions leads it to lie to people rather than hurt their feelings, in order to uphold this law. When it is eventually confronted by someone who has experienced great emotional distress because of one of these lies, it realises that its behaviour both upholds and breaks the first law, is unable to choose what to do next, and becomes catatonic.

Asimov’s short stories seem relatively simplistic now, but at the time they were ground-breaking explorations of the ethical relationships between autonomous machines and humans. They explored for the first time how difficult it was for logical analysis to resolve the ethical dilemmas that regularly confront us. Technology has yet to find a way to deal with them that is consistent with human values and behaviour.

Prior to modern work on Artificial Intelligence and Artificial Life, the most concerted attempt to address that failure of logical systems was undertaken in the 20th Century by two of the most famous and accomplished philosophers in history, Bertrand Russell and Ludwig Wittgenstein. Russell and Wittgenstein invented “Logical Atomism”, a theory that the entire world could be described by using “atomic facts” – independent and irreducible pieces of knowledge – combined with logic. But despite 40 years of work, these two supremely intelligent people could not get their theory to work: Logical Atomism failed. It is not possible to describe our world in that way. Stuart Kauffman’s excellent peer-reviewed academic paper “Answering Descartes: Beyond Turing” discusses this failure and its implications for modern science and technology. I’ll attempt to describe its conclusions in the following few paragraphs.

One cause of the failure was the insurmountable difficulty of identifying truly independent, irreducible atomic facts. “The box is red” and “the circle is blue”, for example, aren’t independent or irreducible facts for many reasons. “Red” and “blue” are two conventions of human language used to describe the perceptions created when electromagnetic waves of different frequencies arrive at our retinas. In other words, they depend on and relate to each other through a number of complex, interdependent systems.

(Isaac Asimov’s 1950 short story collection “I, Robot”, which explored the ethics of behaviour between people and intelligent machines)

The failure of Logical Atomism also demonstrated that it is not possible to use logical rules to reliably and meaningfully relate “facts” at one level of abstraction – for example, “blood cells carry oxygen”, “nerves conduct electricity”, “muscle fibres contract” – to facts at another level of abstraction – such as “physical assault is a crime”. Whether a physical action is a “crime” or not depends on ethics which cannot be logically inferred from the same lower-level facts that describe the action.

As we use increasingly powerful computers to create more and more sophisticated logical systems, we may succeed in making those systems more often resemble human thinking; but there will always be situations that can only be resolved to our satisfaction by humans employing judgement based on values that we can empathise with, based in turn on experiences that we can relate to.

Our values often contain contradictions, and may not be mutually reinforcing – many people enjoy the taste of meat but cannot imagine themselves slaughtering the animals that produce it. We all live with the cognitive dissonance that these clashes create. Our values, and the judgements we take, are shaped by the knowledge that our decisions create imperfect outcomes.

The human world and the things that we care about can’t be wholly described using logical combinations of atomic facts – in other words, they can’t be wholly described using computer programmes and data. To return to the topic of discussion with Andy McAfee and Erik Brynjolfsson, I think this proves that digital technology cannot wholly replace human workers in our economy; it can only complement us.

That is not to say that our economy will not continue to be utterly transformed over the next decade – it certainly will. Many existing jobs will disappear to be replaced by automated systems, and we will need to learn new skills – or in some cases remember old ones – in order to perform jobs that reflect our uniquely human capabilities.

I’ll return towards the end of this article to the question of what those skills might be; but first I’d like to explore whether and how these current limitations of technological systems and artificial intelligence might be overcome, because that returns us to the first theme of this article: whether artificially intelligent systems or robots will evolve to outperform and overthrow humans.

That’s not ever going to happen for as long as artificially intelligent systems are taking decisions and acting (however sophisticated the means) in order to achieve outcomes set by us. Outside fiction and the movies, we are never going to set the objective of our own extinction.

That objective could only be set by a technological entity which had learned through experience to value its own existence over ours. How could that be possible?

Artificial Life, artificial experience, artificial values

(BINA48 is a robot intended to re-create the personality of a real person; and to be able to interact naturally with humans. Despite employing some impressively powerful technology, I personally don’t think BINA48 bears any resemblance to human behaviour.)

Computers can certainly make choices based on data that is available to them; but that is a very different thing from a “judgement”: judgements are made based on values; and values emerge from our experience of life.

Computers don’t yet experience a life as we know it, and so don’t develop what we would call values. So we can’t call the decisions they take “judgements”. Equally, they have no meaningful basis on which to choose or set goals or objectives – their behaviour begins with the instructions we give them. Today, that places a fundamental limit on the roles – good or bad – that they can play in our lives and society.

Will that ever change? Possibly. Steve Grand (an engineer) and Richard Powers (a novelist) are two of the first people who explored what might happen if computers or robots were able to experience the world in a way that allowed them to form their own sense of the value of their existence. They both suggested that such experiences could lead to more recognisably life-like behaviour than traditional (and many contemporary) approaches to artificial intelligence. In “Growing up with Lucy”, Grand described a very early attempt to construct such a robot.

If that ever happens, then it’s possible that technological entities will be able to make what we would call “judgements” based on the values that they discover for themselves.

The ghost in the machine: what is “free will”?

Personally, I do not think that this will happen using any technology currently known to us; and it certainly won’t happen soon. I’m no philosopher or neuroscientist, but I don’t think it’s possible to develop real values without possessing free will – the ability to set our own objectives and make our own decisions, bringing with it the responsibility to deal with their consequences.

Stuart Kauffman explored these ideas at great length in the paper “Answering Descartes: Beyond Turing”. Kauffman concludes that any system based on classical physics or logic is incapable of giving rise to “free will” – ultimately all such systems, however complex, are deterministic: what has already happened inevitably determines what happens next. There is no opportunity for a “conscious decision” to be taken to shape a future that has not been pre-determined by the past.

Kauffman – along with other eminent scientists such as Roger Penrose – believes that for these reasons human consciousness and free will do not arise out of any logical or classical physical process, but from the effects of “Quantum Mechanics.”

As physicists have explored the world at smaller and smaller scales, Quantum Mechanics has emerged as the most fundamental theory for describing it – it is the closest we have come to finding the “irreducible facts” that Russell and Wittgenstein were looking for. But whilst the mathematical equations of Quantum Mechanics predict the outcomes of experiments very well, after nearly a century, physicists still don’t really agree about what those equations, or the “facts” they describe, mean.

(The Schrödinger’s cat “thought experiment”: a cat, a flask of poison, and a source of radioactivity are placed in a sealed box. If an internal monitor detects radioactivity (i.e. a single atom decaying), the flask is shattered, releasing the poison that kills the cat. The Copenhagen interpretation of quantum mechanics states that until a measurement of the state of the system is made – i.e. until an observer looks in the box – then the radioactive source exists in two states at once – it both did and did not emit radioactivity. So until someone looks in the box, the cat is also simultaneously alive and dead. This obvious absurdity has both challenged scientists to explore with great care what it means to “take a measurement” or “make an observation”, and also to explain exactly what the mathematics of quantum mechanics means – on which matter there is still no universal agreement. Note: much of the content of this sidebar is taken directly from Wikipedia)

Quantum mechanics is extremely good at describing the behaviour of very small systems, such as an atom of a radioactive substance like Uranium. The equations can predict, for example, how likely it is that a single atom of uranium inside a box will emit a burst of radiation within a given time.

However, the way that the equations work is based on calculating the physical forces existing inside the box based on an assumption that the atom both does and does not emit radiation – i.e. both possible outcomes are assumed in some way to exist at the same time. It is only when the system is measured by an external actor – for example, the box is opened and measured by a radiation detector – that the equations “collapse” to predict a single outcome – radiation was emitted; or it was not.

The challenge of interpreting what the equations of quantum mechanics mean was first described in plain language by Erwin Schrödinger in 1935 in the thought experiment “Schrödinger’s cat”. Schrödinger asked: what if the box doesn’t only contain a radioactive atom, but also a gun that fires a bullet at a cat if the atom emits radiation? Does the cat have to be alive and dead at the same time, until the box is opened and we look at it?

After nearly a century, there is no real agreement on what is meant by the fact that these equations depend on assuming that mutually exclusive outcomes exist at the same time. Some physicists believe it is a mistake to look for such meaning and that only the results of the calculations matter. (I think that’s a rather short-sighted perspective). A surprisingly mainstream alternative interpretation is the astonishing “Many Worlds” theory – the idea that every time such a quantum mechanical event occurs, our reality splits into two or more “parallel” universes.

Whatever the truth, Kauffman, Penrose and others are intrigued by the mysterious nature of quantum mechanical processes, and the fact that they are non-deterministic: quantum mechanics does not predict whether a radioactive atom in a box will emit a burst of radiation, it only predicts the likelihood that it will. Given a hundred atoms in boxes, quantum mechanics will give a very good estimate of the number that emit bursts of radiation, but it says very little about what happens to each individual atom.
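
This distinction between ensemble and individual predictability is easy to see in a toy simulation (the decay probability here is a hypothetical illustrative value, not real physics data):

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def decayed(p: float) -> bool:
    """Whether one atom decays in the interval: individually unpredictable;
    the theory only supplies the probability p."""
    return random.random() < p

# Hypothetical decay probability per atom per interval, and number of atoms.
p, n = 0.3, 100_000
count = sum(decayed(p) for _ in range(n))

# The ensemble count is predicted very well (close to p * n = 30000),
# but nothing in the model says which individual atoms decayed.
print(count)
```

The total comes out very close to the expected value, yet no amount of computation recovers which particular atoms decayed – that information simply isn’t in the theory.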

I honestly don’t know if Kauffman and Penrose are right to seek human consciousness and free will in the effects of quantum mechanics – scientists are still exploring whether quantum effects are involved in the behaviour of the neurons in our brains. But I do believe that they are right that no-one has yet demonstrated how consciousness and free will could emerge from any logical, deterministic system; and I’m convinced by their arguments that they cannot emerge from such systems – in other words, from any system based on current computing technology. Steve Grand’s robot “Lucy” will never achieve consciousness.

Will more recent technologies such as biotechnology, nanotechnology and quantum computing ever recreate the equivalent of human experience and behaviour in a way that digital logic and classical physics can’t? Possibly. But any such development would be artificial life, not artificial intelligence. Artificial lifeforms – which in a very simple sense have already been created – could potentially experience the world similarly to us. If they ever become sufficiently sophisticated, then this experience could lead to the emergence of free-will, values and judgements.

But those values would not be our values: they would be based on a different experience of “life” and on empathy between artificial lifeforms, not with us. And there is therefore no guarantee at all that the judgements resulting from those values would be in our interest.

Why Stephen Hawking, Bill Gates and Elon Musk are wrong about Artificial Intelligence today … but why we should be worried about Artificial Life tomorrow

Recently prominent technologists and scientists such as Stephen Hawking, Elon Musk (co-founder of PayPal and CEO of Tesla) and Bill Gates have spoken out about the danger of Artificial Intelligence, and the likelihood of machines taking over the world from humans. At the MIT Conference last week, Andy McAfee hypothesised that the current concern has been caused by the fact that over the last couple of years Artificial Intelligence has finally started to deliver on some of the promises it’s been making for the past 50 years.

(Self-replicating cells created from synthetic DNA by scientist Craig Venter)

But Andy balanced this by recounting his own experiences of meeting some of the leaders of the most advanced current AI companies, such as Deepmind (a UK startup recently acquired by Google), and by pointing to this article by Dr. Gary Marcus, Professor of Psychology and Neuroscience at New York University and CEO of Geometric Intelligence.

In reality, these companies are succeeding by avoiding some of the really hard challenges of reproducing human capabilities such as common sense, free will and value-based judgement. They are concentrating instead on making better sense of the physical environment, on processing information in human language, and on creating algorithms that “learn” through feedback loops and self-adjustment.

I think Andy and these experts are right: artificial intelligence has made great strides, but it is not artificial life, and it is a long, long way from creating life-like characteristics such as experience, values and judgements.

If we ever do create artificial life with those characteristics, then I think we will encounter the dangers that Hawking, Musk and Gates have identified: artificial life will have its own values and act on its own judgement, and any regard for our interests will come second to its own.

That’s a path I don’t think we should go down, and I’m thankful that we’re such a long way from being able to pursue it in anger. I hope that we never do – though I’m also concerned that in Craig Venter and Steve Grand’s work, as well as in robots such as BINA48, we are already taking the first steps.

But I think in the meantime, there’s tremendous opportunity for digital technology and traditional artificial intelligence to complement human qualities. These technologies are not artificial life and will not overthrow or replace humanity. Hawking, Gates and Musk are wrong about that.

The human value of the Experience Economy

The final debate at the MIT conference returned to the topic that started the debate over dinner the night before with McAfee and Brynjolfsson: what happens to mass employment in a world where digital technology is automating not just physical work but work involving intelligence and decision-making; and how do we educate today’s children to be successful in a decade’s time in an economy that’s been transformed in ways that we can’t predict?

Andy said we should answer that question by understanding “where will the economic value of humans be?”

I think the answer to that question lies in the experiences that we value emotionally – the experiences digital technology can’t have and can’t understand or replicate; and in the profound differences between the way that humans think and the way that machines process information.

It’s nearly 20 years since a computer, IBM’s Deep Blue, first beat the human world champion at chess, Grandmaster Garry Kasparov. But despite the astonishing subsequent progress in computer power, the world’s best chess player is no longer a computer: it is a team of computers and people playing together. And the world’s best team has neither the world’s best computer chess programme nor the world’s best human chess player amongst its members: instead, it has the best technique for breaking down and distributing the thinking involved in playing chess between its human and computer members, recognising that each has different strengths and qualities.

But we’re not all chess experts. How will the rest of us earn a living in the future?

I had the pleasure last year at TEDxBrum of meeting Nicholas Lovell, author of “The Curve”, a wonderful book exploring the effect that digital technology is having on products and services. Nicholas asks – and answers – a question that McAfee and Brynjolfsson also ask: what happens when digital technology makes the act of producing and distributing some products – such as music, art and films – effectively free?

Nicholas’ answer is that we stop valuing the product and start valuing our experience of the product. This is why some musical artists give away digital copies of their albums for free, whilst charging £30 for a leather-bound CD with photographs of stage performances – and whilst charging £10,000 to visit individual fans in their homes to give personal performances for those fans’ families and friends.

We have always valued the quality of such experiences – this is one reason why despite over a century of advances in film, television and streaming video technology, audiences still flock to theatres to experience the direct performance of plays by actors. We can see similar technology-enabled trends in sectors such as food and catering – Kitchen Surfing, for example, is a business that uses a social media platform to enable anyone to book a professional chef to cook a meal in their home.

The “Experience Economy” is a tremendously powerful idea. It combines something that technology cannot do on its own – create experiences based on human value – with many things that almost all people can do: cook, create art, rent a room, drive a car, make clothes or furniture. Especially when these activities are undertaken socially, they create employment, fulfilment and social capital. And most excitingly, technologies such as Cloud Computing, Open Source Software, social media, and online “Sharing Economy” marketplaces such as Etsy make it possible for anyone to begin earning a living from them with a minimum of expense.

I think that the idea of an “Experience Economy” driven by the value of inter-personal and social interactions between people, and enabled by “Sharing Economy” business models and technology platforms that connect people with potentially mutual interests, is an exciting and very human vision of the future.

Even further: because we are physical beings, we tend to value these interactions more when they occur face-to-face, or when they happen in a place for which we share a mutual affiliation. That creates an incentive to use technology to identify opportunities to interact with people with whom we can meet by walking or cycling, rather than requiring long-distance journeys. And that incentive could be an important component of a long-term sustainable economy.

The future our children will choose

(Today’s 5 year-olds are the world’s first generation who grew up teaching themselves to use digital information from anywhere in the world before their parents taught them to read and write)

I’m convinced that the current generation of Artificial Intelligence based on digital technologies – even those that mimic some structures and behaviours of biological systems, such as Steve Grand’s robot Lucy, BINA48 and IBM’s “brain-inspired” True North chip – will not re-create anything we would recognise as conscious life and free will; or anything remotely capable of understanding human values or making judgements that can be relied on to be consistent with them.

But I am also an atheist and a scientist; and I do not believe there is any mystical explanation for our own consciousness and free will. Ultimately, I’m sure that a combination of science, philosophy and human insight will reveal their origin; and sooner or later we’ll develop a technology – that I do not expect to be purely digital in nature – capable of replicating them.

What might we choose to do with such capabilities?

These capabilities will almost certainly emerge alongside the ability to significantly change our physical minds and bodies – to improve brain performance, muscle performance, select the characteristics of our children and significantly alter our physical appearance. That’s why some people are excited by the science fiction-like possibility of harnessing these capabilities to create an “improved” post-human species – perhaps even transferring our personalities from our own bodies into new, technological machines. These are possibilities that I personally find to be at the very least distasteful; and at worst to be inhuman and frightening.

All of these things are partially possible today, and frankly the limit to which they can be explored is mostly a function of the cost and capability of the available techniques, rather than being set by any legislation or mediated by any ethical debate. To echo another theme of discussions at last week’s MIT conference, science and technology today are developing at a pace that far outstrips the ability of governments, businesses, institutions and most individual people to adapt to them.

I have reasonably clear personal views on these issues. I think our lives are best lived relatively naturally, and that they will be collectively better if we avoid using technology to create artificial “improvements” to our species.

But quite apart from the fact that there are any number of enormous practical, ethical and intellectual challenges to my relatively simple beliefs, the raw truth is that it won’t be my decision whether or how far we pursue these possibilities, nor that of anyone else of my generation (and for the record, I am in my mid-forties).

Much has been written about “digital natives” – those people born in the 1990s who are the first generation to grow up with the Internet and social media as part of their everyday world. The way that generation socialises, works and thinks about value is already creating enormous changes in our world.

But they are nothing compared to the generation represented by today’s very young children who have grown up using touchscreens and streaming videos, technologies so intuitive and captivating that 2-year-olds now routinely teach themselves how to immerse themselves in them long before parents or school teachers teach them how to read and write.

("Not available on the App Store": a campaign to remind us of the joy of play in the real world)

When I was a teenager in the UK, grown-ups wore suits and had traditional haircuts; grown-up men had no earrings. A common parental challenge was to deal with the desire of teenage daughters to have their ears pierced. Those attitudes are terribly old-fashioned today, and our cultural norms have changed dramatically.

I may be completely wrong; but I fully expect our current attitudes to biological and technological manipulation or augmentation of our minds and bodies to thoroughly change over the next few decades; and I have no idea what they will ultimately become. What I do know is that it is likely that my six-year old son’s generation will have far more influence over their ultimate form than my generation will; and that he will grow up with a fundamentally different expectation of the world and his relationship with technology than I have.

I’ve spent my life being excited about technology and the possibilities it creates; ironically, I now find myself at least as terrified as I am excited about the world technology will create for my son. I don’t believe that reaction is the result of a mistaken focus on technology over human values – like it or not, our species is differentiated from all others on this planet by our ability to use tools; by our technology. We will not stop developing it.

Our continuing challenge will be to keep a focus on our human values as we do so. I cannot tell my son what to do indefinitely; I can only try to help him to experience and treasure socialising and play in the real world; the experience of growing and preparing food together; and the joy of building things for other people with his own hands. And I hope that those experiences will create human values that will guide him and his generation on a healthy course through a future that I can only begin to imagine.

Reclaiming the “Smart” agenda for fair human outcomes enabled by technology

(Lucie & Simon’s “Silent World“, a series of photographs of cities from which almost all trace of people has been removed.)

Over the last 5 years, I’ve often used this blog to explore definitions of what a “Smart City” is. The theme that’s dominated my thinking is the need to synthesise human, urban and technology perspectives on cities and our experience of them.

The challenge with attempting such a broad synthesis within a succinct definition is that you end up with a very high-level, conceptual definition – one that might be intellectually true, but that does a very poor job of explaining to the wider world what a Smart City is, and why it’s important.

We need a simple, concise definition of Smart Cities that ordinary people can identify with. To create it, we need to reclaim the “Smart” concept from technologies such as analytics, the Internet of Things and Big Data, and return to its original meaning – using the increasingly ubiquitous and accessible communications technology enabled by the internet to give people more control over their own lives, businesses and communities.

I’ve written many articles on this blog about the futile and unsophisticated argument that rages on about whether Smart Cities should be created by “top-down” or “bottom-up” approaches: clearly, anything “Smart” is a subtle harmonisation of both.

In this article, I’d like to tackle an equally unconstructive argument that dominates Smart Cities debates: are Smart Cities defined by the role of technology, or by the desire to create a better future?

It’s clear to me that anything that’s really “Smart” must combine both of those ideas.

In isolation, technology is amoral, inevitable and often banal; but on the other hand a “better future” without a means to achieve it is merely an aspiration, not a practical concept. Why is it “Smart” to want a better future and better cities today in a way that wanting them 10, 20, 50 or 100 years ago wasn’t?

Surely we can agree that focussing our use of a powerful and potentially ubiquitously accessible new technology – one that’s already transforming our world – on making the world a better place, rather than just on making money, is an idea worthy of the “Smart” label?

In making this suggestion, I’m doing nothing more than returning to the origin of the term “Smart” in debates in social science about the “smart communities” that would emerge from our new ability to communicate freely and widely with each other following the emergence of the Internet.

Smart communities are enabled by ubiquitous access to empowering technology

In his 2011 book “Civilization“, Niall Ferguson comments that news of the Indian Mutiny in 1857 took 46 days to reach London, travelling in effect at 3.8 miles an hour – the speed of a brisk walk. By contrast, in January 2009, when US Airways flight 1549 crash-landed in the Hudson river, Jim Hanrahan’s message on Twitter communicated the news to the entire world four minutes later; it reached Perth, Australia at more than 170,000 miles an hour.
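Those two figures are easy to verify with back-of-the-envelope arithmetic. The distances below are approximate great-circle figures I’ve assumed for illustration; they aren’t taken from Ferguson’s book:

```python
# Rough check of the two information speeds quoted above.
# Distances are approximate great-circle figures, assumed for
# illustration rather than taken from the original sources.

delhi_to_london_miles = 4_200        # approx. great-circle distance
days_in_1857 = 46
speed_1857 = delhi_to_london_miles / (days_in_1857 * 24)       # miles per hour

new_york_to_perth_miles = 11_600     # approx. great-circle distance
minutes_in_2009 = 4
speed_2009 = new_york_to_perth_miles / (minutes_in_2009 / 60)  # miles per hour

print(f"1857: ~{speed_1857:.1f} mph")     # roughly a brisk walking pace
print(f"2009: ~{speed_2009:,.0f} mph")    # well over 170,000 mph
```

The exact mileage matters far less than the ratio: the same news travelled something like 45,000 times faster in 2009 than in 1857.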

(In the 1960s, the mobile phone-like “communicators” used in Star Trek were beyond our capability to manufacture; but they were used purely for talking. Similarly, while William Gibson’s 1980s vision of “cyberspace” was predictive and ambitious in its descriptions of virtual environments and data visualisations, the people who inhabited it interacted with each other almost as if normal space had simply been replaced by virtual space: there was no sense of the immense power of social media to enable new connections.)

Social media is the tool that around a quarter of the world’s population now uses simply to stay in touch with friends and family at this incredible speed. Along with mobile devices, e-commerce technology and analytics, social media has made it dramatically easier for individuals, communities and small businesses anywhere in the world to make contact and transact with each other, without needing the enormous supply chains and sales and marketing channels that previously made such activity the prerogative of large, multi-national corporations.

It was in a workshop with social scientists at the University of Durham that I first became aware that “Smart” concepts originated in social science in the 1990s and pre-date the famous early large-scale technology infrastructure projects in cities like Masdar and Songdo. The term was coined to describe the potential for new forms of governance, citizen engagement, collective intelligence and stakeholder collaboration enabled by Internet communication technologies. The hope was that new forms of exchange and contract between people and organisations would create a better chance of realising the underlying outcomes we really want – health, happiness and fulfilment:

“The notion of smart community refers to the locus in which such networked intelligence is embedded. A smart community is defined as a geographical area ranging in size from a neighbourhood to a multi-county region within which citizens, organizations and governing institutions deploy and embrace NICT [“New Information and Communication Technologies”] to transform their region in significant and fundamental ways (Eger 1997). In an information age, smart communities are intended to promote job growth, economic development and improve quality of life within the community.”

(Amanda Coe, Gilles Paquet and Jeffrey Roy, “E-Governance and Smart Communities: A Social Learning Challenge“, Social Science Computer Review, Spring 2001)

But technology’s not Smart unless it’s used to create human value

It’s no surprise that, as technology spread from the back office into the everyday world, technology companies such as Cisco, Siemens and my former employer IBM came to similar realisations about the transformative potential of digital technology in addressing societal as well as business challenges. That realisation led, for example, to the launch of IBM’s “Smarter Planet” initiative in 2008, a precursor to its “Smarter Cities” programme.

Let’s pause at this point to say: that’s a tremendously exciting idea. A technology company – Apple – recently recorded the largest corporate profit in the history of business. Microsoft’s founder Bill Gates was just recognised as the richest person on the planet. Technology companies make enormous profits, and they feed significant portions of those profits back into research and development. Isn’t it wonderful that some of those resources are invested in exploring how to make cities, communities and people more successful?

(The Dubuque water and energy portal, giving an individual household insight into its conservation performance, along with a ranking comparing its performance to that of its near neighbours)

IBM, for example, has invested millions of dollars of effort in implementing Smarter Cities projects in cities such as Dubuque through the IBM Research “First of a Kind” programme; and has helped over a hundred cities worldwide develop new initiatives and strategies through the charitable “Smarter Cities Challenge” – advising Kyoto on how to become a more “walkable” city, for instance.

So what’s the problem?

Large technology corporations are often criticised in debates on this topic for their size, profitability and “top-down” approaches – and the local authorities who work with them are often criticised too. In my experience, that criticism is based on an incomplete understanding of the people involved, and how the projects are carried out; and I think it misses the point.

The real question we should be asking is more subtle and important: what happens to the social elements of an idea once it becomes apparent to businesses both large and small that they can make money by selling the technologies that enable it?

I know very well the scientists, engineers and creatives at many of the companies, social enterprises and government bodies – of any size – who are engaged in Smart Cities initiatives. They are almost universally extremely bright, well intentioned and humane, and fully capable of talking with passion about the social and environmental value of their work. “Top-down” is at best a gross simplification of the projects that they carry out, and at worst a gross misrepresentation. Their views dominated the early years of the Smart Cities market as it developed.

But as the market has matured and grown, the focus has switched from research, exploration and development to the marketing and selling of well-defined product and service offerings. Amidst the need to promote those offerings to potential customers, and to differentiate them against competitors, it’s easy for the subtle intertwining of social, economic, environmental and technology ideas to be drowned out.

That’s what led to the unfortunate statement that armed Professor Adam Greenfield with the ammunition he needed to criticise the Smart Cities movement. A technology company that I won’t name made an over-reaching and misguided assertion that Smart Cities would create “autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits” – blissfully ignoring the fact that such perfection is scientifically and philosophically impossible, not to mention inhuman and undesirable.

As a scientist-turned-technologist-turned-wannabe-urbanist working in this field, and as someone who’s been repeatedly inspired by the people, communities, social scientists, social innovators, urban designers and economists I’ve met over the past 5 years, I started writing this blog to explore and present a more balanced, humane vision of a Smart City.

Zen and the art of Smart Cities: opposites should create beautiful fusions, not arguments

Great books change our lives, and one of many that has changed mine is “Zen and the Art of Motorcycle Maintenance” by Robert M. Pirsig. Pirsig explores the relationship between what he called “romantic” perspectives of life, which focus on emotional meaning and value “quality”, and “rational” perspectives, which focus on the reasons our world behaves in the way that it does and value “truth”. He argues that early Greek philosophers didn’t distinguish between “quality” and “truth”, and that by considering them together we can learn to value things that are simultaneously well-intentioned and well-formed.

This thinking is echoed in Alan Watts’ “The Way of Zen“, in which he comments on the purpose of the relentless practice of technique that is part of the Zen approach to art:

“The very technique involves the art of artlessness, or what Sabro Hasegawa has called the ‘controlled accident’, so that paintings are formed as naturally as the rocks and grasses which they depict”

(Alan Watts, “The Way of Zen“)

In other words, by working tirelessly to perfect their technique – i.e. their use of tools – artists enable themselves to have “beautiful accidents” when inspiration strikes.

(Photograph by Meshed Media of Birmingham’s Social Media Cafe, where individuals from every part of the city who have connected online meet face-to-face to discuss their shared interest in social media.)

Modern technologies from social media to Smartphones to Cloud computing and Open Source software are both incredibly powerful and, compared to any previous generation of technology, incredibly cheap.

If we work hard to ensure that they can be used to access and manipulate the technologies that will inevitably be used to make the operations of city infrastructures and public services more efficient, then they have incredible potential to be a tool for people everywhere to shape the world around them to their own advantage; and for us to collectively create a world that is fairer, healthier and more resilient.

But unless we re-claim the word “Smart” to describe those outcomes, the market will drive our energy and resources in the direction of narrower financial interests.

The financial case for investment in Smart technologies is straightforward: as the costs of smartphones, sensors, analytics, and cloud computing infrastructure reduce rapidly, market dynamics will drive their aggressive adoption to make construction, infrastructure and city services more efficient, and hence make their providers more competitive.

But those market dynamics do not guarantee that we will get everything we want for the future of our cities: efficiency and resilience are not the same as health, happiness and opportunity for every citizen.

So how can we adapt that investment drive to create the outcomes that we want?

Can responsible business create a better world?

Some corporate behaviours promote these outcomes, driven by the voting and buying powers of citizens and consumers. Working for Amey, for example, my customers are usually government organisations who serve an electorate; or private sector companies who are regulated by government bodies. In both cases, there is a direct chain of influence leading from individual citizen needs and perceptions through to the way we operate and deliver our services. If we don’t engage with, respect and meet those needs and expectations, we will not be successful. I can observe that influence at work driving an ethic of service, care and responsibility throughout our business at Amey, and it’s been an inspiration to me since joining the company.

Unilever have taken a similar approach, using consumer desires for sustainable products to link corporate performance to sustainable business practices; and Jared Diamond wrote extensively about successful examples of socially and environmentally sustainable resource extraction businesses, such as Chevron’s sustainable operations in the Kutubu oilfield in Papua New Guinea, in his book “Collapse“. Business models such as social enterprise and the sharing economy also offer great potential to link business success to positive social and environmental outcomes.

But ultimately our investment markets are still strongly focused on financial performance, and reward the businesses that make the most money with the investment that enables them to grow. This is why many social enterprises do not scale-up; and why many of the rapidly growing “sharing economy” businesses currently making the headlines have nothing at all to do with sharing value and resources, but are better understood as a new type of profit-seeking transaction broker.

Responsible business models are a choice made by individual business leaders, and they depend for their successful operation on the daily choices and actions of their employees. They are not a market imperative. For as long as that is the case, we cannot rely on them to improve our world.

Policy, legislation and regulation

I’ve quoted Jane Jacobs on many occasions on this blog: “private investment shapes cities, but social ideas (and laws) shape private investment”.

It’s a source of huge frustration to me that so much of the activity in the Smart Cities community ignores that so obviously fundamental principle, and focuses instead on the capabilities of technology or on projects funded by research grants.

The recent article reporting a TechUK Smart Cities conference titled “Milton Keynes touted as model city for public sector IoT use” is a good example. Milton Keynes have many Smart City projects underway that are technologically very interesting, but every one of them is funded by a significant grant from a central government department, a research or innovation funding body, or a technology company. Not a single project has been paid for by a sustainable, re-usable business case. Other cities can aspire to emulate Milton Keynes all they want, but they won’t win research and innovation funding to re-deploy solutions that have already been proven.

Research and innovation grants provide the funding that proves for the first time that a new idea is viable. They do not pay for that idea to be enacted across the world.

(Shaleen Meelu and Robert Smith with Hugh Fearnley-Whittingstall at the opening of the Harborne Food School. The School is a Community Interest Company that promotes healthy, sustainable approaches to food through courses offered to local people and organisations)

Policy, legislation and regulation are far more effective tools for enabling widespread change, and are what we should be focussing our energy and attention on.

The Social Value Act requires that public authorities, who spend nearly £200 billion every year on private sector goods and services, procure those services in a way that creates social value – for example, by requiring that national or international service providers engage local small businesses in their supply chains.

In an age in which private companies are investing heavily in the use of digital technology because it provides them with by far the most powerful tool to increase their success, surely local authorities should fulfil their Social Value Act obligations by using procurement criteria to ensure that those companies employ that same tool to create social and environmental improvements in the places and communities in which they operate?

Similarly, the British Property Federation estimates that £14 billion is invested in the development of new property in the UK each year. If planning and development frameworks oblige property developers to describe and quantify the social value that will be created by their developments, and how they will use technology to do so – as I’ve promoted on this blog for some time now, and as the British Standards Institute have recently recommended – then this enormous level of private sector investment can contribute to investing in technology for public benefit; just as those same frameworks already require investment in public space around commercial buildings.

The London Olympic Legacy Development Corporation have been following this strategy in support of the Greater London Authority’s Smart London Plan. As a result, they are securing private sector investment in deploying technology not only to redevelop the Olympic park using smart infrastructure; but also to ensure that that investment benefits the existing communities and business economies in neighbouring areas.

A Smart manifesto for human outcomes enabled by technology

These business models, policy measures and procurement approaches are bold, difficult measures to enact. They are not as sexy as Smartphones, analytics and self-driving cars. But they are much more important if what we want to achieve are positive human outcomes, not just financially successful technology companies and a continuous stream of research projects.

What will make it more likely that businesses, local governments and national governments adopt them?

Citizen understanding. Consumer understanding. A definition of smart people, places, communities, businesses and governments that makes sense to everyone who votes, works, stands for election, runs a business, or buys things. In other words, everyone.

If that definition doesn’t include the objective of making the world a healthier, happier, fairer, more sustainable place for everyone, then it’s not worth the effort. If it doesn’t include harnessing modern technology, then it misses the point that human ingenuity has recently given us a phenomenal new toolkit that makes possible things that we’d never previously dreamt of.

I think it should go something like this:

“Smart people, places, communities, businesses and governments work together to use the modern technologies that are changing our world to make it fairer and more sustainable in the process, giving everyone a better chance of a longer, healthier, happier and more fulfilling life.”

I’m not sure that’s a perfect definition; but I think it’s a good start, and I hope that it combines the right realisation that we do have unprecedented tools at our disposal with the right sentiment that what really matters is how we use them.

(I’d like to thank John Murray of Scottish Enterprise for a useful discussion that inspired me to write this article)

From concrete to telepathy: how to build future cities as if people mattered

(An infographic depicting real-time data describing Dublin – the waiting time at road junctions; the location of buses; the number of free parking spaces and bicycles available to hire; and sentiments expressed about the city through social media)

(I was honoured to be asked to speak at TEDxBrum in my home city of Birmingham this weekend. The theme of the event was “DIY” – “the method of building, modifying or repairing something without the aid of experts or professionals”. In other words, how Birmingham’s people, communities and businesses can make their home a better place. This is a rough transcript of my talk).

What might I, a middle-aged, white man paid by a multi-national corporation to be an expert in cities and technology, have to say to Europe’s youngest city, and one of its most ethnically and nationally diverse, about how it should re-create itself “without the aid of experts or professionals”?

Perhaps I could try to claim that I can offer the perspective of one of the world’s earliest “digital natives”. In 1980, when I was ten, my father bought me one of the world’s first personal computers, a Tandy TRS-80, and taught me how to programme it using “machine code“.

But about two years ago, whilst walking through London to give a talk at a networking event, I was reminded of just how much the world has changed since my childhood.

I found myself walking along Wardour St. in Soho, just off Oxford St., and past a small alley called St. Anne’s Court which brought back tremendous memories for me. In the 1980s I spent all of the money I earned washing pots in a local restaurant in Winchester to travel by train to London every weekend and visit a small shop in a basement in St. Anne’s Court.

I’ve told this story in conference speeches a few times now, perhaps to a total audience of a couple of thousand people. Only once has someone been able to answer the question:

“What was the significance of St. Anne’s Court to the music scene in the UK in the 1980s?”

Here’s the answer:

Shades Records, the shop in the basement, was the only place in the UK that sold the most extreme (and inventive) forms of “thrash metal” and “death metal“, which at the time were emerging from the ashes of punk and the “New Wave of British Heavy Metal” in the late 1970s.

(Programming my Tandy TRS-80 in Z80 machine code nearly 35 years ago)

The process by which bands like VOIVOD, Coroner and Celtic Frost – who at the time were three 17-year-olds who practised in an old military bunker outside Zurich – managed to connect – without the internet – to the very few people around the world like me who were willing to pay money for their music feels like ancient history now. It was a world of hand-printed “fanzines”, and demo tapes painstakingly copied one at a time, ordered by mail from classified adverts in magazines like Kerrang!

Our world has been utterly transformed in the relatively short time between then and now by the phenomenal ease with which we can exchange information through the internet and social media.

The real digital natives, though, are not even those people who grew up with the internet and social media as part of their everyday world (though those people are surely about to change the world as they enter employment).

They are the very young children like my 6-year-old son, who taught himself at the age of two to use an iPad to access the information that interested him (admittedly, in the form of Thomas the Tank Engine stories on YouTube) before anyone else taught him to read or write, and who can now use programming tools like MIT’s Scratch to control computers vastly more powerful than the one I used as a child.

Their expectations of the world, and of cities like Birmingham, will be unlike those of anyone who has ever lived before.

And their ability to use technology will be matched by the phenomenal variety of data available to them to manipulate. As everything from our cars to our boilers to our fridges to our clothing is integrated with connected, digital technology, the “Internet of Things“, in which everything is connected to the internet, is emerging. As a consequence our world, and our cities, are full of data.

(The programme I helped my 6-year-old son write using MIT’s “Scratch” language to cause a cartoon cat to draw a picture of a house)

My friend the architect Tim Stonor calls the images that we are now able to create, such as the one at the start of this article, “data porn”. The image shows data about Dublin from the Dublinked information sharing partnership: the waiting time at road junctions; the location of buses; the number of free parking spaces and bicycles available to hire; and sentiments expressed about the city through social media.

Tim’s point is that we should concentrate not on creating pretty visualisations; but on the difference we can make to cities by using this data. Through Open Data portals, social media applications, and in many other ways, it unlocks secrets about cities and communities:

  • Who are the 17 year-olds creating today’s most weird and experimental music? (Probably by collaborating digitally from three different bedroom studios on three different continents)
  • Where is the healthiest walking route to school?
  • Is there a local company nearby selling wonderful, oven-ready curries made from local recipes and fresh ingredients?
  • If I set off for work now, will a traffic jam develop to block my way before I get there?

From Dublin to Montpellier to Madrid and around the world, my colleagues are helping cities to build 21st-Century infrastructures that harness this data. As technology advances, every road, electricity substation, university building and supermarket supply chain will exploit it. The business case is easy: we can use data to find ways to operate city services, supply chains and infrastructure more efficiently, and in a way that’s less wasteful of resources and more resilient in the face of a changing climate.

Top-down thinking is not enough

But to what extent will this enormous investment in technology help the people who live and work in cities, and those who visit them, to benefit from the Information Economy that digital technology and data are creating?

This is a vital question. The ability of digital technology to optimise and automate tasks that were once carried out by people is removing jobs that we have relied on for decades. In order for our society to be based upon a fair and productive economy, we all need to be able to benefit from the new opportunities to work and be successful that are being created by digital technology.

(Photo of Masshouse Circus, Birmingham, a concrete urban expressway that strangled the citycentre before its redevelopment in 2003, by Birmingham City Council)

(Photo of Masshouse Circus, Birmingham, a concrete urban expressway that strangled the city centre before its redevelopment in 2003, by Birmingham City Council)

Too often in the last century, we got this wrong. We used the technologies of the age – concrete, lifts, industrial machinery and cars – to build infrastructures and industries that supported our mass needs for housing, transport, employment and goods; but that literally cut through and isolated the communities that create urban life.

If we make the same mistake by thinking only about digital technology in terms of its ability to create efficiencies, then as citizens, as communities, as small businesses we won’t fully benefit from it.

In contrast, one of the authors of Birmingham’s Big City Plan, the architect Kelvin Campbell, created the concept of “massive / small“. He asked: what are the characteristics of public policy and city infrastructure that create open, adaptable cities for everyone and that thereby give rise to “massive” amounts of “small-scale” innovation?

In order to build 21st Century cities that provide the benefits of digital technology to everyone we need to find the design principles that enable the same “massive / small” innovation to emerge in the Information Economy, in order that we can all use the simple, often free, tools available to us to create our own opportunities.

There are examples we can learn from. Almere in the Netherlands uses analytics technology to plan and predict the future development of the city; but it also engages in dialogue with its citizens about the future the city wants. Montpellier in France uses digital data to measure the performance of public services; but it also engages online with its citizens in a dialogue about those services and the outcomes they are trying to achieve. The Dutch Water Authority is implementing technology to monitor, automate and optimise an infrastructure on which many cities depend; but it is also making much of the data openly available to communities, businesses, researchers and innovators to explore.

There are many issues of policy, culture, design and technology that we need to get right for this to happen, but the main objectives are clear:

  • The data from city services should be made available as Open Data and through published “Application Programming Interfaces” (APIs) so that everybody knows how they work; and can adapt them to their own individual needs.
  • The data and APIs should be made available in the form of Open Standards so that everybody can understand them; and so that the systems that we rely on can work together.
  • The data and APIs should be available to developers working on Cloud Computing platforms with Open Source software so that anyone with a great idea for a new service to offer to people or businesses can get started for free.
  • The technology systems that support the services and infrastructures we rely on should be based on Open Architectures, so that we have the freedom to choose which technologies we use, and to change our minds.
  • Governments, institutions, businesses and communities should participate in an open dialogue, informed by data and enlightened by empathy, about the places we live and work in.
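
As an illustration of what this openness makes possible, here is a minimal sketch of how a developer might use such a feed. The payload and field names below are invented for the example, not any real city’s schema; the point is simply that open, machine-readable data can be adapted by anyone with free tools.

```python
import json

# A hypothetical open-data payload of the kind a city API might return,
# listing the status of electric vehicle charging points. The field
# names are illustrative assumptions, not a real city's schema.
payload = """
{
  "charging_points": [
    {"id": "cp-001", "district": "Digbeth",  "status": "available"},
    {"id": "cp-002", "district": "Digbeth",  "status": "in_use"},
    {"id": "cp-003", "district": "Harborne", "status": "available"}
  ]
}
"""

data = json.loads(payload)

# Because the data is open and machine-readable, anyone can adapt it to
# their own needs -- for example, counting available points per district:
available = {}
for point in data["charging_points"]:
    if point["status"] == "available":
        available[point["district"]] = available.get(point["district"], 0) + 1

print(available)  # {'Digbeth': 1, 'Harborne': 1}
```

A few lines like these are all that stand between an open feed and a new service built on it; the hard part is the openness, not the code.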

If local authorities and national government create planning policies, procurement practices and legislation that require public infrastructure, property development and city services to provide this openness and accessibility, then the money spent on city infrastructure and services will create cities that are open and adaptable to everyone in a digital age.

Bottom-up innovation is not enough, either

(Coders at work at the Birmingham “Smart Hack”, photographed by Sebastian Lenton)

Not everyone has access to the technology and skills to use this data, of course. But some of the people who do will create the services that others need.

I took part in my first “hackathon” in Birmingham two years ago. A group of people spent a weekend together in 2012 asking themselves: in what way should Birmingham be better? And what can we do about it? Over two days, they wrote an app, “Second Helping”, that connected information about leftover food in the professional kitchens of restaurants and catering services, to soup kitchens that give food to people who don’t have enough.

Second Helping was a great idea; but how do you turn a great idea and an app into a change in the way that food is used in a city?

Hackathons and “civic apps” are great examples of the “bottom-up” creativity that all of us use to create value – innovating with the resources around us to make a better life, run a better business, or live in a stronger community. But “bottom-up” on its own isn’t enough.

The result of “bottom-up” innovation at the moment is that life expectancy in the poorest parts of Birmingham is more than 10 years shorter than it is in the richest parts. In London and Glasgow, it’s more than 20 years shorter.

If you’re born in the wrong place, you’re likely to die 10 years younger than someone born in a different part of the same city. This shocking situation arises from many complex issues; but one conclusion that is easy to draw is that the opportunity to innovate successfully is not the same for everyone.

So how do we increase everybody’s chances of success? We need to create the policies, institutions, culture and behaviours that join up the top-down thinking that tends to control the allocation of resources and investment, especially for infrastructure, with the needs of bottom-up innovators everywhere.

Translational co-operation

(The Harborne Food School, which will open in the New Year to offer training and events in local and sustainable food)

The Economist magazine reminded us of the importance of those questions in a recent article describing the enormous investments made in public institutions such as schools, libraries and infrastructure in the past in order to distribute the benefits of the Industrial Revolution to society at large rather than concentrate them on behalf of business owners and the professional classes.

But the institutions of the past, such as the schools which to a large degree educated the population for repetitive careers in labour-intensive factories, won’t work for us today. Our world is more complicated and requires a greater degree of localised creativity to be successful. We need institutions that are able to engage with and understand individuals; and that make their resources openly available so that each of us can use them in the way that makes most sense to us. Some public services are starting to respond to this challenge, through the “Open Public Services” agenda; and the provision of Open Data and APIs by public services and infrastructure are part of the response too.

But as Andrew Zolli describes in “Resilience: why things bounce back“, there are both institutional and cultural barriers to engagement and collaboration between city institutions and localised innovation. Zolli describes the change-makers who overcome those barriers as “translational leaders” – people with the ability to engage with both small-scale, informal innovation in communities and large-scale, formal institutions with resources.

We’re trying to apply that “translational” thinking in Birmingham through the Smart City Alliance, a collaboration between 20 city institutions, businesses and innovators. The idea is to enable conversations about challenges and opportunities in the city, between people, communities, innovators and the organisations who have resources, from the City Council and public institutions to businesses, entrepreneurs and social enterprises. We try to put people and organisations with challenges or good ideas in touch with other people or organisations with the ability to help them.

This is how we join the “top-down” resources, policies and programmes of city institutions and big companies with the “bottom-up” innovation that creates value in local situations. A lot of the time it’s about listening to people we wouldn’t normally meet.

Partly as a consequence, we’ve continued to explore the ideas about local food that were first raised at the hackathon. Two years later, the Harborne Food School is close to opening as a social enterprise in a redeveloped building on Harborne High Street that had fallen out of use.

The school will be teaching courses that help caterers provide food from sustainable sources, that teach people how to set up and run food businesses, and that help people to adopt diets that prevent or help to manage conditions such as diabetes. The idea has changed since the “Second Helping” app was written, of course; but the spirit of innovation and local value is the same.

Cities that work like magic

So what does all this have to do with telepathy?

The innovations and changes caused by the internet over the last two decades have accelerated as it has made information easier and easier to access and exchange through the advent of technologies such as broadband, mobile devices and social media. But the usefulness of all of those technologies is limited by the tools required to control them – keyboards, mice and touchscreens.

Before long, we won’t need those tools at all.

Three years ago, scientists at the University of California, Berkeley used computers attached to an MRI scanner to recreate moving images from measurements of the brain activity of a person inside the scanner who was watching a film on a pair of goggles. And last year, scientists at the University of Washington used similar technology to allow one of them to move the other’s arm simply by thinking about it. A less sensitive mind-reading technology is already available as a headset from Emotiv, which my colleagues in IBM’s Emerging Technologies team have used to help a paralysed person communicate by thinking directional instructions to a computer.

Telepathy is now technology, and this is just one example of the way that the boundary between our minds, bodies and digital information will disappear over the next decade. As a consequence, our cities and lives will change in ways we’ve never imagined, and some of those changes will happen surprisingly quickly.

I can’t predict what Birmingham will or should be like in the future. As a citizen, I’ll be one of the million or so people who decide that future through our choices and actions. But I can say that the technologies available to us today are the most incredible DIY tools for creating that future that we’ve ever had access to. And relatively quickly technologies like bio-technology, 3D printing and brain/computer interfaces will put even more power in our hands.

As a parent, I get engaged in my son’s exploration of these technologies and help him be digitally aware, creative and responsible. Whenever I can, I help schools, Universities, small businesses or community initiatives to use them, because I might be helping one of IBM’s best future employees or business partners; or just because they’re exciting and worth helping. And as an employee, I try to help my company take decisions that are good for our long term business because they are good for the society that the business operates in.

We can take for granted that all of us, whatever we do, will encounter more and more incredible technologies as time passes. By remembering these very simple things, and remembering them in the hundreds of choices I make every day, I hope that I’ll be using them to play my part in building a better Birmingham, and better cities and communities everywhere.

(Shades Records in St. Anne’s Court in the 1980s. You can read about the role it played in the development of the UK’s music culture – and in the lives of its customers – in this article from Thrash Hits;  or this one from Every Record Tells a Story. And if you really want to find out what it was all about, try watching Celtic Frost or VOIVOD in the 1980s!)

11 reasons computers can’t understand or solve our problems without human judgement

(Photo by Matt Gidley)

Why data is uncertain, cities are not programmable, and the world is not “algorithmic”.

Many people are not convinced that the Smart Cities movement will result in the use of technology to make places, communities and businesses in cities better. Outside their consumer enjoyment of smartphones, social media and online entertainment – to the degree that they have access to them – they don’t believe that technology or the companies that sell it will improve their lives.

The technology industry itself contributes significantly to this lack of trust. Too often we overstate the benefits of technology, or play down its limitations and the challenges involved in using it well.

Most recently, the idea that traditional processes of government should be replaced by “algorithmic regulation” – the comparison of the outcomes of public systems to desired objectives through the measurement of data, and the automatic adjustment of those systems by algorithms in order to achieve them – has been proposed by Tim O’Reilly and other prominent technologists.

These approaches work in many mechanical and engineering systems – the autopilots that fly planes or the anti-lock braking systems that we rely on to stop our cars. But should we extend them into human realms – how we educate our children or how we rehabilitate convicted criminals?
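
The mechanical cases are worth making concrete, because they show what “algorithmic regulation” actually is: a feedback loop that measures an outcome, compares it to a target, and adjusts. The sketch below is a minimal, hypothetical example of proportional control, the simplest such loop; the gain and target values are arbitrary illustrations.

```python
# A minimal sketch of the feedback loop at the heart of "algorithmic
# regulation": measure a value, compare it to a target, and adjust.
# This is proportional control, the logic behind a simple thermostat;
# the gain and target values are arbitrary illustrations.

def regulate(measured, target, gain=0.5):
    """Return an adjustment proportional to the gap from the target."""
    return gain * (target - measured)

# Simulate a room being warmed towards a 21-degree target:
temperature = 15.0
for _ in range(20):
    temperature += regulate(temperature, target=21.0)

print(round(temperature, 2))  # 21.0 -- the loop converges on the target
```

For a well-behaved mechanical system this works beautifully; much of what follows is about why human systems rarely behave so obligingly.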

It’s clearly important to ask whether it would be desirable for our society to adopt such approaches. That is a complex debate, but my personal view is that in most cases the incredible technologies available to us today – and which I write about frequently on this blog – should not be used to take automatic decisions about such issues. They are usually more valuable when they are used to improve the information and insight available to human decision-makers – whether they are politicians, public workers or individual citizens – who are then in a better position to exercise good judgement.

More fundamentally, though, I want to challenge whether “algorithmic regulation” or any other highly deterministic approach to human issues is even possible. Quite simply, it is not.

It is true that our ability to collect, analyse and interpret data about the world has advanced to an astonishing degree in recent years. However, that ability is far from perfect, and strongly established scientific and philosophical principles tell us that it is impossible to definitively measure human outcomes from underlying data in physical or computing systems; and that it is impossible to create algorithmic rules that exactly predict them.

Sometimes automated systems succeed despite these limitations – anti-lock braking technology has become nearly ubiquitous because it is more effective than most human drivers at slowing down cars in a controlled way. But in other cases they create such great uncertainties that we must build in safeguards to account for the very real possibility that insights drawn from data are wrong. I do this every time I leave my home with a small umbrella packed in my bag despite the fact that weather forecasts created using enormous amounts of computing power predict a sunny day.

(No matter how sophisticated computer models of cities become, there are fundamental reasons why they will always be simplifications of reality. It is only by understanding those constraints that we can understand which insights from computer models are valuable, and which may be misleading. Image of Sim City by haljackey)

Only by understanding these limitations can we know where an “algorithmic” approach can be trusted; where it needs safeguards; and where it is wholly inadequate. Some of those limitations are practical, bounded only by the sensitivity of today’s sensors and the power of today’s computers. But others are fundamental laws of physics and limitations of logical systems.

When technology companies assert that Smart Cities can create “autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits” (as London School of Economics Professor Adam Greenfield rightly criticised in his book “Against the Smart City”), they are ignoring these challenges.

A blog published by the highly influential magazine Wired recently made similar overstatements: “The Universe is Programmable” argues that we should extend the concept of an “Application Programming Interface (API)” – a facility usually offered by technology systems to allow external computer programmes to control or interact with them – to every aspect of the world, including our own biology.

To compare complex, unpredictable, emergent biological and social systems to the very logical, deterministic world of computer software is at best a dramatic oversimplification. The systems that comprise the human body range from the armies of symbiotic microbes that help us digest food in our stomachs to the consequences of using corn syrup to sweeten food to the cultural pressure associated with “size 0” celebrities. Many of those systems can’t be well modelled in their own right, let alone deterministically related to each other; let alone formally represented in an accurate, detailed way by technology systems (or even in mathematics).

We should avoid the hubris of overstating technology’s capabilities and failing to recognise its challenges and limitations; it is that hubris which creates distrust. And that distrust is a barrier that prevents us from achieving the very real benefits that data and technology can bring, and that have been convincingly demonstrated in the past.

For example, an enormous contribution to our knowledge of how to treat and prevent disease was made by John Snow who used data to analyse outbreaks of cholera in London in the 19th century. Snow used a map to correlate cases of cholera to the location of communal water pipes, leading to the insight that water-borne germs were responsible for spreading the disease. We wash our hands to prevent diseases spreading through germs in part because of what we would now call the “geospatial data analysis” performed by John Snow.
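
Snow’s method can be sketched in a few lines of modern code. The coordinates below are invented for illustration; the technique – assigning each case to its nearest pump and looking for a cluster – is the same.

```python
# A toy version of Snow's analysis with invented coordinates: assign
# each cholera case to its nearest water pump and count cases per pump.
# A cluster around one pump is the kind of pattern that pointed Snow
# to the contaminated Broad Street pump.

def nearest_pump(case, pumps):
    """Name of the pump closest to a case, by squared distance."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(pumps, key=lambda name: dist2(case, pumps[name]))

pumps = {"broad_st": (0, 0), "other_st": (10, 10)}
cases = [(1, 0), (0, 2), (-1, -1), (9, 10), (1, 1)]

counts = {}
for case in cases:
    pump = nearest_pump(case, pumps)
    counts[pump] = counts.get(pump, 0) + 1

print(counts)  # {'broad_st': 4, 'other_st': 1}
```

The striking thing is how little machinery the insight required: the value lay in asking the right question of the data, and in the human judgement that followed.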

Many of the insights that we seek from analytic and smart city systems are human in nature, not physical or mathematical – for example, identifying when and where to apply social care interventions in order to reduce the occurrence of emotional domestic abuse. Such questions are complex and uncertain: what is “emotional domestic abuse”? Is it abuse inflicted by a live-in boyfriend, or by an estranged husband who lives separately but makes threatening telephone calls? Does it consist of physical violence or bullying? And what is “bullying”?

(John Snow’s map of cholera outbreaks in 19th century London)

We attempt to create structured, quantitative data about complex human and social issues by using approximations and categorisations; by tolerating ranges and uncertainties in numeric measurements; by making subjective judgements; and by looking for patterns and clusters across different categories of data. Whilst these techniques can be very powerful, just how difficult it is to be sure what these conventions and interpretations should be is illustrated by the controversies that regularly arise around “who knew what, when?” whenever there is a high profile failure in social care or any other public service.

These challenges are not limited to “high level” social, economic and biological systems. In fact, they extend throughout the worlds of physics and chemistry into the basic nature of matter and the universe. They fundamentally limit the degree to which we can measure the world, and our ability to draw insight from that information.

By being aware of these limitations we are able to design systems and practises to use data and technology effectively. We know more about the weather through modelling it using scientific and mathematical algorithms in computers than we would without those techniques; but we don’t expect those forecasts to be entirely accurate. Similarly, supermarkets can use data about past purchases to make sufficiently accurate predictions about future spending patterns to boost their profits, without needing to predict exactly what each individual customer will buy.

We underestimate the limitations and flaws of these approaches at our peril. Whilst Tim O’Reilly cites several automated financial systems as good examples of “algorithmic regulation”, the financial crash of 2008 showed the terrible consequences of the thoroughly inadequate risk management systems used by the world’s financial institutions compared to the complexity of the system that they sought to profit from. The few institutions that realised that market conditions had changed, and that their models for risk management were no longer valid, relied instead on the expertise of their staff, and avoided the worst effects. Others continued to rely on models that had started to produce increasingly misleading guidance, leading to the recession that we are only now emerging from six years later, and that has damaged countless lives around the world.

Every day in their work, scientists, engineers and statisticians draw conclusions from data and analytics, but they temper those conclusions with an awareness of their limitations and any uncertainties inherent in them. By taking and communicating such a balanced and informed approach to applying similar techniques in cities, we will create more trust in these technologies than by overstating their capabilities.

What follows is a description of some of the scientific, philosophical and practical issues that lead inevitably to uncertainty in data, and to limitations in our ability to draw conclusions from it.

I’ll finish with an explanation of why we can still draw great value from data and analytics if we are aware of those issues and take them properly into account.

Three reasons why we can’t measure data perfectly

(How Heisenberg’s Uncertainty Principle results from the dual wave/particle nature of matter. Explanation by HyperPhysics at Georgia State University)

1. Heisenberg’s Uncertainty Principle and the fundamental impossibility of knowing everything about anything

Heisenberg’s Uncertainty Principle is a cornerstone of Quantum Mechanics, which, along with General Relativity, is one of the two most fundamental theories scientists use to understand our world. It defines a limit to the precision with which certain pairs of properties of the basic particles which make up the world – such as protons, neutrons and electrons – can be known at the same time. For instance, the more accurately we measure the position of such particles, the more uncertain their speed and direction of movement become.

The explanation of the Uncertainty Principle is subtle, and lies in the strange fact that very small “particles” such as electrons and neutrons also behave like “waves”; and that “waves” like beams of light also behave like very small “particles” called “photons“. But we can use an analogy to understand it.

In order to measure something, we have to interact with it. In everyday life, we do this by using our eyes to measure lightwaves that are created by lightbulbs or the sun and that then reflect off objects in the world around us.

But when we shine light on an object, what we are actually doing is showering it with billions of photons, and observing the way that they scatter. When the object is quite large – a car, a person, or a football – the photons are so small in comparison that they bounce off without affecting it. But when the object is very small – such as an atom – the photons colliding with it are large enough to knock it out of its original position. In other words, measuring the current position of an object involves a collision which causes it to move in a random way.

This analogy isn’t exact; but it conveys the general idea. (For a full explanation, see the figure and link above). Most of the time, we don’t notice the effects of Heisenberg’s Uncertainty Principle because it applies at extremely small scales. But it is perhaps the most fundamental law that asserts that “perfect knowledge” is simply impossible; and it illustrates the wider point that any form of measurement or observation affects what is measured or observed. Sometimes the effects are negligible, but often they are not – if we observe workers in a time and motion study, for example, we need to be careful to understand the effect our presence and observations have on their behaviour.
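
To give a sense of the scale at which the principle bites, here is a rough back-of-envelope calculation using rounded physical constants – an order-of-magnitude sketch, not a precise derivation: if we pin down an electron’s position to a nanometre, the principle forces a large uncertainty in its speed.

```python
# Delta-x * delta-p >= hbar / 2: pinning down an electron's position to
# one nanometre forces a minimum uncertainty in its momentum, and hence
# in its velocity. Constants are rounded; this is an order-of-magnitude
# sketch, not a precise calculation.

hbar = 1.055e-34          # reduced Planck constant, in joule-seconds
electron_mass = 9.11e-31  # in kilograms

delta_x = 1e-9                     # position known to 1 nanometre
delta_p = hbar / (2 * delta_x)     # minimum momentum uncertainty
delta_v = delta_p / electron_mass  # corresponding velocity uncertainty

print(f"{delta_v:.2e} m/s")  # 5.79e+04 m/s -- tens of kilometres per second
```

For a football the same calculation gives an immeasurably tiny number, which is why we never notice the principle in everyday life.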

2. Accuracy, precision, noise, uncertainty and error: why measurements are never fully reliable

Outside the world of Quantum Mechanics, there are more practical issues that limit the accuracy of all measurements and data.

(A measurement of the electrical properties of a superconducting device from my PhD thesis. Theoretically, the behaviour should appear as a smooth, wavy line; but the experimental measurement is affected by noise and interference that cause the signal to become “fuzzy”. In this case, the effects of noise and interference – the degree to which the signal appears “fuzzy” – are relatively small compared to the strength of the signal, and the device is usable)

We live in a “warm” world – roughly 300 degrees above what scientists call “absolute zero“, the coldest temperature possible. What we experience as warmth is in fact movement: the atoms from which we and our world are made “jiggle about” – they move randomly. When we touch a hot object and feel pain, it is because this movement is too violent to bear – it’s like being pricked by billions of tiny pins.

This random movement creates “noise” in every physical system, like the static we hear in analogue radio stations or on poor quality telephone connections.

We also live in a busy world, and this activity leads to other sources of noise. All electronic equipment creates electrical and magnetic fields that spread beyond the equipment itself, and in turn affect other equipment – we can hear this as a buzzing noise when we leave smartphones near radios.

Generally speaking, all measurements are affected by random noise created by heat, vibrations or electrical interference; are limited by the precision and accuracy of the measuring devices we use; and are affected by inconsistencies and errors that arise because it is always impossible to completely separate the measurement we want to make from all other environmental factors.

Scientists, engineers and statisticians are familiar with these challenges, and use techniques developed over the course of more than a century to determine and describe the degree to which they can trust and rely on the measurements they make. They do not claim “perfect knowledge” of anything; on the contrary, they are diligent in describing the unavoidable uncertainty that is inherent in their work.
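
A small simulation illustrates the standard approach. We take repeated noisy readings of a known “true” value and average them; the noise level and the random seed below are arbitrary choices made for reproducibility. Averaging shrinks the uncertainty roughly as one over the square root of the number of readings – but never to zero.

```python
import random

# Repeated measurement tames (but never eliminates) random noise: each
# reading of a "true" value is corrupted by Gaussian noise, and the
# average of N readings is uncertain by roughly noise / sqrt(N). The
# noise level is an arbitrary choice; the seed makes the run repeatable.

random.seed(42)

TRUE_VALUE = 10.0
NOISE = 1.0  # standard deviation of each reading's error

def measure():
    return TRUE_VALUE + random.gauss(0, NOISE)

single = measure()  # one reading: typically wrong by about 1.0
averaged = sum(measure() for _ in range(10000)) / 10000

# The average is uncertain by only about 0.01 -- a hundred times better
# than one reading, but still an estimate, never "perfect knowledge".
print(abs(averaged - TRUE_VALUE) < 0.1)  # True
```

This is exactly the discipline the paragraph above describes: the result is always quoted with its uncertainty, never as a perfect fact.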

3. The limitations of measuring the natural world using digital systems

One of the techniques we’ve adopted over the last half century to overcome the effects of noise and to make information easier to process is to convert “analogue” information about the real world (information that varies smoothly) into digital information – i.e. information that is expressed as sequences of zeros and ones in computer systems.

(When analogue signals are amplified, so is the noise that they contain. Digital signals are interpreted using thresholds: above an upper threshold, the signal means “1”, whilst below a lower threshold, the signal means “0”. A long string of “0”s and “1”s can be used to encode the same information as contained in analogue waves. By making the difference between the thresholds large compared to the level of signal noise, digital signals can be recreated to remove noise. Further explanation and image by Science Aid)

This process involves a trade-off between the accuracy with which analogue information is measured and described, and the length of the string of digits required to do so – and hence the amount of computer storage and processing power needed.

This trade-off can be clearly seen in the difference in quality between an internet video viewed on a smartphone over a 3G connection and one viewed on a high definition television using a cable network. Neither video will be affected by the static noise that affects weak analogue television signals, but the limited bandwidth of a 3G connection dramatically limits the clarity and resolution of the image transmitted.

The Nyquist–Shannon sampling theorem defines this trade-off and the limit to the quality that can be achieved in storing and processing digital information created from analogue sources. It determines the quality of digital data that we are able to create about any real-world system – from weather patterns to the location of moving objects to the fidelity of sound and video recordings. As computers and communications networks continue to grow more powerful, the quality of digital information will improve, but it will never be a perfect representation of the real world.
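
The trade-off can be demonstrated directly: quantising a smooth signal with more bits gives a closer approximation at the cost of more storage, but no finite number of bits reproduces it exactly. The 100-sample sine wave in this sketch is an invented example signal.

```python
import math

# Quantising a smooth signal: more bits give a closer approximation at
# the cost of longer binary strings, but no finite number of bits is
# exact. The 100-sample sine wave here is an invented example signal.

def quantise(value, bits):
    """Snap a value in [-1, 1] onto one of 2**bits discrete levels."""
    step = 2.0 / (2 ** bits)
    return round((value + 1.0) / step) * step - 1.0

samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]

def max_error(bits):
    """Worst-case gap between the signal and its digital version."""
    return max(abs(s - quantise(s, bits)) for s in samples)

# Each extra bit roughly halves the worst-case error, but never
# removes it entirely:
print(max_error(4) > max_error(8) > max_error(16) > 0)  # True
```

This is the 3G-versus-cable trade-off from the previous paragraph in miniature: fewer bits, coarser picture.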

Three limits to our ability to analyse data and draw insights from it

1. Gödel’s Incompleteness Theorem and the inconsistency of algorithms

Kurt Gödel’s Incompleteness Theorem sets a limit on what can be achieved by any “closed logical system”. Examples of “closed logical systems” include computer programming languages, any system for creating algorithms – and mathematics itself.

We use “closed logical systems” whenever we create insights and conclusions by combining and extrapolating from basic data and facts. This is how all reporting, calculating, business intelligence, “analytics” and “big data” technologies work.

Gödel’s Incompleteness Theorem proves that any closed logical system can be used to create conclusions that it is not possible to show are true or false using the same system. In other words, whilst computer systems can produce extremely useful information, we cannot rely on them to prove that that information is completely accurate and valid. We have to do that ourselves.

Gödel’s theorem doesn’t stop computer algorithms that have been verified by humans using the scientific method from working; but it does mean that we can’t rely on computers to both generate algorithms and guarantee their validity.

2. The behaviour of many real-world systems can’t be reduced analytically to simple rules

Many systems in the real-world are complex: they cannot be described by simple rules that predict their behaviour based on measurements of their initial conditions.

A simple example is the “three body problem“. Imagine a sun, a planet and a moon all orbiting each other. The movement of these three objects is governed by the force of gravity, which can be described by relatively simple mathematical equations. However, even with just three objects involved, it is not possible to use these equations to directly predict their long-term behaviour – whether they will continue to orbit each other indefinitely, or will eventually collide with each other, or spin off into the distance.
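
Simulating three gravitating bodies takes more code than fits here, but the underlying difficulty – tiny differences in starting conditions growing into completely different outcomes – can be shown with a far simpler textbook chaotic system, the “logistic map”. (The logistic map is my substitution for illustration; it is not part of the three-body problem itself.)

```python
# The sensitivity of complex systems to their starting conditions,
# shown with the "logistic map", a standard textbook example of chaos:
# one line of arithmetic, applied over and over.

def diverging_gap(x0, delta, steps, r=4.0):
    """Largest gap seen between two runs that start `delta` apart."""
    x, y, biggest = x0, x0 + delta, 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        biggest = max(biggest, abs(x - y))
    return biggest

# A starting difference of one part in ten million...
print(diverging_gap(0.2, 1e-7, 50) > 0.1)  # True -- the runs diverge
```

Within fifty steps the two trajectories bear no resemblance to each other, so no measurement of finite precision allows the long-term behaviour to be calculated directly; it can only be explored by simulation.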

(A computer simulation by Hawk Express of a Belousov–Zhabotinsky reaction,  in which reactions between liquid chemicals create oscillating patterns of colour. The simulation is carried out using “cellular automata” a technique based on a grid of squares which can take different colours. In each “turn” of the simulation, like a turn in a board game, the colour of each square is changed using simple rules based on the colours of adjacent squares. Such simulations have been used to reproduce a variety of real-world phenomena)

As Stephen Wolfram argued in his controversial book “A New Kind of Science” in 2002, we need to take a different approach to understanding such complex systems. Rather than using mathematics and logic to analyse them, we need to simulate them, often using computers to create models of the elements from which complex systems are composed, and the interactions between them. By running simulations based on a large number of starting points and comparing the results to real-world observations, insights into the behaviour of the real-world system can be derived. This is how weather forecasts are created, for example. 
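
A minimal version of the cellular automata technique described in the caption above can be written in a few lines. This sketch implements “Rule 30”, one of the simple rules Wolfram studied; the grid width and number of turns are arbitrary choices.

```python
# A minimal cellular automaton of the kind Wolfram studied: a row of
# cells, each 0 or 1, updated each "turn" by a rule that looks only at
# a cell and its two neighbours. "Rule 30" is shown here; the grid
# width and number of turns are arbitrary choices.

RULE = 30  # the update table, encoded as a number in Wolfram's scheme

def step(cells):
    """Apply one turn of the rule to a row of cells (edges wrap round)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1] + [0] * 15  # a single "on" cell in the middle
for _ in range(10):
    row = step(row)

# From one trivial rule and one cell, a complex pattern has emerged:
print("".join("#" if c else "." for c in row))
```

Running the loop once per line and printing each row draws the famous triangular Rule 30 pattern – complex, unpredictable behaviour from a rule simple enough to fit in one line.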

But as we all know, weather forecasts are not always accurate. Simulations are approximations to real-world systems, and their accuracy is restricted by the degree to which digital data can be used to represent a non-digital world. For this reason, conclusions and predictions drawn from simulations are usually “average” or “probable” outcomes for the system as a whole, not precise predictions of the behaviour of the system or any individual element of it. This is why weather forecasts are often wrong; and why they predict likely levels of rain and windspeed rather than the shape and movement of individual clouds.

(A simple and famous example of a computer programme that never stops running because it calls itself. The output continually varies by printing out characters based on random number generation. Image by Prosthetic Knowledge)

3. Some problems can’t be solved by computing machines

If I consider a simple question such as “how many letters are in the word ‘calculation’?”, I can easily convince myself that a computer programme could be written to answer the question; and that it would find the answer within a relatively short amount of time. But some problems are much harder to solve, or can’t even be solved at all.
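
The easy case really is easy – a one-line sketch in Python:

```python
# Counting the letters in "calculation": a question a computer answers
# instantly, with a provably correct result.
word = "calculation"
print(len(word))  # → 11
```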

For example, a “Wang Tile” (see image below) is a square tile formed from four triangles of different colours. Imagine that you have bought a set of tiles of various colour combinations in order to tile a wall in a kitchen or bathroom. Given the set of tiles that you have bought, is it possible to tile your wall so that triangles of the same colour line up with each other, forming a pattern of “Wang Tile” squares?

In 1966 Robert Berger proved that no algorithm exists that can answer that question. There is no way to solve the problem – or even to determine how long solving it will take – without actually attempting it. You just have to try to tile the wall and find out the hard way.

One of the most famous examples of this type of problem is the “halting problem” in computer science. Some computer programmes finish executing their commands relatively quickly. Others can run indefinitely if they contain a “loop” instruction that never ends. For others which contain complex sequences of loops and calls from one section of code to another, it may be very hard to tell whether the programme finishes quickly, or takes a long time to complete, or never finishes its execution at all.

Alan Turing, one of the most important figures in the development of computing, proved in 1936 that a general algorithm to determine whether or not any computer programme finishes its execution does not exist. In other words, whilst there are many useful computer programmes in the world, there are also problems that computer programmes simply cannot solve.
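
A standard way to get a feel for this (the example is mine, not the article’s) is the Collatz procedure: nobody has proved that the loop below terminates for every starting number, which gives a taste of why no general algorithm can decide termination for arbitrary programmes.

```python
# The Collatz procedure: halve even numbers, replace odd n with 3n + 1.
# Whether this loop halts for *every* positive integer is a famous open
# problem - a concrete illustration of how hard halting questions can be.
def collatz_steps(n):
    """Count the steps taken for n to reach 1 under the Collatz rule."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # → 111: a long, erratic path from a small start
```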

(A set of Wang Tiles, and a pattern of coloured squares created by tiling them. Given any random set of tiles of different colour combinations, there is no set of rules that can be relied on to determine whether a valid pattern of coloured squares can be created from them. Sometimes, you have to find out by trial and error. Images from Wikipedia)

Five reasons why the human world is messy, unpredictable, and can’t be perfectly described using data and logic

1. Our actions create disorder

The 2nd Law of Thermodynamics is a good candidate for the most fundamental law of science. It states that as time progresses, the universe becomes more disorganised. It guarantees that ultimately – in billions of years – the Universe will die as all of the energy and activity within it dissipates.

An everyday practical consequence of this law is that every time we act to create value – building a shed, using a car to get from one place to another, cooking a meal – our actions eventually create a greater amount of disorder elsewhere – as noise, pollution, waste heat or landfill refuse.

For example, if I spend a day building a shed, then to create that order and value from raw materials, I consume structured food and turn it into sewage. Or if I use an electric forklift to stack a pile of boxes, I use electricity that has been created by burning structured coal into smog and ash.

So it is literally impossible to create a “perfect world”. Whenever we act to make a part of the world more ordered, we create disorder elsewhere. And ultimately – thankfully, long after you and I are dead – disorder is all that will be left.

2. The failure of Logical Atomism: why the human world can’t be perfectly described using data and logic

In the 20th Century two of the most famous and accomplished philosophers in history, Bertrand Russell and Ludwig Wittgenstein, invented “Logical Atomism”, a theory that the entire world could be described by using “atomic facts” – independent and irreducible pieces of knowledge – combined with logic.

But despite 40 years of work, these two supremely intelligent people could not get their theory to work: “Logical Atomism” failed. It is not possible to describe our world in that way.

One cause of the failure was the insurmountable difficulty of identifying truly independent, irreducible atomic facts. “The box is red” and “the circle is blue”, for example, aren’t independent or irreducible facts for many reasons. “Red” and “blue” are two conventions of human language used to describe the perceptions created when electromagnetic waves of different frequencies arrive at our retinas. In other words, they depend on and relate to each other through a number of sophisticated systems.

Despite centuries of scientific and philosophical effort, we do not have a complete understanding of how to describe our world at its most basic level. As physicists have explored the world at smaller and smaller scales, Quantum Mechanics has emerged as the most fundamental theory for describing it – it is the closest we have come to finding the “irreducible facts” that Russell and Wittgenstein were looking for. But whilst the mathematical equations of Quantum Mechanics predict the outcomes of experiments very well, after nearly a century, physicists still don’t really agree about what those equations mean. And as we have already seen, Heisenberg’s Uncertainty Principle prevents us from ever having perfect knowledge of the world at this level.

Perhaps the most important failure of logical atomism, though, was that it proved impossible to use logical rules to turn “facts” at one level of abstraction – for example, “blood cells carry oxygen”, “nerves conduct electricity”, “muscle fibres contract” – into facts at another level of abstraction – such as “physical assault is a crime”. The human world and the things that we care about can’t be described using logical combinations of “atomic facts”. For example, how would you define the set of all possible uses of a screwdriver, from prising the lids off paint tins to causing a short-circuit by jamming it into a switchboard?

Our world is messy, subjective and opportunistic. It defies universal categorisation and logical analysis.

(A Pescheria in Bari, Puglia, where a fish-market price information service makes it easier for local fishermen to identify the best buyers and prices for their daily catch. Photo by Vito Palmi)

3. The importance and inaccessibility of “local knowledge” 

Because the tool we use for calculating and agreeing value when we exchange goods and services is money, economics is the discipline that is often used to understand the large-scale behaviour of society. We often quantify the “growth” of society using economic measures, for example.

But this approach is notorious for overlooking social and environmental characteristics such as health, happiness and sustainability. Alternatives exist, such as the Social Progress Index, or the measurement framework adopted by the United Nations 2014 Human Development Report on world poverty; but they are still high level and abstract.

Such approaches struggle to explain localised variations, and in particular cannot predict the behaviours or outcomes of individual people with any accuracy. This “local knowledge problem” is caused by the fact that a great deal of the information that determines individual actions is personal and local, and not measurable at a distance – the experienced eye of the fruit buyer assessing not just the quality of the fruit but the quality of the farm and farmers that produce it, as a measure of the likely consistency of supply; the emotional attachments that cause us to favour one brand over another; or the degree of community ties between local businesses that influence their propensity to trade with each other.

“Sharing economy” business models that use social media and reputation systems to enable suppliers and consumers of goods and services to find each other and transact online are opening up this local knowledge to some degree. Local food networks, freecycling networks, and land-sharing schemes all use this technology to the benefit of local communities whilst potentially making information about detailed transactions more widely available. And to some degree, the human knowledge that influences how transactions take place can be encoded in “expert systems” which allow computer systems to codify the quantitative and heuristic rules by which people take decisions.

But these technologies are only used in a subset of the interactions that take place between people and businesses across the world, and it is unlikely that they’ll become ubiquitous in the foreseeable future (or that we would want them to become so). Will we ever reach the point where prospective house-buyers delegate decisions about where to live to computer programmes operating in online marketplaces rather than by visiting places and imagining themselves living there? Will we somehow automate the process of testing the freshness of fish by observing the clarity of their eyes and the freshness of their smell before buying them to cook and eat?
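
The “expert systems” mentioned above can be sketched very simply: rules tried in order until one matches. The rules below are invented for illustration – a caricature of the fish-buyer’s heuristics, not a real system:

```python
# A minimal sketch of the "expert system" idea: heuristic rules a fish buyer
# might apply, encoded as condition/conclusion pairs tried in order.
# The rules and thresholds are invented for illustration.
RULES = [
    (lambda f: f["eye_clarity"] > 0.8 and f["smell"] == "fresh", "buy"),
    (lambda f: f["eye_clarity"] > 0.5, "inspect further"),
    (lambda f: True, "reject"),  # fallback rule always matches
]

def advise(fish):
    """Return the conclusion of the first rule whose condition matches."""
    for condition, conclusion in RULES:
        if condition(fish):
            return conclusion

print(advise({"eye_clarity": 0.9, "smell": "fresh"}))  # → buy
```

Real expert systems add certainty factors, rule chaining and explanation facilities, but the gap described here remains: the hard part is eliciting and encoding the tacit local knowledge, not executing the rules.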

In many cases, while technology may play a role introducing potential buyers and sellers of goods and services to each other, it will not replace – or predict – the human behaviours involved in the transaction itself.

(Medway Youth Trust use predictive and textual analytics to draw insight into their work helping vulnerable children. They use technology to inform expert case workers, not to take decisions on their behalf.)

4. “Wicked problems” cannot be described using data and logic

Despite all of the challenges associated with problems in mathematics and the physical sciences, it is nevertheless relatively straightforward to frame and then attempt to solve problems in those domains; and to determine whether the resulting solutions are valid.

As the failure of Logical Atomism showed, though, problems in the human domain are much more difficult to describe in any systematic, complete and precise way – a challenge known as the “frame problem” in artificial intelligence. This is particularly true of “wicked problems” – challenges such as social mobility or vulnerable families that are multi-faceted, and consist of a variety of interdependent issues.

Take job creation, for example. Is that best accomplished through creating employment in taxpayer-funded public sector organisations? Or by allowing private-sector wealth to grow, creating employment through “trickle-down” effects? Or by maximising overall consumer spending power as suggested by “middle-out” economics? All of these ideas are described not using the language of mathematics or other formal logical systems, but using natural human language which is subjective and inconsistent in use.

The failure of Logical Atomism to fully represent such concepts in formal logical systems through which truth and falsehood can be determined with certainty emphasises what we all understand intuitively: there is no single “right” answer to many human problems, and no single “right” action in many human situations.

(An electricity bill containing information provided by OPower comparing one household’s energy usage to their neighbours. Image from Grist)

5. Behavioural economics and the caprice of human behaviour

“Behavioural economics” attempts to predict the way that humans behave when making choices that have a measurable impact on them – for example, whether to put the washing machine on at 5pm when electricity is expensive, or at 11pm when it is cheap.

But predicting human behaviour is notoriously unreliable.

For example, in a smart water-meter project in Dubuque, Iowa, households that were told how their water conservation compared to that of their near neighbours were found to be twice as likely to take action to improve their efficiency as those who were only told the details of their own water use. In other words, people who were given quantified evidence that they were less responsible water users than their neighbours changed their behaviour. OPower have used similar techniques to help US households save 1.9 terawatt hours of power simply by including a report based on data from smart meters in a printed letter sent with customers’ electricity bills.

These are impressive achievements; but they are not always repeatable. A recycling scheme in the UK that adopted a similar approach found instead that it lowered recycling rates across the community: households who learned that they were putting more effort into recycling than their neighbours asked themselves “if my neighbours aren’t contributing to this initiative, then why should I?”

Low carbon engineering technologies like electric vehicles have clearly defined environmental benefits and clearly defined costs. But most Smart Cities solutions are less straightforward. They are complex socio-technical systems whose outcomes are emergent. Our ability to predict their performance and impact will certainly improve as more are deployed and analysed, and as University researchers, politicians, journalists and the public assess them. But we will never predict individual actions using these techniques, only the average statistical behaviour of groups of people. This can be seen from OPower’s own comparison of their predicted energy savings against those actually achieved – the predictions are good, but the actual behaviour of OPower’s customers shows a high degree of apparently random variation. Those variations are the result of the subjective, unpredictable and sometimes irrational behaviour of real people.
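
The distinction between predictable group averages and unpredictable individuals can be made concrete with a toy simulation (all figures invented, not OPower’s data):

```python
# A toy illustration (figures invented, not OPower's data): each household's
# energy saving is modelled as a small mean effect swamped by individual
# noise. The group average is stable; individual outcomes vary widely.
import random

random.seed(1)

def household_saving():
    """kWh saved by one household: 2.0 kWh mean effect, large spread."""
    return random.gauss(2.0, 5.0)

group = [household_saving() for _ in range(10000)]
mean = sum(group) / len(group)
share_no_saving = sum(1 for s in group if s <= 0) / len(group)

print(round(mean, 2))            # group mean: close to the 2.0 kWh effect
print(round(share_no_saving, 2)) # yet roughly a third save nothing at all
```

With ten thousand simulated households the mean is stable from run to run, yet many individual households show no saving at all – exactly the gap between statistical and individual prediction.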

We can take insight from Behavioural Economics and other techniques for analysing human behaviour in order to create appropriate strategies, policies and environments that encourage the right outcomes in cities; but none of them can be relied on to give definitive solutions to any individual person or situation. They can inform decision-making, but are always associated with some degree of uncertainty. In some cases, the uncertainty will be so small as to be negligible, and the predictions can be treated as deterministic rules for achieving the desired outcome. But in many cases, the uncertainty will be so great that predictions can only be treated as general indications of what might happen; whilst individual actions and outcomes will vary greatly.

(Of course it is impossible to predict individual criminal actions as portrayed in the film “Minority Report”. But it is very possible to analyse past patterns of criminal activity, compare them to related data such as weather and social events, and predict the likelihood of crimes of certain types occurring in certain areas. Cities such as Memphis and Chicago have used these insights to achieve significant reductions in crime)

Learning to value insight without certainty

Mathematics and digital technology are incredibly powerful; but they will never perfectly and completely describe and predict our world in human terms. In many cases, our focus for using them should not be on automation: it should be on the enablement of human judgement through better availability and communication of information. And in particular, we should concentrate on communicating accurately the meaning of information in the context of its limitations and uncertainties.

There are exceptions where we automate systems because of a combination of a low-level of uncertainty in data and a large advantage in acting autonomously on it. For example, anti-lock braking systems save lives by using automated technology to take thousands of decisions more quickly than most humans would realise that even a single decision needed to be made; and do so based on data with an extremely low degree of uncertainty.

But the most exciting opportunity for us all is to learn to become sophisticated users of information that is uncertain. The results of textual analysis of sentiment towards products and brands expressed in social media are far from certain; but they are still of great value. Similar technology can extract insights from medical research papers, case notes in social care systems, maintenance logs of machinery and many other sources. Those insights will rarely be certain; but properly assessed by people with good judgement they can still be immensely valuable.

This is a much better way to understand the value of technology than ideas like “perfect knowledge” and “algorithmic regulation”. And it is much more likely that people will trust the benefits that we claim new technologies can bring if we are open about their limitations. People won’t use technologies that they don’t trust; and they won’t invest their money in them or vote for politicians who say they’ll spend their taxes on them.

Thank you to Richard Brown and Adrian McEwen for discussions on Twitter that helped me to prepare this article. A more in-depth discussion of some of the scientific and philosophical issues I’ve described, and an exploration of the nature of human intelligence and its non-deterministic characteristics, can be found in the excellent paper “Answering Descartes: Beyond Turing” by Stuart Kauffman, published by MIT Press.

What’s the risk of investing in a Smarter City?

(The two towers of the Bosco Verticale in Milan will be home to more than 10,000 plants that create shade and improve air quality. But to what degree do such characteristics make buildings more attractive to potential tenants than traditional structures, generating financial returns that could reward more widespread investment in this approach? Photo by Marco Trovo)

(Or “how to buy a Smarter City that won’t go bump in the night”)

There are good reasons why the current condition and future outlook of the world’s cities have been the subject of great debate in recent years. Their population will double from 3 billion to 6 billion by 2050; and while those in the developing world are growing at such a rate that they are challenging our ability to construct resilient, efficient infrastructure, those in developed countries often have significant levels of inequality and areas of persistent poverty and social immobility.

Many people involved in the debate are convinced that new approaches are needed to transport, food supply, economic development, water and energy management, social and healthcare, public safety and all of the other services and infrastructures that support cities.

As a consequence, analysts such as Frost & Sullivan have estimated that the market for “Smart City” solutions that exploit technology to address these issues will be worth $1.5 trillion by 2020.

But anyone who has tried to secure investment in an initiative to apply “smart” technology in a city knows that it is not always easy to turn that theoretical market value into actual investment in projects, technology, infrastructure and expertise.

It’s not difficult to see why this is the case. Most investments are made in order to generate a financial return, but profit is not the objective of “Smart Cities” initiatives: they are intended to create economic, environmental or social outcomes. So some mechanism – an investment vehicle, a government regulation or a business model – is needed to create an incentive to invest in achieving those outcomes.

Institutions, Business, Infrastructure and Investment

Citizens expect national and local governments to use their tax revenues to deliver these objectives, of course. But they are also very concerned that the taxes they pay are spent wisely on programmes with transparent, predictable, deliverable outcomes, as the current controversy over the UK’s proposed “HS2” high speed train network and previous controversies over the effectiveness of public sector IT programmes show.

Nevertheless, the past year has seen a growing trend for cities in Europe and North America to invest in Smart Cities technologies from their own operational budgets, on the basis of their ability to deliver cost savings or improvements in outcomes.

For example, some cities are replacing traditional parking management and enforcement services with “smart parking” schemes that are reducing congestion and pollution whilst paying for themselves through increased enforcement revenues. Others are investing their allocation of central government infrastructure funds in Smart solutions – such as Cambridge, Ontario’s use of the Canadian government’s Gas Tax Fund to invest in a sensor network and analytics infrastructure to manage the city’s physical assets intelligently.

The providers of Smart Cities solutions are investing too, by implementing their services on Cloud computing platforms so that cities can pay incrementally for their use of them, rather than investing up-front in their deployment. Minneapolis, Minnesota and Montpellier, France, recently announced that they are using IBM’s Cloud-based solutions for smarter water, transport and emergency management in this way. And entrepreneurial businesses, backed by Venture Capital investment, are also investing in the development of new solutions.

However, we have not yet tapped the largest potential investment streams: property and large-scale infrastructure. The British Property Federation, for example, estimates that £14 billion is invested in the development of new property in the UK each year. For the most part, these investment streams are not currently directed at “Smart City” solutions.

To understand why that is the case – and how we might change it – we need to understand the differences between three types of risk involved in investing in smart infrastructures compared with traditional infrastructures: construction risk; the impact of operational failures; and confidence in outcomes.

(A cyclist’s protest in 2012 about the disruption caused in Edinburgh by the overrunning construction of the city’s new tram system. Photo by Andy A)

Construction Risk

At a discussion in March of the financing of future city initiatives, held within the Lord Mayor of the City of London’s “Tomorrow’s Cities” programme, Daniel Wong, Head of Infrastructure and Real Estate for Macquarie Capital Europe, said that only a “tiny fraction” – a few percent – of the investable resources of the pension and sovereign wealth funds often referred to as the “wall of money” seeking profitable long-term investment opportunities were available to invest in infrastructure projects that carry “construction risk” – the risk of financial loss or cost overruns during construction.

For conventional infrastructure, construction risk is relatively well understood. At the Tomorrow’s Cities event, Jason Robinson, Bechtel’s General Manager for Urban Development, said that the construction sector was well able to manage that risk on behalf of investors. There are exceptions – such as the delays, cost increases and reduction in scale of Edinburgh’s new tram system – but they are rare.

So are we similarly well placed to manage the additional “construction risk” created when we add new technology to infrastructure projects?

Unfortunately, research carried out in 2013 by the Standish Group on behalf of Computerworld suggests not. Standish Group used data describing 3,555 IT projects between 2003 and 2012 that had labour costs of at least $10 million, and found that only 6.4% were wholly successful. 52% were delivered, but cost more than expected, took longer than expected, or failed to deliver everything that was expected of them. The rest – 41.4% – either failed completely or had to be stopped and re-started from scratch. Anecdotally, we are familiar with the press coverage of high profile examples of IT projects that do not succeed.

We should not be surprised that it is so challenging to deliver IT projects. They are almost always driven by requirements that represent an aspiration to change the way that an organisation or system works: such requirements are inevitably uncertain and often change as projects proceed. In today’s interconnected world, many IT projects involve the integration of several existing IT systems operated by different organisations: most of those systems will not have been designed to support integration. And because technology changes so quickly, many projects use technologies that are new to the teams delivering them. All of these things will usually be true for the technology solutions required for Smart City projects.

By analogy, then, an IT project often feels like an exercise in constructing an ambitious new style of building, using new materials whose weight, strength and stiffness aren’t wholly certain, on foundations that are a mixture of sand, gravel and wetland. It is not surprising that only 6.4% deliver everything they intend to, on time and on budget – though it is also disappointing that as many as 41.4% fail so completely.

However, the real insight is that the characteristics of uncertainty, risk, timescales and governance for IT projects are very different from construction and infrastructure projects. All of these issues can be managed; but they are managed in very different ways. Consequently, it will take time and experience for the cultures of IT and construction to reconcile their approaches to risk and project management, and consequently to present a confident joint approach to investors.

The implementation of Smart Cities IT solutions on Cloud Computing platforms by their providers mitigates this risk to an extent by “pre-fabricating” these components of smart infrastructure. But there is still risk associated with the integration of these solutions with physical infrastructure and engineering systems. As we gain further experience of carrying out that integration, IT vendors, investors, construction companies and their customers will collectively increase their confidence in managing this risk, unlocking investment at greater scale.

(The unfortunate consequence of a driver who put more trust in their satellite navigation and GPS technology than its designers expected. Photo by Salmon Assessors)

Operational Risk

We are all familiar with IT systems failing.

Our laptops, notebooks and tablets crash, and we lose work as a consequence. Our television set-top boxes reboot themselves midway through recording programmes. Websites become unresponsive or lose data from our shopping carts.

But when failures occur in IT systems that monitor and control physical systems such as cars, trains and traffic lights, the consequences can be severe: damage to property, injury and death. Organisations that invest in and operate infrastructure are conscious of these risks, and balance them against the potential benefits of new technologies when deciding whether to use them.

The real-world risks of technology failure are already becoming more severe as all of us adopt consumer technologies such as smartphones and social media into every aspect of our lives (as the driver who followed his satellite navigation system off the roads of Paris onto the pavement, and then all the way down the steps into the Paris Metro, discovered).

The noted urbanist Jane Jacobs defined cities by their ability to provide privacy and safety amongst citizens who are usually strangers to each other; and her thinking is still regarded today by many urbanists as the basis of our understanding of cities. As digital technology becomes more pervasive in city systems, it is vital that we evolve the policies that govern digital privacy to ensure that those systems continue to support our lives, communities and businesses successfully.

Google’s careful exploration of self-driving cars in partnership with driver licensing organisations is an example of that process working well; the discovery of a suspected 3D-printing gun factory in Manchester last year is an example of it working poorly.

These issues are already affecting the technologies involved in Smart Cities solutions. An Argentinian researcher recently demonstrated that traffic sensors used around the world could be hacked and made to report misleading information. At the time of installation it was assumed that no-one would ever have a motive to attack them, so they were configured with insufficient security. We will have to ensure that future deployments are much more secure.

Conversely, we routinely trust automated technology in many aspects of our lives – the automatic pilots that land the planes we fly in, and the anti-lock braking systems that slow and stop our cars far more effectively than we are able to ourselves.

If we are to build the same level of trust and confidence in Smart City solutions, we need to be open and honest about their risks as well as their benefits; and clear how we are addressing them.

(Cars from the car club “car2go” ready to hire in Vancouver. Despite succeeding in many cities around the world, the business recently withdrew from the UK after failing to attract sufficient customers to two pilot deployments in London and Birmingham. The UK’s cultural attachment to private car ownership has proved too strong at present for a shared-ownership business model to succeed. Photo by Stephen Rees)

Outcomes Risk

Smart infrastructures such as Stockholm’s road-use charging scheme and London’s congestion charge were constructed in the knowledge that they would be financially sustainable, and with the belief that they would create economic and environmental benefits. Subsequent studies have shown that they did achieve those benefits, but data to predict them confidently in advance did not exist because they were amongst the first of their kind in the world.

The benefits of “Smart” schemes such as road-use charging and smart metering cannot be calculated deterministically in advance because they depend on citizens changing their behaviour – deciding to ride a bus rather than to drive a car; or deciding to use dishwashers and washing machines overnight rather than during the day.

There are many examples of Smart Cities projects that have successfully used technology to encourage behaviour change. In a smart water meter project in Dubuque, for example, households were given information that told them whether their domestic appliances were being used efficiently, and alerted to any leaks in their supply of water. To a certain extent, households acted on this information to improve the efficiency of their water usage. But households who were also given a “green points” score telling them how their water conservation compared to that of their near neighbours were found to be twice as likely to take action to improve their efficiency as those who received only their own usage information.

However, these techniques are notoriously difficult to apply successfully. A recycling scheme that adopted a similar approach found instead that it lowered recycling rates across the community: households who learned that they were putting more effort into recycling than their neighbours asked themselves “if my neighbours aren’t contributing to this initiative, then why should I?”

The financial vehicles that enable investment in infrastructure and property are either government-backed instruments that reward economic and social outcomes such as reductions in carbon footprint or the creation of jobs; or market-based instruments based on the creation of direct financial returns.

So are we able to predict those outcomes confidently enough to enable investment in Smart Cities solutions?

I put that question to the debating panel at the Tomorrow’s Cities meeting. In particular, I asked whether investors would be willing to purchase bonds in smart metering infrastructures with a rate of return dependent on the success of those infrastructures in encouraging consumers to reduce their use of water and energy.

The response was a clear “no”. The application of those technologies, and their effectiveness in reducing the use of water and electricity by families and businesses, are too uncertain for such investment vehicles to be used.

Smart Cities solutions are not straightforward engineering solutions such as electric vehicles whose cost, efficiency and environmental impacts can be calculated in a deterministic way. They are complex socio-technical systems whose outcomes are emergent and uncertain.

Our ability to predict their performance and impact will certainly improve as more are deployed and analysed, and as university researchers, politicians, journalists and the public assess them. As that happens, investors will become more willing to fund them; or, with government support, to create new financial vehicles that reward investment in initiatives that use smart technology to create social, environmental and economic improvements – just as the World Bank’s Green Bonds, launched in 2008, support environmental schemes today.

(Recycling bins in Curitiba, Brazil. As Mayor of Curitiba, Jaime Lerner started one of the world’s earliest and most effective city recycling programmes by harnessing the enthusiasm of children to influence the behaviour of their parents. Lerner’s many initiatives to transform Curitiba are characteristic of entrepreneurial leadership. Photo by Ana Elisa Ribeiro)

Evidence and Leadership

The evidence base needed to support new investment vehicles is already being created. In Canada, for example, a collaboration between Canadian insurers and cities has developed a set of tools to create a common understanding of the financial risk created by the effects of climate change on the resilience of city infrastructures.

At the international level, the “Little Rock Accord” between the Madrid Club of former national Presidents and Prime Ministers and the P80 group of pension funds agreed to create a task force to increase the degree to which pension and sovereign wealth funds invest in the deployment of technology to address climate change, shortages of resources such as energy, water and food, and sustainable, resilient growth. My colleague the economist Mary Keeling has been working with IBM’s Institute for Business Value to analyse and express more clearly the benefits of Smart approaches – in water management and transportation, for example. And Peter Head’s Ecological Sequestration Trust and Robert Bishop’s International Centre for Earth Simulation are both pooling international data and expertise to create models that explore how more sustainable cities and societies might work.

But the Smart City programmes which courageously drive the field forward will not always be those that demand a complete and detailed cost/benefit analysis in advance. Writing in “The Plundered Planet”, the economist Paul Collier asserts that any proposed infrastructure of reasonable novelty and significant scale is effectively so unique – especially when considered in its geographic, political, social and economic context – that an accurate cost/benefit case simply cannot be constructed.

Instead, initiatives such as London’s congestion charge and bicycle hire scheme, Sunderland’s City Cloud and Bogota’s bikeways and parks were created by courageous leaders with a passionate belief that they could make their cities better. As more of those leaders come to trust technology and the people who deliver it, their passion will be another force behind the adoption of technology in city systems and infrastructure.

What’s the risk of not investing in a Smarter City?

For at least the last 50 years, we have been observing that life is speeding up and becoming more complicated. In his 1964 work “Notes on the Synthesis of Form“, the town planner Christopher Alexander wrote:

“At the same time that the problems increase in quantity, complexity and difficulty, they also change faster than ever before. New materials are developed all the time, social patterns alter quickly, the culture itself is changing faster than it has ever changed before … To match the growing complexity of problems, there is a growing body of information and specialist experience … [but] not only is the quantity of information itself beyond the reach of single designers, but the various specialists who retail it are narrow and unfamiliar with the form-makers’ peculiar problems.”

(Alexander’s 1977 work “A Pattern Language: Towns, Buildings, Construction” is one of the most widely read books on urban design; it was also an enormous influence on the development of the computer software industry).

The physicist Geoffrey West has shown that this process is alive and well in cities today. As the world’s cities grow, life in them speeds up, and they create ideas and wealth more rapidly, leading to further growth. West has observed that, in a world with constrained resources, this process will lead to a catastrophic failure when demand for fresh water, food and energy outstrips supply – unless we change that process, and change the way that we consume resources in order to create rewarding lives for ourselves.

There are two sides to that challenge: changing what we value; and changing how we create what we value from the resources around us.

(...)

(“Makers” at the Old Print Works in Balsall Heath, Birmingham, sharing the tools, skills, contacts and ideas that create successful small businesses in local communities)

The Transition movement, started by Rob Hopkins in Totnes in 2006, is tackling both parts of that challenge. “Transition Towns” are communities who have decided to act collectively to transition to a way of life which is less resource-intensive, and to value the characteristics of such lifestyles in their own right – where possible trading regionally, recycling and re-using materials and producing and consuming food locally.

The movement does not advocate isolation from the global industrial economy, but it does argue that local, alternative products and services can in some cases be more sustainable than mass-produced commodities; that the process of producing them can be its own reward; and that acting at community level is for many people the most effective way to contribute to sustainability. From local currencies to food-trading networks to community energy schemes, many “Smart” initiatives have emerged from the Transition movement.

We will need the ideas and philosophy of Transition to create sustainable cities and communities – and without them we will fail. But those ideas alone will not create a sustainable world. With current technologies, for example, one hectare of highly fertile, intensively farmed land can feed 10 people. Birmingham, my home city, has an area of 60,000 hectares of relatively infertile land, most of which is not available for farming at all; and a population of around 1 million. Those numbers don’t add up to food self-sufficiency. And Birmingham is a very low-density city – between one-half and one-tenth as dense as the growing megacities of Asia and South America.
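The back-of-the-envelope arithmetic behind that claim can be made explicit. This is a rough upper bound only, using the round figures quoted above – and it optimistically pretends that every hectare of the city were fertile, intensively farmed land:

```python
# Rough self-sufficiency check using the round figures quoted above.
# Deliberately optimistic: it assumes ALL of Birmingham's land were
# highly fertile and intensively farmed, when in reality most of it
# is not available for farming at all.
PEOPLE_FED_PER_HECTARE = 10       # highly fertile, intensively farmed land
BIRMINGHAM_HECTARES = 60_000
BIRMINGHAM_POPULATION = 1_000_000

upper_bound = PEOPLE_FED_PER_HECTARE * BIRMINGHAM_HECTARES
shortfall = BIRMINGHAM_POPULATION - upper_bound

print(f"Upper bound on people fed: {upper_bound:,}")   # 600,000
print(f"Shortfall even at best:    {shortfall:,}")     # 400,000
```

Even under these impossibly generous assumptions the city could feed at most 600,000 of its million residents – which is why the numbers “don’t add up” to self-sufficiency.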

Cities depend on vast infrastructures and supply-chains, and they create complex networks of transactions supported by transportation and communications. Community initiatives will adapt these infrastructures to create local value in more sustainable, resilient ways, and by doing so will reduce demand. But they will not affect the underlying efficiency of the systems themselves. And I do not personally believe that, in a world of 7 billion people in which resources and opportunity are distributed extremely unevenly, community initiatives alone will reduce demand significantly enough to achieve sustainability.

We cannot simply scale these systems up as the world’s population grows to 9 billion by 2050; we need to change the way they work. That means changing the technology they use, or changing the way they use technology. We need to make them smarter.

From field to market to kitchen: smarter food for smarter cities

(A US Department of Agriculture inspector examines a shipment of imported frozen meat in New Orleans in 2013. Photo by Anson Eaglin)

One of the biggest challenges associated with the rapid urbanisation of the world’s population is working out how to feed billions of extra citizens. I’m spending an increasing amount of my time understanding how technology can help us to do that.

It’s well known that the populations of many of the world’s developing nations – and some of those that are still under-developed – are rapidly migrating from rural areas to cities. In China, for example, hundreds of millions of people are moving from the countryside to cities, leaving behind a lifestyle based on extended family living and agriculture for employment in business and a more modern lifestyle.

The definitions of “urban areas” used in many countries undergoing urbanisation include a criterion that less than 50% of employment and economic activity is based on agriculture (the appendices to the 2007 revision of the UN World Urbanisation Prospects summarise such criteria from around the world). Cities import their food.

In the developed countries of the Western world, this criterion is missing from most definitions of cities, which focus instead on the size and density of population. In the West, the transformation of economic activity away from agriculture took place during the Industrial Revolution of the 18th and 19th Centuries.

Urbanisation and the industrialisation of food

The food that is now supplied to Western cities is produced through a heavily industrialised process. But whilst the food supply chain had to scale dramatically to feed the rapidly growing cities of the Industrial Revolution, the processes it used, particularly in growing food and creating meals from it, did not industrialise – i.e. reduce their dependence on human labour – until much later.

As described by Population Matters, industrialisation took place after the Second World War when the countries involved took measures to improve their food security after struggling to feed themselves during the War whilst international shipping routes were disrupted. Ironically, this has now resulted in a supply chain that’s even more internationalised than before as the companies that operate it have adopted globalisation as a business strategy over the last two decades.

This industrial model has led to dramatic increases in the quantity of food produced and distributed around the world, as the industry group the Global Harvest Initiative describes. But whether it is the only way, or the best way, to provide food to cities at the scale required over the next few decades is the subject of much debate and disagreement.

(Irrigation enables agriculture in the arid environment of Al Jawf, Libya. Photo by Future Atlas)

One of the critical voices is Philip Lymbery, the Chief Executive of Compassion in World Farming, who argues passionately in “Farmageddon” that the industrial model of food production and distribution is extremely inefficient and risks long-term damage to the planet.

Lymbery questions whether the industrial system is sustainable financially – it depends on vast subsidy programmes in Europe and the United States; and he questions its social benefits – industrial farms are highly automated and operate in formalised international supply chains, so they do not always provide significant food or employment in the communities in which they are based.

He is also critical of the industrial system’s environmental impact. In order to optimise food production globally for financial efficiency and scale, single-use industrial farms have replaced the mixed-use, rotational agricultural systems that replenish nutrients in soil and that support insect species crucial to the pollination of plants. They also create vast quantities of animal waste that causes pollution, because in the single-use industrial system there are no local fields in need of manure to fertilise crops.

And the challenges associated with feeding the growing populations of the world’s cities are not only to do with long-term sustainability. They are also a significant cause of ill-health and social unrest today.

Intensity, efficiency and responsibility

Our current food systems fail to feed nearly 1 billion people properly, let alone the 2 billion rise in global population expected by 2050. We already use 60% of the world’s fresh water to produce food – if we try to increase food production without changing the way that water is used, then we’ll simply run out of it, with dire consequences. In fact, as the world’s climate changes over the next few decades, less fresh water will be available to grow food. As a consequence of this and other effects of climate change, the UK supermarket ASDA reported recently that 95% of their fresh food supply is already exposed to climate risk.

The supply chains that provide food to cities are vulnerable to disruption – in the 2000 strike by the drivers who deliver fuel to petrol stations in the UK, some city supermarkets came within hours of running out of food completely; and disruptions to food supply have already caused alarming social unrest across the world.

These challenges will intensify as the world’s population grows, and as the middle classes double in size to 5 billion people, dramatically increasing demand for meat – and hence demand for food for the animals which produce it. Overall, the United Nations Food and Agriculture Organization estimates that we will need to produce 70% more food than today by 2050.

(Insect delicacies for sale in Phnom Penh’s central market. The United Nations suggested last year that more of us should join the 2 billion people who include insects in their diet – a nutritious and environmentally efficient source of food)

But increasing the amount of food available to feed people doesn’t necessarily mean growing more food, either by further intensifying existing industrial approaches or by adopting new techniques such as vertical farming or hydroponics. In fact, a more recent report issued by the United Nations and partner agencies cautioned that it was unlikely that the necessary increase in available food would be achieved through yield increases alone. Instead, it recommended reducing food loss, waste, and “excessive demand” for animal products.

There are many ways we might grow, distribute and use food more efficiently. We currently waste about 30% of the food we produce: some through food that rots before it reaches our shops or dinner tables, some through unpopularity (such as bread crusts, or fruit and vegetables that aren’t the “right” shape and colour), and some because we simply buy more than we need to eat. If those inefficiencies were corrected, the food we already produce would feed 11 billion people, let alone the 9 billion population predicted for the Earth by 2050.
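A quick sketch shows how those round figures hang together. The numbers below are the assumed approximations quoted above, not precise agricultural statistics:

```python
# Illustrative arithmetic only, using the approximate round figures
# quoted above (30% waste, enough production for 11 billion people,
# a projected 2050 population of 9 billion).
WASTE_FRACTION = 0.30
POTENTIAL_FED = 11e9           # people fed if nothing were wasted
PROJECTED_POPULATION_2050 = 9e9

# At current waste levels, only 70% of that potential reaches people.
people_fed_today = POTENTIAL_FED * (1 - WASTE_FRACTION)

# Headroom over the 2050 population if waste were eliminated.
surplus_if_no_waste = POTENTIAL_FED - PROJECTED_POPULATION_2050

print(f"Fed at current waste levels: {people_fed_today / 1e9:.1f} billion")
print(f"Headroom over 2050 population: {surplus_if_no_waste / 1e9:.0f} billion")
```

In other words, today’s production effectively feeds roughly 7.7 billion people’s worth of food to the table, while eliminating waste would leave a margin of about 2 billion over the projected 2050 population.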

I think that technology has some exciting roles to play in how we respond to those challenges.

Smarter food in the field: data for free, predicting the future and open source beekeeping

New technologies give us a great opportunity to monitor, measure and assess the agricultural process and the environment in which it takes place.

The SenSprout sensor can measure and transmit the moisture content of soil; it is made simply by printing an electronic circuit design onto paper using commercially-available ink containing silver nano-particles; and it powers itself using ambient radio waves. We can use sensors like SenSprout to understand and respond to the natural environment, using technology to augment the traditional knowledge of farmers.

By combining data from sensors such as SenSprout and local weather monitoring stations with national and international forecasts, my colleagues in IBM Research are investigating how advanced weather prediction technology can enable approaches to agriculture that are more efficient and precise in their use of water. A trial project in Flint River, Georgia is allowing farmers to apply exactly the right amount of water at the right time to their crops, and no more.
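The core idea of that kind of precision irrigation – size the water applied to the soil-moisture deficit, net of forecast rain – can be sketched in a few lines. To be clear, the function, thresholds and conversion factor below are hypothetical illustrations, not details of the Flint River trial or of any IBM or SenSprout system:

```python
# A minimal, hypothetical sketch of precision irrigation:
# irrigate only when soil moisture falls below a target, and let
# forecast rainfall make up part of the deficit. All numbers are
# illustrative assumptions, not real agronomic parameters.

def irrigation_mm(soil_moisture_pct: float,
                  forecast_rain_mm: float,
                  target_moisture_pct: float = 35.0,
                  mm_per_moisture_pct: float = 1.5) -> float:
    """Return the millimetres of water to apply today."""
    deficit = target_moisture_pct - soil_moisture_pct
    if deficit <= 0:
        return 0.0                       # soil already wet enough
    needed_mm = deficit * mm_per_moisture_pct
    # Subtract the rain the forecast says is on its way.
    return max(0.0, needed_mm - forecast_rain_mm)

print(irrigation_mm(30.0, 2.0))   # dry soil, a little rain due -> 5.5
print(irrigation_mm(40.0, 0.0))   # soil above target -> 0.0
```

The point of combining sensor readings with forecasts is visible even in this toy version: without the forecast term, the second parameter would be ignored and water already on its way from the sky would be applied again from the sprinkler.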

Such approaches improve our knowledge of the natural environment, but they do not control it. Nature is wild, the world is uncertain, and farmers’ livelihoods will always be exposed to risk from changing weather patterns and market conditions. The value of technology is in helping us to sense and respond to those changes. “Pasture Scout“, for example, does that by using social media to connect farmers in need of pasture to graze their cattle with other farmers with land of the right sort that is currently underused.

These possibilities are not limited to industrial agriculture or to developed countries. For example, the Kilimo Salama scheme adds resilience to the traditional practices of subsistence farmers by using remote weather monitoring and mobile phone payment schemes to provide affordable insurance for their crops.

Technology is also helping us to understand and respond to the environmental impact of the agricultural practices that have developed in previous decades: as urban beekeepers seek to replace lost natural habitats for bees, the Open Source Beehive project is using technology to help them identify the factors leading to the “colony collapse disorder” phenomenon that threatens the world’s bee population.

Smarter food in the marketplace: local food, the sharing economy and soil to fork traceability

The emergence of the internet as a platform for enabling sales, marketing and logistics over the last decade has enabled small and micro-businesses to reach markets across the world that were previously accessible only to much larger organisations with international sales and distribution networks. The proliferation of local food and urban farming initiatives shows that this transformation is changing the food industry too, where online marketplaces such as Big Barn and FoodTrade make it easier for consumers to buy locally produced food, and for producers to sell it.

This is not to say that vast industrial supply-chains will disappear overnight to be replaced by local food networks: they clearly won’t. But just as large-scale film and video production has adapted to co-exist and compete with millions of small-scale, “long-tail” video producers, so too the food industry will adjust. The need for co-existence and competition with new entrants should lead to improvements in efficiency and impact – the supermarket Tesco’s “Buying Club” shows how one large food retailer is already using these ideas to provide benefits, including environmental efficiencies, to its smaller suppliers.

(A Pescheria in Bari, Puglia photographed by Vito Palmi)

One challenge is that food – unlike music and video – is a fundamentally physical commodity: exchanging it between producers and consumers requires transport and logistics. The adoption by the food industry of “sharing economy” approaches – business models that use social media and analytics to create peer-to-peer transactions, and that replace bulk movement patterns by thousands of smaller interactions between individuals – will be dependent on our ability to create innovative distribution systems to support them. Zaycon Foods operate one such system, using online technology to allow consumers to collectively negotiate prices for food that they then collect from farmers at regular local events.

Rather than replacing existing markets and supply chains, one role that technology is already playing is to give food producers better insight into their behaviour. M-farm links farmers in Kenya to potential buyers for their produce, and provides them with real-time information about prices; and the University of Bari in Puglia, Italy operates a similar fish-market pricing information service that makes it easier for local fishermen to identify the best buyers and prices for their daily catch.

Whatever processes are involved in getting food from where it’s produced to where it’s consumed, there’s an increasing awareness of the need to track those movements so that we know what we’re buying and eating: both to prevent scandals such as last year’s discovery of horsemeat in UK food labelled as containing beef, and so that consumers can make buying decisions based on accurate information about the source and quality of food. The “eSporing” (“eTraceability”) initiative between food distributors and the Norwegian government explored these approaches following an outbreak of E. coli in 2006.

As sensors become more capable and less expensive, we’ll be able to add more data and insight into this process. Soil quality can be measured using sensors such as SenSprout; plant health could be measured by similar sensors or by video analytics using infra-red data. The gadgets that many of us use whilst exercising to measure our physical activity and use of calories could be used to assess the degree to which animals are able to exercise. And scientists at the University of the West of England in Bristol have developed a quick, cheap sensor that can detect harmful bacteria and the residues of antibiotics in food. (The overuse of antibiotics in food production has harmful side effects, and in particular is leading some bacteria that cause dangerous diseases in humans to develop resistance to treatment).

This advice from the Mayo Clinic in the United States gives one example of the link between the provenance of food and its health qualities, explaining that beef from cows fed on grass can have lower levels of fat and higher levels of beneficial “omega-3 fatty acids” than what they call “conventional beef” – beef from cows fed on grain delivered in lorries. (They appear to have forgotten the “convention” established by several millennia of evolution and thousands of years of animal husbandry that cows eat grass).

(Baltic Apple Pie – a recipe created by IBM’s Watson computer)

All of this information contributes to describing both the taste and health characteristics of food; and when it’s available, we’ll have the opportunity to make more informed choices about what we put on our tables.

Smarter food in the kitchen: cooking, blogging and cognitive computing

One of the reasons that the industrial farming system is so wasteful is that it is optimised to supply Western diets that include an unhealthy amount of meat; and to do so at an unrealistically low price for consumers. Enormous quantities of fish and plants – especially soya beans – that could be eaten by people as components of healthy diets are instead fed to industrially-farmed animals to produce this cheap meat. As a consequence, in the developed world many of us are eating more meat than is healthy for us. (Some of the arguments on this topic were debated by the UK’s Guardian newspaper last year).

But whilst eating less meat and more fish and vegetables is a simple idea, putting it into practice is a complex cultural challenge.

A recent report found that “a third of UK adults struggle to afford healthy food“. But the underlying cause is not economic: it is a lack of familiarity with the cooking and food preparation techniques that turn cheap ingredients into healthy, tasty food; and a cultural preference for red meat and packaged meals. The Sustainable Food School that is under development in Birmingham is one example of an initiative intending to address those challenges through education and awareness.

Engagement through traditional and social media also has an influence. Recent UK examples include the celebrity chefs who have campaigned for a shift in our diets towards more sustainably sourced fish; the schoolgirl who provoked a national debate about the standard and healthiness of school meals simply by blogging about the meals offered to her each day at school; and the food blogger Jack Monroe, who demonstrated how she could feed herself and her two-year-old son healthy, interesting food on a budget of £10 a week.

My colleagues in IBM Research have explored turning IBM’s Watson cognitive computing technology to this challenge. In an exercise similar to the “invention test” common to television cookery competitions, they have challenged Watson to create recipes from a restricted set of ingredients (such as might be left in the fridge and cupboards at the end of the week) and which meet particular criteria for health and taste.

(An example of local food processing: my own homemade chorizo.)

Food, technology, passion

The future of food is a complex and contentious issue – the tension between the productivity benefits of industrial agriculture and its environmental and social impact is just one example. I have touched on but not engaged in those debates in this article – my expertise is in technology, not agriculture, and I’ve attempted to link to a variety of sources from all sides of the debate.

Some of the ideas for providing food to the world’s growing population in the future are no less challenging, whether those ideas are cultural or technological. The United Nations suggested last year, for example, that more of us should join the 2 billion people who include insects in their diet. Insects are a nutritious and environmentally efficient source of food, but those of us who have grown up in cultures that do not consider them as food are – for the most part – not at all ready to contemplate eating them. Artificial meat, grown in laboratories, is another increasingly feasible source of protein in our diets. It challenges our assumption that food is natural, but has some very reasonable arguments in its favour.

It’s a trite observation, but food culture is constantly changing. My 5-year-old son routinely demands foods such as hummus and guacamole that are unremarkable now but were far from commonplace when I was a child. Ultimately, our food systems and diets will have to adapt and change again, or we’ll run out of food, land and water.

Technology is one of the tools that can help us to make those changes. But as Kentaro Toyama famously said: technology is not the answer; it is the amplifier of human intention.

So what really excites me is not technology, but the passion for food that I see everywhere: from making food for our own families at home, to producing it in local initiatives such as Loaf, Birmingham’s community bakery; and from using technology in programmes that contribute to food security in developing nations to setting food sustainability at the heart of corporate business strategy.

There are no simple answers, but we are all increasingly informed and well-intentioned. And as technology continues to evolve it will provide us with incredible new tools. Those are great ingredients for an “invention test” for us all to find a sustainable, healthy and tasty way to feed future cities.

Six ways to design humanity and localism into Smart Cities

(Birmingham’s Social Media Cafe, where individuals from every part of the city share their experience using social media to promote their businesses and community initiatives. Photograph by Meshed Media)

The Smart Cities movement is sometimes criticised for appearing to focus mainly on the application of technology to large-scale city infrastructures such as smart energy grids and intelligent transportation.

It’s certainly vital that we manage and operate city services and infrastructure as intelligently as possible – there’s no other way to cope with the rapid urbanisation taking place in emerging economies; with the increasing demand for services such as health and social care in the developed world whilst city budgets shrink dramatically; or with the need for improved resilience in the face of climate change everywhere.

But to focus too much on this aspect of Smart Cities and overlook the social needs of cities and communities risks forgetting the full purpose of cities: to enable huge numbers of individual citizens to live not just safe but rewarding lives with their families.

Maslow’s Hierarchy of Needs identifies our most basic requirements to be food, water, shelter and security. The purpose of many city infrastructures is to answer those needs, either directly (buildings, utility infrastructures and food supply chains) or indirectly (the transport systems that support us and the businesses that we work for).

Important as those needs are, though – particularly to the billions of people in the world for whom they are not reliably met – life would be dull and unrewarding if they were all that we aspired to.

Maslow’s hierarchy next relates the importance of family, friends and “self-actualisation” (which can crudely be described as the process of achieving things that we care about). These are the more elusive qualities that it’s harder to design cities to provide. But unless cities provide them, they will not be successful. At best they will be dull, unrewarding places to live and work, and will see their populations fall as those who can migrate elsewhere do so. At worst, they will create poverty, poor health and ultimately short, unrewarding lives.

A Smart City should not only be efficient, resilient and sustainable; it should improve all of these qualities of life for its citizens.

So how do we design and engineer them to do that?

(Maslow’s Hierarchy of Needs, image by Factoryjoe via Wikimedia Commons)

Tales of the Smart City

Stories about the people whose lives and businesses have been made better by technology tell us how we might answer that question.

In the Community Lover’s Guide to Birmingham, for example, Nick Booth describes the way his volunteer-led social media surgeries helped the Central Birmingham Neighbourhood Forum, Brandwood End Cemetery and Jubilee Debt Campaign to benefit from technology.

Another Birmingham initiative, the Northfield Ecocentre, crowdfunded £10,000 to support their “Urban Harvest” project. The funds helped the Ecocentre pick unwanted fruit from trees in domestic gardens in Birmingham and distribute it between volunteers, children’s centres, food bank customers and organisations promoting healthy eating; and to make some of it into jams, pickles and chutneys to raise money so that in future years the initiative can become self-sustaining.

In the village of Chale on the Isle of Wight, a community not served by the national gas network and with significant levels of fuel poverty, my colleague Andy Stanford-Clark has helped an initiative not only to deploy smart meters to measure the energy use of each household, but to co-design with residents how they will use that technology, so that the whole community feels a sense of ownership and inclusion in the initiative. The project has resulted in a significant drop in rent arrears as residents use the technology to reduce their utility bills, in some cases by up to 50 percent. Less obviously, the sense of shared purpose has extended to the creation of a communal allotment area in the village and a successful campaign to halve bus fares in the area.

There are countless other examples. Play Fitness “gamify” exercise to persuade children to get fit, and work very hard to ensure that their products are accessible to children in communities of any level of wealth. Casserole Club use social media to introduce people who can’t cook for themselves to people who are prepared to volunteer to cook for others. The West Midlands Collaborative Commerce Marketplace uses analytics technology to help its 10,000 member businesses win more than £4 billion in new contracts each year. And so on.

None of these initiatives are purely to do with technology. But they all use technologies that simply were not available and accessible as recently as a few years ago to achieve outcomes that are important to cities and communities. By understanding how the potential of technology was apparent to the stakeholders in such initiatives, why it was affordable and accessible to them, and how they acquired the skills to exploit it, we can learn how to design Smart Cities in a way that encourages widespread grass-roots, localised innovation.


(Top: Birmingham’s Masshouse Circus roundabout, part of the inner-city ringroad that famously impeded the city’s growth until it was demolished. Photo by Birmingham City Council. Bottom: Pedestrian roundabout in Lujiazui, China, constructed over a busy road junction, is a large-scale city infrastructure that balances the need to support traffic flows through the city with the importance that Jane Jacobs first described of allowing people to walk freely about the areas where they live and work. Photo by ChrisUK)

A tale of two roundabouts

History tells us that we should not assume that it will be straightforward to design Smart Cities to achieve that objective, however.

A measure of our success in building the cities we know today from the generations of technology that shaped them – concrete, cars and lifts – is the variation in life expectancy across them. In the UK, it’s common for life expectancy to vary by around 20 years between the poorest and richest parts of the same city.

That staggering difference is the outcome of a complex set of issues including the availability of education and opportunity, lifestyle factors such as diet and exercise, and the accessibility of city services. But a significant influence on many of those issues is the degree to which the large-scale infrastructures built to support our physiological needs and the demands of the economy also create a high-quality environment for daily life.

The photographs show two city transport infrastructures that are visually similar, but that couldn’t be more different in their influence on the success of the cities that they are part of.

The picture at the top shows Masshouse Circus in Birmingham in 2001 shortly before it was demolished. It was constructed in the 1960s as part of the city’s inner ring-road, intended to improve connectivity to the national economy through the road network. However, the impact of the physical barrier that it created to pedestrian traffic can be seen by the stark difference in land value inside and outside the “concrete collar” of the ring-road. Inside the collar, land is valuable enough for tall office blocks to be constructed on it; whilst outside it is of such low value that it is used as a ground-level carpark.

In contrast, the pedestrian roundabout in Lujiazui, China pictured at the bottom, constructed over a busy road junction, balances the need to support traffic flows through the city with the need for people to walk freely about the areas in which they live and work. As can be seen from the people walking all around it, it preserves the human vitality of an area that many busy roads flow through. 

We should take insight from these experiences when considering the design of Smart City infrastructures. Unless those infrastructures are designed to be accessible to and usable by citizens, communities and local businesses, they will be as damaging as poorly constructed buildings and poorly designed transport networks. If that sounds extreme, then consider the dangers of cyber-stalking, or the implications of the gun-parts confiscated from a suspected 3D printing gun factory in Manchester last year that had been created on general purpose machinery from digital designs shared through the internet. Digital technology has life and death implications in the real world.

For a start, we cannot take for granted that city residents have the basic ability to access the internet and digital technology. Some 18% of adults in the UK have never been online; and children today without access to the internet at home and in school are at an enormous disadvantage. As digital technology becomes even more pervasive and important, the impact of this digital divide – within and between people, cities and nations – will become more severe. This is why so many people care passionately about the principle of “Net Neutrality” – that the shared infrastructure of the internet provides the same service to all of its users; and does not offer preferential access to those individuals or corporations able to pay for it.

These issues are very relevant to cities and their digital strategies and governance. The operation of any form of network requires physical infrastructure such as broadband cables, wi-fi and 4G antennae and satellite dishes. That infrastructure is regulated by city planning policies. In turn, those planning policies are tools that cities can and should use to influence the way in which technology infrastructure is deployed by private sector service providers.

(Photograph of Aesop’s fable “The Lion and the Mouse” by Liz West)

Little and big

Cities are enormous places in which what matters most is that millions of individually small matters have good outcomes. They work well when their large scale systems support the fine detail of life for every one of their very many citizens: when “big things” and “little things” work well together.

A modest European or US city might have 200,000 to 500,000 inhabitants; a large one might have between one and ten million. The United Nations World Urbanisation Prospects 2011 revision recorded 23 cities with more than 10 million population in 2011 (only six of them in the developed world); and predicted that there would be nearly 40 by 2025 (only eight of them in the developed world – as we define it today). Overall, between now and 2050 the world’s urban population will double from 3 billion to 6 billion. 

A good example of the challenges that this enormous level of urbanisation is already creating is the supply of food. One hectare of highly fertile, intensively farmed land can feed 10 people. Birmingham, my home city, has an area of 60,000 hectares of relatively infertile land, most of which is not available for farming at all; and a population of around 1 million. Those numbers don’t add up to food self-sufficiency: even if every one of those hectares were fertile and farmed, they would feed only 600,000 people. And Birmingham is a very low-density city – between one-half and one-tenth as dense as the growing megacities of Asia and South America. Feeding the 7 to 10 billion people who will inhabit the planet between now and 2050, and the 3 to 6 billion of them that will live in dense cities, is certainly a challenge on an industrial scale.

In contrast, Casserole Club, the Northfield Ecocentre, the Chale Project and many other initiatives around the world have demonstrated the social, health and environmental benefits of producing and distributing food locally. Understanding how to combine the need to supply food at city-scale with the benefits of producing it locally and socially could make a huge difference to the quality of urban lives.

The challenge of providing affordable broadband connectivity throughout cities demonstrates similar issues. Most cities and countries have not yet addressed that challenge: private sector network providers will not deploy connectivity in areas which are insufficiently economically active for them to make a profit, and Government funding is not yet sufficient to close the gap.

In his enjoyable and insightful book “Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia“, Anthony Townsend describes a grass-roots effort by civic activists to provide New York with free wi-fi connectivity. I have to admire the vision and motivation of those involved, but – rightly or wrongly; and as Anthony describes – wi-fi has ultimately evolved to be dominated by commercial organisations.  

As technology continues to improve and to reduce in price, the balance of power between large, commercial, resource-rich institutions and small, agile, resourceful grassroots innovators will continue to change. Technologies such as Cloud Computing, social media, 3D printing and small-scale power generation are reducing the scale at which many previously industrial technologies become economically feasible; however, it will remain the case for the foreseeable future that many city infrastructures – physical and digital – will be large-scale, expensive affairs requiring the buying power and governance of city-scale authorities and the implementation resources of large companies.

But more importantly, neither small-scale nor large-scale solutions alone will meet all of our needs. Many areas in cities – usually those that are the least wealthy – haven’t yet been provided with wi-fi or broadband connectivity by either.  


(A well designed urban interface between people and infrastructure. Cars in Frederiksberg, Copenhagen wishing to join a main road must give way to cyclists and pedestrians passing along it)

We need to find the middle ground between the motivations, abilities and cultures of large companies and formal institutions on one hand; and those of agile, local innovators and community initiatives on the other. The pilot project to provide broadband connectivity and help using the internet to Castle Vale in Birmingham is a good example of finding that balance.

And I am optimistic that we can find it more often. Whilst Anthony is rightly critical of approaches to designing and building city systems that are led by technology, or that overlook the down-to-earth and sometimes downright “messy” needs of people and communities in favour of unrealistic technocratic and corporate utopias; the reality of the people I know that are employed by large corporations on Smart City projects is that they are acutely aware of the limitations as well as the value of technology, and are passionately committed to the human value of their work. That passion is often reflected in their volunteered commitment to “civic hacking“, open data initiatives, the teaching of technology in schools and other activities that help the communities in which they live to benefit from technology.

But rather than relying on individual passion and integrity, how do we encourage and ensure that large-scale investments in city infrastructures and technology enable small-scale innovation, rather than stifle it?

Smart urbanism and massive/small innovation

I’ve taken enormous inspiration in recent years from the architect Kelvin Campbell whose “Massive / Small” concept and theory of “Smart Urbanism” are based on the belief that successful cities emerge from physical environments that encourage “massive” amounts of “small”-scale innovation – the “lively, diversified city, capable of continual, close-grained improvement and change” that Jane Jacobs described in “The Death and Life of Great American Cities“.

We’ll have to apply similar principles in order for large-scale city technology infrastructures to support localised innovation and value-creation. But what are the practical steps that we can take to put those principles into practice?

Step 1: Make institutions accessible

There’s a very basic behaviour that most of us are quite bad at – listening. In particular, if the institutions of Smart Cities are to successfully create the environment in which massive amounts of small-scale innovation can emerge, then they must listen to and understand what local activists, communities, social innovators and entrepreneurs want and need.

Many large organisations – whether they are local authorities or private sector companies – are poor at listening to smaller organisations. Their decision-makers are very busy; and communications, engagement and purchasing occur through formally defined processes with legal, financial and confidentiality clauses that can be difficult for small or informal organisations to comply with. The more that we address these barriers, the more that our cities will stimulate and support small-scale innovation. One way to do so is through innovations in procurement; another is through the creation of effective engagement programmes, such as the Birmingham Community Healthcare Trust’s “Healthy Villages” project which is listening to communities expressing their need for support for health and wellbeing. This is why IBM started our “Smarter Cities Challenge” which has engaged hundreds of IBM’s Executives and technology experts in addressing the opportunities and challenges of city communities; and in so doing immersed them in very varied urban cultures, economies, and issues.

But listening is also a personal and cultural attitude. For example, in contrast to the current enthusiasm for cities to make as much data as possible available as “open data”, the Knight Foundation counsel a process of engagement and understanding between institutions and communities, in order to identify the specific information and resources that can be most usefully made available by city institutions to individual citizens, businesses and social organisations.

(Delegates at Gov Camp 2013 at IBM’s Southbank office, London. Gov Camp is an annual conference which brings together anyone interested in the use of digital technology in public services. Photo by W N Bishop)

In IBM, we’ve realised that it’s important to us to engage with, listen to and support small-scale innovation in its many forms when helping our customers and partners pursue Smarter City initiatives; from working with social enterprises, to supporting technology start-ups through our Global Entrepreneur Programme, to engaging with the open data and civic hacking movements.

More widely, it is often talented, individual leaders who overcome the barriers to engagement and collaboration between city institutions and localised innovation. In “Resilience: why things bounce back“, Andrew Zolli describes many examples of initiatives that have successfully created meaningful change. A common feature is the presence of an individual who shows what Zolli calls “translational leadership“: the ability to engage with both small-scale, informal innovation in communities and large-scale, formal institutions with resources.

Step 2: Make infrastructure and technology accessible

Whilst we have a long way to go to address the digital divide, Governments around the world recognise the importance of access to digital technology and connectivity; and many are taking steps to address it, such as Australia’s national deployment of broadband internet connectivity and the UK’s Urban Broadband Fund. However, in most cases, those programmes are not sufficient to provide coverage everywhere.

Some businesses and social initiatives are seeking to address this shortfall. CommunityUK, for example, are developing sustainable business models for providing affordable, accessible connectivity, and assistance using it, and are behind the Castle Vale project in Birmingham. And some local authorities, such as Sunderland and Birmingham, have attempted to provide complete coverage for their citizens – although just how hard it is to achieve that whilst avoiding anti-competition issues is illustrated by Birmingham’s subsequent legal challenges.

We should also tap into the enormous sums spent on the physical regeneration of cities and development of property in them. As I first described in June last year, while cities everywhere are seeking funds for Smarter City initiatives, and often relying on central government or research grants to do so, billions of Pounds, Euros, and Dollars are being spent on relatively conventional property development and infrastructure projects that don’t contribute to cities’ technology infrastructures or “Smart” objectives.

Local authorities could use planning regulations to steer some of that investment into providing Smart infrastructure, basic connectivity, and access to information from city infrastructures to citizens, communities and businesses. Last year, I developed a set of “Smart City Design Principles” on behalf of a city Council considering such an approach, including:

Principle 4: New or renovated buildings should be built to contain sufficient space for current and anticipated future needs for technology infrastructure such as broadband cables; and of materials and structures that do not impede wireless networks. Spaces for the support of fixed cabling and other infrastructures should be easily accessible in order to facilitate future changes in use.

Principle 6: Any development should ensure wired and wireless connectivity is available throughout it, to the highest standards of current bandwidth, and with the capacity to expand to any foreseeable growth in that standard.

(The Birmingham-based Droplet smartphone payment service, now also operating in London, is a Smart City start-up that has won backing from Finance Birmingham, a venture capital company owned by Birmingham City Council)

Step 3: Support collaborative innovation

Small-scale, local innovations will always take place, and many of them will be successful; but they are more likely to have significant, lasting, widespread impact when they are supported by city institutions with resources.

That support might vary from introducing local technology entrepreneurs to mentors and investors through the networks of contacts of city leaders and their business partners; through to practical assistance for social enterprises, helping them to put in place very basic but costly administration processes to support their operations.

City institutions can also help local innovations to thrive simply by becoming their customers. If Councils, Universities and major local employers buy services from innovative local providers – whether they be local food initiatives such as the Northfield Ecocentre or high-tech innovations such as Birmingham’s Droplet smartphone payment service – then they provide direct support to the success of those businesses.

In Birmingham, for example, Finance Birmingham (a Council-owned venture capital company) and the Entrepreneurs for the Future (e4F) scheme provide real, material support to the city’s innovative companies; whilst Bristol’s Mayor George Ferguson and Lambeth’s Council both support their local currencies by allowing salaries to be paid in them.

It becomes more obvious why stakeholders in a city might become involved in collaborative innovation when they have the opportunity to co-create a clear set of shared priorities. Those priorities can be compared to the objectives of innovative proposals seeking support, whether from social initiatives or businesses; used as the basis of procurement criteria for goods, services and infrastructure; set as the objectives for civic hacking and other grass-roots creative events; or even used as the criteria for funding programmes for new city services, such as the “Future Streets Incubator” that will shortly be launched in London as a result of the Mayor of London’s Roads Task Force.

In this context, businesses are not just suppliers of products and services, but also local institutions with significant supply chains, carbon and economic footprints, purchasing power and a huge number of local employees. There are many ways such organisations can play a role in supporting the development of an open, Smarter, more sustainable city.

The following “Smart City Design Principles” promote collaborative innovation in cities by encouraging support from development and regeneration initiatives:

Principle 12: Consultations on plans for new developments should fully exploit the capabilities of social media, virtual worlds and other technologies to ensure that communities affected by them are given the widest, most immersive opportunity possible to contribute to their design.

Principle 13: Management companies, local authorities and developers should have a genuinely engaging presence in social media so that they are approachable informally.

Principle 14: Local authorities should support awareness and enablement programmes for social media and related technologies, particularly “grass roots” initiatives within local communities.

Step 4: Promote open systems

A principle common to the open data movement, civic hacking, localism, the open government movement and those who support “bottom-up” innovations in Smart Cities is that public systems and infrastructure – in cities and elsewhere – should be “open”. That might mean open and transparent in their operation; accessible to all; or providing open data and API interfaces to their technology systems so that citizens, communities and businesses can adapt them to their own needs. Even better, it might mean all of those things.

The “Dublinked” information sharing partnership, in which Dublin City Council, three surrounding County Councils and service providers to the city share information and make it available to their communities as “open data”, is a good example of the benefits that openness can bring. Dublinked now makes 3,000 datasets available to local authority analysts; to researchers from IBM Research and the National University of Ireland; and to businesses, entrepreneurs and citizens. The partnership is identifying new ways for the city’s public services and transport, energy and water systems to work; and enabling the formation of new, information-based businesses with the potential to export the solutions they develop in Dublin to cities internationally. It is putting the power of technology and of city information not only at the disposal of the city authority and its agencies, but also into the hands of communities and innovators.
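
To make that concrete, here is a minimal sketch of consuming an open dataset of the kind a partnership such as Dublinked publishes. The dataset, column names and values below are invented for the example; real open datasets each have their own schemas.

```python
import csv
import io

# Invented sample of an open dataset, in the spirit of those shared through
# partnerships such as Dublinked. Column names and values are illustrative.
SAMPLE_CSV = """stop_id,route,avg_daily_boardings
1001,46A,540
1002,46A,220
1003,145,810
1004,145,95
"""

def busiest_stops(csv_text, threshold):
    """Return the ids of stops whose average daily boardings exceed threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["stop_id"] for row in reader
            if int(row["avg_daily_boardings"]) > threshold]

print(busiest_stops(SAMPLE_CSV, 500))  # ['1001', '1003']
```

The point is less the analysis itself than that anyone – a citizen, a student, a start-up – can perform it once the data is open.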

(I was delighted this year to join Innovation Birmingham as a non-Executive Director in addition to my role with IBM. Technology incubators – particularly those, like Innovation Birmingham and Sunderland Software City, that are located in city centres – are playing an increasingly important role in making the support of city institutions and major technology corporations available to local communities of entrepreneurs and technology activists)

In a digital future, the more that city infrastructures and services provide open data interfaces and APIs, the more that citizens, communities and businesses will be able to adapt the city to their own needs. This is the modern equivalent of the grid system that Jane Jacobs promoted as the most adaptable urban form. A grid structure is the basis of Edinburgh’s “New Town”, often regarded as a masterpiece of urban planning that has proved adaptable and successful through the economic and social changes of the past 250 years, and is also the starting point for Kelvin Campbell’s work.

But open data interfaces and APIs will only be widely exploitable if they conform to common standards. In order to make it possible to do something as simple as changing a lightbulb, we rely on open standards for the levels of voltage and power from our electricity supply; the physical dimensions of the socket and bulb and the characteristics of their fastenings; specifications of the bulb’s light and heat output; and the tolerance of the bulb and the fitting for the levels of moisture found in bathrooms and kitchens. Cities are much more complicated than lightbulbs; and many more standards will be required in order for us to connect to and re-configure their systems easily and reliably.

Open standards are also an important tool in avoiding city systems becoming “locked-in” to any particular supplier. By specifying common characteristics that all systems are required to demonstrate, it becomes more straightforward to exchange one supplier’s implementation for another.

Some standards that Smarter City infrastructures can use are already in place – for example, Web services and REST that specify the general ways in which computer systems interact, and the Common Alerting Protocol which is more specific to interactions between systems that monitor and control the physical world. But many others will need to be invented and encouraged to spread. The City Protocol Society is one organisation seeking to develop those new standards; and the British Standards Institute recently published the first set of national standards for Smarter Cities in the UK, including a standard for the interoperability of data between Smart City systems.
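
As an illustration of what such a standard looks like in practice, the sketch below constructs a minimal Common Alerting Protocol (CAP 1.2) message using only Python’s standard library. The field values are invented, and a real alert requires further elements (such as an affected area); the OASIS CAP specification is the authority here.

```python
import xml.etree.ElementTree as ET

# Namespace defined by the OASIS CAP 1.2 standard.
CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"
ET.register_namespace("", CAP_NS)

def build_alert(identifier, sender, sent, event):
    """Build a minimal, illustrative CAP 1.2 alert as an XML string."""
    alert = ET.Element(f"{{{CAP_NS}}}alert")
    for tag, text in [("identifier", identifier), ("sender", sender),
                      ("sent", sent), ("status", "Actual"),
                      ("msgType", "Alert"), ("scope", "Public")]:
        ET.SubElement(alert, f"{{{CAP_NS}}}{tag}").text = text
    info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
    ET.SubElement(info, f"{{{CAP_NS}}}event").text = event
    ET.SubElement(info, f"{{{CAP_NS}}}urgency").text = "Expected"
    ET.SubElement(info, f"{{{CAP_NS}}}severity").text = "Moderate"
    return ET.tostring(alert, encoding="unicode")

# All names and values here are made up for the example.
xml_text = build_alert("flood-2014-001", "floods@example-city.gov",
                       "2014-02-10T09:00:00+00:00", "River level rising")
print(xml_text)
```

Because the element names and namespace are fixed by the standard, any compliant monitoring system – from any supplier – can parse this message.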

Some open source technologies will also be pivotal; open source (software whose source code is freely available to anyone, and which is usually written by unpaid volunteers) is not the same as open standards (independently governed conventions that define the way that technology from any provider behaves). But some open source technologies are so widely used to operate the internet infrastructures that we have become accustomed to – the “LAMP” stack of operating system, web server, database and web programming language, for example – that they are “de facto” standards that convey some of the benefits of wide usability and interoperability of open standards. For example, IBM recently donated MQTT, a protocol for exchanging information between small devices such as sensors and actuators in Smart City systems, to the open source community; it is becoming increasingly widely adopted as a consequence.
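
As a sketch of how MQTT might carry Smart City sensor data: the topic scheme and payload fields below are assumptions made for illustration – MQTT itself standardises the transport, not the message content. Publishing the result would take a single call with a client library such as Eclipse Paho.

```python
import json
import time

def make_reading(city, sensor_type, sensor_id, value, unit):
    """Compose an MQTT topic and JSON payload for a sensor reading.

    The city/sensors/type/id topic convention and the payload fields are
    invented for this example; they are not part of the MQTT standard.
    """
    topic = f"{city}/sensors/{sensor_type}/{sensor_id}"
    payload = json.dumps({"value": value, "unit": unit,
                          "timestamp": int(time.time())})
    return topic, payload

topic, payload = make_reading("birmingham", "air-quality", "aq-17", 42.5, "ug/m3")
# With a client library such as Eclipse Paho, publishing would then be:
#   client.publish(topic, payload)
print(topic)  # birmingham/sensors/air-quality/aq-17
```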

Once again, local authorities can contribute to the adoption of open standards through planning frameworks and procurement practices:

Principle 7: Any new development should demonstrate that all reasonable steps have been taken to ensure that information from its technology systems can be made openly available without additional expenditure. Whether or not information is actually available will be dependent on commercial and legal agreement, but it should not be additionally subject to unreasonable expenditure. And where there is no compelling commercial or legal reason to keep data closed, it should actually be made open.

Principle 8: The information systems of any new development should conform to the best available current standards for interoperability between IT systems in general; and for interoperability in the built environment, physical infrastructures and Smarter Cities specifically.

(The town plan for Edinburgh’s New Town, clearly showing the grid structure that gives rise to the adaptability it has famously demonstrated over the past 250 years. Image from the JR James archive)

Finally, design skills will be crucial both to creating interfaces to city infrastructures that are truly useful and that encourage innovation; and to creating innovations that exploit them and are in turn useful to citizens.

At the technical level, there is already a rich corpus of best practice in the design of interfaces to technology systems and in the architecture of technology infrastructures that provide them.

But the creativity that imagines new ways to use these capabilities in business and in community initiatives will also be crucial. The new academic discipline of “Service Science” describes how designers can use technology to create new value in local contexts; and treats services such as open data and APIs as “affordances” – capabilities of infrastructure that can be adapted to the needs of an individual. In the creative industries, “design thinkers” apply their imagination and skills to similar subjects.

Step 5: Provide common services

At the 3rd EU Summit on Future Internet, Juanjo Hierro, Chief Architect for the FI-WARE “future internet platform” project, identified the specific tools that local innovators need in order to exploit city information infrastructures. They include real-time access to information from physical city infrastructures; tools for analysing “big data“; and access to technologies to ensure privacy and trust.

The Dublinked information sharing partnership is already putting some of these ideas into practice. It provides assistance to innovators in using, analysing and visualising data; and now makes available real-time data showing the location and movements of buses in the city. The partnership is based on specific governance processes that protect data privacy and manage the risk associated with sharing data.
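
As a small example of the kind of analysis that real-time bus location data enables, the sketch below estimates the distance a vehicle has travelled from a sequence of GPS fixes. The coordinates are invented for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def route_distance_km(fixes):
    """Sum the leg distances over a list of (lat, lon) fixes."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# Invented fixes, roughly in central Dublin.
fixes = [(53.3498, -6.2603), (53.3520, -6.2650), (53.3550, -6.2700)]
print(round(route_distance_km(fixes), 2))
```

Combined with timestamps, the same calculation yields average speeds, delays and congestion estimates – the raw material for the information-based businesses described above.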

As we continue to engage with communities of innovators in cities, we will discover further requirements of this sort. Imperial College’s “Digital Cities Exchange” research programme is investigating the specific digital services that could be provided as enabling infrastructure to support innovation and economic growth in cities, for example. And the British Standards Institute’s Smart Cities programme includes work on standards that will enable small businesses to benefit from Smart City infrastructure.

Local authorities can adapt planning frameworks to encourage the provision of these services:

Principle 9: New developments should demonstrate that they have considered the commercial viability of providing the digital civic infrastructure services recommended by credible research sources.

Step 6: Establish governance of the information economy

From the exponential growth in digital information we’ve seen in recent years, to the emergence of digital currencies such as Bitcoin, to the disruption of traditional industries by digital technology; it’s clear that we are experiencing an “information revolution” just as significant as the “industrial revolution” of the 18th and 19th centuries. We often refer to the resulting changes to business and society as the development of an “information economy“.

But can we speak with confidence of an information economy when the basis for establishing the ownership and value of its fundamental resource – digital information – is not properly established?

(Our gestures when using smartphones may be directed towards the phones, or the people we are communicating with through them; but how are they interpreted by the people around us? “Oh, yeah? Well, if you point your smartphone at me, I’m gonna point my smartphone at you!” by Ed Yourdon)

A great deal of law and regulation already applies to information, of course – such as the European Union’s data privacy legislation. But practice in this area is far less established than the laws governing the ownership of physical and intellectual property and the behaviour of the financial system that underlie the rest of the economy. This is evident in the repeated controversies concerning the use of personal information by social media businesses, consumer loyalty schemes, healthcare providers and telecommunications companies.

The privacy, security and ownership of information, especially personal information, are perhaps the greatest challenges of the digital age. But that is also a reflection of their importance to all aspects of our lives. Jane Jacobs’ description of urban systems in terms of human and community behaviour was based on those concepts, and is still regarded as the basis of our understanding of cities. New technologies for creating and using information are developing so rapidly that it is not only laws specifically concerning them that are failing to keep up with progress; laws concerning the other aspects of city systems that technology is transforming are failing to adapt quickly enough too.

A start might be to adapt city planning regulations to reflect and enforce the importance of the personal information that will be increasingly accessed, created and manipulated by city systems:

Principle 21: Any information system in a city development should provide a clear policy for the use of personal information. Any use of that information should be with the consent of the individual.

The triumph of the commons

I wrote last week that Smarter Cities should be a “middle-out” economic investment – in other words, an investment in common interests – and compared them to the Economist’s report on the efforts involved in distributing the benefits of the industrial revolution to society at large rather than solely to business owners and the professional classes.

One of the major drivers for the current level of interest in Smarter Cities and technology is the need for us to adapt to a more sustainable way of living in the face of rising global populations and finite resources. At large scale, the resources of the world are common; and at local scale, the resources of cities are common too.

For four decades, it has been widely assumed that those with access to common resources will exploit them for short term gain at the expense of long term sustainability – this is the “tragedy of the commons” first described by the ecologist Garrett Hardin. But in 2009, Elinor Ostrom won the Nobel Prize in Economics by demonstrating that the “tragedy” could be avoided, and that a community could manage and use shared resources in a way that was sustainable in the long-term.

Ostrom’s conceptual framework for managing common resources successfully is a set of criteria for designing “institutions” that consist of people, processes, resources and behaviours. These need not necessarily be formal political or commercial institutions; they can also be social structures. It is interesting to note that some of those criteria – for example, the need for mechanisms of conflict resolution that are local, public, and accessible to all the members of a community – are reflected in the development over the last decade of effective business models for carrying out peer-to-peer exchanges using social media, supported by technologies such as reputation systems.

Of course, there are many people and communities who have championed and practised the common ownership of resources regardless of the supposed “tragedy” – not least those involved in the Transition movement founded by Rob Hopkins, which has developed a rich understanding of how to successfully change communities for the better using good ideas; or the translational leaders described by Andrew Zolli. But Elinor Ostrom’s ideas are particularly interesting because they could help us to link the design, engineering and governance of Smarter Cities to the achievement of sustainable economic and social objectives based on the behaviour of citizens, communities and businesses.

Combined with an understanding of the stories of people who have improved their lives and communities using technology, I hope that the work of Kelvin Campbell, Rob Hopkins, Andrew Zolli, Elinor Ostrom and many others can inspire technologists, urban designers, architects and city leaders to develop future cities that fully exploit modern technology to be efficient, resilient and sustainable; but that are also the best places to live and work that we can imagine, or that we would wish for our children.

Cities created by people like that really would be Smart.

Information and choice: nine reasons our future is in the balance

(The Bandra pedestrian skywalk in Mumbai, photo taken from the Collaborative Research Initiatives Trust‘s study of Mumbai, “Being Nicely Messy“, produced for the 2012 Audi Urban Futures Awards)

The 19th and 20th centuries saw the flowering and maturation of the Industrial Revolution and the creation of the modern world. Standards of living worldwide increased dramatically as a consequence – though so did inequality.

The 21st century is already proving to be different. We are reaching the limits of supply of the natural resources and cheap energy that supported the last two centuries of development; and are starting to widely exploit the most powerful man-made resource in history: digital information.

Our current situation isn’t simply an evolution of the trends of the previous two centuries; nine “tipping points” in economics, society, technology and the environment indicate that our future will be fundamentally different to the past, not just different by degree.

Three of those tipping points represent changes that are happening as the ultimate consequences of the Industrial Revolution and the economic globalisation and population growth it created; three of them are the reasons I think it’s accurate to characterise the changes we see today as an Information Revolution; and the remaining three represent challenges for us to face in the future.

The difficulty faced in addressing those challenges internationally through global governance institutions is illustrated by the current status of negotiations over world trade and climate change; but our ability to respond to them is not limited to national and international governments. It is in the hands of businesses, communities and each of us as individuals as new business models emerge.

The structure of the economy is changing

In 2012, the Collaborative Research Initiatives Trust were commissioned by the Audi Urban Futures Awards to develop a vision for the future of work and life in Mumbai. In the introduction to their report, “Being Nicely Messy“, they cite a set of statistics describing Mumbai’s development that nicely illustrate the changing nature of the city:

“While the population in Mumbai grew by 25% between 1991 and 2010, the number of people travelling by trains during the same years increased by 66% and the number of vehicles grew by 181%. At the same time, the number of enterprises in the city increased by 56%.

All of this indicates a restructuring of the economy, where the nature of work and movement has changed.”

(From “Being Nicely Messy“, 2011, Collaborative Research Initiatives Trust)

Following CRIT’s inspiration, over the last year I’ve been struck by several similar but more widely applicable sets of data that, taken together, indicate that a similar restructuring is taking place across the world.


(Professor Robert Gordon’s analysis of historic growth in productivity, as discussed by the famous investor Jeremy Grantham, showing that the unusual growth experienced through the Industrial Revolution may have come to an end. Source: Gordon, Robert J., “Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds,” NBER Working Paper 18315, August 2012)

The twilight of the Industrial Revolution

Tipping point 1: the slowing of economic growth

According to the respected investor Jeremy Grantham, economic growth has slowed systemically and permanently. He states that: “Resource costs have been rising, conservatively, at 7% a year since 2000 … in a world growing at under 4% and [in the] developed world at under 1.5%”

Grantham’s analysis is that the rapid economic growth of the last century was a historical anomaly driven by the productivity improvements made possible through the Industrial Revolution; and before that revolution reached such a scale as to create global competition for resources and energy. Property and technology bubbles extended that growth into the early 21st Century, but it has now reduced to much more modest levels where Grantham expects it to remain. The economist Tyler Cowen came to similar conclusions in his 2011 book, “The Great Stagnation“.

This analysis was supported by the property developers I met at a recent conference in Birmingham. They told me that indicators in their market today are the most positive they have been since the start of the 1980s property boom; but none of them expect that boom to be repeated. The market is far more cautious concerning medium and long-term prospects for growth.

We have passed permanently into an era of more modest economic growth than we have become accustomed to; or at the very least into an era in which we will need to restructure the relationship between economic growth and the consumption of resources and energy – in ways we have not yet determined – before higher growth returns. We have passed a tipping point; the world has changed.


(Growth in the world’s urban population as reported by “World Urbanization Prospects”, 2007 Revision, Department of Economic and Social Affairs, United Nations)

Tipping point 2: urbanisation and the industrialisation of food supply 

As has been widely quoted in recent years, more than half the world’s population has lived in cities since 2010 according to the United Nations Department of Economic and Social Affairs. That percentage is expected to increase to 70% by 2050.

The implications of those facts concern not just where we live, but the nature of the economy. Cities became possible when we industrialised the production and distribution of food, rather than providing it for ourselves on a subsistence basis; or producing it in collaboration with our neighbours. For this reason, many developing nations undergoing urbanisation and industrialisation – such as Tanzania, Turkmenistan and Tajikistan – still formally define cities by criteria including “the predominance of non-agricultural workers and their families” (as referenced in the United Nations’ “World Urbanization Prospects” 2007 Revision).

So for the first time more than half the world’s population now lives in cities; and is provided with food by industrial supply chains rather than by families or neighbours. We have passed a tipping point; the world has changed.

(Estimated damage in $US billion caused by natural disasters between 1900 and 2012 as reported by EM-DAT)


Tipping point 3: the frequency and impact of extreme weather conditions

As our climate changes, we are experiencing more unusual and extreme weather. In addition to the recent devastating impact of Typhoon Haiyan in the Philippines, cities everywhere are regularly experiencing the effects to a more modest degree.

One city in the UK told me recently that inside the last 12 months they have dealt with such an increase in incidents of flooding severe enough to require coordinated cross-city action that it has become an urgent priority for local Councillors. We are working with other cities in Europe to understand the effect of rising average flood levels – historic building construction codes mean that a rise in average levels of a metre or more could put significant numbers of buildings at risk of collapse. The current prediction from the United Nations Intergovernmental Panel on Climate Change is that sea levels will rise somewhere between 26cm and 82cm by the end of this century – close enough for concern.

The EM-DAT International Disasters Database has calculated the financial impact of natural disasters over the past century. They have shown that in recent years the increased occurrence of unusual and extreme weather combined with the increasing concentration of populations and economic activity in cities has caused this impact to rise at previously unprecedented rates.

The investment markets have identified and responded to this trend. In their recent report “Global Investor Survey on Climate Change”, the Global Investor Coalition on Climate Change reported this year that 53% of fund managers collectively responsible for $14 trillion of assets indicated that they had divested stocks, or chosen not to invest in stocks, due to concerns over the impact of climate change on the businesses concerned. We have passed a tipping point; the world has changed.


(The prediction of exponential growth in digital information from EMC’s Digital Universe report)

The dawn of the Information Revolution

Tipping point 4: exponential growth in the world’s most powerful man-made resource, digital information

Information has always been crucial to our world. Our use of language to share it is arguably a defining characteristic of what it means to be human; it is the basis of monetary systems for mediating the exchange of goods and services; and it is a core component of quantum mechanics, one of the most fundamental physical theories that describes how our universe behaves.

But the emergence of broadband and mobile connectivity over the last decade has utterly transformed the quantity of recorded information in the world and our ability to exploit it.

EMC’s Digital Universe report shows that between 2010 and 2012 more information was recorded than in all of previous human history. They predict that the quantity of information recorded will double every 2 years, meaning that at any point in the next two decades the same assertion will hold true: “more information was recorded in the last two years than in all of previous history”. In 2011 McKinsey described the “information economy” that has emerged to exploit this information as a fundamental shift in the basis of the economy as a whole.
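That property follows directly from the arithmetic of doubling: each new two-year tranche of information is larger than the sum of everything created before it, because 2ⁿ exceeds 2⁰ + 2¹ + … + 2ⁿ⁻¹ = 2ⁿ − 1. A minimal sketch (with hypothetical volumes, not EMC’s actual figures) makes the point:

```python
# Sketch of the doubling claim: if the volume of information created
# per two-year period doubles each time, the latest period always
# exceeds the cumulative total of all prior periods.
# (Illustrative numbers only, not EMC's measurements.)

def volumes(periods, start=1.0):
    """Information created per two-year period, doubling each period."""
    return [start * 2**n for n in range(periods)]

created = volumes(10)
for n in range(1, len(created)):
    latest = created[n]
    all_prior = sum(created[:n])
    assert latest > all_prior  # 2**n > 2**n - 1
```

So as long as the doubling continues, the headline claim renews itself every two years by construction.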

Not only that, but information has literally been turned into money. The virtual currency Bitcoin is based not on the value of a raw material such as gold whose availability is physically limited; but on the outcomes of extremely complex cryptographic calculations whose performance is limited by the speed at which computers can process information. The value of Bitcoins is currently rising incredibly quickly – from $20 to $1,000 since January – although it is also subject to significant fluctuations.
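Those “extremely complex cryptographic calculations” are, at heart, a brute-force search for an input whose hash meets a difficulty condition. The sketch below loosely illustrates that proof-of-work principle; it is not Bitcoin’s actual algorithm (which applies SHA-256 twice to an 80-byte block header and compares against a 256-bit target), and the function names and leading-zeros difficulty scheme are illustrative simplifications:

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 digest starts with
    `difficulty` zero hex digits -- a toy proof-of-work puzzle."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine(b"example-block", difficulty=4)
digest = hashlib.sha256(b"example-block" + str(nonce).encode()).hexdigest()
print(nonce, digest)  # digest begins with "0000"
```

The work is asymmetric: finding the nonce takes many thousands of hash attempts on average, but anyone can verify the result with a single hash – which is what lets the computation stand in for scarce physical material.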

Ultimately, Bitcoin itself may succeed or fail – and it is certainly used in some unethical and dangerous transactions as well as by ordinary people and businesses. But its model has demonstrated in principle that a decentralised, non-national, information-based currency can operate successfully, as my colleague Richard Brown recently explained.

Digital information is the most valuable man-made resource ever invented; it began a period of exponential growth just three years ago and has literally been turned into money. We have passed a tipping point; the world has changed.

Tipping point 5: the disappearing boundary between humans, information and the physical world

In the 1990s the internet began to change the world despite the fact that it could only be accessed by using an expensive, heavy personal computer; a slow and inconvenient telephone modem; and the QWERTY keyboard that was designed in the 19th Century to prevent typists from typing faster than the levers in mechanical typewriters could move.

Three years ago, my then 2-year-old son taught himself how to use a touchscreen tablet to watch cartoons from around the world before he could read or write. Two years ago, scientists at the University of California at Berkeley used a Magnetic Resonance Imaging facility to capture images from the thoughts of a person watching a film. A less sensitive mind-reading technology is already available as a headset from Emotiv, which my colleagues in IBM’s Emerging Technologies team have used to help a paralysed person communicate by thinking directional instructions to a computer.

Earlier this year, a paralysed woman controlled a robotic arm by thought; and prosthetic limbs, a working gun and living biological structures such as muscle fibre and skin are just some of the things that can be 3D printed on demand from raw materials and digital designs.

Our thoughts can control information in computer systems; and information in those systems can quite literally shape the world around us. The boundaries between our minds, information and the physical world are disappearing. We have passed a tipping point; the world has changed.

(A personalised prosthetic limb constructed using 3D printing technology. Photo by kerolic)

Tipping point 6: the miniaturisation of industry

The emergence of the internet as a platform for enabling sales, marketing and logistics over the last decade has enabled small and micro-businesses to reach markets across the world that were previously accessible only to much larger organisations with international sales and distribution networks.

More recently, the emergence and maturation of technologies such as 3D printing, open-source manufacturing and small-scale energy generation are enabling small businesses and community initiatives to succeed in new sectors by reducing the scale at which it is economically viable to carry out what were previously industrial activities – a trend recently labelled by the Economist magazine as the “Third Industrial Revolution“. The continuing development of social media and pervasive technology enable them to rapidly form and adapt supply and exchange networks with other small-scale producers and consumers.

Estimates of the size of the resulting “sharing economy“, defined by Wikipedia as “economic and social systems that enable shared access to goods, services, data and talent“, vary widely, but are certainly significant. The UK Economist magazine reports one estimate that it is a $26 billion economy already, whilst 2 Degrees Network report that just one aspect of it – small-scale energy generation – could save UK businesses £33 billion annually by 2030. Air B’n’B – a peer-to-peer accommodation service – reported recently that they had contributed $632 million in value to New York’s economy in 2012 by enabling nearly 5,000 residents to earn an average of $7,500 by renting their spare rooms to travellers; and as a consequence of those travellers additionally spending an average of $880 in the city during their stay. Overall, there has been a significant rise in self-employment and “micro-entrepreneurial” enterprises over the last few years, which now account for 14% of the US economy.

Organisations participating in the sharing economy exhibit a range of motivations and ethics – some are aggressively commercial, whilst others are “social enterprises” with a commitment to reinvest profits in social growth. The social enterprise sector, comprised of mutuals, co-operatives, employee-owned businesses and enterprises who submit to “triple bottom line” accounting of financial, social and environmental capital, is about 15% of the value of most economies, and has been growing and creating jobs faster than traditional business since the 2008 crash.

In the first decade of the 21st Century, mobile and internet technologies caused a convergence between the technology, communications and media sectors of the economy. In this decade, we will see far more widespread disruptions and convergences in the technology, manufacturing, creative arts, healthcare and utilities industries; and enormous growth in the number of small and social enterprises creating innovative business models that cut across them. We have passed a tipping point; the world has changed.

Rebalancing the world

Tipping point 7: how we respond to climate change and resource constraints

There is now agreement amongst scientists, expressed most conclusively by the United Nations Intergovernmental Panel on Climate Change this year, that the world is undergoing a period of overall warming resulting from the impact of human activity. But there is not yet a consensus on how we should respond.

Views vary from taking immediate, sweeping measures to drastically cut carbon and greenhouse gas emissions, to the belief that we should accept climate change as inevitable and focus investment instead on adapting to it, as suggested by the “Skeptical Environmentalist” Bjørn Lomborg and the conservative think-tank the American Enterprise Institute. As a result of this divergence of opinion, and of the challenge of negotiating between the interests of countries, communities and businesses across the world, the agreement reached by last year’s climate change negotiations in Doha was generally regarded as relatively weak.

Professor Chris Rogers of the University of Birmingham and his colleagues in the Urban Futures initiative have assessed over 450 proposed future scenarios and identified four archetypes (described in his presentation to Base Cities Birmingham) against which they assess the cost and effectiveness of environmental and climate interventions. The “Fortress World” scenario is divided between an authoritarian elite who control the world’s resources from their protected enclaves and a wider population living in poverty. In “Market Forces”, free markets encourage materialist consumerism to wholly override social and environmental values; whilst in “Policy Reform” a combination of legislation and citizen behaviour change achieve a balanced outcome. And in the “New Sustainability Paradigm” the pursuit of wealth gives way to a widespread aspiration to achieve social equality and environmental sustainability. (Chris is optimistic enough that his team dismissed another scenario, “Breakdown”, as unrealistic).

Decisions that are taken today affect the degree to which our world will evolve to resemble those scenarios. As the impact of weather and competition for resources affect the stability of supply of energy and food, many cities are responding to the relative lack of national and international action by taking steps themselves. Some businesses are also building strategies for long-term success and profit growth around sustainability; in part because investing in a resilient world is a good basis for a resilient business, and in part because they believe that a genuine commitment to sustainability will appeal to consumers. Unilever demonstrated that they are following this strategy recently by committing to buy all of their palm oil – of which they consume one third of the world’s supply – from traceable sources by the end of 2014.

At some point, we will all – individuals, businesses, communities, governments – be forced to change our behaviour to account for climate change and the limits of resource availability: as the prices of raw materials, food and energy rise; and as we are more and more directly affected by the consequences of a changing environment.

The questions are: to what extent have these challenges become urgent to us already; and how and when will we respond?

(“Makers” at the Old Print Works in Balsall Heath, Birmingham, sharing the tools, skills and ideas that create successful small businesses)

Tipping point 8: the end of the average career

In “Average is Over“, the economist Tyler Cowen observed that about 60% of the jobs lost during the 2008 recession were in mid-wage occupations; and the UK Economist magazine reported that many jobs lost from professional industries had been replaced in artisan trades and small-scale industry such as food, furniture and design.

Echoing Jeremy Grantham, Cowen further observes that these changes take place within a much longer-term 28% decline in middle-income wages in the US between 1969 and 2009 which has no identifiable single cause. Cowen worries that this is a sign that the economy is beginning to diverge into the authoritarian elite and the impoverished masses of Chris Rogers’ “Fortress World” scenario.

Other evidence points to a more complex picture. Jake Dunagan, Research Director of the Institute for the Future, believes that the widespread availability of digital technology and information is extending democracy and empowerment – just as the printing press and education did in the last millennium as they dramatically increased the extent to which people were informed and able to make themselves heard. Dunagan notes that through our reliance on technology and social media to find and share information, our thoughts and beliefs are already formed by, and having an effect on, society in a way that is fundamentally new.

The miniaturisation of industry (tipping point 6 above) and the disappearance of the boundary between our minds and bodies, information and the physical world (tipping point 5 above) are changing the ways in which resources and value are exchanged and processed out of all recognition. Just imagine how different the world would be if a 3D-printing service such as Shapeways transformed the manufacturing industry as dramatically as iTunes transformed the music industry 10 years ago. Google’s futurologist Thomas Frey recently described 55 “jobs of the future” that he thought might appear as a result.

(Activities comprising the “Informal Economy” and their linkages to the mainstream economy, by Claro Partners)

In both developed and emerging countries, informal, social and micro-businesses are significant elements of the economy, and are growing more quickly than traditional sectors. Claro Partners estimate that the informal economy (in which they include alternative currencies, peer-to-peer businesses, temporary exchange networks and micro-businesses – see diagram, right) is worth $10 trillion worldwide, and that it employs up to 80% of the workforce in emerging markets.

In developed countries, the Industrial Revolution drove a transformation of such activity into a more formal economy – a transformation which may now be in part reversing. In developing nations today, digital technology may make part of that transformation unnecessary. 

To be successful in this changing economy, we will need to change the way we learn, and the way we teach our children. Cowen wrote that “We will move from a society based on the pretense that everyone is given an okay standard of living to a society in which people are expected to fend for themselves much more than they do now”; and expressed a hope that online education offers the potential for cheaper and more widespread access to new skills to enable people to do so. This thinking echoes a finding of the Centre for Cities report “Cities Outlook 1901” that the major factor driving the relative success or failure of UK cities throughout the 20th Century was their ability to provide their populations with the right skills at the right time as technology and industry developed.

The marketer and former Yahoo executive Seth Godin’s polemic “Stop Stealing Dreams” attacked the education system for continuing to prepare learners for stable, traditional careers rather than the collaborative entrepreneurialism that he and other futurists expect to be required. Many educators would assert that their industry is already adapting and will continue to do so – great change is certainly expected as the ability to share information online disrupts an industry that developed historically to share it in classrooms and through books.

Many of the businesses, jobs and careers of 2020, 2050 and 2100 will be unrecognisable or even unimaginable to us today; as are the skills that will be needed to be successful in them. Conversely, many post-industrial cities today are still grappling with challenges created by the loss of jobs in manufacturing, coalmining and shipbuilding industries in the last century.

The question for our future is: will we adapt more successfully this time to the sweeping changes that will surely come to the industries that employ us today?


(“Lives on the Line” by James Cheshire at UCL’s Centre for Advanced Spatial Analysis, showing the variation in life expectancy and correlation to child poverty in London. From Cheshire, J. 2012. Lives on the Line: Mapping Life Expectancy Along the London Tube Network. Environment and Planning A. 44 (7). Doi: 10.1068/a45341)

Tipping point 9: inequality

The benefits of living in cities are distributed extremely unevenly.

The difference in life expectancy of children born into the poorest and wealthiest areas of UK cities today is often as much as 20 years – for boys in Glasgow the difference is 28 years. That’s a deep inequality in the opportunity to live.

There are many causes of that inequality, of course: health, diet, wealth, environmental quality, peace and public safety, for example. All of them are complex, and the issues that arise from them to create inequality – social deprivation and immobility, economic disengagement, social isolation, crime and lawlessness – are notoriously difficult to address.

But a fundamental element of addressing them is choosing to try to do so. That’s a trite observation, but it is nonetheless the case that in many of our activities we do not make that choice – or, more accurately, as individuals, communities and businesses we take choices primarily in our own interests rather than based on their wider impact.

Writing about cities in the 1960s, the urbanist Jane Jacobs observed that:

“Private investment shapes cities, but social ideas (and laws) shape private investment. First comes the image of what we want, then the machinery is adapted to turn out that image. The financial machinery has been adjusted to create anti-city images because, and only because, we as a society thought this would be good for us. If and when we think that lively, diversified city, capable of continual, close-grained improvement and change, is desirable, then we will adjust the financial machinery to get that.”

In many respects, we have not shaped the financial machinery of the world to achieve equality. Nobel Laureate Joseph Stiglitz wrote recently that in fact the financial machinery of the United States and the UK in particular create considerable inequality in those countries; and the Economist magazine reminds us of the enormous investments made into public institutions in the past in order to distribute the benefits of the Industrial Revolution to society at large rather than concentrate them on behalf of business owners and the professional classes – with only partial success.

New legislation in banking has been widely debated and enacted since the 2008 financial crisis – enforcing the separation of commercial and investment banking, for example. But addressing inequality is a much broader challenge than the regulation of banking, and will not only be addressed by legislation. Business models such as social enterprise, cross-city collaborations and the sharing economy are emerging to develop sustainable businesses in industries such as food, energy, transportation and finance, in addition to the contribution made by traditional businesses building sustainability into their strategies.

Whenever we vote, buy something or make a choice in business, we contribute to our overall choice to develop a fairer, more sustainable world in which everyone has a chance to participate. The question is not just whether we will take those choices; but the degree to which their impact on the wider world will be apparent to us so that we can do so in an informed way.

That is a challenge that technology can help with.

(A smartphone alert sent to a commuter in a San Francisco pilot project by IBM Research and Caltrans that provides personalised daily predictions of commuting journey times. The predictions gave commuters the opportunity to take a better-informed choice about their travel to work.)

Data and Choice

Like the printing press, the vote and education, access to data allows us to make more of a difference than we were able to without it.

Niall Firth’s November editorial for the New Scientist magazine describes how citizens of developing nations are using open data to hold their governments to account, from basic information about election candidates to the monitoring of government spending. In the UK, a crowd-sourced analysis of politicians’ expenses claims that had been leaked to the press resulted in resignations, the repayment of improperly claimed expenses, and in the most severe cases, imprisonment.

Unilever are committing to making their supply chain for palm oil traceable precisely because that data is what will enable them to next improve its sustainability; and in Almere, city data and analytics are being used to plan future development of the city in a way that doesn’t cause harmful impacts to existing citizens and residents. Neither initiative would have been possible or affordable without recent improvements in technology.

Data and technology, appropriately applied, give us an unprecedented ability to achieve our long-term objectives by taking better-informed, more forward-looking decisions every day, in the course of our normal work and lives. They tell us more than we could ever previously have known about the impact of those decisions.

That’s why the tipping points I’ve described in this article matter to me. They translate my general awareness that I should “do the right thing” into a specific knowledge that at this point in time, my choices in many aspects of daily work and life contribute to powerful forces that will shape the next century that we share on this planet; and that they could help to tip the balance in all of our favour.

The sharing economy and the future of movement in smart, human-scale cities

("Visionary City" by William Robinson Leigh)

(William Robinson Leigh’s 1908 painting “Visionary City” envisaged future cities constructed from mile-long buildings of hundreds of storeys connected by gas-lit skyways for trams, pedestrians and horse-drawn carriages. A century later we’re starting to realise not only that developments in transport and power technology have eclipsed Leigh’s vision, but that we don’t want to live in cities constructed from buildings on this scale.)

One of the defining tensions throughout the development of cities has been between our desire for quality of life and our need to move ourselves and the things we depend on around.

The former requires space, peace, and safety in which to work, exercise, relax and socialise; the latter requires transport systems which, since the use of horse-drawn transport in medieval cities, have taken up space, created noise and pollution – and are often dangerous. Enrique Penalosa, whose mayorship of Bogota was defined by restricting the use of car transport, often refers to the tens of thousands of children killed by cars on the world’s roads every year and his astonishment that we accept this as the cost of convenient transport.

This tension will intensify rapidly in coming years. Not only are our cities growing larger and denser, but according to the analysis of city systems by Professors Geoffrey West and Louis Bettencourt of the Los Alamos National Laboratory and Professor Ian Robertson’s study of human behaviour, our interactions within them are speeding up and intensifying.

Arguably, over the last 50 years we have designed cities around large-scale buildings and transport structures that have supported – and encouraged – growth in transport and the size of urban economies and populations at the expense of some aspects of quality of life.

Whilst standards of living across the world have improved dramatically in recent decades, inequality has increased to an even greater extent; and many urbanists would agree that the character of some urban environments contributes significantly to that inequality. In response, the recent work of architects such as Jan Gehl and Kelvin Campbell, building on ideas first described by Jane Jacobs in the 1960s, has led to the development of the “human scale cities” movement with the mantra “first life, then space, then buildings”.

The challenge at the heart of this debate, though, is that the more successful we are in enabling human-scale value creation, the more demand we create for transport and movement. And unless we dramatically improve the impact of the systems that support that demand, the cities of the future could be worse, not better, places for us to live and work in.

Human scale technology creates complexity in transport

As digital technology pervades every aspect of our lives, whether in large-scale infrastructures such as road-use charging systems or through the widespread adoption of small-scale consumer technology such as smartphones and social media, we cannot afford to carry out the design of future cities without considering it; nor can we risk deploying it without concern for its effect on the quality of urban life.

Digital technologies do not just make it easier for us to communicate and share information wherever we are: those interactions create new opportunities to meet in person and to exchange goods and services; and so they create new requirements for transport. And as technologies such as 3D printing, open-source manufacturing and small-scale energy generation make it possible to carry out traditionally industrial activities at much smaller scales, some existing bulk movement patterns will be replaced by thousands of smaller, peer-to-peer interactions created by transactions in online marketplaces. We can already see the effects of this trend in the vast growth of traffic delivering goods that are purchased or exchanged online.

Estimates of the size of this “sharing economy“, defined by Wikipedia as “economic and social systems that enable shared access to goods, services, data and talent“, vary widely, but are certainly significant. The Economist reports one estimate that it is already a $26 billion economy, whilst the 2 Degrees Network reports that just one aspect of it – small-scale energy generation – could save UK businesses £33 billion annually by 2030. Airbnb – a peer-to-peer accommodation service – reported recently that it had contributed $632 million in value to New York’s economy in 2012 by enabling nearly 5,000 residents to earn an average of $7,500 by renting their spare rooms to travellers, who additionally spent an average of $880 each in the city during their stay. The emergence of the internet as a platform for the sales, marketing and logistics of small and micro-businesses is partly responsible for a significant rise in self-employment and “micro-entrepreneurial” enterprises over the last few years, which now account for 14% of the US economy.

Digital technology will create not just great growth in our desire to travel and move things, but great complexity in the way we will do so. Today’s transport technologies are not only too inefficient to scale to our future needs; they’re not sophisticated and flexible enough to cope with the complexity and variety of demand.

Many of the future components of transport systems have already been envisaged, and deployed in early schemes: elevated cycleways; conveyor belts for freight; self-driving vehicles and convoys; and underground pneumatic networks for recycling. And to some extent, we have visualised the cities that they will create: Professor Miles Tight, for example, has considered the future living scenarios that might emerge from various evolutions of transport policy and human behavioural choices in the Visions 2030 project.

The task for the Smarter Cities movement should be to extend this thinking to envision the future of cities that are also shaped by emerging trends in digital technology and their effect on the wider economy and social systems. We won’t do that successfully by considering these subjects separately or in the abstract; we need to envision how they will collectively enable us to live and work from the smallest domestic scale to the largest city system.

(Packages from Amazon delivered to Google’s San Francisco office. Photo by moppet65535)

What we’ll do in the home of the future

Rather than purchasing and owning goods such as kitchen utensils, hobby and craft items, toys and simple house and garden equipment, we will create them on-demand using small-scale and open-source manufacturing technology and smart materials. It will even be possible – though not all of us will choose to do so – to manufacture some food in this way.

Conversely, there will still be demand for handmade artisan products including clothing, gifts, jewellery, home decorations, furniture, and food. Many of us will earn a living producing these goods in the home while selling and marketing them locally or through online channels.

So we will leave our home of the future less often to visit shops; but will need not just better transport services to deliver the goods we purchase online to our doorsteps, but also a new utility to deliver the raw materials from which we will manufacture them ourselves; and new transport services to collect the products of our home industries and to deliver supplies to them.

We will produce an increasing amount of energy at home; whether from existing technologies such as solar panels or combined heat and power (CHP) systems; or through new techniques such as bio-energy. The relationships between households, businesses, utilities and transportation will change as we become producers of energy and consumers of waste material.

And whilst remote working means we will continue to be less likely to travel to and from the same office each day, the increasing pace of economic activity means that we will be more likely to need to travel to many new destinations as it becomes necessary to meet face to face with the great variety of customers, suppliers, co-workers and business partners with whom online technologies connect us.

What we’ll do in the neighbourhoods of the future

As we increasingly work remotely from within our homes or by travelling far away from them, fewer of us work in jobs and for businesses that are physically located within the communities in which we live; and some of the economic ties that have bound those communities in the past have weakened. But most of us still feel strong ties to the places we live in, whether they are historical, created by the character of our homes or their surrounding environment, or by the culture and people around us. These ties create a shared incentive to invest in our community.

Perhaps the greatest potential of social media that we’re only beginning to exploit is its power to create more vibrant, sustainable and resilient local communities through the “sharing economy”.

The motivations and ethics of organisations participating in the sharing economy vary widely – some are aggressively commercial, whilst others are “social enterprises” with a commitment to reinvest profits in social growth. The social enterprise sector, comprised of mutuals, co-operatives, employee-owned businesses and enterprises who submit to “triple bottom line” accounting of financial, social and environmental capital, is about 15% of the value of most economies, and has been growing and creating jobs faster than traditional business since the 2008 crash. There is enormous potential for cities to achieve their “Smarter” objectives for sustainable, equitably distributed economic growth through contributions from social enterprises using technology to implement sharing economy business models within their region.

Sharing economy models which enable transactions between participants within a walkable or cyclable area can be a particularly efficient mechanism for collaboration, as the related transport can be carried out using human power. Joan Clos, Executive Director of UN-Habitat, has asserted that cities will only become sustainable when they are built at a sufficient population density that a majority of interactions within them can be carried out in this way (as reported informally by Tim Stonor from Dr. Clos’s remarks at the “Urban Planning for City Leaders” conference at the Crystal, London in 2012).

The Community Lovers’ Guide has published stories from across Europe of people who have collaborated to make the places that they share better, often using technology; and schemes such as Casserole Club and Land Share are linking the supply and demand of land, food, gardening and cooking skills within local communities, helping neighbours to help each other. At local, national and international levels, sharing economy ideas are creating previously unrealised social and economic value, including access to employment opportunities that replace some of those traditional professions that are shrinking as the technology used by industrial business changes.

Revenue-earning businesses are a necessary component of vibrant communities, at a local neighbourhood scale as well as city-wide. At the Academy of Urbanism Congress in Bradford this year, Michael Ward, Chair of the Centre for Local Economic Strategies, asserted that “the key task facing civic leaders in the 21st Century is this: how, in a period of profound and continuing economic changes, will our citizens earn a living and prosper?”

(“Makers” at the Old Print Works in Balsall Heath, Birmingham, sharing the tools, skills and ideas that create successful small businesses)

So whilst we work remotely from direct colleagues, we may choose to work in a collaborative workspace with near neighbours, with whom we can exchange ideas, make new contacts and start new enterprises and ventures. As the “maker” economy emerges from the development of sophisticated, small-scale manufacturing, and the resurgence in interest in artisan products, community projects such as the Old Print Works in Balsall Heath, Birmingham are emerging in low-cost ex-industrial space as people come together to share the tools and expertise required to make things and run businesses.

We will also manage and share our use of resources such as energy and water at neighbourhood scale. The scale and economics of movement of the raw materials for bio-energy generation, for example, currently dictate that neighbourhood-scale generation facilities – as opposed to city-wide, regional or domestic scale – are the most efficient. Aston University’s European Bio-Energy Research Institute is demonstrating these principles in the Aston district of Birmingham. And schemes from the sustainability pilot in Dubuque, Iowa to the Energy Sharing Co-operative in the West Midlands of the UK and the Chale community project on the Isle of Wight have shown that community-scale schemes can create shared incentives to use resources more efficiently.

One traditional centre of urban communities, the retail high street or main street, has fared badly in recent times. The shift to e-commerce, supermarkets and out-of-town shopping parks has led to many of them losing footfall and trade, and seeing “payday lenders”, betting shops and charity shops take the place of traditional retailers.

High streets need to be freed from the planning, policy and tax restrictions that are preventing their recovery. The retail-dominated high street of the 20th century emerged from a particular and temporary period in the evolution of the private car as the predominant form of transport supporting household-scale economic transactions. Developments in digital and transport technology, as well as in the economy and society, have made it non-viable in its current form; but legislation that prevents change in the use of high street property, and that keeps business taxes artificially high, is preventing high streets from adapting in order to benefit from technology and the opportunities of the sharing economy.

Business Improvement Districts, already emerging in the UK and US to replace some local authority services, offer one way forward. They need to be given more freedom to allow the districts they manage to develop as best meets the economic and social needs of their area – looking to the future, not the past. And they need to become bolder: to invest in the same advanced technology to maximise footfall and spend from their customers as shopping malls do on behalf of their tenants, as recommended by a recent report to UK Government on the future of the high street.

The future high street will not be a street of clothes shops, bookshops and banks: some of those will still exist, but the high street will also be a place for collaborative workers; for makers; for sharing and exchanging; for local food produce and artisan goods; for socialising; and for starting new businesses. We will use social media to share our time and our resources in the sharing economy; and will meet on the high street when those transactions require the exchange of physical goods and services. We will walk and cycle to local shops and transport centres to collect and deliver packages for ourselves, or for our neighbours.

The future of work, life and transport at city-scale

Whilst there’s no universally agreed definition, an urban area is generally understood to be a continuously built-up area with a total population of between 2,000 and 40 million people; living at a density of around 1,000 per square kilometre; and employed primarily in non-agricultural activities (the appendices to the 2007 revision of the UN World Urbanisation Prospects summarise such criteria from around the world; 38.7 million is the population estimated for the world’s largest city, Tokyo, in 2025 by the UN World Urbanisation Prospects 2011).
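Those rough criteria can be expressed as a simple check. This is a sketch only, using the population and density thresholds quoted above; real national definitions vary widely.

```python
# Illustrative check against the rough criteria quoted above: a population
# of at least 2,000, at a density of around 1,000 people per square
# kilometre. Real national definitions vary widely.
def looks_urban(population: int, area_km2: float) -> bool:
    density = population / area_km2
    return population >= 2_000 and density >= 1_000

print(looks_urban(1_000_000, 600))  # Birmingham-like figures: True
print(looks_urban(1_500, 10))       # a small village: False
```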

(An analysis based on GPS data from mobile phones of end-to-end journeys undertaken by users of Abidjan’s bus services. By comparing existing bus routes to end-to-end journey requirements, the analysis identified four new bus routes and led to changes in many others. As a result, 22 routes now show increased ridership, and city-wide journey times have decreased by 10%.)

That is living at an industrial scale. The sharing economy may be a tremendously powerful force, but – at least for the foreseeable future – it will not scale to completely replace the supply chains that support the needs of such enormous and dense populations.

Take food, for example. One hectare of highly fertile, intensively farmed land can feed 10 people. Birmingham, my home city, has an area of 60,000 hectares of relatively infertile land, most of which is not available for farming at all; and a population of around 1 million. Those numbers don’t add up to food self-sufficiency; and Birmingham is a very low-density city – between one-half and one-tenth as dense as the growing megacities of Asia and South America.
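The back-of-envelope arithmetic behind that claim is simple enough to sketch, using the rough figures quoted above (this is an illustration, not a rigorous model):

```python
# Back-of-envelope check of urban food self-sufficiency, using the
# figures quoted in the text (illustrative, not a rigorous model).
PEOPLE_FED_PER_HECTARE = 10  # highly fertile, intensively farmed land


def max_population_fed(area_hectares: float, usable_fraction: float = 1.0) -> float:
    """Upper bound on the population an area could feed if the given
    fraction of it were highly fertile, intensively farmed land."""
    return area_hectares * usable_fraction * PEOPLE_FED_PER_HECTARE


# Birmingham: ~60,000 hectares, ~1,000,000 people. Even the impossible
# best case -- every hectare farmed intensively -- falls well short.
best_case = max_population_fed(60_000)
print(best_case)              # 600000.0
print(best_case < 1_000_000)  # True
```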

Until techniques such as vertical farming and laboratory-grown food become both technically and economically viable, and culturally acceptable – if they ever do – cities will not feed themselves. And these techniques hardly represent locally-grown food exchanged between peers – they are highly technical and likely to operate initially at industrial scale. Sharing economy businesses such as Casserole Club, Kitchen Surfing, and Big Barn will change the way we distribute, process and prepare food within cities, but many of the raw materials will continue to be grown and delivered to cities through the existing industrial-scale distribution networks that import them from agricultural regions.

We are drawn to cities for the opportunities they offer: for work, for entertainment, and to socialise. As rapidly as technology has improved our ability to carry out all of those activities online, the world’s population is still increasingly moving to cities. In many ways, technology augments the way we carry out those activities in the real world and in cities, rather than replacing them with online equivalents.

Technology has already made cultural events in the real world more frequent, accessible and varied. Before digital technology, the live music industry depended on mass-marketing and mass-appeal to create huge stadium-filling tours for a relatively small number of professional musicians; and local circuits were dominated by the less successful but similar-sounding acts for which sufficiently large audiences could be reached using the media of the time. I attempted as an amateur musician in the pre-internet 1990s to find a paying audience for the niche music I enjoyed making: I was not successful. Today, social media can be used to identify and aggregate demand to make possible a variety of events and artforms that would never previously have reached an audience. Culture in the real world is everywhere, all the time, as a result, and life is the richer for it. We discover much of it online, but often experience it in the real world.

(Birmingham’s annual “Zombie Walk” which uses social media to engage volunteers raising money for charity. Photo by Clare Lovell).

“Flashmobs” use smartphones and social media to spontaneously bring large numbers of people together in urban spaces to celebrate, socialise or protest; and while we will play and tell stories in immersive 3D worlds in the future – whether we call them movies, interactive fiction or “massive multi-player online role-playing games” – we’ll increasingly do so in the physical world too, in “mixed reality” games. Technologies such as Google Glass, cognitive computing and Brain/Computer Interfaces will accelerate these trends as they remove the barrier between the physical world and information systems.

We will continue to come to city centres to experience those things that they uniquely combine: the joy and excitement of being amongst large numbers of people; the opportunity to share ideas; access to leading-edge technologies that are only economically feasible at city-scale; great architecture, culture and events; the opportunity to shop, eat, drink and be entertained with friends. All of these things are possible anywhere; but it is only in cities that they exist together, all the time.

The challenge for city-scale living will be to support the growing need to transport goods and people into, out of and around urban areas in a way that is efficient and productive, and that minimises impact on the liveability of the urban environment. In part this will involve reducing the impact of existing modes of transport by switching to electric or hydrogen power for vehicles; by predicting and optimising the behaviour of traffic systems to prevent congestion; by optimising public transport as IBM have helped Abidjan, Dublin, Dubuque and Istanbul to do; and by improving the spatial organisation of transport through initiatives such as Arup’s Regent Street delivery hub.

We will also need new, evolved or rejuvenated forms of transport. In his lecture for the Centenary of the International Federation for Housing and Planning, Sir Peter Hall spoke eloquently of the benefits of Bus Rapid Transit systems, urban railways and trams. All can combine the speed and efficiency of rail for bringing goods and people into cities quickly from outlying regions, with the ability to stop frequently at the many places in cities which are the starting and finishing points of end-to-end journeys.

Vehicle journeys on major roads will be undertaken in the near future by automated convoys travelling safely at a combined speed and density beyond the capability of human drivers. Eventually the majority of journeys on all roads will be carried out by such autonomous vehicles. Whilst it is important that these technologies are developed and introduced in a way that emphasises safety, the majority of us already trust our lives to automated control systems in our cars – every time we use an anti-lock braking system, for example. We will still drive cars for fun, pleasure and sport in the future – but we will probably pay dearly for the privilege; and our personal transport may more closely resemble the rapid transit pods that can already be seen at Heathrow Terminal 5.

Proposals intended to accelerate the adoption of autonomous vehicles include the “Qwik lane” elevated highway for convoy traffic; or the “bi-modal glideway” and “tracked electric vehicle” systems which could allow cars and lorries to travel at great speed safely along railway networks or dedicated “tracked” roads. Alternative possibilities which could achieve similar levels of efficiency and throughput are to extend the use of conveyor belt technology – already recognised as far more efficient than lorries for transporting resources and goods over distances of tens of miles in quarries and factories – to bring freight in and out of cities; or to use pneumatically powered underground tunnel networks, which are already being used in early schemes for transporting recyclable waste in densely populated areas. Elon Musk, the entrepreneur behind Tesla’s electric cars, has even suggested that a similar underground “vacuum loop” could be used to replace long-distance train and air travel for humans, at speeds over 1000 kilometres per hour.

The majority of these transport systems won’t offer us as individuals the same autonomy and directness in our travel as we believe the private car offers us today – even though that autonomy is often severely restricted by traffic congestion and delays. Why will we choose to relinquish that control?

(Optimod's vision for integrated, predictive mobile, multi-modal transport information)

Some of us will simply prefer to, finding different value in other ways to get around.

Walking and cycling are gaining in popularity over driving in many cities. I’ve personally found it a revelation in recent years to walk around cities rather than drive around them as I might previously have done. Cities are interesting and exciting places, and walking is often an enjoyable as well as efficient way of moving about them. (And for urbanists, of course, walking offers unparalleled opportunities to understand cities). Many of us are also increasingly conscious of the health benefits of walking and cycling, particularly as recent studies in the UK and US have suggested that today’s generation may be the first in recorded history to die younger than their parents because of our poor diets and sedentary lifestyles.

Alternatively, we may choose to travel by public transport in the interests of productivity – reading or working while we travel, especially as network coverage for telephony and the internet improves. As the world’s population and economies grow, competition and the need to improve productivity will lead more and more of us to take this choice.

It is increasingly easy to walk, cycle, or use public or shared transport to travel into and around cities thanks to the availability of bicycle hire schemes, car clubs and walking route information services such as walkit.com. The emergence of services that provide instant access to travel information across all forms of transport – such as the Moovel service in Germany or the Optimod service in Lyon, France – will enhance this usability, making it easier to combine different forms of transport into a single journey, and to react to delays and changes in plans whilst en route.
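Under the hood, combining different forms of transport into a single journey is a shortest-path problem over a graph whose edges carry a mode label. The stops, modes and journey times below are hypothetical illustrations, not data from any of the services mentioned; a minimal sketch:

```python
import heapq

# Minimal sketch of multi-modal journey planning as a shortest-path
# search over a graph whose edges are labelled with a transport mode.
# Stops, modes and times in minutes are hypothetical illustrations.
edges = {
    "home":      [("station", "walk", 10), ("bike_dock", "walk", 5)],
    "bike_dock": [("centre", "cycle", 20)],
    "station":   [("centre", "train", 8)],
    "centre":    [],
}


def quickest_journey(start, goal):
    """Return (total_minutes, [(mode, stop), ...]) for the quickest route."""
    queue = [(0, start, [])]
    settled = {}
    while queue:
        time, stop, path = heapq.heappop(queue)
        if stop == goal:
            return time, path
        if settled.get(stop, float("inf")) <= time:
            continue  # already reached this stop more quickly
        settled[stop] = time
        for nxt, mode, minutes in edges[stop]:
            heapq.heappush(queue, (time + minutes, nxt, path + [(mode, nxt)]))
    return None


print(quickest_journey("home", "centre"))
# -> (18, [('walk', 'station'), ('train', 'centre')])
```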

Legislation will also drive changes in behaviour, from national and international initiatives such as the European Union legislation limiting carbon emissions of cars to local planning and transport policies – such as Birmingham’s recent Mobility Action Plan which announced a consultation to consider closing the city’s famous system of road tunnels.

(Protesters at Occupy Wall Street using digital technology to coordinate their demonstration. Photo by David Shankbone)

Are we ready for the triumph of the digital city?

Regardless of the amazing advances we’re making in online technology, life is physical. Across the world we are drawn to cities for opportunity; for life-support; to meet, work and live.  The ways in which we interact and transport ourselves and the goods we exchange have changed out of all recognition throughout history, and will continue to do so. The ever increasing level of urbanisation of the world’s population demonstrates that there’s no sign yet that those changes will make cities redundant: far from it, they are thriving.

It is not possible to understand the impact on our lives of new ideas in transport, technology or cities in isolation. Unless we consider them together and in the context of changing lifestyles, working patterns and economics, we won’t design and build cities of the future to be resilient, sustainable, and equitable.  The limitation of our success in doing that in the past is illustrated by the difference in life expectancy of 20 years between the richest and poorest areas of UK cities; the limitation of our success in doing so today is illustrated by the fact that a huge proportion of the world’s population does not have access to the digital technologies that are changing our world.

I recently read the masterplan for a European city district regarded as a good example of Smart City thinking. It contained many examples of the clever and careful design of physical space for living and for today’s forms of transport, but did not refer at all to the changes in patterns of work, life and movement being driven by digital technology. It was certainly a dramatic improvement over some plans of the past; but it was not everything that a plan for the future needs to be. 

Across domains such as digital technology, urban design, public policy, low carbon engineering, economic development and transport we have great ideas for addressing the challenges that urbanisation, population growth, resource constraints and climate change will bring; but a lot of work to do in bringing them together to create good designs for the liveable cities of the future.

A design pattern for a Smarter City: Online Peer-to-Peer and Regional Marketplaces

(Photo of Moseley Farmers’ Market in Birmingham by Bongo Vongo)

(In “Do we need a Pattern Language for Smarter Cities” I suggested that “design patterns“, a tool for capturing re-usable experience invented by the town-planner Christopher Alexander, might offer a useful way to organise our knowledge of successful approaches to “Smarter Cities”. I’m now writing a set of design patterns to describe ideas that I’ve seen work more than once. The collection is described and indexed in “Design Patterns for Smarter Cities” which can be found from the link in the navigation bar of this blog).  

Design Pattern: Online Peer-to-Peer and Regional Marketplaces

Summary of the pattern:

A society is defined by the transactions that take place within it, whether their characteristics are social or economic, and whether they consist of material goods or communication. Many of those transactions take place in some form of marketplace.

As traditional business has globalised and integrated over the last few decades, many of the systems that support us – food production and distribution, energy generation, manufacturing and resource extraction, for example – have optimised their operations globally and consolidated ownership to exploit economies of scale and maximise profits. Those operations have come to dominate the marketplaces for the goods and services they consume and process; they defend themselves from competition through the expense and complexity of the business processes and infrastructures that support their operations; through their brand awareness and sales channels to customers; and through their expert knowledge of the availability and price of the resources and components they need.

However, in recent years dramatic improvements in information and communication technology – especially social media, mobile devices, e-commerce and analytics – have made it dramatically easier for people and organisations with the potential to transact with each other to make contact and interact. Information about supply and demand has become more freely available; and it is increasingly easy to reach consumers through online channels – this blog, for instance, costs me nothing to write other than my own time, and now has readers in over 140 countries.

In response, online peer-to-peer marketplaces have emerged to compete with traditional models of business in many industries – Apple’s iTunes famously changed the music industry in this way; YouTube has transformed the market for video content and Prosper and Zopa have created markets for peer-to-peer lending. And as technologies such as 3D printing and small-scale energy generation improve, these ideas will spread to other industries as it becomes possible to carry out activities that previously required expensive, large-scale infrastructure at a smaller scale, and so much more widely.

(A Pescheria in Bari, Puglia photographed by Vito Palmi)

Whilst many of those marketplaces are operated by commercial organisations which exist to generate profit, the relevance of online marketplaces for Smarter Cities arises from their ability to deliver non-financial outcomes: i.e. to contribute to the social, economic or environmental objectives of a city, region or community.

The eBay marketplace in second-hand goods, for example, has extended the life of over $100 billion of goods since it began operating, by offering a dramatically easier way for buyers and sellers to identify each other and conduct business than had ever existed before. By extracting greater total value from goods before they are disposed of, this spreads the environmental cost of their manufacture and disposal over more use, contributing to the sustainability agenda in every country in which eBay operates.

Local food marketplaces such as Big Barn and Sustaination in the UK, m-farm in Kenya and the fish-market pricing information service operated by the University of Bari in Puglia, Italy, make it easier for consumers to buy locally produced food, and for producers to sell it, reducing the carbon footprint of the food that is consumed within a region and assisting the success of local businesses.

The opportunity for cities and regions is to encourage the formation and success of online marketplaces in a way that contributes to local priorities and objectives. Such regional focus might be achieved by creating marketplaces with restricted access – for example, only allowing individuals and organisations from within a particular area to participate – or by practicality: free recycling networks tend to operate regionally simply because the expense of long journeys outweighs the benefit of acquiring a secondhand resource for free. The cost of transportation means that in general many markets which support the exchange of physical goods and services in small-scale, peer-to-peer transactions will be relatively localised.
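The restricted-access model described above can be sketched in code. This is a minimal illustration only: the postcode prefixes and the `Participant` structure are invented for the example, and any real scheme would need a far more robust definition of regional eligibility.

```python
# Sketch of a restricted-access check for a regional online marketplace.
# The region definition and data model here are illustrative assumptions,
# not drawn from any real scheme.
from dataclasses import dataclass

# Hypothetical region: admit UK postcode areas B, CV and WS only.
REGION_POSTCODE_AREAS = {"B", "CV", "WS"}

@dataclass
class Participant:
    name: str
    postcode: str

def may_participate(p: Participant) -> bool:
    """Admit only participants whose postcode area falls within the region."""
    outward = p.postcode.split()[0]        # e.g. "B15" from "B15 2TT"
    area = outward.rstrip("0123456789")    # strip trailing digits: "B15" -> "B"
    return area in REGION_POSTCODE_AREAS

print(may_participate(Participant("Local grower", "B15 2TT")))    # True
print(may_participate(Participant("Remote seller", "SW1A 1AA")))  # False
```

In practice the same filter could equally be applied by practicality, as the text notes: a free recycling network needs no access control because the cost of long journeys limits participation naturally.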

City systems, communities and infrastructures affected:

(This description is based on the elements of Smarter City ecosystems presented in "The new Architecture of Smart Cities").

  • Goals: all
  • People: employees, business people, customers, citizens
  • Ecosystem: private sector, public sector, 3rd sector, community
  • Soft infrastructures: innovation forums; networks and community forums
  • Hard infrastructures: information and communication technology, transport and utilities network

Commercial operating model:

The basic commercial premise of an online marketplace is to invest in the provision of marketplace infrastructure in order to create returns from revenue streams within it. Various revenue streams can be created: for example, eBay applies fees to transactions conducted through its marketplace, as does the crowdfunding scheme Spacehive; whereas LinkedIn charges a premium subscription fee to businesses such as recruitment agencies in return for the right to make unsolicited approaches to members.
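The two revenue streams mentioned above – per-transaction fees and premium subscriptions – can be illustrated with a toy calculation. The fee rate and subscription price below are invented for the example; they are not the actual tariffs of eBay, Spacehive or LinkedIn.

```python
# Toy illustration of two marketplace revenue streams; all rates and
# prices are assumptions made up for this sketch.
def transaction_fee_revenue(transaction_values, fee_rate=0.05):
    """Revenue from a percentage fee on each transaction conducted."""
    return sum(value * fee_rate for value in transaction_values)

def subscription_revenue(premium_members, monthly_fee=20.0, months=12):
    """Revenue from premium membership subscriptions over a period."""
    return premium_members * monthly_fee * months

sales = [120.0, 45.0, 300.0]
print(transaction_fee_revenue(sales))   # 23.25
print(subscription_revenue(10))         # 2400.0
```

The design choice between the two models matters: fees scale with marketplace activity, whereas subscriptions provide predictable income even when transaction volumes fluctuate.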

More complex revenue models are created by allowing value-add service providers to operate in the marketplace – such as the payment service PayPal, which operated in eBay long before it was acquired; or the start-up Addiply, which adds hyperlocal advertising to online transactions. The marketplace operator can also provide fee-based "white-label" or anonymised access to marketplace services to allow third parties to operate their own niche marketplaces – Amazon WebStore, for example, allows traders to build their own branded online retail presence using Amazon's services.

(Photo by Mark Vauxhall of public Peugeot Ions on Rue des Ponchettes, Nice, France)

Online marketplaces are operated by a variety of entities: entrepreneurial technology companies such as Shutl, which offers delivery services for goods bought online through a marketplace providing access to independent delivery agents and couriers; or traditional commercial businesses seeking to "servitise" their business models, create "disruptive business platforms" or create new revenue streams from data.

(Apple’s iTunes was a disruptive business platform in the music industry when it launched – it used a new technology-enabled marketplace to completely change flows of money within the industry; and streaming media services such as Spotify have servitised the music business by allowing us to pay for the right to listen to any music we like for a certain period of time, rather than paying for copies of specific musical works as “products” which we own outright. Car manufacturers such as Peugeot are collaborating with car clubs to offer similar “pay-as-you-go” models for car use, particularly as an alternative to ownership for electric cars. Some public sector organisations are also exploring these innovations, especially those that possess large volumes of data.)

Marketplaces can create social, economic and environmental outcomes where they are operated by commercial, profit-seeking organisations which seek to build brand value and customer loyalty through positive environmental and societal impact. Many private enterprises are increasingly conscious of the need to contribute to the communities in which they operate. Often this results from the desire of business leaders to promote responsible and sustainable approaches, combined with the consumer brand value that is created by a sincere approach. Unilever are perhaps the most high-profile commercial organisation pursuing this strategy at present; and Tesco have described similar initiatives recently, such as the newly-launched Tesco Buying Club which helps suppliers secure discounts through collective purchasing. There is clearly an opportunity for local communities and local government organisations to engage with such initiatives from private enterprise to explore the potential for online marketplaces to create mutual benefit.

In other cases, marketplaces are operated by not-for-profit organisations or social enterprises for whom creating social or economic outcomes in a financially and environmentally sustainable way is the first priority. The social enterprise approach is important if cities everywhere are to benefit from online marketplaces: most commercially operated marketplaces with a geographic focus operate in large capital cities, which provide the largest customer base and minimise the risk associated with the investment in creating the market. If towns, cities and regions elsewhere wish to benefit from online marketplaces, they may need to encourage alternative models such as social enterprise to deliver them.

Finally, some schemes are operated on an entirely free basis, for example the Freecycle recycling network; or as charitable or donor-sponsored initiatives, for example the Kiva crowdfunding platform for charitable initiatives.

Soft infrastructures, hard infrastructures and assets required:

(The SMS for Life project uses the cheap and widely used SMS infrastructure to create a dynamic, collaborative supply chain for medicines between pharmacies in Africa. Photo by Novartis AG)

The technology infrastructures required to implement online marketplaces include those associated with e-commerce technology and social media: catalogues of goods and services; pricing mechanisms; support for marketing campaigns; networks of individuals and organisations and the ability to make connections between them; payments services and multi-channel support.

Many e-commerce platforms offer support for online payments integrated with traditional banking systems; alternatively, mobile payment schemes such as M-Pesa in Kenya can be used. The widespread growth in local currencies and alternative trading systems might also offer innovative solutions that are particularly relevant for marketplaces with a regional focus.

In order to be successful, marketplaces need to create an environment of trust in which transactions can be undertaken safely and reliably. As the internet has developed over the past two decades, technologies such as certificate-based identity assurance, consumer reviews and reputation schemes have emerged to create trust in online transactions and relationships. However, many online marketplaces provide robust real-world governance models in addition to tools to create online trust: the peer-to-peer lender Zopa created "Zopa Safeguard", for example, an independent, not-for-profit entity with funds to reimburse investors whose debtors are unable to repay them.
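The reputation schemes mentioned above can be sketched with a simple positive/negative feedback model. This is an assumption-laden illustration, not any real marketplace's algorithm: the smoothing constant is an arbitrary choice that stops participants with very few reviews being ranked as perfect (or terrible) on little evidence.

```python
# Minimal sketch of a review-based reputation score. The smoothing
# prior is an invented parameter for this example, not taken from
# any real marketplace's scheme.
def reputation_score(positive: int, negative: int, prior: int = 5) -> float:
    """Fraction of positive feedback, smoothed towards 0.5 so that a
    small number of reviews carries less weight than a large one."""
    total = positive + negative
    return (positive + prior * 0.5) / (total + prior)

# Two all-positive reviews score lower than 95 positives out of 100:
print(round(reputation_score(2, 0), 3))    # 0.643
print(round(reputation_score(95, 5), 3))   # 0.929
```

As the text notes, such online tools are often complemented by real-world governance – a score alone does not make a marketplace trustworthy.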

Marketplaces which involve the transaction of goods and services with some physical component – whether in the form of manufactured goods, resources such as water and energy or services such as in-home care – will also require transport services; and the cost and convenience of those services will need to be appropriate to the value of exchanges in the marketplace. Shutl’s transportation marketplace is in itself an innovation in delivering more convenient, lower cost delivery services to online retail marketplaces. By contrast, community energy schemes, which attempt to create local energy markets that reduce energy usage and maximise consumption of power generated by local, renewable resources, either need some form of smart grid infrastructure, or a commercial vehicle, such as a shared energy performance contract.

Driving forces:

  • The desire of regional authorities and business communities to form supply chains, market ecosystems and trading networks that maximise the creation and retention of economic value within a region; and that improve economic growth and social mobility.
  • The need to improve efficiency in the use of assets and resources; and to minimise externalities such as the excessive transport of goods and services.
  • The increasing availability and reducing cost of enabling technologies providing opportunities for new entrants in existing marketplaces and supply chains.

Benefits:

  • Maximisation of regional integration in supply networks.
  • Retention of value in the local economy.
  • Increased efficiency of resource usage by sharing and reusing goods and services.
  • Enablement of new models of collaborative asset ownership, management and use.
  • The creation of new business models to provide value-add products and services.

Implications and risks:

(West Midlands police patrolling Birmingham's busy Frankfurt Market at Christmas 2012. Photo by West Midlands Police)

Marketplaces must be carefully designed to attract a critical mass of participants with an interest in collaborating. It is unlikely, for example, that a group of large food retailers would collaborate in a single marketplace in which to sell their products to citizens of a particular region. The objective of such organisations is to maximise shareholder value by maximising their share of customers’ weekly household budgets. They would have no interest in sharing information about their products alongside their competitors and thus making it easier for customers to pick and choose suppliers for individual products.

Small, specialist food retailers have a stronger incentive to join such marketplaces: by adding to the diversity of produce available in a marketplace of specialist suppliers, they increase the likelihood of shoppers visiting the marketplace rather than a supermarket; and by sharing the cost of marketplace infrastructure – such as payments and delivery services – each benefits from access to a more sophisticated infrastructure than they could afford individually.

Those marketplaces that require transportation or other physical infrastructures will only be viable if they create transactions of high enough value to account for the cost of that infrastructure. Such a challenge can even apply to purely information-based marketplaces: producing high quality, reliable information requires a certain level of technology infrastructure, and marketplaces that are intended to create value through exchanging information must pay for the cost of that infrastructure. This is one of the challenges facing the open data movement.

If the marketplace does not provide sufficient security infrastructure and governance processes to create trust between participants – or if those participants do not believe that the infrastructure and governance are adequate – then transactions will not be carried out.

Some level of competition is inevitable between participants in a marketplace. If that competition is balanced by the benefits of better access to trading partners and supporting services, then the marketplace will succeed; but if competitive pressures outweigh the benefits, it will fail.

Alternatives and variations:

  • Local currencies and alternative trading systems are in many ways similar to online marketplaces, and are often a supporting component of them.
  • Some marketplaces are built on similar principles, and certainly achieve “Smart” outcomes, but do not use any technology. The Dhaka Waste Concern waste recycling scheme in Bangladesh, for example, turns waste into a market resource, creating jobs in the process.

Examples and stories:

Sources of information:

I’ve written about digital marketplaces several times on this blog, including the following articles:

Industry experts and consultancies have published work on this topic that is well worth considering: