Why Smart Cities still aren’t working for us after 20 years. And how we can fix them.

(The futuristic “Emerald City” in the 1939 film “The Wizard of Oz”. The “wizard” who controls the city is a fraud who uses theatrical technology to disguise his lack of real power.)

(I was recently asked to give evidence to the United Nations Commission on Science and Technology for Development during the development of their report on Smart Cities and Infrastructure. This article is based on my presentation, which you can find here).

The idea of a “Smart City” (or town, or region, or community) is now 20 years old; but despite some high-profile projects and a great deal of attention, it has so far achieved relatively little.

The goal of a Smart City is to invest in technology in order to create economic, social and environmental improvements. That is an economic and political challenge, not a technology trend; and it is an imperative challenge because of the nature and extent of the risks we face as a society today. Whilst the demands created by urbanisation and growth in the global population threaten to outstrip the resources available to us, those resources are under threat from man-made climate change; and we live in a world in which many think that access to resources is becoming dangerously unfair.

Surely, then, there should be an urgent political debate concerning how city leaders and local authorities enact policies and other measures to steer investments in the most powerful tool we have ever created, digital technology, to address those threats?

In honesty, that debate is not really taking place. There are endless conferences and reports about Smart Cities, but very, very few of them tackle the issues of financing, investment and policy. Instead, they tend to describe the technology and engineering behind schemes that appear to create new efficiencies and improvements in transport and energy systems, for example – but that in reality are unsustainable because they rely on one-off research and innovation grants.

Because Smart Cities are usually defined in these terms – by the role of technology in city systems rather than by the role of policy in shaping the outcomes of investment – the idea has not won widespread interest and support from the highest level of political leadership – the very people without whom the policy changes and investments that Smart Cities need will not be made.

And because Smart Cities are usually discussed as projects between technology providers, engineers, local authorities and universities, the ordinary people who vote for politicians, pay taxes, buy products, use public services and make businesses work are not even aware of the idea, let alone supportive of it.

(William Robinson Leigh’s 1908 painting “Visionary City” envisaged future cities constructed from mile-long buildings of hundreds of storeys connected by gas-lit skyways for trams, pedestrians and horse-drawn carriages. A century later we’re starting to realise not only that developments in transport and power technology have eclipsed Leigh’s vision, but that we don’t want to live in cities constructed from buildings on this scale.)

The fact that the Smart Cities movement confuses itself with inconsistent and contradictory definitions exacerbates this lack of engagement, understanding and support. From the earliest days, it has been defined in terms of either smart infrastructure or smart citizens; but rarely both at the same time.

For example, in “City of Bits” in 1996, William Mitchell, Director of the Smart Cities Research Group at MIT’s Media Lab, predicted the widespread deployment of digital technology to transform city infrastructures:

“… as the infobahn takes over a widening range of functions, the roles of inhabited structures and transportation systems are shifting once again, fresh urban patterns are forming, and we have the opportunity to rethink received ideas of what buildings and cities are, how they can be made, and what they are really for.”

Meanwhile, in their paper “E-Governance and Smart Communities: A Social Learning Challenge”, published in the Social Science Computer Review in 2001, Amanda Coe, Gilles Paquet and Jeffrey Roy described the 1997 emergence of the idea of “Smart Communities”, in which citizens and communities are given a stronger voice in their own governance by the power of internet communication technologies:

“A smart community is defined as a geographical area ranging in size from a neighbourhood to a multi-county region within which citizens, organizations and governing institutions deploy and embrace NICT [“New Information and Communication Technologies”] to transform their region in significant and fundamental ways (Eger 1997). In an information age, smart communities are intended to promote job growth, economic development and improve quality of life within the community.”

Because few descriptions of a Smart City reflect both of those perspectives in harmony, many Smart City discussions quickly create arguments between opposing camps rather than constructive ideas: infrastructure versus people; top-down versus bottom-up; technology versus urban design; proprietary technology versus open source; public service improvements versus the enablement of open innovation – and so on.

I haven’t seen many political leaders or the people who vote for them be impressed by proposals whose advocates are arguing with each other.

The emperor has no wearable technology … why we’re not really investing in Smart Cities

The consequence of this lack of cohesion and focus is that very little real money is being invested in Smart Cities to create the outcomes that cities, towns, regions and whole countries have set out for themselves in thousands of Smart City visions and strategies. The vast majority of Smart City initiatives to date are pilot projects funded by research and innovation grants. There are very, very few sustainable, repeatable solutions yet.

There are three reasons for this; and they will have serious economic and social consequences if we don’t address them.

Firstly, the investment streams available to most of those who are trying to shape Smart Cities initiatives – engineers, technologists, academics, local authority officers and community activists – are largely limited to corporate research and development funds, national and international innovation programmes and charitable or socially-focussed grants. Those are important sources of funding, but they are only available at a scale sufficient to prove that good new ideas can work through individual, time-limited projects. They are not intended to fund the deployment of those ideas across cities everywhere, or to construct new infrastructure at city scale, and they are not remotely capable of doing so.

(United States GDP plotted against median household income from 1953 to present. Until about 1980, growth in the economy correlated to increases in household wealth. But from 1980 onwards, as digital technology has transformed the economy, household income has remained flat despite continuing economic growth. From “The Second Machine Age”, by MIT economists Andy McAfee and Erik Brynjolfsson, summarised in this article.)

Secondly and conversely, the massive investments that are being made in smart technology at a scale that is transforming our world are primarily commercial: they are investing in technology to develop new products and services that consumers want to buy. That’s guaranteed to create convenience for consumers and profit for companies; but it’s far from guaranteed to create resilient, socially mobile, vibrant and healthy cities. It’s just as likely to reduce our life expectancy and social engagement by making it easier to order high-fat, high-sugar takeaway food on our smartphones to be delivered to our couches by drones whilst we immerse ourselves in multiplayer virtual reality games.

That’s why, whilst technology advocates praise the ingenuity of technology-enabled “sharing economy” business models such as Airbnb and Uber, most other commentators point out that, far from being platforms for “sharing”, many are simply profit-seeking transaction brokers. More fundamentally, some economists are seriously concerned that the economy is becoming dominated by such platform business models and that the majority of the value they create is captured by a small number of platform owners – world leaders discussed these issues at the World Economic Forum’s Davos summit this year. There is real evidence that the exploitation of technology by business is contributing to the evolution of the global economy in a way that makes it less equal and that concentrates an even greater share of wealth amongst a smaller number of people.

Finally, the similarly massive investments continually made in property development and infrastructure in cities are, for the most part, not creating investments in digital technology in the public interest. Sometimes that’s because there’s no incentive to do so: development investors make their returns by selling the property they construct; they often have no interest in whether the tenants of that property start successful digital businesses, and they receive no income from any connectivity services those tenants might use. In other cases, policy actively inhibits more socially-minded developers from providing digital services. One developer of a £1 billion regeneration project told me that European Union restrictions on state aid had prevented them from making any investment in connectivity. They could only build buildings without connectivity – in an area with no mobile coverage – and attempt to attract people and businesses to move in, thereby creating demand that telecommunications companies could subsequently compete to fulfil.

We’ll only build Smart Cities when we shape the market for investing in technology for city services and infrastructure

In her seminal 1961 work “The Death and Life of Great American Cities”, Jane Jacobs wrote that “Private investment shapes cities, but social ideas (and laws) shape private investment. First comes the image of what we want, then the machinery is adapted to turn out that image.”

Cities, towns, regions and countries around the world have set out their self-images of a Smart future, but we have not adapted the financial, regulatory and economic machinery – the policies, the procurement practices, the development frameworks, the investment models – to incentivise the private sector to create them.

I do not mean to be critical of the private sector in this article. I have worked in the private sector for my entire career. It is the engine of our economy, and without its profits we would not create the jobs needed by a growing global population, or the means to pay the taxes that sustain our public services, or the surplus wealth that creates an ability to invest in our future.

But one of the fallacies of large parts of the Smart Cities movement, and of a significant part of the overall debate concerning the enormous growth in value of the technology economy, is the assumption that economic growth driven by private sector investments in technology to improve business performance will create broad social, economic and environmental benefits.

There is no guarantee that it will. Outside philanthropy, charitable donations and social business models, private sector investments are made in order to make a profit, period. In doing so, social, economic and environmental benefits may also be created, but they are side effects which, at best, result from the informed investment choices of conscientious business leaders. At worst, they are simply irrelevant to the imperative of the profit motive.

Some businesses have the scale, vision and stability to make more direct links in their strategies and decision-making to the dependency between their success as businesses and the health of the society in which they operate – Unilever is a notable and high profile example. And all businesses are run by real people whose consciences influence their business decisions (with unfortunate exceptions, of course).

But those examples do not in any way add up to the alignment of private sector investment objectives with the aspirations of city authorities or citizens for their future. And as MIT economists Andy McAfee and Erik Brynjolfsson, amongst others, have shown, most current evidence indicates that the technology economy is exacerbating the inequality that exists in our society (see graph above). That is the opposite of the future aspirations expressed by many cities, communities and their governments.

This leads us to the political and economic imperative represented by the Smart Cities movement: to adapt the machinery of our economy to influence investments in technology so that they contribute to the social, economic and environmental outcomes that we want.

A leadership imperative to learn from the past

Those actions can only be taken by political leaders; and they must be taken because without them developments and investments in new technology and infrastructure will not create ubiquitously beneficial outcomes. Historically, there is plenty of evidence that investments in technology and infrastructure can create great harm if market forces alone are left to shape them.

(Areas of relative wealth and deprivation in Birmingham as measured by the Indices of Multiple Deprivation. Birmingham, like many of the UK’s Core Cities, has a ring of persistently deprived areas immediately outside the city centre, co-located with the highest concentration of transport infrastructure allowing traffic to flow in and out of the centre.)

For example, in the decades after the Second World War, cities in developed countries rebuilt themselves using the technologies of the time – concrete and the internal combustion engine. Networks of urban highways were built into city centres in the interests of connecting city economies with national and international transport links to commerce.

Those infrastructures supported economic growth; but they did not provide access to the communities they passed through.

The 2015 Indices of Multiple Deprivation in the UK demonstrate that some of those communities were greatly harmed as a result. The indices identify neighbourhoods with combinations of low levels of employment and income; poor health; poor access to quality education and training; high levels of crime; poor quality living environments; and shortages of quality housing and services. An analysis of these areas in the UK’s Core Cities (the eight economically largest cities outside London, plus Glasgow and Cardiff) shows that many of them exist in rings surrounding relatively thriving city centres. Whilst clearly the full causes are complex, it is no surprise that those rings feature a concentration of transport infrastructure passing through them, but primarily serving the interests of those travelling in and out of the centre. (And this is without taking into account the full health impacts of transport-related pollution, which we’re only just starting to appreciate.)

Similar effects can be seen further back in history. In their report “Cities Outlook 1901”, Centre for Cities explored the previous century of urban development in the UK, examining why at various times some cities thrived and some did not. They concluded that the single most important influence on the success of cities was their ability to provide their citizens with the right skills and opportunities to find employment as the skills required in the economy changed with evolving technology. (See the sample graph below.) A recent short article in The Economist similarly argued that history shows there is no inevitable mechanism ensuring that the benefits of economic growth driven by technology-enabled productivity improvements are broadly distributed. It cites the huge investments made in the US education system in the late 19th and early 20th centuries to ensure that the general population was in a position to benefit from the technological developments of the Industrial Revolution as an example of the efforts that may need to be made.

Why smart cities are a political leadership challenge

So, to summarise the arguments I’ve made so far:

From global urbanisation and population growth to man-made climate change, we are facing some of the most serious and acute challenges in our history, as well as the persistent challenge of inequality. But the most powerful tool shaping the transformation of our society and economy – digital technology – is, for the most part, not being used to address those challenges. The vast majority of investments in it are being made simply in the interests of profitable returns. Our political leaders are not shaping the markets in which those investments are made, or influencing public sector procurement practices, in order to create broader social, economic and environmental outcomes.

So what can we do about that?

We need to persuade political leaders to act – the leaders of cities; of local authorities more generally; and national politicians. I’m trying to do that using the arguments set out in this article, approaching “Smart Cities” not as a technology initiative but as a political and economic issue made urgent by imperative challenges to society.

I can imagine three arguments against that proposition, which I’d like to tackle first, before going on to talk about the actions that we need those leaders to take.

(Population changes in Blackburn, Burnley and Preston from 1901-2001. In the early part of the century, all three cities grew, supported by successful manufacturing economies. But in the latter half, only Preston continued to grow, as it transitioned successfully to a service economy. If cities do not adapt to changes in the economy driven by technology, history shows that they fail. From “Cities Outlook 1901” by Centre for Cities.)

The first argument is: why focus on cities? What about the rest of the world, and in particular the challenges of smaller towns, which are often overlooked; or rural regions, which are distinctive and deserve focus in their own right?

There are two replies to this argument. The first is that cities do represent the most sizeable challenge. Since 2010, more than half the world’s population has lived in urban areas, and that’s expected to rise to 70% by 2050. Cities drive the majority of the world’s economy, consume the majority of resources in the most concentrated way and create the majority of the pollution driving climate change. By focussing on cities we focus on most of our challenges at the same time, and in the places where they are most concentrated; and we focus on a unit of governance that is able to act decisively and with understanding of local context.

And that brings us to the second reply: most of the arguments I make in this article aren’t really about cities, they’re about the need for the leaders of local governments – cities, towns and regions – to take action. That applies to any local authority, not just to cities.

The second counter-argument is that my proposal is “top-down” and that instead we should focus on the “bottom-up” creativity that is the richest source of innovation and of practical solutions to problems that are rooted in local context.

My answer to this challenge is that I agree completely that it is bottom-up innovation that will create the majority of the answers to our challenges. But bottom-up innovation is already happening everywhere around us – it is what everyone does every day to create a better business, a better community, a better life. The problem with bottom-up innovation doesn’t lie in making it happen; it lies in enabling it to have a bigger impact. If bottom-up innovation on its own were the answer, then we wouldn’t have the staggering and increasing levels of inequality that we see today, and the economic growth created by the information revolution would be more broadly distributed.

Ultimately, it’s not the bottom-up innovators who need persuading to take action: they’re already acting. It’s the top-down leaders and policy-makers who are not doing what we need them to do: setting the policies that will influence investments in digital technology infrastructure to create better opportunities and support for citizen-led, community-led and business-led innovation. That’s why I’m focussing this article on those leaders and the actions we need them to take.

The third argument works similarly to the second argument, and it’s that we should be focussing on people, not on technology and policy.

Yes, of course we should be focussing on people: their creativity, the detail of their daily lives, and the outcomes that matter to them. But two central points to my argument are that digital technology is a new and revolutionary force reshaping our world, our society and our economy; and that the benefits of that revolution are not being equitably distributed. The main thing that’s not working for people right now is the impact of digital technology on society, and the main reason for that is the lack of action by political leaders. So that’s what we should concentrate on fixing.

Finally, I can summarise my response to all of those arguments in a simple statement: first we have to persuade political leaders to act, because many of them are not acting on these issues at the moment; and then we have to persuade them to act in the right way – to support bottom-up innovation through investment in open technology infrastructures and to put the interests of people at the heart of the policies that drive and shape that investment.

(Innovation Birmingham’s Chief Executive David Hardman describes the £7m “iCentrum” facility, which will open in March 2016, to local stakeholders. It will offer entrepreneurial companies opportunities to co-develop smart city products and services with larger organisations such as RWE nPower, the Transport Systems Catapult and Centro, Birmingham’s Public Transport Executive.)

Learning from what’s worked

This might all sound rather negative so far; and in a sense that’s intentional because I want to be very clear in my message that I do not think we are doing enough.

But I have a positive message too: if we can persuade our political leaders to act, then it’s increasingly clear what we need them to do. Whilst the majority of “Smart City” initiatives are unsustainable pilot and innovation projects, that’s not true of them all.

In the UK, from Sunderland to London to Newcastle to Birmingham there are examples of initiatives that are supported by sustainable funding sources and investment streams; that are not dependent on research and development grants from national or international innovation funds or technology companies; and that essentially could be applied by any city or community.

I summarised these repeatable models recently in the article “4 ways to get on with building Smart Cities. And the societal failure that stops us using them”:

1. Include Smart City criteria in the procurement of services by local authorities to encourage competitive innovation from private sector providers. Whilst local authority budgets are under pressure around the world, and have certainly suffered enormous cuts in the UK, local authorities nevertheless spend billions of pounds sterling annually on goods, services and staff time. The majority of procurements that direct that spending still acquire traditional goods and services through traditional criteria and contracts. By contrast, Sunderland, a UK city, and Norfolk, a UK county, have shown that by emphasising city and regional aspirations in procurement scoring criteria it is possible to incentivise suppliers to invest in smart solutions that contribute to local objectives.

2. Encourage development opportunities to include “smart” infrastructure. Investors invest in infrastructure and property development because it creates returns for them – to the tune of billions of pounds sterling annually in the UK. Those investments are already made in the context of regulations – planning frameworks, building codes and energy performance criteria, for example. Those regulations can be adapted to demand that investments in property and physical infrastructure include investment in digital infrastructure in a way that contributes to local authority and community objectives. The East Wick and Sweetwater development in London – a development worth several hundred million pounds that is part of the 2012 Olympics legacy and that is financed by a pension fund investment – was awarded to its developer based in part on their commitments to invest in this way.

3. Commit to entrepreneurial programmes. There are many examples of new urban or public services being delivered by entrepreneurial organisations who develop new business and operating models enabled by technology – I’ve already cited Uber and Airbnb as examples that contribute to traveller convenience; Casserole Club, a service that uses social media to connect people who can’t provide their own food with neighbours who are happy to cook an extra portion of a meal for someone else, is an example that has more obviously social benefits. Many cities have local investment funds and support services for entrepreneurial businesses, and Sunderland’s Software Centre, Birmingham’s iCentrum development, Sheffield’s Smart Lab and London’s Cognicity accelerator are examples where those investments have been linked to local smart city objectives.

4. Enable and support Social Enterprise. The objectives of Smart Cities are analogous to the “triple bottom line” objectives of Social Enterprises – organisations whose finances are sustained by revenues from the products or services that they provide, but that commit themselves to social, environmental or economic outcomes, rather than to maximising their financial returns to shareholders. A vast number of Smart City initiatives are carried out by these organisations when they innovate using technology. Cities that find a way to systematically enable social enterprises to succeed could unlock a reservoir of beneficial innovation, as the Impact Hub network, a global community of collaborative workspaces, has shown.

How to lead a smart city: Commitment, Collaboration, Consistency and Community

Each of the approaches I’ve described is dependent on both political leadership from a local authority and collaboration with regional stakeholders – businesses, developers, universities, community groups and so on.

So the first task for political leaders who wish to drive an effective Smart City programme is to facilitate the co-creation of regional consensus and an action plan (I’m not going to use the word “roadmap”. My experience of Smart Cities roadmaps is that they are, as the name implies, passive documents that don’t go anywhere).

I can sum up how to do that effectively using “four C’s”: Commitment, Collaboration, Consistency and Community:

Commitment: a successful approach to a Smart City or community needs the commitment, leadership and active engagement of the most senior local government leaders. Of course, elected Mayors, Council Leaders and Chief Executives are busy people with a multitude of responsibilities, and they inevitably delegate; but this is a responsibility that cannot be delegated too far. The vast majority of local authorities that I have seen pursue this agenda with tangible results – whichever approach they have taken, including those that have successfully funded their initiatives through research and innovation grants – have appointed a dedicated executive officer, reporting directly to the Chief Executive, with a clear mandate to create, communicate and drive a collaborative smart strategy and programme.

Collaboration: a collaborative, empowered regional stakeholder forum is needed to convene local resources. Whilst a local authority is the only elected body with a mandate to set regional objectives, local authorities directly control only a fraction of regional resources, and do not directly set many local priorities. Most approaches to Smart Cities require coordinated activity by a variety of local organisations. That only comes about if those organisations decide to collaborate at the most senior level, mutually agree their objectives for doing so, and meet regularly to agree actions to achieve them. The local authority’s elected mandate usually makes it the most appropriate organisation to facilitate the formation and chair the proceedings of such fora; but it cannot direct them.

Consistency: in order to collaborate, regional stakeholders need to agree a clear, consistent, specific local vision for their future. Without that, they will lack a context in which to take decisions that reconcile their individual interests with shared regional objectives; and any bids for funding and investments they make, whether individually or jointly, will appear inconsistent and unconvincing.

Community: finally, the only people who really know what a smart city should look like are the citizens, taxpayers, voters, customers, business owners and employees who form its community; who will live and work in it; and who will ultimately pay for it through their taxes. It’s their bottom-up innovation that will give rise to the most meaningful and effective initiatives. Their voice – heard through events, consultation exercises, town hall meetings, social media and so on – should lead to the visions and policies to create an environment in which they can flourish.

(Birmingham’s newly opened city centre trams are an example of a reversal of 20th century trends that prioritised car traffic over the public transport systems that we have re-discovered to be so important to healthy cities.)

Beyond “top-down” versus “bottom-up”: Translational Leadership and Smart Digital Urbanism

Having established that there’s a challenge worth facing, argued that we need political leaders to take action to address it, and explored what that action should be, I’d like finally to return to one of the arguments I explored along the way.

Action by political leaders is, almost by definition, “top-down”; and, whilst I stand by my argument that it’s the most important missing element of the majority of smart cities initiatives today, it’s vitally important that those top-down actions are taken in such a way as to encourage, enable and empower “bottom-up” innovation by the people, communities and businesses from which real cities are made.

It’s not only important that our leaders take the actions that I’ve argued for; it’s important that they act in the right way. Smart cities are not “business as usual”; and they are also not “behaviour as usual”.

The smart cities initiatives that I have been part of or had the privilege to observe, and that have delivered meaningful outcomes, have taken me on a personal journey. They have involved meeting with, listening to and working with people, organisations and communities that I would not have previously expected to be part of my working life, and that I was not previously familiar with in my personal life – from social enterprises to community groups to individual people with unusual ideas.

Writing in “Resilience: Why Things Bounce Back”, Andrew Zolli observes that the leaders of initiatives that have created real, lasting and surprising change in communities around the world show a quality that he defines as “Translational Leadership”. Translational leaders have the ability to overcome the institutional and cultural barriers to engagement and collaboration between small-scale, informal innovators in communities and large-scale, formal institutions with resources. This is precisely the ability that any leaders involved in smart cities need in order to properly understand how the powerful “top-down” forces within their influence – policies, procurements and investments – can be adapted to empower and enable real people, real communities and real businesses.

Translational leaders understand that their role is not to direct change, but to create the conditions in which others can be successful.

We can learn how to create those conditions from the decades of experience that town planners and urban designers have acquired in creating “human-scale cities” that don’t repeat the mistakes that were made in constructing vast urban highways, tower blocks and housing projects from unforgiving concrete in the past century.

And there is good precedent to do so. It is not just that the experience of town planners and urban designers leads us unmistakably to design thinking that focusses on the needs of the millions of individual citizens whose daily experiences collectively create the behaviour of cities. That is surely the only approach that will succeed; and the designers of smart city technologies and infrastructures will fail unless they take it. But there is also a long-lasting and profound relationship between the design techniques of town planners and of software engineers. The basic architectures of the internet and mobile applications we use today were designed using those techniques in the last decade of the last millennium and the first decade of this one.

The architect Kelvin Campbell’s concept of “massive/small smart urbanism” can teach us how to join the effects of “top-down” investments and policy with the capacity for “bottom-up” innovation that exists in people, businesses and communities everywhere. In the information age, we create the capacity for “massive amounts of small-scale innovation” if digital infrastructures are made accessible and adaptable through open data interfaces, open source software and cloud computing platforms – the digital equivalent of accessible public space and human-scale, mixed-use urban environments.
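To make the idea of an open data interface concrete, here is a minimal sketch. The feed, stop names and field layout below are hypothetical (loosely modelled on the GTFS convention many cities use to publish transit data), not any real city’s API – the point is simply that once data is published in a machine-readable form, anyone can build on it without asking permission:

```python
import csv
import io

# A hypothetical open-data feed: a GTFS-style CSV of bus stops, as a city
# might publish it. Any developer can consume this without permission --
# the "accessible public space" of the digital city.
OPEN_STOPS_CSV = """stop_id,stop_name,lat,lon
S1,New Street,52.4778,-1.8985
S2,Moor Street,52.4791,-1.8925
S3,Digbeth Coach Station,52.4742,-1.8880
"""

def load_stops(feed_text):
    """Parse the published feed into plain dictionaries, one per stop."""
    return list(csv.DictReader(io.StringIO(feed_text)))

stops = load_stops(OPEN_STOPS_CSV)
print(len(stops))             # 3
print(stops[0]["stop_name"])  # New Street
```

A journey planner, an accessibility audit or a neighbourhood campaign group could each build something different on the same published feed – which is exactly the “massive amounts of small-scale innovation” Campbell describes.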

I call this “Smart Digital Urbanism”, and many of its principles are already apparent because their value has been demonstrated time and again. These principles should be the starting point for adapting planning frameworks, procurement practices and the other policies that influence spending and investment in cities and public services.

Re-stating what Smart Cities are all about

Defining and re-defining the “Smart City” is a hoary old business – as I pointed out at the start of this article, we’ve been at it for 20 years now, and without much success.

But definitions are important: saying what you mean to do is an important first step in acting successfully, particularly in a collaborative, public context.

So I’ll end this article by offering another attempt to sum up a smart city – or community – in a way that emphasises what I know from experience are the important factors that will lead to successful actions and outcomes, rather than the endless rounds of debate that we can’t allow to continue any longer:

A Smart City or community is one which successfully harnesses the most powerful tool of our age – digital technology – to create opportunities for its citizens; to address the most severe acute challenges the human race has ever faced, arising from global urbanisation and population growth and man-made climate change; and to address the persistent challenge of social and economic inequality. The policies and investments needed to do this demand the highest level of political leadership at a local level where regional challenges and resources are best understood, and particularly in cities where they are most concentrated. Those policies and investments will only be successful if they are enabling, not directing; if they result from the actions of leaders who are listening and responding to the people and communities they serve; and if they shape an urban environment and digital economy in which individual citizens, businesses and communities have the skills, opportunities and resources to create their own success on their own terms.

That’s not a snappy definition; but I hope it’s a useful definition that’s inclusive of the major issues and clearly points out the actions that are required by city, political, community and business leaders … and why it’s vitally important that we finally start taking them.

 

Intelligent Transport Systems need to get wiser … or transport will keep on killing us

(The 2nd Futurama exhibition at the 1964 New York World’s Fair displayed a vision for the future that in many ways reflected the concrete highways and highrises constructed at the time. We now recognise that the environments those structures created often failed to support healthy personal and community life. In 50 years’ time, how will we perceive today’s visions of Intelligent Transport Systems? Photo by James Vaughan)


Two weeks ago the Transport Systems Catapult published a “Traveller Needs and UK Capability Study”, which it called “the UK’s largest traveller experience study” – a survey of 10,000 people and their travelling needs and habits, complemented by interviews with 100 industry experts and companies. The survey identifies a variety of opportunities for UK innovators in academia and industry to exploit the predicted £56 billion market for intelligent mobility solutions in the UK by 2025, and £900 billion market worldwide. It is rightly optimistic that the UK can be a world leader in those markets.

This is a great example of the enormous value that the Catapult programme – inspired by Germany’s Fraunhofer Institutes – can play in transferring innovation and expertise out of University research and into the commercial economy, and in enabling the UK’s expert small businesses to reach opportunities in international markets.

But it’s also a great example of failing to connect the ideas of Intelligent Transport with their full impact on society.

I don’t think we should call any transport initiative “intelligent” unless it addresses both the full relationship between the physical mobility of people and goods and social mobility; and the significant social impact of transport infrastructure – which goes far beyond issues of congestion and pollution.

The new study not only fails to address these topics, it doesn’t mention them at all. In that light, such a significant report represents a failure to meet the Catapult’s own mission statement, which incorporates a focus on “wellbeing” – as quoted in the introduction to the report:

“We exist to drive UK global leadership in Intelligent Mobility, promoting sustained economic growth and wellbeing, through integrated, efficient and sustainable transport systems.” [My emphasis]

I’m surprised by this failing in the study as both the engineering consultancy Arup and the Future Cities Catapult – two organisations that have worked extensively to promote human-scale, walkable urban environments and human-centric technology – were involved in its production; as was at least one social scientist (although the experts consulted were otherwise predominantly from the engineering, transport and technology industries or associated research disciplines).

I note also that the list of reports reviewed for the study does not include a single work on urbanism. Jane Jacobs’ “The Death and Life of Great American Cities”, Jan Gehl’s “Cities for People”, Jeff Speck’s “Walkable City” and Charles Montgomery’s “The Happy City”, for example, all describe very well the way that transport infrastructures and traffic affect the communities in which most of the world’s population lives. That perspective is sorely lacking in this report.

Transport is a balance between life and death. Intelligent transport shouldn’t forget that.

These omissions matter greatly because they are not just lost areas of opportunity for the UK economy to develop solutions (although that’s certainly what they are). More importantly, transport systems that are designed without taking their full social impact into account have the most serious social consequences – they contribute directly to deprivation, economic stagnation, a lack of social mobility, poor health, premature deaths, injuries and fatalities.

As town planner Jeff Speck and urban consultant Charles Montgomery recently described at length in “Walkable City” and “The Happy City” respectively, the most vibrant, economically successful urban environments tend to be those where people are able to walk between their homes, places of work, shops, schools, local transport hubs and cultural amenities; and where they feel safe doing so.

But many people do not feel that it is safe to walk about the places in which they live, work and relax. Transport is not their only cause of concern; but it is certainly a significant one.

After motorcyclists (another group of travellers who are poorly represented), pedestrians and cyclists are by far the most likely travellers to be injured in accidents. According to the Royal Society for the Prevention of Accidents, for example, more than 60 child pedestrians are killed or injured every week in the UK – that’s over 3000 every year. No wonder that the number of children walking to school has progressively fallen as car ownership has risen, contributing (though it is obviously far from the sole cause) to rising levels of childhood obesity. In its 60 pages, the Traveller Needs study doesn’t mention the safety of pedestrians at all.

A recent working paper published by Transport for London found that the risk and severity of injury for different types of road users – pedestrians, cyclists, drivers, car passengers, bus passengers etc. – vary in complex and unexpected ways; and that in particular, the risks for each type of traveller vary very differently according to age, as our personal behaviours change, depending on the journeys we undertake, and according to the nature of the transport infrastructure we use.

These are not simple issues, they are deeply challenging. They are created by the tension between our need to travel in order to carry out social and economic interactions, and the physical nature of transport which takes up space and creates pollution and danger.

As a consequence, many of the most persistently deprived areas in cities are badly affected by large-scale transport infrastructure that has been primarily designed in the interests of the travellers who pass through them, and not in the interests of the people who live and work around them.

(Photo of Masshouse Circus, Birmingham, a concrete urban expressway that strangled the city centre before its redevelopment in 2003, by Birmingham City Council)

Birmingham’s Masshouse Circus, for example, was constructed in the 1960s as part of the city’s inner ring-road, intended to improve connectivity to the national economy through the road network. However, the impact of the physical barrier that it created to pedestrian traffic can be seen in the stark difference in land value inside and outside the “concrete collar” that the ring-road created around the city centre. Inside the collar, land is valuable enough for tall office blocks to be constructed on it; whilst outside, it is of such low value that it is used as a ground-level car park. The reason for such a sharp change in value? People didn’t feel safe walking across or under the roundabout. The demolition of Masshouse Circus in 2002 enabled a revitalisation of the city centre that has continued for more than a decade.

Atlanta’s Buford Highway is a seven lane road which for two miles has no pavements, no junctions and no pedestrian crossings, passing through an area of houses, shops and businesses. It is an infrastructure fit only for vehicles, not for people. It allows no safe access along or across it for the communities it passes through – it is closed to them, unless they risk their lives.

In Sheffield, two primary schools were recently forced to close after measurements of pollution from diesel vehicles revealed levels 10-15 times higher than those considered the maximum safe limits, caused by traffic from the nearby M1 motorway. The vast majority of vehicles using the motorway comply with the emissions legislation appropriate to their age; and until specific emissions measurements were performed at the precise locations of the schools, the previous regional measurements of air quality had been within legal limits. This illustrates the failure of our transport policies to take into account the nature of the environments within which we live, and the detailed impact of transport on them. That’s why it’s now suspected that up to 60,000 people die prematurely every year in the UK due to the effects of diesel emissions, double previous estimates.

Nathaniel Lichfield and Partners recently published a survey of the 2015 Indices of Multiple Deprivation in the UK – the indices summarise many of the challenges that affect deprived communities such as low levels of employment and income; poor health; poor access to quality education and training; high levels of crime; poor quality living environments and shortages of quality housing and services.

Lichfield and Partners found that most of the UK’s Core Cities (the eight economically largest cities outside London, plus Glasgow and Cardiff) are characterised by a ring of persistently deprived areas surrounding their relatively thriving city centres. Whilst clearly the full causes are complex, it is no surprise that those rings feature a concentration of transport infrastructure passing through them, but primarily serving the interests of those passing in and out of the centre.

(Areas of relative wealth and deprivation in Birmingham as measured by the Indices of Multiple Deprivation. Birmingham, like many of the UK’s Core Cities, has a ring of persistently deprived areas immediately outside the city centre, co-located with the highest concentration of transport infrastructure allowing traffic to flow in and out of the centre)

These issues are not considered at all in the Transport Systems Catapult’s study. The word “walk” appears just three times in the document, all in a section describing the characteristics of only one type of traveller, the “dependent passenger” who does not own a car. Their walking habits are never examined, and walking as a transport choice is never mentioned or presented as an option in any of the sections of the report discussing challenges, opportunities, solutions or policy initiatives, beyond a passing mention that public transport users sometimes undertake the beginnings and ends of their journeys on foot. The word “pedestrian” does not appear at all. Cycling is mentioned only a handful of times; once in the same section on dependent passengers, and later on to note that “bike sharing [schemes have] not yet enjoyed high uptake in the UK”. The reason cited for this is that “it is likely that there are simply not enough use cases where using these types of services is convenient and cost-effective for travellers.”

If that is the case, why not investigate ways to extend the applicability of such schemes to broader use cases?

If only the sharing economy were a walking and cycling economy

The role of the Transport Systems Catapult is to promote the UK transport and transport technology industry, and this perhaps explains why so much of the study is focussed on public and private forms of powered transport and infrastructure. But there are many ways for businesses to profit by providing innovative technology and services that support walking and cycling.

What about way-finding services and street furniture that benefit pedestrians, for example, as the Future Cities Catapult recently explored? What about the cycling industry – including companies providing cargo-carrying bicycles as an alternative to small vans and trucks? What about the wearable technology industry to promote exercise measurement and pedestrian navigation along the safest, least polluted routes?

What about the construction of innovative infrastructure that promotes cycling and walking such as the “SkyCycle” proposal to build cycle highways above London’s railway lines, similar to the pedestrian and cycle roundabouts already built in Europe and China? What about the use of conveyor belts along similar routes to transport freight? What about the use of underground, pneumatically powered distribution networks for recycling and waste processing? All of these have been proposed or explored by UK businesses and universities.

And what about the UK’s world-class community of urban designers, town planners and landscape architects, some of whom are using increasingly sophisticated technologies to complement their professional skills in designing places and communities in which living, working and travelling co-exist in harmony? What about our world class University expertise researching visions for sustainable, liveable cities with less intrusive transport systems?

An even more powerful source of innovations to achieve a better balance between transportation and liveability could be the use of “sharing economy” business models to promote social and economic systems that emphasise local, human-powered travel.

Wikipedia describes the sharing economy as “economic and social systems that enable shared access to goods, services, data and talent”. Usually, these systems employ consumer technologies such as smartphones and social media to create online peer-to-peer trading networks that disrupt or replace traditional supply chains and customer channels – eBay is an obvious example for trading second-hand goods, Airbnb connects travellers with people willing to rent out a spare room, and Uber connects passengers and drivers.

These business models can be enormously successful. Since its formation 8 years ago, Airbnb has acquired access to over 800,000 rooms to let in more than 190 countries; in 2014 the estimated value of this company which employed only 300 people at the time was $13 billion. Uber has demonstrated similarly astonishing growth.

However, it is much less clear what these businesses are contributing to society. In many cases their rapid growth is made possible by operating business models that side-step – or just ignore – the regulation that governs the traditional businesses that they compete with. Whilst they can offer employment opportunities to the providers in their trading networks, those opportunities are often informal and may not be protected by employment rights and minimum wage legislation. As privately held companies their only motivation is to return a profit to their owners.

By creating dramatic shifts in how transactions take place in the industries in which they operate, sharing economy businesses can create similarly dramatic shifts in transport patterns. For example, hotels in major cities frequently operate shuttle buses to transfer guests from nearby airports – a shared form of transport. Airbnb offer no such equivalent transfers to their independent accommodation. This is a general consequence of replacing large-scale, centrally managed systems of supply with thousands of independent transactions. At present there is very little research to understand these impacts, and certainly no policy to address them.

But what if incentives could be created to encourage the formation of sharing economy systems that promoted local transactions that can take place with less need for powered transport?

For example, Borroclub provides a service that matches someone who needs a tool with a neighbour who owns one that they could borrow. Casserole Club connects people who are unable to cook for themselves with neighbours who are happy to cook an extra portion and share it. The West Midlands Collaborative Commerce Marketplace identifies opportunities for groups of local businesses to collaborate to win new contracts. Such “hyperlocal” schemes are not a new idea, and there are endless possibilities for them to reveal local opportunities to interact; but they struggle to compete for attention and investment against businesses purely focussed on maximising profits and investor returns.
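The core of such a hyperlocal service is simple enough to sketch: match a request with the nearest offer within walking distance, so that the transaction itself requires no powered transport. The matching rule, names and data below are illustrative assumptions, not the actual implementation of Borroclub or any other service:

```python
import math

def distance_km(a, b):
    """Approximate great-circle (haversine) distance between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def match(request, offers, max_km=1.0):
    """Return the nearest offer of the requested item within max_km, or None."""
    candidates = [o for o in offers
                  if o["item"] == request["item"]
                  and distance_km(request["loc"], o["loc"]) <= max_km]
    return min(candidates,
               key=lambda o: distance_km(request["loc"], o["loc"]),
               default=None)

# Hypothetical neighbours offering items to lend.
offers = [
    {"owner": "Asha",  "item": "drill",  "loc": (52.4800, -1.9000)},
    {"owner": "Bilal", "item": "ladder", "loc": (52.4801, -1.9001)},
    {"owner": "Cara",  "item": "drill",  "loc": (52.4900, -1.9200)},
]
request = {"item": "drill", "loc": (52.4802, -1.9002)}
print(match(request, offers)["owner"])  # Asha -- a few streets away, not Cara
```

The deliberate design choice is the `max_km` cut-off: by refusing matches beyond walking distance, the system’s incentives favour local, human-powered journeys rather than generating new vehicle trips.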

Surely, a study that includes the Future Cities Catapult, Digital Catapult and Transport Systems Catapult amongst its contributors could have explored possibilities for encouraging and scaling hyperlocal sharing economy business models, alongside all those self-driving cars and multi-modal transport planners that industry seems to be quite willing to invest in on its own?

The study does mention some “sharing economy” businesses, including Uber; but it makes no mention of the controversy created because their profit-seeking focus takes no account of their social, economic and environmental impact.

It also mentions the role of online commerce in providing retail options that avoid the need to travel in person – and cites these as an option for reducing the overall demand for travel. But it fails to adequately explore the impact of the consequent requirements for delivery transport – other than to note the potential for detrimental impact on – wait for it – not local communities, but local traffic!

“Enabling lifestyles is about more than just enabling and improving physical travel. 31% (19bn) of journeys made today would rather not have been made if alternative means were available (e.g. online shopping)” (page 15)

“Local authorities and road operators need to be aware that increased goods delivery can potentially have a negative impact on local traffic flows.” (page 24)

Why promote transactions that we carry out in isolation online rather than transactions that we carry out socially by walking, and that could contribute towards the revitalisation of local communities and town centres? Why mention “enabling lifestyles” without exploring the health benefits of walking, cycling and socialising?

(A poster from the International Sustainability Institute’s Commuter Toolkit, depicting the space 200 travellers occupy on Seattle’s 2nd Avenue when using different forms of transport, and intended to persuade travellers to adopt those forms that use less public space)

Self-driving cars as a consumer product represent selfish interests, not societal interests

The sharing economy is not the only example of a technology trend whose social and economic impact cannot be assumed to be positive. The same challenge applies very much to perhaps the most widely publicised transport innovation today, and one that features prominently in the new study: the self-driving car.

On Friday I attended a meeting of the UK’s Intelligent Transport Systems interest group, ITS-UK. Andy Graham of White Willow Consulting gave a report of the recent Intelligent Transport Systems World Congress in Bordeaux. The Expo organisers had provided a small fleet of self-driving cars to transfer delegates between hotels and conference venues.

Andy noted that the cars drove very much like humans did – and that they kept at least as large, if not a larger, gap between themselves and the car in front. On speaking to the various car manufacturers at the show, he learned that their market testing had revealed that car buyers would only be attracted to self-driving cars if they drove in this familiar way.

Andy pointed out that this could significantly negate one of the promoted advantages of self-driving cars: reducing congestion and increasing transport flow volumes by enabling cars to be driven in close convoys with each other. This focus on consumer motivations rather than the holistic impact of travel choices is repeated in the Transport Systems Catapult study’s consideration of self-driving cars.

Cars don’t only harm people, communities and the environment if they are diesel or petrol powered and emit pollution, or if they are involved in collisions: they do so simply because they are big and take up space.

Space – space that is safe for people to inhabit – is vital to city and community life. We use it to walk; to sit and relax; to exercise; for our children to play in; to meet each other. Self-driving cars and electric cars take up no less space than the cars we have driven for decades. Cars that are shared take up slightly less space per journey – but are nowhere near as efficient as walking, cycling or public transport in this regard. Car clubs might reduce the need for vehicles to be parked in cities, but they still take up as much space on the road.

The Transport Systems Catapult’s study does explore many means to encourage the use of shared or public transport rather than private cars; but it does so primarily in the interests of reducing congestion and pollution. The relationship between public space, wellbeing and transport is not explored; and neither is the – at best – neutral societal impact of self-driving cars, if their evolution is left to today’s market forces.

Just as the industry and politicians are failing to enact the policies and incentives that are needed to adapt the Smart Cities market to create better cities rather than simply creating efficiencies in service provision and infrastructure, the Intelligent Transport Systems community will fail to deliver transport that serves our society better if it doesn’t challenge our self-serving interests as consumers and travellers and consider the wider interests of society.

The Catapult’s report does highlight the potential need for city-wide and national policies to govern future transport systems consisting of connected and autonomous vehicles; but once again the emphasis is on optimising traffic flows and the traveller experience, not on optimising the outcomes for everyone affected by transport infrastructure and traffic.

As consumers we don’t always know best. In the words of one of the most famous transport innovators in history: “If I had asked people what they wanted, they would have said ‘faster horses’.” (Henry Ford, inventor of the first mass-produced automobile, and of the manufacturing production line).

A failure that matters

The Transport Systems Catapult’s report doesn’t mention most of the issues I’ve explored in this article, and those that it does touch on are quickly passed over. In 60 pages it only mentions walking and cycling a handful of times; it never analyses the needs of pedestrians and cyclists, and beyond a passing mention of employers’ “cycle to work” schemes and the incorporation of bicycle hire schemes in multi-modal ticketing solutions, these modes of transport are never presented as solutions to our transport and social challenges.

This is a failure that matters. The Transport Systems Catapult is only one voice in the Intelligent Transport Systems community, and many of us would do well to broaden our understanding of the context and consequences of our work. For my part, when I worked with IBM’s Intelligent Transport Systems team several years ago I was similarly disengaged from these issues, and focussed on the narrower economic and technological aspects of the domain. It was only later in my career, as I sought to properly understand the wider complexities of Smart Cities, that I began to appreciate them.

But the Catapult Centre benefits from substantial public funding, is a high profile influencer across the transport sector, and is perceived to have the authority of a relatively independent voice between the public and private sectors. By not taking into account these issues, its recommendations and initiatives run the risk of creating great harm in cities in the UK, and anywhere else our transport industry exports its ideas to.

Both the “Smart Cities” and “Intelligent Transport” communities often talk in terms of breaking down silos in industry, in city systems and in thinking. But in reality we are not doing so. Too many Smart City discussions separate out “energy”, “mobility” and “wellbeing” as separate topics. Too few invite town planners, urban designers or social scientists to participate. And this is an example of an “Intelligent Transport” discussion that makes the same mistakes.

(Pedestrians attempting to cross Atlanta’s notorious Buford Highway, a seven-lane road with no pavements and two miles between junctions and crossings. Photo by PBS)

In the wonderful “Walkable City”, Jeff Speck describes the epidemiologist Richard Jackson’s stark realisation of the life-and-death significance of good urban design related to transport infrastructure. Jackson was driving along the notorious two-mile stretch of Atlanta’s seven-lane Buford Highway with no pavements or junctions:

“There, by the side of the road, in the ninety-five degree afternoon, he saw a woman in her seventies, struggling under the burden of two shopping bags. He tried to relate her plight to his own work as an epidemiologist. “If that poor woman had collapsed from heat stroke, we docs would have written the cause of death as heat stroke and not lack of trees and public transportation, poor urban form, and heat-island effects. If she had been killed by a truck going by, the cause of death would have been “motor vehicle trauma”, and not lack of sidewalks and transit, poor urban planning and failed political leadership.”

We will only harness technology, transport and infrastructure to create better communities and better cities if we seek out and respect those cross-disciplinary insights that take seriously the needs of everyone in our society who is affected by them; not just the needs of those who are its primary users.

Our failure to do so over the last century is demonstrated by the UK’s disgracefully low social mobility; by those areas of multiple deprivation which in most cases have persisted for decades; and by the fact that, as a consequence, life expectancy for babies born today in the poorest parts of UK cities is 20 years shorter than for babies born in the richest parts of the same cities.

That is the life and death impact of the transport strategies that we’ve had in the past; the transport strategies we publish today must do better.

Postscript 3rd November

The Transport Systems Catapult replied very positively on Twitter today to my rather forthright criticisms of their report. They said “Great piece Rick. The study is a first step in an ongoing discussion and we welcome further input/ideas feeding in as we go on.”

I’d like to think I’d respond in a similarly gracious way to anyone’s criticism of my own work!

What my article doesn’t say is that the Catapult’s report is impressively detailed and insightful in its coverage of those topics that it does include. I would absolutely welcome their expertise and resources being applied to a broader consideration of the topic of future transport, and look forward to seeing it. 

Let’s not get carried away by self-driving cars and the sharing economy: they won’t make Smart Cities better places to live, work and play

(Cities either balance or create tension between human interaction and transport; how will self-driving cars change that equation? With thanks and apologies to Tim Stonor for images and inspiration)

Will we remember to design cities for people and life, enriched by interactions and supported by transport? Or will we put the driverless car and the app that hires it before the passenger?

I’m worried that the current level of interest in self-driving cars as a Smart City initiative is a distraction from the transport and technology issues that really matter in cities.

It’s a great example of a technology that is attracting significant public, private and academic investment because many people will pay for the resulting product in return for the undoubted benefits to their personal safety and convenience.

But will cities full of cars driving themselves be better places to live, work and play than cities full of cars driven by people?

Cities create value when people in them transact with each other: that often requires meeting in person and/or exchanging goods – both of which require transport. From the medieval era to the modern age cities have in part been defined by the tension between our desire to interact and the negative effects created by the size, noise, pollution and danger of the transport that we use to do so – whether that transport is horses and wagons or cars and vans.

A number of town planners and urban designers argue that we’ve got that balance wrong over the past half century with the result that many urban environments are dominated by road traffic and infrastructure to the extent that they inhibit the human interactions that are at the heart of the social and economic life of cities.

What will be the effect of autonomous vehicles on that inherent tension – will they help us to achieve a better balance, or make it harder to do so?

(Traffic clogging the streets of Rome. Photo by AntyDiluvian)

Autonomous vehicles are driven in a different way from the cars that we drive today, and that creates certain advantages: freeing people from the task of driving so that they can work or relax; and allowing a higher volume of traffic to flow safely than is currently possible, particularly on national highway networks. And they will almost certainly very soon become better than human drivers at avoiding accidents with people, vehicles and their surroundings.

But they are no smaller than traditional vehicles, so they will take up just as much space. And they will only produce less noise and pollution if they are electric vehicles (which in turn displace pollution elsewhere in the power system) or are powered by hydrogen – a technology that is still a long way from large-scale adoption.

And whilst computer-driven cars may be safer than cars driven by people, they will not make pedestrians and cyclists feel any safer: people are more likely to feel safe in proximity with slow moving cars with whose drivers they can make eye contact, not automated vehicles travelling at speed. The extent to which we feel safe (which we are aware of) is often a more important influence on our social and economic activity than the extent to which we are actually safe (which we may well not be accurately aware of).

The tension between the creation of social and economic value in cities through interactions between people, and the transport required to support those interactions, is also at the heart of the world’s sustainability challenge. At the “Urban Age: Governing Urban Futures” conference in New Delhi in November 2014, Ricky Burdett, Director of the London School of Economics’ Cities Program, described the graph below that shows the relationship between social and economic development, as measured by the UN Human Welfare Index, plotted left-to-right; and ecological footprint per person, which is shown vertically, and which by and large grows significantly as social and economic progress is made. (You can watch Burdett’s presentation, along with those by other speakers at the conference, here).

(The relationship between social and economic development, as measured by the UN Human Welfare Index, plotted left-to-right and ecological footprint per person, which is shown vertically)

The dotted line at the bottom of the graph shows when the ecological footprint of each person passes beyond that which our world can support for the entire population. Residents of cities in the US are using five times this limit already, and countries such as China and Brazil, whose cities are growing at a phenomenal rate, are just starting to breach that line of sustainability.

Tackling this challenge does not necessarily involve making economic, social or personal sacrifices, though it certainly involves making changes. In recent decades, a number of politicians such as Enrique Penalosa, ex-Mayor of Bogota, international influencers such as Joan Clos, Executive Director of UN-Habitat (as reported informally by Tim Stonor from Dr. Clos’s remarks at the “Urban Planning for City Leaders” conference at the Crystal, London in 2012), and town planners such as Jeff Speck and Charles Montgomery have explored the social and economic benefits of cities that combine low-carbon lifestyles and economic growth by promoting medium-density, mixed-use urban centres that stimulate economies with a high proportion of local transactions within a walkable and cyclable distance.

Of course no single idea is appropriate to every situation, but overall I’m personally convinced that this is the only sensible general conception of cities for the future that will lead to a happy, healthy, fair and sustainable world.

There are many ways that technology can contribute to the development of this sort of urban economy, to complement the work of urban designers and town planners in the physical environment. For example, a combination of car clubs, bicycle hire schemes and multi-modal transport information services is already contributing to a changing culture in younger generations of urban citizens who are less interested in owning cars than previous generations.

(Top: Frederiksberg, Copenhagen, where cyclists and pedestrians on one of the district’s main thoroughfares are given priority over cars waiting to turn onto the road. Bottom: Buford Highway, Atlanta, a two-mile stretch of seven-lane highway passing through a residential and retail area with no pavements or pedestrian crossings)

And this is a good example of why it is not inevitable that cities must grow towards the high ecological footprints of US cities as their economies develop.

The physicist Geoffrey West’s work is often cited as proof that cities will grow larger, and that their economies will speed up as they do so, increasing their demand for resources and production of waste and pollution. But West’s work is “empirical”, not “deterministic”: it is simply based on measurements and observations of how cities behave today; it is not a prediction for how cities will behave in the future.

It is up to us to discover new services and infrastructures to support urban populations and their desire for ever more intense interactions in a less profligate way. Already today, cities diverge from West’s predictions according to the degree to which they have done so. The worst examples of American sprawl such as Houston, Texas have enormous ecological footprints compared to the standard of living and level of economic activity they support; more forward-thinking cities such as Portland, Vancouver, Copenhagen and Freiburg are far more efficient (and Charles Montgomery has argued that they are home to happier, healthier citizens as a consequence).

However, the role that digital technologies will play in shaping the economic and social transactions of future cities and that ecological footprint is far from certain.

On the one hand, modern technologies make it easier for us to communicate and share information wherever we are without needing to travel; but on the other hand those interactions create new opportunities to meet in person and to exchange goods and services, and so they create new requirements for transport. As technologies such as 3D printing, open-source manufacturing and small-scale energy generation make it possible to carry out traditionally industrial activities at much smaller scales, an increasing number of existing bulk movement patterns are being replaced by thousands of smaller, peer-to-peer interactions created by transactions in online marketplaces. We can already see the effects of this trend in the vast growth of traffic delivering goods that are purchased or exchanged online.

I first wrote about this “sharing economy“, defined by Wikipedia as “economic and social systems that enable shared access to goods, services, data and talent”, two years ago. It has the potential to promote a sustainable economy by matching supply and demand in ways that weren’t previously possible. For example, eBay CEO John Donahoe has described the environmental benefits created by the online second-hand marketplace extending the life of over $100 billion of goods since it began, representing a significant reduction in the impact of manufacturing and disposing of goods. But on the other hand those benefits are offset by the carbon footprint of transporting goods between the buyers and sellers who use the marketplace; and by the social and economic impact of that traffic on city communities.

There are many sharing economy business models that promote sustainable, walkable, locally-reinforcing city economies: Casserole Club, who use social media to introduce people who can’t cook for themselves to people who are prepared to volunteer to cook for others; the West Midlands Collaborative Commerce Marketplace, which uses analytics technology to help its 10,000 member businesses work together in local partnerships to win more than £4 billion in new contracts each year; and Freecycle and other free recycling networks, which tend to promote relatively local re-use of goods and services because the attraction of free, used goods diminishes with the increasing expense of the travel required to collect them.

(Packages from Amazon delivered to Google’s San Francisco office. Photo by moppet65535)

But it takes real skill and good ideas to create and operate these business models successfully; and those abilities are exactly those that the MIT economists Andy McAfee, Erik Brynjolfsson and Michael Spence have pointed out can command exceptional financial rewards in a capitalist economy. What is there to incentivise the people who possess those skills to use them to design business models that achieve balanced financial, social and environmental outcomes, as opposed to simply maximising profit and personal return?

The vast majority of systematic incentives act to encourage such people to design businesses that maximise profit. That is why many social enterprises are small-scale, and why many successful “sharing economy” businesses such as Airbnb and Uber have very little to do with sharing value and resources, but are better understood as a new type of profit-seeking transaction broker. It is only personal, ethical attitudes to society that persuade any of us to turn our efforts and talents to more balanced models.

This is a good example of a big choice that we are taking in millions of small decisions: the personal choices of entrepreneurs, social innovators and business leaders in the businesses they start, design and operate; and our personal choices as consumers, employees and citizens in the products we buy, the businesses we work for and the politicians we vote for.

For individuals, those choices are influenced by the degree to which we understand that our own long term interests, the long term interests of the businesses we run or work for, and the long term interests of society are ultimately the same – we are all people living on a single planet together – and that that long-term alignment is more important than the absolute maximisation of short-term financial gain.

But as a whole, the markets that invest in businesses and enable them to operate and grow are driven by relatively short-term financial performance unless they are influenced by external forces.

In this context, self-driving cars – like any other technology – are strictly neutral and amoral. They are a technology that does have benefits, but those benefits are relatively weakly linked to the outcomes that most cities have set out as their objectives. If we want autonomous vehicles, “sharing economy” business models or the Internet of Things to deliver vibrant, fair, healthy and happy cities then more of our attention should be on the policy initiatives, planning and procurement frameworks, business licensing and taxation regimes that could shape the market to achieve those outcomes. The Centre for Data Innovation, British Standards Institute, and Future Cities Catapult have all published work on this subject and are carrying out initiatives to extend it.

(Photograph by Martin Deutsche of plans to redevelop Queen Elizabeth Park, site of the 2012 London Olympics. The London Legacy Development’s intention, in support of the Smart London Plan, is “for the Park to become one of the world’s leading digital environments, providing a unique opportunity to showcase how digital technology enhances urban living. The aim is to use the Park as a testing ground for the use of new digital technology in transport systems and energy services.”)

Cities create the most value in the most sustainable way when they encourage transactions between people that can take place over a walkable or cyclable distance. New technologies and new technology-enabled business models have great potential to encourage both of those outcomes, but only if we use the tools available to us to shape the market to make them financially advantageous to private sector enterprise.  We should be paying more attention to those tools, and less attention to technology.

Smart Digital Urbanism: creating the conditions for equitably distributed opportunity in the digital age

(The sound artists FA-TECH improvising in Shoreditch, London. Shoreditch’s combination of urban character, cheap rents and proximity to London’s business, financial centres and culture led to the emergence of a thriving technology startup community – although that community’s success is now driving rents up, challenging some of the characteristics that enabled it.)

(I first learned of the architect Kelvin Campbell‘s concept of “massive/small” just over two years ago – the idea that certain characteristics of policy and the physical environment in cities could encourage “massive amounts of small-scale innovation” to occur. Kelvin recently launched a collaborative campaign to capture ideas, tools and tactics for massive/small “Smart Urbanism“. This is my first contribution to that campaign.)

Over the past 5 years, enormous interest has developed in the potential for digital technologies to contribute to the construction and development of cities, and to the operation of the services and infrastructures that support them. These ideas are often referred to as “Smart Cities” or “Future Cities”.

Indeed, as the price of digital technologies such as smartphones, sensors, analytics, open source software and cloud platforms reduces rapidly, market dynamics will drive their aggressive adoption to make construction, infrastructure and city services more efficient, and hence make their providers more competitive.

But those market dynamics do not guarantee that we will get everything we want for the future of our cities: efficiency and resilience are not the same as health, happiness and opportunity for every citizen.

Is it realistic to ask ourselves whether we can achieve those objectives? Yes, it has to be.

Many of us believe in that possibility, and spend a lot of our efforts finding ways to achieve it. And over the same timeframe that interest in “smart” and “future” cities has emerged, a belief has developed around the world that the governance institutions of cities – local authorities and elected mayors, rather than the governments of nations – are the most likely political entities to implement the policies that lead to a sustainable, resilient future with more equitably distributed economic growth.

Consequently many Mayors and City Councils are considering or implementing legislation and policy frameworks that change the economic and financial context in which construction, infrastructure and city services are deployed and operated. The British Standards Institute recently published guidance on this topic as part of its overall Smart Cities Standards programme.

But whilst in principle these trends and ideas are incredibly exciting in their potential to create better cities, communities, places and lives in the future, in practice many debates about applying them falter on a destructive and misleading argument between “top-down” and “bottom-up” approaches – the same chasm that Smart Urbanism seeks to bridge in the physical world.

Policies and programmes driven by central government organisations or implemented by technology and infrastructure corporations that drive digital technology into large-scale infrastructures and public services are often criticised as crude, “top-down” initiatives that prioritise resilience and efficiency at the expense of the concerns and values of ordinary people, businesses and communities. However, the organic, “bottom-up” innovation that critics of these initiatives champion as the better, alternative approach is ineffective at creating equality.

("Lives on the Line" by James Cheshire at UCL's Centre for Advanced Spatial Analysis, showing the variation in life expectancy and correlation to child poverty in London. From Cheshire, J. 2012. Lives on the Line: Mapping Life Expectancy Along the London Tube Network. Environment and Planning A. 44 (7). Doi: 10.1068/a45341)

“Bottom-up innovation” is what every person, community and business does every day: using our innate creativity to find ways to use the resources and opportunities available to us to make a better life.

But the degree to which we fail to distribute those resources and opportunities equally is illustrated by the stark variation in life expectancy between the richest and poorest areas of cities in the UK: often this variation is as much as 20 years within a single city.

Just as the “design pattern”, a tool invented in the 1970s by the town planner Christopher Alexander, is probably the single most influential concept behind the development of the digital technology we all use today, two recent movements in town planning and urban design, “human scale cities” and “smart urbanism”, offer analogies that can connect “top-down” technology policies and infrastructure with the factors that affect the success of “bottom-up” creativity. Together they point towards “massive / small” success: future, digital cities that create “massive amounts of small-scale innovation“.

The tools to achieve this are relatively cheap, and the right policy environment could make it fairly straightforward to augment the business case for efficient, resilient “smart city” infrastructures to ensure that they are deployed. They are the digital equivalents of the physical concepts of Smart Urbanism – the use of open grid structures for spatial layouts, and the provision of basic infrastructure components such as street layouts and party walls in areas expected to attract high growth in informal housing. Some will be delivered as a natural consequence of market forces driving technology adoption; but others will only become economically viable when local or national government policies shape the market by requiring them:

  • Broadband, wi-fi and 3G / 4G connectivity should be broadly available so that everyone can participate in the digital economy.
  • The data from city services should be made available as Open Data and published through “Application Programming Interfaces” (APIs) so that everybody knows how they work; and can adapt them to their own individual needs.
  • The data and APIs should be made available in the form of Open Standards so that everybody can understand them; and so that the systems that we rely on can work together.
  • The data and APIs should be available to developers working on Cloud Computing platforms with Open Source software so that anyone with a great idea for a new service to offer to people or businesses can get started for free.
  • The technology systems that support the services and infrastructures we rely on should be based on Open Architectures, so that we have freedom to choose which technologies we use, and to change our minds.
  • Governments, institutions, businesses and communities should participate in an open dialogue about the places we live and work in, informed by open data, enabled by social media and smartphones, and enlightened by empathy.
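To make the second and third of those principles concrete, here is a minimal sketch of what consuming such an open data feed might look like. The feed structure, field names and station names below are all invented for illustration; real city APIs, and the open standards they follow, vary:

```python
import json

# A hypothetical open-data payload for a cycle hire scheme, in the kind of
# self-describing JSON an open API might publish. All field and station
# names are invented for illustration; real feeds differ by city.
feed = json.loads("""
{
  "service": "cycle-hire",
  "stations": [
    {"id": "s1", "name": "Centenary Square",   "bikes_free": 4, "docks_free": 8},
    {"id": "s2", "name": "New Street",         "bikes_free": 0, "docks_free": 12},
    {"id": "s3", "name": "Jewellery Quarter",  "bikes_free": 7, "docks_free": 1}
  ]
}
""")

def stations_with_bikes(data, minimum=1):
    """Return the names of stations with at least `minimum` bikes available."""
    return [s["name"] for s in data["stations"] if s["bikes_free"] >= minimum]

print(stations_with_bikes(feed))
```

The code is trivial, and deliberately so: when data is published in an open, documented format, anyone – a community group as much as a corporation – can build or adapt a service like this for free.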

(Casserole Club, a social enterprise developed by FutureGov, uses social media to connect people who have difficulty cooking for themselves with others who are happy to cook an extra portion for a neighbour; a great example of a locally-focused “sharing economy” business model which creates financially sustainable social value.)

These principles would encourage good “digital placemaking“: they would help to align the investments that will be made in improving cities using technology with the needs and motivations of the public sector, the private sector, communities and businesses. They would create “Smart Digital Urbanism”: the conditions and environment in which vibrant, fair digital cities grow from the successful innovations of their citizens, communities and businesses in the information economy.

In my new role at Amey, a vast organisation in the UK that delivers public services and operates and supports public infrastructure, I’m leading a set of innovative projects with our customers and technology partners to explore these ideas and to understand how we can collaboratively create economic, social and environmental value for ourselves; for our customers; and for the people, communities and businesses who live in the areas our services support.

It’s a terrifically exciting role; and I’ll soon be hiring a small team of passionate, creative people to help me identify, shape and deliver those projects. I’ll post an update here with details of the skills, experience and characteristics I’m looking for. I hope some of you will find them attractive and get in touch.

From concrete to telepathy: how to build future cities as if people mattered

(An infographic depicting realtime data describing Dublin – the waiting time at road junctions; the location of buses; the number of free parking spaces and bicycles available to hire; and sentiments expressed about the city through social media)

(I was honoured to be asked to speak at TEDxBrum in my home city of Birmingham this weekend. The theme of the event was “DIY” – “the method of building, modifying or repairing something without the aid of experts or professionals”. In other words, how Birmingham’s people, communities and businesses can make their home a better place. This is a rough transcript of my talk).

What might I, a middle-aged, white man paid by a multi-national corporation to be an expert in cities and technology, have to say to Europe’s youngest city, and one of its most ethnically and nationally diverse, about how it should re-create itself “without the aid of experts or professionals”?

Perhaps I could try to claim that I can offer the perspective of one of the world’s earliest “digital natives”. In 1980, when I was ten years old, my father bought me one of the world’s first personal computers, a Tandy TRS-80, and taught me how to programme it using “machine code“.

But about two years ago, whilst walking through London to give a talk at a networking event, I was reminded of just how much the world has changed since my childhood.

I found myself walking along Wardour St. in Soho, just off Oxford St., and past a small alley called St. Anne’s Court which brought back tremendous memories for me. In the 1980s I spent all of the money I earned washing pots in a local restaurant in Winchester to travel by train to London every weekend and visit a small shop in a basement in St. Anne’s Court.

I’ve told this story in conference speeches a few times now, perhaps to a total audience of a couple of thousand people. Only once has someone been able to answer the question:

“What was the significance of St. Anne’s Court to the music scene in the UK in the 1980s?”

Here’s the answer:

Shades Records, the shop in the basement, was the only place in the UK that sold the most extreme (and inventive) forms of “thrash metal” and “death metal“, which at the time were emerging from the ashes of punk and the “New Wave of British Heavy Metal” in the late 1970s.

(Programming my Tandy TRS-80 in Z80 machine code nearly 35 years ago)

The process by which bands like VOIVOD, Coroner and Celtic Frost – the last of them, at the time, three 17-year-olds who practised in an old military bunker outside Zurich – managed to connect, without the internet, to the very few people around the world like me who were willing to pay money for their music feels like ancient history now. It was a world of hand-printed “fanzines”, and demo tapes painstakingly copied one at a time, ordered by mail from classified adverts in magazines like Kerrang!

Our world has been utterly transformed in the relatively short time between then and now by the phenomenal ease with which we can exchange information through the internet and social media.

The real digital natives, though, are not even those people who grew up with the internet and social media as part of their everyday world (though those people are surely about to change the world as they enter employment).

They are the very young children like my 6-year-old son, who taught himself at the age of two to use an iPad to access the information that interested him (admittedly, in the form of Thomas the Tank Engine stories on YouTube) before anyone else taught him to read or write, and who can now use programming tools like MIT’s Scratch to control computers vastly more powerful than the one I used as a child.

Their expectations of the world, and of cities like Birmingham, will be like those of no-one who has ever lived before.

And their ability to use technology will be matched by the phenomenal variety of data available to them to manipulate. As everything from our cars to our boilers to our fridges to our clothing is integrated with connected, digital technology, the “Internet of Things“, in which everything is connected to the internet, is emerging. As a consequence our world, and our cities, are full of data.

(The programme I helped my 6-year old son write using MIT’s “Scratch” language to cause a cartoon cat to draw a picture of a house)

My friend the architect Tim Stonor calls the images that we are now able to create, such as the one at the start of this article, “data porn”. The image shows data about Dublin from the Dublinked information sharing partnership: the waiting time at road junctions; the location of buses; the number of free parking spaces and bicycles available to hire; and sentiments expressed about the city through social media.

Tim’s point is that we should concentrate not on creating pretty visualisations; but on the difference we can make to cities by using this data. Through Open Data portals, social media applications, and in many other ways, it unlocks secrets about cities and communities:

  • Who are the 17 year-olds creating today’s most weird and experimental music? (Probably by collaborating digitally from three different bedroom studios on three different continents)
  • Where is the healthiest walking route to school?
  • Is there a local company nearby selling wonderful, oven-ready curries made from local recipes and fresh ingredients?
  • If I set off for work now, will a traffic jam develop to block my way before I get there?

From Dublin to Montpellier to Madrid and around the world my colleagues are helping cities to build 21st-Century infrastructures that harness this data. As technology advances, every road, electricity substation, University building, and supermarket supply chain will exploit it. The business case is easy: we can use data to find ways to operate city services, supply chains and infrastructure more efficiently, and in a way that’s less wasteful of resources and more resilient in the face of a changing climate.

Top-down thinking is not enough

But to what extent will this enormous investment in technology help the people who live and work in cities, and those who visit them, to benefit from the Information Economy that digital technology and data are creating?

This is a vital question. The ability of digital technology to optimise and automate tasks that were once carried out by people is removing jobs that we have relied on for decades. In order for our society to be based upon a fair and productive economy, we all need to be able to benefit from the new opportunities to work and be successful that are being created by digital technology.

(Photo of Masshouse Circus, Birmingham, a concrete urban expressway that strangled the city centre before its redevelopment in 2003, by Birmingham City Council)

Too often in the last century, we got this wrong. We used the technologies of the age – concrete, lifts, industrial machinery and cars – to build infrastructures and industries that supported our mass needs for housing, transport, employment and goods; but that literally cut through and isolated the communities that create urban life.

If we make the same mistake by thinking only about digital technology in terms of its ability to create efficiencies, then as citizens, as communities, as small businesses we won’t fully benefit from it.

In contrast, one of the authors of Birmingham’s Big City Plan, the architect Kelvin Campbell, created the concept of “massive / small”. He asked: what are the characteristics of public policy and city infrastructure that create open, adaptable cities for everyone and that thereby give rise to “massive” amounts of “small-scale” innovation?

To build 21st Century cities that provide the benefits of digital technology to everyone, we need to find the design principles that enable the same “massive / small” innovation to emerge in the Information Economy, so that we can all use the simple, often free, tools available to us to create our own opportunities.

There are examples we can learn from. Almere in Holland use analytics technology to plan and predict the future development of the city; but they also engage in dialogue with their citizens about the future the city wants. Montpellier in France use digital data to measure the performance of public services; but they also engage online with their citizens in a dialogue about those services and the outcomes they are trying to achieve. The Dutch Water Authority are implementing technology to monitor, automate and optimise an infrastructure on which many cities depend; but they are also making much of the data openly available to communities, businesses, researchers and innovators to explore.

There are many issues of policy, culture, design and technology that we need to get right for this to happen, but the main objectives are clear:

  • The data from city services should be made available as Open Data and through published “Application Programming Interfaces” (APIs) so that everybody knows how they work; and can adapt them to their own individual needs.
  • The data and APIs should be made available in the form of Open Standards so that everybody can understand them; and so that the systems that we rely on can work together.
  • The data and APIs should be available to developers working on Cloud Computing platforms with Open Source software so that anyone with a great idea for a new service to offer to people or businesses can get started for free.
  • The technology systems that support the services and infrastructures we rely on should be based on Open Architectures, so that we have freedom to choose which technologies we use, and to change our minds.
  • Governments, institutions, businesses and communities should participate in an open dialogue, informed by data and enlightened by empathy, about the places we live and work in.

If local authorities and national government create planning policies, procurement practices and legislation that require that public infrastructure, property development and city services provide this openness and accessibility, then the money spent on city infrastructure and services will create cities that are open and adaptable to everyone in a digital age.
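
As a concrete illustration of the first two objectives, here is a minimal sketch, in Python, of how a developer might consume such an open city API. The payload shape and field names below are invented for illustration; real city APIs each publish their own schemas, which is exactly why open standards matter:

```python
import json

# A sample payload of the kind a city's open transport API might return.
# The field names here are hypothetical illustrations, not a real schema.
sample_response = """
{
  "stops": [
    {"name": "New Street", "next_bus_minutes": 4, "accessible": true},
    {"name": "Moor Street", "next_bus_minutes": 11, "accessible": false}
  ]
}
"""

def accessible_stops_within(payload, minutes):
    """Adapt the raw feed to one person's need: accessible stops
    with a bus due within the given number of minutes."""
    data = json.loads(payload)
    return [stop["name"] for stop in data["stops"]
            if stop["accessible"] and stop["next_bus_minutes"] <= minutes]

print(accessible_stops_within(sample_response, 10))  # ['New Street']
```

The point is not the dozen lines of code; it is that anyone can write them once the data and its meaning are published openly.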

Bottom-up innovation is not enough, either

(Coders at work at the Birmingham “Smart Hack”, photographed by Sebastian Lenton)

Not everyone has access to the technology and skills to use this data, of course. But some of the people who do will create the services that others need.

I took part in my first “hackathon” in Birmingham in 2012, when a group of people spent a weekend together asking themselves: in what way should Birmingham be better? And what can we do about it? Over two days, they wrote an app, “Second Helping”, that connected information about leftover food in the professional kitchens of restaurants and catering services, to soup kitchens that give food to people who don’t have enough.

Second Helping was a great idea; but how do you turn a great idea and an app into a change in the way that food is used in a city?

Hackathons and “civic apps” are great examples of the “bottom-up” creativity that all of us use to create value – innovating with the resources around us to make a better life, run a better business, or live in a stronger community. But “bottom-up” on its own isn’t enough.

The result of “bottom-up” innovation at the moment is that life expectancy in the poorest parts of Birmingham is more than 10 years shorter than it is in the richest parts. In London and Glasgow, it’s more than 20 years shorter.

If you’re born in the wrong place, you’re likely to die 10 years younger than someone else born in a different part of the same city. This shocking situation arises from many, complex issues; but one conclusion that it is easy to draw is that the opportunity to innovate successfully is not the same for everyone.

So how do we increase everybody’s chances of success? We need to create the policies, institutions, culture and behaviours that join up the top-down thinking that tends to control the allocation of resources and investment, especially for infrastructure, with the needs of bottom-up innovators everywhere.

Translational co-operation

(The Harborne Food School, which will open in the New Year to offer training and events in local and sustainable food)

The Economist magazine reminded us of the importance of those questions in a recent article describing the enormous investments made in public institutions such as schools, libraries and infrastructure in the past in order to distribute the benefits of the Industrial Revolution to society at large rather than concentrate them on behalf of business owners and the professional classes.

But the institutions of the past, such as the schools which to a large degree educated the population for repetitive careers in labour-intensive factories, won’t work for us today. Our world is more complicated and requires a greater degree of localised creativity to be successful. We need institutions that are able to engage with and understand individuals; and that make their resources openly available so that each of us can use them in the way that makes most sense to us. Some public services are starting to respond to this challenge, through the “Open Public Services” agenda; and the provision of Open Data and APIs by public services and infrastructure are part of the response too.

But as Andrew Zolli describes in “Resilience: why things bounce back“, there are both institutional and cultural barriers to engagement and collaboration between city institutions and localised innovation. Zolli describes the change-makers who overcome those barriers as “translational leaders” – people with the ability to engage with both small-scale, informal innovation in communities and large-scale, formal institutions with resources.

We’re trying to apply that “translational” thinking in Birmingham through the Smart City Alliance, a collaboration between 20 city institutions, businesses and innovators. The idea is to enable conversations about challenges and opportunities in the city, between people, communities, innovators and the organisations who have resources, from the City Council and public institutions to businesses, entrepreneurs and social enterprises. We try to put people and organisations with challenges or good ideas in touch with other people or organisations with the ability to help them.

This is how we join the “top-down” resources, policies and programmes of city institutions and big companies with the “bottom-up” innovation that creates value in local situations. A lot of the time it’s about listening to people we wouldn’t normally meet.

Partly as a consequence, we’ve continued to explore the ideas about local food that were first raised at the hackathon. Two years later, the Harborne Food School is close to opening as a social enterprise in a redeveloped building on Harborne High Street that had fallen out of use.

The school will be teaching courses that help caterers provide food from sustainable sources, that teach people how to set up and run food businesses, and that help people to adopt diets that prevent or help to manage conditions such as diabetes. The idea has changed since the “Second Helping” app was written, of course; but the spirit of innovation and local value is the same.

Cities that work like magic

So what does all this have to do with telepathy?

The innovations and changes caused by the internet over the last two decades have accelerated as it has made information easier and easier to access and exchange through the advent of technologies such as broadband, mobile devices and social media. But the usefulness of all of those technologies is limited by the tools required to control them – keyboards, mice and touchscreens.

Before long, we won’t need those tools at all.

Three years ago, scientists at the University of California, Berkeley used computers attached to an MRI scanner to recreate moving images from the magnetic field created by the brain of a person inside the scanner watching a film on a pair of goggles. And last year, scientists at the University of Washington used similar technology to allow one of them to move the other’s arm simply by thinking about it. A less sensitive mind-reading technology is already available as a headset from Emotiv, which my colleagues in IBM’s Emerging Technologies team have used to help a paralysed person communicate by thinking directional instructions to a computer.

Telepathy is now technology, and this is just one example of the way that the boundary between our minds, bodies and digital information will disappear over the next decade. As a consequence, our cities and lives will change in ways we’ve never imagined, and some of those changes will happen surprisingly quickly.

I can’t predict what Birmingham will or should be like in the future. As a citizen, I’ll be one of the million or so people who decide that future through our choices and actions. But I can say that the technologies available to us today are the most incredible DIY tools for creating that future that we’ve ever had access to. And relatively quickly technologies like bio-technology, 3D printing and brain/computer interfaces will put even more power in our hands.

As a parent, I get engaged in my son’s exploration of these technologies and help him be digitally aware, creative and responsible. Whenever I can, I help schools, Universities, small businesses or community initiatives to use them, because I might be helping one of IBM’s best future employees or business partners; or just because they’re exciting and worth helping. And as an employee, I try to help my company take decisions that are good for our long term business because they are good for the society that the business operates in.

We can take for granted that all of us, whatever we do, will encounter more and more incredible technologies as time passes. By remembering these very simple things, and remembering them in the hundreds of choices I make every day, I hope that I’ll be using them to play my part in building a better Birmingham, and better cities and communities everywhere.

(Shades Records in St. Anne’s Court in the 1980s. You can read about the role it played in the development of the UK’s music culture – and in the lives of its customers – in this article from Thrash Hits; or this one from Every Record Tells a Story. And if you really want to find out what it was all about, try watching Celtic Frost or VOIVOD in the 1980s!)

11 reasons computers can’t understand or solve our problems without human judgement

(Photo by Matt Gidley)

Why data is uncertain, cities are not programmable, and the world is not “algorithmic”.

Many people are not convinced that the Smart Cities movement will result in the use of technology to make places, communities and businesses in cities better. Outside their consumer enjoyment of smartphones, social media and online entertainment – to the degree that they have access to them – they don’t believe that technology or the companies that sell it will improve their lives.

The technology industry itself contributes significantly to this lack of trust. Too often we overstate the benefits of technology, or play down its limitations and the challenges involved in using it well.

Most recently, the idea that traditional processes of government should be replaced by “algorithmic regulation” – the comparison of the outcomes of public systems to desired objectives through the measurement of data, and the automatic adjustment of those systems by algorithms in order to achieve them – has been proposed by Tim O’Reilly and other prominent technologists.

These approaches work in many mechanical and engineering systems – the autopilots that fly planes or the anti-lock braking systems that we rely on to stop our cars. But should we extend them into human realms – how we educate our children or how we rehabilitate convicted criminals?
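
The mechanical cases work because the loop being closed is simple and the target is unambiguous: measure, compare, adjust. A toy sketch of that pattern (a generic proportional feedback loop, not any real autopilot or braking algorithm) makes the shape clear:

```python
class ToySystem:
    """A stand-in for a controllable quantity (speed, temperature...)."""
    def __init__(self):
        self.value = 0.0
    def measure(self):
        return self.value
    def adjust(self, delta):
        self.value += delta

def regulate(plant, target, gain=0.5, steps=50):
    """The classic control loop: measure the outcome, compare it
    to the objective, and nudge the system to close the gap."""
    for _ in range(steps):
        error = target - plant.measure()
        plant.adjust(gain * error)

plant = ToySystem()
regulate(plant, target=10.0)
print(round(plant.value, 6))  # 10.0
```

Human outcomes rarely offer a single number to measure or an uncontested target to aim at, which is where the analogy starts to strain.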

It’s clearly important to ask whether it would be desirable for our society to adopt such approaches. That is a complex debate, but my personal view is that in most cases the incredible technologies available to us today – and which I write about frequently on this blog – should not be used to take automatic decisions about such issues. They are usually more valuable when they are used to improve the information and insight available to human decision-makers – whether they are politicians, public workers or individual citizens – who are then in a better position to exercise good judgement.

More fundamentally, though, I want to challenge whether “algorithmic regulation” or any other highly deterministic approach to human issues is even possible. Quite simply, it is not.

It is true that our ability to collect, analyse and interpret data about the world has advanced to an astonishing degree in recent years. However, that ability is far from perfect, and strongly established scientific and philosophical principles tell us that it is impossible to definitively measure human outcomes from underlying data in physical or computing systems; and that it is impossible to create algorithmic rules that exactly predict them.

Sometimes automated systems succeed despite these limitations – anti-lock braking technology has become nearly ubiquitous because it is more effective than most human drivers at slowing down cars in a controlled way. But in other cases they create such great uncertainties that we must build in safeguards to account for the very real possibility that insights drawn from data are wrong. I do this every time I leave my home with a small umbrella packed in my bag despite the fact that weather forecasts created using enormous amounts of computing power predict a sunny day.

(No matter how sophisticated computer models of cities become, there are fundamental reasons why they will always be simplifications of reality. It is only by understanding those constraints that we can understand which insights from computer models are valuable, and which may be misleading. Image of Sim City by haljackey)

Only by understanding these limitations can we judge where an “algorithmic” approach can be trusted; where it needs safeguards; and where it is wholly inadequate. Some of the limitations are practical, imposed only by the sensitivity of today’s sensors and the power of today’s computers. But others are fundamental laws of physics and limitations of logical systems.

When technology companies assert that Smart Cities can create “autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits” (as London School of Economics Professor Adam Greenfield rightly criticised in his book “Against the Smart City”), they are ignoring these challenges.

A blog published by the highly influential magazine Wired recently made similar overstatements: “The Universe is Programmable” argues that we should extend the concept of an “Application Programming Interface (API)” – a facility usually offered by technology systems to allow external computer programmes to control or interact with them – to every aspect of the world, including our own biology.

To compare complex, unpredictable, emergent biological and social systems to the very logical, deterministic world of computer software is at best a dramatic oversimplification. The systems that comprise the human body range from the armies of symbiotic microbes that help us digest food in our stomachs to the consequences of using corn syrup to sweeten food to the cultural pressure associated with “size 0” celebrities. Many of those systems can’t be well modelled in their own right, let alone deterministically related to each other; let alone formally represented in an accurate, detailed way by technology systems (or even in mathematics).

We should regret and avoid the hubris that creates distrust of technology by overstating its capabilities and failing to recognise its challenges and limitations. That distrust is a barrier that prevents us from achieving the very real benefits that data and technology can bring, and that have been convincingly demonstrated in the past.

For example, an enormous contribution to our knowledge of how to treat and prevent disease was made by John Snow who used data to analyse outbreaks of cholera in London in the 19th century. Snow used a map to correlate cases of cholera to the location of communal water pipes, leading to the insight that water-borne germs were responsible for spreading the disease. We wash our hands to prevent diseases spreading through germs in part because of what we would now call the “geospatial data analysis” performed by John Snow.
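
The essence of Snow’s analysis can be sketched in a few lines of modern code: assign each case to its nearest water source, and count. The coordinates and second pump name below are invented for illustration; Snow worked from a real street map of Soho:

```python
import math
from collections import Counter

# Invented coordinates for illustration, not Snow's actual data.
pumps = {"Broad Street": (0.0, 0.0), "Carnaby Street": (5.0, 5.0)}
cases = [(0.2, 0.1), (0.3, -0.2), (-0.1, 0.4), (4.8, 5.1)]

def nearest_pump(case):
    """Assign a cholera case to the closest water pump."""
    return min(pumps, key=lambda name: math.dist(pumps[name], case))

counts = Counter(nearest_pump(case) for case in cases)
print(counts.most_common(1))  # [('Broad Street', 3)]
```

The cluster around one pump is the insight; the judgement that contaminated water, not “bad air”, explained the cluster was Snow’s, not the data’s.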

Many of the insights that we seek from analytic and smart city systems are human in nature, not physical or mathematical – for example identifying when and where to apply social care interventions in order to reduce the occurrence of emotional domestic abuse. Such questions are complex and uncertain: what is “emotional domestic abuse?” Is it abuse inflicted by a live-in boyfriend, or by an estranged husband who lives separately but makes threatening telephone calls? Does it consist of physical violence or bullying? And what is “bullying”?

(John Snow’s map of cholera outbreaks in 19th century London)

We attempt to create structured, quantitative data about complex human and social issues by using approximations and categorisations; by tolerating ranges and uncertainties in numeric measurements; by making subjective judgements; and by looking for patterns and clusters across different categories of data. Whilst these techniques can be very powerful, just how difficult it is to be sure what these conventions and interpretations should be is illustrated by the controversies that regularly arise around “who knew what, when?” whenever there is a high profile failure in social care or any other public service.

These challenges are not limited to “high level” social, economic and biological systems. In fact, they extend throughout the worlds of physics and chemistry into the basic nature of matter and the universe. They fundamentally limit the degree to which we can measure the world, and our ability to draw insight from that information.

By being aware of these limitations we are able to design systems and practises to use data and technology effectively. We know more about the weather through modelling it using scientific and mathematical algorithms in computers than we would without those techniques; but we don’t expect those forecasts to be entirely accurate. Similarly, supermarkets can use data about past purchases to make sufficiently accurate predictions about future spending patterns to boost their profits, without needing to predict exactly what each individual customer will buy.
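
The supermarket example illustrates a general statistical point: random errors that swamp predictions about individuals largely cancel out in aggregate. A sketch with simulated data (all the figures below are invented for illustration):

```python
import random

random.seed(42)  # make the illustration repeatable

# Each customer's weekly spend fluctuates unpredictably around a habit.
habits = [random.uniform(20, 100) for _ in range(1000)]
actual = [habit + random.gauss(0, 15) for habit in habits]

# Predicting any one customer is error-prone...
mean_individual_error = sum(abs(a - h) for a, h in zip(actual, habits)) / 1000

# ...but in the aggregate the random errors largely cancel out.
total_relative_error = abs(sum(actual) - sum(habits)) / sum(habits)

print(f"typical error per customer: £{mean_individual_error:.0f}")
print(f"error on the store's total: {total_relative_error:.1%}")
```

The per-customer error stays stubbornly large, while the error on the store’s total shrinks towards zero as the number of customers grows – which is why aggregate forecasts can be profitable even when individual ones are hopeless.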

We underestimate the limitations and flaws of these approaches at our peril. Whilst Tim O’Reilly cites several automated financial systems as good examples of “algorithmic regulation”, the financial crash of 2008 showed the terrible consequences of the thoroughly inadequate risk management systems used by the world’s financial institutions compared to the complexity of the system that they sought to profit from. The few institutions that realised that market conditions had changed and that their models for risk management were no longer valid relied instead on the expertise of their staff, and avoided the worst effects. Others continued to rely on models that had started to produce increasingly misleading guidance, leading to the recession that we are only now emerging from six years later, and that has damaged countless lives around the world.

Every day in their work, scientists, engineers and statisticians draw conclusions from data and analytics, but they temper those conclusions with an awareness of their limitations and any uncertainties inherent in them. By taking and communicating such a balanced and informed approach to applying similar techniques in cities, we will create more trust in these technologies than by overstating their capabilities.

What follows is a description of some of the scientific, philosophical and practical issues that lead inevitably to uncertainty in data, and to limitations in our ability to draw conclusions from it.

But I’ll finish with an explanation of why we can still draw great value from data and analytics if we are aware of those issues and take them properly into account.

Three reasons why we can’t measure data perfectly

(How Heisenberg’s Uncertainty Principle results from the dual wave/particle nature of matter. Explanation by HyperPhysics at Georgia State University)

1. Heisenberg’s Uncertainty Principle and the fundamental impossibility of knowing everything about anything

Heisenberg’s Uncertainty Principle is a cornerstone of Quantum Mechanics, which, along with General Relativity, is one of the two most fundamental theories scientists use to understand our world. It defines a limit to the precision with which certain pairs of properties of the basic particles which make up the world – such as protons, neutrons and electrons – can be known at the same time. For instance, the more accurately we measure the position of such particles, the more uncertain their speed and direction of movement become.
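
Stated formally, for a particle’s position x and momentum p (its mass times its velocity), the principle puts a hard lower bound on the product of the two uncertainties:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

Here ħ (“h-bar”) is the reduced Planck constant, roughly 10⁻³⁴ joule-seconds – so fantastically small that the limit only becomes noticeable at atomic scales.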

The explanation of the Uncertainty Principle is subtle, and lies in the strange fact that very small “particles” such as electrons and neutrons also behave like “waves”; and that “waves” like beams of light also behave like very small “particles” called “photons”. But we can use an analogy to understand it.

In order to measure something, we have to interact with it. In everyday life, we do this by using our eyes to measure lightwaves that are created by lightbulbs or the sun and that then reflect off objects in the world around us.

But when we shine light on an object, what we are actually doing is showering it with billions of photons, and observing the way that they scatter. When the object is quite large – a car, a person, or a football – the photons are so small in comparison that they bounce off without affecting it. But when the object is very small – such as an atom – the photons colliding with it are large enough to knock it out of its original position. In other words, measuring the current position of an object involves a collision which causes it to move in a random way.

This analogy isn’t exact; but it conveys the general idea. (For a full explanation, see the figure and link above). Most of the time, we don’t notice the effects of Heisenberg’s Uncertainty Principle because it applies at extremely small scales. But it is perhaps the most fundamental law that asserts that “perfect knowledge” is simply impossible; and it illustrates a wider point that any form of measurement or observation in general affects what is measured or observed. Sometimes the effects are negligible, but often they are not – if we observe workers in a time and motion study, for example, we need to be careful to understand the effect our presence and observations have on their behaviour.

2. Accuracy, precision, noise, uncertainty and error: why measurements are never fully reliable

Outside the world of Quantum Mechanics, there are more practical issues that limit the accuracy of all measurements and data.

(A measurement of the electrical properties of a superconducting device from my PhD thesis. Theoretically, the behaviour should appear as a smooth, wavy line; but the experimental measurement is affected by noise and interference that cause the signal to become “fuzzy”. In this case, the effects of noise and interference – the degree to which the signal appears “fuzzy” – are relatively small compared to the strength of the signal, and the device is usable)

We live in a “warm” world – roughly 300 degrees Celsius above what scientists call “absolute zero”, the coldest temperature possible. What we experience as warmth is in fact movement: the atoms from which we and our world are made “jiggle about” – they move randomly. When we touch a hot object and feel pain it is because this movement is too violent to bear – it’s like being pricked by billions of tiny pins.

This random movement creates “noise” in every physical system, like the static we hear in analogue radio stations or on poor quality telephone connections.

We also live in a busy world, and this activity leads to other sources of noise. All electronic equipment creates electrical and magnetic fields that spread beyond the equipment itself, and in turn affect other equipment – we can hear this as a buzzing noise when we leave smartphones near radios.

Generally speaking, all measurements are affected by random noise created by heat, vibrations or electrical interference; are limited by the precision and accuracy of the measuring devices we use; and are affected by inconsistencies and errors that arise because it is always impossible to completely separate the measurement we want to make from all other environmental factors.

Scientists, engineers and statisticians are familiar with these challenges, and use techniques developed over the course of more than a century to determine and describe the degree to which they can trust and rely on the measurements they make. They do not claim “perfect knowledge” of anything; on the contrary, they are diligent in describing the unavoidable uncertainty that is inherent in their work.
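
One of the most basic of those techniques is simply to repeat a measurement and report the mean together with its standard error. A minimal sketch, with invented readings:

```python
import math

# Eight repeated readings of the same quantity, scattered by noise
# (the values are invented for illustration).
readings = [9.8, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]

n = len(readings)
mean = sum(readings) / n
# Sample standard deviation: the typical scatter of a single reading.
std = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))
# Standard error: the uncertainty in the mean itself, which shrinks
# as more readings are taken.
std_err = std / math.sqrt(n)

print(f"{mean:.2f} ± {std_err:.2f}")  # 10.00 ± 0.07
```

The “±” is the crucial part: the result is a range, honestly stated, not a claim of perfect knowledge.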

3. The limitations of measuring the natural world using digital systems

One of the techniques we’ve adopted over the last half century to overcome the effects of noise and to make information easier to process is to convert “analogue” information about the real world (information that varies smoothly) into digital information – i.e. information that is expressed as sequences of zeros and ones in computer systems.

(When analogue signals are amplified, so is the noise that they contain. Digital signals are interpreted using thresholds: above an upper threshold, the signal means “1”, whilst below a lower threshold, the signal means “0”. A long string of “0”s and “1”s can be used to encode the same information as contained in analogue waves. By making the difference between the thresholds large compared to the level of signal noise, digital signals can be recreated to remove noise. Further explanation and image by Science Aid)

This process involves a trade-off between the accuracy with which analogue information is measured and described, and the length of the string of digits required to do so – and hence the amount of computer storage and processing power needed.

This trade-off can be clearly seen in the difference in quality between an internet video viewed on a smartphone over a 3G connection and one viewed on a high definition television using a cable network. Neither video will be affected by the static noise that affects weak analogue television signals, but the limited bandwidth of a 3G connection dramatically limits the clarity and resolution of the image transmitted.

The Nyquist–Shannon sampling theorem defines this trade-off and the limit to the quality that can be achieved in storing and processing digital information created from analogue sources. It determines the quality of digital data that we are able to create about any real-world system – from weather patterns to the location of moving objects to the fidelity of sound and video recordings. As computers and communications networks continue to grow more powerful, the quality of digital information will improve, but it will never be a perfect representation of the real world.
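
The bit-depth half of that trade-off can be sketched directly: quantising the same waveform with more bits costs more storage per sample, but shrinks the worst-case error. A toy illustration (not a real codec):

```python
import math

def digitise(signal, sample_rate, bits, duration=1.0):
    """Sample an analogue signal and round each sample to one of
    2**bits levels (assumes the signal stays within ±1)."""
    levels = 2 ** bits - 1
    samples = []
    for i in range(int(sample_rate * duration)):
        value = signal(i / sample_rate)
        samples.append(round((value + 1) / 2 * levels) / levels * 2 - 1)
    return samples

tone = lambda t: math.sin(2 * math.pi * 5 * t)  # a 5 Hz "analogue" wave
coarse = digitise(tone, sample_rate=50, bits=3)   # 3 bits per sample
fine = digitise(tone, sample_rate=50, bits=12)    # 12 bits per sample

worst_error = lambda s: max(abs(v - tone(i / 50)) for i, v in enumerate(s))
print(worst_error(coarse) > worst_error(fine))  # True
```

However many bits we spend, the error never reaches zero; it only becomes small enough to ignore for a given purpose.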

Three limits to our ability to analyse data and draw insights from it

1. Gödel’s Incompleteness Theorem and the inconsistency of algorithms

Kurt Gödel’s Incompleteness Theorem sets a limit on what can be achieved by any “closed logical system”. Examples of “closed logical systems” include computer programming languages, any system for creating algorithms – and mathematics itself.

We use “closed logical systems” whenever we create insights and conclusions by combining and extrapolating from basic data and facts. This is how all reporting, calculating, business intelligence, “analytics” and “big data” technologies work.

Gödel’s Incompleteness Theorem proves that any closed logical system can be used to create conclusions that it is not possible to show are true or false using the same system. In other words, whilst computer systems can produce extremely useful information, we cannot rely on them to prove that that information is completely accurate and valid. We have to do that ourselves.

Gödel’s theorem doesn’t stop computer algorithms that have been verified by humans using the scientific method from working; but it does mean that we can’t rely on computers to both generate algorithms and guarantee their validity.

2. The behaviour of many real-world systems can’t be reduced analytically to simple rules

Many systems in the real-world are complex: they cannot be described by simple rules that predict their behaviour based on measurements of their initial conditions.

A simple example is the “three-body problem”. Imagine a sun, a planet and a moon all orbiting each other. The movement of these three objects is governed by the force of gravity, which can be described by relatively simple mathematical equations. However, even with just three objects involved, it is not possible to use these equations to directly predict their long-term behaviour – whether they will continue to orbit each other indefinitely, or will eventually collide with each other, or spin off into the distance.
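
What we can do instead is step a simulation forward in small increments of time. The sketch below uses toy units and invented starting conditions, and a deliberately simple integration scheme; real orbital simulations are far more careful, but the principle – simulate, don’t solve – is the same:

```python
# Toy units and starting conditions, invented for illustration.
G = 1.0
bodies = [  # [x, y, vx, vy, mass]
    [0.0, 0.0, 0.0, 0.0, 100.0],   # "sun"
    [10.0, 0.0, 0.0, 3.0, 1.0],    # "planet"
    [11.0, 0.0, 0.0, 4.0, 0.01],   # "moon"
]

def step(bodies, dt=0.001):
    """Advance every body by one small time step under gravity."""
    for i, b in enumerate(bodies):
        ax = ay = 0.0
        for j, other in enumerate(bodies):
            if i == j:
                continue
            dx, dy = other[0] - b[0], other[1] - b[1]
            r = (dx * dx + dy * dy) ** 0.5
            ax += G * other[4] * dx / r ** 3
            ay += G * other[4] * dy / r ** 3
        b[2] += ax * dt
        b[3] += ay * dt
    for b in bodies:
        b[0] += b[2] * dt
        b[1] += b[3] * dt

for _ in range(1000):  # one unit of simulated time
    step(bodies)
```

Each step is an approximation, and the tiny errors accumulate – which is precisely why even a perfect simulation cannot promise a perfect long-term prediction.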

(A computer simulation by Hawk Express of a Belousov–Zhabotinsky reaction,  in which reactions between liquid chemicals create oscillating patterns of colour. The simulation is carried out using “cellular automata” a technique based on a grid of squares which can take different colours. In each “turn” of the simulation, like a turn in a board game, the colour of each square is changed using simple rules based on the colours of adjacent squares. Such simulations have been used to reproduce a variety of real-world phenomena)

As Stephen Wolfram argued in his controversial book “A New Kind of Science” in 2002, we need to take a different approach to understanding such complex systems. Rather than using mathematics and logic to analyse them, we need to simulate them, often using computers to create models of the elements from which complex systems are composed, and the interactions between them. By running simulations based on a large number of starting points and comparing the results to real-world observations, insights into the behaviour of the real-world system can be derived. This is how weather forecasts are created, for example. 
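The cellular-automaton idea is simple enough to sketch directly. This toy implements Wolfram’s elementary “Rule 30”, one of the examples in “A New Kind of Science”: each square’s next colour depends only on itself and its two neighbours, yet the global pattern is intricate and hard to predict.

```python
# Wolfram's elementary "Rule 30": a one-dimensional cellular automaton.
RULE = 30

def step(cells):
    out = []
    for i in range(len(cells)):
        left = cells[i - 1]                    # wraps around at the edges
        centre = cells[i]
        right = cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (centre << 1) | right
        out.append((RULE >> pattern) & 1)      # look the pattern up in the rule
    return out

cells = [0] * 31
cells[15] = 1                                  # one "on" square in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Running it prints a triangle of growing, irregular structure from a single “on” square – behaviour that nothing in the one-line rule obviously predicts.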

But as we all know, weather forecasts are not always accurate. Simulations are approximations to real-world systems, and their accuracy is restricted by the degree to which digital data can be used to represent a non-digital world. For this reason, conclusions and predictions drawn from simulations are usually “average” or “probable” outcomes for the system as a whole, not precise predictions of the behaviour of the system or any individual element of it. This is why weather forecasts are often wrong; and why they predict likely levels of rain and windspeed rather than the shape and movement of individual clouds.


(A simple and famous example of a computer programme that never stops running because it calls itself. The output continually varies by printing out characters based on random number generation. Image by Prosthetic Knowledge)

3. Some problems can’t be solved by computing machines

If I consider a simple question such as “how many letters are in the word ‘calculation’?”, I can easily convince myself that a computer programme could be written to answer the question; and that it would find the answer within a relatively short amount of time. But some problems are much harder to solve, or can’t even be solved at all.
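The easy case really is easy – a one-line sketch:

```python
# Counting the letters in "calculation" is a single library call.
count = len("calculation")
print(count)  # prints 11
```

The interesting cases are the ones where no such short, terminating procedure exists.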

For example, a “Wang Tile” (see image below) is a square tile formed from four triangles of different colours. Imagine that you have bought a set of tiles of various colour combinations in order to tile a wall in a kitchen or bathroom. Given the set of tiles that you have bought, is it possible to tile your wall so that triangles of the same colour line up with each other, forming a pattern of “Wang Tile” squares?

In 1966 Robert Berger proved that no algorithm exists that can answer that question. There is no way to determine in advance whether a given set of tiles can produce such a pattern – or how long it will take to find out. You just have to start tiling the wall and discover the answer the hard way.

One of the most famous examples of this type of problem is the “halting problem” in computer science. Some computer programmes finish executing their commands relatively quickly. Others can run indefinitely if they contain a “loop” instruction that never ends. For others which contain complex sequences of loops and calls from one section of code to another, it may be very hard to tell whether the programme finishes quickly, or takes a long time to complete, or never finishes its execution at all.

Alan Turing, one of the most important figures in the development of computing, proved in 1936 that no general algorithm exists to determine whether an arbitrary computer programme finishes its execution. In other words, whilst there are many useful computer programmes in the world, there are also problems that computer programmes simply cannot solve.
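A concrete way to feel the force of Turing’s result is the loop below. It always seems to halt, but nobody has proved that it halts for every starting value – that is the “Collatz conjecture”, open since 1937 – and Turing’s theorem tells us no general procedure can settle such questions for arbitrary programmes.

```python
# Repeatedly halve even numbers and map odd n to 3n+1. Does this always
# reach 1? Nobody knows: this is the Collatz conjecture.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # prints 111 - but a proof that the loop halts
                          # for *every* starting value remains out of reach
```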

(A set of Wang Tiles, and a pattern of coloured squares created by tiling them. Given any random set of tiles of different colour combinations, there is no set of rules that can be relied on to determine whether a valid pattern of coloured squares can be created from them. Sometimes, you have to find out by trial and error. Images from Wikipedia)

Five reasons why the human world is messy, unpredictable, and can’t be perfectly described using data and logic

1. Our actions create disorder

The 2nd Law of Thermodynamics is a good candidate for the most fundamental law of science. It states that as time progresses, the universe becomes more disorganised. It guarantees that ultimately – in billions of years – the Universe will die as all of the energy and activity within it dissipates.

An everyday practical consequence of this law is that every time we act to create value – building a shed, using a car to get from one place to another, cooking a meal – our actions eventually cause a greater amount of disorder to be created as a consequence – as noise, pollution, waste heat or landfill refuse.

For example, if I spend a day building a shed, then to create that order and value from raw materials, I consume structured food and turn it into sewage. Or if I use an electric forklift to stack a pile of boxes, I use electricity that has been created by burning structured coal into smog and ash.

So it is literally impossible to create a “perfect world”. Whenever we act to make a part of the world more ordered, we create disorder elsewhere. And ultimately – thankfully, long after you and I are dead – disorder is all that will be left.

2. The failure of Logical Atomism: why the human world can’t be perfectly described using data and logic

In the 20th Century, two of the most famous and accomplished philosophers in history, Bertrand Russell and Ludwig Wittgenstein, invented “Logical Atomism”, a theory that the entire world could be described using “atomic facts” – independent and irreducible pieces of knowledge – combined with logic.

But despite 40 years of work, these two supremely intelligent people could not get their theory to work: “Logical Atomism” failed. It is not possible to describe our world in that way.

One cause of the failure was the insurmountable difficulty of identifying truly independent, irreducible atomic facts. “The box is red” and “the circle is blue”, for example, aren’t independent or irreducible facts for many reasons. “Red” and “blue” are two conventions of human language used to describe the perceptions created when electro-magnetic waves of different frequencies arrive at our retinas. In other words, they depend on and relate to each other through a number of sophisticated systems.

Despite centuries of scientific and philosophical effort, we do not have a complete understanding of how to describe our world at its most basic level. As physicists have explored the world at smaller and smaller scales, Quantum Mechanics has emerged as the most fundamental theory for describing it – it is the closest we have come to finding the “irreducible facts” that Russell and Wittgenstein were looking for. But whilst the mathematical equations of Quantum Mechanics predict the outcomes of experiments very well, after nearly a century, physicists still don’t really agree about what those equations mean. And as we have already seen, Heisenberg’s Uncertainty Principle prevents us from ever having perfect knowledge of the world at this level.

Perhaps the most important failure of logical atomism, though, was that it proved impossible to use logical rules to turn “facts” at one level of abstraction – for example, “blood cells carry oxygen”, “nerves conduct electricity”, “muscle fibres contract” – into facts at another level of abstraction – such as “physical assault is a crime”. The human world and the things that we care about can’t be described using logical combinations of “atomic facts”. For example, how would you define the set of all possible uses of a screwdriver, from prising the lids off paint tins to causing a short-circuit by jamming it into a switchboard?

Our world is messy, subjective and opportunistic. It defies universal categorisation and logical analysis.

(A Pescheria in Bari, Puglia, where a fish-market price information service makes it easier for local fisherman to identify the best buyers and prices for their daily catch. Photo by Vito Palmi)

3. The importance and inaccessibility of “local knowledge” 

Because the tool we use for calculating and agreeing value when we exchange goods and services is money, economics is the discipline that is often used to understand the large-scale behaviour of society. We often quantify the “growth” of society using economic measures, for example.

But this approach is notorious for overlooking social and environmental characteristics such as health, happiness and sustainability. Alternatives exist, such as the Social Progress Index, or the measurement framework adopted by the United Nations 2014 Human Development Report on world poverty; but they are still high level and abstract.

Such approaches struggle to explain localised variations, and in particular cannot predict the behaviours or outcomes of individual people with any accuracy. This “local knowledge problem” is caused by the fact that a great deal of the information that determines individual actions is personal and local, and not measurable at a distance – the experienced eye of the fruit buyer assessing not just the quality of the fruit but the quality of the farm and farmers that produce it, as a measure of the likely consistency of supply; the emotional attachments that cause us to favour one brand over another; or the degree of community ties between local businesses that influence their propensity to trade with each other.

“Sharing economy” business models that use social media and reputation systems to enable suppliers and consumers of goods and services to find each other and transact online are opening up this local knowledge to some degree. Local food networks, freecycling networks, and land-sharing schemes all use this technology to the benefit of local communities whilst potentially making information about detailed transactions more widely available. And to some degree, the human knowledge that influences how transactions take place can be encoded in “expert systems” which allow computer systems to codify the quantitative and heuristic rules by which people take decisions.

But these technologies are only used in a subset of the interactions that take place between people and businesses across the world, and it is unlikely that they’ll become ubiquitous in the foreseeable future (or that we would want them to become so). Will we ever reach the point where prospective house-buyers delegate decisions about where to live to computer programmes operating in online marketplaces rather than by visiting places and imagining themselves living there? Will we somehow automate the process of testing the freshness of fish by observing the clarity of their eyes and the freshness of their smell before buying them to cook and eat?

In many cases, while technology may play a role introducing potential buyers and sellers of goods and services to each other, it will not replace – or predict – the human behaviours involved in the transaction itself.

(Medway Youth Trust use predictive and textual analytics to draw insight into their work helping vulnerable children. They use technology to inform expert case workers, not to take decisions on their behalf.)

4. “Wicked problems” cannot be described using data and logic

Despite all of the challenges associated with problems in mathematics and the physical sciences, it is nevertheless relatively straightforward to frame and then attempt to solve problems in those domains; and to determine whether the resulting solutions are valid.

As the failure of Logical Atomism showed, though, problems in the human domain are much more difficult to describe in any systematic, complete and precise way – a challenge known as the “frame problem” in artificial intelligence. This is particularly true of “wicked problems” – challenges such as social mobility or vulnerable families that are multi-faceted, and consist of a variety of interdependent issues.

Take job creation, for example. Is that best accomplished through creating employment in taxpayer-funded public sector organisations? Or by allowing private-sector wealth to grow, creating employment through “trickle-down” effects? Or by maximising overall consumer spending power as suggested by “middle-out” economics? All of these ideas are described not using the language of mathematics or other formal logical systems, but using natural human language which is subjective and inconsistent in use.

The failure of Logical Atomism to fully represent such concepts in formal logical systems through which truth and falsehood can be determined with certainty emphasises what we all understand intuitively: there is no single “right” answer to many human problems, and no single “right” action in many human situations.

(An electricity bill containing information provided by OPower comparing one household’s energy usage to their neighbours. Image from Grist)

5. Behavioural economics and the caprice of human behaviour

“Behavioural economics” attempts to predict the way that humans behave when taking choices that have a measurable impact on them – for example, whether to put the washing machine on at 5pm when electricity is expensive, or at 11pm when it is cheap.

But predicting human behaviour is notoriously unreliable.

For example, in a smart water-meter project in Dubuque, Iowa, households that were told how their water conservation compared to that of their near neighbours were found to be twice as likely to take action to improve their efficiency as those who were only told the details of their own water use. In other words, people who were given quantified evidence that they were less responsible water users than their neighbours changed their behaviour. OPower have used similar techniques to help US households save 1.9 terawatt hours of power simply by including a report based on data from smart meters in a printed letter sent with customers’ electricity bills.

These are impressive achievements; but they are not always repeatable. A recycling scheme in the UK that adopted a similar approach found instead that it lowered recycling rates across the community: households who learned that they were putting more effort into recycling than their neighbours asked themselves “if my neighbours aren’t contributing to this initiative, then why should I?”

Low carbon engineering technologies like electric vehicles have clearly defined environmental benefits and clearly defined costs. But most Smart Cities solutions are less straightforward. They are complex socio-technical systems whose outcomes are emergent. Our ability to predict their performance and impact will certainly improve as more are deployed and analysed, and as University researchers, politicians, journalists and the public assess them. But we will never predict individual actions using these techniques, only the average statistical behaviour of groups of people. This can be seen from OPower’s own comparison of their predicted energy savings against those actually achieved – the predictions are good, but the actual behaviour of OPower’s customers shows a high degree of apparently random variation. Those variations are the result of the subjective, unpredictable and sometimes irrational behaviour of real people.
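The contrast between predictable averages and unpredictable individuals is easy to demonstrate with a toy model (all numbers here are invented): each simulated household saves a random amount of energy around a “true” mean of 2%. The group average is forecastable; the individual households are not.

```python
import random

# 10,000 simulated households, each saving a random amount of energy
# drawn from a distribution with mean 2% and standard deviation 3%.
random.seed(42)
savings = [random.gauss(2.0, 3.0) for _ in range(10_000)]

group_average = sum(savings) / len(savings)
print(round(group_average, 1))                         # close to the 2% mean
print(round(min(savings), 1), round(max(savings), 1))  # individuals vary widely
```

The large sample pins the average down tightly, but knowing the average tells us almost nothing about what any one household will do – some will even use more energy than before.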

We can take insight from Behavioural Economics and other techniques for analysing human behaviour in order to create appropriate strategies, policies and environments that encourage the right outcomes in cities; but none of them can be relied on to give definitive solutions to any individual person or situation. They can inform decision-making, but are always associated with some degree of uncertainty. In some cases, the uncertainty will be so small as to be negligible, and the predictions can be treated as deterministic rules for achieving the desired outcome. But in many cases, the uncertainty will be so great that predictions can only be treated as general indications of what might happen; whilst individual actions and outcomes will vary greatly.

(Of course it is impossible to predict individual criminal actions as portrayed in the film “Minority Report”. But it is very possible to analyse past patterns of criminal activity, compare them to related data such as weather and social events, and predict the likelihood of crimes of certain types occurring in certain areas. Cities such as Memphis and Chicago have used these insights to achieve significant reductions in crime)

Learning to value insight without certainty

Mathematics and digital technology are incredibly powerful; but they will never perfectly and completely describe and predict our world in human terms. In many cases, our focus for using them should not be on automation: it should be on the enablement of human judgement through better availability and communication of information. And in particular, we should concentrate on communicating accurately the meaning of information in the context of its limitations and uncertainties.

There are exceptions where we automate systems because of a combination of a low-level of uncertainty in data and a large advantage in acting autonomously on it. For example, anti-lock braking systems save lives by using automated technology to take thousands of decisions more quickly than most humans would realise that even a single decision needed to be made; and do so based on data with an extremely low degree of uncertainty.

But the most exciting opportunity for us all is to learn to become sophisticated users of information that is uncertain. The results of textual analysis of sentiment towards products and brands expressed in social media are far from certain; but they are still of great value. Similar technology can extract insights from medical research papers, case notes in social care systems, maintenance logs of machinery and many other sources. Those insights will rarely be certain; but properly assessed by people with good judgement they can still be immensely valuable.

This is a much better way to understand the value of technology than ideas like “perfect knowledge” and “algorithmic regulation”. And it is much more likely that people will trust the benefits that we claim new technologies can bring if we are open about their limitations. People won’t use technologies that they don’t trust; and they won’t invest their money in them, or vote for politicians who say they’ll spend their taxes on them.

Thank you to Richard Brown and Adrian McEwen for discussions on Twitter that helped me to prepare this article. A more in-depth discussion of some of the scientific and philosophical issues I’ve described, and an exploration of the nature of human intelligence and its non-deterministic characteristics, can be found in the excellent paper “Answering Descartes: Beyond Turing” by Stuart Kauffman, published by MIT Press.

From field to market to kitchen: smarter food for smarter cities

(A US Department of Agriculture inspector examines a shipment of imported frozen meat in New Orleans in 2013. Photo by Anson Eaglin)

One of the biggest challenges associated with the rapid urbanisation of the world’s population is working out how to feed billions of extra citizens. I’m spending an increasing amount of my time understanding how technology can help us to do that.

It’s well known that the populations of many of the world’s developing nations – and some of those that are still under-developed – are rapidly migrating from rural areas to cities. In China, for example, hundreds of millions of people are moving from the countryside to cities, leaving behind a lifestyle based on extended family living and agriculture for employment in business and a more modern lifestyle.

The definitions of “urban areas” used in many countries undergoing urbanisation include a criterion that less than 50% of employment and economic activity is based on agriculture (the appendices to the 2007 revision of the UN World Urbanisation Prospects summarise such criteria from around the world). Cities import their food.

In the developed countries of the Western world, this criterion is missing from most definitions of cities, which focus instead on the size and density of population. In the West, the transformation of economic activity away from agriculture took place during the Industrial Revolution of the 18th and 19th Centuries.

Urbanisation and the industrialisation of food

The food that is now supplied to Western cities is produced through a heavily industrialised process. But whilst the food supply chain had to scale dramatically to feed the rapidly growing cities of the Industrial Revolution, the processes it used, particularly in growing food and creating meals from it, did not industrialise – i.e. reduce their dependence on human labour – until much later.

As described by Population Matters, industrialisation took place after the Second World War when the countries involved took measures to improve their food security after struggling to feed themselves during the War whilst international shipping routes were disrupted. Ironically, this has now resulted in a supply chain that’s even more internationalised than before as the companies that operate it have adopted globalisation as a business strategy over the last two decades.

This industrial model has led to dramatic increases in the quantity of food produced and distributed around the world, as the industry group the Global Harvest Initiative describes. But whether it is the only way, or the best way, to provide food to cities at the scale required over the next few decades is the subject of much debate and disagreement.

(Irrigation enables agriculture in the arid environment of Al Jawf, Libya. Photo by Future Atlas)

One of the critical voices is Philip Lymbery, the Chief Executive of Compassion in World Farming, who argues passionately in “Farmageddon” that the industrial model of food production and distribution is extremely inefficient and risks long-term damage to the planet.

Lymbery questions whether the industrial system is sustainable financially – it depends on vast subsidy programmes in Europe and the United States; and he questions its social benefits – industrial farms are highly automated and operate in formalised international supply chains, so they do not always provide significant food or employment in the communities in which they are based.

He is also critical of the industrial system’s environmental impact. In order to optimise food production globally for financial efficiency and scale, single-use industrial farms have replaced the mixed-use, rotational agricultural systems that replenish nutrients in soil and that support insect species that are crucial to the pollination of plants. They also create vast quantities of animal waste that causes pollution because in the single-use industrial system there are no local fields in need of manure to fertilise crops.

And the challenges associated with feeding the growing populations of the world’s cities are not only to do with long-term sustainability. They are also a significant cause of ill-health and social unrest today.

Intensity, efficiency and responsibility

Our current food systems fail to feed nearly 1 billion people properly, let alone the 2 billion rise in global population expected by 2050. We already use 60% of the world’s fresh water to produce food – if we try to increase food production without changing the way that water is used, then we’ll simply run out of it, with dire consequences. In fact, as the world’s climate changes over the next few decades, less fresh water will be available to grow food. As a consequence of this and other effects of climate change, the UK supermarket ASDA reported recently that 95% of their fresh food supply is already exposed to climate risk.

The supply chains that provide food to cities are vulnerable to disruption – in the 2000 strike by the drivers who deliver fuel to petrol stations in the UK, some city supermarkets came within hours of running out of food completely; and disruptions to food supply have already caused alarming social unrest across the world.

These challenges will intensify as the world’s population grows, and as the middle classes double in size to 5 billion people, dramatically increasing demand for meat – and hence demand for food for the animals which produce it. Overall, the United Nations Food and Agriculture Organization estimates that we will need to produce 70% more food than today by 2050.

(Insect delicacies for sale in Phnom Penh’s central market. The United Nations suggested last year that more of us should join the 2 billion people who include insects in their diet – a nutritious and environmentally efficient source of food)

But increasing the amount of food available to feed people doesn’t necessarily mean growing more food, either by further intensifying existing industrial approaches or by adopting new techniques such as vertical farming or hydroponics. In fact, a more recent report issued by the United Nations and partner agencies cautioned that it was unlikely that the necessary increase in available food would be achieved through yield increases alone. Instead, it recommended reducing food loss, waste, and “excessive demand” for animal products.

There are many ways we might grow, distribute and use food more efficiently. We currently waste about 30% of the food we produce: some through food that rots before it reaches our shops or dinner tables, some through unpopularity (such as bread crusts, or fruit and vegetables that aren’t the “right” shape and colour), and some because we simply buy more than we need to eat. If those inefficiencies were corrected, we would already be producing enough food to feed 11 billion people – comfortably more than the 9 billion population predicted for the Earth by 2050.

I think that technology has some exciting roles to play in how we respond to those challenges.

Smarter food in the field: data for free, predicting the future and open source beekeeping

New technologies give us a great opportunity to monitor, measure and assess the agricultural process and the environment in which it takes place.

The SenSprout sensor can measure and transmit the moisture content of soil; it is made simply by printing an electronic circuit design onto paper using commercially-available ink containing silver nano-particles; and it powers itself using ambient radio waves. We can use sensors like SenSprout to understand and respond to the natural environment, using technology to augment the traditional knowledge of farmers.

By combining data from sensors such as SenSprout and local weather monitoring stations with national and international forecasts, my colleagues in IBM Research are investigating how advanced weather prediction technology can enable approaches to agriculture that are more efficient and precise in their use of water. A trial project in Flint River, Georgia is allowing farmers to apply exactly the right amount of water at the right time to their crops, and no more.
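The principle behind such precision irrigation is simple to sketch, even though the real systems are far more sophisticated. This deliberately naive fragment (the function, units and thresholds are all invented for illustration, not drawn from the Flint River project) applies only the shortfall between a target soil moisture and what the sensor reading plus the rain forecast already promise:

```python
# A naive precision-irrigation rule. The names, units and thresholds are
# invented for illustration; real systems model soil and crops in far
# greater detail.
def irrigation_needed(soil_moisture_pct, forecast_rain_mm,
                      target_pct=35.0, mm_per_pct=2.0):
    """Millimetres of water to apply after crediting the forecast rain."""
    expected_pct = soil_moisture_pct + forecast_rain_mm / mm_per_pct
    shortfall_pct = max(0.0, target_pct - expected_pct)
    return shortfall_pct * mm_per_pct

print(irrigation_needed(25.0, 10.0))  # 10mm of forecast rain covers half the
                                      # gap, so apply 10.0mm ourselves
```

The saving comes from the second input: without the forecast, a naive controller would irrigate to the full target and the rain would be wasted.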

Such approaches improve our knowledge of the natural environment, but they do not control it. Nature is wild, the world is uncertain, and farmers’ livelihoods will always be exposed to risk from changing weather patterns and market conditions. The value of technology is in helping us to sense and respond to those changes. “Pasture Scout”, for example, does that by using social media to connect farmers in need of pasture to graze their cattle with other farmers with land of the right sort that is currently underused.

These possibilities are not limited to industrial agriculture or to developed countries. For example, the Kilimo Salama scheme adds resilience to the traditional practices of subsistence farmers by using remote weather monitoring and mobile phone payment schemes to provide affordable insurance for their crops.

Technology is also helping us to understand and respond to the environmental impact of the agricultural practices that have developed in previous decades: as urban beekeepers seek to replace lost natural habitats for bees, the Open Source Beehive project is using technology to help them identify the factors leading to the “colony collapse disorder” phenomenon that threatens the world’s bee population.

Smarter food in the marketplace: local food, the sharing economy and soil to fork traceability

The emergence of the internet as a platform for enabling sales, marketing and logistics over the last decade has enabled small and micro-businesses to reach markets across the world that were previously accessible only to much larger organisations with international sales and distribution networks. The proliferation of local food and urban farming initiatives shows that this transformation is changing the food industry too, where online marketplaces such as Big Barn and FoodTrade make it easier for consumers to buy locally produced food, and for producers to sell it.

This is not to say that vast industrial supply-chains will disappear overnight to be replaced by local food networks: they clearly won’t. But just as large-scale film and video production has adapted to co-exist and compete with millions of small-scale, “long-tail” video producers, so too the food industry will adjust. The need for co-existence and competition with new entrants should lead to improvements in efficiency and impact – the supermarket Tesco’s “Buying Club” shows how one large food retailer is already using these ideas to provide benefits that include environmental efficiencies to its smaller suppliers.

(A Pescheria in Bari, Puglia photographed by Vito Palmi)

One challenge is that food – unlike music and video – is a fundamentally physical commodity: exchanging it between producers and consumers requires transport and logistics. The adoption by the food industry of “sharing economy” approaches – business models that use social media and analytics to create peer-to-peer transactions, and that replace bulk movement patterns by thousands of smaller interactions between individuals – will be dependent on our ability to create innovative distribution systems to support them. Zaycon Foods operate one such system, using online technology to allow consumers to collectively negotiate prices for food that they then collect from farmers at regular local events.

Rather than replacing existing markets and supply chains, one role that technology is already playing is to give food producers better insight into their behaviour. M-farm links farmers in Kenya to potential buyers for their produce, and provides them with real-time information about prices; and the University of Bari in Puglia, Italy operates a similar fish-market pricing information service that makes it easier for local fisherman to identify the best buyers and prices for their daily catch.

Whatever processes are involved in getting food from where it’s produced to where it’s consumed, there’s an increasing awareness of the need to track those movements so that we know what we’re buying and eating, both to prevent scandals such as last year’s discovery of horsemeat in UK food labelled as containing beef; and so that consumers can make buying decisions based on accurate information about the source and quality of food. The “eSporing” (“eTraceability”) initiative between food distributors and the Norwegian government explored these approaches following an outbreak of E. coli in 2006.

As sensors become more capable and less expensive, we’ll be able to add more data and insight into this process. Soil quality can be measured using sensors such as SenSprout; plant health could be measured by similar sensors or by video analytics using infra-red data. The gadgets that many of us use whilst exercising to measure our physical activity and use of calories could be used to assess the degree to which animals are able to exercise. And scientists at the University of the West of England in Bristol have developed a quick, cheap sensor that can detect harmful bacteria and the residues of antibiotics in food. (The overuse of antibiotics in food production has harmful side effects, and in particular is leading some bacteria that cause dangerous diseases in humans to develop resistance to treatment).

This advice from the Mayo Clinic in the United States gives one example of the link between the provenance of food and its health qualities, explaining that beef from cows fed on grass can have lower levels of fat and higher levels of beneficial “omega-3 fatty acids” than what they call “conventional beef” – beef from cows fed on grain delivered in lorries. (They appear to have forgotten the “convention” established by several millennia of evolution and thousands of years of animal husbandry that cows eat grass).

(Baltic Apple Pie – a recipe created by IBM’s Watson computer)

All of this information contributes to describing both the taste and health characteristics of food; and when it’s available, we’ll have the opportunity to make more informed choices about what we put on our tables.

Smarter food in the kitchen: cooking, blogging and cognitive computing

One of the reasons that the industrial farming system is so wasteful is that it is optimised to supply Western diets that include an unhealthy amount of meat; and to do so at an unrealistically low price for consumers. Enormous quantities of fish and plants – especially soya beans – that could be eaten by people as components of healthy diets are instead fed to industrially-farmed animals to produce this cheap meat. As a consequence, in the developed world many of us are eating more meat than is healthy for us. (Some of the arguments on this topic were debated by the UK’s Guardian newspaper last year).

But whilst eating less meat and more fish and vegetables is a simple idea, putting it into practice is a complex cultural challenge.

A recent report found that “a third of UK adults struggle to afford healthy food”. But the underlying cause is not economic: it is a lack of familiarity with the cooking and food preparation techniques that turn cheap ingredients into healthy, tasty food; and a cultural preference for red meat and packaged meals. The Sustainable Food School that is under development in Birmingham is one example of an initiative intended to address those challenges through education and awareness.

Engagement through traditional and social media also has an influence. Recent UK examples include the celebrity chefs who have campaigned for a shift in our diets towards more sustainably sourced fish; the schoolgirl who provoked a national debate about the standard and healthiness of school meals simply by blogging about what she was served each day; and the food blogger Jack Monroe, who demonstrated how she could feed herself and her two-year-old son healthy, interesting food on a budget of £10 a week.

My colleagues in IBM Research have explored turning IBM’s Watson cognitive computing technology to this challenge. In an exercise similar to the “invention test” common to television cookery competitions, they have challenged Watson to create recipes from a restricted set of ingredients (such as might be left in the fridge and cupboards at the end of the week) and which meet particular criteria for health and taste.
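Watson’s recipe creation draws on models of flavour chemistry and novelty that are far beyond a few lines of code, but the “invention test” framing itself can be sketched very simply: given what is in the fridge and a constraint on healthiness, which dishes are feasible? Everything below is a toy stand-in for illustration; the dish data and the calorie budget are made up, and this is not how Watson actually works.

```python
# A toy "invention test": pick dishes that can be made entirely from
# what's left in the fridge and that stay under a calorie budget.
# All dish data is invented for illustration.
fridge = {"eggs", "spinach", "feta", "onion"}

dishes = [
    {"name": "spinach omelette", "needs": {"eggs", "spinach", "onion"}, "kcal": 320},
    {"name": "feta salad",       "needs": {"feta", "spinach", "tomato"}, "kcal": 250},
    {"name": "shakshuka",        "needs": {"eggs", "onion", "tomato"},   "kcal": 400},
]

def invention_test(pantry, dishes, kcal_budget):
    # A dish is feasible if its ingredients are a subset of the pantry
    # and it meets the (crude) health criterion.
    return [d["name"] for d in dishes
            if d["needs"] <= pantry and d["kcal"] <= kcal_budget]

print(invention_test(fridge, dishes, 350))  # ['spinach omelette']
```

What distinguishes a cognitive system from this filter is that Watson composes genuinely new combinations rather than selecting from a fixed list; but the constraints, restricted ingredients and criteria for health and taste, play the same role in both.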

(An example of local food processing: my own homemade chorizo.)

Food, technology, passion

The future of food is a complex and contentious issue – the controversy between the productivity benefits of industrial agriculture and its environmental and social impact being just one example. I have touched on but not engaged in those debates in this article – my expertise is in technology, not in agriculture, and I’ve attempted to link to a variety of sources from all sides of the debate.

Some of the ideas for providing food to the world’s growing population in the future are no less challenging, whether those ideas are cultural or technological. The United Nations suggested last year, for example, that more of us should join the 2 billion people who include insects in their diet. Insects are a nutritious and environmentally efficient source of food, but those of us who have grown up in cultures that do not consider them as food are – for the most part – not at all ready to contemplate eating them. Artificial meat, grown in laboratories, is another increasingly feasible source of protein in our diets. It challenges our assumption that food is natural, but has some very reasonable arguments in its favour.

It’s a trite observation, but food culture is constantly changing. My 5-year-old son routinely demands foods such as hummus and guacamole that are unremarkable now but that were far from commonplace when I was a child. Ultimately, our food systems and diets will have to adapt and change again or we’ll run out of food, land and water.

Technology is one of the tools that can help us to make those changes. But as Kentaro Toyama famously said: technology is not the answer; it is the amplifier of human intention.

So what really excites me is not technology, but the passion for food that I see everywhere: from making food for our own families at home, to producing it in local initiatives such as Loaf, Birmingham’s community bakery; and from using technology in programmes that contribute to food security in developing nations to setting food sustainability at the heart of corporate business strategy.

There are no simple answers, but we are all increasingly informed and well-intentioned. And as technology continues to evolve it will provide us with incredible new tools. Those are great ingredients for an “invention test” for us all to find a sustainable, healthy and tasty way to feed future cities.
