Intelligent Transport Systems need to get wiser … or transport will keep on killing us

(The 2nd Futurama exhibition at the 1964 New York World’s Fair displayed a vision for the future that in many ways reflected the concrete highways and highrises constructed at the time. We now recognise that the environments those structures created often failed to support healthy personal and community life. In 50 years’ time, how will we perceive today’s visions of Intelligent Transport Systems? Photo by James Vaughan)


Two weeks ago the Transport Systems Catapult published a “Traveller Needs and UK Capability Study”, which it called “the UK’s largest traveller experience study” – a survey of 10,000 people and their travelling needs and habits, complemented by interviews with 100 industry experts and companies. The survey identifies a variety of opportunities for UK innovators in academia and industry to exploit the predicted £56 billion market for intelligent mobility solutions in the UK by 2025, and £900 billion market worldwide. It is rightly optimistic that the UK can be a world leader in those markets.

This is a great example of the enormous role that the Catapult programme – inspired by Germany’s Fraunhofer Institutes – can play in transferring innovation and expertise out of University research and into the commercial economy, and in enabling the UK’s expert small businesses to reach opportunities in international markets.

But it’s also a great example of failing to connect the ideas of Intelligent Transport with their full impact on society.

I don’t think we should call any transport initiative “intelligent” unless it addresses both the full relationship between the physical mobility of people and goods and social mobility, and the significant social impact of transport infrastructure – which goes far beyond issues of congestion and pollution.

The new study not only fails to address these topics, it doesn’t mention them at all. In that light, such a significant report represents a failure to meet the Catapult’s own mission statement, which incorporates a focus on “wellbeing” – as quoted in the introduction to the report:

“We exist to drive UK global leadership in Intelligent Mobility, promoting sustained economic growth and wellbeing, through integrated, efficient and sustainable transport systems.” [My emphasis]

I’m surprised by this failing in the study as both the engineering consultancy Arup and the Future Cities Catapult – two organisations that have worked extensively to promote human-scale, walkable urban environments and human-centric technology – were involved in its production; as was at least one social scientist (although the experts consulted were otherwise predominantly from the engineering, transport and technology industries or associated research disciplines).

I note also that the list of reports reviewed for the study does not include a single work on urbanism. Jane Jacobs’ “The Death and Life of Great American Cities”, Jan Gehl’s “Cities for People”, Jeff Speck’s “Walkable City” and Charles Montgomery’s “The Happy City”, for example, all describe very well the way that transport infrastructures and traffic affect the communities in which most of the world’s population lives. That perspective is sorely lacking in this report.

Transport is a balance between life and death. Intelligent transport shouldn’t forget that.

These omissions matter greatly because they are not just lost areas of opportunity for the UK economy to develop solutions (although that’s certainly what they are). More importantly, transport systems that are designed without taking their full social impact into account have the most serious social consequences – they contribute directly to deprivation, economic stagnation, a lack of social mobility, poor health, injuries and premature deaths.

As town planner Jeff Speck and urban consultant Charles Montgomery recently described at length in “Walkable City” and “The Happy City” respectively, the most vibrant, economically successful urban environments tend to be those where people are able to walk between their homes, places of work, shops, schools, local transport hubs and cultural amenities; and where they feel safe doing so.

But many people do not feel that it is safe to walk about the places in which they live, work and relax. Transport is not the only cause of that concern; but it is certainly a significant one.

After motorcyclists (another group of travellers who are poorly represented), pedestrians and cyclists are by far the most likely travellers to be injured in accidents. According to the Royal Society for the Prevention of Accidents, for example, more than 60 child pedestrians are killed or injured every week in the UK – that’s over 3000 every year. No wonder that the number of children walking to school has progressively fallen as car ownership has risen, contributing (though it is obviously far from the sole cause) to rising levels of childhood obesity. In its 60 pages, the Traveller Needs study doesn’t mention the safety of pedestrians at all.

A recent working paper published by Transport for London found that the risk and severity of injury for different types of road users – pedestrians, cyclists, drivers, car passengers, bus passengers etc. – vary in complex and unexpected ways; and that in particular, the risks for each type of traveller vary very differently according to age, as our personal behaviours change, depending on the journeys we undertake, and according to the nature of the transport infrastructure we use.

These are not simple issues; they are deeply challenging. They are created by the tension between our need to travel in order to carry out social and economic interactions, and the physical nature of transport, which takes up space and creates pollution and danger.

As a consequence, many of the most persistently deprived areas in cities are badly affected by large-scale transport infrastructure that has been primarily designed in the interests of the travellers who pass through them, and not in the interests of the people who live and work around them.

(Photo of Masshouse Circus, Birmingham, a concrete urban expressway that strangled the city centre before its redevelopment in 2003, by Birmingham City Council)

Birmingham’s Masshouse Circus, for example, was constructed in the 1960s as part of the city’s inner ring-road, intended to improve connectivity to the national economy through the road network. However, the impact of the physical barrier that it created to pedestrian traffic can be seen in the stark difference in land value inside and outside the “concrete collar” that the ring-road created around the city centre. Inside the collar, land is valuable enough for tall office blocks to be constructed on it; whilst outside it is of such low value that it is used as a ground-level car park. The reason for such a sharp change in value? People didn’t feel safe walking across or under the roundabout. The demolition of Masshouse Circus in 2002 enabled a revitalisation of the city centre that has continued for more than a decade.

Atlanta’s Buford Highway is a seven-lane road which for two miles has no pavements, no junctions and no pedestrian crossings, passing through an area of houses, shops and businesses. It is infrastructure fit only for vehicles, not for people. It allows no safe access along or across it for the communities it passes through – it is closed to them, unless they risk their lives.

In Sheffield, two primary schools were recently forced to close after measurements of pollution from diesel vehicles revealed levels 10-15 times higher than the maximum considered safe, caused by traffic from the nearby M1 motorway. The vast majority of vehicles using the motorway comply with the emissions legislation appropriate to their age; and until specific emissions measurements were performed at the precise locations of the schools, the previous regional measurements of air quality had been within legal limits. This illustrates the failure of our transport policies to take into account the nature of the environments within which we live, and the detailed impact of transport on them. That’s why it’s now suspected that up to 60,000 people die prematurely every year in the UK due to the effects of diesel emissions – double previous estimates.

Nathaniel Lichfield and Partners recently published a survey of the 2015 Indices of Multiple Deprivation in the UK – the indices summarise many of the challenges that affect deprived communities such as low levels of employment and income; poor health; poor access to quality education and training; high levels of crime; poor quality living environments and shortages of quality housing and services.

Lichfield and Partners found that most of the UK’s Core Cities (the eight economically largest cities outside London, plus Glasgow and Cardiff) are characterised by a ring of persistently deprived areas surrounding their relatively thriving city centres. Whilst clearly the full causes are complex, it is no surprise that those rings feature a concentration of transport infrastructure passing through them, but primarily serving the interests of those passing in and out of the centre.

(Areas of relative wealth and deprivation in Birmingham as measured by the Indices of Multiple Deprivation. Birmingham, like many of the UK’s Core Cities, has a ring of persistently deprived areas immediately outside the city centre, co-located with the highest concentration of transport infrastructure allowing traffic to flow in and out of the centre)

These issues are not considered at all in the Transport Systems Catapult’s study. The word “walk” appears just three times in the document, all in a section describing the characteristics of only one type of traveller, the “dependent passenger” who does not own a car. Their walking habits are never examined, and walking as a transport choice is never mentioned or presented as an option in any of the sections of the report discussing challenges, opportunities, solutions or policy initiatives, beyond a passing mention that public transport users sometimes undertake the beginnings and ends of their journeys on foot. The word “pedestrian” does not appear at all. Cycling is mentioned only a handful of times; once in the same section on dependent passengers, and later on to note that “bike sharing [schemes have] not yet enjoyed high uptake in the UK”. The reason cited for this is that “it is likely that there are simply not enough use cases where using these types of services is convenient and cost-effective for travellers.”

If that is the case, why not investigate ways to extend the applicability of such schemes to broader use cases?

If only the sharing economy were a walking and cycling economy

The role of the Transport Systems Catapult is to promote the UK transport and transport technology industry, and this perhaps explains why so much of the study is focussed on public and private forms of powered transport and infrastructure. But there are many ways for businesses to profit by providing innovative technology and services that support walking and cycling.

What about way-finding services and street furniture that benefit pedestrians, for example, as the Future Cities Catapult recently explored? What about the cycling industry – including companies providing cargo-carrying bicycles as an alternative to small vans and trucks? What about the wearable technology industry to promote exercise measurement and pedestrian navigation along the safest, least polluted routes?

What about the construction of innovative infrastructure that promotes cycling and walking such as the “SkyCycle” proposal to build cycle highways above London’s railway lines, similar to the pedestrian and cycle roundabouts already built in Europe and China? What about the use of conveyor belts along similar routes to transport freight? What about the use of underground, pneumatically powered distribution networks for recycling and waste processing? All of these have been proposed or explored by UK businesses and universities.

And what about the UK’s world-class community of urban designers, town planners and landscape architects, some of whom are using increasingly sophisticated technologies to complement their professional skills in designing places and communities in which living, working and travelling co-exist in harmony? What about our world-class University expertise researching visions for sustainable, liveable cities with less intrusive transport systems?

An even more powerful source of innovations to achieve a better balance between transportation and liveability could be the use of “sharing economy” business models to promote social and economic systems that emphasise local, human-powered travel.

Wikipedia describes the sharing economy as “economic and social systems that enable shared access to goods, services, data and talent”. Usually, these systems employ consumer technologies such as smartphones and social media to create online peer-to-peer trading networks that disrupt or replace traditional supply chains and customer channels – eBay is an obvious example for trading second-hand goods, Airbnb connects travellers with people willing to rent out a spare room, and Uber connects passengers and drivers.

These business models can be enormously successful. Since its formation 8 years ago, Airbnb has acquired access to over 800,000 rooms to let in more than 190 countries; in 2014 this company, which at the time employed only 300 people, was valued at an estimated $13 billion. Uber has demonstrated similarly astonishing growth.

However, it is much less clear what these businesses are contributing to society. In many cases their rapid growth is made possible by operating business models that side-step – or just ignore – the regulation that governs the traditional businesses that they compete with. Whilst they can offer employment opportunities to the providers in their trading networks, those opportunities are often informal and may not be protected by employment rights and minimum wage legislation. As privately held companies their only motivation is to return a profit to their owners.

By creating dramatic shifts in how transactions take place in the industries in which they operate, sharing economy businesses can create similarly dramatic shifts in transport patterns. For example, hotels in major cities frequently operate shuttle buses to transfer guests from nearby airports – a shared form of transport. Airbnb offer no such equivalent transfers to their independent accommodation. This is a general consequence of replacing large-scale, centrally managed systems of supply with thousands of independent transactions. At present there is very little research to understand these impacts, and certainly no policy to address them.

But what if incentives could be created to encourage the formation of sharing economy systems that promoted local transactions that can take place with less need for powered transport?

For example, Borroclub provides a service that matches someone who needs a tool with a neighbour who owns one that they could borrow. Casserole Club connects people who are unable to cook for themselves with neighbours who are happy to cook an extra portion and share it. The West Midlands Collaborative Commerce Marketplace identifies opportunities for groups of local businesses to collaborate to win new contracts. Such “hyperlocal” schemes are not a new idea, and there are endless possibilities for them to reveal local opportunities to interact; but they struggle to compete for attention and investment against businesses purely focussed on maximising profits and investor returns.
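
To make the mechanics concrete, here is a minimal sketch of how such a hyperlocal matching service might pair requests with offers. The names, data and matching rule are hypothetical illustrations of the pattern, not Borroclub’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Listing:
    person: str
    item: str
    area: str  # coarse neighbourhood identifier, e.g. a UK outward postcode


def match_locally(requests: list[Listing], offers: list[Listing]) -> list[tuple[str, str, str]]:
    """Pair each request with the first offer of the same item in the same area."""
    matches = []
    for request in requests:
        for offer in offers:
            if offer.item == request.item and offer.area == request.area:
                matches.append((request.person, offer.person, request.item))
                break  # naive first-match rule; a real service would rank by distance, ratings, etc.
    return matches


# A borrower in B17 is matched with a neighbour in B17 offering the same tool.
requests = [Listing("Asha", "hedge trimmer", "B17")]
offers = [Listing("Tom", "hedge trimmer", "B17"), Listing("Sam", "ladder", "B29")]
print(match_locally(requests, offers))  # [('Asha', 'Tom', 'hedge trimmer')]
```

The value of the “hyperlocal” constraint is visible even in this toy example: a match is only made when the transaction can be completed on foot.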

Surely, a study that includes the Future Cities Catapult, Digital Catapult and Transport Systems Catapult amongst its contributors could have explored possibilities for encouraging and scaling hyperlocal sharing economy business models, alongside all those self-driving cars and multi-modal transport planners that industry seems to be quite willing to invest in on its own?

The study does mention some “sharing economy” businesses, including Uber; but it makes no mention of the controversy created because their profit-seeking focus takes no account of their social, economic and environmental impact.

It also mentions the role of online commerce in providing retail options that avoid the need to travel in person – and cites these as an option for reducing the overall demand for travel. But it fails to adequately explore the impact of the consequent requirements for delivery transport – other than to note the potential for detrimental impact on, wait for it, not local communities but: local traffic!

“Enabling lifestyles is about more than just enabling and improving physical travel. 31% (19bn) of journeys made today would rather not have been made if alternative means were available (e.g. online shopping)” (page 15)

“Local authorities and road operators need to be aware that increased goods delivery can potentially have a negative impact on local traffic flows.” (page 24)

Why promote transactions that we carry out in isolation online rather than transactions that we carry out socially by walking, and that could contribute towards the revitalisation of local communities and town centres? Why mention “enabling lifestyles” without exploring the health benefits of walking, cycling and socialising?

(A poster from the International Sustainability Institute’s Commuter Toolkit, depicting the space 200 travellers occupy on Seattle’s 2nd Avenue when using different forms of transport, and intended to persuade travellers to adopt those forms that use less public space)

Self-driving cars as a consumer product represent selfish interests, not societal interests

The sharing economy is not the only example of a technology trend whose social and economic impact cannot be assumed to be positive. The same challenge applies very much to perhaps the most widely publicised transport innovation today, and one that features prominently in the new study: the self-driving car.

On Friday I attended a meeting of the UK’s Intelligent Transport Systems interest group, ITS-UK. Andy Graham of White Willow Consulting gave a report of the recent Intelligent Transport Systems World Congress in Bordeaux. The Expo organisers had provided a small fleet of self-driving cars to transfer delegates between hotels and conference venues.

Andy noted that the cars drove very much as humans do – and that they kept at least as large a gap, if not a larger one, between themselves and the car in front. On speaking to the various car manufacturers at the show, he learned that their market testing had revealed that car buyers would only be attracted to self-driving cars if they drove in this familiar way.

Andy pointed out that this could significantly negate one of the promoted advantages of self-driving cars: reducing congestion and increasing transport flow volumes by enabling cars to be driven in close convoys. This focus on consumer motivations rather than the holistic impact of travel choices is repeated in the Transport Systems Catapult study’s consideration of self-driving cars.

Cars don’t only harm people, communities and the environment if they are diesel or petrol powered and emit pollution, or if they are involved in collisions: they do so simply because they are big and take up space.

Space – space that is safe for people to inhabit – is vital to city and community life. We use it to walk; to sit and relax; to exercise; for our children to play in; to meet each other. Self-driving cars and electric cars take up no less space than the cars we have driven for decades. Cars that are shared take up slightly less space per journey – but are nowhere near as efficient as walking, cycling or public transport in this regard. Car clubs might reduce the need for vehicles to be parked in cities, but they still take up as much space on the road.
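
To put rough numbers on that point – these are illustrative figures of my own, not data from the Catapult’s study – suppose a moving car needs around 60 m² of road including a safe headway and carries an average of 1.5 occupants, whilst a pedestrian needs only a square metre or two:

```latex
\[
\frac{\sim 60\,\text{m}^2 \text{ per moving car}}{\sim 1.5 \text{ occupants}}
\;\approx\; 40\,\text{m}^2 \text{ per traveller}
\qquad \text{versus} \qquad
\sim 1\text{--}2\,\text{m}^2 \text{ per pedestrian}
\]
```

On those assumptions, a car journey consumes twenty to forty times more public space per traveller than the same journey on foot – which is exactly what the Seattle poster above visualises.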

The Transport Systems Catapult’s study does explore many means to encourage the use of shared or public transport rather than private cars; but it does so primarily in the interests of reducing congestion and pollution. The relationship between public space, wellbeing and transport is not explored; and neither is the – at best – neutral societal impact of self-driving cars, if their evolution is left to today’s market forces.

Just as the industry and politicians are failing to enact the policies and incentives needed to adapt the Smart Cities market to create better cities, rather than simply creating efficiencies in service provision and infrastructure, so the Intelligent Transport Systems community will fail to deliver transport that serves our society better unless it challenges our self-serving interests as consumers and travellers and considers the wider interests of society.

The Catapult’s report does highlight the potential need for city-wide and national policies to govern future transport systems consisting of connected and autonomous vehicles; but once again the emphasis is on optimising traffic flows and the traveller experience, not on optimising the outcomes for everyone affected by transport infrastructure and traffic.

As consumers we don’t always know best. In the words often attributed to one of the most famous transport innovators in history: “If I had asked people what they wanted, they would have said ‘faster horses’.” (Henry Ford, pioneer of the first mass-produced automobile and of the moving assembly line).

A failure that matters

The Transport Systems Catapult’s report doesn’t mention most of the issues I’ve explored in this article, and those that it does touch on are quickly passed over. In 60 pages it only mentions walking and cycling a handful of times; it never analyses the needs of pedestrians and cyclists, and beyond a passing mention of employers’ “cycle to work” schemes and the incorporation of bicycle hire schemes in multi-modal ticketing solutions, these modes of transport are never presented as solutions to our transport and social challenges.

This is a failure that matters. The Transport Systems Catapult is only one voice in the Intelligent Transport Systems community, and many of us would do well to broaden our understanding of the context and consequences of our work. For my part, when I worked with IBM’s Intelligent Transport Systems team several years ago I was similarly disengaged from these issues, and focussed on the narrower economic and technological aspects of the domain. It was only later in my career, as I sought to properly understand the wider complexities of Smart Cities, that I began to appreciate them.

But the Catapult Centre benefits from substantial public funding, is a high profile influencer across the transport sector, and is perceived to have the authority of a relatively independent voice between the public and private sectors. By not taking into account these issues, its recommendations and initiatives run the risk of creating great harm in cities in the UK, and anywhere else our transport industry exports its ideas to.

Both the “Smart Cities” and “Intelligent Transport” communities often talk in terms of breaking down silos in industry, in city systems and in thinking. But in reality we are not doing so. Too many Smart City discussions treat “energy”, “mobility” and “wellbeing” as separate topics. Too few invite town planners, urban designers or social scientists to participate. And this is an example of an “Intelligent Transport” discussion that makes the same mistakes.

(Pedestrians attempting to cross Atlanta’s notorious Buford Highway, a seven-lane road with no pavements and two miles between junctions and crossings. Photo by PBS)

In the wonderful “Walkable City”, Jeff Speck describes the epidemiologist Richard Jackson’s stark realisation of the life-and-death significance of good urban design related to transport infrastructure. Jackson was driving along the notorious two-mile stretch of Atlanta’s seven-lane Buford Highway with no pavements or junctions:

“There, by the side of the road, in the ninety-five degree afternoon, he saw a woman in her seventies, struggling under the burden of two shopping bags. He tried to relate her plight to his own work as an epidemiologist. “If that poor woman had collapsed from heat stroke, we docs would have written the cause of death as heat stroke and not lack of trees and public transportation, poor urban form, and heat-island effects. If she had been killed by a truck going by, the cause of death would have been “motor vehicle trauma”, and not lack of sidewalks and transit, poor urban planning and failed political leadership.”

We will only harness technology, transport and infrastructure to create better communities and better cities if we seek out and respect those cross-disciplinary insights that take seriously the needs of everyone in our society who is affected by them; not just the needs of those who are its primary users.

Our failure to do so over the last century is demonstrated by the UK’s disgracefully low social mobility; by those areas of multiple deprivation which in most cases have persisted for decades; and by the fact that, as a consequence, life expectancy for babies born today in the poorest parts of a UK city can be 20 years shorter than for babies born in the richest parts of the same city.

That is the life and death impact of the transport strategies that we’ve had in the past; the transport strategies we publish today must do better.

Postscript 3rd November

The Transport Systems Catapult replied very positively on Twitter today to my rather forthright criticisms of their report. They said “Great piece Rick. The study is a first step in an ongoing discussion and we welcome further input/ideas feeding in as we go on.”

I’d like to think I’d respond in a similarly gracious way to anyone’s criticism of my own work!

What my article doesn’t say is that the Catapult’s report is impressively detailed and insightful in its coverage of those topics that it does include. I would absolutely welcome their expertise and resources being applied to a broader consideration of the topic of future transport, and look forward to seeing it. 

Reclaiming the “Smart” agenda for fair human outcomes enabled by technology

(Lucie & Simon’s “Silent World”, a series of photographs of cities from which almost all trace of people has been removed.)

Over the last 5 years, I’ve often used this blog to explore definitions of what a “Smart City” is. The theme that’s dominated my thinking is the need to synthesise human, urban and technology perspectives on cities and our experience of them.

The challenge with attempting such a broad synthesis within a succinct definition is that you end up with a very high-level, conceptual definition – one that might be intellectually true, but that does a very poor job of explaining to the wider world what a Smart City is, and why it’s important.

We need a simple, concise definition of Smart Cities that ordinary people can identify with. To create it, we need to reclaim the “Smart” concept from technologies such as analytics, the Internet of Things and Big Data, and return to its original meaning – using the increasingly ubiquitous and accessible communications technology enabled by the internet to give people more control over their own lives, businesses and communities.

I’ve written many articles on this blog about the futile and unsophisticated argument that rages on about whether Smart Cities should be created by “top-down” or “bottom-up” approaches: clearly, anything “Smart” is a subtle harmonisation of both.

In this article, I’d like to tackle an equally unconstructive argument that dominates Smart Cities debates: are Smart Cities defined by the role of technology, or by the desire to create a better future?

It’s clear to me that anything that’s really “Smart” must combine both of those ideas.

In isolation, technology is amoral, inevitable and often banal; but on the other hand a “better future” without a means to achieve it is merely an aspiration, not a practical concept. Why is it “Smart” to want a better future and better cities today in a way that wanting them 10, 20, 50 or 100 years ago wasn’t?

Surely we can agree that focussing our use of a powerful and potentially ubiquitously accessible new technology – one that’s already transforming our world – on making the world a better place, rather than just on making money, is an idea worthy of the “Smart” label?

In making this suggestion, I’m doing nothing more than returning to the origin of the term “Smart” in debates in social science about the “smart communities” that would emerge from our new ability to communicate freely and widely with each other following the emergence of the Internet.

Smart communities are enabled by ubiquitous access to empowering technology

In his 2011 book “Civilization”, Niall Ferguson comments that news of the Indian Mutiny in 1857 took 46 days to reach London, travelling in effect at 3.8 miles an hour – the speed of a brisk walk. By contrast, in January 2009, when US Airways flight 1549 crash-landed in the Hudson river, Jim Hanrahan’s message on Twitter communicated the news to the entire world four minutes later; it reached Perth, Australia at more than 170,000 miles an hour.
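
The arithmetic behind those two speeds is worth making explicit. Assuming great-circle distances of roughly 4,200 miles from Delhi to London and 11,600 miles from New York to Perth (my approximations, not Ferguson’s), the figures check out:

```latex
\[
\frac{4{,}200 \text{ miles}}{46 \text{ days} \times 24 \text{ h/day}} \approx 3.8 \text{ mph}
\qquad\qquad
\frac{11{,}600 \text{ miles}}{4 \text{ min} \times \tfrac{1}{60} \text{ h/min}} \approx 174{,}000 \text{ mph}
\]
```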

(In the 1960s, the mobile phone-like “communicators” used in Star Trek were beyond our capability to manufacture; but they were used purely for talking. Similarly, while William Gibson’s 1980s vision of “cyberspace” was predictive and ambitious in its descriptions of virtual environments and data visualisations, the people who inhabited it interacted with each other almost as if normal space had simply been replaced by virtual space: there was no sense of the immense power of social media to enable new connections.)

Social media is the tool that around a quarter of the world’s population now uses simply to stay in touch with friends and family at this incredible speed. Along with mobile devices, e-commerce technology and analytics, social media has made it dramatically easier for individuals, communities and small businesses anywhere in the world to make contact and transact with each other, without needing the enormous supply chains and sales and marketing channels that previously made such activity the prerogative of large, multi-national corporations.

It was in a workshop with social scientists at the University of Durham that I first became aware that “Smart” concepts originated in social science in the 1990s and pre-date the famous early large-scale technology infrastructure projects in cities like Masdar and Songdo. The term was coined to describe the potential for new forms of governance, citizen engagement, collective intelligence and stakeholder collaboration enabled by Internet communication technologies. The hope was that new forms of exchange and contract between people and organisations would create a better chance of realising the underlying outcomes we really want – health, happiness and fulfilment:

“The notion of smart community refers to the locus in which such networked intelligence is embedded. A smart community is defined as a geographical area ranging in size from a neighbourhood to a multi-county region within which citizens, organizations and governing institutions deploy and embrace NICT [“New Information and Communication Technologies”] to transform their region in significant and fundamental ways (Eger 1997). In an information age, smart communities are intended to promote job growth, economic development and improve quality of life within the community.”

(Amanda Coe, Gilles Paquet and Jeffrey Roy, “E-Governance and Smart Communities: A Social Learning Challenge”, Social Science Computer Review, Spring 2001)

But technology’s not Smart unless it’s used to create human value

It’s no surprise that technology companies such as Cisco, Siemens and my former employer IBM came to similar realisations about the transformative potential of digital technology in addressing societal as well as business challenges as technology spread from the back office into the everyday world, leading, for example, to the launch of IBM’s “Smarter Planet” initiative in 2008, a pre-cursor to their “Smarter Cities” programme.

Let’s pause at this point to say: that’s a tremendously exciting idea. A technology company – Apple – recently recorded the largest corporate profit in the history of business. Microsoft’s founder Bill Gates was just recognised as the richest person on the planet. Technology companies make enormous profits, and they feed significant portions of those profits back into research and development. Isn’t it wonderful that some of those resources are invested in exploring how to make cities, communities and people more successful?

(The Dubuque water and energy portal, showing an individual household insight into its conservation performance, and also a ranking comparing the household’s performance to that of its near neighbours)

IBM, for example, has invested millions of dollars of effort in implementing Smarter Cities projects in cities such as Dubuque through the IBM Research “First of a Kind” programme; and has helped over a hundred cities worldwide develop new initiatives and strategies through the charitable “Smarter Cities Challenge” – advising Kyoto on how to become a more “walkable” city, for instance.

So what’s the problem?

Large technology corporations are often criticised in debates on this topic for their size, profitability and “top-down” approaches – and the local authorities who work with them are often criticised too. In my experience, that criticism is based on an incomplete understanding of the people involved, and how the projects are carried out; and I think it misses the point.

The real question we should be asking is more subtle and important: what happens to the social elements of an idea once it becomes apparent to businesses both large and small that they can make money by selling the technologies that enable it?

I know very well the scientists, engineers and creatives at many of the companies, social enterprises and government bodies – of any size – who are engaged in Smart Cities initiatives. They are almost universally extremely bright, well intentioned and humane, and fully capable of talking with passion about the social and environmental value of their work. “Top-down” is at best a gross simplification of the projects that they carry out, and at worst a gross misrepresentation. Their views dominated the early years of the Smart Cities market as it developed.

But as the market has matured and grown, the focus has switched from research, exploration and development to the marketing and selling of well-defined product and service offerings. Amidst the need to promote those offerings to potential customers, and to differentiate them against competitors, it’s easy for the subtle intertwining of social, economic, environmental and technology ideas to be drowned out.

That’s what led to the unfortunate statement that armed Professor Adam Greenfield with the ammunition he needed to criticise the Smart Cities movement. A technology company that I won’t name made an overreaching and misguided assertion that Smart Cities would create “autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits” – blissfully ignoring the fact that such perfection is scientifically and philosophically impossible, not to mention inhuman and undesirable.

As a scientist-turned-technologist-turned-wannabe-urbanist working in this field, and as someone who’s been repeatedly inspired by the people, communities, social scientists, social innovators, urban designers and economists I’ve met over the past 5 years, I started writing this blog to explore and present a more balanced, humane vision of a Smart City.

Zen and the art of Smart Cities: opposites should create beautiful fusions, not arguments

Great books change our lives, and one of many that has changed mine is “Zen and the Art of Motorcycle Maintenance” by Robert M. Pirsig. Pirsig explores the relationship between what he called “romantic” perspectives of life, which focus on emotional meaning and value “quality”, and “rational” perspectives, which focus on the reasons our world behaves in the way that it does and value “truth”. He argues that early Greek philosophers didn’t distinguish between “quality” and “truth”, and that by considering them together we can learn to value things that are simultaneously well-intentioned and well-formed.

This thinking is echoed in Alan Watts’ “The Way of Zen”, in which he comments on the purpose of the relentless practice of technique that is part of the Zen approach to art:

“The very technique involves the art of artlessness, or what Sabro Hasegawa has called the ‘controlled accident’, so that paintings are formed as naturally as the rocks and grasses which they depict”

(Alan Watts, “The Way of Zen“)

In other words, by working tirelessly to perfect their technique – i.e. their use of tools – artists enable themselves to have “beautiful accidents” when inspiration strikes.

(Photograph by Meshed Media of Birmingham’s Social Media Cafe, where individuals from every part of the city who have connected online meet face-to-face to discuss their shared interest in social media.)

Modern technologies from social media to Smartphones to Cloud computing and Open Source software are both incredibly powerful and, compared to any previous generation of technology, incredibly cheap.

If we work hard to ensure that they can be used to access and manipulate the technologies that will inevitably be used to make the operations of city infrastructures and public services more efficient, then they have incredible potential to be a tool for people everywhere to shape the world around them to their own advantage; and for us to collectively create a world that is fairer, healthier and more resilient.

But unless we re-claim the word “Smart” to describe those outcomes, the market will drive our energy and resources in the direction of narrower financial interests.

The financial case for investment in Smart technologies is straightforward: as the costs of smartphones, sensors, analytics, and cloud computing infrastructure reduce rapidly, market dynamics will drive their aggressive adoption to make construction, infrastructure and city services more efficient, and hence make their providers more competitive.

But those market dynamics do not guarantee that we will get everything we want for the future of our cities: efficiency and resilience are not the same as health, happiness and opportunity for every citizen.

So how can we adapt that investment drive to create the outcomes that we want?

Can responsible business create a better world?

Some corporate behaviours promote these outcomes, driven by the voting and buying powers of citizens and consumers. Working for Amey, for example, my customers are usually government organisations who serve an electorate; or private sector companies who are regulated by government bodies. In both cases, there is a direct chain of influence leading from individual citizen needs and perceptions through to the way we operate and deliver our services. If we don’t engage with, respect and meet those needs and expectations, we will not be successful. I can observe that influence at work driving an ethic of service, care and responsibility throughout our business at Amey, and it’s been an inspiration to me since joining the company.

Unilever have taken a similar approach, using consumer desires for sustainable products to link corporate performance to sustainable business practices; and Jared Diamond wrote extensively about successful examples of socially and environmentally sustainable resource extraction businesses, such as Chevron’s operations in the Kutubu oilfield in Papua New Guinea, in his book “Collapse”. Business models such as social enterprise and the sharing economy also offer great potential to link business success to positive social and environmental outcomes.

But ultimately our investment markets are still strongly focused on financial performance, and reward the businesses that make the most money with the investment that enables them to grow. This is why many social enterprises do not scale-up; and why many of the rapidly growing “sharing economy” businesses currently making the headlines have nothing at all to do with sharing value and resources, but are better understood as a new type of profit-seeking transaction broker.

Responsible business models are a choice made by individual business leaders, and they depend for their successful operation on the daily choices and actions of their employees. They are not a market imperative. For as long as that is the case, we cannot rely on them to improve our world.

Policy, legislation and regulation

I’ve quoted from Jane Jacobs on many occasions on this blog that “private investment shapes cities, but social ideas (and laws) shape private investment”.

It’s a source of huge frustration to me that so much of the activity in the Smart Cities community ignores that so obviously fundamental principle, and focuses instead on the capabilities of technology or on projects funded by research grants.

The recent article reporting a TechUK Smart Cities conference titled “Milton Keynes touted as model city for public sector IoT use” is a good example. Milton Keynes has many technologically interesting Smart City projects underway, but every one of them is funded by a significant grant from a central government department, a research or innovation funding body, or a technology company. Not a single project has been paid for by a sustainable, re-usable business case. Other cities can aspire to emulate Milton Keynes all they want, but they won’t win research and innovation funding to re-deploy solutions that have already been proven.

Research and innovation grants provide the funding that proves for the first time that a new idea is viable. They do not pay for that idea to be enacted across the world.

(Shaleen Meelu and Robert Smith with Hugh Fearnley-Whittingstall at the opening of the Harborne Food School. The School is a Community Interest Company that promotes healthy, sustainable approaches to food through courses offered to local people and organisations)

Policy, legislation and regulation are far more effective tools for enabling widespread change, and are what we should be focussing our energy and attention on.

The Social Value Act requires that public authorities, who spend nearly £200 billion every year on private sector goods and services, procure those services in a way that creates social value – for example, by requiring that national or international service providers engage local small businesses in their supply chains.

In an age in which private companies are investing heavily in the use of digital technology because it provides them with by far the most powerful tool to increase their success, surely local authorities should fulfil their Social Value Act obligations by using procurement criteria to ensure that those companies employ that same tool to create social and environmental improvements in the places and communities in which they operate?

Similarly, the British Property Federation estimates that £14 billion is invested in the development of new property in the UK each year. If planning and development frameworks oblige property developers to describe and quantify the social value that will be created by their developments, and how they will use technology to do so – as I’ve promoted on this blog for some time now, and as the British Standards Institute has recently recommended – then this enormous level of private sector investment can contribute to investing in technology for public benefit; just as those same frameworks already require investment in public space around commercial buildings.

The London Olympic Legacy Development Corporation have been following this strategy in support of the Greater London Authority’s Smart London Plan. As a result, they are securing private sector investment in deploying technology not only to redevelop the Olympic park using smart infrastructure; but also to ensure that that investment benefits the existing communities and business economies in neighbouring areas.

A Smart manifesto for human outcomes enabled by technology

These business models, policy measures and procurement approaches are bold, difficult measures to enact. They are not as sexy as Smartphones, analytics and self-driving cars. But they are much more important if what we want to achieve are positive human outcomes, not just financially successful technology companies and a continuous stream of research projects.

What will make it more likely that businesses, local governments and national governments adopt them?

Citizen understanding. Consumer understanding. A definition of smart people, places, communities, businesses and governments that makes sense to everyone who votes, works, stands for election, runs a business, or buys things. In other words, everyone.

If that definition doesn’t include the objective of making the world a healthier, happier, fairer, more sustainable place for everyone, then it’s not worth the effort. If it doesn’t include harnessing modern technology, then it misses the point that human ingenuity has recently given us a phenomenal new toolkit that makes possible things we’d never previously dreamt of.

I think it should go something like this:

“Smart people, places, communities, businesses and governments work together to use the modern technologies that are changing our world to make it fairer and more sustainable in the process, giving everyone a better chance of a longer, healthier, happier and more fulfilling life.”

I’m not sure that’s a perfect definition; but I think it’s a good start, and I hope that it combines the right realisation that we do have unprecedented tools at our disposal with the right sentiment that what really matters is how we use them.

(I’d like to thank John Murray of Scottish Enterprise for a useful discussion that inspired me to write this article)

11 reasons computers can’t understand or solve our problems without human judgement

(Photo by Matt Gidley)

Why data is uncertain, cities are not programmable, and the world is not “algorithmic”.

Many people are not convinced that the Smart Cities movement will result in the use of technology to make places, communities and businesses in cities better. Outside their consumer enjoyment of smartphones, social media and online entertainment – to the degree that they have access to them – they don’t believe that technology or the companies that sell it will improve their lives.

The technology industry itself contributes significantly to this lack of trust. Too often we overstate the benefits of technology, or play down its limitations and the challenges involved in using it well.

Most recently, the idea that traditional processes of government should be replaced by “algorithmic regulation” – the comparison of the outcomes of public systems to desired objectives through the measurement of data, and the automatic adjustment of those systems by algorithms in order to achieve them – has been proposed by Tim O’Reilly and other prominent technologists.
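
In engineering terms, “algorithmic regulation” is a feedback control loop: measure an outcome, compare it with a target, adjust the system, and repeat. The sketch below shows that pattern as a simple proportional controller – my illustration of the concept, not code from O’Reilly’s proposal:

```python
def regulate(measure, adjust, target: float, gain: float = 0.5, steps: int = 20) -> None:
    """Generic feedback loop: measure the outcome, compare it with the desired
    objective, and apply a correction proportional to the error."""
    for _ in range(steps):
        error = target - measure()  # how far is the measured outcome from the objective?
        adjust(gain * error)        # automatically adjust the system to close the gap


# Toy example: holding a vehicle's speed at a target, autopilot-style.
class Vehicle:
    def __init__(self, speed: float) -> None:
        self.speed = speed

    def accelerate(self, delta: float) -> None:
        self.speed += delta


car = Vehicle(speed=20.0)
regulate(measure=lambda: car.speed, adjust=car.accelerate, target=30.0)
print(round(car.speed, 1))  # converges on 30.0
```

Closing this loop is trivial when the outcome is a vehicle’s speed; whether it can be closed at all when the “measurement” is a human outcome is the question this article examines.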

These approaches work in many mechanical and engineering systems – the autopilots that fly planes or the anti-lock braking systems that we rely on to stop our cars. But should we extend them into human realms – how we educate our children or how we rehabilitate convicted criminals?

It’s clearly important to ask whether it would be desirable for our society to adopt such approaches. That is a complex debate, but my personal view is that in most cases the incredible technologies available to us today – and which I write about frequently on this blog – should not be used to take automatic decisions about such issues. They are usually more valuable when they are used to improve the information and insight available to human decision-makers – whether they are politicians, public workers or individual citizens – who are then in a better position to exercise good judgement.

More fundamentally, though, I want to challenge whether “algorithmic regulation” or any other highly deterministic approach to human issues is even possible. Quite simply, it is not.

It is true that our ability to collect, analyse and interpret data about the world has advanced to an astonishing degree in recent years. However, that ability is far from perfect, and strongly established scientific and philosophical principles tell us that it is impossible to definitively measure human outcomes from underlying data in physical or computing systems; and that it is impossible to create algorithmic rules that exactly predict them.

Sometimes automated systems succeed despite these limitations – anti-lock braking technology has become nearly ubiquitous because it is more effective than most human drivers at slowing down cars in a controlled way. But in other cases they create such great uncertainties that we must build in safeguards to account for the very real possibility that insights drawn from data are wrong. I do this every time I leave my home with a small umbrella packed in my bag despite the fact that weather forecasts created using enormous amounts of computing power predict a sunny day.
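
One way to formalise that everyday safeguard is as an expected-cost comparison – my framing of the umbrella decision, not anything drawn from the forecasting systems themselves:

```latex
\[
p_{\text{rain}} \times C_{\text{soaked}} \;>\; C_{\text{carry}}
\quad\Longrightarrow\quad \text{pack the umbrella}
\]
```

Even when the forecast probability of rain is small, the cost of being soaked is so much larger than the cost of carrying a small umbrella that the safeguard remains rational.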

(No matter how sophisticated computer models of cities become, there are fundamental reasons why they will always be simplifications of reality. It is only by understanding those constraints that we can understand which insights from computer models are valuable, and which may be misleading. Image of Sim City by haljackey)

We can only understand where an “algorithmic” approach can be trusted; where it needs safeguards; and where it is wholly inadequate by understanding these limitations. Some of them are practical, and limited only by the sensitivity of today’s sensors and the power of today’s computers. But others are fundamental laws of physics and limitations of logical systems.

When technology companies assert that Smart Cities can create “autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits” (as London School of Economics Professor Adam Greenfield rightly criticised in his book “Against the Smart City”), they are ignoring these challenges.

A blog published by the highly influential magazine Wired recently made similar overstatements: “The Universe is Programmable” argues that we should extend the concept of an “Application Programming Interface (API)” – a facility usually offered by technology systems to allow external computer programmes to control or interact with them – to every aspect of the world, including our own biology.

To compare complex, unpredictable, emergent biological and social systems to the very logical, deterministic world of computer software is at best a dramatic oversimplification. The systems that comprise the human body range from the armies of symbiotic microbes that help us digest food in our stomachs to the consequences of using corn syrup to sweeten food to the cultural pressure associated with “size 0” celebrities. Many of those systems can’t be well modelled in their own right, let alone deterministically related to each other; let alone formally represented in an accurate, detailed way by technology systems (or even in mathematics).

We should regret and avoid the hubris that creates distrust of technology by overstating its capabilities and failing to recognise its challenges and limitations. That distrust is a barrier that prevents us from achieving the very real benefits that data and technology can bring, and that have been convincingly demonstrated in the past.

For example, an enormous contribution to our knowledge of how to treat and prevent disease was made by John Snow who used data to analyse outbreaks of cholera in London in the 19th century. Snow used a map to correlate cases of cholera to the location of communal water pipes, leading to the insight that water-borne germs were responsible for spreading the disease. We wash our hands to prevent diseases spreading through germs in part because of what we would now call the “geospatial data analysis” performed by John Snow.
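
What Snow did by hand with a map can be expressed today as a simple nearest-neighbour analysis. The sketch below uses made-up coordinates purely to illustrate the method – it is not Snow’s actual data:

```python
from collections import Counter
from math import dist

# Hypothetical map coordinates (arbitrary units), for illustration only.
pumps = {"Broad Street": (0.0, 0.0), "Rupert Street": (3.0, 1.0)}
cases = [(0.2, 0.1), (0.5, -0.3), (-0.1, 0.4), (2.8, 1.2)]


def nearest_pump(case: tuple[float, float]) -> str:
    """Attribute a cholera case to the closest communal water pump."""
    return min(pumps, key=lambda name: dist(case, pumps[name]))


# Count cases per pump: a cluster around one pump suggests a common source.
print(Counter(nearest_pump(c) for c in cases))
# Counter({'Broad Street': 3, 'Rupert Street': 1})
```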

Many of the insights that we seek from analytic and smart city systems are human in nature, not physical or mathematical – for example, identifying when and where to apply social care interventions in order to reduce the occurrence of emotional domestic abuse. Such questions are complex and uncertain: what is “emotional domestic abuse”? Is it abuse inflicted by a live-in boyfriend, or by an estranged husband who lives separately but makes threatening telephone calls? Does it consist of physical violence or bullying? And what is “bullying”?

(John Snow’s map of cholera outbreaks in 19th century London)

We attempt to create structured, quantitative data about complex human and social issues by using approximations and categorisations; by tolerating ranges and uncertainties in numeric measurements; by making subjective judgements; and by looking for patterns and clusters across different categories of data. Whilst these techniques can be very powerful, just how difficult it is to be sure what these conventions and interpretations should be is illustrated by the controversies that regularly arise around “who knew what, when?” whenever there is a high profile failure in social care or any other public service.
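
A hypothetical sketch of what one such record might look like shows how many of these conventions are baked into even “structured” data – every field name and encoding below is my own illustration, not a real case-management schema:

```python
from dataclasses import dataclass, field


@dataclass
class CaseRecord:
    """A structured record of an inherently unstructured human situation."""
    categories: list[str] = field(default_factory=list)  # an approximate taxonomy, e.g. ["emotional abuse"]
    severity_band: tuple[int, int] = (0, 0)              # a tolerated range (1-10), not a precise measurement
    assessor_confidence: float = 0.0                     # a subjective judgement, expressed as 0.0-1.0
    notes: str = ""                                      # the nuance that never fits the schema


record = CaseRecord(categories=["emotional abuse", "harassment"],
                    severity_band=(4, 7),
                    assessor_confidence=0.6,
                    notes="Threatening phone calls from an estranged partner.")
```

Each field is a convention that someone had to choose; the controversies about “who knew what, when?” are often arguments about exactly those choices.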

These challenges are not limited to “high level” social, economic and biological systems. In fact, they extend throughout the worlds of physics and chemistry into the basic nature of matter and the universe. They fundamentally limit the degree to which we can measure the world, and our ability to draw insight from that information.

By being aware of these limitations we are able to design systems and practices to use data and technology effectively. We know more about the weather through modelling it using scientific and mathematical algorithms in computers than we would without those techniques; but we don’t expect those forecasts to be entirely accurate. Similarly, supermarkets can use data about past purchases to make sufficiently accurate predictions about future spending patterns to boost their profits, without needing to predict exactly what each individual customer will buy.

We underestimate the limitations and flaws of these approaches at our peril. Whilst Tim O’Reilly cites several automated financial systems as good examples of “algorithmic regulation”, the financial crash of 2008 showed the terrible consequences of the thoroughly inadequate risk management systems used by the world’s financial institutions compared to the complexity of the system that they sought to profit from. The few institutions that realised that market conditions had changed, and that their models for risk management were no longer valid, relied instead on the expertise of their staff and avoided the worst effects. Others continued to rely on models that had begun to produce increasingly misleading guidance, leading to the recession that we are only now emerging from six years later, and that has damaged countless lives around the world.

Every day in their work, scientists, engineers and statisticians draw conclusions from data and analytics, but they temper those conclusions with an awareness of their limitations and any uncertainties inherent in them. By taking and communicating such a balanced and informed approach to applying similar techniques in cities, we will create more trust in these technologies than by overstating their capabilities.

What follows is a description of some of the scientific, philosophical and practical issues that lead inevitably to uncertainty in data, and to limitations in our ability to draw conclusions from it. I’ll finish, though, with an explanation of why we can still draw great value from data and analytics if we are aware of those issues and take them properly into account.

Three reasons why we can’t measure data perfectly

(How Heisenberg’s Uncertainty Principle results from the dual wave/particle nature of matter. Explanation by HyperPhysics at Georgia State University)

1. Heisenberg’s Uncertainty Principle and the fundamental impossibility of knowing everything about anything

Heisenberg’s Uncertainty Principle is a cornerstone of Quantum Mechanics, which, along with General Relativity, is one of the two most fundamental theories scientists use to understand our world. It defines a limit to the precision with which certain pairs of properties of the basic particles which make up the world – such as protons, neutrons and electrons – can be known at the same time. For instance, the more accurately we measure the position of such particles, the more uncertain their speed and direction of movement become.
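
In its standard mathematical form – stated here for completeness – the principle is a simple inequality relating the uncertainty in a particle’s position to the uncertainty in its momentum:

```latex
% Heisenberg's uncertainty relation: the product of the uncertainties in
% position (\sigma_x) and momentum (\sigma_p) can never be smaller than
% half the reduced Planck constant \hbar.
\sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2}
```

Because ħ is so small – about 10^-34 joule-seconds – the limit only becomes noticeable at atomic scales.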

The explanation of the Uncertainty Principle is subtle, and lies in the strange fact that very small “particles” such as electrons and neutrons also behave like “waves”; and that “waves” like beams of light also behave like very small “particles” called “photons”. But we can use an analogy to understand it.

In order to measure something, we have to interact with it. In everyday life, we do this by using our eyes to measure lightwaves that are created by lightbulbs or the sun and that then reflect off objects in the world around us.

But when we shine light on an object, what we are actually doing is showering it with billions of photons, and observing the way that they scatter. When the object is quite large – a car, a person, or a football – the photons are so small in comparison that they bounce off without affecting it. But when the object is very small – such as an atom – the photons colliding with it are large enough to knock it out of its original position. In other words, measuring the current position of an object involves a collision which causes it to move in a random way.

This analogy isn’t exact; but it conveys the general idea. (For a full explanation, see the figure and link above.) Most of the time, we don’t notice the effects of Heisenberg’s Uncertainty Principle because it applies at extremely small scales. But it is perhaps the most fundamental law asserting that “perfect knowledge” is simply impossible; and it illustrates a wider point: any form of measurement or observation affects what is measured or observed. Sometimes the effects are negligible, but often they are not – if we observe workers in a time and motion study, for example, we need to be careful to understand the effect our presence and observations have on their behaviour.

2. Accuracy, precision, noise, uncertainty and error: why measurements are never fully reliable

Outside the world of Quantum Mechanics, there are more practical issues that limit the accuracy of all measurements and data.

(A measurement of the electrical properties of a superconducting device from my PhD thesis. Theoretically, the behaviour should appear as a smooth, wavy line; but the experimental measurement is affected by noise and interference that cause the signal to become “fuzzy”. In this case, the effects of noise and interference – the degree to which the signal appears “fuzzy” – are relatively small compared to the strength of the signal, and the device is usable)

We live in a “warm” world – roughly 300 degrees Celsius above what scientists call “absolute zero”, the coldest temperature possible. What we experience as warmth is in fact movement: the atoms from which we and our world are made “jiggle about” – they move randomly. When we touch a hot object and feel pain it is because this movement is too violent to bear – it’s like being pricked by billions of tiny pins.

This random movement creates “noise” in every physical system, like the static we hear in analogue radio stations or on poor quality telephone connections.

We also live in a busy world, and this activity leads to other sources of noise. All electronic equipment creates electrical and magnetic fields that spread beyond the equipment itself, and in turn affect other equipment – we can hear this as a buzzing noise when we leave smartphones near radios.

Generally speaking, all measurements are affected by random noise created by heat, vibrations or electrical interference; are limited by the precision and accuracy of the measuring devices we use; and are affected by inconsistencies and errors that arise because it is never possible to completely separate the measurement we want to make from all other environmental factors.

Scientists, engineers and statisticians are familiar with these challenges, and use techniques developed over the course of more than a century to determine and describe the degree to which they can trust and rely on the measurements they make. They do not claim “perfect knowledge” of anything; on the contrary, they are diligent in describing the unavoidable uncertainty that is inherent in their work.
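
As a minimal sketch of that discipline – assuming an invented “true” value and noise level – this is roughly how a repeated, noisy measurement is reported: as a mean with a standard error, never as a perfect number.

```python
# Simulate 100 noisy readings of an underlying quantity and report the
# result with its uncertainty. TRUE_VALUE and NOISE are invented here;
# in a real experiment they are exactly what we do not know in advance.
import random
import statistics

TRUE_VALUE, NOISE = 5.0, 0.3
readings = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(100)]

mean = statistics.mean(readings)
std_error = statistics.stdev(readings) / len(readings) ** 0.5
print(f"measured value: {mean:.3f} +/- {std_error:.3f}")
```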

3. The limitations of measuring the natural world using digital systems

One of the techniques we’ve adopted over the last half century to overcome the effects of noise and to make information easier to process is to convert “analogue” information about the real world (information that varies smoothly) into digital information – i.e. information that is expressed as sequences of zeros and ones in computer systems.

(When analogue signals are amplified, so is the noise that they contain. Digital signals are interpreted using thresholds: above an upper threshold, the signal means “1”, whilst below a lower threshold, the signal means “0”. A long string of “0”s and “1”s can be used to encode the same information as contained in analogue waves. By making the difference between the thresholds large compared to the level of signal noise, digital signals can be recreated to remove noise. Further explanation and image by Science Aid)

This process involves a trade-off between the accuracy with which analogue information is measured and described, and the length of the string of digits required to do so – and hence the amount of computer storage and processing power needed.

This trade-off can be clearly seen in the difference in quality between an internet video viewed on a smartphone over a 3G connection and one viewed on a high definition television using a cable network. Neither video will be affected by the static noise that affects weak analogue television signals, but the limited bandwidth of a 3G connection dramatically limits the clarity and resolution of the image transmitted.

The Nyquist–Shannon sampling theorem defines this trade-off and the limit to the quality that can be achieved in storing and processing digital information created from analogue sources. It determines the quality of digital data that we are able to create about any real-world system – from weather patterns to the location of moving objects to the fidelity of sound and video recordings. As computers and communications networks continue to grow more powerful, the quality of digital information will improve, but it will never be a perfect representation of the real world.
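
A small sketch makes the theorem’s limit concrete. Assuming an invented 7 Hz tone sampled at 10 Hz – below its Nyquist rate of 14 Hz – the samples are literally identical to those of a 3 Hz tone, so the digital data cannot tell the two apart:

```python
# Demonstrate aliasing: sampling below the Nyquist rate makes two different
# analogue signals produce exactly the same digital data.
import numpy as np

f_signal = 7.0           # Hz
f_sample = 10.0          # Hz - below the Nyquist rate of 2 * 7 = 14 Hz
t = np.arange(0, 1.0, 1 / f_sample)

samples = np.sin(2 * np.pi * f_signal * t)
alias = -np.sin(2 * np.pi * (f_sample - f_signal) * t)   # a 3 Hz tone

print(np.allclose(samples, alias))  # True: the samples are indistinguishable
```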

Three limits to our ability to analyse data and draw insights from it

1. Gödel’s Incompleteness Theorem and the inconsistency of algorithms

Kurt Gödel’s Incompleteness Theorem sets a limit on what can be achieved by any “closed logical system” powerful enough to express basic arithmetic. Examples of such systems include computer programming languages, any system for creating algorithms – and mathematics itself.

We use “closed logical systems” whenever we create insights and conclusions by combining and extrapolating from basic data and facts. This is how all reporting, calculating, business intelligence, “analytics” and “big data” technologies work.

Gödel’s Incompleteness Theorem proves that any such closed logical system can express statements that cannot be shown to be true or false using the system itself. In other words, whilst computer systems can produce extremely useful information, we cannot rely on them to prove that that information is completely accurate and valid. We have to do that ourselves.

Gödel’s theorem doesn’t prevent computer algorithms that have been verified by humans using the scientific method from working; but it does mean that we can’t rely on computers both to generate algorithms and to guarantee their validity.

2. The behaviour of many real-world systems can’t be reduced analytically to simple rules

Many systems in the real world are complex: they cannot be described by simple rules that predict their behaviour based on measurements of their initial conditions.

A simple example is the “three body problem“. Imagine a sun, a planet and a moon all orbiting each other. The movement of these three objects is governed by the force of gravity, which can be described by relatively simple mathematical equations. However, even with just three objects involved, it is not possible to use these equations to directly predict their long-term behaviour – whether they will continue to orbit each other indefinitely, or will eventually collide with each other, or spin off into the distance.
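
Instead, such systems are stepped forward numerically. The following is a minimal sketch of that approach, with arbitrary units, masses and starting conditions chosen purely for illustration:

```python
# A toy three-body simulation: gravity is integrated step by step because
# no general closed-form solution exists. Units and values are arbitrary.
import numpy as np

G, dt, steps = 1.0, 0.001, 10_000
mass = np.array([1.0, 1.0, 1.0])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vel = np.array([[0.0, 0.2], [0.0, -0.2], [0.2, 0.0]])

def accelerations(pos):
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

for _ in range(steps):                    # simple stepwise integration -
    vel += accelerations(pos) * dt        # numerical errors accumulate,
    pos += vel * dt                       # which is itself part of the point

print(pos)  # positions after 10 time units
```

Re-running this with a tiny change to one starting position gives a completely different final configuration – which is exactly why long-term prediction fails.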

(A computer simulation by Hawk Express of a Belousov–Zhabotinsky reaction, in which reactions between liquid chemicals create oscillating patterns of colour. The simulation is carried out using “cellular automata”, a technique based on a grid of squares which can take different colours. In each “turn” of the simulation, like a turn in a board game, the colour of each square is changed using simple rules based on the colours of adjacent squares. Such simulations have been used to reproduce a variety of real-world phenomena)

As Stephen Wolfram argued in his controversial book “A New Kind of Science” in 2002, we need to take a different approach to understanding such complex systems. Rather than using mathematics and logic to analyse them, we need to simulate them, often using computers to create models of the elements from which complex systems are composed, and the interactions between them. By running simulations based on a large number of starting points and comparing the results to real-world observations, insights into the behaviour of the real-world system can be derived. This is how weather forecasts are created, for example. 
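
As a minimal sketch of the cellular automaton technique – using Wolfram’s one-dimensional “Rule 30” rather than the two-dimensional BZ simulation pictured above – each cell’s next state depends only on itself and its two neighbours, yet the resulting pattern is remarkably intricate:

```python
# Wolfram's elementary "Rule 30" cellular automaton. Each generation, every
# cell's new state is looked up from the rule number using the 3-bit value
# of its (left, centre, right) neighbourhood.
RULE = 30
cells = [0] * 30 + [1] + [0] * 30   # start with a single live cell

for _ in range(20):
    print("".join("#" if c else " " for c in cells))
    cells = [
        (RULE >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % len(cells)])) & 1
        for i in range(len(cells))
    ]
```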

But as we all know, weather forecasts are not always accurate. Simulations are approximations to real-world systems, and their accuracy is restricted by the degree to which digital data can be used to represent a non-digital world. For this reason, conclusions and predictions drawn from simulations are usually “average” or “probable” outcomes for the system as a whole, not precise predictions of the behaviour of the system or any individual element of it. This is why weather forecasts are often wrong; and why they predict likely levels of rain and windspeed rather than the shape and movement of individual clouds.

(A simple and famous example of a computer programme that never stops running because it calls itself. The output continually varies by printing out characters based on random number generation. Image by Prosthetic Knowledge)

3. Some problems can’t be solved by computing machines

If I consider a simple question such as “how many letters are in the word ‘calculation’?”, I can easily convince myself that a computer programme could be written to answer the question; and that it would find the answer within a relatively short amount of time. But some problems are much harder to solve, or can’t even be solved at all.

For example, a “Wang Tile” (see image below) is a square tile formed from four triangles of different colours. Imagine that you have bought a set of tiles of various colour combinations in order to tile a wall in a kitchen or bathroom. Given the set of tiles that you have bought, is it possible to tile your wall so that triangles of the same colour line up to each other, forming a pattern of “Wang Tile” squares?

In 1966 Robert Berger proved that no algorithm exists that can answer that question. There is no way to solve the problem – or to determine how long it will take to solve the problem – without actually solving it. You just have to try to tile the room and find out the hard way.

One of the most famous examples of this type of problem is the “halting problem” in computer science. Some computer programmes finish executing their commands relatively quickly. Others can run indefinitely if they contain a “loop” instruction that never ends. For others which contain complex sequences of loops and calls from one section of code to another, it may be very hard to tell whether the programme finishes quickly, or takes a long time to complete, or never finishes its execution at all.

Alan Turing, one of the most important figures in the development of computing, proved in 1936 that a general algorithm to determine whether or not any computer programme finishes its execution does not exist. In other words, whilst there are many useful computer programmes in the world, there are also problems that computer programmes simply cannot solve.
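
Turing’s argument can be sketched in a few lines of Python-flavoured pseudocode, where halts() is the hypothetical “oracle” that his proof rules out:

```python
def halts(program, arg):
    """Hypothetical halting oracle - Turing proved this cannot be written."""
    raise NotImplementedError

def paradox(program):
    if halts(program, program):   # if the oracle says program(program) halts...
        while True:               # ...loop forever;
            pass
    return                        # ...otherwise, halt immediately.

# Does paradox(paradox) halt? If halts(paradox, paradox) returns True, then
# paradox(paradox) loops forever; if it returns False, it halts. Either way
# the oracle gives the wrong answer, so no such oracle can exist.
```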

(A set of Wang Tiles, and a pattern of coloured squares created by tiling them. Given any random set of tiles of different colour combinations, there is no set of rules that can be relied on to determine whether a valid pattern of coloured squares can be created from them. Sometimes, you have to find out by trial and error. Images from Wikipedia)

Five reasons why the human world is messy, unpredictable, and can’t be perfectly described using data and logic

1. Our actions create disorder

The 2nd Law of Thermodynamics is a good candidate for the most fundamental law of science. It states that as time progresses, the universe becomes more disorganised. It guarantees that ultimately – over an almost unimaginably long timescale – the Universe will die as all of the energy and activity within it dissipates.

An everyday practical consequence of this law is that every time we act to create value – building a shed, using a car to get from one place to another, cooking a meal – our actions eventually cause a greater amount of disorder to be created as a consequence – as noise, pollution, waste heat or landfill refuse.

For example, if I spend a day building a shed, then to create that order and value from raw materials, I consume structured food and turn it into sewage. Or if I use an electric forklift to stack a pile of boxes, I use electricity that has been created by burning structured coal into smog and ash.
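
In symbols the law is remarkably compact; so is Boltzmann’s definition of entropy, which relates the disorder of a system to the number of microscopic arrangements consistent with what we observe:

```latex
% The second law: the total entropy S of an isolated system never decreases.
\Delta S_{\mathrm{total}} \geq 0
% Boltzmann's entropy: k_B is Boltzmann's constant, W the number of
% microscopic arrangements ("microstates") consistent with the observed state.
S = k_B \ln W
```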

So it is literally impossible to create a “perfect world”. Whenever we act to make a part of the world more ordered, we create disorder elsewhere. And ultimately – thankfully, long after you and I are dead – disorder is all that will be left.

2. The failure of Logical Atomism: why the human world can’t be perfectly described using data and logic

In the 20th Century two of the most famous and accomplished philosophers in history, Bertrand Russell and Ludwig Wittgenstein, invented “Logical Atomism“, a theory that the entire world could be described by using “atomic facts” – independent and irreducible pieces of knowledge – combined with logic.

But despite 40 years of work, these two supremely intelligent people could not get their theory to work: “Logical Atomism” failed. It is not possible to describe our world in that way.

One cause of the failure was the insurmountable difficulty of identifying truly independent, irreducible atomic facts. “The box is red” and “the circle is blue”, for example, aren’t independent or irreducible facts for many reasons. “Red” and “blue” are conventions of human language used to describe the perceptions created when electromagnetic waves of different frequencies arrive at our retinas. In other words, they depend on and relate to each other through a number of sophisticated systems.

Despite centuries of scientific and philosophical effort, we do not have a complete understanding of how to describe our world at its most basic level. As physicists have explored the world at smaller and smaller scales, Quantum Mechanics has emerged as the most fundamental theory for describing it – it is the closest we have come to finding the “irreducible facts” that Russell and Wittgenstein were looking for. But whilst the mathematical equations of Quantum Mechanics predict the outcomes of experiments very well, after nearly a century, physicists still don’t really agree about what those equations mean. And as we have already seen, Heisenberg’s Uncertainty Principle prevents us from ever having perfect knowledge of the world at this level.

Perhaps the most important failure of logical atomism, though, was that it proved impossible to use logical rules to turn “facts” at one level of abstraction – for example, “blood cells carry oxygen”, “nerves conduct electricity”, “muscle fibres contract” – into facts at another level of abstraction – such as “physical assault is a crime”. The human world and the things that we care about can’t be described using logical combinations of “atomic facts”. For example, how would you define the set of all possible uses of a screwdriver, from prising the lids off paint tins to causing a short-circuit by jamming it into a switchboard?

Our world is messy, subjective and opportunistic. It defies universal categorisation and logical analysis.

(A Pescheria in Bari, Puglia, where a fish-market price information service makes it easier for local fishermen to identify the best buyers and prices for their daily catch. Photo by Vito Palmi)

3. The importance and inaccessibility of “local knowledge” 

Because the tool we use for calculating and agreeing value when we exchange goods and services is money, economics is the discipline that is often used to understand the large-scale behaviour of society. We often quantify the “growth” of society using economic measures, for example.

But this approach is notorious for overlooking social and environmental characteristics such as health, happiness and sustainability. Alternatives exist, such as the Social Progress Index, or the measurement framework adopted by the United Nations 2014 Human Development Report on world poverty; but they are still high level and abstract.

Such approaches struggle to explain localised variations, and in particular cannot predict the behaviours or outcomes of individual people with any accuracy. This “local knowledge problem” is caused by the fact that a great deal of the information that determines individual actions is personal and local, and not measurable at a distance – the experienced eye of the fruit buyer assessing not just the quality of the fruit but the quality of the farm and farmers that produce it, as a measure of the likely consistency of supply; the emotional attachments that cause us to favour one brand over another; or the degree of community ties between local businesses that influence their propensity to trade with each other.

“Sharing economy” business models that use social media and reputation systems to enable suppliers and consumers of goods and services to find each other and transact online are opening up this local knowledge to some degree. Local food networks, freecycling networks, and land-sharing schemes all use this technology to the benefit of local communities whilst potentially making information about detailed transactions more widely available. And to some degree, the human knowledge that influences how transactions take place can be encoded in “expert systems” which allow computer systems to codify the quantitative and heuristic rules by which people take decisions.

But these technologies are only used in a subset of the interactions that take place between people and businesses across the world, and it is unlikely that they’ll become ubiquitous in the foreseeable future (or that we would want them to become so). Will we ever reach the point where prospective house-buyers delegate decisions about where to live to computer programmes operating in online marketplaces rather than by visiting places and imagining themselves living there? Will we somehow automate the process of testing the freshness of fish by observing the clarity of their eyes and the freshness of their smell before buying them to cook and eat?

In many cases, while technology may play a role introducing potential buyers and sellers of goods and services to each other, it will not replace – or predict – the human behaviours involved in the transaction itself.

(Medway Youth Trust use predictive and textual analytics to draw insight into their work helping vulnerable children. They use technology to inform expert case workers, not to take decisions on their behalf.)

4. “Wicked problems” cannot be described using data and logic

Despite all of the challenges associated with problems in mathematics and the physical sciences, it is nevertheless relatively straightforward to frame and then attempt to solve problems in those domains; and to determine whether the resulting solutions are valid.

As the failure of Logical Atomism showed, though, problems in the human domain are much more difficult to describe in any systematic, complete and precise way – a challenge known as the “frame problem” in artificial intelligence. This is particularly true of “wicked problems” – challenges such as social mobility or vulnerable families that are multi-faceted, and consist of a variety of interdependent issues.

Take job creation, for example. Is that best accomplished through creating employment in taxpayer-funded public sector organisations? Or by allowing private-sector wealth to grow, creating employment through “trickle-down” effects? Or by maximising overall consumer spending power as suggested by “middle-out” economics? All of these ideas are described not using the language of mathematics or other formal logical systems, but using natural human language which is subjective and inconsistent in use.

The failure of Logical Atomism to fully represent such concepts in formal logical systems through which truth and falsehood can be determined with certainty emphasises what we all understand intuitively: there is no single “right” answer to many human problems, and no single “right” action in many human situations.

(An electricity bill containing information provided by OPower comparing one household’s energy usage to their neighbours. Image from Grist)

5. Behavioural economics and the caprice of human behaviour

“Behavioural economics” attempts to predict the way that humans behave when taking choices that have a measurable impact on them – for example, whether to put the washing machine on at 5pm when electricity is expensive, or at 11pm when it is cheap.

But predicting human behaviour is notoriously unreliable.

For example, in a smart water-meter project in Dubuque, Iowa, households that were told how their water conservation compared to that of their near neighbours were found to be twice as likely to take action to improve their efficiency as those who were only told the details of their own water use. In other words, people who were given quantified evidence that they were less responsible water users than their neighbours changed their behaviour. OPower have used similar techniques to help US households save 1.9 terawatt hours of power simply by including a report based on data from smart meters in a printed letter sent with customers’ electricity bills.
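
The mechanics of such a report are simple – here is a minimal sketch, assuming invented usage figures rather than real smart-meter data. The hard part, as the next example shows, is predicting the human response:

```python
# A toy neighbour-comparison report of the kind used in the Dubuque and
# OPower programmes. All figures below are invented for illustration.
usage = {"you": 310, "neighbour_a": 240, "neighbour_b": 260, "neighbour_c": 220}

yours = usage["you"]
neighbours = [v for k, v in usage.items() if k != "you"]
average = sum(neighbours) / len(neighbours)

if yours > average:
    print(f"You used {yours - average:.0f} kWh more than your neighbours' average.")
else:
    print(f"Well done - you used {average - yours:.0f} kWh less than your neighbours' average.")
```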

These are impressive achievements; but they are not always repeatable. A recycling scheme in the UK that adopted a similar approach found instead that it lowered recycling rates across the community: households who learned that they were putting more effort into recycling than their neighbours asked themselves “if my neighbours aren’t contributing to this initiative, then why should I?”

Low carbon engineering technologies like electric vehicles have clearly defined environmental benefits and clearly defined costs. But most Smart Cities solutions are less straightforward. They are complex socio-technical systems whose outcomes are emergent. Our ability to predict their performance and impact will certainly improve as more are deployed and analysed, and as university researchers, politicians, journalists and the public assess them. But we will never predict individual actions using these techniques, only the average statistical behaviour of groups of people. This can be seen from OPower’s own comparison of their predicted energy savings against those actually achieved – the predictions are good, but the actual behaviour of OPower’s customers shows a high degree of apparently random variation. Those variations are the result of the subjective, unpredictable and sometimes irrational behaviour of real people.

We can take insight from Behavioural Economics and other techniques for analysing human behaviour in order to create appropriate strategies, policies and environments that encourage the right outcomes in cities; but none of them can be relied on to give definitive solutions to any individual person or situation. They can inform decision-making, but are always associated with some degree of uncertainty. In some cases, the uncertainty will be so small as to be negligible, and the predictions can be treated as deterministic rules for achieving the desired outcome. But in many cases, the uncertainty will be so great that predictions can only be treated as general indications of what might happen; whilst individual actions and outcomes will vary greatly.

(Of course it is impossible to predict individual criminal actions as portrayed in the film “Minority Report”. But it is very possible to analyse past patterns of criminal activity, compare them to related data such as weather and social events, and predict the likelihood of crimes of certain types occurring in certain areas. Cities such as Memphis and Chicago have used these insights to achieve significant reductions in crime)

Learning to value insight without certainty

Mathematics and digital technology are incredibly powerful; but they will never perfectly and completely describe and predict our world in human terms. In many cases, our focus for using them should not be on automation: it should be on the enablement of human judgement through better availability and communication of information. And in particular, we should concentrate on communicating accurately the meaning of information in the context of its limitations and uncertainties.

There are exceptions where we automate systems because of a combination of a low-level of uncertainty in data and a large advantage in acting autonomously on it. For example, anti-lock braking systems save lives by using automated technology to take thousands of decisions more quickly than most humans would realise that even a single decision needed to be made; and do so based on data with an extremely low degree of uncertainty.

But the most exciting opportunity for us all is to learn to become sophisticated users of information that is uncertain. The results of textual analysis of sentiment towards products and brands expressed in social media are far from certain; but they are still of great value. Similar technology can extract insights from medical research papers, case notes in social care systems, maintenance logs of machinery and many other sources. Those insights will rarely be certain; but properly assessed by people with good judgement they can still be immensely valuable.
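
As a deliberately crude sketch of why such results are valuable yet uncertain, consider word-list sentiment scoring with an invented lexicon – far simpler than commercial text analytics, but the failure mode is the same in kind:

```python
# Toy sentiment scoring: count positive and negative words from invented
# lists. Useful as a rough signal, but easily fooled - which is the point.
POSITIVE = {"great", "love", "reliable"}
NEGATIVE = {"broken", "hate", "slow"}

def sentiment(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("I love this phone and the battery is great"))  # 2
print(sentiment("not great and quite slow"))                    # 0 - the negated
                                                                # "great" still scores
                                                                # positive: one source
                                                                # of uncertainty
```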

This is a much better way to understand the value of technology than ideas like “perfect knowledge” and “algorithmic regulation”. And it is much more likely that people will trust the benefits that we claim new technologies can bring if we are open about their limitations. People won’t use technologies that they don’t trust; and they won’t invest their money in them or vote for politicians who say they’ll spend taxes on them.

Thank you to Richard Brown and Adrian McEwen for discussions on Twitter that helped me to prepare this article. A more in-depth discussion of some of the scientific and philosophical issues I’ve described, and an exploration of the nature of human intelligence and its non-deterministic characteristics, can be found in the excellent paper “Answering Descartes: Beyond Turing” by Stuart Kauffman, published by MIT Press.
