11 reasons computers can’t understand or solve our problems without human judgement

(Photo by Matt Gidley)

Why data is uncertain, cities are not programmable, and the world is not “algorithmic”.

Many people are not convinced that the Smart Cities movement will result in the use of technology to make places, communities and businesses in cities better. Outside their consumer enjoyment of smartphones, social media and online entertainment – to the degree that they have access to them – they don’t believe that technology or the companies that sell it will improve their lives.

The technology industry itself contributes significantly to this lack of trust. Too often we overstate the benefits of technology, or play down its limitations and the challenges involved in using it well.

Most recently, the idea that traditional processes of government should be replaced by “algorithmic regulation” – the comparison of the outcomes of public systems to desired objectives through the measurement of data, and the automatic adjustment of those systems by algorithms in order to achieve them – has been proposed by Tim O’Reilly and other prominent technologists.

These approaches work in many mechanical and engineering systems – the autopilots that fly planes or the anti-lock braking systems that we rely on to stop our cars. But should we extend them into human realms – how we educate our children or how we rehabilitate convicted criminals?

It’s clearly important to ask whether it would be desirable for our society to adopt such approaches. That is a complex debate, but my personal view is that in most cases the incredible technologies available to us today – and which I write about frequently on this blog – should not be used to take automatic decisions about such issues. They are usually more valuable when they are used to improve the information and insight available to human decision-makers – whether they are politicians, public workers or individual citizens – who are then in a better position to exercise good judgement.

More fundamentally, though, I want to challenge whether “algorithmic regulation” or any other highly deterministic approach to human issues is even possible. Quite simply, it is not.

It is true that our ability to collect, analyse and interpret data about the world has advanced to an astonishing degree in recent years. However, that ability is far from perfect, and strongly established scientific and philosophical principles tell us that it is impossible to definitively measure human outcomes from underlying data in physical or computing systems; and that it is impossible to create algorithmic rules that exactly predict them.

Sometimes automated systems succeed despite these limitations – anti-lock braking technology has become nearly ubiquitous because it is more effective than most human drivers at slowing down cars in a controlled way. But in other cases they create such great uncertainties that we must build in safeguards to account for the very real possibility that insights drawn from data are wrong. I do this every time I leave my home with a small umbrella packed in my bag despite the fact that weather forecasts created using enormous amounts of computing power predict a sunny day.

(No matter how sophisticated computer models of cities become, there are fundamental reasons why they will always be simplifications of reality. It is only by understanding those constraints that we can understand which insights from computer models are valuable, and which may be misleading. Image of Sim City by haljackey)

Only by understanding these limitations can we judge where an “algorithmic” approach can be trusted; where it needs safeguards; and where it is wholly inadequate. Some of those limitations are practical, and will recede as sensors become more sensitive and computers more powerful. But others are fundamental laws of physics and limitations of logical systems.

When technology companies assert that Smart Cities can create “autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits” (as London School of Economics Professor Adam Greenfield rightly criticised in his book “Against the Smart City”), they are ignoring these challenges.

A blog post recently published by the highly influential magazine Wired made similar overstatements: “The Universe is Programmable” argues that we should extend the concept of an “Application Programming Interface (API)” – a facility usually offered by technology systems to allow external computer programmes to control or interact with them – to every aspect of the world, including our own biology.

To compare complex, unpredictable, emergent biological and social systems to the very logical, deterministic world of computer software is at best a dramatic oversimplification. The systems that shape the human body range from the armies of symbiotic microbes that help us digest food in our stomachs, to the consequences of using corn syrup to sweeten food, to the cultural pressure associated with “size 0” celebrities. Many of those systems can’t be well modelled in their own right, let alone deterministically related to each other; let alone formally represented in an accurate, detailed way by technology systems (or even in mathematics).

It is hubris to overstate technology’s capability and to play down its challenges and limitations; and it is that hubris which creates distrust of technology. We should regret and avoid it, because that distrust is a barrier that prevents us from achieving the very real benefits that data and technology can bring – benefits that have been convincingly demonstrated in the past.

For example, an enormous contribution to our knowledge of how to treat and prevent disease was made by John Snow who used data to analyse outbreaks of cholera in London in the 19th century. Snow used a map to correlate cases of cholera to the location of communal water pipes, leading to the insight that water-borne germs were responsible for spreading the disease. We wash our hands to prevent diseases spreading through germs in part because of what we would now call the “geospatial data analysis” performed by John Snow.

Many of the insights that we seek from analytic and smart city systems are human in nature, not physical or mathematical – for example, identifying when and where to apply social care interventions in order to reduce the occurrence of emotional domestic abuse. Such questions are complex and uncertain: what is “emotional domestic abuse”? Is it abuse inflicted by a live-in boyfriend, or by an estranged husband who lives separately but makes threatening telephone calls? Does it consist of physical violence or bullying? And what is “bullying”?


(John Snow’s map of cholera outbreaks in 19th century London)

We attempt to create structured, quantitative data about complex human and social issues by using approximations and categorisations; by tolerating ranges and uncertainties in numeric measurements; by making subjective judgements; and by looking for patterns and clusters across different categories of data. These techniques can be very powerful, but the controversies about “who knew what, when?” that regularly follow any high-profile failure in social care or another public service show just how hard it is to agree what those conventions and interpretations should be.

These challenges are not limited to “high level” social, economic and biological systems. In fact, they extend throughout the worlds of physics and chemistry into the basic nature of matter and the universe. They fundamentally limit the degree to which we can measure the world, and our ability to draw insight from that information.

By being aware of these limitations we are able to design systems and practises to use data and technology effectively. We know more about the weather through modelling it using scientific and mathematical algorithms in computers than we would without those techniques; but we don’t expect those forecasts to be entirely accurate. Similarly, supermarkets can use data about past purchases to make sufficiently accurate predictions about future spending patterns to boost their profits, without needing to predict exactly what each individual customer will buy.

We underestimate the limitations and flaws of these approaches at our peril. Whilst Tim O’Reilly cites several automated financial systems as good examples of “algorithmic regulation”, the financial crash of 2008 showed the terrible consequences of risk management systems that were thoroughly inadequate for the complexity of the markets they sought to profit from. The few institutions that realised that market conditions had changed, and that their models for risk management were no longer valid, relied instead on the expertise of their staff, and avoided the worst effects. Others continued to rely on models that produced increasingly misleading guidance, contributing to a recession that we are only now emerging from six years later, and that has damaged countless lives around the world.

Every day in their work, scientists, engineers and statisticians draw conclusions from data and analytics, but they temper those conclusions with an awareness of their limitations and any uncertainties inherent in them. By taking and communicating such a balanced and informed approach to applying similar techniques in cities, we will create more trust in these technologies than by overstating their capabilities.

What follows is a description of some of the scientific, philosophical and practical issues that lead inevitably to uncertainty in data, and that limit our ability to draw conclusions from it.

But I’ll finish with an explanation of why we can still draw great value from data and analytics if we are aware of those issues and take them properly into account.

Three reasons why we can’t measure data perfectly

(How Heisenberg’s Uncertainty Principle results from the dual wave/particle nature of matter. Explanation by HyperPhysics at Georgia State University)

1. Heisenberg’s Uncertainty Principle and the fundamental impossibility of knowing everything about anything

Heisenberg’s Uncertainty Principle is a cornerstone of Quantum Mechanics, which, along with General Relativity, is one of the two most fundamental theories scientists use to understand our world. It defines a limit to the precision with which certain pairs of properties of the basic particles which make up the world – such as protons, neutrons and electrons – can be known at the same time. For instance, the more accurately we measure the position of such particles, the more uncertain their speed and direction of movement become.

The explanation of the Uncertainty Principle is subtle, and lies in the strange fact that very small “particles” such as electrons and neutrons also behave like “waves”; and that “waves” like beams of light also behave like very small “particles” called “photons”. But we can use an analogy to understand it.

In order to measure something, we have to interact with it. In everyday life, we do this by using our eyes to measure lightwaves that are created by lightbulbs or the sun and that then reflect off objects in the world around us.

But when we shine light on an object, what we are actually doing is showering it with billions of photons, and observing the way that they scatter. When the object is quite large – a car, a person, or a football – the photons are so small in comparison that they bounce off without affecting it. But when the object is very small – such as an atom – the photons colliding with it are large enough to knock it out of its original position. In other words, measuring the current position of an object involves a collision which causes it to move in a random way.

This analogy isn’t exact, but it conveys the general idea. (For a full explanation, see the figure and link above). Most of the time, we don’t notice the effects of Heisenberg’s Uncertainty Principle because it applies at extremely small scales. But it is perhaps the most fundamental law asserting that “perfect knowledge” is simply impossible; and it illustrates a wider point: any measurement or observation affects what is measured or observed. Sometimes the effects are negligible, but often they are not – if we observe workers in a time and motion study, for example, we need to be careful to understand the effect our presence and observations have on their behaviour.
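The scale of the effect can be estimated from the formula behind the principle, Δx × Δp ≥ ħ/2. The sketch below (a rough illustration; the confinement sizes are chosen for the example) shows why the principle matters for an electron confined to an atom but is utterly negligible for a football:

```python
# Minimum momentum uncertainty for a particle confined to a region of size dx,
# from Heisenberg's relation: dx * dp >= hbar / 2.
HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds
ELECTRON_MASS = 9.1093837015e-31  # kilograms

def min_velocity_uncertainty(dx_metres, mass_kg):
    """Smallest possible spread in velocity for a particle confined to dx."""
    dp = HBAR / (2 * dx_metres)  # minimum momentum uncertainty
    return dp / mass_kg          # convert momentum spread to a velocity spread

# An electron confined to roughly the size of an atom (1e-10 m): the velocity
# uncertainty is hundreds of kilometres per second
dv_electron = min_velocity_uncertainty(1e-10, ELECTRON_MASS)

# A 0.4 kg football "confined" to a 1 mm region: the spread is absurdly tiny,
# which is why we never notice the principle at everyday scales
dv_football = min_velocity_uncertainty(1e-3, 0.4)
```

The electron’s velocity is uncertain by roughly half a million metres per second; the football’s by far less than the width of an atom per century.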

2. Accuracy, precision, noise, uncertainty and error: why measurements are never fully reliable

Outside the world of Quantum Mechanics, there are more practical issues that limit the accuracy of all measurements and data.

(A measurement of the electrical properties of a superconducting device from my PhD thesis. Theoretically, the behaviour should appear as a smooth, wavy line; but the experimental measurement is affected by noise and interference that cause the signal to become “fuzzy”. In this case, the effects of noise and interference – the degree to which the signal appears “fuzzy” – are relatively small compared to the strength of the signal, and the device is usable)

We live in a “warm” world – roughly 300 degrees Celsius above what scientists call “absolute zero”, the coldest temperature possible. What we experience as warmth is in fact movement: the atoms from which we and our world are made “jiggle about” – they move randomly. When we touch a hot object and feel pain it is because this movement is too violent to bear – it’s like being pricked by billions of tiny pins.

This random movement creates “noise” in every physical system, like the static we hear in analogue radio stations or on poor quality telephone connections.

We also live in a busy world, and this activity leads to other sources of noise. All electronic equipment creates electrical and magnetic fields that spread beyond the equipment itself, and in turn affect other equipment – we can hear this as a buzzing noise when we leave smartphones near radios.

Generally speaking, all measurements are affected by random noise created by heat, vibrations or electrical interference; are limited by the precision and accuracy of the measuring devices we use; and are affected by inconsistencies and errors that arise because it is always impossible to completely separate the measurement we want to make from all other environmental factors.

Scientists, engineers and statisticians are familiar with these challenges, and use techniques developed over the course of more than a century to determine and describe the degree to which they can trust and rely on the measurements they make. They do not claim “perfect knowledge” of anything; on the contrary, they are diligent in describing the unavoidable uncertainty that is inherent in their work.

3. The limitations of measuring the natural world using digital systems

One of the techniques we’ve adopted over the last half century to overcome the effects of noise and to make information easier to process is to convert “analogue” information about the real world (information that varies smoothly) into digital information – i.e. information that is expressed as sequences of zeros and ones in computer systems.

(When analogue signals are amplified, so is the noise that they contain. Digital signals are interpreted using thresholds: above an upper threshold, the signal means “1”, whilst below a lower threshold, the signal means “0”. A long string of “0”s and “1”s can be used to encode the same information as contained in analogue waves. By making the difference between the thresholds large compared to the level of signal noise, digital signals can be recreated to remove noise. Further explanation and image by Science Aid)
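The thresholding scheme described in the caption can be sketched in a few lines of Python (a toy illustration; the noise level and thresholds are invented for the example):

```python
import random

random.seed(42)  # make the random noise reproducible

def transmit(bits, noise_amplitude=0.3):
    """Send bits as analogue levels (0.0 or 1.0) with bounded random noise added."""
    return [b + random.uniform(-noise_amplitude, noise_amplitude) for b in bits]

def receive(levels, threshold=0.5):
    """Recover the bits by thresholding: above the threshold means 1, below means 0."""
    return [1 if level > threshold else 0 for level in levels]

original = [0, 1, 1, 0, 1, 0, 0, 1]
noisy_levels = transmit(original)
recovered = receive(noisy_levels)
# Because the noise (up to ±0.3) is smaller than the gap between the two
# levels (1.0), thresholding recovers the original bits exactly.
```

Because the difference between the two signal levels is large compared to the noise, the digital information survives intact; an analogue signal degraded by the same noise could not be cleaned up so simply.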

This process involves a trade-off between the accuracy with which analogue information is measured and described, and the length of the string of digits required to do so – and hence the amount of computer storage and processing power needed.

This trade-off can be clearly seen in the difference in quality between an internet video viewed on a smartphone over a 3G connection and one viewed on a high definition television using a cable network. Neither video will be affected by the static noise that affects weak analogue television signals, but the limited bandwidth of a 3G connection dramatically limits the clarity and resolution of the image transmitted.

The Nyquist–Shannon sampling theorem defines this trade-off and the limit to the quality that can be achieved in storing and processing digital information created from analogue sources. It determines the quality of digital data that we are able to create about any real-world system – from weather patterns to the location of moving objects to the fidelity of sound and video recordings. As computers and communications networks continue to grow more powerful, the quality of digital information will improve, but it will never be a perfect representation of the real world.
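A small numerical check illustrates the theorem’s limit (frequencies chosen purely for the example): sampled 10 times per second, the Nyquist limit is 5 Hz, and a 7 Hz sine wave produces exactly the same samples as a 3 Hz wave, so the two are indistinguishable from the data alone:

```python
import math

def sample(frequency_hz, rate_hz, n_samples):
    """Sample a sine wave of the given frequency at the given sampling rate."""
    return [math.sin(2 * math.pi * frequency_hz * n / rate_hz)
            for n in range(n_samples)]

RATE = 10  # samples per second: the Nyquist limit is RATE / 2 = 5 Hz

# A 7 Hz tone is above the Nyquist limit for this sampling rate...
above_limit = sample(7, RATE, 20)
# ...and its samples coincide with those of a phase-flipped 3 Hz tone:
# information about which wave was really present has been lost ("aliasing")
aliased = [-s for s in sample(3, RATE, 20)]

max_difference = max(abs(a - b) for a, b in zip(above_limit, aliased))
```

Once the analogue signal varies faster than half the sampling rate, no amount of clever processing of the samples can recover what was really there.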

Three limits to our ability to analyse data and draw insights from it

1. Gödel’s Incompleteness Theorem and the inconsistency of algorithms

Kurt Gödel’s Incompleteness Theorem sets a limit on what can be achieved by any “closed logical system” powerful enough to express basic arithmetic. Examples of “closed logical systems” include computer programming languages, any system for creating algorithms – and mathematics itself.

We use “closed logical systems” whenever we create insights and conclusions by combining and extrapolating from basic data and facts. This is how all reporting, calculating, business intelligence, “analytics” and “big data” technologies work.

Gödel’s Incompleteness Theorem proves that any such system contains statements that can be neither proved nor disproved using the system itself. In other words, whilst computer systems can produce extremely useful information, we cannot rely on them to prove that that information is completely accurate and valid. We have to do that ourselves.

Gödel’s theorem doesn’t stop computer algorithms that have been verified by humans using the scientific method from working; but it does mean that we can’t rely on computers to both generate algorithms and guarantee their validity.

2. The behaviour of many real-world systems can’t be reduced analytically to simple rules

Many systems in the real-world are complex: they cannot be described by simple rules that predict their behaviour based on measurements of their initial conditions.

A simple example is the “three-body problem”. Imagine a sun, a planet and a moon all orbiting each other. The movement of these three objects is governed by the force of gravity, which can be described by relatively simple mathematical equations. However, even with just three objects involved, it is not possible to use these equations to directly predict their long-term behaviour – whether they will continue to orbit each other indefinitely, or will eventually collide with each other, or spin off into the distance.
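A naive simulation makes the point concrete. In the sketch below (simple Euler integration with unit masses and gravitational constant; all values invented for the example), two runs whose starting positions differ by one part in a million soon drift apart by far more than that initial nudge:

```python
import math

def simulate(positions, steps=20000, dt=0.001, softening=0.01):
    """Integrate three equal-mass bodies under mutual gravity (G = 1) from rest,
    using naive Euler steps; returns the final positions."""
    pos = [list(p) for p in positions]
    vel = [[0.0, 0.0] for _ in range(3)]
    for _ in range(steps):
        acc = [[0.0, 0.0] for _ in range(3)]
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r2 = dx * dx + dy * dy + softening ** 2  # softened distance
                inv_r3 = 1.0 / (math.sqrt(r2) * r2)
                acc[i][0] += dx * inv_r3
                acc[i][1] += dy * inv_r3
        for i in range(3):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

start = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8)]
perturbed = [(1e-6, 0.0), (1.0, 0.0), (0.5, 0.8)]  # body 0 nudged by a millionth

a = simulate(start)
b = simulate(perturbed)
divergence = math.dist(a[0], b[0])  # how far apart body 0 ends up in the two runs
```

This sensitivity to initial conditions is the hallmark of chaotic systems: since we can never measure starting positions perfectly, long-term prediction is impossible no matter how accurate the equations are.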

(A computer simulation by Hawk Express of a Belousov–Zhabotinsky reaction, in which reactions between liquid chemicals create oscillating patterns of colour. The simulation is carried out using “cellular automata”, a technique based on a grid of squares which can take different colours. In each “turn” of the simulation, like a turn in a board game, the colour of each square is changed using simple rules based on the colours of adjacent squares. Such simulations have been used to reproduce a variety of real-world phenomena)
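The cellular-automaton technique described in the caption can be illustrated with a minimal one-dimensional example, the well-known “Rule 30”, in which each cell’s next colour depends only on its own colour and those of its two neighbours (a sketch; the grid width and number of generations are arbitrary):

```python
def rule30_step(row):
    """Apply Rule 30 to one row of cells (0s and 1s, with zero-padded edges):
    a cell's new value is left XOR (centre OR right)."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

# Start from a single "on" cell in the middle of the row
width = 11
row = [0] * width
row[width // 2] = 1

history = [row]
for _ in range(4):
    row = rule30_step(row)
    history.append(row)

for r in history:
    print("".join("#" if cell else "." for cell in r))
```

From a single black cell, a few generations already produce the irregular pattern the rule is famous for; simple local rules generating complex global behaviour is exactly the property that makes such simulations useful.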

As Stephen Wolfram argued in his controversial book “A New Kind of Science” in 2002, we need to take a different approach to understanding such complex systems. Rather than using mathematics and logic to analyse them, we need to simulate them, often using computers to create models of the elements from which complex systems are composed, and the interactions between them. By running simulations based on a large number of starting points and comparing the results to real-world observations, insights into the behaviour of the real-world system can be derived. This is how weather forecasts are created, for example. 

But as we all know, weather forecasts are not always accurate. Simulations are approximations to real-world systems, and their accuracy is restricted by the degree to which digital data can be used to represent a non-digital world. For this reason, conclusions and predictions drawn from simulations are usually “average” or “probable” outcomes for the system as a whole, not precise predictions of the behaviour of the system or any individual element of it. This is why weather forecasts are often wrong; and why they predict likely levels of rain and windspeed rather than the shape and movement of individual clouds.
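That probabilistic character can be sketched with a toy ensemble. The chaotic “logistic map” below is a stand-in for a real weather model, chosen purely for brevity; the approach of running many slightly different starting points and reporting a probability is the same:

```python
# A toy "ensemble forecast": run the same chaotic model from many very
# slightly different starting points, then report a probability rather
# than a single prediction.

def evolve(x, steps=50):
    """Iterate the chaotic logistic map x -> 4x(1 - x)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

# 100 ensemble members, each starting one millionth apart
members = [evolve(0.2 + i * 1e-6) for i in range(100)]

# Any single member is a precise-looking but untrustworthy prediction; the
# ensemble instead gives a probability (e.g. the chance of ending above 0.5)
# and a measure of how uncertain the forecast is
probability = sum(1 for x in members if x > 0.5) / len(members)
spread = max(members) - min(members)
```

After only fifty steps the members, which began almost identically, are scattered across the whole range of possible values; reporting a probability is the honest response to that spread.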


(A simple and famous example of a computer programme that never stops running because it calls itself. The output continually varies by printing out characters based on random number generation. Image by Prosthetic Knowledge)

3. Some problems can’t be solved by computing machines

If I consider a simple question such as “how many letters are in the word ‘calculation’?”, I can easily convince myself that a computer programme could be written to answer the question; and that it would find the answer within a relatively short amount of time. But some problems are much harder to solve, or can’t even be solved at all.
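The easy case really is easy – a one-line programme answers it, and we can be certain it finishes:

```python
def letter_count(word):
    """Count the letters in a word: a computation guaranteed to finish quickly."""
    return len(word)

print(letter_count("calculation"))  # → 11
```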

For example, a “Wang Tile” (see image below) is a square tile formed from four triangles of different colours. Imagine that you have bought a set of tiles of various colour combinations in order to tile a wall in a kitchen or bathroom. Given the set of tiles that you have bought, is it possible to tile your wall so that triangles of the same colour line up with each other, forming a pattern of “Wang Tile” squares?

In 1966 Robert Berger proved that no algorithm exists that can answer that question. There is no way to determine whether the tiling is possible – or how long it will take to find out – other than by actually attempting it. You just have to start tiling the wall and discover the answer the hard way.

One of the most famous examples of this type of problem is the “halting problem” in computer science. Some computer programmes finish executing their commands relatively quickly. Others can run indefinitely if they contain a “loop” instruction that never ends. For others which contain complex sequences of loops and calls from one section of code to another, it may be very hard to tell whether the programme finishes quickly, or takes a long time to complete, or never finishes its execution at all.

Alan Turing, one of the most important figures in the development of computing, proved in 1936 that a general algorithm to determine whether or not any computer programme finishes its execution does not exist. In other words, whilst there are many useful computer programmes in the world, there are also problems that computer programmes simply cannot solve.
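In practice, the best we can do is run a programme with a limited budget of steps. The sketch below uses the famous “Collatz” iteration (halve if even, otherwise triple and add one), whose running time is notoriously hard to predict; note that when the budget runs out, the checker cannot distinguish “slow” from “never finishes”:

```python
def halts_within(n, step_limit):
    """Run the Collatz iteration from n until it reaches 1, giving up after
    step_limit steps. Returns the step count if the computation finished, or
    None if the budget ran out - which tells us nothing about whether the
    computation would ever finish."""
    steps = 0
    while n != 1:
        if steps >= step_limit:
            return None  # inconclusive: slow, or endless? A budget can't say.
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Starting from 27, the iteration takes over a hundred steps to reach 1,
# so a small budget gives up while a larger one succeeds:
short_budget = halts_within(27, 50)    # gives up: inconclusive
long_budget = halts_within(27, 500)    # finishes, and reports the step count
```

A bounded checker of this kind is useful engineering, but it is not a solution to the halting problem: Turing’s proof shows that no budget, however generous, turns “don’t know yet” into “never”.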

(A set of Wang Tiles, and a pattern of coloured squares created by tiling them. Given any random set of tiles of different colour combinations, there is no set of rules that can be relied on to determine whether a valid pattern of coloured squares can be created from them. Sometimes, you have to find out by trial and error. Images from Wikipedia)

Five reasons why the human world is messy, unpredictable, and can’t be perfectly described using data and logic

1. Our actions create disorder

The 2nd Law of Thermodynamics is a good candidate for the most fundamental law of science. It states that as time progresses, the universe becomes more disordered: its total “entropy” always increases. It guarantees that ultimately – in billions of years – the Universe will die as all of the energy and activity within it dissipates.

An everyday practical consequence of this law is that every time we act to create value – building a shed, using a car to get from one place to another, cooking a meal – our actions eventually cause a greater amount of disorder to be created as a consequence – as noise, pollution, waste heat or landfill refuse.

For example, if I spend a day building a shed, then to create that order and value from raw materials, I consume structured food and turn it into sewage. Or if I use an electric forklift to stack a pile of boxes, I use electricity that has been created by burning structured coal into smog and ash.

So it is literally impossible to create a “perfect world”. Whenever we act to make a part of the world more ordered, we create disorder elsewhere. And ultimately – thankfully, long after you and I are dead – disorder is all that will be left.

2. The failure of Logical Atomism: why the human world can’t be perfectly described using data and logic

In the 20th century, two of the most famous and accomplished philosophers in history, Bertrand Russell and Ludwig Wittgenstein, developed “Logical Atomism”: the theory that the entire world could be described using “atomic facts” – independent and irreducible pieces of knowledge – combined with logic.

But despite 40 years of work, these two supremely intelligent people could not get their theory to work: “Logical Atomism” failed. It is not possible to describe our world in that way.

One cause of the failure was the insurmountable difficulty of identifying truly independent, irreducible atomic facts. “The box is red” and “the circle is blue”, for example, aren’t independent or irreducible facts for many reasons. “Red” and “blue” are two conventions of human language used to describe the perceptions created when electro-magnetic waves of different frequencies arrive at our retinas. In other words, they depend on and relate to each other through a number of sophisticated systems.

Despite centuries of scientific and philosophical effort, we do not have a complete understanding of how to describe our world at its most basic level. As physicists have explored the world at smaller and smaller scales, Quantum Mechanics has emerged as the most fundamental theory for describing it – it is the closest we have come to finding the “irreducible facts” that Russell and Wittgenstein were looking for. But whilst the mathematical equations of Quantum Mechanics predict the outcomes of experiments very well, after nearly a century, physicists still don’t really agree about what those equations mean. And as we have already seen, Heisenberg’s Uncertainty Principle prevents us from ever having perfect knowledge of the world at this level.

Perhaps the most important failure of logical atomism, though, was that it proved impossible to use logical rules to turn “facts” at one level of abstraction – for example, “blood cells carry oxygen”, “nerves conduct electricity”, “muscle fibres contract” – into facts at another level of abstraction – such as “physical assault is a crime”. The human world and the things that we care about can’t be described using logical combinations of “atomic facts”. For example, how would you define the set of all possible uses of a screwdriver, from prising the lids off paint tins to causing a short-circuit by jamming it into a switchboard?

Our world is messy, subjective and opportunistic. It defies universal categorisation and logical analysis.

(A Pescheria in Bari, Puglia, where a fish-market price information service makes it easier for local fishermen to identify the best buyers and prices for their daily catch. Photo by Vito Palmi)

3. The importance and inaccessibility of “local knowledge” 

Because the tool we use for calculating and agreeing value when we exchange goods and services is money, economics is the discipline that is often used to understand the large-scale behaviour of society. We often quantify the “growth” of society using economic measures, for example.

But this approach is notorious for overlooking social and environmental characteristics such as health, happiness and sustainability. Alternatives exist, such as the Social Progress Index, or the measurement framework adopted by the United Nations 2014 Human Development Report on world poverty; but they are still high level and abstract.

Such approaches struggle to explain localised variations, and in particular cannot predict the behaviours or outcomes of individual people with any accuracy. This “local knowledge problem” is caused by the fact that a great deal of the information that determines individual actions is personal and local, and not measurable at a distance – the experienced eye of the fruit buyer assessing not just the quality of the fruit but the quality of the farm and farmers that produce it, as a measure of the likely consistency of supply; the emotional attachments that cause us to favour one brand over another; or the degree of community ties between local businesses that influence their propensity to trade with each other.

“Sharing economy” business models that use social media and reputation systems to enable suppliers and consumers of goods and services to find each other and transact online are opening up this local knowledge to some degree. Local food networks, freecycling networks, and land-sharing schemes all use this technology to the benefit of local communities, whilst potentially making information about detailed transactions more widely available. And to some degree, the human knowledge that influences how transactions take place can be encoded in “expert systems”, which allow computer systems to codify the quantitative and heuristic rules by which people take decisions.
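A minimal sketch of that idea, using the fish-buying example (the rules, thresholds and scores below are entirely invented for illustration; a real buyer’s expertise is far richer than a handful of rules):

```python
# A toy "expert system": human heuristics written down as explicit rules,
# applied in order until one fires.

RULES = [
    # (condition over the observations, conclusion)
    (lambda obs: obs["eye_clarity"] < 0.4, "reject: eyes are cloudy"),
    (lambda obs: obs["smell_score"] > 0.7, "reject: smells too strong"),
    (lambda obs: obs["eye_clarity"] > 0.8 and obs["smell_score"] < 0.3,
     "buy: appears very fresh"),
]

def assess(observations):
    """Apply each rule in order; the first rule that fires gives the verdict."""
    for condition, conclusion in RULES:
        if condition(observations):
            return conclusion
    return "refer to a human expert"  # the rules don't cover every case

verdict = assess({"eye_clarity": 0.9, "smell_score": 0.1})
```

The final fall-through line is the important one: any practical rule set leaves gaps, and the honest design is to hand those cases back to a person rather than force a verdict.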

But these technologies are only used in a subset of the interactions that take place between people and businesses across the world, and it is unlikely that they’ll become ubiquitous in the foreseeable future (or that we would want them to become so). Will we ever reach the point where prospective house-buyers delegate decisions about where to live to computer programmes operating in online marketplaces rather than by visiting places and imagining themselves living there? Will we somehow automate the process of testing the freshness of fish by observing the clarity of their eyes and the freshness of their smell before buying them to cook and eat?

In many cases, while technology may play a role introducing potential buyers and sellers of goods and services to each other, it will not replace – or predict – the human behaviours involved in the transaction itself.

(Medway Youth Trust use predictive and textual analytics to draw insight into their work helping vulnerable children. They use technology to inform expert case workers, not to take decisions on their behalf.)

4. “Wicked problems” cannot be described using data and logic

Despite all of the challenges associated with problems in mathematics and the physical sciences, it is nevertheless relatively straightforward to frame and then attempt to solve problems in those domains; and to determine whether the resulting solutions are valid.

As the failure of Logical Atomism showed, though, problems in the human domain are much more difficult to describe in any systematic, complete and precise way – a challenge known as the “frame problem” in artificial intelligence. This is particularly true of “wicked problems” – challenges such as social mobility or vulnerable families that are multi-faceted, and consist of a variety of interdependent issues.

Take job creation, for example. Is that best accomplished through creating employment in taxpayer-funded public sector organisations? Or by allowing private-sector wealth to grow, creating employment through “trickle-down” effects? Or by maximising overall consumer spending power as suggested by “middle-out” economics? All of these ideas are described not using the language of mathematics or other formal logical systems, but using natural human language which is subjective and inconsistent in use.

The failure of Logical Atomism to fully represent such concepts in formal logical systems through which truth and falsehood can be determined with certainty emphasises what we all understand intuitively: there is no single “right” answer to many human problems, and no single “right” action in many human situations.

(An electricity bill containing information provided by OPower comparing one household’s energy usage to their neighbours. Image from Grist)

5. Behavioural economics and the caprice of human behaviour

“Behavioural economics” attempts to predict the way that humans behave when taking choices that have a measurable impact on them – for example, whether to put the washing machine on at 5pm when electricity is expensive, or at 11pm when it is cheap.

But predicting human behaviour is notoriously unreliable.

For example, in a smart water-meter project in Dubuque, Iowa, households that were told how their water conservation compared to that of their near neighbours were found to be twice as likely to take action to improve their efficiency as those who were only told the details of their own water use. In other words, people who were given quantified evidence that they were less responsible water users than their neighbours changed their behaviour. OPower have used similar techniques to help US households save 1.9 terawatt hours of power simply by including a report based on data from smart meters in a printed letter sent with customers’ electricity bills.
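The mechanics of such a comparison are simple; the hard part is predicting how people respond to it. Here is a minimal sketch in Python, using invented figures rather than real meter data:

```python
def comparison_report(usage_by_household, household):
    """Compare one household's consumption to its neighbours' average.

    usage_by_household: dict mapping a household id to its usage
    (e.g. litres of water per day). Returns the kind of short message
    that might be printed on a utility bill.
    """
    mine = usage_by_household[household]
    neighbours = [v for k, v in usage_by_household.items() if k != household]
    average = sum(neighbours) / len(neighbours)
    if mine <= average:
        return f"You used {mine} units - below your neighbours' average of {average:.0f}."
    return f"You used {mine} units - {mine - average:.0f} above your neighbours' average of {average:.0f}."
```

As the Dubuque and UK recycling examples show, the difficult question is not producing this message but whether it nudges behaviour in the intended direction.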

These are impressive achievements; but they are not always repeatable. A recycling scheme in the UK that adopted a similar approach found instead that it lowered recycling rates across the community: households who learned that they were putting more effort into recycling than their neighbours asked themselves “if my neighbours aren’t contributing to this initiative, then why should I?”

Low carbon engineering technologies like electric vehicles have clearly defined environmental benefits and clearly defined costs. But most Smart Cities solutions are less straightforward. They are complex socio-technical systems whose outcomes are emergent. Our ability to predict their performance and impact will certainly improve as more are deployed and analysed, and as university researchers, politicians, journalists and the public assess them. But we will never predict individual actions using these techniques, only the average statistical behaviour of groups of people. This can be seen from OPower’s own comparison of their predicted energy savings against those actually achieved – the predictions are good, but the actual behaviour of OPower’s customers shows a high degree of apparently random variation. Those variations are the result of the subjective, unpredictable and sometimes irrational behaviour of real people.
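That distinction between predictable group averages and unpredictable individuals is easy to illustrate with a simulation. The numbers below are invented – a hypothetical nudge with a 2% average saving and wide individual variation – but the shape of the result mirrors the OPower comparison described above:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Each household's response to an energy-saving nudge: a predictable
# average effect (a 2% saving) plus large individual variation.
def individual_saving():
    return random.gauss(mu=2.0, sigma=5.0)  # percent saved; can be negative

savings = [individual_saving() for _ in range(10_000)]
group_mean = sum(savings) / len(savings)

# The group average lands very close to the predicted 2%, but individual
# outcomes are spread widely - some households even increase their usage.
print(f"group mean saving: {group_mean:.2f}%")
print(f"spread: {min(savings):.1f}% to {max(savings):.1f}%")
```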

We can take insight from Behavioural Economics and other techniques for analysing human behaviour in order to create appropriate strategies, policies and environments that encourage the right outcomes in cities; but none of them can be relied on to give definitive solutions to any individual person or situation. They can inform decision-making, but are always associated with some degree of uncertainty. In some cases, the uncertainty will be so small as to be negligible, and the predictions can be treated as deterministic rules for achieving the desired outcome. But in many cases, the uncertainty will be so great that predictions can only be treated as general indications of what might happen; whilst individual actions and outcomes will vary greatly.

(Of course it is impossible to predict individual criminal actions as portrayed in the film “Minority Report”. But it is very possible to analyse past patterns of criminal activity, compare them to related data such as weather and social events, and predict the likelihood of crimes of certain types occurring in certain areas. Cities such as Memphis and Chicago have used these insights to achieve significant reductions in crime.)
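The kind of pattern analysis involved can be reduced, for illustration only, to conditional rates computed from past incidents. Real predictive policing systems are far more sophisticated; the data here is entirely invented:

```python
from collections import Counter

# Invented incident log: (area, weather on the day of the incident).
incidents = [
    ("downtown", "hot"), ("downtown", "hot"), ("downtown", "cold"),
    ("suburb", "hot"), ("downtown", "hot"),
]
# How many days of each weather type were observed overall.
days_observed = Counter({"hot": 3, "cold": 2})

def rate(area, weather):
    """Average incidents per day in `area` under `weather` conditions."""
    count = sum(1 for a, w in incidents if a == area and w == weather)
    return count / days_observed[weather]
```

Such a rate is a statement about likelihoods across an area and a period; it is never a prediction of any individual’s actions.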

Learning to value insight without certainty

Mathematics and digital technology are incredibly powerful; but they will never perfectly and completely describe and predict our world in human terms. In many cases, our focus for using them should not be on automation: it should be on the enablement of human judgement through better availability and communication of information. And in particular, we should concentrate on communicating accurately the meaning of information in the context of its limitations and uncertainties.

There are exceptions where we automate systems because of a combination of a low-level of uncertainty in data and a large advantage in acting autonomously on it. For example, anti-lock braking systems save lives by using automated technology to take thousands of decisions more quickly than most humans would realise that even a single decision needed to be made; and do so based on data with an extremely low degree of uncertainty.

But the most exciting opportunity for us all is to learn to become sophisticated users of information that is uncertain. The results of textual analysis of sentiment towards products and brands expressed in social media are far from certain; but they are still of great value. Similar technology can extract insights from medical research papers, case notes in social care systems, maintenance logs of machinery and many other sources. Those insights will rarely be certain; but properly assessed by people with good judgement they can still be immensely valuable.
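To see why such results are uncertain, consider a deliberately naive lexicon-based scorer. Production sentiment analysis is far more sophisticated, but the sources of uncertainty are the same: much of the text matches nothing in the lexicon, and the score is only a weak signal:

```python
# Toy sentiment lexicons; real systems use far richer models.
POSITIVE = {"good", "great", "love", "excellent", "reliable"}
NEGATIVE = {"bad", "poor", "hate", "broken", "unreliable"}

def sentiment(text):
    """Return (score, coverage): score runs from -1 (negative) to +1
    (positive); coverage is the fraction of words we recognised at all."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    if matched == 0:
        return 0.0, 0.0  # no evidence either way
    return (pos - neg) / matched, matched / len(words)
```

Reporting the coverage alongside the score is one way of communicating the uncertainty rather than hiding it.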

This is a much better way to understand the value of technology than ideas like “perfect knowledge” and “algorithmic regulation”. And it is much more likely that people will trust the benefits that we claim new technologies can bring if we are open about their limitations. People won’t use technologies that they don’t trust; and they won’t invest their money in them or vote for politicians who say they’ll spend their taxes on them.

Thank you to Richard Brown and Adrian McEwen for discussions on Twitter that helped me to prepare this article. A more in-depth discussion of some of the scientific and philosophical issues I’ve described, and an exploration of the nature of human intelligence and its non-deterministic characteristics, can be found in the excellent paper “Answering Descartes: Beyond Turing” by Stuart Kauffman, published by MIT Press.

From field to market to kitchen: smarter food for smarter cities

(A US Department of Agriculture inspector examines a shipment of imported frozen meat in New Orleans in 2013. Photo by Anson Eaglin)

One of the biggest challenges associated with the rapid urbanisation of the world’s population is working out how to feed billions of extra citizens. I’m spending an increasing amount of my time understanding how technology can help us to do that.

It’s well known that the populations of many of the world’s developing nations – and some of those that are still under-developed – are rapidly migrating from rural areas to cities. In China, for example, hundreds of millions of people are moving from the countryside to cities, leaving behind a lifestyle based on extended family living and agriculture for employment in business and a more modern lifestyle.

The definitions of “urban areas” used in many countries undergoing urbanisation include a criterion that less than 50% of employment and economic activity is based on agriculture (the appendices to the 2007 revision of the UN World Urbanisation Prospects summarise such criteria from around the world). Cities import their food.

In the developed countries of the Western world, this criterion is missing from most definitions of cities, which focus instead on the size and density of population. In the West, the transformation of economic activity away from agriculture took place during the Industrial Revolution of the 18th and 19th Centuries.

Urbanisation and the industrialisation of food

The food that is now supplied to Western cities is produced through a heavily industrialised process. But whilst the food supply chain had to scale dramatically to feed the rapidly growing cities of the Industrial Revolution, the processes it used, particularly in growing food and creating meals from it, did not industrialise – i.e. reduce their dependence on human labour – until much later.

As described by Population Matters, industrialisation took place after the Second World War when the countries involved took measures to improve their food security after struggling to feed themselves during the War whilst international shipping routes were disrupted. Ironically, this has now resulted in a supply chain that’s even more internationalised than before as the companies that operate it have adopted globalisation as a business strategy over the last two decades.

This industrial model has led to dramatic increases in the quantity of food produced and distributed around the world, as the industry group the Global Harvest Initiative describes. But whether it is the only way, or the best way, to provide food to cities at the scale required over the next few decades is the subject of much debate and disagreement.

(Irrigation enables agriculture in the arid environment of Al Jawf, Libya. Photo by Future Atlas)

One of the critical voices is Philip Lymbery, the Chief Executive of Compassion in World Farming, who argues passionately in “Farmageddon” that the industrial model of food production and distribution is extremely inefficient and risks long-term damage to the planet.

Lymbery questions whether the industrial system is sustainable financially – it depends on vast subsidy programmes in Europe and the United States; and he questions its social benefits – industrial farms are highly automated and operate in formalised international supply chains, so they do not always provide significant food or employment in the communities in which they are based.

He is also critical of the industrial system’s environmental impact. In order to optimise food production globally for financial efficiency and scale, single-use industrial farms have replaced the mixed-use, rotational agricultural systems that replenish nutrients in soil and that support insect species that are crucial to the pollination of plants. They also create vast quantities of animal waste that causes pollution because in the single-use industrial system there are no local fields in need of manure to fertilise crops.

And the challenges associated with feeding the growing populations of the world’s cities are not only to do with long-term sustainability. They are also a significant cause of ill-health and social unrest today.

Intensity, efficiency and responsibility

Our current food systems fail to feed nearly 1 billion people properly, let alone the 2 billion rise in global population expected by 2050. We already use 60% of the world’s fresh water to produce food – if we try to increase food production without changing the way that water is used, then we’ll simply run out of it, with dire consequences. In fact, as the world’s climate changes over the next few decades, less fresh water will be available to grow food. As a consequence of this and other effects of climate change, the UK supermarket ASDA reported recently that 95% of their fresh food supply is already exposed to climate risk.

The supply chains that provide food to cities are vulnerable to disruption – in the 2000 strike by the drivers who deliver fuel to petrol stations in the UK, some city supermarkets came within hours of running out of food completely; and disruptions to food supply have already caused alarming social unrest across the world.

These challenges will intensify as the world’s population grows, and as the middle classes double in size to 5 billion people, dramatically increasing demand for meat – and hence demand for food for the animals which produce it. Overall, the United Nations Food and Agriculture Organization estimates that we will need to produce 70% more food than today by 2050.

(Insect delicacies for sale in Phnom Penh’s central market. The United Nations suggested last year that more of us should join the 2 billion people who include insects in their diet – a nutritious and environmentally efficient source of food)

But increasing the amount of food available to feed people doesn’t necessarily mean growing more food, either by further intensifying existing industrial approaches or by adopting new techniques such as vertical farming or hydroponics. In fact, a more recent report issued by the United Nations and partner agencies cautioned that it was unlikely that the necessary increase in available food would be achieved through yield increases alone. Instead, it recommended reducing food loss, waste, and “excessive demand” for animal products.

There are many ways we might grow, distribute and use food more efficiently. We currently waste about 30% of the food we produce: some through food that rots before it reaches our shops or dinner tables, some through unpopularity (such as bread crusts or fruit and vegetables that aren’t the “right” shape and colour), and some because we simply buy more than we need to eat. If those inefficiencies were corrected, the food we already produce would be enough to feed 11 billion people, let alone the 9 billion population predicted for the Earth by 2050.

I think that technology has some exciting roles to play in how we respond to those challenges.

Smarter food in the field: data for free, predicting the future and open source beekeeping

New technologies give us a great opportunity to monitor, measure and assess the agricultural process and the environment in which it takes place.

The SenSprout sensor can measure and transmit the moisture content of soil; it is made simply by printing an electronic circuit design onto paper using commercially-available ink containing silver nano-particles; and it powers itself using ambient radio waves. We can use sensors like SenSprout to understand and respond to the natural environment, using technology to augment the traditional knowledge of farmers.

By combining data from sensors such as SenSprout and local weather monitoring stations with national and international forecasts, my colleagues in IBM Research are investigating how advanced weather prediction technology can enable approaches to agriculture that are more efficient and precise in their use of water. A trial project in Flint River, Georgia is allowing farmers to apply exactly the right amount of water at the right time to their crops, and no more.
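As an illustration of the principle (though not of the actual Flint River system, whose models are far more detailed), an irrigation decision might combine a soil-moisture reading with forecast rain. The target and conversion figures below are invented:

```python
def irrigation_mm(soil_moisture_pct, forecast_rain_mm,
                  target_pct=35.0, mm_per_pct=2.0):
    """How much to irrigate today, in millimetres of water.

    soil_moisture_pct: current soil moisture from a sensor such as SenSprout.
    forecast_rain_mm: rain expected before the next irrigation window.
    Assumes a hypothetical linear relationship of mm_per_pct millimetres
    of applied water per percentage point of soil moisture.
    """
    deficit_pct = max(0.0, target_pct - soil_moisture_pct)
    needed_mm = deficit_pct * mm_per_pct
    # Let the forecast rain do as much of the work as possible.
    return max(0.0, needed_mm - forecast_rain_mm)
```

The point of such a rule is precision: applying exactly the right amount of water at the right time, and no more.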

Such approaches improve our knowledge of the natural environment, but they do not control it. Nature is wild, the world is uncertain, and farmers’ livelihoods will always be exposed to risk from changing weather patterns and market conditions. The value of technology is in helping us to sense and respond to those changes. “Pasture Scout”, for example, does that by using social media to connect farmers in need of pasture to graze their cattle with other farmers with land of the right sort that is currently underused.

These possibilities are not limited to industrial agriculture or to developed countries. For example, the Kilimo Salama scheme adds resilience to the traditional practices of subsistence farmers by using remote weather monitoring and mobile phone payment schemes to provide affordable insurance for their crops.

Technology is also helping us to understand and respond to the environmental impact of the agricultural practices that have developed in previous decades: as urban beekeepers seek to replace lost natural habitats for bees, the Open Source Beehive project is using technology to help them identify the factors leading to the “colony collapse disorder” phenomenon that threatens the world’s bee population.

Smarter food in the marketplace: local food, the sharing economy and soil to fork traceability

The emergence of the internet as a platform for enabling sales, marketing and logistics over the last decade has enabled small and micro-businesses to reach markets across the world that were previously accessible only to much larger organisations with international sales and distribution networks. The proliferation of local food and urban farming initiatives shows that this transformation is changing the food industry too, where online marketplaces such as Big Barn and FoodTrade make it easier for consumers to buy locally produced food, and for producers to sell it.

This is not to say that vast industrial supply-chains will disappear overnight to be replaced by local food networks: they clearly won’t. But just as large-scale film and video production has adapted to co-exist and compete with millions of small-scale, “long-tail” video producers, so too the food industry will adjust. The need for co-existence and competition with new entrants should lead to improvements in efficiency and impact – the supermarket Tesco’s “Buying Club” shows how one large food retailer is already using these ideas to provide benefits that include environmental efficiencies to its smaller suppliers.

(A Pescheria in Bari, Puglia photographed by Vito Palmi)

One challenge is that food – unlike music and video – is a fundamentally physical commodity: exchanging it between producers and consumers requires transport and logistics. The adoption by the food industry of “sharing economy” approaches – business models that use social media and analytics to create peer-to-peer transactions, and that replace bulk movement patterns by thousands of smaller interactions between individuals – will be dependent on our ability to create innovative distribution systems to support them. Zaycon Foods operate one such system, using online technology to allow consumers to collectively negotiate prices for food that they then collect from farmers at regular local events.

Rather than replacing existing markets and supply chains, one role that technology is already playing is to give food producers better insight into their behaviour. M-farm links farmers in Kenya to potential buyers for their produce, and provides them with real-time information about prices; and the University of Bari in Puglia, Italy operates a similar fish-market pricing information service that makes it easier for local fisherman to identify the best buyers and prices for their daily catch.

Whatever processes are involved in getting food from where it’s produced to where it’s consumed, there’s an increasing awareness of the need to track those movements so that we know what we’re buying and eating, both to prevent scandals such as last year’s discovery of horsemeat in UK food labelled as containing beef; and so that consumers can make buying decisions based on accurate information about the source and quality of food. The “eSporing” (“eTraceability”) initiative between food distributors and the Norwegian government explored these approaches following an outbreak of E. coli in 2006.

As sensors become more capable and less expensive, we’ll be able to add more data and insight into this process. Soil quality can be measured using sensors such as SenSprout; plant health could be measured by similar sensors or by video analytics using infra-red data. The gadgets that many of us use whilst exercising to measure our physical activity and use of calories could be used to assess the degree to which animals are able to exercise. And scientists at the University of the West of England in Bristol have developed a quick, cheap sensor that can detect harmful bacteria and the residues of antibiotics in food. (The overuse of antibiotics in food production has harmful side effects, and in particular is leading some bacteria that cause dangerous diseases in humans to develop resistance to treatment).
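A sketch of how such sensor data might feed a traceability check: each shipment carries a log of readings, and simple rules flag anything out of range. The thresholds here are illustrative, not real food-safety limits:

```python
# Hypothetical permitted ranges per reading type.
LIMITS = {
    "temperature_c": (0.0, 5.0),              # chilled-chain range
    "antibiotic_residue_ppb": (0.0, 100.0),   # residue ceiling
}

def flag_readings(readings):
    """Return the (kind, value) readings outside their permitted range."""
    flagged = []
    for kind, value in readings:
        low, high = LIMITS[kind]
        if not (low <= value <= high):
            flagged.append((kind, value))
    return flagged
```

A flagged reading would prompt human investigation rather than an automatic verdict on the shipment.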

This advice from the Mayo Clinic in the United States gives one example of the link between the provenance of food and its health qualities, explaining that beef from cows fed on grass can have lower levels of fat and higher levels of beneficial “omega-3 fatty acids” than what they call “conventional beef” – beef from cows fed on grain delivered in lorries. (They appear to have forgotten the “convention” established by several millennia of evolution and thousands of years of animal husbandry that cows eat grass).

(Baltic Apple Pie – a recipe created by IBM’s Watson computer)

All of this information contributes to describing both the taste and health characteristics of food; and when it’s available, we’ll have the opportunity to make more informed choices about what we put on our tables.

Smarter food in the kitchen: cooking, blogging and cognitive computing

One of the reasons that the industrial farming system is so wasteful is that it is optimised to supply Western diets that include an unhealthy amount of meat; and to do so at an unrealistically low price for consumers. Enormous quantities of fish and plants – especially soya beans – that could be eaten by people as components of healthy diets are instead fed to industrially-farmed animals to produce this cheap meat. As a consequence, in the developed world many of us are eating more meat than is healthy for us. (Some of the arguments on this topic were debated by the UK’s Guardian newspaper last year).

But whilst eating less meat and more fish and vegetables is a simple idea, putting it into practice is a complex cultural challenge.

A recent report found that “a third of UK adults struggle to afford healthy food”. But the underlying cause is not economic: it is a lack of familiarity with the cooking and food preparation techniques that turn cheap ingredients into healthy, tasty food; and a cultural preference for red meat and packaged meals. The Sustainable Food School that is under development in Birmingham is one example of an initiative intending to address those challenges through education and awareness.

Engagement through traditional and social media also has an influence. Two recent examples in the UK are the celebrity chefs who have campaigned for a shift in our diets towards more sustainably sourced fish, and the schoolgirl who provoked a national debate about the standard and health of school meals simply by blogging about the meals offered to her each day at school; another is the food blogger Jack Monroe, who demonstrated how she could feed herself and her two-year-old son healthy, interesting food on a budget of £10 a week.

My colleagues in IBM Research have explored turning IBM’s Watson cognitive computing technology to this challenge. In an exercise similar to the “invention test” common to television cookery competitions, they have challenged Watson to create recipes from a restricted set of ingredients (such as might be left in the fridge and cupboards at the end of the week) and which meet particular criteria for health and taste.

(An example of local food processing: my own homemade chorizo.)

Food, technology, passion

The future of food is a complex and contentious issue – the tension between the productivity benefits of industrial agriculture and its environmental and social impact being just one example. I have touched on but not engaged in those debates in this article – my expertise is in technology, not in agriculture, and I’ve attempted to link to a variety of sources from all sides of the debate.

Some of the ideas for providing food to the world’s growing population in the future are no less challenging, whether those ideas are cultural or technological. The United Nations suggested last year, for example, that more of us should join the 2 billion people who include insects in their diet. Insects are a nutritious and environmentally efficient source of food, but those of us who have grown up in cultures that do not consider them as food are – for the most part – not at all ready to contemplate eating them. Artificial meat, grown in laboratories, is another increasingly feasible source of protein in our diets. It challenges our assumption that food is natural, but has some very reasonable arguments in its favour.

It’s a trite observation, but food culture is constantly changing. My 5-year-old son routinely demands foods such as hummus and guacamole that are unremarkable now but that were far from commonplace when I was a child. Ultimately, our food systems and diets will have to adapt and change again or we’ll run out of food, land and water.

Technology is one of the tools that can help us to make those changes. But as Kentaro Toyama famously said: technology is not the answer; it is the amplifier of human intention.

So what really excites me is not technology, but the passion for food that I see everywhere: from making food for our own families at home, to producing it in local initiatives such as Loaf, Birmingham’s community bakery; and from using technology in programmes that contribute to food security in developing nations to setting food sustainability at the heart of corporate business strategy.

There are no simple answers, but we are all increasingly informed and well-intentioned. And as technology continues to evolve it will provide us with incredible new tools. Those are great ingredients for an “invention test” for us all to find a sustainable, healthy and tasty way to feed future cities.

A design pattern for a Smarter City: Online Peer-to-Peer and Regional Marketplaces

(Photo of Moseley Farmers’ Market in Birmingham by Bongo Vongo)

(In “Do we need a Pattern Language for Smarter Cities” I suggested that “design patterns”, a tool for capturing re-usable experience invented by the town-planner Christopher Alexander, might offer a useful way to organise our knowledge of successful approaches to “Smarter Cities”. I’m now writing a set of design patterns to describe ideas that I’ve seen work more than once. The collection is described and indexed in “Design Patterns for Smarter Cities” which can be found from the link in the navigation bar of this blog.)

Design Pattern: Online Peer-to-Peer and Regional Marketplaces

Summary of the pattern:

A society is defined by the transactions that take place within it, whether their characteristics are social or economic, and whether they consist of material goods or communication. Many of those transactions take place in some form of marketplace.

As traditional business has globalised and integrated over the last few decades, many of the systems that support us – food production and distribution, energy generation, manufacturing and resource extraction, for example – have optimised their operations globally and consolidated ownership to exploit economies of scale and maximise profits. Those operations have come to dominate the marketplaces for the goods and services they consume and process; they defend themselves from competition through the expense and complexity of the business processes and infrastructures that support their operations; through their brand awareness and sales channels to customers; and through their expert knowledge of the availability and price of the resources and components they need.

However, in recent years dramatic improvements in information and communication technology – especially social media, mobile devices, e-commerce and analytics – have made it far easier for people and organisations with the potential to transact with each other to make contact and interact. Information about supply and demand has become more freely available; and it is increasingly easy to reach consumers through online channels – this blog, for instance, costs me nothing to write other than my own time, and now has readers in over 140 countries.

In response, online peer-to-peer marketplaces have emerged to compete with traditional models of business in many industries – Apple’s iTunes famously changed the music industry in this way; YouTube has transformed the market for video content and Prosper and Zopa have created markets for peer-to-peer lending. And as technologies such as 3D printing and small-scale energy generation improve, these ideas will spread to other industries as it becomes possible to carry out activities that previously required expensive, large-scale infrastructure at a smaller scale, and so much more widely.

(A Pescheria in Bari, Puglia photographed by Vito Palmi)

Whilst many of those marketplaces are operated by commercial organisations which exist to generate profit, the relevance of online marketplaces for Smarter Cities arises from their ability to deliver non-financial outcomes: i.e. to contribute to the social, economic or environmental objectives of a city, region or community.

The e-Bay marketplace in second-hand goods, for example, has extended the life of over $100 billion of goods since it began operating, by offering a dramatically easier way for buyers and sellers to identify each other and conduct business than had ever existed before. This spreads the environmental cost of manufacturing and disposing of goods across the greater total value created from them, contributing to the sustainability agenda in every country in which e-Bay operates.

Local food marketplaces such as Big Barn and Sustaination in the UK, m-farm in Kenya and the fish-market pricing information service operated by the University of Bari in Puglia, Italy, make it easier for consumers to buy locally produced food, and for producers to sell it; reducing the carbon footprint of the food that is consumed within a region, and assisting the success of local businesses.

The opportunity for cities and regions is to encourage the formation and success of online marketplaces in a way that contributes to local priorities and objectives. Such regional focus might be achieved by creating marketplaces with restricted access – for example, only allowing individuals and organisations from within a particular area to participate – or by practicality: free recycling networks tend to operate regionally simply because the expense of long journeys outweighs the benefit of acquiring a secondhand resource for free. The cost of transportation means that in general many markets which support the exchange of physical goods and services in small-scale, peer-to-peer transactions will be relatively localised.

City systems, communities and infrastructures affected:

(This description is based on the elements of Smarter City ecosystems presented in “The new Architecture of Smart Cities”).

  • Goals: all
  • People: employees, business people, customers, citizens
  • Ecosystem: private sector, public sector, 3rd sector, community
  • Soft infrastructures: innovation forums; networks and community forums
  • Hard infrastructures: information and communication technology, transport and utilities network

Commercial operating model:

The basic commercial premise of an online marketplace is to invest in the provision of online marketplace infrastructure in order to create returns from revenue streams within it. Various revenue streams can be created: for example, e-Bay apply fees to transactions conducted through their marketplace, as does the crowdfunding scheme Spacehive; whereas Linked-In charges a premium subscription fee to businesses such as recruitment agencies in return for the right to make unsolicited approaches to members.

More complex revenue models are created by allowing value-add service providers to operate in the marketplace – such as the payment service PayPal, which operated in e-Bay long before it was acquired; or the start-up Addiply, who add hyperlocal advertising to online transactions. The marketplace operator can also provide fee-based “white-label” or anonymised access to marketplace services to allow third parties to operate their own niche marketplaces – Amazon WebStore, for example, allows traders to build their own, branded online retail presence using Amazon’s services.
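The arithmetic behind these revenue models is simple enough to sketch; the structure, not the invented figures, is the point:

```python
def operator_revenue(transaction_values, fee_rate, subscribers, monthly_sub):
    """Monthly revenue for a hypothetical marketplace operator combining a
    per-transaction fee (the e-Bay / Spacehive style) with premium
    subscriptions (the Linked-In style). All figures are illustrative."""
    fee_income = sum(transaction_values) * fee_rate
    subscription_income = subscribers * monthly_sub
    return fee_income + subscription_income
```

In practice an operator’s choice between these streams shapes who participates: transaction fees scale with activity, while subscriptions favour high-volume professional users.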

(Photo by Mark Vauxhall of public Peugeot Ions on Rue des Ponchettes, Nice, France)

Online marketplaces are operated by a variety of entities: entrepreneurial technology companies such as Shutl, for example, which delivers goods bought online through a marketplace providing access to independent delivery agents and couriers; or traditional commercial businesses seeking to “servitise” their business models, create “disruptive business platforms” or create new revenue streams from data.

(Apple’s iTunes was a disruptive business platform in the music industry when it launched – it used a new technology-enabled marketplace to completely change flows of money within the industry; and streaming media services such as Spotify have servitised the music business by allowing us to pay for the right to listen to any music we like for a certain period of time, rather than paying for copies of specific musical works as “products” which we own outright. Car manufacturers such as Peugeot are collaborating with car clubs to offer similar “pay-as-you-go” models for car use, particularly as an alternative to ownership for electric cars. Some public sector organisations are also exploring these innovations, especially those that possess large volumes of data.)

Marketplaces can create social, economic and environmental outcomes where they are operated by commercial, profit-seeking organisations which seek to build brand value and customer loyalty through positive environmental and societal impact. Many private enterprises are increasingly conscious of the need to contribute to the communities in which they operate. Often this results from the desire of business leaders to promote responsible and sustainable approaches, combined with the consumer brand value that is created by a sincere approach. Unilever are perhaps the most high-profile commercial organisation pursuing this strategy at present; and Tesco have described similar initiatives recently, such as the newly-launched Tesco Buying Club, which helps suppliers secure discounts through collective purchasing. There is clearly an opportunity for local communities and local government organisations to engage with such initiatives from private enterprise to explore the potential for online marketplaces to create mutual benefit.

In other cases, marketplaces are operated by not-for-profit organisations or social enterprises for whom creating social or economic outcomes in a financially and environmentally sustainable way is the first priority. The social enterprise approach is important if cities everywhere are to benefit from online marketplaces: most commercially operated marketplaces with a geographic focus operate in large capital cities, which provide the largest customer base and minimise the risk associated with the investment in creating the market. If towns, cities and regions elsewhere wish to benefit from online marketplaces, they may need to encourage alternative models such as social enterprise to deliver them.

Finally, some schemes are operated on an entirely free basis, for example the Freecycle recycling network; or as charitable or donor-sponsored initiatives, for example the Kiva crowdfunding platform.

Soft infrastructures, hard infrastructures and assets required:

(The SMS for Life project uses the cheap and widely used SMS infrastructure to create a dynamic, collaborative supply chain for medicines between pharmacies in Africa. Photo by Novartis AG)

The technology infrastructures required to implement online marketplaces include those associated with e-commerce technology and social media: catalogues of goods and services; pricing mechanisms; support for marketing campaigns; networks of individuals and organisations and the ability to make connections between them; payments services and multi-channel support.

Many e-commerce platforms support online payments integrated with traditional banking systems; mobile payment schemes such as M-Pesa in Kenya can also be used. Alternatively, the widespread growth in local currencies and alternative trading systems might offer innovative solutions that are particularly relevant for marketplaces with a regional focus.

In order to be successful, marketplaces need to create an environment of trust in which transactions can be undertaken safely and reliably. As the internet has developed over the past two decades, technologies such as certificate-based identity assurance, consumer reviews and reputation schemes have emerged to create trust in online transactions and relationships. However, many online marketplaces provide robust real-world governance models in addition to tools to create online trust: the peer-to-peer lender Zopa created “Zopa Safeguard”, for example, an independent, not-for-profit entity with funds to reimburse investors whose debtors are unable to repay them.
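The consumer-review and reputation schemes mentioned above can be sketched very simply: participants accumulate ratings, and the marketplace reports a smoothed average so that a new seller with one glowing review does not instantly outrank an established one. The class below is a hypothetical illustration; the prior constants are assumptions, not any real site’s algorithm.

```python
# Minimal sketch of a marketplace reputation scheme; smoothing constants are invented.
from collections import defaultdict

class ReputationTracker:
    """Track per-seller ratings (1-5) and report a smoothed average."""

    PRIOR_MEAN = 3.0   # assumed neutral starting score for new sellers
    PRIOR_WEIGHT = 5   # how many 'virtual' ratings the prior counts as

    def __init__(self):
        self.ratings = defaultdict(list)

    def rate(self, seller: str, score: int) -> None:
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.ratings[seller].append(score)

    def reputation(self, seller: str) -> float:
        """Smoothed mean: sellers with few ratings stay close to the neutral prior."""
        scores = self.ratings[seller]
        total = sum(scores) + self.PRIOR_MEAN * self.PRIOR_WEIGHT
        return total / (len(scores) + self.PRIOR_WEIGHT)

tracker = ReputationTracker()
for score in (5, 5, 4):
    tracker.rate("seller_a", score)
print(tracker.reputation("seller_a"))  # (14 + 15) / 8 = 3.625
```

The smoothing reflects the trust problem in the text: scores must be hard to game with a handful of ratings, which is also why many marketplaces back them with real-world governance such as Zopa Safeguard.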

Marketplaces which involve the transaction of goods and services with some physical component – whether in the form of manufactured goods, resources such as water and energy, or services such as in-home care – will also require transport services; and the cost and convenience of those services will need to be appropriate to the value of exchanges in the marketplace. Shutl’s transportation marketplace is in itself an innovation in delivering more convenient, lower-cost delivery services to online retail marketplaces. By contrast, community energy schemes, which attempt to create local energy markets that reduce energy usage and maximise consumption of power generated from local, renewable resources, either need some form of smart grid infrastructure or a commercial vehicle such as a shared energy performance contract.

Driving forces:

  • The desire of regional authorities and business communities to form supply chains, market ecosystems and trading networks that maximise the creation and retention of economic value within a region; and that improve economic growth and social mobility.
  • The need to improve efficiency in the use of assets and resources; and to minimise externalities such as the excessive transport of goods and services.
  • The increasing availability and reducing cost of enabling technologies providing opportunities for new entrants in existing marketplaces and supply chains.

Benefits:

  • Maximisation of regional integration in supply networks.
  • Retention of value in the local economy.
  • Increased efficiency of resource usage by sharing and reusing goods and services.
  • Enablement of new models of collaborative asset ownership, management and use.
  • The creation of new business models to provide value-add products and services.

Implications and risks:

(West Midlands Police officers patrolling Birmingham’s busy Frankfurt Christmas Market in 2012. Photo by West Midlands Police)

Marketplaces must be carefully designed to attract a critical mass of participants with an interest in collaborating. It is unlikely, for example, that a group of large food retailers would collaborate in a single marketplace in which to sell their products to citizens of a particular region. The objective of such organisations is to maximise shareholder value by maximising their share of customers’ weekly household budgets. They would have no interest in sharing information about their products alongside their competitors and thus making it easier for customers to pick and choose suppliers for individual products.

Small, specialist food retailers have a stronger incentive to join such marketplaces: by adding to the diversity of produce available in a marketplace of specialist suppliers, they increase the likelihood of shoppers visiting the marketplace rather than a supermarket; and by sharing the cost of marketplace infrastructure – such as payments and delivery services – each benefits from access to a more sophisticated infrastructure than they could afford individually.

Those marketplaces that require transportation or other physical infrastructures will only be viable if they create transactions of high enough value to account for the cost of that infrastructure. Such a challenge can even apply to purely information-based marketplaces: producing high quality, reliable information requires a certain level of technology infrastructure, and marketplaces that are intended to create value through exchanging information must pay for the cost of that infrastructure. This is one of the challenges facing the open data movement.
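The viability condition described here is simple arithmetic: the value flowing through the marketplace must generate enough income to cover the cost of its supporting infrastructure. A back-of-envelope sketch, with all figures invented for illustration:

```python
# Illustrative viability check: does fee income cover infrastructure cost?

def is_viable(transactions_per_year: int, avg_transaction_value: float,
              fee_rate: float, annual_infrastructure_cost: float) -> bool:
    """True if the operator's fee income covers the cost of running the platform."""
    revenue = transactions_per_year * avg_transaction_value * fee_rate
    return revenue >= annual_infrastructure_cost

# Higher-value transactions can carry a payments-and-delivery infrastructure...
print(is_viable(50_000, 40.0, 0.05, 80_000))  # viable
# ...but the same volume of low-value exchanges cannot.
print(is_viable(50_000, 4.0, 0.05, 80_000))   # not viable
```

The same logic explains the open data challenge in the text: reliable information has an infrastructure cost even when no physical goods change hands.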

If the marketplace does not provide sufficient security infrastructure and governance processes to create trust between participants – or if those participants do not believe that the infrastructure and governance are adequate – then transactions will not be carried out.

Some level of competition is inevitable between participants in a marketplace. If that competition is balanced by the benefits of better access to trading partners and supporting services, then the marketplace will succeed; but if competitive pressures outweigh the benefits, it will fail.

Alternatives and variations:

  • Local currencies and alternative trading systems are in many ways similar to online marketplaces, and are often a supporting component of them.
  • Some marketplaces are built on similar principles, and certainly achieve “Smart” outcomes, but do not use any technology. Waste Concern’s waste recycling scheme in Dhaka, Bangladesh, for example, turns waste into a market resource, creating jobs in the process.

Examples and stories:

Sources of information:

I’ve written about digital marketplaces several times on this blog, including the following articles:

Industry experts and consultancies have published work on this topic that is well worth considering:
