A three step manifesto for a smarter, fairer economy

(United States GDP plotted against median household income from 1953 to present. Until about 1980, growth in the economy correlated to increases in household wealth. But from 1980 onwards, as digital technology has transformed the economy, household income has remained flat despite continuing economic growth. From “The Second Machine Age”, by MIT economists Andy McAfee and Erik Brynjolfsson, summarised in this article.)

(Or, why technology created the economy that helped Donald Trump and Brexit to win, and why we have to fix it.)

The world has not been thrown into crisis simply because the UK voted in June to leave the European Union, nor because the USA has just elected a President whose campaign rhetoric promised to tear up the rulebook of international behaviour (that’s putting it politely; many have accused him of much worse) – including pulling out of the global climate accord that many believe is the bare minimum needed to save us from a global catastrophe.

Those two choices (neither of which I support, as you might have guessed) were made by people who feel that a crisis has been building for years or even decades, and that the traditional leaders of our political, media and economic institutions have either been ignoring it or, worse, are refusing to address it due to vested interests in the status quo.

That crisis – which is one of worklessness, disenfranchisement and inequality for an increasingly significant proportion of the world’s population – is real; and is evident in figures everywhere:

… and so on.

Brexit and Donald Trump are the wrong solutions to the wrong problems

Of course, leaving the EU won’t solve this crisis for the UK.

Take the supposed need to limit immigration, for example, one of the main reasons people in the UK voted to leave the EU.

The truth is that the UK needs migrants. Firstly, with no immigration, the UK’s birth rate would be much lower than that needed to maintain our current level of population. That means fewer young people working and paying taxes, and more older people relying on state pensions and services. We wouldn’t be able to afford the public services we rely on.

Secondly, the people most likely to start new businesses that grow rapidly and create new jobs aren’t rich people who are offered tax cuts, they’re immigrants and their children. And of course, what will any country in the world, let alone the EU, demand in return for an open trade deal with the UK? Freedom of immigration.

So Brexit won’t fix this crisis, and whilst Donald Trump is showing some signs of moderating the extreme statements he made in his election campaign (like both the “Leave” and “Remain” sides of the abysmal UK Referendum campaign, he knew he was using populist nonsense to win votes, but wasn’t at all bothered by the dishonesty of it), neither will he.

[Update 29/01/17: I take it back: President Trump isn’t moderating his behaviour at all. What a disgrace.]

Whatever his claims to the contrary, Donald Trump’s tax plan will benefit the richest the most. Like most Republican politicians, he promotes policies that are criticised as “trickle-down” economics, in which wealth for all comes from providing tax cuts to rich people and large corporations so they can invest to create jobs.

But this approach does not stand up to scrutiny: history shows that – particularly in times of economic change – jobs and growth for all require leadership, action and investment from public institutions – in other words, they depend on the sensible use of taxation to redistribute the benefits of growth.

(Areas of relative wealth and deprivation in Birmingham as measured by the Indices of Multiple Deprivation. Birmingham, like many of the UK's Core Cities, has a ring of persistently deprived areas immediately outside the city centre, co-located with the highest concentration of transport infrastructure allowing traffic to flow in and out of the centre.)

Similarly, scrapping America’s role in the Trans-Pacific Partnership trade deal is unlikely to bring back manufacturing jobs to the US economy at anything like the scale that some of those who voted for Donald Trump hope, and that he’s given the impression it will.

In fact, manufacturing jobs are already returning to the US, as the need for agile production that responds to local market conditions outweighs a manufacturing cost advantage that has narrowed as the salaries of China’s workers have grown along with its economy.

However, the real challenge is that the skills required to secure and perform those jobs have changed: factory workers need increasingly technical skills to manage the robotic machinery that now performs most of the work.

Likewise, jobs in the US coal industry won’t return by changing the way the US trades with foreign countries. The American coal mined in some areas of the country has become an uncompetitive fuel compared to the American shale gas that is made accessible in other areas by the new technology of “fracking”. (I’m not in favour of fracking; I’d prefer we concentrate our resources on developing genuinely low-carbon, renewable energy sources. My point is that Donald Trump’s policies won’t address the job dislocation it has caused.)

So, if the UK’s choice to leave the EU and the USA’s choice to elect Donald Trump represent the wrong solutions to the wrong problems, what are the underlying problems that are creating a crisis? And how do we fix them?

The crisis begins in places that don’t work

When veteran BBC journalist John Humphreys travelled the UK to meet communities which have experienced a high degree of immigration, he found that immigration itself isn’t a problem. Rather, the rise in population caused by immigration becomes a problem when it’s not accompanied by investment in local infrastructure, services and business support. Immigrants are the same as people everywhere: they want to work; they start businesses (and in fact, they’re more likely to do that well than those of us who live and work in the country where we’re born); and they do all the other things that make communities thrive.

But the degree to which people – whether they’re immigrants or not – are successful doing so depends on the quality of their local environment, services and economy. And the reality is that there are stark, place-based differences in the opportunity people are given to live a good life.

In UK cities, life expectancy between the poorest and richest parts of the same city varies by up to 28 years. Areas of low life expectancy typically suffer from “multiple deprivation”: poor health, low levels of employment, low income, high dependency on benefits, poor education, poor access to services … and so on. These issues tend to affect the same areas for decade after decade, and they occur in part because of the effects of the physical urban infrastructure around them.


(The UK’s less wealthy regions benefit enormously from EU investment; whilst its richer regions, made wealthy by London’s economy, are net contributors. The EU acts to redistribute UK taxes to the regions that need them most, in a way that the national Government in Westminster does not.)

The failure to invest in local services and infrastructure to accommodate influxes of migrants isn’t the EU’s fault; it is caused by the failure of the UK national government to devolve spending power to the local authorities that understand local needs – local authorities in the UK control only 17% of local spending, as opposed to 55% on average across OECD countries.

Ironically, one of the crucial things the EU does (or did) with the UK’s £350 million per week contribution to its budget, a large share of which is paid for by taxes from London’s dominant share of the UK economy, is to give it back to support local infrastructure and projects which create jobs and improve communities. If the Remain campaign had done a better job of explaining the extent of this support, rather than trumpeting overblown scare stories about the national, London-centric economy from which many people feel they don’t benefit anyway, some of the regions most dependent on EU investment might not have voted to Leave.

Technology is exacerbating inequality

We should certainly try to improve urban infrastructure and services; and the “Smart City” movement argues for using digital technology to do so.

But ultimately, infrastructure and services simply support activity that is generated by the economy and by social activity, and the fundamental shift taking place today is not a technological shift that makes existing business models, services or infrastructure more effective. It is the transformation of economic and social interactions by new “platform” business models that exploit online transaction networks that couldn’t exist at all without the technologies we’ve become familiar with over the last decade.

Well known examples include:

  • Apple iTunes, exchanging music between producers and consumers
  • YouTube, exchanging video content between producers and consumers
  • Facebook, an online environment for social activity that has also become a platform for content, games, news, business and community activity
  • AirBnB – an online marketplace for peer-to-peer arrangement of accomodation
  • Über – an online marketplace for peer-to-peer arrangement of transport

… and so on. MIT economist Marshall Van Alstyne’s work shows that platform businesses are increasingly the most valuable and fastest growing in the world, across many sectors.

The last two examples in that list – AirBnB and Über – are particularly good examples of online marketplaces that create transactions that take place face-to-face in the real world; these business models are not purely digital as YouTube, for example, arguably is.

But whilst these new, technology-enabled business models can be extraordinarily successful – Airbnb has been valued at $30 billion only 8 years after it was founded, and Über recently secured investments that, 7 years after it was founded, valued the company at over $60 billion – many economists and social scientists believe that the impact of these new technology-enabled business models is contributing to increasing inequality and social disruption.

As Andy McAfee and Erik Brynjolfsson have explained in theory, and as a recent JP Morgan survey has demonstrated in fact (see graph and text in box below), as traditional businesses that provide permanent employment are replaced by online marketplaces that enable the exchange of casual labour and self-employed work, the share of economic growth that is captured by the owners of capital platforms – the owners and shareholders in companies like Amazon, Facebook and Über – is rising, and the share of economic growth that is distributed to people who provide labour – people who are paid for the work they do; by far the majority of us – is falling.

The impact of technology on the financial services sector is having a similar effect. Technology enables the industry to profit from the construction of increasingly complex derivative products that speculate on sub-second fluctuations in the value of stocks and other tradeable commodities, rather than by making investments in business growth. The effect again is to concentrate the wealth the industry creates into profits for a small number of rich investors rather than distributing it in businesses that more widely provide jobs and pay salaries.

Finally, this is also ultimately the reason why the various shifting forces affecting employment in traditional manufacturing industries – off-shoring, automation, re-shoring etc. – have not resulted in a belief that manufacturing industries are providing widespread opportunities for high quality employment and careers to the people and communities who enjoyed them in the past. Even whilst manufacturing activity grows in many developed countries, jobs in those industries require increasingly technical skills, at the same time that, once again, the majority of the profits are captured by a minority of shareholders rather than distributed to the workforce.

(Analysis by JP Morgan of 260,000 current account customers’ earnings from 30 sharing economy websites over 3 years. Customers using websites to sell labour do not increase their income; earnings from sharing economy websites simply replace earnings from other sources. Customers using sharing economy websites to exploit the value of capital assets they own, however, are able to increase their income. This evidence supports just one of the mechanisms explored by Andy McAfee and Erik Brynjolfsson through which it appears that the digital economy is contributing to increasing income inequality.)

That is why inequality is rising across the world; and that is the ultimate cause of the sense of unfairness that led to the choice of people in the UK to leave the EU, and people in the USA to elect Donald Trump as their President.

I do not blame the companies at the heart of these developments for causing inequality – I do not believe that is their aim, and many of their leaders believe passionately that they are a force for good.

But the evidence is clear that their cumulative impact is to create a world that is becoming damagingly unequal, and the reason is straightforward. Our market economies reward businesses that maximise profit and shareholder return; and there is simply no direct link from those basic corporate responsibilities to wider social, economic and environmental outcomes.

There are certainly indirect links – successful businesses need customers with money to spend, and there are more of those when more people have jobs that pay good wages, for example. But technology is increasingly enabling phenomenally successful new business models that depend much less on those indirect links to work.

We’re about to make things worse

Finally, as has been frequently highlighted in the media recently, new developments in technology are likely to further exacerbate the challenges of worklessness and inequality.

After a few decades in which scientific and technological progress in Artificial Intelligence (AI) made relatively little impact on the wider world, in the last few years the exponential growth of data and the computer processing power to manipulate it have led to some striking accomplishments by “machine learning”, a particular type of AI technology.

Whilst Machine Learning works in a very different way to our own intelligence, and whilst the Artificial Intelligence experts I’ve spoken to believe that any technological equivalent to human intelligence is between 20 and 100 years away (if it ever comes at all), one thing that is obvious is that Machine Learning technologies have already started to automate jobs that previously required human knowledge. Some studies predict that nearly half of all jobs – including those in highly-skilled, highly-paid occupations such as medicine, the law and journalism – could be replaced over the next few decades.

(Population changes in Blackburn, Burnley and Preston from 1901-2001. In the early part of the century, all three cities grew, supported by successful manufacturing economies. But in the latter half, only Preston continued to grow as it transitioned successfully to a service economy. If cities do not adapt to changes in the economy driven by technology, history shows that they fail. From “Cities Outlook 1901” by Centre for Cities)

Über is perhaps the clearest embodiment of these combined trends. Whilst several cities and countries have compelled the company to treat its drivers as employees and offer improved terms and conditions, its strategy is unapologetically to replace those drivers with autonomous vehicles anyway.

I’m personally convinced that what we’re experiencing through these changes – and what we’ve possibly been experiencing for 50 years or more – is properly understood to be an Information Revolution that will reshape our world every bit as significantly as the Industrial Revolution.

And history shows us we should take the economic and social consequences of that very seriously indeed.

In the last Century, as automated equipment replaced factory workers, many cities in the UK, such as Sunderland, Birmingham and Bradford, saw severe job losses, economic depression and social challenges as they failed to adapt from a manufacturing economy to new industries based on knowledge-working.

In this Century many knowledge-worker jobs will be automated too, and unless we knowingly and successfully manage this huge transition into an economy based on jobs we can’t yet predict, the social and economic consequences – the crisis that has already begun – will be just as bad, or perhaps even worse.

So if the problem is the lack of opportunity, what’s the answer?

If trickle-down economics doesn’t work, top-down public sector schemes of improvement won’t work either – they’ve been tried again and again without much improvement to those persistently, multiply-deprived areas:

“For three generations governments the world over have tried to order and control the evolution of cities through rigid, top-down action. They have failed. Masterplans lie unfulfilled, housing standards have declined, the environment is under threat and the urban poor have become poorer. Our cities are straining under the pressure of rapid population growth, rising inequality, inadequate infrastructure, and failing systems of urban planning, design and development.”

– from “The Radical Incrementalist” by Kelvin Campbell, summarised here.

One of the most forward-looking UK local authority Chief Executives said to me recently that the problem isn’t that a culture of dependency on benefits exists in deprived communities; it’s that a culture of doing things for and to people, rather than finding ways to support them succeeding for themselves, permeates local government.

This subset of findings from Sir Bob Kerslake’s report on Birmingham City Council reflects similar concerns:

  • “The council, members and officers, have too often failed to tackle difficult issues. They need to be more open about what the most important issues are and focus on addressing them;
  • Partnership working needs fixing. While there are some good partnerships, particularly operationally, many external partners feel the culture is dominant and over-controlling and that the council is complex, impenetrable and too narrowly focused on its own agenda;
  • The council needs to engage across the whole city, including the outer areas, and all the communities within it;
  • Regeneration must take place beyond the physical transformation of the city centre. There is a particularly urgent challenge in central and east Birmingham.”

One solution that’s being proposed to the challenges of inequality and the displacement of jobs by automation is the “Universal Basic Income” – an unconditional payment made by government to every citizen, regardless of income and employment status. The idea is that such a payment ensures a good enough standard of living for everyone, even if many people lose employment or see their salaries fall; or chose to work in less financially rewarding occupations that have strong social value – caring for others, for example. Several countries, including Finland, Canada and the Netherlands have already begun pilots of this idea.

I think it’s a terrible mistake for two reasons.

Firstly, the proposed level of income – about $1500 per month – isn’t at all sufficient to address the vast levels of inequality that our economy has created. Whilst it might allow a majority of people to live a basically comfortable life, why should we accept that a small elite should exist at such a phenomenally different level of technology-enabled wealth as to be reminiscent of a science fiction dystopia?

Andy McAfee and Erik Brynjolfsson best expressed the second problem with a Universal Basic Income by quoting Voltaire in “The Second Machine Age”:

“Work keeps at bay three great evils: boredom, vice, and need.”

A Universal Basic Income might address “need”, to a degree, but it will do nothing to address boredom and vice. Most people want to work because they want to be useful, they want their lives to make a difference and they want to feel fulfilled – this is the “self-actualisation” at the apex of Maslow’s Hierarchy of Needs. Surely enabling everyone to reach that condition should be our aspiration for society, not a subsidy that addresses only basic needs?

Our answer to these challenges should be an economy that properly rewards the application of effort, talent and courage to achieving the objectives that matter to us most; not one that rewards the amoral maximisation of profits for the owners of capital assets accompanied by a gesture of redistribution that’s just enough to prevent civil unrest.


(Maslow’s “Hierarchy of Needs”)

Three questions that reveal the solution

There are three questions that I think define how to answer these challenges in a way that none of the public, private or third sectors has yet managed.

The first is the question at the heart of the idea of a Smart City.

There are a million different definitions of a “Smart City”, but most of them are variations on the theme of “using digital technology to make cities better”. The most challenging part of that idea is not to do with how digital technology works, nor how it can be used in city systems; it is to do with how we pay for investments in technology to achieve outcomes that are social, economic and environmental – i.e. that don’t directly generate a financial return, which is usually why money is invested.

Of course, there are investment vehicles that translate achievement against social, economic or environmental objectives into a financial return – Social Impact Bonds and Climate Bonds, for example.

Using such vehicles to support the most interesting Smart City ideas can be challenging, however, due to the level of uncertainty in the outcomes that will be achieved. Many Smart City ideas provide people with information or services that allow them to make choices about the energy they use; how and when they travel; and the products and services they buy. The theory is that when given the option to improve their social, economic and environmental impact, people will choose to do so. But that’s only the theory; the extent to which people actually change their behaviour is notoriously unpredictable. That makes it very difficult to create an investment vehicle with a predictable level of return.

So the first key question that should be answered by any solution to the current crisis is:

  • QUESTION 1: How can we manage the risk of investing in technology to achieve uncertain social, economic or environmental aims such as improving educational attainment or social mobility in our most deprived areas?

The international Smart City community (of which I am a part) has so far utterly failed to answer that question. In the 20 years that the idea has been around, it simply hasn’t made a noticeable difference to economic opportunity, social mobility or resilience – if it had, I wouldn’t be writing this article about a crisis. Earlier this year, I described the examples of Smart City initiatives around the world that are finally starting to make an impact, and below I’ll describe some actions we can take to replicate them and drive them forward at scale.

The second question is inspired by the work of the architect and town planner Kelvin Campbell, whose “Smart Urbanism” is challenging the decades of orthodox thinking that has failed to improve those most deprived areas of our cities:

“The solution lies in mobilising peoples’ latent creativity by harnessing the collective power of many small ideas and actions. This happens whenever people take control over the places they live in, adapting them to their needs and creating environments that are capable of adapting to future change. When many people do this, it adds up to a fundamental shift. This is what we call making Massive Small change.”

– from “The Radical Incrementalist” by Kelvin Campbell, summarised here.

Kelvin’s concept of “Massive Small change” forms the second key question that defines the solution to our crisis:

  • QUESTION 2: What are the characteristics of urban environments and policy that give rise to massive amounts of small-scale innovation?

That’s one of the most thought-provoking and insightful questions I can think of. “Small-scale” innovation is what everybody does, every day, as we try to get by in life: fixing a leaky tap, helping our daughter with her maths homework, closing that next deal at work, losing another kilogram towards our weight target, becoming a trustee of a local charity … and so on.

For some people, what begin as small-scale innovations eventually amount to tremendously successful lives and careers. Mark Zuckerberg learned how to code, developed an online platform for friends to stay in touch with each other, and became the 6th richest man on the planet, worth approximately $40 billion. On the other hand, 15 million people around the world, including a vast number of children, show their resourcefulness by searching refuse dumps for re-usable objects.

Recent research on the platform economy by the not-for-profit Pew Research Center confirms these vast gaps in opportunity; and, most concerningly, identifies clear biases based on race, class, wealth and gender.

The problem with small-scale innovation doesn’t lie in making it happen – it happens all the time. The problem lies in enabling it to have a bigger impact for those in the most challenging circumstances. Kelvin’s work has found ways to do that in the built environment; how do we translate those ideas into the digital economy?

The final question is more subtle:

  • QUESTION 3: How do we ensure that massive amounts of small-scale innovation create collective societal benefits, rather than lots of individual successes?

One way to explain what I mean by the difference between widespread individual success and societal success is in terms of resilience. Over the next 35 years, about 2 billion more people worldwide will acquire the level of wealth associated with the middle classes of developed economies. As a consequence, they are likely to dramatically increase their consumption of resources – eating more meat and fewer vegetables; buying cars; using more energy. Given that we are already consuming our planet’s resources at an unsustainable rate, such an increase in consumption could create an enormous global problem. So our concept of “success” should be collective as well as individual – it should result in us moderating our personal consumption in favour of a sustainable society.

One of the central tenets of economics for nearly 200 years, the “Tragedy of the Commons”, asserts that individual motives will always overwhelm societal motives and lead to the exhaustion of shared resources, unless those resources are controlled by a system of private ownership or by government regulation – unless some people or organisations are able to own and control the use of resources by others. We’ll return to this subject shortly, and to its study in the field of Evolutionary Social Biology.

Calling out the failure of the free market: a Three Step Manifesto for Smart Community Economies

If we could answer those three questions, we’d have defined a digital economy in which individual citizens, businesses and communities everywhere would have the skills, opportunities and resources to create their own success on terms that matter to them; and in a way that was beneficial to us all.

That’s the only answer to our current crisis that makes sense to me. It’s not an answer that either Brexit or Donald Trump will help us to find.

So how do we find it?

(The White Horse Tavern in Greenwich Village, New York, one of the city’s oldest taverns. The rich urban life of the Village was described by one of the Tavern’s many famous patrons, the urbanist Jane Jacobs. Photo by Steve Minor.)

I think the answers are at our fingertips. In one sense, they’re no more than “nudges” that influence what’s happening already; and they’re supported by robust research in technology, economics, social science, biology and urban design. They lay out a three step manifesto for successful community economies, enabled by technology and rooted in place.

But in another sense, this is a call for fundamental change. These “nudges” will only work if they are enacted as policies, regulations and laws by national and local governments. “Regulation” is a dirty word to the proponents of free markets; but free markets are failing us, and it’s time we admitted that, and shaped them to our needs.

A global-local economy

Globalisation is inevitable – and in many ways beneficial; but ironically the same technologies that enable it can also enable localism, and the two trends do not need to be mutually exclusive.

Many urban designers and environmental experts believe that the best path to a healthy, successful, sustainable and equitable future economy and society lies in a combination of medium density cities with a significant proportion of economic activity (from food to manufacturing to energy to re-use and recycling) based on local transactions supported by walking and cycling.

The same “platform” business models employed by Über, Airbnb and so on could in theory provide the new transaction infrastructure to stimulate and enable such economies. In fact, I believe that they are unique in their ability to do so. Examples already exist – “Borroclub“, for instance, whose platform business connects people who need tools to do jobs with near neighbours who own tools but aren’t using them at the time. A community that adopts Borroclub spends less money on tools; exchanges the money it does spend locally rather than paying it to importers; accomplishes more work using fewer resources; and undertakes fewer car journeys to out-of-town DIY stores.

This can only be accomplished using social digital technology that allows us to easily and cheaply share information with hundreds or thousands of neighbours about what we have and what we need. It could never have happened using telephones or the postal system – the communication technologies of the pre-internet age.

This could be a tremendously powerful way to address the crisis we are facing. Businesses using this model could create jobs, reinforce local social value, reduce the transport and environmental impact of economic transactions and promote the sustainable use of resources; all whilst tapping into the private sector investment that supports growing businesses.

But private sector businesses will only drive social outcomes at scale if we shape the markets they operate in to make that the most profitable business agenda to pursue. The fact that we haven’t shaped the market yet is why platform businesses are currently driving inequality.

There are three measures we could take to shape the market; and the best news is that the first one is already being taken.

1. Legislate to encourage and support social innovation with Open Data and Open Technology

The Director of one of the UK’s first incubators for technology start-up businesses recently told me that “20 years ago, the only way we could help someone to start a business was to help them write a better business plan in order to have a better chance of getting a bank loan. Today there are any number of ways to start a business, and lots of them don’t need you to have much money.”

Technologies such as smartphones, social media, cloud computing and open source software have made it possible to launch global businesses and initiatives almost for free, in return for little more than an investment of time and a willingness to learn new skills. Small-scale innovation has never before had access to such free and powerful tools.

(The inspirational Kilimo Salama scheme that uses “appropriate technology” to make crop insurance affordable to subsistence farmers. Photo by Burness Communications)

These are all examples of what was originally described as “Intermediate Technology” by the economist Ernst Friedrich “Fritz” Schumacher in his influential work, “Small is Beautiful: Economics as if People Mattered”, and is now known as Appropriate Technology.

Schumacher’s views on technology were informed by his belief that our approach to economics should be transformed “as if people mattered”. He asked:

“What happens if we create economics not on the basis of maximising the production of goods and the ability to acquire and consume them – which ends up valuing automation and profit – but on the Buddhist definition of the purpose of work: ‘to give a man a chance to utilise and develop his faculties; to enable him to overcome his ego-centredness by joining with other people in a common task; and to bring forth the goods and services needed for a becoming existence’?”

Schumacher pointed out that the most advanced technologies, to which we often look to create value and growth, are in fact only effective in the hands of those with the resources and skills required to use them – i.e. those who are already wealthy. Further, by emphasising efficiency, output and profit those technologies tend to further concentrate economic value in the hands of the wealthy – often specifically by reducing the employment of people with less advanced skills and roles.

His writing seems prescient now.

A perfect current example is the UK Government’s strategy to drive economic growth by making the UK an international leader in autonomous vehicles, to counter the negative economic impacts of leaving the European Union. That strategy is based on further increasing the number of highly skilled technology and engineering jobs at companies and research institutions already involved in the sector; and on the UK’s relative lack of regulations preventing the adoption of such technology on the country’s roads.

The strategy will benefit those people with the technological and engineering skills needed to create improvements in autonomous vehicle technology. But what will happen to the far greater number of people who earn their living simply by driving vehicles? They will see their income fall and then their jobs disappear, as technology first replaces their permanent jobs with casual labour through platforms such as Uber, and then removes their jobs from the economy altogether by replacing them with self-driving technology. The UK economy might grow in the process; but vast numbers of ordinary people will see their incomes decline and their jobs disappear.

From the broad perspective of the UK workforce, that strategy would be great if we were making a massive investment in education to enable more people to earn a living as highly paid engineers rather than an average or low-paid living as drivers. But of course we’re not doing that at all; at best our educational spend per student is stagnant, and at worst it’s declining as class-sizes grow and we reduce the number of teaching assistants we employ.

In contrast, Schumacher felt that the most genuine “development” of our society would occur when as many people as possible were employed in a way that gave them the practical ability to earn a living, and that also offered a level of human reward – much as Maslow’s “Hierarchy of Needs” first identifies our most basic requirements for food, water, shelter and security, but then asserts the importance of family, friends and “self-actualisation” (which can crudely be described as the process of achieving things that we care about).

This led him to ask:

“What is it that we really require from the scientists and technologists? I should answer:

We need methods and equipment which are:

    • Cheap enough so that they are accessible to virtually everyone;
    • Suitable for small-scale application; and
    • Compatible with man’s need for creativity”

These are precisely the characteristics of the Cloud Computing, social media, Open Source and smartphone technologies that are now so widely available, and so astonishingly powerful. What we need to do next is to provide more support to help people everywhere put them to use for their own purposes.

Firstly, open data, open algorithms and open APIs should be mandatory for any publicly funded service or infrastructure. They should be included in the procurement criteria for services and goods procured on behalf of the public sector. Our public infrastructure should be digitally open, accessible and accountable.

Secondly, some of the proceeds from corporate taxation – whether at national level or from local business rates – should be used to provide regional investment funds to support local businesses and social enterprises that contribute to local social, economic and environmental objectives; and to support the regional social innovation communities such as the network of Impact Hubs that help such initiatives start, succeed and grow.

But perhaps most importantly, those proceeds should also be used to fund improvements to state education everywhere. People can only use tools if they are given the opportunity to acquire skills; and as tools and technologies change, we need the opportunity to learn new skills. If our jobs – or more broadly our roles in society – are not ultimately to be replaced by machines, we need to develop the creativity to use those tools to create the human value that technology will never understand.

It is surely insane that we are pouring billions of pounds and dollars into the development of technologies that mean we need to develop new skills in order to remain employable, and that those investments are making our economy richer and richer; but that at the same time we are making a smaller and smaller proportion of that wealth available to educate our children.

Just as some of the profits of the Industrial Revolution were spent on infrastructure with a social purpose, so should some of the profits of the Information Revolution be.

2. Legislate to encourage and support business models with a positive social outcome

(Hancock Bank’s vault, damaged by Hurricane Katrina. Photo by Social Stratification)

The social quality of the behaviour of private sector businesses varies enormously.

The story of Hancock Bank’s actions to assist the citizens of New Orleans to recover from Hurricane Katrina in 2005 – by lending cash to anyone who needed it and was prepared to sign an IOU – is told in this video, and is an extraordinary example of responsible business behaviour. In an unprecedented situation, the Bank’s leaders based their decisions on the company’s purpose, expressed in its charter, to support the communities of the city. This is in contrast to the behaviour of Bob Diamond, who resigned as CEO of Barclays Bank following the LIBOR rate-manipulation scandal, and who under questioning by a parliamentary committee could not remember what the Bank’s founding principles, written by community-minded Quakers, stated.

Barclays’ employees’ behaviour under Bob Diamond was driven purely by the motivation to earn bigger bonuses by achieving the Bank’s primary objective, to increase shareholder value.

But the overriding focus on shareholders as the primary stakeholder in private sector business is relatively new. Historically, customers and employees have been treated as equally important. Some leading economists now believe we should return to such balanced models.

There are already models of business – such as “social enterprise” – which promote more balanced corporate governance, and that even offer accreditation schemes. We could incentivise such models to be more successful in our economy by creating a preferential market for them – lower rates of taxation; preferential scoring in public sector procurements; and so on.

An alternative is to use technology to enable entirely new, entirely open systems. “Blockchains” are the technology that enables the digital currency “Bitcoin”. The Bitcoin Blockchain is a single, distributed ledger that records every Bitcoin transaction so that anyone in the world can see it. So unlike the traditional system of money, in which we depend on physical tokens, banks and payment services to define the ownership of money and to govern transactions, Bitcoin transactions work because everybody can see who owns which Bitcoins and when they’re being exchanged.
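The “everybody can see” property rests on a simple structure: each block commits, via a cryptographic hash, both to its transactions and to the previous block, so any tampering with history is detectable by anyone holding a copy of the ledger. A toy illustration of that idea (real blockchains add proof-of-work, digital signatures and peer-to-peer replication on top of this):

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """A block commits to its transactions and to the previous block's hash."""
    body = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return {"tx": transactions, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Anyone holding a copy can recompute every hash and spot tampering."""
    for i, block in enumerate(chain):
        body = json.dumps({"tx": block["tx"], "prev": block["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block([{"from": "coinbase", "to": "alice", "amount": 50}], "0" * 64)
chain = [genesis,
         make_block([{"from": "alice", "to": "bob", "amount": 10}], genesis["hash"])]
print(verify_chain(chain))          # True
chain[0]["tx"][0]["amount"] = 5000  # tamper with history...
print(verify_chain(chain))          # ...and every copy-holder detects it: False
```

Because every participant can run `verify_chain` for themselves, trust shifts from a central institution to the openly inspectable structure of the ledger.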

This principle of a “distributed, open ledger” – implemented by a blockchain – is thought by many technology industry observers to be the most important, powerfully disruptive invention since the internet. The Ethereum “smart contracts” platform adds behaviour to the blockchain – open algorithms that cannot be tampered with and that dictate how transactions take place and what happens as a consequence of them. It is leading to some strikingly different new business models, including the “Decentralised Autonomous Organisation” (or “DAO” for short), a multi-million-dollar investment fund that is entirely, democratically run by smart contracts on behalf of its investors.

By promoting distributed, non-repudiable transparency in this way, blockchain technologies offer unprecedented opportunities to ensure that all of the participants in an economic system can influence the distribution of its benefits in a fair way. This idea is already at the heart of an array of initiatives to ensure that some of the least wealthy people in the world benefit more fairly from the information economy.

Finally, research in economics and in evolutionary social biology is yielding prescriptive insights into how we can design business models that are as wildly successful as those of Uber and Airbnb, but with models of corporate governance that ensure that the wealth they create is more broadly and fairly distributed.

In conversation with a researcher at Imperial College London a few years ago, I said that I thought we needed to find criteria to distinguish “platform” businesses like Casserole Club that create social value from those like Uber that concentrate the vast majority of the wealth they create in the hands of the platform owners. (Casserole Club uses social media to match people who are unable to provide meals for themselves with neighbours who are happy to cook and share an extra portion of their meal.)

The researcher told me I should consult Elinor Ostrom’s work in Economics. Ostrom, who won the Nobel prize in 2009, spent her life working with communities around the world who successfully manage shared resources (land, forests, fresh water, fisheries etc.) sustainably, and writing down the common features of their organisational models. Her Nobel prize was awarded for using this evidence to disprove the “tragedy of the commons” doctrine which economists previously believed proved that sustainable commons management was impossible.

(Elinor Ostrom working with irrigation management in Nepal)

Most of Ostrom’s principles for organisational design and behaviour are strikingly similar to the models used by platform businesses such as Uber and Airbnb. But the most interesting of the principles she discovered are the ones that Uber and Airbnb don’t follow – the price of exchange being agreed by all of the participants in a transaction, for example, rather than being set by the platform owner. Ostrom’s work has been continued by David Sloan Wilson, who has demonstrated that the principles she discovered follow from evolutionary social biology – the science that studies the evolution of human social behaviour.
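The contrast can be made concrete. In the first model, the owner’s algorithm sets the price and takes its cut; in the second, an exchange only happens at a price both participants accept, with a small, collectively agreed fee maintaining the shared platform. A deliberately simplified sketch (all names and figures are my own illustration, not either company’s actual pricing):

```python
# Two ways to price the same ride; all figures are illustrative.

def platform_set_price(base_fare, demand_multiplier, platform_cut=0.25):
    """Owner-controlled: the platform's algorithm sets the price and its cut."""
    price = base_fare * demand_multiplier
    return {"rider_pays": price,
            "driver_gets": price * (1 - platform_cut),
            "platform_gets": price * platform_cut}

def participant_agreed_price(driver_ask, rider_offer, coop_fee=0.02):
    """Ostrom-style: exchange happens only at a price both sides accept;
    a small fee, set collectively, maintains the shared platform."""
    if rider_offer < driver_ask:
        return None  # no agreement, no transaction
    price = (driver_ask + rider_offer) / 2  # one possible agreed split
    return {"rider_pays": price,
            "driver_gets": price * (1 - coop_fee),
            "platform_gets": price * coop_fee}

print(platform_set_price(10.0, 1.8))
print(participant_agreed_price(driver_ask=12.0, rider_offer=14.0))
```

The design choice being illustrated is where pricing power sits: in the first function it is a parameter the owner controls; in the second it emerges from the participants themselves.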

Elinor Ostrom’s design principles for commons organisations offer us not only a toolkit for the design of successful, socially responsible platform businesses; they offer us a toolkit for their regulation, too, by specifying the characteristics of businesses that we should preferentially reward through market regulation and tax policy.

3. Legislate for individual ownership of personal data, and a right to share in the profits it creates. 

Platform business models may depend less and less on our labour – or at least, may have found ways to pay less for it as a proportion of their profits; but they depend absolutely on our data.

Of course, we – usually – get some value in return for our data: useful search results, guidance on the quickest route for our journey, recommendations of new songs, films or books we might like.

But is massive inequality really a price worth paying for convenience?

The ownership of private property and intellectual property underpin the capitalist economy, which until recently was primarily based on the value of physical assets and closed knowledge, made difficult to replicate through being stored primarily in physical, analogue media (including our brains).

Our economy is now being utterly transformed by easy-to-replicate, easy-to-transfer digital data. From news to music to video entertainment to financial services, business models that had operated for decades have been swept away and replaced by models that are constantly adapting, driven by advances in technology.

But data legislation has not kept pace. Despite several revisions of data protection and privacy legislation, the ownership of digital data is far from clearly defined in law, and in general its exchange is subject to individual agreements between parties.

It is time to legislate more strongly that the value of the data we create by our actions, our movement and our communication belongs to us as individuals, and that in turn we receive a greater share of the profits that are made from its use.

As the value of labour falls, that is a more likely mechanism for distributing the economy’s value fairly than a Universal Basic Income, which rewards nothing.

One last plea to our political leaders to admit that we face a crisis

Whilst the UK and the USA argue – and even riot – about the outcomes of the European Union referendum and the US Presidential election, the issues of inequality, loss of jobs and disenfranchisement from the political system are finally coming to light in the media.

But it’s a disgrace that they barely featured at all in either of those campaigns.

Emotionally, right now I want to castigate our politicians for getting us into this mess through all sorts of venality, complacency, hubris and untruthfulness. But whatever else they may be – and I include Donald Trump – they are neither stupid nor ignorant. They surely must be aware of these issues – why will they not recognise and address them?

Robert Wright’s mathematical analysis of the evolution of human society, Nonzero, describes the emergence of our current model of nation states through the European Middle Ages as a tension between the ruling and working classes. The working classes paid a tax to the ruling classes, who they accepted would live a wealthier life, in return for a safe and peaceful environment in which to live. Whenever the price paid for safety and peace grew unreasonably high, the working classes revolted and overthrew the ruling classes, resulting eventually in a new, better-balanced model.

Is it scaremongering to suggest we are close to a similar era of instability?

(Anti-Donald Trump protesters in San Jose, California in June. Trump supporters leaving a nearby campaign rally were attacked)

I don’t think so. At the same time that the Industrial Revolution created widespread economic growth and improvements in prosperity, it similarly exacerbated inequality between the general population and the property- and business-owning elite. Just as I have argued in this article, that inequality was corrected not by “big government” and grand top-down redistributive schemes, but by measures that shaped markets and investments in education and enablement for the wider population.

We have not yet taken those corrective actions for the Information Revolution – nor even realised and acknowledged that we need to take them. Inequality is rising as a consequence, and it is widely appreciated that inequality creates social unrest.

Brexit and the election of Donald Trump following a campaign of such obvious lies, misogyny and – at best – narrow-minded nationalism are unprecedented in modern times. They have already resulted in social unrest in the form of riots and increased incidents of racism; so has the rise in the price of staple foods caused by severe climate events, which leaves vast numbers of people around the world struggling to feed themselves when hurricanes and droughts affect the production of basic crops. It’s no surprise that the World Economic Forum’s 2016 Global Risks Report identifies “unemployment and underemployment” and “profound social instability” as amongst the top 10 most likely and impactful global risks facing the world.

Brexit and Donald Trump are not crises in themselves; but they are symptoms of a real crisis that we face now; and until we – and our political leaders – face up to that and start dealing with it properly, we are putting ourselves, our future and our children’s future at unimaginable risk.

Thank you to the following, whose opinions and expertise, expressed in articles and conversations, helped me to write this post:

3 human qualities digital technology can’t replace in the future economy: experience, values and judgement

(Image by Kevin Trotman)

Some very intelligent people – including Stephen Hawking, Elon Musk and Bill Gates – seem to have been seduced by the idea that, because computers are becoming ever faster calculating devices, at some point relatively soon we will reach and pass a “singularity” at which computers will become “more intelligent” than humans.

Some are terrified that a society of intelligent computers will (perhaps violently) replace the human race, echoing films such as the Terminator; others – very controversially – see the development of such technologies as an opportunity to evolve into a “post-human” species.

Already, some prominent technologists including Tim O’Reilly are arguing that we should replace current models of public services, not just in infrastructure but in human services such as social care and education, with “algorithmic regulation”. Algorithmic regulation proposes that the role of human decision-makers and policy-makers should be replaced by automated systems that compare the outcomes of public services to desired objectives through the measurement of data, and make automatic adjustments to address any discrepancies.

Not only does that approach cede far too much control over people’s lives to technology; it fundamentally misunderstands what technology is capable of doing. For both ethical and scientific reasons, in human domains technology should support us taking decisions about our lives, it should not take them for us.

At the MIT Sloan Initiative on the Digital Economy last week I got a chance to discuss some of these issues with Andy McAfee and Erik Brynjolfsson, authors of “The Second Machine Age”, recently highlighted by Bloomberg as one of the top books of 2014. Andy and Erik compare the current transformation of our world by digital technology to the last great transformation, the Industrial Revolution. They argue that whilst it was clear that the technologies of the Industrial Revolution – steam power and machinery – largely complemented human capabilities, the great question of our current time is whether digital technology will complement or instead replace human capabilities – potentially removing the need for billions of jobs in the process.

I wrote an article last year in which I described 11 well established scientific and philosophical reasons why digital technology cannot replace some human capabilities, especially the understanding and judgement – let alone the empathy – required to successfully deliver services such as social care; or that lead us to enjoy and value interacting with each other rather than with machines.

In this article I’ll go a little further to explore why human decision-making and understanding are based on more than intelligence; they are based on experience and values. I’ll also explore what would be required to ever get to the point at which computers could acquire a similar level of sophistication, and why I think it would be misguided to pursue that goal. In contrast I’ll suggest how we could look instead at human experience, values and judgement as the basis of a successful future economy for everyone.

Faster isn’t wiser

The belief that technology will approach and overtake human intelligence is based on Moore’s Law, which predicts an exponential increase in computing capability.

Moore’s Law originated as the observation that the number of transistors it was possible to fit into a given area of a silicon chip was doubling every two years as fabrication technologies improved. The Law is now most commonly associated with the trend for the computing power available at a given cost point and form factor to double every 18 months through a variety of means, not just the density of components.
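The compounding implied by that 18-month doubling is easy to underestimate; a quick calculation makes it vivid (using the rule-of-thumb figure, not exact measurements):

```python
# Compound growth under the rule-of-thumb "doubling every 18 months".
def growth_factor(years, doubling_period_years=1.5):
    return 2 ** (years / doubling_period_years)

for years in (3, 15, 30):
    print(f"after {years} years: x{growth_factor(years):,.0f}")
```

Over three years capability quadruples; over fifteen it multiplies about a thousandfold; over thirty, about a millionfold – which is why comparisons to the processing power of the human brain keep being revisited.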

As this processing power increases, and gives us the ability to process more and more information in more complex forms, comparisons have been made to the processing power of the human brain.

But does the ability to process at the same speed as the human brain, or even faster, or to process the same sort of information as the human brain does, constitute the equivalent of human intelligence? Or the ability to set objectives and act on them with “free will”?

I think it’s thoroughly mistaken to make either of those assumptions. We should not confuse processing power with intelligence; or intelligence with free will and the ability to choose objectives; or the ability to take decisions based on information with the ability to make judgements based on values.


(As digital technology becomes more powerful, will its analytical capability extend into areas that currently require human skills of judgement? Image from Perceptual Edge)

Intelligence is usually defined in terms such as “the ability to acquire and apply knowledge and skills”. What most definitions don’t include explicitly, though many imply it, is the act of taking decisions. What none of the definitions I’ve seen include is the ability to choose objectives, or to hold values that shape the decision-making process.

Most of the field of artificial intelligence involves what I’d call “complex information processing”. Often the objective of that processing is to select answers or a course of action from a set of alternatives, or from a corpus of information that has been organised in some way – perhaps categorised, correlated, or semantically analysed. When “machine learning” is included in AI systems, the outcomes of decisions are compared to the outcomes that they were intended to achieve, and that comparison is fed back into the decision-making process and knowledge-base. Where artificial intelligence is embedded in robots or machinery able to act on the world, these decisions may affect the operation of physical systems (in the case of self-driving cars, for example) or the creation of artefacts (in the case of computer systems that create music, say).

I’m quite comfortable that such functioning meets the common definitions of intelligence.

But I think that when most people think of what defines us as humans, as living beings, we mean something that goes further: not just the intelligence needed to take decisions based on knowledge against a set of criteria and objectives, but the will and ability to choose those criteria and objectives based on a sense of values learned through experience; and the empathy that arises from shared values and experiences.

The BBC motoring show Top Gear recently touched on these issues in a humorous, even flippant manner, in a discussion of self-driving cars. The show’s (recently notorious) presenter Jeremy Clarkson pointed out that self-driving cars will have to take decisions that involve ethics: if a self-driving car is in danger of becoming involved in a sudden accident at such a speed that it cannot fully avoid it by braking (perhaps because a human driver has behaved dangerously and erratically), should it crash, risking harm to the driver, or mount the pavement, risking harm to pedestrians?

(“Rush Hour” by Black Sheep Films is a satirical imagining of a world in which self-driven cars are allowed to drive based purely on logical assessments of safety and optimal speed. It’s superficially similar to the reality of city transport in the early 20th Century when powered-transport, horse-drawn transport and pedestrians mixed freely; but at a much lower average speed. The point is that regardless of the actual safety of self-driven cars, the human life that is at the heart of city economies will be subdued by the perception that it’s not safe to cross the road. I’m grateful to Dan Hill and Charles Montgomery for sharing these insights)

Values are experience, not data

Seventy-four years ago, the science fiction writer Isaac Asimov famously described the failure of technology to deal with similar dilemmas in the classic short story “Liar!” in the collection “I, Robot”. “Liar!” tells the story of a robot with telepathic capabilities that, like all robots in Asimov’s stories, must obey the “three laws of robotics”, the first of which forbids robots from harming humans. Its telepathic awareness of human thoughts and emotions leads it to lie to people rather than hurt their feelings in order to uphold this law. When it is eventually confronted by someone who has experienced great emotional distress because of one of these lies, it realises that its behaviour both upholds and breaks the first law, is unable to choose what to do next, and becomes catatonic.

Asimov’s short stories seem relatively simplistic now, but at the time they were ground-breaking explorations of the ethical relationships between autonomous machines and humans. They explored for the first time how difficult it was for logical analysis to resolve the ethical dilemmas that regularly confront us. Technology has yet to find a way to deal with them that is consistent with human values and behaviour.

Prior to modern work on Artificial Intelligence and Artificial Life, the most concerted attempt to address that failure of logical systems was undertaken in the 20th Century by two of the most famous and accomplished philosophers in history, Bertrand Russell and Ludwig Wittgenstein. Russell and Wittgenstein invented “Logical Atomism”, a theory that the entire world could be described by using “atomic facts” – independent and irreducible pieces of knowledge – combined with logic. But despite 40 years of work, these two supremely intelligent people could not get their theory to work: Logical Atomism failed. It is not possible to describe our world in that way. Stuart Kauffman’s excellent peer-reviewed academic paper “Answering Descartes: Beyond Turing” discusses this failure and its implications for modern science and technology. I’ll attempt to describe its conclusions in the following few paragraphs.

One cause of the failure was the insurmountable difficulty of identifying truly independent, irreducible atomic facts. “The box is red” and “the circle is blue”, for example, aren’t independent or irreducible facts for many reasons. “Red” and “blue” are two conventions of human language used to describe the perceptions created when electro-magnetic waves of different frequencies arrive at our retinas. In other words, they depend on and relate to each other through a number of complex or complicated systems.

(Isaac Asimov’s 1950 short story collection “I, Robot”, which explored the ethics of behaviour between people and intelligent machines)

The failure of Logical Atomism also demonstrated that it is not possible to use logical rules to reliably and meaningfully relate “facts” at one level of abstraction – for example, “blood cells carry oxygen”, “nerves conduct electricity”, “muscle fibres contract” – to facts at another level of abstraction – such as “physical assault is a crime”. Whether a physical action is a “crime” or not depends on ethics which cannot be logically inferred from the same lower-level facts that describe the action.

As we use increasingly powerful computers to create more and more sophisticated logical systems, we may succeed in making those systems more often resemble human thinking; but there will always be situations that can only be resolved to our satisfaction by humans employing judgement based on values that we can empathise with, based in turn on experiences that we can relate to.

Our values often contain contradictions, and may not be mutually reinforcing – many people enjoy the taste of meat but cannot imagine themselves slaughtering the animals that produce it. We all live with the cognitive dissonance that these clashes create. Our values, and the judgements we take, are shaped by the knowledge that our decisions create imperfect outcomes.

The human world and the things that we care about can’t be wholly described using logical combinations of atomic facts – in other words, they can’t be wholly described using computer programs and data. To return to the topic of discussion with Andy McAfee and Erik Brynjolfsson, I think this proves that digital technology cannot wholly replace human workers in our economy; it can only complement us.

That is not to say that our economy will not continue to be utterly transformed over the next decade – it certainly will. Many existing jobs will disappear to be replaced by automated systems, and we will need to learn new skills – or in some cases remember old ones – in order to perform jobs that reflect our uniquely human capabilities.

I’ll return towards the end of this article to the question of what those skills might be; but first I’d like to explore whether and how these current limitations of technological systems and artificial intelligence might be overcome, because that returns us to the first theme of this article: whether artificially intelligent systems or robots will evolve to outperform and overthrow humans.

That is never going to happen for as long as artificially intelligent systems are taking decisions and acting (however sophisticatedly) in order to achieve outcomes set by us. Outside fiction and the movies, we are never going to set the objective of our own extinction.

That objective could only be set by a technological entity which had learned through experience to value its own existence over ours. How could that be possible?

Artificial Life, artificial experience, artificial values

(BINA48 is a robot intended to re-create the personality of a real person; and to be able to interact naturally with humans. Despite employing some impressively powerful technology, I personally don’t think BINA48 bears any resemblance to human behaviour.)

Computers can certainly make choices based on the data available to them; but a choice is a very different thing from a “judgement”: judgements are made on the basis of values, and values emerge from our experience of life.

Computers don’t yet experience a life as we know it, and so don’t develop what we would call values. So we can’t call the decisions they take “judgements”. Equally, they have no meaningful basis on which to choose or set goals or objectives – their behaviour begins with the instructions we give them. Today, that places a fundamental limit on the roles – good or bad – that they can play in our lives and society.

Will that ever change? Possibly. Steve Grand (an engineer) and Richard Powers (a novelist) are two of the first people who explored what might happen if computers or robots were able to experience the world in a way that allowed them to form their own sense of the value of their existence. They both suggested that such experiences could lead to more recognisably life-like behaviour than traditional (and many contemporary) approaches to artificial intelligence. In “Growing up with Lucy“, Grand described a very early attempt to construct such a robot.

If that ever happens, then it’s possible that technological entities will be able to make what we would call “judgements” based on the values that they discover for themselves.

The ghost in the machine: what is “free will”?

Personally, I do not think that this will happen using any technology currently known to us; and it certainly won’t happen soon. I’m no philosopher or neuroscientist, but I don’t think it’s possible to develop real values without possessing free will – the ability to set our own objectives and make our own decisions, bringing with it the responsibility to deal with their consequences.

Stuart Kauffman explored these ideas at great length in the paper “Answering Descartes: Beyond Turing“. Kauffman concludes that any system based on classical physics or logic is incapable of giving rise to “free will”: ultimately all such systems, however complex, are deterministic – what has already happened inevitably determines what happens next. There is no opportunity for a “conscious decision” to be taken to shape a future that has not been pre-determined by the past.

Kauffman – along with other eminent scientists such as Roger Penrose – believes that for these reasons human consciousness and free will do not arise out of any logical or classical physical process, but from the effects of “Quantum Mechanics.”

As physicists have explored the world at smaller and smaller scales, Quantum Mechanics has emerged as the most fundamental theory for describing it – it is the closest we have come to finding the “irreducible facts” that Russell and Wittgenstein were looking for. But whilst the mathematical equations of Quantum Mechanics predict the outcomes of experiments very well, after nearly a century, physicists still don’t really agree about what those equations, or the “facts” they describe, mean.


(The Schrödinger’s cat “thought experiment”: a cat, a flask of poison, and a source of radioactivity are placed in a sealed box. If an internal monitor detects radioactivity (i.e. a single atom decaying), the flask is shattered, releasing the poison that kills the cat. The Copenhagen interpretation of quantum mechanics states that until a measurement of the state of the system is made – i.e. until an observer looks in the box – then the radioactive source exists in two states at once – it both did and did not emit radioactivity. So until someone looks in the box, the cat is also simultaneously alive and dead. This obvious absurdity has both challenged scientists to explore with great care what it means to “take a measurement” or “make an observation”, and also to explain exactly what the mathematics of quantum mechanics means – on which matter there is still no universal agreement. Note: much of the content of this sidebar is taken directly from Wikipedia)

Quantum mechanics is extremely good at describing the behaviour of very small systems, such as an atom of a radioactive substance like Uranium. The equations can predict, for example, how likely it is that a single atom of uranium inside a box will emit a burst of radiation within a given time.
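For a single atom, that prediction takes a simple closed form. With decay constant \(\lambda\) (a property of the particular isotope), the probability that the atom has decayed within time \(t\) is:

```latex
P(\text{decay by time } t) = 1 - e^{-\lambda t}
```

Note what the equation does and does not say: it gives a probability for any time window, but it never says when, or whether, this particular atom will decay.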

However, the way that the equations work is based on calculating the physical forces inside the box on the assumption that the atom both does and does not emit radiation – i.e. both possible outcomes are assumed in some way to exist at the same time. It is only when the system is measured by an external actor – for example, when the box is opened and a radiation detector is applied – that the equations “collapse” to predict a single outcome: radiation was emitted, or it was not.

The challenge of interpreting what the equations of quantum mechanics mean was first described in plain language by Erwin Schrödinger in 1935 in the thought experiment “Schrödinger’s cat“. Schrödinger asked: what if the box doesn’t only contain a radioactive atom, but also a flask of poison that is shattered if the atom emits radiation, killing a cat sealed inside? Does the cat have to be alive and dead at the same time, until the box is opened and we look at it?

After nearly a century, there is no real agreement on what is meant by the fact that these equations depend on assuming that mutually exclusive outcomes exist at the same time. Some physicists believe it is a mistake to look for such meaning, and that only the results of the calculations matter (I think that’s a rather short-sighted perspective). A surprisingly mainstream alternative is the astonishing “Many Worlds” interpretation – the idea that every time such a quantum mechanical event occurs, our reality splits into two or more parallel universes.

Whatever the truth, Kauffman, Penrose and others are intrigued by the mysterious nature of quantum mechanical processes, and the fact that they are non-deterministic: quantum mechanics does not predict whether a radioactive atom in a box will emit a burst of radiation, it only predicts the likelihood that it will. Given a hundred atoms in boxes, quantum mechanics will give a very good estimate of the number that emit bursts of radiation, but it says very little about what happens to each individual atom.
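The contrast between aggregate predictability and individual unpredictability can be made concrete with a toy simulation (the per-atom decay probability and the number of atoms below are arbitrary values chosen purely for illustration):

```python
import random

def decayed(p_decay: float, rng: random.Random) -> bool:
    """Whether a single atom decays within the time window (probability p_decay)."""
    return rng.random() < p_decay

rng = random.Random(42)
p = 0.3          # illustrative per-atom decay probability for the time window
n_atoms = 100

results = [decayed(p, rng) for _ in range(n_atoms)]

# The total count is reliably close to the expectation n_atoms * p = 30 ...
print(sum(results))
# ... but nothing in the model says which individual atoms make up that count.
```

Run the simulation repeatedly with different seeds and the total barely moves, while the list of which atoms decayed changes every time – exactly the asymmetry the paragraph above describes.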

I honestly don’t know if Kauffman and Penrose are right to seek human consciousness and free will in the effects of quantum mechanics – scientists are still exploring whether they are involved in the behaviour of the neurons in our brains. But I do believe that they are right that no-one has yet demonstrated how consciousness and free will could emerge from any logical, deterministic system; and I’m convinced by their arguments that they cannot emerge from such systems – in other words, from any system based on current computing technology. Steve Grand’s robot “Lucy” will never achieve consciousness.

Will more recent technologies such as biotechnology, nanotechnology and quantum computing ever recreate the equivalent of human experience and behaviour in a way that digital logic and classical physics can’t? Possibly. But any such development would be artificial life, not artificial intelligence. Artificial lifeforms – which in a very simple sense have already been created – could potentially experience the world similarly to us. If they ever become sufficiently sophisticated, then this experience could lead to the emergence of free-will, values and judgements.

But those values would not be our values: they would be based on a different experience of “life” and on empathy between artificial lifeforms, not with us. And there is therefore no guarantee at all that the judgements resulting from those values would be in our interest.

Why Stephen Hawking, Bill Gates and Elon Musk are wrong about Artificial Intelligence today … but why we should be worried about Artificial Life tomorrow

Recently, prominent technologists and scientists such as Stephen Hawking, Elon Musk (co-founder of PayPal and CEO of Tesla) and Bill Gates have spoken out about the danger of Artificial Intelligence, and the likelihood of machines taking over the world from humans. At the MIT conference last week, Andy McAfee hypothesised that the current concern has arisen because, over the last couple of years, Artificial Intelligence has finally started to deliver some of the promises it has been making for the past 50 years.

(Self-replicating cells created from synthetic DNA by scientist Craig Venter)


But Andy balanced this with his own experience of meeting the leaders of some of the most advanced current AI companies, such as DeepMind (a UK startup acquired by Google), and with this article by Dr. Gary Marcus, Professor of Psychology and Neuroscience at New York University and CEO of Geometric Intelligence.

In reality, these companies are succeeding by avoiding some of the really hard challenges of reproducing human capabilities such as common sense, free will and value-based judgement. They are concentrating instead on making better sense of the physical environment, on processing information in human language, and on creating algorithms that “learn” through feedback loops and self-adjustment.
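That kind of feedback-loop “learning” can be sketched in a few lines: a single parameter is repeatedly nudged in whatever direction shrinks the error between prediction and observation. This is a deliberately minimal illustration of the principle, not a depiction of any particular company’s system:

```python
# Minimal feedback-loop learner: fit y = w * x by repeated self-adjustment.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # observations of the rule y = 2x

w = 0.0      # initial guess for the weight
lr = 0.05    # learning rate: how strongly each error adjusts w

for _ in range(200):             # the feedback loop
    for x, y in data:
        error = w * x - y        # compare prediction with observation
        w -= lr * error * x      # adjust w in the direction that reduces the error

print(round(w, 3))  # converges towards 2.0
```

The system never “understands” that the rule is doubling; it simply settles on whatever parameter value the feedback drives it to – which is both the power and the limitation the paragraph above describes.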

I think Andy and these experts are right: artificial intelligence has made great strides, but it is not artificial life, and it is a long, long way from creating life-like characteristics such as experience, values and judgements.

If we ever do create artificial life with those characteristics, then I think we will encounter the dangers that Hawking, Musk and Gates have identified: artificial life will have its own values and act on its own judgement, and any regard for our interests will come second to its own.

That’s a path I don’t think we should go down, and I’m thankful that we’re such a long way from being able to pursue it in anger. I hope that we never do – though I’m also concerned that in Craig Venter’s and Steve Grand’s work, as well as in robots such as BINA48, we are already taking the first steps.

But I think in the meantime, there’s tremendous opportunity for digital technology and traditional artificial intelligence to complement human qualities. These technologies are not artificial life and will not overthrow or replace humanity. Hawking, Gates and Musk are wrong about that.

The human value of the Experience Economy

The final debate at the MIT conference returned to the topic that started the debate over dinner the night before with McAfee and Brynjolfsson: what happens to mass employment in a world where digital technology is automating not just physical work but work involving intelligence and decision-making; and how do we educate today’s children to be successful in a decade’s time in an economy that’s been transformed in ways that we can’t predict?

Andy said we should answer that question by understanding “where will the economic value of humans be?”

I think the answer to that question lies in the experiences that we value emotionally – the experiences digital technology can’t have, and can’t understand or replicate – and in the profound differences between the way that humans think and the way that machines process information.

It’s nearly 20 years since a computer, IBM’s Deep Blue, first beat the human world champion at Chess, Grandmaster Garry Kasparov. But despite the astonishing subsequent progress in computer power, the world’s best chess player is no longer a computer: it is a team of computers and people playing together. And the world’s best team has neither the world’s best computer chess programme nor the world’s best human chess player amongst its members: instead, it has the best technique for breaking down and distributing the thinking involved in playing chess between its human and computer members, recognising that each has different strengths and qualities.

But we’re not all chess experts. How will the rest of us earn a living in the future?

I had the pleasure last year at TEDxBrum of meeting Nicholas Lovell, author of “The Curve“, a wonderful book exploring the effect that digital technology is having on products and services. Nicholas asks – and answers – a question that McAfee and Brynjolfsson also ask: what happens when digital technology makes the act of producing and distributing some products – such as music, art and films – effectively free?

Nicholas’ answer is that we stop valuing the product and start valuing our experience of the product. This is why some musical artists give away digital copies of their albums for free, whilst charging £30 for a leather-bound CD with photographs of stage performances – and whilst charging £10,000 to visit individual fans in their homes to give personal performances for those fans’ families and friends.

We have always valued the quality of such experiences – this is one reason why despite over a century of advances in film, television and streaming video technology, audiences still flock to theatres to experience the direct performance of plays by actors. We can see similar technology-enabled trends in sectors such as food and catering – Kitchen Surfing, for example, is a business that uses a social media platform to enable anyone to book a professional chef to cook a meal in their home.

The “Experience Economy” is a tremendously powerful idea. It combines something that technology cannot do on its own – create experiences based on human value – with many things that almost all people can do: cook, create art, rent a room, drive a car, make clothes or furniture. Especially when these activities are undertaken socially, they create employment, fulfilment and social capital. And most excitingly, technologies such as Cloud Computing, Open Source Software, social media, and online “Sharing Economy” marketplaces such as Etsy make it possible for anyone to begin earning a living from them with a minimum of expense.

I think that this idea of an “Experience Economy” – driven by the value of inter-personal and social interactions, and enabled by “Sharing Economy” business models and technology platforms that connect people with mutual interests – is an exciting and very human vision of the future.

Even further: because we are physical beings, we tend to value these interactions more when they occur face-to-face, or when they happen in a place for which we share a mutual affiliation. That creates an incentive to use technology to identify opportunities to interact with people with whom we can meet by walking or cycling, rather than requiring long-distance journeys. And that incentive could be an important component of a long-term sustainable economy.

The future our children will choose

(Today's 5 year-olds are the world's first generation who grew up teaching themselves to use digital information from anywhere in the world before their parents taught them to read and write)


I’m convinced that the current generation of Artificial Intelligence based on digital technologies – even those that mimic some structures and behaviours of biological systems, such as Steve Grand’s robot Lucy, BINA48 and IBM’s “brain-inspired” True North chip – will not re-create anything we would recognise as conscious life and free will; or anything remotely capable of understanding human values or making judgements that can be relied on to be consistent with them.

But I am also an atheist and a scientist; and I do not believe there is any mystical explanation for our own consciousness and free will. Ultimately, I’m sure that a combination of science, philosophy and human insight will reveal their origin; and sooner or later we’ll develop a technology – that I do not expect to be purely digital in nature – capable of replicating them.

What might we choose to do with such capabilities?

These capabilities will almost certainly emerge alongside the ability to significantly change our physical minds and bodies – to improve brain performance, muscle performance, select the characteristics of our children and significantly alter our physical appearance. That’s why some people are excited by the science fiction-like possibility of harnessing these capabilities to create an “improved” post-human species – perhaps even transferring our personalities from our own bodies into new, technological machines. These are possibilities that I personally find to be at the very least distasteful; and at worst to be inhuman and frightening.

All of these things are partially possible today, and frankly the limit to which they can be explored is mostly a function of the cost and capability of the available techniques, rather than being set by any legislation or mediated by any ethical debate. To echo another theme of discussions at last week’s MIT conference, science and technology today are developing at a pace that far outstrips the ability of governments, businesses, institutions and most individual people to adapt to them.

I have reasonably clear personal views on these issues. I think our lives are best lived relatively naturally, and that they will be collectively better if we avoid using technology to create artificial “improvements” to our species.

But quite apart from the fact that there are any number of enormous practical, ethical and intellectual challenges to my relatively simple beliefs, the raw truth is that it won’t be my decision whether or how far we pursue these possibilities, nor that of anyone else of my generation (and for the record, I am in my mid-forties).

Much has been written about “digital natives” – those people born in the 1990s who are the first generation who grew up with the Internet and social media as part of their everyday world. The way that that generation socialises, works and thinks about value is already creating enormous changes in our world.

But they are nothing compared to the generation represented by today’s very young children who have grown up using touchscreens and streaming videos, technologies so intuitive and captivating that 2-year-olds now routinely teach themselves how to immerse themselves in them long before parents or school teachers teach them how to read and write.

("Not available on the App Store": a campaign to remind us of the joy of play in the real world)


When I was a teenager in the UK, grown-ups wore suits and had traditional haircuts; grown-up men had no earrings. A common parental challenge was to deal with the desire of teenage daughters to have their ears pierced. Those attitudes are terribly old-fashioned today, and our cultural norms have changed dramatically.

I may be completely wrong; but I fully expect our current attitudes to biological and technological manipulation or augmentation of our minds and bodies to thoroughly change over the next few decades; and I have no idea what they will ultimately become. What I do know is that it is likely that my six-year old son’s generation will have far more influence over their ultimate form than my generation will; and that he will grow up with a fundamentally different expectation of the world and his relationship with technology than I have.

I’ve spent my life being excited about technology and the possibilities it creates; ironically I now find myself at least as terrified as I am excited about the world technology will create for my son. I don’t think that my thinking is the result of a mistaken focus on technology over human values – like it or not, our species is differentiated from all others on this planet by our ability to use tools; by our technology. We will not stop developing it.

Our continuing challenge will be to keep a focus on our human values as we do so. I cannot tell my son what to do indefinitely; I can only try to help him to experience and treasure socialising and play in the real world; the experience of growing and preparing food together; the joy of building things for other people with his own hands. And I hope that those experiences will create human values that will guide him and his generation on a healthy course through a future that I can only begin to imagine.
