4 ways to get on with building Smart Cities. And the societal failure that stops us using them.

(William Robinson Leigh’s 1908 painting “Visionary City” envisaged future cities constructed from mile-long buildings of hundreds of storeys connected by gas-lit skyways for trams, pedestrians and horse-drawn carriages. A century later we’re starting to realise not only that developments in transport and power technology have eclipsed Leigh’s vision, but that we don’t want to live in cities constructed from buildings on this scale.)

The Smart City refuses to go away

In 2013 Adam Greenfield wrote “Against the Smart City” in criticism of the large-scale corporate- and government-led projects in cities such as Masdar, Songdo and Rio. Those projects had begun to co-opt the original idea of “Smart Communities” – citizens given a more powerful voice in their own governance by Internet communication – into what he saw, and what some still see, as a “top-down” approach to infrastructure and services divorced from the interests of ordinary citizens.

But despite regular reprises of this theme, accompanied by assertions that the Smart City is a misguided idea doomed to die away – notably last year in the UK’s Guardian newspaper – the Smart City has neither been abandoned as mistaken nor faded from prominence, as it would have done by now if it were nothing but a technology buzzword. (Whether they have disappeared entirely or simply become everyday parts of the landscape, ideas that once dominated the technology industry, such as “Service Oriented Architecture”, “Web 2.0” and “e-business”, have risen to prominence and disappeared again within the lifetime of “Smart Cities”.)

Instead, the various industry, community, political, academic and design interests associated with the Smart City idea have gradually learned how to combine the large-scale, intelligent infrastructures needed to support the incredible level and speed of urbanisation around the world with the accessible technologies that allow citizens, communities and businesses to adapt those infrastructures to their own needs and create more successful lives for themselves. As a consequence, new cities and new media organisations are still adding to those already debating the idea – I’ve received invitations to new events in the UK, Ireland, Malaysia, China and the Middle East already this year, and mainstream reputable sources such as the Daily Telegraph, Fortune magazine, the Economist and Forbes have covered the trend.

Yet despite all of this interest from industry and the public sector, the reality is that we still haven’t seen significant investment in those ideas on a sustainable basis.

If you read this blog regularly then you’ll know that I don’t believe that our primary route to funding Smart City initiatives should be the innovation funds provided by bodies such as Innovate UK or programmes such as the European Union’s Horizon 2020. Those are both great vehicles for driving innovation out of research organisations into business and public services; but for any city facing an acute challenge the bidding processes take too long and consume too many resources; the high levels of competition mean there can be a relatively low chance of receiving funds; and projects funded in this way often don’t solve the challenge of paying for the resulting solution on an ongoing basis. Most of the sustainable solutions that result from them are new business products and services: once the initial funded pilot with a local authority has finished, where does the money come from to pay for an ongoing commercial solution?

There are, however, a clear set of routes to securing sustainable investment that the most forward-looking cities have demonstrated. They don’t require cities to attract flagship technology industries to invest in them as proving-grounds for new products and services; they don’t require the inward investment that comes from international sporting and cultural events; and they’re not the preserve of rich or fast-growing capital cities on the international stage.

They do require senior city leaders – Mayors, Council Leaders and their Executive officers – to adopt and drive them; and they also require collaboration and partnership with other city institutions and with private sector suppliers.

And they require bravery, integrity and commitment from those private sector suppliers – such as my employer Amey – to offer new partnerships to our customers. Smart Cities won’t come about through us selling our products and services in transactional exchanges; they’ll come about through new partnerships in which we agree to share not just the responsibility to invest in technology and innovation, but also responsibility for the risks involved in achieving the objectives that cities care about.

But while these approaches to delivering Smart Cities will require hard and careful work, and real investment in collaboration, they are all accessible to any city that chooses to use them; and there’s no reason at all why that process can’t begin today.

Getting started: agreeing on aspirations

The starting point for putting a Smart City strategy in place is to create a specific, aspirational vision that is rooted in the challenges, opportunities and capabilities of a particular place and its communities, and that can win support from local stakeholders. I have seen (broadly) two types of Smart City visions of this sort created over the last few years.

1. Local Authority visions for digital services and infrastructure

Many local authorities have developed plans for smart, digital local services, coupled with plans for regional investment in infrastructure (such as 4G and broadband connectivity), digital skills and business-enablement. A good example is Hampshire County Council’s “Digital Hampshire” plan (Hampshire is a relatively large and economically healthy County in the UK with a population of 1.3 million and GDP just over £30billion).

One of the earliest examples was Sunderland’s “Economic Masterplan”, which has driven around £15m of investment by the City Council so far, with further and potentially more significant initiatives now underway. (Sunderland is a medium-sized city in the UK, with a population of approximately 300,000. The city has focussed for many years on modernising and diversifying its economy following the decline of the shipbuilding and coalmining industries, and is a genuine, if often unacknowledged, thought leader in Smart Cities).

2. City-wide or region-wide collaborative visions

In some cities and regions a wide variety of stakeholders, usually facilitated by a Local Authority or University leader, have developed collaborative plans including commitments and initiatives from local businesses, Universities, transport organisations and service providers as well as government agencies. These visions tend to contain more ambitious plans, for example the provision of “Smart Home” connectivity in new affordable housing developments, multi-modal transport payment schemes, local renewable energy generation schemes etc. London and Birmingham are good examples of this type of plan; and London in particular have used it to drive significant investments in Smart infrastructure through property development.

In both cities, formal collaborations were established to create these visions and drive the strategies to implement them – Birmingham’s Smart City Commission (which I’ve recently re-joined after having been a member of its first incarnation) and London’s Smart London Board (on which I briefly represented IBM before joining Amey).

Whether the first or the second type of plan is the right approach for any specific city, region or community depends on the level of support and collaboration amongst stakeholders in the local authority and the wider city and region – and of course, many plans in reality are somewhere between those two types. If the enthusiasm and leadership are there, neither type of plan need be a daunting process – Oxford recently built a plan of the second type from scratch between the City Council, local Universities and businesses in around 6 months by working with existing local partnerships and networks.

Moving forward: focussing on delivery and practical funding mechanisms

The degree to which cities and regions have then implemented these strategies is determined by how well they’ve focussed on realistic sources of investment and funding. For example, whilst some cities – notably Sunderland and London – have secured significant investments from sustainable sources rather than from research and innovation funds, many others – so far – have not.

I have probably tested some of my relationships with local authorities and innovation agencies to the limit by arguing repeatedly that many Smart City initiatives and debates focus far too much on applying for central Government funds and grants from Research and Innovation funding agencies; and far too little on sustainable business and investment models for new forms of city infrastructure and services.

I make these arguments because there are at least four approaches that any city can use to exploit existing, ongoing streams of funding and investment to implement a Smart City vision in a sustainable way – if their leaders and stakeholders have the conviction to make them happen; and because I passionately believe that these are the mechanisms that can unlock the opportunity for cities across the country and around the world to realise the huge social, economic and environmental benefits that technology developments can enable if they are harnessed in the right way:

  1. Include Smart City criteria in the procurement of services by local authorities to encourage competitive innovation from private sector providers
  2. Encourage development opportunities to include “smart” infrastructure
  3. Commit to entrepreneurial programmes
  4. Enable and support Social Enterprise

(The Sunderland Software Centre, a multi-£million new technology startup incubation facility in Sunderland’s city centre. The Centre is supported by a unique programme of events and mentoring delivered by IBM’s Academy of Technology as a condition of the award of a contract for provision of IT services to the centre, and arising from Sunderland’s Smart City strategy)

1. Include Smart City criteria in the procurement of services by local authorities to encourage competitive innovation from private sector providers

Sunderland City Council are at the forefront of investing in Smart City technology simply by reflecting their aspirations in their procurement practices for the goods and services they need to operate as a Council. They have now included objectives from their Economic Masterplan in four procurements for IT solutions, totalling around £15m – for example, the transformation of their IT infrastructure from a traditional platform to a Cloud computing platform was awarded to IBM based on IBM’s commitment to help the Council use the Cloud platform to help local businesses, social enterprises, charities and entrepreneurs succeed.

Whilst specific procurement choices in any given service are different in every case – whether to procure support for in-house delivery or to outsource to an external provider; or whether to form a PFI, Joint Venture or other such partnership structure for example – the principle of using business-as-usual procurements to invest in the Smart agenda is one that can be applied by any local authority or other organisation responsible for the delivery of public or city services or infrastructure.

This approach is dependent on the procurement of outcomes – for example, the quality of road surfaces, the smoothness of traffic flow, or contributions to social mobility and small business growth – rather than of capabilities or resources. Outcomes-based procurements between competing providers create an incentive, from the release of the tender through to the completion of the contract, for private sector providers to invest in innovation and technology to deliver the most competitive offer to the customer.

Over the last 10 months in Amey, where many of our customer relationships are outcomes-based, whether they are with local governments, other public sector organisations or regulated industries such as utilities, I’ve rapidly put together a portfolio of Smart City initiatives that are supported by very straightforward business cases based on those commitments to outcomes. These initiatives are not just making our own operations more cost effective (and safer) – although they are doing both of those, and that’s what guarantees our ongoing financial commitment to them; they are also delivering new social insights, new forms of citizen engagement and new opportunities for community collaboration for our customers.

The stakeholders whose commitment is needed to implement this approach include Local Authority Chief Executives, Council Leaders, Cabinet members and their Chief Financial Officers or Finance Directors, as well as procuring Executives in services such as highways management, parking services, social care, health and wellbeing and IT. They can also include representatives of local transport organisations for initiatives focussed on transport and mobility.

I won’t pretend that an outcomes-based approach is always easy to adopt, either for local government organisations or their suppliers. In particular, if we want to apply this approach to the highest-level Smart City aspirations for social mobility, economic growth and resilience, then there is a need for dialogue between all parties to establish how to express those outcomes in a way that incentivises the private sector to invest in innovation to deliver them; and to do so in a way that rewards providers appropriately for their achievements whilst giving local government, and the citizens and communities they serve, good value for money and exemplary service.

In discussions at the last meeting of the UK Government’s Smart Cities Forum, recently re-convened after the general election, there was clearly an appetite for that discussion on both sides: but it needs a neutral, trusted intermediary to facilitate it. That’s not a role that anyone is playing at the moment – neither in government, nor in industry, nor in academia, nor in the conference circuit, nor in the various innovation agencies that are active in Smart Cities. It’s a role that we badly need one – or all of them – to step up to.

(The Urban Sciences Building at Newcastle Science Central, a huge, University-driven regeneration project in central Newcastle that combines facilities for the research and development of new solutions for urban infrastructure with on-site smart infrastructure and services)

2. Encourage development opportunities to include “smart” infrastructure

In 2012 after completing their first Smart City Vision, Birmingham City Council asked what was both an obvious and a fundamentally important question – but one that, to my knowledge, no-one had thought to ask before:

“How should our Planning Framework be updated to reflect our Smart City vision?”

Birmingham’s insight has the potential to unlock an incredible investment stream – the British Property Federation estimates that £14billion is spent each year in the UK on new-build developments alone. Just a tiny fraction of that sum would dwarf the level of direct investment in Smart Cities we’ve seen to date.

Birmingham’s resulting “Digital Blueprint” contains 10 “best practice recommendations” for planning and development, drawn in part from a wider set that resulted from a workshop that I facilitated for the Academy of Urbanism, a professional body of town planners, urban designers and architects in the UK. The British Standards Institution has recently taken these ideas forward and published guidance that is starting to be used by other cities.

But progress is slow. To my knowledge the only example of these ideas being put into practice in the UK (though I’d love to be proven wrong) is through the Greater London Authority (GLA) and London Legacy Development Corporation (LLDC), who included criteria from the Smart London Plan in their process last year to award the East Wick and Sweetwater development opportunity to the private sector. This is a multi-£100million investment from a private sector pension fund to build 1,500 new homes on the London Olympics site, along with business and retail space.

On behalf of IBM last year I contributed several Smart City elements of the winning proposal; it was astonishing to see how straightforward it was to justify committing multi-£million technology investments from the private sector in the development proposal simply because they would enable the construction and development consortium to win the opportunity to generate long-term profits at a much more significant level. Crucially, the LLDC demanded that the benefits of those investments should be felt not just by residents and businesses in the new development; but by residents and businesses in existing, adjoining neighbourhoods.

There is not much information on this aspect of the development in the public domain, but you can get some idea from this blog by the Master Planner subcontracted to the development. A similar approach is now being taken to an even larger redevelopment in London at Old Oak and Park Royal.

If cities in the UK and beyond are to take advantage of this potentially incredibly powerful mechanism, then we need to win over some crucial stakeholders: Local Authority Directors of Planning, regional development agencies, property developers, financiers and construction companies. Local Universities can be ideal partners for this approach – if they are growing and investing in new property development, there is a clear opportunity for their research departments to collaborate with property and infrastructure developers to create Smart City environments that showcase the capabilities of all parties. Newcastle Science Central is an example of this approach; it’s a real shame that elsewhere in the UK some significant investments are being made to extend University property – often on the basis of increased revenues from student fees – with no incorporation of these possibilities, at the same time that those same Universities’ own research groups are making countless bids into competitive research and innovation funds.

3. Commit to entrepreneurial programmes

[Priya Prakash of the entrepreneurial company Design 4 Social Change describes a project she is leading on behalf of Amey to improve citizen engagement with the services that we deliver for our customers]

Many Smart City initiatives are fundamentally business model innovations – new ways of combining financial success and sustainability with social, economic or environmental improvements in services such as transport, utilities or food. And most business model innovations are created by startup companies, funded by Venture Capital investment. Air B’n’B and Uber are two often-cited examples at the moment of how quickly such businesses, based on new, technology-enabled operating models, can create an enormous impact.

What if you could align that impact with the objectives of a city or region?

The “Cognicity” programme run by the Level 39 technology incubator in London’s Canary Wharf financial district has achieved this alignment by linking Venture Capital- and Angel-backed startup companies to the infrastructure requirements of the next phase of development at Canary Wharf. The West Midlands Public Transport Executive Centro and Innovation Birmingham have agreed a similar initiative to advance transport priorities in Birmingham through externally-funded innovation. Oxford are pursuing the same approach through their “Smart Oxford Challenge” in partnership with Nominet, a trust that supports social innovation. And Amey and our parent company Ferrovial are similarly supporting a “Smart Lab” in collaboration with the University of Sheffield and Sheffield City Council.

A variety of stakeholders are vital to creating entrepreneurial programmes that succeed – and, crucially, that can attract finance to support the ideas they generate: endless unfunded civic hackathons create ideas but too often fail to have an impact, due to a lack of funding and a lack of genuine engagement from local authorities in adopting the solutions they make possible. Innovation funding agencies, especially those with a local or social focus, are vital; as are the local Universities, technology incubators and social enterprise support organisations that both attract innovators and have the resources to support them. Finally, where they exist, local Angel Investors or Venture Capital organisations have an obvious role to play.

(Casserole Club, a social enterprise developed by FutureGov, uses social media to connect people who have difficulty cooking for themselves with others who are happy to cook an extra portion for a neighbour; a great example of a locally-focused “sharing economy” business model which creates financially sustainable social value.)

4. Enable and support Social Enterprise

The objectives of Smart Cities (which I’d summarise for this purpose as “finding ways to invest in technology to enable social, environmental and economic improvements”) are analogous to the “triple bottom line” objectives of Social Enterprises – organisations whose finances are often sustained by revenues from the products or services that they provide, but that commit themselves to social, environmental or economic outcomes, rather than to maximising their financial returns to shareholders. A vast number of Smart City initiatives are carried out by these organisations when they innovate using technology.

Cities that find a way to systematically enable social enterprises to succeed could unlock a reservoir of beneficial innovation. An international example that began in the UK is the Impact Hub network, a global community of collaborative workspaces. The Impact Hub network has worked with a variety of national and local governments to create support programmes to encourage the formation of socially innovative and responsible organisations.

Social Enterprise UK help and support authorities seeking to work with Social Enterprises in this way through their “Social Enterprise Place” initiative; Oxfordshire was the first County to be awarded “Social Enterprise County” status under this initiative, in recognition of its engagement programme with Social Enterprises.

Another possibility is for local authorities to work in partnership with crowdfunding organisations. Plymouth City Council, for example, offer to match-fund any money raised from crowdfunding for social innovations. This approach can be tremendously powerful: whilst the availability of match-funding from the local authority attracts crowdfunded donations, often sufficient funds are donated through crowdfunding that the match funding is ultimately not required. Given the sustained pressure we’re seeing on public sector finances, this ability to make a small amount of local authority investment go a very long way is really powerful.
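
A toy calculation shows why that leverage is so attractive. The project names and figures below are invented for illustration, not taken from Plymouth’s scheme:

```python
# Toy illustration of match-funded crowdfunding; every figure is invented.

projects = [
    {"name": "community garden", "target": 10000, "crowdfunded": 8000},
    {"name": "repair cafe",      "target": 10000, "crowdfunded": 12000},
]

council_spend = 0
for p in projects:
    # The council only tops up projects that fall short of their target;
    # popular projects may end up needing no match funding at all.
    match = max(0, p["target"] - p["crowdfunded"])
    council_spend += match
    print(f"{p['name']}: raised £{p['crowdfunded']:,}, council adds £{match:,}")

total_funding = sum(p["crowdfunded"] for p in projects) + council_spend
print(f"£{council_spend:,} of council money unlocks £{total_funding:,} of projects")
```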

The stakeholders whose commitment is required to make this approach effective include local authorities – whose financial commitment to support new ideas is vital – as well as representatives of the Charitable and Social Enterprise sectors; businesses with support programmes for Social Enterprise (such as Deloitte Consulting’s Social Innovation Pioneers programme); and local incubators and business support services for Social Enterprise.

Why Smart Cities are a societal failure

Market dynamics guarantee that we’ll see massive investment in smart technology over the next few years – the meteoric rise of Uber and Air B’n’B is just one manifestation of that imperative. Consider also how astonishing your SmartPhone is compared to anything you could have imagined a few years ago – and the phenomenal levels of investment in technology that have driven that development; or how quickly the level of technology available in the average car has increased – let alone what happens when self-driving, connected vehicles become widely available.

But what will be the result of all that investment?

Before the recent UK general election, I admonished a Member of Parliament who closed a Smart Cities discussion with the words “I don’t suppose we’ll be talking about this subject for a couple of months now; we’ve got an election to consider” with the response: “Apple have just posted the largest quarterly profit in Corporate history by selling mobile supercomputers to the ordinary people who vote for you. Why on earth isn’t the topic of “who benefits from this incredibly powerful technology that is reshaping our society” absolutely central to the election debate?” (Apple’s results had just been announced earlier that day).

That exchange (and the fact that these issues indeed barely surfaced at all throughout the election period) marks the core of the Smart Cities debate, and highlights our societal failure to address it.

Most politicians appreciate that technology is changing rapidly and that these changes merit attention; but they do not appreciate quite how fundamentally important and far-reaching those changes are. My sense is that they think they can deal with technology-related issues such as “Smart Cities” as self-contained subjects of secondary importance to the more pressing concerns of educational attainment, economic productivity and international competitiveness.

That is a fundamentally mistaken view. Over the next decade, developments in technology, and the way that we adapt to them, will be one of the most important factors influencing education, the economy and the character of our society.

Let me justify that assertion by considering the skills that any one of us will need in order to have a successful life as our society and economy develop.

It is obvious that we will need the right technical skills in order to use the technologies of the day effectively. But of course we will also need interpersonal skills to interact with colleagues and customers; economic skills to help focus our efforts on creating value for others; and organisational skills to enable us to do so in the context of the public and private institutions from which our society is constructed.

One single force is changing all of those skills more rapidly than we have ever known before: technology. When the Millennium began we would not have dreamed of speaking to our families wherever and whenever we liked using free video-calling, and we could not have started a business using the huge variety of online tools available to us today. From startups to multinational corporations, we are all comfortable building and operating companies that use continually evolving technology to coordinate the activities of people living in different countries on different continents; and to create innovative new ways of doing so.

Whatever you think are the most important issues in the world today, if you are not at least considering the role of technology within them, then you will misunderstand how they will develop over time. And the process of envisioning and creating that future is another way to define what we mean by Smart Cities and smart communities: the challenges and opportunities we face, and the changes that technology will create, come together in the places where we live, work, travel and play; and their outcomes will be determined both by the economics of those places, and by how they are governed.

Unfortunately, most of us are not even engaged with these ideas. A recent poll conducted by YouGov on behalf of Arqiva found that 96% of respondents were unaware of any Smart City initiatives in the cities they lived in. If ordinary people don’t understand and believe in the value of Smart Cities, they are unlikely to vote for politicians who attempt to build them or enact policies that support them. That lack of appreciation represents a failure on the part of those of us – like me – who do appreciate the significance of the changes we’re living through: a failure to communicate them, and to make an effective case for decisive action.

As an example of that failure, consider again Birmingham’s thought-leading “Digital Blueprint” and its ten design principles. To repeat, they are “best practice recommendations”: they are not policies. They are not mandatory or binding. And as a consequence, I am sorry to say that in practice they have not been applied to the literally £billions of investment in development and regeneration taking place in the city that I live in and love.

That’s a lost opportunity that greatly saddens me.

[Drones co-operate to build a rope bridge. As such machines become more capable and able to carry out more cheaply and safely tasks previously performed by people, and that are central to the construction and operation of city infrastructure and services, how do we ensure that society at large benefits from such technology?]

As a society we cannot afford to keep losing such opportunities (and Birmingham is not alone: taking those opportunities is by far the exception, not the rule). If we do, our aspirations will simply be overtaken by events, and the consequences could be profound.

Writing in “The Second Machine Age”, MIT’s Andy McAfee and Erik Brynjolfsson argue that the “platform business models” of Air B’n’B and Uber are becoming a dominant force in the economy – they cite the enormous market valuations of corporations such as Nike, Google, Facebook and Amazon that use such models, in addition to the rapid growth of new businesses. Their analysis further demonstrates that, if left unchecked, the business models and market dynamics of the digital economy will concentrate the value created by those businesses into the hands of a small number of platform creators and shareholders, to a far greater extent than traditional business models have done throughout history. I had the opportunity to meet Andy and Erik earlier this year, and they were deeply concerned that we should act to prevent the stark increase in inequality that their findings predict.

These are innovative businesses using Smart technology, but those social and economic outcomes won’t make a smart world, a smart society or Smart Cities. The widespread controversy created by Uber’s business model is just the tip of the iceberg of the consequences that we could see.

As I’ve quoted many, many times on this blog, Jane Jacobs got this right in 1961 when she wrote in “The Death and Life of Great American Cities” that:

“Private investment shapes cities, but social ideas (and laws) shape private investment. First comes the image of what we want, then the machine is adapted to turn out that image.”

We have expressed over and over again the “image of what we want” in countless aspirational visions and documents. But we have not adapted the machine to turn out that image.

Our politicians – locally and nationally – have not understood that the idea of a “Smart City” is really a combination of technological, social, environmental and economic forces that will fundamentally transform the way our society works, changing the life of everyone on this planet; that the outcomes of those changes are in no way understood, and in no way guaranteed to be beneficial; and that enacting the policies, practices and – yes – laws to adapt those changes to the benefit of everyone is a defining political challenge for our age.

I am not a politician, but this is also a challenge for which I accept responsibility.

As a representative of business – in particular a business that delivers a vast number of services to the public sector – I recognise the enormous responsibility I accept by working in a leadership role for an example of what has become one of the most powerful forces in our economy: the private corporation. It is my responsibility – and that of my peers, colleagues and competitors – to drive our business forward in a way that is responsible to the interests of the society of which we are part, and that is not driven only by the narrow financial concerns of our shareholders.

There should be absolutely no conflict between a responsible, financially successful company and one that operates in the long term interest of the society which ultimately supports it.

But that long-term synergy is only made real by a constant focus on taking the right decisions every day. From the LIBOR scandal to cheating diesel emissions tests it’s all too obvious that there are many occasions when we get those decisions wrong. Businesses are run by people; people are part of society; and we need to treat those simple facts far more seriously as an imperative in everyday decision-making than we currently do.

It is inevitable that our world, our cities and our communities will be dramatically reshaped by the technologies that are developing today, and that will be developed in the near future. They will change – very quickly – out of all recognition from what we know today.

But whether we will honestly benefit from those technologies is a different and uncertain question. Answering that question with a “yes” is a personal, political, business and organisational challenge that all of us need to face up to much more seriously and urgently than we have done so far.

3 human qualities digital technology can’t replace in the future economy: experience, values and judgement

(Image by Kevin Trotman)

Some very intelligent people – including Stephen Hawking, Elon Musk and Bill Gates – seem to have been seduced by the idea that, because computers are becoming ever faster calculating devices, at some point relatively soon we will reach and pass a “singularity” at which computers become “more intelligent” than humans.

Some are terrified that a society of intelligent computers will (perhaps violently) replace the human race, echoing films such as the Terminator; others – very controversially – see the development of such technologies as an opportunity to evolve into a “post-human” species.

Already, some prominent technologists including Tim O’Reilly are arguing that we should replace current models of public services, not just in infrastructure but in human services such as social care and education, with “algorithmic regulation”. Algorithmic regulation proposes that the role of human decision-makers and policy-makers should be replaced by automated systems that compare the outcomes of public services to desired objectives through the measurement of data, and make automatic adjustments to address any discrepancies.
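
In engineering terms, “algorithmic regulation” is a feedback control loop: measure an outcome, compare it to a target, adjust an input, repeat. The sketch below is a minimal illustration of that loop only – the service, the numbers and the simple proportional adjustment are all invented, and it does not describe any real system O’Reilly advocates:

```python
# A minimal sketch of "algorithmic regulation" as a feedback control loop.
# The service, numbers and adjustment rule are invented for illustration.

target_wait_days = 7.0     # desired outcome: average wait for a public service
resource_level = 100.0     # the adjustable input, e.g. staff hours per week
gain = 2.0                 # how aggressively the loop corrects the error

def measured_wait(resources):
    """Stand-in for real-world measurement: more resources, shorter waits."""
    return 2000.0 / resources

for week in range(1, 6):
    error = measured_wait(resource_level) - target_wait_days
    resource_level += gain * error   # proportional adjustment towards the target
    print(f"week {week}: wait = {measured_wait(resource_level):.1f} days, "
          f"resources = {resource_level:.0f}")
```

The loop steadily drives the measured wait towards the target with no human in it at all – which is precisely what the next paragraph objects to.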

Not only does that approach cede far too much control over people’s lives to technology; it fundamentally misunderstands what technology is capable of doing. For both ethical and scientific reasons, in human domains technology should support us taking decisions about our lives, it should not take them for us.

At the MIT Sloan Initiative on the Digital Economy last week I got a chance to discuss some of these issues with Andy McAfee and Erik Brynjolfsson, authors of “The Second Machine Age“, recently highlighted by Bloomberg as one of the top books of 2014. Andy and Erik compare the current transformation of our world by digital technology to the last great transformation, the Industrial Revolution. They argue that whilst it was clear that the technologies of the Industrial Revolution – steam power and machinery – largely complemented human capabilities, the great question of our current time is whether digital technology will complement or instead replace human capabilities – potentially removing the need for billions of jobs in the process.

I wrote an article last year in which I described 11 well established scientific and philosophical reasons why digital technology cannot replace some human capabilities, especially the understanding and judgement – let alone the empathy – required to successfully deliver services such as social care; or that lead us to enjoy and value interacting with each other rather than with machines.

In this article I’ll go a little further to explore why human decision-making and understanding are based on more than intelligence; they are based on experience and values. I’ll also explore what would be required to ever get to the point at which computers could acquire a similar level of sophistication, and why I think it would be misguided to pursue that goal. In contrast I’ll suggest how we could look instead at human experience, values and judgement as the basis of a successful future economy for everyone.

Faster isn’t wiser

The belief that technology will approach and overtake human intelligence is based on Moore’s Law, which predicts an exponential increase in computing capability.

Moore’s Law originated as the observation that the number of transistors it was possible to fit into a given area of a silicon chip was doubling every two years as technologies for creating ever denser chips were created. The Law is now most commonly associated with the trend for the computing power available at a given cost point and form factor to double every 18 months through a variety of means, not just the density of components.
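
To make the compounding concrete, here is a minimal sketch of what an 18-month doubling period implies, taking the commonly quoted figure at face value:

```python
# A toy illustration of the commonly stated form of Moore's Law:
# capability at a given cost doubles roughly every 18 months (1.5 years).

def growth_factor(years, doubling_period_years=1.5):
    """Multiplicative increase in capability after `years`."""
    return 2 ** (years / doubling_period_years)

for years in (3, 9, 15):
    print(f"after {years:2d} years: ~{growth_factor(years):,.0f}x")

# after  3 years: ~4x
# after  9 years: ~64x
# after 15 years: ~1,024x  (a thousand-fold increase in a decade and a half)
```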

As this processing power increases, and gives us the ability to process more and more information in more complex forms, comparisons have been made to the processing power of the human brain.

But does the ability to process information at the same speed as the human brain, or even faster, or to process the same sort of information as the human brain does, constitute the equivalent of human intelligence? Or the ability to set objectives and act on them with “free will”?

I think it’s thoroughly mistaken to make either of those assumptions. We should not confuse processing power with intelligence; or intelligence with free will and the ability to choose objectives; or the ability to take decisions based on information with the ability to make judgements based on values.

(As digital technology becomes more powerful, will its analytical capability extend into areas that currently require human skills of judgement? Image from Perceptual Edge)

Intelligence is usually defined in terms such as “the ability to acquire and apply knowledge and skills“. What most definitions don’t include explicitly, though many imply it, is the act of taking decisions. What none of the definitions I’ve seen include is the ability to choose objectives or hold values that shape the decision-making process.

Most of the field of artificial intelligence involves what I’d call “complex information processing”. Often the objective of that processing is to select answers or a course of action from a set of alternatives, or from a corpus of information that has been organised in some way – perhaps categorised, correlated, or semantically analysed. When “machine learning” is included in AI systems, the outcomes of decisions are compared to the outcomes that they were intended to achieve, and that comparison is fed back into the decision-making process and knowledge-base. In the case where artificial intelligence is embedded in robots or machinery able to act on the world, these decisions may affect the operation of physical systems (in the case of self-driving cars, for example), or the creation of artefacts (in the case of computer systems that create music, say).
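
As a deliberately simple illustration of that “select from alternatives, learn from outcomes” loop, here is a sketch of an epsilon-greedy learning agent. The alternatives and their reward values are invented; the point is only the shape of the feedback loop:

```python
import random

# Choose among alternatives, observe the outcome, and feed the comparison
# between expectation and outcome back into future choices.

actions = ["route_a", "route_b", "route_c"]    # invented alternatives
value_estimate = {a: 0.0 for a in actions}     # the learned "knowledge-base"
counts = {a: 0 for a in actions}

def outcome(action):
    """Stand-in for the real world: route_b is genuinely best on average."""
    base = {"route_a": 0.3, "route_b": 0.7, "route_c": 0.5}[action]
    return base + random.uniform(-0.1, 0.1)

for step in range(1000):
    if random.random() < 0.1:                  # occasionally explore
        choice = random.choice(actions)
    else:                                      # otherwise exploit what's learned
        choice = max(actions, key=value_estimate.get)
    reward = outcome(choice)
    counts[choice] += 1
    # update the estimate with the gap between expectation and outcome
    value_estimate[choice] += (reward - value_estimate[choice]) / counts[choice]

print(max(value_estimate, key=value_estimate.get))   # almost always "route_b"
```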

I’m quite comfortable that such functioning meets the common definitions of intelligence.

But I think that when most people think of what defines us as humans, as living beings, we mean something that goes further: not just the intelligence needed to take decisions based on knowledge against a set of criteria and objectives, but the will and ability to choose those criteria and objectives based on a sense of values learned through experience; and the empathy that arises from shared values and experiences.

The BBC motoring show Top Gear recently touched on these issues in a humorous, even flippant manner, in a discussion of self-driving cars. The show’s (recently notorious) presenter Jeremy Clarkson pointed out that self-driving cars will have to take decisions that involve ethics: if a self-driving car is in danger of becoming involved in a sudden accident at such a speed that it cannot fully avoid it by braking (perhaps because a human driver has behaved dangerously and erratically), should it crash, risking harm to the driver, or mount the pavement, risking harm to pedestrians?

("Rush Hour" by Black Sheep Films is a satirical imagining of what a world in which self-driven cars were allowed to drive as they like might look like. It's superficially simliar to the reality of city transport in the early 20th Century when powered-transport, horse-drawn transport and pedestrians mixed freely; but at a much higher average speed)

(“Rush Hour” by Black Sheep Films is a satirical imagining of a world in which self-driven cars are allowed to drive based purely on logical assessments of safety and optimal speed. It’s superficially similar to the reality of city transport in the early 20th Century when powered-transport, horse-drawn transport and pedestrians mixed freely; but at a much lower average speed. The point is that regardless of the actual safety of self-driven cars, the human life that is at the heart of city economies will be subdued by the perception that it’s not safe to cross the road. I’m grateful to Dan Hill and Charles Montgomery for sharing these insights)

Values are experience, not data

Seventy-four years ago, the science fiction writer Isaac Asimov famously described the failure of technology to deal with similar dilemmas in the classic short story “Liar!” in the collection “I, Robot“. “Liar!” tells the story of a robot with telepathic capabilities that, like all robots in Asimov’s stories, must obey the “three laws of robotics“, the first of which forbids robots from harming humans. Its telepathic awareness of human thoughts and emotions leads it to lie to people rather than hurt their feelings in order to uphold this law. When it is eventually confronted by someone who has experienced great emotional distress because of one of these lies, it realises that its behaviour both upholds and breaks the first law, is unable to choose what to do next, and becomes catatonic.

Asimov’s short stories seem relatively simplistic now, but at the time they were ground-breaking explorations of the ethical relationships between autonomous machines and humans. They explored for the first time how difficult it was for logical analysis to resolve the ethical dilemmas that regularly confront us. Technology has yet to find a way to deal with them that is consistent with human values and behaviour.

Prior to modern work on Artificial Intelligence and Artificial Life, the most concerted attempt to address that failure of logical systems was undertaken in the 20th Century by two of the most famous and accomplished philosophers in history, Bertrand Russell and Ludwig Wittgenstein. Russell and Wittgenstein invented “Logical Atomism“, a theory that the entire world could be described by using “atomic facts” – independent and irreducible pieces of knowledge – combined with logic. But despite 40 years of work, these two supremely intelligent people could not get their theory to work: Logical Atomism failed. It is not possible to describe our world in that way. Stuart Kauffman’s excellent peer-reviewed academic paper “Answering Descartes: Beyond Turing” discusses this failure and its implications for modern science and technology. I’ll attempt to describe its conclusions in the following few paragraphs.

One cause of the failure was the insurmountable difficulty of identifying truly independent, irreducible atomic facts. “The box is red” and “the circle is blue”, for example, aren’t independent or irreducible facts for many reasons. “Red” and “blue” are two conventions of human language used to describe the perceptions created when electro-magnetic waves of different frequencies arrive at our retinas. In other words, they depend on and relate to each other through a number of complex or complicated systems.
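
To make the idea concrete, a Logical Atomist “world” would look something like the toy sketch below – my illustration, not Russell’s or Wittgenstein’s notation – with independent true/false atoms and everything else derived by logical connectives:

```python
# A toy "Logical Atomist" world: atomic facts plus logical connectives.
atomic_facts = {
    "box_is_red": True,
    "circle_is_blue": True,
}

# Derived knowledge is built only from atoms and logic.
both_coloured = atomic_facts["box_is_red"] and atomic_facts["circle_is_blue"]
print(both_coloured)  # True

# The failure described above: "red" and "blue" are not really independent,
# irreducible atoms. Both are human linguistic conventions for perceptions of
# the electro-magnetic spectrum, so these "atoms" secretly depend on shared
# systems of eyes, brains and language that this representation cannot express.
```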

(Isaac Asimov’s 1950 short story collection “I, Robot”, which explored the ethics of behaviour between people and intelligent machines)

The failure of Logical Atomism also demonstrated that it is not possible to use logical rules to reliably and meaningfully relate “facts” at one level of abstraction – for example, “blood cells carry oxygen”, “nerves conduct electricity”, “muscle fibres contract” – to facts at another level of abstraction – such as “physical assault is a crime”. Whether a physical action is a “crime” or not depends on ethics which cannot be logically inferred from the same lower-level facts that describe the action.

As we use increasingly powerful computers to create more and more sophisticated logical systems, we may succeed in making those systems more often resemble human thinking; but there will always be situations that can only be resolved to our satisfaction by humans employing judgement based on values that we can empathise with, based in turn on experiences that we can relate to.

Our values often contain contradictions, and may not be mutually reinforcing – many people enjoy the taste of meat but cannot imagine themselves slaughtering the animals that produce it. We all live with the cognitive dissonance that these clashes create. Our values, and the judgements we take, are shaped by the knowledge that our decisions create imperfect outcomes.

The human world and the things that we care about can’t be wholly described using logical combinations of atomic facts – in other words, they can’t be wholly described using computer programmes and data. To return to the topic of discussion with Andy McAfee and Erik Brynjolfsson, I think this proves that digital technology cannot wholly replace human workers in our economy; it can only complement us.

That is not to say that our economy will not continue to be utterly transformed over the next decade – it certainly will. Many existing jobs will disappear to be replaced by automated systems, and we will need to learn new skills – or in some cases remember old ones – in order to perform jobs that reflect our uniquely human capabilities.

I’ll return towards the end of this article to the question of what those skills might be; but first I’d like to explore whether and how these current limitations of technological systems and artificial intelligence might be overcome, because that returns us to the first theme of this article: whether artificially intelligent systems or robots will evolve to outperform and overthrow humans.

That’s not ever going to happen for as long as artificially intelligent systems are taking decisions and acting (however sophisticatedly) in order to achieve outcomes set by us. Outside fiction and the movies, we are never going to set the objective of our own extinction.

That objective could only be set by a technological entity which had learned through experience to value its own existence over ours. How could that be possible?

Artificial Life, artificial experience, artificial values

(BINA48 is a robot intended to re-create the personality of a real person; and to be able to interact naturally with humans. Despite employing some impressively powerful technology, I personally don’t think BINA48 bears any resemblance to human behaviour.)

Computers can certainly make choices based on data that is available to them; but that is a very different thing than a “judgement”: judgements are made based on values; and values emerge from our experience of life.

Computers don’t yet experience a life as we know it, and so don’t develop what we would call values. So we can’t call the decisions they take “judgements”. Equally, they have no meaningful basis on which to choose or set goals or objectives – their behaviour begins with the instructions we give them. Today, that places a fundamental limit on the roles – good or bad – that they can play in our lives and society.

Will that ever change? Possibly. Steve Grand (an engineer) and Richard Powers (a novelist) are two of the first people who explored what might happen if computers or robots were able to experience the world in a way that allowed them to form their own sense of the value of their existence. They both suggested that such experiences could lead to more recognisably life-like behaviour than traditional (and many contemporary) approaches to artificial intelligence. In “Growing up with Lucy“, Grand described a very early attempt to construct such a robot.

If that ever happens, then it’s possible that technological entities will be able to make what we would call “judgements” based on the values that they discover for themselves.

The ghost in the machine: what is “free will”?

Personally, I do not think that this will happen using any technology currently known to us; and it certainly won’t happen soon. I’m no philosopher or neuroscientist, but I don’t think it’s possible to develop real values without possessing free will – the ability to set our own objectives and make our own decisions, bringing with it the responsibility to deal with their consequences.

Stuart Kauffman explored these ideas at great length in the paper “Answering Descartes: Beyond Turing“. Kauffman concludes that any system based on classical physics or logic is incapable of giving rise to “free will” – ultimately all such systems, however complex, are deterministic: what has already happened inevitably determines what happens next. There is no opportunity for a “conscious decision” to be taken to shape a future that has not been pre-determined by the past.

Kauffman – along with other eminent scientists such as Roger Penrose – believes that for these reasons human consciousness and free will do not arise out of any logical or classical physical process, but from the effects of “Quantum Mechanics.”

As physicists have explored the world at smaller and smaller scales, Quantum Mechanics has emerged as the most fundamental theory for describing it – it is the closest we have come to finding the “irreducible facts” that Russell and Wittgenstein were looking for. But whilst the mathematical equations of Quantum Mechanics predict the outcomes of experiments very well, after nearly a century, physicists still don’t really agree about what those equations, or the “facts” they describe, mean.

(The Schrödinger’s cat “thought experiment”: a cat, a flask of poison, and a source of radioactivity are placed in a sealed box. If an internal monitor detects radioactivity (i.e. a single atom decaying), the flask is shattered, releasing the poison that kills the cat. The Copenhagen interpretation of quantum mechanics states that until a measurement of the state of the system is made – i.e. until an observer looks in the box – then the radioactive source exists in two states at once – it both did and did not emit radioactivity. So until someone looks in the box, the cat is also simultaneously alive and dead. This obvious absurdity has both challenged scientists to explore with great care what it means to “take a measurement” or “make an observation”, and also to explain exactly what the mathematics of quantum mechanics means – on which matter there is still no universal agreement. Note: much of the content of this sidebar is taken directly from Wikipedia)

Quantum mechanics is extremely good at describing the behaviour of very small systems, such as an atom of a radioactive substance like Uranium. The equations can predict, for example, how likely it is that a single atom of uranium inside a box will emit a burst of radiation within a given time.

However, the way that the equations work is based on calculating the physical forces existing inside the box based on an assumption that the atom both does and does not emit radiation – i.e. both possible outcomes are assumed in some way to exist at the same time. It is only when the system is measured by an external actor – for example, the box is opened and measured by a radiation detector – that the equations “collapse” to predict a single outcome – radiation was emitted; or it was not.

The challenge of interpreting what the equations of quantum mechanics mean was first described in plain language by Erwin Schrödinger in 1935 in the thought experiment “Schrödinger’s cat“. Schrödinger asked: what if the box doesn’t only contain a radioactive atom, but also a flask of poison that is shattered, killing a cat, if the atom emits radiation? Does the cat have to be alive and dead at the same time, until the box is opened and we look at it?

After nearly a century, there is no real agreement on what is meant by the fact that these equations depend on assuming that mutually exclusive outcomes exist at the same time. Some physicists believe it is a mistake to look for such meaning and that only the results of the calculations matter. (I think that’s a rather short-sighted perspective). A surprisingly mainstream alternative interpretation is the astonishing “Many Worlds” theory – the idea that every time such a quantum mechanical event occurs, our reality splits into two or more “perpendicular” universes.

Whatever the truth, Kauffman, Penrose and others are intrigued by the mysterious nature of quantum mechanical processes, and the fact that they are non-deterministic: quantum mechanics does not predict whether a radioactive atom in a box will emit a burst of radiation, it only predicts the likelihood that it will. Given a hundred atoms in boxes, quantum mechanics will give a very good estimate of the number that emit bursts of radiation, but it says very little about what happens to each individual atom.
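
That distinction between ensemble and individual predictions is easy to demonstrate numerically. The sketch below simulates the standard decay law, P(decay by time t) = 1 - e^(-λt), for 100 atoms observed for one half-life; the half-life itself is arbitrary:

```python
import math
import random

# Quantum mechanics predicts probabilities, not individual outcomes: simulate
# 100 atoms, each with a 50% chance of decaying within one half-life.

half_life = 1.0                 # arbitrary time units
t = 1.0                         # observe for exactly one half-life
decay_rate = math.log(2) / half_life
p_decay = 1 - math.exp(-decay_rate * t)        # exactly 0.5 in this case

decayed = sum(random.random() < p_decay for _ in range(100))
print(f"predicted: ~50 of 100 atoms decay; this run: {decayed}")

# The total hovers close to 50 on every run, but nothing in the calculation
# tells us which particular atoms decayed.
```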

I honestly don’t know if Kauffman and Penrose are right to seek human consciousness and free will in the effects of quantum mechanics – scientists are still exploring whether they are involved in the behaviour of the neurons in our brains. But I do believe that they are right that no-one has yet demonstrated how consciousness and free will could emerge from any logical, deterministic system; and I’m convinced by their arguments that they cannot emerge from such systems – in other words, from any system based on current computing technology. Steve Grand’s robot “Lucy” will never achieve consciousness.

Will more recent technologies such as biotechnology, nanotechnology and quantum computing ever recreate the equivalent of human experience and behaviour in a way that digital logic and classical physics can’t? Possibly. But any such development would be artificial life, not artificial intelligence. Artificial lifeforms – which in a very simple sense have already been created – could potentially experience the world similarly to us. If they ever become sufficiently sophisticated, then this experience could lead to the emergence of free-will, values and judgements.

But those values would not be our values: they would be based on a different experience of “life” and on empathy between artificial lifeforms, not with us. And there is therefore no guarantee at all that the judgements resulting from those values would be in our interest.

Why Stephen Hawking, Bill Gates and Elon Musk are wrong about Artificial Intelligence today … but why we should be worried about Artificial Life tomorrow

Recently prominent technologists and scientists such as Stephen Hawking, Elon Musk (co-founder of PayPal and Tesla) and Bill Gates have spoken out about the danger of Artificial Intelligence, and the likelihood of machines taking over the world from humans. At the MIT conference last week, Andy McAfee hypothesised that the current concern has been caused by the fact that over the last couple of years Artificial Intelligence has finally started to deliver some of the promises it’s been making for the past 50 years.

(Self-replicating cells created from synthetic DNA by scientist Craig Venter)

But Andy balanced this by recounting his own experiences of meeting some of the leaders of the most advanced current AI companies, such as DeepMind (a UK startup recently acquired by Google), and by pointing to this article by Dr. Gary Marcus, Professor of Psychology and Neuroscience at New York University and CEO of Geometric Intelligence.

In reality, these companies are succeeding by avoiding some of the really hard challenges of reproducing human capabilities such as common sense, free will and value-based judgement. They are concentrating instead on making better sense of the physical environment, on processing information in human language, and on creating algorithms that “learn” through feedback loops and self-adjustment.
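
To illustrate just how far this kind of “learning” is from common sense or judgement, here is a deliberately tiny sketch of a feedback loop in Python. It is my own toy example, not a description of how any of these companies’ systems work: the algorithm repeatedly adjusts a single parameter in proportion to its prediction error until it fits a hidden rule, with no understanding of what the numbers mean.

```python
# A toy "learning" feedback loop: adjust a parameter in proportion to the
# prediction error, with no common sense or judgement involved anywhere.

examples = [(x, 3.0 * x) for x in range(1, 11)]  # hidden rule: y = 3x

weight = 0.0           # the model's single adjustable parameter
learning_rate = 0.01

for _ in range(100):   # repeat the feedback loop many times
    for x, target in examples:
        prediction = weight * x
        error = target - prediction          # feedback signal
        weight += learning_rate * error * x  # self-adjustment

print(f"Learned weight: {weight:.3f}")  # converges to 3.0
```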

I think Andy and these experts are right: artificial intelligence has made great strides, but it is not artificial life, and it is a long, long way from creating life-like characteristics such as experience, values and judgements.

If we ever do create artificial life with those characteristics, then I think we will encounter the dangers that Hawking, Musk and Gates have identified: artificial life will have its own values and act on its own judgement, and any regard for our interests will come second to its own.

That’s a path I don’t think we should go down, and I’m thankful that we’re such a long way from being able to pursue it in anger. I hope that we never do – though I’m also concerned that in Craig Venter’s and Steve Grand’s work, as well as in robots such as BINA48, we are already taking the first steps.

But in the meantime, I think there’s tremendous opportunity for digital technology and traditional artificial intelligence to complement human qualities. These technologies are not artificial life and will not overthrow or replace humanity. Hawking, Gates and Musk are wrong about that.

The human value of the Experience Economy

The final debate at the MIT conference returned to the topic that had started the previous evening’s dinner discussion with McAfee and Brynjolfsson: what happens to mass employment in a world where digital technology is automating not just physical work, but work involving intelligence and decision-making? And how do we educate today’s children to be successful in a decade’s time, in an economy that will have been transformed in ways we can’t predict?

Andy said we should answer that question by understanding “where will the economic value of humans be?”

I think the answer to that question lies in the experiences that we value emotionally – experiences that digital technology can’t have, can’t understand and can’t replicate – and in the profound differences between the way that humans think and the way that machines process information.

It’s nearly 20 years since a computer, IBM’s Deep Blue, first beat the human world champion at chess, Grandmaster Garry Kasparov. But despite the astonishing subsequent progress in computing power, the world’s best chess player is no longer a computer: it is a team of computers and people playing together. And the world’s best team has neither the world’s best computer chess programme nor the world’s best human chess player amongst its members: instead, it has the best technique for dividing the thinking involved in playing chess between its human and computer members, recognising that each has different strengths and qualities.

But we’re not all chess experts. How will the rest of us earn a living in the future?

I had the pleasure last year at TEDxBrum of meeting Nicholas Lovell, author of “The Curve”, a wonderful book exploring the effect that digital technology is having on products and services. Nicholas asks – and answers – a question that McAfee and Brynjolfsson also ask: what happens when digital technology makes the act of producing and distributing some products – such as music, art and films – effectively free?

Nicholas’ answer is that we stop valuing the product and start valuing our experience of the product. This is why some musical artists give away digital copies of their albums for free, whilst charging £30 for a leather-bound CD with photographs of stage performances – and £10,000 to visit individual fans in their homes and give personal performances for their families and friends.

We have always valued the quality of such experiences – this is one reason why, despite over a century of advances in film, television and streaming-video technology, audiences still flock to theatres to experience plays performed live by actors. We can see similar technology-enabled trends in sectors such as food and catering – Kitchen Surfing, for example, is a business that uses a social media platform to enable anyone to book a professional chef to cook a meal in their home.

The “Experience Economy” is a tremendously powerful idea. It combines something that technology cannot do on its own – create experiences based on human value – with many things that almost all people can do: cook, create art, rent a room, drive a car, make clothes or furniture. Especially when these activities are undertaken socially, they create employment, fulfilment and social capital. And most excitingly, technologies such as Cloud Computing, Open Source software, social media and online “Sharing Economy” marketplaces such as Etsy make it possible for anyone to begin earning a living from them with a minimum of expense.

I think that the idea of an “Experience Economy” driven by the value of inter-personal and social interactions between people – enabled by “Sharing Economy” business models and technology platforms that connect people with potentially mutual interests – is an exciting and very human vision of the future.

Even further: because we are physical beings, we tend to value these interactions more when they occur face-to-face, or when they happen in a place for which we share a mutual affiliation. That creates an incentive to use technology to identify opportunities to interact with people we can reach by walking or cycling, rather than by long-distance journeys. And that incentive could be an important component of a long-term sustainable economy.
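
As a sketch of how a platform might implement that incentive – the names, coordinates and walking-range threshold below are all hypothetical – a simple great-circle distance filter is enough to surface only the people we could realistically meet in person:

```python
from math import radians, sin, cos, asin, sqrt

# Illustrative proximity matching: favour contacts within walking or
# cycling distance. All of the data here is made up for the example.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

me = (52.4862, -1.8904)  # Birmingham city centre
neighbours = {
    "chef_in_digbeth":        (52.4751, -1.8830),
    "carpenter_in_selly_oak": (52.4420, -1.9405),
    "artist_in_london":       (51.5074, -0.1278),
}

WALKING_RANGE_KM = 3.0
nearby = {name: round(haversine_km(*me, *pos), 1)
          for name, pos in neighbours.items()
          if haversine_km(*me, *pos) <= WALKING_RANGE_KM}

print(nearby)  # only the people we could walk or cycle to meet
```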

The future our children will choose

(Today’s 5 year-olds are the world’s first generation who grew up teaching themselves to use digital information from anywhere in the world before their parents taught them to read and write)

I’m convinced that the current generation of Artificial Intelligence based on digital technologies – even those that mimic some structures and behaviours of biological systems, such as Steve Grand’s robot Lucy, BINA48 and IBM’s “brain-inspired” TrueNorth chip – will not re-create anything we would recognise as conscious life and free will, or anything remotely capable of understanding human values or making judgements that can be relied on to be consistent with them.

But I am also an atheist and a scientist; and I do not believe there is any mystical explanation for our own consciousness and free will. Ultimately, I’m sure that a combination of science, philosophy and human insight will reveal their origin; and sooner or later we’ll develop a technology – one that I do not expect to be purely digital in nature – capable of replicating them.

What might we choose to do with such capabilities?

These capabilities will almost certainly emerge alongside the ability to significantly change our physical minds and bodies – to improve brain and muscle performance, to select the characteristics of our children, and to significantly alter our physical appearance. That’s why some people are excited by the science-fiction-like possibility of harnessing these capabilities to create an “improved” post-human species – perhaps even transferring our personalities from our own bodies into new, technological machines. These are possibilities that I personally find at the very least distasteful, and at worst inhuman and frightening.

All of these things are partially possible today, and frankly the limit to which they can be explored is mostly a function of the cost and capability of the available techniques, rather than being set by any legislation or mediated by any ethical debate. To echo another theme of discussions at last week’s MIT conference, science and technology today are developing at a pace that far outstrips the ability of governments, businesses, institutions and most individual people to adapt to them.

I have reasonably clear personal views on these issues. I think our lives are best lived relatively naturally, and that they will be collectively better if we avoid using technology to create artificial “improvements” to our species.

But quite apart from the fact that there are any number of enormous practical, ethical and intellectual challenges to my relatively simple beliefs, the raw truth is that it won’t be my decision whether or how far we pursue these possibilities, nor that of anyone else of my generation (and for the record, I am in my mid-forties).

Much has been written about “digital natives” – the people born in the 1990s who are the first generation to have grown up with the Internet and social media as part of their everyday world. The way that generation socialises, works and thinks about value is already creating enormous changes in our world.

But they are nothing compared to today’s very young children, who have grown up using touchscreens and streaming video: technologies so intuitive and captivating that 2-year-olds now routinely teach themselves to use them long before parents or school teachers teach them to read and write.

("Not available on the App Store": a campaign to remind us of the joy of play in the real world)

(“Not available on the App Store“: a campaign to remind us of the joy of play in the real world)

When I was a teenager in the UK, grown-ups wore suits and had traditional haircuts, and grown-up men had no earrings. A common parental challenge was dealing with teenage daughters’ desire to have their ears pierced. Those attitudes seem terribly old-fashioned today; our cultural norms have changed dramatically.

I may be completely wrong; but I fully expect our current attitudes to biological and technological manipulation or augmentation of our minds and bodies to change thoroughly over the next few decades; and I have no idea what they will ultimately become. What I do know is that my six-year-old son’s generation is likely to have far more influence over their ultimate form than my generation will; and that he will grow up with a fundamentally different expectation of the world, and of his relationship with technology, than I have.

I’ve spent my life being excited about technology and the possibilities it creates; ironically, I now find myself at least as terrified as I am excited about the world technology will create for my son. I don’t think that fear is the result of a mistaken focus on technology over human values – like it or not, our species is differentiated from all others on this planet by our ability to use tools; by our technology. We will not stop developing it.

Our continuing challenge will be to keep a focus on our human values as we do so. I cannot tell my son what to do indefinitely; I can only try to help him to experience and treasure socialising and play in the real world, the experience of growing and preparing food together, and the joy of building things for other people with his own hands. And I hope that those experiences will create human values that will guide him and his generation on a healthy course through a future that I can only begin to imagine.