Any sense that government could own and control the commanding heights of technology has long since faded into history. The digital revolution that swept through the communications world from the 1980s onwards saw to that; generations of technology companies and many billions of R&D investment gave citizens and companies ready access to the tools and capabilities that were once only available to major nation states. I was something of a technical “geek” in my early years – I suspect I still am at heart – and believe that this has been an incredible, exciting period in which to be alive.
That same digital revolution also shaped my professional career as an intelligence officer. The once-stable landscape of security and intelligence shifted forever, as ubiquitous, free encryption, the routine anonymization of online identities, and vast data volumes became the “new normal”. For a period it seemed as though the security services might find themselves “going dark”, unable to generate the intelligence they needed to protect the public. The activities of terrorists, insurgents, criminals and hackers threatening the safety of the UK were still out there, but were now hidden in the digital noise.
The response of agencies like mine was to embrace the Internet. We combined the acquisition of very large volumes of data – albeit only a vanishingly small percentage of the actual Internet traffic touching the UK – with advanced analytical processing, and put tight privacy safeguards around how analysts could access the selected results. As time went by, we added what lawyers term “equipment interference” to our toolkit, enabling analysts to gain access to networked devices from afar – “computer network exploitation” or “hacking”, as most non-lawyers would call it.
In combination, these have proved powerful techniques for countering attempts to harm the UK, but it remains hard, painstaking work for our analysts. I often reflect that we could have done more to explain the limited capacity of the agencies and the practical difficulties – as well as the huge opportunities – of carrying out intelligence analysis in the digital age. We are limited by the need to protect our most sensitive techniques, but we have learnt the value of being open about our challenges and overall approach.
Meanwhile, we championed the use of strong encryption and security standards to protect citizens’ and companies’ data in the UK and beyond. Like many in GCHQ, I have worked within both our defensive cyber security and global intelligence missions during my career. I like to think my work over the years has helped both to save lives and to protect privacy.
But using technology means making choices and, at least in a democracy like our own, those decisions can, and should, be challenged. Some of those concerned with the implications of our approach took their case to the courts in Europe. The judges supported our approach to managing privacy and security, and they dismissed some of the wilder claims made about our capabilities, but in the process it became clear that some of our older laws needed updating.
The passage of the resulting Investigatory Powers Act last year was not a comfortable experience. Being cross-examined in detail about your work by Parliamentary committees or teams of top barristers is rarely relaxing, but the process certainly encouraged public debate and transparency. Parliament chose to renew the authorities available to the agencies, but also concluded that our exceptional powers required exceptional oversight. MPs inserted a new team of independent high court judges into our approvals process for the first time, and introduced a new panel of experts to advise on changing technology, especially where it impacts on privacy.
Technology has, of course, not stood still in the intervening year. The Internet has continued to “flatten” as emerging economies have taken forward investment in indigenous software and infrastructure. Up to 100 billion separate devices could be globally connected by 2020, creating waves of new competition and opportunities. The Internet no longer belongs solely to the West, and we are having to fight to protect some of its founding principles – including multi-stakeholder ownership, the rights of the individual, and technical transparency – in the face of action by far less liberal states.
Society’s use of technology has also moved on at pace. To quote the author Isaac Asimov, “all devices have their dangers”: there is now a much greater awareness of the risks posed by cyber attacks, including the threats to our own democratic processes and daily ways of life. Emerging technologies have an important part to play in protecting us, including as part of GCHQ’s National Cyber Security Centre’s programme of active cyber defence.
From the fierce debates around how best to deliver both national and personal security, some points of consensus have emerged. Almost everyone would now agree that the public and private aspects of these policies are intertwined: how we choose to protect the individual’s right to privacy has much broader implications for society as a whole. And most would also agree that the government’s and the private sector’s approaches to protecting privacy can no longer be analysed in isolation – too much of the data and how it might be used in the modern world is common to us both.
Nonetheless we still have some way to go. There remains a gap between those specialists discussing these issues from an operational perspective – for example police officers, security experts, officials such as myself – and academic specialists from our universities and institutes. We often use different language and paradigms, and too often talk past one another, to the detriment of both disciplines.
There is also a gap between the discourses around abstract rights – for example the debates over privacy and data law led by some lawyers within Europe – and the more utopian approach to technology popular within some major technology companies, especially in the United States. Without greater cross-fertilisation, we risk producing legal conclusions that defy technological implementation, or tech products which ignore the potential ethical implications of their misuse.
We increasingly need to find imaginative ways and novel places to bring together these debates. Some of those discussions must take place within the UK, but others must necessarily be international. Just as modern national security threats cross borders, so do data, commerce, research and political vision – at least amongst democracies. Our collective security requires collective consensus on how to manage these emotionally charged issues, more so now than ever.