This is for everyone #london2012 #oneweb #openingceremony @webfoundation @w3c
Sir Tim Berners-Lee1
Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.
After 20 years we have reached the point of no return: we have net neutrality law in Europe, and in many other countries. It has even been accepted as a principle of the United Nations, as I explore later in this chapter. It is here to stay, however watered down its principles, however complex its enforcement, however unreasonable or overzealous its defenders, or duplicitous its enemies.
On 24 February 2016 I gave a presentation at the closed BEREC workshop on net neutrality, attended by national regulators and the European Commission, alongside Professor Barbara van Schewick, Dr Scott Marcus and Dr Alissa Cooper.3 While the latter pair of presentations focused on the lack of evidence gathering to prove net neutrality breaches, and the problems created in the technical protocol stack by differing attempts to introduce QoS, my presentation and that of van Schewick focused on the manner in which the BEREC Guidelines needed to clarify Regulation 2015/2120. In particular, we both warned of the danger of operators both favouring their own and affiliated content providers (‘friends and family’ as I termed it) using zero rating, especially of video content, and discriminating against rival content by charging extra for service that is barely different in any respect from standard best-efforts Internet traffic (‘specialised-service your enemies’). The questions from the workshop suggested that several national regulators are sympathetic to such arguments, as they treat net neutrality not so much as a Friday afternoon job in their remit as a never-to-be-enforced power. This is hardly surprising when regulators have always resisted new sector-specific regulation, favour consolidation of operators to increase investment in ‘superfast’ (sic) broadband, and have no new resources to meet the multifaceted challenge of net neutrality enforcement. Expect the smaller regulators in the Baltics, Cyprus, Malta, Luxembourg, the Visegrad Four and perhaps Ireland to be the first whose very uncertain regulatory commitment to net neutrality is tested by zero rating and/or specialised service plans. This double whammy of positive and negative discrimination appears to be emerging in the United States, where non-affiliated services, in particular Netflix, have been rejected from some zero-rating plans while accepted on others.4
The Introduction explained that net neutrality directly regulates the relationship between IAPs and content providers, providing rules about how IAPs may contract with and treat the traffic of those content providers, notably that they may not discriminate against certain providers (either blocking their content or favouring commercial rivals such as IAP affiliates). It does not regulate those content providers directly. There are two types of net neutrality regulation: ‘lite’ and heavy. The former prevents IAPs from blocking or throttling other content, application and service providers; the latter dictates non-discrimination on fast lane broadband, known as Specialised Services. Regulation and its enforcement have been delayed until 2017, given that the 2015/16 laws and regulations failed to define those ‘lite’ and heavy regulations in detail. Chapter 1 explained the beginnings of net neutrality regulation in the US and Europe. Chapter 2 outlined competition policy’s purpose and limitations, and the regulation of telecoms access based on the UK case study, providing insights into how difficult net neutrality regulation will prove in practice. It considered the possibility of platform neutrality or some other form of platform regulation, and assessed the possibilities of behavioural regulation to overcome some of the consumer detriments identified in nascent net neutrality regulation, as well as the wider use of behavioural ‘nudge regulation’ in Internet policy. Chapter 3 explained the current debate over access to Specialised Services: fast lanes with higher QoS. Chapter 4 examined the new European law of 2015, with Chapter 5 examining the interaction between those laws and interception/privacy. Chapter 6 took a deep dive into UK self- and co-regulation of net neutrality. The majority of 2015 net neutrality regulatory cases related to mobile (or in US parlance ‘wireless’) net neutrality.
Chapter 7 explored some of the wider international problems of regulating the newest manifestation of discrimination: zero rating.
In the final chapter, I consider the various means by which government can regulate net neutrality, focused in four parts: ex ante sector-specific and ex post generic (competition/consumer protection) regulation; educating the public and encouraging greater transparency; incentives for research and development of new technologies that can overcome (or exacerbate) the problem; and procurement and adoption of technologies to set a standard that the market can follow. The first is addressed with a toolkit for regulation. The second is summarised based on findings in Chapter 2 of the roles of competition law and behavioural regulation. The third is examined using the example of the tools of Internet science, notably the adoption of Responsible Research and Innovation (RRI) by the European Commission for Horizon 2020 projects, based on the Rome Declaration of 2014.5 This approach is intended to explicitly include human rights to privacy and free expression in the design of Internet technologies. The final part, procurement, is analysed by examining both commitments to Universal Service Obligation (USO) and to an industrial policy of ‘Gigabit infrastructure by 2025’ set out by the European Commission. If broadband Internet is vital to future socio-economic development, then net-neutral fast infrastructure is an essential part of that process. In conclusion, I consider what net neutrality regulation may teach regulatory research in other areas, the law more generally, and the future of the social sciences as data science and other techniques emerge, making the interdisciplinary challenges of net neutrality regulation of more general application.
The case studies have provided a variety of responses to net neutrality violation in practice, with zero rating the main concern in 2016. I now offer a toolkit that regulators may use to respond to net neutrality concerns. It comprises several elements:
- how to engage stakeholders;
- how to measure neutrality;
- how to access prior knowledge in technical advice; and
- an example of how to respond to zero-rating offers.
It is not prescriptive but descriptive, and points out that in all these areas, as well as Specialised Services, there remain serious research gaps in the analysis. These gaps were predictable five years ago but have only slowly been addressed, reflecting the political, economic and forensic uncertainty of net neutrality regulation.
Table 9 summarises the date of introduction of net neutrality policy, its regulatory basis and the major cases dealt with by the regulator.6
| Country | Regulatory basis | Date of introduction | Major cases |
| --- | --- | --- | --- |
| Norway | Guidelines^a | 24/2/2009^b | Zero rating declaration by NKOM of 2014 |
| Costa Rica | Sala Constitucional De La Corte Suprema De Justicia^c | 13/7/2010 | 2010, by Supreme Court precedent |
| Chile | Law 20.453^d | 18/8/2010 | Decree 368, 15/12/2010^e |
| Netherlands | Telecoms Act 2012 | 7/6/2012 | 2014 and Guidelines 15/5/2015^f |
| Slovenia | Law on Electronic Communications 2012 | 20/12/2012 | Zero rating 2015 |
| Finland | Information Society Code (917/2014) | 17/9/2014 | 2014 |
| India | Regulations (No. 2 of 2016) | 8/2/2016 | August: six months after Gazette publication date |
| Brazil | Law No. 12.965 | 23/4/2014 | Consultation 2015–16, no implementation^g |
| Canada | Telecom Act 1993 | Hearing of 2010 | Zero rating 2015 |
| United States | Open Internet Order under Title II, Communications Act 1934 as amended by Telecoms Act 1996 | 2015 | Zero rating unenforced except by merger condition |
| UK | Code of Practice 2011 | Self-regulatory | Unenforced |
^b Olsen, T. (2015).
^c Costa Rica, Guzmán et al. v. Ministerio De Ambiente, Energía y Telecomunicaciones (2010).
^d See Chile, Ley 20.453 de 18 de agosto 2010.
^e Chile, Decree 368 of 15 December 2010.
^f Netherlands Department of Economic Affairs, Net Neutrality Guidelines, 2015.
As seen, no decision has been made in the United Kingdom, nor in the EU states whose BEREC Guidelines are pending. All other case studies implemented some type of regulation of zero rating, though in the United States and Chile this appears to admit exceptions (music and video streaming in the US, Wikipedia Zero in Chile). The nations with the fastest median Internet access, the Netherlands and Norway, also have the strictest net neutrality regulation in practice.
Co-regulation was used extensively in Norway, and to a lesser extent in the UK, to form policy. Multi-stakeholder forums were used to consult on policy, in addition to parliamentary discussion, in the United States, Brazil and Canada. In the former two countries, and in India in 2015, thousands of replies were received (four million in the US, two million in India) in favour of some form of neutrality. The Netherlands and Slovenia had extensive parliamentary debate about their net neutrality laws. This confirms that, at least in form, the telecoms regulators became best of breed in making consultations widely available and receiving significant numbers of non-traditional responses. The July 2016 BEREC consultation may show Europe to be the exception to this greater participation.7
Better regulation requires better evidence of impact and actors. Two outcomes have been presented to the EC in the Code of Practice Agora to improve the evidence base.8 The first is a formal research/impact analysis task for the EC, rather than simply claiming Option Zero when regulation is undertaken by corporate/NGO/standards actors instead of government (i.e. the other two points of the regulatory triangle in our case studies). For example, hardware governance and the Border Gateway Protocol (BGP) need to be understood by government in order to formulate useful net neutrality policy even in the absence of formal regulation. How do these emergent areas interrelate with other parts of Internet governance? How does such governance interrelate with regulation, for instance where new actors and institutions are forming new coalitions of interest and epistemic communities? Further research into areas such as cyber-security, jurisdiction and borders, and standard setting is needed urgently to identify the complex international patterns of interdependent regulation of net neutrality’s many interlinked technical, corporate and social facets.
These new areas also shine a light on the increasing role of non-traditional actors – institutions, the third sector, multi-stakeholderism (MSH), IAPs, social networks and the participation of individuals in policy-related activism, as evidenced by responses to net neutrality legislation in the US, India and Brazil. How do we understand the meaning of online activism? What made a difference was stakeholders acting through mainstream political channels (Obama’s reference to the four million emails sent to the FCC in 2014 which opened this book, or the two million sent to TRAI in India in 2015), rather than the network make-up of corporate regulatory actors. Dynamic activity is taking place in many different venues. Recent work by Powell9 identifies how participation in policy making employed networked dynamics but also created new discourses related to the Internet that countered the ways that governments had attempted to position these regulatory interventions.
In addition to the ‘what’ and ‘who’ questions, net neutrality research also reveals ‘where’ answers: non-conventional venues – international, non-state, code-based, for instance. We are moving towards a more multi-stakeholder environment (and our case studies demonstrate this, with more forms of regulation by market actors), and away from legislative bodies. ‘Exotic’ actors, including prosumers (Anonymous, hackers, Wikileaks and others), have a strong impact on Internet governance. The recent calls by President Rousseff, Chancellor Merkel and others for a ‘sovereign cloud’ are related to the Snowden revelations, but impact powerfully on net neutrality. Governments need to commission a research programme to understand these processes, actors and venues.
The net neutrality case studies illustrate ‘how’ participation can occur in many modes, but they also stress that effective participation in governance is not only a matter of greater numbers of people representing different groups, but is also contingent on the legitimacy of spaces for participation. For code-based governance, this is often linked with expertise, but as the case of net neutrality demonstrates, this legitimacy also emerges in relation to other actors and through the use of the technical solutions. Standard-setting processes can be hijacked to further private interests. In some case studies we see that the more stakeholders there are, the less effectiveness there is. Experts, notably engineers, may migrate to another forum to avoid tedious legitimacy discussions and to conduct ‘real work’ on QoS. Thus, institutional contexts remain important, and understanding depends on development of new analytic models that:
- Identify the manner in which governance and legitimacy emerge socio-technically;
- Employ analysis of power, including the power of policy networks and the significance of discourses as developed by activists, individuals, the media and governments;
- Avoid justifying Potemkin multi-stakeholderism by separating policy domain and policy issues.
Net neutrality has shown itself to be a Potemkin stakeholder process: the processes it puts in place offer only post facto opportunities for input. Loss of legitimacy of institutions is important, whether due to mission creep or issue linkage, such as between net neutrality, interception and privacy. How do you separate increased participation from decision making in drafting new processes? What do you do when greater participation breaks effectiveness? Proximity allows stakeholders to take each other’s interests into account, but a research question that emerges is: do we take into account the direction of ‘travel’ in the case studies, e.g. downstream effects? The repulsion and attraction of different stakeholders, such as civil society in the case of net neutrality, is a dynamic process that needs more research.
Research is needed to examine both enforcement of transparency in TMPs by governments and their agencies, notably through the use of SamKnows monitoring (Brazil, US, UK, EU, Canada) and the publication of key metrics, and enforcement by regulators through published infringement actions. Seven of the national case studies are now using measurement devices in the consumer’s home. SamKnows is now active in measuring end user TMPs under contracts with regulators in the US, Brazil, the UK, Canada and the European Union as a whole.10 This has supplanted self-reporting of violation by IAPs, and network measurement by downloaded diagnostic tools, as the preferred method of discovering TMPs. Given the lack of clarity in the latter, and the obvious incentive paradox in asking IAPs to self-report violation, this approach appears to be the best fit.
The US regulator is taking action to actively consult on future TMPs that may violate neutrality via its Advisory Opinion approach. Even critics of net neutrality acknowledge that better measurement of end user experience is a vital contributor to forcing IAPs to offer increased transparency to end users.11 A report for Ofcom published in August 2015 concluded that an approach based on a quality floor (i.e. minimum service quality, possibly based on a new Universal Service Obligation) would help app designers and users understand better how SamKnows-type measurement can help them make better choices.12
The advanced measurement standards emerging may help regulators and consumers understand how best to enforce net neutrality standards.
Technical elements of net neutrality remain complex in both resources and interpretation for regulators, especially those with fewer staff and less technical experience. Greater clarity on future approaches could usefully build on the former role of the Open Internet Advisory Committee (OIAC) of the FCC in 2011–12, and that of the Broadband Internet Technical Advisory Group (BITAG) in the period since. Between OIAC, BITAG and BEREC, many very useful technical and policy reports have been produced since 2011. I highlighted BEREC’s contribution in Chapter 4; in Table 10 I list valuable US contributions.
| BITAG 2011–14^a | OIAC 2013^b |
| --- | --- |
| 2014 Interconnection and Traffic Exchange on the Internet | 20 August 2013 Economic Impacts of Open Internet Frameworks |
| 2014 VoIP Impairment, Failure, and Restrictions | 20 August 2013 Policy Issues in Data Caps and Usage-Based Pricing |
| 2013 Real-time Network Management of Internet Congestion | 20 August 2013 AT&T FaceTime Case Study; Openness in the Mobile Broadband Ecosystem |
| 2013 Port Blocking | 20 August 2013 Specialized Services: Findings and Conclusions |
| 2013 SNMP DDoS Attacks | 20 August 2013 Open Internet Label Study |
| 2012 Large Scale Network Address Translation | 17 January 2013 Specialized Services |
| 2011 IPv6 DNS Whitelisting | 17 January 2013 Economic Impact Data Cap |
^a See www.bitag.org/ (Accessed 13 September 2016).
^b See www.fcc.gov/encyclopedia/open-internet-advisory-committee (Accessed 13 September 2016).
These reports were all either written by a co-regulatory group, as with OIAC and BITAG (though the latter claims to be formally self-regulatory), or consulted on with many stakeholders. Note that the BEREC site lists several other draft papers developed in 2011–15. BEREC has consulted very widely on its approach within the various regional regulator groups, including what might be termed the ‘Regulators’ regulators’ forum in Barcelona on 2–3 July 2015, when no fewer than ten national regulators explained their approaches to net neutrality.13 BEREC met with EaPeReg (Eastern Partnership Electronic Communications Regulators Network), REGULATEL (Latin American Forum of Telecommunications Regulators) and EMERG (Euro-Mediterranean Regulators Group) for the high-level Regulator Summit, representing over seventy regulators.14
In terms of the value of net neutrality to consumers, regulators in the Netherlands, UK and BEREC15 all commissioned specialist reports to use focus groups to ascertain consumer ignorance and anger. These are in addition to the SamKnows reports released on an annual basis by regulators.
A Paris conference in 2005 of the most senior IAPs and academics concluded that the only foolproof method of discovering net neutrality violations is when IAPs not only admit to the practice but use it as marketing to sell their services as superior to ‘neutral’ competitors.16 This has occurred frequently where net neutrality ‘lite’ blocking of, for instance, Skype and other IM services has taken place, but may occur less frequently as IAPs respond to the new European law outlawing this type of blocking. In future it will be even harder to identify violations. Academic research in this field is often industrially and governmentally supported, which makes it perilous to conduct such research without accusations of policy bias. Interference in the research agenda by corporate and/or government sponsors of other researchers in the departments in which a researcher works can put collegial pressure on the researcher to curb investigation into malfeasance.17 Research into detection of violation in Europe has been deliberately blocked by potential funders, whether or not under pressure from corporate interests, while research building the technical and economic case for violation has been richly rewarded.
Those academics conducting research that measures net neutrality have been accused, often incorrectly, of accepting support from content companies, specifically Google until 2010 (Google has become less committed to net neutrality since that point). In this respect, academia faces the wider issue of identifying sponsors and corporate–government influence over research agendas, an issue that has become particularly acute in Europe since 2008 as research council funding has declined and direct corporate and government-sponsored research support has replaced it, yet which has been commonplace in the United States throughout its academic history.18 Net neutrality is a small regulatory issue area compared to the global financial, pharmaceutical and arms proliferation crises caused by both explicit deregulation and failures to regulate in the twenty-first century.
The European funding gap is likely to become a less severe ethical hurdle as European law finally requires companies and governments to support net neutrality from 2016, and 2015 workshops for both EINS and the SMART Internet Measurement project (which I attended) were permitted to become substantial net neutrality discussions.19
Law is typically grounded in national policy outcomes, with international law an extension of national norms and processes, notably in the doctrine of extraterritorial application of national laws. Enforcement and policing are typically carried out in cooperation between national jurisdictions, and the most significant international law relating to the Internet remains the 2001 Council of Europe Cybercrime Treaty (to which the United States, Japan and Canada are non-European signatories). International lawyers respond to transnational problems with summitry, followed by an international treaty, paralleled by continued extraterritorial application of national law in cooperation with other jurisdictions: a pattern repeated with regard to the Internet, with one exception. The lack of competence and legitimacy for state action in regard to the Internet, and the continuous opposition of the United States government (supported by its complicit partner the European Union) to an international treaty in any but specific cybercrime policing matters, has meant that the frequent pleas from international lawyers, the United Nations system and regional human rights bodies for a treaty to establish international norms for the Internet have fallen on deaf ears. In the World Conference on International Telecommunications (WCIT) renegotiation of 2012, varied attempts were made to establish a decision-making body to replace the toothless consultative summitry of the Internet Governance Forum. International lawyers’ calls for summitry to lead to binding treaties failed. The lack of policing and enforcement of extraterritorial jurisdiction makes calls for a more vigorous international treaty or application of human rights legal norms equally frustrating.
Net neutrality regulation is a blunt telecom regulatory instrument for a multifaceted problem such as Internet access, which also includes such policy issues as privacy and free expression, as well as universal access and many Millennium Development Goals. Belli and Foditsch have written extensively about the modelling of a universal principle-based network neutrality law, an experiment conducted through the Dynamic Coalition on Net Neutrality in the UN Internet Governance Forum led by Belli since 2012:
it seems possible to distil some essential elements from the existing net neutrality frameworks, in order to define a common principle base on which interested policymakers or market actors can develop compatible net neutrality frameworks. Indeed, while the Internet is usually seen as an interconnection of electronic networks, it is important to stress that the online environment also determines an interconnection of juridical systems that may benefit from shared policies.20
Crowd-sourcing, however expert-led, is a novel form of legislative initiative, with Radu et al. cautioning that there is greater need for ‘leveraging sufficient community interest for substantial input; defining procedures for the collection and screening of inputs; and committing to institutionalizing rules for incorporating feedback.’21 This issue will not go away quickly.
United Nations policy documents remain generalist and somewhat naive, playing catch-up with telecoms regulation in developed nations. The official UN ‘State of Broadband’ report appeared to have only a single author, who thanked in the preface over forty equipment vendors and telecoms lobbyists but only one person who might be seen as an independent expert. The uncritical use of corporate statistics was immediately condemned by the technical press, who saw the report as captured. It stated:
Internet companies and Internet content providers need to contribute to investment in broadband infrastructure by debating interconnection issues and revenue/fee sharing with operators and broadband access providers to increase investments in broadband infrastructure and energize the broadband ecosystem.22
It does, however, offer support for a FRAND solution to Specialised Services: ‘Although various strategies for open access exist, it is vital that policy-makers ensure that access to new facilities is provided on fair, reasonable and equivalent terms.’23
David Kaye, United Nations special rapporteur on freedom of expression, argues that:
In the longer term, net neutrality policies should be guaranteed wherever Internet infrastructure is being built out. The 13 ‘Necessary & Proportionate’ Principles, which apply human rights to communications surveillance, should also be adopted and implemented as a framework for rights-respecting connectivity.24
In December 2015 he argued for a human rights-oriented connectivity programme to flow from the UN General Assembly debate on WSIS+10 (a ten-year review of the World Summit on the Information Society) and the newly updated Millennium Development Goals (‘Global Goals for Sustainable Development’ (GGSD), as adopted by the UN General Assembly in September 2015). The GGSD emphasise that access to technology underpins every other ‘Global Goal’ towards the eradication of extreme poverty. He particularly urged caution over the multinational connectivity platform pursued by Facebook and endorsed by celebrities, explaining that:
Mark Zuckerberg and Bono issued a call to ‘unite the earth’ and, with other global opinion shapers and business leaders, released a Connectivity Declaration to ‘connect the world’. The U.S. State Department’s Global Connect program makes Internet access a foreign aid priority … But connectivity alone cannot be global policy. Respect for privacy and the freedom of expression must go hand in glove with the drive to connection.25
He argued strongly that the Facebook-sponsored Free Basics project, which offers free access to basic low-bandwidth versions of sponsored websites such as Facebook itself, Wikipedia and local news websites, offers a false equivalence with Open Internet access, warning that government may ‘bless deals creating a two-tiered Internet pushed by so-called zero-rated service providers that limits browsing to pre-selected applications and establishes new gatekeepers’26 such as Facebook. This may be especially pernicious as Free Basics is rolled out in the least developed countries with very low fixed Internet access, and thus greater dependence on low bandwidth mobile connections. Examples are Zambia, Myanmar, Kenya, Peru and Guatemala.
Privacy is an area of clear theoretical distinction between the EU and US, even though in practice smaller European states have highly inadequate regulators, while the US has a strong federal regulator which has imposed fines on a scale far beyond those imposed by its weakling European counterparts.27 The UK shares the US’s ambivalence towards privacy, its government campaigning in the 2015 General Election to leave the 47-member European Convention on Human Rights as a result of media-inspired fears of Article 8 privacy rights.28 In most developed countries, neutrality developed from privacy concerns, a dynamic which needs further empirical comparative research in the developing nation context.
The calls for a ‘Magna Carta for the Internet’ in the wake of the Snowden revelations miss the point that since the OECD 1980 Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,29 and especially since Directive 95/46/EC, there has been such a document in regard to privacy. In the concept of Privacy by Design, the standard use of such guidelines in funded research would prevent some of the most egregious failures to innovate with basic concern for human dignities, which impact users of net neutrality-violating technologies. These are not difficult outcomes to effect, though they require attention to social science and basic regard for regulation.30
If international multi-jurisdictional legal doctrine fails to regulate the Internet, this arguably calls for more sophisticated transnational responses, a reality that legal systems and policymakers have embraced cautiously since the mid-1990s. Law is to a great extent the attempt to enshrine controls over technosocial systems; and given the technological and social challenges of controlling the Internet, combined with many lawyers’ understandable reluctance to make bad law that leads to ridicule of their technosocial incompetence, new approaches that offer behavioural guidance, rather than prohibition and control, have been attempted. These follow the behavioural science approach of using legal nudges to achieve regulatory aims, as discussed in Chapter 2.
Privacy remains a thorny issue, and is largely unregulated in developing countries. The wider issue of how Internet users of ‘free’ apps such as Facebook are being monetised by advertisers is associated with the net neutrality and zero-rating debates, and in particular with the correct policy responses. In countries such as Indonesia, where monthly Average Revenue Per User (ARPU) is only $2.20 for calls, texts and data, it is unsurprising that advertising is attractive as a further revenue partnership with zero-rated apps.31 Freischlad considers:
Users of zero-rated apps should definitely be aware that aspects of their browsing, downloading, and searching behavior are likely being recorded and analyzed, as both the zero-rated app itself and the sponsor who footed the bill are interested in monetizing this data further. Is there no alternative to sponsored data? It’s almost cynical: the most vulnerable people – low income communities just making their first steps on the internet – become easy targets of marketing messages and data mining.32
A much more popular service than Facebook (described by Pahwa as ‘privacy nightmares’33) is Jana Corporation’s mCent, a service that rewards users with mobile data for trying new apps – many of which are privacy-invasive. Trading privacy for basic Internet access is a daily occurrence for the reported 30 million mCent users.34
When considered next to such pervasive Internet policy problems as privacy or free speech, is neutrality an over-inflated sideshow, or a necessary precondition? Examination of national case studies helps to shed light on the extent to which net neutrality is an essential precondition for solving other, less technical, more politically accessible communications policy problems.
Regulatory concepts in multi-stakeholderism, co-regulation, algorithmic regulation and conceptual models of regulation processes are becoming mainstream, as is measuring the effect of multi-stakeholderism. The net neutrality case studies demonstrate the breadth and depth of emerging institutions and actors in regulation and governance, from hardware and BGP to international organisations and multi-stakeholder governance, to bottom-up communities creating innovative open network solutions.35 Moreover, many ideas to educate politicians about regulating the Internet have been accelerated by the Snowden revelations, causing an intense interest in Internet governance and net neutrality.
Net neutrality is an intensely complex regulatory problem, with implications for a research agenda into both regulatory impact assessment and regulation of support for science and technology. There is a continued lack of integration between the technical and social sciences in regulatory assessment.36 In 2001 I wrote: ‘The omission of research from nationally-oriented agendas due to funding and resource constraints, is compounded by the disciplinary gulf between social scientists and computer scientists.’37 In bridging this gulf, more systemic research is needed. Nature editors stated:
If you want science to deliver for society, through commerce, government or philanthropy, you need to support a capacity to understand that society that is as deep as your capacity to understand the science.38
That means using social science inputs to help support better regulation and governance of society.39 Integrating the social and technical sciences has been vital both to the innovation engine that the Internet represents and to its success. Now that the Internet, and digital information sharing more generally, is becoming the growth engine for the post-industrial economies, these lessons need to be reinforced throughout policy making, notably in both the assessment of regulation and the use of regulation of technology funding.
Governance of research funding can be controversial with respect to fundamental regulatory requirements for new technologies, notably in privacy but also in security, interoperability and other respects. Whereas regulation is often considered reactive, in the field of scientific research it can be proactive, saving vast amounts of time and expense by ensuring that innovation meets basic societal needs at the planning stage. Evaluation of funding needs to be connected to the policy process, to ensure that regulatory outcomes can be matched to potential innovations in technology. A classic case in point of regulation anticipating and responding to such concerns is the Internet of Things, where the Commission introduced several innovations in the policy process to address regulatory concerns, having been intimately involved in the funding of such components as Radio Frequency Identification (RFID).40 A rather less successful example might be net neutrality, where the funding of QoS was not accompanied by a commitment to ensuring an Open Internet with fundamental freedoms observed in implementing Specialised Services. As a result, such regulatory requirements have had to be retrofitted into the ongoing deployment of such technologies.41 Much time and effort can be saved by ensuring that regulatory requirements are addressed at an early stage in such processes, through standardisation and implementation of privacy impact assessments, for instance.
The EC Code of Practice Agora provides an example of a limited but successful umbrella gathering of experts on co-regulation, and may provide a template for a ‘foresight’ assembly of experts. Certain issues arise with regard to expertise versus advocacy mapped onto policy controversies, for instance on privacy and net neutrality.42
One of the central governance gaps in information technology policy has been one shared more broadly in techno-economic policy: markets are failing badly, with governments abandoning national strategies in favour of reliance on buccaneering hedge fund-led investment, which has proved neither far-sighted nor strategic for developed economies. While it is more newsworthy to bemoan the fate of the government-supported banking sectors in developed countries or the collapse of the British steel industry in mid-2016, and while the polity is obsessed with the failure of the European Union’s political-economic vision and the attempt by the British non-political class to leave the European project entirely in favour of neo-colonial ambitions with developing countries, the future of industrial policy lies at a crossroads.
Law can create property rights and use the machinery of government to spur the development of technology via both procurement of private sector expertise and the use of government funding to conduct research and development and primary research. In practice, many innovations were brought to market by a combination of both government procurement and government-funded research. Examples include virtually every technology that we can document from the ancient world and, more especially (given their archaeological prominence), the major civil works and transportation projects of earlier civilisations, notably sanitation, road and harbour building, great libraries, temples and mausolea of gods and emperors, city walls, forts and castles. In particular, military expenditure on innovation has played a well-documented role in technological innovation, and the literature on government expenditure is voluminous. Law played the role of authorising such government expenditure. The Internet uses common carrier public telecommunications systems based on ancient rights of way via what were formerly Roman roads and Victorian railway tracks and telegraph lines, for instance.
There is a particular need for policy making in dealing with disruptive innovation. ‘Black swans’ have been an issue of great interest to policymakers in the wake of the long recession and Euro crisis since 2008.43 Turk details the events in 2008–10 that led to the Reflection Group final report: the failed referendum in Ireland in June 2008; the collapse of Lehman Brothers in September 2008; December 2009, when the Treaty of Lisbon came into force; and the Greek and Euro crisis that began in March 2010 and is ongoing. One could add for information policy the various attempts to suggest cyberwar, such as the North Korea–Sony farce of December 2014; Assange’s work in Wikileaks since 2008; Snowden’s revelations in 2013; and the Brexit referendum result and Trump election in 2016.44 A pressing need is to strengthen the EC capacity in foresight for technological innovation in the area of Internet policy. Internet Science grew out of three pioneering and highly successful foresight exercises: Towards a Future Internet (TAFI),45 Reflection Group on the Future of Europe46 and EIFFEL.47 While the Internet and technological issues are prominently represented in EU strategic work such as ESPAS,48 there is a clear need for a much larger-scale foresight exercise identifying the many challenges that are presented by digital social innovation (DSI).49 A foresight panel would at least be the start of an attempt to identify some of the issues at stake and potential outcomes.50
Information technology policy is an aspect of industrial policy, a sector which has gained enormously from government funding for research and development (R&D). Mazzucato has shown how companies such as Apple, Hewlett Packard, Qualcomm, Motorola and Microsoft gained hugely from such investments, most notably at Xerox Parc but also in ongoing programmes. UK companies such as Marconi (until its demise in 2006), Vodafone and even ARM benefitted greatly from UK promotion of their international expansion via favourable tax regimes, tax breaks for research and development, and purchase of their products.51 For instance: ‘ARM uses legitimate tax exemptions and reliefs to minimise its tax liabilities. A large proportion of ARM’s products are developed in the UK, where the government offers R&D tax incentives, namely R&D tax credits and the Patent Box, to companies with R&D commitments.’52 A highly successful and innovative company, ARM licenses its technologies to 105 partner semiconductor chip companies to manufacture its products, selling 12 billion chips in 2014. However, it is feared by Mazzucato and Andy Grove (who led Intel for two decades) that the hollowing out of manufacturing to lower-cost locations (especially China) will lead to mass employment moving away from the developed nations that develop the technologies. Grove cited FoxConn, which manufactures on behalf of Apple and others, employing 1.3 million people mainly in China and its Taiwan base.53
The outsourcing of technology employment to developing-nation workforces became a major political issue in 2016. De-industrialisation is so advanced, especially in the United Kingdom, that hardware manufacturing at scale has been abandoned to other nations, especially those outside Europe. The European Commissioner responsible, Günther Oettinger, is German, and therefore more dedicated to skilled manufacturing than the British rentier class who occupy the Cabinet of the United Kingdom government. On 14 March 2016 Oettinger called for a ‘Gigabit infrastructure for the Gigabit economy’.54 He stated:
Everyone should enjoy adequate connectivity to fully benefit from digital opportunities and from Digital Single Market. For me the adequate level of connectivity is a Gigabit society by 2025.
This requires universal fibre connection deployment in only eight years.55
Ofcom’s November 2015 SamKnows traffic measurement showed that only the fastest UK consumer broadband product – VirginMedia’s ‘up to 200Mbps’ fibre service, with the ‘highest average actual download speed at 174.0Mbit/s’ and the only fibre-to-the-home option available – could reliably offer UHD: ‘Thirteen per cent of ADSL2+ packages streamed NetFlix videos reliably in Ultra High Definition (UHD), while this figure was over 90% for cable and FTTC services.’56 Fibre is increasingly accepted as the route to that ‘Gigabit economy’, yet the UK continues to claim that industrial policy should play little part in private providers’ deployment of higher speed connectivity, even though Ofcom-measured ‘average download speeds in urban areas (50.5Mbit/s) were over three times those in rural areas (13.7Mbit/s). The main reasons for this difference were the lower availability of fibre and cable broadband in rural areas and slower average ADSL and fibre-to-the-cabinet (FTTC) connection speeds.’57
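The gap between these measured averages and the bandwidth UHD streaming demands can be illustrated with a back-of-the-envelope check. This is a hypothetical sketch, not part of Ofcom’s methodology: the ~25 Mbit/s threshold is the figure commonly cited for UHD streaming, and the speeds are the Ofcom averages quoted above.

```python
# Sketch: compare Ofcom-measured average download speeds (quoted above)
# against an assumed ~25 Mbit/s sustained rate needed for UHD streaming.
# The threshold is an illustrative assumption, not an Ofcom figure.

UHD_THRESHOLD_MBITS = 25.0

# Average measured download speeds (Mbit/s) from the Ofcom/SamKnows data
measured = {
    "VirginMedia 'up to 200Mbps'": 174.0,
    "UK urban average": 50.5,
    "UK rural average": 13.7,
}

for name, speed in measured.items():
    verdict = "clears UHD threshold" if speed >= UHD_THRESHOLD_MBITS \
        else "below UHD threshold"
    print(f"{name}: {speed} Mbit/s -> {verdict}")
```

On these assumptions only the rural average falls short, consistent with Ofcom’s finding that rural ADSL connections could not stream UHD reliably.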
The need for fibre is evident, but the UK government is not investing in upgrading households beyond copper broadband service. The pursuit of sharing economy policy, inspired by Ayn Rand acolytes in venture capital, is sponsored by the British government but vociferously opposed and held in contempt by the Germans, and indeed much of the social democratic polity.58 Instead, European protection against Silicon Valley venture capital-backed attempts to overturn European-regulated accommodation and taxi services, amongst others, is daily backed by those constituencies on which business travellers depend.
Lawyers have been challenged by net neutrality and digital communication regulation for three reasons:
- technologies are fast-moving and require expert design choice to implement policy choices;
- the technologies are typically international, if not of the mythical ‘borderless’ character ascribed to them by libertarians; and
- enforcement of legislative will is difficult and uneven on the Internet, with the result that many more sophisticated types of legal instrument, including ‘soft law’ types, are required.
Each challenge and the stereotypical and failed legal responses are discussed in turn.
Regulation is a term of art used by lawyers to describe the broad set of attempts to control an environment using control systems. It has been extended to explain legal controls as one of a set of four modes acting on the environment to be regulated: law; architectural control (road planning, urban design or, in this case, technosocial construction of the software environment); social norms imposed by the community; and economic forces acting on the exchange of goods and services (including reputation and other intangibles) online.59 This gives law a much broader toolset with which to influence other environments than a narrow description of legislative and court-based prescriptions on particular behaviours based on ‘law in books’. For instance, governments authorised by law may fund standards bodies to develop new technical standards that enable better protection of privacy, exchange of information or an architecture of control. The development of the public Internet was in part enabled by law using public funds in public universities under state law.
Law also facilitates technological innovation by authorising the creation of the science base, from the corporations and non-profit bodies for higher and secondary education, to compulsory primary and then secondary education, to the protection of competition and its primary private sector exemption, intellectual property. Authorising the creation of the science base is critical to the development of digital technologies.60 Moglen explained that the corporate–political relationships:
extend in many cases back to the period immediately after [World War 2]. They have merely grown with time. The technical facilities that were covered by the arrangements went from telegraph to telephone, through rebuilding of the communication network destroyed in Europe.61
That continues in the broadband space. I explained:
The influence of the dominant super-power is greater than ever before, driven by ICTs. Where the British Empire was represented by telegraphs and railways, the US is represented by satellite television, Hollywood and the Internet …. ICT standards have driven the clustering of economic power within the most connected networks of private corporations globally.62
Law also plays a role in ensuring the standardisation of technologies: much of the early modern ‘weights and measures’ legislation comprised updated versions of Roman legal standards for measurement. Two examples of standard setting are P2P distribution networks for Internet content, and encryption. Standards have always been important. Legal standardisation plays a role in prohibiting the use of many non-standard technologies, which has created great legal controversy over time, notably in the adoption of technology standards that, it has been argued, are markedly inferior to their defeated rivals: the QWERTY keyboard for the English language and standard gauge rail are good examples,63 as is net neutrality according to some network engineers, as we saw in Chapters 3 and 6.64
Law can have its longest-run effect on technological innovation in institutionalising a dominant standard. The use of the IP suite for much of global communications was effected not by commercial standardisation, nor initially by law, but by government-funded (mainly university) researchers, as was the E2E principle. Their adoption as standards was an unwelcome, and still unwelcomed, surprise to the data communications industry. Debates in the third millennium about ‘network neutrality’ are in fact debates about the attempts by the telecommunications carriers to return to a more clockwork-Cartesian pace of technological development in data traffic, away from the innovative chaos unleashed by the unheralded arrival of IP and E2E in the 1980s. Note that even in more traditional telecommunications standards bodies, such as the intergovernmental International Telecommunication Union, the overwhelming majority of actual technical standard setting and development is carried out by companies, universities and affiliated researchers, before eventually receiving official approval.
Net neutrality can be seen as another iteration of the universal access problem, involving great complexity and difficulty.
Economic-technical efficiency and human rights reasons predominate in the emergent areas studied, with net neutrality as a case in point. Solutions could involve non-traditional methods such as complexity, behavioural solutions, co-regulation, filtering, private censorship, licensing and contracts, security audit and liability rules. A new institutional analytical model is emerging that is based in the policy networks literature, with epistemic communities built around issue areas such as net neutrality governance. Issues themselves have actors, accumulate people and define powers and instruments: applying an institutional context defines these emerging regulatory communities. Informational challenges for global public goods such as net neutrality are fundamental, including governance (with new venues for international state–firm diplomacy) and security (privacy and openness). The alarming lack of expertise revealed by the Snowden leaks makes more risible the escalating political calls for ‘cyberwar’, sanctions against nation states and retaliatory strikes between states; such calls hide a much more interesting need for informed debate about censorship, encryption-by-default and liabilities for net neutrality violation. The rise of the policy agenda surrounding privacy in the wake of Snowden’s revelations is often obscured by the surveillance industry’s calls for greater DPI-based intrusion against real or inflated risks rather than sensible evidence-led policy. Global public goods are too important in this sphere to be left to corporate lobbyists without a robust independent scientific evidence base. There is a need to nurture the independence of researchers who can robustly analyse real rather than invented risks.
In conclusion I offer some thoughts on methods. In searching for hard regulatory cases, the net neutrality case studies attempt to understand actors with empirical analysis of method; this is work that needs to continue. For example: Is multi-stakeholderism a reality? Multi-stakeholderism requires legitimacy, and there is therefore a need to expand our methodological toolset so that we understand how this legitimacy is constructed, in order to avoid creating the conditions for Potemkin (sham or empty) multi-stakeholderism. The case studies highlight sites and methods for undertaking this further research, but other methodological and empirical sites and approaches can be identified. To construct the possibility for real multi-stakeholderism, we need to understand the technical and social aspects of governance and to work constructively on developing methods in this area. Scientists and governments must urgently address the need to strengthen the research base in this vital agenda.