When the National Health Service Acts were passed in 1946 and 1947, two vaccines were part of the routine infant vaccination schedule. By 2018 these had increased to seventeen.1 Ninety-four per cent of children in England and Wales received the pentavalent diphtheria-tetanus-acellular-pertussis, poliomyelitis and Haemophilus influenzae type b vaccine before their first birthdays in 2014–15.2 Polio is no longer endemic in Britain and has nearly been eradicated worldwide. There were 18,596 notifications of diphtheria in England and Wales in 1945. In 2016 there were nine.3 For the public health profession, this has been a major achievement over a period of some seventy years. As we have seen, this progress has been neither linear nor consistent. Nevertheless, the mature vaccination system in Britain has created and reflects Jacob Heller's vaccine narrative – people believe that vaccines work, that they are safe and that they are an integral part of the modern, functioning British state.4 Anxieties over outbreaks such as the 2012 measles outbreak in Swansea also seem to suggest that vaccination is part of being a good British citizen.5 Vaccination is not simply imposed upon the British public. It is something which the public demands of its government and its fellow citizens.6
The preceding chapters have shown how the routine immunisation of children became the status quo in Britain after the Second World War. Modern vaccination programmes based on laboratory science and state-guided public health administration arrived on a national scale in the 1940s. The success of the anti-diphtheria campaign during the war showed both the Ministry of Health and the general public that vaccination could be an effective public health tool. The campaign built on advertising and education techniques employed in other jurisdictions in the inter-war period, and its lack of compulsion gave it credibility. These new health tools – born of modern vaccinology and free from the baggage of the compulsion and unpleasantness associated with smallpox vaccination – could now be exploited. During the 1950s and 1960s, improvements in research and manufacturing techniques led to new vaccines which could be introduced to a receptive public. Indeed, for the high-profile ones (such as Salk's poliomyelitis vaccine) there was active demand from citizens. But such demand was also tempered by concerns about other risks, such as vaccine damage, convenience and financial sustainability.
Thus, the public played a key role in shaping public health authorities’ priorities. The general trend was toward the increased use of vaccination, in terms both of the number of vaccines available and of percentage uptake among the population. This relationship between the public and public health led to an expansion of the vaccination programme and provided the authority for its maintenance. But this relationship still needed tending. Uptake was not always optimal, and occasional bouts of apathy (either across the population or in specific localised examples) required the intervention of MOHs and the Ministry of Health. In some cases, the government reminded parents of their responsibilities and the very real dangers posed by diseases that might return. In others, such as smallpox, general disinterest among the population, coupled with expert analyses of the risks posed by the disease and the vaccine, meant that the United Kingdom's smallpox vaccination programme was dismantled in advance of those of many other European countries.
The two crises outlined in Part II of the book, somewhat paradoxically, provide the best example of how the British public believed in vaccination. For while specific vaccines could become the centre of controversy at certain times, the vaccination system as a whole stood firm. With both pertussis vaccination and the MMR vaccine, immunisation rates recovered relatively quickly following the initial scares. Furthermore, uptake of vaccines that were not directly associated with whole-cell pertussis vaccine or MMR was not dramatically affected. Similarly, in both cases there were reports of parents demanding alternatives, such as vaccines without the whole-cell pertussis component, or separate vaccines in place of the trivalent MMR containing the Urabe mumps strain that had been identified as potentially dangerous. Nevertheless, the public understood the relative risks of disease symptoms and vaccines, and the inconvenience of presenting children for vaccination, differently to the epidemiologists advising the government. Faith in vaccination still relied upon the moral and political authority of the scientific and administrative communities that vouched for the safety and efficacy both of the vaccines themselves and of the mass immunisation programmes that underpinned them. In the aftermath of the thalidomide or BSE crises, or during major political debates about the viability and future of the welfare state, such authority was dented. Experiences with these crises led to a reappraisal of how vaccinators communicated with the public, producing a greater academic and administrative emphasis on hesitancy and decision making about vaccination. The hope was that analysing and monitoring for signs of faltering confidence could predict and prevent such crises before they occurred.
The five themes explored in this book – apathy, nation, demand, risk and hesitancy – all help to answer the main question posed in the Introduction: how did routine vaccination become normalised in Britain after the Second World War? In drawing together these ideas, this conclusion makes some final observations on a thread that runs throughout the chapters. How did the public fit into British public health over the post-war period? How was the public identified; and what was public about public health? These are important questions, given the centrality of the relationship between British citizens and the British government across the vaccination programme. This relationship drove the development of the vaccination schedule. As we have seen, the government had expectations of the population and, in turn, the population made demands on its government. But these demands did not remain unchanged. The same is true of the public.
Janet Newman and John Clarke have argued that publicness – that is, representations of the concept of the public – is a useful lens for discussing historical change.7 This form of analysis is designed to move away from solely talking about the public sphere. Partly this is because the public sphere is only one element of publicness; and partly it is because of the critique that many narratives surrounding the Habermasian public sphere describe a decline from a “golden age”.8 Moreover, the limits of publicness have varied across time and according to what sort of public is under discussion. Newman and Clarke thus draw attention to three ‘discursive chains’, of which the public sphere is only one. First, there is the belief that the public is embodied by citizens, or the people, which in turn represents the nation. Second, one can argue that the public is manifested in the public sector, which represents the actions of the state. Third, the public is created through legal and democratic value systems, best expressed through the aforementioned concept of the public sphere.9 This is the public space (physical or metaphorical) in which debates about the people and the state can be articulated. Each of these may have been considered more important relative to the others at different times or in different circumstances. This book has largely focused on how debates about publicness played out in the public sphere. Evidence of public activity is inferred and identified through official statistics, utterances in the press and the actions of voluntary organisations and representative bodies claiming to operate in the interests of the public. The two themes left to explore are what this public sphere activity says about the people and what this in turn says about the role of the state.
To tackle the first discursive chain, the public were discussed in public health discourse as “the population”. The way that the government constructed apathy in Chapter 1 exemplifies this. Defining the public through statistical returns – and then inferring public behaviours through changes in these statistics – was a common practice in the vaccination programme throughout this book. Rises and falls in uptake and morbidity (either over time or in comparison to other national and local authorities) were used to measure the success of vaccination efforts. Thus, the drop in the number of immunisations between 1949 and 1950 was considered troublesome in its own right. The solution, building on pre-existing ideas and conventions surrounding the dissemination of public health information by local MOHs, was for the Ministry of Health to focus its efforts on an advertising campaign. Resource constraints meant that it targeted its interventions on specific publics: those living in local authorities with low response rates relative to their peers. Over time, these statistical measures became more detailed, as did the means to analyse them. As Chapter 3 showed, the growth of the medical civil service both created and interpreted data for directing policy.10 The foundation, first, of the JCPV (1955) and, later, of the JCVI (1962) provided the basis for this. In later years, health researchers paid greater attention not just to immunisation figures but also to public attitudes through surveys. These had been performed as one-off studies for diphtheria in the 1940s but became routine from the late 1980s onwards through the Public Health Laboratory Service's COVER programme and regular studies of mothers’ attitudes to immunisation.11 These not only allowed for the identification of problematic behaviour within the public but also provided baseline measures to evaluate any intervention into such behaviour.
This approach – which would form the basis of discussions around hesitancy in Chapter 5 – showed how conceptions of the public had evolved. Instead of semi-arbitrary target figures like 75 per cent for diphtheria or smallpox vaccination in the 1950s, epidemiologically and politically derived goals came from within the Department of Health and from internationally agreed standards with the WHO. Increasingly, outbreaks of manageable diseases became an embarrassment to the British authorities and the British public. During the MMR crisis, part of the education and risk communication campaign emphasised how other nations used immunisation. Advanced nations were supposed to avoid outbreaks of vaccine-preventable diseases; less-developed nations experienced them regularly.
Statistics were also used to sell the narrative that vaccination was not just for the good of the individual, but also a sign of modernity, technological advancement and national pride. The national scope and character of the vaccination programme were therefore significant. Even where programmes were administered at the local level, they required national direction, financing and oversight. The national government's priorities and actions had been important in the inter-war period too. Experiences in other countries with diphtheria immunisation and BCG for tuberculosis had influenced the ways in which those vaccines were introduced in Britain.12 Similarly, constant comparisons with the United States’ IPV programme drove the course of the IPV campaign in Britain during the 1950s and early 1960s. While these discussions mainly concerned vaccines and the science surrounding them, they also reflected deeper ideas about who the “British public” were in “British public health”. The nation (as highlighted in Chapter 2) came across strongly through the smallpox campaigns. Here, the British public was a body that needed to be protected from outside infection. Sometimes this was from foreign people, as seen with the reaction to Pakistani immigrants in the 1961/62 outbreak. At other times, foreign places were seen as the contagion, with people merely the mules, such as Australasian tourists in the 1950 outbreak. Even today, visitors to “exotic” countries are often obliged to receive vaccines against diseases such as yellow fever. Campaigns to control infectious disease also worked on a national level. This was true in their administration – decisions about national policy were taken jointly between the four home nations – and in their goals. The concern over apathy towards the diphtheria programme was that a disease on the verge of elimination within Britain might return.
The demand for polio vaccine in the 1950s reflected the British public's call to be protected from the disease, with vaccination and eventual eradication considered to be the most sensible way of achieving this. The management of risks, as discussed in Chapter 4, was also constructed at a national level. Immunisation was offered to all children in Britain to protect the entire population living there. At the same time, risks could be localised. This localisation could be geographic, such as with concentrated attempts to improve diphtheria or poliomyelitis vaccination in certain local authority areas; or demographic, such as with targeted rubella vaccination campaigns in the 1970s for girls and young women, or foreign-language adverts in South Asian newspapers. As discussed, internationally agreed targets would become increasingly important from the 1970s through the WHO and the rise of global public health initiatives.
While the state clearly defined the public in these administrative terms, the public also spoke back. Through this we can see that the government's definitions and treatment of the public did not always accord with the public's demands and expectations. Indeed, while it is clear that the British public demanded protection by the government against threats to the British public's health, it did not at all times agree that mass routine vaccination was the only or preferred solution. With smallpox, parents were more likely to avoid routine childhood vaccination than to present their children, but the immediate threat of disease could change behaviours. Fear was a motivating factor. In areas where there were outbreaks of smallpox, thousands would queue for hours outside the MOH's clinic for emergency vaccination. There appeared to be “soft” support for the polio vaccine among the general public, but it was only when a prominent footballer died that young adults presented themselves for vaccination in large numbers. Even with the success of the diphtheria programme in the 1940s, interest was revived in the 1950s by leveraging the greater demand for protection against whooping cough and creating multi-dose vaccines. The Ministry of Health had hoped to revive demand for diphtheria immunisation as a good in its own right, but had to be pragmatic in order to achieve its public health goals. The pertussis and MMR crises also emphasised that the public weighed risks very differently to the government in some circumstances. For not only were people worried about infectious disease, but they were also anxious about the risks of the vaccines themselves. 
Voluntary and consumer organisations weighed in on the debate in this period, reflecting and creating a greater demand for choice and transparency in health-care decision making.13 While the government demanded that the public continue to use the government-approved vaccine, many parents sought alternative forms of protection (such as separate vaccines or abstention from the process entirely until safety could be guaranteed). The government did retain a degree of authority throughout the period, as did the narrative that modern states and scientific methods could protect people through vaccination. Uptake recovered within a few years of both crises, reflecting a deeper long-term confidence in vaccination and the belief that it could protect the public from dangerous diseases. But in periods where the credibility of administrative, scientific and political establishments was under strain, the conditions were ripe for crises of confidence in vaccines and vaccination programmes.
If the government played a key role in defining and responding to the public as a population, it is also vital to interrogate the role of the public sector, or the second of Newman and Clarke's discursive chains. The government's use of bureaucratic and statistical tools was by no means restricted to public health policy. It reflected a wider shift in governance in Britain and other liberal democracies from the mid-twentieth century onwards.14 Vaccination grew in importance during a period in which technocratic, state-led solutions to complex social problems were considered both viable and desirable. Newman and Clarke argue that the post-war period and the foundation of the welfare state mark a point where the public sector became a much more visible and important aspect of publicness.15 This builds on T. H. Marshall's idea of the post-war period as an era of social rights, one in which health care and the wider welfare state became integral to the function of modern government.16 Martin Moore has shown how public health and general practice increasingly routinised health care, a process that accelerated after the fiscal crisis of the 1970s. A greater emphasis on preventative medicine meant that the control of chronic conditions (or, in the case of vaccination policy, the risk of infectious disease outbreaks) became politically necessary, in line with the government's financial priorities. At the same time, developments in bureaucratic technologies for identifying and managing such risks had been harnessed and promoted by health professionals and co-opted by the state.17 As Virginia Berridge has argued, post-war public health is characterised by the use of mass media, evidence-based medicine and a focus on individual behaviour.18 The chapters of this volume also emphasise these changes. But what does this activity say about how the state constructed the public? 
Moreover, what does this say about what responsibilities the government felt that it had towards the public's health, and about what the public demanded from its government?
Government approaches to risk, as highlighted in Chapter 4, help to explain the relationship between the public sector and citizens during this period. Primarily, the state intended to reduce the burden of infectious disease through vaccination. In the 1950s, apathy was problematic because it risked the return of diphtheria as a common and widespread disease. Intervention through education and pressure on local authorities was considered a necessary health response because a deterioration in this element of public health was unacceptable to both the government and the general public. Similarly, the demand for IPV in the 1950s stemmed from the public's desire for protection from infectious disease and the belief that the British state had a duty to provide such protection. At the same time, collective responsibility for vaccination was re-emphasised through campaign literature and posters. While the private choice of parents remained, and compulsion was never re-introduced, citizens were expected to vaccinate both for the good of their own children and for the collective health of the nation as a whole. Such concepts of health citizenship were internalised as well as being imposed by government campaigning.19 However, the risks to be managed through such behaviour changed over time. Once the state had succeeded in reducing the burden of infectious disease, it then sought to ensure that those infections did not return. Achieving eradication and then maintaining it required different forms of communication. The public had contradictory expectations with regard to disease management. On the one hand, parents had ceased to be overly concerned about diseases that were now so rare that few had direct experience of severe complications or death. To some extent this was evident in the diphtheria programme in the 1950s, but it was considered especially prominent with pertussis in the 1970s and measles in the 1990s.
On the other hand, reports of the increased morbidity of vaccine-preventable diseases reflected poorly on the government and the nation as a whole. These contradictions flared up in the pertussis crisis when the risks of both a whooping cough epidemic and a potentially dangerous vaccine had to be weighed against each other. In part, this led the government to strengthen its public health measures, such as incentivising general practitioners to vaccinate the entire child population, and the increased use of multi-dose vaccines like MMR.
This relationship between the public and the public sector was ever changing. This can be shown through the way in which hesitancy evolved as an analytical tool in the 2010s, as detailed in Chapter 5. Apathy was construed as a passive state by public health authorities in the 1950s with regard to diphtheria. In the twenty-first century, they were much more likely to talk about decision-making processes, hoping to influence these through effective risk communication. British governance structures had become more concerned with risk management and harm prevention in the post-war period.20 Vaccination was no different – but the risks identified by public health authorities and the public changed over time. As in other arenas, risk became increasingly identified with financial cost. In the 1950s, apathy presented the possibility that diphtheria morbidity would cease to decline, and perhaps even rise again. Outbreaks of diphtheria remained the major cause of concern. As the vaccination programme established itself, however, these immediate threats dissipated. Many vaccine-preventable diseases became rare, meaning that any outbreak was damaging to the government's reputation. By the 1980s, disease rates were framed not so much as human tragedies as financial ones. Thus, the vaccination system became a front-line tool in reducing unnecessary public expenditure, and an investment whose benefits far outweighed the potential costs. These could be measured more accurately due to increased statistical monitoring both within Britain and by bodies such as the WHO.
The public, too, expressed risk in different ways. The general swell of approval for poliomyelitis vaccines showed a demand for protection from the disease. In the 1970s, such demands for protection were framed by voluntary organisations, consumer groups and advisory bodies. Moreover, while the public clearly felt that it was the job of the public sector to protect it from disease, it also expected protection from other dangers – which could include the vaccine itself. It would be simplistic to say that the public became less compliant with government advice over the post-war period. This was not some great rebellion born of the 1960s. Publics were non-compliant in the 1950s, as seen both in the decreasing uptake of smallpox and diphtheria vaccines among some populations and in the demands for improvements to the polio vaccination programme. Indeed, the acceptance rate of MMR in the mid-1990s – exceeding 90 per cent – suggests that in some ways there was greater compliance from the public in the later period. Rather, when there was a breakdown in trust between the public and the government, for reasons not always directly related to the science of vaccines or vaccination, the break was more dramatic and more vocal.
In the autumn of 2017, hepatitis B was added to the British childhood vaccination schedule.21 The clamour over extending the meningitis vaccine to all children also shows that parents continue to demand that the government protect them from infectious disease.22 In 2017–18, uptake of MMR remained well above 90 per cent in all home nations of the United Kingdom. By the standards of the 1960s, this is a remarkable achievement. The vast majority of people appear to accept the general narrative that vaccinations work, that they are safe and that they are an integral part of a modern, functioning state.23
The challenges facing public health authorities today are different from those of some eighty years ago, when nation-wide diphtheria immunisation was introduced. Throughout the period covered by this volume, the government's attempts to improve uptake were based mainly on increasing national vaccination rates. At times, this meant focusing on particular local authorities and specific vaccines. But, broadly, successes or failures have been measured by increases or decreases in national statistics relative to various targets. Today, mature vaccination programmes are having to deal with different threats. There are geographically and socially concentrated communities whose children are under-vaccinated for a variety of reasons. Some of these communities have been convinced that certain vaccines do not work – such as through the influence of anti-vaccination campaigners on the Somali-American community in Minnesota.24 Others, particularly middle-class parents, doubt the need for or safety of vaccines produced by pharmaceutical companies and governments whose motives they find suspicious.25
There is a narrative among some in the public health community that this is inevitable. Public health is a victim of its own success.26 Robert Chen and Beth Hibbs offer a model of the evolution of an immunisation programme. As vaccine coverage increases, the disease declines; but through increased usage, the number of adverse events also climbs. At a certain point, this results in a loss of confidence. Because the disease is rare, the adverse events get more media coverage. As vaccination rates dip, an outbreak of the disease occurs – which is also rare, and therefore newsworthy. This restores faith in the vaccine until, eventually, the disease is fully eradicated.27 Of course, this is an overly simplified model for explaining vaccine crises. Chen and Hibbs were writing after several pertussis scares in different countries, but before the MMR crisis started to bite. It neatly encapsulates, however, why apathy may set in and how this can lead to a drop in confidence in a particular vaccine. It also shows that public health researchers and practitioners are aware that progress towards disease eradication is rarely linear.
There is a clear issue here. These debates have been refracted through the politics of public health. Health authorities believe that vaccination is a universal good, that highly vaccinated populations are healthier and, therefore, that people ought to present themselves and their children for vaccination when required. This is a political view with which many have sympathy. But it means that explanations of the public's behaviour have centred on why people do not vaccinate and what can be done to get them to change their minds. Since the early 2010s, work on vaccine confidence has begun to focus on the factors that affect parents’ choices, but much of this work is also used to assess hesitancy.28 Since parents’ decisions are known to be affected by their communities, local and national circumstances, and attitudes towards medical science, culture plays a key role. And if parents behave differently according to culture and space, they also behave differently across time. This is where history can offer insights for vaccination debates. The demand for IPV in the 1950s might inform debates about vaccine confidence; the apathy around diphtheria might say something about complacency. Perhaps more pertinently, the debates around vaccination policy in Britain since the Second World War show us that the ways in which the public's behaviour has been described have also been historically specific. Vaccine confidence and hesitancy are simply the latest such constructions, now operating in a global context. By approaching these issues historically, we can explore these other constructions of public behaviour and see how publics have responded to authorities in different ways.
To end, it is worth taking another historical perspective – that of scale. Public health authorities believe that we are heading into another period of crisis. The return of measles in North America and several European countries has left governments puzzled and worried in equal measure. In Italy, there have been violent clashes with doctors over the introduction of stricter compulsion laws.29 Protests in California have followed the decision to end conscientious objection to vaccination for parents wishing to enrol their children in public schools.30 Even in Britain, where the spectre of compulsory smallpox vaccination loomed over the twentieth century, there is serious consideration of whether forcing parents to vaccinate their children might improve public health outcomes.31 Opponents appear to be emboldened by online communities, protests and legal cases. A 2017 ruling in the European Court of Justice appears to pave the way for circumstantial evidence to be accepted as proof of vaccine injury in compensation cases, rather than scientific evidence “beyond reasonable doubt”. The consequences for vaccine manufacturers could be devastating – and, given the already-rising costs of vaccines and the declining number of companies producing them, this could threaten the stability of vaccination programmes that have taken decades to evolve.32
In a world where the risk of measles and other infectious diseases can be managed through vaccination, it is not surprising that governments and publics have sought to ensure that protection is universally available. But, as Mark Drakeford and Ian Butler demonstrate in their work on scandals, crises have to be manufactured. They do not emerge value-free from scientific facts.33 If there is a crisis today, it is a historically specific one. The 1950s IPV supply crisis was rooted in post-war anxieties about Britain's place in the world and the unfulfilled promises of technological progress. The 1970s pertussis crisis emerged alongside deep concerns about the role and function of the British welfare state. The 2000s MMR crisis flourished in an age of mass media, the internet and mistrust of political and medical authority following a host of scandals. So what of today's crises? They are portrayed as a result of a declining faith in science, the rampant individualism of certain types of parents and a sense that we have forgotten just how deadly measles, polio and diphtheria really were. This is not just a crisis about declining vaccination rates – it reflects a wider moral panic about globalisation, “post-truth politics” and the lack of faith in traditional forms of expertise.
Uptake of key vaccinations has stalled or even declined among certain populations. Nevertheless, uptake remains historically high. Given that well over 90 per cent of children in the United Kingdom receive the recommended vaccines, the vaccine narrative appears to have survived among the vast majority of the public. The events and processes described in this book do not deny that there is a crisis brewing today. But they suggest that this too shall pass. Public health authorities are in the unenviable position of seeking a perfection which they may never attain. The public may elude precise measurement and may not completely comply with official advice. For the most part, however, they want and demand vaccination – for (and of) themselves and others.