This chapter outlines how diabetes re-emerged as a concern of central government during the late 1970s, setting the scene for the move of managed care from clinical settings to policy arenas. It does so by examining the tribulations of efforts to secure Department of Health and Social Security funding for retinopathy screening and photocoagulation treatment trials between 1977 and 1985. The trials were by no means the biggest intervention that central government made into diabetes care during the 1970s and 1980s. Examining their history, however, reveals the ways in which post-war policy networks developed in relation to diabetes, and the shifting ways in which they framed diabetes to garner government attention in a period of considerable economic and political change. Crucially, underpinning debates about the trials were new concepts of risk management, disease prevention, and standard-setting that became central to policy discussions of diabetes care and managed medicine at the end of the century.
The concluding chapter draws together the preceding themes to show how the British vaccination programme changed from the 1940s to the 2010s. It examines how these changes can give an insight into the deeper relationship between the public and the public health authorities that purport to act on their behalf. It argues that the relationship between the two was not entirely top-down. Public action – either directly expressed or inferred through various surveillance and governance structures – was a key driving force behind policy changes and initiatives. The longer view of vaccination policy, including periods of relative calm as well as crisis, shows how this relationship changed over time and was inextricably linked to wider political concerns. The chapter argues that twenty-first-century crises such as the measles outbreaks in North America and Europe in the 2010s are also historically contingent. Whether disease or vaccination rates are “too high” or “too low” is based on contemporary conceptions of risk, health citizenship and our relationship to public health authorities.
This chapter uses the diphtheria programme of the 1950s to explore the theme of apathy in British vaccination policy. Following the success of the wartime immunisation campaign in reducing morbidity and mortality from diphtheria, there was a sharp decline in take-up at the end of the 1940s. The Ministry of Health attributed this to apathy among the public – particularly mothers who no longer feared diphtheria because it was no longer common. However, this interpretation required a view of the public as both ignorant of health risks and amenable to education. Furthermore, it made assumptions about the responsibility of parents to protect their children even though vaccination was not compulsory. Diphtheria immunisation recovered, and the disease was virtually eliminated by the early 1960s – but not necessarily because of the Ministry’s centralised propaganda. Local medical officers made significant efforts to make immunisation more convenient, including through the provision of combined vaccines that reduced clinic visits and offered protection against diseases that parents feared more.
This chapter introduces the historiography of the British welfare state, vaccination and public health, and sets out the book’s structure. It argues that while much attention has been given to the various controversies in British vaccination policy, this obscures the long periods of relative calm. Even during crises, most parents continued to vaccinate their children with individual vaccines and, overall, take-up has increased markedly since the 1940s. The chapter therefore reframes the debate to ask why vaccination became normalised during the post-war period, and draws attention to the role of the public as both a recipient and a shaper of public health priorities. This question is then explored through the following five chapters, examining five key themes – apathy, nation, demand, risk and hesitancy. The first three themes are covered in Part I of the book, showing how the modern vaccination programme became established. Part II details the pertussis and measles-mumps-rubella (MMR) vaccine crises and how they exposed the limits of public support for vaccination and the welfare state.
This chapter examines the twenty-first-century public health concept of hesitancy by placing it in a wider historical context. Hesitancy as an analytical category was developed by social scientists and adopted by the World Health Organization and national health authorities to explain the numerous vaccine crises that had occurred worldwide over previous decades. In Britain between 1998 and 2004, a significant drop in measles-mumps-rubella (MMR) vaccine take-up followed a series of media stories suggesting that it might cause autism. Initially, the government sought to refute this through a typical education campaign but was forced to adopt new strategies of risk communication. The internet had become an important tool for vaccine sceptics to spread doubt and for uncertain parents to seek information. Although the vaccination rate eventually recovered, many of the criticisms of the government and the vaccine during this period reflected deeper anxieties on the part of the public regarding the motives and competence of medical and political authorities in the 1990s and early 2000s. The MMR crisis was a product of a particular historical moment, and the construction of hesitancy that followed is coloured by this.
Part II begins with an examination of what the pertussis (whooping cough) vaccine crisis of the 1970s tells us about risk. The management of risk was an integral part of post-war public health and, indeed, of modern nation-states. The risks associated with infectious disease for both the state and individuals had to be weighed against the risks associated with specific vaccines. In the 1970s, reports that the pertussis vaccine might cause brain damage in some children resulted in a significant drop in take-up. A campaign for social security payments for children suffering from vaccine injury was successful, showing how the vaccination programme was tied to wider political concerns within the welfare state during a period of financial retrenchment. These debates are contrasted with those over the provision of rubella vaccine to girls and young women, where voluntary organisations demanded that the government should provide many more resources to the programme.
This chapter focuses on the example of the inactivated poliomyelitis vaccine (IPV) programme in the 1950s and early 1960s to show how the public expressed demand for vaccination services. On the one hand, the government struggled to raise the registration rate for the vaccine to target levels. On the other hand, parents and the media became increasingly frustrated over a series of supply crises. Some of these were caused by an inability or unwillingness to import American vaccine to cover shortfalls in production by British pharmaceutical companies. Others were caused by surges in demand, such as the rush by young adults to get the vaccine following the death of professional footballer Jeff Hall. Thus, demand was a major problem for the British government. Demanding parents could force policy responses (such as a commitment to import more vaccine). Surges in demand could stress the system to breaking point. But a lack of demand also threatened the Ministry of Health’s wider public health goals. The supply issues were only fully resolved after the introduction of the oral polio vaccine (OPV) in 1962.
In this chapter the decline of the routine smallpox vaccination programme is used to examine the theme of nation. While smallpox had been eliminated from Britain in the 1930s, occasional importations by air and sea showed the vulnerability of the nation to external public health threats. Moreover, since the disease often came from postcolonial Commonwealth nations – notably India and Pakistan – racialised views of threats to public health became more common during periods of anxiety about immigration and Britain’s place within the international community. The government attempted to combat declining vaccination rates through publicity campaigns, but struggled to convince the public to comply with its guidance. The public was not anti-vaccination, as shown by the demand for vaccination as a form of epidemic control when outbreaks occurred. However, by showing little enthusiasm for vaccination, coupled with the declining statistical and emotional threat of the disease during the 1960s, the British public helped to create the conditions for the removal of routine childhood smallpox vaccination in 1971 – years before the disease’s official eradication and before other European nations followed suit.
Vaccinating Britain investigates the relationship between the British public and vaccination policy since 1945. It is the first book to examine British vaccination policy across the post-war period and covers a range of vaccines, providing valuable context and insight for those interested in historical or present-day public health policy debates. Drawing on government documents, newspapers, internet archives and medical texts, it shows how the modern vaccination system became established and how the public played a key role in its formation. British parents came to accept vaccination as a safe, effective and cost-efficient preventative measure. But occasional crises showed that faith in the system was tied to contemporary concerns about the medical profession, the power of the state and attitudes to individual vaccines. Thus, at times the British public demanded more comprehensive vaccination coverage from the welfare state; at others it eschewed specific vaccines that it considered dangerous or unnecessary. Moreover, the public did not always act uniformly, with “the public” capable of expressing contradictory demands that were often at odds with official policy. This case study of Britain’s vaccination system provides insight into the relationship between the British public and the welfare state, as well as contributing to the historiography of public health and medicine.