This chapter explores the formal emergence of local systems of managed diabetes care, and situates them in relation to tools used to integrate hospital clinics and primary care into shared care arrangements. The respatialisation of care in the 1970s and 1980s, together with a growing emphasis on surveillance and blood glucose control, raised questions about how patient care could be effectively co-ordinated. In response, GPs and specialists drew upon a rich culture of regulatory bureaucracy within British medicine and mobilised a combination of tools – from recall systems and medical records to local care protocols – to regulate the timing, nature and content of medical engagements. These tools embodied an increasingly standard view of ‘good diabetes care’, and inherently ordered medical labour. The implicit politics of these instruments, however, became explicit within a context of mounting political and professional concerns about professional competence, and in relation to anxieties about the delegation of care to previously inexperienced practitioners. Especially once practitioners began to use standards to audit care, this ‘technology of quality’ subjected routine practice to a novel form of bureaucratic management and provided new forms of evidence for later national initiatives.
Through a study of diabetes care in post-war Britain, this book is the first historical monograph to explore the emergence of managed medicine within the National Health Service. Much of the extant literature has cast the development of systems for structuring and reviewing clinical care as either a political imposition in pursuit of cost control or a professional reaction to state pressure. By contrast, Managing Diabetes, Managing Medicine argues that managerial medicine was a co-constructed venture between profession and state. Despite possessing diverse motives – and though clearly influenced by post-war Britain’s rapid political, technological, economic, and cultural changes – general practitioners (GPs), hospital specialists, national professional and patient bodies, a range of British government agencies, and influential international organisations were all integral to the creation of managerial systems in Britain. By focusing on changes within the management of a single disease at the forefront of broader developments, this book ties together innovations across varied sites at different scales of change, from the very local programmes of single towns to the debates of specialists and professional leaders in international fora. Drawing on a broad range of archival materials, published journals, and medical textbooks, as well as newspapers and oral histories, Managing Diabetes, Managing Medicine not only develops fresh insights into the history of managed healthcare, but also contributes to histories of the NHS, medical professionalism, and post-war government more broadly.
This chapter outlines how diabetes re-emerged as a concern of central government during the late 1970s, setting the scene for the move of managed care from clinical settings to policy arenas. It does so by examining the tribulations of efforts to secure Department of Health and Social Security funding for retinopathy screening and photocoagulation treatment trials between 1977 and 1985. The trials were by no means the biggest intervention that central government made into diabetes care during the 1970s and 1980s. Examining their history, however, reveals the ways in which post-war policy networks developed in relation to diabetes, and the shifting ways in which they framed diabetes to garner government attention in a period of considerable economic and political change. Crucially, underpinning debates about the trials were new concepts of risk management, disease prevention, and standard-setting that became central to policy discussions of diabetes care and managed medicine at the end of the century.
The concluding chapter draws together the preceding themes to show how the British vaccination programme changed from the 1940s to the 2010s. It examines how these changes offer insight into the deeper relationship between the public and the public health authorities that purport to act on their behalf. It argues that the relationship between the two was not entirely top-down. Public action – either directly expressed or inferred through various surveillance and governance structures – was a key driving force behind policy changes and initiatives. The longer view of vaccination policy, including periods of relative calm as well as crisis, shows how this relationship changed over time and was inextricably linked to wider political concerns. The chapter argues that twenty-first-century crises such as the measles outbreaks in North America and Europe in the 2010s are also historically contingent. Whether disease or vaccination rates are “too high” or “too low” is based on contemporary conceptions of risk, health citizenship and our relationship to public health authorities.
This chapter uses the diphtheria programme of the 1950s to explore the theme of apathy in British vaccination policy. Following the success of the war-time immunisation campaign in reducing morbidity and mortality from diphtheria, there was a sharp decline in take-up at the end of the 1940s. The Ministry of Health attributed this to apathy among the public – particularly mothers who no longer feared diphtheria because it was no longer common. However, this interpretation required a view of the public as both ignorant of health risks and amenable to education. Furthermore, it made assumptions about the responsibility of parents to protect their children even though vaccination was not compulsory. Diphtheria immunisation recovered, and the disease was virtually eliminated by the early 1960s – but not necessarily because of the Ministry’s centralised propaganda. Local medical officers made significant efforts to make immunisation more convenient, including through the provision of multi-dose vaccines to reduce clinic visits and offer protection against diseases that parents were more fearful of.
This chapter introduces the historiography of the British welfare state, vaccination and public health, and sets out the book’s structure. It argues that while much attention has been given to the various controversies in British vaccination policy, this obscures the long periods of relative calm. Even during crises, most parents continued to vaccinate their children with individual vaccines and, overall, take-up has increased markedly since the 1940s. The chapter therefore reframes the debate to ask why vaccination became normalised during the post-war period, and draws attention to the role of the public as a receiver and forger of public health priorities. This question is then explored through the following five chapters, examining five key themes – apathy, nation, demand, risk and hesitancy. The first three themes are covered in Part I of the book, showing how the modern vaccination programme became established. Part II details the pertussis and measles-mumps-rubella (MMR) vaccine crises and how they exposed the limits of public support for vaccination and the welfare state.
This chapter examines the twenty-first-century public health concept of hesitancy by placing it in a wider historical context. Hesitancy as an analytical category was developed by social scientists and adopted by the World Health Organization and national governments to explain the numerous vaccine crises that had occurred worldwide over previous decades. In Britain between 1998 and 2004 a significant drop in measles-mumps-rubella (MMR) vaccine take-up followed a series of media stories suggesting that it might cause autism. Initially, the government sought to refute this through a typical education campaign but was forced to adopt new strategies of risk communication. The internet had become an important tool for vaccine sceptics to spread doubt and for uncertain parents to seek information. Although the vaccination rate eventually recovered, many of the criticisms of the government and the vaccine during this period reflected deeper anxieties on the part of the public regarding the motives and competence of medical and political authorities in the 1990s and early 2000s. The MMR crisis was a product of a particular historical moment, and the construction of hesitancy that followed is coloured by this.
Part II begins with an examination of what the pertussis (whooping cough) vaccine crisis of the 1970s tells us about risk. The management of risk was an integral part of post-war public health and, indeed, of modern nation-states. The risks associated with infectious disease for both the state and individuals had to be weighed against the risks associated with specific vaccines. In the 1970s, reports that the pertussis vaccine might cause brain damage in some children resulted in a significant drop in take-up. A campaign for social security payments for children suffering from vaccine injury was successful, showing how the vaccination programme was tied to wider political concerns within the welfare state during a period of financial retrenchment. These debates are contrasted with those over the provision of rubella vaccine to girls and young women, where voluntary organisations demanded that the government devote far greater resources to the programme.
This chapter focuses on the example of the inactivated poliomyelitis vaccine (IPV) programme in the 1950s and early 1960s to show how the public expressed demand for vaccination services. On the one hand, the government struggled to raise the registration rate for the vaccine to target levels. On the other hand, parents and the media became increasingly frustrated over a series of supply crises. Some of these were caused by an inability or unwillingness to import American vaccine to cover shortfalls in production by British pharmaceutical companies. Others were caused by surges in demand, such as the rush by young adults to get the vaccine following the death of professional footballer Jeff Hall. Thus, demand was a major problem for the British government. Demanding parents could force policy responses (such as a commitment to import more vaccine). Surges in demand could stress the system to breaking point. But a lack of demand also threatened the Ministry of Health’s wider public health goals. The supply issues were only fully resolved after the introduction of the oral polio vaccine (OPV) in 1962.
In this chapter the decline of the routine smallpox vaccination programme is used to examine the theme of nation. While smallpox had been eliminated from Britain in the 1930s, occasional importations by air and sea showed the vulnerability of the nation to external public health threats. Moreover, since the disease often came from postcolonial Commonwealth nations – notably India and Pakistan – racialised views of threats to public health became more common during periods of anxiety about immigration and Britain’s place within the international community. The government attempted to combat declining vaccination rates through publicity campaigns, but struggled to convince the public to comply with its guidance. The public was not anti-vaccination, as shown by the demand for vaccination as a form of epidemic control when outbreaks occurred. However, by showing little enthusiasm for vaccination, coupled with the declining statistical and emotional threat of the disease during the 1960s, the British public helped to create the conditions for the removal of routine childhood smallpox vaccination in 1971 – years before the disease’s official eradication and before other European nations followed suit.