In April 1990, the Conservative government issued a new contract to general practitioners (GPs) working within the British National Health Service (NHS). The negotiations around the contract had been troubling for GPs. Whilst not the sole point of dispute, many practitioners found novel performance-related pay provisions to be particularly unwelcome departures from previous arrangements. Despite gaining concessions, GPs rejected multiple offers until a frustrated administration decided to simply impose the contract.1 So far as remuneration was concerned, the government felt strongly that new incentive payments and targets were essential. They would, the government believed, simultaneously raise standards of service and enable primary care to confront a range of public health concerns, not least those associated with ‘chronic disease’.2
The management of diabetes mellitus was one area of chronic disease care that the contract sought to improve. Political interest in diabetes had developed slowly over the twentieth century. During the 1930s, prominent British clinicians had warned that ‘deaths from diabetes were as numerous as those from all infectious diseases put together’, and estimates of the condition's prevalence rose steadily over the post-war period.3 Likewise, medical professionals regularly referred to increases in workload and escalating consultations for the disease during the 1970s and 1980s; new technologies and understandings of risk management had extended the boundaries of treatment, whilst greater life expectancy and disease detection buttressed changes in demography, employment, leisure, and diet that probably underpinned increased incidence.4 Strong policy networks had been established around the condition by the early 1990s, and lobbyists drew government attention to diabetes’ growing financial and human costs. Responding to these concerns, the GP contract included incentive payments for special diabetes management clinics.5 Focused treatment within primary care would, the Department of Health hoped, provide a cost-effective way to reduce troubling rates of diabetes’ long-term ocular, renal, neuropathic, and cardiovascular complications.
Notably, the contract itself contributed to the government's broader programme of reform for the NHS. Imbued with ‘neoliberal’ views about the political importance of competition in national life, the government introduced market-like mechanisms of devolved budgeting and contracting to the NHS. These policies also put into operation widely held beliefs that subjecting medical practitioners to managerial instruments would reduce costs and improve quality of care.6 Under the new arrangements, for instance, receipt of financial incentives for diabetes management was contingent upon health authorities reviewing practice records. If satisfied that care aligned with locally agreed protocols – documents that codified the facilities, tests, and treatment processes considered necessary for good patient management – authorities would approve payment to GPs.7 Similar practices were extended to other areas of healthcare. Concurrent service reforms required all purchasing authorities to benchmark performance indicators for new contracts, and all practitioners were compelled to undertake medical audit to highlight areas for improvement.
Although mandated by British health departments, these activities were to remain predominantly professionally led. Local committees comprising hospital clinicians, GPs, and technical staff would support audit activity, whilst the Royal Colleges and elite specialist organisations produced national care guidelines and minimum datasets to inform local developments. Crucially, in terms of diabetes management, these bodies intended their standards to be used by hospital doctors as much as by primary care teams, and they stressed the need for local systems to bridge the community–hospital divide. Through these and similar measures, managed medicine became central, not just to diabetes care, but also to the NHS.
Looking closely at the measures introduced for diabetes care, we can see how the reforms of the early 1990s consolidated a post-war transformation in British medicine. Across the twentieth century, doctors considered diabetes an incurable condition, one characterised by a chronic state of raised blood sugar and subject to lifelong management to abate symptoms and correct disturbed metabolic functions. Patients were responsible for performing daily acts of treatment and self-surveillance, with practitioners setting the parameters of therapy, assessing ongoing care, providing patient education, and monitoring for the earliest signs of devastating (though increasingly treatable) complications. Between the 1930s and the 1950s, clinicians, civil servants, and politicians agreed that good long-term diabetes care rested on two foundations: firstly, that patients were provided with regular access to experienced, specially trained doctors for clinical review and ongoing advice; and secondly, that these doctors were employed within well-staffed and fully equipped hospital facilities to enable comprehensive disease surveillance. Specialist outpatient clinics embodied the ideal, most efficient arrangement of resources, and questions of organisation generally concerned how best to distribute clinics geographically to maximise patient contact.8 So far as managing medical practice was concerned, clinical decision-making might have been supported by a range of informal practices (peer advice, training in a firm, even the formatting of records), but medical skill provided the basis for good care. It was a belief echoed across different areas of medicine: ‘there are wide fields … of individual judgement and skill in general medical practice’, declared one report in the 1930s, ‘that disciplinary action cannot enter and where attempts at minute control and supervision would be harmful’. Rather, it concluded, ‘the quality of the service will depend mainly on the quality of the entrants’.9
By the 1990s, however, faith that specialist practitioners, individual skill, and organised clinics alone could guarantee good care had disappeared. Laboratories, experience, and education were still important features of medical and political discourse. Now, though, neither policy-makers nor specialist practitioners considered them sufficient safeguards of quality. Instead, management of professional care teams – and thus of disease management itself – had come to be seen as the key to better patient care and improved public health. Over the preceding eighty years, doctors and their care teams had mobilised a range of tools – from patient registers and recall systems to specialist records and care protocol – to place patients with diabetes under increasing surveillance. By the end of the century, these same tools were consciously used to specify, divide, and integrate the responsibilities of spatially dispersed teams, as well as to subject the very timing and processes of patient management to codification and review. Although unconnected to mechanisms of punishment, these instruments were designed to be disciplinary: once integrated into practice, they were to set the rhythms and content of care, and to make deviations visible to practitioners and their peers for justification or correction.10 Managerialism, moreover, was expressed more broadly through new structures of national and international clinical government, in temporary and enduring institutions dedicated to advising on, and auditing, managed care.11
Today, objective-setting, standards production, guidelines, and auditing are widespread features of risk management and organisational governance.12 Particularly in medicine, it appears almost common sense that ‘best practice’ should be methodically laid out in evidence-based guidelines or national frameworks, that state agencies should encourage adherence to these standards, and that a range of state and non-state bodies consistently review performance.13 Institutions have grown up, in Britain and around the world, to support, guide, and monitor not just medicine, but the management of medicine.14 Indeed, it is the knowledge that such activities produce, rather than the technology of bureaucratic management itself, that draws popular comment.15
Yet programmes for structuring and reviewing care embody a very specific iteration of medicine, one which emerged slowly during the post-war period, and which became established during the early 1990s. This approach to medical practice was predicated upon a radical restructuring of trust in professionals that took place during the late twentieth century.16 Where once politicians, employers, and the public professed faith in the self-regulation and tacit knowledge of trained practitioners, they now demanded formal mechanisms of oversight, rituals of verification, codified standards documents, and incentive payments.17 As was the case in finance and associated areas of welfare provision, the remaking of relations of trust in medicine built upon a series of scandals, sustained political attacks, and popular critiques of experts and professionals that emerged from the 1960s onwards.18 At the heart of many calls for change in British medicine, however, were medical professionals themselves.19 In recent years, neither medical scandals nor external criticism of medical care has ceased. Public trust in doctors and healthcare practitioners remains high, but a series of interrelated political, economic, cultural, intellectual, and technical transformations in post-war Britain has also rendered medical professionals subject to previously unthinkable managerial technologies, created in the name of quality.20 Through its history of diabetes management in post-war Britain, this book explores these transformations and asks how British medicine was so extensively subjected to management over the second half of the twentieth century. Who promoted managerial mechanisms, and why? And what connected new forms of clinical management with the rise of chronic disease control as a political and medical concern?
Managing medical professionals
To some extent, these are questions that scholars have previously sought to answer. One body of literature, for example, has cast the creation of systems for professional management as predominantly state-driven.21 Here, the global economic crises of the 1970s are seen to have undermined the funding assumptions of welfare states the world over.22 In Britain, state support for clinical guidelines and audit structures supposedly developed as a response to this turmoil, serving, as in the USA, to regulate clinical activity and to remove costly variations in healthcare through standardisation.23 Such efforts, moreover, are seen to have productively intersected with ‘New Right’ theories of government and economy that became prominent in British politics during the 1970s.24 Within this framework, guidelines (and other forms of clinical government) thus formed part of a broader remaking of public services, motivated by an ideological distrust of welfare professionals, and a desire to curtail professional autonomy through private-sector accountability techniques.25
Such broad-brush accounts, however, have often downplayed the role of healthcare professionals in constructing the means for their own management, or have portrayed them as successfully restrained or co-opted by the state. To be sure, competing analyses have contradicted arguments of state success. Here, scholars have suggested that medical professionals responded effectively to political and administrative pressures, moving to maintain collective autonomy at the cost of reduced individual clinical freedom.26 Nonetheless, such interpretations still cast professional activity as a rear-guard campaign fought in opposition to the state. ‘Managerialism’, moreover, is taken to represent an external, state-originated construct that ran counter to ideals of medical professionalism, ideals predicated upon collective control over standard-setting and work content.27 Thus, much of the extant literature has tended to understate the complicated, often synergistic, relationships between state agencies and professional actors upon which national systems for professional management were built. Such accounts have also overlooked connections between care guidelines, audit structures, and a broader history of bureaucratised care stretching back before and across the post-war period.28
With a focus on diabetes care in twentieth-century Britain, this book reinstates the active role of practitioners – particularly GPs and specialists – as partners with state and non-state agencies in the development of tools and systems for professional management. In what follows, the opening three chapters position the first managerial care systems as local developments. Hospital clinicians and GPs developed models for integrated and structured care in response to growing disease prevalence, strained NHS resources, and shifting understandings of diabetes on the one hand, and as part of professional projects and longer trends towards bureaucratisation of medicine on the other. The instruments produced subjected the rhythms and processes of care to codification, and the local use of audit introduced elements of review. The subsequent three chapters then chart the political career of new models of care, moving with specialists and advocates in turn between clinic and surgery, and between national and international policy fora. In so doing, this work builds on a small number of historical studies that situate tools of clinical management in a context of professional politics and concerns over quality.29 Whilst positioning the promotion of technologies for professional management in terms of cultural and political anxieties about professional accountability, it suggests that specialists and elite medical bodies were not simply reacting to external pressures; rather, elite doctors and academics actively shared these concerns. Apprehensions about accountability and variation informed the development of new technologies, and motivated specialist agencies (and the Royal Colleges) to reposition themselves as governors of medical quality. Despite being sceptical about neoliberal programmes to remake the NHS, professional bodies forged common ground with government departments and statutory organisations over the managerial principles and practices that sat at the centre of their mutual (though somewhat misaligned) political projects.
Reinstating the active role of healthcare professionals in the history of managed medicine, however, does not mean negating the role of ‘the state’, conceptualised here as a loose collection of political institutions, statutory agencies, regulatory organisations, local and central government departments, welfare bodies, judicial and police systems, and quangos, funded by public monies.30 At the end of the twentieth century, the medical profession remained closely entangled with the British state. Parliamentary legislation empowered central medical bodies to set educational and disciplinary standards for registered practitioners, and secured for doctors their monopoly supply of labour to tax-funded institutions.31 Likewise, government departments sought to use clinical and public health expertise to both devise and legitimate central health policy, and continued to depend upon medical professionals to staff health services.32 It was through such extended connections, moreover, that the policy of elected governments could be influenced by state officials and professionals alike.33 Over the post-war period, professionals and their organisations interacted with civil servants, health authorities, ministers, and Parliament to construct elements of managed medicine, and elite practitioners worked through statutory agencies to give guidelines and audits greater authority.
The history that follows, therefore, remains a political history as much as a story of technical developments or professional manoeuvrings. Indeed, any history of managed medicine within the British health services must include politics in both its broadest and most traditional historiographical senses. On the one hand, like the scientific medicine of the late nineteenth and early twentieth centuries, managed medicine was promoted by certain segments of the medical profession and associated academic institutions.34 It required a reordering of resources, institutions, practices, and relations of labour. Healthcare practitioners seemingly accepted some of its forms and practices as uncontroversial, even useful. Yet debates about how clinical guidelines might produce unfeeling and unthinking ‘cookbook medicine’, removing skill and individuality from practice, also highlight how professionals regarded the reworking of their quotidian lives as inherently political.35
On the other hand, managed medicine developed through the pressure and support of more formal political actors, such as government ministers, the civil service, Parliament, and political parties, as well as more dispersed state institutions. Like other parts of the post-war welfare state, the NHS owed its existence to the centralising and collectivising political impulses of Britain's post-war reconstruction, and British medicine continued to be influenced by shifting political and economic tides. Traditionally, historians have debated these currents in terms of ‘consensus’, the extent to which the three decades after 1945 were characterised by broad policy agreement between elite figures in Whitehall and the major political parties, with all sides supporting a mixed economy, a predominantly Keynesian fiscal policy (to maintain full employment), and a generous welfare state, comprising tax-funded education, social security, and healthcare free at the point of use.36 It was a framework of policy-making that supposedly ended with the radicalism of Margaret Thatcher's Conservative governments (1979–90).37
The existence of consensus, however, is of less importance here than the economic trajectories and changes in frames of policy-making that characterised the five decades after the Second World War.38 In more recent work, scholars have traced the shifting sands of British politics, providing deep analyses of social and economic planning under the Clement Attlee (Labour) governments of the 1940s;39 the return to market-oriented policies pursued by the Conservative governments of the 1950s;40 the conflicts embedded within revived planning of the 1960s, beginning under Harold Macmillan's Conservative government (1957–63) and accelerating under Harold Wilson's Labour premiership (1964–70);41 the shifts between denationalisation, corporatism, and spending restraints noted within the governments of the 1970s led by Edward Heath (Conservative, 1970–74), Wilson (1974–76), and James Callaghan (Labour, 1976–79);42 and the complex, contested policy-making around markets and statecraft of the 1980s and 1990s.43 Moreover, historians of the welfare state have situated policy in relation to government spending and Britain's post-war economic fortunes. The twenty-five years after 1945 have been described as an economic ‘golden age’, during which strong underlying growth funded an expansion of welfare services.44 Yet a focus on average rates of Gross Domestic Product (GDP) conceals Britain's turbulent post-war experience.45 Periodic bouts of stop-go growth, inflation, international credit concerns, and instabilities in currency and balances of trade strongly influenced government policy, and British welfare services were often under severe pressure to limit spending across the whole post-war period.46
These economic and political trends influenced the development of managed medicine in Britain in two key ways. Firstly, Britain's erratic economy produced a financial environment within which the growing population with diabetes outstripped the available resources for care. To shift or reduce the expense of patient management, pioneering healthcare practitioners developed innovative forms of service delivery, and multiple disciplines and institutions – especially those related to health economics and service research – were forged to assess medical practice and to ensure that public monies were spent effectively.47 Secondly, as in the case of patient consumerism, managerial reforms considered characteristic of a later period emerged from longer-term political trajectories and medical innovations.48 From the 1950s onwards, government departments and health authorities – supported by patient bodies, international organisations, and think-tanks – expressed a desire to use data from service monitoring to influence professional decision-making.49 In the context of scientific innovation and undesired increases in expenditure, it was hoped that the provision of information would guide medical practitioners to more effective and resource-minded care. Furthermore, as political parties came to emphasise the importance of technocratic planning during the 1960s, health departments experimented with expanded information systems, new advisory bodies, and multi-disciplinary management structures, believing that the incorporation of experts and clinicians into formal structures would provide the knowledge and legitimacy for more effective activity.50 Innovations even stretched to Whitehall during the 1970s, as ministers and civil servants developed various techniques of objective-setting, programme review, and resource management.51 Although not effective in the ways envisaged, these developments facilitated academic interest in evaluating medicine, and provided political capital for discussions of integrated, multi-sited treatment schemes from which initial technologies for professional management emerged.
Within this context, the growing influence of neoliberal political rationality in British governance after the 1970s can be read anew. Successive British governments during the 1980s and 1990s believed that exposing the central state to the practices and institutions of the so-called private sector provided the key to transforming public services, making them more efficient, less costly, more enterprising, and able to be managed more effectively at a distance.52 Putting these convictions into practice, Conservative administrations introduced new contracting and performance management arrangements into the NHS, and supported professional efforts to set standards and review practice as a means to benchmark commissioning and enhance accountability. In so doing, however, these governments not only found a platform to cajole, and co-operate with, professional bodies over managerial technologies. They also built on earlier political and medical innovations, with the reforms of the 1980s reorienting tools and subjects developed for planning in the 1960s.53 The Thatcher governments demonstrated a greater drive to build managed medicine into policy, but they did not originate it.
Where this work departs from the extant literature is in its focus on diabetes care.54 Although this is a fascinating topic in its own right, much of the historical literature on diabetes and its management has used the condition to examine essential features of modern medicine, from changing models and social relations of knowledge production to the role of new technologies and economic practices in redefining disease and patient experiences.55 Examining the ways in which British doctors – together with civil servants, government departments, international health organisations, patient associations, and academics – sought to control diabetes illuminates hitherto hidden connections between chronic disease and the creation of systems designed to discipline professional labour.56
In essence, this work argues that long-term disease posed challenges to a health system initially organised to treat acute cases.57 Over the post-war period, the number of NHS patients with diabetes (and other long-term conditions) grew considerably. In 1951, for instance, the eminent physician R. D. Lawrence estimated that 0.3 per cent of the population had diabetes, with a further 0.3 per cent having asymptomatic forms.58 By 1991, the British Diabetic Association (BDA, a mixed lay and professional organisation established in the 1930s) estimated total prevalence to be near 2 per cent, or around 1 million people with diabetes in England and Wales alone.59 Without the capacity for cure, or resources to cope with rising demands, doctors had to devise new ways to treat increasing numbers of ambulant patients with long-term disease, and bureaucratic surveillance became central to tracking patients not directly under hospital observation. Healthcare teams combined existing tools in new ways to ensure that novel forms of organisation – particularly dispersed elements and institutions of community care – were able to function effectively.60 Appointment systems, recall mechanisms, and patient registers tracked patients and proactively regulated the temporality of oversight; mobile records inscribed and communicated longitudinal data to inform long-term treatment decisions; and letters and records communicated what action had been taken, so that care did not need to be concentrated in one location. As teams grew, care protocol also formally allocated responsibility and guided practitioners on appropriate clinical activity, organising care along the lines of bureaucracy in the hope that treatment would be integrated and standards maintained.61 Operating at the intersection between primary and secondary care, doctors, managers, and health service planners believed diabetes to be at the forefront of these developments, casting it as a model chronic disease whose management strategies might be generalised to other conditions.62
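A rough check on how these estimates fit together (the census figure is my addition, not the source's): the 1991 census recorded a little under 50 million people in England and Wales, so a prevalence of near 2 per cent implies roughly

$$0.02 \times 50{,}000{,}000 \approx 1{,}000{,}000$$

people with diabetes, consistent with the BDA's figure of around 1 million.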
At national level, political interest in diabetes intensified during the final two decades of the century. As government explicitly designed policy to confront ‘chronic disease’ during the 1980s and 1990s, conditions and risk factors central to the concept were subject to renewed efforts at cost-control.63 Diabetes became a condition of rising concern. Specialist practitioners, health economists, civil servants, and the BDA all lobbied ministers, and policy networks produced quantified measures of the costs of the disease and its complications.64 With the government interested in new forms of professional management, chronic diseases like diabetes provided promising subjects for piloting new programmes. Healthcare teams were already using many of the tools required for implementation, whilst elite professional bodies and international organisations were creating standards documents, clinical guidelines, and model audit systems. There were alternative routes to promoting managed medicine. Some surgical teams, for example, pioneered new forms of management.65 Yet chronic disease control proved pivotal, not just because of its cost implications and broad policy appeal, but because managerial technologies were already common in clinical practice, and because care penetrated both the hospital and the GP surgery.
Managing medicine before 1945
These longer-term influences, however, stretch back before the Second World War. Between the mid-nineteenth and early twentieth centuries, for instance, a recognisable ‘profession’ of medicine emerged. During this period, practitioners began to more effectively organise themselves on a national basis, and they made sustained ideological claims to professional status, expertise, and authority, successfully converting esoteric knowledge into market control and self-regulation.66 Discourses of autonomy derived from specialist knowledge outside the purview of the lay person, moreover, had a substantive impact on doctors’ identities and work patterns into the 1900s, buttressed by social networks and training.67 In Britain, the 1858 Medical Act laid the foundations for professional identity and status.68 With the creation of the General Medical Council, the state charged a small committee of elite doctors with maintaining a register of licensed practitioners, disciplining those found guilty of ‘infamous’ behaviour, and overseeing formal medical education in approved institutions. Practitioners themselves were thus placed in control of training and regulating their fellow members, positioning standard-setting and professional freedom as principles to be fiercely defended.69
Whilst this history means that examining the actions of ‘medical professionals’ in the post-war period makes analytic sense, serious professional divisions persisted after 1858. The intensification of specialisation, for instance, generated heated disputes over the second half of the nineteenth century. As Granshaw has highlighted, in the absence of a co-ordinated system, the development of specialist institutions siphoned patients away from GPs, as well as from general hospital consultants engaged in medical education.70 With their livelihoods and status challenged, generalists attacked specialisation as a dangerous innovation of little medical value.71 Such criticism, moreover, carried an ideological edge. Opponents condemned specialists for focusing on specific diseases and isolated parts of the body. Localised perspectives conflicted with a prevailing holistic medical culture, and generalists continued to argue that the effective understanding and treatment of illness required disease to be placed in the context of the whole patient.72
Into the twentieth century, concerted opposition to specialisation faded, as administrative pressures within an emergent healthcare system and a drive for professional unity encouraged the development of referral mechanisms, systems of integrated care, and specialist departments in general hospitals.73 Though these compromises smoothed tensions, however, they were not a panacea. Even as specialisation became common amongst consultants, professional conflict continued to occur.74 Links between universities, medical schools, teaching hospitals, and state bodies deepened over the inter-war years, reinforcing divisions between hospital practitioners and rank-and-file GPs. Such interconnections also created new tensions between gentlemanly, individualist consultants and a cadre of specialist academic practitioners dedicated to research.75 Gradual educational changes ensured that qualified professionals shared a broad outlook and occupational experience by the post-war period, and over time more robust group identities formed. Nonetheless, internal divisions persisted, and splits were most visible during times of great institutional and political change.76
Since the mid-nineteenth century, then, medical professionals in Britain have rarely acted with one voice, and have been dependent upon the state for much of their authority. Indeed, rather than working in opposition to the state (as doctors frequently claimed), medical professionals and the state have consistently been partners. Though the 1858 Medical Act was intended, at least in part, to raise standards and protect the public, nineteenth-century doctors also hoped it would harness state authority to limit competition and protect the economic interests of an overcrowded profession.77 The initial legislation disappointed many registered practitioners, but political developments since 1858 increasingly folded the profession within state apparatus.78 Through the 1911 National Insurance Act a significant proportion of GPs were contracted into state-funded care.79 The Local Government Act of 1929 extended state capacity for hospital work, and before this municipal schemes had sought to co-ordinate private, charitable, and local state services.80 Finally, the creation of the NHS in 1948 consolidated the central role of the British state in funding medical practice, establishing doctors as welfare professionals whose autonomy of decision-making was guaranteed in exchange for working within fixed budgets.81
These changing relations of medicine before 1948 had considerable implications for medical professionals and service management. In a landmark article, Steve Sturdy and Roger Cooter suggested that the remaking of financial arrangements in British medicine – and particularly the expansion of state funding – drove the creation of new corporate hierarchies and divisions of labour between 1870 and 1950.82 Along with charitable investment and burgeoning international connections, these arrangements provided support to laboratory-oriented scientific medicine, and to the standardised views of physical bodies and disease common to scientific and administrative systems.83 Crucially, the pursuit of institutional efficiency during these decades also reinforced a managerial ethos in health service organisation, with work divided and re-integrated in order to maximise output within available resources.84 Amid the accumulating structures of nationalised care, medicine during this period gradually took on the bureaucratic forms and rationalities characteristic of post-Enlightenment ‘modernity’.85 Techniques of abstraction, classification, mapping, grouping, and division had been central to a host of administrative, commercial, and scientific enterprises across Europe and its colonies.86 Through the remaking of medicine's institutional and social relations, the individualistic tendencies of British practitioners were slowly overcome, and administrative practices were more intensively applied to construct new subjects, ‘chart’ bodily and organisational domains, and pursue efficiency.87
It is important not to exaggerate the extent of change experienced before 1950. In outpatient clinics, claims to efficiency seemingly outstripped practical achievements.88 Elsewhere, individualistic diagnostic categories and prescribing habits persisted into the mid-century.89 Yet the forms of organisation and practice introduced during this period left a legacy for the post-war decades. The creation of the NHS reinforced the dominance of an academic elite over British hospital practice, and some British-trained doctors found working relationships in the service, as well as rules governing employment and practice ownership, rigid enough to warrant emigration.90 The basic units of medical work, moreover, were standardised through the spread of common tests, drugs, diagnostic labels, and bodies, thus providing the foundation for more tightly defined and managed care.91 Indeed, with the creation of multi-sited, multi-disciplinary clinical trials, the inter-war period also produced material and intellectual precedents for managed work.92 Once trials were integrated into the fabric of the health services, they offered doctors experience with protocol, statistical assessment, and models of teamwork that could be drawn upon when designing new systems of structured care and professional management. These technologies also placed knowledge about efficacious treatment outside the individual, to be determined through systematic research, and thus rendered practitioners more open to regulation.93 Previously embodied, tacit knowledge became communicable.94
As the 1950s came into view, though, more closely managed medical work was not inevitable. The transformations of the late nineteenth and early twentieth centuries produced values, projects, instruments, and organisational forms that fed into managed medicine as it emerged during the 1980s and 1990s, but earlier developments did not determine end results. Managed medicine had to be created through the determined action of a range of professional, state, and lay actors, within the shifting political, social, and cultural circumstances of the post-war period. As expensive concerns that linked primary and secondary care, chronic diseases provided important testing grounds for new approaches to medicine.
Diabetes, chronic disease, and managed medicine
Diabetes’ historical status as a model chronic disease gives it analytical power for studying the emergence of professional management. During the late 1950s and the 1960s, clinicians, epidemiologists, and social medicine researchers began to discuss ‘chronic diseases’ as a coherent category. Key figures often used what was then known as ‘maturity onset’ or later ‘non-insulin-dependent’ (now type 2) diabetes to discuss some core characteristics of chronic diseases: gradual and asymptomatic onsets; long-term or incurable natures; and profound social and economic repercussions for individuals, communities, and nations.95 Equally, doctors and nurses often saw diabetes management as a model for pioneering efforts at co-ordinated shared care between hospitals and GPs, one from which practitioners engaged in other forms of long-term disease management might learn.
However, diabetes and its management also have histories that distinguish the condition from others that contemporaries included within discussions of ‘chronic disease’, such as cancer or hypertension.96 The medical and political understanding of the disease has changed significantly over time, in ways that have often made its management rather idiosyncratic. Three features are worth highlighting and considering at length. Firstly, doctors developed a quantified and bureaucratic culture of management earlier than in other chronic conditions. Secondly, clinicians, epidemiologists, and public health doctors remained divided over causative factors for diabetes, and rarely promoted primary preventive approaches. Thirdly, medical and nursing professionals formed the leading edge of the BDA, possibly the first patient-advocacy group in Britain. Through the Association, specialists created networks and connections with state agencies and other elite professional bodies. Each of these factors influenced the ways in which diabetes related to managed medicine, and to a broader concept of chronic disease.
Quantifying diabetes in the nineteenth and twentieth centuries
Typically, both academic histories and clinical texts trace the existence of diabetes at least as far back as ancient Greece, where the term ‘diabetes’ originated.97 For the present work, though, the most significant developments in the definition and management of the disease occurred in the nineteenth century. Until this point, understandings of the mechanisms and causes of ‘diabetes’ (or differently labelled states with similar symptoms in non-Greek traditions) had varied considerably between times, places, and practitioners. Despite such variations, physicians defined diabetes in symptomatic terms, diagnosing it upon noting unquenchable thirst, excessive urination, wasting, and/or extreme hunger.98 Though some ancient physicians from the non-Greek world had discussed similar diseases marked by ‘honeyed urine’, British doctors did not explicitly discuss urinary sweetness until the seventeenth century, and the term ‘diabetes mellitus’ was coined only in the later eighteenth century.99
During the nineteenth century, diabetes came to be slowly transformed in British medical discourse and practice, in line with broader epistemological and structural changes in ‘Western’ medicine.100 As hospitals grew in importance as centres of medical care, training, and research, medical perception became reorganised around new forms of clinical examination. Lesions and clinical signs – observable only to trained practitioners through skilled examination and technology – became more fundamental than symptoms described by patients in diagnosing and managing disease.101 The meaning of lesions themselves, moreover, was to be found in relation to scientific observation of the dead and, in the later nineteenth century, in relation to experimentation on the living.102 These broad trends were not totalising. The exact importance of scientific knowledge and clinical experience in deciphering sign and symptom varied between practitioners and cases during the nineteenth century.103 Patients also continued to exercise some influence over medical thought and practice, with greater ‘passivity’ having to be learned.104 Nonetheless, in terms of diabetes, physicians of the nineteenth century extended experiments with glycosuria (sugar passed into the urine) from the previous century.105 Chemists produced tests to enable easier assessment of urine content, and the clinical sign of glycosuria became as important as symptoms in the diagnosis of disease.106
As well as contributing to a change in disease understandings and patient profiles (new tests shifted the boundaries of who might be diagnosed), these innovations fed into management.107 Whilst quantitative examination of glycosuria began as a research practice during the 1860s, physiologically minded clinicians like Frederick Pavy (Guy's Hospital, London) used new tests to monitor the extent to which a variety of diets reduced bodily glucose.108 British doctors had prescribed diets to inhibit the body's production of sugar since the early nineteenth century.109 After the mid-century, though, glycosuria testing allowed diets to be more finely titrated to affect bodily outputs, following a broader trend of turning ‘abnormal’ diagnostic signs into quantitative markers of therapeutic success.110 By the early twentieth century, hospital practitioners had added more markers to their clinical assessments (most notably acids (ketonuria) and nitrogen passed in urine), buoyed by the elevation of basic laboratory practices in pre-clinical training and by the international and imperial expansion of physiological research into diabetes and nutrition.111 Inter-war innovations even made routine assessment of blood glucose practicable.112 Though some GPs and specialists disagreed about the necessity of blood testing in ongoing management, by the 1920s authoritative writers had cast diabetes as a disease of the general metabolism, with hyperglycaemia (elevated levels of glucose in the blood) as its diagnostic sign, and glycosuria and ketonuria as markers of therapeutic performance.113
Unlike many other conditions later conceived as chronic diseases, diabetes thus had quantified, biochemical management programmes by the early twentieth century. Initial ‘stabilisation’ involved fasting, careful calculation of carbohydrate, fat, and protein in test diets, and monitoring of physiological changes to assess efficacy.114 This emergent system of biochemical review and therapeutic adjustment was further strengthened with the spread of insulin therapy in Britain, after early trials with the drug in 1923.115 Insulin facilitated some changes in approach. As a powerful therapeutic agent (enabling cells to take up glucose circulating in the blood), insulin offered hope to patients who did not take well to planned diets. With insulin, doctors could afford greater leeway on dietary constraints, and they came to emphasise psychological and social factors, as well as biochemical measurements, in devising and assessing treatment.116 Nonetheless, change had limits. Despite a growing consideration of subjective wellbeing in treatment, clinicians continued to insist on the importance of laboratory-based surveillance and quantified cultures of care. Ensuring a balance of diet and insulin – as measured through biochemical indices – remained central to therapy, as did achieving acceptable metabolic control.117 Doctors thus sought to maintain a central role in long-term disease management. Patients were charged with daily acts of self-care, but clinical teams retained responsibility for establishing balance in the parameters of individual therapy.118 Being too lenient might result in hyperglycaemia and ketonuria, risking symptoms and acute complications; being too austere might have iatrogenic consequences, with injections rendering blood glucose levels too low, triggering the novel danger of hypoglycaemia.
In light of these challenges, between the 1890s and 1920s doctors developed a range of tables, graphs, and calculations to assist assessment of diets, insulin requirements, and therapeutic success.119 They also created new records to monitor biochemical trends and record ongoing treatment. In fact, by the mid-twentieth century, some hospital wards had developed a considerable documentary culture around diabetes management, and records for treatment and laboratory results provided important resources for guiding medical and nursing practice.120 Creators of new integrated care programmes later developed similar instruments to co-ordinate activity between practitioners. Although some post-war clinicians expressed doubts about the relationship between hyperglycaemia and the development of long-term complications, the close links between diabetes care and laboratory practices thus provided diabetes management with well-developed cultures of quantification and standardisation. Such features made setting and auditing process and outcome standards simpler than for other conditions after the 1970s, and by the 1990s made diabetes care an attractive area for pioneering new target-oriented managerial frameworks.121
Diabetes, chronic disease, and risk
Although quantified management programmes were a common feature of diabetes treatment during the post-war period, the exact content of care varied between patients. Before the twentieth century, physicians had made rough divisions between ‘types’ of patient to provide indicators for diagnosis, therapy, and prognosis. On the one hand, they discussed diabetes with an onset early in life, marked by acute wasting and death following coma. On the other, they wrote of diabetes with later onset, often seen in overweight patients, who tended to live longer but in whom certain ocular, nervous, and kidney complications could occur.122 Soon after insulin became widely available, clinicians modified their discussions, dividing patients who needed insulin to stave off significant hyperglycaemia, ketonuria, and death from those who did not. Until the 1960s, these criteria roughly equated to classifications of ‘moderate’ or ‘severe’ diabetes (generally affecting thinner patients, with acute onset at a young age, treated on diet and insulin) and a supposedly ‘mild’ form of the disease (generally appearing in overweight patients manageable on diet, with onset in middle age).123 Into the 1960s, doctors began to refer to ‘juvenile’ and ‘maturity onset’ diabetes respectively, with these terms replaced during the late 1970s and 1980s by insulin-dependent diabetes (now type 1) and non-insulin-dependent diabetes (now type 2).124 Researchers from diverse disciplinary backgrounds suggested different forms of classification, proclaiming new types and sub-types, over the century.125 However, clinicians predominantly classified patients on the basis of liability to coma and response to treatment, with the latter determining therapeutic trajectory and patient experience.
Many of the difficulties in sub-classifying diabetes emerged from uncertainty about its cause. Historically, the condition has been defined and diagnosed by an intermediate effect of pathology – elevated blood glucose – and its potential symptoms or risks, rather than by any specific lesion or trigger. During the 1940s, Harold Himsworth, Professor of Medicine at University College Hospital (London), even suggested that the diversity of disease trajectories in diabetes might have resulted from the label's function as an umbrella term, grouping together different problems connected by common pathophysiological processes, biomarkers, and management programmes.126
This is not to suggest that doctors before the mid-twentieth century lacked theories about causation. During the second half of the nineteenth century, physicians redeveloped older models of disease that equated illness with imbalance, suggesting stress, exposure, alcoholic excess, and ‘violent mental emotion’ as potential triggers in older patients.127 Such ideas persisted into the early decades of the twentieth century, and clinicians like R. D. Lawrence considered ‘worry’ and ‘overstrain’ alongside heredity, over-eating, obesity, accidents, infections, and other diseases as potential ‘immediate cause[s]’.128 Lawrence admitted, however, that the causes of many ‘acute’ cases remained ‘complete mysteries’, and no clear consensus emerged on the precise aetiology of diabetes even after 1945.129
Doctors were somewhat uncertain about the aetiology of many chronic diseases in the second half of the twentieth century. After the mid-1950s, the novel application of epidemiological methods to chronic diseases meant that discussions of causation frequently centred upon multifactorial models of onset and statistical assessment of risk.130 Except for the case of smoking and lung cancer, it was rare for clinicians, epidemiologists, and public health doctors to implicate a single factor as triggering disease.131 Instead, medical debates about prevention came to focus on the relative contribution of numerous so-called ‘modifiable’ risk factors (such as diet, exercise, or physiological abnormalities), and preventive programmes were oriented around three levels of intervention: primary prevention (stopping the onset of disease, by either promoting healthy practices or encouraging cessation of ‘risky’ ones), secondary prevention (instituting early treatment and arresting serious progression of particular conditions), and tertiary prevention (managing long-term complications to prevent further physical deterioration).132 Although not disappearing completely in Britain, analyses of economic and social determinants of health moved to a minor key.133 New approaches to causation and prevention took time to become established, and not all parties agreed about the importance of specific risk factors for specific diseases.134 Nevertheless, doctors still instituted primary preventive programmes for many chronic diseases. Despite strong disagreements over the possible causes of heart disease, for instance, national advisory bodies of the 1970s and 1980s offered preventive advice on smoking and dietary intake. Equally, hospital doctors and GPs proposed targeted, routine blood pressure assessment, and control of patients diagnosed with hypertension.135 Even private companies turned debates about cholesterol and dietary fats into profit-making opportunities.136
As will be noted in Chapter 1, doctors, state agencies, and international organisations spent much less time discussing primary preventive strategies for diabetes than those for other conditions. Between the 1940s and 1960s, some theories about causation were advanced. Several public health doctors implicated sedentary lifestyles and over-eating in the causation of non-insulin-dependent diabetes, whilst a small group of epidemiologists and clinical researchers debated the relative aetiological importance of sugar and other refined carbohydrates. A minority of GPs and hospital doctors also suggested that lifestyle advice could be beneficial to those ‘at risk’, with risk calculated in relation to characteristics (age, weight, sex, parity, family history) seen most commonly in people with diabetes. However, no part of the profession suggested national primary preventive strategies until the late 1990s. Before this, prevention focused upon secondary and tertiary interventions – on preventing or arresting diabetes’ various microvascular and macrovascular complications. Here, dietary composition, blood glucose control, and new therapeutic technologies assumed centre stage, and diabetes itself was conceptualised as a risk factor for myriad acute problems.137 In other words, with causative factors disputed, clinical activity proved central to prevention, and specialists and the state promoted improved disease management (and, therefore, more intense professional management) as a public health activity during the later 1980s and early 1990s. Whilst this alignment of clinic and prevention was present in the history of other chronic conditions, it was particularly pronounced in diabetes. Doctors in the post-war period thus tended to portray diabetes as a model chronic disease primarily because of features related to its management – long-term surveillance, therapeutic titration, the involvement of primary and secondary health services – or its onset and effects, rather than because of its aetiology. Moreover, the alignment of prevention with professional management provided policy-makers with another reason for seeing diabetes as a test site: intervention met public health, as well as clinical and service, interests.
Promoting diabetes services
One final distinctive feature of diabetes that shaped how its professionals became subject to management was the existence of an influential patients’ organisation throughout the post-war period. The Diabetic Association (later BDA, and subsequently Diabetes UK) was a mixed lay and professional group established in 1934.138 The Association itself emerged from attempts by R. D. Lawrence – the pre-eminent British diabetes specialist before the Second World War, and himself a person with diabetes – to gain financial and political support for his Diabetic Department at King's College Hospital (London). In brief, Lawrence turned to his high-profile colleagues and patients to raise capital for the department, and H. G. Wells (a private patient) penned an appeal letter in The Times on Lawrence's behalf.139 From this letter, interest in an association gained ground, and Lawrence pulled together support for the organisation, which was founded in Wells's flat by thirty-two people, including clinicians, nurses, dieticians, industry representatives, and prominent patients.140
Membership of the Association grew slowly, but seemingly accelerated over the 1970s and 1980s, and local ‘branches’ (in which patients might meet and arrange events for their own support) developed in the early post-war decades.141 However, although the Association was dedicated to work ‘for diabetics’, healthcare professionals provided the central body with much of its impetus and interests for most of the century. Lawrence was a dominant figure until the later 1950s, and professionals used the Association to form connections, design research programmes, develop their specialism, and influence government policy.142
The content and direction of the Association's activity altered over time. As will be noted in Chapter 1, as well as publishing journals and leaflets to support patient self-care, a major early interest of the Association was in promoting the creation and accessibility of specialist outpatient clinics. The development of insulin therapy had intensified patient self-management after the 1920s, introducing painful daily injections and new forms of laborious self-surveillance; where a patient could afford it, doctors encouraged home testing of urine for glucose and ketones (which initially involved boiling urine and applying a reagent in the kitchen) and noting results in record books.143 This self-monitoring, though, formed part of a larger pattern of patient surveillance grounded in new forms of hospital organisation. The Association held a belief, common until the 1950s, that clinics were essential to effective diabetes care, providing a space for expertise, high-technology surveillance, and (in a minority of institutions) a growing multi-disciplinary care team. Its leadership thus spent much of the 1940s, 1950s, and 1960s surveying existing facilities and lobbying for better clinic organisation.144 Its interests, however, did not remain static. Along with investigations into a range of welfare concerns over the post-war period, the Association increasingly co-operated with major professional bodies such as the Royal College of Physicians of London (RCP) during the 1970s and 1980s, producing guidance on service provision and clinical care.145 During these decades, leading figures reconceived the BDA as a body for setting and reviewing standards, and lobbied government for support in these efforts.146
The existence of such a body distinguished diabetes from many other chronic diseases. Patient-supported organisations had existed for a few years before the creation of the BDA, though bodies like the Asthma Research Council focused on funding basic and clinical research.147 The Association, therefore, remained unique in its work and composition for many years after the Second World War, and attained a position of moral and scientific authority seemingly unrivalled by other disease-specific organisations.148 Crucially, it influenced the way in which diabetes care became subject to innovative forms of management. Its members developed new models of structured and shared care, spreading them through networks developed within the BDA until they formed something of an accepted ‘common sense’. These models were then promoted nationally, and the Association actively engaged in the creation of guidelines and audits, including joint ventures with the Department of Health. Relations were not always cordial, and successive governments were wary of activities that might increase short-term costs. Nonetheless, the tireless work of the Association was a key factor in promoting diabetes as a subject of political interest.
Managing diabetes and medical professionals in post-war Britain
Through the following chapters, then, this work tells a particular history of diabetes management in Britain. It is one that offers a new perspective on the development of instruments for managing professional labour, and which explores a broader history of managed medicine after 1945. Before providing an overview of these chapters, it is worth briefly pausing to reflect upon the work's silences and parameters.
Given the interests of the study, patients will be seen only fleetingly.149 Patient testimonies are used to explore how certain systems functioned, or to examine how patients’ concerns promoted professional management, whilst the figure of ‘the patient’ appears when tracing the ways in which medical and political discourse used such a construct, perhaps to justify stasis or encourage change.150 Similarly, although references will be made to other healthcare professionals (notably managers, nurses, and technical staff), the primary focus remains on doctors and how their work became subject to codification, division, temporal regulation, and review. This is not to diminish the importance of other healthcare professionals in the management or history of diabetes care. Indeed, nurses played a considerable role both in patient management and in designing and promoting schemes for integrated care.151 Nonetheless, doctors – specialists, academics, and GPs alike – sat at the heart of managed medical practice in Britain. They were the most influential actors promoting new forms of oversight and guidance, and it was their labour and status which were most radically reworked during the twentieth century. Therefore, to fully appreciate how managed medicine emerged in post-war Britain, it is crucial to place medical professionals at the centre of the forthcoming analysis.
With regard to the chosen geographical frame for the study, it might well be asked whether it makes sense to focus on ‘Britain’.152 This question can be tackled on three levels. Firstly, in terms of whether differences in medical culture, society, and politics undermine the implied unity of England, Wales, and Scotland.153 It was certainly the case, for instance, that medical culture and politics in Scotland made the development of integrated care schemes much simpler than in England and Wales, and Scottish elites appeared slightly ahead of their southern counterparts in constructing guideline systems.154 Yet, as this work shows, diabetes management (and the development of professional management) was a very ‘British’ affair. Specialists, evidence, and models of care moved freely across internal borders during the decades discussed. Major reviews of, and guidelines for, diabetes care often covered the whole of Britain or, if taking place within individual countries, were closely connected to counterparts elsewhere.155 Developments in one country, in other words, informed developments in others. Similarly, in political terms, major actors and organisations – such as Parliament, the NHS, or the BDA – had British coverage. Undoubtedly, examinations of specific institutions or practices might reveal local peculiarities. But in a broad study such as this, a focus on Britain makes considerable analytical sense.
Secondly, there may be a case for adopting a wider geographical focus. For instance, as recent scholarly work has pointed out, the creation of clinical guidelines and audit was a transnational phenomenon, something perhaps characteristic of ‘modern’ medicine, with its emphasis on scientific rationalities and administrative pressures for standardisation and efficiency.156 Indeed, the organisations and actors that promoted the management of professional labour often moved across borders, operating in global institutions and promoting international programmes for reform.157 Yet the history told here is also one shaped by British peculiarities. As Day, Klein, and Miller point out, the generation and imposition of guidelines were linked far more closely to financial concerns in the USA than in Britain. In the USA, market structures and a disaggregated profession left doctors less able to institute their own vision of professional management.158 In Britain, different conditions prevailed. Popular appreciation of the NHS curtailed attempts to fully privatise health service provision, and elite specialists and local doctors were more like partners in creating managerial instruments.159 Likewise, in terms of diabetes management, British clinicians, epidemiologists, and researchers were prime movers within international agencies. They promoted models of structured, managed medical practice within these institutions, and used their organisational prestige to influence domestic practice. Once again, the peculiarities of the British political and medical context influenced the way in which international trends were received, and even informed those trends directly.
Thirdly, it is worth noting the productive power of focusing on Britain itself. With the creation of the NHS, Britain possessed a redistributive health service funded from central taxation that was of great interest to countries around the world.160 By studying its history we can examine how disease and professional management developed in a collectivised (non-insurance-based) system with a mature medical profession. We are also able to tease out the possible contributions of significant political and cultural change to such developments, with Britain experiencing the loss of empire and constant shifts in governance strategies between 1945 and 2000. In other words, by ensuring that ‘Britain’ is situated within local and international scales, this work can provide an illuminating study of modern medicine in the post-war period, but one which does not reduce British history to a variation on a theme. It is therefore hoped that the findings offered here can contribute to a broader literature on diabetes care and managed medicine, providing empirically grounded scholarship that facilitates comparative perspectives.
British distinctiveness can be seen almost immediately in Chapter 1. This chapter examines the ways in which diabetes care came to be remade with the creation of the NHS, and highlights the complex relationships connecting diabetes with a reconstructed concept of chronic disease. The new service accelerated the growth of hospital-based care, with a minority of clinicians developing rudimentary bureaucratic tools for managing the disease and a growing care team. At the same time, doctors, epidemiologists, and public health practitioners interested in ‘chronic disease’ also began to reframe diabetes as an exemplar of disease management, equating prevention with good clinical care.
These developments are taken up in Chapters 2 and 3, which discuss further expansions of the care team. As patient numbers grew and resources became constrained, clinicians tried to expand the role of GPs, both in formal shared care schemes and more informally in special clinics. GPs themselves were interested in assuming greater responsibility for their diabetic patients during this period, and they actively cultivated multi-disciplinary, cross-institutional ventures to bring diabetes management into primary care. This transition, however, provoked concerns about standards of care and the ability to co-ordinate clinical activity. To solve these problems, clinicians and GPs deployed tools developed from research – and instruments created to facilitate new forms of chronic disease management – to manage care more effectively. Reflecting on what made ‘good practice’, clinical teams set new standards for undertaking patient management, against which care could be measured and reviewed.
Early schemes did not spread beyond the local institutions in which they were first mobilised. This situation changed for later initiatives. Chapter 4 outlines how diabetes re-emerged as a concern of central government during the late 1970s, setting the scene for the move of managed care from clinical to policy arenas. Specifically, this interest in diabetes arose in relation to diabetic retinopathy, a major cause of blindness nationally. Reflecting changed understandings of prevention in chronic disease, as well as the shifting connections between medical organisations and government, the BDA and elite professionals promoted the cause of retinopathy prevention in government circles. Although these efforts found a supportive ear amongst medical civil servants, finance departments demanded new forms of health-economic evidence before they would consider funding pilot studies of early detection and treatment. Ministers, moreover, picked up schemes for trialling new modes of organisation during the mid-1980s only because of the party politics surrounding public health.
In contrast, the Department of Health (and its predecessor) quickly supported and adopted new standards documents, guidelines, and audit systems during the later 1980s. As Chapter 5 shows, interest in standards and auditing was much broader than their application to diabetes, being closely related to new political rationalities regarding public services, and to anxieties about professional culpability and accountability. In medicine, the creation and use of standards had a long heritage. During the mid-1980s, however, various professional, charitable, and international agencies converged on diabetes to produce their own standards for care processes (and intermediate outcomes), which mapped neatly onto managerial principles and practices developed over the previous century. These standards provided a new layer of management in medicine, adding national guidelines for practice and audit to the local systems which had emerged in previous years, and on which such guidance had often been based.
Between the late 1980s and early 1990s, the principles of managed care, if not the content of these new standards documents, made their way into policy circles. Chapter 6 examines how this occurred. It begins by situating government interest in guidelines and audit systems within the influence of neoliberal ideas about competition, professional accountability, and the role of regulated market systems in social and economic life. A new consensus was forged in this period, in part because political and medical projects for management had clear synergies. However, the movement of prominent diabetologists and experts across policy fora to forge such conceptual and practical connections was also critical. Personnel continuities across different levels of governance ensured rough agreement over managed diabetes medicine, a vision of care which dovetailed neatly with political desires to curb costs and make healthcare operate more like a market. More than this, the public health aspect of managed care attracted successive governments to new guideline and audit structures, with little thought given to the growing interest in social determinants of health that had characterised public health towards the end of the twentieth century.
Since the year 2000, public health policy for diabetes has changed direction somewhat. In recent decades, governments have sought to emphasise primary prevention of type 2 diabetes through exercise and dietary strategies, and Diabetes UK (previously the BDA) has also created new risk self-assessment tools.161 And yet the managerial approach remains. The National Service Framework (NSF) and Quality and Outcomes Framework (QOF) continue to provide the financial and standards structures central to managed care, even where GPs have been brought into primary preventive strategies.162 Similarly, the new approach of risk identification and early intervention has its heritage in mid-twentieth-century discussions of chronic disease and screening, and is designed to target NHS resources and medical attention more efficiently.
Probably reflecting a mixture of improved case-finding, an ageing society, and changing social and economic structures, rates of diabetes mellitus have increased substantially over the past two decades, and are projected to increase at a faster rate in the coming years.163 As British health policy gravitates ever closer to managerial approaches to, and market commissioning of, health services, it is likely that bureaucratised clinical care will continue to play a central role in the NHS. In such a context, it will be more important than ever to understand the historical trends that shape our approach to both of these major features of British life. This book provides something of a starting place for such an important undertaking. I hope it will also offer scholars a basis to extend conversations about chronic disease and managed care in different types of healthcare systems.
Notes