Speaking to the Society of Medical Officers of Health in February 1965, J. J. A. Reid – a well-known public health practitioner – addressed a familiar theme. ‘In this country’, Reid began, ‘the problems with which all branches of our profession are faced are very different from those of the past, when poverty, ignorance and infectious diseases were the main enemies of health.’ ‘Nowadays’, Reid continued, ‘it is towards cardiovascular disease, cancer, bronchitis, accidents, mental disorder, and such chronic conditions as diabetes mellitus and arthritis that we must look for the principal sources of mortality and morbidity.’ For Reid, medical advances, increased education, and economic growth might have conquered the diseases of the past, and would probably drive further progress in the future. In the present, however, this combination had also provided the conditions for ‘smoking … overeating, and … [lack of] exercise’ that caused ‘maladies of plenty’.1
For Reid, and other Medical Officers of Health (MOHs), doctors, and lay persons involved in public health activity, this changed profile of morbidity and mortality required new approaches. On the one hand, these practitioners spoke of a ‘New Public Health’, based on persuasive health education campaigns that would help individuals to manage the imbalanced lifestyles supposedly underpinning novel burdens of disease.2 On the other hand, they recognised that such campaigns could form only one component of efforts to confront chronic disease. For conditions like diabetes, even contributory factors to onset were unknown, and complete disease prevention was not considered possible. Moreover, patients managing such illnesses were believed to encounter psychological challenges, discrimination, and often painful long-term complications. For these problems, it was argued, early diagnosis and treatment by a multi-disciplinary team of medical, nursing, and technical staff offered the best solutions.
Unfortunately for visionaries like Reid, the tripartite division of the NHS into general practice, hospital, and local government provision made multi-disciplinary and cross-institutional disease management difficult to realise. Reid had, for instance, placed local government health visitors, district nurses, and public health doctors at the heart of his plans to manage the rising toll of chronic disease. In diabetes, public health workers would promote early diagnosis by educating the public about symptoms and screening local populations. Equally, they would contribute to multi-disciplinary disease management by providing aftercare assistance to specialist hospital clinics and GPs responsible for long-term patient surveillance.3 Ultimately, though, compromises built into the health system meant that promises of rationally planned and integrated care raised during post-war reconstruction were not realised in the ways envisaged by policy-makers.
Taking the gap between vision and practice as its starting point, this chapter analyses the ways in which diabetes management intersected with changing healthcare structures and emergent notions of chronicity during the two decades after 1945. Beginning with an overview of disease management strategies in the 1940s, it traces how the creation of the NHS confirmed diabetes as a hospital condition, one closely connected with specialist labour, a growing care team, and laboratory technologies. The concessions and divisions on which the NHS had been initially built, however, meant that regional organisation of clinic services, whilst much prized by doctors and the leading diabetes patient association, failed to take place. Furthermore, hospital care began to face serious challenges during the 1950s and 1960s. Shifting understandings of diabetes, rising patient loads, and resource constraints within the NHS encouraged clinicians to look beyond hospital management. Early innovations in cross-institutional care provided models for managing other conditions, but new forms of working were again frustrated by the limitations woven into the fabric of the health services.
Changing understandings of disease were related to shifting ideas of chronicity. During the 1940s, discussions of chronic patients generally referred to the large hospital populations deemed ‘incurable’ and admitted to old municipal and Poor Law institutions.4 These patients were generally elderly and infirm, or diagnosed with long-term physical impairments and mental health problems. The creation of the NHS and post-war welfare state brought political attention to these populations, just as new techniques for assessing mortality and morbidity drew medical interest to long-term conditions of the middle-aged.5 Although government departments were absorbed with how the health and social services could care for ‘the chronic sick’ during the 1950s and early 1960s, epidemiologists, public health agencies, clinicians, laboratory researchers, and social medicine academics all began to consider the problems posed by ‘chronic disease’ in the general population. Within discussions of chronic disease, diabetes assumed something of a symbolic position, providing a medium through which to discuss pathology and disease management. It was a position diabetes would retain, in various ways, for the rest of the century.
The transformations of the health service, diabetes management, and concepts of chronicity over the first decades of the post-war period, therefore, had ramifications lasting into the new millennium. Within the fluid political contexts of post-war Britain, the initial institutional arrangements of the NHS created dynamics that proved increasingly problematic for politicians and service staff. For professional and state bodies, improved integration and service information became solutions to rising expenditure and cross-institutional challenges of ‘chronic disease’. As a prominent chronic condition, diabetes was managed in a way that provided a pioneering example of how to undertake integrated care. The transformations noted here (and in the next chapter) reframed clinical management itself into a preventive act, alleviating pressures to find potential social aetiological factors. For now, this chapter traces the origins of these transformations, the effects of which are considered throughout the book.
Diabetes and reconstruction of the health services
On the eve of the NHS's creation, British clinicians managed diabetes according to many of the same principles and strategies that prevailed in the 1920s. Controlling a patient's metabolism within certain limits (as measured through blood sugar levels, and sugar (glycosuria) and acid bodies (ketonuria) expelled in the urine) remained a key aim of intervention. For patients deemed to be overweight, weight reduction accompanied the pursuit of control in the hope that metabolic dysfunction might be subsequently relieved. Practitioners thus sought to balance dietary schemes and – where required – insulin regimes as well as possible with a patient's metabolic capacities and work demands.6 Laboratory surveillance and monitoring for signs of long-term complications also remained central to management programmes. Most hospital doctors encouraged patients to test their own urine and to record the results. These clinicians also required patients to attend outpatient clinics at regular intervals, primarily for blood sugar examinations, urinalysis, and, in some instances, tests for skin and chest problems.7 Of course, aims and practices varied between institutions and practitioners. For instance, the content of dietary plans often differed considerably, and not all patients were expected to undertake self-monitoring.8 Equally, GPs retaining sole responsibility for their patients might lack access to the hospital's surveillance equipment and have to rely solely on urinary tests to assess treatment efficacy as a result.9
Perhaps the major area of contention in disease management during the 1940s was ‘free diets’. ‘Free dieting’ was pioneered by American and European paediatricians who believed that dietary restrictions and the pursuit of normal glycaemia stunted healthy physical and psycho-social development in young patients.10 Though loosely defined, ‘free diets’ found a minority of advocates in Britain, with practitioners extending the scheme to adults and seeking to adapt insulin intake to diet. These doctors told patients to disregard glycosuria and instead prioritise health, vigour, and remaining ketone-free in the belief that this allowed a more ‘normal’ life, free from dangerous reactions to low blood sugar.11
Intertwined with this discussion was another about the relationship between persistent hyperglycaemia and the onset of long-term complications in diabetes. As noted in the Introduction, some nineteenth-century physicians and pathologists had recorded the appearance of certain lesions and problems in older patients with diabetes. During the 1930s and 1940s these observations multiplied. Clinicians and researchers increasingly noted distinct patterns of complications in patients with diabetes of long duration, regardless of the age of onset, and they expressed consternation at the development of kidney disease, ocular changes, nerve damage, and other vascular problems.12 Although British doctors became uncertain about the relationship between metabolic imbalance and the onset of complications during the mid-century, it seems that many adopted a middle-ground position in practice. Here, they combined a desire to ‘do no harm’ – striving for near-normal glycaemia levels where possible – with a pragmatic acknowledgement that any regimen had to be simple enough to be reasonably followed, and generous enough not to generate resentment or provoke hypoglycaemia.13 Patients, moreover, approached their prescriptions in similar ways, adjusting diets according to different priorities and structural constraints.14
The continued emphasis on laboratory oversight into the 1940s meant that elite hospital doctors (and many GPs) believed the ongoing supervision of patients was best provided through specialist outpatient clinics.15 These institutions combined the laboratory equipment and expertise considered necessary for high standards of care in a complex condition.16 In line with a general stasis in views about treatment and institutional efficiency, the organisation of clinic work remained deeply hierarchical.17 Patients congregated in large waiting areas before moving through the stages of the management process: from seat to testing areas, and from testing to consultation.18 Where available, ancillary staff – such as nurses and laboratory technicians – might offer advice or undertake the various tests involved (including blood tests, feet checks, and x-rays).19 Once results were available, doctors then consulted using an accumulation of longitudinal notes, with the most senior clinician organising and overseeing the system itself.20 Though subject to rising demand, clinics retained high esteem, and the Diabetic Association (later BDA) – one of Britain's first patient-oriented bodies – promoted their creation and advertised their availability to patients.21
The key political concern during the 1940s, therefore – and one that it was hoped a national health service would resolve – was access. Before the 1940s, diabetes care operated under the same rules of provision as all other forms of care in Britain's mixed economy of services. Although some clinics had appeared in municipal institutions, in general they remained the province of teaching hospitals and larger voluntary institutions, which by the twentieth century were increasingly centres of paid care.22 This shift in funding methods reflected the changing role of voluntary institutions; rising demand, novel employment arrangements, and the transformation into important sites for teaching, research, and technologically oriented practice increased expenditure.23 Access to the diabetic clinics in these hospitals remained free, even if inpatient care often did not, but patients were generally accepted only through referral from GPs or inpatient wards.24 Furthermore, patients needed to pay for prescriptions of insulin and self-testing equipment. Poorer patients who were members of a contributory scheme might find relief through their plan, and National Insurance patients could consult and receive prescriptions from GPs without charge. By contrast, poorer patients without such access (usually women and children) were reliant upon some form of public assistance.25
Along with economic status, geographical location could also structure patient access to oversight and treatment before 1948. Clinics tended to be formed in the largest urban centres, in part a response to larger patient populations.26 Sometimes this response was positive, with clinicians forming clinics because they sought to develop specialist knowledge. On other occasions, doctors were motivated negatively: diabetes patients were seen to clog up general medical outpatient clinics, and clinicians argued that it would be more efficient to concentrate this work in special sessions.27
The creation of the NHS was supposed to rectify this inequitable and unequal geography of expertise, primarily by providing comprehensive health services for the entire population, free at the point of use. As will be noted below, however, compromises built into the service during its formation thwarted the realisation of such ideals.
The NHS was the outcome of a long history of innovation in, and debate about, health service organisation.28 The first half of the twentieth century was marked by a growing political interest in the moral and physical health of the national population (though particularly of workers, mothers, and children), and was matched by a growing state responsibility for service provision.29 Political concern with population health was closely connected to imperial and wartime politics, as well as discourses of health rights, responsibilities, and citizenship in liberal and socialist traditions.30 At the same time, clinical medicine was also organising itself around technologies and concepts of the collective. By the 1940s, experiments had been undertaken with multi-sited clinical trials and community-focused epidemiological research, both of which were later geared towards determining clinical and public health practice.31
When launched in July 1948, the NHS was, in theory at least, to form part of Britain's newly planned modernity: a vision for the nation in which rational experts guided state intervention into a vast array of social and economic activity, freeing citizens from ‘the five giants’ of want, disease, squalor, ignorance, and idleness.32 Yet, despite high hopes for, and long-term interest in, reformed health services, the creation of the NHS was riven with political compromises that posed problems for diabetes management into the second half of the century. An eclectic mix of actors took part in the formation of policy, and whilst some policy experiments with mass provision and integrated services existed, consensus over broad principles masked sharp divisions about the aims, structures, and mechanisms to be employed.33
Ultimately, these disputes became embodied in the final shape of the NHS. Against advice from senior civil servants, Cabinet colleagues, and the Labour Party, the Minister of Health, Aneurin Bevan, promoted a scheme for nationalising almost all Britain's hospitals.34 The reform, however, did not place the Ministry of Health (or the Department of Health in the Scottish Office) in direct control of hospitals. Rather, several layers of administrative bodies existed between government departments and healthcare providers. Unit committees organised day-to-day provision. These committees reported to hospital management committees (or boards of management in Scotland), which allocated responsibilities between institutions and co-ordinated services. Finally, overseeing these agencies and allocating funds were nineteen Regional Hospital Boards (RHBs) and thirty-six boards of governors of the major teaching hospitals.35 The exact duties of, and relationships between, these agencies shifted over time, and in England and Wales – though less so in Scotland – ambiguity often impaired their functioning.36 Regardless of future changes, the lack of clear lines of influence frustrated ministers and doctors in the long term, and had considerable influence on the provision of diabetes care in the short term.
That the hospitals were the only elements of the NHS to be nationalised also caused political and clinical challenges. Voluntary hospitals and consultant staff accepted enrolment in a nationalised sector in exchange for favourable administrative arrangements, generous pay settlements, and some continuation of private practice.37 By contrast, through the British Medical Association, GPs fervently defended their position as independent contractors, free from state salary and direct employment by local authorities. Building on arrangements developed under the previous National Insurance scheme, GPs contracted their work (now covering the whole population) via executive councils, paid broadly on a capitation basis.38 The result was that GPs continued to operate without central oversight or involvement in integrated planning. A central Medical Practices Committee retained some ability to limit list sizes, and to direct GPs through positive and negative inducements to new appointments.39 Equally, some professional advisory and statistical services existed, with statutory bodies mildly regulating prescribing through systems of classification (ruling certain products as ineligible for NHS prescription), and monitoring (sending GPs ‘analyses of their prescribing costs compared with the average for the area in which they practised’).40 However, such mechanisms were limited. Attempts to strengthen management in the 1950s were easily rejected, and attempts to co-ordinate care across sites had to take place without connective institutional tissue.
Finally, compounding these managerial and administrative problems, responsibility for a diverse and somewhat incoherent range of preventive and clinical public health responsibilities was assigned to local government health departments under the direction of the MOHs. Some of these activities had fallen under their jurisdiction before 1948 (for instance, sanitation and maternity and child welfare services), whilst others – such as medical social work – were new responsibilities.41 Crucially, the removal of hospital administration from local government cut short attempts to fully integrate hospital and community services, just as concessions to GPs over local government employment made co-ordinating local services considerably more difficult.42
In some respects, the new NHS was a great boon for diabetes care. The adoption of a tax-funded service removed most direct financial obstacles to accessing pharmaceuticals, self-care equipment, and clinic services. In fact, as will be noted in the next chapter, GPs were almost incentivised to refer patients diagnosed with diabetes to hospital. Clinics also became more accessible as the number of clinics grew (from 40 in 1940 to over 190 in 1955), and the regional machinery of the NHS provided a possible means for planning clinic placement.43
This regional focus was, in many ways, the result of the Diabetic Association's championing of specialist services. Discussions of reconstruction first provided the Association with an opportunity to campaign for equitable clinic distribution. As R. D. Lawrence wrote to The Lancet in 1942, the Association had asked the ‘Planning Commission to take steps to establish clinics on a regional basis throughout the country’. Such clinics, he went on, were ‘essential for the welfare of … diabetics in general’, and regionalisation would enhance accessibility.44 These efforts intensified following the creation of the NHS. Negotiations between the profession and government during the 1940s secured professional advisory mechanisms throughout the NHS's structures. Through conferences and publications, Lawrence successfully promoted the cause of regional planning for clinics amongst his colleagues.45 These efforts resulted in support from major medical journals and a review in the early 1950s by the Central Health Services Council (CHSC), an advisory body established with the NHS to advise ministers on service questions.46 The Council offered its advice to the Ministry of Health in 1953 – based on testimony from Lawrence – and the Ministry issued loosely prescriptive guidance to hospital authorities later that year.47 Here, it was recommended that facilities should be planned to prevent patients travelling further than thirty miles for care. The Ministry also recommended bed numbers per population size, offered general guidance on the scale of facilities and staffing for centres of different sizes, and requested that regional plans be created.48
Although this political interest in diabetes marked something of a coup for the BDA, success was ultimately hollow because of the concessions made to form the NHS. Already in the 1950s, central departments wanted to exercise some control over service expenditure, even if this infringed upon clinical decision-making.49 Considerations of costs were shared by some elite GPs and emergent health service researchers, who progressively problematised variations in prescribing and speculated about accountability for resource use.50 Nonetheless, the NHS had been founded on an informal agreement that doctors would have considerable autonomy of action within set budgets.51 Appeals to ‘clinical freedom’ held considerable sway within the profession, and even sceptical politicians and civil servants feared the potential backlash to the nascent NHS that might follow attempts to proscribe clinical autonomy.52 Thus, as was common at this time, the Ministry's guidance for RHBs focused on facilities and staffing, and left considerable room for interpretation.53 Furthermore, even had more expansive standards been set, there would have been no guarantee that practitioners, administrators, or health authorities would follow any plan produced. The Ministry's only recourse to implementation was exhortation to RHBs, whilst the muddled relationships between RHBs and hospital units meant that regional plans rarely had a direct relationship with the service delivered.54 Efforts to ‘generalise the best’ in diabetes care, to paraphrase Bevan, were difficult to achieve in a system which sought to guarantee the maximum possible devolution of decision-making.55
The creation of the NHS, therefore, confirmed diabetes’ status as a hospital disease, but dashed hopes for effective regional organisation. Where planning did take place, this was largely the result of efforts from unevenly distributed interested parties. The spread of clinics in Britain compensated for some of this service disorganisation in terms of access, and certain regions managed to co-ordinate their services.56 More significant problems, though, arose in terms of co-ordinating efforts across institutions and different parts of the service. As we will see below, health authorities and clinicians began experimenting with co-ordinated hospital and community care in diabetes during the 1940s and 1950s. Underpinning such efforts were shifting ideas of chronicity, growing clinic workloads, and novel views on how the new NHS should manage such problems.
Remaking chronicity: the chronic sick, social medicine, and chronic disease
During the 1940s, medical and public health discussions of chronicity centred on a very different set of patients from those in similar discussions later in the century. At this time, the most common use of the term ‘chronic’ was in reference to ‘the chronic sick’, a rather loose term applied to an amalgam of patients with diverse concerns and needs.57 Broadly speaking, by the early 1940s institutions housing ‘the chronic sick’ tended to provide care for elderly and physically frail patients, particularly older people with physical impairments, mental health problems, and long-term and incurable diseases (such as arthritis or epilepsy), and people deemed likely to have terminal illnesses.58 During the inter-war years, these were patients for whom the majority of doctors believed cure or rehabilitation was impossible, and who required long-term medical, nursing, and domestic care. They were also patients likely to be excluded from voluntary hospitals on these grounds and to be instead admitted to municipal and former Poor Law institutions, where they received little medical or political interest.59
How, then, did a different view of chronicity emerge, one concerned with the conditions prevalent amongst the middle-aged? And how did diabetes relate to these new perspectives? The application of various techniques to questions of mortality and morbidity saw clinicians, epidemiologists, and public health practitioners grow increasingly concerned with new problems. Diabetes served as a useful filter for discussing some common elements shared by various conditions, and its management also provided a model for new forms of cross-institutional care. Before ‘chronic disease’ could become a political issue, however, the existing label of ‘chronic sick’ had to be dismantled.
The fate of the chronic sick – as a classification and population – was closely intertwined with the creation of the NHS and the post-war welfare state. There had been some interest in chronic patients before the 1940s. Following legislation expanding the role of local government in hospital administration in 1929, a small number of doctors in newly municipalised hospitals began to pay closer attention to the needs and composition of the chronic sick. Faced with a disparate array of patients, these clinicians devised new systems of diagnosis, classification, and treatment.60 They rejected passive approaches to care, arguing – and often demonstrating – that recovery and discharge were possible for many patients if they received proper, timely treatment.61
This work may have raised the profile of the chronic sick during the 1930s, but it was the nationalisation of Britain's hospitals and creation of post-war welfare services that brought many medical practitioners, healthcare administrators, and government officials into contact with chronically ill patients for the first time.62 Suddenly, a large number of clinicians and civil servants began to see ‘the chronic sick’ as a problem in need of management, mobilising humanitarian arguments to motivate improved care for marginalised populations.63 Moreover, the initial professional response – particularly through bodies like the British Medical Association – was to encourage the development of techniques that figures in municipal institutions had pioneered in the 1930s.64 Discussions even extended to the internal organisation of institutions, and generated an administrative gaze based upon functionality and social criteria. Here medical officers discussed the importance of segregating ‘annoying’ patients (incontinent patients, ‘senile dements’, patients with ‘sub-normal minds’, and the ‘mentally confused’), and of nursing ‘“likes” together’ on grounds of efficiency and patient comfort.65
Increased visibility and activity also produced tensions. Many hospital practitioners saw chronic, incurable patients as blocking beds that would be better utilised for younger, acute patients, and promoted the application of new techniques only to increase bed turnover.66 Yet the creation of a new specialty had resource implications, meaning that consultant support for geriatrics was mixed at best.67 Conflict also occurred between the health and social services. Interested clinicians and health planners consistently identified the home as the ideal location for ongoing care, and moved surveys into the community to assess both the living conditions of chronic patients and their need for domestic help and nursing.68 The subsequent discharge of patients caused tension with local government social service authorities, however, as the separation of budgets meant that for every patient removed from a hospital setting, a greater burden fell upon a local authority.69
In this sense, the management of the chronic sick became closely connected to questions of how best to use the resources of the post-war welfare state – or rather, how to ensure that heavily scrutinised resources were used for certain ends. Hospital care was a high-cost activity, and the first years of the NHS saw initial expenditure estimates greatly exceeded.70 These disparities startled the Cabinet and the Treasury, especially in light of post-war economic problems and government commitment to other areas, notably rearmament and the Korean War (1950–53). Officials thus consistently targeted the Ministry of Health to control NHS expenditure.71 In this context, the Ministry came to share clinical views of the chronic sick as unnecessary users of expensive services, and ‘bed blocking’ became a political problem.72
Political concern with the chronic sick stretched into the early 1970s. Gradually, however, the application of ever more refined administrative, political, and medical classifications transformed the subjects of interest. Though it remained an elastic category, discussions of ‘chronic sickness’ in the 1960s and 1970s increasingly centred on issues of functionality, on people whose physical condition impaired their ability to move, to undertake basic domestic tasks, or to undertake paid employment within existing architectural and social parameters.73 Whilst the majority of the chronic sick tended to be frail elderly people, this interest in impairment meant that younger patients came into view and that chronic sickness became intertwined with disability.74 Concern with the health and social service needs of older people continued into later decades, but authorities considered these needs under the rubric of old age more broadly.75 And in a similar manner, over the 1950s and 1960s people diagnosed with mental health and cognitive problems were progressively classified, discussed, and treated separately from the chronically sick.76 The administrative and clinical drive for management sparked by the creation of costly health and welfare services – combined with concerted campaigning from individuals and pressure groups – eventually disintegrated the category.77
The dismantling of the concept of ‘the chronic sick’ did not end interest in chronicity, though. Although the chronic sick attracted considerable political attention, into the 1950s and 1960s figures within clinical medicine, epidemiology, laboratory sciences, and public health became interested in the concept and challenges of ‘chronic disease’. In contrast to discussions of chronic sickness, discussions of chronic disease predominantly concerned how best to prevent and manage non-infectious conditions in order to delay impairment and death. Very broadly, that is, whereas discussions about – and management of – chronic sickness sought to ameliorate loss of physical and social functions, in the context of chronic disease such discussions and practices sought to prevent loss of function occurring.78
Nonetheless, medical interest in non-infectious conditions emerged from the same concerns that drove political focus on the chronic sick. The collectivising concern with population health that underpinned the NHS continued into the post-war period, and growing state expenditure on health and welfare services intensified interest in improving health and ameliorating financial burdens. Thus, during the 1950s, reviews of changing patterns of morbidity and mortality sparked concern over trends found amongst ‘the middle-aged’, and especially amongst middle-aged males. Public health doctors and epidemiologists noted how ‘female mortality [had] maintained its downward course [since the 1920s]; but the reduction of male mortality [had] slackened and almost stopped’.79 During the inter-war period, cases of duodenal ulcer, bronchial cancer, and coronary thrombosis increased, and infectious disease deaths proportionally declined.80 By the 1950s, doctors were less inclined to dismiss these trends as statistical artefacts, and ‘lung cancer, chronic bronchitis, diabetes, arteriosclerosis, heart disease, [and] cirrhosis of the liver’ now provided additional concerns.81
As George Weisz has argued, however, the findings of novel morbidity surveys perhaps generated the most intense medical concern with chronic disease, with surveillance of illness in local communities revealing a greater prevalence of long-term and degenerative diseases than was expected from mortality figures alone.82 In Britain, the creation of the post-war welfare state and the transformation of British social medicine provided considerable spurs to such surveys during the 1950s and 1960s. The inter-war social medicine movement had begun as an international project that located the cause and remedy for illness in social and economic structures.83 Its proponents strived to reorient the thought and practice of clinical medicine along these lines, but efforts to remake medical education largely failed.84 As a result, post-war social medicine became an academic pursuit associated with epidemiological research and health service assessment.85 Now motivated by the need to plan services, and using the research opportunities offered by the welfare state, many social medicine researchers conducted extensive morbidity surveys of ‘normal’ populations in Britain and its colonies during the 1950s and 1960s.86 They were joined in this pursuit by civil servants and a host of other medical professionals. Government officials used statistical returns from GPs to map general morbidity patterns, whilst hospital clinicians, MOHs, and general practice research communities undertook extensive detection surveys of ostensibly healthy populations in the community.87 This work produced important studies on the prevalence and causes of heart disease, diabetes, and high blood pressure, alongside now-famous mortality studies on lung cancer.88 Moreover, the activity generated increasing public health attention.
Research into diabetes prevalence provided an important vehicle for such work, and raised pertinent social and medical questions. Early studies took their cue from similar exercises in North America, and formed part of international and colonial research programmes.89 In Britain, important community investigations were undertaken in Ibstock, Birmingham, and Bedford during the late 1950s and mid-1960s.90 These surveys varied in structure, scale, and origin, but all were predicated upon initial screening of post-prandial urine to identify persons suspected of having diabetes, before formal glucose tolerance tests were used to assess their metabolic state.91 By moving away from hospital populations, this work found surprising levels of diabetes in the community, and claimed that for ‘every known case there is another as yet undetected and untreated’.92 After several studies reported, the projected national prevalence of the disease increased substantially, from between 0.3 and 0.6 per cent of the population in 1953 to around 1.2 per cent in 1959.93 It was in relation to such research – and work on diabetes in particular – that practitioners began to talk of ‘the existence of the clinical “iceberg” of undetected and untreated disease’.94 Indeed, the Bedford study was so successful that it not only generated follow-up studies and clinical trials, but even provided the basis for surveys into other conditions.95
Such findings provoked comment in the medical and lay press, with articles discussing disease prevalence and the possibility of living with a ‘hidden’ disease.96 Culturally, the idea of a submerged enemy surreptitiously eroding the integrity of the physical and social body resonated with imagery of espionage and subversion slowly pervading British popular culture.97 Medical journals and doctors discussed the consequences of unaddressed, silent, diseases for the individual.98 Yet references to ‘impaired efficiency’ in their reflections indicate how the sick body was also a political concern for the nation, presenting a challenge to economic activity.99 Productivity was a key index of comparison in the ideological contest of the Cold War, and relative economic growth rates provided a measure by which cultural critics, politicians, and journalists discussed Britain's post-war industrial and imperial decline.100 Moreover, Britain's welfare state was funded through tax receipts, and was thus dependent upon the fiscal yield from productive work. It was amid such concerns that doctors and health economic agencies produced assessments of the financial and productivity implications of long-term illness.101 Specifically, they built on work undertaken in the 1930s to estimate ‘working days lost’ and social security money paid out.102 That the highest rates of death and morbidity for many conditions occurred amongst those who dominated Britain's political, cultural, and economic institutions possibly compounded existing anxieties.
It was in relation to such findings, as well as transatlantic influences and exchange, that epidemiologists, clinical practitioners, and public health doctors came to discuss the unique challenges of ‘chronic disease’. As the renowned epidemiologist and social medicine academic Jerry Morris put it, many found ‘chronic diseases’ a ‘useful term for the miscellany of degenerative, metabolic, malignant and mental conditions that increasingly dominate the practice of medicine and public health’. ‘The term’, he went on, ‘had some value because it emphasises certain common features: the life-time or very long process of development, the often insidious onset, the usual impossibility of cure, the tendency to relapse and to remit; and often their profound economic and social repercussions, particularly on the family.’103 The term did not provide the basis for service reform movements, as in the USA.104 It did, nonetheless, provide a useful shorthand for integrating seemingly diverse diseases into broad discussion and, as we will see below, for drawing out models of local service provision that might be adapted in different sites.
Diabetes fitted quite neatly within this framework during the 1960s. As noted, clinicians had long recognised the social and financial difficulties that patients with diabetes faced, and regularly discussed the psychological and physical challenges that patients might experience as a result of privations of diet. From the 1930s onwards, doctors admitted the need to make dietary and pharmacological concessions to ease these burdens, whilst into the post-war decades the BDA explored employment discrimination and welfare issues affecting specific patient groups.105 Finally, as well as mentioning its incurability, doctors frequently referred to diabetes’ long onset (outside childhood), with easily mistaken early symptoms.106 Indeed, diabetes often provided an example of hidden disease.107
At this time, though, diabetes became most widely discussed in relation to pre-symptomatic detection and diagnosis of disease. Research into prevalence, and studies of what was termed the ‘natural history’ of several chronic conditions, raised questions about when disease was said to begin.108 The relationship between asymptomatic physiological abnormalities (such as elevated blood pressure), symptoms, and the development of functional disease and long-term complications was often uncertain. For instance, surveys of diabetes revealed that glycaemia levels in the population were continuously distributed, with no strict cut-off point at which symptoms manifested (a threshold that traditionally divided healthy from pathological). Moreover, such initial research could not reveal whether borderline cases closest to diagnostic thresholds would become symptomatic in the future, whether such individuals were at risk of diabetic complications, or whether earlier treatment would prevent these outcomes.109 This interest in borderline cases generated longitudinal community research, as well as clinical trials of early intervention, with researchers and clinicians undertaking similar projects for hypertension.110 However, when discussing diagnosis and quantitative thresholds of disease, doctors regularly mentioned diabetes as challenging present assumptions. For example, opening a discussion on emergent patterns in community medicine at the annual conference of the Society of Medical Officers of Health in 1966, Reid noted that ‘although epidemiological research answered many questions, it also posed many questions: in the field of diabetes for instance, recent studies have made even an acceptable definition of the disease very difficult and have led to the suggestion that all men are diabetic, but some are more diabetic than others!’111
As we will see in the next chapter, findings from this research eventually led to a reclassification of diagnostic boundaries and therapeutic practices, with diabetes itself becoming a risk factor for heart disease, stroke, and other conditions. Notably, unlike those for other chronic conditions, the diagnostic thresholds for diabetes were revised upwards rather than downwards.112 In the meantime, uncertainty meant that many doctors were sceptical about pathologising borderline cases without being able to promise benefits. Approaches to diabetes, therefore, were unlike those taken to other conditions, though new organisational approaches to its treatment would soon come to influence other forms of chronic disease management.
Diabetes, chronic disease, and the limits of the NHS
According to many doctors and epidemiologists during the 1950s and 1960s, the NHS and British society were confronting new and complex problems. Faced with wide-ranging and prevalent chronic diseases, medical practitioners and public health doctors asked how to prevent tragic loss of labour, social function, and life. Rising expectations of modern medicine during its ‘golden age’ meant that neither doctors nor lay public necessarily saw the onset or outcome of many chronic conditions as inevitable.113 Medical discussions of how to respond to new threats bifurcated around two interlinked poles: wholesale prevention and better disease management.
Wholesale prevention efforts were closely tied to new forms of risk-factor thinking.114 To confront rising tolls of chronic disease, MOHs and medical practitioners sought to use research into disease aetiology to build ‘primary’ preventive efforts – interventions to completely avoid onset of disease (see Chapter 2). However, studies of many conditions did not reveal simple causative mechanisms. Instead, drawing on complex statistical methods (and recent understandings of multi-factor causation pioneered in studies of epidemics), researchers developed a range of techniques and study designs for teasing out associative, predictive, and possibly contributory factors to specific diseases.115 These new understandings of causation altered the targets and methods of preventive medical intervention. British experts studying a range of conditions began to shift frequently used explanatory frameworks for patterns of morbidity away from social structures of inequality and towards behaviours and ‘accumulated vices’.116 These perspectives formed the basis for new policy networks and large-scale public health campaigns targeting ‘risky’ lifestyle choices, with health education programmes designed to cultivate self-managing subjects through the persuasive provision of advice and coded cultural messages.117 To fine-tune their practices, moreover, state bodies assumed responsibility for undertaking research-based surveillance on public attitudes and behaviours.118 Individuals, though not overtly coerced, were to be benevolently guided to healthy decisions.
The international adoption of risk-factor approaches to prevention, in socialist as well as capitalist democratic states, was the product of a number of political projects.119 In Britain, the focus on individuals and education dovetailed neatly with the country's recent political history, and with the liberalism which infused the Labour Party's social democratic approach to economic and social management.120
Yet, as the NHS itself symbolised, state agencies and medical professionals provided services as well as education. In some rare instances, such as lung cancer, single agents were highlighted as definitively causative, even if such assessments were opposed for some time.121 However, doctors during the 1950s and 1960s noted that a lack of knowledge about causation in many conditions precluded reliance on primary preventive health education. In diabetes, for instance, there were a number of theories about contributing factors – genetic predisposition, weight, consumption of sugar or refined carbohydrate, and age – but none were certain.122 As Reid bluntly put it in a 1963 symposium on diabetes, ‘the scope for primary prevention is yet limited’.123 Thus local government efforts focused upon educating the general population about the symptoms of disease, and (despite scepticism about efficacy and cost-effectiveness) establishing some screening programmes to find undiagnosed cases of the condition.124 Both methods, in other words, were dedicated to finding unknown symptomatic patients and instituting treatment, in order to, at the very least, remove symptoms and prevent the development of acute diabetic emergencies. To be sure, some medical practitioners used the knowledge gained from surveys of prevalence to construct a list of groups considered most ‘at risk’ of developing the condition.125 Others, like Reid, recommended ‘the avoidance of obesity in such groups as the relatives of diabetics’, thus translating predictive models of risk into theories about causation and practices of intervention.126 Nonetheless, programmes of primary prevention did not form the backbone of approaches to diabetes during the 1950s and 1960s.
Instead, doctors saw effective disease management in diabetes – and in conditions such as cancers – as the best means to prevent deterioration into symptoms and long-term disability.127 As noted above, even as doctors grew uncertain about the value of blood glucose control, they adopted pragmatic approaches to metabolic balance as a precaution, and surveillance remained important in order to remove symptoms and avoid certain complications.
Undertaking this work, at least in more elite institutions, was an expanding hospital care team, reflecting the growing complexity of managing patients and their complications. Clinicians, nurses, and technicians, who had been central to diabetes management in the 1930s, were increasingly assisted by dieticians, chiropodists (to monitor feet and prevent injury turning into infection), and obstetricians (for joint care of pregnant patients) during the first two decades of the post-war period.128 As teams and patient populations expanded, however, doctors acknowledged that clinical labour required co-ordination to be effective.129 Within ward settings in particular, new tools and bureaucratic cultures of systematic recording developed as a means to maintain standards of practice. Senior doctors in Cheshire, for instance, complained of problems in treating patients ‘scattered in numerous non-medical wards throughout a large general hospital’, with the result that such patients received care from staff with ‘little experience in the practical management of diabetes’.130 In response, these doctors designed new records, building upon the rich history of form-creation and techniques for tabulating and visualising data in diabetes research and care.131 The new records contained pre-formatted boxes and graphic arrangements for the most important treatment and monitoring measures, as well as designated areas for recording the timings of actions undertaken (where the temporal gap between tests would offer important clinical information). The new forms were thus clearly laid out ‘so that doctors, nursing staff, and patients can see [information] at a glance’, and so that practitioners would be guided on what data, tasks, and tests they should prioritise.132 Clearly targeting nursing staff in their efforts to influence practice, the designers even used moral judgements and institutional pressure to ensure use of the document. ‘Any sister’, they concluded, ‘who is not prepared to keep it accurately is unsuitable to nurse diabetics.’133
However, specialist doctors during these post-war decades were beginning to reflect more systematically on the psychological, social, and economic problems that patients with chronic diseases faced, with repercussions for organising services.134 For instance, Ronald Tunbridge, Professor of Medicine at the University of Leeds, devoted a considerable part of a prestigious lecture delivered to the BDA to answering the question ‘why do patients fail to maintain a satisfactory level of control?’ In response, he suggested that ‘failure is due to three main groups of causes – psychological, social, and educational’.135 In terms of social causes, he pointed out that doctors before the Second World War regularly prescribed dietary composition in relation to four meals (‘breakfast, lunch, dinner, and a supper snack’), ignoring the fact that ‘few working-class families had … two cooked meals a day’. Expanding further, he recalled a survey he had conducted with a dietician, finding that the average cost of a diabetic diet exceeded that of normal diets, even when carefully planned by the two researchers. He concluded that ‘the failure of many diabetics, particularly the elderly, to maintain a steady diet is undoubtedly [due to] financial stringency’. Likewise, he stressed the educational difficulties faced by ordinary patients, especially older individuals who had already formed strong habits, in adjusting to new demands. Even for a doctor with physiological training, Tunbridge noted, it might take ‘at least three months of dietary control before he [sic] can enter a restaurant and order an accurate meal without undue emotional tension’.136
Turning to questions of care, Tunbridge supported clinic supervision of patients, but placed considerable emphasis on being conscious of costs, tailoring treatment to individual patients, and using repetition and ‘every device possible’ with a ‘team’ of almoners, nurses, and dieticians to educate patients on the essentials.137 In general, approaches to education varied between practitioners, and it is likely that the size of most clinics, and the distribution of inpatients across wards, made tailored treatment and education difficult to deliver.138 Yet a minority of clinics did incorporate ‘socio-medical’ insights into practice, generally where clinical leads had either personal experience of long-term conditions or a strong professional interest in chronic disease management.139 Moreover, where strong links between hospitals and local health departments existed, the most innovative practitioners were able to extend oversight into the community. Recognising that pressures on clinics prevented care teams from offering patients sufficient support, these doctors designed schemes for health visiting that moved follow-up education and surveillance directly into domestic settings. Such programmes contributed to the gradual expansion of the health visitor's remit beyond infant and child health.140 More importantly for diabetes care, however, the attachment of health visitors outside local government provided one possible means for integrating care across the NHS's administrative barriers.141
Pioneering work in this direction had taken place during the 1940s in Cardiff, where ‘specialist health visitors’ for diabetes were employed to provide ‘aftercare’ for patients previously admitted to the Llandough Hospital.142 This aftercare required health visitors to discuss prescribed regimen with doctors, ward sisters, dieticians, and almoners, and then to visit patients’ homes to ensure that ‘the regimen recommended in hospital was carried out’. Whilst there, health visitors would also undertake ‘sound health education’.143 Just as with the prevalence research noted earlier, doctors subsequently extended the arrangements created for diabetes management into care for such long-term conditions as gastric diseases, asthma, and tuberculosis.144 In all such cases, self-care was essential in the absence of daily professional encounters, and by passing into the home health visitors were able to use the disciplinary technology of surveillance to reinforce adherence to the parameters of self-management.
A similar scheme was also established under the Leicester Royal Infirmary in the early 1950s.145 Here, health visitors undertook an array of domiciliary tasks, focusing on the newly diagnosed as well as children and elderly patients.146 Co-ordinating with the district nurse, health visitors paid all new patients at least three domiciliary visits. The substance of individual visits varied, but health visitors were generally responsible for delivering educational content (on diet, hypoglycaemia, general principles of self-care, and urine testing); taking notes on social circumstances (social status, work, relatives, accommodation, and hygiene); co-ordinating and advising on other services (from home helps to National Assistance benefits); and even dealing with employment troubles and school demands.147 Particularly in the case of the aged, visitors could observe competency in self-care, assess the possibilities of keeping patients in their homes, and inspect patients for possible signs of complications (especially in the feet). In addition, health visitors were supposed to subject obese patients to additional scrutiny. For these patients, the Infirmary's clinical lead wanted staff to inspect the kitchen and undertake intense dietary surveillance and education, tightening the disciplinary mesh for patients whose weight had been framed as the result of dietary ‘transgression’.148
As within the hospital itself, records played an important role in health visitor schemes, with reports sent to the clinic and a patient's GP to ensure therapeutic continuity. However, the reports compiled by health visitors were designed to expand surveillance beyond biochemistry to the patient as subject. Reflecting a systematic interest in the social and psychological world of patients that was common within discussions of chronic disease, health visitor reports turned a person's character, health practices, social relationships, and means of support into objects of interest.
As noted, some clinicians extended the model set up for diabetes management into other chronic conditions. MOHs also saw opportunities to craft new positions for themselves, liaising with clinics and providing educative services for conditions like diabetes. As Dorothy Egan, President of the Society of Medical Officers of Health, suggested in 1965, ‘the image of the personal friend and mentor who guides the family from the cradle to the grave is being replaced by one of the team. In this team-work other disciplines have their part to play, but it is essential that the three branches of the NHS should have a common aim and a shared responsibility.’149
Ultimately, resources were just as scarce for local authorities as for clinics. Domiciliary and community care staff were already stretched, dedicated primarily to supporting elderly patients in their homes, and innovative schemes did not always last. Crossing the boundaries of the NHS, moreover, depended upon dedicated personnel with an interest in diabetes, meaning that provision was patchy rather than universal. As we will see in the next chapter, MOHs had been politically undermined by the early 1970s, just as financial and political pressures on clinics were intensifying. Such pressures meant that hospital clinicians were still looking for means to ensure cross-institutional disease prevention and management. With the decline of the MOHs and the reframing of clinical activity as preventive work, GPs made claims for diabetes care. Healthcare politics, economics, and philosophy all mutually reinforced a shift away from the hospital, and GPs saw such changes align with their own interests. Once again, diabetes provided something of a pioneer in these efforts, but with the development of cross-institutional care also came calls to ensure management of professional labour.
Although the creation of the NHS brought considerable change to British healthcare, hospitals retained their leadership in, and authority over, diabetes care in the three decades after the Second World War. Hospital practice, however, was not necessarily static over this period. The number of clinics grew considerably under the NHS, with a greater emphasis in policy placed on their staffing, facilities, and organisation. Likewise, diabetes management became closely entwined with medico-political emphasis on managing chronic sickness in the community. Experiments with health visitor schemes marked the beginning of a more socially oriented medical gaze, focusing on the home conditions, attitudes, and practices of young and elderly patients, along with the newly diagnosed. As academics, public health practitioners, and clinicians began to talk more about the challenges of ‘chronic disease’, doctors even experimented with travelling clinics for continuing care of all adult patients.
The growing healthcare team demanded co-ordination, and new forms of guidance and records emerged in the bureaucratic culture of the hospital. These tools loosely managed labour, but focused primarily upon nursing and ancillary staff, and there was clearly great flexibility in the work undertaken. These dynamics were to play out over a much larger canvas and geographical area during the next two decades, as GPs and other community care actors sought to expand the care of diabetes outside hospital walls. As diagnosis improved and rates of diabetes continued to rise, clinics faced patient loads that they were never designed to handle. At the same time, their resource requirements continued to outstrip the funding available under the NHS. The result was falling standards and unsatisfactory care, and clinicians complained of clinics filled with patients who did not require their skilled labour. Moving care beyond the hospital and into GP practices, however, was not a simple affair. As we will see in the next chapter, this remaking of diabetes management involved numerous innovations, and was driven by complex aims and professional interactions. In the event, these local efforts at spatial innovation brought new forms of bureaucratised practice into the community. When combined with increased drives for surveillance and regulation of quality, they also produced local forms of professional management. It is to the changing role of primary care, and its implications for professional management, that we now turn.