Projects like the SMD-funded retinopathy screening trials reflected the British state's growing engagement with diabetes during the 1980s. In that specific instance, the DHSS's hopes for generating organisational guidance for the NHS were disappointed. Central state interest in diabetes management, however, remained undimmed, and much more extensive standards for diabetes care would be produced by the new millennium.
The work of elite practitioners and specialists proved integral to maintaining state interest in both diabetes and service guidance. Reflecting their historic concerns with service organisation, and engaging with mounting critiques of medicine made from within and without the profession, various professional bodies, international organisations, and the BDA became increasingly concerned about standards of diabetes care over the last quarter of the twentieth century. The Royal Colleges and BDA, for instance, collaborated in drawing up guidance on service organisation in 1977, and audited the staffing and facilities available for NHS diabetes management in 1984. Into the early 1990s, these bodies devised more formal clinical guidelines: specialised documents providing specific advice on acceptable standards of disease management, encompassing not just organisation, but also process and outcomes.1 As discussed in Chapter 3, since the 1970s some innovative practitioners had come to see structured care – built around locally codified protocol and audit – as the embodiment of good diabetes management. During the late 1980s and early 1990s, however, the production of guidelines increased to a rate hitherto unknown, both within diabetes care and in British medicine in general.2 Guidelines and their standards, in other words, entered the very fabric of medical practice.
Examining a mixture of published and unpublished guidelines, this chapter traces the development of standards documents in British diabetes care from the late 1970s to the early 1990s. It argues, firstly, that the nature of guidance shifted dramatically over these decades, developing in ways that opened care to external management and challenged traditional views about clinical decision-making.3 Initially covering ideal facilities and staffing, the parameters of guidance expanded to encompass standards for care process and targets for therapeutic outcomes as concerns over clinical standards and professional accountability grew. Moreover, indicative of novel visions of professionalism that emphasised self-reflection and peer critique, the new documents covered not just disease management, but also the process of review; they aimed to structure care and audit, and to provide benchmarks against which performance (and by extension, professionals themselves) could be managed.
Secondly, this chapter underlines the centrality of elite medical professionals to the production of national guidance on best practice during the 1980s and 1990s, beginning a process that would fundamentally alter the regulation of British medicine in the following decades. Operating within nationally focused organisations like the BDA and Royal Colleges, as well as international agencies like the WHO, groups of specialists and prominent British doctors created early guidelines to structure the work of fellow physicians. They did so, moreover, in the name of ‘quality’. During the mid-twentieth century, most doctors saw the quality of medicine as dependent upon the employment of sufficient numbers of trained and experienced professionals, often working together, with access to the latest diagnostic and therapeutic technologies. However, amid growing popular, political, and medical criticisms of clinical practice, academic doctors in particular began to reframe concepts of quality during the last decades of the twentieth century. Drawing on concepts and technologies developed in medical research and education, elite practitioners cast quality as measurable, assessed in relation to defined outcome measures, and best secured by following agreed protocol standards and undertaking regular review. Pre-war clinical medicine had been subject to external constraints and peer discussion, but in issuing guidance post-war specialists and their organisations began to add layers to existing, informal regimes of clinical government. During the late 1980s and the 1990s, that is, elite medical professionals produced national standards to directly inform local protocols (previously devised through experience and negotiation) and provide the basis for effective audit. 
In fact, moves to establish national strategies to guide and monitor the performance of health authorities could be seen in the changing form of guidelines, which shifted from published reports to consensus statements and technical documents. Thus, though not connected to a formal structure or hierarchy at this point, leading specialists (along with the range of bodies to which they were attached) were part of a project to restratify the government of British medicine. Though not entirely successful, their efforts had, by the mid-1990s, created the political space for the more fulsome regulatory architecture of clinical governance that emerged at the turn of the millennium.
In taking a deeper look at guidelines within diabetes management, this chapter explores their expanding remit and the ways in which they made certain forms of work and organisation possible.4 It also traces the roots of a more fundamental reorganisation of medicine in Britain at the end of the twentieth century, one structured by political, cultural, and social trends, but nonetheless driven in part by medical practitioners themselves. Finally, although providing an important window onto broader changes, this chapter also highlights how diabetes management – along with chronic disease management more broadly – was deeply embedded within the guideline movement. In so doing, it provides the groundwork for assessing how specialists and their various institutions opened space for, and intersected with, government efforts at professional regulation, which are explored in Chapter 6.
British medicine under fire: regulating quality in medicine
By the 1970s, questions about the quality of care provided by British doctors had begun to be raised from numerous quarters. Not all concerns were related to clinical decision-making. Some focused on the demeanour of staff, whilst others related to access to services and distribution of resources. Nonetheless, this ‘crisis of quality’ intersected with several other debates – most notably those concerning the costs of care and the need for greater accountability in public life – to undermine faith in long-established mechanisms of training and licensing.5 It was this conflation of debates over quality, cost, and accountability that produced calls for more formal systems of regulation for British medicine, and that set the stage for the construction of clinical guidelines.
Sustained public concerns over standards in British medicine first emerged during the late 1960s, propelled by an increasingly critical British media. Popular media interest in healthcare had increased over the post-war period. Whilst some doctors, particularly those involved in public health campaigns, could turn public curiosity to their advantage, increased attention also brought heightened scrutiny of medical practice.6 Towards the end of the 1960s, journalists investigated reports of appalling conditions within long-stay and psychiatric hospitals, and the resulting exposés sparked political reactions.7 The DHSS launched its own reviews and attempted to subject hospitals to inspection and independent advice. It also reformed complaints mechanisms, notably creating an ombudsman.8 Intended reforms were attenuated in practice. For instance, the ombudsman could not examine complaints concerning clinical decision-making. Nonetheless, continual media attention ensured that ‘scandals’ became a regular feature of reportage into the 1980s, and doctors became subject to public criticism.9 Campaigns for change emerged out of such scrutiny, and throughout the 1980s parliamentary figures pressured the General Medical Council to clarify minimum standards for ethics and professional conduct and to bring incompetence into the disciplinary arena.10
The weakness of complaints mechanisms available to professionals and the public sat at the heart of many scandals, and the issue greatly concerned bodies claiming to speak for patient-consumers. The emergence of these organisations during the 1960s coincided with the broader professionalisation of collective consumer voices in post-war Britain and their institutionalisation within state bodies.11 Moreover, groups like the Patients’ Association built upon contemporaneous public demands for autonomy and political accountability. Recent research on the 1970s and 1980s, for instance, has traced the migration of accountability practices from financial institutions to public and commercial life and examined the reformulation of auditing practices within government.12 Such work has also examined the ways in which ‘ordinary’ people came to express desire for greater control over their lives, and to reinterpret collective identities in terms of individual rights.13 The creation of the NHS had recast health as a basic social right within the public imagination, and demands for professional accountability and patient rights can be seen in the work of patient organisations.14 These agencies helped to move complaints beyond long-stay hospitals, assisting patients with individual grievances and building campaigns to reform procedures.15 Along with Community Health Councils (created in 1974 to represent consumer voices in the NHS), these organisations also surveyed patient opinions on health services, identified areas for improvement, and worked with health service bodies to implement changes.16 Whilst not always critical of medical performance, this work added to concerns about service quality.
Anxieties about quality were not only expressed by public and political bodies. By the 1970s, certain sections of the medical profession had also questioned the capacity of certification, informal regulation, and discipline to effectively ensure quality care. During the mid-century, for instance, dissatisfied GPs, registrars, and consultants expressed numerous frustrations with the NHS, ranging from underinvestment and poor resource distribution to colleagues’ attitudes to patients and – particularly in general practice – the detrimental impact of professional isolation on care.17 Quality practice was seen to depend on more than just trained professionals with good character, also requiring effective distribution and combination of staff as well as access to up-to-date facilities and treatments. For the most part, however, these criticisms left the existing framework of regulation alone. Despite dissatisfaction with elements of medical education, critics trusted the system to produce practitioners whose clinical judgements could be relied upon.18
Later criticisms, by contrast, focused upon questions of knowledge and performance at the heart of this traditional model. From the late 1950s onwards, a small number of doctors and academics began to make uncomfortable criticisms about the efficacy of medical practices and the inconsistency of doctors’ decision-making. Such assessments developed out of older conflicts between cultures of individual expertise and universal knowledge in medicine.19 By the 1960s and 1970s, British epidemiologists and clinical researchers began to use trials and, to a lesser degree, observational studies as a basis for criticising much medical practice.20 Perhaps most famously, Archie Cochrane suggested that many services and treatments – including those in diabetes – had not been proved effective via ‘scientific’ experimentation grounded in statistical theory.21 As a result, he argued that considerable sums of public monies were either being wasted on potentially inefficacious treatment (based on fallible experience and tradition) or were being deployed inefficiently (because the best means of deploying effective tests or therapies had not been established).
Similar critiques were voiced by practitioners of newly institutionalised disciplines, such as health economics and health service research, that took the delivery of care as their object of study.22 Academic units in York and London, as well as think-tanks like the Office for Health Economics, became sites for raising questions about large variations in healthcare, in part building upon epidemiological studies of variation in diagnostic testing and interpretation.23 Contributing to longer-term trends in the standardisation of categories and techniques of investigation, these disciplines problematised clinical decision-making and sought to find practices offering the most efficient outcomes for routine care.24 In so doing, they further undermined the idea that high standards could emerge solely from the effective distribution of well-trained practitioners and high-technology medicine. Cochrane himself even recommended a loose system of monitoring, guideline production, and protocol dissemination using the existing architecture of Hospital Activity Analysis data systems, scientific papers, and ‘Cogwheel’ clinical management committees.25 He also considered the loss of clinical and administrative freedoms resulting from setting and reviewing parameters to be worthwhile if outcomes were improved.26
Finally, these discussions about standards of care and regulation of medical professionals were closely connected to other concerns about health service costs and broader debates about accountability. As discussed in Chapter 1, the unexpected costs of the NHS during the late 1940s and early 1950s provoked widespread political concerns about the service's viability. We have already noted how state agencies responded in the 1960s and 1970s, seeking to incorporate professionals into hospital administrative structures and hoping that increased information about clinical decisions and resource use would encourage better self-management and reduce costs.27 As we will see in Chapter 6, in the decades following the 1970s, governments took even more strident moves in this direction, with neoliberal analyses of professional self-interest and market efficiency underpinning the use of new accountability techniques.
Again, however, elite medical professionals and academic researchers had linked discussions of quality with concerns about public expenditure and accountability ahead of political developments. As well as generating parliamentary attention, the problems facing the new service drew interest from researchers and practitioners, and especially from academics politically dedicated to the pursuit of social equality. Alongside emergent health economists and service researchers, this small group of professionals connected questions over service finance with critiques of clinical practice, and from the late 1950s fostered new academic disciplines from the search for service stability and improvement.28
In 1957, for instance, the pioneer GP and primary care researcher John Fry reflected upon the way in which GPs received remuneration amid financial disputes with the then Conservative government.29 In a letter to the Lancet, he mused upon a ‘lack of supervision’ in current arrangements. Whilst ‘freedom of the individual to practise medicine according to his own views and principles must be jealously guarded’, Fry wrote, ‘we are being paid out of public monies to care for our patients’. It thus ‘seems only right that some steps be taken to ensure that the public is receiving a sufficient standard of medical care’, none of which were presently in place.30 Admitting that ‘no-one likes “controls”’, he went on to suggest that ‘there exist innumerable safeguards in hospital practice to see that suitable standards are being maintained and these are accepted as inevitable and reasonable interferences’. In the case of general practice, he recommended a mix of reforms. These included some pay-for-performance activities – ‘whereby the practitioner who is providing a high standard of care would have a scale of increased remuneration based on some agreed standards’ (for instance, related to the organisation of practices) – as well as the limiting of certain drugs for ‘use in specific diseases’. Along with reforms to hospital prescribing and administration, the latter might provide some remedy for ‘the ever rising cost’ of ‘the whole service’.31
Jerry Morris, a qualified clinician and renowned epidemiologist, echoed Fry's views.32 As one summary of a lecture, also given in 1957, put it: ‘Every system, in Dr Morris's view, needs itself to be regularly scrutinised, and routine systems need built-in controls of quality.’ Connecting these views to broader medical and business culture, Dr Morris went on to suggest that ‘this is widely accepted in industry, in biochemistry, and in bacteriology; and there is great scope for its application in clinical medicine and the health services’. The cost of the NHS was not far from his thoughts. ‘At present remarkably little attempt is being made to ascertain how our £500 million health service is working, what are the needs to be met, and how well they are being met.’ Comparing variations in ‘the average prescription rate’ in different cities and areas of the country, he asked, ‘what do these figures mean? Would not a coöperative [sic] inquiry by the local medical committees of these towns … be of more local value – not only in showing answers but in showing how to tackle such questions[?]’33
The challenge of such views to predominant thinking about professional practice should not be downplayed. The invocation of controls or investigations had been strongly feared by doctors who were sceptical – or even simply hesitant – about employment in state-funded services during the 1940s and early 1950s. These concerns had been particularly strong amongst GPs, many of whom had joined the NHS only on the proviso that they would operate as ‘independent unit[s]’, ‘not subject in … purely professional judgements to any lay authority or … superior medical officer’.34 Such freedom was necessary, firstly, because clinical medicine was considered irreducible to rigid formula. Revitalising a powerful turn-of-the-century rhetoric, opponents of universal state service spoke of how ‘the practice of medicine’ was ‘more of an art than a science’ and ‘an art, moreover, applied in an intimate personal relationship between doctor and patient’.35 The variability of clinical disease, in other words, meant that experience provided the soundest basis for decision-making, and only a doctor who knew the individual peculiarities of their patient could determine the appropriate course of action in any given case. References to ‘intimate’ relationships, moreover, recalled the ethical obligation of the doctor to the patient that lay at the heart of the professional encounter. Medical practitioners were duty-bound to do what they felt was in the best interests of the patient, and if clinical autonomy were curtailed they could neither be fully accountable for their care nor effectively fulfil their professional commitments.
Secondly, the inviolability of clinical autonomy touched the very heart of professional self-image. GPs dissatisfied with restrictions in the early NHS, for instance, referred to ‘charge[s] of excessive prescribing’ as ‘degrading and insulting’ and as indicative of a lack of ‘trust [in] the clinical acumen of the doctor’. Already frustrated about their exclusion from the hospital, they wondered where ‘the intrusion on our liberty’ would ‘cease’, and feared becoming ‘an outcast of the profession’.36 Similarly, despite hospitals having well-established clinical hierarchies to oversee the practice of junior clinicians, anxieties about controls also existed within hospital practice.37 For example, when the Ministry of Health recommended that NHS bodies appoint ‘consultant[s] in administrative charge’ to improve co-ordination of hospital work during the early 1950s, civil servants also felt compelled to clarify that the appointment of such figures was not intended to ‘confer any authority over the clinical freedom of other consultants in the[ir] department’.38 Across the profession, therefore, freedom of action was a mark of status as well as being central to views of good and ethical medicine.
In light of such views, it is unsurprising that the majority of doctors were initially lukewarm to critiques of medical care.39 However, facing mounting political and academic critique in the decades after the 1950s, some British practitioners – often GPs involved in innovative training programmes, or clinicians with experience of trial work and international practice – began to respond proactively to concerns about standards and the need for review.40 As noted in Chapter 3, hospital clinicians and GPs engaged with ‘medical audit’ after the 1960s, drawing on developments in market-oriented and insurance-based systems.41 Similarly, as the pressures of resource constraints built during the 1970s and 1980s, debates about clinical autonomy became more intense.42 Leading journals and academic practitioners developed earlier arguments, suggesting that the financial insufficiencies in the NHS made setting limits to clinical activity in individual cases an ethical responsibility to protect the collective.43 More strident voices even echoed Cochrane's assault on individual judgement, suggesting that clinical decisions should now follow only where trials had proved measures effective.44 Criticisms of the profession and the drive for audit were also reinforced by philosophers, sociologists, and political scientists publishing in popular books and journals, exposing doctors to outside perspectives on accountability and ‘quality’.45
This was the context within which guidelines emerged. Critiques had been made about the nature, cost, and quality of medical practice from within medicine and without. A small minority of doctors and academics had tried to address their concerns through novel methods for some time, but external pressure from patients and political bodies accelerated the process and informed responses. Guidelines had been mooted as a sensible way to steer practitioners in certain situations, and local protocol had already been devised in some locations to manage care. A drive for better monitoring of care as an educational aid also encouraged the development of standards. In the case of diabetes, tentative guidelines (and allied auditing systems) began with service facilities and staffing before moving on to process and outcomes. This shift itself marked a significant transformation in the nature of medical regulation and autonomy. However, attention also needs to be paid to the agencies involved in guideline production. The lead taken by professional bodies, international organisations, and the BDA not only highlighted the prominence of professionals themselves in the reformulation of managed medicine. It also marked a shift in the organisation of British medicine, with elite agencies claiming to more formally regulate the activity of local practitioners.
The emergence of guidelines in diabetes care: facilities, staffing, and nomenclature
As Chapter 1 outlined, the first official guidance on diabetes care emerged from professional advisory mechanisms that were embedded in the early NHS. Coming in the form of a Ministry of Health circular, it was not extensive. Over its eight points and three sub-points, the two-page document offered health authorities advice on organisation, bed requirements, and staffing of regional services. Although offering one quantified norm on the provision of beds (one bed per every fifty patients with diabetes), the circular's vague advice was respectful of decentralised decision-making and mindful of the way specialist services were structured around regional hierarchies of hospital provision.46 As such, the guidance focused upon how authorities might scale up provision at different levels of the service. For instance, the circular advised that local clinics should have ‘facilities for urine testing and blood sugar estimations, and in addition to medical staff should have a sister trained in dietetics and insulin administration as well as one or more other nurses’. For larger centres, dealing with greater numbers of complex patients, a ‘full range of ancillary facilities, notably pathological and radiological’ should be complemented by ‘necessary nursing staff and dietitians and probably a part-time almoner and also chiropodist’.47 Finally, the circular advised that the largest centres – at the apex of the system, and probably based ‘in Teaching Hospitals’ – should each be run by a diabetes specialist, or at least ‘a general physician with a special interest in the condition’.48 The circular then concluded by proposing that in ‘each hospital region there should be a scheme, drawn up by the Regional Hospital Board in consultation with Boards of Governors, for the provision of special facilities’.49
The circular itself did not carry the word ‘guideline’. Indeed, the term did not appear to be commonly deployed in British medical discourse until around the 1970s, and even then it initially operated with a number of interconnected meanings.50 During the 1960s and early 1970s, for instance, ‘guideline’ could refer to physical signs indicative of future diagnostic or therapeutic action; general rules of thumb guiding clinical practice; or principles, papers, or pieces of evidence that could aid clinical decision-making.51 It was not until the mid-1970s that the term ‘guideline’ was used for a specific type of document and official regulation, predominantly concerning service organisation.52
Nonetheless, despite the 1953 circular neither being called a guideline nor resembling later algorithmic forms, it reflected a new type of documentary advice to hospital doctors and administrators designing services.53 It was produced specifically to offer external advice on the organisation of care, even if the Ministry itself ultimately lacked the mechanisms and political interest to ensure take-up.54 Moreover, though further state-funded guidance on diabetes care would not be issued until the mid-1980s, the circular also marked the beginning of a process in which numerous agencies set standards for diabetes care.
In general, the guidance produced for the next three decades sat within the same framework as the 1953 guidance. Most took the form of reports and focused on facilities, staffing, and organisation, setting aside issues of clinical decision-making. There were some variations. For instance, in light of rising prevalence estimates in colonial and international surveys, in 1965 the WHO brought together an expert committee on diabetes which included British representation.55 Although it covered a host of topics, the WHO was interested in the accumulation of accurate and comparable data between locations, seeing it as central to ‘motivat[ing] action to resolve’ the ‘public health problems of diabetes’.56 As a result, along with very detailed guidance on establishing screening services, the committee produced clear provisional nomenclature standards for different stages of diabetes, and recommended quantified thresholds for definitively ruling out and providing diagnoses.57 The hope in fixing such criteria was to standardise units for statistical comparison (providing a powerful conceptual and practical precedent for managing medical practice).58
Ultimately, the WHO standards appeared to make little immediate impact upon clinical care. Textbooks continued to use discordant terminology and diagnostic criteria, and the report itself was inconsistently cited.59 Instead, the report's standards laid the foundations for important research programmes, and for more influential diagnostic criteria produced in 1980 and revised in 1985.60 Though the 1965 report exercised little influence, it – and its successors – marked an attempt to set standards that possessed clinical implications. The production and reception of all three WHO reports also signified the increased movement of British diabetologists into expanding transnational networks, and the way in which international organisations like the WHO would exercise influence on British diabetes policy (as discussed further in Chapter 6).61
Perhaps more typical of guidelines in this period, however, was the 1977 report from a working party of the Standing Committee on Endocrinology and Diabetes Mellitus of the Royal College of Physicians of London (RCP), in which the focus remained upon facilities and staffing.62 The Standing Committee itself emerged from elite diabetologists and endocrinologists across the UK, who lobbied the RCP to establish a committee following the closure of a counterpart in the MRC.63 The Standing Committee intended to offer leadership in the field, advising the College on all matters concerning endocrinology (particularly training programmes and resources), co-ordinating research efforts, and maintaining the existing high standard of practice in clinical endocrinology.64 No doubt connected with this last point was a specific reference to ‘keep under constant review[,] and advise on[,] the facilities required by Clinical Endocrinologists and General Physicians with a special interest and experience in endocrinology working in different types of hospitals in this country’.65
The Committee's working group on diabetes contained two prominent diabetologists, Dr John Nabarro and Professor J. M. Malins, who, as we have already seen, were at the leading edge of innovative clinical practice and service organisation, and were also involved in policy discussions with the DHSS.66 Their report was much firmer than earlier documents on questions of personnel and facilities required for quality care. The group's most direct suggestion was that a general physician with an interest in diabetes (and, where possible, endocrinology) ‘should be appointed in each NHS District’ (the most local level of administration introduced during the 1974 reorganisation). Alongside ‘taking a full part in the general medical work of the District’, this physician would also ‘be responsible for promoting a service to diabetic patients in the community and in the hospital’.67 Clinical work would not be undertaken alone. In the clinic, the authors recommended that the physician in charge ‘will need at least two experienced doctors’ to cope with the ‘3,600’ follow-up appointments generated by a ‘District of 250,000’ population.68 These doctors, moreover, would lead a ‘Diabetic Team’, including ‘nurses, a dietician and a chiropodist’, as well as a medical secretary.69 Similarly, the physician would support the development of services outside the clinic, and would in turn be supported by GPs, district nurses, and health visitors.70
The report also offered greater clarity on equipment and the distribution of expertise than earlier guidance. Reflecting the importance of surveillance to patient management, district clinics would require specific ‘accommodation and equipment’, including ‘a separate room in the clinic’ for each doctor; ‘two examination rooms’; a ‘dark room for ophthalmoscopy’; ‘appropriate facilities’ for analysing urine for ‘glucose, ketones, and protein’; and access to the ‘Biochemistry Department’ and either a ‘hospital autoanalyser service’ or a ‘rapid glucose oxidase machine’ for timely ‘blood sugar examinations’.71 Alongside these more routine instruments and spaces, new evidence concerning ‘diabetic retinitis’ meant that ‘regular supervision’, particularly of ‘the younger diabetic’, was ‘of great importance’. To this end, the report suggested that ‘the Physician in Charge of the Diabetic Clinic must establish close liaison with his ophthalmological colleague[s]’ and arrange access to laser equipment, which, along with the expertise to use it, would ‘only be available in a limited number of centres’.72 In cases of retinopathy, patients might, therefore, be passed on to one of the centres existing in ‘most regions’ where ‘difficult problems may be referred’. These centres were ‘usually … in the Teaching Hospital of the Region’ and had ‘physicians with very wide range experience of the problems of diabetics’.73
In many respects, the RCP report updated arrangements first considered in 1953, responding to renewed policy interest in planning as well as to changes in clinical technologies and the care team over the intervening period. It was therefore dominated by issues of staffing, facilities, and service organisation, but now reflective of different institutional arrangements, and with more prescriptive recommendations. The 1977 report also gave greater consideration to divisions of labour than the Ministry's guidance. For instance, the physician was to depute ‘the slow and patient education of the diabetic to look after his or her condition’ to nurses, and instead take a role in promoting community services and educating staff.74 Likewise, though some patients would be seen by their GP ‘whenever possible’, the report also noted that physician follow-up would be required regularly, between every three and every twelve months.75 Notably, like the 1953 guidance, this report did not seek to infringe upon the content of clinical care, or set standards for expected outcomes. Instead, the target audience consisted of those clinicians, Community Physicians, and others involved in the planning of NHS services. In this sense, it mirrored simultaneous efforts by the DHSS to provide guidance on staffing and priorities, and to introduce better technocratic management to the health service.76
Nonetheless, this report was significant in one respect: it saw the tentative entrance of elite diabetologists and professional bodies into the realm of standard-setting and guidance production, following specialists in other fields.77 As noted above, specialists had been central to the production of earlier guidance on diabetes care, but the resulting documents derived legitimacy from the statutory powers of the CHSC and the Ministry of Health. Similarly, whilst the Royal Colleges had played historic roles in maintaining professional standards, they had hitherto pursued their aims through certification, education, and their influence over policy committees and NHS bodies.78 The 1977 guidance, therefore, represented diabetes specialists’ adoption of more formal technologies for managing care, and the beginning of a move by elite professional bodies to govern ongoing medical practice more directly.
Although the RCP report focused only on the structural elements of care, it nonetheless set a precedent. Through the creation of clinical guidelines and audit programmes over the next two decades, the College and other elite bodies would extend their managerial interests into the process and outcomes of care, features of medical practice once considered the sovereign domain of the individual professional. Such national efforts at regulating care in the name of quality were designed to influence local practices and establish some form of national system for tracking care provision. They would also provide space for further government efforts into the 1990s.
This is not to say that state-sponsored guidance completely disappeared in the 1970s and 1980s. Reflecting the more co-operative relationships between central government, health authorities, and healthcare professionals in Scotland, the National Medical Consultative Committee of the SHHD commissioned a working group in 1984 to ‘prepare guidelines for improved care of patients with diabetes’, with consideration given to integrated care, technology, and resource costs.79 Senior figures in general practice, diabetology, paediatrics, health economics, nursing, and community medicine made up the membership of the working group, with a Senior Medical Officer of the SHHD on the secretariat. Creating the guideline involved approximately a year of evidence-gathering from key figures and organisations, and the Committee drew on several reports produced across Britain and by the WHO.80 Once again, the guideline primarily focused on staffing, education, facilities, and relationships between various sectors of the healthcare system. However, reflecting a growing managerialisation of British medicine, it also recommended the production of performance indicators for outcomes and process to facilitate national audit. A minority of its final recommendations caused friction in the SHHD because of concerns over costs, but the guideline was revised and published in 1986 with the SHHD's approval.81 During the 1990s, this form of guideline commissioning and evidence-gathering would become more popular across Britain, though taking place within new state agencies independent of government. However, this future work would also target the content of clinical care with greater vigour, and the involvement of leading specialists and professional bodies gave such work legitimacy.
Guidelines and audit in the 1980s: process, outputs, and outcomes
The SHHD report's call for performance indicators signified the changing nature of the guidance being produced in the 1980s and early 1990s. In short, elite guideline-creating bodies increasingly produced prescriptions for the content of clinical practice, and in so doing challenged the concepts and structures of clinical autonomy at the core of traditional views of medical professionalism.
One of the earliest movements in this direction came from the BDA. In view of its strong links with elite specialists and its dedication to improving care for patients with diabetes, it was perhaps unsurprising that the Association would be at the forefront of guidance production. In 1982, the BDA published a ‘policy statement’ on dietary recommendations for the decade that became widely cited.82 Produced by an expert committee that considered a large body of published research, this work built on the state-backed guidelines for diet of the 1970s and 1980s, and brought recommendations for individuals with diabetes roughly into line with advice for the rest of the population.83 To provide the clearest possible advice, the guidance contained specific, quantified recommendations for constructing patient diets: 50 per cent of daily energy intake was to be derived from carbohydrate, 30 per cent from fat, and 20 per cent from protein. In some respects, this guidance thus marked a novel and more confident take on dietary proposals. However, the Association had given recommendations on diet before, and it was not necessarily an area for which doctors assumed responsibility or which they valued as part of their clinical autonomy.84
Perhaps a more significant intervention in this regard was the ‘protocol’ (later renamed ‘guideline’) for diabetes care produced by the RCGP in 1986 as part of its Quality Initiative (and related Clinical Information Folders).85 By comparison with the looser norms and organisational focus of earlier guidance, the College guideline adopted a more prescriptive form, containing step-by-step, algorithmic guidance on how to manage diabetes within general practice. Beginning with ‘identification’, it advised GPs to create a register of patients already diagnosed with diabetes. Then, using WHO criteria, it detailed the diagnostic process, and outlined the interpretations to be made in four specified situations: fasting blood glucose results of ‘over 8 mmol/l’ or random results ‘over 11 mmol/l’ were declared to be ‘almost always indicative of diabetes’, whilst diabetes was considered ‘unlikely if the fasting blood glucose is below 6 mmol/l or the random blood glucose below 8 mmol/l’. Only in cases of uncertainty – where symptoms were absent and results fell between thresholds – were oral glucose tolerance tests ‘justified’.86
Mirroring an abstract idealisation of the developing clinical encounter, the text then moved from diagnosis to disease management, setting out a precise programme for establishing appropriate treatment:
if the patient is overweight (i.e. 20% above ideal body weight) try diet alone for three months. At the end of this period, review diet, weight loss and assess compliance. If control has not been achieved and the fasting blood glucose is above 8 mmol/l consider adding Metformin starting with 500 mg b.d. (always check blood urea and serum creatinine first and remember that metformin should not be used in patients with renal or hepatic impairment nor in those with an alcohol problem).
‘If hyperglycaemia persists after a month’ on the highest possible dose of metformin, it concluded, the criteria had been met for specialist referral.
Although demurring from directive language in favour of suggestive phrases (such as ‘consider adding’), the guideline's formulaic ‘if/then’ structure was indicative of the way in which standards documents were becoming more explicit about the ‘correct’ clinical actions to be taken in given situations. As suggested above, such codification of disease management clearly contested visions of clinical practice as inherently variable and unavoidably individual, and provided the foundations on which the autonomy at the heart of professionalism could be structured and subject to review.
Perhaps the apogee of the guideline's ambition and prescriptivism, however, was located in the advice given for the process of patient review. Alongside detailed discussions about the conditions under which certain laboratory tests were needed, the RCGP documents disaggregated and codified the actions to be undertaken at each consultation in considerable detail:
At each consultation there should be consideration of:
(a) well-being, number of hypo attacks, days off work/school, presence or absence of nocturnal frequency, visual problems etc.
(b) review of blood glucose or urine tests
(c) review of diet and/or medication
(d) review of patient's smoking habits
(e) check of weight
(f) inspection of feet and review of the need for chiropody
(g) test urine for protein
(h) review of injection sites
(i) review of urine
(j) review of urine testing or blood testing technique
(k) check on the need for pre-conception counselling
(l) check of patient's understanding of their diabetes
(m) if appropriate, set further goals for treatment
(n) discuss specific problems and arrange follow-up.87
Whilst later revisions were updated in line with recent research and thinking – for instance, adding patient-focused elements, such as ‘the patient's perceived problems’ and ‘educational needs’ – the new texts retained the layout and tone of the original.88
Without any commentary by the College on its use, the guideline may have embodied the worst fears of mid-century practitioners. As will be noted below, the proliferation of guidelines into other areas of practice certainly drew complaints about curtailed autonomy and laments that the ‘inflexibility’ of guidelines made them ‘clumsy’ in the face of patients’ uniqueness.89 The College recognised some limits to its vision and reach, however. To avoid offending the sensibilities of more individualist practitioners, the guideline's introductory segments included disclaimers to dispel fears that it would be determinative of practice. The College suggested that ‘no protocol can cover every situation’, and whilst diabetes was ‘a marvellous example of a condition which lends itself to team care’, the documents declared that ‘no attempt is made to define the responsibilities for individual members’. It was ‘better’, the authors felt, ‘that they [the team members] agree these amongst themselves’.90 A joint RCP, BDA, and RCGP guideline published in 1993, which assumed similarly prescriptive form, came to the same conclusion.91
Crucially, neither the RCGP guideline nor the joint guideline was designed to directly influence practice in the sense of replacing local frameworks and tools. Rather, these documents were intended to inform the protocols structuring local systems. Though modified to suit local situations, these guidelines were to provide a uniform standard around which doctors and agencies could organise their work. In this sense, the creators of national standards sought to subject local care to greater regulation, but believed that local ownership of clinical protocol might make adoption and adaptation of guidelines more likely.
The RCGP and joint RCP–BDA–RCGP guidelines were created at a time when ‘how-to’ guides for establishing quality diabetes services began to appear with great regularity. The College's publications probably carried greatest authority, being grounded in multi-disciplinary experience and the accumulation of various sorts of evidence. However, the involvement of the BDA in guideline production was also notable, marking a move into a new role. Over the 1980s and into the 1990s, the Association self-consciously reframed its work, discursively positioning itself as ‘active in improving standards of care provided for people with diabetes in the National Health Service’.92 In part, this involved continuing its traditional role of gathering information on services and providing advice and support to the public and its patient membership.
In later decades, though, the BDA believed that improving the quality of care required the codification of process standards, and not just for professionals. In line with its role as a patient advisory body, it developed guidance for patients themselves, informing them, as individuals, on ‘what diabetic care to expect’.93 These leaflets outlined in clear and prescriptive bullet-points the types of supervisory processes professionals should perform. For instance, they declared that ‘when you have just been diagnosed you should have’:
1. A full medical examination.
2. An explanation of what diabetes is and what treatment you are likely to need: diet alone, diet and tablets, or diet and insulin.
3. A talk with a dietitian, who will want to know what you are used to eating and will give you basic advice on what to eat in the future. A follow-up meeting should be arranged for more detailed advice …
This section went on to cover a further six points, taking in treatment modality, self-monitoring, social and financial implications of diagnosis, and ongoing education. The next segment, entitled ‘Once your diabetes is reasonably controlled’, included a further four points about annual supervisory check-ups, ongoing education, accessibility of specialist staff, and a formal annual review. Finally, this last topic was then broken down into nine sub-points detailing how:
At this review:
• Your weight should be recorded.
• Your urine should be tested for ketones and protein.
• Your blood should be tested to measure long term control.
• You should discuss control, including your home monitoring results …
The list ran on to describe other tests and how a consultation should include ‘the opportunity to discuss how you are coping at home and at work’.94 Notably, the guide closed with the lines ‘the control of diabetes is important, and so is the detection and treatment of any complications. Make sure you are getting the medical care and education you need to ensure you stay healthy.’95
Clearly, these leaflets emerged from the Association's fear that patchy service from doctors and health authorities might result in missed supervision and support. The codification and supply of knowledge were therefore important because not all healthcare providers could be relied upon. Yet, in trying to persuade patients to insist upon their rights, the Association was also cultivating a particular type of patient, one who was not just an informed part of a team but also demanded submission to review.96 Although seemingly contradictory on the surface – at once empowered and subjectified – this patient emerged from deeply held convictions about the importance of oversight in diabetes care. Moreover, in the context of both neoliberal health service reform and a growing political and grassroots emphasis on patient consumerism, the Association's vigilant and vocal patient was also to offer a solution to concerns about medical practice. Informed and active patients were to add another layer of regulation to care: they would call certain forms of action into question, and encourage practitioners to behave in approved manners. Guidelines from above and protocol designed with peers would be enforced by patients below. If everyone was aware of expectations, then surveillance could come from all directions.
Through this emphasis on monitoring, the BDA and elite professional bodies sought to make the final move into the realm of healthcare government. As noted in Chapter 3, to some extent the drive for local audit provided a motivation to create practice protocols: sets of standards against which measurement could take place. However, during the 1980s and early 1990s, elite professional and specialist organisations began to audit whole systems. Some looked nationally, others on a smaller scale; some surveyed facilities and staffing, whilst others worked with performance benchmarks for process and outcomes. Regardless, the institution of regular review added new layers to healthcare regulation.
Once again, the BDA played a central role. Its UK-wide review of staffing and facilities in hospital services, undertaken with the RCP and published in 1984, marked an important take-off point for placing national diabetes management under scrutiny.97 The survey emerged, in part, from the RCP's efforts to improve standards in clinical endocrinology, which, as we have seen, involved monitoring and promoting the employment of specialists across the health service. Similarly, specialists associated with both the RCP and BDA were concerned about how UK hospitals had responded to recent developments in clinical diabetology, fearing considerable regional variations and inequalities.98 The review thus surveyed medical professionals across the NHS to generate baseline data, and simultaneously updated and transformed the recommendations of the RCP's 1977 report, turning them into benchmarks for minimum requirements. The report noted, for example, how ‘in 30 health districts in the United Kingdom there are no physicians specialising in diabetes’, and highlighted the fact that ‘of 428 respondents … 48% do NOT have [a] dark room for retinal examination’.99
The staffing element of this research was followed up several years later, with only minimal progress made towards greater equality.100 As noted above, though, professionals involved in bodies like the National Medical Consultative Committee of the SHHD working group had sought to expand audit's remit in the intervening period, taking process and outcome into account. Rooting itself within interlocking contexts of Scottish, British, and international medicine, the SHHD report noted that some indicators had already been recommended elsewhere in the world. English access measures, the report recalled, were devised to facilitate comparison between health districts, but ‘Scottish health boards were too diverse for comparison between areas’. Thus the authors suggested that longitudinal measures for ‘all Scotland’ would be preferable, and ‘access measures for the process of care’ could allow for intra-area comparison. As for the metrics to be chosen, the report drew on American suggestions, with the United States National Diabetes Advisory Board proposing ‘five major indicators of the quality of care’: visual impairment, perinatal morbidity and mortality, amputations, end-stage renal failure and diabetic ketoacidosis (an acute metabolic crisis potentially resulting in coma).101
In the event, admission to hospital for diabetic ketoacidosis did become a ‘clinical outcome indicator’ in Scotland during the early 1990s.102 The Clinical Outcomes Working Group of the SHHD's Clinical Resources and Audit Group annually published a series of indicator metrics. These measures in themselves were not taken to be a guarantee of quality: ‘we must emphasise’, the authors suggested, ‘that no conclusions can or should be drawn from the comparisons in this report about the quality or efficacy of the treatment provided for the populations of different Health Boards’. Rather, it was hoped that the ‘disparities’ in this series of indicators might lead boards and clinicians to review their performance, and subsequently to ‘correc[t]’ those ‘deficiencies in service provision or therapeutic regimes’ that were uncovered.103 For four years, admissions for diabetic ketoacidosis provided one such measure, being easily quantified and providing a possible sign of problematic care prior to hospitalisation. Although this metric was dropped in 1996 after an overhaul of the measures used, its early use nonetheless reflected how widespread the desire to audit diabetes services had become.104
Once again, the role of central government agencies in facilitating performance indication was reflective of Scotland's earlier move into state-backed guideline and audit production. However, as will be discussed in Chapter 6, similar processes were underway in England and Wales. The RCP and BDA were keen to develop a national dataset and protocol for diabetes auditing, with the Department of Health providing funding for this work.
Furthermore, the eagerness for audit came from clinicians themselves. As noted above, doctors increasingly deployed audit at local levels during the 1980s, slowly institutionalising the practice within hospitals, general practice, and integrated care programmes as a path to better self-management. Into the 1990s, reviews expanded to include outcome measures, both intermediate (like average HbA1c readings) and ‘end-point’ (mortality from complications). In this regard, audits of diabetes care benefited from the highly quantified culture of diabetes management, a heritage of its previous grounding in physiological concepts and practices.
Moves towards audit were supported in 1989 by the St Vincent Declaration, a document emerging from a joint International Diabetes Federation–WHO (Europe) initiative that aimed to reduce morbidity and mortality from diabetes through target-setting. Blindness, renal failure, and neuropathy all received quantified reduction targets, and the Declaration included a promise to embark on auditing programmes.105 The Declaration was signed not only by representatives of national diabetes associations, but also by government delegates. As we shall see in Chapter 6, a host of initiatives originated from the St Vincent Declaration in Britain, even though its targets were later criticised. An important element of its influence, however, was the creation of a standardised dataset for audit purposes.106 The development of audit programmes, then, emerged from bodies at a local, national, and international level, linking professional organisations, non-governmental bodies, and groups of specialists who promoted audit as a basis for quality care. In short, as well as constructing guidelines to inform local protocol, the profession itself was pushing for audit of process and outcomes as the means by which to ensure quality care from trained professionals.
Conclusion: guidelines, audit, and regulating medicine
What, then, did this shift of regulatory architecture mean? What are we to make of the role of elite professional and specialist bodies in constructing it? And how could new tools to structure and review care be squared with traditional views of professionals as trained experts, trusted to serve their patients and distinguished by their autonomy?
To some extent, the late-century pursuit of guideline construction by bodies like the Royal Colleges forms part of a longer history. Colleges and their specialists had produced service guidance during the mid-twentieth century as part of their efforts to maintain high standards in their respective fields. Indeed, some of these guidelines even recommended pursuing forms of peer conference and integrated care solutions a decade before similar suggestions in diabetes care.107 Yet the move to include the process and outcomes of clinical care within these guidelines, and to set benchmarks for future review, indicated a significant change in remit. Traditional views of professionalism – both within and without medicine – stressed that the esoteric knowledge of certain occupations justified collective regulation, as only members of these professions could set and judge reasonable standards. Throughout the first half of the twentieth century, these understandings of professionalism also suggested that the variability of the problems faced necessitated freedom for individuals to exercise autonomous judgement. Finally, professionals were deemed, by dint of their education and character, to be trustworthy.108 External practices of prescribing roles would thus be counterproductive, and techniques of accounting unnecessary. These claims may have been ideological acts, attempts to turn supposedly esoteric knowledge into control over market functions and work processes.109 Nonetheless, they played an important symbolic role in identity construction and had material impact in medicine.110 The construction of guidelines and auditing frameworks, therefore, seemingly contested these myths and experiences of medical professional life, formalising trust into a process of accounting and providing codified norms of practice against which actual performance could be measured.
Debates about the role of guidelines in British medicine at the beginning of the 1990s captured the possible extent of this challenge. As hitherto the least managed medical practitioners, GPs and consultants in these years criticised the ‘increasing flood of guidelines and protocols issued by royal colleges and other organisations’ and expressed concerns about how such tools might limit cherished professional freedoms and weaken medical authority.111 Some doctors, for instance, were concerned about the potential accountability issues that guidance documents raised beyond self-audit and peer review. At one meeting between senior medical representatives and the Chief Medical Officer in 1993, interlocutors voiced worries ‘about the medicolegal aspect if [guidelines] were not followed to the letter’.112 The fear of legal action was probably heightened amid ongoing medical scandals, but such a worry also spoke to broader apprehensions about external limits on the hallmark of clinical freedom. 
Such constraints were at the heart of contemporary sociological theories concerning ‘deprofessionalisation’, and found echoes in the complaints of a surgeon at another meeting of senior consultants.113 Linking the development of guidelines with the recent introduction of general management and internal markets (see Chapter 6), this practitioner warned of the coming of ‘“cookbook medicine”, with doctors being given clinical protocols on the most economical way to treat patients’.114 As work on the emergence of Evidence-Based Medicine has demonstrated, the concept of ‘cookbook medicine’ provided a common filter for concerns about the creation of bureaucratised, unthinking provision, and one editorial of the early 1990s summed up such anxieties with the question ‘if doctors are not required to exercise judgement what are they there for?’115 Even practitioners sympathetic to guidelines commended advocates for reassuring doctors that ‘guidelines are not intended to replace clinical judgement … and that practising medicine in the 1990s remains an art’.116
And yet many of the complaints about guidelines, at least in the medical press, did not concern the principles of codifying statements of good care, or of professionals reviewing their practice. As one of the critics quoted above suggested, ‘no one objected to broad recommendations on what was acceptable’, and we have seen that there were powerful voices supporting new trends.117 Instead, criticisms of guidelines often concerned the sheer number being produced, the frequency of disagreement, the poor evidence on which they could be based, the remoteness of their production, their potential inflexibility and limitation of innovation, and the lack of consideration given to implementation.118 By the early 1990s, debates about the potential problems and mechanics of guidelines were still in their infancy, but a consensus was forming around the idea that they could, theoretically at least, help to secure high standards of care.
In part, the support for guidelines and audit could be interpreted as part of a change in the precise meaning attached to the concept of ‘high standards’ over the twentieth century. The notion of ‘quality care’ emerged within a context of managerial reform and broader professional, public, and political concerns about professional performance. Elite professional bodies pioneered efforts to remake practice, responding to this crisis of collective regulation by constructing new tools for formal professional management. Appeals to quality here served to reframe traditional concepts of professionalism, and to transmute old features.119 A service ethic had been a key feature of older discourses of professionalism. Now, however, this selflessness could be used to justify the codification of previously individualised clinical decision-making and the use of new quality-assurance tools. Discussing its ideal GP, the RCGP suggested that ‘he [sic] subjects his work to critical self-scrutiny and peer review, and accepts a commitment to improve his skills and widen his range of services in response to newly disclosed needs’.120 The author of the College's diabetes materials translated this ideal when discussing calls for protocol and audit in the NHS reforms of the early 1990s: ‘intrinsic in many of the government's proposals … is the theme of accountability. This theme does not frighten me: does it frighten you?’ ‘Surely’, he went on, ‘the way forward must be for the nurses, the dieticians, the chiropodists, the patients to unite with the doctors in the production of suitable guidelines.’ He then closed with an appeal to that most traditional figure, Hippocrates: ‘of the recent achievements of science, the emancipation of the human mind from a servile adherence to the opinions of antiquity is one of the most important’.121
Of course, given the local use of protocol and audit since the late 1970s, the potential challenge to individual clinical autonomy from new guidelines was not novel. Though they had reservations about the extent to which autonomy would be structured and performance reviewed, many rank-and-file practitioners shared the broad outlines of new visions for professionalism. Rather, the involvement of specialist bodies like the BDA and Royal Colleges in the production of formal regulatory tools was significant for how it opened the way for a transformation in the government of British medicine.
Crucially, as we will see in the next chapter, the involvement of elite professional and specialist bodies in setting and reviewing standards in diabetes care legitimated these tools as means for managing and regulating medicine. On the one hand, it provided a seal of approval for local efforts. On the other, these elite practitioners’ attempts to produce tools that informed local systems marked the emergence of what would later be called clinical governance architecture: namely, institutions whose role was to ensure that local systems had standards and accountability measures in place, and to assess whether these local structures functioned effectively. Doctors may have firmly resisted the encroachment of ‘lay’ or ‘state’ management of their work. Many did not, however, actively oppose medically led local systems for managing medical labour; nor did they disagree with the construction of frameworks that sought to inform and reinforce these local professional structures.122 Of course, the ways in which discourses of medical professionalism had been reframed over the second half of the twentieth century opened the opportunity for greater lay, external management. Moreover, audit provided the basis on which political actors could co-operate with elite practitioners to reframe clinical governance, linking it with broader structures for cost-control and welfare management. Although not sacrificing individual clinical autonomy or formalising stratifications within the profession to the extent suggested in contemporary sociological work, these developments did consolidate major changes in how British medicine was managed and regulated.123