Burying the victims of Europe’s border in a Tunisian coastal town
The Mediterranean Sea has recently become the deadliest of borders for
illegalised travellers. The victims of the European Union’s liquid border
are also found near North African shores. The question of how and where to bury
these unknown persons has recently come to the fore in Zarzis, a coastal town in
south-east Tunisia. Everyone involved in these burials – the coastguards,
doctors, Red Crescent volunteers, municipality employees – agrees that
what they are doing is ‘wrong’. It is neither dignified nor
respectful to the dead, as the land used as a cemetery is an old waste dump, and
customary attitudes towards the dead are difficult to realise. This article will
first trace how this situation developed, despite the psychological discomfort
of all those affected. It will then explore how the work of care and dignity
emerges within this institutional chain, and what this may tell us about what
constitutes the concept of the human.
Among the numerous human remains found in circular pits belonging to the fourth
millennium BCE cultures north of the Alps, there are many examples of bodies
laid in random (or unconventional) positions. Some of these remains in irregular
configurations, interred alongside an individual in a conventional flexed
position, can be considered a ‘funerary accompaniment’. Other
burials, of isolated individuals or multiple individuals buried in
unconventional positions, suggest the existence of burial practices outside of
the otherwise strict framework of funerary rites. The focus of this article is
the evidence recently arising from excavation and anthropological studies from
the Upper Rhine Plain (Michelsberg and Munzingen cultures). We assume that these
bodies in unconventional positions were not dumped as trash, but that they were
a part of the final act of a complex ritual. It is hypothesised that these
bodies, interpreted here as ritual waste, were sacrificial victims, and a number
of possible explanations, including ‘peripheral accompaniment’ or
victims of acts of war, are debated.
In this article we explore the relational materiality of fragments of human
cadavers used to produce DNA profiles of the unidentified dead at a forensic
genetics police laboratory in Rio de Janeiro. Our point of departure is an
apparently simple problem: how to discard already tested materials in order to
open up physical space for incoming tissue samples. However, during our study we
found that transforming human tissues and bone fragments into disposable trash
requires a tremendous institutional investment of energy, involving negotiations
with public health authorities, criminal courts and public burial grounds. The
dilemma confronted by the forensic genetic lab suggests not only how some
fragments are endowed with more personhood than others, but also how the very
distinction between human remains and trash depends on a patchwork of multiple
logics that does not necessarily perform according to well-established or
Burials, body parts and bones in the earlier Upper Palaeolithic
Erik Trinkaus, Sandra Sázelová and Jiří Svoboda
The rich earlier Mid Upper Palaeolithic (Pavlovian) sites of Dolní
Věstonice I and II and Pavlov I (∼32,000–∼30,000 cal
BP) in southern Moravia (Czech Republic) have yielded a series of human burials,
isolated pairs of extremities and isolated bones and teeth. The burials occurred
within and adjacent to the remains of structures (‘huts’), among
domestic debris. Two of them were adjacent to mammoth bone dumps, but none of
them was directly associated with areas of apparent discard (or garbage). The
isolated pairs and bones/teeth were haphazardly scattered through the occupation
areas, many of them mixed with the small to medium-sized faunal remains, from
which many were identified post-excavation. It is therefore difficult to
establish a pattern of disposal of the human remains with respect to the
abundant evidence for site structure at these Upper Palaeolithic sites. At the
same time, each form of human preservation raises questions about the
differential mortuary behaviours, and hence social dynamics, of these foraging
populations and how we interpret them through an archaeological lens.
The concluding chapter brings together the preceding themes and links them to show how the British vaccination programme changed from the 1940s to the 2010s. It examines how these changes can give an insight into the deeper relationship between the public and the public health authorities that purport to act on their behalf. It argues that the relationship between the two was not entirely top-down. Public action – either directly expressed or inferred through various surveillance and governance structures – was a key driving force behind policy changes and initiatives. The longer view of vaccination policy, including periods of relative calm as well as crisis, shows how this relationship changed over time and was inextricably linked to wider political concerns. The chapter argues that twenty-first-century crises such as the measles outbreaks in North America and Europe in the 2010s are also historically contingent. Whether disease or vaccination rates are “too high” or “too low” is based on contemporary conceptions of risk, health citizenship and our relationship to public health authorities.
This chapter uses the diphtheria programme of the 1950s to explore the theme of apathy in British vaccination policy. Following the success of the war-time immunisation campaign in reducing morbidity and mortality from diphtheria, there was a sharp decline in take-up at the end of the 1940s. The Ministry of Health attributed this to apathy among the public – particularly mothers who no longer feared diphtheria because it was no longer common. However, this interpretation rested on a view of the public as both ignorant of health risks and amenable to education. Furthermore, it made assumptions about the responsibility of parents to protect their children even though vaccination was not compulsory. Diphtheria immunisation recovered, and the disease was virtually eliminated by the early 1960s – but not necessarily because of the Ministry’s centralised propaganda. Local medical officers made significant efforts to make immunisation more convenient, including through the provision of combined vaccines, which reduced clinic visits and offered protection against diseases that parents feared more.
This chapter introduces the historiography of the British welfare state, vaccination and public health, and sets out the book’s structure. It argues that while much attention has been given to the various controversies in British vaccination policy, this obscures the long periods of relative calm. Even during crises, most parents continued to vaccinate their children with individual vaccines and, overall, take-up has increased markedly since the 1940s. The chapter therefore reframes the debate to ask why vaccination became normalised during the post-war period, and draws attention to the role of the public as both a receiver and a shaper of public health priorities. This question is then explored through the following five chapters, examining five key themes – apathy, nation, demand, risk and hesitancy. The first three themes are covered in Part I of the book, showing how the modern vaccination programme became established. Part II details the pertussis and measles-mumps-rubella (MMR) vaccine crises and how they exposed the limits of public support for vaccination and the welfare state.
This chapter examines the twenty-first-century public health concept of hesitancy by placing it in a wider historical context. Hesitancy as an analytical category was developed by social scientists and adopted by the World Health Organization and national governments to explain the numerous vaccine crises that had occurred worldwide over previous decades. In Britain between 1998 and 2004 a significant drop in measles-mumps-rubella (MMR) vaccine take-up followed a series of media stories claiming that it might cause autism. Initially, the government sought to refute this through a typical education campaign but was forced to adopt new strategies of risk communication. The internet had become an important tool for vaccine sceptics to spread doubt and for uncertain parents to seek information. Although the vaccination rate eventually recovered, many of the criticisms of the government and the vaccine during this period reflected deeper public anxieties about the motives and competence of medical and political authorities in the 1990s and early 2000s. The MMR crisis was a product of a particular historical moment, and the construction of hesitancy that followed is coloured by this.
Part II begins with an examination of what the pertussis (whooping cough) vaccine crisis of the 1970s tells us about risk. The management of risk was an integral part of post-war public health and, indeed, of modern nation-states. The risks associated with infectious disease for both the state and individuals had to be weighed against the risks associated with specific vaccines. In the 1970s, reports that the pertussis vaccine might cause brain damage in some children resulted in a significant drop in take-up. A campaign for social security payments for children suffering from vaccine injury was successful, showing how the vaccination programme was tied to wider political concerns within the welfare state during a period of financial retrenchment. These debates are contrasted with those over the provision of rubella vaccine to girls and young women, where voluntary organisations demanded that the government should provide many more resources to the programme.