The Ebola Outbreak: Historical Notes on Quarantine and Isolation

“Fear remains the most difficult barrier to overcome”

Margaret Chan, WHO Director-General

Recent headlines immediately drew my attention. Dispatches from Monrovia, the Liberian capital, reported that under presidential orders, armed riot police and soldiers from the Ebola Task Force, equipped with automatic weapons, had quarantined West Point, a northern township of the city, on August 20, 2014. To physically cordon off the district, makeshift barricades were erected using red rope lines, wood scraps, and barbed wire. Ferry service to the peninsula was cancelled and the coast guard began patrolling the surrounding waters, turning back people attempting to flee in their canoes. A subsequent image depicted a quarantine violator, summarily detained under the chassis of a car.

Segregation and detention of people suspected of, or suffering from, lethal contagious diseases have a long past. In an era of globalization, "contagious anxieties" become ubiquitous. Responses to an ever more complex sequence of biological dangers tend to flourish especially in overcrowded urban settings like Monrovia. In despair, the lessons of history are conveniently ignored. Blaming the victim continues to be a common survival skill, employed for self-defense and preservation. Appealing to basic human instincts of survival, successive Western societies developed regulations and erected institutions designed to cope with incoming diseases. As in Monrovia today, much was judged to be at stake: catastrophic mass dying, social breakdown, political chaos, economic decline, and, at times, even national survival.

Since the Renaissance, the term quarantine came to designate spaces for the temporary detention of residents, travellers, and cargo suspected of carrying infection. This tactic differs somewhat from another traditional public health scheme: the sanitary cordon or blockade that closes checkpoints, roads, and borders with the similar purpose of preventing the spread of deadly infectious diseases. In both cases, groups of individuals previously marginalized on the basis of age, sex, race, religion, and class were blamed for epidemic outbreaks. The favorite scapegoats were often strangers, newcomers, and ethnic minorities. I am particularly sensitive to issues of stigma and isolation because of my recent research and completion of a book manuscript on the San Francisco Pesthouse, now under academic review. My thesis is that both fear and disgust historically framed coping behaviors and drove actions. In my opinion, anger and revulsion explain the heartless, even brutal nature of the responses.

Plague in Rome, 1656

In Liberia, Monrovia's crowded and dilapidated slum, with its narrow alleys and rickety shanties, had long been a public health nightmare: neglected by the authorities, residents were without sewage disposal and obtained their water only by wheelbarrow. Ebola was merely the latest and perhaps the most dangerous disease to visit this community of an estimated 75,000 people. With hospitals closing to prevent further institutional contamination (some health workers were already dead or dying), potential victims had no place to go. Without consulting residents, the government decided to use a former schoolhouse in West Point as a makeshift neighborhood center for the purpose of concentrating, isolating, and managing all Ebola cases reported in the city. This decision only stoked the growing local panic, leading to protests and violence. Perhaps Ebola was only a "government hoax." The building was stoned and its patients, presumed to be infectious, were forced to flee. The attack led to widespread looting of furnishings and medical equipment, including blood-soaked bedding believed to be contagious.

Enforced by police and even military units, indiscriminate segregations often prove counterproductive: historically they fostered mistrust, fear and panic, resistance and aggression, while encouraging flight, concealment, and social chaos. Paradoxically, while often unsuccessful in curbing an epidemic, such containment policies add to the suffering and risk additional fatalities. Since the Black Death, families and isolated populations trapped in their homes and neighborhoods have suffered severe hardships, ranging from a lack of water and food supplies to a higher risk of cross-infection. Loss of employment and economic ruin followed. When an outbreak of plague occurred in Rome in 1656, the entire city was immediately closed, with temporary stockades placed in front of two gates for screening supplies and people. Police patrols enforced the sealing of homes suspected of housing victims of the disease. Individuals considered tainted through contact with the sick who failed to follow isolation procedures were arrested and often condemned to death. In fact, contemporary iconography depicts public executions by firing squads or hangings from gallows erected in various city piazzas. In Monrovia, a crowd of desperate residents sought to break out of their prison by storming the barricades, hurling stones and attacking their jailers, only to be met with live ammunition that killed one boy and injured others. One dweller caught in the crossfire sarcastically asked, "You fight Ebola with arms?"

Risk fluctuates; it can be subjected to reevaluation within shifting social contexts, sanctioned by experts, and then amplified by information systems for the purpose of mobilizing public opinion. This is particularly true if uncertainty, rumors, and a sensationalist media stoke fears of contagion, causing societal support systems like Monrovia's public health departments and hospitals to break down. In the United States, three such intra-city quarantines were imposed in response to a pandemic of plague: Honolulu (1899-1900) and San Francisco (March 1900 and again in May 1900). None proved beneficial.

Like Monrovia's township, the targets were overcrowded and impoverished enclaves, unsanitary urban slums largely populated by Chinese migrants. In Hawaii, faced with a confirmed case of plague, the Honolulu Health Board issued an order on December 12, 1899, to quarantine the Chinatown district by roping it off with the help of rifle-toting Hawaiian national guardsmen. To avoid losses for white businesses, the perimeter was gerrymandered to exclude them. The afflicted neighborhoods, home to about 5,000 people, half of them Chinese, featured crumbling shanties with cesspools planted on stagnant marshy land strewn with refuse. In such circumstances, potentially plague-carrying rats posed an unacceptable risk.

As my colleague and friend James C. Mohr has documented, frequent medical inspections and a search for new cases ensued. Disinfection, garbage removal, and the burning of "infected" lodging that had sheltered the sick began. The rationale for burning houses to kill plague germs stemmed from recommendations adopted by Hong Kong's Sanitary Board during an 1894 epidemic in the walled-in, overcrowded, and dilapidated district of Taipingshan. While the official quarantine boundaries in Honolulu were constantly patrolled, security remained lax and direct contact with residents through the influx of food supplies and services continued. Yet angry denizens protested the seclusion with threats, concealment, and flight. Businesses suffered, exports dwindled, and a paucity of new cases created pressure for the Health Board to terminate the quarantine six days after its initial decree. Reinstated on December 28 after the appearance of new cases, Honolulu's quarantine culminated in the reduction of Chinatown to ashes on January 20, 1900. With orders to burn down additional rat-infested housing, the local Fire Department misjudged the weather conditions: strong winds spread the flames to nearby buildings and a church, forcing residents to flee the district. The fire eventually scorched thirty-three acres of the city and left thousands of residents homeless. Rejected by dwellers of the outlying neighborhoods, the Chinese were forced into makeshift detention camps, initially monitored by national troops who were soon replaced with Hawaiian Republic guardsmen. While the accidental fire and destruction of Honolulu's Chinatown contributed to the eradication of plague, this episode remains a stark reminder of the adverse consequences of coerced mass segregation.

Honolulu Chinatown 1900 Quarantine

Given plague's relentless march across the Pacific and the close commercial ties with Honolulu, particularly the extensive sugar trade, San Franciscans braced for its arrival. Indeed, the events in Honolulu had already prompted an inspection of the city's Chinatown by Health Board officials in early January 1900. The subsequent story is told in greater detail in my recent book Plague, Fear, and Politics in San Francisco's Chinatown (2012). Not surprisingly, the death of a Chinese laborer on March 6 with a presumptive diagnosis of bubonic plague led health officials to immediately impose a quarantine of the entire district. The enforcement was quite similar. Early Chinatown risers, including cooks, waiters, servants, and porters heading for their jobs outside the district, discovered that ropes encircled the space between Broadway and California, Kearny and Stockton Streets. Two policemen on every corner demanded that everybody turn around and return to their homes. Traffic was blocked and streetcars crossing through the area were not allowed to stop. Massive confusion ensued as frantic Chinese escaped across rooftops or sneaked through the lines, fearful that outside employers would dismiss them if they remained absent. In the confusion, stores closed, although some provisions were passed across the lines. The quarantine hampered the movement of Western physicians living outside the district, preventing them from attending the sick at the Oriental Dispensary. Chinese passengers could not board river and coastal steamers to leave the city. Many managed to cross the Bay in small boats and find shelter in the vegetable gardens and laundries of suburban friends. Others hid with Chinese cooks in private residences. Crowds quickly assembled in the streets, stunned by the encirclement. Rather than respond forcefully, the Chinese at first aimed for a restoration of harmony. For them, the quarantine was an operation designed to impress upon the district's ethnic population that the local health department and police were, after repeatedly faltering, up to their jobs.

San Francisco Chinatown 1900

Bacteriological confirmation of plague's presence, however, remained elusive. Facing growing skepticism, the Health Board, as in Honolulu, unanimously voted to lift the "preventive" quarantine a few days later. No further cases of plague were found. The temporary encirclement of Chinatown, widely characterized by the press as a "bubonic bluff," turned into a setback for sanitarians: with regular scavenger service suspended, mountains of rotten food littered the streets, creating an unbearable stench while providing further food sources for the district's hungry rats. With the Honolulu experience vivid in their minds, Chinese residents expected another possible razing or burning of their homes. In a pattern replicated in the months ahead, laboratory findings and quarantine threats would be announced, manipulated, and denied in a climate of profound political and economic divisions. In the meantime, the reported mortality rate in Chinatown had dropped dramatically.

When cleaning and disinfection operations failed and an anti-plague vaccination program fizzled, San Francisco's authorities decided to rent a number of grain warehouses at Mission Rock, a small island near the city's port, converting them into a temporary detention facility for Chinese suspected of suffering from plague. However, an empowered Chinese leadership hired prominent local lawyers and challenged the plan in federal court. Soon thereafter, in a landmark decision, the presiding judge ruled that public health measures, while lawful, were not immune to judicial scrutiny. Inasmuch as they impaired personal liberties, arbitrary measures could not be permitted to stand. In this particular case, the quarantine was also racially discriminatory and harmful, since confinement actually increased the risk of infection in the district.

Within hours, however, business leaders and health officials from San Francisco and California met to deal with a growing trade embargo against the state because of the presence of plague. The only proper response was to reassure the trading world by sending a clear signal of California's determination to control the presumed foe. Thus the Health Board was authorized to proceed immediately and once more close down Chinatown, again a "merely precautionary" move. Instead of ropes, a large police force and workmen descended on Chinatown, erecting a veritable fence around the district with wooden planks, posts, cement blocks, and barbed wire, creating an enclosure that sealed off the entire area. The issue of detention centers for suspected plague victims reared, once more, its ugly head. To force the issue, San Francisco's mayor refused to take responsibility for feeding the blockaded population.

On June 5, the Chinese leadership filed another judicial request in US District Court to stop the planned removal. The federal judge immediately lifted the embargo on food supplies to Chinatown. He also delayed ruling on the transfer of Chinese residents to the Mission Rock detention camp by granting a continuance, further infuriating residents of the district. Many had already lost their jobs and were without food and other supplies. Because of growing shortages, prices for the remaining merchandise were skyrocketing. Thousands would soon be destitute, and a serious disturbance was expected. While the case continued to be debated in court, protests and demonstrations escalated. Attorneys for the Chinese argued that by confining thousands of residents, the quarantine had, in effect, created a potential "plague quarter" capable of spreading disease to the rest of the city.

Ten days after the filing, the judge handed down his decision in favor of the Chinese plaintiffs. Like the previous ruling, the court employed the legal standards of due process and equal protection to grant the injunction against the quarantine. Judge William Morrow ruled that the San Francisco Board of Health had acted "with an evil eye and unequal hand," in an arbitrary and racially discriminatory manner, actually increasing rather than lowering the risk of infection in the isolated population. The siege of Chinatown was over. A defiant ethnic community, with the help of the federal judicial system, had managed to thwart two quarantines in the course of a few months. Municipal workers began taking down the makeshift fence that had encircled the district for sixteen days, allowing more than one thousand trapped Chinese residents to pour across the breached barriers. Wagon after wagon of supplies arrived. After two weeks of fright, deprivation, and protests, euphoria descended on Chinatown. Numerous celebrations marked the passing of the "fake" quarantine. During these judicial proceedings, only three additional deaths had been bacteriologically confirmed as caused by plague.

The above historical examples illustrate the challenges, hazards, and futility of mass quarantines, as well as their adverse economic impact and human toll. Public health is ill served when brutal coercion exacerbates mistrust and triggers hostility. Moreover, as in the Chinatown examples, forceful segregation was totally out of sync with contemporary epidemiological understanding of disease transmission. In plague, the vectors were mostly fleas and rats rather than humans, while Ebola fever is originally contracted from handling and cooking contaminated bush meat. Monrovia's quarantine, imposed on an overcrowded population already burdened with poverty, is unconscionable: the slum already shelters some ill and dying cases of the disease. Their contagious body fluids will only magnify the outbreak already in progress. As provisions dwindle in West Point, food prices will skyrocket, making them unaffordable and soon unavailable. Faced with a standoff, hunger and desperation will lead to further skirmishes with the military units guarding the township. Interviewed by a CNN reporter, one resident admitted: "The hunger, the Ebola, everything. I'm scared of everything!"


Sources:

Liberian News, August 17, 2014.

Margaret Chan, "Ebola Virus Disease in West Africa—No Early End to the Outbreak," New England Journal of Medicine (online), August 20, 2014.

Guenter B. Risse, “Pesthouses and Lazarettos,” in Mending Bodies, Healing Souls: A History of Hospitals, New York: Oxford University Press, 1999, pp. 190-216.

Alison Bashford and Claire Hooker, eds. Contagion: Historical and Cultural Studies, London: Routledge, 2001.

New York Times, August 21, 2014.

CBC News and CNN News, August 26, 2014.

Howard Markel, "The Concept of Quarantine," in Quarantine!: East European Jewish Immigrants and the New York City Epidemics of 1892, Baltimore: Johns Hopkins University Press, 1997, pp. 1-12.

James C. Mohr, Plague and Fire: Battling Black Death and the 1900 Burning of Honolulu’s Chinatown, New York: Oxford University Press, 2005.

Guenter B. Risse, Plague, Fear, and Politics in San Francisco’s Chinatown, Baltimore: Johns Hopkins University Press, 2012.

The Historian as Sleuth: Malaria in Scotland

In a section of his 2002 collection of essays, the distinguished late professor Owsei Temkin recommended old age to the historian. Indeed, Dr. Temkin, the William H. Welch Professor of the History of Medicine at Johns Hopkins University, lived long enough--he became a centenarian--to function as his own critic and revisionist, allowing himself to have "second thoughts" about some of the work he had penned decades earlier. For Temkin, senior historians had clear advantages: "the freedom from the hustle and bustle of life and the reduced nightly sleep allow more time for thinking." If scientists reached the peak of their careers early, in their youthful years, aging historians--like good wine--became better at their craft, seasoned by a lifetime of experiences and by their transformation into primary sources often sought by the next generation of journalists and scholars. After all, studying history is meant to unravel the complexity of human lives and society, a practice that favors elders with ample exposure to worldly events.

Now in my early eighties, I find myself in the same boat, drifting into the sunset with a brain that refuses to quit. In the middle of the night, or while swimming or pedaling on an exercise machine, random thoughts emerge, drift, and then spontaneously coalesce, leading to novel insights. Because of a deteriorating memory and vocabulary, a trail of hand-scribbled phrases seeks to capture the momentary inspiration, although many notes will fail to make sense later. Freed of the usual academic gamesmanship, administrative burdens, shrinking budgets, and good cop-bad cop deans, I empathize with Temkin's desire to revisit certain aspects of our body of work, with the proviso that the reconstructions and supplements should always include some human-interest stories necessary to stir the emotions of new potential readers. It was Temkin who jumpstarted my second career as a historian. I was grateful for his advice to attend the University of Chicago when, towards the end of my medical residency in Ohio, I visited Baltimore in the fall of 1962. He made it clear that physician-historians with proper academic credentials in both fields would stem the growing estrangement between medicine and history, an issue that still challenges today's professionals.


For my part, more than simply rethinking or rewriting, I hold closer to the model of the historian as perpetual investigator and synthesizer, albeit at a more leisurely pace. An early teenage fascination with detectives--notably Sherlock Holmes, Hercule Poirot, and Ellery Queen, who solved mysteries by tracking down facts, assembling evidence, and then logically sifting through clues for interpretation--was seductive. Through engagement of our grey brain cells and deduction, coherence and plausibility emerged, dots were connected, the gaps skillfully bridged. At this point, Ellery always issued his challenge to readers to find the answers on their own. No matter; with tensions building, the final summing up was at hand: everything was explained and the case came to a successful end. Since 2003, a popular PBS documentary television series, "History Detectives," has followed a similar path, its stated mission "exploring the complexities of historical mysteries, searching out the facts and conundrums," albeit mostly limited to specific artifacts related to American history. The word 'detective,' stressing detecting and collecting, has recently been upgraded to 'investigator,' one who examines and evaluates the evidence.

My two great loves, medicine and history, share similar investigative methodologies. Both patiently track down all pertinent information from multiple sources. The data is then carefully organized and provided with meaning. Far from being solely logical, this problem-solving process is embedded in emotional states. The investigator's mood is key: cherishing the suspense of the chase, the excitement of finding clues, together with the growing exhilaration of solving puzzles, all eventually ending in joy and satisfaction as the quest successfully ends. Or does it? Can the final conclusions stand up to scrutiny, particularly if new pieces of information become available or the historian's feelings, beliefs, and experiences shift? Indeed, ideas do not last forever and nothing is written in stone. New evidence and understanding, historical or scientific, may emerge. Historians live in a changing world with shifting priorities and perspectives. As the context changes, they must be ready to 'recalculate' if they intend to revisit previous topics.

A brief, concrete example of my historical sleuthing relates to the presence of malaria in Scotland. The subject is relevant given current concerns about global warming and dire predictions that warmer temperatures will trigger a nefarious resurgence of this disease in Europe. Malaria is still a common, worldwide mosquito-borne parasitic disease that can be traced back at least to the Neolithic agricultural revolution, especially in tropical and subtropical regions of the world. For Europe, there is evidence that the disease was already present during the first millennium in coastal regions surrounding the North Sea. While temperature remains an important factor in malaria's transmission, this essay stresses the pivotal role humans play in shaping its epidemiology. Preliminary findings were presented at the John F. Fulton Lecture at Yale University in 1988, and a fuller story was published as a chapter in my book New Medical Challenges During the Scottish Enlightenment (2005). Malaria's causal agent, a microscopic protozoan parasite called Plasmodium, initially enters the human bloodstream via bites from a variety of previously infected female Anopheles mosquitoes. Part of the parasite's life cycle takes place inside the human liver and red blood cells. Here the parasites rapidly multiply, bursting free into the bloodstream at regular intervals and destroying their cellular hosts. The release occurs every two or three days depending on the species of Plasmodium, triggering sudden episodes of chills followed by a spike of high fever that ends with profuse sweating.

The intermittent and rhythmic character of these fevers is unique to malaria, making it easier to follow and diagnose even in historical times. In a forthcoming blog post I will explain this approach in more detail for other historical places and times, emphasizing that historians must usually exert great caution when equating vague and shifting past disease constructions and nomenclatures with modern ones. In venturing into the hazardous currents of retrospective diagnosis, malaria is a rare exception because of its telltale clinical manifestations and our current scientific understanding of the environmental factors involved in its spread.

My first clues to the existence of malaria came while researching aspects of eighteenth-century Scottish medicine. Originally known as "ague" in northern Europe, malaria (from mal aria, or bad air, a name derived from the somewhat sulfurous odor of bacterial decomposition in saltwater mudflats) became what contemporaries characterized as the "scourge of Scotland," vanishing after the early 1800s never to return except as an occasional tropical import. While malaria probably existed in Scotland for a millennium, I detected its presence when I discovered numerous "intermittent fever" cases between 1770 and 1800 in the records of charitable institutions such as the Edinburgh Infirmary and the Kelso Dispensary, supplemented by accounts in medical student notebooks and writings from prominent Scottish physicians. Their presence and care--the patients were poor--confirmed the gradual medicalization of Scotland's society during the Enlightenment.

Any Google search will uncover a steady parade of sporadic cases detected in recent decades, all imported in an era of globalization and rapid human travel from Asia and especially West Africa. However, thanks to several historians, especially Mary Dobson, we know that since at least the sixteenth century the salty marshlands of Kent and Essex in England were notorious for their lack of salubrity, depopulation, and high levels of mortality. Together with the river estuaries and flood plains of Lincolnshire, Cambridgeshire, and Norfolk, these regions were singled out for the frequency of 'marsh fever' or 'ague' among their dwellers, imported by migrant Dutch farmhands and traders from Baltic ports where the disease was also endemic. In fact, eighteenth-century Scottish physicians frequently referred to intermittent or "autumnal" fevers as the "Kentish ague." Malaria flourished near these regions' stagnant pools of salty water, ideal for breeding several species of vector mosquitoes, notably Anopheles atroparvus and Anopheles messeae. Advanced molecular methods suggest that Europe's malaria was primarily caused by a faster-reproducing but less deadly variety of protozoan parasite, Plasmodium vivax, still present in wild apes and derived from an ancestor that escaped Africa, and perhaps by Plasmodium malariae, an even less aggressive species identified with chronic cases.

My working hypothesis was that, as elsewhere in Europe and Africa, malaria was a component of a distinctive ecology of disease, a product of the unique geography, microclimates, and particular social organization prevailing in the Scottish Lowlands. My first task was to determine the geographical location of the reported cases, an undertaking made easier by the publication of Scottish parish reports in the Statistical Account of Scotland, published by John Sinclair in 1794. The collection contains detailed information about climate, population, agricultural development, commerce, and prevailing diseases not available for previous centuries. Two separate regions stood out: both were near important trading centers and areas of considerable agricultural activity, thus attracting seasonal Scottish farmhands, many of them previously employed in the English fens. The first comprised lowlands located in Angus County northeast of the Firth of Tay, as well as a stretch of alluvial terrain between Perth and Dundee on the northern edge of the River Tay known as the Carse of Gowrie. The second region lay to the south, distributed around the Scottish Borders and involving various floodplain parishes near the market town of Kelso at the confluence of the Teviot and Tweed Rivers. It also included Roxburgh and, towards the northeast, the Tweed's estuary in Berwickshire bordering England. As in the north, the salinity of these marshes offered ideal breeding grounds for A. atroparvus mosquitoes.

Climate seemed less of a factor. As researchers studying malaria in England pointed out, the "ague" was already firmly established in stagnant waters near the North Sea coast during the Little Ice Age that lasted from the 1560s to the 1750s. More important were seasonal variations: warmer and drier summers created conditions that not only accelerated parasitic reproduction in the vectors but also concentrated hungry hordes of mosquitoes, desperate for blood meals, in the remaining pools of water. Deficient clothing, common among poor workers, became another factor facilitating exposure. So far, so good: I concluded that ague could be, and indeed was, present in Scotland, an occasional and temporary import linked to poverty as well as the lack of employment that forced migration. I surmised that new local cases could be prevented since the country's cold winters made it virtually impossible for the offending insects to survive.

Not so fast. Instead of just a trickle of indigenous cases from notorious swampy areas, Scotland in the early 1780s--particularly the Borders region--witnessed some unusual epidemic outbreaks that defied previous explanations. As noted, Kelso traditionally functioned as a regional job market for farmhands and servants. With seasonal unemployment in the Highlands, this exchange place would periodically swell with people as job seekers and their families crowded into decrepit, thatched cottages. Surrounded by dunghills, these windowless cabins served as ideal shelters and potential hibernation spaces for malaria-carrying mosquitoes. A perusal of eighteenth-century weather charts confirmed that the Scottish Lowlands had been subjected to an erratic climatic pattern, especially during the decade 1780-90: excessive rainfall during winter and spring but warm and dry summers. This produced excellent conditions for the proliferation of the pesky vectors necessary for parasite transmission. Moreover, in spring, strong easterly winds allowed vast mosquito populations from coastal estuaries near the Tay and Tweed Rivers to drift inland towards urban centers in Perthshire and around Kelso. Such conditions coincided with harvest time and its usual movements of traders and itinerant migrant workers. Scottish mosquitoes were ready to pounce on the native rural population after being infected by blood meals containing plasmodia obtained from previous visitors to the English fens, many of them currently in remission or suffering from chronic malaria. The warmer weather ensured the insects' year-round presence and posed the threat of larger outbreaks. Under such circumstances, climate, people, and poor housing conspired to make malaria temporarily endemic in Scotland.

Kelso Dispensary

Examples of eighteenth-century malaria in Scotland can be found among cases extracted from medical student notebooks. A typical patient was Leslie C., a 22-year-old mother of a three-year-old daughter, Margaret. Both were admitted together to the teaching ward of the Royal Infirmary of Edinburgh on February 4, 1795. People diagnosed with "intermittent" fevers represented a special population of sufferers admitted to the hospital during the winter months after extended and stressful voyages from the endemic areas of malaria in southeast England. Many had experienced the first debilitating paroxysms of ague during the previous harvest season there before going into a temporary remission. Given the severity of their new symptoms, Edinburgh professors wanted students to follow the clinical course of the most challenging cases. Exhausted from riding in open wagons, Leslie claimed that for three months she had been experiencing regular attacks of high temperature every 72 hours, precisely around 4:30 PM, a malaria variety defined as a "quartan" fever. Her sallow complexion and lack of menstruation hinted at anemia and emaciation. Each fit lasted for about an hour and left her extremely tired and weak. Margaret's shivers, headaches, and high fever occurred every other day, a pattern known as a "tertian" fever. Weaned for eight months, the child seemed to be grinding her teeth. Her belly was quite swollen, a sign of possible starvation or of the spleen enlargement common in malaria. The mother's symptoms had started while she was near the marshes around Hilton, in Huntingdonshire. Her daughter's onset of symptoms occurred shortly thereafter, further north at Stamford in Lincolnshire. Both locations suggest that the pair were part of an itinerant labor force returning to Scotland, presumably from Essex or Kent, desperately attempting to escape the insalubrious region and survive the winter. Hospital consultants such as William Cullen, the famous University of Edinburgh professor, believed that the "Kentish ague" was much easier to cure by nature than by art. Indeed, left alone, most vivax malaria sufferers eventually went into long-term remission. According to the ledgers, Leslie and Margaret remained in the hospital for about ten weeks, provided with a flannel shirt to stimulate further sweating, a phenomenon believed to forestall further fits. Initially, both patients received ineffective small doses of a brew obtained from cinchona, the Peruvian tree bark containing the active ingredient quinine. Unfortunately, the bitter-tasting powder seriously "loaded up" Leslie's stomach, prompting the attending physician to temporarily stop the medication and administer the drug to the child in the form of enemas. Other patients were exposed to electric sparks in hopes of stimulating further sweating and thus "escaping the fits." With rest, a full diet, wine and other tonics, later supplemented with higher doses of cinchona, Leslie's and Margaret's fever attacks eventually ceased. Both patients survived their ordeal and left the Infirmary in better condition.

Kelso Ague Graph

So, having figured out that malaria indeed existed in Scotland early on, my next question was why it declined and eventually disappeared from the country. From further research, I deduced that the key was agricultural reform and the concomitant demand for additional land for cultivation, including the drainage of impermeable clay soils. Over time, in Scotland's Lowlands, old salt marshes and other stagnant pools of water were eliminated. Indeed, they were turned into arable land for turnip husbandry and crop rotation, thereby eliminating most breeding habitats for mosquitoes and gradually diminishing malaria transmission. With root crops available, additional cattle and sheep were kept out grazing in the fields year round, taking the place of humans as providers of blood meals for the shrinking population of insects.

Most importantly, however, the new demands of mixed farming halted the seasonal exodus of farmers to and from England, traditionally the main source of malaria importation. New crops and tools reduced the farming workforce. Existing tenants were ejected and sent towards cities like Edinburgh and Berwick, incipient centers of the Industrial Revolution, thus reducing the number of potential human hosts. Farmsteads were consolidated, separated from the traditional rows of primitive cottages hitherto crowded with laborers. Dank, dark, and buggy abodes were demolished, and front-door dunghills removed. With steady work, the remaining population sought higher-quality wooden housing with window frames for ventilation that would prove less hazardous to their health. Last but not least, infusions or tinctures of cinchona bark, a successful remedy for malaria at high doses, became a popular domestic remedy that aided the survival of remaining sufferers in the 1800s, further breaking the disease's infective cycle.

Aha! Riddle solved! My historical sleuthing revealed that, while relatively benign, malaria not only influenced the course of Scottish medicine; its origins and evolution also came to be recognized as expressions of the particular geographical, climatic, biological, and, above all, social and economic forces prevailing in Scotland during the late eighteenth century. Case closed.

Sources

Owsei Temkin, “On Second Thought” and Other Essays in the History of Medicine and Science, Baltimore: Johns Hopkins University Press, 2002.

Guenter B. Risse, "Ague in Eighteenth-Century Scotland? The Shifting Ecology of a Disease," in New Medical Challenges During the Scottish Enlightenment, Amsterdam: Rodopi, 2005, pp. 171-97.

L.J. Bruce-Chwatt, “Ague as Malaria: An Essay on the History of Two Medical Terms,” Journal of Tropical Medicine and Hygiene, 79 (1976): 168–76.

Otto S. Knottnerus, “Malaria Around the North Sea: A Survey,” in Climatic Development and History of the North Atlantic Realm, eds. Gerold Wefer et al, Berlin: Springer, 2002, pp. 339-53.

Paul Reiter, “From Shakespeare to Defoe: Malaria in England in the Little Ice Age,” Emerging Infectious Diseases (Feb 2000): 1-11.

J. Sinclair, Analysis of the Statistical Account of Scotland, 2 parts, Edinburgh: A. Constable, 1825; reprint, New York: Johnson Reprint Corp., 1970.

"Register of the Weather," Transactions of the Royal Society of Edinburgh 1 (1788): 206.

C. Wilson, "Statistical Observations on the Health of the Labouring Population of the District of Kelso in Two Decennial Periods, from 1777 to 1787 and from 1829 to 1839," Proceedings of the Border Medical Society (1841): 47-85.

Cases of Leslie and Margaret Campbell, in Andrew Duncan, Sr., Clinical Reports and Commentaries, February–April 1795, presented by A.B. Morison (Edinburgh: 1795), MSS Collection, Royal College of Physicians, Edinburgh. For more details of Leslie's treatment see J.W. Estes, “Drug Usage at the Infirmary: the Example of Dr. Andrew Duncan, Sr.,” in Guenter B. Risse, Hospital Life in Enlightenment Scotland: Care and Teaching at the Royal Infirmary of Edinburgh, New York: Cambridge University Press, 1986, (note 65), 359–60.

F. Home, "Experiments with Regard to the Most Proper Time of Giving the Bark in Intermittents," in Clinical Experiments, Histories, and Dissections, 3rd edn, London: J. Murray, 1783, pp. 1-13.

L.J. Bruce-Chwatt and J. de Zulueta, The Rise and Fall of Malaria in Europe: A Historico-Epidemiological Study, Oxford: Oxford University Press, 1980.

M. Dobson, “Malaria in England: A Geographical and Historical Perspective”, Parassitologia, 36 (1994): 35–60, and “Marshlands, Mosquitoes and Malaria,” in Contours of Death and Disease in Early Modern England, Cambridge: Cambridge University Press, 1997, pp. 306–27.

J.H. Brotherston, "The Decline of Malaria," in Observations on the Early Public Health Movement in Scotland, London: H. K. Lewis & Co, 1952, pp. 26-36.

T.M. Devine, The Transformation of Rural Scotland: Social Change and the Agrarian Economy, 1660-1815, Edinburgh: Edinburgh University Press, 1994.