The Historian as Sleuth: Malaria in Scotland

In a section of his 2002 collection of essays, the late, distinguished professor Owsei Temkin recommended old age to the historian. Indeed, Dr. Temkin, the William H. Welch Professor of the History of Medicine at Johns Hopkins University, lived long enough--he became a centenarian--to function as his own critic and revisionist, allowing himself to have “second thoughts” about some of the work he had penned decades earlier. For Temkin, senior historians had clear advantages: “the freedom from the hustle and bustle of life and the reduced nightly sleep allow more time for thinking.” If scientists reached the peak of their careers in youth, aging historians--like good wine--became better at their craft, seasoned by a lifetime of experiences and by their transformation into primary sources sought out by the next generation of journalists and scholars. After all, studying history is meant to unravel the complexity of human lives and society, a practice that favors elders with ample exposure to worldly events.

Now in my early eighties, I find myself in the same boat, drifting into the sunset with a brain that refuses to quit. In the middle of the night, or while swimming or pedaling on an exercise machine, random thoughts emerge, drift, and then spontaneously coalesce, leading to novel insights. With memory and vocabulary deteriorating, a trail of hand-scribbled phrases seeks to capture the momentary inspiration, although many notes fail to make sense later. Freed of the usual academic gamesmanship, administrative burdens, shrinking budgets, and good cop-bad cop deans, I empathize with Temkin’s desire to revisit certain aspects of our body of work, with the proviso that such reconstructions and supplements should always include some human-interest stories to stir the emotions of potential new readers. It was Temkin who jumpstarted my second career as a historian. I was grateful for his advice to attend the University of Chicago when, towards the end of my medical residency in Ohio, I visited Baltimore in the fall of 1962. He made it clear that physician/historians with proper academic credentials in both fields could stem the growing estrangement between medicine and history, an issue that still challenges today’s professionals.


For my part, more than simply rethinking or rewriting, I hold closer to the model of the historian as perpetual investigator and synthesizer, albeit at a more leisurely pace. An early teenage fascination with detectives--notably Sherlock Holmes, Hercule Poirot, and Ellery Queen, who solved mysteries by tracking down facts, assembling evidence, and then logically sifting through the clues for interpretation--proved seductive. Through the engagement of our grey brain cells and deduction, coherence and plausibility emerged, dots were connected, the gaps skillfully bridged. At this point, Ellery always issued his challenge to readers to find the answers on their own. No matter: with tension building, the final summing up was at hand, everything was explained, and the case came to a successful end. Since 2003, a popular PBS documentary television series, “History Detectives,” has followed a similar path, its stated mission “exploring the complexities of historical mysteries, searching out the facts and conundrums,” albeit mostly limited to specific artifacts related to American history. The word ‘detective,’ stressing detecting and collecting, has recently been upgraded to ‘investigator,’ examining and evaluating the evidence.

My two great loves, medicine and history, share similar investigative methodologies. Both patiently track down all pertinent information from multiple sources. The data are then carefully organized and given meaning. Far from being solely logical, this problem-solving process is embedded in emotional states. The investigator’s mood is key: cherishing the suspense of the chase and the excitement of finding clues, together with the growing exhilaration of solving puzzles, all eventually ending in joy and satisfaction as the quest successfully concludes--or does it? Can the final conclusions stand up to scrutiny, particularly if new pieces of information become available or the historian’s feelings, beliefs, and experiences shift? Indeed, ideas do not last forever, and nothing is written in stone. New evidence and understanding, historical or scientific, may emerge. Historians live in a changing world with shifting priorities and perspectives. As the context changes, they must be ready to ‘recalculate’ if they intend to revisit previous topics.

A brief, concrete example of my historical sleuthing relates to the presence of malaria in Scotland. The subject is relevant given current concerns about global warming and dire predictions that warmer temperatures will trigger a nefarious resurgence of this disease in Europe. Malaria is still a common, worldwide mosquito-borne parasitic disease that can be traced back at least to the Neolithic agricultural revolution, especially in tropical and subtropical regions of the world. For Europe, there is evidence that the disease was already present during the first millennium in coastal regions surrounding the North Sea. While temperature remains an important factor in malaria’s transmission, this essay stresses the pivotal role humans play in shaping its epidemiology. Preliminary findings were presented at the John F. Fulton Lecture at Yale University in 1988, and a fuller account was published as a chapter in my book New Medical Challenges During the Scottish Enlightenment (2005). Malaria’s causal agent, a microscopic protozoan parasite called Plasmodium, initially enters the human bloodstream via bites from a variety of previously infected female Anopheles mosquitoes. Part of the parasite’s life cycle takes place inside the human liver and red blood cells. Here the parasites rapidly multiply, bursting free into the bloodstream at regular intervals and destroying their cellular hosts. The release occurs every two or three days, depending on the species of plasmodia, triggering sudden episodes of chills followed by a spike of high fever that ends with profuse sweating.

The intermittent and rhythmic character of these fevers is unique to malaria, making it easier to follow and diagnose even in historical times. In a forthcoming blog I will explain this approach in more detail for other historical places and times, emphasizing that historians must usually exert great caution when equating vague and shifting past disease constructions and nomenclatures with modern ones. In the hazardous currents of retrospective diagnosis, malaria is a rare exception because of its telltale clinical manifestations and our current scientific understanding of the environmental factors involved in its spread.

My first clues to the existence of malaria came while researching aspects of eighteenth-century Scottish medicine. Originally known as “ague” in northern Europe, malaria (from the Italian mala aria, or bad air)--the name derived from the somewhat sulfurous odor of bacterial decomposition in saltwater mudflats--became what contemporaries characterized as the “scourge of Scotland,” vanishing after the early 1800s never to return except as an occasional tropical import. While malaria probably existed in Scotland for a millennium, I detected its presence when I discovered numerous “intermittent fever” cases between 1770 and 1800 in the records of charitable institutions such as the Edinburgh Infirmary and the Kelso Dispensary, supplemented by accounts in medical student notebooks and the writings of prominent Scottish physicians. Their presence and care--the patients were poor--confirmed the gradual medicalization of Scottish society during the Enlightenment.

Any Google search will uncover a steady parade of sporadic cases detected in recent decades, all imported in an era of globalization and rapid human travel from Asia and especially West Africa. However, thanks to several historians, especially Mary Dobson, we know that since at least the sixteenth century the salty marshlands of Kent and Essex in England were notorious for their lack of salubrity, depopulation, and high levels of mortality. Together with the river estuaries and flood plains of Lincolnshire, Cambridgeshire, and Norfolk, these regions were singled out for the frequency of ‘marsh fever’ or ‘ague’ among their dwellers, imported by migrant Dutch farmhands and traders from Baltic ports where the disease was also endemic. In fact, eighteenth-century Scottish physicians frequently referred to intermittent or “autumnal” fevers as the “Kentish ague.” Malaria flourished near these stagnant pools of salted water, ideal breeding grounds for several species of vector mosquitoes, notably Anopheles atroparvus and A. messeae. Advanced molecular methods suggest that Europe’s malaria was primarily caused by a faster-reproducing but less deadly protozoan parasite, Plasmodium vivax, still present in wild apes and derived from an ancestor that escaped Africa, and perhaps by Plasmodium malariae, an even less aggressive species identified with chronic cases.

My working hypothesis was that, like elsewhere in Europe and Africa, malaria was a component of a distinctive ecology of disease, a product of the unique geography, microclimates, and particular social organization prevailing in the Scottish Lowlands. My first task was to determine the geographical location of the reported cases, an undertaking made easier by the Scottish parish reports collected in the Statistical Account of Scotland, published by John Sinclair in 1794. The collection contains detailed information about climate, population, agricultural development, commerce, and prevailing diseases not available for previous centuries. Two separate regions stood out: both were near important trading centers and areas of considerable agricultural activity, thus attracting seasonal Scottish farmhands, many of them previously employed in the English fens. The first comprised lowlands located in the county of Angus northeast of the Firth of Tay, as well as a stretch of alluvial terrain between Perth and Dundee on the northern edge of the River Tay known as the Carse of Gowrie. The second region lay to the south, distributed around the Scottish Borders and involving various floodplain parishes near the market town of Kelso at the confluence of the Teviot and Tweed Rivers. It also included Roxburgh and, towards the northeast, the Tweed’s estuary in Berwickshire bordering England. As in the north, and because of their salinity, the marshes all offered ideal breeding grounds for A. atroparvus mosquitoes.

Climate seemed less of a factor. As researchers studying malaria in England pointed out, the “ague” was already firmly established in stagnant waters near the North Sea coast during the Little Ice Age, which lasted from the 1560s to the 1750s. More important were seasonal variations, with warmer and drier summers creating conditions that not only accelerated parasitic reproduction in the vectors but also concentrated hungry hordes of mosquitoes, desperate for blood meals, in the remaining pools of water. Here deficient clothing, common among poor workers, became another factor facilitating exposure. So far, so good: I concluded that ague could be, and indeed was, present in Scotland, an occasional and temporary import linked to poverty and the lack of employment that forced migration. I surmised that new local cases could be prevented, since the country’s cold winters made it virtually impossible for the offending insects to survive.

Not so fast. Instead of just a trickle of indigenous cases from notorious swampy areas, Scotland in the early 1780s--particularly in the Borders region--witnessed some unusual epidemic outbreaks that defied previous explanations. As noted, Kelso traditionally functioned as a regional job market for farmhands and servants. With seasonal unemployment in the Highlands, this exchange would periodically swell as job seekers and their families crowded into decrepit, thatched cottages. Surrounded by dunghills, these windowless cabins served as ideal shelters and potential hibernation spaces for malaria-infected mosquitoes. A perusal of eighteenth-century weather charts confirmed that the Scottish Lowlands had been subjected to an erratic climatic pattern, especially during the decade 1780-90: excessive rainfall during winter and spring but warm and dry summers. This produced excellent conditions for the proliferation of the pesky vectors necessary for parasite transmission. Moreover, in spring, strong easterly winds allowed vast mosquito populations from coastal estuaries near the Tay and Tweed Rivers to drift inland towards urban centers in Perthshire and around Kelso. Such conditions coincided with harvest time and its usual movements of traders and itinerant migrant workers. Scottish mosquitoes were ready to pounce on the native rural population after being infected by blood meals containing plasmodia obtained from previous visitors to the English fens--many of them in remission or suffering from chronic malaria. The warmer weather ensured their year-round presence and posed the threat of larger outbreaks. Under such circumstances, climate, people, and poor housing conspired to make malaria temporarily endemic in Scotland.

Kelso Dispensary

Examples of eighteenth-century malaria in Scotland can be found among cases extracted from medical student notebooks. A typical patient was Leslie C., a 22-year-old mother of a three-year-old daughter, Margaret. Both were admitted together to the teaching ward at the Royal Infirmary of Edinburgh on February 4, 1795. People diagnosed with “intermittent” fevers represented a special population of sufferers admitted to the hospital during the winter months after extended and stressful journeys from the endemic malaria areas of southeast England. Many had experienced the first debilitating paroxysms of ague during the previous harvest season there before going into a temporary remission. Given the severity of their new symptoms, Edinburgh professors wanted students to follow the clinical course of the most challenging cases. Exhausted from riding in open wagons, Leslie claimed that for three months she had been experiencing regular attacks of high fever every 72 hours, precisely around 4:30 PM, a malaria variety defined as a “quartan” fever. Her sallow complexion and lack of menstruation hinted at anemia and emaciation. Each fit lasted about an hour and left her extremely tired and weak. Margaret’s shivers, headaches, and high fever occurred every other day, a pattern known as a “tertian” fever. Weaned for eight months, the child seemed to be grinding her teeth. Her belly was quite swollen, a sign of possible starvation or of the spleen enlargement common in malaria. The mother’s symptoms had started near the marshes around Hilton, in Huntingdonshire. Her daughter’s symptoms began shortly thereafter, further north at Stamford in Lincolnshire. Both locations suggest that the pair were part of an itinerant labor force returning to Scotland, presumably from Essex or Kent, desperately attempting to escape the insalubrious region and survive the winter. Hospital consultants such as William Cullen, the famous University of Edinburgh professor, believed that the “Kentish ague” was much easier to cure by nature than by art. Indeed, left alone, most vivax malaria sufferers eventually went into long-term remission. According to the ledgers, Leslie and Margaret remained in the hospital for about ten weeks, provided with flannel shirts to stimulate further sweating, a phenomenon believed to forestall further fits. Initially, both patients received ineffective small doses of a brew obtained from cinchona, the Peruvian tree bark containing the active ingredient quinine. Unfortunately, the bitter-tasting powder seriously “loaded up” Leslie’s stomach, prompting the attending physician to temporarily stop the medication and to administer the drug to the child in the form of enemas. Other patients were exposed to electric sparks in hopes of stimulating further sweating and thus “escaping the fits.” With rest, a full diet, wine, and other tonics, later supplemented with higher doses of cinchona, Leslie’s and Margaret’s fever attacks eventually ceased. Both patients survived their ordeal and left the Infirmary in better condition.

Graph of ague cases at Kelso

So, having figured out that malaria indeed existed in Scotland early on, my next question was why it declined and eventually disappeared from the country. From further research, I deduced that the key was agricultural reform and the concomitant demand for additional land for cultivation, including the drainage of impermeable clay soils. Over time, in Scotland’s Lowlands, old salt marshes and other stagnant pools of water were eliminated. Indeed, they were turned into arable land for turnip husbandry and crop rotation, thereby eliminating most breeding habitats for mosquitoes and gradually diminishing malaria transmission. With root crops available as fodder, additional cattle and sheep were kept out grazing in the fields year-round, taking the place of humans as providers of blood meals for the shrinking insect population.

Old Scottish cottage near Kelso, an ideal shelter for malaria-carrying mosquitoes.

Most importantly, however, the new demands of mixed farming halted the seasonal exodus of farm laborers to and from England, traditionally the main source of malaria importation. New crops and tools reduced the farming workforce. Existing tenants were ejected and sent towards cities like Edinburgh and Berwick, incipient centers of the Industrial Revolution, thus reducing the number of potential human hosts. Farmsteads were consolidated, separated from the traditional rows of primitive cottages hitherto crowded with laborers. Dank, dark, and buggy abodes were demolished, and front-door dunghills were removed. With steady work, the remaining population sought higher-quality wooden housing with window frames for ventilation that proved less hazardous to their health. Last but not least, infusions or tinctures of cinchona bark, effective against malaria at high doses, became a popular domestic remedy that aided the survival of remaining sufferers into the 1800s, further breaking the disease’s infective cycle.

Aha! Riddle solved! My historical sleuthing revealed that, while relatively benign, malaria not only influenced the course of Scottish medicine but also came to be recognized, in its origins and evolution, as an expression of the particular geographical, climatic, biological, and, above all, social and economic forces prevailing in Scotland during the late eighteenth century. Case closed.

Sources

Owsei Temkin, “On Second Thought” and Other Essays in the History of Medicine and Science, Baltimore: Johns Hopkins University Press, 2002.

Guenter B. Risse, “Ague in Eighteenth-Century Scotland? The Shifting Ecology of a Disease,” in New Medical Challenges During the Scottish Enlightenment, Amsterdam: Rodopi, 2005, pp. 171-97.

L.J. Bruce-Chwatt, “Ague as Malaria: An Essay on the History of Two Medical Terms,” Journal of Tropical Medicine and Hygiene, 79 (1976): 168–76.

Otto S. Knottnerus, “Malaria Around the North Sea: A Survey,” in Climatic Development and History of the North Atlantic Realm, eds. Gerold Wefer et al., Berlin: Springer, 2002, pp. 339-53.

Paul Reiter, “From Shakespeare to Defoe: Malaria in England in the Little Ice Age,” Emerging Infectious Diseases (Feb 2000): 1-11.

J. Sinclair, Analysis of the Statistical Account of Scotland, 2 parts, Edinburgh: A. Constable, 1825; reprint, New York: Johnson Reprint Corp., 1970.

“Register of the Weather,” in Transactions of the Royal Society of Edinburgh 1 (1788): 206.

C. Wilson, “Statistical Observations on the Health of the Labouring Population of the District of Kelso in Two Decennial Periods, From 1777 to 1787 and From 1829 to 1839,” Proceedings of the Border Medical Society (1841): 47-85.

Cases of Leslie and Margaret Campbell, in Andrew Duncan, Sr., Clinical Reports and Commentaries, February–April 1795, presented by A.B. Morison (Edinburgh: 1795), MSS Collection, Royal College of Physicians, Edinburgh. For more details of Leslie's treatment see J.W. Estes, “Drug Usage at the Infirmary: the Example of Dr. Andrew Duncan, Sr.,” in Guenter B. Risse, Hospital Life in Enlightenment Scotland: Care and Teaching at the Royal Infirmary of Edinburgh, New York: Cambridge University Press, 1986, (note 65), 359–60.

F. Home, “Experiments with Regard to the Most Proper Time of Giving the Bark in Intermittents,” in Clinical Experiments, Histories, and Dissections, 3rd edn, London: J. Murray, 1783, pp. 1-13.

L.J. Bruce-Chwatt and J. de Zulueta, The Rise and Fall of Malaria in Europe: A Historico-Epidemiological Study, Oxford: Oxford University Press, 1980.

M. Dobson, “Malaria in England: A Geographical and Historical Perspective,” Parassitologia 36 (1994): 35-60, and “Marshlands, Mosquitoes and Malaria,” in Contours of Death and Disease in Early Modern England, Cambridge: Cambridge University Press, 1997, pp. 306-27.

J.H. Brotherston, “The Decline of Malaria,” in Observations on the Early Public Health Movement in Scotland, London: H.K. Lewis & Co., 1952, pp. 26-36.

T.M. Devine, The Transformation of Rural Scotland: Social Change and the Agrarian Economy, 1660-1815, Edinburgh: Edinburgh University Press, 1994.