Science communication


As this will be the last post on my blog, I thought it fitting to end with a discussion of the history of science communication itself. That is, to provide an overview of the major changes that have occurred over the past few hundred years in the way science has been conducted and communicated.

During the 18th and early 19th centuries, scientists usually depended on patrons for financial support and credibility. Those with independent means funded their own research, and papers were largely produced by people working in the private sphere.

At this time, science was popular for both education and entertainment. Ordinary people were exposed to scientific ideas through exhibitions and museums as well as public lectures and demonstrations. Members of the middle classes pursued academic hobbies, and amateur scientists made breakthroughs in fields such as astronomy, geology, botany and zoology. People who were literate could also read about science in books, newspapers and periodicals. A number of early Australian newspapers, including the Sydney Morning Herald, Hobart Mercury, Melbourne Argus and Brisbane Courier, published science articles, weather reports and agricultural information in the 19th century.

The late 1800s saw a shift in the way that science was studied and communicated. Scientists began to conduct research in labs away from the public eye and discuss their findings with other members of newly established scientific societies. There was a greater focus on sharing findings with others working in the same field through the publication of scientific journals and the process of peer review. Most fields became dominated by experts, and amateur scientists no longer made such important contributions to scientific knowledge. Overall, science became increasingly professionalised and more aligned with government and research institutions such as state-funded universities.

In industrialised nations this trend continued until the middle of the 20th century, when Big Science emerged. From the Second World War onwards there has been an emphasis on large-scale, government-funded projects and international partnerships. Research scientists increasingly focused on collaborative work and gained access to large budgets and multi-million dollar laboratories. Scientists also began to work for large corporations that invest in research and development. To draw an example from the biological sciences, the Human Genome Project is typical of the kind of work being done by the late 20th century.

These changes caused a significant shift in the way that scientific knowledge was communicated to the public. A widening gap between researchers and the general public meant that scientists were no longer able to educate people directly. The media largely took over the responsibility of communicating science and journalists played a key role in mediating between scientists and the general public. This kind of science communication used a top-down approach known as the deficit model. This strategy followed the assumption that most people had a low level of scientific literacy and simply needed to be fed the correct information about the issue at hand. This one-way communication strategy did little to support public engagement with science as it left no room for dialogue about scientific issues. Furthermore, it meant that science stories were selected based on newsworthiness and communicators were limited in their ability to target specific groups.

Over the past few decades, the relationship between scientists and the public has begun to reflect that of earlier times. An explosion of different forms of media has meant that there are new ways of interacting with the public, and science is no longer just channelled passively to people through third parties. Many scientists don't just research and publish papers but perform roles as communicators too. Scientists can discuss their work and interact with both other researchers and the general public using social media such as Facebook, Twitter and LinkedIn. Others blog or have YouTube channels. While the rise of Big Science has meant that individual scientists are generally less visible, the media has enabled some to become well known. Sir David Attenborough has been presenting nature documentaries for years and most Australians are familiar with Dr Karl Kruszelnicki's regular science talkback on Triple J radio. Science is now easily accessible through multiple channels such as news media, radio, the internet, books, magazines, television…

Non-professionals have also started to assume a more central role in scientific research through citizen science programs whereby members of the public can participate in research projects. Science cafes, TED talks, themed events and open days at universities and research institutions are also popular. For example, Australian National Science Week has been an annual event since 1997. Science issues also receive attention as they frequently spill over into other areas of public interest such as politics, the environment, health, travel and education.

Although the way that scientific research is conducted has transformed over the past few centuries, in many ways the communication of science has come full circle, returning to an approach that promotes public engagement and participation.

Image credit

Feliz, E. (2009). Communication [Image]. Retrieved from https://www.flickr.com/photos/elycefeliz/3224486233

References

Burns, M. (2014). A brief history of science communication in Australia. Media International Australia, 150, 72-76.

Konneker, C. & Lugger, B. (2013). Public science 2.0 – back to the future. Science, 342(49), 49-50.

Logan, R. A. (2001). Science mass communication: its conceptual history. Science Communication, 23(2), 135-163.


Thalidomide, past and present


Thalidomide is widely regarded as the worst drug disaster in history for its role in causing thousands of congenital malformations. This post attempts to provide a brief overview of how the tragedy came about and outlines how the same drug has now made a comeback in the treatment of cancer and leprosy.

Thalidomide was developed in West Germany in 1954 by a pharmaceutical company that was attempting to produce low-cost antibiotics. The drug displayed no antibiotic properties but was found to act as a sedative. It was labelled non-toxic as extremely high doses were given to rats, mice, guinea pigs, rabbits, cats and dogs with no apparent side effects. No further testing was done before the drug was released commercially in 1957. Sold in 46 countries worldwide, thalidomide quickly became a best-selling medication and in many places it was nearly as popular as aspirin.

Lauded as a completely safe alternative to barbiturate sleeping tablets, thalidomide was used in the treatment of anxiety, insomnia and seizure disorders. Patients reported that the drug had a calming and sleep-inducing effect. After it was found to have antiemetic properties it was commonly prescribed for women suffering from morning sickness in the first trimester of pregnancy.

Concerns arose when doctors began to report that some patients who had been taking the drug were developing signs of nerve damage known as peripheral neuritis. Even more alarming was the unusually high rate of babies being born with malformations. In particular, phocomelia – a rare congenital condition in which the limbs are stunted or missing – appeared at a level never previously seen. By 1961 thalidomide had been banned and withdrawn from the international market after use of the drug was linked to the incidence of these severe birth defects.

Between 1957 and 1961 an estimated 40 000 people developed peripheral neuritis and 12 000 infants were born with malformations. In addition to missing or shortened limbs, deformities included blindness, brain damage and the absence of internal organs. More than half of these children died within their first year while the survivors suffered lasting disability. The thalidomide tragedy resulted in improved drug testing procedures and many survivors have received compensation awarded through high profile lawsuits.

Given the effects of the tragedy, it is surprising to note that thalidomide is still used today as the result of a serendipitous discovery made after the drug was banned. In 1964 the Israeli doctor Jacob Sheskin was caring for a patient with erythema nodosum leprosum, a painful dermatological complication of leprosy. After finding an old bottle of thalidomide in a cupboard he gave it to the patient to help him sleep because the man was dying and in a lot of pain. To his surprise the man slept soundly and awoke feeling better the following morning. New clinical trials supported this discovery and thalidomide revolutionised the treatment of leprosy by improving the management of erythema nodosum leprosum.

Thalidomide has also been approved for the treatment of some types of cancer and is particularly effective for multiple myeloma after attempts at standard therapies such as chemotherapy have proven unsuccessful. Research has shown that the drug inhibits the growth of new blood vessels from pre-existing ones (a process known as angiogenesis). Tumours are unable to grow without a steady blood supply; thalidomide prevents the proliferation of malignant cells by reducing the angiogenesis of surrounding blood vessels.

It has been proposed that the drug’s effect on neural tissue and blood vessel development is the mechanism for the birth defects seen in the late 1950s and early 1960s. Although it is now known that pharmaceuticals can cross the placenta, at that time it was thought to be a barrier that protected the foetus from the harmful substances the mother was exposed to. Angiogenesis of the placenta is crucial for foetal development and the transfer of oxygen, nutrients and wastes. The first trimester of pregnancy is a critical period for the development of basic body structures and organ systems and it appears that a restriction in vascular growth resulting from the drug seriously impacted this process.

Sadly there is a second generation of thalidomide victims in the babies born to Brazilian women taking the drug for the treatment of leprosy. Researchers are currently working to develop a safe analogue of thalidomide that can be administered to leprosy and cancer patients without the risk of causing birth defects.

Image credit

Derer, M. (1998). Thalidomide [Image]. Retrieved from https://www.flickr.com/photos/duckwalk/9636420141

References 

Goldman, D. A. (2001). Thalidomide use: past history and current implications for practice. Oncology Nursing Forum, 28(3), 471-477.

Silverman, W. A. (2002). The schizophrenic career of a “monster drug”. Pediatrics, 110(1), 404-406.

The deadliest pandemic in history: the 1918 influenza outbreak


Thousands died in the 2009 outbreak of swine flu that swept the globe. However, that figure pales in comparison to the estimated 50 to 100 million deaths that resulted from the 1918-1919 influenza pandemic that followed World War One.

The disease was unusual from an epidemiological standpoint as it attacked in three separate waves across a 12-month period. While influenza usually peaks in winter, the first wave began in the spring of 1918. It spread rapidly across the globe and closely resembled seasonal flu. A second wave from September to November was highly deadly and the final wave in early 1919 was less virulent than either of the previous two outbreaks.

An exceptionally severe virus, it was contracted by about 500 million people – one third of the global population at the time. The loss of life was unprecedented; the fatality rate was over 2.5%, compared with the roughly 0.01% typical of seasonal outbreaks. The illness came on suddenly and quickly progressed to respiratory failure. In cases that developed more slowly people often died from secondary bacterial infections.

The 1918 flu also targeted sufferers differently to milder strains. Normally the people worst affected by influenza are either the very young, the elderly or the immunocompromised. In the 1918 pandemic healthy young adults aged 20-35 were the hardest hit. In America, the death rate for 15-34 year olds was 20 times higher in 1918 than in previous years.

Although the origin of the virus is unknown, it spread worldwide along trade routes and shipping lines. Even remote Pacific islands and parts of the Arctic were affected.  The mass movement of people associated with the war and demobilisation of troops afterwards is thought to have helped spread the illness. Quarantines were introduced to reduce the spread of the disease, however they were of limited effectiveness.

Medical services were stretched to their limits and public health measures attempted to control the outbreak. Public gatherings were suspended and many shops stayed closed during the worst periods of the pandemic. People took to wearing gauze masks and staying indoors until, as quickly as it had appeared, the pandemic ended.

While the exact strain that caused the 1918 pandemic hasn't been identified, it is thought to have evolved from an avian virus. Genomic sequencing of the entire virus is yet to be completed; however, it is known that the H1N1 strains circulating today are descended from the influenza of the 1918 pandemic.

If you’d like to follow up on this short post, you can take a look at an infographic on the 1918 flu here. To compare, there is one on the 2009 swine flu here.

Image credit

Navy Medicine. (1918). Influenza [Image]. Retrieved from https://www.flickr.com/photos/navymedicine/7839585384

References

Taubenberger, J. K. & Morens, D. M. (2006). 1918 influenza: the mother of all pandemics. Emerging Infectious Diseases, 12(1), 69-79.

Scrubs: a tale of surgical attire


Medical staff wear specially designed clothing to reduce the spread of disease in hospital settings. However, this wasn’t always the case, as up until the late 1800s most doctors performed surgery whilst dressed in ordinary clothing. The “scrubs” – so called because they are worn by those who have “scrubbed up” to prepare for surgery – that we see today didn’t appear until well into the 20th century.

For centuries doctors wore ordinary clothes in operating theatres and worked bare-handed with non-sterile instruments. Having to wear special surgical attire was unpopular as uniforms were associated with the lower classes, but by the 1890s surgeons started to wear surgical gowns over their clothing to protect it from bloodstains. Yet these garments did little to reduce the spread of disease as they were rarely washed and usually stained with flecks of dried blood and pus.

After World War One and the outbreak of Spanish influenza in 1918, growing acceptance of the germ theory of disease meant surgeons and their assistants began to wear gowns, caps, rubber gloves and gauze masks. However, these practices were not universally adopted and the purpose of these measures was primarily to protect surgeons from catching diseases from their patients rather than for the prevention of intra-operative infections. It wasn’t until several decades later that medical professionals began to pay greater attention to maintaining a sterile environment.

By the 1940s, advances in aseptic techniques and a better understanding of the aetiology of wound infection meant that more stringent measures were put into place to reduce the spread of germs in operating theatres. Instruments and dressings were routinely sterilised with steam and standard surgical attire became regarded as an important way to prevent post-operative infections. White was associated with sterility and cleanliness and was used for surgical gowns until it was found that the glare it caused under the bright theatre lights created eye strain for surgeons. By the 1950s, most hospitals had switched to surgical attire in jade green or ceil blue instead as those colours reduce eye fatigue and provide a high contrast against the reddish colours of body tissues and blood.

Two-piece outfits consisting of a tunic shirt and pants were introduced in the 1960s and 1970s and have remained largely unchanged since that time. Worn by both men and women, scrubs are designed to be comfortable, durable and wrinkle resistant.  Their simple design aims to limit the places that pathogens can proliferate and the cotton/polyester blend of the fabric is able to withstand laundering at high temperatures for sterilisation purposes. Scrubs are also cheap enough to be easily replaced if they become badly stained or contaminated. The medical attire worn nowadays has come a long way from the unsanitary surgical practices of previous centuries.

Image credit

Ooi, P. (2012). Medical/surgical operative photography [Image]. Retrieved from https://www.flickr.com/photos/phalinn/8116024703

References

Belkin, N. L. (1998). Surgical scrubs – where we were, where we are going. Today’s Surgical Nurse, 20(2), 28-34.

Houweling, L. (2004). Image, function, and style: A history of the nursing uniform. American Journal of Nursing, 104(4), 40-48.

Dieting since the 19th century


Alarmingly high levels of obesity in western nations mean that dieting and weight loss programs feature heavily (pardon the pun) in our popular consciousness. Yet an obsession with weight goes back at least 150 years, to a period well before the current obesity crisis.

Prior to the mid-19th century, carrying a little excess weight was seen as a good thing. In the age before vaccinations and antibiotics it was commonly believed that being fatter enabled people to better withstand infectious diseases. Weight gain was also regarded as desirable because most people were thin as the result of not having enough to eat. Being overweight was a marker of prosperity as it signified that a person had the means to buy plenty of food and indulge themselves. Only wealthy merchants, businessmen or members of the aristocracy would have been able to afford to become corpulent, hence dieting to lose weight was relatively uncommon in this period.

The mechanisation and prosperity that accompanied the industrial revolution brought with it weight gain and the beginnings of modern diet plans. Reduced energy expenditure and increased access to (often poor quality) food meant that obesity began to rise in the working classes. The trend of previous centuries became reversed as obesity was increasingly associated with the lower classes while physical exercise fads and a quest for thinness became prominent amongst the wealthy upper echelons of society.

The social and economic changes that contributed to rising obesity influenced the emergence of the field of nutritional science. In the late 1800s balancing proteins, carbohydrates and fats featured in government health publications. Vegetarianism began to be promoted for good health from the early years of the 20th century. The food pyramid was created during the First World War. Calorie counting and diet pills emerged in the 1920s, and vitamins designed to correct nutritional deficiencies resulting from restrictive diets were sold from the following decade. Weight-for-height charts similar to the BMI charts of today first appeared in the 1940s and doctors began to advise their overweight patients to cut down on saturated fats in the 1950s.

As you can see, a concern with weight control has existed for quite some time. For a more in-depth look at the history of dieting you can read Louise Foxcroft’s entertaining and informative book Calories and Corsets: A History of Dieting Over 2000 Years.

Image credit

Numa, P. (1833). A corpulent physician diagnoses more leeches for a young woman, who lies drained and bedbound [Image]. Retrieved from http://commons.wikimedia.org/wiki/File:A_corpulent_physician_diagnoses_more_leeches_for_a_young_wom_Wellcome_V0011771.jpg

References

Rao, N. (2011). Dieting since the 1850s. The Journal of Health, Ethics, and Policy, 10(9), 38-39.

The hazards of shoe shopping in the past


The most groundbreaking scientific discoveries usually find their way into mainstream use and x-rays are no exception. Since they were discovered by German physicist Wilhelm Roentgen in 1895, they have been widely used in medicine, dentistry, security and other fields. However, not many people are aware that from the 1920s to the 1950s x-ray machines were a common feature in shoe stores.

Shoe-fitting fluoroscopes (also known as pedoscopes) were x-ray machines designed to check the fit of new shoes. They took pride of place in shops and would be positioned on specially lit raised platforms. Customers would place their feet into an opening at the bottom of a vertical wooden cabinet then look down through a viewing port to see a fluorescent image of the bones of their feet inside an outline of their shoes. Additional viewing ports allowed salespeople and companions to take a look. Visualising bones and soft tissues inside the shoes purportedly allowed salespeople to help their customers get a better fit by checking for toe room. This was especially pertinent to cash-strapped parents, who were concerned that their children would quickly outgrow poorly fitted shoes.

While the origin of the device is unknown, radiographic imaging was used in the First World War to examine foot injuries without requiring soldiers to take off their boots. After the war this technique was adapted for nonmedical use and fluoroscopes began to appear in shoe shops throughout Britain, Germany, Switzerland, Canada, America and Australia from the mid 1920s. The public response was initially one of enthusiasm and excitement, as people were delighted to see the effects of this seemingly magical x-ray technology first hand.

However, these feelings were gradually replaced by ones of fear and mistrust with growing public knowledge of the dangers of radiation exposure. Although it had long been known that scientists exposed to radioactive material suffered from severe side effects including burns, sterility and cancer, the risks that x-rays posed to the general public took much longer to be recognised. A greater understanding of the effects of radiation after the Second World War led to the introduction of legislation stipulating that those who worked with radioactive material were required to wear protective shields and undergo periodic health checks. The fluoroscopes found in shoe shops came under fire from health departments and medical journals in response to public concerns that children were regularly being exposed to harmful levels of radiation.

Testing of shoe-fitting fluoroscopes began in the late 1940s and the machines were found to be unsafe. They were discovered to emit dangerously high doses of radiation to anyone located in the near vicinity. Children, who are about twice as radiosensitive as adults, and chronically exposed salespeople were at the highest risk. Customers who tried on multiple pairs of shoes in one sitting or made several visits over the course of a year were exposed to cumulative doses. Anecdotal reports of salespeople suffering from radiation burns began to emerge, as well as cases of bone damage in young children. Furthermore, shoe-fitting fluoroscopes were found to provide little benefit in ensuring a better fit as the soft fleshy part of people's toes did not show up on the radiographic images of feet positioned inside new shoes.

Indeed, it appears that the fluoroscope functioned better as a sales promotion device than as a fitting aid. Children loved playing with the machines and they helped to reassure parents that they were not wasting their money. Fluoroscopes were widely promoted as the most scientific and accurate way of fitting shoes in an era when most people believed that having durable and high quality footwear was important to ensure good health. In reality, the machines were harmful and merely served to entice people into shops and confirm the judgement of salespeople.

By the mid 1950s, governments began to introduce legislation regulating the use of the devices. The passing of increasingly stringent regulations meant that by the early 1960s dangerous shoe-fitting fluoroscopes had been phased out altogether.

Image credit

National Museum of Nuclear Science and History. (n.d.). 1930s shoe fitting fluoroscope [Image]. Retrieved from https://www.flickr.com/photos/rocbolt/7375805180

References

Duffin, J., & Hayter, C.R.R. (2000). Baring the sole: The rise and fall of the shoe-fitting fluoroscope. Isis, 91(2), 260-282.

Resurrection men and anatomists


The availability of human bodies is critical to the study of anatomy. Cadavers are usually made available for research purposes through programs where people bequeath their bodies to medical schools and universities when they die. This wasn’t always the case, as during the 18th and early 19th centuries bodies were illegally procured for dissection.

The rapid growth of the biological sciences during this period was matched by an increased demand for human cadavers for dissections in medical schools and anatomy demonstrations. Up until the early 19th century in the United Kingdom, the only legal means of securing corpses for anatomical research was by claiming the bodies of those condemned to death and dissection by the courts. As only those who committed the most serious felonies were sentenced to this fate, only a few bodies were made available to anatomists each year and there was a severe shortage of cadavers.

Spotting a lucrative market in the trade of human bodies, people took to stealing the bodies of the recently deceased from fresh graves and selling them to medical schools. They would dig up the head end of a recent burial under the cover of night, break open the coffin, tie a rope around the neck of the corpse and pull it out. They earned the nicknames "resurrectionists" or "resurrection men". As stealing from cemeteries was not a crime that was punished harshly in the courts and many medical schools were willing to pay handsome sums for bodies, there was a significant incentive for criminals to engage in body snatching.

Many people feared dissection as it was believed that the soul of a person who had been dismembered was unable to enter Heaven in the afterlife. The prevalence of body snatching caused fear amongst the public, and people went to extreme lengths to prevent the bodies of their loved ones from ending up on anatomists' dissecting tables. Often, the friends and relatives of someone who had died would watch over their grave day and night until the body had decayed enough to become useless for medical dissection. In other cases, people were buried in heavy iron coffins, or cages called mortsafes were built around graves to prevent their contents from being exhumed.

In Britain, a spate of murders committed to obtain fresh bodies to sell to medical schools led to the passing of the Anatomy Act of 1832. This Act permitted unclaimed bodies and those donated by relatives to be used in the study of anatomy. It also regulated anatomy instruction through a licensing system that monitored private medical schools. By providing a regulated and increased supply of corpses for scientific research, this legislation finally brought the practice of grave robbing to an end.

Image credit

Knight Browne, H. (1847). Resurrectionists [Image]. Retrieved from http://en.wikipedia.org/wiki/Resurrectionists_in_the_United_Kingdom#mediaviewer/File:Resurrectionists_by_phiz.png

References

Quigley, C. (2012). Dissection on display: cadavers, anatomists and public spectacle. Jefferson: McFarland & Co.