Science communication

[Image: Communication]

As this will be the last post on my blog, I thought it fitting to end with a discussion of the history of science communication itself. That is, to provide an overview of the major changes that have occurred over the past few hundred years in the way that science has been conducted and communicated.

During the 18th and early 19th centuries, scientists usually depended on patrons for financial support and credibility. Those with independent means funded their own research, and papers were largely produced by people working in the private sphere.

At this time, science was popular for both education and entertainment. Ordinary people were exposed to scientific ideas through exhibitions and museums as well as public lectures and demonstrations. Members of the middle classes pursued academic hobbies, and amateur scientists made breakthroughs in fields such as astronomy, geology, botany and zoology. People who were literate could also read about science in books, newspapers and periodicals. A number of early Australian newspapers, including the Sydney Morning Herald, Hobart Mercury, Melbourne Argus and Brisbane Courier, published science articles, weather reports and agricultural information in the 19th century.

The late 1800s saw a shift in the way that science was studied and communicated. Scientists began to conduct research in labs away from the public eye and to discuss their findings with other members of newly established scientific societies. There was a greater focus on sharing findings with others working in the same field through the publication of scientific journals and the process of peer review. Most fields became dominated by experts, and amateur scientists no longer made such important contributions to scientific knowledge. Overall, science became increasingly professionalised and more aligned with government and research institutions such as state-funded universities.

In industrialised nations this trend continued until the middle of the 20th century, when Big Science emerged. From the Second World War onwards there has been an emphasis on large-scale, government-funded projects and international partnerships. Research scientists increasingly focused on collaborative work and gained access to large budgets and multi-million-dollar laboratories. Scientists also started to work for large corporations that invest in research and development. To draw an example from the biological sciences, the Human Genome Project is typical of the kind of work being done by the late 20th century.

These changes caused a significant shift in the way that scientific knowledge was communicated to the public. A widening gap between researchers and the general public meant that scientists were no longer able to educate people directly. The media largely took over the responsibility of communicating science and journalists played a key role in mediating between scientists and the general public. This kind of science communication used a top-down approach known as the deficit model, which rested on the assumption that most people had a low level of scientific literacy and simply needed to be fed the correct information about the issue at hand. This one-way strategy did little to support public engagement with science as it left no room for dialogue about scientific issues. Furthermore, it meant that science stories were selected based on newsworthiness and communicators were limited in their ability to target specific groups.

Over the past few decades, the relationship between scientists and the public has begun to reflect that of earlier times. An explosion of different forms of media has meant that there are new ways of interacting with the public, and science is no longer just channelled passively to people through third parties. Many scientists don’t just research and publish papers but also act as communicators. Scientists can discuss their work and interact with both other researchers and the general public using social media such as Facebook, Twitter and LinkedIn. Others blog or run YouTube channels. While the rise of Big Science has meant that individual scientists are generally less visible, the media has enabled some to become well known. Sir David Attenborough has been presenting nature documentaries for decades, and most Australians are familiar with Dr Karl Kruszelnicki’s regular science talkback on Triple J radio. Science is now easily accessible through multiple channels, including news media, radio, the internet, books, magazines and television.

Non-professionals have also started to assume a more central role in scientific research through citizen science programs whereby members of the public can participate in research projects. Science cafes, TED talks, themed events and open days at universities and research institutions are also popular. For example, Australian National Science Week has been an annual event since 1997. Science issues also receive attention as they frequently spill over into other areas of public interest such as politics, the environment, health, travel and education.

Although the way that scientific research is conducted has transformed over the past few centuries, in many ways the communication of science has come full circle, returning to an approach that promotes public engagement and participation.

Image credit

Feliz, E. (2009). Communication [Image]. Retrieved from https://www.flickr.com/photos/elycefeliz/3224486233

References

Burns, M. (2014). A brief history of science communication in Australia. Media International Australia, 150, 72-76.

Konneker, C. & Lugger, B. (2013). Public science 2.0 – back to the future. Science, 342(6154), 49-50.

Logan, R. A. (2001). Science mass communication: its conceptual history. Science Communication, 23(2), 135-163.

The moustachioed soldiers of WW1

As October draws to a close, men all over the world are getting ready for Movember, an annual event that gives them a chance to raise money and awareness for men’s health issues by sporting a moustache in the month of November. However, in the era before safety razors and shaving foam, personal grooming was a much more difficult affair. Men faced many challenges whilst attempting to maintain their facial hair in the trenches of WW1, as this post by Dr Alun Withey explains.

Thalidomide, past and present

[Image: Thalidomide]

Thalidomide is widely regarded as the worst drug disaster in history for its role in causing thousands of congenital malformations. This post attempts to provide a brief overview of how the tragedy came about and outlines how the same drug has now made a comeback in the treatment of cancer and leprosy.

Thalidomide was developed in West Germany in 1954 by a pharmaceutical company that was attempting to produce low-cost antibiotics. The drug displayed no antibiotic properties but was found to act as a sedative. It was labelled non-toxic after extremely high doses were given to rats, mice, guinea pigs, rabbits, cats and dogs with no apparent side effects. No further testing was done before the drug was released commercially in 1957. Sold in 46 countries worldwide, thalidomide quickly became a best-selling medication and in many places it was nearly as popular as aspirin.

Lauded as a completely safe alternative to barbiturate sleeping tablets, thalidomide was used in the treatment of anxiety, insomnia and seizure disorders. Patients reported that the drug had a calming and sleep-inducing effect. After it was found to have antiemetic properties it was commonly prescribed for women suffering from morning sickness in the first trimester of pregnancy.

Concerns arose when doctors began to report that some patients who had been taking the drug were developing signs of nerve damage known as peripheral neuritis. Even more alarming was the unusually high rate of babies being born with malformations. In particular, phocomelia – a rare congenital condition in which the limbs are stunted or missing – appeared at a level never previously seen. By 1961 thalidomide had been banned and withdrawn from the international market after use of the drug was linked to the incidence of these severe birth defects.

Between 1957 and 1961 an estimated 40 000 people developed peripheral neuritis and 12 000 infants were born with malformations. In addition to missing or shortened limbs, deformities included blindness, brain damage and the absence of internal organs. More than half of these children died within their first year, while the survivors suffered lasting disability. The thalidomide tragedy resulted in improved drug testing procedures and many survivors have received compensation awarded through high-profile lawsuits.

Given the effects of the tragedy, it is surprising to note that thalidomide is still used today as the result of a serendipitous discovery made after the drug was banned. In 1964 the Israeli doctor Jacob Sheskin was caring for a patient with erythema nodosum leprosum, a painful dermatological complication of leprosy. After finding an old bottle of thalidomide in a cupboard, he gave it to the patient to help him sleep because the man was dying and in a lot of pain. To his surprise, the man slept soundly and awoke feeling better the following morning. New clinical trials supported this discovery and thalidomide revolutionised the treatment of leprosy by improving the management of erythema nodosum leprosum.

Thalidomide has also been approved for the treatment of some types of cancer and is particularly effective for multiple myeloma when standard therapies such as chemotherapy have proven unsuccessful. Research has shown that the drug inhibits the growth of new blood vessels from pre-existing ones (a process known as angiogenesis). Because tumours are unable to grow without a steady blood supply, thalidomide limits the proliferation of malignant cells by restricting the formation of the new vessels that would supply them.

It has been proposed that the drug’s effect on neural tissue and blood vessel development is the mechanism for the birth defects seen in the late 1950s and early 1960s. Although it is now known that pharmaceuticals can cross the placenta, at that time it was thought to be a barrier that protected the foetus from the harmful substances the mother was exposed to. Angiogenesis of the placenta is crucial for foetal development and the transfer of oxygen, nutrients and wastes. The first trimester of pregnancy is a critical period for the development of basic body structures and organ systems and it appears that a restriction in vascular growth resulting from the drug seriously impacted this process.

Sadly there is a second generation of thalidomide victims in the babies born to Brazilian women taking the drug for the treatment of leprosy. Researchers are currently working to develop a safe analogue of thalidomide that can be administered to leprosy and cancer patients without the risk of causing birth defects.

Image credit

Derer, M. (1998). Thalidomide [Image]. Retrieved from https://www.flickr.com/photos/duckwalk/9636420141

References 

Goldman, D. A. (2001). Thalidomide use: past history and current implications for practice. Oncology Nursing Forum, 28(3), 471-477.

Silverman, W. A. (2002). The schizophrenic career of a “monster drug”. Pediatrics, 110(1), 404-406.

A long history of medical satire

Dara O’Briain is one of my favourite comedians and he frequently deals with science issues in his stand-up routines. A mathematics and theoretical physics graduate, he also hosts the television programmes Dara O’Briain’s Science Club and School of Hard Sums. These programmes educate viewers about maths, physics, chemistry and biology through a series of silly brainteasers and conundrums. As comedy shows they try to change the way that people think about science by making them laugh.

However, taking a light-hearted look at scientific ideas is not new and historical examples of cartoons and caricatures making fun of bad science abound. In particular, medical practitioners viewed as quack doctors – like Dara’s homeopaths – have long been a target of satirists. To read about the various ways that medical practitioners have been lampooned throughout history, check out this amusing post by Dr Mark Bryant.

Dr Seuss as a science communicator

Before publishing his famous children’s books under the pen-name Dr Seuss, Theodor Geisel started out as an illustrator for advertising agencies and during WW2 worked as a political cartoonist. He used his talents to support the war effort by illustrating military materials for the US Treasury Department and War Production Board. To see a pamphlet he created to educate American soldiers about the risk of malaria and read more about this publication take a look at the post on this topic at the Contagions blog.

The deadliest pandemic in history: the 1918 influenza outbreak

[Image: Influenza]

Thousands died in the 2009 outbreak of swine flu that swept the globe. However, that figure pales in comparison to the estimated 50 to 100 million deaths that resulted from the 1918-1919 influenza pandemic at the end of World War One.

The disease was unusual from an epidemiological standpoint as it attacked in three separate waves across a 12-month period. While influenza usually peaks in winter, the first wave began in the spring of 1918. It spread rapidly across the globe and closely resembled seasonal flu. A second wave from September to November was highly deadly, and the final wave in early 1919 was less virulent than either of the previous two outbreaks.

An exceptionally severe virus, it infected about 500 million people, roughly one third of the global population at the time. The loss of life was unprecedented: the fatality rate was over 2.5%, compared with around 0.01% for seasonal outbreaks. The illness came on suddenly and could progress quickly to respiratory failure, while in cases that developed more slowly people often died from secondary bacterial infections.

The 1918 flu also targeted sufferers differently from milder strains. Normally the people worst affected by influenza are the very young, the elderly and the immunocompromised, but in the 1918 pandemic healthy young adults aged 20-35 were the hardest hit. In America, the death rate for 15-34 year olds was 20 times higher in 1918 than in previous years.

Although the origin of the virus is unknown, it spread worldwide along trade routes and shipping lines. Even remote Pacific islands and parts of the Arctic were affected. The mass movement of people associated with the war and the demobilisation of troops afterwards is thought to have helped spread the illness. Quarantines were introduced to reduce the spread of the disease, but they were of limited effectiveness.

Medical services were stretched to their limits as public health measures attempted to control the outbreak. Public gatherings were suspended and many shops stayed closed during the worst periods of the pandemic. People took to wearing gauze masks and staying indoors until the pandemic ended as quickly as it had appeared.

While the precise origins of the strain that caused the 1918 pandemic remain uncertain, it is thought to have evolved from an avian virus. The viral genome has since been recovered from preserved tissue samples and sequenced, and it is known that the H1N1 strains circulating today are descended from the virus of the 1918 pandemic.

If you’d like to follow up on this short post, you can take a look at an infographic on the 1918 flu here. To compare, there is one on the 2009 swine flu here.

Image credit

Navy Medicine. (1918). Influenza [Image]. Retrieved from https://www.flickr.com/photos/navymedicine/7839585384

References

Taubenberger, J. K. & Morens, D. M. (2006). 1918 influenza: the mother of all pandemics. Emerging Infectious Diseases, 12(1), 15-22.

Re-evaluating scurvy in the Irish famine

Historical records from the 18th and 19th centuries document cases of scurvy at a level that is unsupported by archaeological evidence. Scurvy is a nutritional condition that results from vitamin C deficiency and it commonly occurs during times of famine. The characteristic bone lesions formed upon the re-introduction of vitamin C into the diet have long been used by archaeologists to identify the disease in skeletal remains. This post explains how recent improvements in bioarchaeological technology and techniques have been used to identify the disease in victims of the Irish Famine (1845-1852) and suggests that earlier studies showing lower rates of scurvy may have missed signs of the disease.