
Epigenetic discovery suggests DNA modifications more diverse than previously thought


Published today in the journal Nature Structural and Molecular Biology, the discovery of a new type of epigenetic modification suggests that many more DNA modifications than previously thought may exist in humans, mice and other vertebrates.

DNA is made up of four ‘bases’: molecules known as adenine, cytosine, guanine and thymine – the A, C, G and T letters. Strings of these letters form genes, which provide the code for essential proteins, and other regions of DNA, some of which can regulate these genes.

Epigenetics (epi - the Greek prefix meaning ‘on top of’) is the study of how genes are switched on or off. It is thought to be one explanation for how our environment and behaviour, such as our diet or smoking habit, can affect our DNA and how these changes may even be passed down to our children and grandchildren.

Epigenetics has so far focused mainly on studying proteins called histones that bind to DNA. Such histones can be modified, which can result in genes being switched on or off. In addition to histone modifications, genes are also known to be regulated by a form of epigenetic modification that directly affects one base of the DNA, namely the base C. More than 60 years ago, scientists discovered that C can be modified directly through a process known as methylation, whereby small molecules of carbon and hydrogen attach to this base and act like switches to turn genes on and off, or to ‘dim’ their activity. Around 75 million (one in ten) of the Cs in the human genome are methylated.

Now, researchers at the Wellcome Trust-Cancer Research UK Gurdon Institute and the Medical Research Council Cancer Unit at the University of Cambridge have identified and characterised a new form of direct modification – methylation of the base A – in several species, including frogs, mice and humans.

Methylation of A appears to be far less common than C methylation, occurring on only around 1,700 As in the genome, but these sites are spread across the entire genome. However, the modification does not appear to occur on sections of our genes known as exons, which provide the code for proteins.
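
For readers who want a feel for these numbers, the back-of-the-envelope arithmetic can be sketched in a few lines of Python. The genome size and GC content below are rough, assumed figures, not values from the study:

```python
# Rough comparison of C methylation vs the newly reported A methylation.
# GENOME_BP and GC_CONTENT are approximate assumptions; the methylation
# counts are the figures quoted in this article.

GENOME_BP = 3.2e9        # approximate human genome size (base pairs)
GC_CONTENT = 0.41        # approximate fraction of bases that are G or C
METHYLATED_C = 75e6      # ~75 million methylated Cs (quoted above)
METHYLATED_A = 1700      # ~1,700 methylated As (quoted above)

total_c = GENOME_BP * GC_CONTENT / 2   # roughly half the G+C bases are C
print(f"Estimated Cs in the genome:     {total_c:.2e}")
print(f"Fraction of Cs methylated:      {METHYLATED_C / total_c:.1%}")
print(f"Methylated As per methylated C: {METHYLATED_A / METHYLATED_C:.1e}")
```

On these assumptions the C-methylation fraction comes out at roughly 11% – consistent with the ‘one in ten’ quoted above – while methylated As are rarer by a factor of around forty thousand.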

“These newly-discovered modifiers only seem to appear in low abundance across the genome, but that does not necessarily mean they are unimportant,” says Dr Magdalena Koziol from the Gurdon Institute. “At the moment, we don’t know exactly what they actually do, but it could be that even in small numbers they have a big impact on our DNA, gene regulation and ultimately human health.”

More than two years ago, Dr Koziol made the discovery while studying modifications of RNA. There are 66 known RNA modifications in the cells of complex organisms. Using an antibody that identifies a specific RNA modification, Dr Koziol looked to see whether the analogous modification was also present on DNA, and discovered that this was indeed the case. Researchers at the MRC Cancer Unit then confirmed that the modification was on the DNA itself, rather than on any RNA contaminating the sample.

“It’s possible that we struck lucky with this modifier,” says Dr Koziol, “but we believe it is more likely that there are many more modifications that directly regulate our DNA. This could open up the field of epigenetics.”

The research was funded by the Biotechnology and Biological Sciences Research Council, Human Frontier Science Program, Isaac Newton Trust, Wellcome Trust, Cancer Research UK and the Medical Research Council.

Reference
Koziol, MJ et al. Identification of methylated deoxyadenosines in vertebrates reveals diversity in DNA modifications. Nature Structural and Molecular Biology; 21 Dec 2015

The world of epigenetics – where molecular ‘switches’ attached to DNA turn genes on and off – has just got bigger with the discovery by a team of scientists from the University of Cambridge of a new type of epigenetic modification.



Christmas Letters from a Second World War prison camp


Christmas letters written by the Cambridge academic John Crook while he was a 22-year-old POW during the Second World War have been placed online.

From his prison camp in central Europe, Crook, who would go on to become a Fellow and Professor of Ancient History at St John’s College, wrote unyieldingly positive Christmas letters to his family, demonstrating a remarkably steadfast character in spite of the harsh conditions in which he found himself.

The letters are now held in the Special Collections of the College’s Library, where they are available for research, and selected items are now also being made available to a global audience through the College’s website.

Crook, the only child of parents of modest means, had begun an undergraduate degree in Classics after being awarded a scholarship to St John’s in 1939, but his studies were interrupted by the war. In 1942, he enlisted as a private with the 9th Royal Fusiliers. Captured at Salerno in September 1943, during the Allied landings in Italy, he was sent to Stalag VIII-B at Lamsdorf in Silesia, where he remained for the next two years.

The letters present a vivid picture of life in the camp. They describe plays and concerts organised by the men (Crook himself was a keen clarinettist) and offer reassurance to Crook’s parents, urging them not to worry about him.

On 20 December he explained that he had been: “So intensely busy about rehearsals and writing parts, teaching Greek, cooking gelatine-and-chips, planning a chamber music concert and acting as usual as a general confidant and receiver of everyone’s troubles.” With a supply of Red Cross parcels, plenty of fuel and ample entertainment, Crook optimistically predicted that they “shall do all right” over the festive season.

Crook kept himself busy during the Christmas period, allowing himself “no time to pine away”. In his letter dated 12 December, 1943, he records performances of “carols, ‘Messiah’, a band concert, cabaret, pantomime [and] decorating our barrack with paper chains”.

In truth, life in the prison camp was extremely difficult - something which Crook hinted at in the letters in touchingly positive terms. His mention of “very Xmassy weather - snow and ice” refers to the perishing cold temperatures and challenging conditions experienced by the men. Life as a prisoner of war meant learning to live without certain basic comforts, and while Crook instructed his parents not to send any clothes, he did request cigarettes, explaining: “They are currency here, and one can obtain for them anything from a banjo to a tin of porridge.” 

Crook’s letter to his parents on Christmas Day, 1944, is particularly moving and reveals the determination of the men in the camp to remain brave despite missing friends and family. He wrote that, notwithstanding the good cheer and cold, crisp weather, all that he and the other men could think of was their loved ones at home. Dances and concerts had taken place, with everyone “in their best khaki slacks”, but Crook longed to see the faces of his parents again.

Eleanor Swire, a Graduate Library Trainee at St John’s, researched the letters for an article on the College website, having come across them in a folder that Crook had poignantly marked with the words “Lost Time”.

“It must have been very difficult to be separated from his loved ones and to have to endure the harsh conditions of a prison camp,” she said. “When I sat down to read his letters in a quiet corner of the library, I was moved to tears by how brave he was to stay so upbeat and to put a positive spin on his circumstances in order to comfort his family.”

As the Soviet army advanced in the final stages of the war in 1945, Crook, along with 80,000 other PoWs, was forced by his German captors to march west in extreme winter weather conditions. He survived the “death march”, but many of his comrades died of hunger, exhaustion and the bitter cold before they could be liberated.

He spent the rest of his wartime service as a sergeant in the Royal Army Educational Corps before returning to St John’s to complete his degree in 1947. He later became a Fellow of the College, where he remained for more than 50 years, at the top of his chosen profession as Professor of Ancient History at Cambridge. Crook became a world expert on Roman law and legal practices and taught Greek and Latin to Classics scholars.

The letters form part of a collection of personal items including papers, letters and photographs that were left to St John’s after his death aged 85 in 2007. Since 2010, the College has offered a scholarship in Crook’s name and memory, open to gifted students from similar backgrounds, which reflects the spirit of his achievements. 

Swire added: “It was an honour to read the private letters from the youth of this remarkable man and gain an insight into the kindness and humility he showed throughout his life, to which many members of the College can testify.”

The letters and other items from the collection can be viewed on the St John’s College website.

Inset images: The military band that was formed at the camp; Crook is pictured in the front row, fifth from right / John Crook. All images reproduced by permission of St John's College, Cambridge.

Moving letters sent by the academic John Crook while he was a prisoner at the notorious Stalag VIII-B camp in World War II reveal his indomitable spirit and brave resolve to remain positive for the sake of loved ones back home.



Beware the ‘awestruck effect’


While charismatic leaders may be magnetic, they can cause their followers to suppress emotions, which can harm companies through increased strain, lower job satisfaction and reduced information exchange among employees, according to new research from the University of Cambridge.

The study, published in The Leadership Quarterly, found that while charismatic leaders may put their followers in awe, reinforcing the leader’s standing in the group, ‘awestruck’ followers are unlikely to benefit the group in the long-term.

The study also finds that leaders who show individual consideration tend to encourage followers’ emotional expression. While this may circumvent the negative implications of emotion suppression, at “rampant” levels such expressiveness can be detrimental because it violates workplace norms and can cause conflict and harm employee coordination.

The study is based on responses to various leadership scenarios by several hundred research participants at universities and companies in Germany and Switzerland.

Although previous studies had looked at how charismatic leaders influence followers’ emotional experience, the new study focuses instead on how followers regulate their emotional expressiveness in response to charismatic leaders – and does so by examining separately the effect of both a leader’s charisma and individualised consideration on followers.

“Emotion suppression is associated with a wide range of negative outcomes,” said Dr Jochen Menges of Cambridge Judge Business School, one of the study’s authors. “The problem is that for emotions to be suppressed, our brain needs to allocate resources to self-regulation processes that allow us to appear calm and collected on the outside when on the inside we are emotionally stimulated.”

While our brain is busy keeping emotions in check, it cannot allocate resources to other mental tasks – such as memorising or scrutinising messages or coming up with new ideas. “So while we are awestruck – overwhelmed with the emotions that charismatic leaders stir and yet too intimidated to express these emotions – we are impaired in our mental abilities,” said Menges. “That makes us vulnerable to the influence of charismatic leaders, and likely impairs our own effectiveness in dealing with work challenges.”

Such inhibition of expressiveness can tie up mental resources and impair the cognitive processing capacity of followers – which may make them less able to evaluate the actual messages of charismatic leaders, and therefore more likely to endorse such leaders with little scrutiny.

If there is such an impairment of cognitive functioning, then charismatic leadership may carry costs for followers that have so far been overlooked. Charismatic leadership may have a dark side for followers irrespective of whether leaders’ goals are moral or immoral.

“Charisma has effects that can be harmful, but these effects can be counterbalanced by other leadership behaviours, such as individualised consideration and support as well as mentoring and coaching,” said Menges.

The two styles of leadership that the researchers looked at are quite different, but they are not mutually exclusive. It is the combination of both styles that serves a leader best: bringing people together for a common mission with charismatic messages from the podium, but then also soliciting their advice and input after stepping down from it.

“While charisma can help leaders establish power and exert influence, it may be intimidating to those who look up to them for guidance and inspiration,” said Menges. “To leverage the full potential of their followers, leaders need to balance charismatic appeal with the consideration of each follower’s individual needs. And for those who find themselves awestruck by the charisma of their leader, remember that even the most charismatic person is only human.”

Reference:
Menges, J et al. “The awestruck effect: Followers suppress emotion expression in response to charismatic but not individually considerate leadership.” The Leadership Quarterly (2015). DOI: 10.1016/j.leaqua.2015.06.002

Charismatic business leaders can cause their followers to suppress emotions, which can harm companies over the long term, according to new research. 



New origami-like material may help prevent brain injuries in sport


Researchers from Cambridge and Cardiff Universities are developing an origami-like material that could help prevent brain injuries in sport, as part of a programme sponsored in part by American football’s National Football League (NFL).

A number of universities and commercial companies are taking part in the NFL’s Head Health Challenge, which, as one of its goals, aims to develop the next generation of advanced materials for use in helmets and other types of body protection, for sport, military and other applications.

The Cambridge and Cardiff team are working in collaboration with helmet designer and manufacturer Charles Owen Inc, with funding support from the NFL, GE Healthcare, Under Armour and the National Institute of Standards and Technology, to develop and test their material over the next 12 months.

The Head Health Challenge competition is a $20 million collaborative project to develop new and innovative technologies in order to improve early-stage detection of mild traumatic brain injuries and to improve brain protection. The five-year collaboration is aiming to improve the safety of athletes, members of the military and society overall.

“The key challenge for us is to come up with a material that can be optimised for a range of different types of impacts,” said Dr Graham McShane of Cambridge’s Department of Engineering, who is part of the team behind the material. “A direct impact is different than an oblique impact, so the ideal material will behave and deform accordingly, depending on how it’s been hit – what we are looking at is the relationship between the material, geometry and the force of impact.”

In high-impact sports such as American football, players can hit each other with the equivalent of half a ton of force, and in an average game, there can be dozens of these high-impact blocks or tackles. More than 100 concussions are reported each year in the NFL, and over the course of a career, multiple concussions can do serious long-term damage.

The Head Health Challenge has created three separate challenges as part of its programme of funding: Challenge I – Methods for Diagnosis and Prognosis of Mild Traumatic Brain Injuries; Challenge II – Innovative Approaches for Preventing and Identifying Brain Injuries; and Challenge III – Advanced Materials for Impact Mitigation. The Cambridge–Cardiff project, whose material is described below, falls under Challenge III. The various Challenge III projects will have their efforts judged in a year’s time by a review panel, with the most promising technology receiving another $500,000 to develop the material further.

The multi-layered, elastic material developed by McShane and his colleagues at Cambridge and Cardiff, called C3, has been designed and tested using a mixture of theoretical and experimental techniques, so that it can be tailored for specific impact scenarios.

C3 has its origins in cellular materials conceived in the Department of Engineering for defence applications, and is based on folded, origami-like structures. It is more versatile than the polymer foams currently used in protective helmets, which are highly limited in terms of how they behave under different conditions.

Structures made from C3 can be designed in such a way that impact energy can be dissipated relatively easily, making it an ideal material to use in protective clothing and accessories.

Dr Peter Theobald, a Senior Lecturer at Cardiff University who is leading on the project, said: “Head injury prevention strategies have remained relatively stagnant versus the evolution of other technologies. Our trans-Atlantic collaboration with Charles Owen Inc. has enabled us to pool our highly relevant skills and expertise in injury prevention, mechanics, manufacturing and commercialisation.”

“This approach has already enabled us to develop C3 which, in the words of our evaluators, presents a potentially ‘game-changing’ material with great promise to better absorb the vertical and horizontal components of an oblique impact. This highly prestigious award provides us with a platform to continue developing C3 towards our ultimate goal of achieving a material that provides a step-change in head health and protection, whilst achieving metrics that ensure commercial viability.”

Inset image: Sample of C3. Credit: Cardiff University.

Adapted from Cardiff University press release. 

Researchers are developing the next generation of advanced materials for use in sport and military applications, with the goal of preventing brain injuries. 



Newton, Darwin, Shakespeare – and a jar of ectoplasm: Cambridge University Library at 600


Older than the British Library and the Vatican Library, Cambridge University Library was first mentioned by name in two wills dated March 1416, when its most valuable contents were stored in a wooden chest. The library now holds nine million books, journals, maps and magazines – as well as some of the world's most iconic scientific, literary and cultural treasures.

Its priceless collections include Newton’s own annotated copy of Principia Mathematica, Darwin’s papers on evolution, 3,000-year-old Chinese oracle bones, and the earliest reliable text for 20 of Shakespeare’s plays.

But it is also home to a bizarre assembly of non-book curiosities, collected over centuries, including a jar of ectoplasm, a trumpet for hearing spirits and a statue of the Virgin Mary miraculously saved from an earthquake on Martinique.

Since 1710, Cambridge University Library has also been entitled to one copy of each and every publication in the UK and Ireland under Legal Deposit – meaning the greatest works of more than three millennia of recorded thought sit alongside copies of Woman’s Own and the Beano on more than 100 miles of shelves. With two million of its volumes on open display, readers have the largest open-access collection in Europe immediately available to them.

To celebrate the Library’s 600th birthday, a spectacular free exhibition, Lines of Thought, will open on 11 March 2016. Featuring some of Cambridge’s most iconic and best-known treasures, it investigates through six distinct themes how both Cambridge and its collections have changed the world and will continue to do so in the digital era.

As well as the iconic Newton, Darwin and Shakespeare artefacts mentioned above, items going on display include:

  • Edmund Halley’s handwritten notebook/sketches of Halley’s Comet (1682)
  • Stephen Hawking’s draft typescript of A Brief History of Time
  • Darwin’s first pencil sketch of Species Theory and his Primate Tree
  • A second-century AD fragment of Homer’s Odyssey
  • The Nash Papyrus – a 2,000-year-old copy of the Ten Commandments
  • Codex Bezae – a fifth-century New Testament manuscript, crucial to our understanding of the Bible
  • A hand-coloured copy of Vesalius’ 1543 De fabrica – the most influential work in western medicine
  • A written record of the earliest known human dissection in England (1564)
  • A Babylonian tablet dated 2039 BCE (the oldest object in the library)
  • The Gutenberg Bible – the earliest substantive printed book in Western Europe (1454)
  • The first catalogue listing the contents of the Library in 1424, barely a decade after it was first identified in the wills of William Loryng and William Hunden

 

As well as Lines of Thought, 2016 will see dozens of celebratory events, including the library’s 17-storey tower being lit up as part of the e-Luminate Festival in February. Cambridge University Library is also producing a free iPad app giving readers the chance to interact with digitised copies of six of the most revolutionary texts held in its collections. The app analyses the context of the six era-defining works, including Darwin's family copy of On the Origin of Species, Newton's annotated copy of Principia Mathematica, and William Tyndale's translation of the New Testament into English, an undertaking which led to his execution for heresy.

From October 2016, an exhibition featuring some of the University Library’s most unusual curiosities and oddities will replace Lines of Thought as the second major exhibition of the sexcentenary.

Over the past 600 years, Cambridge has accumulated an extraordinary collection of objects, often arriving at the library as part of bequests and donations. Some of the library’s more unusual artefacts include children’s games, ration books, passports, prisoner art, Soviet cigarettes and cigars and an East African birthing stool.

University Librarian Anne Jarvis said: “For six centuries, the collections of Cambridge University Library have challenged and changed the world around us. Across science, literature and the arts, the millions of books, manuscripts and digital archives we hold have altered the very fabric of our understanding. Thousands of lines of thoughts run through them, back into the past, and forward into tomorrow. Our 600th anniversary is a chance to celebrate one of the world’s oldest and greatest research libraries, and to look forward to its future.

“Only in Cambridge can you find Newton’s greatest works sitting alongside Darwin’s most important papers on evolution, or Sassoon’s wartime poetry books taking their place next to the Gutenberg Bible and the archive of Margaret Drabble. Our aim now, through our Digital Library, is to share as many of these great collections as widely as possible so that anyone, anywhere in the world, can stand on the shoulders of these giants.”

In 2016, Cambridge University Library will celebrate 600 years as one of the world's greatest libraries with a spectacular exhibition of priceless treasures – and a second show throwing light on its more weird and wonderful collections.



Landslides after Nepal earthquake 'could have been much worse'


The 2015 Nepal earthquake, which led to the deaths of around 9,000 people and caused widespread damage, triggered fewer landslides than comparable seismic events elsewhere and could have been much worse, according to research released by geologists and glacier experts.

They hope the study, which includes extensive supplementary material detailing specific hazards, will contribute to a greater understanding of landslides and earthquake-related hazards.

The group of 64 scientists from nine countries came together voluntarily after the magnitude-7.8 earthquake struck the Gorkha District of Nepal on 25 April. They were concerned that landslides triggered by the earthquake could lead to destructive glacier lake outburst floods.

Some mountain villages were heavily damaged by the earthquake, including Langtang village – which many scientists have used as a base for research on the region’s glaciers and which was buried by an enormous rock and ice avalanche. Blocked roads and landslide-dammed rivers prevented access to remote locations.

The scientists’ research is published in Science. They say: “The total number of landslides was far fewer than generated by comparable earthquakes elsewhere…”, which they attribute to a lack of surface ruptures, the location of seismic action and the geological make-up of the region.

A major focus of their concerns was the possibility of outburst flooding caused by the impact of the earthquake on Nepal’s many glacier lakes. These have formed as a result of sustained glacial retreat, which has progressed rapidly in response to climate change.

However, the scientists found no evidence of glacier lake outburst floods which could have wreaked even greater destruction on the region.

All the experts are volunteers who came together rapidly in the week after the earthquake. Most have conducted research in Nepal previously and felt a need to contribute to the disaster response from abroad. As emergency relief and recovery operations were taking place, they were given access to a large pool of satellite data from NASA and the US Geological Survey as part of one of the largest ever NASA-led data campaigns in a disaster zone.

They analysed the data to locate potential geohazards from landslides and avalanches, particularly those that affected local populations.

The scientists mapped 4,312 co-seismic and post-seismic landslides. The mapping shows a strong link between landslide distribution and the steepness of mountain slopes combined with the intensity of ground shaking. While slight earthquake shaking affected ice, snow and glacial debris hanging on steep slopes, moderate to strong shaking affected loose sediment deposited in low-sloping river valleys. The scientists found that in Langtang Valley it was a combination of earthquake-induced landslides and air blasts, rather than direct shaking, that brought some of the most concentrated destruction and loss of life outside the Kathmandu Valley.

The scientists assessed the stability of 491 glacier lakes to check for potential earthquake-induced outburst floods. They found only nine had been affected by the earthquake, which they attributed to lower shaking magnitudes in the valley-bottoms. In one or two cases they were able to alert policymakers to potential dangers so they could assess the hazards on the ground and take preventive action if necessary.

Evan Miles [2012] is the only co-author from the UK. He is a Gates Cambridge Scholar doing a PhD at the Scott Polar Research Institute at the University of Cambridge. He had been due to travel to Nepal to conduct field work the day after the earthquake occurred.

Evan, who has spent considerable amounts of time in Langtang Village in 2013 and 2014 for his research, helped to map some of the 4,300 landslides related to the earthquake. He says: “Knowing where the landslides are is very useful for policymakers to be able to assess ongoing hazards and plan rebuilding. Although there have been other disasters where experts have come together voluntarily to assess the impact, this is probably the fastest, most coordinated response to date. It will serve us well for similar future events of this magnitude.”

Picture credit: Langtang Village in October 2013

 

Scientists join together to map and assess thousands of co-seismic and post-seismic landslides in aftermath of earthquake.



Opinion: Paying people to stay away is not always the best way to protect watersheds


In the American West, unprecedented droughts have caused extreme water shortages. The current drought in California and across the West is entering its fourth year, with precipitation and water storage reaching record low levels.

Such drought and water scarcity are only likely to increase with climate change, and the chances of a “megadrought” – one that lasts 35 years or longer – affecting the Southwest and central Great Plains by 2100 are above 80% if greenhouse gas emissions are not reduced.

Droughts currently rank second in the US in terms of weather-related damages, with annual losses just shy of US$9 billion. Such economic impacts are likely to worsen as the century progresses.

As the frequency and severity of droughts increases, the successful protection of watersheds to capture, store and deliver water downstream in catchments will become increasingly important, even as the effective protection of watersheds becomes more challenging.

Since the early 2000s, the prevailing view in watershed protection has been that paying upstream resource users to avoid harmful activities, or rewarding positive action, is the most effective and direct method. This is the approach taken in the Catskills watershed in New York, where environmentally sound economic development is incentivized.

There are, however, many different ways communities can invest in watersheds to harness the benefits they provide downstream communities.

In a recently published paper in the journal Ecosystem Services, we highlight an alternative option with the example of Salt Lake City’s successful management of the Wasatch watershed. Instead of offering financial incentives for the “ecosystem services” provided by this watershed, planners use regulations to secure the continued delivery of water, while allowing for recreational and public use.

The successful management of the Wasatch demonstrates that an overreliance on markets to deliver watershed protection might be misguided.

Perhaps part of the reason for this overreliance on market-based tools is a paucity of alternative success stories of watershed management. We note that the Wasatch story has been largely absent from much of the literature that discusses the potential of investing in watersheds for the important services that they provide. This absence results in an incomplete understanding of options to secure watershed ecosystem services, and limits the consideration of alternative watershed conservation approaches.

The Wasatch management strategy

The Wasatch is a 185-square-mile watershed that is an important drinking water source for over half a million people in Salt Lake City. This water comes from the annual snowmelt from the 11,000-foot-high peaks in the Wasatch range, which act as Salt Lake City’s virtual reservoir.

Salt Lake City’s management of the Wasatch watershed is somewhat unusual in contemporary examples of watershed protection in that it is focused on nonexclusionary regulation – that is, allowing permitted uses – and zoning to protect the urban water supply. For instance, the cities of Portland, Oregon and Santa Fe, New Mexico have worked with the US Forest Service to prohibit public access to source water watersheds within forests to protect drinking water supplies. In contrast, the governance of the Wasatch allows for public access and both commercial and noncommercial activities to occur in the watershed, such as skiing and mountain biking. It also imposes restrictions on allowable uses, such as restricting dogs in the watershed.

This permitted use, socially negotiated, helps mitigate the potential trade-offs associated with protection activities.

The suite of policies that protect the Wasatch do not include a “payments for ecosystem services” or other market-based incentives component, nor has there been any discussion of compensating potential resource users in the watershed for foregone economic opportunities. By not having a market-based incentives component, the Wasatch example provides an alternative regulatory-based solution for the protection of natural capital, which contrasts with the now prevalent market-based payments approach.

Importantly, the Wasatch example reinforces the rights of citizens to derive positive benefits from nature, without these being mediated through the mechanism of markets. In most payment-based systems, potential harm to a watershed is avoided by organizing beneficiaries so that they can compensate upstream resource users for foregone activities. In contrast, reliance on regulation and permitted activities supports the ‘polluter pays principle,’ which might be more appropriate in many circumstances.

Why we need alternative strategies

With the American West facing ever-increasing droughts, policymakers will be faced with the increasingly difficult task of protecting and preserving water supplies. Thus, awareness of alternative, successful strategies of watershed protection and management is crucially important.

The Wasatch offers an important example of how natural capital can be instrumentally and economically valued, but conserved via regulatory approaches and land use management and zoning, rather than a reliance on the creation of water markets, which are often misplaced and not suitable. Bringing stakeholders together to negotiate allowable uses that preserve critical watershed functions is an additional option within the policymaker’s toolkit, and one that is at risk of being forgotten in the rush to payment-based systems.

Libby Blanchard, Gates Cambridge Scholar and PhD Candidate, University of Cambridge and Bhaskar Vira, Reader in Political Economy at the Department of Geography and Fellow of Fitzwilliam College; Director, University of Cambridge Conservation Research Institute, University of Cambridge

This article was originally published on The Conversation. Read the original article.

Libby Blanchard and Bhaskar Vira from Cambridge's Department of Geography argue that we need to consider alternative approaches in order to protect watersheds.  



Second contagious form of cancer found in Tasmanian devils


The discovery of a second transmissible cancer in Tasmanian devils, published in the journal Proceedings of the National Academy of Sciences, calls into question our current understanding of the processes that drive cancers to become transmissible.

Tasmanian devils are iconic marsupial carnivores that are only found in the wild on the Australian island state of Tasmania. The size of a small dog, the animals have a reputation for ferocity as they frequently bite each other during mating and feeding interactions.

In 1996, researchers observed Tasmanian devils in the north-east of the island with tumours affecting the face and mouth; soon it was discovered that these tumours were contagious between devils, spread by biting. The cancer spreads rapidly throughout the animal’s body and the disease usually causes the death of affected animals within months of the appearance of symptoms. The cancer has since spread through most of Tasmania and has triggered widespread devil population declines. The species was listed as endangered by the International Union for Conservation of Nature in 2008.

To date, only two other forms of transmissible cancer have been observed in nature: in dogs and in soft-shell clams. Cancer normally occurs when cells in the body start to proliferate uncontrollably; occasionally, cancers can spread and invade the body in a process known as 'metastasis'; however, cancers do not normally survive beyond the body of the host from whose cells they originally derived. Transmissible cancers, however, arise when cancer cells gain the ability to spread beyond the body of the host that first spawned them, by transmission of cancer cells to new hosts.

Now, a team led by researchers from the University of Tasmania, Australia, and the University of Cambridge, UK, has identified a second, genetically distinct transmissible cancer in Tasmanian devils.

“The second cancer causes tumours on the face that are outwardly indistinguishable from the previously-discovered cancer,” said first author Dr Ruth Pye from the Menzies Institute for Medical Research at the University of Tasmania. “So far it has been detected in eight devils in the south-east of Tasmania.”

“Until now, we’ve always thought that transmissible cancers arise extremely rarely in nature,” says Dr Elizabeth Murchison from the Department of Veterinary Medicine at the University of Cambridge, a senior author on the study, “but this new discovery makes us question this belief.

"Previously, we thought that Tasmanian devils were extremely unlucky to have fallen victim to a single runaway cancer that emerged from one individual devil and spread through the devil population by biting. However, now that we have discovered that this has happened a second time, it makes us wonder if Tasmanian devils might be particularly vulnerable to developing this type of disease, or that transmissible cancers may not be as rare in nature as we previously thought.”

Professor Gregory Woods, joint senior author from the Menzies Institute for Medical Research at the University of Tasmania, adds: “It’s possible that in the Tasmanian wilderness there are more transmissible cancers in Tasmanian devils that have not yet been discovered. The potential for new transmissible cancers to emerge in this species has important implications for Tasmanian devil conservation programmes.”

The discovery of the second transmissible cancer began in 2014, when a devil with facial tumours was found in south-east Tasmania. Although this animal’s tumours were outwardly very similar to those caused by the first-described Tasmanian devil transmissible cancer, the scientists found that this devil’s cancer carried different chromosomal rearrangements and was genetically distinct. Since then, eight additional animals have been found with the new cancer in the same area of south-east Tasmania.

The research was primarily supported by the Wellcome Trust and the Australian Research Council, with additional support provided by Dr Eric Guiler Tasmanian Devil Research Grants and by the Save the Tasmanian Devil Program.

For more information about the research into Tasmanian devils, see T is for Tasmanian Devil.

Reference
Pye, RJ et al. A second transmissible cancer in Tasmanian devils. PNAS; 28 Dec 2015

Transmissible cancers – cancers which can spread between individuals by the transfer of living cancer cells – are believed to arise extremely rarely in nature. One of the few known transmissible cancers causes facial tumours in Tasmanian devils, and is threatening this species with extinction. Today, scientists report the discovery of a second transmissible cancer in Tasmanian devils.



Opinion: How frugal innovation can kickstart the global economy in 2016


In late 2015 a Cambridge-based nonprofit released the Raspberry Pi Zero, a tiny £4 computer that was a whole £26 cheaper than the original 2012 model. The Zero is not only remarkable for its own sake – a computer so cheap it comes free with a £5.99 magazine – it is also symptomatic of a larger “frugal innovation” revolution that is taking the world by storm.

With the global economy struggling, this is the kind of innovation that could kickstart it in 2016. Empowered by cheap computers such as the Raspberry Pi and other ubiquitous tools such as smartphones, cloud computing, 3D printers, crowdfunding, and social media, small teams with limited resources are now able to innovate in ways that only large companies and governments could in the past. This frugal innovation – the ability to create faster, better and cheaper solutions using minimal resources – is poised to drive global growth in 2016 and beyond.

More than four billion people around the world, most of them in developing countries, live outside the formal economy and face significant unmet needs when it comes to health, education, energy, food, and financial services. For years this large population was either the target of aid or was left to the mercy of governments.

More recently, large firms and smaller social enterprises have begun to see these four billion as an enormous opportunity to be reached through market-based solutions. These solutions must, however, be frugal – highly affordable and flexible in nature. They typically include previously excluded groups both as consumers and producers. Bringing the next four billion into the formal economy through frugal innovation has already begun to unleash growth and create unprecedented wealth in Asia, Africa and Latin America. But there’s much, much more to come.

Good news

Take the case of telecommunications. Over the last decade or so, highly affordable handsets and cheap calling rates have made mobile phones as commonplace as toothbrushes. In addition to bringing massive productivity gains to farmers and small businesses – not to mention creating new sources of employment – mobile phones also enable companies to roll out financial, healthcare and educational services affordably and at scale.

Take the case of Safaricom, Vodafone’s subsidiary in Kenya. In 2007 the company introduced M-Pesa, a service that enables anyone with a basic, SMS-enabled mobile phone to send and receive money that can be cashed in a corner shop acting as an M-Pesa agent.

This person-to-person transfer of small amounts of money between people who are often outside the banking system has increased financial inclusion in Kenya in a highly affordable and rapid way. So much so that more than 20m Kenyans now use M-Pesa and the volume of transactions on the system is more than US$25 billion, more than half the country’s GDP. M-Pesa (and services like it) have now spread to several other emerging markets in Africa and Asia.
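
As a quick sanity check on that comparison, the arithmetic can be sketched in Python – a rough illustration only, since the GDP figure below is an assumed round number rather than one from the article:

```python
# Rough check: M-Pesa transaction volume against Kenya's GDP.
# The GDP figure is an assumption (roughly US$50bn in the mid-2010s);
# the transaction volume is the figure quoted in this article.

mpesa_volume_usd = 25e9   # annual M-Pesa transaction volume
kenya_gdp_usd = 50e9      # assumed approximate GDP

print(f"M-Pesa volume as a share of GDP: {mpesa_volume_usd / kenya_gdp_usd:.0%}")
```

With those round numbers the share comes out at 50%, in line with the ‘more than half’ claim for any slightly lower GDP estimate.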

Similar frugal innovations in medical devices, transport, solar lighting and heating, clean cookstoves, cheap pharmaceuticals, sanitation, consumer electronics and so on, have driven growth in Asia and Africa over the past decade and will continue to do so in the decades to come.

Catching on

Meanwhile the developed world is catching up. Declining real incomes and government spending, accompanied by greater concern for the environment, are making Western consumers both value and values conscious.

The rise of two massive movements in recent years, the sharing economy and the maker movement, shows the potential of frugal innovation in the West. The sharing economy, exemplified by Airbnb, BlaBlaCar and Kickstarter, has empowered consumers to trade spare assets with each other and thus generate new sources of income. The maker movement, meanwhile, features proactive consumers who tinker in spaces such as FabLabs, TechShops and MakeSpaces, designing solutions to problems they encounter.

Square, a small, white, square device that fits into the audio jack of a smartphone, using its computing power and connectivity to take credit card payments, is an example of a product that was developed in a TechShop. Launched in 2010, Square is on track to make US$1 billion in revenue in 2015.

Frugal innovation not only has the power to drive more inclusive growth by tackling poverty and inequality around the world, it is also increasingly the key to growth that will not simultaneously wreck the planet. The big issue at the Paris climate summit was the increasing wedge between the developed and the developing world. On the one hand, the rich countries cannot stop the poor ones from attempting to achieve the West’s levels of prosperity. On the other, however, poor countries cannot grow in the way the West did without wrecking the planet.

The only way to square this circle is to ensure that the growth is sustainable. The need for frugal innovation is therefore all the more vital in areas such as energy generation and use, manufacturing systems that are more local, and a move to a circular economy where companies (and consumers) reduce, reuse and recycle materials in a potentially endless loop.

Never before have so many been able to do so much for so little. Aiding and stimulating this frugal innovation revolution holds the key to driving global growth by employing more people to solve some of the big problems of poverty, inequality and climate change that stalk the planet.

Jaideep Prabhu, Director, Centre for India & Global Business at Cambridge Judge Business School, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Inset images: M-Pesa stand (Fiona Bradley); Square readers (Dom Sagolla).

Jaideep Prabhu (Cambridge Judge Business School) discusses the frugal innovation revolution that is taking the world by storm.


Boosting farm yields to restore habitats could create greenhouse gas ‘sink’


New research into the potential for sparing land from food production to balance greenhouse gas emissions has shown that emissions from the UK farming industry could be largely offset by 2050. This could be achieved if the UK increased agricultural yields and coupled this with expanding the areas of natural forests and wetlands to match its European neighbours.

The new study suggests that by upping forest cover from 12% to 30% of UK land over the next 35 years – close to that of France and Germany, but still less than the European average – and restoring 700,000 hectares of wet peatland, these habitats would act as a carbon ‘sink’: sucking in and storing carbon.
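
The scale of that land-use change can be sketched with simple arithmetic. The UK land area used below is an assumed approximate figure, not one from the study:

```python
# Illustrative arithmetic for the proposed habitat expansion.
# UK_LAND_HA is an assumption (~24.4 million hectares); the cover
# percentages and peatland area are the figures quoted in this article.

UK_LAND_HA = 24.4e6      # assumed approximate UK land area (hectares)
FOREST_NOW = 0.12        # current forest cover
FOREST_TARGET = 0.30     # proposed cover by 2050
PEATLAND_HA = 700_000    # wet peatland to be restored

new_forest_ha = UK_LAND_HA * (FOREST_TARGET - FOREST_NOW)
print(f"New forest required:    {new_forest_ha / 1e6:.1f} million ha")
print(f"Total habitat restored: {(new_forest_ha + PEATLAND_HA) / 1e6:.1f} million ha")
```

On those assumptions, the proposal amounts to roughly 4.4 million hectares of new forest, and about 5.1 million hectares of restored habitat in total.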

This could be enough to meet government targets of 80% greenhouse gas reduction by 2050 for the farming industry. Agriculture currently produces around 10% of all the UK’s damaging greenhouse gas emissions. 

The new woodlands and wetlands would be more than just a carbon sink, say researchers. They would help support declining UK wildlife – including many species of conservation concern – provide more areas for nature recreation, and help to reduce flooding.

However, to make space for habitat restoration, and to meet rising levels of food demand, land sparing would depend on increases in farm yields, so that food needs can be met from less farmland.

The new study, published today in the journal Nature Climate Change, is the first to show that land sparing has the technical potential to significantly reduce greenhouse gas emissions at a national scale.  

“Land is a source of greenhouse gases if it is used to farm fertiliser-hungry crops or methane-producing cattle, or it can be a sink for greenhouse gases – through sequestration. If we increase woodland and wetland, those lands will be storing carbon in trees, photosynthesising it in reeds, and shunting it down into soils,” said senior author Prof Andrew Balmford, from Cambridge University’s Department of Zoology.     

“We estimate that by actively increasing farm yields, the UK can reduce the amount of land that is a source of greenhouse gases, increase the ‘sink’, and sequester enough carbon to hit national emission reduction targets for the agriculture industry by 2050,” he said.

The study originated from a workshop run as part of the new Cambridge Conservation Initiative, which convened leading experts and asked them to “look into their crystal balls”, says Balmford. “We wanted to know what food yield increases they reckoned were achievable in the 2050 timescale across crop and livestock sectors,” he said.

The workshop included researchers from the Universities of California, Bangor, Aberdeen and East Anglia, as well as the Royal Society for the Protection of Birds, the Forestry Commission, Rothamsted Research, ADAS UK Ltd and Scotland’s Rural College (SRUC).

The potential they identified included improving farm management and optimising breeding programmes to produce plants that are better at capturing soil nutrients, sunlight and water, and to produce more efficient animals that produce less methane.     

The researchers then used these and other data to produce a series of modelled scenarios that projected long-term farm yields. Scenarios ranged from yield declines through to sustained yield growth that averaged 1.3% per year until 2050.

If yields rise, the area of farmland required for food production can decline – allowing countryside to be spared. By converting spared land back to natural habitats of woodland and wetland, which would have been a large portion of the UK’s native land cover in the past, a carbon sink is created that the research suggests could come close to cancelling out agricultural emissions in just a few decades.     

Dr Toby Bruce, co-author from Rothamsted Research, said: "The current findings show the value of land sparing for reducing greenhouse gases. To allow this productivity needs to increase on the remaining land, for example, by minimising crop losses to pests, weeds and diseases or by improving crop nutrition.”

Importantly, says Balmford, the research team did not allow themselves the “get-out-of-jail-free card” of increasing food imports. Overall food consumption looks set to rise substantially – some 38% – in the UK by 2050, and the researchers locked into their future models the contribution that UK production makes to its food supply. 
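
A short illustrative calculation shows how these two trends interact. The 1.3% annual yield growth and the 38% demand rise are the figures quoted in this article; the 35-year horizon to 2050 is an assumption:

```python
# Illustrative land-sparing arithmetic: compound yield growth vs rising demand.

YIELD_GROWTH = 0.013   # average annual yield growth in the strongest scenario
YEARS = 35             # assumed horizon, roughly 2015 to 2050
DEMAND_RISE = 0.38     # projected rise in UK food consumption by 2050

yield_factor = (1 + YIELD_GROWTH) ** YEARS        # cumulative yield multiplier
land_needed = (1 + DEMAND_RISE) / yield_factor    # farmland needed, relative to today

print(f"Yield multiplier by 2050: {yield_factor:.2f}")
print(f"Farmland needed vs today: {land_needed:.0%}")
print(f"Land potentially spared:  {1 - land_needed:.0%}")
```

Under these assumptions yields rise by about 57% while demand rises 38%, so the same food supply would need only around 88% of today's farmland – leaving roughly 12% available for woodland and wetland restoration.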

“We made sure we met expected production requirements in all our figures, and then explored the consequences of different ways of achieving them,” he said.

However, it is not all or nothing, say the researchers, who conducted numerous sensitivity analyses around different ways of using spared land, and different levels of yield growth, consumer waste and meat consumption – which has a disproportionate environmental footprint.

“Reducing meat consumption appears to offer greater mitigation potential than reducing food waste, but more importantly, our results highlight the benefits of combining measures,” said Balmford.

“For example, coupling even moderate yield growth with land sparing and reductions in meat consumption has the technical potential to surpass an 80% reduction in net emissions,” he said.

Added Balmford: “We need to turn our minds to figuring out policy mechanisms that can deliver sustainable high yield farming that doesn’t come at the expense of animal welfare, soil and water quality, as well as safeguarding and restoring habitats.

“The right incentives need to be provided to landowners to spare land. Subsidies under the EU’s Common Agricultural Policy could be redirected so that landowners get paid properly for taking land out of food production and putting it into climate regulation.  

“If we are serious about saving the planet for anything more than food production then the focus has to be on increasing yields and sparing land for the climate. We need to look objectively and dispassionately at every option we have for achieving that.” 

New study using UK data is first to show that raising farm yields and allowing ‘spared’ land to be reclaimed for woodlands and wetlands could offset greenhouse gas produced by farming industry to meet national target of 80% emissions reduction by 2050.



Cambridge academics honoured over the New Year


Professor Dame Ann Dowling has been appointed to the Order of Merit by HM The Queen.

The Order of Merit is given to those who have rendered exceptionally meritorious services towards the advancement of the arts, learning, literature and science. The award is in the personal gift of The Queen, and is limited to 24 recipients.

Dowling, President of the Royal Academy of Engineering, conducts research primarily in the fields of combustion, acoustics and vibration, aimed at low-emission combustion and quiet vehicles. The Professor of Mechanical Engineering is one of the founders of the Energy Efficient Cities initiative in Cambridge and was the UK lead of the Silent Aircraft Initiative.

She was appointed CBE for services to Mechanical Engineering in 2002, and promoted DBE for services to Science in 2007.

Dowling, a Fellow of Sidney Sussex College, said: “I was surprised, delighted and very, very honoured to be appointed to the Order of Merit.”

Members of the University were also recognised in the New Year’s honours list. Professor David MacKay, the Regius Professor of Engineering since 2013, has been knighted ‘for services to scientific advice in Government and science outreach.’

MacKay, a Fellow of Darwin College, said: “I am absolutely delighted to receive this honour. I'd like to thank all those from across the political spectrum who supported my work advocating a numerate, engineering-based approach to energy policy and climate-change action, and the civil servants who taught me how to deliver scientific advice in Whitehall; I'd also like to express my gratitude to the University of Cambridge for their support for me throughout my career.”

Harvey McGrath, co-chair of the £2 billion fundraising campaign for the University and Colleges of Cambridge, was also knighted in the New Year’s honours list ‘for services to economic growth and public life’. McGrath, an Honorary Fellow of St Catharine’s College, is a philanthropist and businessman.

Professor of Neurology Alastair Compston was appointed CBE ‘for services to multiple sclerosis treatment’. Compston has been involved with research on the mechanisms and treatment of multiple sclerosis since 1976, for the last 26 years based in Cambridge.

Alongside work identifying many genetic risk variants for susceptibility to the disease, he introduced Alemtuzumab (Lemtrada) as a highly effective treatment for early relapsing-remitting multiple sclerosis. Since 2013, Alemtuzumab has been licensed throughout the world and is increasingly used as a first-line therapy in young adults with this potentially disabling condition.

Compston, a Fellow of Jesus College, said: “I am delighted to receive this honour but especially pleased that the citation recognises the work of a large research community, in Cambridge and elsewhere, that has transformed the outlook for young people with multiple sclerosis facing an otherwise uncertain future.”

Dr Emily Shuckburgh has been appointed OBE 'for services to science and public communication of science'. A Fellow of Darwin College, she is a member of the Faculty of Mathematics and holds a number of positions in the University.

Image shows (left to right): Professor Sir David MacKay, Professor Alastair Compston, and Professor Dame Ann Dowling.

Members have been recognised for their outstanding contribution to society


Melting of massive ice ‘lid’ resulted in huge release of CO2 at the end of the ice age


A new study reconstructing conditions at the end of the last ice age suggests that as the Antarctic sea ice melted, massive amounts of carbon dioxide that had been trapped in the ocean were released into the atmosphere.

The study includes the first detailed reconstruction of the Southern Ocean density of the period and identified how it changed as the Earth warmed. It suggests a massive reorganisation of ocean temperature and salinity, but finds that this was not the driver of increased concentration of carbon dioxide in the atmosphere. The study, led by researchers from the University of Cambridge, is published in the journal Proceedings of the National Academy of Sciences.

The ocean is made up of different layers of varying densities and chemical compositions. During the last ice age, it was thought that the deepest part of the ocean was made up of very salty, dense water, which was capable of trapping a lot of CO2. Scientists believed that a decrease in the density of this deep water resulted in the release of CO2 from the deep ocean to the atmosphere.

However, the new findings suggest that although a decrease in the density of the deep ocean did occur, it happened much later than the rise in atmospheric CO2, suggesting that other mechanisms must be responsible for the release of CO2 from the oceans at the end of the last ice age. 

“We set out to test the idea that a decrease in ocean density resulted in a rise in CO2 by reconstructing how it changed across time periods when the Earth was warming,” said the paper’s lead author Jenny Roberts, a PhD student in Cambridge’s Department of Earth Sciences who is also a member of the British Antarctic Survey. “However what we found was not what we were expecting to see.”

In order to determine how the oceans have changed over time and to identify what might have caused the massive release of CO2, the researchers studied the chemical composition of microscopic shelled animals that have been buried deep in ocean sediment since the end of the ice age. Like layers of snow, the shells of these tiny animals, known as foraminifera, contain clues about what the ocean was like while they were alive, allowing the researchers to reconstruct how the ocean changed as the ice age was ending.

They found that during the cold glacial periods, the deepest water was significantly denser than it is today. However, what was unexpected was the timing of the reduction in the deep ocean density, which happened some 5,000 years after the initial increase in CO2, meaning that the density decrease couldn’t be responsible for releasing CO2 to the atmosphere.
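For context on what “denser” means here, seawater density is set mainly by temperature and salinity: colder and saltier water is heavier. The trade-off is often illustrated with a simplified linear equation of state – a textbook approximation with round coefficients, sketched below in Python, and not the reconstruction method used in the study:

    # Simplified linear equation of state for seawater (textbook
    # approximation with round coefficients; NOT the study's own method).
    RHO0 = 1027.0        # reference density, kg/m^3
    ALPHA = 2.0e-4       # thermal expansion coefficient, per degC
    BETA = 8.0e-4        # haline contraction coefficient, per (g/kg)
    T0, S0 = 10.0, 35.0  # reference temperature (degC) and salinity (g/kg)

    def density(temp_c, sal):
        """Approximate seawater density in kg/m^3."""
        return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (sal - S0))

    # Invented illustrative values: a colder, saltier glacial-style deep
    # water comes out denser than a warmer, fresher modern-style deep water.
    print(f"{density(0.0, 37.0):.1f}")   # ~1030.7 kg/m^3
    print(f"{density(2.0, 34.7):.1f}")   # ~1028.4 kg/m^3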

“Before this study there were these two observations, the first was that glacial deep water was really salty and dense, and the second that it also contained a lot of CO2, and the community put two and two together and said these two observations must be linked,” said Roberts. “But it was only through doing our study, and looking at the change in both density and CO2 across the deglaciation, that we found they actually weren’t linked. This surprised us all.”

Through examination of the shells, the researchers found that changes in CO2 and density are not nearly as tightly linked as previously thought, suggesting something else must be causing CO2 to be released from the ocean.

Like a bottle of wine with a cork, sea ice can prevent CO2-rich water from releasing its CO2 to the atmosphere. The Southern Ocean is a key area of exchange of CO2 between the ocean and atmosphere. The expansion of sea ice during the last ice age acted as a ‘lid’ on the Southern Ocean, preventing CO2 from escaping. The researchers suggest that the retreat of this sea ice lid at the end of the last ice age uncorked this 'vintage' CO2, resulting in an increase in carbon dioxide in the atmosphere.  

“Although conditions at the end of the last ice age were very different to today, this study highlights the importance that dynamic features such as sea ice have on regulating the climate system, and emphasises the need for improved understanding and prediction as we head into our ever warming world,” said Roberts. 

Reference:
Roberts, J. et al. ‘Evolution of South Atlantic density and chemical stratification across the last deglaciation.’ PNAS (2016). DOI: 10.1073/pnas.1511252113


A new study of how the structure of the ocean has changed since the end of the last ice age suggests that the melting of a vast ‘lid’ of sea ice caused the release of huge amounts of carbon dioxide into the atmosphere.

"Although conditions at the end of the last ice age were very different to today, this study highlights the importance that dynamic features such as sea ice have on regulating the climate system." – Jenny Roberts

Image: Foraminifera


Let’s go wild: how ancient communities resisted new farming practices


A box of seemingly unremarkable stones sits in the corner of Dr Giulio Lucarini’s office at the McDonald Institute for Archaeological Research where it competes for space with piles of academic journals, microscopes and cartons of equipment used for excavations. These palm-sized pebbles were used as grinding tools by people living in North Africa around 7,000 years ago. Tiny specks of plant matter recently found on their surfaces shine light on a fascinating period of human development and confirm theories that the transition between nomadic and settled lifestyles was gradual.

The artefacts in Lucarini’s office come from a collection held in the store of the Museum of Archaeology and Anthropology (MAA) just a couple of minutes’ walk away. In the 1950s the well-known Cambridge archaeologist Sir Charles McBurney excavated a cave called Haua Fteah in northern Libya. He showed that its stratigraphy (layers of sediment) provides evidence of continuous human habitation from at least 80,000 years ago right up to the present day. Finds from McBurney’s excavation were deposited at MAA.

In 2007, Professor Graeme Barker, also from Cambridge, started to re-excavate Haua Fteah with support from the ERC-funded TRANS-NAP Project. From then until 2014, Barker and his team spent more than a month each year excavating the site and surveying the surrounding Jebel Akhdar region, in order to investigate the relationships between cultural and environmental change in North Africa over the past 200,000 years.

Now an analysis of stone grinders from the Neolithic layers of Haua Fteah (dating from 8,000-5,500 years ago), carried out by Lucarini as part of his Marie Skłodowska-Curie project ‘AGRINA’, in collaboration with Anita Radini (University of York) and Huw Barton (University of Leicester), yields new evidence about people living at a time seen as a turning point in human exploitation of the environment, paving the way for rapid expansion in population.

Around 11,000 years ago, during the early phase of the geological period known as the Holocene, nomadic communities in the Near East made the transition from a hunter-gatherer lifestyle to a more settled farming existence as they began to exploit domesticated crops and animals developed locally. The research Lucarini is carrying out in northern Libya and western Egypt is increasingly revealing a contrasting scenario for the North African regions.

In a paper published today, Lucarini and colleagues explain that the surfaces of the grinders show plant use-wear and contain tiny residues of wild plants dating from a time when, in all likelihood, domesticated grains would have been available to these communities. These data are consistent with other evidence from the site, notably the analysis of plant macro-remains carried out by Jacob Morales (University of the Basque Country), which confirmed the presence of wild plants alone at the site during the Neolithic. Together, this evidence suggests that domesticated varieties of grain were adopted late and sporadically, and not before classical times, by people who lived in tune with their surroundings as they moved seasonally between naturally-available resources.

Lucarini is an expert in the study of stone tools and has a particular interest in the beginning of food production economies in North Africa. Using an integrated approach of low and high-power microscopy in the George Pitt-Rivers Lab at the McDonald Institute, and in the BioArCh Lab at the University of York, he and his colleagues were able to spot plant residues, too small to be visible to the naked eye, caught in the pitted surface of several of the stones from Haua Fteah.  Some of the grinders themselves exhibit clear ‘use-wear’ with their surfaces carrying the characteristic polish of having been used for grinding over long periods.

“It was thrilling to discover that microscopic traces of the plants ground by these stones have survived for so long, especially now that we’re able to use high-power microscopes to look at the distinctive shape of the starch granules that offer us valuable clues to the identities of the plant varieties they come from,” says Lucarini.

By comparing the characteristic shape and size of the starch found in the grinders’ crevices to those in a reference collection of wild and domestic plant varieties collected in different North African and Southern European countries, Lucarini and Radini were able to determine that the residues most probably came from one of the species belonging to the Cenchrinae grasses. Various species of the genus Cenchrus are still gathered today by several African groups when other resources are scarce. Cenchrus is prickly and its seed is laborious to extract. But it is highly nutritious and, especially in times of severe food shortage, a highly valuable resource.
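Conceptually, that identification step is a nearest-match problem: the measured features of an unknown granule are compared against the reference library and the closest taxon wins. A minimal sketch follows; the taxa and feature values are invented for illustration and are not the authors’ measurements:

    # Toy nearest-match identification of a starch granule against a
    # reference collection (taxa and feature values invented, for
    # illustration only; not the authors' protocol or data).
    # Features: (mean granule diameter in micrometres, roundness 0-1).
    REFERENCE = {
        "Cenchrus (wild grass)":     (9.0, 0.95),
        "Triticum (domestic wheat)": (22.0, 0.70),
        "Hordeum (domestic barley)": (18.0, 0.75),
    }

    def best_match(diameter_um, roundness):
        """Return the reference taxon with the smallest feature distance."""
        return min(
            REFERENCE,
            key=lambda taxon: (REFERENCE[taxon][0] - diameter_um) ** 2
                              + (REFERENCE[taxon][1] - roundness) ** 2,
        )

    print(best_match(9.5, 0.93))   # -> "Cenchrus (wild grass)"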

“Haua Fteah is only a kilometre from the Mediterranean and close to well-established coastal routes, giving communities there access to commodities such as domesticated grain, or at least the possibility of cultivating them. Yet it seems that people living in the Jebel Akhdar region may well have made a strategic and deliberate choice not to adopt wholesale the new farming practices available to them, despite the promise of higher yields, but instead to integrate them into their existing practices,” says Lucarini.

“It’s interesting that today, even in relatively affluent European countries, the use of wild plants is becoming more commonplace, complementing the trend to use organically farmed food. Not only do wild plants contribute to a healthier diet, but they are also more sustainable for the environment.”

Lucarini suggests that North African communities delayed their move to domesticated grains because it suited their highly mobile style of life. “Opting to exploit wild crops was a successful and low-risk strategy not to rely too heavily on a single resource, which might fail. It’s an example of the English idiom of not putting all your eggs in one basket. Rather than being ‘backward’ in their thinking, these nomadic people were highly sophisticated in their pragmatism and deep understanding of plants, animals and climatic conditions,” he says.

Evidence of the processing of wild plants at Haua Fteah challenges the notion that there was a sharp and final divide between nomadic lifestyles and more settled farming practices – and confirms recent theories that the adoption of domesticated species in North Africa was an addition to, rather than a replacement of, the exploitation of wild resources such as the native grasses that still grow wild at the site.

“Archaeologists talk about a ‘Neolithic package’ – made up of domestic plants and animals, tools and techniques – that transformed lifestyles. Our research suggests that what happened at Haua Fteah was that people opted for a mixed bag of old and new. The gathering of wild plants as well as the keeping of domestic sheep and goats chime with continued exploitation of other wild resources – such as land and sea snails – which were available on a seasonal basis with levels depending on shifts in climatic conditions,” says Lucarini.

“People had an intimate relationship with the environment they were so closely tuned to and, of course, entirely dependent on. This knowledge may have made them wary of abandoning strategies that enabled them to balance their use of resources – in a multi-spectrum exploitation of the environment.”

Haua Fteah continues to pose puzzles for archaeologists. The process of grinding requires two surfaces – a hand-held upper grinding tool and a base grinding surface. Excavation has yielded no lower grinders, which may have been as simple as shallow dish-shaped declivities in local rock surfaces. “Only a fraction of the extensive site has been excavated so it may be that lower grinders do exist but they simply haven’t been found yet,” says Lucarini.

The uncertain political situation in Libya has resulted in the suspension of fieldwork in Haua Fteah, in particular the excavation of the Neolithic and classical layers of the cave. Lucarini hopes that a resolution to the current crisis will allow work to resume within the next few years. He says: “Haua Fteah, with its 100,000 years of history and continuous occupation by different peoples, is a symbol of how Libya can be hospitable and welcoming. We trust in this future for the country.”

Inset images: Giulio Lucarini analysing the artefacts at the microscope, George Pitt-Rivers Laboratory, McDonald Institute for Archaeological Research (Aude Gräzer Ohara); Upper grinder found in the Neolithic layers of the cave, with plant residues stuck inside a crevice (Giulio Lucarini); Anita Radini collecting plants and algae for reference collection in Fezzan, Libya (Muftah Haddad); Cenchrinae starch granules from the Haua Fteah archaeological tools (a-c) and modern starch granules of Cenchrus biflorus (d) scale 20 µm (Anita Radini); Cenchrus ciliaris L., Burkina Faso (Arne Erpenbach, African plants - A Photo Guide www.africanplants.senckenberg.de).

Analysis of grinding stones reveals that North African communities may have moved slowly and cautiously from hunter-gatherer lifestyles to more settled farming practices. Newly published research by Cambridge archaeologist Dr Giulio Lucarini suggests that a preference for wild crops was a strategic decision.

"Rather than being ‘backward’ in their thinking, these nomadic people were highly sophisticated in their pragmatism and deep understanding of plants, animals and climatic conditions." – Giulio Lucarini

Image: Haua Fteah, Cyrenaica, Libya. The cave’s entrance.


Opinion: What science can tell us about the ‘world’s largest sapphire’


The “Star of Adam”, recently found in a mine in Sri Lanka, is believed to be the biggest sapphire ever discovered. It weighs in at just over 1,404 carats – around 280g, or just under ten ounces. But what do we know about the formation of this remarkable gemstone – and how could it grow so huge?
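As a quick sanity check on those conversions (assuming the standard metric carat of exactly 0.2g and the avoirdupois ounce of roughly 28.35g – a reader’s back-of-envelope check, not part of the original report):

    # Convert the Star of Adam's reported weight from carats to grams and
    # ounces, assuming the standard metric carat (0.2 g) and avoirdupois
    # ounce (28.3495 g); 1404 is the rounded carat figure quoted above.
    CARAT_G = 0.2
    OUNCE_G = 28.3495

    carats = 1404
    grams = carats * CARAT_G    # 280.8 g, i.e. "around 280g"
    ounces = grams / OUNCE_G    # ~9.9 oz, "just under ten ounces"

    print(f"{carats} ct = {grams:.1f} g = {ounces:.2f} oz")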

Sapphire is a bright blue gem mineral and a form of corundum (aluminium oxide), the hard gritty stuff used as an abrasive in emery paper. It is incredibly hard – a fact important in understanding its occurrence in places like the Sri Lankan mines.

Sapphire is a type of “dirty” corundum. If you add just a trace of iron and titanium to the mixture of aluminium and oxygen from which the corundum is growing, it forms as sapphire. (If you add chromium to the corundum as it grows then you will get a ruby – Sri Lanka is also famous for its rubies).

The Star of Adam sapphire is an example of a “star sapphire”. When you look at it, it appears to have a six-pointed star inside, which shines out from the gem and is due to reflections of light from tiny whisker-like crystals of rutile (a titanium-dioxide mineral) that were trapped within the sapphire crystal as it grew.

Ancient river sediments

The stone was found in the Ratnapura mines in the south of the country, about 100km south-east of the capital, Colombo. Ratnapura is Sinhalese for “gem town” and Sri Lanka has been known for its gem deposits for more than 2,000 years. It seems likely that Sinbad’s “Valley of Gems” in the Tales of the Arabian Nights is a reference to the Ratnapura area. In 1292, Marco Polo wrote: “The Island of Ceylon is, for its size, the finest island in the world, and from its streams come rubies, sapphires, topaz, amethyst and garnet.”

Image: Ratnapura gem mine (hassage/Flickr, CC BY-SA)

The gems of Ratnapura are found in ancient river sediments – old river beds that are now covered with more layers of mud and sand in an area that is largely given over to paddy fields. The hard gem minerals, sapphires, rubies, spinels and garnets, were long ago weathered and eroded from the nearby highlands. Because of their hardness they survived as large pebbles and crystals, eroded out of the rocks where they first formed, and transported down the rivers which acted like a natural panning system. River-borne (alluvial) gold and diamonds are often sorted and concentrated in river sands by similar processes, elsewhere on the globe.

On their journey along the river the softer rocks from the highlands would have been worn down into mud and fine sand, but the harder minerals survive better, retaining their size and often their shape. The island’s average annual rainfall is more than 2,000mm, and the tropical climate means that erosion and weathering of the highland mountains is greatly accelerated.

Ratnapura is in the “wet zone” of the island. Its gem-bearing gravels have yielded a number of historic gemstones, possibly including a 400-carat red spinel given to Catherine the Great of Russia, and a giant oval-cut spinel, known as the “Black Prince Ruby” (it was mistakenly identified as a ruby), which features in the British Queen’s imperial state crown.

The Star of Adam sapphire would originally have been created within the granites and associated rocks of the Sri Lankan highlands. The granites, which form when molten magma cools and becomes solid, have been dated at almost two billion years old, and were subsequently squeezed and re-worked in a massive mountain-building episode, driven by tectonic churning of the Earth’s crust, that happened more than 500 million years ago.

Temperatures and pressures deep within the roots of these mountains would have reached more than 900˚C and over 9,000 atmospheres pressure during this event. The sapphire could have formed either within the granite, as part of a rock type called a pegmatite, or within the younger rock created by pressurisation and heating.

In either case the temperatures and pressures would have changed only very slowly over millions and millions of years, and this is how the crystal was able to grow so big. Once the sapphire had formed, the mountains it sat within would have been uplifted and eroded, bringing it to the surface, where it was picked out of the rock by the forces of rain and weathering and transported down river to the gem sands of Ratnapura. Today it sits in the hands of a private owner.

Simon Redfern, Professor in Earth Sciences, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Simon Redfern (Department of Earth Sciences) discusses how the "Star of Adam" sapphire was formed in the highlands of Sri Lanka.

Image: The Star of Adam


Roman toilets gave no clear health benefit, and Romanisation actually spread parasites


The Romans are well known for introducing sanitation technology to Europe around 2,000 years ago, including public multi-seat latrines with washing facilities, sewerage systems, piped drinking water from aqueducts, and heated public baths for washing. Romans also developed laws designed to keep their towns free of excrement and rubbish.

However, new archaeological research has revealed that – for all their apparently hygienic innovations – intestinal parasites such as whipworm, roundworm and Entamoeba histolytica dysentery did not decrease as expected in Roman times compared with the preceding Iron Age; instead, they gradually increased.

The latest research was conducted by Dr Piers Mitchell from Cambridge’s Archaeology and Anthropology Department and is published today in the journal Parasitology. The study is the first to use the archaeological evidence for parasites in Roman times to assess “the health consequences of conquering an empire”.

Mitchell brought together evidence of parasites in ancient latrines, human burials and ‘coprolites’ – or fossilised faeces – as well as in combs and textiles from numerous Roman Period excavations across the Roman Empire. 

Not only did certain intestinal parasites appear to increase in prevalence with the coming of the Romans, but Mitchell also found that, despite their famous culture of regular bathing, ‘ectoparasites’ such as lice and fleas were just as widespread among Romans as in Viking and medieval populations, where bathing was not widely practiced.

Some excavations revealed evidence for special combs to strip lice from hair, and delousing may have been a daily routine for many people living across the Roman Empire.

Piers Mitchell said: “Modern research has shown that toilets, clean drinking water and removing faeces from the streets all decrease risk of infectious disease and parasites. So we might expect the prevalence of faecal oral parasites such as whipworm and roundworm to drop in Roman times – yet we find a gradual increase. The question is why?”

One possibility Mitchell offers is that it may have actually been the warm communal waters of the bathhouses that helped spread the parasitic worms. Water was infrequently changed in some baths, and a scum would build on the surface from human dirt and cosmetics. “Clearly, not all Roman baths were as clean as they might have been,” said Mitchell.

Another possible explanation raised in the study is the Roman use of human excrement as a crop fertilizer. While modern research has shown this does increase crop yields, unless the faeces are composted for many months before being added to the fields, it can result in the spread of parasite eggs that can survive in the grown plants.

“It is possible that sanitation laws requiring the removal of faeces from the streets actually led to reinfection of the population as the waste was often used to fertilise crops planted in farms surrounding the towns,” said Mitchell.

The study found fish tapeworm eggs to be surprisingly widespread in the Roman Period compared to Bronze and Iron Age Europe. One possibility Mitchell suggests for the rise in fish tapeworm is the Roman love of a sauce called garum.

Made from pieces of fish, herbs, salt and flavourings, garum was used as both a culinary ingredient and a medicine. This sauce was not cooked, but allowed to ferment in the sun. Garum was traded right across the empire, and may have acted as the “vector” for fish tapeworm, says Mitchell.

“The manufacture of fish sauce and its trade across the empire in sealed jars would have allowed the spread of the fish tapeworm parasite from endemic areas of northern Europe to all people across the empire. This appears to be a good example of the negative health consequences of conquering an empire,” he said.

The study shows a range of parasites infected people living in the Roman Empire, but did they try to treat these infections medically? While Mitchell says care must be taken when relating ancient texts to modern disease diagnoses, some researchers have suggested that intestinal worms described by the Roman medical practitioner Galen (AD 130–210) may include roundworm, pinworm and a species of tapeworm.

Galen believed these parasites were formed from spontaneous generation in putrefied matter under the effect of heat. He recommended treatment through modified diet, bloodletting, and medicines believed to have a cooling and drying effect, in an effort to restore balance to the ‘four humours’: black bile, yellow bile, blood and phlegm.       

Added Mitchell: “This latest research on the prevalence of ancient parasites suggests that Roman toilets, sewers and sanitation laws had no clear benefit to public health. The widespread nature of both intestinal parasites and ectoparasites such as lice also suggests that Roman public baths surprisingly gave no clear health benefit either.”

“It seems likely that while Roman sanitation may not have made people any healthier, they would probably have smelt better.”

Archaeological evidence shows that intestinal parasites such as whipworm became increasingly common across Europe during the Roman Period, despite the apparent improvements the empire brought in sanitation technologies.

"It seems likely that while Roman sanitation may not have made people any healthier, they would probably have smelt better." – Piers Mitchell

Images: Roman latrines from Lepcis Magna in Libya (left); Roman whipworm egg from Turkey (right)


Opinion: Why the Romans weren’t quite as clean as you might have thought


Prior to the Romans, Greece was the only part of Europe to have had toilets. But by the peak of the Roman Empire in the 3rd century AD, the Romans had introduced sanitation to much of their domain, stretching across western and southern Europe, the Middle East and North Africa. Their impressive technologies included large multi-seat public latrines, sewers, clean water in aqueducts, elegant public baths for washing, and laws that required towns to remove waste from the streets. But how effective were these measures in improving the health of the population?

Modern clinical research has shown that toilets and clean drinking water decrease the risk of human gastrointestinal infections by bacteria, viruses and parasites. We might, therefore, expect that this area of health would improve under the Romans compared with the situation in Bronze Age and Iron Age Europe, when these sanitation technologies did not exist. Similarly, we might also expect that ectoparasites such as fleas and body lice might become less common with the introduction of regular bathing and personal hygiene.

A new study I’ve published in Parasitology has brought together all the archaeological evidence for intestinal parasites and ectoparasites in the Roman world in order to evaluate the impact of Roman sanitation technology upon health. The study compares the species of parasites present before the Romans in the Bronze Age and Iron Age, and also after the Romans in the early medieval period.

Image: Bog standard (Sphinx Wang/Shutterstock.com)

The study turned up a number of surprises. Unexpectedly, there was no drop in parasites spread by poor sanitation following the arrival of the Romans. In fact, parasites such as whipworm, roundworm and dysentery infections gradually increased during the Roman period instead of falling as expected. This suggests that Roman sanitation technologies such as latrines, sewers and clean water were not as effective in improving gastrointestinal health as you might think.

Image: Whipworm egg (Piers Mitchell, author provided)

It is possible that the expected benefits from these technologies were counteracted by the effects of laws requiring waste from the streets to be taken outside towns. Texts from the Roman period mention how human waste was used to fertilise crops in the fields, so parasite eggs from human faeces would have contaminated these foods and allowed reinfection of the population.

The second surprising finding was that there was no sign of a decrease in ectoparasites following the introduction of public bathing facilities to keep the population clean. Analysis of the number of fleas and lice in York, in northern England, found similar numbers of parasites in Roman soil layers as in Viking and medieval soil layers. Since the Viking and medieval populations of York did not bathe regularly, we would have expected Roman bathing to reduce the number of parasites found in Roman York. This suggests that Roman baths had no clear beneficial effect upon health when it comes to ectoparasites.

Image: The head of the fish tapeworm, Diphyllobothrium latum (커뷰, CC BY-SA)

The fish tapeworm also became more common in Europe under the Romans. In Bronze Age and Iron Age Europe fish tapeworm eggs have only been found in France and Germany. However, under the Roman Empire, fish tapeworm was found in six different European countries. One possibility that would explain the apparent increase in distribution of the parasite is the adoption of Roman culinary habits.

One popular Roman food was garum, an uncooked fermented fish sauce made from fish, herbs, spices and salt. We have archaeological and textual evidence for its manufacture, storage in sealed clay pots, transport and sale across the empire. It is possible that garum made in northern Europe would have contained fish infected with fish tapeworm, and when traded to other parts of the empire this could have infected people living outside the original area endemic for the disease.

This is not to say that Roman sanitation was a waste of time. It would have been useful to have public latrines so that people in town did not have to return home to use the toilet. A culture of public bathing would have made people smell better too. However, the archaeological evidence does not indicate any health benefit from this sanitation, but rather that Romanisation led to an increase in certain parasite species due to trade and migration across the empire.

Piers Mitchell, Affiliated Lecturer in Biological Anthropology, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Piers Mitchell (Department of Biological Anthropology) discusses what Roman toilets did for the health of the population.

Image: Roman toilets


Opinion: Mysterious footprint fossils point to dancing dinosaur mating ritual


Studying dinosaurs is a lot like being a detective. Just as Sherlock Holmes was noted for his ability to interpret the behaviour of victims or criminals using footprints, palaeontologists have a similar practice when looking for evidence of dinosaur behaviour known as ichnology.

This is the study of the traces living organisms leave behind including bones, footprints and even bite marks on leaves. Indeed, Sherlock Holmes' creator, Sir Arthur Conan Doyle, was very well aware of the traces of dinosaur footprints that had been discovered in the rocks of the Weald near his home in south-east England.

Now researchers in the US have discovered some very unusual trace fossils they believe could also be footprints. Although it is far from certain, these markings may provide the first clue as to whether dinosaurs performed dance-like mating rituals similar to those of living birds.

Scratching the surface

The team from the University of Colorado Denver have unearthed some truly extraordinary trace fossils on the bedding surfaces of sedimentary rocks of Cretaceous age in Colorado. The bedding surfaces have revealed an irregular array of large scoop-shaped depressions up to 2m in diameter and adjacent hummocks. Many of the scoops also display clear and unequivocal elongate scratch marks.

Given the geological ages of these rocks, the only large, powerful ground-dwelling creatures likely to be able to make such structures would have been dinosaurs. These curious sedimentary structures are not simply a one-off isolated discovery that can be explained as just a weird bit of geology, but have been found in clusters at a number of discrete sites across Colorado. Each site has a rather similar, comparatively dense, cluster of these scoop-like structures.

At first sight it would be perfectly reasonable to consider that such structures were the remnants of ancient dinosaur nests. Dinosaur nest sites, including eggs, shell fragments and even nestling dinosaur remains, are comparatively well known. They have been reported from a range of Cretaceous-aged sites in North America, South America and Asia.

Image: Dinosaur detectives (M. Lockley)

But these “scoops and hummocks” differ in their detailed structure when compared to definitive dinosaur nests. Dinosaur nests tend to be circular, rather flat-bottomed, usually have traces of egg shell and are typically surrounded by a rim-like perimeter wall.

In fact, these new and distinctive structures show no evidence of what appear to be conventional dinosaur nest structure or scattered egg shell fragments. They are elongate, concave depressions with sediment clearly heaped to one side. In many instances, they display scrape marks that appear to have been made by dragging claws.

These structures are most comparable to the “leks” produced by living ground-nesting birds. Leks are effectively display arenas in which male birds perform a courtship ritual that can include dancing, showing off their feathers and making calls to attract the attention of nearby females.

The researchers suggest the geological structures were originally created by theropods, the group of dinosaurs most closely related to living birds and which includes Tyrannosaurus rex. Theropods may well have been very like modern birds in their behaviour and made the scrapes as part of the production of a display arena for courtship. However, it seems likely that if these marks were leks they would have been next to actual breeding or nesting sites, but so far no trace of nests has been discovered.

Tracking down Cinderella

The frustrating thing about ichnology is that while the tracks and traces left by living creatures can be matched to observations of their actual behaviour, this is rarely the case when it comes to the fossilised traces of dinosaurs. Trying to tie the identity of fossilised tracks to the original track-maker has been a persistent problem for palaeontologists. It’s rather like the hunt for Cinderella: they have to look for animals that lived at the exact time the tracks were formed, with feet bones the right size and shape to precisely fit the shoe of the fossilised footprint.

Fossilised tracks and traces used to be rather disparaged by palaeontologists because the difficulties surrounding the identity of the actual track-maker seemed more or less insurmountable. However, the past few decades have seen a growing appreciation of the information that can be gleaned from such tracks and traces.

This includes the local environmental conditions when the tracks were made, the texture of the sediments that the creature was walking upon, and the details of foot placement, stride length and stride pattern. These can reveal a surprising amount of information about the way the track-maker walked, its posture and even the likely speed at which it was moving – very reminiscent of the skills demonstrated by Conan Doyle’s heroic sleuth.

Just a few years ago the question of bird-dinosaur affinities was also hotly disputed. The discovery of feathered theropods in the 1990s finally proved that theropod dinosaurs were ancestral to living birds. Although we can’t yet be sure, the new research suggests some dinosaurs may not just have been anatomically similar to birds but may also have shared some of their mating behaviours. This gives rise to the amusing possibility of a dancing T. rex trying to impress his potential mate.

David Norman, Reader in Palaeobiology and Curator of Palaeontology, Sedgwick Museum of Earth Sciences, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

David Norman (Sedgwick Museum of Earth Sciences) discusses how palaeontologists can interpret fossil footprints to find clues as to whether dinosaurs performed dance-like mating rituals.

Image: Tyrannosaurus tango


Do you say splinter, spool, spile or spell? English Dialects app tries to guess your regional accent


Along with colleagues from the universities of Zurich and Bern, Cambridge’s Adrian Leemann has developed the free app English Dialects (available on iOS and Android) which asks you to choose your pronunciation of 26 different words before guessing where in England you’re from.

The app, officially launched today on the App Store and Google Play, also encourages you to make your own recordings in order to help researchers determine how dialects have changed over the past 60 years. The English language app follows the team’s hugely successful apps for German-speaking Europe which accumulated more than one million hits in four days on Germany’s Der Spiegel website, and more than 80,000 downloads of the app by German speakers in Switzerland.

“We want to document how English dialects have changed, spread or levelled out,” said Dr Leemann, a researcher at Cambridge’s Department of Theoretical and Applied Linguistics. “The first large-scale documentation of English dialects dates back 60-70 years, when researchers were sent out into the field – sometimes literally – to record the public. It was called the ‘Survey of English Dialects’. In 313 localities across England, they documented accents and dialects over a decade, mainly of farm labourers.”

The researchers used this historical material for the dialect guessing app, which allows them to track how dialects have evolved into the 21st century.

“We hope that people in their tens of thousands will download the app and let us know their results – which means our future attempts at mapping dialect and language change should be much more precise,” added Leemann. “Users can also interact with us by recording their own dialect terms and this will let us see how the English language is evolving and moving from place to place.”

The app asks users how they pronounce certain words, or which dialect term they most associate with commonly-used expressions, and then produces a heat map for the likely location of your dialect based on your answers.

For example, the app asks how you might say the word ‘last’ or ‘shelf’, giving you various pronunciations to listen to before choosing which one most closely matches your own. Likewise, it asks questions such as: ‘A small piece of wood stuck under the skin is a…’ then gives answers including: spool, spile, speel, spell, spelk, shiver, spill, sliver, splinter or splint. The app then allows you to view which areas of the country use which variations at the end of the quiz.
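Under the hood, a guesser of this kind can be pictured as a simple probabilistic scorer over the historical survey data: each locality has a recorded frequency for each variant, and a user’s answers are scored against them. The sketch below is an illustrative reconstruction, not the team’s published algorithm, and its localities and frequencies are invented:

    import math

    # Toy dialect guesser (illustrative reconstruction, NOT the app's
    # actual algorithm; localities and variant frequencies are invented).
    # SURVEY[locality][question] maps each variant to its local frequency.
    SURVEY = {
        "Newcastle": {"splinter_word": {"spelk": 0.70, "splinter": 0.30}},
        "London":    {"splinter_word": {"spelk": 0.01, "splinter": 0.99}},
    }

    def rank_localities(answers):
        """Rank localities by the log-probability of the user's answers."""
        scores = {}
        for locality, questions in SURVEY.items():
            score = 0.0
            for question, variant in answers.items():
                # A small floor avoids log(0) for locally unattested variants.
                score += math.log(questions[question].get(variant, 1e-3))
            scores[locality] = score
        return sorted(scores, key=scores.get, reverse=True)

    print(rank_localities({"splinter_word": "spelk"}))  # Newcastle ranks first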

It also asks the endlessly contentious English question of whether ‘scone’ rhymes with ‘gone’ or ‘cone’.


“Everyone has strong views about the pronunciation of this word, but, perhaps surprisingly, we know rather little about who uses which pronunciation and where,” said Professor David Britain, a dialectologist and member of the app team based at the University of Bern in Switzerland.

“Much of our understanding of the regional distribution of different accent and dialect features is still based on the wonderful but now outdated Survey of English Dialects – we haven’t had a truly country-wide survey since. We hope the app will harness people’s fascination with dialect to enable us to paint a more up-to-date picture of how dialect features are spread across the country.”

At the end of the 26 questions, the app gives its best three guesses as to the geography of your accent based on your dialect choices. However, while the Swiss version of the app proved to be highly accurate, Leemann and his colleagues have sounded a more cautious note on the accuracy of the English dialect app.

Dr Leemann said: “English accents and dialects are likely to have changed over the past decades. This may be due to geographical and social mobility, the spread of the mass media and other factors. If the app guesses where you are from correctly, then the accent or dialect of your region has not changed much in the last century. If the app does not guess correctly, it is probably because the dialect spoken in your region has changed quite a lot over time.”

At the end of the quiz, users are invited to share with researchers their location, age, gender, education, ethnicity and how many times they have moved in the last decade. This anonymous data will help academics understand the spread, evolution or decline of certain dialects and dialect terms, and provide answers as to how language changes over time.

“The more people participate and share this information with us, the more accurately we can track how English dialects have changed over the past 60 years,” added Dr Leemann.

After taking part in the quiz, users can also listen to both historic and contemporary pronunciations, taking the public on an auditory journey through England and allowing them to hear how dialects have altered in the 21st century. The old recordings are now held by the British Library and were made available for use in the app. One of these recordings features a speaker from Devon who discusses haymaking and reflects on working conditions in his younger days (http://bit.ly/1OReBDT).

Dr Leemann added: “Our research on dialect data collected through smartphone apps has opened up a new paradigm for analyses of language change. For the Swiss version nearly 80,000 speakers participated. Results revealed that phonetic variables (e.g. if you say ‘sheuf’ or ‘shelf’) tended to remain relatively stable over time, while lexical variables (e.g. if you say ‘splinter’, ‘spelk’, ‘spill’ etc.) changed more over time. The recordings from the Swiss users also showed clear geographical patterns; for example people spoke consistently faster in some regions than others. We hope to do such further analyses with the English data in the near future.”

The findings of the German-speaking experiments were published last week in PLOS ONE: (http://bit.ly/1R98WQ0).

A new app which tries to guess your regional accent based on your pronunciation of 26 words and colloquialisms will help Cambridge academics track the movement and changes to English dialects in the modern era.

"We hope that people in their tens of thousands will download the app and let us know their results." – Adrian Leemann

Image: Screen grab of one of the app’s questions


Banning trophy hunting could do more harm than good


Banning trophy hunting would do more harm than good in African countries that have little money to invest in critical conservation initiatives, argue researchers from the Universities of Cambridge, Adelaide and Helsinki. Trophy hunting can be an important conservation tool, provided it can be done in a controlled manner to benefit biodiversity conservation and local people. Where political and governance structures are adequate, trophy hunting can help address the ongoing loss of species.

The researchers have developed a list of 12 guidelines that could address some of the concerns about trophy hunting and enhance its contribution to biodiversity conservation. Their paper is published in the journal Trends in Ecology & Evolution.  

“The story of Cecil the lion, who was killed by an American dentist in July 2015, shocked people all over the world and reignited debates surrounding trophy hunting,” said Professor Corey Bradshaw of the University of Adelaide, the paper’s senior author.

“Understandably, many people oppose trophy hunting and believe it is contributing to the ongoing loss of species; however, we contend that banning the US$217 million per year industry in Africa could end up being worse for species conservation,” he said.  

Professor Bradshaw says trophy hunting brings in more money and can be less disruptive than ecotourism. While the majority of animals hunted in sub-Saharan Africa are more common and less valuable species, the majority of hunting revenue comes from a few valuable species, particularly the charismatic ‘Big Five’: lion, leopard, elephant, buffalo and black or white rhinoceros.

“Conserving biodiversity can be expensive, so generating money is essential for environmental non-government organisations, conservation-minded individuals, government agencies and scientists,” said co-author Dr Enrico Di Minin from the University of Helsinki.  

“Financial resources for conservation, particularly in developing countries, are limited,” he said. “As such, consumptive (including trophy hunting) and non-consumptive (ecotourism safaris) uses are both needed to generate funding. Without them, many natural habitats would be converted to agricultural or pastoral uses.

“Trophy hunting can also have a smaller carbon and infrastructure footprint than ecotourism, and it generates higher revenue from a smaller number of users.”

However, co-author Professor Nigel Leader-Williams from Cambridge’s Department of Geography said there is a need for the industry to be better regulated.

“There are many concerns about trophy hunting beyond the ethical that currently limit its effectiveness as a conservation tool,” he said. “One of the biggest problems is that the revenue it generates often goes to the private sector and rarely benefits protected-area management and the local communities. However, if this money was better managed, it would provide much needed funds for conservation.”

The authors’ guidelines to make trophy hunting more effective for conservation are:

  1. Mandatory levies should be imposed on safari operators by governments so that they can be invested directly into trust funds for conservation and management;
  2. Eco-labelling certification schemes could be adopted for trophies coming from areas that contribute to broader biodiversity conservation and respect animal welfare concerns;
  3. Mandatory population viability analyses should be done to ensure that harvests cause no net population declines;
  4. Post-hunt sales of any part of the animals should be banned to avoid illegal wildlife trade;
  5. Priority should be given to fund trophy hunting enterprises run (or leased) by local communities;
  6. Trusts to facilitate equitable benefit sharing within local communities and promote long-term economic sustainability should be created;
  7. Mandatory scientific sampling of hunted animals, including tissue for genetic analyses and teeth for age analysis, should be enforced;
  8. Mandatory 5-year (or more frequent) reviews of all individuals hunted and detailed population management plans should be submitted to government legislators to extend permits;
  9. There should be full disclosure to public of all data collected (including levied amounts);
  10. Independent government observers should be placed randomly and without forewarning on safari hunts as they happen;
  11. Trophies should be confiscated and permits revoked when illegal practices are disclosed; and
  12. Backup professional shooters and trackers should be present for all hunts to minimise welfare concerns.

Reference:
E. Di Minin et al. ‘Banning Trophy Hunting Will Exacerbate Biodiversity Loss.’ Trends in Ecology & Evolution (2015). DOI: 10.1016/j.tree.2015.12.006

Adapted from a University of Adelaide press release.

Trophy hunting shouldn’t be banned, but instead it should be better regulated to ensure funds generated from permits are invested back into local conservation efforts, according to new research. 

"There are many concerns about trophy hunting beyond the ethical that currently limit its effectiveness as a conservation tool." – Nigel Leader-Williams

Image: Lion waiting in Namibia


Global learning is needed to save carbon capture and storage from being abandoned


Carbon capture and storage, which is considered by many experts as the only realistic way to dramatically reduce carbon emissions in an affordable way, has fallen out of favour with private and public sector funders. Corporations and governments worldwide, including most recently the UK, are abandoning the same technology they championed just a few years ago.

In a commentary published today (11 January) in the inaugural issue of the journal Nature Energy, a University of Cambridge researcher argues that now is not the time for governments to drop carbon capture and storage (CCS). As with many new technologies, it is only possible to learn what works and what doesn’t by building and testing demonstration projects at scale; by giving up on CCS instead of working together to develop a global ‘portfolio’ of projects, countries are turning their backs on a key part of a low-carbon future.

CCS works by separating the carbon dioxide emitted by coal and gas power plants, transporting it and then storing it underground so that the CO2 cannot escape into the atmosphere. Critically, CCS can also be used in industrial processes, such as chemical, steel or cement plants, and is often the only feasible way of reducing emissions at these facilities. While renewable forms of energy, such as solar or wind, are important to reducing emissions, until there are dramatic advances in battery technology, CCS will be essential to deliver flexible power and to build green industrial clusters.

“If we’re serious about meeting aggressive national or global emissions targets, the only way to do it affordably is with CCS,” said Dr David Reiner of Cambridge Judge Business School, the paper’s author. “But since 2008, we’ve seen a decline in interest in CCS, which has essentially been in lock step with our declining interest in doing anything serious about climate change.”

Just days before last year’s UN climate summit in Paris, the UK government cancelled a four-year, £1 billion competition to support large-scale CCS demonstration projects. And since the financial crisis of 2008, projects in the US, Canada, Australia, Europe and elsewhere have been cancelled, although the first few large-scale integrated projects have recently begun operation. The Intergovernmental Panel on Climate Change (IPCC) says that without CCS, the costs associated with slowing global warming will double.

According to Reiner, there are several reasons that CCS seems to have fallen out of favour with both private and public sector funders. The first is cost – a single CCS demonstration plant costs in the range of $1 billion. Unlike solar or wind, which can be demonstrated at a much smaller scale, CCS can only be demonstrated at a large scale, driven by the size of commercial-scale power plants and the need to characterise the geological formations which will store the CO2.

“Scaling up any new technology is difficult, but it’s that much harder if you’re working in billion-dollar chunks,” said Reiner. “At 10 or even 100 million dollars, you will be able to find ways to fund the research & development. But being really serious about demonstrating CCS and making it work means allocating very large sums at a time when national budgets are still under stress after the global financial crisis.”

Another reason is commercial pressures and timescales. “The nature of demonstration is that you work out the kinks – you find out what works and what doesn’t, and you learn from it,” said Reiner. “It’s what’s done in science or in research and development all the time: you expect that nine of ten ideas won’t work, that nine of ten oil wells you drill won’t turn up anything, that nine of ten new drug candidates will fail. Whereas firms can make ample returns on a major oil discovery or a blockbuster drug to make up for the many failures along the way, that is clearly not the case for CCS, so the answer is almost certainly government funding or mandates.

“The scale of CCS and the fact that it’s at the demonstration rather than the research and development phase also means that you don’t get to play around with the technology as such – you’re essentially at the stage where, to use a gambling analogy, you’re putting all your money on red 32 or black 29. And when a certain approach turns out to be more expensive than expected, it’s easy for nay-sayers to dismiss the whole technology, rather than to consider how to learn from that failure and move forward.”

There is also the issue that before 2008 countries thought they would each be developing their own portfolios of projects and so they focused inward, rather than working together to develop a global portfolio of large-scale CCS demonstrations. In the rush to fund CCS projects between 2005 and 2009, countries assembled projects independently, and now only a handful of those projects remain.

According to Reiner, building a global portfolio, where countries learn from each other’s projects, will assist in learning through diversity and replication, ‘de-risking’ the technology and determining whether it ever emerges from the demonstration phase.
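Reiner’s “nine of ten fail” framing makes the value of a pooled portfolio easy to quantify. As a back-of-envelope illustration (the 10% success rate comes from his analogy rather than any figure in the paper, and projects are assumed to succeed or fail independently):

    # Back-of-envelope portfolio arithmetic (illustrative only; assumes
    # each demonstration path independently has a ~10% chance of success,
    # echoing the "nine of ten fail" analogy, not a figure from the paper).
    p_success = 0.10

    for n_projects in (1, 3, 6, 10):
        p_at_least_one = 1 - (1 - p_success) ** n_projects
        print(f"{n_projects:2d} projects -> "
              f"{p_at_least_one:.0%} chance at least one succeeds")

On these odds, six diverse projects already give roughly even chances of at least one success – the arithmetic behind spreading demonstrations across a global portfolio rather than betting everything on a single national project.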

“If we’re not going to get CCS to happen, it’s hard to imagine getting the dramatic emissions reductions we need to limit global warming to two degrees – or three degrees, for that matter,” he said. “However, there’s an inherent tension in developing CCS – it is not a single technology, but a whole suite and if there are six CCS paths we can go down, it’s almost impossible to know sitting where we are now which is the right path. Somewhat ironically, we have to be willing to invest in these high-cost gambles or we will never be able to deliver an affordable, low-carbon energy system.”

Reference:
David M. Reiner. ‘Learning through a portfolio of carbon capture and storage demonstration projects.’ Nature Energy (2016). DOI: 10.1038/nenergy.2015.11


Governments should not be abandoning carbon capture and storage, argues a Cambridge researcher, as it is the only realistic way of dramatically reducing carbon emissions. Instead, they should be investing in global approaches to learn what works – and what doesn’t.

"If we’re serious about meeting aggressive national or global emissions targets, the only way to do it affordably is with carbon capture and storage." – David Reiner

Image: Power plant
