People living in the former industrial heartlands of England and Wales are more disposed to negative emotions such as anxiety and depressive moods, more impulsive and more likely to struggle with planning and self-motivation, according to a new study of almost 400,000 personality tests.
The findings show that, generations after the white heat of the Industrial Revolution and decades on from the decline of deep coal mining, the populations of areas where coal-based industries dominated in the 19th century retain a “psychological adversity”.
Researchers suggest this is the inherited product of selective migrations during mass industrialisation compounded by the social effects of severe work and living conditions.
They argue that the damaging cognitive legacy of coal is “reinforced and amplified” by the more obvious economic consequences of high unemployment we see today. The study also found significantly lower life satisfaction in these areas.
The UK findings, published in the Journal of Personality and Social Psychology, are supported by a North American “robustness check”, with less detailed data from US demographics suggesting the same patterns of post-industrial personality traits.
“Regional patterns of personality and well-being may have their roots in major societal changes underway decades or centuries earlier, and the Industrial Revolution is arguably one of the most influential and formative epochs in modern history,” says co-author Dr Jason Rentfrow, from Cambridge’s Department of Psychology.
“Those who live in a post-industrial landscape still do so in the shadow of coal, internally as well as externally. This study is one of the first to show that the Industrial Revolution has a hidden psychological heritage, one that is imprinted on today’s psychological make-up of the regions of England and Wales.”
An international team of psychologists, including researchers from the Queensland University of Technology, University of Texas, University of Cambridge and the Baden-Württemberg Cooperative State University, used data collected from 381,916 people across England and Wales during 2009-2011 as part of the BBC Lab’s online Big Personality Test.
The team analysed test scores by looking at the “big five” personality traits: extraversion, agreeableness, conscientiousness, neuroticism and openness. The results were further dissected by characteristics such as altruism, self-discipline and anxiety.
The data was also broken down by region and county, and compared with several other large-scale datasets including coalfield maps and a male occupation census of the early 19th century (collated through parish baptism records, where the father listed his job).
The team controlled for an extensive range of other possible influences – from competing economic factors in the 19th century and earlier, through to modern considerations of education, wealth and even climate.
However, they still found significant personality differences for those currently occupying areas where large numbers of men had been employed in coal-based industries from 1813 to 1820 – as the Industrial Revolution was peaking.
Neuroticism was, on average, 33% higher in these areas compared with the rest of the country. In the ‘big five’ model of personality, this translates as increased emotional instability and a proneness to feelings of worry or anger, as well as a higher risk of common mental disorders such as depression and substance abuse.
In fact, in the further “sub-facet” analyses, these post-industrial areas scored 31% higher for tendencies toward both anxiety and depression.
Areas that ranked highest for neuroticism include Blaenau Gwent and Ceredigion in Wales, and Hartlepool in England.
Conscientiousness was, on average, 26% lower in former industrial areas. In the ‘big five’ model, this manifests as more disorderly and less goal-oriented behaviours – difficulty with planning and saving money. The underlying sub-facet of ‘order’ itself was 35% lower in these areas.
The lowest three areas for conscientiousness were all in Wales (Merthyr Tydfil, Ceredigion and Gwynedd), with English areas including Nottingham and Leicester.
The BBC Lab questionnaire also included an assessment of life satisfaction, which was on average 29% lower in former industrial centres.
While researchers say there will be many factors behind the correlation between personality traits and historic industrialisation, they offer two likely ones: migration and socialisation (learned behaviour).
The people migrating into industrial areas were often doing so to find employment in the hope of escaping poverty and distressing situations of rural depression – those experiencing high levels of ‘psychological adversity’.
However, people who left these areas, often later on, were likely those with higher degrees of optimism and psychological resilience, say researchers.
This “selective influx and outflow” may have concentrated so-called ‘negative’ personality traits in industrial areas – traits that can be passed down generations through combinations of experience and genetics.
Migratory effects would have been exacerbated by the ‘socialisation’ of repetitive, dangerous and exhausting labour from childhood – reducing well-being and elevating stress – combined with harsh conditions of overcrowding and atrocious sanitation during the age of steam.
The study’s authors argue their findings have important implications for today’s policymakers looking at public health interventions.
“The decline of coal in areas dependent on such industries has caused persistent economic hardship – most prominently high unemployment. This is only likely to have contributed to the baseline of psychological adversity the Industrial Revolution imprinted on some populations,” says co-author Michael Stuetzer from Baden-Württemberg Cooperative State University, Germany.
“These regional personality levels may have a long history, reaching back to the foundations of our industrial world, so it seems safe to assume they will continue to shape the well-being, health, and economic trajectories of these regions.”
The team note that, while they focused on the negative psychological imprint of coal, future research could examine possible long-term positive effects in these regions born of the same adversity – such as the solidarity and civic engagement witnessed in the labour movement.
Study finds people in areas historically reliant on coal-based industries have more ‘negative’ personality traits. Psychologists suggest this cognitive die may well have been cast at the dawn of the industrial age.
A visitor to the Parker Library at Corpus Christi College may have solved the puzzle of a curious decorative detail on a chest dating from the early 15th century. The massive oak chest is known as the Billingford Hutch and takes its name from Richard de Billingford, the fifth Master of Corpus Christi (1398-1432).
Jeremy Purseglove, environmentalist and Cambridge resident, visited the Library during Open Cambridge in September 2017. “It was a wonderful chance to get a glimpse of some of the Library’s medieval manuscripts,” he said.
“We were given a fascinating talk by Alexander Devine, one of the librarians. He showed us a massive chest that had recently been moved to the Library from elsewhere in the College. My eye was drawn to the leaf shapes in the metal work.”
The chest is made from oak planks and measures approximately 1.8m x 0.5m x 0.4m. It is reinforced by numerous iron bands and five iron hasps, secured in three locks, all operated by different keys. Each of the lock plates (the metal plates containing the locks, hasps and keyholes) is decorated with the outline of a plant punched into the metal.
No-one knew the significance of this decorative detail. Purseglove, who is passionate about plants, suspected the distinctive shape was likely to be that of moonwort, a fern much mentioned by 16th-century herbalists. He said: “I rushed home and looked it up. I found that it had been associated with the opening of locks and guarding of silver.”
According to the renowned herbalist Nicholas Culpepper, writing in the 17th century: “Moonwort is an herb which (they say) will open locks, and unshoe such horses as tread upon it. This some laugh to scorn, and those no small fools neither; but country people, that I know, call it Unshoe the Horse.” Moonwort is also mentioned by dramatist Ben Jonson as an ingredient of witches’ broth.
In both design and structure, the Billingford Hutch is similar to many surviving chests made for the storage of valuables in late medieval Europe, from strongboxes and trunks to coffers and caskets. However, what makes the Billingford Hutch remarkable is that it’s a loan chest, a rare example of late medieval ‘financial furniture’.
University loan chests operated a bit like pawn shops and afforded temporary financial assistance to struggling scholars. “Richard de Billingford gave the College a sum of £20 which was placed in the chest under the guardianship of three custodians,” said Devine.
“Masters and Fellows of Corpus Christi were able to obtain loans up to a value of 40 shillings, around £2, by pledging objects of greater value, most often manuscripts, which would be held in the chest. After a specified time, the pledge – if unredeemed – would be sold and the original loan repaid to the chest with any profit going to the borrower.”
Billingford created the loan fund in 1420 but the chest itself may be even older. Other Cambridge colleges also had loan chests during the late Middle Ages but precious few survive. Corpus has retained not only the chest itself but also its register, containing its administrative records for more than 300 years.
The register offers great insight into the role of the chest in late medieval academic life at Corpus. Every one of the College’s Fellows and its Masters is named in the register, and many were repeat borrowers, demonstrating that the chest fulfilled a genuine need. The most frequent objects pledged to the Hutch were books. Other valuables included sacred vessels and chalices, silver spoons and salt cellars.
Devine said: “The Billingford Hutch is probably the best surviving example of its kind in Europe. To have a possible answer to the puzzle of its decorative motif is fantastic. We’re immensely grateful to Jeremy for enriching our understanding of its history. His wonderful discovery is further proof that sharing your collections with the public is the key to unlocking their secrets.”
Inset images: decorative motif on the hasp of the Billingford Hutch; the Hutch in its present position in the Parker Library; illustrations of 'the lunaria plant' from a 15th-century Catalan compilation of alchemical tracts (Corpus Christi College).
A heavy oak chest in the Parker Library (Corpus Christi College) was used to store objects left as collateral for loans of money. Its ironwork features the outline of a plant – but no-one knew why. Now a visitor to the Library may have unravelled the meaning of this decorative motif.
In the murk of post-truth public debate, facts can polarise. Scientific evidence triggers reaction and spin that ends up entrenching the attitudes of opposing political tribes.
Recent research suggests this phenomenon is actually stronger among the more educated, through what psychologists call 'motivated reasoning': where data is rejected or twisted - consciously or otherwise - to prop up a particular worldview.
However, a new study in the journal Nature Human Behaviour finds that one type of fact can bridge the chasm between conservative and liberal, and pull people's opinions closer to the truth on one of the most polarising issues in US politics: climate change.
Previous research has broadly found US conservatives to be most sceptical of climate change. Yet by presenting a fact in the form of a consensus - "97% of climate scientists have concluded that human-caused global warming is happening" - researchers have now discovered that conservatives shift their perceptions significantly towards the scientific 'norm'.
In an experiment involving over 6,000 US citizens, psychologists found that introducing people to this consensus fact reduced polarisation between higher educated liberals and conservatives by roughly 50%, and increased conservative belief in a scientific accord on climate change by 20 percentage points.
Moreover, the latest research confirms the prior finding that climate change scepticism is indeed more deeply rooted among highly educated conservatives. Yet exposure to the simple fact of a scientific consensus neutralises the "negative interaction" between higher education and conservatism that strongly embeds these beliefs.
"The vast majority of people want to conform to societal standards, it's innate in us as a highly social species," says Dr Sander van der Linden, study lead author from the University of Cambridge's Department of Psychology.
"People often misperceive social norms, and seek to adjust once they are exposed to evidence of a group consensus," he says, pointing to the example that college students always think their friends drink more than they actually do.
"Our findings suggest that presenting people with a social fact, a consensus of opinion among experts, rather than challenging them with blunt scientific data, encourages a shift towards mainstream scientific belief - particularly among conservatives."
For van der Linden and his co-authors Drs Anthony Leiserowitz and Edward Maibach from Yale and George Mason universities in the US, social facts such as demonstrating a consensus can act as a "gateway belief": allowing a gradual recalibration of private attitudes.
"Information that directly threatens people's worldview can cause them to react negatively and become further entrenched in their beliefs. This 'backfire effect' appears to be particularly strong among highly educated US conservatives when it comes to contested issues such as manmade climate change," says van der Linden.
"It is more acceptable for people to change their perceptions of what is normative in science and society. Previous research has shown that people will then adjust their core beliefs over time to match. This is a less threatening way to change attitudes, avoiding the 'backfire effect' that can occur when someone's worldview is directly challenged."
For the study, researchers conducted online surveys of 6,301 US citizens that adhered to nationally representative quotas of gender, age, education, ethnicity, region and political ideology.
The nature of the study was hidden by claims of testing random media messages, with the climate change perception tests sandwiched between questions on consumer technology and popular culture messaging.
Half the sample were randomly assigned to receive the 'treatment' of exposure to the fact of scientific consensus, while the other half, the control group, did not.
Researchers found that perceptions of the scientific consensus on climate change among self-declared conservatives were, on average, 33 percentage points lower (64%) than the actual scientific consensus of 97%. Among liberals the perception was 20 percentage points lower.
They also found a small additional negative effect: when someone is highly educated and conservative they judge scientific agreement to be even lower.
However, once the treatment group were exposed to the 'social fact' of overwhelming scientific agreement, higher-educated conservatives shifted their perception of the scientific norm by 20 percentage points to 83% - almost in line with post-treatment liberals.
The added negative effect of conservatism plus high education was completely neutralised through exposure to the truth on scientific agreement around manmade climate change.
"Scientists as a group are still viewed as trustworthy and non-partisan across the political spectrum in the US, despite frequent attempts to discredit their work through 'fake news' denunciations and underhand lobbying techniques deployed by some on the right," says van der Linden.
"Our study suggests that even in our so-called post-truth environment, hope is not lost for the fact. By presenting scientific facts in a socialised form, such as highlighting consensus, we can still shift opinion across political divides on some of the most pressing issues of our time."
New evidence shows that a ‘social fact’ highlighting expert consensus shifts perceptions across US political spectrum – particularly among highly educated conservatives. Facts that encourage agreement are a promising way of cutting through today’s ‘post-truth’ bluster, say psychologists.
Both the types of alcoholic drink and the amounts consumed in England have fluctuated over the last 300 years, largely in response to economic, legislative and social factors. Until the second part of the 20th century, beer and spirits were the most common forms of alcohol consumed, with wine most commonly consumed by the upper classes.
Wine consumption increased almost four-fold between 1960 and 1980, and almost doubled again between 1980 and 2004. Increased alcohol consumption since the mid-20th century reflects greater affordability, availability and marketing of alcoholic products, as well as licensing liberalisations leading to supermarkets competing in the drinks retail business.
In 2016, Professor Marteau and colleagues carried out an experiment at the Pint Shop in Cambridge, altering the size of wine glasses while keeping the serving sizes the same. They found that this led to an almost 10% increase in sales.
“Wine will no doubt be a feature of some merry Christmas nights, but when it comes to how much we drink, wine glass size probably does matter,” says Professor Theresa Marteau, Director of the Behaviour and Health Research Unit at the University of Cambridge.
In a study published today in The BMJ, Professor Marteau and colleagues looked at wine glass capacity over time to help understand whether any changes in their size might have contributed to the steep rise in its consumption over the past few decades.
“Wine glasses became a common receptacle from which wine was drunk around 1700,” says first author Dr Zorana Zupan. “This followed the development of lead crystal glassware by George Ravenscroft in the late 17th century, which led to the manufacture of less fragile and larger glasses than was previously possible.”
Through a combination of online searches and discussions with experts in antique glassware, including museum curators, the researchers obtained measurements of 411 glasses from 1700 to the present day. They found that wine glass capacity increased from 66ml in the 1700s to 417ml in the 2000s, with the mean wine glass size in 2016-17 being 449ml.
“Our findings suggest that the capacity of wine glasses in England increased significantly over the past 300 years,” adds Dr Zupan. “For the most part, this was gradual, but since the 1990s, the size has increased rapidly. Whether this led to the rise in wine consumption in England, we can’t say for certain, but a wine glass 300 years ago would only have held about a half of today’s small measure. On top of this, we also have some evidence that suggests wine glass size itself influences consumption.”
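The fold-increase behind these claims can be checked directly from the capacities reported above; a quick sketch, using only the 66ml, 449ml and 125ml figures quoted in this article:

```python
# Fold increase in mean wine glass capacity, using the study's figures:
# 66ml in the 1700s, 449ml in 2016-17, and a 125ml "small measure" today.
capacity_1700s_ml = 66
capacity_2016_17_ml = 449
small_measure_ml = 125

fold_increase = capacity_2016_17_ml / capacity_1700s_ml
print(f"{fold_increase:.1f}x")  # → 6.8x, i.e. roughly seven-fold

# An early glass held about half of today's small 125ml measure
print(f"{capacity_1700s_ml / small_measure_ml:.2f}")  # → 0.53
```

The 6.8x result matches the "seven-fold" increase in the summary, and 66ml is indeed about half of a modern 125ml small serving.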
Increases in the size of wine glasses over time likely reflect changes in a number of factors including price, technology, societal wealth and wine appreciation. The ‘Glass Excise’ tax, levied in the mid-18th century, led to the manufacture of smaller glass products. This tax was abolished in 1845, and in the late Victorian era glass production began to shift from more traditional mouth-blowing techniques to more automated processes. These changes in production are reflected in the data, which show the smallest wine glasses during the 1700s with no increases in glass size during that time-period – the increase in size beginning in the 19th century.
Two changes in the 20th century likely contributed further to increased glass sizes. Wine glasses started to be tailored in both shape and size for different wine varieties, both reflecting and contributing to a burgeoning market for wine appreciation, with larger glasses considered important in such appreciation. From 1990 onwards, demand for larger wine glasses by the US market was met by an increase in the size of glasses manufactured in England, where a ready market was also found.
A further influence on wine glass size may have come both from those running bars and restaurants, as well as their consumers. If sales of wine increased when sold in larger glasses, this may have incentivised vendors to use larger glasses. Larger wine glasses can also increase the pleasure from drinking wine, which may also increase the desire to drink more.
In England, wine is increasingly served in 250ml servings with smaller sizes of 125ml often absent from wine lists or menus, despite a regulatory requirement introduced in 2010 that licensees make customers aware that these smaller measures are available. A serving size of 250ml – one third of a standard 75cl bottle of wine and one fifth of the weekly recommended intake for low risk drinking – is larger than the mean capacity of a wine glass available in the 1980s.
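The serving-size arithmetic above can be made explicit. The bottle fraction follows directly; the "one fifth of the weekly recommended intake" figure depends on the wine's strength, so the 12.5% ABV below is an assumed, illustrative value (1 UK unit = 10ml of pure alcohol; the low-risk guideline is 14 units per week):

```python
# Arithmetic behind the 250ml serving claims. The 12.5% ABV is an
# assumed illustrative strength, not a figure from the study.
serving_ml = 250
bottle_ml = 750            # a standard 75cl bottle
abv = 0.125                # assumed wine strength
ml_per_uk_unit = 10        # 1 UK unit = 10ml of pure alcohol
weekly_low_risk_units = 14

print(serving_ml / bottle_ml)         # one third of a bottle
units = serving_ml * abv / ml_per_uk_unit
print(units)                          # ~3.1 units per serving
print(units / weekly_low_risk_units)  # roughly a fifth of the weekly guideline
```

At the assumed strength, a single 250ml serving comes to about 3.1 units, close to one fifth of the 14-unit weekly guideline.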
Alongside increased wine glass capacity, the strength of wine sold in the UK since the 1990s has also increased, thereby likely further increasing any impact of larger wine glasses on the amount of pure alcohol being consumed by wine drinkers.
The researchers argue that if the impact of larger wine glasses on consumption can be shown to be a reliable effect, then local licensing regulations limiting the size of glasses would expand the number of policy options for reducing alcohol consumption outside the home. Reducing the size of wine glasses in licensed premises might also shift the social norm of what a wine glass should look like, with the potential to influence the size of wine glasses people use at home, where most alcohol, including wine, is consumed.
In the final line of their report, the researchers acknowledge the seasonal sensitivity of these suggestions: “We predict, with moderate confidence, that, while there will be some resistance to these suggestions, their palatability will be greater in the month of January than that of December.”
The research was funded by a Senior Investigator Award to Theresa Marteau from the National Institute for Health Research.
Zupan, Z et al. Wine glass size in England from 1700 to 2017: A measure of our time. BMJ; 14 Dec 2017; DOI: 10.1136/bmj.j5623
Inset image: enamelled Jacobite portrait glass (WA19188.8.131.520). © Ashmolean Museum, University of Oxford
Our Georgian and Victorian ancestors probably celebrated Christmas with more modest wine consumption than we do today – if the size of their wine glasses is anything to go by. Researchers at the University of Cambridge have found that the capacity of wine glasses has increased seven-fold over the past 300 years, and most steeply in the last two decades as wine consumption rose.
Ancient faeces from prehistoric burials on the Greek island of Kea have provided the first archaeological evidence for the parasitic worms described 2,500 years ago in the writings of Hippocrates – the most influential works of classical medicine.
University of Cambridge researchers Evilena Anastasiou and Piers Mitchell used microscopy to study soil formed from decomposed faeces recovered from the surface of pelvic bones of skeletons buried in the Neolithic (4th millennium BC), Bronze Age (2nd millennium BC) and Roman periods (146 BC – 330 AD).
The Cambridge team worked on this project with Anastasia Papathanasiou and Lynne Schepartz, who are experts in the archaeology and anthropology of ancient Greece, and were based in Athens.
They found that eggs from two species of parasitic worm (helminths) were present: whipworm (Trichuris trichiura), and roundworm (Ascaris lumbricoides). Whipworm was present from the Neolithic, and roundworm from the Bronze Age.
Hippocrates was a medical practitioner from the Greek island of Cos, who lived in the 5th and 4th centuries BC. He became famous for developing the concept of humoural theory to explain why people became ill.
This theory – in which a healthy body has a balance of four ‘humours’: black bile, yellow bile, blood and phlegm – remained the accepted explanation for disease followed by doctors in Europe until the 17th century, over 2,000 years later.
Hippocrates and his students described many diseases in their medical texts, and historians have been trying to work out which diseases they were. Until now, they had to rely on the original written descriptions of intestinal worms to estimate which parasites may have infected the ancient Greeks. The Hippocratic texts called these intestinal worms Helmins strongyle, Ascaris, and Helmins plateia.
The researchers say that this new archaeological evidence identifies beyond doubt some of the species of parasites that infected people in the region. The findings are published today in the Journal of Archaeological Science: Reports.
“The Helmins strongyle worm in the ancient Greek texts is likely to have referred to roundworm, as found at Kea. The Ascaris worm described in the ancient medical texts may well have referred to two parasites, pinworm and whipworm, with the latter being found at Kea,” said study leader Piers Mitchell, from Cambridge’s Department of Archaeology.
“Until now we only had estimates from historians as to what kinds of parasites were described in the ancient Greek medical texts. Our research confirms some aspects of what the historians thought, but also adds new information that the historians did not expect, such as that whipworm was present”.
The mention of infections by these parasites in the Hippocratic Corpus includes symptoms of vomiting up worms, diarrhoea, fevers and shivers, heartburn, weakness, and swelling of the abdomen.
Descriptions of treatment for intestinal worms in the Corpus were mainly through medicines, such as the crushed root of the wild herb seseli mixed with water and honey taken as a drink.
“Finding the eggs of intestinal parasites as early as the Neolithic period in Greece is a key advance in our field,” said Evilena Anastasiou, one of the study’s authors. “This provides the earliest evidence for parasitic worms in ancient Greece.”
“This research shows how we can bring together archaeology and history to help us better understand the discoveries of key early medical practitioners and scientists,” added Mitchell.
Earliest archaeological evidence of intestinal parasitic worms infecting the ancient inhabitants of Greece confirms descriptions found in writings associated with Hippocrates, the early physician and ‘father of Western medicine’.
Customers today may settle for a flat white and a cinnamon swirl, but at coffee shops 250 years ago, many also expected ale, wine, and possibly a spot of calf’s foot jelly, a new study has shown.
Following its identification during an archaeological survey, researchers are publishing complete details of the most significant collection of artefacts from an early coffee shop ever recovered in the UK. The establishment, called Clapham’s, was on a site now owned by St John’s College, Cambridge, but in the mid-to-late 1700s it was a bustling coffeehouse – the contemporary equivalent, academics say, of a branch of Starbucks.
Researchers from the Cambridge Archaeological Unit – part of the Department of Archaeology at the University of Cambridge – uncovered a disused cellar which had been backfilled with unwanted items, possibly at some point during the 1770s. Inside, they found more than 500 objects, many in a very good state of preservation. These included drinking vessels for tea, coffee and chocolate, serving dishes, clay pipes, animal and fish bones, and an impressive haul of 38 teapots.
The assemblage has now been used to reconstruct what a visit to Clapham’s might have been like, and in particular what its clientele ate and drank. The report suggests that the standard view of early English coffeehouses, as civilised establishments where people engaged in sober, reasoned debate, may need some reworking.
Customers at Clapham’s, while they no doubt drank coffee, also enjoyed plenty of ale and wine, and tucked into dishes ranging from pastry-based snacks to substantial meals involving meat and seafood. The discovery of 18 jelly glasses, alongside a quantity of feet bones from immature cattle, led the researchers to conclude that calf’s foot jelly, a popular dish of that era, might well have been a house speciality.
Craig Cessford, from the Cambridge Archaeological Unit, said that by modern standards, Clapham’s was perhaps less like a coffee shop, and more like an inn.
“Coffee houses were important social centres during the 18th century, but relatively few assemblages of archaeological evidence have been recovered and this is the first time that we have been able to study one in such depth,” he said.
“In many respects, the activities at Clapham’s barely differed from contemporary inns. It seems that coffeehouses weren’t the completely distinct establishments they are now – they were perhaps at the genteel end of a spectrum that ran from alehouse to coffeehouse.”
Although the saturation of British high streets with coffee shops is sometimes considered a recent phenomenon, they were in fact also extremely common centuries ago. Coffee-drinking first came to Britain in the 16th century and increased in popularity thereafter. By the mid-18th century there were thousands of coffeehouses, which acted as important gathering places and social hubs. Only towards the end of the 1700s did these start to disappear, as tea eclipsed coffee as the national drink.
Clapham’s was owned by a couple, William and Jane Clapham, who ran it from the 1740s until the 1770s. It was popular with students and townspeople alike, and a surviving verse from a student publication of 1751 even attests to its importance as a social centre: “Dinner over, to Tom’s or Clapham’s I go; the news of the town so impatient to know.”
The researchers think that the cellar was perhaps backfilled towards the end of the 1770s, when Jane, by then a widow, retired and her business changed hands. It then lay forgotten until St John’s commissioned and paid for a series of archaeological surveys on and around the site of its Old Divinity School, which were completed in 2012.
Some of the items found were still clearly marked with William and Jane’s initials. They included tea bowls (the standard vessel for drinking tea at the time), saucers, coffee cans and cups, and chocolate cups – which the researchers were able to distinguish because they were taller, since “chocolate was served with a frothy, foamy head”. They also found sugar bowls, milk and cream jugs, mixing bowls, storage jars, plates, bowls, serving dishes, sauceboats, and many other objects.
Even though Clapham’s was a coffeehouse, the finds suggest that tea was fast winning greater affection among drinkers; tea bowls were almost three times as common as coffee cans or cups.
Perhaps more striking, however, was the substantial collection of tankards, wine bottles and glasses, indicating that alcohol consumption was normal. Some drinkers appear to have had favourite tankards reserved for their personal use, while the team also found two-handled cups, possibly for drinking “possets” – milk curdled with wine or ale, and often spiced.
Compared with the sandwiches and muffins on offer in coffee shops today, dining was a much bigger part of life at Clapham’s. Utensils and crockery were found for making patties, pastries, tarts, jellies, syllabubs and other desserts. Animal bones revealed that patrons enjoyed shoulders and legs of mutton, beef, pork, hare, rabbit, chicken and goose. The researchers also found oyster shells, and bones from fish such as eel, herring and mackerel.
Although coffeehouses have traditionally been associated with the increasing popularity of smoking in Britain, there was little evidence of smoking at Clapham’s. Just five clay pipes were found, including one particularly impressive specimen which carries the slogan “PARKER for ever, Huzzah” – possibly referring to the naval Captain Peter Parker, who was celebrated for his actions during the American War of Independence. The lack of pipes may be because, at the time, tobacco was considered less fashionable than snuff.
Together, the assemblage adds up to a picture in which, rather than making short visits to catch up on the news and engage in polite conversation, customers often settled in for the evening at an establishment that offered them not just hot beverages, but beer, wine, punch and liqueurs, as well as extensive meals. Some even seem to have “ordered out” from nearby inns if their favourite food was not on the menu.
There was little evidence, too, that they read newspapers and pamphlets, the rise of which historians also link to coffeehouses. Newspapers were perishable and therefore unlikely to survive in the archaeological record, but the researchers also point out that other evidence of reading – such as book clasps – has been found on the site of inns nearby, while it is absent here.
“We need to remember this was just one of thousands of coffeehouses and Clapham’s may have been atypical in some ways,” Cessford added. “Despite this it does give us a clearer sense than we’ve ever had before of what these places were like, and a tentative blueprint for spotting the traces of other coffeehouse sites in archaeological assemblages in the future.”
Researchers have published details of the largest collection of artefacts from an early English coffeehouse ever discovered. Described as an 18th century equivalent of Starbucks, the finds nonetheless suggest that it may have been less like a café, and more like an inn.
Many animals have evolved to stand out. Bright colours may be easy to spot, but they warn predators off by signalling toxicity or foul taste.
Yet if every individual predator has to eat colourful prey to learn this unappetising lesson, it’s a puzzle how conspicuous colours had the chance to evolve as a defensive strategy.
Now, a new study using the great tit species as a “model predator” has shown that if one bird observes another being repulsed by a new type of prey, then both birds learn the lesson to stay away.
By filming a great tit having a terrible dining experience with conspicuous prey, then showing it on a television to other tits before tracking their meal selection, researchers found that birds acquired a better idea of which prey to avoid: those that stand out.
The team behind the study, published in the journal Nature Ecology & Evolution, say the ability of great tits to learn what to avoid through observing others is an example of “social transmission” of information.
The scientists scaled up data from their experiments through mathematical modelling to reveal a tipping point: where social transmission has occurred sufficiently in a predator species for its potential prey to stand a better chance with bright colours over camouflage.
“Our study demonstrates that the social behaviour of predators needs to be considered to understand the evolution of their prey,” said lead author Dr Rose Thorogood, from the University of Cambridge’s Department of Zoology.
“Without social transmission taking place in predator species such as great tits, it becomes extremely difficult for conspicuously coloured prey to outlast and outcompete alternative prey, even if they are distasteful or toxic.
“There is mounting evidence that learning by observing others occurs throughout the animal kingdom. Species ranging from fruit flies to trout can learn about food using social transmission.
“We suspect our findings apply over a wide range of predators and prey. Social information may have evolutionary consequences right across ecological communities.”
Thorogood (also based at the Helsinki Institute of Life Science) and colleagues from the University of Jyväskylä and University of Zürich captured wild great tits in the Finnish winter. At Konnevesi Research Station, they trained the birds to open white paper packages with pieces of almond inside as artificial prey.
The birds were given access to aviaries covered in white paper dotted with small black crosses. These crosses were also marked on some of the paper packages: the camouflaged prey.
One bird was filmed unwrapping a package stamped with a square instead of a cross: the conspicuous prey. As such, its contents were unpalatable – an almond soaked with bitter-tasting fluid.
The bird’s reaction was played on a TV in front of some great tits but not others (a control group). When foraging in the cross-covered aviaries containing both cross and square packages, the birds exposed to the video were quicker to select their first item, and 32% less likely to choose the ‘conspicuous’ square prey.
“Just as we might learn to avoid certain foods by seeing a facial expression of disgust, observing another individual headshake and wipe its beak encouraged the great tits to avoid that type of prey,” said Thorogood.
“By modelling the social spread of information from our experimental data, we worked out that predator avoidance of more vividly conspicuous species would become enough for them to survive, spread, and evolve.”
Great tits – a close relation of North America’s chickadee – make a good study species as they are “generalist insectivores” that forage in flocks, and are known to spread other forms of information through observation.
Famously, species of tit learned how to pierce milk bottle lids and siphon the cream during the middle of last century – a phenomenon that spread rapidly through flocks across the UK.
Something great tits don’t eat, however, is a seven-spotted ladybird. “One of the most common ladybird species is bright red, and goes untouched by great tits. Other insects that are camouflaged, such as the brown larch ladybird or green winter moth caterpillar, are fed on by great tits and their young,” said Thorogood.
“The seven-spotted ladybird is so easy to see that if every predator had to eat one before they discovered its foul taste, it would have struggled to survive and reproduce.
“We think it may be the social information of their unpalatable nature spreading through predator species such as great tits that makes the paradox of conspicuous insects such as seven-spotted ladybirds possible.”
A new study of TV-watching great tits reveals how they learn through observation. Social interactions within a predator species can have “evolutionary consequences” for potential prey – such as the conspicuous warning colours of insects like ladybirds.
While the prevalence of anxiety and depression among first year undergraduates is lower than the general population, it increases to overtake this during their second year. The number of students accessing counselling services in the UK grew by 50% from 2010 to 2015, surpassing the growth in the number of students during the same period. There is little consensus as to whether students are suffering more mental disorders, are less resilient than in the past or whether there is less stigma attached to accessing support. Regardless, mental health support services for students are becoming stretched.
Recent years have seen increasing interest in mindfulness, a means of training attention for the purpose of mental wellbeing based on the practice of meditation. There is evidence that mindfulness training can improve symptoms of common mental health issues such as anxiety and depression. However, there is little robust evidence on the effectiveness of mindfulness training in preventing such problems in university students.
“Given the increasing demands on student mental health services, we wanted to see whether mindfulness could help students develop preventative coping strategies,” says Géraldine Dufour, Head of the University of Cambridge’s Counselling Service. Dufour is one of the authors of a study that set out to test the effectiveness of mindfulness – the results are published today in The Lancet Public Health.
In total, 616 students took part in the study and were randomised across two groups. Both groups were offered access to comprehensive centralised support at the University of Cambridge Counselling Service in addition to support available from the university and its colleges, and from health services including the National Health Service.
Half of the cohort (309 students) were also offered the Mindfulness Skills for Students course. This consisted of eight weekly, face-to-face, group-based sessions based on the course book Mindfulness: A Practical Guide to Finding Peace in a Frantic World, adapted for university students. Students were also encouraged to practise at home, starting with eight-minute meditations and increasing to about 15-25 minutes per day, as well as other mindfulness practices such as mindful walking and mindful eating. Students in the other half of the cohort were offered their mindfulness training the following year.
The researchers assessed the impact of the mindfulness training on stress (‘psychological distress’) during the main, annual examination period in May and June 2016, the most stressful weeks for most students. They measured this using the CORE-OM, a generic assessment used in many counselling services.
The mindfulness course led to lower distress scores after the course and during the exam term compared with students who only received the usual support. Mindfulness participants were a third less likely than other participants to have scores above a threshold commonly seen as meriting mental health support. Distress scores for the mindfulness group during exam time fell below their baseline levels (as measured at the start of the study, before exam time), whereas the students who received the standard support became increasingly stressed as the academic year progressed.
The researchers also looked at other measures, such as self-reported wellbeing. They found that mindfulness training improved wellbeing during the exam period when compared with the usual support.
“This is, to the best of our knowledge, the most robust study to date to assess mindfulness training for students, and backs up previous studies that suggest it can improve mental health and wellbeing during stressful periods,” says Dr Julieta Galante from the Department of Psychiatry at Cambridge, who led the study.
“Students who had been practising mindfulness had distress scores lower than their baseline levels even during exam time, which suggests that mindfulness helps build resilience against stress.”
Professor Peter Jones, also from the Department of Psychiatry, adds: “The evidence is mounting that mindfulness training can help people cope with accumulative stress. While these benefits may be similar to some other preventative methods, mindfulness could be a useful addition to the interventions already delivered by university counselling services. It appears to be popular, feasible, acceptable and without stigma.”
The team also looked at whether mindfulness had any effect on examination results; however, their findings proved inconclusive.
The research was supported by the University of Cambridge and the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care East of England, hosted by Cambridgeshire and Peterborough NHS Foundation Trust.
Galante, J et al. Effectiveness of providing university students with a mindfulness-based intervention to increase resilience to stress: a pragmatic randomised controlled trial. Lancet Public Health; 19 December 2017; DOI: 10.1016/S2468-2667(17)30231-1
Mindfulness training can help support students at risk of mental health problems, concludes a randomised controlled trial carried out by researchers at the University of Cambridge.
Dr Julieta Galante is a research associate in the Department of Psychiatry. Her interests lie in mental health promotion, particularly the effects of meditation on mental health. She hopes to contribute to the growing number of approaches to preventing mental health problems that do not rely on medication.
“What fascinates me is the idea that you could potentially train your mind to improve your wellbeing and develop yourself as a person,” she says. “It’s not the academic type of mind-training – meditation training is more like embarking on a deep inner-exploration.”
Galante’s research involves studying large numbers of people in real-world settings, such as busy students revising for their exams. It’s a very complex research field, she says: there are many factors, social, psychological and biological, that contribute to an individual’s mental health.
“Our projects are most successful (and enjoyable) when we collaborate with people outside the academic sphere, in this particular project with the Student Counselling Service, University authorities, and the students themselves.”
The mindfulness trial was ‘blinded’, meaning that the researchers did not know which students (and hence which data) belonged to which group. The ‘unblinding’ of the results – when they found out whether their trial was successful – was nerve-wracking, she says. “The team statistician didn’t know which group had received mindfulness training and which group was the control. He showed his results to the rest of the team and we could all see that there was a clear difference between the groups, but we didn’t know whether this meant really good or really bad news for mindfulness training. When the results were then unveiled, we all laughed with relief!”
Pulsars are known for their extreme conditions. Each is a fast-spinning neutron star - the collapsed core of a massive star that has gone supernova at the end of its life. Only 10 to 30 kilometres across, a pulsar possesses enormous magnetic fields, accretes matter, and regularly gives out large bursts of X-rays and highly energetic particles.
Surprisingly, despite this hostile environment, neutron stars are known to host exoplanets. The first exoplanets ever discovered were around the pulsar PSR B1257+12 - but whether these planets were originally in orbit around the precursor massive star and survived the supernova explosion, or formed in the system later remains an open question. Such planets would receive little visible light but would be continually blasted by the energetic radiation and stellar wind from the host. Could such planets ever host life?
For the first time, astronomers have tried to calculate the ‘habitable’ zones near neutron stars - the range of orbits around a star where a planetary surface could possibly support water in a liquid form. Their calculations show that the habitable zone around a neutron star can be as large as the distance from our Earth to our Sun. An important premise is that the planet must be a super-Earth, with a mass between one and ten times our Earth. A smaller planet will lose its atmosphere within a few thousand years under the onslaught of the pulsar winds. To survive this barrage, a planet’s atmosphere must be a million times thicker than ours - the conditions on a pulsar planet surface might resemble those of the deep ocean floor on Earth.
The astronomers studied the pulsar PSR B1257+12, about 2300 light-years away, as a test case, using the Chandra X-ray space telescope. Of the three planets in orbit around the pulsar, two are super-Earths with a mass of four to five times our Earth, and orbit close enough to the pulsar to warm up. According to co-author Alessandro Patruno from Leiden University, “The temperature of the planets might be suitable for the presence of liquid water on their surface. Though, we don't know yet if the two super-Earths have the right, extremely dense atmosphere.”
In the future, Patruno and his co-author Mihkel Kama from Cambridge's Institute of Astronomy would like to observe the pulsar in more detail and compare it with other pulsars. The European Southern Observatory’s ALMA Telescope would be able to show dust discs around neutron stars, which are good predictors of planets. The Milky Way contains about one billion neutron stars, of which about 200,000 are pulsars. So far, 3000 pulsars have been studied and only five pulsar planets have been found.
A. Patruno & M. Kama. ‘Neutron Star Planets: Atmospheric processes and habitability.’ Accepted for publication in Astronomy & Astrophysics.
Adapted from a NOVA press release.
It is theoretically possible that habitable planets exist around pulsars - spinning neutron stars that emit short, quick pulses of radiation. According to new research, such planets must have an enormous atmosphere that converts the deadly X-rays and high energy particles of the pulsar into heat. The results, from astronomers at the University of Cambridge and Leiden University, are reported in the journal Astronomy & Astrophysics.
A vast new study of changes in global wildlife over almost three decades has found that low levels of effective national governance are the strongest predictor of declining species numbers – more so than economic growth, climate change or even surges in human population.
The findings, published in the journal Nature, also show that protected conservation areas do maintain wildlife diversity, but only when situated in countries that are reasonably stable politically with sturdy legal and social structures.
The research used the fate of waterbird species since 1990 as a bellwether for broad biodiversity trends, as their wetland habitats are among the most diverse as well as the most endangered on Earth.
An international team of scientists and conservation experts led by the University of Cambridge analysed over 2.4 million annual count records of 461 waterbird species across almost 26,000 different survey sites around the world.
The researchers used this giant dataset to model localised species changes in nations and regions. Results were compared to the Worldwide Governance Indicators, which measure everything from violence rates and rule of law to political corruption, as well as data such as gross domestic product (GDP) and conservation performance.
The team discovered that waterbird decline was greater in regions of the world where governance is, on average, less effective: such as Western and Central Asia, South America and sub-Saharan Africa.
The healthiest overall species quotas were seen in continental Europe, although even here the levels of key species were found to have nosedived.
This is the first time that effectiveness of national governance and levels of socio-political stability have been identified as the most significant global indicator of biodiversity and species loss.
“Although the global coverage of protected areas continues to increase, our findings suggest that ineffective governance could undermine the benefits of these biodiversity conservation efforts,” says Cambridge’s Dr Tatsuya Amano, who led the study at the University’s Department of Zoology and Centre for the Study of Existential Risk.
“We now know that governance and political stability is a vital consideration when developing future environmental policies and practices.”
For the latest study, Amano worked with Cambridge colleagues as well as researchers from the universities of Bath, UK, and Santa Clara, US, and conservation organisations Wetlands International and the National Audubon Society.
The lack of global-level data on changes to the natural world limits our understanding of the “biodiversity crisis”, say the study’s authors. However, they say there are advantages to focusing on waterbirds when trying to gauge these patterns.
Waterbirds are a diverse group of animals, from ducks and heron to flamingos and pelicans. Their wetland habitats cover some 1.3 billion hectares of the planet – from coast to freshwater and even highland – and provide crucial “ecosystem services”. Wetlands have also been degraded more than any other form of ecosystem.
In addition, waterbirds have a long history of population monitoring. The annual global census run by Wetlands International has involved more than 15,000 volunteers over the last 50 years, and the National Audubon Society’s annual Christmas bird count dates back to 1900.
“Our study shows that waterbird monitoring can provide useful lessons about what we need to do to halt the loss of biodiversity,” said co-author Szabolcs Nagy, Coordinator of the African-Eurasian Waterbird Census at Wetlands International.
Compared to all the “anthropogenic impacts” tested by the researchers, national governance was the most significant. “Ineffective governance is often associated with lack of environmental enforcement and investment, leading to habitat loss,” says Amano.
The study also uncovered a relationship between the speed of GDP growth and biodiversity: the faster GDP per capita was growing, the greater the decline in waterbird species.
Diversity on a localised level was worst affected on average in South America, with a 0.95% annual loss equating to a 21% decline across the region over 25 years. Amano was also surprised to find severe species loss across inland areas of western and central Asia.
The researchers point out that poor water management and dam construction in parts of Asia and South America have caused wetlands to permanently dry out in countries such as Iran and Argentina – even in areas designated as protected.
Impotent hunting regulations can also explain species loss under ineffective governance. “Political instability can weaken legal enforcement, and consequently promote unsuitable, often illegal, killing even in protected areas,” says Amano.
In fact, the researchers found that protected conservation areas simply did not benefit biodiversity if they were located in nations with weak governance.
Recent Cambridge research involving Amano suggests that grassroots initiatives led by local and indigenous groups can be more effective than governments at protecting ecosystems – one possible conservation approach for regions suffering from political instability.
Amano, T et al. Successful conservation of global waterbird populations depends on effective governance. Nature; 20 December 2017; DOI: 10.1038/nature25139
Big data study of global biodiversity shows ineffective national governance is a better indicator of species decline than any other measure of “anthropogenic impact”. Even protected conservation areas make little difference in countries that struggle with socio-political stability.
One of the fundamental ideas of quantum theory is that quantum objects can exist both as a wave and as a particle, and that they don’t exist as one or the other until they are measured. This is the premise that Erwin Schrödinger was illustrating with his famous thought experiment involving a dead-or-maybe-not-dead cat in a box.
“This premise, commonly referred to as the wave function, has been used more as a mathematical tool than a representation of actual quantum particles,” said David Arvidsson-Shukur, a PhD student at Cambridge’s Cavendish Laboratory, and the paper’s first author. “That’s why we took on the challenge of creating a way to track the secret movements of quantum particles.”
Any particle will always interact with its environment, ‘tagging’ it along the way. Arvidsson-Shukur, working with his co-authors Professor Crispin Barnes from the Cavendish Laboratory and Axel Gottfries, a PhD student from the Faculty of Economics, outlined a way for scientists to map these ‘tagging’ interactions without looking at them. The technique would be useful to scientists who make measurements at the end of an experiment but want to follow the movements of particles during the full experiment.
Some quantum scientists have suggested that information can be transmitted between two people – usually referred to as Alice and Bob – without any particles travelling between them. In a sense, Alice gets the message telepathically. This has been termed counterfactual communication because it goes against the accepted ‘fact’ that for information to be carried between sources, particles must move between them.
“To measure this phenomenon of counterfactual communication, we need a way to pin down where the particles between Alice and Bob are when we’re not looking,” said Arvidsson-Shukur. “Our ‘tagging’ method can do just that. Additionally, we can verify old predictions of quantum mechanics, for example that particles can exist in different locations at the same time.”
The founders of modern physics devised formulas to calculate the probabilities of different results from quantum experiments. However, they did not provide any explanations of what a quantum particle is doing when it’s not being observed. Earlier experiments have suggested that the particles might do non-classical things when not observed, like existing in two places at the same time. In their paper, the Cambridge researchers considered the fact that any particle travelling through space will interact with its surroundings. These interactions are what they call the ‘tagging’ of the particle. The interactions encode information in the particles that can then be decoded at the end of an experiment, when the particles are measured.
The researchers found that this information encoded in the particles is directly related to the wave function that Schrödinger postulated a century ago. Previously the wave function was thought of as an abstract computational tool to predict the outcomes of quantum experiments. “Our result suggests that the wave function is closely related to the actual state of particles,” said Arvidsson-Shukur. “So, we have been able to explore the ‘forbidden domain’ of quantum mechanics: pinning down the path of quantum particles when no one is observing them.”
D. R. M. Arvidsson-Shukur, C. H. W. Barnes, and A. N. O. Gottfries. ‘Evaluation of counterfactuality in counterfactual communication protocols’. Physical Review A (2017). DOI: 10.1103/PhysRevA.96.062316
Researchers from the University of Cambridge have taken a peek into the secretive domain of quantum mechanics. In a theoretical paper published in the journal Physical Review A, they have shown that the way that particles interact with their environment can be used to track quantum particles when they’re not being observed, which had been thought to be impossible.
Professor Sir Keith Peters, who was first honoured as a Knight Bachelor in the 1993 New Year’s Honours list, was awarded a GBE (Knight Grand Cross of the Order of the British Empire) for Services to the Advancement of Medical Science.
Sir Keith Peters, former Regius Professor of Physic at Cambridge University and an honorary fellow of Clare Hall and Christ's, said: “I am delighted to have been able to contribute to Cambridge.
"This is indeed a great personal honour but one which also reflects the contribution of many colleagues in Cambridge who’ve done so much for Cambridge medicine.”
The citation for his honour reads: “Sir Keith Peters is one of the UK’s most influential clinical academics who has made a series of lasting impacts on medicine and science. Most recently, he made a major contribution to the conception and establishment of the Francis Crick Institute.
"Earlier, he was a driving force at the Royal Postgraduate Medical School, Hammersmith, where his work on immune mechanisms in kidney disease changed clinical practice.
"In Cambridge he transformed its Clinical School and led the development of what is now the Cambridge Biomedical Campus. He is a Fellow of the Royal Society, was President of the Academy of Medical Sciences, and from 2005-2016 Senior Consultant to GlaxoSmithKline.”
Ian Goodyer, Emeritus Professor of Child and Adolescent Psychiatry at the Department of Psychiatry in the Cambridge Clinical School was honoured for his work in psychiatric research with an OBE. He is an Emeritus Fellow of Wolfson College.
Dr Tina Barsby, CEO of Cambridge-based crop science organisation NIAB and Fellow of St Edmund’s College, has been awarded an OBE for services to agricultural science and biotechnology.
Dr Barsby said: “This award is a great honour for me and a tribute to all the colleagues I’ve worked with across the industry over the years.
"Every day I’m inspired by the work being carried out at NIAB and the essential contribution we are making to help our industry fulfil its potential in food production.”
Professor Diane Coyle, who will become Cambridge’s inaugural Bennett Professor of Public Policy in March, was awarded a CBE. A Fellow of Churchill College, she will join the Department of Politics and International Studies at the University of Cambridge in her new role.
Cambridge alumni honoured include actor Hugh Laurie, who read English at Selwyn College and was President of the Footlights. He was made an OBE in 2007, and is now being honoured with the higher award of CBE.
Founder of search engine blinkx Suranga Chandratillake, who has an MA in Computer Science from Cambridge, was awarded an OBE for his achievements in engineering and technology.
Former deputy prime minister and Liberal Democrat leader Nick Clegg, who read Archaeology and Anthropology at Robinson College, was appointed a knight bachelor.
Two other alumni, Anthony Habgood and Kenneth Aphunezi Olisa, will also be appointed knight bachelors – the former for services to UK Industry, the latter for services to business and philanthropy. Christopher Geidt and Philip McDougall Rutnam, both Trinity Hall alumni, are being knighted in the Order of the Bath, the former as a Knight Grand Cross, and the latter as a Knight Commander.
Altogether, at least 14 Cambridge alumni have been recognised in the New Year Honours list. They include Oxford professor of economic history Jane Humphries (CBE), Hay Festival director Peter Florence (CBE), and CEO of DeepMind Dr Demis Hassabis (CBE).
Dr John Sulston, former Director of the Sanger Centre, was awarded a knighthood for services to genome research in the list. He stressed that he felt he was accepting the award on behalf of all the staff at the Sanger Centre, adding: "What I most value is the recognition of the Sanger Centre team, and that their achievement is important to the people of this country."
The Honours list, which dates back to around 1890, recognises notable services and contributions to Britain.
Members of collegiate Cambridge have been recognised for their outstanding contributions to society.
In recent years, there has been a concerted effort among scientists to map the connections in the brain – the so-called ‘connectome’ – and to understand how this relates to human behaviours, such as intelligence and mental health disorders.
Now, in research published in the journal Neuron, an international team led by scientists at the University of Cambridge and the National Institutes of Health (NIH), USA, has shown that it is possible to build up a map of the connectome by analysing conventional brain scans taken using a magnetic resonance imaging (MRI) scanner.
The team compared the brains of 296 typically-developing adolescent volunteers. Their results were then validated in a cohort of a further 124 volunteers. The team used a conventional 3T MRI scanner, where 3T represents the strength of the magnetic field; however, Cambridge has recently installed a much more powerful Siemens 7T Terra MRI scanner, which should allow this technique to give an even more precise mapping of the human brain.
A typical MRI scan provides a single image of the brain, from which multiple structural features can be calculated. This means that every region of the brain can be described using as many as ten different characteristics. If two regions have similar profiles across these features, they are described as having ‘morphometric similarity’, and the researchers showed that such regions can be assumed to be connected as a network. They verified this assumption using publicly available MRI data on a cohort of 31 juvenile rhesus macaque monkeys, comparing their results to ‘gold-standard’ connectivity estimates in that species.
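The construction of a morphometric similarity network can be sketched in a few lines. This is a minimal illustration with made-up data – the region count, feature count and use of Pearson correlation are assumptions for demonstration, not the study’s exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 10 structural features (e.g. cortical thickness,
# volume) measured for each of 68 brain regions from a single MRI scan.
n_regions, n_features = 68, 10
features = rng.normal(size=(n_regions, n_features))

# Z-score each feature across regions so the ten characteristics are
# on a comparable scale.
z = (features - features.mean(axis=0)) / features.std(axis=0)

# Morphometric similarity: correlation between the feature profiles of
# every pair of regions gives a region-by-region similarity matrix.
msn = np.corrcoef(z)      # shape (68, 68)
np.fill_diagonal(msn, 0)  # ignore each region's similarity to itself

print(msn.shape)
```

Pairs of regions with high similarity values would then be treated as connected in the network.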
Using these morphometric similarity networks (MSNs), the researchers were able to build up a map showing how well connected the ‘hubs’ – the major connection points between different regions of the brain network – were. They found a link between the connectivity in the MSNs in brain regions linked to higher order functions – such as problem solving and language – and intelligence.
“We saw a clear link between the ‘hubbiness’ of higher-order brain regions – in other words, how densely connected they were to the rest of the network – and an individual’s IQ,” explains PhD candidate Jakob Seidlitz at the University of Cambridge and NIH. “This makes sense if you think of the hubs as enabling the flow of information around the brain – the stronger the connections, the better the brain is at processing information.”
While IQ varied across the participants, the MSNs accounted for around 40% of this variation – higher-resolution multi-modal data from a 7T scanner may be able to account for an even greater proportion of the individual variation, say the researchers.
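“Accounting for 40% of the variation” corresponds to an R² of about 0.4 in a regression of IQ on a network measure. The sketch below uses entirely synthetic data, constructed so that a hypothetical hub-strength score explains roughly that proportion of IQ variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hub-strength scores for 296 participants, plus IQ scores
# built so hub strength explains ~40% of the variance (illustrative only).
n = 296
hub_strength = rng.normal(size=n)
iq = 100 + 10 * (np.sqrt(0.4) * hub_strength
                 + np.sqrt(0.6) * rng.normal(size=n))

# Least-squares fit of IQ on hub strength; R^2 is the fraction of
# IQ variance the predictor accounts for.
slope, intercept = np.polyfit(hub_strength, iq, 1)
pred = slope * hub_strength + intercept
r2 = 1 - np.sum((iq - pred) ** 2) / np.sum((iq - iq.mean()) ** 2)

print(round(r2, 2))  # close to 0.4 by construction
```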
“What this doesn’t tell us, though, is where exactly this variation comes from,” adds Seidlitz. “What makes some brains more connected than others – is it down to their genetics or their educational upbringing, for example? And how do these connections strengthen or weaken across development?”
“This could take us closer to being able to get an idea of intelligence from brain scans, rather than having to rely on IQ tests,” says Professor Ed Bullmore, Head of Psychiatry at Cambridge. “Our new mapping technique could also help us understand how the symptoms of mental health disorders such as anxiety and depression or even schizophrenia arise from differences in connectivity within the brain.”
The research was funded by the Wellcome Trust and the National Institutes of Health.
Seidlitz, J et al. Morphometric Similarity Networks Detect Microscale Cortical Organisation and Predict Inter-Individual Cognitive Variation. Neuron; 21 Dec 2017; DOI: 10.1016/j.neuron.2017.11.039
A new and relatively simple technique for mapping the wiring of the brain has shown a correlation between how well connected an individual’s brain regions are and their intelligence, say researchers at the University of Cambridge.
Jakob Seidlitz is a PhD student on the NIH Oxford-Cambridge Scholars Programme. A graduate of the University of Rochester, USA, he spends half of his time in Cambridge and half at the National Institutes of Health in the USA.
Jakob’s research aims to better understand the origins of psychiatric disease, using techniques such as MRI to study child and adolescent brain development and map patterns of brain connectivity.
“A typical day consists of performing MRI data analysis, statistical testing, reading scientific literature, and preparing and editing manuscripts,” he says. “It’s great being able to work on such amazing large-scale neuroimaging datasets that allow for answering longstanding questions in psychiatry.”
“Cambridge is a great place for my work. Ed [Bullmore], my supervisor, is extremely inclusive and collaborative, which meant developing relationships within and outside the department. Socially, the college post-grad community is amazingly diverse and welcoming, and the collegiate atmosphere of Cambridge can be truly inspiring.”
Jakob is a member of Wolfson College. Outside of his research, he plays football for the ‘Blues’ (the Cambridge University Association Football Club).
The data, which came from archaeological finds in Alaska, also points to the existence of a previously unknown Native American population, whom academics have named “Ancient Beringians”.
The findings are being published in the journal Nature and present possible answers to a series of long-standing questions about how the Americas were first populated.
It is widely accepted that the earliest settlers crossed from what is now Russia into Alaska via an ancient land bridge spanning the Bering Strait which was submerged at the end of the last Ice Age. Issues such as whether there was one founding group or several, when they arrived, and what happened next, are the subject of extensive debate, however.
In the new study, an international team of researchers led by academics from the Universities of Cambridge and Copenhagen sequenced the full genome of an infant – a girl named Xach'itee'aanenh t'eede gay, or Sunrise Child-girl, by the local Native community - whose remains were found at the Upward Sun River archaeological site in Alaska in 2013.
To their surprise, they found that although the child had lived around 11,500 years ago, long after people first arrived in the region, her genetic information did not match either of the two recognised branches of early Native Americans, which are referred to as Northern and Southern. Instead, she appeared to have belonged to an entirely distinct Native American population, which they called Ancient Beringians.
Further analyses then revealed that the Ancient Beringians were an offshoot of the same ancestor population as the Northern and Southern Native American groups, but that they separated from that population earlier in its history. This timeline allowed the researchers to construct a picture of how and when the continent might have been settled by a common founding population of ancestral Native Americans, which gradually divided into these different sub-groupings.
The study was led by Professor Eske Willerslev, who holds positions both at St John’s College, University of Cambridge, and the University of Copenhagen in Denmark.
“The Ancient Beringians diversified from other Native Americans before any ancient or living Native American population sequenced to date. It’s basically a relict population of an ancestral group which was common to all Native Americans, so the sequenced genetic data gave us enormous potential in terms of answering questions relating to the early peopling of the Americas,” he said.
“We were able to show that people probably entered Alaska before 20,000 years ago. It’s the first time that we have had direct genomic evidence that all Native Americans can be traced back to one source population, via a single, founding migration event.”
The study compared data from the Upward Sun River remains with both ancient genomes, and those of numerous present-day populations. This allowed the researchers first to establish that the Ancient Beringian group was more closely related to early Native Americans than their Asian and Eurasian ancestors, and then to determine the precise nature of that relationship and how, over time, they split into distinct populations.
Until now, the existence of two separate Northern and Southern branches of early Native Americans has divided academic opinion regarding how the continent was populated. Researchers have disagreed over whether these two branches split after humans entered Alaska, or whether they represent separate migrations.
The Upward Sun River genome shows that Ancient Beringians were isolated from the common, ancestral Native American population, both before the Northern and Southern divide, and after the ancestral source population was itself isolated from other groups in Asia. The researchers say that this means it is likely there was one wave of migration into the Americas, with all subdivisions taking place thereafter.
According to the researchers’ timeline, the ancestral population first emerged as a separate group around 36,000 years ago, probably somewhere in northeast Asia. Constant contact with Asian populations continued until around 25,000 years ago, when the gene flow between the two groups ceased. This cessation was probably caused by brutal changes in the climate, which isolated the Native American ancestors. “It therefore probably indicates the point when people first started moving into Alaska,” Willerslev said.
Around the same time, there was a level of genetic exchange with an ancient North Eurasian population. Previous research by Willerslev has shown that a relatively specific, localised level of contact between this group, and East Asians, led to the emergence of a distinctive ancestral Native American population.
Ancient Beringians themselves then separated from the ancestral group around 20,000 years ago, earlier than either the Northern or Southern branches. Genetic contact continued with their Native American cousins, however, at least until the Upward Sun River girl was born in Alaska around 8,500 years later.
The geographical proximity required for ongoing contact of this sort led the researchers to conclude that the initial migration into the Americas had probably already taken place when the Ancient Beringians broke away from the main ancestral line. José Víctor Moreno-Mayar, from the University of Copenhagen, said: “It looks as though this Ancient Beringian population was up there, in Alaska, from 20,000 years ago until 11,500 years ago, but they were already distinct from the wider Native American group.”
Finally, the researchers established that the Northern and Southern Native American branches only split between 17,000 and 14,000 years ago which, based on the wider evidence, indicates that they must have already been on the American continent south of the glacial ice.
The divide probably occurred after their ancestors had passed through, or around, the Laurentide and Cordilleran ice sheets – two vast glaciers which covered what is now Canada and parts of the northern United States, but began to thaw at around this time.
The continued existence of these ice sheets across much of the north of the continent would have isolated the southbound travellers from the Ancient Beringians in Alaska, who were eventually replaced or absorbed by other Native American populations. Although modern populations in both Alaska and northern Canada belong to the Northern Native American branch, the analysis shows that these derive from a later “back” migration north, long after the initial migration events.
“One significant aspect of this research is that some people have claimed the presence of humans in the Americas dates back earlier – to 30,000 years, 40,000 years, or even more,” Willerslev added. “We cannot prove that those claims are not true, but what we are saying, is that if they are correct, they could not possibly have been the direct ancestors to contemporary Native Americans.”
Direct genetic traces of the earliest Native Americans have been identified for the first time in a new study. The genetic evidence suggests that people may have entered the continent in a single migratory wave, perhaps arriving more than 20,000 years ago.
An estimated 44 million people worldwide are living with Alzheimer’s disease, a disease whose symptoms include memory problems, changes in behaviour and progressive loss of independence. These symptoms are caused by the build-up in the brain of two abnormal proteins: amyloid beta and tau. It is thought that amyloid beta occurs first, encouraging the appearance and spread of tau – and it is this latter protein that destroys the nerve cells, eating away at our memories and cognitive functions.
Until a few years ago, it was only possible to look at the build-up of these proteins by examining the brains of Alzheimer’s patients who had died, post mortem. However, recent developments in positron emission tomography (PET) scanning have enabled scientists to begin imaging their build-up in patients who are still alive: a patient is injected with a radioactive ligand, a tracer molecule that binds to the target (tau) and can be detected using a PET scanner.
In a study published today in the journal Brain, a team led by scientists at the University of Cambridge describe using a combination of imaging techniques to examine how patterns of tau relate to the wiring of the brain in 17 patients with Alzheimer’s disease, compared to controls.
Quite how tau appears throughout the brain has been the subject of speculation among scientists. One hypothesis is that harmful tau starts in one place and then spreads to other regions, setting off a chain reaction. This idea – known as ‘transneuronal spread’ – is supported by studies in mice: when a mouse is injected with abnormal human tau, the protein spreads rapidly throughout the brain. This evidence is controversial, however, because the injected dose is far higher, relative to brain size, than the tau levels observed in human brains, and because tau spreads rapidly through a mouse’s brain but only slowly through a human one.
There are also two other competing hypotheses. The ‘metabolic vulnerability’ hypothesis says that tau is made locally in nerve cells, but that some regions have higher metabolic demands and hence are more vulnerable to the protein. In these cases tau is a marker of distress in cells.
The third hypothesis, ‘trophic support’, also suggests that some brain regions are more vulnerable than others, but that this is less to do with metabolic demand and more to do with a lack of nutrition to the region or with gene expression patterns.
Thanks to the developments in PET scanning, it is now possible to compare these hypotheses.
“Five years ago, this type of study would not have been possible, but thanks to recent advances in imaging, we can test which of these hypotheses best agrees with what we observe,” says Dr Thomas Cope from the Department of Clinical Neurosciences at the University of Cambridge, the study’s first author.
Dr Cope and colleagues looked at the functional connections within the brains of the Alzheimer’s patients – in other words, how their brains were wired up – and compared this against levels of tau. Their findings supported the idea of transneuronal spread, that tau starts in one place and spreads, but were counter to predictions from the other two hypotheses.
“If the idea of transneuronal spread is correct, then the areas of the brain that are most highly connected should have the largest build-up of tau and will pass it on to their connections. It’s the same as we might see in a flu epidemic, for example – the people with the largest networks are most likely to catch flu and then to pass it on to others. And this is exactly what we saw.”
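The epidemic analogy can be made concrete with a toy diffusion model. In this sketch – a hypothetical network with made-up parameters, not the study’s actual model – tau seeds in one region, a small fraction passes along every connection at each step, and the best-connected regions end up with the largest burden:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 'connectome': 50 regions with random symmetric connections
# (hypothetical data for illustration).
n = 50
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T  # symmetric, no self-connections

# Transneuronal spread as diffusion: tau seeds in region 0 (with a
# trace everywhere) and 5% passes along each connection per step,
# accumulating as it goes.
tau = np.full(n, 0.01)
tau[0] = 1.0
for _ in range(100):
    tau = tau + 0.05 * adj @ tau

# As in a flu epidemic, regions with the most connections should
# accumulate the largest tau burden.
degree = adj.sum(axis=0)
corr = np.corrcoef(degree, np.log(tau))[0, 1]
print(corr > 0)
```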
Professor James Rowe, senior author on the study, adds: “In Alzheimer’s disease, the most common brain region for tau to first appear is the entorhinal cortex area, which is next to the hippocampus, the ‘memory region’. This is why the earliest symptoms in Alzheimer’s tend to be memory problems. But our study suggests that tau then spreads across the brain, infecting and destroying nerve cells as it goes, causing the patient’s symptoms to get progressively worse.”
Confirmation of the transneuronal spread hypothesis is important because it suggests that we might slow down or halt the progression of Alzheimer’s disease by developing drugs to stop tau from moving along neurons.
The same team also looked at 17 patients affected by another form of dementia, known as progressive supranuclear palsy (PSP), a rare condition that affects balance, vision and speech, but not memory. In PSP patients, tau tends to be found at the base of the brain rather than throughout. The researchers found that the pattern of tau build-up in these patients supported the second two hypotheses, metabolic vulnerability and trophic support, but not the idea that tau spreads across the brain.
The researchers also took patients at different stages of disease and looked at how tau build-up affected the connections in their brains.
In Alzheimer’s patients, they showed that as tau builds up and damages networks, the connections become more random, possibly explaining the confusion and muddled memories typical of such patients.
In PSP, the ‘highways’ that carry most information in healthy individuals receive the most damage, meaning that information has to travel around the brain along more indirect routes. This may explain why, when asked a question, PSP patients may be slow to respond but will eventually arrive at the correct answer.
The study was funded by the NIHR Cambridge Biomedical Research Centre, the PSP Association, Wellcome, the Medical Research Council, the Patrick Berthoud Charitable Trust and the Association of British Neurologists.
Cope, TE et al. Tau Burden and the Functional Connectome in Alzheimer's Disease and Progressive Supranuclear Palsy. Brain; 5 Jan 2018; DOI: 10.1093/brain/awx347
Recent advances in brain imaging have enabled scientists to show for the first time that a key protein which causes nerve cell death spreads throughout the brain in Alzheimer’s disease – and hence that blocking its spread may prevent the disease from taking hold.
As the global population increases, so too does energy demand. The threat of climate change means there is an urgent need to find cleaner, renewable alternatives to fossil fuels that do not release large amounts of greenhouse gases, with their potentially devastating consequences for our ecosystem. Solar power is considered particularly attractive, as on average the Earth receives around 10,000 times more energy from the sun than human consumption requires.
In recent years, in addition to synthetic photovoltaic devices, biophotovoltaics (BPVs, also known as biological solar cells) have emerged as an environmentally friendly and low-cost approach to harvesting solar energy. These cells use the photosynthetic properties of microorganisms such as algae to convert light into electric current.
During photosynthesis, algae produce electrons, some of which are exported outside the cell where they can provide electric current to power devices. To date, all the BPVs demonstrated have located charging (light harvesting and electron generation) and power delivery (transfer to the electrical circuit) in a single compartment; the electrons generate current as soon as they have been secreted.
In a new technique described in the journal Nature Energy, researchers from the departments of Biochemistry, Chemistry and Physics have collaborated to develop a two-chamber BPV system where the two core processes involved in the operation of a solar cell – generation of electrons and their conversion to power – are separated.
“Charging and power delivery often have conflicting requirements,” explains Kadi Liis Saar, of the Department of Chemistry. “For example, the charging unit needs to be exposed to sunlight to allow efficient charging, whereas the power delivery part does not require exposure to light but should be effective at converting the electrons to current with minimal losses.”
Building a two-chamber system allowed the researchers to design the two units independently and through this optimise the performance of the processes simultaneously.
“Separating out charging and power delivery meant we were able to enhance the performance of the power delivery unit through miniaturisation,” explains Professor Tuomas Knowles from the Department of Chemistry and the Cavendish Laboratory. “At miniature scales, fluids behave very differently, enabling us to design cells that are more efficient, with lower internal resistance and decreased electrical losses.”
The team used algae that had been genetically modified to carry mutations that enable the cells to minimise the amount of electric charge dissipated non-productively during photosynthesis. Together with the new design, this enabled the researchers to build a biophotovoltaic cell with a power density of 0.5 W/m², five times that of their previous design. While this is still only around a tenth of the power density provided by conventional solar fuel cells, these new BPVs have several attractive features, they say.
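The comparison figures follow directly from the reported power density. A quick arithmetic check (values as stated in the text):

```python
# Power-density figures implied by the article:
new_bpv = 0.5               # W/m^2, the reported two-chamber cell
previous_bpv = new_bpv / 5  # "five times that of their previous design"
conventional = new_bpv * 10  # "around a tenth" of conventional solar cells

print(previous_bpv, conventional)  # 0.1 5.0
```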
"While conventional silicon-based solar cells are more efficient than algae-powered cells in the fraction of the sun’s energy they turn to electrical energy, there are attractive possibilities with other types of materials," says Professor Christopher Howe from the Department of Biochemistry. “In particular, because algae grow and divide naturally, systems based on them may require less energy investment and can be produced in a decentralised fashion."
Separating the energy generation and storage components has other advantages, too, say the researchers. The charge can be stored, rather than having to be used immediately – meaning that the charge could be generated during daylight and then used at night-time.
While algae-powered fuel cells are unlikely to generate enough electricity to power a grid system, they may be particularly useful in areas such as rural Africa, where sunlight is in abundance but there is no existing electric grid system. In addition, whereas semiconductor-based synthetic photovoltaics are usually produced in dedicated facilities away from where they are used, the production of BPVs could be carried out directly by the local community, say the researchers.
“This is a big step forward in the search for alternative, greener fuels,” says Dr Paolo Bombelli, from the Department of Biochemistry. “We believe these developments will bring algal-based systems closer to practical implementation.”
The research was supported by the Leverhulme Trust, the Engineering and Physical Sciences Research Council and the European Research Council.
Saar, KL et al. Enhancing power density of biophotovoltaics by decoupling storage and power delivery. Nature Energy; 9 Jan 2018; DOI: 10.1038/s41560-017-0073-0
A new design of algae-powered fuel cells that is five times more efficient than existing plant and algal models, as well as being potentially more cost-effective to produce and practical to use, has been developed by researchers at the University of Cambridge.
Dr Paolo Bombelli is a post-doctoral researcher in the Department of Biochemistry, where his research looks to utilise the photosynthetic and metabolic activity of plants, algae and bacteria to create biophotovoltaic devices, a sustainable source of renewable current. He describes himself as “a plants, algae and bacteria electrician”.
“Photosynthesis generates a flow of electrons that keeps plants, algae and other photosynthetic organisms alive,” he explains. “These electrons flow through biological wires and, like the electrical current obtained from a battery and used to power a radio, they are the driving force for any cellular activity.”
Dr Bombelli’s fascination with this area of research began during his undergraduate studies at the University of Milan.
“Plants, algae and photosynthetic bacteria are the oldest, most common and effective solar panels on our planet,” he says. “For billions of years they have been harnessing the energy of the sun and using it to provide oxygen, food and materials to support life. With my work I aim to provide new ways to embrace the potential of these fantastic photosynthetic organisms.”
His work is highly cross-disciplinary, with input from the Departments of Biochemistry, Plant Sciences, Chemistry and Physics, and the Institute for Manufacturing, as well as from researchers at Imperial College London, UCL, the University of Brighton, the Institute for Advanced Architecture of Catalonia in Spain and the University of Cape Town, South Africa.
“Universities are great places to work and so they attract many people,” he says. “People choose to come to Cambridge because they know the ideas they generate here will go on to change the world.”
In 2016, Dr Bombelli won a Public Engagement with Research Award from the University of Cambridge for his work engaging audiences at more than 40 public events, including science festivals and design fairs, reaching thousands of people in seven countries. His outreach work included working with Professor Chris Howe to develop a prototype ‘green bus shelter’ in which plants, classical solar panels and bio-electrochemical systems operate in synergy in a single structure.
An international team led by Dr Renske Smit from the Kavli Institute of Cosmology at the University of Cambridge used the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile to open a new window onto the distant Universe, identifying for the first time with this telescope normal star-forming galaxies at a very early stage in cosmic history. The results are reported in the journal Nature, and will be presented at the 231st meeting of the American Astronomical Society.
Light from distant objects takes time to reach Earth, so observing objects that are billions of light years away enables us to look back in time and directly observe the formation of the earliest galaxies. The Universe at that time, however, was filled with an obscuring ‘haze’ of neutral hydrogen gas, which makes it difficult to see the formation of the very first galaxies with optical telescopes.
Smit and her colleagues used ALMA to observe two small newborn galaxies, as they existed just 800 million years after the Big Bang. By analysing the spectral ‘fingerprint’ of the far-infrared light collected by ALMA, they were able to establish the distance to the galaxies and, for the first time, see the internal motion of the gas that fuelled their growth.
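The distance measurement rests on a simple relation: cosmic expansion stretches a spectral line, and the ratio of observed to rest wavelength gives the redshift. A minimal example using the [C II] fine-structure line (rest wavelength roughly 158 micrometres) and a hypothetical ALMA measurement chosen to land at the redshift of these galaxies:

```python
# Redshift from a spectral 'fingerprint': 1 + z = lambda_obs / lambda_rest
lambda_rest_um = 158.0  # [C II] rest wavelength, micrometres (approximate)
lambda_obs_um = 1232.4  # hypothetical ALMA measurement, micrometres

z = lambda_obs_um / lambda_rest_um - 1
print(round(z, 1))  # 6.8 -> light emitted ~800 million years after the Big Bang
```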
“Until ALMA, we’ve never been able to see the formation of galaxies in such detail, and we’ve never been able to measure the movement of gas in galaxies so early in the Universe’s history,” said co-author Dr Stefano Carniani, from Cambridge’s Cavendish Laboratory and Kavli Institute of Cosmology.
The researchers found that the gas in these newborn galaxies swirled and rotated in a whirlpool motion, similar to our own galaxy and other, more mature galaxies much later in the Universe’s history. Despite their relatively small size – about five times smaller than the Milky Way – these galaxies were forming stars at a higher rate than other young galaxies, but the researchers were surprised to discover that the galaxies were not as chaotic as expected.
“In the early Universe, gravity caused gas to flow rapidly into the galaxies, stirring them up and forming lots of new stars – violent supernova explosions from these stars also made the gas turbulent,” said Smit, who is a Rubicon Fellow at Cambridge, sponsored by the Netherlands Organisation for Scientific Research. “We expected that young galaxies would be dynamically ‘messy’, due to the havoc caused by exploding young stars, but these mini-galaxies show the ability to retain order and appear well regulated. Despite their small size, they are already rapidly growing to become ‘adult’ galaxies like the one we live in today.”
The data from this project on small galaxies paves the way for larger studies of galaxies during the first billion years of cosmic time. The research was funded in part by the European Research Council and the UK Science and Technology Facilities Council (STFC).
Renske Smit et al. ‘Rotation in [C II]-emitting gas in two galaxies at a redshift of 6.8.’ Nature (2018). DOI: 10.1038/nature24631
Astronomers have looked back to a time soon after the Big Bang, and have discovered swirling gas in some of the earliest galaxies to have formed in the Universe. These ‘newborns’ – observed as they appeared nearly 13 billion years ago – spun like a whirlpool, similar to our own Milky Way. This is the first time that it has been possible to detect movement in galaxies at such an early point in the Universe’s history.
Dr Renske Smit is a postdoctoral researcher and Rubicon Fellow at the Kavli Institute of Cosmology and is supported by the Netherlands Organisation for Scientific Research. Prior to arriving in Cambridge in 2016, she was a postdoctoral researcher at Durham University and a PhD student at Leiden University in the Netherlands.
Her research aims to understand how the first sources of light in the Universe came to be. In her daily work, she studies images of deep space, taken by telescopes such as the Hubble Space Telescope. To gather data, she sometimes travels to places such as Chile or Hawaii to work on big telescopes.
“In Cambridge, I have joined a team working on the James Webb Space Telescope, the most ambitious and expensive telescope ever built,” she says. “With this telescope, we might be able to see the very first stars for the first time. To have this kind of privileged access to world-leading data is truly a dream come true.
“I would like to contribute to changing the perception of what a science professor looks like. Women in the UK and worldwide are terribly underrepresented in science and engineering and as a result, people may feel women either don’t have the inclination or the talent to do science. I hope that one day I will teach students that don’t feel they represent the professor stereotype and make them believe in their own talent.”
Mitochondrial diseases caused by mutations in mitochondrial DNA are rare, affecting approximately 1 in 10,000 births, but can cause severe conditions. For example, Leigh Syndrome is a severe brain disorder causing progressive loss of mental and movement abilities, which usually becomes apparent in the first year of life and typically results in death within two to three years.
Mitochondria are the powerhouses inside our cells, producing energy and carrying their own DNA instructions (separate from the DNA in the nucleus of every cell). Mitochondria are inherited from a person’s mother via the egg.
In the study, published in Nature Cell Biology, the researchers isolated mouse and human female embryonic germ cells – the cells that will go on to be egg cells in an adult woman – and tested their mitochondrial DNA.
They found that a variety of mutations were present in the mitochondrial DNA in the developing egg cells of all 12 of the human embryos studied, showing that low levels of mitochondrial DNA mutations are carried by healthy humans.
Professor Patrick Chinnery, from the MRC Mitochondrial Biology Unit and the Department of Clinical Neurosciences at the University of Cambridge, said: “We know that these devastating mitochondrial mutations can pop up in families without any previous history, but previously we didn’t know how that happened. We were surprised to find that egg cells in healthy females all carry a few defects in their mitochondrial DNA.”
For most of the human genome, mutations are kept in check by the processes of sexual reproduction, when eggs and sperm combine; however, mitochondria replicate asexually and mitochondrial DNA is inherited unchanged from the mother’s egg. This means that over time mutations can accumulate which, if left unchecked over generations, could eventually lead to malfunction and disease in offspring.
This conundrum led researchers to predict that a “bottleneck,” where only healthy mitochondria survive, may explain how mitochondria are kept healthy down the generations.
In this study, the researchers identified and measured this bottleneck for the first time in developing human egg cells. In these cells, the number of mitochondria fell to approximately 100 per cell, compared with around 100,000 in a mature egg cell.
In a mature cell, a few faulty mitochondria could hide unnoticed amongst the thousands of healthy mitochondria, but the small number of mitochondria in the cell during the bottleneck means that the effects of faulty mitochondria are no longer masked.
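The unmasking effect of the bottleneck can be illustrated with a toy simulation (a minimal sketch for illustration only, not the study's method; the `bottleneck` function and the 2% mutant load are assumptions). Sampling roughly 100 mitochondria from a pool of around 100,000 makes the mutant fraction scatter widely between cells, so faulty mitochondria can no longer hide in the crowd:

```python
import random

def bottleneck(mutant_fraction, n_after=100):
    """Draw n_after mitochondria from a very large pool in which a given
    fraction are mutant; return the mutant load after the bottleneck.
    Because the pre-bottleneck pool (~100,000) is far larger than the
    sample, the draw is effectively binomial."""
    mutants = sum(1 for _ in range(n_after) if random.random() < mutant_fraction)
    return mutants / n_after

random.seed(1)
# A cell carrying 2% faulty mitochondria: after the bottleneck, mutant
# loads vary from cell to cell, exposing the unlucky draws to selection.
loads = [bottleneck(0.02) for _ in range(1000)]
print(min(loads), max(loads))
```

Cells that happen to draw many faulty copies would, on the researchers' hypothesis, fail to generate enough energy to mature.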
The exact mechanism by which cells with unhealthy mitochondria are eliminated is not yet known, but since developing egg cells need a lot of energy, produced by the mitochondria, the researchers suggest that after the bottleneck stage, egg cells containing damaged mitochondria cannot generate enough energy to mature and are lost.
This study found that every developing egg cell may carry a few faulty mitochondria, so occasionally, by chance, these could be the mitochondria that repopulate the cell after the bottleneck. The scientists suggest that if this quality-control step fails, the faulty egg could survive and develop into a child with a mitochondrial disease.
Professor Patrick Chinnery said: “Unfortunately, the purification process is not perfect, and occasionally defective mitochondria leak through. This can cause a severe disease in a child, despite no one else in the family having been affected.”
Mitochondrial diseases are currently incurable, although a new IVF technique of mitochondrial transfer gives families affected by mitochondrial disease the chance of having healthy children – removing affected mitochondria from an egg or embryo and replacing them with healthy ones from a donor.
The study authors also suggest that this process could be relevant for human aging. Professor Chinnery added: “Previously it was assumed that the mitochondrial DNA mutations that have been associated with diseases of ageing, such as Alzheimer’s disease, Parkinson’s disease and other neurodegenerative disorders, happened over a person’s lifetime. This study shows how some of these mutations can be inherited from your mother, potentially predisposing you to late onset brain diseases.”
Professor Chinnery is a Wellcome Trust Senior Research Fellow and the researchers were funded by Wellcome, the Medical Research Council and the National Institute for Health Research.
Dr Nathan Richardson, MRC Head of Molecular and Cellular Medicine, said: “This is an exciting study that reveals important new insights into how mitochondrial diseases develop and are inherited between generations. The researchers have made great use of the tissues available from the MRC-Wellcome Human Developmental Biology Resource (HDBR). The HDBR is an internationally unique biobank resource that provides human embryonic and foetal tissue, donated through elective terminations, facilitating research into a large number of distressing medical disorders, such as mitochondrial diseases.”
Floros, V et al. Segregation of mitochondrial DNA heteroplasmy through a developmental genetic bottleneck in human embryos. Nature Cell Biology; 15 Jan 2018; DOI: 10.1038/s41556-017-0017-8
Press release from the Medical Research Council.
Researchers have shown for the first time how children can inherit a severe – potentially fatal – mitochondrial disease from a healthy mother. The study, led by researchers from the MRC Mitochondrial Biology Unit at the University of Cambridge, reveals that healthy people harbour mutations in their mitochondrial DNA and explains how cases of severe mitochondrial disease can appear unexpectedly in previously unaffected families.
When a mosquito infected with malaria parasites bites someone, it transfers the parasites into their bloodstream via its saliva. These parasites work their way into the liver, where they mature and reproduce. After a few days, the parasites leave the liver and hijack red blood cells, where they continue to multiply, spreading around the body and causing symptoms, including potentially life-threatening complications.
Malaria kills over half a million people each year, predominantly in Africa and south-east Asia. While a number of medicines are used to treat the disease, malaria parasites are growing increasingly resistant to these drugs, raising the spectre of untreatable malaria in the future.
Now, in a study published today in the journal Scientific Reports, a team of researchers employed the Robot Scientist ‘Eve’ in a high-throughput screen and discovered that triclosan, an ingredient found in many toothpastes, may help the fight against drug-resistance.
When used in toothpaste, triclosan prevents the build-up of plaque bacteria by inhibiting the action of an enzyme known as enoyl reductase (ENR), which is involved in the production of fatty acids.
Scientists have known for some time that triclosan also inhibits the growth in culture of the malaria parasite Plasmodium during the blood stage, and assumed that this was because it was targeting ENR, an enzyme active during the parasite's liver stage. However, subsequent work showed that improving triclosan’s ability to target ENR had no effect on parasite growth in the blood.
Working with ‘Eve’, the research team discovered that in fact, triclosan affects parasite growth by specifically inhibiting an entirely different enzyme of the malaria parasite, called DHFR. DHFR is the target of a well-established antimalarial drug, pyrimethamine; however, resistance to the drug among malaria parasites is common, particularly in Africa. The Cambridge team showed that triclosan was able to target and act on this enzyme even in pyrimethamine-resistant parasites.
“Drug-resistant malaria is becoming an increasingly significant threat in Africa and south-east Asia, and our medicine chest of effective treatments is slowly depleting,” says Professor Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry at the University of Cambridge. “The search for new medicines is becoming increasingly urgent.”
Because triclosan inhibits both ENR and DHFR, the researchers say it may be possible to target the parasite at both the liver stage and the later blood stage.
Lead author Dr Elizabeth Bilsland, now an assistant professor at the University of Campinas, Brazil, adds: “The discovery by our robot ‘colleague’ Eve that triclosan is effective against malaria targets offers hope that we may be able to use it to develop a new drug. We know it is a safe compound, and its ability to target two points in the malaria parasite’s lifecycle means the parasite will find it difficult to evolve resistance.”
Robot scientist Eve was developed by a team of scientists at the Universities of Manchester, Aberystwyth and Cambridge to automate, and hence speed up, the drug discovery process: she automatically develops and tests hypotheses to explain observations, runs experiments using laboratory robotics, interprets the results to amend her hypotheses, and then repeats the cycle, automating high-throughput hypothesis-led research.
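The cycle described above can be sketched in outline (a hypothetical illustration, not Eve's actual code; the `screen` function, scores and threshold are invented for the example):

```python
def run_cycle(compounds, screen, threshold=0.5, max_rounds=10):
    """Closed-loop screening sketch: test candidates, keep the hits as the
    amended hypothesis set, and repeat until the set stops changing."""
    candidates = list(compounds)
    for _ in range(max_rounds):
        scores = {c: screen(c) for c in candidates}               # run experiments
        hits = [c for c in candidates if scores[c] >= threshold]  # interpret results
        if hits == candidates:   # hypothesis set is stable: stop
            break
        candidates = hits or candidates  # amend hypotheses, repeat the cycle
    return candidates

# Toy screen: the compound names and scores are made up for illustration.
toy_scores = {"triclosan": 0.9, "compound_b": 0.2, "compound_c": 0.6}
print(run_cycle(toy_scores, toy_scores.get))  # prints ['triclosan', 'compound_c']
```

The real system layers machine learning over this loop so that each round chooses the most informative experiments, rather than brute-force testing of every candidate.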
Professor Ross King from the Manchester Institute of Biotechnology at the University of Manchester, who led the development of Eve, says: “Artificial intelligence and machine learning enables us to create automated scientists that do not just take a ‘brute force’ approach, but rather take an intelligent approach to science. This could greatly speed up the drug discovery process and potentially reap huge rewards.”
The research was supported by the Biotechnology & Biological Sciences Research Council, the European Commission, the Gates Foundation and FAPESP (São Paulo Research Foundation).
Bilsland, E et al. Plasmodium dihydrofolate reductase is a second enzyme target for the antimalarial action of triclosan. Scientific Reports; 18 Jan 2018; DOI: 10.1038/s41598-018-19549-x
An ingredient commonly found in toothpaste could be employed as an anti-malarial drug against strains of malaria parasite that have grown resistant to one of the currently-used drugs. This discovery, led by researchers at the University of Cambridge, was aided by Eve, an artificially-intelligent ‘robot scientist’.
New work at the settlement of Dhaskalio, the site adjoining the prehistoric sanctuary on the Cycladic island of Keros, has shown this to be a more imposing and densely occupied series of structures than had previously been realised, and one of the most impressive sites of the Aegean during the Early Bronze Age (3rd millennium BC).
Until recently, the island of Keros, located in the Cyclades, south of Naxos, was known for ritual activities dating from 4,500 years ago involving broken marble figurines. Now new excavations are showing that the promontory of Dhaskalio (now a tiny islet because of sea level rise), at the west end of the island next to the sanctuary, was almost entirely covered by remarkable monumental constructions built using stone brought painstakingly from Naxos, some 10km distant.
Professor Colin Renfrew of the University of Cambridge, Co-Director of the excavation, suggested that the promontory, with its narrow causeway to the main island, “may have become a focus because it formed the best natural harbour on Keros, and had an excellent view of the north, south and west Aegean”.
The promontory was naturally shaped like a pyramid, and the skilled builders of Dhaskalio enhanced this shape by creating a series of massive terrace walls which made it look more like a stepped pyramid. On the flat surfaces formed by the terraces, the builders used stone imported from Naxos to construct impressive, gleaming structures.
The research team, led by archaeologists from the University of Cambridge, the Ephorate of the Cyclades and the Cyprus Institute, have calculated that more than 1000 tons of stone were imported, and that almost every possible space on the island was built on, giving the impression of a single large monument jutting out of the sea. The complex is the largest known in the Cyclades at the time.
Renfrew noted that “investigations at multiple points throughout the site have given unique insight into how the architecture was organised and how people moved about the built environment”.
While excavating an impressive staircase in the lower terraces, archaeologists began to see the technical sophistication of this civilisation 1000 years before the famous palaces of the Mycenaeans. Underneath the stairs and within the walls they discovered sophisticated systems of drainage, signalling that the architecture was multipurpose and carefully planned in advance. Tests are now underway to discover whether the drains were for managing clean water or sewage.
What was the reason for this massive undertaking here?
The rituals practised in the nearby sanctuary meant that this was already an important central place for the Cycladic islanders. Another aspect of the expansion of Dhaskalio is the use of new agricultural practices, whose study is led by Dr Evi Margaritis of the Cyprus Institute. She says: “Dhaskalio has already provided important evidence about the cultivation of olive and grape, two key new domesticates that expanded the horizons of agriculture in the third millennium. The environmental programme is revealing how agricultural strategies developed through the lifetime of the site.”
The excavated soil of the site is being examined in great detail for tiny clues in the form of burnt seeds, phytoliths (plant remnants preserved as silica), burnt wood, and animal and fish bones. Lipid and starch analysis on pottery and grinding stones is giving clues about food production and consumption.
Plant remains have been recovered in carbonised form, predominantly pulses and fruits such as grape, olives, figs and almonds, but also cereals such as emmer wheat and barley. Margaritis notes: “Keros was probably not self-sustaining, meaning that much of this food was imported: in the light of this evidence we need to reconsider what we know about existing networks to include food exchange”.
Another clue may be found in metalworking, the most important new technology of the third millennium BC. The inhabitants of Dhaskalio were proficient metalworkers, and the evidence for the associated technologies is strong everywhere on the site. No metal ore sources are located on Keros, so all raw materials were imported from elsewhere (other Cycladic islands such as Seriphos or Kythnos, or the mainland).
These imported ores were smelted just to the north of the sanctuary, where the winds were strongest, providing the draught needed to reach the very high temperatures required to extract metal from ore. Within the buildings of Dhaskalio, the melting of metal and the casting of objects were commonplace.
The new excavations have found two metalworking workshops, full of metalworking debris and related objects. In one of these rooms a lead axe was found, with a mould used for making copper daggers, along with dozens of ceramic fragments (such as tuyères, the ceramic end of a bellows, used to force air into the fire to increase its temperature) covered in copper spills. In another room, which only appeared at the end of excavation this year, the top of an intact clay oven was found, indicating another metalworking area, which will be excavated next year.
What is the significance of the metalworking finds?
Dr Michael Boyd of the University of Cambridge, Co-Director of the excavation, says: “At a time when access to raw materials and skills was very limited, metalworking expertise seems to have been very much concentrated at Dhaskalio. What we are seeing here with the metalworking and in other ways is the beginnings of urbanisation: centralisation, meaning the drawing of far-flung communities into networks centred on the site, intensification in craft or agricultural production, aggrandisement in architecture, and the gradual subsuming of the ritual aspects of the sanctuary within the operation of the site. This gives us a clear insight into social change at Dhaskalio, from the earlier days where activities were centred on ritual practices in the sanctuary to the growing power of Dhaskalio itself in its middle years.”
The excavations on Keros are leading the charge of technical innovation in Aegean archaeology. All data are recorded digitally, using a new system called iDig – an app that runs on Apple’s iPads. For the first time in the Aegean, not only data from the excavation, but the results of study in the laboratory are all recorded in the same system, meaning that anyone on the excavation has access to all available data in real time. Three-dimensional models are created at every stage in the digging process using a technique called photogrammetry; at the end of each season the trenches are recorded in detail by the Cyprus Institute’s laser scanning team.
The Cyprus Institute co-organised an educational programme with the University of Cambridge for a second year during this year’s excavations. Students from Greece, Australia, New Zealand, the USA, Canada and the UK joined the excavation and gained valuable experience of up-to-the-minute excavation and scientific techniques. The syllabus epitomised the twin goals of promoting science in archaeology and establishing the highest standards of teaching and research.
The project is organised under the auspices of the British School at Athens and conducted with permission of the Greek Ministry of Culture and Sport. The project is directed by Colin Renfrew and Michael Boyd of the McDonald Institute for Archaeological Research, University of Cambridge. The project is supported by the Institute for Aegean Prehistory, the Cyprus Institute, the McDonald Institute for Archaeological Research, the British Academy, the Society of Antiquaries of London, the Gerda Henkel Stiftung, National Geographic Society, Cosmote, Blue Star Lines, EZ-dot and private donors.
New excavations on the remote island of Keros reveal monumental architecture and technological sophistication at the dawn of the Cycladic Bronze Age.