Channel: University of Cambridge - Latest news

Study in mice suggests personalised stem cell treatment may offer relief for progressive MS


The study, led by researchers at the University of Cambridge, is a step towards developing personalised treatments based on a patient’s own skin cells for diseases of the central nervous system (CNS).

In MS, the body’s own immune system attacks and damages myelin, the protective sheath around nerve fibres, causing disruption to messages sent around the brain and spinal cord. Symptoms are unpredictable and include problems with mobility and balance, pain, and severe fatigue.

Key immune cells involved in causing this damage are macrophages (literally ‘big eaters’), which ordinarily serve to attack and rid the body of unwanted intruders. A particular type of macrophage known as microglia are found throughout the brain and spinal cord – in progressive forms of MS, they attack the CNS, causing chronic inflammation and damage to nerve cells.

Recent advances have raised expectations that diseases of the CNS may be improved by the use of stem cell therapies. Stem cells are the body’s ‘master cells’, which can develop into almost any type of cell within the body. Previous work from the Cambridge team has shown that transplanting neural stem cells (NSCs) – stem cells that are part-way to developing into nerve cells – reduces inflammation and can help the injured CNS heal.

However, even if such a therapy could be developed, it would be hindered by the fact that such NSCs are sourced from embryos and therefore cannot be obtained in large enough quantities. Also, there is a risk that the body will see them as an alien invader, triggering an immune response to destroy them.

A possible solution to this problem would be the use of so-called ‘induced neural stem cells (iNSCs)’ – these cells can be generated by taking an adult’s skin cells and ‘re-programming’ them back to become neural stem cells. As these iNSCs would be the patient’s own, they are less likely to trigger an immune response.

Now, in research published in the journal Cell Stem Cell, researchers at the University of Cambridge have shown that iNSCs may be a viable option for repairing some of the damage caused by MS.

Using mice that had been manipulated to develop MS, the researchers discovered that chronic MS leads to significantly increased levels of succinate in the cerebrospinal fluid – but not in the peripheral blood. This small metabolite sends signals to macrophages and microglia, tricking them into causing inflammation.

Transplanting NSCs and iNSCs directly into the cerebrospinal fluid reduces the amount of succinate, reprogramming the macrophages and microglia – in essence, turning ‘bad’ immune cells ‘good’. This leads to a decrease in inflammation and subsequent secondary damage to the brain and spinal cord.

“Our mouse study suggests that using a patient’s reprogrammed cells could provide a route to personalised treatment of chronic inflammatory diseases, including progressive forms of MS,” says Dr Stefano Pluchino, lead author of the study from the Department of Clinical Neurosciences at the University of Cambridge.

“This is particularly promising as these cells should be more readily obtainable than conventional neural stem cells and would not carry the risk of an adverse immune response.”

The research team was led by Dr Pluchino, together with Dr Christian Frezza from the MRC Cancer Unit at the University of Cambridge, and brought together researchers from several university departments.

Dr Luca Peruzzotti-Jametti, the first author of the study and a Wellcome Trust Research Training Fellow, says: “We made this discovery by bringing together researchers from diverse fields including regenerative medicine, cancer, mitochondrial biology, inflammation and stroke, and cellular reprogramming. Without this multidisciplinary collaboration, many of these insights would not have been possible.”

The research was funded by Wellcome, European Research Council, Medical Research Council, Italian Multiple Sclerosis Association, Congressionally-Directed Medical Research Programs, the Evelyn Trust and the Bascule Charitable Trust.

Reference
Peruzzotti-Jametti, L et al. Macrophage-derived extracellular succinate licenses neural stem cells to suppress chronic neuroinflammation. Cell Stem Cell; 2018; 22: 1-14; DOI: 10.1016/j.stem.2018.01.20

Scientists have shown in mice that skin cells re-programmed into brain stem cells, transplanted into the central nervous system, help reduce inflammation and may be able to help repair damage caused by multiple sclerosis (MS).

Researcher profile: Dr Luca Peruzzotti-Jametti

It isn’t every day that you find yourself invited to play croquet with a Nobel laureate, but then Cambridge isn’t every university, as Dr Luca Peruzzotti-Jametti discovered when he was fortunate enough to be invited to the house of Professor Sir John Gurdon.

“It was an honour to meet a Nobel laureate who has influenced my studies so much, and to meet the man behind the science,” he says. “I was moved by how kind he is and extremely impressed by his endless passion for science.”

Dr Peruzzotti-Jametti began his career studying medicine at the University Vita-Salute San Raffaele, Milan. His career took him across Europe – to Switzerland, Denmark and Sweden – and now to Cambridge, where, after completing a PhD in Clinical Neurosciences, he is a Wellcome Trust Research Training Fellow.
His work focuses on multiple sclerosis (MS), an autoimmune disease that affects around 100,000 people in the UK alone. Although several therapies help during the initial (or ‘relapsing remitting’) phase of MS, the majority of people with MS develop a chronic worsening of disability within 15 years of diagnosis. This late form is called secondary progressive MS and, unlike relapsing remitting MS, it has no effective treatment.

“My research sets out to understand how progression works in MS by studying how inflammation is maintained in the brains of patients, and to develop new treatments aimed at preventing disease progression,” he explains. Among his approaches is the use of neural stem cells and induced neural stem cells, as in the above study. “My hope is that using a patient’s reprogrammed cells could provide a route to personalised treatment of chronic inflammatory diseases, including progressive forms of MS.”

Dr Peruzzotti-Jametti is based on the Cambridge Biomedical Campus where he works closely with clinicians at Addenbrooke’s Hospital and with basic scientists, a community he describes as “vibrant”.

“Cambridge has been the best place to do my research due to the incredible concentration of scientists who pursue novel therapeutic approaches using cutting-edge technologies,” he says. “I am very thankful for the support I received in the past years from top notch scientists. Being in Cambridge has also helped me compete for major funding sources, and my work would not have been possible without the support of the Wellcome Trust.

“I wish to continue working in this exceptional environment where so many minds and efforts are put together in a joint cause for the benefit of those who suffer.”

Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


In tech we trust?


Dr Jat Singh is familiar with breaking new ground and working across disciplines. Even so, he and colleagues were pleasantly surprised by how much enthusiasm has greeted their new Strategic Research Initiative on Trustworthy Technologies, which brings together science, technology and humanities researchers from across the University.

In fact, Singh, a researcher in Cambridge’s Department of Computer Science and Technology, has been collaborating with lawyers for several years: “A legal perspective is paramount when you’re researching the technical dimensions to compliance, accountability and trust in emerging ICT; although the Computer Lab is not the usual home for lawyers, we have two joining soon.”

Governance and public trust present some of the greatest challenges in technology today. The European General Data Protection Regulation (GDPR), which comes into force this year, has brought forward debates such as whether individuals have a ‘right to an explanation’ regarding decisions made by machines, and introduces stiff penalties for breaching data protection rules. “With penalties including fines of up to 4% of global turnover or €20 million, people are realising that they need to take data protection much more seriously,” he says.

Singh is particularly interested in how data-driven systems and algorithms – including machine learning – will soon underpin and automate everything from transport networks to council services.

As we work, shop and travel, computers and mobile phones already collect, transmit and process much data about us; as the ‘Internet of Things’ continues to instrument the physical world, machines will increasingly mediate and influence our lives.

It’s a future that raises profound issues of privacy, security, safety and ultimately trust, says Singh, whose research is funded by an Engineering and Physical Sciences Research Council Fellowship: “We work on mechanisms for better transparency, control and agency in systems, so that, for instance, if I give data to someone or something, there are means for ensuring they’re doing the right things with it. We are also active in policy discussions to help better align the worlds of technology and law.”

What it means to trust machine learning systems also concerns Dr Adrian Weller. Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, he spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work.

“Not long ago, many markets were traded on exchanges by people in pits screaming and yelling,” Weller recalls. “Today, most market making and order matching is handled by computers. Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society.”

But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. During 36 minutes on 6 May, nearly one trillion dollars were wiped off US stock markets as an unusually large sell order produced an emergent coordinated response from automated algorithms. “The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict,” he says.

Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.

How much we trust the ‘black box’ of machine learning systems, both as individuals and society, is clearly important. “There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge,” says Weller. “Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them.”

But even if we can lift the lid on these black boxes, how do we interpret what’s going on inside? “There are many kinds of transparency,” he explains. “A user contesting a decision needs a different kind of transparency to a developer who wants to debug a system. And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car.”

If we can make them trustworthy and transparent, how can we ensure that algorithms do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it ‘thinks’ we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.

When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that “black-sounding” names were 25% more likely to result in the delivery of this kind of advertising.

Like Sweeney, Weller is both disturbed and intrigued by examples of machine-learned discrimination. “It’s a worry,” he acknowledges. “And people sometimes stop there – they assume it’s a case of garbage in, garbage out, end of story. In fact, it’s just the beginning, because we’re developing techniques that can automatically detect and remove some forms of bias.”
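One simple form such bias-detection techniques can take is comparing a model’s outcomes across groups. The sketch below is purely illustrative – the ‘demographic parity’ check and its toy data are assumptions for the example, not the methods used by the researchers mentioned:

```python
# Minimal sketch of one bias check: compare the positive-prediction
# rate across groups ("demographic parity"). Data are invented.

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each group."""
    rates = {}
    for group in set(groups):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    return rates

def parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups).values()
    return max(rates) - min(rates)

# Hypothetical predictions (1 = ad shown) for two groups of users.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A large gap flags a disparity worth investigating; deciding whether it reflects unfair bias, and how to remove it, is the harder research question Weller describes.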

Transparency, reliability and trustworthiness are at the core of Weller’s work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His project grapples with how to make machine-learning decisions interpretable, how to develop new ways to ensure that AI systems perform well in real-world settings, and whether empathy is possible – or desirable – in AI.

Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Singh and Weller. The stakes are high, but so are the opportunities. Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.

Weller believes the future of work is a huge issue: “Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes.” And academics must keep talking as well as thinking. “We’re grappling with pressing and important issues,” he concludes. “As technical experts we need to engage with society and talk about what we’re doing so that policy makers can try to work towards policy that’s technically and legally sensible.”


Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.




Young children use physics, not previous rewards, to learn about tools


The findings of the study, based on the Aesop’s fable The Crow and the Pitcher, help solve a debate about whether children learning to use tools are genuinely learning about physical causation or are just driven by what action previously led to a treat.

Learning about causality – about the physical rules that govern the world around us – is a crucial part of our cognitive development. From our observations and the outcome of our own actions, we build an idea – a model – of which tools are functional for particular jobs, and which are not.

However, the information we receive isn’t always as straightforward as it should be. Sometimes outside influences mean that things that should work, don’t. Similarly, sometimes things that shouldn’t work, do.

Dr Lucy Cheke from the Department of Psychology at the University of Cambridge says: “Imagine a situation where someone is learning about hammers. There are two hammers that they are trying out – a metal one and an inflatable one. Normally, the metal hammer would successfully drive a nail into a plank of wood, while the inflatable hammer would bounce off harmlessly.

“But what if your only experience of these two hammers was trying to use the metal hammer and missing the nail, but using the inflatable hammer to successfully push the nail into a large pre-drilled hole? If you’re then presented with another nail, which tool would you choose to use? The answer depends on what type of information you have taken from your learning experience.”

In this situation, explains Cheke, a learner concerned with the outcome (a ‘reward’ learner) would learn that the inflatable hammer was the successful tool and opt to use it for later hammering. However, a learner concerned with physical forces (a ‘functionality’ learner) would learn that the metal hammer produced a percussive force, albeit in the wrong place, and that the inflatable hammer did not, and would therefore opt for the metal hammer.

Now, in a study published in the open access journal PLOS ONE, Dr Cheke and colleagues investigated what kind of information children extract from situations where the relevant physical characteristics of a potential tool are observable, but often at odds with whether the use of that tool in practice achieved the desired goal.

The researchers presented children aged 4–11 with a task in which they had to retrieve a floating token to earn sticker rewards. Each time, the children were presented with a container of water and a set of tools with which to raise the water level. The experiment is based on one of Aesop’s most famous fables, in which a thirsty crow drops stones into a pitcher to reach the water.

In this test, some of the tools were ‘functional’ and some ‘non-functional’. Functional tools were those that, if dropped into a standard container, would sink, raising the water level and bringing the token within reach; non-functional tools were those that would not do so, for example because they floated.

However, sometimes the children used functional tools to attempt to raise the level in a leaking container – in this context, the water would never rise high enough to bring the token within reach, no matter how functional the tool used.

At other times, the children were successful in retrieving the reward despite using a non-functional tool; for example, when using a water container that self-fills through an inlet pipe, it doesn’t matter whether the tool is functional as the water is rising anyway.

After these learning sessions, the researchers presented the children with a ‘standard’ water container and a series of choices between different tools. From the pattern of these choices the researchers could calculate what type of information was most influential on children’s decision-making: reward or function. 

“A child doesn’t have to know the precise rules of physics that allow a tool to work to have a feeling of whether or not it should work,” says Elsa Loissel, co-first author of the study. “So, we can look at whether a child’s decision making is guided by principles of physics without requiring them to explicitly understand the physics itself.

“We expected older children, who might have a rudimentary understanding of physical forces, to choose according to function, while younger children would be expected to use the simpler learning approach and base their decisions on what had been previously rewarded,” adds co-first author Dr Cheke. “But this wasn’t what we found.”

In fact, the researchers showed that information about reward was never a reliable predictor of children’s choices. Instead, the influence of functionality information increased with age – by the age of seven, it was the dominant influence in their decision-making.

“This suggests that, remarkably, children begin to emphasise information about physics over information about previous rewards from as young as seven years of age, even when these two types of information are in direct conflict,” says Cheke.

This research was funded by the European Research Council under the European Union’s Seventh Framework Programme.

Reference
Elsa Loissel, Lucy Cheke & Nicola Clayton. Exploring the Relative Contributions of Reward-History and Functionality Information to Children’s Acquisition of The Aesop’s Fable Task. PLOS ONE; 23 Feb 2018; DOI: 10.1371/journal.pone.0193264

Children as young as seven apply basic laws of physics to problem-solving, rather than learning from what has previously been rewarded, suggests new research from the University of Cambridge.




Helping police make custody decisions using artificial intelligence


"It’s 3am on Saturday morning. The man in front of you has been caught in possession of drugs. He has no weapons, and no record of any violent or serious crimes. Do you let the man out on police bail the next morning, or keep him locked up for two days to ensure he comes to court on Monday?”

The kind of scenario Dr Geoffrey Barnes is describing – whether to detain a suspect in police custody or release them on bail – occurs hundreds of thousands of times a year across the UK. The outcome of this decision could have major consequences for the suspect, for public safety and for the police.

“The police officers who make these custody decisions are highly experienced,” explains Barnes. “But all their knowledge and policing skills can’t tell them the one thing they need to know most about the suspect – how likely is it that he or she is going to cause major harm if they are released? This is a job that really scares people – they are at the front line of risk-based decision-making.”

Barnes and Professor Lawrence Sherman, who leads the Jerry Lee Centre for Experimental Criminology in the University of Cambridge’s Institute of Criminology, have been working with police forces around the world to ask whether AI can help. 

“Imagine a situation where the officer has the benefit of a hundred thousand, and more, real previous experiences of custody decisions?” says Sherman. “No one person can have that number of experiences, but a machine can.”

In mid-2016, with funding from the Monument Trust, the researchers installed the world’s first AI tool for helping police make custodial decisions in Durham Constabulary.

Called the Harm Assessment Risk Tool (HART), the AI-based technology uses 104,000 histories of people previously arrested and processed in Durham custody suites over the course of five years, with a two-year follow-up for each custody decision. Using a method called “random forests”, the model looks at vast numbers of combinations of ‘predictor values’, the majority of which focus on the suspect’s offending history, as well as age, gender and geographical area. 

“These variables are combined in thousands of different ways before a final forecasted conclusion is reached,” explains Barnes. “Imagine a human holding this number of variables in their head, and making all of these connections before making a decision. Our minds simply can’t do it.”
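The ‘random forests’ method Barnes names can be sketched in a few lines. The example below is purely illustrative – HART’s real predictors, data and implementation are not shown here, and the feature names, the toy labelling rule and the use of scikit-learn are assumptions for the sketch:

```python
# Illustrative random-forest risk classifier (NOT HART). The predictor
# values and the rule generating the toy labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical predictors: [age, prior offences, years since last offence]
X = rng.integers(low=[18, 0, 0], high=[70, 30, 10], size=(1000, 3))
# Toy labels: 0 = low, 1 = moderate, 2 = high risk, driven by prior offences
y = (X[:, 1] > 10).astype(int) + (X[:, 1] > 20).astype(int)

# Each tree is grown on a bootstrap sample with random feature subsets;
# the forest's forecast is the aggregate vote across all the trees.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X, y)
print(model.predict([[25, 22, 1]]))  # a suspect with 22 prior offences
```

Because every one of the hundreds of trees examines a different slice of the data, the ensemble effectively ‘remembers’ vast numbers of variable combinations – the property Barnes contrasts with what a single human mind can hold.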

The aim of HART is to categorise whether in the next two years an offender is high risk (highly likely to commit a new serious offence such as murder, aggravated violence, sexual crimes or robbery); moderate risk (likely to commit a non-serious offence); or low risk (unlikely to commit any offence). 

“The need for good prediction is not just about identifying the dangerous people,” explains Sherman. “It’s also about identifying people who definitely are not dangerous. For every case of a suspect on bail who kills someone, there are tens of thousands of non-violent suspects who are locked up longer than necessary.”

Durham Constabulary want to identify the ‘moderate-risk’ group – who account for just under half of all suspects according to the statistics generated by HART. These individuals might benefit from their Checkpoint programme, which aims to tackle the root causes of offending and offer an alternative to prosecution that they hope will turn moderate risks into low risks. 

“It’s needles and haystacks,” says Sherman. “On the one hand, the dangerous ‘needles’ are too rare for anyone to meet often enough to spot them on sight. On the other, the ‘hay’ poses no threat and keeping them in custody wastes resources and may even do more harm than good.” A randomised controlled trial is currently under way in Durham to test the use of Checkpoint among those forecast as moderate risk.

HART is also being refreshed with more recent data – a step that Barnes explains will be an important part of this sort of tool: “A human decision-maker might adapt immediately to a changing context – such as a prioritisation of certain offences, like hate crime – but the same cannot necessarily be said of an algorithmic tool. This suggests the need for careful and constant scrutiny of the predictors used and for frequently refreshing the algorithm with more recent historical data.”

No prediction tool can be perfect. An independent validation study of HART found an overall accuracy of around 63%. But, says Barnes, the real power of machine learning comes not from the avoidance of any error at all but from deciding which errors you most want to avoid. 

“Not all errors are equal,” says Sheena Urwin, head of criminal justice at Durham Constabulary and a graduate of the Institute of Criminology’s Police Executive Master of Studies Programme. “The worst error would be if the model forecasts low and the offender turned out high.”

“In consultation with the Durham police, we built a system that is 98% accurate at avoiding this most dangerous form of error – the ‘false negative’ – the offender who is predicted to be relatively safe, but then goes on to commit a serious violent offence,” adds Barnes. “AI is infinitely adjustable and when constructing an AI tool it’s important to weigh up the most ethically appropriate route to take.”
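The error-weighting Barnes describes can be pictured as choosing a decision threshold. The sketch below uses invented risk scores (nothing here comes from HART); it shows how lowering the threshold for flagging ‘high risk’ trades false negatives for false positives:

```python
# Hypothetical illustration of weighting errors unequally by moving a
# decision threshold. Scores and outcomes are made up for the example.

def errors(scores, labels, threshold):
    """Count (false negatives, false positives) at a risk threshold."""
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return fn, fp

# Invented model risk scores and true outcomes (1 = serious offence).
scores = [0.9, 0.6, 0.4, 0.35, 0.2, 0.1]
labels = [1,   1,   1,   0,    0,   0]

print(errors(scores, labels, 0.5))  # (1, 0): one dangerous miss
print(errors(scores, labels, 0.3))  # (0, 1): no misses, one extra flag
```

On these toy numbers, the lower threshold eliminates the ‘false negative’ at the cost of flagging one safe suspect – the kind of ethical trade-off Barnes says must be weighed when the tool is built.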

The researchers also stress that HART’s output is for guidance only, and that the ultimate decision is that of the police officer in charge.

“HART uses Durham’s data and so it’s only relevant for offences committed in the jurisdiction of Durham Constabulary. This limitation is one of the reasons why such models should be regarded as supporting human decision-makers not replacing them,” explains Barnes. “These technologies are not, of themselves, silver bullets for law enforcement, and neither are they sinister machinations of a so-called surveillance state.”

Some decisions, says Sherman, have too great an impact on society and the welfare of individuals for them to be influenced by an emerging technology.

Where AI-based tools provide great promise, however, is to use the forecasting of offenders’ risk level for effective ‘triage’, as Sherman describes: “The police service is under pressure to do more with less, to target resources more efficiently, and to keep the public safe. 

“The tool helps identify the few ‘needles in the haystack’ who pose a major danger to the community, and whose release should be subject to additional layers of review. At the same time, better triaging can lead to the right offenders receiving release decisions that benefit both them and society.”


Police at the “front line” of difficult risk-based judgements are trialling an AI system trained by University of Cambridge criminologists to give guidance using the outcomes of five years of criminal histories.




Scientists link genes to brain anatomy in autism


Previous studies have reported differences in brain structure of autistic individuals. However, until now, scientists have not known which genes are linked to these differences.

The team at the Autism Research Centre analysed magnetic resonance imaging (MRI) brain scans from more than 150 autistic children and compared them with MRI scans from similarly aged children who did not have autism. They looked at variation in the thickness of the cortex, the outermost layer of the brain, and linked this to gene activity in the brain.

They discovered a set of genes linked to differences in the thickness of the cortex between autistic and non-autistic children. Many of these genes are involved in how brain cells (or neurons) communicate with each other. Interestingly, many of the genes identified in this study have been shown to have lower gene activity at the molecular level in autistic post mortem brain tissue samples.

The study was led by two postdoctoral scientists, Dr Rafael Romero-Garcia and Dr Richard Bethlehem, and Varun Warrier, a PhD student. The study is published in the journal Molecular Psychiatry and provides the first evidence linking differences in the autistic brain to genes with atypical gene activity in autistic brains.

Dr Richard Bethlehem said: “This takes us one step closer to understanding why the brains of people with and without autism may differ from one another. We have long known that autism itself is genetic, but by combining these different data sets (brain imaging and genetics) we can now identify more precisely which genes are linked to how the autistic brain may differ. In essence, we are beginning to link molecular and macroscopic levels of analysis to better understand the diversity and complexity of autism.”

Varun Warrier added: “We now need to confirm these results using new genetic and brain scan data so as to understand how exactly gene activity and thickness of the cortex are linked in autism.”

“The identification of genes linked to brain changes in autism is just the first step,” said Dr Rafael Romero-Garcia. “These promising findings reveal how important multidisciplinary approaches are if we want to better understand the molecular mechanisms underlying autism. The complexity of this condition requires a joint effort from a wide scientific community.”

The research was supported by the Medical Research Council, the Autism Research Trust, the Wellcome Trust, and the Templeton World Charity Foundation, Inc.

Reference
Romero-Garcia, R et al. Synaptic and transcriptionally downregulated genes are associated with cortical thickness differences in autism. Molecular Psychiatry; 26 Feb 2018; DOI: 10.1038/s41380-018-0023-7

A team of scientists at the University of Cambridge has discovered that specific genes are linked to individual differences in brain anatomy in autistic children.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


New evidence suggests nutritional labelling on menus may reduce our calorie intake


Eating too many calories contributes to people becoming overweight and increases the risks of heart disease, diabetes and many cancers, which are among the leading causes of poor health and premature death.

Several studies have looked at whether putting nutritional labels on food and non-alcoholic drinks might have an impact on their purchasing or consumption, but their findings have been mixed. Now, a team of Cochrane researchers has brought together the results of studies evaluating the effects of nutritional labels on purchasing and consumption in a systematic review.

The team reviewed the evidence to establish whether and by how much nutritional labels on food or non-alcoholic drinks affect the amount of food or drink people choose, buy, eat or drink. They considered studies in which the labels had to include information on the nutritional or calorie content of the food or drink. They excluded those including only logos (e.g. ticks or stars), or interpretative colours (e.g. ‘traffic light’ labelling) to indicate healthier and unhealthier foods. In total, the researchers included evidence from 28 studies, of which 11 assessed the impact of nutritional labelling on purchasing and 17 assessed the impact of labelling on consumption.

The team combined results from three studies where calorie labels were added to menus or put next to food in restaurants, coffee shops and cafeterias. For a typical lunch with an intake of 600 calories, such as a slice of pizza and a soft drink, labelling may reduce the energy content of food purchased by about 8% (48 calories). The authors judged the studies to have potential flaws that could have biased the results.

Combining results from eight studies carried out in artificial or laboratory settings could not show with certainty whether adding labels would have an impact on calories consumed. However, when the studies with potential flaws in their methods were removed, the three remaining studies showed that such labels could reduce calories consumed by about 12% per meal. The team noted that there was still some uncertainty around this effect and that further well conducted studies are needed to establish the size of the effect with more precision.
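The headline figures above are simple percentage reductions applied to the review's illustrative 600-calorie lunch. As a minimal sketch of the arithmetic (the `reduced_intake` helper is purely illustrative, not code from the study):

```python
# Illustrative arithmetic for the review's headline figures (not study code).
# Assumed inputs come straight from the text: a 600-calorie lunch,
# an ~8% reduction in calories purchased, and an ~12% reduction consumed.

def reduced_intake(baseline_kcal: float, reduction_pct: float) -> tuple[float, float]:
    """Return (calories saved, calories remaining) for a percentage reduction."""
    saved = baseline_kcal * reduction_pct / 100
    return saved, baseline_kcal - saved

saved, remaining = reduced_intake(600, 8)
print(f"Purchasing: ~{saved:.0f} kcal fewer per 600 kcal lunch")   # ~48 kcal saved

saved, remaining = reduced_intake(600, 12)
print(f"Consumption: ~{saved:.0f} kcal fewer per 600 kcal meal")   # ~72 kcal saved
```

On these assumptions, the 8% figure corresponds to the 48 calories quoted for purchasing, and the 12% consumption estimate would amount to roughly 72 calories on a comparable 600-calorie meal.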

The Review’s lead author, Professor Theresa Marteau, Director of the Behaviour and Health Research Unit at the University of Cambridge, UK, says: “This evidence suggests that using nutritional labelling could help reduce calorie intake and make a useful impact as part of a wider set of measures aimed at tackling obesity.” She added: “There is no ‘magic bullet’ to solve the obesity problem, so while calorie labelling may help, other measures to reduce calorie intake are also needed.”

Author, Professor Susan Jebb from the University of Oxford commented: “Some outlets are already providing calorie information to help customers make informed choices about what to purchase. This review should provide policymakers with the confidence to introduce measures to encourage or even require calorie labelling on menus and next to food and non-alcoholic drinks in coffee shops, cafeterias and restaurants.”

The researchers were unable to reach firm conclusions about the effect of labelling on calories purchased from grocery stores or vending machines because of the limited evidence available. They also added that future research would also benefit from a more diverse consideration of the possible wider impacts of nutritional labelling including impacts on those producing and selling food, as well as consumers.

Professor Ian Caterson, President of the World Obesity Federation, commented: “Energy labelling has been shown to be effective: people see it and read it and there is a resulting decrease in calories purchased. This is very useful to know – combined with a suite of other interventions, such changes will help slow and eventually turn around the continuing rise in body weight.”

Reference
Crockett RA, et al. Nutritional labelling for healthier food or non-alcoholic drink purchasing and consumption. Cochrane Database of Systematic Reviews 2018, Issue 2. Art. No.: CD009315.

New evidence published in the Cochrane Library today shows that adding calorie labels to menus and next to food in restaurants, coffee shops and cafeterias, could reduce the calories that people consume, although the quality of evidence is low. 

Making sense of our unhealthy behaviour

Professor Marteau will be speaking at the 2018 Cambridge Science Festival about why we sometimes make 'unhealthy' choices and how we might encourage people to change.

Friday 16 March: 5:30pm - 6:30pm

Babbage Lecture Theatre, New Museums Site Downing Street, CB2 3RS

Details and how to book here.


Identification of brain region responsible for alleviating pain could lead to development of opioid alternatives


The team, led by the University of Cambridge, have pinpointed an area of the brain that is important for endogenous analgesia – the brain’s intrinsic pain relief system. Their results, published in the open access journal eLife, could lead to the development of pain treatments that activate the painkilling system by stimulating this area of the brain, but without the dangerous side-effects of opioids.

Opioid drugs such as oxycodone, hydrocodone and fentanyl hijack the endogenous analgesia system, which is what makes them such effective painkillers. However, they are also highly addictive, which has led to the opioid crisis in the United States, where drug overdose is now the leading cause of death for those under 50, with opioid overdoses representing two-thirds of those deaths.

“We’re trying to understand exactly what the endogenous analgesia system is: why we have it, how it works and where it is controlled in the brain,” said Dr Ben Seymour of Cambridge’s Department of Engineering, who led the research. “If we can figure this out, it could lead to treatments that are much more selective in terms of how they treat pain.”

Pain, while unpleasant, evolved to serve an important survival function. After an injury, for instance, the persistent pain we feel saps our motivation, and so forces us towards rest and recuperation which allows the body to use as much energy as possible for healing.

“Pain can actually help us recover by removing our drive to do unnecessary things - in a sense, this can be considered ‘healthy pain’,” said Seymour. “So why might the brain want to turn down the pain signal sometimes?”

Seymour and his colleagues thought that sometimes this ‘healthy pain’ could be a problem, especially if we could actively do something that might help - such as try and find a way to cool a burn.

In these situations, the brain might activate the pain-killing system to actively look for relief. To prove this, and to try and identify where in the brain this system was activated, the team designed a pair of experiments using brain scanning technology.

In the first experiment, the researchers attached a metal probe to the arm of a series of healthy volunteers - and heated it up to a level that was painful, but not enough to physically burn them. The volunteers then played a type of gambling game where they had to find which button on a small keypad cooled down the probe. The level of difficulty was varied over the course of the experiments - sometimes it was easy to turn the probe off, and sometimes it was difficult. Throughout the task, the volunteers frequently rated their pain, and the researchers constantly monitored their brain activity.

The results found that the level of pain the volunteers experienced was related to how much information there was to learn in the task. When the subjects were actively trying to work out which button they should press, pain was reduced. But when the subjects knew which button to press, it wasn't. The researchers found that the brain was actually computing the benefits of actively looking for and remembering how they got relief, and using this to control the level of pain.

Knowing what this signal should look like, the researchers then searched the brain to see where it was being used. The second experiment identified the signal in a single region of the prefrontal cortex, called the pregenual cingulate cortex.

“These results build a picture of why and how the brain decides to turn off pain in certain circumstances, and identify the pregenual cingulate cortex as a critical ‘decision centre’ controlling pain in the brain,” said Seymour.

This decision centre is a key place to focus future research efforts. In particular, the researchers are now trying to understand what the inputs are to this brain region, if it is stimulated by opioid drugs, what other chemical messenger systems it uses, and how it could be turned on as a treatment for patients with chronic pain.

Reference
Suyi Zhang et al. ‘The control of tonic pain by active relief learning.’ eLife (2018). DOI: https://doi.org/10.7554/eLife.31949

Researchers from the UK and Japan have identified how the brain’s natural painkilling system could be used as a possible alternative to opioids for the effective relief of chronic pain, which affects as many as one in three people at some point in their lives.

Prescription bottle for Oxycodone tablets and pills on metal table


Silent witnesses: how an ice age was written in the trees


Researchers use tree rings to unravel past climates and their impact on civilisations. 


What connects a series of volcanic eruptions and severe summer cooling with a century of pandemics, human migration and the rise and fall of civilisations? Tree rings, says Ulf Büntgen, who leads Cambridge’s first dedicated tree-ring laboratory at the Department of Geography.

Once you embark on these integrative approaches you can ask questions like how did complex societies cope with climate change? That’s when it starts to get really exciting.
Ulf Büntgen
Subfossil trees preserved in Iceland


Living with artificial intelligence: how do we get it right?


This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news. What’s next?

True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.

If so, then there’s little reason to think that it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and need to fit through a human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the Webb Space Telescope.

Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines?

On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more we ask it to do for us, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.

So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. One thing that makes this task difficult is that by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.

For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.

The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.

Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.

As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.

Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?

These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.

This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. He urged his listeners to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”

We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.

But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism. All the more reason to think about the destination now, and to be careful about what we wish for.

Inset image: read more about our AI research in the University's research magazine; download a pdf; view on Issuu.

Professor Huw Price and Dr Karina Vold are at the Faculty of Philosophy and the Leverhulme Centre for the Future of Intelligence, where they work on 'Agents and persons'. This theme explores the nature and future of AI agency and personhood, and their impact on our human sense of what it means to be a person.

Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.



Cambridge kids show you can make a rainbow - even when it's snowing


The school children were joined by staff and Eddington residents who each donned clothing to match one colour in the rainbow in a show of support for diversity.

With temperatures plummeting to -3°C and snow flurries disrupting plans for an outdoor celebration, the local Eddington and Cambridge community packed into the assembly hall in a show of solidarity.

Vice-Chancellor of the University of Cambridge, Professor Stephen Toope also attended, saying: “As LGBT+ History Month reaches its end, we have much to celebrate. Exhibitions, talks and performances have charted the rich and vibrant history of the LGBT+ community – but also its struggles.

“In my field of law, there have been advances in gaining equality for LGBT people – from protection from discrimination, to celebration of civil unions and parenthood. But equality in law doesn’t always translate into equality in life. That’s why we will keep up our efforts to celebrate Cambridge’s diversity.

“Specific initiatives, including the School of Humanities and Social Sciences’ recently announced programme focusing on LGBTQ+ research, illustrate the importance of acknowledging this diversity in our academic pursuits as well as in our daily lives. Our primary school continues to embrace opportunities to define what a truly inclusive education could be.

“LGBT+ History Month has shown what we can achieve when we all work together with the common goal of creating the Cambridge we want to live, work and study in. We are committed to being a place where people are allowed to be themselves – to think their own way, define their own boundaries and form their own identities.

“Thanks to all of you who have participated, given your support and helped Cambridge to be a welcoming, open and tolerant place.”

LGBT+ History Month takes place every February to promote the visibility of lesbian, gay, bisexual and transgender people, their history, lives and experience. This encourages diversity and equality, as well as raising awareness and advancing education on matters affecting the LGBT+ community.

Eddington’s rainbow photo call marked the end of the month of activities across the school, University and society to raise awareness of and celebrate the LGBT+ community. 

Heather Topel, Project Director for the North West Cambridge Development, said: “The Eddington Rainbow was a success and we are pleased to support the development of Eddington as a new community in Cambridge that is open to all. We will be hosting a range of events throughout the year that support the broad range of individuals and communities that are part of Cambridge.”

Three hundred pupils at the University of Cambridge Primary School formed a giant rainbow to mark the end of LGBT+ history month today.

University of Cambridge Primary School rainbow flag


Statement from the Vice-Chancellor on industrial action


"I recognise UUK’s limited room for manoeuvre due to extremely low real interest rates and the views of the Pensions Regulator, who will ultimately decide on the scheme’s viability. Our influence is, therefore, limited. But I strongly support the exploration of ideas to resolve this situation, including those put forward by UCU. There has to be compromise. I hope that further disruption to students’ studies can be avoided while talks continue.

"The current situation cannot go on. It has, understandably, led to anger from staff and anxiety from students. I therefore urge the parties to agree a pragmatic solution to bring to an end the current dispute. Once this has been achieved we can focus on a long-term, sustainable solution which is in the best interests of the sector, the University and individual members of the USS.

"Pensions form a key component of our compensation package for staff and they play a significant role in the attractiveness of the UK’s higher education sector to talented individuals from around the world.

"Cambridge University has been actively working on options for some time and we have been discussing these with UUK. We believe a sector-wide scheme has significant benefits. One option to maintain a sector-wide approach, at least in the short-term, would be an alternative that retains a Defined Benefit (DB) element, but combines it with a Defined Contribution (DC) component along the lines of our existing Cambridge University Assistants’ Contributory Pension Scheme (CPS).

"There are other approaches that could be explored as longer-term solutions. These could include a Collective DC scheme, similar to that being considered by the Royal Mail, or a government-backed solution. These might offer better benefits than the current scheme, yet still be affordable for universities. However, these require new legislation or government action.

"If all else fails and no sector-wide scheme is deliverable, Cambridge will have to consider whether there is scope for a Cambridge-specific scheme – either within or outside the USS. We must recognise, however, that there are serious obstacles to such an approach.

"Cambridge University is also prepared to consider assuming the costs of additional contributions in the short-term should no other option be viable. It should be noted, however, that this approach would likely require trade-offs and cuts in other parts of the University. 

"You have my absolute commitment to working with all parties to find a way through this dispute; a way which recognises the concerns of our staff, ensures the sustainability of the University, and maintains an excellent education for our students. Such an outcome is imperative if we are to safeguard the global leadership of the UK’s higher education sector, and of Cambridge University in particular."

Professor Stephen J. Toope
Vice-Chancellor

"I welcome the commitment to further talks between UCU and UUK to end the current strike.


£2.5million gift to Cambridge Sport funds two new hockey pitches for use by the University and wider Cambridge community

Field hockey ball and stick

The ambitious plans recognise that the University forms part of a vibrant and growing city where sport is valued highly.

As a result of this gift, the existing Wilberforce Road Sports Ground will be transformed and provide opportunities for both Cambridge students and the wider community to play and enjoy hockey, via a partnership with Cambridge City Hockey Club. Demand for hockey, especially at junior level, has grown in recent years and bringing the City Club activity together on one site means the club will generate a greater sense of unity. The gift will fund two new high quality floodlit artificial pitches to be built alongside the current pitch to form a ‘Hockey Hub’.

The gift, the largest to University sport from private philanthropy, comes from Chris and Sarah Field, who were inspired to make a lasting contribution to Cambridge sport by watching their sons play hockey as they grew up.

The new pitches represent part of a wider ambition to improve sports provision, including updating the facilities at Grange Road and the consideration of a swimming pool on the West Cambridge site. The mission for sports at the University is broad in its vision and scope, and supports the University’s intention to be inclusive and accessible to the wider Cambridge community through encouraging participation in sport.


The gift marks a defining moment in the mission of the University to improve sports facilities and recognise the many wide-ranging benefits sport gives to all who take part.


Rare mineral discovered in plants for first time


Scientists at Sainsbury Laboratory Cambridge University have found that the mineral vaterite, a form (polymorph) of calcium carbonate, is a dominant component of the protective silvery-white crust that forms on the leaves of a number of alpine plants, which are part of the Garden’s national collection of European Saxifraga species.

Naturally occurring vaterite is rarely found on Earth. Small amounts of vaterite crystals have been found in some sea and freshwater crustaceans, bird eggs, the inner ears of salmon, meteorites and rocks. This is the first time that the rare and unstable mineral has been found in such a large quantity and the first time it has been found to be associated with plants.

The discovery was made through a University of Cambridge collaboration between the Sainsbury Laboratory Cambridge University microscopy facility and Cambridge University Botanic Garden, as part of an ongoing research project that is probing the inner workings of plants in the Garden using new microscopy technologies. The research findings have been published in the latest edition of Flora.

The laboratory’s Microscopy Core Facility Manager, Dr Raymond Wightman, said vaterite was of interest to the pharmaceutical industry: “Biochemists are working to synthetically manufacture vaterite as it has potential for use in drug delivery, but it is not easy to make. Vaterite has special properties that make it a potentially superior carrier for medications due to its high loading capacity, high uptake by cells and its solubility properties that enable it to deliver a sustained and targeted release of therapeutic medicines to patients. For instance, vaterite nanoparticles loaded with anti-cancer drugs appear to offload the drug slowly only at sites of cancers and therefore limit the negative side-effects of the drug.”

Other potential uses of vaterite include improving the cements used in orthopaedic surgery and as an industrial application improving the quality of papers for inkjet printing by reducing the lateral spread of ink.

Dr Wightman said vaterite was often associated with outer space and had been detected in planetary objects in the Solar System and meteorites: “Vaterite is not very stable in the Earth’s humid atmosphere as it often reverts to more common forms of calcium carbonate, such as calcite. This makes it even more remarkable that we have found vaterite in such large quantities on the surface of plant leaves.”

Botanic Garden Alpine and Woodland Supervisor, Paul Aston, and colleague Simon Wallis, are pioneering studies into the cellular-level structures of these alpine plants with Dr Wightman. Mr Wallis, who is also Chairman of the international Saxifrage Society, said: “We started by sampling as wide a range of saxifrage species as possible from our collection. The microscope analysis of the plant material came up with the exciting discovery that some plants were exuding vaterite from “chalk glands” (hydathodes) on the margins of their leaves.

"We then noticed a pattern emerging. The plants producing vaterite were from the section of Saxifraga called Porphyrion. Further to this, it appears that although many species in this section produced vaterite along with calcite, there was at least one species, Saxifraga sempervivum, that was producing pure vaterite.”

Dr Wightman said two new pieces of equipment at the microscopy facility were being used to reveal the inner workings of the plants and uncovering cellular structures never before described: “Our cryo-scanning electron microscope allows us to view, in great detail, cells and plant tissues in their “native” fully hydrated state by freezing samples quickly and maintaining cold under a vacuum for electron microscopy.

"We are also using a Raman microscope to identify and map molecules. In this case, the microscope not only identified signatures corresponding to calcium carbonate as forming the crust, but was also able to differentiate between the calcite and vaterite forms when it was present as a mixture while still attached to the leaf surface.”

So why do these species produce a calcium carbonate crystal crust and why are some crusts calcite and others vaterite?

The Cambridge University Botanic Garden team is hoping to answer this question through further analysis of the leaf anatomy of the Saxifraga group. They suspect that vaterite may be present on more plant species, but that the unstable mineral is being converted to calcite when exposed to wind and rain. This may also be the reason why some plants have both vaterite and calcite present at the same time.

The microscopy research has also turned up some novel cell structures. Mr Aston added: “As well as producing vaterite, Saxifraga scardica has a special tissue surrounding the leaf edge that appears to deflect light from the edge into the leaf. The cells appear to be producing novel cell wall structures to achieve this deflection. This may be to help the plant to collect more light, particularly if it is growing in partly shaded environments.”

The team believes the novel cell wall structures of Saxifrages could one day help inform the manufacture of new bio-inspired optical devices and photonic structures for industrial applications such as communication cables and fibre optics.

Mr Aston said these initial discoveries were just the start: “We expect that there may be other plants that also produce vaterite and have special leaf anatomies that have evolved in harsh environments like alpine regions. The next species we will be looking to study is Saxifraga lolaensis, which has super tiny leaves with an organisation of cell types not seen in a leaf before, and which we think will reveal more fascinating secrets about the complexity of plants.”

There is a risk that some of these tiny but amazing alpine plants could disappear due to climate change, damage from alpine recreation sports and over-collecting. There is still much to learn about these plants, but the collaborative work of the Sainsbury Laboratory and Cambridge University Botanic Garden team is revealing fascinating insights into leaf anatomy and biochemistry, as well as demonstrating the potential for Saxifrages to supply a new range of biomaterials.

Story by Kathy Grube, Communications Manager, Sainsbury Laboratory.

A rare mineral with potential industrial and medical applications has been discovered on alpine plants at Cambridge University Botanic Garden.

Biochemists are working to synthetically manufacture vaterite as it has potential for use in drug delivery, but it is not easy to make
Raymond Wightman
Saxifraga sempervivum, an alpine plant species discovered to produce "pure vaterite".

Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Method to predict drug stability could lead to more effective medicines

The researchers, from the Universities of Cambridge and Copenhagen, have developed a new method to solve an old problem: how to predict when and how a solid will crystallise. Using optical and mechanical measuring techniques, they found that localised movement of molecules within a solid is ultimately responsible for crystallisation.

This solution to the problem was first proposed in 1969, but it has only now become possible to prove the hypothesis. The results are reported in two papers in Physical Chemistry Chemical Physics and The Journal of Physical Chemistry B.

Solids behave differently depending on whether their molecular structure is ordered (crystal) or disordered (glass). Chemically, the crystal and glass forms of a solid are exactly the same, but they have different properties.

One of the desirable properties of glasses is that they are more soluble in water than their crystalline counterparts, which is especially useful for medical applications. To be effective, medicines need to be water-soluble, so that they can be dissolved within the body and reach their target via the bloodstream.

“Most of the medicines in use today are in the crystal form, which means that they need extra energy to dissolve in the body before they enter the bloodstream,” said study co-author Professor Axel Zeitler from Cambridge’s Department of Chemical Engineering & Biotechnology. “Molecules in the glass form are more readily absorbed by the body because they can dissolve more easily, and many glasses that can cure disease have been discovered in the past 20 years, but they’re not being made into medicines because they’re not stable enough.”

After a certain time, all glasses will undergo spontaneous crystallisation, at which point the molecules will not only lose their disordered structure, but they will also lose the properties that made them effective in the first place. A long-standing problem for scientists has been how to predict when crystallisation will occur, which, if solved, would enable the widespread practical application of glasses.

“This is a very old problem,” said Zeitler. “And for pharmaceutical companies, it’s often too big of a risk. If they develop a drug based on the glass form of a molecule and it crystallises, they will not only have lost a potentially effective medicine, but they would have to do a massive recall.”

In order to determine when and how solids will crystallise, most researchers had focused on the glass transition temperature: the temperature above which molecules in the solid can move more freely, and which can be measured easily. Using a technique called dynamic mechanical analysis, as well as terahertz spectroscopy, Zeitler and his colleagues showed that it is not the glass transition temperature, but the molecular motions occurring above a lower temperature threshold, that are responsible for crystallisation.

These motions are constrained by localised forces in the molecular environment and, in contrast to the relatively large motions that happen above the glass transition temperature, the molecular motions above the lower temperature threshold are much subtler. While the localised movement is tricky to measure, it is a key part of the crystallisation process.

Given the advance in measurement techniques developed by the Cambridge and Copenhagen teams, drug molecules that were previously discarded at the pre-clinical stage can now be tested to determine whether they can be brought to the market in a stable glass form that overcomes the solubility limitations of the crystal form.

“If we use our technique to screen molecules that were previously discarded, and we find that the temperature associated with the onset of the localised motion is sufficiently high, we would have high confidence that the material will not crystallise following manufacture,” said Zeitler. “We could use the calibration curve that we describe in the second paper to predict the length of time it will take the material to crystallise.”

The research has been patented and is being commercialised by Cambridge Enterprise, the University’s commercialisation arm. The research was funded by the Engineering and Physical Sciences Research Council (EPSRC).

References:
Michael T. Ruggiero et al. ‘The significance of the amorphous potential energy landscape for dictating glassy dynamics and driving solid-state crystallisation’ Physical Chemistry Chemical Physics, 19, 30039-30047 (2017). DOI: 10.1039/c7cp06664c

Eric Ofosu Kissi et al. ‘The glass transition temperature of the β-relaxation as the single predictive parameter for recrystallization of neat amorphous drugs.’ The Journal of Physical Chemistry B (2018). DOI: 10.1021/acs.jpcb.7b10105

Researchers from the UK and Denmark have developed a new method to predict the physical stability of drug candidates, which could help with the development of new and more effective medicines for patients. The technology has been licensed to Cambridge spin-out company TeraView, which is developing it for use in the pharmaceutical industry to make medicines that are more easily released in the body.

This is a very old problem, and for pharmaceutical companies, it’s often too big of a risk.
Axel Zeitler

Cambridge University Press celebrate women in academia for International Women's Day

The collection – spanning the subjects of Arts, Humanities and Social Sciences – contains a wide range of online book chapters, articles, journals and blog posts.

The purpose of the campaign is to share, and make accessible, the excellent knowledge and research of our authors around this incredibly important topic.

As one of the world's leading and most respected university presses, Cambridge University Press has a significant part to play in helping women in academia have a voice and in championing their work.

Mandy Hill, Managing Director for Academic Publishing at the Press, said: “Cambridge University Press is delighted to support International Women’s Day with our collection of academic work by and about great women across the globe and across history.

“As a University Press and global publisher, we see it as intrinsic within our role to support, develop and publish the highest standards of education and research for everyone and by everyone, irrespective of gender, race, age, or sexuality. This year we have expanded our IWD2018 campaign to include work from all our academic subjects and have made content free to ensure accessibility.”

The free content will be accessible from www.cambridge.org/IWD2018 throughout March, and includes the full 2017 volume of the leading agenda-setting journal Politics & Gender, blog posts, and a wide selection of articles and book chapters including:

The Wonders of Light
Marta Garcia-Matos 
Discover the spectacular power of light with this visually stunning celebration of the multitude of ways in which light-based technology has shaped our society.

Women in Twentieth-Century Africa
Iris Berger
Explores the paradoxical image of African women as exceptionally oppressed, but also as strong, resourceful and rebellious.

Property in the Body: Feminist Perspectives
Donna Dickenson
Commodification of the human body is gaining ground, strengthened by powerful interests. This book helps us understand and regulate it.

The Cambridge Introduction to Margaret Atwood
Heidi Slettedahl Macpherson
An engaging overview for students and readers of Atwood's life, works, contexts and reception.

Sex, Gender, and Episcopal Authority in an Age of Reform, 1000-1122
Megan McLaughlin
New perspective on western European ecclesiastical reform between 1000 and 1122 through an examination of images of the 'private' life of the Church.

Gender and Race in Antebellum Popular Culture
Sarah N. Roth
This book argues that white women, as creators and consumers of popular culture media, played a pivotal role in the demasculinization of black men during the antebellum period.

The Logics of Gender Justice: State Action on Women’s Rights Around the World
Mala Htun
This book explains when and why governments around the world take action to advance - or undermine - women’s rights.

The Experiences of Face Veil Wearers in Europe and the Law
Eva Brems
Studies the experiences of face veil wearers in Europe and examines the ramifications of the empirical findings for legislative agendas.

In celebration of International Women’s Day (March 8), Cambridge University Press has made a collection of inspirational work written by, or about, leading academics and pioneers such as Marie Curie, Margaret Atwood and Angela Merkel, available to read for free online.

This year we have expanded our IWD2018 campaign to include work from all our academic subjects and have made content free to ensure accessibility.
Mandy Hill

Conservationists gather to mark International Women's Day

The event, jointly organised by the Museum and the Cambridge Conservation Forum’s Women in Conservation Leadership Network, was held to mark International Women’s Day and included a keynote lecture by Professor Rebecca Kilner from Cambridge’s Department of Zoology.

Kilner said: “Events like this are important for showing the next generation that anyone with a spark for science can work in science subjects. They are designed to excite and encourage young people to pursue their interests - and not to be held back by their gender, race or background.”

The Museum welcomed over 100 visitors to the event, which included ‘meet the scientist’ stalls, a poster exhibition, and a sneak peek at the newly refurbished Whale Hall.

More than 30 women working in different scientific fields took part, from organisations including the UN Environment World Conservation Monitoring Centre, the RSPB and the International Union for Conservation of Nature. They were joined by staff and volunteers from the Museum, and by Cambridge postgraduate students.

Dr Rosalyn Wade, the Museum’s Interpretation and Learning Officer, helped to coordinate the event. She said: “A key role for the Museum is engaging with the public and raising awareness of work in biological and environmental sciences.

“It’s important to raise awareness of the different kinds of careers available in scientific fields. A number of our visitors were GCSE and A-Level students, and it was a great opportunity for them to see the range of roles that might be available to them in the future.

“We also had lots of new mums who are thinking about a career change and were interested to learn more about different areas. It was great to see such a diverse range of people.”

The Museum has undergone a massive redevelopment, and will officially re-open to the public on 23 June.

Scientists from around the world gathered at the Museum of Zoology yesterday to celebrate and promote the work of women in conservation.

Study finds that genes play a role in empathy

Empathy has two parts: the ability to recognize another person’s thoughts and feelings, and the ability to respond with an appropriate emotion to someone else’s thoughts and feelings. The first part is called ‘cognitive empathy’ and the second part ‘affective empathy’.

Fifteen years ago, a team of scientists at the University of Cambridge developed the Empathy Quotient (EQ), a brief self-report measure of empathy. The EQ measures both parts of empathy.

Previous research showed that some of us are more empathetic than others, and that on average, women are slightly more empathetic than men. It also showed that, on average, autistic people score lower on the EQ, and that this was because they struggle with cognitive empathy, even though their affective empathy may be intact.

In a new study published in the journal Translational Psychiatry, the Cambridge team, working with the genetics company 23andMe and a team of international scientists, report the results of the largest genetic study of empathy using information from more than 46,000 23andMe customers. The customers all completed the EQ online and provided a saliva sample for genetic analysis.

The study was led by Varun Warrier, a Cambridge PhD student, and Professors Simon Baron-Cohen, Director of the Autism Research Centre at Cambridge University, Thomas Bourgeron, of the University Paris Diderot and the Institut Pasteur, and David Hinds, Principal Scientist at 23andMe.

The new study has three important results. First, it found that how empathetic we are is partly due to genetics: a tenth of the variation in empathy between individuals is explained by genetic factors. This confirms previous research examining empathy in identical versus non-identical twins.

Second, the new study confirmed that women are on average more empathetic than men. However, this difference is not due to our DNA as there were no differences in the genes that contribute to empathy in men and women.

This implies that the sex difference in empathy is the result of other non-genetic biological factors, such as prenatal hormone influences, or non-biological factors such as socialisation, both of which also differ between the sexes.

Finally, the new study found that genetic variants associated with lower empathy are also associated with higher risk for autism. 

Varun Warrier said: “This is an important step towards understanding the small but important role that genetics plays in empathy. But keep in mind that only a tenth of individual differences in empathy in the population are due to genetics. It will be equally important to understand the non-genetic factors that explain the other 90%.”

Professor Thomas Bourgeron added: “This new study demonstrates a role for genes in empathy, but we have not yet identified the specific genes that are involved. Our next step is to gather larger samples to replicate these findings, and to pin-point the precise biological pathways associated with individual differences in empathy.”

Dr David Hinds said: “These are the latest findings from a series of studies that 23andMe have collaborated on with researchers at Cambridge. Together these are providing exciting new insights into the genetic influences underlying human behaviour.”

Professor Simon Baron-Cohen added: “Finding that even a fraction of why we differ in empathy is due to genetic factors helps us understand people such as those with autism who struggle to imagine another person’s thoughts and feelings. This can give rise to disability no less challenging than other kinds of disability, such as dyslexia or visual impairment. We as a society need to support those with disabilities, with novel teaching methods, work-arounds, or reasonable adjustments, to promote inclusion.”

This study also benefitted from support from the Medical Research Council, the Wellcome Trust, the Institut Pasteur, the CNRS, the University Paris Diderot, the Bettencourt-Schueller Foundation, the Cambridge Commonwealth Trust, and St John’s College, Cambridge.

Reference
V Warrier, R Toro, B Chakrabarti, iPSYCH-Broad Autism Group, J Grove, AD Borglum, D Hinds, T Bourgeron and S Baron-Cohen. ‘Genome-wide analyses of self-reported empathy: correlations with autism, schizophrenia, and anorexia nervosa.’ Translational Psychiatry. DOI: 10.1038/s41398-017-0082-6

A new study published today suggests that how empathic we are is not just a result of our upbringing and experience but also partly a result of our genes.

This is an important step towards understanding the small but important role that genetics plays in empathy
Varun Warrier
Researcher profile: Varun Warrier

Varun Warrier is a PhD student at the Autism Research Centre, where he studies the genetics of autism and related traits. He moved to Cambridge in 2013 from India because of the Centre’s world-leading reputation.

There are several key challenges in the field, he says. “First, we have identified only a fraction of the genes associated with autism. Second, no two autistic people are alike. Third, within the spectrum autistic people have different strengths and difficulties. Finally, those with a clinical diagnosis blend seamlessly into those in the population who don’t have a diagnosis but simply have a lot of autistic traits. We all have some autistic traits – this spectrum runs right through the population on a bell curve.”

Although much of his work is computational, developing statistical tools to interrogate complex datasets that will enable him to answer biological questions, he also gets to meet many people with autism. “When I meet autistic people, I truly understand what's often said – no two autistic people are alike.”

Warrier hopes his research will lead to a better understanding of the biology of autism, and that this will enable quicker and more accurate diagnosis. “But that's only one part of the challenge,” he says. “Understanding the biology has its limits, and I hope that, in parallel, there will be better social policies to support autistic people.” 

Cambridge is an exciting place to be a researcher, he says. “In Cambridge, there's always a local expert, so if you have a particular problem there usually is someone who can help you out. People here are not just thinking about what can be done to address the problems of today; they are anticipating problems that we will face in 20 years’ time, and are working to solve those.”

One in ten stroke survivors need more help with taking medication

According to the Stroke Association, as many as four in ten people who have had a stroke go on to have another one within ten years. As a second stroke carries a greater risk of disability and death than a first stroke, it is important that survivors take medicine daily to lower their risk. There are around 1.2 million stroke survivors in the UK, and at least a third suffer from severe impairments, potentially making adherence to their medicine difficult.

Half of stroke survivors are dependent on others for everyday activities, though the proportion who are dependent on others for taking their medicines, or who need more practical help with tablets, is not known.

To examine the practical support stroke survivors living in the community need and receive with taking their medicines, researchers at the University of Cambridge and Queen Mary University of London carried out a postal questionnaire study. The researchers developed the questionnaire together with stroke survivors and caregivers. The questionnaire was completed by 600 participants across 18 GP practices in the UK.

More than half (56%) of respondents needed help with taking medication. This included help with prescriptions and collection of medicines (50%), getting medicines out of the packaging (28%), being reminded to take medicines (36%), swallowing medicines (20%) and checking that medicines have been taken (34%). Being dependent on others was linked to experiencing more unmet needs with daily medicine taking.

Around one in ten (11%) of respondents answered yes to the question “Do you feel you need more help?” The areas where respondents most commonly said they needed more assistance were being reminded to take medicines, dealing with prescriptions and collection of medicines, and getting medicines out of the packaging. Overall, around one in three (35%) of respondents said they had missed taking medicine in the previous 30 days.

Stroke survivors taking a higher number of daily medicines, and those experiencing a greater number of unmet needs with practical aspects of medicine-taking, were more likely to miss medications.

Interestingly, the researchers found that younger stroke survivors were more likely to miss their medicines, possibly because they are less likely to receive help from a caregiver.

“Because of the risk of a second stroke, it’s important that stroke survivors take their medication, but our study has shown that this can present challenges,” says Dr Anna De Simoni from the University of Cambridge and Queen Mary University of London. “In the majority of cases, they receive the help they need, but there is still a sizeable minority who don’t receive all the assistance they need.”

James Jamison at the Department of Public Health and Primary Care, University of Cambridge, who led the study as part of his PhD, adds: “Our study has shown us some of the barriers that people face to taking their medication regularly. We also learned that stroke survivors who are dependent on others are most likely to need more assistance than they currently receive.

“Our response rate was relatively low – just over one in three – so we need more research to find out if what we’ve heard from our respondents is widespread among stroke survivors. If so, this will have implications for the care provided.”

The team point to the need to develop new interventions focused on the practicalities of taking medicines and aimed at improving stroke survivors’ adherence to treatment. “Advances in technology have the potential to help improve adherence, such as electronic devices prompting medication taking times,” says Jamison. “Efforts to improve medication taking among survivors of stroke using technology are already underway and have shown promise.”

The research was supported by the Royal College of General Practitioners, National Institute for Health Research (NIHR), the Stroke Association and the British Heart Foundation.

Reference
Jamison J, Ayerbe L, Di Tanna GL, Sutton S, Mant J, De Simoni A. ‘Evaluating practical support stroke survivors get with medicines and unmet needs in primary care: a survey.’ BMJ Open (2018). DOI: 10.1136/bmjopen-2017-019874

Over half of stroke patients require a degree of help with taking medicine, and a sizeable minority say they do not receive as much assistance as they need, according to a study published today in the journal BMJ Open.

Because of the risk of a second stroke, it’s important that stroke survivors take their medication, but our study has shown that this can present challenges
Anna De Simoni
Researcher profile: James Jamison

James’ research seeks to understand why people sometimes do not take the medications prescribed by their GP – and then to use this to inform interventions aimed at improving their medication taking practices.

His day-to-day activities are very varied, he says. “They can involve anything from the development of research proposals, liaising with GP practices and pharmacies in the East of England to set up research studies, training health care professionals in research procedures, conducting interviews with patients, collecting questionnaire data or writing up research for publications in health care journals.”

However, the most rewarding and interesting part is coming face-to-face with stroke survivors and their caregivers to talk about their condition and the daily challenges they face, he adds.

“Working at Cambridge provides the opportunity to be part of a leading department conducting research in primary care and offers the potential to work with esteemed colleagues in the field,” he says. “The opportunity to deliver high quality research outputs and undertake collaborations will hopefully help further my career as a successful health care researcher.”

A Stray Sumerian Tablet: Unravelling the story behind Cambridge University Library’s oldest written object

A Stray Sumerian Tablet, a new film published today by Cambridge University Library, focuses on a diminutive clay tablet written by a scribe in ancient Iraq some 4,200 years ago. A description of the tablet, along with high-resolution images and a 3D model, can also be seen on Cambridge Digital Library.

Containing six lines of cuneiform script, and roughly the size of an adult thumb, it was donated to the University Library in 1921 but then lost to sight for many years before its rediscovery in 2016, during research for the Curious Objects exhibition, held as part of the University Library’s 600th anniversary.

The full translation of the laconic text runs as follows: 18 jars of pig fat – Balli. 4 jars of pig fat – Nimgir-ab-lah. Fat dispensed (at ?) the city of Zabala. Ab-kid-kid, the scribe. 4th year 10th month.

The man named Balli turns up regularly in other texts from the same area during the same period of history, and seems to be an official in charge of a wide range of oils: from pig fat and butter to sesame oil and almond oil.

Professor Nicholas Postgate, a Senior Fellow at the McDonald Institute for Archaeological Research at Cambridge, who has studied the tablet, said: “This little piece of clay is packed full of information from 4,200 years ago. The language is Sumerian, the oldest written language, and there are six professionally written lines of cuneiform script on it.

“In the early years of the 20th century, the antiquity market in the west was flooded, disastrously, with thousands of cuneiform tablets which had been ripped out of their original context from sites where illicit robbers were working. These tablets were then distributed across the world, from Moscow to London to Chicago.

“The content is very simple: it mentions a large quantity (22 jars) of lard or pig’s fat and gives the name of the responsible official (Balli). It states that this fat was dispensed in the city of Zabala. We think these jars were eighty litres each, so that means we’re talking about hundreds of litres of lard.”

Since it was displayed as part of Curious Objects, Professor Postgate has conducted further research on the tablet and plans to publish an academic paper on both the tablet and its text later this year.

“We may be able to reconstruct what’s going on in individual tablets, but we can never reconstruct the physical archaeological context from which they came, so there’s a great loss of information there,” he added.

“Since the 1920s, many other tablets from the same archive have surfaced all over the world and our own small tablet makes its own contribution to the reconstruction of a government office more than 4,000 years ago.”

The story surrounding the oldest written document at one of the world’s great research libraries has been unravelled in a new film.

This little piece of clay is packed full of information from 4,200 years ago
Nicholas Postgate
The 4,200-year-old Sumerian tablet, the oldest written object belonging to Cambridge University Library
