Articles on this Page
- 06/01/16--16:01: Sex and the brain: fruitless research?
- 06/01/16--21:00: Genetic switch that turned moths black also colours butterflies
- 06/03/16--02:52: Waterworld: can we learn to live with flooding?
- 06/03/16--02:00: Squeezing out opal-like colours by the mile
- 06/05/16--16:01: Women and people under the age of 35 at greatest risk of anxiety
- 06/06/16--16:01: Larger wine glasses may lead people to drink more
- 07/06/16--06:30: Blueprint for success: what makes a city thrive?
- 06/08/16--01:44: Minecraft tree “probably” the tallest tree in the Tropics
- 06/09/16--02:38: Cellulose: new understanding could lead to tailored biofuels
- 06/10/16--02:20: Going green: why don't we all do it?
The technique involves identifying genetic variants that mimic the action of a drug on its intended target and then checking in large patient cohorts whether these variants are associated with risk of other conditions, such as cardiovascular disease.
When developing a new drug for market, pharmaceutical companies must not only demonstrate that it is effective at treating a particular condition, but also that it does not have adverse side-effects in patients. For example, the Food and Drug Administration, which approves all new medicines for use in the USA, requires any new anti-diabetic medicine to demonstrate cardiovascular safety. However, in many cases adverse safety profiles do not become apparent until late in the drug development process, by which point millions – possibly even billions – of pounds will have been invested.
In a study published today in the journal Science Translational Medicine, scientists have provided a proof of concept that it is possible to use genetic analyses to demonstrate systematically at a very early stage whether a drug will alter the risk of developing other conditions.
A major class of anti-diabetic therapies are the glucose-lowering glucagon-like peptide-1 receptor (GLP1R) agonists. These drugs bind to the GLP-1 receptor (which is encoded by the GLP1R gene) to increase insulin production, helping reduce levels of blood sugar. However, the cardiovascular safety of this class of agents – including, for example, whether they alter the risk of heart disease – remains unknown.
By analysing genetic variations in DNA encoding drug targets for type 2 diabetes and obesity in almost 12,000 individuals, the researchers identified a variant in the GLP1R gene that was associated with lower fasting glucose and a lower risk of type 2 diabetes – in other words, the variant appeared to mimic the action of the diabetes drugs. They confirmed this result in a further 40,000 individuals.
The researchers then used genetic data available through an international data-sharing consortium to study the association of that same variant with coronary heart disease in almost 62,000 individuals with coronary heart disease and over 160,000 controls. They found that the variant actually reduced the risk of heart disease. Long-term, large-scale randomised controlled clinical trials to evaluate the cardiovascular safety of GLP1R agonists are underway, and results from a large trial are scheduled to be released later this month.
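The logic of such a case–control comparison can be illustrated with a toy calculation. The counts below are invented for illustration and are not the study's data; the point is only the direction of the arithmetic: a variant that is relatively rarer among heart-disease cases than among controls yields an odds ratio below 1, i.e. a protective association.

```python
import math

def odds_ratio(carriers_cases, noncarriers_cases,
               carriers_controls, noncarriers_controls):
    """Odds ratio for carrying a variant, cases vs controls (2x2 table)."""
    odds_cases = carriers_cases / noncarriers_cases
    odds_controls = carriers_controls / noncarriers_controls
    return odds_cases / odds_controls

# Hypothetical counts: the variant is slightly rarer among the ~62,000
# heart-disease cases than among the ~160,000 controls.
or_chd = odds_ratio(5_800, 56_200, 17_600, 142_400)
log_or = math.log(or_chd)  # a negative log-odds indicates a protective direction

print(f"OR = {or_chd:.3f}, log(OR) = {log_or:.3f}")
```

In practice, association studies estimate such effects from regression models adjusted for covariates and report confidence intervals; a raw 2×2 odds ratio like this conveys only the direction and rough size of an association.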
“This further suggests that human genetics can support the development of new therapies, and can offer insights into their safety profile early in the development process,” says Dr Robert Scott from the Medical Research Council (MRC) Epidemiology Unit at the University of Cambridge, the study’s first author.
Professor Nick Wareham, Director of the MRC Epidemiology Unit, added: “These findings suggest that beyond their effectiveness in treating diabetes, these drugs may have the added benefit of lowering risk of heart disease.”
“Researching and developing new medicines is a lengthy, expensive and risky journey, and any insights we can gain into the processes of the body related to disease could help improve our ability to succeed,” says Dr Dawn Waterworth, joint senior author from GSK. “By pooling our resources and expertise in collaborations like this one with Cambridge University, we believe there’s an opportunity to expand our knowledge of disease biology, which in turn could help reduce the risk of late-stage failures and accelerate the development of innovative new treatments for patients.”
The study was primarily funded by GSK and the Medical Research Council.
Scott, R et al. A genomic approach to therapeutic target validation identifies a glucose-lowering GLP1R variant protective for coronary heart disease. Sci Transl Med; 2 June 2016; DOI: 10.1126/scitranslmed.aad3744
An approach that could reduce the chances of drugs failing during the later stages of clinical trials has been demonstrated by a collaboration between the University of Cambridge and pharmaceutical company GlaxoSmithKline (GSK).
The sex life of the fruit fly is a simple affair. If a fly smells male pheromones, its response is clear and consistent, regardless of whether it is male or female. The pheromones activate different clusters of neurons in the brains of male and female flies: a female will engage in courtship behaviour, while a male will become more aggressive. But the differences do not end there.
The fruit fly carries an important gene nicknamed ‘fruitless’. It’s the master gene that controls the male fruit fly’s courtship ritual; when disabled, male flies don’t mate. In contrast, when the gene is activated in females, they show male courtship behaviour and begin wooing other females.
For humans, the story is far more complex, and the study of sex differences in the brain is more controversial and more emotionally laden than in any other species. This hot topic is frequently misrepresented in the media: studies on sex differences are often oversimplified and taken out of context. For example, some articles claiming that we now know “why men are so obsessed with sex” were merely reporting a study on worms. This style of reporting promotes stereotypes and misconceptions about science. The truth is that the brains of men and women have a lot in common.
The Royal Society recently released a special issue on sex differences in the brain. It features an opinion piece which argues that human brains do not fall into the two distinct categories of male and female. The piece is partly based on a study from last year: a revolutionary analysis of some 1,400 human brains. The authors looked at the volume of brain regions and the connectivity between them to select the areas that differed most between the sexes. For each area, the researchers then designated the upper and lower ends of the spectrum as either ‘male’ or ‘female’, according to where men or women were more prevalent. If brains truly fell into two distinct categories, we would see brains which had either all ‘male’ or all ‘female’ areas. The study revealed that such consistent brains are indeed rare. Our brains are more like a patchwork quilt, with most people having a mixture of features, some typically ‘male’, some typically ‘female’, and some common to both sexes.
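The counting logic behind this conclusion can be sketched in a few lines. Everything below is simulated with made-up scores, region counts and thresholds (the study used real volumetric and connectivity measures); the point is the classification rule: a brain counts as internally consistent only if every selected region falls at the same, ‘male’ or ‘female’, end of its spectrum.

```python
import random

random.seed(0)
N_REGIONS = 10    # regions that differ most between the sexes (hypothetical count)
N_BRAINS = 1400   # sample size comparable to the study

def label_region(score, male_cut=0.7, female_cut=0.3):
    """Label one region's score as at the 'male' end, 'female' end, or in between."""
    if score >= male_cut:
        return "male"
    if score <= female_cut:
        return "female"
    return "mixed"

consistent = 0
for _ in range(N_BRAINS):
    # One made-up score per region; real data would be anatomical measurements.
    labels = {label_region(random.random()) for _ in range(N_REGIONS)}
    # A brain is consistent only if every region lands at the same extreme.
    if labels == {"male"} or labels == {"female"}:
        consistent += 1

print(f"{consistent} of {N_BRAINS} simulated brains are internally consistent")
```

Under this rule, all-‘male’ or all-‘female’ brains are vanishingly rare even in random data, which mirrors the paper's "patchwork quilt" finding: most brains mix features from both ends.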
Biology alone cannot explain why our brains are such a colourful mixture; we also need to consider the environment. How stressed was your mother during pregnancy? Did you grow up with close friends? How often did you exercise? All these factors will influence the development of your brain and consequently its appearance today. Even as an adult, your daily experiences shape the anatomy of your brain.
Despite the patchwork structure of our brains, there seem to be differences in brain anatomy between the average man and woman. But do these differences necessarily cause different behaviours? Actions such as mating, navigating through London, or writing an essay are controlled by complex networks. The underlying anatomy is important, but so are other internal and external influences, such as stress, hunger, or exhaustion. Our behaviour is shaped by many pathways.
Geert de Vries, director of the Neuroscience Institute at Georgia State University, has another take on sex differences in the brain. He argues that these differences do not produce, but instead prevent differences in behaviour. According to de Vries, men and women differ dramatically in their physiology and hormones; having different brains might be a way of compensating for these differences. Do male and female brains develop differently in order to promote similar behaviour?
We do not know if these structural differences really are compensatory. However, this concept is not new and we can observe such compensations on other levels. For example, female mammals have two copies of the X chromosome in their cells, while males only receive one copy. If all chromosomes were equally active, females would make twice as many gene products from their X chromosomes as males. To prevent this, females silence one of their X chromosomes, a process known as X-inactivation. A similar process might happen with brain structures on a more complex level.
So our brains are not distinctly male or female, and structural differences do not necessarily cause behavioural differences. Then why study sex differences at all? There are five times more studies with all-male than all-female animals in neuroscience and pharmacology; only one in four studies includes animals of both sexes. Researchers worry that hormonal fluctuations in females could confound their results, and often believe that sex differences are irrelevant for the research question. However, results from males do not always apply to females. Some drugs such as aspirin are taken up or cleared away differently in men and women. Sex is also important for some diseases: multiple sclerosis is more common in women, as are depression and anorexia. On the other hand, autism and some addictions are more common in males.
Clearly it is not sufficient to investigate these questions using subjects of only one sex. How can we expect to get the whole picture by looking at only half of the population? Since 1993, the inclusion of women has been a requirement in clinical trials funded by the National Institutes of Health (NIH) in the USA. Since 2014, animal studies funded by the NIH must also include females. Many scientific journals now ask authors to publish the numbers of males and females included in their samples.
Steps like these are necessary to learn more about how sex and gender influence our development and eventually our brains. The findings need to be analysed and communicated carefully. Men and women might be different in subtle ways, but our similarities probably outweigh the differences. A small change in your complex anatomy would usually not reverse your behaviour – after all, you’re not a fruit fly!
This is an edited version of the article Does your brain have a sex?
The male and female brains have more in common than media reports often suggest, argues Julia Gottwald, a third year PhD student at the Department of Psychiatry. Writing in the student science magazine BlueSci, she explains what we understand about the similarities and differences in our brains and why this is an important area of research.
The same gene that enables tropical butterflies to mimic each other’s bright and colourful patterning also caused British moths to turn black amid the grime of the industrial revolution, researchers have found.
Writing in the journal Nature, a team of researchers led by academics at the Universities of Cambridge and Sheffield, report that a fast-evolving gene known as “cortex” appears to play a critical role in dictating the colours and patterns on butterfly wings.
A parallel paper in the same journal, by researchers from the University of Liverpool, shows that this same gene caused the peppered moth to turn black during the mid-19th century, when it evolved new camouflage in response to the soot of industrial pollution.
The finding offers clues about how genetics plays a role in making evolution a predictable process. For reasons the researchers have yet to understand in full, the cortex gene, which helps to regulate cell division in butterflies and moths, has become a major target for natural selection acting on colour and pattern on the wings.
Chris Jiggins, Professor of Evolutionary Biology and a Fellow of St John’s College, University of Cambridge, said: “What’s exciting is that it turns out to be the same gene in both cases. For the moths, the dark colouration developed because they were trying to hide, but the butterflies use bright colours to advertise their toxicity to predators. It raises the question that given the diversity in butterflies and moths, and the hundreds of genes involved in making a wing, why is it this one every time?”
Dr Nicola Nadeau, a NERC Research Fellow from the University of Sheffield added: “It’s amazing that the same gene controls such a diversity of different colours and patterns in butterflies and a moth. Our study, together with the findings from the University of Liverpool, shows that the cortex gene is important for colour and pattern evolution in this whole group of insects.”
Butterflies and moths comprise the order of insects known as Lepidoptera. Nearly all of the 160,000 types of moth and 17,000 types of butterfly have different wing patterns, which are adapted for purposes like attracting mates, giving off warnings, camouflage (also known as “crypsis”), and thermal regulation.
These wing patterns are actually made up of tiny coloured scales arranged like tiles on a roof. Although they have been studied by biologists for over a century, the molecular mechanisms which control their development are only now starting to be uncovered.
The peppered moth is one of the most famous examples of evolution by natural selection. Until the 19th Century, peppered moths were predominantly pale-coloured, and used this to camouflage themselves against lichen-covered tree trunks, which made them almost invisible to predators.
During the industrial revolution, however, the lichen on trees in some parts of the country was killed by pollution, and soot turned the trunks black. A corresponding change was seen in peppered moths, which turned black as well, helping them to remain camouflaged from birds. The process is known as industrial melanism – melanism meaning the development of dark-coloured pigmentation.
The Liverpool-led team found that this colour change was produced by a mutation in the cortex gene, which occurred during the mid-1800s, just before the first reported sighting of black peppered moths. Fascinatingly, however, the Cambridge-Sheffield study has now shown that exactly the same gene also influences the extremely bright and colourful patterns of Heliconius – the name given to about 40 closely related species of beautiful, tropical butterflies found in South America.
Heliconius colour patterns are used to send a signal to potential predators that the butterflies are toxic if eaten, and different types of Heliconius butterfly mimic one another by using their bright colours as warning signals. Unlike the dark colouring of the peppered moth, it is therefore an evolutionary development that is meant to be seen.
The researchers carried out fine-scale mapping, looking for parts of the DNA sequence that were specifically different in butterflies with different patterns, in three different Heliconius species, and in each case the cortex gene was found to be responsible for this adaptation in their patterning.
Because Heliconius species are extremely diverse, the study of what causes variations in their patterning can provide more general clues about the genetic switches that control diversification in species.
In most cases, the genes responsible for these processes are known as “transcription factors” – meaning that they are responsible for turning other genes on and off. Intriguingly, what made cortex such an elusive switch to spot was the fact that it does not do this. Instead, it is a cell cycle regulator, which means that it controls when cells divide and thus when different coloured scales develop within a butterfly wing.
“It’s a different gene to the one we might have expected and we still need to do more to understand exactly what it’s doing, and how it’s doing it,” Jiggins said. Dr Nadeau added “Our results are even more surprising because the cortex gene was previously thought to only be involved in producing egg cells in female insects, and is very similar to a gene that controls cell division in everything from yeast to humans.”
Nadeau N. et al. The gene cortex controls mimicry and crypsis in butterflies and moths. Nature, 2 June 2016; DOI: 10.1038/nature17961
Additional image: The study reveals that the black colour of the moth (above) and the yellow patches on the butterfly (below) were caused by the same gene, known as “cortex”. Credits: Yikrazuul and Loz, both via Wikimedia Commons.
Heliconius butterflies have evolved bright yellow colours to deter predators, while peppered moths famously turned black to hide from birds. A new study reveals that the same gene causes both, raising fascinating questions about how evolution by natural selection occurs in these species.
Every day, millions of people take to search engines with common concerns, such as “How can I lose weight?” or “How can I be productive?” In return, they find articles that offer simple advice and quick solutions, supposedly based on what “studies have shown.”
A closer look at these articles, however, reveals a troubling absence of scientific rigor. Few bother to cite research or discuss studies’ methodologies or limitations. The authors seldom have scientific training.
As young scientists from four diverse fields (psychology, chemistry, physics and neuroscience), we’ve noticed that much writing about science, particularly on topics most relevant to the daily lives of readers, is currently failing to resolve the trade-off between accessibility and accountability. Rigorous findings shared by researchers in specialist journals are obscured behind jargon and paywalls, while accessible science shared on the internet is untrustworthy, unregulated and often click-bait.
If this communication crisis is due to a lack of scientifically literate voices, the solution may be for more scientists to enter the fray. Scientists have the expertise to publicly correct misinterpretations of their and others' data. By developing new ways to disseminate science knowledge, they can help prevent inaccurate and overhyped stories from gaining traction. We argue that scientists bear a responsibility to reform the way their work is ultimately communicated.
Science gets lost in translation
Scientific publication – which operates through an intensive peer review process – is flourishing. In 2014, over 2.5 million scholarly articles were published on topics that ranged from how to reduce carbon emissions to how Twitter influences the rate of heart disease and how regular exercise can prevent inflammation associated with rheumatic diseases. Because of recent research, we know there’s little evidence that genetically modified vegetables are unhealthy, and that eating less meat is a simple way to positively influence the environment.
These are important messages, and when people don’t hear or listen to them, there can be serious consequences. Misinformed campaigns arise against vaccinations, and near-extinct diseases return. Mental illness remains shamefully stigmatized. Climate change is dismissed as fiction. People become erroneously convinced that red meat causes cancer and that eating dark chocolate helps weight loss.
Rigorous science is locked away
So how can we ensure that everyone has access to useful science knowledge?
Most scientific articles are aimed at an audience of other experts in highly specific fields, making them ill-suited for popular consumption. Between complex methodological language and frequent acronyms, even scientists have trouble following the jargon specific to other fields, leaving little hope for those with less scientific training.
An even more pressing issue, however, is that people outside of research institutions can’t even access most journal articles. Many of these papers are hidden behind a publisher paywall, and nonsubscribers are forced to pay US$30-$50 for a single article.
These paywalls are not merely obstructive; we would argue they’re also unethical. Most research is publicly funded, yet taxpayers are charged to consume scientific articles.
Ideally, scientific publishing will transition to healthy open-access journals that serve both researchers and readers. Legislation regarding quasi-monopolistic scientific publishing companies, predatory publishing practices and public access to primary scientific sources would go far to serve this end.
The European Union recently stipulated that all publicly funded research articles be freely accessible by 2020, but the United States has not yet passed a similar mandate. Scientists will play a crucial role in calling for and implementing these kinds of changes.
The public wants accessible science
As debates over open access continue, people’s desire and need for evidence-based solutions to medical and social dilemmas has not diminished. As a consequence, we see a rising tide of popular science outlets that are more accessible both in content and availability than the research journals some of their content is ostensibly based on.
These platforms range in accuracy, from questionable blogs preaching “7 ways to get happy now” to serious websites and magazines like Discover and American Scientist. As part of our own efforts to bridge the divide between accessibility and accuracy, we each contribute content to the nonprofit Useful Science, which curates research for the general public through short reviewed summaries and an in-depth podcast.
However, even reputable sources are not immune to sensational headlines. In 2012, an article in ScienceNews on female mimicry in snakes was titled “She-male garter snakes: some like it hot.” An article on male sheep neuroendocrinology was headlined “Brokeback mutton” by the Washington Post, and “Yep, they’re gay” by Time. This unfortunate trend in popular science suggests that open-access publishing, even if it does proliferate, would still need to compete with flashier posts that sacrifice strict validity for clicks.
The growth of science communication websites that solicit and address questions and feedback directly and immediately from the general public provides some hope. These include Quora and communities on Reddit such as AskScience. The popularity of these resources (AskScience has over eight million subscribers) shows that a good portion of the public wants scientific information communicated, on demand, in an accurate and approachable manner. Furthermore, a lack of direct incentive for contributors may make content manipulation less likely.
These efforts are laudable but suffer from a lack of accountability – any author can claim to be speaking from a perspective of expertise. Even in the best cases, when authors have training in science or its communication, advice is not scrutinized prior to posting.
There are ways to resolve these problems. Science journalists should solicit feedback from independent experts before publishing. Posts in scientific communities could go through an expedited peer-review process. In all cases, scientists and science communicators should be working together to match the accessibility of their content with accuracy and precision.
Who will lead the revolution?
The present state of science communication reveals important work to be done, but no clear assignment of responsibility for doing it.
Some responsibility seems to fall on scientific journals, but most journals are profit vehicles, not conscientious individuals. Some seems to fall on media outlets, but many websites and magazines are squeezed by intense competition for ad revenue. Furthermore, reporters are seldom trained to understand science, let alone contribute to the discipline’s evolution.
The onus, then, is on scientists. There are 20 million people with science or engineering degrees in the United States alone. Instead of passively consuming media with outrageous scientific claims, it should be scientists’ personal responsibility to make research freely available, and to moderate accessible scientific communities so they’re accurate and accountable. Scientists should also work with journalists to set guidelines for media publication, such as a vetting process where popular articles are approved by experts in the field before publication, and should speak up when inaccurate information is disseminated.
It’s time for the scientific community to act; not only as individuals, but also as interdisciplinary groups. If scientists do so, the next generation of science communication vehicles may be coalitions of journalists and researchers (as in The Conversation’s collaborative model) who can disseminate messages that are both exciting and responsible. Science will not only be more interesting and accountable. It will also be more useful.
Joshua Conrad Jackson, Doctoral Student, Department of Psychology and Neuroscience, University of North Carolina – Chapel Hill; Ian Mahar, Postdoctoral Research Fellow, Neuroscience, Boston University; Jaan Altosaar, Ph.D. Student in Physics, Princeton University, and Michael Gaultois, Postdoctoral Researcher in Chemistry, University of Cambridge
The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.
Michael Gaultois (Department of Chemistry), Joshua Conrad Jackson (University of North Carolina - Chapel Hill), Ian Mahar (Boston University), and Jaan Altosaar (Princeton University) discuss why much reporting on science is currently failing to resolve the trade-off between accessibility and accountability.
It has been suggested, as the EU referendum approaches, that younger voters are more likely to vote to remain than their older compatriots. A poll conducted in April showed 54% of over-55s backing Brexit, while only 30% said they would vote to remain in the EU. It showed almost exactly the reverse among voters aged between 18 and 34. Those aged 35–54 were more evenly split, with 38% saying they’d vote to remain and 42% saying they’d leave.
There are a host of possible reasons why older people might be more likely to vote to leave the European Union. They may be xenophobic or they may distrust an alien, distant political system. They may believe that Europe is not democratic. They may fear losing national sovereignty to Brussels.
But at the core of the older Brexiter’s thinking is a combination of nostalgia, uncertainty and bloody-mindedness. Their views are born of dissatisfaction with established practices and bewilderment over technological innovation and information overload.
The world is too fast, too mobile and too globalised. Getting out of Europe would mark a return to more old-fashioned values, a half-remembered simpler life when politicians could be trusted, the media was restrained and Britain was sovereign.
There seems to be a nostalgic vision of a Great Britain, untrammelled by external pressures and domestic vicissitudes. We may know it’s wrong but, given the discomfort and mistrust of contemporary politics and the global economy, it seems as though it was somehow better then.
Older voters look back on a period of mass production, when unions were strong and governments championed their nations. Now they see a world based on competing technologies and transnational flows of investment that undermine governments’ ability to manage the economy. They see cross-border migration that seems to displace the familiar with the (often much needed) foreign carer, health-worker, construction worker or even footballer. All these forces have created hostility towards globalisation and that hostility easily finds a scapegoat in the EU.
Brexiters perhaps forget Suez in 1956 or the IMF intervention of 1976, when the hollowness of Britain’s control over sterling and its economy became only too obvious. They overlook the collapse of Britain’s heavy industry from the 1960s onwards or any winter of discontent and union strike action, or even the bloodiness of Britain’s withdrawal from South-East Asia or East Africa.
But to add to uncertainties about the new and hostility towards the “global” (read “foreign”) there is an element of bewilderment. Gone are “reliable” sources of authoritative information. Governments have lost whatever role they had as gatekeeper between the international and the domestic; domestically, the conventional media often seem to take a delight in being anti-government – regardless of that government’s political hue – for their own political or other reasons, creating suspicion of both.
And, if one can master it, there is the web – where an infinite number of websites offer up limitless information. But the provenance is often dubious and the multiplicity of sources is bewildering. One doesn’t know who to trust. It compounds that loss of deference towards authority and exacerbates uncertainties.
It is not surprising that so many feel they don’t know enough about the EU but also don’t know who to trust for objective information. Or, of course, they can’t be bothered to find out – after all Europe was relatively low on their agenda even if it’s temptingly easy to identify it with unrestrained immigration. And concern for their children’s and even grandchildren’s job prospects leads to the demand for British jobs undertaken by British people – even if they don’t seem to want to do them or don’t have the qualifications to do them.
Perhaps the biggest contradiction of all, though, is the desire to return to old certainties and thereby reduce risk, leading to support for the leap into the unknown. Older voters who support Brexit may have that dream, but it adds up to little more than a nightmare of uncertainties.
The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.
Geoffrey Edwards (Department of Politics and International Studies) discusses what motivates some people to support Brexit.
In December 2015, Storm Desmond hit the north of the UK. In its wake came floods, the misery of muddy, polluted water surging through homes and the disruption of closed businesses, schools and roads.
Rapid urban growth and progressively unpredictable weather have focused attention on the resilience of cities worldwide not just to extreme events, but also to heavier-than-normal rainstorms, and raised questions as to how flood risk can be managed.
There is of course no ‘one-size-fits-all’ strategy. For some areas, defence is a possibility. For others, retreat is the only option. “But for those unable to do either, we need to fundamentally rewrite the rule book on how we perceive water as a hazard to towns and cities,” says Ed Barsley, a PhD student working with Dr Emily So in the Cambridge University Centre for Risk in the Built Environment (CURBE). Barsley believes that adaptation and planning for resilience can provide a unique opportunity for increasing the quality of towns and cities.
Dr Dick Fenner from Cambridge's Department of Engineering agrees that resilience to water should be regarded positively. He is part of the UK-wide Blue–Green Cities project, which is developing strategies to manage urban flood risk in ways that also pay dividends in many other areas, through ‘greening’ the city. “We want to turn rainfall into a win-win-win event,” he says.
When it comes to dealing with floods, one of the major difficulties that many cities face is the impermeability of the built environment. In a city that is paved, concreted and asphalted, surface water can’t soak away quickly and naturally into the earth.
Newcastle city centre, for instance, is around 92% impermeable, and has suffered major flooding in the past. “The ‘flood footprint’ of the 2012 ‘Toon Monsoon’ caused around £129 million in direct damages and £102 million in indirect damages, rippling to economic sectors far beyond the physical location of the event,” says Fenner.
“Traditionally, cities have been built to capture water run-off in gutters and drains, to be piped away. But where is away? And how big would we have to build these pipes if the city can’t cope now?” he adds. The principle behind a ‘Blue–Green City’ is to create a more natural water cycle – one in which the city’s water management and its green infrastructure can be brought together.
Cities worldwide are already taking up the concept of ‘greening’, using permeable paving, bioswales (shallow ditches filled with vegetation), street planting, roof gardens and pocket parks. Green infrastructure benefits health and biodiversity, and can help combat rising CO2 levels, heat island effects, air pollution and noise.
“Not only do they provide a place for water to soak away,” says Fenner, “they can even create resources from water – such as generating energy from the water flow through sustainable drainage systems and providing places for amenity and recreation.”
All well and good but with a long list of potential ‘blue–green’ choices, and an equally long list of benefits, how do cities choose the best options?
One of the major outputs of the Blue–Green Cities initiative is a ‘toolbox’ for authorities, planners, businesses and communities to help them decide. Using Newcastle University’s CityCat model, the team assessed how well green infrastructures performed in holding back surface flows, and used novel tracer techniques to follow the movement and trapping of sediments during intense storms. Then they mapped the benefits in a geographic information system (GIS) to identify physical locations that are ‘benefit hotspots’.
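The overlay step behind a ‘benefit hotspot’ map can be pictured with a toy example: several gridded benefit layers are weighted, summed, and thresholded to flag cells where benefits cluster. This is only a sketch of the general GIS overlay technique, not the project’s actual toolbox; the layer names, weights and threshold are illustrative assumptions.

```python
# Toy GIS-style overlay: weight and sum gridded benefit layers,
# then flag cells above a threshold as "benefit hotspots".
# Layers, weights and threshold are invented for illustration.
import numpy as np

def benefit_hotspots(layers, weights, threshold):
    """Weighted sum of benefit rasters, plus a boolean hotspot mask."""
    combined = sum(w * layer for layer, w in zip(layers, weights))
    return combined, combined >= threshold

# Three 4x4 "rasters": flood attenuation, amenity value, biodiversity,
# each scored 0-1 per grid cell (random values stand in for real data).
rng = np.random.default_rng(0)
flood = rng.random((4, 4))
amenity = rng.random((4, 4))
biodiversity = rng.random((4, 4))

combined, hotspots = benefit_hotspots(
    [flood, amenity, biodiversity], weights=[0.5, 0.3, 0.2], threshold=0.6
)
print(f"{int(hotspots.sum())} of {hotspots.size} cells flagged as hotspots")
```

In a real toolbox the layers would come from hydraulic modelling and surveys rather than random numbers, and the weights would reflect stakeholder priorities.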
The tools were developed by evaluating the performance benefits of green infrastructure gathered from sites in both the UK and USA. As part of a recent 12-month demonstration study in Newcastle, a Learning Action Alliance network was set up with local stakeholders that has, says Fenner, led to new opportunities that reflect the priorities and preferences of communities and local residents.
Now, Newcastle City Council, the Environment Agency, Northumbrian Water, Newcastle University, Arup and Royal Haskoning DHV have combined to be the first organisations in the country to explicitly commit to a blue–green approach, as recommended by the research. The hope is that other local and national organisations will follow suit.
Embracing resilience, as these organisations are doing, is vitally important when dealing with natural hazards, says Emily So, who leads CURBE: “We should remember that flooding is a natural process and a hazard we need to learn to live with. It is often the disjointed configuration of the built environment that results in it being a risk to the communities. Our aim should be to design to reduce the impact of, and our recovery time from, this natural hazard.”
Fenner adds: “Continuing to deliver an effective and reliable water and wastewater service despite disruptive challenges such as flooding is hard, but vital; it requires continuous and dramatic innovation. In the future, we will see fully water-sensitive cities, where water management is so good that it’s almost as if the city isn’t there.”
The Blue–Green Cities project is funded by the Engineering and Physical Sciences Research Council (EPSRC), involves researchers from nine UK universities and is led by the University of Nottingham. A parallel project, Clean Water for All, funded by EPSRC and the National Science Foundation, connects the team with researchers in the USA.
Inset image: Ed Barsley.
Flash floods, burst riverbanks, overflowing drains, contaminants leaching into waterways: some of the disruptive, damaging and hazardous consequences of having too much rain. But can cities be designed and adapted to live more flexibly with water – to treat it as friend rather than foe?
While the Blue–Green Cities project focuses on urban drainage at times of normal to excessive rainfall, Ed Barsley is more concerned with helping communities consider the consequences of extreme events.
“Floods are devastating in their impact and flood risk is often seen as a burden to be endured,” says Barsley, “but future proofing and planning for resilience can and should be used as a driver for increasing the quality of buildings, streets and neighbourhoods – a chance for exciting change in our cities.”
As a case study, Barsley is using the village of Yalding in Kent, which has endured physical, economic and psychological impacts as a result of flooding.
He looked at how each house in the village prepared for and was affected by its most recent flood, its location and building material, and even its threshold height, measured to the millimetre; then he looked at future flood risk scenarios. The result is a methodology for assessing resilience that can help inform and plan for adaptation, and that is transferable to other communities, large or small, across the UK and worldwide.
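The core property-level comparison in this kind of assessment can be sketched very simply: given a modelled flood depth, flag every property whose threshold height would be overtopped. The property names, threshold heights and scenario depths below are invented for illustration and are not from the Yalding study.

```python
# Illustrative sketch of a property-level flood exposure check:
# a property floods in a scenario if the water depth exceeds its
# threshold height. All figures below are hypothetical.
properties = {
    # name: threshold height above local datum, in millimetres
    "Riverside Cottage": 150,
    "Mill House": 450,
    "High Street Shop": 300,
}

def properties_at_risk(properties_mm, flood_depth_mm):
    """Return, sorted, the properties a given flood depth would overtop."""
    return sorted(
        name for name, threshold in properties_mm.items()
        if flood_depth_mm > threshold
    )

for scenario_mm in (100, 200, 500):
    flooded = properties_at_risk(properties, scenario_mm)
    print(f"{scenario_mm} mm flood: {len(flooded)} flooded {flooded}")
```

A full methodology would add preparedness, building materials and recovery time, but the threshold comparison is the kernel that makes millimetre-level survey data worth collecting.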
“When we communicated the risks to the community, we found that resilience means different things to different people. Understanding priorities can help them tailor their own strategy to be contextually appropriate,” explains Barsley, who is special advisor on flood risk in the South East to Greg Clark MP, Secretary of State for the Department for Communities and Local Government.
For homes in which resistance measures like flood barriers will be overcome, one option might be to regard the lower floor as a sacrificial space – an area that can be flooded without disrupting waste, power or water. In Yalding, there are examples of homeowners who have done just this and added an extra storey to their homes.
“I’d like to see resilience rewarded and for us to begin to live with water in a different manner. Embedding long-term resilience has huge potential for creating vibrant and enriching spaces.”
The team, led by the University of Cambridge, have invented a way to make such sheets on industrial scales, opening up applications ranging from smart clothing for people or buildings, to banknote security.
Using a new method called Bend-Induced-Oscillatory-Shearing (BIOS), the researchers are now able to produce hundreds of metres of these materials, known as ‘polymer opals’, on a roll-to-roll process. The results are reported in the journal Nature Communications.
Some of the brightest colours in nature can be found in opal gemstones, butterfly wings and beetles. These materials get their colour not from dyes or pigments, but from the systematically-ordered microstructures they contain.
The team behind the current research, based at Cambridge’s Cavendish Laboratory, have been working on methods of artificially recreating this ‘structural colour’ for several years, but to date, it has been difficult to make these materials using techniques that are cheap enough to allow their widespread use.
In order to make the polymer opals, the team starts by growing vats of transparent plastic nano-spheres. Each tiny sphere is solid in the middle but sticky on the outside. The spheres are then dried out into a congealed mass. By bending sheets containing a sandwich of these spheres around successive rollers, the spheres are forced into perfectly arranged stacks, by which stage they have intense colour.
By changing the sizes of the starting nano-spheres, different colours (or wavelengths) of light are reflected. And since the material has a rubber-like consistency, when it is twisted and stretched, the spacing between the spheres changes, causing the material to change colour. When stretched, the material shifts into the blue range of the spectrum, and when compressed, the colour shifts towards red. When released, the material returns to its original colour. Such chameleon materials could find their way into colour-changing wallpapers, or building coatings that reflect away infrared thermal radiation.
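The link between sphere size and colour follows a Bragg-type reflection condition: at normal incidence the peak reflected wavelength is roughly λ = 2·d·n_eff, where d is the spacing of the close-packed (111) planes (d = D·√(2/3) for spheres of diameter D) and n_eff is the effective refractive index. Stretching thins the film, shrinking d and shifting the reflection towards blue, as described above. The sketch below assumes n_eff ≈ 1.5; the diameters are illustrative, not values from the paper.

```python
# Back-of-envelope estimate of the reflected colour of a close-packed
# sphere stack, using the Bragg condition lambda = 2 * d111 * n_eff.
# n_eff = 1.5 and the sphere diameters are illustrative assumptions.
import math

def reflected_wavelength_nm(sphere_diameter_nm, n_eff=1.5):
    """Peak reflected wavelength (nm) at normal incidence,
    with d111 = D * sqrt(2/3), the fcc (111) plane spacing."""
    d111 = sphere_diameter_nm * math.sqrt(2.0 / 3.0)
    return 2.0 * d111 * n_eff

for diameter in (180, 220, 260):  # nanometre sphere diameters
    wl = reflected_wavelength_nm(diameter)
    print(f"{diameter} nm spheres -> ~{wl:.0f} nm reflection")
```

The trend is the point: larger spheres push the reflection towards the red end of the spectrum, smaller spheres towards the blue, without any dye or pigment.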
“Finding a way to coax objects a billionth of a metre across into perfect formation over kilometre scales is a miracle,” said Professor Jeremy Baumberg, the paper’s senior author. “But spheres are only the first step, as it should be applicable to more complex architectures on tiny scales.”
In order to make polymer opals in large quantities, the team first needed to understand their internal structure so that it could be replicated. Using a variety of techniques, including electron microscopy, x-ray scattering, rheology and optical spectroscopy, the researchers were able to see the three-dimensional position of the spheres within the material, measure how the spheres slide past each other, and how the colours change.
“It’s wonderful to finally understand the secrets of these attractive films,” said PhD student Qibin Zhao, the paper’s lead author.
Cambridge Enterprise, the University’s commercialisation arm, has been contacted by more than 100 companies interested in using polymer opals, and a new spin-out, Phomera Technologies, has been founded. Phomera will look at ways of scaling up production of polymer opals, as well as selling the material to potential buyers. Possible applications the company is considering include coatings for buildings to reflect heat, smart clothing and footwear, and banknote security and packaging.
The research is funded as part of a UK Engineering and Physical Sciences Research Council (EPSRC) investment in the Cambridge NanoPhotonics Centre, as well as the European Research Council (ERC).
Q. Zhao et al. “Large-scale ordering of nanoparticles using viscoelastic shear processing”, Nature Communications (2016); DOI: 10.1038/ncomms11661
Researchers have devised a new method for stacking microscopic marbles into regular layers, producing intriguing materials which scatter light into intense colours, and which change colour when twisted or stretched.
The review, published today in the journal Brain and Behavior, also highlighted how anxiety disorders often provide a double burden on people experiencing other health-related problems, such as heart disease, cancer and even pregnancy.
Anxiety disorders, which often manifest as excessive worry, fear and a tendency to avoid potentially stressful situations including social gatherings, are some of the most common mental health problems in the Western world. The annual cost related to the disorders in the United States is estimated to be $42.3 billion. In the European Union, over 60 million people are affected by anxiety disorders in a given year.
There have been many studies looking at the number of people affected by anxiety disorders and the groups that are at highest risk, and in an attempt to synthesise the various studies, National Institute for Health Research (NIHR)-funded researchers from the University of Cambridge’s Institute of Public Health carried out a global review of systematic reviews. Out of over 1,200 candidate reviews, the researchers identified 48 that matched their criteria for inclusion.
Between 1990 and 2010, the overall proportion of people affected remained largely unchanged, with around four out of every 100 experiencing anxiety. The highest proportion of people with anxiety is in North America, where almost eight out of every 100 people are affected; the proportion is lowest in East Asia, where fewer than three in 100 people have this mental health problem.
Women are almost twice as likely to be affected as men, and young individuals – both male and female – under 35 years of age are disproportionately affected.
The researchers also found that people with other health conditions are often far more likely to also experience anxiety disorders. For example, around one in ten adults (10.9%) with cardiovascular disease and living in Western countries are affected by generalised anxiety disorder, with women showing higher anxiety levels than men. People living with multiple sclerosis are most affected – as many as one in three patients (32%) also have an anxiety disorder.
According to first author Olivia Remes from the Department of Public Health and Primary Care at the University of Cambridge: “Anxiety disorders can make life extremely difficult for some people and it is important for our health services to understand how common they are and which groups of people are at greatest risk.
“By collecting all these data together, we see that these disorders are common across all groups, but women and young people are disproportionately affected. Also, people who have a chronic health condition are at a particular risk, adding a double burden on their lives.”
Obsessive compulsive disorder (OCD) – an anxiety disorder characterised by obsessions and compulsions – was found to be a particular problem for pregnant women and in the period immediately after birth. In the general population, only one in a hundred people are affected by OCD, but the proportion was twice as high among pregnant women and slightly higher among post-partum women.
However, the analysis also showed that data on some populations was lacking or of poor quality. This was particularly true for marginalised communities, such as indigenous cultures in North America, Australia and New Zealand, and drug users, street youth and sex workers. Anxiety disorders also represent an important issue among people identifying as lesbian, gay, and bisexual; however, there are not enough studies in these populations, and those that have looked at it are of variable quality.
Dr Louise Lafortune, Senior Research Associate at the Cambridge Institute of Public Health, explains: “Anxiety disorders affect a lot of people and can lead to impairment, disability, and risk of suicide. Although many groups have examined this important topic, significant gaps in research remain.”
Professor Carol Brayne, Director of the Cambridge Institute of Public Health, adds: “Even with a reasonably large number of studies of anxiety disorder, data about marginalised groups is hard to find, and these are people who are likely to be at an even greater risk than the general population. We hope that, by identifying these gaps, future research can be directed towards these groups and include greater understanding of how such evidence can help reduce individual and population burdens.”
Remes, O et al. A systematic review of reviews on the prevalence of anxiety disorders in adult populations. Brain and Behavior; 6 June 2016; DOI: 10.1002/brb3.497
Women are almost twice as likely to experience anxiety as men, according to a review of existing scientific literature, led by the University of Cambridge. The study also found that people from Western Europe and North America are more likely to suffer from anxiety than people from other cultures.
Alcohol consumption is one of the leading risk factors for disease and has been linked to conditions such as type 2 diabetes, cancer and liver disease. The factors that influence consumption are not clear; a recent Cochrane review published by the Behaviour and Health Research Unit (BHRU) at the University of Cambridge found that larger portion sizes and tableware increased consumption of food and non-alcoholic drinks, but found no evidence relating to consumption of alcohol.
To examine whether the size of glass in which alcohol is served affects consumption, the team at the BHRU, together with Professor Marcus Munafo from the University of Bristol, carried out a study in The Pint Shop in Cambridge from mid-March to early July 2015. The establishment has separate bar and restaurant areas, both selling food and drink. Wine (in 125ml or 175ml servings) could be purchased by the glass, which was usually a standard 300ml size.
Over the course of a 16-week period, the owners of the establishment changed the size of the wine glasses at fortnightly intervals, alternating between the standard (300ml) size, and larger (370ml) and smaller (250 ml) glasses.
The researchers found that the volume of wine purchased daily was 9.4% higher when sold in larger glasses compared to standard-sized glasses. This effect was mainly driven by sales in the bar area, which saw an increase of 14.4%, compared to an 8.2% increase in the restaurant. The findings were inconclusive as to whether sales differed between smaller and standard-sized glasses.
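The headline comparison is simple arithmetic: mean daily volume sold under each glass-size condition, expressed as a percentage difference from the standard 300ml condition. The daily figures below are invented for illustration, and the study’s own analysis was more sophisticated than this, but the sketch shows the shape of the calculation.

```python
# Illustrative comparison of daily wine sales (ml) by glass-size
# condition. The numbers are hypothetical, not the study's data.
daily_sales_ml = {
    "standard_300ml": [4100, 3900, 4000, 4200],
    "larger_370ml":   [4500, 4300, 4400, 4600],
    "smaller_250ml":  [4000, 4100, 3900, 4150],
}

def pct_vs_standard(sales, baseline="standard_300ml"):
    """Percentage difference of each condition's mean daily volume
    relative to the baseline condition."""
    base = sum(sales[baseline]) / len(sales[baseline])
    return {
        cond: 100.0 * (sum(v) / len(v) - base) / base
        for cond, v in sales.items() if cond != baseline
    }

for cond, pct in pct_vs_standard(daily_sales_ml).items():
    print(f"{cond}: {pct:+.1f}% vs standard glasses")
```

With these made-up figures the larger glasses come out close to 10% above baseline while the smaller glasses show no clear difference, mirroring the pattern the study reported.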
“We found that increasing the size of wine glasses, even without increasing the amount of wine, leads people to drink more,” says Dr Rachel Pechey from the BHRU at Cambridge. “It’s not obvious why this should be the case, but one reason may be that larger glasses change our perceptions of the amount of wine, leading us to drink faster and order more. But it’s interesting that we didn’t see the opposite effect when we switched to smaller wine glasses.”
Professor Theresa Marteau, Director of the Unit, adds: “This suggests that avoiding the use of larger wine glasses could reduce the amount that people drink. We need more research to confirm this effect, but if it is the case, then we will need to think how this might be implemented. For example, could it be an alcohol licensing requirement that all wine glasses have to be below a certain size?”
The research was funded by the Department of Health.
Pechey, R et al. Does wine glass size influence sales for on-site consumption? A multiple treatment reversal design. BMC Public Health; 7 June 2016; DOI: 10.1186/s12889-016-3068-z
Selling wine in larger wine glasses may encourage people to drink more, even when the amount of wine remains the same, suggests new research from the University of Cambridge. In a study published today in the journal BMC Public Health, researchers found that increasing the size of wine glasses led to an almost 10% increase in wine sales.
We owe to the ancient Greeks much, if not most of our own current political vocabulary. All the way from anarchy and democracy to politics itself. But their politics and ours are very different beasts. To an ancient Greek democrat (of any stripe), all our modern democratic systems would count as “oligarchy”. By that I mean the rule of and by – if not necessarily or expressly for – the few, as opposed to the power or control of the people, or the many (demo-kratia).
That is the case even if – and indeed because – the few happen to be elected to serve by (all) the people. For in ancient Greece elections were considered to be in themselves oligarchic. They systematically favoured the few and, more particularly, the few extremely rich citizens – or “oligarchs”, as we now familiarly call them thanks to Boris Berezovsky and his kind, who are also known as “plutocrats” or just “fat cats”.
On the other hand, there are some significant commonalities between ancient and modern ways of thinking politically. To both ancient and modern democrats, for example, freedom and equality are of the essence – they are core political values. However, freedom to an ancient Greek democrat didn’t just mean the freedom to participate in the political process but also freedom from legal servitude, from being an actual slave chattel.
And freedom to participate meant not just the sort of occasional saturnalia that we take to be the key mode of democracy for most of us – a temporary exchange of roles by political masters and slaves come general or local election (or referendum) time. But rather the freedom actually to share political power, to rule on an almost day-to-day basis.
In the fourth century BC(E), the Athenian democratic assembly of 6,000-plus adult male citizens met on average every nine days or so. It was government by mass meeting, but also the equivalent of holding a referendum on major issues every other week.
Equality then and now
Equality today is but a pipe dream at best, at least in socioeconomic terms, when the richest 1% of the world’s population owns as much as the remaining 99% put together. They managed these things a whole lot better in ancient Greece, and especially in the ancient Athenian democracy.
Statistical data are lacking – the ancients were notoriously unbureaucratic and they considered direct personal taxation to be a civic insult. But it’s plausibly been argued that “Classical” (5th-4th century BCE) Greece and especially Classical Athens were more populous and urbanised societies, with a higher proportion of their population living above the level of mere subsistence – and with a more equal distribution of property ownership – than has been the case in Greece at any time since, or indeed than in pretty much any other pre-modern society.
This does not mean that ancient Greece can supply us with a directly transferable example for democratic imitation – we tend to believe formally in the absolute equality of all citizens at any rate as adult voters, regardless of gender, and not to believe in the validity or utility of the legal enslavement of human beings as chattels.
However, there are a number of ancient democratic notions and techniques that do seem highly attractive: the use of sortition, for instance – the selection of office-holders at random by lottery, aimed at producing a representative sample of the citizenry. Or the practice of ostracism – which allowed the population to nominate a candidate who had to go into exile for 10 years, thus ending their political career.
And comparison, or rather contrast, of our democracies with those of ancient Greece does serve to highlight what’s been called creeping crypto-oligarchy in our own very different (representative, not direct) democratic systems.
Worst of all possible systems
We are all democrats now, aren’t we? Or are we? Not if we consider the following five flaws variously embedded in all contemporary systems.
Most pertinently at the moment, it was possible for the US and the UK to go to war in Iraq in 2003, even though neither US president George W Bush nor the UK prime minister, Tony Blair, had at any point received the endorsement for that decision from the majority of their own citizens.
Citizens in our “democracies” spend up to one-fifth of their lives governed by a party or candidate other than the party or candidate that most of them voted for at the last election. Moreover, elections are not in fact “free and fair”: they’re nearly invariably won by the side that spends the most money, and thus are more or less corrupted thereby.
When it comes to winning elections, no party has ever come to power without (blatantly self-interested) corporate backing in one shape or another. And, perhaps most damning of all, the vast majority of people are systematically excluded from public decision-making – thanks to vote-skewing, campaign financing and the right of elected representatives simply to ignore with impunity anything that happens in between (local or general) elections.
Democracy in short has changed its meaning from anything like the “people power” of ancient Greece and has seemingly lost its purpose as a reflection let alone realisation of the popular will.
One can well see why Winston Churchill was once moved to describe democracy as the worst of all systems of government – apart from all the rest. But that should be no good reason for us to continue ignoring the widely admitted democratic deficit. Back to the future – with the democrats of ancient Greece.
The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.
Paul Cartledge (Faculty of Classics) discusses what the ancient Greeks would think of our democracy.
When an army deploys in a foreign country, there are clear advantages if the soldiers are able to speak the local language or dialect. But what if your recruits are no good at other languages? In the UK, where language learning in schools and universities is facing a real crisis, the British army began to see this as a serious problem.
In a new report on the value of languages, my colleagues and I showcased how a new language policy instituted last year within the British Army was triggered by a growing appreciation of the risks of language shortages for national security.
Following the conflicts in Iraq and Afghanistan, the military sought to implement language skills training as a core competence. Speakers of other languages are encouraged to take examinations to register their language skills, whether they are language learners or speakers of heritage or community languages.
The UK Ministry of Defence’s Defence Centre for Language and Culture also offers training to NATO standards across the four language skills – listening, speaking, reading and writing. Core languages taught are Arabic, Dari, Farsi, French, Russian, Spanish and English as a foreign language. Cultural training that provides regional knowledge and cross-cultural skills is still embryonic, but developing fast.
There are two reasons why this is working. The change was directed by the vice chief of the defence staff, and therefore had a high-level champion. There are also financial incentives for army personnel to have their linguistic skills recorded, ranging from £360 for a lower-level western European language to £11,700 for a high-level, operationally vital linguist. Currently any army officer must have a basic language skill to be able to command a sub-unit.
We should not, of course, overstate the progress made. The numbers of Ministry of Defence linguists for certain languages, including Arabic, are still precariously low and, according to recent statistics, there are no speakers of Ukrainian or Estonian classed at level three or above in the armed forces. But, crucially, the organisational culture has changed and languages are now viewed as an asset.
The British military’s new approach is a good example of how an institution can change the culture of the way it thinks about languages. It’s also clear that language policy can no longer simply be a matter for the Department for Education: champions for language both within and outside government are vital for issues such as national security.
This is particularly important because of the fragmentation of language learning policy within the UK government, despite an informal cross-Whitehall language focus group.
Experience on the ground illustrates the value of cooperation when it comes to security. For example, in January, the West Midlands Counter Terrorism Unit urgently needed a speaker of a particular language dialect to assist with translating communications in an ongoing investigation. The MOD was approached and was able to source a speaker within another department.
There is a growing body of research demonstrating the cost to business of the UK’s lack of language skills. Much less is known about their value to national security, defence and diplomacy, conflict resolution and social cohesion. Yet language skills have to be seen as an asset, and appreciation is needed across government for their wider value to society and security.
Wendy Ayres-Bennett (Department of Theoretical and Applied Linguistics) discusses the impact of the military's new language policy.
India is facing one of its most serious droughts in recent memory – official estimates suggest that at least 330m people are likely to be affected by acute shortages of water. As the subcontinent awaits the imminent arrival of the monsoon rains, bringing relief to those who have suffered the long, dry and exceptionally warm summer, the crisis affecting India’s water resources is high on the public agenda.
Unprecedented drought demands unconventional responses, and there have been some fairly unusual attempts to address this year’s shortage. Perhaps most dramatic was the deployment of railway wagons to transport 500,000 litres of water per day across the Deccan plateau, with the train traversing more than 300km to provide relief to the district of Latur in Maharashtra state.
The need to shift water on this scale sheds light on the key issue that makes water planning in the Indian subcontinent so challenging. While the region gets considerable precipitation most years from the annual monsoon, the rain tends to fall in particular places – and for only a short period of time (about three months). This water needs to be stored, and made to last for the entire year.
In most years, it also means that there is often too much water in some places, resulting in as much distress due to flooding as there currently is due to drought. So there is a spatial challenge as well – water from the surplus regions needs to reach those with a shortfall, and the water train deployed in Maharashtra is one attempt to achieve this.
The current crisis has led the Indian government to announce that it hopes to resurrect an ambitious plan to link the major river basins of the country under the Interlinking of Rivers (ILR) Project. The scale and magnitude of this exercise, both financial (it is estimated to cost more than £100 billion) and in engineering terms (involving the transfer of 174 billion cubic metres of water annually), is unprecedented.
Critics suggest that it is unlikely to work and is likely to create further ecological and social disruption, especially due to the uncertainties in weather and precipitation patterns due to climate change. There is a risk that other alternatives, perhaps less dramatic in their scope, might be neglected in the rush for the big headline-grabbing schemes.
A specific way forward might be to work more directly with natural processes to secure the regeneration of water sources at the local level. In the dry plains, this involves the revitalisation of aquifers and the replenishment of groundwater through recharge during the monsoon, as has been attempted already in some regions. In the hilly areas, there is considerable scope for investment in spring recharge and source sustainability, as has been undertaken on a significant scale in the Himalayan state of Sikkim.
Our current research is examining the need to invest in source protection and sustainability in detail, especially in the Himalayas, which have been described as the “Water Towers of Asia”. Urbanisation trends in the region suggest that there will be a growing number of small towns and settlements that will need water infrastructure to meet their needs – and there is a critical need to secure these water sources.
Deforestation, land conversion and degradation, as well as urban encroachment due to illegal construction, pose major threats to the water bearing capacity of the Himalayan landscape. There is an urgent need to invest in the identification, protection and restoration of these “critical water zones”.
Potential for conflict
The Himalayan context also demonstrates the transboundary nature of the water issue. The Hindu Kush Himalayan region extends across eight countries, from Afghanistan to Myanmar, and supports ten major river systems, potentially affecting the lives of more than 1.5 billion people. Cooperation across political boundaries is vital to manage these fragile resources, further threatened by the uncertain impacts of climate change.
There is some hope: despite three major wars since independence, India and Pakistan have managed to maintain some semblance of cooperation under the Indus Waters Treaty, negotiated in 1960. However, analysts suggest that regional conflict over water is going to worsen – and much depends on the role of China, which is the dominant upstream water controller in the region.
The other key response is managing water demand – and making explicit choices over alternative uses. This year, the shifting of Indian Premier League cricket matches away from water-scarce Maharashtra was a high-profile, though somewhat symbolic, example of an explicit prioritisation of water use.
More generally, though, managing water demand has rarely been prioritised. Water-thirsty crops – sugarcane, for example – dominate the landscape in the dry regions of Marathwada and Vidarbha in Maharashtra. Farmers receive subsidies on energy, which allow them to pump dry the already-depleted aquifers in other parts of the country. And there are important issues of distributional equity: the poor in many urban contexts pay significantly more per litre for erratic and unreliable water, while their richer neighbours luxuriate in swimming pools and spend weekends on plush golf greens.
Water is an issue that cuts across all aspects of social and economic life in India. Compartmentalised responses are unlikely to be adequate to address the current crises. There is a need for an integrated approach, which addresses source sustainability, land use management, agricultural strategies, demand management and the distribution and pricing of water. With growing pressures due to climate change, migration and population growth, creative and imaginative governance is needed to manage this precious resource.
Bhaskar Vira (Department of Geography and University of Cambridge Conservation Research Institute) discusses ways of dealing with the crisis affecting India’s water resources.
Arguably, everything about Milton Keynes is deliberate: its site, its transport, its housing, its business sectors, its jobs. From the moment of its ‘birth’ in 1967 as one of the country’s ‘new towns’, Milton Keynes was planned as a whole. Over the past three decades, it has out-smarted every other city in the UK in terms of its annual average growth rate of output and employment.
Meanwhile, most of Britain’s old industrial cities – Newcastle, Sheffield, Birmingham, Glasgow and Liverpool among them – underwent a dramatic slippage in growth from the beginning of the 1980s to the late 1990s. Although their decline has slowed, they still lag behind the national average in terms of economic growth.
Not so London. After years of relative decline, it has experienced a turnaround since the early 1990s, thanks to its flourishing financial sector and rapidly expanding business services. It even weathered the recent recession better than almost all other parts of the UK. It has become one of the fastest growing parts of the UK, and is predicted to pull further ahead of the rest of the country in the next decade.
This story of a ‘great divergence’ opening up between cities is being played out all across the UK, as well as elsewhere in the industrialised world. In the USA, for instance, the downturn in the fortunes of Detroit and Cleveland stands in stark contrast to boom cities like San Francisco and Boston. Recent efforts in the UK to rebalance the economy have included the ‘Northern Powerhouse’ investment to boost Manchester, Liverpool, Leeds, Sheffield and Newcastle.
“Cities have always had upturns and downturns. But for the first time in human history more than half of the world’s population lives in cities and so now more than ever it’s important to understand what it is that makes a city flourish,” says Professor Ron Martin, from Cambridge's Department of Geography. “Adaptability and resilience may be today’s buzzwords but this is the way that cities – and those making policies that affect cities – need to think to keep them working well.”
Martin leads a major research project aimed at understanding transitions through boom, bust and austerity for UK cities, and the lessons that can be learned from the past 50 years of economic history that might help cities prepare for the decades ahead.
It will be the largest ever analysis of the post-industrial fortunes of the UK’s cities. It builds on work that Martin and his team carried out for the UK Government Office for Science (GoS) Foresight Project on the Future of Cities – the brainchild of Government Chief Scientific Adviser, Professor Sir Mark Walport.
“Cities matter to the UK,” explains the GoS Future of Cities project. “They are the concentrations of the UK’s population, trade, commerce, cultural and social life. They are also the sites where most of the UK’s future growth, both population and economic, is forecast to occur. The UK’s future is now closely linked to that of its cities.”
The fastest growing cities over the past three decades have been those in the south of the country, linked with a downturn in manufacturing in the north and an increase in service industries in the south.
“Manufacturing still happens of course, and is still important to the economy of many of our cities,” says Martin. “But the number of people working in manufacturing has fallen over the past 30 years because of increasing efficiency and productivity, and the rise of foreign competitors.”
The team’s study for the Foresight Project looked at 63 cities across the UK. One finding was that the fastest growing cities over the past three decades were the new towns like Milton Keynes that had been deliberately planned and developed through post-war public policy. “In these cases, everything to do with a new settlement could be thought about holistically – an argument perhaps for focusing on expanded cities in the future.”
But what about old industrialised cities? “There’s a huge infrastructure in these cities and a lot of talent,” says Martin. “We found that whereas some cities are experiencing a worsening of economic inequalities and failures, others have managed to reinvent themselves by growing new sectors – electronics, pharmaceuticals, finance, business support services. Those cities that hadn’t managed to diversify, reorientate and adapt their employment sectors fared less well.”
Martin cites Cambridge as an exemplar of diversification: “Cambridge is interesting because it is often held up as a high-tech cluster but in fact you can identify around 14 specialisms, with life sciences currently moving into pole position. The key is to keep branching into new areas.”
He adds: “This shouldn’t really come as a surprise. It’s a bit like buying stocks and shares – you wouldn’t put all your money in one company because you risk losing everything. The same goes here – you need a diverse economic structure to withstand shocks to the system.”
Before this study, however, the data simply wasn’t available to draw this conclusion. Martin’s team worked with Cambridge Econometrics, business management consultants, who are specialists in building up city-level datasets from local data.
Now, with funding from the Economic and Social Research Council, the team has increased to include researchers at the Universities of Southampton, Newcastle and Aston, and the dataset they are building is the largest of its kind ever constructed for the UK: the growth, employment and economic structure of 75 cities, for four decades from 1971, with hopes eventually to look back to the mid-19th century. Ten cities have been chosen for an in-depth analysis of firm dynamics, local governance and public policy.
“Cities have come to dominate how we think and talk about economies, particularly as they navigate the turbulent and uncertain context of austerity and economic reform,” says Martin. “There is little doubt that cities face an unprecedented and intense set of economic, social and environmental challenges. Our research as part of the Future of Cities project provided the first quantitative evidence that different cities demonstrate very different capacities to cope with and respond to challenges, and that these lead to diverse economic outcomes.”
It’s timely, he adds, to be considering what makes a city thrive, given the ongoing discussions by government on the business case for devolving central policy decisions and budgets to regions.
“In this context, while fiscal devolution is desirable, differences between cities could assume greater significance,” says Martin. “If cities and regions take more control of the purse strings, they will need to know where best to invest resources to help them thrive. This research programme is ambitious but we are confident that we will be able to identify at least some of the keys to city economic success and how to maintain that success over time.”
Why is Milton Keynes one of the most successful cities in the UK, and Dundee one of the least? What gives Leeds its economic edge over Liverpool? How did London survive the 1990s recession, going from boom to bust and boom again? Researchers are asking these questions and many more in the largest ever analysis of what makes cities thrive.
Imagine a glass skyscraper in which all of the windows could go from clear to opaque at the flick of a switch, allowing occupants to regulate the amount of sunlight coming through the windows without having to rely on costly air conditioning or other artificial methods of temperature control.
Researchers at the University of Cambridge have developed a type of ‘smart’ glass that switches back and forth between transparent and opaque, while using very low amounts of energy. The material, known as Smectic A composites, could be used in buildings, automotive or display applications.
Working with industrial partners including Dow Corning, the Cambridge researchers have been developing ‘Smectic A’ composites over the past two decades. The team, based at the Centre for Advanced Photonics and Electronics (CAPE), has made samples of Smectic A based glass, and is also able to produce it on a roll-to-roll process so that it can be printed onto plastic. It can be switched back and forth from transparent to opaque millions of times, and can be kept in either state for as long as the user wants.
“In addition to going back and forth between clear and opaque, we can also do different levels of opacity, so for example, you could have smart windows in an office building that automatically became more or less opaque, depending on the amount of sunlight coming through,” said Professor Daping Chu of CAPE, one of the developers of Smectic A technology.
The main component of the composite is a type of liquid crystal known as a ‘smectic’ liquid crystal, which is different from both a solid crystal and a liquid.
The simplest definition of a crystal is a solid in which the atoms form a distinct spatial order. A liquid crystal, such as those that are used in many televisions, flows like a liquid, but has some order in its arrangements of molecules. The liquid crystals used in televisions are known as nematic crystals, where the molecules are lined up in the same direction, but are otherwise randomly arranged.
In a smectic liquid crystal, the molecules have a similar directional ordering, but they are also arranged in stacked layers, which confine the movement of ionic additives. When a voltage is applied, the liquid crystal molecules all try to align themselves with the electric field, and the material they are embedded in (glass or plastic) will appear transparent.
When the direction of the voltage is slowly changed, the ionic additives disrupt the layer structure of the smectic liquid crystals, which has the result of making the glass or plastic panel appear milky. Increasing the frequency of the voltage causes the movement of the ionic additives to freeze out and then switches the plastic or glass panel back to transparent. These transitions happen in a fraction of a second, and when the voltage is switched off, the material will remain either transparent or opaque until the user wants it to switch again, meaning that unless the material is actively switching states, it requires no power.
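The switching behaviour described above amounts to a simple bistable state machine: the drive frequency selects the target state, and removing the voltage holds whatever state the panel is in. The sketch below is an illustrative Python model of that logic only, not of the device physics; the class name and frequency thresholds are invented for the example.

```python
# Illustrative model of the bistable switching logic described above.
# Thresholds are made up for the example, not measured device values.

class SmecticAPanel:
    LOW_FREQ_MAX_HZ = 50        # illustrative: slow fields let ions scatter the layers
    HIGH_FREQ_MIN_HZ = 1_000    # illustrative: fast fields freeze out ionic motion

    def __init__(self):
        self.state = "transparent"

    def drive(self, voltage_on: bool, frequency_hz: float) -> str:
        """Apply (or remove) the drive voltage and return the resulting state."""
        if not voltage_on:
            return self.state       # bistable: holds its state with no power
        if frequency_hz <= self.LOW_FREQ_MAX_HZ:
            self.state = "opaque"       # ions disrupt the smectic layers
        elif frequency_hz >= self.HIGH_FREQ_MIN_HZ:
            self.state = "transparent"  # molecules realign with the field
        return self.state

panel = SmecticAPanel()
panel.drive(True, 10)     # low-frequency drive -> opaque
panel.drive(False, 0)     # power off: state is retained
print(panel.state)        # prints "opaque"
```

The point of the model is the last two calls: once switched, the panel draws no power in either state.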
Possible applications for smectic A composites include uses in the construction, advertising and automotive industries. For example, it could be applied to glass buildings in order to regulate the amount of sunlight that could get through, or it could be used as a ‘sunroof’ in a car that could be switched back and forth between transparent and opaque.
The work has been patented and is being commercialised by Cambridge Enterprise, the University’s commercialisation arm, which has licensed the technology to a major industrial partner through a technology transfer framework agreement.
The original motivation behind the development of smectic A was to create a type of low-power electronic signage, of the kind commonly seen at bus stops, that would not fade in bright sunlight.
The original form of smectic A was based on organic materials, but the newer version is silicon-based. One sample of smectic A in a lab at CAPE has been switched back and forth between opaque and transparent more than 27 million times, switching once per second for several years.
“The earlier glass-based samples we produced worked well, but there was a challenge in making them in sizes larger than a metre square,” said Chu. “So we started making it in plastic, which meant we could make bigger samples, and attach it to things like windows in order to retrofit them. This would reduce the effects of solar radiation, since the energy is being scattered rather than absorbed.”
Chu’s team is also working on other possible applications for Smectic A and related technologies, including the possibility of a transparent heat controllable window and non-intrusive public information messaging system.
A smart material that switches back and forth between transparent and opaque could be installed in buildings or automobiles, potentially reducing energy bills by avoiding the need for costly air conditioning.
The Yellow Meranti stands 89.5m tall in an area of forest known as ‘Sabah’s Lost World’ – the Maliau Basin Conservation Area, one of Malaysia’s last few untouched wildernesses. Its height places it ahead of the previous record-holder, an 88.3m Yellow Meranti in the Tawau Hills National Park.
The giant tree was discovered during reconnaissance flights by conservation scientists from the University of Cambridge working with the Sabah Forestry Department to help protect the area’s biodiversity. It comes at a crucial time, as the Sabah government takes measures to protect and restore heavily logged areas in the region.
Measuring a tree’s exact height is tricky when the tree is quite possibly the tallest in the Tropics. The only way is to climb it, and to take a tape measure with you. This is precisely what Unding Jami, an expert tree-climber from Sabah, did recently. When he reached the top, he confirmed the tree’s height and texted: “I don’t have time to take photos using a good camera because there’s an eagle around that keeps trying to attack me and also lots of bees flying around.”
The tree actually stands on a slope: downhill it’s 91m tall, and uphill it’s around 88m tall. “We’d put it at 89.5m on average,” explains lead researcher Dr David Coomes, from Cambridge’s Department of Plant Sciences. “It’s a smidgen taller than the record, which makes it quite probably the tallest tree recorded in the Tropics!”
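For a tree on sloping ground, the quoted height is simply the mean of the downhill and uphill measurements. A minimal sketch of the arithmetic, using the figures above:

```python
# Average of the downhill and uphill measurements for a tree on a slope.
downhill_m = 91.0   # height measured from the downhill side
uphill_m = 88.0     # height measured from the uphill side
average_height_m = (downhill_m + uphill_m) / 2
print(average_height_m)  # prints 89.5
```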
At this height, the tree is roughly equivalent to the height of 65 people standing on each other’s shoulders, or 20 double-decker London buses. It’s just a few metres short of London’s Big Ben.
“Trees in temperate regions, like the giant redwoods, can grow up to 30m taller; yet around 90m seems to be the limit in the Tropics. No-one knows why this should be the case,” adds Coomes.
The tree was spotted using a LiDAR scanner – a machine that’s capable of producing exquisitely detailed three-dimensional images of rainforest canopies over hundreds of square kilometres. Its laser range finder hangs from the undercarriage of the research plane, peppering the forest with 200,000 laser pulses every second, and calculating distances in 3D from each reflected pulse. The researchers then ‘stitch’ the images together, enabling them to map the forest tree by tree.
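The distance calculation behind each pulse is simple time-of-flight geometry: light travels to the canopy and back, so the range is the speed of light multiplied by the round-trip time, divided by two. A minimal sketch, with illustrative echo times rather than instrument data:

```python
# Time-of-flight ranging: each laser pulse travels out and back, so
# range = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_s: float) -> float:
    """Distance in metres to the surface that reflected the pulse."""
    return C * round_trip_s / 2.0

# An echo arriving ~6.67 microseconds after the pulse left came from ~1 km away.
print(round(range_from_echo(6.67e-6)))  # prints 1000

# At 200,000 pulses per second, successive pulses are only 5 microseconds apart.
pulses_per_second = 200_000
microseconds_between_pulses = 1e6 / pulses_per_second  # 5.0
```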
Threatened by habitat loss, the Yellow Meranti (Shorea faguetiana) is classified as endangered on the International Union for Conservation of Nature ‘Red list’, the world's most comprehensive inventory of the global conservation status of biological species.
“Interestingly, there may be more of this tree in cyberspace than in the world. It’s one of the trees that players can grow in the computer game Minecraft,” adds Coomes.
“Conserving these giants is really important. Some, like the California redwoods, are among the largest and longest-living organisms on earth. Huge trees are crucial for maintaining the health of the forest and its ecology. But they are difficult to find, and monitor regularly, which is where planes carrying LiDAR can help.”
Globally, around one billion hectares of degraded forest might be restorable, enabling them to continue to contribute to the planet’s biodiversity and its carbon and water cycles. However, a major problem faced by conservation managers is how to survey extensive areas in which conditions can vary in just a few hundred square metres and are continually changing through natural regeneration. “LiDAR scanning together with digital photography and hyperspectral scanners now provide us with unprecedented information on the state of the forest,” explains Coomes.
With funding from the Natural Environment Research Council (NERC), the Cambridge scientists worked with the Sabah Forestry Department, the South-East Asia Rainforest Research Partnership and the NERC Airborne Remote Sensing Facility.
“The Sabah government is extremely proud of this discovery, which lends credence to the fact that our biodiversity is of global importance,” says Sam Mannan, Director of the Sabah Forestry Department. “Our international collaboration, as in this case, has brought great scientific dividends to the state and we shall continue to pursue such endeavours.”
Adds Coomes: “The discovery of this particular tree comes at a critical moment because, set against a backdrop of decades of forest loss, the Sabah government has decided to protect and restore a huge tract of heavily logged forest just to the east of the Maliau Basin. It’s exciting to know that these iconic giants of the forest are alive and well so close to this major restoration project.”
Inset images: Stephanie Law.
A tree the height of 20 London double-decker buses has been discovered in Malaysia by conservation scientists monitoring the impact of human activity on the biodiversity of a pristine rainforest. The tree, a Yellow Meranti, is one of the species that can be grown in the computer game Minecraft.
Nothing says Britain quite like a cup of tea. As a nation, we have been drinking it for over 350 years. But tea has endured a tumultuous journey to reach its status as the nation’s favourite beverage.
Originating in China, where it was thought to have medicinal properties, tea has a history closely intertwined with that of botany and herbal medicine. Legend states that the very first cup of tea was drunk in 2737 BC by the Chinese emperor Shennong, believed to be the creator of Chinese medicine. Shennong was resting under the shade of a Camellia sinensis tree, boiling water to drink, when dried leaves from the tree floated into the pot, changing the water’s colour. Shennong tried the infusion and was pleased by its flavour and restorative properties.
A more gruesome Indian legend attributes the discovery of tea to the Buddha. During a pilgrimage to China, he vowed to meditate non-stop for nine years but inevitably fell asleep. Outraged by his weakness, he cut off his own eyelids and threw them to the ground. Where they fell, a tree with eyelid-shaped leaves took root: the first tea tree.
Regardless of the truth behind the legends, tea has played a pivotal role in Asian culture for centuries. The earliest known treatise on tea is the ‘Ch’a Ching’, or ‘The Classic of Tea’, written by the Chinese writer Lu Yu. The book describes the mythological origins of tea, as well as its horticultural and medicinal properties, and contains detailed instructions on the practice and etiquette of making tea. This was considered a highly valued skill in China, and to be unable to make tea well and with elegance was deemed a disgrace.
Tea was thought of as a medicinal drink until the late sixth century. During the T’ang dynasty between the seventh to tenth centuries, tea drinking was particularly popular. Different preparations emerged, with increasing oxidation producing darker teas ranging from white to green to black. Other plant substances were added, including onion, ginger, orange or peppermint with different infusions believed to have unique medicinal properties. Over time, tea was no longer restricted to medicinal use and was also generally consumed as a beverage.
Tea came to Europe in the late sixteenth century during the Age of Discovery, a time of extensive overseas exploration. Natural philosophers discovered many new plants, which they collected and used for medicines or for general consumption. Of particular interest were plants with stimulant properties, such as tea, coffee, chocolate, tobacco and ginseng. Europeans learned of the medicinal uses of plants from local people. However, Asians remained sceptical that the healing properties of tea would have any effect on the health of Europeans, claiming that its medicinal value was unique to Asians.
Portuguese merchants were the first to bring home tea (known to them as ‘Cha,’ from the Cantonese slang) from their travels in China. However, the Dutch were the first to commercially import tea, which quickly became fashionable across Europe. Tea came to Britain in the 17th century and its popularity stems from Catherine of Braganza, a Portuguese princess and tea addict, the wife of Charles II. Her love of tea made it fashionable both at court and amongst the wealthy classes. Due to high taxes, tea remained a drink of the wealthy for many years. In the 18th century, an organised crime network of tea smuggling and adulteration emerged. Leaves from other plants were used in the place of tea leaves and a convincing colour was achieved by adding substances ranging from poisonous copper carbonate to sheep’s dung.
When tea was introduced to Britain, it was advertised as a medicine. Thomas Garraway, owner of Garraway’s coffee house in London, claimed that tea would “maketh the body active and lusty” but also “…removeth the obstructions of the spleen…”, and that it was “very good against the Stone and Gravel, cleaning the Kidneys and Uriters”. The Dutch doctor Cornelius Decker liberally prescribed the consumption of tea, recommending eight to ten cups per day and claiming to drink 50 to 100 cups daily himself. Samuel Johnson was another famous devotee of excessive tea drinking, rumoured to have consumed as many as sixteen cups at one tea party, and was an avid defender of the health benefits of tea. In 1730, Thomas Short performed many experiments on the health effects of tea and published the results, claiming that it had curative properties against ailments such as scurvy, indigestion, chronic fear and grief.
However, the health effects of tea were debated and by the mid-18th century accusations that tea was detrimental to health were brewing. Wealthy philanthropists worried that excessive tea drinking amongst the working classes would cause weakness and melancholy. One French doctor warned that overconsumption of tea would result in excess heat within the body, leading to sickness and death. John Wesley, an Anglican minister, condemned tea due to its stimulant properties, stating that it was harmful to the body and soul, leading to numerous nervous disorders. Wesley even offered advice on how to deal with the awkward situation of having to refuse an offered cup of tea.
The English traveller Jonas Hanway believed that tea-drinking was a risk to the nation, leading to a decline in the health of the workforce. He was particularly concerned about its effect on women, warning that it made them less beautiful. Arthur Young, a political economist, objected to tea because of the time lost to tea breaks. He criticised the fact that some members of the working class would drink tea instead of eating a hot meal at midday, reducing their nutritional intake: tea had replaced the traditional working-class drink of home-brewed beer, which was far more nutritious, since tea contains no calories without milk or sugar. Thomas Short, a Scottish doctor, claimed that tea caused disastrous ailments and argued that people would spend money on tea rather than food. In reality, the working class often bought very cheap grades of tea, or once-used tea leaves from wealthier families.
Eventually, tea regained popularity as philanthropists realised the value of tea drinking in the temperance movement, offering tea as a substitute for alcohol. During the 1830s, many coffee houses and cafes opened as alternatives to pubs and inns, and from the 1880s, tea rooms and tea shops became popular and fashionable.
Today, tea remains the most widely consumed beverage in the world. It has been estimated that tea accounts for 40% of the daily fluid intake of the British public. So, is this lavish consumption affecting our health? A study at Harvard University Medical School suggests that tea may have health benefits: tea contains polyphenols, which are especially prevalent in green tea and have anti-inflammatory and anti-oxidant properties that could prevent damage caused by elevated levels of oxidants, including damage to artery walls that can contribute to cardiovascular disease. However, these effects have not been studied directly in humans, and it may simply be that tea drinkers lead healthier lives. To date, there is no conclusive evidence that tea has any genuine effect on health, either positive or negative.
It seems that the controversies surrounding the medicinal use of tea may be little more than a storm in a teacup.
This is an edited version of the article Just Your Cup of Tea.
How do you take your tea – with a drop of poisonous chemicals or a spoonful of sheep dung? Throughout history, the health benefits – and harms – of this popular beverage have been widely debated. In an article originally published in the student science magazine BlueSci, Sophie Protheroe, an undergraduate student at Murray Edwards College, examines the global history of tea and its effect on our health.
Scientists have identified new steps in the way plants produce cellulose, the component of plant cell walls that provides strength, and forms insoluble fibre in the human diet.
The findings could lead to improved production of cellulose and guide plant breeding for specific uses such as wood products and ethanol fuel, which are sustainable alternatives to fossil fuel-based products.
Published in the journal Nature Communications today, the work was conducted by an international team of scientists, led by the University of Cambridge and the University of Melbourne.
"Our research identified several proteins that are essential in the assembly of the protein machinery that makes cellulose", said Melbourne's Prof Staffan Persson.
“We found that these assembly factors control how much cellulose is made, and so plants without them cannot produce cellulose very well; the defect substantially impairs plant biomass production. The ultimate aim of this research would be to breed plants that have altered activity of these proteins, so that cellulose production can be improved for the range of applications that use cellulose, including paper, timber and ethanol fuels."
The newly discovered proteins are located in an intracellular compartment called the Golgi where proteins are sorted and modified.
“If the function of this protein family is abolished the cellulose synthesizing complexes become stuck in the Golgi and have problems reaching the cell surface where they normally are active” said the lead authors of the study, Drs. Yi Zhang (Max-Planck Institute for Molecular Plant Physiology) and Nino Nikolovski (University of Cambridge).
“We therefore named the new proteins STELLO, Greek for ‘to set in place and deliver’.”
“The findings are important to understand how plants produce their biomass”, said Professor Paul Dupree from the University of Cambridge's Department of Biochemistry.
“Greenhouse-gas emissions from cellulosic ethanol, which is derived from the biomass of plants, are estimated to be roughly 85 percent less than from fossil fuel sources. Research to understand cellulose production in plants is therefore an important part of climate change mitigation.”
“In addition, by using cellulosic plant materials we get around the problem of food-versus-fuel scenario that is problematic when using corn as a basis for bioethanol.”
“It is therefore of great importance to find genes and mechanisms that can improve cellulose production in plants so that we can tailor cellulose production for various needs.”
Previous studies by Profs. Persson’s and Dupree’s research groups have, together with other scientists, identified many proteins that are important for cellulose synthesis and for other cell wall polymers.
The newly presented research substantially increases our understanding of how the bulk of a plant’s biomass is produced, and is therefore of great importance to industrial applications.
The work was funded, in part, by the BBSRC and was conducted with the BBSRC Sustainable Bioenergy Centre Cell Wall Sugars Programme.
Adapted from a University of Melbourne press release.
In the search for low emission plant-based fuels, new research may help avoid having to choose between growing crops for food or fuel.
If sugary drinks were sold in smaller bottles, stores stocked fewer of them, and positioned them less prominently, we would drink fewer of them. But would we find these changes acceptable? The results of our recent study show that most people find these “nudges” (altering cues in the environment to change people’s behaviour) to be acceptable ways to prevent obesity. Taxing sugary drinks, however, was only acceptable to a minority.
But for both nudging and taxing, the acceptability of the intervention increased the more effective participants judged them to be. This suggests that people are prepared to trade off their dislike of an intervention for achieving a goal they value, such as tackling obesity.
As a population, we consume too much energy. Most people in the UK are now obese or overweight. We spend an estimated 10% of the NHS budget on treating the consequences. Excess consumption of sugar, including from sugary drinks, contributes to this.
Sugary drinks are consumed more by the poorest in society explaining, in part, the higher rates of obesity in this group. Unfortunately, educating people about the health harms of consuming an excess of sugary drinks – an intervention that most people find acceptable – does not reduce their consumption.
But the evidence is now growing that “nudges” as well as taxes could reduce consumption of sugary drinks. The recent announcement of a tax on sugary drinks in England comes with much public support, and the case is made more compelling by recent evidence from Mexico that taxing drinks reduces consumption, particularly among the poor. But obesity won’t be cracked by tax alone. Adding nudges to taxes would likely help, but the acceptability of nudging has, until now, been largely unknown.
In New York, a recent attempt to cap the size of sugary drinks sold in restaurants and other food outlets elicited a strong reaction from locals. But these views may have been influenced by campaigns run by industry-funded consumer groups that placed adverts on billboards and in newspapers asserting that this measure undermined individual freedom. Given that introducing these sorts of interventions will probably require regulation, it is important to gauge public acceptability outside of the context of a media campaign in one city.
Change the environment, change the behaviour
For our study, we recruited 1,093 participants from the UK and 1,082 from the US.
We compared the acceptability of three nudge interventions (reducing the size of sugary drinks bottles, elongating the shape of cans of sugary drinks so they look larger than current cans, and altering where on the shelf drinks were placed) with two more traditional interventions: education and taxation.
Education was the most accepted intervention (more than 80% of participants considered it to be acceptable), taxation the least (fewer than 46% judged it acceptable), and the nudge interventions rated between these (range: 51% to 68%).
Highlighting the unconscious nature of nudges did not reduce their acceptability. Nudging is more acceptable than taxation, but the acceptability of both may be sensitive to evidence of their effectiveness.
Perceived effectiveness was the strongest predictor of acceptability for all interventions across both the US and UK groups. In other words, the more effective people perceived an intervention to be, the more acceptable they found it. This replicates findings from other studies.
Mexico provides an interesting case study. With funding from Bloomberg Philanthropies, non-governmental organisations bought prominent advertising space to counter industry opposition to sugary drinks taxes. This included presenting evidence of taxation’s effectiveness at preventing obesity and other consequences of high consumption. This supports the idea that clear communication to the public of an intervention’s effectiveness – in this case, taxing sugary drinks – can increase public acceptability of the intervention. This may then lead politicians to implement the intervention.
Attributing obesity to the environment, rather than willpower, also predicted acceptability. The more people attributed over-consumption to the environment, the greater their support for interventions, particularly the three nudge interventions. This suggests that the public’s judgements of nudging could become even more favourable if we can successfully convey scientific understandings of human behaviour: that much of our behaviour is shaped by the environment and takes place outside of conscious awareness. Changing the environment could therefore help tackle obesity.
Theresa Marteau (Behaviour and Health Research Unit) discusses how to get people to consume less sugar.
Back in the 1990s, the football manager Dennis Wise was unfazed that some of his new players were foreigners. They would soon be able to communicate, he reassured everyone, since he intended to “learn them a bit of English”.
Earlier this year, David Cameron had the same bright idea. Writing in The Times in January, the prime minister lamented that 22% of British Muslim women speak “little or no English”. He argued that it was hindering their social integration and holding them back economically. These problems would be solved, he suggested, if these women acquired fluency in English.
The very fact that this fatuous idea can be solemnly propounded by prominent politicians reveals the extent to which linguistic diversity has become a conundrum in our vast, sprawling, multi-ethnic, multi-lingual, multi-cultural, post-industrial societies.
The ongoing migration crisis is unparalleled in living memory, and it painfully illustrates how large-scale population displacements can rapidly create social situations in which linguistic differences become flashpoints. The inability of migrants to speak the first language of a country to which they have travelled can arouse suspicion and alienation. These differences create divisions that can only be bridged by translating from one language to another, and from one culture to another.
Oddly, we hear little in the media about how translation operates in societies where there are class-based divisions or displaced communities. Some recent research has explored the role of translation in war zones, but this has mainly emphasised the rhetoric of the political elite, rather than the day-to-day linguistic difficulties encountered by civilians caught up in the conflict.
This is disturbing since the power imbalances in any society are manifest in its languages. For instance, Lin Kenan has written at length about how translation could potentially help to trigger social change in China.
In this age of relentless globalisation, certain groups of people are routinely disenfranchised due to gender, ethnicity, nationality and social class. In this context, it’s helpful to consider the role translation plays in all of this, and whether it can ever help to empower the disenfranchised – or only serve to increase their vulnerability.
The controversial translation theorist Lawrence Venuti has argued insistently that fluent translations frequently perpetuate socio-political inequalities. In his view, translation is not an innocuous activity that facilitates communication – it can entrench inequality by bolstering the supremacy of dominant cultures.
Recent research has started to explore these complex issues. The translation scholar Israel Hephzibah focuses on English translations of Tamil literature produced by members of the so-called “untouchable” Dalit communities in India. These translations inevitably destabilise the traditional caste system by conferring literary credibility on the writings of a severely marginalised group. Such cases suggest that translation can become aligned with social justice.
But the fraught issue of endangered languages and cultures complicates the picture. UNESCO has estimated that 50-90% of the world’s languages will have become extinct by the year 2100.
It has been recognised for some time now that translations of indigenous texts (whether oral or written) can hasten language erosion in communities where there are few surviving native speakers. In contrast, translations into the endangered tongues can help to strengthen those languages.
On the whole, we seem to care less about vanishing languages than we do about endangered species – especially cuddly ones. When the last giant panda finally goes to the great bamboo grove in the sky, there will undoubtedly be prolonged global lamentation. But the Native American Klallam language expired on February 4 2014, when Hazel Sampson (its last speaker) died. Few news organisations felt its passing merited more than a cursory mention.
And even some translation theorists are sceptical. Emily Apter declared bluntly that she has “real reservations” about mingling translation studies and linguistic ecology – the study of how languages interact with their environment. Apter is concerned that exoticising native speakers’ expressions and other distinctive characteristics of a language risks imposing a fixed grammar where natural variation should instead be allowed to prevail.
There are many different kinds of periphery in the modern world, and life close to them can be difficult, even precarious. But languages are spoken there too. They may not be the same languages as those uttered closer to the “centre” of things, but that does not invalidate them.
If we can understand more fully how translation both strengthens and weakens these often disregarded tongues and cultures, then we might be forced to reconsider some of our rather simplistic presuppositions about language and society. And, fortunately, if all else fails, we can always make the world a better place by “learning” everyone a bit of English.
Marcus Tomalin (Department of Engineering) discusses the role of translation in social inequality and social justice.
For those of us who pay fuel bills, saving energy by insulating our homes or perhaps installing solar panels seems to make perfect sense. It saves money, therefore as rational human beings why don’t we all do it?
It’s a question that preoccupies Dr Franz Fuerst from the Department of Land Economy. “If you just follow the bottom line, you should see a lot more investment in energy efficiency purely from a profit-maximising perspective. And we should see even more if we take the costs of climate change into account. Why it’s not happening is a puzzle that keeps me awake at night,” he admits.
Fascinated to find the factors at play, Fuerst and his colleague Ante Busic-Sontic started supplementing their economic models with insights derived from psychology.
According to their newly developed theoretical framework, our personalities – as described by the ‘Big Five’ personality traits of Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism – have a major influence on our decisions about investing in energy efficiency because of the way they relate to our attitudes to risk and the environment.
To provide evidence for the theory, they analysed data from the Understanding Society survey (previously called the British Household Panel Survey), which since 2009 has surveyed some 50,000 households and 100,000 individuals every year. “It’s a rich data set that captures all possible indicators for a household, including energy efficiency decisions and attitudes as well as personality traits,” says Fuerst.
They found that personality traits matter for investment decisions in energy efficiency, even when controlling for drivers such as income, gender and education, although some personality traits are more strongly associated with investment decisions than others.
“Making any investment is almost always a risky undertaking. This is particularly true for many energy efficiency investments that require upfront capital expenditure while the actual energy savings and payback periods occur at a later time, which introduces risk. But what is considered an acceptable level of risk differs widely across people and households.”
This makes attitudes toward risk an interesting factor to consider when explaining energy efficiency investment decisions. “Openness, which is generally related to lower risk aversion, has a distinct impact on investment behaviour and is the strongest trait in our analysis,” he explains. “Neuroticism and Agreeableness lead people to be more risk averse, while Extraversion has a positive association with risk. Conscientiousness instead shows only a weak impact on investment behaviour through the risk channel.”
They also found sizeable differences between how personality traits relate to environmental attitudes and how the same traits relate to actual investment outcomes. “We find that with rising income, personality traits become more important as factors that determine green investment. Your personality traits don’t get a chance to manifest themselves if you lack the money to invest.”
Given policymakers’ limited success in encouraging more of us to invest in energy efficiency, understanding how personality affects these decisions could help us develop more effective policies and incentives, Fuerst believes.
“Because perceived risk and risk aversion are the two key mediating factors, there is scope for developing more bespoke financial products that are attractive if you have a very low appetite for risk.”
There is already a range of financing options available that involve transferring some or all of the investment risk from the property owner to a private or public sector third party but these are currently focused on larger organisations and businesses and have yet to be rolled out to households on a large scale. Additionally, information campaigns can help to increase the awareness among the risk averse that the ‘do nothing’ option is by no means risk-free and might in fact be the riskiest choice. “For example, via the larger exposure to future energy prices, tightening regulations or a potential drop in market value for properties with poor energy efficiency,” he says.
Environmental decisions are affected not just by personality and attitudes to risk, but also by urban design and social conditions – factors that Professor Doug Crawford-Brown and PhD student Rosalyn Old are exploring.
“Cities are increasingly incorporating sustainability metrics in the way buildings are built, the materials and resources used by occupants, and how waste is disposed of,” Crawford-Brown explains. “But how can we ensure cities will meet these metrics and perform sustainably? One of the most important challenges is to understand how people are motivated to act sustainably, and how those motivations are stimulated by the design and operations of communities.”
To understand more about the take-up and use of green technologies, Old is studying the University’s North West Cambridge (NWC) Development, an extension to the city that is currently being built. “House builders are under pressure to include green technologies in new buildings, yet the take-up and use of these technologies is uncertain,” she says. “This is a good opportunity to look at what’s special about NWC, how energy and carbon can be saved in a development like this, and learn lessons that can be transferred to future sites.”
Taking a range of technologies – from solar panels and water recycling to the district heating system and electric car charging points – that are being built into NWC, Old is modelling which technologies will be most efficient according to how people behave.
Residents have yet to move into NWC, so she is surveying equivalent demographic groups in Cambridge, such as postgraduates, key workers and families, to find out about their values, norms and attitudes so that she can model how they are likely to use the green technologies on offer.
“We can look at the energy impact given certain scenarios,” she explains. “For example, if 50% of postgraduates are ‘keen greens’ and they all cycle to their departments, the model will tell us the energy impact.”
And because the model includes the ability to interrogate different scenarios, it allows project managers to calculate the carbon savings associated with encouraging certain groups to be more environmentally conscious, opening up new ways of nudging residents to be greener.
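A toy version of this kind of scenario calculation can be sketched in a few lines of Python. All group names, population sizes, mode shares and emission factors below are invented purely for illustration; they are not figures from the NWC model, which is far richer than this sketch.

```python
# Hypothetical scenario model: estimate annual commuting CO2 for a
# development, given behavioural groups and their travel-mode shares.
# Every number here is an illustrative assumption, not real NWC data.

CO2_PER_KM = {"cycle": 0.0, "bus": 0.10, "car": 0.17}  # kg CO2 per person-km

def annual_commute_co2(groups, days_per_year=220, km_per_day=6.0):
    """Sum commuting emissions (kg CO2/year) over all behavioural groups.

    groups: list of (population, {mode: share, ...}) tuples, where the
    shares within each group sum to 1.
    """
    total = 0.0
    for population, mode_shares in groups:
        for mode, share in mode_shares.items():
            total += (population * share * CO2_PER_KM[mode]
                      * km_per_day * days_per_year)
    return total

# Baseline: 1,000 postgraduates split between cycling, bus and car.
baseline = [(1000, {"cycle": 0.3, "bus": 0.4, "car": 0.3})]

# Scenario: 50% become "keen greens" who all cycle; the rest are unchanged.
scenario = [(500, {"cycle": 1.0}),
            (500, {"cycle": 0.3, "bus": 0.4, "car": 0.3})]

saving = annual_commute_co2(baseline) - annual_commute_co2(scenario)
print(f"Estimated annual saving: {saving / 1000:.1f} tonnes CO2")
```

Comparing scenarios in this way is what lets a planner attach a carbon figure to a behavioural shift, which is the basic idea behind interrogating the model for different groups and interventions.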
Old hopes the model will help shape future phases of NWC, as well as other sustainable city sites and other sectors: “What we discover about how to shift people between different behavioural groups is important and can be used in policy work in many sectors. Even small changes in urban design can make a big difference.”
From wind turbines and solar photovoltaics to grey water recycling and electric vehicles, technology is making it ever easier for us to be green – yet many of us are not. Now, Cambridge researchers are discovering that our personalities and communities have a major impact on our environmental decisions, opening up new ways to ‘nudge’ us into saving energy and carbon.
Nudging people to make sustainable lifestyle choices is one thing, but can a city be nudged towards energy efficient investments, lower emissions and cleaner air?
Cities cope with pollution and uncomfortable temperatures by closing windows and installing units that heat, ventilate and air condition, which themselves guzzle energy and frustrate efforts to decarbonise.
A new interdisciplinary research project aims to halt this unsustainable trend by creating solutions that make cities cleaner with minimum use of energy. The key to progress, says project leader Professor Paul Linden, in the Department of Applied Mathematics and Theoretical Physics, is to start treating the city as a complete, integrated system.
“Experience over the past two decades suggests that when infrastructure investment works closely with innovative urban design across a city, there’s a shift towards low emissions and lower carbon travel through spontaneous citizen choices,” he explains.
The £4.1 million Managing Air for Clean Inner Cities (MAGIC) project will link data fed from sensors monitoring a city’s air to an understanding of air flow inside and outside buildings, and innovations in natural ventilation processes. The idea is to develop an integrated suite of models to manage air quality and temperature (and, consequently, energy, carbon, health and wellbeing) – at the level of buildings, blocks and across the whole city.
To do so, engineers, chemists, mathematicians, architects and geographers from 12 university and industry organisations will be working together, with funding from the Engineering and Physical Sciences Research Council.
The model and associated decision support system will, for example, provide information on how traffic routes can be optimised to reduce pollution, and the cost-benefits of introducing cycling routes and green spaces. But the main value of understanding energy use and air flow, says Linden, is that a city can monitor itself continuously – it can, in effect, become its own natural air conditioner.