Channel: University of Cambridge - Latest news

Global responsibilities


I was recently invited to address a meeting of the International Alliance of Research Universities at the University of Cape Town. The theme was Global transformation, and I spoke about global universities and their global responsibilities.

Stimulated by the lively discussion among the IARU members and energised by the powerful strategic transformation process underway at our host institution, I have continued to reflect on the theme.

What does it mean, today, to be a global university? For me, a university is global when it reflects global diversity, when it addresses global issues, when it establishes global partnerships, and when it assumes the mantle of global leadership.

Diversity is key. At Cambridge we understand diversity to mean that universities must offer variety not only in the coursework and subject matter they teach, but also in the types of education they provide.

I feel strongly that we cannot call ourselves global if we are beholden to views or practices that are too parochial, and so we must be open to a multiplicity of educational approaches, and we must be prepared to incorporate them into our own working practices.

This isn’t always easy. Cambridge has struggled, for instance, to find a way to make Massive Open Online Courses (known as MOOCs) fit its own tried-and-tested small-group educational methods, though we are currently reviewing the best ways to make use of new technologies to improve student experience.

Another case in point: we believe that our collegiate model of undergraduate teaching is not replicable outside the university, and so we are reluctant to establish overseas teaching campuses. But we have long understood the benefits of facilities established overseas for the purpose of research.

Underpinning Cambridge’s claims to be global is also a willingness to consider, if not always embrace, a diversity of world views, even those that we may find challenging.

And surely we can only call ourselves global if we acknowledge and integrate the full range of talent available to us regardless of gender, sexual orientation, ethnicity or financial capability.

The scale of the challenge

Next: we are global because we address global issues.

Whether it is food security or energy sustainability, whether it is the perils of climate change or the realities of mass migration, the challenges we face are truly global.

And so, too, must be the solutions. Infectious diseases are not bothered by borders. Regardless of whether we are in China or Australia, we are all affected by the problems of ageing societies.

Which brings me to the next point about what it means to be a global university: global partnerships.

Collaboration between universities, within countries as well as across borders, is no longer optional. In an age of diminishing resources, and as the scale and complexity of the challenges increase, collaboration is an imperative whether we are in Cambridge, Copenhagen or California.

No matter how good it is, no matter how high in the rankings, an individual university cannot attain excellence on its own. Nor can a single country.

World-class research is a global project. And truly global universities are those able to harness the power of strategic partnerships—with other universities, with businesses, with civil society, or with governments.

The knowledge community

And the final qualification for what makes a university global: the assumption of a role of global leadership.

Universities like the members of IARU are perhaps the only modern institutions with the means and the legitimacy to bridge the gaps between disciplines, between different sectors of society, and between different cultures. This legitimacy gives universities a convening power unlike anyone else’s.

No institutions are better placed than leading universities to bring together policymakers, non-governmental and international organisations, businesses and the knowledge community to thrash out solutions to the challenges ahead. This legitimacy allows us to lead in efforts to improve lives not just at our doorstep, but anywhere in the world that improvement is needed.

But global leadership requires courage, creativity and close cooperation.

It demands being able to successfully balance our commitments, from engaging with our immediate neighbourhoods to engaging on a world scale. We do this to satisfy our societies’ aspirations for equality, development and growth.

It comes from an understanding that what we do at home can positively affect lives, and livelihoods, on the far side of the world.

It means knowing, for instance, that research in plant sciences carried out in Cambridge can help make crops in Ghana more resilient, but also that the knowledge developed by clinicians in a Ugandan maternity ward can save lives in Cambridge.

It requires us all to take full responsibility for that knowledge—and to act on it.

When it comes to discharging its global responsibilities, the University of Cambridge has been leading by example. I could mention, for instance, the Cambridge-Africa Programme, which since 2010 has been engaging formally with partners across sub-Saharan Africa to boost their research capacity.

This successful and sustainable model for global engagement is about allowing excellent African research to flourish.

It contributes, in a modest but decisive way, to the Sustainable Development Goals set out by the UN in 2015, in particular by helping to break the pernicious effect of poverty on health, nutrition, and education.

It makes a direct contribution to mitigating poverty, to ensuring food security, healthy lives, and equitable education for all, and to empowering women.

It helps to re-balance asymmetries between global partners, to expand the global knowledge ecosystem, and to put in place a global network of future academic and civic leaders.

It allows us to engage with new partners, and to strengthen our collaboration with others we already know well.

We acknowledge that the success of our Cambridge-Africa Programme is not only dependent on the expertise and the personal commitment of our researchers, but also on the generous support of academic and funding partners.

And we acknowledge that the Cambridge-Africa Programme is relatively small compared to some of the capacity-building initiatives and scholarship schemes currently in place.

But I leave you with one question: What if every one of the world’s leading research universities could do something similar?

Imagine the transformational effect that the commitment and the concerted efforts of the world’s top research-intensive institutions might have on Africa’s capacity to produce knowledge.

That would be “global transformation” indeed.

Eilis Ferran is Pro-Vice-Chancellor for Institutional and International Relations at the University of Cambridge.

World-leading universities can play an important role in strengthening African research, writes Pro-Vice-Chancellor for Institutional and International Relations Eilis Ferran


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Opinion: More accountability needed in how drugs are priced and reimbursed


Approving new medicines for the market is the responsibility of the EU, but it is left up to individual member states to decide which ones they wish to subsidise. New prescription medicines can be very expensive, and few patients could afford novel drugs for cancer – and especially for rare conditions – if they had to pay out of their own pockets. But, thanks to state subsidies, what most EU patients pay is only a fraction of the original price, be it in the form of a flat prescription fee (as happens in England) or various levels of patient co-payment depending on the patient and the medicine (as happens in Poland).

The case of Poland is interesting. In recent years, it has adopted many institutional innovations that seek to ensure it makes sound pricing and reimbursement decisions. However, in new research we’ve published in the British Journal of Sociology we found that Poland’s pricing and reimbursement system (P&R) still lacks transparency and accountability, which allows informal social actors to evade regulations that govern conflicts of interest.

EU member states use complex policy instruments to determine how much they are willing to pay the pharmaceutical industry for its products (pricing) and which medicines are to be prioritised and made accessible to patients (reimbursement). Given the steeply increasing prices of new medicines, P&R has a considerable impact on budgets. Combined with health budgets that are finite (and often decreasing in real terms), P&R involves important “opportunity costs” and ethical dilemmas, as brilliantly portrayed in Adam Wishart’s documentary, The Price of Life.

It has long been recognised that P&R decisions need to be based on sound evidence about drug efficacy, safety, cost-effectiveness and likely impact on health budgets. This has resulted in the increasing role of expert advisory bodies such as the National Institute for Health and Care Excellence (NICE) in the UK, which carry out scientific evaluations of the medical, economic and ethical considerations associated with the public funding of new drugs. Politicians and civil servants involved in the process must also use clear criteria for making their decisions.

The Polish paradox

Poland has been at the forefront of central European countries in embedding the principles of these scientific assessments into its P&R system. For example, Poland was quick to establish its Agency for Health Technology Assessment and Tariff System, while no equivalent body exists in the neighbouring, and more economically advanced, Czech Republic.

Nevertheless, Poland’s P&R has suffered from persistent irregularities, including lobbying scandals as well as strong corporate and political pressures on the agency it set up. These seem to be part of a general pattern of informal dealings in the healthcare sector, including the cherry-picking of winners in public tenders, nepotism and informal payments to doctors. Importantly, those involved in these dealings typically remain unaccountable.

Mechanisms of ‘deniability’

Drawing on more than 100 interviews with insiders in Poland’s system, we identified four mechanisms that amount to what political anthropologist Janine Wedel calls “deniability”.

• We found evidence of blurred boundaries between institutions involved in the policy process. This allowed policymakers, for example, to shift blame for controversial reimbursement decisions to bureaucratic or expert advisory bodies.

• Some of the key stakeholders played roles in different sectors – public institutions, the pharmaceutical sector or civil society organisations, and sometimes all at the same time. While these “coincidences of interest”, to use another term coined by Wedel, could reasonably be seen as controversial, they tended to escape the definitions of “conflict of interest” included in formal regulations.

• Playing multiple roles allowed stakeholders to maximise their influence by choosing the most convenient hat depending on the situation. For example, some legal advisers acted as “objective” commentators on reimbursement policy while representing pharmaceutical companies in the process.

• We identified evidence of activity of elite cliques. Members of these informal groups were able to coordinate their resources and influence while officially representing different organisations.

The last few years have seen the introduction of more comprehensive rules governing conflicts of interest. This includes publishing increasingly detailed protocols from sessions of the Polish agency’s main expert advisory body and introducing toughened conflict of interest requirements for top ministerial medical advisers. Whether these improvements address the problem of limited accountability depends on whether policymakers are willing to act on the spirit rather than the letter of regulations, among other things.

Revolving door in other countries. Dan4th Nicholas, CC BY

 

These problems are clearly not limited to Poland. For example, in the US and the EU alike, concerns have been expressed over the revolving door between drug regulators and the pharmaceutical sector as well as some senior clinicians acting as seemingly independent third parties on the industry’s behalf. There have also been criticisms of the activity of some contract research organisations playing roles in multiple arenas ranging from organising clinical trials to delivering public relations services to drug companies.

What can be improved?

There are no easy solutions to the issues we have identified. One important way of addressing “coincidences of interest” is by introducing a comprehensive cooling off period for public officials leaving state institutions. This issue can also be addressed by continually reviewing conflict of interest policies, especially declarations submitted by those consulted in the drug evaluation process, to make sure they reflect emerging forms of collaboration with the pharmaceutical industry.

High-ranking officials should also commit to building a culture of transparency by complying with conflict of interest disclosure requirements. There is also a big role to be played by journalists in holding policymakers, civil servants and other stakeholders to account. And there is much for others in the EU to learn from Poland about what – and what not – to do.

Piotr Ozieranski, Lecturer, University of Bath and Lawrence King, Professor of Sociology, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Lawrence King (Department of Sociology) and Piotr Ozieranski (University of Bath) discuss how EU member states use complex policy instruments to determine how much they are willing to pay the pharmaceutical industry for its products.



Opinion: Speaking dialects trains the brain as well as bilingualism does


There has been a lot of research to back up the idea that people who use two or more languages every day experience significant advantages. The brain-training involved in having to use a different language depending on the context and speaker is credited with enhancing attention and memory skills – as well as with better recovery after stroke and even later onset of the symptoms of dementia. But there is another – often hidden – source of brain-training in language use which many of us are not even aware of: dialects.

English accents and dialects. Maunus at English Wikipedia (based on Hughes & Trudgill, 1996), CC BY

 

Bi-dialectalism – which simply means the systematic use of two different dialects of the same language – is widespread in many parts of the world. In the US, millions of children grow up speaking African American Vernacular English at home as well as mainstream American English at school. The Arabic-speaking world is bi-dialectal too. Similar situations arise in many parts of Europe, such as the German-speaking parts of Switzerland, where schoolchildren may only feel comfortable talking about school subjects in High German but switch to Swiss German for everyday conversation. The Flemish-speaking parts of Belgium, and parts of Italy and Spain, also have regional varieties spoken alongside the standard variety.

It is worth emphasising that what we call “the language” of a country, such as Italian, is just one of a cluster of linguistically related varieties that, for cultural, historical and political reasons, was chosen as the standard. People who speak two varieties of the same language often regard only the standard one as “the language”, and don’t realise that the regional variety they also speak is a dialect in its own right – perhaps because of the negative associations dialects can carry.

But what our research suggests is that, for the human mind, the difference between speaking two dialects and speaking two languages might not be important at all, and that people who speak two dialects may share a cognitive profile with people who speak two languages.

All Greek to some people

Working with Kyriakos Antoniou and colleagues at the University of Cyprus and the Cyprus University of Technology we studied the cognitive performance of children who grew up speaking both Cypriot Greek and Standard Modern Greek – two varieties of Greek which are closely related but differ from each other on all levels of language analysis (vocabulary, pronunciation and grammar).

Modern Greek dialects. Pitichinaccio via Wikimedia Commons, CC BY

 

The study consisted of 64 bi-dialectal children, 47 multilingual children and 25 monolingual children. Comparisons between the three groups were performed in two stages and the socio-economic status, language proficiency and general intelligence of all children taking part were factored into the analyses.

Participants had to recall digits in the reverse order of presentation. That is, if presented with “three, nine, five, six” they had to recall “six, five, nine, three” – a test that measures their ability to hold and manipulate information in memory.
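The backward digit span task described here is simple to sketch in code. The function below is purely illustrative (it is not the test software used in the study) and shows how a single trial would be scored:

```python
def backward_recall_correct(presented, recalled):
    """Score one backward digit span trial: the response is correct
    only if the digits are recalled in the reverse order of presentation."""
    return list(recalled) == list(reversed(presented))

# Presented "three, nine, five, six": the correct response reverses it.
print(backward_recall_correct([3, 9, 5, 6], [6, 5, 9, 3]))  # True
print(backward_recall_correct([3, 9, 5, 6], [3, 9, 5, 6]))  # False
```

In practice, span tests increase the sequence length trial by trial, and a participant's span is the longest sequence they can reliably reverse.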

Somewhat to our surprise, multilingual and bi-dialectal children exhibited an advantage over monolingual children in a composite cognitive processes score based on tests of memory, attention and cognitive flexibility.

Interestingly, another recent study investigated the educational achievement of some Norwegian children who are taught to write in two forms reflecting two different Norwegian dialects. Using data from standardised national tests, including tests in reading and arithmetic, the children who were taught to write in both dialectal forms had scores higher than the national average.

This suggested that advantages previously reported for multilingual children could be shared by children speaking any two or more dialects. That is, the advantages of bilingualism arise with any combination of language varieties that differ enough to challenge the brain. They could be dialects of the same language, two related languages such as Italian and Spanish, or as diverse as English and Mandarin Chinese. Systematically switching between any two forms of language, even quite similar ones, seems to provide the mind with the extra stimulation that leads to higher cognitive performance.

What our research suggests – contrary to some widely held beliefs – is that, when it comes to language, plurality is an advantage and in this respect dialects are under-recognised and undervalued. This kind of research can make people appreciate there is an advantage to bi-dialectalism – and this may be important when we think about our identity, how we educate children and the importance of language learning.

In the future

We are now retesting and extending our hypotheses on a larger scale in collaboration with researchers at the University of Brussels. Belgium offers an ideal testing ground – dialects of Dutch such as West Flemish are spoken alongside more standard versions of Dutch and French. The new study includes larger samples and new measures to better understand the effects of bi-dialectalism on cognitive and linguistic development and their relation to bilingualism.

Benelux languages. Gruna_1, CC BY

 

It is also important to emphasise that the research on bilingualism has focused on a relatively narrow range of cognitive skills. But welcome steps towards widening our understanding of the effect of bilingualism are underway. In our research we are also investigating the effects of bilingualism on understanding implied meaning in conversation – in other words, whether the experience of anticipating which language a speaker will use makes bilingual and bi-dialectal children more adept at reading the speaker’s intentions more generally – and understanding the real meaning of what they say, specifically. Our preliminary findings suggest that this is indeed the case and we hope to substantiate this in the near future.

Napoleon Katsos, Senior Lecturer Department of Theoretical and Applied Linguistics, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Napoleon Katsos (Department of Theoretical and Applied Linguistics) discusses why speakers of two dialects may share cognitive advantage with speakers of two languages.



Study finds little change in the IMF’s policy advice, despite rhetoric of reform


A new study, the largest of its kind, has systematically examined International Monetary Fund (IMF) policies over the past three decades. It found that – despite claims to have reformed its practices following the global financial crisis – the IMF has in fact ramped up the number of conditions imposed on borrower nations to pre-crisis levels.

The crisis revived a flagging IMF in 2009, and the organisation has since approved some of its largest loans to countries in economic trouble. At the same time, IMF rhetoric changed dramatically. The ‘structural adjustment programs’ of austerity and privatisation were seemingly replaced with talk of the perils of inequality and the importance of social protection.   

Researchers from the University of Cambridge’s Department of Sociology collected archival material on the IMF’s lending operations and identified all policy conditions in loan agreements between 1985 and 2014 – extracting 55,465 conditions across 131 countries in total.

They found that structural adjustment conditions increased by 61% between 2008 and 2014, and reached a level similar to the pre-crisis period. 
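The 61% figure is a simple relative change between two condition counts. As a sketch (the counts below are illustrative placeholders, not the study's actual annual totals):

```python
def percent_change(old_count, new_count):
    """Relative change between two counts, expressed as a percentage."""
    return 100.0 * (new_count - old_count) / old_count

# Hypothetical example: 1,000 structural adjustment conditions in 2008
# rising to 1,610 by 2014 would correspond to the reported 61% increase.
print(percent_change(1000, 1610))  # 61.0
```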

The authors of the study, which used newly available data and is published today in the Review of International Political Economy, say their findings show that the IMF has surreptitiously returned to the practices it claims to have abandoned: encroaching on the policy space of elected governments by enforcing free-market reforms as conditions of lending. This is despite IMF Managing Director Christine Lagarde rejecting concerns over the return of structural adjustment: “We do not do that anymore”*.

“The IMF has publicly acknowledged their objectives to include creating breathing space for borrowing countries, and economic stability combined with social protection,” said lead author Alexander Kentikelenis. “Yet, we show the IMF has in fact increased its push for market-oriented reforms in recent years – reforms that can be detrimental to vital public services in borrowing countries.”

Although the IMF claims its programs can “create policy space” for governments, structural adjustment conditions can reduce this space as they are often aimed at an economy’s underlying structure: privatising state-owned enterprises and deregulating labour markets, for example.

“Our research suggests that structural adjustment is not a policy fad of the past,” said co-author Thomas Stubbs. “The emphatic return of structural conditionality in recent years calls into question the IMF’s ‘we don’t do that anymore’ rhetoric. These reforms at the IMF are basically just hot air.”

Many of these conditions continue to intrude on policy areas such as the labour market, despite claims to the contrary. Post-crisis, examples have included:

  • The elimination of 4,000 civil service positions in Moldova in 2010.
  • A 15% cut in pensions and raising of the retirement age in Romania, re-introduced as a ‘binding’ condition after it was struck down by the country’s constitutional court in 2010.
  • Extensive labour market liberalisation in Greece, including: the precedence of firm-level over sector-wide pay agreements to reduce the power of collective bargaining; the reduction of minimum wages and employee dismissal costs.
  • An increased retirement age in Portugal in 2012, followed by a realignment of public sector worker rights to “private sector rules”, including job termination.

In recent years, the IMF emphasised its attention to poverty reduction and social protection, with increasing use of conditions that specify minimum expenditures on health, education and other social policies.

The researchers found that inclusion of social spending conditions had indeed jumped since 2012, mostly in programmes for sub-Saharan African countries. However, after detailed analysis, the authors found that nearly half of such conditions were not implemented. Yet those African nations with the weakest adherence to social spending conditions still consistently met, and often far exceeded, the IMF’s fiscal deficit targets.

“The IMF’s well-advertised ‘pro-poor’ measures are only superficially incorporated into programme design, and are, at best, of secondary importance to stringent macroeconomic targets,” said co-author Lawrence King.

Added Kentikelenis: “We have shown that the IMF has been particularly adept at introducing layers of ceremonial pretences of reform designed to obscure the actual content of its adjustment programmes. These gaps between rhetoric and practice in the IMF’s lending activities reveal an escalating commitment to hypocrisy.” 

Researchers describe IMF as having an “escalating commitment to hypocrisy”, as study reveals that strict lending conditions have returned to pre-crisis levels, while ‘pro-poor’ targets frequently go unmet.

These gaps between rhetoric and practice in the IMF’s lending activities reveal an escalating commitment to hypocrisy
Alexander Kentikelenis
Russian President Medvedev meets with Christine Lagarde, Managing Director of International Monetary Fund
Reference

* “We provide lending, and, by the way, structural adjustments? That was before my time. I have no idea what it is. We do not do that anymore. No, seriously, you have to realize that we have changed the way in which we offer our financial support.” - Christine Lagarde, International Monetary and Financial Committee (IMFC) Press Briefing, Washington, D.C, April 12, 2014

 


Urgent action needed to close UK languages gap


The findings are included in a new report, The Value of Languages, published by the University of Cambridge this week, after wide-ranging consultation with government bodies and agencies including the MoD, Foreign and Commonwealth Office, GCHQ, and the Department for Education.

The report argues that the full contribution of languages to the UK economy and society should be realised across government, rather than falling solely under the remit of the Department for Education, thereby allowing a centralised approach to how language affects the UK in almost every sphere of 21st-century life.

Recent independent research, highlighted within the report, indicates the language deficit could be costing the UK economy billions of pounds per year.

The Value of Languages draws on discussions at a workshop held in Cambridge, co-chaired by Professor Wendy Ayres-Bennett of the Department of Theoretical and Applied Linguistics, and Baroness Coussins, Co-Chair of the All-Party Parliamentary Group on Languages. The workshop was attended by representatives from across government and is likely to inform future policy decisions in this area.

Professor Ayres-Bennett said: “It is vital that we communicate clearly and simply the value of languages for the health of the nation. English is necessary, but not sufficient. We cannot leave language policy to the Department for Education alone.

“We need a more coordinated cross-government approach which recognises the value of languages to key issues of our time including security and defence, diplomacy and international relations, and social cohesion and peace-building. Our report aims to raise awareness of the current deficiencies in UK language policy, put forward proposals to address them, and illustrate the strategic value of languages to the UK.”

The report also suggests:

  • Education policy for languages must be grounded in national priorities and promote a cultural shift in the attitude towards languages.
  • Language policy must be underpinned by organisational cultural change. The report highlights how cultural change is being achieved, for example, in the military with language skills being valued and rewarded financially. Military personnel are encouraged to take examinations to record their language skills, regardless of whether they are language learners or speakers of community or heritage languages.
  • Champions for languages both within and outside government are vital.

“Whereas the STEM subjects are specifically highlighted under the responsibilities for the Minister of State for Universities and Science, and there is a Chief Government Scientist, languages lack high-level champions within parliament and Whitehall,” added Ayres-Bennett.  “Modern languages also need media champions. Figures such as Simon Schama for history or Brian Cox for physics and astronomy have helped bring the importance of these subjects to the public’s attention.”

Immediate problems for government to address include the decline of languages and language learning in the UK from schools through to higher education, where language departments and degree courses are closing; business lost to UK companies through lack of language skills; and an erosion of the UK’s ‘soft power’ in conflict resolution and matters of national security, which are currently limited by a shortage of speakers of strategically important languages.

The report finds that the UK is also under-represented internationally, for instance in the EU civil service or in the translating and interpreting departments of the UN – and that the community and heritage languages spoken in the UK are often undervalued.

“A UK strategy for languages would mean that UK businesses can participate fully in the global market place using the language and communication skills of their workforce,” said Professor Ayres-Bennett. 

“It would also mean that the UK is able to maximise its role and authority in foreign policy through language and diplomacy. Educational attainment in a wide range of languages brings with it personal cognitive benefits as well as the ‘cultural agility’ vital to international relations and development, as well as enhancing the cultural capital and social cohesion of the different communities of the UK.”

The report cites a number of case studies illustrating the value of languages. For example, a Spanish linguist recruited to GCHQ was, from her first day, able to use the ‘street language’ acquired during her year abroad and her knowledge of certain Latin American countries to translate communications related to an international drugs cartel looking to transport cocaine into the UK. Comparing her analysis with those developed by two of her colleagues working in Russian and Urdu, she was able to create a clear intelligence picture of the likely methods and dates of the imminent drugs importation. Meetings with law enforcement agencies eventually led to the seizure of large quantities of cocaine and lengthy jail terms for the key players.

Policy workshops and briefings will be a key element of a new £4m research project on multilingualism led by Professor Ayres-Bennett at the University of Cambridge, and funded under the AHRC’s Open World Research Initiative.

The full report can be seen here: http://www.publicpolicy.cam.ac.uk/research-impact/value-of-languages

The UK Government needs to urgently adopt a new, comprehensive languages strategy if it is to keep pace with its international competitors and reduce a skills deficit that has wide-reaching economic, political, and military effects.

It is vital that we communicate clearly and simply the value of languages for the health of the nation. English is necessary, but not sufficient.
Wendy Ayres-Bennett
[Image: Swahili]

Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Opinion: The flower breeders who sold X-ray lilies and atomic marigolds

The Chelsea Flower Show, one of the biggest and best known horticultural shows in the world, is now open. In the coming days, some 150,000 visitors will make their way to the Royal Hospital Chelsea, expecting to be wowed by innovative garden designs and especially by gorgeous flowers. Among other things, show-goers will have a chance to learn the winner of the Royal Horticultural Society’s Plant of the Year award. This annual prize goes to the “most inspiring new plant” on display at the show – a high honour indeed given the number and range of varieties introduced each year.

The relentless pursuit of showy flowers for garden display extends back significantly further than the 104 years of the Chelsea show. One need only recall the infamous Dutch tulip craze of the 17th century to be reminded that fascination with floral novelties has a long and storied history.

Over the centuries, entrepreneurial cultivators have endeavoured to create unique plant varieties, either by bringing together the genetic material from established lines through hybridisation or through the discovery of new genetic variation such as a chance mutation in a field. Today, flower breeding is pursued with a far better understanding of plant biology than ever before, in some cases with the aid of technologies such as tissue culture and genetic transformation. Yet the goal remains the same: the creation of tantalising tulips, ravishing roses, show-stopping snapdragons and myriad other plants that will ideally prove irresistible to gardeners and turn a handsome profit.

The quest to produce profitable new varieties – and to do so as fast as possible – at times led breeders to embrace methods that today seem strange. There is no better illustration of this than the mid-century output of one of America’s largest flower-and-vegetable-seed companies, W Atlee Burpee & Co.

Gardening with X-rays

In 1941, Burpee Seed introduced a pair of calendula flowers called the “X-Ray Twins”. The company president, David Burpee, claimed that these had their origins in a batch of seeds exposed to X-rays in 1933 and that the radiation had generated mutant types, from which the “X-Ray Twins” were eventually developed.

At the time, Burpee was not alone in exploring whether X-rays might facilitate flower breeding. Geneticists had only recently come to agree that radiation could lead to genetic mutation: the possibilities for creating variation “on demand” now seemed boundless. Some breeders even hoped that X-ray technologies would help them press beyond existing biological limits.

The Czech-born horticulturist Frank Reinelt thought that subjecting bulbs to radiation might help him produce an elusive red delphinium. Unfortunately, the experiment did not produce the hoped-for hue. Greater success was achieved by two engineers at the General Electric Research Laboratory, who produced – and patented – a new variety of lily as a result of their experiments in X-ray breeding.

Though Reinelt’s and other breeders’ tangles with X-ray technology resulted in woefully few marketable plant varieties, David Burpee remained keen on testing new techniques as they appeared on the horizon. He was especially excited about methods that, like X-ray irradiation, promised to generate manifold genetic mutations. He thought these would transform plant breeding by making new inheritable traits – the essential foundation of a novel flower variety – available on demand. He estimated that “in his father’s time” a breeder chanced on a mutation “once in every 900,000 plants”. He and his breeders, by comparison, equipped with X-rays, UV radiation, chemicals and other mutation-inducing methods, could “turn them out once in every 900 plants. Or oftener”.

Scientific sales pitches

[Image: A 1973 Burpee catalogue cover. Credit: Burpee, CC BY-NC-ND]

Burpee’s numbers were hot air, but in a few cases plant varieties produced through such methods did prove hot sellers. In the late 1930s, Burpee breeders began experimenting with a plant alkaloid called colchicine, a compound that sometimes has the effect of doubling the number of chromosomes in a plant’s cells. They exploited the technique to create new varieties of popular garden flowers such as marigold, phlox, zinnia and snapdragon.

All were advertised as larger and hardier as a result of their chromosome reconfiguration – and celebrated by the company as the products of “chemically accelerated evolution”. The technique proved particularly successful with snapdragons, giving rise to a line of “Tetra Snaps” that were by the mid-1950s the best-selling varieties of that flower in the United States.

Burpee’s fascination with (in his words) “shocking mother nature” to create novel flowers for American gardeners eventually led him to explore still more potent techniques for generating inheritable variation. He even had some of the company’s flower beds seeded with radioactive phosphorus in the 1950s. These efforts do not appear to have led to any new varieties – Burpee Seed never hawked an “atomic-bred” flower – but the firm’s experimentation with radiation did result in a new Burpee product. Beginning in 1962, they offered for sale packages of “atomic-treated” marigold seeds, from which home growers might expect to grow a rare white marigold among other oddities.

Burpee was, above all, a consummate showman and a master salesman. His enthusiasm for the use of X-rays, chemicals, and radioisotopes in flower breeding emerged as much from his knowledge that these methods could be effectively incorporated into sales pitches as from his interest in more efficient and effective breeding. Many of his mid-century consumers wanted to see the latest science and technology at work in their gardens, whether in the form of plant hormones, chemical treatments, or varieties produced through startling new techniques.

Some 60-odd years later, times have changed. Chemicals and radiation are now more often cast as threatening than benign, and it is likely that many of today’s visitors to the Chelsea Flower Show hold a different view about the kinds of breeding methods they’d like to see employed on their garden flowers. But as the continued popularity of the show attests, their celebration of flower innovations and the human ingenuity behind them continues unabated.

Helen Anne Curry, Peter Lipton Lecturer in History of Modern Science and Technology, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Helen Anne Curry (Department of History and Philosophy of Science) discusses the history of our fascination with floral novelties.


Opinion: GM crops already feed much of the world today – why not tomorrow’s generations too?

My parents researched malnutrition and under-nutrition in India, especially among children, and found that many diets recommended by Western nutritionists were in fact completely inapplicable to the poor. So they formulated cheap, healthy diets based on indigenous food with which people were familiar. Yet despite their many other efforts, a quarter of people in India and nearly one in nine people around the world do not have enough food to live a healthy, active life.

The World Bank estimates that we will need to produce about 50% more food by 2050 to feed a population of nine billion people. And the past 50 years have seen agricultural productivity soar – corn yields in the US have doubled, for example. But this has come with sharp increases in the use of fertilisers, pesticides and water, which has brought its own problems. There is also no guarantee that this rate of increase in yields can be maintained.

Just as new agricultural techniques and equipment spurred on food production in the Middle Ages, and scientific crop breeding, fertilisers and pesticides did so for the Green Revolution of the 20th century, so we must rely on the latest technology to boost food production further. Genetic modification, or GM, used appropriately with proper regulation, may be part of the solution. Yet GM remains a highly contentious topic of debate where, unfortunately, the underlying facts are often obscured.

Views on GM differ across the world. Almost half of all crops grown in the US are GM, whereas widespread opposition in Europe means virtually no GM crops are grown there. In Canada, regulation is focused on the characteristics of the crop produced, while in the EU the focus is on how it has been modified. GM crops do not damage the environment by nature of their modification; GM is merely a technology, and it is the resulting product that we should be concerned about and regulate, just as we would any new product.

There are outstanding plant scientists who work on GM in the UK, but the Scottish, Welsh and Northern Irish governments have declared their opposition to GM plants. Why is there such strong opposition in a country with great trust in scientists?

About 15 years ago, when GM was just emerging, its main proponents and many of the initial products were from large multinational corporations – even though it was publicly funded scientists who produced much of the initial research. Understandably, many felt GM was a means for these corporations to impose a monopoly on crops and maximise their profits. This perception was not helped by some of the practices of these big companies, such as introducing herbicide resistant crops that led to the heavy use of herbicides – often made by the same companies.

The debate became polarised, and any sense that the evidence could be rationally assessed evaporated. There have been claims made about the negative health effects and economic costs of GM crops – claims later shown to be unsubstantiated. Today, half of those in the UK do not feel well informed about GM crops.

Everyday genetic modification

GM involves the introduction of very specific genes into plants. In many ways this is much more controlled than the random mutations that are selected for in traditional plant breeding. Most of the commonly grown crops that we consider natural actually bear little resemblance to their wild ancestors, having been selectively modified through cross-breeding over the thousands of years that humans have been farming crops – in a sense, this is a form of genetic modification itself.

In any case, we accept genetic modification in many other contexts: insulin used to treat diabetes is now made by GM microbes and has almost completely replaced animal insulin, for example. Many of the top selling drugs are proteins such as antibodies made entirely by GM, and now account for a third of all new medicines (and over half of the biggest selling ones). These are used to treat a host of diseases, from breast cancer to arthritis and leukaemia.

[Image: Millions of acres growing GM crops worldwide. Credit: Fafner/ISSSA, CC BY-SA]

GM has been used to create insect-resistance in plants that greatly reduces or even eliminates the need for chemical insecticides, reducing the cost to the farmer and the environment. It also has the potential to make crops more nutritious, for example by adding healthier fats or more nutritious proteins. It’s been used to introduce nutrients such as beta carotene, from which the body can make vitamin A – the so-called golden rice – which prevents night blindness in children. And GM can potentially create crops that are drought resistant – something that will become increasingly important as water becomes scarce.

More than 10% of the world’s arable land is now used to grow GM plants. An extensive study conducted by the US National Academies of Sciences recently reported that there has been no evidence of ill effects linked to the consumption of any approved GM crop since the widespread commercialisation of GM products 18 years ago. It also reported that there was no conclusive evidence of environmental problems resulting from GM crops.

GM is a tool, and how we use it is up to us. It certainly does not have to be the monopoly of a few multinational corporations. We can and should have adequate regulations to ensure the safety of any new crop strain (GM or otherwise) to both ourselves and the environment, and it is up to us to decide what traits in any new plant are acceptable. People may be opposed to GM crops for a variety of reasons and ultimately consumers will decide what they want to eat. But the one in nine people in poor countries facing malnutrition or starvation do not enjoy that choice. The availability of cheap, healthy and nutritious food for them is a matter of life and death.

Alongside other improvements in farming practices, genetic modification is an important part of a sustainable solution to global food shortages. However, the motto of the Royal Society is nullius in verba; roughly, “take nobody’s word for it”. We need a well-informed debate based on an assessment of the evidence. The Royal Society has published GM Plants: questions and answers which can play its part in this. People should look at the evidence – not just loudly voiced opinions – for themselves and make up their own minds.

Venki Ramakrishnan, Professor and Deputy Director, MRC Laboratory of Molecular Biology, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Professor Sir Venki Ramakrishnan (MRC Laboratory of Molecular Biology) discusses how genetically modified crops could help solve the problem of food security.


Opinion: How does a bike stay upright? Surprisingly, it’s all in the mind

It’s as easy as riding a bike … or so the saying goes. But how do we manage to stay upright on a bicycle? If anyone ventures an answer they most often say that it’s because of the “gyroscopic effect” – but this can’t be true.

Put simply, the gyroscopic effect occurs because a spinning wheel wants to stay spinning about its axis, just as a spinning top or even planet Earth stays aligned to its spin axis. While motorcyclists with their big, heavy, fast-spinning wheels may notice the gyro effect, a modest everyday cyclist won’t because the wheels are much lighter and at a leisurely riding speed they don’t spin quickly enough.

If a pedal bicycle did stay upright because of the gyroscopic effect then any novice getting on a bike could just push off and the bike – and the effect – would do the rest. The simple truth is that you have to learn how to ride, just as you must learn how to walk. Riding a bike is all in the mind.

Imagine you had to ride along a perfectly straight line on a perfectly flat path. Easy, surely. Well, no. It’s virtually impossible to ride along a narrow straight line just as it’s really hard to walk perfectly along a straight line, even when you’re not drunk. Try it.

Now attempt this little experiment: stand on the ball of one foot, using your arms to balance. It’s quite hard. But now try hopping from one foot to the other. It is much easier to keep your balance. It’s called running. What your brain has learned to do is to make a little correction every time you take off so that if, say, you’re falling to the right, then you’ll hop a bit to the left with the next step.

It’s the same with pedalling a bike. When riding, you’re always making tiny corrections. If you are falling to the right, then you subconsciously steer a bit to the right so that your wheels move underneath you. Then, without thinking, you steer back again to stay on the path.
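The “steer into the fall” correction described above can be sketched as a toy control simulation. This is a linearised inverted-pendulum caricature of a bike with made-up constants, not a real dynamics model:

```python
# Toy model: the bike is an inverted pendulum that gravity tips over,
# while steering towards the fall moves the wheels back under the rider.
g, h = 9.81, 1.0         # gravity (m/s^2), centre-of-mass height (m)
v, wheelbase = 4.0, 1.0  # forward speed (m/s), wheelbase (m)

def simulate(gain, steps=2000, dt=0.005):
    """Return the final lean-angle magnitude after 10 simulated seconds."""
    phi, phi_dot = 0.05, 0.0            # start with a small lean to the right
    for _ in range(steps):
        delta = gain * phi              # rider's corrective steer angle
        # gravity tips the bike; steering produces a righting acceleration
        phi_ddot = (g / h) * phi - (v**2 / (h * wheelbase)) * delta
        phi_dot += phi_ddot * dt
        phi += phi_dot * dt
    return abs(phi)

print(simulate(gain=0.0))  # no correction: the lean diverges
print(simulate(gain=2.0))  # steering into the fall: the lean stays small
```

With no steering feedback the lean angle grows without bound; with even a crude proportional correction it stays bounded, which is the point of the paragraph above.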

This “wobbling” is perfectly normal. It is more obvious among beginners (mostly children) who wobble around quite a lot, but it may be almost imperceptible in an expert cyclist. Nevertheless, these little wobbles are all part of the process and explain why walking – or riding – on a dead straight line is so hard because you can’t make those essential little side-to-side corrections.

Grand designs

There are some really clever bits in bicycle design to make riding a bike easier, too. Most important is the fact that the steering column (the “head tube”) is tilted so that the front wheel makes contact with the ground at a point that lies behind where the steering axis intersects with the ground. The distance between these two points is called “the trail”.

[Image: Bicycle dimensions. Credit: Rishiyur1, own work]

The trail really helps to stabilise a bike when you’re riding with no hands because when you lean to the right, say, the force at the contact point on the pavement will turn the front wheel to the right. This helps you to steer effortlessly and it allows for hands-free steering by leaning slightly left or right.
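The trail itself follows from simple front-end geometry: the wheel radius, the head angle and the fork offset. Here is a minimal sketch using typical road-bike dimensions (illustrative numbers, not taken from the article):

```python
import math

def trail_m(wheel_radius_m, head_angle_deg, fork_offset_m):
    """Distance along the ground from where the steering axis meets the
    ground to the front wheel's contact point (head angle from horizontal)."""
    lam = math.radians(head_angle_deg)
    return (wheel_radius_m * math.cos(lam) - fork_offset_m) / math.sin(lam)

# Typical road-bike numbers: 700c wheel, 72-degree head angle, 45 mm offset.
t = trail_m(wheel_radius_m=0.34, head_angle_deg=72.0, fork_offset_m=0.045)
print(f"trail = {t * 1000:.0f} mm")  # -> trail = 63 mm
```

A positive trail means the contact point follows the steering axis, which is what produces the self-centring effect described above.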

But people have built bikes with vertical head tubes and they are perfectly rideable, too. In fact, it’s quite hard to make a bike you can’t ride, and many have tried.

That’s because keeping a bike upright is largely to do with you and your brain – something that’s easy to prove. Try crossing your hands over, for example. You will not even be able to get started, and if you switch hands while you’re riding, be warned, you will fall off instantaneously – something that wouldn’t happen if it were the gyroscopic effect keeping you upright.

Clowns and street performers ride bikes with reverse-geared steering. It takes months of practice to learn how to ride a bike like this, and it’s all about unlearning how to ride a normal bike. It’s amazing how the brain works.

The gyroscopic effect

But what about the gyroscopic effect I referred to earlier? Surely it helps a bit? Well, no it doesn’t … unless you’re going pretty fast. There is a well-known demonstration that seems to show how a bike wheel is really affected by the gyroscopic effect but if you do the sums you can show that the effect is nowhere near strong enough to hold you up when you’re riding a bike.
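One way to “do the sums” is to compare the torque with which gravity topples a slightly leaning bike against the angular momentum the front wheel actually carries at a leisurely speed. The numbers below are illustrative guesses, not figures from the article:

```python
import math

m_total, h = 80.0, 1.0   # rider plus bike mass (kg), centre-of-mass height (m)
m_wheel, r = 1.2, 0.34   # front wheel mass (kg) and radius (m)
v, lean_deg = 5.0, 5.0   # leisurely speed (m/s), small lean angle (degrees)

# Torque with which gravity topples the leaning bike (N*m)
topple_torque = m_total * 9.81 * h * math.sin(math.radians(lean_deg))

# Spin angular momentum of the front wheel, thin-hoop approximation (kg*m^2/s)
wheel_L = m_wheel * r * v

# Precession (yaw) rate the wheel would need to supply that righting torque
needed_precession = topple_torque / wheel_L  # rad/s
print(f"toppling torque = {topple_torque:.0f} N*m")
print(f"required precession rate = {needed_precession:.0f} rad/s")
```

A required yaw rate of tens of radians per second (several full turns of the whole bike every second) is absurd, which is the sense in which the gyroscopic contribution is far too weak at everyday riding speeds.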

To prove that the gyro effect is unimportant I built a bike with a second, counter-rotating front wheel. I’m not the first to have done this – David Jones built one in 1970. We both had the same idea. Essentially, the backward spinning wheel cancels out the gyroscopic effect of the front wheel, proving that it doesn’t matter and that the only thing keeping you upright is your brain. It’s also a really fun experiment that anyone can do.

So what’s the best way to learn to ride? Well, watching children learning to ride with trainer wheels distresses me because every time one of the stabilisers touches the ground it is an unlearning experience. To cycle, your brain has to learn to wobble, so take off the trainer wheels – and the more you wobble the quicker you’ll learn. Cycling really is all in the mind.

Hugh Hunt, Reader in Engineering Dynamics and Vibration, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Hugh Hunt (Department of Engineering) discusses how we manage to stay upright on a bicycle.


A 100 million-year partnership on the brink of extinction

A relationship that has lasted for 100 million years is at serious risk of ending, due to the effects of environmental and climate change. A species of spiny crayfish native to Australia and the tiny flatworms that depend on them are both at risk of extinction, according to researchers from the UK and Australia.

Look closely into one of the cool, freshwater streams of eastern Australia and you might find a colourful mountain spiny crayfish, from the genus Euastacus. Look even closer and you could see small tentacled flatworms, called temnocephalans, each only a few millimetres long. Temnocephalans live as specialised symbionts on the surface of the crayfish, where they catch tiny food items, or inside the crayfish’s gill chamber where they can remove parasites. This is an ancient partnership, but the temnocephalans are now at risk of coextinction with their endangered hosts. Coextinction is the loss of one species, when another that it depends upon goes extinct.

In a new study, researchers from the UK and Australia reconstructed the evolutionary and ecological history of the mountain spiny crayfish and their temnocephalan symbionts to assess their coextinction risk. This study was based on DNA sequences from crayfish and temnocephalans across eastern Australia, sampled by researchers at James Cook University, sequenced at the Natural History Museum, London and Queensland Museum, and analysed at the University of Sydney and the University of Cambridge. The results are published in the Proceedings of the Royal Society B.

“We’ve now got a picture of how these two species have evolved together through time,” said Dr Jennifer Hoyal Cuthill from Cambridge’s Department of Earth Sciences, the paper’s lead author. “The extinction risk to the crayfish has been measured, but this is the first time we’ve quantified the risk to the temnocephalans as well – and it looks like this ancient partnership could end with the extinction of both species.”

Mountain spiny crayfish species diversified across eastern Australia over at least 80 million years, with 37 living species included in this study. Reconstructing the ages of the temnocephalans using a ‘molecular clock’ analysis showed that the tiny worms are as ancient as their crayfish hosts and have evolved alongside them since the Cretaceous Period.
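A molecular clock rests on a simple principle: sequence differences accumulate along both branches after two lineages split, so the divergence time is the pairwise genetic distance divided by twice the per-lineage substitution rate. A minimal sketch with purely illustrative numbers (not the study’s actual calibration):

```python
def divergence_time_mya(pairwise_divergence, rate_per_site_per_myr):
    """Molecular-clock estimate of time since two lineages split.
    Divergence accumulates along both branches, hence the factor of two."""
    return pairwise_divergence / (2 * rate_per_site_per_myr)

# E.g. 14% sequence divergence at 0.07% substitutions per site per Myr:
print(divergence_time_mya(0.14, 0.0007))  # about 100 million years
```

Real analyses calibrate the rate against fossils or dated events and account for rate variation among lineages; this sketch shows only the arithmetic at the core of the method.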

Today, many species of mountain spiny crayfish have small geographic ranges. This is especially true in Queensland, where mountain spiny crayfish are restricted to cool, high-altitude streams in small pockets of rainforest. This habitat was reduced and fragmented by long-term climate warming and drying, as the continent of Australia drifted northwards over the last 165 million years. As a consequence, mountain spiny crayfish are severely threatened by ongoing climate change and the International Union for the Conservation of Nature (IUCN) has assessed 75% of these species as endangered or critically endangered.

“In Australia, freshwater crayfish are large, diverse and active ‘managers’, recycling all sorts of organic material and working the sediments,” said Professor David Blair of James Cook University in Australia, the paper’s senior author. “The temnocephalan worms associated only with these crayfish are also diverse, reflecting a long, shared history and offering a unique window on ancient symbioses. We now risk extinction of many of these partnerships, which will lead to degradation of their previous habitats and leave science the poorer.”

The crayfish tend to have the smallest ranges in the north of Australia, where the climate is the hottest and all of the northern species are endangered or critically endangered. By studying the phylogenies (evolutionary trees) of the species, the researchers found that northern crayfish also tended to be the most evolutionarily distinctive. This also applies to the temnocephalans of genus Temnosewellia, which are symbionts of spiny mountain crayfish across their geographic range. “This means that the most evolutionarily distinctive lineages are also those most at risk of extinction,” said Hoyal Cuthill.

The researchers then used computer simulations to predict the extent of coextinction. This showed that if all the mountain spiny crayfish that are currently endangered were to go extinct, 60% of their temnocephalan symbionts would also be lost to coextinction. The temnocephalan lineages that were predicted to be at the greatest risk of coextinction also tended to be the most evolutionarily distinctive. These lineages represent a long history of symbiosis and coevolution of up to 100 million years. However, they are the most likely to suffer coextinction if these species and their habitats are not protected from ongoing environmental and climate change.
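The logic of such a coextinction simulation can be illustrated with a toy bipartite network. The associations below are randomly generated stand-ins, not the study’s real host-symbiont data:

```python
import random

random.seed(1)
hosts = [f"Euastacus_{i}" for i in range(37)]  # 37 crayfish species in the study
endangered = set(random.sample(hosts, k=28))   # roughly the 75% assessed at risk

# Hypothetical associations: each symbiont lineage lives on one or two hosts.
symbionts = {
    f"Temnosewellia_{j}": random.sample(hosts, k=random.choice([1, 1, 2]))
    for j in range(50)
}

# A symbiont lineage is coextinct if every one of its hosts has gone extinct.
surviving = set(hosts) - endangered
coextinct = [s for s, hs in symbionts.items()
             if not any(h in surviving for h in hs)]
print(f"{len(coextinct)} of {len(symbionts)} symbiont lineages lost to coextinction")
```

Lineages confined to a single endangered host are the most exposed, which mirrors the study’s finding that evolutionarily distinctive, narrow-range lineages carry the greatest coextinction risk.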

“The intimate relationship between hosts and their symbionts and parasites is often unique and long lived, not just during the lifespan of the individual organisms themselves but during the evolutionary history of the species involved in the association,” said study co-author Dr Tim Littlewood of the Natural History Museum. “This study exemplifies how understanding and untangling such an intimate relationship across space and time can yield deep insights into past climates and environments, as well as highlighting current threats to biodiversity.”

Reference:
Jennifer F. Hoyal Cuthill et al. ‘Australian spiny mountain crayfish and their temnocephalan ectosymbionts: an ancient association on the edge of coextinction?’ Proceedings of the Royal Society B (2016). DOI: 10.1098/rspb.2016.0585

A symbiotic relationship that has existed since the time of the dinosaurs is at risk of ending, as habitat loss and environmental change mean that a species of Australian crayfish and the tiny worms that depend on them are both at serious risk of extinction. 

We’ve now got a picture of how these two species have evolved together through time, and it looks like this ancient partnership could end with the extinction of both species.
Jennifer Hoyal Cuthill
[Image: Light microscope image of the five-tentacled temnocephalan Temnosewellia cf. rouxi from cultured redclaw crayfish]


Landscapes from other worlds

What can a picture tell us about our world - and our universe?

Astronomer Professor Paul Murdin says images from outer space can give scientists useful information which they can use to infer what lies behind the image and to imagine what conditions may have caused particular landscapes.

Professor Murdin, Senior Fellow Emeritus at the Institute of Astronomy, has recently published a book drawing parallels between our terrestrial landscapes and space landscapes and will be speaking about it in a session at the Hay Festival on 2nd June as part of the Cambridge Series at the Hay Festival.

Professor Murdin’s talk will include cutting edge images of towering cliffs and icy canyons on distant planets, from mountain ranges on Mars to a volcanic landscape on Venus.

He says such images, in addition to their aesthetic appeal, pique the scientific imagination. “For me as a scientist,” he states, “space exploration is about viewing landscapes to imagine the underlying reason that it is both ordinary and strange.”

In the last few years, thanks to technological advances, the images sent back from space have been increasingly detailed and have prompted speculation about the history of distant planets. Last week, for instance, a report from the Planetary Science Institute stated that mega-tsunamis in an ancient ocean on Mars may have shaped the landscape and left deposits that hint at whether the planet was once habitable.

Professor Murdin decided to write his book, Planetary Vistas: The Landscapes of Other Worlds, as a result of a road trip to the US. He says: “I was in White Sands, New Mexico, and the scenery reminded me of pictures that I had seen of Mars. The idea came to mind of a book about the surfaces of planets and their moons, based on what you would see as a space tourist.”

He says the main difference between terrestrial and space landscapes is obvious: the lack of vegetation, which is unique to Earth. However, he adds: “There are similarities in the science that generates the unliving structures in the landscape: the violent events like earthquakes, floods or meteor impacts which expose cliffs and the geological history of the planet; the rush of water that rumbles and rounds off boulders in stream beds, whether the streams were water or liquid methane; the deserts of drifting sand and dunes which are evidence of extreme climates.”

His presentation will mix artistic and photographic images of landscapes. Professor Murdin says there are links between the role of art and science in presenting landscapes. “In both art and science, people are communicating with other people,” he says. “A space scientist with something to say about the surface of the Moon or Mars has to select what he or she wants to talk about, isolate it and place it into a frame to put on a screen, colour it to bring out its interesting features and tell the story behind the picture.  Painters do the same. It was a surprise to discover how similar were the techniques used by both groups of people.”

He adds that many academics still believe there is a division between the ‘two cultures’ of arts and sciences, something he finds “depressing”. “There is no separation in the arts and sciences as intellectual pursuits,” he states.

Asked how much artists and fiction writers have shaped how we view space and how much their imaginings have matched up to the reality we are now seeing, he says that the scientific imagination has often surpassed the artistic vision.

Professor Murdin says: “The artist Georgia O'Keeffe saw what amounted to the extraterrestrial beauty of other worlds in her paintings of New Mexico landscapes, but I think that many lesser artists and writers have failed to deliver as extraordinary a vision of the surfaces of the planets as is actually there.  Saturn's moon, Titan, for example, is a land of rain, fog, hills, streams and lakes, almost prosaically like the Earth.  The other-worldly feature is that the Lakeland scenery of Titan is shaped by liquid methane.  What you see is ordinary.  But the underlying reality is most extraordinary: mind-boggling, something provoked by the scientific imagination.”

Professor Murdin himself says he has been moved by the images coming back from space. He says: “I didn't realise how prevalent and varied the beauty was.”

His next book, out in July, is on asteroids. Professor Murdin had an asteroid named after him in 2012 and, as with space landscapes, his interest is in what they can show about how the universe developed. He says: “Asteroids are fragments left over from the early history of the solar system, something like an archaeological midden of rubbish that reveals the history of a former civilisation.”

 

Professor Paul Murdin will be speaking in the Cambridge Series at the Hay Festival about his latest book on planetary landscapes.

For me as a scientist, space exploration is about viewing landscapes to imagine the underlying reason that it is both ordinary and strange.
Professor Paul Murdin
Sand dunes on Mars

Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Female meerkats compete to outgrow their sisters


Meerkats live in groups of up to 50 individuals, yet a single dominant pair will almost completely monopolise reproduction, while subordinates help to raise offspring through feeding and babysitting. Since only a small minority of individuals ever get to be dominants, competition for the breeding role is intense in both sexes and females are unusually aggressive to each other.

Within groups, subordinate females are ranked in a hierarchy based on age and weight, forming a “reproductive queue”. When dominant females die, they are usually replaced by their oldest and heaviest daughter, though younger sisters sometimes outgrow their older sisters and can replace them in breeding queues.

University of Cambridge scientists working on wild Kalahari meerkats identified pairs of sisters and artificially increased the growth of the younger member of each pair by feeding them three times a day with hard-boiled egg.

The scientists weighed them and their (unfed) older sisters daily for three months. The results, published today in the journal Nature, show that the increased growth of younger females stimulated their older sisters to increase their daily food intake and weight gain in an attempt to outgrow their rivals. 

Tellingly, the extent to which the older sister increased her weight was greater when her younger sister’s weight gain was relatively large than when it was slight.

These results suggest that subordinate meerkats are continually keeping tabs on those nearest them in the breeding queue, and make concerted efforts to ensure they are not overtaken in size and social status by younger and heavier upstarts.

But competitive growth does not stop there. If a female meerkat gets to be a dominant breeder, her period in the role (and her total breeding success) is longer if she is substantially heavier than the heaviest subordinate in her group. 

During the three months after acquiring their new status, dominant females gain further weight to reduce the risk of being usurped. Regular weighing sessions of newly established dominants showed that, even if they were already adult, they increased in weight over this period – and that the magnitude of their weight increase was greater if the heaviest subordinate of the same sex in their group was close to them in weight.

This is the first evidence for competitive growth in mammals. The study’s authors suggest that other social mammals such as domestic animals, primates and even humans might also adjust their growth rates to those of competitors, though these responses may be particularly well developed in meerkats as a result of the unusual intensity of competition for breeding positions.

“Size really does matter and it is important to stay on top,” said senior author Professor Tim Clutton-Brock, who published the first major overview of research on mammalian social evolution this month in the book Mammal Societies (Wiley).

“Our findings suggest that subordinates may track changes in the growth and size of potential competitors through frequent interactions, and changes in growth rate may also be associated with olfactory cues that rivals can pick up,” Clutton-Brock said.  

“Meerkats are intensely social and all group members engage in bouts of wrestling, chasing and play fighting, though juveniles and adolescents play more than adults. Since they live together in such close proximity and interact many times each day, it is unsurprising that individual meerkats are able to monitor each other’s strength, weight and growth.”

Male meerkats leave the group of their birth around the age of sexual maturity and attempt to displace males in other groups, and here, too, the heaviest male often becomes dominant. The researchers found a similar strategy of competitive weight-gain in subordinate males.  

The data was collected over the course of twenty years and encompassed more than forty meerkat groups, as part of the long-term study of wild meerkats in the southern Kalahari at the Kuruman River Reserve, South Africa, which Clutton-Brock began in 1993. In the course of the study, the team have followed the careers of several thousand individually recognisable meerkats – some of which starred in the award-winning docu-soap Meerkat Manor, filmed by Oxford Scientific Films.

The meerkats were habituated to humans and individually recognisable due to dye marks. Most individuals were trained to climb onto electronic scales for their weigh-ins, which occurred at dawn, midday and dusk, on ten days of every month throughout their lives. This is the first time it has been feasible to weigh large numbers of wild mammals on a daily basis.


Weighing meerkats. Image credit: Tim Clutton-Brock

Latest research shows subordinate meerkat siblings grow competitively, boosting their chance of becoming a dominant breeder when a vacancy opens up by making sure that younger siblings don’t outgrow them.

Size really does matter and it is important to stay on top
Tim Clutton-Brock
Sub-adult meerkats playing.


Cambridge App maps decline in regional diversity of English dialects


The English Dialects App (free for Android and iOS) was launched in January 2016 and has been downloaded more than 70,000 times. To date, more than 30,000 people from over 4,000 locations around the UK have provided results on how certain words and colloquialisms are pronounced. A new, updated version of the app – which attempts to guess where you’re from at the end of the quiz – is available for download from this week.

Based on the huge new dataset of results, researchers at Cambridge, along with colleagues at the universities of Bern and Zurich, have been able to map the spread, evolution or decline of certain words and colloquialisms compared to results from the original survey of dialect speakers in 313 localities carried out in the 1950s.

One of the major findings is that some features of regional accents, such as pronouncing the 'r' in words like 'arm' – a very noticeable pronunciation feature which was once normal throughout the West Country and along much of the south coast – are disappearing in favour of the pronunciations found in London and the South-East (see map slideshow).

Lead researcher Dr Adrian Leemann, from Cambridge’s Department of Theoretical and Applied Linguistics, said: “When it comes to language change in England, our results confirm that there is a clear pattern of levelling towards the English of the south-east; more and more people are using and pronouncing words in the way that people from London and the south-east do.”

Professor David Britain from the University of Bern added: “People in Bristol speak much more similarly to those in Colchester now than they did fifty years ago. Regional differences are disappearing, some quite quickly. However, while many pockets of resistance to this levelling are shrinking, there is still a stark north-south divide in the pronunciation of certain key words.”

Dialect words are even more likely to have disappeared than regional accents, according to this research. Once, the word ‘backend’ instead of ‘autumn’ was common in much of England, but today very few people report using this word (see map slideshow).

 

However, the research has shown some areas of resistance to the patterns of overall levelling in dialect. Newcastle and Sunderland stood out from the rest of England with the majority of people from those areas continuing to use local words and pronunciations which are declining elsewhere. For example, many people in the North-East still use a traditional dialect word for 'a small piece of wood stuck under the skin', 'spelk' instead of Standard English 'splinter'.

Other dialect words, like ‘shiver’ for ‘splinter’, are still reported in exactly the same area they were found historically—although they are far less common than they once were (see map slideshow).

The data collected to date shows that one northern pronunciation has proved especially robust: saying words like 'last' with a short vowel instead of a long one. In this case, the northern form actually appears to have spread southwards in the Midlands and the West Country compared with the historical survey.

In other cases, new pronunciations were found to be spreading. Pronouncing words like 'three' with an 'f' was only found in a tiny region in the south east in the 1950s, but the data from today show this pronunciation is much more widespread – 15% of respondents reported saying 'free' for 'three', up from just 2% in the old Atlas.

Cambridge PhD student Tam Blaxter, who worked alongside Dr Leemann to map the 30,000 responses supplied by the public, suggests that greater geographical mobility is behind the changes when compared to the first systematic nationwide investigation of regional speech, the Survey of English Dialects from the 1950s.

“There has been much greater geographical mobility in the last half century,” said Blaxter. “Many people move around much more for education, work and lifestyle and there has been a significant shift of population out of the cities and into the countryside.

“Many of the results have confirmed what language experts might predict – but until now we just didn’t have the geographical breadth of data to back up our predictions. If we were to do the survey in another 60-70 years we might well see this dialect levelling expanding further, although some places like the north-east seem to have been especially good at preserving certain colloquialisms and pronunciations.”

When the app was originally launched in January, users were quizzed about the way they spoke 26 different words or phrases. The academics behind the app wanted to see how English dialects have changed, spread or levelled out since the Survey of English Dialects. The 1950s project took eleven years to complete and captured the accents and dialects of mainly farm labourers.

Perhaps one of the most surprising results of the data provided so far is how the use of ‘scone’ (to rhyme with ‘gone’ rather than ‘cone’) is much more common in the north of England than many might imagine (see map slideshow).

Adrian Leemann said: “Everyone has strong views about how this word is pronounced but until we launched the app in January, we knew rather little about who uses which pronunciation and where. Our data shows that for the North and Scotland, ‘scone’ rhymes with ‘gone’, for Cornwall and the area around Sheffield it rhymes with ‘cone’ – while for the rest of England, there seems to be a lot of community-internal variation. In the future we will further unpick how this distribution is conditioned socially.”

The launch of the English Dialects App in January has also allowed language use in Wales, Scotland and Northern Ireland to be compared with language use in England (the original 1950s survey was limited to England and similar surveys of the other parts of the UK were not undertaken at the same time or using the same methods).

The huge volume of feedback has also allowed the team to improve the app’s prediction of where users are from: it now correctly places 25 per cent of respondents to within 20 miles of their location, compared with 37 miles under the old method.

Regional diversity in dialect words and pronunciations could be diminishing as much of England falls more in line with how English is spoken in London and the south-east, according to the first results from a free app developed by Cambridge researchers.

More and more people are using and pronouncing words in the way that people from London and the south-east do.
Adrian Leemann


On the life (and deaths) of democracy


Following the history of democracy from its invention in 508 BCE to the 21st century, Democracy: A Life traces the development of political thinking over millennia. It also examines the many sustained attacks on the original notion of Athenian democracy across the intervening centuries which have left it degraded, deformed and largely unrecognisable from its original incarnation.

The book, published by OUP, traces the grand sweep of democracy from around 500 BCE down through the Classical era to its general demise, in its original forms, about 300 BCE.

Thereafter, though the word democracy persisted, it continued only in degraded versions from the Hellenistic era, through late Republican and early Imperial Rome, down to early Byzantium in the sixth century CE. For many centuries after that, from late Antiquity, through the Middle Ages, to the Renaissance, democracy was effectively eclipsed by other forms of government – before enjoying a revival in 17th century England and further renewals in late 18th century North America and France.

“We owe to the ancient Greeks much, if not most, of our own current political vocabulary – from the words anarchy and democracy to politics itself,” said Cartledge. “But their politics and ours are very different beasts. To an ancient Greek democrat (of any stripe), all our modern democratic systems would count as oligarchy: rule for and by the few.

“Politics is the art of the possible and the art of persuasion – and nowhere was this more evident than in ancient Athens where all but 20 of 700 offices of the Assembly were filled by lottery every year.”

The Assembly was government by mass meeting, every nine days or so. On the agenda of every principal Assembly meeting were such fundamental issues as relations with the gods, state security and the overseas supply of wheat.

However, the 6,000 or so ordinary members of the Assembly who were able and willing to turn up in central Athens could not decide such profound matters by themselves. At the meeting, they listened in the open air to the arguments and counter arguments of prominent and well-known speakers before a mass vote was taken on a show of hands.

Even with such mass participation, there was still the chance for further scrutiny if sufficient numbers felt an error or crime had been committed in and by the Assembly. People’s jury courts could stymie demagogic self-promotion and offer the chance of delivering a considered second opinion on a measure.

Above all, there was also the ‘Boule’ or Council of 500 – the Assembly’s steering committee and chief administrative body of the state. This annually recruited body, like the annual panel of the 6,000 jurors in the People’s courts, was filled by the use of lottery, not by election. The lot was, democrats believed, the democratic way to fill public offices. It was random, gave all qualified male adult citizens an equal chance of selection, and so encouraged them to throw their hats into the ring, to step up to the plate and do their public civic duty.

In essence, Cartledge argues that this truly represented government of the people by the people for the people.

“Ancient Athenians did not have political parties, they thought elections were undemocratic,” he added. “Any male who wished to attend the Assembly could do so, and anyone who wished to have his say could call out and make his voice heard. It was the equivalent of holding a referendum on major issues every other week.”

Cartledge argues that the notion of such equality today is but a pipe dream at best, at least in socioeconomic terms, when the richest 1 per cent of a country’s population can own more than the remaining 99 per cent put together.

“Today, our MPs get elected and feel they have to toe the party line. And they are in turn protected by the party system and infrequent elections. There is no way to be held to account after an election – and this is a modern phenomenon. The word ostracise comes from ancient Greece where politicians could be physically cast out for ten years if they were felt to be abusing office. If a week is a long time in politics today, you can imagine what a decade in the wilderness would mean.”

Though such examples are few, Cartledge highlights two modern democratic systems where echoes of the Athenian concept of demokratia (demos meaning people and kratos meaning power) can be found.

In Switzerland, at the federal level, changes to the constitution can be proposed by citizens and can only be completed by referendum; and the Swiss populace votes regularly on issues at all levels of the political scale – from the building of a new street to the foreign policy of the country.

Meanwhile, following the 2008 financial crash in Iceland, referenda, assemblies, and a people’s parliament were formed as citizens of the country campaigned to make their voices and views heard by means of mass participation in the country’s new politics.

The notion of government by referendum is particularly apposite to the United Kingdom of 2016 as the battle lines are drawn, often with crude, crass and alarmist hyperbole from both the Leave and Remain camps, for the EU referendum on June 23.

“The EU referendum will give us an all too brief taste of what it was like in ancient Athens,” added Cartledge. “If it’s a majority of one, then that will be the decision. This system is so rarely used, and so risky, but it’s the nearest thing to trusting the people. It’s an extraordinary thing to trust people who are not experts – but this system existed and lasted for 200 years, and has flourished on and off since.

“Government by referendum suited the Ancient Athenians. Whether it’s a useful add-on to, or a flagrant contradiction of, our democracy – that’s a matter on which we the electorate should have been asked to give our decisive view. But our democracy, being merely representative, would look like a creeping, crypto-oligarchy to the ancient Greeks – and many today may be coming to a similar conclusion.”

Democracy: A Life is out now.

The ‘life’ of democracy – from its roots in ancient Athens to today’s perverted and ‘creeping, crypto-oligarchies’ – is the subject of a newly-published book by eminent Cambridge classicist Paul Cartledge.

Our democracy would look like a creeping, crypto-oligarchy to the ancient Greeks – and many today may be coming to a similar conclusion.
Paul Cartledge


Sixth formers see the future in ancient Egypt & Mesopotamia

Fifty students from 24 schools across the UK attended the inaugural all-day conference at The British Museum in London.
 
The students heard experts from the Museum as well as from the Universities of Cambridge, Oxford, Liverpool, Cardiff, Swansea and Durham, and from UCL and SOAS. Participants also joined special tours of the Egyptian and Mesopotamian sections of The British Museum, led by specialists from the University of Cambridge.
 
Egypt and Mesopotamia (modern Iraq, ancient Sumer, Assyria and Babylon) have produced some of the most fascinating discoveries about the ancient world. Today it is possible to learn the languages, study the artefacts, and reconstruct the most varied aspects of these ancient civilisations in astonishing detail. 
 
Recent events, notably the desecration of monuments in the ancient city of Palmyra, in modern-day Syria, have underlined the relevance and fragility of this cultural heritage in the 21st century.
 

 
Martin Worthington, Lecturer in Assyriology at Cambridge, said: “These subjects are not offered at A-Level and few sixth formers are aware that they even exist as university subjects. We wanted to show them what makes studying Egypt and Mesopotamia so intellectually and culturally exciting, highlight the various degree courses which are available to them, and explain what admissions tutors are looking for.”
 
The programme included talks about careers, information on admissions, and the opportunity to meet current students and academic staff from many of the institutions in the UK that teach these subjects.
 
Highlights of the day included Kate Spence, Senior Lecturer in Egyptian Archaeology at Cambridge, talking about ‘Egypt in Nubia: cultures in collision’; Richard Parkinson, Professor of Egyptology at Oxford, sharing his passion for reading ancient Egyptian texts; and Paul Nicholson, Professor in Archaeology at Cardiff exploring ‘The Catacombs of Anubis at North Saqqara’.
 
Adam Agowun, 17, a student at Parmiter’s School in Hertfordshire said: "I loved seeing everything and hearing the various talks. It has reaffirmed everything that I've hoped for. I am going to study Egyptology."
 
Cambridge, which organised the event, is one of the few universities in the country to teach Egyptology and Assyriology. From October 2017, these subjects will be included in the University’s new Single Honours degree in Archaeology. Professor Cyprian Broodbank, Head of Cambridge’s Division of Archaeology, said: “The subject embraces a wide range of approaches spanning the sciences and humanities and it’s a superb medium for training the flexible, innovative minds that our society needs.”
 
After the event, 93 per cent of respondents said the day had made them more likely to study Egypt and Mesopotamia at university.

The University’s archaeologists recently teamed up with The British Museum to inspire sixth formers to consider studying Egyptology and Assyriology, subjects which very few have the opportunity to study at school.

We wanted to show what makes studying Egypt and Mesopotamia so intellectually and culturally exciting
Martin Worthington, Lecturer in Assyriology
Dr Martin Worthington reads a Neo-Assyrian royal inscription in the British Museum


Grand designs: the role of the house in American film


The Lonely Villa tells the story of four women subjected to a terrifying break-in by intruders. A woman barricades herself and her daughters into the house as her absent husband, alerted by a phone call, hastens to their rescue. In the opening shot, the villains are seen lurking in the shrubbery of the handsome all-American home that stands in splendid isolation, an icon of the property-owning dream.

Rhodes’ exploration of the house in American cinema has taken him deep into the history and theory of both film and architecture, and will result in a book due for publication in 2017. He is Director of the newly launched Centre for Film and Screen, which brings together researchers from subjects as diverse as English, philosophy, history of art, architecture and languages, and continues a tradition of teaching and research on the subject of film since the 1960s.

“Houses are built to be lived in but also to be looked at – and you only have to switch on your television to see how much they fascinate us,” he says. “In watching cinema, too, we are forever looking at and into people’s houses. Cinema’s preoccupation with the house stems from cinema’s strong relation to realism and to the representation of human lives, a large portion of which plays out in domestic interiors.”

Central to Rhodes’ research into films that range from Meet Me in St Louis and Gone with the Wind to Psycho and Citizen Kane is the idea of property and possession as well as their opposites – alienation and dispossession. It’s a theme that flows through the cinematic experience right to the temporary possession of the seat in which the viewer watches a film and enters the intimate spaces of other people’s lives. “Property reigns in many aspects of the cinema experience,” he says. “Not just in the drama unfolding on the screen itself but also in the process of film-making, practices of production, distribution and exhibition.”

Rhodes suggests that the pleasure we take in immersing ourselves in the visual and sensual experiences of entering other people’s worlds has an antecedent in country house tours and, most specifically, the collections known as ‘cabinets of curiosities’. Objects acquired to display and impress, these museum-like collections are examples of belonging and, by the same token, of not belonging. “At the heart of visual pleasure is a constant negotiation of property boundaries,” says Rhodes. “It’s a question of mine but not yours – of inviting in yet keeping out.”

Revealed to a chosen few guests, cabinets of curiosities and their modern equivalents speak powerfully of their owner’s taste. A short film titled House: After Five Years of Living (1955) perfectly encapsulates the house as an object of desire and as a container for carefully curated possessions. Directed by designers Charles and Ray Eames, it shows their modernist house – one they designed themselves – in a series of stills that venerate this landmark building and its collection of modern and folk art, textiles and design objects. Neither of its owners appears yet their presence is palpable through the framing, shot by shot, of the house they created to work so beautifully in its Californian context.

Ownership is not confined to buildings but extends to those who live and work in them. Rhodes says that his thesis is implicitly feminist. His forthcoming book will draw attention to the ways in which, in film and in real life, women are forced into uncomfortably close relationships with the home, becoming part of the same parcel of ownership.

An even more tightly binding relationship is played out between servant and home, particularly in the representation of African American slavery in the American South following the Civil War. Two thirds of the way into Gone with the Wind, the servant girl Prissy looks up at her employers’ newly constructed mansion and exclaims: “We sure is rich now!” The viewer is apparently invited to laugh both at her delight and at her naivety, and in a manner that only repeats the film’s explicit racism. Yet the spectator is also the butt of this joke.

“This shot is a kind of ‘hall of mirrors’ of property relations,” says Rhodes. “The cinema audience looks at the image which was Metro-Goldwyn-Mayer’s property. Inside the image, the servants gaze up at the property of the house. But if we look carefully we see that there is no house there: what they are really looking at is either a painted background or else a matte painting inserted in the post-production process. Whether or not the image was there when the scene was shot, what they are looking at is a ‘prop’.”

The word prop is, of course, an abbreviation for property. The house, as the ultimate prop, takes many forms, its physical form acting as a powerful pointer. The mansion and the bungalow, the rambling shingle and stick-style residence, the modernist home with its picture windows: all convey messages (about status, class, race, politics) and shape the action that takes place within them.

“In much of the US, the possession of land, even if it’s a tiny strip of grass separating one house from another, is fundamental to a feeling of ownership. The bungalow was initially seen as a space for easeful, convenient living – but this modest home quickly came to spell failure,” says Rhodes. “If you think about entrances and exits, a suburban home with a hallway allows for a gradual transition from outside to inside while a bungalow offers none of that dignity. The cramped space of the bungalow leads to too much intimacy and to uncomfortable confrontations.”

Dwelling places are objects of desire – especially so in the affluent Western world. Our homes absorb our money and eat into our time: perhaps, in the process of acquisition, they own us just as much as we own them. As backdrops to our lives, they tell stories about the kind of people we are and would like to be. In film, and on the screen, houses convey multiple meanings – not just about class and status but also about childhood and our relationship with history.

When a house is broken into, a dream is shattered. In Griffith’s The Lonely Villa, the ruffians are hampered by the solidity of the house’s doors and the weight of the furniture pushed up against them. All ends well when the mother and daughters are rescued, just in time, by the man of the house. But property is fragile and, in the final reckoning, all ownership is a question of controlling impermanent and shifting borders.

Inset image: Credit, The District.

It’s black and white, silent and just short of ten minutes in length. But D.W. Griffith’s 1909 classic The Lonely Villa inspired Dr John David Rhodes, Director of Cambridge’s new Centre for Film and Screen, to look at the role and meaning of the house in American cinema.

Houses are built to be lived in but also to be looked at – you only have to switch on your television to see how much they fascinate us.
John David Rhodes
Screenshots from D.W. Griffith’s The Lonely Villa (1909)
Centre for Film and Screen

The University of Cambridge has fostered teaching and research on the subject of film since the 1960s, with pioneering work undertaken in the 1970s-80s by influential figures such as Stephen Heath and Colin MacCabe. Over time, film studies rose in prominence across the University’s faculties. In 2008, Cambridge’s strengths in this subject were consolidated with the launch of the University’s first MPhil in Screen and Media Cultures.

From this heritage of Cambridge’s thoughtful consideration of the art of the moving image, the new Centre for Film and Screen has been developed. Although based mainly in the Faculty of Modern and Medieval Languages, the Centre is truly interdisciplinary, featuring researchers from across subjects as diverse as English, Philosophy, History of Art, Architecture and Languages.

This year, the Centre is launching the University’s first ever PhD programme in Film and Screen Studies, to complement the existing MPhil course and to enable doctoral students to join the active and varied film and screen studies research culture at Cambridge and participate in the Centre’s teaching, research and seminars.

Cambridge itself is a cinematic city. Its architectural beauty and history have, over the years, made it a very attractive location for film production. The city is home to a thriving art cinema and numerous film and arts festivals, including the annual Cambridge Film Festival. Many of the Colleges of the University have film screening programmes and host visiting filmmakers.

The broader culture of the University has long been associated with creativity and dynamism in the arts and humanities, and continues to produce some of the most noteworthy names in the film and television industry, such as actors Tilda Swinton and Tom Hiddleston and director Sam Mendes. Cambridge’s postgraduate degrees in Film and Screen Studies combine the wealth of the University’s humanistic traditions with innovative inquiry into the contemporary culture of the moving image.


Lines of Thought: Communicating Faith


As part of its 600th anniversary celebrations, the University Library has made a series of six films – one for each of the six themes explored in Lines of Thought – with the latest, Communicating Faith, taking a close look at some iconic religious treasures across all the major faiths, including Christianity, Islam and Judaism.

The oldest item in Communicating Faith is a text for prayer, the so-called Nash Papyrus. Dating from the second century BC, the fragments on display in Cambridge contain the Ten Commandments and, until the discovery of the Dead Sea Scrolls, formed the oldest surviving manuscript of any part of the Hebrew Bible.

However, one of the oldest and most valuable items in the Library’s collections – and one of the stars of Lines of Thought – is a recovered text called the Codex Zacynthius.

Codex Zacynthius is a palimpsest: a parchment book whose leaves were scraped clean and rewritten. The later text is an 11th- or 12th-century copy of the gospels, but underneath it lies a very early text of the Gospel of St Luke. This undertext was first deciphered in the 19th century. It is now possible, using modern imaging techniques, to get a much more precise image of what the book would have looked like when it was written in the 6th or 7th century. Work will continue on the codex when the exhibition comes to an end in September.

The translation of religious texts has always been central to the transmission of faith across barriers of religion and culture, but it could be a perilous activity. William Tyndale’s English translation of the New Testament ultimately cost him his life. His pioneering translation survived, however: in 1611, the team of Cambridge scholars and theologians tasked with helping to prepare the text of the authoritative King James Bible drew heavily on Tyndale’s work.

Will Hale, who curated Communicating Faith, said: “Our copy of Tyndale’s New Testament was printed in Antwerp in 1534. Translating the Bible was an act of heresy at the time according to the mainstream church, which held that the one true version was the Latin Vulgate and that only the church had the right to interpret it to the people. Tyndale felt that even the ploughboys at the plough should be able to recite scripture in their own language. And of course, for his pains, he was strangled and burnt as a heretic two years after this translation was published.

“Today’s academics are exploiting digital technology to unearth new secrets from documents penned in antiquity. Cutting-edge multispectral imaging allows us to read texts erased from a seventh-century manuscript of the Gospel of Saint Luke, whilst dispersed collections of fragments of manuscripts from a Cairo synagogue are being painstakingly reunited in the digital realm.”

Some of the world’s most important religious texts are currently on display in Cambridge as part of Cambridge University Library’s 600th anniversary exhibition – Lines of Thought: Discoveries that Changed the World.

Detail from the Codex Zacynthius


The myth of quitting in anger


Anger at the workplace is commonly associated with employees storming out of the office and quitting their jobs, but a new study from the Cambridge Judge Business School suggests that the picture is far more complex.

More broadly, positive emotions are usually thought to lead to constructive outcomes and negative emotions to damaging outcomes for business and other organisations.

A new academic study finds, however, that these generalisations are often a myth: when identification with a company is high, anger over job situations often decreases (rather than boosts) a person’s intention to leave because such employees want to stick it out and improve the organisation rather than walk out in a huff.

Conversely, when a person’s identification with their organisation is low, anger increases their intention to quit, says the study, published in the Academy of Management Journal.

Researchers at the Cambridge Judge Business School found that for an individual highly identified with the organisation, anger directed toward the organisation is akin to self-blame, because the organisation is part of their self-definition; such people are therefore less likely to respond to negative feelings by disengaging.
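The pattern the researchers describe – anger predicting turnover intention differently at low and high levels of organisational identification – is what statisticians call a moderation effect, typically captured with an interaction term in a regression model. The sketch below simulates such data to show how the interaction term produces the sign reversal; the coefficients, sample size and variable names are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated employees: anger at the organisation and organisational
# identification, both scaled to [0, 1]. All coefficients are hypothetical.
anger = rng.uniform(0, 1, n)
ident = rng.uniform(0, 1, n)
noise = rng.normal(0, 0.1, n)

# Quit intention rises with anger at low identification, but the negative
# interaction term makes anger *reduce* quit intention when ident is high.
quit_intent = 0.5 + 0.6 * anger - 0.3 * ident - 1.2 * anger * ident + noise

# Ordinary least squares with an interaction column.
X = np.column_stack([np.ones(n), anger, ident, anger * ident])
beta, *_ = np.linalg.lstsq(X, quit_intent, rcond=None)
b0, b_anger, b_ident, b_inter = beta

# Marginal effect of anger at identification level d is b_anger + b_inter * d.
print(f"effect of anger at ident=0: {b_anger:+.2f}")
print(f"effect of anger at ident=1: {b_anger + b_inter:+.2f}")
```

In the simulated data, the sign of anger’s effect flips from positive at low identification to negative at high identification, mirroring the reversal the study reports.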

The practical implication of the research, the authors say, is that it is unwise for companies to broadly characterise specific emotions as beneficial or detrimental to the organisation.

“The study suggests that company policies that are designed to promote positive emotions or minimise negative emotions may in fact not have the intended effect,” says Jochen Menges, University Lecturer in Organisational Behaviour at Cambridge Judge Business School and Professor of Leadership at WHU – Otto Beisheim School of Management in Germany. “So rather than seeking to suppress certain workplace emotions, companies should instead adopt practices that seek to encourage greater organisational identification.”

The research focused on a large company in the pilot training and certification business, with a final dataset of 135 people employed in the United States and Europe who were evaluated over a one-year period. They were asked about their intentions to leave the company or remain, and about both general organisation issues (such as schedule and pay) and specific matters related to the job – such as events that “made you feel good at your job,” “made you feel disrespected as a pilot” or “made you feel close to other pilot instructors.”

As a follow-up, the study looked at actual staff turnover at the flight training company six months after the last survey of employees and found a significant correlation between the number of employees intending to leave the company and the actual staff turnover.

The study examined guilt and pride, in addition to anger – and found here, too, a dark side of positive emotion and a bright side of negative emotion. For example, while pride is generally associated with a likelihood to remain at a company, for employees lacking in work-related identifications, a feeling of pride made them more likely to consider moving on.

The research looked at people’s identification with their occupation as well as with their organisation, and found that while occupational identity is not as powerful as organisational identity in predicting staff turnover, it does play a complementary role.

Reference:
Samantha Conroy, William Becker and Jochen Menges. 'The Meaning of My Feelings Depends on Who I Am: Work-related Identifications Shape Emotion Effects in Organizations.' Academy of Management Journal (2016). DOI: 10.5465/amj.2014.1040

Originally published on the Cambridge Judge Business School website

Anger often decreases – rather than boosts – a person’s intention to quit a job when they identify strongly with their company, says a new study. 



What birds' attitudes to litter tell us about their ability to adapt


The study, led by Gates Cambridge Scholar Alison Greggor and published in the journal Animal Behaviour, shows that corvids – the family of birds that includes crows, ravens and magpies – are more likely to show fear of unfamiliar objects than other birds. However, if they and other bird species have previously encountered similar objects, they are able to overcome some of their fear.

The researchers measured levels of fear of new objects in birds across urban and rural habitats, comparing corvids, a family known for being behaviourally flexible and innovative, with other bird species found in urban areas. The birds' hesitancy to approach food when different types of objects were nearby was compared to their behaviour when food was presented alone.

The researchers found corvids were more afraid of objects than other birds. However, birds were less fearful if the objects involved were similar to something they might have encountered before: urban birds, for instance, were less hesitant in approaching litter.

Alison Greggor, who is doing a PhD in Psychology at the University of Cambridge, said: "From a broad perspective this work aims to help us understand how animals adapt to human-dominated landscapes. We found that although species differ in their overall levels of fear towards new things, populations of all species in urban areas showed lesser fear towards objects that looked like rubbish, but did not show reductions in fear towards all types of novelty. Therefore, they may actually be learning which specific parts of urban habitats are safe and which are dangerous. In future, others might be able to use this information to predict what types of things animals need to learn to be able to survive in urban areas. Such predictions may help us understand why some species are unable to adjust to urban areas."

Reference
Greggor, AL et al. Street smart: faster approach towards litter in urban areas by highly neophobic corvids and less fearful birds. Animal Behaviour; 30 May 2016; DOI: 10.1016/j.anbehav.2016.03.029

Urban birds are less afraid of litter than their country cousins, according to a new study, which suggests they may learn that litter in cities is not dangerous. The research could help predict what birds need to learn in order to adapt to urban settings, and why some species fail to survive increasing human encroachment on their habitats.

Crows in Kingston


The illiterate boy who became a maharaja


In May 1875, an illiterate 12-year-old boy was chosen by the British to become the Maharaja of Baroda, the most important princely state in western India. He left his village and travelled some 300 miles to the state’s capital, where he was given a new name and a new family. For the next six years, he underwent rigorous academic and physical training in a school set up for the express purpose of making him into a prince.

A photograph in the archives of the British Library shows the diminutive Maharaja Sayaji Rao III shortly after arriving in Baroda. He wears a heavily brocaded outfit and holds a full-length sword. Only a few months earlier, he had been running barefoot in the fields. Brought up speaking Marathi, he was now learning to read, write and speak in English as well as in the principal languages of his subjects – Gujarati and Urdu.

The extraordinary efforts undertaken by British colonialists to shape young Indian princes into rulers who would be unswervingly loyal to the British crown (while remaining visibly loyal to Indian traditions) are revealed in research undertaken by University of Cambridge PhD candidate Teresa Segura-Garcia. She will present her research in a talk in Cambridge tomorrow (1 June 2016).

The education of Indian princes in the colonial period is an under-explored topic. Segura-Garcia’s pioneering research into archival material shows that the late 19th century heralded the beginning of an era when the British realised that future rulers needed to be rooted in their own culture as well as equipped to operate on a global stage.

By the 1870s the majority of India was a British colony. But 500 princely states, home to two-fifths of the Indian population, enjoyed political autonomy. Their independence was, however, nominal and they were subject to varying degrees of British control. Interference in their affairs included playing an active part in the education of Indian princes.

In many of these states, the British took a leading role in developing and overseeing educational programmes for future leaders – such as the intensive routine designed to bring young Sayaji Rao up to speed – that aimed to produce an Indian elite politically aligned with imperial rule and prepared to suppress anti-colonial resistance.

“The British knew that the future ruler of a state as powerful and wealthy as Baroda needed to follow Indian traditions of kingship and to be highly visible in the environment he was destined to rule over. This is why Sayaji Rao, an outsider to the court, was not sent to the Indian boarding schools that catered to Indian princes from lesser states. Rather, he was educated within the confines of the court at Baroda,” says Segura-Garcia.

“His education was devised to prepare him to hold his own in a variety of settings. The growing tensions between India and Britain meant that he was under close scrutiny on both sides – in terms, for example, of his friendships and activities – which put him under considerable pressure as a young man.”

Segura-Garcia’s close examination of primary sources shows just how deep the British transformative endeavour went. Not only were young Indian aristocrats tutored in gentlemanly pursuits, but they were also encouraged to embrace ‘virtues’ such as punctuality, diligence and discipline – qualities that British administrators thought to be alien to the ‘backward’ Indian character.

A fluent grasp of English was vital for a prince destined to be a modern ruler – a ruler who was to travel widely and showcase the civilising influence of Britain’s empire. Sayaji Rao also learnt arithmetic (he performed poorly). Academic study was accompanied by tutoring in horseriding and Indian physical exercise, including pehlwani (a form of wrestling) and gymnastics. Essential too was an appreciation of European culture.

The way in which Sayaji Rao was plucked from obscurity to become one of India’s most powerful rulers was nothing short of a social experiment. The British selected him from a number of possible candidates related to the deposed Maharaja of Baroda, who in 1875 had been ousted by the British for alleged misrule. Sayaji Rao was judged to be the right age and, with no formal schooling, he represented an attractively blank slate.

“Education was central to Britain’s civilising mission in India,” says Segura-Garcia. “And the notion of education as a means of transformation was nothing new. By the early 19th century, the East India Company had begun subtly to alter the education of Indian princes. Motivated by the need to forge alliances with local allies, the Company singled out Indian aristocrats to prove that education had the power to civilise.”

The most notable proponent of this view was Thomas Macaulay, an influential member of the East India Company. Macaulay argued for the creation, through the education of elites, of a class of ‘interpreters’ who would act as a bridge between the Company and the millions it governed. He described such people as “a class of persons, Indian in blood and colour, but English in taste, in opinions, in morals, and in intellect”.

It was against this backdrop that Sayaji Rao arrived in the walled city of Baroda and entered the privileged milieu of the court. He wasn’t entirely removed from his family circle, as he was accompanied by one of his brothers and by a cousin. He was adopted by the deposed Maharaja’s sister-in-law, with whom he had breakfast every day.

For the first few months of his schooling, the young Maharaja was given intensive tuition by two Indian teachers. When this failed to bring the desired results sufficiently quickly, the Resident – the British representative in Baroda – set up a small school that became known as the Prince’s School.

Crucially, the Resident hired a “British gentleman” to take charge of the prince’s education. The new tutor, Frederick Elliot, a graduate of Oxford, reported that his first impression of Sayaji Rao was not promising: the 12-year-old was, in his opinion, “apparently and actually dull”. Interestingly, tutor and pupil (and their respective wives) went on to form a genuine and lasting bond of friendship – a development that British administrators viewed with the utmost suspicion, and which hindered Elliot’s career in the Indian Civil Service.

In the households of Indian royal courts, children were traditionally brought up within the zenana, the women’s secluded inner quarters. The British took a dim view of an environment to which, as men, they had little access. They believed that Indian noblewomen were unintelligent and superstitious, which made them unfit to bring up young rulers. 

Sayaji Rao’s days were structured by a relentless routine imposed by his tutor, who had to squeeze 12 years’ learning into half that time. The prince rose at 6am and exercised for two hours before breakfast. Six hours of lessons followed (sadly no detailed records of his curriculum have survived). Once the school day was over, there was more exercise and preparations for the next day’s studies.

The choice to educate Sayaji Rao within court circles rather than send him to boarding school (where he might be exposed to undesirable influences, notably homosexuality) allowed his mentors to put him on display in Baroda as a young ruler undergoing an education that incorporated elements of both Indian and British traditions.

Each morning the teenage Maharaja was taken, accompanied by a military escort, from the royal palace to the Prince’s School, on the edge of the city – a daily and highly public reminder of his presence and the power invested in him. Sayaji Rao was fond of riding and took part in the traditional aristocratic pursuit of hunting – an opportunity to show off his physical prowess from the back of a horse and exhibit his mastery over nature.

Elliot noted that his protégé learnt slowly but fortunately he “refused to forget much of anything which he has once learnt”. When, as an adult, Sayaji Rao was asked about his education, he remembered the “good and useful books” in the palace library (which housed more than 20,000 volumes) which he read “devoutly and zealously” to acquire as much knowledge as possible.

In 1881, Sayaji Rao reached the age of 18; his formal education ended and he assumed full ruling powers of the state of Baroda. Six years later, he took an initial European tour – and later travelled widely in North America, East Asia, the Middle East and Africa. In all, from 1887 until his death in 1939 he undertook 28 overseas trips that ranged in duration from eight to 14 months – an astonishing achievement even at a time when major Indian rulers were becoming very well-travelled.

Becoming increasingly independent of his British minders, Sayaji Rao made use of the intellectual capital afforded by his education to develop far-ranging links that sometimes bolstered and sometimes worked against the interests of the British empire. “The British hoped that Sayaji Rao would be a poster boy for colonial India,” says Segura-Garcia. “But, thanks to his education, he was more than that: during his travels he met with, and supported, the Indian exiles who were actively resisting British colonialism.”

Teresa Segura-Garcia will talk about ‘Princely education in India in the age of colonialism: the education of Maharaja Sayaji Rao III of Baroda, 1875-81’ at 5pm tomorrow (1 June 2016) in Seminar Room SG1, Alison Richard Building, 7 West Road, Cambridge CB3 9DT. All welcome.

 

As they struggled to maintain their grip on India as the jewel in the colonial crown, the British attempted to mould the character of India’s princes. Research by Teresa Segura-Garcia into the remarkable story of Sayaji Rao III, Maharaja of Baroda, reveals the thinking behind his education and its practical implications. She presents her work in a talk tomorrow (1 June 2016).

Maharaja Sayaji Rao III of Baroda, aged twelve, November 1875


Innovating for the future of cities


There is a clear line of sight on the broad features of the cities of the future.

They will be large, with significantly more than half of the world’s growing population crammed into them.

They will house an increasingly older population, placing stress on services to the elderly and a rising tax burden on young workers whose taxes pay for those services.

They will be environmentally constrained, requiring a lower environmental impact from almost everything we depend on today, and they will need more resilient infrastructure, buildings and economies as the climate shifts.

In the developing world, at least, the megacities will be a complex and messy mix of formal and informal settlements, with no obvious governance structure covering the entire city.

These are very broad sketches of the challenges. The more interesting issues revolve around how we respond to those challenges, and how those responses affect the design, operation and governance of cities. How we respond will in turn profoundly influence the quality of life of residents and what it feels like to live in such cities.

The future depends on the innovations we create and put in place today. But what form might those innovations take? We divide them into the physical city, urban governance and the choices made by the residents of a city. Each is the focus of intensive research at the University of Cambridge in collaboration with our partners elsewhere and in the public and private sectors.

The physical city

Future cities must become smarter, since resources and services will be stretched to their limits. Our cities today are built on projections of long-term needs, and locked into the infrastructure to meet those needs with a large margin of safety so they are robust against different potential futures. This is wasteful of materials and energy.

Buildings and infrastructure of the future will be fitted with sensors monitoring every aspect of operations from climate to energy performance to material safety and service demand. Energy will flow in real time to where it is most needed. Transport will be directed around areas of high air pollution so human health is preserved. Buildings will be monitored for stresses, allowing actions to be taken before catastrophic failure, reducing the over-engineering of buildings with more concrete and steel than may ever be required.

The same sensors will monitor the climate and allow buildings and infrastructure to respond so damage from extreme weather events is minimised. The technologies for climate adaptation are well known. The problem is how to allocate limited technological and financial resources so that the overall impact of a changing climate on a city is minimised. This requires understanding the role of specific parts of the physical city in the economy and services. An approach is needed to rationalise adaptation resources so they are used wisely to protect the city’s economy and services, in turn ensuring livelihoods and well-being are preserved. Macroeconomic models linked to engineering knowledge allow decision-makers to understand where adaptation and recovery resources can best be directed to get a city back on its feet after an extreme weather event.
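As a toy sketch of the allocation problem just described – spending a limited adaptation budget where it avoids the most damage – the code below ranks candidate interventions by expected damage avoided per unit cost. The asset names, costs and benefit figures are all invented, and a simple greedy heuristic stands in for the macroeconomic and engineering models the essay refers to:

```python
def allocate(budget, assets):
    """Greedily fund interventions by expected damage avoided per unit cost."""
    chosen = []
    # Consider interventions in descending order of benefit-to-cost ratio.
    for name, cost, avoided in sorted(assets, key=lambda a: a[2] / a[1], reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

# Hypothetical interventions: (name, cost, expected damage avoided),
# in arbitrary currency units.
assets = [
    ("flood barrier", 40, 120),
    ("drainage upgrade", 25, 90),
    ("substation hardening", 30, 60),
    ("road resurfacing", 20, 20),
]
print(allocate(60, assets))  # → ['drainage upgrade', 'substation hardening']
```

Greedy ranking is only a heuristic: the underlying problem is a knapsack-style optimisation, and an exact solver can do better when individual costs are lumpy relative to the budget.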

Governance

Cities will become living laboratories for sustainability, requiring changes in governance. Since cities are heterogeneous mixtures of planned and unplanned buildings, formal and informal developments, no single set of solutions to service provision, crime, health or education will work everywhere within the city. Systems of governance will allow for experimentation, testing solutions in some parts of the city but not others, with the design of those trials allowing us to see what works where and under what conditions.

The city will become a laboratory in the scientific sense, with the language of case-control and cohort studies. The messy and complex nature of cities will be turned into an asset, allowing for natural experiments. This in turn requires governance systems that embrace experimentation; politicians who are willing to admit when an experiment has failed and move on to the next experiment; a public that will not penalise those who are brave enough to try something in the face of profound uncertainty and then adjust their decisions when evidence emerges.

Cities will also find an intermediate ground between top-down planning (as in the ‘new towns’ such as Milton Keynes) and bottom-up growth (think of the favelas of South American cities). Bottom-up solutions allow for highly local differences in economies, architectural style, material and energy consumption. However, they can reduce the efficiency of resource use of the city when viewed as a system. The ‘transmission’ of a future city, sitting somewhere between the Mayor’s office and neighbourhood groups, will enable local solutions to remain local while facilitating solutions for the greater good of the city overall.

The challenge is to design a governance structure that enables the efficiency of technocratic, systemic control of planning and development while also allowing citizens to develop solutions that work for their local conditions – a system in which bottom-up and top-down decisions co-exist comfortably.

People

Citizens must become smarter as well. Future technologies will not simply provide data. They will be linked to data analytics that reflect who is taking decisions, why, when and where. The data will be turned into information to guide decisions on (for example) assets, and transmitted in easily understood form to the pinch points where decisions are taken. People will be re-connected to the ebb and flow of material and energy in the city, with much deeper understandings of how their personal actions influence the performance of their city, and how the information around them influences their own decisions on use of materials, energy and services.

Future cities will make increasing use of natural ventilation based on advances in ecology and fluid dynamics. With the transport system dominated by much quieter electric vehicles, windows will be left open, indoor pollution will be reduced and levels of comfort will rise as the heat island effect disappears. Improved walking and cycling paths will bring the benefits of exercise and re-connect people to their neighbourhood activities. Health and well-being will be improved by, rather than be collaterally damaged from, urban life.

These are just three examples of future challenges being explored at the University of Cambridge in collaboration with partners at other universities in the UK and globally, and with public and private sector organisations. Taken together, they are providing the evidence base that will solve the high-level and ground-level challenges, and enable the top-down and bottom-up solutions that are emerging as urban life becomes the norm for a growing global population.

Professor Doug Crawford-Brown is at the Department of Land Economy, Professor Lord Robert Mair is at the Department of Engineering and Professor Koen Steemers is at the Department of Architecture.

Today, we commence a month-long focus on the future of cities. To begin, Doug Crawford-Brown, Robert Mair and Koen Steemers describe the challenges our future cities will face and how mitigation depends on the innovations we create and put in place today. 

Dark City
