
    Sir John, as Senior Bursar of Trinity College, was instrumental in the founding of Darwin College in 1964, and remained actively involved with Darwin for the following half century. 

    With the generous support of Trinity College, the John Bradfield Court will be created as a lasting memorial to his remarkable foresight and outstanding contribution.

    Sir John was a driving force in the establishment of Darwin College as Cambridge's first wholly graduate college in the modern era and its first mixed college.  

    The College, which celebrated its 50th anniversary this year, has grown from its modest beginnings into one of the largest colleges, with a membership of over 700 drawn from every discipline and from around the globe.

    Sir John's bursarial expertise and practicality were made unstintingly available to the College from the first steps in its foundation and throughout the five decades in which he contributed to the life of the College, and especially to the management of its endowment.

    He was elected an Honorary Fellow of the College in 1973.

    The John Bradfield Court will be based around the former tennis court lawn on the eastern part of the Darwin domus, and will involve the creation of a John Bradfield Room (and associated amenities) for multiple uses as a significant addition to the College's facilities. 

    The Court will sensitively incorporate the existing historic buildings facing the river, in particular the Old Granary, the iconic face of Darwin around the world and beloved of generations of its student occupants.

    Trinity College has warmly supported these proposals, and will be generously making a donation of up to £1.5 million towards this project.  Of this amount £0.5m is subject to matching donations to be raised by Darwin College.

    Sir John Bradfield, who died late last year aged 89, is to be commemorated by the naming of a court in his honour in Darwin College.

    The text in this work is licensed under a Creative Commons Licence. If you use this content on your site please link back to this page. For image rights, please see the credits associated with each individual image.



    People in a society are bound together by a set of connections – a social network. Cooperation between people in the network is essential for societies to prosper, and the question of what drives the emergence and sustainability of cooperation is a fundamental one.

    What we know about other people in a network informs how much we are willing to cooperate with them. By conducting a series of online experiments, researchers explored how two key areas of network knowledge affect cooperation in decision-making: what we know about the reputation and social connections of those around us.

    In most social contexts, knowledge about others’ reputation – what we know about their previous actions – is limited to those we have immediate connections with: friends, neighbours and so on.

    But the new study shows that if the reputation of everyone in a network is completely transparent – made common knowledge and visible to all – rather than limited to the individuals who are directly connected, the level of cooperation across the overall network almost doubles. The network also becomes denser and more clustered (so your connections tend to be connected with each other).

    The researchers also tested how transparency of social connections in the group influences cooperation. On its own, common knowledge of social connections had little impact on overall levels of cooperation.

    However, when the researchers combined transparency of social connections with transparency of everyone’s reputation, a community of the most cooperative formed. Members of the community actively removed links from less cooperative individuals and refused their proposals to reconnect.

    Researchers found that belonging to the community of cooperators is profitable. Each interaction in the cooperative community is 23% more beneficial than the equivalent interaction in the less cooperative community.

    The study is published today in the journal PNAS, and was conducted by Cambridge and Oxford researchers. 

    “We show that knowing others’ past actions is the key driver of a high contribution level. Additionally, knowing who is connected to whom matters for the distribution of contributions: it allows contributors to form their own community,” said study author Dr Edoardo Gallo, from the Faculty of Economics and Queens’ College at the University of Cambridge.

    “This finding suggests that in a world where social information is more available, people may increasingly insulate themselves in communities with other like-minded individuals. In the case we examined, belonging to the community of contributors is highly beneficial,” he said.

    The research sheds light on the problem of ‘public good’ provision: what motivates people to make costly actions towards a good that benefits everyone, even those who do not contribute to it. Perhaps the most defining example of ‘public good’ in the modern era is the preservation of our environment.

    Gallo, along with Oxford colleague Chang Yan, devised an online experiment involving people forming connections and playing a ‘game’ of public good provision, also popularly known as the Prisoner’s Dilemma.

    First, the participants in a group can freely form connections with each other which determine the network. After the network is formed, each individual decides whether to cooperate by contributing to a public good that only benefits their neighbours in the network.

    Contributing benefits all the neighbours, but it is costly to the contributor. Not cooperating by not contributing, however, is costless.

    The best possible outcome for the group is for everyone to contribute. However, each individual has an incentive not to contribute: they can gain the benefits from others’ contributions without paying any cost themselves.
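The incentive structure described above can be illustrated with a small sketch. The benefit and cost parameters here are my own choices, not the study's actual payoffs: each contribution costs the contributor c = 1.0 and pays b = 0.75 to every neighbour, so universal contribution beats universal defection, yet each individual still earns more by free-riding.

```python
def payoff(player, network, contributes, b=0.75, c=1.0):
    """Player's payoff: benefits from contributing neighbours, minus own cost."""
    gain = b * sum(1 for j in network[player] if contributes[j])
    cost = c if contributes[player] else 0.0
    return gain - cost

# A triangle network: all three players connected to each other.
net = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
everyone = {0: True, 1: True, 2: True}
free_rider = {0: False, 1: True, 2: True}

print(payoff(0, net, everyone))    # 0.5: better than 0.0 if nobody contributes
print(payoff(0, net, free_rider))  # 1.5: but defecting alone pays even more
```

This is the tension the experiment manipulates: transparency of reputations changes how costly it is, socially, to take the free-riding payoff.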

    The researchers recruited 364 people from crowdsourcing platform Amazon Mechanical Turk to play several rounds of a network formation game followed by a public good game. They investigated four treatments that varied the amount of knowledge subjects have about the network and previous actions of others.

    When the reputation (previous actions) of everyone in the network was rendered transparent, the overall levels of cooperation were almost twice as high as when only the previous actions of immediate connections were known.

    When the social connections for the entire network were also revealed to all, the cooperators formed their own community, leaving those with a history of being uncooperative out in the cold.

    Gallo points out that whether the community formation – the insulating and ostracizing – that occurred in the transparent network is a desirable outcome depends on the nature of the behaviour that leads to the separation.

    “In the experiment, the ‘good’ cooperators ostracize the ‘bad’ defectors, but one can argue the defectors brought it on themselves with their actions. If the same pattern occurred because of another more neutral behaviour, like an accent when speaking a language, then the ostracization might be undesirable for society,” Gallo said.

    An online experiment reveals that the overall level of cooperation in a group almost doubles when the previous actions of all its members are rendered transparent. When all social connections within the group are also made transparent, the most cooperative band together to form their own community – ostracizing the less cooperative.

    This finding suggests that in a world where social information is more available, people may increasingly insulate themselves in communities with other like-minded individuals
    Edoardo Gallo



    A team of astronomers from the University of Cambridge have identified nine new dwarf satellites orbiting the Milky Way, the largest number ever discovered at once. The findings, from newly-released imaging data taken from the Dark Energy Survey, may help unravel the mysteries behind dark matter, the invisible substance holding galaxies together.

    The new results also mark the first discovery of dwarf galaxies – small celestial objects that orbit larger galaxies – in a decade, after dozens were found in 2005 and 2006 in the skies above the northern hemisphere. The new satellites were found in the southern hemisphere near the Large and Small Magellanic Clouds, the largest and best-known dwarf galaxies in the Milky Way’s orbit.

    The Cambridge findings are being jointly released today with the results of a separate survey by astronomers with the Dark Energy Survey, headquartered at the US Department of Energy’s Fermi National Accelerator Laboratory. Both teams used the publicly available data taken during the first year of the Dark Energy Survey to carry out their analysis.

    The newly discovered objects are a billion times dimmer than the Milky Way, and a million times less massive. The closest is about 97,000 light years away, while the most distant is more than a million light years away.

    According to the Cambridge team, three of the discovered objects are definite dwarf galaxies, while others could be either dwarf galaxies or globular clusters – objects with similar visible properties to dwarf galaxies, but not held together with dark matter.

    “The discovery of so many satellites in such a small area of the sky was completely unexpected,” said Dr Sergey Koposov of Cambridge’s Institute of Astronomy, the study’s lead author. “I could not believe my eyes.”

    Dwarf galaxies are the smallest galaxy structures observed, the faintest of which contain just 5000 stars – the Milky Way, in contrast, contains hundreds of billions of stars. Standard cosmological models of the universe predict the existence of hundreds of dwarf galaxies in orbit around the Milky Way, but their dimness and small size make them incredibly difficult to find, even in our own ‘backyard’.

    “The large dark matter content of Milky Way satellite galaxies makes this a significant result for both astronomy and physics,” said Alex Drlica-Wagner of Fermilab, one of the leaders of the Dark Energy Survey analysis. 

    Since they contain up to 99 percent dark matter and just one percent observable matter, dwarf galaxies are ideal for testing whether existing dark matter models are correct. Dark matter – which makes up 25 percent of all matter and energy in our universe – is invisible, and only makes its presence known through its gravitational pull.

    “Dwarf satellites are the final frontier for testing our theories of dark matter,” said Dr Vasily Belokurov of the Institute of Astronomy, one of the study’s co-authors. “We need to find them to determine whether our cosmological picture makes sense. Finding such a large group of satellites near the Magellanic Clouds was surprising, though, as earlier surveys of the southern sky found very little, so we were not expecting to stumble on such treasure.”

    The closest of these pieces of ‘treasure’ is 97,000 light years away, about halfway to the Magellanic Clouds, and is located in the constellation of Reticulum, or the Reticle. Due to the massive tidal forces of the Milky Way, it is in the process of being torn apart.

    The most distant and most luminous of these objects is 1.2 million light years away in the constellation of Eridanus, or the River. It is right on the fringes of the Milky Way, and is about to get pulled in. According to the Cambridge team, it looks to have a small globular cluster of stars, which would make it the faintest galaxy to possess one.

    “These results are very puzzling,” said co-author Wyn Evans, also of the Institute of Astronomy. “Perhaps they were once satellites that orbited the Magellanic Clouds and have been thrown out by the interaction of the Small and Large Magellanic Cloud. Perhaps they were once part of a gigantic group of galaxies that – along with the Magellanic Clouds – are falling into our Milky Way galaxy.”

    The Dark Energy Survey is a five-year effort to photograph a large portion of the southern sky in unprecedented detail. Its primary tool is the Dark Energy Camera, which – at 570 megapixels – is the most powerful digital camera in the world, able to see galaxies up to eight billion light years from Earth. Built and tested at Fermilab, the camera is now mounted on the four-metre Victor M Blanco telescope at the Cerro Tololo Inter-American Observatory in the Andes Mountains in Chile. The camera includes five precisely shaped lenses, the largest nearly a yard across, designed and fabricated at University College London (UCL) and funded by the UK Science and Technology Facilities Council (STFC).

    The Dark Energy Survey is supported by funding from the STFC, the US Department of Energy Office of Science; the National Science Foundation; funding agencies in Spain, Brazil, Germany and Switzerland; and the participating institutions.

    The Cambridge research, funded by the European Research Council, will be published in The Astrophysical Journal.

    Inset image: The Magellanic Clouds and the Auxiliary Telescopes at the Paranal Observatory in the Atacama Desert in Chile. Only 6 of the 9 newly discovered satellites are present in this image. The other three are just outside the field of view. The insets show images of the three most visible objects (Eridanus 1, Horologium 1 and Pictoris 1) and are 13x13 arcminutes on the sky (or 3000x3000 DECam pixels). Credit: V. Belokurov, S. Koposov (IoA, Cambridge). Photo: Y. Beletsky (Carnegie Observatories)

    Astronomers have discovered a ‘treasure trove’ of rare dwarf satellite galaxies orbiting our own Milky Way. The discoveries could hold the key to understanding dark matter, the mysterious substance which holds our galaxy together.

    Earlier surveys of the southern sky found very little, so we were not expecting to stumble on such treasure
    Vasily Belokurov
    The dwarf galaxies are located near the Large and Small Magellanic Clouds, at the centre of the image.



    A pocket-sized fingerprint scanner that links individuals' fingerprints to their health records has been created by a team of students and has the potential for widespread health benefits, according to a new study.

    In an article in the peer-reviewed journal Global Health: Science and Practice, four students - three of them Gates Cambridge Scholars - outline how SimPrints addresses a major problem in developing countries: the inability to uniquely identify clients impedes access to services and contributes to inefficiencies. The scanner works by wirelessly syncing with a health worker's smartphone.

    The students - Daniel Storisteanu, Alexandra Grigore and Toby Norman from the University of Cambridge and Tristram Norman from Royal Holloway, University of London - say the benefits of the SimPrints system include high accuracy and secure identification; fast access to and modification of records, allowing health workers in the field to make better decisions through immediate and reliable access to critical medical information; increased programme accountability, by facilitating the measurement of indicators such as vaccination coverage; and support for civil registration and vital statistics systems, by enabling the tracking of vital events such as births.

    In areas where connectivity in the field is poor, the students say the SimPrints system can access and modify offline health records that have been previously downloaded and are stored in a local database on the phone. Any updates to the health records will then be synced with the central database once Internet connectivity is restored. In order to increase access to charging points and make it easier to replace parts, the SimPrints scanner uses the same BL-5C Nokia batteries commonly used in mobile phones globally.
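The offline-first flow described above follows a common pattern: write edits to a local store, queue them, and replay the queue when connectivity returns. The sketch below is my own illustration of that general pattern, not SimPrints' actual code; the record name is hypothetical.

```python
import json
import sqlite3

class OfflineRecords:
    """Local cache of health records; queues edits until connectivity returns."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS records (id TEXT PRIMARY KEY, data TEXT)")
        self.pending = []  # edits made while offline, awaiting upload

    def update(self, record_id, data):
        # Always write locally first, so the record stays usable offline.
        self.db.execute("REPLACE INTO records VALUES (?, ?)",
                        (record_id, json.dumps(data)))
        self.pending.append((record_id, data))

    def sync(self, upload):
        # Once online, replay queued edits against the central database.
        while self.pending:
            upload(*self.pending.pop(0))

# Usage: queue an edit offline, then sync when a connection is available.
store = OfflineRecords()
store.update("patient-42", {"visits": 3})  # "patient-42" is a made-up example ID
uploaded = []
store.sync(lambda rid, data: uploaded.append(rid))
```

The local write before the queue append is the key design choice: the health worker can keep reading and editing records regardless of whether the upload ever happens in that session.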

    They say challenges remain in fingerprint identification of infants, the elderly, and individuals with worn fingerprints due to manual labour. Current strategies used to prevent the exclusion of services to these individuals include connecting an infant's record to the fingerprints of their legal guardians and enrolling multiple fingerprints for manual labourers and the elderly to increase matching accuracy, as well as using secondary identification tags, such as their name or location, as a back-up.

    Supported by funding from the Saving Lives at Birth innovation grant and ARM Ltd, the SimPrints team are conducting a pilot study in partnership with BRAC and the Johns Hopkins Global mHealth Initiative to test the system with health workers in Gaibandha, Bangladesh. The study is focusing on threshold testing to assess false positive, false negative, and failure-to-enroll rates, and research on performance analysis, usability, acceptability, usage patterns, and key health indicators such as the number of successful antenatal health visits. The technology is also being piloted in Mozambique.

    Professor Alain Labrique, Director of the JHU Global mHealth Initiative, expressed his enthusiasm for the project saying, “This is a very exciting initial investment into a promising technology that addresses a key bottleneck in global health programs. As we struggle to identify ways to strengthen vital registration systems that improve our ability to deliver care to every person who needs it – knowing who someone is and being able to pull up their prior health record is a real game changer for the footsoldiers of global health.”

    The idea for SimPrints came through a global health competition where teams of students had to address different health challenges. SimPrints won the competition and Toby Norman formed a development team which sought funding through various grant initiatives. Last year the team won funding of $250,000 from the Bill and Melinda Gates Foundation’s Saving Lives at Birth competition and $180,000 from Cambridge-based ARM Ltd, whose technology is incorporated in over 95% of all the world's mobile phones. It is up for several awards, including start-up of the year at the Business Weekly Awards and is entered in the UKAID Direct competition whose results will be revealed in mid-March.

    Toby Norman, who is doing a PhD in Management Studies, said: "Despite the incredible potential of mobile health to improve lives, accurately linking a patient to their health records has proved a critical stumbling block in too many projects. This technology can ensure that even the poorest have the right to an identity within health systems."

    A new film of how SimPrints was formed and its current pilot work can be found here.

    A new fingerprint ID device gives healthworkers access to accurate patient records.

    This technology can ensure that even the poorest have the right to an identity within health systems.
    Toby Norman




    A new intensive survey of the Messak Settafet escarpment, a massive outcrop of sandstone in the middle of the Sahara Desert, has shown that stone tools occur “ubiquitously” across the entire landscape: an average of 75 artefacts per square metre, or 75 million per square kilometre.

    Researchers say the vast ‘carpet’ of stone-age tools – extracted from and discarded onto the escarpment over hundreds of thousands of years – is the earliest known example of an entire landscape being modified by hominins: the group of creatures that include us and our ancestral species.

    The Messak Settafet runs a total length of 350 km, with an average width of 60 km. Parts of the landscape are ‘anthropogenic’, or man-made, through build-up of tools over hundreds of thousands of years.

    The research team used this and other studies to estimate the volume of stone tools discarded over the last one million years of human evolution on the African continent alone. They say it is the equivalent of more than one Great Pyramid of Giza per square kilometre of the entire continent (2.1 × 10¹⁴ cubic metres of rock).

    “The Messak sandstone, now in the middle of the vast sand seas of Libya, would have been a high quality rock for hominins to fracture – the landscape is in effect a carpet of stone tools, most probably made in the Middle and Upper Pleistocene,” said Dr Robert Foley, from the Leverhulme Centre for Evolutionary Studies at the University of Cambridge, who conducted the research with colleague Dr Marta Mirazón Lahr.

    “The term ‘anthropocene’ is now used to denote the point at which humans began to have a significant effect on the environment,” said Mirazón Lahr. “The critical time may well be the beginning of the industrial revolution about 200 years ago. Some talk of an ‘early anthropocene’ about 10,000 years ago when forests began being cleared for agriculture.

    “Making stone tools, however, dates back more than two million years, and little research has been done on the impact of this activity. The Messak Settafet is the earliest demonstrated example of the scars of human activity across an entire landscape; the effects of our technology on the environment may be considerably older than previously thought,” Mirazón Lahr said. The study is published today in the journal PLOS One.

    The survey, conducted in 2011, involved randomly selecting one-metre-square plots across parts of the plateau surface. In each square, the researchers sifted through all the stones to identify those showing evidence of modification through hominin activity – such as a ‘bulb of percussion’: a bulge or curved dent on the surface of a stone tool produced by the angular blows of hominin percussion. The average number of artefacts across all sample squares was 75.

    At the simple end, large flakes of stone would have been opportunistically hacked from boulders to be used for cutting or as weapons. At the more sophisticated level, researchers found evidence that specific tools had been used to wedge into the stone in order to split it.

    “It is clear from the scale of activity how important stone tools were, and shows that African hominins were strongly technologically dependent,” said Foley. “Landscapes such as these must have been magnets for hominin populations, either for ‘stone foraging trips’ or residential occupation.”

    The researchers say that if – as seems likely – the success of Stone Age communities depended significantly on tool technology, there would be enormous advantage to knowing, remembering and indeed controlling access to areas with a “super-abundance” of raw materials, such as the Messak Settafet.

    “Hominins may well have become tethered to these areas, unable to stray too far if survival depended on access to the raw materials for tools, and forced to make other adaptations subservient to that need,” said Mirazón Lahr.   

    One way that the environmental impact of hominin tool excavation may have been positive for later humans is through the clusters of small quarrying pits dotted across the landscape (ranging up to 2 metres in diameter, and 50 centimetres in depth).

    These pits would have retained moisture – with surface water still visible today after rains – and the small pools would have attracted game. In many of these pits, the team found ‘trapping stones’: large stones used for traps and ties for game and/or cattle during the last 10,000 years.

    By combining their data with previous extensive surveys carried out across Africa, the researchers attempted to estimate roughly how much stone had been used as tools and discarded during human evolution.

    Although stone tool manufacture dates back at least 2.5 million years, the researchers limited the estimate to one million years. Based on their own and others’ research, they standardised population density (based on extant hunter-gatherers), tool volume, the number of tools used by one person in a year and the amount of resulting debris per tool.

    They estimate an average density of between 0.5 and 5 million stone artefacts per square kilometre of Africa. When converted into an estimate of volume, this is the equivalent of between 42 and 84 million Great Pyramids of Giza.

    Researchers say this would be the equivalent of finding between 1.3 and 2.7 Great Pyramids per square kilometre throughout Africa.
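The arithmetic behind the pyramid comparison can be checked in a few lines. The Great Pyramid's volume and Africa's land area below are my own assumed round figures, not values from the paper; the upper-bound volume estimate roughly reproduces the 84 million and 2.7-per-square-kilometre figures, with the lower bound at half that.

```python
# Rough check of the "Great Pyramids per square kilometre" figures above.
PYRAMID_M3 = 2.5e6   # assumed: approximate volume of the Great Pyramid of Giza
AFRICA_KM2 = 3.0e7   # assumed: approximate land area of Africa

total_m3 = 2.1e14    # discarded-stone volume quoted in the article
pyramids = total_m3 / PYRAMID_M3
per_km2 = pyramids / AFRICA_KM2

print(round(pyramids / 1e6))  # ~84 million pyramid-equivalents in total
print(round(per_km2, 1))      # ~2.8 per square kilometre of the continent
```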

    Researchers used the new survey of the Messak Settafet to estimate that enough stone tools were discarded over the course of human evolution in Africa to build more than one Great Pyramid for every square kilometre of land on the continent.

    Landscapes such as these must have been magnets for hominin populations, either for ‘stone foraging trips’ or residential occupation
    Robert Foley
    Left: A view across a valley in the Messak landscape. Right: A Levallois core, a distinctive type of Middle Stone Age stone tool, recovered on the surface of the Messak




    Over the past 40 years the family has altered in ways that few people imagined back in the days of the Janet and John reading books in which mummy baked and daddy mowed the lawn. In the 1970s, the ‘nuclear’ family (heterosexual married couple with genetically related children) was in a clear majority. Advances in assisted reproductive technologies, a rise in numbers of single parent and step families resulting from divorce, and the creation of families by same-sex couples and single people have changed all that. Today ‘non-traditional’ families outnumber nuclear families in the UK and many other countries.

    When it comes to family, everyone has opinions – but they are just opinions. In her new book, Modern Families: Parents and Children in New Family Forms (published 12 March 2015), Professor Susan Golombok charts the remarkable changes that have taken place in the context of the empirical research that has sought to answer a series of contested questions. Are children less likely to thrive in families headed by same-sex parents, single mothers by choice or parents who conceived them using assisted reproductive technologies? Will children born to gay fathers through egg donation and surrogacy be less likely to flourish than children conceived by IVF to genetically related heterosexual parents?

    Golombok’s contribution to family research goes back to 1976 when she responded to an article in the feminist magazine Spare Rib by conducting an objective study of the development of children of lesbian mothers. Spare Rib had revealed that, both in the UK and USA, lesbian mothers in child custody disputes invariably lost their cases to their ex-husbands. Courts argued that it was not in children’s best interests to be raised by lesbian women, not least because their gender development would be skewed. Golombok, and other researchers, have shown in successive studies that boys are no less masculine and girls no less feminine than boys and girls with heterosexual parents.

    In 2006 Golombok was appointed director of Cambridge University’s Centre for Family Research – a research centre known for its focus on family influences on child development. Modern Families brings together for the first time the growing body of research into the wide range of family forms, undertaken not just in the UK but also in the USA and around the world. Most strikingly, these studies show, again and again, that it is the quality of relationships that matters most to the well-being of families, not the number, gender, sexual orientation or genetic relatedness of the parents, or whether the child was conceived with the assistance of reproductive technology.

    These findings fly in the face of the media hysteria that greeted the birth of the first IVF baby in 1978. Societal attitudes have since moved on. However, deep-seated assumptions of what is ‘right and proper’ continue to colour notions of what a family ‘should’ be in order to raise a well-balanced child. Real families are complex. Golombok is careful to be even-handed in her unpacking (family type by family type) of the issues, the arguments and the relevant research in a field that, by virtue of its human intimacy, demands a high level of sensitivity and diplomacy.

    She also addresses the fact that research into so emotionally charged a field is bound to be imperfect. Parents willing to take part in research are more likely to be those who are functioning well than those who struggle. “It is important to study new family forms to find out what they are really like. Otherwise, all we have is speculation and assumption, usually negative, which simply fuel prejudice and discrimination and are harmful to the children involved,” she says.

    Some findings are counterintuitive, others less so. One of the arguments most famously used against same-sex parenting has been that children may lack models on which to base their own gender identity and behaviour. In a study of play preferences, lesbian mothers chose a mix of masculine and feminine toys but their children chose toys and activities that were highly sex-typed. It seems that parents have little influence over the sex-typed toy and activity preferences of their daughters and sons.

    In studies of children born through assisted reproduction, their mothers have consistently been found to show more warmth and emotional involvement, and less parenting stress, than natural conception mothers.

    “Contrary to the expectation that parents of children born through assisted reproductive technologies would experience difficulties in parenting, research has found them to be highly committed and involved parents, even in donor-conceived families where one or both parents lack a genetic relationship with their children,” says Golombok.

    “A key factor in the positive functioning of children in new family forms appears to be that they are very wanted children. Parents in new family forms often struggle to have children against the odds. Many experience years of infertility before becoming parents; others become parents in the face of significant social disapproval; and still others surmount both hurdles in order to have a child.”

    When surrogacy hit the headlines in 1985 with the case of Kim Cotton, the furore about the payment made to her by the intended parents of the child she was carrying led the UK to outlaw commercial surrogacy. Although attitudes to surrogacy have softened, it remains the most controversial form of assisted reproduction. Studies report that relationships between intended parents and surrogate mothers are generally both enduring and positive. Children born through surrogacy sometimes form relationships with the surrogate’s own children.

    Modern Families offers a measured appraisal of the broader issues that are likely to prove increasingly salient (and debated) as reproductive technologies offer novel routes to the conception of a healthy child and society’s understanding of what constitutes ‘family’ is increasingly extended. Last month’s approval in the UK for the use of a technique called mitochondrial replacement has rekindled accusations of scientists ‘playing God’. Perhaps, in time, society will be more accepting of techniques like mitochondrial replacement, developed primarily to avoid a child being born with a devastating medical condition.

    Two generations ago, same-sex parenting was widely vilified as ‘against nature’. Today, same-sex couples and single people are considered alongside heterosexual couples as prospective adoptive and foster parents. “Attitudes towards same-sex parent families in the UK have changed enormously over a relatively short period of time. In less than half a century we have moved from a situation in which lesbian mothers were ostracised, and gay men were at risk of imprisonment, to a time where same-sex couples can marry, adopt children jointly, and become the joint legal parents of children born through assisted reproductive technologies,” says Golombok.

    “But it’s important to remember that these laws are far from universal. Lesbian and gay relationships remain a criminal offence in some countries of the world with lesbian and gay people still living in fear of their lives.”

    Families aren’t self-contained units. How do parents handle the prejudice they and their children are almost bound to encounter and how do children cope with what are perceived as ‘differences’? Sometimes the attitudes of the wider world make things hard. While children of same-sex parents are just as likely to flourish as those with heterosexual parents, children with lesbian or gay parents have to ‘explain’ their families in a way that their peers don’t. The need to explain can be burdensome.

    “It’s stigmatisation outside the family, rather than relationships within it, that creates difficulties for children in new family forms,” says Golombok.

    Children born through egg or sperm donation grow up with a realisation that they have a biological mother or father who may not live with them. The research covered in Modern Families shows that the question of disclosure – informing children conceived through donated gametes about their genetic parentage – is a foggy one. 

    Legislation that took effect in 2005 gives anyone conceived with donated gametes after that date the right to have, at the age of 18, access to information about the identity of their donor via records held by the UK’s Human Fertilisation and Embryology Authority (HFEA).  Not until 2023 will it begin to be apparent how many donor-conceived young people might seek information about their donors from the HFEA.  If adoption law is any guide, the numbers will not be insignificant.

    As the legislation stands, young people will not know that they have been donor conceived unless they have been told – and only those with this knowledge will have any reason to seek access to the information held about their donor. This situation puts the onus firmly on the parents to make the decision about disclosure. Interestingly, although many parents profess the intention of bringing their children up with the knowledge that they were donor conceived, significant numbers of parents never find the right moment to broach the subject.

    Golombok says: “Parents fear that telling children about their donor conception will jeopardise the loving relationship that has developed between the child and the non-genetic parent. However, our research has shown this fear to be unfounded. Parents who are open with their children when they are young – before they reach school age – say that their children accept this information and are not distressed by it. Finding out in adolescence or adulthood appears to be more difficult to accept.”

    Modern Families is a timely reminder that every family is different – and that families are both fluid and flexible. There is more variation within family types than between them. Many of the newer routes helping people to fulfil their desires to have a family are still in their infancy. Progress is never smooth – and, quite rightly, innovations in conception are bound to be, and need to be, a matter for public debate. Research by Golombok and her colleagues, at Cambridge and beyond, provides a firm and informed basis for discourse to take place. 

    Modern Families: Parents and Children in New Family Forms by Susan Golombok is published on 12 April 2015 (Cambridge University Press).

    Top two inset images from Flickr Creative Commons



    Families come in many guises. Some parents are same-sex; others are single by choice. Growing numbers of children are conceived through assisted reproductive technology. What do these developments mean for the parents and children involved? Professor Susan Golombok’s book, Modern Families, examines ‘new family forms’ within a context of four decades of empirical research.


    The text in this work is licensed under a Creative Commons Licence. If you use this content on your site please link back to this page. For image rights, please see the credits associated with each individual image.



    451 delegates from around the world attended the Centenary celebration of the establishment of the Archaeology and Anthropology degree at the University of Cambridge.

    The degree has attracted a broad range of participants, including royalty, politicians and diplomats, as well as leading academics.

    Events included a reception in the Museum of Archaeology and Anthropology and a formal dinner at Magdalene College, where College Master Dr Rowan Williams, former Archbishop of Canterbury, was guest speaker. 

    Nelson Mandela visited Magdalene to accept an honorary Fellowship ten years ago

    Dr Williams said: “In a world where it is easy to polarize between humanities and sciences, which can sometimes cripple intellectual adventurousness, here is a field of study which has managed to weave them together with colossal creativity.”

    “I know Nelson Mandela was deeply committed to the establishment of this Professorship in his own time and he gave his blessing to early discussions about it. As we look to the future we have many reasons for thinking that this is a natural, proper and worthy memorial for the University and Magdalene College.”

    The University has extensive collections of African archaeology and anthropology.

    Dr Simon Stoddart of the Department said: “In the last hundred years, so much that is profoundly creative and innovative in this field has come from this relatively small group in Cambridge.

    “Today enormous investment in modern infrastructure and cutting edge research facilities is spearheading the continuing success of the Department as a world-leading force.”

    The Department of Archaeology and Anthropology is part of the Faculty of Human, Social and Political Science (HSPS).


    Photo Credit: Howard Guest

    A campaign has been launched to provide a Mandela Professorship in African Archaeology at the University of Cambridge.





    The transfer of genes between organisms living in the same environment is known as horizontal gene transfer. It is well known in single-celled organisms and thought to be an important process that explains how quickly bacteria evolve resistance to antibiotics, for example.

    Horizontal gene transfer is also thought to play an important role in the evolution of some animals, including nematode worms, which have acquired genes from microorganisms and plants, and some beetles that gained bacterial genes to produce enzymes for digesting coffee berries. However, the idea that horizontal gene transfer occurs in more complex animals, such as humans, has been widely debated and contested.

    Lead author Alastair Crisp from the Department of Chemical Engineering and Biotechnology at the University of Cambridge said: “This is the first study to show how widely horizontal gene transfer occurs in animals, including humans, giving rise to tens or hundreds of active 'foreign' genes.  Surprisingly, far from being a rare occurrence, it appears that this has contributed to the evolution of many, perhaps all, animals and that the process is ongoing. We may need to re-evaluate how we think about evolution.”

    The researchers studied the genomes of 12 species of fruit fly, four species of nematode worm, and ten species of primate, including humans. They calculated how well each of their genes aligns to similar genes in other species to estimate how likely they were to be foreign in origin. By comparing with other groups of species, they were able to estimate how long ago the genes were likely to have been acquired.
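    The alignment comparison described above can be sketched as a simple scoring rule: for each gene, compare its best alignment score against sequences from outside the animal kingdom with its best score against other animal sequences, and flag genes that match foreign sequences far better. The function names, bitscores and threshold below are illustrative assumptions for this sketch, not the study’s actual data or pipeline (which used large-scale sequence alignments).

```python
def hgt_index(best_nonmetazoan_score, best_metazoan_score):
    """Difference between a gene's best alignment score to sequences
    outside the animal kingdom and its best score within it.
    A large positive value suggests a foreign origin."""
    return best_nonmetazoan_score - best_metazoan_score


def candidate_foreign_genes(genes, threshold=30.0):
    """Return {gene: index} for genes whose HGT index meets the threshold.

    `genes` maps a gene name to a pair of alignment bitscores:
    (best non-metazoan match, best metazoan match).
    The threshold is an illustrative cut-off, not the study's exact value.
    """
    return {
        name: hgt_index(out_score, in_score)
        for name, (out_score, in_score) in genes.items()
        if hgt_index(out_score, in_score) >= threshold
    }


# Hypothetical bitscores, for illustration only
scores = {
    "geneA": (210.0, 95.0),   # aligns far better outside the animal kingdom
    "geneB": (40.0, 480.0),   # ordinary animal gene
}
print(candidate_foreign_genes(scores))
```

Comparing against several outgroups in this way also allows a rough estimate of when a gene was acquired, since a transfer shared by all primates, say, must predate their common ancestor.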

    In humans, they confirmed 17 previously reported genes acquired through horizontal gene transfer, and identified 128 additional foreign genes in the human genome that had not previously been reported. A number of genes, including the ABO gene, which determines an individual’s blood group, were also confirmed as having been acquired by vertebrates through horizontal gene transfer. The majority of the genes were related to enzymes involved in metabolism.

    In humans, some of the genes were involved in lipid metabolism, including the breakdown of fatty acids and the formation of glycolipids. Others were involved in immune responses, including the inflammatory response, immune cell signalling, and antimicrobial responses, while further gene categories include amino-acid metabolism, protein modification and antioxidant activities.

    The team identified the likely class of organisms from which the transferred genes came. Bacteria and protists, another class of microorganisms, were the most common donors in all species studied. They also identified horizontal gene transfer from viruses, which was responsible for up to 50 more foreign genes in primates.

    Some genes were identified as having originated from fungi. This explains why some previous studies, which only focused on bacteria as the source of horizontal gene transfer, originally rejected the idea that these genes were ‘foreign’ in origin.

    The majority of horizontal gene transfer in primates was found to be ancient, occurring sometime between the common ancestor of Chordata and the common ancestor of the primates.

    The authors say that their analysis probably underestimates the true extent of horizontal gene transfer in animals and that direct transfer between complex multicellular organisms is also plausible, and already known in some host-parasite relationships.

    The study also has potential impacts on genome sequencing more generally. Genome projects frequently remove bacterial sequences from results on the assumption that they are contamination.

    “It’s important to screen for contamination when we’re doing genome sequencing, but our study shows that we shouldn’t ignore the potential for bacterial sequences being a genuine part of an animal’s genome originating from horizontal gene transfer,” adds Dr Chiara Boschetti from the Department of Chemical Engineering and Biotechnology.

    The research was supported by the European Research Council.

    Adapted from a press release from BioMed Central.

    Crisp, A et al. Expression of multiple horizontally acquired genes is a hallmark of both vertebrate and invertebrate genomes. Genome Biology; 12 March 2015

    Many animals, including humans, acquired essential ‘foreign’ genes from microorganisms co-habiting their environment in ancient times, according to research published in the open access journal Genome Biology. The study challenges the conventional view that animal evolution relies solely on genes passed down through ancestral lines and suggests that, at least in some lineages, the process is still ongoing.




    Ladies and gentlemen, it is a real pleasure to be able to speak to you today, and to join with friends and colleagues in celebrating the 650th anniversary of this great university. Six years ago, in Cambridge, we celebrated our 800th year, so I understand the pride that staff and students here in Vienna feel on reaching such an impressive milestone. It is a privilege and a personal pleasure to join you at such a special time.

    I am delighted to be in Vienna, as the history of this great city was held in high regard throughout my childhood. Although I was born in the UK to Polish parents, a single date is inscribed in my memory: the 12th September 1683. This date marked the end of the Siege of Vienna, relieved under the leadership of the Polish King Jan Sobieski. It was then that Vienna took her place at the heart of Europe, and it is a cornerstone in the history of your University. At a more personal level, my grandfather was born in the Austro-Hungarian region of Poland, and served in the Hussars during the First World War.

    Like Cambridge, the University of Vienna has played a key role in academic enlightenment from the 19th century to today. Our ancient institutions share academic leadership and core values exemplified by our amazing alumni, from Newton to Hawking in Cambridge, and Landsteiner to Lorenz in Vienna. Few organisations today have both the history and the foresight to be able to look back over centuries of progress and simultaneously make bold plans for a long-term future.

    Anniversaries are a time for celebrating achievements. They are a time for remembering challenges and threats that have been overcome – and for renewing the values that have sustained us. But they are also a time for looking forward; we must ask ourselves as Universities committed to a long term view: what does the future look like – in a year, in five years, yes, but most importantly in 20 years’ time? What are the challenges and opportunities we can predict and how do we remain fit for purpose to deal with the unpredictable?

    And it is the question of the future of universities such as Vienna and Cambridge – and our role in creating a prosperous Europe for the 21st century – that I want to focus on today.

    The health of Europe and the health of our universities are strongly connected. In fact, I firmly believe that we cannot have one without the other. A strong Europe, benefiting from economic partnership, freedom of movement, and a deep respect for the individual – while respecting national and regional cultures – creates the conditions that universities need to thrive. Given those conditions, universities will continue to do what they have been doing for centuries: contribute to society through research and learning. New medicines will be developed. Tomorrow’s leaders nurtured. Jobs created. But without the right support, and without political leaders who are willing to take a long-term view, I fear universities – and countries – will suffer.

    A little under four months ago, His Holiness Pope Francis described Europe as “elderly and haggard”. I can understand why he chose those words. We are still grappling with the effects of the deepest financial crisis in over 80 years. That pain has led to startling inequalities in employment, health and basic services across the European Union.

    Europe and innovation

    But we need to be optimistic. Europe is also a continent with huge potential: a region packed with many of the world’s best minds, boldest entrepreneurs and most dedicated teachers. All over Europe, and often located close to and associated with major universities, committed men and women are building new economies based on knowledge, discovery and innovation. They must be supported for the sake of our future.

    Innovation – and specifically innovation driven by academic collaboration, technology clusters and exciting relationships between universities and businesses – will play an increasingly important role in driving forward growth and prosperity. This is something the European Commission has been rightly vocal about, and through Horizon 2020 it has backed the target of investing 3% of the EU’s GDP in research and innovation.

    I see on a daily basis what commitment to innovation can do. In Cambridge, our innovation cluster began in 1960 with the simple idea of putting “the brains of Cambridge University at the disposal of industry”. One of its leading protagonists is a son of this city – Hermann Hauser - whose outstanding contribution is recognised by the Hauser Forum: a striking building on our West Cambridge campus that has become a focal point for entrepreneurship and knowledge exchange in our region.

    Today, the result is the Cambridge cluster. In a city with a population of just over 120,000, more than 1,500 technology-based firms have been created, employing some 57,000 people and generating more than £13bn in revenue. The local unemployment rate is just 1.4%. Nor do we always need to look to the USA: Cambridge was rated alongside MIT and Stanford among the top three world leaders in the University Innovation Ecosystem Benchmark Study 2012-14, well ahead of the other 200 universities studied. And who conducted this study? MIT.

    It is not just the Greater Cambridge Region that benefits. Two years ago, the global pharmaceutical firm AstraZeneca announced it would establish a new €400m R&D headquarters in the city. Without the University and its track record in world-leading science and medicine, together with the close proximity of two large hospitals and the environment of the cluster, the company could easily have gone to the US – to the detriment of the UK and European life sciences community, and of our economies.

    Cambridge demonstrates that this kind of success can be achieved in Europe, and similar university-led innovation is evident across the continent. Vienna is known for its contribution to the life sciences cluster. Other examples range from Munich’s high-tech medical cluster and Estonia’s health-tech cluster in Tallinn to Nice’s technology park at Sophia Antipolis.

    Investing in knowledge

    This wealth of intellectual and entrepreneurial capital makes the EU, in the Commission’s own words, the “knowledge production centre of the world… accounting for almost a third of the world’s science and technology production”.

    That sounds impressive, and it is. But it also has a genuine impact on the quality of people’s lives in countries all across Europe. Let’s remember that in the financial crisis of 2008, European countries that invested most heavily in research and innovation were the countries that recovered more quickly.

    But standing still means falling behind. The potent, catalytic power of research and innovation in creating prosperity for regions, countries and citizens is one of the most fought-over commodities of the early 21st century. And the global competition is fierce, from America and from developed Asian economies such as China and South Korea. Unfortunately, the indicators of power and influence in the knowledge economy do not look promising for Europe. China is investing far more in research and innovation, fast catching up with the US, and more researchers from Europe head for America than the other way round.

    In a few moments, I want to make the case for a more significant and autonomous commitment to research funding – both within the EU and nationally – and to outline my own view that the EU is still the body best positioned to support universities in their role as creators of economic growth.

    But before I do, let us look at some of the unique attributes of universities that make us such effective innovators and contributors to growth and social wellbeing.

    Universities – four key attributes

    First, we are very good at taking the long-term view. Today’s leading European universities – including Vienna – have long and rich histories suggesting that they have a resilience that can deal with uncertainty. We pre-date and have survived many economic and political upheavals. How? The value we place on autonomy. Autonomy at the level of individual researchers, who have the freedom to follow their intellectual curiosity, but also autonomy at the institutional level itself.

    At Cambridge, this dates to the 16th century – the right granted to us by Queen Elizabeth I to govern ourselves. We prize this greatly and never take it for granted. It gives us the advantage of a strong focus, and an ability to take bold decisions that support our mission.

    Let me give you an example. Two years ago, the University of Cambridge committed to the largest expansion of our campus in our 800-year history – a project that will ultimately cost us £1bn. Why? Because we are committed to ensuring that the Cambridge of 2040 carries forward our mission in a new and different yet still uncertain world. A world that we do not yet understand, but need to meet head on and adapt to. We as a University are wary of overspecialisation, because it restricts flexibility: who here can predict where the next major discovery, such as DNA, will occur? This demands new, adaptable research and teaching strategies and facilities, as well as homes for both staff and students, on this campus.

    Second, we are focused on excellence in everything we do. At Cambridge – as is the case at nearly all British universities – this starts when we select 17- and 18-year-olds to study for their undergraduate degrees. We seek to encourage the very brightest students to apply for what are fiercely contested places. It doesn’t matter what their backgrounds are, where they go to school, or whether their parents have been to university. We want the brightest students, and those with the most potential. And we work with schools in every part of the country to encourage children to put themselves forward to study at Cambridge – including some who may not have the confidence to do so. Our pursuit of talent extends to our PhD students, our research staff and our most senior professorial positions.

    Third, we value diversity. Diversity of opinion, as well as in our staff and student body, is vital to our success.

    Around 60% of our postgraduate staff come from overseas and we recruit 25% of our research staff from within the EU. And without the EU’s support in creating mobility for international students and early career researchers, our contribution to the world would be severely compromised.

    Finally, we are excellent at creating partnerships. We build alliances – with businesses, hospitals, local authorities, governments and other institutions.

    In December last year the InnoLife Knowledge and Innovation Community, a €2.1bn project supported by the European Institute of Innovation and Technology, was initiated to address the impact of ageing populations and dependence. The scale of the project, not just its funding, is truly impressive. It brings together 144 European companies, research institutes and universities across nine EU countries, including the University of Cambridge, to tackle one of the major challenges that will affect us all. But that scale is exactly what is needed if we are to overcome society’s grand challenges. Put simply, we cannot access the talent, develop the infrastructure or provide the funding at a national level. We need to leverage expertise across multiple geographies and sectors; to develop networks, learning opportunities, and new products and services.

    These attributes make it clear that Universities are at the heart of efforts to create growth, economic stability and wellbeing. Equally, Europe can and should be the region to exert influence on the global stage in the interests of our nation states, institutions and especially individual citizens. It is clear that the interests of Europe and Universities are mutually aligned.

    However, a perfect storm of fiscal short-sightedness, a political debate on immigration that is based on fear and emotion – in my own country at least – and a slow erosion of universities’ autonomy, threaten our future. A threat to Europe is a threat to our Universities – and vice versa. What are those threats?

    Threats to universities, threats to Europe

    Funding under pressure

    First of all, funding. This is always a complex area with multiple priorities, but let me focus on one much publicised issue: the plan to divert €2.7bn out of the Horizon 2020 budget to a new European Fund for Strategic Investment. While I certainly agree that investment in growth and jobs is crucial at a time when the threat of EU disintegration has never been so great, cutting the research budget is not the solution: protecting it is.

    The European Fund for Strategic Investment has been created to address the challenging economic situation we find ourselves in. Yet its impact on Europe’s long-term competitiveness could be very damaging. To understand the potential ramifications, just rewind the logic of the argument that economic growth is dependent on research and innovation. Less investment, less innovation. Less innovation, fewer jobs. Fewer jobs, more hardship for people and communities. Not to mention that Europe as a region will be weakened – right at the moment when its competitors are increasing their investment in knowledge production.

    We have, in Horizon 2020, an excellent, evidence-based framework that was born out of widespread consultation. It is the latest instalment in a long line of research programmes that have, until now, demonstrated a commitment by the European Commission to research and innovation. And yet, a year on from its inception, it has been weakened considerably – a victim of political expediency and false logic. Diverting money from a proven funding model – the success of which stretches back more than 30 years – makes no sense. Be in no doubt: the cuts to research and innovation will damage Europe’s economic future.

    UK membership of the EU

    Second, we must have a sensible debate about the UK’s contribution to, and membership of, the European Union. There are so many reasons why this is important, but let me give you some that are close to my heart: the positive effects of mutual security and cross-European mobility. My parents were victims of European conflict, captured in Eastern Poland at the outbreak of the Second World War and incarcerated in Siberia. After their release, journeys across Asia, and fighting in Italy, they chose to settle in the United Kingdom in 1947, as returning to their native Poland was fraught with peril. Since the EU was formed we have avoided such previously common conflagrations – something the University of Vienna and everyone here must be glad of. It is true to say that, without the UK’s open and positive attitude to immigrants then, I would not be standing here in front of you today.

    So it equally alarms and disappoints me to hear the manner in which immigration is discussed in the UK: in the media and across the political spectrum. It is the language of the ‘other’ – fearful, emotional and reactionary. Migration and freedom of movement have always played a revitalising role in ‘receiving’ economies. It is something that university vice-chancellors know only too well. Nearly a third of our academic workforce is made up of postdoctoral researchers. Highly mobile and ambitious, they are the engine of our research output. Make it difficult or unattractive for them to work with us, and they will take their talents elsewhere. The same is true for international students – the brightest of whom we want to stay in our countries, and in Europe, where they can make positive and long-lasting contributions.

    We cannot let political short-sightedness stand in the way of our continued economic recovery. The UK’s future, as a member of the EU, cannot be decided by an intemperate, ill-defined and ill-informed debate on immigration.

    Let me be clear: I believe the UK’s future lies at the heart of the EU, and that many people in the UK support that too. EU funding to individuals and institutions alone is too important to be sacrificed for short-term electoral success. Much European funding is collaborative and trans-national by nature, and the same projects could not be pursued, or the same level of impact achieved, were the UK contribution to the EU research or higher education budgets invested at a national level. In an era of globalisation, research and higher education are international and outward facing. An exit from the EU would therefore be highly damaging to the UK higher education sector.

    No, the European Union is not perfect, and there is always room for improvement. But without membership of the Council of the European Union or the European Parliament, the UK would lose the power to influence the future direction of the EU, including its hugely important research, innovation and higher education policies.

    For an idea of what this might look like, we need look no further than Switzerland. Following their referendum last year on immigration quotas – carried by the narrowest of margins – the country is no longer able to participate in the Erasmus student exchange programme. Despite the Swiss government and the EU bridging gaps recently, some areas of Horizon 2020 are still off-limits, meaning that the Swiss government has to put in place – and fund – transitional arrangements for those academics unable to access EU research money. This, I must remind you, in a country that was incredibly successful in attracting EU funding for research.

    The current agreement takes Switzerland up to 2016. Beyond that lies uncertainty. And as all academics will tell you, two years is not a sensible timeframe in which to plan and conduct any significant research project.

    I sympathise with colleagues in Switzerland but I hope that they will understand when I say that I don’t want the UK to be in that position. A position that we are certain to be in if we sleepwalk our way through a UK withdrawal from the EU.

    A UK exit from the EU would also be detrimental to our European partners. The UK has much to offer, and in the university context an exit would damage much collaborative research. It would particularly hit ‘grand challenge’ programmes, which seek to tackle global issues such as ageing, energy and climate change, to name a few. And Europe would be without one of its most influential voices and partners at a time when it needs unity of purpose and vision.


    Erosion of autonomy

    The third threat to universities involves the erosion of our autonomy – at both the individual and institutional level.

    When you look at the contribution academic work has made to the progress of humanity, whether in the sciences, or in the arts and humanities, the importance of academic freedom is clear. I am not talking about freedom without accountability. Accountability – in various forms – is important if universities are to retain the trust they need in continuing their work as autonomous, self-governing institutions. I am talking about the freedom of thought and of fundamental, investigator-led inquiry.

    In Cambridge fundamental research led to the discovery of monoclonal antibodies in the 1970s followed by basic research to adapt them to human therapeutic use. In the past two years, two new drugs, developed at Cambridge, have received regulatory approval. The first, Alemtuzumab, is a new treatment for multiple sclerosis. The second, Lynparza, is an anti-cancer medicine.

    I make two key points. The first is that those timescales don’t fit in to short-term, purely government-backed or commercial priorities – but nobody can seriously claim that the investment of time, money and trust placed in the individuals and groups involved has not contributed to society.

    The second is that it is often the cumulative effect of fundamental research; the ongoing development of new knowledge and insight, which is not easily quantifiable, and does not fit in to funding cycles, or research themes, that leads to breakthroughs.

    Universities create the environments where this can happen – but only if their autonomy is valued and protected. Yes, we enjoy significant levels of autonomy already. But there are many manifestations of autonomy, and many ways in which it can be compromised. Governments, funders and policymakers must listen to universities, and support them in supporting society in this most challenging and yet opportunity-filled of centuries.


    So there are important choices to be made by all of those who can shape the future of higher education. Support universities, or put at risk the things that matter most to ordinary people. Jobs. Prosperity. Freedom. Health. Opportunities.

    Put in place strong funding streams that support autonomous intellectual inquiry and grand challenge projects. These, not top-down, government-backed strategies, are the root of innovation.

    Understand that universities such as Vienna and Cambridge are unique institutions, and respect their space and way of working in the 21st century. We value the past, just as we value our long-term future. We do not fit easily into election cycles, or participate willingly in reactionary politics. But we have a great track record, longevity that is the envy of many, and a clear and tested plan for success.

    This is a critical time for Europe. We need to have the confidence in the excellence of our institutions, to think globally, and gain strength from the power of collective endeavour.  Partnerships, whether between nations, institutions or individuals, are not easy. They require effort, commitment and compromise. But the rewards far exceed our ability to act alone.

    So while we celebrate this important milestone in the University of Vienna’s illustrious history, let us also commit to making our future something the next generation can look back on in 50 years’ time and say: “The right choices were made.”

    Speech given on 13 March by Professor Sir Leszek Borysiewicz at the Global Universities and their Regional Impact conference marking the University of Vienna's 650th Anniversary.

    The text in this work is licensed under a Creative Commons Licence. If you use this content on your site please link back to this page. For image rights, please see the credits associated with each individual image.



    A new 3D reconstruction of the skull of one of the earliest four-footed vertebrates – which differs from earlier 2D reconstructions – suggests such creatures, which lived their lives primarily in shallow water environments, were more like modern crocodiles than previously thought.

    The researchers applied high-resolution X-ray computed tomography (CT) scanning to several specimens of Acanthostega gunnari, one of the ‘four-footed’ vertebrates known as tetrapods which invaded the land during one of the great evolutionary transitions in Earth’s history, 380-360 million years ago. Tetrapods evolved from lobe-finned fishes and display a number of adaptations to help them survive on land. 

    An iconic fossil species, Acanthostega gunnari is crucial for understanding the anatomy and ecology of the earliest tetrapods.  However, after hundreds of millions of years in the ground fossils are often damaged and deformed.  No single specimen of Acanthostega preserves a skull that is complete and three-dimensional, which has limited scientists’ understanding of how this key animal fed and breathed – until now.

    Researchers from Cambridge and Bristol University used specialist software to ‘digitally prepare’ a number of Acanthostega specimens from East Greenland, stripping away layers of rock to reveal the underlying bones.  

    They uncovered a number of bones deep within the skull, including some that had never before been seen or described, resulting in a detailed anatomical description of the Acanthostega skull. 

    Once all of the bones and teeth were digitally separated from each other, cracks were repaired and missing elements duplicated.  Bones could then be manipulated individually in 3D space.  Using information from other specimens, the bones were fitted together like puzzle pieces to produce the first 3D reconstruction of the skull of Acanthostega, with surprising results.

    Co-author Dr Laura Porro, formerly of Cambridge’s Department of Zoology and Bristol’s School of Earth Sciences (now at the Royal Veterinary College) said: “Because early tetrapod skulls are often ‘pancaked’ during the fossilization process, these animals are usually reconstructed as having very flat heads.  Our new reconstruction suggests the skull of Acanthostega was taller and somewhat narrower than previously interpreted, more similar to the skull of a modern crocodile.”

    The researchers also found clues to how Acanthostega fed.  The size and distribution of its teeth and the shape of contacts between individual bones of the skull (called sutures) suggest Acanthostega may have initially seized prey at the front of its jaws using its large front teeth and hook-shaped lower jaw.

    The team say that these new analyses provide fresh clues about the evolution of the jaws and feeding system as the earliest animals with limbs and digits began to conquer the land.

    The researchers plan to apply these methods to other flattened fossils of the earliest tetrapods to better understand how these early animals modified their bones and teeth to meet the challenges of living on land.

    “This work is the first stage of a study towards understanding how the earliest tetrapods fed, and that might lead us to what they fed on, and give further clues as to when and how they started to feed on land,” said co-author Professor Jennifer Clack from Cambridge’s Zoology Department.

    Digital models of the original fossils and the 3D reconstruction are also useful in scientific research and education.  They can be accessed by researchers around the world, without risking damage to fragile original fossils and without scientists having to travel thousands of miles to see original specimens. Furthermore, digital models and 3D printouts can be easily and safely handled by students taking courses and by the public during outreach events. The study was published recently in the journal PLOS ONE.

    Adapted from a Bristol University press release.

    Inset image: 3D model showing the complete skull on top with ‘exploded’ views of the upper and lower jaws below.

    The first 3D reconstruction of the skull of a 360 million-year-old near-ancestor of land vertebrates has been created by scientists.

    This work is the first stage of a study towards understanding how the earliest tetrapods fed, and that might lead us to what they fed on
    Jennifer Clack
    Left: 3D model with the jaws open; the individual bones are colour-coded to show the boundaries between them. Right: Original fossil skull of Acanthostega gunnari




    Real-time dynamic holographic displays, long the realm of science fiction, could be one step closer to reality, after researchers from the University of Cambridge developed a new type of pixel element that enables far greater control over displays at the level of individual pixels. The results are published in the journal Physica Status Solidi.

    As opposed to a photograph, a hologram is created when light bounces off a sheet of material with grooves in just the right places to project an image away from the surface. When looking at a hologram from within this artificially-generated light field, the viewer gets the same visual impression as if the object was directly in front of them.

    Currently, the development of holographic displays is limited by technology that can allow control of all the properties of light at the level of individual pixels. A hologram encodes a large amount of optical information, and a dynamic representation of a holographic image requires vast amounts of information to be modulated on a display device.

    A relatively large area of each pixel remains available in which additional functionality can be added by patterning nanostructures (optical antennas), increasing the capacity of pixels in order to make them suitable for holographic displays.

    “In a typical liquid crystal on silicon display, the pixels’ electronics, or backplane, provides little optical functionality other than reflecting light,” said Calum Williams, a PhD student at Cambridge’s Department of Engineering and the paper’s lead author. “This means that a large amount of surface area is being underutilised, which could be used to store information.”

    Williams and his colleagues have achieved a much greater level of control over holograms through plasmonics: the study of how light interacts with metals on the nanoscale, which allows the researchers to go beyond the capability of conventional optical technologies.

    Normally, devices which use plasmonic optical antennas are passive, meaning that their optical properties cannot be switched after fabrication – a capability that is essential for real-world applications. Through integration with liquid crystals, in the form of a typical pixel architecture, the researchers were able to actively switch which hologram is excited, and therefore which output image is selected.

    “Optical nanoantennas produce a strong interaction with light according to their geometry. Furthermore, it is possible to modulate this interaction with the aid of liquid crystals,” said co-author Yunuen Montelongo, a PhD student at the Department of Engineering.

    The work highlights the opportunity for utilising the plasmonic properties of optical antennas to enable multi-functional pixel elements for next generation holographic display technologies.

    Scaling up these pixels would mean a display would have the ability to encode switchable amplitude, wavelength and polarisation information, a stark contrast to conventional pixel technology.

    Researchers from the University of Cambridge have designed a new type of pixel element and demonstrated its unique switching capability, which could make three-dimensional holographic displays possible.

    A large amount of surface area is being underutilised, which could be used to store information
    Calum Williams
    Rendered schematic of holographic pixels in operation showing switching states




    Mycobacterium tuberculosis

    TB, caused by infection with the pathogen Mycobacterium tuberculosis, is a major global public health problem. According to the World Health Organization, in 2013 nine million people fell ill with TB and 1.5 million died from the disease. Over 95% of TB deaths occur in low- and middle-income countries. About one-third of the world's population has latent TB – in other words, they carry the infection but show no symptoms; only around one in ten of infected individuals develop active TB.

    Evidence suggests that an individual’s DNA affects their susceptibility to TB, both in terms of becoming infected and whether the disease progresses from latent to active TB. In order to identify genes that predispose people to TB, an international team of researchers carried out a genome-wide association study (GWAS), comparing the genomes of 5,500 TB patients against those of 5,600 healthy controls. In total, the researchers analysed 7.6 million genetic variants.
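The case/control comparison at the heart of a GWAS can be illustrated with a toy allele-count test. The sketch below is purely illustrative: the variant names and counts are invented, and real studies such as this one use far more sophisticated statistics than a bare Pearson chi-square on a 2x2 table.

```python
# Toy illustration of the core GWAS comparison: for each genetic variant,
# allele counts in TB patients (cases) are compared against healthy
# controls. All variant names and counts below are invented.

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# (minor allele count, major allele count) in cases, then in controls
variants = {
    "variant_A": ((1200, 9800), (1000, 10200)),  # hypothetical counts
    "variant_B": ((1100, 9900), (1090, 10110)),
}

scores = {
    name: chi2_2x2(case[0], case[1], ctrl[0], ctrl[1])
    for name, (case, ctrl) in variants.items()
}
# Variants with a large chi-square statistic (hence a small p-value) are
# candidate susceptibility loci; here variant_A stands out.
```

Repeated across millions of variants, this kind of test is what flags regions such as the ASAP1 locus for follow-up work.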

    The team found that variants of the gene ASAP1 on chromosome 8 affect individuals’ susceptibility to TB. The gene encodes a protein carrying the same name and is highly expressed – in other words, larger amounts of the protein are found – in a particular type of immune cells known as dendritic cells that play a key role in kick-starting the body’s immune response to incoming pathogens.

    The researchers showed that infection with M. tuberculosis leads to the reduction of ASAP1 expression in dendritic cells – but people who have a particular genetic variant in the ASAP1 gene associated with greater susceptibility to TB show stronger reduction of ASAP1 expression after infection than people who have a protective variant of this gene.

    The researchers found that reducing levels of the ASAP1 protein impairs the ability of dendritic cells to move. This explains the previously observed slow migration of dendritic cells infected with M. tuberculosis, and may help the pathogen to evade the immune system, leading to TB.

    “Our study provides a new insight into biological mechanisms of TB,” says Dr Sergey Nejentsev, Wellcome Trust Senior Research Fellow from the Department of Medicine at the University of Cambridge, who led the research. “TB is a major global health problem and the threat of drug-resistance means that we urgently need to develop new ways of fighting back. In future, it may be possible to target immune pathways that involve ASAP1 to design efficient vaccines for TB prevention.”

    The study was supported by the Wellcome Trust, EU Framework Programme 7, European Research Council, the Royal Society and the NIHR Cambridge Biomedical Research Centre.

    The largest genetic study of tuberculosis (TB) susceptibility to date has led to a potentially important new insight into how the pathogen manages to evade the immune system. Published today in the journal Nature Genetics, the study advances understanding of the biological mechanisms involved in TB, which may open up new avenues to design efficient vaccines for its prevention.

    TB is a major global health problem and the threat of drug-resistance means that we urgently need to develop new ways of fighting back
    Sergey Nejentsev
    Mycobacterium tuberculosis




    The research, published today in Nature Neuroscience, is the first to isolate the adaptive forgetting mechanism in the human brain. The brain imaging study shows that the mechanism itself is implemented by the suppression of unique patterns in the cortex that underlie competing memories. Via this mechanism, remembering dynamically alters which aspects of our past remain accessible.

    In a study funded by the Medical Research Council (MRC), researchers monitored patterns of brain activity in the participants using magnetic resonance imaging (MRI) scans while the participants were asked to recall individual memories based on images they had been shown earlier.

    The team from the University of Cambridge, the MRC Cognition and Brain Sciences Unit, Cambridge, and the University of Birmingham, was able to track the brain activity induced by individual memories and show how this suppressed others by dividing the brain into tiny voxels (3D pixels). Based on the fine-grained activation patterns of these voxels, the researchers were able to witness the neural fate of individual memories as they were initially reactivated, and subsequently suppressed.

    Over the course of four selective retrievals the participants in the study were cued to retrieve a target memory, which became more vivid with each trial. Competing memories were less well reactivated as each trial was carried out, and indeed were pushed below baseline expectations for memory, supporting the idea that an active suppression of memory was taking place.
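As a toy illustration of this kind of pattern tracking (not the authors' actual analysis pipeline), the reactivation of a specific memory on a given trial can be scored as the correlation between that memory's template voxel pattern and the voxel pattern measured on the trial. All voxel values below are invented.

```python
# Illustrative sketch of voxel-pattern reactivation scoring: each memory
# leaves a template pattern of voxel activations, and reactivation on a
# later trial is quantified as the Pearson correlation between the
# template and that trial's voxel pattern. Numbers are invented.
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length voxel vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

target_template     = [0.9, 0.1, 0.8, 0.2, 0.7]
competitor_template = [0.1, 0.9, 0.2, 0.8, 0.3]
trial_pattern       = [0.8, 0.2, 0.7, 0.3, 0.6]  # resembles the target

target_score = pearson(target_template, trial_pattern)
competitor_score = pearson(competitor_template, trial_pattern)
# target_score exceeds competitor_score: on this trial the target memory
# is reactivated more strongly than its competitor.
```

Tracked across successive retrievals, a competitor score falling below its pre-retrieval baseline is the kind of signature the study interprets as active suppression.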

    Dr Michael Anderson from the MRC Cognition and Brain Sciences Unit and the Behavioural and Clinical Neurosciences Institute at the University of Cambridge said: “People are used to thinking of forgetting as something passive.  Our research reveals that people are more engaged than they realise in shaping what they remember of their lives.  The idea that the very act of remembering can cause forgetting is surprising, and could tell us more about selective memory and even self-deception.”

    Dr Maria Wimber from the University of Birmingham added: “Forgetting is often viewed as a negative thing, but of course, it can be incredibly useful when trying to overcome a negative memory from our past. So there are opportunities for this to be applied in areas to really help people.”

    The team note that their findings may have implications for the judicial process, for example, in eyewitness testimonies. When a witness is asked to recall specific information about an event and is quizzed time and time again, it could well be to the detriment of associated memories, giving the impression that their memory is sketchy.

    Studying the neural basis of forgetting has proven challenging in the past because the ’engram’ – the unique neural fingerprint that an experience leaves in our memory – has been difficult to pinpoint in brain activity. By capitalising on the relationship between perception and memory, the study detected neural activity caused by the activation of individual memories, giving a unique window into the invisible neurocognitive processes triggered when a reminder recapitulates several competing memories.

    Adapted from a press release by the Medical Research Council.

    Wimber, M et al.  Retrieval induces adaptive forgetting of competing memories via cortical pattern suppression.  Nature Neuroscience; 16 March 2015

    Intentionally recalling memories may lead us to forget other competing experiences that interfere with retrieval, according to a study published today. In other words, the very act of remembering may be one of the major reasons why we forget.

    The idea that the very act of remembering can cause forgetting is surprising, and could tell us more about selective memory and even self-deception
    Michael Anderson




    The underlying mechanism behind an enigmatic process called “singlet exciton fission”, which could enable the development of significantly more powerful solar cells, has been identified by scientists in a new study.

    The process is only known to happen in certain materials, and occurs when they absorb light. As the light particles come into contact with electrons within the material, the electrons are excited by the light, and the resulting “excited state” splits into two.

    If singlet exciton fission can be controlled and incorporated into solar cells, it has the potential to double the amount of electrical current produced from highly energetic blue and green light, capturing a great deal of energy that would normally be wasted as heat and significantly enhancing the efficiency of solar cells as a source of green energy. Until now, however, scientists have not really understood what causes the process, and this has limited their ability to integrate it into solar devices.

    Writing in the journal Nature Physics, a team of researchers shows that there is an unexpected link between the splitting process and the vibration of the molecule that occurs when light comes into contact with the electrons. This vibration is thought to drive the production of two excited electrons, revealing for the first time how singlet exciton fission happens.

    The study was carried out by researchers from the Cavendish Laboratory at the University of Cambridge, and the University of Oxford. As well as solving a hitherto mysterious problem of quantum physics, it potentially provides a basis on which new singlet fission materials could be developed for use in solar cells.

    Dr Andrew Musser, a post-doctoral research associate and former PhD student at St John’s College, University of Cambridge, who co-authored the research paper, said: “We tend to characterise singlet exciton fission as a sort of two for the price of one deal on electrons, because you get twice as much electrical current. The problem is that if we want to implement this in a solar cell, the material needs to be engineered so that it is compatible with all the other components in the device. That means that we need to design a range of materials that could be used, and to do that, we need to understand more about why and how singlet exciton fission occurs in the first place.”

    At its most basic, singlet exciton fission is a product of the fact that when light particles, or photons, come into contact with an electron, the electron is excited by the light and moves. In doing so, it leaves a “hole” in the material’s electronic structure. The electron and the hole are still connected, however, by a state of mutual attraction, and the two together are referred to by physicists as an “exciton”.

    These excitons come in two very different flavours: spin-singlet and spin-triplet, and in rare circumstances, they can convert from one to the other.

    In the natural world, spin-singlet excitons are a part of photosynthesis in plants, because the light absorbed by pigments in the plant generates excitons which then carry energy throughout it. Solar cells imitate this process to generate and drive an electrical current. Conventional solar cells are silicon-based, and the absorption of a single photon leads to the formation of a single, excited electron that can be harvested as electrical current.

    In a handful of materials, however, singlet exciton fission occurs instead. Rather than producing just one spin-singlet exciton, two spin-triplets appear when a photon is absorbed. This offers the tantalising prospect of a 100% increase in the amount of electrical current generated.

    Researchers attempting to solve the puzzle of why the process happens at all, and why only in certain materials, have typically looked at how the electrons behave when they absorb light. In the new study, however, the team instead focused on the fact that when the electrons move in response to the light, the molecule of which they are a part vibrates.

    The team used thin samples of TIPS-pentacene, a semiconducting material in which singlet exciton fission is known to occur. They then fired ultra-fast pulses of laser light at the samples, each pulse lasting just 10 “femtoseconds”, or 10 quadrillionths of a second. The minuscule timescale was necessary so that large numbers of molecules could be vibrated synchronously, enabling the researchers to measure the response of the molecule and the resulting effect on the electrons as light hit the material. The measurements themselves were made using ultra-fast vibronic spectroscopy.

    To the researchers’ surprise, they found that the molecules in the pentacene samples not only vibrated as singlet exciton fission occurred, but also continued to do so afterwards. This implies that the formation of two spin-triplet excitons is stimulated by the vibrations themselves, and the resulting tiny, fast changes in the shape of the molecules.

    “We are fairly confident that this underlies all ultrafast singlet fission,” Dr Akshay Rao, a Research Associate at St John’s College, Cambridge, who led the Cambridge team, said. “The picture that emerges is that when they are excited by light, the intrinsic vibrations drive the development of a new electronic state.”

    By understanding the fundamentals of singlet exciton fission, the study opens up the possibility of designing new singlet fission materials that would enable the process to be effectively integrated into a new generation of highly efficient solar cells. Future research is already being planned in which the group will examine the precise vibrational states that are required for singlet exciton fission to happen, which will further add to this knowledge.

    The work at Cambridge forms part of a broader initiative to harness high tech knowledge in the physical sciences to tackle global challenges such as climate change and renewable energy. This initiative is backed by the UK Engineering and Physical Sciences Research Council (EPSRC) and the Winton Programme for the Physics of Sustainability.

    The causes of a hitherto mysterious process that could enhance the power of solar cells have been explained in a new study.

    If we want to implement this in a solar cell, we need to understand more about why and how singlet exciton fission occurs in the first place.
    Andrew Musser
    "Green Power". While conventional solar cells use silicon, it is possible that other materials could eventually be used that would increase their efficiency.




    Ebola virus

    A number of emerging infectious diseases – including some of the most deadly outbreaks such as Ebola, SARS and HIV – are the result of humans becoming infected with a pathogen that normally infects another species. The amount of harm caused by a pathogen when it jumps into a new species can be very variable, at times causing few, if any symptoms, while at other times causing high levels of mortality.

    A team led by researchers at the Department of Genetics, University of Cambridge, has looked at how the harm a pathogen causes can change following a jump between species. The researchers infected 48 species of fruit fly with an RNA virus, and found that the amount of harm caused by the virus was extremely variable in the new hosts, with some species having relatively benign infections and other species dying rapidly. Most of the deadly emerging diseases that infect humans are caused by RNA viruses.

    The different species of fruit fly shared a common ancestor around 40 million years ago. The relationships between the different species can be examined using an evolutionary ‘tree’ known as a phylogeny. Species that cluster together are the most genetically similar.

    In a study published today in the journal PLOS Pathogens, the researchers show that closely related species show similar levels of virulence when infected with the virus, with the tree of species being a patchwork of closely related groups showing high or low virulence. The level of virulence observed appears to be due to the amount of virus that accumulates in the hosts. The viral load also likely affects the ability of the virus to spread.

    Although the research was carried out in fruit flies, the researchers suggest that the general principle should be applicable across species. A study published in the journal Science in 2014 showed a pattern consistent with such effects in amphibians infected by chytrid fungus.

    “We see such patterns in the wild,” explains Dr Ben Longdon. “The Ebola virus, for example, appears to cause few symptoms in its natural reservoir, the fruit bat, but it is deadly in chimpanzees, gorillas and humans.

    “While there may be no clear rule to predict how deadly a pathogen will be in a new host, a simple rule of thumb may be that if it causes high levels of virulence in any given host species, it will typically cause similar levels of virulence in closely-related hosts. If we see a new disease emerge that causes high levels of mortality in chimpanzees, for example, then it may also be a danger to humans.”

    The research was mainly funded by the European Research Council and the Natural Environment Research Council (NERC).

    Longdon, B et al. The causes and consequences of changes in virulence following pathogen host shifts. PLOS Pathogens. 19 March 2015

    When viruses such as influenza and Ebola jump from one species to another, their ability to cause harm can change dramatically, but research from the University of Cambridge shows that it may be possible to predict the virus’s virulence by looking at how deadly it is in closely-related species.

    A simple rule of thumb may be that if a pathogen causes high levels of virulence in any given host species, it will cause similar levels of virulence in closely-related hosts
    Ben Longdon
    Ebola virus




    In a study published recently in Genome Research, scientists from the University of Cambridge, the Estonian Biocentre, the University of Tartu, Arizona State University and 64 other institutions around the world discovered that the accumulation of material culture during the middle and late stages of the Neolithic, four to eight thousand years ago, is associated with a dramatic decline in genetic diversity in male lineages, whereas female genetic diversity was on the rise.

    It has been widely recognized that a major bottleneck, or decrease in genetic diversity, occurred approximately 50 thousand years ago when a subset of humans left Africa to colonize the rest of the world. Signatures of this bottleneck can be seen in most genes of non-African populations regardless of whether they are inherited from both parents or, as confirmed in this work, only along the father’s or mother’s genetic lines.

    “Most surprisingly to us, we detected another, male-specific, bottleneck during the period of global growth. The signal for this bottleneck dates to a time period when humans in different parts of the world had already for thousands of years been sedentary farmers,” said senior author Toomas Kivisild from the University of Cambridge’s Division of Biological Anthropology.


    Melissa Wilson-Sayers, one of the lead authors from the School of Life Sciences at Arizona State University, added: “Instead of ‘survival of the fittest’ in biological sense, the accumulation of wealth and power may have increased the reproductive success of a limited number of socially ‘fit’ males and their sons.”

    The researchers said studying genetic history is important for understanding underlying levels of genetic variation. Having a high level of genetic diversity is beneficial to humans for several reasons. First, when the genes of individuals in a population vary greatly, the group has a greater chance of thriving and surviving — particularly against disease. It may also reduce the likelihood of passing along unfavorable genetic traits, which can weaken a species over time.

    According to Monika Karmin, co-author from University of Tartu, their findings further stress the differences in human male and female genetic histories which also may have implications related to human health.

    “The striking difference in the number of reproductive males and females in that time window certainly affected the diversity of genes on the male genetic line,” said Karmin. “We know that some populations are predisposed to certain types of genetic disorders. Researchers worldwide are trying to figure out what the underlying genetic structure is, so the fact that the male part of human lineages has gone through a severe bottleneck now also has to be considered.”

    “When a doctor tries to provide a diagnosis when you are sick, you’ll be asked about your environment, what’s going on, and your genetic history based on your family’s health. If we want to understand human health on a global scale, we need to know our global genetic history; that is what we are studying here,” added Wilson Sayres.

    The researchers believe this will be relevant for informing patterns of genetic diversity across whole human populations, including informing about susceptibility to diseases, independently in different populations.

    Adapted from an Arizona State University press release.

    Wealth and power may have played a stronger role than “survival of the fittest”.

    Most surprisingly to us, we detected another, male-specific, bottleneck during the period of global growth
    Toomas Kivisild
    Details from infographic produced by Arizona State University

    The text in this work is licensed under a Creative Commons Licence. If you use this content on your site please link back to this page. For image rights, please see the credits associated with each individual image.



    It's often said that “If you've got nothing to hide, you've got nothing to fear.” This argument, which is often used to justify the total surveillance of society, is based on the curious idea that things done in secret must necessarily be immoral, unethical or illegal.

    It is also based on a reduction of any subtle notions of persona – we portray ourselves differently to others, depending on our relationship, or lack of relationship, with that person. None of us, not even the highest-profile celebrities, truly ‘lives in public.’

    There is a long list of reasons why the ‘nothing to hide’ argument is false, but much of it stems from the power imbalance which occurs when private discussions are revealed to normally unconcerned listeners, whether those listeners are known or unknown.

    What might happen after such an unwitting revelation creates genuine fear, uncertainty and doubt in the person whose information is being revealed. Much of that is due to the unseen power wielded by the great leverage provided by the internet, the NSA, GCHQ or any other member of the surveillance industrial complex.

    Surveillance is toxic: it reduces everyone’s choice of behaviour to that which is acceptable to everyone else, for all time. There are many examples of this, ranging from the mildly embarrassing to the deadly. At the relatively benign end of the spectrum, there are numerous instances of private conversations by public figures being secretly recorded and shared, so that we’re now seemingly at the point where the most innocuous of comments can be used as a weapon if they are overheard by the wrong person. Think also of the numerous instances of public shaming, where people’s lives and careers have been shattered after one poorly-judged tweet. There are also far more serious implications for individuals involved in witness protection programmes, for instance: how can you hide people in a population where everyone is traceable?

    We present ourselves differently to different people: our family, our close friends, our colleagues, our acquaintances, and people that we encounter – all are given different levels of trust, because there are different levels of shared experience. Context matters.

    And because context changes over time, we need to control aspects of information about ourselves as it is seen by others. Indeed, we need to have obsolete data removed from their view – we need the right to change our mind.

    Calling this censorship is false. It is about a generalisation of the public’s ‘right to know’ (or not know, in this case), and about whether an outdated, and likely wrong, impression should be allowed to persist, perhaps more powerfully than a recent one.

    In general, the ‘public’ is a set of people who we can send information to. Most of these, most of the time, do not have a ‘right’ to know. I have a right to share information or not. I can, and should, be the judge of what is a suitable context in a given situation.

    Perhaps we need a new, nuanced model of how freedom of speech and the public’s right to know should work without trumping privacy. Solutions could be based on copyright, custom/convention, or control, but should rest in the hands of the speaker, not the listener, in order to restore the power balance. A suitable combination of technology can tell us if people send our data further than we wish, and data protection laws with real teeth need to be passed, because of the heavily asymmetric power held by security agencies compared with the individual.

    We can also age and remove from sight data that is no longer relevant, such as spent criminal records for old crimes, health records of no public interest, or financial information that is out of date.

    Ideally, enforcement of these solutions should be partly social, but should include suitable independent organisations. GCHQ and other surveillance organisations are in no special privileged relation to most people. We need to incentivise them to do their job right. With great surveillance power comes even greater responsibility. We see reports of daily incidents of abuse of power in many of these organisations. If their culture doesn't change, we need to use more powerful means in order to restore sane behaviour. Google, Facebook and other internet companies aren't exempt either. Money doesn't confer rights, any more than counter-terrorism trumps all other rights.

    Data, just because it can be copied without error, is not necessarily true in the first place, and it can become false, through a change in the law for example. Recall by humans is revisionist, because context changes. Data without context is inherently false.

    If you do care about what’s happening to your data, you want to know where it’s going and what’s being done with it. We want to see systems where people have agency over their data, giving them the ability to allow or prevent certain types of access.

    While it may sometimes seem as if we live in an age where people accept their lack of privacy online, in reality, privacy is something which the vast majority of people value highly. We need to start thinking about how to build win-win scenarios where useful information can be easily shared, but where all of us can hold on to our privacy.

    We live in an age of near-total surveillance. In a talk given earlier this week, Professor Jon Crowcroft argued that total surveillance of society is toxic, and that those who claim that ‘if you’ve got nothing to hide, you’ve got nothing to fear’ are helping perpetuate a massive power imbalance which is doing harm to society.

    Surveillance is toxic: it reduces everyone’s choice of behaviour to that which is acceptable to everyone else, for all time
    Jon Crowcroft
    What are you looking at


  • 03/18/15--03:00: Music in the tree of life

    When Joseph Haydn completed his Symphony No. 95, shortly before its first performance in 1791, he forgot to include the oboes.

    Although Haydn corrected himself — his hastily scrawled ‘flauto’ and ‘fagot’ in the margin are crossed out and replaced by ‘oboe’ — this snapshot of musical history serves as an evocative reminder of human fallibility. It’s impossible to get things right all of the time, not least in activities like the writing or copying of complex musical scores.

    In fact, ‘mistakes’ are quite common in hand-copied texts and music. Before the introduction of printing in the late 15th century, the only means of spreading written culture was for monks and other scribes to replicate manuscripts. For music, the challenges of printing musical notation meant that hand-copying continued well into the 17th and 18th centuries.

    Unwittingly, the careless scribes — and those who deliberately made changes, perhaps to fit their own style or contemporary fashions, or to ‘improve’ on an earlier literary or musical composition — were helping future historians.

    Each time a piece was copied, the change was propagated, and occasionally joined by a fresh change. Scholars use these variations to build family trees of “which was copied from which” that chart the relationship between pieces, helping them to ask questions about the authors, and the history and even the movement of specific texts across continents.

    Two such scholars are Professor Christopher Howe and Dr Heather Windram. Yet, they are neither historians nor musicologists. They are biochemists.

    Howe is known for his work on photosynthesis and the molecular evolution of photosynthetic microorganisms. It turns out that there are important similarities between the evolution of a species and the evolution of anything copied successively — texts, music, languages and even Turkmen carpets.  

    “What’s exciting is how many different things follow this pattern of copying with incorporation and propagation of changes. It’s just a fundamental principle about how the world is,” said Howe.

    “During the evolution of species, mutations that occur when DNA is copied are passed on and, as species diverge, you see their DNA sequences become increasingly different. Evolutionary biologists use what are called ‘phylogenetic’ computational methods to compare the variations and work out the family tree. We realised we could also do this for pieces of literature.”

    His first work in the 1990s was on Chaucer’s Canterbury Tales. He and his collaborators found that the technique worked well and, with Windram, he proposed a new name for the process — ‘phylomemetics’ — “well, it’s easier to say than phylogenetic analysis of non-biological sequence data,” said Howe.

    Very little work, however, has been carried out on music, until now. In what they believe is the first test case of this type of analysis, Howe, Windram and one of the UK’s leading early keyboard players, Professor Terence Charlston from the Royal College of Music, have used their sophisticated algorithms to trace the relationships among a set of 16 copies of the Prelude in G by Orlando Gibbons from the 17th and 18th centuries.

    “Although music is a form of written tradition, we needed to pick up on the relationships between pieces in a different way,” explained Windram. “Music is a guide for performance so I was very conscious that some changes might be silent — like two tied quavers changing to a crotchet — but others might be sufficient to change the sound of the music.”

    In each of the 38 to 39 bars of music (depending on the source), Windram looked for changes in, for example, the note pattern, pitch and rhythm. She knew it should be possible — she had previously worked on a text that had 16,000 points of variation — and, with Charlston’s expertise in 17th-century music, she was able painstakingly to unpick the ‘mutant manuscripts’ and turn the variations into a numerical code.

    The researchers are quick to explain that no computer analysis can replace the expertise of the musicologist, who considers a mass of other information in addition to the patterns of variation, as in the cutting-edge research carried out by Cambridge’s Faculty of Music to capture the creative history of music in the Online Chopin Variorum Edition. Where Howe’s team sees a significant benefit of its evolutionary approach is in the ability to exploit a large amount of complex information and then carry out multiple consecutive analyses very rapidly.

    "Once the coding is complete, you can focus on specific aspects of the music,” explained Windram. “You can run the analysis again and again, perhaps considering only certain categories of changes, or only looking at a section of the music at a time. It’s another tool in the musicologist’s toolbox, and how you interpret what you are seeing is aided by their expertise.”
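The distance-based side of such an analysis can be sketched in a few lines. The variant matrix below is invented for illustration (the source names and values are not the team's data), and simple average-linkage clustering stands in for the more sophisticated phylogenetic software the researchers actually use; the underlying idea — grouping sources by shared variants — is the same:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Hypothetical variant matrix: one row per source, one column per point of
# variation (1 = variant reading present, 0 = reading of the printed edition).
sources = ["Parthenia print", "Copy A", "Copy B", "Copy C"]
variants = np.array([
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1],
])

# Hamming distance = fraction of variation points at which two sources differ.
distances = pdist(variants, metric="hamming")

# Average-linkage clustering builds a simple tree from those distances.
tree = linkage(distances, method="average")

# The first merge joins the two most similar sources: Copy A and Copy B,
# which share the variants in columns 0 and 3.
first_merge = sorted(int(i) for i in tree[0][:2])
print(sources[first_merge[0]], "+", sources[first_merge[1]])  # Copy A + Copy B
```

Re-running the clustering on a subset of columns is what makes it cheap to repeat the analysis "again and again" for particular categories of change, as Windram describes.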

    Just as scholars of texts are using family relationships to ask broader questions about literature, the same can be carried out with music, as Charlston explained: “We can use these techniques to look at the corrections made by a single composer. Take Bach for instance. He was an inveterate reviser of his own music as he performed or taught it, and we can use these techniques to look at the creative process from notation to how it lives through performance.”

    What pleases the researchers is how the tool could also help performance choices in the future: “Current musicology tends to look for a single correct version but, for a piece like Gibbons’s Prelude in G, there may be as many versions as there are people playing it at the time. My ideal would be to suggest to players today that within certain confines they should be seeking to make their own variants,” said Charlston.

    For Howe, the excitement of the approach also lies in what it might do for evolutionary biology. “We can use this technique to identify if a copyist is copying from more than one piece at the same time — called contamination. There is a parallel in biology called lateral gene transfer where unrelated organisms exchange DNA. We now want to see if a program that can handle contamination in text can tell us something about lateral gene transfer in living organisms.”

    The hope is that, like handwriting, musical notation will betray the hand of its composer or copyist. Analysis of variations in ‘mutant’ manuscripts — now carried out more quickly using the team’s phylomemetic tools — will help both to reconstruct musical history and to provide a tantalising glimpse of a creative process evolving.

    Inset image, left to right: Christopher Howe, Heather Windram and Terence Charlston.

    See and hear the visual and audible effects of the variants found between different sources of the same piece, Orlando Gibbons’s Prelude in G:

    Example 1

    Example 2

    Both are from the same passage taken from the middle of Orlando Gibbons’s Prelude in G but from different sources; the variants occur in both hands. Example 1 is the original printed edition and Example 2 is a much later source held in the Fitzwilliam Museum in Cambridge. The right-hand changes concern subtle alterations of pitch (often between f-sharp and f-natural) while the left-hand bars differ mainly in rhythm and texture, but also with occasional changes in pitch. In addition to seeing the differences in music notation, you can hear each passage played by Terence Charlston on a 17th-century-style harpsichord.

    The sound files were recorded by Terence Charlston in February 2015 using a single-manual harpsichord by David Evans built in 2014, after an anonymous French original dated 1676. The instrument is tuned in 1/4-comma meantone at a pitch of a1 = 415Hz.


    Modern scientific methods for mapping the evolution of species are being applied to centuries-old hand-copied music, providing new inspiration for how it is performed.

    Music in the tree of life
    Reconsidering the master of the harpsichord

    If you were a harpsichord player in the 17th century you almost certainly will have played Prelude in G by Orlando Gibbons (1583-1625). In fact, you are likely to have written out your own version, perhaps even ‘by ear’ while listening to your teacher.

    Such was the esteem in which Orlando Gibbons was held that six of his compositions, including  the Prelude in G, were included in the Parthenia – ‘the first musicke that ever was printed for the Virginalls [an instrument of the harpsichord family]’ in 1612-13, to commemorate the marriage of ‘the high and mighty Frederick, Elector Palatine of the Reine: and his betrothed Lady, Elizabeth the only daughter of my Lord the king [James I]’.

    On one copy of the Prelude, now held in The Fitzwilliam Museum in Cambridge, an unknown hand has annotated it with the words ‘This is No 21 in the Parthenia & was a favorite Lesson(s) for upwards of a Hundred years’.

    When Chris Howe, Heather Windram and Terence Charlston decided to try using their phylogenetic analytical techniques on music, Gibbons’s Prelude in G seemed a natural choice to make.

    “Although it was available in print, few would have been able to afford to buy Parthenia, and yet almost all harpsichord players will have come across it,” explained Charlston. “It would have been copied and recopied by hand.”

    The researchers were able to track down 16 copies in existence in museums and libraries across the UK and in Japan and the USA. Although further copies may yet come to light, 16 copies were enough to carry out the analysis as a proof of principle.

    “We wanted to see if the variants segregated into ‘families’ with similar relationships using phylomemetics,” explained Windram. “Even in this test case we found two main families that corresponded to manuscripts copied earlier and later in the 17th century.”

    The choice of Gibbons – who was a chorister in Cambridge’s King’s College between 1596 and 1598, where his brother was master of the choristers – has a personal resonance for Charlston. As Professor of Harpsichord at the Royal College of Music, and an internationally recognised performer, he knows Gibbons’s compositions well. “While he was famous in his own day, he isn’t widely known today,” he said. “This is a wonderful opportunity to reconsider him.”




    In 1999, NATO bombs rained down on Belgrade, hitting various targets including a TV centre just 300 metres from Ivan Rajic’s flat.

    His mother lived in fear that a neighbouring newspaper office would be next on the list. His father, meanwhile, was documenting the situation in a daily diary for a Norwegian newspaper. The series of articles was later published as a book, which Ivan has recently translated into Serbo-Croat. The diary details his father’s views on a war which he believed had much more to do with conflicts between elite groups in Kosovo, Serbia and the West, than with ethnic hatred.

    Ivan’s father, Ljubisa Rajic, died two years ago, but his ideas and influence helped shape Ivan’s political views and his interest in nominally nationalist struggles in Europe today. For his PhD at Cambridge he is focusing on how different levels of regional development within countries can form the basis for independence movements. He is looking in particular at the recent referendum on Scottish independence.

    Ivan [2009], who was born in Belgrade, was brought up in a house full of politics and ideas. His father, who founded the Department of Scandinavian Studies at the University of Belgrade, was always very politically active. He was a leading member of the Association for Yugoslav Democratic Initiative, one of the first democratic parties after the introduction of multi-party democracy in Yugoslavia in the late 1980s.

    Although he became disillusioned by party politics by the mid-1990s, he remained an outspoken public intellectual. Ivan’s mother, a retired meteorologist, shared the same views as his father.

    Ivan’s father was a leftist and an atheist, who saw the Yugoslav wars as a conflict between elites, who used nationalism and religion to further their aims. Ivan was taught not to sing nationalist songs and he refused to cross himself on school trips to monasteries or churches of cultural importance.

    The conflict in the former Yugoslavia affected him in other ways too. During primary school a refugee from Sarajevo came to live with the family. She was from an ethnically mixed marriage, and her father, an ex-army officer, had been tortured.

    Having a father who was a professor and public intellectual and being surrounded at home by around 7,000 mostly non-fiction books gave Ivan a significant advantage in his education at a time when the country’s education system was suffering from economic and political turbulence. Ivan applied to study economics at the University of Belgrade.

    During his studies, he came across How Rich Countries Got Rich and Why Poor Countries Stay Poor by the Norwegian economist Erik Reinert. His father had brought him the Norwegian original on one of his trips to Norway.

    The book was to prove very influential in his thinking on economics. “It was critical of two things. One is the flawed notion that economic development comes spontaneously from free markets. The other is the fact that one school of thought is dominant in economics – neoclassical economics – and that it has narrowed down economics to mathematical modelling,” says Ivan. He translated the book into Serbo-Croat soon after he got it.

    Ivan finished his undergraduate course in 2009 after having already started to work as a junior economist in a think tank in Belgrade. He applied to continue his studies at Cambridge and started his MPhil in Development Studies in 2009, progressing to a PhD. For both he has received the support of a Gates Cambridge Scholarship.

    His first idea for his PhD was to explore some of the inefficiencies in the former Yugoslav economic system, the explanations for which he thought had been oversimplified by some as being down to the fact that it was not capitalist enough.

    “If you grow up in any of the post-Yugoslav republics, you hear all the time how the economy was inefficient, how workers’ self-management was a stupid idea, how it was wrong to industrialise," he says. "But, all that makes little sense. For starters, why is the IMF austerity programme imposed on Yugoslavia in the 1980s excluded from any analysis of Yugoslavia’s economic problems? You also hear all the time how the Yugoslav wars were about ethnicity and religion. But what caused the war was the domestic elites manipulating people into hating and fearing each other, combined with a very healthy dose of Western elites engaging in divide-and-conquer imperialism. So, you have two areas – politics and economics – that are, of course, connected, but severely misunderstood. I thought that perhaps if they were analysed in a better way, a real connection could be seen."

    Ivan’s supervisor suggested he focus on a more current topic. The Scottish referendum was just about to take place. He sees the roots of the latest move towards independence in Scotland as being economic and originating from the neoliberal slant that the UK has taken over the last several decades, which has heavily impacted most areas (Scotland included) outside of the South East.

    “There are many parallels between Scotland and the UK, Catalonia and Spain, Quebec and Canada, Singapore and Malaysia, Yugoslavia, the Soviet Union, and a host of other examples,” says Ivan. “In all those cases, the trail leads back to certain similarities in the political economy of the countries in question. We need to understand how and why such moves towards independence happen. Because, as a number of examples show, the country where I was born included, things can end up in a very, very bad way.”

    Ivan Rajic's interest in the economic roots of independence movements is based on his personal experience of growing up in Belgrade.

    "What caused the war in Yugoslavia was the domestic elites manipulating people into hating and fearing each other, combined with a very healthy dose of Western elites engaging in divide-and-conquer imperialism."
    Ivan Rajic
    NATO bombing of Belgrade




    “Breast implants to carry cancer warning” read the headline of an article on the MailOnline website today. This follows the outcome of a study by the French National Cancer Institute, which reported that there was “clearly a link” between breast implants and anaplastic large cell lymphoma, a relatively rare cancer of the immune system that, outside the context of breast implants, usually affects children and young adults.

    This is not the first health scare to surround breast implants in France: remember the “exploding” Poly Implant Prothèse (PIP) breast implants widely reported in 2010, which led to the closure of the company that manufactured them?

    So, should women who have received a breast implant be concerned and should they consider having them removed?

    I run an active research group at the Department of Pathology, University of Cambridge. Last year, I co-led an independent study, funded by Leukaemia and Lymphoma Research, looking at the risks to women with breast implants of developing implant-associated anaplastic large cell lymphoma (iALCL). Our study, which was published last year in the scientific journal Mutation Research Reviews, was part of the evidence considered by the French National Cancer Institute when preparing its report.

    In our study, we found 71 cases of iALCL worldwide: this means it is an extremely rare occurrence – for every three million breast implant procedures, we estimated that between one and six women would develop iALCL.

    The French study, which considered cases of iALCL diagnosed in France since 2011, found that of approximately 400,000 women with breast implants in France, 18 had been diagnosed with iALCL. This rate is clearly significantly higher than that found by our study – the reason why is not clear, though it may be due in part to better screening. The French National Cancer Institute suggests that this increasing incidence confirms a strong link between breast implants and cancer – as a consequence, some newspaper reports have suggested that the French government may even consider a ban on this cosmetic procedure in France.
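To put the two rates on a common footing, a rough back-of-the-envelope comparison can be made using only the figures quoted above (rough because one rate is per procedure worldwide and the other per woman with implants in France since 2011, so the denominators and time windows differ):

```python
# Worldwide estimate: 1 to 6 cases of iALCL per 3 million implant procedures.
worldwide_low = 1 / 3_000_000 * 100_000
worldwide_high = 6 / 3_000_000 * 100_000

# French figures: 18 diagnoses among roughly 400,000 women with implants.
france = 18 / 400_000 * 100_000

print(f"worldwide: {worldwide_low:.2f}-{worldwide_high:.2f} per 100,000")
print(f"France:    {france:.1f} per 100,000")
```

On these figures the French rate of about 4.5 per 100,000 is some 20 to 100 times the worldwide estimate, which is why the gap needs explaining, whether by better screening or a genuinely stronger link.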

    Whether such drastic action is required will depend on future monitoring of these patients; of the 49 cases reported in our study where information on the patients’ progress was available, there were only five reported deaths. While some patients received chemotherapy and radiotherapy, for many women their lymphoma was put into remission simply through removal of the breast implant and surrounding tissue. This suggests that it is the body’s abnormal immune response to the implant that is causing the cancer. Chemotherapy did not appear to significantly increase a patient’s chances of survival.

    So far, the incidence of iALCL has not been associated with any specific form of breast implant and there have been no links with the PIP prostheses, even though the first woman reported to die from this lymphoma in France was a carrier of a PIP implant.

    On Tuesday, the French Minister of Social Affairs, Health and Women’s Rights, Marisol Touraine, called a press conference at which she sought to calm fears about the risks. According to the Mail, she said: “We do not recommend that women carrying these implants have them removed.” Instead, the ministry has recommended that women remain vigilant for symptoms of iALCL, which include swelling of the breast, sometimes associated with pain and ulceration, and a generally impaired condition around the breast containing the implant. Women who recognise any of these symptoms should consult their GP for advice.

    There are many reasons why women choose to have a breast implant. It’s not all about vanity or a desire for larger breasts, as much of the media unfairly characterises it. It can be about corrective surgery following breast cancer removal, improving one’s self-image and confidence, or correcting uneven breasts, for example.

    I would like to see the UK establish a cancer registry to record and follow-up on all cases of iALCL in the future. There are still many unanswered questions and only by getting to the bottom of this very rare disease will we be able to find alternative ways to treat it. It’s becoming clear that having implants is not itself without risk, but the associated cancer risk is still extremely small. In the meantime, we need a measured debate: alarmist headlines do not help, but only serve to cause unnecessary anxiety.

    Newspaper reports suggest that France may be considering health warnings – or even an outright ban – on breast implants, following a cancer scare. Should women be concerned? Dr Suzanne Turner from the Department of Pathology, University of Cambridge, looks at the truth behind the headlines.

    It’s becoming clear that having breast implants is not itself without risk, but the associated cancer risk is still extremely small
    Suzanne Turner
    Breast implant
