The winners and losers of ocean acidification


Populations of certain types of marine organisms known collectively as the ‘biofouling community’ – tiny creatures that attach themselves to ships’ hulls and rocks – may quadruple within decades, while others may see their numbers reduced by as much as 80%, if the world’s oceans continue to become more acidic, according to new research.

While these animals are primarily viewed by humans as pests – removal of biofouling organisms costs about $22 billion annually – they also play an important role in marine environments, notably as food sources for larger organisms.

The researchers, from the University of Cambridge, British Antarctic Survey and Centro de Ciências do Mar, found that as acidity increased, organisms with shells, such as tube worms, saw their numbers reduced to just one-fifth of their current levels, while animals without shells, such as sponges and sea squirts, doubled or even quadrupled in number. The results, published today (28 January) in the journal Global Change Biology, show how these communities may respond to future change.

There is overwhelming evidence that the world’s oceans are becoming more acidic, and will continue to do so, but there are many open questions about how this will affect marine life. The biofouling community affects many industries, including underwater construction, desalination and shipping.

In the first experiment of its kind, over 10,000 animals from the highly productive Ria Formosa lagoon system in the Algarve, Portugal were allowed to colonise hard surfaces in six aquarium tanks. In half the tanks, the seawater had the normal acidity for the lagoon (pH 7.9), while the other half was set to an increased acidity of pH 7.7. The conditions represented the Intergovernmental Panel on Climate Change’s (IPCC) prediction for ocean acidification over the next 50 years.
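
Because the pH scale is logarithmic, a drop of 0.2 units is a larger chemical change than it may appear. As a rough illustration (our calculation, not part of the study), a few lines of Python show the corresponding rise in hydrogen ion concentration:

# Rough illustration (not from the study): pH is the negative
# base-10 logarithm of hydrogen ion concentration, so a small drop
# in pH means a disproportionate rise in acidity.

def h_ion_concentration(ph: float) -> float:
    """Hydrogen ion concentration in mol/L for a given pH."""
    return 10 ** (-ph)

control, acidified = 7.9, 7.7  # the two tank conditions in the experiment
increase = h_ion_concentration(acidified) / h_ion_concentration(control) - 1
print(f"[H+] rises by {increase:.0%}")  # about 58% more hydrogen ions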

The researchers expected that the calcium carbonate shells would be broken down by the acidic environment, but scanning electron microscopy found that it was the organic ‘glue’ holding the calcium carbonate crystals together that was ‘eaten’ by the acid, causing the shells to disintegrate.

There is a great deal of competition for hard surfaces in the ocean – for example, a clean piece of tile placed in the sea would very quickly become covered in biofouling organisms. When organisms with hard shells, such as tube worms, die, they leave behind their shells, giving the next generation of organisms an additional surface to cling onto.

“These environments are almost like mini-reefs, and if you lose some of that three-dimensional complexity, you reduce the space and opportunities for some types of marine life – it becomes harder for some organisms to take their space,” said co-author Dr Elizabeth Harper of Cambridge’s Department of Earth Sciences.

“Our experiment shows the response of one ‘biofouling community’ to a very rapid change in acidity, but nonetheless shows the degree to which these communities could be impacted by ocean acidification, and to which its associated industries may need to respond,” said Professor Lloyd Peck from British Antarctic Survey (BAS), the paper’s lead author. “What’s interesting is that the increased acidity at the levels we studied destroys not the building blocks in the outer shell itself, but the binding that holds it together. Many individuals perish, and we also showed that their larvae and juveniles are unable to establish themselves and create their hard exoskeletons.”

Peck continues: “Although a pH reduction of 0.2 is less than the IPCC’s ‘business as usual’ scenario of a 0.3–0.4 reduction in ocean surface waters by 2100, it will likely be reached between 2055 and 2070.”

“Taking into consideration the importance of the Ria Formosa lagoon as a natural park, the modified community structure driven by a reduction in pH, while potentially reducing biofouling issues, will almost certainly affect lagoon productivity and impact biodiversity,” said co-author Dr Deborah Power, from Centro de Ciências do Mar.

The study was carried out by scientists from British Antarctic Survey, Centro de Ciências do Mar do Algarve, Instituto Português do Mar e da Atmosfera and the University of Cambridge, and was funded by the Natural Environment Research Council (NERC) and an EU Research Infrastructure Action under the FP7 ‘Capacities’ Specific Programme.

The population balance of some marine ‘pests’ could be drastically changed as the world’s oceans become increasingly acidic.

Image: A Spirorbid worm feeding



Cambridge announced as one of five key partners in new national Alan Turing Institute


The Alan Turing Institute will promote the development and use of advanced mathematics, computer science, algorithms and ‘Big Data’ – the collection, analysis and interpretation of immense volumes of data – for human benefit. Located at the British Library in London, it will bring together leaders in advanced mathematics and computing science from five of the UK’s most respected universities – Cambridge, Edinburgh, Oxford, UCL and Warwick – and their partners.

Dr Cable said: “Alan Turing’s genius played a pivotal role in cracking the codes that helped us win the Second World War. It is therefore only right that our country’s top universities are chosen to lead this new institute named in his honour.

“Headed by the universities of Cambridge, Edinburgh, Oxford, Warwick and UCL - the Alan Turing Institute will attract the best data scientists and mathematicians from the UK and across the globe to break new boundaries in how we use big data in a fast moving, competitive world.” 

The delivery of the Institute is being coordinated by the Engineering and Physical Sciences Research Council (EPSRC), which invests in research and postgraduate training across the UK. The Institute is being funded over five years with £42 million from the UK government. The selected university partners will contribute further funding. In addition, the Institute will seek to partner with other business and government bodies.

Professor Philip Nelson, EPSRC’s Chief Executive, said: “The Alan Turing Institute will draw on the best of the best academic talent in the country. It will use the power of mathematics, statistics, and computer science to analyse Big Data in many ways, including the ability to improve online security. Big Data is going to play a central role in how we run our industries, businesses and services. Economies that invest in research are more likely to be strong and resilient; the Alan Turing Institute will help us be both.”

The University of Cambridge has a strong historical association with Alan Turing, who studied as an undergraduate from 1931 to 1934 at King's College, from where he gained first-class honours in mathematics. Research at Cambridge continues his legacy of groundbreaking work in mathematics and computer science, extending into many areas that he helped pioneer, including mathematical biology, language modelling, statistical inference and artificial intelligence.

Cambridge researchers will play a critical role in shaping the research agenda for the Alan Turing Institute, bringing in world experts in mathematics, statistics, computer science and information engineering, and linking to the research challenges of the future, such as the study of huge genomic datasets, or the development of the world’s largest radio telescope, the Square Kilometre Array.

In 2013, the University created Cambridge Big Data, a cross-School strategic initiative bringing together experts in a number of themes. These range from the fundamental technologies of data science, to applications in disciplines as diverse as astronomy, clinical medicine and education, as well as experts exploring the ethical, legal, social and economic questions that are critical to making data science work in practice. The research developed at the Alan Turing Institute will link to the first of these themes, allowing for a rich exchange of ideas within a broad researcher community, and a joined-up and multidisciplinary approach to the big data challenges of the future.

Professor Paul Alexander, who heads Cambridge Big Data, said:  “Modern technology allows for the collection of immense volumes of data, but the challenge of converting this ‘Big Data’ into useful information is enormous. The Alan Turing Institute is an immensely exciting opportunity for the collective expertise of Cambridge and its partners to rise to this very important challenge and make a huge contribution to the future success of the UK economy, our ability to provide health and societal benefits and the ability of British universities to remain at the cutting edge of research.”

The University of Cambridge is to be one of the five universities that will lead the new Alan Turing Institute, the Rt Hon Dr Vince Cable, Secretary of State for Business, Innovation and Skills, announced today.




Previously un-exhibited art by 15 Royal Academicians goes on display at Wolfson College to mark the College’s 50th anniversary


‘The Royal Academy at Wolfson’ is an extraordinary exhibition, curated by Anthony Green RA, which includes paintings, prints, drawings and small sculptures that have been lent to the College by the artists. Many of these works have never been exhibited before and, as a group, they represent some of the finest art being produced in Britain today.

With free admission, the exhibition will be open to the public on Saturday and Sunday afternoons 2–4pm from 31 January to 19 December 2015. It will feature works by the following Royal Academicians: Ivor Abrahams, Eileen Cooper, Gus Cummins, Anthony Eyton, Peter Freeth, Paul Huxley, Timothy Hyman, Neil Jeffries, Sonia Lawson, Ben Levene, Christopher Le Brun, Chris Orr, Mick Rooney, Anthony Whishaw and Anthony Green himself.

The exhibition is the first in an outstanding programme of modern and contemporary art events designed to celebrate Wolfson College’s 50th anniversary. One of the most cosmopolitan of the 31 Colleges in the University of Cambridge, Wolfson is a leading academic research institution with fellows, postgraduate students, and mature undergraduates from 80 countries around the world. Distinguished by its modernity and diversity, but also by its informality and egalitarianism, Wolfson was the first College to admit men and women as both students and fellows. Professor Sir Richard Evans FBA, Regius Professor of History until 2014 and currently Provost of Gresham College, London, is the fifth President of the College.

‘The Royal Academy at Wolfson’ will also be accompanied by the display of a significant and unique collection of pottery from renowned potter Bernard Leach CBE. This collection was recently donated to the College by his colleagues and students, and will feature alongside works by other notable potters, including: Richard Batterham, Clive Bowen, Amanda Brier, Alan Brough, Ray Finch, John Leach and Bill Marshall.


Admission is free via the Porter’s Lodge, Barton Road.

For further information, please contact finearts@wolfson.cam.ac.uk.

For public enquiries, please contact the Porter’s Lodge on 01223 335900 or porters@wolfson.cam.ac.uk.

Twenty-eight exceptional works by 15 Royal Academicians will be on display at Wolfson College, Cambridge throughout 2015 as part of a programme of celebrations to mark Wolfson’s 50th anniversary.

Image: Eileen Cooper RA, Dwelling, 2009, oil on canvas



School pupils' chance to see what life as a vet is all about


The 2015 VetCam course has opened for bookings and thanks to continued funding from the University’s Widening Participation Project Fund, a number of bursaries are available to help students from under-represented groups cover the cost of the course. 

The course will take place on 30–31 March.

The two-day residential course is designed to give Year 12 students an insight into studying veterinary medicine in general, and also to show the unique benefits of the Cambridge course through a mixture of practical demonstrations, lectures, and tours of pre-clinical departments and the Queen’s Veterinary School Hospital.

Attendees stay overnight at Queens’ College for a taste of life as a Cambridge undergraduate. 

One of last year’s bursary awardees, Amy Sansby, said: “I had a brilliant experience at VetCam, thank you for the opportunity to attend. VetCam gave me a chance to speak to teaching staff and current vet students and get answers to all my questions about the profession. I left feeling inspired.”

Applications need to be made by the student’s school or college.

For more details regarding the bursaries, please contact Katheryn Ayres at kma28@cam.ac.uk.

For details on how to book and pay for a place on this year’s VetCam course, please contact tutorial.office@vet.cam.ac.uk

The 2015 VetCam course offers Year 12 students a taste of life as a Cambridge undergraduate.




Rightmove: a T-rex called Clare finds a perfect home


There was never any debate about her name: it had to be Clare.  She is a metal sculpture of a T-rex, and she made a spectacular debut as the centrepiece for last summer’s Clare College May Ball which had a ‘primordial’ theme.

The puzzle of what to do with the sculpture once the celebrations were over was solved when the Sedgwick Museum of Earth Sciences expressed an interest in homing her.

The morning after the party, the model T-rex was trundled through the centre of Cambridge by a posse of dinosaur-attendants clad in high-vis tabards. She has seen out the winter hidden in a corner of the Downing Site.   

Now Clare has found a permanent home outside the entrance to the historic Sedgwick Museum, the oldest geological museum in the world, where she is likely to become one of Cambridge’s best loved landmarks.

Clare was formally welcomed to her new stamping ground earlier today (30 January 2015). The unveiling was attended by Clare’s creator, the sculptor Ian Curran, along with members of Clare College and the Department of Earth Sciences.

Sedgwick Museum director Ken McNamara said: “The sculpture will add to the excitement experienced by visitors as they arrive to see our unique collection. It includes thousands of fossils, including dinosaur remains and a life-size Iguanodon.”

The model is a half-size artistic representation of the iconic T-rex, a species which lived 66-68 million years ago. It was made by Curran in his Doncaster workshop and travelled down the A1 to Cambridge on the back of a lorry.

Curran said: “It's tremendous to see one of my sculptures in such a prestigious location. I'm thrilled that the Sedgwick Museum has her on display where she will be seen by so many more families.

“Normally my work is displayed on my front lawn for the benefit of local children and the grandparents who bring them, so this wider audience is an absolute thrill.”

 

Cambridge gained a new landmark when Clare, a sculpture of a T-rex, was unveiled at the Sedgwick Museum of Earth Sciences earlier today. 

Image: A T-rex called Clare settles into her new home outside the Sedgwick Museum



Imaging: interpreting the seen and discovering the unseen


We humans are visual creatures. An image aims to depict reality to us, but also invokes our imagination. It speaks more than a thousand words. We live in a world saturated with images and images allow us to see this world – from brain cells to distant galaxies – as never before. Advanced imaging techniques enable us to ask new research questions, break down disciplinary boundaries and extend our knowledge across an immense variety of fields. 

Scientists have always used images of various kinds – drawings, pictures, photographs and videos, to name a few – to make discoveries, describe processes in nature, catalogue and archive specimens, and illustrate observations and ideas. In scientific discoveries, images are often the scientific finding itself.

A work of art, an image in itself, can be analysed and its making understood with the help of advanced scientific imaging techniques. New images are created by this analysis, and the worlds of art and science increasingly overlap.

Scientific imaging has never been as exciting as it is now, with new technologies emerging all the time. The resolution limit in light microscopy, which had seemed unbreakable, is now less than 100 nanometres. These advances in super-resolved fluorescence microscopy were recognised in  2014 by the awarding of the Nobel Prize in Chemistry to Eric Betzig and W. E. Moerner in the USA and Stefan Hell in Germany.
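
For context, the seemingly unbreakable barrier mentioned here is the Abbe diffraction limit. A quick calculation (ours, using textbook values rather than figures from the article) shows why roughly 200 nanometres was long considered the floor for light microscopy:

# Illustrative only: the classical Abbe diffraction limit that
# super-resolved fluorescence microscopy circumvents. The wavelength
# and numerical aperture are textbook values, not from the article.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable separation, d = wavelength / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (550 nm) through a good oil-immersion objective (NA 1.4):
print(f"{abbe_limit_nm(550, 1.4):.0f} nm")  # ~196 nm, well above 100 nm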

Cambridge is home to a wealth of research which includes developing tools for the acquisition, visualisation, automated processing and analysis of images. In January 2014, a group called IMAGES was formed to connect, present, discuss and advance research on or with images. It brings together leading academics from across the disciplines, as well as international experts and research-led industries that work on pioneering imaging technologies and analytical algorithms.

The complex process that runs from acquiring images to their interpretation and problem-solving applications requires partnerships spanning many kinds of expertise. Different problems and image applications often call for similar methodologies and interpretative strategies, and cross-disciplinary collaboration is needed to analyse the image information that is not explicit in machine-generated data.

Mathematicians, physicists, chemists and biologists work together to develop new instruments, chemical dyes and model systems to interrogate biological questions with more precision and at greater resolution.

At CRUK Cambridge Institute and the Department of Applied Mathematics and Theoretical Physics, microscopists and mathematicians are developing new ways of tracking cells and analysing the effect of cancer drugs in tissues and whole organisms.

At the Cambridge Biomedical Campus, clinicians use magnetic resonance, positron emission tomography, and acoustic imaging as tools for looking into our internal organs. The challenge here is to produce a high quality description of patients and their ailments from data that is necessarily limited by the capability of scanners and the need to minimise exposure to harmful radiation.

Mathematicians and engineers create automated image-processing and analysis algorithms that extract meaningful, essential information from often large-scale, high-dimensional and imperfect image data.

The importance of reliable image analysis extends to astronomy, the arts, seismology, surveillance and security. Image de-noising and restoration algorithms are also essential components of any further image analysis pipeline, underpinning tasks such as object segmentation and tracking, pattern recognition, and indeed any quantitative or qualitative analysis of image content.
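
As a simplified picture of why restoration comes first (a toy sketch of ours, not a production pipeline), consider removing noise from a synthetic image before segmenting it:

# Toy sketch, not a production pipeline: de-noise first, then
# segment. The synthetic image stands in for real microscopy or
# astronomy data.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.poisson(lam=3, size=(128, 128)).astype(float)
image[40:60, 40:60] += 20  # one bright 'object' buried in noise

denoised = ndimage.median_filter(image, size=5)  # de-noising step
labels, count = ndimage.label(denoised > 10)     # segmentation step
print(count)  # the injected object is recovered as a single region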

At the Fitzwilliam Museum and the Hamilton Kerr Institute, spectroscopy methods underpin the non-invasive analyses of artists’ materials and techniques, informing the conservation and cross-disciplinary interpretation of paintings, illuminated manuscripts and Egyptian papyri. The research unites imaging scientists, chemists, physicists, mathematicians, biologists, conservators, artists and historians. Thanks to cutting-edge imaging techniques, we can now see art works as never before, uncovering centuries-long secrets of their production and ensuring their preservation into the future.

The IMAGES group aims to stimulate new inquiries and focused dialogues between these many disciplines across the sciences, arts and humanities by providing them with a platform for communication. As collaborations across the University show, art and science are not disparate, but complementary ways of seeing the world. Both depend on the subtle observations of life and attempt to interpret the seen and discover the unseen.

Dr Stella Panayotova (Fitzwilliam Museum Cambridge), Dr Stefanie Reichelt (Cancer Research UK Cambridge Institute) and Dr Carola-Bibiane Schönlieb (Department of Applied Mathematics and Theoretical Physics) lead IMAGES

From visualising microscopic cells to massive galaxies, imaging is a core tool for many disciplines, and it’s also the basis of a surge in recent technical developments – some of which are being pioneered in Cambridge. Today, we begin a month-long focus on research that is exploring far beyond what the eye can see, introduced here by Stella Panayotova, Stefanie Reichelt and Carola-Bibiane Schönlieb.



Michelangelo bronzes discovered


They are naked, beautiful, muscular and ride triumphantly on two ferocious panthers. And now the secret of who created these magnificent metre-high bronze male nudes could well be solved. A team of international experts led by the University of Cambridge and the Fitzwilliam Museum has gathered compelling evidence that these masterpieces, which have spent over a century in relative obscurity, are early works by Michelangelo, made just after he completed the marble David and as he was about to embark on the Sistine Chapel ceiling.

If the attribution is correct, they are the only surviving Michelangelo bronzes in the world.

They are a non-matching pair, one figure older and lithe, the other young and athletic. Long admired for the beauty of their anatomy and powerful expressions, they were first attributed to Michelangelo when they appeared in the collection of Adolphe de Rothschild in the 19th century. But, since they are undocumented and unsigned, this attribution was dismissed, and over the last 120 years the bronzes have been attributed to various other talented sculptors.

That changed last autumn when Prof Paul Joannides, Emeritus Professor of Art History at the University of Cambridge, connected them to a drawing by one of Michelangelo’s apprentices now in the Musée Fabre, Montpellier, France.

A Sheet of studies with Virgin embracing Infant Jesus, c.1508, is a student’s faithful copy of various slightly earlier lost sketches by Michelangelo. In one corner is a composition of a muscular youth riding a panther, which is very similar in pose to the bronzes, and drawn in the abrupt, forceful manner that Michelangelo employed in designs for sculpture. This suggests that Michelangelo was working up this very unusual theme for a work in three dimensions.

This revelation triggered further art-historical research with input from a number of international experts. The bronzes were compared with other works by Michelangelo and found to be very similar in style and anatomy to his works of 1500–1510, a date confirmed by the preliminary conclusions of initial scientific analysis. Interdisciplinary research is continuing, and its findings and conclusions will be presented at an international conference on Monday 6 July 2015.

It is a common misconception that Michelangelo sculpted almost exclusively in marble and never in bronze. However, it is historically verifiable that he was associated with bronze throughout his 75-year-long career. Michelangelo is documented as having made a two-thirds life-size David for a French grandee, and an over twice life-size statue of Pope Julius II. Sadly, neither survives – the first disappeared during the French Revolution; the second was melted down for artillery less than three years after it was made.

Dr Victoria Avery, Keeper of the Applied Arts Department of the Fitzwilliam Museum, commented: “It has been fantastically exciting to have been able to participate in this ground-breaking project, which has involved input from many art-historians in the UK, Europe and the States, and to draw on evidence from conservation scientists and anatomists. The bronzes are exceptionally powerful and compelling works of art that deserve close-up study – we hope the public will come and examine them for themselves, and engage with this ongoing debate.”

The bronzes have gone on display in advance of the Fitzwilliam Museum’s bicentenary in 2016 and before its next major exhibition, Treasured Possessions, the result of an interdisciplinary University research project revealing hidden items in the Museum’s reserves. The bronzes and a selection of the evidence are now on display in the Italian galleries at the Fitzwilliam Museum, Cambridge, from 3 February until 9 August 2015. Admission to the Fitzwilliam is free.


Researchers behind the discovery
Prof Paul Joannides - Emeritus Professor of Art History, University of Cambridge
Dr Victoria Avery - Keeper of Applied Arts, Fitzwilliam Museum
Dr Robert van Langh - Head of Conservation, Rijksmuseum
Arie Pappot - Junior conservator of metals, Rijksmuseum
Professor Peter Abrahams – Clinical Anatomist, Warwick Medical School, University of Warwick

Lead consultants
Martin Gayford - Art critic and author of Michelangelo: An Epic Life (2013)
Dr Charles Avery – Independent art historian, Cambridge (UK)
Dr Andrew Butterfield – Author of The Sculptures of Andrea del Verrocchio and many other publications on Renaissance art

It was thought that no bronzes by Michelangelo had survived - now experts believe they have found not one, but two - with a tiny detail in a 500-year-old drawing providing vital evidence.

Image: Nude bacchants riding panthers, c.1506–08



Protein threshold linked to Parkinson’s Disease


The circumstances in which a protein closely associated with Parkinson’s Disease begins to malfunction and aggregate in the brain have been pinpointed in a quantitative manner for the first time in a new study.

The research, by a team at the University of Cambridge, identified a critical threshold in the levels of a protein called alpha-synuclein, which normally plays an important role in the smooth flow of chemical signals in the brain.

Once that threshold is exceeded, the researchers found that the chances that alpha-synuclein proteins will aggregate into potentially toxic structures are dramatically enhanced. This process, known as nucleation, is the first, critical step in the chain of events that scientists think leads to the development of Parkinson’s Disease.

The findings represent another important step towards understanding how and why people develop Parkinson’s. According to the charity Parkinson’s UK, one in every 500 people in the UK – an estimated 127,000 in all – currently has the condition, but as yet it remains incurable.

Dr Celine Galvagnion, a Research Associate at St John’s College, University of Cambridge, and the lead author of the study, said: “Finding a cure for Parkinson’s depends on our ability to understand it. For the first time, we have been able to provide a mechanistic description of the initial, molecular events that can ultimately result in the development of the disease.”

The study suggests that the likelihood of an individual developing Parkinson’s is related to a delicate balance between the protein, alpha-synuclein, and the number of synaptic vesicles in their brain. Synaptic vesicles are tiny, bubble-like structures that help to carry neurotransmitters, or chemical signals, between nerve cells. The cell constantly reproduces the vesicles to enable this.

Under normal circumstances, alpha-synuclein plays a pivotal role in the release of these neurotransmitters from one nerve cell to another. It does this by attaching itself to a thin membrane around the synaptic vesicle, known as the lipid bilayer.

When alpha-synuclein binds to lipid vesicles, it folds into a helical shape in order to perform its function. In certain circumstances, however, the proteins on the vesicle surface misfold and stick together. Once this nucleation process has begun, there is then a danger that free protein molecules within the brain cell will come into contact with the misshapen nucleus on the lipid surface. As these combine, they form thread-like chains, called amyloid fibrils, and start to become toxic to other cells. These amyloid deposits of aggregated alpha-synuclein, also known as Lewy-bodies, are the hallmark of Parkinson’s Disease.

Previous research has suggested that overexpression of alpha-synuclein in the brain is associated with the onset of Parkinson’s, and that the interaction of alpha-synuclein with the lipid bilayer could play a role in modulating the development of the disease, but until now it was not clear why this might be the case.

In the new study, the research team simulated the process by which the proteins attach themselves to the vesicles by creating synthetic vesicles in the lab. These were then incubated with different quantities of alpha-synuclein.

The results showed that when the ratio of protein molecules to vesicles exceeds a level of about 100 (a level 10 times higher than that typically found in a human brain), the proteins attaching themselves to the lipid bilayer around a vesicle are too highly concentrated and bunch together on the surface. As a result, the chances of proteins nucleating on the lipid surface are, remarkably, at least a thousand times higher than the chances of two proteins randomly binding together in solution.
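
In other words, the experiment points to a simple ratio test. The sketch below (ours, with hypothetical input numbers) captures the threshold logic reported in the study:

# Minimal sketch of the threshold reported in the study: nucleation
# becomes far more likely once the ratio of protein molecules to
# vesicles exceeds roughly 100. The input numbers are hypothetical.

NUCLEATION_RATIO_THRESHOLD = 100  # protein molecules per vesicle

def nucleation_prone(protein_molecules: float, vesicles: float) -> bool:
    """Flag conditions where surface crowding favours nucleation."""
    return protein_molecules / vesicles > NUCLEATION_RATIO_THRESHOLD

print(nucleation_prone(1_000_000, 100_000))  # ratio 10, typical brain: False
print(nucleation_prone(1_000_000, 5_000))    # ratio 200, crowded: True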

“It became clear in our experiment that there are specific conditions in which you can see the aggregation happening, and other conditions in which you don’t,” Galvagnion added. “It turns out that the ratio determines the ability of alpha-synuclein proteins to nucleate. This provides us with a likely explanation of how the initial steps leading to Parkinson’s occur.”

Together, the results provide, for the first time, a mechanistic description of the key role that membrane interactions can play in the initiation of neurodegenerative diseases, including Parkinson’s Disease.

The full report appears in the new issue of Nature Chemical Biology.

Excess quantities of a specific protein in the brain dramatically increase the chances of so-called “nucleation events” that could eventually result in Parkinson’s Disease, according to a new study.

Image: Detail of an atomic force microscopy image showing amyloid fibrils of alpha-synuclein grown from synthetic lipid vesicles



Can the Revolution in Kurdish Syria succeed?


Since the descent into civil war in Syria, revolutionary forces have seized control of the Kurdish region of Rojava. The mainstream media has been quick to publicise who the revolutionary forces in Rojava are fighting against – the brutality of Islamic State (IS) – but what they are fighting for is often neglected. In December 2014, we had the chance to visit the region as part of an academic delegation. The purpose of our trip was to assess the strengths, challenges and vulnerabilities of the revolutionary project under way (read the Delegation’s Joint Statement here).

Rojava is the de facto autonomous Kurdish region in northern Syria. It consists of three cantons: Afrîn in the west, Kobanê in the centre, and Cizîre in the east. It is, for the most part, isolated and surrounded by hostile forces. However – despite the brutal war with IS, the painful embargo of Turkey and the even more painful embargo of Barzani and his Kurdish Regional Government (KRG) in Iraq – systems of self-governance and democratic autonomous rule have been established in Rojava, and are radically transforming social and political relations in an emancipatory direction.

As Saleh Muslim Mohamed, co-president of the Democratic Union Party (PYD) representing the independent communities of Rojava, explained in an interview in November 2014: “[We are engaged in the construction of] radical democracy: to mobilize people to organize themselves and to defend themselves by means of people’s armies like the People’s Defense Unit (YPG) and Women’s Defense Unit (YPJ). We are practicing this model of self-rule and self-organization without the state as we speak. Democratic autonomy is about the long term. It is about people understanding and exercising their rights. To get society to become politicized: that is the core of building democratic autonomy.”

At the forefront of this politicization is gender equality and women’s empowerment, with equal representation and active participation of women in all political and social circles. “We [have] established a model of co-presidency – each political entity always has both a female and a male president – and a quota of 40% gender representation in order to enforce gender equality throughout all forms of public life and political representation,” explains Saleh.

The revolutionary forces in Rojava are not fighting for an independent nation state, but advocating a system they call democratic confederalism: one of citizenry-led self-governance through the formation of neighbourhood-level people’s councils, town councils, open assemblies, and cooperatives. These self-governing instruments allow for the participation of diverse political, ethnic, and religious groups, promoting consensus-led decision-making. Combined with local academies aimed at politicising and educating the population, these structures of self-governance give the populace the ability to raise and solve their own problems.  

During our nine-day trip to Cizîre canton, we visited rural towns as well as cities, where we met with representatives and members of schools, cooperatives, women's academies, security forces, political parties, and the self-government in charge of economic development, healthcare, and foreign affairs.

Throughout the visit, we witnessed discipline, revolutionary commitment and an impressive collective mobilisation of the population in Cizîre. Despite the isolation and difficult conditions, a perseverance and even confidence seemed to dominate the collective mood among representatives and members of all the diverse groups we met. This collective optimism and willingness to sacrifice were in the service of an admirable ideological programme and of genuine steps towards collective emancipation. We were particularly struck by the emphasis on education, politicization and consciousness-raising among the general population, in accordance with a grass-roots democratic transformation of social and property relations.

Images by Jeff Miley.

An obvious and striking strength of the revolution, clearly on display throughout our trip, was the strides made towards gender emancipation. Our meetings with government representatives, members of academies, women’s militias, and people’s councils all demonstrated that women’s empowerment is not mere programmatic window-dressing but is in fact being implemented. This, in the context of the Middle East and in sharp contrast to both IS and the KRG, was most impressive.

Another feature of the programmatic agenda we found admirable was the insistence by the revolutionary government in Rojava that it is committed to a broader struggle for a democratic Syria, and in fact a democratic Middle East, capable of accommodating cultural, ethnic and religious diversity through democratic confederalism. In this vein, we witnessed proactive attempts by the revolutionary forces to include ethnic and religious minorities into the struggle underway in Rojava, including the institutionalisation of positive discrimination, quotas, and self-organisation of minority groups, such as the Syriac community – which even formed their own militias.

That said, the integration of the local Arab population into the revolutionary project remains a critical challenge, as does coordination and the formation of alliances with groups outside of the three cantons. Extra-Kurdish coordination and alliances are certainly prerequisites for ensuring the survival of the revolution in the medium and long term and are especially critical if democratic confederalism is to spread across Syria and the Middle East. Such a task is as difficult as it is urgent. It is crucial that the revolutionary authorities do everything in their power to assuage Arab fears of a Greater Kurdistan agenda, and include them in this struggle. Avoiding a Kurdish-centric version of history, literature and even the temptation to push for a Kurdish-only language educational system will help prevent the alienation of ethnic and religious minorities.

Revolutionary symbols (e.g. flags, maps, posters) are particularly important when it comes to integrating ethnic and religious minorities, as well as publicising the revolution across the world. More inclusive imagery would certainly facilitate the task of winning support and sympathy – both in the Middle East and more globally. References beyond the Kurdish movement were strikingly absent from the symbols we saw. The positive side of the Kurdish revolutionary symbols cannot be ignored and certainly plays a significant role in facilitating the mobilisation of the Kurdish population. However, at the same time it is likely to alienate non-Kurds and Kurds who might misidentify the struggle as one for a Greater Kurdistan.


Listen to Jeff Miley's talk on Rojava and the Kurdish revolutionary movement

Our biggest concern is that the revolution will be compromised – if not sacrificed – by broader geopolitical games. The current close alliance between the KRG and the United States, and the recent US-led airstrikes in Syria, fuel the suspicions of many – especially Sunni Arabs – that the Kurds are but pawns to yet another imperialist intervention in the region in pursuit of oil.

The politics of divide and conquer employed by the imperialist powers have a long, bloody and effective history in the Middle East and beyond. This sad reality reinforces how crucial it is to build alliances, and to transcend the Kurdish nationalist imaginary within the ranks of the movement. Indeed, one of the principal strengths of IS has been its ability to mobilise militants both locally and globally in seemingly implacable opposition to imperialist powers. 

It is especially important for the Kurdish revolution to appeal to the Turkish left, and to encourage them to denounce and fight against the crippling embargo enforced by the Turkish state on Rojava. The effects of and challenges created by the embargo were all too evident with respect to the basic health needs of the population we encountered. Unexpectedly, it was not a lack of medical expertise but rather a lack of medicine and medical equipment that most threatened the population’s health.

The effects of the embargo also reach beyond the immediate needs of the population in Rojava. The environmental toll was evident, most notably in the oil-seeped soil around the rigs. Given the circumstances, it is certainly understandable and indeed inevitable that the revolutionary authorities are nearly exclusively preoccupied with the tasks of providing for immediate energy and food needs of the population while searching for financial assistance to keep the revolutionary project afloat. Nevertheless, the revolution offers a unique opportunity to carefully establish an environmentally sustainable and democratically managed economy.

In the broader context of tyranny, violence, and political upheaval rocking many countries in the Middle East, it is highly unlikely that problems can be understood in isolation or solved on a country-by-country basis. One of the best things about the model of democratic confederalism institutionalized in Rojava is that it is potentially applicable to the entire region – a region, it should be recalled, the borders of which were largely drawn in illegitimate fashion by imperialist forces a century ago. The sins of Imperialism have yet to be forgotten in the region. 

Democratic confederalism, however, is not about dissolving state borders, but transcending them. At the same time, it allows for the construction of a local, participatory democratic alternative to tyrannical states. Indeed, the model of democratic confederalism promises to help foster peace throughout the region, from the Israeli–Palestinian conflict through Turkey, Iraq, Yemen and Lebanon. If only this democratic revolution could spread.

The long siege of Kobanê, facilitated by the criminal complicity of the Turkish state, constituted not just an assault on the Kurdish people but on a revolutionary democratic project. The region is being torn asunder in a destructive process driven by a variety of reactionary brands of political Islam. The revolutionary project of Rojava, based on democratic participation, gender emancipation, and multi-cultural, multi-religious, multi-ethnic, and even multi-national accommodation, represents a third way, perhaps the only way, of achieving a just and lasting peace in the Middle East. For these reasons the recent liberation of Kobanê should be hailed by progressives, indeed by all advocates of peace, freedom and democracy around the world.

Watch Sociology PhD Candidate and Kurdish activist Dilar Dirik's talk on the Kurdish Women's Movement at the New World Summit in Brussels last year.

 

We can but hope, argue sociologist Dr Jeff Miley and Gates Scholar Johanna Riha, who here summarise some of their observations following a recent field visit to Rojava in northern Syria, and give a brief overview of the political and social ideologies underpinning the Kurdish revolution.

Image: Deliberations among a Local Women's Council in Qamislu, Rojava



Artificially-intelligent Robot Scientist ‘Eve’ could boost search for new drugs


Robot scientists are a natural extension of the trend of increased involvement of automation in science. They can automatically develop and test hypotheses to explain observations, run experiments using laboratory robotics, interpret the results to amend their hypotheses, and then repeat the cycle, automating high-throughput hypothesis-led research. Robot scientists are also well suited to recording scientific knowledge: as the experiments are conceived and executed automatically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process.

In 2009, Adam, a robot scientist developed by researchers at the Universities of Aberystwyth and Cambridge, became the first machine to independently discover new scientific knowledge. The same team has now developed Eve, based at the University of Manchester, whose purpose is to speed up the drug discovery process and make it more economical. In the study published today, they describe how the robot can help identify promising new drug candidates for malaria and neglected tropical diseases such as African sleeping sickness and Chagas’ disease.

“Neglected tropical diseases are a scourge of humanity, infecting hundreds of millions of people, and killing millions of people every year,” says Professor Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry at the University of Cambridge. “We know what causes these diseases and that we can, in theory, attack the parasites that cause them using small molecule drugs. But the cost and speed of drug discovery and the economic return make them unattractive to the pharmaceutical industry.

“Eve exploits its artificial intelligence to learn from early successes in her screens and select compounds that have a high probability of being active against the chosen drug target. A smart screening system, based on genetically engineered yeast, is used. This allows Eve to exclude compounds that are toxic to cells and select those that block the action of the parasite protein while leaving any equivalent human protein unscathed. This reduces the costs, uncertainty, and time involved in drug screening, and has the potential to improve the lives of millions of people worldwide.”

Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) that are designed to be automatically engineered and can be generated much faster and more cheaply than the bespoke assays that are currently standard. This enables more types of assay to be applied and more efficient use of screening facilities to be made, and thereby increases the probability of a discovery within a given budget.

Eve’s robotic system is capable of screening over 10,000 compounds per day. However, while simple to automate, mass screening is still relatively slow and wasteful of resources as every compound in the library is tested. It is also unintelligent, as it makes no use of what is learnt during screening.

To improve this process, Eve selects at random a subset of the library to find compounds that pass the first assay; any ‘hits’ are re-tested multiple times to reduce the probability of false positives. Taking this set of confirmed hits, Eve uses statistics and machine learning to predict new structures that might score better against the assays. Although she currently does not have the ability to synthesise such compounds, future versions of the robot could potentially incorporate this feature.
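
Described abstractly, this is an active-learning loop: screen a random sample, confirm the hits, then learn which structures to try next. The sketch below is a schematic reconstruction of that cycle (ours, not Eve’s actual software); the assay function and compound library are hypothetical stand-ins:

# Schematic reconstruction of the screen-confirm-learn cycle
# described above; not Eve's actual code. The assay and the
# compound library are hypothetical stand-ins.
import random

def run_assay(compound) -> bool:
    """Stand-in for a wet-lab assay; True means a 'hit'."""
    return random.random() < compound["activity"]

def eve_cycle(library, sample_size=200, retests=3):
    # 1. Screen a random subset rather than the whole library.
    subset = random.sample(library, sample_size)
    hits = [c for c in subset if run_assay(c)]
    # 2. Re-test each hit several times to weed out false positives.
    confirmed = [c for c in hits
                 if all(run_assay(c) for _ in range(retests))]
    # 3. In the real system, a learned model would now propose new
    #    structures likely to score well; here we simply return the
    #    confirmed hits that would train such a model.
    return confirmed

library = [{"id": i, "activity": 0.9 if random.random() < 0.03 else 0.01}
           for i in range(10_000)]
print(len(eve_cycle(library)), "confirmed hits")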

Professor Ross King, from the Manchester Institute of Biotechnology at the University of Manchester, says: “Every industry now benefits from automation and science is no exception. Bringing in machine learning to make this process intelligent – rather than just a ‘brute force’ approach – could greatly speed up scientific progress and potentially reap huge rewards.”

To test the viability of the approach, the researchers developed assays targeting key molecules from parasites responsible for diseases such as malaria, Chagas’ disease and schistosomiasis, and tested against these a library of approximately 1,500 clinically approved compounds. Through this, Eve showed that a compound that has previously been investigated as an anti-cancer drug inhibits a key molecule known as DHFR in the malaria parasite. Drugs that inhibit this molecule are routinely used to protect against malaria, and are given to over a million children; however, the emergence of parasite strains resistant to existing drugs means that the search for new drugs is becoming increasingly urgent.

“Despite extensive efforts, no one has been able to find a new antimalarial that targets DHFR and is able to pass clinical trials,” adds Professor King. “Eve’s discovery could be even more significant than just demonstrating a new approach to drug discovery.”

The research was supported by the Biotechnology & Biological Sciences Research Council and the European Commission.

Eve, an artificially-intelligent ‘robot scientist’, could make drug discovery faster and much cheaper, say researchers writing in the Royal Society journal Interface. The team has demonstrated the success of the approach as Eve discovered that a compound shown to have anti-cancer properties might also be used in the fight against malaria.

Image: Eve, the Robot Scientist



Celestial bodies


Despite their red-brick finish, the corridors of the Institute of Astronomy can seem more like an art gallery than a research centre, so beautiful are the images of supernovae and nebulae hanging there. Dr Nic Walton passes these every day as he makes his way to his office to study the formation of the Milky Way and search for planets outside our solar system.

On the screen of Walton’s computer is what appears to be a map of stars in our Milky Way. In fact, it is something around 25 orders of magnitude smaller (that’s a factor of 1 followed by 25 zeros).

It is an image of cells taken from a biopsy of a patient with breast cancer; the ‘stars’ are the cells’ nuclei, stained to indicate the presence of key proteins. It is the similarities between these patterns and those of astronomical images that he, together with colleagues at the Cancer Research UK (CRUK) Cambridge Institute, is exploiting in PathGrid, an interdisciplinary initiative to help automate the analysis of biopsy tissue.

“Both astronomy and cell biology deal with huge numbers: our Milky Way contains several billion stars, our bodies tens of trillions of cells,” explained Walton.

PathGrid began at a cross-disciplinary meeting in Cambridge to discuss data management. Walton has been involved for many years with major international collaborations that, somewhat appropriately, amass an astronomical amount of data. But accessing data held by research teams across the globe was proving to be a challenge, with a lack of standardised protocols. Something needed to be done and Walton was part of an initiative to sort out this mess.

The issue of data management in an era of ‘big data’ is not unique to astronomy. Departments across the University – from the Clinical School to the Library – face similar issues and this meeting was intended to share ideas and approaches. It was at this meeting that Walton met James Brenton from the CRUK Cambridge Institute. They soon realised that data management was just one area where they could learn from each other: image analysis was another.

Walton and his colleagues in Astronomy capture their images using optical or near-infrared telescopes, such as the prosaically named Very Large Telescope or the recently launched Gaia satellite, the biggest camera in space with a billion pixels. These images must then be manipulated to adjust for factors including the telescope’s own ‘signature’, cosmic rays and background illumination. They are tagged with coordinates to identify their location, and their brightness is determined.

Analysing these maps is an immense, but essential, task. Poring over images of tens of thousands of stars is a laborious, time-consuming process, prone to user error, so this is where computer algorithms come in handy. Walton and colleagues run their images through object detection software, which looks for astronomical features and automatically classifies them.
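
In spirit, that detection step works like the toy sketch below (our illustration on synthetic data, not the actual pipeline software): threshold the image, label connected bright regions, then record each region’s position and brightness:

# Toy illustration of automated source detection; not the real
# pipeline software. A synthetic frame stands in for telescope data.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
sky = rng.normal(loc=100.0, scale=5.0, size=(256, 256))  # noisy background
for y, x in [(50, 60), (180, 200), (120, 30)]:           # three 'stars'
    sky[y - 2:y + 3, x - 2:x + 3] += 80.0

mask = sky > sky.mean() + 5 * sky.std()        # significance threshold
labels, n_objects = ndimage.label(mask)        # connected bright regions
index = list(range(1, n_objects + 1))
positions = ndimage.center_of_mass(sky - 100.0, labels, index)
brightness = ndimage.sum(sky - 100.0, labels, index)  # total flux per object
print(n_objects, positions, brightness)        # a miniature catalogue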

“Once we start characterising the objects, looking at what’s a star, what’s a galaxy, then we start to see the really interesting bigger picture. Light is distorted by gravitational mass on its way to us, so the shapes of the galaxies, for example, can tell us about the distribution of dark matter towards them. When we start counting stars, we start to see structures, like tidal streams.”

Professor Carlos Caldas, one of Brenton’s colleagues at the CRUK Institute, and now a collaborator of Walton’s, says the problems faced by medical pathologists are very similar, if at the opposite extreme of scale. Could the same algorithms help pathologists analyse images taken by microscopes?

When a patient presents with suspected breast cancer, a pathologist takes a core of the tumour tissue – a tiny sample, less than 1 mm in diameter. The tumour samples are arranged on a block, typically together with 200 other samples taken from different patients. Each sample needs to have its own ‘coordinates’ so that the researchers know that a particular tumour came from a particular patient.

“We then cut a slice of the 200 or so cores, mount it into a slide that is stained, and take a digital picture of this slide,” explained Caldas, “but each of these high-resolution images is a few gigabytes of data, so we quickly accumulate hundreds of terabytes of data.”

By adapting the astronomers’ image analysis software, the PathGrid collaborators are able to analyse the tumour images, for example to recognise the three types of cells in the tissue samples: cancer cells, immune cells and stromal cells. Just as object identification in astronomy reveals hidden patterns and information, so the information from the slides begins to tell researchers how the different cell types relate to each other. Staining the samples to highlight elements such as potentially important proteins could also help the researchers identify new biomarkers to aid in the diagnosis or prognosis of cancers.

Equally important will be how the data is stored so that several years down the line, as researchers find new questions to ask, they can still access and analyse any of the 15,000 different tumours and their hundred stains. “We need to know that at some point in the future we can extract sample 53, for example, or find all tumours that were positive for a particular stain,” said Caldas. “Imagine if you had a million sheets of paper and you just threw them all into a room and asked someone to find page 53. They’d have to sort through all the papers to find the right one, but if you could make it glow, you’d be able to find it more easily. This is similar to what we do, except we do this digitally.”
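
The ‘glow’ in Caldas’s analogy is what a database index provides: instead of scanning every record, the system jumps straight to the ones that match. A toy illustration (all fields and values below are made up):

# Toy version of the 'make it glow' analogy: an index turns a linear
# scan over every record into a direct lookup. All fields are made up.

slides = [
    {"sample_id": 53, "stain": "ER", "path": "slides/0053_er.tif"},
    {"sample_id": 54, "stain": "HER2", "path": "slides/0054_her2.tif"},
    # ... thousands more records in a real archive
]

# Without an index: sort through every sheet of paper in the room.
sample_53 = [s for s in slides if s["sample_id"] == 53]

# With an index: matching records simply 'glow'.
by_stain = {}
for s in slides:
    by_stain.setdefault(s["stain"], []).append(s)

print(sample_53)           # find sample 53 directly
print(by_stain.get("ER"))  # every tumour positive for a particular stain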

As well as allowing oncologists to ask new questions at a much larger scale, Caldas believes that in the future the technology could be used as ‘digital pathology’, aiding diagnosis and prognosis even in regions with no specialist oncologists. “You could imagine a scenario where a clinician takes a biopsy and a pathologist processes and stains the slide, takes a picture and digitally relays it. This is then analysed by one of the algorithms to say if it is a tumour, identify the tumour type and say how aggressive it will be.”

Walton makes an interesting and unexpected comparison between his and Caldas’s work: “We deal with star deaths, they deal with patient deaths.” If PathGrid is successful, this might change: while the astronomers continue to watch star deaths, their collaborators will hopefully become even better at preventing many more patient deaths from cancer.

Inset image: Left - Carlos Caldas; right - Nic Walton

Astronomy and oncology do not make obvious bedfellows, but the search for new stars and galaxies has surprising similarities with the search for cancerous cells. This has led to new ways of speeding up image analysis in cancer research.


New initiative to train specialists in risk, mitigation and Big Data


Hosted as a Centre for Doctoral Training (CDT) and funded by the Natural Environment Research Council (NERC), the consortium aims to produce a cohort of researchers who can use large or complex datasets to understand and ease the risks posed by a range of societal and environmental changes, such as a rapidly expanding population, limited natural resources, and natural hazards.

It’s imperative that business, government and wider society understand these risks so that they can develop appropriate mitigation strategies. Many businesses already recognise the need to understand and reduce the risks to infrastructure, operations and supply chains.

Through the insurance and reinsurance industries, for example, the UK is a world-renowned international centre in risk assessment, modelling and its application. The sector values strong collaboration with the UK academic research base to enable the improvement of their risk models, maintain a competitive edge, and grow their markets. As a result, the UK insurance sector is the largest in Europe, making a major contribution to the UK economy, employing 320,000 people and contributing almost 3% to GDP.

But other industries and humanitarian and development organisations also need the tools to help them assess the risks arising from increasingly complex and interconnected hazards, for example, international disaster risk reduction.

And, as identified by the recent BIS data strategy – Seizing the Data Opportunity – the UK has some of the best universities and institutes in the world, some truly innovative small businesses, and some of the richest historic datasets of any country.

This means there’s potential to produce a new generation of risk scientists able to maximise the opportunities big data offers, filling a skills shortage in this area. The CDT is a direct response to this shortage.

Minister for Universities, Science & Cities, Greg Clark, said: “In this fast-moving, digital world, the ability to handle and analyse large volumes of complex data is vital for the UK to maintain its competitive edge. That is why the government identified Big Data as one of our 8 Great Technologies and central to our Industrial Strategy.

“This £2.5 million investment to train the next generation of Big Data experts will enable a skilled workforce to develop innovative tools to assess and mitigate risk that will help business, government and wider society cope effectively with big environmental and societal changes.”

Professor Simon Pollard of Cranfield University will lead the consortium, called DREAM (Data, Risk and Environmental Analytical Methods). It will include researchers from the University of Cambridge, Newcastle University and the University of Birmingham, together with a wide range of partners from the private and public sectors. Dr Mike Bithell from Cambridge’s Department of Geography is one of four centre directors for the programme.

Ten studentships will be funded each year, and the CDT award will provide funding for three years of new student intake – 30 studentships in total – starting in 2015. Two of the studentships in the 2016 cohort will be interdisciplinary and co-funded by the Economic and Social Research Council (ESRC) and NERC.

Students will gain advanced technical skills, be regularly brought together as a cohort to share ideas and skills, and have the opportunity to take part in partner-based secondments.

CDTs support strategically-targeted, focused PhD studentships aimed at addressing specific research and skills gaps.

Cambridge is one of four leading UK universities awarded funding to train the next generation of researchers to become experts at assessing and mitigating risk using Big Data.


From one extreme to the next?


A radical Islamist group has exploited the vacuum created by civil war to capture cities, towns and oil fields across Syria and Iraq – leaving horror and destruction in their wake. Although this might seem unique to a post-9/11 world, religious radicalism exploiting a power vacuum is not new, as research going back 30 years to a different civil war in the same region is showing. 

Since April 2013, the Sunni jihadist group Islamic State in Iraq and the Levant, referred to as the ‘Islamic State’ (IS, or Isis), has taken control of vast swathes of Syrian and Iraqi territory, bringing with it an onslaught of appalling atrocities and acts of cruelty. “It will take some time before its full impact is determined… [the threat it poses] is unprecedented in the modern age,” stated a recent report by the Soufan Group, a security intelligence firm in New York.

Meanwhile, Syrian refugees, fleeing IS and the bitter civil war, continue to spill across the borders of neighbouring states, straining their own societies and resources. Lebanon in particular has been greatly affected – almost a quarter of its current population are Syrian refugees.

Yet, while the sudden appearance of reports about the barbarity of IS makes this seem like an unprecedented shock, new research is starting to show that parallels may exist in the recent past.

In the 1980s, Lebanon itself witnessed the ascent of one such precursor Islamic movement, known as ‘Tawheed’, during the country’s civil war. Raphaël Lefèvre, a Gates Cambridge Scholar and PhD candidate working with Professor George Joffe in the Department of Politics and International Studies, and until recently a visiting fellow at the Carnegie Middle East Center, is researching its rise and fall.

His work investigates the ways in which the group seized control of the northern port city of Tripoli and imposed its conservative agenda on locals, before being largely rejected and marginalised by civil society and leftist militants. “While it’s important to keep in mind that history does not necessarily repeat itself, the parallels are great between the history of the rise and fall of Tawheed’s emirate in Tripoli and the current rule of the Islamic State,” he said.

IS may have few obvious antecedents in the way its members practise their extremism, says Lefèvre, but the pattern of its emergence strikingly echoes that of earlier radical forces – and that pattern may provide pointers to the direction and trajectory of IS.

He also hopes that his research on the events of the 1980s in Lebanon will focus attention on the roots of an increasingly unstable situation in the country. The timing of his research is poignant, given that street violence is rising, sectarianism is reaching boiling point and IS has now inaugurated a Lebanese chapter in Tripoli.

Today, references to the failed ‘Tawheed phenomenon’ are common among the citizens of Tripoli. During a year-long visit to Lebanon, where he has now returned, Lefèvre spoke to many who remember the events of the 1980s, and he finds commonalities between Tawheed and IS.

When Tawheed seized control, it imposed its ideological and religious norms on the people, but it also began to fill the socioeconomic gap left by the absence of a Lebanese state during the civil war. “They filled a void – provided security, ran hospitals and even gave education to the kids,” he explained.

Likewise, IS has both imposed a harsh conservative social agenda on the population who live under its sway and used resources such as oil and gas fields to win over locals. “They distribute subsidies and provide state-like services to a population in severe need given the quasi-absence of the Syrian state in remote areas outside of Damascus.”

Just like IS, Tripoli’s Tawheed movement was led by a charismatic figure, the Sunni cleric Said Shaaban. He gathered under his wing three Islamist groups that merged together to form Tawheed. Their aim was to struggle against impurities in society – the warlords and drug dealers – in accordance with Sharia law.

“But, once Tawheed seized control of the city in 1983, all of these grand goals very quickly disappeared. People started realising that there wasn’t much that was Islamic about the group; it was just another political faction trying to rule their city instead of Syria and Israel, and in increasingly corrupt and murky ways.”

After three years, and in the face of pressure from the Syrian regime, internal disagreements over deciding the group’s next steps led to its collapse from within.

IS, too, has been linked with corruption, including suggestions that the organisation has been selling looted antiquities and earning significant amounts from the oil fields it controls in eastern Syria by selling supplies to the Syrian government and across the borders into Turkish and Jordanian underworlds.

Tawheed lost legitimacy when it began to be perceived as a militia using a religious discourse to mobilise people. Lefèvre believes that movements collapse when they try to force society to adapt to their norms: “very often, civil society resists and in the end strikes back.”

In Lebanon today, he sees an increasing feeling of socioeconomic and political marginalisation on the part of Lebanon’s Sunni community – a “highly toxic cocktail”, he calls it, of unemployment, low literacy rates and poverty – leading many to turn away from the state and look for alternative sources of support and protection, including joining Islamic groups. He fears that the current situation may lead back to circumstances not dissimilar to those witnessed in the 1980s.

“The influence of the Syrian crisis on Lebanon is very real. Ultimately, whether the country is able to weather the storm, or fall prey to civil war and the rise of extremism, will depend on the ability of Lebanese policymakers to address issues that have long been ignored.”

As for the future of IS, Lefèvre says: “It is unpopular in the cities it is controlling, but we are not yet seeing so much resistance – possibly because of the socioeconomic help they currently provide. While the same collapse may not necessarily happen to IS, the rise and fall of Tawheed shows that internal tensions within a group – whether about the group’s leadership or its priorities – are an important factor that should be taken into account to understand how such movements operate. The ‘IS phenomenon’ is in fact far from being a new one.”

The threat to peace posed by the Islamic State group has been described as “unprecedented in the modern age”, yet research on the rise and fall of an extremist group in 1980s Lebanon suggests that we may have seen this all before.


Planck reveals first stars were born late


New maps of ‘polarised’ light in the young Universe have revealed that the first stars formed 100 million years later than earlier estimates. The new images of cosmic background radiation, based on data released today from the European Space Agency’s Planck satellite, have shown that the process of reionisation, which ended the ‘Dark Ages’ as the earliest stars formed, started 550 million years after the Big Bang.

The history of our Universe is a 13.8 billion-year tale that scientists endeavour to read by studying the planets, asteroids, comets and other objects in our Solar System, and gathering light emitted by distant stars, galaxies and the matter spread between them.

A major source of information used to piece together this story is the Cosmic Microwave Background, or CMB, the fossil light resulting from a time when the Universe was hot and dense, only 380,000 years after the Big Bang.

Thanks to the expansion of the Universe, we see this light today covering the whole sky at microwave wavelengths.

Between 2009 and 2013, the Planck satellite surveyed the sky to study this ancient light in unprecedented detail. Tiny differences in the background’s temperature trace regions of slightly different density in the early cosmos, representing the seeds of all future structure, the stars and galaxies of today.

Scientists from the Planck collaboration have published the results from the analysis of these data in a large number of scientific papers over the past two years, confirming the standard cosmological picture of our Universe with ever greater accuracy.

The imaging is based on data from the Planck satellite and was developed by the Planck collaboration, which includes the Cambridge Planck Analysis Centre at the University’s Kavli Institute for Cosmology, as well as Imperial College London and the University of Oxford at the London Planck Analysis Centre.

“The CMB carries additional clues about our cosmic history that are encoded in its ‘polarisation’,” explains Jan Tauber, ESA’s Planck project scientist. “Planck has measured this signal for the first time at high resolution over the entire sky, producing the unique maps released today.”

Light is polarised when it vibrates in a preferred direction, something that may arise as a result of photons – the particles of light – bouncing off other particles. This is exactly what happened when the CMB originated in the early Universe.

Initially, photons were trapped in a hot, dense soup of particles that, by the time the Universe was a few seconds old, consisted mainly of electrons, protons and neutrinos. Owing to the high density, electrons and photons collided with one another so frequently that light could not travel any significant distance before bumping into another electron, making the early Universe extremely ‘foggy’.

Slowly but surely, as the cosmos expanded and cooled, photons and the other particles grew farther apart, and collisions became less frequent. This had two consequences: electrons and protons could finally combine and form neutral atoms without them being torn apart again by an incoming photon, and photons had enough room to travel, being no longer trapped in the cosmic fog.

The new Planck data fixes the date of the end of these ‘Dark Ages’ to roughly 550 million years after the Big Bang, more than 100 million years later than previously determined by earlier polarisation observations from the NASA WMAP satellite (Planck’s predecessor), and has helped resolve a problem for observers of the early Universe.

The Dark Ages lasted until the formation of the first stars and galaxies, specifically the formation of very large stars with extremely hot surfaces, which resulted in the energetic UV-radiation that began the process of reionisation of the neutral hydrogen throughout the Universe.

Very deep images of the sky from the NASA/ESA Hubble Space Telescope have provided a census of the earliest known galaxies, which started forming perhaps 300–400 million years after the Big Bang.

The problem is that with a date for the end of the Dark Ages set at 450 million years after the Big Bang, astronomers can estimate that UV-radiation from such a source would have proved insufficient. “In that case, we would have needed additional, more exotic sources of energy to explain the history of reionisation,” said Professor George Efstathiou, Director of the Kavli Institute of Cosmology.

The additional margin of 100 million years provided by Planck removes this need, as stars and galaxies would have had the time to supply the energetic radiation required to bring the Dark Ages to a close and begin the epoch of reionisation, which would then last for a further 400 million years.

Although the joint investigation between Planck and BICEP2 – searching for the signature imprinted on the polarisation of the CMB by gravitational waves triggered by inflation – found no direct detection of this signal, it crucially placed strong upper limits on the amount of primordial gravitational waves.

Searching for this signal remains a major focus of ongoing and planned CMB experiments. “The results of the joint analysis demonstrate the power of combining CMB B-mode polarisation observations with measurements at higher frequency from Planck to clean Galactic dust,” said Dr Anthony Challinor of the Kavli Institute for Cosmology.

Inset image: Polarised emission from Milky Way dust. Credit: ESA and the Planck Collaboration

New maps from the Planck satellite uncover the ‘polarised’ light from the early Universe across the entire sky, revealing that the first stars formed much later than previously thought.


Sports calibrated


The bat makes contact with the ball; the ball flies back, back, back; and a thousand mobile phones capture it live as the ball soars over the fence and into the cheering crowd. Baseball is America’s pastime and, as for many other spectator sports, mobile phones have had a huge effect on the experience of spending an afternoon at the ballpark.

But what to do with that video of a monster home run or a spectacular diving catch once the game is over? What did that same moment look like from the other end of the stadium? How many other people filmed exactly the same thing but from different vantage points? Could something useful be saved from what would otherwise be simply a sporting memory?

Dr Joan Lasenby of the University of Cambridge’s Department of Engineering has been working on ways of gathering quantitative information from video, and thanks to an ongoing partnership with Google, a new method of digitally ‘reconstructing’ shared experiences such as sport or concerts is being explored at YouTube.

The goal is for users to upload their videos in collaboration with the event coordinator, and a ‘cloud’-based system will identify where in the space the video was taken from, creating a ‘map’ of different cameras from all over the stadium. The user can then choose which camera they want to watch, allowing them to experience the same event from dozens or even hundreds of different angles.

But although stitching together still images is reasonably straightforward, doing the same thing with video, especially when the distance between cameras can be on a scale as massive as a sports stadium, is much more difficult. “There’s a lot of information attached to the still images we take on our phones or cameras, such as the type of camera, the resolution, the focus, and so on,” said Lasenby. “But the videos we upload from our phones have none of that information attached, so patching them together is much more difficult.”

Using a series of videos taken on mobile phones during a baseball game, the researchers developed a method of using visual information contained in the videos, such as a specific advertisement or other distinctive static features of the stadium, as a sort of ‘anchor’ which enables the video’s location to be pinpointed.
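One standard way to realise this kind of ‘anchoring’ is local-feature matching, for instance with OpenCV’s ORB features. The sketch below is a generic illustration rather than the researchers’ published method, and the file names are placeholders:

```python
import cv2
import numpy as np

# Match distinctive, static features (adverts, signage) between a reference
# image of the stadium and a frame from a fan's video.
orb = cv2.ORB_create(nfeatures=1000)
ref = cv2.imread("stadium_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("fan_video_frame.jpg", cv2.IMREAD_GRAYSCALE)

kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frame), key=lambda m: m.distance)

# Enough consistent matches let us estimate a homography, pinning down
# where in the stadium the frame was shot from.
if len(matches) >= 4:
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```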

“Another problem we had to look at was a way to separate the good frames from the bad,” said Dr Stuart Bennett, a postdoctoral researcher in Lasenby’s group who developed this new method of three-dimensional reconstruction while a PhD student. “With the videos you take on your phone, usually you’re not paying attention to the quality of each frame as you would with a still image. We had to develop a way of efficiently, and automatically, choosing the best frames and deleting the rest.”

To identify where each frame originated from in the space, the technology selects the best frames automatically via measures of sharpness and edge or corner content and then selects those which match. The system works with as few as two cameras, and the team has tested it with as many as ten. YouTube has been stress testing it further, expecting that the technology has the potential to improve fan engagement in the sports and music entertainment sectors.
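A common, cheap proxy for the ‘sharpness and edge content’ measures mentioned above is the variance of the Laplacian. The threshold below is arbitrary, and the snippet is a sketch of the idea, not Bennett’s implementation:

```python
import cv2

def frame_sharpness(frame):
    """Variance of the Laplacian: a cheap proxy for focus/sharpness."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_frames(video_path, threshold=100.0):
    """Keep only frames whose sharpness clears the (arbitrary) threshold."""
    cap = cv2.VideoCapture(video_path)
    kept = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_sharpness(frame) > threshold:   # discard blurred frames
            kept.append(frame)
    cap.release()
    return kept
```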

Although the technology is primarily intended for use in an entertainment context, Lasenby points out it could potentially be applied for surveillance purposes as well. “It is a possible application down the road, and could one day be used by law enforcement to help provide information at the crime scene,” said Lasenby. “At the moment, a lot of surveillance is done with fixed cameras, and you know everything about the camera. But this sort of technology might be able to give you information about what’s going on in a particular video shot on a phone by making locations in that video identifiable.”


Another area where Lasenby’s group is extracting quantitative data from video is in their partnership with British Cycling. Over the past decade, the UK has become a dominant force in international cycling, thanks to the quality of its riders and equipment, its partnerships with industry and academia, and its use of technology to help improve speeds on the track and on the road.

“In sport, taking qualitative videos and photographs is commonplace, which is extremely useful, as athletes aren’t robots,” said Professor Tony Purnell, Head of Technical Development for the Great Britain Cycling Team and Royal Academy of Engineering Visiting Professor at Cambridge. “But what we wanted was to start using image processing not just to gather qualitative information, but to get some good quantitative data as well.”

Currently, elite cyclists are filmed on a turbo trainer, which is essentially a stationary bicycle in a lab or in a wind tunnel. The resulting videos are then assessed to improve aerodynamics or help prevent injuries. “But for cyclists, especially sprinters, sitting on a constrained machine just isn’t realistic,” said Lasenby. “When you look at a sprinter on a track, they’re throwing their bikes all over the place to get even the tiniest advantage. So we thought that if we could get quantitative data from video of them actually competing, it would be much more valuable than anything we got from a stationary turbo trainer.”

To obtain this sort of data, the researchers utilised the same techniques as are used in the gaming industry, where markers are used to obtain quantitative information about what’s happening – similar to the team’s work with Google.

One thing that simplifies the gathering of quantitative information from these videos is the ability to ‘subtract’ the background, so that only the athlete remains. But doing this is no easy task, especially as the boards of the velodrome and the legs of the cyclist are close to the same colour. Additionally, things that might appear minor to the human eye, such as shadows or changes in the light, make the maths of doing this type of subtraction extremely complicated. Working with undergraduate students, graduate students and postdoctoral researchers, however, Lasenby’s team has managed to develop real-time subtraction methods to extract the data that may give the British team the edge as they prepare for the Rio Olympics in 2016.
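Off-the-shelf background subtractors illustrate the principle, though a stock Gaussian-mixture model such as OpenCV’s MOG2, sketched below, would struggle with exactly the velodrome problems described – near-identical colours and shifting shadows – which is why the team had to develop its own real-time methods:

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture("velodrome_sprint.mp4")   # placeholder file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # non-zero where motion is detected
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (127)
    # Morphological opening removes speckle before the rider is extracted.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    rider = cv2.bitwise_and(frame, frame, mask=mask)
cap.release()
```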

“Technology is massively important in sport,” said Lasenby. “The techniques we’re developing here are helping to advance how we experience sport, both as athletes and as fans.”

Inset images: credit British Cycling

 

New methods of gathering quantitative data from video – whether shot on a mobile phone or an ultra-high definition camera – may change the way that sport is experienced, for athletes and fans alike.


Capital structure used to gauge firms’ foundations


An analysis which shows how the financing activities, or “capital structure”, of real estate firms can be used as a barometer of their overall value has been published online.

The research provides the first evidence-based assessment of which capital structure characteristics are common to successful firms in the US and several European countries. It promises to provide financial decision-makers with a model for enhancing company value, while also offering potential investors a litmus-test from which they can make sounder judgements about a firm’s overall strength.

The work represents a break from traditional approaches to gauging company value, which often deliberately presume that a firm’s financing activity is irrelevant to the value of the firm itself. This presumption gives economists a clean benchmark against which to study the real-world frictions that do affect financial decision-making inside a company, such as taxation or fluctuations in capital markets.

By contrast, the new research argues that stronger firms nevertheless display consistent characteristics in how they finance their activities and manage debt, and that these can be used as a basis of gauging their value overall.

In particular, it suggests that low leverage – a low ratio of debt to the overall value of shares and assets – is a particularly strong and dependable indicator of the value of a firm.

The study was commissioned by the European Public Real Estate Association (EPRA), and carried out by Dr Eva Steiner, a member of the Department of Land Economy and Fellow of St John’s College, University of Cambridge, and Dr Timothy Riddiough, from the Wisconsin School of Business in the US.

“Most financial managers pick investments and look at how they can generate value; we look at how the manager can generate value themselves by making the right financial choices,” Dr Steiner said.

“Optimal capital structure is a multi-dimensional problem, and this is the first research to look at what combination of characteristics gives, on the strength of the available evidence, the best firm value.”

The study assessed all real estate firms covered by the SNL financial database, which collects and disseminates corporate, financial, market, and mergers and acquisitions data. Real estate companies are a good subject for a study of capital structure management because they own and operate large, long-term assets with changeable value, and need to be set up to handle a large amount of debt.

For US firms, the data covered the period from 1993 to 2012, while information was also collected about companies in Germany, France, the UK and the Netherlands between 2001 and 2012. The strength of these firms was measured using a standard rating called “Tobin’s Q”. Having organised the list of companies according to their annual Q ratios, the researchers then compared the capital structure characteristics of firms with the highest Q ratios with those of the lowest.

The process revealed several features of financial strategy that were common to the highest-value firms. In particular, the strongest companies in the sample group maintained low levels of leverage, at a ratio of approximately 35%, whereas weaker firms had on average a higher leverage ratio of approximately 59%.
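As a rough sketch of the comparison being described – using hypothetical column names and a common approximation of Tobin’s Q (market value of equity plus debt, over book assets) that may differ in detail from the study’s construction:

```python
import pandas as pd

# Hypothetical dataset: one row per firm-year with market and balance-sheet values.
firms = pd.read_csv("firms.csv")

# A common Tobin's Q approximation: (market equity + total debt) / book assets.
firms["q"] = (firms["market_equity"] + firms["total_debt"]) / firms["total_assets"]
firms["leverage"] = firms["total_debt"] / (firms["total_debt"] + firms["market_equity"])

# Compare capital-structure traits of the top and bottom Q quartiles.
top = firms[firms["q"] >= firms["q"].quantile(0.75)]
bottom = firms[firms["q"] <= firms["q"].quantile(0.25)]
print(top["leverage"].mean(), bottom["leverage"].mean())   # ~0.35 vs ~0.59 in the study
```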

This inverse relationship between leverage and firm value was the strongest finding of the whole survey. “The general robustness of this finding raises the question why firms deviate from what appears to be a clear, characteristic-informed optimal leverage ratio that is associated with significantly stronger firm value,” the authors note.

The study also found several other strong indicators of company value. Stronger firms have longer debt maturity, matching the typically longer-term maturity of their assets, and a higher proportion of fixed-rate, rather than variable-rate debt.

Higher-value companies also tend to have lower levels of secured debt, whereas weaker firms typically pledge collateral when borrowing capital. “We found that stronger firms tend to be able to borrow on grounds of their overall creditworthiness,” Steiner said. “They can access capital markets trading on the quality of the firm, without having to rely on collateral to mitigate lender concerns.”

While all of these factors were indicative of value among companies in the US, in Europe the relationship between leverage and firm value was far more significant than the others. The authors suggest that this indicates that capital structure has a more consistent and decisive impact on company value in the US.

The study is ongoing, with Steiner and Riddiough due to publish further analysis from their research in 2015. “Overall our findings suggest that a defensive, prudent capital structure with low leverage, aimed at matching debt and asset maturity and managing interest rate risk through utilising fixed-rate instruments, is able to make a significant contribution to firm value,” Steiner added.

The full report can be found on the EPRA website. 

Patterns in the financing activities of firms could be used as a litmus-test to determine company value, according to a new report.


Computer model of blood development could speed up search for new leukaemia drugs


The human body produces over 2.5 million new blood cells during every second of our adult lives, but how this process is controlled remains poorly understood. Around 30,000 new patients are diagnosed with cancers of the blood each year in the UK alone. These cancers, which include leukaemia, lymphoma and myeloma, occur when the production of new blood cells gets out of balance, for example if the body produces an overabundance of white blood cells.

Biomedical scientists from the Wellcome Trust-MRC Cambridge Stem Cell Institute and the Cambridge Institute for Medical Research have collaborated for the past two years with computational biologists at Microsoft Research and the University of Cambridge’s Department of Biochemistry. This interdisciplinary team has developed a computer model to help gain a better understanding of the control mechanisms that keep blood production normal. The details are published today in the journal Nature Biotechnology.

“With this new computer model, we can carry out simulated experiments in seconds that would take many weeks to perform in the laboratory, dramatically speeding up research into blood development and the genetic mutations that cause leukaemia,” says Professor Bertie Gottgens, whose research team is based at the University’s Cambridge Institute for Medical Research.

Dr Jasmin Fisher from Microsoft Research and the Department of Biochemistry at the University of Cambridge says: “This is yet another endorsement of how computer programs empower us to gain better understanding of remarkably complicated processes. What is ground-breaking about the current work is that we show how we can automate the process of building such programs based on raw experimental data. It provides us with a blueprint to develop computer models relevant to other human diseases including common cancers such as breast and colon cancer.”

To construct the computer model, PhD student Vicki Moignard from the Stem Cell Institute measured the activity of 48 genes in over 3,900 blood progenitor cells that give rise to all other types of blood cell: red and white blood cells, and platelets. These genes include TAL1 and RUNX1, both of which are essential for the development of blood stem cells, and hence to human life.

Computational biology PhD student Steven Woodhouse then used the resulting dataset to construct the computer model of blood cell development, using computational approaches originally developed at Microsoft Research for synthesis of computer code. Importantly, subsequent laboratory experiments validated the accuracy of this new computer model.

One way the computer model can be used is to simulate the activity of key genes implicated in blood cancers.  For example, around one in five of all children who develop leukaemia has a faulty version of the gene RUNX1, as does a similar proportion of adults with acute myeloid leukaemia, one of the most deadly forms of leukaemia in adults. The computer model shows how RUNX1 interacts with other genes to control blood cell development: the gene produces a protein also known as Runx1, which in healthy patients activates a particular network of key genes; in patients with leukaemia, an altered form of the protein is thought to suppress this same network. If the researchers change the ‘rules’ in the network model, they can simulate the formation of abnormal leukaemia cells. By tweaking the leukaemia model until the behaviour of the network reverts back to normal, the researchers believe they can identify promising pathways to target with drugs.
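The model belongs to the family of executable Boolean networks: each gene is either on or off, and logical rules derived from data update the state at each step. The toy network below illustrates the formalism only – the rules, and the genes other than RUNX1 and TAL1, are invented, not the network synthesised in the paper:

```python
# Each gene is Boolean; update rules map the current state to the next one.
rules = {
    "RUNX1":     lambda s: s["TAL1"],                        # activated by TAL1 (illustrative)
    "TAL1":      lambda s: s["TAL1"],                        # self-sustaining in this toy model
    "TARGET":    lambda s: s["RUNX1"] and not s["REPRESSOR"],
    "REPRESSOR": lambda s: not s["RUNX1"],
}

def step(state):
    return {gene: bool(rule(state)) for gene, rule in rules.items()}

state = {"RUNX1": False, "TAL1": True, "TARGET": False, "REPRESSOR": True}
for _ in range(10):                  # iterate towards a stable state
    state = step(state)
print(state)

# A leukaemia-like perturbation: clamp RUNX1 off, re-run, and compare behaviour.
# 'Drug' candidates correspond to rule changes that restore the normal dynamics.
rules["RUNX1"] = lambda s: False
```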

Professor Gottgens adds: “Because the computer simulations are very fast, we can quickly screen through lots of possibilities to pick the most promising ones as pathways for drug development. The cost of developing a new drug is enormous, and much of this cost comes from new candidate drugs failing late in the drug development process. Our model could significantly reduce the risk of failure, with the potential to make drug discovery faster and cheaper.”

The research was supported by the Medical Research Council, the Biotechnology and Biological Sciences Research Council, Leukaemia and Lymphoma Research, the Leukemia and Lymphoma Society, Microsoft Research and the Wellcome Trust.

Dr Matt Kaiser, Head of Research at UK blood cancer charity Leukaemia & Lymphoma Research, which has funded Professor Gottgens’ team for over a decade, said: “For some leukaemias, the majority of patients still ultimately die from their disease. Even for blood cancers for which the long-term survival chances are fairly good, such as childhood leukaemia, the treatment can be really gruelling. By harnessing the power of cutting-edge computer technology, this research will dramatically speed up the search for more effective and kinder treatments that target these cancers at their roots.”

Reference
Moignard, V et al. Decoding the regulatory network of early blood development from single-cell gene expression measurements. Nature Biotech; 9 Feb 2015.

The first comprehensive computer model to simulate the development of blood cells could help in the development of new treatments for leukaemia and lymphoma, say researchers at the University of Cambridge and Microsoft Research.


If you go down to the woods today…


Soaring over the tree canopy of one of the most biodiverse forests on earth, a tiny unmanned plane buzzes quietly through the air. Its pilot stands 250 m below, controlling its flight remotely. This unmanned aerial vehicle (UAV) is gathering data essential to understanding and diagnosing the health of the rainforest below.

The plane is one of a small fleet currently undergoing test flights in Indonesia. Each has been equipped with remote sensors. Their task is to image both the Harapan Rainforest – a 100,000 hectare area of formerly logged forest that is now managed for conservation by a group of NGOs including the RSPB – and a highly threatened forested area on the coast of Kenya.

Globally, around one billion hectares of degraded tropical forest like Harapan might be restorable, enabling them to continue to contribute to the planet’s biodiversity and its carbon and water cycles. But a major problem faced by conservation managers is how to survey extensive areas in which conditions can vary in just a few hundred square metres and are continually changing through natural regeneration.

A group of conservation scientists at the Department of Plant Sciences, RSPB and A Rocha International (which works in Kenya) has embarked on what it hopes is a cost-effective and high-quality solution, funded by the Cambridge Conservation Initiative Collaborative Fund. Lead researcher Dr David Coomes explained: “Forest conservation activities often rely on airborne monitoring and satellite imagery to provide information, but these are either expensive or don’t offer a fine-enough resolution. We’ve decided to use inexpensive sensors on UAVs to spot areas of the trashed forest that are showing early signs of recovery.”

The researchers need to measure the health of the forest on a tree-by-tree basis – locating, identifying and counting key species indicative of recovery. Multiply this up by hundreds of thousands of hectares, repeated at time intervals in the future, and it becomes a huge imaging, computational and ‘big data’ challenge.

As the datasets grow, being able to manage and analyse the images automatically and with very high accuracy becomes crucial, and so a key part of the project is to develop the mathematical tools that will do this. This is the job of Dr Carola Schönlieb and her team of digital image analysts at the Department of Applied Mathematics and Theoretical Physics.

The mathematical tools have similarities to technology they are also developing for tumours. For the past year, Schönlieb’s group has been working on the VoxTox Research Programme – a five-year study funded by Cancer Research UK and led by Professor Neil Burnet at Cambridge’s Department of Oncology – which aims to reduce the ‘collateral damage’ toxicity that can arise during cancer radiotherapy.

About half of all people with cancer receive a course of radiotherapy, a form of treatment in which X-rays are used to shrink or destroy the tumour. With the benefit of advanced systems, it’s now possible to aim radiation beams at tumours more effectively than ever before, allowing increasing doses of radiotherapy with increased cancer cure rates, and also reducing side effects.

However, although clinicians use planning software to define the target area for treatment and deliver the optimal dose, any dose that falls outside the target area – for instance due to the positioning of the patient and their internal organs during treatment – can cause permanent and severe damage to normal tissues. VoxTox, which brings together cancer specialists, mathematicians, radiologists, physicists and engineers, is developing a set of tools that can be used to provide patients with the optimal dosage for their condition.

“The similarity between what we are doing in VoxTox and forest mapping is the development of mathematical algorithms that combine datasets – a process called registration – and then segment them into objects of interest,” explained Schönlieb.

For VoxTox, imaging data gathered during the course of a patient’s radiotherapy is analysed mathematically pixel by pixel (or, in fact, ‘voxel by voxel’ because it’s in three dimensions) within the patient outline, and the dose is then re-computed at that point, each day, during treatment.

Airborne remote sensors for conservation, by contrast, gather data on which trees are in the forest, where they are and how healthy they are. “It’s like an airborne well-tree clinic,” said Coomes. The data might include digital photography to record what can be seen; a three-dimensional laser scanner (or LiDAR) to measure the height of the canopy; and hyperspectral scanners to monitor the wavelengths of radiation each plant absorbs – these ‘chemical signatures’ can be used to identify species.

“Being able to bring these datasets together gives us a much fuller idea of the health of the forest than each of the datasets individually,” he added. “With the addition of GPS too, it means we can map the forest tree by tree, over time, in three dimensions.”
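A schematic of that fusion step, assuming the layers have already been registered onto a common ground grid – the file names, the 20 m height cut-off and the similarity threshold are all invented for illustration:

```python
import numpy as np

canopy_height = np.load("lidar_canopy_height.npy")   # metres, shape (H, W), from LiDAR
spectra = np.load("hyperspectral_cube.npy")          # shape (bands, H, W)
signature = np.load("target_species_signature.npy")  # shape (bands,), a known 'chemical signature'

# Cosine similarity between each cell's spectrum and the species' signature.
dot = (spectra * signature[:, None, None]).sum(axis=0)
norms = np.linalg.norm(spectra, axis=0) * np.linalg.norm(signature) + 1e-9
similarity = dot / norms

# Combine layers: tall canopy whose chemistry matches the target species.
candidates = (canopy_height > 20.0) & (similarity > 0.9)
print(f"{candidates.sum()} candidate crowns flagged")
```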

To develop the algorithms, researchers led by Schönlieb and Coomes are using test data previously acquired by Coomes’ group using manned flights over five European sites, as well as data recently gathered from 200 km² of Malaysian forest as part of a project funded by the Natural Environment Research Council.

This is complex mathematical image processing, as Schönlieb explained: “As for VoxTox, the aim is to faultlessly match different types of sensing data as a hybrid dataset and then segment it based on the different levels of information present in each voxel. This sort of analysis hasn’t been done before with this kind of accuracy. It pushes the boundaries of state-of-the-art methods.

“The next stage is to see how far we can push the segmentation method, which is the part that identifies individual trees,” she said. “If we can maintain high levels of accuracy using cheaper and fewer sensors – like those being used on the UAVs – then you can take imagery that’s as good as it’s going to get and maximise the information gain from what you have.”

Meanwhile, by mid-2015, the UAVs will begin streaming data back to the researchers who, with their algorithms ready, will start mapping the voxel forest and feeding the results into its management. “The appeal of this technology is you are dealing with individual plants and trees,” said Coomes, “it’s finally approaching what conservation scientists need to have: seeing the wood and the trees.”

Inset image – top: LiDAR image from Sierra Leone showing the Moru river along the border between Sierra Leone and Liberia; Credit: RSPB

Inset image – bottom: Dr Carola Schönlieb and Dr David Coomes; Credit: University of Cambridge

Recent advances in medical imaging are being applied to airborne remote sensing of vegetation, enabling conservation scientists to see the wood and the trees.


The ‘flying scientist’ who chased spores


On a July day in 1930, British Airship R100 took to the sky from a Bedfordshire airfield on its first transatlantic flight. As it made its way across the Atlantic Ocean, 2,000ft above sea-level, a window opened and Squadron Leader Booth, wearing a pair of rubber gloves, leaned out. In his hand was a Petri dish.

Below, on the HMS Ausonia, Cambridge mycologist Dr W.A.R. Dillon Weston watched through the porthole of his cabin. It was his Petri dish – in reality, a spore trap, capturing minute particles released from fungi and carried with the wind – that Booth was holding. “The thrill of the airship excited Dillon Weston as much as the thrill of spore chasing,” explained Dr Ruth Horry from the Department of History and Philosophy of Science, who has been researching his story.

This adventure was set against the backdrop of what Picture Post magazine declared a “man-versus-fungi battle”. Wheat rust had wiped out enormous areas of American and Canadian wheat production and coffee rust had destroyed entire plantations in Ceylon.  “Those who know most about them are still frightened of the fungi,” said Picture Post.

Dillon Weston and fellow scientists suspected that one route of spore transmission over long distances was through air currents. But how to test this? “He was carrying out his studies in the 1920s and 1930s when research methodology was in its infancy,” said Horry. “Where his creativity literally took off was in realising that to test the atmosphere for spores he had to invent ways to catch them, using aeroplanes and home-made Vaseline spore traps.”

“At first sight it may appear ludicrous that the aeroplane can have any significance in biologic research. Is it, however, absurd?” said Dillon Weston in 1929. Intrigued by the finding of some of his American colleagues that aircraft-borne spore traps could detect spores at 11,000 feet, Dillon Weston persuaded friends in the Cambridge University Air Squadron to fly over the Cambridgeshire countryside at various heights. Although his results were as much about devising the perfect spore trap as about the spores themselves, he concluded that the air was a viable medium for spores to be transported.

“Devastating yet invisible plant diseases were an important enemy to conquer and new aviation technologies were vital in winning the war against them,” said Horry. “Newspaper coverage of the time showed that the scientist who chased invisible diseases captured both tiny spores and the imagination of the public: ‘Disease germs two miles up – flying scientists chase them’ declared one newspaper.”

But it was Dillon Weston’s next foray into the skies that is perhaps the most fascinating as a milestone in mycology, and the history of science, as British Airship R100 took off with his spore traps aboard. The mycologist had in effect moved his laboratory from the earth into the skies above.

“He watched the airship through the porthole of his cabin, with his spore traps 2,000 ft skywards in the hands of the airship’s Captain,” said Horry. Using official flight papers, telegrams, family letters and newspaper reports, Horry has pieced together not only the events of the day, but also how he managed to ‘piggy-back’ such a high-profile experimental flight with his homemade spore traps.

“The airship project had been foundering through technical setbacks and lack of financial support,” she explained. “Sensing an opportunity, the Air Ministry co-opted Dillon Weston’s spore experiment as a means of adding scientific legitimation to the scheme – it helped to sell an unknown airship to a public suffering from ‘airship fatigue’.”

Dillon Weston’s results from the airship experiment were never published, as it became impossible to repeat this initial trial. Two months after the R100 completed her journey, the British Air Ministry’s airship R101 tragically crashed on its first voyage to India, claiming the lives of all on board. Less than a year after the spore experiment, the airship scheme was terminated. 

Although the experiment was never to be repeated, Horry believes that it is representative of a wider concept in science: the idea of ‘piggy-backing’ small-scale experiments on larger scale projects. “Dillon Weston’s scientific work aboard R100 was a small-scale experiment that required complex technologies to reach its location of study,” she said.

“As fascinating as this story of airships and fungi is, its wider value has been in revealing that historians need a better understanding of scientific experiments that are dependent upon large-scale, external technological programmes for their existence.” She points towards astrobiology experiments to study the origins of extraterrestrial life on board early NASA space flights as a more recent example of piggy-back science.

Horry added: “The spore experiment’s subsequent disappearance from view acts as an indicator that other now-forgotten examples of piggyback science could have been attached to large scale 20th-century technologies. It may just require us to don our historical rubber gloves, take to the air and chase them down.” 

Inset image – top: Dillon Weston. Credit: John S Murray

Inset image – middle: R100 at mast in Canada. Credit: Ruth Horry

Inset image – bottom: Puffballs - Lycoperdon. Credit: Whipple Museum of the History of Science

A passion for fungi led Cambridge mycologist Dr Dillon Weston to ever-more inventive means of trapping fungal spores, even from the open window of an airship on its maiden flight in the first half of the 20th century.

Fungi formed from glass

At the same time as he saw the devastation to crops and financial ruin that fungi could cause, Dr Dillon Weston was mesmerised by their splendour. “People thought fungi repulsive, and I wanted to show how beautiful they can be,” he wrote at the time.

Take Phytophthora infestans, the potato blight pathogen, responsible for destroying potato crops across Europe in the 1840s, contributing to mass starvation and the Great Irish Famine. Dillon Weston used the pathogen as the basis of an intricate glass model the height of a hand’s span, 400-times larger than the actual organism. Its delicate tendrils stretch upwards, crisscrossing each other in a complex and fragile array of strands topped by tiny oval heads crammed with spores. It is beautiful, but this beauty belies the pathogen’s legacy of death.

“He crafted some of his models in microscopic detail, showing fungal processes like spore formation and release,” explained Dr Ruth Horry from the Department of History and Philosophy of Science, who has been researching the life stories of objects that become part of museum collections.

His legacy of over 90 models is now housed in the Whipple Museum of the History of Science in Cambridge. Many are impeccable reproductions in microscopic detail of fungi such as those responsible for the mould commonly seen on bread, the fungus that sweetens wine and the leaf spot found on sugar beet; others are life-sized interpretations of woodland fungi, brightly coloured in russet and ochre; and all would have been an invaluable teaching aid for his students who rarely had access to three-dimensional representations of the organisms they were studying.


Supermarket promotions boost sales of less healthy foods more than healthier foods


Price promotions are commonly used in stores to boost sales through price reductions and stimulate impulsive purchases by increasing items’ prominence through tags and positioning. However, there is growing concern that such promotional activities by the food industry may contribute to poor dietary choices and might lure consumers away from healthier, higher priced options.

“There’s plenty of anecdotal evidence, but very little empirical evidence, about the impact of price promotions on people’s diets,” explains Professor Theresa Marteau, Director of the Behaviour and Health Research Unit at the University of Cambridge. “In this study, we examined whether less healthy foods are more likely to be promoted than healthier foods and how consumers respond to price promotions.”

A team of UK researchers, funded by the Department of Health, studied detailed data on purchase records of all foods and beverages by 27,000 households in the UK. Over 11,000 purchased products from 135 food and drink categories were assigned healthiness scores – following UK Food Standards Agency (FSA) criteria – based on the FSA nutrient profiling model.

Published today in the American Journal of Clinical Nutrition, the results show – perhaps surprisingly – that on the whole less healthy items were no more frequently promoted than healthier ones. However, after accounting for price, price discount, and brand characteristics, the magnitude of the sales increase was larger in less healthy than in healthier food categories: a 10% increase in the frequency of promotions led to a 35% sales increase for less healthy foods, but a sales increase of just under 20% for healthier foods. The researchers believe this may be because products from less healthy food categories are often non-perishable, while those from healthier food categories – in particular fruit and vegetables – are perishable: stockpiling during promotions may therefore be more likely in less healthy food categories.
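In regression terms, those numbers behave like elasticities. Below is a flat ordinary-least-squares sketch of the idea – the real study used a hierarchical regression whose exact specification differs, and the dataset and column names here are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("category_sales.csv")   # hypothetical: one row per category-period

# Log-log form, so the promotion coefficient reads roughly as an elasticity:
# a coefficient near 3.5 means a 10% rise in promotion frequency goes with
# roughly a 35% rise in sales; the interaction lets that differ by healthiness.
model = smf.ols(
    "np.log(sales) ~ np.log(promo_frequency) * less_healthy"
    " + np.log(price) + discount_depth",
    data=df,
)
print(model.fit().params)
```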

The study also found that households of a higher socioeconomic status tended to respond to price promotions more than those from disadvantaged backgrounds, for both healthier and less healthy foods. The researchers suggest a number of reasons, including the fact that making the most of promotions may involve stockpiling items while they are on offer, requiring financial resources and more space to store products.

“It seems to be a widely held idea that supermarkets offer promotions on less healthy foods more often than promotions on healthier foods, but we did not find this to be the case, except within a minority of food categories,” says Dr Ryota Nakamura from the Centre for Health Economics at the University of York, who carried out the research whilst at the University of East Anglia. “Yet, because price promotions lead to greater sales boosts when applied to less healthy foods, our results suggest that restricting price promotions on less healthy foods has the potential to make a difference to people’s eating habits and encourage healthier, more nutritious diets.”

Reference
Nakamura, R et al. Price promotions on healthier vs. less healthy foods: a hierarchical regression analysis of the impact on sales and social patterning of responses to promotions in Great Britain. AJCN; 11 Feb 2015

Supermarket price promotions are more likely to boost sales of less healthy foods than of healthier choices, according to a study published today. However, the study of shopping patterns among almost 27,000 UK households found that supermarkets were no more likely to promote less healthy foods than healthier ones.
