
    The decisions that the next UK government makes in response to Brexit could either add billions to the nation’s economy or risk decimating the country’s more than 264,000 manufacturing small and medium-sized enterprises (SMEs).

    SMEs make up 99.9% of all businesses in the UK. They therefore have a critical role to play if the UK is to tackle the three key challenges holding the economy back, which Brexit could further exacerbate: a large trade deficit, poor productivity growth, and low levels of SME innovation.

    First, the UK’s long-running trade deficit could be worsened as European supply chains make plans to withdraw to the continent. According to a recent survey published by the Chartered Institute of Procurement and Supply, nearly half (46%) of EU businesses that work with UK suppliers are planning to move their UK operations to avoid potential Brexit tariffs.

    The survey also shows that the severing of supply chain ties runs in both directions. Nearly one-third of UK businesses that use EU suppliers are now looking for British replacements. But UK domestic supply chains are relatively weak and it is not clear whether they will be able to take advantage of emerging opportunities. For instance, only around 44% of the parts used in vehicles assembled in the UK are sourced domestically. This compares to an estimated 60% in Germany and France.

    The second challenge is productivity. The UK has a long tail of unproductive firms hindering supply-chain competitiveness. According to the Bank of England, a full one-third of companies have seen no productivity growth since 2000. Many SMEs have weak internal R&D and managerial capabilities, making them unable to update production processes and adopt new technologies.

    Third is innovation. While the scientific output of the UK is world-leading, many SMEs are unable to reap the benefits. The smaller the company is, the harder it is to innovate or capitalise on its innovations. This is because small firms’ limited resources mean an unsuccessful investment can greatly affect their finances – and even jeopardise their survival.

    Opportunities abound

    Yet the opportunity for SME growth post-Brexit is huge, given proper investment. Research commissioned by the CBI, the UK’s main employers’ group, estimates that strengthening supply chains could add £30 billion to the UK economy and create more than 500,000 jobs by 2025. In the car industry alone, UK suppliers could take a much bigger share of the market and reduce the UK trade deficit by £4 billion, according to the Automotive Council.

    It is welcome that the government’s Industrial Strategy green paper recognises that the success of UK industry depends on the presence of a “vigorous ecology” of smaller companies supporting major players. The need to support this “ecology of suppliers” cannot be overstated.

    Beating the competition

    Some of the UK’s overseas manufacturing competitors, such as Japan and Germany, have long recognised the barriers faced by SMEs and have established well-funded systems to overcome some of these constraints.

    Japan has a regional network of 60 “Kohsetsushi centres” which support the development of its industrial SMEs, providing them with testing and research services. Some of these centres have been operating for more than a century. Japan also has hundreds more similar centres serving other sectors of its economy, such as agriculture and health. In 2016 they received combined funding of around US$1.5 billion and hosted more than 12,000 researchers – all working to help SMEs innovate.

    Similarly, Germany has 69 Fraunhofer institutes with 24,000 staff and €2.2 billion of annual funding, along with a plethora of other institutions to support SME innovation. These include the German Federation of Industrial Research Associations or AiF, which has been operating for more than 60 years. It encompasses 100 industrial research associations serving around 50,000 businesses, mostly SMEs. In 2014, AiF disbursed around €500m (£440m) of public funding for innovative SME projects.

    These institutions help SMEs innovate by providing and funding a range of technical services: access to the latest equipment and laboratories that SMEs could not otherwise afford; technical advice, analysis and testing to ensure product quality and compliance with international standards; workforce training for the introduction of new technologies; and guidance on accessing innovation funds.

    The UK has made a similar move in this direction with its Catapult programme, launched in 2011. Catapult centres fill an important gap in the innovation system but, as a review commissioned by the government recognised, the UK is still “playing catch-up” with other countries. In particular, the Catapult centres cannot by themselves fulfil all the functions required to support SME productivity and innovation because they still lack the scale and geographical coverage found in other countries that have developed institutional capacity over many years.

    What the international experience tells us is that no one programme or institution is likely to help manufacturing SMEs overcome all the barriers they face. Instead, what some of the most successful countries have done is to ensure that the particular needs of SMEs are systematically taken into account across government-supported initiatives.

    Action to nurture SME innovation in the UK and address gaps in the institutional support infrastructure is therefore necessary for the nation’s future industrial success. A long-term approach is also required to build institutions with the size, coverage and financial flexibility to reach SMEs in all UK regions. Anything less runs the risk of damaging the health of the UK economy for years to come.

    Carlos López-Gómez, Head of the Policy Links unit at the Institute for Manufacturing, University of Cambridge

    This article was originally published on The Conversation. Read the original article.

    In this piece for The Conversation, Carlos López-Gómez from Cambridge's Institute for Manufacturing discusses the role that small and medium-sized businesses might play in a post-Brexit economy.


    Creative Commons License
    The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.



    The team of researchers, from the University of Cambridge and the United States, used theoretical and experimental methods to show how bismuth, the so-called “green element” which sits next to lead on the periodic table, could be used in low-cost solar cells. Their results, reported in the journal Advanced Materials, suggest that solar cells incorporating bismuth can replicate the properties that give lead-based solar cells their exceptional performance, but without the same toxicity concerns. Later calculations by another research group showed that bismuth-based cells can convert light into energy at efficiencies of up to 22%, which is comparable to the most advanced solar cells currently on the market.

    Most of the solar cells which we see covering fields and rooftops are made from silicon. Although silicon is highly efficient at converting light into energy, it has a very low “defect tolerance”, meaning that the silicon needs to have very high levels of purity, making it energy-intensive to produce.

    Over the past several years, researchers have been looking for materials which can perform at similar or better levels to silicon, but that don’t need such high purity levels, making them cheaper to produce. The most promising group of these new materials are called hybrid lead halide perovskites, which appear to promise a revolution in the field of solar energy.

    As well as being cheap and easy to produce, perovskite solar cells have, in the space of a few years, become almost as energy-efficient as silicon. However, despite their enormous potential, perovskite solar cells are also somewhat controversial within the scientific community, since lead is integral to their chemical structure. Whether the lead contained within perovskite solar cells represents a tangible risk to humans, animals and the environment is still being debated. Nevertheless, some scientists are now searching for non-toxic materials which could replace the lead in perovskite solar cells without negatively affecting performance.

    “We wanted to find out why defects don’t appear to affect the performance of lead-halide perovskite solar cells as much as they would in other materials,” said Dr Robert Hoye of Cambridge’s Cavendish Laboratory and Department of Materials Science & Metallurgy, and the paper’s lead author. “If we can figure out what’s special about them, then perhaps we can replicate their properties using non-toxic materials.”

    In collaboration with colleagues at MIT, the National Renewable Energy Laboratory and Colorado School of Mines in the US, the Cambridge researchers have shown that bismuth, which sits next to lead in the periodic table, could be a non-toxic alternative to lead for use in next-generation solar cells. Bismuth, known as the “green element”, is widely used in cosmetics, personal care products and medicines. Like lead, it is a heavy metal, but it is non-toxic.

    For this study, Hoye and his colleagues looked at bismuth oxyiodide, a material which was previously investigated for use in solar cells and water splitting, but was not thought to be suitable because of low efficiencies and because it degraded in liquid electrolytes. The researchers used theoretical and experimental methods to revisit this material for possible use in solid-state solar cells.

    They found that bismuth oxyiodide is as tolerant to defects as lead halide perovskites. Bismuth oxyiodide is also stable in air for at least 197 days, which is a significant improvement over some lead halide perovskite compounds. By sandwiching the bismuth oxyiodide light absorber between two oxide electrodes, they were able to demonstrate a record performance, with the device converting 80% of light to electrical charge.

    The bismuth-based devices can be made using common industrial techniques, suggesting that they can be produced at scale and at low cost.

    “Bismuth oxyiodide has all the right physical property attributes for new, highly efficient light absorbers,” said co-author Professor Judith Driscoll, of the Department of Materials Science and Metallurgy. “I first thought of this compound around five years ago, but it took the highly specialised experimental and theoretical skills of a large team for us to prove that this material has real practical potential.”

    “This work shows that earlier theories about bismuth oxyiodide were not wrong, and these compounds do have the potential to be successful solar cells,” said Hoye, who is a Junior Research Fellow at Magdalene College. “We’re just scratching the surface of what these compounds can do.”

    “Previously, the global solar cell research community has been searching for non-toxic materials that replicate the defect tolerance of the perovskites, but without much success in terms of photovoltaic performance,” said Dr David Scanlon, a theorist at UCL not involved in this work. “When I saw this work, my team calculated based on the optical properties that bismuth oxyiodide has a theoretical limit of 22% efficiency, which is comparable to silicon and the best perovskite solar cells. There’s a lot more we could get from this material by building off this team’s work.”

    Reference
    Robert Hoye et al. ‘Strongly Enhanced Photovoltaic Performance and Defect Physics of Air-Stable Bismuth Oxyiodide (BiOI).’ Advanced Materials (2017). DOI: 10.1002/adma.201702176

    Researchers have demonstrated how a non-toxic alternative to lead could form the basis of next-generation solar cells. 

    Bismuth oxyiodide light absorbers



    Microscopic versions of the cocoons spun by silkworms have been manufactured by a team of researchers. The tiny capsules, which are invisible to the naked eye, can protect sensitive molecular materials, and could prove a significant technology in areas including food science, biotechnology and medicine.

    The capsules were made at the University of Cambridge using a specially-developed microengineering process that combines the power of microfluidic manufacturing with the value of natural silk. The process mimics on the microscale the way in which Bombyx mori silkworms spin the cocoons from which natural silk is harvested. The resulting micron-scale capsules comprise a solid and tough shell of silk nano-fibrils that surrounds and protects a centre of liquid cargo, and are more than a thousand times smaller than those created by silkworms.

    Writing in the journal Nature Communications, the team suggest that these “micrococoons” are a potential solution to a common technological problem: how to protect sensitive molecules that have potential health or nutritional benefits, but can easily degrade and lose these favourable qualities during storage or processing.

    The study argues that sealing such molecules in a protective layer of silk could be the answer, and that silk micrococoons that are far too small to see (or taste) could be used to house tiny particles of beneficial molecular “cargo” in various products, such as cosmetics and food.

    The same technology could also be used in pharmaceuticals to treat a wide range of severe and debilitating illnesses. In the study, the researchers successfully showed that silk micrococoons can increase the stability and lifetime of an antibody that acts on a protein implicated in neurodegenerative diseases.

    The work was carried out by an international team of academics from the Universities of Cambridge, Oxford and Sheffield in the UK; the Swiss Federal Institute of Technology in Zurich, Switzerland; and the Weizmann Institute of Science in Israel. The study was led by Professor Tuomas Knowles, a Fellow of St John’s College at the University of Cambridge and co-director of the Centre for Protein Misfolding Diseases.

    “It is a common problem in a range of areas of great practical importance to have active molecules that possess beneficial properties but are challenging to stabilise for storage,” Knowles said. “A conceptually simple, but powerful, solution is to put these inside tiny capsules. Such capsules are typically made from synthetic polymers, which can have a number of drawbacks, and we have recently been exploring the use of fully natural materials for this purpose. We are particularly excited by the potential to replace plastics with sustainable biological materials for this purpose.”

    Dr. Ulyana Shimanovich, who performed a major part of the experimental work as a St John’s College Post-Doctoral research associate, and now works at the Weizmann Institute of Science, said: “Silk is a fantastic example of a natural structural material. But we had to overcome the challenge of controlling the silk to the extent that we could mould it to our designs which are more than a factor of a thousand smaller than the natural silk cocoons.”

    Dr. Chris Holland, co-worker and head of the Natural Materials Group in Sheffield added: “Silk is amazing because whilst it is stored as a liquid, spinning transforms it into a solid. This is achieved by stretching the silk proteins as they flow down a microscopic tube inside the silkworm.”

    To imitate this, the researchers created a tiny, artificial spinning duct, which copies the natural spinning process to cause the unspun silk to form into a solid. The researchers then worked out how to control the geometry of this self-assembly in order to create microscopic shells.

    Making conventional synthetic capsules can be challenging to achieve in an environmentally friendly manner and from biodegradable and biocompatible materials. Silk is not only easier to produce; it is also biodegradable and requires less energy to manufacture.

    “Natural silk is already being used in products like surgical materials, so we know that it is safe for human use,” said Professor Fritz Vollrath, head of the Oxford Silk Group. “Importantly, the approach does not change the material, just its shape.”

    Silk micrococoons could also expand the range and shelf-life of proteins and molecules available for pharmaceutical use. Because the technology can preserve antibodies, which would otherwise degrade, in cocoons with walls that can be designed to dissolve over time, it could enable the development of new treatments against cancer, or neurodegenerative conditions such as Alzheimer’s and Parkinson’s Diseases.

    To explore the viability of silk microcapsules in this regard, the researchers successfully tested the micrococoons with an antibody that has been developed to act on alpha-synuclein, the protein that is thought to malfunction at the start of the molecular process leading to Parkinson’s Disease. This study was carried out with the support of the Cambridge Centre for Misfolding Diseases, whose research programme is focused on the search for ways of preventing and treating neurodegenerative conditions such as Alzheimer's and Parkinson's diseases. Professor Chris Dobson, Director of the Centre and Master of St John's, who is also a co-author of this paper, said: "The results of this study are extremely exciting as they suggest that many potentially therapeutic molecules that could not normally be taken forward into the clinic because of their lack of stability, could become life-changing drugs using these encapsulation techniques."

    “Some of the most efficacious and largest selling therapeutics are antibodies,” Michele Vendruscolo, co-director of the Cambridge Centre for Misfolding Diseases, said. “However, antibodies tend to be prone to aggregation at the high concentrations needed for delivery, which means that they are often written off for use in treatments, or have to be engineered to promote stability.”

    “By containing such antibodies in micrococoons, as we did here, we could significantly extend not just their longevity, but also the range of antibodies at our disposal,” Knowles said. “We are very excited by the possibilities of using the power of microfluidics to generate entirely new types of artificial materials from fully natural proteins.”

    The study, ‘Silk micrococoons for protein stabilisation and molecular encapsulation’, is published in Nature Communications.

    Researchers have manufactured microscopic versions of the cocoons spun by silkworms, which could be used to store sensitive proteins and other molecules for a wide range of uses.

    The silkworm spins a silk cocoon around itself for protection during metamorphosis. Researchers have found that silk can also protect other precious molecular cargo.


    07/19/17: Open Cambridge 2017

    A showcase of Eddington, the University of Cambridge’s new district in the city, takes place as part of Open Cambridge on Saturday 9 September 2017.

    This will be part of the Open Cambridge weekend, now celebrating its tenth year, which showcases a diverse range of hidden architectural gems and stunning spaces that are normally closed to the public or charge entry fees. This year’s programme features 97 events, including Jane Austen at King's, an artist-led walking tour of Parker's Piece and the Cambridge Mosque open day.

    Over 25 different events will make up Open Eddington, exploring the ethos of architecture and sustainability that sits at the heart of the development.

    This year’s festival also includes India Unboxed, exploring more than 150 years of close relations between India and Cambridge.

    This year’s Open Cambridge weekend will showcase Eddington, the University of Cambridge’s new district. Open Eddington will feature over 25 different events led by the renowned architects, professionals, and development partners who have all contributed to creating this new place.

    Eddington is the heart of this new district, which has been known as the North West Cambridge Development, and is located between Madingley Road, Huntingdon Road and the M11. The opening of Eddington marks years of planning and construction with the first phase of the project now open to residents and the local and wider community.

    Heather Topel, Project Director of the North West Cambridge Development said: “We are thrilled to present a whole day of activities and events for the public that illustrate the depth and breadth of our project ethos.  The University’s ambitions to create a sustainable community of exceptional design quality at Eddington have been explicit, and at Open Eddington, we are delighted that so many of our project architects, consultants and partners will share with you the detail of their involvement in this exemplar development. We hope many of our friends, neighbours and supporters will take the time to join us for this occasion.”

    Sustainability

    Innovative and unique infrastructure has been integrated across Eddington to help residents lead more sustainable lives. At Open Eddington, there will be a range of fascinating events that explain how these various features work. Visitors can go behind the scenes of the energy centre, see the unique underground waste and recycling system in action, take a virtual tour of the building site via Oculus Rift to explore renewable energy, learn about the rainwater harvesting scheme, or go on a walking tour with the project ecologist. Any of these events will open visitors’ minds to how sustainability has been considered in place-making.

    Architecture and Design

    Eddington has been designed by some of the best local, national and international architectural practices. Talks and walking tours will be given by Jonathan Rose from AECOM who developed the award-winning masterplan, as well as the principals from the architects who have worked on the scheme including: Wilkinson Eyre, Mole, Stanton Williams, Mecanoo, R H Partnership, Marks Barfield, Alison Brooks and Pollard Thomas Edwards.

    There will also be an exclusive behind-the-scenes tour of the Storey’s Field Centre by MUMA, which is under construction.

    Participation

    Families can enjoy a range of events including a Roman Street Party inspired by the archaeology of the site, workshops that inspire children to become Artscapers and co-create communities through Art, or generate the energy required to watch a short film at the pop-up Cycle Cinema.

    Cycle tours and self-guided walking tours will also be available as part of Open Eddington.

    Travel

    The Universal Bus will be operating a free service on Saturday 9 September as part of Open Eddington. The service runs from Addenbrooke’s to the railway station, city centre and to Eddington. For more information visit: http://www.go-whippet.co.uk/

    Eddington is a ten-minute cycle journey from central Cambridge. There will be cycle tours available and downloadable maps for journey planning.

    India Unboxed

    In addition to Open Cambridge’s usual array of events and the Open Eddington programme, the India Unboxed series examines the close relationship India and Cambridge have had for more than 150 years. Rooted in the University of Cambridge Museums collections, the programme explores themes of identity and connectivity for audiences both from the UK and India.


    See Cambridge’s newest district for the first time as part of the Open Cambridge weekend 2017

    Booking information

    The Open Cambridge weekend takes place on 8-9 September, and bookings open at 11am on 14 August. For details, see www.opencambridge.cam.ac.uk or call 01223 766766. All events listed as part of Open Eddington are free. 



    The “Cambridge2Cambridge” cyber security competition, backed by government and industry, is the brainchild of the University of Cambridge and the Massachusetts Institute of Technology (MIT) in the US, and will see talented students pitted against each other in a three-day showdown.

    In total, 110 students from 25 universities in the UK and US will form mixed transatlantic teams and battle against a fictional rogue state in the lifelike cyber security competition, backed by the National Cyber Security Centre (NCSC) and the Cabinet Office.

    The annual event is now in its second year with prize money up for grabs for the winners. It will be held from 24-26 July at Trinity College, Cambridge.

    With major cyber-attacks on the increase, according to the NCSC, the need for cyber security experts is more important than ever before.

    Professor Frank Stajano, Head of the Academic Centre of Excellence in Cyber Security Research at Cambridge’s Computer Laboratory and the co-founder of Cambridge2Cambridge, said that the competition has been designed to promote greater cyber security collaboration between the UK and USA, and to give students the platform to explore creative ways to combat global cyber-attacks.

    “The aim of the competition is also to bring together different individuals in a fun and inclusive environment, where they can apply their cyber security abilities in a collaborative and competitive setting, allowing students to implement the skills they have been taught, while learning new ones in the process,” he said.

    It also gives budding cyber enthusiasts the opportunity to meet like-minded individuals, and learn more about careers in the sector by introducing them to key players in the industry and government.

    https://cambridge2cambridge.csail.mit.edu/

    A major cyber security challenge, aimed at educating and inspiring the next generation of cyber defenders from across the UK and US, will be held at the University of Cambridge next week. 

    Inter-ACE Cyber Challenge 2017



    The researchers, from the University of Cambridge, used existing measurements of carbon and helium from more than 80 volcanoes around the world to determine where the carbon originates. Carbon and helium coming out of volcanoes can either come from deep within the Earth or be recycled near the surface, and measuring the chemical fingerprint of these elements can pinpoint their source. When the team analysed the data, they found that most of the carbon coming out of volcanoes is recycled near the surface, in contrast with earlier assumptions that the carbon came from deep in the Earth’s interior. “This is an essential piece of the geological carbon cycle puzzle,” said Dr Marie Edmonds, the senior author of the study.

    Over millions of years, carbon cycles back and forth between Earth’s deep interior and its surface. Carbon is removed from the surface by processes such as the formation of limestone and the burial and decay of plants and animals, which allows atmospheric oxygen to build up at the surface. Volcanoes are one way that carbon is returned to the surface, although the amount they produce is less than a hundredth of the carbon emissions caused by human activity. Today, the majority of carbon from volcanoes is recycled near the surface, but it is unlikely that this was always the case.

    Volcanoes form along large island or continental arcs where tectonic plates collide and one plate slides under the other, such as the Aleutian Islands between Alaska and Russia, the Andes of South America, the volcanoes throughout Italy, and the Mariana Islands in the western Pacific. These volcanoes have different chemical fingerprints: the ‘island arc’ volcanoes emit less carbon which comes from deep in the mantle, while the ‘continental arc’ volcanoes emit far more carbon which comes from closer to the surface.

    Over hundreds of millions of years, the Earth has cycled between periods of continents coming together and breaking apart. During periods when continents come together, volcanic activity is dominated by island arc volcanoes; when continents break apart, continental arc volcanoes dominate. This back and forth systematically changes the chemical fingerprint of carbon reaching Earth’s surface over geological time, and can be measured through the different isotopes of carbon and helium.

    Variations in the isotope ratio, or chemical fingerprint, of carbon are commonly measured in limestone. Researchers had previously thought that the only thing that could change the carbon fingerprint in limestone was the production of atmospheric oxygen. As such, the carbon isotope fingerprint in limestone was used to interpret the evolution of habitability of Earth’s surface. The results of the Cambridge team suggest that volcanoes played a larger role in the carbon cycle than had previously been understood, and that earlier assumptions need to be reconsidered.
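    Isotope fingerprints of this kind are conventionally reported in “delta” notation, which expresses a sample’s 13C/12C ratio as a per-mil deviation from a reference standard. As a rough illustration of the arithmetic (not taken from the paper; the default reference ratio shown is the commonly quoted PDB value), the conversion can be sketched as:

    ```python
    def delta_c13(r_sample, r_standard=0.0112372):
        """Per-mil (parts per thousand) deviation of a sample's 13C/12C
        ratio from a reference standard (classic PDB ratio by default)."""
        return (r_sample / r_standard - 1.0) * 1000.0

    # A sample very slightly enriched in 13C relative to the standard:
    print(round(delta_c13(0.0112400), 2))  # ~0.25 per mil
    ```

    Positive values indicate enrichment in the heavy isotope relative to the standard, and negative values depletion; because carbon from different geological reservoirs occupies distinct ranges on this scale, the measured value helps identify its source.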

    “This makes us fundamentally re-evaluate the evolution of the carbon cycle,” said Edmonds. “Our results suggest that the limestone record must be completely reinterpreted if the volcanic carbon coming to the surface can change its carbon isotope composition.”

    A great example of this is in the Cretaceous Period, 144 to 65 million years ago. During this time period there was a major increase in the carbon isotope ratio found in limestone, which has been interpreted as an increase in atmospheric oxygen concentration. This increase in atmospheric oxygen was causally linked to the proliferation of mammals in the late Cretaceous. However, the results of the Cambridge team suggest that the increase in the carbon isotope ratio in the limestones could be almost entirely due to changes in the types of volcanoes at the surface.

    “The link between oxygen levels and the burial of organic material allowed life on Earth as we know it to evolve, but our geological record of this link needs to be re-evaluated,” said co-author Dr Alexandra Turchyn, also from the Department of Earth Sciences.

    The research was funded by the Alfred P. Sloan Foundation, the Deep Carbon Observatory and the European Research Council.

    Reference:
    Emily Mason, Marie Edmonds, Alexandra V. Turchyn. ‘Remobilization of crustal carbon may dominate volcanic arc emissions.’ Science (2017). DOI: 10.1126/science.aan5049.

    Inset Image: Schematic diagram to show the possible sources of carbon in a subduction zone volcanic system.

    Researchers have found that the formation and breakup of supercontinents over hundreds of millions of years controls volcanic carbon emissions. The results, reported in the journal Science, could lead to a reinterpretation of how the carbon cycle has evolved over Earth’s history, and how this has impacted the evolution of Earth’s habitability. 

    ISS013-E-24184 (23 May 2006) --- Eruption of Cleveland Volcano, Aleutian Islands, Alaska is featured in this image photographed by an Expedition 13 crewmember on the International Space Station.

    Creative Commons License
    The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


    The East Anglian fens with their flat expanses and wide skies, a tract of some of the UK’s richest farmland, are invariably described as bleak – or worse. Turn the clock back 1,000 years to a time when the silt and peat wetlands were largely undrained, and it’s easy to imagine a place that defied rather than welcomed human occupation.

    Historians have long argued that during the ‘dark’ ages (the period between the withdrawal of Roman administration in around 400 AD and the Norman Conquest in 1066) most settlements in the region were deserted, and the fens became an anarchic, sparsely inhabited, watery wilderness.

    A new interdisciplinary study of the region by a leading landscape archaeologist not only rewrites its early history across those six centuries but also, for the first time anywhere in Europe, offers a detailed view of the settlement and agricultural management of early medieval wetland landscapes.

    Susan Oosthuizen’s The Anglo-Saxon Fenland (published last month by Windgather Press) is a prequel to the geographer Clifford Darby’s definitive study of the medieval fen, published in 1940. She draws on her interest in the relationship between early communities and their landscapes – in particular their management of herds of cattle across extensive areas of shared grazing.

    Oosthuizen suggests that, rather than undergoing dramatic change after 400 AD, communities continued to live around the fen edge and on ‘islands’ of higher ground rising above the peat wetlands just as their ancestors had. Her evidence lies in the recent boost in archaeological discovery.

    A five-fold increase in excavations since new planning guidance was issued in 1990, and the introduction of the Portable Antiquities Scheme in 1997 for recording finds by the public, have transformed the volume of archaeological material across Britain – including the windswept fens.

    As a result, the ‘dark ages’, as the period was often described, with its connotations of backwardness, is now more commonly called the ‘early medieval’ period – a name which suggests less of a disjuncture between eras that appear instead to have unfolded seamlessly.

    In The Anglo-Saxon Fenland, Oosthuizen argues that this new evidence shows there is little to support the idea that the fenland was anything but continuously occupied by settled, stable communities during the period 400 to 900 AD.  

    Who were ‘the Anglo-Saxons’ and what do we know about the fens between 400 and 900? In her prologue Oosthuizen addresses these key questions with admirable clarity. Her answers set the stage for an exploration of a fertile wetland exploited for millennia by local communities and threaded through by a network of rivers that allowed incomers from across the North Sea to penetrate as far as the English midlands.

    Since the early 19th century, it has been assumed that during the 5th and 6th centuries indigenous British communities were removed altogether or reduced to servitude by incomers arriving from north-west Europe – the Anglo-Saxons – who lived in separate settlements.

    There is now, however, a growing realisation among archaeologists that it is impossible to identify ‘Romano-British’ and ‘Anglo-Saxon’ communities on the basis of material culture, the things that people used every day. “Settlements, fields and artifacts can be distinguished by status,” argues Oosthuizen, “but not by the cultural background of the people to whom they belonged.”

    She writes: “The evidence from fenland shows that newcomers were assimilated into late British communities; there was no displacement of populations nor establishment of separate communities.” The distribution, for example, of Old English, vernacular Latin and (to a lesser extent) British Celtic place-names across southern England suggests that most early medieval people were bi- or even tri-lingual. Fenland was no different.

    As Oosthuizen points out, being able to speak several languages confers obvious advantages, widening opportunities for all manner of transactions. It’s highly probable that the inhabitants of Walsoken and Chatteris, to name just two fenland villages, would have spoken both Old English and another language, switching from one to the other according to interlocutor and topic.

    Basing her arguments on pollen analysis, archaeological evidence, the longevity over almost 1,500 years of rights of common pasture in the fen, the etymology of place names, and the absence of evidence to the contrary, Oosthuizen proposes that new arrivals were assimilated within the indigenous Romano-British communities, sharing livelihoods within the same landscape, their various languages and cultures mingling and merging.

    The Anglo-Saxon Fenland paints a portrait of communities whose agricultural economies were based on common rights in shared wetland resources that belonged to the whole of the small territories among which they were divided. Arrangements by which the landscape’s bounty was apportioned took account of the needs of both local communities and the land itself, breathing life into the adage that the old ways are often the best ways, based on the wisdom that comes with practical experience and knowledge passed down from one’s elders.

    Dairy cattle, for example, were allowed first access to spring pasture; as providers of milk their needs for optimum nutrition were greatest. On the other hand, cattle were barred from land at times when their hooves would damage the soil structure vital to its long-term health.    

    Oosthuizen writes: “Timetabling [grazing by the dairy herd] was focused on sub-dividing the fens to allow for their rotation for different uses in different months, whose objectives were to maintain the quality of the grazing, to sustain the health of the herd, to ensure equitable exploitation among right holders, to maximise production, and to assure the long term sustainability of fen pastures.”

    A similar checklist of priorities, Oosthuizen points out, underpins modern conservation advice on floodplain water meadows, which are best maintained on a regime that includes annual mowing, use of livestock from August to keep the grass short, maintenance of boundaries, clearing of watercourses, and control of invasive weeds.

    There is abundant evidence - in place names, ditches and banks, land- and water-management practices, and (once they began) records of agreements based on centuries-old tried-and-tested farming methods - that people managed the landscape not just to meet their immediate needs but to assure the long-term sustainability of the wetland resources on which their livelihoods depended.

    Detail of a late 16th century copy of an earlier map, possibly medieval in origin. It shows the area around Four Gotes in Tydd St Giles, near Wisbech. (Courtesy of Wisbech & Fenland Museum)

    Fen dwellers made incremental adjustments to the ways in which they collectively exploited and safeguarded the fenland’s natural resources, adapting to water levels that slowly rose as a result of climate change. 

    In the undrained fen, water was both friend and foe. Serious flooding was a destructive force. Yet periodic (but relatively brief) seasonal inundation of pasture land produces grass not just for grazing but also to make the nutritious hay on which cattle thrived during the winters. Perhaps as early as 650, fen communities were already digging ditches to redirect excess water away from their pastures.

    In 1618, the commoners at Cottenham described how the right amount of flooding, at the right time, could produce the white fodder which the cattle like best and that “those grounds that lie lowest, and are oftenest and longest overflown in the winter season are the most fertile grounds and yield the best yearly value”.

    Detailed knowledge of the varying characteristics that depended on the degree of wetness in each part of the fenland enabled fen-dwellers to maximise its productivity through seasons wet and dry, and to make the most of opportunities for hunting, trapping and fishing – wildfowl and eels for the pot.

    With The Anglo-Saxon Fenland Oosthuizen reveals a society whose origins could be found in prehistoric Britain, which had evolved through the four centuries of Roman administrations, and continued to develop thereafter. The rich and complex history of the fen region shows, she argues, a traditional social order evolving, adapting and innovating in response to changing times.

    In piecing together evidence from a wide range of sources, she illuminates how early medieval communities interacted with each other, with newcomers, and – especially – how those relationships were intertwined with their management of the pastoral landscapes on which their livelihoods depended.

    The Anglo-Saxon Fenland by Susan Oosthuizen is published by Windgather Press.

    What was life in the fens like in the period known as the dark ages?  Archaeologist Susan Oosthuizen revisits the history of an iconic wetland in the light of fresh evidence and paints a compelling portrait of communities in tune with their changeable environment. In doing so, she makes an important contribution to a wider understanding of early medieval landscapes.

    Cattle grazing in the River Ouse water meadows south of Ely


    The two collaborations are focused on food security in India and public health in Bangladesh and will see researchers from the UK and developing countries working together as equal partners.

    The awards are part of the Global Challenges Research Fund, which aims to build on research knowledge in the UK and strengthen capacity overseas to help address challenges, informed by needs expressed in developing countries.

    Jo Johnson, Minister for Universities and Science, said: “From healthcare to green energy, the successful projects receiving funding today highlight the strength of the UK’s research base and our leadership in helping developing countries tackle some of the greatest global issues of our time.

    “At a time when the pace of scientific discovery and innovation is quickening, we are placing science and research at the heart of our Industrial Strategy to build on our strengths and maintain our status as science powerhouse.”  

    Andrew Thompson, GCRF Champion at Research Councils UK, said: “The 37 projects announced today build research capacity both here in the UK and in developing countries to address systemic development challenges, from African agriculture to sustainable cities, clean oceans, and green energy, to improved healthcare, food security, and gender equality.”

    TIGR2ESS (Transforming India’s Green Revolution by Research and Empowerment for Sustainable food Supplies)

    Lead: Professor Howard Griffiths (Department of Plant Sciences)

    Talk of a second Green Revolution has been around for a while. The first – in India and other developing countries, in the 1960s – brought a massive increase in crop production that sustained their mushrooming populations. But now there are new pressures – not just the need to produce even more food, but to reduce the damage done by excessive use of pesticides, fertiliser and water in the face of climate change.

    TIGR2ESS, a collaboration between UK and Indian scientists, seeks to frame the big question – how to bring about a second Green Revolution – in all its breadth and depth. India is developing fast – agriculture needs to take account of urbanisation, for example, which has drawn so many away from the land. Smallholder farmers – particularly women – need smart technologies to sustain crop yields, and improve health and nutrition.

    The TIGR2ESS programme will assess these options, as well as supporting basic research programmes, and providing advice to local communities. There will be many opportunities for academic exchanges, mentoring and career development for scientists from both countries. Links with the relevant government ministries in India, plus industrial connections built into the programme, will hopefully turn the best recommendations into reality. 

    “We are extremely pleased that the TIGR2ESS programme will help to deliver our vision for partnerships with institutions in India to improve crop science and food security,” says Professor Howard Griffiths, Co-Chair of the University of Cambridge’s Strategic Initiative in Global Food Security.

    “Agriculture is feminizing. We need to ensure that state resources and services, and knowledge resources, are equally accessible to women farmers,” adds Dr V Selvam, MS Swaminathan Research Foundation, India, one of the collaborators. 

    CAPABLE (Cambridge Programme to Assist Bangladesh in Lifestyle and Environmental risk reduction)

    Lead: Professor John Danesh (Department of Public Health and Primary Care)

    Gathering a big group of people and studying their health in the long term can uncover game-changing facts. The British Doctors’ Study, for example, which began in 1951, revealed that smoking causes lung cancer. Imagine if the same could be done in a country facing a perfect storm of chronic health problems.

    Bangladesh is admired worldwide for its success in cutting child mortality and fertility rate, yet it faces an onslaught of chronic diseases that arise from an interplay of factors ranging from arsenic-contaminated drinking water to iron-deficient foods and from air pollution to the rise of the western lifestyle.

    CAPABLE has the ambitious goal of recruiting 100,000 people from landscapes ranging from the green paddy fields of rural Bangladesh to the slums of the densest city in the world – Dhaka. From their data, engineers, sociologists, health researchers and a host of other disciplines will try to understand how the risk factors interact – and build a model that can be used to test interventions before they are implemented.

    “We aim to help develop simple, scalable and effective solutions to control major environmental and lifestyle risk factors in Bangladesh,” says Scientific Director of the CAPABLE programme Dr Rajiv Chowdhury from the Department of Public Health and Primary Care at the University of Cambridge.

    Two major research collaborations led by the University of Cambridge have been awarded almost £15 million in funding, the Minister of State for Universities and Science, Jo Johnson MP, announced today during a visit to Cambridge’s Sainsbury Laboratory.


    Our lives benefit from social networks: the contact and dialogue between family, friends, colleagues and neighbours. However, these networks can also cost lives by transmitting infection or misinformation, particularly in developing nations.

    In fact, when there is an outbreak of disease, or of a damaging rumour that hinders uptake of vaccination, the network through which it spreads needs to be broken up – and fast.

    But who are the people with most connections – the ‘hubs’ in any social network – that should be targeted with inoculating drugs or health education in order to quickly isolate a contagion?

    Information about social networks in rural villages in the developing world is costly and time-consuming to collect, and usually unavailable. So current immunisation strategies target people with established community roles: healthcare workers, teachers, and local officials.

    Now, Cambridge researchers have for the first time combined networking theories with ‘real world’ data collected from thousands of rural Ugandan households, and shown that a simple algorithm may be significantly more effective at finding the highly connected ‘hubs’ to target for halting disease spread. 

    The ‘acquaintance algorithm’ employed by researchers is remarkably simple: select village households at random and ask who in their network is most trusted for medical advice.
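    The idea resembles the classic ‘acquaintance immunization’ heuristic from network theory: because targets are reached by following a connection, well-connected hubs get named far more often than they would be by uniform sampling. The sketch below is a minimal illustration of that idea, not the study’s actual implementation; the toy village network and all names in it are invented.

```python
import random

def acquaintance_sample(network, n_targets, rng):
    """Pick random households, ask each to name a trusted contact,
    and target the *named* contacts. Hubs, having many neighbours,
    get named disproportionately often."""
    targets = set()
    households = list(network)
    while len(targets) < n_targets:
        asked = rng.choice(households)
        if network[asked]:                      # must name someone
            targets.add(rng.choice(network[asked]))
    return targets

# Toy village: one well-connected 'hub' household (0) that the nine
# others turn to for advice. Uniform sampling would find the hub only
# 10% of the time; acquaintance sampling finds it whenever any of the
# nine ordinary households is the one asked.
village = {0: list(range(1, 10)), **{i: [0] for i in range(1, 10)}}
rng = random.Random(42)
hub_hits = sum(0 in acquaintance_sample(village, 1, rng)
               for _ in range(1000))
```

On this toy network the hub is named roughly nine times out of ten, against one in ten for picking households uniformly at random – the asymmetry that makes the strategy work without any database of roles.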

    Researchers were surprised to find that the most influential people in social networks were very often not those with obvious positions in a community. As such, these valuable ‘hubs’ are invisible to drug administration programmes without the algorithmic approach.

    “Everyone is a node in a social network. Most nodes have just a few connections. However, a small number of nodes have the majority of connections. These are the hubs we want to uncover and target in order to intentionally cause failure in social networks spreading pathogens or damaging behaviour,” says lead researcher Dr Goylette Chami, from Cambridge’s Department of Pathology.

    “It was striking to find that important village positions may be best left untargeted for interventions seeking to stop the spread of pathogens through a rural social network,” says Chami.   

    In the study, published today in the journal PNAS, the researchers write that this simple strategy could be particularly effective for isolating households that refuse to take medicine, so that they don’t endanger the rest of a community with infection. 

    To control disease caused by parasitic worm infections, for example, at least three quarters of any given community need to be treated. “The refusal of treatment by a few people can result in the destabilisation of mass drug administration programmes that aim to treat 1.9 billion people worldwide,” says Chami.

    An average of just 32% of households (‘nodes’) selected by the acquaintance algorithm need to be provided with health education (and ‘removed’ from a network) to reach the disease control threshold for an entire community. With traditional role-based targeting, the average needed is much larger: some 54%.

    “We discovered that acquaintance algorithms outperformed the conventional field-based approaches of targeting well-established community roles for finding individuals with the most connections to sick people, as well as isolating the spread of misinformation,” says Chami.

    “Importantly, this simple strategy doesn’t require any information on who holds which role and how to reach them. No database is needed. As such, it is easy to deploy in rural, low-income settings.” 

    “In an ideal world, everyone would be treated,” says Chami. “However, with limited resources, time and information, finding the best connected neighbours, the ‘hubs’, and removing them through treatment, looks to be the quickest way to fragment a network that spreads infections, and to render the most people safe.”  
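    Why removing hubs fragments a network so effectively can be shown with a toy computation: taking out a handful of hubs shatters a hub-and-spoke network, while removing the same number of peripheral households barely dents it. The three-hub village below is invented purely for illustration.

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component once the `removed`
    nodes (treated households) are taken out of the network."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 1, deque([start])        # breadth-first search
        seen.add(start)
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
                    size += 1
        best = max(best, size)
    return best

# Three hub households, each trusted by 20 others; hubs know each other.
adj = {f'hub{i}': [f'hub{j}' for j in range(3) if j != i] for i in range(3)}
for i in range(3):
    for k in range(20):
        spoke = f's{i}_{k}'
        adj[spoke] = [f'hub{i}']
        adj[f'hub{i}'].append(spoke)

intact = largest_component(adj, set())                        # 63 households
after_hubs = largest_component(adj, {'hub0', 'hub1', 'hub2'})
after_spokes = largest_component(adj, {'s0_0', 's1_0', 's2_0'})
```

Removing the three hubs isolates every remaining household (largest component of 1), while removing three ordinary households leaves 60 of the 63 still connected – the fragmentation effect Chami describes.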

    Chami and colleagues from Cambridge and the Ugandan Ministry of Health collected data on social and health advice networks from over 16,000 people in 17 villages across rural Uganda. They also collected data on networks of disease using reports of diarrhoea as a proxy for infection spread – particularly relevant to recent large-scale cholera outbreaks.

    To do this, Chami built a survey app from open source code and loaded it on to 76 Google nexus tablets. The team then trained a number of individuals from the local villages to help them go door to door.

    Adds Chami: “This kind of ‘network theory’ approach to public health in the developing world, and the use of acquaintance algorithms, if tested in randomised controlled trials, may increase compliance to treatments and inform strategies for the distribution of vaccines.”   

    An innovative new study takes a network theory approach to targeted treatment in rural Africa, and finds that a simple algorithm may be more effective than current policies, as well as easier to deploy, when it comes to preventing disease spread – by finding those with “most connections to sick people”.  

    A fishing village along Lake Victoria in the Mayuge District of Uganda, close to where researchers gathered data for the latest study.


    Over half of the calories consumed by the human race come from just three species of grain – wheat, rice and maize – yet in biological terms all are highly unnatural. They’ve been bred, generation after generation, to have grains that are super-sized in relation to their stems. This is perfect for maximising crop yields and profits, but not so perfect if growing conditions shift in a changing climate.

    Professor Martin Jones, Head of Cambridge’s Department of Archaeology and Anthropology, is far more interested in a group of around 20 species of small-grained cereals that are generically termed millets. They look like wild grasses, don’t need much water, grow quickly and have a good nutritional balance. Yet, until recently, they have been largely overlooked by the Western world as a food source for humans, and are most commonly found in packets of birdseed.

    Now Jones has brought attention to this ancient grain as a means of mitigating the boom–bust nature of harvests. His work has contributed to a growing market in Asia for high-quality millet from Aohan, Inner Mongolia, and the cereal’s potential is attracting interest from big multinational companies.

    All of this has come from Jones’ archaeological interest in ancient farming practices. Searching for evidence of millet in the Neolithic, he discovered two key species – broomcorn and foxtail millet – in the prehistoric crop record in Europe, despite both being botanically East Asian. By piecing together the archaeological evidence, it became clear that Asian millets were coming into Europe, and that wheat and barley from Europe were moving into Asia.

    “This wasn’t a time when farming was transitioning from hunter-gathering to agriculture,” says Jones. “What we were seeing was a move from single-season, single-crop agriculture to multi-season, multi-crop agriculture.” Hundreds of years ago the Asian millets were being used in flexible and innovative ways, and became among the most geographically widespread crops in the world. By using crops from other regions, the farmers could add another growing season and significantly increase their yields.

    Jones’ archaeological work took him to a new site in Aohan when evidence emerged of local millet cultivation in Neolithic times. There, his Chinese colleagues found carbonised particles of foxtail and broomcorn millet dating from 7,700 to 8,000 years ago, which proved to be the earliest record of their cultivation in the world.

    But it was his conversations with local farmers that radically altered his perception of the grains. “When we first visited Aohan it could sometimes be hard to tell whether the millet was growing as a crop or as a weed. We asked the locals, and rather than tell us it was a stupid question – that it was irrelevant whether it was crop or weed – they politely answered a different one. They told us what it tasted like and when they last ate it. These people had lived through hard times, famines, so to survive they had developed more open ideas. I realised then that I’d come with concepts that seemed universal but just weren’t relevant to the lives of people in contemporary northern China.”

    The development of their farming practices, like those of the ancient farmers, was driven by the need for resilient plants that could ripen to harvest in challenging years, to ensure food security for the population. “What archaeologists can’t reconstruct is how much the early farmers understood the significance of what they were doing,” says Jones, “but this – and what we’ve heard from today’s Aohan peasant farmers – is something we can learn from in addressing our current food challenges.”

    “With harvests and growing conditions intimately linked, the changes in climate now happening across the world pose a real threat to food security in certain regions,” adds Jones. “To get the unusually big grain size we see in wheat, rice and maize, a lot of the properties that give the plants inherent resilience have been sacrificed. Being geared towards producing heads of large grains is terrific if you can guarantee all the water, nutrients and sunlight they need. But the crops are much more prone to complete failure if something changes, like the amount of rainfall in a growing season. It’s like putting all your eggs in one basket.”

    For farming systems where there’s no financial infrastructure providing subsidies and grants to help farmers control the growing conditions through irrigation, pesticides and other methods, inherent crop resilience can be vital to a successful harvest.

    “Millets have an unparalleled genetic diversity both because of their long history of cultivation, and because they’ve been grown in so many regions of the world, including very harsh ones,” says Jones. “This means they’ve retained the wild traits that give them resilience to changes in growing conditions. They don’t need much water, they grow quickly, and they have a great nutritional balance.”

    After his work demonstrated the importance of the Asian millets and their origins in northern China, the Food and Agriculture Organization of the United Nations recognised the Aohan Dryland Farming System as a ‘Globally Important Agricultural Heritage Systems’ site. Aohan millet is now badged as a high-quality product and sold in large quantities to the domestic Chinese market, where it is a staple food. This year, Jones was among those awarded a medal from the Aohan government, not only for raising the profile of Aohan millet but also for helping the farmers to turn around the fate of this once overlooked crop, with support from their local government.

    “I’m delighted that the Aohan government found such a useful and practical connection to academic research,” says Jones. “For me, talking to the farmers and local people in Inner Mongolia has taught me that their knowledge about plants is enormous.”

    Given the increasing number of extreme weather events, and a growing population demanding a more varied diet, the world is facing a potential crisis in terms of food security. Aid agencies in Africa are becoming more aware of the practice of growing millet alongside the central maize crop as a safeguard against total harvest failure and are supporting farmers in Africa to continue to do this. And UK producers are showing interest in millet as a raw ingredient in branded consumer foods to help people improve their health and wellbeing.

    “A huge amount of research linked to food security has focused on the really major crops,” says Jones. “Millets have taught me that it’s worth shifting the focus. We may have a lot still to learn from our Neolithic predecessors.”

    Research funded by the European Research Council, the Natural Environment Research Council, the Wellcome Trust and the Leverhulme Trust.

    Archaeological research shows that our prehistoric ancestors built resilience into their food supply. Now archaeologists say ‘forgotten’ millet – a cereal familiar today as birdseed – has a role to play in modern crop diversity and in helping to feed the world’s population.



    In the early days of the computer, calculators were room-sized and public demand was low. Now, it’s the reverse. Digital technology has become smaller and faster, and our dependence on it has grown.

    We are almost desensitised to a stream of facts about the startling rate at which this is occurring. In 2016, IBM found that humans now create 2.5 quintillion bytes of data daily. From the start of this decade to its end, the world’s data will increase 50 times over.

    The basic building blocks of electronic devices, such as the transistor, work by moving packets of charge around a circuit. A single unit of charge is an electron, and its movement is governed by semiconductors, commonly made from silicon. But technology based on these principles is now reaching a point where it cannot get much smaller or faster. A paradigm shift is due.

    “There have been many failed attempts to oust silicon from its predominance,” reflects Professor Mark Blamire, Head of Materials Science at Cambridge. “Something has to be done because the technology can’t be scaled to smaller sizes for very much longer. It’s already a major source of power consumption. There’s no obvious competitor, so in a sense the opportunity is there.”

    Blamire and his colleague Dr Jason Robinson are leading several major programmes investigating one such competitor, known as superconducting spintronics.

    The launch of a UK-based programme last year provoked excitement within the scientific community. “Cambridge Uni spins up green and beefy supercomputer project,” announced British tech site The Register, for example. One reason in particular is that superconducting spintronics might address the eye-watering energy consumption of the huge server farms that handle internet traffic. Data centres account for 3% of the world’s electricity supply and about 2% of greenhouse gas emissions.

    The project combines two phenomena: superconductivity and spin. Superconductivity refers to the fact that at low temperatures some materials carry a charge with zero resistance. Unlike, for example, copper wires, which lose energy as heat, superconductors are therefore extremely energy efficient.

    ‘Spin’ is the expression for electrons’ intrinsic source of magnetism. Originally it was thought that this existed because electrons were indeed spinning, which turned out to be wrong, but the name stuck, and it is still used to describe the property in particles that makes them behave a bit like tiny bar magnets. Like a magnet, this property makes the electrons point a certain way; the spin state is therefore referred to as ‘up’ or ‘down’.

    Researchers have been using the magnetic moments of electrons to store and read data since the 1980s. At their most basic, spintronic devices use the up/down states instead of the 0 and 1 in conventional computer logic.

    Spintronics could also transform the way in which computers process information. The researchers envisage that instead of the devices moving packets of charge around, they will transmit information using the relative spin of a series of electrons, known as a ‘pure spin current’, and sense these using magnetic elements within a circuit.

    By eliminating the movement of charge, any such device would need less power and be less prone to overheating – removing some of the most significant obstacles to further improving computer efficiency. Spintronics could therefore give us faster, energy-efficient computers, capable of performing more complex operations than at present.

    To generate large enough spin currents for memory and logic devices, significant charge is required as an input, and the power requirements of this currently outweigh many of the benefits. Using a superconductor to provide that charge, given its energy efficiency, would present a solution. But the magnetic materials used to control spin within spintronic devices also interfere with superconductivity.

    This problem was thought insurmountable until, in 2010, Robinson discovered how to combine superconductors and spintronics so that they can work together in complete synergy. His team added an intervening magnetic layer (a material called holmium). By using this interface, they were able to preserve the delicate balance of electron pairing that’s needed to achieve superconductivity, but still managed to create a bias within the overall spin of the electrons.

    This, explains Robinson, “created a marriage that opens up the emerging field of superconducting spintronics.” Over the next five years, he and Blamire developed the field, and last year were awarded a major grant from the Engineering and Physical Sciences Research Council: “To lead the world in understanding the coupling of magnetism and superconductivity to enable future low energy computing technologies.” Robinson has since been awarded a second grant with Professor Yoshi Maeno, from Kyoto University, to broaden materials research on superconducting spintronics.

    Although still at an experimental stage, the project – which includes collaborators from Imperial College London, University College London and Royal Holloway London – is tackling questions such as how to generate and control the flow of spin in a superconducting system. And its scope is already expanding. “We have found more ways of achieving what we are trying to do than we originally dreamed up,” Robinson says.

    One example involves making potentially innovative use of superconductivity itself. In ‘conventional’ spintronics, spin is manipulated through the interactions between magnetic materials within the device. But Blamire has found that when a superconductor is placed between two ferromagnets, its intrinsic energy depends on the orientation of those magnetic layers. “Turning that on its head, if you can manipulate the superconducting state, you can control the orientation of the magnetic layers, and therefore the spin,” he says.

    Meanwhile, Robinson has led a study that for the first time enabled graphene, a material already recognised for its potential to revolutionise the electronics industry, to superconduct. This raises the possibility of using this extraordinary material, and other two-dimensional materials like it, in superconducting spintronics.

    Although approaches like this are still being tested, Blamire says that by 2021 the team will have developed sample logic and memory devices that fuse superconductivity and spin. These proof-of-concept models could, perhaps, be incorporated into a new type of computer processor. “It would be a huge step to get from there to a device that could be competitive,” he admits. “It’s not necessarily difficult, but it would require considerable investment.”

    The project is set up to enable industrial collaboration in the years to come. A key partner is the Hitachi Lab in Cambridge, while the project’s advisory board also features representatives from the Cambridge-based semiconductor firm ARM, and HYPRES, a digital superconductor company in the USA.

    Robinson points out that the UK – and Cambridge in particular – has historical strengths in research into superconductivity and spintronics, but adds that a “grand challenge” has long been needed to focus academic investigation on a meaningful partnership with industry.

    Leading low-energy computing into a post-semiconductor age is certainly grand. Silicon’s domination, after all, stretches from its eponymous valley in California, to a fen in Cambridge, a gulf in the Philippines and an island in Japan.

    Can the unlikely – not to say still primitive – marriage of spintronics and superconductivity really replace an electronic empire on which the sun never sets? “I suspect people had similar questions at the dawn of the semiconductor,” Robinson observes. “One shouldn’t lose sight of what we are doing here. We aren’t just trying to do something better; we are offering something entirely different and new.”

    Electron ‘spin’ could hold the key to managing the world’s growing data demands without consuming huge amounts of energy. Now, researchers have shown that energy-efficient superconductors can power devices designed to achieve this. What once seemed an impossible marriage of superconductivity and spin may be about to transform high performance computing.


    Creative Commons License
    The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


    The elusive, glorious past has been a dominant theme of recent political slogans and soundbites. President Trump’s rallying call to “make America great again” was met with outpourings of support on his campaign trail and, in the wake of the EU referendum, British politicians have invoked Britain’s history as a great global nation, saying that Brexit offers the opportunity to retake our place as a great world power.

    The tactic of alluding to an idealised point in the past, embodying all of a country’s best values, while glossing over times of hardship, is nothing new. In fact it’s as old as the hills, and at least as old as the seven hills of Ancient Rome.

    The first imperial regime of Rome started in 27 BC after a long period of civil unrest and brutal bloodshed. After Octavian defeated his rivals for power, Antony and Cleopatra, he cleverly rebranded himself as Augustus and began what would become a monarchic regime. He disguised this new order as the continuation and restoration of the Roman Republic and recast the historical and cultural memory of Rome to suit his own needs of self-preservation and self-promotion.

    Dr Elena Giusti, in the Faculty of Classics, is working on a book examining the part that the Aeneid, written by Roman poet Virgil, played in shaping the narrative of Emperor Augustus’ regime. Her book will contribute to a long-standing academic debate over the extent to which the poem is propagandistic.

    “My interest in Augustan poetry and its tendency to reshape traditions and place facts in a position of secondary, subsidiary importance was inspired by my experiences as a millennial growing up in Berlusconi’s Italy,” says Giusti. “My research focuses on what, after the events of 2016, we might dub ‘post-truth poetics’ – and a reading of Virgil’s Aeneid as a form of poetics and politics that aimed to shape public opinion by appealing to feelings rather than facts.”

    Virgil’s epic poem tells the story of Aeneas the Trojan hero and his struggle to found the Roman race. In Giusti’s view, Virgil was in all likelihood commissioned by Augustus to write the Aeneid, and there is certainly plenty to suggest that he wrote his epic work in compliance with the new regime.

    Giusti’s research explores Virgil’s exploitation of one historical period in particular, the age of the Punic Wars from 264 BC to 146 BC. This long-running conflict was fought between the Roman Republic and Carthage, an ancient city located on the coast of modern Tunisia.

    In alluding to the Wars, from which Rome emerged victorious, Virgil transports the reader back to a “mytho-historic” time of strength and glory in Rome’s past. The real threat from Carthage ended after the defeat of Hannibal in 201 BC, but Virgil uses Carthage to evoke metus hostilis or ‘fear of the enemy’. The poem aims to unite the Romans, shaken by the trauma of recent civil conflict, by reminding them of a time when the greatest threat was from a foreign power.

    “Civil conflict had brought Rome to its knees, and the use of Carthage in the poem appears to suit the ideological needs of foregrounding foreign conflict while whitewashing the reality of the strife against fellow citizens on which the principate itself was built,” explains Giusti. 

    In the Aeneid, Virgil presents Carthage through a thick layer of mythical and historical allusion, blending historical events and points in time to suit his political purpose. The blurred spatial and temporal narrative allows Virgil to mingle not only Ancient Greek mythology and the Punic Wars, but also the more recent historical events of the civil war, by making clear allusions to the history of Antony and Cleopatra in the relationship between Aeneas and Dido, Queen of Carthage.

    Virgil conjured a series of associations between the Punic Wars and recent Roman civil disorder. The effect was to ascribe to the latter the qualities of foreign conflict and interference by an external enemy. This fictional history, where it was the destruction of Carthage that brought about the crisis of the Republic, served to legitimise Augustus’ involvement in the civil war and vindicate him of any wrong-doing. 

    On the face of it, then, Virgil’s ‘post-truth poetics’ appear to overwhelmingly support the ambitions of Emperor Augustus to ‘make Rome great again’. However, Giusti also thinks that Virgil’s epic ultimately exposes the illusory nature of Augustan Rome and the suggestion that the new imperial order was founded in the wake of foreign rather than civil wars, which any learned reader in Rome at the time would have known to be ‘post-truth’.

    Just as a modern-day political speechwriter charged with harking back to the past with romanticised stories of empire might be required to suppress their better judgement and awareness of historical fact, Virgil appears to have negotiated a vision of the Punic Wars that he himself realised was little more than a nostalgic mirage.  

    Giusti argues that when Virgil starts to make Carthage look like Rome, and the Carthaginians like Romans, rather than the foreign enemy, memories of the recent civil wars are brought to the surface. Paradoxically, Virgil’s Carthage unveils the delusory nature of Augustus’ restoration of the Roman Republic and its mythical history. The artificiality of the image that Virgil conjures stimulates us to interrogate the legitimacy of the stories and messages encoded in the narrative.

    Perhaps this indicates the author’s frustration at writing in support of the Augustan regime. “We know that Virgil, like most Romans, suffered personally during the civil wars and that his family’s property was confiscated, although subsequently restored. To me it is clear from the poem that his primary historical concern was actually the traumatic memory of the civil wars and the subsequent subversion of Rome’s Republican institutions,” adds Giusti.

    Perhaps this image of an author conflicted in his work serves to explain why, according to legend, Virgil tried to have the Aeneid destroyed before he died. He was prevented from doing so by Augustus and his vision of “empire without end”.

    A political leader who seeks to make his nation “great again” and a time when ‘post-truth’ rhetoric appears to support political ambitions. Not Trump’s America, but Rome 2,000 years ago.



    Scientists have sequenced the entire genomes of 4,000-year-old Canaanite individuals who inhabited the Near East region during the Bronze Age, and compared these to other ancient and present-day populations. The results, published in the American Journal of Human Genetics, suggest that present-day Lebanese are direct descendants of the ancient Canaanites.

    The Near East is often described as the cradle of civilisation. The Bronze Age Canaanites, later known as the Phoenicians, introduced many aspects of society that we know today – they created the first alphabet, established colonies throughout the Mediterranean and were mentioned several times in the Bible.

    However, historical records of the Canaanites are limited. They were mentioned in ancient Greek and Egyptian texts, and in the Bible, which reports widespread destruction of Canaanite settlements and the annihilation of their communities. Experts have long debated who the Canaanites were genetically, what happened to them, who their ancestors were and whether they have any descendants today.

    In the first study of its kind, an international team of scientists have uncovered the genetics of the Canaanite people and a firm link with people living in Lebanon today. The team discovered that more than 90 per cent of present-day Lebanese ancestry is likely to be from the Canaanites, with an additional small proportion of ancestry coming from a different Eurasian population.

    The team, including researchers from Cambridge University’s Department of Archaeology and Anthropology, and led by the Wellcome Trust Sanger Institute, estimate that new Eurasian people mixed with the Canaanite population about 2,200 to 3,800 years ago at a time when there were many conquests of the region from outside.

    The analysis of ancient DNA also revealed that the Canaanites themselves were a mixture of local people who settled in farming villages during the Neolithic period and eastern migrants who arrived in the area around 5,000 years ago. 

    "Ancient DNA is becoming an indispensable tool for understanding population movements of the past. This study in particular provides previously inaccessible information about a group of people known only by surviving written accounts and interpretations of archaeological findings,” said Freddi Scheib, one of two Cambridge co-authors, along with Dr Toomas Kivisild.  

    “The fact that we can retrieve whole genomes from conditions not considered ideal for DNA preservation also shows how far the field has advanced technically," she said.

    In the study, researchers sequenced whole genomes of five Canaanite individuals who lived 4,000 years ago in a city known as Sidon in present-day Lebanon. Scientists also sequenced the genomes of 99 present-day Lebanese and analysed the genetic relationship between the ancient Canaanites and modern Lebanese.

    Dr Marc Haber, first author from the Sanger Institute, said: “It was a pleasant surprise to be able to extract and analyse DNA from 4,000-year-old human remains found in a hot environment, which is not known for preserving DNA well. We overcame this challenge by taking samples from the petrous bone in the skull, which is a very tough bone with a high density of ancient DNA.”

    Dr Claude Doumet-Serhal, co-author and Director of the Sidon excavation site in Lebanon, said: “For the first time we have genetic evidence for substantial continuity in the region, from the Bronze Age Canaanite population through to the present day. These results agree with the continuity seen by archaeologists.

    “Collaborations between archaeologists and geneticists greatly enrich both fields of study and can answer questions about ancestry in ways that experts in neither field can answer alone.”

    Adapted from a Wellcome Trust press release. 

    Researchers analysed DNA extracted from 4,000-year-old human remains to reveal that more than 90% of Lebanese ancestry is from ancient Canaanite populations.

    Cambridge co-author Freddi Scheib conducting ancient bone analysis at the Wellcome Genome Campus.


    From left: Sir Harvey McGrath, Vice-Chancellor Professor Sir Leszek Borysiewicz, Dr Gwen Borysiewicz and Mohamed A. El-Erian

    With more postdoctoral researchers than most other institutions, Cambridge sees postdocs as key to its future research strength. Now, a gift from the members of the Collegiate Campaign Board will annually support up to 10 exceptional postdoctoral researchers working in biomedical science.

    The gift is in recognition of the importance placed on postdocs by the outgoing University of Cambridge Vice-Chancellor, Professor Sir Leszek Borysiewicz.

    The Borysiewicz Biomedical Sciences Fellowship Programme will award supplementary fellowships annually for four years to up to 10 of the most exceptional postdoctoral researchers when they start working in Cambridge. The programme will enable the researchers to focus on a range of global challenges in addition to their primary research.

    Under Sir Leszek, Cambridge has made support for the postdoctoral community a priority. An entirely new campus at North West Cambridge is being built to provide dedicated accommodation for key workers in the University, primarily postdoctoral researchers. The Office for Postdoctoral Affairs (OPdA) was also established in 2013, with a range of opportunities being created to foster postdoctoral potential as world leaders, including three dedicated Postdoc Centres across the expanding Cambridge campus.

    Mohamed El-Erian and Sir Harvey McGrath, co-chairs of the campaign for the University and Colleges, said on behalf of the Board: “The recognition and promotion of postdocs in Cambridge are among the most important legacies of Sir Leszek’s successful tenure as Vice-Chancellor. It has been transformative at Cambridge, and the benefits are increasingly spreading out across higher education and the world. The Board wanted to celebrate this initiative, and ensure that this legacy will be truly sustainable.”

    The gift will fund a new group of postdocs, the Borysiewicz Biomedical Sciences Fellows, drawn from the most gifted of the new medical sciences postdocs arriving each year. The programme will provide opportunities to interact with senior figures from academia and industry, and the Fellows will be supported to work in small teams on major interdisciplinary problems.

    The Vice-Chancellor said: “Our postdoctoral researchers are the bedrock of what Cambridge is able to achieve, particularly in terms of its research area. We want to make sure that we can bring the very best here, and that they are fully embedded into our community. I am delighted and honoured that the Borysiewicz Biomedical Sciences Fellowship Programme is being created in my name, and enormously grateful to the members of the Campaign Board for this collective gift, and their incredible generosity.”

    Dr Rob Wallach, Director of the Office for Postdoctoral Affairs, said: “The very generous gift of the Campaign Board is exciting and much appreciated. It will embed the Vice-Chancellor’s remarkable vision and his introduction of ways to recognise and enrich the enormous talent of postdoctoral researchers, so developing their potential and enabling many to become future world leaders in diverse fields.”

    The Dear World…Yours, Cambridge campaign for the University and Colleges was launched in autumn 2015 to raise £2bn to attract the brightest minds, create the most inspiring environment for world-class research and give the freedom to develop more world-changing ideas. To date more than £908m has been raised towards the total. The campaign focuses on the University’s impact on the world, and through it Cambridge is working with philanthropists to address major global challenges.

    Much of the world-changing research at the University of Cambridge is underpinned by postdocs – qualified researchers with fixed-term contracts.



    A wiretapped telephone call records a human smuggler in Sudan asking a human smuggler in Libya how many of the passengers were his. The response: 109, of whom 68 are now dead.

    The boat had capsized within sight of the Italian island of Lampedusa, killing 366 people. At the time, autumn 2013, it was the single largest loss of life to result from the booming black market in Mediterranean crossings. Worse would follow. 

    The wiretap later records the smuggler in Sudan reproaching the smuggler in Libya for overcrowding the boat. He has since felt obliged to personally notify families. He has shelled out $5,000 in compensation in a bid to save his reputation and stop potential customers turning to one of his many rivals.

    Human smuggling is different to human trafficking: the smugglers’ commodity is the crossing of borders rather than control over people – and war, poverty and globalisation have caused demand for this commodity to explode.

    Between 2014 and 2015, illegal border crossings along the East Mediterranean route increased by an astonishing 1,641%: from around 50,000 to over 885,000. As with any market, let alone one of the fastest growing on the planet, where fortunes are to be made competition is ferocious.
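    As a quick sanity check of that growth rate – a sketch using the rounded figures quoted above, whereas the cited 1,641% reflects the exact underlying counts – the percentage increase works out as follows:

```python
def percent_increase(before: float, after: float) -> float:
    """Percentage change from `before` to `after`."""
    return (after - before) / before * 100


# Rounded figures from the text: ~50,000 crossings in 2014, ~885,000 in 2015.
growth = percent_increase(50_000, 885_000)
print(f"{growth:.0f}%")  # roughly 1670%, consistent with the cited 1,641% from exact counts
```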

    Dr Paolo Campana, an expert in criminal networks, joined Cambridge’s Institute of Criminology in early 2015. He describes the commerce of smuggling humans into Europe as a “quintessential free market”, with little intervention and no regulation beyond the market’s own mechanisms.

    “Some smugglers cheat, some overcharge, some care about safety, some don’t care who lives or dies. Some offer ‘premium’ services, fast-tracking migrants through smuggling routes. Some don’t protect people from kidnappers, others help buy them back from militias,” he says. 

    “The law struggles to apprehend smugglers, and when they do manage it, any void created is likely to be immediately filled. The main things that stop smugglers defrauding many more migrants, or drowning them in unseaworthy boats, are individual morality and maintaining a reputation that attracts more business.”

    Pure free market

    Importantly for a pure free market such as human smuggling, there are no monopolies, says Campana. While newspaper headlines will often describe ‘Mr Big’ figures or talk of Mafia involvement, his research shows that smuggling networks are fragmented: small groups with rudimentary hierarchies jostling for trade in crowded marketplaces.

    “Despite smuggling routes traversing the globe, from the Horn of Africa to Scandinavia, individual operations are stunted and localised – nobody is in control of all stages of the journey. Smugglers operate as independent actors in various stages of an overall journey, whether it’s a sea or a desert crossing, or temporary city accommodation, or car trips over European borders.”

    “While some smuggling groups make arrangements with each other, there seem to be no exclusivity agreements and – despite the localisation of smuggling networks – very little territorial control,” says Campana.   

    This absence of monopolies is radically different to other black markets such as Mafia-like protection rackets. Even in Sicily, where both human smuggling and the Mafia are major problems, Campana observed no connection between the two.

    Almost anyone can set themselves up as a smuggler: from street vendors who sell border crossings as a sideline, to tour guides who switch to smuggling, to fishermen who are already equipped with boats for the sea crossings. It is the free-for-all nature of this marketplace that gives it the flexibility to expand quickly and accommodate soaring demand. 

    “Human smuggling is an enterprise with low barriers to entry, low skills and relatively low capital requirements – yet it has the potential to be far more lucrative than most other occupations available to people on the smuggling routes.”

    As one operational analyst from the European border agency Frontex told Campana: “If you carry 20 people in a boat, that could be the equivalent of five years’ bad fishing.”  

    In the wake of the 2013 Lampedusa shipwreck, a rescue operation, initially called Operation Mare Nostrum, was set up to patrol the Mediterranean, and resources from the highly skilled anti-Mafia prosecution unit in Palermo were allocated to tracking human smuggling operations for the first time.

    Campana combed through and coded the smuggling court cases and wiretapped evidence that resulted from this shift, and has created quantitative databases to model smuggling networks.

    As well as interviewing the Frontex analysts in Warsaw, he has also travelled to small towns in Greece and some of the Italian islands to speak to migrants, the police and the local communities.

    He has just started to publish the findings from this research, including an overview of the new smuggling markets. He hopes that the first quantitative network analysis of a human smuggling operation – the one involved in the Lampedusa disaster – will also be public later this year.

    Attracting 'customers' online

    Campana is also working with his Institute of Criminology colleague Professor Loraine Gelsthorpe, who has worked for many years with victims of trafficking, to conduct further interviews to capture the voices and experiences of migrants and smugglers. Gelsthorpe is co-founder of the Cambridge Migration Research Network, CAMMIGRES, which aims to improve understanding of migration.

    “Professor Gelsthorpe and I are taking a genuinely holistic approach by combining the data-driven with the experiential,” says Campana.

    One of the key areas the researchers are exploring is how migrants choose who to trust in such a busy and dangerous marketplace. This comes back to reputation.  

    While some smuggling networks are organised around ethnic lines, and word of mouth is important, digital forums have become increasingly influential in establishing trustworthiness, so part of the research involves analysing social media.

    Smugglers often advertise their services in Facebook groups, where they try to attract ‘customers’ by responding to queries, competing through prices, and promoting credentials in the form of recommendations from other migrants.

    Payment happens in advance, often through hawala, a traditional honour system that now functions through text messaging and a vast network of brokers. In some ways these platforms and processes are not that different to using eBay, for example, but with far more at stake.

    Online networks are particularly significant in Syrian communities, where there is on average a higher level of education and digital literacy. “As everywhere, education matters,” says Campana. “Accessing and evaluating information through channels such as Facebook could mean the difference between life and death.”

    Campana’s research has led him to question the European Union’s focus on policing and naval operations in the Mediterranean to control human smuggling. “Naval operations are very noble; however, they have the unintended consequence of assisting the smugglers by taking the refugees off their hands very close to the Libyan coast – making the ‘product’ more attractive and, ultimately, increasing the number of journeys.

    “This is a market driven by exponential demand, and it is that demand which should be targeted. Land-based policies such as refugee resettlement schemes are politically difficult, but might ultimately prove more fruitful in stemming the smuggling tide.”

    Inset: Syrian and Iraqi immigrants getting off a boat from Turkey on the Greek island of Lesbos. Credit: Ggia (CC: BY-SA)

    Cambridge criminologists are using emerging sources of information – from court records to Facebook groups – to analyse the networks behind one of the fastest-growing black markets on the planet: the smuggling of people into Europe.   

    Refugees on a boat crossing the Mediterranean sea, heading from Turkish coast to the northeastern Greek island of Lesbos, 29 January 2016


    The findings have implications for attempts to ‘breed out’ this potentially life-threatening condition.

    Pugs and bulldogs have become popular breeds in recent years – the French bulldog is set to become the UK's most popular canine, according to the Kennel Club. However, a significant proportion are affected by a condition related to their head structure, known as Brachycephalic Obstructive Airway Syndrome (BOAS).

    Studies suggest that for over half of such dogs, BOAS may lead to health problems, causing not just snoring but also difficulty exercising and potentially overheating. It can even prove life-threatening. But as symptoms often do not arise until after the dog has begun breeding, veterinary scientists have been searching for markers that can predict whether a dog is likely to develop breathing difficulties – and hence potentially help breed out the condition.

    A study in 2015 led by researchers at the Royal Veterinary College, University of London, working across many breeds suggested that dogs whose muzzles comprised less than half their cranial lengths and dogs with thicker neck girths were at increased risk of BOAS. However, a new study carried out by researchers at the Department of Veterinary Medicine, University of Cambridge, and published today in the journal PLOS ONE, suggests that these measures applied to individual breeds are not dependable for this purpose.

    The Cambridge researchers took external measurements of features of head and neck shape, and of the external appearance of nostrils, together with measurements of body size and body condition score (an approximation to the degree of fatness/obesity) in just over 600 pugs, bulldogs and French bulldogs, the most numerous breeds that show this problem. Each of the dogs had also been graded objectively for respiratory function.

    The team found that while the external head measurements did have some predictive value for respiratory function, the relationship was not strong, and the measurements that showed the best predictive relationship to BOAS differed between breeds. They were unable to reproduce conclusively the findings from the previous study by the Royal Veterinary College in any breed.

    “It can be incredibly difficult to take measurements such as distance between eyes or length of nose accurately, even for experienced vets, as the dogs don’t keep still,” says Dr Jane Ladlow, joint lead author. “This may explain why it is so difficult to replicate the findings of the previous study or find any conclusive markers in our own.”

    Neck girth was a slightly more reproducible measurement, and larger neck girth in comparison to chest girth or neck length was associated with disease in the bulldogs and French bulldogs. In male bulldogs, neck girth showed a close enough association with disease to give moderately good predictive accuracy for the presence of clinically significant BOAS.

    The best measure identified by the Cambridge team was the degree of nostril opening, which proved a moderately good predictor of the presence and severity of BOAS in pugs and French bulldogs, and was also a useful marker for disease in bulldogs.

    Altogether the variables measured, when combined, gave an 80% accuracy in predicting whether or not dogs will have BOAS. However, the difficulty of taking some of the measurements accurately, and the need to make multiple measurements and combine them to produce a prediction, mean that the researchers would not recommend using them as a guide to breeding.
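
The 80% figure comes from pooling several weak conformation predictors into a single score. The sketch below illustrates the general idea with a logistic combination; the feature names echo those in the article, but every coefficient and input value is invented for illustration and this is not the study's model.

```python
import math

# Purely illustrative: combine several weak conformation predictors
# into one probability-like risk score. All weights are invented.
def boas_risk(neck_to_chest_ratio, nostril_opening_frac, muzzle_to_skull_ratio):
    """Return a score in (0, 1); higher = more BOAS-like conformation."""
    # invented weights: a thicker neck relative to chest raises the score,
    # while more open nostrils and a longer muzzle lower it
    z = (4.0 * neck_to_chest_ratio
         - 5.0 * nostril_opening_frac
         - 3.0 * muzzle_to_skull_ratio
         + 1.0)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

# two hypothetical dogs (measurements invented)
narrow_nostrils = boas_risk(0.85, 0.15, 0.25)  # thick neck, pinched nostrils
open_nostrils = boas_risk(0.60, 0.70, 0.45)    # slimmer neck, open nostrils
```

The point of combining features this way is that no single measurement is decisive on its own, which matches the researchers' finding that individual external measurements were only weakly predictive.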

    Dr Nai-Chieh Liu, first author of the study, says: “Breeding for open nostrils is probably the best simple way to improve these breeds. Dog breeders should also avoid using dogs with extremely short muzzles, wide faces, and thick necks. These traits are all associated with increased risk of having BOAS.”

    Joint lead author Dr David Sargan adds: “At this moment there is no conclusive way of predicting whether any individual pug or bulldog will develop breathing difficulties, so we are now looking for genetic tests that may help breeders get rid of BOAS more rapidly.

    “The best advice we can give to owners of short-nosed dogs is to make sure you get your dog checked annually for any potential difficulties in breathing, even if you have not yourself observed any in your dog, and to keep your dog fit and not let it get fat.” 

    As many as a half of all short-nosed dogs such as pugs, French bulldogs and bulldogs experience breathing difficulties related to their facial structure. However, research published today by the University of Cambridge suggests that there is no way to accurately predict from visible features whether an apparently healthy pug or French bulldog will go on to develop breathing difficulties.

    Dog breeders should also avoid using dogs with extremely short muzzles, wide faces, and thick necks.
    Nai-Chieh Liu


    The researchers, from the University of Cambridge, used data from Twitter to determine whether bots can be accurately detected, how bots behave, and how they impact Twitter activity.

    They divided accounts into categories based on total number of followers, and found that accounts with more than 10 million followers tend to retweet at similar rates to bots. In accounts with fewer followers however, bots tend to retweet far more than humans. These celebrity-level accounts also tweet at roughly the same pace as bots with similar follower numbers, whereas in smaller accounts, bots tweet far more than humans. Their results will be presented at the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) in Sydney, Australia.

    Bots, like people, can be malicious or benign. The term ‘bot’ is often associated with spam, offensive content or political infiltration, but many of the most reputable organisations in the world also rely on bots for their social media channels. For example, major news organisations, such as CNN or the BBC, who produce hundreds of pieces of content daily, rely on automation to share the news in the most efficient way. These accounts, while classified as bots, are seen by users as trustworthy sources of information.

    “A Twitter user can be a human and still be a spammer, and an account can be operated by a bot and still be benign,” said Zafar Gilani, a PhD student at Cambridge’s Computer Laboratory, who led the research. “We’re interested in seeing how effectively we can detect automated accounts and what effects they have.”

    Bots have been on Twitter for the majority of the social network’s existence – it’s been estimated that anywhere between 40 and 60% of all Twitter accounts are bots. Some bots have tens of millions of followers, although the vast majority have fewer than a thousand – human accounts have a similar distribution.

    In order to reliably detect bots, the researchers first used the online tool BotOrNot (since renamed BotOMeter), which is one of the only available online bot detection tools. However, their initial results showed high levels of inaccuracy. BotOrNot showed low precision in detecting bots that had bot-like characteristics in their account name, profile info, content, tweeting frequency and especially redirection to external sources. Gilani and his colleagues then decided to take a manual approach to bot detection.

    Four undergraduate students were recruited to manually inspect accounts and determine whether they were bots. This was done using a tool that automatically presented Twitter profiles, and allowed the students to classify the profile and make notes. Each account was collectively reviewed before a final decision was reached.

    In order to determine whether an account was a bot (or not), the students looked at different characteristics of each account. These included the account creation date, average tweet frequency, content posted, account description, whether the user replies to tweets, likes or favourites received and the follower to friend ratio. A total of 3,535 accounts were analysed: 1,525 were classified as bots and 2,010 as humans.

    The students showed very high levels of agreement on whether individual accounts were bots. However, they showed significantly lower levels of agreement with the BotOrNot tool.

    The bot detection algorithm they subsequently developed achieved roughly 86% accuracy in detecting bots on Twitter. The algorithm uses a type of classifier known as Random Forests, which draws on 21 different features to detect bots; the classifier itself is trained on the original dataset annotated by the human annotators.
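
A Random Forest classifies by training many simple trees on bootstrap resamples of the data and taking a majority vote. The toy sketch below shows that idea with depth-one trees and just two invented features (tweets per day, retweet fraction); the paper's actual classifier used 21 account features and real annotated data.

```python
import random

random.seed(0)

# synthetic examples, not the study's data: label 1 = bot, 0 = human
# each point is ((tweets_per_day, retweet_fraction), label)
data = [
    ((120, 0.90), 1), ((95, 0.80), 1), ((200, 0.95), 1), ((150, 0.70), 1),
    ((5, 0.10), 0), ((12, 0.20), 0), ((3, 0.05), 0), ((20, 0.30), 0),
]

def train_stump(sample):
    """Fit the best single-feature threshold split on a bootstrap sample."""
    best = None
    for f in range(2):                      # try each feature
        for x, _ in sample:                 # try each observed value as cutoff
            t = x[f]
            acc = sum((1 if xi[f] >= t else 0) == y for xi, y in sample)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    _, f, t = best
    return lambda x: 1 if x[f] >= t else 0

def train_forest(data, n_trees=25):
    # each 'tree' sees a bootstrap resample of the training data
    return [train_stump([random.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(forest, x):
    votes = sum(tree(x) for tree in forest)  # majority vote across trees
    return 1 if 2 * votes >= len(forest) else 0

forest = train_forest(data)
bot_like = predict(forest, (110, 0.85))     # high-volume, retweet-heavy
human_like = predict(forest, (4, 0.10))     # low-volume, mostly original
```

The ensemble-of-resampled-trees structure is what makes Random Forests robust to noisy individual features, which matters when account characteristics overlap between bots and humans.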

    The researchers found that bot accounts differ from humans in several key ways. Overall, bot accounts generate more tweets than human accounts. They also retweet far more often, and redirect users to external websites far more frequently than human users. The only exception to this was in accounts with more than 10 million followers, where bots and humans showed far more similarity in terms of the volume of tweets and retweets.

    “We think this is probably because bots aren’t that good at creating original Twitter content, so they rely a lot more on retweets and redirecting followers to external websites,” said Gilani. “While bots are getting more sophisticated all the time, they’re still pretty bad at one-on-one Twitter conversations, for instance – most of the time, a conversation with a bot will be mostly gibberish.”

    Despite the sheer volume of tweets produced by bots, humans still have better quality and more engaging tweets – tweets by human accounts receive on average 19 times more likes and 10 times more retweets than tweets by bot accounts. Bots also spend less time liking other users’ tweets.

    “Many people tend to think that bots are nefarious or evil, but that’s not true,” said Gilani. “They can be anything, just like a person. Some of them aren’t exactly legal or moral, but many of them are completely harmless. What I’m doing next is modelling the social cost of these bots – how are they changing the nature and quality of conversations online? What is clear though, is that bots are here to stay.”

    Reference: 
    Zafar Gilani, Ekaterina Kochmar, Jon Crowcroft. Classification of Twitter Accounts into Automated Agents and Human Users. Paper presented at 9th IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM'17). Sydney, New South Wales, Australia.

    ‘Celebrity’ Twitter accounts – those with more than 10 million followers – display more bot-like behaviour than users with fewer followers, according to new research. 

    A Twitter user can be a human and still be a spammer, and an account can be operated by a bot and still be benign.
    Zafar Gilani
    Twitter

  • 08/04/17--07:01: Of mice and women
  • Professor Magdalena Zernicka-Goetz

    Walk into Professor Magdalena Zernicka-Goetz’s laboratory and it is her sofa that catches your eye. A gaudy pink-purple, it is easily visible through the glass that separates the benches, fridges and microscopes from the office where she draws the threads of her thinking together. It converts into a bed – handy for all-night experiments. And it’s where her team sat, last year, when, during a regular update meeting, they realised that they’d made a world-changing discovery. They had created a structure resembling a mouse embryo, entirely in the laboratory, using stem cells: a world first.

    “I still remember that moment,” says Zernicka-Goetz, Professor of Mammalian Development and Stem Cell Biology and group head of the Zernicka-Goetz Laboratory. “It is one of the most happy moments in your life, when your dreams come true. You work on something, very intensively. You often have to inspire and motivate people in the lab to work on it. These are super-intelligent people. They do it because they see a value in it, not because you are telling them to. And the result is very much a team success.”

    Until she was four, Zernicka-Goetz lived in her father’s scientific laboratories. Her family had lost everything during the war, including their home. Now, growing up in Warsaw, behind the Iron Curtain, the laboratory was home: a lab converted into the family’s apartment, with the kitchen installed in a corridor. Zernicka-Goetz would walk to her pre-school, hand in hand with her father, Professor Boguslaw Zernicki, and together they would discuss his passion: the brain. How do we think? Why do we think? Where do our dreams come from? The young Zernicka-Goetz was encouraged to dig deep, to probe cause and effect.

    “At the time, I didn’t realise it was inspirational,” she says. “Now, tracing my steps back, I can see the connection. At school, my biology teacher wasn’t so great. I wouldn’t have been so fascinated by science if I hadn’t had this charismatic father, motivated not by career progression, but by pure science.”

    She came to Cambridge in 1995 as an EMBO Fellow, supervised by Professor Sir Martin Evans (who discovered embryonic stem cells). Her fascination, nurtured during her PhD at the University of Warsaw under the supervision of Professor Andrzej Tarkowski, was around the plasticity of embryos. They are gloriously flexible, she says: remove one cell from an embryo and the rest will develop normally. “We know that they can recover from their different perturbations in early life, but how does that work? How do they recover? Embryos of many other animals can’t do this, but mammalian embryos can. Why? The process fascinated me.”

    She deliberately chose a different specialisation to her father’s, not wanting to be directly compared to him. But when it comes down to it, she says, both areas are all about cells. “I was fascinated by our thoughts and where they come from, and how it can all be narrowed down to the function of individual cells within the brain,” she says. “I like painting and sculpture, which somehow translates into playing with embryos and stem cells. Making shapes with them – that’s what helps me to think and sometimes inspires me.”

    Zernicka-Goetz’s playful and imaginative attitude to the astonishingly complex structure that is the mammalian embryo has given rise to some extraordinary work. In 2016, Nature and Nature Cell Biology published her papers outlining a new technique for allowing human embryos to develop in the lab for up to 13 days. Previously, embryos could survive in vitro for only seven days (the point at which an embryo would normally implant into the womb).

    The new technique is vital for studying early pregnancy loss. Under UK law, researchers are permitted to study human embryos in the lab for up to 14 days, but as no method existed for keeping them alive after seven days there was no way to study the changes which might be taking place. Then came Zernicka-Goetz’s technique, which involves creating a system in the lab which allows embryo cells to organise themselves to form a basis for future development, just as they do in the womb.

    Zernicka-Goetz had wanted to develop just such a system from her very early days as a scientist. “I very much wanted to grow these embryos beyond the so-called ‘blastocyst’ stage – the fourth day of their life in the culture dish. This is when the transformation of an embryo’s architecture happens. Nature looks very different before the embryo is implanted, and afterwards.” Her supervisors discouraged her. Far too difficult, they said. “Lots of famous scientists had tried it and failed, so they told me I should not be wasting my time. But seven years ago, I decided to return to my dream. And it worked.” That work resonated around the world, winning the People’s Choice award for Science magazine’s ‘Breakthrough of the Year 2016’.

    Then, this spring, came her work on growing mouse embryo-like structures. Zernicka-Goetz was on her way to Paris to make a speech when she heard of the paper’s publication (authors are not told in advance when their papers will be published in scientific journals). Interest was phenomenal – the world’s media descended on her. “It was a very important speech, which I was honoured to have been asked to give, and I didn’t want to cancel,” she remembers. “So I spent the whole journey on the Eurostar on the phone to journalists. It was actually more stressful than happy as I wanted to be part of it, but had to delegate it to people in the lab, and to everybody who I knew I could rely on reporting the story accurately and say: please go to the TV studio and speak about it, because I can’t do it! I have to give this speech!”

    How did her team achieve this landmark when all other efforts failed? Like many discoveries, it came about through a new way of thinking about the problem, says Zernicka-Goetz. The key to her success lay in the use of two different kinds of stem cells. “The first type of cell, pluripotent embryonic stem cells, make the mouse baby. The second type, multipotent trophoblast stem cells, make the placenta,” she explains. “We allowed these two types of stem cells to interact with each other by providing an extra-cellular matrix to help them communicate – a kind of 3D scaffold. We hoped that they would self-organise to create an embryonic structure – and they did.”

    It is important to understand that this is not ‘creating life’. Her team will not be attempting to ‘grow’ baby mice in the lab. Rather, Zernicka-Goetz says, this work is providing a system to enable scientists to better understand development. Using actual mouse embryos in research is not ideal: they are complex structures which aren’t easy to grow at the life stage that is of interest to Zernicka-Goetz’s team. Mimicking the developmental processes taking place in these embryos using stem cells is far more convenient. “And this is of enormous importance,” she says. “This is the stage where many human pregnancies fail. Human and mouse development at this time have a lot of common elements. So using this system, we will be able to identify the role of specific genes and processes, and the communication between different kinds of cells in order to build the organism.”

    Two world-changing discoveries in a year is an astonishing record, but Zernicka-Goetz says she doesn’t necessarily feel proud. “Often, in life, things are mixed,” she says. “You have to deal with happy moments and difficult moments. The difficulty for me is still mastering my balance between my life as a scientist and teacher and mother and friend and wife. During the normal day, I often run to keep up with it all – all those different lives. So when the unusual happens, it is overwhelming and you feel not proud but rewarded. Rewarded for all this effort, and training, and forgetting about your own feelings and life for what you are trying to achieve.”

    It’s hard for her to predict the future, she says. Her lab is currently working on 17 different projects. Her ideas develop as they go along, she says, but behind them, always, lurk the biggest of big questions: where does life come from? How does it start? And why do we still know so little about it? “I often wake up with ideas,” she says. “Perhaps I see something during the day, or I think of something in conversation, or when I am discussing something with my kids. Sometimes I go for a run and I think about how we might solve a specific problem. But the important question for me is to find a way to address those big questions. We know there are many things we don’t know, so how do we find out about them?”

    Article by Lucy Jolin. This article first appeared in CAM - the Cambridge Alumni Magazine, issue 81

    Last year, Magdalena Zernicka-Goetz, Professor of Mammalian Development and Stem Cell Biology, made not one, but two world-changing discoveries.

    Lots of famous scientists had tried it and failed, so they told me I should not be wasting my time. But seven years ago, I decided to return to my dream. And it worked.
    Professor Magdalena Zernicka-Goetz
    Magdalena Zernicka-Goetz is a Fellow of Sidney Sussex. She was photographed in the Heong Gallery, at Downing.


    Without nature, humans could be neither healthy nor happy. And yet the natural world can be completely ransacked without causing even a tiny blip on our usual measures of economic progress or poverty.

    A major UN environmental meeting recently looked at launching an assessment of the different values that people attribute to nature, and what nature contributes to human societies. However, these high-level discussions will be futile unless our measures of societal progress expand to explicitly include what nature does for human well-being and prosperity, especially for poor people.

    Nature matters to people’s well-being in many different ways. It obviously provides us with basic needs such as food, clean air and water, as well as protection from environmental hazards. There is also a clear relationship with both physical and mental well-being, especially for those who are fortunate enough to have access to green spaces.

    Beyond these instrumental roles, there is also evidence from around the world that nature is a more fundamental contributor to people’s sense of self. It is an integral part of what constitutes well-being, captured for some in the awe-inspiring moments when standing on top of a mountain, the breath-taking view of a beautiful river, or in the feeling of freedom associated with traversing a wide open landscape.

    The problem with economic indicators

    Despite the value we get from nature, our measures of progress and well-being remain much narrower, focused on what is visible and measurable. Gross Domestic Product (GDP) has been the most prominent approach since the end of World War II, with GDP seen as a useful snapshot of the state of the economy and people’s well-being. What these figures often hide are those things, like the role of nature, that are not measured in the monetary economy, but are an important part of daily life and can be crucial for sustaining future prosperity.

    There are alternatives. One that has gained some momentum is the Inclusive Wealth Index, which takes into account broader measures of human and natural well-being – its most recent assessment suggested that conventional GDP figures had greatly exaggerated growth over the period 1992-2010. In international development, the UN’s Human Development Index and the “multidimensional poverty index” both recognise a larger set of issues, combining material standards with measures of health and education. But they still do not adequately incorporate the role of nature.

    Ignoring nature creates some perverse paradoxes. Measured GDP might actually increase as a consequence of a major environmental disaster, because of the economic activity created by the clean up and repair. Meanwhile, the environmental losses themselves don’t show up in economic measures. A country could get rich by cutting down all its primary forests (and many have), but the associated loss of habitat and wild species would not feature in national accounts.

    Governments continue to make decisions based on a key set of headline figures. These include GDP and per capita income, which reflect economic prosperity, and, in poorer countries, the extent and incidence of poverty. But we can do better: our ongoing research focuses on developing environmentally-adjusted measures of multidimensional poverty, based on the insight that people are typically poorer when they do not have access to nature.
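
A multidimensional poverty measure of the Alkire-Foster style counts a household as poor when its weighted deprivation score across several dimensions reaches a cutoff. The sketch below is purely illustrative (the households, equal weights and cutoff are invented, and this is not the authors' measure), but it shows how adding an environmental dimension can change who is identified as poor.

```python
# Illustrative Alkire-Foster-style headcount with an added environmental
# dimension. All households, weights and the cutoff are invented.

households = [  # 1 = deprived in that dimension, 0 = not deprived
    {"health": 0, "education": 0, "living_std": 1, "nature_access": 1},
    {"health": 1, "education": 1, "living_std": 1, "nature_access": 0},
    {"health": 0, "education": 0, "living_std": 0, "nature_access": 1},
]
K = 0.5  # weighted deprivation score at which a household counts as poor

def headcount_ratio(households, dimensions):
    """Share of households whose equally weighted deprivation score >= K."""
    w = 1.0 / len(dimensions)
    poor = [h for h in households
            if sum(w * h[d] for d in dimensions) >= K]
    return len(poor) / len(households)

with_nature = headcount_ratio(
    households, ["health", "education", "living_std", "nature_access"])
without_nature = headcount_ratio(
    households, ["health", "education", "living_std"])
# the first household counts as poor only when access to nature is included
```

The first household is deprived in living standards and in access to nature: an environmentally-adjusted index identifies it as poor, while the conventional three-dimension version does not, which is exactly the kind of partial identification the authors warn about.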

    Our research suggests that failing to consider these missing environmental aspects can result in an incomplete assessment of the multiple dimensions and underlying drivers of poverty. Consequently, the identification of the poor, as well as an understanding of what makes them poor, risks being partial, thereby posing a challenge to addressing poverty adequately.

    The current status quo fails people, especially the poor, and also threatens future prosperity by undervaluing nature. Those who benefit from the current approaches are typically global elites who profit from environmental destruction (which goes unrecognised).

    The losers are those most dependent on nature for their livelihoods and those especially vulnerable to environmental change. Even if nature is valued, it is typically converted into money equivalents, which favours those who are able and willing to parcel out nature into small commoditised bundles, which can then be sold to the highest bidder. This fails to take into account the views of those who believe that nature matters in other ways or in its own right, who care about the beauty of nature and the sheer joy that it provides to many.

    The consequences of neglecting people’s varied views and aspirations have become apparent from recent political events in Europe and the US. Nature matters to our well-being, and people see their relationship with nature in many different ways. Recognising this is a crucial step towards building a more inclusive, equitable and sustainable society.

    Judith Schleicher, Postdoctoral Researcher in Conservation, Poverty and Wellbeing, University of Cambridge and Bhaskar Vira, Reader in Political Economy at the Department of Geography and Fellow of Fitzwilliam College; Director, University of Cambridge Conservation Research Institute, University of Cambridge

    This article was originally published on The Conversation. Read the original article.

    Reference: 
    Judith Schleicher et al. 'Poorer without It? The Neglected Role of the Natural Environment in Poverty and Wellbeing.' Sustainable Development (2017). DOI: 10.1002/sd.1692.

    Despite the value that humans get from nature, it is not included in measurements of poverty and well-being. Cambridge's Judith Schleicher and Bhaskar Vira say it's about time this changed. 



    Norway is famed for its cod. Catches from the Arctic stock that spawn each year off its northern coast are exported across Europe for staple dishes from British fish and chips to Spanish bacalao stew.

    Now, a new study published today in the journal PNAS suggests that some form of this pan-European trade in Norwegian cod may have been taking place for 1,000 years.

    Latest research from the universities of Cambridge and Oslo, and the Centre for Baltic and Scandinavian Archaeology in Schleswig, used ancient DNA extracted from the remnants of Viking-age fish suppers.

    The study analysed five cod bones dating from between 800 and 1066 AD found in the mud of the former wharves of Haithabu, an early medieval trading port on the Baltic. Haithabu is now a heritage site in modern Germany, but at the time was ruled by the King of the Danes. 

    The DNA from these cod bones contained genetic signatures seen in the Arctic stock that swim off the coast of Lofoten, the northern archipelago that is still a centre of Norway’s fishing industry.

    Researchers say the findings show that supplies of ‘stockfish’ – an ancient dried cod dish popular to this day – were transported over a thousand miles from northern Norway to the Baltic Sea during the Viking era.

    Prior to the latest study, there was no archaeological or historical proof of a European stockfish trade before the 12th century.

    While future work will look at further fish remains, the small size of the current study prevents researchers from determining whether the cod was transported for trade or simply used as sustenance for the voyage from Norway.

    However, they say that the Haithabu bones provide the earliest evidence of fish caught in northern Norway being consumed on mainland Europe – suggesting a European fish trade involving significant distances has been in operation for a millennium.

    “Traded fish was one of the first commodities to begin to knit the European continent together economically,” says Dr James Barrett, senior author of the study from the University of Cambridge’s McDonald Institute for Archaeological Research.

    “Haithabu was an important trading centre during the early medieval period. A place where north met south, pagan met Christian, and those who used coin met those who used silver by weight.” 

    “By extracting and sequencing DNA from the leftover fish bones of ancient cargoes at Haithabu, we have been able to trace the source of their food right the way back to the cod populations that inhabit the Barents Sea, but come to spawn off Norway’s Lofoten coast every winter.

    “This Arctic stock of cod is still highly prized – caught and exported across Europe today. Our findings suggest that distant requirements for this Arctic protein had already begun to influence the economy and ecology of Europe in the Viking age.”

    Stockfish is white fish preserved by the unique climate of north Norway, where winter temperatures hover around freezing. Cod is traditionally hung out on wooden frames to allow the chill air to dry the fish. Some medieval accounts suggest stockfish was still edible as much as ten years after preservation.

    The research team argue that the new findings offer some corroboration to the unique 9th century account of the voyages of Ohthere of Hålogaland: a Viking chieftain whose visit to the court of King Alfred in England resulted in some of his exploits being recorded.

    “In the accounts inserted by Alfred’s scribes into the translation of an earlier 5th century text, Ohthere describes sailing from Hålogaland to Haithabu,” says Barrett. Hålogaland was the northernmost province of Norway. 

    “While no cargo of dried fish is mentioned, this may be because it was simply too mundane a detail,” says Barrett. “The fish-bone DNA evidence is consistent with the Ohthere text, showing that such voyages between northern Norway and mainland Europe were occurring.”

    “The Viking world was complex and interconnected. This is a world where a chieftain from north Norway may have shared stockfish with Alfred the Great while a late-antique Latin text was being translated in the background. A world where the town dwellers of a cosmopolitan port in a Baltic fjord may have been provisioned from an Arctic sea hundreds of miles away.”

    The sequencing of the ancient cod genomes was done at the University of Oslo, where researchers are studying the genetic makeup of Atlantic cod in an effort to unpick the anthropogenic impacts on these long-exploited fish populations.

    “Fishing, particularly of cod, has been of central importance for the settlement of Norway for thousands of years. By combining fishing in winter with farming in summer, whole areas of northern Norway could be settled in a more reliable manner,” says the University of Oslo’s Bastiaan Star, first author of the new study.

    Star points to the design of Norway’s new banknotes that prominently feature an image of cod, along with a Viking ship, as an example of the cultural importance still placed on the fish species in this part of Europe.

    “We want to know what impact the intensive exploitation history covering millennia has inflicted on Atlantic cod, and we use ancient DNA methods to investigate this,” he says.

    The study was funded by the Research Council of Norway and the Leverhulme Trust. 

    New research using DNA from the fish bone remains of Viking-era meals reveals that north Norwegians have been transporting – and possibly trading – Arctic cod into mainland Europe for a millennium.

    Our findings suggest that distant requirements for this Arctic protein had already begun to influence the economy and ecology of Europe in the Viking age
    James Barrett
    One of the ancient Viking cod bones from Haithabu used in the study
