
Living with adversity: What Tupac and Eminem can tell us about risk factors for mental health


Tupac Shakur and Eminem are often touted as two of the greatest rappers of all time. Tupac, who was shot dead in 1996, was African American, while Eminem is Caucasian, yet their hip-hop songs share a similar narrative storytelling style, filled with anguished suffering and anger. The characters they portray in their lyrics are often surrounded by challenging environments – alcohol and drug addiction, parental abuse and gun crime, for example.

Two songs that describe important issues of adversity reflecting strong emotional turmoil in their lyrics are ‘Death around the corner’ from Tupac Shakur’s album, ‘Me Against The World’, and ‘Cleaning out my closet’ by Eminem from his album, ‘The Eminem Show’. In both songs, whether knowingly or unwittingly, the artists’ characters reveal many of the symptoms of mental illness – and also paint a picture that suggests why these problems have arisen.

In ‘Death around the corner’, Tupac portrays a fictional character preoccupied with paranoia about a perceived threat to his own life and his family’s, and driven by the need to protect them from targeted violence.

The song opens with a skit: a dialogue between Tupac’s character, his partner and their son. Tupac’s character is standing by the window with an AK47 firearm. His son is confused by his father’s strange behaviour, and his wife is exasperated with her partner, feeling he is consumed by his paranoia. She refers to Tupac’s character as “being crazy” and notes that he is neglecting his family (“you don’t work…you don’t do a…thing”). It is apparent that she does not share his concerns about their family’s safety and is very irritated that he is preoccupied by his worries. What is particularly concerning is that the character – who is likely paranoid without justification – is carrying a potentially loaded gun at home whilst a vulnerable child and partner are present and witnessing this behaviour.

The first verse makes reference to his need to stay ‘high’, probably through use of either a stimulant or cannabis, which are both risk factors for developing psychosis and paranoia.

The character describes his harsh urban environment as being where the “skinny” people “die” – in other words, where the weak are killed or exploited. The environment appears to be a place where vulnerable individuals can develop social defeat, which research has shown is a risk factor for psychosis.

Tupac’s character alludes to his daddy being “madder than a motherfucker”, which may indicate that the character has an increased risk of developing psychosis due to genetic factors, as we know people with a family history – particularly a parent or sibling – are at increased risk. He subsequently goes to bed “with my pistol in my sheets” due to feeling paranoid.

In the first verse, Tupac mentions his character’s use of “endo” (cannabis) and how it relieves his stress and paranoia. In the next verse, though, he mentions how smoking “…too much weed got me paranoid, stressed”. Is he contradicting himself here? Not necessarily: the type of cannabis that Tupac’s character has smoked could explain both the increase in his paranoia and his relief from it – while some forms of cannabis are relatively benign, others, such as ‘skunk’, have been shown to increase the risk of psychosis.

Later, Tupac describes how his character was “raised in the city, shitty” at an early age, “drinking liquor out my momma’s titty” (a reference to his mother’s excessive alcohol use whilst breastfeeding him in early childhood), and possibly being exposed to second-hand cannabis smoke from an early age. All these risk factors point to a chaotic household, which may have had an adverse effect on his developing brain – again leading to the development of psychosis in later life.

It’s clear from these lyrics that Tupac’s character has a family history of psychosis, that he abuses stimulants or cannabis, and that he lives in a harsh environment. We know that all of these factors alter an individual’s brain chemistry – in particular, how it responds to the key neurotransmitter dopamine. Research has shown that such changes lead people to fixate on, or give too much emphasis to, things in their environment or within their own thoughts, feelings or senses – and hence drive mental health issues such as paranoia and psychosis.

There is another way of looking at Tupac’s character, though: it’s possible that he is experiencing some form of Post Traumatic Stress Disorder (PTSD) after experiencing and witnessing life-threatening situations from living in a violent ‘hood’ (“..I guess I seen too many murders…”). His behaviour certainly seems to match some of the common symptoms of PTSD: the frequent looking out of the window and paranoia could be seen as hypervigilance and hyperarousal, which are prominent symptoms. His mention of seeing death around the corner could reference intrusive flashbacks – ‘reliving’ the murders he has witnessed – and his use of “weed” might reflect cannabis as ‘self-medication’.

Eminem’s song 'Cleaning out my closet' follows a similar trend of highlighting early adverse experiences. The song deals with Eminem’s anger towards his mother.

In the first verse, Eminem highlights how he can’t keep his emotions in check, describing them as “the oceans exploding” and attributing them to his parents’ relationship and their “tempers flaring”.

The chorus indicates that Eminem wishes to exorcise his emotional demons by voicing his angst in his lyrics. He uses the metaphor “but tonight I’m cleaning out my closet” to acknowledge that he would rather reveal his “skeletons” than allow them to eat away at him.

It appears he is trying to reach out to listeners as though they are psychotherapists: his character discloses his secrets in order to feel free from torment. Sigmund Freud described depression as anger turned inwards, often towards traumatic childhood experiences; hence we can see Eminem’s ‘psychotherapy’ with his listeners as an opportunity to let go of buried anger, in an attempt to protect himself from depression.

The rest of the second verse involves Eminem making accusations against his detached, absent father and promising to be different by being present for his daughter. It also reveals that he refrains from killing his ex-girlfriend and her partner for the sake of his daughter, showing that he is able to control his angry impulses.

The next verse explores his mother’s addiction to prescription pills, which echoes Eminem’s self-declared battle with addiction to prescription pills. This also highlights his increased risk of substance misuse partly due to his familial genetic predisposition.

Eminem’s character accuses his mother of Munchausen’s syndrome by proxy – where a mother fakes her child’s symptoms (or, even worse, causes real symptoms) to make the child seem sick – describing himself as a “victim of Munchausen’s syndrome”. In this syndrome, the caregiver is believed to fabricate or induce illness to fulfil their own need for attention, receiving commendation as the child’s rescuer while placing the sick role onto the child.

Eminem ends the song by accusing his mother of being jealous of his success, and reveals that he intends to deny her access to his daughter, to protect the child from the abuse he himself experienced.

Interestingly, in ‘Headlights’, a song released in 2013, Eminem expresses regret for the harsh views of his mother voiced in ‘Cleaning out my closet’, and instead acknowledges her difficulty in raising him as a single parent.

The suffering and painful feelings revealed by Tupac and Eminem’s characters offer us valuable insight into mental health themes related to psychosis and social adversity. By engaging with the interests of people who listen to hip-hop – especially young people – we aim to enhance their understanding of mental health by delivering medical information in context. Perhaps this urban-influenced approach will help empower and encourage individuals to examine what adversity is around the corner for them personally, and to explore what risk factors may still be locked away in their own closets.

Hip-hop artists Tupac and Eminem are among the most iconic music artists of the past two decades, and as Dr Akeem Sule and Dr Becky Inkster, co-founders of HIP-HOP-PSYCH, write, their lyrics can provide a valuable insight into the lives of some of the people most at risk of developing mental health issues.

The suffering and painful feelings revealed by Tupac and Eminem’s characters offer us valuable insight into mental health themes related to psychosis and social adversity.

Predicting gentrification through social networking data


The first network to capture the interconnected nature of people and places in large cities can not only quantify the social diversity of a particular place, but can also be used to predict when a neighbourhood will undergo gentrification – the process by which the residents of a deprived area are displaced by an influx of a more affluent population.

The researchers behind the study, led by the University of Cambridge, will present their results today (13 April) at the 25th International World Wide Web Conference in Montréal.

The Cambridge researchers, working with colleagues from the University of Birmingham, Queen Mary University of London, and University College London, used data from approximately 37,000 users and 42,000 venues in London to build a network of Foursquare places and the parallel Twitter social network of visitors, adding up to more than half a million check-ins over a ten-month period. From this data, they were able to quantify the ‘social diversity’ of various neighbourhoods and venues by distinguishing between places that bring together strangers versus those that tend to bring together friends, as well as places that attract diverse individuals as opposed to those which attract regulars.

When these social diversity metrics were correlated with wellbeing indicators for various London neighbourhoods, the researchers discovered that signs of gentrification, such as rising housing prices and falling crime rates, were strongest in deprived areas with high social diversity. These areas saw an influx of more affluent and diverse visitors, represented by social media users, and an overall improvement in their rank on the UK Index of Multiple Deprivation.

The UK Index of Multiple Deprivation (IMD) is a statistical exercise conducted by the Department for Communities and Local Government, which measures the relative prosperity of neighbourhoods across England. The researchers compared IMD data for 2010, the year their social and place network data was gathered, with the IMD data for 2015, the most recent report.

“We’re looking at the social roles and properties of places,” said Desislava Hristova from the University’s Computer Laboratory, and the study’s lead author. “We found that the most socially cohesive and homogenous areas tend to be either very wealthy or very poor, but neighbourhoods with both high social diversity and high deprivation are the ones which are currently undergoing processes of gentrification.”

This aligns with previous research, which has found that tightly-knit communities are more resistant to change and tend to keep resources within the community. This suggests that affluent communities remain affluent and poor communities remain poor because they are relatively isolated.

Hristova and her co-authors found that of the 32 London boroughs, Hackney had the highest social diversity and, in 2010, the second-highest deprivation. By 2015, it had also seen the most improvement on the IMD index, and is now an area undergoing intense gentrification, with house prices rising far above the London average, a fast-decreasing crime rate and a highly diverse population.

In addition to Hackney, the boroughs of Tower Hamlets, Greenwich, Hammersmith and Lambeth also combined high social diversity with high deprivation in 2010, and are now undergoing gentrification, with all of the positive and negative effects that come along with it.

The ability to predict the gentrification of neighbourhoods could help local governments and policy-makers improve urban development plans and alleviate the negative effects of gentrification while benefitting from economic growth.

In order to measure the social diversity of a given place or neighbourhood, the researchers defined four distinct measures: brokerage, serendipity, entropy and homogeneity. Brokerage is the ability of a place to connect people who are otherwise disconnected; serendipity is the extent to which a place can induce chance encounters between its visitors; entropy is the extent to which a place is diverse with respect to visits; and homogeneity is the extent to which the visitors to a place are homogenous in their characteristics.
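
To make these definitions concrete, here is a minimal sketch in Python of how two of the measures – brokerage and entropy – could be computed on a toy person-place network. The names, venues and check-ins below are invented, and this illustrates the definitions above rather than the study’s actual pipeline; serendipity and homogeneity additionally require visitors’ mobility histories and characteristics, which are omitted here.

```python
import math
from collections import Counter

import networkx as nx  # pip install networkx

# Hypothetical toy data: friendships among users, and (user, place) check-ins.
social = nx.Graph([("ann", "bob"), ("bob", "cat")])
checkins = [("ann", "cafe"), ("bob", "cafe"), ("cat", "cafe"),
            ("dan", "cafe"), ("ann", "cafe"), ("dan", "gym")]

def visitors(place):
    return {user for user, venue in checkins if venue == place}

def brokerage(place):
    """Fraction of visitor pairs NOT connected in the social network --
    the place's capacity to connect otherwise disconnected people."""
    vs = sorted(visitors(place))
    pairs = [(a, b) for i, a in enumerate(vs) for b in vs[i + 1:]]
    if not pairs:
        return 0.0
    strangers = sum(1 for a, b in pairs if not social.has_edge(a, b))
    return strangers / len(pairs)

def entropy(place):
    """Shannon entropy of check-ins over visitors: high when visits are
    spread across many different people rather than a few regulars."""
    counts = Counter(user for user, venue in checkins if venue == place)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(f"brokerage(cafe) = {brokerage('cafe'):.2f}")  # ~0.67: mostly strangers
print(f"entropy(cafe)   = {entropy('cafe'):.2f}")    # ~1.92 bits
```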

Within categories of places, the researchers found that some places were more likely places for friends to meet, and some were for more fleeting encounters. For example, in the food category, strangers were more likely to meet at a dumpling restaurant while friends were more likely to meet at a fried chicken restaurant. Similarly, friends were more likely to meet at a B&B, football match or strip club, while strangers were more likely to meet at a motel, art museum or gay bar.

“We understand that people who diversify their contacts socially and geographically have high social capital, but what about places?” said Hristova. “We all have a general notion of the social diversity of places and the people that visit them, but we’ve attempted to formalise this – it could even be used as a specialised local search engine.”

For instance, while there are a number of ways a tourist can find a highly-recommended restaurant in a new city, the social role that a place plays in a city is normally only known by locals through experience. “Whether a place is touristy or quiet, artsy or mainstream could be integrated into mobile system design to help newcomers or tourists feel like locals,” said Hristova.

Reference:
Desislava Hristova et al. ‘Measuring Urban Social Diversity Using Interconnected Geo-Social Networks.’ Paper presented to the International World Wide Web Conference, Montréal, 11-15 April 2016. http://www2016.ca/program-at-a-glance.html

Data from location-based social networks may be able to predict when a neighbourhood will go through the process of gentrification, by identifying areas with high social diversity and high deprivation.

We understand that people who diversify their contacts socially and geographically have high social capital, but what about places?
Desislava Hristova

Opinion: Here’s how tweets and check-ins can be used to spot early signs of gentrification


When you walk through a neighbourhood undergoing gentrification, you can sense it – the area is dominated by strange contradictions. Public spaces are populated by vagabonds and cool kids; abandoned buildings sit in disrepair next to trendy coffee shops; blocks of council housing abut glassy new developments.

Urbanists describe gentrification as a form of urban migration, in which a more affluent population displaces the original, lower-income population. In statistics, gentrification shows up as falling crime rates, rising housing prices and changes to the mix of people who live in an area.

If we could only predict where gentrification is likely to strike next, we might be able to alleviate its negative impacts – such as displacement – and take advantage of its more positive effects, which include economic growth. That’s why our latest study – conducted with colleagues at the University of Birmingham, Queen Mary University of London, and University College London – aimed to quantify the process of gentrification, and discover the warning signs.

Detecting urban diversity

We constructed four measures of urban social diversity using data from social media. By combining these measures with government statistics about deprivation, we were able to pinpoint a number of neighbourhoods undergoing gentrification in London.

Of course, social media is notoriously unsuitable for population studies, because of the “digital divide”: the split between people who can access the internet and those who can’t exists even within urban areas – so information from social media only captures part of the overall picture. Twitter users in particular are known to be predominantly young, affluent and living in urban areas.

But these are precisely the demographics responsible for gentrification. So, we used information from social media from 2010 and 2011 to define the “social diversity” of urban venues such as restaurants, bars, schools and parks.

Urban social diversity – in terms of population, economy and architecture – is known to be a factor in successful communities. In her famous book The Death and Life of Great American Cities, urban activist Jane Jacobs wrote that “cities differ from towns and suburbs in basic ways, and one of these is that cities are, by definition, full of strangers”.

Dropping in.David Abrahamovitch/Flickr, CC BY

In our work, we first measured the extent to which a place brings strangers together, based on the fraction of its visitors who are connected to one another on social media. This gave us an idea of whether a place tends to be frequented by strangers or friends. We further explored the diversity of these visitors in terms of their mobility preferences and spontaneity in their choice of venues. Although we did not consider demographics or income levels, there is a known relationship between people’s wealth and the diversity of their geographical interactions.

We studied the social network of 37,000 London users of Twitter, and combined it with what we knew about their mobility patterns from geo-located Foursquare check-ins posted to their public profiles.

By studying the number of strangers versus friends meeting at a bar, or the number of diverse versus similar individuals visiting an art gallery, we were able to quantify the overall diversity of London neighbourhoods in terms of their visitors.

Networks are powerful representations of the relationships between people and places. Not only can we draw links between people where a relationship – such as friendship – exists between them; we can also draw connections between two places if a visitor has been to both. We can even connect the two networks, by drawing links between people in the social network who have visited specific spots in the place network.

In this way, we are able to extract the social network of a place, and the place network of a person. By the time we’d finished crunching the data, we could take stock of the range of people who had visited a specific place, and the different places visited by any individual.
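
As a toy illustration of these two extractions – again with invented names and venues rather than the study’s Twitter and Foursquare data – the two coupled networks can be represented and queried like this:

```python
import networkx as nx  # pip install networkx

# Hypothetical toy data: a social network plus (user, place) check-ins.
friends = nx.Graph([("ann", "bob")])
visits = [("ann", "bar"), ("bob", "bar"),
          ("bob", "gallery"), ("cat", "gallery")]

def social_network_of(place):
    """The visitors to a place, with any friendships among them."""
    people = {user for user, venue in visits if venue == place}
    return friends.subgraph(people)

def place_network_of(person):
    """The places a person visited; two places are linked whenever
    some visitor has been to both."""
    mine = {venue for user, venue in visits if user == person}
    g = nx.Graph()
    g.add_nodes_from(mine)
    for user in {u for u, _ in visits}:
        theirs = sorted({v for u, v in visits if u == user and v in mine})
        g.add_edges_from((a, b) for i, a in enumerate(theirs)
                         for b in theirs[i + 1:])
    return g

print(list(social_network_of("bar").edges()))  # [('ann', 'bob')]
print(list(place_network_of("bob").edges()))   # [('bar', 'gallery')]
```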

When we compared the diversity of urban neighbourhoods with official government statistics on deprivation, we found that some highly deprived areas were also extremely socially diverse. In other words, there were lots of diverse social media users visiting some of London’s poorest neighbourhoods.

Diminishing deprivation

To find out what was going on, we took the newly published deprivation indices for 2015 and looked for changes in the levels of deprivation since our study period in 2011. The relationship was striking. The areas where we saw high levels of social diversity and extreme deprivation in 2011 were exactly the same areas that had experienced the greatest decreases in deprivation by 2015.

A prime example can be found in the London borough of Hackney. Anyone visiting Hackney might describe it in terms of the contradictions we mentioned before – but few of us could afford to live there today. In our study, Hackney was the highest ranking in deprivation and the highest ranking in social diversity in 2011. Between then and now, it has gone from being the second most deprived neighbourhood in the country to the 11th.

So, although social media may not be representative of the entire population, it can offer the key to measuring and understanding the processes of gentrification. Neither entirely good nor thoroughly bad, gentrification is a phenomenon that we should all watch out for. It will undoubtedly help to define how our cities transform in years to come.

Desislava Hristova, PhD Candidate, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Desislava Hristova (Computer Laboratory) discusses how data from location-based social networks can be used to predict when a neighbourhood will go through the process of gentrification.


Graduate earnings: what you study and where matters – but so does parents’ income


Latest research has shown that graduates from richer family backgrounds earn significantly more after graduation than their poorer counterparts, even after completing the same degrees from the same universities.

The finding is one of many from a new study, published today, which looks at the link between earnings and students’ background, degree subject and university.

The research also found that those studying medicine and economics earn far more than those studying other degree subjects, and that there is considerable variation in graduates’ earnings depending on the university attended.

The study was carried out by the Institute for Fiscal Studies and the universities of Cambridge and Harvard, including Professor Anna Vignoles from Cambridge’s Faculty of Education. It is the first time a ‘big data’ approach has been used to look at how graduate earnings vary by institution of study, degree subject and parental income.

The researchers say that many other factors beyond graduate earnings, such as intrinsic interest, will and should drive student choice. However, they write that the research shows the potential value of providing some useful information that might inform students’ choice of degree – particularly to assist those from more disadvantaged backgrounds who might find it harder to navigate the higher education system.

“It would seem important to ensure there is adequate advice and guidance given that graduates’ future earnings are likely to vary depending on the institution and subject they choose, with implications for social mobility,” write the researchers in the study’s executive summary.

The research used anonymised tax data and student loan records for 260,000 students up to ten years after graduation. The dataset includes cohorts of graduates who started university in the period 1998-2011 and whose earnings (or lack of earnings) are then observed over a number of tax years. The paper focuses on the tax year 2012/13.

The study found that those from richer backgrounds (defined as being approximately from the top 20% of households of those applying to higher education in terms of family income) did better in the labour market than the other 80% of students.

The average gap in earnings between students from higher and lower income backgrounds is £8,000 a year for men and £5,300 a year for women, ten years after graduation.

Even after taking account of subject studied and the characteristics of the institution of study, the average student from a higher income background earned about 10% more than other students.

The gap is bigger at the top of the distribution – the 10% highest earning male graduates from richer backgrounds earned about 20% more than the 10% highest earners from relatively poorer backgrounds. The equivalent premium for the 10% highest earning female graduates from richer backgrounds was 14%.

The study also showed that graduates are much more likely to be in work, and earn much more, than non-graduates. Ten years on, non-graduates are twice as likely as graduates to have no earnings (30% against 15% for the cohort who enrolled in higher education in 1999).

Partly as a result of this, half of non-graduate women had earnings below £8,000 a year at around age 30, say the researchers. Only a quarter of female graduates were earning less than this. Half were earning more than £21,000 a year.

Among those with significant earnings (which the researchers define as above £8,000 a year), median earnings for male graduates ten years after graduation were £30,000. For non-graduates of the same age median earnings were £21,000. The equivalent figures for women with significant earnings were £27,000 and £18,000.

“The research illustrates strongly that for most graduates, higher education leads to much better earnings than those earned by non-graduates, although students need to realise that their subject choice is important in determining how much of an earnings advantage they will have,” said Professor Vignoles.

The researchers also found substantial differences in earnings according to which university was attended, as well as which subject was studied. They say however that this is in large part driven by differences in entry requirements.  

For instance, more than 10% of male graduates from LSE, Oxford and Cambridge were earning in excess of £100,000 a year ten years after graduation, with LSE graduates earning the most. LSE was the only institution with more than 10% of its female graduates earning in excess of £100,000 a year ten years on.

Even without focusing on the very top, the researchers say they found a large number of institutions (36 for men and 10 for women) had 10% of their graduates earning more than £60,000 a year ten years on. At the other end of the spectrum, there were some institutions (23 for men and 9 for women) where the median graduate earnings were less than those of the median non-graduate ten years on.

However, the researchers say that it is important to put this in context. “Given regional differences in average wages, some very locally focused institutions may struggle to produce graduates whose wages outpace England-wide earnings, which include those of people living in London, where full-time earnings for males are around 50% higher than in some other regions, such as Northern Ireland,” they write.

In terms of earnings according to subject, medical students were easily the highest earners at the median ten years out, followed by those who studied economics. For men, median earnings for medical graduates were about £50,000 after ten years, and for economics graduates £40,000.

Those studying the creative arts had the lowest earnings, and earned no more on average than non-graduates. However, the researchers say that some of these earnings differences are, of course, attributable to differences in student intake – since students with different levels of prior achievement at A-level take different subject options.

“When we account for different student intakes across subjects, only economics and medicine remain outliers with much higher earnings at the median as compared to their peers in other subjects,” write the researchers.

After allowing for differences in the characteristics of those who take different subjects, male medical graduates earn around £13,000 more at the median than similar engineering and technology graduates; the gap for women is approximately £16,000. Both male and female medical graduates earn around £14,000 more at the median than similar law graduates.

“Earnings vary substantially with university, subject, gender and cohort,” said study co-author Neil Shephard of Harvard University. “This impacts on which parts of the HE sector the UK Government funds through the subsidy inherent within income-contingent student loans. The next step in the research is to quantify that variation in funding, building on today's paper.”

Reference:
Institute for Fiscal Studies working paper: 'How English domiciled graduate earnings vary with gender, institution attended, subject and socio-economic background', Jack Britton, Lorraine Dearden, Neil Shephard and Anna Vignoles.

First ‘big data’ research approach to graduate earnings reveals significant variations depending on student background, degree subject and university attended.  

The research illustrates strongly that for most graduates, higher education leads to much better earnings than those earned by non-graduates, although students need to realise that their subject choice is important in determining how much of an earnings advantage they will have
Anna Vignoles

Opinion: How to launch a rocket into space … and then land it on a ship at sea


On Friday 8 April 2016, SpaceX’s Falcon 9 rocket launched a mission to deliver a spacecraft called Dragon with its payload of supplies and experiments into a trajectory towards the International Space Station (ISS). Most remarkably, the first-stage booster then landed on a ship (see below).

 

Landing: how it’s done.

 

This is no easy task. Think back to 1969 when the Apollo 11 mission delivered three astronauts to the moon. Neil Armstrong and Buzz Aldrin walked on the moon while Michael Collins piloted the command module in lunar orbit. All three returned safely to Earth.

The first stage of the huge Saturn V rocket that launched them into space burned for about three minutes and then crashed into the ocean. The second stage burned for a further six minutes, taking the craft into near-Earth orbit. It too was jettisoned and then burned up during its descent to Earth. The third stage burned for nine more minutes to send the astronauts towards the moon – again burning up on re-entry. The Saturn V rocket, at a cost of US$6 billion in 1969, was completely lost.

 

On the astronauts' return, only the command module splashed down in the Pacific Ocean. Of the 140 tonnes of metal that were launched, only five tonnes returned to Earth. And at launch, the mission carried a staggering 3,000 tonnes of fuel … more on this later. Just imagine the cost saving if the Saturn V had been able to return to Earth almost intact – and then land itself to be used on another mission. Well, as the latest SpaceX landing – along with the reusable technology being developed by Blue Origin – proves, that dream is becoming reality.

Of course, there have been reusable spacecraft before. The space shuttle was intended as a reusable spacecraft. The orbiter returned to Earth landing like a conventional aircraft, and the two solid rocket boosters could be recovered from the sea. Only the huge orange external tank would burn up, but getting the spacecraft ready for the next flight was slow and expensive.

The SpaceX philosophy is that the majority of the rocket can be rapidly recovered, refuelled and reflown, making for significant cost savings. Fuel is less than half a percent of the cost of the mission, so there is potential to decrease the cost of getting to space dramatically.

The rocket science bit

But what of the rocket science? Well, the ISS is in “near-Earth orbit”, at an altitude of about 400km. This is less than the distance from London to Paris, albeit straight up. It is orbiting at a speed of about 7.7km/s – it only takes about 90 minutes to go once around the Earth.

The SpaceX rocket comes in to land on the drone ship. SpaceX/Flickr

 

So how much energy does it take to deliver 1kg of payload to the ISS? First, think of kinetic energy (KE) – yes, you learned this at school. At 7.7km/s, the KE per kilogram works out at about 30MJ (megajoules). But you also need to consider gravitational potential energy (GPE) – yes, you did this at school, too. The force of the Earth’s gravity doesn’t change very much over this short distance upwards given that the Earth’s radius is 6,400km. So it’s easy to work out that the GPE needed to lift 1kg up to 400km is only about 4MJ. This is small compared with the 30MJ of KE required.
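
These figures are easy to check with a quick back-of-envelope calculation, using the rounded values quoted above:

```python
# Back-of-envelope check of the per-kilogram energy figures quoted above.
G = 9.81          # m/s^2, surface gravity (nearly constant up to 400km)
V_ORBIT = 7700.0  # m/s, orbital speed in near-Earth orbit
ALTITUDE = 400e3  # m, approximate ISS altitude

ke = 0.5 * V_ORBIT ** 2  # kinetic energy per kg
gpe = G * ALTITUDE       # gravitational potential energy per kg

print(f"KE  ~ {ke / 1e6:.0f} MJ/kg")   # ~30 MJ
print(f"GPE ~ {gpe / 1e6:.0f} MJ/kg")  # ~4 MJ
```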

But not only do you need to get your payload to 7.7km/s – you have to carry the fuel in your tanks as well. Thanks to Konstantin Tsiolkovsky’s “ideal rocket equation”, however, we know that liquid fuels, which have a faster gas jet speed, are more efficient than solid fuels. Essentially, by using liquid fuel, each 1kg of payload needs a minimum of 4.5kg of fuel to reach a speed of 7.7km/s, while with solid fuel you’d need more than 20kg – and this is before taking the rocket mass into account. This is a very good reason why rocketeers, including SpaceX, prefer liquid fuels: with a high gas jet speed, the launch mass is much less.
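
To see where those fuel figures come from, here is a sketch of the ideal rocket equation with assumed round-number gas jet speeds, chosen to illustrate the liquid-versus-solid comparison in the text rather than to describe any actual engine:

```python
import math

DELTA_V = 7700.0  # m/s, speed required for near-Earth orbit

def fuel_per_kg_payload(exhaust_speed):
    """Tsiolkovsky: delta_v = ve * ln(m0 / mf). Ignoring the rocket's
    structural mass, fuel per kg of payload is exp(delta_v / ve) - 1."""
    return math.exp(DELTA_V / exhaust_speed) - 1.0

# Assumed gas jet speeds: ~4.4km/s for a high-performance liquid fuel,
# ~2.5km/s for a typical solid fuel.
print(f"liquid: {fuel_per_kg_payload(4400):.1f} kg fuel per kg payload")
print(f"solid:  {fuel_per_kg_payload(2500):.1f} kg fuel per kg payload")
```

With these assumptions, the liquid-fuel figure comes out at roughly 4.8kg of fuel per kilogram of payload – close to the 4.5kg quoted above – and the solid-fuel figure at just over 20kg.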

The secret of SpaceX

So how does SpaceX’s rocket work? The Falcon 9 is a two-stage launch vehicle. Each stage uses a liquid fuel with liquid oxygen (very similar to the kerosene and liquid oxygen fuel used in the Saturn V – chemistry hasn’t changed in 50 years). Falcon 9’s first stage “booster” is by far the largest and most expensive component of the launch mass and it makes sense to try to recover it for reuse – it’s no good to have it splash into the sea because the resulting damage and corrosion would render it useless.

And so SpaceX has developed the Falcon 9 booster so that it can land on a ship. The idea is that shortly after main engine cutoff and stage separation, the booster flips over and directs itself towards an unmanned drone ship that is waiting as a landing pad.

The ideal trajectory. SpaceX, Author provided

 

During its descent, the booster reaches speeds of around 1,000m/s (double this after more adventurous geostationary missions) but its fins and engines control the speed and direction of the Falcon, slowing it down as it approaches the Earth. At the last minute, four legs then deploy – and it makes a soft landing at a speed of less than 6m/s. All of this delicate navigation is performed by onboard computers and inertial navigation systems. The landing is so fast that no human could react quickly enough to ensure a smooth touchdown.
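
For a rough sense of the braking task, assume a constant net deceleration of about 3g – an illustrative figure only, since a real descent varies engine thrust and exploits atmospheric drag:

```python
# Rough braking-distance estimate using v^2 = u^2 - 2*a*s (assumed values).
U = 1000.0  # m/s, descent speed quoted above
V = 6.0     # m/s, touchdown speed quoted above
A = 30.0    # m/s^2, assumed ~3g net deceleration

s = (U ** 2 - V ** 2) / (2 * A)
print(f"braking distance ~ {s / 1000:.1f} km")  # ~16.7 km
```

Even under hard braking, then, the slow-down plays out over many kilometres of descent – all computed on the fly.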

To make things trickier, the ship is pitching, rolling and heaving in the sea. It’s hard enough for pilots to land on an aircraft carrier – and the Falcon is a tall, slender column some 20 storeys high. Once the Falcon has landed, the legs lock out rigidly. In one of the earlier Falcon missions, one of the legs failed, so even though the landing was a success, the Falcon fell over shortly afterwards, resulting in a huge conflagration. But that’s why they use an unmanned drone ship as a landing platform – no-one gets hurt. Of course, it’s easier to land on land – and the Falcon does this if the mission flies over land. But the ability to land at sea adds flexibility.

SpaceX rocket coming in to land. SpaceX/Flickr

 

What next?

Falcon 9 and Dragon are paving the way for a new generation of low-cost reusable spacecraft – and SpaceX is already developing Dragon to launch and return up to seven astronauts into orbit and beyond. Perhaps next, there’ll be a refuelling base on the moon? Then Mars is an awful lot closer.

The flexibility offered by reusable vehicles is the stuff of science fiction – the classic Star Wars films and, more recently, The Martian and Interstellar are nothing without spaceships that launch at the push of a button. We’re not there yet, but with spacecraft that can hop around like aircraft, anything is possible.

The author wrote this piece with the assistance of one of his former Trinity students, Lars Blackmore, who now works for SpaceX as Principal Rocket Landing Engineer. Lars graduated with his MEng from Cambridge in 2003 and a PhD from MIT in 2007. He is now responsible for landing Falcon 9.

Hugh Hunt, Reader in Engineering Dynamics and Vibration, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Hugh Hunt (Department of Engineering) discusses the intricacies of reusable spacecraft.


Overweight individuals more likely to make unhealthier choices when faced with real food


The researchers found that when making hypothetical food choices, lean and overweight people showed highly comparable patterns both in terms of their choices and the accompanying brain activity. The activity in the brain was a good predictor of which foods they would choose when later faced with a selection of real food choices. But the presence of real food influenced choices differently across the groups.

In a related study published recently in the International Journal of Obesity, the researchers show that the brain structure in obese people differs from that in lean individuals in key regions of the brain involved in processing value judgements.

More than 1.3 billion people across the world are overweight and an additional 600 million are obese. Being overweight or obese are leading risk factors for deaths globally, being associated with increased incidence of type 2 diabetes, cardiovascular disease and some cancers.

Previous studies have suggested that obesity is associated with a greater consumption of unhealthy foods – those with high sugar and/or fat content – even though lean and overweight people do not appear to differ in their judgements of the relative healthiness of foods. To explore this apparent contradiction further, researchers from the University of Cambridge and the Medical Research Council Human Nutrition Unit examined the relationship between how people judge the healthiness and tastiness of food and whether this predicts their food choices at a buffet lunch.

The researchers asked 23 lean and 40 overweight individuals to rate 50 common snack foods, presented on a computer screen, on a five-point scale for their healthiness and tastiness independently. They then examined the degree to which each of these individually-rated attributes appeared to influence a person’s willingness to swap a particular food for one that had previously been rated as “neutral”.

Participants were shown a picture of the “neutral” reference food item at the beginning of the task and told that on each trial they would have to choose between the food item shown on that trial and the reference food item. For example, if they had chosen a granola bar as neutral (and hence their reference food), they might be shown an apple and asked if they would be willing to swap the granola bar for the apple. During this swap-choice task, participants were placed in a functional magnetic resonance imaging (fMRI) scanner, which indirectly measures activity in the brain.

Neither choice behaviour nor accompanying brain activity differed measurably according to participants’ body weight. As one might predict, for both groups, taste was a much better guide to whether a person might choose to swap a food than healthiness. Willingness to swap a given food was associated with greater levels of activity in a key region of the brain: the ventromedial prefrontal cortex, which previous studies have consistently related to the degree to which people value rewards. Activity in this region did not differ across the groups.

Following the scanner experiment, participants were presented with an all-you-can-eat buffet with a selection of sandwiches, desserts, drinks and snacks. For each type of food, there were healthier and less healthy options, such as a chicken sandwich and a BLT (bacon, lettuce and tomato) sandwich, or cola and diet cola. Once they had rated the buffet choices for healthiness and tastiness, the participants were allowed to eat freely and as much as they wanted.

Brain activity predicted the proportion of healthy food consumed in both lean and overweight individuals, and both groups selected a greater proportion of foods that they had rated as tasty. However, the overweight participants consumed a greater proportion of unhealthy foods than the lean participants.

At the start of the experiment, the researchers had also measured each individual’s impulsivity – in other words, their self-control – using a mixture of computer tasks and a questionnaire. While the level of impulsivity made no difference to lean individuals’ selections, the researchers found an association in overweight people between impulsivity and consumption of unhealthy foods – the greater their level of impulsivity, the greater the proportion of unhealthy food they ate. Once again, this effect was apparent only when real food was available, and was not seen during the more hypothetical valuation decisions.

“There’s a clear difference between hypothetical food choices that overweight people make and the food they actually eat,” says Dr Nenad Medic from the Department of Psychiatry. “Even though they know that some foods are less healthy than others and say they wouldn’t necessarily choose them, when they are faced with the foods, it’s a different matter.

“This is an important insight for public health campaigns as it suggests that just trying to educate people about the healthiness of food choices is not enough. The presence of unhealthy food options is likely to override people’s decisions. In this respect, food choice does not appear to be a rational decision - it can become divorced from what the person knows and values.”

In a second study, the researchers looked at the brain structure of over 200 healthy individuals using an MRI scanner and found an association between body mass index (BMI) and brain structure. Strikingly, one of the regions showing this relationship overlapped with the region responding to food value in the first study – the ventromedial prefrontal cortex. The grey matter layer in this region was thinner in people with greater BMI.

“Perhaps this offers us some clues about the first observation – that rational, hypothetical valuation decisions don’t fully translate into healthy choices in the overweight people when they are offered real food choices,” says Professor Paul Fletcher from the Department of Psychiatry. “While the region is clearly responding in a way that is not distinct from leaner people, perhaps the structural differences suggest a reduced ability to translate what one knows into what one chooses.

“Although we can only speculate at this stage, and we really don’t know, for example, whether this brain change is a cause or a consequence of increased weight, this could help explain why this same group of people found it harder to stick to their original, healthier food choices when presented with a buffet selection.”

Professor Theresa Marteau, Director of the Behaviour and Health Research Unit at the University of Cambridge, a co-author of the study, adds: "These findings attest to the power of environments in overwhelming many people’s desires and intentions to eat more healthily. The findings also reinforce the growing evidence that effective obesity policies are those that target food environments rather than education alone.”

The research was funded by the Bernard Wolfe Health Neuroscience Fund, the Wellcome Trust and the Medical Research Council.

Reference
Medic, N et al. The presence of real food usurps hypothetical health value judgment in overweight people. eNeuro; 13 April 2016; DOI 10.1523/ENEURO.0025-16.2016

Medic, N et al. Increased Body Mass Index is Associated with Specific Regional Alterations in Brain Structure. International Journal of Obesity; e-pub 22 March 2016; DOI 10.1038/ijo.2016.42

Overweight people make unhealthier food choices than lean people when presented with real food, even though both make similar selections when presented with hypothetical choices, according to research led by the University of Cambridge and published today in the journal eNeuro.

This is an important insight for public health campaigns as it suggests that just trying to educate people about the healthiness of food choices is not enough. The presence of unhealthy food options is likely to override people’s decisions
Nenad Medic

From the Mayans to the moors: a new film series shows biodiversity conservation in a new light


When most people think about biodiversity conservation they think about the importance of protecting the variety of life on Earth. They might not think about how the principles used to study species endangerment and its impacts on people are also used to understand the extinction of languages; or what nature writers like William Wordsworth can tell us about landscapes that previous generations took for granted but have become lost to us.

Now, a series of eight films released today by the University of Cambridge Conservation Research Institute (UCCRI) sets out to highlight these remarkable connections, demonstrating the breadth of research interests at the University that have the potential to intersect with 21st-century issues in biodiversity conservation.

Conservation research today has become a global and interdisciplinary field, raising complex issues such as how toxic waste sites in East Africa affect the increasing rarity of the cuckoo in the UK; or how the fashion industry impacts directly on the global water profile both in terms of water pollution as well as waste; or how our consumption of red meat affects climate change.

“The series of videos focuses on mutual learning and collaboration between researchers within the arts and humanities, the natural and social sciences, practitioners, policy makers and citizens, all of whom are integral to understanding conservation problems,” explains UCCRI Director, Dr Bhaskar Vira. “UCCRI provides a space to explore the understanding that emerges when disciplinary silos are broken down, and to foster productive – often mutually critical – dialogue between colleagues from across the University to promote a deeper engagement with the shared challenges that confront the future of humanity and the planet that we inhabit.”

The videos were filmed and produced by UCCRI’s Leverhulme Trust funded Artist in Residence, photographer Toby Smith. Each video showcases researchers from a range of University departments – plant sciences, zoology, social anthropology, English, architectural engineering, land economy, geography, and history and philosophy of science – relating their work and its relevance to conservation.

For example, Dr Jenny Bavidge (Faculty of English) shows how the arts and humanities are important to conservationists through recognising the need to establish conservation understanding in education and early childhood. “The science and the arts have got much better at speaking to each other and at coming up with new ways of thinking about the problems that are affecting us all,” she explains. “The new nature writing is where we are seeing this come together a lot – there’s immense interest and focus on children’s exposure and experience of the environment.”

In another video, social anthropologist Dr Charles Piggot explains how “until recently, conservation has tended to focus either on natural heritage or cultural heritage, but today a new paradigm is emerging, the environmental humanities.” This collaboration of natural science, social science, and arts and humanities has enabled conservation research to increase its scope and encouraged the consideration of conservation issues from a much wider angle.

Natural scientists within conservation research are increasingly realising that their work relies not only on detailed biological knowledge but also on understanding social issues – learning the social rules of engagement in the country they are working in, how policy makers and governments operate and how to communicate effectively with the local people in order for their work to be of significance. Zoologist Andrew Bladon explains that, when working in Ethiopia on the Ethiopian Bush Crow and the White Tailed Swallow, local engagement was important “because tribal law is very strong and without the will of the local Borana leaders even the national park and the protection that it’s supposed to bring to the species would be ineffective.” 

Conservation is primarily underpinned by human behaviour, so understanding social factors is important. Plant scientist Tommaso Jucker is working on a project in south-east Asia looking at the impacts of human disturbance and logging on the forests there. For him, collaboration between disciplines that complement each other’s expertise is vital. Rosemary Ostfeld, a land economist, explores the social, environmental and economic aspects of palm oil production, and it is crucial for her to liaise with stakeholders, particularly to determine the effectiveness of initiatives she works with, such as the Roundtable on Sustainable Palm Oil.

Dr Helen Curry, a historian of science, takes a fascinating and novel approach to understanding how new scientific knowledge, tools and technologies shape people’s attitudes towards, and their interactions with, different aspects of the natural world. Curry studies contemporary conservationists and their continuing and increasing interest in using technologies to conserve endangered species. Currently her research focuses on the entanglement of industrial agriculture and biodiversity.

Dr Max Bock, an architectural engineer, explains that sustainability issues are inherently multidisciplinary and require attention from several perspectives to be addressed adequately. He also explains how his work on bamboo as a sustainable building material has been taken up by NGOs internationally – a prime example of how researchers and practitioners can work together successfully.

“It is very important for critical thinking to have a cross-boundary between different disciplines and I think that’s what distinguishes a more overarching approach to research,” says geographer Anca Serban, whose work in India explores how to feed the world under a growing pressure from increased demands. “Whereas biodiversity aspects in conservation research focus on how we can minimise the impact on habitats and species, we have to weigh up the trade offs of managing conservation to ensure it does not impinge on people’s livelihoods or increase poverty, particularly in developing countries.”

UCCRI has become the hub for interdisciplinary work on conservation and sustainability across the University, and is part of the newly opened David Attenborough Building in Cambridge, along with nine conservation organisations that form the Cambridge Conservation Initiative (CCI).

The UCCRI team is keen to seek out researchers within the University who will benefit from the opportunities offered by this new campus at the heart of Cambridge. Alison Harvey, responsible for UCCRI Research and Communications and creative director of the interdisciplinary conservation videos, explains: “Many people may not immediately recognise their work as being relevant to debates about the conservation of biodiversity. We really want people to think out of the box in terms of how their work might relate to conservation and to contact us and find out about opportunities to collaborate with other researchers within the University, and with the organisations associated with CCI.”

Could your work make a difference to conservation?  Contact UCCRI for an informal chat: uccri-administrator@conservation.cam.ac.uk

From the plight of the Ethiopian Bush Crow, to representation of nature in Winnie the Pooh, to the extinction of ancient Latin American languages, the wide breadth of research connected with biodiversity conservation at the University of Cambridge is reflected in a series of films released today.

The series of videos focuses on mutual learning and collaboration between researchers within the arts and humanities, the natural and social sciences, practitioners, policy makers and citizens, all of whom are integral to understanding conservation problems.
Bhaskar Vira, University of Cambridge Conservation Research Institute


Biggest library of bat sounds compiled to track biodiversity


An international team led by scientists from the University of Cambridge, University College London (UCL), and the Zoological Society of London (ZSL), developed the reference call library and a new way of classifying calls to accurately and quickly identify and differentiate bat species.

The researchers say the method can be used to monitor biodiversity change and complete information on bat species distributions in remote and understudied regions in Mexico. It could also be expanded for use in other areas across the Neotropics, which incorporates South and Central America, and the Caribbean Islands and Florida.

It is the first time automatic classification for bat calls has been attempted for a large variety of species, most of them previously noted as hard to identify acoustically.

“Audio surveys are increasingly used to monitor biodiversity change, and bats are especially useful for this as they are an important indicator species, contributing significantly to ecosystems as pollinators, seed dispersers and suppressors of insect populations,” explains lead author Dr Veronica Zamora-Gutierrez, from the University of Cambridge Conservation Research Institute and UCL.

“By tracking the sounds they use to explore their surroundings, we can characterise the bat communities in different regions in the long term and gauge the impact of rapid environmental change.”

“Before now it was tricky to do as many bat species have very similar calls and differ in how well they can be detected. We overcame this by using machine learning algorithms together with information about hierarchies to automatically identify different bat species.”
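
To give a flavour of this kind of acoustic classification, here is a hypothetical sketch using scikit-learn, with synthetic feature data (standing in for per-call measurements such as peak frequency, duration and bandwidth). The study’s actual pipeline differs – notably, it also exploits the taxonomic hierarchy of species – but the basic train-and-evaluate pattern is similar:

```python
# Hypothetical sketch: classifying bat species from per-call acoustic
# features. Synthetic clusters stand in for real feature extraction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_species, n_calls, n_features = 5, 600, 3

# Give each stand-in species its own cluster of feature values.
centres = rng.normal(size=(n_species, n_features))
labels = rng.integers(0, n_species, size=n_calls)
features = centres[labels] + 0.3 * rng.normal(size=(n_calls, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```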

For the study, published today in Methods in Ecology and Evolution, the researchers ventured into some of the most dangerous areas of Mexico, primarily the northern deserts, to collect 4,685 calls from 1,378 individual bats, representing 59 of the more than 130 species occurring in Mexico.

Most of the areas hadn’t been sampled before and the data collected, along with additional information from collaborators, provides calls for over half of the species and all of the families of bats in Mexico.

Co-author, Professor Kate Jones, UCL and ZSL, said: “We’ve shown it is possible to reliably and rapidly identify bats in mega-diverse areas, such as Mexico, and we hope this encourages uptake of this method to monitor biodiversity changes in other biodiversity hotspot areas such as South America.”

“Our ability to readily map ecological communities is imperative for understanding the impact of the Anthropocene and implementing effective conservation measures.”

The team now plan on developing a citizen science monitoring programme for Mexican bats to collect further information on bat calls. They will also develop more robust tools for bat identification using the Bat Detective website which will allow them to refine the machine learning algorithms used by the software.

The study also involved researchers from the IPN CIIDIR Durango (Mexico), Universidad Veracruzana (Mexico), Western University (Canada), University of Bristol, University of Ulm (Germany), Smithsonian Tropical Research Institute (Panama), Ernst-Moritz-Arndt University (Germany), University College Dublin and University of Warwick. It was funded by CONACYT, the Cambridge Commonwealth European and International Trust, The Rufford Foundation, the American Society of Mammalogists, Bat Conservation International, Idea Wild, The Whitmore Trust and the Engineering and Physical Sciences Research Council (EPSRC).

Adapted from a University College London press release.

Inset image: The western yellow bat (Lasiurus xanthinus) is a species of vesper bat found in Mexico and the south-western United States (UCL/ZSL).

Researchers have compiled the largest known library of bat calls to identify and conserve rare species in Mexico – a country which is home to many of the world’s bats and has one of the highest rates of species extinction and habitat loss.

Bats are especially useful for monitoring biodiversity change as they are an important indicator species, contributing significantly to ecosystems
Veronica Zamora-Gutierrez


Opinion: How LSD helped us probe what the ‘sense of self’ looks like in the brain


Every single person is different. We all have different backgrounds, views, values and interests. And yet there is one universal feeling that we all experience at every single moment. Call it an “ego”, a “self” or just an “I” – it’s the idea that our thoughts and feelings are our own, and no one else has access to them in the same way. This may sound a bit like post-war French existentialism or psychoanalysis, but it’s actually a topic that neuroscientists are increasingly addressing.

We were part of a team interested in finding out how this sense of self is expressed in the brain – and what happens when it dissolves. To do that, we used brain imaging and the psychedelic drug LSD.

Our sense of self is something so natural that we are not always fully aware of it. In fact, it is when it is disturbed that it becomes most noticeable. This could be due to mental illnesses such as psychosis, when people might experience the delusional belief that their thoughts are no longer private but can be accessed and even modified by other people. Or it could be due to the influence of psychedelic drugs such as LSD, when the user can feel that their ego is “dissolving” and that they are becoming at one with the world. From a scientific point of view, these experiences of “ego death” or ego dissolution are also opportunities to search for this sense of self in the brain.

Our study, led by Enzo Tagliazucchi and published in Current Biology, set out to probe what happens in the brain when our sense of self is altered by psychedelic drugs. We studied 15 healthy volunteers before and after taking LSD, which altered their normal sense of self and their relationship with the environment. The subjects were scanned, while intoxicated and while receiving a placebo, using functional MRI, a technique which allows us to study the brain’s activity by measuring changes in blood flow. By contrasting the activity of the brain on placebo with its activity after taking LSD, we could start exploring the brain mechanisms involved in the normal experience of the self.

A holistic understanding

Results of this study showed that the experience of ego-dissolution induced by LSD was not related to changes in only one region of the brain. Instead, the drug affected the way that several brain regions were communicating with the rest of the brain, increasing their level of connectivity. These included the fronto-parietal region, an area that has previously been linked to self-awareness, and the temporal region, an area involved in language comprehension and creating visual memories. The brain on LSD would therefore be similar to an orchestra in which the musicians are no longer playing together in time, rather than an orchestra in which some players are missing or malfunctioning.

Brain anatomy (Primalchaos/Wikimedia).

In a previous paper, we showed that the brain tends to organise itself into groups or modules of regions working closely together and specialising in a specific activity, a property called modularity. For example, the brain regions specialised for vision are normally organised as a module of the human brain network. LSD disrupted this modular organisation of the brain – and the level of modular disorganisation was linked with the severity of ego-dissolution that volunteers experienced after taking the drug. It seems the modular organisation of the healthy brain works as the scaffolding that allows us to maintain a sense of self.
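To make the idea of modularity concrete, here is a toy sketch (assuming the networkx library, with a synthetic graph standing in for real fMRI connectivity data): a strongly modular network has a high modularity score Q, and adding indiscriminate cross-module links, loosely analogous to the increased connectivity seen under LSD, drives Q down.

```python
# A minimal sketch of network modularity, using a synthetic graph
# rather than real fMRI-derived connectivity (illustration only).
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy "brain": two tightly knit modules with sparse links between them
G = nx.planted_partition_graph(l=2, k=10, p_in=0.8, p_out=0.05, seed=0)
Q_before = modularity(G, greedy_modularity_communities(G))

# Add indiscriminate extra links, dissolving the modular structure
random.seed(0)
nodes = list(G.nodes)
for _ in range(40):
    u, v = random.sample(nodes, 2)
    G.add_edge(u, v)
Q_after = modularity(G, greedy_modularity_communities(G))

print(f"Modularity before: {Q_before:.2f}, after extra links: {Q_after:.2f}")
```

In the study itself, it was the strength of this kind of breakdown in modular organisation that tracked how severely volunteers experienced ego-dissolution.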

But on a more fundamental note, these results highlight that a full understanding of the brain will never be complete unless we focus on the connectivity between regions as part of a complex network, irrespective of the level of microscopic detail we might have about what a single region does – just as a symphony is fully appreciated only when one listens to all members of the orchestra playing together, not by studying each individual instrument separately.

By investigating the psychedelic effects of LSD with brain scanning, we can open the doors of perception to discover how the familiar, egotistical sense of self depends on a particular pattern of brain network organisation. Our sense of individuality may be down to the overall configuration that emerges from the interactions of multiple brain regions. When this organisation is disrupted by LSD, and particularly when the modular organisation falls apart, our sense of self, and the distinct boundaries between us, the environment and others might be lost.

Nicolas Crossley, Honorary Research Fellow at the Department of Psychosis Studies, King's College London and Ed Bullmore, Professor of Behavioural and Clinical Neuroscience, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Ed Bullmore (Department of Psychiatry) and Nicolas Crossley (King's College London) discuss their work to find out how the sense of self is expressed in the brain.


Cambridge to research future computing tech that could “ignite a technology field”


A project which aims to establish the UK as an international leader in the development of “superconducting spintronics” – technology that could significantly increase the energy-efficiency of data centres and high-performance computing – has been announced.

Led by researchers at the University of Cambridge, the “Superspin” project aims to develop prototype devices that will pave the way for a new generation of ultra-low power supercomputers, capable of processing vast amounts of data, but at a fraction of the huge energy consumption of comparable facilities at the moment.

As more economic and cultural activity moves online, the data centres which house the servers needed to handle internet traffic are consuming increasing amounts of energy. An estimated three per cent of power generated in Europe is, for example, already used by data centres, which act as repositories for billions of gigabytes of information.

Superconducting spintronics is a new field of scientific investigation that has only emerged in the last few years. Researchers now believe that it could offer a pathway to solving the energy demands posed by high performance computing.

As the name suggests, it combines superconducting materials – which can carry a current without losing energy as heat – with spintronic devices. These are devices which manipulate a feature of electrons known as their “spin”, and are capable of processing large amounts of information very quickly.

Given the energy-efficiency of superconductors, combining the two sounds like a natural marriage, but until recently it was also thought to be completely impossible. Most spintronic devices have magnetic elements, and this magnetism suppresses superconductivity, wiping out any energy-efficiency benefits.

However, recent research at Cambridge and other institutions, stemming from the discovery of spin-polarised supercurrents at the University in 2010, has shown that it is possible to power spintronic devices with a superconductor. The aim of the new £2.7 million project, which is being funded by the Engineering and Physical Sciences Research Council, is to use this as the basis for a new style of computing architecture.

Although work is already underway in several other countries to exploit superconducting spintronics, the Superspin project is unprecedented in terms of its magnitude and scope.

Researchers will explore how the technology could be applied in future computing as a whole, examining fundamental problems such as spin generation and flow, and data storage, while also developing sample devices. According to the project proposal, the work has the potential to establish Britain as a leading centre for this type of research and “ignite a technology field.”

The project will be led by Professor Mark Blamire, Head of the Department of Materials Sciences at the University of Cambridge, and Dr Jason Robinson, University Lecturer in Materials Sciences, Fellow of St John’s College, University of Cambridge, and University Research Fellow of the Royal Society. They will work with partners in the University’s Cavendish Laboratory (Dr Andrew Ferguson) and at Royal Holloway, London (Professor Matthias Eschrig).

Blamire and Robinson’s core vision for the programme is “to generate a paradigm shift in spin electronics, using recent discoveries about how superconductors can be combined with magnetism”. The programme will provide a pathway to dramatic improvements in computing energy efficiency.

Robinson added: “Many research groups have recognised that superconducting spintronics offer extraordinary potential because they combine the properties of two traditionally incompatible fields to enable ultra-low power digital electronics.”

“However, at the moment, research programmes around the world are individually studying fascinating basic phenomena, rather than looking at developing an overall understanding of what could actually be delivered if all of this was joined up. Our project will aim to establish a closer collaboration between the people doing the basic science, while also developing demonstrator devices that can turn superconducting spintronics into a reality.”

The initial stages of the five-year project will be exploratory, examining different ways in which spin can be transported and magnetism controlled in a superconducting state. By 2021, however, the team hope that they will have manufactured sample logic and memory devices – the basic components that would be needed to develop a new generation of low-energy computing technologies.

The project will also report to an advisory board, comprising representatives from several leading technology firms, to ensure an ongoing exchange between the researchers and industry partners capable of taking its results further.

“The programme provides us with an opportunity to take international leadership of this as a technology, as well as in the basic science of studying and improving the interaction between superconductivity and magnetism,” Blamire said. “Once you have grasped the physics behind the operation of a sample device, scaling up from the sort of models that we are aiming to develop is not, in principle, too taxing.”

A Cambridge-led project aiming to develop a new architecture for future computing based on superconducting spintronics - technology designed to increase the energy-efficiency of high-performance computers and data storage - has been announced.

Superconducting spintronics offer extraordinary potential because they combine the properties of two traditionally incompatible fields to enable ultra-low power digital electronics
Jason Robinson
Growing quantities of data storage online are driving up the energy costs of high-performance computing and data centres. Superconducting spintronics offer a potential means of significantly increasing their energy-efficiency to resolve this problem.


UK steel can survive if it transforms itself, say researchers


The report, by Professor Julian Allwood, argues that in order to survive, the UK steel industry needs to refocus on steel recycling and on producing products for end users. Tata Steel’s UK exit, he argues, should be viewed not as a catastrophe but as an opportunity.

Allwood’s report, A bright future for UK steel: A strategy for innovation and leadership through up-cycling and integration, uses evidence gathered from over six years of applied research by 15 researchers, funded by the UK’s Engineering and Physical Sciences Research Council (EPSRC) and industrial partners spanning the global steel supply chain. It is published online today (15 April).

“Tata Steel is pulling out of the UK, for good reason, and there are few if any willing buyers,” said Allwood, from Cambridge’s Department of Engineering. “Despite the sale of the Scunthorpe plant announced earlier this week, the UK steel industry is in grave jeopardy, and it appears that UK taxpayers must either subsidise a purchase, or accept closure and job losses.

“However, we believe that there is a third option, which would allow a transformation of the UK’s steel industry.”

One option for the UK steel industry is to refocus on recycling steel rather than producing it from scratch. The global market for steel recycling is projected to grow at least three-fold in the next 30 years, but despite the fact that more than 90% of steel is recycled, the processes by which recycling happens are out of date. The quality of recycled steel is generally low, due to poor control of its composition.

Because of this, old steel is generally ‘down-cycled’ to the lowest value steel application – reinforcing bar. According to Allwood, the UK’s strengths in materials innovation could be applied to instead ‘up-cycle’ old steel to today’s high-tech compositions.

According to Allwood, today’s global steel industry has more capacity for making steel from iron ore than it will ever need again. On average, products made with steel last 35-40 years, and around 90% of all old steel is collected. It is likely that, despite the current downturn, global demand for steel will continue to grow, but all future growth can be met by recycling our existing stock of steel. “We will never need more capacity for making steel from iron ore than we have today,” said Allwood.

Apart from the issue of recycling, today’s UK steel industry focuses on products such as plates, bars and coils of strip, all of which have low profit margins. “The steel industry fails to capture the value and innovation potential from making final components,” said Allwood. “As a result, more than a quarter of all steel is cut off during fabrication and never enters a product, and most products use at least a third more steel than actually required. The makers of liquid steel could instead connect directly to final customer requirements.”
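Taken together, those two figures imply that only a little over half of all liquid steel ends up doing useful work. A rough back-of-envelope calculation using just the numbers quoted above (real yields vary by product):

```python
# Rough arithmetic on the two figures quoted above; illustration only.
liquid_steel = 1.0                        # one tonne of liquid steel produced
in_product = liquid_steel * (1 - 0.25)    # over a quarter lost as fabrication scrap
actually_needed = in_product / (4 / 3)    # products use at least a third more than required

print(f"Steel doing useful work: {actually_needed:.2f} t per tonne produced")
# ~0.56 t, i.e. roughly 44% of liquid steel never serves a structural purpose
```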

These two opportunities create the scope for a transformation of the steel industry in the UK, says the report. In response to Tata Steel’s decision, UK taxpayers will have to bear costs. If the existing operations are to be sold, taxpayers must subsidise the purchase without the guarantee of a long term national gain. If the plants are closed, the loss of tax income and payment of benefits will cost taxpayers £300m-£800m per year, depending on knock-on job losses.

Allwood’s strategy requires taxpayers to invest in a transformation, for example through the provision of a long-term loan. This would allow the UK to innovate more than any other large player, with the potential for leadership in a global market that is certain to triple in size.

He singles out the example of the Danish government’s Wind Power Programme, initiated in 1976, which provided a range of subsidies and support for Denmark’s nascent wind industry, allowing it to establish a world-leading position in a growing market. Allwood believes a similar initiative by the UK government could mirror this success and transform the steel industry. “Rapid action now to initiate working groups on the materials technologies, business model innovations, financing and management of the proposed transformation could convert this vision to a plan for action before the decision for plant closure or subsidised sale is finalised,” he said. “This is worth taking a real shot on.”

A new report from the University of Cambridge claims that British steel could be saved, if the industry is willing to transform itself.

We will never need more capacity for making steel from iron ore than we have today.
Julian Allwood
Blast furnace #5, Port Talbot Steelworks


Opinion: Losing your virginity: how we discovered that genes could play a part


As far as big life decisions go, choosing when to lose your virginity or the best time to start a family are probably right up there for most people. It may seem that such decisions are mostly driven by social factors, such as whether you’ve met the right partner, social pressure or even your financial situation. But scientists are increasingly realising that such sexual milestones are also influenced by our genes.

In a new study of more than 125,000 people, published in Nature Genetics, we identified gene variants that affect when we start puberty, lose our virginity and have our first child. This is hugely important as the timing of these events affect educational achievements as well as physical and mental health.

Children can start puberty at any time between eight and 14 years old. Yet it is only in recent years that we have begun to understand the biological reasons for this. Through studies of both animals and humans, we now know that there’s a complex molecular machinery in the brain that silences puberty hormones until the right time. At this point, chemical messengers secreted from the brain begin a cascade of events, leading to the production of sex hormones and reproductive maturity.

Human genetics studies have identified many genes that are linked to individual differences in the onset of puberty. There are broadly two approaches used to map such genes – studies of patients affected by rare disorders that affect puberty and large-scale population studies. The former is helpful because it can investigate gene variants that cause extremely early or delayed/absent puberty.

In previous research, we used population studies to survey a large number of individuals using questionnaires, and then used genome-wide association studies to scan these same participants for common genetic differences. We could then assess whether the participants' reported age at puberty was related to particular gene variants. In this way, across a number of studies, we have identified more than 100 such variants, each modifying puberty timing by just a few weeks – but together they contribute substantially.

We now understand that both nature and nurture play a roughly equal role in regulating the timing of puberty. For example, studies have consistently shown that obesity and excessive nutrition in children can cause an early onset of puberty.

Genetic factors

However, we know far less about the biological and genetic factors behind the ages at which we first have sexual intercourse or have a first child. This is because previous research has focused more on environmental and family factors than on genetics. But the launch of UK Biobank, a study with over half a million participants, has greatly helped to fill this gap in knowledge.

In our new study, we used this data to survey some 125,000 people in the same way as in the puberty studies. We found 38 gene variants associated with the age of first sexual intercourse. The genes that we identified fall broadly into two groups. One category is genes with known roles in other aspects of reproductive biology and pubertal development, such as the oestrogen receptors, a group of proteins found on cells in the reproductive tract and also in behaviour control centres of the brain.

If you went through puberty early you are more likely to have many children in life (Tom Adriaenssen/Wikimedia, CC BY-SA).

The other group includes genes which play roles in brain development and personality. One example is CADM2, a gene that controls brain activity and also has strong effects on whether we regard ourselves as risk-takers. We discovered that this gene was also associated with losing your virginity early and having a higher number of children throughout life. Similarly, the gene MSRA, linked to how irritable we are, was also associated with age at first sexual intercourse: people who are more irritable typically have a later first encounter. However, more research is needed to show exactly how these genes help regulate the timing of these reproductive milestones.

We were also able to quantify that around 25% of the variation in these milestones was due to genetic differences rather than other factors.

Implications for public health

An important reason why we study reproductive ageing is that these milestones impact reproductive outcomes and also broader health risks. Epidemiological studies show that individuals who go through puberty at younger ages have higher risks of many diseases of old age, such as diabetes, heart disease and breast cancer. Similarly, first sexual intercourse at an earlier age is linked to a number of adverse behavioural, educational and health outcomes.

Using a statistical genetics approach called Mendelian Randomisation, a technique that uses gene variants as natural experiments to clarify causal relationships between human characteristics, these studies can tell us whether such epidemiological associations are likely to be causal rather than just chance associations. We managed to show that early puberty actually contributes to a higher likelihood of risk-taking behaviours, such as sexual intercourse at an earlier age. It was also linked to having children earlier, and having more children throughout life.
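For readers unfamiliar with the method, the sketch below shows the core arithmetic of one common Mendelian Randomisation estimator – the per-variant Wald ratio combined by inverse-variance weighting. All effect sizes are invented for illustration; they are not figures from the study.

```python
# A minimal sketch of Mendelian randomisation from per-variant summary
# statistics. All numbers are hypothetical placeholders.
import numpy as np

beta_exposure = np.array([0.10, 0.08, 0.12])  # variant effect on puberty timing
beta_outcome  = np.array([0.05, 0.03, 0.07])  # variant effect on age at first sex
se_outcome    = np.array([0.01, 0.01, 0.02])  # standard errors of outcome effects

# Wald ratio per variant: implied causal effect of exposure on outcome
wald = beta_outcome / beta_exposure

# Inverse-variance weighted (IVW) combination across variants
weights = (beta_exposure / se_outcome) ** 2
ivw_estimate = np.sum(wald * weights) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))

print(f"IVW causal estimate: {ivw_estimate:.3f} +/- {ivw_se:.3f}")
```

Because gene variants are fixed at conception, they cannot be caused by later lifestyle, which is what lets this design separate cause from mere correlation.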

These findings, along with previous studies linking early puberty and loss of virginity to social and health risks, back the idea that future public health interventions should aim to help children avoid early puberty – for example, through diet and physical activity, and by avoiding excess weight gain. Our findings predict that this would have benefits both for improving adolescent health and educational outcomes and for future health at older ages.

John Perry, Senior Investigator Scientist, University of Cambridge and Ken Ong, Group Leader of the Development Programme at the MRC Epidemiology Unit, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

John Perry and Ken Ong (MRC Epidemiology Unit) discuss how sexual milestones are influenced by our genes and how this can impact on broader health risks.


Study identifies gene changes that influence timing of sexual behaviour


Age at first sexual intercourse is known to be influenced by social and family factors, such as peer pressure, but this study shows that genetic factors also influence the timing of this sexual behaviour. It is known from other studies that first sexual intercourse at an early age is associated with poorer educational achievement, physical health and mental wellbeing.

To identify the gene differences which influence the timing of sexual behaviour, the researchers at the Medical Research Council (MRC) Epidemiology Unit at the University of Cambridge analysed the genetic data of 59,357 men and 66,310 women aged between 40 and 69 years old taking part in UK Biobank, a national study for health research.

This analysis identified 38 gene variants that were associated with age at first sexual intercourse. Several of these gene variants were located in or near genes previously implicated in brain development and neural connections, and their analysis uncovered associations with a range of reproductive behaviours, such as age at first birth and number of children.
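At its core, a scan of this kind regresses the trait on each variant’s allele count, one variant at a time. A minimal illustration with simulated data follows (hypothetical effect sizes; the study’s actual models also adjust for covariates such as ancestry):

```python
# A minimal sketch of a single-variant association test, of the kind
# repeated genome-wide. All data are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
genotype = rng.integers(0, 3, size=n)  # 0/1/2 copies of the tested allele

# Simulated trait: small per-allele effect plus noise
age_first_sex = 18 + 0.2 * genotype + rng.normal(0, 2, size=n)

# Linear regression of the trait on allele count
result = stats.linregress(genotype, age_first_sex)
print(f"Effect per allele: {result.slope:.2f} years, p = {result.pvalue:.1e}")

# A genome-wide scan repeats this for millions of variants and keeps
# those passing a stringent threshold (conventionally p < 5e-8).
```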

Dr John Perry, a senior investigator scientist at the MRC Epidemiology Unit, and a lead author of the paper, said: “While social and cultural factors are clearly relevant, we show that age at first sexual intercourse is also influenced by genes which act on the timing of childhood physical maturity and by genes which contribute to our natural differences in personality types.

“One example is a genetic variant in CADM2, a gene that controls brain cell connections and brain activity, which we found was associated with a greater likelihood of having a risk-taking personality, and with an earlier age at first sexual intercourse and higher lifetime number of children.”

In previous studies by the same team, it was found that an earlier age at puberty is linked to increased long-term risks for diseases such as diabetes, heart disease and some cancers.

Dr Ken Ong, a paediatrician and programme leader at the MRC Epidemiology Unit, and a lead author on the paper, added: “We have already shown that early puberty and rapid childhood growth adversely affect disease risks in later life, but we have now shown that the same factors can have a negative effect at a much younger age, including earlier sexual intercourse and poorer educational attainment.”

The team hope that taking account of the timing of puberty and personality type could lead to more targeted and effective approaches to health interventions and promotion of healthy behaviours.

The research was funded by the MRC.

Reference
Day, FR et al. Physical and neuro-behavioural determinants of reproductive onset and success. Nature Genetics; 18 April 2016; DOI: 10.1038/ng.3551

Adapted from a press release from the MRC.

A study of over 380,000 people, published today in the journal Nature Genetics, has identified gene differences that influence the age of puberty, sexual intercourse and first birth.

While social and cultural factors are clearly relevant, we show that age at first sexual intercourse is also influenced by genes
John Perry


Sonic hedgehog gene provides evidence that our limbs may have evolved from sharks’ gills


An idea first proposed 138 years ago that limbs evolved from gills, which has been widely discredited due to lack of supporting fossil evidence, may prove correct after all – and the clue is in a gene named for everyone’s favourite blue hedgehog.  

Unlike other fishes, cartilaginous fishes such as sharks, skates and rays have a series of skin flaps that protect their gills. These flaps are supported by arches of cartilage, with finger-like appendages called branchial rays attached.

In 1878, influential German anatomist Karl Gegenbaur presented the theory that paired fins and eventually limbs evolved from a structure resembling the gill arch of cartilaginous fishes. However, nothing in the fossil record has ever been discovered to support this.

Now, researchers have reinvestigated Gegenbaur’s ideas using the latest genetic techniques on embryos of the little skate – a fish from the very group that first inspired the controversial theory over a century ago – and found striking similarities between the genetic mechanism used in the development of its gill arches and those in human limbs.

Scientists say it comes down to a critical gene in limb development called ‘Sonic hedgehog’, named for the videogame character by a research team at Harvard Medical School. 

The new research shows that the functions of the Sonic hedgehog gene in human limb development, dictating the identity of each finger and maintaining growth of the limb skeleton, are mirrored in the development of the branchial rays in skate embryos. The findings are published today in the journal Development.

Dr Andrew Gillis, from the University of Cambridge’s Department of Zoology and the Marine Biological Laboratory, who led the research, says that it shows aspects of Gegenbaur’s theory may in fact be correct, and provides greater understanding of the origin of jawed vertebrates – the group of animals that includes humans.

“Gegenbaur looked at the way that these branchial rays connect to the gill arches and noticed that it looks very similar to the way that the fin and limb skeleton articulates with the shoulder,” says Gillis. “The branchial rays extend like a series of fingers down the side of a shark gill arch.”

“The fact that the Sonic hedgehog gene performs the same two functions in the development of gill arches and branchial rays in skate embryos as it does in the development of limbs in mammal embryos may help explain how Gegenbaur arrived at his controversial theory on the origin of fins and limbs.”

In mammal embryos, the Sonic hedgehog gene sets up the axis of the limb in the early stages of development. “In a hand, for instance, Sonic hedgehog tells the limb which side will be the thumb and which side will be the pinky finger,” explains Gillis. In the later stages of development, Sonic hedgehog maintains outgrowth so that the limb grows to its full size.

To test whether the gene functions in the same way in skate embryos, Gillis and his colleagues inhibited Sonic hedgehog at different points during their development.

They found that if Sonic hedgehog was interrupted early in development, the branchial rays formed on the wrong side of the gill arch. If Sonic hedgehog was interrupted later in development, fewer branchial rays formed, but those that did grew on the correct side of the gill arch – showing that the gene works in a remarkably similar way here as in the development of limbs.

“Taken to the extreme, these experiments could be interpreted as evidence that limbs share a genetic programme with gill arches because fins and limbs evolved by transformation of a gill arch in an ancestral vertebrate, as proposed by Gegenbaur,” says Gillis. “However, it could also be that these structures evolved separately, but re-used the same pre-existing genetic programme. Without fossil evidence this remains a bit of a mystery – there is a gap in the fossil record between species with no fins and then suddenly species with paired fins – so we can’t really be sure yet how paired appendages evolved.”

“Either way this is a fascinating discovery, because it provides evidence for a fundamental evolutionary link between branchial rays and limbs,” says Gillis. “While palaeontologists look for fossils to try to reconstruct the evolutionary history of anatomy, we are effectively trying to reconstruct the evolutionary history of genetic programmes that control the development of anatomy.”

Paired appendages, such as arms and hands in humans, are one of the key anatomical features that distinguish jawed vertebrates from other groups. “There is a lot of interest in trying to understand the origins of jawed vertebrates, and the origins of novel features like fins and limbs,” says Gillis.

“What we are learning is that many novel features may not have arisen suddenly from scratch, but rather by tweaking and re-using a relatively small number of ancient developmental programmes.”

Gillis and his colleagues are further testing Gegenbaur’s theory by comparing the function of more genes involved in the development of skates’ unusual gills and of mammalian limbs.

“Previous studies haven’t found compelling developmental genetic similarities between gill arch derivatives and paired appendages – but these studies were done in animals like mice and zebrafish, which don’t have branchial rays,” says Gillis.

“It is useful to study cartilaginous fishes, not only because they were the group that first inspired Gegenbaur’s theory, but also because they have a lot of unique features that other fishes don’t – and we are finding that we can learn a lot about evolution from these unique features.”

“Many researchers look at mutant mice or fruit flies to understand the genetic control of anatomy. Our approach is to study and compare the diverse anatomical forms that can be found in nature, in order to gain insight into the evolution of the vertebrate body.”

This research was funded by the Royal Society, the Isaac Newton Trust and a research award from the Marine Biological Laboratory.

Inset images: Skeletal preparation of an embryonic bamboo shark (Andrew Gillis); A skate embryo that has been stained for expression of the Shh gene - staining can be seen as dark purple strips running down the length of each gill arch (Andrew Gillis); Late stage skate embryo (Andrew Gillis).

Latest analysis shows that human limbs share a genetic programme with the gills of cartilaginous fishes such as sharks and skates, providing evidence to support a century-old theory on the origin of limbs that had been widely discounted.

The branchial rays extend like a series of fingers down the side of a shark gill arch
Andrew Gillis
Head skeletons of skate and shark showing gill arch appendages in red.


New cases of dementia in the UK fall by 20% over two decades


Reports in both the media and from governments have suggested that the world is facing a dementia ‘tsunami’ of ever-increasing numbers, particularly as populations age. However, several recent studies have begun to suggest that the picture is far more complex. Although changing diagnostic methods and criteria are identifying more people as having dementia, societal measures which improve health such as education, early- and mid-life health promotion including smoking reduction and attention to diet and exercise may be driving a reduction in risk in some countries. Prevalence (the proportion of people with dementia) has been reported to have dropped in some European countries but it is incidence (the proportion of people developing dementia in a given time period) that provides by far the most robust evidence of fundamental change in populations.

As part of the Medical Research Council Cognitive Function and Ageing Study (CFAS), researchers at the University of Cambridge, Newcastle University, Nottingham University and the University of East Anglia interviewed a baseline sample of 7,500 people in three regions of the UK (Cambridgeshire, Newcastle and Nottingham) between 1991 and 1994, with repeat interviews two years later to estimate incidence. Twenty years later, a new sample of over 7,500 people aged 65 and over from the same localities was interviewed, again with a repeat interview after two years. This is the first time anywhere in the world that incidence has been directly compared across time in multiple areas using identical methodological approaches.
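Incidence in such a design boils down to new cases divided by person-time at risk in each wave. The sketch below shows the comparison logic with invented numbers (not the CFAS data):

```python
# A minimal sketch of comparing incidence across two cohort waves.
# All counts are invented placeholders, not CFAS figures.

def incidence_rate(new_cases, person_years):
    """New cases per 1,000 person-years at risk."""
    return 1000.0 * new_cases / person_years

# Hypothetical: ~7,500 people followed for ~2 years in each wave
wave1 = incidence_rate(new_cases=300, person_years=15000)  # 1991-94 cohort
wave2 = incidence_rate(new_cases=240, person_years=15000)  # repeat cohort, 20 years on

drop = 100.0 * (wave1 - wave2) / wave1
print(f"Wave 1: {wave1:.1f}, Wave 2: {wave2:.1f} per 1,000 person-years")
print(f"Relative fall in incidence: {drop:.0f}%")
```

A real comparison would also need to account for the age and sex structure of each sample, so that a change in rates reflects genuine population change rather than a shift in who was interviewed.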

The researchers found that dementia incidence across the two decades has dropped by 20%, a fall driven by a reduction in incidence among men at all ages. These findings suggest that in the UK there are just under 210,000 new cases per year: 74,000 in men and 135,000 in women – compared with an anticipated 250,000 new cases based on previous levels. Incidence rates are higher in more deprived areas.

Even in the presence of an ageing population, this means that the number of people estimated to develop dementia in any year has remained relatively stable, providing evidence that dementia in whole populations can change. It is not clear why rates among men have declined faster than those among women, though it may be related to the drop in smoking and to improving vascular health in men.

Professor Carol Brayne, Director of the Cambridge Institute of Public Health, University of Cambridge, says: “Our findings suggest that brain health is improving significantly in the UK across generations, particularly among men, but that deprivation is still putting people at a disadvantage. The UK in earlier eras has seen major societal investments into improving population health and this appears to be helping protect older people from dementia. It is vital that policies take potential long term benefits into account.”

Professor Fiona Matthews from the Institute of Health and Society, Newcastle University and the MRC Biostatistics Unit, Cambridge adds: “Public health measures aimed at reducing people’s risk of developing dementia are vital and potentially more cost effective in the long run than relying on early detection and treating dementia once it is present. Our findings support a public health approach for long term dementia prevention, although clearly this does not reduce the need for alternative approaches for at-risk groups and for those who develop dementia.”

The researchers argue that while influential reports continue to promote future scenarios of huge increases of people with dementia across the globe, their study shows that global attention and investment in reducing the risk of dementia can help prevent such increases.

“While we’ve seen investment in Europe and many other countries, the lack of progress in access to education, malnutrition in childhood and persistent inequalities within and across other countries means that dementia will continue to have a major impact globally,” says Professor Brayne. “Our evidence shows that the so-called dementia ‘tsunami’ is not an inevitability: we can help turn the tide if we take action now.”

Dr Rob Buckle, director of science programmes at the Medical Research Council, which funded the study, added: “It is promising news that dementia rates, especially amongst men, have dropped by such a significant amount over the last twenty years, and testament to the benefits of an increased awareness of a brain-healthy lifestyle. However, the burden of dementia will continue to have significant societal impact given the growing proportion of elderly people within the UK population, and it is therefore as important as ever that we continue to search for new ways of preventing and treating the disease. This study does, however, reinforce the importance of long-term, quality studies that create a wealth of data, an invaluable resource for researchers.”

Reference
Matthews, FE et al. A two decade comparison of incidence of dementia in individuals aged 65 years and older from three geographical areas of England: results of the Cognitive Function Ageing Study I and II. Nature Communications; 19 April 2016; DOI 10.1038/ncomms11398

The UK has seen a 20% fall in the incidence of dementia over the past two decades, according to new research from England, led by the University of Cambridge, leading to an estimated 40,000 fewer cases of dementia than previously predicted. However, the study, published today in Nature Communications, suggests that the dramatic change has been observed mainly in men.

Our evidence shows that the so-called dementia ‘tsunami’ is not an inevitability: we can help turn the tide if we take action now
Carol Brayne


Monkeys regulate metabolism to cope with environment and rigours of mating season


New research on male Barbary macaques indicates that these primates have a flexible metabolic physiology which helps them survive by changing the speed of chemical reactions within their bodies, and consequently levels of energy, depending on temperature and availability of food.

The study also suggests that the metabolic rate of male macaques spikes dramatically during mating season, potentially providing a higher "aerobic capacity" at a point when males mate with multiple females a day, as well as fight other males for mating opportunities.

Levels of thyroid hormones start to build around a month before the mating season, with these metabolism-predicting hormones doubling in some animals at the peak of the season. This is only the second time that changes in metabolic physiology in the run-up to mating season have been seen in a vertebrate, the first being in house sparrows.

The natural habitat of Barbary macaques, in the mountains of Morocco and Algeria, is one of the most extreme environments in which any non-human primate lives.

Temperatures in winter drop as low as -5 degrees centigrade, with deep snow covering the ground for months at a time. Summer temperatures can reach 40 degrees, with food and water becoming scarce.

Researchers say that the metabolic flexibility they have observed in macaques may be an echo in one of our primate cousins of a vital physiological mechanism that has allowed humans to adapt to the planet's extreme climates – from Saharan deserts to the Arctic.

"Barbary macaques increase and decrease cellular activity and energy consumption in order to respond to challenges of climate, sustenance and reproduction. In a sense, what happens at a macro level – animal behaviour – is reflected at a micro, cellular level," said lead author Dr Jurgi Cristóbal-Azkarate of Cambridge's Division of Biological Anthropology, who conducted the research with colleagues from the universities of Roehampton and Lincoln.

"Understanding the rules and mechanisms that govern key decisions such as energy allocation in existing primates is important in gaining insight into how our ancestors were able to thrive outside tropical Africa," he said.

"Our knowledge of traits that allowed hominins to adapt to new climatic conditions is practically restricted to those that leave a traceable fossil record. We currently have a very limited understanding of the importance of physiological mechanisms in human evolution. The Barbary macaques in the Atlas Mountains are an ideal model to help address this knowledge gap."

The new findings are published today in the journal Biology Letters.

By collecting faeces dropped by the animals and analysing the samples, the researchers were able to assess levels of the thyroid hormone T3, which is known to provide an indicator of the 'basal' metabolic rate: the amount of energy expended to keep a body at rest.

The thyroid has been shown to affect metabolism across multiple species, including humans, in whom underactive thyroids slow metabolic rates and can cause tiredness, weight gain and depression.

Samples were taken across a nine-month period from adult males in two groups – one which has nearly half its food supplied by tourists, and one which relies solely on foraging for plants and insects.

On average, the monkeys fed by tourists had levels of T3 that were 10% higher, suggesting that those on the natural diet had to conserve energy as well as forage for food. T3 levels also increased the longer animals in both groups had to spend foraging for food. This is in line with other findings in vertebrates showing that they reduce secretion of thyroid hormones to reduce metabolic rates and save energy when "nutritionally stressed".

As the area’s climate went through its dramatic seasonal shifts, so too did the macaques’ metabolism. T3 levels dropped markedly from June to August, then began to rise as the mating season approached in the early autumn. While T3 dropped again after the mating season, levels stayed much higher during the harsh winter months.

"All mammals, and even more so primates, share a common physiology," said Cristóbal-Azkarate. "As with humans, Barbary macaques increase T3 production in winter. Metabolic rates increase in response to lower temperatures as a mechanism to generate more energy and consequently more heat."

Even rain affected T3 and metabolic rates, which increased in wet weather. Researchers say this may show the "high thermoregulatory cost of wet fur".

The effect of the mating season on the macaques’ T3 levels, and consequently their metabolic rates, was highly significant. At the height of the season, T3 levels of the males increased by an average of 80% across the two groups. The average increase in the wild-feeding group was 98%.

"This was an unexpected and interesting finding, suggesting that males boost their metabolism in preparation for the energetic challenges both of mating and of competing with other males for access to females," said Cristóbal-Azkarate.

"Thyroid hormones are essential for sexual development and reproductive function in mammals – there is an important increase in T3 production during puberty, for example.

"To date, studies of male reproductive competition have focused almost exclusively on testosterone and stress hormones. However, our study suggests that there is a new player in the field of male reproductive competition: the thyroid, and metabolic rate."

Added Cristóbal-Azkarate: "This is the first time in which the effects of climate, nutrition and reproductive competition on thyroid hormone physiology have been studied simultaneously, in a naturalistic setting.

"By doing this, we have been able to learn about the way in which the flexibility of the metabolic physiology of Barbary macaques allows these primates – and perhaps other species, including humans – to balance the multiple energetic demands of their harsh and highly variable environment, and cope with ecological and social challenges."

The flexible physiology of Barbary macaques in responding to the extreme environmental conditions of their natural habitat may help shed light on the mechanisms that allowed our ancestors to thrive outside Africa, say researchers. The study also presents the first evidence of male primates boosting their metabolic physiology for mating.

Understanding the rules and mechanisms that govern key decisions such as energy allocation in existing primates is important in gaining insight into how our ancestors were able to thrive outside tropical Africa
Jurgi Cristóbal-Azkarate
Barbary Macaques in their natural habitat of the Atlas Mountains


Baboons watch neighbours for clues about food, but can end up in queues


Latest research on social networks in wild baboon troops has revealed how the animals get information from each other on the whereabouts of food. However, once information reaches a high status baboon, subordinates often end up in a queue for scraps.

A new study, by researchers from the University of Cambridge and the Zoological Society of London, shows how baboons monitor each other for changes in behaviour that indicate food has been found, such as hunching over to scoop it up.

This ‘socially learned’ information gets transmitted through proximity: those with more neighbours are more likely to spot when someone starts feeding. Once they do, baboons will head towards the food.

Information then starts to spread through the troop, as more baboons observe feeding behaviour or notice their neighbours moving in the direction of food. However, troop hierarchy ultimately kicks in – with the most dominant member in the vicinity, usually a male, wading in to claim the spoils. 

At this point, surrounding baboons will often form what can appear to be a queue, to determine who gets to explore that patch of ground next.

These queues reflect the complex interactions within a baboon troop. The sequence of baboons in a queue depends on status – sometimes through birth-right – as well as social and familial relationships to the particular baboon occupying the food patch.

The new research, published in the open access journal eLife, breaks down the transmission of social information through a baboon troop into three stages:

  • Acquiring information: observing behaviour that suggests food.
  • Applying information: exploring the food patch (even if no food is left).
  • Finally, exploiting information: actually getting to eat.

The researchers used social network models to show that being close enough to spot a change in behaviour is the only driver for acquiring knowledge.
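A toy version of such a transmission model, assuming the networkx library and a random graph standing in for the real proximity network, shows how acquisition depends purely on being connected to someone already informed:

```python
# A minimal sketch of information acquisition on a proximity network.
# The graph and probabilities are invented placeholders, not field data.
import random
import networkx as nx

random.seed(1)
troop = nx.erdos_renyi_graph(n=70, p=0.1)  # ~70 baboons, random proximity ties

finder = random.choice(list(troop.nodes))  # the baboon that finds the corn
informed = {finder}

# Each round, neighbours of an informed baboon may notice the feeding
# behaviour and acquire the information themselves.
p_notice = 0.3
for _ in range(3):  # a few observation rounds before the patch is claimed
    newly = {nbr for b in informed for nbr in troop[b]
             if nbr not in informed and random.random() < p_notice}
    informed |= newly

print(f"{len(informed)} of {troop.number_of_nodes()} acquired the information")
```

Who then gets to apply and exploit that information is the part the network alone cannot predict, which is where individual traits come in.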

When it comes to applying and exploiting social knowledge, however, the characteristics of individual baboons – their sex, status, boldness and social ties in grooming networks – determine who gets to eat, and where they stand in any queue that forms.

Baboon troops can be sizable, sometimes as many as 100 members, with the troops in the latest study numbering around 70. On average, less than 25% of a troop – around 10 individuals – acquired information about a food patch, with less than 5% of the troop actually exploiting it.

“Who actually gets to eat is only half the story,” says Dr Alecia Carter, from Cambridge’s Department of Zoology, who led the research.

“Just looking at the animals that capture the benefits of information, in this case food, doesn’t reflect the real pattern of how information transmits through groups. Many more animals acquire information, but are limited in their use of it for a variety of reasons.”

To conduct the study, researchers snuck handfuls of maize corn kernels, a high-energy baboon favourite (“like finding a stash of chocolate bars”) into the path of two foraging troops of wild chacma baboons in Tsaobis Nature Park, Namibia.

Once a troop member found the food, the researchers recorded the identities of baboons that spotted the animal eating, accessed the food patch, and got anything to eat.

Carter says that the best place for low-ranking baboons is often the periphery of the troop, where they may find food and grab a few kernels before the information spreads and they are supplanted by the local dominant.

“The more dominant a baboon is, the more spatially central in the troop they tend to be – as they can afford to be there. This provides more opportunities to gain information through the wider network,” says Carter.  

Low-rankers that discover food will sometimes try to eat as stealthily or as quickly as they can, but, once a dominant has taken control of the food patch, a queue will often form. Grooming relationships to the feeding dominant can help a subordinate jump up a queue, although much of it is dictated by status.

For females, status is a birth-right that remains fixed throughout a baboon’s life. While human societies historically privilege the firstborn, in baboon troops maternal lineage is ranked by lastborn – with each new female baby replacing the last in terms of hierarchy. 

Young males hold the same rank as their mother until they reach adolescence, usually around the age of six, and start asserting dominance through their bigger size, leading to shifts in status. 

“It is relatively easy to collect dominance data, as baboons are constantly asserting dominance,” explains Carter. “Low-cost assertions of dominance, such as pushing an individual out of small patches of food, help to mitigate high-cost assertions, such as fights, and maintain the order.”

“However, baboons can mediate their status to a minor extent by having good grooming relationships, and low-ranking individuals have a slightly higher chance of applying and exploiting information if they are central in a grooming network. Over a lifetime of food opportunities, this may prove important for fitness.”

While baboons acquire information about food locations from watching others, they can also use social learning to see when that food is likely to be gone. Interestingly, the researchers found that males and females will often use this information in different ways. 

“Baboons are highly vigilant, and constantly pay attention to what their neighbours are up to. When those in a food patch are sifting through dirt and clearly coming up empty-handed, most females will walk off, and won’t waste their time,” says Carter.

“Males on the other hand, particularly young males, are amazingly persistent, and will stay in a patch shifting sand around for a very long time in the hopes of finding a stray kernel.

“We hypothesise that, while males can afford to expend the energy, adult females are lactating or pregnant most of the time, so need to conserve their strength, and often end up using the information in a more practical way as a result.”

Baboons learn about food locations socially through monitoring the behaviour of those around them. While proximity to others is the key to acquiring information, research shows that accessing food depends on the complex hierarchies of a baboon troop, and those lower down the pecking order can end up queuing for leftovers.

The more dominant a baboon is, the more spatially central in the troop they tend to be – as they can afford to be there
Alecia Carter


Flexible hours 'controlled by management' cause stress and damage home lives of low-paid workers


A researcher who embedded himself in several London branches of one of the UK's largest supermarkets found that management used a combination of 'flexed-time' contracts and overtime to control worker shifts to meet times of anticipated demand, while ensuring costs are kept to a minimum.

Workers at the supermarket chain were frequently expected to extend or change shifts with little or no notice, often to the detriment of their home and family lives – causing the majority of workers interviewed to feel negatively about their jobs.

Low wages and lack of guaranteed hours, combined with convoluted contractual terms, weak union presence, and pressure from managers that at times bordered on coercion ("...there are plenty of people out there who need jobs") meant that many felt they had no choice but to work when ordered, despite the impact on childcare, work-life balance and, in some cases, health - both physical and mental.

Dr Alex Wood, who conducted the research while at Cambridge's Department of Sociology, has chosen not to name the retailer in the new study, published today in the journal Human Relations. Having spoken with union representatives from across the retail sector, however, Wood believes the practices he encountered are now endemic across major supermarkets in the UK.

The Government's website describes flexible working as something that "suits an employee's needs". However, Wood says there is a critical distinction – one overlooked by the Department of Work and Pensions (DWP) – between workers controlling their own schedules, and management imposing control.

"Control over flexible working enables a better work-life balance. However, such control is the privilege of high-end workers. When low-paid, vulnerable workers experience flexible working time, it is at the whim of managers who alter schedules in order to maximise profits, with little consideration for the work-life balance of employees," said Wood.

The practice of low core-hour contracts that can be 'flexed up' is most notoriously embodied in zero-hours contracts – recently reported to affect over 800,000 British workers. Last year, then DWP Minister Iain Duncan Smith held up a survey claiming to show "most" workers on such contracts find them to be beneficial.

Wood says this is an example of conflating low-end, hourly-paid workers who have schedules dictated by management - those in supermarkets, for example - with highly paid professionals such as consultants who control their own hours of work. While all are technically on zero-hours contracts, their experiences of work are dramatically different.

"It is misleading to claim that flexibility provided by zero-hour contracts is beneficial for 'most' workers' work-life balance, and it is simply implausible to suggest this is the case for low-paid, vulnerable workers who by definition lack the power to control their working time," said Wood, who contributed evidence to the coalition government's zero-hours policy review in 2014.

For the study, Wood conducted interviews with a number of workers from across four of the UK retailer's stores, ranging from check-out operators to online delivery drivers, as well as interviewing union reps and officials. He also conducted two months of "participatory observation": working as a shelf stacker in one of the larger supermarket stores.

His findings have led Wood to conclude that the problem of precarious contracts goes far beyond just zero-hours, encompassing most management-controlled flexible contracts.

At the time of the research, the UK retailer had a policy that new stores reserve 20% of all payroll costs for short-term changes in shifts – requiring around 45% of all staff to be on flexible contracts, says Wood – although interviews with union representatives indicated the true proportion was likely higher.

While contracted for as little as 7.5 core hours, all flexible workers had to provide 48 hours of availability per week at the point of application – with greater availability increasing the chances of being hired.

Officially, 'flexed' hours were not to exceed 60% of workers' core hours. However, Wood found that standard flexible workers, despite being contracted for a weekly average of just nine core hours, were working 36-hour weeks on average.

Management used combinations of 'overtime' (additional hours that are voluntary but can be offered on-the-spot) and 'flexed time' (additional hours that are compulsory but require 24 hours' notice) to ensure staffing levels could be manipulated at short notice to meet expected demand.

Both overtime and flexed time were paid at standard rates, keeping payroll costs down, and Wood found distinctions between the two were frequently blurred – disregarding what little contractual protection existed.
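
The scale of the gap is easy to see from the study's own averages. Below is a minimal sketch of the arithmetic in Python; the figures for core hours, the 60% flexed-time cap and the 36-hour working week come from the study, while the assumption that flexed time was used up to its cap is purely for illustration.

```python
# Sketch of the contract arithmetic described above. Core hours, the
# 60% flexed-time cap and the 36-hour average come from the study;
# assuming flexed time is used up to its cap is illustrative only.

core_hours = 9.0      # average contracted core hours per week
flex_cap = 0.60       # flexed hours may not exceed 60% of core hours
actual_hours = 36.0   # average hours actually worked per week

max_flexed = core_hours * flex_cap                         # 5.4 hours
implied_overtime = actual_hours - core_hours - max_flexed  # 21.6 hours

print(f"Even at the flexed-time cap, {implied_overtime:.1f} of "
      f"{actual_hours:.0f} weekly hours rely on 'voluntary' overtime.")
```

In other words, even if every permissible flexed hour were used, roughly 60% of the hours actually worked would still have to come through nominally voluntary overtime.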

"In reality, the nature of low pay and low hours contracts means these workers can't afford to turn down hours," said Wood.

"Whether zero core hours, or seven, or nine - none provide enough to live on. This precarious situation of not having enough hours to make ends meet is heightened by a perception that refusal to work additional hours meant they would not be offered them again in future, something most workers simply couldn't afford."

The stress caused by management-controlled flexed time on low-hours contracts, and its impact on home and family lives, were frequently raised by the workers Wood spoke to.

One worker provided what Wood describes as a "characteristic experience". Sara co-habited with her partner Paul, also employed at the UK retailer. "[W]e've set aside Saturday as a day to do something – me, Paul and my son – as a family... She [Sara's manager] now wants me to work Saturdays... it's all up in the air."

Colin, another worker, described the impact of dramatic schedule alterations on his wellbeing: "I had to change hours, or accept another position, or try another store... I felt really sick, it just hit me, it hit all of us..."

Asim, a union rep, made it clear that management bullying occurred: "People have been told, wrongly, that they can be sacked for it if they don't change their hours."

Under Duncan Smith, the UK government legislated to ban 'exclusive' zero-hours contracts – those that have no guaranteed hours but restrict workers from getting another job – but Wood says this is simply a straw man, and new DWP Minister Stephen Crabb must go much further.

"It's imperative that Stephen Crabb breaks from his predecessor and recognises the damage which wider manager-controlled flexible scheduling practices, including all zero-hours contracts, do to work-life balance," Wood said.

"Policies are needed which strengthen low-end workers' voice. When alterations to schedules are made solely by managers and driven by cost containment, flexibility is only beneficial for the employer not the employees."

Researcher Alex Wood calls on new DWP Minister Stephen Crabb to acknowledge the distinction between flexible scheduling controlled by managers to maximise profit, damaging the lives of the low-paid in the process, and high-end professionals who set their own schedules – an issue he says was publicly fudged by Iain Duncan Smith to justify zero-hours contracts.

I had to change hours, or accept another position, or try another store... I felt really sick, it just hit me, it hit all of us...
Colin, worker at the unnamed supermarket
Tesco Linwood

Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Æthelred the Unready, King of the English: 1,000 years of bad press


A silver penny struck more than ten centuries ago (on display in the Fitzwilliam Museum) shows Æthelred, King of the English. The obverse shows the king in profile and the reverse a Christian cross. Thousands of similar coins have survived. Many are in collections in Copenhagen, Oslo and Stockholm. This coinage is material evidence of ‘Dane-geld’, money paid to England’s enemies in attempts to forestall Viking invasions of England.

Inevitably remembered as ‘the Unready’, Æthelred died exactly 1,000 years ago on 23 April 1016 – 50 years before the Norman Conquest. The same date in April is recorded as the day of the death of William Shakespeare (in 1616) and also celebrated each year as St George’s Day.

Born around 968, son of King Edgar and Queen Ælfthryth, Æthelred died in London, a place that had recently been established as the political and commercial centre of England. He was the first monarch to be buried in the old cathedral of St Paul, which much later became one of the most notable casualties of the Great Fire of London.

Æthelred’s nickname is a pun that may date from as early as the 11th century. Æthelred means ‘noble-counsel’ while the noun unræd means ‘an ill-considered or treacherous plan’. “The nickname degenerated from ‘Æthelred unræd’ into ‘Æthelred the Unready’, and ‘Æthelred no-counsel’, giving rise to further stories about him,” says Professor Simon Keynes.

Keynes, a historian in the Department of Anglo-Saxon, Norse and Celtic, has worked extensively on the Anglo-Saxon period – especially the charters and coinage that offer new windows into a time of turmoil. He was the organiser and keynote speaker at a conference last week.

Æthelred was just a boy of around 12 when he became King of the English, and his long rule was marred by repeated incursions from the Danes. Far from keeping English shores safe from attack, the vast amounts of money paid to the Danes (estimated at £250,000 – a huge sum at the time) simply whetted their appetite for English riches. They took the money and continued their raids. In 1016 England became, for a quarter of a century, part of an empire of the North Sea.

England had converted to Christianity from the late 6th century onwards, while the Danes continued to worship Norse deities. Æthelred believed that this placed God on his side – but prayer proved useless. So did reprisals on Danish settlers. Fruitless attempts to bribe or defeat the Vikings sealed Æthelred’s reputation as a disastrous king who deserved to fail. Sellar and Yeatman’s 1930s classic 1066 and All That echoes this sentiment: the “Wave of Danes” who overran the country were “undoubtedly a Good Thing”.

“Throughout history, Æthelred’s payment of Dane-geld has been used as a short hand for drastic mismanagement and poor decision making,” says Keynes. “But there is another, more complex, picture to be painted of Æthelred’s reign, and the ways that he and his councillors tackled the considerable challenges that they faced as they sought to administer a kingdom and protect their respective interests.”

Much of what we know about Æthelred’s reign comes from the Anglo-Saxon Chronicle – an account by an anonymous chronicler of each year’s notable events. The Anglo-Saxon Chronicle is far from impartial: its account of the reign was written with hindsight, in the knowledge that the kingdom had fallen, while the Vikings had court poets of their own, skalds, who celebrated the deeds of the leaders of the Viking armies in verse. “The story told in the Anglo-Saxon Chronicle, and retold many times thereafter, is very superficial. But there is plenty of other evidence for the period, and the deeper one looks, the more complex and interesting it all becomes,” says Keynes.

Keynes says that no single body of evidence is richer than the 130 charters that survive from Æthelred’s reign. More properly called ‘royal diplomas’, these charters are documents that record agreements made at assemblies held four or five times a year. Such meetings, which took place at major festivals like Easter and Pentecost, were an opportunity for both ceremony and business. The charters, written in Latin, were witnessed by prominent members of the church and key land-owners.

“In comparison to the Anglo-Saxon Chronicle, which is a wonderfully vivid narrative in the vernacular, the diplomas are dry and seemingly impenetrable documents – and it’s true that individually they appear to yield little. But considered collectively, they offer an opportunity to reach below the surface of recorded events,” says Keynes.

The majority of the charters issued during Æthelred’s reign represent grants of land. Others give details of the forfeiture of land into the king’s hands or confirm the entitlement of a religious house to lands and privileges which had been lost.

“Royal diplomas were highly valuable documents in their own right. It was the possession of the charter itself which gave an individual the right to the land described, even if the individual in question was not named. Not surprisingly, copies and forgeries were made – which, for the historian, makes puzzling them out even harder,” says Keynes.

“The diplomas also have long lists of witnesses which, when tabulated and analysed, enable one to detect interesting changes in the composition of the king’s councillors over the course of Æthelred’s long reign – suggesting perhaps who was gaining in power and who was declining.”
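
The kind of tabulation Keynes describes is straightforward to illustrate. The sketch below counts witness attestations in two halves of the reign; every name and date in it is invented for illustration and is not drawn from the surviving diplomas.

```python
# Toy tabulation of charter witness lists by period of the reign.
# All names and dates below are invented for illustration only.

from collections import Counter

charters = [
    (985, ["Æthelwine", "Ælfric", "Byrhtnoth"]),
    (993, ["Ælfric", "Wulfstan"]),
    (1005, ["Wulfstan", "Eadric"]),
    (1012, ["Eadric", "Wulfstan"]),
]

early = Counter(n for year, names in charters if year < 1000 for n in names)
late = Counter(n for year, names in charters if year >= 1000 for n in names)

print("attestations before 1000:", dict(early))  # who dominated early
print("attestations from 1000:", dict(late))     # who rose later
```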

Exeter Cathedral holds one of the most beautiful surviving charters, written in ink on parchment. Æthelred's diploma for Bishop Ealdred of Cornwall (994) confirms Ealdred's status as bishop of Cornwall, at St Germans, and states that he is to have the same rights as the other bishops have in their own dioceses. “This charter was probably the outcome of a determination on the part of Archbishop Sigeric to set things in order,” says Keynes.

“The English were under severe Viking attack, and this was one way of making arrangements more pleasing in the sight of God. The diploma was issued at a royal assembly, and was witnessed by a number of bishops, ealdormen, abbots, and thegns - in other words by the great and good of the land.”

Coinage offers another window into Æthelred’s reign and management of money is likely to have been on the agenda at royal assemblies. In a collaboration with the late Mark Blackburn, Keeper of Coins and Medals at the Fitzwilliam Museum, Keynes took a keen interest in the coinage of Æthelred’s reign. “Coinage was struck at as many as 80 minting places across England. It was produced in huge quantities for export as part of the tribute money paid to Viking armies and the army tax paid to a standing mercenary force,” he says.

“Variations in coin designs over time suggest that Æthelred and those working with him developed and maintained a system of staggering complexity. To control the economy, the authorities recalled coins of one type from circulation and exchanged them for coins of a new type. The designs tell their own stories. The earliest types feature the hand of God issuing from a cloud, perhaps to signify divine approval. Later the emphasis shifted to the king’s portrait and he is shown initially bare-headed and later wearing a helmet.”

The rarest of the coins struck in Æthelred’s time is a short-lived Agnus Dei (Lamb of God) type. Worldwide, just 24 survive, one of which is in the collection of the Fitzwilliam Museum and displayed in the Rothschild Gallery. What makes this coin so remarkable is the absence of the king’s portrait: the obverse features the Lamb of God and the reverse a dove, symbol of the Holy Spirit. “The design represents a desperate appeal for peace, in perilous times,” says Keynes.

In portraying Æthelred’s reign as a time of turmoil, historians have drawn on a sermon given by one of the king’s most powerful advisors. Archbishop Wulfstan’s message to the English people is full of gloom: “For it is clear and manifest in us all that we have previously transgressed more than we have amended, and therefore much is assailing this people. Things have not gone well now for a long time at home or abroad, but there have been devastation and famine, burning and bloodshed in every district again and again.”

The forces ranged against Æthelred were impressive and implacable. In 994 a Viking fleet of more than 90 ships came up the Thames to London. In 1009 the Vikings came again. Almost ten centuries later, in the 1920s, a group of battle axes and spearheads, dating from around 1000, was found in the river close to old London Bridge. Vivid reminders of the raiders who sailed up the estuary to strike at the heart of England, they are on display at the Museum of London.

The eight battle axes, with their fearsome curving edges, also pose a question: how could the king and his councillors overcome a threat of such a kind? 

In September 1666 the Great Fire of London destroyed St Paul’s Cathedral, taking Æthelred’s tomb with it. Today Æthelred is remembered in the cathedral coffee shop, where a stone commemorates all the tombs known to be lost. “It’s quite touching to see Æthelred’s name close to the place where he was buried in 1016 and where he lay for the next 650 years,” says Keynes. “It’s highly unlikely that he will ever shake off the damage done to him by his soubriquet – but it’s well worth continuing to challenge the accepted versions of the history of a fascinating period.”

Coins from Æthelred’s reign are displayed at the Fitzwilliam Museum in the Rothschild Gallery. Æthelred's charter for Bishop Ealdred of Cornwall (994) is available for consultation at Exeter Cathedral on request.

Inset image: Æthelred's diploma for Bishop Ealdred of Cornwall (994) (Exeter Cathedral Archive).

He was just a boy when he became King of the English and his reign was marked by repeated attacks by the Danes. Æthelred, who died 1,000 years ago on 23 April 1016, is remembered as ‘the Unready’.  But his nickname masks a more complex picture.

Throughout history, Æthelred’s payment of Dane-geld has been used as a short hand for drastic mismanagement. But there is another, more complex, picture to be painted of Æthelred’s reign.
Simon Keynes
Silver penny from the reign of King Æthelred

Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Opinion: Genetics: what it is that makes you clever – and why it’s shrouded in controversy


For nearly 150 years, the concept of intelligence and its study have offered scientific ways of classifying people in terms of their “ability”. The drive to identify and quantify exceptional mental capacity may have a chequered history, but it is still being pursued by some researchers today.

Francis Galton, who was Charles Darwin’s cousin, is considered the father of eugenics and was one of the first to formally study intelligence. His 1869 work Hereditary Genius argued that superior mental capabilities were passed down via natural selection – confined to Europe’s most eminent men, a “lineage of genius”. Barring a few exceptions, women, ethnic minorities and lower socioeconomic communities were labelled as inferior in intelligence.

Galton’s controversial theories on race, socioeconomics and intelligence have been highly influential and shaped the ideologies of numerous researchers and theorists around the world.

In the UK, proponents of a Galtonian view on intelligence included educational psychologist Cyril Burt, who helped formulate the 11-plus examination, and psychologist Charles Spearman, who is best known for his creation of the concept “g” – the innate general factor of human mental ability. Spearman’s background as an engineer in the British army gave him a statistical sophistication that proved instrumental in shifting the direction of the field of intelligence study.

Spearman: statistician who delved into human intelligence. (Eugène Pirou via Wikimedia Commons)

 

Spearman hypothesised that intelligence comprises “g” – or “general intelligence” – and two other specific factors: verbal ability and fluency. Spearman’s extensive work on the use of “g” within the field of statistics meant that some used the “hard” sciences and maths as instruments to argue that there were biological differences between races and social classes. “G” as a representation of the biological basis of intelligence is still used today in research within the field of behavioural genetics.

Political currency

The concept of inheritance, and specifically the inheritance of intelligence, has carried over into political and educational spheres. A more recent advocate of Galtonian-inspired ideas is Dominic Cummings, who served as a special advisor to the former secretary of state for education, Michael Gove. Cummings wrote the following in a 237-page document titled “Some thoughts on education and political priorities”:

 

Raising school performance of poorer children … would not necessarily lower parent-offspring correlations (nor change heritability estimates). When people look at the gaps between rich and poor children that already exist at a young age (3-5), they almost universally assume that these differences are because of environmental reasons (“privileges of wealth”) and ignore genetics.

 

The birth of twin studies

From the 1920s, when twin and adoption studies set out to determine the genetic and environmental origins of intelligence differences, the study of intelligence became entwined with the early stages of human behavioural genetics.

Under the presumption that twins experience similar environments, twin studies enable researchers to evaluate the variance of a given outcome – such as cognitive ability – in a large group. They can then attempt to estimate how much of this variance is due to the heritability of genes, the shared environment the twins live in, or a non-shared environment.
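
To make that decomposition concrete, the classical approach – often attributed to the geneticist Douglas Falconer – compares trait correlations between identical and fraternal twins. A minimal sketch follows; the correlations are invented for illustration and are not taken from any study mentioned here.

```python
# Minimal sketch of Falconer's twin-study decomposition.
# The correlations below are invented for illustration only.

r_mz = 0.80  # hypothetical trait correlation between identical twins
r_dz = 0.50  # hypothetical trait correlation between fraternal twins

h2 = 2 * (r_mz - r_dz)  # heritability: variance attributed to genes
c2 = r_mz - h2          # shared (family) environment
e2 = 1 - r_mz           # non-shared environment and measurement error

print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")  # 0.60, 0.20, 0.20
```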

The 1980s and 1990s saw another rise in twin and adoption studies on intelligence, many of which were more systematic in nature due to advances in technology. Most supported earlier research and showed intelligence to be highly heritable and polygenic, meaning that it is influenced by many different genetic markers.

The researchers Robert Plomin, John DeFries and Nele Jacobs were at the forefront of this new wave of studies. But this research was still unable to identify the specific genetic markers within the human genome that are connected to intelligence.

Genome – a new frontier

Genome sequencing technologies have taken the search for the genetic components of inheritance another step forward. Despite the seemingly endless possibilities brought forth by the Human Genome Project in 2001, actually using DNA-based techniques to locate which genetic differences contribute to observed differences in intelligence has been markedly more difficult than anticipated.

Genome-wide association studies (GWAS) began to take hold as a powerful tool for investigating human genetic architecture. These studies assess connections between a trait and a multitude of DNA markers. Most commonly, they look for single-nucleotide polymorphisms, or SNPs: single-letter variations at specific positions in the DNA sequence that may be associated with an individual’s likelihood of developing a particular disease or trait.
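
At its core, each GWAS test is a simple association between genotype and trait, repeated across a multitude of SNPs (real pipelines add covariates and stringent multiple-testing corrections). The toy sketch below runs one such test on simulated data; nothing in it reflects real genomes.

```python
# Toy GWAS-style association test on simulated data. The genotype is
# coded as minor-allele dosage (0, 1 or 2) and the trait is given a
# small artificial SNP effect; nothing here reflects real genomes.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=500)         # dosage per individual
trait = 0.1 * genotypes + rng.normal(size=500)   # trait with small effect

slope, intercept, r, p_value, se = stats.linregress(genotypes, trait)
print(f"estimated effect per allele: {slope:.3f}, p = {p_value:.2e}")
```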

Originally intended to identify genetic risk factors associated with susceptibility to disease, GWAS have become a means of trying to pinpoint the genetic factors responsible for cognitive ability. But researchers have shown that intelligence is influenced by a great many genes, each of small effect: so far, they have been unable to locate enough SNPs to predict an individual’s IQ.

Ethical questions

There’s a long way still to go, but this field is receiving a great deal of publicity. This raises several ethical questions. We must ask ourselves if this research can ever be socially neutral given the eugenic-Galtonian history underpinning it.

This kind of research could have an impact on human genetic engineering and the choices parents make when deciding to have children. It could give parents with the money and desire to do so the option to make their offspring “smarter”. Though genetically engineering intelligence may appear to be in the realm of science fiction, if the genes associated with intelligence are identified, it could become a reality.

Some researchers have suggested that schools which have a child’s genetic information could tailor the curriculum and teaching to create a system of “personalised learning”. But this could lead schools to expect certain levels of achievement from certain groups of children – perhaps from different socioeconomic or ethnic groups – and would raise questions of whether richer families would benefit most.

Whether calling it “intelligence”, “cognitive ability”, or “IQ”, behavioural genetics research is still trying to identify the genetic markers for a trait that can predict, in essence, a person’s success in life. Given the history of this field of research, it’s vital it is conducted with an awareness of its possible ethical impact on all parts of society.

Daphne Martschenko, PhD Candidate, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Daphne Martschenko (Faculty of Education) discusses the concept of intelligence and the drive to identify and quantify it.

Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.
