
First Master’s programme on managing the risks of AI launched by Cambridge
The UK’s first Master’s degree in the responsible use of artificial intelligence (AI) is being launched by the University of Cambridge.


Artificial intelligence is already part of our everyday lives, in forms such as Alexa, Amazon’s virtual assistant, facial recognition, and Google Maps. Thinking machines have the potential to greatly enhance life for billions of people, but the technology also has serious downsides.

It can embed sexism, as when an algorithm for ranking job applicants automatically downgraded women; or be used for intrusive surveillance using facial recognition algorithms that decide who is a ‘potential criminal’.

The new degree in AI Ethics aims to teach professionals in all areas of life — from engineers and policymakers to health administrators and HR managers — how to use AI for good, not ill.

The programme is led by the Leverhulme Centre for the Future of Intelligence (CFI), an interdisciplinary research centre based at the University of Cambridge. Over the past four years, it has established itself at the forefront of AI ethics research worldwide, working in partnership with the University of Oxford, Imperial College London, and UC Berkeley. 

CFI is partnering with the University of Cambridge’s Institute of Continuing Education, which provides flexible and accessible higher education courses for adults, to deliver the two-year, part-time Master’s degree.

Executive Director of CFI, Dr Stephen Cave, said: “Everyone is familiar with the idea of AI rising up against us. It’s been a staple of many celebrated films like Terminator in the 1980s, 2001: A Space Odyssey in the 1960s, and Westworld in the 1970s, and more recently in the popular TV adaptation.

“But there are lots of risks posed by AI that are much more immediate than a robot revolt. There have been several examples which have featured prominently in the news, showing how it can be used in ways that exacerbate bias and injustice.

“It's crucial that future leaders are trained to manage these risks so we can make the most of this amazing technology. This pioneering new course aims to do just that.”

While society’s understanding of AI ethics has grown rapidly, bridges from research to real-life applications are scarce, and access to rigorous qualifications in responsible AI is sorely lacking.

Dr Cave says the new degree will address those concerns. “People are using AI in different ways across every industry, and they are asking themselves, ‘How can we do this in a way that broadly benefits society?’

“We have brought together cutting-edge knowledge on the responsible and beneficial use of AI, and want to impart that to the developers, policymakers, businesspeople and others who are making decisions right now about how to use these technologies.”

AI has already demonstrated a range of benefits for humanity. The COVID-19 pandemic has seen artificial intelligence rushed into experimental use at scale, bringing the importance of ethical AI competence into even greater relief: it has been deployed to fight the pandemic in vaccine development, early diagnosis and contact tracing.

But its use has also caused concern, when governments used artificial intelligence to track citizens and prevent them from leaving their homes.

The ‘Master of Studies in AI Ethics and Society’ promises to develop leaders who can confidently tackle the most pressing AI challenges facing their workplaces. These include issues of privacy, surveillance, justice, fairness, algorithmic bias, misinformation, microtargeting, Big Data, responsible innovation and data governance.

The curriculum spans a wide range of academic areas including philosophy, machine learning, policy, race theory, design, computer science, engineering, and law. Run by a specialist research centre, the course will include the latest subject research taught by world-leading experts.  

Dedicated to meeting the practical needs of professionals, the course will address concrete questions such as:

· How can I tell if an AI product is trustworthy?
· How can I anticipate and mitigate possible negative impacts of a technology?
· How can I design a process of responsible innovation for my business?
· How do I safeguard against algorithmic bias?
· How do I keep data private, secure, and properly managed?
· How can I involve diverse stakeholders in AI decision-making?

The hybrid programme will combine online classes with intensive week-long residentials at a University of Cambridge college, a flexible format designed to maximise the opportunities for working professionals to join the course.

Dr James Gazzard said: "The Institute of Continuing Education is delighted to be a partner in this distinctive Master's course. Our role is to provide adult students with access to cutting-edge knowledge and skills.

“As we all consider a post COVID-19 future, we know that the Fourth Industrial Revolution will see the acceleration of the opportunities and threats presented by AI and this course is well placed to support adults to re-skill and up-skill in this important emerging field."

In addition to its 800-year history of innovation and leadership in technology and the humanities, the University of Cambridge is set within the renowned ‘Silicon Fen’, a hub of AI innovation that is home to tech giants and start-ups from Microsoft and Amazon to ARM and Apple.

In gathering professionals from across the country and internationally, the course will build diverse networks of professionals, researchers and government leaders dedicated to responsible AI. This will help position the UK as a global leader in beneficial AI, now and into the future.

Applications for the new degree close on 31st March 2021, with the first cohort commencing in October 2021. For further information about the course, please visit: http://lcfi.ac.uk/master-ai-ethics/ 



Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

