Career Profile: Haydn Belfield

“In AI governance, there are really active open questions and no firm answers. Which is why it's so important to have more people coming into the field. There's a real opportunity to contribute and shape outcomes for people here in the UK and around the world.”

Haydn Belfield is a Research Associate and Academic Project Manager at the University of Cambridge's Centre for the Study of Existential Risk. His previous roles include working as Senior Parliamentary Advisor to a Labour MP in the Shadow Cabinet, Policy Associate at the Global Priorities Project, and Research Fellow at the Leverhulme Centre for the Future of Intelligence. He holds an MSc in Politics Research and a BA in Philosophy, Politics and Economics (PPE) from the University of Oxford.

What was the career journey that led you to your current role?

My undergraduate degree was in Philosophy, Politics, and Economics at the University of Oxford. After I graduated, I went to work in Parliament for a bit, doing internships, and ended up working for some people in the Labour Party shadow cabinet for a few years. I had always been interested in existential risk, and in 2016 my current job came up at the University of Cambridge. I thought, “Wow, this is incredible”. I was really excited about the idea of doing policy work and research around existential risk. I got that role and I’ve been at the centre for the last seven and a half years.

Can you describe what a typical day looks like?

I mostly work from home. So my commute is downstairs to go get coffee and then back up here to log on to my computer! My typical day is a combination of meetings and catching up with collaborators or people I'm interested in chatting to, from either here in the UK or around the world. There's also plenty of email and general admin.

And then there’s doing my research. A lot of this is reading books or papers and thinking about different arguments and claims that people have made in previous work. So I’ll spend a few hours on that. And then probably between half an hour and an hour of Twitter, which is partly just for enjoyment – but since much of the AI safety world, and the whole of politics and journalism, is addicted to Twitter, you can actually find useful stuff on there.

What are the skills that are required to succeed in your job? Are there any which are particularly overlooked?

My role is slightly unusual because it's at the intersection of doing research and coming up with concrete policy ideas, and then pitching them to companies, policymakers or civil servants. The main skill needed to do that, which I don't see in many people, is this translational ability. It means being able to engage with research and papers (and being interested enough to actually read a paper, which very few civil servants or politicians have time to do!) and becoming familiar with the technical details of AI – but then also having a sense of what is politically feasible, on what timelines, and with what budget. It’s important to be able to translate between the academic world and the policy world.

What has been your proudest accomplishment while working in this area?

I was very proud for a number of years to have been involved in setting up the Centre for Data Ethics and Innovation, which was the first standing national body on AI. I was part of the team that proposed that back in around 2016. We proposed that to a parliamentary committee and then that got taken up by the government and launched in 2017 and was recently integrated into the AI Policy Directorate.

One other thing that I’m quite proud of is playing a small role in the shift in UK policy over the last year towards being far more interested in AI, and AI safety in particular. I was seconded into the Department for Science, Innovation and Technology when it was being set up in summer 2023, and I did joint work on setting up the AI Safety Institute, working out its strategy for evaluations, and helping out with the Bletchley summit and the Korea summit. There has been a shift towards taking AI very seriously that I don’t think I would have predicted when I started working at the University of Cambridge. The UK is leading the world on this now, which is incredible. I tend to think more in terms of groups that I've been proud to be part of that have accomplished good things, rather than about my personal wins. So I’m proud of that.

Are there any parts of your role that are particularly exciting or challenging?

Maybe the big exciting and challenging thing is working on what is, to my mind, the most interesting and important challenge facing the world and facing governments currently and for the next few decades: how on Earth we should adapt our economies, societies and security systems to deal with increasingly powerful AI systems. That is hugely exciting but also really scary, because we don't really have good answers to any of those questions. Even if we did have good answers, it takes time to shift policy, and there are some very powerful people who don’t want any sort of regulation or international cooperation. So I think that that's probably the most exciting but challenging aspect of the job.

What types of skills does the field of AI governance need more of? Is there a particular skill set that is especially useful?

In general, the more the merrier. AI and AI governance are going to touch on every aspect of society, so we need a lot of people from a variety of backgrounds. Some skill sets that immediately spring to mind are people who know about AI hardware and chips and the ability to do compute governance – that feels like a really key one.

In my own work, I take a lot of inspiration from arms control for nuclear weapons, biological weapons, and cyber weapons. So people who have experience or expertise in those kinds of areas can bring that. We also need more people who have experience and skills in politics, communication, and building coalitions. 

There are lots of other examples – economic skills, painting positive pictures of the future, labour market analysis. We need thousands more people with a whole variety of different skills and backgrounds.

What's one thing about your role that you think people would find surprising?

There's something that quite a lot of people who are in politics have talked about – I'm pretty sure that Jake Sullivan, the US National Security Advisor, spoke about this. He said that he’d always assumed that there was someone else, in another room down the corridor, making all the real decisions. And then he went into the National Security Council, looked around the room, and realised that there is no other room. I think that's a feeling a lot of people have, and it reflects something really true and profound, which is that there isn’t someone else out there who has everything figured out.

Especially in AI governance, there are really active open questions and no firm answers. Which is why it's so important to have more people coming into the field. There's a real opportunity to contribute and shape outcomes for people here in the UK and around the world.
