Career profile
Hadrien Pouget
Interviewed 9 October 2024
“In the AI policy space, you get a lot of people who are really, really passionate about the topic.”
Hadrien Pouget is an associate fellow in the Carnegie Endowment for International Peace's Technology and International Affairs Program. He previously worked as a technical AI research assistant at the University of Oxford. He holds a Master of Computer Science from the University of Oxford and an MPhil in Technology Policy from the University of Cambridge.
What was the career journey that led you to your current role?
I have a more technical background originally. I did my undergraduate degree in Computer Science, and towards the end of it, I got more and more interested in AI. After that, I spent about 18 months working as a research assistant on technical AI research and ultimately decided I wanted to get into policy.
I first did a Master's in Technology Policy, which ended up being really helpful for getting into writing again. While I was there, I applied to a technology policy fellowship, now called the Horizon Fellowship. They place people with technical backgrounds in think tanks or in the US government. I was placed at the Carnegie Endowment for International Peace, where I'm still working. I was funded by Horizon for about a year, and then I was hired by Carnegie.
In the role that you're doing at the moment, what does a typical day look like?
It's usually a combination of desk research on some project I'm working on, reading documents and reports, trying to find data on things, and then writing it up. Another big component tends to be just meeting with people, both for the research and just to keep some kind of situational awareness of what's going on in the field. Then the last component is probably going to events, such as presentations or conferences.
Are there any skills that are definitely required to succeed in your role? And are there any skills that are actually really useful but that people might overlook?
Being able to write clearly and coherently about complicated topics is the bread and butter of a lot of what I do, and then being able to communicate that to people.
Another thing is that a lot of think tank work tends to have quite unclear demand signals. It's not always obvious what the best thing to be doing is. We have a range of different tools at our disposal: we can write reports, we can write op-eds, we can host convenings. There's not always a real structure to tell you what you should be working on or what might be feasible. So I think being able to navigate a lot of that uncertainty and think strategically is quite important.
Maybe an underrated skill is just being a very empathetic person. When you're talking to a lot of different stakeholders who come at an issue with different perspectives, it can be easy to be so focused on the way you see an issue that you struggle to communicate with them or meet them where they are. So you have to really try and put yourself in someone else's shoes to understand why they're coming at an issue the way they are and where some sort of common ground can be.
What’s been your proudest accomplishment while you've been working at Carnegie?
I have a lot of little things I'm proud of. One is that I did a lot of work on the EU's AI standardisation space that I think woke a lot of people up to the challenge of creating AI standards and really motivated people to take that seriously. So I’m quite proud of that.
Is there a particular skill set that you think the AI governance field is lacking, or is there a person with a particular kind of background that you would be excited to see get involved?
I was always told when I was trying to get into this field that we really needed more people with technical backgrounds to bridge between policy and technology. I think that's probably still somewhat true, especially as we get more and more into the implementation phase of AI regulation.
But I think in general, actually being able to work at the intersection of AI and other fields of expertise is really useful. People talk about AI and bio-risk, or AI and cyber-risk, or AI and copyright law. AI is a very general technology that has intersections with a huge number of problems. And so I think it's usually quite valuable to be able to work at those intersections. Often in discussions, we're just missing a person who has specialised knowledge of the field we're talking about.
Is there anything about your role that you think could surprise people?
People are sometimes surprised by the amount of flexibility in the kind of work we do. A lot of people see think tanks as academics just putting out papers on issues. But there’s a lot of engaging directly with people, hosting meetings, and writing memos that only go to a few people. A lot of the more impact-driven work is actually on that side.
Are there any parts of your job that are particularly exciting or challenging? What are your favourite aspects of it?
My favourite aspect is just being able to talk to a bunch of other people who are also really interested in the same sorts of issues, but come from different backgrounds, and trade notes. It's always a lot of fun, especially in the AI policy space, where you get a lot of people who are really, really passionate about the topic.
One rewarding aspect is that in AI policy, things move so fast that there are opportunities popping up all the time to provide input, in ways that are really exciting to respond to but hard to predict. Sometimes you're working on one thing and then a request for input comes in on some important issue, or someone needs a memo written on a decision that's suddenly being made, which can be quite fun.