Career Profile
Jam Kraprayoon
Interviewed 8 October 2024
“Policymakers are often actively looking for help. You might think that it's going to be a lot of work to sell whatever policy idea you have, but we do get a lot of requests from people who are looking for our input.”
Jam Kraprayoon is a researcher with the Policy and Standards team at the Institute for AI Policy and Strategy (IAPS). He has previously held roles at Rethink Priorities, the Effective Institutions Project, and the Asian Productivity Organization. He holds a bachelor's degree in Government from the London School of Economics and an MPhil in Politics from the University of Oxford.
What were your undergraduate and postgraduate degrees? How have they benefited you in your AI governance career?
My original degree was at the London School of Economics. It was a BSc in Government, which obviously does have some relevance to AI governance, particularly on the policy side. But there was no AI-specific content in the degree itself. I took the degree from 2012 to 2015, so at that point I don't even know if there really was an AI governance field. I think Superintelligence had just come out in 2014 and I read it in 2015. So the course content itself didn't cover AI at all. But having a background in political systems and how policy gets made has informed the work I'm doing now.
In between my degrees, I worked for a year as a management consultant in Thailand. This was also helpful, because the work I do now is a mix of project management and stakeholder outreach, in addition to deeper research work. Though I'd say there are probably quite a lot of roles in AI governance now that could help develop those skills.
After that I got my MPhil in Politics at the University of Oxford. That was useful for building more specific methodological research skills, like learning how to run experiments and how to think about research design. Again, there wasn't any AI-specific content in the course itself. But people in my college and on my course were working on more AI-relevant topics, and some of the people I interacted with were affiliates of the Future of Humanity Institute, for instance. So those connections were useful.
So you'd had an interest in AI dating back years, because you said you read Superintelligence fairly early on. Was AI governance something you always wanted to get into, and if not, when did you decide to make that transition?
Back in 2015 I was intrigued by the idea of AI risks – I thought it was cool that this was the kind of research someone could do. But at that point the field was fairly immature and I don't think I was farsighted enough at the time to think I could work in it. I was thinking more along the lines of working in multilateral fora, or doing think tank research work that would be relevant for policy more generally.
I eventually started working for an intergovernmental organisation based in Asia, the Asian Productivity Organization (APO). I was a strategic foresight specialist, which meant helping to produce horizon-scanning reports and teaching strategic foresight techniques to the government officials and industry folks I was working with. So that's where AI came back into the picture to some degree – not from the security or public safety angle my current role takes, but through near- and medium-term issues such as labour market disruption and opportunities for economic growth.
After four years at APO, I was looking to transition to something I felt was more impactful. So I was looking for roles related to global catastrophic risks (not just from AI but more broadly). I used 80,000 Hours to look for relevant roles, and I ended up doing a fellowship at Rethink Priorities. I was focused primarily on biosecurity for about a year, but when the ChatGPT boom happened our team shifted its focus to AI, and that's when my day-to-day work became primarily focused on AI policy and governance. I also had the idea of finding founders to start an AI governance field-building project, which became the Vista Institute for AI Policy – it has been operating for about a year now. And then eventually I transferred into a full-time research role at the Institute for AI Policy and Strategy (IAPS).
What does a typical day look like in your current role?
On a typical day there might be three or four project ‘buckets’ that I'm working on. I'll describe each of the buckets.
One is my longer-term or deeper research work, which is generally on a pretty specific topic. At the moment I’m working on a project about agent governance, for example. This involves research on guardrails or safeguards that we could use to manage risks from advanced AI agents.
The second bucket is what we could call ‘short turnaround research’. One example of this is when a governmental agency makes a request for information about something AI-related. We make sure there's a short, information-dense output to inform them on the specific policy question they have. Right now there's a request for comment on proposed rules around AI reporting requirements from the US Bureau of Industry and Security, which is something we're currently working on.
The third bucket is something more like project or programme management. I'm also organising an event right now convening different stakeholders working in AI governance. So I’m liaising with my ops person to figure out how we're going to reimburse flights, finalising the invitee list, and headhunting people to run sessions.
And maybe the fourth bucket is research advising. At the moment, IAPS has a fellowship round and I'm mentoring a fellow who's working on a specific project related to my deeper research on agent governance. So I might review some work they've done, or answer some questions they might have.
Are there any skills that are definitely required to succeed in your role? And are there any skills which you think are often overlooked, but are actually really important?
I think the main skill that's definitely required is the ability to learn pretty quickly, and a willingness to move from a position of high uncertainty and low information to one of less uncertainty and more information. Even if you're coming in with a pretty specific background in an area that seems relevant for AI policy, you won't know everything. AI governance is a field that draws on so many things that you can't possibly be expected to know about all of them. It's about knowing how to find that information – not just through desk research, but by being willing to identify who might know it, reach out to them, and conduct good, structured expert interviews.
A secondary skill that is really useful is just to be an excellent communicator. Generally it's very important to be a good writer, and the style of writing is somewhat flexible. So if you're doing something that you expect a congressional staffer to read, you want to be short, punchy, and concise. That’s pretty different to writing a longer think tank report that might be 50 pages, or something written in an academic style because you want to be published in a journal.
I actually think pure writing ability is underrated because, ultimately, policy influence is a persuasive act as much as it is an analytical one. So the ability to write well, to be compelling, and to know how to meet people where they're at and use frames they recognise is very important.
What has been your proudest accomplishment while working in AI governance?
I'm most proud of a report that my team and I at IAPS produced on coordinated disclosure of dual-use AI capabilities. This was a really large report that involved stakeholder interviews and workshops with more than 50 people from the US government, industry, and civil society. I think I'm most proud of that because I was there from the beginning through to the post-publication phase. And it encapsulates what I think research should ideally be like, in that it was informed not just by our own independent thinking, but by a lot of contact on the ground.
What is one thing about your role that you think would surprise people?
One thing that may surprise at least some people is that policymakers are often actively looking for help, because they're extremely time-constrained and AI is a complicated issue. You might think that it's going to be a lot of work to sell whatever policy idea you have, but we do get a lot of requests from people who are looking for our input. AI policy is still a fairly niche area, so if you do good research and build up solid relationships with stakeholders, you can end up with your ideas in demand rather than having to hunt around to sell them.