Career Profile

Gregory Smith

Interviewed 8 October 2024

“I've leveraged unique expertise to help develop as an AI policy researcher despite not having technical expertise.”

Gregory Smith is a policy analyst at RAND, focusing on AI governance and emerging technologies. He holds a J.D. from Columbia Law School and a BA in History from Princeton University.

How has your law background been valuable in your AI governance career?

My law degree has been very useful, because the area of AI policy that I would say I'm best at is the intersection of AI and law. Law is highly specialised, and it's hard for journeymen and novices to pick it up. It takes a lot of time to become familiar with the field. So I've leveraged that unique expertise to help develop as an AI policy researcher despite not having technical expertise.

An important skill that I’ve taken from my law background is attention to detail. This is critical for policy and governance work, particularly because we are often trusted to inform people on very specialised topics. So your work just has to be right – because a single error can deeply damage trust in all of your work. Most people who aren’t experts in AI governance can’t evaluate the accuracy of your work. People have to take us mostly on trust, because there are only a few other people who can really check our work. And if they check and they find we're wrong, we lose a lot of trust from that. So as a policy person, being accurate and correct gets you pretty far.

What advice would you give to someone who has a law degree and is interested in pursuing an AI governance career?

This transition is unusual because law is a very hierarchical, very transparent field, where we understand what success looks like even if success is very hard. If you do well in law school and get published in the right journals, you can very easily clerk for a prestigious judge in the federal system, go to a very good firm and get promoted to partner in seven to nine years.

AI policy is much more fluid. There are no clear paths to success. We are aiming for impact. Law is a more task-oriented field, where clients often determine how lawyers go about their work. This is almost the opposite of AI policy, where you might come up with a research idea, bring it to some funder or some group, and try and make that idea happen.

My advice, if you want to go from being a lawyer or law student to working in AI policy, is to be open to finding the next right move without focussing too much on the long-term plan. Because in policy, it's often hard to plan in the long run. We don't know which opportunities and threats from AI will manifest in a way that creates policy windows or makes research relevant. You just have to try a lot of different small things – because it’s hard to know which of your small bets is going to be the one that leads to a ton of opportunity.

What are some skills that are definitely required to succeed in your role? Are there any skills you think are very important for succeeding in AI governance but overlooked?

I think the most important skill is just the ability to grind – to get up early, go to bed late, work, read and just put your nose to the grindstone to become knowledgeable. This means doing a lot of reading and writing, and producing work product (even if it’s not very good!), continually iterating it until it gets better. This is pretty standard advice that you’d find on any university careers page, but ironically, despite everyone saying that grit and perseverance are really important, I actually think it's underrated because it's just not fun. A key skill is being able to do the boring parts well. This isn’t rocket science, but many people fail at it because it’s about a day-to-day discipline that is just not enjoyable.

What has been your proudest accomplishment while working in AI governance?

I think it's been mostly around building the understanding of AI governance and what advanced AI will really mean for society. I publish pieces on the intersection of law and AI policy, but in many ways, the most impactful things I’ve been involved with are projects that are designed to spread understanding. So what I'm most proud of are the briefings I've given, and the conversations I've had where I've informed policymakers about some critical facet of AI that helps them make a better decision.

What skillsets or types of people do you think the AI governance field needs more of?

It needs more people who publish. That sounds like a silly thing to say, but one of the themes of this conversation has been, “Just do things”. A lot of people have really good ideas, they do fellowships, and they're really smart – but they just need to do something, like put a piece out. There are just so many topics you can pursue in AI research. You can do “AI and [X]” across basically the whole breadth of human activity and disciplines. So one thing I emphasise is working towards getting something out there, even if it's not your best possible work, because at least you've put something out there, and you’ve pushed the conversation forward.