So, you want to do some AI policy?

The first question you must answer is why. This guide will assume it is because you think that AI might be ~net bad for civilization (either because there is a significant risk of extinction, or because institutions aren't ready for it), and you think that policy may be able to help.

A few notes:

The best way to learn is by doing. AI policy is a new field. The best practitioners are in the field, and the best way in is to contribute concretely to a new project. I would personally start by picking a project, then finding someone to supervise you as you work on it. The GPTs are good here for getting a grasp of the basics (what is compute governance, anyway?).

Don't read without purpose. I would lightly recommend against doing a BlueDot course (it's not the most efficient path, and it feels like work), "reading broadly," or a literature review. Maybe MATS/IAPS is fine, but those are usually difficult to get into without having done projects anyway. If you must, skim AI-2027 and Situational Awareness. Information is a lot more sticky, and a lot more interesting, when you learn it as it relates to a project, and you will have to do a ton of reading as you write. The two exceptions to this are:

1. Gain a little bit of technical knowledge. You should have high-level answers that you are happy with for the following questions: "How are modern AI systems trained?" "What do pre-training, post-training, fine-tuning, RLHF, RL, and 'agents' mean?" "Describe the semiconductor supply chain." "What's the difference between AGI and superintelligence?" "Why would/wouldn't a misaligned AI end the world? Would AlphaFold?" If you're more comfortable technically, try to understand the DeepSeek-V3 paper and the Claude Opus 4 system card: that's more technical depth than you'd ever need. I would recommend just having a long conversation with o3 until you feel comfortable with your understanding of these questions, and skimming "Chip War."

2. Stay on top of AI developments in the field. The best way to do this, by far, is to subscribe to the Substacks. Ask GPT if you don't understand anything that's going on. Here are some I recommend subscribing to with your email and reading whenever they land in your inbox: SemiAnalysis, AI Futures Project, ImportAI, ChinaTalk, Zvi Mowshowitz's newsletter, Interconnects, Dwarkesh Podcast, Cognitive Revolution, AI Safety Newsletter (CAIS), Miles Brundage, AI Pathways – Herbie Bradley, Anton Leicht. (If you share your own blog URL, I'll add it too!)

AI policy, particularly if you don't have a long policy background or a PhD, doesn't really have "opportunities"; you have to make your own. You can independently do policy research and advocacy. It's very hard to apply to things and get a job without an existing network. Concentrate on doing things that provide value.

Think about strategy. Often, even experienced folks in the field do not know what the ultimate end goal of their governance proposal is. Ask yourself, right now, what you think the future of AI policy is going to be. Spend 10 minutes writing a short paragraph on whether you believe AIs might end the world, and why. Think about which of the two camps you fall into.

Focus not only on policy, but on the politics of AGI. This is perhaps a more niche belief of mine, but I think too much AI policy is "ideal things we would do if we had full control of the government." It's much more useful to think about the politics of how this is all going to play out, and how your policy can fit within the current narrative (shameless self-promotion, again).

Here are the steps that I would take, in order, assuming you're starting from scratch:

Buy a ChatGPT Plus subscription ($20/month).

Read Situational Awareness, then AI-2027. Any questions you have, ask o3 (upload both PDFs into its context).

Write down your best guesses about what you agree and disagree with in each of those documents. What went wrong in the Race ending? How could it have been prevented?

Write a blogpost. It doesn't have to be good, or long, or new. Just take an idea you had (perhaps in response to the prompt above) and research it fully. Use o3 liberally.

Send me (@jasonhausenloy.48 on Signal; download Signal) a draft!

Consider whether you want to develop that into a full research project. If not: read through the latest versions of all the newsletters recommended above, then think again.

Good luck!