Incubating AI x-risk projects: some personal reflections

25 Jan 2024

Note that this post was originally written in mid December 2023. See the post on the Effective Altruism Forum here.

In this post, I’ll share some personal reflections on the work of the Rethink Priorities Existential Security Team (XST) this year on incubating projects to tackle x-risk from AI.

To quickly describe the work we did: with support from the Rethink Priorities Special Projects team (SP), XST solicited and prioritised among project ideas, developed the top ideas into concrete proposals, and sought founders for the most promising of those.[1] As a result of this work, we ran one project internally and, if all goes well, we’ll launch an external project in early January.

Note that this post is written from my personal perspective. Other XST team members or wider Rethink Priorities staff wouldn’t necessarily endorse the claims made in this post. Also, the various takes I’m giving in this post are generally fairly low confidence and low resilience.

These are just some quick thoughts based on my experience leading a team incubating AI x-risk projects for a little over half a year. I was keen to share something on this topic even though I didn’t have time to come to thoroughly considered views.

Key points

Summary of inputs and outcomes

Inputs

Between April 1st and December 1st 2023, Rethink Priorities dedicated approximately 2.5 full-time equivalent (FTE) years of labour towards incubating projects aiming to reduce existential risk from AI. XST had 4 full-time team members working on incubating AI x-risk projects during this period,[2] and from August 1st to December 1st 2023, roughly one FTE from SP collaborated with XST to identify and support potential founders for a particular project.

In this period, XST also spent roughly 0.4 FTE-years working directly on an impactful project in the AI advocacy space that stemmed from our incubation work.

The people working on this were generalists and relatively junior, with 1-5 years’ experience in x-risk-related work (and 0-10 years’ experience in other areas). Team members previously cofounded Condor Camp and EA Pathfinder (later Successif), and the team reported to Peter Wildeford, who has significant experience starting impactful non-profits, including Rethink Priorities itself.

The main costs were staff salaries and a small starting pot (approx. $65k) for a project we plan to launch in early 2024.

Approach

The core model we used for incubating projects to tackle AI x-risk involved 3 stages:

  1. Project research: Solicit and prioritise among ideas, investigate the most promising ones, and write project memos for the ideas that seem above the bar for us working to help launch them.
  2. Founder search and vetting: For each project we want to help launch, identify highly capable founders to take the project on.
  3. Founder support: Support each founding team as they begin to launch their project.

In practice, stages 1 and 2 tended to blend together, with the first stages of founder search for a given project often involving further investigation of the project itself. From an initial longlist of around 300 project ideas, we seriously considered 19 (stage 1) and took 4 to founder search (stage 2), and XST and SP will provide founder support for 1 founding team from the start of January (stage 3).

Main outcomes

The most important outcome of our work is that we’ll soon launch a project equipping talented university students interested in mitigating extreme AI risks with the skills and background to enter a US policy career, with a founding team due to begin work at the start of January.

Another major outcome is the positive effect we had on AI advocacy efforts through our direct work on an advocacy project that grew out of our incubation work.

We also published a list of project ideas and wrote a project proposal for an AI crisis planning group. Additionally, we soon plan to publish a proposal for a project to attract legal talent to AI governance and policy work.

Implied forward-looking cost-effectiveness

A very rough estimate based on our inputs and outputs suggests that 5 FTE from a team with a similar skills mix to XST+SP would launch roughly 2 new projects per year.
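To spell out the back-of-the-envelope arithmetic behind that figure (my own rough extrapolation, assuming output scales roughly linearly with labour): our ~2.5 FTE-years of incubation work will, if all goes well, have produced one externally launched project, i.e. roughly 1 / 2.5 = 0.4 launches per FTE-year. A 5-FTE team working for a year supplies 5 FTE-years, giving 5 × 0.4 ≈ 2 launches per year.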

Updated plans

Despite this progress, we’re shifting away from incubation. While the team reconsiders its next steps, XST will look into other ways to support high-priority projects, such as in-housing them, rather than pursuing incubation and looking for external founders by default.[3]

Reasoning for shifting away from incubation

Our original plans, set out earlier this year, aimed to launch one new promising project by the end of October; we’ll very likely have achieved this by early January, albeit a couple of months late. We ended up deprioritising or falling behind on the other goals stated in that post, but those goals were quite ambitious, and I think our rate of progress is only a small-to-moderate negative update on our ability to execute the kind of incubation strategy we’ve followed this year.

Still, XST is moving away from that approach. In my view, the three most important considerations in favour of this move are:

  1. The funding landscape was much less favourable than we expected, even after accounting for changes since mid 2022.
  2. AI x-risk work seems highest priority, but projects in this area are harder to incubate than projects in other x-risk-related areas, especially for a generalist team.
  3. The founder pool is somewhat weaker than we expected.

I go into more detail on these in subsequent subsections.[4]

Difficult funding environment

The funding landscape for x-risk-focused projects is currently significantly more challenging than I’d imagined, even after accounting for changes since mid 2022. Most importantly, the general bar for funding is higher than I expected, and funders are significantly more sceptical about incubation in particular than I realised.[5] Funding application turnaround times from major x-risk funders are also significantly longer on average than I expected.

This updated understanding of the funding environment has several important implications, each of which, in my view, reduces the expected impact of continuing our incubation work:

AI x-risk projects seem highest priority, but harder for us to incubate

A couple of months into our work, we decided to narrow our focus from x-risk projects across all cause areas to projects focused specifically on x-risk from AI, in light of the apparently greater tractability of AI x-risk work given heightened public awareness of AI risk.

I think this focus on AI specifically was a good decision, but it felt harder for us to incubate projects in this area.

Weaker founder pool than expected

I updated negatively on the availability of a founder pool that could successfully execute on projects we’d want to launch. For the project where we ran our most significant founder search, we didn’t find many candidates with the key skills we’d ideally want, such as significant knowledge of US policy careers. In addition, potential founder sources like 80,000 Hours seemed to have fewer compelling leads than I expected, and I also slightly increased my estimate of how hard successfully launching an AI x-risk project is on average.

Note that I feel especially low-confidence about my assessments in this area; I still have a high degree of uncertainty about the strength of the founder pool for x-risk projects. We only conducted a full, formal founder search process for one project, and we weren’t able to offer the founders significant financial security, which might deter some of the most experienced potential candidates.

Some scattered thoughts relevant for people considering incubating x-risk projects

I’ll end by providing some scattered thoughts and advice for people considering incubating x-risk projects.

I’d say that the ideal team working on x-risk incubation would have these traits (though I don’t think all of these are necessary!):

If you’re thinking about starting an incubator, I’d often suggest that you consider getting (more) experience founding stuff yourself first. This brings many benefits:

Note that I expect the funding landscape for projects tackling AI x-risk to improve significantly in roughly 1-2 years, so being positioned to start spinning out AI x-risk-related orgs at that time could be pretty great. For a team thinking about starting an AI x-risk incubator right now, this also pushes in favour of spending time getting more experience founding stuff first.

Note also that there are some incubation approaches we might have tried but didn’t. These all seem potentially promising to me to test out in the AI x-risk space:

Finally, I’ll quickly list some updates I made from our incubation work this year that I didn’t already cover:

Closing

If you’re interested in working on incubating x-risk projects, I might be able to share more detailed internal retrospectives. Feel free to get in touch with me at hello[at]bensnodin dot com about this.

Thanks to Cristina Schmidt Ibáñez, Marie Davidsen Buhl, Luzia Bruckamp, Maria De la Lama, Kevin Neilon, Jam Kraprayoon, Renan Araujo and Peter Wildeford for feedback on this post. Thanks also to the members of SP, other members of XST, and Rethink Priorities co-CEO Peter Wildeford for their hard work on x-risk project incubation this year.

Notes

  1. Note that SP also supports impactful early-stage projects through fiscal sponsorship, but those activities are beyond the scope of this post (I will say that I think they are very valuable activities!). 

  2. Renan started working on this project in early May, approximately 1 month later than the other 3 team members. 

  3. I’m also stepping back from my role, but this isn’t the driver of the shift from incubation. 

  4. An additional consideration pushing against incubation, though less important in my view, is that identifying impactful projects was harder than expected. We spent many researcher hours on project ideas that we ultimately felt weren’t above the bar for us to invest further effort incubating; ideas needed significantly more work than anticipated to reach a state where we were fairly confident they made sense. 

  5. Generally, funders I spoke to didn’t feel they had deep models of the value of incubation, but relevant considerations included: i) (genuine) uncertainty about whether more projects were the key bottleneck to address, rather than e.g. making existing projects go better; ii) doubts about whether giving ideas to founders works as well as having them figure out ideas themselves; iii) XST’s experience profile not being extremely compelling for incubation; and iv) in particular, the possibility that the key challenge is finding really strong founders, and that XST wouldn’t be especially good at that.