We help people who want to work on AI Safety team up on concrete projects.
There are two ways for you to join AI Safety Camp:
as a project lead
as a team member
As a project lead, it’s your job to suggest and lead a project. You apply by sending us a project proposal. We’ll give you feedback to improve your proposal, and if you’re accepted we’ll help you recruit a team.
As a team member you’ll join one of the projects suggested by the project leads. What you’ll be doing depends entirely on the project and your role in the project.
We ask all participants (including project leads) to commit at least 10h per week for 3 months (mid-January to mid-April). Several teams continue to work together after AISC is over, but you’re only committing to the initial 3 months.
AI Safety Camp is entirely online, and open to participants in all time-zones.
I.e. what kinds of projects are you looking for?
As organisers, we do not have an entirely unified perspective on the exact nature of AI risks, and we do not require AISC participants to share our specific views either.
We believe that AI developments could potentially lead to human extinction, or at the least pose severe large-scale risks, and that it is imperative to work towards ensuring that future AI systems are both developed and deployed in robustly safe ways. On the flipside, we want to ensure that uncontrollably unsafe AI is not developed at all. We welcome a diversity of approaches and perspectives in service of this goal.
Each of us organisers (Remmelt, Robert) is unilaterally allowed to accept projects (although we do listen to each other's advice). This means that, if you want to lead a project, it’s enough to convince one of us that your project is worthwhile. When submitting a project proposal, your application will be handled by whichever of us has assumptions most aligned with your project proposal, subject to workload constraints.
By safety, I mean constraining a system’s potential for harm.
To prevent harms, we must ensure that future AI systems are safe:
Safety is context-dependent. Harms are the result of the system’s interactions with a more complex surrounding world.
Safety must be comprehensive. Safety engineering is about protecting users and, from there, our society and ecosystem at large. If one cannot even design an AI product to not harm current users, there is no sound basis to believe that scaling that design up to larger scales will not also deeply harm future generations.
Today, companies recklessly scale designs and uses of AI models. There is a disregard for human safety. To hide this, companies utilise researchers to give users the impression of safety rather than actual safety. Safety researchers chase after the companies – trying new methods to somehow safely contain the growing combinatorial complexity (and outside connectivity) of models already in use. Billionaires sympathetic to the cause even support the companies to start up ‘safely’. This is a losing game.
Sincere researchers strove to solve lethal risks. Instead they discovered deeper problems that they could at best solve partially, relying on fragile assumptions that other researchers then questioned. No-one has found a method to control machinery once it starts scaling itself (to not converge on deadly changes to our environment needed for its self-preservation, etc). Some researchers are in fact discovering sharp limits to controlling AI.
There is dignity in informing the public: ‘We did our best to solve safety for autonomous open-ended AI systems. Sadly, we discovered that this problem is intractable.’
Therefore, it is not on us to solve all the risks that accelerationist CEOs and their engineers introduce by releasing unscoped designs. It is on us to hold firm: ‘You shall not pass. No longer shall we allow your reckless behaviour to put our world in true peril.’
We are not alone. Many communities want to prevent companies from harmfully scaling AI. Creatives and privacy advocates aim to stop AI freely feeding on personal data. Workers and whistleblowers aim to stop cheap but shoddy automation. Consumer organisations and auditors aim to stop unsafe but profitable uses. Environmentalists and local country folk aim to stop the polluting energy-slurping data centres.
Let’s ally to end the careless pursuit of ‘powerful AI’, at the cost of everything we hold dear in life.
Some reasons to start a project:
AI companies are causing increasing harms.
We are not on track to solve safe control of ‘AGI’.
There are fundamental limits to control. Machinery that autonomously reprograms and reproduces its own internals could not be made to stay safe.
Email me if you are taking initiative and looking for collaborations. I’m limited on time, but would gladly share my connections and offer insight into questions.
As an AISC organiser, I take projects that are well-scoped around an aim to robustly help pause/stop AI, and are considerate of other communities’ concerns about AI.
I’m excited about:
Research projects for inquiring into or explicating an underexplored consideration for restricting AI.
Engineering projects that demonstrate how model functionality is already unsafe, or that define a design scope for engineering a comprehensively safe model.
Outreach projects involving continued inquiry and discussion with specific concerned stakeholders.
Direct action projects that bring to light the mass violation of a civil right, and offer means to restrict AI companies from going further.
I believe that advanced AI systems are unlike any previous technology in their potential to have vast and diverse impacts across many domains and scales, and in their potential to exhibit dynamics that make them hard to control. In addition, leading AI companies explicitly aim to advance their systems' problem-solving capabilities beyond human abilities, a goal that seems increasingly within reach.
I think this is incredibly reckless, because it does not seem likely that current alignment or control techniques will scale as AI systems become ever more capable and distributed.
Once we cross the threshold of being able to create AI systems competent enough to disempower us or drive us extinct, we probably only get one critical try. We don’t know where that threshold is, but I think we should take seriously the possibility that it is not far off.
As an AISC organiser, I am interested in your project if it is aimed at addressing this overall risk scenario or a part of it.
Since I don’t think that we have a sufficient conceptual handle on the problems yet, I welcome diverse and speculative projects that cover more ground in terms of exploring frameworks and angles of analysis - basically, as long as you can explain to me why the project might be useful for AI Safety, I will lean towards accepting it.
In particular, I’m excited about:
Projects that make conceptual and formal progress on notions of Alignment and its limitations
Mech-interp projects aimed at reducing our fundamental confusion about NNs
Projects about cognitive architectures that are more inherently interpretable (and alignable) than those of the current paradigm, while still being reasonably competitive
Projects aimed at discovering or contributing to useful lenses for understanding LLMs (Simulator theory by janus being a primary example)
Any conceptually sound approach to AI Safety that seems neglected to you
Because conceptual research skills are rarely formally taught in academia, the bar for project leadership and flexibility will be a bit higher than for projects with more tractable and concrete milestones. I’ll want to check your thinking about how to ensure that your team members spend their time efficiently, rather than on unproductive confusion.
However, don’t worry about your project getting rejected if you are still figuring that out. As with other aspects of the project proposal, I’ll be happy to discuss this and give you time to refine your approach. I just want this to be worked out by the time your project is opened for team member applications.
This section describes the current format of AISC, which we’ve been doing since 2023. We expect to keep this structure for the foreseeable future, because we found that this format works very well, and is efficient in terms of organiser time.
The goal of this structure is to help collaborators find each other. More specifically, we set up teams to collaborate on concrete projects, part time, for 3 months. AISC is about learning by doing.
The first step in doing this is opening up the project lead applications. Anyone with an idea for a project is invited to send us their project proposal. Next, we’ll give you feedback on your project, and some time to improve it. We aim to have at least one call with every PL applicant, where we discuss your project and, if needed, lay out what you would need to fix to get your project accepted for AISC.
The second step is us publishing all the accepted projects on our website, and opening up applications to join each project. We encourage everyone who has some spare time, and is motivated to reduce AI risk, to have a look at all the projects and apply to the ones that interest you.
Next, each PL will evaluate the applications for their projects. It’s the job of the PL to interview and choose their team. PLs are given guidance on how to do this from the organisers, but it’s up to the PL to decide who they want on their team.
It’s the job of the organisers to onboard and support the PLs. It’s the job of the PLs to onboard and support their team members.
We start together with a joint online opening session, and end together with every team presenting their results. In between, each team mostly works independently on their project.
Each team will have their project proposal, written by the project lead when they applied, and approved by the rest of the team when they chose to apply to this project. This means that you know what to do, at least to start with. The PL will guide the research project, and keep track of relevant milestones. When things inevitably don’t go as planned, the PL is in charge of setting the new course.
We require that every team have weekly team meetings, and that participants spend a minimum of 10h/week working on the project. Other than that, each team is free to organise themselves in whatever way works best for the team and the project.
There will not be much else happening aside from your projects. We have found that when we try to organise other activities, most participants prefer to spend their often limited time on their team’s projects. We’ll probably do something to encourage inter-team interactions, but we’re still figuring out how to best facilitate this.
At the end of the program, it is up to each participant to decide whether you want to keep working together, either on the same project or on something new, or whether it’s time to go your separate ways. Some teams stay together, and multiple orgs have come out of AISC.
We don’t know what the future will hold, but as long as the world has not gone too crazy, we expect to follow this approximate timeline for future AISCs too.
2024
Beginning September - September 28:
Project lead (PL) applications open.
Late September - October 19:
We help the project lead applicants improve their project proposals.
October 25 - November 16:
Team member applications are open.
November 16 - December 21:
PLs interview and select their team members.
2025
January 10-11:
AISC opening weekend.
Mid-January to Mid-April:
The camp itself, i.e. each team works on their project.
April 24-27 (preliminary dates):
Final presentations.
After April:
AISC is officially over, but many teams keep working together.
Your project lead application will mainly be evaluated based on your project proposal (see next question). We think that if you can produce a good plan, you can probably also lead a good project. However, we still have some minimum requirements for you as a project lead.
The most important requirement is that you have enough time to allocate to your AISC project. You will need to read applications and conduct interviews before the start of the program, and you need to spend at least 10h per week on your project throughout the program.
Becoming a Project Lead means taking on the responsibility for the project, including the time and effort of everyone involved with it. Team members will trust you to direct their efforts effectively according to the vision for the project, especially when unforeseen things happen. Your job is to live up to that trust.
For projects to stop harmful AI developments, we ask that you have already overseen work in the area you want to lead a team in – or that you send us a solid plan to recruit and rely on teammates who have. For example, if you want to organise with tech workers to build collective bargaining power, you'll need experience doing that.
If you’re going to lead a research project you need to have some research experience, preferably in AI safety but any research background is ok. For example, if you are at least 1 year into a PhD or if you have completed an AI Safety research program (such as a previous AI Safety Camp, MATS, PIBBSS, etc), or if you have done a research internship, then you are qualified. Other research experience counts too. If you are unsure, feel free to contact us.
We also accept non-research projects. In this case you’ll need some relevant experience for your particular project, which we’ll evaluate on a case by case basis.
Regardless of project, you also need some familiarity with the topic or research area of your project. You don’t need to have every skill required for your project yourself, since you will not be doing your project alone. But you need to understand your project area well enough to know what your knowledge gaps are, so you know what skills you are recruiting for.
As part of the project lead application process we will help you improve your project plan, mainly through comments on your document, but we also aim to have at least one 1-on-1 call with every applicant. Your application will not be judged based on your initial proposal, but on the refined proposal, after you have had the opportunity to respond to our feedback.
Every project is different, and we’ll tell each PL applicant which (if any) aspects you need to improve to be accepted. But in broad strokes, your project proposal will be judged based on:
Theory of change
What is the theory of impact of your project? Here we are asking about the relevance of your project work for reducing large-scale risks of AI development and deployment. If your project succeeds, can you tell us how this makes the world safer?
Project plan and fit for AISC
Do you have a well-thought-out plan for your project? Does this plan have a decent chance to reach the goal you set out for yourself? How well does your plan fit the format of AISC? Is the project something that can be done by a remote team over 3 months? If your project is too ambitious, maybe you want to pick out a smaller sub-goal as the aim of AISC?
Downside risk
What are the downside risks of your project? What is your plan to mitigate any such risk? The most common risk for AI safety projects is that your project may accelerate AI capabilities. If we think your project will enhance capabilities more than safety, we will not accept it.
Here’s our template for project proposals. Please follow this template to make it easier for team member applicants to navigate all the projects.
See here for projects that were accepted for AISC10. You have to follow the links to see the full project proposal for each project.
Feedback on your project proposal from the AISC staff.
Experience leading a team.
Find collaborators to work with over the months and years ahead.
Help from your team on making progress on a project of your choosing.
You need to be able to spend, on average, at least 10h per week on whichever project you’re joining, from mid-January to mid-April.
Most projects (but not all) will also have some skill requirements, but that’s different for each project. For some projects, specific skills are less important than attitude, or having plenty of time to contribute.
Meet insightful and competent individuals dedicated to ensuring future AI is safe.
Learn about research concepts, practices and mindsets in the field.
Deliberate with a project lead on how to move the project forward.
Find collaborators to work with over the months and years ahead.
Be more systematic and productive in your project work.
Learn what it’s like to take on a specific role in your team.
Test your personal fit to inform your next career steps.
The steps are as follows:
You’ll send us one application, in which you tell us which projects you’re interested in joining.
The project leads get to see your application.
You may get invited for an interview with one or more project leads.
You may get invited to join one or more projects.
If you’re invited to more than one team, you need to decide which one you’re joining.
No.
When we have allowed people to join more than one team in the past, they have always ended up dropping out of at least one of the projects. However, we are not stopping you if you want to informally collaborate across team boundaries.
There are many ways to help stop AI. So many that it gets overwhelming.
Here’s an overview that might come in handy.
You can restrict:
Data (inputs received from the world)
Work (functioning between domains)
Uses (outputs expressed to the world)
Hardware (computes inputs into outputs)
A corporation extracts resources to scale AI:
Scraping data from creatives, citizens, and the spaces we live in.
Compelling workers to design/tinker to make new machines work.
Raking in money by marketing uses for these ‘working’ machines.
Sucking up basic energy and materials to produce the hardware.
This extraction harms us.
Disconnected Person
AI corps direct and surveil each person’s conversations online.
Dehumanised Workplace
AI corps exploit isolated workers to sloppily automate their jobs.
Destabilised Society
AI corps release untested products that get dangerously misused.
Destroyed Environment
AI corps mine land and leak toxins, accelerating the sixth extinction.
Communities are stepping up to restrict harmful AI, and you can support them!
For example, you can support legal actions by creatives and privacy advocates to protect their data rights. Or encourage unions to negotiate contracts so workers aren’t forced to use AI. Or advocate for auditors having the power to block unsafe AI products.
Some other AI safety research programs
Or look for upcoming events or programs here: Events & Training – AISafety.com
SPAR is probably the program that is most similar to the current version of AISC, since SPAR is also online and part-time.
We’d also like to highlight Apart Sprints: weekend-long AI Safety hackathons that you can join online from anywhere in the world (as long as you have internet).
If you don’t know where to start, talk to AI Safety Quest for some guidance.
If you want to take some time to learn more, either by yourself or with a group of friends, here are some curriculums you can use:
AI Alignment Course by Bluedot
AI Governance Course by Bluedot
AISC used to be an in-person event. We shifted to online during the pandemic. After that, AISC alternated between in-person and online for a time, since we found both formats valuable in different ways. We went back to online only in 2023, since the funding situation for AI safety had gotten worse, and online events are much cheaper to run.
We currently don’t have the funding or the staff necessary for an in-person AISC. If you want to help us change this, please reach out.
However, even if AISC is currently online, there is nothing stopping you and your team from meeting up in person during the program. For example, EA Hotel offers free housing, food and co-working space for people working on important altruistic projects (e.g. AI safety). The EA Hotel staff has told us that AISC participants are very likely to be accepted as residents. You can apply to stay there, and tell them you’re an AISC participant.
When AISC started, there were no similar programs. Now there are lots, which we are happy about. But we still think there are ways AISC stands out.
Other programs have mentors; AISC has project leads. The main difference between a project lead and a mentor is that we require our project leads to be actively involved in the project. They are not just an advisor but also a team member themselves. Another difference is that we don’t require PLs to be experienced AI safety researchers. Project leads should have some relevant experience and take full ownership of their projects, but project lead applicants will mostly be evaluated based on their project proposal and not on their CV.
However, we also don’t think that it is necessary for AISC to be different to be valuable. It’s more important for us to do a good job than to be unique. The interest in AI safety is growing quickly, and there is clearly enough interest for all the programs we are aware of.
As of now, we do not have any funds available for stipends. If we receive a stipend grant around the beginning of October, we would still only be able to offer stipends to participants (project leads and team members) residing in low-income countries.
Yes!
We welcome returning participants. You’ll bring both your research experience, and your experience doing this type of team work, which will benefit your new team.
Camp-wide activities, such as the opening weekend and final presentations, will end up at inconvenient hours for you. If you want to skip these, you’re excused. You can still watch recordings of many of the sessions afterwards.
Team meetings, however, are much more important. You absolutely need to be able to attend the majority of your team's weekly meetings. Whether this is a problem depends on who else is on the team and where they live.
It’s possible to have calls with people from two out of the three continental regions: Americas, Europe/Africa, or Asia-Pacific. But usually not from all three regions at the same time.
If you get invited for an interview to join a team, definitely bring this question up!
If you’re the project lead, do not simultaneously accept team members from the Americas and Europe/Africa.
Meetings are much more important when collaborating remotely.
In a general work or research environment where everyone works in the same office, it’s often a good idea to eliminate meetings as much as possible. But in those environments you can easily communicate with each other all the time, and you get a lot more casual interactions with people. For remote teams, the weekly meeting may sometimes be the only involvement you have with the project that week.
We’ve noticed that when teamwork breaks down, it often starts with failing to have regular team meetings.
Yes, see tips for starting as a new team here.
This is not easy.
If you don’t know anyone else who is thinking seriously about this problem, a first step could be to simply find others who understand your concern. Not being alone helps. Look for communities and events where you can meet like-minded people.
We also recommend this blogpost, Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023), which contains:
Summaries and links to other blogposts from many people who have written about how they deal with facing potential AI doom.
Descriptions of mental health practices, with links to more info.
A list of therapists and coaches who will understand your AI concerns.
Group photo from the first AISC
Ashgro handles our financial admin for running AI Safety Camp.