Folks working on existential-risk-relevant AI policy or related research can request access to this database via this form. Other people can also use GCR Policy’s related public database. Approval for accessing the AI policy ideas database is not guaranteed. We appreciate your understanding if your application is not approved.
What is this database and who is it for?
This database was created and is intended to be continually updated by Abi Olvera and Rethink Priorities. It’s an attempt to compile policy ideas that may help reduce catastrophic risk from AI, with a focus on US policy ideas that might be feasible and good to implement in the near or medium term (in the next ~5-10 years). The database includes policy ideas that vary in expected impact, in how clear it is how impactful they’d be, and in feasibility. The ideas are curated from various sources across the longtermist AI governance community and beyond.
Policy ideas for reducing AI risk have previously been scattered across many sources, including many private Google Docs. This database seeks to bring policy ideas together in one place, organized in ways that help both AI governance policy practitioners and researchers quickly identify AI policy ideas that seem to have high impact and high feasibility, find additional info on those ideas, and identify which ideas may be especially valuable for research.
Abi sourced the initial set of 443 ideas from mid-2022 until early 2023 from various Google Docs, the GCR Policy team’s back-end version of their public database, private correspondence, and public reports. (For details, see the section “Our search process and inclusion criteria”.) We expect to continue gradually adding ideas from similar sources and from a form via which people can submit ideas.
When our original source for an idea was a Google Doc but there’s also a public source discussing a very similar idea, we used the public source as the “source”. This was to minimize our linking to private Google Docs and because we expect users of the database will in many cases want to cite public and “mainstream” sources.
For each idea, where applicable, we’ve included information on its source, expected levels of impact, feasibility, specificity, degree of confidence/certainty in our impact ratings, relevant agencies, and other comments.
We want to make it clear that some of the ideas in this database have relatively low (or even negative) expected impact, relatively low feasibility, and/or low confidence in the ratings we’ve given. Additionally, we do not currently have a column to assess the potential negative impact of these ideas. As such, inclusion of an idea does not necessarily imply support from Abi, Rethink Priorities, or the original source of the idea, and we encourage users to exercise caution when considering the ideas (particularly those with uncertain impacts) and to conduct their own research and analysis before making any decisions.
We should also note that most choices about what to include in the database and what ratings to give were made by Abi alone, without review by anyone else.
Due to the sensitive nature of the information contained within the database, we will be limiting access to only those individuals who are actively engaged in this area of work, with an emphasis on those at institutions that focus mainly on longtermist/existential-risk-focused AI governance. Additionally, we will be extending access to a select group of trusted independent researchers who have demonstrated a deep understanding of this field and a commitment to advancing its goals. We believe that this approach will ensure that the database is used responsibly and that the information it contains is used to promote the greater good.
Note about the Loose Ratings
In an effort to meaningfully sort the ideas, we used a loose five-point scale (Very Low, Low, Medium, High, Very High) for the metrics listed below. These ratings were usually assigned, roughly, by the original author (when available), the GCR Policy evaluation team, or Abi. Some of the ideas were assigned ratings based on Abi’s guesses of what the author would rate the idea on this scale. We aim to roughly draw on the GCR team’s evaluation matrix, though given the differing goals of our databases, the matrices do not align perfectly. GCR’s and Abi’s evaluation matrices are on separate tabs of this spreadsheet.
In summary, the categories aim to provide a general understanding of:
Impact: The expected level of impact based on initial estimates from the author, the GCR team, Abi, or expert surveys.
Confidence in Impact Rating: How confident we are that the Impact rating we gave is approximately accurate (e.g., if we gave a low rating for Impact, was that based on clear evidence or a wild guess?).
Feasibility: The extent to which the policy idea can be implemented effectively.
Specificity: How clear and specific the policy idea is as presented.
These rankings are meant to make it easier for policy professionals to quickly identify potential ideas in the database. However, they are not rigorously assessed and come from various sources, including different assessors with their own biases.
Note: Most of these ratings were given quickly and in the absence of expertise or careful analysis, so could be mistaken.
Given the considerable differences in how GCR and we assigned feasibility, confidence-in-impact, and impact ratings, we have included an additional column that allows filtering out GCR-sourced ideas.
Differences:
GCR’s matrix mostly construes confidence as confidence in how good the idea is, instead of as confidence in the rating given. So in theory, they’d give a low confidence score for an idea that they think is low impact even if they’re highly confident that it’s low impact, whereas our own ratings would give a high confidence score in such a case.
However, they did in fact give high confidence ratings to some ideas they rated as low impact, suggesting they may in practice have interpreted the criterion similarly to us.
GCR’s matrix includes implementation risk and whether the idea can be transferred to other jurisdictions as part of “feasibility”, whereas Abi and other authors tend to (a) not include implementation risk in feasibility (because implementation risk is already a critical part of the impact rating), and (b) not consider transferability.
Abi expects this would in most cases only cause minor differences in how she and GCR would rate a given idea’s feasibility.
Recommended ways to engage with the database:
For AI policy researchers:
Are you a new researcher onboarding to the field or a researcher seeking new projects? Researchers can review ideas, sorting them by expected level of impact, feasibility, specificity, and confidence level, among other things.
The “For Researchers” view shows only ideas with medium-to-high expected impact and feasibility but lower specificity or confidence. This view may help prioritize research that would increase confidence in, or add specificity to, those ideas.
If you’re working on an idea in the database, email Abi to add your name to the “Person Researching or Familiar With” column, so that others who might also research this idea or know about it can reach out to you.
For AI policy practitioners:
Use the database as a resource during periodic reviews of potential policy priorities. Policy teams can consult the database during quarterly/annual meetings to set new policy priorities and goals. This helps ensure teams don’t miss newer or less discussed AI governance ideas.
Upcoming meeting with an agency? Filter ideas by the relevant agency. Review the database before any engagements or reviews relating to specific US agencies.
Want to highlight a specific idea for further research? Add your email to the “Person Researching or Familiar With” column by emailing Abi. Researchers embarking on this idea could potentially make their research more impactful by reaching out to you.
Help us keep this database up-to-date by sharing relevant ideas as you come across them in other places by using this form. You can add your email address in the “Person Researching or Familiar With” section to indicate that you’d welcome researchers reaching out to you.
Rethink Priorities provides an option to limit the sharing of potentially sensitive ideas by marking them as "Private" in the sharing-options question. Ideas marked as "Private" will be included only in the least-shared version of the database, accessible only to people working on AI governance or existential risk at Rethink Priorities and <10 other people working on longtermist/x-risk AI governance. This is a useful feature for ideas that are not yet ready or appropriate for wider sharing.
A few ideas from this database are fully public, available in the Public GCR Database without log-in. Only ideas sourced from public reports with high confidence-in-rating, high impact, and medium-to-high feasibility and specificity are eligible for inclusion there.
Regarding the ideas that are only in our database: We welcome and encourage open discussion and collaboration on the ideas in our database. However, we kindly request that you refrain from publicly sharing detailed information about the contents of this database in mainstream media, publicly disclosing authors or sources that have not been publicly published, or using the inclusion of an idea in our database as evidence of support. This is to address concerns regarding privacy, infohazards, and public relations, and to mitigate risks related to people misunderstanding the ideas or acting on some of the ideas despite low community favorability toward those ideas.
Unless the ideas are from a public source, these ideas are not intended for widespread public use at this time. To respect the intellectual property of our contributors, especially those who have not made their work public, we ask that you contact Abi for more information about public writing about this database or its content.
We appreciate your understanding and cooperation in maintaining the confidentiality of these ideas.
FYI: The ideas in our database also feed into the non-public backend of the Global Catastrophic Risk Policy team’s more general risks database, which is shared with the broader longtermism research community. GCR shared their AI policy ideas for inclusion in our database. GCR’s database differs from ours in that it includes other, non-tech-related global risks and does not focus specifically on AI. We also encourage active researcher collaboration via our additional “Person Researching or Familiar With” column. Contact the GCR team for access.
For each idea, where applicable, we’ve included information on its source, year of publication, expected levels of impact, feasibility, and specificity, degree of confidence/certainty in our impact ratings, relevant agencies, etc. Feel free to filter and/or sort by these columns.
Sorting
To sort, for example, by impact:
Click “Sort” in the top left corner, then click “Add condition”, then click “Impact”, then choose “First to Last” from the drop-down menu. “First” is Very High, while “Last” is Very Low.
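If you prefer to work with an exported copy of the database rather than the spreadsheet interface, the snippet below is a minimal sketch of the same sort in Python/pandas. It assumes a hypothetical CSV export named ai_policy_ideas.csv with an “Impact” column; adjust the file and column names to match your export.

```python
# Minimal sketch: sort an exported copy of the database by Impact.
# The file name and column name below are hypothetical placeholders.
import pandas as pd

# The loose five-point scale, from lowest to highest.
RATING_SCALE = ["Very Low", "Low", "Medium", "High", "Very High"]

ideas = pd.read_csv("ai_policy_ideas.csv")

# Treat Impact as an ordered categorical so it sorts by level, not alphabetically.
ideas["Impact"] = pd.Categorical(ideas["Impact"], categories=RATING_SCALE, ordered=True)

# Highest expected impact first (the equivalent of "First to Last" in the UI).
ideas_by_impact = ideas.sort_values("Impact", ascending=False)
print(ideas_by_impact.head())
```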
Filtering
You can filter, for example, to view only ideas with medium-or-above feasibility or confidence-in-impact ratings:
Click “Filter” in the top left corner, then click “Add condition”, then click “Name”, then choose confidence level or feasibility from the drop-down menu, then click “Select an option”, then choose the levels you would like to see from the drop-down menu. You can also exclude levels by picking “has none of” from the middle drop-down menu or select multiple levels by adding more conditions.
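For those working from a CSV export instead, here is a minimal sketch of the same filter in pandas; the column names used below (“Feasibility”, “Confidence in Impact Rating”, “Impact”) are hypothetical and should be adjusted to match the export.

```python
# Minimal sketch: keep only ideas rated Medium or above on feasibility and
# on confidence in the impact rating (column names are hypothetical).
import pandas as pd

ideas = pd.read_csv("ai_policy_ideas.csv")

medium_or_above = ["Medium", "High", "Very High"]
filtered = ideas[
    ideas["Feasibility"].isin(medium_or_above)
    & ideas["Confidence in Impact Rating"].isin(medium_or_above)
]

# The equivalent of "has none of": exclude levels by negating the condition.
not_very_low_impact = ideas[~ideas["Impact"].isin(["Very Low"])]
```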
Filtering by issues or agencies
Each idea has been tagged with one or more “issues” and, where applicable, relevant agencies. You can filter to show only ideas with a given issue or agency. Note: some ideas are missing issues and agencies data.
To filter by issue, click “Filter” in the top left corner, then click “Add condition”, then click “Issues”, then choose “contains” instead of “is exactly”, then choose the issue you would like to see from the third drop-down menu. You can also exclude certain topics by picking “has none of” from the middle drop-down menu.
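As above, if you are working from a CSV export, a rough equivalent of the “contains” and “has none of” filters can be done with substring matching. The “Issues” column name and the example issue tag below are hypothetical.

```python
# Minimal sketch: filter by issue tag in a CSV export, assuming issues are
# stored as a delimited string in a hypothetical "Issues" column.
import pandas as pd

ideas = pd.read_csv("ai_policy_ideas.csv")

# "contains": keep ideas tagged with a given issue (the example tag is hypothetical;
# na=False excludes rows with missing issues data).
tagged = ideas[ideas["Issues"].str.contains("Compute governance", na=False)]

# "has none of": drop ideas tagged with that issue
# (rows with missing issues data are kept here).
not_tagged = ideas[~ideas["Issues"].str.contains("Compute governance", na=False)]
```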
Our search process and inclusion criteria
Abi began with RP’s collection of lists of AI policy ideas from personal Google Docs, various contacts, conversations, and public reports. Abi reached out to the owners of lists of AI ideas seeking permission to share their ideas. As we continued to reach out to owners and sought feedback on the database, we discovered more personal lists, as well as additional single ideas contributed directly by their authors.
To avoid redundancy, ideas from subsequent lists were only added if they were not already in the database. If a public report cited a list, Abi replaced the list owner with the public report as the source under Title, although RP still retains information regarding the original list source.
The largest sources of AI ideas were RP’s Survey on intermediate goals in AI governance and the agreement between GCR Policy and Abi to share AI ideas. Some duplicative ideas arose from syncing ideas in both directions with GCR; these were not removed, since their wording, source report, or details differed.
Overall, Abi prioritized including policy-relevant ideas for the near term (5-10 years). Since the source lists came from people involved in AI governance, she included most ideas unless they were highly experimental and resembled scenario-imagining. Abi aimed for the database to be useful to people in policymaking, recognizing that another list might be more appropriate for the most theoretical or experimental ideas. We intend to gradually add additional ideas from similar sources and from a form for idea submission.