Our semi-annual updates are archived here, in reverse chronological order.

Mid-2024 Update

ALTER has continued operating in a chaotic environment in Israel over the past half year. Despite this, we have continued to cultivate domestic groups and individuals interested in AI safety, including work in multi-agent settings, as well as continued work on AI policy in the international domain. Our US-based work coordinating and funding some work on the learning-theoretic AI agenda has also continued, with promising directions but no additional concrete outputs. There is also a new project in which David will be consulting for the RAND Corporation as part of his work for ALTER, working primarily on biorisk and on risks at the intersection of AI and biology (AIxBio).

AI Policy and Standards

We have recently joined the NIST AI Safety Institute Consortium (AISIC), and will be continuing work in this area as opportunities present themselves. Asher Brass from IAPS has recently agreed to work with us as a standards fellow, focused on NIST and cybersecurity. (This will be in conjunction with his work at IAPS.)

We are excited to have recently co-hosted a private event on AI standards-making and how organizations can contribute to standards-setting for safety. We were joined by speakers from the Simon Institute, Georgetown CSET, SaferAI, and the UC Berkeley Center for Long-Term Cybersecurity, along with participants from a number of other organizations interested in standards-setting.

We have a new preprint available on “The Necessity of AI Audit Standards Boards,” which partly follows from our earlier work on safety culture, and we are continuing to engage with experts and policymakers on these topics. (Unfortunately, our planned attendance at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) is no longer happening, due to its location and the cancellation of our original flight from Israel.)

Mathematical AI Safety

In the midst of MIRI shifting focus away from mathematical AI safety, ALTER-US, our sister project for supporting Learning-Theoretic AI Safety, has recently hired Alex Appel (Diffractor), and Gergely Szucs continues to work on infra-Bayesian approaches, including his recent post on infra-Bayesian physicalism for interpreting quantum physics. There is also recent work from MATS scholars on time complexity for deterministic string machines and infra-Bayesian haggling. We are also attempting to assist others in finding pathways forward for the broader mathematical AI safety research community, and are very excited about the new UK ARIA funding stream. (Note that this work stream is being split off more fully going forward, as it is not part of ALTER.)

Biorisk

ALTER is continuing to engage in dialogues about metagenomic approaches. Our paper analyzing the costs of such a system in Israel was accepted to Global Health Security 2024 in Australia in June, and the lead author, Isabel Meusel, will be attending to present the paper (Day 2, noon, P38). Sid Sharma, another co-author, will be presenting the Threatnet paper, which our work builds on.

We are also continuing to engage with the Biological Weapons Convention as an NGO. Israel’s geopolitical situation is far less conducive to positive engagement with the BWC (and elsewhere) at present, but in our view this makes significant change in the coming few years more plausible, rather than less. At the same time, the environment for progress on this project is volatile, and work is currently on hold.

Public Health

Our work on salt iodization in Israel has continued. The exact path forward is complex and still under discussion, and this will continue as a small project alongside our main research and policy work.

Funding

As noted last time, the combined SFF / Lightspeed grant meant that ALTER’s core work was not fully funded for 2024. The RAND contract has greatly improved our funding position, and will also generate ALTER income which can be used for, among other things, our salt iodization policy work. In addition, we are in the process of applying for funding from other organizations, partly for Vanessa’s Learning-Theoretic AI work, as well as for ALTER itself, both for core operations and to run conferences and support future ALTER fellows.

End-of-2023 Update

The past several months have been a tumultuous time in Israel, and this has affected our work in a variety of ways, as outlined in a few places below.

People

Ongoing and New Projects

Learning Theoretic / Mathematical AI alignment

(Largely via our affiliate, fiscally sponsored by Ashgro):

Funding

Mid-2023 Update

Since its founding, ALTER has started and run a number of projects.

  1. Organized and managed an AI safety conference in Israel, AISIC 2022, hosted at the Technion, bringing in several international speakers, including Stuart Russell, to highlight AI safety focused on existential and global catastrophic risk to researchers and academics in Israel. This was successful in raising the profile of AI safety in Israel, and in helping identify prospective collaborators and researchers.
  2. Supported Vanessa Kosoy’s Learning-Theoretic Safety Agenda, including an ongoing prize competition and work to hire researchers working in the area.
  3. Worked with Israel’s foreign ministry, academics here in Israel, and various delegations to and organizations at the Biological Weapons Convention to find avenues to promote Israel’s participation.
  4. Launched our project to get the Israeli government to iodize salt, to mitigate or eliminate the current iodine deficiency that we estimate causes an expected loss of 4 IQ points for the median child born in Israel today.
  5. Worked on mapping the current state of metagenomic sequencing usage in Israel, in order to prepare for a potential use of widespread metagenomic monitoring for detecting novel pathogens.
  6. Organized and hosted a closed Q&A with Eliezer Yudkowsky while he was visiting Israel, for 20 people in Israel working on or interested in contributing to AI safety. This was followed by a larger LessWrong meetup with additional attendees.

Current and Ongoing Work

We have a number of ongoing projects related to both biorisk and AI safety. 

  1. Fellowship program. We started this program to support researchers interested in developing research agendas relevant to AI safety. Ram Rahum, our inaugural funded AI safety fellow, was found via our AI safety conference. Since then, he has co-organized a conference in London on rebellion and disobedience in AI, jointly with academics in Israel, the US, and the UK. As a fellow, he is also continuing to work with academics in Israel, as well as a number of researchers at DeepMind, on understanding strategic deception and multi-agent games and dynamics for ML systems. His research home is here and monthly updates are here. Rona Tobolsky is a policy fellow, and is also working with us on policy, largely focused on biorisk and iodization.
  2. Support for Vanessa Kosoy’s Learning-Theoretic AI Safety Agenda. To replace the former FTX funding, we have been promised funding from an EA donor lottery to fund a researcher working on the learning-theoretic safety agenda. We are working on recruiting a new researcher, and are excited about expanding this. Relatedly, we are helping support a singular learning theory workshop.
  3. Biosecurity. David Manheim and Rona Tobolsky attended the Biological Weapons Convention – Ninth Review Conference, and have continued looking at ways to push for greater participation by Israel, which is not currently a member. David will also be attending a UNIDIR conference on biorisk in July. We are also continuing to explore additional pathways for Israel to contribute to global pandemic preparedness, especially around PPE and metagenomic biosurveillance.
  4. AI field building. Alongside other work to build AI safety efforts in Israel, ALTER helped initiate a round of the AGI Safety Fundamentals 101 program in Israel, and will be running a second round this year. We are also collaborating with EA Israel to host weekly co-working sessions on AI safety, and hope to continue expanding this. David Manheim has also worked on a number of small projects in AI governance, largely in collaboration with other groups.

Potential Future Projects and Expansion

We are currently working on fundraising to continue current work and embark on several new initiatives, including expanding our fellowship program, expanding engagement on biorisk, and building out a more extensive program: hiring researchers and a research manager, and running an internship program and/or academic workshop(s) focused on the learning-theoretic alignment agenda. All of these are very tentative, and the specific plans will depend on both feedback from advisors and funding availability.

Challenges and Missteps

  1. Our initial hire to work on Vanessa’s Learning-Theoretic agenda was not as successful as hoped. In the future, Vanessa plans both to provide more interaction and guidance, and to hire people only once we understand their concrete plans for work in the area. We are considering how to better support and manage research in order to expand this research portfolio. (We do not yet have funding for a research manager; the position is critical, and it may be difficult to find an appropriate candidate.)
  2. Identifying whether ML-based AI safety research is strongly safety-dominant (rather than capabilities-dominant) can be challenging. This is a general issue rather than an ALTER-specific challenge. David Manheim pre-screens research and research agendas, but has limited ability to make determinations, especially in cases where risks are non-obvious. We have relied on informal advice from AI safety researchers at other organizations to screen work being done, and on matching fellows with mentors who are better able to oversee the research, but this is a bottleneck for promoting such research.
  3. Banking issues following the collapse of FTX, and difficulty navigating the Israeli banking system, including delays in receiving other grants. (This is now largely resolved.)
  4. Work on mandatory salt iodization in Israel has stalled somewhat, due partly to Israeli political conditions. Despite indications of support from the manufacturer, the Israeli Health Ministry has not prioritized this. We have several ideas for a path forward which are being pursued, but are unsure if the current government is likely to allow progress.

End-of-2022 Update

Unfortunately, the past several months have been particularly challenging, in large part due to the collapse of FTX and issues with the Israeli banking system. (See coverage here on some specific impacts.) Delayed items include actually posting this update, and other planned progress – but despite that, ALTER has made progress and is continuing its work on several fronts.

Accomplishments and Progress

Challenges

We are also revisiting our strategic planning in light of the changing funding environment and our current projects.

Plans

Mid-2022 Update

The Association for Long Term Existence and Resilience has been launched and funded, and we’re going to be working on a number of projects to build up an academic and policy focus in Israel on preventing catastrophic and existential risks and improving the trajectory of humanity for the long-term future. This is our first public update about what has been happening, and we invite anyone in Israel who we’re not already in touch with to contact us.

Hiring

Activities

Funding

Office

We’re excited to see how all of this evolves, and again, feel free to contact us with questions or to comment below.