METR (Model Evaluation and Threat Research) builds the science of accurately assessing risks, so that humanity is informed before developing transformative AI systems.
We believe that AI could change the world quickly and drastically, with potential for both enormous good and enormous harm. We also believe it’s hard to predict exactly when and how this might happen.
At some point, AIs will probably be able to do most of what humans can do, including developing new technologies; starting businesses and making money; finding new cybersecurity exploits and fixes; and more.
We think it’s very plausible that AI systems could end up pursuing goals that are at odds with a thriving civilization. This could stem from someone’s deliberate effort to cause chaos, or happen despite the intention to develop only safe AI systems.
Given how quickly things could play out, we don’t think it’s good enough to “wait and see” whether there are dangers.
We believe in vigilant, continual risk assessment. If an AI system poses a significant risk of global catastrophe, the decision to develop or release it can’t rest solely with the company that created it.
Donors
Calum Richards: “METR is doing important work to reduce the risk that AI goes poorly for humanity.”
Ondřej Kubů