
AISST

We’re a group of Harvard students conducting research to reduce risks from advanced AI.


We think that reducing risks from advanced artificial intelligence is one of the most important problems of our time. We also think it is an interesting and exciting problem, with many open opportunities for more researchers to make progress on it.

We run a semester-long introductory reading group on technical AI safety research, covering topics like neural network interpretability, learning from human feedback, goal misgeneralization in reinforcement learning agents, and eliciting latent knowledge.

We also run an AI policy reading group, where we discuss core strategic issues posed by the development of transformative AI systems.

Cambridge, MA
haist.ai
