Make safe AI systems
Deploy them reliably
We develop large-scale AI systems so that we can study their safety properties at the technological frontier, where new problems are most likely to arise. We use these insights to create safer, more steerable, and more reliable models, and to build systems that we deploy externally, like Daisy.
Our second AI alignment paper, exploring how to train a general language assistant to be helpful without providing harmful advice or exhibiting other bad behaviors.