almondoreo
Software Engineer

Ilya from OpenAI has started a new company for safe AGI

"Superintelligence is within reach.

Building safe superintelligence (SSI) is the most important technical problem of our time.

We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

It’s called Safe Superintelligence Inc.

SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

It's all so dystopian... the sheer arrogance of these people makes me sick. "Safe"... many things invented in the past few hundred years, even with the best of intentions, rarely turn out to be safe or free of disastrous downsides.