Meta commits to not releasing high-risk artificial intelligence systems.

Mark Zuckerberg, CEO of Meta, has said: "We pledge to make artificial general intelligence (AGI) available to the public someday."
However, Meta has released a new document stating that it will not publicly release an advanced artificial intelligence system in scenarios where it could pose a serious risk or cause harm.
The document, titled the Frontier AI Framework, defines two categories of systems considered too risky to deploy: "high-risk" systems and "super-high-risk" systems.
According to Meta, systems in either category could be used to carry out cyber, chemical, or biological attacks. The difference is that "super-high-risk" systems could lead to uncontrollable, catastrophic outcomes, while "high-risk" systems could facilitate attacks that remain relatively containable, since they are not as capable as super-high-risk systems.
General AI is expected to be capable of learning on its own and performing any task a human can, unlike traditional AI, which only does what it is trained to do. Meta is therefore concerned about this advanced technology falling into the wrong hands or spiraling beyond control.
It is worth noting that Meta does not classify a system's risk level based on a single empirical test; instead, it relies on input from researchers both inside and outside the company, reviewed by senior decision-makers.