We call for an urgent effort to democratize AI research worldwide by creating an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open-source foundation models.
This initiative seeks to safeguard democratic principles, ensure safety for generations to come, secure technological independence, and ignite global innovation. It is monumental in scope and scale.
With the arrival of GPT-4 and similar systems, AI has ushered in a new technological era whose capabilities and implications are immense.
Democratizing both access to and research on these emerging technologies is essential if they are to benefit society; otherwise, the consequences for the future could be severe. Their applications span education, small and medium-sized enterprises, governance, and scientific research.
A growing trend is the reliance of educational institutions, government bodies, and entire countries on a handful of large companies that operate opaquely, with little transparency or public accountability.
We must take immediate action to protect our society’s technological autonomy, cultivate new ideas, and defend the core democratic values that define our way of life.
We urge the global community, in particular the European Union, the United States, the United Kingdom, Canada, and Australia, to work together toward a historic goal: establishing a publicly funded, open-source supercomputing research facility.
This facility, comparable in scale and impact to CERN, should house a diverse range of machines with at least 100,000 state-of-the-art AI accelerators (GPUs or ASICs), operated by experts from the machine-learning and supercomputing research communities and answerable to the democratically elected governments of the participating countries.
By providing open-source access to advanced AI models comparable to GPT-4, trained on multimodal data (audio, video, text, and program code), this ambitious endeavor will enable academic research, guarantee data security, and reinforce transparency.
Furthermore, granting researchers access to the underlying training data is crucial for understanding how these models learn and function; access restricted to APIs alone cannot achieve this.
The open-source nature of this project allows the academic community to rapidly identify and address potential risks, keeping AI technologies secure and reliable in an increasingly interconnected world. This transparency is a tremendous benefit to safety and security, allowing issues to be resolved quickly.
Internationally renowned AI safety experts should be able to conduct high-risk research within well-defined security levels, akin to the biosafety levels used in biological laboratories. Safety regulations set by democratic institutions should govern the proposed facility.
The research conducted in these AI safety laboratories should be open and accessible to the scientific community and society. It should also prepare for developments shown to have potentially harmful effects, so that effective countermeasures can be implemented promptly.
This initiative can also bring substantial economic benefits to small and medium-sized enterprises: access to large foundation models allows them to fine-tune these models for their needs while retaining full control over the weights and data.
Government organizations likewise stand to benefit, as they require transparency and control over AI applications within their systems; this approach offers them an attractive way to ensure the accuracy and validity of these technologies.
A CERN-like effort for open-source AI research is a promising path toward unlocking the potential of large-scale AI. By pooling expertise and fostering collaboration, such a facility can make significant contributions to advancing AI research while ensuring that these technologies are developed and used responsibly. As we continue to push the boundaries of AI, it is essential to prioritize transparency, ethics, and accessibility, and the proposed initiative represents an important step in that direction.