INDUSTRY

New organization to tackle AI issues

Fake images and generative AI: The U.S. AI Safety Institute Consortium, AISIC, is a group of over 200 companies and organizations, including Adobe, Apple, Microsoft, OpenAI, Nvidia, and Meta, that will now work together on the safe development of AI.

The rapid development of AI gives rise to a number of problems, something that Kamera & Bild has reported on consistently. Images and videos can now be created from simple text prompts and used for malicious purposes.

Now, over 200 organizations and companies working with AI, including Adobe, Apple, Microsoft, OpenAI, Nvidia, and Meta, have joined the new "U.S. AI Safety Institute Consortium", abbreviated AISIC, to address these problems.

The participating organizations will develop guidelines and examine different aspects of AI development: how the technology can be misused, how it can be used safely, and how material such as images and videos created by generative AI can be properly labeled. As an example of the next step in this development, Google's new generative AI model "Lumiere" is cited, which can create lifelike video from text.

AISIC will operate under the National Institute of Standards and Technology, NIST.

U.S. AI Safety Institute Consortium Press Briefing

February 8, 2024

Thank you, Secretary Raimondo. We are so glad to have your support as we launch the U.S. AI Safety Institute Consortium. 

And thank you all for being here. We are very excited to get started working with the consortium members. We truly appreciate all of the resources and expertise they will be contributing to this important effort.

As the secretary mentioned, this consortium brings together an incredible group of organizations from across the AI stakeholder community. That’s because we are going to need the best and the brightest who represent a diversity of thought, of experiences, of expertise, to ensure that we can reap the benefits of AI that is trustworthy and safe.

NIST has been bringing together diverse teams like this for a long time. We have learned how to ensure that all voices are heard and that we can leverage our dedicated teams of experts. Our open, transparent, inclusive way of doing business informs the guidance we produce and helps to increase buy-in so that it is widely implemented. The development of the AI Risk Management Framework is one recent example. 

AI is moving the world into very new territory. And like every new technology, or every new application of technology, we need to know how to measure its capabilities, its limitations, its impacts. That is why NIST brings together these incredible collaborations of representatives from industry, academia, civil society and the government, all coming together to tackle challenges that are of national importance.

Yesterday you saw us announce the leadership of the U.S. AI Safety Institute, CEO Elizabeth Kelly and Chief Technology Officer Elham Tabassi. Today we add another key element. The consortium is a critical pillar of the NIST-led U.S. Artificial Intelligence Safety Institute and will ensure that the institute’s research and testing work is integrated with the broad AI safety community around the country and the world.