
Building the Guardrails of Generative AI

May 25, 2023

Join Credo AI as we walk through the market research, product development, and general reflections on generative AI that shaped our new feature, GenAI Guardrails.

----

Over the first part of 2023, Credo AI undertook extensive research to understand how AI governance leads, data scientists, and business leaders were adopting and thinking about generative AI. The results were a mix of excitement and panic. AI leaders want to ride the generative AI wave, yet not be crushed underneath it. There is a sense of inevitability about generative AI adoption.

That sense of inevitability mirrors the technology's proliferation: every major tech company has raced to develop a large language model (LLM), even as many tech luminaries warn about the unknown dangers of this technology.

At this breakneck pace, regulation might come too late to protect companies and consumers. Yet, tech companies feel they can’t afford to slow down and miss this opportunity. That is why it’s up to individual companies, using third-party software, to implement generative AI guardrails in order to protect their brands, their operations, and their customers from the many risks of generative AI.

Join us in this webinar to learn:

✅ The major risks of generative AI to your brand, operations, and customers, as well as regulatory concerns.

✅ How policies can future-proof your organization against GenAI risks.

✅ Everything we’ve learned on our journey to understand and shield against the risks of generative AI.

SPEAKERS
Susannah Shattuck
Head of Product
Eli Sherman, Ph.D
Data Scientist
Yomna Elsayed, Ph.D
Head of Research
User experience researcher and lifelong learner motivated to solve challenging questions at the intersection of technology and society.

Watch on Demand
