The Credo AI Blog

Insights and stories from the people revolutionizing responsible AI. Subscribe to our blog to get the latest posts delivered right to your inbox.


Articles

AI Governance in the time of Generative AI

Generative AI systems are the next frontier of technology. Putting aside what they presage for future AI advancements, generative AI systems are already some of the most versatile, accessible tools humanity has ever created. The excitement around this space is palpable: you see it in trending social media posts of DALL·E images, in new research and product innovation, and in growing investment in generative AI companies. But if you are like most people, this excitement is tempered by a feeling of anxiety.

Fast Company Names Credo AI One of the Next Big Things In Tech

Today, I am thrilled to announce that Credo AI has been named by Fast Company as one of the 2022 Next Big Things in Tech, a prestigious award honoring the most innovative technologies that promise to shape industries, serve as catalysts for further innovation, and drive positive change in society within the next five years.

Operationalizing Responsible AI: How do you “do” AI Governance?

Now that we’ve established what AI governance is and why it’s so important, let’s talk strategy: how does one do AI governance, and what does an effective AI governance program look like? At the highest level, AI governance can be broken down into four components, four distinct steps that form a process that is both linear and iterative: 1) Alignment: identifying and articulating the goals of the AI system; 2) Assessment: evaluating the AI system against the aligned goals; 3) Translation: turning the outputs of assessment into meaningful insights; and 4) Mitigation: taking action to prevent failure. Let’s take a deeper look at what happens during each of these steps and how they come together to form a governance process designed to prevent catastrophic failure.

Cutting Through the Noise: What Is AI Governance and Why Should You Care?

There is a lack of consensus around what AI Governance actually entails. We’d like to cut through the noise and offer a definition of AI Governance rooted in Credo AI’s experience working with organizations across industries and sectors, collaborating with policymakers and standard-setting bodies worldwide, and supporting Global 2000 customers in delivering Responsible AI at scale.

2022 Global Responsible AI Summit: Key Highlights and Takeaways

On October 27th, Credo AI hosted the 2022 Global Responsible AI Summit, bringing together experts from AI, data ethics, civil society, academia, and government to discuss the opportunities, challenges, and actions required to make the responsible development and use of AI a reality. The Summit attracted more than 1,100 registrants across six continents, making it one of the leading Responsible AI gatherings of the year.

Credo AI Product Update: Build Trust in Your AI with New Transparency Reports & Disclosures

Today, we’re excited to announce the release of a major update to the Responsible AI Platform focused on Responsible AI transparency reports and disclosures. These new capabilities are designed to help companies standardize and streamline the assessment of their AI/ML systems for Responsible AI issues like fairness and bias, explainability, robustness, security, and privacy, and to automatically produce the reports and disclosures needed to meet new organizational, regulatory, and legal requirements, as well as customer demands for transparency.

Designing Truly Human-Centered AI

As we enter an era in which AI has the potential to impact almost every aspect of our lives, there is a growing need to ensure that AI systems are designed with human values and experiences at their core. This is a high-level introduction to Human-Centered AI (HC-AI), a Responsible AI methodology.

NYC Bias Audit Law: Clock ticking for Employers and HR Talent Technology Vendors

On January 1, 2023, New York City (NYC) Local Law 144, also known as the NYC bias audit law for automated employment decision tools, will go into effect. With only a few months left for organizations to become compliant, now is a good time to discuss the impact of this legislation and to highlight areas for improvement as the law matures.

Roundtable Recap: Realizing Responsible AI in Washington, DC

Last month, Credo AI, in partnership with our investors at Sands Capital, kicked off our Realizing Responsible AI roundtable series in Washington, D.C., bringing together policymakers, industry, academia, and other key stakeholders in the Responsible AI space to discuss how to move from principles to practice.