Responsible AI Institute

Non-profit Organizations

Austin, Texas 31,696 followers

Advancing Trusted AI

About us

Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We occupy a unique position convening critical conversations across industry, government, academia and civil society to guide AI's responsible development. We empower practitioners to integrate oversight through assessments aligned to standards like NIST's AI Risk Management Framework, exclusive RAISE Benchmarks that bolster the integrity of AI products, services, and systems, and an authoritative certification program. Our diverse, inclusive and collaborative community is dedicated to steering the exponential power of AI toward a future that benefits all. Members include ATB Financial, Amazon Web Services, Boston Consulting Group, Yum! Brands, Shell, Chevron, Roche and other leading institutions collaborating to bring responsible AI to all industries.

Website
http://www.responsible.ai
Industry
Non-profit Organizations
Company size
11-50 employees
Headquarters
Austin, Texas
Type
Nonprofit
Founded
2017
Specialties
Open Specifications, Blockchain, and Collaboration


Updates

  • Responsible AI Institute

    🎙️ New Episode Alert: Responsible AI In Action #21 🎙️
    We're thrilled to welcome Cansu Canca, Ph.D., Founder and Director of AI Ethics Lab and Director of Responsible AI Practice at the Institute for Experiential AI at Northeastern University, to our latest episode!
    Join host Patrick McAndrew as he and Cansu explore:
    🔹 The crucial role of philosophy in AI development and use
    🔹 Why collaboration between computer scientists and philosophers is essential
    🔹 Philosophical implications of Big Tech's interest in humanoid AI robots
    🔹 Potential future applications of AI in addressing global challenges
    Cansu brings a wealth of experience as a philosopher specializing in applied ethics. She advises Fortune 500 companies, works with the World Economic Forum, the UN, and INTERPOL on responsible AI guidelines, and is a two-time TEDx speaker.
    Don't miss this insightful discussion at the intersection of AI and ethics! Learn more about Cansu's work!
    💻 Watch on demand: https://lnkd.in/g6UmDAWw
    #ResponsibleAI #AIEthics #AIInnovation #Philosophy

  • Responsible AI Institute

    🌟 Welcome to the RAI Institute Team Spotlight Series 🌟
    At the Responsible AI Institute, our team is the driving force behind our mission of advancing responsible AI adoption. Each week, we'll introduce you to one of the dedicated professionals working to shape the future of AI for the better.
    🔎 Get to know Patrick McAndrew, our Member Engagement & Community Manager:
    1️⃣ What is your role at the Responsible AI Institute?
    "I'm the Member Engagement & Community Manager. I regularly check in with all our members to ensure that they are getting the most out of their membership with us. I also help spotlight the incredible work our members are doing in the ecosystem through our thought leadership initiatives."
    2️⃣ What ignites your passion for responsible AI and the work we do here?
    "Responsible AI is about protecting humanity's best interests. I am a people person and love the stories and experiences that make us human. Responsible AI allows us to enjoy the great things to come with AI technology while making sure that we cultivate our humanity in the process."
    3️⃣ Outside of work, what's a hobby or interest that might surprise people?
    "I love storytelling and believe entertainment is a powerful vessel for spreading a message. I'm in the process of developing a musical about responsible tech, for which I wrote the book and lyrics and co-wrote the music. If that's not surprising enough, I worked with a circus in Switzerland!"
    Join us every week to get to know another member of our team and learn more about the people committed to ensuring AI benefits humanity.
    #ResponsibleAI #AIEthics #TeamSpotlight #AIInnovation

    • Responsible AI Institute - Member Engagement & Community Manager
  • Responsible AI Institute

    📣 ICYMI: Last week we released the Responsible AI Top-20 Controls, which we will be stewarding! We thank Responsible AI Institute members Booz Allen Hamilton & Mission Control AI for leading this effort. To learn more and gain access to the first 15 Controls, see below. ⬇ #ResponsibleAI #AI #ResponsibleAIControls #ResponsibleAITop20Controls #AIGovernance #EthicalAI Geoffrey M Schaefer Ramsay Brown Alyssa Lefaivre Škopac

    Responsible AI Institute

    Excited to announce the Responsible AI Top-20 Controls initiative, stewarded by the Responsible AI Institute! 🚀
    Initiated by leaders from Booz Allen Hamilton & Mission Control AI, and developed by industry leaders at the Leaders in Responsible AI Summit 2024, these controls offer a straightforward path to jumpstart AI governance in your organization.
    What you'll gain access to:
    🔹 15 essential controls available now, including engaging executives, establishing a risk management strategy, and conducting impact assessments
    🔹 Open, simple, and current best practices
    🔹 Controls designed for AI users, managers, and governance teams
    🔹 Early access to 5 additional controls coming soon to address emerging AI developments
    The Top-20 Controls answer the crucial questions of responsible AI implementation: "What do I do?" and "Where do I start?"
    ➡ Ready to elevate your AI governance? Read the announcement to gain access to the first 15 Controls: https://lnkd.in/gWMUrX-s
    #ResponsibleAI #AIGovernance #EthicalAI #AIInnovation #Top20AIControls
    Geoffrey M Schaefer Ramsay Brown Alyssa Lefaivre Škopac

    Introducing the Responsible AI Top-20 Controls - Responsible AI

    https://www.responsible.ai

  • Responsible AI Institute

    ⏪ Welcome to the Responsible AI Weekly Rewind. The team at Responsible AI Institute curates the most significant AI news stories of the week, saving you time and effort. Tune in every Monday to catch up on the top headlines and stay informed about the rapidly evolving AI landscape.
    1️⃣ EU's AI Act gets published in bloc's Official Journal, starting clock on legal deadlines
    The EU AI Act, officially published in the EU's Official Journal, establishes a comprehensive regulatory framework for AI, coming into force on August 1, 2024, with a phased implementation culminating in full applicability by mid-2026. The Act introduces a risk-based approach, prohibiting practices like social credit scoring and untargeted facial recognition databases, mandating strict requirements for high-risk AI applications in areas such as law enforcement and critical infrastructure, and imposing transparency obligations on general purpose AI models and chatbots, while also requiring powerful AI systems to conduct systemic risk assessments. https://lnkd.in/gavjpxhn
    2️⃣ Hickenlooper to Introduce Bill to Provide Third-Party Audits for AI
    Senator John Hickenlooper of Colorado will introduce the VET AI Act, directing NIST to develop guidelines for third-party audits to verify AI companies' risk management and compliance. The bill aims to ensure AI systems are independently evaluated for safety and transparency, similar to financial industry standards. https://lnkd.in/g65_erti
    3️⃣ AI pushes Google emissions upward
    Google's latest environmental report reveals that the company's corporate emissions rose 13% last year, partly due to increased power usage by data centers serving AI applications. While Google aims to make AI infrastructure more efficient and has AI products that help reduce emissions elsewhere, it faces a significant challenge in reaching its 2030 net-zero goal. https://lnkd.in/gAAcW75g
    4️⃣ OpenAI board shake-up: Microsoft out, Apple backs away amid AI partnership scrutiny
    Microsoft has withdrawn from its non-voting observer role on OpenAI's board, while Apple has decided against taking a similar position. OpenAI will now update its business partners and investors through regular meetings amid increasing regulatory scrutiny of Big Tech's investments in AI startups. https://lnkd.in/e-jfDm4g
    #ResponsibleAI #AI #AIPolicy #AIGovernance #AINews #GenAI #AIRegulation

  • Responsible AI Institute

    🚨 The landmark EU AI Act has been officially published, setting key deadlines for AI regulation in Europe. Here are the highlights:
    📅 Key Dates:
    🔸 August 1, 2024: The law comes into force
    🔸 Early 2025: Prohibited AI uses become illegal
    🔸 April 2025 (approx.): Codes of practice for AI developers apply
    🔸 August 1, 2025: Rules for general purpose AI transparency start
    🔸 Mid-2026: Most provisions fully applicable
    🔸 2027: Compliance deadline for some high-risk AI systems
    🔍 Key Points:
    🔹 Risk-based approach: different obligations based on AI use cases and perceived risk
    🔹 Banned uses include social credit scoring and untargeted facial recognition databases
    🔹 High-risk uses face obligations on data quality and anti-bias
    🔹 Transparency requirements for AI chatbots and general purpose AI models
    🔹 Powerful GPAIs may need to conduct systemic risk assessments
    👀 What to Watch:
    🔹 Development of codes of practice by the EU's AI Office
    🔹 Ongoing discussions about stakeholder involvement in drafting guidelines
    📰 Read more highlights: https://lnkd.in/gvRgj99c
    📕 Review the EU AI Act: https://lnkd.in/euGfjvqx
    🧭 Navigating the AI Regulatory Landscape
    At the RAI Institute, we're dedicated to translating the complex landscape created by the EU AI Act into practical and actionable insights for alignment. Our RAI Hub contains curated resources that break down the intricacies of the Act, offering clear guidance for organizations navigating these new regulations.
    ⬇ To gain access to the RAI Hub, check the comments.
    #ResponsibleAI #EUAIAct #AIRegulation #TechPolicy #ArtificialIntelligence


    eur-lex.europa.eu
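
    For teams that want to track these milestones programmatically, a minimal sketch along the following lines can flag which obligations are already in effect on a given date. This is a hypothetical helper, not an official tool; the dates are taken from the approximate timeline in the post above and are placeholders, not official compliance dates.

```python
# Hypothetical helper (not an official tool): checks which EU AI Act
# milestones from the approximate timeline above are already in effect.
from datetime import date

# (date, description) pairs; entries marked "approx." are placeholders,
# not official compliance dates.
MILESTONES = [
    (date(2024, 8, 1), "Act comes into force"),
    (date(2025, 2, 1), "Prohibited AI uses become illegal (approx. early 2025)"),
    (date(2025, 4, 1), "Codes of practice for AI developers apply (approx.)"),
    (date(2025, 8, 1), "General purpose AI transparency rules start"),
    (date(2026, 7, 1), "Most provisions fully applicable (approx. mid-2026)"),
    (date(2027, 8, 1), "Compliance deadline for some high-risk systems (approx.)"),
]

def milestones_in_force(today: date) -> list[str]:
    """Return descriptions of milestones dated on or before `today`."""
    return [label for when, label in MILESTONES if when <= today]

if __name__ == "__main__":
    for label in milestones_in_force(date(2025, 9, 1)):
        print("In force:", label)
```

    Run with September 1, 2025 as the reference date, the sketch would list the first four milestones as in force.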

  • Responsible AI Institute reposted this

    Ramsay Brown
    Securing AI. CEO @ Mission Control AI. Research @ Cambridge.

    ☀️ ASK: If you're a Responsible AI leader, reach out to me about joining the Industry Advisory Group for the Responsible AI Top-20.
    It's with immense gratitude that Mission Control AI, Booz Allen Hamilton, and The Responsible AI Institute bring you the Responsible AI Top-20: the community-driven "Get Started Kit" for Responsible AI transformation.
    The Top-20 is built to help organizations looking for Responsible AI change go from 0 to 1. It does not recommend a single software package, RMF, or law to adhere to. Rather, it's the sensemaking tool that welcomes organizations that were otherwise oblivious that they needed any of those things at all.
    Some months ago, Geoffrey M Schaefer and I discussed my recent trip to DefCon, the cybersecurity (read: hacker) conference. It's been my hypothesis for a while that the natural unification of [Responsible AI / Trustworthy AI / AI Ethics] and AI Safety would happen under the auspices of cybersecurity, which has substantially more inertia, capital, buy-in, practitioners, and experience under its belt. And the lightbulb went off for Geoff (who had been a cybersecurity leader).
    Something Responsible AI is missing - that cybersecurity got right - is the "Get Started Kit". We have values and principles. We have RMFs. We have certifications. We have training material. We have laws. We have a lot. And that's actually been, at times, more a barrier than a help, because it's hard to wrap your head around all of it if you're just starting out. Which is almost everyone.
    Our field has been missing the map of the territory: the reference that would orient someone on how to go from 0 to 1, or from 1 to 2, and navigate their way through this journey.
    So at this year's Leaders in Responsible AI Summit, we invited 80 of our colleagues from around the world to help make that map. Together, this brilliant group workshopped and assumption-tested their way through 20 controls: 20 specific, practical, tangible, and useful things that an organization can do to improve its Responsible AI stance.
    We're immensely grateful to:
    - Geoff and the BAH team for leading the Industry Advisory Board, which will unite leaders like yourself from around the world to guide the development and implementation of the Top-20.
    - Alyssa Lefaivre Škopac and the Responsible AI Institute team for being the founding home of the Top-20 and breathing life into this enterprise.
    - All the delegates of the Leaders in Responsible AI Summit who worked so hard to provide feedback (and constructive criticism) that strengthened this project.
    https://lnkd.in/gHnstp3B

  • Responsible AI Institute

    🎙️ New Episode Alert: Responsible AI In Action #20 🎙️
    We're thrilled to welcome David Gleason, Head of Responsible AI at Spark92, to our latest episode of Responsible AI In Action!
    Join host Patrick McAndrew as he and David dive into:
    🔹 The dynamic landscape of AI
    🔹 Why a multidisciplinary approach is crucial for tackling AI challenges
    🔹 The latest on the potential (but now unlikely) Meta-Apple partnership
    David brings over 25 years of experience in Data Management, Architecture, and Strategy to the table. As a former CDAO in Financial Services and a recognized expert across industries, he offers unique insights into data-powered business growth.
    Don't miss this opportunity to learn from a visionary leader who's shaping the future of Responsible AI!
    💻 Watch on demand now: https://lnkd.in/gQqBuhuF

  • Responsible AI Institute

    Excited to announce the Responsible AI Top-20 Controls initiative, stewarded by the Responsible AI Institute! 🚀
    Initiated by leaders from Booz Allen Hamilton & Mission Control AI, and developed by industry leaders at the Leaders in Responsible AI Summit 2024, these controls offer a straightforward path to jumpstart AI governance in your organization.
    What you'll gain access to:
    🔹 15 essential controls available now, including engaging executives, establishing a risk management strategy, and conducting impact assessments
    🔹 Open, simple, and current best practices
    🔹 Controls designed for AI users, managers, and governance teams
    🔹 Early access to 5 additional controls coming soon to address emerging AI developments
    The Top-20 Controls answer the crucial questions of responsible AI implementation: "What do I do?" and "Where do I start?"
    ➡ Ready to elevate your AI governance? Read the announcement to gain access to the first 15 Controls: https://lnkd.in/gWMUrX-s
    #ResponsibleAI #AIGovernance #EthicalAI #AIInnovation #Top20AIControls
    Geoffrey M Schaefer Ramsay Brown Alyssa Lefaivre Škopac

    Introducing the Responsible AI Top-20 Controls - Responsible AI

    https://www.responsible.ai

  • Responsible AI Institute

    QQ: Have you downloaded our AI Policy Template❓
    Establishing a strong foundation for responsible AI is crucial for organizations developing, procuring, supplying, or using AI technologies. This template helps you:
    ✅ Develop AI policy elements grounded in ethical principles to mitigate risks, build trust, and drive innovation
    ✅ Operationalize leading standards and guidance like ISO/IEC 42001 and the NIST AI Risk Management Framework
    ✅ Customize an AI policy to fit your organization's unique context, values, and AI use cases
    The template covers essential components, including:
    🔍 Governance for accountability and oversight
    🔒 Data management for privacy, fairness, and transparency
    🔮 Risk management to identify, measure, and treat AI impacts
    ♻️ Lifecycle project management to embed responsibility
    📜 Procurement and documentation to manage requirements
    Developed by the Responsible AI Institute, this template reflects sound practices and expertise to accelerate your responsible AI maturity.
    📩 Start your responsible AI journey on a solid foundation. Read the press release and download the AI Policy Template now: https://lnkd.in/gyfrMcic
    #ResponsibleAI #AIPolicy #AIStandards #RiskManagement

    Responsible AI Institute Launches the AI Policy Template to Help Organizations Build Foundational Responsible AI Policies and Governance - Responsible AI

    https://www.responsible.ai
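
    The template itself is a document rather than code, but the component list above maps naturally onto a machine-readable checklist. The sketch below is a hypothetical illustration: the section names come from the post, while the data structure and the mapping to NIST AI RMF core functions are assumptions, not the structure of the actual template.

```python
# Illustrative sketch only: section names below come from the post, but the
# data structure and the mapping to NIST AI RMF functions are assumptions,
# not the contents of the RAI Institute's actual AI Policy Template.
from dataclasses import dataclass, field

# The four core functions of the NIST AI Risk Management Framework.
NIST_AI_RMF_FUNCTIONS = {"Govern", "Map", "Measure", "Manage"}

@dataclass
class PolicySection:
    name: str
    rmf_functions: set = field(default_factory=set)  # assumed mapping to RMF functions
    owner: str = "TBD"

policy = [
    PolicySection("Governance for accountability and oversight", {"Govern"}),
    PolicySection("Data management for privacy, fairness, and transparency", {"Map", "Measure"}),
    PolicySection("Risk management to identify, measure, and treat AI impacts", {"Map", "Measure", "Manage"}),
    PolicySection("Lifecycle project management to embed responsibility", {"Manage"}),
    PolicySection("Procurement and documentation to manage requirements", {"Govern", "Manage"}),
]

# Flag any RMF core function not covered by at least one policy section.
covered = set().union(*(s.rmf_functions for s in policy))
missing = NIST_AI_RMF_FUNCTIONS - covered
print("Gaps:", sorted(missing) if missing else "none; all four RMF functions are covered")
```

    A coverage check like this is only a tracking convenience; the template itself remains the authoritative document.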
