When should AI not comply? Our latest work tackles this question: beyond safety considerations, Faeze Brahman, Sachin Kumar & their collaborators outline a taxonomy of model noncompliance and offer CoCoNot, a resource for training and evaluating models' noncompliance.
Allen Institute for AI (AI2)
Research Services
Seattle, WA · 44,009 followers
AI for the Common Good.
About us
Our mission is to contribute to humanity through high-impact AI research and engineering.
- Website
- http://allenai.org
- Industry
- Research Services
- Company size
- 51-200 employees
- Headquarters
- Seattle, WA
- Type
- Nonprofit
- Founded
- 2014
- Specialties
- Artificial Intelligence, Deep Learning, Natural Language Processing, Computer Vision, Machine Reading, Machine Learning, Knowledge Extraction, Common Sense AI, Machine Reasoning, Information Extraction, and Language Modeling
Locations
- Primary: Seattle, WA 98013, US
Employees at Allen Institute for AI (AI2)
- Eran Megiddo, Startup CEO | Education Technology Executive | New Product Innovation | Global Business Leadership
- Chris Doehring, Lead Software Engineer at AI2
- Kirby Winfield, VC investing in AI/ML
- Peter Clark, Senior Research Director at the Allen Institute for Artificial Intelligence (AI2)
Updates
- AI models are more integrated into our daily lives than ever before, making user safety paramount. Introducing AI2's Safety Toolkit, featuring a red-teaming framework, a high-quality, large-scale safety training dataset, and content safety moderation tools. #AISafety
The AI2 Safety Toolkit: Datasets and Models for Safe and Responsible LLMs Development
blog.allenai.org
- It’s time to talk about open: our CEO Ali Farhadi joined open model builders Jonathan Frankle from Databricks, Laurens van der Maaten from AI at Meta, and Jon Turow from Madrona to discuss the future of open-source AI, meaning models and artifacts made publicly available to modify and reuse.
- Data toxicity can lead to harmful model outputs, and since most evaluations focus on English datasets, we’re underestimating multilingual toxicity in state-of-the-art LLMs. Our team partnered with researchers from Carnegie Mellon University and the University of Virginia to highlight this gap.
PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language…
blog.allenai.org
- Yejin Choi takes the stage this Thursday at Databricks’ #DataAISummit. If you’re attending in person or tuning in virtually, watch with us 👉: https://lnkd.in/dgTb63-s
- Our CEO Ali Farhadi joined Ina Fried at the Axios AI+ Summit in New York this week, speaking about openness in research, how to build trust between AI and the public, and more. Watch the entire conversation here: https://bit.ly/AliAxios
- “Without actual openness, it’s hard to be scientific about the evaluation.” Ali Farhadi weighed in on openness and the future of AI at the Axios AI+ Summit. Read more about why we believe in a truly open-source future for AI: https://lnkd.in/gVCvneug
Allen Institute CEO: How AI "broke trust" with the public
axios.com
- Tune in now for Ali at Axios AI+ NY!
Axios AI+ NY
https://www.youtube.com/
- Happening today! Join the Axios AI+ Summit to hear from our CEO Ali Farhadi about openness in AI.
Axios is bringing our AI+ Summit to NYC for Tech Week in partnership with Tech:NYC. Hear from leaders who are shaping the future of AI, from finance to media and healthcare & more.
Axios AI+ NY
www.linkedin.com
- Axios’ AI+ Summit is happening tomorrow in NYC and via livestream 🚀 Tune in at 12:10 pm PT on June 5th for a fireside chat with our CEO Ali Farhadi and Ina Fried. We’re excited to join fellow leaders who are shaping the future of AI across industries! Register: https://lnkd.in/e-UVsjDN
Axios AI+ NY 2024 Livestream
axiosainy2024livestream.splashthat.com