John Berryman’s Post


Consultant in Large Language Model Application Development

This Claude 3.5 Sonnet response is fascinating. It got the answer wrong, but it caught itself and attempted a correction. This is the first time I've seen this behavior. Is it: A) Emergent behavior from a smarter model? B) Creative training to make models realize mistakes?

[Image: the Claude 3.5 Sonnet response; no alt text provided]
Walter Storm

Entrepreneur, Futurist, and Industry Disruptor

3w

We use critics in our pipelines to refine and validate the output; it’s all coming from a generative model with probabilistic outputs at the end of the day. ;)
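The critic-and-retry pipeline mentioned above can be sketched roughly as below. This is a minimal illustration, not anyone's actual pipeline: `generate` and `critique` are hypothetical stand-ins for calls to a generative model and a critic model.

```python
from typing import Optional

def generate(prompt: str, feedback: Optional[str] = None) -> str:
    # Hypothetical stand-in for a generative-model call.
    # With feedback it "corrects" itself; without, it answers wrong.
    if feedback:
        return "4"
    return "5"

def critique(prompt: str, answer: str) -> Optional[str]:
    # Hypothetical stand-in for a critic model.
    # Returns feedback text, or None if the answer passes review.
    return None if answer == "4" else "2 + 2 is not 5; recompute."

def refine(prompt: str, max_rounds: int = 3) -> str:
    # Generate, then loop: critique the answer and reprompt with
    # the critic's feedback until it passes or rounds run out.
    answer = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(prompt, answer)
        if feedback is None:
            break
        answer = generate(prompt, feedback=feedback)
    return answer

print(refine("What is 2 + 2?"))  # -> 4
```

The same loop structure underlies the "Reflection" agentic pattern: the only difference is whether the critic is a separate model or the same model prompted to review its own output.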

Anas Rabhi

Data Scientist - NLP | Generative AI

3w

Is the corrected answer below good? Maybe a critic model could check Claude 3.5's answer and reprompt the model 🤔

Ravi Somepalli

Hands on Engineering Manager| Founder at Lakumbra| Engineer| Application Architect

3w

Sonnet 3.5 takes around 8 seconds to extract data from an image vs. the 16 seconds it took for Opus.

Ganapathy Subramaniam

Generative AI / ML / DS Developer / Ex Stanford

2w

Or is this some sort of Reflection agentic pattern built in?
