This Claude 3.5 Sonnet response is fascinating. It got the answer wrong, but it caught itself and attempted a correction. This is the first time I've seen this behavior. Is it: A) Emergent behavior from a smarter model? B) Creative training to make models realize mistakes?
Is the corrected answer below good? Maybe use a critic model to check Claude 3.5's answer and reprompt the model 🤔
Sonnet 3.5 takes around 8 seconds to extract data from an image, vs the 16 seconds it took Opus.
Or is this some sort of Reflection agentic pattern built in?
We use critics in our pipelines to refine and validate the output... it's all coming from a generative model with probabilistic outputs at the end of the day. ;)
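The critic-and-reprompt idea mentioned here can be sketched roughly like this. This is a minimal illustration, not anyone's production pipeline: `generate()` and `critique()` are hypothetical stand-ins for real model calls (e.g. a generator model and a critic model), and the hard-coded strings just simulate a wrong first answer that gets corrected on retry.

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a generative model call (e.g. Claude 3.5 Sonnet).
    # Simulates a wrong first answer, corrected when reprompted with feedback.
    if "retry" in prompt:
        return "9.9 is larger than 9.11"
    return "9.11 is larger than 9.9"

def critique(answer: str) -> bool:
    # Hypothetical critic model: returns True if the answer passes validation.
    return "9.9 is larger" in answer

def answer_with_critic(prompt: str, max_retries: int = 2) -> str:
    # Generate, validate with the critic, and reprompt on failure.
    answer = generate(prompt)
    for _ in range(max_retries):
        if critique(answer):
            return answer
        # Feed the critic's verdict back into a retry prompt.
        answer = generate(prompt + " (retry: previous answer failed validation)")
    return answer

print(answer_with_critic("Which is larger, 9.9 or 9.11?"))
```

The loop bound matters in practice: probabilistic outputs mean the critic can keep failing, so you cap retries and return the last attempt rather than looping forever.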