We’re hosting a live HumanFirst webinar this Thursday at 1 PM EST on an interesting problem: auto-classification.

I spoke to a customer service leader recently who was in a tough spot with this. Company earnings were down, and a specific segment of their customer base was really unhappy. They had a world-class program capturing metrics like NPS and CSAT. But the metrics, for better or for worse, were for the most part LAGGING indicators. WHY were these numbers down? What was the root cause? She had to get to the bottom of it.

The team had several hundred thousand customer service emails, survey responses, and call transcripts they wanted to analyze - millions of documents of mostly natural language data. Too much to summarize and label manually. They submitted a request to the data science and AI team - and entered the queue. Weeks later the project was accepted; weeks after that there was a basic labeling of calls/emails/surveys. But without the domain experts building the projects, these were high-level topics: useful, but not enough to deliver the depth of insight this team needed. The project took over a month with 5 people working on it - a lot of technical resources dedicated to what should be a simple task. Plus, it was a one-time project. Running the analysis again might be a little faster, but there was still a lot of work behind the scenes. Frankly, the data science team wasn't thrilled either - they were inundated with projects and wanted a faster way to do this kind of work.

Enter HumanFirst. In our upcoming webinar, we're going to walk through two approaches for easily classifying, labeling, and curating a natural language data set. The workflows will show you how to easily cluster and semantically search your data, run prompts across a large corpus to quickly build a relevant taxonomy, and classify your data.
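The cluster-then-label idea above can be illustrated with a minimal sketch. This is not HumanFirst's implementation; it is a toy, stdlib-only version in which bag-of-words vectors stand in for real sentence embeddings, and a greedy single-pass grouping stands in for a real clustering algorithm. The sample emails and the threshold value are illustrative.

```python
# Toy sketch: group short customer texts by cosine similarity so that
# each cluster can then be named/labeled (manually or via an LLM prompt).
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words stand-in for a real sentence embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(texts, threshold=0.3):
    """Greedy single-pass clustering: each text joins the first cluster
    whose representative (first member) is similar enough, else starts
    a new cluster."""
    clusters = []
    for text in texts:
        vec = vectorize(text)
        for members in clusters:
            if cosine(vec, vectorize(members[0])) >= threshold:
                members.append(text)
                break
        else:
            clusters.append([text])
    return clusters

emails = [
    "my bill is too high this month",
    "why is my bill so high",
    "the app keeps crashing on login",
    "the app is crashing on login again",
]
groups = cluster(emails)
# Two clusters emerge: billing complaints and app-crash reports.
```

With real embeddings, the same shape of workflow scales to the hundreds of thousands of documents described above, and each cluster becomes a candidate node in the taxonomy.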
This is the foundational building block for a variety of desirable outcomes, like:
➡️ Building a RAG chatbot
➡️ Training an NLU model
➡️ Analyzing natural language content
➡️ Improving operations
You can register at the link below. See you there! https://bit.ly/3VxVaNo
John Norris’ Post
More Relevant Posts
-
In the world of data and analytics, the ability to converse with data using natural language can seem almost like science fiction. But it’s possible. Check out BCG X’s real-life implementation of a #GenAI solution utilizing a technique known as text-to-SQL, which enables business users to access data using natural language: https://on.bcg.com/4amEsou #DataScience #DataScientist #LLM
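The details of BCG X's system are in the linked article, but the core text-to-SQL pattern can be sketched in a few lines: give the model the database schema plus the user's question, take SQL back, and execute it. In this toy version the LLM call is stubbed out with a canned answer; in practice `fake_llm` would be an API call to a real model.

```python
# Toy text-to-SQL loop: schema + question in, SQL out, execute against SQLite.
import sqlite3

SCHEMA = "CREATE TABLE orders (region TEXT, amount REAL)"

def build_prompt(question):
    return (
        f"Given this schema:\n{SCHEMA}\n"
        f"Write one SQLite query answering: {question}\nSQL:"
    )

def fake_llm(prompt):
    # Stand-in for a real LLM call; returns what a capable model
    # would plausibly generate for the sample question below.
    return "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"

def answer(question, rows):
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    sql = fake_llm(build_prompt(question))
    return conn.execute(sql).fetchall()

result = answer("Total sales by region?",
                [("EU", 10.0), ("US", 5.0), ("EU", 2.5)])
# result == [("EU", 12.5), ("US", 5.0)]
```

Production systems add guardrails this sketch omits: validating the generated SQL, restricting it to read-only access, and retrying on execution errors.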
-
A quick sketch of roughly what the Semantic Data Stack looks like (I just made up that term; it's not an official dbt-ism, but I think it describes what I'm after well). It's quite different from the classic ELT Modern Data Stack: there's much more emphasis on leveraging dbt to create normalized, API-like mart models that can be rigorously tested, contracted, and versioned. These then feed into a rich, expansive Semantic Layer that powers thin BI and advanced AI use cases, where an analysis can be progressively refined step by step in natural language. I don't think anybody is quite here yet from what I've seen, but this is what I'm personally iterating toward.
-
🚀 Big news! We launched our blog this week! 👩🚀 Our first post shows how to use trufflepig to create a custom semantic search engine for your unstructured data. We demonstrate this with our Better GitHub Search app, built on the trufflepig API, which makes GitHub's popular repositories searchable via natural language. 😁 You can apply these same principles to index and search through any unstructured data you have, making it easier for your AI models to find the relevant information in your data fast. 👍 The project is open source. Check it out, share your thoughts in the comments, and follow trufflepig to boost your AI projects with the power of retrieval! ✍ Blog: https://lnkd.in/gSby5AZ8
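The core of a semantic search engine like the one described above is an index that ranks documents against a free-form query. As a minimal, hedged sketch (not the trufflepig API, whose details are in the linked blog post), here is token-overlap ranking with Jaccard similarity as a crude stand-in for embedding similarity; the repo names and descriptions are made up:

```python
# Crude stand-in for embedding-based retrieval: rank repository descriptions
# against a natural-language query by token overlap (Jaccard similarity).
def tokens(text):
    return set(text.lower().split())

def search(query, docs, top_k=2):
    q = tokens(query)
    scored = []
    for name, desc in docs.items():
        d = tokens(desc)
        score = len(q & d) / len(q | d) if q | d else 0.0
        scored.append((score, name))
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_k] if score > 0]

repos = {
    "requests": "simple http library for humans",
    "flask": "lightweight web framework for building apps",
    "numpy": "array computing and numerical routines",
}
hits = search("http client library", repos, top_k=1)
# hits == ["requests"]
```

A real system swaps the token sets for dense embeddings and an approximate-nearest-neighbor index, but the interface (query in, ranked documents out) stays the same.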
trufflepig launched a blog! How to build a custom search engine.
-
https://lnkd.in/gvyAK4dp Imagine business users and analysts effortlessly extracting meaningful insights from a #neo4j #graphdatabase in natural language, just like having a conversation with a data expert. No more cryptic queries, no more figuring out graph syntax: just ask your questions in plain English and watch Neo4j reveal its hidden insights. This is the power of #LLM-powered applications, and this guide is your roadmap to building them. Let’s dive into development best practices, from #datamodeling to #promptengineering, and democratize data exploration like never before. #generativeai #genai #llms #largelanguagemodels #graphdatabases
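One prompt-engineering practice the linked guide's topic suggests: include the graph schema and a few-shot example in the prompt so the model generates Cypher against real labels and relationship types instead of inventing them. A minimal sketch (the schema and example are hypothetical, and the actual LLM call is left out):

```python
# Sketch of prompt construction for LLM-generated Cypher: supply the graph
# schema plus a worked example so the model sticks to real labels/relations.
GRAPH_SCHEMA = "(:Person {name})-[:WORKS_AT]->(:Company {name})"

FEW_SHOT = [
    ("Who works at Acme?",
     "MATCH (p:Person)-[:WORKS_AT]->(c:Company {name: 'Acme'}) RETURN p.name"),
]

def cypher_prompt(question):
    lines = [f"Graph schema: {GRAPH_SCHEMA}", "Examples:"]
    for q, cy in FEW_SHOT:
        lines.append(f"Q: {q}\nCypher: {cy}")
    lines.append(f"Q: {question}\nCypher:")
    return "\n".join(lines)

prompt = cypher_prompt("Which company does Alice work at?")
# The prompt ends with "Cypher:" so the model completes with a query.
```

The generated Cypher would then be executed against Neo4j via the official driver, ideally with validation in between.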
From Code to Conversation: Unleashing the Potential of Neo4j with LLM Powered Conversational…
medium.com
-
While words may seem similar, their contextual meanings can vary greatly. At Querent, we've learned firsthand why placing excessive trust in RAG and vector similarities can be problematic, often leading to misleading information being presented as profound insight. We firmly believe that beyond mere hallucinations, our challenges lie in grappling with data representation issues when applying generalized language models to specific domains. Our vision is to tackle this problem from the ground up by transforming and translating volumes of data into a semantic web of interconnected concepts, fueling better information retrieval and better insights while promoting data governance and quality driven by domain expertise. #DomainAI #Semantics #DataFabrics
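The "semantic web of interconnected concepts" contrasted with vector similarity above can be made concrete with a toy triple store: facts are explicit (subject, predicate, object) edges, and retrieval walks the graph rather than comparing embeddings. This is purely an illustrative sketch of the representation, not Querent's product; the example entities are invented.

```python
# Toy in-memory triple store: explicit relations instead of vector similarity.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.out = defaultdict(list)  # subject -> [(predicate, object)]

    def add(self, s, p, o):
        self.out[s].append((p, o))

    def related(self, s, depth=2):
        """All concepts reachable from s within `depth` hops."""
        seen, frontier = set(), {s}
        for _ in range(depth):
            nxt = set()
            for node in frontier:
                for _, o in self.out.get(node, []):
                    if o not in seen:
                        seen.add(o)
                        nxt.add(o)
            frontier = nxt
        return seen

store = TripleStore()
store.add("pump_a", "part_of", "line_1")
store.add("line_1", "located_in", "plant_x")
ctx = store.related("pump_a")
# ctx == {"line_1", "plant_x"}: context assembled by traversal, not similarity.
```

Because each edge is an explicit, inspectable claim, domain experts can audit and correct the graph, which is where the governance argument comes from.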
Why RAG won't solve generative AI's hallucination problem | TechCrunch
https://techcrunch.com
-
Co-Founder of The Ravit Show | ACCA | Love to share resources on trending topics for the community | Community Evangelist
Do you want to know how to score Large Language Model (LLM) results? Now is the time! Here's the link to the upcoming webinar "Scoring LLM Results with UpTrain and SingleStoreDB": https://lnkd.in/eaQkHmgE

This webinar is set to revolutionize your approach to scoring and analyzing LLM applications. This is your chance to transform your data into actionable insights that can drive your business forward. Join this session, where UpTrain's cutting-edge open-source LLM evaluation tool converges with the high-velocity data processing expertise of SingleStoreDB. Prepare to be part of a live demo and code-sharing experience that promises to redefine what efficiency means for LLM applications.

Here's a sneak peek at what you'll gain:
✅ Expertise in refining and enhancing LLM applications through UpTrain's advanced experimentation capabilities.
✅ Insights into the pivotal role of SingleStoreDB's real-time data infrastructure in expediting LLM scoring analysis.
✅ Advanced strategies to mitigate LLM inaccuracies and enhance the precision of responses.
✅ A comprehensive understanding of the seamless integration between UpTrain and SingleStoreDB for immediate LLM application evaluation and enhancement.

Speakers:
🌟 Sourabh Agrawal, CEO and Founder at UpTrain AI
🌟 Madhukar Kumar, Chief Developer Evangelist at SingleStoreDB
🌟 Alex P., Growth Engineer at SingleStoreDB

Are you prepared to be at the forefront of real-time data analytics in the LLM field? Don't forget to tune in to this webinar on 8th November, 10 am PDT!
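To make "scoring LLM results" concrete, here is a deliberately simple sketch of the kind of automated check such tools compute: grade a response for grounding (word overlap with the retrieved context) and completeness (coverage of expected facts). This is not UpTrain's actual API or metric definitions, just an illustration of the idea with made-up inputs.

```python
# Toy LLM response scorer: grounding = fraction of response words found in the
# context; completeness = fraction of expected facts mentioned in the response.
def score_response(response, context, expected_facts):
    resp = response.lower()
    ctx_words = set(context.lower().split())
    resp_words = set(resp.split())
    grounding = (len(resp_words & ctx_words) / len(resp_words)
                 if resp_words else 0.0)
    covered = sum(1 for fact in expected_facts if fact.lower() in resp)
    completeness = covered / len(expected_facts) if expected_facts else 1.0
    return {"grounding": round(grounding, 2),
            "completeness": round(completeness, 2)}

scores = score_response(
    response="The warranty lasts 12 months",
    context="our warranty lasts 12 months from purchase",
    expected_facts=["12 months", "warranty"],
)
# scores == {"grounding": 0.8, "completeness": 1.0}
```

Real evaluation frameworks typically use an LLM as the judge rather than word overlap, and persist the scores to a database (the SingleStoreDB angle here) so regressions can be tracked across prompt and model versions.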
-
I've recently shared a new article on Towards Data Science, where I explore a method for guiding generative AI to produce specific tokens. By leveraging this approach, we can effectively shape the generative process to adhere to desired formats such as JSON, SQL, or pandas code. This has profound implications for improving our ability to translate human language into computer-readable formats. https://lnkd.in/d2JViyCD
Structured Generative AI
towardsdatascience.com