Sign in to view Pedram’s full profile
Fairfax, California, United States
Contact Info
4K followers
500+ connections
About
Previously, Head of Data at Hightouch. Long history of data…
Experience & Education
-
Dagster Labs
***** ********* *******
-
**** ***** ****
*******
-
*****
*******
-
********** ** *******
******** ** ******* - ** **********
-
***** ********** ******* *****
******** ** ******* (*.*.)
Languages
-
French
Native or bilingual proficiency
-
English
Native or bilingual proficiency
-
Farsi
Native or bilingual proficiency
Other similar profiles
-
Tim Castillo
Atlanta, GA
-
Sandy Ryza
San Francisco Bay Area
-
Artyom Keydunov
San Francisco, CA
-
Fraser Marlow
Philadelphia, PA
-
Jonathan Neo
Perth, WA
-
Shaun McAvinney
Philadelphia, PA
-
Joe Reis 🤓
Salt Lake City, UT
-
Colton P.
Washington DC-Baltimore Area
-
Zach Wilson
San Francisco, CA
-
Erin K. Cochran
Philadelphia, PA
-
Nick Tolerico
Irvine, CA
-
Rex Ledesma
Brooklyn, NY
-
Ritchie Vink
The Randstad, Netherlands
-
Sung Won Chung
United States
-
Sean Lopp
Golden, CO
-
Jethin Abraham
Dallas-Fort Worth Metroplex
-
Deekshana Krishna
United States
-
Krunal Dholakia
San Jose, CA
-
Chris Charles
New York City Metropolitan Area
-
Ashwin J.
Greater Toronto Area, Canada
Explore more posts
-
Jesse Anderson
Unapologetically Technical's newest episode is now live! In this episode of Unapologetically Technical, I interview Hubert Dulay the author of Streaming Data Mesh and Developer Advocate at StarTree. We talked about his early experience with web backends like CORBA and SOAP and how those prepared him for data work. He shares his advice for those with web development skills to transition into data and what it's like for a person leaving a company after a long tenure there. We discuss his time at Cloudera and Confluent and how the data industry has changed over time. We talk about his recent experience with Flink and StarTree. We go in-depth into streaming data mesh, covering what it is and how to create real-time data products. We discuss the ways teams can create joined or enriched data products across domains. We round out the discussion with some tips on how to get data mesh adoption. Watch the full episode here:
10
1 Comment
-
Micah Wylde
With Redpanda Data's recent acquisition of Benthos I've gotten some questions about how it compares to Arroyo. In short: Benthos is *stateless*, while Arroyo (like Flink or RisingWave) is *stateful*. This may sound small, but it adds up to a fundamental difference in how these systems are designed, built, and used. In a streaming engine, statefulness combines two interrelated features: * Repartitioning data (GROUP BY in SQL) * Remembering previously seen events With just these two features, we can support most of normal SQL (window functions, joins, aggregation, etc.) plus time-oriented streaming operators (time windows, event-time semantics, watermarks). Stateless engines, on the other hand, are limited to simple transformations and filters (SELECT and WHERE in SQL terms) that operate on individual events. But all of this is a tradeoff; stateful systems are much harder to operate. So if you don't need joins, aggregates, or windows, a stateless system can be a better choice. They also work well as infra glue, routing data between different systems—Benthos in particular has a huge connector ecosystem. For a deeper dive on this, I wrote down some thoughts on our blog:
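The stateless/stateful split above can be sketched in a few lines of Python. This is a hypothetical illustration only (not Arroyo or Benthos code); the event shape `(user, event_time, value)` and the 10-second tumbling window are assumptions made for the example:

```python
# Stateless vs. stateful stream operators over a toy event stream.
from collections import defaultdict

events = [
    ("alice", 0, 5), ("bob", 1, 3), ("alice", 2, 7), ("bob", 11, 4),
]

# Stateless: each event is filtered on its own (SELECT/WHERE in SQL terms).
# No memory of prior events is needed, so no state to checkpoint or recover.
def stateless_filter(stream, threshold):
    for user, ts, value in stream:
        if value > threshold:
            yield (user, ts, value)

# Stateful: repartition by key (GROUP BY) and remember prior events,
# here summing values per user over tumbling windows of `width` seconds.
def tumbling_window_sum(stream, width=10):
    state = defaultdict(int)  # (user, window_index) -> running sum
    for user, ts, value in stream:
        state[(user, ts // width)] += value
    return dict(state)

print(list(stateless_filter(events, 4)))  # [("alice", 0, 5), ("alice", 2, 7)]
print(tumbling_window_sum(events))
```

The stateless pass needs nothing between events; the windowed sum must keep per-key, per-window state, which is exactly what makes stateful engines harder to operate but able to support joins, aggregates, and windows.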
56
4 Comments
-
Victor Kostyuk
I wrote a blog post on a thorny but underappreciated problem in ML-based marketing personalization: event attribution. It's how you go from the actions you take and customer behavior you observe to creating training experiences for your model. Good attribution makes a huge difference in driving uplift, there are a lot of degrees of freedom in configuring it, and at the same time, it's rarely discussed in ML/bandits/RL literature.
38
4 Comments
-
Phillip Rhodes
This is more of a "talking to myself in public" thing than any kind of fully fleshed out hypothesis, so take this with the appropriate grain(s) of salt. But if I were interested in AI and were looking for an angle to explore that might yield something interesting, I'd start working on learning "Multiscale Mathematics" (also known as "Multiscale Modeling"). This is rooted in my personal belief that there has to be a "multi-scale" aspect to intelligence because there is such an aspect inherent in our underlying physical reality.
2
-
Kingsley Uyi Idehen
"..Although it’s always helpful to improve documentation for developers, many (myself included) prefer to dive in and learn while doing." -- an excerpt from the quoted post below. That quote from Jon Udell highlights a core challenge with documentation: determining the target audience. Unfortunately, trying to cater to everyone often leads to disenfranchising some, which can turn documentation from a tool for spreading goodwill into one that spreads ill-will. Large Language Model-driven chatbots offer a more adaptable solution to this age-old problem by providing a companion that aids working through the problem, courtesy of its natural language interaction modality. Jon and I have known each other for what feels like eons in computer time. Even when we haven't spoken directly, I feel we're often on the same wavelength about the evolution of software. For instance, I was addressing a similar issue with production documentation earlier today and turned to our #OPAL agent to expedite finding a solution, as illustrated by the following session transcripts. [1] https://lnkd.in/eWrrbXMG -- where the &t url parameter puts the page view in animated teletype mode (approx 120 per msec per typed word interval). [2] https://lnkd.in/eVZKE2vf -- conventional page view. #GenAI #AI #SmartAgent #UseCase
3
2 Comments
-
Ash Koosha
Today's problem with AI output being unoriginal is our limited access to the deeper, higher-quality data within the model. For example, don't prompt "write me a lyric about x". Ask an LLM to sit next to a lake, look at a scene and describe it. Then ask it to use the text to compare the scene to how one can deal with the loss of a friend in a lyrical structure. The path to access lies in a domain-specific understanding of prompts. Frameworks behind prompts are going to be the official language of the future to learn, e.g. study the "approach" to how a painting was created or what the stages of writing a paper are, rather than asking AI "write me a paper". Humans are about to become 100x more curious about the deeper layers of "how x is made", to be able to speak to AI and capture unique/original output. #LLM #promptengineering #prompts #latentspace
10
1 Comment
-
Zach McCormick
My team and I at Superpowered AI (YC S22) have been working in the RAG space for a little over a year now, and we’ve recently decided to open-source all of our core retrieval tech. spRAG is a retrieval system that’s designed to handle complex real-world queries over dense text, like legal documents and financial reports. As far as we know, it produces the most accurate and reliable results of any RAG system for these kinds of tasks. For example, on FinanceBench, which is an especially challenging open-book financial question answering benchmark, spRAG gets 83% of questions correct, compared to 19% for the vanilla RAG baseline (which uses Chroma + OpenAI Ada embeddings + LangChain). You can find more info about how it works and how to use it in the project’s README. We’re also very open to contributions. We especially need contributions around integrations (i.e. adding support for more vector DBs, embedding models, etc.) and around evaluation. Finally, if you have an unstructured retrieval problem that you’re struggling to build a reliable solution for, feel free to reach out to me and I’d be happy to help and talk through whether spRAG is the right solution or not. #llm #rag #ai #developer https://lnkd.in/gpGcCxb7
12
1 Comment
-
Emil Eifrem
My friend Philip Rathle has written an outstanding blog post that summarizes the recent buzz around GraphRAG, what we've learned from a year of helping users build systems with Knowledge Graphs + LLMs, and where we believe the space is going. There's been an explosion of research articles in the last few months discussing how to use Knowledge Graphs in RAG systems. For good reasons! It turns out that building a knowledge graph of your data and using it in RAG gives you several powerful advantages. 1️⃣ It gives you better answers to most if not ALL questions you might ask an LLM using normal vector-only RAG. That alone will be a huge driver of GraphRAG adoption. 2️⃣ In addition to that, once you've created your Knowledge Graph you get easier development thanks to data being visible when building your app. Making LLM-backed applications easier to build is a BIG DEAL and sorely needed in these non-deterministic systems. 3️⃣ A third major advantage is that graphs can be readily understood and reasoned upon by humans as well as machines. Building with GraphRAG is therefore easier, gives you better results, and -- this is a killer in many industries -- is explainable and auditable! That's a powerful combo! I believe that over time, GraphRAG will overtake vector-only RAG as the default architecture in LLM-backed applications. It makes total sense that the R in RAG becomes graph-centric. As an industry, we already converged on the best way to do Retrieval for the web. The key to a good R on the web was graph algorithms (specifically PageRank): a) Retrieve the relevant documents through keyword / vector search. b) Rank them in the graph to get the "top ten blue links." That innovation created a trillion dollar company. Vector-only RAG is Altavista. 🔎 GraphRAG is Google. 🚀 We're entering the "Ten Blue Links" era of RAG. https://lnkd.in/des7MJdK
245
9 Comments
-
Zack Hendlin
Friday Feature Preview: Natural language summaries in Zing Data! Ask a question visually or with natural language and get a text summary (along with the chart) that answers your question. Even better, they update every time you edit a chart. Seamlessly jump between natural language and visual drag + drop and Zing's AI summary stays up to date. In private preview, DM me for access.
13
-
Matt Schalsey
People ask me why I love Seismic so much; it's honestly pretty simple. Streamlined, organized, and an engaging platform for all users! It's super hard to find a platform that does everything from incorporating direct analytics into internal and external viewership to tailoring each user's experience to their role! I am a #SeismicAdvocate! #enablement #onboarding #bootcamp #riseofenablement #salesenablement #revenueenablement
56
9 Comments
-
Jesse Anderson
“I have to say that you always have to keep learning. Unfortunately, I know it's comfortable to be able to do the job that you're doing all the time, but technology is different, it changes and it costs a lot of money and if you're in that domain where you find yourself and your skill set getting a bit older you can get pretty lost in your career pretty quickly so it's important to keep up.” Wise words from Hubert Dulay during our latest conversation from the Unapologetically Technical podcast. Indeed, constant learning is important no matter how comfortable we think we are in a certain job, and it’s through learning that we get to keep up with technology’s changes. Watch the full episode here to learn more: https://lnkd.in/dFE2rU4z
2
-
Adrian Vatchinsky
Here's the April roundup of new things landing at 4149! 🤖 Proactive Mode - let your AI teammate start planning and assigning their own tasks. No prompting required - just keep doing what you already are doing at work! 📫 Memos - Skimmable, short, and to the point updates that go out to the team highlighting what's coming up, what everyone's working on, and where you can help each other out! 🅿️ Blueprints - Customize how your AI teammate gets stuff done and even add on new capabilities. All with a simple blueprint outline. And much much much more! Follow along as we build 4149 and see the full update https://lnkd.in/e3VSYZxH #startups #ai #productivity #genai #buildinpublic #product
24
1 Comment
-
Brody Adreon
There is an obvious need for standardized metrics across the decentralized computing marketplace landscape, which is why Sami Kassab wrote a killer blog this week proposing the use of Floating Point Operations per Second (FLOPS) as that standardized metric. In yesterday’s Big Brain Breakdown Outpost Strategies, we expand on Sami’s original post and add some additional context of our own. Here are the key takeaways from yesterday's breakdown: > The decentralized computing marketplace sector lacks a standardized metric that measures a marketplace's total computing capacity, making it difficult for investors to accurately assess the value of these projects. > Sami proposes using FLOPS (Floating Point Operations per Second) as this standardized metric. FLOPS is widely recognized in high-performance computing and can provide a clear, hardware-agnostic measure of computational capacity. > FLOPS can accurately represent the computational capacity of diverse hardware, including GPUs, CPUs, and specialized accelerators like TPUs and FPGAs. This standard helps users assess the feasibility and cost of tasks like AI and ML workloads in real-time. > 6079’s Proof of Inference Protocol (PoIP) is one solution that can be used to verify and record FLOPS onchain, ensuring transparent and accurate reporting of computational resources through cryptographic proofs and game-theoretic mechanisms. > FLOPS should be complemented with additional metrics such as benchmark scores for AI/ML workloads, memory bandwidth, and energy efficiency to provide a complete picture of computational resources. This combination ensures robust and sustainable growth of decentralized compute marketplaces.
> The adoption of standardized metrics like FLOPS benefits users by providing clear insights into computational capabilities, aids investors in making informed decisions on the value of a project’s marketplace, and supports legitimate projects by highlighting their true value, thereby weeding out low-value, vaporware projects. Of course, we go into much more depth in the full breakdown. If you thought that was interesting, go ahead and give the full piece a look and do not forget to read Sami's original piece. He is an excellent writer! https://t.co/BtVuG82fcB https://t.co/ryQf2e8Zuu
10
-
Aaron Cannon
I'll say it. There's too much hype around AI in research. 🤯 Ok, maybe that's weird coming from me, but hear me out. Hype is a powerful force. It whips everyone up into a frenzy, creates a ridiculous amount of noise, and gets everyone using big industry jargon to impress each other. Sadly, that makes it incredibly difficult to cut through the noise. It obscures real value. People just want tools (often powered by AI) to do regular things faster, save time, or achieve more. They don't need 'paradigm shifts.' That's why I love Natasha Nair's incredibly succinct and PRACTICAL (no hype) explanation of why BOI (Board of Innovation) uses Outset. This clip is from BOI (Board of Innovation)'s Autonomous Innovation Summit a few weeks ago. She lays out, metrics and all, how they captured 15x the data with 1/7th of the hours. It's a no-BS picture of a practical use of AI. BOI (Board of Innovation) has been an incredible partner to work with over the last year. They are the experts at seeing both the promise and the practice of using new technologies. I'm excited for more to come. #autonomousinnovation #userresearch #marketresearch #airesearch #ai
41
9 Comments
-
Karthik Shashidhar
Do you think your company could use a much larger analytics team? Putting it another way, if your company had a much larger analytics team, what would you want to do with them? If you have some thoughts about this, please DM me. I'd love to chat with you! Babbage Insight Manu Bhardwaj
10
-
Ashley Sherry
Custom GPTs are killing a huge number of startups. And it's not hard to see why. (They don't have any proprietary tech.) If you've checked out ChatGPT recently, you might have noticed the new "Custom GPTs" panel on the left-hand side. It's like an app store for AI models, and it's spelling doom for many startups. -- As my co-founder Jacob Tucker recently pointed out, we're seeing a huge domino effect in the #AIStartup world. When ChatGPT first launched, these wrapper startups looked impressive. They were raising $2-3 million rounds, even though they had no technical moat. They were just low-hanging fruit, capitalizing on the hype around ChatGPT. Now that house of cards is crumbling. -- This is a huge win for deep tech companies that are actually innovating in the AI space. Companies like EmpathixAI, where we're building proprietary AI models from the ground up to solve specific problems, like conducting in-depth interviews at scale with #CultureChat. We're not just slapping a pretty interface on top of someone else's tech. We're doing the hard work of pushing the boundaries of what's possible with AI. -- So if you're an investor or a business looking to leverage AI, take this as a cautionary tale. Don't be dazzled by the shiny wrappers. Look for companies with real technical depth, who are building solutions that can't be easily replicated or replaced. Those are the companies that will survive and thrive in the long run. The rest? They'll be washed away by the next wave of AI innovation.
3
-
Amr Awadallah
In our latest blog by Product Manager Nick Ma and Vivek Sourabh, we compared Boomerang with the latest embeddings from OpenAI and Cohere. OpenAI released its latest models, text-embedding-3-small and text-embedding-3-large, in January 2024. Cohere released its latest models, embed v3 light and embed v3, in November 2023. Performance Highlights: 🔹English: Matches OpenAI's text-embedding-3-small and Cohere's embed v3 light, just slightly behind their larger models. 🔹Multilingual: Surpasses OpenAI and Cohere, delivering superior performance across languages. Why Choose Boomerang? 🔹Compact & Powerful: Ideal for production with exceptional efficiency. 🔹Proven Metrics: Excels in Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP). Discover how Boomerang can boost your projects. Sign up for a free account and connect with us on our forums or Discord for feedback and support. https://gag.gl/cpB7re?
19
3 Comments
Explore collaborative articles
We’re unlocking community knowledge in a new way. Experts add insights directly into each article, started with the help of AI.
Explore More