Ollama now in pgai: Use open-source LLMs in PostgreSQL 🦙

You asked and we delivered… PostgreSQL developers can now access Ollama models right inside the database, with pgai. ✨

pgai is an open-source PostgreSQL extension that brings AI models closer to your data. With Ollama support, developers can now easily get up and running with open-source embedding and generation models for their AI applications.

🔥 Here's why Ollama in pgai is a game-changer for PostgreSQL developers:

1️⃣ Embedding creation with open-source models
Developers can now create embeddings on data in PostgreSQL tables using popular open-source embedding models like BERT, Meta's Llama 3, and Nomic Embed, all with a simple SQL query. pgai stores embeddings in pgvector's vector data type, making it easy to perform search and RAG with pgvector and pgvectorscale once the embeddings are created.

2️⃣ RAG and LLM reasoning with open-source models
Developers can now perform RAG and LLM reasoning tasks on data in PostgreSQL tables, leveraging state-of-the-art open-source models like Meta's Llama 3, Mistral, Gemma, Qwen, Phi, and more. This unlocks common reasoning tasks like summarization, categorization, and data enrichment, all with a SQL query rather than an entire data pipeline.

pgai is open source under the PostgreSQL license and free for you to use on any PostgreSQL database for your AI projects.

To get started, see the pgai GitHub repo (⭐s appreciated): https://lnkd.in/gijeVw8T

Or try it on Timescale Cloud on any new database service. Learn more about using Ollama and pgai here: https://tsdb.co/ollama-e

#ollama #opensourceai #pgvector #postgresql
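To make this concrete, here is a minimal Node.js sketch of what calling pgai's Ollama functions from application code could look like, using the standard pg client. The function names ai.ollama_embed and ai.ollama_generate, the model names, and the connection setup are assumptions based on my reading of the pgai README; check the repo for the current SQL API.

```javascript
// Minimal sketch: calling pgai's Ollama functions over a regular PostgreSQL
// connection. Assumes pgai is installed in the database and an Ollama server
// is reachable from it (both are assumptions).
const { Client } = require('pg');

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Create an embedding with an open-source model served by Ollama.
  // The ai.ollama_embed name and signature are assumed from the pgai README.
  const embed = await client.query(
    "SELECT ai.ollama_embed('nomic-embed-text', $1) AS embedding",
    ['PostgreSQL is a powerful open-source database.']
  );
  console.log(embed.rows[0].embedding);

  // Ask an open-source LLM to reason over text, all in SQL.
  const gen = await client.query(
    "SELECT ai.ollama_generate('llama3', $1) AS answer",
    ['Summarize in one sentence: pgai brings AI models into PostgreSQL.']
  );
  console.log(gen.rows[0].answer);

  await client.end();
}

main().catch(console.error);
```

Because everything runs as plain SQL, the same calls would work from psql or any other PostgreSQL client.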
Timescale’s Post
More Relevant Posts
-
MERN Developer 👨💻 | Tailwind CSS | Data Structures and Algorithm | Java | JavaScript | C++ | Python | Git & GitHub 💻 | Member @IEEE CUSB
MongoDB's Aggregation Pipeline is a framework for transforming and manipulating data within the database. It offers a flexible and efficient way to perform various operations on documents, allowing users to shape and analyze data according to their specific needs. The pipeline consists of multiple stages, each representing a distinct operation applied sequentially to the input documents.

Why use the MongoDB aggregation pipeline?

Efficiency: Aggregation pipeline operations are executed within the database, reducing the need to transfer large amounts of data to external applications for processing.
Flexibility: It provides a wide range of operators and expressions, enabling users to perform complex transformations and analyses on their data.
Performance: Aggregation pipelines are optimized for performance, making them suitable for handling large datasets efficiently.

Key stages in the aggregation pipeline (see the sketch below for these stages chained together):

$match: Filters documents based on specified criteria.
$group: Groups documents by specified fields, allowing for the calculation of aggregated values.
$lookup: Performs a left outer join to another collection in the same database, pulling in documents from the "joined" collection for processing.
$project: Shapes the documents by including or excluding fields, creating computed fields, or renaming existing ones.
$sort: Sorts the documents based on specified fields and order.
$limit: Limits the number of documents passed to the next stage in the pipeline.

Aggregation pipeline limitations:

1) Result size restrictions
The aggregate command can either return a cursor or store the results in a collection. Each document in the result set is subject to the 16-megabyte BSON document size limit; if any single document exceeds it, the aggregation produces an error. The limit applies only to the returned documents; during pipeline processing, intermediate documents may exceed this size. The db.collection.aggregate() method returns a cursor by default.

2) Number-of-stages restrictions
Starting in MongoDB 5.0, a single pipeline is limited to 1,000 aggregation stages.

3) Memory restrictions
Starting in MongoDB 6.0, the allowDiskUseByDefault parameter controls whether pipeline stages that require more than 100 megabytes of memory write temporary files to disk by default.

MongoDB's aggregation pipeline stands as a testament to the power of in-database data processing. By leveraging the pipeline's capabilities and understanding its stages and operators, developers can unlock new dimensions of data analysis and manipulation.

Thanks to Hitesh Choudhary sir and the Chai aur Code YouTube channel for providing all of this knowledge for free on YouTube and for making these concepts easy and fun to learn.

MongoDB documentation for the aggregation pipeline: https://lnkd.in/gXMHhJdD

#backenddevelopment #chaiaurcode #mongodb #webdevelopment
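As a rough illustration (not from the post itself), here is a minimal Node.js driver sketch that chains the stages above; the orders and customers collections and their fields are hypothetical:

```javascript
// A minimal sketch for the MongoDB Node.js driver chaining the stages above.
// The "orders"/"customers" collections and their fields are hypothetical.
const { MongoClient } = require('mongodb');

async function topSpenders() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const orders = client.db('shop').collection('orders');

  const results = await orders.aggregate([
    { $match: { status: 'completed' } },                  // filter documents
    { $group: { _id: '$customerId',                       // group by customer
                total: { $sum: '$amount' } } },           // compute an aggregate
    { $lookup: { from: 'customers', localField: '_id',    // join customer details
                 foreignField: '_id', as: 'customer' } },
    { $project: { _id: 0, customer: 1, total: 1 } },      // shape the output
    { $sort: { total: -1 } },                             // highest spenders first
    { $limit: 10 },                                       // cap the result set
  ]).toArray();

  await client.close();
  return results;
}
```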
-
MERN Stack Developer | Undergraduate @SMI University | Ex-Intern @Interns Pakistan | JavaScript | React.js | Node.js | Express.js | MongoDB | Redux | Firebase | Bootstrap | Freelancer | @SMIT Student
Exploring the world of MongoDB aggregation pipelines has been a transformative journey, guided by the insightful expertise of Hitesh Choudhary sir. This powerful framework equips you with the tools to efficiently process and transform data stored across diverse collections, making it an indispensable asset for complex data manipulations.

Imagine this scenario: you have two collections, "users" and "subscriptions." The "users" collection houses user details, while "subscriptions" tracks connections between users. By harnessing the aggregation pipeline, you can seamlessly retrieve and process data from these collections, enabling advanced data operations.

The workflow of a MongoDB aggregation pipeline unfolds as follows (a code sketch follows this list):

Match stage: Filter relevant user data by specific criteria, such as usernames.

Lookup stages:
Subscribers: Find all documents in "subscriptions" where the user is the channel owner, which gives the subscriber count.
SubscribedTo: Find all documents in "subscriptions" where the user is a subscriber, which gives the channels they are subscribed to.

AddFields stage: Introduce calculated fields like subscriber counts and subscribed-channel tallies using operators such as $size and $cond.

Project stage: Tailor the final output by selecting essential fields, focusing on pertinent data for analysis and presentation.

In summary, MongoDB aggregation pipelines offer a robust and flexible approach to data manipulation, delivering substantial benefits in data analysis and application performance.

Looking forward to implementing these insights in upcoming projects and discovering how they can optimize data analysis and elevate application performance! Exciting times ahead! 💻✨

#MongoDB #DataOptimization #TechAdvancements #ContinuousLearning #chaiaurcode #javascript
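Here is a sketch of that workflow as a pipeline for the MongoDB Node.js driver. The "users"/"subscriptions" collections follow the post's example; all field names (channel, subscriber, and so on) are assumptions:

```javascript
// Sketch of the channel-profile pipeline described above, written for the
// MongoDB Node.js driver. Field names are assumptions.
function channelProfilePipeline(username, viewerId) {
  return [
    // Match stage: select the user whose profile is requested
    { $match: { username } },
    // Lookup: subscriptions where this user is the channel (their subscribers)
    {
      $lookup: {
        from: 'subscriptions',
        localField: '_id',
        foreignField: 'channel',
        as: 'subscribers',
      },
    },
    // Lookup: subscriptions where this user is the subscriber (channels followed)
    {
      $lookup: {
        from: 'subscriptions',
        localField: '_id',
        foreignField: 'subscriber',
        as: 'subscribedTo',
      },
    },
    // AddFields: derived counts via $size, a viewer flag via $cond
    {
      $addFields: {
        subscriberCount: { $size: '$subscribers' },
        channelsSubscribedToCount: { $size: '$subscribedTo' },
        isSubscribed: {
          $cond: {
            if: { $in: [viewerId, '$subscribers.subscriber'] },
            then: true,
            else: false,
          },
        },
      },
    },
    // Project: keep only the fields the response needs
    {
      $project: {
        username: 1,
        subscriberCount: 1,
        channelsSubscribedToCount: 1,
        isSubscribed: 1,
      },
    },
  ];
}
```

You would run it with something like db.collection('users').aggregate(channelProfilePipeline('someUser', viewerId)).toArray().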
-
Yesterday we discussed indexing in SQL, its types, and its benefits and drawbacks. Today let's discuss the various algorithms we can use while indexing. We'll consider algorithms based on PostgreSQL, as it is an open-source RDBMS that is intuitive and easy to use.

Algorithms you can use while indexing:
1) B-Tree
2) Hash
3) GIN
4) GiST

Before explaining these algorithms in terms of indexing, I must be clear that each algorithm is optimal in certain scenarios, and not every scenario is the same, so different algorithms can perform differently depending on the problem they are trying to solve.

Algorithm: a finite sequence of steps taken in order to solve a problem.

1) B-Tree: Best when we want to implement comparisons, including >, <, <=, >=, =, BETWEEN, IN, IS NULL, and IS NOT NULL.

2) Hash: Best when we are only dealing with the equality (=) operator.

3) GIN: Stands for "Generalized Inverted Index"; best when a single field holds multiple values, such as an array.

4) GiST: Stands for "Generalized Search Tree"; useful for indexing geometric data or full-text search.

It is important to understand that if you use an algorithm that is not suited to the task, queries can take a long time to execute. That slows down query performance, becomes a downside of using an index, and defeats the whole purpose of indexing in the first place.

Syntax to include an algorithm in an index (a short code sketch follows):
CREATE INDEX <index name> ON <table> USING <algorithm name> (column1, ...)

Example:
CREATE INDEX codes ON city USING hash (countrycode);

By default, PostgreSQL uses B-Tree for indexing.

I will talk about another topic tomorrow. Till then, have a nice day.
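For illustration, here is a short Node.js sketch (using the pg client, as in the pgai example above) that creates one index of each kind. All tables and columns are hypothetical, and the GiST example assumes a tsvector column for full-text search:

```javascript
// Hedged sketch: creating one index of each kind from Node.js with the pg
// client. All tables and columns are hypothetical.
const { Client } = require('pg');

async function createIndexes() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // B-Tree (the default): range comparisons on scalar columns
  await client.query('CREATE INDEX idx_orders_date ON orders (order_date)');

  // Hash: only speeds up equality (=) lookups
  await client.query('CREATE INDEX idx_city_code ON city USING hash (countrycode)');

  // GIN: fields holding multiple values, such as arrays or jsonb
  await client.query('CREATE INDEX idx_posts_tags ON posts USING gin (tags)');

  // GiST: geometric data or full-text search (assumes tsv is a tsvector column)
  await client.query('CREATE INDEX idx_docs_fts ON docs USING gist (tsv)');

  await client.end();
}
```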
-
Hey Devs and Geeks, have you ever found yourself in an unexplainable situation in the MongoDB/Mongoose world where you're using `await model.save()` to update data within documents, only to discover that not all your modifications seem to take effect? If you're a curious backend developer like me, you're probably going to encounter this puzzling scenario.

In a recent project, I delved into a fascinating labyrinth of data modeling. Imagine a MongoDB collection where documents have arrays containing nested objects, with each object potentially holding various data types. It's a complex puzzle that can leave you scratching your head.

After an hour of head-scratching, I stumbled upon a captivating solution that I'm excited to share with you. It involves an underrated but incredibly powerful feature in Mongoose (a JavaScript library designed to simplify MongoDB interactions). The feature is `markModified()`, and here's how it works: Mongoose cannot detect in-place changes to mixed-type or deeply nested paths, so `save()` silently skips them. By calling `markModified()` you give Mongoose a little nudge, like saying, "Hey, I changed these boxes! Pay attention!" It ensures that your updates actually happen (a code sketch follows the article link below).

Imagine the relief of being able to confidently explain to your boss why some things weren't updating as expected, without blaming the database or resorting to the classic excuse, "It works on my machine!" x'D

If you want to dive deeper into this function, I found this article helpful: https://lnkd.in/e_vpY_gn

I'm Bhavyashu, a developer on a quest for knowledge. Stay tuned for more interesting situations and their solutions as I share my coding journey. 🚀📚

#BackendEngineer #Development #Learning #mongodb #mongoose #backenddeveloper #nodejs #nodejsdevelopment

Your insights into similar or mysterious problems, in the same or other tech, and how you solved them are welcome in the comment section.
Understanding markModified() in Mongoose
sarav.co
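A minimal sketch of the fix described above; the schema and field names are hypothetical, the pattern is the point:

```javascript
// Minimal sketch of the markModified() fix for mixed-type nested data.
const mongoose = require('mongoose');

const orderSchema = new mongoose.Schema({
  // Mixed-type nested data: Mongoose cannot track in-place changes inside it
  items: [{ type: mongoose.Schema.Types.Mixed }],
});
const Order = mongoose.model('Order', orderSchema);

async function shipFirstItem(orderId) {
  const order = await Order.findById(orderId);

  // Mutating a nested object in place: Mongoose does not detect this change
  order.items[0].status = 'shipped';

  // The nudge: tell Mongoose the path changed so save() persists it
  order.markModified('items');
  await order.save();
}
```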
-
With pgvector, you can use your Instaclustr for #PostgreSQL cluster for advanced machine learning tasks like vector similarity search! https://ntap.com/3tDDTY0
Instaclustr for PostgreSQL® Releases Support for pgvector
instaclustr.com
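To give a flavor of what that enables, here is a hedged Node.js sketch of a pgvector similarity search; the documents table, its columns, and the embedding format are hypothetical:

```javascript
// Hedged sketch: a pgvector nearest-neighbor query from Node.js.
const { Client } = require('pg');

async function nearestDocs(queryEmbedding) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // <-> is pgvector's distance operator, so the closest rows come first
  const { rows } = await client.query(
    'SELECT id, content FROM documents ORDER BY embedding <-> $1 LIMIT 5',
    [JSON.stringify(queryEmbedding)] // e.g. [0.1, 0.2, 0.3] as a vector literal
  );

  await client.end();
  return rows;
}
```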
-
Programming Languages: Python, JavaScript, Java, HTML, CSS Frameworks/Libraries: React.js, Node.js Databases: MongoDB, general database management Tools: Docker, Git
TypeORM skips entity properties when it parses DB raw data

I've got a NestJS project with MongoDB and TypeORM. There is a method that loads all records from a Mongo document (there are <10 records). I do it, roughly, like this:

    @Injectable()
    export class MyService {
      public constructor(
        @InjectRepository(MyEntity)
        private readonly myEntity: Repository<MyEntity>
      ) {}

      public loadAllRecords() {
        return this.myEntity.find();
      }
    }

Simplified entity:

    @Entity({ name: 'my_entity' })
    export class MyEntity {
      @PrimaryColumn({ type: 'varchar', length: 50, name: '_id' })
      @ObjectIdColumn({ type: 'varchar', length: 50, name: '_id' })
      public alias: string;

      @Column({
        type: 'varchar',
        length: 100,
        unique: true,
        nullable: false,
        name: 'default_name',
      })
      public defaultName: string;
    }

In other places similar code works perfectly. In this particular place TypeORM loads the correct raw data (I've logged and checked it), but then it puts only the alias in the created entity instance. Note that it doesn't set the default_name property; it doesn't set any properties at all aside from the alias. So the metadata works, since _id is correctly parsed into alias, and the data extraction works, there mu…
TypeORM skips entity properties when it parses DB raw data
stackoverflow.com
-
Timescale just launched two new open-source extensions for PostgreSQL: pgai and pgvectorscale! These extensions are tailored to complement pgvector, making PostgreSQL the go-to database for AI development and eliminating the need for standalone vector databases. Develop your GenAI applications seamlessly while leveraging the advantages of Postgres. This article by Avthar Sewrathan, Matvey Arye, and John Pruitt has more details: https://lnkd.in/dV-iNR6w #Timescale #PostgreSQL #vector #RAG #GenAI
Making PostgreSQL a Better AI Database
timescale.com
-
Updating a creaky old app to use a modern database is more than just migrating the data: the application includes a bunch of code and queries that are likely coupled to the old database. Fortunately, MongoDB's Relational Migrator is here to help, with our new SQL-to-MQL query converter, powered by magic AI pixie dust ✨. Find out more below! https://lnkd.in/g7e2S9jZ
AI-powered SQL Query Converter Tool is Now Available in Relational Migrator | MongoDB Blog
mongodb.com
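To give a flavor of the kind of SQL-to-MQL translation such a converter performs, here is a hand-written illustration (not output from the tool); the orders collection and its fields are hypothetical:

```javascript
// Illustration only: hand-written, not actual converter output.
// SQL:  SELECT name, total FROM orders WHERE total > 100 ORDER BY total DESC;
async function highValueOrders(db) {
  return db.collection('orders')
    .find({ total: { $gt: 100 } })           // WHERE total > 100
    .project({ name: 1, total: 1, _id: 0 })  // SELECT name, total
    .sort({ total: -1 })                     // ORDER BY total DESC
    .toArray();
}
```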
-
CEO & Founder at RavenDB - NoSQL Distributed Database that's Fully Transactional (ACID) | Author of "Inside RavenDB 4.0" and "DSLs in Boo" | Blogger at ayende.com | Avid fantasy novels reader
Intriguing article by Matt Asay in InfoWorld! The idea of prioritizing the database in the tech stack aligns perfectly with my philosophy. We've often been conditioned to chase fads, the latest language or framework, relegating data to an afterthought. But as the article highlights, data is the soul of modern applications.

NoSQL databases, in particular, offer exciting possibilities for the data-first approach. Their flexible schemas and focus on specific data models can streamline development for many modern applications.

The data-first approach is the key takeaway, not a specific solution provider. Whether it's using a lightweight option for learning fundamentals, exploring NoSQL for specific use cases, or leveraging abstraction tools, this approach encourages a thoughtful selection of the right data technology for the job.

More here: https://lnkd.in/dG2RN742

#NoSQL #DeveloperExperience #Database
Why developers should put the database first
infoworld.com
Simplifying Digital Technology for businesses | Software Architect | Agile Evangelist
This is going to be a game changer. Now web app developers will get the power to deploy and use AI in web applications.