Toronto, Ontario, Canada
Contact Info
996 followers
500+ connections
About
Experience & Education
-
RBC
****** ******** *** ****, ********** ** ********** ****
-
**
********** ********** | **** ***********/******* ****
-
*** ******
****** ********** ******* | *** **** ****
-
********** ** *******
****** ** ********** (**.*.) ******** *******, **** & **
-
********** ** *******
******’* ****** ******** ***********, **** **********
Explore more posts
-
Anthony Bartolo
Discover the Phi-3 mini models, AI creations with immense potential. The short-context model, Phi-3-mini-4k-instruct-onnx, handles prompts up to 4k tokens, while the long-context version handles much lengthier inputs and outputs. Lee Stott shares how to leverage these models for text generation using NLP techniques. From setting up Python to using the generate() API, Lee provides step-by-step instructions and code examples. Let's unleash the capabilities of the Phi-3 mini models! https://lnkd.in/gp8c_2T7 #phi3 #onnx #msftadvocate
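Lee's walkthrough centers on the `generate()` API from onnxruntime-genai; the real API wraps model loading and tokenization behind Model/Tokenizer/Generator objects, so as a hedged, self-contained sketch (the `toy_model` scorer below is entirely made up), here is the shape of a token-by-token greedy generate loop:

```python
# Toy sketch of a generate() loop: a stand-in "model" scores next tokens,
# and we decode greedily until an end-of-sequence token appears.
def toy_model(tokens):
    # Hypothetical next-token scores: always prefer token (last + 1).
    last = tokens[-1]
    vocab = 6
    scores = [0.0] * vocab
    scores[min(last + 1, vocab - 1)] = 1.0
    return scores

def generate(prompt_tokens, max_new_tokens=10, eos=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = toy_model(tokens)
        next_tok = max(range(len(scores)), key=scores.__getitem__)  # greedy argmax
        tokens.append(next_tok)
        if next_tok == eos:
            break
    return tokens

print(generate([0]))  # [0, 1, 2, 3, 4, 5]
```

A real call swaps `toy_model` for the ONNX session's logits and decodes token IDs back to text with the tokenizer.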
33
-
Angel Almada
Transcendence Series: The Founders and Leaders of Deep Learning Yoshua Bengio, Geoffrey Hinton, and Yann LeCun are three Canadian computer scientists who are widely regarded as the founders and leaders of deep learning, a branch of artificial intelligence that uses neural networks to learn from large amounts of data and perform complex tasks such as image recognition, natural language processing, speech synthesis, and self-driving cars. They have also received numerous awards and recognitions for their work, including the Turing Award, the highest honor in computer science, in 2018. Yoshua Bengio is a professor at the University of Montreal and the founder and scientific director of Mila, the Quebec Artificial Intelligence Institute, the largest academic research center in the world dedicated to deep learning. He is also a co-founder and senior fellow of the Vector Institute, a research institute for artificial intelligence based in Toronto. He has published over 500 papers and several books on machine learning, deep learning, neural networks, and related topics, such as Learning Deep Architectures for AI, Deep Learning, and Neural Networks and Learning Machines. He is also an entrepreneur and a co-founder of several companies that apply deep learning to various domains, such as Element AI, Imagia, and Lyrebird. Geoffrey Hinton is a professor emeritus at the University of Toronto and was vice president and engineering fellow at Google; he departed in 2023, citing concerns about the use of AI technology. He is also the chief scientific adviser of the Vector Institute and a founding director of the Gatsby Computational Neuroscience Unit at University College London. He has published over 200 papers and several books on machine learning, neural networks, cognitive science, and artificial intelligence, such as Parallel Distributed Processing, Connectionist Models of Cognition and Perception, and The Handbook of Brain Theory and Neural Networks.
He is also an inventor and a co-inventor of several algorithms and techniques that are widely used in deep learning, such as backpropagation, Boltzmann machines, contrastive divergence, and dropout. Yann LeCun is a professor at New York University and the chief AI scientist at Facebook. He is also the founding director of the NYU Center for Data Science and the co-director of the NYU Center for Neural Science. He has published over 300 papers and several books on machine learning, computer vision, neural networks, and artificial intelligence, such as Convolutional Networks and Applications in Vision, Deep Learning, and Theoretical Computer Science. He is also an innovator and a pioneer of several breakthroughs in deep learning, such as convolutional neural networks, the LeNet architecture, and the MNIST database. Note: This is the twentieth post in the series. The goal is to highlight significant figures who have made an impact on technology and its application to human endeavors.
4
-
Randal B.
New Post: A statistical analysis of why people hate post-grunge band Nickelback - https://lnkd.in/gwzmiGE7 - Daniel Parris noticed that Canadian post-grunge rock band Nickelback not only faces "considerable hostility" but that the dislike has eminent statistical characteristics that might explain its depth and persistence despite commercial success. The analysis hits on several hypotheses—their songs are overplayed relative to their success, internally repetitive despite it, and more successful with conservative consumers—but it comes down to the band itself suffering extremely peculiar PR incidents that turned it into a meme beyond the music. — Read the rest The post A statistical analysis of why people hate post-grunge band Nickelback appeared first on Boing Boing. - #news #business #world -------------------------------------------------- Download: Stupid Simple CMS - https://lnkd.in/g4y9XFgR -------------------------------------------------- or download at SourceForge - https://lnkd.in/gNqB7dnp
-
Polly Mitchell-Guthrie
I'm excited that this week at the CAIAC - Canadian Artificial Intelligence Association conference George Wang will be presenting a paper, Adaptive Learning Rates for Gradient Boosting Trees, based on work he did as an intern at Kinaxis last fall, in collaboration with Christopher Wang, Yunfei O., Behrouz Soleimani, PhD. There are 3 reasons this is both impressive and cool: 1. They combine heuristics, machine learning, and optimization to speed up the training for an important algorithm, Gradient Boosted Machines, widely used in both academia and industry. Faster training improves model performance. 2. This paper is just one of the many sparks of innovation in #artificialintelligence coming out of Kinaxis, where > 50% of our patents filed in the last several years have been in this space, because we continue to push the boundaries by inventing new solutions like this one to solve problems our customers face. We've been delivering AI to the market since 2018 and were the first to provide solutions powered by AI to address both demand and supply. We are rooted in Canada, which brought the world out of the last AI winter in 2012 when a team from the University of Toronto won the famous ImageNet competition by blowing the lid off previous results. Their submission was layers of neural networks called deep learning, which later became the foundation for the #generativeAI revolution we are seeing today. 3. We have an outstanding intern program in #machinelearning at Kinaxis, where interns work on real problems, guided by senior staff. As a testament to letting the students shine, when their paper was accepted they decided to have George present it. So check out the paper if you're at the conference, but if you miss the event you can't miss the fact that we are actively speaking and sparking on AI.
In fact, just last week Chantal Bisson-Krol spoke in Edmonton at the #upperbound conference on AI-Powered Strategies for Resilient Supply Chains and I spoke at the Reuters Supply Chain USA event on Embedding AI in supply chain to deliver end-to-end value.
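The paper itself isn't reproduced here, but the role the learning rate plays in gradient boosting, the quantity an adaptive scheme would tune per round, can be sketched in plain Python (squared loss, brute-force depth-1 "stumps"; the fixed `lr` below is what the adaptive method varies):

```python
# Minimal gradient boosting on 1-D data: fit a depth-1 stump to the
# current residuals each round, then add its prediction scaled by lr.
def fit_stump(xs, rs):
    # Brute-force the best threshold split of residuals rs over inputs xs.
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = sum((r - (lmean if x <= t else rmean)) ** 2 for x, r in zip(xs, rs))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    return best[1], best[2], best[3]

def boost(xs, ys, rounds=100, lr=0.3):
    pred = [0.0] * len(ys)
    for _ in range(rounds):
        rs = [y - p for y, p in zip(ys, pred)]       # pseudo-residuals
        t, lmean, rmean = fit_stump(xs, rs)
        pred = [p + lr * (lmean if x <= t else rmean) for p, x in zip(pred, xs)]
    return pred

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]   # a step function
pred = boost(xs, ys)
```

A smaller `lr` needs more rounds but regularizes; speeding up training by adapting it per round is the kind of trade-off the paper addresses.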
43
2 Comments -
Anthony Bartolo
Level up your Generative AI development with Microsoft’s AI Toolkit! Shreyan J D Fernandes shares how AI Toolkit empowers you to run LLMs/SLMs locally. https://lnkd.in/gcGVZR6z AI toolkit enables you to: 💡Run pre-optimized AI models locally on various setups, including Windows 11 with DirectML acceleration or direct CPU, Linux with NVIDIA GPUs, or CPU-only environments. 💡Test and integrate models seamlessly using a user-friendly playground or a REST API for direct application incorporation. 💡Fine-tune models like popular SLMs Phi-3 and Mistral locally or in the cloud, enhancing performance, tailoring responses, and controlling style. 💡Deploy AI-powered features by choosing between cloud deployment or embedding them within your device applications. #AI #Microsoft #AIToolkit #msftadvocate
38
-
Dominic Baillie
Month of AI - Day 6 After some of the complex math of the previous days' reading, you will be pleased to hear that today we go to the University of Toronto for a paper titled "Keeping Neural Networks Simple". Yes, I know, it has a lot of math too, but that, I'm afraid, is the nature of the beast. Geoffrey Hinton and Drew van Camp propose a method to improve the generalization of supervised neural networks by minimizing the description length of the weights. This involves adding Gaussian noise to the weights and adapting the noise level during learning to balance the expected squared error and the information content in the weights. The technique focuses on efficiently computing derivatives of the expected squared error and weight information, leading to effective regularization and preventing overfitting. Keeping NN Simple by Minimizing the Description Length of the Weights (toronto.edu)
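As a rough, hedged illustration of the core trick (not the paper's actual algorithm, which also adapts the noise level and penalizes the information content of the weights), here is noisy-weight training on a one-parameter linear model:

```python
import random

# Sketch: sample Gaussian noise onto the weight at every step. Noisy
# weights can only carry limited information, which acts as an MDL-style
# regularizer; the paper additionally learns the noise level itself.
random.seed(0)
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]              # underlying rule: y = 2x
w, eta, sigma = 0.0, 0.05, 0.1    # weight, step size, noise level

for _ in range(2000):
    w_noisy = w + random.gauss(0.0, sigma)
    # Mean-squared-error gradient, evaluated at the noisy weight.
    grad = sum(2.0 * (w_noisy * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= eta * grad

print(round(w, 2))
```

With this seed and step count, `w` settles close to the true slope of 2 despite the injected noise; the noise term is what keeps the weight from being specified with arbitrary precision.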
2
-
Elena Yunusov
Good news 🇨🇦 Tech friends: AI Tinkerers Toronto Meetup is now live! Show don't tell. Join us to share your killer AI demo with fellow AI builders and researchers, on May 30th at Shopify HQ. No pitches! We want to see your messy WIP! Let's build 🇨🇦 / acc. Apply here: https://lnkd.in/g-C73dwm 💡 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗔𝗜 𝗧𝗶𝗻𝗸𝗲𝗿𝗲𝗿𝘀? We are not just “AI enthusiasts”, we are AI tinkerers. AI Tinkerers is a meetup built around live demos, for tech entrepreneurs, hackers, and those who are deeply passionate about creating LLM-enabled applications and have hands-on experience building such systems. For those who remember... yes, this is a lot like DemoCamp :) 🎤 𝗗𝗲𝗺𝗼𝘀 We are actively looking for presenters! A presentation/demo is limited to 5-6 minutes in length and demonstrates a work in progress. We’ll select 4 or 5 of the most interesting topics to present. Note: as capacity at the event is limited, we’ll be prioritizing those who can demo something interesting. 🚀 Pls RSVP and spread the word! We've needed an event built around actual demos. This is it. Sponsored by Shopify and Shopify Ventures; brought to you by the Human Feedback Foundation AIT city organizers. Let's go 🚀
58
6 Comments -
Jan Schulte
Recently I interviewed Nathaniel Simard, founder & CEO of Tracel Technologies and inventor of the Burn ML framework: Jan: Nathaniel, we're excited to have you as our guest today! To kick off our conversation, could you please introduce yourself and share a bit about your background and what got you into ML? Nathaniel: Sure, I'm the creator of Burn, a deep learning framework written in Rust, and the founder of Tracel AI. I started coding in the first year of university, where I was studying mechanical engineering, but I quickly switched to software engineering after, since I instantly fell in love with programming. Then I explored different facets of the field from backend to frontend development, and I decided to start my career as a consultant focused on software quality. After some time, I wanted to go deeper into AI, since I was always interested in the process of learning, so I enrolled in a master's degree at Mila. J: According to GitHub, you started working on Burn in summer 2022. Since then it has earned over 7,000 stars. What was your initial motivation to develop a new ML framework? N: I always had a side project going on, for fun mostly and to learn new things. I wanted to explore asynchronous and sparse neural network architectures, where each sub-network can learn and interact with other sub-networks asynchronously and independently. I wasn't able to actually create something useful because I needed fine control over the gradients and the concurrency primitives, which is not easily done with Python and PyTorch. At the same time, I was working on machine translation models at my job, and it was quite painful to put models into production. I decided to switch my side project to a new deep learning framework, with more flexibility regarding gradients and concurrency primitives as well as being more reliable and easier to deploy on any system. J: Amazing to see that this was born out of a side project!
I feel the struggle with concurrency in Python, and this is somewhere Rust really shines. What are some of the other key features that make Burn special? N: I think there are two things that really set Burn apart. First, almost all neural network structures are generic over the backend. The goal is that you can ship your model with almost no dependencies, and anybody can run it on their hardware with the most appropriate backend, even embedded devices without an operating system. Second, Burn really tries to push the boundaries of what is possible in terms of performance and flexibility. It offers a fully eager API, but also operation fusion and other optimizations that are normally only found in static graph frameworks. The objective is that you don't have to choose between portability, flexibility, and performance; you can have it all! Read the rest of the interview, covering Nathaniel's plans for Burn and trends in AI, here: https://lnkd.in/dKv9ztMm #ml #ai
2
-
Faraz Thambi
The Toronto Machine Learning Society (TMLS) will host Dr. Yizhi Yin from Neo4j for an immersive workshop on Enabling GenAI Breakthroughs with Knowledge Graphs. 🌟 We'll explore the integration of Large Language Models (LLMs) with knowledge graphs using Neo4j, a leader in graph databases and graph analytics, through a Retrieval Augmented Generation (RAG) approach. 🛠 Workshop Highlights: ✅ How to overcome LLM limitations with Retrieval Augmented Generation (RAG) ✅ How to build a personalized messenger app with RAG patterns for product recommendations ✅ Analyzing Vector Search Results with Graph Patterns ✅ Enriching Search Results Using Graph Data Science Methods You can see the full abstract here: https://lnkd.in/gwSRjPjz #GenAI #KnowledgeGraphs #tmls #neo4j
16
-
Shoeb Hosain
FINAL CALL: Discovering AI Use Cases in Retail - with the DataSphere Lab at McGill University! Professor Reflections: Building our Quebec AI ecosystem with Small-Medium-Enterprises In our work with many organizations, some feel overwhelmed by the pace of technological change, especially in the domain of data science and AI. Others have a rough sketch of a plan but are constrained by resources and/or the long priority list they have to get through each day. With a large portion of the Canadian economy vested in SME businesses, the DataSphere wants to help organizations develop ideas that take advantage of the scale and reach AI can offer. Not only that, but we want to invest our time with the ecosystem to make use case development a reality. Build with us! Wed May 8, 2024 from 3-5pm on campus at McGill. Register here: https://lnkd.in/guNcAt83
25
1 Comment -
Ganesh Jagadeesan
Wow, this sounds like a game-changer for evaluating generative AI models! 🌟 Incorporating community votes into the assessment process through GenAI Arena brings a fresh perspective and ensures diverse insights into model performance. It's exciting to see innovations that prioritize transparency and comprehensive evaluation across various domains. Looking forward to exploring more advancements in AI at DataHack Summit 2024! #GenerativeAI #CommunityDriven #Innovation #DataScience 🚀
-
Raphaël MANSUY
Extending Llama-3's Context Ten-Fold Overnight: A Breakthrough in AI Language Models ... Researchers from the Beijing Academy of Artificial Intelligence and Renmin University of China 🇨🇳 have achieved a significant milestone in the development of large language models (LLMs). Their recent paper, "Extending Llama-3's Context Ten-Fold Overnight," showcases an innovative approach to expanding the context length of the Llama-3-8B-Instruct model from 8K to an impressive 80K tokens. 👉 The Importance of Context Length In natural language processing, context length refers to the amount of text a model can consider when generating a response. A longer context allows the model to understand and process more extensive information, such as entire books or detailed documents. This is particularly crucial for tasks that require a deep understanding of complex topics or long-form content. 👉 Efficient Training with QLoRA The team employed a novel fine-tuning method called QLoRA, which enabled them to efficiently train the model using only 3.5K synthetic training samples generated by GPT-4. This approach significantly reduced the computational resources and time required compared to traditional methods. Here's a simplified overview of the training process: 1. Generate 3.5K long-context training samples using GPT-4 2. Organize question-answer pairs for each context into multi-turn conversations 3. Fine-tune the LLM using QLoRA with a LoRA rank of 32 and alpha of 16 4. Train the model for 1 epoch on an 8xA800 (80G) machine 👉 Preserving Short-Context Performance Despite the focus on extending context length, the researchers ensured that the model maintained its proficiency in handling shorter contexts. This versatility is essential for practical applications, as it allows the model to adapt to various tasks and requirements seamlessly. 👉 Open Source Contribution The research team plans to make the entire project open source, including the model, data, and training code.
This decision will undoubtedly benefit the AI research community, fostering collaboration, innovation, and the development of new applications based on this work. 👉 Potential Impact and Applications The enhanced Llama-3-8B-Instruct model has the potential to revolutionize various industries and domains that rely on natural language processing. Some potential applications include: - Detailed content analysis and summarization - Complex question answering systems - Improved chatbots and virtual assistants - Enhanced language translation and interpretation As AI continues to evolve, advancements like this will pave the way for more sophisticated and efficient language models that can tackle increasingly complex tasks.
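A hedged back-of-the-envelope sketch of the LoRA idea underlying QLoRA (the `d = 4096` hidden size is an assumption for illustration; the rank and alpha match the post): instead of updating a full d x d weight matrix W, LoRA trains two small factors A (d x r) and B (r x d) and adds their scaled product, W_eff = W + (alpha / r) * A @ B.

```python
# Parameter count for full fine-tuning of one d x d matrix versus a
# rank-r LoRA update with the paper's settings (r = 32, alpha = 16).
d, r, alpha = 4096, 32, 16

full_params = d * d            # parameters updated by full fine-tuning
lora_params = d * r + r * d    # parameters in the A and B factors

print(full_params, lora_params, full_params // lora_params)
# 16777216 262144 64
```

So each adapted matrix trains 64x fewer parameters, which (combined with 4-bit quantization of the frozen base weights) is what makes an overnight 8-GPU run feasible.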
71
2 Comments -
Judah Diament
In the following two articles, taken together, I saw striking parallels between the path the AI boom is following and the dot-com bubble from over 20 years ago. Every company today is calling itself an AI company (back then every company called itself a dot-com; I still recall sitting in Bell Atlantic and calling a friend at IBM HQ whose voicemail said “you’ve reached [name removed] at IBM-dot-com”!!) The second parallel is that investors see the real short-to-medium-term financial opportunity in the infrastructure (remember how people looked at UPS's stock during the dot-com bubble?). And the third: as with all bubbles, frenzy and hysteria about the power of an innovation and its long-term implications. There will likely be some long-term value created here, but we’re a long way off from all the exaggerations and overhype being rooted out. We’ve seen this movie before… AI Is Driving ‘the Next Industrial Revolution.’ Wall Street Is Cashing In. Old-school stocks in the utilities, energy and materials sectors are outpacing the wider market - https://lnkd.in/erYAKvPN Tech Workers Retool for Artificial-Intelligence Boom: Generative-AI frenzy leads to unbalanced labor market in technology sector - https://lnkd.in/eaqbeBQs #AI #CS
5
-
Debanjan Saha
🚀 Advanced AI-RAG Chatbot using Langchain for the Landlord and Tenant Board of Ontario is now LIVE! 🚀 👨‍⚖️ Navigating legal waters in landlord-tenant disputes just got easier. I am delighted to present my project on an advanced AI-powered chatbot designed to democratize legal advice for Ontarians. Leveraging advanced technologies such as NLP, ML, and RAG, the chatbot delivers real-time, accurate legal assistance - making it a game-changer in legal tech! 📈 With FAISS-powered retrieval and sophisticated reranking algorithms, this platform is not just fast - it’s razor-sharp in delivering contextually relevant advice. 🔗 Dive into my project presentation for the details: https://lnkd.in/dU32f7ze 💡 This isn't just a project; it's a project with a purpose. My goal? To ensure that everyone has access to the legal information they need, when they need it, without any barriers. ✨ A special shoutout to the incredible team: Atharva Pandkar and Tarun Reddy, who worked with me from time to time, and our professor, Dr. Uzair Ahmad, without whose guidance this would not have been possible. We're just getting started on our mission to transform access to legal services. 👇 Check out my project, join the mission, and let's drive legal tech forward together! GitHub Repo: https://lnkd.in/dBDpjFp8 #LegalTech #Innovation #AI #LegalAidChatbot #AccessToJustice #OntarioLaw #TechForGood #ArtificialIntelligence #Disruption
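As a hedged sketch of the retrieve-then-rerank pattern such a chatbot relies on (FAISS and the real embedding model are replaced here by brute-force bag-of-words cosine similarity, and the toy documents are invented for illustration):

```python
# Retrieve-then-rerank in miniature: embed documents and query, rank by
# cosine similarity, keep the top k. A real system embeds with a neural
# encoder, indexes with FAISS, and re-scores the top k with a reranker.
docs = [
    "A landlord must give 24 hours notice before entering a unit",
    "Tenants can dispute a rent increase at the board",
    "Pets are generally allowed unless a condo bylaw says otherwise",
]

def embed(text):
    # Toy bag-of-words embedding over the document vocabulary.
    vocab = sorted(set(w for d in docs for w in d.lower().split()))
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, k=2):
    qv = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:k]  # a reranker would now re-score these k candidates

hits = retrieve("can my landlord enter without notice")
print(hits[0])
```

The retrieved passages are then stuffed into the LLM prompt, which is the "G" in RAG; the reranking step the post mentions re-orders the candidates with a more expensive model before that happens.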
26
6 Comments -
Vince Kellen, Ph.D.
Artificial neural networks resemble, at least at one level, their biological counterparts. And we can scan their brains... #ai "We find a diversity of highly abstract features. They both respond to and behaviorally cause abstract behaviors. Examples of features we find include features for famous people, features for countries and cities, and features tracking type signatures in code. Many features are multilingual (responding to the same concept across languages) and multimodal (responding to the same concept in both text and images), as well as encompassing both abstract and concrete instantiations of the same idea (such as code with security vulnerabilities, and abstract discussion of security vulnerabilities)." https://lnkd.in/gEW5TvtP
8
-
Gregory Mermoud
Very insightful work by Anthropic’s interpretability team. And an amazing paper, with outstanding writing and figures. The idea is very simple: interpret LLMs by leveraging sparse autoencoders as surrogate models of the MLP of transformer blocks, which allow one to disambiguate the superposition of features captured by a single neuron. A simple idea, but a very careful and complex execution, as is often the case in our line of work. The paper goes into many details and provides a large array of insights, although the gist of the implementation remains obfuscated due to the closed-source nature of Claude. Too bad, because this is the kind of work that we need to better understand and eventually trust LLMs. This is demonstrated by the authors in the section ‘Influence on Behavior’, where they show that clamping some features to either high or low values during inference is “remarkably effective at modifying model outputs in specific, interpretable ways”. Hopefully this kind of work is going to be replicated and generalized to open-weights models, such that we have new ways to steer their behavior. https://lnkd.in/eVym7f_f #interpretability #xai #explainableai #steerableai #anthropic #claude
3
-
Tom Eck
I've posted on the work that Anthropic is doing to improve LLM interpretability, and again want to stress how important this is to the #GenAI field, especially when applied in regulated industries. The notorious "black box" nature of neural networks makes it nearly impossible to understand why they generate what they do. (Note: even Sam Altman does not understand why the GPT models work so effectively. For that matter, no one actually does!!) Here is a pretty technical explanation of the approach that Anthropic is taking. I hope that more researchers dig into this. BTW - during my graduate work in genomics and drug development I discovered that a reason we don't have more/better drugs for genetic diseases is that most of them are not monogenic, meaning a disease is typically caused by multiple, even many, genes. What Anthropic is discovering seems to also be the case in LLMs: the outputs generated arise from a very complex interplay of many 'genes' in the model.
5
-
Emerson Taymor
Just a few days out from my talk at the Toronto Machine Learning Society (TMLS) Summit. I will be speaking next Monday, July 15, at 11:30 in Room 1. My talk, “GenAI: A New Renaissance in Product Development”, is about actionable tips for incorporating GenAI in your day-to-day workflows. I cover three major topics on how to: 1. Get the most from working with global teams 2. Expedite user research (while still talking to real humans!) 3. Find the best tools to leverage during the product development process Along the way I’ll be sharing caveats and pitfalls to be on the lookout for. I’ve got some fun visuals to backdrop the talk. Plus, I will be launching my playbook for how to use GenAI in the product development process. It covers a whole lot more and the audience will have first access! So come on out from 11:30 - 12 in Room 1! And it’s my first trip to Toronto so please send tips!
26
-
Sugato Ray
🐣 What is the probability of A given B has happened? This kind of conditional thinking is at the core of a Bayesian problem statement. There are many real-world scenarios that could be modeled using Bayesian thinking. 🤗 PyMC is a Python library that allows you to conveniently express such probabilistic problems in a programming construct. ⚡️PyMC 👉 GitHub: https://lnkd.in/gVj-eX-S 👉 Docs: https://lnkd.in/gfJJczQg 👉 Intro to PyMC 4.0: https://lnkd.in/gqmcbwf4 #python #pymc #bayesian #ml
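Before reaching for MCMC, the "probability of A given B" question can be worked by hand with Bayes' rule; PyMC expresses the same conditional reasoning for models far too complex for algebra. A self-contained worked example (the numbers are the classic illustrative disease-test setup, not taken from PyMC's docs):

```python
# P(disease | positive test) via Bayes' rule: a test that is 99%
# sensitive and 95% specific, for a condition with 1% prevalence.
p_disease = 0.01
p_pos_given_disease = 0.99      # sensitivity
p_pos_given_healthy = 0.05      # 1 - specificity

# Total probability of a positive result, then Bayes' rule.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # 0.167
```

Despite the accurate test, a positive result only implies about a 1-in-6 chance of disease, because the condition is rare; PyMC lets you get the same kind of posterior by sampling when no closed form exists.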
5