Foundational Reading - I will make studying all four of these volumes from the SFI central to my learning over the next year. Still need to go a lot deeper into transformer and diffusion models, but one won't get the range of understanding needed by narrowing down to machine learning. https://lnkd.in/ghK4aG79 #complexity #design #computation
Steven Forth’s Post
More Relevant Posts
-
Senior Instructional Designer | Multi-genre content developer | AI tools and strategies | Collaborator | Relationship-builder | Lateral and vertical thinker | Empathetic human
Serious hat tip and thanks to Kyle Shannon for sharing this insightful article. Curious about the "levels" of artificial intelligence? Check it out: https://lnkd.in/evTRVjPH If the link doesn’t work for you, search for "Levels of AGI: Operationalizing Progress on the Path to AGI" on arxiv.org https://arxiv.org
-
One of our papers was published recently. Thanks to Annajiat sir and our group members. "Genre Classification: A Machine Learning Based Comparative Study of Classical Bengali Literature" Link: https://lnkd.in/gqNy9Vsf
-
In the latest editorial two-year overview in Annals of Regional Science (https://lnkd.in/dZYhZk86), my paper on Spatial Machine Learning (https://lnkd.in/dwRHqN7q) was the most downloaded, and it was also among the top-cited papers. The new methods seem to be widely accepted! Do you know them?
-
🚨New paper🚨 Together with Martina Vijver and Willie Peijnenburg, we published the second paper of my PhD. This paper investigates how species traits can be used to build machine learning models that predict the toxic effects of metallic #nanomaterials. We also present the benefits of using and combining multiple different machine learning algorithms for this task. Additionally, we highlight the importance of evaluating #QSARs in depth by examining the uncertainty of the predictions and the limitations of the models. Finally, we looked at how dataset size affects model performance and show that increasing the dataset size does not necessarily improve it. Have a read to find out the juicy details 🍵: https://lnkd.in/ep9C9PAn
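The post's idea of combining several algorithms and treating their disagreement as a rough uncertainty signal can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' pipeline; the feature matrix here merely stands in for species traits and nanomaterial descriptors.

```python
# Hedged sketch: combine several regressors and use their disagreement
# as a simple proxy for prediction uncertainty (synthetic data only).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a traits + nanomaterial-descriptor dataset.
X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [Ridge(), KNeighborsRegressor(), RandomForestRegressor(random_state=0)]
preds = np.stack([m.fit(X_train, y_train).predict(X_test) for m in models])

mean_pred = preds.mean(axis=0)   # combined (ensemble-averaged) prediction
uncertainty = preds.std(axis=0)  # model disagreement per test point
```

Points where the models disagree most are natural candidates for the "in-depth evaluation" the post calls for, e.g. flagging predictions that fall outside the models' applicability domain.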
-
📃Scientific paper: EBLIME: Enhanced Bayesian Local Interpretable Model-agnostic Explanations
Abstract: We propose EBLIME to explain black-box machine learning models and obtain the distribution of feature importance using Bayesian ridge regression models. We provide mathematical expressions of the Bayesian framework and theoretical outcomes, including the significance of the ridge parameter. Case studies were conducted on benchmark datasets and a real-world industrial application of locating internal defects in manufactured products. Compared to state-of-the-art methods, EBLIME yields more intuitive and accurate results, with better uncertainty quantification in terms of deriving the posterior distribution, credible intervals, and rankings of the feature importance.
Comment: 10 pages, 5 figures, 2 tables
Discover the rest of the scientific article on es/iode ➡️ https://etcse.fr/cWhj
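The core idea in the abstract — fitting a Bayesian ridge surrogate to a black-box model so that each feature importance comes with a posterior distribution — can be sketched with off-the-shelf tools. This is a hedged illustration of the general LIME-with-Bayesian-ridge pattern, not the authors' EBLIME code; the perturbation scale and models are arbitrary choices.

```python
# Sketch: explain one prediction of a black-box model by fitting a
# Bayesian ridge surrogate to perturbed samples around the instance.
# The surrogate's posterior gives a mean and std for each importance.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import BayesianRidge

X, y = make_regression(n_samples=300, n_features=5, random_state=1)
black_box = GradientBoostingRegressor(random_state=1).fit(X, y)

x0 = X[0]  # instance to explain
rng = np.random.default_rng(1)
Z = x0 + rng.normal(scale=0.5, size=(500, X.shape[1]))  # local neighbourhood
fz = black_box.predict(Z)

surrogate = BayesianRidge().fit(Z - x0, fz)
importance = surrogate.coef_                          # posterior mean importance
importance_std = np.sqrt(np.diag(surrogate.sigma_))   # posterior std per feature
ranking = np.argsort(-np.abs(importance))             # most influential first
```

The posterior standard deviations are what distinguish this from plain LIME: they support credible intervals on each importance and make the resulting ranking's stability explicit.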
-
Another paper aims to establish a systematic set of open problems and application successes so that ML researchers can grasp the field’s current state more quickly and become productive. https://lnkd.in/gHJTj8Bf
-
Senior Researcher in Neurodiversity in STEM Education. Author of Reaching and Teaching Neurodivergent Learners in STEM.
Just published! Our team's paper on Including Neurodiversity in Computational Thinking is now available. The paper presents the results from our implementation study of INFACT. Check it out! #Neurodiversity #ComputationalThinking #INFACT #NDinSTEM https://lnkd.in/gu58vyTT
-
New year, new publication alert! 🎉 Predicting metabolism at an early stage is important in maximising the chance of a drug’s success. However, accurate, useful models can be computationally expensive. To make good models more accessible, Elena Gelžinytė and Gábor Csányi at the University of Cambridge, in collaboration with Mario Öeren and Matthew Segall, have been investigating new machine learning methods, which can be applied to solve our chemistry questions at a fraction of the computational cost of traditional methods. Our latest research, in the Journal of Chemical Theory and Computation, describes a MACE interatomic potential which increases the computational efficiency of predicting cytochrome P450 sites of metabolism. Congratulations to all the authors! If you're interested, you can read the Open Access paper at: https://lnkd.in/e947Fnfu #ComputationalChemistry #PredictiveModelling #MachineLearning #MLIP
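The general pattern behind this work — replacing an expensive reference calculation with a cheap ML surrogate trained on a modest number of reference evaluations — can be illustrated generically. This sketch is not MACE (an interatomic potential trained on quantum-chemistry data); it just shows the surrogate idea on a toy 1-D "energy" function, with a Gaussian process standing in for the learned model.

```python
# Generic surrogate-model sketch: train a cheap model on a handful of
# "expensive" reference evaluations, then query it at negligible cost.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_energy(x):
    # Toy stand-in for a costly quantum-chemistry calculation.
    return np.sin(3 * x) + 0.5 * x**2

X_ref = np.linspace(-2, 2, 15).reshape(-1, 1)  # few expensive evaluations
y_ref = expensive_energy(X_ref).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_ref, y_ref)

X_query = np.linspace(-2, 2, 200).reshape(-1, 1)
pred, std = surrogate.predict(X_query, return_std=True)  # fast predictions + uncertainty
max_err = np.max(np.abs(pred - expensive_energy(X_query).ravel()))
```

Once trained, every query costs a model evaluation instead of a full reference calculation, which is the source of the "fraction of the computational cost" claim in the post.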
-
The reasoning of mathematicians is founded on certain and infallible principles. Every word they use conveys a determinate idea, and by accurate definitions they excite the same ideas in the mind of the reader that were in the mind of the writer. When they have defined the terms they intend to make use of, they premise a few axioms, or self-evident principles, that every one must assent to as soon as proposed. They then take for granted certain postulates, that no one can deny them, such as, that a right line may be drawn from any given point to another, and from these plain, simple principles they have raised most astonishing speculations, and proved the extent of the human mind to be more spacious and capacious than any other science. — John Adams Meanwhile we have allowed people to degrade the science of computing to the exact opposite.
-
Thrilled to share our paper, 'Structure in Deep Reinforcement Learning: A Survey and Open Problems,' recently published in the Journal of Artificial Intelligence Research (#JAIR). We introduce a framework inspired by design patterns to unify existing #RL approaches that leverage structural assumptions for enhanced efficiency. We also discuss their relevance to different research areas within RL, such as Meta-RL and using foundation models for RL. A heartfelt thanks to coauthors Amy Zhang and Marius Lindauer, the JAIR reviewers, and everyone who provided feedback on earlier drafts—your insights were invaluable! Read the full paper here: https://lnkd.in/egic-XYq
Full Professor of (Auto)ML and Head of the Institute of Artificial Intelligence at Leibniz University Hannover
Our survey paper "Structure in Deep Reinforcement Learning: A Survey and Open Problems" was published in the Journal of Artificial Intelligence Research (#JAIR). If you are interested in learning how you can make use of structural information (in various forms and kinds) to increase the efficiency of #RL, this paper will be interesting for you. Many thanks to Aditya Mohan for making this paper possible and to Amy Zhang as a very knowledgeable co-author in this field. https://lnkd.in/egic-XYq