leon-ai / leon (Public) · 14.9k stars · 1.2k forks

Commit History (branch: develop)
Commits on Jul 4, 2024
  dcda888  fix(server): persona typo (louistiti)
Commits on Jul 3, 2024
  0c775ba  feat: use VRAM as LLM unit requirements (louistiti)
  6867e9c  feat(server): VRAM helpers (louistiti)
  1334c20  feat(server): has-GPU helper (louistiti)
  94885de  feat(server): get graphics compute API (louistiti)
  b867758  feat(server): get GPU device names (louistiti)
  4019bd8  chore: better comments on LLM action matching (louistiti)
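The commits above express LLM requirements in terms of VRAM rather than system RAM. The sketch below is illustrative only and is not Leon's actual helper code: the model names, the VRAM figures, and the `can_run_llm` function are all hypothetical, shown just to convey the idea of checking a model's VRAM requirement against the memory a GPU reports.

```python
# Illustrative sketch (NOT Leon's implementation): check whether a GPU's
# free VRAM covers a model's requirement plus a small safety headroom.

# Hypothetical per-model VRAM requirements, in gigabytes.
LLM_VRAM_REQUIREMENTS_GB = {
    "tiny": 2.0,
    "small": 4.0,
    "medium": 8.0,
}

def can_run_llm(model: str, free_vram_gb: float, headroom_gb: float = 0.5) -> bool:
    """Return True when free VRAM covers the model plus the headroom."""
    required = LLM_VRAM_REQUIREMENTS_GB[model]
    return free_vram_gb >= required + headroom_gb
```

Keeping a headroom term avoids allocating the GPU right up to its limit, which tends to fail once the inference context grows.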
Commits on Jul 2, 2024
  7e483b9  feat: add `inspect:gpu` npm script (louistiti)
Commits on Jul 1, 2024
  ee615e6  refactor(python tcp server): rename the RMS threshold setting for ASR (louistiti)
  b092947  feat(web app): add headset tips for a better voice experience (louistiti)
  1d96655  fix(python tcp server): overflow on ASR (louistiti)
  8ed7c78  fix(web app): use correct config property for LLM warm-up (louistiti)
  be3df77  feat(server): boost free RAM delta for LLM (louistiti)
  2c89041  feat(server): upgrade `node-llama-cpp` to `3.0.0-beta.36` (louistiti)
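Commit ee615e6 above mentions an RMS threshold setting for ASR. An RMS (root-mean-square) threshold is a common way to gate speech recognition: frames whose energy stays below the threshold are treated as silence. The sketch below is a generic illustration of that idea, not Leon's code; the function names and the default threshold value are assumptions.

```python
# Generic RMS energy gate (NOT Leon's implementation): frames whose
# root-mean-square energy exceeds a threshold are treated as speech.
import math

def frame_rms(samples):
    """Root-mean-square energy of one audio frame (floats in [-1, 1])."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_speech(samples, rms_threshold=0.02):
    """Hypothetical gate: frame counts as speech when RMS exceeds the threshold."""
    return frame_rms(samples) > rms_threshold
```

Tuning the threshold trades false triggers from background noise against clipped quiet speech, which is presumably why the setting is exposed by name.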
Commits on Jun 30, 2024
  e4277bf  feat(web app): add more info data (louistiti)
  2e351d4  feat(web app): add info (louistiti)
  e24dee3  feat(server): VRAM context size management (louistiti)
  8eeff3a  feat(python tcp server): map speech synthesis hardware device choice to settings (louistiti)
  a05f344  feat(server): upgrade `node-llama-cpp` to `3.0.0-beta.34` (louistiti)
  74241b4  feat(python tcp server): run speech synthesis inference on CPU (louistiti)
  fae15c9  feat(server): kill existing PyTorch thread from TCP server on start (louistiti)
  2c1b351  feat(server): use debug verbosity by default in LLM manager (louistiti)
  c25c525  feat(server): increase LLM thread count (louistiti)
  805c65b  fix(web app): init state when shouldWarmUpLLM is not enabled (louistiti)
Commits on Jun 29, 2024
  0bb61e5  Merge remote-tracking branch 'origin/develop' into develop (louistiti)
  d203806  feat(server): disable onToken when LLM duties are warming up (louistiti)
  7a42f4d  feat(server): disable onToken when LLM duties are warming up (Louis Grenard)
Commits on Jun 27, 2024
  ed9961c  feat(server): sync LLM duties warmup with UI (louistiti)
Commits on Jun 25, 2024
  fb91551  feat(server): warm up LLM duties when necessary (WIP) (louistiti)
Commits on Jun 24, 2024
  8dc725a  refactor(scripts): differentiate PyTorch info log on macOS (louistiti)
Commits on Jun 23, 2024
  6c3abf3  fix(server): sometimes the action recognition LLM duty adds whitespace to the intent (louistiti)
  1314217  BREAKING: upgrade from Python 3.9 to Python 3.11 (louistiti)
  8716176  chore(bridge/python): upgrade cx_Freeze to 7.1.1 (louistiti)
  85c3a40  chore(python tcp server): upgrade cx_Freeze to 7.1.1 (louistiti)
  0661e9e  refactor(scripts): only install PyTorch when the targeted setup is the TCP server (louistiti)