google-ai-edge / ai-edge-torch (Public) · 218 stars · 26 forks
Commit History (branch: main)
Commits on Jul 18, 2024
- Add support for composite kwargs (#89), hheydary, efe09c5
- Update ai-edge-quantizer-nightly version (#96), paulinesho, e1ca75c

Commits on Jul 17, 2024
- Update tf-nightly (#94), paulinesho, 2f07c62

Commits on Jul 15, 2024
- Wrap embedding tables with HLFB (#61), paulinesho, 26162a2

Commits on Jul 11, 2024
- Support kwargs in ai_edge_torch.convert to named inputs (#86), chunnienc, 9fa0401
- fix SDPA fx pass (#85), Linchenn, 3b81ea0

Commits on Jul 10, 2024
- Add safetensors SD 1.5 tensor names mapping (#84), yichunk, 9140d68
- Call CrossAttention __init__ with batch dim (#83), talumbau, 70de3be

Commits on Jul 3, 2024
- Update quantizer to new nightly (#72), paulinesho and haozha111, 68752fe
- Update README.md (#80), haozha111, 6e37d95

Commits on Jul 2, 2024
- Update setup.py (#77), chunnienc, 6b0bc9f
- Add a system overview of Generative API (#69), haozha111, 1edabd1
- Update setup.py (#76), chunnienc, 55fb301
- Revert "Change tokens from i64 to i32 (#68)" (#73), paulinesho, 3f304d4

Commits on Jul 1, 2024
- Pin ai-edge-quantizer nightly version (#71), paulinesho, 08f274d

Commits on Jun 28, 2024
- Change tokens from i64 to i32 (#68), paulinesho, 2f133e4

Commits on Jun 27, 2024
- Add channel last IO transformation API (#66), chunnienc, b5c7314

Commits on Jun 26, 2024
- Fix BuildInterpolateCompositePass with aten decompositions (#64), chunnienc, af276a6

Commits on Jun 20, 2024
- Add README for unet layers. (#63), yichunk, 9b06abe
- Add fx pass to remove sdpa composite zero mask input (#62), chunnienc, 0a6c9b3

Commits on Jun 18, 2024
- Add search model api (#59), wuruoyu, 1a4fe12

Commits on Jun 17, 2024
- Skip flaky quantization test temporarily (#58), paulinesho, 87ca89d

Commits on Jun 14, 2024
- Remove 5D tensor reshape in attention layer implementation. (#57), yichunk, 0318b3e

Commits on Jun 13, 2024
- Add UNet model config and refactor diffusion model. (#55), yichunk, 5b31b82
- support batch > 1 and add batch_size to model config. (#56), haozha111, 4b9a118
- init (#33), chunnienc, f1acc47

Commits on Jun 12, 2024
- Remove ModuleList of optional blocks. Fix UpDecoder config. (#51), yichunk, 378f8ae
- Add 2D blocks used in diffusion model of stable diffusion. (#50), yichunk, fca58c3
- Add cross attention layer and refactor t5_attention to inherit from cross attention (#49), yichunk, 6b0ba07

Commits on Jun 11, 2024
- Fix activation_config renaming (#48), yichunk, c0187c2
- Add back missed deleted GELU_QUICK type. (#47), yichunk, c84db1c

Commits on Jun 10, 2024
- Migrate quantization to use AI Edge Quantizer (#46), paulinesho, 913831c
- Update README.md (#45), haozha111, 1ef63c7
- Add GeGLU activation and change activation type to activation config for model_config. (#42), yichunk, 1bc5098

Commits on Jun 8, 2024
- Add interpolate nearest composite support (#41), vamsimanchala, fe2afd5