PaddlePaddle / PaddleNLP (public repository · 11.8k stars · 2.9k forks)
Commit History (branch: develop)
Commits on Jul 18, 2024

- fix incorrect token counting in llm/predictor.py (#8769) · lszxb committed Jul 18, 2024 · dedf4c5
Commits on Jul 17, 2024

- [Trainer] Enable parallel_config to use commas as delimiters. (#8677) · Difers committed Jul 17, 2024 · 62a18c4
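The parallel_config change above (#8677) lets the Trainer accept commas in addition to spaces when splitting the flag string. A minimal sketch of such delimiter-tolerant parsing; the function name and flag strings here are illustrative, not PaddleNLP's actual implementation:

```python
import re

def parse_parallel_config(cfg: str) -> set:
    """Split a parallel_config string on commas and/or whitespace.

    Accepting both delimiters lets callers write either
    "flag_a flag_b" or "flag_a,flag_b" interchangeably.
    """
    return {tok for tok in re.split(r"[,\s]+", cfg.strip()) if tok}

# Both spellings yield the same set of flags.
print(parse_parallel_config("enable_mp_async_allreduce enable_mp_skip_c_identity"))
print(parse_parallel_config("enable_mp_async_allreduce,enable_mp_skip_c_identity"))
```

Treating runs of mixed delimiters as a single separator also makes inputs like `"a, b"` parse cleanly.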
- fix the ce for the unittest (#8772) · wawltor committed Jul 17, 2024 · e89399a
- fix the missing truncation=True in llm/predictor.py (#8768) · lszxb committed Jul 17, 2024 · ab72ba7
- add benchmark baichuan2 scripts (#8683) · fightfat committed Jul 17, 2024 · 227ec81
Commits on Jul 16, 2024

- [Inference] support load or save Llama2-7b in three patterns (#8766) · lizexu123 committed Jul 16, 2024 · 5e53db1
Commits on Jul 15, 2024

- Fix version show (#8754) · DrownFish19 committed Jul 15, 2024 · 5a508e5
- [NPU] Fix sequence parallel lib import (#8760) · DrownFish19 committed Jul 15, 2024 · be957f0
Commits on Jul 12, 2024

- [AutoParallel] update loss for global clip (#8750) · JZ-LIANG committed Jul 12, 2024 · 7a1c439
- [PaddleNLP 3.0] Update README (#8681) · DrownFish19 and jzhang533 committed Jul 12, 2024 · 10d058d
Commits on Jul 11, 2024

- update a100 loss (#8708) · zhiqiu committed Jul 11, 2024 · 028206b
- [Bug fixes] Fix ring attention (#8740) · zhangyuqin1998 committed Jul 11, 2024 · 2f0f8b1
- [LLM] Update sequence parallel linear import (#8706) · DrownFish19 committed Jul 11, 2024 · e336e78
- fix nlp dir and auto_parallel_ci exit -6 (#8744) · fightfat committed Jul 11, 2024 · c4f99ee
- [LLM] Add Yuan model (#8654) · zhaogf01 committed Jul 11, 2024 · 1af227a
Commits on Jul 10, 2024

- revert benchmark fix (#8747) · ronny1996 committed Jul 10, 2024 · a1dbc39
- correct broken links in readme (#8741) · jzhang533 committed Jul 10, 2024 · 3241120
- [Safetensors] Fix mmap for Windows system (#8734) · DrownFish19 committed Jul 10, 2024 · 1a2502c
Commits on Jul 9, 2024

- disable lora (#8674) · lugimzzz and wawltor committed Jul 9, 2024 · fbe613b
- bug fix (#8730) · FeixLiu committed Jul 9, 2024 · 595c2fb
Commits on Jul 8, 2024

- Finetune support use_fast_layer_norm (#8717) · tianhaodongbd committed Jul 8, 2024 · e681019
Commits on Jul 5, 2024

- fix fast_ln backward (#8719) · deepllz committed Jul 5, 2024 · bdd2287
- [Inference] support load or save Llama2-7b in three patterns (#8712) · lizexu123 committed Jul 5, 2024 · d8ddba9
- fix xpu gather for unified ckpt (#8710) · FeixLiu committed Jul 5, 2024 · e471b1e
Commits on Jul 4, 2024

- [Benchmark] Enable use_fast_layer_norm for llama2 benchmark (#8714) · deepllz committed Jul 4, 2024 · 70564ba
- [Performance] Add fast_rmsnorm (#8680) · deepllz and Xreki committed Jul 4, 2024 · fd01043
Commits on Jul 3, 2024

- xpu use allgather (#8697) · FeixLiu committed Jul 3, 2024 · 6d464bf
- Round num_samples down to prevent prefetch from reading past the dataset's maximum length... (#8690) · JunnYu committed Jul 3, 2024 · 85cbada
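The num_samples commit above (#8690) floors the sample count so a prefetching dataloader never indexes past the end of the dataset. A self-contained sketch of the idea; the function name is illustrative, not the actual PaddleNLP code:

```python
def usable_num_samples(dataset_len: int, batch_size: int) -> int:
    """Floor dataset_len to a whole number of batches.

    Dropping the incomplete tail batch guarantees that a prefetching
    reader, which always fetches full batches ahead of time, never
    requests an index beyond dataset_len.
    """
    return (dataset_len // batch_size) * batch_size

# 103 samples with batch_size 8 -> only 96 are used (12 full batches).
print(usable_num_samples(103, 8))
```

Without this rounding, a reader that prefetches whole batches would ask for indices 96..103 in the final batch and overrun a 103-sample dataset.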
- fix the fast tokenizer in the taskflow · wawltor committed Jul 3, 2024 · cf57f86
- [Safetensors] Fix safetensors shape (#8702) · DesmonDay committed Jul 3, 2024 · c574d6d
- Add new mistral (#7425) · wtmlon committed Jul 3, 2024 · af19dc4
- update loss base of auto-parallel tests (#8701) · zhiqiu committed Jul 3, 2024 · 64efd1f
Commits on Jul 2, 2024

- [benchmark] Change the mirror source for pip (#8699) · mmglove committed Jul 2, 2024 · eaa80c9
- change max_steps (#8679) · heavyrain-lzy committed Jul 2, 2024 · 4a56c68
- [PIR save] Modify export llama model file in pir mode (#8689) · xiaoguoguo626807 committed Jul 2, 2024 · d832282