Insights: pytorch/xla
Overview
26 Pull requests merged by 8 people
- Change c10::optional to std::optional (#7702, merged Jul 19, 2024)
- Update docs for 2.4 release (#7707, merged Jul 18, 2024)
- internalize _parse_xla_device (#7675, merged Jul 18, 2024)
- add depreacated decorator (#7703, merged Jul 18, 2024)
- Add docs for eager mode (#7700, merged Jul 17, 2024)
- [torch_xla2] Fix some duplicate and incorrect op registrations (#7696, merged Jul 17, 2024)
- fix type annotation for 3.8 (#7701, merged Jul 17, 2024)
- Support logits_soft_cap parameter in paged_attention (#7704, merged Jul 17, 2024)
- Update README.md for 2.4 release (#7694, merged Jul 17, 2024)
- update debug_single_process to false (#7695, merged Jul 17, 2024)
- Minor update to the docs (#7691, merged Jul 16, 2024)
- Update tuple type annotation (#7699, merged Jul 16, 2024)
- Add atan2, arange, addcmul, addcdiv, _softmax_backward_data, T, H (#7681, merged Jul 16, 2024)
- Optimize execution for ops that have multiple output in eager mode (#7680, merged Jul 16, 2024)
- fix git fetch for torch pin checkout (#7690, merged Jul 15, 2024)
- Lower aten::_linalg_eigh (#7674, merged Jul 15, 2024)
- Extend timeout for `tpu-info` test (#7686, merged Jul 15, 2024)
- Update 2.4 rc-8 in build trigger (#7685, merged Jul 15, 2024)
- Add apt-get update in upstream image build (backport #7640) (#7684, merged Jul 15, 2024)
- Delete extra file (#7683, merged Jul 15, 2024)
- Backport "Enable eager spmd" (#7341) and "fix eager mode spmd module loading with fsdpv2" (#7631) (#7673, merged Jul 15, 2024)
- Introduce torch_xla.launch() (#7648, merged Jul 12, 2024)
- Fix helm version format in terraform (#7678, merged Jul 12, 2024)
- Update TPU ARC cluster (#7677, merged Jul 12, 2024)
- support gmm as a custom op for dynamo (#7672, merged Jul 12, 2024)
- Update openxla pin to 11 July 2024 (#7639, merged Jul 12, 2024)
10 Pull requests opened by 7 people
- Add support for dynamic shape in dynamo (#7676, opened Jul 12, 2024)
- deprecate xrt_world_size (#7679, opened Jul 12, 2024)
- Add 1-layer gradient accumulation test to check aliasing (#7692, opened Jul 16, 2024)
- Update tensor.py (#7698, opened Jul 16, 2024)
- [WIP] Use `privateuseone` dispatch key (#7705, opened Jul 17, 2024)
- Add mul, trapz, trapezoid, div (#7706, opened Jul 17, 2024)
- Unify the logics to check eager mode (#7709, opened Jul 17, 2024)
- Update eager.md (#7710, opened Jul 18, 2024)
- Add `tril`, `tile` ops support (#7711, opened Jul 19, 2024)
- Add op support for `nan_to_num` to `movedim` (#7712, opened Jul 19, 2024)
5 Issues closed by 4 people
- Op info test for `view_copy .. zeros` (#7574, closed Jul 18, 2024)
- [API Usability] Internalize `parse_xla_device` (#7652, closed Jul 18, 2024)
- Op info test for `T .. arange` (#7688, closed Jul 16, 2024)
- PyTorch/XLA should not crash runtime during int64 dot product on TPU (#6700, closed Jul 16, 2024)
- not lowered: aten::_linalg_eigh (#6017, closed Jul 15, 2024)
4 Issues opened by 4 people
- Invalid version identifier in filenames of nightly builds (#7697, opened Jul 16, 2024)
- Graph dump to optimize (#7693, opened Jul 16, 2024)
- CUDA and GPU-Flavoured Docker/Container Image Missing CUDA Support (#7689, opened Jul 15, 2024)
- Is there any way to directly execute the cached computational graph (#7682, opened Jul 15, 2024)
21 Unresolved conversations
Conversations sometimes continue on older items that are not yet closed. Below is a list of all Issues and Pull Requests with unresolved conversations.
- Use torch_xla.experimental.compile for all examples (#7642, commented on Jul 18, 2024; 4 new comments)
- Rename `aten_cpu_fallback` into `aten_fallback` (#7646, commented on Jul 12, 2024; 0 new comments)
- Make CUDA OpenXLA fallback the default (#7630, commented on Jul 12, 2024; 0 new comments)
- Support of AOT compilation (refine #6992) (#7581, commented on Jul 18, 2024; 0 new comments)
- [1/N] Use std::string_view (#7297, commented on Jul 17, 2024; 0 new comments)
- docs: fix typos in spmd.md (#7065, commented on Jul 15, 2024; 0 new comments)
- Op info test for `movedim .. nan_to_num` (#7556, commented on Jul 19, 2024; 0 new comments)
- Equivalent of get_worker_info to split an IterableDataset (#7667, commented on Jul 19, 2024; 0 new comments)
- [torch-xla 2.1 - 2.4] when functionalization is on, there are no aliasing for gradients when using gradient accumulation (#7174, commented on Jul 19, 2024; 0 new comments)
- Op info test for `rsqrt .. signal.windows.cosine` (#7566, commented on Jul 18, 2024; 0 new comments)
- Op info test for `view_copy .. zeros` (#7575, commented on Jul 18, 2024; 0 new comments)
- Op info test for `constant_pad_nd` (#7367, commented on Jul 17, 2024; 0 new comments)
- 2.4 backport PR request list (#7242, commented on Jul 17, 2024; 0 new comments)
- Sharing tensor storage (with DLPack) results in unexpected behavior (#7304, commented on Jul 17, 2024; 0 new comments)
- In-place operations on an DLPack aliased XLA tensor does not propagate (#7198, commented on Jul 16, 2024; 0 new comments)
- How to avoid compilation in a section of code? (#7622, commented on Jul 16, 2024; 0 new comments)
- [torchbench] `moco` fails to run with CUDA OpenXLA fallback (#7647, commented on Jul 16, 2024; 0 new comments)
- Op info test for `atan2 .. bfloat16` (#7545, commented on Jul 15, 2024; 0 new comments)
- Failing Torchbench Models: tracking issue (#5932, commented on Jul 15, 2024; 0 new comments)
- [RFC] PyTorch/XLA eager mode as default (#7253, commented on Jul 13, 2024; 0 new comments)
- [Fori Loop] Inconsistent Shape Behavior (#7665, commented on Jul 12, 2024; 0 new comments)