Conversation

@b1tg (Contributor) commented Dec 18, 2025

No description provided.

@github-actions (Contributor)
Changes

Name                          Lines    Diff    Tokens/Line    Diff
--------------------------  -------  ------  -------------  ------
tinygrad/tensor.py             1396      +0           20.9    +0.0
tinygrad/schedule/multi.py      162      +0           21.3    +0.1


total lines changed: 0

devs = ("CPU:0", "CPU:1")
N = 16
a = Tensor.randn(N, N).shard_(devs, axis=0)
b = Tensor.randn(N, N).to(devs)
Collaborator
shard None
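
Presumably this suggests replicating b with shard (default axis=None) instead of moving it with .to(devs). A minimal sketch of that reading, assuming tinygrad's Tensor.shard_ API:

from tinygrad import Tensor

devs = ("CPU:0", "CPU:1")
N = 16
# axis=None (the default) replicates b on every device rather than splitting it;
# this is a hedged reading of the "shard None" suggestion above, not the PR's code.
b = Tensor.randn(N, N).shard_(devs)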

  # multi supports custom kernels with CUSTOM_KERNEL + AFTER
- (UPat(Ops.CUSTOM_KERNEL, src=UPat(Ops.MULTI), name="ck"),
-  lambda ck: ck.replace(src=tuple(m.src[0] for m in ck.src))),
+ (UPat(Ops.CUSTOM_KERNEL, src=UPat((Ops.MULTI, Ops.CONTIGUOUS)), name="ck"),
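
For readers outside the codebase: the rule above matches a CUSTOM_KERNEL uop whose sources are MULTI uops and swaps each MULTI for its first child (the per-device uop), and the PR widens the match to also accept CONTIGUOUS sources. A stand-alone model of that unwrap step, using a hypothetical Node stand-in for tinygrad's uops rather than the real UOp class:

from dataclasses import dataclass, replace

# Hypothetical stand-in for a tinygrad uop, only to illustrate the rewrite.
@dataclass(frozen=True)
class Node:
    op: str
    src: tuple["Node", ...] = ()

def unwrap_multi_srcs(ck: Node) -> Node:
    # Mirrors lambda ck: ck.replace(src=tuple(m.src[0] for m in ck.src)):
    # each MULTI source is replaced by its first (per-device) child.
    return replace(ck, src=tuple(m.src[0] for m in ck.src))

multi = Node("MULTI", src=(Node("BUFFER_CPU0"), Node("BUFFER_CPU1")))
ck = Node("CUSTOM_KERNEL", src=(multi,))
assert unwrap_multi_srcs(ck).src == (Node("BUFFER_CPU0"),)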
Collaborator

with shard None you shouldn't need this change

Contributor Author

shard None still leaves a CONTIGUOUS, and this test mirrors how we use it in model training; the user code should not have to change?
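
The author's point seems to be that even with shard(axis=None), a .contiguous() in the training code leaves a CONTIGUOUS node between CUSTOM_KERNEL and MULTI, which the original pattern (matching only Ops.MULTI sources) would miss. A hypothetical repro sketch, not the PR's actual test:

from tinygrad import Tensor

devs = ("CPU:0", "CPU:1")
# Assumption based on the thread: calling .contiguous() on a replicated tensor,
# as model-training code commonly does, wraps the MULTI uop in a CONTIGUOUS uop,
# so a pattern that only matches Ops.MULTI sources misses it.
b = Tensor.randn(16, 16).shard_(devs).contiguous()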
