feat: add inductive methods, bench configs, and tests (#22)
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 144a6218f1
```python
def _output_smearing(labels: Any, *, n_classes: int, std: float, generator: Any) -> Any:
    torch = optional_import("torch", extra="inductive-torch")
    one_hot = _one_hot(labels, n_classes=int(n_classes))
    if float(std) <= 0:
        return one_hot
    noise = torch.randn(
        (int(one_hot.shape[0]), int(n_classes)),
        generator=generator,
        device=one_hot.device,
        dtype=one_hot.dtype,
```
Use device-matched RNG for output smearing
`_output_smearing` always receives a `torch.Generator()` created on the CPU, but then calls `torch.randn(..., device=one_hot.device, generator=generator)`. On CUDA, PyTorch requires the generator to live on the same device as the sampled tensor, so this raises at runtime (e.g., "Expected a CUDA generator") whenever `output_smearing_std > 0` (the default) and the method runs on GPU. As written, this means TriNet cannot train on CUDA; the generator should be created with `torch.Generator(device=one_hot.device)`, or omitted when sampling on GPU.
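A minimal sketch of the suggested fix, assuming plain PyTorch: create the generator on the same device as the one-hot tensor before sampling. The `make_generator` helper and the `std=0.1` value are illustrative, not taken from the PR.

```python
import torch

def make_generator(device: torch.device, seed: int = 0) -> torch.Generator:
    # Hypothetical helper: a generator created on the target device avoids
    # the "Expected a CUDA generator" error when sampling on GPU.
    gen = torch.Generator(device=device)
    gen.manual_seed(seed)
    return gen

# Toy one-hot labels (3 samples, 3 classes) on whatever device is in use.
one_hot = torch.eye(3)[torch.tensor([0, 2, 1])]

gen = make_generator(one_hot.device, seed=42)
noise = torch.randn(
    one_hot.shape,
    generator=gen,
    device=one_hot.device,  # generator device now matches the sample device
    dtype=one_hot.dtype,
)
smeared = one_hot + 0.1 * noise  # illustrative std of 0.1
```

The same pattern works unchanged on CPU, since `torch.Generator(device="cpu")` is the default behavior.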
Summary
add inductive methods
Checklist