
feat: add inductive methods, bench configs, and tests #22

Merged
melvinbarbaux merged 2 commits into main from feat/inductive-methods-configs-tests on Jan 10, 2026

Conversation

@melvinbarbaux (Collaborator) commented Jan 10, 2026

Summary

Add inductive methods, along with benchmark configs and tests.

Checklist

  • Tests added or updated
  • CI green
  • Changelog updated
  • Documentation updated

melvinbarbaux self-assigned this Jan 10, 2026

codecov bot commented Jan 10, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 144a6218f1


Comment on lines +74 to +83
def _output_smearing(labels: Any, *, n_classes: int, std: float, generator: Any) -> Any:
    torch = optional_import("torch", extra="inductive-torch")
    one_hot = _one_hot(labels, n_classes=int(n_classes))
    if float(std) <= 0:
        return one_hot
    noise = torch.randn(
        (int(one_hot.shape[0]), int(n_classes)),
        generator=generator,
        device=one_hot.device,
        dtype=one_hot.dtype,

P1: Use device-matched RNG for output smearing

_output_smearing always uses a torch.Generator() created on the CPU, but then calls torch.randn(..., device=one_hot.device, generator=generator). On CUDA, PyTorch requires the generator to be on the same device, so this raises at runtime (e.g., “Expected a CUDA generator”) whenever output_smearing_std > 0 (the default) and the method runs on GPU. This means TriNet cannot train on CUDA as-is; the generator should be created with torch.Generator(device=one_hot.device) or omitted when sampling on GPU.

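As a hedged illustration of the reviewer's suggestion, the sketch below creates the generator on the same device as the one-hot tensor before sampling. The helper name _device_generator and the simplified _output_smearing signature here are hypothetical and not taken from this PR; only torch.Generator(device=...) and torch.randn(..., generator=...) are standard PyTorch APIs.

    import torch
    from typing import Optional

    def _device_generator(device: torch.device, seed: Optional[int] = None) -> torch.Generator:
        # Build the RNG on the target device; a CUDA generator is required when
        # sampling with an explicit generator directly onto a CUDA tensor.
        gen = torch.Generator(device=device)
        if seed is not None:
            gen.manual_seed(seed)
        return gen

    def _output_smearing(one_hot: torch.Tensor, std: float, seed: Optional[int] = None) -> torch.Tensor:
        # Add Gaussian noise to one-hot targets; no-op when std <= 0.
        if std <= 0:
            return one_hot
        gen = _device_generator(one_hot.device, seed)
        noise = torch.randn(
            one_hot.shape,
            generator=gen,
            device=one_hot.device,
            dtype=one_hot.dtype,
        )
        return one_hot + std * noise

Creating the generator on one_hot.device keeps sampling reproducible on both CPU and CUDA; alternatively, as the review notes, the generator argument can simply be omitted when sampling on GPU.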

melvinbarbaux merged commit 526f6eb into main Jan 10, 2026
11 checks passed
melvinbarbaux deleted the feat/inductive-methods-configs-tests branch January 10, 2026 22:48