Bumped dependency versions in the lock file #303
base: main
Conversation
Pull request overview
Updates SASRec transformer unit tests to reflect changed recommendation ordering/output, likely as a consequence of dependency version changes.
Changes:
- Adjust expected Columns.Item sequences in SASRec recommendation tests (test_u2i_with_key_and_attn_masks, test_u2i_with_item_features).
  pd.DataFrame(
      {
          Columns.User: [30, 30, 30, 40, 40, 40],
-         Columns.Item: [11, 13, 17, 17, 13, 11],
+         Columns.Item: [11, 13, 15, 17, 13, 11],
          Columns.Rank: [1, 2, 3, 1, 2, 3],
Copilot AI · Feb 10, 2026
The PR title/description indicate only dependency/lockfile bumps, but this hunk changes the expected recommendation output (items/ranks). Please update the PR description (and/or title) to explain the behavioral change and which dependency bump caused it, so reviewers can assess whether the new ranking is an intended/acceptable change rather than masking a regression.
          Columns.User: [30, 30, 30, 40, 40, 40],
-         Columns.Item: [11, 13, 17, 17, 13, 11],
+         Columns.Item: [11, 13, 15, 17, 13, 11],
          Columns.Rank: [1, 2, 3, 1, 2, 3],
Copilot AI · Feb 10, 2026
These tests assert an exact item ordering. In the recommendation stack, ranking is produced via torch.topk(..., sorted=True) (see TorchRanker.rank), which does not guarantee a deterministic order for ties and can change across torch versions / BLAS kernels even with deterministic flags. To reduce future lockfile-bump churn and flakiness, consider enforcing a deterministic tie-break (e.g., secondary sort by item id) in the ranker/recommendation code, or relax the test to assert an ordering rule (score desc + item asc) rather than a hardcoded list that may vary when scores are equal/near-equal.
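For illustration only, a minimal sketch of the kind of tie-break described above, assuming scores arrive as a dense (n_users, n_items) tensor; the function name is hypothetical and this is not the actual TorchRanker.rank implementation:

    import torch

    def topk_with_item_tiebreak(scores: torch.Tensor, k: int):
        """Pick the k best items per user from a dense (n_users, n_items) score matrix,
        breaking equal scores by the smaller item (column) index.
        """
        # A stable descending sort keeps tied entries in their original (ascending
        # index) order, so equal scores resolve by item id instead of whatever order
        # the topk kernel happens to produce.
        order = torch.argsort(scores, dim=1, descending=True, stable=True)
        top_items = order[:, :k]                         # (n_users, k) item indices
        top_scores = torch.gather(scores, 1, top_items)  # (n_users, k) matching scores
        return top_scores, top_items

The trade-off is that a full stable sort costs O(n_items log n_items) per user versus topk's partial selection; for very large catalogs one could first topk a generous candidate set and apply the stable tie-break only to that slice.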
  pd.DataFrame(
      {
          Columns.User: [30, 30, 30, 40, 40, 40],
-         Columns.Item: [11, 13, 12, 13, 14, 12],
+         Columns.Item: [11, 12, 13, 13, 14, 12],
          Columns.Rank: [1, 2, 3, 1, 2, 3],
Copilot AI · Feb 10, 2026
Same concern as above: this expected ordering change looks like it could be driven by tie/near-tie score ordering differences after dependency updates. To make this test robust across torch/pytorch-lightning bumps, prefer a deterministic tie-break in ranking (or assert ordering via an explicit secondary key) instead of hardcoding an order that can legitimately vary.
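On the test side, a hedged sketch of asserting the ordering rule instead of a hardcoded item list, assuming the per-candidate scores can be obtained as a DataFrame with a Columns.Score column; the helper name and the scores frame are hypothetical, not part of the existing test suite:

    import pandas as pd
    from rectools import Columns

    def assert_reco_follows_ordering_rule(reco: pd.DataFrame, scores: pd.DataFrame, k: int) -> None:
        """Check that per-user recommendations follow 'score desc, item id asc',
        rather than comparing against a hardcoded item sequence.
        """
        # Build the expected top-k per user from the candidate scores,
        # with item id as the explicit secondary sort key for ties.
        expected = (
            scores
            .sort_values([Columns.User, Columns.Score, Columns.Item], ascending=[True, False, True])
            .groupby(Columns.User, sort=False)
            .head(k)
        )
        for user, user_reco in reco.groupby(Columns.User):
            got = user_reco.sort_values(Columns.Rank)[Columns.Item].tolist()
            want = expected.loc[expected[Columns.User] == user, Columns.Item].tolist()
            assert got == want, f"user {user}: got {got}, expected {want}"

This keeps the assertion exact once the tie rule is fixed, while surviving benign reorderings of equal or near-equal scores across torch / lockfile bumps.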
Description
Type of change
How Has This Been Tested?
Before submitting a PR, please check yourself against the following list. It would save us quite a lot of time.