Simplify hashfull calculation. #2523
Conversation
No functional change.
And another output, this time with 512 MB Hash, to show the growing fill rate:
`info depth 20 seldepth 31 multipv 1 score cp 76 lowerbound nodes 3683949 nps 1164700 hashfull 60 tbhits 0 time 3163 pv e2e4`
`table[0]` is used as often as any other position. The real issue, however, is that the calculation will now take 3 times as long and access 3 times as much memory.
No functional change.
@mstembera You're right about the usage of `table[0]`; I changed the patch accordingly. However, looping over 1,000 entries is more straightforward, imho. If this really matters in terms of speed, then that seems more related to Stockfish's overly frequent output than to this loop. I will submit a non-regression LTC test to be on the safe side.
How important is this hashfull info? Master only loops over the first 333 clusters. At first sight a
Thanks! I think in the future non-functional changes should be tested with LTC bounds but STC time control.
The patch only passed because the code doesn't get executed often. However, it's 100% obvious from inspecting the code that it's 3x slower. IMO the minor amount of simplification doesn't justify traversing 666 extra clusters for no additional benefit. It goes too far, putting too much priority on syntax over function. I would even rather just go back to the old `return cnt * 1000 / 999;`
Granted, it is 3x slower, but the speed doesn't matter, as you point out. The reported quantity now also has an error that is roughly 1/sqrt(3) times smaller, which I considered a benefit.
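For readers wondering where the 1/sqrt(3) figure comes from, here is one back-of-the-envelope model (my own gloss, not stated in the thread): treat each sampled entry as an independent Bernoulli draw with occupancy probability $p$. The standard error of the estimated fill ratio over $n$ sampled entries is then

```latex
\operatorname{SE}(\hat p) = \sqrt{\frac{p(1-p)}{n}},
\qquad
\frac{\operatorname{SE}_{n=3000}}{\operatorname{SE}_{n=999}}
= \sqrt{\frac{999}{3000}} \approx \frac{1}{\sqrt{3}}
```

so tripling the sample from 999 entries to roughly 3000 shrinks the sampling error by about 1/sqrt(3).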
That's true; however, I have never known anyone to care whether the hash is exactly, say, 50.1% or 50.3% full. 1000 entries is the minimum number necessary for full permille resolution, which is why Marco chose it.
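To make the two variants under discussion concrete, here is a self-contained sketch. The names (`ClusterSize`, `TTEntry`, `Cluster`, `genBound8`, `generation8`) mirror Stockfish's transposition table, but the definitions below are simplified stand-ins, not the real implementation. Note one ambiguity: the commit message says "1,000 entries", while the "3x slower / 666 extra clusters" discussion implies 1000 *clusters* (3000 entries); the sketch follows the latter reading.

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the transposition-table layout; real Stockfish differs.
constexpr int ClusterSize = 3;

struct TTEntry {
    uint8_t genBound8 = 0;  // generation bits live in the top 5 bits
    bool occupied(uint8_t generation) const {
        return (genBound8 & 0xF8) == generation;
    }
};

struct Cluster {
    TTEntry entry[ClusterSize];
};

constexpr int NumClusters = 4096;  // arbitrary toy table size (>= 1000)
Cluster table[NumClusters];
uint8_t generation8 = 8;  // current search generation

// Master before the patch: sample the first 333 clusters (999 entries)
// and rescale the count to permille.
int hashfull_old() {
    int cnt = 0;
    for (int i = 0; i < 333; ++i)
        for (int j = 0; j < ClusterSize; ++j)
            cnt += table[i].entry[j].occupied(generation8);
    return cnt * 1000 / (333 * ClusterSize);  // i.e. cnt * 1000 / 999
}

// The patch: sample 1000 clusters (3000 entries -- the "3x" in the thread)
// and divide the count by ClusterSize to get permille directly.
int hashfull_new() {
    int cnt = 0;
    for (int i = 0; i < 1000; ++i)
        for (int j = 0; j < ClusterSize; ++j)
            cnt += table[i].entry[j].occupied(generation8);
    return cnt / ClusterSize;
}
```

Both loops agree at the extremes (an empty table reports 0, a full one 1000); the trade-off debated above is purely the 3x larger sample versus the simpler arithmetic.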
We can simplify the calculation of the hashfull info by looping over exactly 1,000 entries, and then dividing the result by ClusterSize. Somewhat more memory accesses, somewhat more accurate.

Passed non-regression LTC
https://tests.stockfishchess.org/tests/view/5e30079dab2d69d58394fd5d
LLR: 2.94 (-2.94,2.94) {-1.50,0.50}
Total: 30125 W: 3987 L: 3926 D: 22212
Ptnml(0-2): 177, 2504, 9558, 2642, 141

closes official-stockfish#2523

No functional change.
We can simplify the calculation of the hashfull info by looping over exactly 1,000 entries,
and then dividing the result by ClusterSize.
The additional effort of looping over 2/3 more entries than current master seems to be compensated by the simpler calculation.
Passed non-regression LTC https://tests.stockfishchess.org/tests/view/5e30079dab2d69d58394fd5d
LLR: 2.94 (-2.94,2.94) {-1.50,0.50}
Total: 30125 W: 3987 L: 3926 D: 22212
Ptnml(0-2): 177, 2504, 9558, 2642, 141
No functional change.