Use a bit less code to calculate hashfull() #1830
Conversation
Since it is anyway just an estimate based on looking at 1000 elements, it is not necessary to look at 1000 successive elements, so the following (untested) does just fine as well:
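The snippet itself is not preserved in this excerpt; a hypothetical reconstruction of such a spread-out sampling scheme, using the table/entry/genBound8/generation8 names from Stockfish's TT code, might look like:

```cpp
// Hypothetical reconstruction, not the actual snippet: sample one entry
// from each of the first 1000 clusters instead of every entry of the
// first 333, cycling the in-cluster index so each slot is represented.
int TranspositionTable::hashfull() const {

  int cnt = 0;
  for (int i = 0; i < 1000; ++i)
      cnt += (table[i].entry[i % ClusterSize].genBound8 & 0xFC) == generation8;

  return cnt; // 1000 samples, so the count is already in permill
}
```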
@mstembera the rounding fix is OK for me. @vondele you are right, but it is not immediately clear why the distribution of generations within a cluster should be uniform; indeed we replace the first entry as soon as its generation is old. Maybe you are right, but some tests should be done. For instance, it should be easy to run SF computing both the old and the new code and check that the difference is, say, within 1%, raising an assert otherwise.
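A sketch of such a check (hypothetical, not part of the PR; `hashfullOld()` and `hashfullNew()` stand in for the two implementations being compared):

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical debug harness: run both estimators side by side while
// playing games and assert they never diverge by more than 1% (10 permill).
void checkHashfullEstimates(const TranspositionTable& tt) {
    int oldEstimate = tt.hashfullOld(); // current successive-entries version
    int newEstimate = tt.hashfullNew(); // proposed sampled version
    assert(std::abs(newEstimate - oldEstimate) <= 10);
}
```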
Without testing .. @mcostalba I think you're right that the occupation within the cluster is indeed non-homogeneous. Now, given we use only 1000 elements to test the occupation, our estimate is relatively rough; I'd guess a ±5% error at half filling.
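For scale (my arithmetic, not from the thread): sampling n = 1000 entries at half filling, p = 0.5, gives a standard error of sqrt(p(1 - p) / n) = sqrt(0.25 / 1000) ≈ 1.6%, so a ±5% error corresponds to roughly three standard deviations.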
Traversing 1000 clusters would also result in fetching 3 times as many cache lines compared to 333 clusters and so could be slower. |
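(For reference, and assuming the 32-byte Cluster and 64-byte cache lines of current Stockfish: 333 consecutive clusters span about 167 cache lines, while 1000 clusters span about 500, hence the factor of three.)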
@vondele even if entries within a cluster were uniform, we can't be sure it will stay that way in the future, for instance if a new replacement rule is introduced, and remembering to change the hash-measuring code whenever the update rule changes is practically impossible. So I would keep the current setup, which is agnostic about the entry distribution.
sure, I agree. |
vondele left a comment:
what about writing this without 999 as a magic constant:
return 1000 * cnt / (ClusterSize * (1000 / ClusterSize))
actually, even the two occurrences of '1000' have different meanings. The first converts to permill as requested by the UCI spec, the second relates to the number of samples taken (thus there could be a const int Samples = 333, used instead of 1000 / ClusterSize in both the loop bounds and the above return statement).
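A minimal sketch of that refactor, assuming the table/entry/genBound8/generation8 names from Stockfish's tt.cpp:

```cpp
// Sketch of the suggestion above: name the sample count once and use it
// in both the loop bound and the permill conversion.
int TranspositionTable::hashfull() const {

  constexpr int Samples = 1000 / ClusterSize; // clusters inspected (333 for ClusterSize == 3)

  int cnt = 0;
  for (int i = 0; i < Samples; ++i)
      for (int j = 0; j < ClusterSize; ++j)
          cnt += (table[i].entry[j].genBound8 & 0xFC) == generation8;

  // The leading 1000 converts to permill (UCI spec);
  // ClusterSize * Samples is the number of entries inspected (999).
  return cnt * 1000 / (ClusterSize * Samples);
}
```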
… preincrement as is more standard in the rest of stockfish. Scale result so we report 100% instead of 99.9% when completely full. No functional change. bench: 4041387
bench: 3646542
@vondele That makes sense. Fixed. |
this is ready to be merged IMO. |
It seems good to me. |
* Use a bit less code to calculate hashfull(). Change post increment to preincrement as is more standard in the rest of stockfish. Scale result so we report 100% instead of 99.9% when completely full. No functional change.
Use a bit less code to calculate hashfull().
Also change post-increment to pre-increment as is more standard in the rest of Stockfish.
No functional change.
bench: 4041387
If it's OK with the maintainers, I would also like to change the return statement so that when the hash is 100% full we actually report 100% instead of 99.9%. Currently Stockfish never reports 100% full, which is inaccurate. Please advise.
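For concreteness (my arithmetic based on the figures in the thread): with 1000 / ClusterSize = 333 clusters of 3 entries each, only 999 entries are sampled, so the unscaled count can never exceed 999 permill, i.e. 99.9%. Scaling by 1000 / 999, as in the return statement quoted in the review above, maps a completely full sample to exactly 1000 permill.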