Conversation

@mstembera
Contributor

Use a bit less code to calculate hashfull().
Also change post-increment to pre-increment, as is more standard in the rest of Stockfish.

No functional change.
bench: 4041387

If it's ok with the maintainers I would also like to change the return to

return cnt * 1000 / 999;

so that when the hash is 100% full we actually report 100 instead of 99.9. Currently Stockfish never reports 100% full, which is inaccurate. Please advise.

@vondele
Member

vondele commented Nov 25, 2018

since it is just an estimate based on looking at 1000 elements anyway, it is not necessary to look at 1000 successive elements, so the following (untested) does just fine as well:

for (int i = 0; i < 1000; ++i)
    cnt += (table[i].entry[0].genBound8 & 0xFC) == generation8;

@mcostalba
Contributor

@mstembera the rounding fix is ok for me.

@vondele you are right, but it is not immediately clear why the distribution of generations among a cluster's entries should be uniform; indeed, we replace the first entry as soon as its generation is old. Maybe you are right, but some tests should be done.

For instance, it should be easy to run SF computing both the old and the new estimates, check that the difference stays within, say, 1%, and raise an assert otherwise.

@vondele
Member

vondele commented Nov 25, 2018

Without testing: @mcostalba, I think you're right that the occupation within a cluster is indeed non-homogeneous.

Now, given that we use only 1000 elements to test the occupation, our estimate is relatively rough; I guess roughly ±5% error at half filling.

@mstembera
Contributor Author

Traversing 1000 clusters would also result in fetching 3 times as many cache lines compared to 333 clusters and so could be slower.

@mcostalba
Contributor

@vondele even if the entries within a cluster were uniform, we can't be sure it will stay that way in the future, for instance under a new replacement rule, and remembering to change the hash-measuring code whenever the replacement rule changes is practically impossible. So I would keep the current setup, which is agnostic to the entry distribution.

@vondele
Member

vondele commented Nov 25, 2018

sure, I agree.


@vondele vondele left a comment


what about writing this without 999 as a magic constant:

return 1000 * cnt / (ClusterSize * (1000 / ClusterSize))

actually, even the two occurrences of '1000' have different meanings: the first converts to permill as requested by the UCI spec, the second relates to the number of samples taken (thus there could be a const int Samples = 333, used instead of 1000 / ClusterSize in both the loop bounds and the return statement above).

mstembera added 2 commits December 17, 2018 14:50
… preincrement as is more standard

in the rest of stockfish.  Scale result so we report 100% instead of 99.9% when completely full.

No functional change.
bench: 4041387
@mstembera
Contributor Author

@vondele That makes sense. Fixed.

@vondele
Member

vondele commented Dec 18, 2018

this is ready to be merged IMO.

@mcostalba
Contributor

It seems good to me.

@mcostalba mcostalba merged commit 656aad8 into official-stockfish:master Dec 23, 2018
vondele pushed a commit to vondele/Stockfish that referenced this pull request Dec 27, 2018
* Use a bit less code to calculate hashfull(). Change post increment to preincrement as is more standard
in the rest of stockfish.  Scale result so we report 100% instead of 99.9% when completely full.

No functional change.
snicolet pushed a commit that referenced this pull request Dec 27, 2018
* Use a bit less code to calculate hashfull(). Change post increment to preincrement as is more standard
in the rest of stockfish.  Scale result so we report 100% instead of 99.9% when completely full.

No functional change.