Releases: codedeliveryservice/Reckless
Reckless 0.9.0-dev-0dd5b9ac
An early development build that includes recent updates and improvements since the release of Reckless v0.8.0.
Full Changelog: v0.8.0...v0.9.0-dev-0dd5b9ac
Reckless 0.9.0-dev-c5af2cea
An early development build that includes recent updates and improvements since the release of Reckless v0.8.0.
Full Changelog: v0.8.0...v0.9.0-dev-c5af2cea
Reckless 0.9.0-dev-fe4d4b3d
An early development build that includes recent updates and improvements since the release of Reckless v0.8.0.
Full Changelog: v0.8.0...v0.9.0-dev-fe4d4b3d
Reckless v0.8.0
Reckless has come a long way since its early days as a solo project.
During the FIDE & Google Efficient Chess AI Challenge, I worked with Shahin (@peregrineshahin) on the team that finished in second place. After the competition in late February 2025, the whole search algorithm started being rebuilt from the ground up. Shortly after, @peregrineshahin joined the project as one of its co-authors, with Styx (@styxdoto) joining a bit later.
Together, we have transformed Reckless into a formidable chess engine, moving far beyond the capabilities of its predecessor.
We are now releasing Reckless v0.8.0, one of the strongest chess engines in the world and the strongest chess engine written in Rust.
Playing Strength
Reckless v0.8.0 is enormously stronger than the previous release. In practical terms, v0.7.0 is no longer a meaningful opponent for measuring progress. Nevertheless, using the balanced 8moves_v3 opening book, the progression measures as follows:
STC 8.0+0.08s
Elo | 334.77 +- 6.38 (95%)
Conf | 8.0+0.08s Threads=1 Hash=16MB
Games | N: 10116 W: 7583 L: 38 D: 2495
Penta | [0, 4, 216, 2127, 2711]
https://recklesschess.space/test/7421/
LTC 40.0+0.40s
Elo | 301.49 +- 8.15 (95%)
Conf | 40.0+0.40s Threads=1 Hash=64MB
Games | N: 5004 W: 3506 L: 2 D: 1496
Penta | [0, 0, 150, 1200, 1152]
https://recklesschess.space/test/7422/
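For reference, the Elo figures above follow from the measured score fraction through the standard logistic model; a minimal sketch (the function name is ours, and draws count as half a point):

```rust
/// Converts a match score fraction into an Elo difference using the
/// standard logistic model.
fn elo_from_score(score: f64) -> f64 {
    -400.0 * (1.0 / score - 1.0).log10()
}

fn main() {
    // The STC result above: W 7583, L 38, D 2495 out of 10116 games.
    let score = (7583.0 + 2495.0 / 2.0) / 10116.0;
    println!("{:.2}", elo_from_score(score)); // ~334.8, matching the reported value
}
```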
Update highlights
Syzygy Tablebase Support
We have added support for Syzygy endgame tablebases with up to 7 pieces, thanks to the Fathom library.
Chess960 Support
Reckless can now play Chess960 (Fischer Random Chess), with full support for castling rules and position setup. It also handles asymmetrical starting positions, commonly referred to as Double Fischer Random Chess (DFRC).
NNUE Improvements
The original custom network trainer has been replaced with Bullet, a specialized ML library developed by @jw1912. Over 30 iterations of stronger networks have been merged, leading to a multi-layer NNUE model trained on billions of positions.
Binaries
Pre-built binaries are provided for Windows, Linux, and macOS, with versions optimized for AVX2 and AVX512, plus a generic build that runs on virtually all CPUs.
Select the binary that matches your operating system (-windows, -linux, or -macos) and your CPU capabilities (-generic, -avx2, or -avx512). On macOS, a single universal build is provided.
- Generic builds are the most portable but are significantly slower than AVX2 or AVX512 builds.
- AVX2 builds are faster and supported on most modern CPUs.
- AVX512 builds are generally the fastest but require a newer CPU.
If you're unsure which to use, you can start with the AVX512 build and fall back to AVX2 if you encounter issues.
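If you would rather check your CPU programmatically, here is a minimal Rust sketch using the standard library's runtime feature detection (the AVX-512 check requires a recent toolchain; binary name suffixes as above):

```rust
fn main() {
    #[cfg(target_arch = "x86_64")]
    {
        // Runtime checks for the instruction sets the builds are compiled for.
        if is_x86_feature_detected!("avx512f") {
            println!("use the -avx512 build");
        } else if is_x86_feature_detected!("avx2") {
            println!("use the -avx2 build");
        } else {
            println!("use the -generic build");
        }
    }
    #[cfg(not(target_arch = "x86_64"))]
    println!("use the -generic build (or the universal macOS build)");
}
```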
Looking Ahead
Since the last release, we have made over 500 commits, and the project remains very much active. We are looking forward to making Reckless better, adding new features, and more!
Reckless v0.7.0
Release Notes
The NNUE hidden layer size has been increased from 128 to 384, with further gains from adding 4 output buckets (#56) and material scaling (#62). The final architecture is (768 -> 384)x2 -> 1x4.
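To illustrate the two additions (a sketch with conventional formulas and illustrative constants, not necessarily Reckless's exact values): the output bucket is typically selected by piece count, and material scaling nudges the evaluation towards keeping pieces on the board when ahead.

```rust
/// Selects one of the 4 output heads by piece count, so different game
/// phases use different output weights (2..=32 pieces map to buckets 0..=3).
fn output_bucket(piece_count: usize) -> usize {
    (piece_count - 2) / 8
}

/// Scales the network output by the remaining material on the board.
/// The constants here are purely illustrative.
fn material_scale(eval: i32, material: i32) -> i32 {
    eval * (700 + material / 32) / 1024
}
```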
Changelog
Time management
Performance optimizations
- Allocate a quiet move list on the stack (#51)
- Implement operation fusion for NNUE (#58)
- Optimize accumulator handling (#59)
Various search improvements
- History tuning (#39)
- Null Move Pruning tuning (#54)
- Check extensions before the move loop (#40)
- Disable quiescence search pruning for recaptures (#35)
- Treat non-winning captures as unfavorable in quiescence search (#57)
- Static Exchange Evaluation (#36, #37, #44, and #61)
- Fully fractional LMR (#60); see the sketch after this list
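A minimal sketch of what "fully fractional" means: the reduction stays a float through every adjustment and is truncated to whole plies only at the end. The constants and adjustment terms here are illustrative, not Reckless's tuned values.

```rust
/// Fractional LMR: keep the reduction as a float through all adjustments
/// and truncate to whole plies only once, at the very end.
fn lmr_reduction(depth: usize, move_count: usize, improving: bool, is_quiet: bool) -> i32 {
    let mut r = 0.8 + (depth as f64).ln() * (move_count as f64).ln() / 2.25;
    if !improving {
        r += 0.9; // reduce more when the static eval is not improving
    }
    if !is_quiet {
        r -= 1.0; // reduce tactical moves less
    }
    r.max(0.0) as i32
}
```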
Features
Full Changelog: v0.6.0...v0.7.0
Self-Play Benchmark Against v0.6.0
STC 8.0+0.08s
Elo | 172.12 +- 11.31 (95%)
Conf | 8.0+0.08s Threads=1 Hash=32MB
Games | N: 2000 W: 1007 L: 90 D: 903
Penta | [6, 38, 229, 487, 240]
LTC 40.0+0.4s
Elo | 154.77 +- 13.83 (95%)
Conf | 40.0+0.40s Threads=1 Hash=128MB
Games | N: 1002 W: 449 L: 30 D: 523
Penta | [0, 16, 136, 263, 86]
Reckless v0.6.0
Release Notes
Alongside numerous search improvements and adjustments, Reckless now supports multi-threaded search, implemented using the Lazy SMP approach: all search threads share a lockless transposition table (#20, #27).
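A minimal sketch of the lockless idea, using the classic XOR validation trick (Reckless's actual entry layout is an assumption here):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// One transposition-table slot. Key and data are stored XOR-ed together,
/// so a torn write from a racing thread makes the key check fail instead
/// of returning corrupt data; no locks are needed.
#[derive(Default)]
struct Slot {
    key_xor_data: AtomicU64,
    data: AtomicU64,
}

impl Slot {
    fn store(&self, key: u64, data: u64) {
        self.key_xor_data.store(key ^ data, Ordering::Relaxed);
        self.data.store(data, Ordering::Relaxed);
    }

    fn probe(&self, key: u64) -> Option<u64> {
        let data = self.data.load(Ordering::Relaxed);
        // Accept the entry only if key and data still belong together.
        (self.key_xor_data.load(Ordering::Relaxed) ^ data == key).then_some(data)
    }
}
```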
The activation function has been switched to SCReLU (bee8f74), and three other networks (#14, #26, and #33) have been trained and used during the development process.
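SCReLU (squared clipped ReLU) clamps the pre-activation to [0, 1] and then squares it; a one-function sketch:

```rust
/// SCReLU: clamp the pre-activation to [0, 1], then square the result.
fn screlu(x: f32) -> f32 {
    x.clamp(0.0, 1.0).powi(2)
}
```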
Changelog
Time management
- Time adjustment based on the distribution of root nodes (#1); see the sketch after this list
- Cyclic TC improvements (#31)
- Fischer TC improvements (#32)
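As a sketch of the first item (constants are illustrative, not Reckless's formula): when most root nodes were spent under the best move, the search has effectively settled and the soft time limit can shrink; when effort is spread across moves, it can grow.

```rust
/// Scales the soft time limit by the fraction of root nodes spent on the
/// current best move. Constants are purely illustrative.
fn scaled_soft_limit(soft_limit_ms: u64, best_move_node_fraction: f64) -> u64 {
    let scale = (1.6 - best_move_node_fraction).clamp(0.5, 1.5);
    (soft_limit_ms as f64 * scale) as u64
}
```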
Late move reductions
History heuristics
- Follow-up move history (#11)
- Counter move history (#12)
- Linear history formula (#13)
- Separate bonus and malus (#21)
- Index by side to move in main history (#23)
Performance optimizations
- Transposition table prefetching (#4)
- Handwritten SIMD for AVX2 instructions (#16)
- Faster repetition detection (#28)
Various search improvements
- Introduce razoring (#3)
- Fail-soft null move pruning (#6)
- Probe transposition table before stand pat (#7)
- Adaptive NMP based on static evaluation (#8)
- Use transposition table score to adjust eval (#10)
- SPSA tuning session (#17)
- Move check extension inside move loop (#19)
- Update aspiration search delta function (#15); see the sketch after this list
- Reset killer moves for child nodes (#22)
- Avoid using static evaluation when in check (#24)
- Increase research depth when LMR search results are promising (#30)
- Reset killer moves before null move pruning (#34)
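A minimal sketch of an aspiration-window loop with a widening delta, the mechanism behind #15 (constants and widening policy are illustrative, not Reckless's tuned values):

```rust
const INF: i32 = 32_000;

/// Searches with a narrow window around the previous score and widens the
/// delta exponentially on every fail-low or fail-high.
fn aspiration(prev_score: i32, mut search: impl FnMut(i32, i32) -> i32) -> i32 {
    let mut delta = 16;
    let mut alpha = (prev_score - delta).max(-INF);
    let mut beta = (prev_score + delta).min(INF);
    loop {
        let score = search(alpha, beta);
        if score <= alpha {
            alpha = (score - delta).max(-INF); // fail low: widen downwards
        } else if score >= beta {
            beta = (score + delta).min(INF); // fail high: widen upwards
        } else {
            return score; // the score fell inside the window
        }
        delta += delta / 2; // grow the window on each retry
    }
}
```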
Full Changelog: v0.5.0...v0.6.0
Acknowledgments
Special thanks to @AndyGrant for kindly sharing his CPU time and for developing OpenBench, which is actively used in the development process.
Self-Play Benchmark Against v0.5.0
STC 8.0+0.08s
Elo | 155.12 +- 11.83 (95%)
Conf | 8.0+0.08s Threads=1 Hash=32MB
Games | N: 2000 W: 994 L: 156 D: 850
Penta | [8, 53, 272, 427, 240]
LTC 40.0+0.4s
Elo | 157.43 +- 15.49 (95%)
Conf | 40.0+0.40s Threads=1 Hash=128MB
Games | N: 1006 W: 474 L: 47 D: 485
Penta | [0, 20, 145, 229, 109]
Reckless v0.5.0
Release Notes
This release introduces NNUE (Efficiently Updatable Neural Network), which completely replaces the previously used HCE (Handcrafted Evaluation).
The training data was generated through self-play, starting from a randomly initialized network. The network was then retrained iteratively on freshly generated data, with each iteration improving both playing strength and data quality. The training process was carried out using a custom NNUE trainer.
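The "efficiently updatable" part refers to the first layer: when a move is made, the hidden accumulator is patched by subtracting and adding only the weight columns of the features that changed, rather than recomputing the layer from scratch. A minimal sketch (128 is the hidden size used before v0.7.0; the layout is otherwise assumed):

```rust
const HIDDEN: usize = 128;

/// Incrementally updates the first-layer accumulator after a move:
/// subtract the weights of removed features (e.g. the piece's old square)
/// and add the weights of added features (its new square).
fn update_accumulator(
    acc: &mut [i16; HIDDEN],
    removed: &[usize],
    added: &[usize],
    weights: &[[i16; HIDDEN]],
) {
    for &feature in removed {
        for i in 0..HIDDEN {
            acc[i] -= weights[feature][i];
        }
    }
    for &feature in added {
        for i in 0..HIDDEN {
            acc[i] += weights[feature][i];
        }
    }
}
```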
Additionally, a few minor changes and refactoring have been made.
Full Changelog: v0.4.0...v0.5.0
UCI Support
- Added support for the UCI `go nodes <x>` command.
- Added the custom `eval` command.
Self-Play Benchmark Against v0.4.0
STC 8+0.08s
Score of Reckless 0.5.0 vs Reckless 0.4.0: 811 - 46 - 143 [0.882] 1000
... Reckless 0.5.0 playing White: 411 - 22 - 67 [0.889] 500
... Reckless 0.5.0 playing Black: 400 - 24 - 76 [0.876] 500
... White vs Black: 435 - 422 - 143 [0.506] 1000
Elo difference: 350.3 +/- 27.2, LOS: 100.0 %, DrawRatio: 14.3 %
LTC 40+0.4s
Score of Reckless 0.5.0 vs Reckless 0.4.0: 376 - 18 - 106 [0.858] 500
... Reckless 0.5.0 playing White: 203 - 3 - 44 [0.900] 250
... Reckless 0.5.0 playing Black: 173 - 15 - 62 [0.816] 250
... White vs Black: 218 - 176 - 106 [0.542] 500
Elo difference: 312.5 +/- 33.0, LOS: 100.0 %, DrawRatio: 21.2 %
Reckless v0.4.0
Search Improvements
- Add internal iterative reductions.
- Add futility pruning.
- Add `improving` heuristic.
- Implement a logarithmic formula for LMR, adjusted based on the history heuristic.
- Adjust NMP based on depth with added zugzwang risk minimization.
- Persist the history table between searches and use a gravity formula (see the sketch after this list).
- Make use of TT in the quiescence search.
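A sketch of the gravity formula (the cap is illustrative): each update pulls the entry towards the cap with diminishing strength, so older scores decay naturally and the value can never overflow.

```rust
const MAX_HISTORY: i32 = 16_384; // illustrative cap, not Reckless's value

/// Gravity-style history update: the closer the entry already is to the
/// cap, the smaller the effective adjustment, so values decay over time
/// and stay bounded by MAX_HISTORY.
fn update_history(entry: &mut i32, bonus: i32) {
    *entry += bonus - *entry * bonus.abs() / MAX_HISTORY;
}
```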
Evaluation Improvements
- Enemy king-relative PST
- Passed pawns
- Isolated pawns
Other Changes
- Implemented a Triangular PV table to report a full-length principal variation line.
Bug Fixes
- Fixed a formatting bug when reporting mating scores.
- Fixed a cache size reset bug when the `ucinewgame` command is received.
Self-Play Benchmark Against v0.3.0
STC 10+0.1s
Score of Reckless v0.4.0 vs Reckless v0.3.0: 539 - 47 - 164 [0.828] 750
... Reckless v0.4.0 playing White: 284 - 17 - 74 [0.856] 375
... Reckless v0.4.0 playing Black: 255 - 30 - 90 [0.800] 375
... White vs Black: 314 - 272 - 164 [0.528] 750
Elo difference: 273.0 +/- 25.9, LOS: 100.0 %, DrawRatio: 21.9 %
LTC 60+0.6s
Score of Reckless v0.4.0 vs Reckless v0.3.0: 287 - 15 - 98 [0.840] 400
... Reckless v0.4.0 playing White: 152 - 5 - 43 [0.868] 200
... Reckless v0.4.0 playing Black: 135 - 10 - 55 [0.813] 200
... White vs Black: 162 - 140 - 98 [0.527] 400
Elo difference: 288.1 +/- 34.5, LOS: 100.0 %, DrawRatio: 24.5 %
Reckless v0.3.0
Evaluation Improvements
- King-relative PST has replaced material evaluation and traditional piece-square tables.
- Weight tuning has been performed using a gradient descent tuner.
- A tempo bonus for the side to move has been added.
Search Improvements
- Optimal Time Management has been introduced for games with incremental time controls.
- Adaptive Late Move Reductions have been implemented, replacing a constant reduction value.
- Quiet Late Move Pruning has been implemented.
- Penalties have been introduced for quiet moves that fail to cause a beta cutoff in fail-high nodes.
- Minor search enhancements, along with other improvements, have also led to a nice Elo gain.
Self-Play Benchmark Against v0.2.0
STC 10+0.1s
Score of Reckless v0.3.0 vs Reckless v0.2.0: 398 - 38 - 64 [0.860] 500
... Reckless v0.3.0 playing White: 205 - 14 - 31 [0.882] 250
... Reckless v0.3.0 playing Black: 193 - 24 - 33 [0.838] 250
... White vs Black: 229 - 207 - 64 [0.522] 500
Elo difference: 315.3 +/- 37.9, LOS: 100.0 %, DrawRatio: 12.8 %
LTC 60+0.6s
Score of Reckless v0.3.0 vs Reckless v0.2.0: 248 - 12 - 40 [0.893] 300
... Reckless v0.3.0 playing White: 131 - 4 - 15 [0.923] 150
... Reckless v0.3.0 playing Black: 117 - 8 - 25 [0.863] 150
... White vs Black: 139 - 121 - 40 [0.530] 300
Elo difference: 369.2 +/- 52.4, LOS: 100.0 %, DrawRatio: 13.3 %
Reckless v0.2.0
General Changes
- Implement tapered evaluation with weight tuning (see the sketch after this list)
- Introduce Reversed Futility Pruning
- Switch from Fail-Hard to Fail-Soft framework for Alpha Beta pruning
- Switch from Make/Undo to Copy/Make
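A sketch of tapered evaluation using the conventional phase weights (Reckless's exact weights are an assumption): the score is interpolated between middlegame and endgame terms according to the remaining material.

```rust
/// Material-based game phase: knights and bishops count 1, rooks 2,
/// queens 4, for a maximum of 24 with all pieces on the board.
fn game_phase(knights: i32, bishops: i32, rooks: i32, queens: i32) -> i32 {
    (knights + bishops + 2 * rooks + 4 * queens).min(24)
}

/// Tapered evaluation: interpolate between the middlegame and endgame
/// scores according to the phase.
fn tapered_eval(mg: i32, eg: i32, phase: i32) -> i32 {
    (mg * phase + eg * (24 - phase)) / 24
}
```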
UCI Protocol Improvements
The UCI protocol implementation now includes support for reporting seldepth and hashfull parameters.
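Here, seldepth is the deepest ply the search actually reached (including extensions) and hashfull is the transposition-table fill rate in permille. A minimal sketch of the reporting (function names are ours):

```rust
/// hashfull is reported in permille of transposition-table slots in use.
fn hashfull(used_slots: usize, total_slots: usize) -> usize {
    used_slots * 1000 / total_slots
}

fn report(depth: u32, seldepth: u32, used_slots: usize, total_slots: usize) {
    println!(
        "info depth {depth} seldepth {seldepth} hashfull {}",
        hashfull(used_slots, total_slots)
    );
}
```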
Self-Play Benchmark Against v0.1.0
STC 10+0.1s
Score of Reckless v0.2.0 vs Reckless v0.1.0: 237 - 29 - 34 [0.847] 300
... Reckless v0.2.0 playing White: 120 - 15 - 15 [0.850] 150
... Reckless v0.2.0 playing Black: 117 - 14 - 19 [0.843] 150
... White vs Black: 134 - 132 - 34 [0.503] 300
Elo difference: 296.8 +/- 48.9, LOS: 100.0 %, DrawRatio: 11.3 %
LTC 60+0.6s
Score of Reckless v0.2.0 vs Reckless v0.1.0: 87 - 5 - 8 [0.910] 100
... Reckless v0.2.0 playing White: 44 - 3 - 3 [0.910] 50
... Reckless v0.2.0 playing Black: 43 - 2 - 5 [0.910] 50
... White vs Black: 46 - 46 - 8 [0.500] 100
Elo difference: 401.9 +/- 114.5, LOS: 100.0 %, DrawRatio: 8.0 %