The **Cerebrum library** can be used to easily train a "[NNUE](https://www.chessprogramming.org/NNUE)-like" neural network for a chess engine. It was originally designed and built for the [Orion UCI chess engine](https://www.orionchess.com/).

It is composed of a few Python scripts for data preparation (optional), one Python script for **training**, and C code for **inference**.

The default network architecture is perspective-based, with one hidden layer. Network weights are quantized to maximize inference speed.

Code is also provided to train a **first** network using only game results, parsed from PGN files provided by the user, and material values, computed on the fly (optional).

Feel free to adapt the library to your own needs and/or use newer/better NNUE libraries for greater flexibility and performance (e.g. [Bullet](https://github.com/jw1912/bullet/tree/main))!

<br/>

## Changes in 2.0

- **Change in network outputs**: networks now directly predict scores in centipawns → _breaking change!_
- **Tiny change to the data format** for data preparation → _breaking change!_

<br/>

## Changes in 1.0

- Training now relies on game results (from which a win ratio is deduced for each position during a game) and material only!
- Data preparation scripts are provided to automate the preparation of training data (using one or several PGN files)
- Network quantization is performed at the end of each training epoch, allowing the choice between better accuracy and increased inference speed
- A basic UCI chess engine is provided in two versions (standard or quantized) to demonstrate how to load and use the network

<br/>

## Content and prerequisites (Windows)

To use the library, you will first need to:

- Download the `v2.0` folder of this repository
- Install a Python runtime: https://www.python.org/
- Install some Python libraries: `pip install tqdm chess`
- Install the PyTorch library: `pip install torch` or, if you have an NVIDIA GPU, `pip install torch --index-url https://download.pytorch.org/whl/cu128`

<br/>

Optionally, if you want to train a **first** network from PGN files:

- Download the [pgn-extract](https://www.cs.kent.ac.uk/people/staff/djb/pgn-extract/) tool and put the `pgn-extract.exe` file in the folder `./1. data preparation (optional)/`

<br/>

## Usage (Windows)

Prepare a file containing positions and evaluations. Each line of the file must contain a FEN string followed by its evaluation (in pawns), separated by a comma.

Example:

- _r5k1/5pp1/pR2p3/1p1rP3/7P/R3P3/P6P/6K1 w - -,-4.5000_
- _6k1/ppp5/8/3P1p2/PP1b4/5pPp/5P1K/8 b - -,6.5000_
- _3r3k/p4pp1/4p2p/2pRq3/8/PP2P2P/2Q2PP1/2R3K1 b - -,-2.5000_
- _1r4k1/q2pbp1p/4n1p1/p1pQP2P/Rr1nB1N1/4B1P1/5PK1/2R5 w - -,3.5000_
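
This format can be read with a few lines of standard-library Python. The `parse_line` helper below is only an illustrative sketch, not part of the library; it relies on the fact that a FEN string never contains a comma:

```python
def parse_line(line):
    # A FEN string never contains a comma, so splitting on the last
    # comma cleanly separates the position from its evaluation
    # (which may be negative, e.g. "-4.5000")
    fen, evaluation = line.strip().rsplit(",", 1)
    return fen, float(evaluation)

fen, score = parse_line("r5k1/5pp1/pR2p3/1p1rP3/7P/R3P3/P6P/6K1 w - -,-4.5000")
print(fen, score)  # the FEN string and -4.5
```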

Copy the `positions-shuffled.txt` file to the folder `./2. training/positions/`.
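
The file name suggests that, if you build the positions file yourself, its lines should be in random order before training. A possible standard-library sketch (the `shuffle_file` helper and the file names passed to it are illustrative, not part of the library):

```python
import random

def shuffle_file(src, dst, seed=None):
    # Shuffle the prepared positions so that consecutive training
    # samples are not correlated by game order
    with open(src) as f:
        lines = f.readlines()
    random.Random(seed).shuffle(lines)
    with open(dst, "w") as f:
        f.writelines(lines)

# e.g. shuffle_file("positions.txt", "positions-shuffled.txt")
```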

<br/>

### Data preparation (alternative)

Prepare one or several PGN files containing full games and put them in the folder `./1. data preparation (optional)/pgn/`.

Then launch the script `prepare.bat` in the folder `./1. data preparation (optional)/` to obtain a file named `positions-shuffled.txt`, which will be stored in the same folder.

Copy the `positions-shuffled.txt` file to the folder `./2. training/positions/`.

<br/>

### Training

You can configure the network architecture by modifying the script `train.py` in the folder `./2. training/`.

Supported architectures are:
- `2x(768→A)→1` (no hidden layer)
- `2x(768→A)→B→1` (one hidden layer)
- `2x(768→A)→B→C→1` (two hidden layers)

_(where A, B and C are multiples of 32, e.g. `2x(768→128)→32→1` for `A=128` and `B=32`)_
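
The `2x(768→A)→B→1` shape can be pictured as a shared 768→A feature transformer applied once per perspective, the two accumulators concatenated, and a hidden layer of size B feeding the single output. The pure-Python sketch below illustrates only this shape; the sizes, the clipped-ReLU activation, and the exact wiring are assumptions for illustration, not the library's actual implementation:

```python
A, B = 32, 32  # illustrative sizes (the library expects multiples of 32)

def clipped_relu(x):
    return min(max(x, 0.0), 1.0)

def forward(own_features, opp_features, w1, b1, w2, b2, w3, b3):
    # w1 (768 x A) and b1 (A) form the shared feature transformer,
    # applied to the sparse input of each perspective
    def transform(active):  # active = indices of the "on" inputs
        acc = list(b1)
        for f in active:
            for j in range(A):
                acc[j] += w1[f][j]
        return [clipped_relu(v) for v in acc]

    x = transform(own_features) + transform(opp_features)  # 2*A values
    h = [clipped_relu(sum(x[i] * w2[i][j] for i in range(2 * A)) + b2[j])
         for j in range(B)]
    return sum(h[j] * w3[j] for j in range(B)) + b3  # single scalar output
```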

Trained networks will be located in the folder `./2. training/networks/`. One network will be saved at the end of each training epoch.

By default, `epoch-11-q.txt` will be the last quantized network.
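
Quantization itself boils down to scaling float weights and clamping them to a small integer range. A minimal sketch of the idea (the scale factor and the signed 8-bit bounds are illustrative assumptions; the library's actual scheme may differ):

```python
def quantize(weights, scale=64):
    # Scale each float weight and clamp it to the signed 8-bit range;
    # inference then works on small integers instead of floats
    return [max(-127, min(127, round(w * scale))) for w in weights]

print(quantize([0.5, -0.25, 10.0]))  # [32, -16, 127]
```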

<br/>

### How to use trained networks

Trained networks can now be used in your own engine, using your own code, or using the inference C code provided in the `./3. inference/` folder.

<br/>

## How to configure name and author

You can adjust the name and author of the trained networks:

- Before training, by modifying the `NN_NAME` (default = "Cerebrum 2.0") and `NN_AUTHOR` (default = "David Carteau") variables in the script `train.py` located in the folder `./2. training/`
- After training, by modifying the first two lines of the generated networks (default = "name=Cerebrum 2.0" and "author=David Carteau")
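
Since the name and author live on the first two lines of a generated network file, the after-training edit can also be scripted. A small illustrative sketch (the `set_network_identity` helper is not part of the library):

```python
def set_network_identity(path, name, author):
    # The first two lines of a generated network file hold its
    # identity, e.g. "name=Cerebrum 2.0" and "author=David Carteau"
    with open(path) as f:
        lines = f.readlines()
    lines[0] = f"name={name}\n"
    lines[1] = f"author={author}\n"
    with open(path, "w") as f:
        f.writelines(lines)

# e.g. set_network_identity("epoch-11-q.txt", "MyNet 1.0", "Me")
```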

<br/>

You can adjust more parameters: open and inspect the provided Python scripts!

<br/>

## Contribute

If you want to help me improve the library, do not hesitate to contact me via the [talkchess.com](https://www.talkchess.com) forum!

<br/>

## Copyright, license

Copyright 2025 by David Carteau. All rights reserved.

The Cerebrum library is licensed under the **MIT License** (see "LICENSE" and "/v2.0/license.txt" files).