feed.xml
<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.8.7">Jekyll</generator><link href="http://cppalliance.org/feed.xml" rel="self" type="application/atom+xml" /><link href="http://cppalliance.org/" rel="alternate" type="text/html" /><updated>2026-04-03T11:58:14+00:00</updated><id>http://cppalliance.org/feed.xml</id><title type="html">The C++ Alliance</title><subtitle>The C++ Alliance is dedicated to helping the C++ programming language evolve. We see it developing as an ecosystem of open source libraries and as a growing community of those who contribute to those libraries.</subtitle><entry><title type="html">Hubs, intervals and math</title><link href="http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update.html" rel="alternate" type="text/html" title="Hubs, intervals and math" /><published>2026-04-02T00:00:00+00:00</published><updated>2026-04-02T00:00:00+00:00</updated><id>http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update.html"><p>During Q1 2026, I’ve been working in the following areas:</p>
<h3 id="boostcontainerhub"><code>boost::container::hub</code></h3>
<p><a href="https://github.com/joaquintides/hub"><code>boost::container::hub</code></a> is a nearly drop-in replacement for
C++26 <a href="https://eel.is/c++draft/sequences#hive"><code>std::hive</code></a>, sporting a simpler data structure and
providing competitive performance with respect to the de facto reference implementation
<a href="https://github.com/mattreecebentley/plf_hive"><code>plf::hive</code></a>. When I first read about <code>std::hive</code>,
I couldn’t help thinking how complex the internal design of the container is, and wondered
if something leaner could in fact be more effective. <code>boost::container::hub</code> critically relies
on two realizations:</p>
<ul>
<li>Identification of empty slots by way of <a href="https://en.cppreference.com/w/cpp/numeric/countr_zero.html"><code>std::countr_zero</code></a>
operations on a bitmask is extremely fast.</li>
<li>Modern allocators are very fast, too: <code>boost::container::hub</code> does many more allocations
than <code>plf::hive</code>, but this doesn’t degrade its performance significantly (although it affects
cache locality).</li>
</ul>
<p><code>boost::container::hub</code> is formally proposed for inclusion in Boost.Container and will be
officially reviewed April 16-26. Ion Gaztañaga serves as the review manager.</p>
<h3 id="using-stdcpp-2026">using std::cpp 2026</h3>
<p>I gave my talk <a href="https://github.com/joaquintides/usingstdcpp2026">“The Mathematical Mind of a C++ Programmer”</a>
at the <a href="https://eventos.uc3m.es/141471/detail/using-std-cpp-2026.html">using std::cpp 2026</a> conference
taking place in Madrid during March 16-19. I had a lot of fun preparing the presentation and
delivering the actual talk, and some interesting discussions were had around it.
This is a subject I’ve been wanting to talk about for decades, so I’m somewhat relieved I finally
got it over with this year. Always happy to discuss C++ and math, so if you have feedback
or want to continue the conversation, please reach out to me.</p>
<h3 id="boostunordered">Boost.Unordered</h3>
<ul>
<li>Wrote maintenance fixes
<a href="https://github.com/boostorg/unordered/pull/328">PR#328</a>,
<a href="https://github.com/boostorg/unordered/pull/335">PR#335</a>,
<a href="https://github.com/boostorg/unordered/pull/336">PR#336</a>,
<a href="https://github.com/boostorg/unordered/pull/337">PR#337</a>,
<a href="https://github.com/boostorg/unordered/pull/339">PR#339</a>,
<a href="https://github.com/boostorg/unordered/pull/344">PR#344</a>,
<a href="https://github.com/boostorg/unordered/pull/345">PR#345</a>. Some of these fixes are related
to Node.js vulnerabilities in the Antora setup used for doc building: as the number
of Boost libraries using Antora is bound to grow, maybe we should think of an automated
way to get these vulnerabilities fixed for the whole project.</li>
<li>Reviewed and merged
<a href="https://github.com/boostorg/unordered/pull/317">PR#317</a>,
<a href="https://github.com/boostorg/unordered/pull/332">PR#332</a>,
<a href="https://github.com/boostorg/unordered/pull/334">PR#334</a>,
<a href="https://github.com/boostorg/unordered/pull/341">PR#341</a>,
<a href="https://github.com/boostorg/unordered/pull/342">PR#342</a>. Many thanks to
Sam Darwin, Braden Ganetsky and Andrey Semashev for their contributions.</li>
</ul>
<h3 id="boostbimap">Boost.Bimap</h3>
<p>Merged
<a href="https://github.com/boostorg/bimap/pull/31">PR#31</a> (<code>std::initializer_list</code>
constructor) and provided testing and documentation for this new
feature (<a href="https://github.com/boostorg/bimap/pull/54">PR#54</a>). The original
PR had been sitting silently in the queue for more than four years, and it
was only when it was brought to my attention in a Reddit conversation that
I got to take a look at it. Boost.Bimap needs an active maintainer;
I guess I could become that person.</p>
<h3 id="boosticl">Boost.ICL</h3>
<p><a href="https://github.com/llvm/llvm-project/pull/161366">Recent changes</a> in libc++ v22
code for associative container lookup have resulted in the
<a href="https://github.com/boostorg/icl/issues/51">breakage of Boost.ICL</a>.
My understanding is that the changes in libc++ are not
standards-conformant, and there’s an <a href="https://github.com/llvm/llvm-project/issues/187667">ongoing discussion</a>
on that; in the meantime, I wrote and proposed a <a href="https://github.com/boostorg/icl/pull/54">PR</a>
to Boost.ICL that fixes the problem (pending acceptance).</p>
<h3 id="support-to-the-community">Support to the community</h3>
<ul>
<li>I’ve been helping a bit with Mark Cooper’s very successful
<a href="https://x.com/search?q=%22Boost%20Blueprint%22&amp;src=typed_query&amp;f=live">Boost Blueprint</a>
series on X.</li>
<li>Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).</li>
</ul></content><author><name></name></author><category term="joaquin" /><summary type="html">During Q1 2026, I’ve been working in the following areas: boost::container::hub boost::container::hub is a nearly drop-in replacement of C++26 std::hive sporting a simpler data structure and providing competitive performance with respect to the de facto reference implementation plf::hive. When I first read about std::hive, I couldn’t help thinking how complex the internal design of the container is, and wondered if something leaner could in fact be more effective. boost::container::hub critically relies on two realizations: Identification of empty slots by way of std::countr_zero operations on a bitmask is extremely fast. Modern allocators are very fast, too: boost::container::hub does many more allocations than plf::hive, but this doesn’t degrade its performance significantly (although it affects cache locality). boost::container::hub is formally proposed for inclusion in Boost.Container and will be officially reviewed April 16-26. Ion Gaztañaga serves as the review manager. using std::cpp 2026 I gave my talk “The Mathematical Mind of a C++ Programmer” at the using std::cpp 2026 conference taking place in Madrid during March 16-19. I had a lot of fun preparing the presentation and delivering the actual talk, and some interesting discussions were had around it. This is a subject I’ve been wanting to talk about for decades, so I’m somewhat relieved I finally got it over with this year. Always happy to discuss C++ and math, so if you have feedback or want to continue the conversation, please reach out to me. Boost.Unordered Written maintenance fixes PR#328, PR#335, PR#336, PR#337, PR#339, PR#344, PR#345. 
Some of these fixes are related to Node.js vulnerabilities in the Antora setup used for doc building: as the number of Boost libraries using Antora is bound to grow, maybe we should think of an automated way to get these vulnerabilities fixed for the whole project. Reviewed and merged PR#317, PR#332, PR#334, PR#341, PR#342. Many thanks to Sam Darwin, Braden Ganetsky and Andrey Semashev for their contributions. Boost.Bimap Merged PR#31 (std::initializer_list constructor) and provided testing and documentation for this new feature (PR#54). The original PR had been sitting silently in the queue for more than four years, and it was only when it was brought to my attention in a Reddit conversation that I got to take a look at it. Boost.Bimap needs an active maintainer; I guess I could become that person. Boost.ICL Recent changes in libc++ v22 code for associative container lookup have resulted in the breakage of Boost.ICL. My understanding is that the changes in libc++ are not standards-conformant, and there’s an ongoing discussion on that; in the meantime, I wrote and proposed a PR to Boost.ICL that fixes the problem (pending acceptance). Support to the community I’ve been helping a bit with Mark Cooper’s very successful Boost Blueprint series on X. Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).</summary></entry><entry><title type="html">Systems, CI Updates Q1 2026</title><link href="http://cppalliance.org/sam/2026/03/31/SamsQ1Update.html" rel="alternate" type="text/html" title="Systems, CI Updates Q1 2026" /><published>2026-03-31T00:00:00+00:00</published><updated>2026-03-31T00:00:00+00:00</updated><id>http://cppalliance.org/sam/2026/03/31/SamsQ1Update</id><content type="html" xml:base="http://cppalliance.org/sam/2026/03/31/SamsQ1Update.html"><h3 id="code-coverage-reports---designing-new-gcovr-templates">Code Coverage Reports - designing new GCOVR templates</h3>
<p>A major effort this quarter, continuing since it was mentioned in the last newsletter, is the development of codecov-like coverage reports that run in GitHub Actions and are hosted on GitHub Pages. Instructions: <a href="https://github.com/boostorg/boost-ci/blob/master/docs/code-coverage.md">Code Coverage with Github Actions and Github Pages</a>. The process has really highlighted a phenomenon in open-source software where, by publishing something to the whole community, end-users respond with their own suggestions and fixes, and everything improves iteratively. It would not have happened otherwise. The upstream GCOVR project has taken an interest in the templates, and we are working on merging them into the main repository for all gcovr users. Boost contributors and gcovr maintainers have suggested numerous modifications to the templates. Great work by Julio Estrada on the template development.</p>
<ul>
<li>Better full page scrolling of C++ source code files</li>
<li>Include ‘functions’ listings on every page</li>
<li>Optionally disable branch coverage</li>
<li>Purposely restrict coverage directories to src/ and include/</li>
<li>Another scrolling bug fixed</li>
<li>Both blue and green colored themes</li>
<li>Codacy linting</li>
<li>New forward and back buttons. Allows navigation to each “miss” and subsequent pages</li>
</ul>
<h3 id="server-hosting">Server Hosting</h3>
<p>This quarter we decommissioned the Rackspace servers which had been in service 10-15 years. Rene provided a nice announcement:</p>
<p><a href="https://lists.boost.org/archives/list/boost@lists.boost.org/thread/XYFD42TTQRYHWTLGP6GCIZQ6NHCZLNQT/">Farewell to Wowbagger - End of an Era for boost.org</a></p>
<p>There was more to do than just delete servers: I built a new results.boost.org FTP server replacing the preexisting FTP server used by regression.boost.org, then configured and tested it. Inventoried the old machines, including a monitoring server. Built a replacement for wowbagger, called wowbagger2, to host a copy of the website at original.boost.org. The monthly cost of a small GCP Compute instance seems to be around 5% of the Rackspace legacy cloud server. Components: Ubuntu 24.04, Apache, and a PHP 5 PPA. “original.boost.org” continues to host a copy of the earlier boost.org website for comparison and development purposes, and is interesting to check.</p>
<p>Launched server instances for corosio.org and paperflow.</p>
<h3 id="fil-c">Fil-C</h3>
<p>Working with Tom Kent to add <a href="https://github.com/pizlonator/fil-c">Fil-C</a> tests into the <a href="https://regression.boost.org/">regression matrix</a>.
Built a Fil-C container image based on the Drone images and debugged the build process.
After a few roadblocks, the latest news is that Fil-C seems to be building successfully. This is not quite finished but should be online soon.</p>
<h3 id="boost-release-process-boostorgrelease-tools">Boost release process boostorg/release-tools</h3>
<p>The boostorg/boost CircleCI jobs often threaten to cross the 1-hour time limit. Increased parallel processes from 4 to 8, and increased the instance size from medium to large.
Yet another adjustment: the releases use 4 compression formats (gz, bz2, 7z, zip), and there are drop-in replacement programs that
go much faster than the standard tools by utilizing parallelization (lbzip2, pigz). The substitute binaries were applied to publish-releases.py recently, and now the same idea to ci_boost_release.py. All of this reduced the CircleCI job time by many minutes.</p>
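<p>The substitution can be sketched as follows (an illustrative shell fragment, not the actual release-tools code; the file name is a placeholder). pigz writes ordinary gzip streams, so downstream consumers need no changes; the same applies to lbzip2 for bzip2.</p>

```shell
# Prefer the parallel compressor when installed; otherwise fall back to the
# standard single-threaded tool. The output format is identical either way.
GZ=$(command -v pigz || command -v gzip)

echo "release payload" > payload.txt   # placeholder file
"$GZ" -kf payload.txt                  # keeps payload.txt, writes payload.txt.gz
```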
<p>Certain boost library pull requests were finally merged after a long delay allowing an upgrade of the Sphinx pip package. Tested a superproject container image for the CircleCI jobs with updated pip packages. Boost is currently in a code freeze so this will not go live until after 1.91.0. Sphinx docs continue to deal with upgrade incompatibilities. I prepared another set of pull requests to send to boost libraries using Sphinx.</p>
<h3 id="doc-previews-and-doc-builds">Doc Previews and Doc Builds</h3>
<p>Antora docs usually show an “Edit this Page” link. Recently a couple of Alliance developers commented that the link didn’t quite work in some of the doc previews, and so that opened a topic to research solutions and make the Antora edit-this-page feature more robust if possible. The issue is that Boost libraries are git submodules. When working as expected, submodules are checked out as “HEAD detached at a74967f0” rather than “develop”. If Antora’s edit-this-page code sees “HEAD detached at a74967f0”, it will default to the path HEAD, which is wrong on the GitHub side. A solution we found (credit to Ruben Perez) is to set the Antora config to <code>edit_url: '{web_url}/edit/develop/{path}'</code>. Don’t leave a <code>{ref}</code>-type variable in the path.</p>
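<p>In an Antora playbook this looks roughly like the following (a sketch; the repository URL is a placeholder):</p>

```yaml
# antora-playbook.yml (sketch): hard-code the develop branch in the edit link
# so a detached-HEAD submodule checkout cannot leak into the URL.
content:
  sources:
    - url: https://github.com/boostorg/some-library   # placeholder
      start_path: doc
      edit_url: '{web_url}/edit/develop/{path}'       # no ref-style variable
```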
<p>Rolling out the antora-downloads-extension to numerous boost and alliance repositories. It will retry the ui-bundle download.</p>
<p>Refactored the release-tools build_docs scripts so that the gems and pip packages are organized into a format that matches Gemfile and requirements.txt files, instead of the script’s previous ad-hoc “gem install package” invocations. By using a Gemfile, the script becomes compatible with other build systems, so content can be copy-pasted easily.</p>
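<p>The resulting layout is roughly the following (a sketch; the gem names and version pins are illustrative, not the actual dependency set):</p>

```ruby
# Gemfile (sketch): declare the doc-build gems declaratively so any
# Bundler-based system can reproduce the environment, instead of a series
# of ad-hoc "gem install <package>" calls inside the script.
source 'https://rubygems.org'

gem 'asciidoctor', '~> 2.0'   # illustrative pin
gem 'rouge',       '~> 4.0'   # illustrative pin
```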
<p>CircleCI superproject builds use docbook-xml.zip, whose download URL broke. Switched the link address, and we are also hosting a copy of the file at https://dl.cpp.al/misc/docbook-xml.zip</p>
<h3 id="boost-website-boostorgwebsite-v2">Boost website boostorg/website-v2</h3>
<p>Collaborated on on-boarding the consulting company Metalab, who are working on V3, the next iteration of the boost.org website.</p>
<p>Disabled Fastly caching to assist Metalab developers.</p>
<p>Attended Gitflow workflow planning meetings.</p>
<p>Discussions about how Tools should be presented on the libraries pages.</p>
<p>On the DB servers, adjusted the PostgreSQL authentication configuration from md5 to scram-sha-256 on all databases and in multiple Ansible roles. This turns out to be a largely superficial change, though still worth doing: newer PostgreSQL will use scram-sha-256 behind the scenes regardless.</p>
<p>Wrote deploy-qa.sh, a script to enable metalab QA engineers to deploy a pull request onto a test server. The precise git SHA commit of any open pull request can be tested.</p>
<p>Wrote upload-images.sh, a script to store Bob Ostrom’s boost cartoons in S3 instead of the github repo.</p>
<h3 id="mailman3">Mailman3</h3>
<p>Synced production lists to the staging server. Wrote a document in the cppalliance/boost-mailman repo explaining how the multi-step process of syncing can be done.</p>
<h3 id="boostorg">boostorg</h3>
<p>Migrated cppalliance/decimal to boostorg/decimal.</p>
<h3 id="jenkins">Jenkins</h3>
<p>The Jenkins server builds documentation previews for dozens of boostorg and cppalliance repositories, where each job is assigned its own “workspace” directory and then proceeds to install 1GB of node_modules. That was happening for every build and every pull request, and the disk space on the server kept filling up, another 100GB every few weeks. Rather than continue to resize the disk, or delete jobs too aggressively, was there an opportunity for optimization? Yes. In the superproject container image, the Node.js installation was relocated to /opt/nvm instead of root’s home directory. The /opt/nvm installation can now be “shared” by other jobs, which reduces space, and jobs conditionally check whether mermaid is needed and/or already available in /opt/nvm. With these modifications, each job no longer needs to install a large number of npm packages, so the job size is drastically reduced.</p>
<p>Upgraded the server and all plugins; necessary to fix spurious bugs in certain Jenkins jobs.</p>
<p>While debugging Jenkins runners, set the subnet and zone in the cloud server configurations.</p>
<p>Fixed the lcov jobs, which need cxxstd=20.</p>
<p>Migrated many administrative scripts from a local directory on the server to the jenkins-ci repository, revising, cleaning, and discarding certain scripts.</p>
<p>Dmitry contributed diff-reports that should now appear in every pull request which has been configured for LCOV previews.</p>
<p>Implemented flags in the lcov build scripts: <code>--skip-gcovr</code>, <code>--skip-genhtml</code>, <code>--skip-diff-report</code>, <code>--only-gcovr</code>.</p>
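<p>Such flags can be parsed with a plain case loop; the following is a hedged sketch (the variable names are illustrative, not the actual lcov build scripts):</p>

```shell
# Sketch of --skip-* / --only-* flag handling; variable names are
# illustrative, not taken from the real scripts.
SKIP_GCOVR=0 SKIP_GENHTML=0 SKIP_DIFF_REPORT=0 ONLY_GCOVR=0
for arg in "$@"; do
    case "$arg" in
        --skip-gcovr)       SKIP_GCOVR=1 ;;
        --skip-genhtml)     SKIP_GENHTML=1 ;;
        --skip-diff-report) SKIP_DIFF_REPORT=1 ;;
        --only-gcovr)       ONLY_GCOVR=1; SKIP_GENHTML=1; SKIP_DIFF_REPORT=1 ;;
        *) echo "unknown flag: $arg" >&2; exit 2 ;;
    esac
done
# With no flags, every stage runs: gcovr=1 genhtml=1
echo "gcovr=$((1 - SKIP_GCOVR)) genhtml=$((1 - SKIP_GENHTML))"
```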
<p>Added an Ansible role task to install the check_jenkins_queue Nagios plugin automatically.</p>
<h3 id="gha">GHA</h3>
<p>Completed a major upgrade of the Terraform installation which had lagged upstream code by nearly two years.</p>
<p>Deployed a series of GitHub Actions runners for Joaquin’s latest benchmarks at https://github.com/boostorg/boost_hub_benchmarks. Installed latest VS2026. MacOS upgrade to 26.3.</p>
<h3 id="drone">Drone</h3>
<p>Launched new MacOS 26 drone runners, and FreeBSD 15.0 drone runners.</p></content><author><name></name></author><category term="sam" /><summary type="html">Code Coverage Reports - designing new GCOVR templates A major effort this quarter, continuing since it was mentioned in the last newsletter, is the development of codecov-like coverage reports that run in GitHub Actions and are hosted on GitHub Pages. Instructions: Code Coverage with Github Actions and Github Pages. The process has really highlighted a phenomenon in open-source software where, by publishing something to the whole community, end-users respond with their own suggestions and fixes, and everything improves iteratively. It would not have happened otherwise. The upstream GCOVR project has taken an interest in the templates and we are working on merging them into the main repository for all gcovr users. Boost contributors and gcovr maintainers have suggested numerous modifications to the templates. Great work by Julio Estrada on the template development. Better full page scrolling of C++ source code files Include ‘functions’ listings on every page Optionally disable branch coverage Purposely restrict coverage directories to src/ and include/ Another scrolling bug fixed Both blue and green colored themes Codacy linting New forward and back buttons. Allows navigation to each “miss” and subsequent pages Server Hosting This quarter we decommissioned the Rackspace servers which had been in service 10-15 years. Rene provided a nice announcement: Farewell to Wowbagger - End of an Era for boost.org There was more to do than just delete servers: I built a new results.boost.org FTP server replacing the preexisting FTP server used by regression.boost.org. Configured and tested it. Inventoried the old machines, including a monitoring server. Built a replacement wowbagger called wowbagger2 to host a copy of the website - original.boost.org.
The monthly cost of a small GCP Compute instance seems to be around 5% of the Rackspace legacy cloud server. Components: Ubuntu 24.04. Apache. PHP 5 PPA. “original.boost.org” continues to host a copy of the earlier boost.org website for comparison and development purposes which is interesting to check. Launched server instances for corosio.org and paperflow. Fil-C Working with Tom Kent to add Fil-C https://github.com/pizlonator/fil-c test into the regression matrix https://regression.boost.org/ . Built a Fil-C container image based on Drone images. Debugging the build process. After a few roadblocks, the latest news is that Fil-C seems to be successfully building. This is not quite finished but should be online soon. Boost release process boostorg/release-tools The boostorg/boost CircleCI jobs often threaten to cross the 1-hour time limit. Increased parallel processes from 4 to 8. Increased instance size from medium to large. And yet another adjustment: there are 4 compression algorithms used by the releases (gz, bz2, 7z, zip) and it is possible to find drop-in replacement programs that go much faster than the standard ones by utilizing parallelization. lbzip2 pigz. The substitute binaries were applied to publish-releases.py recently. Now the same idea in ci_boost_release.py. All of this reduced the CircleCI job time by many minutes. Certain boost library pull requests were finally merged after a long delay allowing an upgrade of the Sphinx pip package. Tested a superproject container image for the CircleCI jobs with updated pip packages. Boost is currently in a code freeze so this will not go live until after 1.91.0. Sphinx docs continue to deal with upgrade incompatibilities. I prepared another set of pull requests to send to boost libraries using Sphinx. Doc Previews and Doc Builds Antora docs usually show an “Edit this Page” link. 
Recently a couple of Alliance developers happened to comment the link didn’t quite work in some of the doc previews, and so that opened a topic to research solutions and make the Antora edit-this-page feature more robust if possible. The issue is that Boost libraries are git submodules. When working as expected submodules are checked out as “HEAD detached at a74967f0” rather than “develop”. If Antora’s edit-this-page code sees “HEAD detached at a74967f0” it will default to the path HEAD. That’s wrong on the GitHub side. A solution we found (credit to Ruben Perez) is to set the antora config to edit_url: ‘{web_url}/edit/develop/{path}’. Don’t leave a {ref} type of variable in the path. Rolling out the antora-downloads-extension to numerous boost and alliance repositories. It will retry the ui-bundle download. Refactored the release-tools build_docs scripts so that the gems and pip packages are organized into a format that matches Gemfile and requirement.txt files, instead of what the script was doing before “gem install package”. By using a Gemfile, the script becomes compatible with other build systems so content can be copy-pasted easily. CircleCI superproject builds use docbook-xml.zip, where the download url broke. Switched the link address. Also hosting a copy of the file at https://dl.cpp.al/misc/docbook-xml.zip Boost website boostorg/website-v2 Collaborated in the process of on-boarding the consulting company Metalab who are working on V3, the next iteration of the boost.org website. Disable Fastly caching to assist metalab developers. Gitflow workflow planning meetings. Discussions about how Tools should be present on the libraries pages. On the DB servers, adjusted postgresql authentication configurations from md5 to scram-sha-256 on all databases and multiple ansible roles. Actually this turns out to be a superficial change even though it should be done. The reason is that newer postgres will use scram-sha-256 behind-the-scenes regardless. 
Wrote deploy-qa.sh, a script to enable metalab QA engineers to deploy a pull request onto a test server. The precise git SHA commit of any open pull request can be tested. Wrote upload-images.sh, a script to store Bob Ostrom’s boost cartoons in S3 instead of the github repo. Mailman3 Synced production lists to the staging server. Wrote a document in the cppalliance/boost-mailman repo explaining how the multi-step process of syncing can be done. boostorg Migrated cppalliance/decimal to boostorg/decimal. Jenkins The Jenkins server is building documentation previews for dozens of boostorg and cppalliance repositories where each job is assigned its own “workspace” directory and then proceeds to install 1GB of node_modules. That was happening for every build and every pull request. The disk space on the server was filling up, every few weeks yet another 100GB. Rather than continue to resize the disk, or delete all jobs too quickly, was there the opportunity for optimization? Yes. In the superproject container image relocate the nodejs installation to /opt/nvm instead of root’s home directory. The /opt/nvm installation can now be “shared” by other jobs which reduces space. Conditionally check if mermaid is needed and/or if mermaid is already available in /opt/nvm. With these modifications, since each job doesn’t need to install a large amount of npm packages the job size is drastically reduced. Upgraded server and all plugins. Necessary to fix spurious bugs in certain Jenkins jobs. Debugging Jenkins runners, set subnet and zone on the cloud server configurations. Fixed lcov jobs, that need cxxstd=20 Migrated many administrative scripts from a local directory on the server to the jenkins-ci repository. Revise, clean, discard certain scripts. Dmitry contributed diff-reports that should now appear in every pull request which has been configured for LCOV previews. 
Implemented --flags in lcov build scripts [--skip-gcovr] [--skip-genhtml] [--skip-diff-report] [--only-gcovr] Ansible role task: install check_jenkins_queue nagios plugin automatically from Ansible. GHA Completed a major upgrade of the Terraform installation which had lagged upstream code by nearly two years. Deployed a series of GitHub Actions runners for Joaquin’s latest benchmarks at https://github.com/boostorg/boost_hub_benchmarks. Installed latest VS2026. MacOS upgrade to 26.3. Drone Launched new MacOS 26 drone runners, and FreeBSD 15.0 drone runners.</summary></entry><entry><title type="html">Statement from the C++ Alliance on WG21 Committee Meeting Support</title><link href="http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement.html" rel="alternate" type="text/html" title="Statement from the C++ Alliance on WG21 Committee Meeting Support" /><published>2026-03-27T00:00:00+00:00</published><updated>2026-03-27T00:00:00+00:00</updated><id>http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement</id><content type="html" xml:base="http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement.html"><p>The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible.</p>
<p>We want to be unequivocally clear: the C++ Alliance does not, and will never, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee.</p>
<p>The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program.</p>
<p>If you are interested in learning more about our attendance program, please reach out to us at <a href="mailto:info@cppalliance.org">info@cppalliance.org</a>.</p></content><author><name></name></author><category term="company" /><summary type="html">The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible. We want to be unequivocally clear: the C++ Alliance does not, and will never, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee. The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program. If you are interested in learning more about our attendance program, please reach out to us at info@cppalliance.org.</summary></entry><entry><title type="html">Corosio Beta: Coroutine-Native Networking for C++20</title><link href="http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking.html" rel="alternate" type="text/html" title="Corosio Beta: Coroutine-Native Networking for C++20" /><published>2026-03-11T00:00:00+00:00</published><updated>2026-03-11T00:00:00+00:00</updated><id>http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking</id><content type="html" xml:base="http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking.html"><h1 id="corosio-beta-coroutine-native-networking-for-c20">Corosio Beta: Coroutine-Native Networking for C++20</h1>
<p><em>The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review.</em></p>
<hr />
<h2 id="the-gap-c20-left-open">The Gap C++20 Left Open</h2>
<p>C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over.</p>
<hr />
<h2 id="what-corosio-is">What Corosio Is</h2>
<p>Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write <code>co_await</code> and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver.</p>
<pre><code class="language-cpp">auto [socket] = co_await acceptor.async_accept();
auto n = co_await socket.async_read_some(buffer);
co_await socket.async_write(response);
</code></pre>
<p>Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake.</p>
<hr />
<h2 id="built-on-capy">Built on Capy</h2>
<p>Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: <em>an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained.</em></p>
<p>Capy’s <em>IoAwaitable</em> protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch. Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup.</p>
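<p>Capy’s exact spellings aside, the forward-propagation model is easy to picture with nothing but the standard <code>&lt;stop_token&gt;</code> header: a <code>std::stop_source</code> owned at the top of the chain hands one token down through every layer, and only the leaf consults it. A self-contained sketch of the model, not Capy API:</p>
<pre><code class="language-cpp">#include &lt;cassert&gt;
#include &lt;stop_token&gt;

// Leaf operation: the layer that would sit at the platform API boundary.
// It only observes the token; it never owns or rewires it.
bool leaf_io(std::stop_token st)
{
    if (st.stop_requested())
        return false;  // abandon the operation
    return true;       // pretend the I/O completed
}

// Intermediate layer: forwards the same token downward unchanged.
bool mid_layer(std::stop_token st) { return leaf_io(st); }

int main()
{
    std::stop_source top;                // owned at the top of the chain
    assert(mid_layer(top.get_token()));  // runs normally
    top.request_stop();                  // cancel once, at the top...
    assert(!mid_layer(top.get_token())); // ...and every layer below observes it
}
</code></pre>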
<hr />
<h2 id="what-we-are-asking-for">What We Are Asking For</h2>
<p>We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically:</p>
<ul>
<li>Does the executor affinity model hold up under production conditions?</li>
<li>Does cancellation behave correctly across complex coroutine chains?</li>
<li>Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends?</li>
<li>Does the zero-allocation model hold in your deployment scenarios?</li>
</ul>
<p>We are inviting serious C++ developers, especially if you have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny.</p>
<hr />
<h2 id="get-it">Get It</h2>
<pre><code class="language-shell">git clone https://github.com/cppalliance/corosio.git
cd corosio
cmake -S . -B build -G Ninja
cmake --build build
</code></pre>
<p>Or with CMake FetchContent:</p>
<pre><code>include(FetchContent)
FetchContent_Declare(corosio
GIT_REPOSITORY https://github.com/cppalliance/corosio.git
GIT_TAG develop
GIT_SHALLOW TRUE)
FetchContent_MakeAvailable(corosio)
target_link_libraries(my_app Boost::corosio)
</code></pre>
<p><strong>Requires:</strong> CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+</p>
<hr />
<h2 id="resources">Resources</h2>
<p><a href="https://github.com/cppalliance/corosio">Corosio on GitHub</a> – https://github.com/cppalliance/corosio</p>
<p><a href="https://develop.corosio.cpp.al/">Corosio Docs</a> – https://develop.corosio.cpp.al/</p>
<p><a href="https://github.com/cppalliance/capy">Capy on GitHub</a> – https://github.com/cppalliance/capy</p>
<p><a href="https://develop.capy.cpp.al/">Capy Docs</a> – https://develop.capy.cpp.al/</p>
<p><a href="https://github.com/cppalliance/corosio/issues">File an Issue</a> – https://github.com/cppalliance/corosio/issues</p></content><author><name></name></author><category term="mark" /><summary type="html">Corosio Beta: Coroutine-Native Networking for C++20 The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review. The Gap C++20 Left Open C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over. What Corosio Is Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write co_await and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver. auto [socket] = co_await acceptor.async_accept(); auto n = co_await socket.async_read_some(buffer); co_await socket.async_write(response); Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake. Built on Capy Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained. 
Capy’s IoAwaitable protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch. Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup. What We Are Asking For We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically: Does the executor affinity model hold up under production conditions? Does cancellation behave correctly across complex coroutine chains? Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends? Does the zero-allocation model hold in your deployment scenarios? We are inviting serious C++ developers, especially if you have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny. Get It git clone https://github.com/cppalliance/corosio.git cd corosio cmake -S . 
-B build -G Ninja cmake --build build Or with CMake FetchContent: include(FetchContent) FetchContent_Declare(corosio GIT_REPOSITORY https://github.com/cppalliance/corosio.git GIT_TAG develop GIT_SHALLOW TRUE) FetchContent_MakeAvailable(corosio) target_link_libraries(my_app Boost::corosio) Requires: CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+ Resources Corosio on GitHub – https://github.com/cppalliance/corosio Corosio Docs – https://develop.corosio.cpp.al/ Capy on GitHub – https://github.com/cppalliance/capy Capy Docs – https://develop.capy.cpp.al/ File an Issue – https://github.com/cppalliance/corosio/issues</summary></entry><entry><title type="html">A postgres library for Boost</title><link href="http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update.html" rel="alternate" type="text/html" title="A postgres library for Boost" /><published>2026-01-23T00:00:00+00:00</published><updated>2026-01-23T00:00:00+00:00</updated><id>http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update.html"><p>Do you know Boost.MySQL? If you’ve been reading my posts, you probably do.
Many people have wondered ‘why not Postgres?’. Well, the time is now.
TL;DR: I’m writing the equivalent of Boost.MySQL, but for PostgreSQL.
You can find the code <a href="https://github.com/anarthal/nativepg">here</a>.</p>
<p>Since libPQ is already a good library, the NativePG project intends
to be more ambitious than Boost.MySQL. In addition to the expected
Asio interface, I intend to provide a sans-io API that exposes primitives
like message serialization.</p>
<p>Throughout this post, I will go into the intended library design and the rationale
behind it.</p>
<h2 id="the-lowest-level-message-serialization">The lowest level: message serialization</h2>
<p>PostgreSQL clients communicate with the server using
a binary protocol on top of TCP, termed <a href="https://www.postgresql.org/docs/current/protocol.html">the frontend/backend protocol</a>.
The protocol defines a set of messages used for interactions. For example, when running a query, the following happens:</p>
<pre><code>┌────────┐ ┌────────┐
│ Client │ │ Server │
└───┬────┘ └───┬────┘
│ │
│ Query │
│ ──────────────────────────────────────────&gt; │
│ │
│ RowDescription │
│ &lt;────────────────────────────────────────── │
│ │
│ DataRow │
│ &lt;────────────────────────────────────────── │
│ │
│ CommandComplete │
│ &lt;────────────────────────────────────────── │
│ │
│ ReadyForQuery │
│ &lt;────────────────────────────────────────── │
│ │
</code></pre>
<p>In the lowest layer, this library provides functions to serialize and parse
such messages. The goal here is to be as efficient as possible.
Parsing functions are non-allocating, and use a view-based approach inspired by
Boost.Url collections.</p>
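<p>The view-based idea can be sketched in a few lines: parsing yields a non-owning view into the caller’s buffer, so nothing is copied or allocated. The one-byte tag plus four-byte big-endian length framing below matches the real wire format; the type and function names are illustrative only, not NativePG API:</p>
<pre><code class="language-cpp">#include &lt;cassert&gt;
#include &lt;cstdint&gt;
#include &lt;optional&gt;
#include &lt;span&gt;

// Non-owning view over one backend message: a 1-byte type tag followed by a
// 4-byte big-endian length (which, per the protocol, includes itself).
struct message_view
{
    char type;
    std::span&lt;const unsigned char&gt; payload;  // aliases the caller's buffer
};

// Parse without copying: on success the view points into `buf`.
std::optional&lt;message_view&gt; parse_header(std::span&lt;const unsigned char&gt; buf)
{
    if (buf.size() &lt; 5u)
        return std::nullopt;
    std::uint32_t len = (std::uint32_t(buf[1]) &lt;&lt; 24) | (std::uint32_t(buf[2]) &lt;&lt; 16)
                      | (std::uint32_t(buf[3]) &lt;&lt; 8)  |  std::uint32_t(buf[4]);
    if (len &lt; 4u || buf.size() &lt; std::size_t(len) + 1)
        return std::nullopt;  // incomplete frame: caller should read more
    return message_view{char(buf[0]), buf.subspan(5, len - 4)};
}

int main()
{
    // 'Z' = ReadyForQuery, length 5, one payload byte ('I' = idle)
    const unsigned char wire[] = {'Z', 0, 0, 0, 5, 'I'};
    auto m = parse_header(wire);
    assert(m &amp;&amp; m-&gt;type == 'Z' &amp;&amp; m-&gt;payload.size() == 1u);
}
</code></pre>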
<h2 id="parsing-database-types">Parsing database types</h2>
<p>The PostgreSQL type system is quite rich. In addition to the usual SQL built-in types,
it supports advanced scalars like UUIDs, arrays and user-defined aggregates.</p>
<p>When running a query, libPQ exposes retrieved data as either raw text or bytes.
This is what the server sends in the <code>DataRow</code> packets shown above.
To do something useful with the data, users likely need to parse and serialize
such types.</p>
<p>The next layer of NativePG is in charge of providing such functions.
This will likely contain some extension points for users to plug in their types.
This is the general form of such functions:</p>
<pre><code class="language-cpp">system::error_code parse(span&lt;const std::byte&gt; from, T&amp; to, const connection_state&amp;);
void serialize(const T&amp; from, dynamic_buffer&amp; to, const connection_state&amp;);
</code></pre>
<p>Note that some types might require access to session configuration.
For instance, dates may be expressed using different wire formats depending
on the connection’s runtime settings.</p>
<p>At the time of writing, only ints and strings are supported,
but this will be extended soon.</p>
<h2 id="composing-requests">Composing requests</h2>
<p>Efficiency in database communication is achieved with pipelining.
A network round-trip with the server is worth a thousand allocations in the client.
It is thus critical that:</p>
<ul>
<li>The protocol properly supports pipelining. This is the case with PostgreSQL.</li>
<li>The client should expose an interface to it, and make it very easy to use.
libPQ does the first, and NativePG intends to achieve the second.</li>
</ul>
<p>NativePG pipelines by default. In NativePG, a <code>request</code> object is always
a pipeline:</p>
<pre><code class="language-cpp">// Create a request
request req;
// These two queries will be executed as part of a pipeline
req.add_query("SELECT * FROM libs WHERE author = $1", {"Ruben"});
req.add_query("DELETE FROM libs WHERE author &lt;&gt; $1", {"Ruben"});
</code></pre>
<p>Everything you may ask the server can be added to <code>request</code>.
This includes preparing and executing statements, establishing
pipeline synchronization points, and so on.
It aims to be close enough to the protocol to be powerful,
while also exposing high-level functions to make things easier.</p>
<h2 id="reading-responses">Reading responses</h2>
<p>Like <code>request</code>, the core response mechanism aims to be as close
to the protocol as possible. Since use cases here are much more varied,
there is no single <code>response</code> class, but a concept, instead.
This is what a <code>response_handler</code> looks like:</p>
<pre><code class="language-cpp">
struct my_handler {
// Check that the handler is compatible with the request,
// and prepare any required data structures. Called once at the beginning
handler_setup_result setup(const request&amp; req, std::size_t pipeline_offset);
// Called once for every message received from the server
// (e.g. `RowDescription`, `DataRow`, `CommandComplete`)
void on_message(const any_request_message&amp; msg);
// The overall result of the operation (error_code + diagnostic string).
// Called after the operation has finished.
const extended_error&amp; result() const;
};
</code></pre>
<p>Note that <code>on_message</code> is not allowed to report errors.
Even if a handler encounters a problem with a message
(imagine finding a <code>NULL</code> for a field where the user isn’t expecting one),
this is a user error, rather than a protocol error.
Subsequent steps in the pipeline must not be affected by this.</p>
<p>This is powerful but very low-level. Using this mechanism, the library
exposes an interface to parse the result of a query into a user-supplied
struct, using Boost.Describe:</p>
<pre><code class="language-cpp">struct library
{
std::int32_t id;
std::string name;
std::string cpp_version;
};
BOOST_DESCRIBE_STRUCT(library, (), (id, name, cpp_version))
// ...
std::vector&lt;library&gt; libs;
auto handler = nativepg::into(libs); // this is a valid response_handler
</code></pre>
<h2 id="network-algorithms">Network algorithms</h2>
<p>Given a user request and response handler, how do we send these to the server?
We need a set of network algorithms to achieve this. Some of these are trivial:
sending a request to the server is an <code>asio::write</code> on the request’s buffer.
Others, however, are more involved:</p>
<ul>
<li>Reading a pipeline response needs to verify that the message
sequence is what we expected, for security, and handle errors gracefully.</li>
<li>The handshake algorithm, in charge of authentication when we connect to the
server, needs to respond to server authentication challenges, which may
come in different forms.</li>
</ul>
<p>Writing these using <code>asio::async_compose</code> is problematic because:</p>
<ul>
<li>They become tied to Boost.Asio.</li>
<li>They are difficult to test.</li>
<li>They result in long compile times and code bloat due to templating.</li>
</ul>
<p>At the moment, these are written as finite state machines, similar to
how OpenSSL behaves in non-blocking mode:</p>
<pre><code class="language-cpp">// Reads the response of a pipeline (simplified).
// This is a hand-wired generator.
class read_response_fsm {
public:
// User-supplied arguments: request and response
read_response_fsm(const request&amp; req, response_handler_ref handler);
// Yielded to signal that we should read from the server
struct read_args { span&lt;std::byte&gt; buffer; };
// Yielded to signal that we're done
struct done_args { system::error_code result; };
variant&lt;read_args, done_args&gt;
resume(connection_state&amp;, system::error_code io_result, std::size_t bytes_transferred);
};
</code></pre>
<p>The idea is that higher-level code should call <code>resume</code> until it returns
a <code>done_args</code> value. This decouples the algorithms from the underlying I/O runtime.</p>
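<p>The resume-until-done contract is easiest to see with a stub in place of the real FSM. Everything below is illustrative scaffolding rather than NativePG API:</p>
<pre><code class="language-cpp">#include &lt;cassert&gt;
#include &lt;cstddef&gt;
#include &lt;variant&gt;

// Stub standing in for read_response_fsm: asks for two reads, then finishes.
struct stub_fsm
{
    struct read_args {};               // "please read from the socket"
    struct done_args { int result; };  // "operation finished"
    int reads_left = 2;

    std::variant&lt;read_args, done_args&gt; resume(std::size_t /*bytes_transferred*/)
    {
        if (reads_left-- &gt; 0)
            return read_args{};
        return done_args{0};
    }
};

// The driver: any I/O runtime (blocking, Asio, coroutines) follows this shape.
int drive(stub_fsm&amp; fsm)
{
    std::size_t n = 0;
    for (;;)
    {
        auto ev = fsm.resume(n);
        if (auto* done = std::get_if&lt;stub_fsm::done_args&gt;(&amp;ev))
            return done-&gt;result;
        n = 64;  // a real driver would perform the read it was asked for
    }
}

int main()
{
    stub_fsm fsm;
    assert(drive(fsm) == 0);
}
</code></pre>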
<p>Since NativePG targets C++20, I’m considering rewriting this as a coroutine.
Boost.Capy (currently under development - hopefully part of Boost soon)
could be a good candidate for this.</p>
<h2 id="putting-everything-together-the-asio-interface">Putting everything together: the Asio interface</h2>
<p>At the end of the day, most users just want a <code>connection</code> object they can easily
use. Once all the sans-io parts are working, writing it is straightforward.
This is what end user code looks like:</p>
<pre><code class="language-cpp">// Create a connection
connection conn{co_await asio::this_coro::executor};
// Connect
co_await conn.async_connect(
{.hostname = "localhost", .username = "postgres", .password = "", .database = "postgres"}
);
std::cout &lt;&lt; "Startup complete\n";
// Compose our request and response
request req;
req.add_query("SELECT * FROM libs WHERE author = $1", {"Ruben"});
std::vector&lt;library&gt; libs;
// Run the request
co_await conn.async_exec(req, into(libs));
</code></pre>
<h2 id="auto-batch-connections">Auto-batch connections</h2>
<p>While <code>connection</code> is good, experience has shown me that it’s still
too low-level for most users:</p>
<ul>
<li>Connection establishment is manual with <code>async_connect</code>.</li>
<li>No built-in reconnection or health checks.</li>
<li>No built-in concurrent execution of requests.
That is, <code>async_exec</code> first writes the request, then reads the response.
Other requests may not be executed during this period.
This limits the connection’s throughput.</li>
</ul>
<p>For this reason, NativePG will provide some higher-level interfaces
that will make server communication easier and more efficient.
To get a feel for what we need, we should first understand
the two main usage patterns that we expect.</p>
<p>Most of the time, connections are used in a <strong>stateless</strong> way.
For example, consider querying data from the server:</p>
<pre><code class="language-cpp">request req;
req.add_query("SELECT * FROM libs WHERE author = $1", {"Ruben"});
co_await conn.async_exec(req, res);
</code></pre>
<p>This query is not mutating connection state in any way.
Other queries could be inserted before and after it without
making any difference.</p>
<p>I plan to add a higher-level connection type, similar to
<code>redis::connection</code> in Boost.Redis, that automatically
batches concurrent requests and handles reconnection.
The key differences with <code>connection</code> would be:</p>
<ul>
<li>Several independent tasks can share an auto-batch connection.
This is an error for <code>connection</code>.</li>
<li>If several requests are queued at the same time,
the connection may send them together to the server using a single system call.</li>
<li>There is no <code>async_connect</code> in an auto-batch connection.
Reconnection is handled automatically.</li>
</ul>
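<p>A sketch of how sharing such a connection could look. The type name <code>auto_batch_connection</code> and its exact members are assumptions (the type is not implemented yet); only the usage shape matters:</p>
<pre><code class="language-cpp">// Hypothetical: several independent tasks share one connection.
asio::awaitable&lt;void&gt; worker(auto_batch_connection&amp; conn, std::string author)
{
    request req;
    req.add_query("SELECT * FROM libs WHERE author = $1", {author});
    std::vector&lt;library&gt; libs;
    // Safe to call concurrently from many tasks; requests queued at the
    // same time may be written to the server with a single system call.
    co_await conn.async_exec(req, into(libs));
}

// No async_connect: the connection dials (and re-dials) on its own.
// asio::co_spawn(ex, worker(conn, "Ruben"), asio::detached);
// asio::co_spawn(ex, worker(conn, "Peter"), asio::detached);
</code></pre>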
<p>Note that this pattern is not exclusive to read-only or
individual queries. Transactions can work by using protocol features:</p>
<pre><code class="language-cpp">request req;
req.set_autosync(false); // All subsequent queries are part of the same transaction
req.add_query("UPDATE table1 SET x = $1 WHERE y = 2", {42});
req.add_query("UPDATE table2 SET x = $1 WHERE y = 42", {2});
req.add_sync(); // The two updates run atomically
co_await conn.async_exec(req, res);
</code></pre>
<h2 id="connection-pools">Connection pools</h2>
<p>I mentioned there were two main usage patterns in the library.
Sometimes, connections must be used in a <strong>stateful</strong> way:</p>
<pre><code class="language-cpp">request req;
req.add_simple_query("BEGIN"); // start a transaction manually
req.add_query("SELECT * FROM library WHERE author = $1 FOR UPDATE", {"Ruben"}); // lock rows
co_await conn.async_exec(req, lib);
// Do something in the client that depends on lib
if (lib.id == "Boost.MySQL")
co_return; // don't
// Now compose another request that depends on what we read from lib
req.clear();
req.add_query("UPDATE library SET status = 'deprecated' WHERE id = $1", {lib.id});
req.add_simple_query("COMMIT");
co_await conn.async_exec(req, ignore);
</code></pre>
<p>The key point here is that this pattern requires exclusive access to <code>conn</code>.
No other requests should be interleaved between the first and the second
<code>async_exec</code> invocations.</p>
<p>The best way to solve this is by using a connection pool.
This is what client code could look like:</p>
<pre><code class="language-cpp">co_await pool.async_exec([&amp;] (connection&amp; conn) -&gt; asio::awaitable&lt;system::error_code&gt; {
request req;
req.add_simple_query("BEGIN");
req.add_query("SELECT balance, status FROM accounts WHERE user_id = $1 FOR UPDATE", {user_id});
account_info acc;
co_await conn.async_exec(req, into(acc));
// Check if account has sufficient funds and is active
if (acc.balance &lt; payment_amount || acc.status != "active")
co_return error::insufficient_funds;
// Call external payment gateway API - this CANNOT be done in SQL
auto result = co_await payment_gateway.process_charge(user_id, payment_amount);
// Compose next request based on the external API response
req.clear();
if (result.success) {
req.add_query(
"UPDATE accounts SET balance = balance - $1 WHERE user_id = $2",
{payment_amount, user_id}
);
req.add_simple_query("COMMIT");
}
co_await conn.async_exec(req, ignore);
// The connection is automatically returned to the pool when this coroutine completes
co_return result.success ? error_code{} : error::payment_failed;
});
</code></pre>
<p>I explicitly want to avoid having a <code>connection_pool::async_get_connection()</code>
function, like in Boost.MySQL. This function returns a proxy object that grants access
to a free connection. When destroyed, the connection is returned to the pool.
This pattern looks great on paper, but runs into severe complications in
multi-threaded code. The proxy object’s destructor needs to mutate the pool’s state,
thus needing at least an <code>asio::dispatch</code> to the pool’s executor, which may or may not
be a strand. It is so easy to get wrong that Boost.MySQL added a <code>pool_params::thread_safe</code> boolean
option to take care of this automatically, adding extra complexity. Definitely something to avoid.</p>
<h2 id="sql-formatting">SQL formatting</h2>
<p>As we’ve seen, the protocol has built-in support for adding
parameters to queries (see placeholders like <code>$1</code>). These placeholders
are expanded in the server securely.</p>
<p>While this covers most cases, sometimes we need to generate SQL
that is too dynamic to be handled by the server. For instance,
a website might allow multiple optional filters, translating into
<code>WHERE</code> clauses that might or might not be present.</p>
<p>These use cases require SQL generated in the client. To do so,
we need a way of formatting user-supplied values without
running into SQL injection vulnerabilities. The final piece
of the library becomes a <code>format_sql</code> function akin to the
one in Boost.MySQL.</p>
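<p>The shape of such dynamic SQL generation, modeled on Boost.MySQL’s <code>format_sql</code>; the NativePG spelling below is an assumption:</p>
<pre><code class="language-cpp">// Hypothetical API: values are escaped client-side, never concatenated raw.
std::string compose_search(std::optional&lt;std::string&gt; author,
                           std::optional&lt;std::string&gt; cpp_version)
{
    std::string sql = "SELECT * FROM libs WHERE TRUE";
    if (author)
        sql += nativepg::format_sql(" AND author = {}", *author);
    if (cpp_version)
        sql += nativepg::format_sql(" AND cpp_version = {}", *cpp_version);
    return sql;  // each value arrives properly quoted and escaped
}
</code></pre>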
<h2 id="final-thoughts">Final thoughts</h2>
<p>While the plan is clear, there is still much to be done here.
There are dedicated APIs for high-throughput data copying and
push notifications that need to be implemented. Some of the described
APIs have a solid working implementation, while others still need
some work. All in all, I hope that this library can soon reach a state
where it can be useful to people.</p></content><author><name></name></author><category term="ruben" /><summary type="html">Do you know Boost.MySQL? If you’ve been reading my posts, you probably do. Many people have wondered ‘why not Postgres?’. Well, the time is now. TL;DR: I’m writing the equivalent of Boost.MySQL, but for PostgreSQL. You can find the code here. Since libPQ is already a good library, the NativePG project intends to be more ambitious than Boost.MySQL. In addition to the expected Asio interface, I intend to provide a sans-io API that exposes primitives like message serialization. Throughout this post, I will go into the intended library design and the rationales behind its design. The lowest level: message serialization PostgreSQL clients communicate with the server using a binary protocol on top of TCP, termed the frontend/backend protocol. The protocol defines a set of messages used for interactions. For example, when running a query, the following happens: ┌────────┐ ┌────────┐ │ Client │ │ Server │ └───┬────┘ └───┬────┘ │ │ │ Query │ │ ──────────────────────────────────────────&gt; │ │ │ │ RowDescription │ │ &lt;────────────────────────────────────────── │ │ │ │ DataRow │ │ &lt;────────────────────────────────────────── │ │ │ │ CommandComplete │ │ &lt;────────────────────────────────────────── │ │ │ │ ReadyForQuery │ │ &lt;────────────────────────────────────────── │ │ │ In the lowest layer, this library provides functions to serialize and parse such messages. The goal here is being as efficient as possible. Parsing functions are non-allocating, and use an approach inspired by Boost.Url collections: Parsing database types The PostgreSQL type system is quite rich. In addition to the usual SQL built-in types, it supports advanced scalars like UUIDs, arrays and user-defined aggregates. When running a query, libPQ exposes retrieved data as either raw text or bytes. This is what the server sends in the DataRow packets shown above. 
To do something useful with the data, users likely need parsing and serializing such types. The next layer of NativePG is in charge of providing such functions. This will likely contain some extension points for users to plug in their types. This is the general form of such functions: system::error_code parse(span&lt;const std::byte&gt; from, T&amp; to, const connection_state&amp;); void serialize(const T&amp; from, dynamic_buffer&amp; to, const connection_state&amp;); Note that some types might require access to session configuration. For instance, dates may be expressed using different wire formats depending on the connection’s runtime settings. At the time of writing, only ints and strings are supported, but this will be extended soon. Composing requests Efficiency in database communication is achieved with pipelining. A network round-trip with the server is worth a thousand allocations in the client. It is thus critical that: The protocol properly supports pipelining. This is the case with PostgreSQL. The client should expose an interface to it, and make it very easy to use. libPQ does the first, and NativePG intends to achieve the second. NativePG pipelines by default. In NativePG, a request object is always a pipeline: // Create a request request req; // These two queries will be executed as part of a pipeline req.add_query("SELECT * FROM libs WHERE author = $1", {"Ruben"}); req.add_query("DELETE FROM libs WHERE author &lt;&gt; $1", {"Ruben"}); Everything you may ask the server can be added to request. This includes preparing and executing statements, establishing pipeline synchronization points, and so on. It aims to be close enough to the protocol to be powerful, while also exposing high-level functions to make things easier. Reading responses Like request, the core response mechanism aims to be as close to the protocol as possible. Since use cases here are much more varied, there is no single response class, but a concept, instead. 
This is what a response_handler looks like: struct my_handler { // Check that the handler is compatible with the request, // and prepare any required data structures. Called once at the beginning handler_setup_result setup(const request&amp; req, std::size_t pipeline_offset); // Called once for every message received from the server // (e.g. `RowDescription`, `DataRow`, `CommandComplete`) void on_message(const any_request_message&amp; msg); // The overall result of the operation (error_code + diagnostic string). // Called after the operation has finished. const extended_error&amp; result() const; }; Note that on_message is not allowed to report errors. Even if a handler encounters a problem with a message (imagine finding a NULL for a field where the user isn’t expecting one), this is a user error, rather than a protocol error. Subsequent steps in the pipeline must not be affected by this. This is powerful but very low-level. Using this mechanism, the library exposes an interface to parse the result of a query into a user-supplied struct, using Boost.Describe: struct library { std::int32_t id; std::string name; std::string cpp_version; }; BOOST_DESCRIBE_STRUCT(library, (), (id, name, cpp_version)) // ... std::vector&lt;library&gt; libs; auto handler = nativepg::into(libs); // this is a valid response_handler Network algorithms Given a user request and response handler, how do we send these to the server? We need a set of network algorithms to achieve this. Some of these are trivial: sending a request to the server is an asio::write on the request’s buffer. Others, however, are more involved: Reading a pipeline response needs to verify that the message sequence is what we expected, for security, and handle errors gracefully. The handshake algorithm, in charge of authentication when we connect to the server, needs to respond to server authentication challenges, which may come in different forms. 
Writing these using asio::async_compose is problematic because: They become tied to Boost.Asio. They are difficult to test. They result in long compile times and code bloat due to templating. At the moment, these are written as finite state machines, similar to how OpenSSL behaves in non-blocking mode: // Reads the response of a pipeline (simplified). // This is a hand-wired generator. class read_response_fsm { public: // User-supplied arguments: request and response read_response_fsm(const request&amp; req, response_handler_ref handler); // Yielded to signal that we should read from the server struct read_args { span&lt;std::byte&gt; buffer; }; // Yielded to signal that we're done struct done_args { system::error_code result; }; variant&lt;read_args, done_args&gt; resume(connection_state&amp;, system::error_code io_result, std::size_t bytes_transferred); }; The idea is that higher-level code should call resume until it returns a done_args value. This allows de-coupling from the underlying I/O runtime. Since NativePG targets C++20, I’m considering rewriting this as a coroutine. Boost.Capy (currently under development - hopefully part of Boost soon) could be a good candidate for this. Putting everything together: the Asio interface At the end of the day, most users just want a connection object they can easily use. Once all the sans-io parts are working, writing it is pretty straight-forward. 
This is what end user code looks like: // Create a connection connection conn{co_await asio::this_coro::executor}; // Connect co_await conn.async_connect( {.hostname = "localhost", .username = "postgres", .password = "", .database = "postgres"} ); std::cout &lt;&lt; "Startup complete\n"; // Compose our request and response request req; req.add_query("SELECT * FROM libs WHERE author = $1", {"Ruben"}); std::vector&lt;library&gt; libs; // Run the request co_await conn.async_exec(req, into(libs)); Auto-batch connections While connection is good, experience has shown me that it’s still too low-level for most users: Connection establishment is manual with async_connect. No built-in reconnection or health checks. No built-in concurrent execution of requests. That is, async_exec first writes the request, then reads the response. Other requests may not be executed during this period. This limits the connection’s throughput. For this reason, NativePG will provide some higher-level interfaces that will make server communication easier and more efficient. To get a feel of what we need, we should first understand the two main usage patterns that we expect. Most of the time, connections are used in a stateless way. For example, consider querying data from the server: request req; req.add_query("SELECT * FROM libs WHERE author = $1", {"Ruben"}); co_await conn.async_exec(req, res); This query is not mutating connection state in any way. Other queries could be inserted before and after it without making any difference. I plan to add a higher-level connection type, similar to redis::connection in Boost.Redis, that automatically batches concurrent requests and handles reconnection. The key differences with connection would be: Several independent tasks can share an auto-batch connection. This is an error for connection. If several requests are queued at the same time, the connection may send them together to the server using a single system call. 
There is no async_connect in an auto-batch connection. Reconnection is handled automatically. Note that this pattern is not exclusive to read-only or individual queries. Transactions can work by using protocol features: request req; req.set_autosync(false); // All subsequent queries are part of the same transaction req.add_query("UPDATE table1 SET x = $1 WHERE y = 2", {42}); req.add_query("UPDATE table2 SET x = $1 WHERE y = 42", {2}); req.add_sync(); // The two updates run atomically co_await conn.async_exec(req, res); Connection pools I mentioned there were two main usage scenarios in the library. Sometimes, it is required to use connections in a stateful way: request req; req.add_simple_query("BEGIN"); // start a transaction manually req.add_query("SELECT * FROM library WHERE author = $1 FOR UPDATE", {"Ruben"}); // lock rows co_await conn.async_exec(req, lib); // Do something in the client that depends on lib if (lib.id == "Boost.MySQL") co_return; // don't // Now compose another request that depends on what we read from lib req.clear(); req.add_query("UPDATE library SET status = 'deprecated' WHERE id = $1", {lib.id}); req.add_simple_query("COMMIT"); co_await conn.async_exec(req, ignore); The key point here is that this pattern requires exclusive access to conn. No other requests should be interleaved between the first and the second async_exec invocations. The best way to solve this is by using a connection pool. 
This is what client code could look like: co_await pool.async_exec([&amp;] (connection&amp; conn) -&gt; asio::awaitable&lt;system::error_code&gt; { request req; req.add_simple_query("BEGIN"); req.add_query("SELECT balance, status FROM accounts WHERE user_id = $1 FOR UPDATE", {user_id}); account_info acc; co_await conn.async_exec(req, into(acc)); // Check if account has sufficient funds and is active if (acc.balance &lt; payment_amount || acc.status != "active") co_return error::insufficient_funds; // Call external payment gateway API - this CANNOT be done in SQL auto result = co_await payment_gateway.process_charge(user_id, payment_amount); // Compose next request based on the external API response req.clear(); if (result.success) { req.add_query( "UPDATE accounts SET balance = balance - $1 WHERE user_id = $2", {payment_amount, user_id} ); req.add_simple_query("COMMIT"); } co_await conn.async_exec(req, ignore); // The connection is automatically returned to the pool when this coroutine completes co_return result.success ? error_code{} : error::payment_failed; }); I explicitly want to avoid having a connection_pool::async_get_connection() function, like in Boost.MySQL. This function returns a proxy object that grants access to a free connection. When destroyed, the connection is returned to the pool. This pattern looks great on paper, but runs into severe complications in multi-threaded code. The proxy object’s destructor needs to mutate the pool’s state, thus needing at least an asio::dispatch to the pool’s executor, which may or may not be a strand. It is so easy to get wrong that Boost.MySQL added a pool_params::thread_safe boolean option to take care of this automatically, adding extra complexity. Definitely something to avoid. SQL formatting As we’ve seen, the protocol has built-in support for adding parameters to queries (see placeholders like $1). These placeholders are expanded in the server securely. 
While this covers most cases, sometimes we need to generate SQL that is too dynamic to be handled by the server. For instance, a website might allow multiple optional filters, translating into WHERE clauses that might or might not be present. These use cases require SQL generated in the client. To do so, we need a way of formatting user-supplied values without running into SQL injection vulnerabilities. The final piece of the library becomes a format_sql function akin to the one in Boost.MySQL. Final thoughts While the plan is clear, there is still much to be done here. There are dedicated APIs for high-throughput data copying and push notifications that need to be implemented. Some of the described APIs have a solid working implementation, while others still need some work. All in all, I hope that this library can soon reach a state where it can be useful to people.</summary></entry><entry><title type="html">Systems, CI Updates Q4 2025</title><link href="http://cppalliance.org/sam/2026/01/22/SamsQ4Update.html" rel="alternate" type="text/html" title="Systems, CI Updates Q4 2025" /><published>2026-01-22T00:00:00+00:00</published><updated>2026-01-22T00:00:00+00:00</updated><id>http://cppalliance.org/sam/2026/01/22/SamsQ4Update</id><content type="html" xml:base="http://cppalliance.org/sam/2026/01/22/SamsQ4Update.html"><h3 id="doc-previews-and-doc-builds">Doc Previews and Doc Builds</h3>
<p>The pull request to isomorphic-git, “Support git commands run in submodules”, was merged and released in the latest version. (See the previous post for an explanation.) The commit modified 153 files, covering all the git API commands and the tests applying to each one. The next step is for upstream Antora to adjust package.json to refer to the newer isomorphic-git so it will be distributed along with Antora. Since isomorphic-git is more widely used than just Antora, their userbase is already field-testing the new version.</p>
<p>Created an Antora extension, https://github.com/cppalliance/antora-downloads-extension, that retries ui-bundle downloads. The Boost Superproject builds sometimes fail because of Antora download failures. I am now in the process of rolling out this extension to all affected repositories. It must be included in each playbook if that playbook downloads the bundle as part of the build process.</p>
<p>Adjusted doc previews to update the existing PR comments instead of posting many new ones, to reduce the email spam effect. The job modifies a timestamp in the PR comment, which allows developers to see the most recent build time and whether the pages rebuilt successfully. I needed to solve some puzzles to implement this, since Jenkins jobs are usually stateless and don’t know whether they previously posted a comment, or which comment should be modified across subsequent job runs. It turns out there is a feature, “Build with Parameters”, and properties/parameters can be saved in the job.</p>
<h3 id="boost-website-boostorgwebsite-v2">Boost website boostorg/website-v2</h3>
<p>Lowered the CPU threshold on the horizontal pod autoscaler to scale pods more rapidly when there is increased traffic.</p>
<p>When web visitors go to the wrong domain or URL, set the redirects to 301 “moved permanently”. Reduced the number of redirect hops by sending visitors directly to the final URL www.boost.org.</p>
<p>Investigated a bug where parsing PDF files was timing out and crashing the server; such files should not be parsed by Beautiful Soup or lxml.</p>
<p>During this quarter we published Boost 1.90.0. Worked closely with the release managers to resolve problems during the release. The boost.org website was not fully updating after importing the new version.</p>
<p>Meetings about the CMS feature and other topics. Many general discussions about website issues.</p>
<h3 id="mailman3">Mailman3</h3>
<p>When unmoderating a new user on mailman3 an administrator must click a drop-down and select “Default Processing” so this subscriber may send emails directly to the list and not continue to be moderated. I have started developing an enhancement in Postorius that adds one simple button, “Accept and Unmoderate”, thus streamlining the process. However, as often happens with new and radical ideas sent to the Mailman maintainers, they put up roadblocks. While I believe the new feature is promising, since it helps administrators unmoderate users quickly rather than skip that step, the future of the pull request is uncertain.</p>
<h3 id="boost-ci">boost-ci</h3>
<p>Created a Fastly CDN mirror of keyserver.ubuntu.com at keyserver.boost.org. If keyserver.ubuntu.com experiences occasional outages but keys are cached on the CDN mirror, then CI jobs will be able to proceed without difficulty. Configured both Drone and boost-ci to use the CDN at keyserver.boost.org.</p>
<h3 id="jenkins">Jenkins</h3>
<p>Beast2 doc previews. Capy previews. JSON lcov jobs. Openmethod doc previews.</p>
<p>Modified email notifications to send ‘recovery’ type messages after failed jobs. Other enhancements to Jenkins jobs.</p>
<h3 id="boost-release-process-boostorgrelease-tools">Boost release process boostorg/release-tools</h3>
<p>When building releases with publish-release.py, generate “nodocs” copies of the Boost releases and upload them to archives.boost.io. The “nodocs” versions are now functional. If anyone would like to accelerate their CI build process, set the target URL to the nodocs variant, such as https://archives.boost.io/release/1.90.0/source-nodocs/boost_1_90_0.tar.gz.</p>
<p>Migrated the release workstation instance from GCP to AWS so that during the next Boost release 1.91.0 the server will be closer to AWS S3, allowing file uploads to transfer faster. Designed a mechanism that resizes the server instance on a cron schedule via GHA. Most of the time it’s quite small, but during releases the server is allocated more CPU.</p>
<h3 id="drone">Drone</h3>
<p>Microsoft Windows - VS2026 container image.<br />
Ubuntu 25.10 container image.</p>
<h3 id="gha">GHA</h3>
<p>Added CI jobs to build “documentation” in the boostorg/container repository. GHA will test doc builds, which helps when debugging modifications to documentation.</p>
<p>Fil-C is a “fanatically compatible memory-safe implementation of C and C++.” https://github.com/pizlonator/fil-c Upon request, I composed an example Fil-C GitHub Actions job at https://github.com/sdarwin/fil-c-demo which was then applied by developers in other Boost repositories.</p></content><author><name></name></author><category term="sam" /><summary type="html">Doc Previews and Doc Builds The pull request to isomorphic-git, “Support git commands run in submodules”, was merged and released in the latest version. (See the previous post for an explanation.) The commit modified 153 files, covering all the git API commands and the tests applying to each one. The next step is for upstream Antora to adjust package.json to refer to the newer isomorphic-git so it will be distributed along with Antora. Since isomorphic-git is more widely used than just Antora, their userbase is already field-testing the new version. Created an Antora extension, https://github.com/cppalliance/antora-downloads-extension, that retries ui-bundle downloads. The Boost Superproject builds sometimes fail because of Antora download failures. I am now in the process of rolling out this extension to all affected repositories. It must be included in each playbook if that playbook downloads the bundle as part of the build process. Adjusted doc previews to update the existing PR comments instead of posting many new ones, to reduce the email spam effect. The job modifies a timestamp in the PR comment, which allows developers to see the most recent build time and whether the pages rebuilt successfully. I needed to solve some puzzles to implement this, since Jenkins jobs are usually stateless and don’t know whether they previously posted a comment, or which comment should be modified across subsequent job runs. It turns out there is a feature, “Build with Parameters”, and properties/parameters can be saved in the job. 
Boost website boostorg/website-v2 Lowered the CPU threshold on the horizontal pod autoscaler to scale pods more rapidly when there is increased traffic. When web visitors go to the wrong domain or URL, set the redirects to 301 “moved permanently”. Reduced the number of redirect hops by sending visitors directly to the final URL www.boost.org. Investigated a bug where parsing PDF files was timing out and crashing the server; such files should not be parsed by Beautiful Soup or lxml. During this quarter we published Boost 1.90.0. Worked closely with the release managers to resolve problems during the release. The boost.org website was not fully updating after importing the new version. Meetings about the CMS feature and other topics. Many general discussions about website issues. Mailman3 When unmoderating a new user on mailman3 an administrator must click a drop-down and select “Default Processing” so this subscriber may send emails directly to the list and not continue to be moderated. I have started developing an enhancement in Postorius that adds one simple button, “Accept and Unmoderate”, thus streamlining the process. However, as often happens with new and radical ideas sent to the Mailman maintainers, they put up roadblocks. While I believe the new feature is promising, since it helps administrators unmoderate users quickly rather than skip that step, the future of the pull request is uncertain. boost-ci Created a Fastly CDN mirror of keyserver.ubuntu.com at keyserver.boost.org. If keyserver.ubuntu.com experiences occasional outages but keys are cached on the CDN mirror, then CI jobs will be able to proceed without difficulty. Configured both Drone and boost-ci to use the CDN at keyserver.boost.org. Jenkins Beast2 doc previews. Capy previews. JSON lcov jobs. Openmethod doc previews. Modified email notifications to send ‘recovery’ type messages after failed jobs. Other enhancements to Jenkins jobs. 
Boost release process boostorg/release-tools When building releases with publish-release.py, generate “nodocs” copies of the Boost releases and upload them to archives.boost.io. The “nodocs” versions are now functional. If anyone would like to accelerate their CI build process, set the target URL to nodocs such as: https://archives.boost.io/release/1.90.0/source-nodocs/boost_1_90_0.tar.gz . Migrated the release workstation instance from GCP to AWS so that during the next Boost release 1.91.0 the server will be closer to AWS S3, allowing file uploads to transfer faster. Designed a mechanism that resizes the server instance on a cron schedule via GHA. Most of the time it’s quite small, but during releases the server is allocated more CPU. Drone Microsoft Windows - VS2026 container image. Ubuntu 25.10 container image. GHA Added CI jobs to build “documentation” in the boostorg/container repository. GHA will test doc builds, which helps when debugging modifications to documentation. Fil-C is a “fanatically compatible memory-safe implementation of C and C++.” https://github.com/pizlonator/fil-c Upon request, I composed an example Fil-C GitHub Actions job at https://github.com/sdarwin/fil-c-demo which was then applied by developers in other Boost repositories.</summary></entry><entry><title type="html">Containers galore</title><link href="http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update.html" rel="alternate" type="text/html" title="Containers galore" /><published>2026-01-18T00:00:00+00:00</published><updated>2026-01-18T00:00:00+00:00</updated><id>http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update.html"><p>During Q4 2025, I’ve been working in the following areas:</p>
<h3 id="boostbloom">Boost.Bloom</h3>
<ul>
<li>Written <a href="https://bannalia.blogspot.com/2025/10/bulk-operations-in-boostbloom.html">an article</a> explaining
the usage and implementation of the recently introduced bulk operations.</li>
</ul>
<h3 id="boostunordered">Boost.Unordered</h3>
<ul>
<li>Written maintenance fixes
<a href="https://github.com/boostorg/unordered/pull/320">PR#320</a>,
<a href="https://github.com/boostorg/unordered/pull/321">PR#321</a>,
<a href="https://github.com/boostorg/unordered/pull/326">PR#326</a>,
<a href="https://github.com/boostorg/unordered/pull/327">PR#327</a>,
<a href="https://github.com/boostorg/unordered/pull/328">PR#328</a>,
<a href="https://github.com/boostorg/unordered/pull/335">PR#335</a>.</li>
</ul>
<h3 id="boostmultiindex">Boost.MultiIndex</h3>
<ul>
<li>Refactored the library to use Boost.Mp11 instead of Boost.MPL (<a href="https://github.com/boostorg/multi_index/pull/87">PR#87</a>),
remove pre-C++11 variadic argument emulation (<a href="https://github.com/boostorg/multi_index/pull/88">PR#88</a>)
and remove all sorts of pre-C++11 polyfills (<a href="https://github.com/boostorg/multi_index/pull/90">PR#90</a>).
These changes are explained in <a href="https://bannalia.blogspot.com/2025/12/boostmultiindex-refactored.html">an article</a>
and will be shipped in Boost 1.91. Transition is expected to be mostly backwards
compatible, though two Boost libraries needed adjustments as they use MultiIndex
in rather advanced ways (see below).</li>
</ul>
<h3 id="boostflyweight">Boost.Flyweight</h3>
<ul>
<li>Adapted the library to work with Boost.MultiIndex 1.91
(<a href="https://github.com/boostorg/flyweight/pull/25">PR#25</a>).</li>
</ul>
<h3 id="boostbimap">Boost.Bimap</h3>
<ul>
<li>Adapted the library to work with Boost.MultiIndex 1.91
(<a href="https://github.com/boostorg/bimap/pull/50">PR#50</a>).</li>
</ul>
<h3 id="other-boost-libraries">Other Boost libraries</h3>
<ul>
<li>Helped set up the Antora-based doc build chain for DynamicBitset
(<a href="https://github.com/boostorg/dynamic_bitset/pull/96">PR#96</a>,
<a href="https://github.com/boostorg/dynamic_bitset/pull/97">PR#97</a>,
<a href="https://github.com/boostorg/dynamic_bitset/pull/98">PR#98</a>).</li>
<li>Same with OpenMethod
(<a href="https://github.com/boostorg/openmethod/pull/40">PR#40</a>).</li>
<li>Fixed concept compliance of iterators provided by Spirit
(<a href="https://github.com/boostorg/spirit/pull/840">PR#840</a>,
<a href="https://github.com/boostorg/spirit/pull/841">PR#841</a>).</li>
</ul>
<h3 id="experiments-with-fil-c">Experiments with Fil-C</h3>
<p><a href="https://fil-c.org/">Fil-C</a> is a C and C++ compiler built on top of LLVM that adds run-time
memory-safety mechanisms preventing out-of-bounds and use-after-free accesses.
I’ve been experimenting with compiling the Boost.Unordered test suite with Fil-C and running
some benchmarks to measure the resulting degradation in execution times and memory usage.
Results follow:</p>
<ul>
<li>Articles
<ul>
<li><a href="https://bannalia.blogspot.com/2025/11/some-experiments-with-boostunordered-on.html">Some experiments with Boost.Unordered on Fil-C</a></li>
<li><a href="https://bannalia.blogspot.com/2025/11/comparing-run-time-performance-of-fil-c.html">Comparing the run-time performance of Fil-C and ASAN</a></li>
</ul>
</li>
<li>Repos
<ul>
<li><a href="https://github.com/joaquintides/fil-c_boost_unordered">Compiling Boost.Unordered test suite with Fil-C</a></li>
<li><a href="https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c">Benchmarks of Fil-C and ASAN against baseline</a></li>
<li><a href="https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c_memory">Memory consumption of Fil-C and ASAN with respect to baseline</a></li>
</ul>
</li>
</ul>
<h3 id="proof-of-concept-of-a-semistable-vector">Proof of concept of a semistable vector</h3>
<p>By “semistable vector” I mean that pointers to the elements may be invalidated
upon insertion and erasure (just like a regular <code>std::vector</code>) but iterators
to non-erased elements remain valid throughout.
I’ve written a small <a href="https://github.com/joaquintides/semistable_vector/">proof of concept</a>
of this idea and measured its performance against non-stable <code>std::vector</code> and fully
stable <code>std::list</code>. It is dubious that such a container would be of interest for production
use, but the techniques explored are mildly interesting and could be adapted, for
instance, to write powerful safe iterator facilities.</p>
<h3 id="teaser-exploring-the-stdhive-space">Teaser: exploring the <code>std::hive</code> space</h3>
<p>In short, <code>std::hive</code> (coming in C++26) is a container with stable references/iterators
and fast insertion and erasure. The <a href="https://github.com/mattreecebentley/plf_hive">reference implementation</a>
for this container relies on a rather convoluted data structure, and I started to wonder
if something simpler could deliver superior performance. Expect to see the results of
my experiments in Q1 2026.</p>
<h3 id="website">Website</h3>
<ul>
<li>Filed issues
<a href="https://github.com/boostorg/website-v2/issues/1936">#1936</a>,
<a href="https://github.com/boostorg/website-v2/issues/1937">#1937</a>,
<a href="https://github.com/boostorg/website-v2/issues/1984">#1984</a>.</li>
</ul>
<h3 id="support-to-the-community">Support to the community</h3>
<ul>
<li>I’ve been part of a task force with the C++ Alliance to review the entire
catalog of Boost libraries (170+) and categorize them according to their
maintenance status and relevance in light of additions to the C++
standard library over the years.</li>
<li>Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).</li>
</ul></content><author><name></name></author><category term="joaquin" /><summary type="html">During Q4 2025, I’ve been working in the following areas: Boost.Bloom Written an article explaining the usage and implementation of the recently introduced bulk operations. Boost.Unordered Written maintenance fixes PR#320, PR#321, PR#326, PR#327, PR#328, PR#335. Boost.MultiIndex Refactored the library to use Boost.Mp11 instead of Boost.MPL (PR#87), remove pre-C++11 variadic argument emulation (PR#88) and remove all sorts of pre-C++11 polyfills (PR#90). These changes are explained in an article and will be shipped in Boost 1.91. Transition is expected to be mostly backwards compatible, though two Boost libraries needed adjustments as they use MultiIndex in rather advanced ways (see below). Boost.Flyweight Adapted the library to work with Boost.MultiIndex 1.91 (PR#25). Boost.Bimap Adapted the library to work with Boost.MultiIndex 1.91 (PR#50). Other Boost libraries Helped set up the Antora-based doc build chain for DynamicBitset (PR#96, PR#97, PR#98). Same with OpenMethod (PR#40). Fixed concept compliance of iterators provided by Spirit (PR#840, PR#841). Experiments with Fil-C Fil-C is a C and C++ compiler built on top of LLVM that adds run-time memory-safety mechanisms preventing out-of-bounds and use-after-free accesses. I’ve been experimenting with compiling Boost.Unordered test suite with Fil-C and running some benchmarks to measure the resulting degradation in execution times and memory usage. 
Results follow: Articles Some experiments with Boost.Unordered on Fil-C Comparing the run-time performance of Fil-C and ASAN Repos Compiling Boost.Unordered test suite with Fil-C Benchmarks of Fil-C and ASAN against baseline Memory consumption of Fil-C and ASAN with respect to baseline Proof of concept of a semistable vector By “semistable vector” I mean that pointers to the elements may be invalidated upon insertion and erasure (just like a regular std::vector) but iterators to non-erased elements remain valid throughout. I’ve written a small proof of concept of this idea and measured its performance against non-stable std::vector and fully stable std::list. It is dubious that such a container would be of interest for production use, but the techniques explored are mildly interesting and could be adapted, for instance, to write powerful safe iterator facilities. Teaser: exploring the std::hive space In short, std::hive (coming in C++26) is a container with stable references/iterators and fast insertion and erasure. The reference implementation for this container relies on a rather convoluted data structure, and I started to wonder if something simpler could deliver superior performance. Expect to see the results of my experiments in Q1 2026. Website Filed issues #1936, #1937, #1984. Support to the community I’ve been part of a task force with the C++ Alliance to review the entire catalog of Boost libraries (170+) and categorize them according to their maintenance status and relevance in light of additions to the C++ standard library over the years. 
Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).</summary></entry><entry><title type="html">Decimal is Accepted and Next Steps</title><link href="http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html" rel="alternate" type="text/html" title="Decimal is Accepted and Next Steps" /><published>2026-01-15T00:00:00+00:00</published><updated>2026-01-15T00:00:00+00:00</updated><id>http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html"><p>After two reviews the Decimal (<a href="https://github.com/cppalliance/decimal">https://github.com/cppalliance/decimal</a>) library has been accepted into Boost.
Look for it to ship for the first time with Boost 1.91 in the Spring.
For current and prospective users, a new release series (v6) is available on the releases page of the library.
This major version change contains all of the bug fixes and addresses comments from the second review.
We have once again overhauled the documentation based on the review to include a significant increase in the number of examples.
Between the <code>Basic Usage</code> and <code>Examples</code> tabs on the website we believe there’s now enough information to quickly make good use of the library.
One big quality-of-life improvement worth highlighting for this version is that it ships with pretty printers for both GDB and LLDB.
It is a huge release (1108 commits with a diff stat of &gt;50k LOC), but it is better than ever.
I expect that this is the last major version that will be released prior to moving to the Boost release cycle.</p>
<p>Where to go from here?</p>
<p>As I have mentioned in previous posts, the int128 (<a href="https://github.com/cppalliance/int128">https://github.com/cppalliance/int128</a>) library started life as the backend for portable arithmetic and representation in the Decimal library.
It has since been expanded to include more standard library features that are unnecessary for a back-end but useful to many people, such as <code>&lt;format&gt;</code> support.
The last major update that I intend to make to the library prior to proposal for Boost is to add CUDA support.
This would not only add portability to another platform for many users, but would also open the door for Decimal to have CUDA support.
I will also be looking at a few of our performance measures as I think there are still places for improvement (such as signed 128-bit division).</p>
<p>Lastly, towards the end of this quarter (March 5 - March 15), I will be serving as the review manager for Alfredo Correa’s Multi (<a href="https://github.com/correaa/boost-multi">https://github.com/correaa/boost-multi</a>) library.
Multi is a modern C++ library that provides manipulation and access of data in multidimensional arrays for both CPU and GPU memory.
Feel free to give the library a go now and comment on what you find.
This is a very high-quality library which should have an exciting review.</p></content><author><name></name></author><category term="matt" /><summary type="html">After two reviews the Decimal (https://github.com/cppalliance/decimal) library has been accepted into Boost. Look for it to ship for the first time with Boost 1.91 in the Spring. For current and prospective users, a new release series (v6) is available on the releases page of the library. This major version change contains all of the bug fixes and addresses comments from the second review. We have once again overhauled the documentation based on the review to include a significant increase in the number of examples. Between the Basic Usage and Examples tabs on the website we believe there’s now enough information to quickly make good use of the library. One big quality-of-life improvement worth highlighting for this version is that it ships with pretty printers for both GDB and LLDB. It is a huge release (1108 commits with a diff stat of &gt;50k LOC), but it is better than ever. I expect that this is the last major version that will be released prior to moving to the Boost release cycle. Where to go from here? As I have mentioned in previous posts, the int128 (https://github.com/cppalliance/int128) library started life as the backend for portable arithmetic and representation in the Decimal library. It has since been expanded to include more standard library features that are unnecessary for a back-end but useful to many people, such as &lt;format&gt; support. The last major update that I intend to make to the library prior to proposal for Boost is to add CUDA support. This would not only add portability to another platform for many users, but would also open the door for Decimal to have CUDA support. I will also be looking at a few of our performance measures as I think there are still places for improvement (such as signed 128-bit division). 
Lastly, towards the end of this quarter (March 5 - March 15), I will be serving as the review manager for Alfredo Correa’s Multi (https://github.com/correaa/boost-multi) library. Multi is a modern C++ library that provides manipulation and access of data in multidimensional arrays for both CPU and GPU memory. Feel free to give the library a go now and comment on what you find. This is a very high quality library which should have an exciting review.</summary></entry><entry><title type="html">From Prototype to Product: MrDocs in 2025</title><link href="http://cppalliance.org/alan/2025/10/28/Alan.html" rel="alternate" type="text/html" title="From Prototype to Product: MrDocs in 2025" /><published>2025-10-28T00:00:00+00:00</published><updated>2025-10-28T00:00:00+00:00</updated><id>http://cppalliance.org/alan/2025/10/28/Alan</id><content type="html" xml:base="http://cppalliance.org/alan/2025/10/28/Alan.html"><p>In 2024, the <a href="https://www.mrdocs.com">MrDocs</a> project was a <strong>fragile prototype</strong>. It documented Boost.URL, but the <strong>CLI</strong>, <strong>configuration</strong>, and <strong>build process</strong> were unstable. Most users could not run it without direct help from the core group. That unstable baseline is the starting point for this report.</p>
<p>In 2025, we moved the codebase to <strong>minimum-viable-product</strong> shape. I led the releases that stabilized the pipeline, aligned the <strong>configuration model</strong>, and documented the work in this report to support a smooth <strong>leadership transition</strong>. This post summarizes the <strong>2024 gaps</strong>, the <strong>2025 fixes</strong>, and the <strong>recommended directions</strong> for the next phase.</p>
<!-- prettier-ignore -->
<ul id="markdown-toc">
<li><a href="#system-overview" id="markdown-toc-system-overview">System Overview</a></li>
<li><a href="#2024-lessons-from-a-fragile-prototype" id="markdown-toc-2024-lessons-from-a-fragile-prototype">2024: Lessons from a Fragile Prototype</a></li>
<li><a href="#2025-from-prototype-to-mvp" id="markdown-toc-2025-from-prototype-to-mvp">2025: From Prototype to MVP</a> <ul>
<li><a href="#v003-enforcing-consistency" id="markdown-toc-v003-enforcing-consistency">v0.0.3: Enforcing Consistency</a></li>
<li><a href="#v004-establishing-the-foundation" id="markdown-toc-v004-establishing-the-foundation">v0.0.4: Establishing the Foundation</a></li>
<li><a href="#v005-stabilization-and-public-readiness" id="markdown-toc-v005-stabilization-and-public-readiness">v0.0.5: Stabilization and Public Readiness</a></li>
</ul>
</li>
<li><a href="#2026-beyond-the-mvp" id="markdown-toc-2026-beyond-the-mvp">2026: Beyond the MVP</a> <ul>
<li><a href="#strategic-prioritization" id="markdown-toc-strategic-prioritization">Strategic Prioritization</a></li>
<li><a href="#reflection" id="markdown-toc-reflection">Reflection</a></li>
<li><a href="#metadata" id="markdown-toc-metadata">Metadata</a></li>
<li><a href="#extensions-and-plugins" id="markdown-toc-extensions-and-plugins">Extensions and Plugins</a></li>
<li><a href="#dependency-resilience" id="markdown-toc-dependency-resilience">Dependency Resilience</a></li>
<li><a href="#follow-up-issues-for-v006" id="markdown-toc-follow-up-issues-for-v006">Follow-up Issues for v0.0.6</a></li>
</ul>
</li>
<li><a href="#acknowledgments" id="markdown-toc-acknowledgments">Acknowledgments</a></li>
<li><a href="#conclusion" id="markdown-toc-conclusion">Conclusion</a></li>
</ul>
<h2 id="system-overview">System Overview</h2>
<p><a href="https://www.mrdocs.com">MrDocs</a> is a C++ documentation generator built on <strong>Clang</strong>. It parses source with full language fidelity, links declarations to their comments, and produces reference documentation that reflects real program structure—<strong>templates</strong>, <strong>constraints</strong>, and <strong>overloads</strong> included.</p>
<blockquote>
<p>Traditional tools often approximate the AST. MrDocs uses the AST directly, so documentation matches the code and modern C++ features render correctly.</p>
</blockquote>
<p>Unlike single-purpose generators, MrDocs separates the <strong>corpus</strong> (semantic data) from the <strong>presentation layer</strong>. Projects can choose among multiple <strong>output formats</strong> or extend the system entirely: supply <strong>custom Handlebars templates</strong> or script new generators using the <strong>plugin system</strong>. The corpus is represented in the generators as a <strong>rich JSON-like DOM</strong>. With schema files, MrDocs enables integration with <strong>build systems</strong>, <strong>documentation frameworks</strong>, or <strong>IDEs</strong>.</p>
<p>From the user’s perspective, MrDocs behaves like a <strong>well-engineered CLI utility</strong>. It accepts <strong>configuration files</strong>, supports <strong>relative paths</strong>, accepts custom <strong>build options</strong>, and reports <strong>warnings</strong> in a controlled, <strong>compiler-like</strong> fashion. For C++ teams transitioning from <strong>Doxygen</strong>, the <strong>command structure</strong> is somewhat familiar, but the <strong>internal model</strong> is designed for <strong>reproducibility</strong> and <strong>correctness</strong>. Our goal is not just to render <strong>reference pages</strong> but to provide a <strong>reliable pipeline</strong> that any C++ project seeking <strong>modern documentation infrastructure</strong> can adopt.</p>
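<p>To make the workflow concrete, here is a minimal configuration sketch based only on the description above. The keys shown are hypothetical illustrations, not MrDocs’s actual option set; consult the MrDocs documentation for the real names.</p>

```yaml
# Hypothetical mrdocs.yml sketch: the keys below are illustrative,
# not the tool's actual option set.
source-root: ..        # project sources, resolved relative to this file
generator: html        # choose one of the supported output formats
warn-as-error: false   # compiler-like warning control
```

<p>The point is the shape of the workflow: a small declarative file, relative paths resolved for you, and warnings controlled the way a compiler would control them.</p>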
<script src="https://cdn.jsdelivr.net/npm/mermaid@11.12.0/dist/mermaid.min.js"></script>
<div class="mermaid">
graph LR
A[Source] --&gt; B[Clang]
B --&gt; C[Corpus]
C --&gt; D{Plugin Layer}
subgraph Generator
E[HTML]
F[AsciiDoc]
G[XML]
G2[...]
end
D --&gt; E
D --&gt; F
D --&gt; G
D --&gt; G2
E --&gt; H{Plugin Layer}
H --&gt; H2[Published Docs]
F --&gt; H
G --&gt; H
G2 --&gt; H
C --&gt; I[Schema Export]
I --&gt; J[Integrations<br />IDEs &amp; Build Systems]
</div>
<h2 id="2024-lessons-from-a-fragile-prototype">2024: Lessons from a Fragile Prototype</h2>
<p>MrDocs entered 2024 as a proof-of-concept built for Boost.URL. It could document one or two curated codebases and produce AsciiDoc pages for Antora, but the workflow stopped there. The CLI exposed only the scenarios we needed. Configuration options lived in internal notes. The only dependable build path was the script sequence we used inside the Alliance. External users hit errors and missing options almost immediately.</p>
<p><strong>Stability was just as fragile:</strong> We had no <strong>sanitizers</strong>, no <strong>warnings-as-errors</strong>, and inconsistent <strong>CI hardware</strong>. The binaries crashed as soon as they saw unfamiliar code. The pipeline worked only when the input looked like Boost.URL. Point it at slightly different code patterns and it would segfault. Each feature landed as a custom patch, so logic duplicated across generators, and fixing one path broke another.</p>
<p><strong>Early releases:</strong> Release <code>v0.0.1</code> captured that prototype: the early Handlebars engine, the HTML generator, the DOM refactor, and a list of APIs that only the core team could drive. <code>v0.0.2</code> added structured configuration, automatic <code>compile_commands.json</code>, and better SFINAE handling, but the tool was still insider-only.</p>
<p><strong>Leadership transition:</strong> Late in 2024 I became project lead with two initial priorities: <strong>document the gaps</strong> and describe the <strong>true limits</strong> of the system. That set the 2025 baseline—a functional prototype that needed <strong>coherence</strong>, <strong>reproducibility</strong>, and <strong>trust</strong> before it could call itself a product.</p>
<p>The weaknesses we saw here set the agenda for everything 2025 later fixed: configuration coherence, generator unification, schema validation, and even basic options were all missing. The CLI, configuration files, and code drifted apart. Generators evolved independently with duplicated code and inconsistent naming. Editors had no schema to lean on. Extraction rules were ad hoc, which made the output incomplete. CI ran on an improvised matrix with no caching, sanitizers, or coverage, so regressions slipped through. That was the starting point.</p>
<blockquote>
<p>Summary: 2024 produced a working demo, not a reproducible system. Each success exposed another weak link and clarified what had to change in 2025.</p>
</blockquote>
<p>In short:</p>
<ul>
<li>2024 left us with a working prototype but no coherent architecture.</li>
<li>The system could demonstrate the concept, but not sustain or reproduce it.</li>
<li>Every improvement exposed another weak link, and every success demanded more structure than the system was built to handle.</li>
<li>It was a year of learning by exhaustion—and setting the stage for everything that came next.</li>
</ul>
<p>Key 2024 checkpoints align with the timeline below:</p>
<div class="mermaid">
%%{init: {"theme": "base", "themeVariables": {"primaryColor": "#f7f9ff", "primaryBorderColor": "#9aa7e8", "primaryTextColor": "#1f2a44", "lineColor": "#b4bef2", "secondaryColor": "#fbf8ff", "tertiaryColor": "#ffffff", "fontSize": "14px"}}}%%
timeline
title Prototypes
2024 Q1 : Boost.URL showcase
2024 Q2 : CLI gaps
2024 Q3 : Config + SFINAE fixes
2024 Q4 : Leadership transition
</div>
<h2 id="2025-from-prototype-to-mvp">2025: From Prototype to MVP</h2>
<p>I started the year with a gap analysis that compared MrDocs to other C++ documentation pipelines. From that review I defined the minimum viable product and three priority tracks. <strong>Usability</strong> covered workflows and surface area that make adoption simple. <strong>Stability</strong> covered deterministic behavior, proper data structures, and CI discipline. <strong>Foundation</strong> covered configuration and data models that keep code, flags, and documentation aligned. The 2025 releases followed those tracks and turned MrDocs from a proof of concept into a tool that other teams can adopt.</p>
<ul>
<li><strong>v0.0.3 — Consistency.</strong> We replaced ad-hoc behavior with a coherent system: a single source of truth for configuration kept CLI, config files, and docs in sync; generators and templates were unified so changes propagate by design; core semantic extraction (e.g., concepts, constraints, SFINAE) became reliable; and CI hardened around reproducible, tested outputs across HTML and Antora.</li>
<li><strong>v0.0.4 — Foundation.</strong> We introduced precise warning controls and a family of <code>extract-*</code> options to match established tooling, added a JSON Schema for configuration (enabling editor validation/autocomplete), delivered a robust reference system for documentation comments, brought initial inline formatting to generators, and simplified onboarding with a cross-platform bootstrap script. CI gained sanitizers, coverage checks, and modern compilers.</li>
<li><strong>v0.0.5 — Stabilization.</strong> We redesigned documentation metadata to support recursive inline elements, enforced safer polymorphic types with optional references and non-nullable patterns, and added user-facing improvements (sorting, automatic compilation database detection, quick reference indices, improved namespace/overload grouping, LLDB formatters). The website and documentation UI were refreshed for accessibility and responsiveness, new demos (including self-documentation) were published, and CI was further tightened with stricter policies and cross-platform bootstrap enhancements.</li>
</ul>
<p>Together, these releases executed the roadmap derived from the initial gap analysis: they <strong>aligned</strong> the moving parts, <strong>closed</strong> the most important capability gaps, and delivered a <strong>stable foundation</strong> that future work can extend without re-litigating fundamentals.</p>
<div class="mermaid">
%%{init: {"theme": "base", "themeVariables": {
"primaryColor": "#e4eee8",
"primaryBorderColor": "#affbd6",
"primaryTextColor": "#000000",
"lineColor": "#baf9d9",
"secondaryColor": "#f0eae4",
"tertiaryColor": "#ebeaf4",
"fontSize": "14px"
}}}%%
mindmap
root((MVP Evolution))
v0.0.3
Config sync
Shared templates
CI discipline
v0.0.4
Warning controls
Schema
Bootstrap
v0.0.5
Recursive docs
Nav refresh
Tooling polish
</div>
<h2 id="v003-enforcing-consistency">v0.0.3: Enforcing Consistency</h2>
<p><code>v0.0.3</code> is where MrDocs stopped being a collection of one-off special cases and became a coherent system. Before this release, features landed in a single generator and drifted from the others; extraction handled only the narrowly requested pattern and crashed on nearby ones; and options were inconsistent—some hard-coded, some missing from CLI/config, with no mechanism to keep code, docs, and flags aligned.</p>
<p><strong>What changed:</strong> The <code>v0.0.3</code> release laid the missing foundation. We introduced a single source of truth for <strong>configuration options</strong> with TableGen-style metadata: docs, the config file, and the CLI always stay in sync. We added essential Doxygen-like options to make basic projects immediately usable and filled obvious gaps in symbols and doc comments.</p>
<p>We implemented metadata extraction for <strong>core symbol types</strong> and their information—such as template constraints, <strong>concepts</strong>, and <strong>automatic SFINAE</strong> detection. We <strong>unified generators</strong> and templates so changes propagate by design, added <strong>tagfile support</strong> and “lightweight reflection” to documentation comments as <strong>lazy DOM objects</strong> and arrays, and <strong>extended Handlebars</strong> to power the new generators. These features allowed us to create the initial version of the <strong>website</strong> and ensure the documentation is always in sync.</p>
<p><strong>Build and testing discipline:</strong> CI, builds, and tests were hardened. All generators were now tested, <strong>LLVM caching</strong> systems improved, and we launched our first <strong>macOS release</strong> (important for teams working on Antora UI bundles). All of this long tail of performance, correctness, and safety work turned “works on my machine” into repeatable, adoptable output across HTML and Antora.</p>
<p><code>v0.0.3</code> was the inflection point. For the first time, developers could depend on consistent configuration, <strong>shared templates</strong>, and predictable behavior across generators. It aligned internal tools, eliminated duplicated effort, and replaced trial-and-error debugging with <strong>reproducible builds</strong>. Every improvement in later versions built on this foundation.</p>
<details>
<summary>Categorized improvements for v0.0.3</summary>
<ul>
<li><strong>Configuration Options</strong>: enforcing consistency, reproducible builds, and transparent reporting
<ul>
<li>Enforce configuration options are in sync with the JSON source of truth (<a href="https://github.com/cppalliance/mrdocs/commit/a1fb8ec6f23ef0802626329d7ab1e5c4635c52a7" title="refactor(generate-config-info): normalization via visitor">a1fb8ec6</a>, <a href="https://github.com/cppalliance/mrdocs/commit/9daf71fe0539a3a6b926560a15e65fdbd6343356" title="refactor: info nodes configuration file">9daf71fe</a>)</li>
<li>File and symbol filters (<a href="https://github.com/cppalliance/mrdocs/commit/1b67a847db83f329af6cb9f059da7fa071939593" title="feat: file and symbol filters">1b67a847</a>, <a href="https://github.com/cppalliance/mrdocs/commit/b352ba223db0ad0b3d5f7283072b5dffb95eab1e" title="feat: symbol filters listed on docs">b352ba22</a>)</li>
<li>Reference and symbol configuration (<a href="https://github.com/cppalliance/mrdocs/commit/a3e4477f699e1c5c4d489239ad559f9d51823272" title="feat: reference, symbol options">a3e4477f</a>, <a href="https://github.com/cppalliance/mrdocs/commit/30eaabc9a28aa3282bbe9e5b0c8b0e4a2c2c817f" title="docs: reference, symbol options">30eaabc9</a>)</li>
<li>Extraction options (<a href="https://github.com/cppalliance/mrdocs/commit/41411db2848e1fab628dc62ee2e1831628b5d4c7" title="feat: extraction options">41411db2</a>, <a href="https://github.com/cppalliance/mrdocs/commit/1214d94bcf3597bd69caacd5b2648f677d4d197d" title="docs: extraction options">1214d94b</a>)</li>
<li>Reporting options (<a href="https://github.com/cppalliance/mrdocs/commit/f994e47e318d852cc17cd026f7d7cdbcf3df0c5f" title="feat: reporting options">f994e47e</a>, <a href="https://github.com/cppalliance/mrdocs/commit/0dd9cb45cf0168dec028aeb276bd03a419ba3a12" title="docs: reporting options">0dd9cb45</a>)</li>
<li>Configuration structure (<a href="https://github.com/cppalliance/mrdocs/commit/c8662b35fc85dc142f0694f299bb000a0f8899be" title="feat: use structured information for configuration">c8662b35</a>, <a href="https://github.com/cppalliance/mrdocs/commit/dcf5beef5a4b8ea75b24364b9c8a8f2f56d5e6c8" title="feat: generate config documentation">dcf5beef</a>, <a href="https://github.com/cppalliance/mrdocs/commit/4bd3ea42420f20b6a45c545e7b61396567c3201f" title="docs: configuration schema">4bd3ea42</a>)</li>
<li>CLI workflows (<a href="https://github.com/cppalliance/mrdocs/commit/a2dc4c7883917025f0b63b227be7476f3986fd1d" title="feat: CLI orchestrator improvements">a2dc4c78</a>, <a href="https://github.com/cppalliance/mrdocs/commit/3c0f90df53794a02d3c53d25aa4fa5c8a69fbaad" title="docs: CLI quick reference">3c0f90df</a>)</li>
<li>Warnings (<a href="https://github.com/cppalliance/mrdocs/commit/4eab1933ff58330fb2c6753a648a26fba3038118" title="docs: warnings">4eab1933</a>, <a href="https://github.com/cppalliance/mrdocs/commit/5e586f2b03dd7b1eb5a45e51c904d8cbf4f63661" title="feat: warnings">5e586f2b</a>, <a href="https://github.com/cppalliance/mrdocs/commit/0e2dd713ebde919bf0ebc231d9a5795eb99b0d25" title="feat: warning when configuration references missing include directories">0e2dd713</a>)</li>
<li>SettingsDB (<a href="https://github.com/cppalliance/mrdocs/commit/225b2d50835485b746c766df8993e1bb66938d17" title="feat: settings DB">225b2d50</a>, <a href="https://github.com/cppalliance/mrdocs/commit/51639e77b629f00c02aa11afe41a01e12804ef63" title="feat: settings db generator">51639e77</a>)</li>
<li>Deterministic configuration (<a href="https://github.com/cppalliance/mrdocs/commit/b544974105efc225af0af7f9952ef96338fe4c44" title="feat: deterministic configuration order">b5449741</a>)</li>