- <description>Research scientist at Google DeepMind, in Paris. Previously PhD from Sorbonne University under the supervision of Professor Matthieu Cord, and intern at FAIR Meta. My works have initially explored how ensembling, weight averaging and invariance can improve the robustness and out-of-distribution generalization in deep learning; I am now investigating how the generalization literature can help for alignment, to improve reward modeling, and create AGIs benefiting society as a whole.</description>
+ <description>Research scientist at Google DeepMind, in Paris. Previously research intern at FAIR Meta. My PhD, pursued at Sorbonne University under the supervision of Professor Matthieu Cord, received the 2024 SSFAM award for the best French thesis in ML. My work initially explored how ensembling, weight averaging and invariance can improve robustness and out-of-distribution generalization in deep learning. I am now investigating how model merging and reinforcement learning can improve the alignment of AIs with the world in all its diversity.</description>
  <p>Research scientist at Google DeepMind, in Paris.
- Previously PhD from Sorbonne University under the supervision of Professor Matthieu Cord, and intern at FAIR Meta.
- My works have initially explored how ensembling, weight averaging and invariance can improve the robustness and out-of-distribution generalization in deep learning;
- I am now investigating how the generalization literature can help for alignment, to improve reward modeling, and create AGIs benefiting society as a whole.</p>
+ Previously research intern at FAIR Meta.
+ My PhD, pursued at Sorbonne University under the supervision of Professor Matthieu Cord, received the 2024 SSFAM award for the best French thesis in ML.
+ My work initially explored how ensembling, weight averaging and invariance can improve robustness and out-of-distribution generalization in deep learning.
+ I am now investigating how model merging and reinforcement learning can improve the alignment of AIs with the world in all its diversity.</p>
- During my PhD, I analyzed how ensembling via weight averaging can improve out-of-distribution generalization and alignment.
+ During my PhD, I analyzed how ensembling via weight averaging can improve out-of-distribution generalization and alignment. This received the 2024 award for the best ML thesis in France from <a href="http://ssfam.org/laureats-prix-de-these-ssfam/" target="_blank">SSFAM</a>.