After a Ph.D. in Human-Robot Interaction at EPFL in Lausanne and IST in Lisbon, I joined Google to work on reinforcement learning, inverse RL, and game theory. Since 2023, I have been working on RL-based fine-tuning of LLMs from human feedback (RLHF).



