Wednesdays@DEI: Talks, 12-11-2025

Next Wednesday, November 12, we will host two talks as part of the scouting process.

Author and Affiliation: Eliezer de Souza da Silva, Basque Center for Applied Mathematics (BCAM)

Bio: Eliezer de Souza da Silva is a Postdoctoral Fellow at the Basque Center for Applied Mathematics (BCAM) in Bilbao, Spain. He received his PhD in Computer Science from the Norwegian University of Science and Technology (NTNU), Norway, his MSc in Electrical and Computer Engineering from Unicamp, and his BSc in Computer Engineering from UFES, Brazil. His research explores probabilistic machine learning, Bayesian modeling, and deep generative models, focusing on theoretical and practical contributions for prior predictive analysis, amortized samplers, and causal inference. More details are available at https://sereliezer.github.io/.

Title: Probabilistic Methods for Prior Selection, Amortized Sampling, and Beyond

Abstract: This talk will explore the emerging landscape of probabilistic methods that unify prior selection, amortized sampling, and deliberative inference. In the first part, I will discuss how prior predictive analysis provides a principled approach for setting priors in Bayesian models. By matching prior predictive moments to empirical summaries, we can derive closed-form and computationally efficient criteria for determining latent ranks in matrix and tensor factorization, avoiding costly model re-training or cross-validation. This framework allows us to design flexible, interpretable probabilistic models with robust inductive biases, and can be extended towards Bayesian Neural Networks.

In the second part, I will turn to amortized sampling, a paradigm that generalizes variational inference through the lens of Generative Flow Networks (GFlowNets). GFlowNets learn stochastic generation policies that sample objects proportionally to an unnormalized reward, thereby implementing amortized sampling schemes over compositional spaces such as graphs, sets, or model structures. I will summarize our recent contributions — On Divergence Measures for Training GFlowNets (NeurIPS 2024) and When do GFlowNets Learn the Right Distribution? (ICLR 2025) — which establish new theoretical links between divergence minimization, sampling correctness, and structured inference.

Finally, I will present future directions on deliberative inference processes, where amortized samplers are used to generate diverse candidate solutions that can be evaluated, refined, and combined. This approach envisions inference as a multi-step, self-improving deliberation process — integrating exploration, uncertainty propagation, and diversity optimization — with potential applications ranging from model selection and scientific discovery to human-AI collaborative reasoning.
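To give a flavour of the moment-matching idea from the first part of the talk, here is a toy sketch (not the speaker's actual criterion; the model, the fixed prior variances, and the function name are assumptions for illustration). For a Gaussian matrix factorization X = U V^T + E, the prior predictive variance of each entry grows linearly with the latent rank R, so matching it to the empirical variance gives a closed-form rank estimate with no re-training:

```python
import numpy as np

# Toy sketch: prior predictive moment matching for rank selection in a
# Gaussian matrix factorization X = U V^T + E, where
#   U_ik ~ N(0, s_u^2), V_jk ~ N(0, s_v^2), E_ij ~ N(0, s_n^2).
# Under this prior, Var(X_ij) = R * s_u^2 * s_v^2 + s_n^2, so equating it
# with the empirical variance of the observed entries yields R in closed
# form -- no model re-training or cross-validation needed.

def rank_by_moment_matching(X, s_u=1.0, s_v=1.0, s_n=0.1):
    emp_var = X.var()  # empirical variance over all observed entries
    r = (emp_var - s_n**2) / (s_u**2 * s_v**2)
    return max(1, int(round(r)))

# Synthetic check: generate data from a rank-5 model and recover the rank.
rng = np.random.default_rng(0)
R_true = 5
U = rng.normal(0.0, 1.0, size=(200, R_true))
V = rng.normal(0.0, 1.0, size=(300, R_true))
X = U @ V.T + rng.normal(0.0, 0.1, size=(200, 300))

print(rank_by_moment_matching(X))  # an estimate close to R_true
```

The estimate fluctuates slightly with the realized factors, but for matrices of this size it lands on or next to the true rank; the point is only that the criterion is a one-line formula rather than a search over refitted models.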

-

Author and Affiliation: Ricardo Jorge Ferreira Nobre, INESC-ID

Bio: Ricardo Nobre is a tenured researcher at the INESC-ID R&D institute in Lisbon, Portugal, where he is a member of the High-Performance Computing Architectures and Systems (HPCAS) research area. His interests include high-performance computing, parallel programming, compilers, and machine learning. He has contributed to close to 30 papers in international journals and conferences. Ricardo received his Ph.D. degree in Informatics Engineering from the Faculdade de Engenharia da Universidade do Porto (FEUP), Porto, Portugal.

Title: Unlocking the Potential of High Performance Computing Systems

Abstract: The processing capabilities of modern parallel processors are often underutilized, particularly when it comes to those designed to accelerate specific types of computations. This talk highlights research focused on bridging the gap between theoretical peak performance and what is achieved in practice on high-performance computing systems.