Join the series of recorded livestreams by
established AI experts and donate to charity.
AI FOR UKRAINE
About
Who We Are
AI for Ukraine is a non-profit educational project by AI HOUSE, part of the Roosh ecosystem.

Before the war, we were constantly growing the local AI community by developing talent and creating projects. We have not stopped, and we continue to make Ukraine an AI hub.


What We Do
Why Join Us
World-class AI experts share their cutting-edge knowledge with you.

Donate any amount to register, and help save the lives of Ukrainians.

Every donation is a huge help!
AI for Ukraine helps the Ukrainian AI community to learn and develop during wartime.
We organize sessions by international AI experts and collect donations for the Ukrainian army and the Come Back Alive Foundation. All lectures are now available as recordings.
Welcome
our speakers
Principal Data and Applied Scientist at Microsoft
Distinguished Scientist, VP at Amazon Web Services
Full Professor at Université de Montréal, Founder and Scientific Director of Mila
Co-founder at scikit-learn, Director at INRIA
Research Scientist at UC Berkeley

PhD Candidate at Cornell University
Assistant Professor of CS at UCSB, Director of Scalable Statistical ML Lab
Senior Applied Scientist at Amazon Web Services
Senior Research Scientist at DeepMind
Become our speaker to support Ukraine
Sr Principal Research Manager at Microsoft
Assoc. Professor at Cornell, Researcher at Hugging Face
CEO and Co-Founder of EquiLibre Technologies
Agenda
Talk with Yoshua Bengio:
Bridging the gap between current deep learning and human higher-level cognitive abilities
Available as a recording after registration and donation
Professor at Université de Montréal
About the talk "Bridging the gap between current deep learning and human higher-level cognitive abilities"

How can what has been learned on previous tasks generalize quickly to new tasks or changes in distribution? The study of conscious processing in human brains (and the window into it given by natural language) suggests that we are able to decompose high-level verbalizable knowledge into reusable components (roughly corresponding to words and phrases). This has stimulated research in modular neural networks where attention mechanisms can be used to dynamically select which modules should be brought to bear in a given new context.
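
As a toy illustration of that last idea (a hypothetical sketch of ours, not Bengio's actual architecture), a layer can hold several candidate modules and use soft attention weights to decide how much each module contributes for a given input:

```python
import torch
import torch.nn as nn

class ModularLayer(nn.Module):
    """Toy modular layer: soft attention decides which modules to apply."""

    def __init__(self, dim: int, n_modules: int):
        super().__init__()
        # Each module is a small reusable transformation.
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_modules))
        # The scorer produces per-module attention logits from the input.
        self.scorer = nn.Linear(dim, n_modules)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.scorer(x), dim=-1)             # (batch, n_modules)
        outputs = torch.stack([m(x) for m in self.experts], dim=1)  # (batch, n_modules, dim)
        # Combine module outputs according to the attention weights.
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)         # (batch, dim)

layer = ModularLayer(dim=32, n_modules=4)
print(layer(torch.randn(8, 32)).shape)  # torch.Size([8, 32])
```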

Another source of inspiration for tackling this challenge is the body of research into causality, where changes in tasks and distributions are viewed as interventions.

The crucial insight is that we need to learn to separate (somewhat as in meta-learning) what is stable across changes in distribution, environment, or task from what may be specific to each of them or changing in non-stationary ways over time. From a causal perspective, what is stable are the reusable causal mechanisms, along with the inference machinery for making probabilistic guesses about the appropriate combination of mechanisms (perhaps seen as a graph) in a particular new context. What may change with time are the interventions and other random variables, which are more directly reflected in the observations. If interventions are not observed (we do not have labels fully explaining the changes in tasks in terms of the underlying modules and causal variables), we would ideally like to estimate the Bayesian posterior over the interventions, given whatever is observed.

This research approach raises many interesting research questions, ranging from Bayesian inference and identifiability to causal discovery, representation learning, and out-of-distribution generalization and adaptation, which will be discussed in the presentation.
Talk with Alex Smola:
One and Done - Automatic Machine Learning with AutoGluon
Available as a recording after registration and donation
Distinguished Scientist and VP at Amazon Web Services
About the talk: "One and Done - Automatic Machine Learning with AutoGluon"

In the past, machine learning practitioners had to choose among simplicity, speed, and accuracy when using AutoML tools. In this talk, Alex Smola will show how AutoGluon provides all three for a wide range of applications, including tabular data, text, images, and time series.


Alex will also cover the techniques that make this performance possible, including data fusion for multimodal problems, model zoos, distillation, and handling dependent random variables in time series.
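
For a flavor of the "one and done" workflow, here is a minimal sketch using AutoGluon's tabular API (the file names and the "target" label column are placeholders):

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Load training data; "train.csv" and the "target" column are placeholders.
train_data = TabularDataset("train.csv")

# One call trains, tunes, and ensembles a suite of models under a time budget.
predictor = TabularPredictor(label="target").fit(train_data, time_limit=600)

# Compare every trained model, then predict on new data.
test_data = TabularDataset("test.csv")
print(predictor.leaderboard(test_data))
predictions = predictor.predict(test_data)
```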
Tutorial with Sergiy Matusevych:
Model Compression for Deep Learning
Available as a recording after registration and donation
Principal Data and Applied Scientist at Microsoft
About the tutorial: "Model Compression for Deep Learning"

It is a truth universally acknowledged that the memory footprint of many neural networks can be significantly reduced after training with little or no impact on the model’s accuracy. In this presentation, we will discuss why and when to compress ML models, survey major model compression techniques and best practices, and review state-of-the-art approaches to model compression. We will focus on pruning and quantization, but also cover other techniques such as knowledge distillation, deep mutual learning, and architecture search. This is an introductory-level tutorial for all ML practitioners interested in optimizing their models for production.
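
As a minimal illustration of two of these techniques (our own sketch, not material from the tutorial), the snippet below applies magnitude pruning and dynamic quantization to a small PyTorch model:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small trained network would go here; this stands in for it.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Magnitude pruning: zero out the 30% of weights with the smallest L1 magnitude.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Dynamic quantization: store Linear weights as int8 to shrink the model.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```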
Talk from Gaël Varoquaux:
Evaluating machine learning models
and their diagnostic value
Available as a recording after registration and donation
Research director at INRIA, Co-Founder at scikit-learn
About the talk: "Evaluating machine learning models and their diagnostic value

Gaël will first discuss choosing a metric that is informative for the application, stressing the importance of class prevalence in classification settings. He will then discuss procedures for estimating generalization performance, drawing a distinction between evaluating a learning procedure and evaluating a prediction rule, and show how to attach confidence intervals to performance estimates.
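
To make the point about class prevalence concrete, here is a small scikit-learn sketch (our illustration, not Gaël's material) comparing metrics on an imbalanced problem, with cross-validation giving a rough spread around each estimate:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Imbalanced toy data: roughly 90% negatives, so class prevalence matters.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
clf = LogisticRegression(max_iter=1000)

# Accuracy is flattering under imbalance; balanced accuracy and AUROC
# are usually more informative for the application.
for metric in ("accuracy", "balanced_accuracy", "roc_auc"):
    scores = cross_val_score(clf, X, y, cv=10, scoring=metric)
    # The spread of cross-validation scores gives a rough uncertainty estimate.
    print(f"{metric}: {scores.mean():.3f} +/- {scores.std():.3f}")
```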
Talk from Sebastien Bubeck:
Unveiling Transformers with LEGO
Available as a recording after registration and donation
Senior Principal Research Manager at Microsoft
About the talk "Unveiling Transformers with LEGO"

The discovery of the transformer architecture was a paradigm-shifting event for deep learning. However, these architectures are arguably even harder to understand than, say, convolutional neural networks. In this work, we propose a synthetic task, called LEGO (Learning Equality and Group Operations), to probe the inner workings of transformers. We obtain insights on multi-head attention, the effect of pretraining, and overfitting issues.
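
To convey the flavor of such a task, here is a rough sketch of a LEGO-style chain generator (an approximation for illustration, not the paper's exact specification): each variable is plus or minus a previous one, and a model must resolve the resulting ±1 assignments:

```python
import random

def lego_chain(length: int = 5):
    """Generate a toy LEGO-style chain of +/- variable assignments."""
    names = [chr(ord("a") + i) for i in range(length)]
    first = random.choice([1, -1])
    tokens = [f"{names[0]}={'+' if first == 1 else '-'}1"]
    values = {names[0]: first}
    for prev, cur in zip(names, names[1:]):
        sign = random.choice([1, -1])
        values[cur] = sign * values[prev]  # ground truth the model must infer
        tokens.append(f"{cur}={'+' if sign == 1 else '-'}{prev}")
    return "; ".join(tokens), values

sequence, ground_truth = lego_chain()
print(sequence)      # e.g. "a=-1; b=+a; c=-b; d=-c; e=+d"
print(ground_truth)  # the +/-1 values behind each variable
```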

Joint work by the ML Foundations group at Microsoft Research with co-authors Yi Zhang, Arturs Backurs, Ronen Eldan, Suriya Gunasekar, and Tal Wagner.
Talk from Alexander Rush:
Prompting, Metadatasets, and Zero-Shot NLP
Available as a recording after registration and donation
Associate Professor at Cornell Tech, Researcher at Hugging Face
About the talk: "Prompting, Metadatasets, and Zero-Shot NLP"

The paradigm of NLP tasks is changing, expanding from mostly single-dataset supervised learning in structured form to multi-dataset semi-supervised learning expressed in natural language.

This talk focuses on T0, a large-scale language model trained on multi-task prompted data (Sanh et al., 2022). Despite being an order of magnitude smaller than GPT-3 class models, T0 exhibits similar zero-shot accuracy on unseen task categories.

In addition to the modeling elements, this talk highlights the community processes of collecting data, datasets, and prompts for models of this scale. The work was done as part of BigScience, an international, collaborative effort to study large language models.
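
T0 is available on the Hugging Face Hub, so the zero-shot prompting workflow can be sketched in a few lines (the 3B-parameter variant shown here still requires substantial memory):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# The 3B-parameter T0 variant released by BigScience.
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

# An unseen task expressed as a natural-language prompt.
prompt = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```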
Talk from Anna Rohrbach:
Multimodal Grounded Learning with Vision and Language
Available as a recording after registration and donation
Research Scientist at UC Berkeley
About the talk: "Multimodal Grounded Learning with Vision and Language"

Humans rely on multiple modalities to perceive the world and communicate with each other, most importantly vision and language. Describing what we see to each other is an innate human ability. Importantly, what makes it possible for us to understand each other is that we share a common reality, i.e., we ground concepts in the world around us. Finally, humans also use language to teach each other about new things, i.e., we can learn from language alone. In her research, Anna Rohrbach wants to enable AI models to have similar capabilities: to communicate, to ground, and to learn from language.

A lot of progress has been made on classical vision-and-language tasks, particularly visual captioning, but AI models still struggle at times. One of the core challenges in multimodal learning is precisely grounding, i.e., correctly mapping language concepts onto visual observations. A lack of faithful grounding can harm models, making them biased or causing hallucinations. Finally, even models that can communicate and ground may still need human “advice”, i.e., they may need to learn to behave in a more human-like way. Recently, we have increasingly seen language being used to enhance visual models by enabling zero-shot capabilities, improving generalization, mitigating bias, etc. Anna is deeply interested in building models that can ingest language advice to improve their behavior.

In her talk Anna Rohrbach will cover work that tries to achieve the aforementioned capabilities and discuss challenges as well as exciting opportunities that lie ahead.
Talk from Yu-Xiang Wang:
Towards Practical Reinforcement Learning: Offline Data and Low-Adaptive Exploration
Available as a recording after registration and donation
Assistant Professor of Computer Science at UC Santa Barbara, Director of Scalable Statistical Machine Learning Lab
About the talk: "Towards Practical Reinforcement Learning: Offline Data and Low-Adaptive Exploration"

Standard reinforcement learning requires interactive access to the environment for trial and error. In many practical applications, it is often unsafe, illegal, or costly to deploy untested policies. Offline RL instead optimizes the policy directly from a large logged dataset collected by running the currently deployed systems. This setting is arguably more common in practical applications.

In this talk, Yu-Xiang Wang will first share some recent theoretical advances in offline RL algorithms. He will then discuss the limitations of offline RL relative to its online counterpart and describe a new model, RL with low switching cost, that can get the best of both worlds.
Talk from Maria Antoniak:
Modeling Personal Experiences Shared in Online Communities
Available as a recording after registration and donation
PhD Candidate at Cornell University
About the talk: "Modeling Personal Experiences Shared in Online Communities"

Written communications about personal experiences, such as giving birth or reading a book, can be both rhetorically powerful and statistically difficult to model. Unsupervised natural language processing (NLP) models can be used to represent complex personal experiences and self-disclosures communicated in online communities, but it is also important to re-examine these models for biases and instabilities.

In this talk, Maria will share work that seeks to reliably represent individual experiences within their social contexts and model interpretive dimensions that illuminate both patterns and outliers, while addressing social and humanistic questions. She will share two case studies that highlight both the opportunities and the risks in reusing NLP models for context-specific research questions.
Talk from Francesco Locatello:
Towards Causal Representation Learning
Available as a recording after registration and donation
Senior Applied Scientist at Amazon Web Services
About the talk: "Towards Causal Representation Learning"

Nowadays, there is strong cross-pollination between the fields of machine learning and graphical causality, with increasing mutual interest in benefiting from each other's advances. In this talk, Francesco Locatello will first review fundamental concepts of causal inference and present new approaches to causal discovery using machine learning.
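
One such fundamental concept is that causal mechanisms stay invariant under interventions on their inputs; a tiny simulation (our illustration, not material from the talk) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n: int, x_mean: float):
    """Structural causal model X -> Y: the mechanism Y = 2X + noise is fixed."""
    x = rng.normal(loc=x_mean, scale=1.0, size=n)  # an intervention shifts P(X)
    y = 2.0 * x + rng.normal(scale=0.5, size=n)    # the mechanism P(Y|X) is unchanged
    return x, y

# The regression of Y on X (the mechanism) is stable across interventions,
# even though the marginal distributions of X and Y both change.
for x_mean in (0.0, 3.0):
    x, y = sample(5000, x_mean)
    slope, _ = np.polyfit(x, y, 1)
    print(f"intervention shift {x_mean}: estimated slope ~ {slope:.2f}")  # ~2.0 both times
```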

Then he will discuss more broadly how causality can contribute to modern machine learning research. He will also introduce causal representation learning as an open problem for both communities: the discovery of high-level causal variables from low-level observations. Last but not least, he will discuss his work on learning (more) causal representations and the architectural innovations required to represent causal variables with neural networks.
Talk from Misha Laskin:
Data-driven Reinforcement Learning
with Transformers
Available as a recording after registration and donation
Senior Research Scientist at DeepMind
About the talk: "Data-driven Reinforcement Learning with Transformers"

Over the last few years, self-supervised sequence modelling with transformers has produced Large Language Models (LLMs) with unprecedented generalization capabilities. Due to their ability to learn in-context, LLMs can be adapted to solve diverse downstream tasks via prompting. Reinforcement Learning (RL) research, however, is still dominated by narrow agents that are powerful but do not generalize well beyond the tasks they were trained to solve. To improve the generalization capabilities of RL agents, several recent approaches have explored reformulating RL as a sequential prediction problem with transformers.

In this talk, Misha Laskin will give a tutorial on these methods, which he will broadly refer to as Reinforcement Learning Transformers (RLTs). Misha will show how RL agents can be pre-trained to learn in-context like LLMs. He will then present new work showing that transformers can be pre-trained to reinforcement learn in-context: after pre-training, a single transformer can learn to solve many different tasks autonomously, through trial and error like an RL algorithm, without ever updating its weights.
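
As a minimal sketch of the general recipe (in the spirit of the Decision Transformer line of work, not Misha Laskin's specific method), a trajectory can be flattened into (return-to-go, state, action) token triples and fed to a causal transformer that predicts the next action:

```python
import torch
import torch.nn as nn

class TinyRLT(nn.Module):
    """Minimal return-conditioned sequence model for discrete actions."""

    def __init__(self, state_dim: int, n_actions: int, d_model: int = 64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)            # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)  # state token
        self.embed_action = nn.Embedding(n_actions, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_actions)         # next-action logits

    def forward(self, rtg, states, actions):
        # Interleave (return, state, action) embeddings into one token sequence.
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).flatten(1, 2)                                   # (batch, 3*T, d_model)
        # Causal mask so each token attends only to the past.
        sz = tokens.size(1)
        mask = torch.triu(torch.full((sz, sz), float("-inf")), diagonal=1)
        h = self.encoder(tokens, mask=mask)
        # Read action predictions from the state-token positions.
        return self.head(h[:, 1::3, :])                   # (batch, T, n_actions)

model = TinyRLT(state_dim=4, n_actions=3)
rtg = torch.randn(2, 10, 1)
states = torch.randn(2, 10, 4)
actions = torch.randint(0, 3, (2, 10))
print(model(rtg, states, actions).shape)  # torch.Size([2, 10, 3])
```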
Talk from Martin Schmid:
Search in Imperfect Information Games
Available as a recording after registration and donation
CEO & Co-Founder of EquiLibre Technologies
About the talk: "Search in Imperfect Information Games"

From the very dawn of the field, search with value functions has been a fundamental concept of computer games research. Turing’s chess algorithm from 1950 was able to think two moves ahead, and Shannon’s work on chess from the same year includes an extensive section on evaluation functions to be used within a search. Samuel’s checkers program from 1959 already combines search with value functions learned through self-play and bootstrapping. TD-Gammon improves upon those ideas, using neural networks to learn those complex value functions, which are once again used within search.

The combination of decision-time search and value functions has been present in the remarkable milestones where computers bested their human counterparts in long-standing challenging games: Deep Blue for chess and AlphaGo for Go. Until recently, this powerful framework of search aided by (learned) value functions was limited to perfect information games.

We will talk about why search matters, and about generalizing search for imperfect information games.

We created this educational initiative to keep our AI community going and to support the Come Back Alive Foundation. The Foundation purchases equipment that helps save the lives of Ukrainians, including thermal imaging optics, quadcopters, cars, and security and intelligence systems. It also implements projects to support veterans and their rehabilitation.

We have already transferred 324,000 UAH of your donations to the Come Back Alive Foundation. See the report in our Instagram post and the bank receipt here.
Have a question? Ask our team directly: aiforukraine@aihouse.org.ua
FAQ
How do I get access to the sessions?
If you registered and donated before the live session, you will receive a follow-up email with a link to all recorded livestreams.

If you registered and donated after the live session, you will receive a registration confirmation email with the recordings of all previous sessions.
Are the sessions available now?
Yes, all livestreams were recorded. They are now available, together with materials from the speakers, after registration and donation.
What is the language of the sessions?
All sessions are in English only.
Are there any subtitles?
Unfortunately, no. After the live stream, automatic captions were available in English only. These captions were generated by YouTube's machine learning algorithms, so their quality may vary; for example, they might misrepresent the spoken content due to mispronunciations, accents, dialects, or background noise.
I have submitted my details in the form and donated but have not received any confirmation. What should I do?
Processing submissions and donations may take up to 24 hours. If you do not receive a response within this time, please check your spam folder for an email from us. If there is none, please contact us at aiforukraine@aihouse.org.ua with a screenshot of your donation so we can verify it manually.
Why do you support the Come Back Alive Foundation?
“Come Back Alive” is a foundation providing competent assistance to the military. Since 2014, their key goal has been to make the Armed Forces of Ukraine more efficient and save the lives of the military. Since the beginning of the full-scale invasion in February 2022, they have multiplied their military assistance to the defenders of Ukraine.

The Foundation purchases equipment that helps save the lives of the military, including thermal imaging optics, quadcopters, cars, and security and intelligence systems. In addition, their instructors train sappers, teach pre-medical aid, and facilitate secret missions.

They also develop analytics that inform future state decisions in the defense field, and implement projects to support veteran entrepreneurship and sports rehabilitation.

The Foundation has transparent financial statements. Every donation and purchase can be tracked in real time on the Reporting page.