Planning and Acting to Learn
Most current AI applications (e.g., image classification and, more generally, perception tasks) rely on models pre-trained on datasets available before deployment. However, in many cases, such as agents operating in real-world, open-ended, dynamic environments, it is unrealistic to assume that training datasets covering all possible configurations of every potential environment are available in advance. AI agents should instead autonomously learn, adapt, and extend their models by acting in the environment. In this talk, I will show how agents can automate the process of collecting training data and using it to learn their models, how they can automatically evaluate the quality of a learned model's predictions, and how they can identify the situations in which those predictions are correct. I will also show how such a learning task can be formalized in a symbolic planning framework that the agent can use to autonomously plan its own learning process.