
SEED

Learning to See in a Dynamic World
Open Access mandate for publications
Funder: European Commission
Project code: 648379
Call for proposal: ERC-2014-CoG
Funded under: H2020 | ERC | ERC-COG
Overall budget: 1,999,410 EUR
Funder contribution: 1,999,410 EUR
Status: Closed
Started: 01 Jan 2016
Ended: 31 Dec 2021
Open Access mandate for research data: No
Description

The goal of SEED is to fundamentally advance the methodology of computer vision by exploiting a dynamic analysis perspective in order to acquire accurate yet tractable models that can automatically learn to sense our visual world; localize still and animate objects (e.g. chairs, phones, computers, bicycles, cars, people, and animals), actions, and interactions; and infer qualitative geometrical and physical scene properties, by propagating and consolidating temporal information with minimal system training and supervision. SEED will extract descriptions that identify the precise boundaries and spatial layout of the different scene components and the manner in which they move, interact, and change over time.

For this purpose, SEED will develop novel high-order compositional methodologies for the semantic segmentation of video data acquired by observers of dynamic scenes, by adaptively integrating figure-ground reasoning based on bottom-up and top-down information, and by using weakly supervised machine learning techniques that support continuous learning towards an open-ended number of visual categories. The system will not only recover detailed models of dynamic scenes, but also forecast future actions and interactions in those scenes over long time horizons, by means of contextual reasoning and inverse reinforcement learning.

Two demonstrators are envisaged: the first for scene understanding and forecasting in indoor office spaces, and the second for urban outdoor environments. The methodology emerging from this research has the potential to impact fields as diverse as automatic personal assistance, video editing and indexing, robotics, environmental awareness, augmented reality, human-computer interaction, and manufacturing.
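To make the weak-supervision idea concrete: in weakly supervised segmentation, training images carry only image-level labels ("a chair is present somewhere"), yet the model must produce per-pixel predictions. A common way to bridge the gap is a multiple-instance-learning style loss that aggregates pixel scores into an image-level score before comparing with the weak label. The sketch below is purely illustrative of that general technique and is not the project's actual method; the function name and shapes are assumptions.

```python
import numpy as np

def mil_loss(pixel_scores, image_labels):
    """Illustrative multiple-instance-learning loss (not SEED's method).

    pixel_scores: (C, H, W) per-class probabilities in [0, 1].
    image_labels: (C,) binary vector of which classes appear in the image.
    """
    # Aggregate pixel scores to one image-level score per class via max
    # pooling: a class is "present" if at least one pixel strongly fires.
    c = pixel_scores.shape[0]
    image_scores = pixel_scores.reshape(c, -1).max(axis=1)
    eps = 1e-7  # numerical guard for log
    # Binary cross-entropy between aggregated scores and the weak labels.
    return -np.mean(image_labels * np.log(image_scores + eps)
                    + (1 - image_labels) * np.log(1 - image_scores + eps))

rng = np.random.default_rng(0)
scores = rng.uniform(size=(3, 4, 4))   # 3 classes over a toy 4x4 "image"
labels = np.array([1.0, 0.0, 1.0])     # classes 0 and 2 present
print(mil_loss(scores, labels))
```

Minimizing such a loss over many images pushes per-pixel scores towards locations that explain the image-level labels, which is one standard route to segmentation without pixel-wise annotation.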

Data Management Plans