Abstract:
Summary form only given. Unusual event detection, i.e., identifying (previously unseen) rare or critical events, has become one of the major challenges in visual surveillance. The prevailing solution is to describe local or global normalness and to report events that do not fit the estimated models. The majority of existing approaches, however, are limited to a single description (e.g., appearance or motion) and/or build on inflexible (unsupervised) learning techniques, both of which clearly degrade practical applicability. To overcome these limitations, we demonstrate a system that, on the one hand, is capable of extracting and modeling several representations in parallel and, on the other hand, allows for user interaction within a continuous learning setup. Novel yet intuitive concepts for result visualization and user interaction are presented that allow the underlying data to be exploited.
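The core idea of normalness-based detection described above, modeling what "normal" looks like and reporting events that deviate from the model, can be illustrated with a minimal sketch. This is not the paper's method; it is a toy per-feature Gaussian model over hypothetical motion/appearance scores, with an illustrative z-score threshold:

```python
# Sketch of "model normalness, flag outliers": fit a per-feature Gaussian
# to normal training samples and flag test samples whose z-score exceeds
# a threshold. Features and threshold are illustrative assumptions.
from statistics import mean, stdev

def fit_normal_model(samples):
    """Estimate (mean, std) per feature from 'normal' observations."""
    features = list(zip(*samples))
    return [(mean(f), stdev(f)) for f in features]

def is_unusual(model, sample, z_thresh=3.0):
    """Report an event if any feature deviates strongly from normalness."""
    return any(abs(x - mu) / sigma > z_thresh
               for x, (mu, sigma) in zip(sample, model))

# Toy 'normal' observations: (motion magnitude, appearance score).
normal = [(1.0, 0.2), (1.1, 0.25), (0.9, 0.18), (1.05, 0.22), (0.95, 0.21)]
model = fit_normal_model(normal)
print(is_unusual(model, (1.0, 0.2)))   # typical event -> False
print(is_unusual(model, (5.0, 0.9)))   # rare/critical event -> True
```

In a continuous learning setup such as the one the abstract describes, user feedback could update the model incrementally (e.g., adding confirmed-normal samples and refitting), rather than keeping it fixed.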