


MCI: Demo Session

September 7 @ 17:30 - 18:30

Zoom link:

Protected Area

This content is password-protected. You received the password via email after registering, and you can also find it in our Conference Discord.

 

AmI-VR: An Accessible Building Information System as Case Study Towards the Applicability of Ambient Intelligence in Virtual Reality

Timo Götzelmann, Julian Kreimeier, Johannes Schwabl, Pascal Karg, Christina Oumard, Florian Büttner

Technische Hochschule Nürnberg, Germany

Ambient intelligence represents a paradigm in which the user does not react to the environment, but vice versa. Accordingly, smart environments can react to the presence and activities of users and support them unobtrusively from the background. Especially in the context of accessibility, this offers great potential that has so far only been demonstrated for individual user groups. To overcome this limitation, we propose the automated, user- and context-dependent adaptation of both the modality and the locality of the presentation of building information, realized as an adjustable table and two displays in a prototype for a library information center. To remain independent of material and regulatory restrictions and to improve planability (especially during the ongoing COVID-19 pandemic), we complemented the hardware components with a Virtual Reality simulation, which proved to be very useful. Further optimization and evaluation will be needed for a more in-depth understanding and for dissemination in the long run, yet our prototype aims to foster further activities in the fields of ambient intelligence, accessibility, and virtual reality as a planning tool.
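The adaptation described above can be thought of as a small rule engine that maps a detected user profile and location to an output device, modality, and locality. The following Python sketch is purely illustrative and uses hypothetical device names (the adjustable table, displays, an audio output); it is not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): rule-based selection
# of modality and locality for presenting building information, driven by a
# detected user profile and location. Device names are hypothetical.
from dataclasses import dataclass

@dataclass
class Context:
    profile: str   # e.g. "visually_impaired", "wheelchair_user", "default"
    zone: str      # e.g. "entrance", "info_desk"

def adapt_presentation(ctx: Context) -> dict:
    """Decide which output device to use and how to configure it."""
    if ctx.profile == "visually_impaired":
        # Prefer auditory output close to the user over a visual display.
        return {"device": "speaker", "zone": ctx.zone, "modality": "audio"}
    if ctx.profile == "wheelchair_user":
        # Lower the adjustable table and use the display mounted on it.
        return {"device": "table_display", "table_height_cm": 75, "modality": "visual"}
    # Default: show the information on a wall display in the user's zone.
    return {"device": "wall_display", "zone": ctx.zone, "modality": "visual"}

print(adapt_presentation(Context(profile="wheelchair_user", zone="info_desk")))
```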

Demo-Video:

 


 

An Interactive Machine Learning System for Image Advertisements

Markus Foerste¹, Mario Nadj², Merlin Knaeble², Alexander Maedche², Leonie Gehrmann³, Florian Stahl³

¹collective mind AG, Germany; ²Karlsruhe Institute of Technology, Germany; ³University of Mannheim, Germany

Advertising is omnipresent in all countries around the world and has a strong influence on consumer behavior. Given that advertisements aim to be memorable, attract attention and convey the intended information in a limited space, it seems striking that previous research in economics and management has mostly neglected the content and style of actual advertisements and their evolution over time. With this in mind, we collected more than one million print advertisements from the English-language weekly news magazine “The Economist” from 1843 to 2014. However, there is a lack of interactive intelligent systems capable of processing such a vast amount of image data and allowing users to automatically and manually add metadata, explore images, find and test assertions, and use machine learning techniques they did not have access to before. Inspired by the research field of interactive machine learning, we propose such a system that enables domain experts like marketing scholars to process and analyze this huge collection of image advertisements.
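Interactive machine learning of the kind referred to here is commonly realized as an active-learning loop: the domain expert labels a few images, a model is trained, and the system asks for labels on the images it is least certain about. The sketch below shows this generic pattern with scikit-learn on placeholder feature vectors; it is an assumption about the general approach, not the actual pipeline of the proposed system.

```python
# Generic active-learning loop as a sketch of interactive machine learning;
# random feature vectors stand in for image embeddings of the advertisements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))          # placeholder image features
labels = np.full(1000, -1)                      # -1 = not yet labeled by the expert

# The expert seeds the loop with a handful of labels (simulated here).
seed = rng.choice(1000, size=20, replace=False)
labels[seed] = rng.integers(0, 2, size=20)

for _ in range(5):                              # a few interaction rounds
    known = labels != -1
    model = LogisticRegression(max_iter=1000).fit(features[known], labels[known])

    # Ask the expert about the most uncertain unlabeled images.
    unknown = np.where(~known)[0]
    proba = model.predict_proba(features[unknown])[:, 1]
    uncertain = unknown[np.argsort(np.abs(proba - 0.5))[:10]]

    # In the real system the expert would label these; we simulate it here.
    labels[uncertain] = rng.integers(0, 2, size=len(uncertain))
```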

Demo-Video:


Demonstrating Dothraki: Tracking Tangibles Atop Tabletops Through De-Bruijn Tori

Dennis Schüsselbauer, Andreas Schmid, Raphael Wimmer

University of Regensburg, Germany

We demonstrate usage examples and technical properties of Dothraki, an inside-out tracking technique for tangibles on flat surfaces. An optical mouse sensor embedded in the tangible captures a small (36×36 pixel / 1×1 mm), unique section of a black-and-white De-Bruijn dot pattern printed on the surface.
Our system efficiently searches the pattern space in order to determine the precise location of the tangible with sub-millimeter accuracy. Our proof-of-concept implementation offers a recognition rate of up to 95%, robust error detection, an update rate of 14 Hz, and a low-latency relative tracking mode.
The MuC demonstration encompasses four separate demos that showcase typical application scenarios and features: a magic lens, two tangibles that know each other's relative position, a simple geometry application that measures distances and angles, and tangibles that know on which of multiple surfaces they have been placed.
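The property exploited by this tracking approach is that every fixed-size window of a De Bruijn sequence (and, in two dimensions, of a De Bruijn torus) occurs exactly once, so a single captured window identifies an absolute position. The one-dimensional Python sketch below only illustrates that lookup principle with a binary De Bruijn sequence; the actual system searches a two-dimensional dot pattern and is not reproduced here.

```python
# 1-D illustration of position lookup in a De Bruijn sequence: every window
# of length n occurs exactly once, so one observed window yields the position.
def de_bruijn(k: int, n: int) -> str:
    """Standard construction of a De Bruijn sequence B(k, n) over digits 0..k-1."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, sequence))

n = 8
seq = de_bruijn(2, n)                      # cyclic sequence of length 2**n
# Precompute: window -> position (wrap around to cover the cyclic sequence).
lookup = {(seq + seq[:n - 1])[i:i + n]: i for i in range(len(seq))}

window = seq[100:100 + n]                  # a "captured" window, as the sensor would see
print(lookup[window])                      # -> 100
```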

Demo-Video:


Demonstrating ScreenshotMatcher: Taking Smartphone Photos to Capture Screenshots

Andreas Schmid, Thomas Fischer, Alexander Weichart, Alexander Hartmann, Raphael Wimmer

University of Regensburg, Germany

Taking screenshots is a common way of capturing screen content to share it with others or save it for later. Even though all major desktop operating systems come with a screenshot function, a lot of people also use smartphone cameras to photograph screen contents instead. While users see this method as faster and more convenient, image quality is significantly lower. This paper is a demonstration of ScreenshotMatcher, a system that allows for capturing a high-fidelity screenshot by taking a smartphone photo of (part of) the screen.
A smartphone application sends a photo of the screen region of interest to a program running on the PC which retrieves the corresponding screen region with a feature matching algorithm. The result is sent back to the smartphone.
As phone and PC communicate via WiFi, ScreenshotMatcher can also be used together with any PC in the same network running the application — for example to capture screenshots from a colleague’s PC.
Released as open-source code, ScreenshotMatcher may be used as a basis for applications and research prototypes that bridge the gap between PC and smartphone.
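At the heart of such a pipeline is a standard feature-matching step: keypoints are extracted from the photo and from a freshly taken screenshot, matched, and a homography maps the photographed region back onto the screen image, which can then be cropped and returned. The OpenCV sketch below illustrates this step; it assumes ORB features and Lowe's ratio test, and the released ScreenshotMatcher code may implement it differently. File names are placeholders.

```python
# Sketch: locate the photographed region inside a full screenshot with ORB
# feature matching and a homography (OpenCV). File names are placeholders.
import cv2
import numpy as np

photo = cv2.imread("phone_photo.jpg", cv2.IMREAD_GRAYSCALE)
screen = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_p, des_p = orb.detectAndCompute(photo, None)
kp_s, des_s = orb.detectAndCompute(screen, None)

# Match descriptors and keep only clearly good matches (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des_p, des_s, k=2) if m.distance < 0.75 * n.distance]

# Estimate a homography photo -> screenshot and project the photo's corners
# to obtain the matching screen region.
src = np.float32([kp_p[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

h, w = photo.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
region = cv2.perspectiveTransform(corners, H)
print("screen region corners:", region.reshape(-1, 2))
```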

Demo-Video:


Designing Augmented Reality Workflows for Care Specific Tasks

Marc Janßen, Alexander Volker Droste, Michael Prilla

TU Clausthal, Germany

Augmented Reality head-mounted devices (HMDs) have the potential to provide digital information to people while they are working. Among other advantages, this makes it possible to provide workers with instructions on how to carry out certain tasks, both for training and while they are working on the task. While using HMDs for these purposes has been shown to be beneficial in practice, creating and adapting instructions and corresponding workflows is still a manual, time-consuming task. In this paper, we present a tool that enables the modelling and HMD-specific configuration of such workflows and automatically instantiates them on HMDs. This opens up opportunities to create user-specific workflows on HMDs and to adapt existing workflows to the needs of users. We illustrate the potential of our solution with the example of workflows in care.
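A workflow of this kind can be modelled as an ordered list of steps, each carrying the instruction text plus HMD-specific presentation hints, which a runtime on the headset then walks through. The Python sketch below is a hypothetical illustration of such a data model, not the presented tool; the step contents are made up.

```python
# Hypothetical data model for a modelled workflow: ordered steps with
# instruction text and HMD-specific presentation hints.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    instruction: str
    media: Optional[str] = None     # e.g. an image or 3D overlay shown on the HMD
    confirm_by: str = "voice"       # how the wearer confirms completion

@dataclass
class Workflow:
    name: str
    steps: List[Step] = field(default_factory=list)

    def run(self) -> None:
        """Stand-in for instantiating the workflow on an HMD."""
        for i, step in enumerate(self.steps, 1):
            print(f"[{i}/{len(self.steps)}] {step.instruction} (confirm: {step.confirm_by})")

wound_care = Workflow("Wound dressing change", [
    Step("Disinfect hands", confirm_by="gesture"),
    Step("Remove old dressing", media="dressing_removal.png"),
    Step("Apply new dressing"),
])
wound_care.run()
```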

Demo-Video:


Don’t Catch It! – An Interactive Virtual-Reality Environment to Learn About COVID-19 Measures Using Gamification Elements.

Christian Andreas Krauter, Jonas Axel Siôn Vogelsang, Aimée Sousa Calepso, Katrin Angerbauer, Michael Sedlmair

University of Stuttgart, Germany

The world is still under the influence of the COVID-19 pandemic.
Even though vaccines are being deployed as rapidly as possible, other measures are still necessary to reduce the spread of the virus. Measures such as social distancing or wearing a mask receive a lot of criticism. We therefore demonstrate a serious game that helps players understand these measures better and shows them why they are still necessary.
The player has to avoid other agents to keep their risk of a COVID-19 infection low. The game uses Virtual Reality through a head-mounted display to deliver an immersive and enjoyable experience.
Gamification elements are used to engage the user with the game while they explore various environments.
We also implemented visualizations that help the user with social distancing.
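One simple way to drive such a game is to accumulate an infection-risk score whenever the player is closer to another agent than a distancing threshold, with masks reducing the increase. The following scoring rule is a hypothetical illustration, not the game's actual model.

```python
# Hypothetical infection-risk scoring: risk grows while the player is within
# the distancing threshold of another agent, and masks reduce the increase.
def risk_increment(distance_m: float, dt_s: float,
                   player_masked: bool, agent_masked: bool,
                   threshold_m: float = 1.5) -> float:
    if distance_m >= threshold_m:
        return 0.0
    proximity = 1.0 - distance_m / threshold_m           # 0 at threshold, 1 at contact
    mask_factor = (0.5 if player_masked else 1.0) * (0.5 if agent_masked else 1.0)
    return proximity * mask_factor * dt_s

# Example: 10 s (100 frames of 0.1 s) spent 0.5 m away from an unmasked agent
# while the player wears a mask.
risk = sum(risk_increment(0.5, 0.1, True, False) for _ in range(100))
print(round(risk, 2))
```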

Demo-Video:


In Case You Don’t Know What To Play. Framework for a VR Application that manipulates Time Perception through spatial distortion.

Paul Morat, Aaron Schwerdtfeger, Frank Heidmann

Fachhochschule Potsdam, Germany

In Case You Don’t Know What To Play is a framework for designing a Virtual Reality application that uses spatial distortion to influence the user’s visual perception and is ultimately intended to manipulate their sense of time.
Spatial orientation allows us to judge distances in terms of their temporal component: during self-motion, in particular, we can estimate how long it takes to cover a given distance, because time perception and spatial perception are connected. With the help of Virtual Reality technology, this connection can be broken and a perceptual conflict created.
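One generic way to create such a conflict is a translation gain: the user's physical movement is scaled before it is applied to the virtual viewpoint, so the same physical walk covers a shorter or longer virtual distance and therefore takes a different amount of time than the spatial cues suggest. The sketch below illustrates this general technique; the framework's specific distortion method is not described in the abstract and is not reproduced here.

```python
# Translation gain as one generic form of spatial distortion: per-frame
# physical motion is scaled before it moves the virtual viewpoint, decoupling
# the distance (and time) the user experiences from what they physically walk.
from typing import Tuple

def apply_translation_gain(physical_delta: Tuple[float, float, float],
                           gain: float) -> Tuple[float, float, float]:
    """Scale a per-frame physical movement vector by the distortion gain."""
    x, y, z = physical_delta
    return (x * gain, y * gain, z * gain)

# With a gain of 0.5, physically walking 4 m moves the user only 2 m virtually,
# so reaching a virtual target takes twice as long as expected.
virtual_z = 0.0
for _ in range(400):                                  # 400 frames of 1 cm forward steps
    _, _, dz = apply_translation_gain((0.0, 0.0, 0.01), gain=0.5)
    virtual_z += dz
print(round(virtual_z, 2))                            # -> 2.0
```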

Demo-Video:

 


LOKI: Development of an Interface for Task-Based, Privacy-Friendly Smart Home Control through Local Information Processing (LOKale Informationsverarbeitung)

Paul Gerber, Marvin Heidinger, Julia Stiegelmayer, Nina Gerber

Technische Universität Darmstadt, Germany

Current control systems for smart home devices such as Amazon Echo rely predominantly on centralized data processing on company-owned servers. Since smart home devices potentially collect sensitive data, this frequently raises privacy concerns among users. A privacy-friendly alternative is to process smart home data locally, which would additionally protect smart home systems against hacking attacks. To this end, we developed LOKI in an iterative human-in-the-loop process: an interface that enables the control of a smart household by means of local information processing. In addition, LOKI offers the option of defining routines, which enables task-based control and allows a more natural interaction with smart home devices than device-based control.
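Task-based control of this kind can be pictured as routines that expand a single user task into several device commands, all resolved on a local hub rather than a cloud service. The Python sketch below is hypothetical (routine names, devices, and commands are made up) and is not LOKI's implementation.

```python
# Hypothetical sketch of task-based routines resolved locally: one task name
# expands to several device commands without any cloud round-trip.
ROUTINES = {
    "movie_night": [
        ("living_room_light", "dim", 20),
        ("blinds", "close", None),
        ("tv", "on", None),
    ],
    "leave_home": [
        ("all_lights", "off", None),
        ("heating", "set", 17),
    ],
}

def run_routine(task: str) -> None:
    """Send each command of the routine to the local hub (printed as a stand-in)."""
    for device, action, value in ROUTINES[task]:
        command = f"{device}: {action}" + (f" -> {value}" if value is not None else "")
        print(command)   # in a real system: a call to the local hub, no external server

run_routine("movie_night")
```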

Demo-Video:


Mixed Reality Environment for Complex Scenario Testing

Jakob Peintner, Maikol Funk Drechsler, Fabio Reway, Georg Seifert, Werner Huber, Andreas Riener

Technische Hochschule Ingolstadt – CARISSMA Institute of Automated Driving, Germany

Driver assistance systems are currently evaluated using testing procedures defined, for example, by Euro NCAP. However, these standardized procedures only represent an ideal situation in which a Vulnerable Road User (VRU) crosses straight in front of the vehicle. Furthermore, the testing conditions do not account for the variability that occurs in everyday traffic, for example different weather conditions such as fog, rain, snow, or darkness. Another aspect that is thus not considered in the test catalog is the change in the behavioral patterns of VRUs caused by these adverse weather conditions. In an emergency braking situation, the behavior of the VRU might also be influenced by the interaction with the vehicle: a pedestrian could, for example, be startled by an approaching car, provoking a very different reaction that cannot be reproduced by the dummy motion defined in the testing procedure. To create a more versatile and interactive simulation and testing environment, we developed the Mixed Reality Test Environment MiRE. The goal of MiRE is to enable testing of a broader spectrum of scenarios while also considering the interaction between VRUs and vehicles.
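Scenario variation of the kind motivated above can be expressed as a parameterized configuration that combines environmental conditions with a VRU behaviour model and vehicle parameters. The sketch below is a hypothetical illustration of such a parameterization, not MiRE's actual scenario format.

```python
# Hypothetical parameterization of test scenarios: environmental conditions
# combined with a behaviour model for the Vulnerable Road User (VRU).
from dataclasses import dataclass
from itertools import product

@dataclass
class Scenario:
    weather: str            # e.g. "clear", "fog", "rain", "snow", "night"
    vru_behaviour: str      # e.g. "crosses_straight", "hesitates", "startled_stop"
    vehicle_speed_kmh: float

weathers = ["clear", "fog", "rain", "night"]
behaviours = ["crosses_straight", "hesitates", "startled_stop"]
speeds = [30.0, 50.0]

# Enumerate combinations that a standardized test catalog would not cover.
scenarios = [Scenario(w, b, v) for w, b, v in product(weathers, behaviours, speeds)]
print(len(scenarios), "scenario variants, e.g.", scenarios[0])
```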

Demo-Video:


The (Mobile) Driving Experience Lab: Bridging Research and Knowledge Transfer to the General Public

Clemens Schartmüller¹,², Andreas Riener¹,², Claus Pfeilschifter¹, Franziska Hegner¹

¹Human-Computer Interaction Group, Technische Hochschule Ingolstadt (THI), Germany; ²Institute for Pervasive Computing, Johannes Kepler University Linz (JKU), Austria

In interdisciplinary human-computer interaction (HCI) research, user studies are essential. Because students are easy to recruit, have flexible schedules, etc., these studies are primarily conducted with them. Depending on the application, however, this often does not yield results that are representative of the respective target group(s), and getting members of those groups to, e.g., a driving simulator is tedious, time-consuming, and costly. To address this issue, we have integrated a driving simulator mockup into a trailer to bring the lab directly to the target groups. This allows researchers to diversify their samples for research studies; in addition, the lab can be used for broad knowledge transfer to society. We hypothesize that academics and practitioners will benefit from easier access to the general public as well as to specific target groups by reducing the barrier to study participation. In this paper, we outline the development steps for this lab, present its core features, and discuss potential applications.

Demo-Walkthrough:
Demo-Video:

 


Details

Date:
September 7
Time:
17:30 - 18:30
Category: