Supporting Motion Capture Acting Through a Mixed Reality Application


Daniel Kade, Rikard Lindell, Hakan Ürey, Oğuzhan Özcan
Copyright © 2018 | Pages: 26
DOI: 10.4018/978-1-5225-2616-2.ch010

Abstract

Current and future animations seek more human-like motions to create believable animations for computer games, animated movies and commercial spots. A widely used technology is motion capture, which records actors' movements to enrich the motions and emotions of digital avatars. However, a motion capture environment poses challenges for actors, such as short preparation times and the need to rely heavily on their acting and imagination skills. To support these actors, we developed a mixed reality application that shows digital environments while performing, allowing actors to see both the real and the virtual world. We tested our prototype with six traditionally trained theatre and TV actors. The actors indicated that our application helped them get into the demanded acting moods with fewer unwanted emotions. The acting scenario was also better understood, with less need for explanation than when merely discussing the scenario, as is commonly done in theatre acting.

Introduction

Acting for motion capture, as it is performed today, is a challenging work environment for actors and directors. Short preparation times, minimalistic scenery, limited information about characters and the performance, as well as the need to memorize movements and spatial positions, require actors who are trained and able to rely heavily on their acting and imagination skills. In many cases these circumstances can lead to performances with unnatural motions, such as stiff-looking and emotionless movements, as well as less believable characters.

Moreover, acting is an art that requires training, education and preparation to reach perfection. In today's acting for media, computer games and digital environments, these values become subject to time and money constraints (Kade et al., 2013a). Time and budget for a production determine the choice of actors and recording schedules. This also limits the preparation time and increases the demands on actors, such as being able to create a character on the spot, using improvisational acting and having good imagination skills. In our previous research, we have shown that motion capture actors face these challenges and need to be supported when good acting with short preparation times is required (Kade et al., 2013a).

As the performance and the character are often shaped during a motion capture shoot, repetitions and longer recording times can result when actors with less experience or acting education are used. Repetitions of scenes and explanations of scenarios create time overhead, which takes away valuable recording time or increases the costs of a motion capture shoot.

One solution, as explained in our previous research, is to use trained actors and give them time to prepare, to create a character and to rehearse scenes in advance, as is done in other acting areas (Kade et al., 2013a). This can certainly be considered for motion capture shoots for movies and high-budget productions. However, in motion capture for computer games, commercial spots or smaller productions and animations, this is commonly not a solution, mainly because long production times and hiring experienced and trained actors are usually not affordable for small-budget productions.

To explore how to support actors within a motion capture environment, we developed a head-mounted projection display in our earlier research (Kaan et al., 2014). We extend that research by focusing this article on testing the impact of virtual environments as acting support with six professional theatre and TV actors. Our prototype creates a virtual acting scene around the actors and provides audiovisual elements to trigger emotions and to support the actors' performances while acting for motion capture. The head-mounted projection display (HMPD) worn by the actors uses a laser projector in combination with a retro-reflective material as a screen to display the digital imagery. A smartphone serves as the image-processing unit and as a sensor platform that detects head movements. A depiction of this setup can be seen in Figure 1.

Figure 1.

Visualization of a user wearing our HMPD, projecting digital scenery onto a retro-reflective screen that is covering the walls around the user

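The setup described above uses the smartphone as a sensor platform: head movements are detected and the projected scenery is updated so that the virtual environment appears world-fixed as the actor turns. A minimal sketch of that idea is shown below, assuming simple gyroscope integration and a linear yaw-to-pixel mapping; the function names, sample rate and field-of-view value are illustrative assumptions, not the authors' actual implementation.

```python
def integrate_gyro(yaw_deg, gyro_yaw_dps, dt):
    """Integrate one angular-rate sample (deg/s) into an absolute yaw angle."""
    return (yaw_deg + gyro_yaw_dps * dt) % 360.0

def scene_offset_px(yaw_deg, screen_width_px=1920, fov_deg=60.0):
    """Map head yaw to a horizontal pixel offset of the projected scenery,
    so the digital environment stays world-fixed while the head turns."""
    return int(round(yaw_deg / fov_deg * screen_width_px))

# Simulated gyroscope samples: the actor turns their head right at 30 deg/s,
# sampled at 100 Hz for 0.1 s.
yaw = 0.0
for _ in range(10):
    yaw = integrate_gyro(yaw, 30.0, 0.01)

print(round(yaw, 3))         # accumulated head yaw: 3.0 degrees
print(scene_offset_px(yaw))  # horizontal pan of the projected scene: 96 px
```

In a real pipeline the raw gyroscope signal would drift and would typically be fused with accelerometer or rotation-vector data; the sketch only illustrates the rendering-side idea of compensating projected imagery against head rotation.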

The aim of our study is to present and validate a proof of concept, testing whether acting with our prototype allows actors to understand the acting scene, the demands on the character and the demands on the expected performance better than without our acting support application. We hypothesize that emotions such as fear can be triggered through visual and audible effects and could lead to more natural and believable reactions in the actors' performances. The overall goal of our research is to provide an acting support application that can be used during acting on a motion capture shoot floor and by motion capture performers with different levels of acting training.

To test our concept and hypothesis, we evaluated our device with six traditionally trained theatre and TV actors and compared their performances when using our prototype to their performances without any acting support. All actors performed three short acting scenes, once with our device and once without it.
