ToyWorlds was evaluated at a junior school as part of the Virtual Marionette research.

Picture 1 – Children interacting with the game in the classroom.

Introduction

We challenged children aged 5 and 6 to grab a virtual puppet and play with it.

The main goal was to better understand how children of these ages, who had never been in contact with digital interfaces based on body motion, respond to the interaction. ToyWorlds is a two-player game in which each player controls a virtual marionette with their hands. The left hand manipulates the body of the puppet, while the head follows the right hand's motion: the player grabs the puppet with the left hand and, with the right hand, moves the target at which the puppet is looking. In this way we challenge the participants to use their own body as a marionette controller. This experience is somewhat close to how puppeteers manipulate their puppets using rods, strings, or other kinds of controllers.
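In code terms, the mapping is deliberately simple. Here is a minimal sketch, assuming a motion sensor that reports a 3D position for each hand; the types and names are hypothetical, not the actual game code:

```cpp
// Minimal sketch of the two-hand mapping described above. Any tracker
// that reports 3D hand positions per player would feed this function.
struct Vec3 { float x, y, z; };

struct Puppet {
    Vec3 bodyAnchor;  // the point the rag-doll body hangs from
    Vec3 lookTarget;  // the point the head turns toward
};

// Called once per tracking frame for each player.
void updatePuppet(Puppet& puppet, const Vec3& leftHand, const Vec3& rightHand) {
    puppet.bodyAnchor = leftHand;   // the left hand "grabs" and drags the body
    puppet.lookTarget = rightHand;  // the head follows the right hand
}
```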

Picture 2 – Simulation of the ToyWorlds puppet manipulation using both hands.

In puppetry, the head of the puppet is essential to convey emotions and to help the audience understand the direction of the action or motion. By following the player's right hand with their heads, the virtual puppets call the audience's attention and contribute to a more expressive animation. Although controlling virtual puppets with just our two hands is a very simple scheme, the interaction is very challenging because our hands are not visually represented in the scene. We must infer the position of our hands from the position of the puppet's body and head target, and because the result does not mirror our exact body motion, we need the capacity to abstract from the real motion to the represented motion. This indirect manipulation creates a distance between the player and the puppet which we want to understand; it is one important aspect we want to explore, study, and measure.

Picture 3 – The right hand makes the puppet rotate: the puppet's head follows the player's right hand, forcing the body to rotate.

By assigning a different task to each hand, we challenge the players to take control of their bodies, an abstraction of the body which may be very difficult for children under 6 years old.

The virtual marionettes are rag-doll puppets, dynamically affected by gravity and collisions; the result is a very natural and rich animation.


Picture 4 – Screenshot of the game.

From the inside out: the game narrative!

In Marta's bedroom, strange things happen when she goes to sleep. She dreams of paper boxes that fill her room, creating a big mess. Because she is a little too lazy to tidy her bedroom, she created two little imaginary friends who help her clean up. Joaquim wears a green shirt and is the uncle of António, who wears an orange shirt; both are paper figures and very hard workers. They can only grab the boxes that match their colors: Joaquim grabs the green boxes and António grabs the orange ones. When a box turns blue, either puppet can pick it up just by looking at it with its looking-target square. Green and orange boxes are grabbed with the puppet's body, controlled by the player's left hand, and blue boxes are picked up with the looking target, controlled by the player's right hand. The player scores one point for a green or orange box and ten points for a blue box. Boxes disappear when grabbed, or return to their natural color after a certain time.
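The grab rules reduce to a small decision: which effector may take which box, and for how many points. A hypothetical sketch of that logic, following the description above (the names and structure are mine, the values come from the text):

```cpp
// Hypothetical sketch of the grab-and-score rules described above.
enum class BoxColor { Green, Orange, Blue };
enum class PuppetId { Joaquim, Antonio };  // green shirt / orange shirt

// Returns the points awarded, or 0 if this grab is not allowed.
int tryGrab(PuppetId who, BoxColor box, bool grabbedWithLookTarget) {
    if (box == BoxColor::Blue)
        return grabbedWithLookTarget ? 10 : 0;  // blue boxes: look target only
    if (grabbedWithLookTarget)
        return 0;                               // green/orange: body grab only
    bool colorMatches = (who == PuppetId::Joaquim && box == BoxColor::Green) ||
                        (who == PuppetId::Antonio && box == BoxColor::Orange);
    return colorMatches ? 1 : 0;                // a matching color scores 1 point
}
```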

The players give life to these two little figures and have a minute and a half to grab the boxes. They start by making the calibration pose, and the first to have their pose calibrated takes control of the first player, which is Joaquim.

This game evaluates body motor control, an important aspect of puppetry manipulation.

In the table below we show the game results of this experiment. Two of the 22 children were not able to grab any box because they felt stressed by the competition.

[table id=1 /]

Synchronous Objects is a very interesting framework for visualizing choreographic structures from dance as virtual objects.

It takes a detailed scientific approach to the art of dance, emphasizing the benefits of data visualization for understanding this fundamental form of human expression.

Synchronous Objects is a unique project developed by the choreographer William Forsythe with Ohio State University's Advanced Computing Center for the Arts and Design (ACCAD) and Department of Dance. Their goal was to create visualization tools, based on a very large set of data, for understanding and analyzing choreographies: quantification through the collection of data and its transformation into a series of virtual objects – synchronous objects – that explore choreographic structures and reveal their patterns. As the authors explain: “Our goal in creating these objects is to engage a broad public, explore cross-disciplinary research, and spur creative discovery for specialists and non-specialists alike.”

http://synchronousobjects.osu.edu

 


The Blackmagic Pocket Camera (BMPC) is small and powerful, even when you operate it remotely.

For the project “Peregrino” we searched for small cameras with interchangeable lenses to frame our particularly small scenes. Zoom, focus, and iris control were features we looked for. In our first experiment we used a GoPro because of its size, but the image quality and the fixed lens were a problem: this camera provides no zoom, focus, or iris control. We also experimented with a Sony NEX-7 and liked the way we could control it remotely over WiFi using the Sony API, but the image quality, along with many other small issues such as the camera only running on battery power, made us search for other solutions. The Blackmagic Pocket Camera, on the other hand, offers great image quality and interchangeable lenses on a Super 16 sensor. We could also control the focus and iris remotely through the LANC input.


LANCuage – the syntax to control.
LANC – Logic Application Control Bus System, or Control-L

LANC is a trademarked protocol created by Sony: a bidirectional serial port through which two devices can communicate with each other. For more information about LANC, follow this link: http://www.boehmel.de/lanc.htm

Other video camera brands, like Canon, also implement this protocol to take advantage of the many remote-control devices on the market, but they call it the “Control” port. LANC is a de facto standard remote protocol for video cameras. Blackmagic also implements the LANC protocol in their cameras, although only a few functions are supported.
Available LANC commands on the BMPC: REC; manual focus near/far; manual iris open/close; auto-focus; auto-iris. In our play we use all of these functions intensively, except REC.
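To give an idea of what sending a LANC command involves, below is a minimal Arduino sketch along the lines of the common bit-bang implementations (see the boehmel.de reference above): a LANC frame carries 8 bytes, and a command is written into bytes 0 and 1 of at least five consecutive frames. The pin numbers and transistor wiring are assumptions; only the documented REC toggle code (18 33) is shown, and the BMPC focus/iris codes should be taken from the LANC reference.

```cpp
// Minimal LANC command sender. Pin numbers and wiring are assumptions:
// lancInPin reads the LANC line, lancOutPin drives a transistor that
// pulls the line low.
const int lancInPin  = 11;
const int lancOutPin = 12;
const int bitTimeUs  = 104;  // one LANC bit at ~9600 bps

void setup() {
  pinMode(lancInPin, INPUT);
  pinMode(lancOutPin, OUTPUT);
  digitalWrite(lancOutPin, LOW);  // idle: leave the line to the camera
}

// Write one byte, LSB first. LANC uses inverted logic: a logic 1 pulls
// the line low, which is what driving the transistor HIGH does here.
void writeLancByte(byte b) {
  for (int i = 0; i < 8; i++) {
    digitalWrite(lancOutPin, (b >> i) & 1);
    delayMicroseconds(bitTimeUs);
  }
  digitalWrite(lancOutPin, LOW);  // release the line for the stop bit
}

// A command must repeat in at least 5 consecutive frames to be accepted.
void sendLancCommand(byte b0, byte b1) {
  for (int frame = 0; frame < 5; frame++) {
    // The inter-frame gap is a HIGH level longer than ~5 ms; when pulseIn
    // returns, the line has just dropped into the start bit of byte 0.
    while (pulseIn(lancInPin, HIGH) < 5000) { }
    delayMicroseconds(bitTimeUs);       // skip the start bit
    writeLancByte(b0);                  // byte 0: command group
    delayMicroseconds(10);              // settle inside the stop bit
    while (digitalRead(lancInPin)) { }  // wait for the start bit of byte 1
    delayMicroseconds(bitTimeUs);       // skip it
    writeLancByte(b1);                  // byte 1: command code
  }
}

void loop() {
  sendLancCommand(0x18, 0x33);  // 18 33 = REC start/stop toggle
  delay(5000);                  // focus/iris codes: see the LANC docs
}
```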


A framework to control the iris and focus of the BMPC. We use computer software to control LANC via MIDI, but you can use any MIDI device if you implement a MIDI interface on the Arduino.

QLab is a show-control application for the Mac which we use in our project: an environment that orchestrates all the media and data flow. Sam Kusnetz provides a very interesting write-up on how to prepare a machine for live performance using QLab: http://figure53.com/notes/2013-10-29-prepare-execute-troubleshoot/

 

The workflow: QLab sends MIDI messages to the Hairless MIDI<->Serial middleware, which converts standard MIDI to serial MIDI; the messages are then interpreted by the Arduino, which sends LANC commands (pulses) to the Blackmagic cameras.
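As a sketch of the Arduino side of this workflow, the loop below reads MIDI control-change messages arriving over serial (the baud rate must match the one chosen in Hairless) and dispatches them to the sendLancCommand() function from the sketch above. The CC numbers and the focus codes are placeholders for illustration, not our actual mapping.

```cpp
// Hypothetical MIDI-to-LANC dispatcher. The parsing is deliberately naive:
// it assumes well-aligned 3-byte control-change messages on channel 1.
const byte CC_STATUS     = 0xB0;  // control change, MIDI channel 1
const byte CC_FOCUS_FAR  = 20;    // assumed CC numbers set up in QLab
const byte CC_FOCUS_NEAR = 21;

void sendLancCommand(byte b0, byte b1);  // defined in the LANC sketch above

void setup() {
  Serial.begin(115200);  // must match the baud rate chosen in Hairless
}

void loop() {
  if (Serial.available() >= 3 && Serial.read() == CC_STATUS) {
    byte controller = Serial.read();
    byte value      = Serial.read();
    if (value == 0) return;  // treat value 0 as "button released"
    switch (controller) {
      case CC_FOCUS_FAR:  sendLancCommand(0x28, 0x45); break;  // placeholder code
      case CC_FOCUS_NEAR: sendLancCommand(0x28, 0x47); break;  // placeholder code
    }
  }
}
```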

[vimeo 97412299 w=580&h=326]

You can download all the files above.



Framework for real-time chromakey in QLab.

The main goal was to provide live chromakey to camera cues inside QLab using still images or video files. My work adapts the solution provided by George Toledo, implementing it in QLab.

—————————————————————————————————————
[Q-Chromakey-still.qtz]

Q-Chromakey-still provides an easy way to generate a real-time chromakey from inside QLab using a still image as the background.


You just need to drag Q-Chromakey-still.qtz into the Video Effects tab of a camera/video cue, define the color to key, adjust the threshold and smoothing values, and finally define the image location (path and filename).
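For the curious, those parameters map onto the usual chroma-key formula: the distance of each pixel from the key color is compared against the threshold, and the smoothing value widens the transition band. A generic sketch of that math, not the exact code of the composition:

```cpp
// Generic per-pixel chroma-key math, to illustrate what the "color to key",
// "threshold" and "smoothing" parameters do.
#include <algorithm>
#include <cmath>

struct RGB { float r, g, b; };  // components in [0, 1]

float colorDistance(const RGB& a, const RGB& b) {
    float dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return std::sqrt(dr * dr + dg * dg + db * db);
}

// 0 where the pixel matches the key color, 1 where it is clearly foreground,
// with a linear ramp of width "smoothing" in between.
float keyAlpha(const RGB& pixel, const RGB& keyColor,
               float threshold, float smoothing) {
    float t = (colorDistance(pixel, keyColor) - threshold)
              / std::max(smoothing, 1e-6f);
    return std::min(std::max(t, 0.0f), 1.0f);
}
```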

You can download all the files above.


A framework solution to load and play movies or other live material inside QLab using Syphon.


QLab provides both a Syphon server and a Syphon client, so you can easily insert Syphon footage with a camera cue. But if you want to mix a camera or video cue from QLab with your Syphon source, you are in trouble. You could try to send your video cue to a Syphon output, mix or blend it with the other source, and return it through a Syphon target (camera cue), but this is not a great solution and you will end up with strange effects.

Basically, this framework sends the camera/video cue to a custom Quartz composition through the Video Effects tab, mixes it there with the Syphon source, and returns the image through the same channel.

The main goal of this framework is to be able to mix live images with other live or recorded sources. You can mix the image from a camera with a video using luma key or chroma key.
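As a rough illustration of what mixing with a luma key means here, the sketch below keys the foreground on its brightness and lets the background show through the dark areas. The names and the hard cutoff are simplifications of mine, not the actual composition:

```cpp
// Generic luma-key mix: foreground pixels brighter than keyLevel stay,
// darker ones are replaced by the background.
struct RGB { float r, g, b; };

float luma(const RGB& c) {
    return 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;  // Rec.709 weights
}

// A real key would ramp smoothly instead of cutting hard at keyLevel.
RGB lumaKeyMix(const RGB& fg, const RGB& bg, float keyLevel) {
    float a = (luma(fg) > keyLevel) ? 1.0f : 0.0f;
    return { fg.r * a + bg.r * (1.0f - a),
             fg.g * a + bg.g * (1.0f - a),
             fg.b * a + bg.b * (1.0f - a) };
}
```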

QLab 3 uses Quartz Composer as an image processor, not a video processor, which disables the rendering of sprites, billboards, video importers, and many other objects. This framework provides a workaround for these limitations. Although I provide a solution to import a video, you can actually send graphic shapes from Quartz Composer via Syphon, or 3D graphics from Unity, to be blended with your cameras inside QLab.

You can download all the files above.


A framework solution to play a sequence of images inside QLab using a custom Quartz composition.


In the framework for the “Prometeu” project we used Modul8 as the visual (animation/video) engine. Since QLab now supports video output with multiple live camera feeds, custom surface mapping, and Quartz Composer compositions for visual effects, we chose QLab as the main visual engine.

But QLab version 3 uses Quartz Composer as an image processor only and does not allow object rendering such as sprites, billboards, cubes, spheres, or meshes. Other patches cannot be used in Quartz Composer through a QLab custom composition either, such as Patch Time, Movie Importer, Stop Watch, and Interpolation. Anything related to time and image buffers raises serious issues and won't work as expected.

Because we use a lot of chroma key in the project “Peregrinação”, motion was a requirement inside Quartz Composer. Although I knew some of the Quartz limitations under QLab, I started to search for a solution to play videos using Quartz. My first experiment was to use Syphon, which I explain in another post. Then I found a different approach: loading and playing a series of images.
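The heart of the image-sequence approach is simply mapping elapsed time to one file of a numbered sequence and looping. A hypothetical sketch of that indexing logic (the naming scheme is an assumption):

```cpp
// Hypothetical frame-indexing logic behind an image-sequence player:
// map elapsed time to one file of a numbered sequence, looping.
#include <cmath>
#include <cstdio>
#include <string>

std::string frameForTime(double elapsedSeconds, double fps, int frameCount) {
    int index = static_cast<int>(std::fmod(elapsedSeconds * fps,
                                           static_cast<double>(frameCount)));
    char name[32];
    std::snprintf(name, sizeof(name), "frame_%04d.png", index);  // assumed naming
    return std::string(name);
}
```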

You can download all the files above.



From the 10th to the 23rd of May at Mosteiro São Bento da Vitória, Porto.

A coproduction of Lafontana – Animated Forms and Teatro Nacional São João (TNSJ), from the original text by Fernão Mendes Pinto, performed by Marcelo Lafontana.

Inspired by the adventures of Fernão Mendes Pinto, a Portuguese explorer, as reported in the book “Peregrinação”, published in 1614, Marcelo Lafontana makes a journey through strange stories presented in a miniature world.

Through the intersection of puppetry, in particular the expressiveness of paper theater, with cinematic language, this small world becomes a large space of illusion in which the narrative arises.

The stage is transformed into a film studio where scenery and characters, drawn and cut from cardboard, are handled in front of video cameras. The images are captured by a multimedia architecture providing image processing, image mixing, sound effects, chroma key, and performance animation, all in real time. The final result is projected on a screen, like the sail of a boat that opens to the charms of a journey through the imagination.
from
Fernão Mendes Pinto

staging and interpretation
Marcelo Lafontana

dramaturgy
José Coutinhas

scenography
Sílvia Fagundes

design of the characters and scenery
Luís Félix, Rebeca das Neves

photography
JPedro Martins

music
Eduardo Patriarca

multimedia (architecture and contents)
Luís Grifu

lighting design
Pedro Cardoso

staging assistance
Rita Nova

I implemented some of the digital puppetry methods in this project; the methods and tools are described in other posts on this blog.

 


Street Outdoor
