INVERSUS – The Sensitive Machine

Inversus is the sensitive machine that makes no sense: an artistic installation exploring interaction with common objects.

Why should a lamp be used only to illuminate?

Lamps, speakers, and fans are usually used as output interfaces; what would happen if we turned these outputs into input interfaces? Inversus explores this inversion by using lamps as light sensors, speakers as pressure sensors, and fans as blowing sensors. The main concept of Inversus is to invert the meaning of transmission devices, turning them into reception devices: a sensitive machine that captures human interaction to produce sound and visual kinetics. It is a performing instrument that gives life to a mechanical flower, which spins when someone blows into the machine, producing an animated shadow as in shadow puppetry. There is also a virtual marionette inside the machine that reacts to the pressure of the pads; the marionette is rigged with bones that are mapped to the pads, making it squash and stretch to produce animation. Virtual and real animation is generated from human interaction. Built from a washing machine, this audiovisual instrument changes the sense of things by spinning a colorful wheel.
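To make the inversion concrete, here is a minimal sketch of the speaker-as-pressure-sensor idea (not the installation's actual code): a speaker wired into an audio input behaves like a microphone, so a tap on its cone shows up as a spike that a simple threshold can detect. Python with the sounddevice library is assumed:

    import numpy as np
    import sounddevice as sd

    THRESHOLD = 0.05  # tune to the speaker and the input gain

    def on_block(indata, frames, time, status):
        level = np.sqrt(np.mean(indata[:, 0] ** 2))  # RMS of the audio block
        if level > THRESHOLD:
            print(f"pad hit, level {level:.3f}")  # here: trigger the drum sound

    # the speaker coil, wired to a line/mic input, acts as the pressure sensor
    with sd.InputStream(channels=1, blocksize=1024, callback=on_block):
        sd.sleep(10000)  # listen for ten seconds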

There are three different types of interaction:

1. By touching four colored pads that produce sounds the way a drum does (they are pressure sensitive); the virtual marionette reacts to this interaction by moving its arms and legs.

2. By passing a hand over three LEDs, which produce sound like an organ (the sound keeps playing until the hand moves away from the LED; a sketch of this sustain logic follows the list).

3. By blowing on the fan, which changes the frequency of the sound and makes the flower start to spin.
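As an illustration of the second interaction, here is a minimal Python sketch of the sustain logic using the sounddevice library. read_light() is a stand-in for the real LED-as-light-sensor reading, which in the installation comes from the hardware:

    import numpy as np
    import sounddevice as sd

    RATE, FREQ, COVERED = 44100, 220.0, 0.3
    phase = 0  # sample counter, kept across callbacks for a continuous tone

    def read_light():
        # stand-in for the real LED-as-photodiode reading (0.0 dark .. 1.0 bright)
        return 0.0  # always "covered" in this demo

    def callback(outdata, frames, time, status):
        global phase
        t = (phase + np.arange(frames)) / RATE
        gate = 1.0 if read_light() < COVERED else 0.0  # sustain only while covered
        outdata[:, 0] = 0.2 * gate * np.sin(2 * np.pi * FREQ * t)
        phase += frames

    with sd.OutputStream(channels=1, samplerate=RATE, callback=callback):
        sd.sleep(5000)  # play for five seconds, then stop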

Inversus (Transmission → Reception)

This is a 3D rendering simulating the appearance of the installation:

Inversus – interactive installation 3D simulation

Picture from the “CHEIA” exhibition at Póvoa de Varzim, Portugal (5th to 31st of October)

Inversus interactive installation, Póvoa de Varzim, Oct. 2013


Multitouch surface testing (FTIR)
The purpose of these tests is to build a puppetBox, a multitouch surface for manipulating puppets with the fingers.
I made silicone layers with synthetic and with cellular diluent.
Avoid thinning the silicone with cellular diluent, because it takes too long to dry; synthetic diluent works better. Of course, you can use more expensive materials to avoid these issues.


Material used:
– Optoma PK320 projector (80 ANSI lumens)
– EyeCam with the default lens and an 850 nm filter (I changed back to the default lens because I used a very small surface)
– Infrared LED strip (850 nm)
– Baking paper with 2 layers of silicone + synthetic diluent
– CCV 1.2 on Mac and CCV 1.3 on Windows
Conclusions:
Systems:
– The EyeCam performs better on Windows (60 fps) than on Mac OS X (30 fps)
– The Flash CCV demos run about twice as fast on Windows as on Mac OS X
Screens:
– The Rosco screen gives a high-resolution picture and a great touch feel
– Baking paper gives a brighter picture at lower resolution, but it is very low-cost
– The Rosco screen needs more layers of silicone to track blobs
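
On the software side, CCV streams the tracked blobs as TUIO messages over OSC (UDP port 3333 by default). Here is a minimal Python listener, assuming the python-osc package, that prints each cursor update:

    from pythonosc import dispatcher, osc_server

    def on_cursor(address, *args):
        # CCV packs "alive", "set" and "fseq" messages under /tuio/2Dcur;
        # "set" carries: session id, x, y (normalized 0..1), velocity, acceleration
        if args and args[0] == "set":
            session_id, x, y = args[1], args[2], args[3]
            print(f"blob {session_id}: x={x:.3f} y={y:.3f}")

    disp = dispatcher.Dispatcher()
    disp.map("/tuio/2Dcur", on_cursor)
    osc_server.BlockingOSCUDPServer(("127.0.0.1", 3333), disp).serve_forever()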

Experiment made in September 2013

OSCeleton2Ramdance 0.1 beta
Jun 2013

This MaxMSP patch was developed to make it possible to use the Microsoft Kinect (and other sensors) with RamDance.

Kinect framework 1: OSCeleton → OSCeleton2RamDance → RamDance

RAM Dance is an excellent toolkit for creating environments for dancers. It is based on openFrameworks, so it is possible to modify it and create new behaviors.
RAM Dance can also send and receive OSC messages for remote control. This feature offers a great deal of room for exploration.

Although I work with digital puppetry, I found RAM Dance to be a very interesting environment to explore.

By the time this version was finished, another application had been released by “eight”, offering an easier way to connect the Kinect to RamDance. I recommend that application if you are using a Mac with the Microsoft Kinect v1 and don't want to modify any parameters. If you want to use another operating system or a different depth sensor, or to modify parameters, OSCeleton2Ramdance may be the better choice, simply because you can modify the patch to meet your needs. For instance, sending the OSC data on to other ports and applications is easy, as the sketch below illustrates.
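As an example of that kind of forwarding, here is a minimal Python relay (using the python-osc package; the ports are placeholders, not values taken from the patch). It listens on OSCeleton's default port 7110 and mirrors every message to a second application:

    from pythonosc import dispatcher, osc_server, udp_client

    # second consumer of the skeleton data; the port is a placeholder
    forward = udp_client.SimpleUDPClient("127.0.0.1", 9001)

    def relay(address, *args):
        forward.send_message(address, list(args))  # mirror the message unchanged

    disp = dispatcher.Dispatcher()
    disp.set_default_handler(relay)  # catch every OSC address
    osc_server.BlockingOSCUDPServer(("127.0.0.1", 7110), disp).serve_forever()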

Because OSCeleton sends orientation data as a 3×3 matrix, I needed to convert it to an axis-angle format (ffff) to be compatible with RamDance; that is why mat2axis is included.
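For reference, the conversion mat2axis performs is the standard rotation-matrix-to-axis-angle formula. A NumPy sketch of it (not the Max patch itself; the 180° edge case is left out):

    import numpy as np

    def mat2axis(m):
        """3x3 rotation matrix -> (x, y, z, angle), angle in radians."""
        m = np.asarray(m, dtype=float)
        angle = np.arccos(np.clip((np.trace(m) - 1.0) / 2.0, -1.0, 1.0))
        if np.isclose(angle, 0.0):
            return 0.0, 0.0, 1.0, 0.0  # no rotation: any axis will do
        axis = np.array([m[2, 1] - m[1, 2],
                         m[0, 2] - m[2, 0],
                         m[1, 0] - m[0, 1]]) / (2.0 * np.sin(angle))
        return axis[0], axis[1], axis[2], angle

    # a 90-degree rotation about Z should give (0, 0, 1, pi/2)
    print(mat2axis([[0, -1, 0], [1, 0, 0], [0, 0, 1]]))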

Files included in the package (Mac or Windows):
– mat2axis folder (orientation data conversion)
– max.jar (a MaxMSP support file, included just in case)
– OSCeleton2Ramdance-b01.app (Mac application)
– OSCeleton2Ramdance-b01.maxpat (MaxMSP patch)
– readme.txt
What you need:
– OpenNI installed (www.openni.org)
– OSCeleton from Sensebloom (https://github.com/Sensebloom/OSCeleton)
– RamDance (http://interlab.ycam.jp/en/projects/ram/ram_dance_toolkit)

How to use:
1. Open Terminal (Mac) or Command Prompt (Windows) to run OSCeleton (please read the OSCeleton readme file for a better understanding).
You can run the following line (the 'w' flag shows the input image; 'xt' enables orientation data):
./osceleton12 -w -xt -mx 150 -my -150
2. Run the RamDance application.
3. Open OSCeleton2Ramdance-b01 and turn it on.

That's it…

Good exploration!

 

DOWNLOAD


 

Questions or comments: virtual.marionette@grifu.com

 

Future work:
– Connect Wiimotes to control points of the skeleton
– Optimize the patch
– Bridge output data

Virtual Marionette was presented at Noite Europeia dos Investigadores (European Researchers' Night) in Porto, Portugal.

NEI2012

An excellent opportunity to share my work and disseminate digital puppetry throughout the community.

I received a lot of feedback, and realized how hard it is to work in adverse conditions.


real-time film

A collaboration with Teatro de Formas Animadas (TFA) to create a Live Film concept production using shadow puppetry. The concept applies film language and aesthetics to a live puppetry performance, mixing real-time editing, digital performance animation, and digital post-production with traditional shadow puppetry, sand drawings, and live music.

The puppeteer and the musician controlled all the content, which was produced and presented in real time on a screen located between them.

Click here to go to the Prometeu keynote site (pictures and workflow) or here to read more about the project at grifu.com.



Accepted for publication at CHI 2012 (ACM Digital Library)

Use your body as a puppetry controller, giving life to a virtual silhouette through acting.


We present a solution that allows real-time interactive control of virtual shadow puppets for performance animation based on body motion.
With this method, interacting with the virtual silhouette is somewhat similar to hand shadow performance, where the performer shapes his hands to create the illusion of a figure; it is like mixing the performance of an actor with the manipulation of a puppeteer.
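As an illustration of the basic mapping (not the paper's actual implementation), driving a 2D silhouette from tracked 3D joints can be as simple as an orthographic projection plus a remap into screen coordinates. The joint names and coordinate ranges below are hypothetical:

    def project_joint(x, y, z, screen_w=1280, screen_h=720):
        # orthographic projection: drop depth, remap tracker range to pixels;
        # assumes tracker coordinates roughly in [-1, 1] meters around the sensor
        u = (x + 1.0) / 2.0 * screen_w
        v = (1.0 - (y + 1.0) / 2.0) * screen_h  # flip y: screen origin is top-left
        return u, v

    # hypothetical capture frame: joint name -> (x, y, z)
    joints = {"head": (0.0, 0.8, 2.1), "l_hand": (-0.4, 0.1, 1.9)}
    silhouette = {name: project_joint(*pos) for name, pos in joints.items()}
    print(silhouette)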



IPADATA is a digital puppetry experiment using an iPad touch surface to control a puppet in real time.
The iPad can control many control points at once, in real time.

Workflow: the iPad sends messages via TUIpad to OSCulator, which translates these messages and passes them to Animata over the OSC protocol, mapping a joint to each control point. A sketch of the kind of message Animata ends up receiving is shown below.
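The messages that reach Animata are plain OSC. Here is a one-off Python example of moving a joint by hand (python-osc assumed; Animata is commonly documented as listening on port 7110, but verify the port and the joint names against your scene):

    from pythonosc import udp_client

    client = udp_client.SimpleUDPClient("127.0.0.1", 7110)
    # /joint <name> <x> <y> moves the named joint; "head" is a placeholder name
    client.send_message("/joint", ["head", 320.0, 120.0])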


WIIMATA is a digital puppetry experiment using two Wiimotes to control a puppet in real time.

Workflow: the Wiimotes send messages to OSCulator, which translates and scales them and passes them to Animata over the OSC protocol, mapping joints to each Wiimote.

Kine-Puppet Show is another demonstration, using the Microsoft Kinect as a motion capture interface to control a virtual marionette inside a little puppet show environment.

Real Time 3D Puppet Animation

reActor – real-time animation

reActor – real-time 3D puppet animation using a low-cost motion capture system based on the Microsoft depth camera.

Using body movement to control a 3D virtual marionette.
This is a low-cost solution for performance animation (real-time productions).

Hardware:
Microsoft Kinect depth camera for markerless motion capture (accessible and simple to calibrate)
MacBook with OS X 10.6

Software solution:
OpenNI framework
Avin's PrimeSense driver for the skeletal model
Sensebloom OSCeleton for transmitting joint information via the OSC protocol
Autodesk Maya for character modeling
Unity game engine for real-time animation with the OSC data (a sketch of smoothing the incoming joint stream follows)
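Raw Kinect joints jitter, so it helps to smooth them before they drive the rig. Here is a Python sketch of an exponential filter over the OSCeleton /joint stream (python-osc assumed; the port and the smoothing constant are illustrative):

    from pythonosc import dispatcher, osc_server

    ALPHA = 0.3   # smoothing factor: lower = steadier but laggier
    joints = {}   # latest smoothed position per joint name

    def on_joint(address, name, user, x, y, z):
        # OSCeleton sends /joint <name> <user id> <x> <y> <z>
        px, py, pz = joints.get(name, (x, y, z))
        joints[name] = (px + ALPHA * (x - px),
                        py + ALPHA * (y - py),
                        pz + ALPHA * (z - pz))

    disp = dispatcher.Dispatcher()
    disp.map("/joint", on_joint)
    osc_server.BlockingOSCUDPServer(("127.0.0.1", 7110), disp).serve_forever()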

Video Demonstration

INTERFACE: Microsoft Kinect
