Animation, Character Essences, Research & Coding

Character Essences Begins

After a few years of improv theatre, animation research and coding, I think it’s time to begin my dream project. Character Essences will combine theatre techniques of character creation with traditional and procedural animation. Drawing on character archetypes from Commedia dell’arte and the physical theatre methods of Jacques Lecoq and Rudolf Laban, the main focus is to find the movement parameters (constants and variables) that define well-established characters.

Once the parameters of movement have been identified, they can be manipulated to create a large variety of characters procedurally. Applications include automated character generation for video games, background characters in films and autonomous robot movement. Another goal is to describe movement patterns compactly, without the need for the large data sets that machine learning requires. My belief is that by focusing on the intrinsics of character movement, rather than the extrinsics, one can better identify its building blocks.

Characters can range from simple primitive models to animals and humans. Early experiments included Expressing Emotions Through Mathematical Functions (see description HERE) for primitive models. I found that combinations of fast, sinusoidal movements can create the illusion of joy in spheres and cubes, for example. These observations link more to psychology, in particular to the Heider-Simmel experiment. If human emotion can be read into such simple entities, surely adding a recognizable shape to the character (e.g. biped, quadruped) will produce an even more relatable experience for the observer. Let the adventure begin!

Keywords: Archetypes, procedural animation, psychology, biomechanics, equations, theatre, characters

Character Essences, Research & Coding

Robin Animator V1.0

The Robin Animator V1.0 is a Maya plugin written in Python for animation prototyping. It can be used to generate basic procedural animations of little bird characters. These animations can then be exported for your games, rendered in your films, or serve as reference for more complex animations.

Motivation

The question behind this project was whether we can create complex bird animations using simple movement components. This links to emergence theory and subsumption architecture. The former describes how a complex system is greater than the sum of its parts, while the latter shows how apparently intelligent-looking behaviour can arise from a set of simple, separate component behaviours. In other words, complex character animation CAN be the result of simple movements working together! In our case, the component behaviours correspond to the way each body part moves and tend to act independently from each other.


Robin geometric prototype model

I chose to focus on little bird characters, robins to be more precise. The reason is that I’m fascinated by how these little creatures move. Their speed seems to be in a different time frame from ours, due to their minute proportions. After observing robins in the real world for a while, I decided to approximate their movement with a geometric prototype model.

Geometry and Movement

Each geometric body part links to a movement component that our robin displays. The following list shows the mapping between the two.

  • The Head
    • Geometry: Sphere and cone
    • Movement: Shake (Rotate Y), Nod (Rotate Z)
  • The Torso
    • Geometry: Sphere scaled along Y axis
    • Movement: Bend (Rotate Z) – Moves with Feet
  • Wings
    • Geometry: Flattened spheres
    • Movement: Lift (Rotate X)
  • Tail
    • Geometry: Extruded cube
    • Movement: Wag (Rotate Y), Lift (Rotate Z)
  • Feet
    • Geometry: Modified cubes
    • Movement: Bend (Rotate Z) – Moves with Torso
RobinCTRL

RobinCTRL circle is at the base of the character. These are its attributes.

The robin’s movement is controlled by the RobinCTRL, a circle at the base of the character. The attributes added to it (e.g. Lift Tail, Wag Tail) are connected to the corresponding rotation fields of each geometric component of the character. These rotation fields usually have minimum and maximum rotation limits to avoid self-intersections.
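For instance, such a connection can be set up with a few lines of Maya Python. This is a minimal sketch of the idea, where the attribute name follows the plugin but the geometry name Head_GEO is an assumption of mine:

import maya.cmds as cmds

# Add a custom, keyable attribute to the controller circle, limited
# so the head cannot rotate into the body
# ('Head_GEO' is an assumed name for this sketch)
cmds.addAttr('RobinCTRL', longName='NodHead', attributeType='double',
             keyable=True, minValue=-30.0, maxValue=90.0)

# Drive the head geometry's Z rotation (the nod) from the new attribute
cmds.connectAttr('RobinCTRL.NodHead', 'Head_GEO.rotateZ')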

The main rule behind the rotation of any character component is a sine wave:

R = A · sin(S · θ)

where R is the rotation angle, A is the amplitude, S is the speed and θ is the angle linked to the current frame. For example, with A = 30 and S = 1, the rotation peaks at 30° when θ reaches π/2. The amplitude and speed can be set from the graphical user interface for each character component. The current frame is usually the one being considered for the addition of a key. To better understand the process, let us have a look at the GUI and the Python code behind it.

The GUI and the Code Behind It


Plugin GUI

The GUI has the following components:

  • Reset Robin button
    • Clears all the key frames of the animation
  • Animation Start Frame
    • Sets the start frame for any animation component
  • Animation End Frame
    • Sets the end frame for any animation component
  • Component tabs
    • Feet control the hopping movement
    • Torso controls the bending of the torso
    • Wings controls the flapping of the wings
    • Head controls the shaking and nodding of the head
    • Tail controls the wagging and lifting of the tail

Each tab usually has fields for setting the frames per movement, the amplitude and the speed. The frames per movement refers to the number of frames needed to perform that action once: a hop taking place over 10 frames is faster than one over 20 frames, for example. The speed parameter can be used to tweak this effect further.
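As an illustration of how one of these tabs might be wired up, here is a minimal Maya Python sketch. The field and callback names are made up for the example; the actual plugin GUI is more elaborate.

import maya.cmds as cmds

# Minimal sketch of one component tab (names are illustrative)
window = cmds.window(title='Robin Animator')
cmds.columnLayout(adjustableColumn=True)

nodFramesField = cmds.intFieldGrp(label='Frames per Nod', value1=10)
nodAmplitudeField = cmds.floatFieldGrp(label='Amplitude', value1=30.0)
nodSpeedField = cmds.floatFieldGrp(label='Speed', value1=1.0)

def onNodPressed(*args):
    # Read the fields back; a real handler would call the animation method
    frames = cmds.intFieldGrp(nodFramesField, query=True, value1=True)
    amplitude = cmds.floatFieldGrp(nodAmplitudeField, query=True, value1=True)
    speed = cmds.floatFieldGrp(nodSpeedField, query=True, value1=True)
    print('Nod: %d frames, amplitude %.1f, speed %.1f' % (frames, amplitude, speed))

cmds.button(label='Nod', command=onNodPressed)
cmds.showWindow(window)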

In the case of the Head tab, once these settings are typed into the fields, the user can press the Nod button, which calls the following method.

#Head nodding animation
import math
import maya.cmds as cmds

def createNodHeadAnimation():
    # Select the controller circle that holds the custom attributes
    cmds.select('RobinCTRL', r=True)
    # Read the animation range and the nod settings from the GUI fields
    getAnimationStart()
    getAnimationEnd()
    getNodHeadFrames()
    getNodHeadAmplitude()
    getNodHeadSpeed()
    flip = 1

    for i in range(animationStart, animationEnd, nodHeadFrames):
        # When mirroring is on, alternate the sign so the head
        # nods down as well as up
        if mirrorNodHead:
            flip = -flip

        for j in range(0, nodHeadFrames, 1):
            if i + j < animationEnd:
                # theta sweeps from 0 to pi over one nod
                theta = j * math.pi / nodHeadFrames
                headRotation = flip * nodHeadAmplitude * math.sin(nodHeadSpeed * theta)

                # Clamp the rotation to avoid self-intersections
                if headRotation > 90.0:
                    headRotation = 90.0
                cmds.setAttr('RobinCTRL.NodHead', headRotation)
                cmds.setKeyframe('RobinCTRL', attribute='NodHead', t=i + j)
            else:
                break

The RobinCTRL circle is first selected. Then the animation start and end frame values are extracted from the GUI. Next, getNodHeadFrames(), getNodHeadAmplitude() and getNodHeadSpeed() extract the frames per nod, the amplitude and the speed values from the GUI. The flip variable switches sign at the start of each cycle when mirroring is enabled, which decides whether the movement is symmetric or not (i.e. nodding down as well as up, rather than nodding up and then snapping back to the start pose briskly).

The two for loops that follow travel through the frames of the animation and set a keyframe at every step. The inner loop creates the individual nodding movement, while the outer loop makes sure all the frames between the start and end frames are covered. The θ angle, which controls the point on the sine wave we’re currently at, goes from 0 to π in nodHeadFrames steps; this is the parameter set by the getNodHeadFrames() method. The last two lines of the inner loop set the calculated headRotation in the NodHead field of the RobinCTRL circle controller and add a keyframe for this new value.
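As a concrete example, with a start frame of 1, an end frame of 61, nodHeadFrames = 10, an amplitude of 30 and a speed of 1, the outer loop starts a new nod at frames 1, 11, 21, 31, 41 and 51. Within each nod, θ sweeps from 0 to π over 10 frames, so the head rotation rises from 0° to 30° and falls back to 0°; with mirroring enabled, every other nod dips to -30° instead.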

Similar steps can be seen in the remaining movement component tabs. Individual methods were written for each tab, but I believe they can be reduced considerably, as the current code is repetitive. For future work, it would be nice to introduce techniques for creating animation sequences (e.g. hop for 30 frames, stop, look around). It would also be useful to save parameter settings, so that popular animations like flying, or whatever the user enjoyed doing, can be recreated.
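As a hint of how that refactoring might look, here is a sketch of a single generic method that could drive any of the RobinCTRL attributes with the same sine rule. This is my illustration of the idea, not code from the plugin:

import math
import maya.cmds as cmds

def createSineAnimation(attribute, start, end, framesPerMove,
                        amplitude, speed, mirror=False):
    # Keyframe RobinCTRL.<attribute> with the R = A*sin(S*theta) rule
    flip = 1
    for i in range(start, end, framesPerMove):
        if mirror:
            flip = -flip
        for j in range(framesPerMove):
            if i + j >= end:
                break
            theta = j * math.pi / framesPerMove
            rotation = flip * amplitude * math.sin(speed * theta)
            cmds.setAttr('RobinCTRL.%s' % attribute, rotation)
            cmds.setKeyframe('RobinCTRL', attribute=attribute, t=i + j)

# One call per tab instead of one hand-written method per tab, e.g.:
# createSineAnimation('NodHead', 1, 61, 10, 30.0, 1.0, mirror=True)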

The code and Maya file are available on GitHub.

Please have a play and tell me what you think! Thank you!

Animation, Character Essences, Research & Coding

E-StopMotion

Digitizing stop motion animation has been my Engineering Doctorate project for the past three years. The aim was to simplify the workload for artists and offer them tools to bring their handmade creations into a 3D environment. The following video shows a simple pipeline for digitizing characters from the game Clay Jam, by Fat Pebble. This is now published work, open for film and game companies to use.

Publications

[1] Anamaria Ciucanu, Naval Bhandari, Xiaokun Wu, Shridhar Ravikumar, Yong-Liang Yang, Darren Cosker. 2018. E-StopMotion: Digitizing Stop Motion for Enhanced Animation and Games. In Motion, Interaction and Games (MIG ’18), November 8-10, 2018, Limassol, Cyprus. ACM, New York, NY, USA, 11 pages. [PDF]



Hellidropter says Hi!

Abstract

Nonrigid registration has made great progress in recent years, taking more steps towards matching characters that have undergone non-isometric deformations. The state of the art is, however, still geared more towards elastic or locally shape-preserving matching, leaving room for improvement in the plastic deformation area.
When the local and global shape of a character changes significantly from pose to pose, methods that rely on shape analysis or proximity measures fail to give satisfying results.
We argue that by using information about the material the models are made from and the general deformation path, we can enhance the matches significantly. Hence, by addressing mainly plasticine characters, we attempt to reverse engineer the deformations they undergo in the hands of an artist.
We propose a mainly extrinsic technique, which makes use of the physical properties we can control (stiffness, volume) to give a realistic match. Moreover, we show that this approach overcomes limitations from previous related methods by generating physically plausible intermediate poses, which can be used further in the animation pipeline.

Project Links

You can follow the research progress on Vimeo and GitHub. This is a work-in-progress project, in collaboration with the Centre for Digital Entertainment at the University of Bath and Fat Pebble, under the supervision of Darren Cosker.

Character Essences, Research & Coding

Cube Limbo

While tutoring Fundamentals of Visual Computing at the University of Bath, I became acquainted with WebGL and ThreeJS. This is a quick weekend project in which cubes of random sizes and animations do the limbo. The students laughed, so mission accomplished!

The main idea is to create a state machine for the procedural animation. Each cube needs to be created, then translated towards the limbo bar, then scaled down to fit underneath it, scaled back up and translated out of view. The snippet of code below gives a glimpse of all the states needed. We start off by setting the cube creation mode to true.

 var cubeCreationMode = true;
 var cubeScaleMode = false;
 var cubeTranslationMode = false;
 var cubeSquashMode = false;
 var cubeSquashTranslationMode = false;
 var cubeStretchMode = false;
 var cubeRemoveMode = false;

ThreeJS, a wrapper around WebGL, runs between the script tags of the HTML file hosting our application. The application is mainly composed of two functions, init() and animate(). The former initializes the camera, the scene and any objects to be rendered in it (e.g. the floor and the limbo bar), the lights and the WebGL renderer. The latter acts as a loop, updating the rendered scene a number of times per second (e.g. 30 fps). If objects move in the scene, they’ll be drawn at their new locations.

function animate()
{
  requestAnimationFrame(animate);
  makeMove(); 
  renderer.render(scene, camera); 
}

As can be seen in the function definition above, animate() requests a new frame to be drawn, makes a move on the objects in the scene and then renders the current frame.

Depending on which state flag is currently true, the cubes in the scene will have a different movement pattern. For example, the code below shows how a cube translates when cubeTranslationMode is true.

If the cube is still farther than the stopping distance from the limbo bar, it keeps translating towards it. The little side-to-side movement of the cube is given by a rotation around its local x axis. The cosine of the elapsed time was used for the translation in the hope of creating an ease-in, ease-out effect, which doesn’t seem very noticeable.

if (cubeTranslationMode)
{
  distanceToStop = tempCube.position.distanceTo(cubeTargetPoint);
  if (distanceToStop > cubeStopDistance)
  {
    //Keep translating towards the limbo bar
    tempCube.position.x += Math.cos(clock.elapsedTime) * cubeTranslationSpeed;
    //Keep rocking from side to side
    if (cubeCurrentRotationIteration <= cubeRotationIterations)
    {
      tempCube.rotation.x += cubeRotationDirection * cubeRotationSpeed;
      cubeCurrentRotationIteration++;
    }
    else
    {
      cubeCurrentRotationIteration = -cubeRotationIterations;
      cubeRotationDirection = -cubeRotationDirection;
    }
  }
...
}

Similar snippets of code can be written for the rest of the animation states. One must remember to set the current flag to false when the animation segment has finished. In this example, once the cube is close enough to the limbo bar, cubeTranslationMode is set to false, while cubeSquashMode becomes true.

Reference: ThreeJS scripting 

Character Essences, Research & Coding

Expressing Emotions Through Procedural Animation

We know from Paul Ekman that there is a baseline for human emotions: we all express the six basic emotions (joy, sadness, anger, disgust, fear, contempt) in more or less the same way. This personal inquiry looked at how we can abstract emotions into a language of trigonometric functions. Is there a link between the energetic, soaring emotion of joy and the upwards movement of a sine wave? For this initial stage of the project, I used simple primitive geometry.
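To give a taste of the idea, here is a minimal Maya Python sketch along the same lines as the project (an illustration of mine, not the original code): a fast, repeated sinusoidal bounce keyframed on a sphere, which to my eye reads as joy.

import math
import maya.cmds as cmds

# Create a simple primitive to animate
sphere = cmds.polySphere(name='joySphere')[0]

amplitude = 2.0  # height of the bounce
speed = 2.0      # higher values read as more energetic
for frame in range(1, 121):
    theta = frame * math.pi / 24.0
    # abs(sin) gives repeated upward bounces rather than a smooth wave
    height = amplitude * abs(math.sin(speed * theta))
    cmds.setKeyframe(sphere, attribute='translateY', value=height, t=frame)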


Abstract

The complexity of emotion and thought an individual can contrive is far from being clearly defined. As the philosopher Winwood Reade suggests, however, “while the individual man is an insoluble puzzle, in the aggregate he becomes a mathematical certainty”. This statement reflects the idea of generally available guidelines, common to all individuals, through which they connect and understand one another. These rules are also found in the expression of emotions. We can describe basic patterns for anger, contempt, disgust, fear, joy and sadness, thus we can attempt to define these templates in a mathematical form. The current study focuses on finding the appropriate elementary functions that contribute to creating so-called target factors, which convey characteristics of the six aforementioned emotions.

Project Links

This project was done during my MSc in Computer Animation and Visual Effects at Bournemouth University, under the supervision of Stephen Bell. You can read our presentation here.

Acting & Improv, Character Essences

The Logic of Movement

A year ago I went to a workshop called The Logic of Movement with Stephen Mottram, an amazingly gifted puppeteer. He showed us how each character has a unique movement code that defines its personality. ‘A piece of cloth can move like a chicken,’ he said, and then showed us exactly that, as we gaped in amazement. Now, writing about handmade characters in my thesis, I have come across some of his work from 1990, Animata.

I’m sharing one of the videos here. I find it fascinating that a simple set of ping pong balls can create such complex characters in our minds. This is DEFINITELY an area of great interest to me, as a future independent researcher.

Character Essences, Research & Coding

Curious Cones

This little project aims to create a random set of cones that “look at” a shiny red sphere passing by. The reference below links to the original Maya + Python YouTube tutorial. The code that follows was saved as two shelf tools, which can be used with any mesh.

The first script (randomInstances.py) creates 30 instances of the first object selected in the scene. These instances are then randomly positioned, rotated and scaled.

#randomInstances.py
import maya.cmds as cmds
import random

random.seed(1234)

# Use the first selected object as the source for the instances
result = cmds.ls(orderedSelection=True)
print('result: %s' % result)
transformName = result[0]

# Group the instances so they can be handled as one unit
instanceGroupName = cmds.group(empty=True, name=transformName + '_instance_grp#')
for i in range(0, 30):
    instanceResult = cmds.instance(transformName, name=transformName + '_instance#')
    cmds.parent(instanceResult, instanceGroupName)
    # Random position, rotation and uniform scale for each instance
    tx = random.uniform(-10, 10)
    ty = random.uniform(0, 20)
    tz = random.uniform(-10, 10)
    rotX = random.uniform(0, 360)
    rotY = random.uniform(0, 360)
    rotZ = random.uniform(0, 360)
    sXYZ = random.uniform(0.1, 1.25)
    cmds.move(tx, ty, tz, instanceResult)
    cmds.rotate(rotX, rotY, rotZ, instanceResult)
    cmds.scale(sXYZ, sXYZ, sXYZ, instanceResult)

# Hide the original and centre the group's pivot
cmds.hide(transformName)
cmds.xform(instanceGroupName, centerPivots=True)

The second script (aimAtFirst.py) takes the first element selected and sets it as the target, while the rest of the selected elements are set as sources, looking at this target along their Y axis. At least one source and one target must be selected for the algorithm to work.

#aimAtFirst.py
import maya.cmds as cmds

selectionList = cmds.ls(orderedSelection=True)
if len(selectionList) >= 2:
    print('Selected items: %s' % selectionList)
    # The first selected object becomes the target
    targetName = selectionList[0]
    selectionList.remove(targetName)
    # Every remaining object aims at the target along its Y axis
    for objectName in selectionList:
        print('Constraining %s towards %s' % (objectName, targetName))
        cmds.aimConstraint(targetName, objectName, aimVector=[0, 1, 0])
else:
    print('Please select 2 or more objects')
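To recreate the “shiny red sphere passing by”, a third snippet could create and animate the target before running aimAtFirst.py. This is a sketch of mine with illustrative names, not part of the original tutorial:

import maya.cmds as cmds

# Create the target sphere and lift it above the cones
sphere = cmds.polySphere(name='targetSphere')[0]
cmds.setAttr(sphere + '.translateY', 10)

# A simple shiny red Blinn material
shader = cmds.shadingNode('blinn', asShader=True, name='redShiny')
cmds.setAttr(shader + '.color', 1, 0, 0, type='double3')
cmds.select(sphere, r=True)
cmds.hyperShade(assign=shader)

# Keyframe the sphere flying past the cones along X
cmds.setKeyframe(sphere, attribute='translateX', value=-15, t=1)
cmds.setKeyframe(sphere, attribute='translateX', value=15, t=120)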

Reference: Autodesk Scripting on YouTube
