Animation, Character Essences, Research & Coding, Research & Play

Le Quack Walker v1.0

“People’s movements can change your impression of them.” 

(Isao Takahata)

 

 

Notes

Code: This blog post is a piece of personal research and development, which is very close to my heart. I encourage you to read, experiment with the code and Maya file available on GitHub, and reference this post if you use it in your own work.

Research: I believe academic findings should be shared beyond the borders of peer-reviewed journals, and this is a small attempt at achieving that. I do recognize, however, the importance of published work. The current post does not aim to replace the latter, but to supplement it with findings that stem from curiosity rather than academic rigour.

Learn: If you wish to learn more about designing, modelling and procedurally animating simple characters, please have a look at my upcoming course, Little Creatures with a Personality on Thinkific. 

 

Introduction

Patterns of movement, patterns of laughter, patterns of movement that make us laugh. There are many patterns that connect us, but the ones that truly matter speak the truth of our human nature. And if the truth is unique (otherwise it wouldn’t be called “the truth”), then it should have clear characteristics. There should be a code behind truthful behaviours that generate similar reactions in people, regardless of where they come from.

This project started as a question: What is it that makes ducks funny? Moreover, I wanted to know if there was a code behind the movement of a funny duck. After much thought, experimentation and scripting, I realized that the notion of funniness is too complex. Rather than attempting to understand everything at once, I chose a simple behaviour, walking, and investigated its “funny” potential. Since emotions are also at play when people laugh, they were chosen as nuances for the walking behaviour. This provided a palette of walk cycles to experiment with.

This report is part of the Character Essences project, which focuses on recreating believable actions using procedural animation. Actions are often hard to describe, but techniques like Laban Movement Analysis allow dividing complex behaviours into simple motions. Behaviours can thus be described and recreated as a composition of individual movements.

This observation is connected to emergence theory, where complex systems emerge from apparently simple rules. One example is Craig Reynolds’ flocking system (1987), where three simple rules govern the complexity of a moving flock of boids (i.e. birds or fish). These rules are cohesion (boids must stick together), alignment (boids must travel in the same direction) and separation (boids must not collide with each other).

Motivation

Before delving into the complexity of human behaviour, I wanted to have a look at a simple creature, a duck. Ducks are funny little birds, with their wagging tails and wobbly walks. Everything about a duck feels like something out of a cartoon, even its brilliantly coloured feathers and beak. So what is it that makes a duck funny? And can I find the simple motions which form the complex behaviour of a wobbly duck? If the answers to these questions are found, I can then recreate a duck as a procedurally animated character.

Moreover, the feeling a procedural character conveys could inspire a similar effect in an observer. In other words, if we consider a walking duck to be funny, a similar reconstruction done for a procedural duck should also be classified as funny. This could extend to more types of characteristics and behaviours, which can lead to applications in video games, films and psychology.

Procedural characters could be used in simulations and interactions with users to entertain and aid them. If the psychological effect and believability are controllable to some extent, characters can react according to the context of a scene. In a video game, for example, a procedural character can display an angry walk if a user breaks the rules, or a joyful jump if it hasn’t seen the player in a long time.

Background

Firstly, let’s discuss the concept of a “movement code”. I was introduced to this notion in Stephen Mottram’s puppeteering workshop, The Logic of Movement (2017). He spoke about every creature having a well-defined method for moving, which is linked to its size, weight and emotion. As an example, a chicken thrusts its head forward when walking to balance out the larger body weight left behind when taking a step.

The “movement code” is also linked to the more comprehensive Laban Movement Analysis (LMA) technique defined by Rudolf Laban and his students (2011). Laban was a movement theorist who studied and classified complex movement into a simple set of qualities. He looked at the shape of the body and the space it moves in, as well as the conscious efforts humans make when performing an action.

The efforts described in LMA are weight, space, time and flow (Bishko 2014). Each effort varies between two movement qualities. Weight can vary between light and strong/heavy, space between indirect and direct, time between sudden and sustained, while flow varies between free and bound. Combinations of two qualities form states (awake, remote, stable, dream, rhythm and mobile) and three qualities form drives (action, passion, vision and spell).

By focusing on the four efforts, the question is whether these elements can form the basis of complex, emergent behaviour. In Melzer et al. (2019), basic emotions (Ekman 1992) like happiness, sadness and anger were recreated through sets of simple motions. The performing actors were asked to display core movements without knowing which emotion they were attempting to recreate. Participants then classified the overall movements as emotions. The experiments showed not only that emotions can be recreated in the human body by simply repeating certain simple movements, but also that others recognise and empathize with such emotions.

The work I attempted relies on the aforementioned paper, but is not rigorous in its academic methodology. It is more of an early prototype, a hypothesis formed in the imagination, if you will. I wished to know whether similar techniques could be applied to a duck walk and whether people found the results funny. The duck walk was to be generated using mathematical functions, thus forming a repeatable movement code. 

Method

The software used to create the Le Quack Walker V1.0 prototype was Autodesk Maya 2018. A simple 3D mesh was modelled, textured and rigged to approximate the look and mechanics of a duck’s walk. NURBS controllers were parent-constrained to joints in the feet, spine and neck areas to drive the skinned joints.

duck1

Simple duck mesh, texturing and rig prototype.

Instead of keyframing curves by hand, a Python plugin was written to generate keyframes depending on the desired parameters (code available on GitHub). The GUI below shows the options the user has when running the Python script. First, the Animation Start and End Frames are established, together with the Frames per Second (FPS). By default, these values are 0, 120 and 24 respectively. When generating the walk cycle, a keyframe is added automatically for all the controllers every 3 frames, to help create a smooth animation.

GUIDuck
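As a minimal sketch of the keyframing loop described above (this is not the plugin’s actual code: the controller name spine_CTRL and the sine motion are illustrative stand-ins), a value is computed and keyed every 3 frames:

import math
import maya.cmds as cmds

# Key a controller every 3 frames between the default start and end frames.
START, END, STEP, FPS = 0, 120, 3, 24.0
for frame in range(START, END + 1, STEP):
    teta = 2.0 * math.pi * frame / FPS               # angle tied to the frame
    cmds.setAttr('spine_CTRL.translateY', 0.5 * math.sin(teta))
    cmds.setKeyframe('spine_CTRL', attribute='translateY', t=frame)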

 

The next values, Amplitude, Speed, Weight and Direction, control the qualities of movement for the duck walk cycle. Weight and Direction are directly linked to the Laban efforts mentioned in the Background section. Amplitude is the length of the stride, while Speed is how fast the duck goes. Unfortunately, the latter parameter didn’t work out as expected, and the default value of 5 that the plugin starts with is the best-looking option.

Amplitude

Amplitude compensates for the issue with the Speed parameter, as a large stride coincides with a faster walk: more ground is covered in the same amount of time than with a smaller stride. The slider value varies between a low and a high Amplitude, which is mapped to small and large step sizes respectively. The forward translation is then calculated as a function of the amplitude and speed of the character. The side view images below show two frames at Amplitude = 1 and Amplitude = 10 respectively. An amplitude of zero would result in no movement, as the step size is 0.

AmplitudeOneTwoFrames

Frames 1 (right) and 36 (left) of the generated walk at Amplitude = 1

AmplitudeTenTwoFrames

Frames 1 (right) and 36 (left) of the generated walk at Amplitude = 10
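A rough sketch of the relationship just described (the 0.1 scale factor is an assumption, not the plugin’s exact formula): forward translation grows with both stride length and pace.

# Illustrative only: distance covered by a given frame of the walk.
def forward_translation(frame, amplitude, speed, fps=24.0):
    seconds = frame / fps                       # time elapsed by this frame
    return amplitude * speed * 0.1 * seconds    # stride length times pace

# The same 36 frames cover ten times more ground at Amplitude = 10:
print(forward_translation(36, 1, 5))    # 0.75 units
print(forward_translation(36, 10, 5))   # 7.5 units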

Weight

A low Weight value on the available slider represents a light weight, while a high value is a strong/heavy weight. A light weight is similar to a feather floating through the air, while a heavy weight is like the sturdy step of an elephant. I added some additional bounce in the duck’s step for light Weight values. When the weight is heavy, the duck’s movement is closer to the ground, since it is more affected by gravity. The side view images below show two frames from the minimum and maximum Weight values, 0 and 10 respectively. Notice the bounce in the step for the low weight value.

WeightZeroTwoFrames

Two frames of the generated walk at Weight = 0

WeightTenTwoFrames

Two frames of the generated walk at Weight = 10
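A sketch of how such a Weight mapping can work (the constants here are illustrative assumptions): a light weight raises the body and adds bounce, a heavy weight sinks it towards the ground.

import math

# Illustrative only: spine height as a function of the Weight slider.
def spine_height(teta, weight, max_weight=10.0):
    inv_weight = (max_weight - weight) / max_weight    # 1 = light, 0 = heavy
    base = 1.0 - 0.05 * weight                         # heavier sits lower
    bounce = 0.5 * inv_weight * math.sin(2.0 * teta)   # two bounces per stride
    return base + bounce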

Direction

Direction is direct for low slider values and indirect for high values. A direct motion is converted into little or no body rotation around the vertical (Y) axis. An indirect motion has more rotation around the Y axis, as well as some supporting side-to-side translation along the X axis. The side view images below show two frames at low and high Direction values respectively. Notice the exaggerated sway in the second image, where the movement is indirect.

DirectionZeroTwoFrames

Two frames of the generated walk at Direction = 1

DirectionTenTwoFrames

Two frames of the generated walk at Direction = 10
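And a sketch of the Direction mapping, under the same kind of illustrative assumptions: indirectness scales both the yaw rotation and the supporting sway.

import math

# Illustrative only: yaw and sway as functions of the Direction slider.
def indirect_motion(teta, direction, max_direction=10.0):
    indirectness = direction / max_direction
    rotate_y = 20.0 * indirectness * math.cos(teta)     # body yaw (degrees)
    translate_x = 0.3 * indirectness * math.sin(teta)   # side-to-side sway
    return rotate_y, translate_x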

Emotions

Once these parameters were established, the question was whether combinations of them would reveal complex behaviour. There are many ways to express behaviour and personality, but among the most common are emotions. Two of the six basic emotions described by Paul Ekman (1992) were chosen: joy and sadness. Attempts were made to recreate these emotions on top of the neutral walk cycle of the duck. The neutral state was estimated at Amplitude = 5, Speed = 5, Weight = 5 and Direction = 0.

The available parameters were mapped to the parameters suggested in Melzer et al. (2019) for recreating emotions in humans. For example, their paper mentions that joy was recognized by participants in their study when elements like lightness, jumping and rising movements were observed. These could be replicated easily with a small weight value, specifically Weight = 1.

Sadness, on the other hand, was recognized in Melzer et al. (2019) as passive, sinking weight, along with other parameters. A high value, Weight = 9, was used for recreating this effect. Amplitude and Direction were also experimented with, but did not offer significant results in expressing joy or sadness.
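Collected as a small set of presets (the values are taken from the combinations reported in this post; the dictionary itself is just an illustrative way of organizing them):

# Parameter presets for the walks discussed in this post. Speed stayed at
# its default of 5 throughout; video numbers refer to the survey below.
PRESETS = {
    'neutral': {'amplitude': 5, 'speed': 5, 'weight': 5, 'direction': 0},
    'joy':     {'amplitude': 5, 'speed': 5, 'weight': 1, 'direction': 0},  # video 4
    'sadness': {'amplitude': 3, 'speed': 5, 'weight': 9, 'direction': 0},  # video 13
}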

Animation Graphs and Code

Once the desired parameters are established, for example Amplitude = 5, Speed = 5, Weight = 1, Direction = 0, the Generate button is pressed in the GUI Python plugin. This activates a sequence of functions that reset controller values and extract values from the GUI fields. These values are then fed into the generateWalk() function. A snippet of this function is shown below.

Notice that trigonometric functions like sine and cosine are used with an angle theta as a parameter. This angle increases depending on the current frame and frames per second. The Amplitude parameter influences the amplitude of the trigonometric functions, resulting in the step size. In the snippet below, the variables currentFootTranslationY and currentFootTranslationZ are the coordinates of a point moving along an ellipse.

The ellipse flattens when touching the ground, as conditioned by the two if statements that clamp negative values to zero. The resulting curve is the trajectory for the left foot inverse kinematics (IK) handle. The joint angles for the rest of the leg are calculated automatically by Maya’s Rotate Plane IK solver. An example of the left foot animation graphs can be seen in the first image below.

The spine translation along the Y axis factors in the inverse Weight parameter. The resulting sine wave shifts between higher and lower average values depending on whether the Weight is low or high respectively. An example of the Translate Y animation graph for a low Weight value can be seen in the second image below.

Notice that the maximum value is 1.5, while the minimum value is -0.5. Visually, this translates to the character bouncing up more than it sinks towards the ground. Finally, in the third image you can see the animation graph for the spine’s Rotate Z values, which are directly proportional to the Rotate Y variable. The latter is the side-to-side movement of the spine, given by a cosine function.

#Feet: teta advances with the current frame and FPS; amplitude sets the step size
rotationAmplitude = amplitude * extraAmpFactor
currentLeftFootTranslationX = (amplitude / 3.0) * weight * math.fabs(math.sin(0.5 * teta))
currentRightFootTranslationX = currentLeftFootTranslationX - amplitude
#Point moving along an ellipse (asq and bsq scale the two radii)
currentFootTranslationY = -amplitude * math.sin(teta) / asq
currentFootTranslationZ = amplitude * math.cos(teta) / bsq
currentLeftFootRotationX = -rotationAmplitude * math.sin(teta) / 2.0

#Flatten the ellipse while the foot is touching the ground
if (currentFootTranslationY < 0):
    currentFootTranslationY = 0
if (currentLeftFootRotationX < 0):
    currentLeftFootRotationX = 0

currentLeftToeTranslationY = -currentFootTranslationY / asq
currentLeftFootTranslationZ = currentLeftToeTranslationY / asq

#Spine: bounce is driven by the inverse Weight parameter
currentSpineTranslationX = currentLeftFootTranslationX - amplitude / 2.0
currentSpineTranslationY = (weightCosValue / 2.0) + invWeight * math.sin(2 * teta) / asq
currentSpineRotationY = -weight * rotationAmplitude * math.cos(teta)
currentSpineRotationZ = currentSpineRotationY / 3.0
currentTailRotationY = currentSpineRotationY / 2.0
                
#Assign values to controllers

Results

Fourteen combinations of low and high parameter values for Amplitude, Weight and Direction were made. The resulting animations were playblasted out of Autodesk Maya and uploaded as private videos on YouTube. These videos were then inserted into a Google Forms survey with thirty questions.

At the start and end of the survey, participants were asked how happy they were. This was to check whether the duck animations had any effect on the overall state of the observers. At the start, 73.5% of the participants were above 5 on a scale from 1 (not happy) to 10 (super happy). At the end, 79.4% of the participants were above 5. Although the change is not significant, it does show a tendency towards a more cheerful disposition after watching procedurally animated ducks.   

For each of the fourteen videos, participants were asked to name the emotion they thought the video expressed, with Happy, Sad, Angry, Fearful, Disgusted, Neutral and Other as potential answers. They were then asked whether the duck in the video was funny.

Thirty-four participants answered the questionnaire anonymously. The most successful question was the one for video four (Amplitude = 5, Speed = 5, Weight = 1, Direction = 0). Over 90% of the participants recognized the light Weight animation as a Happy movement. In the graph below, Excited was classified as Happy.

94% of the participants also found this animation funny, with a score of 5 or above, where 1 is not funny and 10 is super funny. 61% of participants gave a score of 7 or above to the same question. This result repeated itself for both the emotion and the degree of funniness for duck 12 (Amplitude = 7, Speed = 5, Weight = 1, Direction = 5), but to a lesser degree. About 67% of participants found the duck Happy, while over 85% said the duck was funny.

Duck 4 walk cycle results

Sadness and fear were often found at similar percentages. For example, video thirteen (Amplitude = 3, Speed = 5, Weight = 9, Direction = 0) was classified as Sad by 38% of the participants, while 41.2% classified it as Fearful. This was triggered by a high weight while walking, with the respective parameter Weight = 9. This observation is backed up by the sinking motion described by Melzer et al. (2019) when defining sadness.

It is worth noting that Amplitude has an influence on the results. Videos five and thirteen both had Weight = 9, but only the latter was classified as sad and fearful. Amplitude = 5 for video five, while Amplitude = 3 for video thirteen. This might be linked to the enclosing behaviour recognized in fear and the passive weight specific to sadness (Melzer et al. 2019).

Discussion and Conclusion

This report illustrated the creation of a procedural walk cycle for a duck character with the option of varying the movement style through a set of parameters (Amplitude, Weight and Direction). Two of these parameters, Weight and Direction, are linked to Laban’s efforts of movement. In specific combinations, Laban’s efforts have been shown to convey emotions. Thus the duck walk cycles can be nuanced through such emotions.

The survey results were conclusive only for the expression of joy, with sadness coming second. The most indicative parameter for such emotions was Weight. Low weights were found to illustrate happiness, while high weights are more representative of sadness. These results are similar to the characteristics given to such emotions in a study on human movement and its link to emotions (Melzer et al. 2019). Moreover, people were more prone to find a duck funny when it was displaying a happy walk cycle.

More work is needed, however, to further understand the mechanics of stylized walk cycles, the emergence theories behind emotions and what comprises a funny behaviour. In the future, comparisons can be made with similar techniques from the fields of physics simulation or machine learning, rather than purely mathematical procedural animation.

It must be said, however, that this report shows how simple movements have the potential to convey complex behaviours. Along with emergence theories, procedural animation could unlock nature’s hidden patterns of movement using the simplest of tools. In other words, we are slightly closer to discovering the “movement code” of a duck, which opens possibilities for other, more complicated beings, maybe even humans.

References

  • Bishko, L. (2014). Animation Principles and Laban Movement Analysis: Movement Frameworks for Creating Empathic Character Performances. Research Showcase at Carnegie Mellon University: Nonverbal Communication in Virtual Worlds: Understanding and Designing Expressive Characters.
  • Ekman, P. (1992). An argument for basic emotions. Cognition and Emotion, 6(3-4), 169–200. [Link here]
  • Laban, R., Ullmann, L. (2011). The Mastery of Movement, Fourth Edition. A Dance Books Publication.
  • Melzer, A., Shafir, T., Tsachor, R. P. (2019). How Do We Recognize Emotion From Movement? Specific Motor Components Contribute to the Recognition of Each Emotion. Frontiers in Psychology, 10, 1389. DOI: 10.3389/fpsyg.2019.01389. ISSN 1664-1078. [Link here]
  • Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. SIGGRAPH Computer Graphics, 21(4), 25–34. DOI: https://doi.org/10.1145/37402.37406
  • Laughing Matters | Comedy Documentary | Earful Comedy. (1985). Redistributed by Earful Comedy, narrated by and starring Rowan Atkinson. [Video] [Link here]
  • Mottram, S. (2017). The Logic of Movement. Workshop held as part of the Puppeteering Festival, Bristol.
Animation, Character Essences, Research & Coding

Character Essences Begins

After a few years of improv theatre, animation research and coding, I think it’s time to begin my dream project. Character Essences will combine theatre techniques of character creation with traditional and procedural animation. Drawing on character archetypes from Commedia dell’arte and the physical theatre methods of Jacques Lecoq and Rudolf Laban, the main focus is to find movement parameters (constants and variables) that define well-established characters.

Once the parameters of movement have been identified, they can be manipulated to create a large variety of characters procedurally. The uses include automated character generation in video games, extra characters in films and autonomous robot movements. One of the goals is also to simplify movement patterns without the need for large data sets, as in machine learning. My belief is that by focusing on the intrinsics rather than the extrinsics of character movement, one can better identify the corresponding building blocks.

Characters can range from simple primitive models to animals and humans. Early experiments included Expressing Emotions Through Mathematical Functions (see description HERE) for primitive models. I found that combinations of fast, sinusoidal movements can create the illusion of joy in spheres and cubes, for example. These observations are linked more to psychology and to the Heider-Simmel experiment. If human emotion can be identified in such simple entities, surely adding a recognizable shape to the character (e.g. biped, quadruped) will produce more relatable experiences for the observer. Let the adventure begin!

Keywords: Archetypes, procedural animation, psychology, biomechanics, equations, theatre, characters

Acting & Improv, Research & Coding, Research & Play

World Problems: Ep.1 – Global Warming and the Magic Box Designs

“Scientists have recently determined that it takes approximately 400 repetitions to create a new synapse in the brain – unless it is done with play, in which case it takes between 10 and 20 repetitions.” (Dr. Karyn Purvis)

Motivation of World Problems Series

I’m starting Ana’s Research and Play with Episode 1 of the World Problems (WP) series. WP will have longer episodes (~15 mins) that combine ideation, design, prototyping and testing of sometimes crazy inventions. It is intended to experiment with possible solutions to help “save” the world. The approach is a playful one, rather than a worried and tense one. The reasoning is my belief that people achieve their best when fear of failure is out of the way.

The inventions that result from this series might or might not be viable. In this sense, WP presents a humble approach to saving the world. My ambition is not to come up with precise inventions that will give accurate results (although those are very welcome). In my experience, having such pressures, under the constraint of limited time, leads to mediocre solutions and headaches. What I am trying to do is follow my curiosity and allow myself to both innovate and fail (first attempt at learning).

In the best case scenario, the world will benefit from an invention. In the worst case scenario, I will have brainstormed some ideas that fill people with such indignation at my nerve that they’ll just go and make their own creations. Empathy also motivates me, and it is necessary to prevent an attitude of carelessness and lack of responsibility. It is important, however, to use empathy as a driving energy rather than an energy-draining one. We should all make a contribution to saving the world we live in, but it mustn’t destroy us in the process – unless it’s a sacrifice of love, but that’s a different story. Let’s begin!

Episode 1 Summary

In this episode, I come up with a few crazy designs to help save the world from global warming, using random household items. It all starts with choosing the problem out of a list of possible world problems. I then have a warm-up (of my mind, not the world) by finding different uses for household items via lateral thinking.

The Magic Box, which is often seen in clowning exercises, comes into play. This leads to quick-fire brainstorms from Experimental Ana, who gives up grammar for creativity. It all ends with a set of crazy invention designs (see below). One of them, or a combination of up to three of them, could be prototyped in the future.

The Research

Episode 1 is linked more to brainstorming ideas, but research elements also find their way through. Please see the video description for the references used. Here are some research inspired elements from the video.

  • Choosing the problem
  • Motivation of play based approach
  • Review of a few accidental discoveries
  • Background on Lateral Thinking
  • Ideation of designs
  • Designing possible prototypes

The Play

The structure of Episode 1 is linked to an improv game called Fix it MacGyver! In this game, a character called MacGyver is given a problem and three random items. He or she has to come up with a solution to fix the problem by utilizing the given items.

For example, let’s say someone’s house is on fire. MacGyver has a cat, a sandwich and a chainsaw. One solution is of course to use the cat as a scout to check if there are any survivors. The chainsaw can be used to cut through the fallen parts of the house, so that the trapped victims can be reached. Once they are out, a sandwich is provided for nutrition, while waiting for the firemen.

The idea of the game is not to “get it right”, since there are “no mistakes, just opportunities in improv” (Tina Fey). Letting your thoughts imagine the wildest solutions is very liberating because it cuts out inner criticism. What improvisers experience with this game is also linked to Julia Cameron’s theory, described in her book The Artist’s Way. She recommends evading the inner critic by free writing three pages of whatever comes to mind every morning.

My Experimental Ana from the video uses this technique of free and spontaneous thought. Censoring of ideas is kept to a minimum, giving priority to the joy of discovering where my own thoughts take me. In the paraphrased words of Keith Johnstone, one of the pillars of improv, “You must trust that your mind, God or the giant moose will tell you what to say.”

The elements of play in Episode 1 are the following:

  • Defining the game guidelines (box of objects + find different uses for them)
  • Magic box game linked to clowning exercise
  • Lateral thinking solutions to a problem break patterns of thinking
  • Experimental Ana uses free and spontaneous thought
  • Experimental Ana uses jump and justify improv technique (say the word first and then justify its meaning)
  • Creating designs with commitment

Designs

After the research and play collaboration, seven designs emerged. These are not necessarily viable designs, but they open up a world of possibilities! Please have a look and tell me which of these designs you would like prototyped in the future!

BadAirSmasher, BoaCleaner, EDangeredSniffer, FlowerShapedFlowerpot, FreshLifeBalancer, MinivacuumShoes, SmartRope

Character Essences, Research & Coding

Robin Animator V1.0

Note: The code and Maya file are available on GitHub.

The Robin Animator V1.0 is a Maya plugin written in Python for animation prototyping. It can be used to generate basic procedural animations of little bird characters. These animations can then be exported for your games, rendered in your films or can serve as reference for more complex animations.

Motivation

The question behind this project was whether we can create complex bird animations using simple movement components. This can be linked to emergence theory and subsumption architecture. The former talks about how a complex system is greater than the sum of its parts, while the latter shows how apparently intelligent-looking behaviour can arise from a set of simple, separate component behaviours. In other words, complex character animation CAN be the result of simple movements working together! In our case, the component behaviours link to the way each body part moves, and they tend to act independently from each other.

I chose to focus on little bird characters, robins, to be more precise. The reason behind this is that I’m fascinated by how these little creatures move. Their speed seems to be in a different time frame from ours, due to their minute proportions. After watching robins in the real world for a while, I decided to approximate their movement with a geometric prototype model.

Geometry and Movement

The geometric body parts link to the movement components that our robin displays. The following list shows the link between the two.

  • The Head
    • Geometry: Sphere and cone
    • Movement: Shake (Rotate Y), Nod (Rotate Z)
  • The Torso
    • Geometry: Sphere scaled along Y axis
    • Movement: Bend (Rotate Z) – Moves with Feet
  • Wings
    • Geometry: Flattened spheres
    • Movement: Lift (Rotate X)
  • Tail
    • Geometry: Extruded cube
    • Movement: Wag (Rotate Y), Lift (Rotate Z)
  • Feet
    • Geometry: Modified cubes
    • Movement: Bend (Rotate Z) – Moves with Torso

The robin’s movement is controlled by the RobinCTRL, a circle at the base of the character. The added attributes inside it (e.g. Lift Tail, Wag Tail etc.) are connected to the corresponding rotation fields for each geometric component of the character. These rotation fields usually have minimum and maximum rotation limits to avoid self-intersections.
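A minimal sketch of how such a connection can be set up (the attribute name WagTail, the joint name tail_joint and the limits are illustrative, not the ones in the Maya file):

import maya.cmds as cmds

# Add a limited, keyable attribute on the controller and wire it to a rotation.
cmds.addAttr('RobinCTRL', longName='WagTail', attributeType='double',
             minValue=-30.0, maxValue=30.0, defaultValue=0.0, keyable=True)
cmds.connectAttr('RobinCTRL.WagTail', 'tail_joint.rotateY')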

The main rule behind the rotation of any character component is a sine wave:

R = A * sin(S * θ)

where R is the rotation angle, A is the amplitude, S is the speed and θ is the angle linked to the current frame. The amplitude and speed can be set from the graphical user interface for each character component. The current frame is usually the one being considered for the addition of a key.
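As a quick numeric check of this rule (with illustrative values; the θ stepping from 0 to π matches the nodding code further below):

import math

# One movement: theta steps from 0 to pi across its frames.
A, S, frames = 30.0, 1.0, 10
for j in range(frames):
    teta = j * math.pi / frames
    print(j, A * math.sin(S * teta))   # rotation angle R at frame j

To better understand the process, let us have a look at the GUI and the Python code behind it.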

The GUI and the Code Behind It

The GUI has the following components:

  • Reset Robin button
    • Clears all the key frames of the animation
  • Animation Start Frame
    • Sets the start frame for any animation component
  • Animation End Frame
    • Sets the end frame for any animation component
  • Component tabs
    • Feet control the hopping movement
    • Torso controls the bending of the torso
    • Wings controls the flapping of the wings
    • Head controls the shaking and nodding of the head
    • Tail controls the wagging and lifting of the tail

Each tab usually has fields for setting up the frames per movement, the amplitude and the speed. The frames per movement refers to the number of frames necessary to perform that action once. A hop taking place over 10 frames is faster than a hop over 20 frames, for example. Speed can, of course, be used to tweak this effect.

In the case of the Feet tab, once these settings are typed into the fields, the user can press the Hop button. The method for the head nodding movement, which works in the same way, is shown below.

#Head nodding animation (excerpt; the getters populate module-level values)
import math
from math import pi
import maya.cmds as cmds

def createNodHeadAnimation():
    cmds.select('RobinCTRL', r=True)
    #Read the animation range and the nod settings from the GUI fields
    getAnimationStart()
    getAnimationEnd()
    getNodHeadFrames()
    getNodHeadAmplitude()
    getNodHeadSpeed()
    flip = 1

    #Outer loop: cover all the frames between the start and end of the animation
    for i in range(animationStart, animationEnd, nodHeadFrames):
        if mirrorNodHead:
            flip = -flip    #mirror every other nod

        #Inner loop: one nod, with teta going from 0 to pi
        for j in range(0, nodHeadFrames, 1):
            if (i+j < animationEnd):
                teta = j*pi/nodHeadFrames
                headRotation = flip * nodHeadAmplitude * math.sin(nodHeadSpeed * teta)

                #Clamp the rotation and key the NodHead attribute
                if headRotation > 90.0:
                    headRotation = 90.0
                cmds.setAttr('RobinCTRL.NodHead', headRotation)
                cmds.setKeyframe('RobinCTRL', attribute='NodHead', t=i+j)
            else:
                break

The RobinCTRL circle is first selected. Then the animation start and end frame values are extracted from the GUI. Next, getNodHeadFrames(), getNodHeadAmplitude() and getNodHeadSpeed() extract the frames per movement, amplitude and speed values from the GUI. The flip factor switches sign when mirroring is enabled, deciding whether the movement should be symmetric or not (i.e. hopping up and down, rather than hopping up and then jumping to a down pose briskly).

The two for loops that follow travel through the frames of the animation and set a keyframe at every step. The inner loop is the one that creates the individual nodding movement, while the outer loop makes sure all the frames between the start and end frames are covered. The teta angle, which controls the point on the sine wave we’re currently at, goes from 0 to π in nodHeadFrames steps; nodHeadFrames is the parameter set by the getNodHeadFrames() method. The last two lines of the inner for loop set the calculated headRotation in the NodHead field of the RobinCTRL circle controller and add a keyframe for this new value.

Similar steps can be seen in the remaining movement component tabs. Individual methods were written for each tab, but I believe they can be reduced considerably, as the current code is repetitive. For future work, it would be nice to introduce techniques for creating animation sequences (e.g. hop for 30 frames, stop, look around etc.). Also, saving parameter settings would be useful for recreating popular animations like flying, or whatever the user enjoyed doing.

Please have a play with the code (link to GitHub code and Maya file) and tell me what you think! Thank you!

Animation, Character Essences, Research & Coding

E-StopMotion

Digitizing stop motion animation has been my Engineering Doctorate project for the past three years. The aim was to simplify the workload for artists and offer them tools to bring their handmade creations into a 3D environment. The following video shows a simple pipeline for digitizing characters from the game Clay Jam, by Fat Pebble. This is now published work, open for film and game companies to use.

Publications

[1] Anamaria Ciucanu, Naval Bhandari, Xiaokun Wu, Shridhar Ravikumar, Yong-Liang Yang, Darren Cosker. 2018. E-StopMotion: Digitizing Stop Motion for Enhanced Animation and Games. In MIG 18: Motion, Interaction and Games (MIG 18), November 8-10, 2018, Limassol, Cyprus. ACM, New York, USA, 11 pages. [PDF]

 

hellidropter2_1_0024-e1496355516523.png

Hellidropter says Hi!

Abstract

Nonrigid registration has made great progress in recent years, taking more steps towards matching characters that have undergone non-isometric deformations. The state of the art is, however, still linked more to elastic or locally shape-preserving matching, leaving room for improvement in the plastic deformation area. When the local and global shape of a character changes significantly from pose to pose, methods that rely on shape analysis or proximity measures fail to give satisfying results. We argue that by using information about the material the models are made from and the general deformation path, we can enhance the matches significantly. Hence, by addressing mainly plasticine characters, we attempt to reverse engineer the deformations they undergo in the hands of an artist. We propose a mainly extrinsic technique, which makes use of the physical properties we can control (stiffness, volume) to give a realistic match. Moreover, we show that this approach overcomes limitations of previous related methods by generating physically plausible intermediate poses, which can be used further in the animation pipeline.

Project Links

You can follow the research progress on Vimeo and GitHub. This is a work-in-progress project, in collaboration with the Centre for Digital Entertainment at the University of Bath and Fat Pebble, under the supervision of Darren Cosker.

Character Essences, Research & Coding

Cube Limbo

While tutoring Fundamentals of Visual Computing at the University of Bath, I got acquainted with WebGL and ThreeJS. This is a quick weekend project, where cubes of random sizes and animations do the limbo. The students laughed, so mission accomplished!

The main idea is to create a state machine for the procedural animation. Each cube needs to be created, then translated towards the limbo bar, then scaled down to fit underneath it, scaled back up and translated out of view. The snippet of code below gives a glimpse of all the states needed. We start off by setting the cube creation mode to true.

 var cubeCreationMode = true;
 var cubeScaleMode = false;
 var cubeTranslationMode = false;
 var cubeSquashMode = false;
 var cubeSquashTranslationMode = false;
 var cubeStretchMode = false;
 var cubeRemoveMode = false;

ThreeJS, a wrapper around WebGL, works between the script tags of the HTML file where our application lives. This application is mainly composed of two functions, init() and animate(). The former is used to initialize the camera, the scene and any objects that need to be rendered in it (e.g. the floor and limbo bar), the lights and the WebGL renderer. The latter function is used as a loop, which updates the rendered scene at a number of frames per second (e.g. 30 fps). If objects move in the scene, they’ll be drawn at their new location.

function animate()
{
  requestAnimationFrame(animate);
  makeMove(); 
  renderer.render(scene, camera); 
}

As can be observed in the function definition above, animate() requests a new frame to be drawn, makes a move on the objects in the scene and then renders the current frame.

Depending on the current true state, the cubes in the scene will have a different movement pattern. For example, the code below shows how a cube translates when cubeTranslationMode is true.

If the cube is still far enough from the limbo bar, it will translate towards it. The little side-to-side movement of the cube is given by a rotation around the local x axis. The cosine of time was used for the translation in the hope of creating an ease-in and ease-out effect, which doesn’t seem very noticeable.

if (cubeTranslationMode) 
{
  distanceToStop =  tempCube.position.distanceTo(cubeTargetPoint); 
  if (distanceToStop > cubeStopDistance)   
    { 
      //Keep translating
      tempCube.position.x += Math.cos(clock.elapsedTime) * cubeTranslationSpeed;
      //Keep rotating 
      if (cubeCurrentRotationIteration <= cubeRotationIterations) 
      { 
        tempCube.rotation.x += cubeRotationDirection*cubeRotationSpeed;
        cubeCurrentRotationIteration++; 
      } 
      else 
      { 
        cubeCurrentRotationIteration = -cubeRotationIterations;
        cubeRotationDirection = -cubeRotationDirection; 
      }
   }
...
}

Similar snippets of code can be written for the rest of the animation states. One must remember to set the current flag to false when the animation segment has finished. In this example, once the cube is close enough to the limbo bar, cubeTranslationMode is set to false, while cubeSquashMode becomes true.

Reference: ThreeJS scripting 

Research & Coding

MIG18 in Cyprus

I wake up at 6am with the sound of a mandolin in my ears. Russian voices can be heard from next door. I turn in my bed thinking ‘For a five star hotel, they don’t have thick enough walls.’ I finally get up and look outside, ‘but they do have a five star view.’

morningView

View from my room at St. Raphael

This is how my first day in Limassol started, at St. Raphael’s hotel, with a spectacular view and Russians playing the mandolin. The sea looked so inviting, but I spent most of the day preparing my presentation. ‘Ah, this is torture, but it must be done.’

Although I’m in my natural element walking around the stage pretending to be a pirate and giving orders to my shipmates, I don’t like presentations, at least not formal ones. I guess I haven’t convinced myself yet that although people may judge, they really want you to succeed. I didn’t know what to expect from the conference I was about to attend; I just knew that the mustard suit my mum made me buy would stick out like a kangaroo in a flock of sheep.

The next day (08/11/2018) was the first day of MIG18 (ACM SIGGRAPH’s Motion, Interaction and Games conference). I was in a limbo state, not too nervous, not too calm. I went to the sea to rest my thoughts, as the sun was slowly lifting its head over the morning sky. My room card had stopped working by the time I got back from the sea and a light breakfast. Reception quickly sorted it, but it was 8:30 and I had to dress up for my presentation at 9:00. Guess what? The maid was in the room, making my bed. ‘Erm, do you mind if I change in the bathroom?’ I said, holding my mustard suit, which had been dry cleaned the previous day.

It turned out I didn’t have to hurry so much: Prof. Nadia Magnenat Thalmann wanted to swap presentations with me, since she had a plane to catch. It’s funny that she presented exactly what I wanted to hear: a way of classifying (salsa) dancing using simple motion features. Yes, salsa can be decomposed into simple patterns of movement; emotions can be decomposed into action features. What else can be simplified and understood about human nature? What are the invariants of human perception, as Rogelio Cardona-Rivera implied in his wonderful keynote on The Science of Game Design? I go deeper and ask myself: what are the simple patterns of movement that unite us all, that move us to tears, that enhance our empathy towards one another? Can we use technology to understand such patterns and, subsequently, understand one another?

Machine learning is the hot topic of the day (for how long, I wonder). It was also present all over the conference, with topics like Data-driven Autocompletion for Keyframe Animation by Xinyi Zhang et al., Physics-based Motion Capture Imitation with Deep Reinforcement Learning by Chentanez et al., and two very good keynotes on ML by Daniel Holden and Jungdam Won. Although my initial attitude towards ML was skepticism, I must confess I finally saw what all the fuss is about. If my interest is to understand behaviour from the intrinsics of a character outwards, ML was doing the opposite. Since capturing the complexities of human nature in a closed-form equation is virtually impossible, why not humbly understand its approximations by analyzing as many people as can fit in a database? Yes, I’ll think more about it…

Now for some cheese:

The number of surprises was endless, but I’ll just mention a few wonderful events and people that made my experience at the conference worthwhile. I loved how friendly everyone was; people really were curious and wanted to help each other out. The organizers, Panayiotis Charalambous, Yiorgos Chrysanthou, Ben Jones and Jehee Lee, were very welcoming and down to earth, always making sure we were having a good time. I loved Matthias Muller’s keynote on Physics Based Dynamics. I realized I had quoted him in my thesis as he was awkwardly receiving a fertility totem as a thank-you gift for his talk.

I was humbled and happy to meet the charming, smart, warm and confident Xinyi, Athomas, Bea, Anastasia, David, Usman, Luis, Loic, Daniel, Philipp, Dario, Yuri, Jason and many, many others :). Sorry I haven’t managed to talk to everyone!

mig2018

MIG18 people (picture taken from MIG18 Facebook Page)

 

Last but not least, the day out in Nicosia and the two amazingly Cypriot dinners really got everyone to socialize and loosen up. Do I even need to mention the Cypriot dancing? It was nice to see people volunteer and do curious glass balancing or oversimplified zorba moves around the restaurant. It was funny that every type of dance had some form of courtship: courting in the wheat field, courting by the well, showing off glass balancing in front of the young bride-to-be :))

glasspeople

Traditional Cypriot glass balancing.

 

Our Nicosia tour guide reminded me of a confident Cypriot granny who knows exactly what to put in her meze dishes. She walked us around Nicosia and shared the city’s disputed history. Probably the most haunting moment was when we saw the Green Line wall separating the Greek and Turkish Cypriots. The police there seemed friendly enough, however, which gives me hope. I loved the quaint, narrow streets with fairy lights illuminating the pavement, the local craft shops and the friendly people around. It was nice to see young people trying to revive the old marketplace with art.

All in all, MIG18 in Cyprus was awesome and I hope to come again! Thank you for the adventure!

 

greenwall

Green Line wall between the Greek and Turkish Cypriot sides of Nicosia.

police

Green Line wall police seem ok. 🙂

guide

Our Nicosia tour guide.

wax

Local wax crafts shop.

coolkids

The cool kids in the local market in Nicosia.

Character Essences, Research & Coding

Expressing Emotions Through Procedural Animation

We know from Paul Ekman that there is a baseline for human emotions. We all express the 6 basic emotions (joy, sadness, anger, disgust, fear, contempt) in more or less the same way. This personal inquiry looked at how we can abstract emotions into a language of trigonometric functions. Is there a link between the energetic, soaring emotion of joy and the upwards movement of a sine wave? For this initial stage of the project, I used simple primitive geometry.

 

Abstract

The complexity of emotion and thought an individual can contrive is far from being clearly defined. As the philosopher Winwood Reade suggests, however, “while the individual man is an insoluble puzzle, in the aggregate he becomes a mathematical certainty”. This statement reflects the idea of generally available guidelines, common to all individuals, through which they connect and understand one another. These rules are also found in the area of expressing emotions. We can describe basic patterns for anger, contempt, disgust, fear, joy and sadness, thus we can make an attempt at defining these templates in a mathematical form. The current study focuses on finding the appropriate elementary functions that contribute to creating so-called target factors, which convey characteristics of the six aforementioned emotions.

Project Links

This project was done during my MSc in Computer Animation and Visual Effects at Bournemouth University, under the supervision of Stephen Bell. You can read our presentation here.

Character Essences, Research & Coding

Curious Cones

This little project aims to create a random set of cones that “look at” a shiny red sphere passing by. The reference below links to the original Maya + Python YouTube tutorial. The code that follows was saved as two shelf tools which can be used with any mesh.

The first script (randomInstances.py) creates 30 instances of the first object selected in the scene. These instances are then randomly positioned, rotated and scaled.

#randomInstances.py
import maya.cmds as cmds
import random

random.seed(1234)
result = cmds.ls(orderedSelection = True)
print('result: %s ' % result)
transformName = result[0]
instanceGroupName = cmds.group(empty = True, name = transformName + '_instance_grp#')
#Create 30 instances with random translation, rotation and uniform scale
for i in range(0, 30):
   instanceResult = cmds.instance(transformName, name = transformName + '_instance#')
   cmds.parent(instanceResult, instanceGroupName)
   tx = random.uniform(-10, 10)
   ty = random.uniform(0, 20)
   tz = random.uniform(-10, 10)
   rotX = random.uniform(0, 360)
   rotY = random.uniform(0, 360)
   rotZ = random.uniform(0, 360)
   sXYZ = random.uniform(0.1, 1.25)
   cmds.move(tx, ty, tz, instanceResult)
   cmds.rotate(rotX, rotY, rotZ, instanceResult)
   cmds.scale(sXYZ, sXYZ, sXYZ, instanceResult)
#Hide the original and centre the group's pivot
cmds.hide(transformName)
cmds.xform(instanceGroupName, centerPivots = True)

The second script (aimAtFirst.py) takes the first element selected and sets it as the target, while the rest of the selected elements are set as sources, looking at the target along their Y axis. At least one source and one target must be selected for the algorithm to work.

#aimAtFirst.py
import maya.cmds as cmds

selectionList = cmds.ls(orderedSelection = True)
if len(selectionList) >= 2:
   print('Selected items: %s ' % selectionList)
   targetName = selectionList[0]
   selectionList.remove(targetName)
   #Aim every remaining object's Y axis at the target
   for objectName in selectionList:
      print('Constraining %s towards %s ' % (objectName, targetName))
      cmds.aimConstraint(targetName, objectName, aimVector = [0, 1, 0])
else:
   print('Please select 2 or more objects')

Reference: Autodesk Scripting on Youtube

Games, Research & Coding

Master Thesis: Intelligent Agent In Pursuit of an Unknown Moving Target in an Unknown Dynamic Environment

The question raised in this work is: how can a detective agent discover another agent’s strategy of movement as quickly as possible? The detective has to find and follow footprints and go through locked doors to find the culprit before he gets away. The project is also known as the Sherlock project, but can it become Sherlock? Time will tell.

This exercise is meant to be a combination of graphics programming for simulating interaction between agents, goal-finding algorithms and artificial intelligence. Thus, the results from my work could sit alongside research in path finding and behaviour algorithms used for game or virtual reality agents and robotics, and could even be a start for crowd simulation behaviour.

Having an agent follow a given target is one issue, but having an agent follow an unknown target by detecting clues or by dynamically discovering the environment is a whole different story. This concept moves beyond the virtual world, as it blends into the natural behaviour of real people.

Apart from curiosity, another reason for choosing this topic is the opportunity of training a program to discover a world in pursuit of a goal, while preparing its knowledge base for other possible trajectories. Abstract, or maybe futuristic, uses for this topic would be in robotics: creating a detective machine that can scan an area and look for clues much more accurately, form theories based on previous research and work hand in hand with a detective.

The same strategies could be used for a robotic companion who can play “hide and seek” with a child with communication problems, for example, or who can follow an elderly owner around the house (or an unknown location) when needed. An even more abstract idea would be adopting the same theory in areas like medicine, where a nanobot with such a tracking algorithm could be trained to make its way around the human body and detect traces of infections or viral activity.
