The final goal:

Using a camera for gesture recognition and machine learning, a set of gestures will trigger movement in the costume through motors.


So far, I’ve been working with Hana on the costume and on which textures can offer a range of transformation.

Kinetic costume prototyping

Photo from Hana Zeqa, trying accordion-like shapes that would open and close like living organisms

Kinetic costume prototyping 2

Photo from Hana Zeqa, trying placement of accordion pieces


I had to figure out whether it was even feasible to fit all the motors, batteries, etc. needed to make this costume transform, so I asked Tom and Adam, who gave me a few ideas and pointed out the limitations of what could be done within the short two-month time frame.

I tried one way of moving the accordion pieces: attach wires to the first and last slices of each section, keep one end fixed, and let the motor pull the other. It worked quite well, but the motion isn’t very big.
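
As a rough sanity check for this motor-and-string approach, the string travel you can get from a motor can be estimated by assuming the string winds onto a spool on the motor shaft: travel = spool radius × rotation angle in radians. A minimal sketch (the spool radius and rotation values here are placeholders, not measurements from the prototype):

```python
import math

def string_travel_mm(spool_radius_mm: float, rotation_deg: float) -> float:
    """How far the string is pulled for a given motor rotation,
    assuming the string winds onto a spool of the given radius."""
    return spool_radius_mm * math.radians(rotation_deg)

def rotation_for_travel(spool_radius_mm: float, travel_mm: float) -> float:
    """Motor rotation (in degrees) needed to pull the string a given distance."""
    return math.degrees(travel_mm / spool_radius_mm)
```

For example, a 5 mm spool turned 180° (a typical hobby-servo range) only pulls about 15.7 mm of string, which matches the observation that the motion isn’t very big; a larger spool or a continuous-rotation motor would give more travel.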

Kinetic costume prototyping 4

Pulling a string attached to the middle of the accordion seems to work best

With these limitations in mind, I think I can use motors to control these three types of textures:

Kinetic costume prototyping 3

Some notes suggested by Tom:

  • vibration motor (can be used to tickle the performer? to be tested)
  • remember there needs to be space for battery
  • the sound from the motors can be loud

Likely methods of moving the costume:

  1. motors and wire/string
  2. muscle wire (costly and slow to acquire)
  3. rubber/silicon air bladders

Programs for gesture recognition and machine learning:

  • Wekinator
  • Isadora
  • Real-time multi-person pose estimation

Possible input devices:

  • Leap
  • Kinect
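
Whichever program does the recognition, the costume side mainly needs a mapping from a recognized gesture class to a motor action. A minimal sketch of that mapping (the gesture IDs, texture names, and actions below are placeholders, not a final design; a tool like Wekinator would typically deliver the class number over OSC):

```python
# Hypothetical mapping from a classifier's gesture class to a costume action.
# Texture names match the prototyping notes above; actions are assumptions.
GESTURE_ACTIONS = {
    1: ("accordion", "open"),
    2: ("accordion", "close"),
    3: ("vibration_motor", "pulse"),
}

def action_for_gesture(class_id: int):
    """Return a (texture, action) pair for a classifier output,
    or None if the class is not mapped to any movement."""
    return GESTURE_ACTIONS.get(class_id)
```

Keeping this mapping in one table makes it easy to swap gestures in and out while testing which inputs work best on stage.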

Next steps:

  • Try to make the costume physically movable, then use motors to control it
  • Figure out which gesture inputs to use