F A C E standalone app (Mac only)

Finally, I built a standalone app of F A C E.
Although it doesn't include the entire project, it has the necessary functions to play with.
Sadly, it is only available for Mac right now.

DOWNLOAD: HERE!

The package is big because it includes lots of loops and sounds.
You will need a web camera and Max Runtime to play with it.
This app is still a work in progress. If you have any suggestions, please let me know!
I would love to hear them!!

Max Runtime


Run existing Max patches without editing capabilities.

Music System Part 2

The primary music system I used is a sampler. In order to have varied combinations of music and sounds instead of a random mix, I first analyzed all the raw data and figured out what kind of data each facial feature produces. For example, people can make bigger facial movements with their eyes and mouth, such as blinking or opening the mouth, so the size values of the eyes and mouth change dramatically across a wide range. On the other hand, the raw data from the nose are relatively stable because people have only a limited range of control over their noses. In addition, the face recognition can sometimes get confused by certain facial features; for instance, if the visitor wears glasses, it might disturb the detection. Based on the characteristics of each data stream, I decided which data corresponded to which musical track.
All the data are compared against the face database, and each facial feature has its own range. For example, face size, which drives one of the main tracks, has been divided into five categories: extra small, small, medium, large and extra large. Each category has its own sampler carrying different music and sounds, and the scale of the music and sounds was chosen to match the size; for instance, the extra small category has more music and sounds with a faster tempo. Besides the face size, I also added a second reference to get more varied music and sound compositions, because even though everyone has a unique face, face sizes can easily be similar. The second reference is a proportion: the current visitor's nose size as a percentage of his/her face. After the face size decides which size category the visitor falls into, the nose percentage chooses the music and sounds within that specific sampler.
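To make the two-level selection concrete, here is a minimal Python sketch of the idea; the thresholds, sampler banks and loop file names are placeholder assumptions, since the real selection happens inside the Max patch.

```python
# Hypothetical sketch of the two-level selection described above.
# The category thresholds, bank contents and file names are placeholders,
# not the actual values used in the Max patch.

SIZE_CATEGORIES = [          # (name, lower bound, upper bound) of normalized face size
    ("extra_small", 0.00, 0.15),
    ("small",       0.15, 0.30),
    ("medium",      0.30, 0.50),
    ("large",       0.50, 0.75),
    ("extra_large", 0.75, 1.01),
]

# Each size category has its own sampler bank of loops and sounds.
SAMPLER_BANKS = {
    "extra_small": ["xs_fast_loop_1.wav", "xs_fast_loop_2.wav", "xs_fast_loop_3.wav"],
    "small":       ["s_loop_1.wav", "s_loop_2.wav", "s_loop_3.wav"],
    "medium":      ["m_loop_1.wav", "m_loop_2.wav", "m_loop_3.wav"],
    "large":       ["l_loop_1.wav", "l_loop_2.wav", "l_loop_3.wav"],
    "extra_large": ["xl_slow_loop_1.wav", "xl_slow_loop_2.wav", "xl_slow_loop_3.wav"],
}

def pick_loop(face_size: float, nose_percentage: float) -> str:
    """First reference: face size picks the sampler bank.
    Second reference: the nose's share of the face (0.0-1.0)
    picks a loop inside that bank."""
    for name, low, high in SIZE_CATEGORIES:
        if low <= face_size < high:
            bank = SAMPLER_BANKS[name]
            index = min(int(nose_percentage * len(bank)), len(bank) - 1)
            return bank[index]
    return SAMPLER_BANKS["medium"][0]  # fallback if the size is out of range

print(pick_loop(face_size=0.22, nose_percentage=0.4))  # -> s_loop_2.wav
```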

Music System Part 1

The idea that a face can have its own musicology is not a usual one for most people; at least, the people around me don't think that way at all.
So I analyzed my own facial features and made a simple simulation of the ideal, final result.

After collecting people's faces into my database, I used Photoshop to overlap and compare the faces, which gave me lots of data and ranges. Those values are important references for generating and picking music. I also captured facial expressions in sequential images in order to analyze the muscle movements of the face. Meanwhile, the test results also help me improve the detection system.

After testing how sounds match with faces, I built a music track system.
Each facial feature is used to pick loops, beats, etc., and the features are layered together.
Here is the simplified structure. There are lots of data and values, so I chose some major parts to show how it works (see the sketch below).
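As a rough sketch of that structure (with made-up feature-to-track assignments, not the exact routing in my patch), each facial feature drives its own layer and all the layers play at the same time:

```python
# Hypothetical outline of the track system: every facial feature drives its
# own layer, and the layers are mixed together. The assignments below are
# illustrative only.

TRACKS = {
    "face_size":  "main melodic loop",           # stable value -> main track
    "left_eye":   "beat / rhythm layer",         # fast, wide-ranging movement
    "right_eye":  "beat / rhythm layer",
    "mouth":      "accent and effect sounds",
    "nose":       "ambient / background layer",  # relatively stable data
}

def build_mix(feature_values: dict) -> list:
    """Return one (track, value) pair per detected feature;
    all the pairs sound together as layered tracks."""
    return [(TRACKS[name], value)
            for name, value in feature_values.items()
            if name in TRACKS]

# One example frame of raw detection data (normalized sizes):
frame = {"face_size": 0.42, "left_eye": 0.10, "right_eye": 0.12,
         "mouth": 0.30, "nose": 0.08}
for track, value in build_mix(frame):
    print(f"{track}: driven by value {value}")
```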


After building the basic structure of the music system, I moved on to mixing and creating loops and sounds in Soundbooth: preparing loops of different lengths, shifting the pitch and changing the tempo.
I also prepared special sound effects to match expressions. More details are in Music System Part 2.

Video camera, interior and lighting

In order to record video of all participants without putting too heavy a load on the computer, I used a video camera (DV) instead of a small webcam. While the program has a live stream connection with the video camera, the camera also records the whole experience to DV tape.

Meanwhile, the program also keeps recording the music generated by the visitor. Inside the booth, all the walls are white, which helps the video camera get a clearer image of the visitor's face and makes the detection more precise. I also installed two fluorescent lights inside the booth for the same purpose; they shine on the visitor's face.

Yes, Face!

The current visitor inside the booth has his/her face projected on the outside wall simultaneously. Inside the booth, the viewer only sees the reflection in the mirror and the flashing visualization, not the live video projected on the outside wall. Most audiences know their faces will be projected on the outside wall before they get into the booth and experience the project; still, some viewers only figure out that their faces were projected outside after experiencing it. In the booth, there are no extra decorations except the white walls. The white background highlights the visitor's face and expressions. Those facial features and expressions have been equally scaled, so all the tiny changes on the face become more distinct, and then the music and sounds enhance the personality of the project.

Lots of fun to see how people react and interact!

Diagram


The installation is set up in a square-like gallery space, with the booth on the right side of the gallery. The computer, video camera, monitor and related devices are placed in the first half of the booth; the details of the setup are explained in the following paragraph. The other half of the booth is the space for the visitor. The wall in front of the visitor has two windows with mirrored tint. There are two speakers: one inside the booth and one outside.


The Mac mini connects to the video camera, the control panel monitor, the speakers, the Arduino and a TripleHead2Go. The TripleHead2Go connects to the projector and the visualization display monitor. The Arduino connects to a pressure switch under the cushion on the chair. All the cables are hidden inside the walls or routed through a hole in the ceiling of the booth.

Display


The black booth represents the original state, before the music is generated. Without a participant, the booth is just an object. When the visitor gets into the booth, his/her face brings the project alive with expression and music. I used triangle shapes to indicate musical notes. These triangles were placed in front of the booth in the form of a track, starting with black triangles and then proceeding to colorful ones. The display corresponds to the order of the triangles: the black triangle placed close to the booth refers to the silence at the beginning, while the colorful triangles represent the participation of the visitor and the audience. People are the body of this project. The vivid colors and triangles are the framework of the project; they add my personality and also symbolize the process of generating music.

What will the current visitor see?

In F A C E, music starts because of the current visitor’s face. In other words, I created a platform to let all participants become part of my project.

 

 
← Camera

← Visualization
The visualization on the bigger screen corresponds with the visitor's facial movements. The screen is marked with dots on a grid, and those dots are connected into colorful blocks that flash with the current tempo. The dots also mark the positions of the left eye, right eye, nose and mouth. When the visitor makes expressions or moves back and forth, the graphics scale up or down with the relative size of the facial features.
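The sketch below only illustrates that mapping and is not the actual visualization patch; the grid coordinates and scaling factor are assumptions. Each feature's block sits at its dot position, scales with the feature's current size, and its brightness pulses with the tempo.

```python
# Illustrative sketch of the visualization mapping (not the real patch).
import math
import time

GRID_POSITIONS = {          # hypothetical grid coordinates for the dots
    "left_eye":  (2, 3),
    "right_eye": (6, 3),
    "nose":      (4, 5),
    "mouth":     (4, 7),
}

def block_states(feature_sizes: dict, bpm: float, now: float) -> dict:
    """Return (position, scale, brightness) for each feature's colorful block.
    Scale follows the relative feature size; brightness pulses with the tempo."""
    beat_phase = (now * bpm / 60.0) % 1.0                  # 0..1 within each beat
    brightness = 0.5 + 0.5 * math.cos(2 * math.pi * beat_phase)
    return {
        name: {"position": GRID_POSITIONS[name],
               "scale": 1.0 + 2.0 * size,                  # bigger feature -> bigger block
               "brightness": brightness}
        for name, size in feature_sizes.items()
        if name in GRID_POSITIONS
    }

print(block_states({"left_eye": 0.10, "mouth": 0.35}, bpm=120, now=time.time()))
```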

Walkthrough

When the viewer walks into the exhibition space, the audio starts to communicate with the viewer and raises curiosity. Although people cannot see the audio, they respond to it subconsciously, whether or not they show visible reactions such as standing still or entering the booth. When the viewer sits down inside the booth, that action starts the interaction with the music: the viewer sends a message to the program, which generates music and sounds, and then the music and sounds deliver more messages back to the viewer.

The viewer is affected by it and responds with expressions and movements, which creates a circulation between the viewer and the project. Besides the direct interaction between the current viewer inside the booth and the audio, the audience's reactions also make the interaction livelier. Because the viewer's face and audio are projected outside, the audience is able to witness the experience. For example, the sound effects may not match the audience's expectations, the audience may have no idea what is going on, or the audience may know the current participant; their reactions influence the current viewer and the entire interaction.

Alive

Interaction is a way of communication.

In F A C E, the essence is the musicology of the face. When these two kinds of non-verbal language are combined with consciousness, they generate unique music combinations, and those combinations lead to more interesting interactions between the current visitor and the audience.

One of the important reasons that interactive media attracts me so much is that it makes my work "alive". The modes of interactivity, human-to-artifact and human-to-human, can lead to countless fascinating results. Besides, the relationship between giving and receiving in interactivity does not have clear boundaries, which is my favorite part of it; sometimes creators become receivers. Through experimenting with the visible and the invisible, and with every human sense in terms of art, the chemical reactions between creator and viewers are unpredictable. In this project, I am trying to create nodes among people and the project that trigger interesting sequences.

Prototype

The purpose of the performance is to use the designed semi-private space and the projection outside to let audiences experience the unseen relationship between face and music, and then to trigger different levels of interaction: the current viewer and the reflection in the mirror, the music and the facial features, the viewer inside the booth and the audience. All the reactions affect each other and lead to unpredictable results.

I created a prototype of the booth for testing. I know, it does look like a psychic booth.

When staying in an unfamiliar place, most people's perception becomes more sensitive. The program will not start to analyze a viewer's face until the viewer sits down and triggers the pressure switch under the chair cushion. In other words, during the experience, viewers figure out what is going on and start to interact by themselves; music starts because of their faces and movements. Interaction includes not only the common channels, touch and vision, but also hearing and other sensibilities, and those sensibilities serve as base points that extend to the different levels of interaction.
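As an aside, the pressure-switch gating could be sketched like this in Python, assuming the Arduino simply prints "1" when the cushion is pressed and "0" when it is released; in the installation this logic lives in the Max patch, and the serial port name and protocol here are hypothetical.

```python
# Hypothetical sketch: gate the face analysis on the chair's pressure switch.
# Assumes the Arduino sends "1" (pressed) / "0" (released) over serial.
import serial  # pyserial

def run(port: str = "/dev/tty.usbmodem1411", baud: int = 9600) -> None:
    with serial.Serial(port, baud, timeout=1) as arduino:
        analyzing = False
        while True:
            line = arduino.readline().decode(errors="ignore").strip()
            if line == "1" and not analyzing:
                analyzing = True
                print("Visitor sat down: start face analysis and music.")
            elif line == "0" and analyzing:
                analyzing = False
                print("Visitor left: stop the analysis and reset the booth.")

if __name__ == "__main__":
    run()
```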

Shape and Sound / Music

The Essence

According to the fundamentals of shape, when we see certain shapes, we feel corresponding perceptions. For example, a round shape usually conveys smooth, full and weighty feelings, while a triangle gives sharp, unstable and crisp feelings.

For some reason, every time I see this image, I feel the beats!

Furthermore, this principle can also apply to human faces and facial features. Although different facial features convey different feelings depending on people's views, people still share similar perspectives. People observe others' faces to get information, and based on those judgments, they respond with the actions they intend. Sometimes people can control their facial expressions; sometimes they are not even aware that they are making an expression. Expression is controlled by the brain, by consciousness and by the subconscious. In addition, various expressions (muscle movements) give the face more flexibility: people can change how they look through expressions, and an expression can change the image dramatically.

The primary purpose of F A C E is to provide an unusual opportunity for people to discover and experience the relationship between face and music. This project relies on the interaction among the current user, the music and the audience. The content starts from a series of questions: What will people see and hear? How do the performance and the interaction start?

Motivation!

I am a person full of curiosity. I enjoy observing and discovering hidden and invisible interactivity in everything. The quote from the Talmud[1] given at the beginning of the book says, "If you want to understand the invisible, look carefully at the visible." In other words, the symbols and metaphors of objects are important clues to understanding the core of everything, even art. I am exploring any possible methods to complete my ideas, which can evoke viewers' experience and even provoke viewers into brand-new perspectives on their surroundings.

* [1] The Talmud is a vast collection of Jewish laws and traditions.

I believe there are invisible and hidden connections between different things, even though people might not see or be aware of them. Those things are varied and may not necessarily relate to each other; however, in some way they all relate to "people": how people perceive the world, how people are affected by the world, how they interact with objects and others in their environment. I always try to figure out people's personalities and what they are thinking through their facial features and expressions. The more I observe people's faces, the more I sense musicology in them, and the results suggest some interesting connections between faces and music. However, the idea is an unconfirmed theory; for most people, it is an assumption based on my imagination. How can faces have musicology?

I guess I should start from my own FACE! ↓↓↓