For the fourth work of our group, we spent a lot of time at the beginning discussing how to present the piece in Max using non-tactile technology, and this problem confused us for a while. Our initial idea was to use facial recognition to control the playback and stopping of several different pieces of audio. With the help of our teacher, we used the FaceOSC software to track each person's face so that the work could respond more accurately. We then used the size of the mouth opening and the movement of the face towards the screen to control the audio: when the mouth opens beyond a specified value (higher than 13), a sound plays, and when the face moves close to the computer screen so that its value rises above 5, a new piece of audio plays. Through this work, we hope people can feel that anything they do, whether a significant change or a small one, affects the world, so that by opening their mouths and moving their faces they produce different sounds. In the end we reached our initial expectations, although we experienced many difficulties along the way.
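
To make the mapping concrete, the sketch below shows the same thresholding logic in Python using the python-osc library. Our actual work was built as a Max patch, not as code, and the OSC addresses (/gesture/mouth/height, /pose/scale) and the default port 8338 are assumptions based on FaceOSC's usual output, so this is only an illustration of the idea, not the patch itself.

```python
# Minimal sketch of the control logic, assuming FaceOSC's typical defaults:
# it sends /gesture/mouth/height and /pose/scale to localhost on port 8338.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

MOUTH_THRESHOLD = 13.0   # mouth-opening value above which the sound plays
SCALE_THRESHOLD = 5.0    # face-scale value (face close to the screen) that triggers a new clip


def on_mouth(address, value):
    # FaceOSC reports the mouth opening as a single float
    if value > MOUTH_THRESHOLD:
        print("mouth open -> play the current audio clip")


def on_scale(address, value):
    # The face scale grows as the face moves towards the camera/screen
    if value > SCALE_THRESHOLD:
        print("face close to the screen -> switch to a new audio clip")


dispatcher = Dispatcher()
dispatcher.map("/gesture/mouth/height", on_mouth)
dispatcher.map("/pose/scale", on_scale)

# Listen for FaceOSC messages on the assumed default address and port
server = BlockingOSCUDPServer(("127.0.0.1", 8338), dispatcher)
server.serve_forever()
```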