We started by analyzing the sound of the open violin strings: G, D, A and E. Since their frequencies are far apart from each other, we supposed they would be easier to pick up. Later we tried analyzing consecutive notes played a little faster. The result was positive: the program picks up the different frequencies without problems.
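The separation between the open strings is easy to see numerically. A minimal sketch, assuming standard tuning (G3 = 196 Hz, D4 ≈ 293.66 Hz, A4 = 440 Hz, E5 ≈ 659.25 Hz), converting each frequency to its MIDI pitch:

```python
import math

# Standard frequencies of the open violin strings (Hz)
open_strings = {"G3": 196.00, "D4": 293.66, "A4": 440.00, "E5": 659.25}

def freq_to_midi(freq_hz: float) -> int:
    # MIDI pitch 69 is A4 = 440 Hz; each semitone is a factor of 2**(1/12)
    return round(69 + 12 * math.log2(freq_hz / 440.0))

for name, f in open_strings.items():
    print(name, "->", freq_to_midi(f))
# G3 -> 55, D4 -> 62, A4 -> 69, E5 -> 76
```

Adjacent open strings sit a full seven semitones (a perfect fifth) apart, which is why confusing one string for its neighbor is unlikely.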
How does this process work?
We use a feature-extraction analyzer that captures sound through a standard laptop microphone, although any other microphone would work as well. We focused on the "raw MIDI pitch", which basically tells us at which frequency the violin is vibrating. This MIDI data is then translated into OSC (Open Sound Control) messages, which can later be used to control visual environments in order to produce sound visualizations.
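As a rough sketch of what such an OSC message looks like on the wire, a MIDI pitch can be packed into an OSC packet using only the Python standard library (the address `/violin/pitch` is our own hypothetical naming, not something fixed by the OSC specification):

```python
import struct

def osc_pad(s: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return s + b"\x00" * (4 - len(s) % 4)

def encode_osc_message(address: str, value: float) -> bytes:
    # address pattern, then the type-tag string ",f" (one float argument),
    # then the argument itself as a big-endian 32-bit float
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

packet = encode_osc_message("/violin/pitch", 55.0)  # e.g. the open G string
# The packet can then be sent over UDP to the visualization software, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 9000))
```

In practice a ready-made OSC library would handle this encoding, but the point is that each message is just an address plus typed arguments, which is what makes it easy for different programs to read.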
The same thing happens with the Kinect. Different motion features, such as the joint positions and speeds of both arms (hand, elbow and shoulder), are also translated into OSC messages. All this information can now be read by sound-synthesis or animation software such as Processing or Quartz Composer.
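The joint data can travel the same way: one OSC message per joint, with three float arguments for the x, y, z position. A minimal sketch, where the address layout `/kinect/right/elbow` is an assumption for illustration:

```python
import struct

def osc_string(s: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return s + b"\x00" * (4 - len(s) % 4)

def encode_joint(address: str, x: float, y: float, z: float) -> bytes:
    # type-tag ",fff" declares three big-endian 32-bit float arguments
    return (osc_string(address.encode())
            + osc_string(b",fff")
            + struct.pack(">fff", x, y, z))

# Hypothetical elbow position in meters, as a Kinect skeleton tracker might report it
msg = encode_joint("/kinect/right/elbow", 0.12, -0.35, 1.80)
```

Software like Processing (with an OSC library) can then listen on a UDP port and route each address pattern to a different visual parameter.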
The next steps include translating the information from both movement and sound into visual representations.