RTSMCS
REAL-TIME SPATIAL MUSIC COMPOSITION SYSTEM
Agenda
- Introduction: what is the RTSMCS?
- Introduction: built for collaboration
- How does it work?
- How to operate the system
- Systems analysis and design
- Structured
The RTSMCS (Real-Time Spatial Music Composition System) is a system for real-time music composition that uses hand movement/position data, in conjunction with a database of categorized short musical segments and a predefined algorithm that can combine and restructure such segments, to produce novel musical creations. In short, the system does the following:
- The roll angle of the operator's hand hints from which category of the database the composition algorithm, which builds its music from pre-written short segments, will currently create its music.
- Pitch, tempo and dynamics of the generated music are adjusted according to the rest of the hand position data.
- The resulting music is streamed to a virtual instrument to play it, and is stored at the end of the session to a MIDI file.
Furthermore, the system provides a reusable framework which can be used to build various real-time music creation and manipulation systems.
The modular design of the system allows collaborators from fields such as Computer Science and Music to build unique music creation systems by supplementing the current components of the system with their own. It also allows each collaborator to contribute their part without necessarily having to know in detail how the other parts of the system work:
- Musicians can contribute new musical segments to the database simply by exporting them from their scoring software to MIDI files. Then, just share the files - no programming knowledge is needed.
- Operators of the system don't need programming knowledge (although musical knowledge can be helpful), just some basic understanding of the system controls and some basic familiarity with the music segments database they're going to work with.
- Programmers can implement their own segment composition algorithm. Although some basic segment composition algorithms are already provided with the system, I wish to encourage musicians and programmers to create new, interesting algorithms for musical segment composition, since it's a great opportunity for direct collaboration.
To obtain the body position/orientation data, the system uses OptiTrack: Motive - a system which tracks reflective markers, and groups of such markers, using infrared cameras, image processing and computational geometry. The markers can be attached to various objects, including the human body. Since we're interested in hand positions, markers are attached to gloves worn on the hands. The hand position data is transformed by RTSMCS into internal system parameters, which in turn control various musical aspects of the music being generated in real time by the system.
The categorized short musical segments database consists of a directory structured in the following way: the directory has subdirectories, each representing the musical category of the segments inside it (e.g. Ascending Melodies, Descending Melodies). The musical segments inside each subdirectory are contained in MIDI files.
The system allows the operator to give it a hint as to how the music being generated should sound at a given moment, based on the operator's hand movements. Those hints are given to the system by the rotation of the operator's hand around its axis: the angle of the hand dictates from which category in the database the composing algorithm should take the segments which it will process and combine to create new music.
That is, the angle of the hand at the current instant determines the category from which the system will choose musical segments to compose together. Categories are arranged lexicographically by their names from left to right.
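As a minimal sketch (hypothetical function names and angle range, not the actual RTSMCS code), the lexicographic category arrangement and angle-based selection could look like this:

```python
import os

def load_categories(db_dir):
    """Return the category names (subdirectory names) sorted
    lexicographically, each mapped to the MIDI files it contains."""
    categories = sorted(
        d for d in os.listdir(db_dir)
        if os.path.isdir(os.path.join(db_dir, d))
    )
    return {
        c: sorted(f for f in os.listdir(os.path.join(db_dir, c))
                  if f.lower().endswith((".mid", ".midi")))
        for c in categories
    }

def category_for_angle(angle_deg, categories, angle_range=(-90.0, 90.0)):
    """Map a hand roll angle to a category: the angle range is divided
    evenly, leftmost slice -> first category, rightmost -> last.
    The +/-90 degree range is an assumption for illustration."""
    lo, hi = angle_range
    t = min(max((angle_deg - lo) / (hi - lo), 0.0), 1.0)  # clamp to [0, 1]
    idx = min(int(t * len(categories)), len(categories) - 1)
    return sorted(categories)[idx]
```

With two categories, rotating the hand fully to one side selects "Ascending Melodies" and fully to the other side selects "Descending Melodies".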
Moving the hand rightwards will shift the pitch of the currently composed music up, while moving the hand leftwards will shift the pitch of the currently composed music down.
Moving the hand upwards or downwards (along the y axis) adjusts the dynamics (volume) of the music currently being composed. Upwards increases volume, downwards decreases it.
Moving the hand forwards or backwards (along the z axis) adjusts the tempo of the music currently being composed.
[Diagram: hand coordinate axes X, Y, Z and roll angle]
- Roll angle – select MIDI segment category
- y axis – adjust dynamics
- x axis – adjust pitch
- z axis – adjust tempo
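The axis-to-parameter mapping could be sketched as follows (the scaling constants and value ranges are illustrative assumptions, not taken from the original system):

```python
def hand_to_params(x, y, z, workspace=1.0):
    """Translate hand coordinates (each in [-workspace, workspace])
    into musical adjustment parameters.

    x axis -> pitch shift in semitones (right = up, left = down)
    y axis -> dynamics scaling (up = louder, down = softer)
    z axis -> tempo scaling (forward = faster, backward = slower)
    """
    def clamp(v):  # normalize each coordinate to [-1, 1]
        return max(-workspace, min(workspace, v)) / workspace
    return {
        "pitch_semitones": round(clamp(x) * 12),  # up to +/- one octave
        "dynamics_scale": 1.0 + 0.5 * clamp(y),   # 0.5x .. 1.5x volume
        "tempo_scale": 1.0 + 0.25 * clamp(z),     # 0.75x .. 1.25x tempo
    }
```

At the neutral hand position (0, 0, 0) the music plays unmodified; moving the hand fully to the right raises the pitch by an octave in this sketch.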
Although the initial system requirements analysis didn't explicitly call for it, the eventual design is modular: each component of the system can be replaced, as long as the replacing component implements all of the required interfaces. For instance, although the RTSMCS was originally intended to get its user parameters from the OptiTrack: Motive motion tracking system, it can in theory work with various human-machine interfaces (e.g. on-screen GUI, mouse, joystick, webcam), given that the appropriate adapter class is implemented. Similarly, the music composition logic is not bound to just manipulating pre-written segments from a database, but can be extended to any logic that complies with the given interface.
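The swappable-adapter idea could look like the following sketch (the interface and class names here are hypothetical; the real RTSMCS interfaces may differ):

```python
from abc import ABC, abstractmethod

class InputAdapter(ABC):
    """Interface every human-machine input source must implement.
    Any device (OptiTrack, mouse, joystick, on-screen GUI) can drive
    the system as long as it yields the same parameter tuple."""

    @abstractmethod
    def read_params(self):
        """Return (x, y, z, roll_angle) for the controlling hand/cursor."""
        ...

class MouseAdapter(InputAdapter):
    """Toy adapter: maps a 2D cursor to the expected 4-tuple,
    leaving z and roll fixed at zero."""
    def __init__(self, get_cursor):
        self.get_cursor = get_cursor  # callable returning (px, py)

    def read_params(self):
        px, py = self.get_cursor()
        return (px, py, 0.0, 0.0)

def composition_step(adapter: InputAdapter):
    """The composition loop sees only the interface, not the device."""
    x, y, z, roll = adapter.read_params()
    return {"x": x, "y": y, "z": z, "roll": roll}
```

Replacing OptiTrack with another device then means writing one new adapter class, with no change to the composition logic itself.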
[Dataflow diagram: OptiTrack system → raw OptiTrack data → extract x-y-z positions + angle of hands → translate spatial positions to internal system params → numerical hint (user's category requests) + pitch, tempo, dynamics and sustain adjustment parameters → compose segments (MIDI part requests / MIDI segments, drawn from the categorized MIDI segments DB) → adjust pitch, tempo, dynamics & sustain notes → MIDI message stream → Virtual Instrument; the session is also saved to a MIDI file]
[Generalized framework diagram: Human Interface Device → raw input data → numerical hints (user's category requests) + pitch, tempo, dynamics and sustain adjustment parameters → MIDI messages → adjusted MIDI messages → final MIDI messages → Virtual Instrument; output is also saved to a MIDI file]