An overview of problems and methods for mapping performers’ actions to synthesized sound is presented. Approaches incorporating the audio signal are described, and a synthesis method called “Audio Signal Driven Sound Synthesis” is introduced. It uses the raw audio signal of a traditional instrument to drive a synthesis algorithm. The system aims to offer musicians satisfying, instrument-specific playability. In contrast to common methods that try to increase openness for the player’s input, openness is achieved here by leaving essential playing parameters non-formalized as far as possible. Three implementations of the method and one application are described. An empirical study and experiences with users testing the system implemented for a bowed string instrument are presented. This implementation represents a specific case of a broader range of approaches to the treatment of user input, with applications in a wide variety of contexts involving human-computer interaction.
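The core idea of driving a synthesis algorithm directly with the raw audio signal, rather than with formally extracted parameters such as tracked pitch or amplitude, can be illustrated with a minimal sketch. The function below is hypothetical and not the paper’s actual algorithm: it simply ring-modulates the input against a sine carrier after light waveshaping, so the output follows the player’s signal sample by sample without any intermediate parameter extraction. The names `audio_driven_synthesis`, `carrier_hz`, and `drive` are illustrative assumptions.

```python
import numpy as np

def audio_driven_synthesis(x, sr=44100, carrier_hz=440.0, drive=3.0):
    """Hypothetical sketch of audio-signal-driven synthesis.

    The raw input x is waveshaped and ring-modulated with a sine
    carrier. No pitch or amplitude tracking is performed, so the
    player's micro-variations pass through non-formalized.
    """
    n = np.arange(len(x))
    carrier = np.sin(2 * np.pi * carrier_hz * n / sr)
    shaped = np.tanh(drive * x)      # mild waveshaping of the raw signal
    return shaped * carrier          # output tracks input sample by sample

# Usage: feed one second of a synthetic 220 Hz "string" tone through it.
sr = 44100
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 220.0 * t)
y = audio_driven_synthesis(x, sr)
```

Because the mapping operates on the signal itself, nuances of bowing or articulation that a pitch tracker would discard remain audible in the output; this is the sense in which such a system stays "open" to the player's input.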