This article presents a novel approach to searching shared audio file stores such as P2P-based systems. The proposed method recognizes specific patterns in the audio content, thereby extending search from a description-based model to a content-based model. The importance of real-time pattern recognition algorithms applied to audio data for content-based searching in streaming media is growing rapidly (Liu, Wang, & Chen, 1998). The main problem for such algorithms is the optimal selection of the reference patterns (soundprints) used in the recognition procedure. The proposed method is based on distance maximization and can quickly choose the pattern that will later serve as the reference for the pattern recognition algorithms (Richly, Kozma, Kovács, & Hosszú, 2001). The presented method, called EMESE (experimental media-stream recognizer), is an important part of a lightweight content-searching method that is suitable for investigating network-wide shared file stores. The experimental measurement data presented in the article demonstrate the efficiency of the proposed procedure.
Since the development of Napster (Parker, 2004), Internet-based communication has evolved toward application-level networks (ALNs). Various collaborative applications run on increasingly powerful hosts and create virtual (logical) connections with each other (Hosszú, 2005). They establish a virtual overlay and, as an alternative to the older client/server model, use peer-to-peer (P2P) communication. The majority of such systems deal with file sharing, which is why an important task for them is searching large, distributed shared file stores (Cohen, 2003; Qiu & Srikant, 2004).
Until now, search has usually been carried out on the various attributes of the media content (Yang & Garcia-Molina, 2002). These metadata include the name of the media file, the names of the authors, the date of recording, the type of the media content, and perhaps some keywords and other descriptive attributes. However, if incorrect metadata are accidentally recorded, the media file may become invisible due to the misleading description. Today's powerful computers make it possible to implement and widely deploy pattern recognition methods. Naturally, given the large number of media files and their very rich content, only limited pattern identification is a realistic goal. This article introduces the problem of media identification based on well-defined pattern recognition.
Another problem arises if the pattern-based identification method is to be extended from media files to real-time media streams. The difficulty lies in the requirement that the pattern identification system must work in real time even in weak computing environments. For this purpose, full-featured media monitoring methods are not applicable, since they require large processing power to run their pattern recognition algorithms.
The novel system named EMESE is dedicated to solving the special problem in which a small but significant pattern must be found in a large voice stream or bulk voice data file in order to identify known sections of audio. Since we limit our review to sound files, the pattern that serves to identify the media content of a file is named a soundprint. The developed method is lightweight, meaning that its design goals were fast operation and relatively small computing power. To reach these goals, the length of the pattern to be recognized must be very limited, and a full match score is not required. This article deals mainly with the heart of the EMESE system, the pattern recognition algorithm, and especially with the creation of the reference pattern, a process called reference selection.
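The idea of reference selection by distance maximization can be illustrated with a short sketch: among candidate segments (represented here as plain feature vectors), choose the one whose minimum Manhattan distance to every other candidate is largest, so the chosen soundprint is maximally distinguishable. The function names and the feature representation below are illustrative assumptions, not the actual EMESE implementation.

```python
def manhattan(a, b):
    """L1 distance between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def select_reference(candidates):
    """Return the index of the candidate feature vector that
    maximizes its minimum Manhattan distance to all the other
    candidates (a max-min distance criterion)."""
    best_idx, best_score = -1, -1
    for i, cand in enumerate(candidates):
        score = min(manhattan(cand, other)
                    for j, other in enumerate(candidates) if j != i)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

For example, among the candidates `[[0, 0], [1, 1], [10, 10]]`, the vector `[10, 10]` is farthest from both neighbors, so index 2 is selected.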
Key Terms in this Chapter
Pattern Recognition: The procedure of finding a certain series of signals in a longer data file or signal stream.
Client/Server Model: A communication model in which one host has more functionality than the other. It differs from the P2P model.
Application Level Network (ALN): The applications running on the hosts can create a virtual network from their logical connections. This virtual network is also called an overlay. The operation of such software entities cannot be understood without knowing their logical relations. In most cases, these ALN software entities use the P2P model, not the client/server model, for communication.
Peer-to-Peer (P2P) Model: A communication model in which each node has the same authority and communication capability. The nodes create a virtual network, overlaid on the Internet, and organize themselves into a topology for data transmission.
Audio Signal Processing: The coding, decoding, playing, and content handling of audio data files and streams.
Content-Based Recognition: Identification of media data based on their content rather than on the attributes of their files. It is also known as content-sensitive searching.
Synchronization: The procedure of finding the appropriate points in two or more streams for correct, parallel playout.
Manhattan-Distance: The L1 metric for points of Euclidean space, defined by summing the absolute coordinate differences of two points (|x2-x1|+|y2-y1|+…). Also known as the "city block" or "taxi-cab" distance: it is the distance a car drives in a lattice-like street pattern.
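The definition above translates directly into code (a minimal illustration; the function name is ours):

```python
def manhattan_distance(p, q):
    """L1 ("city block") distance: the sum of absolute coordinate
    differences between two points of equal dimension."""
    return sum(abs(a - b) for a, b in zip(p, q))

# In two dimensions, |x2 - x1| + |y2 - y1|:
# manhattan_distance((1, 2), (4, 6)) == 3 + 4 == 7
```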
Overlay: The virtual network created by the applications of an ALN, which work together and usually follow the P2P communication model.
Bark-Scale: A nonlinear frequency scale modeling the resolution of the human hearing system; it is approximately linear in frequency below 500 Hz and logarithmic above. A distance of 1 Bark on the scale equals the so-called critical bandwidth, which can be measured by the simultaneous frequency-masking effect of the ear.
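A commonly used closed-form approximation of the Hz-to-Bark mapping is the Zwicker–Terhardt formula, sketched below. This is one standard approximation from the psychoacoustics literature, not necessarily the mapping used in EMESE.

```python
import math

def hz_to_bark(f_hz):
    """Zwicker-Terhardt approximation of the Bark scale:
    z = 13*arctan(0.00076*f) + 3.5*arctan((f/7500)^2)."""
    return (13.0 * math.atan(0.00076 * f_hz)
            + 3.5 * math.atan((f_hz / 7500.0) ** 2))

# 100 Hz maps to roughly 1 Bark, 1000 Hz to roughly 8.5 Bark,
# reflecting the near-linear behavior below 500 Hz and the
# logarithmic compression above it.
```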