Multimedia Storage and Retrieval Innovations for Digital Library Systems

Chia-Hung Wei (Ching Yun University, Taiwan), Yue Li (Nankai University, China) and Chih-Ying Gwo (Ching Yun University, Taiwan)
Release Date: April 2012 | Copyright: © 2012 | Pages: 397
ISBN13: 9781466609006 | ISBN10: 1466609001 | EISBN13: 9781466609013 | DOI: 10.4018/978-1-4666-0900-6

Description

Digital libraries are libraries in which collections are stored in digital formats (as opposed to print, microform, or other media) and accessible by computers.

Multimedia Storage and Retrieval Innovations for Digital Library Systems offers the latest research on retrieval and storage methods for digital library systems, a burgeoning field of data sourcing. It gathers the latest terminology, case studies, frameworks, architectures, methodologies, and research within the field of digital libraries. The information in this compendium comes from contributing authors and experts around the world, making it a one-of-a-kind addition to any research library.

Topics Covered

The many academic areas covered in this publication include, but are not limited to:

  • Copyrights
  • DELOS
  • Digital Preservation
  • Digitization
  • DLRM
  • Hybrid library
  • Licensing
  • Metadata
  • OAI-PMH
  • Selection Criteria

Reviews and Testimonials

The mission of this book is to present various aspects of digital library technology and to educate the database community. The individual chapters are contributed by different authors and present various solutions to topics concerning data acquisition, data retrieval, data mining, data analysis, and digital image processing.

– Chia-Hung Wei, Ching Yun University, Taiwan

Table of Contents and List of Contributors

Preface

Digital library technology provides access to various types of digital information, such as multimedia databases, animated movies, digital medical images, and content-based image collections. Data acquisition, data retrieval, data mining, data analysis, digital image processing, and text classification technologies enable database users to locate desired digital information, analyze the characteristics of data, and discover knowledge hidden in vast collections of objects. Because of the importance of these technologies, a significant amount of research around the world has been devoted to studying them.

The mission of this book is to present various aspects of digital library technology and to educate the database community. The individual chapters are contributed by different authors and present various solutions to topics concerning data acquisition, data retrieval, data mining, data analysis, and digital image processing. The prospective audience of the book is academics, scientists, and engineers engaged in efforts to understand the research, design, and applications of data acquisition, retrieval, and analysis. This book can also be used as a supplement in digital-database-related courses for lecturers, upper-level undergraduates, and graduate students. Moreover, fellow researchers and PhD students intending to broaden their scope or looking for a research topic in data acquisition, data retrieval, or data mining may find this book inspiring.

Elahe Kani-Zabihi, et al. provide a study to find out whether users should be involved in the design stage of a Digital Library (DL). An experiment was undertaken to measure the level of user satisfaction with a User-Centered DL (UDL) compared to a Non-User-Centered DL (NDL). In their experiment, the UDL prototype was compared with the NDL prototype, and the two prototypes were evaluated by separate users with various Information Technology (IT) backgrounds. Results show that users’ task performance was better in the UDL version of the prototype. The importance of keeping users foremost in mind when developing DLs is well known among DL developers. However, there are still those who develop DLs assuming that users will find their services easy to learn; such assumptions should be backed up by early user feedback and evaluation. In fact, for a DL application to succeed in evaluation and be accepted by its users, it should be user-centered so as to meet the needs and goals of the end users. Indeed, a system cannot be designed and implemented solely by IT specialists but has to be realized by end users in cooperation with those specialists. Although many studies focus on user-centered DLs, the resulting systems are not always inherently user-centered. The potential benefits of considering users in the design process of DLs are therefore clear, but the details of the methodologies through which this might best be achieved are not. The result of this chapter is a usable DL prototype and a set of nine principles for DLs. Of particular interest to this study is the eighth principle, which declares that a user-centered development approach should be adopted and that the design focus should be on users, as otherwise DLs will not communicate effectively and efficiently with their users.
The research described in this chapter is closely related to the Envision project; however, as opposed to the Envision project, the participants were drawn from groups with a variety of IT backgrounds.

Shaoqun Wu and Ian H. Witten use digital library technology to help language learners express themselves by capitalizing on the human-generated text available on the Web. From a massive collection of n-grams and their occurrence frequencies, they extract sequences that begin with the word “I,” sequences that begin with a question, and sequences containing statistically significant collocations. These are preprocessed, filtered, and organized as a digital library collection using the Greenstone software. Users can search the collection to see how particular words are typically used and browse by syntactic class. The digital library is richly interconnected with other resources. It includes links to external vocabularies and thesauri so that users can retrieve words related to any term of interest, and it links the collection to the Web by locating sample sentences containing these patterns and presenting them to the user. The authors have conducted an evaluation of how useful the system is in helping students, and of the impact it has on their writing. Finally, language activities generated from digital library content have been designed to help learners master important emotion-related vocabulary and expressions. They predict that the application of digital library technology to assist language students will revolutionize second language learning. How can ordinary, everyday language be captured? Their approach is to capitalize on the text on the World-Wide Web, in particular the vast set of n-grams from the Web that Google has made available. Only digital library technology can provide searching and browsing functions for such a massive body of text. Their system is based on the Greenstone software. They have built a collection called “First Person Singular” that allows learners (and teachers) to locate phrases associated with a particular word, as well as synonyms, antonyms, and collocations.
The digital library enables sentences containing these patterns to be retrieved from the Web and presented to the user as examples. They have conducted an evaluation with actual language students, and the results show the potential usefulness of the system in helping students correct grammar errors, generate text, and expand text. In this chapter, they first examine the n-grams Google has supplied and explain how to extract a subset that is useful for language learning. They then describe the design and implementation of the First Person Singular digital library collection: how it is built and the searching and browsing facilities it includes. Next they show how results obtained from the collection can be augmented by retrieving related material from the Web and the British National Corpus (BNC). Then they describe the findings from an evaluation with actual students.
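The extraction step described above can be sketched as a frequency-filtered scan over an n-gram list. This is only an illustrative sketch: the function name, sample data, and frequency threshold are assumptions, not the authors' actual Greenstone pipeline or the Google n-gram corpus cut-offs.

```python
def filter_first_person(ngrams, min_freq=100):
    """Return the n-grams that begin with the word 'I' and meet a
    minimum occurrence-frequency threshold."""
    selected = {}
    for phrase, freq in ngrams.items():
        words = phrase.split()
        if words and words[0] == "I" and freq >= min_freq:
            selected[phrase] = freq
    return selected

# Toy stand-in for an n-gram frequency list.
sample = {
    "I would like to": 5000,
    "I am not sure": 3200,
    "the cat sat on": 9000,   # does not start with "I", dropped
    "I wonder if": 80,        # below the frequency threshold, dropped
}
print(filter_first_person(sample))
```

The same scan, with different predicates, would select question-initial sequences or sequences containing a given collocation.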

The current system of so-called institutional repositories, even if it was a sensible response at an earlier stage, may not answer the needs of the scholarly community, scientific communication, and associated stakeholders in a sustainable way. A robust repository infrastructure is nevertheless essential to academic work, yet current institutional solutions, even when networked within a country or across Europe, have largely failed to deliver. Consequently, a new path toward a more robust infrastructure and larger repositories is explored by Laurent Romary and Chris Armbruster to create superior services that support the academy. A future organization of publication repositories is advocated that is based on macroscopic academic settings providing a critical mass of interest as well as organizational coherence. Such a macro-unit may be geographical (a coherent national scheme), institutional (a large research organisation or a consortium thereof), or thematic (a specific research field organizing itself in the domain of publication repositories). The authors are concerned with the crossroads at which scholarly communication now stands; this chapter therefore neither traces the history of online scientific communication nor considers the debate on open access, except where it is directly relevant to the argument. To substantiate the claim that it would be wise to reconsider the parameters of the publication repository infrastructure, they proceed as follows. Firstly, while institutional open access mandates have brought some content into open access, the important mandates are those of the funders, and these are best supported by a single infrastructure and large repositories, which incidentally enhances the value of the collection (whereas a transfer to institutional repositories would diminish it).
Secondly, they compare and contrast a system based on central research publication repositories with the notion of a network of institutional repositories to illustrate that across central dimensions of any repository solution the institutional model is more cumbersome and less likely to achieve a high level of service. Next, three key functions of publication repositories are reconsidered, namely: a) the fast and wide dissemination of results; b) the preservation of the record; and c) digital curation for dissemination and preservation. Fourth, repositories and their ecologies are explored with the overriding aim of enhancing content and enhancing usage. Fifth, a target scheme is sketched, including some examples. In closing, a look at the evolutionary road ahead is offered.

The rapid evolution of digital equipment has led to the explosive proliferation of multimedia data in education, entertainment, sport, and various other applications. The development of automatic or semi-automatic systems and tools for digital content analysis and understanding has become compelling. As important multimedia content, sports video has been attracting increasing attention due to commercial benefits, entertaining functionalities, and audience requirements. Much research on shot classification, highlight extraction, and event detection in sports video has been done to provide interactive video viewing systems for quick browsing, indexing, and summarization. More keenly than ever, the audience desires professional insights into the games, and coaches and players demand automatic tactics analysis and performance evaluation with the aid of multimedia information retrieval technologies. Sports video analysis is therefore a research issue worth investigating. In this chapter, Hua-Tsung Chen, et al. review current research and give an insight into sports video analysis. Much research has been directed toward automatic indexing and summarization of broadcast sports video. As the pace of life in the information society accelerates, most viewers prefer to retrieve significant events or designated scenes and players rather than watch a whole game sequentially. Various algorithms for shot classification and highlight extraction in sports video have been developed based on combinations of low-level visual/auditory features and game-specific rules. Some research efforts focus on ball and player tracking for event detection, since semantic events are mainly caused by ball-player and player-player interactions. Most existing works in sports video analysis are audience-oriented.
However, the professionals prefer a better understanding of tactical patterns and statistical data so that they can improve performance and adapt their operational policy during the game. Traditional interactive video viewing systems, which provide quick browsing, indexing, and summarization of sports video, no longer fulfill these requirements. To meet them, the current practice is to employ personnel for game annotation, match recording, tactics analysis, and statistics collection, which is obviously time-consuming and labor-intensive. Hence, automatic tactics analysis and statistics collection in sports games via video analysis technology are undoubtedly compelling.

Digital libraries remove physical barriers to accessing information, but the language barrier remains due to multilingual collections and the linguistic diversity of users. Paul Clough and Irene Eleta introduce a study that aims at understanding the effect of users’ language skills and field of knowledge on their language preferences when searching for information online, and at providing new insights into access to multilingual digital libraries. Both quantitative and qualitative data were gathered using a questionnaire, and results show that language skills and field of knowledge have an impact on the choice of language for searching online. These factors also determine the interest in cross-language information retrieval: language-related fields constitute the best potential group of users, followed by the Arts and Humanities and the Social Sciences. The curators of digital libraries and their users are being confronted with large quantities of digital material, increasingly diverse in nature: multi-media, multi-cultural, and multi-language. A fundamental goal of digital libraries is to provide universal access to the information being managed, but this can only be realised if digital content is made more accessible and usable over time within online environments. For example, the European i2010 Digital Libraries initiative aims to make cultural, audiovisual, and scientific heritage accessible to all: “The initiative combines cultural diversity, multilingualism and technological progress” (European Commission Information Society and Media, 2006). In Europe, two major initiatives are The European Library (TEL) and, more recently, Europeana. The European Library offers access to digital resources (books, posters, maps, sound recordings, and videos) and bibliographic content from 48 national libraries in 35 languages.
Europeana, the European digital library, museum, and archive, aims to provide users access to around 2 million digital objects, including photos, paintings, sounds, maps, manuscripts, books, newspapers, and archival papers. Both digital libraries offer access to multilingual content, and Europeana plans to provide a multilingual interface and offer multilingual access to users. More widely, UNESCO has officially launched the World Digital Library, an Internet-based library that aims to display and explain the wealth of all human cultures. Of course, universal access is as applicable to smaller and more specialised digital libraries as it is to the larger national and international ones. However, although digital libraries can remove physical and spatial barriers to accessing information, the language barrier still remains due to multilingual collections and the linguistic diversity of users. Previous research has shown that language has an impact on the structure of the Web and that the power relations of languages on the Internet have cultural implications, causing a digital divide. Language represents a clear barrier to accessing information online, which is dominated by the English language and Anglo-American values. This is the context in which digital libraries must operate, and they are thereby subject to this digital divide as well. A key factor in the future success of digital libraries is the provision of appropriate multilingual services that allow users to find, explore, and work with content in multiple languages.

Bogdan Ionescu, et al. contribute a chapter discussing content-based access to video information in large video databases, and in particular the retrieval of animated movies. They examine temporal segmentation and propose cut, fade, and dissolve detection methods adapted to the constraints of this domain. They then discuss a fuzzy linguistic approach for deriving automatic symbolic/semantic content annotation in terms of color techniques and action content. The proposed content descriptions are then used with several data mining techniques (SVM, k-means) to automatically retrieve the animation genre and to classify animated movies according to certain color techniques. They integrate all the previous techniques into a prototype client-server architecture for a 3D virtual environment for interactive video retrieval. In this chapter, they tackle the indexing issue for an application domain that is becoming increasingly popular: the animated movie entertainment industry. While the very few existing approaches are limited to dealing either with the analysis of classic cartoons or with cartoon genre detection, their approach is different in that it uses fuzzy color-based and action-based content descriptions to retrieve animated movies according to their artistic content.

Zhongfei (Mark) Zhang, et al. present a multiple-instance learning-based approach to multimodal data mining in a multimedia database. This approach is a highly scalable and adaptable framework that the authors call co-learning. Theoretical analysis and empirical evaluations demonstrate its strong scalability and adaptability. Although the framework is general enough for multimodal data mining in any specific domain, the authors evaluate it on the Berkeley Drosophila ISH embryo image database, comparing its mining performance with a state-of-the-art multimodal data mining method to showcase the promise of the co-learning framework. In this chapter, they focus on a multimedia database as an image database in which each image has a few textual words given as annotation. They then address the problem of multimodal data mining in such an image database as the problem of retrieving similar data and/or inferencing new patterns from a multimodal query to the database. Specifically, in the context of this chapter, multimodal data mining refers to two kinds of activity. The first is multimodal retrieval: a multimodal query consisting of textual words alone, imagery alone, or any combination is entered, and an expected retrieval data modality is specified that can also be text alone, imagery alone, or any combination; the data retrieved under a pre-defined similarity criterion are returned to the user. The second is multimodal inferencing. While retrieval-based multimodal data mining has a standard definition in terms of the semantic similarity between the query and the data retrieved from the database, inferencing-based mining depends on the specific application. In this chapter, they focus on the application of fruit-fly image database mining.
Consequently, inferencing-based multimodal data mining may include many different scenarios. A typical scenario is across-stage multimodal inferencing. Given such a multimodal mining capability, there are many interesting questions a biologist may want to ask in fruit-fly research. For example, given an embryo image in stage 5, what is the corresponding image in stage 7 for an image-to-image three-stage inference? What is the corresponding annotation for this image in stage 7 for an image-to-word three-stage inference? The multimodal mining technique developed in this chapter addresses this type of across-stage inferencing capability, in addition to the multimodal retrieval capability.

Conventional approaches to content-based image retrieval exploit low-level visual information to represent images and relevance feedback techniques to incorporate human knowledge into the retrieval process, which can only alleviate the semantic gap to some extent. To further boost performance, a Bayesian framework is proposed by Rui Zhang and Ling Guan in which information independent of the visual content of images is utilized and integrated with the visual information. Two particular instances of the general framework are studied. First, context, i.e., the statistical relation across images, is integrated with visual content so that the framework can extract information from both the images and past retrieval results. Second, the characteristic sounds made by different objects are utilized along with their visual appearance. Based on various performance evaluation criteria, the proposed framework is evaluated using two databases, one for each example. The results demonstrate the advantage of integrating information from multiple sources. The relentless growth of multimedia information has been witnessed and experienced since the beginning of the information era. An immediate challenge resulting from this information explosion is how to intelligently manage and enjoy multimedia databases. In the course of the technological development of multimedia information retrieval, various approaches have been proposed with the ultimate goal of enabling semantic-based search and browsing. Among those intensively explored topics, Content-Based Image Retrieval (CBIR), born at the crossroads of computer vision, machine learning, and database technologies, has been studied for more than a decade, yet remains difficult.
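The general idea of integrating visual content with an independent information source can be sketched, under a naive conditional-independence assumption, as a product of per-class likelihoods and a prior. The function name, toy classes, and numbers below are illustrative assumptions, not the authors' actual Bayesian framework.

```python
def fuse_modalities(visual_lik, other_lik, prior):
    """Combine per-class likelihoods from two information sources,
    assumed conditionally independent, with a class prior; normalize
    the products into a posterior distribution."""
    posterior = {c: visual_lik[c] * other_lik[c] * prior[c] for c in prior}
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

# Toy example: the visual evidence is ambiguous, but the characteristic
# sound tips the decision.
visual = {"dog": 0.5, "cat": 0.5}
sound = {"dog": 0.8, "cat": 0.2}
prior = {"dog": 0.5, "cat": 0.5}
print(fuse_modalities(visual, sound, prior))  # dog: 0.8, cat: 0.2
```

The same scheme applies when the second source is retrieval context rather than sound: only the likelihood table changes.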

Recent programs like the Million Book Project and the Google Print Library Project have archived several million books in digital format, and within a few years a significant portion of the world’s books will be online. While the majority of the data will naturally be text, there will also be tens of millions of pages of images. Many of these images will defy automated annotation for the foreseeable future, but a considerable fraction may be amenable to automatic annotation by algorithms that can link a historical image with a modern counterpart and its attendant metatags. To perform this linking, there must be a suitable distance measure that appropriately combines the relevant features of shape, color, texture, and text. However, the best combination of these features will vary from application to application and even from one manuscript to another. In this chapter, Xiaoyue Wang, et al. propose a general framework for annotating large archives of historical image manuscripts. Their work is similar in spirit to prior work on the automatic discovery of relationships among images in illuminated manuscripts, whose authors introduced a model of various annotations of digital content, in the form of both text and typed links (e.g., author links). In this chapter, however, the focus is on the lower-level primitives that support such work. They use different feature spaces, such as shape, color, and texture, and then combine the resulting similarities using appropriate weights. Their experiments show that the accuracy obtained with a combined feature similarity measure is higher than with any single feature measure. Their fundamental contribution is a novel technique for learning this weighting parameter despite the lack of any labeled training data.
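The kind of weighted feature combination discussed above can be sketched as follows, assuming the per-feature distances have already been computed and normalized to a common scale. The weights here are illustrative placeholders, not the values learned by the authors' technique.

```python
def combined_distance(dists, weights):
    """Weighted sum of per-feature distances (shape, color, texture, text).
    Weights are assumed non-negative and to sum to 1."""
    return sum(weights[f] * dists[f] for f in dists)

# Hypothetical normalized distances between two images, one per feature.
dists = {"shape": 0.2, "color": 0.6, "texture": 0.4, "text": 0.1}
# Placeholder weights; in the chapter these would be learned.
weights = {"shape": 0.4, "color": 0.2, "texture": 0.2, "text": 0.2}
print(combined_distance(dists, weights))  # 0.30
```

Varying the weight vector per application or per manuscript is exactly the degree of freedom the chapter's learning technique targets.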

Shyamosree Pal, et al. use Gestalt properties for understanding various digital documents, a contemporary problem of the digital era that requires state-of-the-art technologies for its effective solution. The solution depends largely on successfully identifying all kinds of structures present in a document image and subsequently finding their associations with different components within the document. Interestingly, a document page has the striking property of admitting a characterization by the rectilinear arrangement of its major constituent components, such as paragraphs, lines, words, tabular structures, and graphics. Based on this simple yet useful property, a novel geometric technique is proposed for the rectilinear decomposition of the different components in a document page, followed by an effective method for indexing and organizing these components for the efficient retrieval of digital documents. An efficient and meaningful segmentation of the above-mentioned components from a document image is the first step towards indexing document pages. The second phase involves storing these geometric structures systematically in order to design a robust retrieval system. Given a gray-scale document image, their algorithm performs the segmentation-cum-recognition of its different components by analyzing the geometric features of their respective minimum-area rectilinear/isothetic polygonal covers, corresponding to a few judiciously selected values of the grid spacing, g. The shape and size of a polygonal cover depend on g (the lower the value of g, the tighter the polygonal cover, and vice versa), and each isothetic polygon is represented by an ordered sequence of its vertices; the spatial relationship of the polygons corresponding to a higher grid spacing with those corresponding to a lower one is established through an appropriate geometric analysis of the vertex sequences representing these polygons.

Due to the increasing use of digital medical images, a need exists to develop an approach to automatic image annotation that provides textual labels for images. The added labels can then be used to access images through textual queries. Automatic image annotation can be separated into two individual tasks: feature extraction and image classification. Chia-Hung Wei and Sherry Y. Chen present feature extraction methods for calcification mammograms. The resultant features, based on the BI-RADS standard, ensure that the annotated image content carries the correct medical meaning and is tagged with the corresponding terms. Furthermore, this chapter also proposes a probabilistic SVM approach to image classification. The experimental results indicate that the probabilistic SVM approach to image annotation can achieve an average accuracy rate of 79.5%.
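A probabilistic SVM typically maps the raw SVM decision value to a class probability through a fitted sigmoid (Platt scaling). The sketch below shows only that mapping step; the parameters a and b are assumed values that would normally be fit on held-out data, and this is not the authors' actual mammogram classifier.

```python
import math

def platt_probability(decision_value, a=-1.0, b=0.0):
    """Map an SVM decision value to P(class | image) via a sigmoid.
    a and b are Platt-scaling parameters (assumed here, normally fit
    by maximum likelihood on a validation set)."""
    return 1.0 / (1.0 + math.exp(a * decision_value + b))

# A larger (more confident) decision value yields a higher probability.
print(platt_probability(2.0), platt_probability(-2.0))
```

The resulting probabilities, rather than hard labels, let an annotation system attach a confidence to each BI-RADS term it assigns.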

In many applications, 2D and 3D information is used jointly to improve recognition results. A particular application scenario is face recognition, where the use of 3D face models has been explored only recently. There is recent evidence that the structural 3D information captured by face scans can improve face recognition results, particularly in situations involving pose variations and illumination changes. Following this idea, several recent works address 3D-3D face recognition. However, 3D-3D face recognition suits only very particular application scenarios in which cooperation between the subjects and the recognition/verification system is assumed, both during the enrollment of subjects into the gallery of known individuals and during the identity test of new subjects (probes). In perspective, it would be desirable to acquire complete and highly detailed 3D face scans during enrollment, while probes could also be acquired in non-cooperative environments using video-camera systems that track and capture the subject’s face and compare the reconstructed 3D information against 3D gallery models. The aim is to define innovative hybrid solutions to the recognition problem that exploit the complementary advantages offered by different media to improve accuracy. Following this idea, Stefano Berretti, et al. propose a hybrid 2D-3D face recognition approach that reconstructs a 3D face model from two orthogonal face images (frontal and side view) and compares it against 3D face scans. The proposed approach is based on three main steps. First, an Active Shape Model (ASM) is used to locate a set of landmarks on the face image. In the second step, a 3D deformable template model is used to reconstruct the 3D face.
This is made possible by establishing a correspondence between the landmarks in the frontal and side images and a set of control points in the 3D template model. The control points can move so as to modify the 3D geometry of the template face according to the position of the corresponding landmarks in the 2D images. Once the 3D face is reconstructed, in the third and final step of the proposed approach, the iso-geodesic regions solution for 3D face recognition is used to compare the reconstructed 3D model against a gallery of 3D face scans acquired with a laser scanner. Preliminary experimental results on a small database of 3D face scans show the viability of the approach. In addition, experiments have been carried out to demonstrate the robustness of the approach to possible errors in the locations of the landmarks identified during the ASM process.

Dmitry Kinoshenko et al. propose a metric on partitions of arbitrary measurable sets and investigate its special properties for metric content-based image retrieval based on the ‘spatial’ semantics of images. Such nested partitions carry spatial content of the scene at various levels of detail and take region relations into consideration as well. Partitions arise in CBIR in a variety of ways. Under hierarchical clustering, for instance, nested partitions allow an image collection to be divided into disjoint nested subsets, so that one first seeks a suitable class, then the subclass most similar to the query, and so on. Consequently, exhaustive search is performed only at the lowest level of the hierarchy. Nested partitions thus express, in implicit form, a degree of information refinement or coarsening. The metrical properties of nested partitions investigated in the chapter therefore not only support rational content control but also enable specific search algorithms, e.g., ones invariant to image background.
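The coarse-to-fine search over nested partitions can be sketched as a descent through cluster representatives, with exhaustive comparison only inside the leaf that is reached. The tree layout, toy one-dimensional data, and distance function below are illustrative assumptions, not the authors' partition metric.

```python
def hierarchical_search(query, tree, dist):
    """Descend a tree of nested partitions: at each level, follow the
    child cluster whose representative is closest to the query, then
    exhaustively search only the items in the leaf reached."""
    node = tree
    while node.get("children"):
        node = min(node["children"], key=lambda c: dist(query, c["rep"]))
    return min(node["items"], key=lambda x: dist(query, x))

# Toy two-level hierarchy over scalar "images".
tree = {
    "children": [
        {"rep": 10, "items": [8, 9, 11, 12]},
        {"rep": 100, "items": [95, 102, 110]},
    ]
}
dist = lambda a, b: abs(a - b)
print(hierarchical_search(97, tree, dist))  # descends into the second cluster
```

Only the chosen leaf is scanned exhaustively, which is the cost saving the chapter attributes to nested partitions.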

The pace of scholarly exploration, publication, and dissemination grows faster every year, reaching unprecedented levels. To support this level of innovation, scholars increasingly rely on open-access mechanisms and on digital libraries, portals, and aggregators to disseminate their findings. While there is controversy over which of the trends of search engines, open access, preprints, and self-archiving has most influenced the growth of scientific discovery, the consensus is that together these methods have improved the dissemination of scholarly materials. Now, an arguable bottleneck in the scientific process lies in the processing, sensemaking, and utilization of scholarly discoveries for the next iteration. Scholars are still largely confined to printing, reading, and annotating the papers of interest to them offline, without the help or guidance of a digital library to organize and collect their thoughts. The authors believe a key component of a strategy to address this gap is building applications that take advantage of the logical structure and semantic information within the documents themselves. Even within the limited domain of computer science, searching for competing methodologies to solve a problem, analyzing empirical results in tables, finding example figures to use in a presentation, or determining which datasets have been used to evaluate an approach are all comparative tasks that researchers perform regularly. Unfortunately, these can currently only be done manually, without aid from any computing infrastructure. Supporting such analytics is not trivial and requires groundwork. One important subtask common to all of the above problems is obtaining the logical structure of the scholarly document. Minh-Thang Luong, et al. identify not only metadata such as title, authors, abstract, and references, but also the logical structure of the internals of the document: sections, subsections, figures, tables, equations, footnotes, and captions.

Data acquisition is a major concern in text classification. The extensive human effort that conventional methods require to build a quality training collection is not always available to researchers. Wei-Yen Day, et al. investigate how to collect training data automatically by sampling the Web with a set of given class names. The basic idea is to generate appropriate keywords and submit them as queries to search engines to acquire training data. The first of the two methods presented in this chapter samples the concepts common to all classes; the other samples the concepts that discriminate each class. A series of experiments carried out independently on two different datasets shows that the proposed methods significantly improve classifier performance even without manually labeled training data, and that the Web-sampling strategy substantially helps conventional document classification in both accuracy and efficiency. The goal of this chapter is, given a set of concept classes, to acquire a training corpus automatically based solely on the names of the given classes. The authors produce keywords by expanding the concepts encompassed in the class names, query the search engines, and use the returned snippets as training instances in subsequent classification tasks. Two issues may arise with this technique. First, the given class names are usually very short and ambiguous, making search results less relevant to the classes. Second, the keywords expanded from different classes may be very close to one another, so that the corresponding search-result snippets have little power to distinguish one class from the others. The authors present two concept expansion methods to address these problems. The first method, expansion by common concepts, aims to alleviate the problem of ambiguous class names.
The method exploits the relations among the classes to discover their common concepts. For example, “company” could be a common concept of the classes “Apple” and “Microsoft.” Combined with these common concepts, training documents relevant to the given classes can be retrieved. The second method, expansion by discriminative concepts, aims to find concepts that discriminate among the given classes. For example, “iPod” could be a concept unique to the class “Apple.” Combined with these discriminative concepts, training documents that effectively distinguish one class from another can be retrieved.
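The two expansion strategies can be sketched as query construction. In this hypothetical sketch, the concept tables (`COMMON_CONCEPTS` and `DISCRIMINATIVE_CONCEPTS`) are hard-coded stand-ins; the chapter's methods discover such concepts automatically from the relations among the classes rather than from a fixed lookup.

```python
# Hypothetical concept tables, for illustration only. A real system
# would derive these automatically from relations among the classes.
COMMON_CONCEPTS = {
    ("Apple", "Microsoft"): ["company", "software"],
}
DISCRIMINATIVE_CONCEPTS = {
    "Apple": ["iPod", "Mac OS"],
    "Microsoft": ["Windows", "Office"],
}

def queries_by_common_concepts(classes):
    """Pair each (possibly ambiguous) class name with concepts shared by
    all classes, keeping search results on the intended sense of the name."""
    shared = COMMON_CONCEPTS.get(tuple(sorted(classes)), [])
    return [f'"{name}" "{concept}"' for name in classes for concept in shared]

def queries_by_discriminative_concepts(classes):
    """Pair each class name with concepts unique to it, so the returned
    snippets help distinguish one class from the others."""
    return [f'"{name}" "{concept}"'
            for name in classes
            for concept in DISCRIMINATIVE_CONCEPTS.get(name, [])]

classes = ["Apple", "Microsoft"]
print(queries_by_common_concepts(classes))
print(queries_by_discriminative_concepts(classes))
```

The resulting query strings would be submitted to a search engine, and the returned snippets collected as (noisily) labeled training instances for each class.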

Museums and libraries are treasure houses of human history and knowledge, with rich repositories of cultural heritage. With advances in digital libraries and Web 2.0, cultural institutions are beginning to explore new forms of universal and dynamic accessibility. Using the Chinese “qipao” as a case example, Yin-Leng Theng et al. propose a socially constructed virtual museum prototype that incorporates Web 2.0 interactivity to promote cultural communication and exchange while improving user interaction and participation. In this chapter, they describe the design, prototyping, and evaluation of QiVMDL (Qipao Virtual Museum and Digital Library). The chapter concludes with implications for digital library research and development in support of virtual museums for the preservation of cultural heritage.

After two decades of repository development, some conclusions may be drawn as to which types of repositories and which kinds of services best support digital scholarly communication. In this regard, four types of publication repository may be distinguished, namely the subject-based repository, research repository, national repository system, and institutional repository. Two important shifts in the role of repositories may be noted. With regard to content, a well-defined, high-quality corpus is essential; this implies that repository services are likely to be most successful when constructed with the user and reader in mind. With regard to service, high value to specific scholarly communities is essential; this implies that repositories are likely to be most useful to scholars when they offer dedicated services supporting the production of new knowledge. Along these lines, challenges and barriers to repository development may be identified in three key dimensions: identification and deposit of content, access and use of services, and preservation of content and sustainability of service. An indicative comparison of challenges and barriers in some major world regions is offered by Chris Armbruster and Laurent Romary. The rationale is that repositories may have many functions, but unless they serve scholarly communication first and foremost, they will not be accepted and used in the long term. Acceptance and usage by the scholarly community is crucial to sustainability. For this, the emphasis must be on identifying challenges and barriers so as to improve services, and on asking which types of repositories and which kinds of services are needed in the future. The argument proceeds as follows.
First, two major shifts in digital scholarly communication and their impact on repositories are analysed, namely: a) the problem of organising the increasing volume of published knowledge so that the user is served relevant, interesting, and important material; and b) the need to deliver highly useful services to scholars as authors and readers. Second, challenges and barriers to repository development are discussed in three key dimensions: a) identification and deposit of content; b) access and use of services; and c) preservation of content and sustainability of service. The chapter closes with an indicative comparison of some major world regions, in an effort to help repositories overcome the barriers and master the challenges.

Author(s)/Editor(s) Biography

Chia-Hung Wei is currently an assistant professor in the Department of Information Management at Ching Yun University, Taiwan. He obtained his Ph.D. in Computer Science from the University of Warwick, UK, his Master's degree from the University of Sheffield, UK, and his Bachelor's degree from Tunghai University, Taiwan. His research interests include content-based image retrieval, digital image processing, medical image processing and analysis, machine learning for multimedia applications, and information retrieval. He has published over 10 research papers in these areas.
Yue Li received his Ph.D. in computer science from the University of Warwick, UK, in 2009, his M.S. in Information Technology from the Department of Computer Science, University of Nottingham, UK, in 2005, and his B.Sc. in Mathematics from Nankai University, China, in 2003. He is currently an assistant professor in the College of Software at Nankai University, China. He serves on the editorial review board of the International Journal of Digital Crime and Forensics. His research interests include digital forensics, multimedia security, digital watermarking, pattern recognition, machine learning, and content-based image retrieval.
Chih-Ying Gwo received his Bachelor's degree from National Tsing Hua University, Taiwan, his M.S. in computer engineering from Syracuse University, New York, USA, and his Ph.D. in electrical engineering from the University of Southern California, USA. He is currently an assistant professor in the Department of Information Management at Ching Yun University, Taiwan. His research interests include digital libraries, medical image processing, online character recognition, and pattern recognition.

Editorial Board

  • Bharat Bhargava, Purdue University, USA
  • Sherry Chen, Brunel University, UK
  • Wesley W. Chu, University of California - Los Angeles, USA
  • Chi-Ren Shyu, University of Missouri, USA