Design and Managing of Distributed Virtual Organizations

Diego Liberati (Italian National Research Council, Italy)
DOI: 10.4018/978-1-59904-893-2.ch040

Abstract

A framework is proposed for creating, using, communicating, and distributing information whose organizational dynamics allow a distributed cooperative enterprise to operate in public environments, even over open source systems. The approach adopts Web services as the enacting paradigm, possibly over a grid, to formalize interaction as cooperative services on the various computational nodes of a network. The framework defines the responsibility of e-nodes in offering services and the set of rules under which each service can be accessed by e-nodes through service invocation. Through a case study, the chapter details how specific classes of interactions can be mapped onto a service-oriented model whose implementation is carried out in a prototypical public environment.

Key Terms in this Chapter

Grid Computing: A computing paradigm that enables the sharing, selection, and aggregation of a wide variety of geographically distributed computational resources.

Micro-Array: Technology providing biologists with the ability to measure the expression levels of thousands of genes in a single experiment.

Adaptive Bayesian Networks: Conditional probabilistic trees describing the relative importance of each relevant variable in determining a classification.

Bio-Informatics: The application of information technology to advanced biological problems, like transcriptomics and proteomics, involving huge amounts of data.

Workflow: The automation of a business process, in whole or part, during which documents, information, or tasks are passed from one participant (human or machine) to another for action, according to a set of procedural rules.

E-Science: The cooperative work of scientists with various competences at different sites over an ICT connection in order to achieve a common scientific goal.

Minimum Description Length: An information-theory principle stating that the best model is the one that minimizes the number of bits needed to encode both the model and the data; here used to quantify the number of relevant features.
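As an illustrative sketch (not the chapter's own procedure), the two-part MDL trade-off can be shown by selecting a polynomial degree: the model cost grows with the number of parameters, while the data cost shrinks as the fit improves. All names and the synthetic data below are hypothetical.

```python
import numpy as np

def mdl_score(y, yhat, k):
    """Two-part code length in bits, up to constant terms:
    model cost ~ (k/2) log2(n), data cost ~ (n/2) log2(RSS/n)."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return 0.5 * k * np.log2(n) + 0.5 * n * np.log2(rss / n)

# Synthetic example: noisy quadratic data; MDL penalizes both
# underfitting (large residuals) and overfitting (extra parameters).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1 + 2 * x - 3 * x**2 + rng.normal(0, 0.1, len(x))

scores = {}
for d in range(6):
    coeffs = np.polyfit(x, y, d)                 # fit degree-d polynomial
    scores[d] = mdl_score(y, np.polyval(coeffs, x), d + 1)

best = min(scores, key=scores.get)               # degree with shortest code
```

The same criterion carries over to feature selection: each candidate feature adds parameters to the model cost and must pay for itself by shortening the data's description.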

Web Services: A software paradigm enabling peer-to-peer computation in distributed environments, based on the concept of a “service” as an autonomous piece of code published on the network.

Unsupervised Clustering: Automatic classification of a dataset into two or more subsets on the basis of the intrinsic properties of the data, without taking into account further contextual information.
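A minimal sketch of the idea, assuming k-means as the clustering technique (one of many possible choices; the function and data below are illustrative, not from the chapter): groups emerge from the geometry of the data alone, with no labels supplied.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: partition the rows of X into k clusters using
    only distances between points (no labels or contextual information)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random initial centers
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic groups are recovered automatically.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)),
               rng.normal(5, 0.1, (20, 2))])
labels = kmeans(X, 2)
```

In a micro-array setting, the rows of `X` would be expression profiles, and the recovered subsets would be candidate groups of co-expressed genes or samples.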

Principal Component Analysis: Rearrangement of the data matrix into new orthogonal transformed variables (the principal components), ordered by decreasing variance.
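A minimal sketch of this rearrangement, assuming the standard computation via singular value decomposition of the mean-centered data matrix (the variable names and random data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # 100 samples, 5 original variables
Xc = X - X.mean(axis=0)                  # center each column

# SVD of the centered matrix: rows of Vt are the orthogonal directions
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = Xc @ Vt.T                        # data expressed in the new variables
variances = s**2 / (len(X) - 1)           # variance along each component
```

Because the singular values are returned in descending order, the transformed variables automatically come out in decreasing order of variance, which is what allows the first few components to summarize most of the dataset.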
