Web Engineering Advancements and Trends: Building New Dimensions of Information Technology

Ghazi I. Alkhatib (The Hashemite University, Jordan) and David C. Rine (George Mason University, USA)
Release Date: January 2010 | Copyright: © 2010 | Pages: 374 | DOI: 10.4018/978-1-60566-719-5
ISBN13: 9781605667195 | ISBN10: 1605667196 | EISBN13: 9781605667201 | ISBN13 Softcover: 9781616922337

Description

As countless failures in information technology and Web-based systems are caused by an incorrect understanding of knowledge sharing, an increased awareness of modern, fundamental industry concepts becomes crucial to Web and interface developers.

Web Engineering Advancements and Trends: Building New Dimensions of Information Technology examines integrated approaches in new dimensions of social and organizational knowledge sharing with emphasis on intelligent and personalized access. A defining collection of field advancements, this publication provides current research, applications, and techniques in testing and validation of Web systems.

Topics Covered

The many academic areas covered in this publication include, but are not limited to:

  • Agent-enabled semantic Web
  • GUI testing methodology
  • Image Mining
  • Intelligent semantic Web services
  • Object oriented software testing
  • Pattern-oriented Web engineering
  • Scenario driven decision systems
  • Software architecture analysis
  • User interfaces for improving cell phone devices
  • Voice driven emotion recognizer mobile phone

Reviews and Testimonials

Web Engineering Advancements and Trends: Building New Dimensions of Information Technology reflects on the future dimensions of Information Technology and Web Engineering (ITWE), and expands on two major themes to emphasize intelligence, provisioning, and personalization of Web engineering utilizing technologies for the advancement of ITWE applications.

– Ghazi I. Alkhatib, Applied Science University - Amman, Jordan

Preface

This book is the sequel to two previous books: Book I, entitled 'Agent Technologies and Web Engineering: Applications and Systems,' and Book II, entitled 'Integrated Approaches in Information Technology and Web Engineering: New Dimensions of Social and Organizational Knowledge Sharing.' In this book we include this introductory chapter to reflect on future dimensions of Information Technology and Web Engineering (ITWE). We expand on the two major themes of the first two books to emphasize intelligence, provisioning, and personalization of Web engineering, utilizing technologies for the advancement of ITWE applications. Such applications include e-cultures, e-sciences, e-businesses, and e-governments. Important technologies in Web-engineered systems are social and organizational Web agents.

As to technologies, four important technologies for Web-engineered, intelligent, integrated approaches to Information Technology (IT) and social and organizational knowledge sharing are Web ontologies, the Semantic Web, Dublin Core, and cloud computing.

As we shall see in the material that follows, there are many new dimensions appearing in ITWE. Many failures of previously developed, Web-engineered systems are caused by an incorrect understanding of the intelligent sharing of knowledge. Within the evolution of ITWE, a new form of intelligent sharing and personalization of news and information creation and distribution arises.

Moreover, it is also important to understand contemporary approaches to ITWE investment payoffs. Such ITWE investment payoffs include, but are not limited to, these elements:

  • IT Investment and Organizational Performance
  • Qualitative and Quantitative Measurement of ITWE Investment Payoff
  • Integrated Approaches to Assessing the Business Value of ITWE
  • Relationships Between Firms’ ITWE Policy and Business Performance
  • Investment in Reusable and Reengineered ITWE and Success Factors
  • Modeling ITWE Investment
  • Understanding the Business Value of ITWE.

    For the reader to better understand each of the fundamental concepts incorporated in the chapters of this book, an introductory overview of these concepts is in order. While the materials in these chapters do not cover all of these new dimensions in applications and technologies, we hope that researchers will embark on these dimensions in the near future. This introductory chapter and the material in the following chapters will be especially helpful for the advanced student or professional who has not closely followed these developments in ITWE.

    Web Engineering

    The World Wide Web (http://en.wikipedia.org/wiki/Web_Engineering) has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behavior and place some unique demands on their usability, performance, security and ability to grow and evolve.

    However, a vast majority of these applications continue to be developed in an ad-hoc way, contributing to problems of usability, maintainability, quality and reliability. While Web development can benefit from established practices from other related disciplines, it has certain distinguishing characteristics that demand special considerations. In recent years, there have been some developments towards addressing these problems and requirements. As an emerging discipline, Web engineering actively promotes systematic, disciplined and quantifiable approaches towards successful development of high-quality, ubiquitously usable Web-based systems and applications.

    In particular, Web engineering focuses on the methodologies, techniques and tools that are the foundation of Web application development and which support their design, development, evolution, and evaluation. Web application development has certain characteristics that make it different from traditional software, information system, or computer application development.

    Web engineering is multidisciplinary and encompasses contributions from diverse areas: systems analysis and design, software engineering, hypermedia/hypertext engineering, requirements engineering, human-computer interaction, user interface, information engineering, information indexing and retrieval, testing, modeling and simulation, project management, usability engineering, and graphic design and presentation.

    Web engineering is neither a clone nor a subset of software engineering, although both involve programming and software development. Software engineering traditionally focuses on the development of programs that execute on computers, while Web engineering focuses on the development of programs that execute over the Internet; however, the differences between the two go beyond that. While Web engineering uses software engineering principles, it encompasses new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of Web-based applications.

    Information Technology and Web Engineering

    Information technology (IT) (http://en.wikipedia.org/wiki/Information_Technology), as defined by the Information Technology Association of America (ITAA), is "the study, design, development, implementation, support or management of computer-based information systems, particularly software applications and computer hardware." IT deals with the use of electronic computers and computer software to convert, store, protect, process, transmit, and securely retrieve information; that is, IT focuses on the use of information to improve the quality of human work and life.

    Today, the term information technology has ballooned to encompass many aspects of computing and technology, and the term is more recognizable than ever before. The information technology umbrella can be quite large, covering many fields. IT professionals perform a variety of duties that range from installing applications to designing complex computer networks and information databases. A few of the duties that IT professionals perform may include data management, networking, engineering computer hardware, database and software design, as well as the management and administration of entire systems.

    When computer and communications technologies are combined so as to leverage the use of information to improve the work and life of humans, the result is information technology, or "infotech". Information Technology (IT) is a general term that describes any technology that helps to produce, manipulate, store, communicate, and/or disseminate information. Presumably, when speaking of Information Technology (IT) as a whole, it is noted that the use of computers and information are associated.

    Web and E-Science

    The Web Science Research Initiative (WSRI), or Web Science (http://en.wikipedia.org/wiki/Web_Science), was energized more recently by a joint effort of MIT and the University of Southampton to bridge and formalize the social and technical aspects of collaborative applications running on large-scale networks like the Web. It was announced on November 2, 2006 at MIT. Tim Berners-Lee is leading a program related to this effort that also aims to attract government and private funds and eventually produce undergraduate and graduate programs. This is very similar to the iSchool movement.

    Some initial areas of interest are:

  • Trust and privacy
  • Social Networks. See for example the video "The month ahead: Social networks to shake things up in May," about the use of Twitter, Facebook, and MySpace for creating social networks. (http://news.zdnet.com/2422-13568_22-292775.html?tag=nl.e550) Accessed 4/29/2009
  • Collaboration, using Web 2.0 for example. One study suggested the use of case-based reasoning to encourage users to participate, collaborate, update, and communicate using Web 2.0 (He et al., 2009)

    The term e-Science (or eScience) (http://en.wikipedia.org/wiki/E-Science) is used to describe computationally intensive science that is carried out in highly distributed network environments, or science that uses immense data sets that require grid computing; the term sometimes includes technologies that enable distributed collaboration, such as the Access Grid. Traditionally, e-Science refers to the use of Internet technology as a platform for scientific computation or for applying scientific methodologies. The term was coined by John Taylor, the Director General of the United Kingdom's Office of Science and Technology, in 1999 and was used to describe a large funding initiative starting in November 2000. Examples of this kind of science include social simulations, Web- and Internet-based particle physics, Web- and Internet-based earth sciences, and Web- and Internet-based bioinformatics. Particle physics has a particularly well-developed e-Science infrastructure due to its need, since the 1960s, for adequate computing facilities for the analysis of results and the storage of data, originating in the past from Particle-In-Cell (PIC) plasma physics simulations for electronics designs and astrophysics in national and international laboratories, and more recently from the CERN Large Hadron Collider, which is due to start taking data in 2008.

    Web Application Domains

    E-Business

    Electronic Business, commonly referred to as "eBusiness" or "e-Business", (http://en.wikipedia.org/wiki/E-business) may be defined as the utilization of information and communication technologies (ICT) in support of all the activities of business and traditionally refers to the use of Internet technology as a platform for doing business. Commerce constitutes the exchange of products and services between businesses, groups and individuals and hence can be seen as one of the essential activities of any business. Hence, electronic commerce or eCommerce focuses on the use of ICT to enable the external activities and relationships of the business with individuals, groups and other businesses. Some of these e-business activities have been popularized by order entry, accounting, inventory and investments services.

    Louis Gerstner, the former CEO of IBM, in his book, ‘Who Says Elephants Can't Dance?’, attributes the term "e-Business" to IBM's marketing and Internet teams in 1996.

    Electronic business methods enable companies to link their internal and external data processing systems more efficiently and flexibly, to work more closely with suppliers and partners, and to better satisfy the needs and expectations of their customers.

    In practice, e-business is more than just e-commerce. E-business refers to a more strategic focus, with an emphasis on the functions that occur using electronic capabilities, while e-commerce is a subset of an overall e-business strategy. E-commerce seeks to add revenue streams by using the World Wide Web or the Internet to build and enhance relationships with clients and partners and to improve efficiency. Often, e-commerce involves the application of knowledge management systems.

    E-business involves business processes spanning the entire value chain: electronic purchasing and supply chain management, processing orders electronically, handling customer service, and cooperating with business partners. Special technical standards for e-business facilitate the exchange of data between companies. E-business software solutions allow the integration of intra and inter firm business processes. E-business can be conducted using the Web, the Internet, intranets, extranets, or some combination of these.

    e-Government

    e-Government (from electronic government, also known as e-gov, digital government, or online government (http://en.wikipedia.org/wiki/E-Government), or in certain contexts transformational government) refers to the use of Internet technology as a platform for exchanging information, providing services, and transacting with citizens, businesses, and other arms of government. e-Government may be applied by the legislature, judiciary, or administration in order to improve internal efficiency, the delivery of public services, or processes of democratic governance. The primary delivery models are Government-to-Citizen or Government-to-Customer (G2C), Government-to-Business (G2B), Government-to-Government (G2G), and Government-to-Employees (G2E). Within each of these interaction domains, four kinds of activities take place:

  • Pushing information over the Internet, e.g.: regulatory services, general holidays, public hearing schedules, issue briefs, notifications, etc.
  • Two-way communications between the agency and the citizen, a business, or another government agency. In this model, users can engage in dialogue with agencies and post problems, comments, or requests to the agency.
  • Conducting transactions, e.g.: lodging tax returns, applying for services and grants.
  • Governance, e.g.: online polling, voting, and campaigning.

    The most important anticipated benefits of e-government include more efficiency, improved services, better accessibility of public services, and more transparency and accountability. While e-government is often thought of as "online government" or "Internet-based government," many non-Internet "electronic government" technologies can be used in this context. Some non-Internet forms include telephone, fax, PDA, SMS text messaging, MMS, wireless networks and services, Bluetooth, CCTV, tracking systems, RFID, biometric identification, road traffic management and regulatory enforcement, identity cards, smart cards and other NFC applications; polling station technology (where non-online e-voting is being considered), TV and radio-based delivery of government services, email, online community facilities, newsgroups and electronic mailing lists, online chat, and instant messaging technologies. There are also some technology-specific sub-categories of e-government, such as m-government (mobile government), u-government (ubiquitous government), and g-government (GIS/GPS applications for e-government).

    There are many considerations and potential implications of implementing and designing e-government, including disintermediation of the government and its citizens, impacts on economic, social, and political factors, and disturbances to the status quo in these areas.

    In countries such as the United Kingdom, there is interest in using electronic government to re-engage citizens with the political process. In particular, this has taken the form of experiments with electronic voting, aiming to increase voter turnout by making voting easy. The UK Electoral Commission has undertaken several pilots, though concern has been expressed about the potential for fraud with some electronic voting methods.

    Governments are adopting the Enterprise Service Bus (ESB), a message-passing program among disparate data centers that functions on top of the Service Oriented Architecture paradigm. The following links point to different ESB applications in the respective countries:

  • USA, Washington DC (http://www.oreillynet.com/xml/blog/2006/08/esb_adoption_in_government.html)
  • Ireland: (http://www.renault.com/SiteCollectionDocuments/Communiqu%C3%A9%20de%20presse/en-EN/Pieces%20jointes/19579_03042009_PRAlliance_Irish_government_EN_E8032F3B.pdf), (http://www.rte.ie/news/2009/0416/esb.html)
  • Vietnam: (http://www.esb.ie/main/news_events/press_release217.jsp)
  • Jordan: (http://www.customs.gov.jo/library/633400494578802642.pdf)

    A maturity model for e-government services includes integration (using WS and ESB) and personalization through push and pull technologies (using data mining and intelligent software agents) as the last two stages (Alkhatib, 2009). This is critical for e-government systems, since they generate and store voluminous amounts of data spread throughout countries and the world, accessed by huge numbers of users, businesses, inter-government agencies, and government managers. Retrieving relevant data and information over the Web dictates the need to integrate various repositories of data and information, as well as to personalize access over the Web, in order to improve the quality of service for citizen, business, and inter-government knowledge sharing.
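
    To make the message-routing idea behind an ESB concrete at a very small scale, the following Python sketch is a toy publish/subscribe bus with invented service names; it only illustrates loosely coupled services communicating through a shared bus, not any of the national systems listed above.

        # Toy ESB-style message bus (illustrative sketch only; a real ESB adds
        # message transformation, persistence, security, and standards support).

        class ServiceBus:
            def __init__(self):
                self.handlers = {}  # topic -> list of subscribed handler callables

            def subscribe(self, topic, handler):
                self.handlers.setdefault(topic, []).append(handler)

            def publish(self, topic, message):
                # Deliver the message to every service subscribed to this topic.
                for handler in self.handlers.get(topic, []):
                    handler(message)

        # Hypothetical e-government services that communicate only through the bus.
        def tax_service(msg):
            print("Tax agency received:", msg)

        def customs_service(msg):
            print("Customs agency received:", msg)

        bus = ServiceBus()
        bus.subscribe("citizen.address_change", tax_service)
        bus.subscribe("citizen.address_change", customs_service)
        bus.publish("citizen.address_change", {"citizen_id": "A-123", "new_city": "Amman"})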

    There are many new dimensions appearing in ITWE. Many of these new dimensions have to do with the following elements:

  • Global shift of business and government enterprises
  • Mobility of stakeholders in social, business and government enterprises
  • Evolution of prior IT enterprises such as news, education and intelligence
  • Evolution of social cultures in the emerging third world communities
  • Evolution of technologies beyond traditional information and Web technologies.
  • Failures of ITWE technologies that cause firms to fail.

    One author (Wang, 2009) notes that "Analysis of the once highly popular concept enterprise resource planning (ERP) suggests that (1) the popularity of ERP was influenced positively by the prevalence of highlighted business problems that ERP had claimed to solve; (2) ERP’s popularity was influenced negatively by the prevalence of related innovation concepts; and (3) these influences largely disappeared after ERP passed its peak popularity."

    However, ERP systems still play a major role in enterprises as the backbone applications supporting back-office operations. New ERP research emphasizes usability and the integration of different databases through middleware technologies such as service-oriented architectures and portals. Also, for more effective utilization of these ERP systems, many vendors, such as Oracle and SAP, have moved their applications to Internet and cloud computing platforms. Furthermore, ERP II links Intranet-based ERP to customers and suppliers. We note one study on Internet-based customer relationship management (eCRM) (Chen, 2007).

    In this book, Chapter 4 of Section 2 contains an analysis of ERP usability issues. Other chapters in Section 4 present studies on localized user interfaces for improving users' device competency with cell phones, a voice-driven emotion recognizer for mobile phones, and a testing methodology for graphical user interfaces.

    The observations made by Christensen (1997) regarding the ITWE innovator's dilemma, attesting to new information technologies causing firms to fail, still hold true today. These include the following:

  • Disruptive technological changes
  • Technologies that do not match stakeholders’ needs or requirements
  • Mismatch of an enterprise’s size to the market size
  • Technology mismatch to emerging markets
  • Technology performance provided, market demand and product life cycles
  • Mismanagement of disruptive technological change.

    Many of these failures are caused by an incorrect understanding of the sharing of knowledge. Knowledge sharing (http://en.wikipedia.org/wiki/Knowledge_Sharing) is an activity through which knowledge (i.e., information, skills, or expertise) is exchanged among people, friends, or members of a family, a community (e.g., Wikipedia), or an organization. Consider the present failures and lack of understanding in the news enterprises. Fewer news stakeholders now read the traditional newspaper as written by traditional news reporters and editors, in part due to the lack of completeness, unbiased reporting, and participation within the stakeholders' communities. Electronic (digital) newspapers delivered by email, Web pages, or TV channels, while allowing broader and cheaper circulation, still suffer from these three failure elements. Yet consider the rise of interactive news-sharing communities and blogging to complement traditional news reporting and editing, wherein even global and cross-cultural communities interactively share their news information and intelligence insights with one another, eliminating some of the three failures of traditional reporting and editing. Hence, within the evolution of ITWE a new form of news creation and distribution arises, and this new form of news constitutes a valuable knowledge asset.

    Organizations such as the emerging news-sharing communities are recognizing that knowledge constitutes a valuable intangible asset for creating and sustaining competitive advantages. Knowledge sharing activities are generally supported by knowledge management systems. As with news organizations, such knowledge management systems are evolving from the traditional hierarchical publisher-reporter-editor model to a widely distributed, global community network model. However, information and Web technology constitutes only one of the many factors that affect the sharing of knowledge in organizations; others include organizational culture, trust, and incentives.

    The sharing of knowledge constitutes a major challenge in the field of knowledge management because some employees tend to resist sharing their knowledge with the rest of the organization. Among the obstacles that can hinder knowledge sharing, one stands out: the notion that knowledge is property and that ownership is very important. To counteract this notion, individuals must be reassured that they will receive credit for the knowledge products they create. However, there is a risk in knowledge sharing: individuals are most commonly rewarded for what they know, not for what they share. If knowledge is not shared, negative consequences such as isolation and resistance to ideas occur. To promote knowledge sharing and remove its obstacles, the organizational culture should encourage discovery and innovation, which will result in a culture of trust. Such should be the case within the future domain of news knowledge.

    Since organizations generate a huge amount of knowledge, intelligent and personalized knowledge sharing becomes necessary to ensure the effective utilization of knowledge throughout the enterprise. A related research area is knowledge verification and validation, especially of tacit knowledge, which ensures that only verified and valid knowledge is included in knowledge repositories.

    In order to achieve maximum use of Web engineered systems, Section 3: Testing and Performance Evaluation contains chapters related to:

  • A methodology for testing object oriented systems
  • A theory and implementation of a specific input validation testing tool
  • Performance of Web servers
  • A framework for incorporating credibility engineering in Web engineering applications

    Web Engineering Technologies

    Ontologies and Thesauri

    Ontology tools, of more recent origin, are similar to machine assisted indexing software, but are far more complex to build. Ontology tools are designed to work on specific knowledge domains and require the collaboration of domain experts and ontology specialists to develop. They are further intended to be comprehensive with regard to the chosen domain. The experience to date with this class of tools suggests that the building of the ontology for a specific domain is a long and expensive process.

    A technical thesaurus (as opposed to the Roget kind of thesaurus) is a formalized method of representing subject terms in a given domain. Most of the formal thesauri in use today conform to a standard format (e.g., NISO Z39.19).

    There are several compelling reasons to use a technical thesaurus to enhance information retrieval, especially through an interactive interface:

  • It removes ambiguity (through scope notes and parenthetical qualifiers)
  • It provides context (terms can be viewed hierarchically as Broader Terms [BT], Narrower Terms [NT], and Related Terms [RT])
  • Terms can also be viewed alphabetically
  • Selected terms can be incorporated into a search argument
  • It can offer an intelligent interface
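
    As a rough illustration of how BT/NT/RT relationships can support interactive retrieval, the Python sketch below expands a user's search term with its narrower and related terms; the terms and structure are invented for the example and are not drawn from any particular Z39.19 thesaurus.

        # Toy thesaurus: each term maps to broader (BT), narrower (NT), and related (RT) terms.
        THESAURUS = {
            "web engineering": {
                "BT": ["software engineering"],
                "NT": ["web testing", "web usability"],
                "RT": ["hypermedia engineering"],
            },
            "web testing": {"BT": ["web engineering"], "NT": [], "RT": ["software testing"]},
        }

        def expand_query(term):
            """Return the term plus its narrower and related terms for a broadened search."""
            entry = THESAURUS.get(term.lower(), {})
            expanded = {term.lower()}
            expanded.update(entry.get("NT", []))
            expanded.update(entry.get("RT", []))
            return sorted(expanded)

        print(expand_query("Web Engineering"))
        # -> ['hypermedia engineering', 'web engineering', 'web testing', 'web usability']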

    The Web Ontology Language (OWL) (http://en.wikipedia.org/wiki/Web_Ontology_Language) is a family of knowledge representation languages for authoring ontologies, and is endorsed by the World Wide Web Consortium. This family of languages is based on two (largely, but not entirely, compatible) semantics: OWL DL and OWL Lite semantics are based on Description Logics, which have attractive and well-understood computational properties, while OWL Full uses a novel semantic model intended to provide compatibility with RDF Schema. OWL ontologies are most commonly serialized using RDF/XML syntax. OWL is considered one of the fundamental technologies underpinning the Semantic Web, and has attracted both academic and commercial interest.

    In October 2007, a new W3C working group was started to extend OWL with several new features as proposed in the OWL 1.1 member submission. This new version, called OWL 2, has already found its way into semantic editors such as Protégé and semantic reasoning systems such as Pellet and FaCT++.

    Semantic Web

    The Semantic Web (http://en.wikipedia.org/wiki/Semantic_Web) is an evolving extension of the World Wide Web in which the semantics of information and services on the Web are defined, making it possible for the Web to understand and satisfy the requests of people and machines to use Web content. It derives from World Wide Web Consortium director Sir Tim Berners-Lee's vision of the Web as a universal medium for data, information, and knowledge exchange.

    At its core, the semantic Web comprises a set of design principles, collaborative working groups, and a variety of enabling technologies. Some elements of the semantic Web are expressed as prospective future possibilities that are yet to be implemented or realized. Other elements of the semantic Web are expressed in formal specifications. Some of these include Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, N3, Turtle, N-Triples), and notations such as RDF Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain.
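
    As a small, concrete illustration of these building blocks, the sketch below uses the open-source rdflib Python library (one possible toolkit among several; the vocabulary and resource names are invented for the example) to build a tiny RDF graph and serialize it both as Turtle and as RDF/XML.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS

        EX = Namespace("http://example.org/")   # hypothetical vocabulary for the example

        g = Graph()
        g.bind("ex", EX)
        g.add((EX.WebEngineering, RDF.type, RDFS.Class))
        g.add((EX.WebEngineering, RDFS.label, Literal("Web Engineering")))
        g.add((EX.chapter1, EX.coversTopic, EX.WebEngineering))

        print(g.serialize(format="turtle"))     # Turtle notation
        print(g.serialize(format="xml"))        # RDF/XML notation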

    Other research relating the Semantic Web to XML can be found at (http://www.wiley.com/legacy/compbooks/daconta/sw/), and to ontologies at (http://www.aifb.uni-karlsruhe.de/WBS/Publ/2001/OLfSW_amasst_2001.pdf).

    Section 1, with six chapters, contains research in the areas of the context-aware Semantic Web and the intelligent Semantic Web for improving query formulation and information retrieval, the impact of ontologies on information retrieval using multi-agent technology, clustering image mining, and integrating patterns in Web engineering applications.

    Dublin Core

    The Dublin Core is a product of the Dublin Core Metadata Initiative (DCMI). It is a standard set of 15 fields, some of which can take qualifiers, used to describe virtually any resource that can be accessed over the WWW (although it is not limited to resources found on the Web). The 15 Dublin Core elements may be thought of as dimensions (or facets) that are the most useful for getting a quick grasp of the resource in question. The 15 core elements (with optional qualifiers in parentheses) are:

  • Identifier—a unique identifier
  • Format (Extent, Medium)—description of physical characteristics
  • Type—conceptual category of the resource
  • Language—the language in which the resource is written
  • Title (Alternative)—the title of the resource
  • Creator—person primarily responsible for the intellectual content
  • Contributor—secondary contributors to the work
  • Publisher—organization responsible for publishing the work
  • Date (Created, Valid, Available, Issued, Modified)—date as qualified
  • Coverage (Spatial, Temporal)—locates the content in space and time
  • Subject—topics covered, free-form or from a controlled list
  • Source—from which the resource is derived
  • Relation (Is Version Of, Has Version, Is Replaced By, Replaces, Is Required By, Requires, Is Part Of, Has Part, Is Referenced By, References, Is Format Of, Has Format)—relationship of the resource to the source cited
  • Rights—to use or reproduce
  • Description (Table of Contents, Abstract)—content as qualified or full text of the resource

    Each of the Dublin Core elements can have multiple values. The following link shows the list of elements as of 2008-01-14: (http://dublincore.org/documents/dces/)

    The Dublin Core metadata (http://en.wikipedia.org/wiki/Dublin_Core) element set is a standard for cross-domain information resource description. It provides a simple and standardized set of conventions for describing things online in ways that make them easier to find. Dublin Core is widely used to describe digital materials such as video, sound, images, text, and composite media like Web pages. Implementations of Dublin Core typically make use of XML and are based on the Resource Description Framework. Dublin Core is defined by ISO Standard 15836 (2003) and NISO Standard Z39.85-2007.
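
    As a hedged sketch of what such an XML-based implementation might look like, the Python snippet below emits a minimal Dublin Core record using the standard dc element namespace; the element values describe this book and are included purely as sample data.

        import xml.etree.ElementTree as ET

        DC = "http://purl.org/dc/elements/1.1/"      # Dublin Core element namespace
        ET.register_namespace("dc", DC)

        record = ET.Element("metadata")
        for element, value in [
            ("title", "Web Engineering Advancements and Trends"),
            ("creator", "Alkhatib, Ghazi I."),
            ("creator", "Rine, David C."),           # elements may repeat (multiple values)
            ("date", "2010-01"),
            ("type", "Text"),
            ("language", "en"),
        ]:
            ET.SubElement(record, "{%s}%s" % (DC, element)).text = value

        print(ET.tostring(record, encoding="unicode"))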

    On 1 May 2009, the Dublin Core community announced that "Interoperability Levels for Dublin Core Metadata" had been published as a DCMI Recommended Resource: (http://catalogablog.blogspot.com/2009/05/interoperability-levels-for-dublin-core.html)

    "Interoperability Levels for Dublin Core Metadata" has been published as a Recommended Resource. The document discusses modeling choices involved in designing metadata applications for different types of interoperability. At Level 1, applications use data components with shared natural-language definitions. At Level 2, data is based on the formal-semantic model of the W3C Resource Description Framework (as in Linked Data). At Levels 3 and 4, data also shares syntactic constraints based on the DCMI Abstract Model. The document aims at providing a point of reference for evaluating interoperability among a variety of metadata implementations. The authors expect this document to evolve as the trade-offs and benefits of interoperability at different levels are explored and welcome feedback from its readers."

    Dublin Core could be improved by incorporating an intelligent and personalized interface for more effective and efficient information access.

    XML

    XML (eXtensible Markup Language) is a standard Internet format for representing a broad variety of Internet resources. It is derived from SGML (Standard Generalized Markup Language) but is considerably simpler. It is also related to HTML, but goes well beyond it in flexibility and uniformity. In the space of a few years, XML has become the lingua franca of the Internet, especially for exchanging information between two otherwise radically different forms of representation.

    XML compatibility is proposed for whatever solution is adopted by NASA to enhance its lessons learned capability. The use of XML will essentially allow source systems that generate potentially useful information for the lessons learned repository to remain unchanged and unaffected. The final architecture will include an XML format into which source materials can be mapped and exported. The new lessons learned facility will then be able to import these materials into a standard environment.

    It should also be noted here that the Dublin Core can easily be represented in XML format. An interesting new dimension that applies an advanced form of XML has appeared among the latest IEEE standards and related products, described in IEEE StandardsWire(TM), November 2008. The featured product is the Language for Symbolic Music Representation, defined in a new standard. This new IEEE standard represents the first collective step toward the improvement of both sub-symbolic and symbolic music coding and processing, music communication, and the capabilities of individuals and companies.

    IEEE 1599-2008™, "IEEE Recommended Practice for Definition of a Commonly Acceptable Musical Application using the XML Language," offers a meta-representation of music information for describing and processing music information within a multi-layered environment, for achieving integration among structural, score, Musical Instrument Digital Interface (MIDI), and digital sound levels of representation.

    XML is the backbone for developing Internet applications through Service Oriented Architecture (Web services and the Enterprise Service Bus), linking loosely coupled applications. Two options are available: standards-based and native-XML-based. In the first method, three standards are used to deploy a platform-independent Web service as a software service exposed on the Web: it is accessed through SOAP, described with a WSDL file, and registered in UDDI. (http://www.startvbdotnet.com/Web/default.aspx) (For more information on the three standards, see http://Webservices.xml.com/pub/a/ws/2001/04/04/Webservices/index.html.) In the latter approach, a native XML-based Web service is platform dependent, such as Oracle Web Services. (http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb_Web_services.htm)
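
    To make the standards-based option more concrete, the following Python sketch posts a hand-written SOAP envelope to a hypothetical endpoint; the URL, SOAPAction, and operation name are invented, and a production client would normally be generated from the service's WSDL instead.

        import urllib.request

        SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GetCitizenRecord xmlns="http://example.org/egov">
              <CitizenId>A-123</CitizenId>
            </GetCitizenRecord>
          </soap:Body>
        </soap:Envelope>"""

        request = urllib.request.Request(
            "http://example.org/egov/service",                   # hypothetical endpoint
            data=SOAP_ENVELOPE.encode("utf-8"),
            headers={
                "Content-Type": "text/xml; charset=utf-8",
                "SOAPAction": "http://example.org/egov/GetCitizenRecord",
            },
        )
        with urllib.request.urlopen(request) as response:        # fails unless a live service exists
            print(response.read().decode("utf-8"))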

    Here is what IBM was doing in XML-related research as of 24 November 2008 (http://domino.research.ibm.com/comm/research_projects.nsf/pages/xml.index.html):

    "Researchers at several of our labs are completing the infrastructure for the Web to complete its move to XML as its transport data encoding and for much of its data persistence, providing a foundation for Service-Oriented Architecture, Web 2.0 and Semantic Web Technologies, as well as Model-Driven Development. Current focus areas include scalability and performance, and improved programmability.

    Aspects of our work:

  • A modular XHTML system that integrates SMIL (multimedia), XForms, MathML, P3P (controlled disclosure of private information) into HTML
  • Ability to separate information content and information rendering, and put them together again using the powerful XSL style sheet language (and thus support accessibility, personalization, collaboration, search and Mobile Computing, and integrate Multimedia on the Web). This includes approaches to integrating style sheets with Java Server Pages.
  • Ability for application or industry specific markup vocabularies, described with XML Schema language and queried with XML Query Language (One result: Your program or agent works equally well querying a database or repository as querying a Web site).
  • Standard and powerful approaches to linking, metadata, Web Ontologies, and search and filtering (see mineXML).
  • Web Services descriptions and message protocols so that code written in any language can interact with server functions written in any language, while preserving the debugging advantages of human readable messages; and use of Web Services by portals
  • Extensible Digital Signature model
  • DOM API
  • The ability to select whether transformations are performed at the client, at edge nodes, or at the server
  • Supporting use of XML Processing Languages against non-XML information, via Virtual XML
  • Very high quality formatting for text, diagrams, equations
  • Multi-modal user interaction abstractions and tools that support new types of interaction and novel devices
  • Storing XML documents in databases and querying them efficiently
  • The ability for Web sites to hand-off a Web-form initiated transaction while passing the customers information (CP Exchange) in support of e-Commerce.
  • An infrastructure for Semantic Web (Introduction to Semantics Technology, W3C Semantic Web)
  • An infrastructure for Services Computing
  • Experiments in improving the way the Java language can be used in handling XML information (see XJ)"

    Another area of IBM research, as of 11 August 2008, is Virtual XML. (http://domino.research.ibm.com/comm/research_projects.nsf/pages/virtualxml.index.html)

    “Virtual XML

    Virtual XML is the ability to view and process any data - whether XML or non-XML - as though it is XML, and in particular allow use of XML processing languages, such as XPath and XQuery, on the data. In the Virtual XML project we couple this with special access functions that make it possible to write scripts that "mix and match" XML and non-XML data, and advanced analysis and adaptation technology to ensure that the Virtual XML processing is efficient even on large data collections.

    Why?

    More and more structured data is converted into XML documents, either for transmission and processing that follow various standards like the Web services standards or for combination with semi-structured data such as HTML documents. Sometimes the original structured data is replaced with the converted data; sometimes it is converted "on the fly." Both approaches pose problems: If the original data is converted, then legacy applications depending on the old format must be rewritten. Converting data on the fly, on the other hand, imposes a significant performance penalty because the standard XML format requires significant overhead for generating or parsing XML character sequences.

    Virtual XML solves these problems by

  • Keeping everything in the native format most natural for the data, and
  • Providing thin "on-demand" adapters for each format in a generic abstract XML interface corresponding to the XML Infoset as well as the forthcoming XPath and XQuery Data Model."
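
    A very small analogue of this idea can be written in Python: the sketch below exposes comma-separated data as an on-demand XML view and then queries it with the limited XPath support in the standard library. This is only an illustration of the "view non-XML data as XML" idea, not IBM's Virtual XML technology, and the data is invented.

        import csv, io
        import xml.etree.ElementTree as ET

        CSV_DATA = "id,name,agency\n1,Tax portal,Finance\n2,Customs portal,Customs\n"

        def as_xml(csv_text, root_tag="rows", row_tag="row"):
            """On-demand adapter: expose CSV rows as an XML element tree."""
            root = ET.Element(root_tag)
            for row in csv.DictReader(io.StringIO(csv_text)):
                item = ET.SubElement(root, row_tag)
                for field, value in row.items():
                    ET.SubElement(item, field).text = value
            return root

        tree = as_xml(CSV_DATA)
        # XPath-style query over the virtual XML view: names of rows whose <agency> is 'Customs'.
        for name in tree.findall("./row[agency='Customs']/name"):
            print(name.text)    # -> Customs portal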

    Cloud Computing (CC)

    Cloud computing (http://en.wikipedia.org/wiki/Cloud_Computing) is a style of architecture in which dynamically scalable and often virtualized resources are provided as a service over the Internet. The concept incorporates infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

    More recently, two SaaS offerings have been announced: security-as-a-service (http://news.zdnet.com/2422-13568_22-291742.html?tag=nl.e539), accessed 5/12/2009, and PC 'security as a service,' offered free by Panda anti-virus (http://blogs.zdnet.com/Gardner/?p=2920&tag=nl.e539), accessed 4/29/2009.

    In another development, increased maintenance and support costs may force enterprises to adopt software-as-a-service (SaaS) models to reduce cost and achieve flexibility. (http://blogs.zdnet.com/BTL/?p=17796&tag=nl.e539) accessed 5/12/2009

    Comparisons

    Cloud computing is often confused with grid computing ("a form of distributed computing whereby a 'super and virtual computer' is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks"), utility computing (the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity") and autonomic computing ("computer systems capable of self-management").

    Indeed many cloud computing deployments as of 2009 depend on grids, have autonomic characteristics and bill like utilities — but cloud computing can be seen as a natural next step from the grid-utility model. Some successful cloud architectures have little or no centralised infrastructure or billing systems whatsoever, including peer-to-peer networks like BitTorrent and Skype and volunteer computing like SETI@home.

    Architecture

    The majority of cloud computing infrastructure, as of 2009, consists of reliable services delivered through data centers and built on servers with different levels of virtualization technologies. The services are accessible anywhere that has access to networking infrastructure. The Cloud appears as a single point of access for all the computing needs of consumers. Commercial offerings need to meet the quality of service requirements of customers and typically offer service level agreements. Open standards are critical to the growth of cloud computing and open source software has provided the foundation for many cloud computing implementations.

    Characteristics

    The customers engaging in cloud computing do not own the physical infrastructure serving as host to the software platform in question. Instead, they avoid capital expenditure by renting usage from a third-party provider. They consume resources as a service, paying instead for only the resources they use. Many cloud-computing offerings have adopted the utility computing model, which is analogous to how traditional utilities like electricity are consumed, while others are billed on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not left idle, which can reduce costs significantly while increasing the speed of application development. A side effect of this approach is that "computer capacity rises dramatically" as customers do not have to engineer for peak loads. Adoption has been enabled by "increased high-speed bandwidth" which makes it possible to receive the same response times from centralized infrastructure at other sites.

    Economics

    Cloud computing users can avoid capital expenditure on hardware, software, and services, instead paying a provider only for what they use. Consumption is billed on a utility basis (e.g., resources consumed, like electricity) or a subscription basis (e.g., time-based, like a newspaper) with little or no upfront cost. Other benefits of this time-sharing style of approach are low barriers to entry, shared infrastructure and costs, low management overhead, and immediate access to a broad range of applications. Users can generally terminate the contract at any time (thereby avoiding return-on-investment risk and uncertainty), and the services are often covered by service level agreements with financial penalties.
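
    A back-of-the-envelope comparison illustrates the difference between the two billing styles; all prices below are invented for the example.

        # Hypothetical prices, purely illustrative.
        UTILITY_RATE_PER_HOUR = 0.10      # pay only for compute hours actually used
        SUBSCRIPTION_PER_MONTH = 50.00    # flat monthly fee regardless of usage

        for hours_used in (100, 400, 800):
            utility_cost = hours_used * UTILITY_RATE_PER_HOUR
            cheaper = "utility" if utility_cost < SUBSCRIPTION_PER_MONTH else "subscription"
            print(f"{hours_used:4d} h: utility ${utility_cost:6.2f} vs "
                  f"subscription ${SUBSCRIPTION_PER_MONTH:.2f} -> {cheaper}")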

    Companies

    IBM, Amazon, Google, Microsoft, and Yahoo are some of the major cloud computing service providers. Cloud computing is being adopted by users ranging from individuals to large enterprises, including General Electric and Procter & Gamble.

    Political issues

    The Cloud spans many borders and "may be the ultimate form of globalization." As such it becomes subject to complex geopolitical issues: providers must satisfy myriad regulatory environments in order to deliver service to a global market. This dates back to the early days of the Internet, where libertarian thinkers felt that "cyberspace was a distinct place calling for laws and legal institutions of its own"; author Neal Stephenson envisaged this as a tiny island data haven called Kinakuta in his classic science-fiction novel Cryptonomicon.

    Despite efforts (such as US-EU Safe Harbor) to harmonize the legal environment, as of 2009 providers such as Amazon Web Services cater to the major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "availability zones." Nonetheless, there are still concerns about security and privacy from individual through governmental level, e.g., the USA PATRIOT Act and use of national security letters and the Electronic Communications Privacy Act's Stored Communications Act.

    Legal issues

    In March 2007, Dell applied to trademark the term "cloud computing" (U.S. Trademark 77,139,082) in the United States. The "Notice of Allowance" it received in July 2008 was canceled on August 6, resulting in a formal rejection of the trademark application less than a week later.

    On September 30, 2008, USPTO issued a "Notice of Allowance" to CGactive LLC (U.S. Trademark 77,355,287) for "CloudOS". A cloud operating system is a generic operating system that "manage[s] the relationship between software inside the computer and on the Web", such as Microsoft Azure. Good OS LLC also announced their "Cloud" operating system on December 1st, 2008.

    In November 2007, the Free Software Foundation released the Affero General Public License, a version of GPLv3 designed to close a perceived legal loophole associated with Free software designed to be run over a network, particularly software as a service. An application service provider is required to release any changes they make to Affero GPL open source code.

    Risk mitigation

    Corporations or end users wishing to avoid losing access to their data, or losing the data itself, should research vendors' policies on data security before using vendor services. One technology analyst and consulting firm, Gartner, lists seven security issues that one should discuss with a cloud-computing vendor:

  • Privileged user access—inquire about who has specialized access to data and about the hiring and management of such administrators
  • Regulatory compliance—make sure a vendor is willing to undergo external audits and/or security certifications
  • Data location—ask if a provider allows for any control over the location of data
  • Data segregation—make sure that encryption is available at all stages and that these "encryption schemes were designed and tested by experienced professionals"
  • Recovery—find out what will happen to data in the case of a disaster; do they offer complete restoration and, if so, how long that would take
  • Investigative Support—inquire whether a vendor has the ability to investigate any inappropriate or illegal activity
  • Long-term viability—ask what will happen to data if the company goes out of business; how will data be returned and in what format

    Key characteristics

  • Agility improves with users able to rapidly and inexpensively re-provision technological infrastructure resources.
  • Cost is greatly reduced and capital expenditure is converted to operational expenditure. This lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and minimal or no IT skills are required for implementation.
  • Device and location independence enable users to access systems using a Web browser regardless of their location or what device they are using, e.g., PC, mobile. As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet the users can connect from anywhere.
  • Multi-tenancy enables sharing of resources and costs among a large pool of users, allowing for:
    • Centralization of infrastructure in areas with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer for highest possible load-levels)
    • Utilisation and efficiency improvements for systems that are often only 10-20% utilised.
  • Reliability improves through the use of multiple redundant sites, which makes it suitable for business continuity and disaster recovery. Nonetheless, most major cloud computing services have suffered outages and IT and business managers are able to do little when they are affected.
  • Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads. Performance is monitored and consistent and loosely-coupled architectures are constructed using Web services as the system interface.
  • Security typically improves due to centralization of data, increased security-focused resources, etc., but raises concerns about loss of control over certain sensitive data. Security is often as good as or better than traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible.
  • Sustainability comes about through improved resource utilisation, more efficient systems, and carbon neutrality. Nonetheless, computers and associated infrastructure are major consumers of energy.

    Public cloud

    Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via Web applications/Web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

    Hybrid cloud

    A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises". A recent article makes a case for hybrid clouds such as Cisco’s recent announcement of an on-premise extension to the rebranded WebEx Collaboration Cloud. (http://blogs.zdnet.com/SAAS/?p=758&tag=nl.e539) accessed 5/7/2009

    Private cloud

    Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualisation automation) products claim to "deliver some benefits of cloud computing without the pitfalls", capitalising on data security, corporate governance, and reliability concerns. They have been criticised on the basis that users "still have to buy, build, and manage them" and as such do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".

    The First International Cloud Computing Expo Europe 2009, to be held in Prague, Czech Republic, May 18-19, 2009, lists the following topics (http://www.cloudexpo-europe.com):

  • Getting Ready for Cloud Computing - IT Strategy, Architecture and Security Perspective
  • IT Security Delivered from the Cloud
  • Application Portability in a Multi-Cloud World
  • Load Balancing and Application Architecture in the Cloud
  • The Federated Cloud: Sharing Without Losing Control
  • Cloud Science: Astrometric Processing in Amazon EC2/S3
  • Practical Strategies for Moving to a Cloud Infrastructure
  • Cloud Infrastructure and Application – CloudIA
  • The Darker Sides Of Cloud Computing: Security and Availability
  • The Nationless Cloud?

    In yet another important development, National Science Foundation Press Release 09-093 (May 8, 2009), "'Nimbus' Rises in the World of Cloud Computing," describes the cloud computing infrastructure developed by Argonne National Laboratory and shows that cloud computing's potential is being realized now. For more information, see the link below. (http://www.nsf.gov:80/news/news_summ.jsp?cntn_id=114788&govDel=USNSF_51)

    Current research directions in the area of cloud computing include:

  • Standards development: Developing standards for cloud computing so that organizations and enterprises adopting cloud computing will be able to determine how CC meets their requirements. (http://www.australianit.news.com.au/story/0,24897,25502520-5013040,00.html) accessed May 19, 2009.
  • Virtualization: Linking virtualization with CC to optimize data center operations using the Web. Industries deploying virtualization include auto makers and oil and gas companies. (http://blogs.zdnet.com/BTL/?p=18410&tag=nl.e539) accessed May 20, 2009.

    FUTURE New Dimensions for ITWE Software

    Global Internet technology crosses many social and cultural boundaries, and the use of Internet (ITWE) technology raises many ethics and values issues. There is a further need for effective and efficient ethics and values identification methods and tools to assist Internet technology workers in businesses, governments, and societies in selecting and using cultural features, both common and distinct, in solving global social information infrastructure conflict problems. The proposed research will examine ethics and values contexts that influence and are influenced by Internet technology. Future ITWE research will use these approaches to identify and develop the ethics and values identification tools needed to assist workers in solving ethics and values problems that arise in the global application of Internet technology. These problems arise from cultural and social differences in ethics and values concerning the Internet.

    First, in order to investigate ethics and values identification approaches to solving problems based on social conflict or isolation, one needs a cross-cultural means of evaluating communicated ethics and values views about Internet technology. New research combines the speaker typology with one's speech interests within an analysis system of logic formulae. It is possible to design Internet ethics and values identification models for social change by developing software products based on forms or logic types. Doing so allows one to build computer models and solve problems, for example, in Internet communications, Internet social sciences, Internet psychology, Internet education, Internet public policy, and Internet conflict resolution.

    Second, our proposed speaker and speech analyses logic-based method will examine the foundation of speech and speaker typology. This method and tools will be validated using samples of communications about Internet technology written by persons from different cultural and social backgrounds. The method and tools resulting from the research should be more efficient than current alternative approaches to ethics and values identification.

    Third, our proposed Internet Speaker and Speech Analyses logic-based method will examine the foundation of intuitive key-interests sense of speech and speaker through a focus on psychological, sociological and political dimensions which reflects the speech’s logical formula and the speaker’s typology.

    Fourth, logic forms principles have already been shown to be a fundamental framework for studying commonalities between different features of human beliefs and cultures. Therefore, we will research how to use ethics and values identification computer technology of Unilogic forms, types, classification and clustering to investigate selected social and cultural problems about the use of Internet technology that can be formulated by deriving common beliefs and features between various human beliefs and cultures.

    Fifth, there are seven transformational steps in the logic form of computer-automated reasoning:

  • Step 1. Identify the set of potential social conflict problems to be evaluated (e.g., belief or policy systems about the use of Internet technology under investigation) as the domain, and describe them informally using logic forms.
  • Step 2. Using natural language analysis on these forms, based upon feature identification and extraction, transform the result of Step 1 into an equivalent formal logic computer language.
  • Step 3. Using a computer modeling language whose grammar is based upon a given type theory, transform the result of Step 2 into an equivalent domain modeling language.
  • Step 4. Computer domain model the written specifications from Step 1 using the result from Step 3.
  • Step 5. Select a particular focus of investigation in the identified domain, ethics and values about the use of Internet technology, and analyze this formal computer domain model using new methods, for example, analysis of commonalities between several global belief or policy systems, and analysis of differences.
  • Step 6. Set up a computer simulation involving interactions of simulated beings in this focus area.
  • Step 7. Execute the computer simulation to assist in identifying and understanding the common human thought concepts about Internet technology expressed within the universe of discourse identified in Step 1.
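
    Read as a processing pipeline, the seven steps could be sketched as follows; every function name below is hypothetical and stands in for a substantial analysis stage, so the skeleton only makes the order and data flow of the steps explicit.

        # Hypothetical skeleton of the seven-step logic-form pipeline described above.

        def identify_domain(problems):             # Step 1: informal logic-form description
            return {"domain": problems}

        def to_formal_logic(informal):             # Step 2: natural-language analysis -> formal logic
            return {"logic": informal}

        def to_modeling_language(logic):           # Step 3: map into a typed domain modeling language
            return {"model_lang": logic}

        def build_domain_model(spec, model_lang):  # Step 4: computer domain model of the specification
            return {"model": (spec, model_lang)}

        def analyze_focus(model, focus):           # Step 5: analyze commonalities and differences
            return {"analysis": (model, focus)}

        def build_simulation(analysis):            # Step 6: set up simulated interacting agents
            return {"simulation": analysis}

        def run_simulation(simulation):            # Step 7: execute and interpret the simulation
            return {"findings": simulation}

        spec = ["beliefs about privacy of Internet content"]
        model = build_domain_model(spec, to_modeling_language(to_formal_logic(identify_domain(spec))))
        print(run_simulation(build_simulation(analyze_focus(model, focus="ethics and values"))))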

    Sixth, the long-term objective of this research in Unilogic Form Ethics and Values Identification is to explore how one can provide a computer ethics and values identification method and tools that extract the universal ‘rhythm’ from the knowledge bases of different social interest entities (business, culture, government, etc.), as expressed in spoken and written communications about Internet technology; reuse this universal rhythm in the service of more inclusive, dynamic management of, for example, conflict; find short or inexpensive ways to administer complex Internet-based social systems; and provide a predicative logic unityped interest relation for any given entity in the system. The long-term benefits of such basic science research include support for decision and policy makers at the levels of organizations, markets, personalities, families, societies, religions, churches, groups of populations, nations, states, government agencies, educational institutes, and so on.

    Seventh, in the short term the objective of Unilogic Form Ethics and Values Identification is to give computer science and information systems research a global method and tools to analyze and remodel existing information about Internet technology, in the form of spoken and written communications, and to reuse it in new but similar domains. This is related to reuse of the unityped interest relation behavior of any defined entity. For example, reusing communications information about an isolated system's entities in order to reform them into a coexisting system's entities requires the ability to discover the interest roots of that isolated system and to remodel it for the coexisting system's entities. To do this, the researcher needs knowledge of logic forms and the ability to control the unitype predicates in order to decipher and remodel the communications information.

    To find the ‘Ethics and Values’ held about the use of Internet Technology, one must examine various issues, such as:

  • Privacy of content used.
  • Type of content used.
  • Integrity of content used.
  • Security of content used.
  • Selective dissemination of content used.

    These issues represent the different kinds of concerns that persons from various cultures have about the use of Internet Technology, and those concerns are interpreted differently as one moves between global cultures. The policies and beliefs applied to each of these issues vary across global cultures, and because Internet Technology bridges different global cultures, policies and beliefs about its use change as the Internet crosses those cultures. To develop a meaningful global set of policies and beliefs about the use of Internet Technology, one must therefore both identify and integrate these different cultural policies and beliefs.
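
    One hedged way to picture this identification-and-integration task is as a comparison over the five issues listed above. The cultures, labels, and positions in the sketch below are placeholders invented for illustration, not survey data.

# Hypothetical per-culture policy positions on the five content issues.
ISSUES = ("privacy", "type", "integrity", "security", "selective_dissemination")

policies = {
    "culture_A": {"privacy": "strict", "type": "open", "integrity": "strict",
                  "security": "strict", "selective_dissemination": "open"},
    "culture_B": {"privacy": "open", "type": "strict", "integrity": "strict",
                  "security": "strict", "selective_dissemination": "strict"},
}

# Issues where the positions already agree are candidates for a global policy;
# the rest are the ones that still need cross-cultural integration.
common = [issue for issue in ISSUES
          if len({p[issue] for p in policies.values()}) == 1]
divergent = [issue for issue in ISSUES if issue not in common]

print("shared positions:", common)      # ['integrity', 'security']
print("need integration:", divergent)   # ['privacy', 'type', 'selective_dissemination']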

    FUTURE New Dimensions for ITWE Applications

    Human exploration and development of space will involve opening the space frontier by exploring, using, and enabling the development of space through information technology, while expanding the human experience into the far reaches of space. At that point, we assert, the current, comparatively primitive World Wide Web (Web) will be replaced and dramatically expanded into an Interstellar Space Wide Web (SWW) (Rine, 2003). Today's state-of-the-art low-orbit communication satellite constellations will be dramatically expanded to higher orbits and to orbits supporting work at other remote human colonies. This will be necessary in order to furnish, in a human-friendly way, the software and information needed to support Interstellar Space Wide Information Technologies. Many of the problems encountered in conceiving of, modeling, designing, and deploying such a facility will differ from those encountered in today's Web. Future research and development work will identify some of these problems and conceptually model a few of their solutions.

    Twenty-first-century Space Wide Web (SWW) distributed component-based software applications will dwarf today's increasingly complex World Wide Web (WWW) environments, which are supported by Earth-bound low-orbit satellite constellations, and will represent a far more significant investment in development, deployment, and maintenance costs (Rine, 2003). As we move into the twenty-first century, part of the cost will come in the effort required to develop, deploy, and maintain the individual software components, many of which will reside on numerous remote satellites. As now, part of this effort will include implementing the required functionality of components, implementing the required interactions among components, and preparing components to operate in some remote runtime environment. One way to reduce the cost of component development will continue to be reuse of existing commercial software components that meet the functional requirements.

    The need for an adaptive Web-based configuration language is motivated by considering a future scenario involving planetary and deep space satellites. Assume that there are several teams of researchers scattered across Mars, and that communication between these researchers is supported by a constellation of low-orbit communication satellites, much like today's low Earth orbit satellite (LEOS) constellations. Further, suppose that there is a deep space probe exploring the asteroid belt between Mars and Jupiter. Scientists on both Mars and Earth would like to be able to dynamically access data from this probe, via a relay between their low-orbit constellations and a ground- or space-based relay station. This access can include running different sets of measurements, changing sensor configurations, and so on. In addition, the appropriate Earth scientists would like to share results with their Mars colleagues using Push technology, and vice versa. Finally, scientists will want to run their experiments on the deep space probe by writing and then loading the equivalent of a Java-like applet onto the probe.

    The above scenario raises a number of technological challenges (Rine, 2003). First, the Earth-based and the Mars-based scientists may have quite different capabilities in terms of the type and amount of data they can receive from the probe. It may even be desirable to first send the data to Earth, have it processed, and then send it back up to Mars. Second, all space-based communication is costly in terms of power consumption, available bandwidth, and round-trip propagation delay. Finally, the dynamics of the situation change due to factors such as changing orbits and relative positions; at times it may be better for the probe to send to Mars, to Earth, or to both. These routing decisions depend on both the needs of the application and the physical configuration of the communication satellites. To make appropriate use of minimal bandwidth and limited power, it is desirable that these application semantics be reflected directly down to the network layer. The challenge is to do this in a way that neither unduly burdens the application writer (i.e., the scientist writing the satellite applet) nor wastes network resources. This becomes possible by developing an SWW-XML and by placing communication control in the appropriate adapters.
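
    To make the routing idea concrete, here is a speculative sketch of an adapter that reflects application-level requirements (payload size, freshness deadline, power budget) down to the choice of communication link. The link parameters, class names, and selection rule are all assumptions for illustration; nothing here describes an existing SWW-XML specification.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    bandwidth_kbps: float
    delay_s: float       # one-way propagation delay
    power_cost_w: float  # transmit power the probe must spend on this link

@dataclass
class Request:
    payload_kb: float
    deadline_s: float    # application-level freshness requirement
    power_budget_w: float

def choose_links(request: Request, links: list[Link]) -> list[Link]:
    """Return every link that can meet the request's deadline and power budget;
    sending on several links (e.g. toward both Mars and Earth) is allowed."""
    feasible = []
    for link in links:
        transfer_time_s = request.payload_kb * 8 / link.bandwidth_kbps + link.delay_s
        if transfer_time_s <= request.deadline_s and link.power_cost_w <= request.power_budget_w:
            feasible.append(link)
    return feasible

# Invented link characteristics for a probe between Mars and Jupiter.
links = [Link("to_mars_relay", 64, 240, 20), Link("to_earth_relay", 32, 1100, 35)]
urgent = Request(payload_kb=512, deadline_s=600, power_budget_w=25)
print([l.name for l in choose_links(urgent, links)])  # ['to_mars_relay']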

    To support a space Web (e.g., the SWW), a futuristic spacecraft (satellite) orbiting a distant planet or moon needs to self-adapt robustly to its target environment. Sometimes this self-adaptation will be needed to respond to SWW user commands from Earth or from another space colony, and sometimes it will be needed as a satellite reconfigures itself to perform more effectively or efficiently in a new physical environment. Imagine a satellite orbiting a different planet, or placed in a different orbit around the same planet, for a period of time (e.g., a month). Environmental factors include differing levels of sunlight, temperature, magnetic field, gravity, and solar wind, and these environments may be uncertain and differ from planet to planet. Suppose a satellite supporting the SWW in this proposal provides imaging, and adopt a client-server model in which the client is the ground station on Earth and the server is the satellite. Suppose three software subsystems are embedded in this satellite, each running on a different processor: command and data handling (CD), flight control, and payload control. The CD subsystem receives up-link commands and routes them through the constellation to a given satellite and a given processor. The flight control software mainly implements the attitude determination and attitude control system (ACS). The payload interface processor (PIP) controls the imaging camera and sends images back to the client. We illustrate the idea of an adaptive ACS; in the future, the same idea can be applied to adaptive PIP software.
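
    A minimal sketch of the adaptive-ACS idea follows, assuming invented environment parameters, gain tables, and thresholds: when the sensed orbital environment changes, the attitude control software re-selects its controller gains before computing a command. It is an illustration of the concept only, not flight software.

from dataclasses import dataclass

@dataclass
class Environment:
    sunlight: float           # relative solar intensity
    magnetic_field_ut: float  # local magnetic field strength, microtesla
    gravity_gradient: float

# Hypothetical gain profiles for different environment regimes.
GAIN_PROFILES = {
    "strong_field": {"kp": 0.8, "kd": 0.3},  # magnetorquers can help; softer gains
    "weak_field":   {"kp": 1.4, "kd": 0.6},  # lean harder on reaction wheels
}

def select_profile(env: Environment) -> str:
    """Pick a gain profile from the sensed environment (threshold is invented)."""
    return "strong_field" if env.magnetic_field_ut > 20.0 else "weak_field"

def acs_control_step(env: Environment, attitude_error: float, error_rate: float) -> float:
    """One adaptive control step: re-select gains for the current environment,
    then compute a simple proportional-derivative style command."""
    gains = GAIN_PROFILES[select_profile(env)]
    return gains["kp"] * attitude_error + gains["kd"] * error_rate

# After moving to a new orbit the sensed field is weak, so the weak-field gains apply.
new_orbit = Environment(sunlight=0.4, magnetic_field_ut=8.0, gravity_gradient=0.7)
print(acs_control_step(new_orbit, attitude_error=0.05, error_rate=0.01))  # roughly 0.076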

    In the last section, we included three chapters on Web applications: a scenario-driven decision support system, a market economy approach for managing data grids, and interoperability of Web-based geospatial applications.

    Author(s)/Editor(s) Biography

    Ghazi Alkhatib is an Assistant Professor of Software Engineering at the College of Computer Science and Information Technology, Applied Science University located in Amman, Jordan. In 1984, he obtained his Doctor of Business Administration from Mississippi State University in Information Systems with minors in Computer Science and Accounting. Since then, he has been engaged in teaching, consulting, training, and research in the area of Computer Information Systems in the US and Gulf countries. In addition to his research interests in databases and systems analysis and design, he has published several articles and presented many papers in regional and international conferences on software processes, knowledge management, e-business, Web services, and agent software, workflow, and portal/grid computing integration with Web services.
    David Rine has been practicing, teaching, and researching engineered software development for over thirty years. Prior to joining George Mason University, he served in various leadership roles in the IEEE Computer Society and co-founded two of its technical committees. He joined George Mason University in 1985, was the founding chair of the Department of Computer Science, and was one of the founders of the (Volgenau) School of Information Technology and Engineering. Rine has received numerous research, teaching, and service awards from computer science and engineering societies and associations, including the IEEE Centennial Award, the IEEE Pioneer Award, IEEE Computer Society Meritorious Service Awards, IEEE Computer Society Special Awards, the IEEE Computer Society 50th Anniversary Golden Core Award, and the historical IEEE Computer Society Honor Roll and Distinguished Technical Services Awards. He has been a pioneer in graduate, undergraduate, and high school education, producing computer science texts and leading the establishment of the International Advanced Placement Computer Science program for the nation's high school students; he was co-designer of the first computer science and engineering curriculum (1976) and of the first master's in software engineering curriculum (1978). He has been an editor of a number of prestigious software-oriented journals. During his tenure, he has authored over 300 published works and has directed many PhD students. Complementing his work at GMU, he has worked on many international technology and relief projects in various countries and made many life-long international friendships. His past students are the most important record of his technical achievements.