Web-Based Supply Chain Management and Digital Signal Processing: Methods for Effective Information Administration and Transmission

Manjunath Ramachandra (MSR School of Advanced Studies, India)
Indexed In: SCOPUS
Release Date: October, 2009|Copyright: © 2010 |Pages: 316|DOI: 10.4018/978-1-60566-888-8
ISBN13: 9781605668888|ISBN10: 1605668885|EISBN13: 9781605668895|ISBN13 Softcover: 9781616923990

Description

Digital signal processing is a growing area of study that now extends into all areas of organizational life, from consumer products to database management.

Web-Based Supply Chain Management and Digital Signal Processing: Methods for Effective Information Administration and Transmission presents trends and techniques for successful intelligent decision-making and the transfer of products through digital signal processing. A defining collection of field advancements, this publication provides the latest and most complete research in supply chain management, with examples and case studies useful for those involved at various levels of management.

Topics Covered

The many academic areas covered in this publication include, but are not limited to:

  • Applications of supply chain management
  • Controller based supply chain management
  • Conventional supply chain management
  • Data management in supply chain
  • Digital Signal Processing
  • Integration of multiple supply chains
  • Manipulation of supply chains
  • Security in supply chain management
  • Supply chain lifecycle
  • Tools for supply chain management
  • Web Services
  • Web-based supply chain management

Reviews and Testimonials

The objective of this book is to provide the latest and most complete information with examples and case studies. The entire flow of information handling along the supply chain, from retrieval to application, is covered.

– Manjunath Ramachandra, MSR School of Advanced Studies, India


Preface

What is information management?

Information management basically refers to the administration of information, including its acquisition, storage, and retrieval. In the process, the information is modified and cleaned. James Robertson writes in his article, 10 Principles of Effective Information Management:

“'Information management' is an umbrella term that encompasses all the systems and processes within an organization for the creation and use of corporate information.” Professor T.D. Wilson, Högskolan i Borås, Sweden, defined information management as: “The application of management principles to the acquisition, organization, control, dissemination and use of information relevant to the effective operation of organizations of all kinds.”

Information Management is the administration of information, its usage and transmission, and the application of theories and techniques of information science to create, modify, or improve information handling systems.

The objective of this book is to make effective management of the information system accessible to the reader in one place. The book is built around the theme of managing authorized information from a single facility. The present trend is to automate information management, which calls for effective electronic techniques involving DSP for transduction, intelligent decisions, and so on. The book provides the latest and most complete information with examples and case studies. The entire flow of information handling along the supply chain, from retrieval to application, is covered.

The need for signal processing

Digital signal processing (DSP) is a growing area that has extended into all walks of life, from consumer products to database management. This book covers the application of DSP to effective information management. Although it is possible to digest the essence of the book at the application layer, or as an end user, without prior exposure to DSP, it is recommended to get an overview of topics such as statistical (Bayesian) estimators and neural networks before implementing the solutions or architecting the designs. References are provided in each chapter as appropriate.

Data as such is useless unless it carries some information. Interestingly, data carries information only when someone needs and uses it. All-round effort is therefore required for the proper handling of data so that it bears the relevant information. In this book, sophisticated techniques borrowed from signal processing are applied to the effective management of information.

Importance of the supply chain model

A supply chain is a self-contained, self-sustained system. It is a reflection of an inequality wherein a component with surplus “commodity” transfers a portion of it to a component that requires it. It is this demand for the commodity, and the supply of it, that ensures the stability of the supply chain.

Innumerable examples exist of both natural and artificial supply chains, depending on what the components are and what is transferred across them. They span contrasting players: farmers growing wheat and bakers; the steel industry and the automobile industry; the forest and the tiger; and, finally, authors and readers, bound together in a fine unseen fabric. The last example is a case of information flow from author to reader. This book looks into this sustained, unseen relation and provides a better understanding of, and alternative mechanisms for, information management based on signal processing and web technologies. In this book, the information life cycle is linked to information supply chain management.

Information Technology is the acquisition, processing, storage, manipulation and dissemination of digital information by computing or telecommunications or a combination of both.

The book also provides insight into the information supply chain: the end-to-end flow of information in an organization. Although enterprise resource planning centralizes transaction data and ensures transaction consistency, it was never designed for semantic persistence. However, because master data is embodied in transactions, it is not possible to achieve enterprise information management without Enterprise Resource Planning (ERP). For the same reason, the book contains examples from the enterprise information supply chain, though no assumption has been made about the nature of the enterprise.

All the chapters are interlinked in one way or another, which provides continuity and comfort for the reader. The book is built around the concepts developed in the introductory chapter, where solutions based on differential feedback are introduced. Though there is no restriction on the sequence to follow after the introductory chapter, it is advisable to cover the first section before getting into the chapters of the other sections.

The book addresses the interests of a wide range of readers, from new entrants to the field up to advanced scholars pursuing research or courses in DSP, IT, and management. The intended readership, in no particular order, includes but is not limited to:

  • Industries, executives, and managers of large organizations: The book gives guidelines and a methodology for managing the information supply chain in large organizations such as IT industries, libraries, e-learning, banking, information systems, financial systems, bioinformatics, multimedia, fleet management, manufacturing industries, storage systems, and avionics. Information supply chain management strategies have been used successfully in applications as diverse as hydraulic plant control in China.
  • Academics, business school students, and researchers in the field of management: Nowadays there is an increasing demand from all quarters for information supply chain management on one integrated platform. Courses on information supply chain management are taught in leading business schools. These readers will find the new approaches described in this book extremely useful, and the techniques may be included in the curriculum.
  • E-libraries and e-publishers: This book also looks at software aspects such as hyperdocuments and XML. Hyperdocuments are nonlinear documents made of linked nodes, where the content of each node is text, pictures, sound, or some mix of these in a multimedia hyperdocument. This makes a convenient and promising organization for the rich classes of document sets used in electronic encyclopedias, computer-aided learning (e-libraries), software documentation, multi-author editing, e-publishing, and so on. The book will be extremely useful in these areas.

In this book, novel features such as knowledge-based expert systems are introduced. Advanced results from research in other areas of DSP are projected to be useful for information supply chain management. Adequate effort has been made to bring the different dimensions of management onto one platform, with numerous live examples and case studies. Topics such as hierarchical organization, feedback neural networks, and shifted feedback are covered uniquely in this book.

The available literature and references in the information management area have been substantially adapted to fit the book, and numerous examples are provided where appropriate.

A good number of papers are available on supply chain management. Some of them address signal processing techniques, with restricted scope, for information management tasks such as data storage, retrieval, and data mining, and they are scattered across different journals. The strong points from these papers have been included in the book.

A major part of the book is devoted to data management. A chapter at the end is reserved for knowledge management systems. Data mining concepts are addressed along with information storage. A separate chapter explains machine learning, XML, hypermedia engineering, and related topics. Control, access, and sharing of metadata in one centralized location are also explained.

The book provides insight into the various activities taking place along the information supply chain. No assumption has been made about the size or nature of the information supply chain. The first step in dealing with data or information is to acquire it and represent it in a way compatible with subsequent usage. The acquired information is subjected to cleaning to ensure data quality and is represented in such a way that it is easy to store and retrieve. This calls for a brief description of the data, which is provided through metadata.

In an enterprise, data is stored in a distributed environment. For effective usage, the requisite data from different units has to be brought together and integrated. To find the required data in the archive, efficient search algorithms are needed, and the efficiency of search once again depends on how the data is organized and described with metadata. Once the requisite data or information is available from the search engine, it has to reach the recipient. Two aspects play a major role during the transmission of this data: service quality and security. Service quality ensures that right-time data is available from heterogeneous sources in the distributed environment, while security protocols ensure that only the intended recipient gets the data.

To use the data effectively and to extract useful information from it, various tools are required. These tools help the end user, the human being, identify the requisite patterns in the vast ocean of data and spread the knowledge across the organization.

All these transactions are mapped onto different chapters of this book. In a way, all the chapters are interlinked and represent the information flow along the supply chain.

In addition to the introduction, there are five parts in the book, reflecting the life cycle of the information supply chain. The first part provides an introduction to and insight into the subject matter. The activities of information supply chain management, including information acquisition, storage, retrieval, and transmission, are covered in the rest of the chapters.

In each chapter, a substantial part is devoted to the issues and solutions of the subject matter, followed by future trends and expected changes. These sections are written with the title of the book in mind. The solutions provided are intended to solve the issues highlighted, problems typically encountered in organizations involved in the life cycle of information. Fine-tuning of these solutions may be required to adapt them to a specific application. The future trends are intended to provide insight into upcoming technologies and to help in ramping up accordingly; they also provide good topics for research, and a spectrum of products and projects may be derived from these tips.

The other half of each chapter introduces the topic and provides adequate background material and references for understanding the issues faced and the emerging trends.

Information supply chains have existed for ages in various forms and shapes. The multistoried libraries of ancient India and China, archiving palm-leaf books, stand as a good example of an information system. The issues in handling such a system bear some commonalities with the issues in a present-day digital library, although the solutions are different. Information management can still exist without signal processing; other approaches, such as web technology, can provide the solution. For example, web technology can provide the distributed storage medium required to archive large volumes of data. Signal processing techniques, however, insist on using small memory and keeping the data in compressed form to reduce data transfer latencies. The signal processing solutions need not be seen as a separate technique: they provide the requisite algorithms and designs to aid the existing techniques and result in substantial savings of time and cost. The logical extension of any information management technique points to digital signal processing.

Signal processing technology is mature and advanced, and is meant to handle problems in a simple way. The mathematics and reasoning behind each solution may be highly complicated, but the resulting solution is simple and assured, and often unattainable by other means. The proposed solutions and the future trends make use of digital signal processing in one way or another. Though other solutions are feasible and in place, the signal processing based solutions provide a list of advantages over the others, as mentioned below:

DSP-based solutions and algorithms may be implemented directly on dedicated processors. As the cost of these processors comes down, the solutions and the trends discussed in each chapter make extensive use of DSP algorithms. A typical example is compression technology, where the algorithms use complicated mathematics and deliver attractive results; but for signal processing, such a solution would not have come up.

The advanced signal processing techniques presented in this book are unique of their kind. They provide a broad opportunity to explore new ideas and novel techniques for venturing into the competitive market. They apply to most of the commercially available software, tools, and techniques used for resource planning, data acquisition, processing, cleansing, archiving, storage, security, rights management, transmission, servicing, searching, knowledge management, data mining, pattern recognition, artificial intelligence, and expert systems. The solutions span the wireless, mobile, satellite, and fixed infrastructure domains.

Another attraction of signal processing solutions is their modularity and reusability. The solution for one enterprise can be customized and used in another; that is how otherwise unrelated organizations are linked today. The modular approach in the design, and in the transducers, permits applying signal processing tools and techniques at any point of the flow. The generic nature of the solutions permits mass production as reusable components. It is interesting to note that most of the intellectual property (IP) originating today involves signal processing in one way or another.

The generic nature of signal processing goes back to the initial stages of design. The mathematical model describing a problem, irrespective of the domain or organization, can be implemented directly with signal processing techniques. Hence the signal processing algorithms work on a problem in economics, such as a financial time series, much the same as they do on a problem in communications, such as data compression. The power of signal processing therefore depends on the availability of a strong, stable model and on the representation of problems in the form of mathematical models. The first chapter of this book extensively covers data representation through a new modeling technique based on a feedback from the output to the input.
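
To make this domain independence concrete, the sketch below (a hedged illustration; the data, model order, and function names are invented, not taken from the book) applies the same least-squares autoregressive fit, unchanged, to a financial time series and to a sampled signal.

```python
import numpy as np

def fit_ar(series, order=2):
    """Least-squares fit of an AR(order) model: x[t] ~ sum_k a_k * x[t-k]."""
    x = np.asarray(series, dtype=float)
    # Build the lagged regressor matrix and the target vector.
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    X, y = np.array(rows), x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# The identical routine serves two very different domains.
prices = [100, 101, 103, 102, 105, 107, 106, 109]  # economics: a price series
samples = np.sin(0.3 * np.arange(50))              # communications: a sampled tone
print(fit_ar(prices))
print(fit_ar(samples))
```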

Meeting the customer requirements with quality, on time, places stringent requirements on the flow of data (information) and control signals over the information supply chain. To make this happen:

    1. The supplier has to pump in the information at the required rate.
    2. The security of the information must be ensured until it reaches the customer premises.
    3. Only the authorized persons or organizations should receive the information; it must not fall into the wrong hands. Resending the information to the intended customers will not nullify the consequences of a leak.
    4. The information should reach the customer within the agreed time.
    5. Transmission corruption is to be strictly avoided,
and much more. These points are discussed in the subsequent chapters; a small sketch of the first point, rate control, follows below.
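
As a minimal sketch of the first point, rate control, assuming a simple token-bucket shaper (the class and its parameters are illustrative, not taken from the book):

```python
import time

class TokenBucket:
    """Release data at no more than `rate` units/second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self, units):
        now = time.monotonic()
        # Refill the tokens earned since the last check, up to the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if units <= self.tokens:
            self.tokens -= units
            return True
        return False

bucket = TokenBucket(rate=1000, capacity=2000)  # e.g. 1000 records/s, bursts of 2000
print(bucket.allow(500))  # True: within the agreed rate
```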

The first half of the book explains the extraction, transformation, and loading of data from multiple platforms and sources, which effectively covers information administration. The technology for the retrieval and transmission of the information is provided in the second half of the book.

A separate introductory chapter is provided to bind the four important concepts of information administration, the supply chain, signal processing, and web technology.

The first section is about advanced information technology. It may be noted that the intention of this book is not to cover the breadth and depth of information theory; only the relevant and powerful concepts required for the information supply chain are highlighted. The new concept of differential feedback representation of information is introduced here. The focus is on information administration and management, and the section provides the framework for the rest of the book. This section has four chapters.

Chapter 1 gives the basics of information modeling and measurement. A good methodology is required for the scientific measurement of information; it is needed to ascertain the value of the information that gets transported or sold. The important feedback neural network model for information is presented. The model comes with a host of interesting properties that are used throughout the rest of the book.
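
The book develops its own formulation in the chapter itself; purely as a hedged sketch of the general idea of feeding a network's output back to its input, with the sizes, random weights, and nonlinearity below chosen arbitrarily for illustration:

```python
import numpy as np

def feedback_step(x, y_prev, W_in, W_fb, b):
    """One step of a network whose previous output is fed back as an extra input."""
    z = W_in @ x + W_fb @ y_prev + b  # combine the fresh input with the fed-back output
    return np.tanh(z)                 # simple squashing nonlinearity

rng = np.random.default_rng(0)
W_in, W_fb, b = rng.normal(size=(3, 4)), rng.normal(size=(3, 3)), np.zeros(3)

y = np.zeros(3)                       # no output to feed back yet
for x in rng.normal(size=(5, 4)):     # run a short input sequence through the loop
    y = feedback_step(x, y, W_in, W_fb, b)
print(y)
```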

Chapter 2 provides the basics of information representation. Data or information needs to be put in an appropriate form that the end user can understand; between the source and the destination, it has to be put in a form convenient for storage and transmission. The chapter explains the syntax and semantics of information representation. A raw stream of data as such carries no meaning; it attains meaning only when the user is able to interpret it. Depending on the context and the rendering device, a string takes on meaning when fed to the appropriate application. Ontology plays an important role in the interpretation of information depending on the context. A raw data stream is called information only when a certain meaning is derivable from it.
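
As a small illustration of that context dependence (the byte string below is arbitrary), the same raw bytes yield different information depending on the application that interprets them:

```python
import datetime

raw = b"1700000000"

# Interpreted as text: a ten-character string.
print(raw.decode("ascii"))

# Interpreted as a number and then as a Unix timestamp: a moment in time.
ts = int(raw)
print(datetime.datetime.fromtimestamp(ts, datetime.timezone.utc))
# 2023-11-14 22:13:20+00:00 -- the same bytes, given meaning by context
```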

Chapter 3 introduces information systems. An information system is basically a self-sustained framework involving the entire life cycle of information. The activities include acquisition, processing, storage, and retrieval, and each of these activities forms a separate section of the book. There are various examples of information systems used in day-to-day life, including the World Wide Web, the data mart, and the data warehouse.

Chapter 4 is about information management, echoing part of the title of this book, and introduces the material covered in the book. Information management includes supervising all the activities of the information system. Specifically, it includes identifying information needs, acquiring information, organizing and storing information, developing information products and services, distributing information, and using information.

The second section is about the acquisition and processing of information. The acquired data has to be appropriately cleaned before being processed; only then can the data be called information. Associated with the information is the metadata, which has to provide adequate meaning and relevance for the information.

Chapter 5 covers information acquisition and presentation techniques. Care has to be taken to ensure optimal data acquisition: too much or too little data results in confusion along the supply chain, so an optimal amount of data has to be collected. The user interface couples the end user, the human being, with the machine. To make the interaction highly effective, the interface has to be specially designed. This also indicates the need for an intelligent machine, addressed in a different chapter.

Chapter 6 explains the processing techniques for the data. The data acquired from the sources generally comes mixed with noise, under- or over-sampled, or distorted. To make it useful and meaningful, it is first cleaned and then subjected to transformations. Quality constraints are imposed on the cleanliness of the data to make sure that it meets industry standards. Some of the useful transformations required in industry are the Fourier transform, averaging, curve fitting, and so on. The transformed data is then stored.
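
As a hedged sketch of two of the transformations named above (the test signal, noise level, and window length are invented), a moving average suppresses the noise and a Fourier transform exposes the dominant frequency:

```python
import numpy as np

t = np.linspace(0, 1, 500, endpoint=False)
noisy = np.sin(2 * np.pi * 5 * t) + 0.4 * np.random.default_rng(1).normal(size=t.size)

# Averaging: a 10-sample moving average to suppress the noise.
window = 10
smooth = np.convolve(noisy, np.ones(window) / window, mode="same")
print("first smoothed samples:", np.round(smooth[:3], 2))

# Fourier transform: locate the strongest frequency component.
spectrum = np.abs(np.fft.rfft(noisy))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print("dominant frequency ~", freqs[spectrum[1:].argmax() + 1], "Hz")  # ~5 Hz
```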

Chapter 7 explains compression techniques in detail. The information acquired from the sources contains redundancies; by removing these redundancies, it is possible to store the data effectively in minimum space. The concepts of compression are derived from Shannon's information theory and coding. Various compression techniques are explained.
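
A minimal illustration of the Shannon connection (the sample string is arbitrary, and the general-purpose zlib coder stands in for the chapter's techniques): the entropy bounds the bits per symbol a lossless coder needs, and redundancy is what compression removes:

```python
import math
import zlib
from collections import Counter

data = b"abababababababababababab"  # a highly redundant sample

# Shannon entropy in bits per symbol: H = -sum(p * log2(p)).
counts = Counter(data)
H = -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())
print(f"entropy ~ {H:.2f} bits/symbol")  # ~1.0 for this two-symbol source

compressed = zlib.compress(data)
print(len(data), "->", len(compressed), "bytes")  # the redundancy is squeezed out
```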

Chapter 8 is about metadata. Metadata is information about the information: it binds the data to its meaning. A good metadata description of the data is extremely important in operations such as searching. The different metadata schemes are explained.
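
As a hedged example, borrowing the widely used Dublin Core element names rather than any scheme specific to the chapter (the field values are invented), a minimal metadata record lets a search operate without touching the document itself:

```python
# A minimal Dublin Core-style record for one stored document (illustrative values).
record = {
    "dc:title": "Quarterly demand forecast",
    "dc:creator": "Planning department",
    "dc:date": "2009-10-01",
    "dc:format": "application/pdf",
    "dc:subject": ["supply chain", "forecasting"],
}

def matches(rec, keyword):
    # The search consults only the metadata, never the (possibly huge) document.
    return any(keyword in subject for subject in rec["dc:subject"])

print(matches(record, "forecast"))  # True
```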

The third section is about the storage of information. Information retains its value only when it is possible to capture it. The storage of information has grown into a big industry with billions of dollars in transactions. In this section, the data storage and archival mechanisms are explained. The stored data needs to be maintained securely, so security and rights concepts are fused in while storing the data. The integration of heterogeneous data is covered in an additional chapter in this section.

Chapter 9 is about data integration, which is required at various levels. The data along the supply chain is organized in different levels of hierarchy and at various levels of abstraction. Integration of the data has to consider service quality. The service-oriented protocol and its impact on data integration are discussed.

Chapter 10 deals with the storage mechanisms practiced in various enterprises. Planning is required for the storage and subsequent utilization of the data. Examples of the data warehouse and the data mart are provided, and the mechanism for handling heterogeneous data is explained.

Chapter 11 is about the archival of information. Backup and archiving are important activities in any enterprise. The various standards and mechanisms for digital archiving are discussed.

Chapter 12 explains the access and security mechanisms associated with the stored data. In the digital era, it is possible to assign permissions selectively, in the sense that a particular target group can use only the intended section of the data. The rights are fused in as part of the content creation. The different rights management standards are discussed.
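
A toy sketch of such selective permissions (the sections, groups, and policy are invented): each portion of the content carries its own access list, so a target group can use only its intended section:

```python
# Illustrative access-control lists: one per section of the content.
acl = {
    "summary":    {"staff", "partners", "public"},
    "financials": {"staff"},
}

def can_read(section, group):
    return group in acl.get(section, set())

print(can_read("summary", "public"))     # True
print(can_read("financials", "public"))  # False: permission assigned selectively
```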

The fourth part is about the retrieval of the information after it has been acquired and stored; the remaining activities exist for the effective usage of the information. Raw data comes to be called information when some meaning is found in it. Information, together with a collection of similar data in a database, comes to be called knowledge, and such a database is called an expert system when it is intended to provide answers to user queries in a specific field. As one moves from data to information, and from information to knowledge, there is a change in the abstraction and visibility of things.

The usage of this huge knowledge base calls for the assistance of intelligent elements. Automated intelligence is required to assist the user in identifying the requisite hidden patterns in the data, in quantitative prediction based on previous history or knowledge, and so on. Tools are also required for searching the knowledge or information that is useful to a large section of people. Knowledge management addresses all these issues, including the acquisition, interpretation, deployment, and searching of knowledge.

Chapter 13 deals with data retrieval techniques and standards and provides good input for search engines. For searching the data, various algorithms, including parallelization of the search and fusion of the results, are discussed.
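
A minimal sketch of parallelized search with fusion of the results, assuming the archive is split into independently searchable shards (the shard contents and the ranking rule are invented):

```python
from concurrent.futures import ThreadPoolExecutor

# The archive split into independently searchable shards (illustrative data).
shards = [
    ["supply chain basics", "metadata schemes"],
    ["signal processing", "chain of custody"],
    ["web services", "supply chain security"],
]

def search_shard(shard, term):
    return [doc for doc in shard if term in doc]

def parallel_search(term):
    with ThreadPoolExecutor() as pool:
        partials = pool.map(search_shard, shards, [term] * len(shards))
    # Fusion: merge the partial result lists into one ranked list (here, by length).
    return sorted((doc for part in partials for doc in part), key=len)

print(parallel_search("chain"))
```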

Chapter 14 explains how machine intelligence can be used to identify the patterns hidden in the data. These patterns are generally invisible to a human user but help search engines retrieve the data.

Chapter 15 is about data mining. Artificial neural networks are extremely useful for learning known patterns and then giving inferences when exposed to unknown patterns.
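
As a hedged toy example of that learn-then-infer behavior (the training data, learning rate, and epoch count are invented), a single perceptron trained on known patterns classifies a pattern it has never seen:

```python
import numpy as np

# Known patterns (an AND-like truth table) and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                       # classic perceptron updates
    for xi, target in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

unseen = np.array([0.9, 0.8])             # a pattern not in the training set
print(float(w @ unseen + b > 0))          # 1.0: inferred to match the learned class
```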

Chapter 16 gives an overview of a knowledge-base system for providing the required information to the end user. Building and effectively using such a system is challenging and calls for machine intelligence. The inferences and interpretations of such a system have far-reaching effects.

Chapter 17 is about knowledge management, which includes the effective acquisition, processing, storage, retrieval, and dissemination of knowledge. It follows cycles similar to those of information management, but at a different level of abstraction; the tools and techniques used are somewhat different and call for more intelligence. Knowledge-based systems provide the required expertise for end customers to interpret the data and aid in the retrieval of the required information.

The last part is about the transmission of data over the supply chain. The stored data has to reach the end user to serve its purpose, which calls for effective and secure means of transmission. Enterprises require right-time data rather than real-time data, and good data search algorithms are needed to meet this requirement. Weight has to be given to real-time data transmission with quality: the data transmission has to adhere to the agreed quality of service. Another factor to be considered during data transmission is security. The communication channels are prone to attacks from intruders, so adequate security measures must be in place during the transmission of the data.

Chapter 18 is about data streaming. Streaming is a special mode of data transmission adhering to the streaming protocols. Streaming is used to transfer live action, be it a live sport or a stored advertisement. Its attraction lies in simplified storage constraints and the live display of content. The attractive feature of streaming over other data transfers, such as FTP, is that the entire data need not be stored before use: the streamed data may be decoded and displayed on the fly without waiting for the entire transfer to complete. Small buffering may be required to provide a smooth output.
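
A minimal sketch of that decode-on-the-fly behavior (the chunk source and buffer size are invented): playout starts once a small buffer fills, without ever holding the whole stream:

```python
from collections import deque

def chunks():
    """Stand-in for a network stream delivering the content piecewise."""
    for i in range(10):
        yield f"frame-{i}"

buffer, BUFFER_TARGET = deque(), 3

for chunk in chunks():
    buffer.append(chunk)
    if len(buffer) >= BUFFER_TARGET:     # enough buffered for smooth playout
        print("play", buffer.popleft())  # display on the fly while still receiving

while buffer:                            # drain what remains at the end of the stream
    print("play", buffer.popleft())
```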

Chapter 19 covers data transmission under service quality constraints. In an enterprise, the data sources are distributed and carry heterogeneous data. The priorities and data availability constraints of each source differ, and they all contend for common resources, such as buffers, all along the way. This calls for an intelligent element to schedule the resources so that they are utilized effectively and the service quality of the transmitted data is met.
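
As a hedged sketch of such scheduling (the sources and priorities are invented), a priority queue drains the contending sources so that the most urgent traffic takes the shared link first:

```python
import heapq

# (priority, source, payload): a lower number means more urgent (illustrative values).
pending = [
    (2, "sensor-archive", "bulk history"),
    (0, "alarms", "threshold breach"),
    (1, "dashboard", "live metrics"),
]
heapq.heapify(pending)

while pending:  # the shared link serves the most urgent source first
    prio, source, payload = heapq.heappop(pending)
    print(f"transmit p{prio} {source}: {payload}")
```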

Chapter 20 is about the secure transmission of data. The different data encryption standards are discussed, and suggestions are provided to speed up the process so that the timing constraints of the service quality are met.
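
As a minimal sketch, using the Fernet recipe from the third-party Python cryptography package as a stand-in for the standards the chapter discusses (the plaintext is invented):

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret; in practice exchanged over a secure channel
cipher = Fernet(key)

token = cipher.encrypt(b"right-time demand figures")  # what actually crosses the channel
print(cipher.decrypt(token))  # only a holder of the key recovers the data
```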

Author(s)/Editor(s) Biography

Manjunath Ramachandra currently works at Philips, Bangalore. He has about 14 years of industrial and academic experience in the overlapping verticals of signal processing, including multimedia, information and supply chain management, wireless/mobile, and networking. His research in this field has led to a PhD thesis, about 75 international publications, and patent disclosures. He has chaired about 10 international conferences and figures in Marquis Who's Who 2008. His areas of interest include signal processing, database architecture, and networking.
