AI-Based Data Analytics for E-Rendering Frameworks

Manisha Verma, Jagendra Singh
Copyright: © 2024 | Pages: 16
DOI: 10.4018/978-1-6684-9285-7.ch012

Abstract

Big data and AI/ML pipeline models provide a sound basis for analyzing and selecting technical architectures for big data and AI systems. Experience from many big data projects shows that projects often use similar architectural models, differing only in the technological components selected within the same diagram. The Big Data and AI/ML Pipeline framework is used to describe the pipeline stages of big data and AI/ML projects and supports benchmark classification. It comprises four pipeline stages: data acquisition/collection and storage; data preparation and storage; data analytics with artificial intelligence/machine learning; and performance and interaction, including data visualization, user interaction, and API access. The authors have also created a toolkit to help identify and leverage existing models across the different technical areas and data types within the framework.
Chapter Preview

1. Introduction

Artificial intelligence systems are machine-based systems with varying degrees of autonomy that can make predictions, recommendations, or decisions for a set of human-defined goals using large sets of alternative data sources and data analysis, referred to as “big data”. Such data-driven ML models are able to learn from datasets for “self-improvement” without explicit human programming. There is a great deal of information on why and how technical benchmarking should be done for specific business and analytical processes, but there is a lack of objective, evidence-based methods to measure the correlation between technology models. When multiple benchmarking tools are available for a particular need, there is even less evidence of how those tools compare and how their results might affect business goals. The DataBench project filled this gap by designing a framework that helps big data technology (BDT) developers achieve excellence and continuously improve their performance by measuring their technology development activity against metrics with high business relevance. In this way, it bridges the gap between business and technical benchmarking of big data and analytics applications.

This chapter presents the Big Data and AI/ML Pipeline framework, which supports technology analysis and benchmarking across the horizontal and vertical technical priorities of the European strategic research and innovation agenda on big data value, as well as the cross-sectorial priorities of the research agenda for strategy, innovation, and deployment in the fields of artificial intelligence, data, and robotics. In the following sections, we focus on DataBench's approach to technical benchmarking, which uses the Big Data and AI/ML Pipeline as an overall framework and is further categorized into the different areas of the Big Data Value (BDV) benchmark model. The DataBench framework is accompanied by a handbook and a toolkit intended to support European industrial users and technology developers who need to make informed decisions about investments in big data technologies while optimizing business and technical performance. The handbook presents and explains the main reference models used for technical benchmarking.
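The four pipeline stages can be pictured as composable processing steps. The following Python sketch is purely illustrative: the class, function names, and toy data are hypothetical and not part of the DataBench framework itself; it only shows how the stages (acquisition, preparation, AI/ML analytics, and interaction) chain together.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List


@dataclass
class Pipeline:
    """Hypothetical chain of pipeline stages, applied in order."""
    steps: List[Callable[[Any], Any]] = field(default_factory=list)

    def add_stage(self, step: Callable[[Any], Any]) -> "Pipeline":
        self.steps.append(step)
        return self

    def run(self, data: Any) -> Any:
        for step in self.steps:
            data = step(data)
        return data


# Stage 1: data acquisition/collection and storage
def acquire(source):
    return [record for record in source]

# Stage 2: data preparation and storage (cleaning, normalization)
def prepare(records):
    return [r.strip().lower() for r in records if r.strip()]

# Stage 3: data analytics with AI/ML (placeholder "model": score by length)
def analyze(records):
    return {r: len(r) for r in records}

# Stage 4: performance and interaction (e.g., visualization or API output)
def present(scores):
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


pipeline = (
    Pipeline()
    .add_stage(acquire)
    .add_stage(prepare)
    .add_stage(analyze)
    .add_stage(present)
)
result = pipeline.run(["  Alpha ", "beta", "", "gamma  "])
print(result)
```

The point of the sketch is architectural: projects differ mainly in which concrete component fills each stage, while the stage sequence stays the same.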

However, certain basic tenets of big data make it simpler to situate within an e-rendering framework:

  • It refers to a massive amount of data that keeps on growing exponentially with time.

  • It is so voluminous that it cannot be processed or analyzed using conventional data processing techniques.

  • It includes data mining, data storage, data analysis, data sharing, and data visualization.

  • The term is an all-comprehensive one including data, data frameworks, along with the tools and techniques used to process and analyze the data.

The toolkit is a software tool that provides access to benchmarking services and helps stakeholders:

  • (1) identify the use cases where they can achieve the greatest business value and return on investment, so they can prioritize their investments;

  • (2) select the best technical benchmark to measure the performance of the chosen technical solution; and

  • (3) assess their business performance by comparing their business impact against that of competitors in the same industry and of similar company size, so they can revise their decisions or their organization if they find they are falling short of industry median benchmarks.

The services provided by the toolkit and the handbook therefore support users in all phases of their journey (before, during, and in the ex-post evaluation of their BDT investment), from both a technical and a commercial point of view. In the following section, we present the Big Data and AI/ML Pipeline framework, used to describe the pipeline stages in big data and AI projects and to support benchmark classification. The framework also serves as a basis for demonstrating commonalities between big data projects, such as those in the Big Data Value Public-Private Partnership (BDV PPP) programme (De Amorim RC., 2012). In this chapter, we present categorizations of the architectural blueprints for implementing the different phases of the Big Data and AI/ML Pipeline, differentiated by processing type (batch, real-time, interactive), main data type, and access/interaction type (machine access via an API, or human interaction), and show how existing big data and AI/ML benchmarks map onto the pipeline. These categorizations provide the basis for selecting pipeline specializations that meet the needs of different projects and instances.
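Classifying benchmarks against the pipeline by stage and processing type amounts to a lookup table. The Python sketch below is purely illustrative: the benchmark names and the mapping itself are invented for demonstration and do not reproduce DataBench's actual classification.

```python
from typing import Dict, List, Tuple

# Hypothetical classification: (pipeline stage, processing type) -> benchmarks.
# All benchmark names here are made up for illustration.
CLASSIFICATION: Dict[Tuple[str, str], List[str]] = {
    ("data_preparation", "batch"): ["ExampleSortBench"],
    ("ml_analytics", "batch"): ["ExampleTrainBench"],
    ("ml_analytics", "real-time"): ["ExampleStreamBench"],
    ("interaction", "interactive"): ["ExampleQueryBench"],
}


def benchmarks_for(stage: str, processing: str) -> List[str]:
    """Return candidate benchmarks for a pipeline stage and processing type."""
    return CLASSIFICATION.get((stage, processing), [])


print(benchmarks_for("ml_analytics", "real-time"))  # ['ExampleStreamBench']
```

A project would consult such a table after deciding its pipeline specialization (e.g., real-time ML analytics) to shortlist the benchmarks relevant to that choice.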
