InfoScipedia: A Free Service of IGI Global Publishing House
Below please find a list of definitions for the term that you selected, drawn from multiple scholarly research resources.

What is CUDA?

Handbook of Research on Advancements of Contactless Technology and Service Innovation in Library and Information Science
CUDA is a parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general-purpose processing.
Published in Chapter:
Building a Chatbot for Libraries
Iman Khamis (Northwestern University, Qatar)
DOI: 10.4018/978-1-6684-7693-2.ch015
Abstract
Natural language processing (NLP) is an important field because it makes human language understandable to machines and adds numeric structure to the unstructured data needed for applications such as speech recognition and text analytics. The value of improving speech recognition is not limited to individual applications that make life easier; the technology also matters to many businesses, where it can provide customer service benefits and enrich the customer experience. Voice recognition technologies have proven to be highly secure, and even banks use them to authorize access to individuals' accounts. This chapter describes the implementation of a chatbot that uses different machine learning algorithms to answer users' questions and provide basic customer service support.
More Results
Computer Architectures and Programming Models: How to Exploit Parallelism
Compute Unified Device Architecture (CUDA) is a low-level parallel programming model and application programming interface (API) created by NVIDIA that allows CUDA-enabled NVIDIA graphics processing units (GPUs) to be used for general-purpose processing.
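As a brief illustration of what the runtime side of this API looks like, the sketch below queries the system for CUDA-enabled NVIDIA GPUs and prints each device's name and compute capability. It is a minimal example added for clarity rather than code from the cited chapter, and it assumes the CUDA toolkit is installed so the file can be compiled with nvcc.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Ask the CUDA runtime how many CUDA-capable devices are visible.
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        std::printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    if (deviceCount == 0) {
        std::printf("No CUDA-enabled GPU found.\n");
        return 1;
    }
    for (int i = 0; i < deviceCount; ++i) {
        // Retrieve each device's properties, including its compute capability.
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s (compute capability %d.%d)\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}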
Practical Examples of Automated Development of Efficient Parallel Programs
A parallel computing platform and application programming interface (API) model created by NVIDIA. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU).
Data Streaming Processing Window Joined With Graphics Processing Units (GPUs)
A parallel computing platform and application programming interface (API) model created by NVIDIA. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing, an approach termed GPGPU (general-purpose computing on graphics processing units; Wikipedia, 2018b).
Algebra-Dynamic Models for CPU- and GPU-Parallel Program Design and the Model of Auto-Tuning
A parallel computing platform and application programming interface (API) model created by NVIDIA. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU).
Stream Processing of a Neural Classifier II
A GPGPU technology that allows a programmer to use the C programming language to code algorithms for execution on the GPU. CUDA requires an NVIDIA GPU and special stream processing drivers.
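To make this definition concrete, here is a minimal sketch, not drawn from the cited chapter, of a CUDA C kernel and the host code that launches it on the GPU. The vectorAdd name, the problem size, and the block size of 256 threads are illustrative choices; the code assumes a CUDA-enabled NVIDIA GPU and the nvcc compiler from the CUDA toolkit.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel written in CUDA C: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                  // one million elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Host-side input and output buffers.
    float *hA = (float *)std::malloc(bytes);
    float *hB = (float *)std::malloc(bytes);
    float *hC = (float *)std::malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device buffers allocated and filled through the CUDA runtime API.
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(dA, dB, dC, n);
    cudaDeviceSynchronize();

    // Copy the result back and check one element (1.0 + 2.0 = 3.0).
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    std::printf("c[0] = %.1f (expected 3.0)\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    std::free(hA); std::free(hB); std::free(hC);
    return 0;
}

Compiled with, for example, nvcc vector_add.cu -o vector_add, each thread computes one output element; this data-parallel launch is the kind of general-purpose GPU execution the definitions above describe.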