Serverless Computing: Unveiling the Future of Cloud Technology

Nidhi Niraj Worah, Megharani Patil
Copyright: © 2024 |Pages: 21
DOI: 10.4018/979-8-3693-1682-5.ch003


This chapter serves as a comprehensive initiation into serverless computing, a revolutionary paradigm in cloud computing. It begins by elucidating the foundational concepts, breaking down its architecture that simplifies development through modular functions triggered by specific events. The narrative unfolds, highlighting the myriad benefits, such as enhanced efficiency, seamless scalability, and liberating developers from intricate server management. A comparative analysis with traditional cloud computing underscores serverless computing's unique attributes. However, the chapter maintains balance by addressing challenges like cold start latency, execution duration limits, and potential vendor lock-in. The exploration concludes by showcasing real-world applications in domains like real-time data processing, backend APIs, batch processing, image and video processing, and chatbots. Offering a nuanced understanding of serverless computing, this chapter stands as an invaluable resource for readers exploring its advantages, challenges, and diverse applications.
Chapter Preview

2. Purpose Of Serverless Computing

Serverless computing aims to simplify application development and operation by relieving developers of the burden of managing servers. In traditional computing, developers must configure and manage servers to run their applications, which can be difficult and time-consuming. With serverless computing, the cloud provider handles all server-related responsibilities, such as resource allocation, scaling, and maintenance, allowing developers to concentrate on writing application code. It is similar to renting a fully staffed kitchen: you can prepare meals without having to purchase, assemble, or clean any equipment. With this approach, developers can work more efficiently and scale their applications, because they pay only for the computing power they actually use while their code executes in response to specific events or requests (Wen, 2017).
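The event-driven, pay-per-execution model described above can be illustrated with a minimal sketch of a serverless function handler. The handler signature below follows the common event/context convention used by several FaaS platforms; the event fields and the local invocation at the end are illustrative assumptions, not a specific provider's API.

```python
import json


def handler(event, context):
    """Entry point the platform invokes once per triggering event.

    The provider provisions compute on demand, runs this function,
    and bills only for the time the code actually executes; no
    server is configured or managed by the developer.
    """
    # 'event' carries the request payload; the "name" key here is
    # a hypothetical example field.
    name = event.get("name", "world")
    body = {"message": f"Hello, {name}!"}
    # Return an HTTP-style response, as is typical for API-triggered functions.
    return {"statusCode": 200, "body": json.dumps(body)}


# Simulating a single event-triggered invocation locally:
response = handler({"name": "serverless"}, context=None)
print(response["statusCode"])
```

In a real deployment, the developer uploads only this function; the platform wires it to an event source (an HTTP request, a file upload, a queue message) and scales the number of concurrent instances automatically.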

Key Terms in this Chapter

Platform as a Service (PaaS): A service that offers a ready-made platform for developing, testing, and deploying applications, abstracting the underlying infrastructure complexities.

Dynamic Allocation Model: A system where resources such as computing power and memory are automatically assigned and reassigned based on demand.

Microservices Architecture: An architectural approach where a complex application is built as a collection of small, independent services that communicate with each other. Each microservice is designed to perform a specific business function and can be developed and deployed independently.

Virtualized Computing Resources: Virtual versions of physical resources, such as servers, storage, or networks, created through software abstraction of the underlying hardware.

Modular Functions: Small, independent units of code that perform specific tasks.

Granular Scalability: The ability to scale specific components or functions of an application independently based on demand.

Software as a Service (SaaS): A service that delivers software applications over the internet, accessible through a web browser without the need for installation or maintenance.

Infrastructure as a Service (IaaS): A service that provides virtualized computing resources over the internet, allowing users to rent virtual machines, storage, and networks.

Content Delivery Network (CDN): A system of distributed servers that work together to deliver web content, such as images and videos, to users based on their geographic location.
