Serverless Functions Explained: Fundamentals



Cloud computing has fundamentally changed the landscape of IT infrastructure. It allows businesses to leverage scalable, on-demand resources, reducing the need for large upfront investments in hardware. One of the most significant innovations in cloud computing is serverless architecture.

This is a two-part article. This first part aims to explain serverless functions, their advantages and challenges, and the techniques we can use to develop and test these functions locally.

We will go through:

  • What is Serverless Architecture?
  • How do Serverless Functions work?
  • Serverless Functions Advantages
  • What’s the catch?
  • How can we develop and test?

What is Serverless Architecture?

Before we explore serverless functions, it is essential to understand the foundation they are built on: serverless architecture. Serverless computing, despite its name, still relies on servers. However, the work of managing those servers and their infrastructure, scaling them, and maintaining them is done by a cloud provider like AWS, Azure, or Google Cloud, allowing developers to focus on the development of the product itself. In a nutshell, you develop and deploy.

There are various subsets within the serverless computing paradigm, each having different functions and purposes. For instance, serverless functions, also known as Functions as a Service (FaaS), handle event-driven execution, Backend as a Service (BaaS) provides backend services like databases and authentication, and serverless databases automatically scale based on demand.

Mindmap linking the concept of Serverless Computing to Orchestration, APIs, Functions, Security, Databases, Storage and others.
Figure 1: Serverless Computing Subsets

How do Serverless Functions work?

Serverless functions allow developers to execute small, modular pieces of code (functions) in response to events without worrying about managing the server infrastructure. But how do these functions work?

For a function to be executed, we need to define a trigger. This could be an HTTP request, a database change, a file upload, a scheduled time event (cron job), or a message from a queue. The way we define these triggers depends on the cloud provider we choose. For example, let’s consider a serverless function triggered by an HTTP request:

  1. User Request: A user makes an HTTP request to an API Gateway endpoint.
  2. API Gateway: The API Gateway receives the request and triggers the associated function.
  3. Function Execution: The cloud provider spins up a runtime environment for the function, executes the code, and processes the request.
  4. Response: The function processes the input (e.g., querying a database, processing data) and returns a response.
  5. Shutdown: Once the execution is complete, the runtime environment is terminated.
An example of a serverless function flow.
Figure 2: Serverless function flow example
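To make steps 3 and 4 concrete, here is a minimal sketch of such a function in the style of AWS Lambda's Python runtime. The `event` shape loosely follows API Gateway's proxy-integration convention, and the greeting logic is purely illustrative:

```python
import json


def handler(event, context):
    # The API Gateway delivers the HTTP request as an `event` dict.
    # Query-string parameters may be absent, so fall back to an empty dict.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Process the input and return an HTTP-style response for the gateway.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


if __name__ == "__main__":
    # Local smoke test: simulate the event the gateway would deliver.
    response = handler({"queryStringParameters": {"name": "serverless"}}, None)
    print(response["body"])
```

After deployment, the cloud provider (not your code) calls `handler` each time the trigger fires, then tears the runtime down as described in step 5.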

When a serverless function is invoked, its lifetime depends on its configuration. It can handle a single request and then terminate, remain active to handle multiple requests (up to a specific concurrency limit), or stay active for a set time to reduce delays for future requests.

One final key concept regarding serverless functions is that a function is ephemeral and stateless, meaning it doesn’t retain any data or context between executions. If state needs to be maintained, it must be stored in external services like databases or object storage.
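The statelessness point can be sketched as follows: any value that must survive between invocations is read from and written back to an external service. `KeyValueStore` here is a hypothetical in-memory stand-in for something like DynamoDB or Redis, used only to keep the example self-contained:

```python
# A serverless function must not rely on in-process variables surviving
# between invocations: each run may land on a fresh runtime environment.
# Any state belongs in an external service.

class KeyValueStore:
    """Illustrative external store; in production this would be a network call."""

    def __init__(self):
        self._data = {}

    def get(self, key, default=0):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value


def count_visits(store, user_id):
    # Read current state from the external service, update it, write it back.
    # A module-level counter would be lost whenever the runtime is recycled.
    visits = store.get(user_id) + 1
    store.put(user_id, visits)
    return visits


if __name__ == "__main__":
    store = KeyValueStore()
    print(count_visits(store, "alice"))  # 1
    print(count_visits(store, "alice"))  # 2
```

The function itself stays stateless; only the store, which outlives any single invocation, accumulates data.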

Serverless Functions Advantages

Now that we know how serverless functions should work, let’s discuss why we should consider using a serverless architecture.

Scalability

When using traditional server-based models, we face a trade-off. If we only pay for the exact resources our application needs to run at a consistent number of requests, we might struggle to handle more traffic if the number of requests increases unexpectedly. On the other hand, if we pay for extra resources in case of growth, we might end up wasting money on unused resources if the traffic doesn’t increase. Serverless functions solve this problem by automatically adjusting the resources based on the number of requests. This means they can handle high traffic efficiently without wasting resources when traffic is low.

In contrast, while Kubernetes and VM autoscaling, for example, also provide automatic scaling, they typically cannot scale down to zero, which means there are always some baseline costs. Additionally, containers and VMs generally have longer startup times compared to serverless functions, leading to potential delays during traffic spikes.

Serverless functions not only offer faster response times by starting in milliseconds, but they also eliminate the need for infrastructure management, as the cloud provider handles all the underlying details. This makes serverless functions a cost-effective and efficient choice for applications with variable or unpredictable traffic, as they ensure resources are used optimally without the overhead and complexity of managing traditional server-based solutions.

Cost Efficiency

Serverless models charge based on the exact computing power and time your code requires to run (pay-per-use). During periods of low activity, your costs are minimal, as there is no need to maintain idle servers; if the number of requests increases, the cost scales up with usage. This makes them an ideal choice for applications with variable or unpredictable traffic patterns, as it eliminates the need to over-provision resources in anticipation of peak usage times or to pay for idle time.
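To make pay-per-use concrete, here is a back-of-the-envelope cost sketch. Compute time on FaaS platforms is typically metered in GB-seconds (memory allocated times execution time). The rates and workload numbers below are illustrative placeholders, not any provider's actual price list; real bills also involve free tiers, rounding rules, and per-region pricing:

```python
def monthly_cost(invocations, avg_duration_ms, memory_gb,
                 price_per_gb_second, price_per_million_requests):
    # GB-seconds: memory allocated, multiplied by execution time,
    # summed over all invocations in the billing period.
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * memory_gb
    compute_cost = gb_seconds * price_per_gb_second
    request_cost = (invocations / 1_000_000) * price_per_million_requests
    return compute_cost + request_cost


if __name__ == "__main__":
    # Illustrative rates only; check your provider's pricing page.
    cost = monthly_cost(invocations=2_000_000, avg_duration_ms=120,
                        memory_gb=0.5, price_per_gb_second=0.0000167,
                        price_per_million_requests=0.20)
    print(f"Estimated monthly bill: ${cost:.2f}")
```

Note the key property: with zero invocations the formula yields zero, which is exactly the "scale to zero" behaviour that traditional always-on servers cannot match.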

Reduced Operational Overhead

In traditional server-based models, engineers often spend lots of time configuring, provisioning, and maintaining servers, dealing with issues like scaling and other operational tasks. With serverless models, these responsibilities are passed to the cloud provider, allowing engineers to focus only on writing and deploying code. As a result, this speeds up development cycles and reduces maintenance efforts. Development teams can deliver new features and updates more often and efficiently, improving overall productivity.

Faster Time to Market

As we previously discussed, by removing the complexities of server management, development cycles are shorter, and teams can bring new features to market much faster. This not only keeps the product competitive but also allows faster responses to user feedback and changing market demands. Overall, serverless promotes agility, enabling the team to innovate and iterate faster.

What’s the catch?

There’s a set of challenges when developing serverless functions. Since we don’t manage any server infrastructure, from a dev server to test and production servers, and we just need to deploy these functions to the cloud, how can we have a local environment that simulates the production environment? How can we make sure that when developing locally it will behave the same way as in production? How can we develop and test locally a serverless function that interacts with one or more cloud services (e.g., AWS S3, DynamoDB, API Gateway)? How can we ensure that the local development environment has the same configurations, services, and integrations as the cloud provider?

Another issue is debugging, which in a serverless environment can be more complex due to the stateless nature of these functions and the lack of traditional debugging tools. Unlike traditional servers, we can’t attach debuggers to live serverless functions running in the cloud. In serverless platforms, the underlying infrastructure is abstracted away from the developer: you don’t have direct access to the servers or containers where your functions are executed. Debuggers often rely on low-level access to the runtime environment and process, and since serverless platforms hide this level of detail for scalability and manageability, attaching debuggers directly to live functions is not feasible.

OK, enough with the problems; let’s look at how we can face these challenges!

How can we develop and test?

Deploying development and testing environments in the cloud to execute and evaluate our daily development work could potentially resolve the issue. However, managing multiple developer accounts could become overwhelming. Additionally, this approach might escalate expenses, as with serverless computing, we pay for the resources we use. Consequently, running functions in the cloud for testing purposes could incur significant costs. Keep in mind that it’s common to have a dedicated development or testing environment, preferably used for quality assurance or as a stage in your Continuous Integration and Continuous Delivery (CI/CD) pipeline, rather than directly testing your code in the cloud.

If developing directly in the cloud isn’t a good idea, what methods can we use? Let’s look at four different approaches:

Emulating Services

Emulating cloud services involves creating a local environment that mimics the cloud environment as closely as possible. This is a good approach when developing serverless applications because it allows developers to test and debug their code without needing to deploy to the cloud every time. This makes development faster and cheaper, as it avoids the costs associated with running cloud functions and helps catch errors early in a controlled environment. It provides a realistic environment for testing, ideal for end-to-end (e2e) testing and debugging complex interactions.
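In practice, emulation usually means pointing your application at a local endpoint instead of the real cloud. The sketch below assumes an emulator such as LocalStack is running locally (4566 is LocalStack's default edge port); `emulator_url` is a small hypothetical helper that keeps the redirection in one place:

```python
import json
import urllib.request

# Assumption: a local emulator such as LocalStack is running on this port.
EMULATOR_ENDPOINT = "http://localhost:4566"


def emulator_url(path):
    # Point HTTP/SDK calls at the local emulator instead of the real cloud.
    return f"{EMULATOR_ENDPOINT}/{path.lstrip('/')}"


def check_emulator_health():
    # Calling the emulator involves no cloud resources and no cloud costs.
    # The health path below is LocalStack's; other emulators differ.
    with urllib.request.urlopen(emulator_url("/_localstack/health")) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(check_emulator_health())
```

With AWS's boto3 SDK the same redirection is a single argument, e.g. `boto3.client("s3", endpoint_url="http://localhost:4566")`, so application code can stay unchanged between the emulator and the real cloud.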

Mocking Services

Mocking involves creating fake implementations of cloud services where the responses and behaviour are predefined. It allows developers to test their applications without making calls to the cloud. This is useful for isolating parts of the application to run independent tests (unit tests), and it is generally faster than emulating a service, since no operation is actually performed.

Service Virtualization

This one might cause some confusion with mocking and emulating services, but it’s a different concept. Service virtualization creates virtualized versions of specific components. These virtualizations simulate the actual behaviour when interacting with these components, allowing testing without requiring the actual services to be available. This method is often used when testing interactions between components (end-to-end testing).

Cloud Provider Testing Tools

Cloud providers often offer tools designed for testing serverless applications locally, like AWS SAM (Serverless Application Model) and Azure Functions Core Tools. These tools help simulate the cloud environment and provide a more realistic testing experience.
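As an illustration of the AWS option, the SAM CLI can build, invoke, and serve functions from the project's template. The function name and event file below are placeholders for whatever your own `template.yaml` defines:

```shell
# Build the application defined in template.yaml
sam build

# Invoke a single function locally with a sample event
# (HelloWorldFunction and events/event.json are placeholder names)
sam local invoke HelloWorldFunction --event events/event.json

# Or run a local API Gateway at http://127.0.0.1:3000 and test over HTTP
sam local start-api
```

Azure Functions Core Tools offers an analogous workflow (`func start` runs the functions host locally); in both cases the tool runs your function in a local approximation of the cloud runtime.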

When comparing these methods, none is strictly better than the others. Mocking, emulating, and virtualizing might not completely replicate the real cloud environment, so they may not accurately show how things will work in production. Likewise, cloud provider testing tools might not cover all the scenarios you encounter in production. Each method has its use. Additionally, all of these methods support debugging within an IDE, except for cloud provider testing tools, which may or may not, depending on the tool.

We’ve covered the good and the less good; in Part 2, we’ll talk about how you can apply a serverless architecture in practice.

What’s next?

In Part 1 of this article, we learned the basics of serverless functions and the techniques to create a local environment to develop and test these functions.

In Part 2, we apply the theory by creating a local environment to run a serverless application, utilizing serverless functions and other cloud services. We will begin by defining a serverless application, selecting a cloud provider, understanding the necessary services from the cloud provider to run our application, identifying the tools required to emulate these services, and finally deploying and running the application locally.