Comprehensive Guide to Serverless Go with AWS Lambda

Learn how to design and develop an API as a set of single-purpose functions, events, and resources with AWS Lambda.


1. Introduction to Serverless
First, let’s have a quick look at how software was traditionally built.
Web applications are deployed on web servers running on physical machines. As a software developer, you needed to be aware of the intricacies of the server that runs your software.
To get your application running on the server, you had to spend hours downloading, compiling, installing, configuring, and connecting all sorts of components. The OS of your machines needs to be constantly upgraded and patched for security vulnerabilities. In addition, servers need to be provisioned, load-balanced, configured, patched, and maintained.
In short, managing servers is a time-consuming task which often requires dedicated and experienced systems operations personnel.
What server maintenance can feel like – Metropolis (1927 film)
What is the point of software engineering? Contrary to what some might think, the goal of software engineering isn’t to deliver software. A software engineer’s job is to deliver value - to get the usefulness of software into the hands of users.
At the end of the day, you do need servers to deliver software. However, the time spent managing servers is time you could have spent on developing new features and improving your application. When you have a great idea, the last thing you want to do is set up infrastructure. Instead of worrying about servers, you want to focus more on shipping value.
How can we minimize the time required to deliver impact?
-
1.1 Moving to the Cloud
Over the past few decades, improvements in both the network and the platform layer - technologies between the operating system and your application - have made cloud computing easier.
Back in the days of yore (the early 1990s), developers only had bare metal hardware available to run their code, and obtaining a new compute unit could take days or even months. Scaling took a lot of detailed planning, a huge amount of time and, most importantly, money. A shift was inevitable. The invention of virtual machines and the hypervisor shrunk the time to provision a new compute unit down to minutes through virtualization. Today, containers give us a new compute unit in seconds.
DevOps has evolved and matured over this period, leading to the proliferation of Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) providers. These third-party platforms let you delegate the task of maintaining the execution environment for your code to capable hands, freeing you from server and deployment concerns.
Today, developers have moved away from deploying software on physical computers sitting in their living room. Instead of manually downloading and building a bunch of platform-level technologies on each server instance (and later having to repeat the process when you scale) you can go to a simple web user interface on your PaaS provider of choice, click a few options, and have your application automatically deployed to a fully provisioned cluster.
When your application usage grows, you can add capacity by clicking a few buttons. When you need additional infrastructure components, deployment pipelines, or database backups, you can set these up from the same web interface.
The state of Platform-as-a-Service (PaaS) and cloud computing today is convenient and powerful - but can we do better?
-
1.2 Enter Serverless
The next major shift in cloud computing is commonly known as “Serverless” or “Functions-as-a-Service” (FaaS).
Keep in mind that the phrase “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer need to think that much about them. Computing resources get used as services without having to manage around physical capacities or limits.
Cloud Computing Evolution
Serverless is a software development approach that aims to eliminate the need to manage infrastructure by:
  • Using a managed compute service (Functions-as-a-Service) to execute your code, and
  • Leveraging external services and APIs (third-party Software-as-a-Service products).
There is now an abundance of third-party services: APIs that handle online payments, transactional email, user analytics, code quality measurement, content management, continuous integration, and other secondary concerns. In our everyday work we also make use of external tools for project management, file sharing, office administration, and more.
Instead of spending valuable resources on building up secondary capabilities such as infrastructure and server maintenance, developers can focus on their core value proposition. Rather than building everything from scratch, they can connect prefabricated parts together and prune away secondary complexity from their applications. By making use of third-party services, you can quickly build loosely coupled, scalable, and efficient architectures.
Serverless platforms are a major step towards delegating infrastructure problems to companies that are much better positioned to deal with them. No matter how good you become at DevOps, Amazon / Google / Microsoft will almost certainly have done it better. You can now get all the benefits of a modern container farm architecture without breaking the bank or spending years building it yourself.
-
1.3 From PaaS to FaaS
How is Functions-as-a-Service different from Platform-as-a-service?
Platform-as-a-Service (PaaS) providers such as Heroku offer support for long-running applications and services. When started, long-running server processes (also known as daemons) wait for input, execute some code when input is received, and then continue waiting for the next request. These server processes run 24/7, and you are billed monthly or hourly regardless of actual usage.
PaaS platforms provide a managed virtual server provisioning system that frees you from managing actual servers. However, you still have to think about the servers. You often need to describe your runtime environment with a Dockerfile, specify how many instances to provision, enable autoscaling, and so on.
Functions-as-a-Service (FaaS) lets you deploy and invoke short-lived (ephemeral) function processes to handle individual requests. A function process is created when an input event is received, and disappears after the code finishes executing. The platform handles the provisioning of instances, termination, monitoring, logging, and so on. Function processes come into existence only in response to an incoming request, and you are billed according to the number of function invocations and total execution time.
FaaS platforms go a step further than PaaS: you don’t even need to think about how much capacity you need in advance. You just upload your code, select from a set of available languages and runtimes, and the platform deals with the infrastructure.
The table below highlights the differences between PaaS and FaaS.
-
1.4 From Monolith to Microservices to Functions
One of the modern paradigms in software development is the shift towards smaller, independently deployable units of code. Monolithic applications are out; microservices are in.
A monolithic application is built as a single unit, where the presentation, business logic, and data access layers all exist within the same application. The server-side application handles HTTP requests, executes domain logic, retrieves and updates data from the database, and selects and populates HTML views to be sent to the browser. A monolith is a single logical executable. Any change to the system involves building and deploying a new version of the application. Scaling requires scaling the entire application rather than just the parts that need more resources.
In contrast, the idea behind microservices is that you break down a monolithic application into smaller services by applying the single responsibility principle at the architectural level. You refactor existing modules within the monolith into standalone microservices, where each service is responsible for a distinct business domain. These microservices communicate with each other over the network (often via RESTful HTTP) to complete a larger goal. Benefits over a traditional monolithic architecture include independent deployability and scalability, language, platform, and technology independence for different components, and increased architectural flexibility.
For example, we could create a Users microservice to be in charge of user registration, onboarding, and other concerns within the User Domain. Microservices allow teams to work in parallel, build resilient distributed architectures, and create decoupled systems that can be changed faster and scaled more effectively.
Cloud Computing Evolution: from monolith to microservices to functions
Functions-as-a-Service (FaaS) uses an even smaller unit of application logic: the single-purpose function. Instead of a monolithic application that you’d run on a PaaS provider, your system is composed of multiple functions working together. For example, each HTTP endpoint of a RESTful API can be handled by a separate function. The POST /users endpoint would trigger a createUser function, the PATCH /users/{id} endpoint would trigger an updateUser function, and so on. A complex processing pipeline can be decomposed into multiple steps, each handled by a function.
Each function is independently deployable and scales automatically. Changes to the system can be localized to just the functions that are affected. In some cases, you can change your application’s workflow by ordering the same functions in a different way.
Functions-as-a-Service goes beyond microservices, enabling developers to create new software applications composed of tiny building blocks.
-
1.5 FaaS Concepts
Let’s look at the basic building blocks of applications built on FaaS: Events trigger Functions which communicate with Resources.

-

Functions

A Function is a piece of code deployed in the cloud, which performs a single task such as:
  • Processing an image.
  • Saving a blog post to a database.
  • Retrieving a file.
When deciding what should go in a Function, think of the Single Responsibility Principle and the Unix philosophy:
  1. Make each program do one thing well.
  2. Expect the output of every program to become the input to another, as yet unknown, program.
Following these principles lets us maintain high cohesion and maximize code reuse. A Function is meant to be small and self-contained. Let’s look at an example AWS Lambda (Go) Function:
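A minimal sketch of such a Function, using the aws-lambda-go package:

package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

// HandleRequest takes a name as input and returns a greeting.
func HandleRequest(ctx context.Context, name string) (string, error) {
    return fmt.Sprintf("Hello %s!", name), nil
}

func main() {
    // lambda.Start hands control to the Lambda runtime,
    // which invokes HandleRequest for each event.
    lambda.Start(HandleRequest)
}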
The lambda function above takes name as input and returns a greeting based on a name.
FaaS providers have a distinct set of supported languages and runtimes. You are limited to the environments supported by the FaaS provider; one provider may offer an execution environment that is not supported by another. For example, Azure Functions supports C# but AWS Lambda does not support C#. On AWS Lambda, you can write your Functions in the following runtimes (January 2018):
  • Node.js – v4.3.2 and 6.10
  • Java – Java 8
  • Python – Python 3.6 and 2.7
  • .NET Core – .NET Core 1.0.1 (C#)
  • Go - Go 1.x
With tooling, you can support compiled languages such as Rust which are not natively supported. This works by including executable binaries within your deployment package and having a supported language (such as Node.js) call the binaries.
AWS Lambda Function Environment
Each AWS Lambda function also has 512MB of non-persistent ‘scratch space’ in its own /tmp directory. The directory content remains when the container is frozen, providing a transient cache that can be used across multiple invocations. Files written to the /tmp folder may still exist from previous invocations.
However, when you write your Lambda function code, do not assume that AWS Lambda always reuses the container. Lambda may or may not re-use the same container across different invocations. You have no control over if and when containers are created or reused.
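For example, a Go function can treat /tmp as a best-effort cache, as in the sketch below (fetchConfig is a hypothetical helper standing in for a remote fetch):

package main

import (
    "io/ioutil"

    "github.com/aws/aws-lambda-go/lambda"
)

const cachePath = "/tmp/config.json"

// fetchConfig is a hypothetical helper that fetches data from a remote source.
func fetchConfig() []byte { return []byte("{}") }

// handler reads the cached file if a warm container still has it,
// and rebuilds it otherwise. It never assumes the cache exists.
func handler() (string, error) {
    data, err := ioutil.ReadFile(cachePath)
    if err != nil {
        // Cold container (or the cache was lost): rebuild and cache the data.
        data = fetchConfig()
        _ = ioutil.WriteFile(cachePath, data, 0600)
    }
    return string(data), nil
}

func main() {
    lambda.Start(handler)
}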
-

Events

One way to think of Events is as signals traveling across the neurons in your brain.
Events are analogous to signals travelling in your brain
You can invoke your Function manually or you can set up Events to reactively trigger your Function. Events are produced by an event source. On AWS, events can come from:
  • An AWS API Gateway HTTP endpoint request (useful for HTTP APIs)
  • An AWS S3 bucket upload (useful for file uploads)
  • A CloudWatch timer (useful for running tasks every N minutes)
  • An AWS SNS topic (useful for triggering other lambdas)
  • A CloudWatch Alert (useful for log processing)
The execution of a Function may emit an Event that subsequently triggers another Function, and so on ad infinitum - creating a network of functions entirely driven by events. We will explore this pattern in Chapter 4.
-

Resources

Most applications require more than a pure functional transformation of inputs. We often need to capture stateful information such as user data and user-generated content (images, documents, and so on).
However, a Function by itself is stateless. After a Function is executed, none of its in-process state will be available to subsequent invocations. Because of that, we need to provision Resources, such as an external database or network file storage, to store state.
A selection of AWS resources
Resources are infrastructure components which your Functions depend on, such as:
  • An AWS DynamoDB Table (for saving user and application data)
  • An AWS S3 Bucket (for saving images and files)
-
1.6 FaaS Execution Model
AWS Lambda executes functions in an isolated container with resources specified in the function’s configuration (which defines the container’s memory size, maximum timeout, and so on). The FaaS platform takes care of provisioning and managing any resources needed to run your function.
The first time a Function is invoked after being created or updated, a new container with the appropriate resources will be created to execute it, and the code for the function will be loaded into the container. Because it takes time to set up a container and do the necessary bootstrapping, AWS Lambda has an initial cold start latency. You typically see this latency when a Lambda function is invoked for the first time or after it has been updated.
The cold start latency occurs due to container bootstrapping
After a Function is invoked, AWS Lambda keeps the container warm for some time in anticipation of another function invocation. AWS Lambda tries to reuse the container for subsequent invocations.
-
1.7 Traditional Scaling vs. Serverless
One of the challenges in managing servers is allocating compute capacity.
Web servers need to be provisioned and scaled with enough compute capacity to match the amount of inbound traffic in order to run smoothly. With traditional deployments, you can find yourself over-provisioning or under-provisioning compute capacity. This is especially true when your traffic load is unpredictable. You can never know when your traffic will peak and to what level.
Traditional server capacity planning can result in under and over-provisioning.
When you over-provision compute capacity, you’re wasting money on idle compute time. Your servers are just sitting there waiting for requests that don’t come. Even with autoscaling, the problem of paying for idle time persists, albeit to a lesser degree. When you under-provision, you struggle to serve incoming requests (and have to contend with dissatisfied users). Your servers are overwhelmed with too many requests. Compute capacity is usually over-provisioned, and for good reason: when there’s not enough capacity, bad things can happen.
Your servers, overwhelmed by incoming requests
When you under-provision and the queue of incoming requests grows too large, some of your users’ requests will time out. This phenomenon is commonly known as the ‘Reddit Hug of Death’ or the Slashdot effect. Depending on the nature of your application, users may find this occurrence unacceptable.
With Functions-as-a-Service, you get autoscaling out of the box. Each incoming request spawns a short-lived function process that executes your function. If your system needs to process 100 requests at a specific time, the provider will spawn that many function processes without any extra configuration on your part. The provisioned capacity always matches the number of incoming requests, so there is no under- or over-provisioning in FaaS. You get instant, massive parallelism when you need it.
-

AWS Lambda Costs

With Serverless, you only pay for the number of executions and total execution duration. Since you don’t have to pay for idle compute time, this can lead to significant cost savings.

Requests

You are charged for the total number of execution requests across all your functions. Serverless platforms such as AWS Lambda count a request each time a function starts executing in response to an event notification or invoke call, including test invokes from the console.
  • First 1 million requests per month are free
  • $0.20 per 1 million requests thereafter ($0.0000002 per request)

Duration

Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms. The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.

Pricing Example

If you allocated 128MB of memory to your function, executed it 30 million times in one month, and it ran for 200ms each time, your charges would be calculated as follows:
Monthly compute charges
The monthly compute price is $0.00001667 per GB-s and the free tier provides 400,000 GB-s.
  • Total compute (seconds) = 30M * (0.2sec) = 6,000,000 seconds
  • Total compute (GB-s) = 6,000,000 * 128MB/1024 = 750,000 GB-s
  • Total Compute – Free tier compute = Monthly billable compute seconds
  • 750,000 GB-s – 400,000 free tier GB-s = 350,000 GB-s
  • Monthly compute charges = 350,000 * $0.00001667 = $5.83

Monthly request charges

The monthly request price is $0.20 per 1 million requests and the free tier provides 1M requests per month.
  • Total requests – Free tier request = Monthly billable requests
  • 30M requests – 1M free tier requests = 29M Monthly billable requests
  • Monthly request charges = 29M * $0.2/M = $5.80

Total charges

Total charges = Compute charges + Request charges = $5.83 + $5.80 = $11.63 per month
For more details, check out the official AWS Lambda pricing docs.

Additional charges

In a typical web-connected application that needs HTTP access, you’ll also incur Amazon API Gateway costs at $3.50 per million API calls received.
-
1.8 AWS Lambda Limits
The execution environment AWS Lambda gives you has some hard and soft limits, such as the size of your deployment package and the amount of memory your Lambda function is allocated per invocation.

Invocation Limits

From 2018 onwards, the Lambda memory limit has been increased to 3GB.
Some things of note:
  • Memory: Each Function has an allocated memory size (defaults to 128MB). Doubling a function’s allocated memory also doubles the amount you are billed per 100ms. If you’re performing memory-intensive tasks, increasing the memory allocation can lead to increased performance.
  • Space: Your Functions have a 512MB ‘scratch’ space that’s useful for writing temporary files to disk. Note that there is no guarantee that files written to the /tmp space will be available in subsequent invocations.
  • Execution Time: Five minutes is the longest time a function can execute. Exceeding this duration will immediately terminate the execution.
-

Deployment Limits


-

Concurrency Limits


Concurrent executions refers to the number of executions of your function code that are happening at any given time. You can use the following formula to estimate your concurrent Lambda function invocations:

events (or requests) per second * function duration

For example, consider a Lambda function that processes Amazon S3 events. Suppose that the Lambda function takes on average three seconds and Amazon S3 publishes 10 events per second. Then, you will have 30 concurrent executions of your Lambda function.
By default, AWS Lambda limits the total concurrent executions across all functions within a given region to 1000. Any invocation that causes your function’s concurrent execution to exceed the safety limit is throttled. In this case, the invocation doesn’t execute your function.
-

AWS Lambda Limit Errors

Functions that exceed any of the limits listed in the previous limits tables will fail with an exceeded limits exception. These limits are fixed and cannot be changed at this time. For example, if you receive the exception CodeStorageExceededException or an error message similar to “Code storage limit exceeded” from AWS Lambda, you need to reduce the size of your code storage.
Each throttled invocation increases the Amazon CloudWatch Throttles metric for the function, so you can monitor the number of throttled requests. The throttled invocation is handled differently based on how your function is invoked:

Synchronous Invocation

If the function is invoked synchronously and is throttled, the invoking application receives a 429 error and the invoking application is responsible for retries.

Asynchronous Invocation

If your Lambda function is invoked asynchronously and is throttled, AWS Lambda automatically retries the throttled event for up to six hours, with delays between retries.

Stream-based Invocation

For stream-based event sources (Amazon Kinesis Streams and Amazon DynamoDB streams), AWS Lambda polls your stream and invokes your Lambda function. When your Lambda function is throttled, AWS Lambda attempts to process the throttled batch of records until the time the data expires. This time period can be up to seven days for Amazon Kinesis Streams. The throttled request is treated as blocking per shard, and Lambda doesn’t read any new records from the shard until the throttled batch of records either expires or succeeds.
-

Increasing your concurrency limit

To request a concurrent executions limit increase:
  1. Open the AWS Support Center page, sign in if necessary, and then choose Create case.
  2. For Regarding, select Service Limit Increase.
  3. For Limit Type, choose Lambda, fill in the necessary fields in the form, and then choose the button at the bottom of the page for your preferred method of contact.
-
1.9 Use Cases
FaaS can be applied to a variety of use cases. Here are some examples.

Event-driven File Processing

You can create functions to thumbnail images, transcode videos, index files, process logs, validate content, aggregate and filter data, and more, in response to real-time events.
Multiple lambda functions can be invoked in response to a single event. For example, to create differently sized thumbnails of an image (small, medium, large), you can trigger three lambda functions in parallel, each with different dimension inputs.
An event-driven image thumbnail flow.
Here’s an example architecture of a serverless asset processing pipeline:
  1. A file is uploaded to an S3 bucket.
  2. A lambda function is triggered with details about the uploaded file.
  3. The lambda function executes, performing whatever processing we want it to do.
A major benefit of using FaaS for this use case is you don’t need to reserve large server instances to handle the occasional traffic peaks. Since your instances will be idle for most of the day, going the FaaS route can lead to major cost savings.
With FaaS, if your system needs to process 100 requests at a specific time, your provider will spawn that many function processes without any extra configuration on your part. You get instant compute capacity when you need it and avoid paying for idle compute time.
In Chapter 4, we will explore this pattern by building an event-driven image processing backend.
-

Web Applications

You can use FaaS together with other cloud services to build scalable web applications and APIs. These backends automatically scale up and down and can run in a highly available configuration across multiple data centers – with zero administrative effort required for scalability, back-ups, or multi-data center redundancy.

Here’s an example architecture of a serverless backend:
  1. Frontend clients communicate to the backend via HTTP.
  2. An API gateway routes HTTP requests to different lambda functions.
  3. Each lambda function has a single responsibility, and may communicate with other services behind the scenes.
Serverless web applications and APIs are highly available and can handle sudden traffic spikes, eliminating the Slashdot Effect. Going FaaS solves a common startup growing pain in which teams would rewrite their MVP in a different stack in order to scale. With Serverless, you can write code that scales from day 1.
In Chapters 5 and 6, we will explore this use case by building a parallelized web scraping backend and a CRUD commenting backend.
-

Webhooks

Webhooks (also known as ‘Reverse APIs’) let developers create an HTTP endpoint that will be called when a certain event occurs in a third-party platform. Instead of polling endlessly for updates, the third-party platform can notify you of new changes.
For example, Slack uses incoming webhooks to post messages from external sources into Slack and outgoing webhooks to provide automated responses to messages your team members post.
Webhooks are powerful and flexible: they allow customers to implement arbitrary logic to extend your core product. However, webhooks are an additional deployable component your clients need to worry about. With FaaS, developers can write webhooks as Functions and not have to worry about provisioning, availability, or scaling.
FaaS also helps platforms that use webhooks to offer a smoother developer experience. Instead of having your customers provide a webhook URL to a service they need to host elsewhere, serverless webhooks let users implement their extension logic directly within your product. Developers write their webhook code directly on your platform and, behind the scenes, the platform deploys the code to a FaaS provider.
Twilio Functions is an early example of a serverless webhook
The advantages of going serverless for webhooks are similar to those for APIs: low overhead, minimal maintenance, and automatic scaling. An example use case is setting up a Node.js webhook to process SMS requests with Twilio.
-
1.10 Benefits

High Availability and Scalability

A FaaS provider handles horizontal scaling automatically for you, spawning as many function processes as necessary to handle all incoming requests. FaaS providers also guarantee high availability, making sure your functions are always up.
As a developer, you are freed from having to think about provisioning multiple instances, load balancing, circuit breaking, and other aspects of deployment concerns. You can focus on developing and improving your core application.
-

Less Ops

There’s no infrastructure for you to worry about. Tasks such as server configuration and management, patching, and maintenance are taken care of by the vendor. You’re responsible only for your own code, leaving operational and administrative tasks to capable hands.
However, operational concerns are not completely eliminated; they just take on new forms. From an operational perspective, serverless architectures introduce different considerations, such as the loss of control over the execution environment and the complexity of managing many smaller deployment units. This results in the need for much more sophisticated insight and observability solutions. Monitoring, logging, and distributed tracing are of paramount importance in Serverless architectures.
-

Granular Billing

With traditional PaaS, you are billed in fixed intervals (monthly, daily, or hourly) because your long-running server processes are running 24/7. Most of the time, this means you are paying for idle compute time.
FaaS billing is more granular and cost-effective, especially when traffic loads are uneven or unpredictable. With AWS Lambda you only pay for what you use, in terms of number of invocations and execution time in 100-millisecond increments. This leads to lower costs overall, because you’re not paying for idle compute resources.
-
1.11 Drawbacks

Vendor lock-in

When you use a cloud provider, you delegate much of the server control to a third-party. You are tightly coupled to any cloud services that you depend on. Porting your code from one platform or cloud service to another will require moving large chunks of your infrastructure.
On the other hand, big vendors aren’t going anywhere. The only time this really matters is if your organization has a business requirement to use multiple cloud vendors. Note that building a cross-cloud solution is a time-consuming process: you would need to build abstractions above the cloud to standardise event creation and ingestion as well as the services that you need.
-

Lack of control

FaaS platforms are a black box. Since the provider controls server configuration and provisioning, developers have limited control over the execution environment. AWS Lambda lets you pick a runtime, configure memory size (from 128MB to 3GB), and configure timeouts (up to 300 seconds, or 5 minutes), but not much else. The 5-minute maximum timeout makes plain AWS Lambda unsuitable for long-running tasks. AWS Lambda’s /tmp disk space is limited to 512MB, which also makes it unsuitable for certain tasks such as processing large videos.
Over time, expect these limits to increase. For example, AWS announced a memory size limit increase from 1.5GB to 3GB in November 2017.
-

Integration Testing is hard

The characteristics of serverless present challenges for integration testing:
  • A serverless application is dependent on internet/cloud services, which are hard to emulate locally.
  • A serverless application is an integration of separate, distributed services, which must be tested both independently, and together.
  • A serverless application can feature event-driven, asynchronous workflows, which are hard to emulate entirely.
Fortunately, there are now open source projects such as localstack that let you run a fully functional local AWS cloud stack. Develop and test offline!
-
1.12 FaaS Providers
There are a number of FaaS providers currently on the market, such as:
  • AWS Lambda
  • Google Cloud Functions
  • Azure Functions
  • IBM OpenWhisk
When choosing which FaaS provider to use, keep in mind that each provider has a different set of available runtimes and event triggers. See the table below for a comparison of event triggers available on different FaaS providers (not an exhaustive list):

For the rest of this book, we will be using AWS Lambda.
-
1.13 Chapter Summary
In serverless, a combination of smaller deployment units and higher abstraction levels provides compelling benefits such as increased velocity, greater scalability, lower cost, and the ability to focus on product features. In this chapter, you learned about:
  • How serverless came to be and how it compares to PaaS.
  • The basic building blocks of serverless. In serverless applications, Events trigger Functions. Functions communicate with cloud Resources to store state.
  • Serverless benefits, drawbacks, and use cases.
In the next chapter, you will look at the Serverless framework and set up your development environment.
-
2. The Serverless Framework
2.1 Introduction
The Serverless framework (henceforth serverless) is a Node.js command-line interface (CLI) that lets you develop and deploy serverless functions, along with any infrastructure resources they require.
The serverless framework lets you write functions, add event triggers, and deploy to the FaaS provider of your choice. Functions are automatically deployed and events are compiled into the syntax your FaaS provider understands.
serverless is provider- and runtime-agnostic, so you are free to use any supported FaaS provider and language. As of January 2018, the framework supports the following FaaS providers:
  • Amazon Web Services (AWS Lambda)
  • Google Cloud Platform (Google Cloud Functions)
  • Microsoft Azure (Azure Functions)
  • IBM OpenWhisk
  • Kubeless
  • Spotinst
  • Webtasks
Out of the box, the Serverless framework gives you:
  • Structure: The framework’s unit of deployment is a ‘service’, a group of related functions.
  • Best practices: Support for multiple staging environments, regions, environment variables, configs, and more.
  • Automation: A handful of useful options and commands for packaging, deploying, invoking, and monitoring your functions.
  • Plugins: Access to an active open-source ecosystem of plugins that extend the framework’s behaviour.
-
2.2 Installation
To install serverless, you must first install Node.js on your machine. The best way to manage Node.js versions is to use the Node Version Manager (nvm), so we’ll install that first. Follow the step-by-step instructions below.

Install Node Version Manager

First, we’ll install nvm, which lets you manage multiple Node.js versions on your machine and switch between them.
To install or update nvm, run the install script using cURL:

> curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash

or Wget:

> wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash

To verify that nvm has been installed, do:

> command -v nvm
nvm

Install Node.js

To install the latest Node.js version that AWS Lambda supports, do:

> nvm install v6.10.3

To set a default Node.js version to be used in any new shell, use the nvm alias ‘default’:

> nvm alias default v6.10.3

To verify that the correct Node.js version has been installed, do:

> node -v
v6.10.3

Install the Serverless Framework

Install the serverless node module with npm. In your terminal, do:

> npm install serverless -g

Type serverless help to see all available commands:
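The output lists the available commands (abridged here; the exact list varies by framework version):

Commands
* create ......................... Create new Serverless service
* deploy ......................... Deploy a Serverless service
* info ........................... Display information about the service
* invoke ......................... Invoke a deployed function
* logs ........................... Output the logs of a deployed function
* remove ......................... Remove Serverless service and all resources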
In Chapter 3, we will go through some of these commands in more detail. For a full reference of serverless CLI commands, read the official docs.
-
2.3 Getting Started

Development Workflow

Here is a typical workflow when building applications using the serverless CLI:
  1. serverless create to bootstrap a Serverless project.
  2. Implement your functions.
  3. serverless deploy to deploy the current state of the project.
  4. serverless invoke or manually invoke to test the live function.
  5. serverless logs to stream your function’s logs.
  6. Implement and run unit tests for your functions locally with mocks.

Project structure

Here is a typical serverless Go project structure:

+-- src/
|   +-- handlers/
|   |   +-- addTodo.go
|   |   +-- listTodos.go
|   +-- lib/
+-- test/
|   +-- fixtures/
+-- serverless.yml
+-- .gitignore

Tests go into the /test directory. This is where the unit tests for our functions and supporting code live. Test inputs are stored in /test/fixtures.
The serverless.yml config file is in our service’s root directory.

serverless.yml

The serverless.yml file describes your application’s functions, HTTP endpoints, and supporting resources. It uses a DSL that abstracts away platform-specific nuances. Have a look at an example serverless.yml below:
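A minimal serverless.yml for a Go service might look like this (the service and function names here are illustrative):

# serverless.yml
service: my-service

provider:
  name: aws
  runtime: go1.x
  region: us-east-1

functions:
  hello:
    # path to the compiled Go binary inside the deployment package
    handler: bin/hello
    events:
      - http:
          path: hello
          method: get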
In serverless.yml, you define your Functions, the Events that trigger them, and the Resources your Functions use. The serverless CLI reads and translates this file into a provider-specific language such as AWS CloudFormation so that everything is set up with your FaaS provider.

Events

If you are using AWS as your provider, an event is anything in AWS that can trigger your Lambda functions: an S3 bucket upload, an SNS topic notification, an HTTP request to an API Gateway endpoint, and so on. You define which events trigger your functions in serverless.yml:
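For example (a sketch showing a few common AWS event types):

functions:
  processImage:
    handler: bin/processImage
    events:
      # invoke when an object is created in the bucket
      - s3:
          bucket: snapnext-images
          event: s3:ObjectCreated:*
      # invoke on a fixed schedule
      - schedule: rate(10 minutes)
      # invoke via an HTTP endpoint on API Gateway
      - http:
          path: images
          method: post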

Custom Variables

The Serverless framework provides a powerful variable system which allows you to add dynamic data to your serverless.yml. You can define custom variables that you can re-use throughout the project.
In serverless.yml, we define a custom block with variables such as imagesBucketName that we can reference throughout the file:
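A sketch of a custom block and a ${self:...} reference:

# serverless.yml
custom:
  imagesBucketName: snapnext-images

functions:
  processImage:
    handler: bin/processImage
    environment:
      # re-use the custom variable anywhere in the file
      BUCKET: ${self:custom.imagesBucketName}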

Environment Variables

You can also define environment variables in the provider.environment block and use them within your functions.
In the serverless.yml below, we expose the bucket’s name custom.imagesBucketName as an environment variable IMAGES_BUCKET_NAME. We can then read the environment variable from a Go function with os.Getenv("IMAGES_BUCKET_NAME"):

# serverless.yml
custom:
  imagesBucketName: snapnext-images

provider:
  ...
  environment:
    IMAGES_BUCKET_NAME: ${self:custom.imagesBucketName}

Resources

Defining AWS Resources such as S3 buckets and DynamoDB tables requires some familiarity with AWS CloudFormation syntax. CloudFormation is an AWS-specific language used to define your AWS infrastructure as code:
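For example, this sketch defines an S3 bucket with raw CloudFormation inside the resources block:

resources:
  Resources:
    ImagesBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.imagesBucketName}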

IAM Role Statements

By default, Functions lack access to AWS resources such as S3 buckets and DynamoDB tables. AWS Lambda executes your Lambda function on your behalf by assuming the role you provided when creating the function. Therefore, you need to grant the role the permissions your Lambda function needs, such as permissions for Amazon S3 actions to read an object.
You can give these Functions access to resources by writing Identity and Access Management (IAM) role statements for your Function’s role. For example:
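A sketch of such a role statement in serverless.yml:

provider:
  name: aws
  runtime: go1.x
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource: arn:aws:s3:::${self:custom.imagesBucketName}/*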
The above role statement allows our Functions to retrieve objects from an S3 bucket.
Each IAM role statement has three attributes: Effect, Action, and Resource. Effect can be either Allow or Deny. Action refers to specific AWS operations such as s3:GetObject. Resource points to the Amazon Resource Name (ARN) of the specific AWS resources to grant access to.
Always remember to specify the minimum set of permissions your lambda functions require. For a list of available IAM Actions, refer to the official AWS IAM reference.
Keep in mind that provider.iamRoleStatements applies to a single IAM role that is created by the Serverless framework and shared across your Functions. Alternatively, you can create one role per function by defining an AWS::IAM::Role CloudFormation resource and specifying which role a Function uses:
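A sketch of a per-function role (the role name and policy below are illustrative):

functions:
  hello:
    handler: bin/hello
    # use the CloudFormation role defined below instead of the shared role
    role: helloRole

resources:
  Resources:
    helloRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: hello-role
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: hello-policy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource: arn:aws:logs:*:*:*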

Serverless Plugins

Plugins let you extend the framework beyond its core features. With plugins, you can add new CLI commands, hook into the framework’s lifecycle events, and integrate new tools into your workflow.

Installing Plugins

serverless plugins are packaged as Node.js modules that you can install with npm:

> cd my-app
> npm install --save custom-serverless-plugin

Plugins are added on a per service basis and are not applied globally. Make sure you are in your service’s root directory when you install a plugin!

Using Plugins

Including and configuring your plugin is done within your project’s serverless.yml.
The custom block in the serverless.yml file is the place where you can add necessary configurations for your plugins, for example:

plugins:
  - custom-serverless-plugin

custom:
  customkey: customvalue

In the example above, custom-serverless-plugin is configured with a custom.customkey attribute. Each plugin should have documentation on what configuration options are available.
-
2.4 Additional Setup
Before we can continue to the hands-on section, there are a few more things you need to set up.

Amazon Web Services (AWS) Setup

AWS Account Registration

For the rest of this book, you’ll be using AWS as your FaaS provider. If you haven’t already, sign up for an AWS account!
Once you’ve signed up, you’ll need to create an AWS user with administrative access to your account. This user will allow the Serverless framework to configure the services in your AWS account.
First, log in to your AWS account and go to the Identity & Access Management (IAM) page.

Click on the Users sidebar link.
Click on the Add user button. Enter serverless-admin, tick the Programmatic access checkbox under Access type, and select Next: Permissions.

Click Attach existing policies directly, tick AdministratorAccess, and select Next: Review.

Review your choices, then select Create user.

Save the Access key ID and Secret access key of the newly created user.

Done! We’ve now created a user which can perform actions in our AWS account on our behalf (thanks to the AdministratorAccess policy).

Set Up Credentials

Next, we’ll pass the user’s API key and secret to serverless. With the serverless framework installed on your machine, do:

> serverless config credentials --provider aws --key <your_aws_key> --secret <your_aws_secret>
Take a look at the config CLI reference for more information.
-
2.5 Chapter Summary
In this chapter, you learned about the Serverless framework and set up your development environment.
In the next chapter, you will build a simple application with the Serverless framework.
-
3. The Go Language
3.1 Why Go?
The Go language has:
  • Incredible runtime speed.
  • Amazing concurrency abstractions (goroutines).
  • A great batteries-included standard library.
  • Ease of deployment.
-
3.2 Whirlwind Tour of Go

Installation

On OSX, you can download the go1.9.3.darwin-amd64.pkg package file, open it, and follow the prompts to install the Go tools. The package installs the Go distribution to /usr/local/go.
To test your Go installation, open a new terminal and enter:

$ go version
go version go1.9.3 darwin/amd64

Then, add the following to your ~/.bashrc to set your GOROOT and GOPATH environment variables:

export GOROOT=/usr/local/go
export GOPATH=/Users/<your.username>/gopath
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin



Then reload your shell configuration:

source ~/.bashrc

Note that your GOPATH should be the directory under which your source code and third-party packages will live.
Next, try setting up a workspace: create a learn-go/ directory in $GOPATH/src and in that directory create a file named hello.go.

$ cd $GOPATH/src
$ mkdir learn-go
$ cd learn-go
$ touch hello.go
// hello.go

package main

import "fmt"

func main() {
    fmt.Printf("hello, world\n")
}

Run your code by calling go run hello.go. You can also go build Go programs into binaries, which lets us execute the built binary directly:

$ go build hello.go

The command above will build an executable named hello in the directory alongside your source code. Execute it to see the greeting:

$ ./hello
hello, world

If you see the “hello, world” message then your Go installation is working!
In the sub-sections that follow, we’ll quickly run through the basics of the Go language.

Types

Go is a statically-typed language; it comes with several built-in types such as strings, integers, floats, and booleans.
The types.go program below demonstrates Go’s basic built-in types:
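A sketch of types.go consistent with the output shown below:

// types.go
package main

import "fmt"

func main() {
    // integers and floats
    fmt.Println("1+1 =", 1+1)
    fmt.Println("7.0/3.0 =", 7.0/3.0)

    // booleans, with boolean operators
    fmt.Println(true && false)
    fmt.Println(true || false)
    fmt.Println(!true)
}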
The Go programs shown in this chapter are available as part of the sample code included with this book.
Running the above program from the terminal gives you the following:

$ go run types.go
1+1 = 2
7.0/3.0 = 2.3333333333333335
false
true
false

Variables

Variables in Go always have a specific type, and that type cannot change. You declare variables with var, or declare and initialize in one step with the := syntax.
The variables.go program below demonstrates how to declare and initialize variables:
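A sketch of variables.go consistent with the output shown below:

// variables.go
package main

import "fmt"

func main() {
    // var declares a variable; the type is inferred from the initializer
    var a = "initial"
    fmt.Println(a)

    // you can declare multiple variables at once
    var b, c int = 1, 2
    fmt.Println(b, c)

    var d = true
    fmt.Println(d)

    // variables declared without an initializer are zero-valued
    var e int
    fmt.Println(e)

    // := declares and initializes in one step
    f := "short"
    fmt.Println(f)
}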
Running the above program from the terminal gives you:

$ go run variables.go
initial
1 2
true
0
short

Branching

If/else

Branching with if and else in Go is straightforward:
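A sketch of if.go consistent with the output shown below:

// if.go
package main

import "fmt"

func main() {
    // a statement can precede the condition;
    // variables declared there are in scope in all branches
    if num := 7; num%2 == 0 {
        fmt.Println(num, "is even")
    } else {
        fmt.Println(num, "is odd")
    }

    if num := 9; num < 0 {
        fmt.Println(num, "is negative")
    } else if num < 10 {
        fmt.Println(num, "has 1 digit")
    } else {
        fmt.Println(num, "has multiple digits")
    }
}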
Running the above program from the terminal gives you:
$ go run if.go
7 is odd
9 has 1 digit

Switches

Switch statements express conditionals across many branches. The switch.go program below demonstrates different ways you can use switches to perform branching logic:
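A sketch of switch.go consistent with the output shown below:

// switch.go
package main

import (
    "fmt"
    "time"
)

func main() {
    // a basic switch on a value
    i := 2
    fmt.Print("Write ", i, " as ")
    switch i {
    case 1:
        fmt.Println("one")
    case 2:
        fmt.Println("two")
    case 3:
        fmt.Println("three")
    }

    // a switch without an expression is an alternate way to express if/else logic
    t := time.Now()
    switch {
    case t.Hour() < 12:
        fmt.Println("It's before noon")
    default:
        fmt.Println("It's after noon")
    }

    // a type switch compares types instead of values
    whatAmI := func(i interface{}) {
        switch t := i.(type) {
        case bool:
            fmt.Println("I'm a bool")
        case int:
            fmt.Println("I'm an int")
        default:
            fmt.Printf("Don't know type %T\n", t)
        }
    }
    whatAmI(true)
    whatAmI(1)
    whatAmI("hey")
}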
Running the above program from the terminal gives you:

$ go run switch.go
Write 2 as two
It's after noon
I'm a bool
I'm an int
Don't know type string

Data Structures

Slices

Slices in Go are dynamically sized arrays. The slices.go program below shows how you can initialize, read, and modify slices:
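A sketch of slices.go consistent with the output shown below:

// slices.go
package main

import "fmt"

func main() {
    // make allocates a slice of a given length
    s := make([]string, 3)
    fmt.Println("empty s:", s)

    // set and get elements with indexing
    s[0] = "a"
    s[1] = "b"
    s[2] = "c"
    fmt.Println("s:", s)
    fmt.Println("s[2]:", s[2])

    // len returns the current length
    fmt.Println("len(s):", len(s))

    // append returns a new slice with the added elements
    s = append(s, "d")
    s = append(s, "e", "f")
    fmt.Println("append:", s)
}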
Running the above program from the terminal gives you:

$ go run slices.go
empty s: [ ]
s: [a b c]
s[2]: c
len(s): 3
append: [a b c d e f]

Maps

Maps in Go are similar to hashes or dictionaries in other languages. The maps.go program shows how you can initialize, read, and modify maps:
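A sketch of maps.go consistent with the output shown below:

// maps.go
package main

import "fmt"

func main() {
    // make creates an empty map
    m := make(map[string]int)

    // set key/value pairs
    m["a"] = 1
    m["b"] = 2
    fmt.Println("m:", m)

    // read a value by key
    v1 := m["a"]
    fmt.Println("v1:", v1)

    // delete removes a key/value pair
    delete(m, "a")
    fmt.Println("delete:", m)
}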
Running the above program from the terminal gives you:

$ go run maps.go
m: map[a:1 b:2]
v1: 1
delete: map[b:2]

Loops

For

for is Go’s only looping construct:
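A sketch of for.go consistent with the output shown below:

// for.go
package main

import "fmt"

func main() {
    // for with a single condition
    i := 1
    for i <= 3 {
        fmt.Println(i)
        i = i + 1
    }

    // classic for loop: initializer, condition, after
    for j := 7; j <= 9; j++ {
        fmt.Println(j)
    }
}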
Running the above program from the terminal gives you:

$ go run for.go
1
2
3
7
8
9

Range

You can use range to iterate over elements in a variety of data structures. The range.go program demonstrates how you can iterate over slices and maps:
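A sketch of range.go consistent with the output shown below:

// range.go
package main

import "fmt"

func main() {
    nums := []int{2, 3, 4}
    sum := 0

    // range over a slice yields the index and the value
    for i, num := range nums {
        fmt.Println("current index:", i)
        sum += num
    }
    fmt.Println("sum:", sum)

    // range over a map yields key/value pairs
    fruits := map[string]string{"a": "apple", "b": "banana"}
    for k, v := range fruits {
        fmt.Printf("%s -> %s\n", k, v)
    }
}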
Running the above program from the terminal gives you:

$ go run range.go
current index: 0
current index: 1
current index: 2
sum: 9
a -> apple
b -> banana

Functions

Functions in Go accept parameters of specified types and return values of specified types. The functions.go program demonstrates how you can define and call functions in Go:
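A sketch of functions.go consistent with the output shown below:

// functions.go
package main

import "fmt"

// plus takes two ints and returns their sum
func plus(a int, b int) int {
    return a + b
}

func main() {
    res := plus(1, 2)
    fmt.Println("1+2 =", res)
}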
Running the above program from the terminal gives you:

$ go run functions.go
1+2 = 3

Pointers

Pointers allow you to pass references to values within your program.
You use the * prefix to declare pointer types and to dereference a pointer, referring to a value by its memory address instead of by value.
The & prefix yields the memory address of a variable.
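A sketch of pointers.go consistent with the output shown below:

// pointers.go
package main

import "fmt"

// zeroval gets a copy of ival; changes here don't affect the caller
func zeroval(ival int) {
    ival = 0
}

// zeroptr gets a pointer; dereferencing it changes the caller's value
func zeroptr(iptr *int) {
    *iptr = 0
}

func main() {
    i := 1
    fmt.Println("initial:", i)

    zeroval(i)
    fmt.Println("zeroval:", i)

    // &i yields the memory address of i
    zeroptr(&i)
    fmt.Println("zeroptr:", i)

    fmt.Println("pointer:", &i)
}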
Running the above program from the terminal gives you:

$ go run pointers.go
initial: 1
zeroval: 1
zeroptr: 0
pointer: 0x42131100

Note that zeroval doesn’t change the i in main, but zeroptr does because it has a reference to the memory address for that variable.

Structs

Go Structs are typed collections of fields. They are similar to classes in other languages. Structs are the primary data structure used to encapsulate business logic in Go programs.
The structs.go program shows how you can initialize, read, and modify structs in Go:
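A sketch of structs.go consistent with the output shown below:

// structs.go
package main

import "fmt"

// person is a typed collection of fields
type person struct {
    name string
    age  int
}

func main() {
    // positional initialization
    fmt.Println(person{"Alice", 21})

    // named initialization; omitted fields are zero-valued
    fmt.Println(person{name: "Bob"})

    // access fields with a dot
    ann := person{name: "Ann", age: 30}
    fmt.Println(ann.name)
    fmt.Println(ann.age)
}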
Running the above program from the terminal gives you:

$ go run structs.go
{Alice 21}
{Bob 0}
Ann
30

Go supports methods defined on struct types. The methods.go program demonstrates how you can use method definitions to add behaviour to structs:
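A sketch of methods.go consistent with the output shown below:

// methods.go
package main

import "fmt"

type rect struct {
    width, height int
}

// area has a pointer receiver
func (r *rect) area() int {
    return r.width * r.height
}

// perim has a value receiver
func (r rect) perim() int {
    return 2*r.width + 2*r.height
}

func main() {
    r := rect{width: 5, height: 10}
    fmt.Println("area:", r.area())
    fmt.Println("perim:", r.perim())
}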
Running the above program from the terminal gives you:

$ go run methods.go
area: 50
perim: 30

Interfaces

Go Interfaces are named collections of method signatures.
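A sketch of interfaces.go consistent with the output shown below:

// interfaces.go
package main

import (
    "fmt"
    "math"
)

// Geometry is a named collection of method signatures
type Geometry interface {
    area() float64
    perim() float64
}

type Rect struct {
    width, height float64
}

type Circle struct {
    radius float64
}

// Rect implements Geometry implicitly by defining its methods
func (r Rect) area() float64 {
    return r.width * r.height
}

func (r Rect) perim() float64 {
    return 2*r.width + 2*r.height
}

// Circle also implements Geometry
func (c Circle) area() float64 {
    return math.Pi * c.radius * c.radius
}

func (c Circle) perim() float64 {
    return 2 * math.Pi * c.radius
}

// measure works on any type that implements Geometry
func measure(g Geometry) {
    fmt.Println(g)
    fmt.Println(g.area())
    fmt.Println(g.perim())
}

func main() {
    measure(Rect{width: 3, height: 4})
    measure(Circle{radius: 5})
}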
Unlike interfaces in other languages, Go interfaces are implemented implicitly rather than explicitly. You don’t have to annotate a struct to say that it implements an interface. As long as a struct defines all the methods in an interface, it implements that interface.
In the interfaces.go program above, the measure function works for both Circle and Rect, because both structs define the methods in the Geometry interface.
Running the above program from the terminal gives you:

$ go run interfaces.go
{3 4}
12
14
{5}
78.53981633974483
31.41592653589793

Errors

In Go it’s idiomatic to communicate errors via an explicit, separate return value.
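A sketch of errors.go demonstrating this convention:

// errors.go
package main

import (
    "errors"
    "fmt"
)

// divide returns a result and an error as separate values
func divide(a, b float64) (float64, error) {
    if b == 0 {
        return 0, errors.New("cannot divide by zero")
    }
    return a / b, nil
}

func main() {
    // idiomatic Go: check the error before using the result
    if res, err := divide(10, 2); err != nil {
        fmt.Println("error:", err)
    } else {
        fmt.Println("result:", res)
    }
}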

Packages

Nearly every program we’ve seen so far included this line:

import "fmt"

fmt is the name of a package that includes a variety of functions related to formatting and output to the screen. Go provides packages as a mechanism for code reuse.
Create a new learn-packages/ directory in the learn-go/ folder:

mkdir learn-packages
cd learn-packages

Let’s create a new package called math. Create a directory called math/ and in that directory add a new file called math.go:

mkdir math
touch math/math.go



// $GOPATH/src/learn-go/learn-packages/math/math.go

package math

// Average returns the mean of a slice of float64 values.
func Average(xs []float64) float64 {
    total := float64(0)
    for _, x := range xs {
        total += x
    }
    return total / float64(len(xs))
}

In our main.go program, we can import and use our math package:

// $GOPATH/src/learn-go/learn-packages/main.go

package main

import (
    "fmt"

    "learn-go/learn-packages/math"
)

func main() {
    xs := []float64{1, 2, 3, 4}
    avg := math.Average(xs)
    fmt.Println(avg)
}

Package Management

dep is a dependency management tool for Go.
On macOS you can install or upgrade to the latest released version with Homebrew:

$ brew install dep
$ brew upgrade dep

To get started, create a new directory learn-dep/ in your $GOPATH/src:

$ cd $GOPATH/src
$ mkdir learn-dep
$ cd learn-dep

Initialize the project with dep init:

$ dep init
$ ls
Gopkg.lock Gopkg.toml vendor

dep init will create the following:
  • Gopkg.lock is a record of the exact versions of all of the packages that you used for the project.
  • Gopkg.toml is a list of packages your project depends on.
  • vendor/ is the directory where your project’s dependencies are installed.

Adding a new dependency

Create a main.go file with the following contents:
// main.go

package main

import "fmt"

func main() {
    fmt.Println("Hello world")
}
Let’s say that we want to introduce a new dependency on github.com/pkg/errors. This can be accomplished with one command:

$ dep ensure -add github.com/pkg/errors

That’s it!
For detailed usage instructions, check out the official dep docs.
-
3.4 Go on AWS Lambda
AWS released support for Go on AWS Lambda in January 2018. You can now build Go programs with typed structs representing Lambda event sources and common responses, using the aws-lambda-go SDK.
Your Go programs are compiled into a statically-linked binary, bundled up into a Lambda deployment package, and uploaded to AWS Lambda.

Go Lambda Programming Model

You write code for your Lambda function in one of the languages AWS Lambda supports. Regardless of the language you choose, there is a common pattern to writing code for a Lambda function that includes the following core concepts:
  • Handler – Handler is the function AWS Lambda calls to start execution of your Lambda function. Your handler should process incoming event data and may invoke any other functions/methods in your code.
  • The context object – AWS Lambda also passes a context object to the handler function, which lets you retrieve metadata such as the execution time remaining before AWS Lambda terminates your Lambda function.
  • Logging – Your Lambda function can contain logging statements. AWS Lambda writes these logs to CloudWatch Logs.
  • Exceptions – There are different ways to end a request successfully or to notify AWS Lambda an error occurred during execution. If you invoke the function synchronously, then AWS Lambda forwards the result back to the client.
Your Lambda function code must be written in a stateless style, and have no affinity with the underlying compute infrastructure. Your code should expect local file system access, child processes, and similar artifacts to be limited to the lifetime of the request. Persistent state should be stored in Amazon S3, Amazon DynamoDB, or another cloud storage service.

Lambda Function Handler

A Lambda function written in Go is authored as a Go executable. You write your handler function code by including the github.com/aws/aws-lambda-go/lambda package and a main() function:
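A sketch of such a handler, matching the HandleRequest signature described below:

package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

// HandleRequest processes the incoming event and returns a greeting.
func HandleRequest(ctx context.Context, name string) (string, error) {
    return fmt.Sprintf("Hello %s!", name), nil
}

func main() {
    lambda.Start(HandleRequest)
}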
Note the following:
  • package main: In Go, the package containing func main() must always be named main.
  • import: Use this to include the libraries your Lambda function requires.
  • context: The Context Object.
  • fmt: The Go Formatting object used to format the return value of your function.
  • github.com/aws/aws-lambda-go/lambda: As mentioned previously, implements the Lambda programming model for Go.
  • func HandleRequest(ctx context.Context, name string) (string, error): This is your Lambda handler signature and includes the code which will be executed. In addition, the parameters included denote the following:
  • ctx context.Context: Provides runtime information for your Lambda function invocation. ctx is the variable you declare to leverage the information available via the Context object.
  • name string: An input type with a variable name of name whose value will be returned in the return statement.
  • string error: Returns standard error information.
  • return fmt.Sprintf("Hello %s!", name), nil: Simply returns a formatted “Hello” greeting with the name you supplied in the handler signature. nil indicates there were no errors and the function executed successfully.
  • func main(): The entry point that executes your Lambda function code. This is required. By adding lambda.Start(HandleRequest) within func main()’s braces, your Lambda function will be executed.

Using Structured Types

In the example above, the input type was a simple string. But you can also pass in structured events to your function handler:
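A sketch using structs whose JSON tags match the request and response shown below:

package main

import (
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
    Name string `json:"What is your name?"`
    Age  int    `json:"How old are you?"`
}

type MyResponse struct {
    Message string `json:"Answer"`
}

// HandleLambdaEvent deserializes the incoming JSON into MyEvent
// and serializes MyResponse back to the caller.
func HandleLambdaEvent(event MyEvent) (MyResponse, error) {
    return MyResponse{Message: fmt.Sprintf("%s is %d years old!", event.Name, event.Age)}, nil
}

func main() {
    lambda.Start(HandleLambdaEvent)
}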
Your request would then look like this:

{
  "What is your name?": "Jim",
  "How old are you?": 33
}

And the response would look like this:

{
  "Answer": "Jim is 33 years old!"
}

Each AWS event source (API Gateway, DynamoDB, and so on) has its own input/output structs. For example, lambda functions that are triggered by API Gateway events use the events.APIGatewayProxyRequest input struct and the events.APIGatewayProxyResponse output struct:

package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Printf("Body size = %d.\n", len(request.Body))
    fmt.Println("Headers:")
    for key, value := range request.Headers {
        fmt.Printf("  %s: %s\n", key, value)
    }

    return events.APIGatewayProxyResponse{Body: request.Body, StatusCode: 200}, nil
}

func main() {
    lambda.Start(handleRequest)
}

For more information on handling events from AWS event sources, see aws-lambda-go/events.

The Context Object

Lambda functions have access to metadata about their environment and the invocation request such as:
  • How much time is remaining before AWS Lambda terminates your Lambda function.
  • The CloudWatch log group and log stream associated with the executing Lambda function.
  • The AWS request ID returned to the client that invoked the Lambda function.
  • If the Lambda function is invoked through AWS Mobile SDK, you can learn more about the mobile application calling the Lambda function.
  • You can also use the AWS X-Ray SDK for Go to identify critical code paths, trace their performance and capture the data for analysis.

Reading function metadata

AWS Lambda provides the above information via the context.Context object that the service passes as a parameter to your Lambda function handler.
Import the github.com/aws/aws-lambda-go/lambdacontext package to access the contents of the context.Context object:

package main

import (
    "context"
    "log"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/lambdacontext"
)

func handleRequest(ctx context.Context) {
    // lambdacontext.FromContext extracts Lambda-specific metadata,
    // such as the AWS request ID, from the context object
    lc, _ := lambdacontext.FromContext(ctx)
    log.Print(lc.AwsRequestID)
}

func main() {
    lambda.Start(handleRequest)
}

In the example above, lc is the variable used to consume the information that the context object captured, and log.Print(lc.AwsRequestID) prints that information, in this case the AwsRequestID.

Logging

Your Lambda function can contain logging statements. AWS Lambda writes these logs to CloudWatch.

package main

import (
    "log"

    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest() {
    log.Print("Hello from Lambda")
}

func main() {
    lambda.Start(HandleRequest)
}

When you use the log package, Lambda writes additional information with each entry, such as a timestamp. Instead of the log package, you can use print statements in your code, as shown below:

package main

import (
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest() {
    fmt.Print("Hello from Lambda")
}

func main() {
    lambda.Start(HandleRequest)
}

In this case, only the text passed to the print method is sent to CloudWatch. The log entries will not have the additional information that log.Print includes.

Function Errors

Raising custom errors

You can create custom error handling to return an error directly from your Lambda function:

package main

import (
    "errors"

    "github.com/aws/aws-lambda-go/lambda"
)

func OnlyErrors() error {
    return errors.New("something went wrong!")
}

func main() {
    lambda.Start(OnlyErrors)
}

When invoked, the above function will return:

{ "errorMessage": "something went wrong!" }

Raising unexpected errors

Lambda functions can fail for reasons beyond your control, such as network outages. In Go, you use panic in response to unexpected errors. If your code panics, Lambda will attempt to capture the error and serialize it into the standard error JSON format. Lambda will also attempt to insert the value of the panic into the function’s CloudWatch logs.

package main

import (
    "errors"

    "github.com/aws/aws-lambda-go/lambda"
)

func handler(string) (string, error) {
    panic(errors.New("Something went wrong"))
}

func main() {
    lambda.Start(handler)
}

When invoked, the above function will return the full stack trace from the panic:

{
  "errorMessage": "Something went wrong",
  "errorType": "errorString",
  "stackTrace": [
    {
      "path": "github.com/aws/aws-lambda-go/lambda/function.go",
      "line": 27,
      "label": "(*Function).Invoke.function"
    },
    ...
  ]
}


Environment Variables

Use the os.Getenv function to read environment variables:

package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest() {
    // AWS_REGION and AWS_EXECUTION_ENV are set by the Lambda runtime.
    fmt.Printf("Lambda is in the %s region and is on %s", os.Getenv("AWS_REGION"), os.Getenv("AWS_EXECUTION_ENV"))
}

func main() {
    lambda.Start(HandleRequest)
}

Lambda configures a list of environment variables by default.
-
3.5 Summary
The Go we’ve covered so far is more than enough to get you started with building Go applications on AWS Lambda.

-
4. Building a CRUD API
In this chapter, you will build a simple CRUD (Create-Read-Update-Delete) API using Go and AWS Lambda. Each CRUD action will be handled by a serverless function. The final application has some compelling qualities:
  • Less Ops: No servers to provision. Faster development.
  • Infinitely Scalable: AWS Lambda will invoke your Functions for each incoming request.
  • Zero Downtime: AWS Lambda will ensure your service is always up.
  • Cheap: You don’t need to run a large server instance 24/7 to handle traffic peaks. You only pay for actual usage.
-
4.1 Prerequisites
Before we continue, make sure that you have:
  • Go and the Serverless Framework installed on your machine.
  • Your AWS account set up.
Follow the steps in Chapter 2 to set up your development environment, if you haven’t already.
-
4.2 Background Information
Web applications often require more than a pure functional transformation of inputs. You need to capture stateful information such as user or application data and user-generated content (images, documents, and so on).
However, serverless Functions are stateless. After a Function is executed, none of the in-process state will be available to subsequent invocations. To store state, you need to provision Resources that communicate with your Functions.
On top of AWS Lambda, you will need to use two AWS services to capture state: Amazon DynamoDB and Amazon API Gateway.
The subsections that follow briefly explain what each AWS service does.

Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL cloud database and supports both document and key-value store models.
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables’ throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics.
Our CRUD API uses DynamoDB to store all user-generated data in our application.

Amazon API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any Web application.
Our CRUD API uses API Gateway to allow our Functions to be triggered via HTTP.
-
4.3 Design

Problem Decomposition

For each endpoint in our backend’s HTTP API, you can create a Function that corresponds to an action. For example:

`GET /todos` -> `listTodos`

`POST /todos` -> `addTodo`

`PATCH /todos/{id}` -> `completeTodo`

`DELETE /todos/{id}` -> `deleteTodo`

The listTodos Function returns all of our todos, addTodo adds a new row to our todos table, and so on. When designing Functions, keep the Single Responsibility Principle in mind.

Remember: Events trigger Functions which communicate with Resources. In this project, our Functions will be triggered by HTTP and communicate with a DynamoDB table.
-
4.4 Development

Example Application Setup

Check out the serverless-crud-go sample application included as part of this book. This example application will serve as a handy reference as you build your own. In your terminal, do:

cd serverless-crud-go
./scripts/build.sh
serverless deploy

Running the build.sh script will call the go build command to create statically-linked binaries in the bin/ sub-directory of your project. Here is the build script in detail:

#!/usr/bin/env bash

echo "Compiling functions to bin/handlers/ ..."

rm -rf bin/

cd src/handlers/
for f in *.go; do
  filename="${f%.go}"
  if GOOS=linux go build -o "../../bin/handlers/$filename" "$f"; then
    echo "✓ Compiled $filename"
  else
    echo "✕ Failed to compile $filename!"
    exit 1
  fi
done

echo "Done."

-

Set up boilerplate

As part of the sample code included in this book, you have a serverless-boilerplate-go template project you can copy to quickly get started. Copy the entire project folder into your $GOPATH/src and rename the directory to your own project name. Remember to update the project name in serverless.yml as well!
The serverless-boilerplate-go project has this structure:

+-- scripts/
+-- src/
|   +-- handlers/
+-- .gitignore
+-- README.md
+-- Gopkg.toml
+-- serverless.yml

Within this boilerplate, we have the following:
  • scripts contains a build.sh script that you can use to compile binaries for the lambda deployment package.
  • src/handlers/ is where your handler functions will live.
  • Gopkg.toml is used for Go dependency management with the dep tool.
  • serverless.yml is a Serverless project configuration file.
  • README.md contains step-by-step setup instructions.
In your terminal, navigate to your project’s root directory and install the dependencies defined in the boilerplate:

cd <your-project-name>
dep ensure

With that set up, let’s get started with building our CRUD API!
-

Step 1: Create the POST /todos endpoint

Event

First, define the addTodo Function’s HTTP Event trigger in serverless.yml:

# serverless.yml

package:
  individually: true
  exclude:
    - ./**

functions:
  addTodo:
    handler: bin/handlers/addTodo
    package:
      include:
        - ./bin/handlers/addTodo
    events:
      - http:
          path: todos
          method: post
          cors: true

In the above configuration, notice two things:
  • Within the package block, we tell the Serverless framework to only package the compiled binaries in bin/handlers and exclude everything else.
  • The addTodo function has an HTTP event trigger set to the POST /todos endpoint.

Function

Create a new file within the src/handlers/ directory called addTodo.go:
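Below is a minimal sketch of what addTodo.go might look like, assuming the aws-sdk-go DynamoDB client. The Todo struct, the AddTodo handler name, and the newID helper (a simple stand-in for a UUID library) are illustrative, not prescriptive:

package main

import (
    "context"
    "crypto/rand"
    "encoding/json"
    "fmt"
    "os"
    "time"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// Todo models a single row in the todos table.
type Todo struct {
    ID          string `json:"id"`
    Description string `json:"description"`
    Done        bool   `json:"done"`
    CreatedAt   string `json:"created_at"`
}

var ddb *dynamodb.DynamoDB

// init runs once per container, before main, so the DynamoDB
// client is reused across invocations.
func init() {
    ddb = dynamodb.New(session.Must(session.NewSession()))
}

func AddTodo(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    // Parse the request body for a string description.
    var body struct {
        Description string `json:"description"`
    }
    if err := json.Unmarshal([]byte(request.Body), &body); err != nil || body.Description == "" {
        return events.APIGatewayProxyResponse{Body: "bad request", StatusCode: 400}, nil
    }

    // Build the new row.
    todo := Todo{
        ID:          newID(),
        Description: body.Description,
        Done:        false,
        CreatedAt:   time.Now().String(),
    }
    item, err := dynamodbattribute.MarshalMap(todo)
    if err != nil {
        return events.APIGatewayProxyResponse{Body: err.Error(), StatusCode: 500}, nil
    }

    // Insert the row into the table named by the TODOS_TABLE_NAME
    // environment variable.
    _, err = ddb.PutItem(&dynamodb.PutItemInput{
        TableName: aws.String(os.Getenv("TODOS_TABLE_NAME")),
        Item:      item,
    })
    if err != nil {
        return events.APIGatewayProxyResponse{Body: err.Error(), StatusCode: 500}, nil
    }

    responseBody, _ := json.Marshal(todo)
    return events.APIGatewayProxyResponse{Body: string(responseBody), StatusCode: 201}, nil
}

// newID returns a random hex ID; a real project would use a UUID library.
func newID() string {
    b := make([]byte, 16)
    rand.Read(b)
    return fmt.Sprintf("%x", b)
}

func main() {
    lambda.Start(AddTodo)
}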
In the above handler function:
  • In the init() function, we perform some initialization logic: making a database connection to DynamoDB. init() is automatically called before main().
  • The addTodo handler function parses the request body for a string description.
  • Then, it calls ddb.PutItem, using the table name read from the TODOS_TABLE_NAME environment variable, to insert a new row into our DynamoDB table.
  • Finally, it returns an HTTP success or error response back to the client.

Resource

Our handler function stores data in a DynamoDB table. Let’s define this table resource in the serverless.yml:
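Here is a sketch of the relevant serverless.yml blocks, assuming standard Serverless Framework syntax; the table name interpolation, key schema, and throughput values are illustrative:

# serverless.yml (sketch; names and values are illustrative)

provider:
  name: aws
  runtime: go1.x
  environment:
    TODOS_TABLE_NAME: ${self:service}-todos
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:PutItem
        - dynamodb:Scan
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:*:*:table/${self:provider.environment.TODOS_TABLE_NAME}"

resources:
  Resources:
    TodosTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.environment.TODOS_TABLE_NAME}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1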
In the resources block, we define a new AWS::DynamoDB::Table resource using AWS CloudFormation.
We then make the provisioned table’s name available to our handler function by exposing it as an environment variable in the provider.environment block.

To give our functions access to AWS resources, we also define some IAM role statements that allow our functions to perform certain actions such as dynamodb:PutItem to our table resource.


Summary

Run ./scripts/build.sh and serverless deploy. If everything goes well, you will receive an HTTP endpoint URL that you can use to trigger your Lambda function.
Verify your function by making an HTTP POST request to the URL with the following body:

{
  "description": "Hello world"
}

If everything goes well, you will receive a 201 HTTP success response and see a new row in your DynamoDB table via the AWS console.
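For example, using curl (substitute the <hash> and <region> from the URL that serverless deploy printed):

> curl -X POST https://<hash>.execute-api.<region>.amazonaws.com/dev/todos \
    -H "Content-Type: application/json" \
    -d '{ "description": "Hello world" }'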
-

Step 2: Create the GET /todos endpoint

Event

First, define the listTodos Function’s HTTP Event trigger in serverless.yml:

# serverless.yml

functions:
  listTodos:
    handler: bin/handlers/listTodos
    package:
      include:
        - ./bin/handlers/listTodos
    events:
      - http:
          path: todos
          method: get
          cors: true

Function

Create a new file within the src/handlers/ directory called listTodos.go:
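As before, here is a minimal sketch of what listTodos.go might look like; the Todo struct and the ListTodos handler name are illustrative:

package main

import (
    "context"
    "encoding/json"
    "os"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// Todo models a single row in the todos table.
type Todo struct {
    ID          string `json:"id"`
    Description string `json:"description"`
    Done        bool   `json:"done"`
    CreatedAt   string `json:"created_at"`
}

var ddb *dynamodb.DynamoDB

func init() {
    ddb = dynamodb.New(session.Must(session.NewSession()))
}

func ListTodos(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    // Read the table name from the environment.
    tableName := os.Getenv("TODOS_TABLE_NAME")

    // Scan the table for all rows.
    result, err := ddb.Scan(&dynamodb.ScanInput{TableName: aws.String(tableName)})
    if err != nil {
        return events.APIGatewayProxyResponse{Body: err.Error(), StatusCode: 500}, nil
    }

    // Unmarshal the DynamoDB items into Todo structs.
    todos := make([]Todo, 0)
    if err := dynamodbattribute.UnmarshalListOfMaps(result.Items, &todos); err != nil {
        return events.APIGatewayProxyResponse{Body: err.Error(), StatusCode: 500}, nil
    }

    body, _ := json.Marshal(map[string][]Todo{"todos": todos})
    return events.APIGatewayProxyResponse{Body: string(body), StatusCode: 200}, nil
}

func main() {
    lambda.Start(ListTodos)
}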
In the above handler function:
  • First, you retrieve the tableName from environment variables.
  • Then, you call ddb.Scan to retrieve rows from the todos DB table.
  • Finally, you return a success or error HTTP response depending on the outcome.

Summary

Run ./scripts/build.sh and serverless deploy. You will receive an HTTP endpoint URL that you can use to trigger your Lambda function.
Verify your function by making an HTTP GET request to the URL. If everything goes well, you will receive a 200 HTTP success response and see a list of todo JSON objects:

> curl https://<hash>.execute-api.<region>.amazonaws.com/dev/todos
{
  "todos": [
    {
      "id": "d3e38e20-5e73-4e24-9390-2747cf5d19b5",
      "description": "buy fruits",
      "done": false,
      "created_at": "2018-01-23 08:48:21.211887436 +0000 UTC m=+0.045616262"
    },
    {
      "id": "1b580cc9-a5fa-4d29-b122-d20274537707",
      "description": "go for a run",
      "done": false,
      "created_at": "2018-01-23 10:30:25.230758674 +0000 UTC m=+0.050585237"
    }
  ]
}

-

Step 3: Create the PATCH /todos/{id} endpoint

Event

First, define the completeTodo Function’s HTTP Event trigger in serverless.yml:

# serverless.yml

functions:
  completeTodo:
    handler: bin/handlers/completeTodo
    package:
      include:
        - ./bin/handlers/completeTodo
    events:
      - http:
          path: todos/{id}
          method: patch
          cors: true

Function

Create a new file within the src/handlers/ directory called completeTodo.go:
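Here is a minimal sketch of what completeTodo.go might look like; the CompleteTodo handler name and the #d attribute-name alias are illustrative:

package main

import (
    "context"
    "os"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
)

var ddb *dynamodb.DynamoDB

func init() {
    ddb = dynamodb.New(session.Must(session.NewSession()))
}

func CompleteTodo(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    // Read the todo's id from the request path and the table name
    // from the environment.
    id := request.PathParameters["id"]
    tableName := os.Getenv("TODOS_TABLE_NAME")

    // Set the row's done attribute to true.
    _, err := ddb.UpdateItem(&dynamodb.UpdateItemInput{
        TableName: aws.String(tableName),
        Key: map[string]*dynamodb.AttributeValue{
            "id": {S: aws.String(id)},
        },
        UpdateExpression:         aws.String("set #d = :done"),
        ExpressionAttributeNames: map[string]*string{"#d": aws.String("done")},
        ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
            ":done": {BOOL: aws.Bool(true)},
        },
    })
    if err != nil {
        return events.APIGatewayProxyResponse{Body: err.Error(), StatusCode: 500}, nil
    }

    return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}

func main() {
    lambda.Start(CompleteTodo)
}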
In the above handler function:
  • First, you retrieve id from the request’s path parameters, and tableName from environment variables.
  • Then, you call ddb.UpdateItem with the id, the tableName, and an UpdateExpression that sets the todo’s done attribute to true.
  • Finally, you return a success or error HTTP response depending on the outcome.

Summary

Run ./scripts/build.sh and serverless deploy. You will receive an HTTP PATCH endpoint URL that you can use to trigger the completeTodo Lambda function.
Verify your function by making an HTTP PATCH request to the /todos/{id} URL, passing in a todo ID. You should see that the todo item’s done status is updated from false to true.
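For example, using curl (substitute your own endpoint and a real todo ID):

> curl -X PATCH https://<hash>.execute-api.<region>.amazonaws.com/dev/todos/<id>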
-

Step 4: Create the DELETE /todos/{id} endpoint

Event

First, define the deleteTodo Function’s HTTP Event trigger in serverless.yml:

# serverless.yml

functions:
  deleteTodo:
    handler: bin/handlers/deleteTodo
    package:
      include:
        - ./bin/handlers/deleteTodo
    events:
      - http:
          path: todos/{id}
          method: delete
          cors: true

Function

Create a new file within the src/handlers/ directory called deleteTodo.go:
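Here is a minimal sketch of what deleteTodo.go might look like; the DeleteTodo handler name is illustrative:

package main

import (
    "context"
    "os"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
)

var ddb *dynamodb.DynamoDB

func init() {
    ddb = dynamodb.New(session.Must(session.NewSession()))
}

func DeleteTodo(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    // Read the todo's id from the request path and the table name
    // from the environment.
    id := request.PathParameters["id"]
    tableName := os.Getenv("TODOS_TABLE_NAME")

    // Delete the row with the given id.
    _, err := ddb.DeleteItem(&dynamodb.DeleteItemInput{
        TableName: aws.String(tableName),
        Key: map[string]*dynamodb.AttributeValue{
            "id": {S: aws.String(id)},
        },
    })
    if err != nil {
        return events.APIGatewayProxyResponse{Body: err.Error(), StatusCode: 500}, nil
    }

    return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}

func main() {
    lambda.Start(DeleteTodo)
}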
In the above handler function:
  • First, you retrieve id from the request’s path parameters, and tableName from environment variables.
  • Then, you call ddb.DeleteItem with both id and tableName.
  • Finally, you return a success or error HTTP response depending on the outcome.

Summary

Run ./scripts/build.sh and serverless deploy. You will receive an HTTP DELETE endpoint URL that you can use to trigger the deleteTodo Lambda function.
Verify your function by making an HTTP DELETE request to the /todos/{id} URL, passing in a todo ID. You should see that the todo item is deleted from your DB table.
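For example, using curl:

> curl -X DELETE https://<hash>.execute-api.<region>.amazonaws.com/dev/todos/<id>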
-

Writing Unit Tests

Going serverless makes your infrastructure more resilient, decreasing the likelihood that your servers fail. However, your application can still fail due to bugs and errors in business logic. Having unit tests gives you confidence that both your infrastructure and your code are behaving as expected.
Most of your Functions make external API calls to AWS cloud services such as DynamoDB. In our unit tests, we want to avoid making any network calls: unit tests should run locally and, where possible, should not depend on live infrastructure.
In Go, we use the testify package to write unit tests. For example:
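The following sketch tests the input validation of the AddTodo handler sketched earlier in this chapter; because an empty body is rejected before any DynamoDB call, the test runs entirely locally. The test name and expectations are illustrative:

package main

import (
    "context"
    "testing"

    "github.com/aws/aws-lambda-go/events"
    "github.com/stretchr/testify/assert"
)

// An empty body should be rejected with a 400 before any network call.
func TestAddTodoRejectsEmptyBody(t *testing.T) {
    request := events.APIGatewayProxyRequest{Body: ""}

    response, err := AddTodo(context.Background(), request)

    assert.NoError(t, err)
    assert.Equal(t, 400, response.StatusCode)
}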
-
4.5 Summary
Congratulations! In this chapter, you learned how to design and develop an API as a set of single-purpose functions, events, and resources.
-
Get Serverless Go Book
Serverless Go teaches you how to build scalable applications with the Go language, the Serverless Framework, and AWS Lambda. You will learn how to design, develop, and test serverless Go applications from planning to production.
-