Apart from runtime-specific errors (e.g. Python or Node.js exceptions), programmers have to think about failures that are specific to Lambda functions. This documentation covers most of the problems that cause headaches for serverless developers.

Timeout Error

All calls made to AWS Lambda must complete execution within 15 minutes (you can set a lower timeout when appropriate). When the execution duration reaches the timeout, your invocation is halted abruptly.

Timeouts appear in Lambda logs in the following way:

2019-02-13T20:12:38.950Z 41a10717-e9af-11e7-892c-5bb1a4054ed6 Task timed out after 300.09 seconds
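
If the work cannot fit within the limit, you can raise the function's timeout setting (up to 900 seconds) or split the work. Another option is to watch the remaining time from inside the handler. Below is a minimal sketch for the Python runtime, assuming a hypothetical handler that loops over items in the event payload; it uses the context object's get_remaining_time_in_millis() to stop cleanly before the hard cutoff.

def process_item(item):
    # Placeholder for real per-item work; replace with your own logic.
    return item

def handler(event, context):
    processed = []
    for item in event.get("items", []):
        # Stop cleanly when fewer than ~10 seconds of execution time remain,
        # rather than letting the whole invocation be cut off mid-task.
        if context.get_remaining_time_in_millis() < 10_000:
            break
        processed.append(process_item(item))
    return {"processed": len(processed)}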

Out of Memory Error

Lambda executions can run into memory limits. You can recognise this failure when the Max Memory Used and Memory Size values in the REPORT line are identical.

Example:

START RequestId: b86c93c6-e1d0-11e7-955b-539d8b965ff9 Version: $LATEST

REPORT RequestId: b86c93c6-e1d0-11e7-955b-539d8b965ff9 Duration: 122204.28 ms Billed Duration: 122300 ms Memory Size: 256 MB Max Memory Used: 256 MB

RequestId: b86c93c6-e1d0-11e7-955b-539d8b965ff9 Process exited before completing request
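
The usual fix is to allocate more memory to the function (which also increases its CPU share). Here is a minimal sketch using boto3, assuming a hypothetical function named "my-function" and AWS credentials configured in the environment:

import boto3

client = boto3.client("lambda")

# Double the memory allocation of the function that hit its 256 MB limit.
client.update_function_configuration(
    FunctionName="my-function",  # hypothetical name, replace with your own
    MemorySize=512,              # value in MB
)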

Missing Handler

The handler is the function in your code that AWS Lambda invokes when your service is executed. If the handler name provided in the Lambda configuration does not match a function in your code, this error is produced on execution and caught by Dashbird.

See the example below:

START RequestId: db1e9421-724a-11e7-a121-63fe49a029e8 Version: $LATEST

Handler 'my_lambda_handlerx' missing on module

REPORT RequestId: db1e9421-724a-11e7-a121-63fe49a029e8 Duration: 15.11 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB
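
The handler setting must be the module name, a dot, and the function name. A minimal sketch for the Python runtime, using a hypothetical module named my_lambda.py:

# my_lambda.py -- with this file, the handler setting in the Lambda
# configuration must be "my_lambda.my_lambda_handler" (module name, then
# function name). The error above occurred because the configuration pointed
# at "my_lambda_handlerx", which does not exist in the module.

def my_lambda_handler(event, context):
    return {"statusCode": 200}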

Handling Failures

Okay, so now we know what can go wrong. Fortunately, Lambda has a few tricks up its sleeve that we can use to remedy the situation.

Beware of the retry behaviour in AWS

If an invocation results in an error, AWS Lambda will automatically retry the request for you (see AWS Lambda Retry Behavior). That can be very convenient, but it is also a potential source of issues that can be hard to debug.

To help you track down those cases, a “Retry” flag is assigned to invocations when we identify that AWS Lambda is retrying a previously failed request. When viewing an error on Dashbird, it will also point you to the subsequent retry requests generated by the AWS retry system.

Stream-based event sources (AWS Kinesis and DynamoDB)

Some services can automatically generate streams of data and invoke your Lambda functions. If an invocation fails, Lambda will try to process the batch again until the data expires.

To ensure that stream events are processed in order, the error is blocking: the function will not read any new records until the failed batch is either processed successfully or expires.
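
For example, a Kinesis-triggered handler receives a whole batch of records per invocation, and a failure on any record fails the batch. A minimal sketch, assuming JSON payloads and a hypothetical handle_payload helper standing in for the business logic:

import base64
import json

def handle_payload(payload):
    # Placeholder for real processing; raising here fails the whole batch.
    print(payload)

def handler(event, context):
    # A Kinesis event delivers a batch of records in event["Records"].
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        handle_payload(payload)
    # Any uncaught exception above makes Lambda retry the whole batch,
    # blocking the shard until it succeeds or the records expire.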

Idempotent functions

Depending on the flow of your system, retries can be harmful. For instance, let's imagine a function that is responsible for adding a user row to the database and sending a welcome email. If the function fails after creating the user and gets retried, you will end up with a duplicate row in the database.

A good way to overcome this is to design your functions to be idempotent.

Idempotent functions are functions with a single task, which either succeeds or can be retried without any damage to the system. You could redesign the function above using Step Functions: the first step adds the user to the database, and a second step sends the email. Read more about step functions here.
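
Another way to make the "add user" step safe to retry is a conditional write. Below is a minimal sketch using DynamoDB, assuming a hypothetical users table keyed on email; the condition turns a retried write into a no-op instead of a duplicate row.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")  # hypothetical table keyed on "email"

def add_user_handler(event, context):
    try:
        # If the user row already exists (e.g. on a retry), the write is
        # rejected instead of creating a duplicate.
        table.put_item(
            Item={"email": event["email"], "name": event["name"]},
            ConditionExpression="attribute_not_exists(email)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise
    return {"email": event["email"]}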


We aim to improve Dashbird every day, and user feedback is extremely important for that, so please let us know if you have any thoughts about our features and error handling. We would really appreciate it!

 
