Systems maintenance and expansion cost much more than the initial development. As a project grows and technology evolves, complexity and the list of open issues grow with it. Technical debt haunts developers, and the longer it goes unpaid, the more 'interest' accrues on it.
This is why future maintainability should be at the top of developers' minds when architecting and developing software applications.
As with most things in the software industry, there is no easy, one-size-fits-all solution for keeping systems maintainable. A good practice is to approach it from three different perspectives: operations, complexity, and evolution.
Operability is about making it easy for system administrators and DevOps teams to keep the system running in a reliable way.
Modern cloud services, such as serverless, do simplify operations a great deal. Nevertheless, a considerable amount of responsibility still rests in developers' hands.
How simple is it to apply a security patch to a system library, for example? Highly coupled architectures, or the lack of an automated, test-based deployment strategy, will make operations harder.
Tracking down the root cause of issues or performance bottlenecks also depends on good development practices. When developers make clever use of the logging capabilities of their runtimes, operations teams have an easier time understanding what may be impacting system performance.
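As a minimal sketch of that idea (assuming a Python Lambda-style runtime; the handler, event shape, and log field names here are illustrative, not from the article): emitting logs as structured JSON gives operations searchable fields to filter and correlate, instead of free-form text.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Hypothetical Lambda-style handler: log one structured JSON entry per
    # invocation so operations can filter by request id or action name.
    request_id = getattr(context, "aws_request_id", "local-test")
    logger.info(json.dumps({
        "request_id": request_id,
        "action": "process_order",               # illustrative field
        "items": len(event.get("items", [])),    # illustrative metric
    }))
    return {"status": "ok"}
```

Because each entry is machine-readable, a log aggregator can answer questions like "which request ids were slow?" without regex-parsing prose.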
Having good system monitoring in place provides visibility into the system's inner workings and makes it a lot easier for operations to stay on top of it.
Concise, always up-to-date documentation can also be invaluable. Providing contextual expectations for the system or one of its components is especially helpful: "If X is done in the context of Y, the result will be Z."
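A small illustration of such a contextual expectation, written as a docstring (the function, customer tiers, and discount rate are hypothetical examples, not from the article):

```python
def apply_discount(price: float, customer_tier: str) -> float:
    """Return the final price for a customer.

    Contextual expectation ("if X in the context of Y, the result is Z"):
    if a price is given in the context of a "gold" customer, the result
    is the price reduced by 10%; any other tier returns it unchanged.
    """
    if customer_tier == "gold":
        return round(price * 0.9, 2)
    return price
```

Stating the expectation next to the code keeps documentation and behavior in one place, which makes drift easier to spot in review.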
A certain level of complexity is natural in any software, and some applications will be more complex than others, depending on the nature of the particular problem being solved. That is called essential complexity1.
Essential-complexity factors are the ones developers have to face. There is no way to avoid them without sacrificing the solution expected by the user.
There is also inadvertent complexity2, introduced by causes unrelated to the problem the software solves. An example would be performance issues caused by infrastructure or runtime limitations.
Throughout an application’s lifetime, many things tend to grow:
All of that makes it harder for developers to understand the system. As the level of complexity grows, it also gets harder to maintain and extend the application with new features, and any change to a complex system is more likely to introduce bugs.
Bad practices contribute to the problem: tight coupling among modules or services, inconsistent conventions (or the lack of them), poor code planning, breaking SOLID principles, and so on.
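As one hedged sketch of avoiding tight coupling (the class and method names below are invented for illustration): depending on a small interface instead of a concrete implementation lets modules change, or be tested, independently.

```python
from typing import Protocol

class Notifier(Protocol):
    # Abstract interface: callers depend on this protocol, not on a
    # concrete email/SMS/queue implementation (loose coupling).
    def send(self, message: str) -> None: ...

class ListNotifier:
    # One concrete implementation; trivially swappable for another.
    def __init__(self) -> None:
        self.sent: list[str] = []

    def send(self, message: str) -> None:
        self.sent.append(message)

class OrderService:
    # The dependency is injected, so it can be replaced or mocked
    # without touching this class (the "D" in SOLID).
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier

    def place_order(self, order_id: str) -> None:
        self.notifier.send(f"order {order_id} placed")
```

Swapping `ListNotifier` for, say, an SNS-backed notifier would require no change to `OrderService`, which is exactly the property tight coupling destroys.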
Therefore, reducing complexity benefits any application. It makes the software simpler to understand, easier to maintain, and safer to extend. Development teams must strive to reduce complexity whenever possible.
Each complexity factor must be confronted with the question:
Is it possible to remove this factor without sacrificing the solution to the user's problem?
If the answer is yes, it is an inadvertent factor and should ideally be minimized or removed.
Serverless architectures are among the newest technologies aimed at removing complexity from cloud applications. They abstract away the need to plan and manage infrastructure. Developers can build highly maintainable systems by relying on APIs and SLAs and focusing on the user's problem, not the infrastructure.
Change can't be avoided in most – if not all – software projects. It is simply a result of the real world: we learn new things, business and people's needs change over time, technologies become obsolete.
Applications must account for change and stay in good shape to accommodate it when adapting becomes desirable, necessary, or inevitable.
Evolvability3 is the term we’re looking for here. It is a characteristic of architectural and coding designs that enables efficient and reliable accommodation of changing requirements.
Characteristics of an evolvable system:
This article was heavily inspired by the book Designing Data-Intensive Applications, by Martin Kleppmann. The book itself is a compilation of a multitude of sources, some of which we link below in the footnotes.