Focus on Core Value and Keep Cloud Infrastructure Flexible with “Ports & Adapters”
Introduction
Cloud infrastructure decisions are too often driven by personal preferences and past experience, including the HiPPO (Highest Paid Person’s Opinion) shaped by gossip circulating in executive circles.
That leads to real problems.
What works for large mature systems is too expensive and complex for early stages, while early-stage shortcuts will block system growth later.
In addition, cloud vendors release new features constantly, invalidating previous assumptions and rendering custom solutions obsolete or risky.
The decision process is supposed to be fact-based, considering factors like cost, performance, and security. However, this is rarely possible because of too many unknowns at the beginning.
This publication is an attempt to summarize my personal experience in applying a more systematic approach towards mitigating uncertainty risks and preserving required flexibility without premature overengineering.
The approach is based on applying Set-Based Engineering method combined with Use Case guided application of the “Ports and Adapters” architecture pattern.
This publication is not intended as marketing or training material for any specific methodology. It should rather be treated as an interim technical report.
Here, the main focus is on general concepts independent of any particular programming language or software design approach. Future publications will provide more details for specific programming languages such as Python and TypeScript.
Systematically covering interconnected topics across cloud architecture and software design can make sequential reading difficult. To ease it, I start with a top-level summary, followed by more detailed discussion in separate chapters with meaningful headlines, so that the core message can be grasped by skimming the level-one and level-two titles, reading a chapter in full only when its topic is of interest.
As stated above, we will start with the Summary.
Summary
- Cloud infrastructure decisions are complex, with multiple choices interconnected in non-obvious ways. Early decisions, rarely optimal due to unknowns, often change later.
- Set-Based Engineering manages uncertainty by evaluating multiple options simultaneously, deferring final decisions as late as possible.
- The “Ports and Adapters” architecture pattern isolates application logic from infrastructure via ports, enabling multiple configurations.
- AI services (including GenAI) are a new type of infrastructure, isolated from application core through appropriate ports.
- Use Case modeling systematically defines system boundaries and actors, identifying required ports without early infrastructure commitments while driving infrastructure choices through event flow analysis.
Selecting Cloud Infrastructure
Selecting cloud infrastructure is overwhelming, with numerous interconnected options influencing each other in subtle ways. Vendor services catalogs often reflect marketing strategies rather than systematic engineering needs, amplifying complexity.
Dealing with multiple interrelated choices leads to combinatorial complexity of system design. Early choices are rarely optimal and often need revision based on feedback, which is why a systematic approach to creating flexible architecture — and defining its limits — is essential.
Below is a high-level taxonomy of essential cloud infrastructure topics, ordered roughly by the magnitude of their impact on subsequent decisions. Each topic headline is accompanied by a brief justification of why, in my opinion, it’s essential and why it comes in this particular order.
Providing detailed description for each category is beyond the scope of this publication.
- System Distribution — Modern cloud is not a mainframe-like centralized computing environment. It contains multiple geographically distributed elements. Required latency, privacy, level of aggregation, and maintenance cost are among major decision factors. System partitioning between multiple physical locations shapes the scope for all other options.
- Cloud Platform — Based on system distribution, multiple cloud platforms may be required, potentially from different vendors (e.g., Google BigQuery consumed by AWS services and cached within Cloudflare CDN).
- Hardware — Cost, performance, and sustainability requirements limit hardware options per cloud platform and location.
- Compute Services — Software systems perform computations using incoming and stored data. While options like virtual machines and managed containers are well-known, cloud vendors offer many non-traditional compute solutions (e.g., AWS StepFunctions, API Gateway VTL, EventBridge Rules, or database stored procedures).
- AI Services — While traditional compute services follow predefined algorithms, AI services extend these capabilities with transformations driven by training data, adding a new dimension to system design.
- Storage Services — Every non-trivial system needs to store data, often in large volumes and varied formats. Offloading storage to the cloud is a common motivation for migration.
- Communication Services — Systems require communication channels for external data entry, internal data flows, and outbound results or requests to external services.
- Security Services — Protection from unauthorized access and increasingly frequent cyber-attacks is mandatory to ensure system integrity and data security.
- Business Continuity — Even in a well-protected system, software at some locations may fail due to physical disruptions (e.g., power outages) or human errors (e.g., defects or misconfigurations).
- Observability Services — To detect malfunctions and optimize performance, logs must be collected, metrics calculated, and alarms, when necessary, raised.
- Orchestration Tools and Services — Cloud-based systems comprise multiple interconnected services, requiring coordinated allocation, configuration, and release.
- Development Tools and Services — Keeping cloud architecture separate from software development is a common but serious mistake, as it creates silos and shifts unresolved issues between teams, leading to inefficiencies and misaligned priorities.
This complex decision space, with its interdependencies and inevitable changes, requires a systematic approach to preserve flexibility without overengineering — which is where Set-Based Engineering becomes valuable.
Set-Based Engineering
Making the perfect choice on the first attempt is virtually impossible, given the complexity and evolving nature of cloud infrastructure. Systems evolve, requirements change, and trade-offs shift. At the beginning of a project, very little is known.
Set-Based Engineering helps manage this uncertainty by maintaining as many options as possible until the final choice is made. The process involves:
- Initial List of Candidates: Conducting safe-to-fail experimentation and preliminary evaluation of multiple options.
- Narrower List of Configurations: Addressing the needs of different environments (e.g., Dev, Test, Stage, Production).
- Final Choices for Production: Selecting one or more configurations based on the business model.
- Feedback and Metrics: Continuous collection of data to evaluate new candidates or revisit old ones, ensuring adaptability and improvement.
The diagram below illustrates this dynamic process:
Set-Based Engineering, originally developed in manufacturing and popularized by Toyota, has proven effective in software development as well.
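The narrowing process described above can be captured as plain configuration data. Below is a minimal, illustrative Python sketch (every name and option here is hypothetical, not a recommendation) showing an initial candidate set per infrastructure concern, per-environment configurations that keep several options open, and a metrics-driven final choice:

```python
# Illustrative Set-Based Engineering bookkeeping: candidates per concern,
# narrowed per environment, with the final choice deferred until metrics exist.

# Initial list of candidates per infrastructure concern.
candidates = {
    "data_store": ["postgres", "dynamodb", "mongodb"],
    "messaging": ["amqp", "sqs", "kafka"],
}

# Narrower list of configurations per environment; production keeps
# more than one option open until a feedback-driven decision is made.
configurations = {
    "dev": {"data_store": "in_memory", "messaging": "in_memory"},
    "test": {"data_store": "postgres", "messaging": "amqp"},
    "production": {"data_store": ["postgres", "dynamodb"], "messaging": "amqp"},
}

def final_choice(env: str, concern: str, metrics: dict) -> str:
    """Pick one option for an environment, using collected metrics
    (e.g., cost per transaction) to decide between still-open options."""
    option = configurations[env][concern]
    if isinstance(option, list):
        return min(option, key=lambda o: metrics.get(o, float("inf")))
    return option
```

The point is not the data structure itself but the discipline: options stay visible and comparable until feedback, not opinion, closes them.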
To advance, we need to address two key questions: how initial candidates are identified and how they are reflected in software systems.
This publication proposes implementing infrastructure candidates as adapters, isolated from the application core via well-defined ports, as suggested by the “Ports and Adapters” architecture pattern.
Use Case Analysis provides a systematic way to identify the system’s core functionality, define required ports, and evaluate suitable adapter candidates, ensuring alignment with business goals.
“Ports & Adapters” Architecture Pattern
Refer to Chapter Two of the “Hexagonal Architecture Explained” book for a detailed and formal pattern description. Here, I will provide a short description of the main elements of the pattern in my own words.
The “Ports & Adapters” pattern suggests a simple yet practical approach to separating concerns in software. Separation of concerns is crucial because the code base quickly grows even for a modest application, bringing numerous challenges. To address them, the “Ports & Adapters” pattern suggests splitting all elements involved in a particular software application into five distinct categories and dealing with each one separately:
- The Application itself. This category encapsulates the real value delivered to prospective customers and users. This is the reason why software is going to be developed and used in the first place. Sometimes, it’s called the Core or System Under Development (SuD), or Computation Layer, where external inputs are processed to produce final results. Visually, the Application part of the system is represented in the form of a hexagon. There is nothing special or magic in this shape. As the “Hexagonal Architecture Explained” book authors explain, it “… has served well as a hook to the pattern. It’s easy to remember and generates conversation.”
- External Actors that communicate with or are communicated to by the Application. These could be human end users, electronic devices, or other Applications. The original pattern suggests further separation into Primary (or Driving) Actors — those who initiate an interaction with the Application, and Secondary (or Driven) Actors — those with whom the Application initiates communication.
- Ports — a fancy name for the formal specification of Interfaces that Primary Actors use (aka Driving Ports) or Secondary Actors must implement (aka Driven Ports) to communicate with the Application. In addition to the formal specification of the interface verbs (e.g., BuyParkingTicket), ports also provide detailed specifications of the data structures exchanged through these interfaces.
- Adapters fill the gaps between External Actors and Ports. As the name suggests, Adapters translate protocols and data between Actors and the Application, and deal with low-level error handling and recovery. Adapters should not perform any business computations.
- Configurator pulls everything together by connecting External Actors to the Application through Ports using corresponding Adapters. Depending on the architectural decisions made and price/performance/flexibility requirements these decisions were trying to address, a specific Configuration can be produced based on Specification statically before the Application deployment or dynamically during the Application run.
The pattern suggests reducing complexity and risk by focusing on one problem at a time, temporarily ignoring other aspects. It also suggests a practical way to ensure the existence of multiple configurations of the same computation, each addressing specific needs, be it test automation or operation in different environments.
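As a minimal sketch, the five categories might look as follows in Python. All class and method names are illustrative assumptions (only the BuyParkingTicket verb echoes the example above), and the payment Adapter is deliberately a fake:

```python
from typing import Protocol

# Driving Port: the formal interface a Primary Actor uses.
class ForBuyingParkingTickets(Protocol):
    def buy_parking_ticket(self, car_plate: str, amount: float) -> str: ...

# Driven Port: the interface a Secondary Actor must implement.
class ForPayments(Protocol):
    def charge(self, amount: float) -> bool: ...

# The Application (the hexagon): business logic only, speaks through Ports.
class ParkingApp:
    def __init__(self, payments: ForPayments) -> None:
        self._payments = payments

    def buy_parking_ticket(self, car_plate: str, amount: float) -> str:
        if not self._payments.charge(amount):
            raise RuntimeError("payment rejected")
        return f"ticket:{car_plate}"

# Adapter: translates between an External Actor and a Port;
# no business computations live here.
class FakePaymentAdapter:
    def charge(self, amount: float) -> bool:
        return amount > 0

# Configurator: wires External Actors to the Application through Ports
# using the chosen Adapters (here, statically).
def configure() -> ParkingApp:
    return ParkingApp(payments=FakePaymentAdapter())
```

Calling `configure().buy_parking_ticket("AB-123", 2.5)` exercises the Application through its Driving Port while the Driven Port is served by the fake Adapter; swapping the Adapter requires no change to `ParkingApp`.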
The picture below presents main elements of the pattern with some additional clarifications:
Sometimes, it’s beneficial to distinguish between the Application, such as the “Blue Zone” parking lot application introduced below, its External Clients (e.g. Car Drivers and Parking Inspectors), its External Services (e.g. Payment Service), and Internal Mechanisms (e.g. Database or System Clock).
External Clients initiate specific Workflows via corresponding Driving Ports. Workflows, in turn, communicate with External Services or Internal Mechanisms using Driven Ports.
There are three major types of Workflows:
- Initiated by an External Client (green line) — e.g., a Car Driver purchasing a Parking Ticket.
- Initiated by an External Service (blue line) — e.g., a Parking Lot Camera detecting a car entering/leaving.
- Initiated by an Internal Mechanism (purple line) — e.g., a System Clock triggering a check on ticket expiration.
While these distinctions are not part of the original pattern, they are useful for systematic port discovery, as we will see later (see Use-Case-Guided Ports and Adapters Discovery).
Each Port will normally have multiple Adapters, each addressing the specific needs of development, testing, end-to-end integration, and production deployment.
Specific Application Configuration is produced by a Configurator (static or dynamic) based on Specification.
Having multiple Adapters for the same Port allows keeping several infrastructure options open for as long as required, effectively applying the Set-Based Engineering approach to software development.
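For example, a single Driven Port might be backed by a RAM Adapter during development and a file-based Adapter during early trials, with the Configurator keeping both candidates alive. This is a sketch under assumed names, not a prescribed design:

```python
import json
import os
from typing import Protocol

class ForStoringData(Protocol):  # one Driven Port (illustrative name)
    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> str: ...

class RamStoreAdapter:  # candidate kept open for dev and unit tests
    def __init__(self) -> None:
        self._data: dict = {}
    def put(self, key: str, value: str) -> None:
        self._data[key] = value
    def get(self, key: str) -> str:
        return self._data[key]

class JsonFileStoreAdapter:  # candidate kept open for durable trials
    def __init__(self, path: str) -> None:
        self._path = path
    def _load(self) -> dict:
        if not os.path.exists(self._path):
            return {}
        with open(self._path) as f:
            return json.load(f)
    def put(self, key: str, value: str) -> None:
        data = self._load()
        data[key] = value
        with open(self._path, "w") as f:
            json.dump(data, f)
    def get(self, key: str) -> str:
        return self._load()[key]

def make_store(env: str) -> ForStoringData:
    # The Configurator chooses per environment; the decision can be
    # revisited as operational feedback arrives, without touching the core.
    if env == "dev":
        return RamStoreAdapter()
    return JsonFileStoreAdapter("store.json")
```

Because the Application depends only on `ForStoringData`, replacing one Adapter with another, or adding a third candidate, is a Configurator-level change.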
Before describing Use Case-Driven identification of Ports and Adapters, we first introduce a sample Application for illustrative examples.
“Blue Zone” Sample Application
To prevent this publication from becoming too abstract, let’s introduce a concrete example application to illustrate the proposed Use-Case-Driven discovery process for Ports and Adapters.
I selected the “Blue Zone” application recommended by the “Hexagonal Architecture Explained” book as a canonical example. Although originally developed in Java, its major elements are applicable to any programming language and any cloud.
From the application README:
BlueZone allows car drivers to pay remotely for parking cars at regulated zones in a city, instead of paying with coins using parking meters.
1. Driving actors using the application are car drivers and parking inspectors.
2. Car drivers will access the application using a Web UI (User Interface), and they can do the following:
— Ask for the available rates in the city, in order to choose the one of the zones they want to park the car at.
— Buy a ticket for parking the car during a period of time at a regulated zone. This period starts at current date-time. The ending date-time is calculated from the paid amount, according to the rate (euros/hour) of the zone.
3. Parking inspectors will access the application using a terminal with a CLI (Command Line Interface), and they can do the following:
— Check a car for issuing a fine, in case that the car is illegally parked at a zone. This will happen if there is no active ticket for the car and the rate of the zone. A ticket is active if current date-time is between the starting and ending date-time of the ticket period.
4. Driven actors needed by the application are:
— Repository with the data (rates and tickets) used in the application. It also has a sequence for getting ticket codes as they are needed.
— Payment service that allows the car driver to buy tickets using a credit card. Obviously, no adapter for a real service has been developed, just a test-double (mock).
— Date-time service for obtaining the current date-time when needed, for buying a ticket and for checking a car.
The “Blue Zone” application structure is illustrated on the high-level diagram below:
Use Case Modeling: A Short Introduction
Complete coverage of the Use Case Modeling approach is beyond the scope of this publication. Here, I define basic elements of Use Case model in my own words. For a full description of Use Case Modeling, refer to the “Use Case 3.0” book.
The Use Case Modeling vocabulary contains definitions of the following concepts:
- The System — Similar to the Application in “Ports & Adapters”, but typically broader. It represents the service or product we’re developing to deliver value to users. Unlike Application in “Ports & Adapters”, a System (e.g., an E-Commerce platform) may include multiple applications, like back- and front-office components.
- Actors — Essentially identical to “Ports & Adapters” definition: entities (human users, devices, or other systems) that communicate with one or more System Applications.
- Use Case — An interaction started by an Actor to achieve a goal, potentially involving other Actors. Think of the System as an object, with Use Cases as its public methods.
- Use Case Event Flow Specification — Defines all externally visible Use Case details: interaction with Actors, performance, security and error handling.
- Supplementary Specification defines System requirements common to all Use Cases, like regulations, safety, observability and business continuity.
With these definitions, we can now explore how Use Case analysis enables systematic “Ports & Adapters” discovery. We’ll use the “Blue Zone” application as a concrete example.
Use-Case-Guided Ports and Adapters Discovery
Use Case Model
The Use Case Model answers the fundamental questions: “Why does the System exist? Who (or what) interacts with it, and for what purpose?”
The Use Case Model for the “Blue Zone” application is presented below:
The model shows a CarDriver buying a ticket via a PaymentService (BuyTicket), and a ParkingInspector verifying tickets (CheckCar).
This provides enough information for identifying initial set of Driving and Driven Ports, as detailed in the next section.
Initial Set of Driving and Driven Ports
The Use Case Model above suggests two Driving Ports and one Driven Port:
- Driving Port for CarDriver to initiate the BuyTicket Use Case.
- Driven Port to interact with PaymentService for ticket purchases.
- Driving Port for ParkingInspector to initiate the CheckCar Use Case.
How do we name these ports? Naming varies, as practitioners interpret Ports differently. My approach is to treat a Port as one or more interfaces, all tied to the same technology by their adapters.
In other words, a Port may contain more than one interface (e.g. separate interfaces to send and receive data adhering to the Interface Segregation Principle), but all interfaces should be tied to the same technology by corresponding adapters.
For naming, I name Driving Ports after their Use Cases:
- ForBuyingTickets — Driving Port to be used by CarDriver.
- ForCheckingCars — Driving Port to be used by ParkingInspector.
I name Driven Ports after the service they obtain from the Actor:
- ForPayments — Driven Port to communicate with PaymentService.
This naming differs from the original Blue Zone implementation, but I find it more intuitive for systematic Use-Case-Guided Ports and Adapters discovery.
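Expressed in Python, the convention might look as follows. The method names and signatures are illustrative guesses, not the original Blue Zone code; only the payment test-double mirrors the README, which also uses a mock:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class ForBuyingTickets(Protocol):   # Driving Port, named after its Use Case
    def buy_ticket(self, car_plate: str, zone: str, amount: float) -> str: ...

@runtime_checkable
class ForCheckingCars(Protocol):    # Driving Port, named after its Use Case
    def check_car(self, car_plate: str, zone: str) -> bool: ...

@runtime_checkable
class ForPayments(Protocol):        # Driven Port, named after the obtained service
    def pay(self, card_number: str, amount: float) -> bool: ...

class MockPaymentService:
    """Test-double Adapter for the ForPayments Port."""
    def pay(self, card_number: str, amount: float) -> bool:
        return amount > 0
```

Marking the Protocols `runtime_checkable` lets a Configurator verify at wiring time that a chosen Adapter actually satisfies its Port.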
Initial set of Driving and Driven Ports of Blue Zone application is shown below:
The diagram uses blue for Driving Ports and pink for Driven Ports, reflecting that the Application has greater control over its Driving Ports, while Driven Ports are constrained by external services.
Next, we’ll explore discovering more Driven Ports using the Use Case Event Flow Specification.
Deriving Additional Ports from Event Flow Specification
The Blue Zone Application README file states:
Driven actors needed by the application are:
— Repository with the data (rates and tickets) used in the application. It also has a sequence for getting ticket codes as they are needed.
— Payment service that allows the car driver to buy tickets using a card. Obviously, no adapter for a real service has been developed, just a test-double (mock).
— Date-time service for obtaining the current date-time when needed, for buying a ticket and for checking a car.
While we could infer the ForPayments Driven Port directly from the Use Case Model, assuming a Repository and a Date-time Service is more guesswork than evidence. How do we know we need them? In this simple application such a conclusion might be justified, but in less trivial cases such a gut-feeling-guided approach to architecture could be dangerous, resulting in extra cost and complexity.
If we look at the Event Flow Specification for the BuyTicket Use Case, for example, we can find a sounder justification for additional Driven Ports:
From the main course of events above we learn that:
- The System should keep Parking Rates per Zone (suggests a DefineParkingRates Use Case, requiring a new Actor — perhaps a ParkingZoneOperator — and a Driving Port).
- The System should be able to calculate allowed parking time.
- The System should store the purchased ticket for further validation (CheckCar Use Case).
We now understand that the System needs a Data Store to keep parking zone rates and purchased tickets, and access to a Clock to calculate the ticket validity period and check whether it has expired.
Now, we can specify three additional Driven Ports:
- ForStoringParkingRates — to store parking rates per zone.
- ForStoringTickets — to store purchased tickets.
- ForObtainingDateTime — to get the current time.
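Of these three, ForObtainingDateTime shows most clearly why internal mechanisms deserve Ports: ticket-expiry logic becomes deterministic in tests when the clock is an Adapter. A sketch under assumed names (only the activity rule is taken verbatim from the README above):

```python
from datetime import datetime
from typing import Protocol

class ForObtainingDateTime(Protocol):   # Driven Port for the System Clock
    def now(self) -> datetime: ...

class FixedClock:
    """Test Adapter: a frozen clock makes expiry checks deterministic.
    A production Adapter would simply return datetime.now()."""
    def __init__(self, moment: datetime) -> None:
        self._moment = moment
    def now(self) -> datetime:
        return self._moment

def ticket_is_active(clock: ForObtainingDateTime,
                     start: datetime, end: datetime) -> bool:
    # "A ticket is active if current date-time is between the starting
    # and ending date-time of the ticket period" (per the README).
    return start <= clock.now() <= end
```

The same Application logic runs unchanged whether the clock is frozen, simulated, or real.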
While some practitioners advocate treating these elements as Actors, I prefer classifying them as System Internals — technical capabilities essential for implementation rather than business actors.
Here, we apply Set-Based Engineering by defining technology boundaries:
- We decided to have only one Driven Port for obtaining current date and time for both Use Cases.
- We decided that storing parking rates and tickets doesn’t require the same technology but doesn’t rule it out either. For comparison, the original “Blue Zone” implementation defined a single ForStoringData Port that assumes one common store for all data.
- We decided both Use Cases will use the same technology for storing and retrieving tickets, ensuring immediate availability for inspection. This avoids an aggressive Eventual Consistency approach, favoring ACID-compliant technology (SQL vs. NoSQL options remain open).
Additional Driven Ports derived from the Use Case Event Flow Specifications are shown below:
To distinguish additional Driven Ports, I used another color (light green) and UML notation for component interfaces. While not completely standard, this notation hopefully highlights these ports’ specifics.
As we refine the Use Case Event Flow Specification, we’ll uncover more interface details per Actor, guiding Cloud Infrastructure decisions while maintaining flexibility, as explored next.
Defining Actor Interfaces
The Use Case Specification offers insights that advance our Set-Based Engineering approach. For example, a careful reading reveals:
- CarDriver will use a Web UI across devices (PC, tablets, and mobile phones).
- ParkingInspector will scan parking tickets or car license plates via a mobile application.
- PaymentService requires a standard messaging protocol.
Based on these inputs, we may decide to use:
- The GraphQL protocol for the CarDriver Web UI.
- The REST API protocol for the ParkingInspector mobile application.
- The AMQP protocol for the PaymentService.
Like the Driven Ports from the previous section, these choices narrow our technology options, including cloud services, while keeping flexibility for future iterations. These actor interface decisions are shown below:
We’ll continue applying Set-Based Engineering to select a data store technology in the next section, keeping our options flexible.
Selecting Data Store Technology
Use Case SLAs (throughput, latency), system availability, and data schema stability influence the choice of database for production deployment.
If the data schema is stable, and flexibility for ad hoc queries and ACID guarantees are required, then a relational database like PostgreSQL is likely the best choice.
However, if there is a need for very flexible, document-like, schema, a document database like MongoDB might be a better option.
If very high throughput and low latency are required, a column-based NoSQL database like Cassandra might be better.
Possible data store options for “Blue Zone” Application are shown below:
At this stage, we explore broad option sets for the “Blue Zone” application’s data stores, identifying the most suitable class (SQL, Document, Columnar). This analysis stems from the BuyTicket and CheckCar Use Cases, which fall under the Online Transaction Processing (OLTP) category.
If we add Online Analytical Processing (OLAP) use cases — like financial statistics, parking forecasts, or route planning for ParkingInspector — we might need additional database types, such as a Data Warehouse, Data Lake, Data Lakehouse, Timeseries or Graph Database.
Additionally, time-to-market, available skills, cost per transaction, and the operational environment often push the final choice toward a certain option, even if it’s not ideal from a purely technical perspective.
The “Ports and Adapters” pattern can reduce the impact of this choice on the Application core. The degree of independence varies — it’s more achievable for the Command side of OLTP, with greater limitations on the Query side and even more for OLAP. While Ports can abstract basic CRUD operations, ad hoc SQL queries may still leak database-specific details.
As noted, different Ports, in this case ForStoringTickets and ForStoringParkingRates, may use different types of data stores.
Next, we’ll use the System Supplementary Specification to select a cloud platform for development and deployment, applying the SBE approach to preserve the System’s infrastructure flexibility.
Selecting Cloud Platform
The choice of cloud platform is rarely free and is often guided by the System Supplementary Specification, as introduced earlier. It also depends on the company’s business model: pure Software as a Service (SaaS) or Independent Cloud Software Vendor (Cloud ISV). While SaaS vendors have more freedom in choosing a cloud platform that suits their business, Cloud ISVs often need to support multiple cloud platforms.
Typical factors influencing the cloud platform choice are:
- Brownfield Restrictions — other systems already deployed, established skills, contractual agreements, and operational practices.
- Regulatory Constraints — in many countries, certain applications (e.g., government, finance, defense, healthcare) may run only on a restricted set of certified cloud platforms.
- Available Services — while leading cloud platforms offer similar capabilities, some specialize in specific services (e.g., AI, edge computing).
The typical choice is between AWS, GCP, and Azure, as shown in the diagram below. Recently, more focused cloud platforms have gained popularity: Cloudflare (edge computing), DigitalOcean (container developers), and Databricks (data and AI). Some argue Kubernetes acts as a virtual computing environment, with cloud platforms serving as the underlying hardware.
These likely choices for the “Blue Zone” application are shown below:
Once the cloud platform is selected, we can map identified actor interface protocols, infrastructure elements, and other system components (e.g., Compute, Orchestration) to available cloud services. Using the SBE approach with Ports and Adapters, we isolate cloud services from the application core through Ports and wrap them in Adapters. As noted, different environments — development, testing, staging, and production — will likely require different Adapters to ensure availability, security, cost, and productivity. In the next sections, we’ll assume AWS was chosen and analyze three typical scenarios — High-Fidelity MVP, “Serverless First” Deployment, and Production Pivot — while keeping SBE flexibility in mind.
High-Fidelity MVP
A startup must find Product-Market Fit quickly — before funds run out — or close with minimal loss if it can’t. Once a fit is found, it must scale even faster. In this uncertain environment, multiple technology options often compete, and both quick-and-dirty development and premature overengineering are risks to avoid.
A Minimum Viable Product (MVP) helps find evidence of Product-Market Fit before seed money runs out. Feedback speed is the top priority — every startup’s journey begins with speculative assumptions about market needs, often as “a solution looking for a problem.”
Building an MVP goes beyond writing code — it’s about validating assumptions at different fidelity levels. A Low-Fidelity MVP is a rough prototype, often a simple sketch, to test if we understand the customer’s problem. A High-Fidelity MVP is closer to the final system, ensuring we’re on track with an emerging solution.
Transitioning from low-fidelity MVPs (e.g., Python scripts or Jupyter Notebooks) to high-fidelity MVPs that resemble the final product and run on cloud infrastructure is challenging. While we might already know our target cloud (AWS, as chosen earlier), there are still too many unknowns about actual load, required skills, and time to set up real cloud infrastructure.
Deploying a High-Fidelity MVP on a single Virtual Machine Instance with all Ports mapped to RAM-based Adapters, using a simple cloud-native orchestration tool like AWS CloudFormation, is a valid and often overlooked choice.
This option is shown below:
As a transitory stage, this configuration allows us to:
- Decide on the programming language (is Python sufficient, or do we need Golang, Rust, or C++?).
- Select third-party libraries (e.g., TensorFlow or PyTorch?), isolating them from the application core via additional Ports.
- Define project structure and internal communication (Linux Named Pipes may suffice to simulate channels and defer final decisions).
- Demonstrate a working system with moderately sized data sets to prospective customers and collect early feedback.
- As decisions about required cloud infrastructure emerge, simulate their usage with a local cloud stack.
- Experiment with hardware options (GPU, ARM, FPGA) unavailable for local development.
- Develop initial acceptance and unit tests to validate core application logic without the complexity of real deployments.
- Decide on Port interface granularity, domain data types, and Configurator nature (dynamic, static, or both).
In sum, starting with a single VM instance and RAM Adapters for all Ports is a cost-efficient way to begin the SBE journey, guided by Use Case Modeling and implemented through the Ports and Adapters pattern.
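The single-VM stage can be wired with a handful of RAM-backed Adapters behind the Ports identified earlier. All class names, sample data, and method signatures below are illustrative assumptions; only the payment test-double follows the README:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

class RamTicketStore:                    # RAM Adapter for ForStoringTickets
    def __init__(self) -> None:
        self._tickets: dict = {}
    def save(self, code: str, ticket: dict) -> None:
        self._tickets[code] = ticket
    def find(self, code: str) -> Optional[dict]:
        return self._tickets.get(code)

class RamRateStore:                      # RAM Adapter for ForStoringParkingRates
    def __init__(self) -> None:
        self._rates = {"blue": 2.0}      # euros/hour, sample data
    def rate_for(self, zone: str) -> float:
        return self._rates[zone]

class SystemClock:                       # Adapter for ForObtainingDateTime
    def now(self) -> datetime:
        return datetime.now()

class AlwaysApprovePayments:             # test-double for ForPayments
    def pay(self, card: str, amount: float) -> bool:
        return True

@dataclass
class MvpConfiguration:
    """Static Configurator: every Port mapped to a RAM-based Adapter,
    suitable for a single-VM High-Fidelity MVP."""
    tickets: RamTicketStore = field(default_factory=RamTicketStore)
    rates: RamRateStore = field(default_factory=RamRateStore)
    clock: SystemClock = field(default_factory=SystemClock)
    payments: AlwaysApprovePayments = field(default_factory=AlwaysApprovePayments)
```

When a real database or payment gateway is introduced later, only the Configurator's factory defaults change; the workflows behind the Ports do not.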
Once we’re confident we’re on the right track, the Serverless First Deployment, described next, is a candidate to test real deployment while maintaining SBE flexibility.
“Serverless First” Deployment
“Serverless First” doesn’t mean “Serverless Always.” Initial performance measurements might justify avoiding this approach for the first deployment. However, in most cases, it’s the right option due to its simplicity and lower initial cost. If we started with a single VM-based MVP, as described above, the transition to a serverless deployment can be gradual, introducing one service at a time while retaining flexibility through Ports and Adapters. For the “Blue Zone” application, there’s no reason to reject the serverless deployment option. This leads to the following AWS cloud service choices:
Based on the Actor interfaces and Ports analysis, we’ll:
- Implement the ForCheckingCars Port using AWS API Gateway and AWS Lambda Adapters, running the CheckCar workflow in Lambda.
- Implement the ForBuyingTickets Port with AWS S3 Static Website, AWS AppSync, and AWS Lambda.
- Implement the ForObtainingDateTime Port using the Linux system clock Adapter.
- Implement the ForStoringTickets and ForStoringParkingRates Ports with AWS Aurora Serverless (PostgreSQL-compatible).
- Implement the ForPayments Port using the Amazon MQ service.
- Use the AWS CDK (Cloud Development Kit) for cloud resource configuration and deployment.
Alternatively, we may consider:
- Using AWS API Gateway and AWS Lambda for the ForBuyingTickets Port to minimize cloud services, assuming a Server-Side Rendering Web UI for the BuyTicket Use Case (not depicted to avoid clutter).
- Implementing the ForStoringTickets and ForStoringParkingRates Ports with an AWS DynamoDB Adapter using the Single-Table Design technique (depicted).
Cloud service choices that deviate from initial analysis shouldn’t surprise us. Software development isn’t linear — practical factors like skills, time-to-market, and operational overhead can shift architecture decisions. Applying the SBE mindset with Ports and Adapters enables this flexibility, allowing us to swap technologies as needed.
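To make the serverless mapping concrete, a Driving Adapter for the ForCheckingCars Port could be an AWS Lambda handler behind API Gateway (proxy integration). The core workflow is stubbed here and the query parameter names are assumptions; the point is that the handler only translates, leaving business logic behind the Port:

```python
import json

# Stub for the application core reached through the ForCheckingCars Port;
# a real Configurator would inject the production CheckCar workflow.
def check_car(car_plate: str, zone: str) -> bool:
    return car_plate == "AB-123"        # illustrative stub

def handler(event: dict, context: object) -> dict:
    """AWS Lambda Adapter for the ForCheckingCars Driving Port.
    Translates an API Gateway proxy event into a Port call, and the
    result back into an HTTP response; no business computations here."""
    params = event.get("queryStringParameters") or {}
    try:
        active = check_car(params["plate"], params["zone"])
    except KeyError:
        return {"statusCode": 400,
                "body": json.dumps({"error": "plate and zone required"})}
    return {"statusCode": 200, "body": json.dumps({"active": active})}
```

Swapping API Gateway plus Lambda for ALB plus EKS later (see Production Pivot considerations) means writing a new Adapter with the same Port behind it, not rewriting the workflow.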
“Serverless First” doesn’t mean “Serverless Forever.” Once we collect operational metrics, receive feedback, and clarify the long-term product roadmap, we’ll need to re-evaluate our technology choices and consider new options, as explored next.
Production Pivot
Rephrasing a maxim attributed to Helmuth von Moltke the Elder: “No initial architecture blueprint survives the first encounter with real operations.”
Beyond obvious mistakes and miscalculations, there are objective reasons for architectural changes — sometimes dramatic:
- Increased Popularity: The service gains popularity, requiring additional security services like AWS WAF for network protection, AWS KMS for data encryption at rest, and AWS Certificate Manager for strong client authentication.
- Performance Optimization: The service’s popularity requires offloading the static website and parking zone rates, which rarely change, to a CDN like AWS CloudFront.
- Municipal Deployment: As a municipal service, the “Blue Zone” application should be deployed in AWS Local Zones for large metropolitan areas rather than a central region.
- Cost Efficiency: The “Blue Zone” application’s load is stable from 7:00 AM to 5:00 PM, with no activity outside these hours (parking is free). The AWS API Gateway and AWS Lambda combination is no longer cost-effective, so we migrate computation to Kubernetes using AWS ALB and AWS EKS to consolidate platforms.
- Database Concerns: Our DBAs dislike AWS Aurora Serverless limitations on PostgreSQL plugins and its lag behind the open-source version. Management is negotiating with Neon and Timescale for a cloud-neutral partnership.
- DevOps Preferences: Our DevOps team dislikes AWS CDK and is exploring replacements such as Helm, Pulumi, or OpenTofu, in line with the Kubernetes shift.
- Emerging Trends: Self-driving cars are proliferating in major metropolitan areas, and competitors are replacing ParkingInspectors with autonomous robots. We need to adopt LLMs and Agents to stay competitive.
Some of these new options are shown below:
The SBE approach, combined with the Ports and Adapters pattern, makes these changes possible and less disruptive, preserving flexibility for future pivots.
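As one concrete illustration of how little such a pivot has to touch, here is a sketch of the Lambda-to-Kubernetes migration: the same car-checking capability re-exposed as a plain WSGI callable (servable by, say, gunicorn behind AWS ALB/EKS) instead of a Lambda handler. The function and route names are assumptions; the point is that only this thin translation layer is rewritten, while the domain code behind the `check_car` callable stays untouched.

```python
import json
from typing import Callable


def make_wsgi_app(check_car: Callable[[str], bool]):
    """Kubernetes-era Adapter sketch for the ForCheckingCars Port: a plain
    WSGI callable replacing the earlier Lambda handler. `check_car` is the
    unchanged domain logic, injected exactly as before."""
    def app(environ, start_response):
        # Take the license plate from the last path segment, e.g. /cars/ABC123.
        plate = environ["PATH_INFO"].rsplit("/", 1)[-1]
        body = json.dumps({"plate": plate, "valid": check_car(plate)}).encode()
        start_response("200 OK", [
            ("Content-Type", "application/json"),
            ("Content-Length", str(len(body))),
        ])
        return [body]
    return app
```

The deployment artifacts change (a container image and Helm chart instead of a Lambda package), but the Port contract and the code behind it do not.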
Summing Up
From the discussion above, certain connections between various artifacts emerge:
- Use Case and Actor Combinations — determine “external” Ports.
- Use Case Event Flow Specifications — determine additional “internal” Ports.
- Detailed Actor Specifications — determine communication protocols for “external” Ports.
- Use Case Acceptance Tests — determine Port Interfaces and Domain Data Structures.
- Domain Data Structures — determine Data Store Schemas.
- Data Store Schemas and Use Case SLAs — determine Data Store types (SQL, Document, Columnar, etc.).
- Supplementary Specification — determines Cloud Platform.
- Time to Market — determines “external” and “internal” Port Adapters to Cloud Services.
- Port Adapters — determine the System Configuration.
- Programming Language and Runtime Environment — determine the type of Configurator (dynamic, static, or both).
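The last connection can be sketched as follows: a dynamic Configurator resolves each Port's Adapter from configuration at startup, whereas a static one binds concrete classes at build time. The registry, environment variable, and clock Adapters below are hypothetical names used only to illustrate the mechanism.

```python
import os
from datetime import datetime, timezone


class SystemClock:
    """Production Adapter for the ForObtainingDateTime Port."""
    def now(self) -> str:
        return datetime.now(timezone.utc).isoformat()


class FixedClock:
    """Test-double Adapter for the same Port."""
    def __init__(self, fixed: str) -> None:
        self._fixed = fixed

    def now(self) -> str:
        return self._fixed


# Hypothetical registry: a dynamic Configurator picks the Adapter by name
# at startup; a static Configurator would instead hard-wire the concrete
# class at build time (e.g. via code generation or compile-time DI).
CLOCK_ADAPTERS = {
    "system": SystemClock,
    "fixed": lambda: FixedClock("2024-01-01T07:00:00+00:00"),
}


def build_clock(name=None):
    """Resolve the clock Adapter from configuration (env var name assumed)."""
    return CLOCK_ADAPTERS[name or os.environ.get("CLOCK_ADAPTER", "system")]()
```

Dynamically typed runtimes like Python tend to make the dynamic variant trivial; statically compiled ones often favor the static variant, which is why the language and runtime shape this choice.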
The artifact connections presented above may create the impression that software development is a sequential, ordered progression from one stage to the next.
In reality, this rarely occurs; typical software development, especially in startup environments, tends to fluctuate messily from one area to another. The presence of Actors and Use Cases does not imply someone had the time, skills, or patience to document these specifications before coding started. Very often, stakeholders and users do not know what they want until they see what is possible.
The opposite extreme is also incorrect. These connections exist whether we document them or not. Use Cases provide a pragmatic context for system specification regardless of whether they were produced in advance or reverse-engineered after the first MVP was presented to prospects. Any system has Actors and Use Cases. These Use Cases have certain pre-conditions that must hold before they can start, usually several event flows (main and alternative), and result in post-conditions. System adherence to its Use Case specifications can, and should, be validated by a corresponding Acceptance Test Suite. That, in turn, determines which Ports the System needs.
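A minimal sketch of this derivation, with entirely hypothetical names: an acceptance test for the BuyTicket Use Case main flow, run against the core through fake Adapters. Writing such a test is exactly what forces the `ForStoringTickets` and `ForPayments` Port interfaces into existence, with no cloud dependency involved.

```python
class InMemoryTicketStore:
    """Fake Adapter for the ForStoringTickets Port (test-only)."""
    def __init__(self):
        self.tickets = []

    def save(self, plate, zone):
        self.tickets.append((plate, zone))


class RecordingPayments:
    """Fake Adapter for the ForPayments Port (test-only)."""
    def __init__(self):
        self.charged = []

    def charge(self, amount):
        self.charged.append(amount)
        return True


class BuyTicketUseCase:
    def __init__(self, store, payments, hourly_rate=2.0):
        self._store, self._payments, self._rate = store, payments, hourly_rate

    def buy(self, plate, zone, hours):
        # Pre-condition: a positive parking duration.
        if hours <= 0:
            raise ValueError("hours must be positive")
        # Alternative flow: payment declined, no ticket stored.
        if not self._payments.charge(self._rate * hours):
            return False
        # Main flow post-condition: a stored ticket exists.
        self._store.save(plate, zone)
        return True


def test_buy_ticket_main_flow():
    store, payments = InMemoryTicketStore(), RecordingPayments()
    use_case = BuyTicketUseCase(store, payments)
    assert use_case.buy("ABC123", "Zone-1", 2) is True
    assert payments.charged == [4.0]
    assert store.tickets == [("ABC123", "Zone-1")]
```

The pre-condition, main flow, alternative flow, and post-condition from the Use Case specification each map to a line of this test, which is why the Acceptance Test Suite pins down the Port interfaces and Domain Data Structures.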
However, impacts can flow in any direction, and the overall picture is rarely, if ever, completely consistent. By its nature, software development belongs to the Complex Adaptive Systems category and exhibits so-called Messy Coherence, which could be defined as:
“a state where systems are allowed to evolve organically with minimal constraints, resulting in patterns that are locally coherent but may appear disordered at a global level. It balances adaptability with just enough structure to enable function, without imposing rigid order that would stifle emergence.”
— Dave Snowden (paraphrased from multiple sources, including his blog and talks).
I took the liberty of illustrating this with the picture below:
In software development, few things follow a strictly linear order, yet they are not entirely arbitrary either. There are inner connections between artifacts. Some could be defined or assumed upfront, some will be discovered in retrospect, and most will be revised and changed more than once.
It is my hope that the journey from Use Cases to Cloud Infrastructure Adapters described in this publication will help make the decision process more systematic and fact-based, without creating false expectations of complete order where, by the very nature of the process, none exists.
To put the proposed structure into practical use, we need to go deeper into specific programming languages and frameworks. This will be the topic of future publications.