IfC-2023: Technology Landscape
Part Three: An “Isometric” View and Summary
In Part One of this series, I analyzed characteristics which, in my assessment, are common to all “Infrastructure from Code” (IfC) offerings. The main conclusion was that, as a whole, IfC technology is still at an early stage of development, which in turn warns against premature fixation and calls for leaving enough room for safe-to-fail experimentation.
In Part Two of this series, I analyzed the 12 IfC products I am aware of, using five factors, namely:
- Programming Language — does an IfC product build on one or more existing mainstream programming languages, or does it embark on developing a new one?
- Runtime Environment — does it use an existing runtime environment (e.g. Node.js)?
- API — is it proprietary or some form of standard/open source? Cloud-specific or cloud-agnostic?
- IDE — does it assume a proprietary, presumably cloud-based, Integrated Development Environment, or can it be integrated with one or more existing IDEs?
- Deployment — does it assume deployment of applications/services to its own cloud account, or can the produced artifacts be deployed to the customer’s own cloud account?
The outcome of such a detailed analysis was insightful and confusing at the same time.
It was insightful since it allowed me to look not only at existing IfC products but also at the strategic implications of the different choices available, including choices not necessarily made by any actual product.
It was confusing since it left unsatisfied the desire to grasp the big picture, even at the cost of fewer details.
In this part, I want to check whether there are two factors I could combine in a 2D matrix to generate some additional insight into the IfC technology potential and available strategic choices.
Language and Runtime Environment Map
After some experimentation, I came up with the following table:
My logic goes as follows. IfC is first and foremost about deriving cloud infrastructure specifications directly from the application code; therefore, which language is supported turns out to be the most important factor.
The ability to bind in-proc with modules developed in other languages is primarily determined by the runtime environment. There is, however, an additional possibility: integrating with other runtimes via out-of-proc mechanisms such as a sidecar or a workflow.
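A minimal Python sketch of the trade-off between the two integration styles; it uses CPython's bundled `hashlib` C extension as the in-proc case, and a hypothetical sidecar HTTP endpoint (the URL and JSON shape are invented for illustration) as the out-of-proc case:

```python
import hashlib  # a C extension loaded in-process by the CPython runtime
import json
import urllib.request


def in_proc_digest(payload: bytes) -> str:
    """Call an in-process module: no serialization, no network hop.

    The runtime environment (here, CPython) does the cross-language
    binding for us by loading the native extension into the process.
    """
    return hashlib.sha256(payload).hexdigest()


def out_of_proc_digest(payload: bytes, sidecar_url: str) -> str:
    """Call a hypothetical sidecar over HTTP.

    This works across any runtime/language pair, at the cost of
    serialization and a network round trip.
    """
    req = urllib.request.Request(
        sidecar_url,
        data=json.dumps({"payload": payload.decode()}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["digest"]
```

The in-proc path is cheap but tied to one runtime; the out-of-proc path is runtime-agnostic but pays a serialization and latency tax, which is exactly why the runtime environment choice matters so much.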
We could, therefore, come up with the following mapping between IfC responsibilities and Hexagonal Architecture ports, adapters, applications, and core:
Concluding Remarks and Way Forward
What follows from the IfC Hexagonal Architecture diagram presented above is that IfC is supposed to take upon itself the automatic generation of cloud and internal service adapters, the translation of application logic into a cloud-native workflow, and the binding of the whole structure together through the supported API ports. What is important to notice is that IfC is not supposed to interfere whatsoever with any subdomain, be it the Core Domain or the Supportive and Generic Subdomains. The latter are supposed to be completely independent of the IfC choices for Language, Runtime Environment, and APIs.
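As a rough illustration (not taken from any actual IfC product), the separation could look as follows in Python: the core depends only on a port, and the adapter behind that port is exactly the part an IfC compiler would be expected to generate. All class names here are hypothetical:

```python
from typing import Protocol


class MessagePort(Protocol):
    """A port: the only contract the application core knows about.

    An IfC compiler would generate the cloud-specific adapter
    (e.g. for SNS or a message queue) behind this interface.
    """
    def publish(self, topic: str, body: str) -> None: ...


class OrderCore:
    """Core domain logic: pure Python, no cloud or IfC imports."""

    def __init__(self, messages: MessagePort) -> None:
        self._messages = messages

    def place_order(self, order_id: str) -> str:
        # The business rule lives here, untouched by infrastructure choices.
        confirmation = f"order-{order_id}-confirmed"
        self._messages.publish("orders", confirmation)
        return confirmation


class InMemoryMessages:
    """A test adapter; in production, IfC would supply a cloud one."""

    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def publish(self, topic: str, body: str) -> None:
        self.sent.append((topic, body))
```

Because `OrderCore` depends only on the port, swapping the generated cloud adapter for the in-memory one changes nothing in the domain code, which is the independence property claimed above.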
Is such a high level of flexibility at least theoretically possible? The following diagram, using Python and AWS as examples, suggests what it could look like:
What this diagram suggests is that Application Logic could be programmed in one of the programming languages supported by the IfC compiler, say Python, and automatically translated into a cloud-native workflow specification, say AWS Step Functions (for details of this approach, look here).
This workflow could invoke multiple computation units, say AWS Lambda Functions, in the general case defined in different programming languages for different runtime environments.
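A toy sketch of what such a translation could emit. The `build_state_machine` helper and the ARNs are invented for illustration, but the emitted structure follows the Amazon States Language (ASL) that AWS Step Functions actually consumes:

```python
def build_state_machine(steps: list[str]) -> dict:
    """Translate an ordered list of Lambda task names into a minimal
    Amazon States Language definition.

    This is a toy stand-in for what an IfC compiler might emit from
    straight-line application code: each step becomes a Task state
    chained to the next via "Next", and the last state sets "End".
    """
    states: dict[str, dict] = {}
    for i, name in enumerate(steps):
        state: dict = {
            "Type": "Task",
            # Hypothetical ARN; a real one includes region and account.
            "Resource": f"arn:aws:lambda:::function:{name}",
        }
        if i == len(steps) - 1:
            state["End"] = True
        else:
            state["Next"] = steps[i + 1]
        states[name] = state
    return {"StartAt": steps[0], "States": states}
```

For example, `build_state_machine(["Validate", "Charge", "Ship"])` yields a three-state sequential machine starting at `Validate`; branching and error handling would of course require far more than this sketch.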
Each particular computation unit, say one programmed in Python, could make in-proc calls to various extensions supported by its runtime environment, or out-of-proc calls to sidecars developed in any supported language and runtime environment.
The computation unit itself could communicate with cloud resources via adapters to the corresponding platform SDK, say AWS Boto3.
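As a sketch of such an adapter (the port and class names are invented for illustration), a storage port could wrap an injected boto3-style S3 client; injecting the client rather than constructing it inside the adapter keeps the adapter testable without AWS credentials:

```python
class StoragePort:
    """Port the application core depends on; an adapter fills it in."""

    def save(self, key: str, data: bytes) -> None:
        raise NotImplementedError


class S3StorageAdapter(StoragePort):
    """Adapter over an injected S3 client, e.g. boto3.client("s3").

    An IfC-generated adapter would ultimately issue the same
    put_object call that boto3 exposes for S3.
    """

    def __init__(self, s3_client, bucket: str) -> None:
        self._s3 = s3_client
        self._bucket = bucket

    def save(self, key: str, data: bytes) -> None:
        # Real boto3 S3 call signature: put_object(Bucket=..., Key=..., Body=...)
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)
```

In production the injected client would be `boto3.client("s3")`; in tests, any object with a compatible `put_object` method will do.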
The workflow could also communicate with cloud resources directly, while the out-of-proc integration could be with binary modules compiled from Golang, Rust, C/C++. I decided to skip over these details in order not to overcomplicate the picture.
It looks like such a completely open-ended architecture is indeed possible, at least in theory.
In the previous series, I suggested analyzing IfC implementations of four types of interaction with the cloud, namely acquisition, configuration, consumption, and operation, across four pillars: services, vendors, APIs, and locations.
In this series, I introduced another five technological factors for IfC products differentiation: language, runtime environment, APIs, IDE, and deployment.
If this looks too complicated and confusing to you, you are right: it is. My only concern is that the proposed analytical framework is not messy enough; rather, it is too neat and structured, and thus unrealistic.
This is so because reality is messy, confusing, and at times just illogical. Imposing artificial order prematurely would only make matters worse. Order, if any, is supposed to emerge organically from multiple product-market fit probes and technological experiments. In this regard, the IfC technology is no different from any other Complex Adaptive System at its early, just-out-of-chaos stage.
Connecting the Dots
With all its imperfections and complexity (I wish I could reduce the number of dimensions I need to think about), I found this analytical framework coherent enough, in the sense that it contains no unresolvable inner contradictions.
That makes it a valid candidate for moving further. Here are my major takeaways from this and the previous series analysis:
- Decisions about programming language and runtime environment support appear to have a major strategic impact. In particular, these two choices determine the potential level of interoperability with existing solutions/libraries.
- The decision about the nature of the APIs exposed to the application comes right after, and to a large degree determines how a particular IfC product is going to be used. Specifically, this choice determines the level of application code portability between different cloud platforms.
- Decisions about IDE and Deployment, while probably not carrying the same weight in terms of underlying technologies, have a tremendous impact on overall system usability, operational efficiency, security, strategic partnerships, brand building, and cost.
- Decisions about which cloud services from which vendor to support, and in which order, appear to belong to a particular IfC product’s roadmap, reflecting its business model and go-to-market strategy.
- Providing a proper implementation for selected cloud resource acquisition and consumption seems to be required from the outset (maybe with some small variations).
- Level of support for resource configuration and operation may vary depending on tactical considerations, but neglecting them for too long would incur substantial risk for any serious production deployment.
- Decisions about which cloud vendor API level to use primarily determine potential efficiency and implementation effort and, to some degree, the level of portability of the IfC product itself between different clouds.
- Considering the current stage of IfC technology evolution, decisions about supporting different deployment locations (for a detailed discussion of this topic, look here) seem, with a few exceptions (some IfC products are edge-computing oriented from the very beginning), to be limited to the core cloud platform, with gradual extension toward additional location options in the future.
Resolving the Terminology Conflicts
While preparing the summary above, I spotted two potential sources of terminological inconsistency between this and the previous series:
- In the previous series, by API I meant the cloud vendor API used to implement a particular interaction with a cloud service. In this series, by API I meant the API exposed to the application code.
- In the previous series, by Deployment Option I meant the deployment location option (e.g. central cloud, edge, etc.) for the programming artifacts produced by the IfC product in question. In this series, by Deployment Option I meant the cloud account ownership for the IfC product itself and the artifacts it produces.
This series was my sincere effort to capture the IfC technology landscape as I see it today, at the beginning of 2023. It would be interesting to come back to this analysis in 2024 to check what changed. Hope to see you then.