IfC-2023: Technology Landscape
Part One: Common Background
“Infrastructure from Code” (IfC) is a relatively new technology focused on converting pure application code into cloud infrastructure specifications.
Last year, it was featured by Jeremy Daly at re:Invent 2022 in Las Vegas. Still, it remains largely unknown to the wider public. Google search results for “Infrastructure From Code” are mostly composed of links to “Infrastructure As Code” publications. On the other hand, there are already about a dozen players in this field.
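To make the core idea tangible, here is a minimal, deliberately hypothetical sketch of what “converting pure application code into cloud infrastructure specifications” might look like. The decorator, registry, and inferred specification below are my own inventions for illustration only, not the API of any actual IfC product:

```python
from typing import Callable, Dict, List

# Hypothetical registry an IfC tool could inspect at build time.
_ROUTES: List[Dict[str, str]] = []

def http_get(path: str) -> Callable:
    """Hypothetical marker: flags a plain function as an HTTP GET handler."""
    def wrap(fn: Callable) -> Callable:
        _ROUTES.append({"path": path, "handler": fn.__name__})
        return fn
    return wrap

@http_get("/orders")
def list_orders() -> dict:
    # Pure application code: no cloud SDK calls, no templates.
    return {"orders": []}

def infer_infrastructure() -> Dict:
    """Stand-in for the IfC step: derive an infrastructure spec from code."""
    return {
        "api_gateway": {"routes": [r["path"] for r in _ROUTES]},
        "functions": [r["handler"] for r in _ROUTES],
    }
```

A real IfC tool would emit something like a CloudFormation or Terraform document rather than a Python dict, but the direction of the derivation, from application code to infrastructure, is the point.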
In my first article on the subject I suggested considering IfC as a natural next step in cloud application infrastructure automation.
In this three-part series, my objective is to provide a more accurate answer to the question: precisely what kind of problems is IfC trying to solve?
I’m going to take a closer look at the fundamental constraints that shape the technology choices, and at which choices the current players have made. What is common, and what is different, in their offers? Are they competing for the same market segment? How big is the overall potential, and how much of it is still uncovered by the existing offers?
This is the first part of a three-part series organized as follows:
- Part One: Common Background
- Part Two: Five-Factors Analysis
- Part Three: An “Isometric” View and Summary
Since I’m the head of technology of one of the IfC products, namely Cloud AI Operating System (CAIOS), I cannot be 100% objective and neutral. Like any other player in this field, I’m biased towards particular technology and business choices. The only thing I can do is openly admit it upfront.
To avoid any conflict of interest, I will base my analysis solely on publicly available information, without trying any of the competing products hands-on or even registering on the corresponding websites.
I will also avoid, as much as I can, direct marketing of my own product. This series does not in any way reflect the official stance or business strategy of my parent company, BlackSwan Technologies, or of the product I’m responsible for.
In short, treat this series as a purely professional treatise on a subject I’m sincerely interested in, one that reflects my own understanding of the subject matter.
With about a dozen players in the field, the question is how we could compare them against one another. Which factors are essential, and which could we ignore for the moment? How many such factors do we need, and why those over others?
Many analytical frameworks use some form of N x N matrix, probably reflecting the fact that, on paper, we are limited to two-dimensional space. We are also far more accustomed to Cartesian structures than to more generic Semantic Spacetime ones.
Allen Helton analyzed three IfC frameworks using five attributes:
- Time to build — how quickly was I able to get something up and running
- Approach — what was the methodology used to abstract or infer the infrastructure from the code
- Deployment — how easy was it to put what I had written in the cloud
- Performance — was the API latency within an acceptable threshold
- Visibility — how much can I see, alter, or manage the infrastructure that was generated
While these criteria might be good for assessing the current maturity of the products, answering the question “what could I do with ’em today?”, that is not the question I’m trying to answer.
When looking at my product roadmap for the year ahead, I’m asking myself what the overall potential of IfC technology as a whole is, and only after that, to what degree it is exploited, or planned to be exploited, by the current players.
My basic assumption for this analysis is that until they reach true, conclusively proven product-market fit, all current offers are preliminary MVPs intended to test market acceptance and investors’ willingness to put in more money.
Ala Shiban from Klotho has recently published his analysis of a subset of existing IfC offers enumerating the following four major approaches to IfC implementation:
- Programming Languages (Wing and DarkLang)
- SDK-based (Ampt and Nitric)
- Annotations + Framework (Encore, Shuttle and AWS Chalice)
- Pure Annotations (Klotho)
Such a linear structure, progressing from “less something” to “more something”, is perfectly fine for a product technical whitepaper, where it justifies the technological choices made and demonstrates the product’s superiority over competing offers. My goal for this series is different.
I’m more interested in strategic options with potentially long-term impacts, avoiding, as much as possible, any premature good/bad or right/wrong conclusions. All products are bad unless they bring value and make money. All choices are wrong, unless it’s proven conclusively otherwise … in retrospect.
In my personal opinion, IfC technology is still too immature to be judged on the basis of immediate benefit. In any case, whether such benefits are real heavily depends on for whom and in which context. As declared above, within the scope of this series I’m going to focus on analyzing its potential and its hard-to-overcome constraints.
So, which approach should I choose?
On my list, there are currently 12 IfC products, including my own CAIOS, plus 3 outliers excluded for one reason or another. Assuming that I’m not going to fight the 2D page space limitation, that suggests putting them all as rows of some table. Now, the question is what the columns of such a table should be, how many of them, and for what purpose.
One possible way would be to start with one column and see if there are any useful insights we could extract from such an analysis. If not, go up with the number of columns, keeping in mind the famous 7±2 limitation.
Technology Adoption Lifecycle
For a one-column analysis, to start with, let’s check whether Rogers’ Technology Adoption Bell Curve could fit the bill. I’m more accustomed to working with Geoffrey Moore’s variation, which explicitly mentions the Chasm.
As presented in Fig. 1 below, my assessment is that IfC technology as a whole is currently at the Early Market stage, mostly suitable for Early Adopters, who, as we know, need a high level of flexibility and openness in a system in order to be able to adjust it to their specific needs.
Based on the analysis presented above, the Technology Adoption Lifecycle placement by itself does not constitute a strong enough differentiator among the various IfC offerings.
Individual system openness and hackability (if I may permit myself such a term) can be assessed only by early adopters who have really tried to play with the system. For them, it will be important to know how to extend or fix the system if something is missing or does not work as expected.
Interestingly enough, cloud technology itself seems to be leaving the Early Market phase, approaching the Main Market, and looking for solutions that make the technology more accessible to pragmatic customers.
Type of Innovation
The type of innovation, e.g. disruptive vs sustaining, could be the next candidate basis for a comparative analysis of IfC offerings:
Within this context, I would dare to argue that at the moment IfC technology, as a whole, is still a disruptive innovation with capabilities below the mainstream needs line.
The main reason for this assessment is that the current generation of IaC template writers (sometimes called DevOps Engineers) and Cloud SDK programmers are already so deeply invested in this specialized knowledge that, at least at the beginning, they will consider IfC not serious, arguing that it will never reach the level of sophistication required for creating production-grade Terraform (or CloudFormation) and Helm templates, and will never be able to keep up with constantly changing Cloud Services SDKs. Once upon a time, the same arguments were used by machine language programmers against any compiler …
Following this assessment, at its current stage of evolution, IfC will mostly need to look after an underserved population — people who need to develop software for the cloud, but for whom the current IaC + SDK option is too cumbersome, labor-intensive, and/or prone to errors and security problems.
Steve Blank educated us in his seminal “The Startup Owner’s Manual” that “Market type influences EVERYTHING a company does”, so we probably also need to look at this axis:
- Bringing a new product into an existing market
- Bringing a new product into a new market
- Bringing a new product into an existing market and trying to:
— Re-segment that market as a low-cost entrant
— Re-segment that market as a niche entrant 👈
- Cloning a business model that’s successful in another country
As marked above, I hold the opinion that IfC technology, again as a whole, is primarily trying to re-segment the existing cloud software development market by suggesting a more affordable option to an underserved population.
To identify this underserved population, I sometimes use the term “programming-literate knowledge workers” — people who do not mind writing code to solve real business problems using the cloud, but do not have the time, skills, or patience to deal with the low-level nuts and bolts of the current IaC + SDK option.
One may argue that these “programming-literate knowledge workers” are actually citizen developers. My answer would be “not exactly”. Of course, it depends on interpretation, but the concept of “citizen developer” is usually applied to somebody who solves business problems by consuming “no code” or “low code” platforms not directly supported by the organization’s IT.
Therefore, a “knowledge worker” could play the role of “citizen developer” as long as (s)he can make progress within the constraints imposed by the no-code/low-code platforms available. When the answer is “no”, meaning the available no-code/low-code options are not flexible enough, not known, or just too expensive, the “knowledge worker” might opt for writing some code, as long as it does not require deep expertise in the underlying cloud technology.
IfC technology fills exactly this gap. Following the “The Startup Owner’s Manual” recipe, we could come up with the following market map (no worries: companies compete with business models rather than with high-level diagrams reflecting obvious facts):
What the diagram above is trying to convey is that IfC brings to the table a radical simplification of full-code development over the more traditional IaC + SDK options, and jailbreak flexibility over no-code/low-code options, while also drastically reducing vendor lock-in.
Further elaboration of this model would take us into the Value Proposition Design area of each individual product, which is beyond the scope of a technology landscape survey and is usually done behind closed doors.
The Gartner Magic Quadrant plots companies across two axes: Completeness of Vision vs Ability to Execute. It is a potentially very useful model for identifying market leaders, but less so for technology analysis per se. For obvious reasons, I decided to leave the Gartner Magic Quadrant … to Gartner.
Wardley Maps also apply a two-factor analysis, plotting Value Chain vs Evolution. This is something we could try, as presented below:
Basically, it claims that IfC technology, at the moment, is at the Genesis Phase, which by definition is Uncertain and Risky. It is intended to simplify, presumably custom, Application Development by insulating it from the intricate details of the leading cloud platforms (AWS, Azure, GCP, etc.), with some exploitation of Open Source Software, which is Mature and Well-Understood.
Some Wardley Maps practitioners, including Simon Wardley himself, would probably object to my placement of the cloud platforms in the Product rather than the Utility category. I had my own reasons for doing this: from the IfC perspective, individual cloud platforms are different enough, and are not as widespread and mature as Open Source solutions (think MySQL or Apache Spark).
As with the IfC Market Map presented above, further elaboration of the IfC Wardley Map would take us into the problematic area of comparing the strategic plays of different IfC vendors, and that, again, will need to remain behind closed doors, at least for a while.
Applying the one- and two-factor analysis frameworks mentioned above to IfC technology as a whole leads to a coherent view of this technology as being at the initial stage of its development, with many options still open and unknown unknowns lying ahead.
If this analysis is correct, then premature fixation on particular choices could prevent the full potential of this technology from being unleashed, and is therefore potentially harmful. Running multiple smaller-scale, safe-to-fail experiments would probably have a better chance of generating scalable, sustainable, and repeatable business models.
Whether such a strategy is affordable for one vendor or is going to be shared by multiple vendors is another question, at the moment, beyond the scope of this series.