IfC-2023: Technology Landscape

Part Two: Five-Factor Analysis

Asher Sterkin
13 min read · Jan 31, 2023


In Part One of this series, I analyzed characteristics which, in my assessment, are common to all “Infrastructure from Code” (IfC) offerings. The main conclusion was that, as a whole, IfC technology is still at an early stage of development, which in turn warns against premature fixation and calls for leaving enough room for safe-to-fail experimentation.

In this part, I’m going to take a closer look at potential differentiators.

To find a minimal set of attributes, I first tried to work with three factors, tempted by the prospect of presenting everything in a neat graphical form (a kind of 3D cube with all IfC products scattered around), but I found three factors too limiting and eventually settled on five attributes, namely:

  • Programming Language — is an IfC product based on one or more existing mainstream programming languages, or does it embark on developing a new one?
  • Runtime Environment — does it use an existing runtime environment (e.g. Node.js)?
  • API — is it proprietary or some form of standard/open source? Cloud-specific or cloud-agnostic?
  • IDE — does it assume a proprietary, presumably cloud-based, Integrated Development Environment, or can it be integrated with one or more existing IDEs?
  • Deployment — does it assume deployment of applications/services to its own cloud account, or can the produced artifacts be deployed to the customer’s own cloud account?

We are going to review every attribute in some depth.

Programming Language

The first differentiator between various IfC products is whether a new, “cloud-native”, programming language is proposed or, on the contrary, some mainstream programming language is supported, and if so how many of them?

New Languages

At the moment, the following new programming languages have been proposed which might be classified as “next generation, cloud native”:

The main rationale behind an attempt to introduce a new cloud-oriented programming language is that it gives a chance to get rid of old legacy baggage accumulated by the mainstream programming languages and to come up with a clean syntax that better reflects cloud specifics.

History teaches us that, from time to time, new programming languages manage to take off, nurture a devoted and faithful community around them, and establish themselves as emerging trends. In the compiled-to-binary language space, Golang and Rust are probably the most notable success stories, while in the managed-runtime space those would probably be Clojure and Kotlin.

What exactly could a new, cloud-native language solve? Well, there are a number of issues that no known mainstream programming language addresses well:

  • Confusion between sequential, concurrent and parallel execution at different levels of isolation: process, thread, greenlet
  • Confusion between in-proc function call and out-of-proc message passing
  • Lack of common syntax for external event triggers
  • Lack of consistent data access control
  • Confusion between synchronous and asynchronous APIs
  • Confusion between blocking and non-blocking API call
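To make the last two confusions concrete, here is a minimal Python sketch (names are illustrative) of the same lookup written once as a blocking function and once as an asyncio coroutine; mainstream languages force every caller to know which variant it is dealing with:

```python
import asyncio

def get_user_sync(user_id: int) -> str:
    # Blocking call: the calling thread waits for the result.
    return f"user-{user_id}"

async def get_user_async(user_id: int) -> str:
    # Non-blocking coroutine: every caller up the stack must itself
    # become async in order to await it ("function coloring").
    await asyncio.sleep(0)  # stands in for real I/O latency
    return f"user-{user_id}"

print(get_user_sync(42))                # → user-42
print(asyncio.run(get_user_async(42)))  # → user-42
```

Both variants compute the same value, yet they cannot be composed freely: the synchronous version blocks a thread, while the asynchronous one splits the entire call chain into a separate dialect of the language.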

Of course, any new language will need to address all traditional issues such as:

  • Programming paradigm: procedural, functional, object oriented, actor-based
  • Static vs dynamic typing
  • Interpreted vs compiled
  • Compile- and/or run-time meta-programming support
  • Standard vs 3rd party libraries
  • Interoperability with other languages
  • Probably many more …

It’s important to note that attacking the cloud programming challenges mentioned above could generate some good ideas. Very often, such good ideas can then be reproduced, with more or less fidelity, in the mainstream programming languages. This, therefore, would be a noble attempt, very much worth the effort regardless of the final results.

Unfortunately, Ballerina and Ecstasy mentioned above, while proposing new approaches to cloud programming, do not seem to address the IfC challenge in its full scope, and are therefore excluded from further analysis.

Mainstream Programming Languages

The second group of IfC products opted for supporting one or more mainstream programming languages, trying to get the maximum out of what we already have. The main argument here is that there are already too many languages, each of which promised to solve the software productivity problem once and for all, and none has succeeded so far (the Linux kernel is still developed in C).

A more cynical argument would state that every new language attempt starts out neat and clean but accumulates exactly the same amount of weight as its predecessors, pointing to Java and Scala as the most spectacular, monstrous failures.

And yet another argument is that there is nothing special about cloud — it’s just a big supercomputer (see also here), and therefore there is no justification for any cloud-specific programming language.

There is indeed a fundamental mismatch between linear program text and the multidimensional, dynamic nature of the resulting software, and none of the existing mainstream languages addresses it well.

Therefore, the chances that a new language will solve this problem better are not high, and it is just better to stick with what we have.

Here is the list of existing products with one main language supported:

The Ampt analysis is based on Serverless Cloud, from which it has recently spun off, assuming that the same spirit, if not the full letter, is going to be preserved.

To many, the Vendia product might look like it does not belong here. However, Tim Wagner, the Vendia CEO and the father of AWS Lambda, is seriously talking about a new emerging Supercloud concept (not to be confused with the Cloudflare Supercloud), which is conceptually too close to the main IfC objective to be ignored. Using the GraphQL schema language might look a bit exotic, but apparently it fits fairly well in many practical data sharing use cases. Still, considering that it is not an exact fit, I decided to exclude Vendia from further analysis.

Supporting JavaScript normally assumes supporting TypeScript as well, but whether it makes any practical difference or not depends on specific implementation.

There are a number of IfC products that support multiple programming languages, as follows:

  • Klotho: JavaScript, Python, Golang, (Java, C# in development)
  • Nitric: JavaScript, Python, Golang, (Java, Kotlin, C# in development)
  • Cloudflare: JavaScript, Rust, C, COBOL, Kotlin, Dart, Python, Scala, Reason/OCaml, Perl, PHP, F#
  • Vercel: JavaScript, Golang, Python, Ruby

One may object that including Cloudflare is unfair since it’s not a potentially cloud-neutral IfC solution, but rather just a competing cloud platform. That’s true, but since Cloudflare supports so many IfC elements I decided to keep it for reference.

Runtime Environment

Another possible name for this category could be “Virtual Machine”. I opted for the name Runtime Environment because some programming languages (e.g. Golang and Rust) do not have a virtual machine. To bring some order to this messy topic, we need to distinguish between:

  • Language syntax
  • Compilation target (bytecode vs native machine code)
  • Calling and argument passing conventions (see Application Binary Interface)

Here we encounter a quite common confusion between programming language syntax, the compiler frontend that supports it, the compiler’s technological core, and the compiler backend targeting a particular runtime environment, which potentially supports integration with binary extensions developed in other languages. It is indeed complicated and confusing. To clarify the possible options a bit, the following diagram compares the most popular options in the Python ecosystem:

Fig. 1: Python Ecosystem

The diagram above does not imply that any of the existing IfC products, including my own CAIOS, supports all these combinations; it just demonstrates that such a possibility exists in principle.

Within the IfC technology analysis context, we therefore need to be more precise in specifying which programming languages and which runtime environments are supported.

Choosing one programming language for IfC does not automatically exclude other programming languages from being used for developing extension modules. All dynamic runtime environments (Python, Node.js, JVM, .NET) support binary extensions created from compiled programming languages such as C/C++, Golang, and Rust.
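As an illustration, here is a minimal Python sketch that loads the C math library as a binary extension of the running interpreter via ctypes (the same dlopen mechanism underlies modules compiled from C/C++, Rust, or Go); the fallback library name is an assumption for Linux:

```python
import ctypes
import ctypes.util

# Resolve and load the C math library; find_library returns e.g.
# "libm.so.6" on Linux, and the fallback name assumes a Linux host.
libm_path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_path)

# Declare the C calling convention for sqrt(double) -> double.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # → 3.0
```

The point is not the arithmetic but the boundary: the dynamic runtime hosts natively compiled code behind a declared binary interface, which is exactly what makes mixed-language IfC stacks feasible.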

In the diagram above, I did not depict the possibility of extending the JVM, .NET, or Node.js environments with Golang- or Rust-compiled modules, not because such a possibility does not exist, but because I am less familiar with these options and did not want to overcomplicate the diagram. Even if some options do not exist today, they will surely come tomorrow.

The ability to compile different programming languages to the same runtime environment, for example Java, Kotlin, Scala, and Clojure to JVM bytecode, does not necessarily mean IfC is equally supported for all of them, since the main question is which language is used, and how, for producing infrastructure templates.

The same logic applies to the ability to compile many (or, one might argue, any) programming languages to WebAssembly: it will not make a principal difference if IfC is not supported for these languages; in this case, they will be treated similarly to a Python extension written in C/C++.

Within the IfC context, dynamic binding granularity makes the main difference between bytecode (i.e. Python, Node.js, JVM, and .NET) and native machine code runtime environments. While the former have module-level dynamic binding granularity and, at least in principle, could import each module dynamically from cloud storage, native machine code runtime environments are restricted to the dynamically linked libraries of the underlying operating system (e.g. Linux shared objects) at best, and usually fall back to statically composed container images.
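A minimal Python sketch of module-level dynamic binding: the module source below is a plain string that could, in principle, have been fetched from cloud object storage instead of the local file system (the module name and handler are hypothetical):

```python
import importlib.util
import sys

# Module source as text; an IfC runtime could download this from
# cloud object storage (e.g. an S3 bucket) at import time.
source = (
    "def handler(event):\n"
    "    return {'status': 200, 'event': event}\n"
)

# Materialize a module object without any file on disk.
spec = importlib.util.spec_from_loader("remote_module", loader=None)
module = importlib.util.module_from_spec(spec)
exec(compile(source, "<cloud-storage>", "exec"), module.__dict__)
sys.modules["remote_module"] = module

print(module.handler({"path": "/"}))
```

Nothing comparable exists for a statically linked native binary: the unit of late binding there is a shared library or a whole container image, not an individual module.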

Therefore, compiling JVM languages such as Java or Kotlin to binary code using GraalVM automatically eliminates this cloud-based dynamic binding potential.

Based on what is known from publicly available sources the IfC Runtime Environments landscape looks as follows:


API

This is where most of the IfC magic is supposed to be actualized. How does a prospective developer specify which cloud resources to consume, which APIs to expose to the outside, and which internal system events to process?

From these specifications, the IfC machinery is supposed to produce, one way or another, Infrastructure as Code templates and the required invocations of SDK functions (for a more detailed discussion of IfC transformations of the original application code, see my previous article).
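As a toy illustration of this transformation (all names, including the `cloud.bucket` helper and the resource type, are hypothetical and not taken from any actual IfC product), the sketch below scans application source for resource usage and emits a matching resource declaration:

```python
import ast

# Application code as it might be written against a hypothetical
# IfC library; `cloud.bucket(...)` declares intent in plain code.
app_source = '''
data = cloud.bucket("invoices")

def handler(event):
    return data.get(event["key"])
'''

# Walk the AST and collect every literal bucket name passed to
# a `.bucket(...)` call: this is the "resource acquisition" signal.
buckets = [
    node.args[0].value
    for node in ast.walk(ast.parse(app_source))
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Attribute)
    and node.func.attr == "bucket"
    and node.args
    and isinstance(node.args[0], ast.Constant)
]

# Synthesize an IaC-style template fragment from the findings.
template = {"Resources": {name: {"Type": "ObjectStore::Bucket"} for name in buckets}}
print(template)  # → {'Resources': {'invoices': {'Type': 'ObjectStore::Bucket'}}}
```

Real IfC products are, of course, far more sophisticated (type inference, interceptors, compiler plugins), but the shape of the pipeline, application code in, infrastructure template plus SDK wiring out, is the same.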

There is a certain tradeoff between a programming language’s core syntax and the various extensions developed as its standard library and 3rd party libraries. Dynamically typed languages tend towards a minimal core, while statically typed languages incorporate more into their core syntax.

Lisp, and after it Clojure, is probably the most extreme example of a minimal-core, dynamically typed language (mostly () brackets) extended via standard or user-defined libraries and macros.

Syntax-rich, strongly typed languages, such as Haskell, C++, and Scala, pull more weight within the core. Many popular languages, such as Python, take a stand somewhere between these two extremes.

Some IfC products introduce proprietary APIs, which may or may not explicitly mention the cloud. The fact that such a proprietary API might be open-sourced by the IfC product vendor does not make any difference; it is still proprietary, at least for the time being.

Others may try to convert the programming language standard library and popular Open Source APIs into cloud artifacts.

Orthogonally to whether the main API is proprietary or not, the IfC vendor may provide integration with other Open Source libraries, implementing the so-called “Bring Your Own Framework” approach. If such support is not announced, it does not mean it is impossible; it just means the IfC vendor did not introduce any special features for supporting those libraries.

In addition, solutions for dynamic languages (e.g. Python) may use code decorators in order, for example, to specify HTTP request routing in a proprietary or a popular Open Source way.
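A minimal sketch of the decorator approach (the `route` helper below is hypothetical; Flask and FastAPI expose the same shape): the decorator registers the handler for runtime dispatch and leaves metadata an IfC tool could read to generate API gateway configuration:

```python
# Registry of (method, path) -> handler, filled in at import time.
routes = {}

def route(method, path):
    """Hypothetical routing decorator in the Flask/FastAPI style."""
    def wrap(fn):
        routes[(method, path)] = fn
        # Marker an IfC compiler could scan to emit gateway config.
        fn.__ifc_route__ = (method, path)
        return fn
    return wrap

@route("GET", "/health")
def health():
    return {"ok": True}

print(routes[("GET", "/health")]())  # → {'ok': True}
print(health.__ifc_route__)          # → ('GET', '/health')
```

Because decorators run at import time, the routing table exists before any request is served, which is exactly what makes them usable as an infrastructure specification and not just a runtime convenience.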

The following list summarizes what is known at the moment based on publicly available information:

  • Encore: proprietary, cloud-agnostic, main API, decorators for HTTP routing, no support for 3rd party Open Source libraries
  • Shuttle: proprietary, cloud-agnostic, main API, heavy usage of decorators, support for 3rd party Open Source libraries (web, db)
  • Chalice: proprietary, cloud vendor-specific main API, decorators for event triggers, no support for 3rd party Open Source libraries
  • Modal: proprietary, cloud-agnostic main API, uses decorators, support for Machine Learning Open Source libraries
  • CAIOS: language standard or 3rd party Open Source cloud-agnostic API, optional support for decorators (e.g. HTTP request routing), open-end architecture supports onboarding of missing 3rd party Open Source libraries
  • Klotho: Open Source APIs, cloud-agnostic, heavy usage of annotations, support for adding additional 3rd party Open Source libraries
  • Nitric: proprietary, cloud-aware main API, decorators for HTTP routing (at least), no support for 3rd party Open Source libraries
  • Ampt: proprietary, cloud-aware, main API, no usage of decorators, extendable via the Bring Your Own Framework mechanism
  • Cloudflare: proprietary, cloud-aware, main API, no decorators, no support for 3rd party Open Source libraries
  • Dark: proprietary main API, no decorators, no support for 3rd party Open Source libraries
  • Vercel: standard language APIs, cloud-agnostic, usage of decorators depends on framework, support for onboarding 3rd party libraries
  • Wing: proprietary, cloud-aware, main API, no decorators, support for JavaScript libraries, but presumably those which do not require access to cloud resources


IDE

An Integrated Development Environment is a critical element of the software development process. Debates about which one is superior and which is inferior reach almost religious heat. As usual, there is a lot of confusion about the various elements of an IDE: for example, where a code editor stops and a fully fledged IDE starts, whether Web-based IDEs will ever reach the capabilities of desktop ones, or whether Jupyter Notebook is the IDE of the future.

Within the context of IfC, the question is whether a product comes with its own IDE or counts on mainstream products, and if so, at which level of integration (plugins, etc.). And how is the IDE, if any, combined with or complemented by a Command Line Interface?

One needs to keep in mind that a development environment is the first and strongest brand-awareness building tool. Here, every smallest detail counts: logo, colors, themes, names, etc. If anything within IfC requires a strong hand of Product Management, it is exactly the development environment and tools.

Also, note that the more proprietary the language/APIs are, the more investment in development tools, such as IntelliSense and/or static code analysis, might be required.

At the moment, IfC development tools landscape looks as follows:


Deployment

This is the last attribute I chose for comparing existing IfC products. The question is where the customer’s cloud program will run: in her/his cloud account, in the IfC vendor’s account, or both?

Running IfC services and applications in the IfC vendor’s cloud in fact implements the idea of the Supercloud in its fullest, saying to the customer: “Don’t worry about which cloud it runs on; we will take care of this in the most secure and cost-efficient way”.

The second strong advantage of this option is that it allows the IfC vendor to implement various performance optimizations, slashing down the development round-trip time dramatically.

The counter-argument to this approach is that serious enterprise customers need full control over security, privacy, and business continuity, and that the IfC vendor deployment option is good only for green-field deployments, while in most cases we need to deal with brown-field deployment scenarios.

Here is the current status of IfC Deployment Options:

Wait a Minute! What About …?

There are many questions which may arise from such an analysis. Here is my sincere effort to answer some of the most anticipated ones.

What About Observability?

In his “Serverless 2023 — A Shift in Focus” article Allen Helton rightfully stated that “Observability is often an afterthought. It shouldn’t be. It is a critical part of a serverless architecture.”

If so, why isn’t it included in my IfC product analysis? Not because I think it is unimportant; quite the opposite. In the previous series of my IfC articles, I argued that any IfC solution should properly address all four aspects of transforming application code into cloud infrastructure templates: resource acquisition, configuration, consumption, and operation support.

Therefore, proper support for Observability, as for other operational aspects such as security, is supposed to be an integral part of any offering. Whether it is tightly integrated internally or comes with a more modular plug-and-play structure is another question.

In this analysis, I was mostly interested in strategic choices, which, in my view, will determine a specific way of providing proper support for IfC applications and services operation.

What About Kubernetes and Terraform?

The answer is basically the same as for Observability. In the previous series, I suggested considering four types of interaction with various cloud services, potentially supplied by different vendors, using various levels of APIs and deployed at potentially different locations.

Following this model, Kubernetes is just one possible compute service, normally supplied by the cloud platform, though it does not have to be. Terraform, on the other hand, is a cloud resource orchestration service supplied by an alternative vendor, HashiCorp.

I therefore do not consider the decision to support, or not to support, either of these two as a strategic one. In my model, in both cases, brownfield deployment constraints are going to be the prevailing factors.

What About Cloud Portability?

This is actually a good question, and the answer is: it is determined by the decision about what kind of API is exposed to the application.

Indeed, if the decision was made to expose a cloud-specific API directly to the application, as is done, for example, in Chalice or Cloudflare, then the application is locked to that particular cloud platform by definition, and the question of cloud portability of the IfC product itself is unlikely to make any sense.

If, on the other hand, the decision was made to expose completely cloud-neutral APIs to the application, as is done, for example, in Klotho or CAIOS, then the application code might be cloud-neutral (depending on whether it is still possible for the application developer to use cloud-specific APIs, and whether (s)he decided to actually use them). In this case, cloud portability of the IfC product itself belongs to its roadmap and is most likely subject to available resources and actual market demand.
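A minimal sketch of what such a cloud-neutral API could look like (all names hypothetical): the application codes against an abstract store, and only a per-cloud adapter, selected at deployment time, is cloud-specific:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Cloud-neutral contract the application depends on."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Local stand-in; real adapters would wrap boto3, google-cloud-storage, etc."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

def save_report(store: ObjectStore) -> bytes:
    # Application logic never mentions any specific cloud.
    store.put("report.txt", b"quarterly numbers")
    return store.get("report.txt")

print(save_report(InMemoryStore()))  # → b'quarterly numbers'
```

Swapping `InMemoryStore` for an S3- or GCS-backed adapter leaves `save_report` untouched, which is precisely the portability property discussed above; the lock-in question then moves into the adapters and the IfC product's roadmap.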

What About Engineering Platform?

Another good question. Engineering platforms, such as Spotify Backstage are supposed to “restore order to your infrastructure and enable your product teams to ship high-quality code quickly — without compromising autonomy”.

If an IfC product generates infrastructure specifications from an application code, does this mean it will make any Engineering Platform unnecessary?

Not exactly. Whichever of the strategic choices outlined above were made, the IfC product will need to support templates for typical services/applications, online help, some catalog of available APIs, “how to extend” guidelines, and much more. Whether all this is going to be part of the IDE or will require integration with some external tool, such as Spotify Backstage, will depend on the decisions made.

This topic might require additional elaboration in a separate publication.

All I See are Trees. Where is the Forest?

Part Three of this series is devoted to answering this question.



Asher Sterkin

Software technologist/architect; connecting dots across multiple disciplines; C-level mentoring