Great question! I will try to answer by analogy. Modern processor architectures (Intel, ARM, RISC-V, GPUs) also have tons of tuning parameters: multiple levels of memory, different co-processors, large vs. reduced instruction sets. Optimizing all of these parameters manually is beyond the mental capacity of ordinary people, so we rely on operating systems (e.g. Linux), compilers, and run-time environments. From time to time, a very few of us need to go under the hood, tweak something, and immediately package it in the form of a library that takes care of itself (think of a typical ML library which automatically detects the presence of a GPU and configures itself accordingly).
In our current solution, there is a default configuration which is good enough for developing a typical service. For more advanced use cases, an accompanying service_config.py can be supplied to provide additional specifications, be they for security or resource optimization. Unlike other approaches, which push all of this into JSON/YAML, service configuration is just a Python function that receives the deployment mode (dev, test, etc.) and the service class definition. It can return static data, or perform arbitrary computation or database access. What we envision is that many fine-tuning parameters will be detected automatically based on static code analysis and collected operational data (e.g. the proper amount of Lambda RAM, whether we need provisioned concurrency). Others will be determined automatically based on the type of API and the security mode (do we need a VPC?).
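To make this concrete, here is a minimal sketch of what such a service_config.py could look like. The function name, its signature, and the configuration keys (lambda_ram_mb, provisioned_concurrency, vpc, security_mode) are all illustrative assumptions on my part, not the actual contract of our solution:

```python
# service_config.py -- illustrative sketch; names and keys are hypothetical.
from typing import Any


def service_config(deployment_mode: str, service_class: type) -> dict[str, Any]:
    # Static defaults that cover the typical service.
    config: dict[str, Any] = {
        "lambda_ram_mb": 512,           # could later be inferred from operational data
        "provisioned_concurrency": 0,
        "vpc": False,
    }
    # Because this is plain Python, we can branch, compute, or even
    # query a database here instead of hard-coding JSON/YAML values.
    if deployment_mode == "prod":
        config["lambda_ram_mb"] = 1024
        config["provisioned_concurrency"] = 5
    # Properties of the service class (e.g. a hypothetical security_mode
    # attribute) can imply infrastructure decisions such as needing a VPC.
    if getattr(service_class, "security_mode", None) == "private":
        config["vpc"] = True
    return config
```

The point is that the configuration is executable code: it starts from static defaults and selectively overrides them based on anything it can compute at deployment time.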
Makes sense?