Type Systems and Incidental Versus Inherent Complexity
Note (epistemic status): This post is conjecture and hypothesis; its purpose is for me to work through ideas. No authority is claimed.
I am dissatisfied with my throughput as a product engineer[1].
While I hope and believe that I am at least an above-average developer, by throughput[2], in my tool sets of choice, I do not believe my throughput has increased significantly in the last year. Additionally, I am troubled by a nagging feeling that there must be a better way to develop software.
As a (chiefly) Ruby programmer who has never used a strongly and statically typed language professionally, I have been exploring type systems, type theory, and category theory to determine whether a type system might enable significant throughput increases.
In theory, the benefits of a type system are (a) reducing programming errors and revealing them sooner, and (b) for statically typed languages, enabling more-powerful program analysis.
In practice, the ability of a type system to reduce programming errors is restricted by the power[3] of the type system. The benefits of more-powerful program analysis are reduced by poor tools, and made unnecessary by powerful language features[4].
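To make benefit (a) and its limits concrete, here is a small Haskell sketch of my own (the names are invented for illustration): the type checker rejects a mismatched expression before anything runs, but a plain type says nothing about values, so some errors still slip through to runtime.

```haskell
-- The compiler rejects this definition outright, before the program runs,
-- because a String is not a number:
--
--   broken :: Int
--   broken = length "hello" + "world"   -- compile-time type error
--
-- But the plain type below says nothing about *values*, so this compiles
-- cleanly and only fails when called with a zero divisor:
unsafeDivide :: Int -> Int -> Int
unsafeDivide x y = x `div` y

main :: IO ()
main = print (unsafeDivide 10 0)   -- type-checks, then throws "divide by zero"
```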
What both of these amount to is this: a type system can make a programmer more effective at managing complexity. Ideally, it helps the programmer manage inherent complexity, the complexity required by the problem itself.
The cost of a type system is incidental complexity. When programmers complain about the boilerplate that languages like Java and C++ force for defining interfaces, they are complaining about having to manage complexity not inherent to the problem they are solving.
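As a rough illustration of what ‘declaring the interface’ looks like (in Haskell, where the ceremony is much lighter than in Java or C++; the names below are invented for the example), a duck-typed language like Ruby would skip the declarations entirely and simply call the method:

```haskell
-- The interface must be declared before anything can be written against it...
class Describable a where
  describe :: a -> String

-- ...and each participating type must explicitly opt in:
newtype User = User { userName :: String }

instance Describable User where
  describe u = "user " ++ userName u

-- Only now can a function accept "anything describable":
greet :: Describable a => a -> String
greet x = "hello, " ++ describe x

main :: IO ()
main = putStrLn (greet (User "ada"))
```

In Ruby, `greet` and its call site would be the whole program; the `class` and `instance` blocks are the extra structure the complaint is about.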
The “loss of flexibility” complaint is, in my mind, a symptom of extra incidental complexity; more complex systems are harder to change, regardless of whether the complexity is incidental or inherent.
The question, then, is: does a type system provide more benefit in managing inherent complexity than it costs in incidental complexity?
When phrased this way, the problem is more clearly a matter of choice: a programmer’s method of managing complexity is personal to them.
1. I use the term ‘product engineer’ to mean ‘a software engineer who has and takes responsibility for designing product features as well’.↩
2. “Throughput” in the sense used by Eliyahu M. Goldratt in his books; I choose it to avoid the existing connotations and preconceptions of ‘productivity’.↩
3. I have no precise definition of ‘power’ or ‘powerful’, but: (a) a type system’s power increases with its ability to correctly and automatically infer types, and (b) a type system’s power increases with the range of restrictions that can be placed on a type. In the second sense, Liquid Haskell’s refinement types are more powerful than regular Haskell types, and Idris' dependent types are more powerful than both in terms of specificity (see the sketch after these notes). The two senses of power can be made to trade off against each other; whether Liquid Haskell’s type system is more powerful than Idris' is a matter of debate, as I do not know how to weigh Idris' dependent types against Liquid Haskell’s much better automatic type inference.↩
4. A “powerful” language can be taken to mean something like an ‘expressive’ one; think of the trade-off between Java, with its great IDE refactoring tools, and a ‘more powerful’ language like Ruby, where you rarely need those tools in the first place. The ability to manipulate the text of the language’s source easily is among the most powerful features; a good library for manipulating the language’s AST can substitute. This is why I consider Lisp to be one of the most powerful languages.↩
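To sketch the second sense of power from note 3 (my example, not a verified Liquid Haskell program), the refinement annotation below returns to the division example from the body of the post: a plain Haskell type can only say that the divisor is an Int, while the refinement additionally demands that it is non-zero.

```haskell
-- The Liquid Haskell annotation lives in a special comment, so plain GHC
-- ignores it; the refinement checker reads it and rejects a call such as
-- `divide 10 0` at verification time instead of letting it crash at runtime.
{-@ divide :: Int -> {v:Int | v /= 0} -> Int @-}

-- The ordinary Haskell type underneath admits every Int, including 0; on its
-- own it can only promise that the arguments and result are Ints.
divide :: Int -> Int -> Int
divide x y = x `div` y
```

An Idris dependent type could express the same restriction, and far more specific ones, directly in the type language itself, which is the sense in which I call it more powerful ‘in terms of specificity’.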