Statically versus dynamically typed programming languages – a controversy as old as the world, yet it still pops up at almost every developer gathering. Everyone has a preference. Everyone zealously defends their side. And that's great – developers who care about their tools care about their work. Sit with me, and I'll tell you how I feel about this controversial matter.
I'll use some of the most popular programming languages as references – the ones that I've used the most. When I say "statically typed language", I mean explicitly typed languages like Java and C++. When I say "dynamically typed language", I mean strongly typed dynamic languages like Python and Ruby – leaving JavaScript and PHP out of the picture as weak typing makes me uneasy.
I admit that the static languages I've used are poor ones, requiring a lot of type declarations. I know I've used just a few languages professionally. Nonetheless, I've worked with the most widely used ones. Rust and Haskell are beautiful, but how many people have the leeway to craft software using them? Craving toys we'll never play with is impractical. Instead, let's debate the tools common people use every day.
Biases and Habits
To hold an unbiased position on this battlefield, you need considerable experience in both worlds – static and dynamic. If you come from a statically typed background, the idea of relying on dynamic typing can make you feel uncomfortable. This discomfort is usually rooted in a belief that static typing is more reliable. On the other hand, if you've been working with dynamic languages throughout your career, having to declare each type before you use it may feel like too much boilerplate that clutters your code and makes you walk instead of run.
You need experience on both sides to judge objectively. Otherwise, all the arguments you throw at your friends from the other side remain simply a personal bias founded on a well-established habit. I've had my share of battles on this eternal battleground, and most of the time the arguments I hear are either too subjective or built on false assumptions and common misconceptions. Without further ado, let's dive into this value-cost analysis.
Values and Costs
Static and dynamic languages both make promises and each has values and costs.
Static typing values
- The compiler saves you from type errors.
This statement sounds quite alarming. Ultimately, it assumes that runtime type errors will occur unless the compiler performs type checks. That assumption is true only if you are a bunch of cowboys who write some code and push it straight to production. The compiler is not a pest control service. You need other safety nets to keep the bugs away. Software written in Python or Ruby doesn't have higher defect rates than software written in Java or C++.
What about type casting? The moment you start casting, all bets are off. The compiler excuses itself and you are left on your own. The same holds true for the infamous "any" in TypeScript. The moment you have to integrate a third-party JavaScript library into your TypeScript project, "any" starts polluting your codebase like a plague. Things that should be easy become hard, and things that are hard become "any".
Developers from a statically typed background may find comfort in the notion that static typing provides safety. But in reality, that is an illusion. Compiler-preventable errors have never been our main problem. How many times has your production system been on fire due to a runtime type error? How many bugs have you fixed that were caused by such a type error? Not many, I guess.
How many times have you fixed an NPE in production? In the Java world, Null Pointer Exceptions (NPEs) are the most common errors found in production applications. Tony Hoare introduced null references in ALGOL W back in 1965 "simply because it was so easy to implement". He now calls that decision "my billion-dollar mistake".
How about good tests? The danger of null reference errors should be the same in static and dynamic languages. But why is the problem so common in Java and not so common in Ruby? Java applications go live every day with a minimal set of unit tests – just enough to cover the major computing pieces. Why? Because you have the compiler to "ensure" that the code is correct. In Ruby projects, it's normal to see a code-to-test ratio of 1 to 3.
Your code is only as good as your tests. – Sandi Metz
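The kind of test that catches a type error before production can be tiny. A minimal Python sketch with hypothetical names – no compiler required, the wrong argument type blows up on the first test run:

```python
def total_price(items):
    # Sums the "price" field; a wrongly typed argument fails right here,
    # in the test run, not in production.
    return sum(item["price"] for item in items)

def test_total_price():
    assert total_price([{"price": 2}, {"price": 3}]) == 5

test_total_price()
```

The same test also guards the behavior, which is something no type checker does.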
- The compiler saves you from typos.
A type system prevents most of the careless mistakes plaguing programmers in dynamic languages. Typos can be hard-to-spot errors. We all waste valuable time debugging failures caused by them. But that argument is only valid if your work process looks like this – you write code for one hour straight and then run it to see what happens. If you work like this, you have bigger problems than typos.
Developers using dynamic languages have found a working process that enables them to spot all kinds of errors early, not only typing mistakes. When I'm changing code in one place of the system, I have the relevant tests running in the background, giving me immediate feedback not only that the code is correct but also that the behavior is correct. That's why TDD is popular among developers using dynamic languages and not so popular among developers using static languages.
I agree that spotting mistakes as early as possible can boost productivity – or at least let you gain back those hours lost fighting with the type system before you start the actual coding. A good linter (rubocop, flake8, eslint), integrated into your editor, will highlight most typos when using a dynamic language.
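As an illustration – a contrived Python sketch – the typo a compiler would catch surfaces immediately as a NameError on the very first run, and a linter flags it without running anything:

```python
def greet(name):
    return "Hello, " + mame  # typo: should be `name`; flake8/pyflakes flags it

try:
    greet("Ada")
except NameError as err:
    # Caught on the first run, long before production
    print(err)
```

With tests running in the background, this feedback arrives within seconds of saving the file.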
But don't get me wrong – I am not saying that having a compiler is a bad thing. All I'm saying is that relying on the compiler to save you from bugs is a bad thing. I'm trying to make you think and re-evaluate your faith in the compiler. The compiler is not the magic wand that saves the day. Having a compiler is no excuse for not having tests and pushing code directly to production without a code review or manual testing. "If it compiles, it works." – the most famous last words.
- Type information serves as documentation.
This argument is quite subjective. It rests on the premise that programmers cannot infer types from context, and so won't understand code without them. If you start a new job at a new company and they throw a large project at you – well, types may help, at least at first. If you are used to seeing type declarations in your code, you may find them helpful.
It all comes down to habits and experience. If you come from a statically typed language, the lack of type declarations sounds like utter chaos. If you are used to dynamic typing, you find type declarations verbose and distracting. Developers experienced in dynamic languages find the less verbose syntax is easier to read, write, and understand.
To understand what a piece of TypeScript code does, the type definitions are of crucial help. To understand what a piece of Ruby code does, the specs are of essential help. You have to see how that Ruby code is used to understand it. In TypeScript, the types serve as documentation. In Ruby, the tests serve as documentation. No matter in which universe developers live, they find ways to achieve understandable code.
Few of us are in the shoes of Microsoft, Facebook, or Dropbox, having to maintain and extend projects with millions of lines of code. In such large codebases, type annotations come in handy, improving developer productivity. But still, every developer works on a small part of that system and has a solid understanding of all the code in that part. They are quite familiar with what an Item is and whether an ID is an integer or a string.
- Type information helps the tooling ecosystem.
Whether you write TypeScript in VS Code or use Eclipse for Java, the UX is smooth when your editor or IDE can rely on type information. You have an excellent auto-complete. You rename with confidence. Typos and other errors are highlighted as you type. A typing system offers IDEs rich support – no doubt about it.
You may achieve nearly the same experience when working with a dynamic language and get the best of both worlds. You have to think more about writing grepable code. You may use a language server or ctags. Editors like Visual Studio Code and Neovim have built-in LSP clients and provide excellent support as you type.
Writing Ruby or Python – with the right LSP setup – provides an experience very close to that of an IDE. You have auto-completion, function signatures, on-hover documentation, go to definition, find references, rename, format, and code snippets. You have all of the good stuff and it's blazing fast.
- The compiled code is optimized to run quickly.
This argument represents a quality only if you accept the following assumption – the application runs slowly without these optimizations. In some cases, a well-crafted statically typed code will outperform its well-crafted dynamically typed twin. A common example today – when everyone is building RESTful backends – is JSON serialization.
Parsing JSON in C++ and Java is many times faster than it is in Python, Ruby, or PHP. But working with JSON in Java is an annotation nightmare. You cannot see the damn property from all the bloat. A framework has just turned a simple POJO into a monstrous aberration.
Anyway, if your use case falls into a category where you have no choice but to go for performance, you should welcome the compiler into your project. If you must, you must. In my experience, however, an application rarely runs slowly because of a poor choice of programming language. It runs slowly because of developer mistakes.
These mistakes fall into two categories: slow database queries and poor algorithmic time complexity. It doesn't matter if you program in Java or Python when people rely too much on an ORM library and don't understand how the database works, causing N+1 problems everywhere. It doesn't matter if you program in C++ or Ruby if you cannot identify and refactor n^2 code.
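The point is language-agnostic; here is the kind of n^2 code that no compiler will flag, and its linear-time refactor, sketched in Python:

```python
def common_slow(a, b):
    # O(n*m): `x in b` scans the whole list on every iteration
    return [x for x in a if x in b]

def common_fast(a, b):
    # O(n + m): set membership checks are constant time on average
    b_set = set(b)
    return [x for x in a if x in b_set]
```

Both return the same result; on lists with millions of elements, only the second one finishes in reasonable time – and no type system knows the difference.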
Dynamic typing values
- Faster development cycles.
The code is interpreted and dynamically loaded without a compile cycle. Developers used to static typing strongly believe that a compiler guarding them against runtime type errors is a necessity. They trade off efficiency for having that guardian on their side.
That is a solid argument only if you assume that without the compiler these type errors will occur and the compiler is the only one who can save you. In other words – that the time you would spend chasing and fixing type errors outweighs the time lost to the compiler over the course of application development.
For developers used to dynamic typing getting started with static typing can be difficult. Everything seems to go slower and take more effort before you see results. When I switched from Java to Ruby I felt a massive boost in productivity. It was like I was running on steroids. Trading compiler safety for faster feedback loops proved to be a very good deal.
If you are working on a proof of concept or any other form of exploratory programming, you'll find the REPL (read-eval-print-loop) an invaluable companion. The tool is useful not only for fast prototyping but in any situation where you have to quickly test a hypothesis. You can run a piece of code in that sandbox, get it working quickly, and then integrate it back into the project. The REPL doesn't need to compile or deploy your code. You get immediate feedback.
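For example, here is the kind of one-off hypothesis you might check in a Python REPL before trusting str.title() for formatting names – a toy check, not project code:

```python
# Hypothesis: str.title() handles surnames with apostrophes sensibly.
result = "o'connor".title()
print(result)  # O'Connor – title() capitalizes after any non-letter character
```

Two seconds in the REPL, hypothesis confirmed (or refuted), and you move on.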
- Less boilerplate code.
The source code does not include explicit type information. Dynamic languages are more succinct than their statically typed counterparts. You don't go to the lengths that C++ programmers do to express safety properties as type-based proofs.
Programmers used to dynamic typing find the code easier to understand when it does not contain type declarations. They can infer an object’s type from its context. Programmers used to static typing feel just the opposite.
I remember the many times I've wandered around a Java project trying to find the actual code. All the time wasted trying to work around the type system just to get something working.
Take a Chess piece class definition in Java:
public class Piece {
    private final Color color;
    private final Position position;
    public Piece(Color color, File file, Rank rank) {
        this.color = color;
        this.position = new Position(file, rank);
    }
}
The redundancy stands out. Java has very explicit types; we have to constantly declare the types of things. And its type system is not sound by design – it doesn't provide any strong guarantee. The situation is pretty much the same with other older languages like C – their type systems are designed mainly to spit out warnings for common errors.
Static languages require that you specify the complete interface of an abstraction in one place before you can go to implement the actual logic. This can be quite annoying if you are just prototyping – writing code that evolves over time or trying out fresh ideas. You have to change things in several places just to make a simple tweak. The worst form of this is C++ header files.
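For contrast, a rough Python equivalent of the Piece class above (a hypothetical sketch mirroring the Java example) carries no type declarations at all:

```python
class Piece:
    def __init__(self, color, file, rank):
        # No declared types: color, file, and rank can be anything
        # that behaves the way the rest of the code expects.
        self.color = color
        self.position = (file, rank)
```

When the design changes, there is one place to edit – no interface, no header, no declaration to keep in sync.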
- Metaprogramming is easier.
Metaprogramming – or writing code that writes code – is a double-edged sword. It could be a great tool in the hands of skilled craftsmen. But an inept apprentice could make quite the mess. If you have ever solved a complex problem by creating a simple DSL, you know the bliss you feel when looking at your own creation. That is the greatest mastery any developer could achieve – to solve complex problems with simple code that reads like prose. For those craftsmen, metaprogramming is a must-have feature.
But if you have ever chased an elusive bug hidden deep down an obscure DSL, you become an opponent for life, claiming metaprogramming as the ultimate gun to shoot yourself in the foot. Metaprogramming is a scalpel – dangerous in the wrong hands, life-saving when used properly. A great tool that requires greater responsibility. Used carefully it has great value and stands as a strong argument in favor of dynamic typing.
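A taste of how little it takes in a dynamic language – a toy Python DSL that fabricates HTML tag builders on the fly via __getattr__ (an illustrative sketch, not a real templating library):

```python
class Html:
    def __getattr__(self, tag):
        # Called for any undefined attribute; builds a tag helper on demand
        return lambda content: f"<{tag}>{content}</{tag}>"

html = Html()
print(html.h1("Chess"))   # <h1>Chess</h1>
print(html.em("check!"))  # <em>check!</em>
```

Five lines of metaprogramming give you an open-ended vocabulary – which is precisely the power, and precisely the danger, described above.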
- Flexibility and changeability.
Dynamic typing is the basis of conciseness, flexibility, and polymorphism. Since you do not constrain types in your code, it is concise and flexible. You do not need to declare the specific types of objects you use. Why care at all about the object's type? We should care about interfaces and behavior.
That could sound confusing to people used to static typing. But if you keep in mind that it all boils down to variables, objects, and the links between them, the concept that you never have to declare variables' types ahead of time becomes much simpler to grasp. Types are determined automatically at runtime, not in response to declarations. Dynamic typing produces easily changeable code.
I remember the countless architectural discussions we had when building systems in Java – the whole waterfall methodology – trying to design everything upfront. When working in Java, it's important to get the architecture right from the start. Java doesn't tolerate bad design decisions. Later on, when the codebase grows, it's hard to change the initial design. Not impossible, but hard. It's simply not flexible enough. The code is a bit "hard" for "software".
In Ruby or Python, the only thing I care to get right from the start is the database schema. I am trying to do the right abstractions based on all the information I have at the beginning, but I am not too worried if I don't get everything right at the start. I have less code and I can easily change it. As I don't rely on the compiler for error checking, I have my code well-covered with tests so I can refactor with confidence. The code is "soft" – flexible and changeable.
Depend on behavior, not types
No matter from which universe you come – static or dynamic – you don't want to depend on concrete implementations. Every self-respecting OOP language has some notion of interfaces. Interfaces express generalizations about behaviors. They bring flexibility to our code as they allow us to decouple implementations.
Program to an interface, not an implementation. – the "Design Patterns" book
An interface reveals only the operations we can perform. We don't know what the object is; we know what we can do with it. We can only see the exposed behavior. While you'll have to write a good deal of boilerplate code to decouple from specific implementations in static languages, in dynamic ones you get all the goodies with zero effort.
Once you begin to treat your objects as if they are defined by their behavior rather than by their class, you enter into a new realm of expressive flexible design. – Sandi Metz
By relying on types in your code, you break its flexibility. You limit it to working on just one type. Without type declarations, your code may work on a wide range of concrete implementations. In dynamic languages, you code to object interfaces (or operations supported), not to types. You care what an object does, not what it is.
Any object with a compatible interface will work, regardless of its specific type – that's the "Pythonic" way of thinking. To achieve the same in Java, you'll need much more boilerplate around defining interfaces and the classes implementing them, and then a complex framework like Spring to have objects injected wherever needed.
People often wonder why there is no Dependency Injection (DI) in Ruby and Python and how these folks achieve Inversion of Control (IoC). IoC is very common in mature Python code. But nobody talks about it as it is achieved naturally through duck typing. No need for a complex framework to give an object its instance variables. The Django framework utilizes DI heavily but no one shouts out fancy names for simple concepts.
Dependency Injection is a 25-dollar term for a 5-cent concept. – James Shore
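Here is what that five-cent concept looks like in Python (a hypothetical sketch): constructor injection with no framework, no interface declaration, and a trivially swappable test double:

```python
class SmtpMailer:
    def send(self, to, body):
        print(f"SMTP -> {to}: {body}")

class FakeMailer:
    # A test double: same duck-typed interface, no inheritance needed
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class Notifier:
    def __init__(self, mailer):
        # "Dependency injection": any object responding to .send() will do
        self.mailer = mailer
    def welcome(self, user):
        self.mailer.send(user, "Welcome aboard!")

fake = FakeMailer()
Notifier(fake).welcome("ada@example.com")
print(fake.sent)  # [('ada@example.com', 'Welcome aboard!')]
```

In production you pass a real mailer; in tests you pass the fake. That's the whole pattern.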
Costs of concretion and costs of abstraction
We want to work effectively. To do that, we need to reduce the cost of change. Both concretion and abstraction come with certain costs. Concrete code is harder to extend but easier to follow, with types that serve as documentation. Abstract code may seem harder to read to the untrained eye but is far easier to change.
Once you develop the ability to tolerate ambiguity about the class of an object, you are set on the road to designing abstractions with confidence and without fear. You stop worrying about the inner details of your classes and start envisioning your objects as abstract entities that interact through public interfaces.
But can we have the best of both worlds – abstractions with well-documented behavior? Sure, simply write some tests. Good tests are the best code documentation any team could wish for. Many Java developers neglect tests because they rely too much on the compiler. Types cannot document your code as descriptively as a good test suite can.
Reduce the cost of change with duck typing
Dynamically typed languages like Python, Ruby, and JavaScript don't have interfaces. Developers use "duck typing" instead. If it walks like a duck and quacks like a duck, it must be a duck. The concept is that you can use an object instance as long as the method you are invoking can be found on that object.
Duck typing sounds weird to developers used to static typing. They don't know what functionality to expect without an explicit type being specified. On the other hand, developers used to dynamic typing look at the explicit interfaces in Java and don't see how those folks can possibly refactor their code as the requirements change. Add a new method to an interface, and you'll have to adapt all existing implementations.
Methods that cannot behave correctly unless they know the classes of their arguments make code less flexible and harder to change when new classes appear and existing classes change. The more you depend on a class implementation, the less flexible your code is. When the class you depend on changes, you must change too.
If the object acts like a duck then its class is irrelevant. Duck types are public interfaces that are not tied to a specific class implementation. Ducks are objects defined by their behavior rather than by their class. The expectations about the behavior of an object define its public interface.
Abstract interfaces make your code more flexible by replacing the costly dependency on a concrete class with a more forgiving dependency on a message. You don't care what the underlying type is as long as the object can handle the message you've sent to it. Duck typing makes your code more abstract and less concrete, making it easier to extend but hiding the specific class behind the duck.
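The whole idea fits in a few lines of Python (hypothetical names, sketch only): two unrelated classes, one function that cares only about the message they answer to:

```python
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def announce(thing):
    # No isinstance check: anything that responds to .speak() is welcome
    return thing.speak().upper() + "!"

print(announce(Duck()))   # QUACK!
print(announce(Robot()))  # BEEP!
```

A third class added next year works with announce() unchanged, as long as it speaks the same message.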
Developer productivity
Dynamic languages shine when it comes to developer efficiency. They allow programmers to get more done with less effort. They are deliberately optimized for productivity: simpler syntax, dynamic typing, lack of compile steps, batteries included. You create usable software in a fraction of the time needed compared to using a static language. You work free of the type wrangling and gymnastics needed to please the compiler.
The net effect is a boost in developer productivity many times beyond the levels supported by traditional languages like Java and C#. That effect stands out even more in the modern world of web development and cloud solutions, where developers are asked to release new features as soon as new requirements come in, and customers enjoy the new behavior the moment they open the application in their browsers.
Having a simple and readable syntax promotes not only productivity but software quality as well. It takes much less effort to read, understand, and change 10 lines of code than 100. It takes away the pressure of getting things right from the start. Instead, you can have something barely working and iterate over and over until it is perfectly shaped. Trying the same iterative approach with static typing would require more effort, as you need to go and change all your types.
References
- Practical Object-Oriented Design, by Sandi Metz
- Static Typing Where Possible, Dynamic Typing When Needed, by Erik Meijer and Peter Drayton at Microsoft
- Static Typing Is Not For Type Checking by Bojidar Bojanov
- Dependency Injection Demystified by James Shore