On Discussing Programming Languages
Discussing programming languages is usually a noisy affair among us technologists. We're a peculiar bunch (for better and for worse), and generally very opinionated.
Opinions change, and so do perspectives and experience, but the discussion always has been and always will be present.
In this post, I hope to bring some light to the discussion. Don't expect easy answers; I will ask far more questions than I will pretend to answer.
I use language and programming language interchangeably.
We have to think!
Overview
Your mileage may vary
When we start developing, it's relatively common to learn programming languages from the C family. Those are the languages with the characteristic if followed by {} and code in between. They usually have static typing and good performance.
For example, a C program that checks whether a number is even or odd (harvested from the Internet):
#include <stdio.h>

int main()
{
    // This variable stores the input number
    int num;

    printf("Enter an integer: ");
    scanf("%d", &num);

    // Modulus (%) returns the remainder
    if (num % 2 == 0)
        printf("%d is an even number", num);
    else
        printf("%d is an odd number", num);

    return 0;
}
After one or two years of writing code in the same language, it starts to seem like the only option.
You may have started from a different family of languages, that’s not the point though.
Before learning C/C++ at college, I had the opportunity to learn Clipper, Pascal, Visual Basic, and Delphi. Those first languages helped me form a critical sense about languages, each one with its pros and cons.
The lesson here is: every programming language brings together someone else’s experience.
Behind the choices
Unfortunately, it’s common to ignore an important aspect of our choices:
- What is the motivation for creating a new programming language or tool?
- Who created it?
- What’s the creator’s background?
You may not need to know all this, but if you're the person responsible for the decision, you'd better go find out. This research will probably give you an interesting background, and it's an exercise I recommend. The history behind these choices tends to be fascinating!
Try to reflect on the following aspects:
- Was the language created in the ‘60s? The ‘70s?
- If it’s older, what was the hardware?
- What did the language’s creator choose to add and to remove from the design?
As I promised, there are a lot of questions. These are important matters you'd have to think about if you decided to create your own language.
Let's suppose it's your turn. You could ask yourself:
- Should my language be multi-paradigm or not? Functional, logic, imperative?
- Compiled or interpreted?
- Should it run on a virtual machine?
- What about the type system…
- What about concurrency and parallelism?
- Do I have hardware limitations? Should it be multiplatform?
- Should I optimize for…
- Should the compiler generate an intermediate language…
These choices are yours to make, the same way someone had to make them when creating the language you're using today. Every question leads to another, and each answer has its consequences.
When I started, I never asked these questions. There were things I had no idea I didn't know. My baggage came through the years of using programming languages as tools and exploring their usability and differences.
Highly invested
For every choice, there's an investment of time and sometimes money. Whether you're self-taught or not, you at least spent time learning.
Let's suppose a new programming language has just landed on the web. As some people learn it, others experiment with it at work, growing it one step at a time. New books start being written, and publishers seek out authors as interest grows.
Naturally, you're inclined to appreciate it: you're growing together. Now you can change jobs, apply your knowledge, and get paid for it.
New conferences appear, and you get to travel. The new language on the block shines, boosts your career; people ask for your help and you're seen as a reference.
There's a big factor motivating these choices: for every investment, you naturally build a bias towards it.
Biases show up in other areas too. Think of when you buy a product and become temporarily blind: you can't see its missing features or points of failure. You're biased!
You’re not ready to listen to your friend say:
“Wow, that’s too expensive! Product A is much better in terms of…”
Good, one more step toward keeping our feet on the ground!
Why so many options?
Programming languages can be classified into two larger groups: general-purpose or domain-specific.
For general-purpose languages, it tends to be more complicated. It's not always possible to add new features that weren't thought of by design. Simply adding a new feature might look foreign, out of context. We usually say it's not idiomatic.
Let’s reflect on a few examples:
After more than 25 years, the Ruby language decided it needed to adapt to modern hardware to stay relevant in terms of concurrency and parallelism. Wow! At least to me, that doesn't look simple at all.
How do you approach such a change in a language that, until then, hadn't treated this as a priority? We could say it's a matter of the Open-Closed Principle (OCP), but it's far harder for programming languages than for applications.
To break or not to break compatibility? Should we create primitives based on keywords, or think in terms of a different abstraction? The designers have to think about the lifetime of the language, its community, and its use from here on. What a huge effort!
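For the record, the answer Ruby 3 shipped was the Ractor: an actor-like primitive for parallel execution. A minimal sketch of the idea (Ractors were still marked experimental at the time of writing; the squaring workload is just an illustration of mine):

```ruby
# Ractors run in parallel and don't share mutable state.
# Each one receives its input at creation time and returns a value.
ractors = (1..4).map do |i|
  Ractor.new(i) do |n|
    n * n # the block's last expression is the Ractor's result
  end
end

# take blocks until the corresponding Ractor finishes.
results = ractors.map(&:take)
puts results.inspect # => [1, 4, 9, 16]
```

Notice how different this is from thread-based concurrency: isolation is the default, which is exactly the kind of design decision that is hard to retrofit.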
Java itself has been changing year after year. For a few releases now, you've even been able to write Java code in a REPL: JShell brings a flexibility that interpreted languages have had for years. In terms of functional programming, Java now offers many options too.
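For instance, the Stream API and lambdas (introduced back in Java 8) allow a declarative, functional style that early Java simply didn't have. A small sketch:

```java
import java.util.List;
import java.util.stream.Collectors;

public class Evens {
    public static void main(String[] args) {
        // Keep the even numbers and square them, declaratively.
        List<Integer> result = List.of(1, 2, 3, 4, 5, 6).stream()
                .filter(n -> n % 2 == 0)
                .map(n -> n * n)
                .collect(Collectors.toList());
        System.out.println(result); // prints [4, 16, 36]
    }
}
```

The same pipeline, pasted line by line into JShell, gives you the immediate feedback loop that interpreted languages made popular.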
I have no empirical reference, but I think Kotlin made Java open its eyes to evolution.
Go, after years of criticism and a divided community over generics, has finally materialized them for its next versions. Draft after draft, the designers moved toward a common goal with the help of the community.
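The accepted design is based on type parameters constrained by interfaces. A minimal sketch of what that looks like (the Ordered constraint and Min function here are my own illustrative definitions, assuming Go 1.18+):

```go
package main

import "fmt"

// Ordered constrains T to types that support the < operator.
type Ordered interface {
	~int | ~int64 | ~float64 | ~string
}

// Min works for any Ordered type; the compiler instantiates it per type.
func Min[T Ordered](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(3, 7))      // 3
	fmt.Println(Min(2.5, 1.5))  // 1.5
	fmt.Println(Min("go", "c")) // c
}
```

Before generics, each of those calls would have required a separate function or an interface{} cast: a good example of a tradeoff the designers lived with for a decade.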
Nothing is static. It’s always time to adapt and move forward. Everyone learns. The tool of today might not be great for a problem of tomorrow.
Ecosystem
A language gains relevance, mainly in industry, when there are libraries, tools, and things happening around it. It's innate that, as humans, we need innovation; we need to see things evolving and vibrant.
If it grows a lot, it's natural that more code is produced (good and bad). The language's approachability naturally attracts curious people. There's nothing wrong with toy libraries, code for the sake of code. The job of curation, harvesting repositories and extracting meaning from them, is essential.
One more time, I have no data to prove this; it's backed only by my own experience.
Hard-to-measure aspects
I decided to write this post after reading a lot of discussions about it. The good, the bad, the evil. We, the people.
We’re hooked by:
My code is elegant
My code is beautiful
I prefer…
It’s authentic to assume:
I've adopted language X because it looks different. It's the hype; I want to try it.
The process of experimentation is quite common, but for more experienced developers it goes differently. If you're being paid and working on a team, documenting your choices and decisions is very important. Train people and make progress. If you can, contribute back to the community.
I remember a video series from professor Barry Burd about functional programming, where he talks about the famous goto keyword. It gained new forms, but the levels of indirection still exist. Think in terms of if.
Productivity
This is a very broad term, and I still don't know exactly what it means. What does it mean to be productive? Is it when I write more lines of code? If I create keyboard bindings that expand into code snippets, is that productivity? Your experience has a huge influence here.
You might be productive after years of writing code in the same language, but you're biased towards it. If you learn a new language, how long does it take to teach your brain the new semantics? What about the idiomatic way of writing it?
What if, instead of productivity, you mean familiarity? Naturally, the first encounter with a new programming language might feel weird. It happened to me when I first met Clojure: a language from a different family that initially hurt, but helped me think differently.
See the Clojure code below, which finds the greatest common divisor of a and b (harvested from the Internet):
(defn gcd
  "(gcd a b) computes the greatest common divisor of a and b."
  [a b]
  (if (zero? b)
    a
    (recur b (mod a b))))
There's nothing to compare with the C code here, is there?
Simple vs Easy
This goes far.
I write a lot of code in language X.
What does that mean? If you write more code but it reads easily and is well organized, what's the problem?
Do you write a lot of code because the language doesn't offer constructs and/or abstractions that enable reuse? Notice that everything is a tradeoff. The language may be raw, and even though you write more code, the learning curve could be lower.
Every language allows us to write low-cohesion code with poor readability. It's always possible to ignore the design's purpose and deviate broadly from the ideas behind it. Non-idiomatic code is not exclusive to language X or Z. One more time: you need to understand the choices and try to follow the best constructs.
I've been there. When I started learning Ruby, my code read like Java. It took me time to adapt and learn Ruby idioms. I read books on it and made progress. I read a lot of code. People reviewed my code and showed me the Ruby way of writing it.
Another aspect that looked foreign to me was the boom of one-liner methods: methods with exactly one line! It took me time to get used to those levels of indirection. Nowadays I use one-liners carefully and discuss their validity in code reviews.
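To give a flavor of that style (this Order class is a made-up example of mine, not from any real codebase), Ruby 3's endless method definitions make one-liners even more tempting:

```ruby
# Hypothetical example: tiny one-line methods, each naming one idea.
class Order
  def initialize(subtotal)
    @subtotal = subtotal
  end

  def tax = @subtotal * 0.1  # endless method syntax, Ruby 3.0+
  def total = @subtotal + tax
  def expensive? = total > 100
end

order = Order.new(120)
puts order.total      # 132.0
puts order.expensive? # true
```

Each method is trivial on its own; the cost is that reading total now takes two hops. That's the level of indirection worth debating in code review.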
Never buy an opinion based on a Hello, world! example. It's not a fair comparison, and it doesn't reflect reality. I'm tired of seeing contrived example comparisons trying to guide decisions.
It's much better to migrate one of your projects to a new language, try it, and see whether the model behind it is what you expect. Sometimes the exercise is worth doing just for the sake of doing it. You might not need to prove anything, and that's ok.
If you need examples and things to compare, Rosetta Code is better than a mere hello_world.rb vs HelloWorld.java.
Wrapping up
Without further ado, let's sum up the idea: it all depends!
Whenever I can, I look for papers and old books. So many ideas we think are new come from the '70s and '80s; they're not new, it has just become possible to materialize them thanks to new hardware, better usability, and knowledge sharing. Concurrency, for example, seems like a recent idea, but CSP was proposed by Sir Tony Hoare in the late '70s.
Books in particular I really appreciate: a format in which the authors had to dig deep into the concepts before writing. I wholeheartedly disagree with people who say books are mere theory. Books materialize experience lived by someone else so that you can reuse it. They work like a shortcut, in the best sense of the word. They open doors.
Be wary of your choices. Stay alert so you don't treat one language as the one-size-fits-all of programming. General-purpose languages make that possible, and that's ok. If you need specific optimizations, go ahead and try a domain-specific language. No programming language can help with a problem you don't understand.
The older I get in terms of code, the more I realize it all depends. Maturity comes in great shape.
Try different flavors of languages, experiment with them, and make decisions.
Thanks for reading!