I’ve been a programmer since the age of 8, and some kind of developer for most of my life. Throughout my life as a coder, both hobbyist and professional, I’ve learnt plenty of programming languages that felt like cookie-cutter clones of each other, but also a few programming languages that changed the way I looked at programming, sometimes even at thinking.

I'm ranking these languages in the order in which I discovered them. In most cases, I'll attach features to languages that were not the first to have them. This isn't meant as misattribution, just a way to show how I was first exposed to those features.

Basic

A language designed to make programming simple.

The original, line-based Basic is dead, but its legacy lives on in VB.Net and VBA, and also, more subtly, in today's dynamic languages (Python, JavaScript, etc.)

(Turbo) Pascal

Probably my most formative language. I released a few applications in Turbo Pascal as a teenager, including a commercial one (although, to be fair, Alain Frisch did most of the work).

A few years later, still a teenager, I attempted to move to C and was a bit disappointed to find basically the same language, just harder to read (what's that weird three-component for loop?), more error-prone (far pointers? really?) and with glaring omissions (what do you mean, there are no modules?). So I'm not going to include C in this list.

Pascal still exists in the form of Turbo Pascal, Lazarus and Delphi, but it also lives on in Ada (which borrowed its syntax) and in every documentation tool under the sun (JavaDoc, rustdoc, Sphinx, Doxygen, etc.), as the original documentation tool, WEB, was designed for and with Pascal.

x86 ASM

One of the main limitations of Turbo Pascal was that its BGI (the Borland Graphics Interface, the API used to talk to the graphics adapter) was limited to 16 colors and relatively low resolutions. When more modern graphics cards appeared, every developer suddenly started learning assembly language to talk to the devices directly. I was no exception.

These days, very few people write assembly language manually, because it’s not very useful and it’s 100% unportable, but it is of course still present behind the scenes in your favorite compiler or JIT. Every so often, I see a Rust developer or a C++ developer showcasing a disassembled version of the binary they have produced and it reminds me of the (not so) good old times.

HyperCard

At the other end of the spectrum was HyperCard: a programming language designed for people who did not want to learn programming languages. It looked like a drawing application, but you could use the built-in language to customize it into a full IDE, as well as to write new programs.

Arguably, this language is the core inspiration for everything “visual”, from Visual Basic to Visual Studio to every tool that lets you draw a UI. Also, to some extent, the web itself traces some of its roots to HyperCard. Every so often, I see a revival of some descendant of HyperCard, but they don't seem to stick.

Smalltalk, of course, shares many of these ideas and was possibly even more revolutionary, but I learnt Smalltalk much later, so it was too late to blow my mind on these specific points!

OCaml

Caml (then OCaml) took me by surprise. I was a jaded first-year student, convinced that I knew everything under the sun since I had released commercial software, and I had gotten into the habit of being better at programming than the teachers in charge of it in high school, which of course meant that I was the best there was. I read the documentation of OCaml before the first lesson and misunderstood “polymorphism” (we call it “generics” these days) as “weakly typed”. Boy, was I wrong. During the first lesson, the professor showed us how to walk a tree in OCaml. It was so concise, so robust, so readable. Also, the professor was a genuinely strong developer, the author of one of the first word processors, so this course also became a much-needed lesson in humility.
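I don't remember the exact code, but a tree walk of that flavor fits in a handful of lines thanks to algebraic data types and pattern-matching. A hypothetical reconstruction (sketched here in Rust, which inherited both features from the ML family):

```rust
// A binary tree as an algebraic data type. The match below is the whole
// walk, and the compiler checks that every case is handled.
enum Tree {
    Leaf,
    Node(Box<Tree>, i32, Box<Tree>),
}

fn sum(t: &Tree) -> i32 {
    match t {
        Tree::Leaf => 0,
        Tree::Node(left, value, right) => sum(left) + value + sum(right),
    }
}

fn main() {
    let t = Tree::Node(
        Box::new(Tree::Node(Box::new(Tree::Leaf), 1, Box::new(Tree::Leaf))),
        2,
        Box::new(Tree::Node(Box::new(Tree::Leaf), 3, Box::new(Tree::Leaf))),
    );
    assert_eq!(sum(&t), 6);
}
```

The key point is that forgetting a case is a compile-time error, not a runtime surprise.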

I fell in love with OCaml. It quickly became my main programming language, until I burnt out while using it to develop Opalang.

OCaml is still very much active. It’s used to develop programming languages (including the original Rust prototypes), static analysis tools for nuclear plants, operating systems for satellites and other situations that require an extremely high level of safety. It is also one of the immediate ancestors of Rust, F#, Elm, ReasonML and, if my memory serves, the first version of React. Oh, and also Opalang. For some reason, message-passing concurrency seems to be marketed as “gochannels” these days, but as far as I can tell, they date back to Concurrent ML, one of the ancestors of OCaml.

Java

I started Java when I was trying to modernize some code I'd written in Turbo Pascal and finally get it to work on Linux, Windows and System 7 (the ancestor of macOS). I found the language verbose; I couldn't understand why they didn't ship an installer, given how complicated it was to ship and/or run classes; and I really missed generics, algebraic data types and pattern-matching. But Java became my baseline language for a while – not as good as OCaml in my book, but with access to better libraries.

These days, Java is of course one of the pillars of the industry. It's also one of the main ancestors of C#, Scala and Go.

Prolog

If you have never coded in Prolog, I can only recommend giving it a try. It will absolutely change how you think about programs. In Prolog, you don't write algorithms; you teach the program how to think. Writing a simple path finder takes about 2-4 trivial rules. Writing a simple type system is barely longer.
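Those few rules amount to declaring a transitive closure: `path(A, B)` holds if there is a direct edge, or an edge followed by a path. A rough transliteration of the idea into ordinary code (a hypothetical graph, sketched in Rust; the real magic of Prolog is that the search loop below comes for free):

```rust
use std::collections::HashSet;

// Mirrors the two Prolog rules:
//   path(A, B) :- edge(A, B).
//   path(A, B) :- edge(A, X), path(X, B).
// `seen` guards against cycles, which Prolog's naive search also needs.
fn path(edges: &[(u32, u32)], from: u32, to: u32, seen: &mut HashSet<u32>) -> bool {
    if !seen.insert(from) {
        return false; // already explored this node
    }
    edges
        .iter()
        .any(|&(a, b)| a == from && (b == to || path(edges, b, to, seen)))
}

fn main() {
    let edges = [(1, 2), (2, 3), (3, 1)];
    assert!(path(&edges, 1, 3, &mut HashSet::new()));
    assert!(!path(&edges, 3, 4, &mut HashSet::new()));
}
```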

These days, Prolog itself has mostly vanished. However, its legacy is impressive. If my memory serves, SQL is largely based on a subset of Prolog, with a strong focus on optimization at the expense of simplicity. The C++ template system is largely a more complicated reimplementation of Prolog. Parts of Rust's type system are being rewritten with Chalk, a descendant of Prolog.

I somewhat suspect that, one of these days, someone will start combining LLMs with Prolog (or Coq) to build something reliable on top of ChatGPT. Also, I have recently stumbled upon TypeDB, which looks very much like a new generation of Prolog, with a relational database-like API.

Coq

Coq (and its cousins Twelf, Isabelle, …) is something different entirely. You encode your entire specification as types. These types are your goal. You don't program by looking at your functions or your libraries, but by looking at your types, using tools that transform them and gradually simplify them into independent subgoals. One way to describe it is that instead of coding, you are solving a puzzle. Of course, your subgoals end up being your functions, your data structures, your modules. But that's almost accidental.
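For a taste of the style, here is a toy example in Lean, a close cousin of Coq (a trivial statement, not a realistic specification): the theorem statement is a type, and the script after `by` transforms the goal until nothing is left to prove.

```lean
-- The statement `a + b = b + a` is the specification (a type).
-- The tactic script constructs the proof term (the program) that inhabits it.
theorem sum_comm (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

If the script doesn't fully discharge the goal, the file simply doesn't compile: there is no way to ship an unproven claim.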

Oh, and of course, the type-checker is so powerful that your program is actually a formal, mathematically verified proof that it fulfills the specification. A number of mathematical theorems have been verified by translating them into Coq.

I haven't followed development on Coq and the other languages of the family in a while. As far as I understand, they're still widely used in academia, where they have served to formally prove the correctness of compilers and OS kernels. I suspect that if Prolog doesn't come back to make LLMs reliable, someone will use Coq or Idris to do so.

Erlang

Erlang (and now Elixir) is and remains the language for distribution. Its core is much simpler than Python's, yet it scales easily to millions of nodes.

While many people still haven't heard of Erlang/Elixir, the language has been used in industry since the 90s for highly distributed systems. The ecosystem is very much alive, and a static type system is currently being tested. The “let it fail” model is used by mobile operating systems. The concurrency and distribution models have trickled down to Scala, Go and pretty much everything that contains the words “microservices” or “actors”. In fact, I am a bit sad whenever I see microservices or Kafka, as they are largely reimplementations of the design of Erlang/Elixir and the underlying BEAM virtual machine, but based on technologies that are much harder to use and deploy, and on protocols that are several orders of magnitude slower.

Oh, Opalang borrows from Erlang, too.

The π-calculus

I'm cheating here. The π-calculus (“pi-calculus”) is not a programming language but a calculus, i.e. a mathematical model designed to reason about systems. The model can be fully defined in about a dozen lines of specification and is pretty trivial to implement. It has been used to perform static analysis on models of cryptographic protocols, banking protocols… and also biological processes. I've used it to define a type system that developers could theoretically use to guarantee that their protocols would degrade nicely in the presence of a (D)DoS attack.
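To give an idea of how small the model is: the entire syntax of a (monadic) π-calculus fits in one data type, one constructor per construct. A minimal sketch in Rust (my own hypothetical encoding, with a toy size function to show how trivially one can compute over it):

```rust
// Names identify channels; a process is built from six constructs.
type Name = String;

enum Process {
    Nil,                               // 0       : the inert process
    Send(Name, Name, Box<Process>),    // x!<y>.P : send y on channel x, continue as P
    Receive(Name, Name, Box<Process>), // x(y).P  : receive on x, bind result to y
    Par(Box<Process>, Box<Process>),   // P | Q   : parallel composition
    Restrict(Name, Box<Process>),      // (nu x)P : create a fresh private name x
    Replicate(Box<Process>),           // !P      : unbounded replication of P
}

// Structural recursion over the syntax: count the nodes of a process term.
fn size(p: &Process) -> usize {
    use Process::*;
    match p {
        Nil => 1,
        Send(_, _, q) | Receive(_, _, q) | Restrict(_, q) | Replicate(q) => 1 + size(q),
        Par(q, r) => 1 + size(q) + size(r),
    }
}

fn main() {
    use Process::*;
    // x!<y>.0 | x(z).0 : one sender and one receiver on channel x.
    let p = Par(
        Box::new(Send("x".into(), "y".into(), Box::new(Nil))),
        Box::new(Receive("x".into(), "z".into(), Box::new(Nil))),
    );
    assert_eq!(size(&p), 5);
}
```

The semantics (the reduction rule letting a matching send and receive synchronize) takes only a few more lines, which is what makes the calculus so pleasant to analyze.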

I haven’t seen any new publication on the π-calculus in a while, but I might just not be looking in the right place.

Opalang

This one is a bit special, as I joined the team and became the lead designer for the language for a while. Imagine writing a single codebase in a simple functional + actor-based language comparable to OCaml/F#/ReasonML/Elm or a statically typed Erlang, then having the compiler analyze the code for safety, security and distribution, then split it into client-side code (compiled to JavaScript + HTML + CSS), server-side code (compiled to native) and database-side code (compiled into a db schema and compiled queries).

I fell in love with the language. Then got burnt out designing, developing and evangelizing it in a toxic environment. Took me some time to get back in the field of programming language design after that.

I think that Opalang (and its cousins ReSharper and Ur/Web) was a bit too far ahead of its time. The entire idea of multi-tier programming seems to have vanished into oblivion. Perhaps it will resurface some day – WASM should make it much easier.

Rust

I fell in love again, this time with Rust. It combined most of the benefits of OCaml or Haskell with the promise of highly concurrent code and system-level development. At the time, it also had revolutionary ideas for modular garbage-collection and M:N scheduling, although these were dumped before 1.0. When I started tinkering with the language, it was very much a research project. I remember some of the early conversations we had on error-handling, strings or iterators. One of the high marks on my resume is that I'm one of the people who introduced the idea of the try! macro, which has since graduated into the ? operator.
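For readers who haven't met it: `try!` (and now `?`) propagates errors to the caller early, turning nested error checks into a linear pipeline. A minimal sketch:

```rust
use std::num::ParseIntError;

// Each `?` returns the error to the caller immediately on failure –
// which is exactly what the old try! macro expanded to.
fn sum_of_two(a: &str, b: &str) -> Result<i32, ParseIntError> {
    let x: i32 = a.parse()?;
    let y: i32 = b.parse()?;
    Ok(x + y)
}

fn main() {
    assert_eq!(sum_of_two("2", "3"), Ok(5));
    assert!(sum_of_two("2", "oops").is_err());
}
```

Because the error path is part of the return type, the compiler forces every caller to decide what to do with the failure.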

I haven’t had much time to contribute to the compiler or stdlib in a while, but I’m planning to return to it once I have some free time!

Sadly, I think that Rust has spoilt me for many other languages. Now, whenever I write Python or Go (both of which I currently do for a living) or even TypeScript, I cringe at the prospect of my code blowing up as soon as I run it. With Rust, I encode as many properties as I reasonably can within the type system, which means that I need to think more before I code, but also that tests usually pass the first time.
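A small example of what “encoding properties in the type system” means in practice (a hypothetical API, not from any real codebase): make invalid states unrepresentable, so whole classes of bugs are rejected before any test runs.

```rust
// A non-empty list: the head always exists by construction,
// so `first` needs no Option, no panic, no runtime check.
struct NonEmpty<T> {
    head: T,
    tail: Vec<T>,
}

impl<T> NonEmpty<T> {
    fn new(head: T, tail: Vec<T>) -> Self {
        NonEmpty { head, tail }
    }

    // Total function: it cannot fail, and the type system proves it.
    fn first(&self) -> &T {
        &self.head
    }

    fn len(&self) -> usize {
        1 + self.tail.len()
    }
}

fn main() {
    let l = NonEmpty::new(1, vec![2, 3]);
    assert_eq!(*l.first(), 1);
    assert_eq!(l.len(), 3);
}
```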


I’ve coded in dozens of programming languages throughout my career. This includes Haskell, Scheme, Smalltalk, (Tw)elf, Idris, Python, Go, PHP, Perl, Bash, C, C++, JoCaml, JavaScript, TypeScript, Scala, C#, F#, Visual Basic, Ruby, (D)TAL, Squeak, Logo, Scratch, UnrealScript, GDScript, Lua, SQL…

If I didn’t include them, it’s because they didn’t blow my mind. Which doesn’t mean that these languages are not great or innovative. Just that the concepts either didn’t resonate with me or, more often than not, that I had already discovered these same concepts with other languages, just by the luck of the order in which I learnt languages.

What’s next?

I haven’t seen anything revolutionary in a while, but this doesn’t mean anything. The next idea could be lurking around the corner, in a small research project that I just haven’t heard about.

When I joined PASQAL, I hoped to be wowed by languages for quantum computing. I’m not there yet, but I don’t lose hope!

I imagine that, despite the overhype and the insane environmental cost, something useful will come from ChatGPT and LLMs, but I've yet to see it. So far, I've seen interesting experiments and many toys that already feel broken. I wonder what the future will look like.

And you, which languages have influenced the way you think about programs?