Recent publications by Consumer Reports and the NSA have launched countless conversations in development circles about safety and its benefits.

In these conversations, I’ve seen many misunderstandings about what safety means in programming and how programming languages can implement, help or hinder safety. Let’s clarify a few things.

A metal safe leaking fluid data, as imagined by MidJourney

Safety and Security

First things first: (software) safety is not (software) security.

Security is something that has meaning only within a threat model:

Security (within a given threat model): A piece of code is secure if no attacker can find a way to use your code to realize a risk that the threat model judges unacceptable.

As most applications do not have a formal threat model, we’ll let this degenerate to the more handwavy:

Security (handwavy): An attacker cannot make your code do something it should not do.

Similarly, safety is something that has meaning only within a specification:

Safety (within a specification): The code behaves according to its specifications.

Since most code doesn’t have any specification other than the code itself, this definition is hard to uphold in practice. We can go with the gentler:

Safety (within a set of invariants): The invariants for the code hold.

What’s an invariant? Well, good question. In this post, we’ll define

Invariant: Something the programmer believes of the code.

Usually, invariants are easy to spot: they are often called “documentation”, “comments” or “names”. If you can’t spot any invariants in code, assume that they are broken. For instance, and while this is not often something that you’ll find in the literature, I personally consider Python’s syntax and keyword arguments to be safety tools.
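
To make the notion concrete, here is a minimal sketch, in Rust since the other examples of this post use Rust, of one invariant stated three times: as a name, as a comment and as an assertion. The function and its names are hypothetical.

// Invariant (encoded in the name, this comment and the assertion below):
// the denominator is never zero.
fn divide(numerator: i64, nonzero_denominator: i64) -> i64 {
    assert!(nonzero_denominator != 0, "broken invariant: zero denominator");
    numerator / nonzero_denominator
}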

We can even let the definition of safety degenerate to:

Safety (handwavy): The code works and the programmer understands why. No, for real, not just guessing.

Safety is definitely related to security. However, here is a program that has full security and no safety:

fn not_main() { // Oops, typo. This should have been `main()`.
    // Do many useful things.
}

Since we’re not doing anything, we’re (presumably) not behaving according to specifications. However, since we’re not doing anything either, we’re (presumably) not doing anything we shouldn’t do.

And here is a program that has full safety and no security:

fn main() {
    disclose_user_password();
}

These are, of course, extreme examples. In most cases, when you write a program, you want to achieve both safety and security. Moreover, in a perfectly safe program, you can analyze security by auditing the specs.

Let’s repeat this:

Benefit of safety: If a program is perfectly safe (wrt a spec or invariants), you can guarantee security (wrt a threat model) by analyzing the spec (or invariants).

In particular, breaking some safety properties often opens security vulnerabilities. More on this later.

Achieving perfectly safe code has been a long-standing goal of both the programming language community and the formal methods community. This goal has not been reached yet. I suspect that it never will. But that is ok, because while pursuing this objective (or subsets thereof), the PL and FM communities have given us a number of extremely valuable tools, including static and dynamic type systems, assertions and contracts, garbage collectors, linters, model-checkers and other formal analysis tools.

Yes, I’m probably stretching the definition of “PL community” a bit with some of the items above. Some definitely come from communities that make use of programming languages without pretending to invent anything PL-related. I’m also skipping a number of other items that are definitely adjacent to safety and security, such as cryptography, containers, etc. I’m planning to write about these in another post.

Not all of these innovations have made it into the industry, but many are now taken for granted by developers.

Now, let’s get one thing out of the way: to achieve safety, you do not need any of these tools. No, really, you can write perfectly safe (and secure) programs in raw assembly language. However, I do not expect that, for any reasonable spec and threat model, anybody will write a perfectly safe + secure web browser in raw assembly any time soon. These days, the complexity of the code of a full web browser is simply mind-bending. In fact, let’s be honest, if you start from asm, the complexity of the code of any modern, useful application is already considerable. These days, if you or I wish to write an application, we’re going to start by picking a programming language and a set of libraries and tools, which will probably feature some of the items listed above.

Conversely, there are safety violations that you can do nothing about, even with perfect tooling: if your OS or your hardware is unreliable, they may break your code in ways that are impossible to predict. The same is true for security.

So why try to achieve safety and security despite the fact that we live in an imperfect world? Because knowing that you can’t be 100% successful is no excuse for accepting a bad result. As software developers we are here to produce the best possible software under the constraints at hand, which may include unreliable hardware, an unreliable environment or a finite runway. And both safety (it works) and security (it doesn’t cause harm) are among the most important features of “best possible software”, typically alongside performance.

Programming languages

Oh, programming languages? Yes, programming languages. Because the reports I’ve linked to above all talk of safe programming languages. For some definition of safe. So what is a safe programming language? Well, let’s try and come up with a definition:

Language (in this post): A general-purpose programming language actually used in the industry, in the wild.

Yes, we can definitely write safe Domain-Specific Languages. Yes, there are specialized implementations of, say, JavaScript that reduce the Trusted Computing Base by supporting neither async nor any kind of calls to native code. Yes, some academics program in Coq or Twelf, or use Proof-Carrying Code. These are absolutely valid tools, but they are beyond the scope of this post.

Safe language (with respect to a specification/invariants): A language which helps the developer write safe code (with respect to a specification/invariants).

Safe language (handwavy): A language which helps the developer write safe code (handwavy).

Given that PL and FM researchers still toil to achieve a perfectly safe language, calling a language “safe” is something of a stretch. No, sadly, $(YOUR FAVORITE LANGUAGE) is not safe (handwavy). It may feature some very important safety properties (we’ll discuss these later), but it’s not absolutely safe. It can, however, be safer than another language for some subset of specs and/or (handwavy) for some developers. Yes, since our handwavy definition of safety implies a developer, some languages are going to be safer for some developers and less safe for others. And of course, how much experience you have in a language very much influences how safe this language is for you. This is why quite safe code (e.g. SQLite) has been written in C, a language that features very few tools to aid with safety. This is also why the Linux kernel is opening itself to Rust – because finding developers who can write C with this level of safety is really hard, while kernel maintainers believe that finding developers who can write Rust code with this level of safety is easier, thanks to better safety-oriented tooling.

And while we’re at it, let’s try and come up with a definition for a secure programming language:

Secure language (with respect to a threat model): A language which helps the developer write secure code (with respect to a threat model).

Secure language (handwavy): A language which helps the developer write secure code (handwavy).

One of the many conversations spawned by the above reports took place within the /r/cpp community. Two themes returned regularly: “These reports are not focusing on recent versions of C++” and “I’m writing safe/secure code in C++ all day long, $(LANGUAGE X) won’t help me”, where $(LANGUAGE X) often rhymed with “Trust”.

I can’t judge the first argument. Most of my knowledge of C++ was gained either before joining Mozilla or while working for 9 years on the codebase of Firefox. While this codebase has been modernized quite a few times, its roots are deeply entrenched in legacy C++ – and even legacy C – dating back to times when many of the features that modern C++ developers take for granted were not implemented, or not properly implemented, by compilers. In fact, these past few weeks, I have been trying to brush up on my C++ by looking for examples of shiny, pure, modern and of course safe C++. I haven’t found any yet, but if any reader knows of a good codebase I could look at, please don’t hesitate to drop me a line!

What about the second argument? The answer is absolutely “yes”. You can definitely write safe and secure code in C++, at least for some threat models and some specifications. This is also true for $(YOUR FAVORITE LANGUAGE), of course.

This doesn’t make $(YOUR FAVORITE LANGUAGE) (or $(LANGUAGE X)) a safe language, or a secure language, for all specs/threat models.

Let me emphasize this.

Your favorite language is not perfectly safe. It is not perfectly secure. It is not even safer and more secure than most other languages for all teams of developers, all domains, all threat models.

If you’re reading these lines and must remember only one thing, please, fellow developers in $(YOUR FAVORITE LANGUAGE), stop trolling developers with different experience. Chances are that they are perfectly right to use these tools that you despise. Even if they’re not, trolling is not constructive.

Classifying safeties

In some of the conversations about safety and security, one of the recurrent topics is that there is more than one kind of safety and that either $(YOUR FAVORITE LANGUAGE) or $(LANGUAGE X) doesn’t help with that kind of safety. Both assertions are absolutely true. So let’s take a deeper look at software safety.

A few kinds of safety return constantly in these conversations: memory safety, data race safety, thread safety and type safety.

There is definitely more to safety than these four kinds of safety. Documentation and clarity of intent/implementation are parts of safety. Assertions/contracts are part of safety (although one could argue that they are already part of type safety). Many applications also need to take into account user safety, which is not part of software safety. There are also various notions of resource safety, etc. But they are all beyond the scope of this discussion.

Let’s try and provide a quick and fairly handwavy definition for these kinds of safety:

Memory safety: Pretend that all your memory is labeled with dynamic types (including undefined, for memory that isn’t addressable anymore). If your code reads from a memory address believing that it’s reading something with type T, then it’s actually reading from something with type T (or a subtype thereof). If your code writes to a memory address believing that it’s writing something with type T, then it’s actually writing on top of something with type T (or a supertype thereof, including undefined).
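
To make this concrete, here is a sketch of a memory-safety violation in Rust. It is undefined behavior, shown only to illustrate the definition, and it requires unsafe:

fn main() {
    let dangling: *const i32;
    {
        let x = 42;
        dangling = &x as *const i32;
    } // `x` is not addressable anymore: its memory is now labeled `undefined`
    // We believe we are reading an `i32`, but the label is `undefined`:
    // a memory-safety violation (and undefined behavior).
    let _broken = unsafe { *dangling };
}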

Data race safety: If a thread is performing a non-atomic write at an address in memory, another thread may not be performing a read or a write at the same address concurrently.
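
Again as an illustration, here is a sketch of a data race in Rust; safe Rust rejects this program, so it needs unsafe, and it is undefined behavior:

static mut COUNTER: u64 = 0;

fn main() {
    let t = std::thread::spawn(|| unsafe { COUNTER += 1 });
    // A non-atomic write, possibly concurrent with the write performed by
    // the other thread: a data race.
    unsafe { COUNTER += 1 };
    t.join().unwrap();
}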

Thread safety: There is no scheduling that can break an invariant.

Thread safety covers deadlock freedom, livelock freedom and data race safety, but it is not limited to these. If you are familiar with chaos testing, you can probably quote from memory many examples in which scheduling breaks code without any deadlock, livelock or data race.

Type safety: Pretend that all your memory is labeled with dynamic types (including undefined, for memory that isn’t addressable anymore). Every invariant for every type in memory holds for the entire duration of the program.

Readers familiar with Formal Methods will immediately notice that this definition is lacking both the words “Soundness” and “Subject Reduction”. I promise I tried to include them in this post, but both of them require concepts that don’t really map well to most industrial languages (operational semantics and some kind of stuck state), so I’ve progressively rephrased these definitions into something that makes more sense in this context, without needing an entire research paper’s worth of definitions per language. Type safety as I define it is not strictly equivalent to traditional mathematical definitions of type safety, but I feel that this definition works much better with industrial languages. If you can come up with a better definition, feel free to drop me a line!

Note that, with this definition (or the usual mathematical one), type safety is not the same thing as having static type checks. You can very well have static type checks that are insufficient to guarantee type safety or a dynamic type system that enforces type safety.
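
For instance, here is a hedged sketch in Rust where the static type (u8) is too weak to express the invariant, so type safety rests on a dynamic check; the Percentage type is made up for the example:

// Invariant: the value is always between 0 and 100.
pub struct Percentage(u8);

impl Percentage {
    pub fn new(value: u8) -> Option<Percentage> {
        // The static type checker cannot enforce the invariant; this dynamic
        // check can. Type safety is not the same thing as static type checks.
        if value <= 100 {
            Some(Percentage(value))
        } else {
            None
        }
    }
}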

Now, if you squint hard (or if you rewrite this post five times in an attempt to simplify it), you can see that memory safety, data race safety and type safety can be rewritten as the following, which I find simpler and easier to reason with:

Memory safety (within a set of types and invariants): A piece of code is memory safe if it is both write-safe and read-safe with respect to these invariants.

Write safety (within a set of invariants): A piece of code is said to “break write safety” if, at any point, it overwrites a value, breaking an invariant of the code. It is write-safe if it never breaks write safety.

Read safety (within a set of types and invariants): A piece of code is said to “break read safety” if, at any point, accessing memory as a given type T results in a value that does not respect the invariants of T. It is read-safe if it never breaks read safety.
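
To illustrate a read-safety break, a Rust sketch (undefined behavior, do not reproduce): the invariant of bool is that its byte is 0 or 1, and we read a byte for which this invariant does not hold:

fn main() {
    let byte: u8 = 2;
    // Reading this memory as a `bool` produces a value that respects none of
    // the invariants of `bool`: a read-safety violation, and undefined behavior.
    let broken: bool = unsafe { std::mem::transmute(byte) };
    println!("{broken}");
}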

Why do we care about write safety or read safety? Because breaking read or write safety means breaking invariants. Invariants are the total sum of knowledge that the developer has about their code. Break invariants and you have no clear idea about what your code is going to do.

And to emphasize once again: breaking invariants/safety does not mean introducing a vulnerability. It most likely means introducing a bug. It also means that you don’t know what your code is doing, so this bug might introduce a vulnerability.

So is your code / your language thread safe? Is it read safe? Is it write safe?

Let’s start with an example of invariants. Someone on your team has come up with a revolutionary encoding, the WTF-42. You need to implement a new class or type WTFString for strings that are guaranteed to always be valid once initialization is complete. You lose if any code, anywhere in the application, can construct or observe a WTFString that isn’t valid WTF-42.

Can you do it?
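
Here is a hedged sketch of what this could look like in Rust. Since WTF-42 is imaginary, is_valid_wtf42 and EncodingError are placeholders for whatever your team invented:

pub struct EncodingError;

pub struct WTFString {
    // Private field: no outside code can overwrite it with an invalid value.
    raw: String,
}

impl WTFString {
    // The only way to construct a WTFString: validation happens here, so the
    // invariant "always valid WTF-42" holds for every value you can reach.
    pub fn new(raw: String) -> Result<WTFString, EncodingError> {
        if is_valid_wtf42(&raw) {
            Ok(WTFString { raw })
        } else {
            Err(EncodingError)
        }
    }

    pub fn as_str(&self) -> &str {
        &self.raw
    }
}

// Placeholder validation for this post's imaginary encoding.
fn is_valid_wtf42(_raw: &str) -> bool {
    true
}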

As for our thread-safety invariant, we’ll adopt something simple: the program executes to the end (e.g. no deadlock, no livelock).

Are you ready for a tentative taxonomy of languages? I’ll do my best to be objective, but I am a human being, with limited knowledge and unlimited bias, so I may well write things that are false or misleading. If you feel that’s the case, don’t hesitate to get in touch.

C

Can we break write-safety in the language?

Yes, trivially: for instance, by writing past the end of a buffer, or through a dangling pointer.

How hard is it to isolate a write-safe subset of the language in which we can still code some useful applications?

I do not see how this could be possible in the general case. Model-checking tools (e.g. TLA+) can help for a specific piece of code. If you have ever used model-checking on C code, please feel free to drop me a line.

Can we break read-safety in the language?

Yes, trivially.

How hard is it to isolate a read-safe subset of the language in which we can still code some useful applications?

I do not see how this could be possible in the general case. Model-checking tools (e.g. TLA+) can help for a specific piece of code.

Can we break thread-safety in the language?

Yes, writing a deadlock, a livelock or a data race condition is trivial.

How hard is it to isolate a thread-safe subset of the language in which we can still code some useful applications?

I do not feel that it is possible in the general case. Model-checking tools (e.g. TLA+) can help for a specific piece of code.

C++

Can we break write-safety in the language?

Yes, exactly as in C.

How hard is it to isolate a write-safe subset of the language in which we can still code some useful applications?

There is a folk theorem within the C++ community that there exists a safe subset of C++. I have read both CERT’s guidelines and Bjarne Stroustrup’s guidelines and they very clearly are not this subset. If this subset is written anywhere, I would be very interested in reading it.

That being said, let’s see if we can come up with a subset: say, no pointer arithmetic, no manual memory management, only bounds-checked accesses, and a standard library audited to check its preconditions and panic rather than corrupt memory.

This feels reasonably write-safe. There may of course exist larger subsets; this one just happens to fit in a handful of lines.

Of course, at this stage, two questions arise:

  1. How do you make sure that you are using this style?
  2. Would developers use this style?

To answer 1.: I believe that it would be possible to write a linter. Not easy – writing linters for C++ is never easy – but possible. Auditing the stdlib to make sure that it always panics… well, the stdlib is a huge piece of code but it can be rewritten. It is my understanding that there are efforts to port the stdlib to CHERI-enabled hardware. For platforms that support CHERI, this might provide essentially what I’m speaking of. It is my understanding that we’re still at least 5 years away from being able to actually test this, though.

To answer 2.: In my experience writing C++ and being part of the C++ community, I believe that there are two categories of C++ users: those who write High Frequency Trading code, and everybody else.

Other than using this subset, model-checking tools can probably help ensure write-safety of a specific piece of code.

Can we break read-safety in the language?

Yes, the remarks regarding write-safety also apply here.

How hard is it to isolate a read-safe subset of the language in which we can still code some useful applications?

The remarks regarding write-safety also apply here.

Can we break thread-safety in the language?

Yes, exactly as in C.

How hard is it to isolate a thread-safe subset of the language in which we can still code some useful applications?

I believe that with the appropriate libraries, one could implement a concurrent but safe sublanguage for C++, for instance by imitating Sklml-style concurrency. However, I suspect that very few programmers would use this, as this would considerably restrict C++, a language that many use specifically because it is so flexible.

Without such a library and discipline, just as hard as in C.

Python

Can we break write-safety in the language?

Yes: any code can mutate almost any object or module, and native modules or ctypes can corrupt memory outright.

How hard is it to isolate a write-safe subset of the language in which we can still code some useful applications?

While I haven’t checked formally, I believe that there is at least one way to achieve this: audit all native modules, then adopt a purely functional style, with no mutation once a value has been initialized.

There are certainly larger subsets that would work. This subset has the advantage that, if coupled with well-reviewed libraries, it would be easy to review/lint. Furthermore, I have recently spoken with a Python developer who apparently uses this style, so it seems to exist in the wild.

However, this would require throwing away most of the existing ecosystem, including almost all of Python’s batteries. I suspect that most Python developers would be unhappy about this.

I don’t know of any model-checking tool for Python.

Can we break read-safety in the language?

Yes: with duck-typing, nothing guarantees that the value you are reading actually respects the invariants of the type you believe it has.

How hard is it to isolate a read-safe subset of the language in which we can still code some useful applications?

I believe that it is possible, by adopting the same constraints as for write-safety and rejecting duck-typing in favor of isinstance. Unfortunately, this collides violently with the concept of idiomatic Python, so I suspect that such a subset would not be used.

Can we break thread-safety in the language?

Yes, exactly as in C. The GIL protects refcounting, but nothing else.

How hard is it to isolate a thread-safe subset of the language in which we can still code some useful applications?

Once again, adopting a (concurrent) functional programming style could help, something similar to Sklml for instance.

JavaScript

Can we break write-safety in the language?

Yes, essentially as in Python: almost everything is mutable, and native code can do anything.

How hard is it to isolate a write-safe subset of the language in which we can still code some useful applications?

As in Python, I suspect that it would be sufficient to audit native code and adopt a functional style in JS. There are JS frameworks based on this idea, so this would not be entirely shocking. It might even be possible to continue interacting with the DOM, with an approach comparable to React. Network access would be complicated, but Haskell libraries have demonstrated that it can be done in a functional style.

This would be fairly easy to review or lint. However, this would require throwing away most of the existing ecosystem.

I don’t know of any model-checking tool for JS.

Can we break read-safety in the language?

Yes, exactly as in Python.

How hard is it to isolate a read-safe subset of the language in which we can still code some useful applications?

My intuition tells me that it wouldn’t be difficult. Just enforce dynamic type checks at boundaries. A TypeScript compiler could be customized to inject this.

Can we break thread-safety in the language?

Yes, either with the scheduling of Promises (which form logical threads) or with that of Workers (which are backed by OS threads or processes).

How hard is it to isolate a thread-safe subset of the language in which we can still code some useful applications?

As in Python, very likely feasible, at the expense of most of the existing ecosystem.

TypeScript

Can we break write-safety in the language?

Yes, exactly as in JavaScript.

How hard is it to isolate a write-safe subset of the language in which we can still code some useful applications?

Exactly as in JavaScript.

Can we break read-safety in the language?

Yes: TypeScript’s static types are deliberately unsound (think any and unchecked casts) and are erased at runtime.

How hard is it to isolate a read-safe subset of the language in which we can still code some useful applications?

I haven’t checked in detail, but this feels fairly easy: ban any, unchecked casts and non-null assertions, and inject dynamic type checks at the boundaries with non-TypeScript code, as discussed for JavaScript.

Can we break thread-safety in the language?

Yes, exactly as in JavaScript.

How hard is it to isolate a thread-safe subset of the language in which we can still code some useful applications?

Exactly as in JavaScript.

Ruby

I’m not very familiar with Ruby. It is my understanding that the situation is exactly as in JavaScript.

Java, Kotlin, C# without unsafe, Scala, F#, OCaml

Can we break write-safety in the language?

Yes: any write to shared mutable state can break an invariant, and most of these languages also offer some form of native interop.

How hard is it to isolate a write-safe subset of the language in which we can still code some useful applications?

I suspect that this is entirely possible. Again, one such policy could be: audit all uses of native interop, restrict yourself to well-reviewed libraries, and program with immutable data structures, in a functional style.

As it turns out, Scala, Kotlin, F# and OCaml are designed explicitly to allow this latter point, while Java and C# have progressively gained the features necessary to support this.

Again, this would require throwing away most of the ecosystem and standard library, something that may involve some pushback.

There may of course be some larger subsets that remain write-safe.

Alternatively, there are model-checking and other formal analysis tools for some of these languages, which may help.

Can we break read-safety in the language?

Yes, the remarks regarding write-safety also apply here.

How hard is it to isolate a read-safe subset of the language in which we can still code some useful applications?

The ideas exposed in the write-safe subset would basically work.

Can we break thread-safety in the language?

Yes, exactly as in C or C++.

How hard is it to isolate a thread-safe subset of the language in which we can still code some useful applications?

As in C++, adopting a concurrent functional programming approach would work. This has been demonstrated for OCaml with CamlP3l and Sklml.

Rust

Can we break write-safety in the language?

Yes, but you will need unsafe.

How hard is it to isolate a write-safe subset of the language in which we can still code some useful applications?

Option 1: Don’t use unsafe. Most of the code I have seen or written in Rust doesn’t use unsafe; it’s not particularly constraining.

Option 2: If you absolutely must use unsafe, make sure that every unsafe block does not break write-safety, as recommended by the official documentation. No, really, review those blocks again. Re-read the Rustonomicon. Have them reviewed by a second and a third person. Ideally, a reviewer can even suggest a way to remove that use of unsafe.
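
As a sketch of what such a review should produce (the function and its names are mine, not official guidance): the unsafe block is minimal, the safety argument is written next to it, and an assertion double-checks the invariant:

fn first_byte(bytes: &[u8]) -> u8 {
    assert!(!bytes.is_empty(), "invariant: `bytes` is non-empty");
    // SAFETY: we just checked that index 0 is in bounds.
    unsafe { *bytes.get_unchecked(0) }
}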

In either case, restrict your dependencies to vetted crates/libraries. The Rust toolchain will let you inspect your dependencies (e.g. cargo tree, or third-party tools such as cargo-audit and cargo-vet).

And… that’s it? Indeed. Of all the languages in our list, Rust – despite its strong functional programming core – is the only one with a clear write-safe subset that does not require developers to switch to functional programming.

Alternatively, there are also several model-checkers for Rust.

Can we break read-safety in the language?

Yes, but again, you will need unsafe.

How hard is it to isolate a read-safe subset of the language in which we can still code some useful applications?

Option 1: Don’t use unsafe. Really. The only time I’ve had to use unsafe in production code was when writing a new kind of Mutex, and I had a proof at hand that it didn’t break any invariant.

Option 2: If you absolutely must use unsafe, review every site to ensure that it does not break read-safety, as strongly encouraged by the recommendations. Then review them some more. Add assertions around them. Try to eliminate them.

In either case, restrict your dependencies to vetted crates/libraries. The Rust toolchain will let you inspect your dependencies.

And that’s it. No need to switch to functional programming.

Alternatively, there are also several model-checkers for Rust.

Can we break thread-safety in the language?

Yes. It’s not as bad as in C, because Sync and Send will reject many breakages, but it remains possible to create deadlocks or livelocks, either with OS threads or with Futures.
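
For instance, here is a sketch of a lock-ordering deadlock that safe Rust happily accepts; there is no data race, yet there exists a scheduling under which the program never terminates:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));
    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let t = thread::spawn(move || {
        let _guard_a = a2.lock().unwrap();
        let _guard_b = b2.lock().unwrap(); // this thread locks a, then b...
    });
    let _guard_b = b.lock().unwrap();
    let _guard_a = a.lock().unwrap(); // ...while we lock b, then a: deadlock possible
    t.join().unwrap();
}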

How hard is it to isolate a thread-safe subset of the language in which we can still code some useful applications?

Formal methods are very good at detecting deadlocks, livelocks or any other reliance on scheduling, but I don’t know that anyone has ever tried this with Rust.

I have not checked, but my hunch is that a policy along the following lines would be sufficient: no locks, no async, no detached threads – only scoped threads, each operating on data that no other thread can reach.

This severely restricts our ability to write concurrent code, but doesn’t eliminate it, as scoped threads remain usable. Additionally, this would be easy to enforce with a Clippy lint. As it turns out, this is pretty much a form of functional concurrent programming.
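
As a sketch of the kind of concurrency such a policy would still allow (remember, the policy itself is only my hunch): scoped threads over disjoint data, no locks, everything joined before the scope returns.

fn main() {
    let mut data = vec![1u64, 2, 3, 4, 5, 6];
    let middle = data.len() / 2;
    // Each thread gets exclusive access to its own half: no locks needed.
    let (left, right) = data.split_at_mut(middle);
    std::thread::scope(|s| {
        s.spawn(|| left.iter_mut().for_each(|x| *x *= 2));
        s.spawn(|| right.iter_mut().for_each(|x| *x *= 2));
    }); // both threads are joined here: no scheduling outlives the scope
    println!("{data:?}");
}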

There may be larger subsets that are safe.

Other languages?

I’d love to add Ada, Circle, Go, Haskell, Idris, Zig and others. But I think that this post is long enough, isn’t it?

So, what’s the safest language? What’s the most secure language?

As expressed above, this depends on your spec or invariants and on your threat model. There is no absolute answer.

But Rust is the safest, right?

Out of the box, Rust provides a pretty good safety baseline. But in practice, any evaluation needs to take into account your spec or invariants and your threat model. So it’s entirely possible that other languages will behave better against some specs and threat models.

In particular, I wouldn’t be surprised if Ada, Haskell or Idris provided an even better safety baseline than Rust.

What about statistics?

Oh, right, I forgot something. Researchers have attempted to draw statistics about language safety and security.

Apparently, there is somewhere a list of CVEs classified by programming language. I will admit that I have been too lazy to look for it seriously. I expect that C and C++ are somewhere at the head of the list, but that by itself doesn’t actually mean anything, since Linux (C), BSD (C), MySQL (C), SQLite (C), Postgres (C), Chromium (C++), Firefox (C++) and a few others are both highly monitored by their communities (including the part of the NSA that helps with protection) and targeted by attackers. As ChatGPT grows in use, I expect that Python will rise in the ranks, but that also won’t mean anything conclusive, for the same reasons.

Now, Microsoft has also published a summary of vulnerabilities fixed in their products. This is, again, heavily biased, because Microsoft develops almost everything in C++. However, Microsoft’s conclusion is that ~70% of the vulnerabilities are due to memory corruptions, which these days are prevented by default by every language other than C and C++. Apparently, the statistics are similar in Chromium and in Apple products.

To emphasize

More than 70% of security vulnerabilities spotted by Microsoft, Google and Apple are due to breaking write safety using mechanisms that are available out of the box only in C and C++.

As a reminder, these are developments that involve some of the best security teams in the industry, systematic reviews and audits, large-scale fuzzing (*), sandboxing (**) and decades of accumulated experience (***).

(*) I know that both Google and Mozilla are doing this. I’m guessing that Microsoft and Apple are, too.

(**) Probably not in all applications.

(***) Recall that Chromium team was initially a Firefox development team.

Is there a conclusion that we can draw? The signs suggest that despite taking inhuman levels of precautions to avoid memory corruptions specifically, these teams fail repeatedly at this specific task. This is a problem of both safety and security. As a member of the PL (and formerly FM) community, my first reflex is to blame the tools involved. To prove that C and/or C++ are to blame, however, one would of course need the opportunity to compare against similar programs, used just as much in the wild, but written in different programming languages. As far as I know, such a study is currently impossible because there is no code that fulfills all these criteria, so this is un(dis)provable. However, it is clear that if you are using C or C++ for anything security-critical, you are abandoning lots of tools designed to help you achieve memory-safe code and assuming that you can beat Google, Microsoft, Apple and Mozilla at this game, despite all the assets mentioned above. You are a brave person.

Mozilla decided years ago to progressively switch its efforts from C++ to Rust. Google has followed suit, first with components of Fuchsia and Android, and now Chromium. Microsoft started investing long ago in Managed C++ and C#, and more recently in Rust. Apple is progressively moving many developments to Swift. To clarify, none of this means that either Rust or Swift is better than C or C++, only that these companies (and the C or C++ developers pushing for the adoption of Rust and Swift) believe that they are, for some tasks. If the trend continues, we may end up with a comparison between vulnerabilities in C or C++ and in Rust or Swift, on not just comparable projects but identical ones.

There has also been at least one attempt to study the safety of programming languages by looking at the proportion of bugfix commits vs. non-bugfix commits. Bugs are typically safety violations, whether they are security issues or not. Intuitively, this feels like a valid way to indirectly measure whether there is any unsafety correlated with the use of a language.

Let me copy their results:

From highest to lowest proportion of bugfixes: C++, TypeScript (tied with Objective-C), C, PHP, Python, CoffeeScript, JavaScript, Erlang, C#, Java, Perl, Go, Ruby, Scala, Haskell, Clojure.

Note that this study dates back to 2017. Rust was too young to be in the list. Since Rust and Scala have pretty close safety guarantees, I would imagine that Rust would feature somewhere close to Scala in that table, but that’s just a hunch on my part. One possible conclusion is that the C++, TypeScript, Objective-C, C, PHP and Python code available in the wild contains many bugs. Or it could mean that developers in these languages just fix more bugs. Or are better at labelling what they’re doing as bugfixes. Or that their software has more users, which causes more bugs to be found. It’s really hard to be certain.

There may be something to conclude from the fact that Python lies among the “worse than average” languages while Ruby, which is somewhat similar, lies among the “better than average” languages, but it would take someone smarter than me to figure out what.

So what?

What what? Oh, do you want me to tell you to use Rust?

Use whichever language makes sense for your goal, specs and threat model. There are many use cases in which I will be using Rust if I have a choice. But I’ll happily use a different tool if it feels appropriate.

I just hope that this post can help you navigate the constraints and vocabulary of safety and security. And please, please, do not use this for trolling. We’re all in this together, attempting to improve the safety and security of our code. We all have things to learn from each other and from each other’s tools.

Also, if you feel that I’ve made a mistake and misrepresented $(YOUR FAVORITE LANGUAGE), feel free to drop me a line!

Edit: Lots of feedback, thanks! I keep updating this post.