Everything else being equal, finding bugs statically is better than finding bugs at runtime. Everything else is rarely equal.
In the comments on my last post, someone asked about the benefits and adoption of formal verification compared to testing. I have some-but-not-a-ton of experience with formal verification, but I have a lot of experience with static analysis tools.
For those of you who don't know, formal verification involves making a mathematical specification for what your code is supposed to do, and then running a tool that can prove it meets that specification. Most people reading this post will now know why most developers don't do it.
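To make "a mathematical specification plus a proof" concrete, here's a minimal sketch in Lean 4 (the function and theorem names are my own invention, and it assumes a recent toolchain where the `omega` tactic ships in core):

```lean
-- A tiny function...
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- ...and its formal specification: the result is an upper bound on
-- both arguments, for *all* possible inputs. The proof is checked by
-- the tool; if it doesn't hold, the file doesn't compile.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega
```

Even for a two-line function, you have to decide what the function is *for* and write that down precisely, which is exactly the work most developers are not signing up for.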
(It's worth a reminder - tests *are* a specification for your code. They're just one that's written in the language in which you wrote the code instead of being written in math.)
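To see what that means in practice, here's a plain-Java sketch (hypothetical `myMax` helper, no test framework) where a handful of assertions play the role of a specification - one that only speaks for the inputs we bothered to write down:

```java
// Tests are specifications too: this states, in Java rather than math,
// that myMax returns an upper bound on both of its arguments -- but
// only for these particular inputs, not for all of them.
public class MaxSpecTest {
    static int myMax(int a, int b) {
        return a <= b ? b : a;
    }

    public static void main(String[] args) {
        // Each case is one point of the specification.
        int[][] cases = { {0, 0}, {1, 2}, {2, 1}, {-5, 3} };
        for (int[] c : cases) {
            int m = myMax(c[0], c[1]);
            if (m < c[0] || m < c[1]) {
                throw new AssertionError("spec violated for " + c[0] + ", " + c[1]);
            }
        }
        System.out.println("all cases pass");
    }
}
```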
It's a truism that it is better to find a bug at compile time than at test time, at test time than at code review time, and at code review time than at deployment time. Each additional step bakes in problematic decisions, giving them the chance to spread to other code, and, eventually, to users. Also, every second that passes between when you write the code and when you notice the problem is a second your brain might use to forget what the code does.
I wrote a post a couple of months ago about research done at Google to identify what the ideal static analysis tool looks like. In short: you don't have to do any work to make it run, it never gives you false positives, it points out real problems, and it tells you exactly what you need to do to fix them. Tools like this are great, and are why Google leans heavily on go vet, clang-tidy, and Error Prone (errorprone.info).
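For a taste of the kind of bug these tools catch for free, here's a sketch of a classic Java mistake: `==` on boxed integers compares references, not values, and only appears to work for small cached values. Error Prone has checks in this general family, though I'm not citing a specific check name here; the runtime behavior below assumes default JVM settings (the boxing cache covers -128 to 127):

```java
// == on boxed Integers compares object identity, not numeric value.
// Small values come from a shared cache, so the bug hides in testing
// and surfaces in production when values get bigger.
public class BoxedEquality {
    public static void main(String[] args) {
        Integer small1 = 100, small2 = 100;    // inside the Integer cache
        Integer big1 = 1000, big2 = 1000;      // outside the cache
        System.out.println(small1 == small2);  // true (same cached object)
        System.out.println(big1 == big2);      // false (distinct objects!)
        System.out.println(big1.equals(big2)); // true -- what was meant
    }
}
```

A tool that flags the `==` at compile time, with a "use .equals() instead" suggestion, costs the developer nothing - which is the whole point.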
The more typing someone has to do to make a static analysis tool work, though, the more likely they are to write a test instead (possibly adding a dynamic analysis tool).
Formal verification tools are at the far end of this - they tend to make you write specifications for your code, which they then prove the code obeys. Everyone who has worked on a codebase in the broader tech industry knows that most specifications last (at most) as long as it takes to write the code, at which point you realize you got them wrong and change them. This makes formal verification a lot of work for very little reward. The only people who write formal specifications for their code are the ones who are writing code that will kill you if they get it wrong - code to keep airplanes flying or to keep nuclear reactors from melting down.
Most Java development is way back at the other end, where we still beg and plead with people to add nullness annotations (it will help when the jspecify.dev project finishes). This is why sneaking static analysis features into programming languages is so important (I hear you, Kotlin fans).
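Here's a sketch of the kind of latent bug those annotations exist to catch (the names are mine; the annotations are shown as comments so the snippet compiles without any extra dependency, since JSpecify's `@Nullable` would normally come from its own artifact):

```java
import java.util.HashMap;
import java.util.Map;

// A latent NullPointerException that a nullness checker, fed
// @Nullable/@NonNull annotations of the sort JSpecify is
// standardizing, would flag at compile time.
public class NullnessDemo {
    // Map.get returns null for a missing key: effectively @Nullable.
    static int portFor(Map<String, Integer> ports, String service) {
        Integer port = ports.get(service); // would be @Nullable Integer
        return port; // unboxing a possible null -> NPE at runtime
    }

    public static void main(String[] args) {
        Map<String, Integer> ports = new HashMap<>();
        ports.put("http", 80);
        System.out.println(portFor(ports, "http")); // fine: prints 80
        try {
            portFor(ports, "gopher"); // missing key
        } catch (NullPointerException e) {
            System.out.println("NPE the annotations would have caught");
        }
    }
}
```

A checker that knows `get` can return null refuses to let you unbox it unchecked - no test case for the missing key required.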
At some point, people's interest in typing annotations drops off, and for everything beyond that, we have tests.