> Using protobufs meant deferring learning the various serde's of Rust, which has helped me to focus on getting something done.
I have only dabbled once with Serde to generate some JSON, but I found it quite pleasant to use. Similar to Go's JSON library: You basically annotate your types like "this struct can be serialized, the field FooBar is serialized as 'foo'" etc., and Serde generates a serialize method for you (or, to be precise, it generates an implementation of some Serialize trait for your type).
It was less about JSON, but more about reducing the number of libraries that I had to learn in one go. The benefit of using protobufs/gRPC was that I already had a host of microservices that I could choose to interact with.
I'll need to learn serde at some point, but it might be after a while as all the things I'm planning on doing with Rust include protobufs.
Being “statically compiled” isn't some magic property that automatically makes a program run faster. It can even be the opposite in some situations, since you can't benefit from JIT optimizations.
What makes Rust, like C or C++, efficient is not just that they are statically compiled; it's that they have massively tuned optimizing compilers that spend a lot of time on compile-time optimizations.
If you take Go, another statically-compiled language, its compiler is way less aggressive with optimization, both because the Go compiler developers want to keep compile times low (which is probably a good idea in their niche) and because optimization has been much less of a focus than for LLVM or GCC. As a result, Go is slower than C/C++/Rust (in some benchmarks by close to an order of magnitude), despite being statically compiled.
Another example: if you compile C code with `-O0` optimization level, it will still be statically compiled, but it will be way slower than most JIT-ed languages.
The reason C or Rust is fast is primarily increased data locality. In Java, JS, or Python, everything is a pointer. Fetching random parts of memory is expensive, so if you want to keep the CPU fed with data you need contiguous data structures. In theory, nothing other than data locality stops JS from being faster than C, and the proof is WebAssembly, which gets about 50% of the performance of native code. That's really impressive considering that WebAssembly runs inside a sandbox that prevents glaring security issues like buffer overflows on the stack rewriting the return address.
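To make the locality point concrete, here is a small hypothetical illustration: the same numbers stored contiguously (`Vec<f64>`) versus behind one heap pointer each (`Vec<Box<f64>>`). Both compute the same sum; only the memory layout differs, and the contiguous version is what keeps the CPU's caches and prefetcher fed:

```rust
// Sequential reads over one contiguous allocation.
fn sum_contiguous(data: &[f64]) -> f64 {
    data.iter().sum()
}

// One pointer chase per element: this is roughly the layout an
// "everything is a pointer" runtime gives you for a list of numbers.
fn sum_boxed(data: &[Box<f64>]) -> f64 {
    data.iter().map(|b| **b).sum()
}

fn main() {
    let contiguous: Vec<f64> = (0..1_000).map(|i| i as f64).collect();
    let boxed: Vec<Box<f64>> = (0..1_000).map(|i| Box::new(i as f64)).collect();
    // Same result either way; the difference is purely in memory-access
    // patterns, which dominate at large sizes.
    assert_eq!(sum_contiguous(&contiguous), sum_boxed(&boxed));
}
```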
> What makes Rust, like C or C++, efficient is not just that they are statically compiled; it's that they have massively tuned optimizing compilers that spend a lot of time on compile-time optimizations.
Consider that javac and the Java runtime have received plenty of resources and development time (since 1995 or so!).
The overhyped claim in the early days of Java was that it would be "faster than C" due to JIT optimizations. That has certainly turned out not to be the case. Further, Java and all the other managed languages (including the CLR languages) remain unsuitable for real-time applications due to GC pauses.
As this article points out, memory consumption for JVM based systems is also usually vastly higher than for C/C++/Rust/Ada.
C++ in particular has been a dumpster fire of a language, and IMO has set the industry back considerably. It's good to see Rust and a few other languages providing much better alternatives in the same space - languages suitable for lean, real time, high performance applications.
> What makes Rust, like C or C++, efficient is not just that they are statically compiled; it's that they have massively tuned optimizing compilers that spend a lot of time on compile-time optimizations.
Actually, it's lots of money spent across compiler vendors over the last 30 years, plus UB abuse; C compilers were pretty lame in the early 80s.
There was no need to "well actually" that, as it is the same point the parent was making: that money (and UB abuse) is why the tuned compilers now exist.
That baseless assertion makes no sense. UB is a property of the ISO standard and has no effect on how compiler vendors implement the language. In fact, UB is intended to leave vendors free to choose how they implement that behavior. So why do you believe that the freedom to pick how you implement something hinders any development effort?
I agree that the Go compiler doesn't optimize as aggressively, but it's not that simple. I've heard of a rewrite of some legacy code from C++ to Go making the code go faster.
A rewrite often includes a redesign, which can make more of a difference than compiler optimizations.
If they ported back to C++ again, it would probably be faster still. But there are also maintenance concerns to think about.
Once you understand the borrow checker, and how much zero-copy string processing you can do after reading in a file, you will see it's a bit more than being statically compiled.
Basically, what you can do is, for example, read in a file. Now you have one string containing your whole file.
When you then write a parser you can write it so that it uses string slices pointing into your file buffer whenever it references any text from your file, instead of creating new strings.
As long as you don't need to modify anything you don't need to create any copies and the borrow checking compiler will ensure that you don't invalidate any part of your buffer.
For a lot of parsers string allocation is a big part of runtime performance.
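The approach described above can be sketched as follows. A minimal example, assuming a made-up `key=value` line format: the parser returns `&str` slices that borrow directly from the input buffer, so no per-field strings are allocated, and the borrow checker guarantees the buffer outlives every slice:

```rust
// Each returned (key, value) pair is a pair of slices pointing
// into `buf`; nothing is copied.
fn parse(buf: &str) -> Vec<(&str, &str)> {
    buf.lines()
        .filter_map(|line| line.split_once('='))
        .collect()
}

fn main() {
    // In a real program this would come from something like
    // std::fs::read_to_string("config.txt").
    let buf = String::from("host=localhost\nport=6379\n");
    let pairs = parse(&buf);
    assert_eq!(pairs, [("host", "localhost"), ("port", "6379")]);
    // Dropping `buf` while `pairs` is still alive would be a
    // compile error: the slices borrow the buffer.
}
```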
Not really: Java has always had AOT compilers to native code; they just happened to be commercial, and we all know how modern developers love to pay for their tools.
From looking at the traces, the performance difference between the two is minimal (on a per-op basis). This is mainly because both the Java and Rust services don't do much beyond interfacing with Redis/Tile38.
The impressive and important thing is the amount of resources used in performing their tasks.
With a little math you can see that I have 64GB of RAM, which is plenty. What remains scarce is CPU cycles, so if I can do more on the same machine, I'm happy.
My Go is very basic, but I can follow up with a Go impl and compare it with Rust. The code shouldn't take an experienced Go programmer more than an hour, as it's mainly mapping gRPC calls to Redis calls.
Java is not statically compiled to machine code. It's statically compiled to bytecode, which is then interpreted or dynamically compiled to machine code.
So saying Java is statically compiled is not quite correct, in my opinion.
Of course it is a bit slower, as Java does not take advantage of UB, nor does it disable safety checks for ultimate performance.
And depending on the use case there is always the issue of value vs reference types.
But it is good enough for most use cases of typical desktop software, to the point that Oracle's long-term roadmap is to rewrite the remaining C++ parts of OpenJDK in Java (Project Metropolis), using a subset they are designing called System Java.
You can also check Android flagship devices (better for a performance comparison): since Android 5.0, Java has also been AOT compiled to native code. Android 7 changed this to a mix of an interpreter written in assembly, JIT, and AOT with PGO, and Android P will introduce sharing of PGO metadata across devices via the Play Store.
I wrote a blog post about it and open sourced both the Java and Rust code.