sixtyfps is licensed under the GPL, so it's icky for anyone that isn't developing a GPL app and doesn't want to or can't shell out for the commercial license. Iced is licensed under the MIT license.
- "Ambassador" - free commercial license (after approval), in exchange for marketing as being built with sixtyfps and authorization to use your logo and feedback
- Normal commercial - paid
I kinda like the addition of the "Ambassador" tier. Seems like a fair exchange for an in-development framework, and would be a good option for people that are building a new product and don't yet have the revenue to justify the cost during the prototyping phase.
I wasn't aware of the "Ambassador" option and that changes the calculus for deciding between the different frameworks on the basis of the license, although the opaque "after approval" could mean anything and tbh gives me pause. Thanks for enlightening me.
For the record, the rust community has mostly settled around dual-licensing under MIT/Apache for what you can call "foundational" crates ("libraries"), which are both far more liberal than the GPL.
Those are just my words, please don't judge someone based solely on someone else's summary, prefer first-party sources to form your own opinion. What it means seems pretty clear to me: https://sixtyfps.io/ambassador-program.html Just email them about your project and see what they say.
> For the record, the rust community has mostly settled around dual-licensing under MIT/Apache for what you can call "foundational" crates ("libraries")
That's just your opinion, there's no such thing as a blessed license scheme by the community. GPL is a perfectly valid choice if you feel like it
> That's just your opinion, there's no such thing as a blessed license scheme by the community. GPL is a perfectly valid choice if you feel like it
I didn't state an "opinion." I'm stating an observable fact. And don't paraphrase what I said - my words are right there and you don't need to play games with what I said or didn't say: I never said anything was "blessed," I merely pointed out that by and large, the Rust community has - whether you like it or not - mostly settled on MIT/Apache 2.0 (as in, the de facto license). I even couched my statement very carefully as referring to "foundational" crates, even though - as a matter of fact - it applies to the majority of published open source crates regardless of whether or not you consider them "foundational."
Different communities built around common languages or frameworks tend to "settle" (there's that word again) on different licenses. These are typically (but not necessarily) the licenses that the language/framework itself is licensed under. In Rust's case, the majority of what you might consider "foundational" crates - i.e. crates intended for consumption by the broader community as the basis of other projects, in the same way you would depend on the standard library itself, be they runtimes, util libraries, etc. - have, by and large, adopted either MIT or dual-licensed MIT/Apache-2.0 as the default to start with, unless there's a compelling reason (such as starting a business around it!) to do otherwise. There's nothing wrong with picking a different license, it's just less likely for the community to converge around a more strictly licensed option. That's just the way the world works.
sixtyfps is dual-licensed: you can contact them and negotiate a license for commercial use. I think this is a better scheme than straight MIT, which makes no provision for financially supporting the developers.
That's a good point, and it might scare some people away, but the same model already exists for Qt, and is very successful there. Sure, nobody knows how successful Qt would have been had it been MIT licensed. But still, people do buy Qt. Generally, the moment you sell something you can also afford licenses for a library, because money gets involved.
Much as Qt attempts to convince its users otherwise, the core of Qt is LGPL and can be dynamically linked with proprietary apps. SixtyFPS's GPL license has no such option, though the ambassador license may fulfill the same purpose (to some extent, while it lasts).
The more I use Elm, and the more I look at other reactive frontends like React+Redux and Vue, the more I love thinking in The Elm Architecture (https://guide.elm-lang.org/architecture/) aka Model-View-Update as a pattern for GUI development.
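The pattern itself is tiny and language-agnostic. As a hedged sketch in plain Rust (no GUI framework; the "view" here just renders to a String, and all names are illustrative):

```rust
// Model: the entire application state lives in one place.
struct Model {
    count: i32,
}

// Messages: everything that can happen to the app.
enum Msg {
    Increment,
    Decrement,
}

// Update: the only place state changes.
fn update(model: &mut Model, msg: Msg) {
    match msg {
        Msg::Increment => model.count += 1,
        Msg::Decrement => model.count -= 1,
    }
}

// View: a pure function from state to (here, textual) UI.
fn view(model: &Model) -> String {
    format!("[-] {} [+]", model.count)
}

fn main() {
    let mut model = Model { count: 0 };
    // Simulate a stream of user events.
    for msg in [Msg::Increment, Msg::Increment, Msg::Decrement] {
        update(&mut model, msg);
    }
    println!("{}", view(&model)); // prints "[-] 1 [+]"
}
```

In a real framework the runtime owns the loop and the view function returns widget descriptions instead of a String, but the model/message/update/view split is the whole idea.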
To wit: Here's a way to implement it with Python and Tkinter for an old school reactive GUI.
Hmm, I'm not so sure that Tkinter example really solves the problem of performing background work without blocking the UI. Sure, the UI thread doesn't block, but instead the UI state (as the user continues interacting with it) can drift out of sync with the model when the controller thread is busy with other work.
The real issue in that example seems to be that the event loop needs to wait for both UI events and read/write readiness on the serial port. This could probably be handled either with createfilehandler[1] (although it doesn't work on Windows) or by dedicating a thread to handling the serial port and posting messages to the Tkinter event loop as suggested at the end.
I think your last suggestion is dead on - think about the browser event loop where Messages can be triggered all the time from clicks, hovers etc. If you have 4 threads and all those threads are pushing Messages then the update thread only has to push the state change notifications and doesn't have to do much blocking of its own
I had my first encounter with The Elm Architecture through the Go Bubble Tea library (https://github.com/charmbracelet/bubbletea). What a revelation. Such a drastically different way of thinking about UI.
The Elm Architecture is not all that novel. It actually came from game development, from the component-based way of thinking. Take a look at this book, a classic for gamedev:
- https://gameprogrammingpatterns.com/
Elm seems stagnant because Evan is not very transparent and prefers to work in a private branch and release in batches instead of having a release every 6 months; this makes people think it is dead. Elm is still used in production by dozens of companies with huge codebases (100k~400k LOC), and the community is very active - there are amazing projects like elm-pages, elm-review, elm-ui (from mdgriffith), elm-spa, elm-charts, lamdera and others.
Most of the activity happens in the (public) Elm Slack community, so it's not indexed by search engines and is hard to find on StackOverflow, but it's a really effective place to get advice and help with a problem in a more personal way.
I have only very briefly looked at Elm. There is one thing I'm not clear about with the Elm architecture: is the Model a tree with a one-to-one correspondence with the GUI tree? If so, I assume this implies duplicate model entries if the same UI element is in two different branches.
This would seem to imply that the Model is a ViewModel and not the ultimate Model.
No, I don't think your model must necessarily have the same topology as the view. However, the "model" most certainly resembles a view model, it's not generally something that you write to disk (although you could do it if you really wanted to). The difference between this model and the view model from MVVM is that in MVVM, the view model contains runtime objects and handles such as sockets, file handles, database connections, service locators and so on, whereas the model in the Elm philosophy keeps those elsewhere, in the realm of messages and updates.
Quickly? Redux is a state container that supports Flux actions, and Elm is a language with a recommended companion architecture. The comparison would be between Elm and JavaScript/TypeScript+React+Redux, as Elm provides those feature sets out of the box, including Messages, which are the TEA answer to Redux and Flux actions (more or less).
Personally, I find working with Elm's state management features is like heaven compared to dealing with Redux+Typescript's intensely noisy types.
Answering your question, the difference is Elm has a more expressive type system than javascript, and will catch a lot of errors for you, with excellent error reporting from the compiler.
The layout of the code becomes less important, because the compiler will catch most errors (the Elm guides actually recommend not splitting into too many files until it becomes overwhelmingly necessary).
They're really similar. Elm works hard to abstract away the JavaScript universe and create really clearly defined borders regarding being inside Elm land or outside of Elm land. Specifically, interacting with external libs, etc. involves retrofitting them to work via a port, which lets side-effecty stuff come in as a standard message without having to deviate from the basic design. Redux may do something similar (I forget), but the gist of what I'm saying is that Elm works really hard to adhere to its core primitives to limit cognitive overhead for most tasks.
I'm not sure there's a naming conflict here. There is little overlap between x86 decoding and a GUI library. If software is eating the world, then it's similar to Giant Bicycles and Giant Food Stores.
I didn't mean to imply priority: a clash is just a clash, not a sign that either project isn't entitled to the name. It's just an unfortunate namespace problem: language package managers (overwhelmingly) flatten things into a single-level namespace, and so we end up with things like this.
> There is little overlap between x86 decoding and a GUI library.
I work on a graphical tool that visualizes x86 binaries! It uses iced (the decoder), and it could very easily end up using iced (the GUI library).
The main thing I disagree with you on, is that there is a namespace collision because of the package manager, or that there is a namespace collision because both use Rust. If there is a namespace collision, it should be because they're both tools used by developers. The package manager issue is solved easily by adding something to the package name. As for the language, Rust is shaping up to be more like C++ than Ruby or Python, where the main community is more about the language and the standard library and less about the myriad different uses of it.
struct Counter {
    // The counter value
    value: i32,

    // The local state of the two buttons
    increment_button: button::State,
    decrement_button: button::State,
}

impl Counter {
    pub fn view(&mut self) -> Column<Message> {
        // ...
    }
}
this looks like it holds state in the ui layer... wouldn't it be better to keep state in some separate state/model object (struct) and bind the ui to that instead?
not doing it this way couples a lot of logic/state to the ui framework, which means it's much harder to switch ui frameworks later, not to mention testing the logic independently of the ui choice...
This type of state is typically used for "interface state", e.g. "is this toggled open", "which menu item is selected", etc. It's definitely normal to combine that with a 'database' style model layer, as you say.
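That split might look something like this in plain Rust (a sketch with made-up types; `ButtonState` stands in for whatever widget state the toolkit actually needs, e.g. Iced's `button::State`):

```rust
// Domain model: the 'database' style layer. No UI types here,
// so it can be persisted and unit-tested on its own.
#[derive(Debug, PartialEq)]
struct Todo {
    title: String,
    done: bool,
}

// Domain logic is testable with no UI framework in scope.
fn toggle(todo: &mut Todo) {
    todo.done = !todo.done;
}

// Interface state: ephemeral, framework-specific concerns.
#[derive(Default)]
struct ButtonState; // placeholder for e.g. iced's button::State

#[allow(dead_code)]
struct TodoRow {
    expanded: bool, // "is this item toggled open?"
    toggle_button: ButtonState,
}

// The app owns both layers side by side.
#[allow(dead_code)]
struct App {
    todos: Vec<Todo>,
    rows: Vec<TodoRow>,
}

fn main() {
    let mut todo = Todo { title: "ship v0.1".into(), done: false };
    toggle(&mut todo); // no widgets involved
    assert!(todo.done);
    println!("{:?}", todo);
}
```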
I agree with your point. In fact it's the approach I'm doing myself in my current UI lib (which has a very specific use case but the point still applies).
Does this use the text input stack provided by the platforms? Or is it parsing raw keyboard events?
I only ask because if it doesn't use platform text input, that is a non-starter for mobile.
It also would mean having to reimplement platform behavior for text input, like being able to hold Shift on macOS while moving the cursor to change text selection.
Yes! This is because web browsers use the platform text input stack
Browsers are actually a great example of my favorite approach to cross-platform UI, because they sprinkle platform-native widgets throughout the canvas that they render, controlled by a platform-agnostic programming language. Which reminds me that people were trying to use webrender[1] to build native apps in Rust.
Why would a widget library need to deal with input stack directly? For portability it'd provide an API where the caller will dispatch abstract events such as mouse events and keyboard events and already translated character events.
If you think it's easy to make an abstraction around keyboard events, stroll through the Chromium and Blink source tree some time and try to understand all the nuances.
Keyboard and key event input is much harder than it first appears. From multiple platforms to multiple languages to multiple keyboard interface devices to multiple layouts, and you cannot assume that the operating system provides a workable abstraction on its own.
Not sure if that's an apples-to-apples comparison. Generally speaking, a widget library should be able to provide an interface for taking keyboard input. It doesn't need to interface with the OS for this; the caller does, and uses for example an IME or the native OS API to receive the OS-level input. I have in fact written widget libraries this way. Just wondering if I missed some use case that would require the widget to rely on some input system directly.
If you're implementing a text input widget then your widget needs to interact with the platform IME. It doesn't just receive text to append to the buffer in response to key presses. The IME may be context-sensitive and thus need access to the existing text, it may replace previously entered text, it may want to display a pop-up (e.g. a list of candidates) near the cursor position. These things require interfacing with the OS, and the underlying data model varies for each platform.
And what makes you think the widget can't give this information to the user of the library which would Then invoke the IME/whatever input system and give the result back to the widget in an platform independent input/character event?
// pseudo-interface
class TextWidget {
    ...
    WidgetEvent HandleMouseEvent(const MouseEvent& m, ...);
    WidgetEvent HandleKeyboardEvent(const KeyboardEvent& k, ...);
    WidgetEvent HandleInputEvent(const InputEvent& i, ...);
};

// client code
PlatformMsg msg;
while (GetMessage(&msg, ...)) {
    WidgetEvent event;
    if (IsKeyboard(msg)) {
        event = widgets.DispatchEvent(MapKeyboardMsg(msg), ...);
    } // ... other message types ...
    if (NeedsIMEInput(event)) {
        OpenIME(ExtractDetails(event));
    }
}
The client application interfaces with the native/platform/synthesizer API/wrapper library to read/generate events from the app's message queue/script/internet/whatever, maps those into platform-agnostic events supported by the widget library, and dispatches them into the library, where the library decides which widget will receive them, etc. The widget reacts to the event and, in the case of the text input widget, returns an event with the relevant information for invoking some IME system (in this example). The client code interfaces again with the native/platform/synthesizer to do what is asked and then dispatches the result back to the widget, which updates its state.
This design will basically have the same functionality but will split the responsibilities so that the widget system would not depend on the input system. This allows the two to vary independently and remain agnostic of each other thus promoting reuse and testing and simplifying the implementation (IMHO).
The question is not about how to invoke some IME, it's about how to separate these concerns and de-couple the widget system from the input system by delegating the invocation (and required details) to somewhere else. I don't see any real reason that warrants tight coupling.
You've simply moved the problem. Someone ends up having to take responsibility for managing virtual keyboards, internationalization, layouts, etc., and now you've given the chance for every user of the library to do it differently.
I think there are approximately five viable Rust toolkits: Iced, Makepad, sixtyfps, egui, and Druid (in no particular order). Any of these five could evolve to being a real solution, with enough investment. Of course, it's possible another could arise, or that I've given short shrift to one of the other contenders (areweguiyet has a list, but that resource is not super well maintained).
There are some things easier to do in Iced than Druid (especially integration with 3D, which is something we currently don't do at all). On the other hand, we're pushing pretty hard on infrastructure: proper text handling, input methods, subwindows, etc. Currently, Druid relies on platform capabilities for 2D drawing, but my main work right now is a new GPU renderer. And we hope to work with 'mwcampbell on accessibility in the next year (integrating AccessKit).
We've got a great community, but tend not to toot our horn very loudly, so people might not be as aware of the progress we're making. By contrast, sixtyfps is a commercial product (in addition to a GPL release) and has weekly updates.
Have you done any benchmarking on how this compares with Electron on things like binary size, memory usage? Those are usually most common complaints people have with regards to Electron so would be nice to see the improvements this provides.
From my experience, the compressed binary bundle is < 10 MB and memory usage is < 100 MB for a generic app. Of course, memory usage will vary wildly between application domains. With Iced, it's fairly constant, since Rust offers pretty solid protection against memory leaks.
This looks great too, but just to point out if you're looking for something like Electron (but rust, or smaller or whatever) then Tauri is a much closer fit, using web stuff (HTML, CSS, JS, wasm, frameworks, whatever you want) for the UI in the same way.
Tauri, while an improvement on some aspects, is still a webapp in a shell. But it has the problem that you have to deal with browser specific quirks.
From what I understand, this will avoid that problem by compiling to whatever native platform you are building for? Also it'd not just be a webapp in a shell.
Yes, correct I think on all counts. I was just responding to the Electron comparison, saying if that's what you want, maybe look at Tauri, because it's also 'a webapp in a shell'.
For some people in some senses, there's a big advantage to the 'webapp in a shell' model, because HTML/CSS is considered good, or WhizzBang.js is the familiar tool for building UIs, or whatever.
I didn't mean to say it's better (or worse) than this - it's just quite a different (more similar to Electron) approach, that'll suit some and not others.
Well, Tauri (or more specifically the 'ri' part: https://github.com/tauri-apps/wry/) uses different engines on different platforms, so I assume there's the potential for quirks.
Yeah but don't forget that Electron is literally packed with features. For example it can draw SVG, it can play video, it can do 3d stuff, do networking stuff ...
IntelliJ is a lot more than a GUI. There's a ton of language modeling and parsing and lookup stuff going on in there. It has tons and tons of features too.
Electron is hundreds of megabytes for "hello world."
That's right. Accessibility is one of the last things to figure out after all of the other tricky things: proper event handling for sane keyboard navigation, efficient layer clipping / redraw, advanced text layout / style, animations, expanded compatibility across platforms, etc, etc.
And then accessibility brings with it the integration with different OSes via disparate APIs, so it will take some effort to offer a cross-platform API to the library user.
You'll notice that Iced is not >= 1.0 yet, so you shouldn't expect it to be fully-baked.
Accessibility seems like the kind of thing that would need to be baked-in from the start right? Bolting on accessibility after everything else has already been built seems like a recipe for a half-assed solution.
On the one hand that's understandable, but on the other hand it appears to be almost 2 years old already and if people start to actually use it in its current state the lack of accessibility will become problematic.
Neat! I really like the Rust ethos of creating modular cross-platform adopters (e.g. winit, getrandom), and from a brief skim this seems to fit that model.
It's almost like they want to make sure it works for the vast majority of people before they spend time and effort making it work for a much smaller segment of their userbase.
Accessibility is not a bolt-on extra feature, it's core to being able to make a product that suits all your users. Everyone needs, or will need, accommodation sometimes.
When you have an accident waterskiing and end up on crutches, you'll be glad for the accessibilty of doors that open with a button press. When you poke your eye hiking and need to wear an eyepatch for a few weeks, a screen reader might not seem such a niche use case.
These are even kind of extreme examples: frankly, even stuff like dark mode that we take for granted at this point is an accommodation to how some people need or want to use their computers. Migraines are a good example of an (astonishingly) common condition that benefits from flexibility around having to stare at a screen.
None of us gets younger as the days go by, either, and that comes with changed needs. I'm still thirty years from retirement and I find that the 10 point font I used to prefer isn't good enough anymore. Bumping up text size is a quotidian action for me and millions of other people.
ADDITION: Ah, finally found the quote I was looking for. Maybe not directly relevant, but thought-provoking:
> SMS texting was invented [to] figure out a way for deaf people to communicate with one another without speaking. ... Now text messages are universal.
One blind programmer has worked on the Rust compiler. Indirectly he is making a library like this possible. Should we leave him out, because he is a minority?
The odds are good that the fraction of this project's user base—developers writing GUI applications in Rust—who would be interested in Linux support is much more than 2%.
People keep bringing up accessibility whenever UI toolkits are mentioned, but wouldn't it be simpler to just have a CLI interface for whatever your program is doing?
Depends, accessibility is a spectrum. Most people don't need a fully voice/gesture based interface but can get by with good contrast controls and font sizing (as in, the fonts need to be able to get huge). Accessibility is also being able to use the interface only with a keyboard, or only with a mouse or specialized controller with 2 buttons.
I'm someone who, in the last few days, tried using VoiceOver and handing it the responsibility of navigating and reading content, so that I could code with my eyes closed. Ironically, the most annoying interface was the CLI, because I had to keep my focus on the contents of the terminal. Imagine running docker-compose up and getting spammed with a reading of all the logs. I also usually had trouble finding some reference point I could mark as a "milestone" in my head, from which to continue reading.
Also, not all commands are read aloud after executing them. One such command is the humble pwd: every time I typed it, I had to move the VO cursor to read the output. Many characters I would recognize as a programmer are read differently by VO. The Nano editor was completely unusable for me, while in Vim I had real trouble recognizing indentation characters (sometimes they are read and sometimes they aren't) and knowing which mode I was actually in. brew update+upgrade was a nightmare for me. All kinds of separators like #### are read as "number", "number" spam. The same goes for the "clear" command: it was read as hundreds of "space", "space", "space"... I started to think I was going to the Moon or playing Portal 2 [0].
I could go on about my disappointment with the CLI, which used to be among my favorite interfaces, but after this week with VoiceOver I think it must be terrible to be blind, because most of the time I had to use my eyes to figure out where I was and what I needed to click to move to the next element. I was very surprised by the rofi-like search for elements on the screen (the Rotor feature, CapsLock+U). With VO I no longer needed to use the mouse, which is also a nice thing to consider for anybody who has problems with their wrists.
Just looking at the example, I notice that you have to use Enums to pass messages around. E.g. "IncrementPressed" in the example. Wouldn't it be much easier to use closures for that? Managing Enums sounds like an unnecessary administrative burden.
There are several benefits to using enums as opposed to closures:
1. All your actions are in one place. Figuring out what can happen as a result of a series of user interactions can be done without navigating through a component hierarchy.
2. The history of a user interaction is simple to serialize. In case of error, you can dump the user interaction history for this session (just a list of enum values, right?) to disk. This can then be submitted in bug reports for easy reproduction of an issue.
3. It's simple(r) to create a time-travelling debugger that can move backwards and forwards in time, since it can simply re-play the series of messages to a specific point.
4. Since you've separated state changes from the UI, writing unit tests that ensure a series of user interactions results in a specific state is pretty easy. You just send in the enum values representing a specific user interaction, then check what state you've ended up with.
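Points 2-4 all fall out of the message history being plain data. A minimal sketch of replay in plain Rust (illustrative names, no framework):

```rust
#[derive(Clone, Copy, Debug)]
enum Msg {
    IncrementPressed,
    DecrementPressed,
}

#[derive(Default, PartialEq, Debug)]
struct State {
    value: i32,
}

// The only place state changes.
fn update(state: &mut State, msg: Msg) {
    match msg {
        Msg::IncrementPressed => state.value += 1,
        Msg::DecrementPressed => state.value -= 1,
    }
}

// Replay: rebuild any past state from the message log. This is
// the basis of bug-report reproduction, time travel, and tests.
fn replay(history: &[Msg]) -> State {
    let mut state = State::default();
    for &msg in history {
        update(&mut state, msg);
    }
    state
}

fn main() {
    let history = [
        Msg::IncrementPressed,
        Msg::IncrementPressed,
        Msg::DecrementPressed,
    ];
    // "Time travel" to any prefix of the session:
    assert_eq!(replay(&history[..2]).value, 2);
    assert_eq!(replay(&history).value, 1);
    println!("final state: {:?}", replay(&history));
}
```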
> 1. All your actions are in one place. Figuring out what can happen as a result of a series of user interactions can be done without navigating through a component hierarchy.
I'd imagine it would be easier to just call a method on whatever we want to update, instead of lowering all our methods into enums. Why wouldn't we do that instead?
Also, serializing usually isn't enough to get the benefits you describe. You need true determinism for that, and that's very difficult to achieve. If you ever e.g. query the current date without recording it you've just introduced nondeterminism.
If we could really achieve deterministic replayability easily, then I might be okay with the drawback of turning methods into enums.
> I'd imagine it would be easier to just call a method on whatever we want to update, instead of lowering all our methods into enums. Why wouldn't we do that instead?
Easier how? The difference in lines of code is minimal. It's not going to make any meaningful difference in how much time you spend on crafting a solution.
> Also, serializing usually isn't enough to get the benefits you describe. You need true determinism for that, and that's very difficult to achieve. If you ever e.g. query the current date without recording it you've just introduced nondeterminism.
It requires some thought, but it isn't terribly difficult. Most user interactions with a GUI don't require a side effect, and those that do can usually be written in a way that gets represented in the enum. Even when nondeterminism creeps in, it rarely cancels out all of the benefits of having a history of user interaction. Perfect needn't be the enemy of good.
I'm afraid that when dealing with nondeterminism, you *do* need to be perfect. If even one call is nondeterministic, you've corrupted your entire replay.
This is a common challenge with e.g. RTS games, and it means we have to very carefully discover and avoid nondeterministic functions.
For example, did you know that C#'s string's hash code calculation is nondeterministic across runs? The same can be said for any floating point calculations, iterating Rust's/Go's default hash maps, etc.
Also, IME most interactions with GUI do have side effects, GUI apps tend to be very stateful.
> For example, did you know that C#'s string's hash code calculation is nondeterministic across runs? ... iterating Rust's/Go's default hash maps, etc.
I do, but I also know that in Java, C#, Rust and Go it's well documented that you should not rely on the iteration order of the values in a hashmap or hash-based collection, which (at least in Go) is one of the reasons hash-iteration order is randomized across runs.
If a bug is caused by this kind of nondeterminism, you're likely to catch the bug by replaying the history multiple times. In other words, if the history isn't perfect due to nondeterminism, the history is still useful in tracking down bugs because that nondetermism will be revealed over multiple replays.
There are significant advantages to this sort of defunctionalization in a system like this. It adds a bit more administrative overhead at the point of enum definition, but the flexibility and simplification of the other parts of the program will be a win for almost any non-trivial UI application, not to mention the future-proofing it gives you.
Could you give a concrete example of the benefits? I prefer to avoid abstract discussion since it might not apply to 99% of my code.
And by the way, defunctionalization sounds a lot like transforming my code into a bytecode interpreter which interprets my code. This is something I don't need and which would make my code slower and more difficult to read.
The problem with closures is that they are opaque. The only way to interact with them is to call them. Making them the foundation of your system eliminates a lot of useful architectural patterns, such as putting filters on the event stream, being able to prioritize some of them based on certain criteria, being able to ship them between different servers if necessary, creating a generic logger that can operate on the enumeration values without having to have each closure log things, implementing various security checks (you really don't want a security system to be stuck only being able to run a bit of code to see if it's safe), just a whole lot of things that can't be done if all you have are closures.
If you don't care about your values leaving your local OS process, which is a pretty common use case, there's a hybrid approach you can take too, which is to put closures inside your data structures that describe the value, instead of passing raw closures around. You get the benefits of being able to examine the values without executing the handler and being able to filter, decorate, etc., while retaining all the advantages of being able to implement handlers inline. Depending on your local language, various slick implementations may allow this to be transparent to one degree or another, such as implementing some sort of interface/trait for all these values, or implementing __call__ in Python, etc.
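A sketch of that hybrid in Rust (all names here are made up; the point is just inspectable metadata alongside a boxed closure):

```rust
// An event descriptor you can examine without executing it,
// which still carries its handler inline.
struct Action {
    name: &'static str,                // examinable for logging
    priority: u8,                      // usable for filtering/ordering
    handler: Box<dyn FnMut(&mut i32)>, // the inline closure
}

fn main() {
    let mut actions = vec![
        Action { name: "increment", priority: 1, handler: Box::new(|v| *v += 1) },
        Action { name: "reset",     priority: 9, handler: Box::new(|v| *v = 0) },
    ];

    // A generic filter can work on the metadata alone,
    // without calling anything...
    actions.retain(|a| a.priority < 5);

    // ...while dispatch still just calls the closure.
    let mut value = 41;
    for a in &mut actions {
        println!("dispatching {}", a.name);
        (a.handler)(&mut value);
    }
    assert_eq!(value, 42); // only "increment" survived the filter
}
```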
"And by the way, defunctionalization sounds a lot like transforming my code into a bytecode interpreter which interprets my code. This is something I don't need and which would make my code slower and more difficult to read."
It depends on the language you're in. Most OO languages have some way of doing this that doesn't add any speed issues to speak of, it's just a slight rearrangement of code.
I'd also say this is an issue of scale. I don't bother with this in small programs and may just glue everything together with closures, but as program size increases the odds that you'll want to do something that you can't do with closures directly increase. But the languages I tend to work in don't make this much of a hassle usually, either. I think you may be overestimating the expense.
By opaque, do you mean encapsulated? Encapsulation is pretty important to software architecture, in that it enforces decoupling between components. Rust has private visibility for this very reason.
The filtering you're talking about sounds interesting, but the GUI layer is (imo) definitely the wrong place to do it. When's the last time you wanted to filter, prioritize, secure, or distribute UI events?
Imagine if we transformed every method call in our app into enums, it would be an unwieldy and unreadable mess. I'm not saying you're suggesting that, just highlighting that there are definitely some drawbacks here.
So, I don't see why we're being forced to use enums in this case.
I wonder if this is a conscious decision by the Iced folks, and if they first tried closures and ran into some trouble.
Your hybrid idea is interesting! Is that doable with Rust in practice? I'd imagine the borrow checker might prevent it from reading anything outside its scope.
By opaque, I mean the only thing you can do is execute them. While this is encapsulated, it isn't at the right level of control.
" but the GUI layer is (imo) definitely the wrong place to do it. When's the last time you wanted to filter, prioritize, secure, or distribute UI events?"
Ironically, the GUI is already that way. Under the hood, enumerated values were passed to your program by the windowing system. It has to be, because there's a serialization layer there. GUI programs already have no choice but to be structured this way, and indeed, also certainly have some sort of filtering mechanism in them to do things like ignore events it doesn't want to handle. You're probably programming at an abstraction layer that is already sitting above that. You think this isn't useful in a GUI precisely because the GUI framework you're using implemented this pattern for you already!
"Your hybrid idea is interesting! Is that doable with Rust in practice?"
Really it's just creating objects that have various inspectable attributes as needed, that also have some trait or something that allows them to be called in a uniform manner. It's not too hard, it's just a bit more structure around what is already there; you basically copy & paste the closure into the method of a new object, and make the capture explicit in the object.
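A minimal sketch of this idea in Rust (names are illustrative, not from any particular framework): the closure's capture becomes explicit struct fields, and a trait provides both inspection and uniform invocation.

```rust
// The "hybrid" pattern: a struct whose fields make the captured state
// inspectable, plus a trait that lets it be invoked uniformly.
trait Action {
    fn describe(&self) -> String; // inspectable before execution
    fn run(&self, counter: &mut i32); // executed like a closure
}

// The explicit "capture": what would have been a closed-over variable
// is now a named field you can examine, log, or filter on.
struct Increment {
    amount: i32,
}

impl Action for Increment {
    fn describe(&self) -> String {
        format!("increment by {}", self.amount)
    }
    fn run(&self, counter: &mut i32) {
        *counter += self.amount;
    }
}
```

A dispatcher can then call `describe()` to filter or decorate actions before deciding whether to `run()` them, which an opaque closure can't offer.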
One key difference is that the defunctionalized representation is serializable. An example feature that enables is that you can now have a GUI for your interface construction, letting you preview the layout while you build it. The controls are hooked up to the message names, which can be deserialized at runtime and used for dispatch. Attaching live functions, especially closures, to them is not really feasible.
You might be serializing your enums, but unless you serialize a lot more (button IDs, view IDs, etc), the IDs in your enums will be meaningless. I don't think you're suggesting we serialize the entire GUI?
This also seems like the wrong layer to serialize. Normally we would want to serialize the model, not the view. Admittedly, the line is a bit blurry here (and for most small programs) so take my words with a grain of salt.
> I don't think you're suggesting we serialize the entire GUI?
I am, exactly -- to enable WYSIWYG construction/editing of the GUI. Just added another comment saying this below, but consider Android Layout Editor/Apple Interface Builder (or someone else mentioned Qt Creator, which I'm not familiar with but has the same idea).
There's only one MyFancyTextEditor widget definition, a blueprint so to speak. Its IDs refer to other things inside that blueprint.
But at runtime, we might instantiate two different MyFancyTextEditors, one for view A and one for view B. They probably shouldn't share IDs, lest we run into confusion.
I think WYSIWYG is more like defining a class, whereas at runtime we'd be serializing instances.
Feel free to correct me if I'm wrong, I admit I've never been down this line of thought!
A xib is an XML description of a GUI layout. Note the `<connections><action ...>` element inside the `<button>` -- this is defining what happens when the button is tapped. The connection becomes active at runtime when the file is rehydrated into live objects. The `selector` string identifies the name of the method to be called. (This exact mechanism relies on ObjC, but the principle can be applied to other systems.)
My point was that when we hydrate, we don't use the same IDs as what was in the serialized data. Otherwise, we'd have conflicts when we hydrate the same definition multiple times.
I'm sorry, I'm not following your point -- what are the IDs used for, in your example?
(For the XIB system, they have no purpose at runtime; they're solely for the archive. Objects get their own identity as usual when they're instantiated.)
In practice, when we receive a GUI event, we then modify something in our surrounding environment, usually some model, controller, or other view. To interact with any of those in Rust, we need to refer to it by some sort of ID.
This is the same ID you're referring to, which objects get when they're instantiated, at least for views (models and controllers likely do something similar).
My central question is: why would we want to serialize those IDs?
Also, WYSIWYG might not be the best example, as it's a GUI that produces another GUI, hence some confusion. If you think about this in terms of a simple app that maintains a Customer database, you'll see what I mean; we wouldn't ever want to serialize a button click event there, especially since the hydrated IDs aren't stable and we don't know what they refer to unless we serialize the entire app UI state.
Maybe I'm missing something particular about Rust? When the archive is loaded at runtime, we don't need explicit IDs: we have a normal object graph, with references between things. The instantiated widgets have identity as objects in memory. The string `id="ykl-6F-b5r"` isn't relevant -- or used at all.
You can create as many instances of `Button` as you want, and they are different, and their targets (the other widgets they message when tapped) are different because they were either created alongside or, if they were live before the archive was deserialized, a reference was made to them as part of the deserialization process.
I think what you might be missing is that, in (safe + idiomatic) Rust, we actually can't have references between things (unless we want to make our entire GUI immutable, which would be a tad silly).
Since we can't use references, we need to "refer" to other objects via IDs.
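To make the ID idea concrete, here's a small sketch (all names hypothetical) of the pattern many Rust GUIs use: widgets live in a central store, and handlers name their targets by ID rather than holding a reference, so no long-lived borrow ever exists.

```rust
use std::collections::HashMap;

// An opaque handle standing in for a reference.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct WidgetId(u64);

struct Label {
    text: String,
}

struct Ui {
    labels: HashMap<WidgetId, Label>,
}

impl Ui {
    // The lookup happens only at the moment the event is processed,
    // so the borrow checker sees one short, unambiguous borrow.
    fn set_text(&mut self, id: WidgetId, text: &str) {
        if let Some(label) = self.labels.get_mut(&id) {
            label.text = text.to_string();
        }
    }
}
```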
Thanks, I'll have to look into how GUIs in Rust work then; that sounds utterly bizarre to me :)
I thought you could take references to things in Rust. Also, in particular, I don't understand why you'd want to use closures -- inherently reference-y -- when everything else is (I guess?) a non-reference plain value.
You might be right, but I feel that by filling out Enum structures I am helping the tools, whereas the tools should be helping me.
And as a terminal-aficionado I don't want to use Design/Creator tools.
So ... what I really want is a simple API with closures. And there should be no problem with that, as the Design/Creator tools can simply emit more complicated code if they want (e.g. closures that use enums).
> I don't think you're suggesting we serialize the entire GUI?
Yup, that's exactly what people do in some GUIs. Tons of GUI frameworks and systems rely on varying degrees of defunctionalization and serialization. Microsoft COM and the X server, for example. I don't know exactly how granular they get but I believe it's quite granular in what you can do over RPC.
Enums are a natural fit for event handling. If you write an API that can manage 50 different kinds of events, a single enum consisting of 50 variants would be able to facilitate that. A closure isn't identifiable to the request handler. It's just a thing that does stuff, when you're ready to call it. How do you know what stuff to do with it?
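A minimal sketch of what this enum-based dispatch looks like (patterned after Iced's counter example, with names simplified): every possible event is a variant, so the handler can identify, log, or filter a message before acting on it.

```rust
// Every event the UI can produce, enumerated in one type.
#[derive(Debug, Clone)]
enum Message {
    IncrementPressed,
    DecrementPressed,
    InputChanged(String),
}

struct Counter {
    value: i32,
    input: String,
}

impl Counter {
    // One central place decides what each message means.
    fn update(&mut self, message: Message) {
        match message {
            Message::IncrementPressed => self.value += 1,
            Message::DecrementPressed => self.value -= 1,
            Message::InputChanged(s) => self.input = s,
        }
    }
}
```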
What I mean is: binding an enum to an action, which then sends a message containing the enum, deciphering it and executing code based on its value, and not forgetting to define the enum somewhere, when ... you could simply bind the closure to the action.
The iced package requires the messages to implement the Debug, Clone, and Send traits, none of which are available for closures. I was able to implement something similar to your example[0] but it only supports plain function pointers for the callbacks. The compiler wasn't able to derive a sufficiently general Debug trait for the function pointer due to an issue with the lifetime of the argument, so I had to implement that myself as well.
[Edit: Ignore this next paragraph; as steveklabnik pointed out, this change has already been implemented and "Message(|c| c.value += 1)" is accepted by the latest stable version.] Incidentally, as long as there are situations where only function pointers can be used and not closures it would be really nice to have some support for anonymous function pointers in Rust (with the fn type and not just the Fn trait) so that one could write e.g. "Message(fn |c| c.value += 1)" instead of "Message({ fn f(c: &mut Counter) { c.value += 1 } f })". Or just infer the fn type for "closures" which don't actually close over any variables without the need for an extra keyword.
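For reference, the coercion mentioned in the edit works because a closure that captures nothing coerces to a plain `fn` pointer on stable Rust. A small self-contained illustration (the `Counter`/`Message` names mirror the discussion, not any real API):

```rust
struct Counter {
    value: i32,
}

// A message carrying a plain function pointer, not a boxed closure.
struct Message(fn(&mut Counter));

fn main() {
    // The closure captures no environment, so it coerces to `fn(&mut Counter)`.
    let msg = Message(|c| c.value += 1);

    let mut counter = Counter { value: 0 };
    (msg.0)(&mut counter);
    assert_eq!(counter.value, 1);
}
```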
Bare function pointers are, but not state-capturing ones (see my other comment for why that is). Without state capture, closures really aren't more ergonomic to use than enums in most cases, and are often less so.
It seems you're right. That didn't work the last time I tried it—which I admit was some time ago—but it does work in the latest stable version of rustc.
The enum of all possible events is still there, it's just spread over dozens of different callback properties belonging to several different objects. Splitting out via an enum to a big `handle` type function makes it possible to detect whether you have handled all possible events at compile time instead of having to raise an exception at runtime.
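Concretely, the compile-time exhaustiveness check looks like this (a toy `Event` type for illustration):

```rust
enum Event {
    Clicked,
    Scrolled(i32),
    Closed,
}

// One exhaustive `match`: if a new variant is added to `Event`,
// this function stops compiling until the new case is handled,
// instead of raising an exception at runtime.
fn handle(e: Event) -> String {
    match e {
        Event::Clicked => "clicked".to_string(),
        Event::Scrolled(delta) => format!("scrolled by {}", delta),
        Event::Closed => "closed".to_string(),
    }
}
```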
I see little value in seeing that enum of all possible events in one place.
On the other hand, I want to see what action is performed by that button, and there is value in having small event handlers close to the things they handle.
Ironic. Desktop UI APIs used to be designed around enums of messages as the primary surface (e.g. Win32), but then UI frameworks wrapped all that to allow us to write code that looks exactly like you suggested. Now it's back to messages again... I wonder when we get back to wrapping them.
Message passing is a lower level construct. It's nice in the way that you can have multiple subscribers, but for *most* applications you won't need it.
I mean, we're not really doing this anymore, right?
No one has addressed the actual issue here, unfortunately. There's a good reason why you'll often find message passing done with enums in Rust where other languages use closures, and it's not ideological.
In Rust, if you want to access shared state, you need to hold onto a reference to it. If a reference captured by a closure is unique, the compiler conservatively prevents you from not only accessing that variable, but anything that could transitively lead to it, and if it's shared, the compiler prevents you from mutating anything that could lead to it. Additionally, you can't move or destroy the object until there's no chance any value transitively referenced through it is still borrowed--which across threads, often means pretty much no borrow length will suffice. In a UI framework sending stuff across threads, that's a totally unacceptable state of affairs. By contrast, enums that declaratively specify what you want to modify have no such issues, because the state only needs to be accessed when you actually want to run the event.
There are three ways to work around the lifetime part, but they're not ideal. The most common is to use something like Rc or Arc, which allow reference counting (a primitive form of garbage collection). Reference counting always makes your objects shared, so it guarantees you'll have to tackle the mutation problem somehow, which can be unsatisfying; it also has a runtime cost (which can be quite significant) and can cause memory leaks if you have cycles in your data (though this is rarely an issue in UI frameworks). Secondly, you can architect your application around things like arenas, which give you "scoped" access to lifetimes--these can be really efficient with the right use case, especially pointer-spaghetti graphs, but can restrict a lot of the patterns you'd write in Rust, since the arenas have to outlive everything you want to do with the objects. Finally, you can separate your shared data from the closure--which works great but now you are restricted to "bare" functions or functions that only capture unrelated state. The closure can't specify the state it wants to update, it has to be directed to the object--which is much less ideal (since now the closures aren't universal in any way, as they need to be passed in the state they want to modify).
You can work around the mutation part in two ways, but neither is that great. One is, again, to ban closures from capturing any environment state, so the data is still stored separately and has to be passed in, which has the limitations mentioned previously.
The other is to use "interior mutability" which uses some combination of runtime checks and other restrictions to guarantee that you're only accessing the element one at a time. Almost invariably, these mechanisms are slower at runtime, restrict your types somehow (e.g. making them non thread safe), can cause your program to panic if you misuse them, or are just plain annoying to use (or some combination of all of these). So when you can avoid this option, you do!
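As an illustration of the interior mutability route, here's a sketch of what a callback over `Rc<RefCell<T>>` looks like, including the kind of misuse that becomes a runtime panic rather than a compile error:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared, reference-counted, runtime-checked mutable state.
    let state = Rc::new(RefCell::new(0));

    // The closure captures its own clone of the Rc, so it owns a
    // shared handle and can outlive the original binding.
    let on_click = {
        let state = Rc::clone(&state);
        move || *state.borrow_mut() += 1
    };

    on_click();
    assert_eq!(*state.borrow(), 1);

    // The failure mode moves to runtime: holding a borrow while the
    // callback runs would panic instead of failing to compile.
    // let guard = state.borrow_mut();
    // on_click(); // panics: already mutably borrowed
}
```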
So basically: enums make a ton of sense in Rust because they totally avoid these borrow checker issues. Your intuitions about using closures for this stuff in functional or garbage collected languages don't apply here--patterns that make a lot of sense in them don't work at all in Rust. That's one big reason there are so many Rust UI frameworks rather than just wrapping the C ones or using the same patterns you see in functional languages--the usual patterns simply don't work well.
That's not to say there aren't some upsides to the approach from a reasoning standpoint. A big one is that it enforces "single writer" control flow--just one place is responsible for all the state updates, so you don't have to worry about synchronization weirdness due to lots of places updating the state on their own. IME, this helps a lot with reasoning. But not everyone agrees that that outweighs the convenience of being able to use a callback, which I totally get--it's just that the "convenient way" isn't really convenient in Rust at all. And it's this that causes all these libraries to be designed this way (at least that's been my experience), not theoretical concerns about serialization and stuff which I agree is basically a nonissue in practice.
Thank you for this thorough and balanced explanation! This is probably why all the GUI frameworks in Rust (that I've seen) require message passing like this. You have a really good handle on the borrow checker's benefits and limitations.
I'm trying to make a language (https://vale.dev/) that addresses the borrow checker's limitations (by making it automatic and opt-in) to handle cases like these, if you want to drop by our discord (https://discord.gg/SNB8yGH), I'd love to get your thoughts!
I may check it out more later! From what it sounds like, Vale is an instance of option 2, arenas (which are the same as regions), but automatically managed--reducing their complexity compared to Rust--combined with implicit use of Rust's `Cell` (the interior-mutable type with the fewest compromises, in a single-thread setting).
Unfortunately Rust as a language pays the price for its flexibility here. By not mandating any particular memory management or interior mutability strategy, it makes even the safe cases like this (which are relatively "easy" to validate for safety) much more complex. So if your use case is in the intersection that Vale supports, I agree this can be a great point in the tradeoff space! FYI, you might want to check out old work on the MLton optimizing compiler, which also did automatic region inference to avoid garbage collection.
Vale uses the term "regions" a bit differently, as a way to represent a collection of objects that are all treated the same (usually w.r.t. memory management). We have something called the "region borrow checker" which is kind of like Rust's borrow checker, but without the aliasability-xor-mutability restriction.
We don't use Cell implicitly, we instead rely on "type stability" for mutable aliasing, generational references for memory safety, and scope-tethering to enable zero-cost dereferencing.
Relying on "type stability" (which I assume means no structure-altering mutations are allowed) is pretty similar to "just use atomics" as a strategy, which is what languages like Java and C# employ (it also has some similarities to the "value restriction" in languages like Standard ML and OCaml).
However, this can be quite restrictive depending on your definition of type stability (and what optimizations are allowed); for instance, Rust considered changing a value to null to be a structure altering mutation and is able to optimize around that, while it sounds like Vale cannot.
The rest of it sounds a lot like Rust (though I'm not sure whether regions are more like arenas or lifetimes, I'd have to look at the implementation, but generally speaking people call the same kinds of things regions even if they weren't trying to coordinate, and the performance tradeoffs you're describing sound really similar to those of regions in other languages). Overall, like I said, it sounds like a very interesting point in the tradeoff space!
Edit: Also, I misspoke when I said MLton, what you actually want is the similarly named sister project MLkit (http://www.mlton.org/MLKit ; see http://www.mlton.org/Regions for details on why they did not end up going with this strategy).
Type stability lets us mutably alias, which atomics and Cell do not. It doesn't really have a Rust counterpart, a struct containing a bunch of Cells cannot be mutably aliased. If you're curious, this (draft) page has some information on it: https://vale.dev/vision/safety-2-type-stability
Vale also considers changing between Some<T> and None<T> to be a type-unstable operation, and the automatic borrow checker (HGM) detects the possibility for that (because the field was marked non-final), to not do auto-borrowing for it. We might decide to use something called the Undead Cycle to help with the Opt<T> case, though it comes with interesting tradeoffs.
There's a lot more to Vale, such as how our region borrow checker can enable zero-cost structured concurrency without the Send/Sync requirements, and the better RAII support. In the end, the only similarities we really share with Rust are the syntax and (hopefully!) the speed.
Vale regions are tracked like lifetimes, and have a concept of immutable vs const vs readwrite to aid in optimization, but they come with an implicit interface pointer to the containing objects' allocator, similar to Odin but memory-safe.
Oh, MLKit! I've looked into MLkit a bit before, but wrote them off because their regions needed a backup GC, where Vale's regions don't.
Keep an eye out, we'll be posting a lot of articles to HN soon ;)
> Type stability lets us mutably alias, which atomics and Cell do not.
Well, that's not the case :) Both Cell and atomics let you mutably alias, in the sense most languages mean. They don't let you mutably alias in the Rust `&mut` / "restrict" sense, which is far more powerful and would be unsound in any language, but they do let you mutably alias.
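For the record, this is what "mutable aliasing" through `Cell` looks like in the sense most languages mean: two live shared references, both of which can mutate the value.

```rust
use std::cell::Cell;

fn main() {
    let c = Cell::new(1);

    // Two shared aliases to the same cell, both live at once.
    let a = &c;
    let b = &c;

    a.set(2);           // mutation through a shared reference
    b.set(b.get() + 1); // and through the other alias

    assert_eq!(c.get(), 3);
}
```

What `Cell` doesn't give you is a `&mut`-style guarantee of exclusivity, which is the stronger property Rust's borrow checker enforces.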
> Vale also considers changing between Some<T> and None<T> to be a type-unstable operation, and the automatic borrow checker (HGM) detects the possibility for that (because the field was marked non-final), to not do auto-borrowing for it. We might decide to use something called the Undead Cycle to help with the Opt<T> case, though it comes with interesting tradeoffs.
That's interesting and good to know! I'm glad you are taking this seriously.
> There's a lot more to Vale, such as how our region borrow checker can enable zero-cost structured concurrency without the Send/Sync requirements, and the better RAII support. In the end, the only similarities we really share with Rust are the syntax and (hopefully!) the speed.
A lot of the stuff in Rust is also found in other languages :) In this case, it sounds to me like Send/Sync are thoroughly built into your language, to the point that annotations are not needed--which is very nice, but not completely unique (this is the basis for memory safety in C# and Java, after all!), but may be so when combined with regions. From what it sounds like, though, I'm having a very hard time telling the difference between regions and (generational) arenas--they sound like they have almost exactly the same tradeoffs. Perhaps I'm missing something?
Overall, seems like an interesting language, and I will definitely stay on the lookout.
Thank you for this explanation. I must admit that not being able to use the power of closures feels like a huge disappointment. Unless anybody can convince me otherwise, I think I'll write my GUIs in a language with a garbage collector, and perhaps use Rust for smaller projects which are at a lower level.
That's a perfectly reasonable conclusion--and callback patterns are definitely, by far, the single case where Rust's restrictions are the most onerous. In most other cases, where closures are passed for more temporary use, these restrictions don't matter nearly as much, fortunately, or else they might not even be part of the language. And in a few cases (like automatic parallelization) the restrictions are actually incredibly helpful (since if a closure's environment can't be written to while shared, it's generally safe to execute in another thread!). So, whether Rust is a good fit for your project or not really depends on your usecase.
I've never dabbled with multi-threading in Rust, so I'm curious how far it can go in practice.
In our closure, we probably can't have any RefCell or Rc or anything that could reach either of those things, because those can't be safely reachable by multiple (even read-only) threads.
And I think these closures can't contain references to the outside world? I might be wrong on this one. And maybe structured concurrency could help here, though RefCell and Rc might confound that.
How well do Rust closures work in practice? Have you found a good use for them?
> I've never dabbled with multi-threading in Rust, so I'm curious how far it can go in practice.
Extremely far. The strict mutability rules are basically your downpayment for the easiest safe parallelism of any production language (that isn't purely functional, anyway). In particular the Rayon library (https://docs.rs/rayon/1.5.1/rayon/index.html) allows you to make your sequential code parallel by merely changing `.iter()` to `.par_iter()`, and experimentally offers comparable performance to best in class work stealing schedulers, often outperforming manual C and C++ implementations of the same algorithms.
> In our closure, we probably can't have any RefCell or Rc or anything that could reach either of those things, because those can't be safely reachable by multiple (even read-only) threads.
Assuming you're talking about Vale, I'm curious how you actually enforce this in practice. `Cell<T>` is basically an "overhead free" type in a single-threaded context, but it's not safe to access from multiple threads since it lets you mutate through a shared reference. To get around that, you either need to have a different type for shared references from other threads, or you need something like GhostCell, which separates permission to read/write to the Cell from access to the Cell itself. I would have to learn more about how your language works to say more.
If you're talking about in Rust, yes, you're correct. Using `Cell` or `RefCell` prevents use with automatic parallelization APIs, which is one big reason they aren't just done automatically for you (as they effectively are in many other languages). However, there are thread-safe mutability types with various tradeoffs: atomics in specialized cases where the type has native atomic operations, mutexes and reader-writer locks for thread safe concurrent mutable access, and APIs like GhostCell that use type-level trickery to make access safe without overhead, at the cost of a more complex API (something like GhostCell probably has the best performance tradeoffs for the UI case, but it is not very easy to use!). For `Rc`, your option is more straightforward: just use `Arc` and you are thread safe again (assuming the type you're protecting is).
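As a baseline for the thread-safe options mentioned, the straightforward `Arc<Mutex<T>>` combination looks like this:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc provides shared ownership across threads;
    // Mutex provides exclusive access, checked at runtime.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }

    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 4);
}
```

Swapping `Rc<RefCell<T>>` for `Arc<Mutex<T>>` is the standard move when single-threaded shared state needs to cross a thread boundary.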
> And I think these closures can't contain references to the outside world? I might be wrong on this one. And maybe structured concurrency could help here, though RefCell and Rc might confound that.
Rust closures can contain references to the outside world--and usually do! The references are just heavily restricted, as the text notes, to the point that it's hard to use them for callbacks (it's still possible, but there's a lot of ceremony).
> How well do Rust closures work in practice? Have you found a good use for them?
Hm, I should have explained this up front, since apparently I did a bit too good a job explaining the downsides! Closures are used very heavily in Rust, almost everywhere in fact. They're a cornerstone of the Rust iterator API (probably the most commonly used trait in all of Rust), Rayon's parallelization API, numerous methods on Option, Result, etc., and are exploited by tons and tons of functions. Closures are very restricted, but you also get a lot as a result--they not only parallelize well, they are zero overhead abstractions that frequently inline to the same thing as hand-rolled SIMD (as noted here: https://twitter.com/badamczewski01/status/138634513508098458...) which means people are never worried about performance overhead when including them in abstractions. They can also be boxed, reference counted, etc. to erase their types in cases where you're okay with paying for dynamic dispatch (in which case they more closely resemble closures in typical languages).
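A quick taste of how pervasive (and ergonomic) closures are in everyday Rust, using iterator adapters and an `Option` combinator:

```rust
fn main() {
    // Closures passed to iterator adapters inline to loop-level code,
    // so this compiles down to roughly the hand-written equivalent.
    let evens_squared: Vec<i32> = (1..=10)
        .filter(|n| n % 2 == 0)
        .map(|n| n * n)
        .collect();
    assert_eq!(evens_squared, vec![4, 16, 36, 64, 100]);

    // Option/Result combinators take closures too.
    let doubled = Some(21).map(|n| n * 2);
    assert_eq!(doubled, Some(42));
}
```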
So, they work extremely well in practice! What don't work well in practice in Rust are "callback registration" patterns in general. In fact, this is so much the case that I'd say finding a way to do things without callbacks is the main reason people need to refactor their applications for Rust.
> So, whether Rust is a good fit for your project or not really depends on your usecase.
One of the biggest problems in software engineering seems to be that requirements change all the time. So unfortunately I can't know beforehand whether Rust fits my usecase unless the scope of the project is very limited. I do like the safety guarantees of Rust but now feel that they might conflict too much with flexibility at some point.
Using enums makes you build a finite state machine, making all effects/state transitions explicit and concentrated in one place. This enables you to see what this particular piece of code is able to do. Methods could obscure what might happen. Using inline closures would make it hard if not impossible to get an overview of what might happen.
I think the example in the readme is presented and worded in a confusing way: "Finally, we need to be able to react to any produced messages and change our state accordingly in our update logic:". Since you're referencing the Elm architecture, it's 1. send msg, 2. update, 3. view. This is presented as 1, 3, 2. The code looks right, but the example seems out of order and consequently confusing.
The most recent time I evaluated Iced (Aug 2020), it didn't support any of my Linux machines due to broken/non-existent OpenGL support. With the progress the wgpu renderer has made, I'm pleased to report the situation has improved, although on Linux my initial attempt landed me here: https://github.com/hecrj/iced/issues/1013
I've had no issues with Iced on any other platforms, fwiw.
Congrats on plugging one of the biggest holes in the Rust ecosystem. Looks like an API I'd love working with. I'd love to imagine an alt universe where someone builds an entire desktop off this, the way GNOME was formed from GTK.
I dislike the not-so-recent trend of companies/projects/libraries that co-opt common words as their name. "Rust" and "elm" and "iced" wouldn't normally have anything to do with computing, but it still feels like a small intrusion on our language.
You mean like Apple, BASIC, Lisp, Explorer, Safari, Chrome, Android, ...? This practice has existed from the start and I don't see it as an intrusion into the language. We humans are able to consider context (which is necessary for language either way), and that mostly takes care of it. When someone says "I've got rust on my PC's power supply" or "my python escaped" it's still very obvious it's not the programming language.
And an outsider might have trouble following the conversation, but that would happen no matter what words are used. Whether we say "Elm" or "C++", the target audience knows what's meant and others wouldn't be able to follow either way.
> it still feels like a small intrusion on our language.
All languages evolve over time, despite official bodies trying to reject foreign influence. (IIRC the French government has a department that tries to limit loan words from other languages, but good luck with that given the internet.)
So what's the actual problem with it? Only downside I see is google-ability and occasional conflicts when two projects use the same name. Both of those exist with any naming Scheme (ha!) and imho haven't really ended badly for projects or their users.
Any idea if mobile support is coming? That’s starting to feel table stakes as time goes on I think, even more important than desktop which can always fall back to web
I've been playing with the iced examples, particularly the styling example. The code is beautiful and the result is clean and responsive and fast. Love this.
GUI state mutations are triggered by messages passed around the system. Messages can be generated from user interactions or by asynchronous subscriptions (network socket, etc).
https://news.ycombinator.com/item?id=26958154
https://news.ycombinator.com/item?id=24919571
It's great to see GUI frameworks being designed in Rust, countering the trend of doing everything in Electron.