Refactoring std for ultimate portability

Rust is really super portable. It supports a lot of platforms. But not enough ... not nearly enough. Rust wants to be everywhere, in wild places that only future generations will be able to imagine. We've got some work yet to enable that future though, and it is our responsibility to the Rusting world to do so!

Rust has long been designed with portability at the forefront. It contains relatively few designs that enforce hard restrictions on the underlying platform, and where it does there are potential strategies to reduce them. But even though there isn't a great deal of platform-specificity bound to the heart of Rust's design, the implementation itself - particularly of the standard library - is tightly coupled to a number of underlying assumptions, most obviously that it is running on something Windowsy or Unixy, that make further porting unnecessarily difficult today.

There has been much discussion on this topic recently: an RFC about refactoring std, a request to merge a port for Redox, a port for Haiku, a port to support Intel SGX, a port to Fuchsia, lamentations that porting std is a quagmire.

Much of my work in the past has been in the organization of std, and I've been thinking about this a lot lately, and doing some prototyping. Herein I discuss how to incrementally convert the Rust standard library into an ultra-portable, ultra-composable runtime that is suitable for targeting the needs of most any platform.

Goals

Many people have many ideas about where and how to port std. Here are some of the potential goals:

  • Support for non-Unix, non-Windows platforms. There are a lot of possibilities for Rust in novel, future platforms.
  • Support for platforms that don't have or require libc. Web browsers (via wasm, without Emscripten) are one example, but there are others. The API surface is already designed to accommodate this, but not the implementation.
  • Reduced maintenance burden. Porting std today requires touching too many parts of the libraries, and those parts must be maintained by the std maintainers. In general, day-to-day maintenance should not require dealing with platform-specific code, especially for lesser-maintained ports.
  • Reuse of std's components. There are projects that want to have a standard library, but std itself is not appropriate. Pieces like core and collections can already be reused, but it would be good if more of std was available as independent components.
  • Out-of-tree portability. Many future ports of std are quite speculative. We cannot maintain all of them in tree.
  • Strongly-typed portability. Relating to out-of-tree and reduced maintenance burden, it may be desirable to leverage the type system more to ensure the correctness of ports, e.g. by encoding the entire portability layer into traits.

To these ends my own personal goal is to create a clearly-defined interface through which the standard library interoperates with the platform, one that is easy to implement and maintain. I believe that achieving such an abstraction layer is the first step to any further porting efforts, lest we cause a big mess. That is the focus of the following discussion.

Platform-dependence in std

Before talking about solutions, some background on where std is tied to the platform. When we are talking about the "platform" here we mostly mean "the operating system", but also to some extent the runtime ABI. Concerns in these two areas are generally governed by the target_os and target_env configuration values, so generally when we are talking about platform dependencies in std we are talking about areas of the code where those cfg attributes are required (as well as cfg(unix) / cfg(windows), which are shorthands for the aforementioned). Note that we are mostly not concerned with code that only varies in definition based on target_arch: architecture porting tends to be orthogonal to platform porting, and is more related to compiler codegen support than runtime OS support.
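
To make that concrete, this is the kind of code in question - an illustrative snippet, not taken from std, where a definition only makes sense under a particular target_os:

// Illustrative only: definitions selected by target_os / target_env cfgs
// are the kind of platform-specific code under discussion.
#[cfg(target_os = "linux")]
const SELF_EXE: &'static str = "/proc/self/exe";

#[cfg(target_os = "freebsd")]
const SELF_EXE: &'static str = "/proc/curproc/file";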

The standard library is notably organized as a "facade": it is composed of a number of small crates whose features are all reexported through the public interface declared by std. Today, all of these inner crates (except for core) are unstable implementation details. The purpose of this facade is mostly to, through the use of the Rust crate DAG, strictly control the interdependencies between the various independent units of functionality within the facade, and thus make the individual facade crates maximally useful outside of std. Even today these facade crates have minimal, well-defined dependencies and are highly portable:

The std facade today

The things to notice about this illustration are that the white crates are platform-independent, the black crates are platform-specific, and that std does not actually depend on alloc_system and panic_unwind (explained below).

Inside the facade

To understand the internal design of std it's important to recognize some things about the language definition and its expectations of the runtime. The Rust language itself imposes very few restrictions on the implementation of the language runtime; the expectations it does have are defined as "lang items". Lang items (of which there are 79 defined today) are library routines that the compiler generates code to call into to accomplish various things required by the language semantics. A lang item may only be defined once globally, across the entire crate DAG, and (more-or-less) they must all be defined somewhere in the DAG in order for rustc to generate a runnable executable. Most lang items are defined in the core library, which has no platform dependencies. There are a few though that are tied to more complex runtime features of the language, most notably allocation and unwinding, and much of the organization of the facade is dedicated to providing these features to the language in well-factored ways.

In order to achieve this the facade in a number of places resorts to "magic": implementation-specific features that are never intended to be stabilized. The std facade makes incredibly heavy use of unstable features and will never be buildable on stable Rust - it is highly coupled to the compiler implementation. Sometimes this magic involves dependency inversion, where interfaces are defined at lower levels, but the actual platform-specific implementation is defined later in the dependency chain. Yet more dependency inversion is going to be required for further abstracting std, though hopefully no new "magic" features.
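
For a concrete flavor, a freestanding #![no_std] program of roughly this era had to define a couple of lang items itself, something along these lines (the exact set, names, and signatures are unstable and have changed over time):

#![feature(lang_items)]
#![no_std]

// The unwinding "personality" routine; a no-op suffices when nothing unwinds.
#[lang = "eh_personality"]
extern fn eh_personality() {}

// Called by compiler-generated code when a panic occurs.
#[lang = "panic_fmt"]
extern fn panic_fmt() -> ! { loop {} }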

When it comes to platform abstraction, where the facade is most successful today is with allocation and unwinding. Both features require access to the underlying platform but are relatively self-contained. And interestingly, both features make heavy use of the aforementioned "magic", allowing the interfaces to be used by the standard library, while the standard library itself does not actually depend on the concrete implementation.

The interface to the allocator is defined by the alloc crate. Its only dependency is core, and other than core this is the most important crate in Rust: without allocation Rust is a limited language, and almost every crate depends on it. Like core though, the alloc crate is still platform-independent; it does not actually define the allocator, but it does define the allocator interface, and the Box, Rc, and Arc types. Just these features are enough to build most of the standard collections, which live in the (still platform-independent) collections crate. The allocator itself is defined in the alloc_system and alloc_jemalloc crates, which are selected and linked in by the compiler only at the final link step via the unstable mechanism described in RFC 1183. Dependency inversion for allocators is accomplished through undefined symbols, where the allocator implementation is accessed through symbols declared in the alloc crate but defined further down the crate DAG.
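
Concretely, the declarations in the alloc crate of this era looked roughly like the following (abbreviated here; treat the exact names and signatures as approximate), with alloc_system and alloc_jemalloc supplying the definitions:

// Declared in alloc, defined by whichever allocator crate the compiler
// links in; the declarations alone keep alloc platform-independent.
extern "C" {
    fn __rust_allocate(size: usize, align: usize) -> *mut u8;
    fn __rust_deallocate(ptr: *mut u8, old_size: usize, align: usize);
    fn __rust_reallocate(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8;
}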

Like the allocator crates, the unwinding implementations, panic_abort and panic_unwind, employ magic to let the compiler know which one to link in, and undefined symbols to achieve dependency inversion.

Both the allocator and unwinding implementations have a dependency on libc.

In summary, within the facade the key language functionality provided by core, alloc, and collections is written in platform-independent pure-Rust; the key platform-specific runtime functionalities of allocation and unwinding are isolated to the alloc_system, alloc_jemalloc, panic_unwind and panic_abort crates; and dependency inversion prevents the platform-independent crates from depending concretely on the platform-specific crates.

The standard library itself

Unfortunately, there is still a great deal of important standard library functionality that is not so cleanly factored within the facade, and yet remains entangled with platform-specific functionality. This is the major problem at hand.

That's not to say though that std is utterly disorganized. On the contrary, it has been organized with the full intent of being portable. It's just not there yet.

Within std, the sys module is where platform-specific code is intended to reside. The sys module is implemented separately for Windows and for Unix, under sys/windows and sys/unix. The sys_common module, on the other hand, contains non-platform-specific code that supports the runtime platform-abstraction needed by std. Unfortunately, today the separation of responsibilities implied by this organization is not perfect, and there is platform specific code elsewhere in std. There is a lint in-tree to enforce this organization; its whitelist is a good view into the places where this abstraction boundary is violated.
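
Mechanically, the selection is simple; a simplified sketch of what std::sys's top-level module does:

// Simplified: pick a backend at compile time; the rest of std is written
// against whichever module gets pulled in.
#[cfg(unix)]
mod unix;
#[cfg(unix)]
pub use self::unix::*;

#[cfg(windows)]
mod windows;
#[cfg(windows)]
pub use self::windows::*;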

Worse than the bits of platform-specific code that reside outside of std::sys though is the dependency graph between sys and the rest of std. Simply put, it's a rat's nest: bidirectional dependencies abound between platform-specific and platform-independent code. Breaking these dependencies is going to be the principal challenge of making std more portable.

The contents of sys are implementation details of std but std does also publicly expose platform-specific APIs in std::os. These generally reexport from sys.

For the most part, except for std::os, the standard library does not expose significant platform-specific behavior in its public interface. The one major exception to this is std::path. This module is tailored quite specifically to the intersection of Unix and Windows path handling.

Many of the platform-specific interdependencies in std are due to I/O and the io::Error type, which is a unifying type across many modules. This may be a legacy of std's former construction atop libuv, which is itself an I/O abstraction layer. It may be that "standard I/O" should be thought of as a single, mostly platform-specific, interdependent chunk that should live and port together. Refactoring I/O is going to be the bulk of the work to make std more portable.

A platform abstraction layer for std

Well, that's all background. Now let's talk about how to fulfill the promise of a portable std.

I want to paint you a picture of a utopia in which Rust has expanded to become the fabric of the entire classical computing world, where the possibilities of what we can achieve are not shackled to the decaying dreams of computer science past. In this perfect utopia you have invented the perfect model for managing your computer's sci-fi hardware, perfectly free from the legacy of Unix and Windows. And you need the perfect language to write it in. Everywhere you look is legacy: C, C++, Java; the stacks get bigger and bigger, cruft all the way down.

The only shining light is Rust. Those Rustaceans have been chipping away the cruft, distilling their platform to only the essence of bits and bytes, while also expanding its expressive power toward legendary elegance. Rust doesn't want to tell you how to build your system. Rust wants to serve you, to fulfill your dreams, on your terms. For your ambitions, Rust is the only reasonable choice in a world filled with compromises.

The work ahead is dead simple: all you have to do is provide an allocator, an unwinder, and an implementation of the Rust platform abstraction layer. All three of these things are their own crates, you just need to plug them into the build. And as we'll see momentarily, to get started you don't even have to write an allocator or an unwinder (utopic Rust has trivial default implementations that will serve to get you started), so let's focus on getting that platform abstraction layer up and running.

To get a sense of what we need to do, have a look at the utopic std facade:

The std facade tomorrow

The thing to notice here is the trinity of pal_common, pal_unix, and pal_windows. This is the nexus of Rust std porting. Every function necessary to run std on Unix is defined in pal_unix, and every function necessary to run std on Windows is defined in pal_windows, and pal_common is their platform-independent toolkit. The interface surface area is surprisingly small: threading, concurrency, networking, process management, and a few other bits and pieces. Both pal_unix and pal_windows implement the same interface, consumed by std. (In the fullness of time there will certainly be other crates involved in the deconstruction of std, but this simple division is sufficient to understand the approach). So our only task is to implement pal_utopia, the platform abstraction layer implementation for the OS of our dreams ("UtopiaOS"), based on the well-trodden path laid out by other platforms before us.
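
To give a feel for the shape of that interface, a tiny excerpt might look something like this - every name here is invented for illustration, not a real API:

// Hypothetical excerpt of the PAL surface; both pal_unix and pal_windows
// would provide bodies for these, and std would call them.
pub struct Thread(usize); // opaque platform thread handle

pub struct Error(pub i32); // raw OS error code, decoded into io::Error by std

pub unsafe fn thread_create(_stack_size: usize, _entry: Box<dyn FnOnce() + Send>) -> Result<Thread, Error> {
    unimplemented!()
}

pub fn thread_join(_thread: Thread) -> Result<(), Error> {
    unimplemented!()
}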

It's a simple thing to create pal_utopia and get it building, just a few steps:

  • Copy pal_example to pal_utopia. This creates a new PAL implementation that simply panics on all platform-specific invocations.
  • Create your riscv64-unknown-utopia target spec and set the allocator to alloc_simple and the unwinder to unwind_simple. These are stock pure-Rust implementations of allocation and unwinding that rely only on the platform abstraction layer (simple unwinding is implemented through some portable code generation strategy, not DWARF).
  • Configure cargo to build std with your platform abstraction layer via crate replacement, configuring pal to be implemented by pal_utopia.

With std-aware cargo we can build the standard library out of tree using stock tooling, specifying our own pal crate. So now all you need to do is run cargo build against your own project and you've created a custom standard library. Of course it does nothing useful yet, it just panics. But the path is clear: run some code, find the next panic, fill in some functions, repeat. That's it. Utopia.
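
One iteration of that loop might look like this - take a stub copied from pal_example and give it a real body (everything here, including the function itself, is hypothetical):

// The copied stub panics until UtopiaOS grows the feature:
//
//     pub fn hostname() -> Vec<u8> {
//         panic!("pal_utopia: hostname not implemented")
//     }
//
// Filled in once there is something sensible to return:
pub fn hostname() -> Vec<u8> {
    b"utopia".to_vec()
}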

OK, hopefully that got your imagination churning. There's all manner of fanciful variations one could envision from there (I'll discuss some in a bit), and hopefully you'll agree that a setup along these lines unlocks great possibilities.

The first step to utopia though is to get all that platform-specific code out of std and isolated into a single crate.

Why put the PAL in crates?

The key to this future is having all platform-specific code in a single crate. But because of interdependencies, especially within I/O code, getting there is going to be non-trivial, requiring some advanced refactoring tricks, and in some cases almost certainly causing the code to become more complex. Why is this worth it?

First, crates are how we enforce constraints in Rust about which code can depend on which other code, and we have strict constraints here. When you want to say 'this code must not depend on that code' you do it with crates. We do this to excellent effect today with the std facade, so extracting the PAL is a continuation of that design.

Second, it allows people to implement std out of tree. This will be a massive enabler. It's simply infeasible to do small-scale Rust experimentation with novel platforms without this: today, to port std one must either maintain a fork or work upstream. This situation discourages people from even attempting to port Rust. And furthermore, the Rust maintainers can't be expected to entertain patches for every experimental port one might wish to pursue.

The cost of this is that the more complex subsystems in std will have at least one extra layer of abstraction (though this should not impose any runtime cost).

How to get there

So the task at hand is more-or-less to extract sys/windows to pal_windows, sys/unix to pal_unix, and anything they depend on to pal_common. There are plenty of unknowns, so along the way we're going to learn a lot, but the immediate path forward is relatively clear:

  • Move platform-specific code that does not live in sys/windows, sys/unix into those modules. This work can be driven off the whitelist in the PAL lint.
  • Extract pal_common, pal_windows, and pal_unix, modifying both build systems as appropriate.
  • Begin teasing out the platform-specific parts of std::sys into the appropriate place in the PAL.

The good news is that I've already spent some time on a prototype. As part of that work I already landed a tidy script to enforce where platform-specific code may live in tree, and have another in-flight to corral more code into std::sys. From this work I'm encouraged that extracting all the platform-dependencies into their own crates is possible.

I managed to extract quite a bit into the pal crates before getting sidetracked on other matters. Unfortunately, I stopped right at a critical juncture: figuring out how to extract io::Error, which is the linchpin of the entangled I/O code, though I think it's quite doable. My prototype is bitrotted and will basically need to be redone to land, but it's probably good to look at the commits to see the kind of work this will entail.

Dependency inversion

At the point where we extract io::Error is where we have to think hard about "dependency inversion". What is that? Well, there are a bunch of systems in std that have the following properties:

  • The bulk of the code is platform-independent
  • The code doesn't work without some platform-specific code
  • There is yet other platform-specific code that depends on it

This creates a situation where the subsystem wants to be defined once, in pal_common, to avoid duplication; supplemented with additional code defined downstream in pal_windows; instantiated for use in pal_windows; and finally exported publicly in std. The dependency "inversion" is that the subsystem is defined upstream of its dependency.

I said earlier that I stopped short of tackling the inversion necessary for io::Error, but I did tackle one inversion: CStr/CString. This is a simple case, but is illustrative, so I want to show what happened here, then talk generally about techniques for dependency inversion.

The reason CString needs to be extracted from std is that various platform-specific pieces of the PAL need to deal with C strings. And why does CString require dependency inversion to extract? Because CString depends on memchr, and memchr is platform-specific. Such a little thing, but big consequences.

Before I explain how I did this I want to note that I would do it differently the second time. Still, a good illustration.

In this version, c_str is its own crate, but that's only because it has a minor (fixable) dependency on libc, and pal_common is not allowed to depend on libc. One might expect it to ultimately be defined in pal_common though.

The way I factored this was to define a platform-independent memchr in pal_common. Then CStr and CString are defined in the c_str crate, using the (possibly slow) platform-independent memchr. Now pal_unix and pal_windows have access to the C string functionality they need via those types.
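
The platform-independent fallback is only a few lines; a naive sketch (not the optimized routines std forwards to on each platform):

// Naive, portable memchr/memrchr: enough to bootstrap CString in pal_common,
// just slower than the libc or SIMD versions a platform may provide.
pub fn memchr(needle: u8, haystack: &[u8]) -> Option<usize> {
    haystack.iter().position(|&b| b == needle)
}

pub fn memrchr(needle: u8, haystack: &[u8]) -> Option<usize> {
    haystack.iter().rposition(|&b| b == needle)
}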

In std though, both types are redefined as a light wrapper around the types defined previously, except that memchr is replaced with the platform-specific implementations.

The unfortunate consequence of this approach is that all code in pal_unix and pal_windows that uses CString gets the slow version of memchr. That's why I said earlier that I would do this differently the next time. Instead, CString wants to be instantiated in pal_windows and pal_unix using the fast version of memchr (that is, the pal instantiates the platform-dependent type instead of std), using one of the techniques I'll discuss below.

Now with that case illustrated, let's talk about general strategies for dependency inversion. I'll keep using CString as an example.

Redefinition and IntoInner

Dependency inversion is already an important facet of platform-abstraction in std. The pattern this most often takes inside std is through wrapping and redefinition of inner types, and conversion between the two via the IntoInner trait. You can see an example of this with net::TcpStream, which is a wrapper around an inner sys_common::net::TcpStream.

This is basically what my CString example above does, though it doesn't literally use the IntoInner trait.
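
The shape of the pattern, sketched with simplified names rather than the real std internals:

// The inner type lives in the lower, platform-aware layer; std defines a
// public wrapper and converts between the two with an IntoInner-style trait.
mod sys_common {
    pub struct TcpStream { /* platform plumbing */ }
}

pub trait IntoInner<Inner> {
    fn into_inner(self) -> Inner;
}

pub struct TcpStream(sys_common::TcpStream);

impl IntoInner<sys_common::TcpStream> for TcpStream {
    fn into_inner(self) -> sys_common::TcpStream {
        self.0
    }
}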

The major downside of this is that it requires duplicating a lot of interface surface: once at the lower layer, once again at the upper layer.

Generics

An obvious way to do dependency inversion in Rust is through generics: you define your type as generic over some trait that specifies the platform-specific functions it needs to operate. So we might define CString like:

struct CString<M> where M: Memchr { ... }

Then the pal can instantiate it with its own Memchr implementation.
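
A sketch of how that could look, with a hypothetical Memchr trait:

// Hypothetical trait the generic CString would be parameterized over.
pub trait Memchr {
    fn memchr(needle: u8, haystack: &[u8]) -> Option<usize>;
}

// pal_common could supply a slow, portable implementation...
pub struct PortableMemchr;

impl Memchr for PortableMemchr {
    fn memchr(needle: u8, haystack: &[u8]) -> Option<usize> {
        haystack.iter().position(|&b| b == needle)
    }
}

// ...while pal_unix instantiates the generic type with its fast one, e.g.
// type CString = c_str::CString<LibcMemchr>;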

Of course, the real CString in std is not generic, so std must then define its own CString type that wraps the instantiation of the generic CString. So this has a similar downside of requiring duplicate definitions to achieve the inversion.

Undefined symbols

The next approach is the classic systemsy way to do this - let the linker deal with it. This is what the alloc crate does: declare some extern functions that must implement some feature, and then have some downstream crate actually define them.

So the c_str crate might declare:

extern "Rust" {
    #[no_mangle]
    fn __rust_pal_memchr(needle: u8, haystack: &[u8]) -> Option<usize>;
    #[no_mangle]
    fn __rust_pal_memrchr(needle: u8, haystack: &[u8]) -> Option<usize>;
}

Then pal_unix and pal_windows define them however they want.
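
On the defining side, pal_unix (for example) would export matching symbols; the names here mirror the illustrative declarations above:

// Defined in pal_unix; the linker resolves the symbols declared in c_str.
// A real implementation would likely forward to libc's memchr; a portable
// body is shown to keep the sketch self-contained.
#[no_mangle]
pub extern "Rust" fn __rust_pal_memchr(needle: u8, haystack: &[u8]) -> Option<usize> {
    haystack.iter().position(|&b| b == needle)
}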

This is a pretty good solution. The downsides though are that (unless using LTO) no such functions will be inlinable; and there are some reasons for not wanting the runtime to impose an excess of public symbols (I'm not clear on this point, but it's something I've heard). It also doesn't work if your dependencies themselves must be generic, though I doubt such things exist in the pal.

Macros

Finally, macros: the last-ditch resort when the language doesn't do what you want. To do this with macros, we would define a macro in pal_common that accepts as arguments paths to all its platform-specific dependencies; pal_unix and pal_windows would then instantiate those macros and reexport the results for use by std.

So for CString:

macro_rules! pal_cstr {
    (memchr: $memchr:path) => {
        struct CString { /* ... */ }

        // ... impls that call $memchr wherever a byte search is needed ...
    }
}
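
Then pal_unix would instantiate it with its fast memchr (the path here is hypothetical) and reexport the result:

// In pal_unix: expand the shared C string definitions with the platform's
// fast byte search.
pal_cstr! { memchr: fast_memchr }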

This doesn't suffer from the problems the others do, but it does make the source and the potential error messages for std hackers worse. I think this is the best solution for tough cases.

Risks

The path forward here is pretty risk-free, and I want to emphasize that. We can do quite a lot of experimentation here without committing long-term to anything. The main risks are to do with the feasibility of a full extraction of platform-specific code, and churn for unstable no-std consumers. It's possible that we do the work of creating these extra crates, get pretty far into the process, and hit some roadblock that makes the endeavor too difficult to complete.

I think the chances of that are low - pretty much any factoring can be achieved with more or less effort. If we were to hit such a roadblock it would most likely be for social reasons: a distaste for complicating the code enough to achieve the desired separation, or simply lack of will to put in the effort.

Another risk is that the strategy won't actually get us to the end-goal in a satisfactory way. Between std::os and std::path, std is committed to not being perfectly platform-independent, and there may be other such platform warts that arise in the porting.

One thing we could do to reduce risk is to not start by creating the pal crates and instead do the refactoring to straighten out the dependencies within std, only exploding them out into crates as the last step, when success is assured. To do this we would need some analysis to enforce DAG-ness within a crate, some new, unstable compiler feature. I'm not inclined to bother with this though - the facade is unstable and we guarantee it will break, so let's break stuff.

Future directions

This outlined just the initial work of creating the PAL crates, but there's more that could be done. Here are a few ideas based on desires I've heard floating around:

  • The entire interface to pal can be turned into traits. This would make the exact interface required to port std more clearly defined.
  • The pal can be further split up so it can be reused in a more fine-grained manner, e.g. there might be a threading pal and an I/O pal.
  • Scenarios will allow std to be partially-defined for platforms that can't implement the whole surface, and those platforms will be free to partially-implement the pal.
  • We can create a pal_example crate that panics on every code path, that porters can work off of when starting a port.
  • We can create a simple, pure-Rust allocator, depending only on sbrk/mmap defined in the pal, that porters to non-libc systems can use to get a system up and running.
  • Likewise a simple unwinder that uses e.g. return-based unwinding, so porters to unconventional systems don't have to immediately confront the hairy issues there.
  • We can create a port for Linux that doesn't use libc.
  • We can create a standard library for wasm that is tailored to the web platform, not the Unix platform.

Next steps and how to help right now

We can move in this direction now. As I've emphasized, the risk here is low, and we don't need to know the full path forward to make progress. There's a lot we can do incrementally to move in this direction, and that is beneficial simply as a matter of code cleanup. I'll keep doing so myself, though I can't promise to dedicate any specific amount of time to it. If you want to help, there are things you can do:

  • Bikeshed here!
  • Find parts of std that are platform-specific and not located in sys and move them there. Work to minimize the pal-tidy whitelist.
  • Untangle dependencies in std::sys, so that it only depends on code in sys_common, but not the rest of std. Modules in std::sys will eventually be lowered to pal_unix and pal_windows, and modules in std::sys_common to (more-or-less) pal_common. You can see the unupstreamed work in my prototype for easy candidates.
  • Likewise, untangle dependencies in std::sys_common so they do not depend on either std::sys or the rest of std, again in preparation for moving to the platform-independent pal_common crate.
  • Go ahead and introduce the pal_common, pal_unix and pal_windows crates. I expect that this step will require the most debate and coordination since we don't often change the std facade.
  • Begin moving code into the pal crates.
  • Figure out the refactoring sequence necessary to untangle the various I/O modules in std such that they can be extracted cleanly.

My aforementioned prototype contains a number of unupstreamed commits in this direction that anybody can feel free to crib off of. If I were to start upstreaming it I might use it as a guideline to sort out the dependencies within std, without yet taking the step of extracting the pal crates.

Let's make Rust the best platform it can be!


Yay, thanks for the long and clear writeup! Overall, this sounds pretty good to me. I think this presents a clear path that works for me at least. Some minor comments below.

One big thing I'm not seeing is a solution for io::Error. You say it's hard and then go on talking about CString. I don't think this is an apt comparison because there the only dependency is a function with a well-known signature, whereas for io::Error there's a dependency on system-specific constants. But maybe I'm overthinking this and all you need is get_last_error and decode_error_kind.

Edit: ignore my previous comments about rand. I was confused about the in-tree librand and rand in the nursery.

:+1:

Do you need a wrapper? Can you just say pub type CString = ::pal::CString<PalMemchr>?

Here I'll plug an idea/question I had last week. I think traits are great! But not all APIs are well-represented by traits. I think the compiler or some linter should support some form of "header files" that source files can be checked against. These header files should initially be automatically generated from the publicly reachable items in a crate. This tool is useful here for defining a PAL API, but I imagine it can also be useful to enforce stability guarantees on APIs.

Or optionally, instead of sbrk/mmap, on a large predefined .bss section.

I'd like to see this, but I imagine panic_abort is actually sufficient in a lot of cases already.

Edit: io::Error has been on my mind the last few days, but before I dive into that, let me say thanks again for writing this up and pushing this @brson. The post-unix post-windows post-endlessly-growing-stacks-of-leaky-abstractions utopia is my secret personal motivation too :) :).


The solution for io::Error is associated error types on core::io traits:

impl<T> std::io::Read  for T where T: core::io::Read,  std::io::Error: From<T::Error> { .. }
impl<T> std::io::Write for T where T: core::io::Write, std::io::Error: From<T::Error> { .. }

Not only does this fix portability, but also allows one to use the core traits to rule out impossible errors. E.g. &mut [u8]: Read<Error = !> + Write<Error = !>.

I am going to dust off https://github.com/QuiltOS/core-io at a hackathon tomorrow to further demonstrate this.


One thing that isn’t clear: You seem to imply that CString isn’t a platform-specific type, but in fact it is platform-specific. If we imagine a pure Rust platform, there’d be no use for CString and such a platform shouldn’t have to provide any implementation of CString. In fact, I am interested in creating a port that doesn’t have CString and similar at all, so I’d like to find a way to make that work in this framework.


What is platform-specific about it? In my interpretation a CString is just like a slice (of NonZero<c_char>, but really any NonZero type could work this way, e.g. POSIX's environ) where the length is indicated by having a zero-valued item at the end instead of an associated length in a fat pointer.

If the platform doesn't use null-terminated strings for anything, then it is better for the port to that platform to not implement CString.


CString is something that can be implemented purely in Rust without external dependencies. So, while the functionality itself might not be useful on such platforms, keeping it or removing it has no influence on portability. Also, nothing in this proposal seems to support deleting parts of std; the best you can do is stub things out. Getting rid of std features that you don't want to use or that can't be used on your platform sounds to me more like something that should happen within scenarios. Removing CString can also happen that way.


Also, even if a platform doesn’t have any native APIs using C strings, you can still compile C code for it with a static libc, and CString could be useful to work with such code.


So beyond portability concerns, there are cases where we want to opinionatedly ban things:

  • no allocation for real time systems
  • no CStrings on Utopia Exokernel cause author doesn’t like them
  • no dynamic panic handlers, cause just use a static panic strategy
  • no unwinding (assuming portable unwinding implementation)

Most of the time, I think enforcing this is more the domain of scenarios rather than the crate dependency graph, but there could well be exceptions.


This sounds pretty good. I just wanted to share my experience with another (closed source, unfortunately) standard library, in C.

In that embedded system, one simply had to implement a void putchar(char c) function, and you could use all the printf() and logging you wanted. Implement char getchar(), and you can now read input. Each different embedded target would do that, basically implementing a small serial port driver. Implement two additional functions, void *alloc_page() and void free_page(void*), and you'd be able to use all the allocation and data structures.

Having to just implement a few functions to get (admittedly not super efficient) I/O and data structures felt amazing. You could port to an entirely new SoC or operating system in a few hours at most. I’d love Rust to get to that point, even if the “putchar/getchar/alloc_page/free_page machine” is just a library used to quickly implement an embedded PAL.


I think that if we head toward the trait-based approach that @brson mentions as a future direction, we could probably use default methods to get something very close to this flavor.


Let me put it another way: as is noted near the end of the proposal, the PAL could be split up into finer-grained reusable pieces that are composed in scenario-specific ways. It seems like CString is one of those things.

But, more generally, if we think about a brand-new port, it would be natural to try to do the port incrementally by building it from the ground up, adding one scenario after another, rather than trying to implement the entire PAL in one shot. And insofar as libstd is refactored into multiple crates to create the PAL, that work also makes a lot of sense to think about in terms of incrementally enabling scenarios. For example, it makes sense to prioritize the PAL work such that somebody could implement the PAL for a platform that only needs core + allocation, then make it possible for such platforms to support collections, then make it possible for them to support threads, then make it possible for them to support asynchronous networking (not the whole networking API), etc.

Put a different way, I think it would be useful to explore defining the PAL as a composition of scenarios, bottom-up, instead of as a refactoring of the existing libstd.


Even without scenarios it seems like a monolithic PAL would let you incrementally define the functionality of your platform. If you start off with a PAL where every function is a panic or abort, you should be able to fill in the functionality that you care about while ignoring the stuff that you don’t (such as CString or whatever else). Of course, scenarios would make things cleaner since you can completely remove functionality that you never intend to implement.

So basically it seems like working top-down from the current stdlib is the best way to create the PAL in the first place, and then once it’s in place perhaps it could be further split up into separate scenarios defined from the bottom-up. Does that make sense?


+1 to panic_abort - the rest of the PAL is going to panic by default, why not have unwinding complain loudly too? Getting a fully portable unwinding strategy likely won't be straightforward (see emscripten, where unwinding only works because it's emulated - one could easily imagine a world where it just didn't work). This doesn't preclude having a panic_simple crate you can use most of the time.

Overall this sounds pretty exciting and it clarifies the scenarios idea in my mind somewhat as (I think) being vertical slices of the crate dependency graph (it was less clear when still thinking of platforms as generally sprawling across std etc).

This brings to mind a thought I had a while ago for a new kind of crate that formally supports the kind of dependency inversions at work here. Basically, crates with “parameters” that allow the dependents to inject behaviour into the dependees.

The use I had in mind was for things like UI frameworks where letting the application control main makes things harder (like when you have to guarantee you control the main thread), and other “easy mode” frameworks to make getting up and running with Rust easier. Another target is alternate test harnesses, opening the door to more features on stable.

I don’t have very concrete ideas for how this would work (something, something, crate traits), I just hope that any “magic” that needs to be employed here is made available (as much as is practical) to crates other than std.


@DanielKeep You may want to take a look at the Backpack work that is happening in the Haskell world right now (http://plv.mpi-sws.org/backpack/); it seems very relevant to your ideas.

Coming from the Debian/Ubuntu side of things, there are scripts one runs at every new version of some library software to see if new symbols appeared or got removed compared to previous versions. This is then used to enforce proper versioning - if symbols disappear then you get angry mails from the downstream maintainer. It varies heavily between maintainers how much this is enforced though, but technically you're breaking the ABI if symbols disappear.

This seems very error-prone to me. While it works to get the first "just up and running" code, how do you know when you've found the last panic, i.e. when your program is ready to ship (without implementing the entire pal, of which you only use a fraction)?

I would much prefer many platform-dependent crates, i.e. pal_allocsystem_unix, pal_file_unix, pal_threads_unix and so on. It helps with crate interdependencies, and it also helps with not having to implement functions you don't need.

That looks very much like what I had in mind.

The question is whether there’d be interest and motivation to do the same for Rust…

I think “pal” is the wrong layer for memchr/memrchr. It’s not a unique or important function in itself, but it’s representative. An efficient way to find a byte value in a &[u8] should be available already in libcore, and it’s needed for slices and strings sooner or later (we just haven’t gotten there yet). At least the portable implementation should be already in there. If possible, it should be an option to inject a platform dependency to use instead.

In comparison, we're making use of different, very platform-specific implementations of memcpy that the C library provides. I would want to be able to provide functions of the same type, implemented in Rust.


We're breaking the ABI every time we release a new compiler version anyway.

Presumably by running the std test suite. Or grepping for panic!("Not implemented in std").

This can come at a later stage, but as previous attempts at this have shown there are many interdependencies and getting this right is quite complex. While this could (should?) be a future goal, I think we need to implement this proposal first.