this post was submitted on 20 Mar 2026
24 points (100.0% liked)
Rust Programming
> `FutureExt` trait methods return a wrapper struct `WithContext`, which is where the data (`T`) and the context are stored. One method takes the context directly from an argument, the other clones one from a thread local. Data can't hang in thin air to be attached to a trait. This also shows that the abstraction you (presumably) were asking about is both possible and already exists. `T: Future` types would make the abstraction non-zero-cost in any case.

You're really hung up on the fact that I said "pin metadata to a `Future`". I didn't mean that literally, but conceptually. The `Future` trait defines the API, and what I'm asking for is for that API to provide a mechanism to retrieve a context object that can be used to store arbitrary data. It's almost there already. The `Context` that `poll` takes already has a `pub const fn ext(&mut self) -> &mut (dyn Any + 'static)` method (although I'm not thrilled by the type signature; I'd prefer it to have an associated type or generic letting you define your own concrete extension type). It's just that by being associated with `Wake` instead of `Future`, it's impractical to use for storing and retrieving metadata when working with `Future`s.

There are a few directions the problem could be approached from. One option would be to add a new associated type to `Future` for an arbitrary context object. If you don't need it, you could always just set it to `()`, the same as is done for the `Output` type. There might be a clever way to reuse the task `Context` object that `Wake` has, although I'm not sure how that would be done. You could also define a new standard trait similar to how `FutureExt` does things (maybe something like `ContextualFuture`), but that's less useful for exactly the reason you pointed out: Rust doesn't do implicitly transparent structs.

You can stop repeating that traits don't literally store data; everyone already knows that. Everyone knew that from the beginning. Repeating it again and again serves no purpose. Nobody is asking to literally store data on a trait.
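To make the associated-type idea a bit more concrete, here's a rough sketch of what a hypothetical `ContextualFuture`-style trait could look like. Everything here is invented for illustration; no such trait exists in `std`, and a real proposal would have to mirror `Future::poll` and its pinning rules rather than the simplified accessor shown:

```rust
use std::any::Any;

// Hypothetical trait sketch (invented name): a future-like API with an
// associated context type alongside its output.
trait ContextualFuture {
    type Output;
    // Analogous to `Output`: set it to `()` if you don't need a context.
    type Ctx: Any;

    // A real version would thread this through a `poll`-like method;
    // this sketch only shows how a typed context would be surfaced.
    fn context(&mut self) -> &mut Self::Ctx;
}

// Toy implementation carrying request metadata as its context.
struct Traced {
    request_id: String,
}

impl ContextualFuture for Traced {
    type Output = ();
    type Ctx = String;

    fn context(&mut self) -> &mut String {
        &mut self.request_id
    }
}

fn main() {
    let mut fut = Traced { request_id: "req-42".into() };
    // Code generic over `ContextualFuture` could read or mutate the
    // context without knowing the concrete future type.
    fut.context().push_str("-retry");
    assert_eq!(fut.request_id, "req-42-retry");
    println!("{}", fut.request_id);
}
```

The appeal of the associated type is exactly the `Output = ()` escape hatch mentioned above: futures that don't care pay nothing and declare nothing interesting.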
> Fundamentally, the issue is that `Future` is an n+m model, where you have n virtual threads executing on m OS threads. As such, traditional approaches to associating data with OS threads, like thread locals, don't work, because there's no guarantee that a given virtual thread will continue to execute on any given OS thread. Many languages have faced this exact problem, and the solution arrived at in nearly every case is to add an associated execution context (in a variety of forms) that can be queried from the runtime and used to store the equivalent of "thread local" data. That's literally what I'm asking for, but as part of the standard API, so you don't need to be tightly coupled to a specific async runtime. It's a problem that comes up repeatedly, which is why things like `FutureExt` exist, but it should be part of the standard, not something ad hoc that everyone keeps having to solve on their own.

This is fully runtime/platform dependent and not true in many cases. Not only do strictly single-threaded runtimes exist, but async is already used in places like the embedded world, where there is no OS for there to be OS threads, single or otherwise.
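The thread-local failure mode described above can be illustrated with `std` alone: if a task's execution hops to a different OS thread, a value stored in a thread local on the first thread simply isn't there. (The "task migration" here is simulated with a plain spawned thread; no async runtime is involved.)

```rust
use std::cell::Cell;
use std::thread;

thread_local! {
    // Per-OS-thread storage, standing in for a "request context"
    // stuffed into a thread local.
    static REQUEST_ID: Cell<u64> = Cell::new(0);
}

fn main() {
    // The "task" starts on the main thread and stores its context there.
    REQUEST_ID.with(|id| id.set(42));
    assert_eq!(REQUEST_ID.with(|id| id.get()), 42);

    // Simulate a work-stealing executor resuming the task on another
    // OS thread: that thread's copy of the thread local still holds the
    // default value, so the context set above is effectively lost.
    let seen_elsewhere = thread::spawn(|| REQUEST_ID.with(|id| id.get()))
        .join()
        .unwrap();
    assert_eq!(seen_elsewhere, 0);
    println!("on spawned thread: {seen_elsewhere}");
}
```

On a strictly single-threaded runtime this failure never occurs, which is the counterpoint made above: whether thread locals work is a property of the runtime, not of `Future` itself.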
It is for this reason that the extra trait bounds on the implicit `impl Future` return type in async functions have caused a lot of trouble (implicit `Send`, etc.). If for nothing else, adding context to the equation would make that situation more complex.

In any case, attaching "metadata" to `T: Future` data (instead of to the runtime or some other global context) doesn't make any sense in my head, as one would be "await"ing around and the concrete return type is often hidden behind that `impl Future`. And if the data is really special and needs to be attached to `T: Future`, then it's (conceptually) just data that is part of `T`, not `Future`.

Yes, but this is also fundamentally the problem. If you don't want to tightly couple yourself to a particular runtime, you can't make any assumptions at all about the underlying runtime, and therefore need to write things in such a way that they work equally well across all of them. In the cases where a runtime is single-threaded, you could easily achieve the desired result by simply stuffing the context into a thread local, but that would then be an implementation detail of the runtime, and you therefore couldn't assume it.
Maybe a concrete example would help. It's often the case in a RESTful web service that you want to execute a request while retaining a set of headers from it. Some of those headers are useful to include in log messages during request processing. Some need to be passed along to other services when outbound calls are made. Some can be used to alter logic during processing. And these roles aren't exclusive: very often a header is used, e.g., in logging as well as passed along to outbound requests.
There are a number of ways you could approach this problem. One option is to simply pass the headers around explicitly as function arguments. To keep the argument count reasonable, you'd probably stuff all the headers into some kind of container, either a simple map or a more type-safe context object. Regardless, this gets very tedious, as you end up passing the same argument to nearly every function in your app.
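A sketch of that explicit-argument approach (all names invented for illustration): the headers get bundled into a context struct, and every function in the call chain has to accept it, even ones that only forward it downward.

```rust
use std::collections::HashMap;

// Hypothetical request context bundling the headers we care about.
#[derive(Clone, Debug)]
struct RequestContext {
    headers: HashMap<String, String>,
}

// Every function in the chain takes the context...
fn handle_request(ctx: &RequestContext) -> String {
    log_stage(ctx, "start");
    let body = load_data(ctx);
    log_stage(ctx, "done");
    body
}

// ...including functions that only need it to pass it further down.
fn load_data(ctx: &RequestContext) -> String {
    call_downstream(ctx)
}

fn call_downstream(ctx: &RequestContext) -> String {
    // e.g. propagate a tracing header onto the outbound request
    let trace = ctx.headers.get("x-trace-id").cloned().unwrap_or_default();
    format!("downstream called with x-trace-id={trace}")
}

fn log_stage(ctx: &RequestContext, stage: &str) {
    let trace = ctx.headers.get("x-trace-id").cloned().unwrap_or_default();
    println!("[{trace}] {stage}");
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert("x-trace-id".to_string(), "abc123".to_string());
    let ctx = RequestContext { headers };
    let out = handle_request(&ctx);
    assert_eq!(out, "downstream called with x-trace-id=abc123");
    println!("{out}");
}
```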
To keep things tidy, you're going to want an out-of-band way to pass that context around. In a traditional model you could stuff it into a thread local, and indeed many applications do just that. That typically doesn't work with Rust's async model, though, for the reason I outlined previously.
The solution the opentelemetry crate came up with was to store the context in a thread local, while also wrapping arbitrary `Future` instances in a wrapper that overwrites the current thread's thread local with the one stored in the wrapper whenever it's polled. That works, but it's something of a brute-force approach, as well as being slightly fragile. Instead of everyone having to invent something like this, it would be nice if the ability to attach and maintain a context object were part of the async API, so that each runtime could expose a mechanism appropriate to its own execution implementation.
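The shape of that save/set/restore-on-poll technique can be sketched with `std` only. This is a simplified illustration of the general idea, not opentelemetry's actual implementation; all the names (`WithContext`, `current_context`, and the `String` context) are invented, and the no-op waker exists only so the sketch can be polled without a runtime.

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

thread_local! {
    // The "current" context for whatever task this OS thread is polling.
    static CURRENT: RefCell<Option<String>> = RefCell::new(None);
}

// Wrapper future that installs its stored context into the thread local
// for the duration of each poll, then restores the previous value.
struct WithContext<F> {
    ctx: String,
    inner: Pin<Box<F>>,
}

impl<F: Future> Future for WithContext<F> {
    type Output = F::Output;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
        let this = &mut *self; // fine: both fields are Unpin
        // Save whatever context was current and install ours.
        let prev = CURRENT.with(|c| c.replace(Some(this.ctx.clone())));
        let result = this.inner.as_mut().poll(cx);
        // Restore the previous context so nested wrappers compose.
        CURRENT.with(|c| *c.borrow_mut() = prev);
        result
    }
}

// Anywhere inside the wrapped future, code can read the ambient context.
fn current_context() -> Option<String> {
    CURRENT.with(|c| c.borrow().clone())
}

// Minimal no-op waker so we can poll without a real runtime.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let mut fut = Box::pin(WithContext {
        ctx: "req-1".to_string(),
        inner: Box::pin(async { current_context() }),
    });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // The async block sees "req-1" while it's being polled...
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(seen) => assert_eq!(seen, Some("req-1".to_string())),
        Poll::Pending => unreachable!("the async block completes on first poll"),
    }
    // ...but once the poll returns, the thread local is restored.
    assert_eq!(current_context(), None);
}
```

The fragility complained about above is visible even in the sketch: correctness depends on every poll path going through the wrapper, and on the restore step never being skipped, which is exactly the kind of invariant a standard context mechanism could guarantee instead.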