- 06 Dec, 2019 1 commit
-
-
Kenton Varda authored
This reverts commit 0123e171. This wasn't the right place to solve this problem. The same problem applies when calling an HttpService directly, or using an HttpClient that wraps an in-process HttpService. Applications that struggle with this need to find a better solution.
-
- 27 Nov, 2019 1 commit
-
-
Kenton Varda authored
Some applications expect to be able to inspect these headers in order to learn about the properties of the entity-body. Currently, over RPC, the sender could send arbitrary header values that have nothing to do with the actual body. Instead, let's just overwrite them to match.
-
- 14 Nov, 2019 1 commit
-
-
Kenton Varda authored
When VatNetwork::connect() returns nullptr, it means that the caller is trying to connect to itself. rpc-test.c++ failed to test this in two ways:
- The test VatNetwork's connect() never returned null.
- There was no test case for loopback connect.
As a result, the code to handle loopback in rpc.c++ had bitrotted. It failed to handle the new bootstrap mechanism introduced in v0.5, and instead only implemented the restorer mechanism from 0.4.
-
- 28 Oct, 2019 2 commits
-
-
Kenton Varda authored
`task` needs to be the last member of ServerRequestContextImpl, because when we construct it, we call `service.request()`, which may call back to send() or acceptWebSocket(), which require `replyTask` to be initialized.
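A generic sketch of the constraint (the class and member names below are illustrative, not the actual kj-http types): C++ constructs members in declaration order, so a member whose initializer can call back into the object must be declared after everything that callback touches.

    #include <kj/async.h>
    #include <kj/function.h>

    struct ContextLike {
      kj::Promise<void> replyTask;   // constructed first
      kj::Promise<void> task;        // constructed last; its initializer may call send()

      ContextLike(kj::Promise<void> reply,
                  kj::Function<kj::Promise<void>(ContextLike&)> request)
          : replyTask(kj::mv(reply)),
            task(request(*this)) {}   // request() may call send(), which relies on replyTask

      void send() { /* safe: replyTask is already initialized here */ }
    };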
-
Kenton Varda authored
And fix a bug detected by this.
-
- 22 Oct, 2019 3 commits
-
-
Kenton Varda authored
The RPC system itself can sometimes call `releaseParams()` redundantly after the application has already called it. So, it's important that it be idempotent.
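A generic sketch of the usual way to make such a call idempotent (the names here are hypothetical, not the actual RPC-system internals): guard it with a flag so a redundant second call becomes a no-op.

    struct IncomingCall {
      void releaseParams() {
        if (paramsReleased) return;   // redundant call (e.g. from the RPC system): do nothing
        paramsReleased = true;
        freeParamsMessage();          // hypothetical helper that frees the params message
      }

      void freeParamsMessage() { /* release the underlying message buffer */ }
      bool paramsReleased = false;
    };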
-
Kenton Varda authored
-
Kenton Varda authored
-
- 15 Oct, 2019 1 commit
-
-
Vitali Lovich authored
-
- 14 Oct, 2019 2 commits
-
-
Vitali Lovich authored
Allow including a header to obtain the iterator traits instead of needing to carefully define a slightly more obscure macro that someone might be tempted to define globally.
-
Vitali Lovich authored
For Visual Studio we have to wrap the headers with push/pop pragmas at the top and bottom of each file. Define common suppress/unsuppress macros for KJ and corresponding begin/end header wrapper macros for Cap'n Proto. Because of a chicken-and-egg problem, the KJ_BEGIN_HEADER/CAPNP_BEGIN_HEADER macros are placed below all includes, to ensure that the appropriate common.h has already been included.
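A sketch of the resulting header layout (the file name and namespace are hypothetical, and this assumes a matching CAPNP_END_HEADER closes the wrapper). The begin macro expands to the MSVC warning push pragma (and to nothing on other compilers) and must appear after all includes so that the common.h defining it has already been pulled in.

    // my-widget.h (hypothetical)
    #pragma once

    #include <capnp/common.h>
    #include <kj/async.h>

    CAPNP_BEGIN_HEADER   // after the includes, per the chicken-and-egg note above

    namespace myapp {

    class Widget {
      // ... declarations ...
    };

    }  // namespace myapp

    CAPNP_END_HEADER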
-
- 04 Oct, 2019 1 commit
-
-
Kenton Varda authored
-
- 02 Oct, 2019 1 commit
-
-
Kenton Varda authored
-
- 17 Sep, 2019 1 commit
-
-
Timothy Trindle authored
-
- 11 Sep, 2019 8 commits
-
-
Kenton Varda authored
It seems like MSVC is generating this function in translation units where it isn't actually called, which inadvertently causes non-RPC Cap'n Proto code to depend on kj-async.
-
Kenton Varda authored
The documentation for this method clearly says that sending the message cannot be delayed because ordering may matter. Only resolving of the returned promise can be delayed to implement flow control.
-
Kenton Varda authored
-
Kenton Varda authored
This is needed now because http-over-capnp will index the Set-Cookie header. This change should make it relatively safe to index Set-Cookie when using KJ HTTP.
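A minimal sketch of what "indexing" a header means in KJ HTTP (the function name is made up): registering it in the header table up front so it can be matched by id rather than by string comparison.

    #include <kj/compat/http.h>

    kj::HttpHeaderId registerSetCookie(kj::HttpHeaderTable::Builder& builder) {
      // Once indexed, Set-Cookie values can be looked up by this id.
      return builder.add("Set-Cookie");
    }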
-
Kenton Varda authored
This allows an HTTP request/response to be forwarded over Cap'n Proto RPC, multiplexed with arbitrary other RPC transactions. This could be compared with HTTP/2, which is a binary protocol representation of HTTP that allows multiplexing. HTTP-over-Cap'n-Proto provides the same, but with some advantages inherent in leveraging Cap'n Proto:

- HTTP transactions can be multiplexed with regular Cap'n Proto RPC transactions. (While in theory you could also layer RPC on top of HTTP, as gRPC does, HTTP transactions are much heavier than basic RPC. In my opinion, layering HTTP over RPC makes much more sense because of this.)
- HTTP endpoints are object capabilities. Multiple private endpoints can be multiplexed over the same connection.
- Either end of the connection can act as the client or server, exposing endpoints to each other.
- Cap'n Proto path shortening can kick in. For instance, imagine a request that passes through several proxies, then eventually returns a large streaming response. If each of the proxies is a Cap'n Proto server with a level 3 RPC implementation, and the response is simply passed through verbatim, the response stream will automatically be shortened to skip over the middleman servers. At present no Cap'n Proto implementation supports level 3, but path shortening can also apply with only level 1 RPC, if all calls proxy through a central hub process, as is often the case in multi-tenant sandboxing scenarios.

There are also disadvantages vs. HTTP/2:

- HTTP/2 is a standard. This is not.
- This protocol is not as finely optimized for the HTTP use case. It will take somewhat more bandwidth on the wire.
- No mechanism for server push has been defined, although this could be a relatively simple addition to the http-over-capnp interface definitions.
- No mechanism for stream prioritization is defined -- this would likely require new features in the Cap'n Proto RPC implementation itself.
- At present, the backpressure mechanism is naive and its performance will suffer as the network distance increases. I intend to solve this by adding better backpressure mechanisms into Cap'n Proto itself.

Shims are provided for compatibility with the KJ HTTP interfaces.

Note that Sandstorm has its own http-over-capnp protocol: https://github.com/sandstorm-io/sandstorm/blob/master/src/sandstorm/web-session.capnp

Sandstorm's protocol and this new one are intended for very different use cases. Sandstorm implements sandboxing of web applications on both the client and server sides. As a result, it cares deeply about the semantics of HTTP headers and how they affect the browser. This new http-over-capnp protocol is meant to be a dumb bridge that simply passes through all headers verbatim.
-
Kenton Varda authored
This implementation features path-shortening through pumps. That is, if an incoming Cap'n Proto stream wraps a KJ stream which ends up pumping to an outgoing Cap'n Proto stream, the incoming stream will be redirected directly to the outgoing stream, in such a way that the RPC system can recognize and reduce the number of network hops as appropriate.

This proved tricky due to the features of KJ's `pumpTo()`, in particular:
- The caller of `pumpTo()` expects eventually to be told how many bytes were pumped (e.g. before EOF was hit).
- A pump may have a specified length limit. In this case the rest of the stream can be pumped somewhere else.
- Multiple streams can be pumped to the same destination stream -- this implies that a pump does not propagate EOF.

These requirements mean that path-shortening is not as simple as redirecting the incoming stream to the outgoing. Instead, we must first ask the outgoing stream's server to create a "substream" object, and then redirect to that. The substream can have a length limit, swallows EOF, informs the original creator on completion, and can even redirect *back* to the original creator to allow the stream to now pump somewhere else.
-
Kenton Varda authored
With this change, a capability server can implement `Capability::Server::shortenPath()` to return a promise that resolves to a new capability in the future. Once it resolves, further calls can be redirected to the new capability. The RPC system will automatically apply path-shortening on resolution. For example:

* Say Alice and Bob are two vats communicating via RPC. Alice holds a capability to an object hosted by Bob. Bob's object later resolves itself (via shortenPath()) to a capability pointing to an object hosted by Alice. Once everything settles, if Alice makes calls on the capability that used to point to Bob, those calls will go directly to the local object that Bob's object resolved to, without crossing the network at all.
* In a level 3 RPC implementation (not yet implemented), Bob could instead resolve his capability to point to a capability hosted by Carol. Alice would then automatically create a direct connection to Carol and start using it to make further calls.

All this works automatically because the implementation of `shortenPath()` is based on existing infrastructure designed to support promise pipelining. If a capability server implements `shortenPath()` to return non-null, then capabilities pointing to it will appear to be unsettled promise capabilities. `whenResolved()` or `whenMoreResolved()` can be called on these to wait for them to "resolve" to the shorter path later on. Up until this point, RPC calls on a promise capability couldn't possibly return until the capability was settled, but nothing actually depended on this in practice.

This feature will be used to implement dynamic path shortening through KJ streams and `pumpTo()`.
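A minimal sketch of the new hook, assuming a hypothetical generated interface `Thing` (the class, header, and constructor below are illustrative) and the signature described above. The server hands the RPC system a promise for a replacement capability; once it resolves, further calls are redirected.

    #include <kj/async.h>
    #include "thing.capnp.h"   // hypothetical generated header for `interface Thing`

    class ThingImpl final: public Thing::Server {
    public:
      explicit ThingImpl(kj::Promise<capnp::Capability::Client> replacement)
          : replacement(replacement.fork()) {}

      kj::Maybe<kj::Promise<capnp::Capability::Client>> shortenPath() override {
        // Returning non-null makes this capability look like an unsettled promise;
        // when the promise resolves, calls are redirected to the shorter path.
        return replacement.addBranch();
      }

    private:
      kj::ForkedPromise<capnp::Capability::Client> replacement;
    };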
-
Kenton Varda authored
Capability::Clients do not follow the usual KJ style with regards to lifetimes of returned promises. RPC methods in particular automatically take a reference on the capability until the method completes. This makes some intuitive sense as Capability::Client itself is a pointer-like type implementing reference counting on some inner object. whenResolved() did not follow the pattern, and instead required that the caller explicitly take a reference. I screwed this up when using it, suggesting that it's pretty unintuitive. It's cheap and safe to automatically take a reference, so let's do that.
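A minimal sketch of what this means for callers (the function name is made up):

    #include <capnp/capability.h>

    kj::Promise<void> waitUntilSettled(capnp::Capability::Client cap) {
      // The returned promise now keeps `cap` alive by itself; previously the
      // caller had to attach a copy of the capability to the promise explicitly.
      return cap.whenResolved();
    }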
-
- 03 Sep, 2019 1 commit
-
-
Kenton Varda authored
-
- 18 Jun, 2019 15 commits
-
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
Consider a capnp streaming type that wraps a kj::AsyncOutputStream. KJ streams require the caller to avoid doing multiple writes at once. Capnp streaming conveniently guarantees only one streaming call will be delivered at a time. This is great because it means the app does not have to do its own queuing of writes.

However, the app may want to use a CapabilityServerSet to unwrap the capability and get at the underlying KJ stream to optimize by writing to it directly. Before it can issue a direct write, it has to wait for all RPC writes to complete. These RPC writes were probably issued by the same caller, before it realized it was talking to a local cap. Unfortunately, it can't just wait for those calls it issued to complete, because streaming flow control may have made them appear to complete long ago, when they're actually still in the server's queue. How does the app make sure that the directly-issued writes don't overlap with RPC writes?

We can solve this by making CapabilityServerSet::getLocalServer() delay until all in-flight stream calls are complete before unwrapping. Now, the app can simply make sure that any requests it issued over RPC in the past completed before it starts issuing direct requests.
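A sketch of the resulting usage pattern, assuming a hypothetical streaming interface `Stream` with a `write` method and a server class `StreamImpl` exposing its wrapped KJ stream; none of these names come from the actual codebase.

    #include <capnp/capability.h>
    #include "stream.capnp.h"   // hypothetical generated header for `interface Stream`
    #include "stream-impl.h"    // hypothetical header declaring StreamImpl

    kj::Promise<void> writeBytes(capnp::CapabilityServerSet<Stream>& serverSet,
                                 Stream::Client cap, kj::Array<kj::byte> bytes) {
      return serverSet.getLocalServer(cap).then(
          [cap, bytes = kj::mv(bytes)](kj::Maybe<Stream::Server&> maybeServer) mutable
              -> kj::Promise<void> {
        KJ_IF_MAYBE(server, maybeServer) {
          // Local case: with this change, getLocalServer() has already waited for
          // all in-flight streaming calls on `cap`, so this direct write cannot
          // overlap any RPC write issued earlier.
          auto& impl = kj::downcast<StreamImpl>(*server);
          auto promise = impl.getInnerStream().write(bytes.begin(), bytes.size());
          return promise.attach(kj::mv(bytes));
        } else {
          // Remote case: fall back to the ordinary streaming RPC call.
          auto req = cap.writeRequest();
          req.setBytes(bytes);
          return req.send();
        }
      });
    }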
-
Kenton Varda authored
-
Kenton Varda authored
Also, push harder on the code generator such that `StreamResult` doesn't show up in generated code at all. So now we have `StreamingRequest<Params>` which is like `Request<Params, Results>`, and we have `StreamingCallContext<Params>` which is like `CallContext<Params, Results>`.
-
Kenton Varda authored
-
Kenton Varda authored
There are two things that every capability server must implement:

* When a streaming method is delivered, it blocks subsequent calls on the same capability. Although not strictly needed to achieve flow control, this simplifies the implementation of streaming servers -- many would otherwise need to implement such serialization manually.
* When a streaming method throws, all subsequent calls also throw the same exception. This is important because exceptions thrown by a streaming call might not actually be delivered to a client, since the client doesn't necessarily wait for the results before making the next call. Again, a streaming server could implement this manually, but almost all streaming servers will likely need it, and this makes things easier.
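A minimal sketch of what this buys a streaming server, assuming a hypothetical interface `Writer` with a streaming method `write @0 (bytes :Data) -> stream;` and the usual generated server signature:

    #include <kj/async-io.h>
    #include "writer.capnp.h"   // hypothetical generated header

    class WriterImpl final: public Writer::Server {
    public:
      explicit WriterImpl(kj::AsyncOutputStream& inner): inner(inner) {}

      kj::Promise<void> write(WriteContext context) override {
        auto bytes = context.getParams().getBytes();
        // No manual queuing or error latch is needed: the next write() will not
        // be delivered until this promise resolves, and if it rejects, later
        // calls will rethrow the same exception automatically.
        return inner.write(bytes.begin(), bytes.size());
      }

    private:
      kj::AsyncOutputStream& inner;
    };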
-
Kenton Varda authored
Note: Apparently, json.capnp had not been added to the bootstrap test, and the checked-in bootstrap had drifted from the source file.
-
Kenton Varda authored
This can be used on a method to indicate that it is used for "streaming", like:

    write @0 (bytes :Data) -> stream;

A "streaming" method is one which is expected to be called many times to transmit an ordered stream of items. For best throughput, it is often necessary to make multiple overlapping calls, so as not to wait for a round trip for every item. However, to avoid excess buffering, it may be necessary to apply backpressure by having the client limit the total number of overlapping calls.

This logic is difficult to get right at the application level, so making it a language feature gives us the opportunity to implement it in the RPC layer. We can, however, do it in a way that is backwards-compatible with implementations that don't support it. The above declaration is equivalent to:

    write @0 (bytes :Data) -> import "/capnp/stream.capnp".StreamResult;

RPC implementations that don't explicitly support streaming can thus instead leave it up to the application to handle.
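A sketch of client-side usage under these semantics (the `Writer` interface is hypothetical). With an RPC implementation that supports streaming, send() on a streaming request typically completes before the call is actually delivered, until the flow-control window fills, so a simple sequential loop still overlaps calls on the wire:

    #include <kj/async.h>
    #include "writer.capnp.h"   // hypothetical generated header

    kj::Promise<void> sendAll(Writer::Client writer,
                              kj::ArrayPtr<const kj::Array<kj::byte>> chunks) {
      // NOTE: `chunks` must remain valid until the returned promise completes.
      auto promise = kj::Promise<void>(kj::READY_NOW);
      for (auto& chunk: chunks) {
        const kj::Array<kj::byte>* chunkPtr = &chunk;
        promise = promise.then([writer, chunkPtr]() mutable {
          auto req = writer.writeRequest();
          req.setBytes(*chunkPtr);
          return req.send();   // typically resolves early, until backpressure kicks in
        });
      }
      return promise;
    }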
-
Kenton Varda authored
Apparently, Return messages with empty capability tables have been allocated one word too small all along, causing many Return messages to be split into two segments and allocate twice the memory they need. I never bothered to check whether this was happening...
-
- 16 Jun, 2019 1 commit
-
-
Kenton Varda authored
Way back in 538a767e I added `RpcSystem::setFlowLimit()`, a blunt mechanism by which an RPC node can arrange to stop reading new messages from the connection when too many incoming calls are in-flight. This was needed to deal with buggy Sandstorm apps that would stream multi-gigabyte files by doing a zillion writes without waiting, which would then all be queued in the HTTP gateway, causing it to run out of memory.

In implementing that, I inadvertently caused the RPC system to do a tree walk on every call message it received, in order to sum up the message size. This is silly, because it's much cheaper to sum up the segment sizes. In fact, in the case of a malicious peer, the tree walk is potentially insufficient, because it doesn't count holes in the segments. The tree walk also means that any invalid pointers in the message cause an exception to be thrown even if that pointer is never accessed by the app, which isn't the usual behavior. I seem to recall this issue coming up in discussion once in the past, but I couldn't find the thread.

For the new streaming feature, we'll be paying attention to the size of outgoing messages. Again, here, it would be nice to compute this size by summing segments without doing a tree walk.

So, this commit adds `sizeInWords()` methods that do this.
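A minimal sketch of the arithmetic this avoids repeating by hand (the function name is made up): summing segment sizes rather than walking the object tree.

    #include <capnp/message.h>
    #include <cstdint>

    uint64_t segmentWordCount(capnp::MessageBuilder& message) {
      uint64_t words = 0;
      for (auto segment: message.getSegmentsForOutput()) {
        words += segment.size();   // segment sizes are already measured in words
      }
      // The new sizeInWords() method is intended to report the same total
      // without a tree walk.
      return words;
    }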
-
- 15 Jun, 2019 1 commit
-
-
Kenton Varda authored
-