1. 06 Dec, 2019 1 commit
  2. 27 Nov, 2019 1 commit
    • Override Content-Length and Transfer-Encoding in http-over-capnp. · 0123e171
      Kenton Varda authored
      Some applications expect to be able to inspect these headers in order to learn about the properties of the entity-body. Currently, over RPC, the sender could send arbitrary header values that have nothing to do with the actual body. Instead, let's just overwrite them to match.
  3. 14 Nov, 2019 1 commit
    • Fix RPC loopback bootstrap(). · 77a57f8c
      Kenton Varda authored
      When VatNetwork::connect() returns nullptr, it means that the caller is trying to connect to itself.
      
      rpc-test.c++ failed to test this in two ways:
      - The test VatNetwork's connect() never returned null.
      - There was no test case for loopback connect.
      
      As a result, the code to handle loopback in rpc.c++ had bitrotted. It failed to handle the new bootstrap mechanism introduced in v0.5, and instead only implemented the restorer mechanism from 0.4.
  4. 28 Oct, 2019 2 commits
  5. 22 Oct, 2019 3 commits
  6. 15 Oct, 2019 1 commit
  7. 14 Oct, 2019 2 commits
    • Add compat/std-iterator.h · 8fc9857a
      Vitali Lovich authored
      Allow including a header to obtain the iterator traits instead of
      needing to carefully define a slightly more obscure macro that someone
      might be tempted to define globally.
    • Suppress MSVC warnings in header files · 93e5be76
      Vitali Lovich authored
      For Visual Studio we have to wrap the headers with push/pop pragmas
      at the top and bottom of the file.
      
      Define common suppress/unsuppress macros for KJ and the corresponding begin/end header wrapper macros for CAPNP. Because of a chicken-and-egg problem, the KJ_BEGIN_HEADER/CAPNP_BEGIN_HEADER macros are placed below all includes to ensure that the appropriate common.h file has already been included.
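      The wrapper pattern described above can be sketched as follows. This is a hypothetical illustration, not the actual kj/common.h definitions; the specific warning numbers are assumptions for the example, not the library's real list:

```cpp
// Sketch of the push/pop wrapper pattern. On MSVC, KJ_BEGIN_HEADER saves the
// warning state and disables selected warnings; KJ_END_HEADER restores it.
// On other compilers both macros expand to nothing.
#if defined(_MSC_VER) && !defined(__clang__)
  #define KJ_BEGIN_HEADER \
    __pragma(warning(push)) \
    __pragma(warning(disable: 4251 4275))  /* example warning numbers only */
  #define KJ_END_HEADER __pragma(warning(pop))
#else
  #define KJ_BEGIN_HEADER
  #define KJ_END_HEADER
#endif

// A header would then be wrapped like this, with KJ_BEGIN_HEADER placed
// after all #includes so that the common.h defining the macros comes first:
KJ_BEGIN_HEADER
inline int exampleHeaderFunction() { return 42; }
KJ_END_HEADER
```

      The push/pop pairing keeps the suppression from leaking into code that merely includes the header.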
  8. 04 Oct, 2019 1 commit
  9. 02 Oct, 2019 1 commit
  10. 17 Sep, 2019 1 commit
  11. 11 Sep, 2019 8 commits
    • Move Capability::Client::whenResolved() out-of-line to make MSVC linker happy. · 52c63c5c
      Kenton Varda authored
      It seems like MSVC is generating this function in translation units where it isn't actually called, which inadvertently causes non-RPC Cap'n Proto code to depend on kj-async.
    • Fix streaming: RpcFlowController::send() must send immediately. · bb83561c
      Kenton Varda authored
      The documentation for this method clearly says that sending the message cannot be delayed because ordering may matter. Only resolving of the returned promise can be delayed to implement flow control.
    • Deal with Set-Cookie not being comma-concatenation-friendly. · 8d2644cc
      Kenton Varda authored
      This is needed now because http-over-capnp will index the Set-Cookie header.
      
      This change should make it relatively safe to index Set-Cookie when using KJ HTTP.
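      The problem can be shown with a self-contained sketch (plain C++, not KJ HTTP code): cookie attributes such as Expires legally contain commas, so a receiver that splits a comma-concatenated Set-Cookie value back apart mangles the cookies:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Split a combined header value on ", " the way a naive receiver would
// when undoing standard comma-concatenation of repeated headers.
static std::vector<std::string> splitCommas(const std::string& value) {
  std::vector<std::string> parts;
  std::size_t pos = 0, next = 0;
  while ((next = value.find(", ", pos)) != std::string::npos) {
    parts.push_back(value.substr(pos, next - pos));
    pos = next + 2;
  }
  parts.push_back(value.substr(pos));
  return parts;
}
```

      Joining `id=a; Expires=Wed, 21 Oct 2015 07:28:00 GMT` with a second cookie and re-splitting yields three fragments instead of the original two, which is why Set-Cookie needs special-case handling.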
    • Define and implement HTTP-over-Cap'n-Proto. · f5190d24
      Kenton Varda authored
      This allows an HTTP request/response to be forwarded over Cap'n Proto RPC, multiplexed with arbitrary other RPC transactions.
      
      This could be compared with HTTP/2, which is a binary protocol representation of HTTP that allows multiplexing. HTTP-over-Cap'n-Proto provides the same, but with some advantages inherent in leveraging Cap'n Proto:
      - HTTP transactions can be multiplexed with regular Cap'n Proto RPC transactions. (While in theory you could also layer RPC on top of HTTP, as gRPC does, HTTP transactions are much heavier than basic RPC. In my opinion, layering HTTP over RPC makes much more sense because of this.)
      - HTTP endpoints are object capabilities. Multiple private endpoints can be multiplexed over the same connection.
      - Either end of the connection can act as the client or server, exposing endpoints to each other.
      - Cap'n Proto path shortening can kick in. For instance, imagine a request that passes through several proxies, then eventually returns a large streaming response. If each proxy is a Cap'n Proto server with a level 3 RPC implementation, and the response is simply passed through verbatim, the response stream will automatically be shortened to skip over the middleman servers. At present no Cap'n Proto implementation supports level 3, but path shortening can also apply with only level 1 RPC, if all calls proxy through a central hub process, as is often the case in multi-tenant sandboxing scenarios.
      
      There are also disadvantages vs. HTTP/2:
      - HTTP/2 is a standard. This is not.
      - This protocol is not as finely optimized for the HTTP use case. It will take somewhat more bandwidth on the wire.
      - No mechanism for server push has been defined, although this could be a relatively simple addition to the http-over-capnp interface definitions.
      - No mechanism for stream prioritization is defined -- this would likely require new features in the Cap'n Proto RPC implementation itself.
      - At present, the backpressure mechanism is naive and its performance will suffer as the network distance increases. I intend to solve this by adding better backpressure mechanisms into Cap'n Proto itself.
      
      Shims are provided for compatibility with the KJ HTTP interfaces.
      
      Note that Sandstorm has its own http-over-capnp protocol: https://github.com/sandstorm-io/sandstorm/blob/master/src/sandstorm/web-session.capnp
      
      Sandstorm's protocol and this new one are intended for very different use cases. Sandstorm implements sandboxing of web applications on both the client and server sides. As a result, it cares deeply about the semantics of HTTP headers and how they affect the browser. This new http-over-capnp protocol is meant to be a dumb bridge that simply passes through all headers verbatim.
    • Implement byte streams over Cap'n Proto. · 63c34d47
      Kenton Varda authored
      This implementation features path-shortening through pumps. That is, if an incoming Cap'n Proto stream wraps a KJ stream which ends up pumping to an outgoing Cap'n Proto stream, the incoming stream will be redirected directly to the outgoing stream, in such a way that the RPC system can recognize and reduce the number of network hops as appropriate.
      
      This proved tricky due to the features of KJ's `pumpTo()`, in particular:
      - The caller of `pumpTo()` expects eventually to be told how many bytes were pumped (e.g. before EOF was hit).
      - A pump may have a specified length limit. In this case the rest of the stream can be pumped somewhere else.
      - Multiple streams can be pumped to the same destination stream -- this implies that a pump does not propagate EOF.
      
      These requirements mean that path-shortening is not as simple as redirecting the incoming stream to the outgoing one. Instead, we must first ask the outgoing stream's server to create a "substream" object, and then redirect to that. The substream can have a length limit, swallows EOF, informs the original creator on completion, and can even redirect *back* to the original creator to allow the stream to now pump somewhere else.
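      The substream behavior can be modeled with a toy, non-RPC sketch (all names hypothetical; the real mechanism lives in the byte-stream RPC implementation): it enforces a length limit, swallows EOF rather than ending the underlying sink, and reports the byte count back to its creator:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>

// Toy model only: `sink` stands in for the outgoing stream, and `onDone`
// for notifying the substream's creator when this leg of the pump completes.
class Substream {
public:
  Substream(std::string& sink, std::size_t limit,
            std::function<void(std::size_t)> onDone)
      : sink(sink), limit(limit), onDone(std::move(onDone)) {}

  // Accept at most `limit` total bytes; returns how many were written.
  std::size_t write(const std::string& data) {
    std::size_t n = std::min(data.size(), limit - written);
    sink.append(data, 0, n);
    written += n;
    return n;
  }

  // Ending the substream does NOT propagate EOF to the sink; it only tells
  // the creator how many bytes were pumped, so it can pump elsewhere next.
  void end() { onDone(written); }

private:
  std::string& sink;
  std::size_t limit;
  std::size_t written = 0;
  std::function<void(std::size_t)> onDone;
};
```

      This captures why a substream, rather than a bare redirect, is needed: the creator still learns the pumped byte count, and the sink stays open for further pumps.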
    • Allow capability servers to redirect themselves. · f203027c
      Kenton Varda authored
      With this change, a capability server can implement `Capability::Server::shortenPath()` to return a promise that resolves to a new capability in the future. Once it resolves, further calls can be redirected to the new capability.
      
      The RPC system will automatically apply path-shortening on resolution. For example:
      * Say Alice and Bob are two vats communicating via RPC. Alice holds a capability to an object hosted by Bob. Bob's object later resolves itself (via shortenPath()) to a capability pointing to an object hosted by Alice. Once everything settles, if Alice makes calls on the capability that used to point to Bob, those calls will go directly to the local object that Bob's object resolved to, without crossing the network at all.
      * In a level 3 RPC implementation (not yet implemented), Bob could instead resolve his capability to point to a capability hosted by Carol. Alice would then automatically create a direct connection to Carol and start using it to make further calls.
      
      All this works automatically because the implementation of `shortenPath()` is based on existing infrastructure designed to support promise pipelining. If a capability server implements `shortenPath()` to return non-null, then capabilities pointing to it will appear to be unsettled promise capabilities. `whenResolved()` or `whenMoreResolved()` can be called on these to wait for them to "resolve" to the shorter path later on. Up until this point, RPC calls on a promise capability couldn't possibly return until the capability was settled, but nothing actually depended on this in practice.
      
      This feature will be used to implement dynamic path shortening through KJ streams and `pumpTo()`.
    • Client::whenResolved() should automatically attach a reference. · fe6024d0
      Kenton Varda authored
      Capability::Clients do not follow the usual KJ style with regard to lifetimes of returned promises. RPC methods in particular automatically take a reference on the capability until the method completes. This makes some intuitive sense, as Capability::Client itself is a pointer-like type implementing reference counting on some inner object.
      
      whenResolved() did not follow the pattern, and instead required that the caller explicitly take a reference. I screwed this up when using it, suggesting that it's pretty unintuitive. It's cheap and safe to automatically take a reference, so let's do that.
  12. 03 Sep, 2019 1 commit
  13. 18 Jun, 2019 15 commits
    • Really fix -Wall build. · f94b1a6f
      Kenton Varda authored
    • e2094eed
    • Fix C++14 build. · ca0e85ce
      Kenton Varda authored
    • 3ef62d15
    • Fix typos. · 90667fbd
      Kenton Varda authored
    • CapabilityServerSet::getLocalServer() must wait for stream queue. · 4a4fe65c
      Kenton Varda authored
      Consider a capnp streaming type that wraps a kj::AsyncOutputStream.
      
      KJ streams require the caller to avoid doing multiple writes at once. Capnp streaming conveniently guarantees only one streaming call will be delivered at a time. This is great because it means the app does not have to do its own queuing of writes.
      
      However, the app may want to use a CapabilityServerSet to unwrap the capability and get at the underlying KJ stream in order to optimize by writing to it directly. But before it can issue a direct write, it has to wait for all RPC writes to complete. These RPC writes were probably issued by the same caller, before it realized it was talking to a local cap. Unfortunately, it can't just wait for the calls it issued to complete, because streaming flow control may have made them appear to complete long ago, when they're actually still in the server's queue. How does the app make sure that its directly-issued writes don't overlap with RPC writes?
      
      We can solve this by making CapabilityServerSet::getLocalServer() delay until all in-flight stream calls are complete before unwrapping.
      
      Now, the app can simply make sure that any requests it issued over RPC in the past completed before it starts issuing direct requests.
    • Implement client RPC side of streaming. · 7a0e0fd0
      Kenton Varda authored
    • Add client-side streaming hooks. · 56493100
      Kenton Varda authored
      Also, push harder on the code generator such that `StreamResult` doesn't show up in generated code at all.
      
      So now we have `StreamingRequest<Params>` which is like `Request<Params, Results>`, and we have `StreamingCallContext<Params>` which is like `CallContext<Params, Results>`.
    • Update bootstraps for previous commit. · 34481c85
      Kenton Varda authored
    • Implement server side of streaming. · c3cfe9e5
      Kenton Varda authored
      There are two things that every capability server must implement:
      
      * When a streaming method is delivered, it blocks subsequent calls on the same capability. Although not strictly needed to achieve flow control, this simplifies the implementation of streaming servers -- many would otherwise need to implement such serialization manually.
      * When a streaming method throws, all subsequent calls also throw the same exception. This is important because exceptions thrown by a streaming call might not actually be delivered to a client, since the client doesn't necessarily wait for the results before making the next call. Again, a streaming server could implement this manually, but almost all streaming servers will likely need it, and this makes things easier.
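      The second behavior can be modeled with a toy, synchronous sketch (hypothetical names; not the generated capnp server code): once one streaming call fails, every later call throws the same exception, because the client may never have observed the original failure:

```cpp
#include <functional>
#include <stdexcept>
#include <string>

// Toy model: latches the first failure and replays it for all later calls,
// standing in for the exception-sticking behavior a streaming server needs.
class StreamingGate {
public:
  void call(const std::function<void()>& fn) {
    if (failed) throw std::runtime_error(error);
    try {
      fn();
    } catch (const std::exception& e) {
      failed = true;
      error = e.what();
      throw;
    }
  }

private:
  bool failed = false;
  std::string error;
};
```

      A real implementation also serializes delivery so only one streaming call runs at a time; in this synchronous toy that property is trivially true.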
    • Regenerate bootstraps for streaming. · a784f2f7
      Kenton Varda authored
      Note: Apparently, json.capnp had not been added to the bootstrap test, and the checked-in bootstrap had drifted from the source file.
    • Introduce new 'stream' keyword. · bd6d75ba
      Kenton Varda authored
      This can be used on a method to indicate that it is used for "streaming", like:
      
          write @0 (bytes :Data) -> stream;
      
      A "streaming" method is one which is expected to be called many times to transmit an ordered stream of items. For best throughput, it is often necessary to make multiple overlapping calls, so as not to wait for a round trip for every item. However, to avoid excess buffering, it may be necessary to apply backpressure by having the client limit the total number of overlapping calls. This logic is difficult to get right at the application level, so making it a language feature gives us the opportunity to implement it in the RPC layer.
      
      We can, however, do it in a way that is backwards-compatible with implementations that don't support it. The above declaration is equivalent to:
      
          write @0 (bytes :Data) -> import "/capnp/stream.capnp".StreamResult;
      
      RPC implementations that don't explicitly support streaming can thus instead leave it up to the application to handle.
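      The client-side backpressure described above can be sketched as a toy in-flight window (hypothetical; the real accounting lives in the RPC layer's flow controller):

```cpp
#include <cstddef>

// Toy model: permit at most `limit` overlapping streaming calls.
class FlowWindow {
public:
  explicit FlowWindow(std::size_t limit) : limit(limit) {}

  // Returns true if a new call may start now; false means "wait for one of
  // the outstanding calls to complete before sending the next item".
  bool tryStart() {
    if (inFlight >= limit) return false;
    ++inFlight;
    return true;
  }

  // Called when an outstanding streaming call's promise resolves.
  void finish() { --inFlight; }

private:
  std::size_t limit;
  std::size_t inFlight = 0;
};
```

      Keeping several calls in flight hides round-trip latency, while the cap bounds how much the receiver must buffer.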
    • Fix estimation of Return message sizes. · c826a71a
      Kenton Varda authored
      Apparently, Return messages with empty capability tables have been allocated one word too small all along, causing many Return messages to be split into two segments and allocate twice the memory they need. I never bothered to check whether this was happening...
  14. 16 Jun, 2019 1 commit
    • Add cheaper way to check size of RPC messages for flow control. · 76e35a7c
      Kenton Varda authored
      Way back in 538a767e I added `RpcSystem::setFlowLimit()`, a blunt mechanism by which an RPC node can arrange to stop reading new messages from the connection when too many incoming calls are in-flight. This was needed to deal with buggy Sandstorm apps that would stream multi-gigabyte files by doing a zillion writes without waiting, which would then all be queued in the HTTP gateway, causing it to run out of memory.
      
      In implementing that, I inadvertently caused the RPC system to do a tree walk on every call message it received, in order to sum up the message size. This is silly, because it's much cheaper to sum up the segment sizes. In fact, in the case of a malicious peer, the tree walk is potentially insufficient, because it doesn't count holes in the segments. The tree walk also means that any invalid pointers in the message cause an exception to be thrown even if that pointer is never accessed by the app, which isn't the usual behavior.
      
      I seem to recall this issue coming up in discussion once in the past, but I couldn't find the thread.
      
      For the new streaming feature, we'll be paying attention to the size of outgoing messages. Again, here, it would be nice to compute this size by summing segments without doing a tree walk.
      
      So, this commit adds `sizeInWords()` methods that do this.
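      The difference can be illustrated with a toy model (hypothetical types; the real `sizeInWords()` methods operate on capnp messages): summing segment sizes costs a constant amount of work per segment, while a tree walk must touch every object in the message:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy model: a message is just a list of segments, each a vector of
// 64-bit words.
using Segment = std::vector<std::uint64_t>;

// O(number of segments): no pointer validation needed, and holes in the
// segments are counted too, unlike a per-object tree walk.
std::size_t sizeInWords(const std::vector<Segment>& segments) {
  std::size_t total = 0;
  for (const Segment& seg : segments) total += seg.size();
  return total;
}
```

      Because the sum is taken over raw segment lengths, invalid pointers in the message can never trigger an exception from this measurement.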
  15. 15 Jun, 2019 1 commit