1. 14 Oct, 2019 2 commits
  2. 04 Oct, 2019 2 commits
  3. 03 Oct, 2019 2 commits
  4. 02 Oct, 2019 4 commits
    • Fix handling of queued RT signals. · c84d57a3
      Kenton Varda authored
      For regular (non-RT) POSIX signals, the process can only have at most one instance of each signal queued for delivery at a time. If another copy of the signal arrives before the first is delivered, the new signal is ignored. The idea was that signals are only meant to wake the process up to check some input; the signal itself is not the input.
      
      POSIX RT signals are different. Multiple copies of the same signal can be queued, and each is delivered separately. Each signal may contain some additional information that needs to be processed. The signals themselves are input.
      
      UnixEventPort's `onSignal()` method returns a Promise that resolves the next time the signal is delivered. When the Promise is resolved, the signal is also supposed to be blocked until `onSignal()` can be called again, so that the app cannot miss signals delivered in between.
      
      However, the epoll/signalfd implementation had a bug where it would pull _all_ queued signals off the `signalfd` at once, only delivering the first instance of each signal number and dropping subsequent instances on the floor. That's fine for regular signals, but not RT signals.
      
      This change fixes the bug and adds a test. Incidentally, the poll()-based implementation has been correct all along.
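      A rough sketch of the consumer-side contract described above (assuming the kj::UnixEventPort::captureSignal()/onSignal() API; this is not the commit's actual test): each resolved promise delivers one queued instance, and the next instance is obtained by calling onSignal() again.
      ```c++
      #include <kj/async-io.h>
      #include <kj/async-unix.h>
      #include <signal.h>

      // Wait for successive deliveries of an RT signal, one promise per instance.
      kj::Promise<void> handleRtSignals(kj::UnixEventPort& port, int signo) {
        return port.onSignal(signo).then([&port, signo](siginfo_t info) {
          // Each queued RT signal carries its own payload; process it, then ask
          // for the next instance instead of assuming it was coalesced away.
          int payload = info.si_value.sival_int;
          (void)payload;  // ...application-specific processing here...
          return handleRtSignals(port, signo);
        });
      }

      int main() {
        int signo = SIGRTMIN;
        kj::UnixEventPort::captureSignal(signo);  // must precede event port creation
        auto io = kj::setupAsyncIo();
        handleRtSignals(io.unixEventPort, signo).wait(io.waitScope);
      }
      ```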
    • Fix bug when multiple cmsgs are present. · 44c6b461
      Kenton Varda authored
      I don't really know how to test this since the other cmsg types are bizarre and non-portable, but they do exist.
    • Test that headers are allowed to contain '.'s. · 11f612a9
      Kenton Varda authored
      This and many other characters are surprisingly allowed by the relevant RFCs. But it turns out we implemented the RFCs correctly, so yay.
    • Avoid an unnecessary malloc. · ffbc7043
      Kenton Varda authored
  5. 30 Sep, 2019 1 commit
  6. 19 Sep, 2019 1 commit
  7. 17 Sep, 2019 1 commit
  8. 16 Sep, 2019 2 commits
  9. 11 Sep, 2019 13 commits
    • Merge pull request #829 from capnproto/http-over-capnp · 355697c8
      Kenton Varda authored
       Define and implement HTTP-over-Cap'n-Proto
    • Move Capability::Client::whenResolved() out-of-line to make MSVC linker happy. · 52c63c5c
      Kenton Varda authored
      It seems like MSVC is generating this function in translation units where it isn't actually called, which inadvertently causes non-RPC Cap'n Proto code to depend on kj-async.
    • Fix MSVC build. · 1a22ce4b
      Kenton Varda authored
    • Fix streaming: RpcFlowController::send() must send immediately. · bb83561c
      Kenton Varda authored
      The documentation for this method clearly says that sending the message cannot be delayed because ordering may matter. Only resolving of the returned promise can be delayed to implement flow control.
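      As a simplified illustration of that contract (not the real RpcFlowController implementation; waitForWindow() below is a hypothetical placeholder for whatever backpressure signal a controller uses):
      ```c++
      #include <capnp/rpc.h>
      #include <kj/async.h>

      kj::Promise<void> waitForWindow() { return kj::READY_NOW; }  // hypothetical placeholder

      kj::Promise<void> sendWrong(kj::Own<capnp::OutgoingRpcMessage> msg) {
        // BUG: holding the message until the window opens can reorder it relative
        // to other messages queued on the same connection.
        return waitForWindow().then([msg = kj::mv(msg)]() mutable { msg->send(); });
      }

      kj::Promise<void> sendRight(kj::Own<capnp::OutgoingRpcMessage> msg) {
        // Correct per the documented contract: the message goes on the wire now;
        // only the promise gating the caller's next send is delayed.
        msg->send();
        return waitForWindow();
      }
      ```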
    • An aborted userland pipe should throw DISCONNECTED, not FAILED. · 755f675b
      Kenton Varda authored
      Rationale: If this were a native OS pipe, closing or aborting one end would cause the other end to throw DISCONNECTED.
      
      Note that dropping the read end of a userland pipe is implemented in terms of aborting it, which makes it even more clear that this is a disconnect scenario.
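      A small sketch of the expected behavior, using kj::newOneWayPipe(); whether the failure surfaces synchronously or through the promise is treated here as an implementation detail.
      ```c++
      #include <kj/async-io.h>
      #include <kj/exception.h>
      #include <kj/debug.h>

      int main() {
        auto io = kj::setupAsyncIo();
        auto pipe = kj::newOneWayPipe();
        pipe.in = nullptr;  // dropping the read end aborts the userland pipe

        kj::byte data[] = {1, 2, 3};
        auto maybeException = kj::runCatchingExceptions([&]() {
          pipe.out->write(data, sizeof(data)).wait(io.waitScope);
        });
        KJ_IF_MAYBE(e, maybeException) {
          // Expected after this change: DISCONNECTED, matching a closed OS pipe.
          KJ_ASSERT(e->getType() == kj::Exception::Type::DISCONNECTED);
        }
      }
      ```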
    • Deal with Set-Cookie not being comma-concatenation-friendly. · 8d2644cc
      Kenton Varda authored
      This is needed now because http-over-capnp will index the Set-Cookie header.
      
      This change should make it relatively safe to index Set-Cookie when using KJ HTTP.
    • Define and implement HTTP-over-Cap'n-Proto. · f5190d24
      Kenton Varda authored
      This allows an HTTP request/response to be forwarded over Cap'n Proto RPC, multiplexed with arbitrary other RPC transactions.
      
      This could be compared with HTTP/2, which is a binary protocol representation of HTTP that allows multiplexing. HTTP-over-Cap'n-Proto provides the same, but with some advantages inherent in leveraging Cap'n Proto:
      - HTTP transactions can be multiplexed with regular Cap'n Proto RPC transactions. (While in theory you could also layer RPC on top of HTTP, as gRPC does, HTTP transactions are much heavier than basic RPC. In my opinion, layering HTTP over RPC makes much more sense because of this.)
      - HTTP endpoints are object capabilities. Multiple private endpoints can be multiplexed over the same connection.
      - Either end of the connection can act as the client or server, exposing endpoints to each other.
      - Cap'n Proto path shortening can kick in. For instance, imagine a request that passes through several proxies, then eventually returns a large streaming response. If each of the proxies is a Cap'n Proto server with a level 3 RPC implementation, and the response is simply passed through verbatim, the response stream will automatically be shortened to skip over the middleman servers. At present no Cap'n Proto implementation supports level 3, but path shortening can also apply with only level 1 RPC, if all calls proxy through a central hub process, as is often the case in multi-tenant sandboxing scenarios.
      
      There are also disadvantages vs. HTTP/2:
      - HTTP/2 is a standard. This is not.
      - This protocol is not as finely optimized for the HTTP use case. It will take somewhat more bandwidth on the wire.
      - No mechanism for server push has been defined, although this could be a relatively simple addition to the http-over-capnp interface definitions.
      - No mechanism for stream prioritization is defined -- this would likely require new features in the Cap'n Proto RPC implementation itself.
      - At present, the backpressure mechanism is naive and its performance will suffer as the network distance increases. I intend to solve this by adding better backpressure mechanisms into Cap'n Proto itself.
      
      Shims are provided for compatibility with the KJ HTTP interfaces.
      
      Note that Sandstorm has its own http-over-capnp protocol: https://github.com/sandstorm-io/sandstorm/blob/master/src/sandstorm/web-session.capnp
      
      Sandstorm's protocol and this new one are intended for very different use cases. Sandstorm implements sandboxing of web applications on both the client and server sides. As a result, it cares deeply about the semantics of HTTP headers and how they affect the browser. This new http-over-capnp protocol is meant to be a dumb bridge that simply passes through all headers verbatim.
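      A very rough sketch of how the KJ-compatibility shims mentioned above might be used; the names HttpOverCapnpFactory, ByteStreamFactory, kjToCapnp() and capnpToKj() are my assumptions about the new API rather than verified signatures.
      ```c++
      #include <capnp/compat/http-over-capnp.h>
      #include <capnp/compat/byte-stream.h>
      #include <kj/compat/http.h>

      // Export a local kj::HttpService as a Cap'n Proto capability, and wrap a
      // remote capability back into a kj::HttpService, so existing KJ HTTP code
      // can run over an RPC connection unchanged. (Names are assumptions.)
      capnp::HttpService::Client exportHttp(capnp::HttpOverCapnpFactory& factory,
                                            kj::Own<kj::HttpService> local) {
        return factory.kjToCapnp(kj::mv(local));
      }

      kj::Own<kj::HttpService> importHttp(capnp::HttpOverCapnpFactory& factory,
                                          capnp::HttpService::Client remote) {
        return factory.capnpToKj(kj::mv(remote));
      }
      ```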
    • Minor extensions to HttpHeaders. · 00c0cbfb
      Kenton Varda authored
      - Add a `size()` method.
      - Add a `forEach()` that enumerates tabled headers by ID, and only uses string names for non-tabled headers.
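      A small usage sketch of the additions listed above; the exact callback signatures are inferred from this description, not checked against the header.
      ```c++
      #include <kj/compat/http.h>
      #include <kj/debug.h>

      void dumpHeaders(const kj::HttpHeaders& headers) {
        KJ_LOG(INFO, headers.size());  // total number of headers present

        headers.forEach(
            [](kj::HttpHeaderId id, kj::StringPtr value) {
              // Tabled headers arrive by ID, so no string comparison is needed.
              if (id == kj::HttpHeaderId::HOST) {
                KJ_LOG(INFO, "host", value);
              }
            },
            [](kj::StringPtr name, kj::StringPtr value) {
              // Headers not registered in the table fall back to string names.
              KJ_LOG(INFO, name, value);
            });
      }
      ```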
    • Implement byte streams over Cap'n Proto. · 63c34d47
      Kenton Varda authored
      This implementation features path-shortening through pumps. That is, if an incoming Cap'n Proto stream wraps a KJ stream which ends up pumping to an outgoing Cap'n Proto stream, the incoming stream will be redirected directly to the outgoing stream, in such a way that the RPC system can recognize and reduce the number of network hops as appropriate.
      
      This proved tricky due to the features of KJ's `pumpTo()`, in particular:
      - The caller of `pumpTo()` expects eventually to be told how many bytes were pumped (e.g. before EOF was hit).
      - A pump may have a specified length limit. In this case the rest of the stream can be pumped somewhere else.
      - Multiple streams can be pumped to the same destination stream -- this implies that a pump does not propagate EOF.
      
      These requirements mean that path-shortening is not as simple as redirecting the incoming stream to the outgoing. Instead, we must first ask the outgoing stream's server to create a "substream" object, and then redirect to that. The substream can have a length limit, swallows EOF, informs the original creator on completion, and can even redirect *back* to the original creator to allow the stream to now pump somewhere else.
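      The three pumpTo() properties above are what force the substream indirection; a quick sketch of them in isolation (the stream wiring here is illustrative only):
      ```c++
      #include <kj/async-io.h>
      #include <kj/debug.h>

      kj::Promise<void> splitPump(kj::AsyncInputStream& in,
                                  kj::AsyncOutputStream& headerSink,
                                  kj::AsyncOutputStream& bodySink) {
        // A pump may carry a length limit; the rest of the stream stays readable.
        return in.pumpTo(headerSink, 128)
            .then([&in, &bodySink](uint64_t pumped) {
              // The caller learns how many bytes were actually pumped (possibly
              // fewer than 128 if EOF arrived first).
              KJ_ASSERT(pumped <= 128);
              // The pump did not propagate EOF into headerSink; the remainder of
              // `in` can now be pumped somewhere else entirely.
              return in.pumpTo(bodySink).ignoreResult();
            });
      }
      ```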
    • Allow capability servers to redirect themselves. · f203027c
      Kenton Varda authored
      With this change, a capability server can implement `Capability::Server::shortenPath()` to return a promise that resolves to a new capability in the future. Once it resolves, further calls can be redirected to the new capability.
      
      The RPC system will automatically apply path-shortening on resolution. For example:
      * Say Alice and Bob are two vats communicating via RPC. Alice holds a capability to an object hosted by Bob. Bob's object later resolves itself (via shortenPath()) to a capability pointing to an object hosted by Alice. Once everything settles, if Alice makes calls on the capability that used to point to Bob, those calls will go directly to the local object that Bob's object resolved to, without crossing the network at all.
      * In a level 3 RPC implementation (not yet implemented), Bob could instead resolve his capability to point to a capability hosted by Carol. Alice would then automatically create a direct connection to Carol and start using it to make further calls.
      
      All this works automatically because the implementation of `shortenPath()` is based on existing infrastructure designed to support promise pipelining. If a capability server implements `shortenPath()` to return non-null, then capabilities pointing to it will appear to be unsettled promise capabilities. `whenResolved()` or `whenMoreResolved()` can be called on these to wait for them to "resolve" to the shorter path later on. Up until this point, RPC calls on a promise capability couldn't possibly return until the capability was settled, but nothing actually depended on this in practice.
      
      This feature will be used to implement dynamic path shortening through KJ streams and `pumpTo()`.
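      A hedged sketch of a server using the new hook; the shortenPath() signature shown here, returning kj::Maybe<kj::Promise<capnp::Capability::Client>>, is my reading of this description rather than a verified declaration.
      ```c++
      #include <capnp/capability.h>
      #include <kj/async.h>

      class RedirectingServer final: public capnp::Capability::Server {
      public:
        explicit RedirectingServer(kj::Promise<capnp::Capability::Client> target)
            : fork(target.fork()) {}

        kj::Maybe<kj::Promise<capnp::Capability::Client>> shortenPath() override {
          // Returning non-null makes clients treat this capability as an unsettled
          // promise; once the branch resolves, further calls are redirected to the
          // target, potentially eliminating network hops.
          return fork.addBranch();
        }

        DispatchCallResult dispatchCall(
            uint64_t interfaceId, uint16_t methodId,
            capnp::CallContext<capnp::AnyPointer, capnp::AnyPointer> context) override {
          // Until the redirect resolves, calls would be forwarded here (omitted).
          return { kj::READY_NOW, false };
        }

      private:
        kj::ForkedPromise<capnp::Capability::Client> fork;
      };
      ```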
    • Client::whenResolved() should automatically attach a reference. · fe6024d0
      Kenton Varda authored
      Capability::Clients do not follow the usual KJ style with regards to lifetimes of returned promises. RPC methods in particular automatically take a reference on the capability until the method completes. This makes some intuitive sense as Capability::Client itself is a pointer-like type implementing reference counting on some inner object.
      
      whenResolved() did not follow the pattern, and instead required that the caller explicitly take a reference. I screwed this up when using it, suggesting that it's pretty unintuitive. It's cheap and safe to automatically take a reference, so let's do that.
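      An illustration of the calling-convention change (sketch only):
      ```c++
      #include <capnp/capability.h>

      kj::Promise<void> waitForSettled(capnp::Capability::Client cap) {
        // Previously the caller had to keep the client alive explicitly, e.g.:
        //   auto promise = cap.whenResolved();
        //   return promise.attach(kj::cp(cap));
        // With this change whenResolved() takes its own reference, so the promise
        // below remains valid even after `cap` goes out of scope.
        return cap.whenResolved();
      }
      ```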
    • Fix userland pipe bug that propagated EOF from a pump. · dba7f35b
      Kenton Varda authored
      A pump does not propagate EOF. So a BlockedRead state should not complete when a pump happens that does not satisfy the read.
  10. 10 Sep, 2019 3 commits
  11. 09 Sep, 2019 1 commit
    • Fix http-test.c++ to avoid dubious assumptions about gather-writes. · 8f9717d2
      Kenton Varda authored
      Most (all?) implementations of `write(ArrayPtr<const ArrayPtr<const byte>>)`, when the outer array contains only one inner array, do not touch the outer array again after the initial call returns (as opposed to waiting until the returned promise resolves). But this is not a safe assumption, and http-test.c++ should not be relying on it.
      
      (I found this when I tried forcing all writes to complete asynchronously to check if it resulted in any bugs. This is all I found.)
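      The safe pattern implied here is to keep the outer array of pieces (and the buffers it points at) alive until the write promise resolves, not merely until write() returns; a sketch:
      ```c++
      #include <kj/async-io.h>

      kj::Promise<void> writeGathered(kj::AsyncOutputStream& out,
                                      kj::Array<kj::byte> a, kj::Array<kj::byte> b) {
        auto pieces = kj::heapArray<kj::ArrayPtr<const kj::byte>>(2);
        pieces[0] = a;
        pieces[1] = b;
        auto promise = out.write(pieces);
        // Attach the outer array and both backing buffers so nothing is freed
        // while a possibly-asynchronous write is still reading from them.
        return promise.attach(kj::mv(pieces), kj::mv(a), kj::mv(b));
      }
      ```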
  12. 03 Sep, 2019 5 commits
  13. 21 Aug, 2019 1 commit
  14. 20 Aug, 2019 2 commits