- 28 Oct, 2019 4 commits
-
-
Kenton Varda authored
Detect and report when a HashMap suffers from excessive collisions.
-
Kenton Varda authored
Fix failures in http-over-capnp-test.
-
Kenton Varda authored
`task` needs to be the last member of ServerRequestContextImpl, because when we construct it, we call `service.request()`, which may call back to send() or acceptWebSocket(), which require `replyTask` to be initialized.
-
Kenton Varda authored
And fix a bug detected by this.
-
- 23 Oct, 2019 1 commit
-
-
Kenton Varda authored
Various bug fixes
-
- 22 Oct, 2019 4 commits
-
-
Kenton Varda authored
-
Kenton Varda authored
The RPC system itself can sometimes call `releaseParams()` redundantly after the application has already called it. So, it's important that it be idempotent.
-
Kenton Varda authored
-
Kenton Varda authored
-
- 15 Oct, 2019 2 commits
-
-
Kenton Varda authored
Add std-iterator.h to build files
-
Vitali Lovich authored
-
- 14 Oct, 2019 4 commits
-
-
Kenton Varda authored
Suppress MSVC warnings in header files
-
Kenton Varda authored
Add compat/std-iterator.h
-
Vitali Lovich authored
Allow including a header to obtain the iterator traits instead of needing to carefully define a slightly more obscure macro that someone might be tempted to define globally.
-
Vitali Lovich authored
For Visual Studio we have to wrap the headers with push/pop pragmas at the top and bottom of each file. Define common macros for suppressing/unsuppressing warnings in KJ, plus the corresponding begin/end header wrapper macros for CAPNP. Because there's a chicken-and-egg problem, the KJ_BEGIN_HEADER/CAPNP_BEGIN_HEADER macros are placed below all includes, to ensure that the appropriate common.h file has been sourced.
-
- 04 Oct, 2019 2 commits
-
-
Kenton Varda authored
Fixes #889.
-
Kenton Varda authored
-
- 03 Oct, 2019 2 commits
-
-
Kenton Varda authored
Fix handling of queued RT signals, plus some other crap
-
Kenton Varda authored
-
- 02 Oct, 2019 4 commits
-
-
Kenton Varda authored
For regular (non-RT) POSIX signals, the process can only have at most one instance of each signal queued for delivery at a time. If another copy of the signal arrives before the first is delivered, the new signal is ignored. The idea was that signals are only meant to wake the process up to check some input; the signal itself is not the input.

POSIX RT signals are different. Multiple copies of the same signal can be queued, and each is delivered separately. Each signal may contain some additional information that needs to be processed. The signals themselves are input.

UnixEventPort's `onSignal()` method returns a Promise that resolves the next time the signal is delivered. When the Promise is resolved, the signal is also supposed to be blocked until `onSignal()` can be called again, so that the app cannot miss signals delivered in between. However, the epoll/signalfd implementation had a bug where it would pull _all_ queued signals off the `signalfd` at once, only delivering the first instance of each signal number and dropping subsequent instances on the floor. That's fine for regular signals, but not RT signals.

This change fixes the bug and adds a test. Incidentally, the poll()-based implementation has been correct all along.
-
Kenton Varda authored
I don't really know how to test this since the other cmsg types are bizarre and non-portable, but they do exist.
-
Kenton Varda authored
This and many other characters are surprisingly allowed by the relevant RFCs. But it turns out we implemented the RFCs correctly, so yay.
-
Kenton Varda authored
-
- 30 Sep, 2019 1 commit
-
-
Edward Z. Yang authored
I initially didn't read the docs carefully enough and returned a StringPtr to a locally allocated string. Whoops. Make the doc a little clearer about this.
-
- 19 Sep, 2019 1 commit
-
-
Kenton Varda authored
Increase arenaSpace for emscripten builds
-
- 17 Sep, 2019 1 commit
-
-
Timothy Trindle authored
-
- 16 Sep, 2019 2 commits
-
-
Kenton Varda authored
kj::atomicAddRef(): Fix assertion error message
-
Joe Lee authored
-
- 11 Sep, 2019 12 commits
-
-
Kenton Varda authored
Define and implement HTTP-over-Cap'n-Proto
-
Kenton Varda authored
It seems like MSVC is generating this function in translation units where it isn't actually called, which inadvertently causes non-RPC Cap'n Proto code to depend on kj-async.
-
Kenton Varda authored
-
Kenton Varda authored
The documentation for this method clearly says that sending the message cannot be delayed because ordering may matter. Only resolving of the returned promise can be delayed to implement flow control.
-
Kenton Varda authored
-
Kenton Varda authored
Rationale: If this were a native OS pipe, closing or aborting one end would cause the other end to throw DISCONNECTED. Note that dropping the read end of a userland pipe is implemented in terms of aborting it, which makes it even more clear that this is a disconnect scenario.
-
Kenton Varda authored
This is needed now because http-over-capnp will index the Set-Cookie header. This change should make it relatively safe to index Set-Cookie when using KJ HTTP.
-
Kenton Varda authored
This allows an HTTP request/response to be forwarded over Cap'n Proto RPC, multiplexed with arbitrary other RPC transactions. This could be compared with HTTP/2, which is a binary protocol representation of HTTP that allows multiplexing. HTTP-over-Cap'n-Proto provides the same, but with some advantages inherent in leveraging Cap'n Proto:

- HTTP transactions can be multiplexed with regular Cap'n Proto RPC transactions. (While in theory you could also layer RPC on top of HTTP, as gRPC does, HTTP transactions are much heavier than basic RPC. In my opinion, layering HTTP over RPC makes much more sense because of this.)
- HTTP endpoints are object capabilities. Multiple private endpoints can be multiplexed over the same connection.
- Either end of the connection can act as the client or server, exposing endpoints to each other.
- Cap'n Proto path shortening can kick in. For instance, imagine a request that passes through several proxies, then eventually returns a large streaming response. If each proxy is a Cap'n Proto server with a level 3 RPC implementation, and the response is simply passed through verbatim, the response stream will automatically be shortened to skip over the middleman servers. At present no Cap'n Proto implementation supports level 3, but path shortening can also apply with only level 1 RPC, if all calls proxy through a central hub process, as is often the case in multi-tenant sandboxing scenarios.

There are also disadvantages vs. HTTP/2:

- HTTP/2 is a standard. This is not.
- This protocol is not as finely optimized for the HTTP use case. It will take somewhat more bandwidth on the wire.
- No mechanism for server push has been defined, although this could be a relatively simple addition to the http-over-capnp interface definitions.
- No mechanism for stream prioritization is defined -- this would likely require new features in the Cap'n Proto RPC implementation itself.
- At present, the backpressure mechanism is naive and its performance will suffer as the network distance increases. I intend to solve this by adding better backpressure mechanisms into Cap'n Proto itself.

Shims are provided for compatibility with the KJ HTTP interfaces.

Note that Sandstorm has its own http-over-capnp protocol: https://github.com/sandstorm-io/sandstorm/blob/master/src/sandstorm/web-session.capnp

Sandstorm's protocol and this new one are intended for very different use cases. Sandstorm implements sandboxing of web applications on both the client and server sides. As a result, it cares deeply about the semantics of HTTP headers and how they affect the browser. This new http-over-capnp protocol is meant to be a dumb bridge that simply passes through all headers verbatim.
-
Kenton Varda authored
- Add a `size()` method.
- Add a `forEach()` that enumerates tabled headers by ID, and only uses string names for non-tabled headers.
-
Kenton Varda authored
This implementation features path-shortening through pumps. That is, if an incoming Cap'n Proto stream wraps a KJ stream which ends up pumping to an outgoing Cap'n Proto stream, the incoming stream will be redirected directly to the outgoing stream, in such a way that the RPC system can recognize and reduce the number of network hops as appropriate.

This proved tricky due to the features of KJ's `pumpTo()`, in particular:

- The caller of `pumpTo()` expects eventually to be told how many bytes were pumped (e.g. before EOF was hit).
- A pump may have a specified length limit. In this case the rest of the stream can be pumped somewhere else.
- Multiple streams can be pumped to the same destination stream -- this implies that a pump does not propagate EOF.

These requirements mean that path-shortening is not as simple as redirecting the incoming stream to the outgoing. Instead, we must first ask the outgoing stream's server to create a "substream" object, and then redirect to that. The substream can have a length limit, swallows EOF, informs the original creator on completion, and can even redirect *back* to the original creator to allow the stream to now pump somewhere else.
-
Kenton Varda authored
With this change, a capability server can implement `Capability::Server::shortenPath()` to return a promise that resolves to a new capability in the future. Once it resolves, further calls can be redirected to the new capability. The RPC system will automatically apply path-shortening on resolution. For example:

* Say Alice and Bob are two vats communicating via RPC. Alice holds a capability to an object hosted by Bob. Bob's object later resolves itself (via shortenPath()) to a capability pointing to an object hosted by Alice. Once everything settles, if Alice makes calls on the capability that used to point to Bob, those calls will go directly to the local object that Bob's object resolved to, without crossing the network at all.
* In a level 3 RPC implementation (not yet implemented), Bob could instead resolve his capability to point to a capability hosted by Carol. Alice would then automatically create a direct connection to Carol and start using it to make further calls.

All this works automatically because the implementation of `shortenPath()` is based on existing infrastructure designed to support promise pipelining. If a capability server implements `shortenPath()` to return non-null, then capabilities pointing to it will appear to be unsettled promise capabilities. `whenResolved()` or `whenMoreResolved()` can be called on these to wait for them to "resolve" to the shorter path later on. Up until this point, RPC calls on a promise capability couldn't possibly return until the capability was settled, but nothing actually depended on this in practice.

This feature will be used to implement dynamic path shortening through KJ streams and `pumpTo()`.
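A hedged sketch of how a capability server might opt in to this hook. This is pseudocode-level C++: it is not compilable without Cap'n Proto, and `MyInterface` plus the member names are hypothetical; only `shortenPath()` itself comes from the commit above.

```
class RedirectingServer final : public MyInterface::Server {
public:
  RedirectingServer() {
    auto paf = kj::newPromiseAndFulfiller<capnp::Capability::Client>();
    fulfiller = kj::mv(paf.fulfiller);
    redirect = kj::mv(paf.promise);
  }

  // Returning non-null makes capabilities pointing at this server look like
  // unsettled promise capabilities; when the promise resolves, the RPC
  // system re-routes further calls to the resolved capability.
  kj::Maybe<kj::Promise<capnp::Capability::Client>> shortenPath() override {
    return kj::mv(redirect);
  }

  // Called later, once the shorter path is known.
  void resolveTo(capnp::Capability::Client target) {
    fulfiller->fulfill(kj::mv(target));
  }

private:
  kj::Own<kj::PromiseFulfiller<capnp::Capability::Client>> fulfiller;
  kj::Maybe<kj::Promise<capnp::Capability::Client>> redirect;
};
```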
-
Kenton Varda authored
Capability::Clients do not follow the usual KJ style with regard to lifetimes of returned promises. RPC methods in particular automatically take a reference on the capability until the method completes. This makes some intuitive sense, as Capability::Client itself is a pointer-like type implementing reference counting on some inner object. whenResolved() did not follow the pattern, and instead required that the caller explicitly take a reference. I screwed this up when using it, suggesting that it's pretty unintuitive. It's cheap and safe to automatically take a reference, so let's do that.
-