- 11 Sep, 2019 3 commits
-
-
Kenton Varda authored
With this change, a capability server can implement `Capability::Server::shortenPath()` to return a promise that resolves to a new capability in the future. Once it resolves, further calls can be redirected to the new capability. The RPC system will automatically apply path-shortening on resolution. For example:

* Say Alice and Bob are two vats communicating via RPC. Alice holds a capability to an object hosted by Bob. Bob's object later resolves itself (via `shortenPath()`) to a capability pointing to an object hosted by Alice. Once everything settles, if Alice makes calls on the capability that used to point to Bob, those calls will go directly to the local object that Bob's object resolved to, without crossing the network at all.
* In a level 3 RPC implementation (not yet implemented), Bob could instead resolve his capability to point to a capability hosted by Carol. Alice would then automatically create a direct connection to Carol and start using it to make further calls.

All this works automatically because the implementation of `shortenPath()` is based on existing infrastructure designed to support promise pipelining. If a capability server implements `shortenPath()` to return non-null, then capabilities pointing to it will appear to be unsettled promise capabilities. `whenResolved()` or `whenMoreResolved()` can be called on these to wait for them to "resolve" to the shorter path later on. Up until this point, RPC calls on a promise capability couldn't possibly return until the capability was settled, but nothing actually depended on this in practice.

This feature will be used to implement dynamic path shortening through KJ streams and `pumpTo()`.
-
Kenton Varda authored
Capability::Clients do not follow the usual KJ style with regard to lifetimes of returned promises. RPC methods in particular automatically take a reference on the capability until the method completes. This makes some intuitive sense, as Capability::Client itself is a pointer-like type implementing reference counting on some inner object. whenResolved() did not follow this pattern, and instead required that the caller explicitly take a reference. I screwed this up when using it, suggesting that it's pretty unintuitive. It's cheap and safe to automatically take a reference, so let's do that.
-
Kenton Varda authored
A pump does not propagate EOF, so a BlockedRead state should not complete when a pump occurs that does not satisfy the read.
-
- 10 Sep, 2019 3 commits
-
-
Kenton Varda authored
Add a little more detail to invalid response status line errors.
-
Kenton Varda authored
-
Kenton Varda authored
Fix http-test.c++ to avoid dubious assumptions about gather-writes.
-
- 09 Sep, 2019 1 commit
-
-
Kenton Varda authored
Most (all?) implementations of `write(ArrayPtr<const ArrayPtr<const byte>>)`, if the outer array contains only one inner array, do not use the outer array again after the initial call returns (as opposed to the promise resolving). But, this is not a safe assumption and http-test.c++ should not be relying on it. (I found this when I tried forcing all writes to complete asynchronously to check if it resulted in any bugs. This is all I found.)
-
- 03 Sep, 2019 5 commits
-
-
Kenton Varda authored
Add `kj::evalLast()` for running a callback after all other events are done.
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
- 21 Aug, 2019 1 commit
-
-
Kenton Varda authored
Return HTTP 501 instead of 400 on unrecognized method.
-
- 20 Aug, 2019 3 commits
-
-
Kenton Varda authored
-
Kenton Varda authored
Fix some error propagation bugs in HTTP.
-
Kenton Varda authored
-
- 05 Aug, 2019 1 commit
-
-
Harris Hancock authored
Report raw HTTP content when handling client protocol errors in kj-http
-
- 02 Aug, 2019 1 commit
-
-
Harris Hancock authored
This information is typically necessary to debug such protocol errors. However, it requires special handling, since it could contain sensitive information. The code which has the best view on protocol errors is HttpHeaders::tryParseRequest(), which previously returned an empty Maybe<Request> on failure. This commit changes that function to return a OneOf<Request, ProtocolError>. This required some surgery in various other parts of the code to deal with the OneOf.
-
- 23 Jul, 2019 2 commits
-
-
Harris Hancock authored
Add HttpServerErrorHandler interface to provide visibility and customization of protocol errors
-
Harris Hancock authored
-
- 22 Jul, 2019 3 commits
-
-
Harris Hancock authored
The HttpServerErrorHandler is separate from the HttpService interface, because error-handling is more in the domain of the HttpServer, which can export multiple HttpServices (via the factory constructor overload).
-
Harris Hancock authored
I broke this while iterating on the new error handler mechanism, so here's a test.
-
Harris Hancock authored
Maybe<T> has a non-trivial destructor, so there's no point making it constexpr; Maybe<T&>, however, can be.
-
- 10 Jul, 2019 1 commit
-
-
Kenton Varda authored
I decided that such wrapping did not make sense.
-
- 08 Jul, 2019 16 commits
-
-
Kenton Varda authored
Extend KJ event loop to support cross-thread events.
-
Kenton Varda authored
-
Kenton Varda authored
They are more efficient, and self-contained enough not to create trouble. Also, Cygwin's pthread_rwlock implementation appears buggy. I am seeing it allow double locks from time to time.
-
Kenton Varda authored
Also required making KJ_WIN32 macros available on Cygwin. (Cygwin allows direct calls to Win32 functions.)
-
Kenton Varda authored
I observed the cygwin async-xthread-test getting deadlocked here, and noticed the bug. However, the predicate in question was not flaky, so this doesn't really fix async-xthread-test.
-
Kenton Varda authored
-
Kenton Varda authored
A long, long time ago, an early version of the event loop code was multithread-aware and used futexes. I ripped that all out later but apparently didn't remove these includes.
-
Kenton Varda authored
The delay() here isn't long enough on Cygwin when using pipes for wakeup. Apparently, Cygwin pipes are very slow. :/
-
Kenton Varda authored
-
Kenton Varda authored
See my latest report: https://cygwin.com/ml/cygwin/2019-07/msg00052.html We can't use signals for cross-thread wakeups on Cygwin because the same signal cannot be pending on two different threads at the same time. So I broke down and made an implementation that uses pipes. Ugh.
-
Kenton Varda authored
-
Kenton Varda authored
Apparently, `sigprocmask()` on macOS affects all threads. You must use `pthread_sigmask()` to affect only the current thread. (To be fair, POSIX says that `sigprocmask()` has unspecified behavior in the presence of threads. However, having it affect all threads is bizarre. On Linux, it affects only the calling thread.) Moreover, `siglongjmp()` on macOS is implemented in terms of `sigprocmask()`. Hence, `siglongjmp()` (with `true` for the second parameter) is not safe to use in a multithreaded program. Instead, we must decompose `siglongjmp()` into its component parts, so that we can correctly use `pthread_sigmask()`. This fixes the deadlocks on macOS.
-
Kenton Varda authored
Cygwin seems to be having an issue with this...
-
Kenton Varda authored
I guess this is particularly likely to happen when the threads are sharing a single core, which is probably common in CI services, explaining why I had trouble reproducing on my own hardware.
-
Kenton Varda authored
I should have done this a long time ago. We don't get any benefit from the parallel test runner, but we get a massive disadvantage in CI from not being able to see the logs.
-
Kenton Varda authored
-