- 26 Jun, 2019 1 commit
-
Kenton Varda authored
-
- 22 Jun, 2019 1 commit
-
Kenton Varda authored
Add per-object streaming flow control
-
- 21 Jun, 2019 4 commits
-
Kenton Varda authored
Implement MutexGuarded::when() (i.e. condvars) on all platforms.
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
Note that at present, I think the only way they could have happened while not under lock is if one of the pthread calls or syscalls failed, which should never happen. Exceptions thrown by the predicate were already always rethrown under lock. But it doesn't hurt to be safe.
-
- 20 Jun, 2019 2 commits
-
Harris Hancock authored
Fix sendStream() failing if it can't complete immediately.
-
Kenton Varda authored
-
- 19 Jun, 2019 3 commits
-
Kenton Varda authored
-
Kenton Varda authored
And it turns out that the Windows implementation was returning too early due to rounding error. Fixed.
-
Kenton Varda authored
It turns out GetTickCount64() is only precise to the nearest timeslice, which can be up to 16ms. The imprecision caused test failures.
-
- 18 Jun, 2019 20 commits
-
Kenton Varda authored
I think I imagined once upon a time that this would be a convenient way to deal with external interfaces that like to return nullable pointers. However, in practice it is used nowhere in KJ or Cap'n Proto, and it recently hid a bug in my code where I had assigned a `Maybe<T>` from an `Own<T>`. We can introduce a `fromNullablePointer()` helper or something if that turns out to be useful.
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
Consider a capnp streaming type that wraps a kj::AsyncOutputStream. KJ streams require the caller to avoid doing multiple writes at once. Capnp streaming conveniently guarantees only one streaming call will be delivered at a time. This is great because it means the app does not have to do its own queuing of writes.

However, the app may want to use a CapabilityServerSet to unwrap the capability and get at the underlying KJ stream to optimize by writing to it directly. But before it can issue a direct write, it has to wait for all RPC writes to complete. These RPC writes were probably issued by the same caller, before it realized it was talking to a local cap. Unfortunately, it can't just wait for the calls it issued to complete, because streaming flow control may have made them appear to complete long ago, when they're actually still in the server's queue. How does the app make sure that the directly-issued writes don't overlap with RPC writes?

We can solve this by making CapabilityServerSet::getLocalServer() delay until all in-flight stream calls are complete before unwrapping. Now, the app can simply make sure that any requests it issued over RPC in the past completed before it starts issuing direct requests.
-
Kenton Varda authored
-
Kenton Varda authored
Also, push harder on the code generator such that `StreamResult` doesn't show up in generated code at all. So now we have `StreamingRequest<Params>` which is like `Request<Params, Results>`, and we have `StreamingCallContext<Params>` which is like `CallContext<Params, Results>`.
-
Kenton Varda authored
-
Kenton Varda authored
There are two things that every capability server must implement:

* When a streaming method is delivered, it blocks subsequent calls on the same capability. Although not strictly needed to achieve flow control, this simplifies the implementation of streaming servers -- many would otherwise need to implement such serialization manually.
* When a streaming method throws, all subsequent calls also throw the same exception. This is important because exceptions thrown by a streaming call might not actually be delivered to a client, since the client doesn't necessarily wait for the results before making the next call. Again, a streaming server could implement this manually, but almost all streaming servers will likely need it, and this makes things easier.
-
Kenton Varda authored
Note: Apparently, json.capnp had not been added to the bootstrap test, and the checked-in bootstrap had drifted from the source file.
-
Kenton Varda authored
This can be used on a method to indicate that it is used for "streaming", like:

    write @0 (bytes :Data) -> stream;

A "streaming" method is one which is expected to be called many times to transmit an ordered stream of items. For best throughput, it is often necessary to make multiple overlapping calls, so as not to wait for a round trip for every item. However, to avoid excess buffering, it may be necessary to apply backpressure by having the client limit the total number of overlapping calls.

This logic is difficult to get right at the application level, so making it a language feature gives us the opportunity to implement it in the RPC layer. We can, however, do it in a way that is backwards-compatible with implementations that don't support it. The above declaration is equivalent to:

    write @0 (bytes :Data) -> import "/capnp/stream.capnp".StreamResult;

RPC implementations that don't explicitly support streaming can thus instead leave it up to the application to handle.
-
Kenton Varda authored
I have this pattern:

    Maybe<Own<T>> foo;
    // ...
    foo = heap<T>();
    KJ_ASSERT_NONNULL(foo)->doSomething();

The assertion feels non-type-safe. Now you can do:

    auto& ref = foo.emplace(heap<T>());
    ref.doSomething();
-
Kenton Varda authored
`kj::Quantity<T>` already supported this. I copied from it.
-
Kenton Varda authored
This was failing to chain the promises, and so returning `Promise<Promise<T>>`. The idea here is you can create a PromiseAdapter which eventually produces another promise to chain to. The adapter is finished and should be destroyed at that point, but the final promise should then redirect to the new promise.
-
Kenton Varda authored
Apparently, Return messages with empty capability tables have been allocated one word too small all along, causing many Return messages to be split into two segments and allocate twice the memory they need. I never bothered to check whether this was happening...
-
Kenton Varda authored
-
- 17 Jun, 2019 7 commits
-
Kenton Varda authored
-
Kenton Varda authored
-
Kenton Varda authored
Gotta admit, Win32's modern synchronization interfaces (introduced with Vista) are beautiful.
-
Kenton Varda authored
Once upon a time, POSIX specified that these static initializers could only be used for global variables, but apparently essentially all implementations have always supported these initializers for local variables as well, and POSIX recently enshrined this as a requirement.
-
Kenton Varda authored
I originally left this unimplemented because pthreads annoyingly doesn't support condvar on top of rwlocks. It turns out there's a trick that can be used involving an extra mutex and some redundant locking operations -- the same trick that powers std::condition_variable_any. I used that here. Win32 support will come in a subsequent commit, before merging to master.
-
Kenton Varda authored
Calculate SO_VERSION in configure for compatibility with BSD make.
-
Kenton Varda authored
-
- 16 Jun, 2019 2 commits
-
Kenton Varda authored
BSD Make does not support `$(shell ...)`. It does support an alternative, `!=` assignments (which, confusingly, don't mean "not equal" but rather "evaluate the right in the shell before assignment"). GNU Make also supports `!=` as of version 4.0, released in 2013. Unfortunately, f***ing Apple ships GNU Make version 3.81, from 2006, with MacOS/XCode.
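For reference, the two assignment forms being contrasted (the echoed version string is a placeholder, not the project's actual rule):

```make
# BSD make and GNU make >= 4.0: run the right-hand side in a shell.
SO_VERSION != echo 2:0:0

# GNU make only; BSD make does not support $(shell ...).
# SO_VERSION := $(shell echo 2:0:0)
```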
-
Kenton Varda authored
Way back in 538a767e I added `RpcSystem::setFlowLimit()`, a blunt mechanism by which an RPC node can arrange to stop reading new messages from the connection when too many incoming calls are in-flight. This was needed to deal with buggy Sandstorm apps that would stream multi-gigabyte files by doing a zillion writes without waiting, which would then all be queued in the HTTP gateway, causing it to run out of memory.

In implementing that, I inadvertently caused the RPC system to do a tree walk on every call message it received, in order to sum up the message size. This is silly, because it's much cheaper to sum up the segment sizes. In fact, in the case of a malicious peer, the tree walk is potentially insufficient, because it doesn't count holes in the segments. The tree walk also means that any invalid pointers in the message cause an exception to be thrown even if that pointer is never accessed by the app, which isn't the usual behavior. I seem to recall this issue coming up in discussion once in the past, but I couldn't find the thread.

For the new streaming feature, we'll be paying attention to the size of outgoing messages. Again, here, it would be nice to compute this size by summing segments without doing a tree walk. So, this commit adds `sizeInWords()` methods that do this.
-