- 11 May, 2016 1 commit

- 29 Apr, 2016 1 commit

Jon Skeet authored
(And likewise ignore the prefix in unpack.) Fixes issue #1459.
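A minimal sketch of the behaviour described here, assuming the current `Any.Pack`/`Unpack` overloads; the custom type URL prefix below is a made-up example, not anything taken from the commit itself.

```csharp
using System;
using Google.Protobuf.WellKnownTypes;

// Pack a Duration under a non-default type URL prefix (illustrative prefix).
Any any = Any.Pack(Duration.FromTimeSpan(TimeSpan.FromSeconds(5)),
                   "types.example.com/");

// Unpacking compares only the type name after the final '/', so the custom
// prefix no longer gets in the way.
Duration duration = any.Unpack<Duration>();
Console.WriteLine(duration.Seconds);   // 5
```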
- 20 Apr, 2016 1 commit

Jon Skeet authored
JSON tests fail, as we're not using OriginalNameAttribute yet.
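For context, a rough illustration of what `OriginalNameAttribute` provides once it is wired in (a hand-written example, not generated output): each C# enum value records its original proto name, which the JSON code can then use instead of the PascalCased C# identifier.

```csharp
using Google.Protobuf.Reflection;

// Hypothetical enum, written by hand to mirror the shape of generated code.
public enum PhoneType
{
    [OriginalName("PHONE_TYPE_UNKNOWN")] Unknown = 0,
    [OriginalName("PHONE_TYPE_MOBILE")] Mobile = 1,
    [OriginalName("PHONE_TYPE_HOME")] Home = 2,
}
```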
- 21 Jan, 2016 1 commit

Jon Skeet authored

- 15 Jan, 2016 9 commits
- 13 Jan, 2016 2 commits
- 06 Jan, 2016 2 commits

Jon Skeet authored
This involves quoting timestamp/duration/field-mask values, even when they're not in fields. It's better for consistency. Fixes issue #1097.
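A small sketch of the resulting behaviour, assuming the public `JsonFormatter` API: a top-level Duration is formatted as a quoted JSON string, just as it would be inside a field.

```csharp
using System;
using Google.Protobuf;
using Google.Protobuf.WellKnownTypes;

var duration = Duration.FromTimeSpan(TimeSpan.FromSeconds(90));

// The output includes the quotes, matching the proto3 JSON mapping: "90s"
Console.WriteLine(JsonFormatter.Default.Format(duration));
```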
Jon Skeet authored
- Tighten up Infinity/NaN handling with respect to whitespace (and test casing)
- Validate that values are genuinely integers when they've been parsed from a JSON number (ignoring the fact that 1.0000000000000000001 == 1 as a double...)
- Allow exponents and decimal points in string representations
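An illustrative sketch of the stricter rules (not code from the commit), using the wrapper well-known types, whose JSON form is just the bare value:

```csharp
using Google.Protobuf;
using Google.Protobuf.WellKnownTypes;

var parser = JsonParser.Default;

// An integral JSON number is accepted for an integer field...
Int32Value ok = parser.Parse<Int32Value>("10");

// ...but a value with a fractional part is rejected rather than truncated:
// parser.Parse<Int32Value>("1.5");   // throws InvalidProtocolBufferException

// "NaN"/"Infinity" are accepted for floating-point fields only as exact,
// case-sensitive JSON strings with no stray whitespace.
DoubleValue nan = parser.Parse<DoubleValue>("\"NaN\"");
```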
- 02 Dec, 2015 1 commit

Jon Skeet authored
This required reworking the tokenizer to support a "replaying" mode, needed when the @type value comes after the data itself. The rework is nice in some ways (all the push-back and object-depth logic lives in one place) but is a little fragile around token push-back when the replaying tokenizer is in use. It'll be fine for the scenario we need it for, but we should be careful...
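A sketch of the case that motivates the replaying tokenizer, assuming the public `JsonParser`/`TypeRegistry` API (the registry plumbing here is illustrative, not part of this commit): `@type` may legally appear after the payload, so the parser only discovers the packed type once it has already consumed the value's tokens.

```csharp
using System;
using Google.Protobuf;
using Google.Protobuf.Reflection;
using Google.Protobuf.WellKnownTypes;

// A parser that knows about Duration so it can unpack the Any payload.
var registry = TypeRegistry.FromMessages(Duration.Descriptor);
var parser = new JsonParser(new JsonParser.Settings(100, registry));   // 100 = recursion limit

// Note that "@type" comes *after* the value here.
string json = "{ \"value\": \"1.5s\", " +
              "\"@type\": \"type.googleapis.com/google.protobuf.Duration\" }";

Any any = parser.Parse<Any>(json);
Console.WriteLine(any.Unpack<Duration>());   // 1.5 seconds
```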
- 09 Nov, 2015 1 commit

Jon Skeet authored

- 05 Nov, 2015 1 commit

Jon Skeet authored
This is only thrown directly by JsonTokenizer, but surfaces from JsonParser as well. I've added doc comments to hopefully make everything clear. The exception is actually thrown by the reader within JsonTokenizer, in anticipation of keeping track of the location within the document, but that change is not within this PR.
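A hedged sketch of how the exception surfaces through `JsonParser`: structurally invalid JSON produces `InvalidJsonException`, while well-formed JSON that doesn't fit the target message produces `InvalidProtocolBufferException`.

```csharp
using System;
using Google.Protobuf;
using Google.Protobuf.WellKnownTypes;

try
{
    // "nul" is not a valid JSON literal, so the tokenizer rejects the document.
    JsonParser.Default.Parse<Struct>("{ \"x\": nul }");
}
catch (InvalidJsonException e)
{
    Console.WriteLine($"Invalid JSON: {e.Message}");
}

try
{
    // Valid JSON, but not a valid Duration value.
    JsonParser.Default.Parse<Duration>("\"bogus\"");
}
catch (InvalidProtocolBufferException e)
{
    Console.WriteLine($"Invalid message: {e.Message}");
}
```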
- 04 Nov, 2015 1 commit

Jon Skeet authored
Fixes issue #932.
- 03 Nov, 2015 1 commit

Jon Skeet authored
This includes all the well-known types except Any. Some aspects are likely to require further work when the details of the JSON parsing expectations are hammered out in more detail. Some of these have "ignored" tests already.

Note that the choice *not* to use Json.NET was made for two reasons:
- Going from 0 dependencies to 1 dependency is a big hit, and there's not much benefit here
- Json.NET parses more leniently than we'd want; accommodating that would be nearly as much work as writing the tokenizer

This only really affects the JsonTokenizer, which could be replaced by Json.NET. The JsonParser code would be about the same length with Json.NET... but I wouldn't be as confident in it.
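A minimal sketch of the surface this adds, assuming the public `JsonParser` and `JsonFormatter` types: JSON is read by the hand-written tokenizer and parser, with no Json.NET dependency involved.

```csharp
using System;
using Google.Protobuf;
using Google.Protobuf.WellKnownTypes;

// Parse a JSON object into the well-known Struct type, then format it back.
var parsed = JsonParser.Default.Parse<Struct>("{ \"name\": \"proto3\", \"stable\": true }");
Console.WriteLine(JsonFormatter.Default.Format(parsed));
// => { "name": "proto3", "stable": true }
```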