50 ways to crash our product

10 min read
Andre Bogus

I personally think that the software we build should make more people's lives better than it makes worse. So when users recently started filing bug reports, I read them with mixed feelings. On one hand, it meant that those particular users were actually using synth; on the other hand, it also meant that we were failing to give them a polished experience. So when it was my turn to write about what we do here, I set myself a challenge: find as many ways as I can to break our product.

I briefly considered fuzzing, but decided against it. It felt like cheating. Where's the challenge in that? Also, I wanted to be sure that the bugs would be reachable by ordinary (or perhaps at least exceptional) users, and I would count misleading error messages as bugs, something a fuzzer couldn't well decide. Finally, I'm convinced I learn more about code when actively trying to break it, and that's always a plus. So "let's get cracking!" I quoth, and off I went.

Overview

Before we start, I should give a short architectural overview of synth. Basically, the software has four parts:

  1. the DSL (which is implemented by a set of types in core/src/schema that get deserialized from JSON),
  2. a compiler that turns the schema into a generator graph (a directed acyclic graph of items that can generate values),
  3. export facilities (writing to the data sink) and
  4. import facilities (for creating a synth namespace from a database schema).

My plan was to look at each of these components and see if I could find inputs to break them in interesting ways. For example, leaving out certain elements or supplying incorrect JSON data (that would not trip up the deserialization, but lead to incorrect compilation later on) might be a fruitful target. Starting from an empty schema:

{    "type": "array",    "length": 1,    "content": {        "type": "object"    }}

I then called synth generate until finding a problem. First, I attempted to pass confusing command line arguments, but the clap-based parser handled all of them gracefully. Kudos!

#1 The first thing I tried was using a negative length:

{    "type": "array",    "length": -1,    "content": {        "type": "object"    }}

This was met with `BadRequest: could not convert from value 'i64(-1)': Type { expected: "U32", got: "i64(-1)" }`. Not exactly a crash, but the error message could be friendlier and have more context. I should note that this is a very unspecialized error variant within the generator framework. It would make sense to validate this before compiling the generator and emit a more user-friendly error.

Bonus: If we make the length `"optional": true` (which could happen because of a copy & paste error), then depending on the seed, we will get another BadRequest error. The evil thing is that this will only happen with about half of the seeds, so you may or may not be lucky here (or may even become unlucky if another version slightly changes the seed handling).
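For illustration, the mistake might look something like this (a sketch; I'm assuming the constant variant of the number generator here, but any generator under length marked optional would do):

```json
{
    "type": "array",
    "length": {
        "type": "number",
        "subtype": "u32",
        "constant": 1,
        "optional": true
    },
    "content": {
        "type": "object"
    }
}
```

With `"optional": true`, the length generator sometimes yields null, and whether it does depends on the seed.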

#2 Changing the length field to `{}` makes for another befuddling error:

```console
Error: Unable to open the namespace
Caused by:
    0: at file 2_unitlength/unitlength.json
    1: Failed to parse collection
    2: missing field `type` at line 8 column 1
```

The line number is wrong here; the length should be on line six, in the content object, not on line eight.

#3 It wasn't that long ago that we gained the ability to use literal numbers for number constants (for example for the length); the old way would use a number generator. A recent improvement lets us generate arbitrary numbers; however, this is likely not a good idea for a length field:

{    "type": "array",    "length": {        "type": "number",        "subtype": "u32"    },    "content": {        "type": "object"    }}

This might finish very quickly, but far more likely it will run for a long time, exhausting memory in the process, because this actually generates a whole lot of empty objects (which are internally BTreeMaps, so an empty one comes in at 24 bytes) – up to 4,294,967,295 of them, which would fill 96 GiB! While this is not an error per se, we should probably at least warn on this mistake. We could also think about streaming the result instead of storing it all in memory before writing it out, at least as long as there are no references that need to be stored; this would also allow us to issue output more quickly.
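If you do want a random length, a safer schema constrains it explicitly. Here is a minimal sketch, assuming the range variant of synth's number generator:

```json
{
    "type": "array",
    "length": {
        "type": "number",
        "subtype": "u32",
        "range": {
            "low": 0,
            "high": 10,
            "step": 1
        }
    },
    "content": {
        "type": "object"
    }
}
```

This keeps the worst case to a handful of empty objects instead of four billion.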

#4 Let's now add a string:

``json synth[expect = "string` generator is missing a subtype"] { "type": "array", "length": { "type": "number", "subtype": "u32" }, "content": { "type": "object", "s": { "type": "string" } } }


Oops, I forgot to specify which kind of string. But I wouldn't know that from the error:
```console
Error: Unable to open the namespace
Caused by:
    0: at file 4_unknownstring/unknownstring.json
    1: Failed to parse collection
    2: invalid value: map, expected map with a single key at line 10 column 1
```

#5 Ok, let's make that a format then. However, I forgot that the format must contain a map with the keys "format" and "arguments", and put them into the s map directly:

``json synth[expect = "argumentsis expected to be a field offormat`"] { "type": "array", "length": { "type": "number", "subtype": "u32" }, "content": { "type": "object", "s": { "type": "string", "format": "say my {name}", "arguments": { "name": "name" } } } }


```console
Error: Unable to open the namespace
Caused by:
    0: at file 5_misformat/misformat.json
    1: Failed to parse collection
    2: invalid value: map, expected map with a single key at line 14 column 1
```
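For reference, here is a sketch of the nesting the parser wants. Note that the argument values are themselves generators; the pattern string for name is my assumption, just to have something concrete:

```json
{
    "type": "string",
    "format": {
        "format": "say my {name}",
        "arguments": {
            "name": {
                "type": "string",
                "pattern": "synth"
            }
        }
    }
}
```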

#6 Ok, then let's try to use a faker. Unfortunately, I haven't really read the docs, so I'll just try the first thing that comes to mind:

``json synth[expect = "fakeris expected to have agenerator` field. Try '"faker": {"generator": "name"}'"] { "type": "array", "length": { "type": "number", "subtype": "u32" }, "content": { "type": "object", "name": { "type": "string", "faker": "name" } } }


This gets us:
```console
Error: Unable to open the namespace
Caused by:
    0: at file empty/empty.json
    1: Failed to parse collection
    2: invalid type: string "name", expected struct FakerContent at line 11 column 1
```

One could say that the error is not exactly misleading, but not exactly helpful either. As I've tried a number of things already, I'll take it. Once I got the syntax right (`"faker": { "generator": "name" }`), the rest of the faker stuff seems to be rock solid.
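For completeness, a working version might look like this (I've pinned the length to 1 for good measure):

```json
{
    "type": "array",
    "length": 1,
    "content": {
        "type": "object",
        "name": {
            "type": "string",
            "faker": {
                "generator": "name"
            }
        }
    }
}
```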

#7 Trying to mess with date_time, I mistakenly specified a date format for a naive_time value.

``json synth[expect = "unknown variant date_time, expected one of pattern, faker, categorical, serialized, uuid, truncated, sliced, format, constant`"] { "type": "array", "length": 1, "content": { "type": "object", "date": { "type": "string", "date_time": { "format": "%Y-%m-%d", "subtype": "naive_time", "begin": "1999-01-01", "end": "2199-31-12" } } } } }


This gets me the following error, which is again misplaced at the end of the input, and not exactly understandable. The same happens if I select a date format of `"%H"` and bounds of `0` to `23`.
```console
Error: Unable to open the namespace
Caused by:
    0: at file 7_datetime/datetime.json
    1: Failed to parse collection
    2: input is not enough for unique date and time at line 16 column 1
```

I believe that since the time is not constrained in any way by the input, we should just issue a warning and generate an unconstrained time instead, so the user will at least get some data. Interestingly, seconds seem to be optional, so %H:%M works.
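So something like the following should work (a sketch; the begin and end bounds are my assumption):

```json
{
    "type": "array",
    "length": 1,
    "content": {
        "type": "object",
        "time": {
            "type": "string",
            "date_time": {
                "format": "%H:%M",
                "subtype": "naive_time",
                "begin": "00:00",
                "end": "23:59"
            }
        }
    }
}
```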

#8 Moreover, if I use naive_date instead, but make the minimum 0-0-0, we get the technically correct but still mis-spanned:

```console
Error: Unable to open the namespace
Caused by:
    0: at file 8_endofdays/endofdays.json
    1: Failed to parse collection
    2: input is out of range at line 16 column 1
```

For the record, the error is on line 11.
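The schema I used looked roughly like this (the end bound is an assumption on my part); laid out this way, the offending begin value does indeed sit on line 11:

```json
{
    "type": "array",
    "length": 1,
    "content": {
        "type": "object",
        "date": {
            "type": "string",
            "date_time": {
                "format": "%Y-%m-%d",
                "subtype": "naive_date",
                "begin": "0-0-0",
                "end": "2199-12-31"
            }
        }
    }
}
```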

#9 Now we let date_time have some rest and go on to categorical. Having just one variant with a weight of 0 will actually trigger an unreachable error:

{    "type": "array",    "length": 1,    "content": {        "type": "object",        "cat": {            "type": "string",            "categorical": {                "empty": 0            }        }    }}

Well, the code thinks we should not be able to reach it. Surprise!

```console
thread 'main' panicked at 'internal error: entered unreachable code', /home/andre/projects/synth/core/src/schema/content/categorical.rs:82:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

For the record, this is the first internal error I was able to uncover so far. Given this success with categorical strings, it was natural to look if one_of could be similarly broken, but the generator just chose the one variant despite its 0.0 weight.
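For the curious, the one_of experiment looked roughly like this (a sketch; the field name and pattern are placeholders of mine):

```json
{
    "type": "array",
    "length": 1,
    "content": {
        "type": "object",
        "maybe": {
            "type": "one_of",
            "variants": [
                {
                    "weight": 0.0,
                    "type": "string",
                    "pattern": "only variant"
                }
            ]
        }
    }
}
```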

#10 Unsupported types on import

Databases can sometimes contain strange things, and so far the support is in beta, so it was expected that I would find types for which we currently don't implement import. This includes JSON for mysql and postgres, the mysql spatial datatypes as well as postgres' geometric types, user-defined enumerations, postgres' network address types, postgres arrays (soon only nested ones), etc.

The way to reproduce this is to create a table with a field of the given type, e.g. here with mysql:

```sql
CREATE TABLE IF NOT EXISTS json (
    data JSON
);
DELETE FROM json;
INSERT INTO json (data) VALUES ('{ "a": ["b", 42] }');
```

Now call `synth import jsonnamespace --from mysql://<user>:<password>@<host>:<port>/<database>` to get:

```console
Error: We haven't implemented a converter for json
```

Since the error is mostly the same for all types, and was somewhat expected, I won't claim a point for each type here.

#11 Exporting an array of nulls into postgres is not correctly implemented, so

{    "type": "array",    "length": 5,    "content": {        "type": "object",        "s": {            "type": "array",            "length": 1,            "content": {                "type": "null"            }        }    }}

will give us a wrong data type error from postgres. The problem here is that we lose the type information from the generator, and just emit null values which do not allow us to construct the right types for encoding into a postgres buffer. The solution would be to re-architect the whole system to reinstate that type information, possibly side-stepping sqlx in the process. Note that this is not equal to issue #171, which relates to nested arrays.

#12 Going back to #3, I thought about other ways to make the code over-consume resources. But time and memory are not the only things one can consume; in fact, it's easy enough to exhaust another: the stack. The following bash script:

X='{ "type": "null" }'
for i in $(seq 0 4096)do    X="{ \"type\": \"string\", \"format\": { \"format\": \"{x}\", \"arguments\": { \"x\": $X } } }"done
echo $X > 12_stack_depth/stack_depth.jsonsynth gen --size 1 12_stack_depth

will generate the following error:

```console
Error: Unable to open the namespace
Caused by:
    0: at file 12_stack_depth/stack_depth.json
    1: Failed to parse collection
    2: recursion limit exceeded at line 1 column 2929
```

So I give up. I've found one way to crash our product with an unintended error, reproduced some known limitations, and outlined a number of error messages we can improve on. I fell far short of my original goal, which either means I'm really bad at finding errors, or our code is incredibly reliable. Given the track record of software written in Rust, I'd like to think it's the latter, but I'll leave the judgement to you.

Anyway, this was a fun exercise, and I looked at many more things that turned out to just work well, so that's a good thing™. With all the test automation we have today, it's easy to forget that the manual approach also has its upsides. So feel free to try and break your (or our) code!