Automatic type declaration for JSON input data.
Generates datatype declarations with Aeson's 'Data.Aeson.FromJSON' instances from a set of example .json files.
To get started, install the package and run the json-autotype binary on an input .json file.
That will generate a new Aeson-based JSON parser.
$ json-autotype input.json -o JSONTypes.hs
Feel free to tweak the declarations by changing the types of the fields: any field type that is an instance of 'Data.Aeson.FromJSON' should work.
You may immediately test the parser by calling it as a script:
$ runghc JSONTypes.hs input.json
One can now use multiple input files to generate a better type description.
Now with Elm code generation support!
(If you want your favourite programming language supported too —
name your price and mail the author.)
See introduction on https://github.com/mgajda/json-autotype for details.
json-autotype
Takes JSON input files, and automatically generates Haskell type declarations.
Parser and printer instances are derived using Aeson.
The program uses union type unification to trim output declarations. Types with the same attribute tag and a similar attribute set are automatically unified by matching their attribute sets. (This can optionally be turned off, or a set of unified types may be given explicitly.) :|: alternatives (similar to Either) are used to ensure that all JSON inputs seen in the example input files are handled correctly.
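The alternative operator can be pictured as an Either-like binary sum. Here is a minimal, hypothetical sketch; the real operator ships with json-autotype (in Data.Aeson.AutoType.Alternative) and may differ in detail:

```haskell
{-# LANGUAGE TypeOperators #-}

-- Hypothetical minimal version of the Either-like alternative type;
-- the real definition ships with json-autotype and may differ.
data a :|: b = AltLeft  a
             | AltRight b
  deriving (Show, Eq)

infixr 5 :|:

-- Collapse an alternative with one handler per branch,
-- mirroring 'either' from the Prelude.
alt :: (a -> c) -> (b -> c) -> (a :|: b) -> c
alt f _ (AltLeft  a) = f a
alt _ g (AltRight b) = g b
```

Because the operator is right-associative, a chain such as `Bool :|: Int :|: Text` nests as `Bool :|: (Int :|: Text)`.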
I should probably write a short paper to explain the methodology.
Details on official releases are on Hackage. We currently support code generation to Haskell and Elm.
Please volunteer help or provide financial support if you want your favourite language supported too! Expressions of interest in a particular feature may be filed as GitHub issues.
USAGE:
After installing with cabal install json-autotype, you might generate stub code for the parser:
json-autotype input1.json ... inputN.json -o MyFormat.hs
Then you might test the parser by running it on an input file:
runghc MyFormat.hs input.json
At this point you may see the data structures generated automatically for you. The more input files you give to the inference engine json-autotype, the more precise the type description will be.
The algorithm will also suggest which types look similar, based on their sets of attribute names, and unify them unless specifically instructed otherwise.
The goal of this program is to make it easy for users of big JSON APIs to generate type declarations from example data.
Occasionally you might find a valid JSON input for which json-autotype doesn't generate a correct parser. In that case you may either edit the resulting file by hand, or send the input to the author as a test case for a future release.
Patches and suggestions are welcome.
You can run with Docker:
docker run -it migamake/json-autotype
EXAMPLES:
The most simple example:
{
"colorsArray":[{
"colorName":"red",
"hexValue":"#f00"
},
{
"colorName":"green",
"hexValue":"#0f0"
},
{
"colorName":"blue",
"hexValue":"#00f"
}
]
}
It will produce a module with the following datatypes and TH calls for deriving the JSON parsers:
data ColorsArray = ColorsArray {
colorsArrayHexValue :: Text,
colorsArrayColorName :: Text
} deriving (Show,Eq)
data TopLevel = TopLevel {
topLevelColorsArray :: ColorsArray
} deriving (Show,Eq)
Note that attribute names match the names of JSON dictionary keys.
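For reference, the parser that the TH call derives for ColorsArray behaves roughly like this hand-written sketch (an approximation; the generated code may differ in detail):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON (..), withObject, (.:), eitherDecode)
import Data.Text  (Text)

data ColorsArray = ColorsArray
  { colorsArrayHexValue  :: Text
  , colorsArrayColorName :: Text
  } deriving (Show, Eq)

-- Approximation of the derived parser: look up each JSON key
-- and build the record field by field.
instance FromJSON ColorsArray where
  parseJSON = withObject "ColorsArray" $ \o ->
    ColorsArray <$> o .: "hexValue"
                <*> o .: "colorName"
```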
Another example with ambiguous types:
{
"parameter":[{
"parameterName":"apiVersion",
"parameterValue":1
},
{
"parameterName":"failOnWarnings",
"parameterValue":false
},
{
"parameterName":"caller",
"parameterValue":"site API"
}]
}
It will produce a quite intuitive result (plus extra parentheses and class derivations):
data Parameter = Parameter {
parameterParameterValue :: Bool :|: Int :|: Text,
parameterParameterName :: Text
}
data TopLevel = TopLevel {
topLevelParameter :: Parameter
}
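Such an alternative field can be consumed by pattern matching on each branch. The sketch below re-declares a minimal stand-in for the library's :|: type to stay self-contained, and substitutes String for Text to avoid a dependency; the real operator's details may differ:

```haskell
{-# LANGUAGE TypeOperators #-}

-- Minimal stand-in for the library's ':|:' type, only to show how a
-- field typed 'Bool :|: Int :|: Text' can be consumed.
data a :|: b = AltLeft  a
             | AltRight b
  deriving (Show, Eq)

infixr 5 :|:

-- ':|:' is right-associative, so the three-way field nests as
-- Bool :|: (Int :|: String); pattern match branch by branch.
renderValue :: Bool :|: Int :|: String -> String
renderValue (AltLeft b)             = "bool: " ++ show b
renderValue (AltRight (AltLeft i))  = "int: "  ++ show i
renderValue (AltRight (AltRight s)) = "text: " ++ s
```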
Real-world use case examples are provided in the package source repository.
Methodology:
- JSON-Autotype uses its own union type system to derive types from JSON documents as the first step.
- Then it finds all records that share at least 90% of their key names, and suggests them as similar enough to be treated as instances of the same type. (Note that this is optional, and can be tuned manually.)
- The last step derives unique-ish type names; we currently do this by concatenating the name of the container and the name of the key. (Please open a PR if you want something fancier; the initial version used just the key name, when it was unique.)
- Finally, it generates Haskell or Elm code for the types.
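The similarity step can be pictured with a small sketch. The 90% figure comes from the description above; measuring overlap as a Jaccard index over key sets is an assumption of this sketch, not necessarily the exact criterion the tool uses:

```haskell
import qualified Data.Set as Set
import           Data.Set (Set)

-- Hypothetical unification-candidate check: treat two records as
-- similar enough when their key sets overlap by at least 90%
-- (measured here as a Jaccard index; the real criterion may differ).
similarEnough :: Set String -> Set String -> Bool
similarEnough a b =
    overlap * 10 >= total * 9          -- overlap / total >= 0.9
  where
    overlap = Set.size (Set.intersection a b)
    total   = Set.size (Set.union a b)
```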
The combination of a robust union type system and unification heuristics makes this system extremely reliable. The main test is QuickCheck-based generation of random JSON documents, checking that they are all correctly parsed by the resulting parser.
More details are described in the Haskell.SG meetup presentation.
Other approaches:
There is a TypeScript type provider, and a PLDI 2016 paper on solving this problem using preferred type shapes instead of union types. One can think of it as an alternative theory that gives very similar results, with a more complicated exposition. It does not tackle the problem of tagged records, and it does not attempt to guess unification candidates in order to reduce type complexity.
There was a json-sampler that allowed making simpler data structures from JSON examples, but it does not seem to perform unification, nor is it suitable for big APIs.
The PADS project is another attempt to automatically infer types for arbitrary data formats (not just JSON). It mixes type declarations with parsing/printing information in order to maintain a consistent view of both. It does not handle automatic type inference, though.
JSON Schema generator uses .NET types to generate a JSON Schema instead (working in the opposite direction). Similar schema generation is used here.
Microsoft Developer Network advocates use of Data Contracts instead to constrain possible input data.
QuickType uses a Markov chain heuristic instead of a theory.