textx-lang-json is a python implementation of the JSON (JavaScript Object Notation) data interchange format (RFC 8259) using the textX meta-language. Though it is not intended to replace the standard python JSON encoder and decoder Lib/json, which is much faster, it is a good alternative when you want to mix some JSON into your own textX grammar, or a good starting point should you want to develop your own JSON-like grammar.
The textxjson package provides a parser (basically a textX metamodel) that can build a textX model from a JSON file or string. This model can be visualized, for educational purposes, but more importantly it can be decoded to obtain the usual (as done by Lib/json) python representation of the JSON document.
textx-lang-json was created by Jean-François Baget in the Boreal team (Inria and LIRMM). It is part of the textx-lang-dlgpe project.
The following python code demonstrates how to build a parser, generate a model from a python string respecting the JSON standard, and decode the model to obtain the usual python representation of that string (in this case a dictionary). It also shows that parser.model_from_str(data).decode() returns the same python object as the standard json.loads(data).
from textx import metamodel_for_language
parser = metamodel_for_language('textxjson') # building the parser
data = '{"Hello": "World"}' # data is a python string respecting the JSON format
model = parser.model_from_str(data) # model is a JsonText object
textxresult = model.decode() # textxresult is a python dictionary
test1 = textxresult == {'Hello' : 'World'} # test1 is True
import json
jsonresult = json.loads(data) # using the standard python function to decode data
test2 = textxresult == jsonresult # test2 is True
Note that a parser can also read a JSON file:
model = parser.model_from_file("./path/to/data.json")
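Reading and decoding can also be chained in a single expression (a minimal sketch; the path is just a placeholder):
result = parser.model_from_file("./path/to/data.json").decode()  # result is the decoded python object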
TO DO when the package is registered on PyPI.
You can check that everything behaves correctly by running the test suite:
python -m unittest
..............
----------------------------------------------------------------------
Ran 14 tests in 11.538s
OK
Thanks to ArenaNet whose GW2 API provided some data used in our testbed.
The first thing to do is to build the JSON parser. This can be done with the following code.
from textx import metamodel_for_language
parser = metamodel_for_language('textxjson')
This parser can be used to obtain a graphical representation of the grammar json.tx. For more details on textX visualization, see https://textx.github.io/textX/visualization.html.
from textx.export import metamodel_export
metamodel_export(parser, 'json.dot')
This code generates a file json.dot that can be visualized with Graphviz, as shown below.
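If the Graphviz command-line tools are installed, the generated json.dot file can also be rendered to an image from python, for instance with the small sketch below (it assumes the dot executable is available on the PATH):

import subprocess
# Render json.dot to a PNG image with Graphviz (assumes 'dot' is on the PATH).
subprocess.run(['dot', '-Tpng', 'json.dot', '-o', 'json.png'], check=True)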
Most importantly, the parser can be used to generate a model from a python string encoding some JSON data, or directly from a JSON file.
The parsing below is demonstrated using a python string.
some_json = r'''
{
"name" : "textx-lang-json",
"authors" : [
"Jean-François Baget"
],
"year" : 2024,
"tested" : true
}
'''
model = parser.model_from_str(some_json)
If we have the following JSON file data.json...
{
"name" : "textx-lang-json",
"authors" : [
"Jean-François Baget"
],
"year" : 2024,
"tested" : true
}
... the parser can build the model directly from the file:
model = parser.model_from_file("data.json")
As with the parser, the model can be visualized.
from textx.export import model_export
model_export(model, 'model.dot')
This file model.dot can also be visualized with Graphviz.
The method decode() is called on a model to obtain the usual python representation of the JSON data. The test below illustrates the usefulness of this representation.
result = model.decode()
test = result['authors'][0] == 'Jean-François Baget' # test is True
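As in the first example, the decoded model should coincide with what the standard Lib/json module produces from the same data (a quick sanity check, assuming the some_json string defined above):

import json
# The textX-based decoding and the standard library should agree on the same JSON content.
assert model.decode() == json.loads(some_json)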