\[ \newcommand{\assert}[1]{\texttt{assert }#1} \newcommand{\array}[1]{[ #1 ]} \newcommand{\assign}[2]{ #1\texttt{ = }#2} \newcommand{\binary}[3]{ #1\texttt{ }#2\texttt{ }#3} \newcommand{\error}[1]{\texttt{error }#1} \newcommand{\false}{\texttt{false}} \newcommand{\function}[2]{\texttt{function}\texttt{(} #1\texttt{)}{\ #2}} \newcommand{\if}[3]{\texttt{if }#1\texttt{ then }#2\texttt{ else }#3} \newcommand{\ifnoelse}[2]{\texttt{if }#1\texttt{ then }#2} \newcommand{\import}[1]{\texttt{import }#1} \newcommand{\importstr}[1]{\texttt{importstr }#1} \newcommand{\importbin}[1]{\texttt{importbin }#1} \newcommand{\index}[2]{#1\texttt{[}#2\texttt{]}} \newcommand{\local}[2]{\texttt{local }#1\texttt{ ; }#2} \newcommand{\null}{\texttt{null}} \newcommand{\object}[1]{\{ #1 \}} \newcommand{\objlocal}[1]{\texttt{local }#1} \newcommand{\ocomp}[4]{\object{[#1]:#2\texttt{ for }#3\texttt{ in }#4}} \newcommand{\self}{\texttt{self}} \newcommand{\super}{\texttt{super}} \newcommand{\true}{\texttt{true}} \newcommand{\unary}[2]{\texttt{ }#1\texttt{ }#2} \newcommand{\rule}[3]{\frac{#2}{#3}\textrm{(#1)}} \]

Jsonnet Specification

This page is the authority on what Jsonnet programs should do. It defines Jsonnet lexing and syntax. It describes which programs should be rejected statically (i.e. before execution). Finally, it specifies the manner in which the program is executed, i.e. the JSON that is output, or the runtime error if there is one.

The specification is intended to be terse and precise, and to illuminate all the subtleties and edge cases, in order to allow fully-compatible language reimplementations and tools. The specification employs some standard theoretical computer science techniques, namely type systems and big step operational semantics. If you just want to write Jsonnet code (not build a Jsonnet interpreter or tool), you don't need to read this. You should read the tutorial and reference instead.

Lexing

A Jsonnet program is UTF-8 encoded text. The file is a sequence of tokens, separated by optional whitespace and comments. Whitespace consists of space, tab, newline and carriage return. Tokens are lexed greedily. Comments are either single line comments, beginning with a # or a //, or block comments beginning with /* and terminating at the first */ encountered within the comment.

  • id: Matched by [_a-zA-Z][_a-zA-Z0-9]*.

    Some identifiers are reserved as keywords, thus are not in the set id: assert else error false for function if import importstr importbin in local null tailstrict then self super true.

  • number: As defined by JSON but without the leading minus.

  • string: Which can have five quoting forms:

    • Double-quoted, beginning with " and ending with the first subsequent non-quoted "
    • Single-quoted, beginning with ' and ending with the first subsequent non-quoted '
    • Double-quoted verbatim, beginning with @" and ending with the first subsequent non-quoted "
    • Single-quoted verbatim, beginning with @' and ending with the first subsequent non-quoted '
    • Text block, beginning with |||, followed by optional whitespace and a new-line. The next non-empty line must be prefixed with some non-zero length whitespace W. The block ends at the first subsequent line that is non-empty and does not begin with W, and it is an error if this line does not contain some optional whitespace followed by |||. The content of the string is the concatenation of all the lines between the two |||, which either begin with W (in which case that prefix is stripped) or they are empty lines (in which case they remain as empty lines). The line ending style in the file is preserved in the string. This form cannot be used in import statements.

    Double- and single-quoted strings are allowed to span multiple lines, in which case whatever dos/unix end-of-line character your editor inserts will be put in the string. They both understand the following escape sequences: \", \', \\, \/, \b, \f, \n, \r and \t, which have their standard meanings, as well as \uXXXX for hexadecimal unicode escapes.

    Verbatim strings eschew all of the normal string escaping, including hexadecimal unicode escapes. Every character in a verbatim string is processed literally, with the exception of doubled end-quotes. Within a verbatim single-quoted string, '' is processed as ', and a verbatim double-quoted string, "" is processed as ".

    In the rest of this specification, the string is assumed to be canonicalized into a sequence of unicode codepoints, with no record of the original quoting form and with all escape sequences already processed.

  • symbol: The following single-character symbols:

    {}[],.();

  • operator: A sequence of at least one of the following single-character symbols: !$:~+-&|^=<>*/%.

    Additionally it is subject to the following rules, which may cause the lexing to terminate with a shorter token:

    • The sequence // is not allowed in an operator.
    • The sequence /* is not allowed in an operator.
    • The sequence ||| is not allowed in an operator.
    • If the sequence has more than one character, it is not allowed to end in any of +, -, ~, !, $.
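The greedy operator lexing and its four termination rules can be sketched as follows. This is an illustrative sketch, not the reference lexer; it assumes the operator character set and termination rules exactly as stated above, and it assumes a bare `|||` (which would shrink the token to nothing) is handled by the caller as a text-block opener.

```python
# Sketch of greedy operator lexing with early termination, per the rules above.
OP_CHARS = set("!$:~+-&|^=<>*/%")

def lex_operator(src, i):
    """Greedily lex an operator starting at src[i]; return (token, next_index)."""
    j = i
    while j < len(src) and src[j] in OP_CHARS:
        run = src[i:j + 1]
        if run.endswith("//") or run.endswith("/*"):
            j -= 1  # stop before the comment opener
            break
        if run.endswith("|||"):
            j -= 2  # stop before the text-block opener
            break
        j += 1
    # A multi-character operator may not end in any of + - ~ ! $.
    while j - i > 1 and src[j - 1] in "+-~!$":
        j -= 1
    return src[i:j], j
```

For example, `=+` lexes as `=` followed by a separate `+` token, and `a&&|||` yields `&&` before the text-block delimiter.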

Abstract Syntax

The notation used here is as follows: { } denotes zero or more repetitions of a sequence of tokens, and [ ] represents an optional sequence of tokens. This is not to be confused with { } and [ ] which represent tokens in Jsonnet itself.

Note that although the lexer will generate tokens for a wide range of operators, only a subset is currently parseable, the rest being reserved for possible future use.

expr ∈ Expr ::= null | true | false | self | $ | string | number
| { objinside }
| [ [ expr { , expr } [ , ] ] ]
| [ expr [ , ] forspec compspec ]
| expr . id
| expr [ [ expr ] [ : [ expr ] [ : [ expr ] ] ] ]
| super . id
| super [ expr ]
| expr ( [ args ] )
| id
| local bind { , bind } ; expr
| if expr then expr [ else expr ]
| expr binaryop expr
| unaryop expr
| expr { objinside }
| function ( [ params ] ) expr
| assert ; expr
| import string
| importstr string
| importbin string
| error expr
| expr in super

objinside ::= member { , member } [ , ]
| { objlocal , } [ expr ] : expr { , objlocal } [ , ] forspec compspec
member ::= objlocal | assert | field
field ∈ Field ::= fieldname [ + ] h expr
| fieldname ( [ params ] ) h expr
h ∈ Hidden ::= : | :: | :::
objlocal ::= local bind
compspec ∈ CompSpec ::= { forspec | ifspec }
forspec ::= for id in expr
ifspec ::= if expr
fieldname ::= id | string | [ expr ]

assert ::= assert expr [ : expr ]
bind ∈ Bind ::= id = expr
| id ( [ params ] ) = expr

args ::= expr { , expr } { , id = expr } [ , ]
| id = expr { , id = expr } [ , ]

params ::= param { , param } [ , ]
param ::= id [ = expr ]

binaryop ::= * | / | % | + | - | << | >> | < | <= | > | >= | == | != | in | & | ^ | | | && | ||
unaryop ::= - | + | ! | ~

Associativity and Operator Precedence

The abstract syntax by itself cannot unambiguously parse a sequence of tokens. Ambiguities are resolved according to the following rules, which can also be overridden by adding parentheses ( ).

Everything is left associative. In the case of assert, error, function, if, import, importstr, importbin, and local, ambiguity is resolved by consuming as many tokens as possible on the right hand side. For example the parentheses are redundant in local x = 1; (x + x). All remaining ambiguities are resolved according to the following decreasing order of precedence:

  1. e(...) e[...] e.f   (application and indexing)
  2. + - ! ~   (the unary operators)
  3. * / %   (these, and the remainder below, are binary operators)
  4. + -
  5. << >>
  6. < > <= >= in
  7. == !=
  8. &
  9. ^
  10. |
  11. &&
  12. ||
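The precedence table above can be realized with standard precedence climbing. The sketch below (not part of the specification) parses an already-lexed stream of binary operators into nested tuples, using levels 3-12 from the table; lower numbers bind tighter, and left associativity follows from only allowing the right operand to bind strictly tighter.

```python
# Precedence table for binary operators, mirroring levels 3-12 above.
PREC = {
    "*": 3, "/": 3, "%": 3, "+": 4, "-": 4, "<<": 5, ">>": 5,
    "<": 6, ">": 6, "<=": 6, ">=": 6, "in": 6, "==": 7, "!=": 7,
    "&": 8, "^": 9, "|": 10, "&&": 11, "||": 12,
}

def parse(tokens):
    """Parse [operand, op, operand, ...] into nested (op, lhs, rhs) tuples."""
    def expr(pos, max_prec):
        lhs, pos = tokens[pos], pos + 1
        while pos < len(tokens) and PREC[tokens[pos]] <= max_prec:
            op = tokens[pos]
            # Left associativity: the right operand may only bind tighter.
            rhs, pos = expr(pos + 1, PREC[op] - 1)
            lhs = (op, lhs, rhs)
        return lhs, pos
    return expr(0, 12)[0]
```

For example, `1 + 2 * 3` groups as `1 + (2 * 3)` and `a == b && c` as `(a == b) && c`.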

Core Language Subset

To make the specification of Jsonnet as simple as possible, many of the language features are represented as syntax sugar. Below is defined the core syntax and the desugaring function from the abstract syntax to the core syntax. Both the static checking rules and the operational semantics are defined at the level of the core language, so it is possible to desugar immediately after parsing.

Core Syntax

The core language has the following simplifications:

  • The set of identifiers now includes $, which is no longer a special keyword.
  • The following binary operators are removed: != == % in
  • Array slices [::] are removed.
  • Array and object comprehensions are replaced with a simple object comprehension construct.
  • Expression-level asserts are removed.
  • Object-level assert messages are removed.
  • Object-level assert values are ignored, but their evaluation may still raise an error.
  • Object methods and local functions are removed.
  • Object-level locals are removed.
  • Object field name definitions can only be expressions.
  • The +:, +::, and +::: sugars are removed.
  • Field lookup is only possible through e[e].
  • All conditionals must have an else branch.
  • The keyword super can exist on its own.

Commas are no longer part of this abstract syntax, but we may still write them in our notation to make the presentation more clear.

Also removed in the core language are import, importstr, and importbin. The semantics of these constructs is that they are replaced with either the contents of the file, or an error construct if importing failed (e.g. due to I/O errors). In the first case, the file is parsed, desugared, and subject to static checking before it can be substituted. In the case of importstr, the file is substituted in the form of a string, so it merely needs to contain valid UTF-8. For importbin, the file is substituted as an array of integer numbers between 0 and 255 inclusive.

A given Jsonnet file can be recursively imported via import. Thus, the implementation loads files lazily (i.e. during execution) as opposed to via static desugaring. The imported Jsonnet file is parsed and statically checked in isolation. Therefore, the behavior of the import is not affected by the environment into which it is imported. The files are cached by filename, so that even if the file changes on disk during Jsonnet execution, referential transparency is maintained.
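The import-caching behavior described above can be sketched as a filename-keyed cache. This is an illustrative sketch, not the reference implementation; `Importer` and `read_file` are hypothetical names, and parsing, desugaring, and static checking of the loaded file are elided.

```python
# Sketch: imports are cached by filename, so a file changing on disk
# mid-execution cannot make two imports of the same path disagree.
class Importer:
    def __init__(self, read_file):
        self.read_file = read_file  # e.g. lambda path: open(path).read()
        self.cache = {}

    def load(self, path):
        if path not in self.cache:
            # Parsing, desugaring, and static checking would happen here,
            # in isolation from the importing environment.
            self.cache[path] = self.read_file(path)
        return self.cache[path]
```

This is what preserves referential transparency: the second `load` of a path returns the result of the first, even if the underlying file has changed.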

e ∈ Core ::= null | true | false | self | super | string | number
| { { assert e } { [ e ] h e } }
| { [ e ] : e for id in e }
| [ { e } ]
| e [ e ]
| e ( { e } { id = e } )
| id
| local id = e { id = e } ; e
| if e then e else e
| e binaryop e
| unaryop e
| function ( { id = e } ) e
| error e

Desugaring

Desugaring removes constructs that are not in the core language by replacing them with constructs that are. It is defined via the following functions, which proceed by syntax-directed recursion. If a function is not defined on a construct then it simply recurses into the sub-expressions of that construct. Note that we import the standard library at the top of every file, and some of the desugarings call functions defined in the standard library. Their behavior is specified by their implementation. However, not all standard library functions are written in Jsonnet: the ones that are built into the interpreter (e.g. reflection) are given special operational semantics rules alongside the rest of the core language constructs.

desugar: Expr → Core. This desugars a Jsonnet file. Let \(e_{std}\) be the parsed content of std.jsonnet.

\[ desugar(e) = desugar_{expr}(\local{\texttt{std} = e_{std}}{e}, false) \]

desugarexpr: (Expr × Boolean) → Core: This desugars an expression. The second parameter of the function tracks whether we are within an object.

\[ desugar_{expr}(\{ \objlocal{bind_1} \ldots \objlocal{bind_n}, assert_1 \ldots assert_m, field_1 \ldots field_p \}, b) = \\ \hspace{10mm} \textrm{let }binds = \left\{\begin{array}{ll} bind_1 \ldots bind_n & \textrm{if }b \\ bind_1 \ldots bind_n, \objlocal{$ = \self} & \textrm{otherwise} \\ \end{array}\right. \\ \hspace{10mm} \textrm{let } obj = \{ \\ \hspace{20mm} desugar_{assert}(assert_1, binds) \ldots desugar_{assert}(assert_m, binds), \\ \hspace{20mm} desugar_{field}(field_1, binds, b) \ldots desugar_{field}(field_p, binds, b) \\ \hspace{10mm} \} \\ \hspace{10mm} \textrm{in } \left\{\begin{array}{ll} \local{\texttt{\$outerself} = \self, \texttt{\$outersuper} = \super} {obj} & \textrm{if }b \\ obj & \textrm{otherwise} \\ \end{array}\right. \\ \]
\[ desugar_{expr}(\object{ \objlocal{bind_1} \ldots \objlocal{bind_m}, [e_f] : e_{body}, \objlocal{bind_{m+1}} \ldots \objlocal{bind_n}\ forspec\ compspec }, b) = \\ \hspace{10mm} \textrm{Let } arr \textrm{ be fresh and } x_1 \ldots x_n \textrm{ be the sequence of variables defined in }forspec\ compspec \\ \hspace{10mm} \object{ \\ \hspace{20mm} [desugar_{expr}(\local{x_1=arr[0] \ldots x_n=arr[n-1]}{e_f}, b)]: \\ \hspace{30mm} desugar_{expr}( \local{x_1=arr[0] \ldots x_n=arr[n-1]}{ \local{bind_1 \ldots bind_n}{e_{body}} }, true) \\ \hspace{20mm} \textrm{ for } arr \textrm{ in } desugar_{expr}(\array{ \array{x_1 \ldots x_n}\ forspec\ compspec }, b) \\ \hspace{10mm}} \]
\[ desugar_{expr}([e\ forspec\ compspec], b) = desugar_{arrcomp}(e, forspec\ compspec, b) \]
\[ desugar_{expr}(\local {bind_1 \ldots bind_n} e, b) = \local {desugar_{bind}(bind_1, b) \ldots desugar_{bind}(bind_n, b)} {desugar_{expr}(e, b)} \]
\[ desugar_{expr}(e \{ objinside \}, b) = desugar_{expr}(e + \{ objinside \}, b) \]
\[ desugar_{expr}(\function{p_1 \ldots p_n}e, b) = \function{desugar_{param}(p_1, b)\ldots desugar_{param}(p_n, b)} desugar_{expr}(e, b) \]
\[ desugar_{expr}(\assert e ; e', b) = desugar_{expr}(\assert e : \texttt{"Assertion failed"} ; e', b) \]
\[ desugar_{expr}(\assert e : e' ; e'', b) = desugar_{expr}(\if {e} {e''} {\error{e'}}, b) \]
\[ desugar_{expr}(e[e':e'':], b) = desugar_{expr}(e[e':e'':\null], b) \]
\[ desugar_{expr}(e[e':e''], b) = desugar_{expr}(e[e':e'':\null], b) \]
\[ desugar_{expr}(e[e'::e'''], b) = desugar_{expr}(e[e':\null:e'''], b) \]
\[ desugar_{expr}(e[:e'':e'''], b) = desugar_{expr}(e[\null:e'':e'''], b) \]
\[ desugar_{expr}(e[e':], b) = desugar_{expr}(e[e':\null:\null], b) \]
\[ desugar_{expr}(e[:e'':], b) = desugar_{expr}(e[\null:e'':\null], b) \]
\[ desugar_{expr}(e[:e''], b) = desugar_{expr}(e[\null:e'':\null], b) \]
\[ desugar_{expr}(e[::e'''], b) = desugar_{expr}(e[\null:\null:e'''], b) \]
\[ desugar_{expr}(e[::], b) = desugar_{expr}(e[\null:\null:\null], b) \]
\[ desugar_{expr}(e[:], b) = desugar_{expr}(e[\null:\null:\null], b) \]
\[ desugar_{expr}(e[e':e'':e'''], b) = desugar_{expr}(\texttt{std.slice}(e, e', e'', e'''), b) \]
\[ desugar_{expr}(\ifnoelse {e} {e'}, b) = \if {desugar_{expr}(e, b)} {desugar_{expr}(e', b)} {\null} \]
\[ desugar_{expr}(e.id, b) = desugar_{expr}(e, b)[\texttt{"}id\texttt{"}] \]
\[ desugar_{expr}(\super.id, b) = \super[\texttt{"}id\texttt{"}] \]
\[ desugar_{expr}(e \mathop{\textrm{!=}} e', b) = desugar_{expr}(!(e \mathop{==} e'), b) \]
\[ desugar_{expr}(e \mathop{==} e', b) = desugar_{expr}(\texttt{std.equals}(e, e'), b) \]
\[ desugar_{expr}(e \mathop{\%} e', b) = desugar_{expr}(\texttt{std.mod}(e, e'), b) \]
\[ desugar_{expr}(e \mathop{\texttt{in}} e', b) = desugar_{expr}(\texttt{std.objectHasEx}(e', e, \texttt{true}), b) \]
\[ desugar_{expr}(e \mathop{\texttt{in}} \texttt{super}, b) = desugar_{expr}(\texttt{std.objectHasEx}(\texttt{super}, e, \texttt{true}), b) \]
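The slice desugarings above complete every surface form to e[start:end:step], with null filling each omitted position, before handing off to std.slice. The sketch below models this in Python with None standing in for null; the defaults (0, the array length, and 1) are an assumption about std.slice's standard-library behavior, which the equations above leave unspecified.

```python
# Sketch of std.slice after desugaring: all slice forms arrive here as
# std.slice(e, start, end, step), with None for each omitted component.
def std_slice(arr, start, end, step):
    start = 0 if start is None else start
    end = len(arr) if end is None else end
    step = 1 if step is None else step
    return [arr[i] for i in range(start, min(end, len(arr)), step)]
```

For example, `e[1:]` is first completed to `e[1:null:null]`, and `e[::2]` to `e[null:null:2]`.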

desugarassert: (Field × [Bind]) → Field. This desugars object assertions.

\[ desugar_{assert}(\assert e, binds) = desugar_{assert}(\assert e : \texttt{"Assertion failed"}, binds) \]
\[ desugar_{assert}(\assert e : e', binds) = \assert{desugar_{expr}(\local{binds}{\if{e}{\null}{\error{e'}}}, \true)} \]

desugarfield: (Field × [Bind] × Boolean) → Field. This desugars object fields. Recall that h ranges over :, ::, :::. The boolean records whether the object containing this field is itself in another object. The notation string(id) means converting the identifier token to a string literal.

\[ desugar_{field}(id \mathrel{h} e, binds, b) = desugar_{field}([string(id)] \mathrel{h} e, binds, b) \]
\[ desugar_{field}(id \mathrel{+h} e, binds, b) = desugar_{field}([string(id)] \mathrel{+h} e, binds, b) \]
\[ desugar_{field}(id(params) \mathrel{h} e, binds, b) = desugar_{field}([string(id)](params) \mathrel{h} e, binds, b) \]
\[ desugar_{field}(string \mathrel{h} e, binds, b) = desugar_{field}([string] \mathrel{h} e, binds, b) \]
\[ desugar_{field}(string \mathrel{+h} e, binds, b) = desugar_{field}([string] \mathrel{+h} e, binds, b) \]
\[ desugar_{field}(string(params) \mathrel{h} e, binds, b) = desugar_{field}([string](params) \mathrel{h} e, binds, b) \]
\[ desugar_{field}([e] \mathrel{h} e', binds, b) = [desugar_{expr}(e, b)] \mathrel{h} desugar_{expr}(\local{binds}{e'}, \true) \]
\[ desugar_{field}([e] \mathrel{+h} e', binds, b) = \\ \hspace{10mm} \textrm{let } e'' = e[\texttt{\$outerself} / \self, \texttt{\$outersuper} / \super] \\ \hspace{10mm} \textrm{let } e''' = \if{e''\mathrel{\texttt{in}} \super}{\super[e''] + {e'}}{e'} \\ \hspace{10mm} \textrm{in } desugar_{field}([e] \mathrel{h} e''', binds, b) \]
\[ desugar_{field}([e](params) \mathrel{h} e', binds, b) = desugar_{field}([e] \mathrel{h} \function{params}{e'}, binds, b) \]

desugarbind: (Bind × Boolean) → Field. This desugars local bindings.

\[ desugar_{bind}(id \texttt{=} e, b) = id \texttt{=} desugar_{expr}(e, b) \]
\[ desugar_{bind}(id(params) \texttt{=} e, b) = id \texttt{=} desugar_{expr}(\function{params}e, b) \]

desugarparam: (Param × Boolean) → Param. This desugars function parameters.

\[ desugar_{param}(id, b) = id \texttt{=} \error{\texttt{"Parameter not bound"}} \]
\[ desugar_{param}(id \texttt{=} e, b) = id \texttt{=} desugar_{expr}(e, b) \]

desugararrcomp: (Expr × CompSpec × Boolean) → Field. This desugars array comprehensions.

\[ desugar_{arrcomp}(e, \textrm{if }e'\ compspec, b) = desugar_{expr}(\if{e'}{desugar_{arrcomp}(e, compspec, b)}{[\ ]}, b) \]
\[ desugar_{arrcomp}(e, \textrm{if }e', b) = desugar_{expr}(\if{e'}{[e]}{[\ ]}, b) \]
\[ desugar_{arrcomp}(e, \textrm{for }x\textrm{ in }e'\ compspec, b) = \\ \hspace{10mm}\textrm{Let }arr, i\textrm{ fresh} \\ \hspace{10mm}desugar_{expr}( \local{arr = e'}{ \texttt{std.join}(\\\hspace{20mm}[\ ], \texttt{std.makeArray}( \texttt{std.length}(arr), \function{i}{\local{x = arr[i]}{desugar_{arrcomp}(e, compspec, b)}} )) }, b ) \]
\[ desugar_{arrcomp}(e, \textrm{for }x\textrm{ in }e', b) = \\ \hspace{10mm}\textrm{Let }arr, i\textrm{ fresh} \\ \hspace{10mm}desugar_{expr}( \local{arr = e'}{ \texttt{std.join}(\\\hspace{20mm}[\ ], \texttt{std.makeArray}( \texttt{std.length}(arr), \function{i}{\local{x = arr[i]}{[e]}} )) }, b ) \]
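The array-comprehension desugaring above can be sketched in Python, with lambdas standing in for Jsonnet functions: each `for` clause becomes std.join([], std.makeArray(std.length(arr), function(i) ...)) and each `if` clause chooses between a singleton array and an empty one. This is an illustrative model, not the specified desugaring itself.

```python
def make_array(n, f):            # models std.makeArray
    return [f(i) for i in range(n)]

def join(sep, arrays):           # models std.join with an array separator
    out = []
    for k, a in enumerate(arrays):
        if k > 0:
            out += sep
        out += a
    return out

def arrcomp(e, arr, cond):
    """Models [e(x) for x in arr if cond(x)] after desugaring."""
    return join([], make_array(len(arr),
                               lambda i: [e(arr[i])] if cond(arr[i]) else []))
```

Note how filtering falls out of the encoding: elements failing the condition contribute an empty array, which disappears in the join.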

Static Checking

After the Jsonnet program is parsed and desugared, a syntax-directed algorithm is employed to reject programs that contain certain classes of errors. This is presented like a static type system, except that there are no static types. Programs are only rejected if they use undefined variables, or if self, super or $ are used outside the bounds of an object. In the core language, $ has been desugared to a variable, so its checking is implicit in the checking of bound variables.

The static checking is described below as a judgement \(Γ ⊢ e\), where \(Γ\) is the set of variables in scope of \(e\). The set \(Γ\) initially contains only std, the implicit standard library. Each imported Jsonnet file is checked independently of the other files.

\[ \rule{chk-lit} { } { \_ ⊢ \null, \true, \false, string, number } \]
\[ \rule{chk-self} { \self ∈ Γ } { Γ ⊢ \self } \]
\[ \rule{chk-super} { \super ∈ Γ } { Γ ⊢ \super } \]
\[ \rule{chk-object} { Γ ⊢ e_1 \ldots e_m \\ Γ ∪ \{\self,\super\} ⊢ e'_1 \ldots e'_n \\ ∀ i,j: e_i ∈ string ∧ e_j = e_i ⇒ i = j } { Γ ⊢ \object{[e_1] h_1 e'_1 \ldots [e_m] h_m e'_m, \assert e'_{m+1} \ldots \assert e'_n} } \]
\[ \rule{chk-object-comp} { Γ ∪ \{x\} ⊢ e_1 \\ Γ ∪ \{x,\self,\super\} ⊢ e_2 \\ Γ ⊢ e_3 } { Γ ⊢ \ocomp{e_1}{e_2}{x}{e_3} } \]
\[ \rule{chk-array} { Γ ⊢ e_1 \ldots e_n } { Γ ⊢ \array{e_1 \ldots e_n} } \]
\[ \rule{chk-array-index} { Γ ⊢ e \\ Γ ⊢ e' } { Γ ⊢ e[e'] } \]
\[ \rule{chk-apply} { Γ ⊢ e \\ ∀ i∈\{1\ldots n\}: Γ ⊢ e_i \\ ∀ i,j∈\{1\ldots n\}: x_i = x_j ⇒ i = j } { Γ ⊢ e(x_1 = e_1 \ldots x_n = e_n) } \]
\[ \rule{chk-var} { x ∈ Γ } { Γ ⊢ x } \]
\[ \rule{chk-local} { Γ ∪ \{x_1 \ldots x_n\} ⊢ e_1 \ldots e_n, e \\ ∀ i,j: x_i = x_j ⇒ i = j } { Γ ⊢ \local{\assign{x_1}{e_1} \ldots \assign{x_n}{e_n}}e } \]
\[ \rule{chk-if} { Γ ⊢ e_1, e_2, e_3 } { Γ ⊢ \if{e_1}{e_2}{e_3} } \]
\[ \rule{chk-binary} { Γ ⊢ e_L, e_R } { Γ ⊢ \binary{e_L}{sym}{e_R} } \]
\[ \rule{chk-unary} { Γ ⊢ e } { Γ ⊢ \unary{sym}{e} } \]
\[ \rule{chk-function} { ∀ i ∈ \{m+1 \ldots n\}: Γ ∪ \{x_1 \ldots x_n\} ⊢ e_i \\ Γ ∪ \{x_1 \ldots x_n\} ⊢ e' \\ ∀ i,j: x_i = x_j ⇒ i = j } { Γ ⊢ \function{x_1\ldots x_m, x_{m+1}=e_{m+1}\ldots x_n=e_n}{e'} } \]
\[ \rule{chk-import} { } { Γ ⊢ \import{s} } \]
\[ \rule{chk-importstr} { } { Γ ⊢ \importstr{s} } \]
\[ \rule{chk-importbin} { } { Γ ⊢ \importbin{s} } \]
\[ \rule{chk-error} { Γ ⊢ e } { Γ ⊢ \error{e} } \]
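The variable-scope portion of these rules can be sketched as a recursive checker. The tuple-based AST encoding below is hypothetical (the spec defines no concrete AST); Γ is modelled as a set starting from {"std"}, and chk-local's mutual recursion shows up as extending the environment before checking the bindings' right-hand sides.

```python
# Sketch of the scope-checking judgement Γ ⊢ e over a hypothetical AST:
# ("var", name) | ("lit", value) | ("local", [(name, rhs), ...], body) | ...
def check(e, env=frozenset({"std"})):
    kind = e[0]
    if kind == "var":                       # chk-var: x ∈ Γ
        if e[1] not in env:
            raise NameError(e[1])
    elif kind == "lit":                     # chk-lit: always well-scoped
        pass
    elif kind == "local":                   # chk-local: binds are recursive
        _, binds, body = e
        env = env | {x for x, _ in binds}
        for _, rhs in binds:
            check(rhs, env)
        check(body, env)
    else:                                   # other constructs: recurse
        for sub in e[1:]:
            check(sub, env)
```

Because the bindings are added to the environment before their right-hand sides are checked, a self-referential binding such as `local f = f; 0` passes, as chk-local requires.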

Operational Semantics

We present two sets of operational semantics rules. The first defines the judgement \(e ↓ v\) which represents the execution of Jsonnet expressions into Jsonnet values. The other defines the judgement \(v ⇓ j\) which represents manifestation, the process by which Jsonnet values are converted into JSON values.

We model both explicit runtime errors (raised by the error construct) and implicit runtime errors (e.g. array bounds errors) as stuck execution. Errors can occur both in the \(e ↓ v\) judgement and in the \(v ⇓ j\) judgement (because it is defined in terms of \(e ↓ v\)).

Jsonnet Values

When executed, Jsonnet expressions yield Jsonnet values. These need to be manifested, an additional step, to get JSON values. The differences between Jsonnet values and JSON values are: 1) Jsonnet values contain functions (which are not representable in JSON). 2) Due to the lazy semantics, both object fields and array elements have yet to be executed to yield values. 3) Object assertions still need to be checked.

Execution of a statically-checked expression will never yield an object with duplicate field names. By abuse of notation, we consider two objects to be equivalent even if their fields and assertions are re-ordered. However this is not true of array elements or function parameters.

v ∈ Value ::= Primitive | Object | Function | Array
Primitive ::= null | true | false | string | double
o ∈ Object ::= { { assert e } { string h e } }
Function ::= function ( { id=e } ) e
a ∈ Array ::= [ { e } ]

Hidden status inheritance

The hidden status of fields is preserved over inheritance if the right hand side uses the : form. This is codified with the following function:

\[ h_L + h_R = \left\{\begin{array}{ll} h_L & \textrm{if }h_R = \texttt{:} \\ h_R & \textrm{otherwise} \\ \end{array}\right. \]
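As a sketch, the same combination function written out in code (with the markers represented as strings):

```python
# h_L + h_R: a ":" on the right-hand side preserves the left-hand side's
# visibility; otherwise the right-hand side wins.
def combine_hidden(h_left, h_right):
    return h_left if h_right == ":" else h_right
```

So a field hidden with `::` stays hidden when overridden with `:`, but becomes visible again when overridden with `:::`.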

Capture-Avoiding Substitution

The rules for capture-avoiding variable substitution [e/id] are an extension of those in the lambda calculus.

Let y ≠ x.

self[e/x] = self
super[e/x] = super
x[e/x] = e
y[e/x] = y
{ ... assert e' ... [e''] h e''' ... }[e/x] = { ... assert e'[e/x] ... [e''[e/x]] h e'''[e/x] ... }
{ [e']: e'' for x in e''' }[e/x] = { [e']: e'' for x in e'''[e/x] }
{ [e']: e'' for y in e''' }[e/x] = { [e'[e/x]]: e''[e/x] for y in e''' [e/x] }
(local ... y=e' ... ; e'') [e/x] = local ... y=e' ... ; e'' (If any variable matches.)
(local ... y=e' ... ; e'') [e/x] = local ... y=e'[e/x] ... ;e''[e/x] (If no variable matches.)
(function ( ... y=e' ... ) e'')[e/x] = function ( ... y=e' ... )e'' (If any param matches.)
(function ( ... y=e' ... ) e'')[e/x] = function ( ... y=e'[e/x] ... ) e''[e/x] (If no param matches.)
Otherwise, e' [e/x] proceeds via syntax-directed recursion into subterms of e'.

The rules for keyword substitution ⟦e/kw⟧ for kw ∈ { self, super } avoid substituting keywords that are captured by nested objects:

self⟦e/self⟧ = e
super⟦e/super⟧ = e
self⟦e/super⟧ = self
super⟦e/self⟧ = super
{ ... assert e' ... [e''] h e''' ... }⟦e/kw⟧ = { ... assert e' ... [e''⟦e/kw⟧] h e''' ... }
{ [e']: e'' for x in e''' }⟦e/kw⟧ = { [e'⟦e/kw⟧]: e'' for x in e'''⟦e/kw⟧ }
Otherwise, e'⟦e/kw⟧ proceeds via syntax-directed recursion into the subterms of e'.

Execution

The following big step operational semantics rules define the execution of Jsonnet programs, i.e. the reduction of a Jsonnet program e into its Jsonnet value v via the judgement \(e ↓ v\).

Let f range over strings, as used in object field names.

\[ \rule{value} { v ∈ \{\null, \true, \false\} ∪ String ∪ Number ∪ Function ∪ Array } { v ↓ v } \]
\[ \rule{object} { ∀i∈\{1\ldots p\}: e'_i ↓ f_i \\ ∀i∈\{p+1 \ldots n\}: e'_i ↓ \null \\ ∀i,j∈\{1\ldots p\}: f_i = f_j ⇒ i = j \\ o = \object{ \assert{e_1} \ldots \assert{e_m}, f_1\mathop{h_1}e''_1 \ldots f_p\mathop{h_p}e''_p } } { \object{\assert{e_1} \ldots \assert{e_m}, [e'_1]\mathop{h_1}e''_1 \ldots [e'_n]\mathop{h_n}e''_n } ↓ o } \]
\[ \rule{object-comp} { e_{arr} ↓ [ e_1 \ldots e_n ] \\ ∀i∈\{1 \ldots n\}: e_{field}[e_i/x] ↓ f_i \\ ∀i,j∈\{1\ldots n\}: f_i = f_j ≠ \null ⇒ i = j \\ \{ (f'_1, e'_1) \ldots (f'_m, e'_m) \} = \{ (f_i, e_{body}[e_i/x]) \ | \ i ∈\{1\ldots n\} ∧ f_i ≠ \null \} \\ o = \object{f'_1: e'_1 \ldots f'_m: e'_m} } { \ocomp{e_{field}}{e_{body}}{x}{e_{arr}} ↓ o } \]
\[ \rule{array-index} { e ↓ \array{e_0 \ldots e_n} \\ e' ↓ i ∈ \{ 0 \ldots n \} \\ e_i ↓ v } { \index{e}{e'} ↓ v } \]
\[ \rule{object-index} { e ↓ o = \object{\assert{e'''_1} \ldots \assert{e'''_m}, f_1 h_1 e''_1 \ldots f_n h_n e''_n} \\ ∀j ∈ \{1 \ldots m \}: e'''_j⟦ o / \self, \{\} / \super ⟧↓\_ \\ e' ↓ f_i \\ e''_i ⟦ o / \self, \{\} / \super ⟧ ↓ v } { \index{e}{e'} ↓ v } \]
\[ \rule{apply} { e_0 ↓ \function{y_1=e'_1 \ldots y_p=e'_p}{e_b} \\ ∀i∈\{m+1 \ldots n\}: x_i ∈ \{y_1 \ldots y_p\} \\ ∀i∈\{1 \ldots p\}: e''_i = \left\{\begin{array}{ll} e_i & \textrm{if } i ≤ m \\ e_j & \textrm{if } y_i=x_j \textrm{ for some } j \\ e'_i & \textrm{otherwise}\\ \end{array}\right. \\ (\local{y_1=e''_1 \ldots y_p=e''_p}{e_b}) ↓ v } { e_0(e_1 \ldots e_m, x_{m+1}=e_{m+1} \ldots x_n = e_n) ↓ v } \]
\[ \rule{local} { e ↓ v } { \local{\_}e ↓ v } \]
\[ \rule{local-var} { binds = \assign{x_1}{e_1} \ldots \assign{x_n}{e_n} \\ \local{binds}{e[\local{binds}{e_1} / x_1 \ldots \local{binds}{e_n} / x_n ]} ↓ v } { \local{binds}e ↓ v } \]
\[ \rule{if-true} { e_1 ↓ \true \hspace{15pt} e_2 ↓ v } { \if{e_1}{e_2}{e_3} ↓ v } \]
\[ \rule{if-false} { e_1 ↓ \false \hspace{15pt} e_3 ↓ v } { \if{e_1}{e_2}{e_3} ↓ v } \]
\[ \rule{object-inherit} { e^L ↓ \object{ \assert e''^L_1 \ldots \assert e''^L_n,\ f_1 h^L_1 e^L_1 \ldots f_m h^L_m e^L_m,\ f'^L_1 h'^L_1 e'^L_1 \ldots f'^L_p h'^L_p e'^L_p } \\ e^R ↓ \object{ \assert e''^R_1 \ldots \assert e''^R_q,\ f_1 h^R_1 e^R_1 \ldots f_m h^R_m e^R_m,\ f'^R_1 h'^R_1 e'^R_1 \ldots f'^R_r h'^R_r e'^R_r } \\ \{ f'^L_1 \ldots f'^L_p \} ∩ \{ f'^R_1 \ldots f'^R_r \} = ∅ \\ x, y \textrm{ fresh} \hspace{15pt} \textrm{let } S = λe . e⟦x/\self, y/\super⟧ \\ e_s = \super + \object{ \assert S(e''^L_1) \ldots \assert S(e''^L_n),\\ \hspace{20mm} f_1 h^L_1 S(e^L_1) \ldots f_m h^L_m S(e^L_m),\ f'^L_1 h'^L_1 S(e'^L_1) \ldots f'^L_p h'^L_p S(e'^L_p) } \\ ∀i∈\{1 \ldots m\}: h'''_i = h^L_i + h^R_i ∧ e'''_i = (\local{x = \self, y = \super}{e^R_i ⟦e_s / \super⟧}) \\ o = \{ \\ \hspace{5mm} \assert e''^L_1 \ldots \assert e''^L_n, \ \assert e''^R_1 \ldots \assert e''^R_q, \\ \hspace{5mm} f'^L_1 h'^L_1 e'^L_1 \ldots f'^L_p h'^L_p e'^L_p, \ f'^R_1 h'^R_1 e'^R_1 \ldots f'^R_r h'^R_r e'^R_r, \ f_1 h'''_1 e'''_1 \ldots f_m h'''_m e'''_m \\ \} } { e^L \texttt{ + } e^R ↓ o } \]
\[ \rule{array-concat} { e ↓ \array{e_0 \ldots e_m} \\ e' ↓ \array{e_{m+1} \ldots e_n} } { e + e' ↓ \array{e_0 \ldots e_n} } \]
\[ \rule{string-concat} { e_L ↓ v_L \hspace{15pt} e_R ↓ v_R \\ v_L ∈ String \vee v_R ∈ String } { e_L \texttt{ + } e_R ↓ stringconcat(tostring(v_L), tostring(v_R)) } \]
\[ \rule{less-than-true} { \texttt{std.cmp}(e_L, e_R) ↓ -1 } { e_L < e_R ↓ \texttt{true} } \]
\[ \rule{less-than-false} { \texttt{std.cmp}(e_L, e_R) ↓ r \\ r ∈ \{0, 1\} } { e_L < e_R ↓ \texttt{false} } \]
\[ \rule{greater-than-true} { \texttt{std.cmp}(e_L, e_R) ↓ 1 } { e_L > e_R ↓ \texttt{true} } \]
\[ \rule{greater-than-false} { \texttt{std.cmp}(e_L, e_R) ↓ r \\ r ∈ \{-1, 0\} } { e_L > e_R ↓ \texttt{false} } \]
\[ \rule{less-or-equal-true} { \texttt{std.cmp}(e_L, e_R) ↓ r \\ r ∈ \{-1, 0\} } { e_L <= e_R ↓ \texttt{true} } \]
\[ \rule{less-or-equal-false} { \texttt{std.cmp}(e_L, e_R) ↓ 1 } { e_L <= e_R ↓ \texttt{false} } \]
\[ \rule{greater-or-equal-true} { \texttt{std.cmp}(e_L, e_R) ↓ r \\ r ∈ \{0, 1\} } { e_L >= e_R ↓ \texttt{true} } \]
\[ \rule{greater-or-equal-false} { \texttt{std.cmp}(e_L, e_R) ↓ -1 } { e_L >= e_R ↓ \texttt{false} } \]
\[ \rule{cmp-array-left-empty} { e_L ↓ \array{} \hspace{15pt} e_R ↓ \array{a_0 \ldots a_{n - 1}} \\ n > 0 } { \texttt{std.cmp}(e_L, e_R) ↓ -1 } \]
\[ \rule{cmp-array-right-empty} { e_L ↓ \array{a_0 \ldots a_{n - 1}} \hspace{15pt} e_R ↓ \array{} \\ n > 0 } { \texttt{std.cmp}(e_L, e_R) ↓ 1 } \]
\[ \rule{cmp-array-both-empty} { e_L ↓ \array{} \hspace{15pt} e_R ↓ \array{} } { \texttt{std.cmp}(e_L, e_R) ↓ 0 } \]
\[ \rule{cmp-array-first-different} { e_L ↓ \array{a_0 \ldots a_{n - 1}} \hspace{15pt} e_R ↓ \array{b_0 \ldots b_{m - 1}} \\ n > 0 \hspace{15pt} m > 0 \\ \texttt{std.cmp}(a_0, b_0) ↓ r \\ r ≠ 0 } { \texttt{std.cmp}(e_L, e_R) ↓ r } \]
\[ \rule{cmp-array-first-equal} { e_L ↓ \array{a_0 \ldots a_{n - 1}} \hspace{15pt} e_R ↓ \array{b_0 \ldots b_{m - 1}} \\ n > 0 \hspace{15pt} m > 0 \\ \texttt{std.cmp}(a_0, b_0) ↓ 0 \\ \texttt{std.cmp}(\array{a_1 \ldots a_{n - 1}}, \array{b_1 \ldots b_{m - 1}}) ↓ r' } { \texttt{std.cmp}(e_L, e_R) ↓ r' } \]
\[ \rule{cmp-string} { e_L ↓ v_L \hspace{15pt} e_R ↓ v_R \\ v_L ∈ String \\ v_R ∈ String } { \texttt{std.cmp}(e_L, e_R) ↓ stringcmp(v_L, v_R) } \]
\[ \rule{cmp-number} { e_L ↓ v_L \hspace{15pt} e_R ↓ v_R \\ v_L ∈ Number \\ v_R ∈ Number } { \texttt{std.cmp}(e_L, e_R) ↓ numbercmp(v_L, v_R) } \]
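The std.cmp rules above can be sketched compactly: numbers and strings compare directly, and arrays compare lexicographically, element by element, recursing on the tails. This is an illustrative model, not the standard library's implementation.

```python
# Sketch of the std.cmp rules: -1, 0, or 1.
def cmp(a, b):
    if isinstance(a, list) and isinstance(b, list):
        if not a and not b:
            return 0                       # cmp-array-both-empty
        if not a:
            return -1                      # cmp-array-left-empty
        if not b:
            return 1                       # cmp-array-right-empty
        r = cmp(a[0], b[0])                # cmp-array-first-different
        return r if r != 0 else cmp(a[1:], b[1:])  # cmp-array-first-equal
    return (a > b) - (a < b)               # numbercmp / stringcmp
```

For example, a shorter array that is a prefix of a longer one compares as less, via the left-empty rule on the recursive call.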
\[ \rule{boolean-and-shortcut} { e_L ↓ \false } { e_L \texttt{ && } e_R ↓ \false } \]
\[ \rule{boolean-and-longcut} { e_L ↓ \true \\ e_R ↓ b } { e_L \texttt{ && } e_R ↓ b } \]
\[ \rule{boolean-or-shortcut} { e_L ↓ \true } { e_L \texttt{ || } e_R ↓ \true } \]
\[ \rule{boolean-or-longcut} { e_L ↓ \false \\ e_R ↓ b } { e_L \texttt{ || } e_R ↓ b } \]
\[ \rule{not-true} { e ↓ \true } { \texttt{!} e ↓ \false } \]
\[ \rule{not-false} { e ↓ \false } { \texttt{!} e ↓ \true } \]
\[ \rule{primitiveEquals} { e ↓ v \\ e' ↓ v' \\ b = (v ∈ String ∨ v ∈ Boolean ∨ v ∈ Number ∨ v = \null) ∧ v = v' } { \texttt{std.primitiveEquals}(e, e') ↓ b } \]
\[ \rule{length-array} { e ↓ \array{e_0 \ldots e_{n - 1}} } { \texttt{std.length}(e) ↓ n } \]
\[ \rule{length-object} { \texttt{std.length}(\texttt{std.objectFieldsEx}(e, \false)) ↓ n } { \texttt{std.length}(e) ↓ n } \]
\[ \rule{length-string} { e ↓ v ∈ String } { \texttt{std.length}(e) ↓ strlen(v) } \]
\[ \rule{makeArray} { e ↓ n \\ e' ↓ f ∈ Function } { \texttt{std.makeArray}(e, e') ↓ \array{f(0) \ldots f(n - 1)} } \]
\[ \rule{filter} { e ↓ f ∈ Function \\ e' ↓ \array{e_0 \ldots e_{n - 1}} \\ j_1 \ldots j_m = \{ i \ |\ f(e_i) ↓ \true \} } { \texttt{std.filter}(e, e') ↓ \array{e_{j_1} \ldots e_{j_m}} } \]
\[ \rule{type-null} { e ↓ \texttt{null} } { \texttt{std.type}(e) ↓ \texttt{"null"} } \]
\[ \rule{type-boolean} { e ↓ v ∈ Boolean } { \texttt{std.type}(e) ↓ \texttt{"boolean"} } \]
\[ \rule{type-number} { e ↓ v ∈ Number } { \texttt{std.type}(e) ↓ \texttt{"number"} } \]
\[ \rule{type-string} { e ↓ v ∈ String } { \texttt{std.type}(e) ↓ \texttt{"string"} } \]
\[ \rule{type-object} { e ↓ v ∈ Object } { \texttt{std.type}(e) ↓ \texttt{"object"} } \]
\[ \rule{type-function} { e ↓ v ∈ Function } { \texttt{std.type}(e) ↓ \texttt{"function"} } \]
\[ \rule{type-array} { e ↓ v ∈ Array } { \texttt{std.type}(e) ↓ \texttt{"array"} } \]
\[ \rule{object-has-ex} { e' ↓ f \\ e'' ↓ b' \\ e ↓ \object{\assert{e'_1} \ldots \assert{e'_m}, f_1 h_1 e_1 \ldots f_n h_n e_n } \\ b = ∃i: f = f_i ∧ (h_i \mathop{≠} :: \mathop{∨} b') } { \texttt{std.objectHasEx}(e, e', e'') ↓ b } \]
\[ \rule{object-fields-ex} { e' ↓ b' \\ e ↓ \object{\assert{e'_1} \ldots \assert{e'_m}, f_1 h_1 e_1 \ldots f_n h_n e_n } \\ \{ f'_1 \ldots f'_p \} = \{ f\ |\ ∃i: f = f_i ∧ (h_i \mathop{≠} :: \mathop{∨} b') \} \\ ∀i,j∈\{1 \ldots p\}: i≤j ⇒ f'_i≤f'_j } { \texttt{std.objectFieldsEx}(e, e') ↓ \array{f'_1 \ldots f'_p} } \]
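The visibility filtering shared by the two rules above can be sketched as follows, modelling an object as a mapping from field name to its hidden marker (an encoding chosen here for illustration, not one the spec defines): with the include-hidden flag false, fields declared with :: are excluded, and the field list comes back sorted.

```python
# Sketch of std.objectFieldsEx / std.objectHasEx visibility filtering.
def object_fields_ex(fields, include_hidden):
    # Keep a field if it is not hidden ("::") or hidden fields are requested;
    # the rule requires the result to be sorted.
    return sorted(f for f, h in fields.items() if h != "::" or include_hidden)

def object_has_ex(fields, name, include_hidden):
    return name in object_fields_ex(fields, include_hidden)
```

Fields declared with : or ::: are always reported; only :: fields depend on the flag.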

String concatenation implicitly converts one of the values to a string if necessary, similarly to Java. The function \(tostring\) referred to above returns its argument unchanged if it is a string; otherwise it manifests its argument as a JSON value \(j\) and unparses it as a single line of text. The function \(strlen\) returns the number of Unicode characters in the string.
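The behavior of \(tostring\) can be sketched in Python (non-normative; `json.dumps` is assumed to approximate the single-line unparsing closely enough for illustration, though exact spacing is implementation-defined):

```python
import json

def tostring(v):
    # Strings are returned unchanged; any other value is manifested
    # as JSON and unparsed on a single line.
    if isinstance(v, str):
        return v
    return json.dumps(v)
```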

The numeric semantics are as follows:

  • Arithmetic: Binary *, /, +, -, <, <=, >, >=, and unary + and - operate on numbers and have IEEE double precision floating point semantics, except that the special values NaN and Infinity raise errors. Note that + is also overloaded on objects, arrays, and when either argument is a string. Also, <, <=, >, >= are overloaded on strings and on arrays. In both cases the comparison is performed lexicographically (for strings, by Unicode codepoint).
  • Bitwise: Operators <<, >>, &, ^, | and ~ first convert their operands to signed 64 bit integers, then perform the operations in a standard way, then convert back to IEEE double precision floating point. In shift operations <<, >>, the right hand value modulo 64 is interpreted as the shift count. Shifting with a negative shift count raises an error.
  • Standard functions: The following functions have standard mathematical behavior and operate on IEEE double precision floating point: std.pow(a, b), std.floor(x), std.ceil(x), std.sqrt(x), std.sin(x), std.cos(x), std.tan(x), std.asin(x), std.acos(x), std.atan(x), std.log(x), std.exp(x), std.mantissa(x), std.exponent(x) and std.modulo(a, b). Also, std.codepoint(x) takes a single-character string, returning its Unicode codepoint as a number, and std.char(x) is its inverse.
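The bitwise conversion and shift behavior described above can be sketched in Python (non-normative; it is assumed here that the check for a negative shift count applies to the converted operand before the modulo is taken):

```python
MASK = 0xFFFFFFFFFFFFFFFF

def to_int64(x):
    # Truncate a double toward zero and wrap it into a signed
    # 64-bit integer (two's complement).
    n = int(x) & MASK
    return n - (MASK + 1) if n >= 0x8000000000000000 else n

def shift_left(a, b):
    # Left shift: the shift count is the right operand modulo 64;
    # a negative count raises an error.  The result is converted
    # back to double precision.
    count = to_int64(b)
    if count < 0:
        raise RuntimeError("shift by negative count")
    return float(to_int64(to_int64(a) << (count % 64)))
```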

The error operator has no rule because we model errors (both from the language and user-defined) as stuck execution. The semantics of error are that its subterm is evaluated to a Jsonnet value. If this is a string, then that is the error that is raised. Otherwise, it is converted to a string using \(tostring\), as during string concatenation. The specification does not prescribe how the error is presented to the user, or whether there is a stack trace. Error messages are meant for human inspection, so there is no need to standardize them.
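A non-normative Python sketch of this behavior (\(tostring\) is approximated with `json.dumps`, and the raised error is modeled as a Python exception):

```python
import json

def raise_error(v):
    # The evaluated subterm of `error` is raised directly if it is a
    # string; otherwise it is first converted with tostring, modeled
    # here with json.dumps as for string concatenation.
    msg = v if isinstance(v, str) else json.dumps(v)
    raise RuntimeError(msg)
```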

Finally, the function std.native(x) takes a string and returns a function configured by the user in a custom execution environment, thus its semantics cannot be formally described here. The function std.extVar(x) also takes a string and returns the value bound to that external variable at the time the Jsonnet environment was created.

JSON Values

After execution, the resulting Jsonnet value is manifested into a JSON value whose serialized form is the ultimate output. The manifestation process removes hidden fields, checks assertions, and forces array elements and non-hidden object fields. Attempting to manifest a function raises an error, since functions do not exist in JSON. JSON values are formalized below.

By abuse of notation, we consider two objects to be equivalent even if their fields are re-ordered. However, this is not true of array elements, whose ordering is significant.

j ∈ JValue = Primitive ∪ JObject ∪ JArray
Primitive ::= null | true | false | string | double
o ∈ JObject ::= { { string : j } }
a ∈ JArray ::= [ { j } ]

Note that JValue ⊆ Value.

Manifestation

Manifestation is the conversion of a Jsonnet value into a JSON value. It is represented with the judgement \(v⇓j\). The process requires executing arbitrary Jsonnet code fragments, so the two semantic judgements represented by \(↓\) and \(⇓\) are mutually recursive. Hidden fields are ignored during manifestation. Functions cannot be manifested, so an error is raised in that case (formalized as stuck execution).

\[ \rule{manifest-value} { j ∈ (\{\null, \true, \false\} ∪ String ∪ Number) } { j ⇓ j } \]
\[ \rule{manifest-object} { o = \object{ \assert e''_1 \ldots \assert e''_m,\ f_1 h_1 e_1 \ldots f_n h_n e_n,\ f'_1 :: e'_1 \ldots f'_p :: e'_p,\ } \\ ∀i∈\{1\ldots m\}: e''_i⟦o/\self,\{\}/\super⟧↓\_ \\ ∀i∈\{1\ldots n\}: e_i⟦o/\self,\{\}/\super⟧↓v_i⇓j_i\ ∧\ h_i ≠ :: } { o ⇓ \object{f_1 : j_1 \ldots f_n : j_n} } \]
\[ \rule{manifest-array} { ∀i∈\{1\ldots n\}: e_i↓v_i⇓j_i } { \array{e_1 \ldots e_n} ⇓ \array{j_1 \ldots j_n} } \]
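The three manifestation rules above can be sketched in Python (non-normative; object assertions are omitted, and laziness is modeled with thunks):

```python
def manifest(v):
    # Sketch of manifestation (v ⇓ j).  Objects are modeled as dicts
    # mapping field names to (hidden, thunk) pairs, and arrays as
    # lists of thunks.
    if callable(v):
        raise RuntimeError("cannot manifest a function")
    if isinstance(v, dict):
        # Hidden (::) fields are dropped; visible fields are forced
        # and manifested recursively.
        return {f: manifest(thunk())
                for f, (hidden, thunk) in v.items() if not hidden}
    if isinstance(v, list):
        return [manifest(thunk()) for thunk in v]
    return v  # null, booleans, numbers and strings manifest as themselves
```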