
onnx-array-api implements APIs to create custom ONNX graphs. The objective is to speed up the implementation of converter libraries. The library is released on PyPI as onnx-array-api and its documentation is published at APIs to create ONNX Graphs.
The first one matches the numpy API. It lets the user write a function following the numpy API, convert that function into ONNX, and execute it.
```python
import numpy as np
from onnx_array_api.npx import absolute, jit_onnx
from onnx_array_api.plotting.text_plot import onnx_simple_text_plot


def l1_loss(x, y):
    return absolute(x - y).sum()


def l2_loss(x, y):
    return ((x - y) ** 2).sum()


def myloss(x, y):
    return l1_loss(x[:, 0], y[:, 0]) + l2_loss(x[:, 1], y[:, 1])


jitted_myloss = jit_onnx(myloss)

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
res = jitted_myloss(x, y)
print(res)
print(onnx_simple_text_plot(jitted_myloss.get_onnx()))
```

```
[0.042]
opset: domain='' version=18
input: name='x0' type=dtype('float32') shape=['', '']
input: name='x1' type=dtype('float32') shape=['', '']
Sub(x0, x1) -> r__0
Abs(r__0) -> r__1
ReduceSum(r__1, keepdims=0) -> r__2
output: name='r__2' type=dtype('float32') shape=None
```

It supports eager mode as well:
```python
import numpy as np
from onnx_array_api.npx import absolute, eager_onnx


def l1_loss(x, y):
    err = absolute(x - y).sum()
    print(f"l1_loss={err.numpy()}")
    return err


def l2_loss(x, y):
    err = ((x - y) ** 2).sum()
    print(f"l2_loss={err.numpy()}")
    return err


def myloss(x, y):
    return l1_loss(x[:, 0], y[:, 0]) + l2_loss(x[:, 1], y[:, 1])


eager_myloss = eager_onnx(myloss)

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
res = eager_myloss(x, y)
print(res)
```

```
l1_loss=[0.04]
l2_loss=[0.002]
[0.042]
```
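The graph returned by get_onnx() is a standard ONNX model, so it can be saved and executed by any ONNX runtime. The sketch below is illustrative only: it assumes onnx and onnxruntime are installed, inlines the loss from the example above, and feeds the input names x0 and x1 that appear in the text plot.

```python
# Sketch: serialize the jitted graph and run it with onnxruntime (assumed installed).
import numpy as np
import onnx
import onnxruntime
from onnx_array_api.npx import absolute, jit_onnx


def myloss(x, y):
    # same loss as above, inlined to keep the snippet self-contained
    return absolute(x[:, 0] - y[:, 0]).sum() + ((x[:, 1] - y[:, 1]) ** 2).sum()


jitted_myloss = jit_onnx(myloss)
x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
jitted_myloss(x, y)  # the first call triggers the conversion to ONNX

onx = jitted_myloss.get_onnx()
onnx.save(onx, "myloss.onnx")  # the proto can be stored like any other ONNX model

sess = onnxruntime.InferenceSession(
    onx.SerializeToString(), providers=["CPUExecutionProvider"]
)
# input names x0 and x1 are the ones displayed by onnx_simple_text_plot above
print(sess.run(None, {"x0": x, "x1": y}))
```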
The second API, or Light API, tends to do everything in one line. It is inspired by Reverse Polish Notation. The squared Euclidean distance looks like the following:
```python
import numpy as np
from onnx_array_api.light_api import start
from onnx_array_api.plotting.text_plot import onnx_simple_text_plot

model = (
    start()
    .vin("X")
    .vin("Y")
    .bring("X", "Y")
    .Sub()
    .rename("dxy")
    .cst(np.array([2], dtype=np.int64), "two")
    .bring("dxy", "two")
    .Pow()
    .ReduceSum()
    .rename("Z")
    .vout()
    .to_onnx()
)
print(onnx_simple_text_plot(model))  # display the resulting graph
```
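The object returned by to_onnx() is a regular ModelProto. As a quick check, it can be executed with the reference evaluator shipped with the onnx package. The short sketch below continues the example above and reuses the same input values as the earlier snippets; the input names X and Y come from the calls to vin.

```python
# Sketch: evaluate the model built above with onnx's reference evaluator.
import numpy as np
from onnx.reference import ReferenceEvaluator

ref = ReferenceEvaluator(model)
x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)
# Z = sum((X - Y) ** 2), the squared Euclidean distance between X and Y
print(ref.run(None, {"X": x, "Y": y}))
```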
onnx-array-api also implements its own GraphBuilder. Almost every converting library (a library converting a machine learned model to ONNX) implements its own graph builder and customizes it for its needs. This one handles frequent tasks such as giving names to intermediate results and loading or saving ONNX models. It can also be used to extend an existing graph.

```python
import numpy as np
from onnx_array_api.graph_api import GraphBuilder

g = GraphBuilder()
g.make_tensor_input("X", np.float32, (None, None))
g.make_tensor_input("Y", np.float32, (None, None))
# the name given to the output is chosen by the class, which ensures it is unique
r1 = g.make_node("Sub", ["X", "Y"])
# the class automatically converts the array to a tensor
init = g.make_initializer(np.array([2], dtype=np.int64))
r2 = g.make_node("Pow", [r1, init])
# the output name is specified because the user wants to choose it
g.make_node("ReduceSum", [r2], outputs=["Z"])
g.make_tensor_output("Z", np.float32, (None, None))
onx = g.to_onnx()  # final conversion to onnx
```
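The ModelProto produced by GraphBuilder can be validated and inspected like any other ONNX model. A small sketch continuing the example above, using onnx.checker and the text plot helper shown earlier:

```python
# Sketch: validate and display the graph built above (continues the previous snippet).
from onnx.checker import check_model
from onnx_array_api.plotting.text_plot import onnx_simple_text_plot

check_model(onx)                   # raises an exception if the model is malformed
print(onnx_simple_text_plot(onx))  # shows the Sub -> Pow -> ReduceSum chain ending in Z
```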