This is the ONNX inference API at the core of the ailia SDK; every other ailia product builds on it.
Choose your platform and follow three steps to run your first inference.
Install the ailia Python package from PyPI. Python 3.6 or later is required.
pip3 install ailia
View on PyPI
ailia-models ships 400+ ready-to-run inference scripts. Clone once, install the shared requirements, then cd into any model folder and run its script: pass -v 0 for webcam input, -i image.png for a still image, or omit the flag to use the bundled demo input. Running python3 launcher.py at the repo root opens a GUI for browsing every model.
git clone https://github.com/ailia-ai/ailia-models.git
cd ailia-models
pip3 install -r requirements.txt
cd object_detection/yolox
python3 yolox.py -v 0
Sample Repository (400+ models)
ailia SDK runs across desktop, mobile, and embedded platforms with a wide range of GPU back-ends.
Minimal examples for loading an ONNX model and running inference.
import ailia
import numpy as np
# stream=None lets ailia derive the graph from the .onnx file
net = ailia.Net(stream=None, weight="model.onnx")
input_tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)  # dummy NCHW input
output = net.run(input_tensor)  # returns a list of output arrays
print(output[0].shape)
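The zero tensor above only exercises the graph; a real model needs a preprocessed input. A minimal NumPy sketch for an ImageNet-style classifier (the mean/std values are the common ImageNet constants, an assumption; check each model's documentation for its actual preprocessing):

```python
import numpy as np

def preprocess(image, size=224):
    """Convert an HWC uint8 RGB image to the NCHW float32 layout
    expected by most ImageNet-style classifiers."""
    # Center-crop to a square, then nearest-neighbour resize.
    h, w, _ = image.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = image[top:top + side, left:left + side]
    idx = np.arange(size) * side // size
    resized = crop[idx][:, idx]
    # Scale to [0, 1], normalize, HWC -> CHW, add a batch dimension.
    x = resized.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x - mean) / std
    return x.transpose(2, 0, 1)[None]

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
tensor = preprocess(frame)  # shape (1, 3, 224, 224), dtype float32
```

The resulting tensor can be fed to net.run() in place of the zero array.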
Common questions from first-time ailia SDK users.
The evaluation license is downloaded automatically at runtime for the Python, Unity, Flutter, and JNI bindings; for C++ the license is valid for one month. It is intended for development and trial use only.
For commercial deployment, redistribution, or longer-term use, request a production license. See the ailia license terms for details.
Pass an env_id to ailia.Net(). List available environments (CPU, CUDA, Metal, Vulkan, MKL, NNAPI) with ailia.get_environment_list(), then select the one you want.
By default, ailia chooses the fastest available environment for your platform.
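A minimal selection helper, sketched under the assumption that each listed environment carries an `id` and a `name` (the exact structure returned by `ailia.get_environment_list()` may differ by SDK version):

```python
def pick_environment(environments, preferred="CUDA"):
    """Return the id of the first environment whose name matches
    `preferred`, falling back to the first entry (typically CPU)."""
    for env in environments:
        if preferred.lower() in env["name"].lower():
            return env["id"]
    return environments[0]["id"]

# Hypothetical listing as it might come back from get_environment_list():
envs = [{"id": 0, "name": "CPU"}, {"id": 1, "name": "CUDA"}]
env_id = pick_environment(envs)  # selects 1 (CUDA) here
```

The chosen id is then passed as ailia.Net(stream=None, weight="model.onnx", env_id=env_id).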
Sample scripts in ailia-models download .onnx and .onnx.prototxt files into the model's own directory the first time you run them. Subsequent runs reuse the cached files.
For ailia Speech and ailia Voice, models are downloaded into ./models/ by default, configurable via initialize_model(model_path=...).
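The download-once behaviour amounts to a simple existence check before fetching. A sketch of the idea (this is not the samples' actual helper; the URL and path are placeholders):

```python
import os
import urllib.request

def ensure_model(url, path):
    """Fetch `url` into `path` on first use; later calls hit the cache."""
    if not os.path.exists(path):
        print(f"Downloading {os.path.basename(path)} ...")
        urllib.request.urlretrieve(url, path)
    return path
```

The sample scripts do the equivalent for both the .onnx and the .onnx.prototxt file of each model.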
Yes, after the first run. The evaluation license and any auto-downloaded model files require an internet connection on first use; once cached, subsequent inference works offline.
For C++, the license is fetched once via download_license.py from the binding repository.
Export your model to ONNX with the framework's standard tooling (torch.onnx.export, tf2onnx, etc.), then generate the matching .onnx.prototxt using the script bundled with ailia SDK.
The model conversion tutorial walks through the process.
For bug reports and questions on the sample repositories, open an issue on the relevant GitHub repo. For SDK licensing and commercial inquiries, contact ailia Inc. directly.