ailia SDK

The core ONNX inference API on which the rest of the ailia product line is built.

Getting Started

Choose your platform and follow the two steps below to run your first inference.

1. Install

Install the ailia Python package from PyPI. Python 3.6 or later is required.

pip3 install ailia
2. Run a Sample

ailia-models ships 400+ ready-to-run inference scripts. Clone once, install the shared requirements, then cd into any model folder and run — pass -v 0 for webcam input, -i image.png for a still, or skip the flag to use the bundled demo input. python3 launcher.py at the repo root opens a GUI that browses every model.

git clone https://github.com/ailia-ai/ailia-models.git
cd ailia-models
pip3 install -r requirements.txt
cd object_detection/yolox
python3 yolox.py -v 0

System Requirements

ailia SDK runs across desktop, mobile, and embedded platforms with a wide range of GPU back-ends.

Operating Systems

  • Windows 10 / 11
  • macOS 11 or later
  • Linux (Ubuntu 20.04+)
  • iOS 13+
  • Android 7+
  • Jetson, Raspberry Pi

Languages

  • Python 3.6 or later
  • C++17
  • C# / Unity 2021.3.10f1+
  • Kotlin / Java (JNI)
  • Dart / Flutter 3.19+

GPU Acceleration

  • NVIDIA CUDA
  • Apple Metal
  • Vulkan (cross-platform)
  • Intel MKL
  • NPU via NNAPI (Android)

Model Formats

  • ONNX (primary)
  • TFLite (via ailia TFLite Runtime)
  • GGUF (via ailia LLM)
  • Convert from PyTorch / TensorFlow / Keras

Use the API in Your Project

Minimal examples for loading an ONNX model and running inference.

import ailia
import numpy as np

# stream=None lets ailia derive the graph from the .onnx file
net = ailia.Net(stream=None, weight="model.onnx")

input_tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)
output = net.run(input_tensor)
print(output[0].shape)
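Real inputs need to be shaped like the zero tensor above before calling net.run. A minimal sketch of typical preprocessing with NumPy only (the nearest-neighbor resize and normalization here are illustrative, not the SDK's own preprocessing):

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Convert an HWC uint8 RGB image to a (1, 3, size, size) float32 tensor."""
    h, w, _ = img.shape
    # Crude nearest-neighbor resize via index sampling (illustration only)
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = img[ys][:, xs]
    x = resized.astype(np.float32) / 255.0   # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))[None, ...]  # HWC -> NCHW with batch dim
    return x

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
tensor = preprocess(img)
print(tensor.shape)  # (1, 3, 224, 224)
```

Many models additionally expect mean/std normalization; check each model's sample script for the exact values.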

API Reference by Platform

FAQ

Common questions from first-time ailia SDK users.

What is the difference between the evaluation license and a production license?

For the Python, Unity, Flutter, and JNI bindings, the evaluation license is downloaded automatically at runtime; for C++, it is valid for one month. It is intended for development and trial use only.

For commercial deployment, redistribution, or longer-term use, request a production license. See the ailia license terms for details.

How do I switch between CPU and GPU?

Pass an env_id to ailia.Net(). List available environments (CPU, CUDA, Metal, Vulkan, MKL, NNAPI) with ailia.get_environment_list(), then select the one you want.

By default, ailia chooses the fastest available environment for your platform.
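The selection logic can be sketched with plain Python over a mocked environment list (the dict fields below are illustrative placeholders, not the SDK's exact schema; substitute the entries returned by ailia.get_environment_list()):

```python
# Hypothetical environment list standing in for ailia.get_environment_list()
environments = [
    {"id": 0, "name": "CPU"},
    {"id": 1, "name": "CUDA"},
    {"id": 2, "name": "Vulkan"},
]

def pick_env(envs, preferred=("CUDA", "Metal", "Vulkan")):
    """Return the id of the first preferred back-end found, else CPU (id 0)."""
    for name in preferred:
        for env in envs:
            if name in env["name"]:
                return env["id"]
    return 0

env_id = pick_env(environments)
print(env_id)  # 1 -> CUDA on this mocked list
# net = ailia.Net(stream=None, weight="model.onnx", env_id=env_id)
```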

Where are model files stored after I download them?

Sample scripts in ailia-models download .onnx and .onnx.prototxt files into the model's own directory the first time you run them. Subsequent runs reuse the cached files.

For ailia Speech and ailia Voice, models are downloaded into ./models/ by default, configurable via initialize_model(model_path=...).

Can I run ailia SDK offline?

Yes, after the first run. The evaluation license and any auto-downloaded model files require an internet connection on first use; once cached, subsequent inference works offline.

For C++, the license is fetched once via download_license.py from the binding repository.
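The caching behavior the sample scripts follow can be sketched as a download-once helper (URL and paths below are placeholders, and fetch_model is a hypothetical name, not an SDK function):

```python
from pathlib import Path
from urllib.request import urlretrieve

def fetch_model(url: str, dest: Path) -> Path:
    """Download a model file only if it is not already cached on disk."""
    if not dest.exists():
        dest.parent.mkdir(parents=True, exist_ok=True)
        urlretrieve(url, dest)  # network needed only on this first run
    return dest
```

On every later call the existing file is returned without touching the network, which is why inference keeps working offline once the first run has completed.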

How do I convert a PyTorch or TensorFlow model to ONNX for ailia?

Export your model to ONNX with the framework's standard tooling (torch.onnx.export, tf2onnx, etc.), then generate the matching .onnx.prototxt using the script bundled with ailia SDK.

The model conversion tutorial walks through the process.

Where can I get help?

For bug reports and questions on the sample repositories, open an issue on the relevant GitHub repo. For SDK licensing and commercial inquiries, contact ailia Inc. directly.

Materials