Unified AI Model Generator

Platform for Edge AI Devices

Bringing the End-to-End AI/ML Model Lifecycle into one Unified Platform.

WHAT IS eFabric?

eFabric™: The Unified ML Factory for Edge Devices

eFabric™ is a "Unified ML Factory" for Edge AI: a low-code / no-code platform that builds and trains Artificial Intelligence models from raw data and deploys them directly onto Syntiant’s Neural Decision Processors (NDP) and Renesas RZ/V2L family chips.

Key Capabilities

End-to-End Lifecycle

eFabric consolidates the entire AI lifecycle from dataset management to training and deployment into a single seamless workflow, eliminating the need for external tools.

Zero-Friction Deployment

It bridges the gap between data science and embedded engineering, allowing developers to deploy hardware-ready models in minutes without complex firmware coding.

Optimized for the Edge

Creates models specifically tuned for "always-on" battery-powered devices (microwatt scale) rather than general cloud AI.

Deterministic AI Execution

eFabric eliminates deployment surprises by ensuring that the AI model and edge hardware used during testing and validation are carried unchanged into the production environment.

WHY eFabric?

Optimized for the Edge

Simplicity

A GUI-driven experience that allows software developers to deploy hardware-ready models without deep hardware knowledge.

Development Speed

Accelerate time-to-market with a single streamlined workflow from dataset import to silicon deployment. No complex toolchains or SDK coding needed.

Power Efficiency

Build models for power-efficient and memory-constrained chips as well as embedded Linux MPU chips.

Scalability

Accelerate development by starting with an EVK-based PoC, then scale seamlessly to production with our production-ready SoM.

Integration

Fast-track your product launch by embedding our production-ready SoM to deliver autonomous edge intelligence.

Know More

Experts deliver AI solutions with real impact

Contact us

THE DEVELOPER JOURNEY

Zero Friction Handover

A seamless journey from raw data to deployed silicon, consolidated into a single intuitive interface.

Data Ingestion

Import, organize, and label datasets, and manage them in a Project.

Pre-process

Configure pre-processing and
feature extraction.

Build Model

Choose an existing model architecture or design your own.

Train & Validate

Live training metrics, logs,
and validation.

Deploy

Export optimized model &
flash to chip.

Retraining

Improve models using refined datasets and optimized parameters.

Monitoring

Track, analyze, and optimize model performance.

Zero Friction Handover: From Cloud to Edge

THE HARDWARE ECOSYSTEM

Plug-and-Play AI Hardware

Featured Product: TML120 and R2L100 Evaluation Kit.

  • Powered by Syntiant NDP family / Renesas RZ/V family.
  • No-code / Low-code Environment: zero or very minimal firmware development needed.
  • Instant Flashing: flash models quickly using standard interfaces.
  • Specs: from 6.3 GOPS to 1+ TOPS of built-in AI acceleration.

TML120 SOM

TML120 - XENO+ Tiny ML Module

TML120 is a XENO+ Series Tiny ML module, a solderable module designed for building audio- and sensor-based Edge AI IoT devices.

TML120 Tiny ML Module

  • Compact 15mm × 15mm module
  • 28-pin solderable footprint
  • Integrated Neural Decision Processor
  • Supports CNN models with up to 1MB of weights
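The 1MB weight limit is the practical sizing constraint when choosing a model for the module. As a back-of-envelope illustration (not an eFabric tool; the byte-per-weight figures are assumptions about quantization, not documented specs), a fit check looks like:

```python
# Illustrative only: checks whether a model's weights fit the module's
# stated 1MB weight budget, assuming 8-bit quantized weights by default.
WEIGHT_BUDGET_BYTES = 1 * 1024 * 1024

def fits_budget(num_params: int, bytes_per_weight: int = 1) -> bool:
    """Return True if the weight storage stays within the 1MB budget."""
    return num_params * bytes_per_weight <= WEIGHT_BUDGET_BYTES

print(fits_budget(800_000))      # 800k int8 parameters fit
print(fits_budget(800_000, 4))   # the same model in float32 does not
```

The same count of parameters that fits comfortably as int8 overflows the budget fourfold as float32, which is why edge deployments typically quantize before flashing.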
Features
  • MCU: ARM Cortex-M23
  • Memory: 8MB NOR Flash
  • Audio-0 Interface: Audio port interface for external PDM mics
  • Audio-1 Interface: PDM/I2S/TDM for external audio input from mic/audio source
  • Audio-2 Interface: I2S/TDM for audio output to an external audio processor
  • I2C Interface: I2C interface for connecting sensors for time-series data input
  • GPIO: 3× GPIOs from the ML processor
  • Programming Interface: Serial UART-based programming interface with the TML flashing tool
  • Host Interface: Serial UART-based interface to report ML classification events to a host processor
  • Size: 15 × 15 mm
  • Operating Temp: -40 to 85 °C
  • Module Supply Voltage: 1.8V / 3.3V
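Since the module reports classification events to the host processor over a serial UART, the host side typically just parses a small event record per line. The actual TML120 wire format is not documented here, so the sketch below assumes a hypothetical line format `EVT,<class_id>,<label>,<confidence>` purely for illustration:

```python
# Illustrative only: the real TML120 host-interface protocol may differ.
# Assumes a hypothetical line format "EVT,<class_id>,<label>,<confidence>".
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClassificationEvent:
    class_id: int
    label: str
    confidence: float  # 0.0 .. 1.0

def parse_event(line: str) -> Optional[ClassificationEvent]:
    """Parse one UART line into an event; return None for non-event lines."""
    parts = line.strip().split(",")
    if len(parts) != 4 or parts[0] != "EVT":
        return None
    return ClassificationEvent(int(parts[1]), parts[2], float(parts[3]))

if __name__ == "__main__":
    # On real hardware these lines would arrive over the UART (e.g. via pyserial).
    for line in ["EVT,2,glass_break,0.94", "LOG,boot ok"]:
        evt = parse_event(line)
        if evt is not None and evt.confidence > 0.8:
            print(f"detected {evt.label} ({evt.confidence:.2f})")
```

A host application would loop over incoming serial lines and act on high-confidence events, leaving all inference on the module itself.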

EVK

TML120 Evaluation Kit (EVK)

  • The TML120 EVK is a powerful development platform designed to accelerate the creation of Audio and Sensor-based Edge AI / TinyML IoT applications.
  • Built around the TML120 TinyML module with Neural Decision Processor, the EVK enables developers, researchers, and product designers to quickly prototype and deploy intelligent edge devices.
  • The kit provides ready-to-use audio interfaces, sensor interfaces, and factory flashing tools, allowing users to focus on AI model development and application innovation rather than low-level embedded software development.

Key Benefits

  • Accelerates Edge AI product development
  • Ready platform for Audio and Sensor ML applications
  • Reduces embedded software complexity
  • Supports rapid PoC and prototype development
  • Flexible interfaces for custom sensors and audio devices

Target Applications

  • Noise suppression systems
  • Voice command detection
  • Human alert sound detection
  • Industrial machine vibration monitoring
  • Pest sound classification

R2L100 SOM

R2L100 (Vision) SOM

The XENO+ Vision ML (Machine Learning) SOM (System-on-Module) is a production-ready solderable module that can be used as the core CPU module for building Linux-based video/image edge AI/ML devices.

  • Supports 2MP to 5MP @ 30 FPS RGB/RGB-IR video/image input in MIPI CSI-2 format
  • Built-in AI engine with 1 TOPS for running DNN/CNN models on video/image data
Features
  • Main CPU: Dual ARM Cortex-A55 @ 1.2GHz
  • MCU: ARM Cortex-M33 @ 200MHz
  • RAM: 16-bit 2GB DDR4-1600
  • Flash: 16GB eMMC flash (up to 32GB)
  • Serial Flash: 16MB serial NOR boot flash
  • ISP: Simple ISP
  • AI Accelerator: DRP-AI Accelerator
  • Video CODEC: H.264 Enc/Dec 2K/30fps
  • OS: Linux OS
  • Graphics Engine: ARM Mali-G31 3D GPU
  • Camera Interface: 1× MIPI CSI-2 (4 lanes)
  • Display Interface: 1× MIPI DSI (4 lanes)
  • Ethernet: 2× RGMII Gigabit Ethernet interfaces
  • SD Interface: 1× SD interface for WiFi module or SD card
  • USB: 2× USB 2.0 for camera or LTE module
  • Others: 4× I2C, 5× UART, 2× SPI, 8× ADC, 2× CAN-FD

EVK

R2L100 Evaluation Kit (EVK)

  • The R2L100 EVK is a complete evaluation and prototyping platform designed to accelerate development of vision-based AI applications at the edge, with ready access to compute, camera, and connectivity interfaces.
  • Built to simplify development workflows, the EVK enables engineers to capture, process, and infer visual data in real time, eliminating the need for complex hardware bring-up.
  • With an optimized hardware-software stack, the platform supports rapid deployment of AI models, helping transition quickly from concept to production-ready systems.

Key Benefits

  • Ready-to-use development platform — reduces hardware design cycles
  • Real-time edge intelligence — no dependency on cloud processing
  • Flexible I/O ecosystem — supports diverse application needs
  • Scalable design path — from evaluation to deployment
  • Robust software support — ensures long-term product viability

Target Applications

  • Vision-enabled industrial automation
  • Smart edge cameras & surveillance systems
  • Robotics and machine vision
  • Intelligent traffic and mobility systems
  • Human detection and tracking systems

HARDWARE INTEGRATION

Real-Time Deployment & Validation

eFabric connects seamlessly to the TML120 and R2L100 EVKs via a simple USB interface for instant model testing.

Flash the optimized model binary to the TML/R2L100 EVKs in seconds directly from the GUI.

Validate edge-AI performance on actual hardware with live confidence scores and logs.

Verify latency, power profiles, and accuracy in the real-world deployment environment.

Seamlessly transition validated models to high-volume manufacturing with optimized deployment toolsets.

ONE PLATFORM

Syntiant NDP and Renesas RZ/V family

Built for Syntiant NDP processors and Renesas RZ/V Series edge AI Linux MPUs, covering Audio, Vision, and SLM workloads.

NDP 100

NDP 101

NDP 115

NDP 120

NDP 200

NDP 250

RZ/V2L

RZ/V2N

RZ/V2H

  • Rapidly expanding to support the full spectrum of ultra-low-power chips and other hardware; more coming soon.

CORE COMPETENCY

Audio Machine Learning

eFabric excels in Audio Classification, enabling developers to build robust models for complex auditory environments.

Sound Classification

Identify and classify diverse environmental sounds and events.

  • Baby cry & animal sounds
  • Alarm & glass break detection
  • Machinery fault noises

Keyword Spotting

Detect specific trigger words or phrases with high accuracy and low latency.

  • Wake word detection
  • Voice command interfaces
  • Multi-language support

Noise Suppression

Classify background noise and suppress it for clear audio.

  • Noise suppression headset
  • Echo & wind noise reduction
  • Real-time voice enhancement

Machine Sound Classification

Classify industrial machine sound patterns for operational health.

  • Machinery fault noises
  • Predictive maintenance alerts
  • Bearing & motor diagnostics
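Audio classifiers of this kind rarely consume raw waveforms; the Pre-process step turns audio into spectral features first. As a generic illustration of that idea (not eFabric's actual pre-processing pipeline; frame sizes are typical 25ms/10ms values at 16kHz, chosen here as assumptions), framing plus a log power spectrum looks like:

```python
# Illustrative sketch of common spectral feature extraction for audio
# classification; parameters are typical defaults, not eFabric settings.
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25ms/10ms @ 16kHz)."""
    n = 1 + max(0, len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def log_power_spectrum(frames, n_fft=512):
    """Windowed FFT of each frame, returned as a log power spectrogram."""
    win = np.hanning(frames.shape[1])
    spec = np.abs(np.fft.rfft(frames * win, n=n_fft)) ** 2
    return np.log(spec + 1e-10)

# One second of 16 kHz audio (random placeholder signal)
x = np.random.randn(16000)
feats = log_power_spectrum(frame_signal(x))
print(feats.shape)  # (98, 257): 98 frames x 257 frequency bins
```

The resulting 2-D spectrogram is the kind of input a small CNN keyword-spotting or sound-classification model is trained on.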

CORE COMPETENCY

Time-Series Machine Learning

Enable new application domains by supporting the generation of sensor-based machine learning models, allowing eFabric to handle complex temporal data patterns and anomaly classification.

Motion Recognition

Accelerometer-based activity tracking and movement classification.

Gesture Detection

Precise hand-gesture recognition for touchless control interfaces.

Vibration Analysis

Classify vibration patterns for anomaly detection in machinery.

Battery Health

RUL (Remaining Useful Life) & SOH (State of Health) prediction.
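Sensor time-series models like these typically consume fixed-length windows of samples, often reduced to simple per-window statistics. A minimal sketch of that windowing-plus-features step (generic illustration with assumed window sizes, not eFabric's pipeline):

```python
# Illustrative: fixed-length windowing of a 3-axis accelerometer trace,
# reduced to per-window mean/std/RMS features per axis.
import numpy as np

def window_features(samples, win=128, hop=64):
    """samples: (N, 3) accelerometer trace -> (num_windows, 9) features."""
    n = 1 + max(0, len(samples) - win) // hop
    wins = np.stack([samples[i * hop : i * hop + win] for i in range(n)])
    mean = wins.mean(axis=1)                 # (n, 3)
    std = wins.std(axis=1)                   # (n, 3)
    rms = np.sqrt((wins ** 2).mean(axis=1))  # (n, 3)
    return np.concatenate([mean, std, rms], axis=1)

acc = np.random.randn(1000, 3)   # placeholder 3-axis trace
feat_mat = window_features(acc)
print(feat_mat.shape)            # (14, 9): 14 windows, 9 features each
```

Each 9-feature row then serves as one training example for a classifier (motion, gesture, vibration) or a regressor (RUL/SOH).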

CORE COMPETENCY

Vision-Based Machine Learning

eFabric has evolved into a unified ML factory for all edge modalities. Next-wave enhancements enable advanced computer vision directly on edge chips.

Face Recognition

Secure identity verification and access control on low-power devices.

People Counting

Real-time occupancy tracking for smart buildings and retail.

Smart Surveillance

Automated anomaly detection and vision-based threat monitoring.

Unified ML Factory

One-platform for Audio, Sensor, and Vision model generation.

REAL-WORLD USE CASES

Intelligence in Action

Automotive (EVs)

Battery health degradation monitoring, RUL estimation, and state-of-health prediction; thermal stress detection; charging abnormality detection.

Advanced Audio Detection

Glass break detection, alarm detection, baby cry detection, animal sound recognition.

Voice & Control Interfaces

Wake word detection, Voice ID, and touchless gesture control.

Industrial IoT

Vibration analysis, acoustic fault detection, anomaly detection.

Vision-Based Security

Face recognition, people counting, occupancy tracking, smart surveillance.