M1076 V2

The M1076 Mythic AMP™ delivers up to 25 TOPS in a single chip for high-end edge AI applications.

Overview

Breakthrough AI inference at the edge

Achieve the full potential of your AI applications with the Mythic Analog Matrix Processor (Mythic AMP™). The M1076 AMP uses the groundbreaking Mythic Analog Compute Engine (Mythic ACE™) to deliver the compute resources of a GPU at up to 1/10th the power consumption, all in a single chip. This means your edge device can run complex AI applications, or multiple applications, at higher resolutions and faster frame rates for better inference results.

Lower power, superior performance

Mythic AMPs are designed as an array of compute tiles. At the heart of each AMP tile is the Mythic Analog Compute Engine (Mythic ACE™) which integrates a flash memory array and ADCs that combine to store the model parameters and perform low power, high-performance matrix multiplication. Each Mythic ACE is complemented by a digital subsystem that includes a 32-bit RISC-V nano-processor, SIMD vector engine, 64KB of SRAM, and a high-throughput network-on-chip (NoC) router. The result is an Analog Matrix Processor that delivers power-efficient AI inference at up to 25 TOPS. Edge devices can now deploy powerful AI models without the challenges of high power consumption, thermal management, and form factor constraints.
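The analog compute-in-memory idea described above can be illustrated with a short simulation: model weights sit in a memory array, the multiply-accumulate happens in the "analog" domain, and an ADC quantizes the result back to digital. This is a minimal conceptual sketch, not Mythic's actual implementation; the function name, ADC bit width, and quantization scheme are all assumptions for illustration.

```python
import numpy as np

def analog_matmul(weights, activations, adc_bits=8):
    """Conceptual model of compute-in-memory matrix multiplication.

    Hypothetical sketch: weights act as stored conductances in a flash
    array, the dot product accumulates in the analog domain, and an ADC
    quantizes the analog result to 2**adc_bits discrete levels.
    """
    # Ideal analog accumulation: one multiply-accumulate per flash cell.
    analog_out = weights @ activations

    # ADC stage: quantize the continuous result to a fixed set of levels.
    levels = 2 ** adc_bits
    lo, hi = analog_out.min(), analog_out.max()
    scale = (hi - lo) / (levels - 1)
    quantized = np.round((analog_out - lo) / scale) * scale + lo
    return quantized

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))   # model parameters held in the array
x = rng.standard_normal(32)         # input activations
y = analog_matmul(W, x)
print(y.shape)
```

The sketch shows why the approach is power-efficient in principle: the matrix multiply requires no digital memory fetches for the weights, and only the final ADC conversion crosses back into the digital subsystem.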

Designed to scale

The Mythic AMP™ is designed to provide powerful compute in small edge devices and easily scales to deliver server-class compute in data centers. Use a single AMP to process a single stream or easily increase system performance with multiple AMPs on a board for multiple streams, ultra HD resolution, and multiple neural network applications. Mythic provides developers with the flexibility to future-proof their designs with more AMPs as more workloads require increased performance.

Architected for guaranteed results

The efficient dataflow architecture of the Mythic AMP™, combined with its powerful model optimization suite and efficient graph compiler, provides deterministic execution of AI applications. AI system developers can be confident that when they design their product for a targeted accuracy, frame rate, latency, or power profile, their performance targets are guaranteed at product deployment.
