InferX Software makes AI Inference easy, provides more throughput on tough models, costs less, and requires less power.
AI Inference Acceleration
Top throughput on tough models.
More throughput for less $ & less watts.
Accelerate workloads & make your SoC flexible for changing needs.
eFPGA proven on 6/7, 12, 16, 22, 28, 40 & 180nm.
Flex Logix™ Technology
Inference and eFPGA are both data-flow architectures. A single inference layer can take over a billion multiply-accumulates. Our Reconfigurable Tensor Processor reconfigures the 64 TPUs and RAM resources to implement each layer efficiently with a full-bandwidth, dedicated data path, like an ASIC, then repeats this layer by layer. Flex Logix uses a breakthrough interconnect architecture that requires less than half the silicon area of a traditional mesh interconnect and fewer metal layers while delivering higher utilization and higher performance. The ISSCC 2014 paper detailing this technology won the ISSCC Lewis Winner Award for Outstanding Paper. The interconnect continues to be improved, resulting in new patents.
We can easily scale up our Inference and eFPGA architectures to deliver compute capacity of any size. Flex Logix does this using a patented tiling architecture with interconnects at the edge of the tiles that automatically form a larger array of any size.
TIGHTLY COUPLED SRAM AND COMPUTE
SRAM is closely coupled with our compute tiles through another patented interconnect. Keeping data in local SRAM rather than streaming it from DRAM is roughly 100x more energy efficient, and this close coupling of SRAM and compute is the source of much of our inference efficiency. The interconnect is also useful for many eFPGA applications.
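The energy argument above can be made concrete with a back-of-the-envelope calculation. The per-byte energy figures below are illustrative assumptions chosen to reflect the stated ~100x ratio, not Flex Logix measurements:

```python
# Rough energy comparison for feeding one inference layer from
# local SRAM vs. external DRAM. The pJ/byte costs are assumed
# values for illustration only.

SRAM_PJ_PER_BYTE = 1.0    # assumed cost of a local SRAM access
DRAM_PJ_PER_BYTE = 100.0  # assumed cost of a DRAM access (~100x)

def layer_data_energy_uj(bytes_moved: int, pj_per_byte: float) -> float:
    """Energy in microjoules to move a layer's data at a given cost per byte."""
    return bytes_moved * pj_per_byte / 1e6

layer_bytes = 8 * 1024 * 1024  # e.g. 8 MB of weights + activations per layer
sram_uj = layer_data_energy_uj(layer_bytes, SRAM_PJ_PER_BYTE)
dram_uj = layer_data_energy_uj(layer_bytes, DRAM_PJ_PER_BYTE)
print(f"SRAM: {sram_uj:.1f} uJ, DRAM: {dram_uj:.1f} uJ ({dram_uj / sram_uj:.0f}x)")
```

With billions of multiply-accumulates per layer, that 100x gap in data-movement energy dominates the power budget, which is why localizing data in SRAM matters.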
DYNAMIC TENSOR PROCESSOR
Our dynamic tensor processor features 64 one-dimensional tensor processors closely coupled with SRAM. The tensor processors are dynamically reconfigurable at runtime using our proprietary interconnect, enabling the multi-dimensional tensor operations required by each layer of a neural network model and delivering high utilization and high throughput.
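The reconfigure-then-execute flow described above can be sketched as a scheduling loop: for each layer, wire some number of the 64 TPUs and the SRAM into a dedicated data path, run the layer, and repeat. All names and structures here are hypothetical stand-ins, not the actual InferX programming interface:

```python
from dataclasses import dataclass

# Illustrative sketch of layer-by-layer reconfiguration.
# LayerConfig and run_model are invented for this example.

@dataclass
class LayerConfig:
    name: str        # e.g. "conv1"
    op: str          # e.g. "Conv", "MatMul"
    tpus_used: int   # how many of the 64 one-dimensional TPUs the layer maps to

def run_model(layers: list, total_tpus: int = 64) -> list:
    trace = []
    for layer in layers:
        assert layer.tpus_used <= total_tpus
        # 1. Reconfigure the interconnect: wire TPUs + SRAM into a
        #    dedicated, full-bandwidth data path for this layer.
        trace.append(f"configure {layer.tpus_used} TPUs for {layer.name}")
        # 2. Execute the layer ASIC-style on the fixed data path.
        trace.append(f"execute {layer.op} ({layer.name})")
    return trace

for step in run_model([LayerConfig("conv1", "Conv", 64),
                       LayerConfig("fc1", "MatMul", 32)]):
    print(step)
```

The key point the sketch captures is that the data path is fixed for the duration of each layer, so every layer runs at full bandwidth, while the hardware as a whole remains reusable across layers.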
SOFTWARE
Unlike solutions designed around AI model development and training, our inference accelerator starts with a trained ML model, typically in ONNX format, and generates a program that runs on our InferX accelerators.
Our eFPGA compiler has been in use by dozens of customers for several years. Software drivers will be available for common server operating systems and for real-time operating systems on MCUs and FPGAs.
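The compile flow described above (trained model in, accelerator program out) can be sketched at a high level. The graph representation, supported-op set, and "CFG/RUN" output format below are all invented for illustration; the real InferX toolchain and its instruction format are proprietary:

```python
# Hypothetical sketch: lower a trained model's graph (a minimal
# stand-in for an ONNX graph) into a per-layer program.

# Each entry: (layer name, ONNX-style op type)
trained_graph = [
    ("conv1", "Conv"),
    ("relu1", "Relu"),
    ("fc1", "Gemm"),
]

SUPPORTED_OPS = {"Conv", "Gemm", "Relu", "MaxPool"}  # illustrative subset

def compile_graph(graph):
    """Lower each graph node to one 'configure + run' step."""
    program = []
    for name, op in graph:
        if op not in SUPPORTED_OPS:
            raise ValueError(f"unsupported op: {op}")
        program.append(f"CFG {name} {op}")   # set up the TPU/SRAM data path
        program.append(f"RUN {name}")        # execute the layer
    return program

print("\n".join(compile_graph(trained_graph)))
```

Starting from a trained model rather than a training framework keeps the toolchain simple: the compiler only has to map a fixed graph of operators onto the hardware, one layer at a time.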
InferX PCI Express and M.2 offerings
The InferX X1 processor is in production and is available now in PCI Express (HHHL), M.2 (M+B key) and chip-level offerings.
SUPERIOR LOW-POWER DESIGN METHODOLOGY
Flex Logix has numerous architecture and circuit design technologies to deliver the highest throughput at the lowest power.
FLEX LOGIX PARTNERS WITH INTRINSIC ID TO SECURE eFPGA PLATFORM
Flex Logix Technologies announced that it has partnered with Intrinsic ID to ensure that any device using its eFPGA remains secure and can’t be modified maliciously, whether through physical attacks or remote hacking.
Using AI to Speed Up Edge Computing
AI is being designed into a growing number of chips and systems at the edge, where it is used to speed up the processing of massive amounts of data and to reduce power through partitioning and prioritization. That, in turn, allows systems to act on that data more rapidly.
FLEX LOGIX COLLABORATING WITH MICROSOFT TO HELP BUILD SECURE STATE-OF-THE-ART CHIPS FOR US DEPARTMENT OF DEFENSE (DOD)
Flex Logix® Technologies, Inc., the leading supplier of embedded FPGA (eFPGA) IP, architecture and software, announced today that it has been selected to be part of a team of microelectronics industry leaders, led by Microsoft, to build a chip development platform with the utmost regard for security, as demonstrated by the DoD RAMP Project. Flex Logix was chosen for its leading embedded FPGA (eFPGA) technology, which enables chips to be reconfigured after tape-out, allowing companies to adapt to new requirements and changing standards and protocols as needed.