Resources
InferX™ X1
- High Efficiency Edge Vision Processing based on Dynamic TPU Technology - Flex Logix CTO Cheng Wang
- The Flex Logix InferX X1: Pairing Software and Hardware to Enable Edge Machine Learning - Flex Logix Vice President of Software, Randy Allen
- InferX X1 Product Brief - Product Overview of the InferX X1 AI Accelerator
- InferX X1M Product Brief - X1 M.2 AI Accelerator
- InferX X1P1 Product Brief - X1 PCI Express AI Accelerator Card
- Linley Fall Processor Conference: A Flexible and Powerful Architecture for Edge AI
- Linley Spring Processor Conference: Easy-to-Use X1 Inference Compiler Software
- Linley Spring Processor Conference: Low-Power X1 M.2 Card
- Microprocessor Report Article
Videos
- February 2020: The Importance of Software for Architecting Inference Accelerators

Using AI to Speed Up Edge Computing
AI is being designed into a growing number of chips and systems at the edge, where it is used to speed the processing of massive amounts of data and to reduce power through partitioning and prioritization. That, in turn, allows systems to act on that data more rapidly.

Speeding Up AI Algorithms
AI at the edge is very different from AI in the cloud. Salvador Alvarez, solution architect director at Flex Logix, discusses why a specialized inference chip with built-in programmability is more efficient and scalable than a general-purpose processor, why high-performance models are essential for accurate real-time results, and how low power budgets and ambient temperatures can affect the performance and life expectancy of these devices.

Q&A with Sam Fuller from Flex Logix - InferX and Computer Vision Applications
We caught up with Sam to discuss what Flex Logix does, what the InferX platform is, how both the company and the platform differ from the competition, how easy it is to port models to the InferX platform, and more.