Resources

Using AI to Speed Up Edge Computing

AI is being designed into a growing number of chips and systems at the edge, where it is used to speed up the processing of massive amounts of data and to reduce power through partitioning and prioritization. That, in turn, allows systems to act on that data more rapidly.
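One way to picture the partitioning-and-prioritization idea is a small sketch like the one below: data is scored by a local inference step, and only high-priority items are acted on immediately while the rest are deferred to low-power batch handling. This is an illustrative assumption of the general pattern, not Flex Logix's or any specific product's implementation; the names, scores, and threshold are hypothetical.

```python
# Minimal sketch (assumptions: scores come from a hypothetical local
# inference step; the threshold and names are illustrative only).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Frame:
    source: str   # e.g. a camera or sensor ID (hypothetical)
    score: float  # importance/confidence assigned by local inference (assumed)


def partition(frames: List[Frame], threshold: float = 0.8) -> Tuple[List[Frame], List[Frame]]:
    """Split frames into (urgent, deferred) by priority score."""
    urgent = sorted((f for f in frames if f.score >= threshold),
                    key=lambda f: f.score, reverse=True)
    deferred = [f for f in frames if f.score < threshold]
    return urgent, deferred


if __name__ == "__main__":
    frames = [Frame("cam0", 0.95), Frame("cam1", 0.40), Frame("cam2", 0.85)]
    urgent, deferred = partition(frames)
    for f in urgent:
        print(f"act now on {f.source} (score {f.score:.2f})")
    print(f"{len(deferred)} frame(s) deferred to low-power batch processing")
```

In this pattern, only the urgent partition is processed or transmitted right away, which is the mechanism the paragraph above describes for cutting power and latency at the edge.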

Speeding Up AI Algorithms

AI at the edge is very different from AI in the cloud. Salvador Alvarez, solution architect director at Flex Logix, talks about why a specialized inferencing chip with built-in programmability is more efficient and scalable than a general-purpose processor, why high-performance models are essential for accurate real-time results, and how low power and ambient temperature can affect the performance and life expectancy of these devices.

Q&A with Sam Fuller from FlexLogix - InferX and computer vision applications

We caught up with Sam to discuss what Flex Logix does, what the InferX platform is, how both the company and the platform differ from the competition, how easy it is to port models to the InferX platform, and more.