Understanding AI Inferencing
Modern Artificial Intelligence (AI) systems use a paradigm called machine learning (ML), which typically comprises two components: training and inferencing. Training is the highly computationally intensive process in which the machine (computer) learns how to perform a task. ML training is usually performed in very large-scale cloud computing systems and can take a very long time (weeks or months) even when running on very high-performance hardware. The output of the training process, a trained ML model, can then be deployed across many systems for inference processing. Inference processing, or inferencing, refers to the process of producing a response to a stimulus based on training from example data sets. Example inferencing tasks include object or face detection in images or video, understanding human speech, and identifying cancerous cells in X-ray images.
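The training/inference split described above can be sketched with a toy example. This is purely illustrative: the weights, the logistic model, and the feature values are hypothetical stand-ins for what a real training run would produce, and this is not an InferX API.

```python
import numpy as np

# Hypothetical weights produced by an offline training run (illustrative
# values only). In a real system, these are the output of weeks or months
# of training on large-scale data center hardware.
weights = np.array([0.8, -0.4, 0.3])
bias = -0.1

def infer(features: np.ndarray) -> float:
    """Inference: apply the already-trained weights to a new input.

    This step is cheap compared to training, which is why it can run
    on small edge devices rather than data center servers.
    """
    logit = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability in (0, 1)

# A new, previously unseen input (e.g., features extracted from an image)
score = infer(np.array([1.0, 0.5, 2.0]))
print(f"detection probability: {score:.3f}")
```

The key point is that `infer` never touches the training data: the learned model is a small set of numbers, and inferencing is just a fast evaluation of that model against fresh inputs.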
AI training can be very computationally intensive, taking weeks or months to complete even when running on large-scale data center servers.
AI inferencing at the edge has historically been accomplished with GPU-based accelerator solutions. AI inferencing does not require the high-performance, data-center-class systems needed for AI training, but it does require much higher performance than standard CPUs can provide. GPU-based solutions are difficult to program, expensive, and power hungry. For AI inferencing to flourish, a new solution is required.
InferX Provides the Best Inference Solution
Inference processing, when properly accelerated, requires far less computation than training and can typically be completed in a fraction of a second using InferX AI acceleration technology.
The Flex Logix InferX AI acceleration technology is designed to accelerate AI applications at the edge of the Internet. Edge devices typically have stringent power-dissipation, size, and cost requirements. The InferX technology compresses the trillions of operations required for AI inferencing into a very compact and efficient AI accelerator, bringing AI capabilities such as real-time vision, which would have required a supercomputer just a few years ago, within the reach of any company's budget.
Flex Logix Solutions for Edge Inferencing
InferX Family of Edge Inferencing Solutions
InferX X1 Delivers More Throughput/$ Than Tesla T4, Xavier NX and Jetson TX2
Inference-optimized solutions like the InferX X1 are designed to be very silicon efficient. Compared with GPU approaches, the silicon savings are significant.
Flex Logix Technologies announced that it has partnered with Intrinsic ID to ensure that any device using its eFPGA remains secure and can’t be modified maliciously, whether through physical attacks or remote hacking.
AI is being designed into a growing number of chips and systems at the edge, where it is being used to speed up the processing of massive amounts of data, and to reduce power by partitioning and prioritization. That, in turn, allows systems to act upon that data more rapidly.
Flex Logix Collaborating with Microsoft to Help Build Secure State-of-the-Art Chips for US Department of Defense (DoD)
Flex Logix® Technologies, Inc., the leading supplier of embedded FPGA (eFPGA) IP, architecture and software, announced today that it has been selected to be part of a team of microelectronics industry leaders, led by Microsoft, to build a chip development platform with the utmost regard for security, as demonstrated by the DoD RAMP Project. Flex Logix was chosen for its leading embedded FPGA (eFPGA) technology, which enables chips to be reconfigurable after tape-out, allowing companies to adapt to new requirements and to changing standards and protocols as needed.