[Image: Intel sign in front of building]

Intel’s Altera: Gamechanger or Gimmick?

Posted on May 15, 2024

With edge computing and AI becoming increasingly crucial parts of industrial computing, embedded chips need to evolve as well. Intel’s Agilex chips are designed to address this need. But do they actually bring anything new to the table? Let’s find out.

The Rise of FPGAs

Field Programmable Gate Arrays, or FPGAs, have steadily become a favorite in embedded computing, thanks to their flexibility and cost efficiency compared to traditional processors.

This is especially true for edge computing, which requires devices to process data at the source and therefore calls for highly distributed computing systems. Implementing such a system with standard processors is too expensive, both in capital cost and in power draw. ASICs perform better, of course, but demand an enormous upfront investment in engineering the design itself, making them ill-suited to any application not operating at massive scale. An FPGA, on the other hand, can be repurposed for any kind of workload simply by reprogramming it.
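To put rough numbers on that capital trade-off, here is a minimal break-even sketch in Python. Every figure in it, the one-time engineering (NRE) cost and the per-unit prices, is an illustrative assumption, not real vendor pricing.

```python
# Hypothetical break-even sketch: all figures are illustrative assumptions.
# ASICs carry a large one-time engineering (NRE) cost but cheap per-unit silicon;
# FPGAs skip the NRE but cost more per unit.

ASIC_NRE = 2_000_000   # assumed one-time design/mask cost in dollars
ASIC_UNIT = 10         # assumed per-chip cost at volume
FPGA_UNIT = 150        # assumed per-chip cost for a comparable FPGA

def total_cost_asic(units: int) -> int:
    return ASIC_NRE + ASIC_UNIT * units

def total_cost_fpga(units: int) -> int:
    return FPGA_UNIT * units

# Break-even volume: the unit count where the ASIC's NRE is amortized away.
break_even = ASIC_NRE / (FPGA_UNIT - ASIC_UNIT)
print(f"ASIC only wins past ~{break_even:,.0f} units")  # ~14,286 with these numbers
```

Under these assumed figures, the ASIC route only pays off past roughly fourteen thousand units, which is why FPGAs dominate at the smaller scales typical of embedded deployments.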

AI-Infused FPGAs

With AI algorithms becoming a greater part of the modern industrial landscape, there has been a growing demand for AI chips as well. The problem is that most tasks do not require AI processing, so a dedicated AI chip alone rarely justifies its cost.

To solve this conundrum, Intel has “infused” AI into its new Agilex FPGAs. But what does that mean?

As you may know, FPGAs don’t have the specialized, fixed-function design seen in other chips. Instead, they contain many blocks of logic, memory, and other resources that can be connected through programming to serve different purposes.
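A toy model can make this concrete. The sketch below treats a logic block as a look-up table (LUT), the basic element of FPGA fabric, and shows how loading a different truth table “reprograms” the same block. It is a conceptual illustration in plain Python, not real FPGA tooling.

```python
# Conceptual sketch of a look-up table (LUT), the basic logic element in an FPGA.
# "Programming" the FPGA amounts to loading new truth tables and routing, so the
# same silicon can implement entirely different logic.

from itertools import product

def make_lut(truth_table: dict[tuple[int, int], int]):
    """Return a 2-input logic function defined by its truth table."""
    return lambda a, b: truth_table[(a, b)]

# Configure the same "block" as an AND gate...
and_gate = make_lut({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
# ...then "reprogram" it as XOR by loading a different table.
xor_gate = make_lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})

for a, b in product((0, 1), repeat=2):
    print(a, b, "AND:", and_gate(a, b), "XOR:", xor_gate(a, b))
```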

With Intel’s new FPGA design, AI processing blocks are also distributed throughout the chip and can be accessed in the same way. This gives the chip the flexibility to adapt to any AI application without sacrificing performance in other areas.
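What those AI blocks accelerate, at bottom, is low-precision multiply-accumulate math. The sketch below is a plain-Python stand-in for the kind of INT8 dot product such hardware blocks execute; an actual deployment would go through vendor toolchains rather than NumPy.

```python
import numpy as np

# Illustrative only: a software stand-in for the low-precision multiply-accumulate
# (MAC) math that FPGA AI blocks accelerate in hardware.

def int8_dot(a: np.ndarray, b: np.ndarray) -> int:
    """INT8 dot product with a wide accumulator, as hardware MAC arrays use."""
    assert a.dtype == np.int8 and b.dtype == np.int8
    # Accumulate in 32 bits so products of 8-bit values never overflow.
    return int(np.sum(a.astype(np.int32) * b.astype(np.int32)))

weights = np.array([12, -7, 3, 90], dtype=np.int8)
activations = np.array([5, 33, -2, 4], dtype=np.int8)
print(int8_dot(weights, activations))  # 60 - 231 - 6 + 360 = 183
```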

So far, Intel’s Altera-branded FPGAs are the only ones that possess this capability, making them the only real choice for AI-based applications.

The Arc Factor

For a long time, only two companies have produced discrete GPUs: Nvidia and AMD. As a result, the market has struggled with pricing, since demand typically outpaces production. And contrary to popular expectations, it is not just gaming that needs discrete graphics cards.

Many commercial computers require powerful graphics processing as well. With Intel releasing its own line of GPUs, we can expect availability to improve, pushing prices down too.

Intel’s Arc GPUs are also well integrated with its chipsets and processors, which can mean smoother performance for embedded PCs. And while Arc cards initially rolled out in limited volumes, Intel has since stepped up production to meet the needs of the embedded market as well.

Many IoT devices and digital signage systems can take advantage of discrete graphics cards to offer better performance. Intel’s new FPGAs can help accelerate AI and other processing tasks as well, letting an all-Intel loadout excel in multiple scenarios.

Applications for the Agilex 5 FPGAs

Intel’s new Agilex 5 FPGAs are looking to become the de facto choice across multiple industry verticals, and they boast the features to make that possible.

FPGAs are already known for their modular approach, and the new Altera chips take this further with AI and graphics blocks integrated into the fabric. This means that even without a discrete GPU, these FPGAs can perform well in modern workloads, crunching datasets and graphics alike without slowdown.

As a result, the Agilex 5 FPGAs are well suited to a wide range of embedded applications, from industrial automation to retail and healthcare, whether the task is AI-based object recognition or graphics-heavy imaging.

Should You Switch to Intel’s FPGA Chips?

For many applications, a flexible FPGA is more efficient than a standard processor. And since FPGAs can be reprogrammed, investing in FPGA chips rarely locks you into a single use case.

This is especially true if your application looks to leverage edge computing or AI algorithms, which standard architectures are ill-suited to. Depending on the setup in question, it might even be feasible to add an FPGA as an accelerator for certain workloads while a standard chip handles routine tasks, as sketched below.
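Here is a minimal sketch of that hybrid pattern, assuming a hypothetical fpga_offload entry point standing in for whatever runtime the board’s vendor actually provides.

```python
# Sketch of the hybrid pattern described above: routine work runs on the host CPU,
# and selected heavy batches are handed to an FPGA accelerator. `fpga_offload` is
# a hypothetical stand-in for a real vendor runtime, not an actual API.

from typing import Callable, Sequence

def fpga_offload(batch: Sequence[float]) -> list[float]:
    # Placeholder: a real system would enqueue this batch to the FPGA here.
    return [x * 2.0 for x in batch]

def cpu_process(batch: Sequence[float]) -> list[float]:
    # Routine path: small batches stay on the CPU, where latency wins.
    return [x * 2.0 for x in batch]

def process(batch: Sequence[float],
            accelerate: Callable[[Sequence[float]], list[float]] = fpga_offload,
            threshold: int = 1_000) -> list[float]:
    """Dispatch large batches to the accelerator, small ones to the CPU."""
    if len(batch) >= threshold:
        return accelerate(batch)
    return cpu_process(batch)

print(process([1.0, 2.0]))  # small batch: handled on the CPU path
```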
