What would be the next technology within Xilinx FPGAs? Analog In-Memory Processing or Manycore architecture?

What is the next feasible technology that could be integrated with #Xilinx FPGAs?

Xilinx #Versal #ACAP was a surprise for engineers. I am not sure whether it will be as successful a product as the Zynq SoC, due to the complexity of its software architecture (see the article “Is the Software a Nightmare for Xilinx Versal ACAP?”). However, it could be suitable for many different applications if its engines are architected properly for the specific application (see the article “How to architect engines of Xilinx Versal ACAP for a specific application?”).

After integrating ARM processors within its FPGAs (the SoC, MPSoC, and RFSoC families), high-speed SerDes, High Bandwidth Memory (HBM) controllers, floating-point DSP engines, AI engines, a Network-on-Chip (NoC), etc., what feasible technology would you, as an FPGA developer or application engineer, like Xilinx to integrate next?

I personally would like to see “Analog In-Memory Processing” and a “Manycore architecture” within Xilinx FPGAs.

Please share your preferences with me in the comments.

Analog In-Memory Processing is a way to implement many parallel multiply-accumulate (MAC) computations directly in memory: instead of shuttling operands between memory and compute units, the products are formed and summed where the data is stored. For more information, see www.mythic-ai.com and www.gridgain.com.

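To make the idea concrete, here is a minimal sketch (Python/NumPy, with an illustrative function name and a simple additive-noise model, not any vendor's actual architecture or API) of a crossbar-style analog MAC array: the weights sit in memory as conductances, the input vector is applied as voltages, and every output column's multiply-accumulate happens in one parallel read.

```python
import numpy as np

# Hypothetical illustration of the analog in-memory MAC concept:
# weights are stored as conductances in a crossbar, input activations
# are applied as voltages, and each output current is the analog sum
# of element-wise products (Ohm's law + Kirchhoff's current law),
# i.e. an entire matrix-vector multiply in a single step.

rng = np.random.default_rng(0)

def analog_crossbar_matvec(weights, inputs, noise_std=0.01):
    """Model an idealized crossbar MAC with additive read noise.

    weights : (rows, cols) conductance matrix (the stored memory)
    inputs  : (rows,) input voltage vector
    Returns the (cols,) output current vector, one MAC result per
    column, all computed "simultaneously" as in an analog array.
    """
    ideal = inputs @ weights                      # all MACs at once
    noise = rng.normal(0.0, noise_std, ideal.shape)
    return ideal + noise                          # analog non-ideality

# Example: a 256-input, 64-output layer computed as one crossbar read.
W = rng.uniform(-1, 1, size=(256, 64))
x = rng.uniform(0, 1, size=256)
y = analog_crossbar_matvec(W, x)
print(y.shape)  # (64,) -- 256*64 = 16384 MACs in one analog operation
```

In a real analog array the column outputs would also pass through ADCs, which is what bounds the achievable precision.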

Manycore processors are specialized multi-core processors designed for a high degree of parallel processing, containing a large number of simpler, independent processor cores (from a few tens of cores to thousands or more).

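As a rough illustration of the programming model (a toy sketch only, using host CPU processes as stand-ins for the many simple cores), the same small kernel runs independently on each core's slice of the data:

```python
from multiprocessing import Pool, cpu_count

# Toy illustration of the manycore idea: many simple, independent
# cores each run the same small kernel on their own slice of the data.
# Real manycore devices have tens to thousands of cores; here we just
# use however many CPU cores the host exposes.

def kernel(chunk):
    # The "simple core": a small, independent computation per slice.
    return sum(v * v for v in chunk)

def run_manycore(data, n_cores):
    step = max(1, len(data) // n_cores)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(processes=n_cores) as pool:
        partials = pool.map(kernel, chunks)   # one task per "core"
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(run_manycore(data, cpu_count()))
```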


Theodore Omtzigt

Accelerating innovation: solving problems with high-performance compute

5y

Parallel software has to be written from the top down, not from the bottom up. The knowledge that has been accrued by MPI and million-core systems is constantly being reinvented by folks who are starting from the bottom. The synopsis of what the supercomputing guys figured out 30 years ago is that the domain needs to abstract its problem into operators and data structures, and that the parallel runtime owns the data structures on which these operators work. Any technology, like OpenCL or ACAP or Adaptiva, that creates a fabric of compute resources devoid of any application- or system-level algorithm requirements will always be sub-optimal compared to a system that adapts the fabric to the requirements of the application. The technology that the FPGA vendors need to deliver is logic fabrics that are much more flexible than the current LUT/DSP/BRAM structures. I have heard some rumblings through the grapevine that some new architectures are being prototyped by startups in Israel and Europe. Really looking forward to better FPGAs.
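
A minimal sketch of the top-down pattern this comment describes, with hypothetical names (ParallelRuntime, map_reduce): the application supplies only operators, while the runtime owns the partitioning and placement of the data structure.

```python
# Hypothetical sketch: the application is expressed as operators over
# a data structure, and the parallel runtime owns the partitioning and
# placement of that data structure, rather than the application
# managing raw compute resources directly.

class ParallelRuntime:
    """Owns the data layout; applications only supply operators."""

    def __init__(self, n_partitions):
        self.n_partitions = n_partitions

    def distribute(self, data):
        # The runtime decides placement; here, simple block partitioning.
        step = max(1, len(data) // self.n_partitions)
        return [data[i:i + step] for i in range(0, len(data), step)]

    def map_reduce(self, data, op, reducer):
        # Apply the domain operator partition by partition, then combine.
        partitions = self.distribute(list(data))
        partials = [reducer(op(x) for x in part) for part in partitions]
        return reducer(partials)

runtime = ParallelRuntime(n_partitions=8)
result = runtime.map_reduce(range(100), op=lambda x: x * x, reducer=sum)
print(result)  # 328350
```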

After inventing the chunk of Verilog-AMS that handles the A/D boundary, I expected to see more mixed-signal design/verification work. To this date I'm still seeing people who try to use analog techniques with digital, hacking around in the weeds with SPICE simulation. Interestingly, neural networks look like the same computational problem as mixed-signal circuit simulation, and the fact that no EDA or IC company has good mixed-signal tools makes me wonder if they are just making this up as they go along. Should you actually want to use an AMS approach to AI, I'd be happy to help.

The path of FPGAs has been to incorporate other technologies into the fabric. The answer to your either/or question, of course, is both. But the real need is software, software, software. I should just be able to take a program and compile it to the fabric; the tools should do almost everything for me. The one thing I'd like to see Xilinx and Intel do is use their own tools to do place and route. Why? Because by using their own FPGAs to accelerate their own P&R, they will learn what it takes to make that whole process easy. For example, right now I'm waiting on a service request (SR) for a problem with their SDAccel OpenCL compiler. If I worked at Xilinx and was working on putting their tools in hardware, I'd walk over to the compiler group, say "help me fix this," and it would be fixed in an hour. That fix would help all their customers by giving them a better compiler. But I'll have to wait, and now I'm stuck. They need to use their software for mission-critical acceleration. It's the only way to get the bulletproof tools their customers need.
