Rapid Application Prototyping and AI

AI is a complex technology, but have we thought about it in terms of Rapid Application Prototyping? So far, no. Development still revolves around large releases, where even bug fixes can take a long time; sometimes a bug cannot be fixed at all, and the feature is withdrawn until the next large release, after the architectural changes needed to build it again. Yet this is what is being labelled the adoption of AI within industries and by public consumers, to solve problems that currently need human intervention.

An example of AI adoption can be found below:

https://www.hpe.com/in/en/solutions/artificial-intelligence/nvidia-collaboration.html?jumpid=ps_oxjugq8la_aid-521080176&ef_id=Cj0KCQiAo5u6BhDJARIsAAVoDWtSA5IUM_lLQ-kW6ycfCVPaSCOAg9ofqCz8wFLDj82hjieOA6oQm6YaAgVfEALw_wcB:G:s&s_kwcid=AL!13472!3!702418234206!p!!g!!ai%20models!21380592857!164370432140&gad_source=1&gclid=Cj0KCQiAo5u6BhDJARIsAAVoDWtSA5IUM_lLQ-kW6ycfCVPaSCOAg9ofqCz8wFLDj82hjieOA6oQm6YaAgVfEALw_wcB

The reason RAD (Rapid Application Prototyping) might not be feasible for AI is the computing needed to develop these models, whether for training or for deployed solutions. The greater need is to limit computing, not just by managing workloads but by making designable components that are plug and play.
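As a sketch of what "plug and play" components could mean in practice, the toy pipeline below chains independently replaceable stages behind one shared interface. All names here (`ModelComponent`, `Scaler`, `Thresholder`) are hypothetical illustrations, not from any real framework:

```python
from typing import List, Protocol


class ModelComponent(Protocol):
    """Hypothetical plug-and-play contract: anything that maps an
    input vector to an output vector can be swapped in or out."""

    def predict(self, features: List[float]) -> List[float]: ...


class Scaler:
    """Toy component: rescales every feature by a fixed factor."""

    def __init__(self, factor: float) -> None:
        self.factor = factor

    def predict(self, features: List[float]) -> List[float]:
        return [x * self.factor for x in features]


class Thresholder:
    """Toy component: turns scores into 0/1 decisions at a cutoff."""

    def __init__(self, cutoff: float) -> None:
        self.cutoff = cutoff

    def predict(self, features: List[float]) -> List[float]:
        return [1.0 if x >= self.cutoff else 0.0 for x in features]


def pipeline(components: List[ModelComponent],
             features: List[float]) -> List[float]:
    """Run components in sequence; each stage can be replaced
    independently, which keeps prototyping iterations cheap."""
    for component in components:
        features = component.predict(features)
    return features


print(pipeline([Scaler(2.0), Thresholder(1.0)], [0.2, 0.6, 0.9]))
# doubles each input, then thresholds at 1.0 -> [0.0, 1.0, 1.0]
```

The point of the shared interface is that swapping one stage never forces retraining or redesigning the others, which is the opposite of the large-release cycle described above.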

Summary: Rapid Application Prototyping grew out of XP (eXtreme Programming) practices, where test-driven development is the key. AI, by contrast, is assumed to require training over billions of parameters and petabytes of data at a minimum. But is a design possible where individual components need only around 100 K parameters and gigabytes of data? We have seen in the past that concepts like microservices were taken too far, and that very excess is the risk with this approach too.
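To make the 100 K-parameter scale concrete, the snippet below counts the parameters of a small fully connected network. The layer sizes are illustrative assumptions, not figures from this article; they simply show that a modest component lands in roughly that range:

```python
def mlp_param_count(layer_sizes):
    """Parameters in a fully connected network:
    each layer contributes (inputs * outputs) weights + (outputs) biases."""
    return sum(
        n_in * n_out + n_out
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )


# Hypothetical small component: 64 inputs, two hidden layers of 256, 10 outputs.
sizes = [64, 256, 256, 10]
print(mlp_param_count(sizes))
# (64*256 + 256) + (256*256 + 256) + (256*10 + 10) = 85002, i.e. ~85 K
```

A component at this scale can plausibly be trained on gigabytes of data on commodity hardware, which is what would make a rapid-prototyping loop feasible at all.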
