On the Horizon for Software Development

Here is the looming question for those of us working in the field: how many more years will humans be writing and maintaining lines of code? Outside of certain domains, such as blockchain algorithms, that still call for the power of an actual human brain, which remains far more capable than the best and most powerful silicon chips, the generation of code is bound to be taken over, in leaps and bounds, by AI and machine learning platforms over the next several years.

Let us consider what software is in the first place: a set of instructions that humans can fathom at the source-code level, and that at the lowest level is just the flipping of switches on and off, the proverbial zeros and ones of machine language, which are not so intelligible to most people. Most people working in the plethora of extant programming languages are not savants who could easily write machine code, and how many really write at the next level up, the low-level assembly language some of us worked in back in the 1970s and 1980s?
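
To make those layers of abstraction concrete, here is a minimal sketch using Python's built-in dis module. It shows a single line of human-readable code alongside the lower-level instructions the interpreter actually executes, an analogue (not an exact equivalent) of the assembly and machine-code layers described above:

```python
import dis

def add_numbers(a, b):
    # A line of high-level code a human can read at a glance.
    return a + b

# Disassemble the function to see the lower-level bytecode instructions
# the Python interpreter executes for that one readable line.
dis.dis(add_numbers)
```

Running this prints the bytecode operations (loads, an add, a return) that stand between the code we write and the switches being flipped underneath.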

So, given this prognostication, which is arguably a 'self-fulfilling prophecy' already carved in cyber-stone by the authorities writing on the subject, how will human brains stay in the loop while machine learning and AI platforms still need guidance, before they become untethered and fully autonomous? That point could still be centuries away, according to those same authorities. Voice recognition is already fairly mature, and direct brain-wave interfaces may be on the horizon. How many miles away from that horizon are we right now?

Some of us really enjoy working with code directly, but producing code with zero defects can be challenging, depending on the complexity of the instruction sets required, at least in the first draft and then in the iterations before it makes it to production. What 'machine-generated' code promises, at least ostensibly, is to take out the 'human error' component. Still, there is always the potential for something getting lost in translation, and the dependency on entirely logical specifications coming from the human side is perhaps a perpetual vulnerability. The best of both worlds could be the melding of the human brain with AI to iron out the vulnerabilities on both sides. Whatever the case may be, a brighter future should be coming, despite the ongoing paranoia about 'evil' AI entities. And how is true intelligence ever going to be an attribute of electronic impulses in silicon chips, when these are not alive in the sense that humans with their powerful brains are?

Software engineering as a discipline is very recent compared with structural engineering, given that the Great Pyramid is dated to more than 4,500 years ago. One might even consider it a nascent field of endeavor. The first generation of commercial digital computers as we know them hit the market in 1951, which is recent historically vis-à-vis technology in general. Yes, the power of computer processors has exploded at an accelerating pace, but true 'artificial intelligence' has not made the same strides since Turing's inroads in the effort against the Enigma machine. Getting any machine to mimic the human brain, with its roughly 86 billion neurons and an average of 7,000 synaptic connections per neuron, requires technology that is currently not available to us. A theorized organic processor built on carbon-based rather than silicon-based technology, an artificial 'brain', may be realized before current processing technology ever reaches human-brain-level power. Other organs are already being grown on scaffolds, so a brain is just one logical progression from a heart, kidney, or liver.
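
For a rough sense of the scale involved, here is a minimal back-of-the-envelope tally in Python, using only the approximate figures quoted above (86 billion neurons, about 7,000 synapses each); these are averages cited for illustration, not measurements:

```python
# Back-of-the-envelope tally of synaptic connections, using the
# approximate figures quoted above (assumptions, not measurements).
neurons = 86_000_000_000          # ~86 billion neurons
synapses_per_neuron = 7_000       # average synaptic connections per neuron

total_connections = neurons * synapses_per_neuron
print(f"~{total_connections:.1e} synaptic connections")  # prints ~6.0e+14
```

That works out to on the order of six hundred trillion connections, which helps explain why mimicking the brain with current processing technology remains out of reach.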
