Do we still need Engineers, after all everything now is modularised and “plug and play”?


I am going to use some generalisations, but bear with me, please.

Think of every technology device you own. How many of its features and capabilities do you actually use? How many services keep running in the background? How many unpatched applications are installed, and what firmware needs updating? Many of us just use our devices, but Chief Information Security Officers and their staff worry about such things.

It’s not just security; it’s also a case of us not understanding what the device is capable of. Although that device could control a spaceship (albeit some time ago), we now use it to control a home camera (or similar).

How much time, effort, upheaval and risk does our business suffer because that “module” is “closed”? When we need to change how we work, we might replace them all, or pay someone to do a bespoke development. Then it’s not supported, and when the next upgrade/patch arrives, it stops working.

You can likely guess where I am going with this.

I prefer utility devices that can perform multiple, different tasks. The hardware might cost a lot more than that really cost-effective dongle, but the operating system is open source, you can own the development work, and you can plan upgrades.

But then you need engineers who can do more than “plug and play”.

Is Artificial Intelligence (A.I.) a Threat?

What is A.I.? I am going to generalise again, and wander randomly for a minute or two.

I have only really considered Machine Learning (M.L.). I know there is Deep Learning and Neural Networks, but (so far) having a basic understanding of M.L. has been sufficient for me.

So what's "in" A.I.?

I am pretty good on the hardware side of things, but I have not done any coding for a very long time.

i.e. the technology world that I “touch” usually involves some type of “end point”, network, database, application and now A.I., or in my case, really M.L.

The thing I am missing is coding. I am not a programmer, but if I did want to do any coding, I would google for some code, maybe try ChatGPT, and see what the code does. I use Raspberry Pis, and for the limited coding I do they are really good.

A note re ChatGPT. I love ChatGPT, and yes, I know it makes mistakes, and there is a good point in that: if you have an idea of what an answer should be, or which laws/equations/definitions should be involved, and ChatGPT comes up with a weird answer, then hopefully you will know enough to spot the error.

But to try and learn a bit about Python, I recently joined a large Python Meetup Group. My thinking was that to be more "hands-on" with M.L. I needed to learn a bit about programming, and Python seems the most popular language that I keep coming across.

... again, just chatting here. I was going to use PyCharm as my IDE. I have a Windows laptop (I think I am the only person at my Python Meetup who has one). Anyway, before I went, I installed a VM with Ubuntu, and PyCharm, and I struggled. I can't remember why, but I did not get anywhere very quickly.

I then installed Microsoft Visual Studio. Wow, that was easy. There is lots of online help, YouTube etc. Plus the Visual Studio IDE seems to realise I am new to Visual Studio, let alone Python.

The result was that at my second Python Meetup Group meeting, I was one of only a handful of the roughly 47 attendees who had the environment prepared and ready for that night's "work/project".

I learned some new “words”: Vector Databases and Chunk Storage. While everyone else had experience with those, just learning those new concepts made attending the Meetup worthwhile.
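To give a rough feel for those two concepts, here is a minimal, self-contained sketch I could imagine playing with: it chunks a piece of text and does a toy vector search. The hash-based "embeddings" and the function names are entirely illustrative; a real system would use a learned embedding model and a proper vector database.

```python
import hashlib
import math

def chunk_text(text, chunk_size=40):
    # "Chunk storage": split a long document into fixed-size pieces.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def toy_embedding(chunk, dims=8):
    # Stand-in for a real embedding model: derive a fixed-length vector
    # from a hash of the chunk. Real systems use learned embeddings.
    digest = hashlib.sha256(chunk.encode()).digest()
    return [digest[i] / 255.0 for i in range(dims)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# "Vector database": here just a list of (chunk, vector) pairs in memory.
document = "Sensors send time-stamped readings to a gateway, which forwards them to the back-end."
store = [(c, toy_embedding(c)) for c in chunk_text(document)]

query_vec = toy_embedding("time-stamped readings")
best = max(store, key=lambda item: cosine_similarity(item[1], query_vec))
print("Closest chunk:", best[0])
```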

So, what’s the point of this preamble (and no, I am not talking about networks)? My point is that we should not fear A.I.

Those of us with experience don’t need to be as good as the new coders who have just joined, but if we can understand the concepts, and not look lost, then we can be good team players.

We can help our colleagues by explaining some of the things we know about, and how those things will affect what they are doing.

M.L. and Training Data

I want to highlight how important accurate data is for training. In the early days of the IoT we used low-capacity networks and resource-constrained end-points. (A lot of my background over the past few years has been military and civilian IoT-type networks.)

To enable efficient use of that bandwidth, early end-points would have some remote compute capability. Those end-points would receive a lot of data (think training data), but their role (in those days) was just to monitor for a condition and report it.

In effect, the vast majority of training data that might have been available for an A.I. system to learn from was "designed out" at the end-point.

I am still seeing this where designers are concerned about constrained or congested networks. They are designing systems where the end-point still decides what data to send to the back-end, and the reasoning for this is that the important “thing” is to raise an alert.

My approach to this is different. I am suggesting that when the network becomes busy, the “superfluous” data is not sent immediately, but is saved locally/remotely at the end-point. It can then be transmitted at a later date, when network conditions allow.
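As a minimal sketch of that store-and-forward idea (the class and method names are hypothetical, and the "network busy" flag is simulated rather than measured):

```python
import json
import time
from collections import deque

class StoreAndForwardEndpoint:
    """Raises alerts immediately, but buffers raw readings when the
    network is busy so the full training data can be sent later."""

    def __init__(self, alert_threshold):
        self.alert_threshold = alert_threshold
        self.buffer = deque()  # would be local non-volatile storage on a real device

    def handle_reading(self, value, network_busy, send):
        reading = {"value": value, "source_ts": time.time()}
        if value > self.alert_threshold:
            send({"type": "alert", **reading})   # alerts always go out straight away
        if network_busy:
            self.buffer.append(reading)          # keep the raw data for later
        else:
            send({"type": "data", **reading})

    def drain(self, send):
        # Called when the network is quiet: forward the buffered readings.
        while self.buffer:
            send({"type": "backfill", **self.buffer.popleft()})

# Toy usage: pretend "send" just prints the JSON payload.
endpoint = StoreAndForwardEndpoint(alert_threshold=80)
endpoint.handle_reading(95, network_busy=True, send=lambda m: print(json.dumps(m)))
endpoint.handle_reading(42, network_busy=True, send=lambda m: print(json.dumps(m)))
endpoint.drain(send=lambda m: print(json.dumps(m)))
```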

Many of you will have already seen this, but it should “make the point” re training data:

But where do you route your Training Data?

Ingesting data – where?

Data directly ingested into the application

The fastest way to process something within the application is to ingest the data straight from the sensors/gateway into the application.

Obviously, that is a risky process, because you have not saved that data in non-volatile storage, so how would you do an accurate recovery?

Depending on your failure process, what do you do if your system/application crashes, and at what state do you recover? Is your data integrity intact?

For instance, if your sensors message data to the gateway every 10 seconds (with time-stamp A), your gateway aggregates the data from five other sensors (and uses time-stamp B), and it then messages that payload to the message broker, how easy is it to work out which time-stamp to use for an accurate recovery? Do you store data at the gateway? If so, that will make a recovery more reliable.
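A minimal sketch of the kind of envelope that keeps recovery tractable (the field names are hypothetical): each sensor reading keeps its own time-stamp (A) alongside the gateway's aggregation time-stamp (B), and the gateway stores the batch before forwarding it.

```python
import json
import time
import uuid

def build_gateway_batch(sensor_readings):
    """sensor_readings: list of dicts like {"sensor_id": "s1", "value": 21.4, "ts_a": ...}.
    Each reading keeps its original sensor time-stamp (A); the gateway adds
    its own aggregation time-stamp (B) and a batch id for recovery."""
    return {
        "batch_id": str(uuid.uuid4()),
        "gateway_ts_b": time.time(),
        "readings": sensor_readings,
    }

def store_then_forward(batch, path, send):
    # Persist the batch at the gateway first, then forward it to the broker.
    # After a crash, any un-acknowledged batch files can simply be re-sent.
    with open(path, "w") as f:
        json.dump(batch, f)
        f.flush()
    send(batch)

readings = [{"sensor_id": f"s{i}", "value": 20 + i, "ts_a": time.time()} for i in range(5)]
batch = build_gateway_batch(readings)
store_then_forward(batch, f"/tmp/{batch['batch_id']}.json",
                   send=lambda b: print("forwarded", b["batch_id"]))
```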

Data routed to a dashboard (first)

This is still fast, and you can do some clever things in your database to verify security or to add functionality to the “attributes”. But be careful about how incremental your back-ups are. If data is stored at the gateway for a period of time, that helps with data integrity.

Data routed to non-volatile storage (first)

This is generally the safest method, as long as you really do use a proper “write commit” in your process, and don’t rely on any caching anywhere.
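As a small sketch of what a proper “write commit” can look like in Python (assuming a plain file on local disk; databases have their own commit and flush settings):

```python
import os

def durable_write(path, payload: bytes):
    # Write, flush Python's buffer, then ask the OS to push its page cache
    # to the physical device before we report success.
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()              # flush the application-level buffer
        os.fsync(f.fileno())   # flush the operating system's cache to disk

durable_write("/tmp/readings-0001.bin", b'{"sensor_id": "s1", "value": 21.4}')
```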

Time Stamping Data

This is a complex and important area, especially in medical environments, where multiple, different machines (or sensors) are monitoring a patient. Generally, all sensors, gateways and back-end systems should share a common, synchronised time-source.

There might be a delay in transmitting data from the source to the back-end application; however, the original time-stamp from the source/sensor should be used as the measurement time-stamp. i.e. several other time-stamps might be added to the package envelope, but you must always make sure that your application uses the original source time-stamp.
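A small sketch of that rule, reusing the hypothetical envelope fields from the earlier gateway example: the application orders and measures by each reading's source time-stamp (A), not by the gateway or broker time-stamps.

```python
def flatten_batches(batches):
    """Pull individual readings out of gateway batches, keeping the
    original source time-stamp (ts_a) as the measurement time."""
    readings = []
    for batch in batches:
        for r in batch["readings"]:
            readings.append({
                "sensor_id": r["sensor_id"],
                "value": r["value"],
                "measurement_ts": r["ts_a"],          # always the source time-stamp
                "gateway_ts": batch["gateway_ts_b"],  # kept only for diagnostics
            })
    # Order the time series by when it was measured, not when it arrived.
    return sorted(readings, key=lambda r: r["measurement_ts"])
```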

Speeding a Computer Up (making the pulses faster?)

We need to agree on some concepts. Here are a few pictures/diagrams.


The clock in a computer system is used partly to keep the CPU working. If you have power on a CPU, but no clock, the CPU does nothing. So, for example a clock at 100 Hz triggers the CPU to do something every 100th of a second (almost – that is not 100% correct, as one clock cycle does not normally equal one machine cycle – read on).

A CPU might take two or three clock pulses time duration to read in a new instruction.

The CPU might take a total of five or eight clock pulses to execute an instruction. So, a 100 Hz clock does not mean that the CPU executes 100 instructions per second. Now scale that up to GHz; it’s just the same.

An exceptionally fast clock speed does not mean that a CPU executes instructions at that rate. It all depends on the CPU design, and on whether that CPU is designed to execute those specific types of instructions efficiently.
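A quick back-of-the-envelope calculation of that point (the cycle counts below are illustrative, not from any specific CPU):

```python
# Illustrative numbers only: a 3 GHz clock with an average of
# 5 clock cycles per instruction (fetch + execute) does not give
# 3 billion instructions per second.
clock_hz = 3_000_000_000          # 3 GHz
cycles_per_instruction = 5        # e.g. 2-3 cycles to fetch, a few more to execute

instructions_per_second = clock_hz / cycles_per_instruction
print(f"{instructions_per_second:,.0f} instructions per second")  # 600,000,000
```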

Over Clocking

It can be effective, but if you over-clock you can slow a system down. Even at the CPU level, if you “over-clock” a CPU, then the result of the instruction may “miss” the relevant clock cycle.

In the example below, the “active” clock pulse edge is the “rising” edge. Really clever/fast systems will be set up to do something on multiple parts of the clock pulse. For instance, they will do something at 63% of Vcc on the rising edge, something else at 100% of Vcc, and then something else on the falling edge (maybe) at 37% of Vcc (assuming Vcc is your maximum voltage level on the clock pulse).

None of that takes into account what the peripheral devices are doing. i.e. your mega-fast application is doing all it can to compute results, but your peripherals are just not fast enough to communicate with other systems. That might be because you are using the wrong hardware, drivers, networking or messaging protocols.

Using the wrong type of messaging protocol is probably the worst possible technical and commercial mistake. You can build an application that simply cannot pass all the data it needs to, or cannot adhere to a required QoS. (This is a big area, too big for this short paper.)
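As a tiny, library-free illustration of why QoS matters, here is a sketch of the difference between “fire and forget” (at most once) and “acknowledge and retry” (at least once) delivery; real messaging protocols such as MQTT formalise these levels, and the functions below are purely illustrative.

```python
import random

def unreliable_send(message):
    # Simulate a lossy link: roughly 30% of messages never arrive.
    return random.random() > 0.3

def publish_at_most_once(message):
    # "QoS 0" style: send once and hope. Lost messages are simply gone.
    return unreliable_send(message)

def publish_at_least_once(message, max_retries=5):
    # "QoS 1" style: keep re-sending until delivery succeeds, at the cost
    # of extra traffic and possible duplicates at the receiver.
    for attempt in range(1, max_retries + 1):
        if unreliable_send(message):
            return attempt
    raise RuntimeError("delivery failed after retries")

print("at-most-once delivered:", publish_at_most_once({"value": 21.4}))
print("at-least-once attempts:", publish_at_least_once({"value": 21.4}))
```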

Over Clocking Diagram

The above is a short summary of this area. I wrote a fairly long article recently. If you want to dig a bit deeper see

https://www.dhirubhai.net/posts/mike-mckean-991a7743_iot-aiandml-artificialintelligence-activity-7177269262380617728-Fl8N?

Or get in touch Mike Mckean mmckean917 (at) gmail.com


