A Short Review of GPT Capabilities for IT Tasks
So, a brief overview and the conclusions I formed while testing GPT.
The first in a series of reviews dedicated to ChatGPT. My main thoughts after a two-week test of several AIs))
Most recently, yesterday, I tried all sorts of AI services that generate a beautiful profile for you based on your LinkedIn profile. In short: a fail))
I found a couple of cool CV transformation services:
https://www.kickresume.com/en/
https://www.resumai.com/
Both services promise to do everything for you, but in practice it remains the most tedious task.
Main conclusions concerning GPT:
All my testing was dedicated to generating program code (mostly in Ruby and JS).
1. AI will not solve a difficult task for you. By that I mean a task that requires new knowledge which can only be obtained from other knowledge. This is usually called synthetic knowledge.
We are talking about synthesis of the 3rd and 4th levels, i.e. a derivative of derivatives. That is a dead end for AI, because it simply cannot generate synthetic knowledge.
In other words, these are tasks where you understand one subject area, understand another, and know how to combine them to obtain a third body of knowledge. In fact, these are any integration tasks. Even children can mix knowledge like paints; AI cannot.
For example, if you ask GPT to write methods for authenticating with a Google account, it will write you proper code.
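For illustration, here is a minimal hand-written sketch (mine, not GPT output) of what such working code typically looks like, assuming a Rails app with the omniauth-google-oauth2 gem; the credentials are placeholders:

# config/initializers/omniauth.rb
# Assumes the Gemfile contains: gem 'omniauth-google-oauth2'
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :google_oauth2,
           ENV['GOOGLE_CLIENT_ID'],     # placeholder credentials
           ENV['GOOGLE_CLIENT_SECRET'],
           scope: 'email,profile'
end

# A matching callback route then reads request.env['omniauth.auth']
# to find or create the signed-in user.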
But if you ask it to write a multidimensional role model, it starts generating plausible-looking nonsense: the code resembles real code, but the logic inside is gibberish.
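For context, here is roughly what I mean by a multidimensional role model, as a hand-written sketch (all rules and names are illustrative): access is a function of several independent dimensions at once, not a flat role list.

class AccessPolicy
  # Each rule is a point in a multidimensional space: role x department x region.
  RULES = [
    { role: :manager, department: :sales, region: :emea, action: :approve },
    { role: :auditor, department: :any,   region: :any,  action: :read }
  ].freeze

  def self.allowed?(user, action)
    RULES.any? do |rule|
      rule[:action] == action &&
        rule[:role] == user[:role] &&
        [user[:department], :any].include?(rule[:department]) &&
        [user[:region], :any].include?(rule[:region])
    end
  end
end

user = { role: :auditor, department: :finance, region: :apac }
AccessPolicy.allowed?(user, :read)    # => true (the :any wildcard matches)
AccessPolicy.allowed?(user, :approve) # => false (wrong role and department)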
Another such example: I asked it to write Ruby code integrating with the Camunda BPM service. As a result, the network generated some abstract HTTP service called Camunda for me (instead of taking the real one, which was funny) and did not even try to find Camunda's actual working API and build an integration against it. Feel the difference?
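For contrast, a real integration talks to Camunda's documented REST API (engine-rest). A minimal hand-written sketch, assuming a default local Camunda 7 install; the process key is a placeholder:

require 'net/http'
require 'json'
require 'uri'

CAMUNDA = 'http://localhost:8080/engine-rest' # assumption: default local setup

# Start a process instance by its process definition key.
def start_process(key, business_key)
  uri = URI("#{CAMUNDA}/process-definition/key/#{key}/start")
  req = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
  req.body = { businessKey: business_key }.to_json
  res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  JSON.parse(res.body) # returns the new process instance, including its id
end

start_process('invoice', 'order-42') # 'invoice' is a placeholder process key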
2. AI cannot solve a "tangled" problem, one whose solution space is formed from several other knowledge spaces. That is, when the solution has to be reached logically, through conclusions drawn across several dimensions, and there is no direct solution.
On the code side, I asked it to generate AJAX methods to asynchronously render a GitHub gist on screen. Those who know, know: a gist renders through JS emitted by another JS, which makes this the most confusing kind of task. The network just started generating trash)
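For reference, the usual workaround here is that every gist also exposes a .json endpoint returning the already-rendered HTML, which you can fetch asynchronously yourself instead of letting the embed script document.write itself into the page. A hand-written Ruby sketch of the fetch (user and gist id are placeholders):

require 'net/http'
require 'json'
require 'uri'

# Fetch the pre-rendered HTML and stylesheet link for a gist.
def gist_html(user, gist_id)
  data = JSON.parse(Net::HTTP.get(URI("https://gist.github.com/#{user}/#{gist_id}.json")))
  # data['stylesheet'] is the CSS href, data['div'] is the rendered markup
  %(<link rel="stylesheet" href="#{data['stylesheet']}">\n#{data['div']})
end

puts gist_html('octocat', 'abc123') # placeholder user and gist id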
3. AI produces simple designs that contain ineffective scenarios and elements, because it does not know which ones are effective.
Example:
It generates program code that immediately needs refactoring because, for example, the same event is fired many times in it. So all the code has to be revised and refactored. I mean all of it.
4. AI generates even more nonsense when you try to refine a "tangled" problem step by step.
And here the 5th point kicks in as well.
5. AI is limited by the maximum size of the scenario space in the input prompt: about 3-4 lines of description within one request. Above that, the system starts generating nonsense. Let's just call it Blagirev's Law :)
You can check it yourself.
Returning to the same Camunda, I asked:
pls write ruby application which can: 1) set tasks for a user, 2) provide users with a user profile, 3) let the user check what tasks they have, 4) all tasks should be received from Camunda BPM, 5) tasks should be received when a process instance is started at Camunda BPM by the ruby application via a remote "start process" call, sending the process definition id as the id of the proper process within the process repository at Camunda BPM
Well, that is, I formed the simplest possible description of the user scenario space and assumed it would handle it. But apparently there is some limit to the AI's memory, and it simply cannot hold such a complex task in its "head".
That is, if you give the AI more than 3-4 sentences, the probability of getting nonsense grows exponentially. Blagirev's law in action :). For reference, the core of what I was asking for fits in the short sketch below.
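A hand-written sketch against the same Camunda 7 REST API (host, definition id and user are placeholders; points 3-5 of my prompt map onto the two calls):

require 'net/http'
require 'json'
require 'uri'

CAMUNDA = 'http://localhost:8080/engine-rest' # assumption: default local setup

# 5) start a process instance by its process definition id
def start_process_by_id(definition_id)
  uri = URI("#{CAMUNDA}/process-definition/#{definition_id}/start")
  req = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
  req.body = {}.to_json
  res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  JSON.parse(res.body)
end

# 3 + 4) list the user tasks Camunda created for an assignee
def tasks_for(assignee)
  JSON.parse(Net::HTTP.get(URI("#{CAMUNDA}/task?assignee=#{assignee}")))
end

start_process_by_id('invoice:1:abc123') # placeholder definition id
tasks_for('demo').each { |t| puts "#{t['id']}: #{t['name']}" }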
6. Trying to teach the network to solve "tangled" problems by working through the problem with it does not lead to the network learning.
Well, that is, if you want to train the network to solve a complex problem by splitting it into several simple ones, it is a road to nowhere. I tried three different scenarios of asking it to write an integration with Camunda BPM, and the result was always the same. Moreover, it is not clear whether the network learns at all, or whether it keeps treating its erroneous solutions as correct.
7. I could not figure out how to get through to the network about what was wrong and where, I mean during result generation. How do you tell the AI: "Look, pal! Here you were wrong!"
This is a problem with all such diffusion-like mechanisms: they take everything in and push everything out, en masse. What happened, happened. Trying to correct something inside the generation process is a whole art.
8. AI does not understand household names, or does so selectively: it knows general ones but not domain-specific ones!!!
Absolutely, for real. To it, the name Camunda is just some capitalized word; to any backend developer it is a well-known business process orchestration service. This came as a shock to me.
Or: I asked it to write RAFT consensus code for a blockchain network. In case you don't know, RAFT is already a household name in Web3 projects. The output turned out to be nonsense.
It turns out it does not know the household names of specific subject areas, and that is exactly where the value lies: in synthesizing the knowledge of such subject areas. GPT can't do it. (For contrast, the core of Raft is sketched below.)
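The core of Raft's leader election is compact and well specified. A heavily simplified hand-written sketch of the RequestVote rule (no log up-to-date check, no networking; all names are mine):

class RaftNode
  def initialize(id)
    @id = id
    @current_term = 0
    @voted_for = nil
  end

  # Decide whether to grant a vote to a candidate for a given term.
  def request_vote(candidate_id, candidate_term)
    # Reject candidates from stale terms.
    return { term: @current_term, vote_granted: false } if candidate_term < @current_term

    if candidate_term > @current_term
      @current_term = candidate_term
      @voted_for = nil # a new term frees up our vote
    end

    if @voted_for.nil? || @voted_for == candidate_id
      @voted_for = candidate_id
      { term: @current_term, vote_granted: true }
    else
      { term: @current_term, vote_granted: false } # already voted this term
    end
  end
end

node = RaftNode.new('n1')
node.request_vote('n2', 1) # => { term: 1, vote_granted: true }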
9. AI is good at simple tasks
For example, generating code to connect to Amazon Cloud or Yandex Cloud, or code for signing a document with an electronic signature, etc.
These are all narrow, simple tasks, and the network solves them easily, because their implementation scenario is narrow and finite. A couple of sketches of what I mean are below.
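Two hand-written sketches of such narrow tasks (region, bucket and file names are placeholders; assumes the aws-sdk-s3 gem and an RSA key on disk):

require 'aws-sdk-s3'
require 'openssl'

# Connect to Amazon S3 and list buckets (credentials come from the
# environment or the standard AWS profile).
s3 = Aws::S3::Client.new(region: 'eu-west-1')
s3.list_buckets.buckets.each { |b| puts b.name }

# Sign a document with an electronic signature: a detached SHA-256
# RSA signature over the file's bytes.
key = OpenSSL::PKey::RSA.new(File.read('private.pem'))
signature = key.sign(OpenSSL::Digest.new('SHA256'), File.binread('contract.pdf'))
File.binwrite('contract.sig', signature)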
10. AI generates a very believable result. Really, very believable.
Only a knowledgeable person, after reading that result, can see that something in it is off. For example, in generated program code.
For me it was a genuine surprise that GPT passed some graduate exam at some university, or an employee assessment. That is just nonsense. Apparently the competence of the people on the selection committee's side is that low...
11. AI violates NDAs and copyright while learning. There is no control.
The situation here is simply stunning. Many programmers have started uploading (this is really funny!) their code to GPT, asking it to find a bug and fix the code. Can you imagine? Suddenly your commercial code has leaked to GPT. Sounds cool? No.
Or the other way around: GPT analyzes how you draw, your technique, and tomorrow draws a copy of the same picture. Would that be copyright infringement? In general, a very interesting question.
And for dessert:
The CV services I tested turned out to produce a stream of consciousness. I uploaded my LinkedIn profile. Unfortunately, it turned out I have too much experience, too many projects, and so on, and then everything broke down. Each of these services offered me to fill out the questionnaire by hand (!!!!). That is, the AI gave up and said: "Bro, this is actually too much and too complicated, let's ask the user to fill out the questionnaire himself."
IN TOTAL:
=1. Unfortunately, for now, any complex task is an unsolvable task for AI services. By AI services I mean not narrow-profile services such as biometrics or search, but broad-range AI services such as GPT.
=2. Unfortunately, these services mislead very quickly with their results. I read plenty of threads on programmer forums where a junior beat his chest about being a cool programmer who can generate code quickly, only to be smashed to smithereens by the older guys analyzing his GPT-generated code. This is a very dangerous track, because it can form a whole false-positive trend in society, as happened with pseudoscience: plausible generated "knowledge" from AI will be served to us as the truth.
P.S.
Examples of the generated code can be found in my Telegram channel:
https://t.me/Blagirev_Digital_Tech