Embrace and don't fear GenAI.

Generative AI (GenAI) is getting much attention, but much of that focus seems negative and driven by fear. The rate of innovation in AI has picked up pace, and so has the concern. That fear comes from a lack of knowledge and understanding, fuelled by the typical negative hyperbole you find when you search the internet, read the press, or use social media. Using AI at work will only improve our work and teach us more, so rising concern and slow adoption will be harmful. We need to walk towards and embrace AI's capabilities rather than shield ourselves from it, lock it down, and over-govern its use.

Most of the fears about AI in tech are that it will displace jobs, generate biased results, violate security or privacy policies, and erode accountability and transparency.

I don't fear walking into a dark room; however, I won't walk into a pitch-black one blindfolded, because I'd question why I was putting effort into a complete unknown. When considering something like AI, I research enough to understand what I could gain from it, decide on one or more experiments, and then learn and form views based on the knowledge those experiments produce. I encourage a similar approach: trust facts, not opinions or hearsay.

We have all been using AI for quite a while, and I'm no different from most. AI has been a fellow passenger of mine in writing documents (Word and Grammarly), searching (AI Overviews in Google), and building data analyses (Amazon Q in QuickSight). All of my experiences have been favourable. The rate of innovation is fast, and the value created from it continues to increase.

I have just started guiding the creation of an organisation-wide AI policy. To inform that process, I needed to broaden my understanding of how AI could help product and software development. However, the insights I gained could also be applied to other uses of AI.

When I started in tech, I was a developer, focusing mainly on data and analysis. I've always been able to code, and I've changed my language of choice several times. Today, I write simple scripts to manipulate data using Python. I can develop some cool things based on data—at least, I think they are cool!

A few weeks ago, I set a target of understanding an AI CoPilot coding tool whilst gathering data on a specific attribute of the Australian Charge Point Operator (CPO) market. I planned to automate the gathering of market data, transform it, and load it into an analytics data platform using Python and VSCode, with Tabnine as a coding CoPilot.
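
To make the shape of that experiment concrete, here is a minimal sketch of the kind of extract-transform-load script I was building. It is illustrative only: the URL, column names, and table name are hypothetical placeholders, not the actual CPO data source.

```python
# Illustrative ETL sketch only: the URL, column names, and table name are
# hypothetical placeholders, not the real CPO market data source.
import sqlite3

import pandas as pd
import requests


def extract(url: str) -> pd.DataFrame:
    """Download the raw market data as JSON and return it as a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Keep only the columns of interest and tidy their types."""
    tidy = raw[["operator", "state", "site_count"]].copy()
    tidy["site_count"] = pd.to_numeric(tidy["site_count"], errors="coerce")
    return tidy.dropna()


def load(tidy: pd.DataFrame, db_path: str = "cpo_market.db") -> None:
    """Write the tidied data into a local analytics table."""
    with sqlite3.connect(db_path) as conn:
        tidy.to_sql("cpo_market", conn, if_exists="replace", index=False)


if __name__ == "__main__":
    load(transform(extract("https://example.com/cpo-market.json")))
```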

In the experiment, I learnt that GenAI was beneficial: it helped me generate code and tests, produce documentation, and understand both the code I wrote and the code Tabnine generated. The tool made me more intelligent by putting more relevant information in front of me. Coding CoPilots have much more context than you would naturally provide to a search engine or use to search a manual. They draw on the open code windows in the IDE and the attributes you supply, and can generate notes, code, and tests that match the style you develop in. The results of a /Generate or a /Describe were excellent.
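
As an illustration of the kind of output I mean, here is the sort of test a coding CoPilot might draft for the transform step sketched above, assuming that sketch were saved as etl_sketch.py (a hypothetical filename). Any real Tabnine output will differ in wording and structure.

```python
# Illustrative only: the kind of pytest test a coding CoPilot might draft for
# the transform() helper above, assuming that sketch is saved as etl_sketch.py
# (a hypothetical filename). Real generated output will vary.
import pandas as pd

from etl_sketch import transform


def test_transform_drops_rows_with_non_numeric_counts():
    raw = pd.DataFrame(
        {
            "operator": ["Alpha", "Beta"],
            "state": ["VIC", "NSW"],
            "site_count": ["12", "not-a-number"],
        }
    )

    tidy = transform(raw)

    # Only the row with a usable numeric count should survive.
    assert list(tidy["operator"]) == ["Alpha"]
    assert float(tidy["site_count"].iloc[0]) == 12.0
```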

Whatever an AI tool generates is the tool's responsibility; however, the person or product owner using the tool is accountable for what is generated, how it is used, the behaviour, and the outcome created. You are responsible for, and not excused from, the impacts of any AI-generated output. You can't unquestioningly accept, use, or send what is produced, no matter how excellent the results.

One of the fears of AI is that its results will be biased by the training data or datasets. It seems illogical to fear AI for this more than anything we already use today. Take solving a development problem: people may search the internet or read a book. Aren't both sources biased? There are many ways to solve a problem, and using websites or books as a source of knowledge is just reading one person's perspective or opinion on how to do it; there are biases in that information. Google search results are biased too. Try searching "link between coffee and hypertension" and then "no link between coffee and hypertension". Both return extracts from the Mayo Clinic, one saying there is a link and the other saying there isn't. Google pulls different fragments from the same article to feed you the result your search implies you want.

Another fear with GenAI is that it will make jobs redundant. My experience is that it won't. GenAI can perform tasks within a role; however, the objectives of someone's role can only be assured by the person in it. In my case, most of the code created was boilerplate, built from the context I sent and Tabnine's dataset. The results were good and well described; however, almost everything needed updating and specialising, and always required validation. It got most of the parts right, though sometimes messily. Without showing code examples (I'll spare you mine), a great anecdotal way to see what happens is the article "Yoga poses generated by AI". Again, anyone using an AI tool must be responsible for what it generates. Don't use a tool to create a document, an email, or code and then mindlessly email it out or commit the code. You need to review it, change it, and add yourself to it. There's no lack of transparency if you take this approach.

I stayed in a state of flow and worked more effectively using a code CoPilot: I stayed in the IDE while searching for code examples, autogenerating code, documenting, and so on. The CoPilot didn't take my job away; it elevated how I could work. As I said earlier, it made me more knowledgeable. The DORA Accelerate State of DevOps report has an excellent write-up on what it found when asking organisations about their use of AI. I wasn't much more productive using a CoPilot; the generated code was frequently wrong and created just as many errors at runtime and in the tests. I needed to instruct the tool on what I wanted help with; it needed someone with skills to drive it. However, using the CoPilot kept me moving forward, and I definitely worked more effectively.

Remember, when sending information to an externally hosted program or tool, you lose control of who has access to that data and how somebody could use it. When asking an AI tool to generate any form of output, the responsibility for what information that tool has access to lies with you. AI tools won't steal any data you don't send them! When using Tabnine, I would never enter a code-generation line like "/generate.... code that searches the charge-session object for Joe Bloggs or his credit card number XXX..". Data security isn't an issue created by the invention of AI; AI data security issues will arise because people misuse the tools. Industries have spent enormous effort teaching people how to secure PII, confidential, and financial data. Those teachings must be updated to cover what it means to work with AI tools.
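
A minimal sketch of the safer pattern, assuming a hypothetical charge-session structure: ask the CoPilot for a generic helper, and keep the real names and card numbers in runtime data that never appears in a prompt or in the source code.

```python
# Illustrative only: a generic lookup helper a CoPilot could safely generate.
# The field names are hypothetical; real names and card numbers stay in the
# runtime data, never in the prompt or the source code.
def find_charge_sessions(sessions: list[dict], customer_id: str) -> list[dict]:
    """Return the charge sessions belonging to the given customer ID."""
    return [s for s in sessions if s.get("customer_id") == customer_id]


if __name__ == "__main__":
    # Dummy data for demonstration; no real PII is embedded anywhere.
    sessions = [
        {"customer_id": "c-001", "kwh": 22.4},
        {"customer_id": "c-002", "kwh": 7.9},
    ]
    print(find_charge_sessions(sessions, "c-001"))
```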

Using an AI coding CoPilot made me more effective and raised the relevance of both the information shown to me and the code I created. I worked better and stayed focused for longer, which ultimately made me more intelligent. However, it needed me to make the right decisions, instruct the tool on what to do next, decide what information it had access to, correct its best efforts, and finally ensure that the outcomes created by my work were right.

I have experienced AI in various ways: searching, writing code, analysing and reporting on data, and writing documents. We shouldn't fear it, and we shouldn't try to lock down or restrict its use. The only way for people to understand how it can benefit them, their teams, and their organisations is to try it out, experiment with it, learn from those experiments, and make decisions based on understanding and knowledge. The rate of innovation in this field means that the straightforward ways AI can be used today will be improved upon and surpassed in the not-too-distant tomorrow. The water's warm: understand what you don't know, find ways to learn, and never fear walking into dark rooms.
