What's Up With Generative AI?
Image by fauxels via Pexels.

I get asked about generative AI by colleagues and friends a lot. It's great, because the field is one of the primary reasons I pursued computer science and technology as a career. AI has been around forever, but its current time in the limelight is definitely exciting to talk about.

Here's a quick summary of those conversations (no names, I promise!), generalized in this article so that others may benefit.

This is more of a highlight-and-google-this-string-of-text kind of summary, as it isn't my intention to repeat a lot of the material already out there.

What do I need to know?

In short: AI is here to stay.

Its place in our professional and personal lives will evolve, as will its potential for real value.

The most important thing to know is that generative AI, in all its forms, is a tool (we'll set aside AI agents for a different topic). It's up to you, either for your personal productivity or as a developer, to determine if it's the right tool for the job.

How do I know if it's the "right" tool?

For personal productivity, the answers are pretty straightforward. If you need a jumping-off point for brainstorming or help sifting through a mountain of text, generative AI can be helpful. Summarizing text, analyzing the tone of an email, producing an image that hopefully looks like what you're imagining, or even identifying how many people have their hands raised in the company photo are all things most of the popular generative AI products can do.

For developers, your use case will be driven by constraints.

  • Do you need deterministic answers to a discrete problem?
  • Are you looking for a component that can handle a wide range of possibilities, narrow the scope, and provide a meaningful result to a person or another system?
  • Is performance critical?
  • What type of safeguards do you need if things go wrong?
  • Do you have a blank check, or do you need to tightly manage costs?

Your answers to these questions will help you determine whether AI (either as a component or an agentic system) is the right answer.

What do all of these use cases have in common?

You have to have a good idea of what you want.

Think about what you need: what it should look like, what purpose it fulfills, who the target audience is, etc. When you have a good handle on that, the other sections below can help you realize it.

What should I learn?

At the highest level:

  • Bias: Was your model trained on data points that could skew its responses to certain requests? An example I like to use: if I trained a model to provide facts about pets and every response it returns about dogs is negative, then there might be some bias. (Disclaimer: I love all animals.)
  • Hallucinations: If I asked my pet model about airplanes and it responds that a Chihuahua is a popular model for transatlantic flights that seats 200 passengers, that might be a "hallucination".
  • Intellectual property: Popular products either filter or have other mechanisms to control the direct output of intellectual property (IP). IP, loosely defined, is the creative output that someone (individual or company) has produced. IP is typically backed by laws, patents, copyright, etc., which could differ by country. The legal system is catching up, so take a minute to think about what you're sharing with an AI and how you might spot it in results.
  • Context window: Loosely termed "memory", this is how much information a model can hold at once, typically measured by counting "tokens" (words, punctuation, etc.). Once the limit is reached, without other mechanisms in place, old information slides out and new information slides in. Products are evolving quickly and there are techniques to combat this, but it's good to be aware that it exists.
  • General safety: Offensive text, images, etc. produced or input into an AI (for generation and/or training) should be controlled. Some products have content switches or sliders while others do their best to avoid such content altogether.
  • Etiquette: If you're using AI to produce direct content that people consume or otherwise directly affect them, it should be disclosed. Depending on your company's -- or government's -- policy, disclosure of how you arrived at your end result may be required (transparency). The industry is catching up with this, so do a little research before running wild with all sorts of neat AI output.
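The context-window behavior described above -- old tokens sliding out as new ones slide in -- can be sketched in a few lines of Python. This is a toy illustration that treats each word as one token; real tokenizers split text differently, and real products layer summarization and retrieval tricks on top.

```python
from collections import deque

def make_context(max_tokens: int) -> deque:
    """A toy context window: once full, the oldest tokens fall out."""
    return deque(maxlen=max_tokens)

window = make_context(max_tokens=5)
for token in ["my", "dog", "likes", "long", "walks", "outside"]:
    window.append(token)

# "my" has slid out of the 5-token window; "outside" slid in.
print(list(window))  # ['dog', 'likes', 'long', 'walks', 'outside']
```

The point isn't the data structure; it's that anything pushed out of the window is simply gone unless some other mechanism (summaries, retrieval, etc.) brings it back.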

For general productivity, prompt engineering -- a set of techniques to guide an AI -- is a worthwhile investment. There are many guides, videos, and articles that cover this topic.
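One common prompt-engineering technique is few-shot prompting: show the model a couple of worked examples before asking your real question. Here's a minimal sketch of assembling such a prompt; the instruction and examples are invented for illustration, and real guides cover many more techniques (roles, delimiters, chain-of-thought, etc.).

```python
def build_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the real question."""
    lines = [instruction, ""]
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {query}")
    lines.append("A:")  # leave the final answer for the model to complete
    return "\n".join(lines)

prompt = build_prompt(
    "Answer with the animal's typical sound.",
    [("What does a dog say?", "Woof"), ("What does a cat say?", "Meow")],
    "What does a duck say?",
)
print(prompt)
```

The examples anchor the format and tone of the answer you want, which is often more effective than describing the format in prose.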

For developers, it's a bit of a mixed bag. The terms, tooling, and best practices evolve daily. Understanding how AI works, however, will help you drill down into the right topic with the depth that you need. Learning not only how generative AI works, but also data science techniques for data cleaning, selection, training, etc., will be useful. Architecture for larger-scale solutions will stress these fundamentals and spill over into other concerns, such as load balancing, effective data ingestion pipelines, information management, and compliance/monitoring requirements.

Vector databases, embeddings, retrieval augmented generation (RAG), and other tools and techniques will come up in any search related to AI solutioning and data management, including AI/ML-Ops. The libraries that make putting this together easier, such as LlamaIndex and Semantic Kernel, are worth a deep dive. Raw API calls and other direct interactions are certainly possible and are great for learning how things work, but using a well-maintained, well-documented library saves a lot of time.
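To make the retrieval step of RAG concrete, here's a toy sketch: embed the query and each document, then hand the most similar document to the model as grounding. The "embeddings" here are simple bag-of-words counts compared with cosine similarity, and the documents are made up -- real systems use learned dense embeddings from a model plus a vector database, but the shape of the idea is the same.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count. Real systems use learned dense vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "chihuahuas are a small dog breed",
    "the airbus a380 seats over 500 passengers",
    "cats sleep most of the day",
]

def retrieve(query: str) -> str:
    """Retrieval step of RAG: pick the document most similar to the query."""
    return max(documents, key=lambda d: cosine(embed(query), embed(d)))

print(retrieve("how many passengers does the plane seat"))
```

In a full RAG pipeline, the retrieved text is then stuffed into the model's prompt so the answer is grounded in your data rather than the model's training alone -- which also helps with the hallucination problem mentioned earlier.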

And lastly, security. This is a subject that warrants an article of its own, but the most important thing to know is that this is a powerful new attack vector to manage. All of the techniques discussed thus far can be used for cyberattacks (e.g., one of your AI agents gets corrupted with injected instructions from a nefarious third party), and the rapid evolution of AI-infused tools makes keeping up a significant challenge.

Should I get certified?

Getting certified requires going through curated material that you'll be tested on. Learning is, and will always be, the most important thing. If the material has already been curated for you (and if the certification will be paid for by someone else), it's just a matter of whether you think it's worth the time.

There are free ones (free in cost, though they still require your time) that cover the basics, especially around personal productivity. A lot of them cover the topics mentioned here as well. If you're starting from scratch, start here.

But if you want a short, quick answer: yes, getting a certification can be a good idea, if only for the simple reason of providing focus. There is a lot out there on what to learn and for what role, and a curated certification track that narrows the focus can be helpful.

It's a double-edged sword, so do a little research. Which ones you'll want to target will depend on your personal and professional goals.

How can it make me more effective?

Ever stare at a blank document with that menacing cursor furiously blinking at you, taunting you to write your first word? Now you get to do the same thing with an AI chat bot (kidding... sort of).

Some ideas have already been mentioned in this article, but offloading tasks to an AI that would otherwise take you a long time is an easy win. Sometimes tasks are easy but time-consuming, like doing sentiment analysis on hundreds of survey responses (the "Other: <write whatever you want here!>" field, I'm looking at you).
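To show the shape of that survey-response task, here's a toy stand-in: a crude keyword vote over free-text responses. The word lists and responses are invented, and a real product would use a learned model rather than keywords -- but the win is the same: classifying hundreds of responses in seconds instead of an afternoon.

```python
# Invented word lists for illustration; a real sentiment model learns these from data.
POSITIVE = {"great", "love", "helpful", "fast"}
NEGATIVE = {"slow", "broken", "confusing", "hate"}

def sentiment(response: str) -> str:
    """Crude keyword vote over one free-text survey response."""
    words = {w.strip(".,!?") for w in response.lower().split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

responses = [
    "Love the new dashboard, very helpful!",
    "Search is slow and the export is broken.",
    "It works.",
]
print([sentiment(r) for r in responses])  # ['positive', 'negative', 'neutral']
```

Swap the keyword vote for a call to your AI product of choice and the surrounding loop stays the same.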

Framing a presentation from content you've already created is fantastic. Insight into large datasets is another one. Maybe there's a style or technique you've never heard of or seen before, but now a response from the AI has sparked new ideas.

Great stuff.

And developers, don't worry, I didn't forget about your productivity. Ever jump between multiple languages, sometimes with days, weeks, months, or years since the last time you touched one? Do you know exactly what you need and what you want to do, but can't quite remember the syntax or quirks of a particular language?

AI has you covered (maybe).

AI as a coding assistant is an effective tool: from explaining existing code to helping you write new code, it's pretty neat. It's also excellent for software archeology (e.g., "why in the world was this built like that?"), combining the code base, commits, and other documentation into a summary.

Can you kick back, write prompts, and watch it produce a complete application for you from start to finish? Sure you can. Would you want to deploy that to production as is? As of this writing and depending on the complexity, probably not.

It's great as a learning tool, dangerous as a crutch, and capable enough to be useful. A separate article dedicated to this topic is warranted, but here are some other ideas you might want to consider beyond writing code:

  • Diagramming (visualize the system in a standard diagram or generate code from an existing diagram)
  • Reverse engineering (explain what this does, correlating these bug reports and requirements documents)
  • Test case identification (are you missing any edge cases, etc.)

It's the pair programming partner that doesn't get tired.

What about the stigma?

There are two components to the perceived stigma when it comes to the use of generative AI: internal and external.

The internal forces are your own, so I can only provide some general advice. First, be honest with yourself. Would you drive to the other side of town to get an apple when you could walk down the street and get one? Or would you copy the answers to a test by looking over someone's shoulder?

In the first scenario, that's entirely up to you. Maybe the cross-town apples taste better. Maybe you like the drive. If an apple's an apple to you, then there's no harm going down the street, saving yourself the time of a commute to do other things.

The second scenario, however, is different. As mentioned earlier, you should know what you need. To do that, you need to know what "correct" looks like. In addition, you should know "why". Copying the answers to the test and turning it in as if it's your own bypasses the "why" and the ability to apply what you've learned. You're also hedging your bets that what you're copying is correct and applicable to your use case. This is dangerous and, depending on your field, could get you into a legal kerfuffle.

Now for the external forces. It depends on your situation. Maybe your industry values creativity and ideas, so there may be a negative perception around using AI to assist in that process. Perhaps you are in a heavily regulated field where mistakes are costly, so even introducing generative AI into the mix is a risk that someone may not want to take.

In either case, it's your choice because you are ultimately responsible for the outcome.

How do I stay up to date?

As of right now, open your favorite browser, app, or podcast and you can't miss it. Ask the AI on your device for updates about AI.

I don't know why, but if I have questions can I talk to you?

Of course! Let's collaborate.

Every word lovingly chosen and sequenced by a human... for better or worse.
