Proper AI Etiquette in the Workplace

AI is a quick, easy solution to a lot of problems, so more and more professionals are jumping on board the AI train. Many of these AI tools are available at little to no cost, so naturally, people are looping them into their current workflows, for better or worse.

But if you work as part of a team, this can cause problems. You must tell your team which AI tools you're using, as they can impact their workflows as well. Let's take a look at why, when it comes to AI, transparency is key.


Keeping Data Secure

Despite what AI companies would have you believe, AI doesn’t “learn” anything. It’s not conscious or intelligent. What it does is use an algorithm to collect available data, which it then uses to answer similar inquiries. It absorbs data.

This means that entering sensitive data or proprietary information into something like ChatGPT can constitute a security breach. That information is taken and stored, and your organization may not be okay with that. If you’re going to use ChatGPT or its plugins to create assets on behalf of your organization, be sure to let leadership know you’re doing it and make sure they understand the potential risks.


Company Ethics

There’s a decent chance that your org’s website has a ‘core values’ page, as it’s become pretty commonplace to include some sort of declaration of what you stand for as a company. No one likes an organization that doesn’t believe in anything but profit.

The way that these algorithms work has raised ethical concerns since their popularity first started exploding. Art and writing are scraped from available sources and then leveraged without the consent of their creators. This could be a concern for potential partners, so be sure to read the room in your industry and see how thought leaders are reacting to the use of AI.

Another concern could be the use of AI itself as a way of streamlining content creation. There’s a thin line between something being “a cost-saving measure” and being “cheap,” and that line is being redrawn day by day by public opinion. Your partners may have chosen to work with you because they knew things would be created with care by human hands, so if that changes, you want to let them know. It’ll be a lot less messy than having them find out on their own.


Maintain Information Integrity

When you’re generating anything with an LLM, there’s no way to guarantee that you’re getting an accurate result. There are already plenty of cases of AI generating recipes for disgusting food, and even AI-generated books published on Amazon that give false information on how to identify poisonous mushrooms! That’s pretty much a worst-case scenario if we’ve ever heard of one.

The worst it can do to you is destroy your credibility. If you’re tasked with creating these assets, you’re tasked with ensuring their accuracy. The people you’re working with need to know if you’re using AI to generate these things, because they need to ensure that your time is being tracked accurately (while AI may be quicker on the surface, you’ll still have to do the same amount of traditional research to ensure a good end product), and they need to have QA on deck to catch any inaccuracies in the output. Any mistakes that slip by can kill public trust.


Overreliance on AI

Your organization has a voice. It has ideas, it has a heartbeat, and it has a personality. While you might be tempted to lean heavily on AI because of its ease of use and inexpensive operational costs, remember what it can cost you.

AI is great as long as you’re using it as a foundational tool and not using it to churn out content for the sake of it. Everything should still be changed to fit your voice. To fit your brand. If you let AI dictate that for you, you’re going to lose the genuine human connections you’ve been able to forge up until now. Even if you prompt AI with phrases like “say this more casually” or “sound analytical,” it’s still a machine doing its best impression of a human, and people will be able to tell.


The solution to all of these problems comes down to regulation: regulation of the technology itself, and regulation of its use within organizations. When you have a dialogue with your team about using AI, you’re starting an important conversation. You’re building the guidelines by which these tools will be used. You can be the pioneer in your organization leading the way in ethical AI use and protocol development, as long as you’re aware of what AI is capable of!


Need help navigating your AI protocols? SLX can help you become the AI guru your organization needs!

