Oh, it's here. Artificial Intelligence (AI) and Machine Learning (ML) have hit the mainstream, with tools like ChatGPT breaking the internet and conjuring up more questions than answers.
Is this Skynet? Should we be worried? Is it going to take our jobs? Solve all our problems? How will AI change people operations and HR? What can it... do?
It can do a lot, and that's why it's important to have a policy in place for how your employees can use AI at work. From generating to-do lists to creating entire articles and courses, AI generation tools can save tons of time and energy. The reality, though, is that many of these technologies are young and pulling from limited datasets, and if we all get too excited and prolific, we don't yet know what we'll run into down the line.
There are still question marks around the risks of using AI for productivity:
Could these tools perpetuate biases?
Make mistakes with data?
Lower standards of quality for editorial?
Get the facts wrong?
Use old facts?
...Or any number of ways that AI-generated content and AI-powered tools can throw a curveball that we never even predicted.
Do you even need a policy?
Like everything, it depends on what works for your organization. If enough people are asking about how you plan to use AI, it's worth bubbling those questions up to leadership and discussing it.
You might want a policy if...
If you use tools with baked-in AI, you may want to mention how you avoid biases and fact-check for errors.
If your company is a publisher and creates content, you'll want your editorial team to discuss how to ethically and efficiently use the tools to ensure you're still producing valuable, accurate, quality content.
If your company deals with lots of data, think about how AI can support advanced business analysis while keeping results accurate.
If you have a strong software approval process in place and think AI tools should be treated the same way.
If these don't apply to your company, then you might not need a policy. You don't need to be a hammer looking for a nail. But it never hurts to get ahead of the inevitable.
Address everyone's FAQ with... well, an FAQ
An Acceptable Use Policy doesn't always have to say POLICY at the top in a red stamp.
You can simply call it "The FAQ of how we use AI at [Company name]."
FAQs save a lot of time and headache, keeping you from repeating the same story, and can predict other questions to answer before they even pop up. They also get leadership firmly on the same page.
What do you put in an AI policy?
In an AI Acceptable Use Policy for the workplace, clearly outline guidelines and expectations for the appropriate use of AI and AI-based tools.
Here are the key elements to include:
Authorized use: Specify the approved software, purposes, and contexts where employees are allowed to use AI and AI-based tools.
This could mean saying "using tool X and similar tools is approved for content generation for social media, but not website articles."
Data usage and privacy: Data sensitivity always applies – teach employees how to handle sensitive data when using AI tools, and what is required to stay compliant with common regulations such as GDPR.
Prohibited activities: This may include activities that violate privacy, involve illegal or unethical behavior, or compromise the security of the company's systems.
Accuracy and reliability: Employees should fact-check to ensure accuracy and reliability. Ask employees to report any concerns about biased or inaccurate results.
Transparency: Employees should be able to understand and explain how AI-derived results are obtained from the tool.
Human oversight: Should new AI tools be reviewed for approved use just like other software in your company? Define the level of supervision.
Intellectual property: With more and more employees building personal brands that transcend their roles, it's key to define ownership rights and address intellectual property concerns around AI-assisted content or innovations. Be clear about who retains ownership of AI-related work.
Compliance with laws and regulations: Explain how you comply with all applicable laws and regulations when using AI tools, especially concerning data privacy, intellectual property, and industry-specific guidelines.
Training and awareness: Provide employees with the resources to use the tools well. Record training videos and provide documentation in an accessible knowledge base.
Consequences for violations: These consequences should be consistent with other policies related to employee conduct. Reserve significant consequences for major violations that put the business at risk in some way. For minor offenses, even simply "review the training" is appropriate.
Periodic review and updates: List the date on the document, and revisit it every year to account for advancements in technology and changes in company and government policy.
By anticipating your team's response to this trend, you can efficiently and thoughtfully answer common questions in an FAQ. With everyone on the same page about AI's role in your workplace, you can promote responsible, ethical, and secure use of these new-fangled tools while maintaining a productive and compliant work environment.
Wondering what else you can do to create a more productive workplace? Sign up for the Want to Work There newsletter and get human-generated advice directly in your inbox.