Artificial intelligence (AI) has made its way into many workplaces nationwide and is rapidly changing how organizations operate and make decisions. In some cases, employees may be using AI tools without their employers’ permission or knowledge. While this technology presents opportunities for organizations, including enhanced workflows, streamlined operations and improved customer experiences, it has limitations and exposures that employers need to consider. Implementing workplace policies can help employers understand and protect against the potential legal, business and reputational risks associated with using AI tools. Therefore, now is the time for employers to start considering how best to create and enforce policies that address the use of AI technology in the workplace.
This article outlines general considerations for employers to keep in mind as they establish AI-related workplace policies.
Many employers are using AI systems to sort through resumes, create job postings, streamline the hiring and onboarding processes, and automate many HR functions. While this technology can help improve organizations’ operational efficiencies, it presents certain risks. For example, AI algorithms can reinforce biased or discriminatory hiring practices, even unintentionally. Additionally, AI tools’ increased monitoring of employee activities can trigger privacy issues. As the integration of AI systems becomes more widespread, anticipating the issues this technology may pose in the workplace is increasingly essential.
Despite the potential risks of using AI tools, laws and regulations haven’t kept up with employers’ acceptance and incorporation of this technology. While many existing laws address AI-related issues, such technology as a whole remains a relatively new legal area. There’s currently a patchwork of federal and state regulations that address aspects of using AI tools in the employment context; however, legal issues related to these tools will likely continue to emerge as AI technology develops and becomes more advanced.
Because AI technology in the workplace is largely unregulated, there are many gray areas employers must navigate. Employers can establish governance policies and procedures to evaluate and monitor AI tools as well as assess the long-term impacts of these tools. Understanding how AI tools are used in the workplace can direct employers as they develop related policies. Existing workplace policies may already address some AI-related risks, but employers may need to reevaluate these policies to address specific concerns. This can help ensure that organizations use AI tools responsibly and integrate such technology to complement human activity in the workplace.
For employers operating in multiple states, the use of AI tools can present compliance challenges due to varying federal and state laws regulating this technology. In particular, it’s possible that using AI tools in the workplace may be illegal in some jurisdictions or subject to different regulations. As such, organizations with employees working in different states must devise policies to navigate these issues.
In addition, adopting AI tools may lead many workers to feel that their jobs are threatened. That’s why it’s important for employers to understand the impact that introducing AI technology in the workplace may have on their employees’ well-being. Employers must consider ways to support their employees during this transition, such as establishing policies and educating and training employees to understand the roles and functions of AI tools in the workplace. With this in mind, it’s vital that organizations implement related policies and procedures when adopting AI tools.
AI technology can collect and analyze data to help increase workforce and organizational productivity. This can help employers transform their approaches based on AI-derived insights or track employee performance. However, employers must consider employees’ privacy rights when doing so and institute effective policies to outline and protect those rights. Some jurisdictions have imposed consent and notice requirements for using AI tools in the workplace. Currently, New York, Delaware and Connecticut require employers to notify employees of electronic monitoring. Other states have implemented consent and notice requirements for using AI technology as an interview tool. For example, in Maryland, an employer cannot use facial recognition software during an interview unless the interviewee signs a waiver. Establishing policies to address these issues can help ensure that increased monitoring of employees through AI tools doesn’t become intrusive or reveal private or confidential information. This can include disclosing to applicants and employees how such technology is used.
AI-generated content can violate copyright laws or infringe on third-party intellectual property rights. For instance, conversations employees have with AI chatbots may be reviewed by AI trainers, inadvertently disclosing sensitive and confidential business information and trade secrets to third parties. This could potentially expose employers to legal risks under privacy laws. Additionally, before using any content generated with AI tools, employers should consider its status, how it’s protected and who holds the rights to use it. Employers can review and update their confidentiality and trade secret policies to ensure they cover third-party AI tools. Organizations can also train employees on potential copyright and intellectual property issues, ensuring inputs used to create AI-generated content do not include data that’s protected or confidential. Employers can also restrict access to AI tools to reduce their legal risks.
Using AI technology can lead to intentional and unintentional discrimination in the workplace, resulting in costly lawsuits or investigations. For example, AI algorithms used to make employment decisions may be based on historical data sets that could be biased or discriminatory – benchmarking resumes or other job requirements based on protected characteristics, such as age, race, gender or national origin. As a result, employers should be cautious when developing, applying or modifying data to train and operate AI tools to make employment decisions.
As AI tools become more advanced, employers’ abilities to control this technology will likely become more limited. That’s why it’s important that organizations establish policies to ensure the ethical use of AI tools. While there are still many unknowns when it comes to AI tools, employers should establish policies to account for what is known and reevaluate their policies regularly as the technology evolves.
AI technology is revolutionizing the employment landscape. As more organizations embrace this technology, establishing proper workplace policies can help employers protect against related risks and prevent potential violations. Being proactive in creating AI-related policies and procedures can help employers identify their exposures and outline strategies to address them.
For more information on cyber security, go to www.ecompnow.com/cyber and get a Cyber Liability quote today!
Insurance services provided by E-COMP NOW! Insurance Services and its licensed agents and affiliates. The information contained within these materials is confidential and not to be distributed. Descriptions are general in nature only. Please refer to the terms and conditions of policies offered or purchased. Insurance products are subject to application and underwriting requirements. Pricing depends on a variety of factors including policyholder location. Not all discounts available in all states. Not all products available in all states. Use of and access to this information, site or any of the links contained within this site does not create a relationship between the user and E-COMP. © 2024 E-COMP, Inc. All Rights Reserved.