Uncle Sam Wants Worker Input in AI

The U.S. Department of Labor issues a new directive on AI in the workplace.

Microsoft calls 2024 the year when artificial intelligence “gets real,” as use of the technology has surged.

According to a new report from the technology giant, the use of generative AI has “nearly doubled” in 2024, with 75% of global knowledge workers using it. Meanwhile, 78% of workers say they’re bringing their own AI tools to the workplace as they “struggle under the pace and volume of work,” according to Microsoft.

Yet the same study shows that as executives increasingly recognize the value of AI skills on the job, “they’re missing the value of developing their own people.”

This from the study:

— 45% of U.S. executives are not currently investing in AI tools or products for employees.

— Only 39% of people globally who use AI at work have gotten AI training from their company.

— Only 25% of companies plan to offer generative AI training this year, further cementing this training deficit.

The Federal Government Weighs In

The U.S. Department of Labor has stepped in to help remedy the situation.

In a new roadmap for AI usage in U.S. companies, the DOL has issued a set of guidelines that, among other things, champion worker input into company AI policies and establish guardrails to protect workers as artificial intelligence emerges in the workplace.

The DOL calls its paper “Principles for Developers and Employers,” aimed at those using AI in the workplace, and it’s loaded with new AI guidelines for companies.

“These principles will create a roadmap for developers and employers on how to harness AI technologies for their businesses while ensuring workers benefit from new opportunities created by AI and are protected from its potential harms,” the DOL stated.

“The precise scope and nature of how AI will change the workplace remains uncertain,” the paper stated. “AI can positively augment work by replacing and automating repetitive tasks or assisting with routine decisions, which may reduce the burden on workers and allow them to better perform other responsibilities.”

The DOL expects AI to “create demand” for workers, especially those with advanced technology skills and training.

“But AI-augmented work also poses risks if workers no longer have autonomy and direction over their work or their job quality declines,” the report noted. “The risks of AI for workers are greater if it undermines workers’ rights, embeds bias and discrimination in decision-making processes, or makes consequential workplace decisions without transparency, human oversight and review. There are also risks that workers will be displaced entirely from their jobs by AI.”

Working off the foundational concepts in President Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the DOL guidelines incorporate input from workers, unions, researchers, academics, employers, and developers, among others, gathered in part through public listening sessions.

The DOL said those guidelines “apply to the development and deployment of AI systems in the workplace, and should be considered during the whole lifecycle of AI – from design to development, testing, training, deployment and use, oversight, and auditing.”

Eight Principles

Here are the eight underlying principles included in the DOL document.

— Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems in the workplace.

— Ethically Developing AI: AI systems should be designed, developed, and trained in a way that protects workers.

— Establishing AI Governance and Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems in the workplace.

— Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems used in the workplace.

— Protecting Labor and Employment Rights: AI systems should not violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.

— Using AI to Enable Workers: AI systems should assist, complement, and enable workers, as well as improve job quality.

— Supporting Workers Impacted by AI: Employers should support or upskill workers during AI-related job transitions.

— Ensuring Responsible Use of Worker Data: Workers’ data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.

The principles are also “applicable to all sectors and intended to be mutually reinforcing,” the DOL stated. Additionally, “AI developers and employers should review and customize the best practices based on their context and with input from workers,” the paper stated.


