Michael Richards
Director, Policy, U.S. Chamber of Commerce Technology Engagement Center (C_TEC)

Published May 19, 2022

Policymakers, technologists, and business leaders must work together to ensure that the prosperity generated by artificial intelligence is shared throughout society and that its unintended harms are addressed and mitigated, said experts at the U.S. Chamber AI Commission field hearing in Palo Alto, CA.

“We’re seeing a growth in AI systems that can function across multiple domains for the last decade...this can lead to unanticipated and harmful outcomes,” said Rep. Anna Eshoo (CA-18), kicking off the hearing with words of caution. “Policymakers, researchers, and leaders in the private sector need to collaborate to address these issues to ensure that AI advancement accrues to the benefit of society, not at the cost of it.”

She added, “As AI becomes more powerful, we have to keep refocusing technological development on our values to ensure that technology improves society.” Many experts testifying throughout the hearing echoed similar points, advocating for widening the shared prosperity that would result from AI and cautioning the Commission on AI’s potential harm to workers and marginalized communities.

Automation vs. Augmentation

Erik Brynjolfsson, Senior Fellow at the Stanford Institute for Human-Centered AI (HAI) and Director of the Stanford Digital Economy Lab, articulated the difference between automation and augmentation when it comes to jobs: “Economists have made a distinction between economic substitute and economic complement,” he testified. “Substitutes tend to worsen economic inequality and increase concentration of economic and political power.”

Moreover, he stressed that “Most of the progress over time has come not from automating things we are already doing, but from doing new things...When technology complements humans...it increases wages and leads to more widely shared prosperity.”

Katya Klinova, Head of AI, Labor and the Economy at The Partnership on AI, also advocated for the path of AI augmenting and complementing the skills of a much broader group of workers, “making them more valuable for the labor market, boosting their wages, improving economic inclusion, and ultimately creating a more competitive economy,” she said.

The prevailing discourse “is overwhelmingly focused on how workers should prepare for the age of AI, and how governments and institutions can help them to prepare,” Klinova testified. “By putting all the burden of adjustment on the workers and the government, we are forgetting that the technology too can and should adjust to the needs and realities faced by communities and the workforce.”

Trust Gap

However, “the issue is that in practice, it is often quite difficult to tell apart worker-augmenting technologies from worker-replacing technologies,” Klinova noted. Because of that, she asserted, “any company today that wants to claim their technology augments workers can just do it. It’s a free-for-all claim that is not necessarily substantiated by anything.”

Alka Roy, Founder of the Responsible Innovation Project and RI Labs, underscored the trust gap that results from this kind of discrepancy between having “best practices, audits, and governance” and how and where they are actually used. “Some reports...cite that even companies that have AI principles and ethics, only 9% to 20% of them publicly admit to having operationalized these principles,” Roy said.

To address these issues, Klinova advocated for “invest[ing] in alternative benchmarks...and in building institutions that allow for empowered participation of workers in the development and deployment of AI.” She added, “Workers are ultimately the best people to tell apart which technologies help them and make their day better, and which ones look good on paper in marketing materials, but in practice enable exploitation or over-surveillance.”

AI’s Impact on Workers

In discussing the impact of AI on workers, Doug Bloch, Political Director at Teamsters Joint Council 7, referenced his time serving on Governor Newsom’s Future of Work Commission: “I became convinced that all the talk of the robot apocalypse and robots coming to take workers’ jobs was a lot of hyperbole. I think the bigger threat to the workers I represent is that the robots will come and supervise through algorithms and artificial intelligence.”

“We have to empower workers to not only question the role of technology in the workplace, but also to use tools such as collective bargaining and government regulation to make sure that workers also benefit from its deployment,” he said. 

In his testimony, Bloch emphasized that workers aren’t afraid of technology, but they will question its purpose, insist that it be regulated, and make sure that workers have a voice in the process. “The biggest question for organized labor and worker advocates right now...is how does all of this technology relate to production standards, to production, and to discipline?”

Bloch referenced an existing contract to show how AI and labor can coexist. Its terms provide a safety net for workers by ensuring that they can’t be fired on the basis of surveillance technology or an algorithm alone; a supervisor has to directly observe dishonest behavior to justify a firing. He also underscored the importance of ensuring that the data workers generate, which helps inform decisions and increase profits for the company, won’t be used against them.

Bloch closed by stating, “If the fight of the last century was for workers to have unions and protections like OSHA, I honestly believe that the fight of this century for workers will be around data, and that workers should have a say in what happens with it and to share in the profit with it.”

Risks of Use

Jacob Snow, Staff Attorney for the Technology and Civil Liberties Program at the ACLU of Northern California, told the Commission that the critical discussions on AI are “not narrow technical questions about how to design a product. They are social questions about what happens when a product is deployed to a society, and the consequences of that deployment on people’s lives.”

He explained why he believed facial recognition should be on the other side of a technological red line: “There are applications of facial recognition, which I think at least officially seem like they might be valuable – finding a missing person or tracking down a dangerous criminal, for example. But...any tool that can find a missing person can find a political dissident. Any tool that can pick a criminal out of a crowd can do the same for an undocumented person or a person who has received reproductive healthcare.” He cautioned, “We’re living in a time when it’s not necessary for civil rights and privacy advocates to say ‘just imagine if the technology fell into the wrong hands.’ It’s going directly into the wrong hands after it’s been built.”

“We can think a little bit more broadly about what constitutes AI regulation – worker protections, housing support, privacy laws – all those frameworks put in place deeper social, health-related, and economic protections that limit the harms of algorithms,” Snow testified.

Conclusion

Rep. Ro Khanna (CA-17), who provided concluding remarks, talked about the disparate impacts that AI will have on different communities across the United States. “This challenge is the central challenge for the country: How do we both create economic opportunity in places that have been totally left out, how do we build and revitalize a new middle class, and how do we have the benefits of technology be more widely shared?” In closing, the Congressman stated, “There’s going to be 25 million of these new jobs in every field from manufacturing to farming to retail to entertainment. The question is, how do we make sure that they are a possibility for people in every community?”

To continue exploring critical issues around AI, the U.S. Chamber AI Commission will host further field hearings in the U.S. and abroad to hear from experts on a range of topics. The next hearing will be held in London, UK, on June 13. Previous hearings took place in Austin, TX, and Cleveland, OH.

Learn more about the AI Commission here.

About the author

Michael Richards