Congress wants to protect you from biased algorithms, deepfakes, and other bad AI

On Wednesday, US lawmakers introduced a new bill that represents one of the country’s first major efforts to regulate AI. There are likely to be more to come.

It hints at a dramatic shift in Washington’s stance toward one of this century’s most powerful technologies. Only a few years ago, policymakers had little inclination to regulate AI. Now, as the consequences of not doing so grow increasingly tangible, a small contingent in Congress is advancing a broader strategy to rein the technology in.

Though the US is not alone in this endeavor—the UK, France, Australia, and others have all recently drafted or passed legislation to hold tech companies accountable for their algorithms—the country has a unique opportunity to shape AI’s global impact as the home of Silicon Valley. “An issue in Europe is that we’re not front-runners on the development of AI,” says Bendert Zevenbergen, a former technology policy advisor in the European Parliament and now a researcher at Princeton University. “We’re kind of recipients of AI technology in many ways. We’re definitely the second tier. The first tier is the US and China.”

The new bill, called the Algorithmic Accountability Act, would require big companies to audit their machine-learning systems for bias and discrimination and to take corrective action in a timely manner if such issues were identified. It would also require those companies to audit not just machine-learning systems but all processes involving sensitive data—including personally identifiable, biometric, and genetic information—for privacy and security risks. Should it pass, the bill would place regulatory power in the hands of the US Federal Trade Commission, the agency in charge of consumer protection and antitrust regulation.
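The bill leaves the details of what such an audit must measure to the FTC. For a sense of what a bias audit can look like in practice, here is a minimal, purely illustrative Python sketch of one widely cited fairness check, the disparate-impact ratio (the so-called 80% rule). The metric choice, the function, and the data are all hypothetical assumptions for illustration, not anything drawn from the bill's text.

# Purely illustrative: one common fairness check (the "80% rule"
# disparate-impact ratio) that an internal bias audit might include.
# The bill does not prescribe any particular metric; this is a sketch.

def disparate_impact_ratio(outcomes, groups, positive=1):
    """Ratio of the lowest group's positive-outcome rate to the
    highest group's; values below roughly 0.8 are a common red flag."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-model decisions (1 = advance candidate, 0 = reject).
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"Disparate-impact ratio: {disparate_impact_ratio(decisions, group):.2f}")
# Prints 0.33 here, well under 0.8, which would flag the model for review.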

The draft legislation is the first product of many months of discussion among legislators, researchers, and other experts about how to protect consumers from the negative impacts of AI, says Mutale Nkonde, a researcher at the Data & Society Research Institute who was involved in the process. It comes in response to several high-profile revelations over the past year that have shown the far-reaching damage algorithmic bias can cause in many contexts. These include Amazon's internal hiring tool that penalized female candidates; commercial face analysis and recognition platforms that are much less accurate for darker-skinned women than for lighter-skinned men; and, most recently, a Facebook ad-recommendation algorithm that likely perpetuates employment and housing discrimination regardless of the advertiser's specified target audience.

The bill has already been praised by members of the AI ethics and research community as an important and thoughtful step toward protecting people from such unintended disparate impacts. “Great first step,” wrote Andrew Selbst, a technology and legal scholar at Data & Society, on Twitter. “Would require documentation, assessment, and attempts to address foreseen impacts. That’s new, exciting & incredibly necessary.”

It also won’t be the only step. The proposal, says Nkonde, is part of a larger strategy to bring regulatory oversight to any AI processes and products in the future. There will likely soon be another bill to address the spread of disinformation, including deepfakes, as a threat to national security, she says. Another bill introduced on Tuesday would ban manipulative design practices that tech giants sometimes use to get consumers to give up their data. “It’s a multipronged attack,” Nkonde says.

Each bill is purposely expansive, encompassing different AI products and data processes in a variety of domains. One of the challenges that Washington has grappled with is that a technology like face recognition can be used for drastically different purposes across industries, such as law enforcement, automotive, and even retail. “From a regulatory standpoint, our products are industry specific,” Nkonde says. “The regulators who look at cars are not the same regulators who look at public-sector contracting, who are not the same regulators who look at appliances.”

Congress is trying to be thoughtful about how to rework the traditional regulatory framework to accommodate this new reality. But it will be tricky to do so without imposing a one-size-fits-all solution on different contexts. “Because face recognition is used for so many different things, it’s going to be hard to say, ‘These are the rules for face recognition,’” says Zevenbergen.

Nkonde foresees this regulatory movement eventually giving rise to a new office or agency specifically focused on advanced technologies. There will, however, be major obstacles along the way. While protections against disinformation and manipulative data collection have garnered bipartisan support, the algorithmic accountability bill is sponsored by three Democrats, which makes it less likely to be passed by a Republican-controlled Senate and signed by President Trump. In addition, currently only a handful of members of Congress have a deep enough technical grasp of data and machine learning to approach regulation in an appropriately nuanced manner. “These ideas and proposals are kind of niche right now,” Nkonde says. “You have these three or four members who understand them.”

But she remains optimistic. Part of the strategy moving forward includes educating more members about the issues and bringing them on board. “As you educate them on what these bills include and as the bills get cosponsors, they will move more and more into the center until regulating the tech industry is a no-brainer,” she says.

This story originally appeared in our Webby-nominated AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.
