
Regulators Take Aim At AI To Protect Consumers And Workers


NEW YORK — The nation’s top consumer finance watchdog has pledged to ensure that businesses comply with the law when using artificial intelligence, amid rising concerns over increasingly capable AI systems like ChatGPT.

Automated systems and algorithms already heavily influence credit scores, loan terms, bank account fees, and other financial matters. AI also shapes hiring, housing, and working conditions.

According to Ben Winters, senior counsel at the Electronic Privacy Information Center, the joint statement on enforcement that federal agencies released last month was a good first step.

However, “there’s this narrative that AI is entirely unregulated, which is not really true,” he argued. “What they’re saying is, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility for the repercussions of that decision.’ This is how we feel about it. We are watching.”

The Consumer Financial Protection Bureau has fined financial institutions over the past year for relying on new technology and faulty algorithms that led to wrongful foreclosures on homes, repossessions of cars, and lost government benefit payments.


Regulators point to these enforcement actions as examples of how there will be no “AI exemptions” to consumer protection.

Consumer Financial Protection Bureau Director Rohit Chopra said the agency is “continuing to identify potentially illegal activity” and has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists, and others to make sure we can confront these challenges.”

The Consumer Financial Protection Bureau (CFPB) joins the Federal Trade Commission, the Equal Employment Opportunity Commission, the Department of Justice, and others in claiming they are allocating resources and personnel to target emerging technologies and expose their potentially detrimental effects on consumers.

Chopra emphasized the importance of organizations understanding the decision-making process of their AI systems before implementing them. “In other cases, we are looking at how the use of all this data complies with our fair lending laws and regulations.”

Financial institutions are required by law to report the reasons for adverse credit decisions, under the Fair Credit Reporting Act and the Equal Credit Opportunity Act, for instance. Decisions about housing and employment are subject to similar rules. Regulators have warned against using AI systems whose decision-making processes are too complex to explain.

“I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said. “I think what we’ve learned is that that’s not the case.” The data itself may contain inherent biases.


Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows has pledged enforcement action against artificial intelligence (AI) recruiting technology that discriminates against people with disabilities, as well as so-called “bossware” that illegally monitors employees.

Burrows also discussed the potential for algorithms to dictate illegal working conditions and hours to people.

“You need a break if you have a disability or perhaps you’re pregnant,” she added, but the algorithm does not always account for that kind of accommodation. “Those are the sorts of things we’re taking a careful look at… The underlying message here is that laws still apply, and we have resources to enforce them. I don’t want anyone to misunderstand that just because technology is changing.”

At a conference earlier this month, OpenAI’s top lawyer advocated for an industry-led approach to regulation.

OpenAI’s general counsel, Jason Kwon, recently spoke at a technology summit in Washington, D.C., held by the software industry group BSA. He suggested that industry standards, and consensus around them, would be a good place to start, with more debate warranted about whether they should be mandatory and how often they should be revised.


Sam Altman, CEO of ChatGPT maker OpenAI, recently said that government action “will be critical to mitigate the risks of increasingly powerful” AI systems and advocated for establishing a U.S. or global body to license and regulate the technology.

Altman and other tech CEOs were invited to the White House this month to face tough questions about the consequences of these tools, though there is no indication that Congress will draft sweeping new AI legislation the way European politicians are doing.

As they have in the past with new consumer financial products and technologies, the agencies could do more to study and publish information on the relevant AI markets: how the industry works, who the biggest players are, and how the information collected is being used, according to Winters of the Electronic Privacy Information Center.

He said that the Consumer Financial Protection Bureau had dealt effectively with “Buy Now, Pay Later” businesses. “The AI ecosystem has a great deal of undiscovered territory. Putting that knowledge out there would help.”

SOURCE – (AP)

Kiara Grace is a staff writer at VORNews, a reputable online publication. Her writing focuses on technology trends, particularly consumer electronics and software. She has a keen eye for detail and a knack for breaking down complex topics.
