Stakeholders call for fair, responsible AI for consumers
ISLAMABAD: On World Consumer Rights Day (WCRD) being observed today (Friday), stakeholders have called for fair and responsible artificial intelligence (AI) for consumers, warning that, left unchecked, the technology can play havoc with their lives.
AI can only be used with reduced risks and increased benefits if these technologies are programmed to follow consumer-friendly ethical standards and principles, says The Network for Consumer Protection (TNCP) on the occasion of WCRD. TNCP is a coalition of several NGOs working on various issues, including AI.
The questions posed by TNCP for building the foundations for genuine transparency and trust include: What measures are needed to protect consumers against deep fakes and misinformation? How do we ethically navigate the collection and use of consumer data? Who is responsible when a person is harmed by generative AI?
It is worth mentioning that the recent breakthroughs in generative AI have taken the digital world by storm. Its adoption by consumers has grown rapidly, and the technology is set to have an enormous impact on the way we work, create, communicate, gather information, and much more. This year, WCRD’s theme is focused on highlighting concerns like misinformation, privacy violations, and discriminatory practices, as well as how AI-driven platforms can spread false information and perpetuate biases.
Nadeem Iqbal, CEO of TNCP, said there was a real opportunity: if used properly, generative AI can enhance consumer care and improve channels of redress.
“However, it will also have serious implications for consumer safety and digital fairness. With developments taking place at breakneck speed, we must move quickly to ensure a fair and responsible AI future. While the world is developing the regulatory framework for AI, it is imperative for Pakistani policymakers that, as a first step, they incorporate consumer-friendly principles into the existing regulations that might be used to curb AI misuse,” he said, calling on the government to act.
The Norwegian Consumer Council, in its report “Ghost in the Machine: Addressing the Consumer Harms of Generative AI,” says that the discussion about how to control or regulate these systems is ongoing, with policymakers across the world trying to engage with the promises and challenges of generative AI. Though they do not have the answers to all the questions raised by generative AI, many of the emerging or ongoing issues can be addressed through a combination of regulation, enforcement, and concrete policies designed to steer the technology in a consumer and human-friendly direction.
BEUC, the Brussels-based European consumer organisation, has called on the European Parliament to improve the draft AI Act, stating that the data sets used to train generative AI systems needed to be subject to important safeguards, such as measures to prevent and mitigate possible biases.
The World Health Organisation (WHO) has also released new guidance on the ethics and governance of large multi-modal models (LMMs), a fast-growing type of generative AI technology with applications across healthcare.
The guidance outlines over 40 recommendations for consideration by governments, technology companies, and healthcare providers to ensure the appropriate use of LMMs to promote and protect the health of populations.
“Generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said the WHO. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”
Published in Dawn, March 15th, 2024