Artificial intelligence (AI) has rapidly evolved from hype to a technological powerhouse, sparking debates on its regulation and ethical use.
In Pakistan, AI is gradually influencing sectors like e-commerce and finance, mirroring global trends. However, fears linger about AI potentially turning against its creators, as seen in movies like Terminator. While these risks seem distant, the “AI safety” camp warns of scenarios where AI could surpass human intelligence and act independently with misaligned goals.
In November 2023, global leaders gathered at an AI safety summit to address potential threats. Critics, however, argue that the focus should be on current issues like bias in AI, disinformation, and the violation of intellectual property and human rights. These problems already impact industries and individuals, particularly in countries like Pakistan, where data and technology regulations are still developing. The challenge lies in balancing innovation with safety.
AI systems have often failed in real-world scenarios. Google’s image-labelling AI once misidentified Black individuals as gorillas, and facial recognition technology has repeatedly misidentified people of colour due to biased training data. In recruitment, AI has favoured male candidates, while deepfakes are being used for malicious purposes such as fake political speeches. In Pakistan, these risks grow with the rise of social media, while lawsuits from artists highlight AI’s misuse of intellectual property.
In a recent statement, experts emphasised the necessity for AI systems to respect human rights, embrace diversity, and promote fairness. This guiding principle compels a thorough examination of how AI technologies are designed and implemented, ensuring they foster equality rather than perpetuating or worsening existing biases.
“Despite the remarkable advancements made by the current generation of large language models (LLMs) in mimicking human-like intelligence, these systems are not without significant flaws. Key issues such as hallucinations, lack of grounding in real-world contexts, unreliable reasoning, and opacity stem from the fundamental architectures and training methodologies of these models. These challenges are not simply technical glitches; they represent inherent limitations that raise serious concerns about the safety, robustness, and true intelligence of AI systems,” says Jawad Raza, named among Corinium’s Global Top 100 Innovators in Data & Analytics.
Elaborating further, he said that the call for ethical AI deployment is echoed by various organisations, including Unesco, which stresses the importance of transparency and explainability in AI systems to safeguard human rights and fundamental freedoms. The organisation advocates robust oversight and impact assessments to prevent conflicts with human rights norms. Moreover, the UN High Commissioner for Human Rights has highlighted the need for regulations that prioritise human rights in the development of AI technologies.
This includes assessing the potential risks and impacts of AI systems throughout their lifecycle, ensuring that technologies that do not comply with international human rights laws are either banned or suspended until adequate safeguards are established. “As AI continues to evolve, it is crucial for stakeholders to engage in ongoing discussions about the ethical implications of these technologies, ensuring that they are developed with a commitment to fairness and inclusivity,” he opined.
Where does Pakistan stand?
Pakistan is in the early stages of developing comprehensive regulations and ethical requirements for artificial intelligence. However, like many other countries, it is becoming increasingly aware of the importance of AI governance. Muhammad Aamir, Senior Director of Engineering at 10Pearls, says that as the Personal Data Protection Bill progresses, regulations must be robust enough to safeguard individuals’ privacy rights, particularly in AI applications.
“Secure data handling aligned with international standards is crucial. At the same time, AI developers and users will need clear guidelines ensuring algorithmic transparency and accountability. Standards for explainability and audit trails will be key in this process. Ethical concerns also arise around bias and fairness, as AI systems must be free from inherent discrimination.
“Notable cases, such as the Gender Shades project, highlight alarming error rates of up to 34.7pc for darker-skinned women in facial recognition systems, compared to just 0.8pc for lighter-skinned men. Sector-specific regulations for healthcare, law enforcement, and surveillance are essential to ensure AI operates responsibly in critical areas.
“In education, equitable access to AI learning must be prioritised, along with addressing its impact on the job market. For AI research and public sector adoption, ethical guidelines and transparent practices will foster public trust.”
He further says that special provisions for women and persons with disabilities must be established, ensuring inclusivity in AI education and resource access. Overseeing all these efforts, the AI Regulatory Directorate, under the National Commission for Personal Data Protection, can ensure compliance with ethical standards across the board.
The writer is the head of content at a communications agency
Published in Dawn, The Business and Finance Weekly, September 23rd, 2024