
Navigating the Ethics of AI

Abigail McDonald
November 6, 2024

Artificial Intelligence (AI) is transforming industries and impacting society in ways that may not yet be fully recognized. In healthcare, AI is used to support complex diagnoses, reviewing MRIs, CT scans, and mammograms to detect cancer with greater accuracy. AI has also reshaped the way society interacts with smart devices—from virtual assistants like Siri and Alexa to smart security features such as facial recognition. However, as AI continues to transform society, it raises serious ethical challenges. Of critical concern is algorithmic fairness: are AI systems unintentionally perpetuating biases? Additionally, who is accountable when an AI system makes a wrong decision? Addressing these concerns is an essential undertaking for organizations, policymakers, and society to ensure that AI systems are trustworthy and aligned with societal values.

What Is “Ethical AI”?

While everyone may have their own definition of what is ethical, ethical AI is a more focused concept. Faculty.ai defines ethical AI as follows:

Ethical AI is when AI systems are designed and deployed in ways that consider the harms associated with AI systems and [have] mitigated them. These harms include bias and discrimination, unsafe or unreliable outcomes, unexplainable outcomes and invasions of privacy.

Why Does AI Ethics Matter?

Since AI is pervasive in critical industries such as healthcare, finance, and public services, and can affect individuals on a deeply personal level, ethical considerations are essential to its design. For example, Amazon’s hiring algorithm was found to discriminate against female applicants because it was trained on resumes submitted over the previous decade, most of which came from men. Without clear ethical guidelines in place, the use of AI can unintentionally introduce biases into decision-making, such as those evidenced in Amazon’s recruiting process.

Because AI is evolving at a pace that laws and regulations struggle to match, accountability gaps can arise when mistakes are made. Incorporating ethics into AI from the design phase mitigates this risk, allowing the technology to work for everyone in a fair, transparent, and responsible manner.

Key Principles of AI Ethics

EU AI Act

The EU Artificial Intelligence Act, a landmark piece of legislation aimed at comprehensively regulating AI, entered into force in August 2024 to address many of the ethical concerns associated with AI’s use. The act sorts AI systems into risk-based categories:

  • Unacceptable Risk (Prohibited AI Systems): These AI systems are banned because they are too dangerous to use; they include systems used for social scoring and subliminal manipulation.
  • High-Risk AI Systems: These are systems used in critical areas like healthcare and law enforcement. They are required to meet strict security and ethical standards, including risk management, data governance, and human oversight.
  • Transparency Obligations: These are lower-risk AI systems such as chatbots and spam filters. Although the risk is low, users must be informed when they are interacting with these systems to enable trust and accountability.

NIST AI RMF

In addition to the EU Artificial Intelligence Act, the NIST AI Risk Management Framework (AI RMF) is a commonly referenced source of AI ethics guidance. The framework is organized around four core functions for managing AI-related risks (illustrated in the sketch after the list):

  1. Govern: Ensures that AI systems align with ethical standards and organizational values, and that accountability is integrated into the design and development of AI systems.
  2. Map: Identifies risks within AI systems and the factors that contribute to those risks.
  3. Measure: Assesses the impact of identified risks using quantitative, qualitative, or mixed-method tools to ensure that AI systems work as intended.
  4. Manage: Allocates resources to the risks that have been mapped and measured so the organization can respond to, and recover from, identified AI risks effectively.
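
As a rough illustration of how these four functions might fit together in practice, here is a minimal sketch of an AI risk register in Python. The record fields, severity scale, and example entries are assumptions for illustration only; they are not part of the NIST framework itself.

```python
# A minimal AI risk register organized around the AI RMF's Govern/Map/
# Measure/Manage functions. Field names and the 1-5 severity scale are
# illustrative assumptions, not prescribed by NIST.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str                  # Map: the identified risk
    source: str                # Map: the factor contributing to it
    severity: int = 0          # Measure: quantitative score (assumed 1-5 scale)
    notes: str = ""            # Measure: qualitative assessment
    mitigation: str = "TBD"    # Manage: planned response
    owner: str = "unassigned"  # Govern: accountable party

@dataclass
class RiskRegister:
    system: str
    risks: list[Risk] = field(default_factory=list)

    def unmanaged(self) -> list[Risk]:
        """Risks measured as severe but with no mitigation assigned yet."""
        return [r for r in self.risks if r.severity >= 4 and r.mitigation == "TBD"]

register = RiskRegister(system="resume-screening-model")
register.risks.append(Risk(
    name="gender bias in candidate rankings",
    source="historical, male-skewed training data",
    severity=5,
    notes="disparate impact observed in backtests",
    owner="AI ethics board",
))
for risk in register.unmanaged():
    print(f"Needs a mitigation plan: {risk.name} (owner: {risk.owner})")
```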

AI Best Practices

Organizations can embed ethical considerations into their AI technology by using these strategies:

  • Ethical Audits: Regularly assess AI systems to ensure they meet ethical standards, such as those outlined in the EU Artificial Intelligence Act and the AI RMF.
  • Diverse Training Data: AI systems are only as good as the data they are trained on, making it imperative that training data be reviewed for existing biases before use (see the sketch after this list).
  • Diverse Training Teams: Ensure that development teams are diverse so that a range of perspectives is considered in design, training, and deployment.
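
As an illustration of what a basic training-data bias review might look like, here is a minimal sketch in Python. The dataset, column names, and the four-fifths threshold are assumptions for illustration; a production audit would use dedicated fairness tooling and more robust metrics.

```python
# A minimal training-data bias check: compare positive-outcome rates
# across demographic groups in a hypothetical hiring dataset.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate; values
    below ~0.8 are a common red flag (the 'four-fifths rule')."""
    return rates.min() / rates.max()

# Hypothetical historical hiring data, mirroring the Amazon example above.
data = pd.DataFrame({
    "gender": ["male", "male", "male", "male", "female", "female"],
    "hired":  [1,      1,      0,      1,      0,        1],
})

rates = selection_rates(data, "gender", "hired")
print(rates)  # per-group hiring rates
print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
```

Here the ratio comes out to 0.67, below the common 0.8 threshold, which would flag the dataset for further review before training.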

Privacy and Data Concerns

Because AI systems require large amounts of data to work well, they raise privacy concerns. How do organizations strike a balance between using data to power AI and protecting people’s privacy? Best practices include:

  • Data Minimization: Use only the data needed for the AI system’s intended purpose.
  • Informed Consent: Ensure that people know how their data will be used and have the option to opt out.
  • Anonymization: Strip personal information that can directly identify a person, such as names, addresses, phone numbers, and Social Security numbers (a minimal sketch follows this list).
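
The sketch below shows one way data minimization and anonymization might be applied before data reaches a model. The record schema and field names are hypothetical; production systems would pair this with stronger techniques such as tokenization or differential privacy.

```python
# A minimal sketch of data minimization and anonymization on a
# hypothetical user-records table.
import hashlib
import pandas as pd

PII_COLUMNS = ["name", "address", "phone", "ssn"]   # direct identifiers (assumed schema)
NEEDED_COLUMNS = ["user_id", "age_band", "region"]  # only what the model requires

def minimize(df: pd.DataFrame) -> pd.DataFrame:
    """Data minimization: keep only the fields the AI system actually needs."""
    return df[[c for c in NEEDED_COLUMNS if c in df.columns]].copy()

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

records = pd.DataFrame({
    "user_id": ["u001", "u002"],
    "name": ["Jane Doe", "John Roe"],       # direct identifier: dropped
    "ssn": ["123-45-6789", "987-65-4321"],  # direct identifier: dropped
    "age_band": ["30-39", "40-49"],
    "region": ["NE", "SW"],
})

safe = minimize(records.drop(columns=PII_COLUMNS, errors="ignore"))
safe["user_id"] = safe["user_id"].map(lambda v: pseudonymize(v, salt="example-salt"))
print(safe)  # minimized, pseudonymized records safe to feed downstream
```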

Accountability and Governance

As AI becomes more ingrained in organizational processes, ensuring accountability is critical. Companies should establish governance bodies, such as an AI ethics board, that can actively manage and monitor AI development and deployment. Organizations should also keep up with proposed legislation such as the Algorithmic Accountability Act, which would require organizations to assess the impact of their AI technologies, facilitating transparency and fairness in AI outcomes.

The Road Ahead for AI Ethics

While the challenges of AI ethics are complex, they also provide an opportunity to guide the future of technology in a way that reflects societal values. Organizations and policymakers should work together to develop guidelines and regulations that ensure AI is used ethically.

Although AI has already begun to deliver on its potential to improve society, it is crucial that it is managed and monitored from development to deployment so that its benefits are shared equitably. By focusing on fairness, transparency, and accountability, we can create an AI-driven future that works for everyone.

AI Ethics and ROCIMG’s Role

As AI becomes a key part of business operations, it is essential that both its ethical and security aspects are managed effectively. At ROCIMG, we help clients navigate these challenges by developing AI policies and data governance best practices that align with the latest security standards. Our commitment to ethical AI reflects our vision for a future where AI technology aligns with societal values.



About the Author

Abigail McDonald
Cybersecurity GRC Analyst

With 3+ years in cybersecurity consulting, Abigail brings deep expertise in risk management, vulnerability assessments, and strategic security planning. She is adept at both proactive and reactive security measures, excelling in performing cybersecurity risk assessments and audits against industry-standard frameworks, developing robust security policies and procedures, conducting vulnerability and penetration assessments, and developing remediation roadmaps.
