
The Data Scientist


Let’s talk about the right way to build AI: ethically, of course

Introduction

What once sounded like a utopian idea has met reality. What was once merely desired has come to be realised. Artificial intelligence (AI) has been tremendously useful, offering help to users worldwide: sellers, customers, educators, learners, practically everyone concerned. It continues to deliver impressive solutions today.

But ever since its introduction to the business landscape, stakeholders have largely accepted AI as it comes, rarely understanding or caring about the risks that come with it. In the rush to adopt AI, ethics and responsibility are taking a backseat at many companies, notes Lakshmi Varanasi in an article on Business Insider.

Perhaps this is where the need to understand the fundamentals of ethical AI, and the dire need to put those practices in place, comes into the picture.

AI is undoubtedly and rapidly transforming the world we live in. From self-driving cars to personalised recommendations, AI is gathering tremendous pace and becoming an integral part of our daily lives. It is fast becoming indispensable.

But many AI models, particularly deep learning models, are considered ‘black boxes’ because it’s difficult to understand how they arrive at their decisions. It’s like trying to figure out a magician’s trick: you know the outcome, but the process behind it remains a mystery.

However, accepting that every technology comes with limitations that must be paid heed to, we move on to the discussion at hand: the ethical part of it. As AI becomes more sophisticated and mainstream, it is vital to ensure that it is developed and deployed ethically.

What’s ethical AI and the pillars it stands on?

Ethical AI fundamentally refers to developing and deploying AI systems, and the practices embedded in them, in a way that aligns with ethical principles and values. It involves considering the potential social, economic, and environmental impacts of AI and ensuring that these systems are used responsibly and beneficially. The underlying idea is to follow a sustainable, transparent and equitable approach to technology, so that businesses can thrive, individuals and organisations go unharmed, and the world is impacted positively.

Some key aspects, or pillars, of ethical AI are best categorised as follows:

Fairness and non-discrimination: AI systems should be designed and trained so that they treat every stakeholder fairly and avoid discrimination based on factors such as race, colour, gender, age, demography or religion.

Transparency and explainability: The workings of AI should be understandable and explainable to the extent possible. This means it should be possible to understand how a system makes decisions, and what factors it considers when synthesising and outputting information.

Accountability: Accountability leads to ownership of responsibility. Making the people at the helm of AI development and deployment accountable would be a great step forward. In short, proper mechanisms must be put in place to hold developers and users of AI systems answerable for their actions.

Privacy and data protection: AI systems, with all their flaws and vulnerabilities considered, should be devised so that they respect every individual’s privacy rights and protect their personal data.

Enlightenment about AI’s beneficial use: This rather comes from within. Though we can take on the responsibility of enlightening stakeholders about AI’s potential benefits and the reasons not to use it to anyone’s detriment, it ultimately lies with the stakeholders themselves. Responsible use of AI should become a social norm: it must be used for beneficial purposes, not to cause harm.

AI: addressing the pressing concerns and proposing solutions

One of the most pressing ethical concerns surrounding AI is bias. AI systems are trained on vast datasets, and if these datasets are biased, the AI will learn to perpetuate those biases. If the data used to train an AI system is skewed towards a particular group or demographic, or carries flawed information or predispositions, the system is bound to produce partial outcomes. That’s the GIGO (garbage in, garbage out) principle in practice.
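A minimal sketch of how such skewed outcomes can be surfaced in practice: comparing the rate of favourable outcomes a model gives each group. The data and the “model predictions” below are invented for illustration, and the group labels are hypothetical.

```python
def positive_rate(outcomes, groups, target_group):
    """Share of favourable (1) outcomes received by one group."""
    pairs = [o for o, g in zip(outcomes, groups) if g == target_group]
    return sum(pairs) / len(pairs)

# Toy predictions from a model trained on skewed data: group "B"
# was under-represented in training, and it shows in the outcomes.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = positive_rate(outcomes, groups, "A")  # 4/5 = 0.8
rate_b = positive_rate(outcomes, groups, "B")  # 1/5 = 0.2
print(f"A: {rate_a:.1f}, B: {rate_b:.1f}, gap: {rate_a - rate_b:.1f}")
```

A gap of this size between groups, sometimes called a demographic-parity gap, is exactly the kind of partial outcome the GIGO principle warns about.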

Consider, for instance, that facial recognition systems have been shown to be less accurate for people of colour, particularly women. Knowing this issue exists, what is a probable solution?

In such a case, it is essential to ensure that AI training data is diverse and representative of the population it will serve. A holistic dataset brings the possibility of patching up shortcomings and anomalies, if any, thereby catering to a larger section of the population. Moreover, developers must be more vigilant in identifying and mitigating biases within their AI systems.
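One simple, hedged way to act on this advice is to compare each group’s share of the training data against its share of the population the system will serve. The dataset counts and population shares below are assumptions made up for the sketch.

```python
from collections import Counter

def group_shares(samples):
    """Fraction of the dataset contributed by each group label."""
    counts = Counter(samples)
    total = len(samples)
    return {g: c / total for g, c in counts.items()}

# Illustrative training-set labels vs. an assumed population benchmark.
training_groups = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
population_shares = {"A": 0.5, "B": 0.3, "C": 0.2}

shares = group_shares(training_groups)
for g, target in population_shares.items():
    drift = shares.get(g, 0.0) - target
    flag = "under-represented" if drift < -0.05 else "ok"
    print(f"{g}: dataset {shares.get(g, 0.0):.2f} "
          f"vs population {target:.2f} ({flag})")
```

A check like this cannot prove a dataset is fair, but it flags obvious under-representation before a model is ever trained on it.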

Another major ethical concern looming over the rampant use of AI is privacy. There is no denying that the AI systems in use today collect, store and synthesise large amounts of personal data. The catch? This data is highly vulnerable to cyberattacks and online scams. That makes it crucial that the data so obtained and utilised is closely protected and guarded from all possible corners. Any unauthorised access paves the way for its misuse and poses a potential threat to privacy. The use of AI for surveillance and control also raises concerns about privacy and civil liberties.

To address this issue, companies must implement robust data privacy measures and be transparent about how they collect and use data. This includes obtaining informed consent from individuals before collecting their data, providing them with clear information about how it will be used, and then sticking to it, with genuinely no incidents of data leaks.
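One common building block of such measures is pseudonymisation: replacing direct identifiers with keyed hashes before storage, so records can still be linked without keeping the raw value. The secret key, field names and record below are all illustrative assumptions, and pseudonymisation alone is not full anonymisation.

```python
import hashlib
import hmac

# Assumption: this secret is stored securely, outside the dataset itself.
SECRET_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed SHA-256 hash so that
    records can be joined on it without exposing the raw value."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase": "book"}
stored = {
    "user_key": pseudonymise(record["email"]),  # raw email is never stored
    "purchase": record["purchase"],
}
print(stored["user_key"][:12], stored["purchase"])
```

The keyed hash is deterministic, so the same individual always maps to the same key, while anyone without the secret cannot recover the original identifier from the stored data.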

Finally, there is the pressing issue of accountability. When an AI system makes a harmful decision that jeopardises the image and dignity of an individual, who is responsible? It is as if the road has been laid, but the rules of driving have not been put in place.

So that leads us to the question of who is the culprit: the developer, the user, or the AI itself? It is vital to establish clear guidelines for accountability to prevent unintended consequences and to ensure that the people behind harm brought about by AI do not go unpunished. This may involve developing ethical frameworks for AI development and deployment, as well as creating mechanisms for oversight and accountability, and that is never too much to ask for. For a field that is evolving tremendously and promises to become even more mainstream in the days ahead, it is frankly never too much.

Wrapping Up

To wrap up rather quickly, let’s settle on the fact that AI has the potential to transform a plethora of disciplines, and the entire world, for the better. However, to ensure that AI is a force for good, we must tread cautiously.

We, as responsible businesses and developers, ought to prioritise ethics in AI’s development and deployment. By addressing issues such as bias, privacy, and accountability, we can build and configure AI systems that benefit society as a whole. This will require a collaborative effort from developers, policymakers, and the public to ensure that AI is developed ethically and used responsibly.

At the same time, growing awareness of how a below-par understanding of AI can cost us should be taken seriously. And that covers pretty much everything: AI’s potential to exacerbate existing social inequalities, to produce convincingly real-looking fake images, to deliver flawed outcomes, or to misinterpret medical reports.

It all comes down to understanding, and if we can grasp the gravity of the issues that surround AI, the technology and business landscape can become pleasingly positive. Maybe then, deliberate efforts to enforce ethical AI practices would no longer be needed. Till then, the ascent is arduous, but the view from the peak will be worth the struggle.

Author bio:

Dr. S.Z. Khan has been a university-level academician and is currently the VP of Marketing & Strategy at Hyperzod. He is an admired mentor to students and a thought leader for professionals. He has authored several research publications, including book chapters on topics of evolving interest. He writes avidly on a vast range of topics, chiefly technology, AI, education, marketing, and everything business.

Some links to previous work:

https://builtin.com/articles/navigating-hype-hope-and-doom-openai-sora

https://hackernoon.com/fooling-the-masses-the-allure-and-dangers-of-deep-fakes

https://www.smartbrief.com/original/mimetic-behavior-and-aspiration-economy-whos-really-choosing-what-you-want