Jun 27, 2024

6 Ethical Considerations for Building AI Products

Image generated by AI – DALL·E 3

AI is always advancing

Advancements in computer chips and refined algorithms have allowed AI to appear in more places than ever in our daily lives. A week doesn’t go by without a new AI model or feature being released, promising more efficient work and more intelligent decisions.

Platforms like OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s AI features in Bing all offer new ways of generating text, photos, and (soon) even video that can help individuals across the workforce.

For all the benefits that AI platforms provide, there is always risk associated with using these tools. It is critical to consider the ethical implications of using these platforms and, perhaps even more so, of the process of creating AI tools.

Moral AI And How We Get There

In the book Moral AI And How We Get There, Borg, Sinnott-Armstrong, and Conitzer explore how AI tools should be built ethically and fairly. They lay out 6 distinct considerations that are critical for anyone creating an AI product or adding AI as a feature.

6 Ethical Considerations

1. Safety

For a tool to be truly useful to the general public, it must be safe to use. Technologies like deepfakes and autonomous vehicles can pose real threats to ourselves and others.

2. Equality

AI models always contain some form of bias. Ethical firms work to counter these biases, but it is clearly an uphill battle. For instance, an AI model meant to flag high-risk medical patients favored white individuals over black individuals with the same illness.

3. Privacy

Companies like Cambridge Analytica worked to influence election outcomes by collecting and leveraging users’ social media data. Further, countries like China are using facial recognition to surveil the Uyghur people – a new form of social profiling.

4. Freedom

Undermining people’s privacy can easily slide into removing an individual’s freedom. Governments can use surveillance to track individuals’ religious preferences and stop certain groups from practicing their religion.

5. Transparency

Tools are being developed and used to scan résumés or assign credit scores. Often the algorithms are proprietary or not even fully understood by their developers. This lack of transparency can easily lead to unintended and potentially harmful outcomes.

6. Deception

Beyond accidental problems, there is genuine concern about people intentionally using these tools to cause harm. Deepfakes and false news can spread quickly and influence individuals’ perceptions of everything from companies to presidential candidates.

Thoughtful Building of AI

New AI breakthroughs are already on the horizon, and new tools and new ways of interacting with computers are inevitable. However, those looking to utilize this technology should carefully consider the impact that their use of AI will have on their users, their community, and even the world.

As with any tool, there is the potential for breakthrough benefits, but there is also the possibility of great harm. It is up to the creators and implementers of these tools to consider the ramifications.

Thomas Clapper
Marketing and Brand Manager