The Ethics of AI

A friend of mine recently posted the following questions to his Instagram about the ethics of AI. I thought I’d take that as a prompt to try to synthesize what I’ve been thinking about for the last little while.

  • Is generative AI run by major companies immoral?
  • Is generative AI run by individuals (self-hosted) immoral?
  • Is there such a thing as “ethical AI”?

To me, the answer to each of these is “it depends”. I’m not necessarily going to answer these questions explicitly; instead, I want to lay out how I’m thinking about these questions, and AI in general. I’ve left a lot out of this article, and I also haven’t cited any sources—these are just my thoughts based on things I’ve seen and read—so take this all with a massive grain of salt.

Ethics of Origin

Was the creation of the technology ethical?

Generative AI works on inference—that is, based on content that has already been created, it uses probability to generate output.
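As a (very simplified) illustration of what I mean by that, here’s a toy sketch of a “model” that does nothing but count which word follows which in its “training data”, then samples from those counts to generate output. No understanding, just probability over content that already exists:

```python
import random

# Toy "training data": the model only ever sees this existing text.
corpus = "the cat sat on the mat the cat ate".split()

# "Training": count which word follows which (a bigram table).
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """'Inference': repeatedly sample the next word from observed counts."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        # Sampling from the list of observations weights by frequency.
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 4))
```

Real models operate on tokens with billions of learned parameters rather than a bigram table, but the shape of the process is the same: every word it can emit was derived from content somebody else made.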

The ethical questions at hand here are:

  • Was the content used to train the AI model obtained ethically?
  • Is it ethical to use a model that is trained on copyrighted material?

The answer to the former is a resounding “No”—Meta torrented over 80 terabytes of pirated books to train their models, there’s an open lawsuit by the New York Times against OpenAI, and the source of training data for image, audio, and video models is equally dubious.

Courts in the US have ruled that the use of copyrighted material for training AI is “Fair Use”, but legal doesn’t necessarily equate with ethical. Without the mass ingestion of dubiously-procured content, without compensation to the original creators, none of these models would be as powerful as they are today.

I wouldn’t blame you if you stopped here and said AI is unethical. But I think the reality is more complex, as I’ll keep exploring.

There are many technologies that have dubious origins. Nuclear power would likely not exist without the Manhattan Project, which directly led to the deaths of about a quarter million people. And the Germans of the same era made medical advances by severely unethical means. Even from where I write this, I sit on unceded (read: stolen) Lenape land. There are unethical origins in everything around us, and we’re just expected to be OK with that.

Maybe this is a question of choice? I didn’t choose to be born in North America, inheriting the sins of others, but I can choose whether I adopt the new thing.

Ethics of Use

The answer to the latter question is a bit more complicated in my mind. If you replace the AI with a person, the question becomes “is it ethical to create content heavily influenced by copyrighted material?” The answer to this question is obviously “yes”—that’s how the creative process works.

“Good artists copy. Great artists steal.” (attributed to Picasso)

But Picasso isn’t saying that “great artists” take others’ artistic styles and pass them off as their own. Rather, he’s saying that great artists transform ideas from many external sources, embed them in their practice, combine them with their own insight, and make them their own.

This is something that generative AI models, almost by definition, cannot do. Unless you consider a random number generator the model’s “insight”, the model is doing nothing but copying elements of existing works and presenting them as novel output.

“But my prompt is inspired”—You’re getting ahead of me—keep reading.

Ethics of Operation

Is operating the technology ethical?

The primary ethical concerns with operating AI are power consumption, water consumption, and personal data privacy. While it might more accurately live under “Ethics of Origin”, I’ll consider the resource consumption of training the models here.

Training an AI model is by far the most resource-intensive part of generative AI. While it is a one-time cost (per model), it’s a huge energy cost, with power draws on the order of hundreds of kilowatts, or even megawatts (though it’s hard to find any solid data on this). In a climate-changing world, if this power isn’t sourced from sustainable generation, it would be hard to consider the creation of these models ethical.

On the other hand, the resource consumption of running a model is relatively low. In my mind there’s no significant difference between running an AI model, and other resource intensive data-center processes (like streaming Netflix, or rendering CGI). The only difference is that of scale. If generative AI is being used significantly more than other computation, then there may be an argument that its use could be unethical on a global scale. But again, the ethics of running an AI data-center are tied up with the ethics of cloud-computing and the technology industry as a whole.

In terms of data privacy, my opinion is that this is a question of the ethics of a specific AI product, not the technology itself. You could avoid any privacy concerns by running a model locally.

Again, I wouldn’t blame you if you wanted to stop here and claim AI is unethical. But I want to keep going.

Ethics of Output

Is the end output of the technology ethical?

Is it ethical to generate content with generative AI and claim it as your own? It depends on what was generated and how you use it.

Writing Code

As a software developer, this is where I spend most of my time thinking about AI. Is it ethical to use AI to help write code? On the “ethics of origin” of AI code generation, there is enough open source software out there that, in my opinion, the models don’t need copyrighted or closed source software to be just as powerful.

But is it ethical to use AI to write code—to “vibe code”, as it were? In my opinion, yes, it is ethical, but it may not be the smart decision. This is probably the topic for a whole separate post, but the crux of it is: AI-generated code is only as good as the “prompt engineer’s” understanding of the underlying software systems. In my experience, these tools are best used to complete small pieces of code whose place in the system you understand, rather than trying to wholesale generate an app.

AI answers & summaries

Is an AI summary of an article or Google search ethical? Is asking a question to ChatGPT ethical? I don’t think there’s anything necessarily unethical about these scenarios. My answer to this question comes down to how the summary is presented. Is the answer presented as truth, or rather as the best prediction by the AI model? If the AI product isn’t clear about how an answer is generated, then that product may be behaving somewhat unethically, but I don’t think this is necessarily an indictment of the technology.

There are also the sociological implications of products like ChatGPT, where people are developing parasocial attachments to the AI. This is again more an indictment of the product than the technology itself.

Creating “Art”

If you use AI to generate what would typically be considered “art” (stories, images, music, films, etc.) and then try to pass it off as your own, then no, I don’t believe that’s ethical.

In art, there are a number of things we typically attribute to its value. There’s the value we give to the craft and effort that went into making the piece—the objective value—and there’s the meaning, the value we ascribe to the final piece—the subjective value.

For an AI generated piece of “art”, there is no objective value. No amount of craft went into the generation of the output.

“But my prompt is inspired”. Oh, you again. We could possibly say that the creation of the prompt is a form of “craft”, and that there is some amount of objective value in the final output. Is this enough to consider the output “art”? Maybe—I’m not trying to answer “what is art?” here. Is its use ethical? Again, maybe.

What is the final purpose of the generated output? Are you trying to cut out “real” artists and their dedication, craft, and insights, and pass off the generated output as a final product? If so, no, this is not ethical (and it probably won’t be as good).

But there’s a middle ground that I believe is ethical AI use for artists, and any creator, in their craft. Using generative AI to create temporary artifacts to help with your practice can be an ethical and productive use of the technology.

In what may be a sign of my position, the only AI used in the creation of this rant was to generate some code to get it on my website. (Which in retrospect might have been a mistake—Copilot really mangled my Netlify configuration).

Generating “realism”

If the purpose of your AI output is to create something “realistic”, with the intent of passing it off as “real”, then you’re lying to people, and no, that’s not ethical. This includes “deep fakes” (sorry, “cameos”) of anybody, depictions of events that did not happen, propaganda, and porn. I don’t think any use of AI where the intent is to deceive can be called ethical.

Is using AI ethical?

The ethics of AI is complex and multifaceted. Did it come about ethically? Is it run ethically? Is the final output ethical? While I think the answer to the majority of these and their underlying questions is “no”, I don’t know if I believe the answer to the ultimate question is as resounding a “no” as it might seem at first.

I’m reminded of the scene from The Good Place where the characters discover that, due to a complex series of externalities, nobody has made it to the “Good Place” in 500 years. There’s no avoiding “unethical” actions in an absolute sense in a complex, globally integrated economy. To me, the question “is AI use ethical” needs to be answered against this “background ethics” of living in modern society.

(I’m also reminded of the meme in which a peasant complains we should improve society, yet is criticized for hypocritically “participating in society”)

I may be oversimplifying (and I may be trying to assuage my own conscience), but I think that if you can put up with living in a society with such complex externalities and ethical ambiguity (as we all have to), then use of AI can be considered ethical, but only if the end use of the output is ethical.