Artistic Self-Awareness – How to Engage with Generative AI Ethically

© onurdongel, Getty Images

Is the relationship between artists and AI complicated because we have yet to understand it? Who makes the rules for AI-generated art?

By Anindita Sen and Biswajyoti Bandyopadhyay

Most of the art that fills the world’s most famous museums today was produced by and for the elite. Those with the wealth to patronise artists, such as the Church or the nobility, were able to shape who and what would be represented and how certain subjects would be depicted.

One of the first technological advances to challenge this order was the invention of photography. Later, the advent of the digital medium and the internet would usher in a new generation of artists.

Today, AI represents the next level of democratisation: a tool that lowers the barrier to entry. We are told by the media and popular culture that it will challenge the gatekeeping that previously admitted only those with privilege, access to opportunities, or the gift of certain skills, tastes or sensibilities.

The chaotic input that will follow as the gates open could well expand the horizon, and even the definition, of art itself, making possible a greater diversity of emotional experiences, cultural representations and visual languages.

Learn how it works
However, we must be conscious that AI can also be used maliciously or monopolistically, depending on the input it receives. It can inadvertently make us party to copyright claims, and, as individuals interacting with AI, we can also feed its biases and discrimination.

The cohort interacting with generative AI for artistic production is made up primarily of technocrats and technicians. They have the industry knowledge and skills, but may lack the discursive awareness required to check the stereotypes inherent in the system. In other words, they may not even realise that they are perpetuating biases.

The South Asian market is also extremely cost-sensitive. Companies and agencies prefer such platforms because constant budgetary pressure makes generative AI an instant, lucrative and easy solution. Crunched timelines, however, further reduce the scope for critical learning and thinking.

AI demands artistic self-awareness at a deeper level. In an ideal world, it would liberate artists and creators from the mundane, allowing them to concentrate on the more challenging tasks that only a human can do.

The main goal for artists should be to use the medium effectively, ethically and responsibly.

Make AI use safer
Here, the buck stops with the government and the state. If government agencies and global organisations don’t step in to set rules and checks, online platforms may continue to align themselves with powerful people and groups, which could push artists to create content that serves the interests of these external forces.

The ethical use of AI is not the responsibility of the user alone. There should be laws, and a body governing the use of AI, to keep a check on where it can and cannot be used.

Many believe that decisions about how to use AI have to be made at a higher level. In 2022, for example, the United Nations System Chief Executives Board for Coordination (CEB) published the Principles for the Ethical Use of Artificial Intelligence in the United Nations System, and many countries are working on their own legal parameters for AI to keep users safe.

AI may spark a renaissance in our ability to do things faster, but sometimes at the cost of ethics. In a top-down approach, many big companies prioritise efficiency and let ethics go out the window. It really is up to users to draw the line, because these decisions are being made right now.

Artists can cultivate criticality by deepening their subjective knowledge of technology. To understand the technology is to use it well. Artists can only defend themselves properly when they are well prepared for how technology can be used against them.

Our experience with technology and big data over the last two decades has made it very clear that we must read the fine print and understand what rights we are signing away. When people signed up for social networks, many didn’t read the terms and didn’t know what data they were handing over; those companies are now selling that data to others. This chain of corruption has existed on the internet ever since Web 2.0. Such cases are often so convoluted that they require painstaking legal investigation to figure out what really happened. It is important to be cautious from the start.

Human or AI?
AI-generated images have become ubiquitous in our daily lives, especially in the digital space, sometimes with very real negative consequences. Is there any sure way to tell whether an image was made using an algorithm? The disconcerting answer is that you really can’t, unless there is a disclaimer. You may step into a café to admire the artwork on the walls without realising that it is AI-generated. These instances are real and happening all around us.

One way to get better at identifying generative AI is to approach it with curiosity rather than the fear that often accompanies new technology. The more you use the technology, the better you understand where its output comes from. The only practical way to tell AI and non-AI material apart is to become AI-fluent: much as a regular reader learns to recognise a particular author, you will gradually develop the skill to identify its styles and tell where an image is coming from.

Generative AI is already facing a great deal of controversy over copyright infringement, with many competing claims to the intellectual property of the art being created. How do we navigate this? How should the final creative output attribute and distribute credit among the various stakeholders? Is it the platform creator, the individuals feeding and training the AI, or the end user of the application? Who owns the rights to the IP being created?

AI in the future
The legal frameworks that govern AI are still in the early stages of being written. For now, the onus lies squarely with artists, who must take ownership not just of what they have created through AI, but also of the linguistic decision chain they used to create it. Even when we are within legal limits, we have to ask ourselves what we are using it for, and how complex our inputs and prompts are. Are we making them complex enough that we can say the result comes out of our own creativity, or are we just asking it to copy? Imagine there is an artist you want to take inspiration from, and they say that their data was used illegally. It is then your ethical duty to put your project aside and support them as a fellow artist.

Our engagement with AI is provoking us to question the ownership of art itself. As we move into the algorithmic age, we may have to ask more fundamental questions of ourselves as self-aware individuals, especially since all art is influenced by the experiences of the artist. One perspective is to ask whether or not an image contains the essence of the artist.

Artists draw lessons from the hundreds of things they experience around them. As of today, AI doesn’t make anything of its own; it takes prompts and produces variations on what already exists. It is a tool that cannot, for now, add anything extra. To do that, the AI would have to start feeling. A human artist, on the other hand, will always imbue their art with a little bit of themselves, and that is the difference.
 
