peter's bookstore

exploring the edges of the extraordinary

can ai create

on the notion of creation and why gpt detectors do not work

last updated on december 31st


When we consider the concept of AI-generated content, the scope of the training data profoundly influences the nature of the AI's output. A model trained on a limited, personal dataset may more closely reflect the nuances of an individual's writing style, whereas a model trained on the vast corpus of internet text would likely produce more generalized content.

However, in both cases, can we truly consider these outputs 'creations' of AI?

The term "created by AI" is somewhat misleading. As it currently operates, AI does not genuinely create in the human sense. It doesn't conceive ideas or experiences from a vacuum but rather synthesizes new combinations based on existing data.

This is a form of generative creativity, but it differs fundamentally from human creativity, which often involves drawing from personal experiences, emotions, and conscious thought.

The AI's 'creativity' is thus a reflection of its programming and of the data it has been fed.

The concern that AI's capability to generate original content might lead to significant challenges is valid. Take DALL-E 3 as an example: if AI reaches the point where its output is indistinguishable from human-created work, it raises both ethical and practical questions.

Ethically, it would challenge our understanding of authorship and intellectual property. Practically, it could lead to a flood of AI-generated content that overwhelms human-created works.

It's important to acknowledge that while AI doesn't create in the human sense, its ability to recombine and recontextualize information at an unprecedented scale is something to take seriously and consider carefully.

This is why, at their core, AI text classifiers often struggle with nuances such as context, tone, and ambiguity: they, too, rely on algorithms trained on large datasets to identify and categorize text based on learned patterns.

Sarcasm and idiomatic expressions tend to go right over their (very literal) heads.
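
To make that concrete, here is a minimal sketch of the kind of pattern matching such a classifier does. It uses scikit-learn, and the toy texts, labels, and the "AI vs. human" framing are purely illustrative assumptions, not how any real detector is built or trained:

```python
# A minimal sketch of how a text classifier works under the hood:
# it learns statistical patterns from labeled examples, not meaning or intent.
# The texts and labels below are made up purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset: 1 = "AI-generated", 0 = "human-written" (hypothetical labels)
texts = [
    "In conclusion, it is important to note that the aforementioned factors",
    "Furthermore, the implications of this development are multifaceted",
    "honestly i just winged the essay the night before lol",
    "my cat walked across the keyboard and it still read better than my draft",
]
labels = [1, 1, 0, 0]

# The classifier only sees word-frequency patterns (TF-IDF features),
# so tone, sarcasm, and intent are invisible to it.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# A sarcastic, idiomatic sentence gets scored on surface vocabulary alone.
print(detector.predict_proba(["Oh sure, because essays always write themselves."]))
```

Whatever probability that last line prints, it comes from word-frequency statistics alone; nothing in the model knows whether the sentence was sincere or sarcastic.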

Furthermore, classifiers can struggle with understanding the intent behind the text, especially in complex or nuanced scenarios where human judgment is crucial.

Therefore, while these tools offer valuable assistance in processing and categorizing large volumes of text, they are not infallible and should be used with human oversight.