Some musings and questions on ChatGPT and art-generating AIs.



I got my computer science degree a few decades ago, even studied AI. At the time, I was deeply skeptical of predictions for what AI (and nuclear fusion research) would accomplish within a decade. Indeed, that "one decade out" window kept slipping, always remaining about 10 years ahead of the current state of the art.

AIs have made some huge advances in recent years, and the shape of the curve seems to be changing. Chess programs went from being able to beat most humans in 1980, to the first win against a grandmaster in 1989, to beating the world champion in 1997. Understanding and speaking human languages went from primitive in the 1980s, to passable dictation on PC hardware in the late 1990s, to Watson playing Jeopardy, to nearly human levels of listening and speaking today.

Progress in playing Go seems to have been even more stunning: it is an immensely difficult game, and AI came from nowhere to dominate in just a handful of years.

GPT-3 and the art AIs have made even more breathtaking progress. They're still early and rough, but they've advanced so far beyond the state of the art of five years ago as to be almost unrecognizable.

Will AIs become better at researching, writing and explaining than humans? Will they become better at making great works of art than the most talented humans? They're already better than the average human at both of these tasks, and their skill curves are "going vertical".

My understanding of the construction and training of neural nets is getting more and more out of date, but the bits I do understand tell me that it is a tragic mistake to dismiss these systems as just sampling millions of source works and pulling pieces of them together into a new soup. Something far deeper is happening.

These networks have billions of parameters that are tuned and re-evaluated in an intense training process; they now have more parameters than a human brain has neurons. Those parameters are shaped, and interact with one another, to improve the network's - what? - understanding? intuition? skill? mastery? - of its subject domains.
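To make "tuning parameters" a bit more concrete, here is a minimal, purely illustrative sketch of the same basic loop at toy scale: a two-parameter model fit to made-up data by gradient descent. Nothing here is any particular system's code (the data, the single weight and bias, and the learning rate are all invented for illustration), but the shape of the loop - measure the error, nudge every parameter to reduce it, repeat - is the heart of what "training" means.

```python
# Toy illustration of "training": nudge parameters until they fit the examples.
# This fits y = w*x + b to noisy data with plain gradient descent.
import random

random.seed(0)

# Hypothetical training data drawn from y = 3x + 1 plus a little noise.
data = [(i / 100, 3 * (i / 100) + 1 + random.gauss(0, 0.1)) for i in range(100)]

w, b = 0.0, 0.0        # the "parameters" being tuned
learning_rate = 0.5

for epoch in range(500):
    grad_w, grad_b = 0.0, 0.0
    for x, y in data:
        error = (w * x + b) - y      # how wrong the current parameters are
        grad_w += 2 * error * x      # d(error^2)/dw
        grad_b += 2 * error          # d(error^2)/db
    # Re-evaluate and adjust: "training" is this small correction,
    # repeated an enormous number of times over an enormous number of parameters.
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(f"learned w = {w:.2f}, b = {b:.2f}")   # should land near w = 3, b = 1
```

Scale that loop up to billions of parameters, vastly more data, and a far richer notion of "error", and you have the training runs behind GPT-3 and the image models.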

An AI generating an image from "Eiffel Tower, moon in background, in the style of 'Starry Night'" isn't cutting and pasting pieces of original art. It is using understanding built from practice that would take thousands of human lifetimes, refining a bunch of pixels until they more and more closely fulfil the intent of the request.
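To see the difference between collage and refinement, here is a toy, hypothetical stand-in for that loop. Real image generators work very differently under the hood (current systems typically use learned diffusion models guided by a text encoder); in this sketch a tiny hand-written scoring function plays the role of the model's learned sense of "how well does this match the prompt", and simple hill climbing plays the role of the refinement. The point is only that the pixels are shaped toward an intent, not copied from anywhere.

```python
# Toy stand-in for "refining pixels to fulfil an intent".
# A hand-written scorer plays the role of the AI's learned judgement of the
# prompt, and hill climbing plays the role of the refinement loop.
# Nothing is copied from any source image: the picture emerges from noise.
import random

SIZE = 16                                   # a 16x16 grayscale "canvas"

def score(img):
    """How well the canvas matches our pretend prompt:
    a bright vertical stripe (the 'tower') on a dark background."""
    s = 0.0
    for row in range(SIZE):
        for col in range(SIZE):
            target = 1.0 if col == SIZE // 2 else 0.0
            s -= (img[row][col] - target) ** 2   # penalize distance from the intent
    return s

random.seed(0)
canvas = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]  # pure noise

step = 0.1
for _ in range(15):                         # repeatedly nudge pixels toward the intent
    for row in range(SIZE):
        for col in range(SIZE):
            best, best_score = canvas[row][col], score(canvas)
            for candidate in (best + step, best - step):
                canvas[row][col] = min(1.0, max(0.0, candidate))
                s = score(canvas)
                if s > best_score:
                    best, best_score = canvas[row][col], s
            canvas[row][col] = best

for line in canvas:                         # the stripe should have emerged from the noise
    print("".join("#" if v > 0.5 else "." for v in line))
```

In a real system the scorer is not hand-written; it is the learned model itself, distilled from that enormous amount of practice, which is exactly why the result reflects something like understanding rather than collage.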

From what I know at this point, I think it's absolutely accurate to talk about these systems having "learned", "practiced", "developed intuition" or "mastered" subjects and techniques. Their process of learning is very similar to a human artist's development, just millions of times broader and deeper.

Is the emotion you feel on seeing a great piece of art something contained in the work itself, or a reaction inside you elicited by the form of the work? If the former, it might be impossible for an emotionless algorithm to create great art. If the latter, it may be impossible for humans to compete 10 or 20 years from now.

Is that last thought horrifying or exciting? It's definitely catastrophic for the breadth of employment opportunities in art, although there will always be a market for hand-crafted originals. But it's also wildly exciting to imagine a world where every day brings new artworks so stunningly good that everyone will be able to own art better than the best that has ever been produced.

Very, very interesting times.
