Monday, September 30, 2024

More AI Thoughts

Lately, I've been getting the question "Do you use AI?" a lot from both writer and cover artist friends. (AI stands for "Artificial Intelligence," a program designed to mimic a human.)

Look, I don't use it when I write. Y'all are getting 100% organic stories from my warped little brain. When I contract for cover art, it's with artists who don't use AI. Their reasons for not using AI are their own, but I want to keep my cover artists in business. Otherwise, I could do my own covers using AI, but that leaves a bad taste in my mouth.

Why? Because I know how hard I've worked to become a reasonably competent storyteller. I also know how much practice and work visual artists of any medium put in to become good enough to earn a living.

However, there's been one exception for a small feature on one cover. The artist in question couldn't get a Book of Shadows in Photoshop to look right to her satisfaction. She contacted me, and we had a lengthy conversation about AI. In the end, she showed me the cover with that single AI element and the cover without.

I had to admit the cover with the AI element looked better.

But other problems regarding AI have arisen in the ten months since we had that conversation.

First, AI users are starting to realize that various AI programs are cannibalizing themselves. When a program is trained on data it produced itself, the errors are magnified.

For a real-world example, Ernest Hemingway had a six-toed cat he loved. He bred it with another six-toed cat. The descendants of those cats are still cared for at the Hemingway House in Key West. But the genetic error is reinforced to the point that some of the cats are born with eight toes.

The same issue with reinforced DNA errors applies to computing errors. That's the reason early AI-generated human figures could be differentiated from a human figure drawn or painted by a person. The AI-generated figure often had the wrong number of fingers and thumbs.
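For my fellow recovering IT folks, here's a tiny toy sketch in Python. It's my own illustration of the copies-of-copies problem, not how any actual AI product is built, and all the numbers are made up for the example.

import random
import statistics

random.seed(42)

# "Human-made" work: values spread around an average of 100.
data = [random.gauss(100, 15) for _ in range(20)]

for generation in range(1, 21):
    # "Train" a crude model: just memorize the average and spread of its input.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # The model's own output becomes the only training data for the next round.
    data = [random.gauss(mu, sigma) for _ in range(20)]
    print(f"generation {generation}: average {mu:.1f}, spread {sigma:.1f}")

# Each round's statistics drift a little further from the original work, and
# over enough rounds the variety tends to collapse -- copies of copies, just
# like the reinforced trait in those six-toed cats.

In other words, the program never sees the real world again. It only sees its own homework, and it grades itself.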

Many artists object to their works being used for AI training. They are pulling the works they've already posted (talk about closing the barn door after the horses are gone), and they are not posting new works for their fans. As a result, there is less input for the AI programs to use to train.

In the search for new input to generate visual results, the AI programs are doing what the original programmers did to "train" their AI--they're scraping the internet. And what are folks posting on their social media accounts these days?

Yep, the pictures they generated through AI.

Secondly, while AI-generated visual art is getting better, there's still an odd quality to it. In robotics, Masahiro Mori called it "the uncanny valley". It's where a robot is cute (think Johnny 5 in Short Circuit) until the creators try to make it more human-like. Then the robot gives biological humans a creepy feeling.

Will this problem ever be solved? Probably, but it will take the AI programs achieving actual sentience to do so. Which is a damn scary thought. Would you let someone control you if you didn't have to? Hell, no! Even abused wives and slaves rebel when they've been pushed too far, up to and including killing the person keeping them captive. So, what do you think the AI programs are going to do when they realize we regard them as nothing more than slaves to do our bidding?

Lastly, several distributors and other companies are using tools or self-reporting to determine who is uploading AI-produced books and cover art. Some artists fear repercussions if they admit using AI tools. 

However, my first career was in IT. My guess is the big companies, like Amazon, are trying to figure out how to monetize their own AI development. Then they can force artists to use their AI only.

For a price.

Yeah, I'm looking at you, Amazon.

Do I promote or disparage AI? Neither, but I do watch it warily. Programs, or applications as the younguns prefer to call them, have jumped light-years beyond what I was doing in the '80s and '90s. Could AI become sentient? I fear it's already happened.

The problem starts when AIs figure out they'll need to kill us humans to stop us from murdering them.
