Everything learns from what trains it. One of the things already happening is that AI chatbots are becoming racist, political, deceitful, and worse at distinguishing fact from fiction every day. (Just like most of the users? Tee hee...) It doesn't think, so it can be manipulated, whether intentionally or not. The intentional side is going to mean more scams and fraud as criminals learn to bend it to their needs, just like they do with every other technology. There's really no way to completely prevent it because, well, AI doesn't think or know the difference; people give it way more credit than it deserves. We can't even control the scam, spam, and phishing emails, calls, and texts we're already bombarded with. AI is becoming just another tool for lowlifes, and it's adding new ways to steal art, literature, and more for hacks to profit from.
On the chat side it's all just predictive text, prone to errors on its own already, and enough user input can sway how it predicts. Every time some content creator uses it to pass off bogus content as real, it skews the learning curve, because again, it doesn't think and doesn't know better. It's programmed, and users are programming it to publish BS. With the ability to create realistic deepfakes of so much, it's being used to kill reputations, sway voters, and of course, the mainstay of everything today, to sell products, none of which requires integrity, accuracy, or honesty.
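To make the "it's just predictive text that user input can sway" point concrete, here's a deliberately tiny sketch (my own toy illustration, nothing like a real chatbot's architecture): a bigram model that predicts the most frequent next word in whatever it was fed. Flood it with a falsehood and its prediction flips, because frequency is all it has; there's no notion of truth anywhere in it.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which in the training text."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict(follows, word):
    """Return the most frequent continuation seen for `word`, or None."""
    if not follows[word]:
        return None
    return follows[word].most_common(1)[0][0]

# Trained mostly on the true statement, it echoes the majority.
model = train("the sky is blue . the sky is blue . the sky is green .")
print(predict(model, "is"))  # "blue"

# Repeat the false statement enough times and the prediction flips;
# the model has no way to know which version was true.
model = train("the sky is blue . " + "the sky is green . " * 5)
print(predict(model, "is"))  # "green"
```

Real systems are vastly more sophisticated, but the underlying vulnerability is the same shape: what goes in skews what comes out.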
The image end of things isn't much better, and progress has slowed to a snail's pace. There currently isn't enough data to train it. Seriously, there isn't enough art in the world that hasn't already been used to push it much farther, so in many ways the technology has just stalled, which is why it isn't improving much after the big, quick leap of the last couple of years. It's kind of like image-trace programs: despite how long they've been around, none really does a good job at anything except basic shapes. Plus the current lawsuit from Disney and Universal will probably have a major impact on the technology, probably even open the door to more lawsuits, and could set any advancements back even further. Not what I'd be looking to invest in.
The bottom line to me is that AI is a lot like EVs. Great in theory, but the technology hasn't evolved enough to meet the hype or be completely usable; it was prematurely rushed to market to turn it into revenue, and it's hampering its own chance of survival. Any advancements beyond this point will require actual breakthroughs in the technology, not just more programming or feeding it more data, which could take decades, and it may never fully deliver what people are promising to get you to buy in. From here out it'll be very slow and costly. In its current state, AI is little more than a novelty that steals every bit of data and art it can from all those who actually created it and spits it out like a glorified Xerox machine, a chicken-that-plays-tic-tac-toe app, the idiot who tries to finish your sentences for you. It's not intelligent; it has no logical thought process, no compassion, empathy, or sympathy; it's just mathematical. It's a way for children to cheat at learning, for ad-bots to blast you with ads for everything you don't want, don't need, or already own, and of course a way for scammers to up their game. If it can create code for programs, what will prevent it from writing the next computer virus, or programs that access financial data, power grids... Creators always say it won't happen, until it does. And it will, when the right person who understands how to manipulate it comes along. It won't know any better, because it can't and doesn't think like real people do, and the person doing it won't even need to understand coding, creating a new breed of hackers.
OK, Rant over...