First, the obligatory disclaimer: you should not use AI text generators to settle your ethical dilemmas. Second, the results are fascinating, so you should absolutely bring those dilemmas to this AI-powered Reddit simulation anyway.
Are You The Asshole (AYTA), as the name suggests, is built to mimic Reddit's crowdsourced advice forum r/AmItheAsshole (AITA). Created by internet artists Morris Kolman and Alex Petros with support from DigitalVoid, the site lets you enter a scenario and ask for advice about it, then generates a series of feedback posts responding to the situation. The feedback captures the style of real human-generated posts remarkably well, but with the weird, slightly off-kilter skew that many AI language models produce. Take, for example, its reaction to the plot of the classic sci-fi novel Roadside Picnic.
Setting aside the weirdness of the scenarios I typed in, the responses tend toward generic judgments that don't exactly match the prompt, but the writing and content are surprisingly convincing.
I also asked it to weigh in on last year's contentious “Bad Art Friend” debate. The first two bots were even more confused by this one, but to be fair, so were a lot of humans.
You can find a few more examples on the site's dedicated subreddit.
AYTA is actually the work of three different language models, each trained on a different subset of data. As the site explains, the creators captured around 100,000 AITA posts from 2020, along with the comments associated with them. They then trained custom text generation systems on different slices of that data: one bot was fed a set of comments concluding that the original posters were NTA (not the asshole), one was given comments arguing the opposite, and one got a combination that included both of the earlier sets plus comments declaring that everyone involved, or no one at all, was at fault. (Coincidentally, someone created an all-bot version of Reddit a few years ago, though its prompts pushed it in a much more surreal direction.)
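To make that three-way split concrete, here is a minimal sketch of the kind of data partitioning the site describes, written in Python. Everything in it is illustrative: the record shape, the verdict-detection heuristic, and the corpus names are hypothetical, since the creators have not published their actual pipeline.

```python
# Sketch: splitting scraped AITA comments into the three training
# corpora the article describes. Field names and the verdict-detection
# heuristic are hypothetical; the real pipeline is unpublished.
from collections import defaultdict

# Each record pairs a post with one top-level comment (hypothetical shape).
comments = [
    {"post_id": "abc", "body": "NTA, your roommate is way out of line."},
    {"post_id": "def", "body": "YTA. You knew exactly what you were doing."},
    {"post_id": "ghi", "body": "ESH, honestly nobody comes out of this well."},
]

# Standard AITA verdict acronyms: NTA (not the asshole), YTA (you're
# the asshole), ESH (everyone sucks here), NAH (no assholes here).
VERDICTS = ("NTA", "YTA", "ESH", "NAH")

def detect_verdict(body: str) -> str | None:
    """Return the first verdict acronym appearing in a comment, if any."""
    tokens = body.upper().replace(",", " ").replace(".", " ").split()
    for token in tokens:
        if token in VERDICTS:
            return token
    return None

# Three corpora, one per model: NTA-only, YTA-only, and everything mixed.
corpora = defaultdict(list)
for comment in comments:
    verdict = detect_verdict(comment["body"])
    if verdict == "NTA":
        corpora["nta_model"].append(comment)
    elif verdict == "YTA":
        corpora["yta_model"].append(comment)
    if verdict is not None:  # the mixed model sees all judged comments
        corpora["mixed_model"].append(comment)

for name, subset in corpora.items():
    print(name, len(subset))
```

Each corpus would then be used to fine-tune its own text generator, which is what lets the three bots reach systematically different verdicts on the same prompt.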
AYTA is reminiscent of an earlier tool called Ask Delphi, which also used an AI trained on AITA posts (though paired with answers from hired respondents rather than Redditors) to analyze the morality of user prompts. The framing of the two systems, however, is quite different.
Ask Delphi implicitly highlighted the many flaws of using AI language analysis for moral judgment, particularly how often it responds to a post's tone rather than its content. AYTA is more upfront about its absurdity. For one thing, it mimics the snarky style of Reddit commenters rather than posing as a dispassionate arbiter. For another, it doesn't deliver a single verdict, instead letting you see how the AI reasons its way to different conclusions.
“This project is about the bias and reasoning that bad data teaches an AI,” Kolman tweeted in an announcement thread. “Biased AI looks like three models trying to parse the ethical nuances of a situation when one has only ever been shown comments of people calling each other assholes and another has only ever seen comments of people telling posters they're completely in the right.” Contra a recent New York Times headline, AI text generators aren't precisely mastering language; they're just getting very good at mimicking human style, albeit not perfectly, which is where the fun comes in. “Some of the funniest responses aren't the ones that are obviously wrong,” notes Kolman. “They're the ones that are obviously inhuman.”