The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.
The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.
It *should* be obvious why it's problematic to create software that excels at persuasion without regard for accuracy, honesty, or ethics. But apparently it's not.
Yeah, there are no obvious profits in Artificial Morality:
https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/artificial-morality