Last May, the technology news site CNET announced that it had begun using an artificial intelligence (AI) system to generate some of its articles, particularly those dealing with financial data, statistics, or product comparisons. According to the company, the move was intended to streamline the work of its human journalists, freeing them to focus on more creative and analytical tasks.
The decision, however, has not been free of criticism and controversy. Some readers have complained about a lack of transparency and ethics on CNET's part, since the site does not always clearly indicate which articles are AI-generated and which are not. Others have questioned the quality and accuracy of the machine-produced content, which has at times contained grammatical errors, contradictions, or biased information.
Beyond that, CNET's initiative has opened a broader debate about the role and future of artificial intelligence in journalism and in society at large. To what extent can a machine replace or complement human work? What risks and benefits does using AI to generate news entail? What criteria and standards should be followed to safeguard rights, diversity, and democracy?
These are some of the questions raised by the phenomenon of articles written by artificial intelligence, a Pandora's box that will not be easily closed.