OpenAI, the AI research group co-founded by Elon Musk, has decided not to publish some of its latest research into automatic text generation, out of concern that the work could be misused to mass-produce fake news and online hate speech.

The decision marks a rare public effort to withhold research as AI systems become increasingly powerful. Much AI research is dual use, meaning it could easily be adapted for harmful purposes.

OpenAI’s research involves a language system that generates plausible-sounding text from any prompt. When fed a sentence, it uses statistical methods to guess which words are most likely to come next, spinning out follow-on sentences that can sound disconcertingly coherent.
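GPT-2 itself is a large neural network trained on vast amounts of web text, but the core idea of next-word prediction can be illustrated with a toy sketch. The short Python program below builds a simple bigram model from a made-up corpus and extends a prompt by sampling likely continuations; the corpus and function names are purely illustrative and are not OpenAI’s code.

import random
from collections import Counter, defaultdict

# Toy stand-in for the web-scale text a real language model is trained on.
corpus = (
    "the moon landing was staged the moon landing was faked "
    "the moon is far away the moon is bright tonight"
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt_word, length=8):
    """Extend a prompt by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        nexts, counts = zip(*candidates.items())
        # Sample in proportion to observed frequency, much as a language
        # model samples from its predicted next-word distribution.
        words.append(random.choices(nexts, weights=counts)[0])
    return " ".join(words)

print(generate("the"))

A real system like GPT-2 replaces the bigram counts with a neural network that conditions on the entire preceding passage, which is why its continuations stay on topic far longer than this sketch can.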

“If you start with a conspiracy theory about how we didn’t land on the moon, it will continue with a conspiracy theory about how we didn’t land on the moon,” said Alec Radford, one of the OpenAI researchers behind the work. “It’s a bit of a chameleon.”

OpenAI said its system was able to automatically generate realistic-sounding, coherent text about half the time. It added that the technology could be misused to produce things such as computer-generated financial news about real companies, racist or sexist screeds that magnify the impact of online trolling, and fake reviews that flood sites like Amazon and Yelp. The organization said it would publish its research paper on the new language system, called GPT-2, but withhold the code it had developed, as well as the three largest of the four language models it had built.

But Sam Bowman, a natural language researcher at New York University, warned that publishing the paper would still make it relatively easy for others to reproduce OpenAI’s work.
