Human-reinforced learning could mean ‘more truthful and less toxic’ AI

AI has been making huge leaps in scientific research, and companies like Nvidia and Meta are continuing to throw more resources at the technology. But AI learning can suffer a pretty huge setback when it adopts the prejudices of the people who make it, like all those chatbots that wind up spewing hate speech thanks to their exposure to the criminally online.

According to Golem, OpenAI might have made some headway on that with a new successor to GPT-3, its autoregressive language model that uses deep learning in an effort to produce human-sounding text. It wrote this article, if you want an example of how that works.
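If you're wondering what "autoregressive" actually means here, the idea is that the model writes one word at a time, each prediction conditioned on everything it has generated so far. Here's a deliberately tiny sketch of that loop in Python; the hand-made bigram table and the `generate` function are illustrative stand-ins, not anything from OpenAI's actual model, which uses a massive neural network in place of the lookup table.

```python
# Toy autoregressive text generation: repeatedly predict the next token
# from the last token generated. A real model like GPT-3 conditions on the
# whole sequence via a neural network; this bigram table is a stand-in.

BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"model": 0.7, "text": 0.3},
    "model": {"writes": 1.0},
    "writes": {"text": 1.0},
    "text": {"<end>": 1.0},
}

def generate(max_tokens=10):
    tokens = ["<start>"]
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1], {})
        if not candidates:
            break
        # Greedy decoding: always take the most probable next token.
        next_token = max(candidates, key=candidates.get)
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens[1:])

print(generate())  # the model writes text
```

The "human-reinforced" part of the headline refers to training on human feedback about which outputs people actually prefer, steering generation away from toxic or untruthful text rather than just predicting the statistically likeliest next word.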
