BBC blocks OpenAI data scraping to use generative AI
London (Uttam Hindu News): The BBC has set out three principles that will shape its approach to working with Generative AI. The BBC, along with other top media organisations such as CNN, has blocked OpenAI from scraping its data.
In a blog post, BBC Director of Nations Rhodri Talfan Davies said Generative AI offers opportunities to deliver greater value to audiences and society.
“We believe Generative AI can provide the BBC with an important opportunity to deepen and extend our mission, allowing us to deliver greater value to our audiences and society,” he said.
“It also has the potential to help our teams work more effectively and efficiently across a wide range of areas, including production workflows and our back-office,” the BBC executive said.
In August, several top news publications such as The New York Times, CNN and the Australian Broadcasting Corporation (ABC) blocked Microsoft-backed OpenAI from accessing their content to train their AI models.
The NYT blocked OpenAI’s web crawler, meaning the company run by Sam Altman can’t use content from the publication to train its AI models.
OpenAI’s web crawler, called GPTBot, scans web pages to gather data used to improve the company’s AI models.
Davies said in the blog post that if Generative AI is not used properly, it is likely to introduce new and significant risks.
“These include ethical issues, legal and copyright challenges, and significant risks related to misinformation and bias,” he stressed. “These risks are real and cannot be underestimated.”
The BBC said it will explore how generative AI can strengthen its public mission and deliver greater value to audiences, guided by the three principles it has set out.
“We will always prioritize talent and creativity and remain open and transparent,” the broadcaster said.