File image of an Android phone interface showing Google Phone, Messages, Play Store, Google Chrome, Camera and Google Search – DANIEL ROMERO/UNSPLASH – Archive
June 7 (Portaltic/EP) –
Developers who want their generative artificial intelligence (AI) applications distributed on the Google Play Store must prevent the creation of illegal content and provide reporting tools that help improve their apps.
Google has updated its policies to add one specific to apps that use generative AI, since this technology is increasingly accessible and developers can incorporate it “to increase interaction and improve the user experience.”
The policy focuses specifically on apps that generate content, such as chatbots or tools that create images from a description. They are required to “prohibit or prevent the generation of unauthorized content,” such as fake nude images or manipulated images intended to deceive.
Likewise, developers must include in-app features that allow users to report or flag cases where the app is used to generate illegal or offensive content; according to Google, developers must use these reports to “improve content filtering and moderation” in their apps.