Monika Bauerlein, executive director of the CIR, said in a statement that both companies harvest its stories to make their products more powerful, “but they never asked permission or offered compensation (…) this opportunistic behavior is not only unfair, but also violates copyright.”
However, the two companies do not use the same technology. Sarah Bird, product manager in Microsoft’s Office of Responsible AI, says each company has its own copy of the model and that Microsoft keeps its copy entirely within its own ecosystem, which allows the company to enforce its own security and privacy measures.
“When you use a Microsoft product, you use our platform entirely, while OpenAI has its own version,” Bird explains, referring to the filters that information must pass through in order to respect copyright.
Natasha Crampton, vice president and director of the Office of Responsible AI at Microsoft, adds that while Microsoft has worked with OpenAI to share security best practices, AI implemented in Microsoft products carries additional layers of security that correspond to the company’s standards and values.
It should be remembered that in September of last year, Microsoft announced that it would assume legal responsibility if its users were sued for copyright infringement when using its AI service, Copilot.
According to the company’s chief legal officer, Brad Smith, Microsoft will take on any legal risk a business faces when using its system with regard to copyright claims over protected works not authored by the technology.
This initiative, called the Copilot Copyright Commitment, is part of a series of policies and commitments that Microsoft is making to business customers of its artificial intelligence systems.
“If a third party sues a commercial client for copyright infringement for their use of Copilot, or the output it generates, we will defend the client and pay the amount of any adverse judgment or settlement that results from the lawsuit,” the company states.
However, it is important to note that the commitment comes with conditions: to assume responsibility for the content, Microsoft requires, among other things, that the client “has used security barriers and content filters,” Smith mentions.
The company highlights three main reasons for this stance on copyright. The first is that it wants to stand behind its customers’ trust when they use its artificial intelligence services. The second is that it understands the copyright concerns that have arisen with the rise of generative AI, and the third is that it wants to test the guardrails it has built to prevent the technology from infringing on protected material.
In this regard, Smith emphasizes that with projects like this, Microsoft seeks to manage the uncertainty around copyright law and generative AI, since it believes that artists must retain ownership of their works and be able to profit from them.