
The Autorité fines Google €250 million for non-compliance. A specific point of contention was Google's use of press content in training their AI service "Bard" without notification or opt-out


Question 1 of 2 for ChatGPT-4: Please read the following press release “Related rights: the Autorité fines Google €250 million for non-compliance with some of its commitments made in June 2022” and explain what it’s about in easy to understand language

ChatGPT-4’s answer: The press release explains how the Autorité de la concurrence (French Competition Authority) fined Google €250 million for not sticking to certain promises they made back in June 2022. This decision is part of a longer saga, marking the fourth decision in this case over four years, all set against the backdrop of a 2019 French law about related rights, which aimed to ensure fair negotiations between news agencies, publishers, and digital platforms like Google. The goal was to adjust how profits from digital audiences and advertising are shared, especially as digital growth impacts traditional press revenue.

Google was previously fined €500 million for not following interim measures ordered by the Autorité, and they had made commitments in 2022 to address competition concerns. However, they've now been found in breach of four out of seven of these commitments. These breaches include not negotiating in good faith, not providing clear information on remuneration for related rights, and allowing their negotiations to negatively affect other business relationships. A specific point of contention was Google's use of press content in training their AI service "Bard" without proper notification or the opportunity for press agencies and publishers to opt out. This hindered the ability of these agencies and publishers to negotiate compensation.

In response to these breaches, the Autorité has imposed the €250 million fine. Google didn't dispute the findings and agreed to the fine as part of a settlement procedure, also proposing corrective measures to address some of the issues raised by the Autorité.

Question 2 of 2 for ChatGPT-4: What should AI makers be doing with this information? Which measures should they be taking?

ChatGPT-4’s answer: The press release about Google being fined €250 million by the Autorité de la concurrence for non-compliance with its commitments highlights several key considerations for AI makers. These revolve around compliance, transparency, ethics, and partnerships with other entities, especially in the context of using third-party content. Here's what AI makers can learn and the measures they should consider taking:

1. Compliance with Legal and Regulatory Obligations

  • Understand Local and International Regulations: AI makers should ensure they are fully informed about the legal landscape, including copyright laws, related rights, and any specific regulations governing AI and digital content use in the jurisdictions in which they operate.

  • Compliance Programs: Develop and implement robust compliance programs that include mechanisms for regularly reviewing and updating practices in line with evolving legal standards and commitments.

2. Transparency and Good Faith in Negotiations

  • Transparent Negotiations: Engage in transparent negotiations with content providers. This includes providing clear, understandable information about how content is used and how remuneration is calculated.

  • Documentation and Communication: Keep thorough documentation of negotiations and commitments and ensure clear communication channels with partners and stakeholders.

3. Ethical Considerations and Fair Use

  • Ethical Use of Content: Beyond legal compliance, consider the ethical implications of using third-party content, especially when training AI models. This might include considerations of fair use, the potential impact on the content creators, and the broader implications for the ecosystem.

  • Fair Remuneration: Ensure that content creators are fairly compensated for their work. This involves developing transparent, objective criteria for remuneration that reflect the value derived from the content.

4. Technical Solutions for Opt-Outs and Data Usage

  • Opt-Out Mechanisms: Develop and implement technical solutions that allow content creators to easily opt out of having their content used by AI services without negatively impacting other services.

  • Data Handling Practices: Adopt best practices for handling data, including the use of content in AI training, ensuring that use respects the rights of content creators and is in line with agreed-upon terms.
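One concrete opt-out mechanism already in use is the robots.txt protocol: Google documents a "Google-Extended" user-agent token that publishers can disallow to withhold their content from AI training without affecting how "Googlebot" crawls them for Search. A minimal sketch of how a crawler might honour such a signal, using Python's standard `urllib.robotparser` (the site and URL here are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt published by a news site. "Google-Extended" is the
# token Google documents for opting content out of AI training, separate from
# the "Googlebot" token used for Search indexing.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

def may_use_for_training(robots_txt: str, url: str,
                         agent: str = "Google-Extended") -> bool:
    """Return True only if the site has NOT opted out of AI-training crawls."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

print(may_use_for_training(ROBOTS_TXT, "https://news.example/article-1"))
# → False: the publisher opted out of AI training
print(may_use_for_training(ROBOTS_TXT, "https://news.example/article-1",
                           agent="Googlebot"))
# → True: ordinary Search crawling is still allowed
```

The point of separating the two tokens is exactly the Autorité's concern: a publisher can decline AI-training use without that choice "negatively impacting other services" such as search visibility.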

5. Proactive Engagement and Corrective Measures

  • Monitoring and Oversight: Establish mechanisms for ongoing monitoring and oversight of compliance with legal obligations and commitments, possibly including third-party audits.

  • Adaptive and Corrective Actions: Be prepared to take corrective action promptly if non-compliance is identified. This includes working collaboratively with regulators and stakeholders to address issues and improve practices.
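Monitoring of this kind implies keeping auditable records of where training content came from and under what terms it was ingested. A purely illustrative sketch of such a provenance record and a simple compliance check (the schema, field names, and the "unknown" sentinel are assumptions, not any real system's design):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TrainingDataRecord:
    """One auditable provenance entry per ingested document (illustrative)."""
    source_url: str
    publisher: str
    license_terms: str      # e.g. "negotiated-2024-agreement"; "unknown" if unresolved
    opt_out_checked: bool   # was the publisher's opt-out signal consulted?
    ingested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def audit(records):
    """Return the records that would fail a compliance review: either the
    opt-out signal was never consulted, or the licensing basis is unresolved."""
    return [r for r in records
            if not r.opt_out_checked or r.license_terms == "unknown"]

records = [
    TrainingDataRecord("https://news.example/a", "Example News",
                       "negotiated-2024-agreement", opt_out_checked=True),
    TrainingDataRecord("https://news.example/b", "Other Press",
                       "unknown", opt_out_checked=True),
]
flagged = audit(records)
print([r.source_url for r in flagged])  # → ['https://news.example/b']
```

A log like this is what a third-party auditor would sample from: each record ties a piece of training data to a legal basis, which is precisely the traceability Google was found to lack for Bard's use of press content.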

6. Innovation in Partnership Models

  • Collaborative Innovations: Explore innovative partnership models with content creators and publishers that both leverage AI capabilities and ensure a fair distribution of value created by digital platforms.

In conclusion, AI makers should view the situation with Google as a cautionary tale that emphasizes the importance of legal compliance, transparent and ethical practices, and the need for constructive partnerships in the digital ecosystem. Adopting proactive and responsible approaches in these areas can help AI companies navigate complex regulatory landscapes and build trust with partners, regulators, and the public.