Workshop: “Between secrecy claims and transparency needs: Why, how and for whom the EU digital rulebook negotiates and produces transparency of regulated digital technologies”

On 16–17 April, the Artificial Secrecy researchers will organise an expert workshop, “Between secrecy claims and transparency needs: Why, how and for whom the EU digital rulebook negotiates and produces transparency of regulated digital technologies”.

Through this workshop, we aim to connect academics with expertise in corporate confidentiality protection, including trade secrets, with researchers focused on transparency in EU digital and freedom of information laws.

The aim of the workshop is twofold:

First, to explore how EU digital and data regulation balances the protection of confidential information with transparency requirements, including transparency for affected users, qualified transparency for regulatory bodies, and public transparency.

Second, to invite participants to contribute to an edited volume on the nexus between transparency and confidentiality. The volume would include contributions on the covered legislation and emerging case law, as well as contributions clarifying the protection of confidentiality in relation to regulated digital technologies and freedom of information requests at the EU and member state levels.

The workshop is structured along the following eight panels:

  1. Transparency of regulated digital technologies
  2. Confidential business information as a counterclaim
  3. Transparency regimes before and behind the trade secret barrier
  4. Transparency under the General Data Protection Regulation and the Artificial Intelligence Act
  5. Right to an explanation of Automated Decision Making under the GDPR and the AI Act
  6. Transparency rights under the Digital Services Act and the Digital Markets Act
  7. Transparency rights under the Data Act and the Data Governance Act
  8. Freedom of Information requests to regulatory authorities

Stay tuned!

The European Commission’s failure to reply to a submission raising concerns about the compliance of generative AI systems with the EU Charter of Fundamental Rights

The European Ombudsman opened an inquiry into the Commission’s failure to reply to a submission on ethical concerns about generative AI systems’ compliance with the EU Charter of Fundamental Rights. The central issues of the inquiry are the failure to follow the law (Art. 4 of the European Code of Good Administrative Behaviour, ECGAB) and the failure to deal properly with requests for information (Art. 22 ECGAB).

Source: The European Commission’s failure to reply to a submission raising concerns about the compliance of generative AI systems with the EU Charter of Fundamental Rights

The European Commission’s refusal to give public access to the risk assessment report of a large social media company on its compliance with the provisions of the Digital Services Act | Case | European Ombudsman

The complainant asked the European Commission for public access to the risk assessment report of a large social media company on its compliance with the provisions of the Digital Services Act (DSA) – annual risk assessment reporting is one of the obligations of ‘very large online platforms’ under the Act. The Commission refused access to the document, arguing that disclosure could be generally presumed to undermine both the commercial interests of the company in question and an ongoing investigation into the company’s compliance with the DSA. The Commission did not individually assess the report for possible disclosure.

The Ombudsman concluded that the Commission’s application of a general presumption of non-disclosure to the risk assessment report constituted maladministration. She recommended that the Commission conduct an individual assessment of the document with a view to granting the widest possible access.

Source: The European Commission’s refusal to give public access to the risk assessment report of a large social media company on its compliance with the provisions of the Digital Services Act | Case | European Ombudsman

EU court adviser rejects Meta’s fight against EU antitrust demands for Facebook data

“In his opinion, Advocate General Athanasios Rantos proposes that the Court of Justice dismiss both appeals and uphold the judgments of the General Court”, the court said in a statement, adding that Rantos said in his non-binding opinion that the General Court “did not err in law in assessing the necessity of the information requested or in examining the safeguards for its provision.”

Source: EU court adviser rejects Meta’s fight against EU antitrust demands for Facebook data

The Practical Limits of Algorithmic Transparency: Lessons from France · Félix Tréguer

In the face of concerns raised by AI systems in recent years, transparency has emerged as a central governance principle, enshrined in regulations from France’s Digital Republic Law to the EU AI Act. This talk examines the practical enforcement of algorithmic transparency provisions through action-research conducted by La Quadrature du Net, a French digital rights organization. Drawing on five years of Freedom of Information Act (FOIA) requests targeting surveillance algorithms deployed by French public authorities, we document systematic enforcement failures that render transparency requirements largely symbolic. Our findings suggest that transparency operates primarily as a legitimizing device for algorithmic governance rather than as an effective accountability mechanism.

Source: The Practical Limits of Algorithmic Transparency: Lessons from France · Félix Tréguer