Understanding AI’s impact on society and holding tech companies accountable requires conditions of openness and transparency. The DSA enshrined in law requirements for some of the world’s largest tech companies to share data with researchers, and the AI Act follows with additional transparency and testing requirements for AI. This is a first step toward more far-reaching scrutiny of general-purpose AI models and an ever-growing number of AI-enabled products across sectors — but it can’t be the last. How can external scrutiny and greater openness in AI be achieved in practice? What are the needs of, and roadblocks for, public-interest researchers, and what challenges do developers and commercial actors face in trying to understand these models and ensure their quality? This panel will explore the role of research and transparency, from the experience of online platforms to the next era of AI.