The speaker is a strong advocate for open source: supporting open source communities matters, as does building systems that can build on top of one another.
The speaker is concerned about the possibility of open source being closed off, citing the example of Cal.com closing its source code.
The rise of AI has fundamentally altered the security landscape: transparency itself has become a risk, and that is driving some entities to close their core codebases.
AI is also changing security research, potentially lowering the barrier to finding exploits, which the speaker finds scary.
The speaker suggests the future of security may frame cybersecurity as proof of work: finding an exploit becomes a question of how many tokens an attacker is willing to spend.
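To make that proof-of-work framing concrete, here is a toy cost model in TypeScript; the `Target` shape, token price, and every figure below are assumptions for illustration, not numbers from the talk:

```typescript
// Toy economic model: security as proof of work. All names and
// numbers here are hypothetical illustrations, not from the talk.

interface Target {
  tokensPerAttempt: number;  // LLM tokens an attacker burns per probe
  attemptsToExploit: number; // expected probes before a working exploit
}

const TOKEN_PRICE_USD = 15 / 1_000_000; // assumed $15 per million tokens

// Expected dollar cost for an attacker to find one exploit.
function exploitCost(t: Target): number {
  return t.tokensPerAttempt * t.attemptsToExploit * TOKEN_PRICE_USD;
}

// Hardening doesn't make exploits impossible; it multiplies the
// expected work (tokens) an attacker must spend.
function harden(t: Target, workFactor: number): Target {
  return { ...t, attemptsToExploit: t.attemptsToExploit * workFactor };
}

const app: Target = { tokensPerAttempt: 200_000, attemptsToExploit: 50 };
console.log(exploitCost(app));             // ~$150 before hardening
console.log(exploitCost(harden(app, 20))); // ~$3000 after 20x hardening
```

Under this framing, defense doesn't aim to make exploits impossible; it multiplies the expected token spend until the attack stops being economical.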
Finding exploits requires both security knowledge and domain-specific knowledge.
Historically, a gap in one area (such as full-stack TypeScript) kept researchers out: a security researcher without that domain knowledge couldn't exploit a codebase just because it was open, so open source didn't meaningfully change the average security researcher's capability in that domain.
AI can now bridge this gap: it handles the domain-specific details, so the barrier to finding exploits is shifting away from deep domain knowledge and toward security knowledge.
The speaker notes that AI has effectively put a floor under the domain-knowledge requirement, leaving security knowledge as the only remaining blocker.
The speaker argues that open source remains critically important and needs to be supported.
Open source projects carry the additional burden of handling security reports, which the speaker describes as "thankless work."
The speaker calls on companies to make the case to others that keeping things open is the right call going forward.
The speaker suggests a three-phase cycle for agents: Development, Review, and Hardening, where human input limits the first phase and money limits the last (see the sketch below).
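A minimal TypeScript sketch of that cycle, assuming each phase is a function over a list of changes; the `Change` type, the per-probe cost, and all figures are hypothetical illustrations, not from the talk:

```typescript
// Hypothetical sketch of the three-phase cycle described above.
// Phase names come from the talk; everything else is an assumption.

interface Change { description: string; hardened: boolean }

// Phase 1: Development. Limited by human input: the agent only
// produces changes for specs a human has written.
function develop(humanSpecs: string[]): Change[] {
  return humanSpecs.map((s) => ({ description: s, hardened: false }));
}

// Phase 2: Review. Filters out changes that fail automated review
// (a trivial stand-in check here).
function review(changes: Change[]): Change[] {
  return changes.filter((c) => c.description.length > 0);
}

// Phase 3: Hardening. Limited by money: adversarial probing continues
// until the token budget is exhausted, not until the work is "done".
function hardenPhase(changes: Change[], budgetUsd: number, costPerProbeUsd = 5): Change[] {
  let remaining = budgetUsd;
  for (const change of changes) {
    if (remaining < costPerProbeUsd) break; // budget, not effort, is the cap
    remaining -= costPerProbeUsd;           // spend tokens probing this change
    change.hardened = true;
  }
  return changes;
}

// One full cycle: human specs in, reviewed and (budget permitting)
// hardened changes out.
const shipped = hardenPhase(review(develop(["add OAuth login"])), 20);
console.log(shipped); // [{ description: "add OAuth login", hardened: true }]
```

The point of the sketch is the asymmetry: `develop` can only consume what humans have specified, while `hardenPhase` stops when the money runs out.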
The final message is to keep fighting for open source and ensure that the tools we rely on remain secure, reliable, and great.