Rob van der Veer (SIG): Treat Artificial Intelligence as Software Initiatives
Summary
AI as software: integration, not isolation
AI systems should be approached and managed like traditional software projects. Rob van der Veer emphasizes extending familiar best practices, such as documentation, versioning, and testing, into AI development processes.
Bridging data science and software engineering
Collaboration between data scientists and software engineers is critical. Each brings distinct strengths, and their cooperation ensures the development of secure, maintainable, and scalable AI systems. Pairing their expertise supports higher code quality and promotes shared responsibility for security.
Regulation and standardization challenges
Regulatory frameworks like the European AI Act are still evolving. Rob leads and contributes to efforts such as the ISO 5338 standard and the OWASP AI Exchange, which aim to consolidate guidance for securing AI systems and prevent the fragmentation seen in earlier security standards.
AI’s dual role in cybersecurity
AI tools are used both offensively and defensively. While attackers use AI to automate phishing or evade detection, defenders employ it for pattern recognition and anomaly detection. However, current AI models have limitations in static analysis and vulnerability identification.
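The defensive pattern mentioned here, flagging statistical outliers in security telemetry, can be sketched in a few lines. This z-score filter over hypothetical login counts is an illustration of the general technique, not a tool described in the talk:

```python
import statistics

def flag_anomalies(values, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical daily login counts for one account; the spike stands out.
logins = [12, 15, 11, 14, 13, 12, 16, 14, 13, 250]
print(flag_anomalies(logins))
```

Real defensive tooling uses far richer features and models, but the principle is the same: learn what "normal" looks like and surface deviations for review.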
Emerging threats and model behavior
AI systems may exhibit emergent behaviors: capabilities that were never explicitly designed or anticipated. This unpredictability makes it harder to enforce controls and raises concerns about autonomy and decision-making transparency.
Embedding AI into existing security frameworks
Organizations can adapt existing standards like ISO/IEC 27001 by appending AI-specific “attention points.” This includes protecting environments where production data is used for training models, an atypical practice in traditional software development.
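One way to act on that attention point is to pseudonymize direct identifiers before production records leave the protected environment for a training set. A minimal sketch, assuming hypothetical field names and a per-export salt (the specific controls an organization needs depend on its data and regulatory context):

```python
import hashlib

def pseudonymize(record, pii_fields, salt):
    """Replace direct identifiers with salted hashes so a training set
    keeps joinability across records but drops the raw identity."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]  # truncated hash, stable within one export
    return out

# Hypothetical production record exported for model training.
customer = {"email": "jane@example.com", "plan": "pro", "monthly_usage": 412}
safe = pseudonymize(customer, pii_fields=["email"], salt="rotate-per-export")
```

Salted hashing is only one layer; rotating the salt per export limits linkage across data sets, and high-risk fields may need to be dropped entirely rather than hashed.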
The importance of human oversight
Guardrails and human review processes are necessary to mitigate risks from AI models that may act beyond their intended scope. Zero-trust-like thinking is suggested, especially when AI has access to sensitive systems or data.
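A guardrail of this kind can be as simple as a risk-scored approval gate in front of AI-initiated actions. The action names, scores, and threshold below are illustrative assumptions, not from the talk:

```python
# Hypothetical risk scores for actions an AI agent might request.
RISK = {"read_dashboard": 1, "modify_firewall_rule": 8, "delete_user_data": 10}

def gate(action, threshold=5):
    """Return 'execute' for low-risk actions, 'needs_human_review' otherwise.
    Unknown actions default to review: a zero-trust-style default deny."""
    if RISK.get(action, threshold) >= threshold:
        return "needs_human_review"
    return "execute"
```

The default-deny branch is the zero-trust element: anything the policy does not explicitly recognize as low-risk is routed to a human instead of being executed.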
Education and mindset shift
Both security professionals and data scientists need to adjust their approaches. Security teams should understand the basics of AI, while data scientists should embrace secure coding and documentation practices, ideally reinforced through real-world scenarios and hands-on exercises.
Don’t reinvent the wheel
Leverage existing frameworks and tools as foundations. Standards like ISO 5338 and AI governance models can be integrated into current workflows instead of creating entirely new protocols.