Trey Ford has seen it before: The engineering team is off to the races, exploring the latest artificial intelligence (AI) tools, while the security side is still trying to catch its breath.
"Security teams are never overstaffed," says Ford, CISO, Americas, at Bugcrowd. "And out of nowhere, we have this new thing that is a game-changer. But there's this whole sudden mindset where, 'We're going to go do this thing.'"
In the rush to start using AI-powered solutions, many organizations are skipping the basics — starting with a formal AI policy. According to a recent ISACA survey that polled 3,029 digital trust professionals worldwide, only 28% of organizations have a formal, comprehensive policy in place for AI. That's better than last year's 15% but still quite low given the widespread adoption of AI that is afoot in organizations. In fact, the same ISACA research finds 81% of respondents believe employees within their organizations use AI, whether or not it is permitted.
That lack of policy is not just a governance issue — it's a security problem.
"Governance often focuses on ethics and bias but misses the real security threats," says Ankur Shah, co-founder and CEO of AI security platform Straiker.
He points to overlooked risks, like prompt injection attacks, hallucination, third-party model vulnerabilities, and shadow AI tools operating outside of IT's visibility. These gaps leave organizations exposed — even as adversaries are already learning how to exploit them.
What Should an AI Policy Include?
For those who have not yet drafted a policy,it should be more than a list of do's and don'ts. A modern AI policy should provide structure for innovation, set guardrails for safety, and define clear boundaries for acceptable use. Every AI policy should include core components, Shah says, such as:
- Acceptable use definitions and business purpose
- Data processing and privacy guidelines
- Security and safety controls
- Transparency and clear expectations
"Make it principle-based but tied to clear, enforceable controls," Shah says.
That includes restrictions on exposure of customers' personally identifiable information, limits on overly permissive AI agents, and differentiated requirements depending on the risk — say, marketing copy versus healthcare data.
Andy Ellis, veteran security leader and current CEO of executive consultancy DUHA, advises against building policies that are too tool-specific or inflexible. Since the generative AI market is still rapidly evolving, policymakers risk having archaic policies that get ignored because they seem no longer relevant, the former CSO of Akamai Technologies says. Instead, he suggests categorizing tools — software-as-a-service embedded AI, content-generation tools, and AI systems that interface with internal data — and writing policy around those categories.
"Security leaders should engage early with organizational leaders and help them define their GenAI roadmaps," Ellis says. "Focus on getting business value from those tools safely, rather than trying to play whack-a-mole later."
Ford agrees. "This should not be done in a vacuum," he says. "This needs to be done in partnership with the business."
The strongest policies, he adds, enable responsible use, guide product and R&D teams early, and incorporate mechanisms for coaching rather than punishment.
It also means recognizing that people will make mistakes.
"We don't get enough sleep. We might be undercaffeinated or hungry, distracted, and we're just trying to get a job done," Ford says. "They're cutting corners with good intent but making mistakes that could be dangerous."
Turning Policy Into Practice
Once the policy is in place, enforcement matters. But it can't just be reactive. Shah recommends embedding controls where people already work — inside developer tools, office software, and employee browsers. Enforcement works when it's built into how people work, not added on top, he says. Ongoing training should be tailored by role and supported by real-time monitoring, automated red teaming and a clear process for detecting and responding to inappropriate AI use.
"Start with a core policy, then layer department-specific controls and reviews," Shah says.
And don't fool yourself into thinking your policy will keep employees from engaging in unsafe behavior. Banning tools like ChatGPT or Gemini outright can backfire, Ellis says.
"Monitoring and securing public AI tool use is going to be more effective than outright bans," he notes, instead recommending providing safe, approved alternatives and building detection and coaching processes for when employees stray from those options.
AI Policies for a Moving Target
From the EU AI Act to US state-level proposals, companies must design their policies to adapt as global AI regulations take shape.
"Flexibility is key," Shah says. "Regulations will evolve, and so should your controls."
Frameworks like NIST's AI Risk Management Framework, OWASP's LLM Top 10, and ISO/IEC 42001 can help establish a baseline. But Shah, Ford, and Ellis all agree that policies should be living documents—regularly updated, context-aware, and integrated into broader enterprise risk management.
"What are the worst things that could possibly happen, and how are we instrumenting against them?" Ford says. "And what are the most regularly contacted loss scenarios?"
AI policies aren't just about compliance; they're about foresight, Ford says. If organizations want to harness the power of AI without compounding their risks, the work starts now.