Dale Hoak confronted a challenge that security leaders know well: identifying what falls outside their field of vision.
As CISO at RegScale, Hoak needed to understand potential gaps in his company's AI security posture.
"The business was moving so fast in using AI, so initially we had some visibility gaps," he explains.
Concerned that his monitoring infrastructure couldn't adequately identify AI-related risks and threats, Hoak reconfigured existing security tools and acquired new ones, including AI-focused monitoring solutions. The process took roughly six months.
"Over time I figured out what to look for using logging and SIEM and AI tools, and I feel like we now have the gaps covered," he says.
Yet uncertainty persists.
"I'm always a little wary," he acknowledges, about potential blind spots in security operations.
This wariness is justified. AI is broadening organizational attack surfaces while introducing novel risks including prompt injection and data poisoning attacks. Security executives understand these threats, but they're simultaneously grappling with visibility challenges as organizations accelerate AI implementation.
Pentera's AI Security Exposure Survey 2026 Report found that 67% of CISOs report inadequate visibility into where and how AI operates across their environments.
Furthermore, 48% identified limited AI usage visibility as a primary security challenge, ranking it second only to insufficient internal expertise at 50%.
Multiple sources of visibility gaps
Nitin Raina, global CISO at Thoughtworks, identifies several scenarios that create visibility problems. Shadow AI remains a concern.
"Initially about 12 to 18 months back, we saw people using [unsanctioned versions of] ChatGPT or Gemini or buying their own niche AI tool. That has slowed down, but it's still one of the risks," Raina notes.
Existing vendors adding AI features present another challenge. "The vendors we use are adding AI capabilities and sometimes we don't have entire visibility into that," he says, despite security team efforts to understand vendor data handling and vulnerability management.
Third-party AI models create additional blind spots, Raina explains. While CISOs can conduct surface-level reviews, they typically cannot perform deep technical audits to identify potential output bias or unauthorized data transmission.
Agentic AI introduces yet another layer of complexity, with risks including hallucinations and prompt injections. These autonomous systems can fail in ways that, due to their operational speed, evade conventional security detection.
The AI security landscape draws comparisons to early cloud adoption, when CISOs faced similar shadow deployments and visibility challenges.
Today's challenges are more acute, according to Nick Kakolowski, senior research director at IANS Research. Executive fear of competitive disadvantage drives higher risk tolerance and rapid AI implementation outside standard procurement processes. The result: "blind spots are kind of everywhere."
CISOs frequently lack complete visibility into fourth-party AI systems and associated risks.
The same applies to AI output quality. "No one understands fully how to assess the outcomes of AI and the quality of the content being created by AI," Kakolowski says. "We're not going to be able to evaluate the quality and trustworthiness of the outputs of AI, and we don't know how to equip our people to do so effectively."
AI-generated code presents similar challenges, particularly when created outside development teams. "They're using vibe coding, and CISOs may not know where that AI-generated code is being integrated," Kakolowski observes.
Security leaders may also lack insight into how AI agents grant access privileges to other agents during workflow execution.
Ethical implications of AI capabilities represent another potential blind spot. "CISOs often get pulled into things that are on the ethical side of risk, and this issue of ethical AI is starting to emerge as one of them," Kakolowski notes.
Perhaps most fundamental is understanding organizational risk tolerance. "Guessing at the organization's risk tolerance is a high-level blind spot," Kakolowski says. CISOs must first define "what the organization considers reasonable versus unreasonable. That helps CISOs figure out the next step."
Improving visibility
CISOs recognize the stakes: data leaks and problematic AI outputs are among the most common consequences of poor visibility.
Security leaders are working to close visibility gaps, says Aaron Momin, CISO and chief risk officer at Synechron.
"The business has a mandate to adopt AI, but the trouble with this is that the business has been moving at lightspeed and CISOs are just catching up," Momin explains.
Momin relies on a comprehensive security strategy, established frameworks, and a clear understanding of risk appetite and tolerance. His approach emphasizes people, process, and technology to secure AI deployments and enhance visibility.
Still, gaps may persist. Traditional security tools like URL filtering and data loss prevention provide baseline control but fall short of comprehensive AI visibility.
"They're not necessarily sufficient. They could get to maybe 80% or 90% of what you need, but to get higher visibility, you have to add additional tools," Momin says.
This creates another challenge.
"Those tools have to be matured, have to be extended, have to be broader to get full visibility," Momin notes. "Now some vendors are upgrading the capabilities [offered in their security tools,] and new tools are coming on the market. And they're starting to give you full visibility."
Raina at Thoughtworks employs a similar multipronged approach, pairing administrative, governance, and technology controls, a combination that has historically served security well.
However, experts caution that traditional methods alone prove insufficient for AI visibility.
Pentera's survey found zero CISOs reporting complete visibility with no shadow AI. One-third reported good visibility with likely shadow AI, 66% acknowledged limited visibility with known shadow AI issues, and 1% reported no visibility.
Complete visibility may be unattainable for now, says Jared Oluoch, professor and director of Eastern Michigan University's School of Information Security and Applied Computing. Existing tools and strategies reduce but don't eliminate blind spots. "They can minimize the negative effects," he adds.
That's the realistic objective, according to Tal Hornstein, CISO at Cast & Crew.
Hornstein applies established security principles, citing the confidentiality, integrity, and availability (CIA) triad as foundational for ensuring AI operates within guardrails while maintaining observability.
He's exploring emerging technologies for improved observability and enforcement, but acknowledges current security tools lack full maturity.
For now, that must suffice, he argues. CISOs cannot allow visibility challenges to impede AI adoption.
"AI is the most amazing technology, and whoever doesn't use it will be left behind," Hornstein says. "So, it's important for me as a CISO and as a business leader to not put up barriers and block AI but to build up guardrails that allow the organization to move at the velocity it wants and the amount it wants while providing risk mitigation."