Everyone’s racing to go all-in on AI, but there’s a twist: while organizations strive to harness AI for growth, the governance needed to sustain that growth often lags behind. Generative AI is redefining how data is created, classified, and secured, and in doing so, it’s forcing business and security leaders to rethink the balance between innovation and control.
At the AI Technology Top Voice, our mission is to spotlight leaders shaping this balance. I had the privilege of interviewing Dana Simberkoff, Chief Risk, Privacy, and Information Security Officer at AvePoint, whose deep experience at the intersection of compliance, privacy, and technology makes her one of the most credible voices on responsible AI governance today.
According to AvePoint’s latest research, organizations say enhancing customer insights and personalization is their top AI goal. However, there’s a 5.8% gap between what they hope to achieve and what they actually deliver! This gap represents more than a missed KPI; it reflects the broader readiness challenge enterprises face when innovation outpaces governance.
In this conversation, Dana shares actionable insights on how organizations can close that gap through stronger policies, cross-functional accountability, and AI-driven compliance frameworks that turn security into a strategic advantage.
Here’s the full interview.
AI Technology Insights (AIT): Hi Dana, please tell us about your experience with modern GenAI tools and how you prepare for the disruptions they bring.
Dana Simberkoff: AI isn’t just transforming how we secure data and boost daily productivity; it’s also reshaping cyber risk. Today, any new AI solution – whether it’s an internal pilot of a proprietary tool, a freely available public AI tool, or the launch of something like Copilot – introduces huge potential for information and data breaches.
To manage these risks effectively, organizations need smart AI policies that balance the need to innovate with the need to protect sensitive data. Some organizations have started to do this, but there’s a lot of work still to be done. AvePoint’s own survey recently found that only 43.4% of organizations deploying AI are actively working on AI policies, which shows us that effective AI governance is still in the brainstorming phase in many environments. This needs to change (and fast) since the risks of AI are already here.
Recommended: AITech Top Voice: Interview with Peter Weckesser, Chief Digital Officer at Schneider Electric
AIT: Despite 90% of organizations claiming effective information management, only 30% have robust data classification systems. What’s driving this disconnect between perception and implementation?
Dana Simberkoff: This is a staggering stat!
For me, it just underscores the fact that simply having data governance and security policies is not enough; you actually have to not only “say what you do”, but also “do what you say”.

Classification needs to be implemented as part of basic data hygiene, and you must iterate continuously for those policies to stay effective. For data governance frameworks to succeed, you need buy-in from your workforce, and you also need technology that can support those policies effectively, on a continual basis. It’s not as simple as drafting a policy and walking away; far from it, in fact.
You need real investment and continual effort to reap the rewards.
AIT: Even among the most “prepared” organizations, 52.4% report data security incidents. What does this say about how enterprises are measuring readiness, and where are they going wrong?
Dana Simberkoff: Many organizations that feel prepared to deploy AI safely have policies and technology in place to manage risk, but these measures are often barebones or cursory rather than meaningful and truly effective. To actually guard against the threats (both malicious and not) of the AI age, organizations need a holistic, comprehensive approach that governs data across the entire lifecycle, from creation to deletion.
Many organizations—even ones that consider themselves to be prepared and mature—simply aren’t ready to do that, which is why security incidents are so widespread. There’s a disconnect between policy and reality.
AIT: As AI-generated data is projected to surpass 50% of enterprise data within a year, what new risk surfaces does this introduce for information governance and security teams?
Dana Simberkoff: AI-generated data creates challenges in assuring quality, accuracy, and control. The speed at which AI generates new data is astounding, and that’s leading to an explosion in redundant, unclassified information. AI data should be treated like any other asset, which means that it must be tracked, classified, and governed on a continual basis. This infrastructure – accounting for data lifecycles and retention – should be in place before AI even reaches the deployment phase.
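As a concrete illustration of that point, here’s a minimal sketch, assuming hypothetical field names, classification labels, and retention windows (not AvePoint’s implementation), of how AI-generated content could carry provenance and lifecycle metadata from the moment it’s created:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative retention windows per classification label (assumptions,
# not a regulatory standard or any vendor's defaults).
RETENTION = {
    "public": timedelta(days=5 * 365),
    "internal": timedelta(days=2 * 365),
    "confidential": timedelta(days=365),
}

@dataclass
class Record:
    content: str
    classification: str          # e.g. "internal"
    ai_generated: bool           # provenance: did a model produce this?
    source_model: Optional[str]  # which model, if any
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def is_expired(record: Record, now: Optional[datetime] = None) -> bool:
    """Apply the same lifecycle rule to AI-generated and human-authored data."""
    now = now or datetime.now(timezone.utc)
    return now - record.created_at > RETENTION[record.classification]

# AI output enters the data estate already tagged, so downstream governance
# jobs can track, classify, and eventually delete it like any other asset.
draft = Record("Q3 summary drafted by an assistant", "internal", True, "copilot")
print(is_expired(draft))  # False today; a scheduled job would purge it once True
```

The point of the sketch is the ordering: provenance and retention are attached at creation, not retrofitted after deployment.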
AIT: Unsanctioned AI use continues to grow year-over-year. From a CISO’s perspective, what are the most practical governance and enforcement mechanisms to curb this trend without slowing innovation?
Dana Simberkoff: The growth in shadow AI shows us that employees want to experiment with these tools and implement them to speed up repetitive work.
Security teams should provide a list of approved tools that comply with data policies, along with ongoing training and education on evolving AI models, threats, and regulations. CISOs should never assume that employees are self-educating and doing their homework to the standard the organization requires.
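For readers who want to see the shape of such a control, here’s a minimal sketch of an approved-tools check; the tool names, tiers, and policy mapping are hypothetical, and real enforcement would typically live in a proxy, DLP, or CASB layer rather than in application code:

```python
# Hypothetical allowlist mapping sanctioned AI tools to the most sensitive
# data tier each is cleared to handle.
APPROVED_AI_TOOLS = {
    "copilot": "internal",
    "internal-summarizer": "confidential",
}

# Data tiers ordered from least to most sensitive.
SENSITIVITY_ORDER = ("public", "internal", "confidential")

def may_use(tool: str, data_classification: str) -> bool:
    """Allow a tool only if it's sanctioned and cleared for this data tier."""
    cleared_tier = APPROVED_AI_TOOLS.get(tool)
    if cleared_tier is None:
        return False  # unsanctioned (shadow) AI: block and log for follow-up
    return (SENSITIVITY_ORDER.index(data_classification)
            <= SENSITIVITY_ORDER.index(cleared_tier))

print(may_use("copilot", "confidential"))   # False: not cleared for this tier
print(may_use("random-chatbot", "public"))  # False: not on the approved list
print(may_use("copilot", "internal"))       # True: sanctioned and within tier
```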
AIT: With 43% of organizations still drafting AI policies, do you believe regulatory clarity—or cultural maturity—is the bigger barrier to faster policy adoption?
Dana Simberkoff: Indecision and deliberation around AI regulation have certainly made the road to secure and effective adoption more challenging, as security teams navigate compliance with state regulations (like California’s recently passed bills on AI use and disclosure) while also anticipating larger federal laws that may be on the horizon. Cultural maturity is also a significant obstacle to AI readiness. Our survey finds that almost half of organizations are still drafting AI policies despite having already deployed AI – not because they lack oversight or guidance, but because governance hasn’t become ingrained in their operational DNA. Business and security leaders must view policies as enablers of trust and innovation, not as compliance mandates. AI maturity in 2026 means implementing adaptable frameworks that evolve with regulation, treating AI safety as a business imperative rather than a regulatory checkbox.
AIT: AI governance investments are increasing, yet effectiveness often lags. How should enterprises balance tool adoption with foundational governance practices?
Dana Simberkoff: Automated software designed for data management can accelerate and streamline compliance, but not without effective policies, clear ownership, and individual and team accountability in place. Security teams must first conduct thorough security audits, identify high-risk assets, and then classify their entire data environment to regulate the flow of confidential information.
When automation works side-by-side with human input and oversight, organizations can implement these governance frameworks more effectively, on a continual basis.
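To sketch what that audit-then-classify sequence might look like at its simplest, here’s an illustrative first-pass classifier; the detection patterns are toy assumptions, and a production system would pair far richer detectors with the human oversight Dana describes:

```python
import re

# Toy detectors for high-risk content; real classification engines use far
# richer signals (these patterns are illustrative assumptions only).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> tuple[str, list[str]]:
    """Automated first pass: label the asset and record what triggered it."""
    hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
    label = "confidential" if hits else "internal"
    return label, hits

label, hits = classify("Contact jane@example.com, SSN 123-45-6789")
print(label, hits)  # confidential ['ssn', 'email'] -> route to a review queue
```

The automated pass handles the volume; the recorded hits are what a human reviewer audits, matching the side-by-side model described above.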
AIT: As we move into 2025–2026, what emerging cybersecurity risks do you anticipate will most challenge compliance and data protection programs globally?
Dana Simberkoff: AI and agentic AI’s use as an attack vector is now evolving faster than organizations and security teams can keep up with, and even the most experienced CISOs are having to dedicate more time to AI threat tracking and knowledge-building. These threats are highly advanced and hard to spot, rendering some traditional, manual defenses outdated. Security teams must prioritize preventing AI tools from operating outside safe boundaries and ensuring encryption can evolve as threats change, building future-forward resilience.
Recommended: AITech Top Voice: Interview with Carmit DiAndrea, Director, AI Data Management at NiCE
AIT: Given your experience at AvePoint, what role do automation and AI-driven compliance tools play in bridging the gap between “claimed readiness” and actual security performance?
Dana Simberkoff: AI-driven tools are key to streamlining the monumental, constant task of managing, classifying, and governing sensitive data – if the right governance and risk policies are in place.
Once these policies are effectively implemented, security professionals can rely on automated tools to execute some of the most tedious (but still critical) governance and monitoring legwork, freeing them up to think longer-term about big-picture security and tech strategy. Automated tools and comprehensive governance can turn compliance into a regular, verifiable process rather than a once-a-year, mandatory audit exercise.
AIT: How can business leaders—especially outside IT—contribute meaningfully to risk and privacy initiatives in this new AI-driven data landscape?
Dana Simberkoff: Cybersecurity is no longer the sole responsibility of IT and security teams; it now requires buy-in from the entire C-suite. As AI threats become even more pervasive and damaging, security literacy is critical. The C-suite should work to ensure that security is a cultural priority, and that everyone at the organization is continually educated on emerging threats and how to properly protect and handle their own data. Beyond the C-suite, cybersecurity must be the responsibility of every individual employee in the business, as we are only as strong as our weakest link.
AIT: Finally, looking ahead to the next phase of digital trust, what guiding principles should every organization embed in its governance framework to stay resilient and compliant?
Dana Simberkoff: Adaptability, accountability, and transparency will be critical. Security teams should be prepared to pivot quickly as new AI regulations emerge, clearly assign AI management responsibilities to every employee and C-suite member, and communicate data governance practices openly with customers to build trust among stakeholders.
Recommended: AITech Top Voice: Interview with Rick Rosenburg, VP for Government Solutions at Rackspace
To share your insights, please write to us at info@intentamplify.com