Are your defenses keeping pace with evolving ransomware threats? Are you looking for a way to balance user productivity with granular data safety policies? We've got your back! In our latest episode, we dive into the heart of data security, discussing the threats organizations still face even after heavy investments in infrastructure security. We are joined by Moinul Khan from Zscaler & Anneka Gupta from Rubrik, who share valuable insights into transforming security perceptions and shifting the focus to data protection.
We don't stop there - we reveal the cunning evolution of ransomware attacks, where perpetrators are not only encrypting primary systems but also targeting backups and exfiltrating data. We discuss the dangerous implications of 'double extortion' ransomware attacks, providing tips on how organizations can fortify themselves with a comprehensive security approach. We also shed light on the challenges of setting up a secure environment and how partnering with a data security provider like Rubrik can be a game-changer.
In the world of cyber threats, awareness is key. We scrutinize how organizations are adopting URL filtering and other sledgehammer approaches to protect their data and discuss the need to balance user productivity with granular policies for data safety. We highlight how AI and ML can change the game by reducing the complexity of deploying data protection solutions and helping with document classification and risk insight. And finally, we celebrate the powerful partnership between Zscaler and Rubrik - a collaboration that promises to revolutionize data security. Tune in to our podcast for an enlightening discussion on data security. Secure your future by securing your data: let's make it happen together!
Affiliate Links:
NordVPN: https://go.nordvpn.net/aff_c?offer_id=15&aff_id=87753&url_id=902
Follow the Podcast on Social Media!
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Patreon: https://www.patreon.com/SecurityUnfilteredPodcast
YouTube: https://www.youtube.com/@securityunfilteredpodcast
TikTok: Not today China! Not today
Thanks, guys, for joining me today. I believe this is probably the first episode that I've ever done with more than one person that I'm interviewing, so let's see how this goes. I think it will be a great conversation.
Speaker 2:Hopefully we can live up to the hype.
Speaker 1:I'm sure you will. So you know, Anneka, why don't we start off with you talking about Rubrik? So you're currently at Rubrik. Why don't you tell us about what Rubrik does, what you specialize in, and what the problem is in the space that you're trying to resolve?
Speaker 2:Absolutely. So hi everybody, I'm the chief product officer here at Rubrik. I've been at Rubrik for about two years, and Rubrik is a data security company, so our mission is to secure the world's data. What is the challenge that we see today? When we go out and talk to customers, customers have invested a lot in infrastructure security, that is, how to keep attackers out of your systems, but the reality is that attackers are still getting in. In fact, we did some research and learned that over 90% of organizations have had cyber attacks reach the attention of senior leadership at least 52 times in the past year. That means at least once a week. So we know that even with all these investments in keeping attackers out, they're still getting in. So Rubrik's mission is really all around data security: how do we help companies secure their data such that, when attackers get in, you can have cyber resilience, which means really being able to recover your data and to do that in a very quick manner so that you can minimize business downtime.
Speaker 1:Hmm, it's really interesting. So we're also joined by Moinul from Zscaler. I apologize if I just butchered your name. I practiced this five minutes ago, I swear.
Speaker 3:Yeah, not a problem. I think you're good.
Speaker 1:Yeah, absolutely. So, you know, how about you tell us what Zscaler specializes in within the security space, in case someone listening to this podcast has been living under a rock for the past 10 years? Why don't you maybe even tell us where Zscaler started and where you guys are going? Because I know, even as a current customer, you guys are doing a lot of different, you know, unique things. You're integrating a lot of different areas into your platform, which is very interesting, in my mind.
Speaker 3:Yeah, absolutely so. My name is Moinul Khan. I'm vice president and general manager for Zscaler's data protection business; I've been with the company a little over four years. Zscaler, you know, we have been in the industry for the last 15 years. We are a cyber security company, and we have transformed the way organizations think about security. At the end of the day, we are a man-in-the-middle proxy. On a daily basis, we are dealing with 300 billion internet transactions that go through our cloud. Think of us as a zero trust exchange where every single connection is going through us. We are inspecting the content, we are inspecting the payload, and we are making sure all your inbound and outbound communications are secure. From a technology standpoint, we have expanded into different areas. Data protection is one of the highest priorities in our portfolio, and, as a man-in-the-middle proxy, we are putting a lot of focus on data in motion. So anything that is going out to the internet, we inspect that content, we inspect that payload, and we are making sure that our customers' crown jewels are protected.
Speaker 1:Hmm, so can we talk a bit about how Rubrik secures data in the cloud? I'll give you a little bit of background. Recently, or at least a couple of years ago, I started to specialize more in cloud security, and as I grew in this field I learned pretty quickly that if I don't have security around my data in the cloud, I don't really have very much, right? There's a huge risk if I'm not doing that right. But then the issues start to come in when you're using SaaS applications and maybe you don't have the control over your data like you normally would. Maybe it's very dispersed throughout your cloud environment. You may not even know where it all resides. You know, I was working for a large company before, and they asked me, well, where does your data reside? And I said, you know, well, it resides here. And then another architect on the call said, oh, it's also over in this environment that I had never heard of before. So these things can really grow rapidly and move throughout the environment in ways that you wouldn't expect. How is Rubrik tackling that problem? Because it seems very complex. It seems very difficult to handle.
Speaker 2:Absolutely, you've hit it spot on. Visibility into what all your infrastructure is, where all your data lives, and what kinds of data live where is a huge, huge challenge facing organizations today, especially as data has become increasingly fragmented. So you have data that's sitting on-prem in your various data centers, you have data sitting in your cloud infrastructure, and you have data sitting potentially across hundreds of different SaaS applications that your organization is using. What Rubrik does is protect all of this data across all three types of environments. We always make sure that you have a copy of the data that you can recover to in the case of an attack. We're constantly scanning that data, looking for where your sensitive data lives and whether there are any anomalous changes to it. And then, in wartime, when you have to actually recover after a ransomware attack or another kind of cyber attack, we're giving you tools to make sure that in each of these environments you're able to recover quickly and, most importantly, successfully, without having to pay the ransom. You talk about the challenge of figuring out where data lives; a big piece of that is some of the observability capabilities we've developed on top of taking a backup and enabling recovery of the data, which show things like the sensitivity of data and let you scan for it very efficiently and effectively across your entire environment. And we're starting to see that data visibility concept and challenge becoming increasingly important, especially in cloud environments, where you inherently, as an IT organization and security organization, have less control over what infrastructure is getting spun up, what data is getting placed in that infrastructure, and whether it's a high-risk or low-risk environment.
Speaker 1:Hmm, so that's interesting. How do you handle the data? So you're taking a backup of the data to ensure that, in the event that the data is, you know, under a ransomware attack or under attack in general, the customer has a backup of their data. How are you protecting, you know, sensitive data like PCI or HIPAA data, things like that? Are you encrypting this backup and putting it potentially in your own cloud or in another? You know, let's say we're in AWS, right? Are you putting it into another AWS account that Rubrik potentially owns, and all that good stuff? So how are you doing that?
Speaker 2:Yeah, it's a great question, because a big part of what we're doing and how we're differentiated is really around maintaining that secure copy of the data, and we do that in many different ways. One of the ways is through air gaps, so really making sure that there's a totally separate copy in a separate tenant with separate credentials. We also have a cyber vault capability where it can be completely outside of your own environment, not just in a different tenant within your environment with potentially different credentials, but in one where no one in your organization has access. We make sure that when we write the data, it's immutable. That means it can't be edited or changed once the data is written. This is big, because what ransomware attackers tend to do is come in, encrypt the environment, and encrypt the backups as well to make sure that you can't recover. In Rubrik, you can't change the data once it's been written. Then we layer on many other controls, such as multi-factor authentication, or a quorum, otherwise called a two-person rule, where you can't change the retention policies around how long your backups are retained without these additional checkpoints, to make sure that someone doesn't just come in, set your retention policy from 30 days to 30 minutes, and wipe all of your backups. So we do all of these different kinds of capabilities, and we're constantly innovating and coming up with new ways to create confidence that you will always have a copy of the data to recover to.
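To make those controls concrete, here is a minimal sketch in Python of the two ideas Anneka describes: append-only, immutable backup records and a quorum (two-person rule) gate on shortening retention. The class and field names are illustrative assumptions, not Rubrik's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)            # frozen: attributes can't be modified after write
class BackupRecord:
    """Toy model of an immutable backup entry."""
    snapshot_id: str
    written_at: datetime
    checksum: str

class BackupVault:
    """Append-only vault with a quorum ('two-person rule') on retention changes."""
    def __init__(self, retention: timedelta, quorum_size: int = 2):
        self._records: list[BackupRecord] = []
        self.retention = retention
        self.quorum_size = quorum_size

    def write(self, record: BackupRecord) -> None:
        self._records.append(record)   # no update or delete path exists

    def set_retention(self, new_retention: timedelta, approvers: set[str]) -> None:
        # Shortening retention could silently expire backups, so it needs
        # sign-off from multiple distinct people; lengthening is harmless.
        if new_retention < self.retention and len(approvers) < self.quorum_size:
            raise PermissionError("shortening retention requires quorum approval")
        self.retention = new_retention

    def recoverable(self, now: datetime) -> list[BackupRecord]:
        return [r for r in self._records if r.written_at + self.retention > now]

vault = BackupVault(retention=timedelta(days=30))
vault.write(BackupRecord("snap-001", datetime.utcnow(), "sha256:..."))
try:
    vault.set_retention(timedelta(minutes=30), approvers={"alice"})
except PermissionError as err:
    print(err)   # a single admin can't collapse retention from 30 days to 30 minutes
```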
Speaker 1:Hmm. So, Moinul, why don't we talk a bit about how Zscaler is protecting the data in transit? Because we talked about data at rest and the security protections around that a little bit. But you know, Rubrik and Zscaler partnered recently, I believe it was announced just before RSA. You guys partnered recently, and I think it's because you have some common synergies that could give you, you know, a unique solution that covers someone's data end to end, which is actually pretty rare in the security space: to be able to offer a singular solution, or maybe not a singular solution, but a singular point of contact that owns the security of the data in the environment end to end.
Speaker 3:Yeah. So if you look at the industry, I would say for the last two decades, with traditional DLP solutions, everybody tried to protect data through the lens of incidents and compliance. But, as you mentioned, with the migration to the cloud we saw a huge opportunity to really drive business-driven data security. So if you kind of look at our journey, when we started, we were monitoring web traffic with our classification engine, what, quote-unquote, is called Web DLP. We built a classification stack with hundreds of predefined dictionaries and engines. We have a very flexible custom regular expression engine, and, at the same time, on the advanced data classification side, we did exact data match, indexed document matching, fingerprinting, OCR, right? So with that stack we built, first we started monitoring web traffic, and then we took that same stack and started offering a multi-mode CASB, both in forward proxy mode and with out-of-band APIs. We used that same classification engine for public cloud infrastructure, which is AWS, Azure, and GCP, and now we are leveraging that same engine at the endpoint to deliver endpoint DLP as well as email DLP, right? If you kind of look at our journey, what we try to achieve is business-driven data security but, at the end of the day, we were trying to reduce complexity. If you talk to large enterprise customers that are very serious about a data protection program, they will say they are running five different DLP classification engines, and that's massive complexity. We really try to reduce that complexity as far as mobility and cloud are concerned. Now, when you think about Rubrik, we are very, very complementary, because Rubrik is focusing a lot on the data-at-rest piece and we are not really there. Like I said, we are a man-in-the-middle proxy; we are monitoring all data in transit. So together we are bringing a lot of value. Like you said, end-to-end data security for data at rest as well as data in motion.
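For listeners who haven't worked with DLP classification engines, here is a minimal sketch of the "predefined dictionaries plus custom regex" idea Moinul mentions: named patterns are matched against outbound content, and the hit counts feed policy decisions. The patterns below are simplified placeholders assumed for illustration, not Zscaler's actual dictionaries.

```python
import re

# A few illustrative "predefined dictionaries": named regex patterns a DLP
# engine might match against outbound content.
DLP_DICTIONARIES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key_id":  re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def classify(payload: str) -> dict[str, int]:
    """Return a count of matches per dictionary for one outbound payload."""
    return {name: len(pattern.findall(payload))
            for name, pattern in DLP_DICTIONARIES.items()
            if pattern.findall(payload)}

# Example: inspecting a payload before it leaves the network
hits = classify("invoice for card 4111 1111 1111 1111, contact 123-45-6789")
print(hits)   # {'credit_card': 1, 'us_ssn': 1}
```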
Speaker 1:You know, something you don't see too often in this space is a partnership like this. Most of the time you see larger companies purchasing smaller companies that are prevalent in a certain space the larger company is not currently in, and, from the end user's perspective, the quality of that product slowly but consistently starts to decline. I've experienced this with probably three or four products at this point, and it's just disappointing, right? Because a company may have a really great niche product that works perfectly, they're doing great things, and then a larger company comes in and doesn't keep investing in it like they should. I feel like this is very unique. You know, we don't see it very often where two security companies are saying, you know what, you do this fantastically, we do this fantastically, why don't we just work together rather than one of us buying the other out? Let's just work together to fix this problem. And I think the title of the article that came out was the industry's first double extortion ransomware solution. So, Anneka, why don't we talk a little bit about that? We already kind of touched on ransomware, but what would a double extortion ransomware attack look like, and how are you guys able to protect your customers against this?
Speaker 2:Yeah, it's a great question. So I think many people are familiar with the first part of what happens during a ransomware attack, which is that a ransomware attacker comes in and encrypts your systems such that you no longer have access to them. And then they come to you, they give you the ransom note, and they say, hey, pay me $8 million and I will give you the decryption keys so that you can decrypt your systems. Well, that was how ransomware attacks started. As attackers got smarter, they started to take multiple different actions. Not only did they start encrypting the primary systems, but they started to go after the backups as well, because they said, hey, if you can recover from the backups, you're not going to pay the ransom, right? And so they started going after that. We're seeing now that 75% of successful ransomware attacks go after the backups and successfully encrypt them, in cases where Rubrik is not involved. And then the third tactic that we've seen ransomware attackers adopt in the past couple of years is that not only do they encrypt the primary systems and the backups, they actually also exfiltrate the data and threaten to put it on the dark web if you don't pay the ransom. And so this is becoming an increasingly large problem. Over 50% of ransomware attacks now are double extortion ransomware attacks, and actually, in some cases, attackers are not even encrypting the systems anymore. They're just going in and exfiltrating the data, because they know how valuable that data can be to an organization. And if you're a healthcare provider or in financial services, the implications of your data getting out on the dark web could be incredibly impactful to your business. So that's what a double extortion ransomware attack is: encrypting the systems, exfiltrating the data, and charging the ransom to ensure that the data doesn't get on the dark web, in addition to getting your decryption keys back.
Speaker 1:Huh, you know, this reminds me of when I started working for a credit bureau, and I won't name them, but the Equifax breach happened probably the same month that I started, and it was, I mean, it was not just hectic, but for the first time ever, security had a blank check to do whatever, because we saw the impact, we saw how difficult it was to even just put a number to the data that they lost. I mean, I think there's still actually some debate around the actual number, you know, because there are so many different sources that want to underplay it or overplay it, and they're probably still recovering from it, to be honest, recovering their reputation. So, at the end of the day, for companies to lose their data, to lose what they're in business for, I mean, that's an apocalyptic scenario, right? Hey, we may not exist tomorrow if this keeps going.
Speaker 2:Absolutely, absolutely, which is why we were so excited to partner with Zscaler to provide this solution, where we could bring insights, since we scan all of a company's most critical assets all the time and can identify that sensitive data, marrying that with Zscaler's data-in-motion technology and DLP technology to actually prevent the movement of sensitive data and enforce the data policies that an organization has. It's the first of its kind to be able to do that. Previously, it would have been very complex and very arduous to actually implement something like this. So we just saw this incredible complementary opportunity and jumped at it to provide a real solution to these kinds of attacks.
Speaker 1:Yeah, I've actually tried to deploy solutions across all of these different areas, and it takes about four different solutions to actually do it properly. That's four different vendors, so you're dealing with four different support teams, four different everything. It can be very complex. I've always found that going to one or two support teams or companies to resolve an issue is by far the easiest. So, Moinul, I have a question, because it looks like we potentially have a new acronym or a new title for an attack. It's called adversary-in-the-middle attacks. So I guess maybe I'm a little bit old school; I think of this as man in the middle. Is there any difference from man-in-the-middle attacks, or are we talking about something different here?
Speaker 3:Yeah, so again, on the man-in-the-middle piece, I think one of the things that is very important from a data protection perspective is how much of the traffic you are seeing. When you look at an organization, their users are not just using Office 365 and Google Drive; at the same time, they're probably using file-sharing apps and Prezi and ChatGPT and everything else. So architecture actually makes a big difference. Are you in the right path? Are you able to see all the web traffic? It could be a random web application, it could be a sanctioned SaaS-based service, it could be an unsanctioned application. So that's really step number one. Step number two is that today, when you look at your internet-bound traffic, I would say more than 90% of the traffic is encrypted; it's all HTTPS. So you need to have the right architecture to really crack open the SSL connection; otherwise no content inspection, no features will ever kick in, right? So that's absolutely important: do you have the right architecture and the right compute resources to really look at these 300 billion transactions a day that we see? And then the third part is, obviously, when you are trying to protect data, what is your classification engine? What is the depth and what is the breadth? These are all very, very important, and that's exactly what we focus on with Rubrik's integration. So Rubrik will start with the data at rest. It will classify the sensitive data. That sensitive data is fed to us, and we use our exact data match and indexed document matching technology. Whatever we are ingesting from Rubrik, we know what is important for that specific organization. And then we are monitoring all web transactions. It's not just Office 365 and Google we are monitoring; we are essentially looking at every single application that your users are potentially using and trying to protect that data. So again, I think the value that we are really providing is the comprehensiveness of the data security. It's not just, hey, let's try to protect one channel and we will just talk about this one channel; if the 10 other channels are not protected, then you are not really doing data security.
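A rough way to picture the exact data match (EDM) flow described here: values classified at rest are fingerprinted, and the in-line proxy checks outbound tokens against those fingerprints rather than the raw values. This is a minimal sketch under assumed data formats and names; the real integration and its indexing are far more involved.

```python
import hashlib

def fingerprint(value: str) -> str:
    """Hash a sensitive value so the inline proxy never stores it in plaintext."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Hypothetical feed of sensitive values classified at rest (e.g. by a backup
# scan); only their fingerprints are indexed by the in-line inspection engine.
SENSITIVE_INDEX = {fingerprint(v) for v in ["MRN-00912345", "MRN-00873321"]}

def inspect_transaction(tokens: list[str]) -> list[str]:
    """Flag any token in an outbound payload whose fingerprint is indexed."""
    return [t for t in tokens if fingerprint(t) in SENSITIVE_INDEX]

print(inspect_transaction(["invoice.pdf", "MRN-00912345"]))  # ['MRN-00912345']
```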
Speaker 1:That sounds leaps and bounds different from what I'm used to with data security and protecting an organization's data, to be quite honest. You know, I'm wondering, you kind of brought up AI tools and ChatGPT a bit. Did you see attacks change or kind of reconstruct themselves when ChatGPT became more mainstream, and did you have different or new challenges in how to protect your customers with this new tool coming out?
Speaker 3:Yeah, so ChatGPT and generative AI, all these applications, are very new in the industry. Now, the kinds of concerns we have seen from organizations are really about insider threats, at least as of today, because, you know, your users are naive. They don't really understand how they're putting data at risk. So the typical approach that we have seen over the last six months is like, hey, let's use URL filtering and block all these applications, right? But that sledgehammer approach really doesn't work, because if you block users today, they will always find a way, right? So the desired state for organizations is, we would like to allow these applications, but we would like to allow them in such a way that our data and crown jewels are always protected, and that's exactly what we have delivered. So again, you know, if you want to be a sledgehammer, Zscaler will help you be a sledgehammer, because, again, we are a man-in-the-middle proxy and we can just block everything. But the better approach is, how do you enable users to be productive but do data security in such a way that certain activities are not allowed? Perhaps developers uploading source code to ChatGPT is not a good idea. So again, that's exactly how we are empowering our customers: enforce very granular policies for data in motion. Don't try to be a sledgehammer. Allow them to use it, but use it in such a way that the data is always protected.
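As a sketch of what "granular instead of sledgehammer" can mean in practice, here is a toy policy rule in Python that allows a generative AI app but blocks uploads that look like source code. The app names, actions, and heuristics are assumptions for illustration only; real products express this in their own policy engines.

```python
import re

# Very rough heuristic for "this looks like source code"; a real engine would
# use proper classifiers, not a single regex.
SOURCE_CODE_HINTS = re.compile(
    r"^\s*(def |class |import |#include|public\s+class)\b", re.MULTILINE)

def evaluate(app: str, action: str, payload: str) -> str:
    """Return a verdict for one user action against a hypothetical policy."""
    if app == "chatgpt" and action == "upload" and SOURCE_CODE_HINTS.search(payload):
        return "block"            # don't paste proprietary code into the prompt
    if app == "chatgpt":
        return "allow_and_coach"  # allow, but remind the user of data policy
    return "allow"

print(evaluate("chatgpt", "upload", "def rotate_keys(tenant):\n    ..."))  # block
print(evaluate("chatgpt", "prompt", "summarize this public blog post"))    # allow_and_coach
```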
Speaker 1:You know, I'm sure there are some people out there that laugh a bit when you say, oh, it's probably a bad idea for developers to put source code into ChatGPT, and it seems like that's such a far-fetched thing, like, oh, everyone has a brain, right, they can work through this problem, they should know better. But you would be so surprised as to how many times I've had to sit down with a developer and say, hey, maybe this public repository is not a great place for you to put our proprietary data. And, you know, they're confused, right? Maybe they didn't realize the security implications of it and whatnot. And I feel like the part of this cybersecurity role that a lot of people don't like doing is the actual training part. It's actually talking to your users, your engineers, your developers, getting them to do things correctly, changing their workflows and processes. That's always, you know, the most difficult challenge by far.
Speaker 2:Yep, and I think generative AI is only making those challenges more acute, for the reasons you talked about: you have to train people around what data they can and can't put into these models, or, if you do put data in, how do you protect it and not just use models that are publicly available to everyone? There are also challenges, and we're starting to see this too, around phishing attacks getting a lot more sophisticated, because you can make a phishing email look so real with generative AI that you can fool even people who have been trained to look for this. You can fool people a lot more easily, or people can use generative AI to study the models and figure out ways to evade common attack detection. We're still in the early days. ChatGPT only came out in November, and, yeah, I think this is going to transform the landscape from a threat perspective, from an IP perspective, all of these things. It's going to change quite quickly, and we're going to have to retrain our workforces very quickly in response.
Speaker 1:You know, you bring up a really good point with the fishing attacks. So when I got into security, you know, one of my first projects was to stand up this fishing solution and, you know, create a whole campaign around it and whatnot. Right, my goal was to make it as difficult as possible. Right, because the attackers that are doing it for real, they're not going to give you any leeway, they're going to throw their best email at you to fool you, to give over everything that you can, right? So I had that mentality and I would get a lot of complaints saying, oh, these fishing campaigns are unfair. Who would ever send me, you know, requests like this other than an internal employee? And it took a little bit, right. But a couple of years later they actually got hit by a fishing attack that was crafted just like an internal employee, straight from the CEO's advisor or assistant. It was someone very close to the CEO and they got you know, it was millions of dollars from this one fishing attack, right. And that only amps the attacks. Because now they're saying, oh, these people are probably not that well trained, you know, let's attack them more, let's hit them more. And even fairly recently, I experienced this personally, where there was a fishing attack that came into my email and I have a personal policy I don't click on any links, nothing like that. You know like, if you want to send me a link, like guess again, just get on a meeting with me and show me what you want, right. But I was looking at this fishing email and it almost fooled me. It almost got me and I had to think about it for like 30 minutes and say to myself is this actually a real thing? Because the timing of it was just so I don't know. The timing of it was perfect, right. It had something to do with like the 401k, right, and something that I had like just signed up for as well with the company you know, and so like it was just like perfect timing. It's like I don't know. You know this is getting a little, a little too advanced, and I think I even put a post on LinkedIn about it. Like these fishing attacks are getting a little bit too good, because I almost just got fooled.
Speaker 2:Yeah, and bringing it back to this conversation, right, what are the attackers doing? They're trying to compromise your credentials, and then the next thing they're going to try to do is exfiltrate data, and so that's what we all have to be on high alert for. That's what we need better and more solutions to help solve: first of all, making sure that least-privilege access is actually implemented within your organization, which is so much easier said than done, right, we all know that. And then, you know, being able to detect and prevent data movement, especially if it's data that's very sensitive. These are the tools that we're going to need to keep developing to protect against this, because we know that the attackers are going to get much more sophisticated with generative AI technology and all the other technology that exists out there.
Speaker 1:So I'm trying to think of a synergy between Rubrik and Zscaler in this way. So we talked about categorizing the data, which is essentially tagging the data and whatnot, and then there's AI or ML built into it where it starts looking for other data that looks like it, maybe it wasn't tagged, maybe it was, and it starts building a knowledge base. Is that what's going on? And I feel like Zscaler kind of gives you that insight throughout the entire environment, because where Rubrik might only be in one area of the cloud, Zscaler is everywhere, to be quite honest. I mean, you turn that thing on, it gets into my on-prem data centers, I can get into resources in the cloud, it goes wherever I need it to. So, you know, Moinul, is that accurate? Am I missing a piece? Where is the synergy with this?
Speaker 3:Yeah, so again, the synergy is, as you said, that as a man-in-the-middle proxy we are seeing all the transactions going in and out. The value that we are getting from Rubrik is visibility into the data-at-rest piece that we were not paying attention to. So if you look at the flow, how the integration works, it starts with the data at rest. Rubrik's backup system is probably storing zettabytes of data; it's pretty much every single piece of data. Now, do you need to protect every single piece of the data? Probably not, because there are some crown jewels, there is some sensitive data. So they do the first level of data classification, and they share that sensitive data with us. We index the data; we are essentially learning from the customer's data so that we know exactly which data we need to protect when we see it in motion. A simple example: we understand that for this organization, the most critical asset is their PHI data. We learn that from Rubrik, and then we monitor every single web transaction, and either we are running in monitor-and-allow mode, where we try to coach the user, hey, don't do this, don't do that, or, if it's block mode, we are basically flat-out blocking the transaction. So that's one part of data protection. But also think about anomalies, right? We are doing our own AI and ML and trying to understand user behavior; we are always looking at what normal behavior is. So Anneka talked about credential theft: someone gets in and now they are trying to exfiltrate data. Most likely we are going to see abnormal behavior. That's when our UEBA will kick in and our adaptive access will kick in and challenge the user, when we see a huge amount of data leaving the premises. And when you think about UEBA and adaptive access, that equally applies to the data-at-rest piece, right? So again, how do you merge all aspects of data-at-rest security with data-in-motion security? That's what we try to achieve with this integration.
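To illustrate the UEBA idea in the simplest possible terms, here is a toy baseline check in Python that flags a user whose outbound data volume is far outside their own history. A plain z-score like this is a stand-in assumption; real behavioral analytics use much richer models and many more signals.

```python
from statistics import mean, stdev

def is_anomalous(history_mb: list[float], todays_mb: float, threshold: float = 3.0) -> bool:
    """Flag today's outbound volume if it sits far above the user's own baseline."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        return todays_mb > mu
    return (todays_mb - mu) / sigma > threshold

# Example: a user who normally uploads ~10 MB/day suddenly pushes 850 MB out.
uploads_last_days = [12.0, 9.5, 15.2, 11.1, 8.7, 13.4, 10.9, 14.0]
print(is_anomalous(uploads_last_days, todays_mb=850.0))  # True: challenge the session
```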
Speaker 2:Yeah, and from our end, we are classifying data whether it's sitting on-prem, in cloud, or in SaaS environments, and we have our own built-in classifiers to look for things like PHI and PCI data, things like that, that are pretty common. But then people can write their own classifiers too and say, this is what constitutes sensitive data in our environment. And we continue to adapt our models, make them more sophisticated, and cover more and more of the surface area that we're already backing up. That holistic visibility that we can provide back into Zscaler is very unique, because most people have to, as you were talking about, implement a different data classification tool for each SaaS app, for their cloud environment, and for their on-prem environment, and those tend to be pretty expensive to run, pretty expensive to implement, and difficult to maintain as well, and they end up leaving critical data or critical systems that they can't see on the side.
Speaker 3:One of the use cases, Joe, that I should talk about here is how things worked before the integration. Think about our existing data protection customers, Zscaler customers. As you probably know, we have 7,000 global customers, and thousands of these customers are using our EDM and IDM technology. Now, when you think about EDM, Exact Data Match, customers are basically feeding us their crown jewels, their exact data. They're not interested in getting an alert based on a generic credit card number or social security number. They're saying, hey, I have these crown jewels; can you protect this exact data? Now, in the past, before the integration, we had to tell our customers, hey, we are going to give you an on-prem VM, you are going to deploy it on-prem, and you are going to feed your data to this on-prem VM. There is a challenge with that, because the person deploying the VM needs to know where the sensitive data is and needs to run all of that process at the back end. All of that complexity we have addressed with the integration, because there is no manual process here. We know all the data is sitting in one system, which is Rubrik, and we are just getting that feed from that centralized system, and we have automated the whole process. So this particular use case is all about how you reduce the complexity of a data protection program, and I'm sure you have heard from organizations that data protection is a complex program; we try to simplify it with this integration.
Speaker 1:So you bring up a lot of really interesting points, and I feel like legacy DLP is very cumbersome to configure. I mean, you have to create so many policies for that solution to work and for it to actually do the protections that you want it to have and whatnot. And I feel like in the cloud, that scale just dramatically increases. So is there any, like, AI or ML in the background that is informing users of, you know, hey, this is the policy that we recommend you deploy, or is it potentially just deploying it on its own in the background to enable Zscaler to have that zero trust throughout the environment? I guess let's talk a little bit about that legacy model and how you guys are improving it.
Speaker 3:So you are absolutely right. With data protection, the perception is that it's just too complex, or that it requires me to build a 30-person organization who can walk around with a badge that says I'm a regex expert, right? So we were very aware of that complexity. I will give you two things that we have done to significantly reduce that complexity. The first part is the initial deployment of DLP. Traditional DLP solutions will require you to hire people who can write regex and build hundreds of policies; you have to first tell the DLP engine what sensitive data is for you. And people struggle with it, right? Because if you are talking to an admin at Target and you ask him what it is they are trying to protect, they will say, I'm trying to protect my customers' credit card information. But guess what? The same company has a legal team, the same company has an HR team, and they don't know the full extent of that sensitivity, right? So that's one challenge. So what did we do? We said, look, people are already pumping all their internet-bound traffic through us; we are seeing millions and millions of files on the wire. They don't have to tell us what is sensitive. We used heavy AI and ML behind the scenes to auto-classify all their documents. Behind the scenes we use some fairly sophisticated ML techniques, like lemmatization and natural language processing and clustering and so on and so forth. But the benefit is that, in order for you to deploy Zscaler data protection, you do not have to build a single policy. Whatever we see on the wire, we will automatically classify and put into different thematic document categories. So we will tell you: in the last eight days, I have seen 8,000 documents go up. Some of them are 1040s, some of them are tax documents, some of them are litigation documents. Guess what? In the last week, your employees uploaded 2,500 resumes; you might have a potential retention issue, who knows, right? So that was received extremely well within our existing install base, and it really gave us a huge boost in our data protection deployments. That auto-classification based on AI and ML really paid off.
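Here is a toy version of that auto-classification idea: represent documents as TF-IDF vectors and cluster them into thematic groups with no admin-written policies. This assumes scikit-learn and tiny made-up documents purely for illustration; production engines layer lemmatization, entity extraction, and much more on top of the unsupervised core shown here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Tiny made-up documents standing in for files seen "on the wire".
docs = [
    "Form 1040 individual income tax return for fiscal year",
    "W-2 wage and tax statement, employer identification number",
    "Curriculum vitae: senior software engineer, ten years experience",
    "Resume: data analyst with SQL and Python skills",
    "Settlement agreement between plaintiff and defendant, litigation hold",
    "Motion to dismiss filed with the district court",
]

# Vectorize and cluster into thematic groups, with no policies written up front.
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for doc, label in zip(docs, labels):
    print(label, doc[:45])   # ideally, documents land in tax / resume / legal themes
```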
Speaker 1:Yeah, really, that's the only way to do it nowadays, right? Like, that's the only way that you're going to stay on top of this, because the legacy way really relies on the end user, the data creator, so to speak, the data owner, to actually classify that data, and then it falls within whatever policy the security team has set up and whatnot. So we're coming up on the end of our time, but before I let you guys go, you know, why don't you tell us where you think this space is going? Because right when I think that data protection is stagnant, right when I think that the field isn't evolving, you guys come up with this fantastic partnership and these really interesting and fascinating ways of adapting to protect organizations' data. So where do you see it going within the next 12 to 18 months?
Speaker 2:Well, I would definitely say that I think there's going to be a massive acceleration of change and of innovation, and that's being driven by a few different things. One is just the threat landscape and the importance of data, and how much that landscape is shifting; I mean, the bad actors are finding new things to do, new ways to impact organizations, and we're all running to catch up. So that's one aspect of it. And then I think this whole generative AI piece that we talked about is another aspect. And then, third, we're finally in a place where we have technologies that can bring visibility into the security world. You know, five or six years ago, even though Rubrik was effectively doing data security, we weren't calling ourselves a data security company. But now what we're realizing is that we can partner with the likes of Zscaler, we can partner with the likes of Microsoft, we can partner all over the place, because we have an asset that, historically, no one has been able to bring to the table. Everyone is struggling with how to get a complete view into the critical data of an organization, and, once you have that complete view, the amount of analytics, the amount of AI that you can apply on top of it, the amount of insight that you can provide to do everything from identifying threats all the way to recovering from them quickly, means you can start to automate more, you can start to bring the IT ops and security ops organizations closer together, and you can really revolutionize the way that we react to, and proactively prevent, cyber threats.
Speaker 1:Yeah, it's a really fascinating area. Well, before I let you guys go, Anneka, why don't you tell my audience, you know, where they can find you, where they can find Rubrik if they want to learn more, and where they could potentially even go specifically to learn more about this Zscaler-Rubrik partnership?
Speaker 2:Absolutely. So you can find Rubrik at our website, www.rubrik.com, R-U-B-R-I-K. You can find me on LinkedIn, you can find me on Twitter, and I'm always happy to respond to messages, so feel free to shoot me a message. And you can find out about our partnership on both the Zscaler and Rubrik websites. We have a lot of material out there; we have press releases, we have blog posts, so there's a lot there. And we're really hoping that, if you're listening and you're a Rubrik and Zscaler customer, you'll come talk to us about this integration, because we do think it's a very unique and very valuable asset for security and IT teams.
Speaker 1:Awesome. And Moinul, you know, before I let you go as well, why don't you tell my audience where they can find you? Everyone should know where Zscaler is.
Speaker 3:Yeah, well, everything Anneka said goes for us too. You can come to our website, zscaler.com. What I would recommend is, when you go to zscaler.com, there are some specific track sessions that we recorded during Zenith Live. You know, we had Zenith Live in the US, we had Zenith Live in Berlin, and we captured a couple of track sessions that talk about the integration, so you will be able to find more in-depth information about it when you go there.
Speaker 1:Awesome. Well, this was a really fascinating and fantastic conversation. I really enjoyed it, and I thank you guys for coming on.
Speaker 2:Thanks for having us, Joe. Thank you, Joe.
Speaker 1:Yeah, absolutely, absolutely, and I hope everyone listening enjoyed this episode.