Ready to uncover the mysteries of cybersecurity and IT? Today, we are joined by Martin Roesch, CEO of Netography, a self-taught cybersecurity expert who delved into this recession-proof industry. He shares his trials and triumphs, his unique approach to learning, and his insights into the balance between product usability and the industry's penchant for buying solutions rather than building them from scratch. We also chat about the deployment challenges of Web Application Firewalls (WAFs) and how Martin has navigated these waters.
As we traverse the development of intrusion detection systems, we spotlight the rise of Snort - the late 90s darling of open-source intrusion detection. Martin unpacks the intricacies of product sophistication versus user-friendliness and why it's paramount to create products that cater to users across the spectrum. From novices to experts, a user-friendly product can be a game-changer, especially in the cybersecurity realm.
Our discussion widens to network scattering and the security implications in multi-cloud environments. We consider Netography's cloud-native platform and how it functions across multiple clouds and on-premises landscapes. The conversation takes a turn towards addressing the gaps in network infrastructure and how looking at an entire enterprise network as a single entity can be beneficial. We also dive into the world of Netography, the first platform to seamlessly stitch data sources together, and how it deals with alert fatigue. Lastly, we discuss the future of cybersecurity - the promise of cloud-based solutions, real-time monitoring, and how compromise detection approaches can potentially level the playing field for businesses of all sizes. Join us on this fascinating journey through the world of cybersecurity.
Follow the Podcast on Social Media!
TikTok: Not today China! Not today
How's it going, Martin? It's really good to finally have you on the podcast here. I feel like we've been trying to schedule this thing for quite a long time, but having a new baby at home can really change things for you in ways that are significant, that you don't expect.Speaker 2:
Yeah, that's for sure. I've had a few and they definitely throw curveballs, for sure. But it's good to be here. Thanks for having me on and being patient with getting things scheduled.Speaker 1:
Yeah, absolutely. Well, I appreciate your patience as well. So talk to me a bit about how you got started going into cybersecurity or just IT overall. Right, what made you think that this area was a good idea, that this was something that you wanted to do? Did you see the opportunity, or was it like a comfortable transition, potentially?Speaker 2:
Well, that's a big story, let's see. So to get into my background getting into IT, we have to go all the way back to the 90s. I got my computer engineering degree from Clarkson University back in '92, and I went out and started working as a software engineer for a while, and eventually I ended up moving to Maryland and getting a job with a defense contractor working on information security, as we called it at the time; cybersecurity is what we call it now. But back in the mid-90s, this is like '96 or so, if you wanted to learn cybersecurity you had to teach yourself. There were no college courses, there was no secondary learning, or very little out there to go pick it up. There were a few books out; certainly a lot of them talked about cryptography and things like that as a means of securing data. But if you wanted to learn about hacking and exploits and security tools, you basically had to read source code. In a lot of cases there were text files out there, and there was Phrack and stuff like that, not the industry magazines, the hacker zines. Basically the way that I taught myself was working at my day job and also writing my own tools, since I am a software guy. I started writing little scanners and firewalls and sniffers and things like that, and eventually I decided it would be a fun project to write my own cross-platform sniffer. That came to be called Snort, and it turned into a lot more than that. But the whole career path of getting into and deciding that cybersecurity was the field that I wanted to go into was really back in the early days. I was working as a government contractor here in central Maryland, and that usually means one of a very few places that you're working for, and one day the customer came in, and it was a team from the shop that we were doing our contracting with, to check over our work and do some stuff. And these guys rolled in, and this was 1996, and these guys were like the kings of the internet.
They knew the protocols, they knew how everything worked, they knew how it all plugged together, they knew secure from not secure, things like that. And I was just super impressed with them, and I was like, that's what I want to focus on, that's what I want to do. And the thing in the back of my mind was that when I came out of college in the early 90s, there was a recession on. The Cold War had ended, so there was this peace dividend, and all of these people got flushed out of the defense industry. So there were lots of very experienced engineers out there that you were competing for jobs with, and things like that. It was a rough time back then, and I also wasn't the world's greatest student, so that certainly didn't help. And I saw cybersecurity as something that people are always going to need. The kind of flippant way that I talk about it now is that cybersecurity is like plumbing: everybody always needs it, it's never going away, and the pipes are always leaking. So it's a great field to go into, because I figured it was pretty recession-proof, which turned out to be the case in 2001 and 2008. And even more recently, the cybersecurity industry has been on the rise for really the 25-plus years that I've been involved with it. But, yeah, it's a great field. It's really interesting. It's always on the move, there's always new stuff. It's not a solvable problem, but there are really fascinating approaches to problem solving that you can apply to it. There are constant curve balls. So, yeah, it's a fascinating field.Speaker 1:
Yeah, that's definitely for sure, and you have a lot there that I want to dive into. You talk about how there's always, like, curve balls and learning curves with cybersecurity and everything. Recently I had to go through a cloud WAF POC, and I didn't realize deploying a WAF in a week is extraordinarily difficult, just to get it in front of the app that you want to test. And that's a learning curve, right? You're not going to know that unless you do it, or you have someone on the team that's done it before who says, oh no, you need this amount of time. It's, I mean, it's a whole other beast, right? Like, you could spend your entire career doing only WAFs. You could spend your whole career doing only network detection and threat detection, whatever it might be. That's what drew me to it as well, you know, the never-ending flow of information that you could dive into. When you were working for the Defense Department, or agency, whatever, were you in a situation where they were requiring you to create your own solutions for problems? The reason why I ask, right, is because the industry is kind of built around, go buy another solution, right? Go buy some other solution to fix this problem. And recently in my career, I met someone that had literally just gotten out of the Navy's, like, cyber warfare division, and we were literally discussing buying a CSPM, which CSPM works best and everything like that, and his idea was to just build it himself. And I'm over here, you know, that just, like, blew me away. Like, what are you talking about, man? I'm not building that thing, you know, and, like, you shouldn't be building that thing. Like, why are we doing this? You know, and diving into his experience a little bit, he's like, well, you know, they give you no budget and they give you a stack of papers that has problems in it, and they say, figure it out.
You know, like, there's nothing else. They give you a terminal and, like, that's it. You know, was that a similar experience for you? Or was there a bit more hands-on trying to do the solution route? Maybe that didn't work, right? That's my thought process with it.Speaker 2:
Yeah, they typically came in with their own. So it was a mix of using commercial stuff and their own tools. Back in those days I was pretty young, I was in my 20s, and I was very frequently just working on problems that I was given. Sometimes we were writing software, sometimes we were setting up and configuring systems, sometimes we were getting commercial stuff running, or sometimes we were using open source technology as well, even back then. It really kind of depended on what problem we were trying to solve, how fast we were trying to solve it, and things like that. So I did do creative work, building technology for customers. Some of it was kind of spec'd, like, here's the solution we want, go build it. So I did stuff like that. But I also did stuff like, you know, here's the component tree that we've developed and we need to add this to it, or tie these things together, or whatever. So yeah, it was a bit of everything. But, you know, the funny thing about it is that, especially in the early days, you don't know any better than to do something like go write your own CSPM or go write your own intrusion detection system, because you haven't been exposed to the budgets required of the vendor ecosystem to get things going. You've been just making things go on your own by tying things together and building glue or building original technology. Especially when we're young, we just don't know any better. The hard part of a CSPM is not building the actual engine that does CSPM.
It's all the content required to drive it: all the checks and what they mean, and all the configuration guidance, and the CVE mapping and the MITRE ATT&CK mapping, and all the other stuff that you've got to do to make it an actual useful tool. Which is a lot less about building the tool and a lot more about building all the knowledge infrastructure around it that makes it go. Same thing with intrusion detection systems. I mean, I was just developing Snort as kind of a fun thing to do on rainy days and weekends, and then all these people showed up, and all of a sudden this thing snowballed overnight into this huge thing. But if I had known what I was getting myself into, I would have been like, how could I possibly do something like that? So I mean, there's always kind of that, there might be a better way to do it than it's being done by the Wizes and Orcas of the world. So I wouldn't discourage anybody from trying. But maybe it's better that you don't know what you're getting yourself into, because otherwise you'd never try it, and that's where everything comes from.Speaker 1:
Right. When you were creating Snort, what was the problem that you were trying to resolve with it, and was it simpler or more difficult than what you expected starting out?Speaker 2:
So I was trying to solve for a few problems. One, I was teaching myself coding. I wrote, and still write when I write at all, in C for the most part, and C is a deep language, it's very tricky to do right. So I was teaching myself to be a better C programmer. I was also teaching myself how to write cross-platform sniffers. I'd written Linux-specific sniffers and SunOS-specific sniffers or Solaris-specific sniffers in the past, but I wanted to write something that was cross-platform. So I was also teaching myself how to use libpcap and things like that. So there was all that. But the goal that I was going for was, wouldn't it be fun if I could monitor my home network while I'm at work for the day and see if anybody's knocking on the door? So I would leave it running, and at that point Snort was a sniffer with a packet logger, and I would come home and have the Snort directory structure of, here's all the IP addresses that talked to you, here's all the port numbers. You could look at the packet dumps and, you know, just visually see port scanners and people trying to brute-force passwords and stuff like that. That was pretty entertaining. But after I released it to open source, about a month after I started writing it, and the 25th anniversary of the first release of Snort is this December, it very quickly, like I said, snowballed. People started asking for features and stuff like that. So it wasn't super hard to write the fundamentals of it. It didn't really get hard until I started SourceFire, and then it needed to have enterprise features, it needed to have anti-evasion technology and stuff like that, and all of a sudden I really had to start taking it from being this very kind of simplistic packet-processing pipeline with simple pattern matching to being a much more sophisticated animal. And that took a lot of doing, but I was very motivated, because I was trying to make money at that point.Speaker 1:
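The packet-logger layout Martin describes, one bucket per source IP with per-port packet dumps underneath, can be sketched in a few lines. This is purely illustrative Python, not Snort's actual C implementation, and the packet tuples are invented for the example:

```python
from collections import defaultdict

def log_by_source(packets):
    """Bucket packets per source IP, then per destination port,
    mirroring the on-disk directory layout described above."""
    tree = defaultdict(lambda: defaultdict(list))
    for src_ip, dst_port, payload in packets:
        tree[src_ip][dst_port].append(payload)
    return tree

# One host touching many ports stands out visually as a likely port scan.
packets = [("10.0.0.9", port, b"") for port in (21, 22, 23, 25, 80)]
packets.append(("192.168.1.5", 80, b"GET / HTTP/1.0"))
tree = log_by_source(packets)
print(sorted(tree["10.0.0.9"].keys()))  # [21, 22, 23, 25, 80]
```

Scanning the resulting tree by eye is exactly the "come home and look at who knocked on the door" workflow: a single source fanning out across many ports is immediately visible.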
Yeah, that is, I mean, that's really interesting, you know, creating an open source project that everyone knows. You know, everyone knows what Snort is, what to use it for, right, that sort of thing, in cybersecurity at least. And it's really interesting, you know, how did people find your solution, or Snort? How did they find Snort back then? Right, like, it probably wasn't easy. Was it, like, potentially a forum that all the hackers frequented, you know, and you'd post about it on there? Like, how did people find it?Speaker 2:
Yeah, so back then there were actually very few tool sites out there where kind of everybody hung out. So, you know, if you want to talk about it in kind of modern language, we would call it a watering hole, right? So, you know, the site that Snort debuted on was Packet Storm. So packetstorm.com was run by a guy named Ken Williams. I think it's still out there, but, you know, it's changed hands several times since then, because it has been 25 years. And so that was, like, the premier tool site. It wasn't just defensive tools, it was also exploits and things like that. So, like, when an exploit came out, very frequently you'd be able to find it on Packet Storm. And there were a few others, like Technotronic, that was run by Kirby Kuehl, and there were a few other tool sites like that. But anyway, I contacted Ken Williams and I said, hey, I got this new sniffer I wrote, called Snort. Would you post it? You know, would you put it up on the site? Sure thing, send it in. And, you know, I got a couple of emails, and then I did another release and I got a couple more, and that's kind of how it went. But Snort was kind of fascinating, because it was very much a tool for its time, because the only other intrusion detection technology out there were commercial ones, from, like, WheelGroup and ISS and things like that. They cost a lot of money. So if you wanted to do the intrusion detection function, there were really no other answers at that time. There were some other contemporary open source intrusion detection systems, but Snort kind of captured the easy-enough-for-anybody-to-use, you know, but deep-enough-for-people-to-get-sophisticated-with niche, especially as time marched on and it grew in sophistication. So yeah, its product-market fit, as we would say now, was exceptional for its time, and, you know, that product-led-growth kind of motion, as we would call it now.
That was really incredible. But yeah, you know, there were surprisingly few sites in the late 90s where kind of everybody hung out and all the tools were, and stuff like that, and that's how it got started. There was no greater marketing than just putting it up on Packet Storm and Technotronic and a few other sites, and it just took off. Wow.Speaker 1:
Yeah, you bring up a really good point. You know, the product is a balance between anyone being able to use it and then diving as deep as they want to go, right? That's a, I guess that's a problem that some products I've seen have had recently, right? Where either it's not deep enough or it's not simple enough for anyone to pick it up. It's a really fine balance between that, you know, and it always seems like the people that have been doing it for 20, 25 years are the ones that, you know, understand that. So, like, even if they step into a product that is struggling in that area, they can navigate it and kind of untangle it, I guess.Speaker 2:
Yeah, I think it's funny. So my career in computers started when I was 17, working at a retail computer store. I was a service technician, and this was in the late 80s, and the interesting thing is, there was this big movement in the computer industry back then for PCs to deliver a great out-of-box experience. So this was, like, early, early attempts at low-friction deployments, low-friction selling, stuff like that. When you got your consumer PC, you'd open up the box and all the cables would be labeled, and the connectors were color-coded, so it'd be simple to plug this thing in and get it going, because a lot of people had trouble with that. And, you know, I think that's always in the back of my mind. You know, at Netography, my current company, the thing is, it's easy to get started with, but it's also very deep. Snort was kind of the same way: a very simple rule language, and you could do a lot with the rules and get more sophisticated, especially as we added stuff like regular expressions and things like that. But it also was an extensible engine. So if you could write C, it had all sorts of API interfaces where you could extend the program. You could add in your own detection logic, you could add in your own output mechanisms to talk to databases or, you know, whatever you wanted, and things like that. So there were many levels to the system, but at the most cursory level you could still be productive almost immediately. And that was the whole goal: out of the box and up and running in 15 minutes is what the mantra was at SourceFire. But even with Snort, I wanted to be able to go from tarball to running Snort instance in five minutes. And that was kind of, you know, always my guiding thing in the back of my mind when I was developing the packages and things like that.Speaker 1:
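For readers who haven't seen it, the "very simple rule language" Martin mentions looks something like this. This example is illustrative rather than taken from any official ruleset; the SID is from the local-rules range, and the variables `$EXTERNAL_NET` and `$HOME_NET` are the standard placeholders defined in a Snort configuration:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Possible /etc/passwd retrieval attempt"; flow:to_server,established; content:"/etc/passwd"; nocase; sid:1000001; rev:1;)
```

A single line covers the whole detection: the header says what traffic to look at, and the options in parentheses say what to match and how to label it, which is why a newcomer could be productive with the rule language almost immediately.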
Yeah, I feel like that is, that's a stat, or a thought process, that is still rare to come across, right? Being able to download it and run it and be up and running within five minutes, you know, that is not something that is seen very often. As someone that has led various POCs across different domains of security, like this recent POC that I did with the cloud WAF, there was one solution that, within, you know, two hours, right, we were up and running. It was still two hours, but it wasn't the 10 days that I needed for the other solutions. You know, and that's absolutely something that matters, especially when you talk about, like, a disaster recovery scenario, and someone else that doesn't know the tool, someone that didn't buy it or even take part in the POC, has to run it. You know, they have to know how to do it and they have to be able to figure it out. I feel like, personally, all of that has to be taken into account when you're purchasing the right technology. It's an interesting problem. So, you know, I'm really fascinated by your current company and I want to dive into it. So tell me a bit more about what you're doing now, what's the company called, all that stuff.Speaker 2:
Sure. So the company is called Netography. We're a network defense platform. So this is something new under the sun. It's not NDR, it's not SIEM, it's not XDR, it's its own thing. And the fundamental problem that we're trying to solve is that, especially since the pandemic, networks have scattered. Everybody went home in, you know, March of 2020, and they were told, hey, just get your job done, we'll be back in two weeks. And, you know, that turned into six weeks, and that turned into six months, and people just got their jobs done. They scattered their enterprise networks to the four winds, basically. So now we have this huge cadre of work-from-home people, and we've got all this cloud infrastructure that was just kind of thrown up as needed, by whoever needed it, wherever they needed it, for whatever they needed it for. And now we're trying to get our hands around it as an IT industry, and it's really difficult to just answer fundamental questions of: what have I got? What is it doing? What's happening to it? How's it changing? And to be able to do so systemically across this multi-cloud plus on-prem world that most enterprises are in today. And, you know, the funny thing is, I kind of saw this problem coming a long, long time ago. It's also compounded by encryption, and now zero trust, which is basically based on this foundation of, let's encrypt everything, and then we'll give permissive access to whatever you need based on your credentials and stuff like that. Well, all that stuff works great until it doesn't. And we've basically been blinding all the ways that we used to keep tabs on things for the last, call it, five to eight years. So what Netography is doing is, we have a cloud platform, a cloud-native platform, that can take data in from your infrastructure and tell you what's going on, what you've got, what it's doing, what's happening to it.
But we're doing it in a way where you don't have to deploy a bunch of technology to make it work. We don't have appliances, we don't have agents, and we work the same across any cloud environment that you've got. Well, the five big ones, anyway: IBM and Oracle, in case you're wondering what the other two are, plus GCP, AWS and Azure, and on-prem, all under one roof. So we have one UI, we have one language that describes good and bad and what we're looking for, and things like that. So it enables us to do compromise detection and threat hunting. We do governance with it, because the way this thing is built, we can see configuration drift happening in real time, because we can see when things are not behaving as they should. I can just do straight-up discovery and mapping with it, and it's also surprisingly good at picking up DDoS attacks and things like that. So it's a really different approach from what came before. You know, obviously Snort is a deep packet inspection system, and I started becoming concerned with kind of the future of DPI back in probably 2008, 2009, when all of a sudden encryption started to turn into more and more of a thing. And here we are, you know, 15 years later, and now network encryption is blinding DPI very systemically. It's hard to get a DPI sensor where you need it, especially in the cloud, because the cloud is not friendly to doing deep packet inspection. Nobody likes appliances anymore. It's a terrible model; it has huge lifecycle management and curation costs associated with it, not to mention what it's telling you is, hey, I saw a bunch of things which could be attacks, please figure out which ones are attacks. So what we've done at Netography is really change the game and the equation to give you something that's super low-friction, right? So this is all the way back to, you know, the early days of Snort.
My experience as a technician in the early PC industry, of, you know, from out of the box to up and running as quickly as possible. You know, time to install matters, being frictionless matters. And letting you see things instantly about your environment: you don't even have to configure our stuff. As soon as you plug in the data sources, it will start telling you stuff about your network that is very hard to see otherwise, without your even having to configure it. And then, once you start configuring it, it just gets richer and richer in what it can tell you. It's got a language in it so you can extend it. So, you know, the easy stuff is easy, the hard stuff is possible. It's all API-driven, so you can plug it into your infrastructure and extend it and things like that. So this is a new way of doing network security that is built on really leveraging your existing infrastructure. We take flow data out of your on-prem infrastructure and flow logs out of your cloud infrastructure as a real-time data source for the activities that are occurring in the environment. But we also pull context out of your environments as well. So we can pull context out of your EDR, like a CrowdStrike. We can pull context out of your VPCs in the cloud. We can get it from Axonius or Wiz, or, you know, we're working on extending and integrating all across the board. So you can look at your network traffic and the activities that are occurring not just in terms of, show me all my database servers, show me all my executives and what they're connecting to, but you can also do things like, say, show me all of my traffic that is occurring to devices with CVSS scores above 7.5 that are responding to the public internet.
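The kind of query Martin just described, flow records joined with asset context, can be sketched as follows. This is an illustrative sketch only: the field names, the context shape, and the filtering style are invented for the example, not Netography's actual schema or query language:

```python
# Flow records, e.g. from NetFlow on-prem or VPC flow logs in the cloud.
flows = [
    {"src": "10.0.0.4", "dst": "203.0.113.7", "dst_is_public": True},
    {"src": "10.0.0.8", "dst": "10.0.0.4",    "dst_is_public": False},
]
# Context stitched in from other tools (vuln scanner, CSPM, CMDB, EDR).
context = {
    "10.0.0.4": {"role": "db-server", "max_cvss": 9.8},
    "10.0.0.8": {"role": "laptop",    "max_cvss": 4.3},
}

def risky_public_flows(flows, context, cvss_floor=7.5):
    """Flows where a host carrying a high-CVSS finding talks to the public internet."""
    return [
        f for f in flows
        if f["dst_is_public"]
        and context.get(f["src"], {}).get("max_cvss", 0.0) > cvss_floor
    ]

hits = risky_public_flows(flows, context)
print([f["src"] for f in hits])  # ['10.0.0.4']
```

The point of the sketch is the join itself: neither the flow data nor the vulnerability context alone can answer the question, but combined they turn a pile of flows into a targeted, risk-ranked view.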
So, like, it's really a super powerful way of understanding your network environment and being able to look for, you know, attacks and compromises, as well as, you know, just providing you with kind of fundamental governance capabilities, which, once again, are hard to do with scan-and-report tools like vulnerability management or even CSPM tools.Speaker 1:
Hmm, wow, there's a significant amount there, right? And I feel like, okay, so I have two major questions.
How do you build a product that works great for the cloud as well as on-prem, right? Like, I feel like that is designing for two completely separate things, and that's a gap that's hard to bridge, you know. So how did you accomplish doing that?Speaker 2:
Well, I'm just the CEO, so I didn't have to. But the guys who did do it, they saw the same problems that I did. You know, if you have a bifurcated security platform where I treat on-prem differently than I treat the cloud... I mean, the joke is the cloud is somebody else's computer, but really the cloud is somebody else's Linux computer, and you don't really own the network. But there is a network, and you need to understand how it's being utilized, and you can really see what's going on by observing it properly. And then you've got the on-prem world. If I have two different technology stacks for looking at both of those things differently, you're going to have gaps between them: gaps in capabilities, gaps in visibility, gaps in understanding. You'll probably have different teams running the tools. Like, you've got just this ever-expanding pile of problems, because you're treating them as different things. Your enterprise network is a composite of all of its components, and you need to treat it that way, and that's what this platform is really designed to do: treat your network as a composite of all its parts instead of, this part is different than that part. Well, if you're going to do that, you have to understand, okay, what are the fundamental data types that I can pull out of here that will show me the information that I need, so that I can treat these as different aspects of the same thing? Well, it's flow data. There are other data types out there, like, you know, DNS or HTTP logs, potentially, or, you know, whatever, that aren't in the product right now but that we're obviously aware of, as well as the context that you need to have. You've got all these platforms out there that know all sorts of things about kind of their area, where a CSPM knows about the configuration and vulnerability state of cloud workloads.
Or, you know, a vulnerability management system does the same thing about what's happening on-prem. Or, like, an Axonius, sorry, which is more of a CMDB sort of platform. Or an EDR, like a CrowdStrike, which, you know, has very granular data about the systems that it's installed on, but isn't installed on everything. So, like, being able to weave that picture together and then show the activities that are occurring on it is extremely powerful. And, like, you know, it takes a little bit of imagination. But this problem has been staring us in the face as an industry for at least a decade, and Netography is the first company that's come along that's really synthesizing this picture based on all these data sources and then giving you one way to analyze it and instrument it and treat it as your enterprise network, not as, that's cloud A, that's cloud B, and this is my legacy on-prem stuff.Speaker 1:
I would assume a solution like this does a really good job of detecting kind of rogue cloud accounts, right? If you have a developer that, you know, decides that they want to use their company card to start an account with GCP or something like that, when you're predominantly in AWS, that would probably be something that would be pretty easy for you to detect, right? Yep, absolutely. Yeah, I have a pain point with that, because, you know, I started at a company a couple years ago and they told me, oh yeah, 100% AWS, there's nothing anywhere else. We have O365, but that's it. Okay, you know. And then I slowly started to find out, through the grapevine of people, you know, mistakenly saying it, that we're in Azure. And then, oh, randomly, oh, we're also in GCP, we're using this other thing in GCP. It's like, guys, this is a problem. Like, security didn't know this.Speaker 2:
Well, and it's not just that, and you're never going to see it, like, unless somebody tells you about it. Like, the old way that we would do things with DPI systems is probably not going to find it either, because deep packet inspection systems are primarily focused on one thing: finding threats that are active in the network environment, and they're looking north-south almost exclusively, and they're only looking for attacks. So they're not even going to characterize kind of the connectosphere, I guess I'll just coin a word right now, the connected biome of your actual enterprise network, because they have no ability to observe it. It's outside the scope of their application. Whereas an approach like what we're doing at Netography will show you north-south, east-west, doesn't really matter. We can instrument any part of your network. And, oh, by the way, the other cool thing about it is that as soon as you become aware that, oh, we've got stuff in GCP, you can shake information about what deployment you've got in GCP out of the people who are doing it, and, because we are cloud-based, you can deploy now, like, right now.Speaker 1:
You don't have to deploy appliances.Speaker 2:
You don't have to get a workload up and running or anything like that. You guys turn on flow over there and give me your VPC context, we'll import it into Netography and we'll see what's going on. Boom. And then all of a sudden you've got visibility into it, and you can start seeing what's actually there, and you can start instrumenting how you actually want to protect it and govern it.Speaker 1:
Yeah, I feel like that's a huge pain point, especially with the pandemic and everyone working from home. There's no longer that oversight that used to be present, and so then you can have a situation where employees are starting to spin up and use other technologies and other platforms to accomplish their job that the company didn't even know about, or whatnot. One of the areas that I would imagine would be difficult to master, especially with a product like this, is alert fatigue: avoiding it, and making sure that you're providing the end user only what they need to make the proper decision in an environment, and showing them what they should be paying attention to. How do you address that within your solution? Because I would imagine, having all of this data and doing all of the logic and the matrices behind it, that that is a difficult challenge to work through.Speaker 2:
Yes, OK, so now you're going to get some orthodoxy here. So let's talk about threat detection and compromise detection, because this is a difference of degree. I obviously am one of the people who's primarily responsible for the threat detection game as it's been done for the last 25 years. I established a fairly iconic piece of software that is one of the foundational elements of event generation for detecting security incidents in an environment. Here's the problem with the approach, and this is my analysis of how we're doing with this approach after 25 years of doing it. The problem is that Snort can detect hundreds of thousands of things now, and there's tons and tons and tons of rules for it, and the issue is that if you don't configure your rule set appropriately, you're going to get a lot of noise. Even if you do configure the rule set properly, you're still going to get a lot of noise from the vast majority of things that it detects, because we don't have the granularity to do a host-by-host configuration of what Snort should detect on a device-by-device basis. We're just going to look generally for the things we think could be problems. We're going to get an event load out of that, and then we're going to sift through that event load. We're going to contextualize each and every event that's in there for the stuff that I care about. And then, of the stuff that I care about, I'm going to try to figure out which of those events were actually compromises. Am I going to look for anything beyond compromises? In most enterprises you're not. You're just going to say, I've turned 10,000 events into 10 events of interest. I'm going to look at those 10 events of interest. Oh, none of them actually affected anything today? Guess I don't care. And tomorrow, none. The day after that, none. The day after that, oh, there's one. Oh, I did get compromised. 
Let's kick off our IR playbook: we're going to image that machine, reformat it and get it back in service, and we're going to take that image and run it through our forensics and our IR process and figure out what's going on. And that's what people do. So what's the flaw with that? Well, I did all of this stuff. I curated all these rules, I got these Snort sensors up and running, I got them tuned and configured and performing at the rate that I want them to, and every three to five years I swapped out the hardware platform that they're on, because I have to, because either the vendor I got it from is forcing me to, because it's end-of-life, or I need a faster machine because I'm pumping more bandwidth now, or whatever. So I did all that stuff so I could deal with tens of thousands of false positives. At Sourcefire we invented whole new technologies to get rid of false positives. We're good at it. We'll give you 95% data reduction without too much effort, but you're still dealing with 5% of 10,000 or 100,000, trying to figure out which of these are the ones that I care about. And at the end of the day, the outcome that you're actually going for is: I would like to know when I've actually been compromised, so I can kick off my IR playbook and get back to business. So we do that. We have been doing that for 25 years. Or, what if I just said: hey, actually, nobody cares about threat detection. Everybody cares about compromise detection. That's the outcome that you actually want. Well, what we built at Netography is actually more of a compromise detector than a threat detector, because what we've come to understand, now that we have a lot more data to work with and time is actually a factor now, is: what if I only tell you about the stuff that was compromised? 
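To make the arithmetic in that funnel concrete, here is a trivial illustration; the 95% figure is the one quoted above, and everything else is just arithmetic:

```python
# Even a 95% reduction of a large raw event load leaves a nontrivial
# daily triage queue, which is the point being made above.

def remaining_after_reduction(raw_events: int, reduction: float) -> int:
    """Events left over after a given fractional data reduction."""
    return round(raw_events * (1 - reduction))

for load in (10_000, 100_000):
    print(load, "raw events ->", remaining_after_reduction(load, 0.95), "to triage")
```

10,000 raw events still leave 500 to look at; 100,000 leave 5,000, every day.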
By being able to essentially suss that out by looking at the activities and behaviors of the devices in the environment, establishing, because I'm a metadata-driven platform that knows a lot of context about the environment, where my trust boundaries are, and what the functional, behavioral, operational envelopes are that I operate within, I can essentially observe the entire network, all of it, not just north-south traffic but east-west traffic, the stuff in the cloud, everything, and look for stuff essentially going off the rails, and tell you with a very high degree of accuracy: hey, this thing has been compromised. That's the bet that we're making. The thesis at Netography is that's probably what people actually want. The outcome that people want is compromise detection so they can kick off their IR playbook. I've been reading blog post after blog post. Kevin Mandia just talked about it; essentially it's the subtext of all these things. Coinbase just did a three-parter on how they're making detection and response scale. All of it was: we take this raw event load, we contextualize it and marry up context with it. When we figure out what the actual compromise is, we have the contextual information associated with the event that was the compromise, and we can respond more quickly, so we can kick off our IR playbook. You see it over and over again. That's the outcome that people actually want from all this analytics, this massive analytics infrastructure that we've deployed for really 30 years. Finally, people are getting to the point where it's like, hey, actually, I don't care about every SQL Slammer attack that is still out there in the background radiation of the internet. 
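A toy sketch of the behavioral-envelope idea described above, not Netography's actual algorithm: learn each host's normal set of peers from flow metadata, then flag flows that fall outside that envelope. All host names here are made up:

```python
# Toy behavioral baseline: a host's "envelope" is the set of peers it was
# seen talking to during a learning period; later flows to unseen peers
# are flagged as potential compromise indicators.
from collections import defaultdict

class PeerBaseline:
    def __init__(self):
        self.peers = defaultdict(set)  # host -> peers observed while learning

    def learn(self, flows):
        for src, dst in flows:
            self.peers[src].add(dst)

    def anomalies(self, flows):
        # A flow is anomalous if the source is known but the peer is new.
        return [(s, d) for s, d in flows
                if s in self.peers and d not in self.peers[s]]

baseline = PeerBaseline()
baseline.learn([("web-1", "db-1"), ("web-1", "cache-1"), ("app-1", "db-1")])
flagged = baseline.anomalies([("web-1", "db-1"), ("web-1", "198.51.100.7")])
print(flagged)  # only the flow to the unknown external peer is flagged
```

A production system would of course model far more than peer sets (ports, volumes, timing, trust boundaries), but the shape of the idea is the same.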
I only care about the stuff that I could actually be affected by. Because, and I know this is a very long, drawn-out explanation, when I generate an event and it goes into my event processing pipeline, it ends up in front of either my level-one guy in the SOC or my very small team, and it's going to be hours at least before that person sees that event. So the difference between detecting the attack when it happened and detecting that this machine has gone off the rails and is in fact compromised, from a temporal standpoint, again doesn't really matter. You're going to get the same outcome at the operator level. That's what we're going after with this approach: hey, let's just tell them about compromises. Let's not worry about defining every attack, because we're not doing Layer 7 inspection anyway. I can't do Layer 7 inspection the way that we're doing things now. There might be stuff that I can do down the road, looking at things like DNS and HTTP, that might give us deeper capabilities to find more fine-grained attacks. But fundamentally, what I really want to tell you about is when things have gone obviously off the rails. That's something that is doable with the approach that we've taken here. What it frees you up from is all this curation and lifecycle management, signature management, false-positive rejection, all this crap that we've been doing since I was a kid. If we do this right, and we are definitely seeing a lot of success with our approach so far, that's where we think that we're going to make a big difference.Speaker 1:
Yeah, it's a totally different way of thinking about this problem and how it should be addressed. I worked with, I think it was an XDR or something like that, whatever coined term they had. It was the most inundating tool I've ever experienced, especially when we deployed it and we had 5,000 alerts in there. My manager is like, oh well, each one needs to be justified, for we're a financial institution, we have to have a justification behind all of it. I'm sitting here like, how am I going to make a justification for 5,000 alerts? And this is 5,000 alerts today. This isn't even 5,000 over a year. That was the most frustrating thing I've ever experienced, because now I'm spending 100% of my time on this, and I'm not even able to do good enough quality work on it that I'm comfortable with, because I have to put my name out there saying, oh yeah, this is a benign alert, this doesn't matter, this is why it's a false positive, whatever it is. I don't even have the time to do the research myself to go into it, even in the data that they claim they have within their platform. It was just such a headache.Speaker 2:
Yeah, it's insanity. There are whole sectors of the security industry that are all about, oh, got alert fatigue? We can help. Nobody's just asking: why do we have all these alerts? Despite the fact that we can do high-fidelity detections, we can't actually make a high-fidelity determination that you should care. You have to have either a human being look at that, or maybe you can have an AI look at it, or do some contextualization so you can reject all the stuff that can't possibly be a problem. But beyond that, you just have to ask yourself the question. The ha-ha, maybe-not-funny thing is that all the way back to the early days of Snort, I used to tell people: don't run all the rules. For God's sake, do not run all the rules. What you need to do is run the rules for things you could possibly be affected by, and then write rules for stuff that should never happen in your environment. That's how you should run Snort. Everybody would look at me and they're like, oh, that sounds great, Marty, and nobody did it, because it's a pain in the butt. You have to sit down and understand the things that should never happen, and it gets harder and harder as the network gets bigger and bigger. Well, we've essentially turned the problem on its head and said: look how scattered these networks are. I coined an acronym for what modern networks are. I call them DEED environments: they're dispersed, ephemeral, encrypted and diverse. This is what represents modern network environments. And Snort's still out there, it's still doing a great job. If you want to defend the crown jewels with Snort, that's probably a good idea. But for broad capability across your entire DEED environment, your entire dispersed, ephemeral, encrypted, diverse, multi-cloud plus on-prem network with a work-from-home workforce, you're nuts if you even try. 
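For readers unfamiliar with that advice, a "should never happen" rule in classic Snort rule syntax might look like the following; the network, port and SID are invented for illustration:

```
# Hypothetical policy rule: this environment's server VLAN should never
# initiate outbound SSH, so any such traffic is worth an alert.
alert tcp 10.10.20.0/24 any -> $EXTERNAL_NET 22 (msg:"POLICY outbound SSH from server VLAN"; sid:1000001; rev:1;)
```

The point of rules like this is that they encode your environment's policy rather than a known attack signature, so every hit is interesting by construction.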
So I'm a software developer, software engineer guy, but I did five and a half years as chief architect for security at Cisco and things like that. Coming up with an architecture that will work across all this stuff is something that I had to do, that I do now. And the team at Netography, the actual co-founders of the company, came to the same conclusions that I did without talking to me, which is why I'm working here. Because I was like, we have the same idea, let's go. And they actually built it; I just did a whiteboard exercise. This is the only way, from what I understand, and I've been doing this for a while, to attack this problem scalably, taking all comers and things like that. If you're going to do it, this is the only way that you can do it. We'll get more sophisticated and add more data sources as time goes on and things like that, but you can't do it the old way. The Snort way isn't going to get you where you want to go anymore, and if you start looking at the outcomes that people are actually trying to get to, you start to say, hey, wait a second, we should really rethink the problem that we're trying to solve and how we solve it.Speaker 1:
Yeah, it's a really good point. I feel like that's a problem that obviously needs to be solved, but it can only be solved by the people that have done it for that long. You have to have the experience with it. You have to be able to say, oh, this is a bad workflow here, this doesn't make sense for 99% of the organizations out there. And finding that new way to work, especially with the cloud and how dispersed networks are, it sounds like that is honestly probably the only path forward that we have. And I'm sure in 20 years we'll look back and be like, oh, we have this other brand-new thing that allows us to prioritize things differently, but that's 20, 25, 30 years away. If we're talking about the modern-day cloud and how it works, especially with on-prem connections and whatnot, that is the only way to go forward.Speaker 2:
Yeah, architecturally, in a lot of ways, the way that this thing works is modeled on the way that EDR works. So EDRs, if you understand how EDRs really do their thing, collect metadata about what's occurring on a device, ship it up to a cloud backend, do their magic on the cloud backend, then send determinations back to the agents that are on the devices. We're doing something conceptually similar, where we're taking the data coming out of the infrastructure itself, no agents deployed or anything like that, no sensors, no agents, no hardware, bringing it all to a cloud backend, doing analysis, and then we can respond through our APIs to whatever your local native infrastructure is, to be able to respond to attacks or whatever else you care to respond to. And the cool thing about it is that, much like an intrusion detection and prevention engine, our system works in real time. It is not a store-and-query system like a Splunk or an XDR or almost all the technologies that are out there. We, I think almost uniquely, do what we do in real time. We have look-back capabilities and stuff like that, but as data arrives, we analyze it, run it through our processes and our models, and if we see something happening, we can actually respond as it's happening too. Which is, once again, architectural. I'm a person who thinks about things for a long time before I execute on them. In fact, I thought about this for 10 years before somebody else executed on it, and then I came on to be the CEO, which was great. Sometimes there really is, given the technology of the day, only one way to go about it, and in my opinion, this is it.Speaker 1:
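The store-and-query versus real-time contrast can be sketched in a few lines. This is a conceptual illustration only, with invented names, not how any of the products mentioned are implemented:

```python
# Conceptual sketch: instead of writing events to a store and querying
# them later, run detection logic inline on each record as it arrives
# and invoke a response callback immediately.

def stream_analyze(records, is_compromise, respond):
    """Analyze each record in arrival order; respond as soon as one matches."""
    findings = []
    for rec in records:
        if is_compromise(rec):
            respond(rec)          # signal infrastructure right away
            findings.append(rec)
    return findings

responded = []
records = [{"host": "web-1", "beaconing": False},
           {"host": "app-2", "beaconing": True}]
found = stream_analyze(records,
                       is_compromise=lambda r: r["beaconing"],
                       respond=responded.append)
print([r["host"] for r in responded])
```

In a store-and-query design, the same determination would only be made when an analyst ran a query hours later; here the response fires within the processing loop itself.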
Yeah, being able to alert while an attack is going on or while a compromise is going on, that's really critical. It's probably a stupid statement saying that it's critical, because it's obvious, but it's not obvious how many solutions out there even claim that they can do this and fall short. And there's many companies that have been compromised for significant amounts of time; the average compromise isn't detected for something like six months, and that's with all of the modern-day technology that we have. And this is the same for large companies and small companies. Companies with 300 people on their security team and companies with three people on their security team are experiencing the same thing. So it sounds like this solution is really leveling the playing field in a way, right? Because, one, you don't need to be a network expert to dive into this thing, at least it doesn't sound like it. It works across all the platforms, and it allows you, if you're curious, to dive into a certain request or whatever it might be and find out the details about it. That's a really interesting solution that you have there.Speaker 2:
Thanks, yeah. We believe, obviously, or we wouldn't be doing this, that it's a game changer. But time is a factor in every attack, and the interesting thing is, to your point, there are vanishingly few technologies that are able to actually detect and respond in real time. EDR is one, IPS and NGFW are another, and there's not really a whole lot else out there at the very, very sharp end of the spear; technologies like those are kind of it. I guess WAF is too, for example, to get back to that earlier point, but at the next level down. And the thing to understand is that EDRs run in block mode all the time; it's very natural. But most IPS detection logic does not run in block mode. We saw kind of 80/20 even in the Sourcefire days, when we were the premier intrusion prevention platform out there: only about 20% of our customers actually ran our stuff in block mode. So the fascinating thing about it is that the next step is essentially getting into an eventing pipeline where an analyst looks at it and decides if it's a compromise or not and then does something about it. But in between that is the ability to recognize something going off the rails in real time and signal out to your infrastructure to do something about it. So it isn't quite at the point of attack, like you'd have with an EDR or NGFW, but as soon as a compromise is recognized, we can respond, which is pretty cool. And we also have the ability to respond to things like governance issues. So, for example, if I see configuration drift happen. Say you've got Wiz, and Wiz figures out, hey, Dev and Prod are talking to each other in your cloud app, you need to fix that. And then you go do it. And then a push comes out of staging and all of a sudden Dev and Prod are talking to each other again. A Wiz-style CSPM isn't going to see that till the next scan cycle. We see it as it happens, we see it in real time. And we can even signal back to Wiz and say, hey, by the way, this just happened, so you might want to take a look at it and make a determination. Or we can signal to the operations team and say, hey, we just saw this, and this is contrary to your configuration directives. Hmm.Speaker 1:
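A minimal sketch of the drift check just described, with made-up zone assignments and policy: classify each flow endpoint into a zone and flag pairings the policy forbids, as each flow is observed rather than on a scan cycle:

```python
# Toy real-time governance check: a hypothetical policy says dev and prod
# must never talk to each other. Zone CIDRs and addresses are invented.
import ipaddress

ZONES = {"10.1.0.0/16": "dev", "10.2.0.0/16": "prod"}
FORBIDDEN = {frozenset({"dev", "prod"})}  # disallowed zone pairings

def zone_of(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for net, zone in ZONES.items():
        if addr in ipaddress.ip_network(net):
            return zone
    return "unknown"

def drift_violations(flows):
    """Return flows whose endpoint zones form a forbidden pairing."""
    return [(s, d) for s, d in flows
            if frozenset({zone_of(s), zone_of(d)}) in FORBIDDEN]

flows = [("10.1.4.2", "10.2.9.9"),   # dev -> prod: policy violation
         ("10.1.4.2", "10.1.7.7")]   # dev -> dev: fine
print(drift_violations(flows))
```

Because the check runs per observed flow, the violation surfaces the moment the bad push starts generating traffic, not at the next scheduled scan.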
Man, well, I feel like we could talk for another two hours easily, but unfortunately, due to time, we have to cut it a little short. I guess not short, but we have to delay our conversation, right? But Martin, before I let you go, why don't you tell my audience where they can find you, where they can find your company, and all that good information if they want to find out more about your solution. I'll put all the links in the description of this episode, of course.Speaker 2:
Yeah, absolutely, natigraphycom. That's where I am these days and that's you know. If you take a look at the website and start digging into it, you'll see what we're up to and what we're useful for. If you want to get a demo, we're always happy to demo for people and you know there's a lot that you can look at on the site and if you're looking at the site and you're looking at the screenshots, asking yourself, wow, does it really look that good? It does look that good. Our GUI is awesome and it's also composable, so you can actually make your own dashboards and things like that. So, yeah, it's an extremely powerful system. But, natigraphycom, please check it out.Speaker 1:
Awesome. Well, thanks everyone. I hope you enjoyed this episode.