What if the smallest oversight in software could have catastrophic consequences? Join us as we uncover the remarkable journey of Jake, a visionary engineer who has made significant strides in the tech industry. From his days at the University of Michigan to influential positions at Boeing, Amazon, and Google, Jake's story is a testament to the power of curiosity and relentless problem-solving. Discover how he pioneered Quay, the first private Docker registry, and positioned himself at the cutting edge of security and containerization.
Ever wondered about the stringent processes behind aviation software? Jake takes us through his meticulous work at Boeing, where creating safety-critical software is both a science and an art. He shares the rigorous testing and standards like DO-178B and MCDC that ensure the fail-safe operation of flight systems. Jake's insights illuminate how even the smallest IT services can have profound impacts on safety, offering a rare glimpse into the interconnected world of aviation technology and its regulations born from past tragedies.
As we wrap up, we venture into the realm of high availability software and evolving security technologies. Jake draws parallels from the aviation industry to illustrate the importance of redundancy and robust planning against failures. He discusses the benefits of unified authorization services and modern models, providing practical advice for handling software downtimes and authorization challenges in today's dynamic IT environments. Finally, listeners can learn how to connect with Jake and explore his current venture, Authzed, gaining further insights into innovative security solutions. This episode promises invaluable takeaways for tech enthusiasts and professionals alike.
Follow the Podcast on Social Media!
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Patreon: https://www.patreon.com/SecurityUnfilteredPodcast
YouTube: https://www.youtube.com/@securityunfilteredpodcast
TikTok: Not today China! Not today
Speaker 1: How's it going, Jake?
00:00:01
It's great to get you on the podcast.
00:00:03
I'm really excited for our conversation today.
00:00:07
Speaker 2: Yeah, it's going great.
00:00:07
Thanks for having me.
00:00:09
Speaker 1: Yeah, absolutely. So, Jake.
00:00:12
I start everyone off with telling their background right,
00:00:15
how they kind of got into IT, what made them interested in
00:00:19
security.
00:00:19
And the reason why I start there is because there's a lot
00:00:23
of people that could be listening, that are maybe trying
00:00:26
to get into IT or trying to get into security for the very
00:00:28
first time and hearing everyone's background I have
00:00:31
felt, you know, lets people know that, hey, it's possible, right?
00:00:36
Maybe they come from a similar background.
00:00:38
They're saying, well, if this guy did it, maybe I can do it
00:00:40
too.
00:00:40
So what's that background?
00:00:42
Speaker 2: Yeah, I guess I probably come at it from a
00:00:45
little bit different perspective .
00:00:46
It's sort of like an accidental focus on security.
00:00:49
My background, though, is that I'm an engineer by trade.
00:00:53
I went to University of Michigan for engineering.
00:00:56
After that I went out to Seattle and worked for Boeing.
00:00:59
I worked for Amazon, which is where I got my start in
00:01:02
distributed systems and service-oriented architecture,
00:01:05
those kinds of things. Then I came over here to New York, worked
00:01:08
for Google for a little while doing APIs, infrastructure, and
00:01:12
then left Google to start a company with my co-founder
00:01:15
second-time co-founder now, but a co-founder Joey and we left to
00:01:20
start a company building a web IDE integrated development
00:01:23
environment, so a tool for coding as part of that.
00:01:27
The IDE never really took off in the way that we wanted to, but
00:01:30
we did build a tool called Quay, a service called Quay, which
00:01:34
was the first private Docker registry, and if you know
00:01:37
anything about Docker, Docker is kind of like a security-forward
00:01:41
way to run lightweight processes, and building the first Docker registry really kind of put us at the
00:01:48
forefront of security and infrastructure and this whole
00:01:51
containerization revolution that was happening at the time.
00:01:54
So when I say we were the first private registry, we were
00:01:56
before Docker Hub was like even a thing before you could get it
00:02:00
from any of these other vendors that now offer registries.
00:02:03
So all of a sudden we were both engineers but we found
00:02:07
ourselves in this sort of cloud-native containerization
00:02:10
space and in sort of a very security-focused way.
00:02:14
So at the time we were doing builds, like CI builds for your containers, in a way that had a sort of provable bill of materials, right.
00:02:26
So some supply chain security stuff.
00:02:28
It's a technology for containerization, right.
00:02:31
You don't want to be running any software that's malicious.
00:02:33
You don't want to be running software that you don't know
00:02:35
what it is.
00:02:35
So that was sort of accidental.
00:02:38
We ended up selling that company to CoreOS.
00:02:40
CoreOS was kind of like a security-focused containerization operating system.
00:02:45
CoreOS ended up getting sold to Red Hat.
00:02:48
Red Hat got sold to IBM and we were like, hey, let's go start
00:02:51
another company.
00:02:51
A little bit of a wild ride there.
00:02:54
We went from a two-person company to a 400-person
00:02:57
company with sort of a direct lineage.
00:02:59
You can actually still go and buy Quay from Red Hat, from Red
00:03:02
Hat IBM today.
00:03:03
So that's like still a thing that you can get.
00:03:06
So then when we left to start this new company, we were like
00:03:09
what's the challenge?
00:03:10
What's a challenge that's been dogging us?
00:03:12
What's been causing us to slow down?
00:03:13
What's been causing us problems?
00:03:14
One of the things that we landed on was this authorization
00:03:17
challenge.
00:03:18
Right, so when we built Quay, we were like, all right, let's just copy GitHub's permissions.
00:03:32
That was our authorization thesis.
00:03:33
Let's just do what GitHub did.
00:03:33
And so we copied that at first and we launched it, and
00:03:34
then the requests started coming in.
00:03:35
They started saying, all right, that's cool, you've got these roles, but I want groups now, and I want organizations, and I want groups that can be a part of groups, and I want namespaces, and I want nested namespaces, and I want to be able to federate permissions anywhere.
00:03:48
And it was kind of like this thing that we could never quite
00:03:51
keep up with and it was the security sensitive code.
00:03:54
Right, authorization code is very security sensitive.
00:03:57
No one ever wanted to touch it, and so we're like let's go and
00:04:00
do something in that space.
00:04:01
Let's make that code something that you don't have to write, that you can delegate out, so you can set yourself up for success in sort of like a scalable, flexible way, and that's what we're doing now at Authzed. We've been kind of shipping that mission ever since.
00:04:08
Speaker 1: Well, that's really fascinating, you know, because you come from an engineering background, and I wonder if that same mentality kind of, you know,
00:04:28
propelled you through everything .
00:04:30
Right, you're identifying a problem and you're thinking how
00:04:33
can I solve that problem?
00:04:35
Right, is it via methods that are already existing?
00:04:39
Maybe it's a novel way, maybe it's, uh, you know, adjusting
00:04:44
the, the current methods a little bit, right You're, you're
00:04:47
going through that thought process of, um, you know,
00:04:51
identifying the problem and then solving it.
00:04:53
Like I talk about this sometimes, where you know my, my
00:04:56
wife, will come to me with a problem and I have to, like,
00:04:59
take a step back first and say do you want a solution or you
00:05:02
want me to hear?
00:05:02
Speaker 2: right, you want me to just hear you out, or you want
00:05:04
the solution, or do you want me to hear Right?
00:05:04
Speaker 1: Yeah, totally.
00:05:05
Do you want me to just hear you out or do you want the solution?
00:05:06
Right, and it's a different way of thinking, I think.
00:05:12
Speaker 2: Yeah, definitely, scratching our own itch is part
00:05:14
of our DNA as entrepreneurs and as company builders.
00:05:17
So when we built Quay, it was to address a need that we had found when building our IDE.
00:05:24
So we needed to be able to run user-supplied, user-written code
00:05:28
on development servers where they could sort of debug those
00:05:31
things, and so we packaged up that code with containers and
00:05:35
then we needed a place to put all of this proprietary code.
00:05:38
So that's where Quay came from and we said, well, if we've got
00:05:41
this problem, other people probably have this problem too.
00:05:43
Let's commercialize this service.
00:05:46
That we felt.
00:05:53
And then sort of the same, we're using the same pattern here,
00:05:54
which is we've struggled with authorization in the past.
00:05:55
You know we filled up entire whiteboards with how do you
00:05:57
build a scalable authorization service that's flexible enough
00:05:59
to work for kind of any service, because if you take any of
00:06:02
those parts away, it becomes sort of a tractable problem.
00:06:06
But when you have a need for scale and flexibility and high
00:06:11
throughput and low latency, that's what makes it really
00:06:13
tricky and that's, you know, that's what bit us in the past
00:06:17
and that's what we set out to solve this time around.
00:06:20
Speaker 1: So talk to me about that a bit, right, because I
00:06:23
think a lot of people will say you know, get an SSO provider
00:06:29
and that'll solve it all.
00:06:31
Right, that's kind of the mentality of IAM almost to some extent. So how does your solution kind of offset that, or augment that, in some way that provides additional value in ways that an SSO provider won't?
00:06:51
Speaker 2: Yeah, this might be a good time to jump into the
00:06:53
difference between authentication and authorization.
00:06:55
So an SSO provider is really going to help you understand who you're talking to, right? This is authentication. Who's on the other end of the internet? Who has provided evidence, in the form of a certificate or a username and password or whatever, of who they are?
00:07:13
And when you use Authzed, you still need to have an
00:07:15
authentication provider.
00:07:17
We support all the big ones.
00:07:18
We support OIDC and SAML and these various different ways,
00:07:24
and then things like Auth0, where they can do username and
00:07:26
password.
00:07:27
So that's the authentication side of the coin.
00:07:29
What we deal with is the authorization side of the coin.
00:07:31
You know who you're talking to.
00:07:33
Now what are they allowed to do ?
00:07:34
Is Joe allowed to access the balance of this bank account or
00:07:39
initiate a transfer?
00:07:40
I know I'm talking to Joe, but that's not enough to make a
00:07:44
decision about what Joe is allowed to do, and so what we've
00:07:47
done is we've built a service to help you sort of model that
00:07:51
out for your application in a really flexible way and make
00:07:54
those determinations in a secure way.
00:07:57
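To make that split concrete, here is a minimal sketch in Python. It is not the Authzed API, just an illustration under made-up names: authentication has already established who the caller is (for example via a verified token), and authorization consults stored relationships to decide what that caller may do.

```python
# Hypothetical illustration: authentication tells you WHO, authorization decides WHAT.

# Relationships that exist "behind the scenes", independent of any single request.
# Each entry reads: (subject, relation, resource).
RELATIONSHIPS = {
    ("joe", "owner", "account:1234"),
    ("jane", "spouse", "account:1234"),
}

# Which relations are sufficient for which actions (a tiny policy).
ALLOWED_RELATIONS = {
    "read_balance": {"owner", "spouse"},
    "initiate_transfer": {"owner"},
}


def is_authorized(subject: str, action: str, resource: str) -> bool:
    """Authorization: does any stored relationship permit this action?"""
    return any(
        (subject, relation, resource) in RELATIONSHIPS
        for relation in ALLOWED_RELATIONS.get(action, set())
    )


# Authentication (e.g. SSO / OIDC) has already told us the caller is "joe".
print(is_authorized("joe", "read_balance", "account:1234"))       # True
print(is_authorized("jane", "initiate_transfer", "account:1234")) # False
```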
Speaker 1: So it's more application specific of how your
00:08:01
users should be interacting with that application, with that
00:08:05
data and whatnot.
00:08:07
Speaker 2: Totally, and usually the product manager has a really
00:08:10
strong idea of how they want the application's permissions to
00:08:14
work.
00:08:14
But sometimes engineering is like well, you want groups Cool,
00:08:17
give me a week.
00:08:18
Oh, you want groups within other groups Cool, come back in
00:08:21
two years, right.
00:08:22
It's like not all the asks are equal, and sometimes the nuance is in how to actually make that thing perform.
00:08:29
The reason I bring up groups within other groups is because
00:08:33
it often requires a recursive join or recursive expression
00:08:35
which can send a lot of traditional software into fits.
00:08:38
So they already have an idea of how they want the application
00:08:41
to function and what their users would find delightful.
00:08:43
But then being able to actually build and ship that can be a
00:08:47
challenge.
00:08:48
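To show why "groups within groups" is the ask that bites, here is a small hedged sketch in Python: membership becomes a graph reachability question with unbounded nesting depth, which is exactly what a naive, fixed-depth join can't answer. The group names are invented for illustration.

```python
# Hypothetical nested group membership.
# GROUPS maps a group name to its direct members, which may be users or other groups.
GROUPS = {
    "org:acme":          {"group:engineering", "alice"},
    "group:engineering": {"group:platform", "bob"},
    "group:platform":    {"carol"},
}


def is_member(user: str, group: str) -> bool:
    """Walk the group graph; nesting depth is unbounded, so iteration/recursion is required."""
    seen = set()
    frontier = [group]
    while frontier:
        current = frontier.pop()
        if current in seen:
            continue  # guard against membership cycles
        seen.add(current)
        members = GROUPS.get(current, set())
        if user in members:
            return True
        # Descend into any nested groups.
        frontier.extend(m for m in members if m in GROUPS)
    return False


print(is_member("carol", "org:acme"))  # True, via engineering -> platform
print(is_member("dave", "org:acme"))   # False
```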
Speaker 1: That is.
00:08:48
That's really fascinating.
00:08:50
You know, I work with a lot of different developers and one of
00:08:55
the last things that they ever want to, you know, discuss with
00:08:59
me as a security professional, is that authorization, part of
00:09:03
it right, is the, is the whole IAM side of it.
00:09:06
They're always asking, is there a way that you could just, you know, set something up and we'll direct the request through that? And it's a new approach to, I guess, building out that core functionality, because it is so arduous, it is very painful, you know, and if you do it wrong, your entire application is at extreme risk.
00:09:32
Speaker 2: Yeah, um, totally.
00:09:34
It's a common refrain like why can't we just set up a piece of
00:09:38
infrastructure in between the request and the service?
00:09:41
That will sort of make this fail safe for us, right?
00:09:43
Just take this off our plate.
00:09:45
I mean, the reality is that oftentimes there's existing
00:09:50
state behind the scenes that means that you don't have enough
00:09:54
information at request time in order to make an authorization
00:09:57
determination, right?
00:10:06
So let's go back to my example.
00:10:07
Is Joe allowed to read the balance details from this bank
00:10:08
account?
00:10:08
Okay, he's got the account number.
00:10:09
I know it's Joe, right?
00:10:10
If I look at the REST request and I look at the URL parameters, it's like account number, and I've got an identity baked in in the form of a JWT or something.
00:10:18
But there's an existing relationship behind the scenes.
00:10:22
Where is Joe an owner of the account?
00:10:25
Is Joe a spouse on the account?
00:10:26
Is Joe the CFO of a multinational organization?
00:10:30
There's a lot of different ways that that decision can be made
00:10:32
and it can't all be boiled down to request-time information.
00:10:35
If it could, this would be easy, right?
00:10:38
We'd just set up some sort of policy engine in front of the
00:10:43
application and just direct all of our requests through there,
00:10:46
but it's just never that simple.
00:10:47
You can get maybe like a 50% solution: 50% of the time you can figure out if it's allowed based on stuff that's available to you at the request layer.
00:10:59
Speaker 1: Yeah, that is definitely an area that's kind
00:11:01
of almost untapped right in the security IAM space, and it's
00:11:07
also a new way of thinking about it, so it's very interesting to
00:11:11
me.
00:11:11
You know, I want to take a step back.
00:11:15
When you were at Boeing, what did you specialize in by chance?
00:11:20
Speaker 2: Yeah, Boeing was my stepping stone into sort of like the tech ecosystem.
00:11:24
But I was in the flight simulators group, so working on
00:11:28
the 787 before it existed, right before there was a physical
00:11:32
airplane with the number 787 on the tail.
00:11:36
We were in there proving out the control laws and making sure
00:11:38
that the plane was going to fly the way that the pilots would
00:11:41
expect and in a safe fashion, and it was specifically the
00:11:44
engineering simulators.
00:11:45
So the flight controls group would have an idea.
00:11:48
They'd say, hey, I wonder what would happen if... and we'd model it out in the sim, and we'd put a pilot in, you know, one of those big full-motion simulators, and we'd see what happened, see if
00:11:58
the pilot liked the way it responded or found it confusing
00:12:01
or whatever.
00:12:02
So that was kind of my stepping stone.
00:12:04
Amazon was really my first sort of internet focused company,
00:12:09
sort of relevant to what I'm doing today.
00:12:14
Speaker 1: Well, with Boeing, though, you had to have a certain level of curiosity. You had to have a certain level of, you know, thinking through
00:12:25
complex problems that could potentially be like life or
00:12:29
death.
00:12:29
You know, 10 or 15 years down the line, right, maybe not in
00:12:33
the present day, but you have to think through different
00:12:35
scenarios like, well, what if this thing goes off?
00:12:38
What do we have for compensating controls around it
00:12:42
that can potentially assist in some way?
00:12:44
And so it's a very interesting way of thinking that a lot of
00:12:49
people typically do not think about or go down that path at
00:12:52
all.
00:12:52
Do you see those skills that you gained there, you know, kind of prevailing through even to this day?
00:13:01
Speaker 2: First of all, I don't want to overstate my role at
00:13:04
Boeing.
00:13:05
So, there were flight control engineers who would have an idea
00:13:08
of what they wanted to happen, and then it was up to my team,
00:13:12
my group, to turn that into code in the simulator. Right, so it was more just verifying what they had done.
00:13:20
There was an interesting experience I did have, though,
00:13:22
when I was at Boeing, which is I took a DO-178B course, which covers safety-critical flight software.
00:13:27
So you get an idea of how software gets written in such a
00:13:30
way that you know it's not going to fail in flight when lives are on the line, and part of that
00:13:38
does stick with me today.
00:13:39
So the thesis, the fundamental crux of it, is every piece of
00:13:44
software has to be traceable back to a low-level requirement,
00:13:48
a high-level requirement, and there has to be a test for that
00:13:51
requirement.
00:13:52
That's how you write safety-critical software.
00:13:54
And then there's another concept called MC/DC, modified condition/decision coverage, which is not only does every piece of code have to be tested, but
00:14:01
every path through the code has to be independently exercised
00:14:04
and tested.
00:14:04
And so that way of thinking has really stuck with me to this
00:14:09
day about how do you build a reliable service.
00:14:12
How do you build a high performance, high availability
00:14:15
service?
00:14:15
Yeah, I would say that was sort of like the biggest takeaway, aside from it being just like an absolute blast of a job, right,
00:14:23
like you know, when I would set up a new flight control, uh, I'd
00:14:26
be like, oh well, I gotta see if it works.
00:14:28
So I'd go in there and fly it, right? That was fun, super fun.
00:14:32
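For readers unfamiliar with the MC/DC idea Jake mentions, here is a hedged sketch in Python; the decision and test cases are invented for illustration, not taken from any real flight software. The point is that every condition in a decision must be shown, by a pair of tests, to independently flip the outcome, and every test traces back to a requirement.

```python
# Hypothetical decision: deploy speed brakes only if
# (weight on wheels AND above a threshold speed) OR manual override.
def deploy_speed_brakes(on_ground: bool, fast: bool, override: bool) -> bool:
    return (on_ground and fast) or override


# A minimal MC/DC-style test set: for each condition there is a pair of cases
# that differ only in that condition and produce different outcomes,
# demonstrating the condition independently affects the decision.
TEST_VECTORS = [
    # (on_ground, fast, override, expected)
    (True,  True,  False, True),   # baseline: decision is True
    (False, True,  False, False),  # flip on_ground -> outcome flips (pairs with baseline)
    (True,  False, False, False),  # flip fast      -> outcome flips (pairs with baseline)
    (True,  False, True,  True),   # flip override  -> outcome flips (pairs with the case above)
]

for on_ground, fast, override, expected in TEST_VECTORS:
    assert deploy_speed_brakes(on_ground, fast, override) == expected
print("all MC/DC test vectors pass")
```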
Speaker 1: Yeah, that is pretty awesome.
00:14:34
You know, not like it's any comparison, right, I got Microsoft Flight Sim and, you know, of course, like I turn it
00:14:42
on and it's like hard as hell to fly, like I'm over here just
00:14:47
trying to wing it and I'm like, all right, maybe I should go back to the tutorial. I go to the tutorial and, like you know, it's a little bit better, but it's like, man, I'd have to spend like a hundred hours before I could, you know, reasonably do this thing and really pay attention to it.
00:15:04
It's remarkable, you know, just the kind of, I guess, the different systems that are involved in flying a plane, and how they all kind of, you know, intermingle to some extent, and how they work together and all that.
00:15:22
Safety critical software has always been very interesting to
00:15:25
me, right, because I only deal with, you know, what 99% of the
00:15:30
IT world deals with, which is like non-safety-critical: lives are not typically on the line, or anything like that, you know, that will be directly impacted by the software that you're writing or you're trying to protect or whatever it might be.
00:15:44
That's a very interesting scenario.
00:15:49
You know, who came up with that process and procedure of testing?
00:15:54
Like you said, like it all has to relate back, then it has to
00:15:56
be individually tested.
00:15:58
Do you know, like, where that came from, potentially?
00:16:00
Speaker 2: I don't know who the authors of the standard were, of the certification, but sort of one thing in the
00:16:10
aviation world is that all of the regulation is written in
00:16:12
blood is what they say.
00:16:12
So usually when something goes catastrophically wrong, then
00:16:15
they go back and re-examine the processes and re-examine the
00:16:18
regulations and put new things in place to make sure that that
00:16:21
specific thing never happens again.
00:16:23
So my guess is it came from there, right, like we put some
00:16:27
software on a plane, and it didn't go so great.
00:16:30
So maybe the next time we do that, we should think about it a little more.
00:16:40
Yeah, it's interesting, though, when you brought up the topic of lives not being on the line. It's interesting how often things get commingled in a way where seemingly benign services do put lives on the line.
00:16:50
I think there's a lot of stories that came out of the CrowdStrike thing that just happened, where, like, CrowdStrike says you can't use this on systems that are vital for life support. Right, that's fine; people don't do that.
00:17:04
But what ends up happening is that they use it to like run the
00:17:07
like electronic charting solution at a hospital, and then
00:17:11
that's down, and then they don't know what medicines they
00:17:14
can give to people, and now, all of a sudden, lives are on the line, and it's through this tangential, like this inversion of priorities, where things sort of bleed.
00:17:27
So you generally, like, you want to make sure that you're building high-quality, reliable software, whether you think
00:17:30
lives are on the line or not.
00:17:32
And the other thing is that there should always be a plan for when it goes down, because no software is 100%. None.
00:17:39
So it's like, if this thing fails... Like on the airplane, our solution to that was you have three copies of the software, and if one of the boxes disagrees with the other two, you shut that one off and restart it, and then the plane just flies on two.
00:17:51
So it's like nothing is ever up 100% of the time.
00:17:54
So what's your strategy?
00:18:06
What's your plan for when it inevitably goes down at the worst possible time?
00:18:08
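The "three boxes, out-vote the dissenter" scheme described above is classic triple modular redundancy. Here is a minimal Python sketch of the voting step only, with made-up channel names rather than anything resembling real avionics code; a real system would compare within tolerances and handle ties.

```python
# Hypothetical sketch of 2-out-of-3 voting across redundant channels.
from collections import Counter

def vote(outputs: dict[str, float]) -> tuple[float, list[str]]:
    """Return the majority output and the channels that disagree with it.

    `outputs` maps a channel name (e.g. "box_a") to the value it computed.
    """
    counts = Counter(outputs.values())
    majority_value, votes = counts.most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: all channels disagree")
    dissenters = [name for name, value in outputs.items() if value != majority_value]
    return majority_value, dissenters


# Box B produces a divergent value: its output is ignored, it would be restarted,
# and the system keeps running on the two agreeing boxes.
value, offline = vote({"box_a": 12.5, "box_b": 99.0, "box_c": 12.5})
print(value, offline)  # 12.5 ['box_b']
```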
Speaker 1: Yeah, that is.
00:18:09
It's fascinating because, you know, that was my initial response too, right? Well, why are we so dependent on CrowdStrike?
00:18:12
But then, at the same time, it's a lot more difficult to
00:18:16
pitch two EDRs to the board or to the CIO than it is to pitch
00:18:21
one.
00:18:21
I mean, a company that I worked for previously, they were a real big McAfee shop, McAfee all across the globe.
00:18:30
I'm sure that by the time we got rid of McAfee we were
00:18:34
probably their only customer remaining.
00:18:37
Something like that and just trying to pivot to CrowdStrike
00:18:40
was a multi-year process.
00:18:42
It was like they literally tried it one time in a
00:18:46
three-year period and it failed.
00:18:48
Like the internal buy-in failed and then, like a year later,
00:18:53
they tried again and that's another three years of trial and
00:18:57
error and everything else like that, right? Like, it's a very arduous process. But for a piece of technology
00:19:03
that is so tightly integrated into the system itself, right, I
00:19:08
mean, like the issue was a couple lines of code caused the
00:19:11
computer to like infinitely blue screen.
00:19:13
Essentially, right? That's a very difficult situation.
00:19:19
You know, as a technology professional, you can account for that.
00:19:26
You can kind of plan for it to some extent, but just thinking
00:19:32
of how that planning goes, it's a little bit farther down the
00:19:35
priority tree.
00:19:36
You know, like, I'm more worried about backups being corrupted, or systems going down with no backups, and things like that, before I get to CrowdStrike, who was a tried and true, proven technology up to that event. Which, of course, the one time that they have an issue, it's on the global stage. Maybe the worst time to have an issue.
00:20:05
Speaker 2: You know it's like any of the other failure domains
00:20:06
that we talk about.
00:20:07
Right? Like, you don't want to have a single power provider, a single internet provider, or be in a single data center.
00:20:10
Now there's another checkbox, right? Like, we're not using a single operating system vendor, we're not using a single security vendor. These things happen.
00:20:25
In startup land, we had Silicon Valley Bank, was it last summer, I think, where they were struggling, and all of a sudden startups aren't allowed to have a single bank account anymore.
00:20:29
It's very similar to how in aviation the regulation is written in blood.
00:20:33
It's like now, okay, we found a new way that things can have
00:20:37
coordinated, cascading failures, and now we have to go back and
00:20:40
add that to our checklist of things that we worry about and
00:20:43
think about.
00:20:44
It's kind of interesting too, because it was affecting one
00:20:48
operating system so totally.
00:20:50
It was almost like what we anticipated Y2K would have been.
00:20:54
It was just 24 years late, right.
00:20:56
It was like one whole operating system just disappeared.
00:20:59
Speaker 1: Yeah, that's an interesting way of thinking about it. I never thought about it like that, like it's what we thought Y2K was going to be.
00:21:09
You know, I was pretty young when Y2K was coming along, and that's all that people were talking about, and I'm just sitting here like, I don't know, am I going to be able to watch my cartoons? You know, like that's what I care about.
00:21:24
Speaker 2: I thought that the whole thing was overplayed, because I'm like, did nobody think of just setting
00:21:29
the system clock on one of these computers to 2000 and see
00:21:32
what happens?
00:21:33
But obviously that was like minimizing the whole situation,
00:21:37
right, like it's more than just personal computers at play.
00:21:41
Speaker 1: Yeah, you know, even to this day, like I don't quite
00:21:45
completely understand it, you know, because I'm thinking of,
00:21:48
like, how computers are and they just count.
00:21:50
You know, like in the background they're just counting
00:21:53
forever.
00:21:53
Yeah, so like, why would we think that there would be a
00:21:56
weird situation with that?
00:21:59
I, I don't know.
00:21:59
Well, you know, that's just someone that wasn't old enough to go through it, I guess.
00:22:05
Speaker 2: Well, that's how they work now, right? But they weren't at the time. Yeah, and then a lot of software was
00:22:11
written just assuming you'd have like a two-digit date field, and it was like, yeah, are these two numbers bigger than these other two numbers, that kind of thing.
00:22:20
So, like, zero-zero is smaller than 83, and you're like, this person must be younger or older, whatever, you know, I don't know, born first. But yeah, it was just a whole thing, and it's funny that the world got to kind of witness what would happen, just 24 years too late, right?
00:22:40
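The two-digit-year comparison described above is easy to reproduce; here is a small hedged sketch in Python with made-up values:

```python
# Hypothetical illustration of the classic two-digit-year bug.
def is_older(year_a: str, year_b: str) -> bool:
    """Pre-Y2K logic: 'born earlier' just meant a smaller two-digit year."""
    return int(year_a) < int(year_b)

# Works as long as everyone was born in the 1900s...
print(is_older("54", "83"))  # True: 1954 is before 1983

# ...but someone born in 2000 ("00") now sorts before someone born in 1983.
print(is_older("00", "83"))  # True, which is wrong once "00" means 2000
```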
Speaker 1: So a lot of what we're talking about is kind of like single points of failure.
00:22:45
Right, say you guys land a big customer like Delta or something like that, and then your solution goes down.
00:23:07
Now Delta's complaining about you in the news.
00:23:09
But with your background, you know, with Boeing and the other
00:23:13
companies that you were at, you probably have like HA built into
00:23:17
your mindset.
00:23:18
With whatever you're doing, it's going to be HA to some
00:23:21
degree.
00:23:21
What does that look like, right ?
00:23:24
Speaker 2: Yeah.
00:23:24
So it's definitely built into the product and you're right,
00:23:27
right, if our product goes down, you can't answer those
00:23:29
questions, and if you can't answer those questions, you
00:23:31
can't make progress as a product .
00:23:33
So HA is built in.
00:23:35
So like, think primary and secondary load balancers, think
00:23:39
a data store like CockroachDB or Google Cloud Spanner.
00:23:43
That's inherently a multi-region, distributed data
00:23:47
store.
00:23:47
Think distributed systems, distributed caching, with
00:23:52
multiple copies of data sprinkled everywhere, all those
00:23:56
things.
00:23:57
So our service is actually modeled after an internal
00:23:59
service at Google called Zanzibar, and one of the things
00:24:03
that drew us to that model was the fact that Zanzibar runs at Google with five nines of uptime.
00:24:08
So five nines is, like you know, minutes of downtime per month or something like that.
00:24:15
So five nines is usually sort of the gold standard for, like
00:24:18
telephony or like high availability things that aren't
00:24:21
airplanes.
00:24:21
So by copying sort of that architecture and using the same
00:24:27
mitigations for single points of failure, that's how we managed
00:24:30
to keep a highly reliable, highly available service.
00:24:36
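As a quick aside on what an availability target like five nines actually implies, here is a generic back-of-the-envelope calculation in Python (not a figure for any particular service): 99.999% availability allows roughly 26 seconds of downtime per month, or about 5.3 minutes per year.

```python
# Downtime budget implied by an availability target.
def downtime_allowed(availability: float, period_seconds: float) -> float:
    return (1.0 - availability) * period_seconds

YEAR = 365.25 * 24 * 3600
MONTH = YEAR / 12

for label, target in [("three nines", 0.999), ("five nines", 0.99999)]:
    print(f"{label}: {downtime_allowed(target, MONTH):8.1f} s/month, "
          f"{downtime_allowed(target, YEAR) / 60:7.2f} min/year")
# five nines -> ~26.3 s/month, ~5.26 min/year
```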
Speaker 1: Yeah. When I was studying for my CCSP and they started going over, like, high availability and uptime and all that sort of stuff, was it like S3 is 11 nines or something like that?
00:24:49
It's insane availability and uptime.
00:24:53
Speaker 2: S3 is 11 nines for durability.
00:24:59
Speaker 1: I think we all know S3 has gone down in the past.
00:25:00
Speaker 2: It's usually for something tangentially related, like a misconfigured router or something, but yeah.
00:25:05
So the goal for S3 is not to lose your data, right, and to be able to survive, like, asteroid-strike-level events.
00:25:16
Speaker 1: That would be really fascinating.
00:25:17
I mean, like you know, in situations like that, like when
00:25:19
my mind starts going down that path, it's like you know what is
00:25:23
actually going to be left, and I guess S3 will still be around.
00:25:33
Speaker 2: Yeah, a lot of that disaster recovery planning, I feel like, is a little bit tongue-in-cheek, like some of the scenarios that we plan for. Like, well, what if New York City, which is where the founding team for our company is, gets hit by an asteroid? We're like, well, that'll be... but none of us will really care, right? We're at ground zero.
00:25:52
Speaker 1: Yeah, disaster recovery is a weird path or, I guess, project, you know, because you kind of do it regularly in security to some extent.
00:26:02
Speaker 2: And you have to test it too.
00:26:03
That's the part that people never go back and finish.
00:26:06
It's like we put together these plans, we bought a bunch of
00:26:09
infra, we put everything in place, but we never tested if
00:26:11
any of it actually works.
00:26:14
Speaker 1: Yeah, I worked for a company where they claimed, oh yeah, we tested. We do this tabletop yearly, and it's for a whole week, and we always test it.
00:26:27
And I just asked one question: okay, well, what domain
00:26:30
controllers do we typically restore to a backup, because
00:26:34
that would be the first step in getting our network back up?
00:26:38
And they're like, we don't actually do that, we just make
00:26:42
sure the backups are there and they're, you know, stored properly. Like, well, how are they stored? How do you know? Like, oh, you know, they're secure. And I look, and they're unencrypted.
00:26:52
I'm sitting here like guys like what are we doing?
00:26:55
How are we passing anything right now?
00:26:57
Yeah, when we talk about disaster recovery, I kind of go back to this story that I heard. It took place during 9/11, right.
00:27:17
But you know, a year or two before 9/11 took place, this person was working at a firm that was in one of the towers, and their HA site was in the other tower, oh my gosh, and they were doing disaster recovery planning, and he was in a room of 20, 25
00:27:29
people and he was literally the only one in the conference room
00:27:33
that said like hey, I know this is never going to happen.
00:27:37
I know that this is crazy, you can laugh at me, that's fine.
00:27:41
But what if both towers, like, have a catastrophic issue? You know, power is taken out, or, you know, maybe they fall down or something like that?
00:27:48
Right, like what if something like that happens?
00:27:51
And literally everyone said that will never happen.
00:27:54
And he said yeah, but like can we just plan if it does?
00:27:57
You know what?
00:27:58
What if it does right.
00:27:59
And so they entertained it, and they developed, like, a big Tupperware, a big plastic container, containing all the documents of how to restore the company back to its original, I guess, running state, good enough to run and serve its key customers and whatnot. It gave you everything, taking power of the company and everything else like that, right.
00:28:25
A couple years later, this plan is actually enacted and they
00:28:30
hired a company to actually enact this plan right, and they
00:28:33
had certain people that were allowed to enact this plan and
00:28:37
the one guy that was in the room that you know pitched this idea
00:28:42
was the one guy that was late to work on 9/11, and he had access to this box.
00:28:50
He had it, like you know, locked away in a secure place
00:28:54
and he, you know, took this box, took it over to this other company, and they restored the company.
00:29:02
Speaker 2: Wow, I've never heard that story.
00:29:03
Speaker 1: Yeah, I looked it up previously.
00:29:05
There's like one article on it.
00:29:06
I can't even remember the company name.
00:29:09
I'd have to look it up again.
00:29:10
I'm sure people are probably going to say I'm lying or
00:29:12
whatever, but I read it and going through that path and then
00:29:17
taking part in disaster recovery planning myself, it
00:29:21
kind of takes you down a weird place because it's you know,
00:29:24
kind of like what that guy said.
00:29:25
I know this is never going to happen, but can we just talk for five minutes about what if it does happen?
00:29:33
Speaker 2: Yeah, it forces you to really think outside of the box and think about what failure domains might exist that nobody
00:29:39
thought about yet.
00:29:40
Speaker 1: Right, yeah. Like, even, you know, when SEAL Team Six was raiding bin Laden's compound, right, the youngest guy in the room was like, hey, I know this is never
00:29:51
going to happen, but what if a helicopter crashes, like in his
00:29:53
front yard?
00:29:54
What are we going to do?
00:29:55
And everyone was pissed off at him because he brought that up,
00:30:00
because he, you know, had that thought and that idea.
00:30:02
And then everyone else was like, all right, well, what would we do?
00:30:05
And they're like, okay, this isn't going to happen, but what
00:30:09
would we do?
00:30:09
And sure enough that one thing happens.
00:30:13
Speaker 2: Wow, man, we're way off the reservation now?
00:30:15
Speaker 1: Oh, have you not listened to any other podcast
00:30:18
episodes?
00:30:18
We go all over.
00:30:21
Speaker 2: Yeah, I'm just thinking this kind of stuff,
00:30:25
these kind of stories, would tend to lead people down sort of
00:30:27
like a conspiracy theory path, right?
00:30:30
Speaker 1: Yeah, yeah, I mean, I like to entertain, right, I
00:30:34
like to entertain that idea because it gets your mind
00:30:36
working in a different way.
00:30:37
Sometimes in security, we kind of brush off that tabletop exercise, or we kind of brush off, you know, that disaster recovery plan that we have, like, oh, that thing that we never use, why are we even updating it?
00:30:57
It's those less-than-one-percent situations where, you know, it's worth its weight in gold that it really goes through.
00:31:04
Speaker 2: You know, it seems like it's always the last
00:31:07
priority after every other thing , right, like shipping product
00:31:11
and keeping customers happy and whatnot, but when you need it,
00:31:14
you need it.
00:31:14
It's like an insurance policy.
00:31:16
Speaker 1: Yeah, yeah yeah, there have been definitely some
00:31:20
stressful times where I have accidentally deleted a
00:31:22
customer's database and it's like oh, oh God, where's that
00:31:26
backup at?
00:31:27
You know it's a very stressful time.
00:31:31
But with AuthZ and authorization overall, and how the cloud is so rapidly, you know, growing and evolving, right, where do you see this space kind of going in the near future?
00:31:46
Right, because it seems like we're getting different
00:31:50
permutations of IAM to some extent.
00:31:53
Overall, that's kind of adjusting and refactoring to
00:31:57
this new world of the cloud.
00:31:58
Speaker 2: Yeah, I think it's interesting to consider what if
00:32:04
every piece of software that you brought in to your company used
00:32:09
a service like Authzed behind the scenes to store
00:32:12
authorization.
00:32:13
Are there any delightful, interesting experiences that you
00:32:16
could build if you now had authorization as a separate
00:32:20
service for all of the software?
00:32:23
Right, could you audit things, could you see and visualize
00:32:27
things, and could you guarantee access for things that you
00:32:30
otherwise wouldn't be able to do if your authorization were in
00:32:33
like a heterogeneous set of solutions?
00:32:35
Now, obviously that's very selfish for me to bring up right
00:32:39
, because, like in that scenario , our company does super well,
00:32:43
but I think it's worth asking right, like are we doing things
00:32:46
in an efficient way, or are we doing things in a way that we're
00:32:49
building the experiences that our users are demanding?
00:32:52
So I guess I see that as a potential interesting outcome
00:32:56
from this new generation of authorization, which is like
00:33:00
well, what if all of these things can start speaking each
00:33:02
other's language for authorization?
00:33:04
What if I can start granting access to things based on access
00:33:08
that exists somewhere else?
00:33:09
Kind of questions like that, and I won't claim that I have
00:33:11
all the answers, I just think the questions themselves are
00:33:14
kind of interesting.
00:33:16
Speaker 1: Yeah, it's really fascinating to see how the IT
00:33:22
world is evolving, because we're kind of, to some extent, taking
00:33:30
different security principles or different security
00:33:32
responsibilities and we're offshoring it to some extent. Right? Like, I don't have a better term right now for it, but that's what's coming to mind.
00:33:41
Speaker 2: We're externalizing it.
00:33:42
Speaker 1: Yeah, we're externalizing it, or delegating it; that's probably the perfect word, right.
00:33:47
We're kind of delegating that difficult part, like you
00:33:50
described it or maybe even I described it right where the
00:33:54
last thing that these developers want to hear from me is IAM and groups, and God forbid I say nested group to them, like
00:34:02
they're gonna throw something at me if I say that to them, which
00:34:05
has actually happened before.
00:34:07
And you know, it's a huge pain, right, and as a security
00:34:14
professional, I don't have the expertise to build it myself.
00:34:18
You know like, if I did have the expertise to build it myself
00:34:22
, I wouldn't be working that nine to five.
00:34:25
You know like, that's, that's the thing.
00:34:28
But it's something that is a critical piece of software that
00:34:32
needs to take place in the software to some extent.
00:34:35
Right, it needs to be managed somehow.
00:34:37
You can't just have it open to everyone in the world and being
00:34:42
able to delegate that away and put that into a solution where
00:34:46
it's like, hey, this is all that they do.
00:34:48
If they do one thing well, this is literally that one thing; that's a huge benefit.
00:34:55
And we do that with every other domain, we do that with every
00:34:59
other facet of security and even IT to basically every extent
00:35:06
you know like you don't hear about anyone using another
00:35:09
solution next to GitHub to store their code, right? Like, everyone's just like, well, why would I go anywhere else?
00:35:15
And so we're kind of going into that place where now we're
00:35:20
delegating different components of IAM.
00:35:23
00:35:46
Speaker 2: And I think we already have come to terms with
00:35:48
delegating out the I part of it.
00:35:50
Right, people use hosted directory providers and they use
00:35:55
hosted identity management, things like Auth0, things like
00:35:58
SuperTokens, whatever.
00:36:00
So we've already come to terms with that.
00:36:02
And then it's the access management part of IAM, because the requirements are so fluid
00:36:05
and flexible that it's really hard to come up with something
00:36:08
generic until we have this model of storing things as
00:36:13
relationships and storing things as a directed graph and making
00:36:16
our determinations that way.
00:36:17
That's really what brings the flexibility to sort of model anything, right? You can model traditional RBAC, you can model ABAC, you can model ReBAC, relationship-based access control, where it's like, how many edges does the graph have?
00:36:32
So this really, finally, to me feels like the right
00:36:36
layer to program at, to describe to the system how authorization
00:36:42
works in my app in order to be able to delegate it.
00:36:44
And before that it was all custom code and all of that code
00:36:48
was, like you said earlier, if you get it wrong, you know
00:36:51
you've lost the game, it's game over.
00:36:53
But of course, again, take that with a grain of salt, right?
00:36:57
I have to feel that way as a founder of a startup in this
00:37:01
space, but I truly do believe it , because time is our most
00:37:04
precious asset, so I'd like to spend my time on very worthwhile
00:37:08
things.
00:37:10
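To make "storing things as relationships in a directed graph" concrete, here is a hedged Python sketch, loosely inspired by the Zanzibar-style model rather than any actual schema language or API: relationships are edges, a permission is a set of relations on a resource, and a check is reachability through the graph, which is why the same machinery can express role grants, group membership, and nesting.

```python
# Hypothetical relationship graph: edges of the form (resource, relation, subject).
# A subject is either a user or a pointer to another object's relation (a "userset").
RELATIONSHIPS = [
    ("doc:readme",     "viewer", "user:alice"),
    ("doc:readme",     "viewer", ("group:eng", "member")),       # anyone in eng can view
    ("group:eng",      "member", "user:bob"),
    ("group:eng",      "member", ("group:platform", "member")),  # nested group
    ("group:platform", "member", "user:carol"),
]

# A permission is just a union of relations on the resource; an RBAC-style role
# grant and a ReBAC relationship look the same at this layer.
PERMISSIONS = {"view": {"viewer", "editor"}}


def check(user: str, permission: str, resource: str) -> bool:
    """Is `user` reachable from `resource` through any relation granting `permission`?"""
    frontier = [(resource, rel) for rel in PERMISSIONS[permission]]
    seen = set()
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        obj, rel = node
        for res, r, subject in RELATIONSHIPS:
            if (res, r) != (obj, rel):
                continue
            if subject == user:
                return True
            if isinstance(subject, tuple):  # follow the userset edge
                frontier.append(subject)
    return False


print(check("user:carol", "view", "doc:readme"))  # True, via eng -> platform
print(check("user:dave",  "view", "doc:readme"))  # False
```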
Speaker 1: Oh for sure.
00:37:10
So maybe I'm thinking from an engineering perspective of
00:37:16
choosing a technology and then deploying it and who's going to
00:37:19
manage it, or everything else like that, right?
00:37:22
Do you make it easy enough for the developers to integrate it
00:37:28
into their workflows, to where they're the ones that are kind
00:37:31
of, I guess, deploying it, deciding on those roles and the
00:37:36
authorization you know, infrastructure or framework that
00:37:40
they want to use?
00:37:41
Or is it geared more towards like a security professional,
00:37:44
where in that situation, I would basically present some
00:37:49
questions to that developer and say what do you want it to look
00:37:52
like?
00:37:52
And then I build it out in you know your solution, and then
00:37:56
they just integrate with it.
00:37:57
How does that look?
00:37:59
Speaker 2: Yeah, it's almost always engineers or people on
00:38:03
the platform team that are bringing us in.
00:38:05
Security's role is usually let's take a look at their
00:38:08
development practices, let's take a look at the source code, run some scanners, make sure it passes muster. But it's usually
00:38:14
product teams that are bringing us in and it's because we offer
00:38:16
that flexibility they can get to revenue faster, they can ship
00:38:21
authorization experiences that would take longer if they were
00:38:24
to build it in-house, and we make it super easy to integrate
00:38:27
too.
00:38:27
So we're open core, we're open source. Like, our primary authorization engine is fully open source.
00:38:34
We come from a developer tooling background as well, so
00:38:38
we have great tooling for integrating that open source in
00:38:41
with, like, your CI and with your development workflow, and you
00:38:44
can just spin up a container and that container can run next to
00:38:47
your app like very easily.
00:38:49
So all of that stuff.
00:38:50
You know we've given all of that thought in order to make
00:38:52
sure that this thing really just does kind of like plug into your stack.
00:38:59
Speaker 1: Hmm, yeah, it's really helpful. It's always a struggle to bring a new technology into an
00:39:05
environment because it's like, all right, what complexity am I
00:39:08
introducing in ways that I don't even think of right now, you
00:39:12
know, and that's always a huge issue.
00:39:14
So any solution that makes an existing process or procedure
00:39:18
you know easier or better in any way, you know, is always
00:39:22
something that I personally want to explore more for sure,
00:39:25
because it's an easier sell for me, right?
00:39:27
And as someone that is in security, that is trying to
00:39:31
always make sure that we have, you know, the best tech, the
00:39:34
best principles and frameworks in place and whatnot, you know,
00:39:38
that's something that I'm always looking for.
00:39:39
So with that, Jake, I think we're actually at the end of our
00:39:43
time here, unfortunately, but it was a fantastic conversation.
00:39:48
Speaker 2: It went by so quickly .
00:39:49
Speaker 1: Yeah, right, it tends to, you know, when we start going down the rabbit holes that we explored, Y2K and 9/11.
00:39:56
Speaker 2: I didn't think we were going there today, but we
00:39:57
did.
00:39:59
Speaker 1: This isn't even the craziest episode that I've had. It's always a fascinating conversation.
00:40:05
You know, when you open it up to be more free-form, like it is, you know, you have those possibilities of, like, man, where is this going to go?
00:40:12
You know, and it's always fascinating.
00:40:16
Speaker 2: Excellent.
00:40:16
Well, I definitely enjoyed our chat, and thanks for having me on.
00:40:21
Speaker 1: Yeah, absolutely.
00:40:22
Well, Jake, you know, before I let you go, how about you tell
00:40:24
my audience where they can find you if they wanted to connect
00:40:27
and where they can find your company if they wanted to learn
00:40:29
more?
00:40:30
Speaker 2: Yeah, you can learn more at our website, which is
00:40:33
predictably at authzed.com.
00:40:35
I'm on LinkedIn.
00:40:37
I'm not a big user of socials I think I have like two tweets
00:40:42
ever but you can find me on LinkedIn or you can follow us.
00:40:47
I write a lot for our blog, so if you want to get some insight
00:40:51
into how I think about the particular problem domain or how
00:40:54
we're building the company or anything like that, I definitely encourage you to check it out.
00:40:59
Speaker 1: Awesome, that's great.
00:41:00
Well, thanks everyone.
00:41:01
I hope you enjoyed this episode .