How Human-Centric Identity Strategy Unlocks DevOps Potential


🔊 MP3 Recording

Summary

"Silos aren't bad — lack of feedback between them is."
This talk by Marcus Wells to DevOps Columbia on July 24th, 2025, encompassed the following themes:

Core Themes

1. Breaking Silos Between DevOps, UX, and Identity Security

The speaker emphasizes the need for collaboration between DevOps, UX, and identity security teams, rather than working in isolated silos. Identity security should support DevOps and UX by integrating security early, not as an afterthought.

2. Challenges with DevSecOps

The "Sec" in DevSecOps often burdens developers with security responsibilities they aren't equipped to handle. Organizations sometimes collapse roles (e.g., IAM engineers doing developer work) to save costs, leading to inefficiencies.

3. Human-Centric Identity & Security

Security controls (like least privilege, separation of duties) stem from regulations like SOX and are critical for reducing fraud and abuse. Poor identity management can lead to breaches (e.g., compromised developer accounts in high-profile incidents).

4. Non-Human Identities & Modern Challenges

Identities now include IoT, APIs, and AI agents, requiring expanded security frameworks. Organizations struggle with inventorying and securing non-human identities, especially legacy systems.

5. Practical Solutions

  • Role Clarity: Avoid toxic combinations (e.g., engineers also acting as architects).
  • Audit & Monitoring: Use SIEM tools effectively to detect anomalies (e.g., excessive MFA prompts).
  • FedRAMP & Compliance: Regulatory frameworks force better practices (e.g., code reviews, secure pipelines).

6. User Experience (UX) & Security Balance

Security shouldn't sacrifice usability. UX teams should work with IAM to design intuitive yet secure flows. Example: Reducing user friction to prevent workarounds that weaken security.

7. Real-World Examples

A phishing-resistant MFA audit revealed ignored attack patterns in SIEM logs. Legacy apps complicate identity integration (e.g., SAML, cookie-based auth).

8. Collaborative Culture

Security teams should "serve" DevOps/UX by understanding their workflows and adding value, not just enforcing controls.

Memorable Quotes & Analogies

On Silos: "Silos aren't bad—lack of feedback between them is."

On Security vs. Convenience: "They're not opposites; technology can achieve both."

On Role Bloat: "Asking developers to be security experts is like asking a musician to also be their sound engineer."

On Non-Human Identities: "If you don't know what's in your environment, you can't protect it."

Final Takeaway

The talk advocates for a unified, empathetic approach where identity security enables DevOps and UX—through collaboration, clear role definitions, and proactive risk reduction—rather than acting as a gatekeeper.

TL;DR: Security succeeds when it's a bridge, not a barrier. Break silos, define roles, and integrate early—without sacrificing usability or overburdening teams.


Transcript


I wanted to speak to something that was a bit more authentic,
a bit more related to my lived experience
and some of the things I've noticed in my career
that coalesced with DevOps and UX.
And so this is where we're at.
I eventually just landed on the unvarnished truth
of my experience, which is human-centric identity
and how it relates into DevOps and UX
and really talking about it from the perspective
of an identity practitioner, you know,
just as a bit of a foreword.
In many of my experiences working in this space,
I have not had the pleasure of working with DevOps.
I have not had the pleasure of working with the UX team.
And I think that in and of itself is a bit of a statement
that kind of lays the foundation for what we're going to talk about next.
As I mentioned, I had a whole talk planned.
Completely scrapped. Whole thing, right?
I wanted to show that relationship between DevOps, UX,
and identity security.
And what I realized is that the approach
I was inherently taking,
whether I meant to or not,
was that DevOps somehow had a bit of a responsibility to bear
and that that responsibility wasn't really being taken up.
And what I realized through, once again,
just doing self-examination is that DevOps isn't the problem.
The real problem is that the teams that should be working with DevOps
and UX are not, right?
They're not communicating.
This is kind of what I mentioned before starting all this,
which is effectively that there are silos within these organizations.
We are working on the same team,
but oftentimes we don't get a chance to really interact
in a meaningful and impactful way.
So I wanted to talk about some of those blockers
and how really from speaking from my side of the spectrum,
my side of the house,
like what we can do to really bridge that gap, right?
One of the things we kind of mentioned
even before I started this actual talk is
I mentioned DevSecOps, right?
Just kind of threw that out there.
And I said that I didn't understand it.
And what I mean when I say that is that
I can understand the concept of DevOps, right?
You've got operational development
and how that all unfolds in your organization.
I get it.
I'm not a software developer, but I totally get it.
I understand, right?
The sec part always threw me for a bit
because traditionally developers have a very specific skill set.
And I'm not saying that developers
can't be good security professionals,
but I think it is a lot to ask, right?
I think it's a lot to put on the shoulders of developers
and even UX design folks
to then take on this massive requirement
to then bake in security into that process
when historically the organization
from an enterprise perspective
has always treated security as a tack-on in the first place, right?
And so I wanted to really highlight
some of the reasons why that happens.
One of the main reasons that I've seen
is that the organization is trying to save money, right?
They're trying to save money.
They're trying to save time
so they collapse these roles into each other.
And we see it within DevOps,
obviously DevSecOps,
but we also see it in my space as well
in identity security.
A lot of times what that will look like is,
you know, an IAM engineer, for example,
is responsible for now taking on the relationship
between building out structures
within identity security
that will then protect the organization
as well as coding and building custom connectors
into applications like SailPoint, Saviynt,
yada, yada, yada, right?
All the identity stack applications, right?
And once again, in my opinion,
I don't think it's appropriate or fair
to ask the DevOps folks
to own that security responsibility.
It's not fair or responsible
to ask the IAM folks
to take on the responsibility
of the developer's work
that has to go into it.
And part of that,
the reason why I'm calling it out specifically,
is because as identity security professionals,
we're not adhering to the practices of DevOps, right?
We're not looking at CI/CD pipelines.
We're not having those typical stand-ups
and the structure behind it, right?
You hear people talk about it all the time,
like, oh, yeah, we're an agile team
and, you know, we use this methodology
and we're doing all the right things, right?
But ultimately, it doesn't really come out
in the results of what's happening
because they're two very different skill sets.
And I think instead of trying
to collapse these things together,
there's definitely a better way to approach it.
One of the things I did want to touch on
before I move on from this,
because I have it in here as well:
I'm a bit of a policy nerd
in the sense that, you know,
there's all types of security controls
that we're trying to implement
within the work that we do, right?
Just generally speaking.
A lot of times that is the lens
by which we look at the work that we do.
Generally speaking,
we know about separation of duties,
we know about least-privilege access.
I mean, I don't think I'm saying anything
new here.
A lot of us may not actually understand
or know where those things came from.
So around 2002,
there was that massive impropriety
within the financial institutions
that led to the Sarbanes–Oxley Act,
or what we call SOX, right?
And that really was a legal framework
that led to a lot of the enforcement
that we're seeing
within these tech industries now,
within cybersecurity and within DevOps,
where the importance
of having separation of duties and least privilege
is being emphasized
more than it's ever been,
not because it's just, quote-unquote,
the right thing to do,
but specifically because
when someone has more access
to do whatever they need to do,
it opens up the ability
for that person to act
without any level of oversight, right?
So it's difficult to catch
when someone might be doing something
that maybe they shouldn't be doing,
or in a scenario
where they're not doing something
that they should be doing,
but their account gets compromised,
how do you catch it, right?
You've got this one resource.
They have all this access
to be able to do
any number of things
in your organization,
and by the time you catch it,
it's already too late.
Typically the reason why
when we're talking about
these cybersecurity events,
those individuals
who are laying in wait
dormant in the organization
are there for, what,
like six months
before anyone even takes notice,
like between six and nine months.
So yeah, I mean,
it's super important.
I just wanted to call that out
before I kind of move on
to my next point.
And, you know,
as I mentioned before,
this is probably going to be
a topic that comes up routinely
throughout these slides, right,
is that separation of duties piece.
So from an identity perspective,
right, you know,
our job, or I imagine our job
is to build those structures
or provide those structures
that offer the security
within the environment,
the scaffolding, if you will.
Right?
So really,
we should be leaning
into these principles
from a security control perspective.
We should be looking at,
you know,
where is our separation of duties?
Where are those least-privilege,
you know,
practices being implemented?
You know,
how do these things
reduce fraud, waste,
and abuse
within the environments
that we're working in?
You know,
and ultimately,
what I've noticed
is that we end up
just duct-taping identity
into these delivery pipelines
for DevOps
without really understanding
and knowing
why it's important
or,
in many cases,
what we're even doing.
I'm just,
I'm going to call it out,
right?
Identity security professionals
are not developers.
We don't have
the knowledge and skillset.
I think there's definitely
an overlap.
There's some people
that are potentially
really good at it,
but when your focus
is on one specific thing,
it's the law
of diminishing returns,
right?
You're going to sacrifice
in other places.
So,
one of the things
that really resonated
with me
when I was putting this together
is how much security
is failing
without our experts,
right?
So,
I can come
and I can speak
about identity security
and I can speak
about NIST 800-53
or like any
of these other
robust controls,
zero trust,
which I know
is pretty much a meme now,
but these things
are things
that organizations
are definitely chasing after.
They want to achieve it
because they understand
that at some conceptual level
they offer
a level of assurance
that things
are going to operate
in a way
that they need to
to offer continuity
to the business operations,
right?
That's ultimately
what we're talking about.
The problem is,
with the cybersecurity folks,
including identity,
running these programs,
we're running them
without bringing
DevOps into the room,
without giving them
a seat at the table,
without giving UX
a seat at the table,
we miss out
on so much value,
right?
We miss out
on the value
of really building
a continuous program
across the spectrum.
We miss out
on the value
that inherently UX brings
because once again
we're talking about
how the users
interact with the data,
right?
If we're talking
about an identity,
if I'm just
isolating it down
to a human identity here,
just to make it simple,
we're talking about
how those individuals
interact with the data
in the environment.
If we're not talking
to the UX team
and understanding
the flow
of these working sessions
within individual teams
and groups,
we're not talking
to them to find out
how we're modifying
the actual platforms
they're interacting with,
whether it be
ServiceNow
for a particular system
or if we're going
to open up SailPoint
and let people
use it directly,
understanding how
you want to funnel
that traffic
through that experience,
then what are we
really doing?
We're missing out
on so much value
from not having
those conversations
and of course,
I've hinted on it
quite a bit throughout
with the DevOps side.
We talk about DevOps
in the framework
of an identity
and it's like,
"we just want to
integrate these
two applications"
and if we're talking
about modern apps,
sure, no big deal.
You can do an app
registration in Azure
or AWS or GCP
in five, ten minutes.
Pretty simple.
But the moment
you start talking
about legacy applications
which makes up
a large swath
of the platforms
in the organization
then we're talking
about legacy
communication protocols.
We're talking
about SAML.
We're talking
about cookie-based
headers, right?
I don't know
anything about that.
Not really, right?
Not in any meaningful way.
I can't open up
an application's code
and say,
oh, well,
here's where
this is happening
and here's how
this is working.
I'm not going
to pretend
like I do either, right?
So we need
to be bringing
in these other
professionals
into the space
with us
so that we can
collaborate
and actually come
to the table
with a sustainable
and resilient solution.
Otherwise,
we're building on SAML
and it's just not a great idea.

I know I've talked
about this quite a bit,
inviting DevOps and UX
to the table
and having these
conversations
and I know
for some
it might seem
like I'm saying,
"oh,
just tear down
all the silos,
right?
Let's bring it all down,
let's be one
big happy family."
I'm not arguing
against silos.
I don't think
silos are inherently
a bad thing
and that is
a bit of a change
of heart
that I've had
over the years
of my career,
right?
And you just think,
"oh,
silos are terrible,
we have so many silos,
this is the worst
thing ever."
Silos aren't bad,
right?
The problem is
when you don't have
any kind of feedback
loops,
when you don't have
interconnectivity
with those silos,
right?
When you don't have
any interoperability
with these other teams
and so that's
what I'm really
pushing for
from my perspective.
It's something
that I do all the time
when I'm on these projects
and I'm working
with these organizations,
I want to get
a line of sight
into what other teams
are doing.
I want to understand
how they operate
with what they're doing,
how they're doing it
and I want to understand
how the work
that I'm doing
in identity security
can add value
to what you're doing,
right?
I think a lot of times
we tend to think
about it from the sense
of we have a mission
to partake in,
we have a mandate,
we have to perform
these actions
and that's the reason
why we're going forward
but we don't really know why
and I think that
a lot of times
we face friction
because the individuals
that we're talking to
don't have the context
and we don't
give it to them,
right?
So instead,
going in and having
that more empathetic lens
essentially,
right,
and understanding
how we can add value
to somebody else
opens up that door
so that we can have
a much more meaningful
conversation
and get people
to actually work with us
willingly
versus trying to
fight tooth and nail
and pull teeth
to be able to get
the things
that need to be
done done,
right?
So,
I have to throw
another meme in here of course.
What does balance look like?
What does this all look like?
So, in my vision,
the IAM analyst
would be working
with the UX team
for example
very closely.
They would,
they would study behaviors
from users
to be able to improve
that user experience
without weakening
the security controls
that we're trying
to put in place.
The IAM engineers
would work directly
with DevOps.
They would collaborate
on custom access patterns
for legacy environments
and legacy applications
within the enterprise.
And then IAM architects
would really work
across all of these streams
to ensure that
the strategic vision
is being aligned
to and followed,
right?
and that ultimately
the people
who are directly
impacted by these changes
are going to have
the best possible experience
without sacrificing security.
I know that's often been
one of those things
at least in my experience
where this perception
has been that
well,
convenience and security
operate on two opposite
ends of the spectrum
and you can't have both.
I disagree.
I vehemently disagree.
I think as technology
continues to advance,
as we continue
to understand
the improved processes
that are really
unlocked by
those technologies
we're able to achieve
a much more
synergistic relationship
between the security
and the convenience,
right?

Question
"In UX there,
would you call that
DevX, Judy?
What, DevX?
Yeah.
That's different than UX.
DevX is developers
and UX.
So is this UX
on your previous slide,
is it developer experience?"

It is user experience.
"User experience.
It is actual end user.
Yes.
Okay."

Yeah, without weakening.
So it's making it easier
for your users.
Right.
I understand.
Yeah, and so
I'm actually glad
you asked that question
because it brings forward
a point that I didn't
even think about
until just now.
Part of the struggle
that we all face,
especially from
a cybersecurity perspective,
is the end users
being often referred to
as the weakest part
of the infrastructure,
right?
They're always going to find
some kind of way
to undermine security policies
and controls.
And so the idea
is that we make it easier
for them to align
with the things
that we want them to do
so that way
they're not going
out of their way,
whether intentionally
or accidentally,
breaking security controls.
Right?
So that's the importance
really of that trifecta.
All right.
So I wanted to touch
on this as well.
And once again,
I don't work in DevOps,
so forgive me
if I get this wrong.
But from my experience
and from the work
that I've done,
it seems as if DevOps
works in a very highly
sensitive environment,
just generally speaking.
if anyone were
to gain any
of your access
here in the environments
that you work in,
someone could absolutely
just create massive amounts
of terror,
generally speaking.
I would say
that the information,
the data
that you're handling
on a daily basis
is highly sensitive,
right?
It's what we call
privileged data,
right?
A lot of times
you may have
a separate account
that you leverage
in order to access
this information,
not your basic user accounts,
right?
And so this is really important
for identity security practitioners,
and not really
as an organization,
but as a culture
of identity security professionals.
We have to really understand,
right?
We have to understand
that the way we protect
that information
and the way we approach
these conversations
has to be very different,
has to come
with a certain lens of,
situational awareness,
right?
And so it's really important
to understand
that level of risk
that comes along
with managing
and having access
and control
over these sensitive environments,
this sensitive data.
Effectively,
what I'm saying is
this is not a DevOps problem.
DevOps didn't fail
at any point
in this process, right?
The issue was not
with DevOps.
There's been a number
of high-profile
cybersecurity incidents
in the past,
I don't know,
five years, right?
Use that as a barometer.
I think
with the LastPass incident,
someone did get access
to a developer account
and they were able
to use that
to exploit
and create a security
incident breach
around that.
That wasn't
a DevOps problem,
right?
That was an identity problem
that the identity team
in that case
did not do
the most they could do
to ensure
that the DevOps team
was working
as safely as possible,
right?
And this is just
one of the natural gaps
that comes out of
these scenarios
when cybersecurity doesn't work
with these DevOps teams.
They just try to come down the mountain
with their stone tablet
and give commands
and there's a real
danger behind it.
"Do you work
in a regulated environment?"
Oh, extremely regulated.
So you have something
like FedRAMP.
Correct.
Okay.
Yeah, yeah.
Yeah.
Kind of interesting
where you go over that,
but we can talk
about it later.
It's fine.
That's fine.
So, you know,
once again,
re-pitching this entire
talk instead of
looking at how
DevOps can do
something different
and improve what
they're doing,
I'm taking it
from a very
different approach,
right?
What can we do
to serve the
DevOps community
better?
What can we do
to actually help
the DevOps teams,
the UX teams,
in the missions
that they're
trying to accomplish?
I understand that
what I'm talking
about implicitly
creates friction.
That is not the objective.
I also understand
that in order to
have any type of
forward movement,
momentum, progress,
there's going to be
disruption, right?
The status quo
is called the status
quo for a reason,
and we're trying
to interrupt that
so that we can
have something
better, right?
The biggest thing
that I'm thinking
about is how we
can help reduce
risk and make
everyone's life
easier, not just
DevOps, but of course
that's the focus,
right?
But everyone's
life easier, right?
I talked about
before, the end
users being in a
position where
they're required
to operate
in a certain
kind of way.
Those policies
are there for
users: those
user access
controls, the
acceptable use
policies, all
those things
exist to be
able to define
how the
individuals within
the organization
are going to
operate, right?
How do you
expect them to
behave?
But these things can really only
happen through
having clearly defined rules,
by having
structured paths for access, and
identity signals
that then fuel
automation, right?
And not just
noise.
And what I mean
when I say that
is not what we
think should
happen, but what
we are measuring
from what we can
observe in the
environment, right?
Bottom line.
DevOps, UX, you
guys are working
within a huge
blast radius.
The work that
you're doing is
oftentimes, in my
opinion, underrated
and not given
enough credit,
right?
They're kind of like
music producers.
The artist gets
all the glitz, the
glamour, the concerts,
the shows, the music videos,
interviews on TV.
The producers
are really there
making the magic
happen in the
background.
But you don't
really ever get
to see them,
right?
And so, you
know, I don't
think anyone
really gets in
DevOps to be a
rock star in a
traditional sense.
However, you
know, we want to
make sure that
given the space
that you're working
in and given
the natural risk
and liability that
comes along with
it, that you're
protected, right?
We want to be
that, well, I
imagine that we
want to be kind
of like that
insurance policy
for when you're
in these
environments and
you're working
in these highly
sensitive areas and
you're encountering
something and
something goes
wrong, right?
The first thing
that's going to
happen if an
account gets
compromised, whether
we're doing DevOps
or not, is that
there's going to be
an investigation
or there should
be, I would hope.
Right?
Typically, the first
thing that's going to
happen is someone's
going to say, well,
you know, this person
who works in our
organization who has
this responsibility
is likely the
culprit.
That's typically
where you start
until you start
digging a little
bit deeper and
you understand
like, oh, there
are signals that
are showing us
that there was
an account
compromise or
the IP address
that logged into
that account came
from
some other country
in a different
time zone and it
gives you more
insight on what
may have happened.
So the idea here
is that I'm
pitching that identity
works much more
closely with some
of these internal
teams like DevOps
and UX.
so we can provide
some level of
commensurate cover,
right?
Some level of
"acceptance."
What's that?
"Acceptance."
Yes.
Acceptance is a good
way to put it.
Ultimately, just
cavalry, right?
You know?
I think we all need
support in different
ways and I think
that's just one way
that identity security
is a bit of a
support system for
these internal
teams.
Yeah?
"And when you say
identity security,
signals, it's more
like audits, right?
Doing an audit log
and using things in
that regard.
Is that what you
mean by signals?"
Yes.
That is one key
area where your
signals typically
are captured,
right?
And when you say
audit, I'll just kind
of share from my
perspective.
What I typically
think of is, you
know, a regulatory
body is coming into
an organization and
they're checking to
see that, you know,
certain controls are
being met, right?
That certain controls
are being satisfied
with regard to what
they are required to
do legally.
And I kind of put that
emphasis on it because
many private
organizations, they
are not required to
align to security
control frameworks like
NIST 800-53.
That's completely
optional.
But there are still
legal frameworks that
they have to align to
that overlap with those
controls.
"Okay."
Part of that as well,
and we talk about
signals is, you know,
I'm sure everyone here
is familiar with SIEM,
to some degree, right?
Yep.
SIEM?
Yeah, "seem", "sim".
I call it "seem",
We'll go with that
one, we'll SIEM, right?
And so, you know, it's
collecting this information
from the environment and
really capturing that
telemetry.
One of the things I've
noticed in my personal
experience is that
sometimes those SIEM tools
like Splunk, for example,
if they're not configured
in a way where they are
looking for specific
events, then the team
that is managing the
outputs doesn't even know
that something has
happened.
I'll give you a really
good example.
"That's very true.
You're not looking.
You're going to miss it."
Exactly.
And I'll give you a real
world example.
So I was working with a
client, very large
infrastructure client,
global client, and they
had their SIEM set up, and
they brought me in to
help them to redefine how
they were using MFA.
They wanted to go to a
phishing-resistant MFA
solution.
So one of the things I did
was I said, hey, look,
I know you've got the
SIEM set up.
Let me just see the
logs coming directly out
of your MFA solution.
Yeah, right.
Now, right, because
they're saying, "oh,
everything is great.
We have no problems here,
nothing to see.
Keep it moving.
Let's go."
And of course, I'm just
like, that sounds
suspicious.
So I started looking at
their logs for their
existing MFA, and lo and
behold, I found some
users that were getting
multiple push
requests from the MFA, so
basically requesting an MFA
authentication, and that
there were about 20
requests within a single
minute of time.
"Yep."
And so I found out from
them that what they were
looking for in their SIEM
in Splunk was if there was
a failed login attempt.
That's what they were
measuring against.
Oh, yeah.
And what they didn't
realize is that if there's
a failed login attempt,
the likelihood of that
being a signal of a threat
actor is very low.
In most cases, it's
probably going to just be
someone that just didn't
get it right, right?
So if you have multiple
attempts that are
unsuccessful, and then they
end in an event that was
successful, you're not
going to be looking for
those, and those are going
to be the signals that,
typically, when I refer to
signals, that's kind of
what I'm talking about,
right?
Yeah, right.
Thank you for that.
Yeah.
Yeah.
Yeah, very good.
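
A minimal sketch of the kind of signal hunting described above, assuming MFA events have already been exported from the SIEM with per-user timestamps and results; the field names, thresholds, and sample rows are illustrative, not from any specific product:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# One record per MFA event, exported from the SIEM / MFA provider.
# Field names and values are illustrative.
events = [
    {"user": "jdoe", "time": "2025-07-24T09:15:02", "result": "denied"},
    {"user": "jdoe", "time": "2025-07-24T09:15:05", "result": "denied"},
    {"user": "jdoe", "time": "2025-07-24T09:15:41", "result": "approved"},
    # ... many more rows pulled from the MFA solution's logs
]

BURST_THRESHOLD = 10           # pushes per user per minute worth investigating
WINDOW = timedelta(minutes=1)

def parsed(rows):
    """Parse timestamps and return events sorted by time."""
    return sorted(
        ({**e, "time": datetime.fromisoformat(e["time"])} for e in rows),
        key=lambda e: e["time"],
    )

def find_push_bursts(rows):
    """Flag users who receive an unusual number of MFA prompts in one minute."""
    by_user = defaultdict(list)
    for e in parsed(rows):
        by_user[e["user"]].append(e)
    alerts = []
    for user, evs in by_user.items():
        start = 0
        for end in range(len(evs)):
            while evs[end]["time"] - evs[start]["time"] > WINDOW:
                start += 1
            count = end - start + 1
            if count >= BURST_THRESHOLD:
                alerts.append((user, evs[start]["time"], count))
    return alerts

def find_denials_ending_in_approval(rows, min_denials=3):
    """Flag what a failed-login-only rule misses: a run of denials that ends in an approval."""
    by_user = defaultdict(list)
    for e in parsed(rows):
        by_user[e["user"]].append(e["result"])
    flagged = []
    for user, results in by_user.items():
        streak = 0
        for result in results:
            if result == "denied":
                streak += 1
            elif result == "approved" and streak >= min_denials:
                flagged.append(user)
                break
            else:
                streak = 0
    return flagged

print(find_push_bursts(events))
print(find_denials_ending_in_approval(events))
```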
So it's super important to
just have, once again,
situational awareness.
And sometimes, we may not
always know what we're
looking for, but we have
to be open-minded about
what we might discover by
looking for anything.
Yeah.
"Can I share something?"
No, I'm actually done.
Yeah.
"Okay, so I'm very relevant
to this.
So last week, I was at
Hackathon.
And one of the aspects of my
project was taking, we create
audit logs, right?
Instead of, you
know, just the info and debug
levels, we have a very
specific audit level.
So I fed that into a
database, right?
And then I put an LLM on top
of that.
And it found, it found things.
It was nice.
Yeah.
It was nice to have.
So, you know, it finds
patterns.
That's what it does very
nicely.
So I was, I just shared an
experience that I was
impressed by how well it
worked."
Yeah.
Yeah.
I think that's a, that's a
really good and pretty
poignant experience.
I mean, I have mixed feelings
about LLMs.
I do use them.
Don't get me wrong, but I do
have mixed feelings.
I did, well, I do in many
cases use them, not dissimilar
to what you're referring to,
right?
Yeah.
You know, of course, I do my
best to anonymize the data
as much as I can without
losing the context of what
I'm looking for.
Right.
But it's really good with
being able to find those
patterns.
In the same area I'm talking
about, unfortunately, I did
not even think they'd use
an LLM.
I just, I was just, you
know, raw-dogging an Excel
spreadsheet and looking for
any kind of anomalous
behavior.
And it just so happened
that, you know, I guess it
really depends on how you
are organizing and structuring
your data in the first
place, which is another
whole different topic.
But, you know, I organized
it by date and time.
And so, because I was able
to organize it that way, I
then kind of restructured
it based off of the user.
Yeah.
And was able to see, hey,
you know, we've got, let's
just say, December 13th at
9:15 a.m.
I've got 20 different events
and none of them are
resulting in anyone actually
going and accepting the
authentication request.
That's when you're coming.
Right, right.
Yes.
Yeah.
But that's it.
That's all I got.
I, you know, I hope you
enjoyed it.
Thank you.
Thank you.
Yeah.
Did we have any questions?
I'm more than open to having
a further conversation.
"So you were talking about
defined roles within the
organization.
I think you're talking about
IAM a little bit.
Is your thought to
synchronize these between
the roles that people are
actually filling and their
IAM situation?"
When I tell you I am so
glad you asked that
question because this is
a conversation that I've
had with so many
different people.
And to answer your
question, emphatically yes.
There is a deep
correlation between, for
instance, the role that
someone has within your
organization that they're
being paid to do, the job
description, and how that
all maps back to role
provisioning and whatever
platform you're using,
whether it be Active
Directory on-prem in the
cloud, or even in a
specific bespoke
application, right?
If we don't go into it
with a level of
understanding, and to put
this very simply, I
think, in a way that we
can all understand, if we
don't go into it with
clean data, the results
on the other side are
going to be real dirty.
And that's really what we're
seeing, right?
You know, the way I often
look at it is when I'm
looking at a job
description, for example,
for an organization I'm
doing consulting work with,
each one of those
responsibilities, in my
mind, is going to be able
to correlate back to a
level of access, to some
degree, that you're going
to have to give that
person either in an
application or in
Active Directory, right?
So, in a lot of ways,
they're putting on full
display how mature their
identity security practices
are, right?
If I can see, just by
looking at the job
description, that there are
toxic combinations of role
access, because you're
expecting someone to operate
as both an engineer and
an architect, that's a
problem, right?
Because now you're
giving that person the
ability to not only effect
change within the
organization through the
capabilities of what you're
hiring them to do, but you're
also giving them the option to
design what the system is
going to look like.
So, who has the oversight?
It's a massive conflict of
interest.
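
A minimal sketch of that kind of check, mapping job-description responsibilities to access roles and flagging separation-of-duties conflicts; the role names and conflict pairs are illustrative:

```python
# Map job-description responsibilities to access roles, then flag toxic
# combinations (separation-of-duties conflicts). All names are illustrative.
RESPONSIBILITY_TO_ROLE = {
    "designs the identity architecture": "iam_architect",
    "builds and deploys identity connectors": "iam_engineer",
    "approves production access requests": "access_approver",
    "administers the production directory": "directory_admin",
}

# Pairs of roles that should not land on one person: whoever designs a control
# should not also be the one implementing or approving it.
TOXIC_PAIRS = {
    frozenset({"iam_architect", "iam_engineer"}),
    frozenset({"iam_engineer", "access_approver"}),
}

def roles_for(responsibilities):
    """Translate job-description bullet points into access roles."""
    return {RESPONSIBILITY_TO_ROLE[r] for r in responsibilities if r in RESPONSIBILITY_TO_ROLE}

def toxic_combinations(roles):
    """Return every toxic pair fully contained in one person's role set."""
    return [pair for pair in TOXIC_PAIRS if pair <= roles]

posting = [
    "designs the identity architecture",
    "builds and deploys identity connectors",
]
conflicts = toxic_combinations(roles_for(posting))
if conflicts:
    print("Separation-of-duties conflict in this posting:", conflicts)
```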
"You know, I agree with
that so much, and the
industry, at
least where I'm sitting,
has moved away from that.
You know, you have an
architect.
Architect is design, right?
You have a developer, a developer
does the engineering work
and stuff, right?
And, and operations.
I used to love those
separations.
And then, I think kind of
what you alluded to is you
put all that together,
then there's really, you
lose the responsibility.
And then you go and you
ask the executives, well,
whose responsibility is it?
It's everyone's.
Yeah.
And what a terrible answer."
Yeah, yeah, yeah.
I think, I think you can
have a shared responsibility
model where, where another
person is perhaps informed,
right?
Kind of like, you know,
what they did in New York
after 9/11.
You see something, you say
something, right?
If you want to take it from
the other approach, I think
it's totally fair, right?
But to say that everyone
is absolutely responsible
for it, it just
creates so many different,
potential areas of risk.
Right.
Um, one of the things I
really appreciate to your
point is that, I think
it's NIST that has the NICE
framework.
I don't know if anyone's
ever heard of this.
No.
It's called the NICE
framework.
And what it does is it
breaks down these,
quintessential functions that have to
exist within an organization,
um, against these different
types of roles.
And it's, it's
incredible.
I mean, it's really well
thought out.
It's not
perfect, but it's, it's
pretty close, in my opinion.
And a lot of times if I'm,
if I'm doing, um, any
kind of, like, uh, role
mining activities, right?
Um, I will, I will, in
many cases, use LLMs.
I'll say, like, reference
this NICE framework.
Tell me who should be
responsible for these
things.
And they'll give me
a matrix of,
where that individual should
be and where that role
should be bifurcated and
given to other individuals
within the organization.
Is that like a RACI chart?
It is the foundation to a
RACI chart, I would say.
Right?
I think, I think it's
a much easier way to
build a RACI chart.
Because RACI charts are
typically very cumbersome
to build.
Yeah?
Question.
"At the beginning, you
were talking about
DevSecOps, right?"
Mm-hmm.
"Having that core
conversation, you want
to throw security in
there as a developer.
Yeah.
So, I think, and now I
encounter this a lot in my
work, they're like,
we'll put the security
in afterwards, and that's
what you don't want to
do.
Right.
So, like, when you buy
a house, right, you
might not want a
bathroom, but you're
going to get the
roughing, because if
you're going to need a
bathroom later on, it's
going to be a
couple of more work,
right?
Yeah.
And I think that's
really what we need to
do, is we need to be
prepared to add that
security, slash,
compliance, slash,
whatever in there.
I don't think you want
the developers, because
they are not going to,
I had just yesterday,
somebody came and
said, oh, we checked
off this box, we got
this to work, because we
turned off the
security control"
Oof!
Oof!
"But they're not going
to put it in production,
but it was the first
step.
But it's always the
first thing they do,
right?"
Right, right, right.
"So we have to, you
know, continue to keep
that in mind, and I
think that's why the Sec is
stuck in the middle of
DevOps, just because, just
like with DevOps, you
don't want to put
everything off until
later."
Yeah.
"Well, that's the
reason why I like
FedRAMP in this
regard.
If your developers and
your team go through
it, and you know what
you've experienced in
the past, by God, you
will build it back in
because you're going to
go through it again."
Yeah.
Yeah.
"I'm sorry.
I don't, I think it is
very much so.
The developer's
responsibility from the get-go:
performance, security,
features, test it all.
Yeah.
You are responsible.
I don't care.
You may have the realm
of users, great.
I'm fine with that.
Just vote me out.
Vote me off the island.
I'm happy with it.
But I still have to, at
the end of the day, I
have to give the systems
structure, and therefore
things like Terraform.
I want every developer on
the team to see what the
Terraform is, and when
they log in, in the
morning, they run the
Terraform, and they
ensure that the system
has not changed."
Yeah.
"And I'm sorry.
You want a change in
it?
We're doing this in
dev.
We're going to change
the Terraform.
We're not coming in
after all the metrics
and everything.
You have to do it then
and there, and move
forward with it into the
next environments."
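
A minimal sketch of that kind of morning drift check, assuming Terraform is installed and the working directory has already been initialized; the directory name and wrapper are illustrative:

```python
# `terraform plan -detailed-exitcode` exits 0 when the live environment matches
# the code, 2 when it has drifted, and 1 on error. Paths are illustrative.
import subprocess
import sys

def check_drift(workdir: str) -> bool:
    """Return True if the environment still matches the committed Terraform."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print(f"{workdir}: no drift")
        return True
    if result.returncode == 2:
        print(f"{workdir}: drift detected\n{result.stdout}")
        return False
    print(f"{workdir}: terraform error\n{result.stderr}", file=sys.stderr)
    return False

if __name__ == "__main__":
    # Run per environment before starting work for the day.
    sys.exit(0 if check_drift("envs/dev") else 1)
```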
I agree with you.
I genuinely agree with
you.
I think part of, you
know, my take on the
matter, especially when I
say, you know, hey,
I think security should
be brought in early in
that process, right?
I think just kind of,
Judy, using your, like,
framework of an example,
it's not quite there, but
I moved to Baltimore back
in December, and when we
bought our house, we had,
of course, a home
inspection done, right?
I think that in that same
framework, the security
folks can be, like, the
home inspectors, right?
They can come in and
assess.
Maybe they don't do it
themselves, but they can
provide feedback, right,
at those necessary
feedback points and say,
hey, look, you know,
these controls are
effective or they're not
effective.
"But think of it this
way.
If you're building the
house or you're selling
the house, you are aware
of whether you can get an
occupancy permit or whether
you have violated code
because you wired something
or put in some plumbing
that wasn't inspected.
I'm sorry, you have to live
with it.
That's why I'm saying that
all the people, anybody who
has had a FedRAMP experience
pushing something, especially
Kubernetes through
FedRAMP, this is what I love
the most.
You have to live it and
understand you're going to do
it now."
Yeah.
And once again, I agree with
you.
I've been, unfortunately, I've
been in federal environments
where, this is very recent,
so it's a little sore, where,
for instance, the SSPs
(System Security Plans)
are missing information about the
boundaries of an application.
Yep.
Right?
That application in question
should never even get an ATO
(Authorization to Operate)
if that SSP is not providing
that information to them.
Right?
So I think what you're saying
is absolutely correct.
Right?
I think it, and having that
additional layer, it provides
a level of assurance.
Right?
"Well, because of the fact, this
is why, if it starts to get
kicked away, if the test is
being removed, if the identity
is removed, these are types of
things that you're not going to
pass going into a FedRAMP.
They're going to audit it, they're
going to find it, and you're
going to get busted.
I'm sorry.
And then the client's going to
be unhappy because you're not
making the deadline on time."
Yeah.
Yeah.
Now, you're absolutely right.
And I think there's some
additional complexity when it
comes to, and this has been a
bit of a newer trend within
identity, is the scope of
identity security is expanding.
Right?
So it's going beyond just user
identities.
Right?
So that's been like kind of the
focal point that I chose to
align here because it's something
that we can all kind of grasp
pretty easily.
But non-human identities are
becoming increasingly within
scope for identity security
teams.
And so I'm kind of curious how
all of that is going to take
shape given exactly what you're
saying.
Right?
I don't disagree with you at all.
But I think it's going to cause for
some level of practical changes to
occur.
"So can we expand that conversation?
Yeah.
Because, uh, so we, we, we built
security, at Virtru, we built
security platforms.
Mm-hmm.
And we have the concept of
person entities and then non-person
entities.
Correct.
And we've built that into our
system.
It's, it's, we've been doing it
seven years, seven years.
So, um, we handle that case.
And we're able to handle that.
Can you expand
on, like, what issues you're seeing?
I'm trying to fish some stuff from you.
So if you're, if you can just
give some high level things or
something, but what happens is
this becomes important in the AI
world, the new agent world, right?
This is really what it comes down
to, you want identities for your
agents and you want very specific
access controls on them.
And for many, many reasons you
want to do that,
that's my goal, right?
This is what I'm thinking
about right now.
So if you could just discuss
around that."
So, so from the entry point,
I'll say it this
way: identities, as they
exist within the digital spectrum,
have existed for over 20
years.
That, I don't think there's a
question on that, right?
And that isn't just human
identities.
It's all identities.
It's human and not human.
we talked about it
through a different lens a few
years back.
So we, we said like things like
IoT, right?
It was the, the language that
was available for most people
at the time to be able to talk
about non-human identities.
Yeah.
One of the
practical, uh, struggles that I
see a lot of organizations
grappling with currently is the
idea that they did not consider
non-human identities before,
right?
Like that's just the initial
entry point is now they're
having to recontextualize
everything they've been talking
about and dealing with up at
this point under this new lens
of, oh, wow.
So we're talking about
manufacturing.
We're talking about IoT.
We're talking about all these
different things.
Now we're, we, we have to
recontextualize and look at the
lens of these are non-human
identities.
When we're talking about, um,
APIs, right?
We talk about LLMs.
And when I say this,
I probably need to
provide some context here, so
bear with me.
An LLM, really like
any other non-human identity
type, when I, when I say they're
an identity, I don't mean that
they inherently require
a credential,
like a username and password,
because these are non-interactive
systems, right?
In the sense that, like, from the
back end, they don't type in a
username and password.
Right, right.
Um, they're using API calls in
many cases to be able to
interact with other non-human
systems.
Right.
And so all of these things
become non-human identity.
And then once you understand it,
like, all of these things are
non-human identity, it starts to
kind of, like, expand.
It's like that meme where the guy's
like this, like, it just explodes
your brain around, like, oh,
wow, so now we're responsible for
all of these things.
And now we have to take a
different type of approach.
How do we even do that?
Right?
Like, how do we even begin to
reel this in?
And I'll tell you the, the,
the first step, as with anything,
you can't protect what you don't
know, right?
What you can't see.
Uh, so the first thing you want
to do is you want to be able to
have an accurate inventory of both
your human and your non-human
identities.
In many ways, the, the
framework by which you want to
protect these things should be
the same, right?
The framework.
I'm not saying the exact
approach.
The framework should be the same.
Because, and I use this
analogy, I had a friend of
mine a few years back, we were
talking about, she was in
pen testing.
And, you know, she's explaining
to me a little bit about that
because I'm not, I'm not an
expert in that either.
I know a little bit about that.
I've done some things.
we talked about,
finding vulnerabilities in
websites, right?
And, and using cross
site scripting, for example.
And, and the analogy I used even
then, this is, I believe,
maybe more than five years
ago, um, is
that in a scenario where you're
finding vulnerabilities with
cross-site scripting, in many
ways you are, functionally, uh,
assuming the identity of the
web page itself, right?
Uh, the web page doesn't hold
the information, the web page
talks to a server or database,
captures the information, and,
and retrieves it and serves it
to the user.
So, when you have cross site
scripting in that scenario, and
you're finding a vulnerability
where you can extract that
data, you're basically telling
that, that database,
"hey, I'm a, I'm a trusted user.
We're just a website.
We work here all the time.
We're buddies.
We're friends.
You can give me this
information.
It's totally fine, right?
You, you know me."
And it does.
And it absolutely does, right?
And so, in many ways, the, the
framework, like I said, it, it
should be the same with how we,
we, we approach those things.
Um, you want to, you want to,
you want to be able to monitor
and have that governance.
You want to understand the, the
inventory of those identities on
both sides of the spectrum first,
and then depending on what type
of non-human identity you're
dealing with, depending on the
capabilities and the level of
access for the data that, that,
that non-human identity can
interact with, it's going to
foundationally change how you
approach it, right?
Maybe we're talking about
breaking down into another
subcategory.
Is it just core data that
is relative to your
base identity and access
management, or is it
privileged data that has
access to it?
In most cases, it's
privileged data, right?
Um, and so, okay, well, now
we don't want to just keep it
in the, the gen pop (General Population) of
identity, right?
We want to keep it behind a, a
vaulted, uh, platform, like a
CyberArk or a BeyondTrust, right?
And we want to be able
to have, uh, cycling passwords
or provisional accounts or
some other way of being able
to protect those applications or
those services.
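
A minimal sketch of that first inventory step, classifying human and non-human identities and routing the privileged non-human ones toward vaulting and credential rotation; the fields and routing rule are illustrative:

```python
# Step one: an inventory that covers human and non-human identities alike, so
# privileged non-human identities can be routed to vaulting and rotation.
# Field names and the routing rule are illustrative.
from dataclasses import dataclass

@dataclass
class Identity:
    name: str
    kind: str          # "human", "service_account", "api_client", "agent", "iot"
    privileged: bool   # touches privileged data or admin-level APIs
    owner: str         # the accountable human or team; every identity needs one

inventory = [
    Identity("jdoe", "human", False, "jdoe"),
    Identity("build-pipeline", "service_account", True, "platform-team"),
    Identity("billing-api-client", "api_client", False, "finance-apps"),
    Identity("support-agent-llm", "agent", True, "cx-engineering"),
]

def needs_vaulting(identity: Identity) -> bool:
    """Privileged non-human identities go behind a vault with rotated credentials."""
    return identity.privileged and identity.kind != "human"

for identity in inventory:
    route = "vault + credential rotation" if needs_vaulting(identity) else "standard IAM governance"
    print(f"{identity.name:22} {identity.kind:16} -> {route}")
```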
"So here's the thing.
When you talk about these
things, I'm going to give
you a specific solution, an
example of how a project I
had that I was working on had
to resolve what you're talking
about.
Because let's say you have the
trusted application and the
trusted application is talking
to the database.
Well, the trusted application only
has so much scope, so much
scope.
This is true.
And therefore, you then have
to put in a fence in the
database to not allow the
application to escape its
role.
Correct.
And this is where you use
something like a database
vault.
Yeah.
Or an audit vault.
Yeah.
Both of these things have to
exist because you still have
to have a DBA in there.
You are asking.
But just like the same old
story, someone looked up
Obama's birth certificate.
Who looked at it?
We need to know.
That should be in the audit
log.
Yeah.
If you go outside, you do
something else, you know, for
the application, we can fence
you.
For everything else, we
audit you.
Right.
All I'm saying is that is one
aspect of one of the things
that you lock down, and
therefore, the application, yes,
you're trusted.
But if you do some type of
SQL injection, someone screws
up, writes bad code, I don't
care.
It's still fencing.
You can't go wherever out the
fence has been put into the
database.
So it's like block and
tackle.
You have to block it up.
So you have to, like, put
everything in.
And so the same thing about
even services.
Yeah.
Relational services.
I don't care whichever one
you want to use.
We always had to have
identity in it.
Yeah.
Whether it is a server or it
is a user.
But the user mostly is the
one who's going through it
because any time when we
log in a user, they're going
through a front end.
If they're going through a
queue, they're going through
a database, they're going
through something else.
They are being audited on
every step, and they're
going to leave a finger
print because the cloud
watch or whatever thing
you want to put in there
has to see it.
Yeah.
No, you're absolutely
right."
You're absolutely right.
And it's funny because
what you're mentioning kind
of ties back to what you
asked before, Julian, around
role scope, right?
You said it.
It's a scope.
And so when I mentioned
having a similar framework
for how you manage the
user identity versus
the non-human identity,
that's a really good
example because you have
to have a really tightly
defined scope around
their non-human identities
to ensure that you have
those, uh, that, that
control, the bifurcated
control around these,
these services.
You're absolutely
correct.
Yes.
"But these are the things
that I always get stuck
into.
Yeah.
I don't like thinking of
myself as an architect.
I don't write code.
I don't want to do this.
I'll set something up and
we'll put it in place.
But I swear to God, every
time I get in somewhere,
and this is where we go to
the user and say, I want
to know what is your core
amount of testing.
Percentage-wise, will you
accept to put this thing
to the next level?
Right.
And they have to give me a
number.
Yeah.
A number must exist.
And then we have to then
go through it and find all
the public access points and
say, what percentage do you
have of tests?
Each one of these things
has to be counted up.
And that's part of DevOps,
whatever you want to call
it.
And that's part of the
story that you have to
tell the client because
the client has to pay for
it.
He has to know why is he
spending so much time here
because at the end of the
day, I want him to push
the button, to push it
into the next system.
Not me.
Deniability."
Exactly.
Exactly.
Exactly.
I'm not the decision maker.
I'm just carrying it out
for you, right?
And honestly, I love it
because even hearing you
describe these things, I
imagine you and I are
two sides of the same
coin.
I want to do the
development work.
I want to do the
architecture.
I want to decide the
solution, right?
I want to code.
I want to do that.
I just want to have an
opinion at some point and
say, why are we doing
this?
So it's like, I want to do
all the things you don't
want to do.
But I think this is the
reason why there's a lot of
value, too, in that
cross-collaboration.
That's a great discussion.
This has been fantastic.
I didn't think I was going
to enjoy this as much as I
did.
Oh, yeah.
No, it's great here.
Yeah.
Yeah.
We have good discussions.
Yeah.
Yeah.
This has been great.
This has been fantastic.
Awesome.
All right.
Well, thank you again.
Yeah.
Yeah.
Yeah.
"I have a little story to
share.
So I was at my work, they
hired a team outside
corporation to try to hack
our system.
Oh, okay.
So I, yeah, yeah, it was
really cool.
So we were holding this big
event for something, D.C.
Tech something.
It's for startups.
But I replied to them, you
know, promoting ourselves in
the event.
Said, I'll attend.
I hope you will, too.
Well, this set off a red flag
for this other team.
So they went after me for
like, must have been like
two months, trying to hack
my account.
They were sending me
phishing emails and, and, uh,
links.
And, um, it was quite an
experience.
Hmm.
And, yeah, I was reporting
it, and we were flagging it,
and we were reaching out to
people and all this stuff.
And I was like, what is
going on?
And then, in a, at a
December event, company
event, they were like,
yeah, we hired this
company to hack us.
And they put my picture
up on the.
That's funny.
That's funny.
So, um.
The most diligent.
Yeah, yeah, yeah, yeah.
So when, when you were
talking about that, I
forgot about that.
Happened.
But, um, I just
want to show it.
It's, it's quite, it's
quite an event when
I was about to
they sent us
to this website.
Aight, we recorded all
of that.