Thought Leadership

From Identity Analytics to AI: My Journey into LLMs

Written By
Colton Chojnacki
Published On
Feb 4, 2025

Like many of you, I've watched the explosion of AI and Large Language Models (LLMs) over the past couple of years with a mix of fascination and skepticism. Working at Beyond Identity on Access360, our identity analytics platform, I kept wondering: How could this technology transform the way we interact with identity data?

It started with a pattern I noticed in our customer feedback. We'd build specific visualizations and reports in Access360 to answer questions like "Show me all privileged users who haven't authenticated in 30 days" or "Which applications have the most security policy violations?" But as soon as we solved one question, three more would pop up. Each new query meant new feature development, new UI elements, new filters. The questions were endless, and they should be – identity practitioners need to be curious about their security posture.

That's when it hit me. What if instead of building specific features for each question, we could let people just... ask? I'd been using ChatGPT for everything from coding help to writing documentation, and the experience was transformative. You simply ask what you want to know, and it responds intelligently. Could we bring that same natural interaction to identity management?

As a note, I know one of the first questions people have when thinking about AI in identity and security is around privacy. That topic deserves a dedicated discussion, so I'll address it in a separate post. Here, I'm aiming to outline a few use cases I've been validating and share some early insights.

Why LLMs Matter for Identity Management

The more I dug into this idea, the more exciting it became. Identity and access management systems are complex beasts. They're filled with rich data about users, permissions, authentication patterns, and security policies. But extracting meaningful insights often requires:

  1. Understanding complex query languages
  2. Knowing the exact schema of your data
  3. Building specific reports or dashboards
  4. Writing custom scripts for one-off analyses

LLMs could potentially change this paradigm entirely. Imagine asking questions as naturally as you would to a security analyst:

  • "Who has access to our AWS production environment that hasn't used it in the last quarter?"
  • "Show me all contractors with persistent access to sensitive data"
  • "What unusual authentication patterns have you noticed this week?"
  • "Which teams have the most standing privileged access?"
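To make that idea concrete, one common pattern is to hand the model a schema hint alongside the analyst's question and ask it to translate the question into a structured query. Here's a minimal sketch of the prompt-building side; the table names and schema are illustrative assumptions, not Access360's actual data model, and the completion call itself is left out since it depends on whichever LLM API you use:

```python
# Minimal sketch: pair a natural-language question with a schema hint so
# an LLM can translate it into SQL. The schema below is hypothetical.

SCHEMA_HINT = """
Table users(email TEXT, role TEXT, last_login_days INTEGER)
Table access_grants(email TEXT, resource TEXT, granted_at TEXT)
"""

def build_query_prompt(question: str) -> str:
    """Combine the schema hint and the analyst's question into one prompt."""
    return (
        "You translate identity questions into SQL.\n"
        f"Schema:\n{SCHEMA_HINT}\n"
        f"Question: {question}\n"
        "Respond with a single SQL query."
    )

prompt = build_query_prompt(
    "Show me all privileged users who haven't authenticated in 30 days"
)
print(prompt)
```

The interesting design choice is that the schema travels with every question, so the model never has to guess at column names, which is where most text-to-SQL mistakes come from.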

Beyond Simple Queries

But here's where it gets really interesting. LLMs don't just parse questions – they understand context, can identify patterns, and make connections. This opens up possibilities far beyond simple data queries:

  • Anomaly Detection: Instead of setting rigid rules, describe the patterns you're worried about in natural language
  • Access Reviews: Have the LLM analyze access patterns and suggest questionable permissions
  • Compliance Reporting: Generate human-readable summaries of your compliance state
  • Risk Analysis: Ask about potential security risks in your current access patterns

Let me share a couple of real examples of how this might work in practice:

Example 1: Identifying Dormant Privileged Access
Query: "Show me all users with admin privileges who haven't logged in for the past 45 days"

Found 7 admin users with no login activity for 45+ days:

  • sarah.mitchell@acme-demo.com (IT Admin, Last login: 89 days ago)
  • james.wilson@acme-demo.com (System Admin, Last login: 62 days ago)
  • dev.admin@acme-demo.com (Development Admin, Last login: 52 days ago)

🚨 Risk Analysis: 71% of inactive admins are from the IT department, suggesting a potential process issue with access lifecycle management.
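Under the hood, the filtering behind an answer like this is simple; the LLM's value is in choosing it and narrating the result. A sketch of that filter, using made-up user records (the emails and roles are illustrative, matching the demo data above):

```python
from datetime import date, timedelta

# Hypothetical user records: role plus last-login date.
users = [
    {"email": "sarah.mitchell@acme-demo.com", "role": "IT Admin",
     "last_login": date.today() - timedelta(days=89)},
    {"email": "james.wilson@acme-demo.com", "role": "System Admin",
     "last_login": date.today() - timedelta(days=62)},
    {"email": "active.admin@acme-demo.com", "role": "IT Admin",
     "last_login": date.today() - timedelta(days=3)},
]

def dormant_admins(records, threshold_days=45):
    """Return admin users whose last login is at least threshold_days old."""
    cutoff = date.today() - timedelta(days=threshold_days)
    return [u for u in records
            if "Admin" in u["role"] and u["last_login"] <= cutoff]

for u in dormant_admins(users):
    days = (date.today() - u["last_login"]).days
    print(f'{u["email"]} ({u["role"]}, Last login: {days} days ago)')
```

The novelty isn't the query; it's that nobody had to build a "dormant admins" dashboard widget before asking.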

Example 2: Detecting Unusual Access Patterns
Query: "What unusual access patterns have you noticed in our cloud environments this month?"

Detected several notable patterns in March 2024:

  1. Unusual Time-Based Access
    • User john.doe@acme-demo.com accessed AWS production outside business hours 5 times
    • 3 failed login attempts to Azure DevOps from unusual locations
  2. Privilege Escalation Patterns   
    • 2 instances of temporary admin elevation lasting longer than standard 4-hour window   
    • Unusual sequence of role changes for user team-lead@acme-demo.com
  3. Resource Access Anomalies
    • Spike in S3 bucket access from development team (400% above baseline)   
    • First-time access to production databases from 3 QA team members

🔍 Recommendation: Review the temporary privilege elevation process and QA team's production access requirements
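One of the checks behind Example 2, the "400% above baseline" spike, can also be sketched in a few lines. The resource names, counts, and 2x ratio threshold here are illustrative assumptions:

```python
# Sketch: compare this month's resource access counts against a baseline
# and flag anything that exceeds the ratio threshold.

def find_spikes(current: dict, baseline: dict, ratio: float = 2.0) -> dict:
    """Return {resource: percent_above_baseline} for flagged spikes."""
    spikes = {}
    for resource, count in current.items():
        base = baseline.get(resource, 0)
        if base and count / base >= ratio:
            spikes[resource] = round(100 * (count - base) / base)
    return spikes

baseline = {"s3:dev-team": 120, "rds:prod": 40}
current = {"s3:dev-team": 600, "rds:prod": 42}

print(find_spikes(current, baseline))  # → {'s3:dev-team': 400}
```

The LLM's contribution is on either side of a check like this: deciding which baselines are worth comparing from a loosely worded question, and turning the raw percentages into the prioritized, human-readable summary shown above.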

These examples demonstrate how LLMs can not only answer direct questions but also provide context, identify patterns, and make intelligent recommendations based on the data. Instead of just returning raw data, the system can highlight potential risks and suggest actionable next steps.

The Road Ahead

As I dive deeper into this intersection of identity and AI, I have as many questions as answers. How do we ensure security when connecting LLMs to sensitive identity data? What's the right balance between AI assistance and human oversight? How do we handle edge cases and ensure accuracy?

These are the questions I'll be exploring in this blog series. I'll share my experiments, successes, and failures as I work to bridge the gap between the promise of LLMs and the practical realities of identity management. In the next post, we'll get technical and look at how to actually build an identity-aware AI assistant.

I'd love to hear your thoughts. What questions would you want to ask your identity system? What concerns do you have about bringing AI into identity management? Let's figure this out together.
