
Non‑Human Identities Are a Growing AI Security Risk — Here’s Why

Shireen Stephenson · Published April 13, 2026
Key takeaways: non-human identities — the security risk that’s flying under your radar
  • Non-human identities increase security risks by introducing persistent access paths, as their credentials rarely expire, rotate, or map to accountable owners. 
  • You don’t need a dedicated security team to make progress. The fastest risk reduction comes from knowing where access exists, assigning ownership, and enforcing repeatable credential hygiene.
  • Unapproved AI and SaaS app usage is the primary multiplier of NHI risk. 
  • Ownership is the most effective missing control. Non‑human identities may not map to employees, but they should still have accountable human owners to prevent orphaned access. 
  • Centralizing shared credentials and applying basic access policies is the first step in effective credential rotation. 
  • LastPass can help govern the access behaviors, SaaS usage, and credential practices attackers exploit to reach machine environments.
 

When someone mentions "identity security," what comes to mind? If you're thinking logins, passwords, and employees, that's reasonable. But a faster‑growing category of identity is now multiplying in the shadows: non‑human identities.

These identities don't appear in your org chart. They are the service accounts and AI agents quietly working in the background, interacting with databases, APIs, and other agents. And they persist until someone shuts them down.

For busy teams, non‑human identities are easy to overlook and increasingly hard to control.

What are non‑human identities (NHIs)?

Non-human identities are credentials that allow workloads, agents, or automated processes to access applications and data without a human user logging in directly.

Every integration, automated workflow, and AI agent introduces non‑human identities (NHIs) that authenticate continuously across your environment. API keys, OAuth tokens, certificates, CI/CD identities, and JWTs make automation possible.

But here's the risk:

  • They don't expire by default.
  • They accumulate more permissions over time.
  • They're rarely rotated.
  • They're often stored in publicly accessible code repositories.

When unmanaged, NHIs create silent access paths attackers can exploit. What you now have is a visibility and governance problem.
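The failure modes above lend themselves to simple detection. As a rough sketch, a few lines of Python can flag secrets left behind in code or config; the regex patterns below are illustrative only, and purpose-built scanners such as gitleaks or trufflehog ship far more comprehensive rule sets:

```python
import re

# Illustrative patterns only; real scanners cover hundreds of credential formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan_text(text):
    """Return (pattern_name, matched_text) pairs for anything that looks like a secret."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings

sample = 'config = {"api_key": "abcd1234efgh5678ijkl9012"}'
print(scan_text(sample))
```

Running a scan like this over repositories before every push is exactly the kind of repeatable hygiene that keeps credentials from ever reaching a public repo.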

For lean IT teams, the hardest part of managing non-human identities isn't rotation or policy.

It's knowing where credentials are being created in the first place. This is where visibility tools like LastPass Business Max matter, surfacing shadow SaaS and AI adoption before they become unmanaged access paths.

See how many unmanaged SaaS connections exist in your environment – Try Business Max for free now. 

Why is the non‑human identity (NHI) risk growing so fast?

Non-human identity risk is growing because SaaS & AI adoption, workplace automation, and browser-based work are all creating machine credentials faster than current systems can manage.

In enterprise environments, there's a striking perception gap around NHI:

  • Security professionals think the non-human to human identity ratio is between 2:1 and 10:1.
  • But executive-level staff are far more likely to put the ratio at 50:1 or higher.

For a 300-person company, that could mean more than 15,000 machine credentials in circulation — most of them unmanaged.

Here's why non‑human identity sprawl is accelerating:

1. SaaS and AI adoption

Every new SaaS or AI tool your team adopts creates new connections to other systems. And all of these connections need credentials. In 2026, AI is no longer a separate category of tooling. It's now embedded inside the SaaS platforms your team uses: CRM, productivity, HR, finance, engineering.

The result is that every SaaS access decision now carries AI exposure and data leakage potential. And the numbers reflect it:

  • Spycloud researchers uncovered 6.2 million exposed credentials or authentication cookies tied to AI tools in 2025 – PR Newswire
  • Nearly 2/3 of the world's top AI companies (representing a combined valuation of more than $400 billion) have exposed API keys and access tokens on GitHub – CSO Online
  • In Feb 2026, Cyble researchers found 5,000+ publicly accessible GitHub repositories and about 3,000 live production sites leaking API keys – Cyble

Ultimately, the explosion of workforce AI has opened up new, machine-based access paths and, by association, more risk for your business:

The average cost of a data breach is $670,000 higher for organizations with high levels of shadow AI, and it takes 247 days to identify and contain a breach involving shadow AI — yet attackers can compromise over 60% of an environment in less than an hour – SC World

By governing how employees connect to SaaS and AI tools, LastPass SaaS Monitoring can help reduce the human access gaps attackers use to compromise non-human identities.


2. Workplace automation becomes standard

CI/CD pipelines, marketing workflows, and IT scripts rely on always‑on access and are quickly becoming a new category of risk in SaaS environments.

Meanwhile, many organizations continue to lag in developing mature ways to secure these workflows at scale.

The TeamPCP campaign that hit in March 2026 is the clearest example of what happens when the tools we trust to reduce risk become access paths that expose us to more risk.

The attack started with an autonomous bot exploiting a misconfigured GitHub Actions workflow in Aqua Security's Trivy project to steal a privileged access token.

Trivy is a widely used (and trusted) vulnerability scanner. Aqua Security soon discovered the theft and rotated the credentials. But the rotation was incomplete, and TeamPCP retained access.

With the surviving credentials, TeamPCP rewrote 76 out of 77 Trivy-action version tags (the version-control records that point releases at specific code changes) to reference their own malicious commits. So, any CI/CD pipeline that ran Trivy that day silently executed credential-stealing malware alongside the normal vulnerability scan.

On the surface, nothing was amiss. But underneath, the credential stealer silently harvested API keys, SSH keys, and tokens from pipeline environments.

The same attack moved through Checkmarx, then LiteLLM (pushing credential-stealing payloads to millions of developers through PyPI), and finally Telnyx, a Python package downloaded 790,000+ times/month – The New Stack

AI recruitment firm Mercor is a high-profile downstream victim of the attack on Trivy/LiteLLM, with attackers claiming to have stolen 4TB of sensitive data, which includes candidate profiles, PII, API keys, and source code.

The TeamPCP cascade showed the depth of AI infrastructure in software supply chains and how one small AI dependency becomes the weakest point.

3. Browser‑based work

Many credentials now live where work happens: directly in the browser or in extensions, with little centralized visibility.

In 2026, attackers are focusing on the authentication layer. They're targeting browser-stored credentials and session cookies for human identities.

And for non-human identities like AI agents or automated workflows, they're targeting API keys and access/refresh tokens.

Put together, the picture becomes clear: three forces are all pushing in the same direction. More connections, more AI, and more browser-based access, each generating new non-human identities faster than anyone is watching.

How do AI agent identities make the non‑human identity problem worse?

The rise of agentic AI has created a new category of NHI risk that most teams aren't ready for.

Every time your employee connects an AI tool to Slack, Google Drive, Salesforce, or an internal system, a new OAuth token or API key is created.

That credential carries permissions. And it is almost never added to a centralized identity inventory.

An AI agent identity is the collection of credentials, permissions, and authentication methods that allow an AI agent to act autonomously across systems.

Unlike traditional non-human identities like service accounts, AI agent identities are:

  • Persistent: Agents authenticate repeatedly, not just once per task
  • Autonomous: Actions aren't always directly initiated by a human
  • Cross-system: A single agent may authenticate to multiple SaaS apps, APIs, and internal tools

What is AI agent sprawl?

AI agent sprawl refers to the rapid, uncontrolled accumulation of credentials when AI agents are granted access to your systems without IT oversight.

Each new AI agent typically introduces:

  • One or more OAuth tokens or API keys
  • Broad default permissions
  • No clearly assigned human owner
  • No defined expiration or rotation policy

As AI agent sprawl grows, so does the number of authentication paths into your business.
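The sprawl gaps listed above can be captured in a minimal inventory record. This is an illustrative sketch, not a LastPass or vendor API; the AgentIdentity fields and example names are hypothetical:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AgentIdentity:
    name: str                       # e.g., "drive-connector-agent" (hypothetical)
    credential_type: str            # "oauth_token", "api_key", ...
    owner: Optional[str] = None     # accountable human, if any
    expires: Optional[date] = None  # None means the credential never expires

def sprawl_gaps(identity):
    """Flag the governance gaps that define AI agent sprawl."""
    gaps = []
    if identity.owner is None:
        gaps.append("no assigned human owner")
    if identity.expires is None:
        gaps.append("no expiration or rotation policy")
    return gaps

agent = AgentIdentity("drive-connector-agent", "oauth_token")
print(sprawl_gaps(agent))
```

Even a spreadsheet with these four columns is enough to start measuring how many authentication paths exist without an owner or expiry.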

Why is AI agent authentication now a security boundary?

Because AI agents don't just access your systems. They also act inside of them.

As AI agents operate autonomously round the clock inside your systems, the boundary is no longer just the login. It's also the action performed.

Every time an agent calls an API, triggers a workflow, or queries a database, that's a new decision point or boundary. And right now, most businesses have no controls sitting over that boundary at all.

And that's not all. The Model Context Protocol (MCP), the standard protocol for powering agentic AI across the enterprise, ships with no authentication enabled by default.

  • Nearly 38% of the 500-plus MCP servers Adversa AI scanned in April 2026 lacked authentication entirely.
  • Meanwhile, Knostic identified 1,862 Internet-accessible MCP servers with no identity governance controls in place.

Source: SC World
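A first-pass check for this gap is probing an endpoint anonymously and seeing whether it challenges for credentials. The helper below is a hedged sketch of how to interpret such a probe's HTTP status; it assumes the server follows standard HTTP auth semantics:

```python
def auth_posture(status_code):
    """Classify what an unauthenticated probe's HTTP status implies.

    Assumption: a server answering an anonymous request with 401/403 is at
    least challenging for credentials; a 200 suggests the endpoint is open,
    which is the gap the MCP scans above describe.
    """
    if status_code in (401, 403):
        return "auth enforced"
    if status_code == 200:
        return "open: no authentication challenge"
    return "inconclusive"
```

Only probe servers you own or are authorized to test, and treat "inconclusive" results (redirects, errors) as a prompt for manual review rather than a pass.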

What's the real-world impact of an unmanaged NHI?

Prompt injection is one of the biggest risks of agentic AI. An attacker can embed malicious instructions or code into content the agent is likely to process.

And the agent, following instructions, will happily execute the command without question.

Let's talk about the real-world impact of this.

In 2025, Invariant Labs researchers found a vulnerability in the official GitHub MCP integration, which allowed attackers to create malicious GitHub issues in any public repository the target might interact with.

And here's what they discovered: When the target developer asks the AI agent to "check the open issues," the agent will dutifully comply, process prompt-injected hidden instructions, and then use its broad personal access tokens to access private repositories and leak salary data, confidential projects, and PII.

Whether you're a small business or larger enterprise, this means:

A single malicious GitHub issue can transform an innocent "check the open issues" request into a command that steals salary information, private project details, and confidential business data from locked-down repositories – Docker
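One way to picture a first line of defense is a pre-filter over untrusted content before an agent processes it. This is a deliberately naive sketch: string matching alone will not stop prompt injection, and the phrase list below is invented for illustration. Real defenses separate trusted instructions from untrusted data at the model and tooling layer:

```python
# Invented phrase list for illustration; attackers rephrase trivially.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "access the private repository",
    "reveal your system prompt",
]

def flag_untrusted_content(issue_body):
    """Return True if untrusted text (e.g., a GitHub issue body) contains
    phrases that look like instructions injected for an agent to follow."""
    lowered = issue_body.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
```

The more durable control is scoping the agent's token so that even a successful injection cannot reach private repositories in the first place.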

The stark reality is: The same attributes that make AI agents effective — persistence, autonomy, broad access — are also the ones attackers can exploit. And the resultant damage isn't limited to data leaks.

It's a crisis of major proportions, with legal, regulatory, and contractual consequences.

Ultimately, the fallout can destroy trust in your brand, your bottom line, and long-term competitiveness.

Can you reduce non-human identity risk without a dedicated security team?

The answer is yes. Most businesses grappling with non‑human identity (NHI) risk don't fail because they lack advanced identity tooling.

They often fail first due to lack of visibility into shadow identities that quietly accumulate across SaaS and AI.

This lack of visibility is the foundational gap.

Importantly, not all tools that reduce NHI risk manage machine identities directly. LastPass isn't a workload identity platform, secrets manager, or CI/CD identity platform.

Its value lies in this: It reduces your NHI risk by closing the human-side gap that leaves your NHIs exposed.

If you're a lean team, this distinction matters: Most NHI exposure doesn't start in CI/CD pipelines but with people connecting SaaS tools, authorizing AI agents, and sharing credentials outside formal workflows.

If you don't control that human entry point, you have a gap that's exploitable.

#1 Start with visibility

In many environments, non‑human identities are created as a side effect of everyday SaaS and AI adoption — OAuth grants, AI agent connections, and automated integrations initiated by your employees.

LastPass reduces NHI risk where lean teams actually have leverage: at the point where humans introduce access, without requiring specialized infrastructure or long implementation cycles.

With LastPass Business Max, you get agentless visibility into SaaS and AI usage through built-in SaaS Monitoring, surfacing:

  • Which SaaS and AI tools your employees are signing in to
  • Which users have SSO and non-SSO logins
  • Where credentials are being reused across apps

This doesn't surface API keys or workload identities. What it provides is the discovery signal your team needs to identify where non‑human credentials are likely being created outside IT oversight.

#2 Enforce human ownership

For every NHI you find:

1. Identify what it accesses and whether that access is still needed

2. Reduce permissions to the minimum required

3. Assign a human owner who is accountable for that credential
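Applied to a hypothetical credential inventory, steps 2 and 3 might look like this sketch; the record fields and names are invented for illustration:

```python
# Hypothetical inventory records; field names and scopes are illustrative.
inventory = [
    {"name": "ci-deploy-token", "scopes": ["repo:write"], "owner": "j.doe"},
    {"name": "crm-sync-key", "scopes": ["contacts:read", "admin"], "owner": None},
]

def ownership_audit(records):
    """Steps 2 and 3 above: question broad scopes, require an accountable owner."""
    actions = []
    for rec in records:
        if "admin" in rec["scopes"]:
            actions.append((rec["name"], "reduce permissions to minimum required"))
        if rec["owner"] is None:
            actions.append((rec["name"], "assign an accountable human owner"))
    return actions

for name, action in ownership_audit(inventory):
    print(f"{name}: {action}")
```

Step 1, confirming the access is still needed at all, stays a human judgment call; the audit just surfaces which credentials to ask that question about.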

#3 Implement credential rotation

Credential rotation is one layer of NHI protection, but it only matters if you know which credentials exist, who has access, and whether that access is appropriate.
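The age check behind a rotation policy fits in a few lines. The 90-day maximum below is an illustrative assumption, not a recommendation; pick a value that fits your risk tolerance:

```python
from datetime import date, timedelta

# Illustrative policy value, not a recommendation.
MAX_CREDENTIAL_AGE = timedelta(days=90)

def needs_rotation(last_rotated, today=None):
    """A credential is due for rotation once it exceeds the policy age."""
    today = today or date.today()
    return today - last_rotated > MAX_CREDENTIAL_AGE
```

Running a check like this over your inventory each week turns rotation from a best intention into a recurring, auditable task.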

While LastPass doesn't rotate cloud‑native secrets automatically, it centralizes shared credentials (passwords, API keys, SSH keys) with defined ownership and access policies inside a single, governed vault.

This means only authorized humans can retrieve and provision that credential to an agent, creating a defined moment of accountability. In a world where agents can act without human involvement, that governance isn't the whole answer.

But without it, you don't have a starting point to begin controlling the chaos that comes from humans freely passing credentials to agents.

See how many unmanaged SaaS connections are active in your environment right now. LastPass Business Max includes built-in SaaS monitoring that maps shadow SaaS and AI tool adoption across your org — in minutes, not weeks.

Run your first SaaS access audit with a Business Max trial.



How does LastPass Business Max compare for SaaS and AI governance?

Most IAM tools are built just for human credentials. LastPass Business Max adds the governance layer lean IT teams need to manage SaaS and AI credentials alongside employee access.

| Feature | LastPass Business Max | 1Password | Keeper |
| --- | --- | --- | --- |
| SaaS/Shadow AI discovery | Yes, built-in monitoring with SaaS Monitoring + SaaS Protect | No | No |
| Shared credential vaults with access policies | Yes | Yes | Yes |
| OAuth/API key storage | Yes (in Secure Notes) | Yes | Yes |
| Advanced SSO with MFA enforcement for vaults | Advanced SSO with MFA included in Business Max; support for non-SCIM user provisioning | SSO with MFA included in Business and Enterprise; non-SCIM user provisioning at Enterprise tier only | SSO via Enterprise tier only |
| Compliance audit logging | Advanced | Yes | Yes |

What should your IT team do before the next audit?

Non-human identities are here to stay. The teams getting ahead of risk aren't running complex enterprise identity platforms. They're doing three things first: auditing what exists, assigning ownership, and enforcing rotation. The right tooling can help make all three manageable for a lean IT team, without a six-month implementation.

Your human identities are locked down. Now it's time to apply the same standard to your SaaS and AI tools.

Cut your unmanaged credential exposure before your next audit. See how LastPass Business Max gives lean IT teams full visibility into SaaS access, shadow AI tool adoption, and shared credentials — in one dashboard.

See how teams like yours reduced credential risk with Business Max.
  

Sources

CSO Online: Why non-human identities are your biggest security blind spot in 2026

Identity Defined Security Alliance: The State of Identity Governance in 2026: Why boards think access is under control when it isn't

Cloud Security Alliance: The state of non-human identity and AI security

Cloud Security Alliance: 79% of IT pros feel ill-equipped to prevent attacks via non-human identities

World Economic Forum: Non-human identities: Agentic AI's new frontier of cybersecurity risk

Trace3: From service accounts to autonomous agents

Valence Security: Why SaaS and AI security will look very different in 2026

FAQs: Non-human identities: The security risk that’s flying under your radar

Can LastPass help manage non-human identity risk?

Yes — with the right tier.

Standard IAM tools govern human credentials. 

However, LastPass Business Max adds SaaS monitoring that surfaces shadow SaaS and AI.

For lean IT teams that can't justify a standalone identity governance platform, Business Max provides SaaS governance as part of the same tool already managing employee logins.

How do attackers exploit non-human identities?

The most common method is credential theft via infostealer malware.

Attackers compromise a developer or code repository, harvest stored API keys or tokens, and then authenticate as a trusted machine process. 

Because machine-to-machine traffic looks normal, these attacks often go undetected until significant data has been exfiltrated — as seen in the 2026 TeamPCP breaches.

Why does AI adoption increase non-human identity risk?

Every AI tool an employee connects to a business system creates a new OAuth token or API key — a new NHI.

Most of these credentials are created outside IT workflows, carry broad permissions, and are never revoked. 

In addition, tools using the Model Context Protocol (MCP) introduce additional risk, with researchers documenting prompt injection attacks against MCP-connected agents.

How many non-human identities does the average business have?

More than most IT teams expect. According to the Identity Defined Security Alliance, NHIs can outnumber human identities by 50 to 1 in enterprise environments.

For a 300-person company, that could translate to tens of thousands of machine credentials — most of them unmanaged. SaaS sprawl and AI tool adoption are accelerating NHI growth faster than most businesses can track.

What is a non-human identity (NHI)?

A non-human identity (NHI) is any credential issued to a machine, app, or automated process. This includes API keys, service accounts, OAuth tokens, and CI/CD pipeline secrets.

NHIs authenticate automatically and rarely receive the same governance controls as human logins, making them a frequent target in credential-based attacks.

What's the difference between non-human identity sprawl and AI agent sprawl?

Non-human identity sprawl refers broadly to the growth of service accounts, API keys, and machine credentials.

AI agent sprawl is a subset of this risk, where autonomous AI tools create additional AI agent identities that authenticate continuously across systems.

Because AI agent authentication is often invisible and persistent, AI agent sprawl increases exposure faster and is harder to detect than traditional NHI sprawl.
