🍀 Luck-Based Access Control? [#56]
Authorization is not just for humans. What about non-human agents?
First of all, happy Saint Patrick’s Day to everyone around the world. We’re sure there’s a pot of gold at the end of the rainbow… if only we could get past that thicket of IAM policies set by trickster Leprechaudmins! After all, we all know a system or two so frustrating they seem to run on LBAC: Luck-Based Access Control!
Welcome to the latest issue of the AuthZ newsletter, which depends on your input more than ever: please send us your suggestions (blog posts, videos, podcasts, events, etc.) and fill out the subscriber survey!
Speaking of contributions, we’re glad to welcome Chahal Arora, a Senior Software Engineer at IndyKite, who’s contributing to his first issue today. Coincidentally, his colleagues also recently published a blog post spot-on for this issue’s theme of serving Man and Machines: Securely enabling AI agents, by Joakim E. Andresen.
📈 2025 Authorization Trends
Two of the top trends for 2025 worth paying attention to are almost twins:
AI for Authorization — Smarter & simpler access controls in apps & services; and
Authorization for AI — Privacy in LLMs & delegating access to Agents
Both suggest a wide range of implications for identity and authorization technologies…
🤖 Man vs. Machine
The term “Non-human identities” (NHI) has become an industry buzzword to reckon with, but it isn’t quite about AIs or agents alone. It has a broader scope, as laid out in the work of Pieter Kasselman & Justin Richer, the chairs of IETF WIMSE.
To prove how important NHI is becoming, the Open Worldwide Application Security Project (OWASP) has just published its OWASP Non-Human Identities (NHI) Top 10 for 2025:
Several of these threats tie directly to authorization, such as offboarding, leakage, and overprivileged identities. The good news is that, by and large, existing authorization solutions built with enterprise use cases in mind also protect NHIs and sidestep many of these Top 10 risks. If you use a policy-based approach, stateful or stateless, odds are you’ll be able to define similar policies (or even reuse existing ones!) for NHI use cases, as in the sketch below.
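To make that concrete, here’s a minimal sketch of policy reuse expressed as a pair of AuthZEN-style evaluation requests. The subject, action, and resource names are hypothetical, and the payload shape follows the draft AuthZEN Authorization API; your PDP’s schema may differ:

```python
# A minimal sketch of policy reuse across human and non-human identities,
# expressed as AuthZEN-style evaluation requests (draft Authorization API
# shape). All subject, action, and resource names here are hypothetical.
human_request = {
    "subject": {"type": "user", "id": "alice@busycorp.example"},
    "action": {"name": "can_read"},
    "resource": {"type": "report", "id": "q1-financials"},
}

agent_request = {
    "subject": {"type": "agent", "id": "rag-hr-bot"},  # an NHI, not a person
    "action": {"name": "can_read"},
    "resource": {"type": "report", "id": "q1-financials"},
}

# The same externalized policy (e.g. "only Finance may read financial
# reports") can evaluate both requests; only the subject differs.
```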
All the same, NHIs do pose interesting and novel AuthZ challenges around access delegation. Pretty soon, we’ll need to define what an agent or NHI can do on our behalf. Perhaps Risk #10 will become Opportunity #10?
Speaking of man vs. machine…
🧠 AI Threats & Opportunities
Let’s be honest… soon there will be (or already is!) some eager and/or rogue developer in your company who will deploy cheap, local, uncontrolled LLMs with his or her own:
Credentials to a private database, to log in and share secrets it shouldn’t leak; and
Credentials for an internal SaaS app, to log in and do things it shouldn’t do.
This is the sort of challenge NIST identified as “AI Agent Hijacking” in January 2025. Without a centralized approach to authorization for all kinds of identities, you might not even know about that shadow AI. Even then, you’ll need to be able to better monitor and control what users do.
NHI challenges for LLMs in RAG
There is an urgent need to perform RAG “with strings attached”, i.e. RAG that operates only on data the end-user is entitled to see. Otherwise, someone could generate insights from unauthorized data, which would eventually leak it. We need to identify AI agents and whom they are operating on behalf of. See the OpenAI CEO’s take on it: Sam Altman's World now wants to link AI agents to your digital identity | TechCrunch from January 24, 2025.
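One building block that already exists for expressing that delegation is the “act” (actor) claim from OAuth 2.0 Token Exchange (RFC 8693). Here’s a hedged sketch of the claims a token minted for an agent acting on Alice’s behalf might carry; the issuer, audience, and identifiers are hypothetical:

```python
# Illustrative JWT claims for an agent acting on a user's behalf, using the
# "act" (actor) claim from OAuth 2.0 Token Exchange (RFC 8693). The issuer,
# audience, and identifiers below are hypothetical.
delegated_token_claims = {
    "iss": "https://idp.busycorp.example",
    "sub": "alice@busycorp.example",  # the end-user whose entitlements apply
    "act": {"sub": "rag-hr-bot"},     # the NHI actually making the call
    "aud": "https://kb.busycorp.example",
    "exp": 1742860800,
}

# A PDP can then authorize against both identities at once: what Alice may
# see, and what this particular agent is allowed to do on her behalf.
```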
NHI challenges for authorizing AI experiments
Our industry colleague Omri Gazitt from Aserto adds that LLMs “too cheap to meter” are the worst case for CISOs when they have little insight into their use. That’s part of a case for Centralized AuthZ (January 9, 2025), but it might equally well lead us to think we need Externalized Authorization with decentralized implementations to keep up with the pace of change in AI experimentation right now. And, while all employee identities are maintained in the corporate directory (like Okta, Entra, etc.), what about AI agents’ identities? Where do we go to track all those rogue workflows, again?
🛡️ AI Protection
AI protection is essential to safeguard against misuse, bias, and security threats while ensuring ethical and responsible deployment. As AI systems become more integrated into critical industries, robust safeguards are necessary to maintain trust, privacy, and fairness, preventing unintended consequences and reinforcing accountability in decision-making.
LLMs don’t just hallucinate false information; they can leak truly private secrets, too. In the Retrieval-Augmented Generation (RAG) architectural style, LLMs that fetch information from additional sources on the fly also risk data breaches, misinformation, and compliance violations, because it’s so hard to enforce replicas of the AuthZ policies from the origin systems. Data is easy to copy, but hard to keep the “strings attached”.
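One common mitigation is to filter retrieved chunks against the caller’s entitlements at query time, rather than trying to mirror origin-system policies into the vector store. A minimal sketch, with stand-in names for a real PDP and vector store:

```python
# A minimal sketch of permission-aware RAG retrieval: filter candidate
# chunks against the caller's entitlements before they ever reach the LLM.
# The ACL table and all names here are hypothetical stand-ins for a real
# PDP and vector store.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source_doc: str  # the origin document this chunk was cut from

# Stand-in PDP: in practice, call your externalized authorization service.
ACL = {"alice": {"hr-handbook", "q1-financials"}, "rag-hr-bot": {"hr-handbook"}}

def is_authorized(subject: str, doc_id: str) -> bool:
    return doc_id in ACL.get(subject, set())

def retrieve(subject: str, candidates: list[Chunk], top_k: int = 5) -> list[Chunk]:
    # Filtering at query time keeps entitlements with the origin system,
    # instead of copying them (and letting them drift) alongside the data.
    allowed = [c for c in candidates if is_authorized(subject, c.source_doc)]
    return allowed[:top_k]

chunks = [Chunk("PTO policy…", "hr-handbook"), Chunk("Revenue…", "q1-financials")]
print([c.source_doc for c in retrieve("rag-hr-bot", chunks)])  # ['hr-handbook']
```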
Alex Babeanu has written a blog post for RSA highlighting the security risks of insecure RAG systems. His insights shed light on potential vulnerabilities and the importance of implementing robust safeguards against threats in AI-driven applications.
“The AI race has organizations sprinting forward, often neglecting basic cybersecurity hygiene. The recent DeepSeek breach exposed private keys and chat histories due to poor database security—issues unrelated to AI but fundamental to infrastructure hygiene.”
Redactive’s “Semantic” approach 📚
Redactive.ai co-founder Alexander Valente introduced “Semantic Data Security” in a new whitepaper (LI). SDS uses AI both to find sensitive data by meaning and context, and to learn how that data is accessed. Specifically, compared to other approaches, they are trying to infer sensitivity at a much finer grain. As they put it:
The Mismatch: Document-Level Security vs. Chunk-Level Knowledge
The author of most documents in a workplace does not have the proper business context to correctly classify the content they have created. [Even the author! —ed.] This leads to errors in the access level of data in the document.
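To make the mismatch concrete, here’s our own hedged illustration (not Redactive’s implementation) of why a single document-level label can’t cover chunks of differing sensitivity:

```python
# Our own illustration of the document-vs-chunk mismatch -- not Redactive's
# implementation. A single author-chosen document label can't cover chunks
# whose inferred sensitivity differs.
SENSITIVITY = ["public", "internal", "restricted"]  # ordered low -> high

def effective_label(doc_label: str, chunk_label: str) -> str:
    # Enforce the stricter of the two labels for each chunk.
    return max(doc_label, chunk_label, key=SENSITIVITY.index)

document_label = "internal"  # one coarse label for the whole document
chunks = [
    ("Lunch menu for Friday…", "public"),
    ("Unannounced acquisition target…", "restricted"),
]

for text, inferred in chunks:
    print(text[:24], "->", effective_label(document_label, inferred))
# The second chunk must be treated as "restricted", even though the
# document as a whole was only labeled "internal".
```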

📐 OpenAgentSpec.org, also led by Redactive
“Every application that offers “AI Agents” seems to mean something different, and either provide zero guarantees around security and authorisation or at least very little […] To standardise what Secure Agents are, we’re proposing OpenAgentSpec.org” (LI)
Sometimes, that means restricting which knowledge bases a RAG agent can search, so it only quotes BusyCorp’s employee handbook (from the GitHub spec’s example of an hr-agent):
input_restriction:
  assertion: recent.search-knowledge-base.inputs["knowledge_base_id"].startsWith("Busycorp/HR/")
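For the curious, here’s what enforcing that assertion could look like at runtime. This evaluator is purely our illustration; the spec itself doesn’t prescribe one:

```python
# Our illustration of enforcing the hr-agent's input_restriction at runtime;
# OpenAgentSpec itself doesn't prescribe this evaluator.
def check_input_restriction(tool_name: str, inputs: dict) -> bool:
    if tool_name == "search-knowledge-base":
        return inputs.get("knowledge_base_id", "").startswith("Busycorp/HR/")
    return True  # the restriction only governs knowledge-base searches

assert check_input_restriction("search-knowledge-base",
                               {"knowledge_base_id": "Busycorp/HR/handbook"})
assert not check_input_restriction("search-knowledge-base",
                                   {"knowledge_base_id": "Busycorp/Finance/q1"})
```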
Some other quotes from co-founder Alexander Valente from the launch around AWS re:Invent back in 2024:
“In short, you can’t just let AI agents loose on unstructured data without securing your fine-grained permissions first!” (LI)
“everyone is grappling with how to implement controls for agent-to-agent data sharing without introducing new vulnerabilities” (LI)
and his tip to “join an Age of Empires group chat to find your next machine learning CTO 🗡️🤴” (LI from his Forbes vs. Alex: AI Predictions That Hit or Miss | Day One FM podcast)
IndyKite joins the chat…
Our colleague Joakim commented along the same lines in IndyKite’s 2025 predictions: The emerging role of AI agents in enterprises:
“striking the right balance between data security and accessibility will be crucial for enabling AI agents to function within their intended scope.”
“manipulating AI agents to steal sensitive data, disrupt critical operations, or even cause real-world harm.”
Presumably IndyKite’s Identity Knowledge Graph will help, using their Knowledge-Based Access Control (KBAC) solution.
‼️ Authorization Matters
Alright, fellow AuthZ aficionados, let’s talk shop. Phil Windley’s New Year’s missive on “Authorization Matters” recaps why AuthZ ≠ AuthN — it’s the gatekeeper to your data and resources.
For our crew, his missive is all about the nuances of RBAC, ABAC, and ReBAC. He has the experience to tackle tough spots, like fine-grained control in complex setups. When Phil mentions tools like OPA, Cedar, or OpenFGA, those are definitely worth a look… Keep an eye out for his thoughts on AI or AuthZEN in this new year as well. After reading it, join his newsletter, and let’s chat about it on our channels: join the #Authorization Slack chat for IDPros or comment below, on Substack itself.
📅 News & Events
Next week will be rife with Authorization events: London is gearing up to host the Gartner IAM Summit 2025, Europe Edition. Attendees from all over the world will converge on the O2 to learn about the latest in identity & access management. Gartner was kind enough to let OpenID organize not one, but two interop events during the summit:
OpenID Shared Signals Framework Interop 📡
On Monday, SGNL.ai’s CTO Atul Tulshibagwale will deliver an Executive Story: Building a Trust Fabric With the OpenID Shared Signals Framework:
“OpenID SSF, CAEP, and RISC are open standards that enable instantaneous event-based communication, enabling real-time ITDR. Leading companies have announced their support for this set of standards and some are demonstrating its interoperability in this conference. […] it is critical to building a trust fabric for your organization.”
There will also be three half-hour Interop sessions where a dozen vendors will show interoperability with respect to CAEP on Monday, March 24, 2025 at 1PM, 2:30PM, and 4PM (GMT), in the Italian Room.
OpenID AuthZEN Interop 🔐
On Tuesday, David and co-chair Omri will take to the stage to talk about the importance of standardizing authorization in Executive Story: AuthZEN: the “OpenID Connect” of Authorization. There will also be another three half-hour Interop sessions specifically demonstrating AuthZEN support on Tuesday, March 25, 2025 at 1PM, 2:45PM, and 4:30PM (GMT).
Axiomatics ⨉ Curity Special Event
David’s also proud to collaborate with Jacob Ideskog to host a customer-centric event on financial-grade API security. Their companies, Axiomatics and Curity, are both helping customers explore identity-driven security at an event alongside the Gartner summit, followed by a happy hour in London’s swingin’ Shoreditch. Spots are limited, so do sign up now if you can swing by!
🧃 Soylent Service: “It’s made of PEOPLE!”
Even an issue all about Non-Human Identities still has to be lovingly hand-made by Actual Humans, so if you want to help return the Authorization Clipping Service from a monthly back into a weekly, be sure to send in yer’ blurbs, volunteer to edit, or at least add your voice — don’t let the 2% have their way!1
Who knows, we all might band together and hold our own conference — or celebrate a milestone as impressive as Identity at the Center’s record of 600,000 downloads. Hats off to them, for walking the walk, with 337 episodes to earn Jeff and Jim their own lucky pot o’gold today! 🌈
Our glass is half-full, since about half of you open each issue — but 98% of you still, um, have the opportunity to respond?