🍀 Luck-Based Access Control? [#56]
Authorization is not just for humans. What about non-human agents?
First of all, happy Saint Patrick’s Day to everyone around the world. We’re sure there’s a pot of gold at the end of the rainbow… if only we could get past that thicket of IAM policies set by trickster Leprechaudmins! After all, we all know a system or two so frustrating they seem to run on LBAC: Luck-Based Access Control!
Welcome to the latest issue of the AuthZ newsletter which, more than ever, depends on your input, so please send us your suggestions (blog posts, videos, podcasts, events, etc.) and fill out the subscriber survey!
Speaking of contributions, we’re glad to welcome Chahal Arora, a Senior Software Engineer at IndyKite, who’s contributing to his first issue today. Coincidentally, his colleagues also recently posted a blog post spot-on for this issue’s theme of serving Man and Machines: Securely enabling AI agents, by Joakim E. Andresen.
📈 2025 Authorization Trends
Two of the top trends for 2025 worth paying attention to are almost twins:
AI for Authorization: smarter & simpler access controls in apps & services; and
Authorization for AI: privacy in LLMs & delegating access to Agents.
Both suggest a wide range of implications for identity and authorization technologies…
🤖 Man vs. Machine
The term “non-human identities” (NHI) has become an industry buzzword to reckon with, but it isn’t quite about AIs or Agents alone. It has a broader scope, as laid out in the work of Pieter Kasselman & Justin Richer, the chairs of IETF WIMSE.
To prove how important NHI is becoming, the Open Worldwide Application Security Project (OWASP) has just published its OWASP Non-Human Identities (NHI) Top 10 for 2025.
Several of these threats tie directly to authorization, such as offboarding, leakage, and overprivileged identities. The good news is that, by and large, existing authorization solutions that were built with enterprise use cases in mind also protect NHI and sidestep many of these Top 10 risks. If you use a policy-based stateful or stateless approach, odds are that you will be able to define similar policies (or even reuse policies!) for NHI use cases.
All the same, NHIs do pose interesting and novel AuthZ challenges around access delegation. Pretty soon, we’ll all need to define what an agent or NHI can do on our behalf. Perhaps Risk #10 will become Opportunity #10?
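As a thought experiment, here’s a minimal sketch (all names are hypothetical, not any vendor’s API) of what “one policy for humans and NHIs” with a delegation cap could look like:

```python
# A minimal sketch: one policy rule serving human and non-human subjects
# alike. All identifiers here are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subject:
    id: str
    subject_type: str                    # "user", "service", or "agent"
    on_behalf_of: Optional[str] = None   # delegating user, if any

def is_allowed(subject: Subject, action: str, resource: str) -> bool:
    # The same rule applies whether the caller is human or not:
    # reading HR documents requires membership in the HR group...
    hr_members = {"alice", "hr-payroll-service"}
    if resource.startswith("hr/") and action == "read":
        # ...and a delegated agent is capped by its delegator's rights.
        effective = subject.on_behalf_of or subject.id
        return effective in hr_members
    return False

print(is_allowed(Subject("alice", "user"), "read", "hr/handbook"))              # True
print(is_allowed(Subject("bot-7", "agent", "alice"), "read", "hr/handbook"))    # True
print(is_allowed(Subject("bot-9", "agent", "mallory"), "read", "hr/handbook"))  # False
```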
Speaking of man vs. machine…
🧠 AI Threats & Opportunities
Let’s be honest… soon there will be (or already is!) some eager and/or rogue developer in your company who will deploy a cheap, local, uncontrolled LLM with his or her own:
Credentials to a private database, to log in and share secrets it shouldn’t leak; and
Credentials for an internal SaaS app, to log in and do stuff it shouldn’t do.
This is the sort of challenge NIST identified as “AI Agent Hijacking” in January 2025. Without a centralized approach to authorization for all kinds of identities, you might not even know that shadow AI exists. And even then, you’ll need better ways to monitor and control what those users (and their agents) actually do.
NHI challenges for LLMs in RAG
There is an urgent need to perform RAG “with strings attached”, i.e. RAG that operates only on data the end user is entitled to see. Otherwise, someone could generate insights based on unauthorized data, and eventually leak it. We need to identify AI agents and whom they are operating on behalf of. See the OpenAI CEO’s take on it: Sam Altman's World now wants to link AI agents to your digital identity | TechCrunch, from January 24, 2025.
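What might that look like in practice? Here’s a hypothetical sketch (the entitlement table and permission check are stand-ins, not any particular PDP’s API) of filtering retrieved chunks by the end user’s entitlements before they ever reach the prompt:

```python
# Sketch of RAG "with strings attached": retrieved chunks are filtered
# through an authorization check before they reach the LLM prompt.
# The entitlement table below stands in for a real policy decision point.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source_doc: str

# Hypothetical entitlements mirrored from the origin systems.
DOC_READERS = {"hr/handbook": {"alice", "bob"}, "finance/q4": {"carol"}}

def pdp_allows(user_id: str, doc: str) -> bool:
    return user_id in DOC_READERS.get(doc, set())

def filter_for_user(user_id: str, candidates: list[Chunk]) -> list[str]:
    # Only chunks the asking user could open directly may enter the
    # prompt, so the LLM cannot launder unauthorized data into "insights".
    return [c.text for c in candidates if pdp_allows(user_id, c.source_doc)]

hits = [Chunk("PTO policy…", "hr/handbook"), Chunk("Q4 revenue…", "finance/q4")]
print(filter_for_user("alice", hits))   # ['PTO policy…'] only
```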
NHI challenges for authorizing AI experiments
Our industry colleague Omri Gazitt from Aserto adds that LLMs “too cheap to meter” are the worst case for CISOs when they have little insight into their use. That’s part of his case for Centralized AuthZ (January 9, 2025), but it might equally well lead us to think we need externalized authorization with decentralized implementations to keep up with the pace of change in AI experimentation right now. And while all employee identities are maintained in the corporate directory (Okta, Entra, etc.), what about AI agents’ identities? Where do we go to track all those rogue workflows, again?
🛡️ AI Protection
AI protection is essential to safeguard against misuse, bias, and security threats while ensuring ethical and responsible deployment. As AI systems become more integrated into critical industries, robust safeguards are necessary to maintain trust, privacy, and fairness, preventing unintended consequences and reinforcing accountability in decision-making.
LLMs don’t just hallucinate false information; they can leak truly private secrets, too. In the Retrieval-Augmented Generation (RAG) architectural style, LLMs that fetch information from additional sources on the fly also risk data breaches, misinformation, and compliance violations, because it’s so hard to enforce replicas of the AuthZ policies from the origin systems. Data is easy to copy, but it’s hard to keep the “strings” attached.
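One common mitigation, sketched below with purely illustrative field names, is to copy the origin system’s ACLs onto each chunk at ingestion time and filter on that metadata at query time; the catch is that the copied entitlements go stale the moment the origin ACL changes:

```python
# Sketch: keep the "strings attached" by stamping origin-system ACLs onto
# each chunk at ingestion, then filtering on that metadata at query time.
# Field names are illustrative, not any vector store's real schema.
def ingest(doc_text: str, doc_id: str, allowed_groups: set[str]) -> list[dict]:
    chunks = [doc_text[i:i + 200] for i in range(0, len(doc_text), 200)]
    return [{"text": c, "doc_id": doc_id,
             "allowed_groups": sorted(allowed_groups)} for c in chunks]

def query(store: list[dict], user_groups: set[str]) -> list[dict]:
    # Caveat: this snapshot drifts as soon as the origin ACL changes,
    # so it needs periodic re-sync or a live check layered on top.
    return [c for c in store if user_groups & set(c["allowed_groups"])]

store = ingest("Salaries are confidential… " * 20, "hr/payroll", {"hr"})
print(len(query(store, {"hr"})), len(query(store, {"eng"})))   # e.g. 3 0
```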
Alex Babeanu has written a blog post for RSA highlighting the security risks associated with insecure Retrieval-Augmented Generation (RAG) systems. His insights shed light on potential vulnerabilities and the importance of implementing robust safeguards to protect against threats in AI-driven applications.
“The AI race has organizations sprinting forward, often neglecting basic cybersecurity hygiene. The recent DeepSeek breach exposed private keys and chat histories due to poor database security – issues unrelated to AI but fundamental to infrastructure hygiene.”
Redactive’s “Semantic” approach 🔍
Redactive.ai co-founder Alexander Valente introduced “Semantic Data Security” in a new whitepaper (LI). SDS uses AI both to find sensitive data, by meaning and context, and to learn how data is accessed. Specifically, compared to other approaches, they are trying to infer sensitivity at a much finer grain. As they put it:
The Mismatch: Document-Level Security vs. Chunk-Level Knowledge
The author of most documents in a workplace does not have the proper business context to correctly classify the content they have created. [Even the author! –ed.] This leads to errors in the access level of data in the document.
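To make the mismatch concrete, here’s a toy illustration (emphatically not Redactive’s actual classifier) of why labels need to live at the chunk level: a single document can mix public and restricted chunks, so any one document-level label is wrong for some of them:

```python
# Toy illustration of chunk-level classification. A real semantic system
# would use models of meaning and access patterns, not keyword regexes;
# this only shows the per-chunk grain, not the classifier itself.
import re

SENSITIVE_PATTERNS = [r"\bsalary\b", r"\bssn\b", r"\bmedical\b"]

def classify_chunk(chunk: str) -> str:
    if any(re.search(p, chunk, re.IGNORECASE) for p in SENSITIVE_PATTERNS):
        return "restricted"
    return "general"

doc_chunks = ["Our office address is public.", "Alice's salary is $120k."]
print([classify_chunk(c) for c in doc_chunks])   # ['general', 'restricted']
```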

📋 OpenAgentSpec.org, also led by Redactive
“Every application that offers ‘AI Agents’ seems to mean something different, and either provide zero guarantees around security and authorisation or at least very little […] To standardise what Secure Agents are, we’re proposing OpenAgentSpec.org” (LI)
Sometimes that means pinning a RAG agent to a specific knowledge base, so it only quotes BusyCorp’s employee handbook (from the GitHub spec’s example of an hr-agent):

```yaml
input_restriction:
  assertion: recent.search-knowledge-base.inputs["knowledge_base_id"].startsWith("Busycorp/HR/")
```
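How a runtime enforces that restriction is up to implementers; here’s a hypothetical Python guard (the wrapper and names are ours, not part of the spec) that checks tool-call inputs against the assertion before executing:

```python
# Hypothetical enforcement wrapper for the input_restriction above:
# validate tool-call arguments before the tool ever runs.
def enforce_input_restriction(tool_name: str, inputs: dict) -> None:
    if tool_name == "search-knowledge-base":
        kb = inputs.get("knowledge_base_id", "")
        if not kb.startswith("Busycorp/HR/"):
            raise PermissionError(f"Blocked: {kb!r} is outside Busycorp/HR/")

def call_tool(tool_name: str, inputs: dict) -> None:
    enforce_input_restriction(tool_name, inputs)
    print(f"Executing {tool_name} with {inputs}")   # stand-in for the real call

call_tool("search-knowledge-base", {"knowledge_base_id": "Busycorp/HR/handbook"})
# call_tool("search-knowledge-base", {"knowledge_base_id": "Busycorp/Finance/q4"})
# → raises PermissionError
```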
Some other quotes from co-founder Alexander Valente from its launch around AWS re:Invent back in 2024:
“In short, you can’t just let AI agents loose on unstructured data without securing your fine-grained permissions first!” (LI)
“everyone is grappling with how to implement controls for agent-to-agent data sharing without introducing new vulnerabilities” (LI)
and his tip to “join an Age of Empires group chat to find your next machine learning CTO 🛡️🤴” (LI, from his Forbes vs. Alex: AI Predictions That Hit or Miss | Day One FM podcast)
IndyKite joins the chat…
Our colleague Joakim commented along the same lines in IndyKite’s 2025 predictions: The emerging role of AI agents in enterprises:
“striking the right balance between data security and accessibility will be crucial for enabling AI agents to function within their intended scope.”
“manipulating AI agents to steal sensitive data, disrupt critical operations, or even cause real-world harm.”
Presumably IndyKite’s Identity Knowledge Graph will help, using their Knowledge-Based Access Control (KBAC) solution.
‼️ Authorization Matters
Alright, fellow AuthZ aficionados, let’s talk shop. Phil Windley’s New Year’s missive on “Authorization Matters” recaps why AuthZ ≠ AuthN: it’s the gatekeeper to your data and resources.
For our crew, his missive is all about the nuances of RBAC, ABAC, and ReBAC. He has the experience to tackle tough spots, like fine-grained control in complex setups. When Phil mentions tools like OPA, Cedar, or OpenFGA, those are definitely worth a look… Keep an eye out for his thoughts on AI or AuthZEN in this new year as well. After reading it, join his newsletter, and let’s chat about it on our channels: join the #Authorization Slack chat for IDPros or comment below, on Substack itself.
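If you want to kick the tires on one of those engines, OPA is an easy place to start: it exposes an HTTP Data API where you POST an input document to a policy path and read back the decision. A minimal client sketch (the policy path app/authz/allow is our example, not a default):

```python
# Query OPA's Data API: POST /v1/data/<policy path> with an "input"
# document; OPA evaluates the Rego policy and returns {"result": ...}.
import json
import urllib.request

def opa_allows(user: str, action: str, resource: str) -> bool:
    payload = json.dumps({"input": {
        "user": user, "action": action, "resource": resource,
    }}).encode()
    req = urllib.request.Request(
        "http://localhost:8181/v1/data/app/authz/allow",   # local OPA server
        data=payload, headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("result", False)

# print(opa_allows("alice", "read", "hr/handbook"))  # needs OPA running locally
```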
📰 News & Events
Next week will be rife with Authorization events: London is gearing up to host the Gartner IAM Summit 2025, Europe Edition. Attendees from all over the world will converge on The O2 to learn about the latest in identity & access management. Gartner was kind enough to let OpenID organize not one, but two interop events during the summit:
OpenID Shared Signals Framework Interop 📡
On Monday, SGNL.ai’s CTO Atul Tulshibagwale will deliver an Executive Story: Building a Trust Fabric With the OpenID Shared Signals Framework:
“OpenID SSF, CAEP, and RISC are open standards that enable instantaneous event-based communication, enabling real-time ITDR. Leading companies have announced their support for this set of standards and some are demonstrating its interoperability in this conference. […] it is critical to building a trust fabric for your organization.”
There will also be three half-hour Interop sessions where a dozen vendors will show interoperability with respect to CAEP on Monday, March 24, 2025 at 1PM, 2:30PM, and 4PM (GMT), in the Italian Room.
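For a feel of what those demos exchange: Shared Signals events travel as Security Event Tokens (RFC 8417), i.e. JWTs whose events claim carries the CAEP payload. Here’s a rough, unsigned sketch of a session-revoked event; the exact field placement has shifted across draft versions, so treat the shape as illustrative:

```python
# Unsigned sketch of a CAEP session-revoked Security Event Token payload.
# Real SETs are signed JWTs; issuer/audience/subject values are examples.
import json, time

set_payload = {
    "iss": "https://idp.example.com",    # transmitter (example)
    "aud": "https://rp.example.com",     # receiver (example)
    "iat": int(time.time()),
    "jti": "756E69717565206964",         # unique token id (example)
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {"format": "email", "email": "alice@example.com"},
            "event_timestamp": int(time.time()),
        }
    },
}
print(json.dumps(set_payload, indent=2))
```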
OpenID AuthZEN Interop 🔐
On Tuesday, David and co-chair Omri will take to the stage to talk about the importance of standardizing authorization in Executive Story: AuthZEN: the “OpenID Connect” of Authorization. There will also be another three half-hour Interop sessions specifically demonstrating AuthZEN support on Tuesday, March 25, 2025 at 1PM, 2:45PM, and 4:30PM (GMT).
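The interop scenarios boil down to this: every participating PDP accepts the same evaluation request and returns the same decision shape. A client-side sketch following the AuthZEN Authorization API draft (host and bearer token are placeholders):

```python
# Call an AuthZEN-compliant PDP's evaluation endpoint. Request/response
# shapes follow the AuthZEN draft; the base URL and token are placeholders.
import json
import urllib.request

def authzen_evaluate(pdp_base: str, token: str) -> bool:
    body = json.dumps({
        "subject":  {"type": "user", "id": "alice@acmecorp.com"},
        "action":   {"name": "can_read"},
        "resource": {"type": "document", "id": "123"},
    }).encode()
    req = urllib.request.Request(
        f"{pdp_base}/access/v1/evaluation", data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["decision"]

# allowed = authzen_evaluate("https://pdp.example.com", "<token>")  # placeholder
```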
Axiomatics ✨ Curity Special Event
David’s also proud to collaborate with Jacob Ideskog to host a customer-centric event on financial-grade API security. Their companies, Axiomatics and Curity, are both helping customers explore identity-driven security at an event alongside the Gartner summit, followed by happy hour in London’s swingin’ Shoreditch. Spots are limited, so do sign up now if you can swing by!
🧑 Soylent Service: “It’s made of PEOPLE!”
Even an issue all about Non-Human Identities still has to be lovingly hand-made by Actual Humans, so if you want to help return the Authorization Clipping Service from a monthly back into a weekly, be sure to send in yer blurbs, volunteer to edit, or at least add your voice – don’t let the 2% have their way!
Who knows, we all might band together and hold our own conference – or celebrate a milestone as impressive as Identity at the Center’s record of 600,000 downloads. Hats off to them, for walking the walk, with 337 episodes to earn Jeff and Jim their own lucky pot o’ gold today! 🌈
Our glass is half-full, since about half of you open each issue – but 98% of you still, um, have the opportunity to respond?