Mozart Brocchini’s Post

This is a great post. If you're building agentic systems for the enterprise, read beyond the main post; there are important considerations in the comments discussion. Disclaimer: I'm not advocating this exact pattern, but there are good concepts here that you might be missing. Christian Posta, if you're attending QCon AI NY, I'd love to catch up.

🚫 API keys and Personal Access Tokens (PATs) for AI agents are a BAD idea. They are usually 𝘭𝘰𝘯𝘨 𝘭𝘪𝘷𝘦𝘥 (> 1 day) 🔑 credentials that are 𝘣𝘳𝘰𝘢𝘥𝘭𝘺 𝘴𝘤𝘰𝘱𝘦𝘥 and can be used in unexpected ways.

⚠️ Get exposed in logs?
⚠️ Shared across developer teams?
⚠️ Taken along by an ex-employee?

They create a 𝐡𝐮𝐠𝐞 𝐩𝐫𝐨𝐛𝐥𝐞𝐦 𝐟𝐨𝐫 𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲, 𝐚𝐮𝐝𝐢𝐭 𝐚𝐧𝐝 𝐜𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞.

I recently wrote an in-depth blog about the challenges of API keys / PATs for AI agents (see comments 👇 👇 👇 ), but someone asked: "𝘉𝘶𝘵 𝘵𝘩𝘦 𝘓𝘓𝘔 𝘱𝘳𝘰𝘷𝘪𝘥𝘦𝘳𝘴 𝘢𝘭𝘭 𝘪𝘴𝘴𝘶𝘦 𝘈𝘗𝘐 𝘬𝘦𝘺𝘴, 𝘴𝘰 𝘩𝘰𝘸 𝘥𝘰 𝘺𝘰𝘶 𝘥𝘦𝘢𝘭 𝘸𝘪𝘵𝘩 𝘵𝘩𝘢𝘵?"

✅ Great question. It's true, API keys may be unavoidable, BUT in an enterprise environment they should be shielded from clients. They should be locked down and tucked away in the infrastructure using something like an egress LLM/AI gateway. They can then be governed, revoked, stored, etc. in a consistent, approved way.

🏰 In this pattern, the internal enterprise relies on existing user/machine identity and security mechanisms (SSO, service accounts, etc.), and any policy governing communication with LLMs gets handled at the gateway (Allow/Deny). If a call is allowed, the AI gateway injects the upstream LLM API keys. Clients/callers never see these.

💡 I consistently see folks experimenting with AI technology (agentic IDEs, public agents, custom agents) and handing out API keys and PATs willy-nilly. These start as POCs, but then they take them to production.

👉 𝐃𝐨𝐧'𝐭 𝐝𝐨 𝐭𝐡𝐢𝐬. 𝘛𝘩𝘦𝘳𝘦 𝘢𝘳𝘦 𝘣𝘦𝘵𝘵𝘦𝘳 𝘸𝘢𝘺𝘴. If you have scenarios you'd like to discuss, 𝐩𝐥𝐞𝐚𝐬𝐞 𝐫𝐞𝐚𝐜𝐡 𝐨𝐮𝐭 / 𝐜𝐨𝐧𝐧𝐞𝐜𝐭 / 𝐟𝐨𝐥𝐥𝐨𝐰. I would love to hear alternative thoughts, specific use cases, etc., and help figure out an acceptable way to solve those problems. If we are serious about AI adoption, it's time to clean up our sloppy security practices.
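The egress-gateway pattern can be sketched in a few lines. This is a minimal, hypothetical Python sketch, not any vendor's API: the policy table, the caller identities, and `UPSTREAM_API_KEY` are all invented for illustration, and in a real deployment the identity would come from an SSO/service-account token and the key from a secrets store.

```python
# Hypothetical sketch of an egress LLM/AI gateway.
# The upstream LLM API key lives only inside the gateway infrastructure;
# internal callers authenticate with their existing identity and never see it.

UPSTREAM_API_KEY = "sk-stored-only-in-gateway-infra"  # assumed: pulled from a secrets store

# Allow/Deny policy keyed on caller identity (assumed: the identity was already
# resolved from an SSO or service-account credential before reaching this point).
POLICY = {
    "svc-order-agent": "allow",
    "dev-sandbox": "deny",
}

def gateway_forward(caller_identity: str, request_headers: dict) -> dict:
    """Apply the Allow/Deny policy; if allowed, inject the upstream key and
    return the headers the gateway would send to the LLM provider."""
    if POLICY.get(caller_identity) != "allow":
        raise PermissionError(f"policy denies LLM egress for {caller_identity}")
    outbound = dict(request_headers)
    # Key injection happens here, on the way out — the client never supplies it.
    outbound["Authorization"] = f"Bearer {UPSTREAM_API_KEY}"
    return outbound

headers = gateway_forward("svc-order-agent", {"Content-Type": "application/json"})
print("Authorization" in headers)  # → True: the gateway, not the client, added the key
```

A denied caller (e.g. `"dev-sandbox"` above) is rejected before any key material is touched, which is what makes revocation and audit consistent: policy and credentials change in one place, not across every client.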


Thank you for sharing your thoughts!

