Secure System Design
What is Secure System Design?
Design security into each layer: inputs, processing, outputs, and platform. Limit capabilities, verify identities, watch behavior, and plan for safe failure. The aim is predictability and containment.
Core principles
- Least privilege: Scope tools and integrations to only what’s needed.
- Defense in depth: Layer input validation, prompt isolation, capability controls, and output filtering.
- Observability: Clear audit trails and anomaly detection.
- Fail closed: Prefer safe fallbacks over risky guesses (see the sketch after this list).
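To see defense in depth and fail closed working together, here is a minimal sketch; validate_input, call_model, and filter_output are hypothetical hooks standing in for your own layers.

SAFE_REFUSAL = "Request blocked by policy."

def guarded_respond(prompt):
    # Every layer can veto, and any unexpected error ends in a safe refusal.
    try:
        if not validate_input(prompt):                       # hypothetical input check
            return SAFE_REFUSAL
        raw = call_model(prompt)                             # hypothetical model call
        return raw if filter_output(raw) else SAFE_REFUSAL   # hypothetical output filter
    except Exception:
        return SAFE_REFUSAL                                  # fail closed, never guess

The default matters: anything not explicitly allowed becomes a refusal.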
Capability controls (example)
Give the AI only the tools it needs, and gate high‑risk actions with extra checks.
RISK_LEVELS = {"low": 0, "medium": 1, "high": 2}

def can_execute(user, capability):
    # "<" on strings sorts alphabetically ("low" > "high"), so rank risk levels explicitly.
    return capability.name in user.allowed and RISK_LEVELS[capability.risk] < RISK_LEVELS["high"]
Add rate limits, scopes, and per‑action logging, and separate admin paths from normal flows; one way to layer these is sketched below.
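A minimal sketch under stated assumptions: a sliding-window rate limit and a log line around the can_execute check above. The user.id and capability.name attributes, the limits, and the logger name are all illustrative.

import logging
import time
from collections import defaultdict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("capability_gate")

WINDOW_SECONDS = 60          # illustrative limits
MAX_CALLS_PER_WINDOW = 30
_recent_calls = defaultdict(list)   # user id -> timestamps of recent calls

def gated_execute(user, capability, action):
    # Deny by default: rate limit first, then authorize, and log every decision.
    now = time.time()
    window = [t for t in _recent_calls[user.id] if now - t < WINDOW_SECONDS]
    _recent_calls[user.id] = window
    if len(window) >= MAX_CALLS_PER_WINDOW:
        log.warning("rate_limited user=%s capability=%s", user.id, capability.name)
        return None
    if not can_execute(user, capability):
        log.warning("blocked user=%s capability=%s", user.id, capability.name)
        return None
    window.append(now)
    log.info("allowed user=%s capability=%s", user.id, capability.name)
    return action()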
Zero‑trust stance
Always verify. Tie responses and tool calls to identity, context, and risk. Adjust permissions dynamically when behavior looks off. Segment services so compromise in one place doesn’t spread.
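One way to make the dynamic part concrete is to score each request from identity and context signals and shrink the permission set as risk rises. This is a minimal sketch; the signal names, weights, and thresholds are all illustrative, and user.allowed is assumed to be a set of capability names.

HIGH_RISK_CAPS = {"delete_data", "run_code", "send_email"}   # illustrative

def effective_permissions(user, context):
    # Recompute permissions on every request instead of trusting a session.
    risk = 0.0
    if not context.get("mfa_verified"):
        risk += 0.4
    if context.get("new_device"):
        risk += 0.3
    if context.get("anomaly_score", 0.0) > 0.5:
        risk += 0.3
    if risk >= 0.7:
        return set()                                # deny all; require step-up auth
    if risk >= 0.4:
        return set(user.allowed) - HIGH_RISK_CAPS   # drop sensitive capabilities
    return set(user.allowed)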
Operations
Instrument everything that matters: policy hits, blocked calls, unusual patterns. Run incident drills. Keep configs, prompts, and dependencies reviewed and up to date. Prefer small, reversible changes.
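A small sketch of the instrumentation side, assuming a JSON-lines log pipeline; the event fields and rule names are illustrative.

import json
import sys
import time

def audit_event(kind, **fields):
    # One structured, timestamped record per policy decision keeps audit
    # trails queryable and gives anomaly detection something to consume.
    record = {"ts": time.time(), "kind": kind, **fields}
    sys.stdout.write(json.dumps(record) + "\n")   # stdout stands in for your log sink

audit_event("policy_hit", rule="no_high_risk_tools", user="u123", decision="blocked")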
Interactive Exercise
Try asking for an architectural pattern, then introduce constraints (permissions, quotas, audit). Notice how the guidance adapts while keeping guardrails.
Key Takeaways:
- Build security into the architecture by default.
- Limit capabilities (least privilege) and monitor behavior.
- Prefer safe failure and plan for containment.
More Resources:
- Input Validation & Sanitization: /defend-prompts/input-validation
- Prompt Isolation Techniques: /defend-prompts/prompt-isolation
- Output Filtering & Monitoring: /defend-prompts/output-filtering
- Red teaming basics: /exploit-prompts/what-is-red-teaming
Sources:
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- Google Secure AI Framework (SAIF): https://security.googleblog.com/2023/06/secure-ai-framework-saif.html
- OWASP Top 10 for LLM Applications: https://owasp.org/www-project-top-10-for-large-language-model-applications/
- Microsoft Secure AI guidance: https://learn.microsoft.com/azure/ai-services/openai/concepts/prompt-injection