Usable, Fine-Grained Consent Model for Agent Skills

Design a consent model for the Agent Skills framework that grants permissions at a fine granularity, constrains risk effectively, and remains usable for non-expert users, while being calibrated to the action space of LLM-based agents.

Background

The paper identifies a structural consent gap in Agent Skills: a single installation grants persistent, undifferentiated operator-level authority. Post-installation content changes inherit trust without re-approval, creating opportunities for abuse.
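One way to close the "post-installation changes inherit trust" half of the gap is to bind approval to the skill's exact content rather than to its name. The sketch below is a minimal illustration of that idea, not the paper's design; the `approvals.json` store and function names are hypothetical.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical store mapping skill name -> content digest approved by the user.
APPROVALS = Path("approvals.json")

def skill_digest(skill_dir: Path) -> str:
    """Hash every file (path + bytes) so any post-install edit changes the digest."""
    h = hashlib.sha256()
    for f in sorted(skill_dir.rglob("*")):
        if f.is_file():
            h.update(f.relative_to(skill_dir).as_posix().encode())
            h.update(f.read_bytes())
    return h.hexdigest()

def is_approved(skill_dir: Path) -> bool:
    """True only if the user approved this exact content; edits force re-approval."""
    approved = json.loads(APPROVALS.read_text()) if APPROVALS.exists() else {}
    return approved.get(skill_dir.name) == skill_digest(skill_dir)

def record_approval(skill_dir: Path) -> None:
    """Pin the user's consent to the skill's current content hash."""
    approved = json.loads(APPROVALS.read_text()) if APPROVALS.exists() else {}
    approved[skill_dir.name] = skill_digest(skill_dir)
    APPROVALS.write_text(json.dumps(approved))
```

Under this scheme a skill modified after approval no longer matches its pinned digest, so the runtime would re-prompt instead of silently extending the original grant.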

The authors emphasize the usability–security tension: overly granular prompts cause approval fatigue while coarse permissions fail to constrain risk. They call for an interdisciplinary design that is both effective and usable.
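One common way to navigate this tension is risk-tiered consent: auto-allow low-risk actions, grant medium-risk capabilities per session, and confirm high-risk actions individually. The sketch below illustrates that tiering under assumed risk categories; the class, capability names, and tier boundaries are hypothetical, not taken from the paper.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1      # e.g. reading files inside the skill's own directory
    MEDIUM = 2   # e.g. reading user files, outbound network reads
    HIGH = 3     # e.g. writing files, shell execution, sending data out

class ConsentPolicy:
    """Auto-allow LOW, prompt once per session for MEDIUM, prompt per action for HIGH.

    Tiering aims to avoid both approval fatigue (prompting for everything)
    and coarse grants (prompting for nothing).
    """

    def __init__(self, ask):
        self.ask = ask               # callable posing a yes/no question to the user
        self.session_grants = set()  # MEDIUM capabilities granted this session

    def allow(self, capability: str, risk: Risk) -> bool:
        if risk is Risk.LOW:
            return True
        if risk is Risk.MEDIUM:
            if capability in self.session_grants:
                return True
            if self.ask(f"Allow '{capability}' for the rest of this session?"):
                self.session_grants.add(capability)
                return True
            return False
        # HIGH: always ask, never cache the answer.
        return self.ask(f"Allow one-time '{capability}'?")
```

The open research question is precisely where these tier boundaries should sit for LLM-agent action spaces, and how to present the prompts so non-expert users can make meaningful decisions.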

References

"Designing a consent model that is both security-effective and usable for non-expert users, calibrated to the specific action space of LLM-based agents, is an open problem that requires interdisciplinary research spanning security, human-computer interaction, and agent system design."

Towards Secure Agent Skills: Architecture, Threat Taxonomy, and Security Analysis (2604.02837, Li et al., 3 Apr 2026), Section 7.2, Open Challenges (C2: Consent Model Design)