
Descriptive Access Levels

Updated 24 January 2026
  • Descriptive access levels are structured categories that define gradations of information, functionality, and control across domains like AI auditing, accessibility, and MIS administration.
  • They enable precise tailoring and evaluation of audits and interactions by calibrating technical modalities, user roles, and ethical principles.
  • These frameworks support secure and adaptive management of system internals, balancing transparency, risk minimization, and operational efficiency.

Descriptive access levels are structured categories that define the gradations of information, functionality, or control available to an individual or system across domains such as algorithmic auditing, accessibility, data visualization, and information system administration. These levels operationalize variable access to resources, semantic content, or system internals, enabling precise tailoring and evaluation of audits, interactions, and permissions based on user roles, technical constraints, and ethical principles. There is extensive cross-disciplinary research formalizing such levels in AI model evaluation (Charnock et al., 17 Jan 2026), accessibility for blind and low-vision users (Zhao et al., 2024, Zong et al., 2022, Lundgard et al., 2021), algorithmic fairness audits (Zaccour et al., 1 Feb 2025), and management information systems (Mishra, 2013).

1. Foundational Principles and Taxonomies

Descriptive access levels are instantiated as discrete bundles of capabilities or information disclosure, often configured along orthogonal axes. In auditing and security contexts, access levels are chiefly governed by the technical modality of access (e.g., black-box, grey-box, white-box), granularity of provided information, and constraints on time frame for evaluation (Charnock et al., 17 Jan 2026). In accessibility research, descriptive access levels structure the richness with which non-visual users can traverse or interrogate visual information—ranging from basic presence notification to full multimodal interaction and semantic enrichment (Zhao et al., 2024, Zong et al., 2022, Lundgard et al., 2021). For MIS administration, access is determined by user role, data sensitivity, functional right (read/write/admin), and exceptions, captured by formal mappings and role-data matrices (Mishra, 2013).

2. Access Levels in AI Model Auditing

Charnock et al. propose three nested descriptive access levels—AL1, AL2, and AL3—for external evaluations of dangerous capabilities in frontier AI systems (Charnock et al., 17 Jan 2026):

  • AL1 (Black-Box & Minimal Information): Evaluator submits inputs via API, views raw outputs plus optional chain-of-thought traces or flag signals. Only minimal model and data information is provided (e.g., excluded topics, system prompts, guardrail lists). Evaluation time frame is ≥ 20 business days. Principal benefits include rapid detection of surface-level vulnerabilities; however, high false-negative risk limits stakeholder trust.
  • AL2 (Grey-Box & Substantial Information): Adds access to partial internals (logits, log-probs, activations), unfiltered fine-tuning, and detailed contextual information (evaluation methods, sample sizes, confidence intervals). Enables deeper capability elicitation and more robust adversarial testing. Benefits include lower false-negative rate and greater transparency, at increased operational and security cost.
  • AL3 (White-Box & Comprehensive Information): Offers full read/write access to model parameters, gradients, classifier code, and detailed internal reports. Supports exhaustive probing of latent vulnerabilities and model behaviours. Yields highest confidence in evaluation rigor and trust, with maximal risk of IP leakage and misuse.

A tabular summary:

| Dimension | AL1 – Black-Box, Minimal | AL2 – Grey-Box, Substantial | AL3 – White-Box, Comprehensive |
| --- | --- | --- | --- |
| Model access | API, CoT, flags | + logits, log-probs, fine-tune | + full θ/∇θ, classifier code |
| Information | Minimal | Substantial | Comprehensive |
| Time frame | ≥ 20 business days | ≥ 20 business days | > 20 business days |
| False negatives | High | Medium | Low |
| Security risk | Low | Medium | High |
| EU CoP mapping | Best Practice | State of the Art | Innovative |
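As a concrete sketch, the three nested levels can be encoded as data so an audit harness can gate evaluator capabilities by level. The class, field, and modality names below are illustrative, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessLevel:
    """One descriptive access level for an external AI audit (illustrative)."""
    name: str
    model_access: tuple   # modalities granted to the evaluator
    information: str      # amount of contextual information disclosed
    min_business_days: int

# Levels are nested: each adds modalities on top of the previous one.
AL1 = AccessLevel("AL1", ("api", "cot", "flags"), "minimal", 20)
AL2 = AccessLevel("AL2",
                  AL1.model_access + ("logits", "log_probs", "activations", "fine_tune"),
                  "substantial", 20)
AL3 = AccessLevel("AL3",
                  AL2.model_access + ("parameters", "gradients", "classifier_code"),
                  "comprehensive", 21)

def allows(level: AccessLevel, modality: str) -> bool:
    """Check whether a given access level grants a modality."""
    return modality in level.model_access
```

With this nesting, `allows(AL2, "logits")` holds while `allows(AL1, "gradients")` does not, mirroring the table's cumulative "+" notation.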

3. Access Levels for Algorithm Auditing and Data Transparency

Zaccour et al. identify three auditor data-access scenarios, each structuring the reliability of fairness metric estimation for quantitative evaluation (Zaccour et al., 1 Feb 2025):

  • A. Aggregated Statistics: Auditor only receives confusion-matrix counts (TP, FP, FN, TN) partitioned by protected attribute. No individual-level data or model outputs. Suitable for statistical parity audit, but precludes nuanced stratification or error analysis. Differentially private (DP) counts yield strong privacy guarantees and audit reliability at appropriate budget (ε).
  • B. Individual Data + Model Outputs: Auditor obtains full record-level information, including model predictions. Enables exhaustive metric calculation (demographic parity, equalized odds, etc.), intersectionality analysis, bootstrap-based uncertainty estimation. Reliability depends on sample size and feature completeness; removal of key features collapses reliability.
  • C. Individual Data, No Model Outputs: Auditor must re-train a surrogate model given only features and ground-truth labels. Requires substantially larger audit samples (≈160% of the standard size) and perfect knowledge of the model class and hyperparameters, and is sensitive to missingness, feature bias, and sample constraints.
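Scenario A's differentially private release can be sketched with the Laplace mechanism: each individual contributes to exactly one confusion-matrix cell, so perturbing every cell with Laplace(1/ε) noise gives ε-DP. This is a minimal illustration under that sensitivity-1 assumption; the function names are invented:

```python
import math
import random

def _laplace(rng, scale):
    """Sample Laplace(0, scale) by inverse-CDF from a uniform draw."""
    u = rng.random() - 0.5
    if u == -0.5:          # guard against the measure-zero endpoint
        u = -0.4999999
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_confusion_counts(counts, epsilon, seed=None):
    """Release confusion-matrix counts (TP/FP/FN/TN per protected group)
    under epsilon-differential privacy via the Laplace mechanism."""
    rng = random.Random(seed)
    noisy = {}
    for group, cells in counts.items():
        noisy[group] = {
            cell: max(0, round(n + _laplace(rng, 1.0 / epsilon)))
            for cell, n in cells.items()
        }
    return noisy
```

Smaller ε (stronger privacy) injects more noise, which is exactly the budget trade-off the audit-reliability analysis quantifies.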

Key metrics:

\Delta_{\rm DP} = |P(\hat Y = 1 \mid A = 0) - P(\hat Y = 1 \mid A = 1)|

\Delta_{\rm EO} = \tfrac{1}{2} \Big( |P(\hat Y = 1 \mid A = 0, Y = 1) - P(\hat Y = 1 \mid A = 1, Y = 1)| + |P(\hat Y = 1 \mid A = 0, Y = 0) - P(\hat Y = 1 \mid A = 1, Y = 0)| \Big)

\Delta_{\rm EOpp} = |P(\hat Y = 1 \mid A = 0, Y = 1) - P(\hat Y = 1 \mid A = 1, Y = 1)|
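Under scenario B, where record-level predictions are available, the three gaps reduce to differences of conditional positive rates. A self-contained sketch (function names are illustrative):

```python
def rate(y_hat, mask):
    """P(Y_hat = 1) over the records selected by mask."""
    sel = [p for p, m in zip(y_hat, mask) if m]
    return sum(sel) / len(sel)

def fairness_gaps(y_hat, y, a):
    """Demographic parity, equalized odds, and equal opportunity gaps for
    binary predictions y_hat, labels y, and protected attribute a in {0, 1}."""
    dp = abs(rate(y_hat, [ai == 0 for ai in a])
             - rate(y_hat, [ai == 1 for ai in a]))
    tpr_gap = abs(rate(y_hat, [ai == 0 and yi == 1 for ai, yi in zip(a, y)])
                  - rate(y_hat, [ai == 1 and yi == 1 for ai, yi in zip(a, y)]))
    fpr_gap = abs(rate(y_hat, [ai == 0 and yi == 0 for ai, yi in zip(a, y)])
                  - rate(y_hat, [ai == 1 and yi == 0 for ai, yi in zip(a, y)]))
    return {"delta_dp": dp,
            "delta_eo": 0.5 * (tpr_gap + fpr_gap),
            "delta_eopp": tpr_gap}
```

In scenario A the same quantities must instead be derived from the (possibly noisy) group-level confusion-matrix counts, which is where the reliability analysis enters.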

Summary table:

| Auditor scenario | Inputs/outputs available | Reliability/privacy profile |
| --- | --- | --- |
| A. Aggregated statistics | Group-level confusion matrices | High privacy; adequate for DP release |
| B. Individual + outputs | Full features, outcomes, predictions | Best accuracy if data quality is high |
| C. Individual, no outputs | Features and labels, no predictions | Higher error; large samples required |

4. Hierarchical Descriptive Access in Visual Accessibility

Accessibility research formalizes descriptive access levels as ladders or hierarchical structures for BLV users interpreting diagrams and visualizations (Zhao et al., 2024, Zong et al., 2022, Lundgard et al., 2021):

  • Ladder of Diagram Access (Zhao & Nacenta):
  1. Unaware: Diagram presence not communicated.
  2. Aware: Diagram presence is signaled, but no semantic schema or content.
  3. Single Static Perspective: One-shot description (alt-text/OCR) yields sentential, context-stripped representation; basic item memorization possible, no pattern detection.
  4. Multiple Perspectives: Multiple vantage points (semantic queries, tactile/audio mapping), interaction supports partial synthesis and comparison.
  5. Comprehensive Access: Rapid overview, drill-down, integrated multimodal workflows (text/audio/haptics), supports advanced sensemaking, full parity with sighted users.
  • Descriptive Access Modes in Visualization (Zong et al.):
  1. High-level summary (“overview mode”): Chart existence, axes, global trends; orienting the user.
  2. Mid-level overview (“branch mode”): Branch-specific descriptions (e.g., subset mean, anomaly highlight) with context, enabling localized information foraging.
  3. Datum-by-datum (“leaf mode”): Fine-grained readings of individual entries, optimized by verbosity controls.
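The overview/branch/leaf modes can be modeled as a navigable tree whose nodes carry a description at each granularity, with a verbosity setting controlling traversal depth. This is a structural sketch, not Zong et al.'s implementation, and the chart content is invented:

```python
from dataclasses import dataclass, field

@dataclass
class DescNode:
    """A node in a hierarchical chart description: overview at the root,
    branch summaries in the middle, datum-level readings at the leaves."""
    text: str
    children: list = field(default_factory=list)

def describe(node, verbosity):
    """Return descriptions down to the requested depth
    (0 = overview only, 1 = + branches, 2 = + individual data)."""
    out = [node.text]
    if verbosity > 0:
        for child in node.children:
            out.extend(describe(child, verbosity - 1))
    return out

chart = DescNode(
    "Bar chart of rainfall by month; y-axis in mm, upward trend.",
    [DescNode("Q1: mean 42 mm, no anomalies.",
              [DescNode("Jan: 38 mm"), DescNode("Feb: 41 mm"), DescNode("Mar: 47 mm")]),
     DescNode("Q2: mean 55 mm, May is an outlier.",
              [DescNode("Apr: 49 mm"), DescNode("May: 71 mm"), DescNode("Jun: 45 mm")])])
```

`describe(chart, 0)` yields only the orienting overview, `describe(chart, 1)` adds the two branch summaries, and `describe(chart, 2)` reaches every datum, matching the overview-first, drill-down ordering the guidelines prescribe.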

Design guidelines include layered, hierarchical segmentation; overview-first ordering; custom verbosity settings; and robust navigation affordances for efficient granularity switching (Zong et al., 2022).

5. Semantic Content Levels in Natural Language Descriptions

Four-level semantic models structure the content of natural language descriptions in visual and tabular representations (Lundgard et al., 2021, Zong et al., 2022):

  1. Elemental & Encoded Properties: Chart construction, marks, axes, labels; provides context, grounds the reader.
  2. Statistical Concepts & Relations: Data facts, descriptive statistics, outlier detection, group comparisons; enables fact-based interpretation.
  3. Perceptual & Cognitive Phenomena: Higher-order trends, clusters, gaps, exceptions; supports “gist” comprehension.
  4. Domain-Specific Insights: External, expert commentary—social, political, or scientific explanations beyond direct visual evidence.

Reader studies demonstrate that BLV participants rank Level 3 (perceptual/cognitive) content highest for usefulness, whereas sighted readers favor Level 4 insights and Level 3 trends. The distribution of levels across description corpora underscores the importance of modular, user-controlled access that lets readers select or suppress commentary depth (Lundgard et al., 2021).
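One way to realize that modular control is to tag each sentence with its semantic level and filter by reader preference. A minimal sketch; the tags and sentences are invented examples:

```python
def select_levels(description, wanted):
    """Keep only sentences whose semantic level the reader has enabled."""
    return " ".join(text for level, text in description if level in wanted)

description = [
    (1, "Line chart of unemployment rate, 2000-2020, y-axis in percent."),
    (2, "The mean rate is 6.1%; 2009 is the maximum at 10%."),
    (3, "The rate climbs sharply after 2008, then declines steadily."),
    (4, "The spike reflects the 2008 financial crisis."),
]

# A reader prioritizing perceptual content might suppress Level 4 commentary:
concise = select_levels(description, {1, 2, 3})
```

The same tagging supports the opposite preference: a sighted reader could request levels {3, 4} only.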

6. Formalization, Matrix Models, and Permission Control

MIS frameworks implement descriptive access as multi-dimensional matrices over user roles, data types, technical levels, and exceptional overrides (Mishra, 2013):

  • Technical Levels: Read (r), Write (w), Administer (a), ordered r < w < a.
  • User Types: Outsider, guest, staff, manager, administrator.
  • Data Types: General, managerial, confidential.
  • Special Access: Exception rules for individualized permission adjustments.

Formalization:

Let U denote the set of users, D the set of data objects, and L = \{r, w, a\} the set of access levels, with mapping functions t: U \to UT for user type and c: D \to DT for data type. Writing B for the base role-data matrix over UT \times DT and S for the special-access (exception) table, the effective access is:

E(u, d) = \begin{cases} S(u, d) & \text{if } S(u, d) \ne \bot \\ B(t(u), c(d)) & \text{otherwise} \end{cases}

Tabular mapping supports rapid auditing and evolution of the access control matrix as organizational needs change.
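The effective-access rule can be sketched directly, with the base matrix B keyed by (user type, data type) and the special-access table S holding per-user overrides. The dictionary contents below are illustrative examples, not prescribed by the framework:

```python
# Base role-data matrix B: (user_type, data_type) -> highest permitted level.
B = {
    ("guest", "general"): "r",
    ("staff", "general"): "w",
    ("staff", "managerial"): "r",
    ("manager", "managerial"): "w",
    ("administrator", "confidential"): "a",
}

# Special-access table S: per-user, per-object exceptions that override B.
S = {("alice", "budget_2026"): "r"}

ORDER = {"r": 0, "w": 1, "a": 2}  # technical levels ordered r < w < a

def effective_access(user, user_type, obj, data_type):
    """E(u, d): the override S(u, d) if defined, else the base B(t(u), c(d));
    returns None when no access is granted at all."""
    return S.get((user, obj), B.get((user_type, data_type)))

def can(level_granted, level_needed):
    """True when the granted technical level dominates the needed one."""
    return level_granted is not None and ORDER[level_granted] >= ORDER[level_needed]
```

For example, a guest can read but not write general data, while the exception row grants alice read access to `budget_2026` even though her base cell for confidential data is empty; auditing "who can see what" reduces to enumerating B and S.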

7. Guidelines, Risks, and Best Practices

Descriptive access level frameworks prescribe several universal guidelines:

  • Always commence with a global summary or presence notification.
  • Segment information into modular, hierarchical layers, affording navigation between granularity levels.
  • Expose verbosity and commentary toggles enabling personalized selection of detail.
  • Implement secure, role-based designs that minimize privilege by default and restrict “admin” capabilities to tightly controlled administrators.
  • For algorithm auditing, favor DP-aggregated statistics where individual-level sharing is infeasible, and reject synthetic or partial data absent explicit validation of disparity fidelity.
  • Avoid proliferation of exception rules without oversight, and maintain robust “who-can-see-what” auditing.

Risks include the potential for false negatives in shallow or minimal access scenarios, security and privacy leakage in white-box or high-detail modes, and operational burden as complexity increases. The recommended approach is to calibrate access levels to task risk, stakeholder trust requirements, and regulatory mandates (e.g., EU Code of Practice mappings: AL1→Best Practice, AL2→State of the Art, AL3→Innovative) (Charnock et al., 17 Jan 2026).

Descriptive access level models thus unify granularity, auditing rigor, and user-centered design, enabling secure, reliable, and adaptive management of information and capabilities across technical domains.
