Trustworthy Influence Protocol (TIP)
- TIP is a computational trust framework for multi-human multi-robot teams that integrates both direct and indirect trust dynamics using Bayesian inference.
- It employs cumulative Beta-distributed updates to capture and propagate trust from personal experiences and shared perceptions.
- Experimental validation shows TIP achieves lower RMSE than direct-only models, highlighting its effectiveness in scalable trust propagation.
The Trustworthy Influence Protocol (TIP) is a computational trust modeling framework designed for multi-human multi-robot teams, explicitly capturing both direct and indirect trust dynamics as they unfold over repeated interactions. Unlike prior models that focus primarily on dyadic (one human, one robot) trust relationships, TIP extends Bayesian trust inference to scenarios with multiple humans and robots, enabling structured propagation of trust across networked agents. This approach allows each human agent's trust in any robot (or other human) to be dynamically updated based on both personal experience and the communicated trust of teammates, modulated by an explicit weighting of interpersonal trust. The TIP formulation thus provides a principled basis for trust formation and propagation in human-robot teaming systems (Guo et al., 2023).
1. Mathematical Formulation and Notation
In TIP, let H denote the set of human agents and R the set of robot agents. For each ordered pair (a, b) with a ∈ H and b ∈ H ∪ R, the self-reported trust at time step k is tₖ^(a,b). TIP models tₖ^(a,b) as a Beta-distributed random variable:

tₖ^(a,b) ~ Beta(αₖ^(a,b), βₖ^(a,b)),

with the expected value:

μₖ^(a,b) = αₖ^(a,b) / (αₖ^(a,b) + βₖ^(a,b)).

Trust evolution is governed by the cumulative "experience counts" αₖ^(a,b) and βₖ^(a,b), which are incremented according to both direct and indirect observations.
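As a concrete illustration, the Beta trust state and its mean can be sketched as follows (the `TrustState` class and the uniform Beta(1, 1) initial counts are illustrative choices, not part of the original formulation):

```python
from dataclasses import dataclass

@dataclass
class TrustState:
    """Beta-distributed trust t_k^(a,b) ~ Beta(alpha, beta)."""
    alpha: float = 1.0  # cumulative "positive experience" count
    beta: float = 1.0   # cumulative "negative experience" count

    @property
    def mean(self) -> float:
        # Expected trust: mu = alpha / (alpha + beta)
        return self.alpha / (self.alpha + self.beta)

# nine positive vs. one negative unit of experience -> mean trust 0.9
state = TrustState(alpha=9.0, beta=1.0)
print(state.mean)  # 0.9
```

Because the counts only grow, the distribution concentrates over time: the mean reflects the success ratio while the total count α + β encodes how much evidence backs it.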
Direct-Experience Update
For direct human-robot interaction, with pₖ^r ∈ [0, 1] the performance of robot r at step k:

αₖ^(h,r) = αₖ₋₁^(h,r) + s^(h,r) · pₖ^r
βₖ^(h,r) = βₖ₋₁^(h,r) + f^(h,r) · (1 − pₖ^r),

where s^(h,r) and f^(h,r) are per-unit gain hyper-parameters for "success" and "failure."
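A minimal sketch of this direct update (the function name and the default gain values are assumptions for illustration, not fitted parameters):

```python
def direct_update(alpha, beta, p, s=5.0, f=10.0):
    """One direct-experience step: the success count grows with
    performance p in [0, 1], the failure count with (1 - p).
    s and f are the per-unit success/failure gains."""
    assert 0.0 <= p <= 1.0
    return alpha + s * p, beta + f * (1.0 - p)

# a full success only raises alpha; a full failure only raises beta
a, b = direct_update(1.0, 1.0, p=1.0)   # (6.0, 1.0): mean rises to 6/7
a, b = direct_update(a, b, p=0.0)       # (6.0, 11.0): mean drops below 1/2
```

Setting f larger than s (as here) makes trust slower to build than to lose, a common asymmetry in trust modeling.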
Indirect-Experience (Propagation) Update
When human x receives a trust report tₖ^(y,r) from human y on robot r, the update is:

αₖ^(x,r) ← αₖ^(x,r) + ŝ^(x,r) · tₖ^(x,y) · max(0, tₖ^(y,r) − tₖ₋₁^(x,r))
βₖ^(x,r) ← βₖ^(x,r) + f̂^(x,r) · tₖ^(x,y) · max(0, tₖ₋₁^(x,r) − tₖ^(y,r)),

where ŝ^(x,r), f̂^(x,r) are propagation gains and tₖ^(x,y) is how much x trusts y.
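The propagation step can be sketched the same way (function name and default gains are illustrative assumptions):

```python
def indirect_update(alpha_x, beta_x, t_prev_xr, t_yr, t_xy,
                    s_hat=2.0, f_hat=2.0):
    """Human x folds in y's report t_yr about robot r, weighted by
    x's interpersonal trust in y (t_xy). Only the surprise relative
    to x's previous trust t_prev_xr in r is propagated."""
    d_pos = max(0.0, t_yr - t_prev_xr)   # y trusts r more than x did
    d_neg = max(0.0, t_prev_xr - t_yr)   # y trusts r less than x did
    return (alpha_x + s_hat * t_xy * d_pos,
            beta_x + f_hat * t_xy * d_neg)

# y's glowing report (0.9) pulls x's trust up, scaled by t_xy = 0.8
print(indirect_update(1.0, 1.0, t_prev_xr=0.5, t_yr=0.9, t_xy=0.8))
```

Note that at most one of the two counts changes per report: a report more positive than x's current view only adds positive evidence, and vice versa.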
2. Algorithmic Structure and Pseudocode
The TIP trust update can be operationalized as follows:
```
initialize α₀^(a,b), β₀^(a,b), s^(a,b), f^(a,b), ŝ^(a,b), f̂^(a,b) for all a ∈ H, b ∈ H ∪ R
for k = 1…K do
    -- 1) assign each human h a robot r
    observe robot performance pₖ^r
    for each human h assigned to r do
        -- direct update
        αₖ^(h,r) ← αₖ₋₁^(h,r) + s^(h,r)·pₖ^r
        βₖ^(h,r) ← βₖ₋₁^(h,r) + f^(h,r)·(1−pₖ^r)
        compute μₖ^(h,r)
    end for
    -- 2) humans report and share tₖ^(h,r), and also report tₖ^(h,h′)
    for each pair of humans (x, y) do
        for each robot r that y just used do
            Δ⁺ ← max(0, tₖ^(y,r) − tₖ₋₁^(x,r))
            Δ⁻ ← max(0, tₖ₋₁^(x,r) − tₖ^(y,r))
            αₖ^(x,r) ← αₖ^(x,r) + ŝ^(x,r)·tₖ^(x,y)·Δ⁺
            βₖ^(x,r) ← βₖ^(x,r) + f̂^(x,r)·tₖ^(x,y)·Δ⁻
            compute μₖ^(x,r)
        end for
    end for
end for
```
This process yields closed-form, monotonic updates for all agent-agent trust values, supporting online inference with O(|H|·(|H|+|R|)) state variables and O(|H|²·|R|) potential trust propagation links.
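The full loop can be exercised end to end in a minimal two-human, one-robot simulation (all gain values, the performance sequence, and the interpersonal-trust weight below are illustrative, not fitted values from the study):

```python
S, F = 5.0, 10.0          # direct gains (illustrative)
S_HAT, F_HAT = 2.0, 2.0   # propagation gains (illustrative)
W_XY = 0.8                # how much x trusts y (illustrative)

mean = lambda ab: ab[0] / (ab[0] + ab[1])

x = [1.0, 1.0]  # (alpha, beta): x never operates the robot directly
y = [1.0, 1.0]  # y operates the robot every step

perf = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # an 80%-successful run
for p in perf:
    t_prev_x = mean(x)
    # direct update for the operator y
    y[0] += S * p
    y[1] += F * (1 - p)
    # y shares a trust report; x updates indirectly
    t_y = mean(y)
    x[0] += S_HAT * W_XY * max(0.0, t_y - t_prev_x)
    x[1] += F_HAT * W_XY * max(0.0, t_prev_x - t_y)

print(round(mean(y), 3), round(mean(x), 3))
```

Here x's trust in the robot tracks y's only through reported deltas, damped by W_XY; with W_XY = 0 no trust would propagate at all.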
3. Trust Propagation Dynamics and Network Interpretation
TIP can be viewed both as a protocol and as a trust propagation model over a directed graph G = (V, E), where V = H ∪ R. Edges represent direct human-robot experience (h → r) or indirect propagation (a report from y reaching x about r). The amount of indirect trust propagated depends on the interpersonal trust tₖ^(x,y) between human agents: higher interpersonal trust leads to more substantial trust transfer. No explicit decay or damping is present; all updates are cumulative unless an extension introduces a discount parameter.
A plausible implication is that trust relationships in large teams will strongly depend on the network structure and frequency of interaction, and that selective sharing or more sophisticated network topologies are needed in scaled deployments.
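The size of this propagation network grows quickly with team size, as a quick enumeration shows (the team composition below is illustrative):

```python
from itertools import permutations

humans = ["h1", "h2", "h3"]
robots = ["r1", "r2"]

# Direct-experience edges: one per (human, robot) pair
direct_edges = [(h, r) for h in humans for r in robots]

# Indirect-propagation links: x receives y's report about r, for x != y,
# i.e. |H|·(|H|-1)·|R| ordered triples
propagation_links = [(x, y, r)
                     for x, y in permutations(humans, 2)
                     for r in robots]

print(len(direct_edges), len(propagation_links))  # 6 12
```

Even for three humans and two robots there are twice as many propagation links as direct edges, which is why selective sharing becomes attractive at scale.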
4. Hyper-Parameters and Model Components
TIP requires careful setting of the following hyper-parameters:
| Parameter | Role | Notes |
|---|---|---|
| α₀^(a,b), β₀^(a,b) | Prior counts | Set the initial trust bias |
| s^(h,r), f^(h,r) | Direct-experience gains | Positive/negative update slopes |
| ŝ^(x,r), f̂^(x,r) | Indirect-propagation gains | Scale trust transferred from reports |
All updates are monotonic, with no built-in temporal decay. Temporal discounting could be incorporated by scaling all experience counts at each step by a factor γ ∈ (0, 1).
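Such a discounting extension is not part of the original model; a hypothetical sketch of what it might look like:

```python
def discount(alpha, beta, gamma=0.99):
    """Hypothetical temporal-decay step: shrink both experience counts
    by gamma each round. The mean alpha/(alpha+beta) is unchanged, but
    the effective sample size drops, so newer evidence weighs more."""
    return gamma * alpha, gamma * beta

a, b = discount(40.0, 10.0, gamma=0.5)   # (20.0, 5.0); mean stays 0.8
```

Because the scaling preserves the ratio of counts, decay here widens the posterior (less certainty) without biasing the trust estimate itself.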
5. Experimental Validation
TIP was validated via a human-subjects experiment with 15 pairs of participants (N = 30) over repeated sessions of a simulated drone search-and-detect task. Each team operated with two drones, one with 90% reliability and one with 60% reliability. In each session, after performing 10 trials with an assigned drone, individuals reported:
- Trust in their own robot (direct trust)
- Trust in their teammate (interpersonal trust)
- Trust in the other drone (indirect trust)
Model parameters were fitted by maximizing the log-likelihood of the observed trust reports under the model's Beta distributions:

ℒ = Σₖ Σ₍ₕ,ᵣ₎ log f(tₖ^(h,r); αₖ^(h,r), βₖ^(h,r)),

where f(·; α, β) is the Beta density, using gradient descent on the (concave) objective.
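A rough sketch of such a fit for the direct-only gains, using a pure-Python Beta log-density and a crude grid search in place of gradient descent (the report data and the gain grid are made up for illustration):

```python
import math

def beta_logpdf(t, a, b):
    """Log of the Beta(a, b) density at t in (0, 1)."""
    return ((a - 1) * math.log(t) + (b - 1) * math.log(1 - t)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def neg_log_likelihood(gains, reports, perf):
    """NLL of observed trust reports under direct-only TIP updates.
    gains = (s, f); reports[k] is the trust reported after step k."""
    s, f = gains
    a, b = 1.0, 1.0  # uniform prior counts
    nll = 0.0
    for t, p in zip(reports, perf):
        a += s * p
        b += f * (1.0 - p)
        nll -= beta_logpdf(t, a, b)
    return nll

# illustrative performance outcomes and self-reported trust values
perf = [1, 1, 0, 1, 1, 1, 0, 1]
reports = [0.7, 0.75, 0.6, 0.66, 0.7, 0.73, 0.62, 0.68]

# crude grid search over candidate gain pairs
best = min(((s, f) for s in (1, 2, 5, 10) for f in (1, 2, 5, 10)),
           key=lambda g: neg_log_likelihood(g, reports, perf))
print(best)
```

The study itself optimizes the full objective (including propagation gains) with gradient methods; the grid here just makes the likelihood surface tangible.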
TIP’s predictive performance, as measured by RMSE, was superior to a direct-only baseline model:
- TIP: RMSE = 0.057 and 0.082 for the two drones
- Direct-only baseline: RMSE = 0.085 and 0.107
Paired t-tests showed these improvements were statistically significant for each drone. Individual trust curves exhibited meaningful 90% credible intervals derived from the Beta posteriors (Guo et al., 2023).
6. Practical Implementation and Limitations
Implementation of TIP as a “Trustworthy Influence Protocol” depends on accurate, online elicitation of trust reports and interpersonal trust at each timestep. The protocol assumes truthful and cooperative self-reporting, does not explicitly account for adversarial or deceptive agents, and may require adaptation in heterogeneous or larger teams.
Scalability is O(|H|·(|H|+|R|)) for maintaining state variables, and O(|H|²·|R|) for propagation links. Current limitations include the lack of temporal decay for outdated experiences and modeling only single-dimensional trust; extensions to multi-faceted trust or trust estimation via behavioral or psychophysiological signals are proposed. The approach assumes uniform sharing; more complex social network structures or varying degrees of selectivity could further enhance fidelity in larger or hierarchical teams.
7. Significance and Prospective Extensions
TIP provides, to date, the first Bayesian computational trust model specifically for multi-human multi-robot teams, establishing mechanisms to both capture and propagate trust across agents in a principled manner. Its ability to fuse direct and indirect experience yields more accurate, personalized trust dynamics, outperforming baseline models that ignore trust propagation.
Potential future directions include incorporation of temporal decay mechanisms, richer trust representations (e.g., multi-dimensional trust attributes), and indirect estimation of trust via analysis of agent behavior or biosignals. A more generalized network structure supporting selective, non-uniform information sharing would enhance applicability to real-world, large-scale collaborative teams (Guo et al., 2023).