
Argumentative inference in uncertain and inconsistent knowledge bases

Published 6 Mar 2013 in cs.AI (arXiv:1303.1503v1)

Abstract: This paper presents and discusses several methods for reasoning from inconsistent knowledge bases. A so-called argumentative-consequence relation taking into account the existence of consistent arguments in favor of a conclusion and the absence of consistent arguments in favor of its contrary, is particularly investigated. Flat knowledge bases, i.e. without any priority between their elements, as well as prioritized ones where some elements are considered as more strongly entrenched than others are studied under different consequence relations. Lastly a paraconsistent-like treatment of prioritized knowledge bases is proposed, where both the level of entrenchment and the level of paraconsistency attached to a formula are propagated. The priority levels are handled in the framework of possibility theory.

Citations (187)

Summary

Argumentative Inference in Uncertain and Inconsistent Knowledge Bases

In artificial intelligence, managing knowledge bases that contain inconsistencies and uncertainties is a critical challenge. The paper "Argumentative Inference in Uncertain and Inconsistent Knowledge Bases" by Salem Benferhat, Didier Dubois, and Henri Prade explores several methods for reasoning from such bases. Its central contribution is an argumentative consequence relation: a conclusion is accepted when the base contains a consistent argument in its favor and no consistent argument in favor of its contrary. The relation is studied for both flat and prioritized knowledge bases, with priority (entrenchment) levels handled in the framework of possibility theory.

Management of Inconsistency and Argumentative Consequence

Traditional approaches to inconsistency revise the knowledge base to restore consistency, which can discard valuable information. Coping strategies instead leave the contradictions in place and try to draw useful conclusions despite them. The paper develops one such strategy, argumentative inference: a conclusion can be safely inferred from an inconsistent knowledge base only if some consistent subset of the base entails it (an argument) and no consistent subset entails its negation (a counterargument).
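The argument/counterargument test can be made concrete with a brute-force sketch over a small propositional base. This is an illustrative toy example, not the paper's own; the names `exists_argument` and `argued` are ours:

```python
from itertools import product, chain, combinations

# Toy base. Atoms: p = penguin, b = bird, f = flies, w = has wings.
ATOMS = ["p", "b", "f", "w"]

def models(formulas):
    """All truth assignments satisfying every formula in the list."""
    out = []
    for vals in product([True, False], repeat=len(ATOMS)):
        a = dict(zip(ATOMS, vals))
        if all(f(a) for f in formulas):
            out.append(a)
    return out

def subsets(kb):
    return chain.from_iterable(combinations(kb, r) for r in range(len(kb) + 1))

def exists_argument(kb, goal):
    """Some consistent subset of kb classically entails goal."""
    for s in subsets(kb):
        ms = models(list(s))          # empty ms means s is inconsistent
        if ms and all(goal(a) for a in ms):
            return True
    return False

def argued(kb, goal, neg_goal):
    """Argumentative consequence: an argument for goal and none against it."""
    return exists_argument(kb, goal) and not exists_argument(kb, neg_goal)

KB = [
    lambda a: a["p"],                      # penguin
    lambda a: (not a["p"]) or a["b"],      # penguins are birds
    lambda a: (not a["b"]) or a["f"],      # birds fly
    lambda a: (not a["p"]) or not a["f"],  # penguins don't fly
    lambda a: a["w"],                      # has wings (uncontested)
]

f_    = lambda a: a["f"]
not_f = lambda a: not a["f"]
w_    = lambda a: a["w"]
not_w = lambda a: not a["w"]
print(argued(KB, f_, not_f))  # False: "flies" has a counterargument
print(argued(KB, w_, not_w))  # True: no consistent subset entails not-w
```

Note that "flies" is rejected even though it has an argument, because a counterargument of comparable standing exists; only the uncontested fact survives.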

Comparative Analysis of Consequence Relations

The paper performs a comprehensive comparative analysis of several inconsistency-tolerant consequence relations. Among these:

  • Free-Consequence is the most conservative: it uses only formulas that take part in no inconsistency, i.e., that belong to every maximal consistent sub-base.
  • MC-Consequence accepts a conclusion only if it follows from every maximal (with respect to set inclusion) consistent sub-base; Lex-Consequence restricts attention further to the preferred sub-bases of maximal cardinality (lexicographically best ones, in the prioritized case).
  • Existential Consequence is the most permissive: it accepts any conclusion supported by some consistent sub-base, at the price of sanctioning contradictory conclusions.
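The contrast between these relations shows up already on a four-formula toy base (our illustration, not the paper's example), where the maximal consistent sub-bases can be enumerated by brute force:

```python
from itertools import product, combinations

# Toy inconsistent base. Atoms: p = penguin, b = bird, f = flies.
ATOMS = ["p", "b", "f"]
KB = [
    ("p",     lambda a: a["p"]),
    ("p->b",  lambda a: (not a["p"]) or a["b"]),
    ("b->f",  lambda a: (not a["b"]) or a["f"]),
    ("p->~f", lambda a: (not a["p"]) or not a["f"]),
]

def models(formulas):
    out = []
    for vals in product([True, False], repeat=len(ATOMS)):
        a = dict(zip(ATOMS, vals))
        if all(f(a) for f in formulas):
            out.append(a)
    return out

def entails(idxs, goal):
    ms = models([KB[i][1] for i in idxs])
    return bool(ms) and all(goal(a) for a in ms)

# All consistent index sets, then the maximal ones (no consistent superset).
cons = [frozenset(s)
        for r in range(len(KB) + 1)
        for s in combinations(range(len(KB)), r)
        if models([KB[i][1] for i in s])]
mcs = [s for s in cons if not any(s < t for t in cons)]

def mc_entails(goal):     # skeptical: every maximal consistent sub-base
    return all(entails(s, goal) for s in mcs)

def exist_entails(goal):  # credulous: some maximal consistent sub-base
    return any(entails(s, goal) for s in mcs)

f_    = lambda a: a["f"]
not_f = lambda a: not a["f"]
print(len(mcs))                                  # 4 maximal sub-bases
print(sorted(frozenset.intersection(*mcs)))      # []: no free formulas here
print(mc_entails(f_))                            # False
print(exist_entails(f_), exist_entails(not_f))   # True True: contradictory
```

Every formula participates in some conflict, so Free-Consequence yields nothing; MC-Consequence rejects "flies"; and Existential Consequence endorses both "flies" and "does not fly", illustrating the risk the summary mentions.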

The authors show that the argumentative consequence relation never sanctions two contradictory conclusions, and relate it to paraconsistent logics, which reject the "ex falso quodlibet" rule (from a contradiction, anything follows).

Extension to Prioritized Knowledge Bases

For prioritized knowledge bases, where elements possess varying reliability levels, the paper proposes an advanced treatment by integrating levels of certainty into argumentative inference. This involves assessing arguments for propositions and their negations across different priority layers, ensuring conclusions are drawn from consistently reliable information.
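One standard possibilistic treatment of priorities can be sketched as follows (the toy base and weights are ours, not the paper's): cut the base at decreasing certainty levels, locate the inconsistency degree, and reason only from formulas strictly above it.

```python
from itertools import product

# Atoms: p = penguin, b = bird, f = flies.
ATOMS = ["p", "b", "f"]

def models(formulas):
    out = []
    for vals in product([True, False], repeat=len(ATOMS)):
        a = dict(zip(ATOMS, vals))
        if all(f(a) for f in formulas):
            out.append(a)
    return out

KB = [  # (formula, necessity degree): higher = more entrenched
    (lambda a: a["p"], 1.0),                      # penguin
    (lambda a: (not a["p"]) or a["b"], 0.9),      # penguins are birds
    (lambda a: (not a["p"]) or not a["f"], 0.8),  # penguins don't fly
    (lambda a: (not a["b"]) or a["f"], 0.6),      # birds fly (weakest)
]

def cut(kb, alpha):
    """Formulas with certainty at least alpha."""
    return [f for f, w in kb if w >= alpha]

def inconsistency_degree(kb):
    """Highest level whose cut is inconsistent (0.0 if none is)."""
    for alpha in sorted({w for _, w in kb}, reverse=True):
        if not models(cut(kb, alpha)):
            return alpha
    return 0.0

def safe_entails(kb, goal):
    """Entailment from formulas strictly above the inconsistency degree."""
    safe = [f for f, w in kb if w > inconsistency_degree(kb)]
    return all(goal(a) for a in models(safe))

not_f = lambda a: not a["f"]
print(inconsistency_degree(KB))  # 0.6: conflict enters with "birds fly"
print(safe_entails(KB, not_f))   # True: the more entrenched rules win
```

The weakest rule is the one blamed for the conflict, so the entrenched conclusion "does not fly" survives, which is exactly the effect of drawing conclusions from the most reliable layers.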

Paraconsistent-Like Reasoning

Paraconsistent-like reasoning is obtained by attaching a pair of weights to each formula: one reflecting its level of certainty (entrenchment) and one its level of paraconsistency, i.e., the extent to which its negation is also supported. Propagating both weights through inference localizes inconsistency to the formulas actually involved in conflicts, rather than treating the whole base as uniformly contaminated.
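One way to picture the dual-weight idea is to record, for each proposition, the strength of the best argument for it and the best argument against it. This is only an illustrative sketch; the paper's actual propagation rules differ in detail:

```python
from itertools import product, combinations

# Atoms: p = penguin, b = bird, f = flies.
ATOMS = ["p", "b", "f"]
KB = [  # (formula, necessity weight)
    (lambda a: a["p"], 1.0),
    (lambda a: (not a["p"]) or a["b"], 0.9),
    (lambda a: (not a["p"]) or not a["f"], 0.8),
    (lambda a: (not a["b"]) or a["f"], 0.6),
]

def models(formulas):
    out = []
    for vals in product([True, False], repeat=len(ATOMS)):
        a = dict(zip(ATOMS, vals))
        if all(f(a) for f in formulas):
            out.append(a)
    return out

def best_argument(kb, goal):
    """Weight of the strongest consistent subset entailing goal,
    rated by its weakest formula; 0.0 if no argument exists."""
    best = 0.0
    for r in range(1, len(kb) + 1):
        for s in combinations(kb, r):
            ms = models([f for f, _ in s])
            if ms and all(goal(a) for a in ms):
                best = max(best, min(w for _, w in s))
    return best

f_    = lambda a: a["f"]
not_f = lambda a: not a["f"]
# "flies" is supported at 0.6 but attacked at 0.8: a local, graded
# conflict attached to one proposition, not a collapse of the base.
print(best_argument(KB, f_), best_argument(KB, not_f))  # 0.6 0.8
```

The pair (0.6, 0.8) attached to "flies" expresses both how entrenched the proposition is and how strongly it is contradicted, which is the kind of dual bookkeeping the summary describes.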

Practical and Theoretical Implications

The methodologies presented have implications for both theory and practice in AI. They offer a way to keep inconsistent knowledge bases operational and to exploit priorities among pieces of information effectively. Future work may apply these inference modes in larger, real-world AI systems that require nuanced decision-making and reasoning under uncertainty, and evaluate them against default-reasoning benchmarks to test their flexibility and robustness.

In conclusion, the investigation led by Benferhat, Dubois, and Prade presents a robust framework that balances conservative reasoning within information systems with advanced methodologies that capitalize on argument structures under uncertainty. As AI continues to grapple with increasingly complex systems and datasets, such strategies will be pivotal in ensuring reliable, consistent, and meaningful inferences in software agents and decision-support systems.
