Issue 349: Belief Values

Starting Date: 
2017-10-06
Working Group: 
3
Status: 
Proposed
Background: 

Posted by Martin on 3/10/2017

Dear All,

Following a request from Dominic about how to deal with uncertain associations, such as "probably author of", I'd like to discuss a solution: expanding properties with the "Property Class" (PC) construct and adding a "certainty value" as a ".2" property, for all those cases in which the belief is that of the maintainers of the knowledge base, in contrast to an explicit inference by a particular actor.
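
For concreteness, here is a rough Turtle sketch of the pattern I have in mind. PC14_carried_out_by, P01_has_domain, P02_has_range and P14.1_in_the_role_of follow the existing PC extension; the ".2" certainty property and the value "possible" are only illustrative placeholders for what is being proposed.

    @prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
    @prefix ex:  <http://example.org/> .

    # Reified "probably author of": the maintainers' belief about who carried out the creation
    ex:attribution_1 a crm:PC14_carried_out_by ;
        crm:P01_has_domain ex:creation_of_work ;      # the E65 Creation of the work
        crm:P02_has_range  ex:presumed_author ;       # the E39 Actor in question
        crm:P14.1_in_the_role_of ex:author_role ;
        ex:P14.2_has_certainty ex:possible .          # proposed ".2" property (name invented here)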

Posted by Robert on 3/10/2017

We have dealt with this situation by using Attribute Assignment, since in RDF the .1 (and .2) properties would require reification anyway.
It can also cover "workshop of" or "style of" attributions, which are often cases of uncertainty about the individual.
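
For reference, this is roughly how such an attribution looks with E13 Attribute Assignment (Turtle sketch; the example URIs and the "probable attribution" type are illustrative, not prescribed identifiers, and the property names follow the RDFS encoding we use):

    @prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
    @prefix ex:  <http://example.org/> .

    ex:assignment_1 a crm:E13_Attribute_Assignment ;
        crm:P14_carried_out_by ex:curator ;                          # who makes the claim
        crm:P140_assigned_attribute_to ex:production_of_painting ;   # the E12 Production event
        crm:P141_assigned ex:workshop_of_rembrandt ;                 # the group it is tentatively assigned to
        crm:P177_assigned_property_type crm:P14_carried_out_by ;     # which relation is being asserted
        crm:P2_has_type ex:probable_attribution .                    # qualifies the strength of the claim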

We resisted trying to quantify uncertainty: from an interoperability viewpoint, there is very little to be gained from saying that one person is 5/10 sure of an assertion whereas someone else is 4/10 certain. The temptation is to use the strength of belief as an indicator of the likelihood of truth, rather than of the state of mind of the asserting agent. The first would be useful but impossible; the second we consider not useful for interoperability between public systems.
(Which is not to say it’s not valuable, just not in our scope of work)
 

Posted by Martin on 3/10/2017

Dear Robert,

In the discussion about co-reference statements and deductions from shortcuts we came to understand that reification via Attribute Assignment is the wrong method for extending properties, because it confuses the agency of belief: the maintainer of the database would describe himself as making an attribute assignment in some cases and not in others.

Therefore, I propose the PC construct.

Secondly, I absolutely agree and do not propose a quantification of belief. There are non-quantitative forms of logic that deal with belief values other than true/false, such as "possible". We can also think of other measures of supporting evidence.

With regard to interoperability there is no problem as long as a recall-precision hierarchy is preserved. As long as "true" implies "possible", we get what we need (i.e., querying for possible also returns true; querying for possible and not true returns only the merely possible).
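
To make the recall-precision point concrete (a sketch only, with invented property names in the crmx namespace): if the definite property is declared a subproperty of the "possible" one, then a query under RDFS entailment for the possible relation also returns every definite statement, and "possible and not definite" isolates the uncertain cases.

    @prefix crm:  <http://www.cidoc-crm.org/cidoc-crm/> .
    @prefix crmx: <http://example.org/crmx/> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # "definitely carried out by" implies "possibly carried out by"
    crm:P14_carried_out_by       rdfs:subPropertyOf crmx:possibly_carried_out_by .
    crmx:probably_carried_out_by rdfs:subPropertyOf crmx:possibly_carried_out_by .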

Posted by Franco on 4/10/2017

Dear all,

The issue is extensively discussed in this paper:

Niccolucci, F. & Hermon, S. Expressing reliability with CIDOC CRM, Int J Digit Libr (2016). https://doi.org/10.1007/s00799-016-0195-1

I can send a draft copy to those interested - but not broadcast it for copyright reasons.

In short, the idea is to consider the assessment of the assignment as an E14 Measurement, which measures a dimension: the uncertainty, or rather the reliability, of this assignment. The outcome of this measurement, an E60 Number, can be anything: a number, a function, an ordinal value. It is linked to the dimension by P90 has value. We were actually proposing a numeric approach, and that is why we ended up with a number.
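
As a minimal Turtle sketch of the pattern, using the base-CRM measurement and dimension properties (the measurement class is E16_Measurement in the CRM RDFS encoding; the URIs, the "reliability" type and the numeric value are invented here for illustration and are not quoted from the paper):

    @prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
    @prefix ex:  <http://example.org/> .

    ex:reliability_assessment_1 a crm:E16_Measurement ;
        crm:P39_measured ex:attribution_1 ;          # the assignment being assessed
        crm:P40_observed_dimension ex:reliability_1 .

    ex:reliability_1 a crm:E54_Dimension ;
        crm:P2_has_type ex:reliability ;             # the kind of dimension measured
        crm:P90_has_value "0.85"^^xsd:decimal .      # could equally be an ordinal or other value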

I tend to disagree with Robert's statement that quantification is in this case useless for public systems. In my opinion it is instead paramount for data reuse, just as the stars in Booking.com reviews are paramount for choosing a hotel. It doesn't matter if the statement "Martin Doerr is an alien from Saturn" has reliability 0.000001 for you and 0.1 for me; people who know you and me can draw conclusions exactly because they know you and me. This holds regardless of the truth of the statement, which every SIG member knows to be true.

Perhaps the explanation of the "subjective" approach to this quantification may provide additional insight. References 7 and 8 in the paper explain this approach in a quite difficult and complicated way; that is why I quote them.

The paper also addresses how this compares to the CRMinf approach and I6 Belief Value. If anything in this regard changed in CRMinf after early 2016, it is of course not taken into account.

Finally, there are provisions to document who said that, why, and where it is documented.