Degrees of Logical Competence

MARCELLO D’AGOSTINO
University of Ferrara

Is logical competence a matter of degree? It is uncontroversial that we may be perfectly able to conceive and understand sentences that are logically inconsistent without being able to recognize their inconsistency. And we may well understand a set of sentences without realizing that, taken together, they logically entail some other sentence. It is also commonly assumed that the capability of correctly recognizing inconsistency or logical entailment comes in degrees. How can such degrees of “logical depth” be defined, measured and experimentally tested? Although this issue is of crucial importance for the study of human (and non-human) cognition, it has been surprisingly neglected in the logical literature. We argue that this neglect stems from the fact that the most popular logical theories are inadequate to account for natural approximations to the idealized, logically omniscient agents they deal with. Indeed, these theories overdefine the meaning of the logical operators, with the effect that any agent who fully understands this meaning must, ipso facto, be logically omniscient. This clashes with our intuitions and with the known fact that most interesting logical systems are either undecidable or very likely intractable. Building on some remarks by W. V. O. Quine, we put forward an alternative, more primitive view of the meaning of the logical operators – which we call “the informational view” – according to which an agent who understands this primitive meaning is committed to correctly recognizing only a minimal (and feasible) subset of classically valid inferences. Deeper valid inferences can be recognized by means of an additional logical capability: manipulating “virtual information”, i.e. information that the agent does not actually possess. The depth at which this additional capability is used recursively in reasoning naturally leads to degrees of logical depth that can be associated, on the one hand, with the cognitive effort required to perform a given reasoning task and, on the other, with the amount of computational resources available to the agent.
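
As an informal illustration of how such degrees might be operationalized, the following Python sketch – an assumption made here, not the formal system developed in the paper – treats premises as clauses in conjunctive normal form. At depth 0 the agent draws consequences by unit propagation only; at depth k it may additionally assume an atom to be true or false, i.e. introduce virtual information, and reason at depth k−1 under each assumption. The clause representation, the function names, and the choice of unit propagation as the depth-0 engine are all assumptions of this sketch.

```python
# Toy sketch (not the paper's formal system): premises are CNF clauses,
# literals are signed integers, and "depth" bounds how many nested pieces
# of virtual information (case splits on an atom) the agent may use.
from typing import FrozenSet, Iterable, Optional, Set

Literal = int                  # positive int = atom, negative int = its negation
Clause = FrozenSet[Literal]    # a disjunction of literals


def unit_propagate(clauses: Set[Clause]) -> Optional[Set[Clause]]:
    """Close the clause set under unit propagation; None signals a contradiction."""
    clauses = set(clauses)
    while True:
        if frozenset() in clauses:
            return None                                  # empty clause derived
        units = {next(iter(c)) for c in clauses if len(c) == 1}
        if any(-u in units for u in units):
            return None                                  # complementary unit clauses
        new = set()
        for c in clauses:
            if len(c) > 1 and any(u in c for u in units):
                continue                                 # clause already satisfied
            new.add(frozenset(l for l in c if -l not in units))
        if new == clauses:
            return clauses                               # fixed point reached
        clauses = new


def inconsistent(clauses: Set[Clause], depth: int) -> bool:
    """Depth-bounded refutation: at most `depth` nested case splits on atoms."""
    closed = unit_propagate(clauses)
    if closed is None:
        return True                 # recognized without any virtual information
    if depth == 0:
        return False                # a depth-0 agent cannot look any deeper
    atoms = {abs(l) for c in closed if len(c) > 1 for l in c}
    for a in atoms:                 # introduce virtual information about atom a
        if inconsistent(closed | {frozenset({a})}, depth - 1) and \
           inconsistent(closed | {frozenset({-a})}, depth - 1):
            return True             # contradiction follows under both assumptions
    return False


def entails(premises: Iterable[Clause], literal: Literal, depth: int) -> bool:
    """Premises entail `literal` at the given depth iff adding its negation
    yields a depth-bounded inconsistency."""
    return inconsistent(set(premises) | {frozenset({-literal})}, depth)
```

For instance, with p = 1 and q = 2, the jointly inconsistent clauses p∨q, ¬p∨q, p∨¬q, ¬p∨¬q are not recognized as inconsistent at depth 0, while a single case split on p (or q) suffices at depth 1 – a toy analogue of how the required depth tracks the difficulty of the reasoning task:

```python
cnf = {frozenset({1, 2}), frozenset({-1, 2}), frozenset({1, -2}), frozenset({-1, -2})}
print(inconsistent(cnf, 0))   # False: no contradiction visible without virtual information
print(inconsistent(cnf, 1))   # True: one case split on atom 1 (p) reveals it
```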