Description
We routinely do something that many theories treat as exceptional: we talk about actions as if they were things, treat properties as objects of thought, and even turn function words into discourse entities when explanation, teaching, or analysis demands it (as when we weigh the “ifs” and “buts” of a proposal). This book argues that these are not “category violations” to be patched over, but signatures of a deeper capacity: noun-slot stabilization.
At the center is the Reverse Entailment Diagnostic (RED). Rather than merely repeating the truism that “context matters,” RED insists on witness design—controlled environments (possessives/genitives, nominalizers, quotation, and related regimes) that force stabilization and therefore permit backward inference to an item’s nominal potential.
The book then makes an architectural move: the sustained success of RED witnesses motivates a Universal Linguistic Domain (ULD ≡ UNS), a constrained geometry of admissible discourse positions in which linguistic material can be hosted as entity-like, predicate-like, or connective-like under principled constraints rather than ad hoc labels.
Built into this program are cognitive and computational consequences: if stabilization is an operation with graded cost, it should leave measurable signatures both in human sentence processing and in the behavior of NLP systems, which currently fail on nominalizations and “property-as-argument” structures.