Abstract: Understanding the temporal relations among events in text is a critical aspect of reading comprehension, which can be evaluated in the form of temporal question answering (TQA). When explicit timestamps are absent, TQA is a challenging task that requires models to understand the nuanced differences among textual expressions that indicate different temporal relations (e.g., "What happened right before dawn" denotes a small subset of "What happened before dawn"). In this paper, we propose to reformulate the task of TQA as open temporal relation extraction. Specifically, we decompose each question into a question event (e.g., "dawn") and an open temporal relation (OTR, e.g., "happened before"), which is neither pre-defined nor tied to timestamps; we ground the former in the context while sharing the representation of the latter across contexts. This OTR formulation of QA has two advantages: 1) it allows us to learn context-agnostic, free-text-based relation representations that generalize across different contexts and events, leading to higher data efficiency; 2) it allows us to explicitly model the differences among temporal relations with a contrastive loss function, which helps better capture both mutually exclusive relations (e.g., an event cannot simultaneously "happen before" and "happen after" another) and more nuanced distinctions (e.g., not everything that "happened before" an event "happened right before" it). Empirical evaluations on TORQUE, a recently released dataset of temporal ordering questions, show that our approach attains significant improvements over the state-of-the-art performance, with especially large gains in exact-match (EM) consistency on the contrast question sets.
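The abstract does not specify the exact form of the contrastive loss; the following is a minimal, illustrative sketch of a margin-based contrastive objective over relation representations, where all function names, embeddings, and hyperparameters (e.g., the margin value) are hypothetical, not taken from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (plain Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negatives, margin=0.5):
    """Hinge-style contrastive loss: pull the anchor relation representation
    toward a compatible ("positive") relation and push it away from
    incompatible ("negative") ones, e.g., "happened before" vs. "happened after".
    Names and the margin value are illustrative assumptions."""
    pos_sim = cosine(anchor, positive)
    loss = 0.0
    for neg in negatives:
        # Penalize any negative whose similarity comes within `margin`
        # of the positive's similarity to the anchor.
        loss += max(0.0, margin - (pos_sim - cosine(anchor, neg)))
    return loss / len(negatives)

# Toy 2-D embeddings: "right before" is a nuanced variant of "before",
# while "after" is mutually exclusive with it.
before = [1.0, 0.0]
right_before = [0.9, 0.1]
after = [-1.0, 0.0]

# "before" vs. its compatible variant incurs no penalty against "after".
zero_loss = contrastive_loss(before, right_before, [after])
# Treating the mutually exclusive relation as the positive is penalized.
nonzero_loss = contrastive_loss(before, after, [right_before])
```

Under these toy embeddings, the compatible pair is already well separated from the exclusive relation (zero loss), while swapping the roles produces a positive penalty, mirroring the mutual-exclusivity constraint described above.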