The problem of recognizing textual entailment (RTE) has recently been addressed using syntactic and lexical models with some success. Here, we further explore this problem, this time using the world knowledge captured in large semantic graphs such as WordNet. We show that semantic graphs made of synsets and selected relationships between them enable fairly simple methods that provide very competitive performance. First, assuming a solution to word sense disambiguation, we report on the performance of these methods in the four basic areas of information retrieval (IR), information extraction (IE), question answering (QA), and multi-document summarization (SUM), as covered by the benchmark datasets of the 2006 RTE challenge, which were designed to test the entailment problem. We then show how the same methods yield a solution to word sense disambiguation, which, combined with the previous solution, yields a fully automated system with about the same performance.
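The core idea of entailment over a semantic graph of synsets can be illustrated with a minimal sketch. This is not the paper's implementation: the toy graph, sense identifiers, and the reachability criterion (every hypothesis sense must be reachable from some text sense via entailment-preserving edges such as hyponym-to-hypernym links) are all illustrative assumptions.

```python
# Illustrative sketch only: entailment as reachability over a hypothetical
# miniature semantic graph. A directed edge A -> B means "sense A entails
# sense B" (e.g. a hyponym entails its hypernym).
from collections import deque

GRAPH = {
    "dog.n.01": ["canine.n.02", "pet.n.01"],
    "canine.n.02": ["carnivore.n.01"],
    "carnivore.n.01": ["animal.n.01"],
    "pet.n.01": ["animal.n.01"],
    "cat.n.01": ["feline.n.01"],
    "feline.n.01": ["carnivore.n.01"],
}

def reachable(src, dst):
    """BFS over relation edges: does sense `src` entail sense `dst`?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in GRAPH.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def entails(text_senses, hypothesis_senses):
    """T entails H if every sense in H is reachable from some sense in T."""
    return all(any(reachable(t, h) for t in text_senses)
               for h in hypothesis_senses)

print(entails(["dog.n.01"], ["animal.n.01"]))  # True: dog is an animal
print(entails(["animal.n.01"], ["dog.n.01"]))  # False: entailment is directed
```

The directionality matters: generalization along hypernym edges preserves truth ("a dog barked" entails "an animal barked"), while the reverse does not, which is why the edges form a directed graph.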