Current approaches to characterizing and detecting hate speech focus on content posted in Online Social Networks (OSNs). These approaches struggle to capture the full picture of hate speech because of its subjectivity and the noisiness of OSN text. This work partially addresses these issues by shifting the focus toward users. We obtain a sample of Twitter's retweet graph containing 100,386 users, annotate 4,972 of them as hateful or normal, and find that 668 users were suspended within 4 months. Our analysis shows that hateful/suspended users differ from normal/active ones in their activity patterns, word usage, and network structure. Exploiting Twitter's network of connections, we find that a node embedding algorithm outperforms content-based approaches at detecting both hateful and suspended users. Overall, we present a user-centric view of hate speech, paving the way for better detection and understanding of this relevant and challenging issue.