Global reasoning plays a significant role in many computer vision tasks that need to capture long-range relationships. However, most current studies on global reasoning focus on the relationships between pixels and ignore the critical role of regions. In this paper, we propose a novel approach that explores the relationships between regions, which carry richer semantics than pixels. Specifically, we design a region aggregation method that automatically gathers regional features into a uniform shape and adaptively adjusts their positions for better alignment. To achieve the best global-reasoning performance, we propose several relationship exploration methods and apply them to the regional features. Our region-based global reasoning module, named ReGr, is end-to-end trainable and can be inserted into existing visual understanding models without extra supervision. To evaluate our approach, we apply ReGr to fine-grained classification and action recognition benchmarks, and the experimental results demonstrate its effectiveness.
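To make the idea of "gathering regional features into a uniform shape" concrete, the following is a minimal sketch, not the paper's actual (learned) aggregation: given a feature map and K soft region masks, a masked average pooling produces one fixed-size vector per region, regardless of each region's size or location. The function name `aggregate_regions` and the use of precomputed masks are assumptions for illustration only.

```python
import numpy as np

def aggregate_regions(features, masks):
    """Illustrative masked average pooling (not the paper's learned method).

    features: (H, W, C) feature map.
    masks:    (K, H, W) non-negative soft region assignments.
    Returns:  (K, C) regional features -- a uniform shape for all regions.
    """
    k = masks.shape[0]
    flat_feats = features.reshape(-1, features.shape[-1])      # (H*W, C)
    flat_masks = masks.reshape(k, -1)                          # (K, H*W)
    # Normalize each mask so every region's weights sum to 1.
    weights = flat_masks / (flat_masks.sum(axis=1, keepdims=True) + 1e-8)
    return weights @ flat_feats                                # (K, C)

features = np.random.rand(8, 8, 16)
masks = np.random.rand(4, 8, 8)
regions = aggregate_regions(features, masks)
print(regions.shape)  # (4, 16): four regions, each a 16-d vector
```

Once every region is reduced to the same shape, pairwise relationship exploration (e.g. attention- or graph-style reasoning) can operate on the K region vectors directly.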