We describe initial experiments using meta-learning techniques to learn models of fraudulent credit card transactions. The experiments reported here are a first step toward a better understanding of the advantages and limitations of current meta-learning strategies on real-world data. We argue that, for the fraud detection domain, the fraud catching rate (True Positive rate) and the false alarm rate (False Positive rate) are better metrics than overall accuracy when evaluating the learned fraud classifiers. We show that, given the skewed class distribution of the original data, training on artificially balanced data leads to better classifiers. We demonstrate how meta-learning can be used to combine different classifiers and maintain, and in some cases improve, the performance of the best classifier.
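As an illustrative sketch (not code from the paper), the following shows why the fraud catching rate and false alarm rate are more informative than overall accuracy on skewed data: with 2% fraud, a trivial classifier that never flags fraud reaches 98% accuracy while catching nothing. The data and the `evaluate` helper are hypothetical.

```python
# Hypothetical illustration: accuracy vs. TP/FP rates on skewed data.
# Labels: 1 = fraudulent transaction, 0 = legitimate transaction.
def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    tp_rate = tp / (tp + fn) if tp + fn else 0.0  # fraud catching rate
    fp_rate = fp / (fp + tn) if fp + tn else 0.0  # false alarm rate
    return accuracy, tp_rate, fp_rate

# 2 frauds among 100 transactions; classifier always predicts "legitimate".
y_true = [1] * 2 + [0] * 98
always_legit = [0] * 100
acc, tpr, fpr = evaluate(y_true, always_legit)
# acc = 0.98 (looks excellent), but tpr = 0.0: no fraud is ever caught.
```

The high accuracy comes entirely from the majority class, which is exactly why the abstract argues for reporting the two rates separately.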