We study the problem of information design in human-in-the-loop systems, where a sender (the system) designs an information disclosure policy to influence the decisions of a receiver (the user). This problem is ubiquitous in systems with humans in the loop: for example, a recommendation system might choose whether to present other users' reviews to encourage a user to follow its recommendations, and an online retailer might choose which product features to present to persuade buyers to make a purchase. Within the flourishing literature on information design, Bayesian persuasion has been one of the most prominent frameworks for formalizing this problem and has spurred a range of studies in both economics and computer science. While there has been significant progress in characterizing optimal information disclosure policies and their computational complexity, a common assumption in this line of research is that the receiver is Bayesian rational, i.e., the receiver processes information in a Bayesian manner and takes actions that maximize her expected utility. However, as empirically observed in the literature, this assumption often fails to hold in real-world scenarios. In this work, we relax the Bayesian rationality assumption in the persuasion setting. In particular, we develop an alternative framework for information design based on discrete choice models and probability weighting to account for this relaxation. Moreover, we conduct online behavioral experiments on Amazon Mechanical Turk and demonstrate that our framework better explains real-world user behavior and leads to more effective information disclosure policies.