Bayesian belief nets (BNs) are often used for classification tasks --- typically to return the most likely "class label" for each specified instance. Many BN-learners, however, attempt to find the BN that maximizes a different objective function (viz., likelihood, rather than classification accuracy), typically by first learning an appropriate graphical structure, then finding the maximum-likelihood parameters for that structure. As these parameters may not maximize classification accuracy, "discriminative learners" follow the alternative approach of seeking the parameters that maximize the conditional likelihood (CL) over the distribution of instances the BN will have to classify. This paper first formally specifies this task and shows how it relates to logistic regression, which corresponds to finding the optimal CL parameters for a naive-Bayes structure. After analyzing the task's inherent (sample and computational) complexity, we then present a general algorithm for this task, ELR, which applies to arbitrary BN structures and works effectively even when given incomplete training data. This paper presents empirical evidence that ELR works better than the standard "generative" approach in a variety of situations, especially in the common situation where the given BN structure is incorrect.
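To make the contrast between the two objectives concrete, the following is a standard formulation (the notation is ours, not necessarily the paper's): given a sample of labeled instances $\{\langle e_i, c_i\rangle\}_{i=1}^{m}$, where $e_i$ is the evidence (attribute values) and $c_i$ the class label, a generative learner seeks BN parameters $\Theta$ maximizing the log-likelihood, while a discriminative learner seeks parameters maximizing the log conditional likelihood:

```latex
% Generative objective: joint log-likelihood of class and evidence
LL(\Theta) \;=\; \sum_{i=1}^{m} \log P_{\Theta}(c_i,\, e_i)

% Discriminative objective: log conditional likelihood of class given evidence
LCL(\Theta) \;=\; \sum_{i=1}^{m} \log P_{\Theta}(c_i \mid e_i)
```

The two differ by the term $\sum_i \log P_{\Theta}(e_i)$, which a generative learner spends capacity modeling even though it is irrelevant to classification; when the BN structure is correct this makes no difference asymptotically, but under an incorrect structure the maximizers of $LL$ and $LCL$ can disagree, which is the situation the abstract highlights.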