We present an implementation of stable inductive logic programming (stable-ILP), a cross-disciplinary concept bridging machine learning and nonmonotonic reasoning. In a deductive capacity, stable models give meaning to logic programs containing negative assertions and cycles of dependencies. In stable-ILP, we employ these models to represent the current state specified by (possibly negative) extensional and intensional (EDB and IDB) database rules. The computed state then serves as the domain background knowledge for a top-down ILP learner. In this paper, we discuss the architecture of the two constituent computation engines and their symbiotic interaction in the computer system INDED (pronounced "indeed"). We introduce the notion of negation as failure-to-learn, and provide a real-world source of negatively recursive rules (those of the form p ← not p) by explicating scenarios that foster induction of such rules. Last, we briefly mention current work using INDED in data mining.
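To make the stable model semantics referenced above concrete, the following is a minimal brute-force sketch (not INDED's actual engine) of the Gelfond-Lifschitz construction: a candidate interpretation M is stable iff it equals the least model of the program's reduct with respect to M. The rule encoding and example programs are illustrative assumptions; note how the negatively recursive rule p ← not p admits no stable model.

```python
from itertools import combinations

def reduct(rules, M):
    # Gelfond-Lifschitz reduct: drop each rule whose negated atoms
    # intersect M, then strip the negative literals from the rest.
    return [(h, pos) for (h, pos, neg) in rules if not (set(neg) & M)]

def least_model(definite):
    # Least model of a definite program: fixpoint of the
    # immediate-consequence operator T_P.
    M = set()
    changed = True
    while changed:
        changed = False
        for h, pos in definite:
            if set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return M

def stable_models(rules, atoms):
    # Brute force over all candidate interpretations: M is stable
    # iff M == least_model(reduct(rules, M)).
    models = []
    for r in range(len(atoms) + 1):
        for c in combinations(sorted(atoms), r):
            M = set(c)
            if least_model(reduct(rules, M)) == M:
                models.append(M)
    return models

# Rules are (head, positive_body, negative_body) triples -- an
# illustrative encoding, not INDED's input syntax.
# a <- not b ; b <- not a : two stable models, {a} and {b}
even = [("a", [], ["b"]), ("b", [], ["a"])]
print(stable_models(even, {"a", "b"}))  # [{'a'}, {'b'}]

# p <- not p : no stable model
odd = [("p", [], ["p"])]
print(stable_models(odd, {"p"}))        # []
```

The two example programs show why stable models suit nonmonotonic settings: the even cycle yields multiple alternative world views, while the odd cycle is inconsistent under the semantics.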