This describes counting-learning, the simplest algorithm used by Netica for learning the parameters of conditional probability tables (CPTs) from a file of cases. Although it is simple, it is a true Bayesian learning algorithm.
Before learning begins, the net starts off in a state of ignorance (provided there has been no previous learning or entry of probabilities by an expert). At each node, all CPT probabilities start as uniform, and each experience number starts at its lowest value (normally 1.0).
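To make the initial state concrete, here is a minimal sketch in Python (a hypothetical representation only; Netica's actual data structures and API differ). Each node holds one probability vector and one experience number per parent configuration:

    # Hypothetical sketch of the pre-learning state of one node.
    def init_node(num_parent_configs, num_states, initial_experience=1.0):
        # Every row of the CPT starts uniform; every experience number
        # starts at its lowest value (normally 1.0).
        uniform = [1.0 / num_states] * num_states
        cpt = [list(uniform) for _ in range(num_parent_configs)]
        exper = [initial_experience] * num_parent_configs
        return cpt, exper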
For each case to be learned, the following is done. Only those nodes for which the case supplies a value (finding), and also supplies values for all of the node's parents, have their experience and conditional probabilities modified (i.e., there is no missing data for that node). Each such node is modified as follows.
Only the single experience number, and the single probability vector, for the parent configuration that is consistent with the case are modified. The new experience number (exper') is found from the old (exper) by:
exper' = exper + degree
where degree is the multiplicity of the case (set by you just before learning begins). It is normally 1, but is included so that you can set it to 2 to learn two identical cases at once, to -1 to "unlearn" a case, etc. If the case file has a NumCases column, the effective degree is the product of the degree you entered and the value in that column.
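As a sketch of this step (hypothetical names, not Netica's API), the effective degree and the experience update look like:

    # Hypothetical sketch of the experience update; num_cases is the value
    # from the case file's NumCases column (1.0 if there is no such column).
    def update_experience(exper, degree=1.0, num_cases=1.0):
        effective_degree = degree * num_cases  # multiplicity of this case
        return exper + effective_degree       # exper' = exper + degree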
Within the probability vector, the probability for the node state that is consistent with the case is changed from prob_c to prob_c' as follows:
prob_c' = (prob_c * exper + degree) / exper'
The other probabilities in that vector are changed by:
prob_i' = (prob_i * exper) / exper'
which keeps the vector normalized (exper and exper' act as the old and new normalization constants).
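Putting the two update rules together, here is a minimal sketch of the per-case update for one parent configuration (a hypothetical helper, not Netica's actual code):

    def counting_learn_row(probs, exper, case_state, degree=1.0):
        # Update one probability vector and its experience number for a
        # case in which the node takes state case_state. Sketch only.
        new_exper = exper + degree  # exper' = exper + degree
        # prob_i' = (prob_i * exper) / exper' for every state ...
        new_probs = [p * exper / new_exper for p in probs]
        # ... except the observed state, which absorbs the new evidence:
        # prob_c' = (prob_c * exper + degree) / exper'
        new_probs[case_state] = (probs[case_state] * exper + degree) / new_exper
        return new_probs, new_exper

For example, a binary node starting from ignorance (uniform probabilities, experience 1.0) that observes its first state once yields probs == [0.75, 0.25] and exper == 2.0, and the vector remains normalized:

    probs, exper = counting_learn_row([0.5, 0.5], exper=1.0, case_state=0)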