In computer science, incremental learning is a machine learning method in which input data is continuously used to extend the existing model's knowledge, i.e., to further train the model. It is a dynamic technique of supervised or unsupervised learning that can be applied when training data becomes available gradually over time, or when its size exceeds system memory limits. Algorithms that facilitate incremental learning are known as incremental machine learning algorithms.
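As a minimal sketch of the idea, here is a perceptron whose weights are updated one example at a time, so the full dataset never has to fit in memory. The class, its `partial_fit` method, and the toy data stream are all invented for illustration, not a reference to any particular library.

```python
class OnlinePerceptron:
    """A tiny incremental learner: each new example further trains the model."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s >= 0 else -1

    def partial_fit(self, x, y):
        # Update the existing model with a single new example (y in {-1, +1});
        # only misclassified examples change the weights.
        if self.predict(x) != y:
            self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * y


model = OnlinePerceptron(n_features=2)
stream = [([2.0, 1.0], 1), ([-1.0, -2.0], -1),
          ([1.5, 0.5], 1), ([-2.0, -1.0], -1)]
for x, y in stream:          # examples arrive one at a time
    model.partial_fit(x, y)

print(model.predict([2.0, 2.0]))    # near the positive examples -> 1
print(model.predict([-2.0, -2.0]))  # near the negative examples -> -1
```

The key property is that `partial_fit` never revisits old examples: the model's state is the only memory of past data, which is what lets the training set be unbounded.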
One of the ideas we are exploring here is how to learn on a case-by-case basis, without a large corpus of known examples to draw inferences from. This style of learning is supervised: a teacher labels each object as a "positive" or a "negative" example of the concept. We then either generalize or specialize.
Each time another object arrives, we revise our understanding of the concept using heuristics for specializing and generalizing:
|link must be present to be a positive example of the concept
|link must be absent to be a positive example of the concept
|link is not necessary to be a positive example of the concept
|multiple objects or links may fit one role in the concept
|generalize over multiple objects in the same role based on background knowledge (e.g., a class hierarchy)
|expand range of values to be a positive example of the concept
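The first three heuristics above can be sketched as a toy learner over examples described as sets of links. The representation (part, relation, part triples), the class, and the arch data are assumptions made for illustration; a full treatment would also handle roles, hierarchies, and value ranges.

```python
class ConceptLearner:
    """Case-by-case concept learning over link-based descriptions."""

    def __init__(self, seed_example):
        # Initialize the concept from the first positive example.
        self.required = set(seed_example)   # links that must be present
        self.forbidden = set()              # links that must be absent

    def learn_positive(self, example):
        # Generalize: a required link missing from a positive example
        # is evidently not necessary, so drop it.
        self.required &= set(example)

    def learn_near_miss(self, example):
        # Specialize from a negative example that differs only slightly:
        # links it has beyond the concept must be absent in positives.
        self.forbidden |= set(example) - self.required

    def matches(self, example):
        ex = set(example)
        return self.required <= ex and not (self.forbidden & ex)


# Hypothetical "arch" example: two bricks supporting a top.
arch = [("brick1", "supports", "top"), ("brick2", "supports", "top"),
        ("brick1", "left-of", "brick2")]
learner = ConceptLearner(arch)

# A near miss: the bricks touch, so that link must be absent.
near_miss = arch + [("brick1", "touches", "brick2")]
learner.learn_near_miss(near_miss)

# Another positive example without the left-of link: drop it.
learner.learn_positive([("brick1", "supports", "top"),
                        ("brick2", "supports", "top")])

print(learner.matches(arch))       # True
print(learner.matches(near_miss))  # False: has a forbidden link
```

Note how each example revises the model in place, in the order it arrives, which is exactly the incremental setting described at the top of this section.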