Incremental concept learning
In computer science, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e., to further train the model. It is a dynamic technique, applicable to both supervised and unsupervised learning, used when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that facilitate incremental learning are known as incremental machine learning algorithms.
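A minimal sketch of the idea, using a hypothetical `RunningMean` "model": the model is updated one observation at a time and never stores the full data set, so it still works when data arrives as a stream or is too large to hold in memory.

```python
class RunningMean:
    """Toy incremental model: maintains the mean of all data seen so far."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        # Incremental update: new_mean = old_mean + (x - old_mean) / n.
        # Only the running state is kept, not the observations themselves.
        self.n += 1
        self.mean += (x - self.mean) / self.n


model = RunningMean()
for x in [2.0, 4.0, 6.0]:   # data arriving one example at a time
    model.update(x)
print(model.mean)  # 4.0
```

The same update-in-place pattern underlies real incremental learners, which revise richer model state (weights, cluster centers, concept descriptions) instead of a single mean.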
One of the ideas we are exploring here is how to learn on a case-by-case basis, without a large corpus of known examples to draw inferences from. This style of learning is supervised: a teacher labels each object as a "positive" or "negative" example of the concept.
Each time we receive another object to analyze, we revise our understanding of the concept, either generalizing it (so it accepts more objects) or specializing it (so it accepts fewer). We do this via a small set of heuristics:
| Heuristic | Effect |
| --- | --- |
| require-link | link must be present to be a positive example of the concept |
| forbid-link | link must be absent to be a positive example of the concept |
| drop-link | link is not necessary to be a positive example of the concept |
| enlarge-set | multiple objects or links may fit one role in the concept |
| climb-tree | generalize over multiple objects in the same role by climbing to a common ancestor in a known classification tree |
| close-interval | expand the range of values to include a positive example of the concept |
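The heuristics above can be sketched in code. This is an illustrative implementation under simplifying assumptions (examples are flat dicts of attribute to value; symbolic attributes generalize by enlarge-set, numeric ones by close-interval; the `Concept` class and its method names are hypothetical, not from any library):

```python
class Concept:
    """Current hypothesis: a map from attribute to allowed values.

    Symbolic attributes map to a set of allowed values (enlarge-set);
    numeric attributes map to a [lo, hi] interval (close-interval);
    `forbidden` records values ruled out by near misses (forbid-link).
    """

    def __init__(self, first_positive):
        self.allowed = {}
        for attr, val in first_positive.items():
            if isinstance(val, (int, float)):
                self.allowed[attr] = [val, val]   # degenerate interval
            else:
                self.allowed[attr] = {val}
        self.forbidden = {}

    def matches(self, example):
        for attr, constraint in self.allowed.items():
            if attr not in example:
                return False                      # require-link: must be present
            val = example[attr]
            if isinstance(constraint, list):
                if not (constraint[0] <= val <= constraint[1]):
                    return False
            elif val not in constraint:
                return False
        # forbid-link: no forbidden attribute value may appear
        return all(example.get(a) != v for a, v in self.forbidden.items())

    def generalize(self, positive):
        """Teacher says positive, but the model rejects it: relax constraints."""
        for attr in list(self.allowed):
            if attr not in positive:
                del self.allowed[attr]            # drop-link
                continue
            val, constraint = positive[attr], self.allowed[attr]
            if isinstance(constraint, list):
                constraint[0] = min(constraint[0], val)   # close-interval
                constraint[1] = max(constraint[1], val)
            else:
                constraint.add(val)               # enlarge-set

    def specialize(self, near_miss):
        """Teacher says negative, but the model accepts it: forbid a
        distinguishing attribute value found in the near miss."""
        for attr, val in near_miss.items():
            if attr not in self.allowed:
                self.forbidden[attr] = val        # forbid-link
                return
```

A short run of the loop: start from one positive example, generalize on the next positive, specialize on a near miss.

```python
concept = Concept({"shape": "brick", "height": 10, "support": "posts"})
concept.generalize({"shape": "wedge", "height": 12, "support": "posts"})
assert concept.matches({"shape": "wedge", "height": 11, "support": "posts"})
concept.specialize({"shape": "brick", "height": 10,
                    "support": "posts", "touching": "yes"})
assert not concept.matches({"shape": "brick", "height": 10,
                            "support": "posts", "touching": "yes"})
```

Climb-tree is omitted here because it needs an external classification tree; it would replace a set of sibling values with their common ancestor class rather than enumerating them.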