Tuesday 12 January 2016

Decision Trees

Decision trees are a non-parametric inductive learning technique that produces models able to classify new, unseen cases and to reveal the mechanisms driving a problem. They can be applied to both classification and regression problems.
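As a minimal sketch of both uses (assuming scikit-learn, a library choice not named in this post), the same technique fits a classifier and a regressor:

```python
# Sketch assuming scikit-learn; the iris data set stands in for any tabular problem.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X, y = load_iris(return_X_y=True)

# Classification: predict a discrete class label.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))          # class labels for new inputs

# Regression: predict a continuous target (here, one feature from the others).
reg = DecisionTreeRegressor(max_depth=3, random_state=0)
reg.fit(X[:, :3], X[:, 3])
print(reg.predict(X[:5, :3]))      # continuous predictions
```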

Decision trees are easy to understand, producing intuitively clear rules that domain experts can interpret.

Decision trees make it possible to track and evaluate every step in the decision-making process, because each path through the tree is a combination of attribute tests that together distinguish between classes. This transparency gives useful insight into the inner workings of the model.
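One way to make those paths visible is to print a fitted tree as nested attribute tests; this sketch again assumes scikit-learn and its export_text helper:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Each printed path is a conjunction of attribute tests ending in a class,
# i.e. the "combination of attributes" described above.
print(export_text(clf, feature_names=list(data.feature_names)))
```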

Decision trees can handle both nominal and numeric input attributes, and they cope with data sets that contain errors such as misclassified examples.
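Many implementations expect nominal attributes to be encoded explicitly before training; the sketch below assumes scikit-learn and a small hypothetical weather-style data set:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical mixed data set: one nominal and one numeric input attribute.
df = pd.DataFrame({
    "outlook": ["sunny", "rain", "overcast", "rain", "sunny"],   # nominal
    "humidity": [85, 90, 78, 96, 70],                            # numeric
    "play": [0, 0, 1, 0, 1],                                     # class label
})

# One-hot encode the nominal column and pass the numeric one through unchanged.
pre = ColumnTransformer(
    [("nominal", OneHotEncoder(handle_unknown="ignore"), ["outlook"])],
    remainder="passthrough",
)
model = make_pipeline(pre, DecisionTreeClassifier(random_state=0))
model.fit(df[["outlook", "humidity"]], df["play"])
print(model.predict(df[["outlook", "humidity"]]))
```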

Decision trees can easily be programmed for use in real-time systems.
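Because a trained tree reduces to nested comparisons, it can be embedded directly in application code. The thresholds below are purely illustrative, not taken from any real model:

```python
def classify(petal_length: float, petal_width: float) -> str:
    """Hypothetical tree compiled to plain conditionals; thresholds are illustrative only."""
    if petal_length <= 2.45:
        return "setosa"
    if petal_width <= 1.75:
        return "versicolor"
    return "virginica"

print(classify(1.4, 0.2))  # -> "setosa"
```

Evaluating such a function is a handful of comparisons, which is why trees fit comfortably into latency-sensitive settings.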

They are relatively inexpensive computationally and work well on both large and small data sets.

Decision trees are considered a non-parametric method: they make no assumptions about the distribution of the data or about the structure of the classifier.
