
Similarity-based Learning


Similarity-based approaches to machine learning come from the idea that the best way to make a prediction is to simply look at what has worked well in the past and predict the same thing. The fundamental concepts required to build a system based on this idea are feature spaces and measures of similarity, and these are covered in the fundamentals section of this chapter. These concepts allow us to understand the standard approach to building similarity-based models: the nearest neighbor algorithm. After covering the standard algorithm, we then look at extensions and variations that allow us to handle noisy data (the k nearest neighbor, or k-NN, algorithm), to make predictions more efficiently (k-d trees), to predict continuous targets, and to handle different kinds of descriptive features with varying measures of similarity. We also take the opportunity to introduce the use of data normalization and feature selection in the context of similarity-based learning. These techniques are generally applicable to all machine learning algorithms but are especially important when similarity-based approaches are used.

5.1 Big Idea

The year is 1798, and you are Lieutenant-Colonel David Collins of HMS Calcutta, exploring the region around the Hawkesbury River in New South Wales. One day, after an expedition up the river has returned to the ship, one of the men from the expedition tells you that he saw a strange animal near the river. You ask him to describe the animal to you, and he explains that he didn't see it very well because, as he approached it, the animal growled at him, so he didn't approach too closely. However, he did notice that the animal had webbed feet and a duck-billed snout.

In order to plan the expedition for the next day, you decide that you need to classify the animal so that you can determine whether it is dangerous to approach it or not. You decide to do this by thinking about the animals you can remember coming across before and comparing the features of these animals with the features the sailor described to you. We illustrate this process by listing some of the animals you have encountered before and how they compare with the growling, web-footed, duck-billed animal that the sailor described. For each known animal, you count how many features it has in common with the unknown animal. At the end of this process, you decide that the unknown animal is most similar to a duck, so that is what it must be. A duck, no matter how strange, is not a dangerous animal, so you tell the men to get ready for another expedition up the river the next day.

The process of classifying an unknown animal by matching its features against the features of animals you have encountered before neatly encapsulates the big idea underpinning similarity-based learning: if you are trying to make a prediction for a current situation, then you should search your memory to find situations that are similar to the current one and make a prediction based on what was true for the most similar situation in your memory.

5.2 Fundamentals

As the name similarity-based learning suggests, a key component of this approach to prediction is defining a computational measure of similarity between instances. Often this measure of similarity is actually some form of distance measure. A consequence of this, and a somewhat less obvious requirement of similarity-based learning, is that if we are going to compute distances between instances, we need to have a concept of space in the representation of the domain used by our model. In this section we introduce the concept of a feature space as a representation for a training dataset and then illustrate how we can compute measures of similarity between instances in a feature space.

5.2.1 Feature Space

We list an example dataset containing two descriptive features, the SPEED and AGILITY ratings for college athletes (both measured out of 10), and one target feature that lists whether the athletes were drafted to a professional team. We can represent this dataset in a feature space by making each of the descriptive features an axis of a coordinate system. We can then place each instance within the feature space based on the values of its descriptive features. A scatter plot illustrates the resulting feature space when we do this for the data: SPEED is plotted on the horizontal axis, and AGILITY is plotted on the vertical axis. The value of the DRAFT feature is indicated by the shape representing each instance as a point in the feature space: triangles for no and crosses for yes.

There is always one dimension for every descriptive feature in a dataset. In this example, there are only two descriptive features, so the feature space is two-dimensional. Feature spaces can, however, have many more dimensions; in document classification tasks, for example, it is not uncommon to have thousands of descriptive features and therefore thousands of dimensions in the associated feature space. Although we can't easily draw feature spaces beyond three dimensions, the ideas underpinning them remain the same.

We can formally define a feature space as an abstract m-dimensional space that is created by making each descriptive feature in a dataset an axis of an m-dimensional coordinate system and mapping each instance in the dataset to a point in the coordinate system based on the values of its descriptive features.

For similarity-based learning, the nice thing about the way feature spaces work is that if the values of the descriptive features of two or more instances in the dataset are the same, then these instances will be mapped to the same point in the feature space. Also, as the difference between the values of the descriptive features of two instances grows, so too does the distance between the points in the feature space that represent these instances. So the distance between two points in the feature space is a useful measure of the similarity of the descriptive features of the two instances.
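To make this mapping concrete, here is a minimal Python sketch (our illustration, assuming instances are represented as simple dictionaries of feature values) that maps two of the athlete instances used later in this section to points in the two-dimensional feature space:

```python
def to_point(instance):
    """Read off an instance's descriptive feature values as coordinates."""
    return (instance["SPEED"], instance["AGILITY"])

a = {"SPEED": 5.00, "AGILITY": 2.50}
b = {"SPEED": 2.75, "AGILITY": 7.50}

print(to_point(a))  # (5.0, 2.5)
print(to_point(b))  # (2.75, 7.5)

# Instances with identical descriptive feature values map to the same point.
print(to_point({"SPEED": 5.00, "AGILITY": 2.50}) == to_point(a))  # True
```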

5.2.2 Measuring Similarity Using Distance Metrics

The simplest way to measure the similarity between two instances, $\mathbf{a}$ and $\mathbf{b}$, in a dataset is to measure the distance between the instances in a feature space. We can use a distance metric to do this: $metric(\mathbf{a}, \mathbf{b})$ is a function that returns the distance between two instances $\mathbf{a}$ and $\mathbf{b}$. Mathematically, a metric must conform to the following four criteria:

1. Non-negativity: $metric(\mathbf{a}, \mathbf{b}) \geq 0$
2. Identity: $metric(\mathbf{a}, \mathbf{b}) = 0 \iff \mathbf{a} = \mathbf{b}$
3. Symmetry: $metric(\mathbf{a}, \mathbf{b}) = metric(\mathbf{b}, \mathbf{a})$
4. Triangular Inequality: $metric(\mathbf{a}, \mathbf{b}) \leq metric(\mathbf{a}, \mathbf{c}) + metric(\mathbf{c}, \mathbf{b})$, for any third instance $\mathbf{c}$
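As a small illustration (a sketch that assumes the Euclidean distance, defined formally below, as the candidate metric and uses the athlete ratings from this section as sample points), each of the four criteria can be checked numerically:

```python
import math

def euclidean(a, b):
    """A candidate metric: straight-line distance between two points."""
    return math.sqrt(sum((a_i - b_i) ** 2 for a_i, b_i in zip(a, b)))

# Sample (SPEED, AGILITY) points from this section.
a, b, c = (5.00, 2.50), (2.75, 7.50), (5.25, 9.50)

print(euclidean(a, b) >= 0)                                   # 1. non-negativity
print(euclidean(a, a) == 0)                                   # 2. identity
print(math.isclose(euclidean(a, b), euclidean(b, a)))         # 3. symmetry
print(euclidean(a, b) <= euclidean(a, c) + euclidean(c, b))   # 4. triangular inequality
```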

One of the best known distance metrics is the Euclidean distance, which computes the length of the straight line between two points. The Euclidean distance between two instances $\mathbf{a}$ and $\mathbf{b}$ in an m-dimensional feature space is defined as

$$\mathrm{Euclidean}(\mathbf{a}, \mathbf{b}) = \sqrt{\sum_{i=1}^{m} \left(\mathbf{a}[i] - \mathbf{b}[i]\right)^{2}} \qquad (1)$$

The descriptive features in the college athlete dataset are both continuous, which means that the feature space representing this data is technically known as a Euclidean coordinate space, and we can compute the distance between instances in it using Euclidean distance. For example, the Euclidean distance between instances $\mathbf{a}$ (SPEED = 5.00, AGILITY = 2.50) and $\mathbf{b}$ (SPEED = 2.75, AGILITY = 7.50) from the college athlete dataset is

$$\mathrm{Euclidean}(\mathbf{a}, \mathbf{b}) = \sqrt{(5.00 - 2.75)^{2} + (2.50 - 7.50)^{2}} = \sqrt{5.0625 + 25.0} = 5.48$$
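As a quick sanity check of this worked example, equation (1) can be computed directly; the following is a minimal Python sketch (the function name euclidean and the tuple representation of instances are our own):

```python
import math

def euclidean(a, b):
    """Equation (1): square root of the summed squared feature differences."""
    return math.sqrt(sum((a_i - b_i) ** 2 for a_i, b_i in zip(a, b)))

a = (5.00, 2.50)   # (SPEED, AGILITY)
b = (2.75, 7.50)

print(round(euclidean(a, b), 2))   # 5.48
```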

Another, less well-known, distance metric is the Manhattan distance. The Manhattan distance between two instances $\mathbf{a}$ and $\mathbf{b}$ in a feature space with m dimensions is defined as

$$\mathrm{Manhattan}(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{m} abs\left(\mathbf{a}[i] - \mathbf{b}[i]\right) \qquad (2)$$

where the $abs()$ function returns the absolute value. For example, the Manhattan distance between instances $\mathbf{a}$ (SPEED = 5.00, AGILITY = 2.50) and $\mathbf{b}$ (SPEED = 2.75, AGILITY = 7.50) is

$$\mathrm{Manhattan}(\mathbf{a}, \mathbf{b}) = abs(5.00 - 2.75) + abs(2.50 - 7.50) = 2.25 + 5.00 = 7.25$$
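Equation (2) can be checked in the same way (again a minimal sketch with our own naming):

```python
def manhattan(a, b):
    """Equation (2): sum of the absolute feature differences."""
    return sum(abs(a_i - b_i) for a_i, b_i in zip(a, b))

a = (5.00, 2.50)   # (SPEED, AGILITY)
b = (2.75, 7.50)

print(manhattan(a, b))   # 7.25
```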

We can illustrate the difference between the Manhattan and Euclidean distances between two points in a two-dimensional feature space: the Manhattan distance accumulates the feature differences along the axes (like walking city blocks), while the Euclidean distance follows the straight line between the points. If we compare Equation (1) and Equation (2), we can see that both distance metrics are essentially functions of the differences between the values of the features. Indeed, the Euclidean and Manhattan distances are special cases of the Minkowski distance, which defines a family of distance metrics based on differences between feature values.

The Minkowski distance between two instances $\mathbf{a}$ and $\mathbf{b}$ in a feature space with m dimensions is defined as

$$\mathrm{Minkowski}(\mathbf{a}, \mathbf{b}) = \left(\sum_{i=1}^{m} abs\left(\mathbf{a}[i] - \mathbf{b}[i]\right)^{p}\right)^{\frac{1}{p}} \qquad (3)$$

where the parameter p is typically set to a positive value and defines the behavior of the distance metric. Different distance metrics result from adjusting the value of p. For example, the Minkowski distance with p = 1 is the Manhattan distance, and with p = 2 is the Euclidean distance. Continuing in this manner, we can define an infinite number of distance metrics.
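A minimal sketch of equation (3) (with our own function name) shows that setting p = 1 and p = 2 recovers the Manhattan and Euclidean results computed above:

```python
def minkowski(a, b, p):
    """Equation (3): the p-th root of the sum of absolute feature
    differences, each raised to the power p."""
    return sum(abs(a_i - b_i) ** p for a_i, b_i in zip(a, b)) ** (1 / p)

a = (5.00, 2.50)   # (SPEED, AGILITY)
b = (2.75, 7.50)

print(round(minkowski(a, b, p=1), 2))   # 7.25 -> the Manhattan distance
print(round(minkowski(a, b, p=2), 2))   # 5.48 -> the Euclidean distance
```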

The fact that we can define an infinite number of distance metrics is not merely an academic curiosity. In fact, the predictions produced by a similarity-based model will change depending on the exact Minkowski distance used (i.e., the value of p). Larger values of p place more emphasis on large differences between feature values than smaller values of p, because all differences are raised to the power of p. Consequently, the Euclidean distance (with p = 2) is more strongly influenced by a single large difference in one feature than the Manhattan distance (with p = 1).

We can see this if we compare the Euclidean and Manhattan distances between instances $\mathbf{a}$ (SPEED = 5.00, AGILITY = 2.50) and $\mathbf{b}$ (SPEED = 2.75, AGILITY = 7.50) with the Euclidean and Manhattan distances between instance $\mathbf{a}$ and a third instance, $\mathbf{c}$ (SPEED = 5.25, AGILITY = 9.50).

The Manhattan distances between both pairs of instances are the same: 7.25. It is striking, however, that the Euclidean distance between $\mathbf{a}$ and $\mathbf{c}$ is 7.00, which is greater than the Euclidean distance between $\mathbf{a}$ and $\mathbf{b}$, which is just 5.48. This is because the maximum difference between $\mathbf{a}$ and $\mathbf{c}$ for any single feature is 7 units (for AGILITY), whereas the maximum difference between $\mathbf{a}$ and $\mathbf{b}$ on any single feature is just 5 units (for AGILITY). Because these differences are squared in the Euclidean distance calculation, the larger maximum single difference between $\mathbf{a}$ and $\mathbf{c}$ results in a larger overall distance being calculated for this pair of instances. Overall, the Euclidean distance weights features with larger differences in values more than features with smaller differences in values. This means that the Euclidean distance is more influenced by a single large difference in one feature than by a lot of small differences across a set of features, whereas the opposite is true of the Manhattan distance.
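Reusing the Minkowski form from equation (3), a short sketch (our own illustration) reproduces this comparison:

```python
def minkowski(a, b, p):
    """Minkowski distance (equation (3)); p = 1 gives Manhattan, p = 2 Euclidean."""
    return sum(abs(a_i - b_i) ** p for a_i, b_i in zip(a, b)) ** (1 / p)

a = (5.00, 2.50)   # (SPEED, AGILITY)
b = (2.75, 7.50)
c = (5.25, 9.50)

for label, (x, y) in [("a-b", (a, b)), ("a-c", (a, c))]:
    print(f"{label}: Manhattan = {minkowski(x, y, 1):.2f}, "
          f"Euclidean = {minkowski(x, y, 2):.2f}")

# a-b: Manhattan = 7.25, Euclidean = 5.48
# a-c: Manhattan = 7.25, Euclidean = 7.00
# The Manhattan distances are identical, but the single 7-unit AGILITY
# difference between a and c inflates their Euclidean distance.
```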

Although we have an infinite number of Minkowski-based distance metrics to choose from, Euclidean distance and Manhattan distance are the most commonly used of these. The question of which is the best one to use, however, still remains. From a computational perspective, the Manhattan distance has a slight advantage over the Euclidean distance, because it avoids computing the squares and the square root, and computational considerations can become important when dealing with very large datasets. Computational considerations aside, Euclidean distance is often used as the default.
