SpeededUpRobustFeaturesDetector Class
Namespace: Accord.Imaging
```csharp
[SerializableAttribute]
public class SpeededUpRobustFeaturesDetector : BaseSparseFeatureExtractor<SpeededUpRobustFeaturePoint>
```
The SpeededUpRobustFeaturesDetector type exposes the following members.
Constructors

Name | Description
---|---
SpeededUpRobustFeaturesDetector | Initializes a new instance of the SpeededUpRobustFeaturesDetector class.
Properties

Name | Description
---|---
ComputeDescriptors | Gets or sets a value indicating whether all feature points should have their descriptors computed after being detected. Default is to compute standard descriptors.
ComputeOrientation | Gets or sets a value indicating whether all feature points should have their orientation computed after being detected. Default is true.
NumberOfInputs | Returns -1. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
NumberOfOutputs | Gets the dimensionality of the features generated by this extractor. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
Octaves | Gets or sets the number of octaves to use when building the response filter. Each octave corresponds to a series of maps covering a doubling of scale in the image. Default is 5.
Step | Gets or sets the initial step to use when building the response filter. Default is 2.
SupportedFormats | Gets the list of image pixel formats that are supported by this extractor. The extractor checks whether the pixel format of each provided image is in this list to determine whether the image can be processed. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
Threshold | Gets or sets the non-maximum suppression threshold. Default is 0.0002.
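Taken together, these properties control how the detector's response filter is built and which points survive suppression. A minimal configuration sketch, assuming the default constructor and the property names listed above (the exact property types are assumptions based on this page):

```csharp
// Sketch: configure the detector through the properties listed above.
// Values shown are the documented defaults, made explicit for clarity.
var surf = new SpeededUpRobustFeaturesDetector()
{
    Octaves = 5,               // response filter octaves (default 5)
    Step = 2,                  // initial sampling step (default 2)
    Threshold = 0.0002f,       // non-maximum suppression threshold (default 0.0002)
    ComputeOrientation = true  // compute each point's orientation (default true)
};
```

Lowering Threshold admits weaker responses (more points); raising Step speeds up detection at the cost of localization accuracy.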
Methods

Name | Description
---|---
Clone | Creates a new object that is a copy of the current instance. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
Clone(ISet&lt;PixelFormat&gt;) | Creates a new object that is a copy of the current instance. (Overrides BaseFeatureExtractor&lt;TFeature&gt;.Clone(ISet&lt;PixelFormat&gt;).)
Dispose | Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
Dispose(Boolean) | Releases unmanaged and, optionally, managed resources. (Overrides BaseFeatureExtractor&lt;TFeature&gt;.Dispose(Boolean).)
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetDescriptor | Gets the feature descriptor for the last processed image.
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
InnerTransform | This method should be implemented by inheriting classes to perform the actual feature extraction, transforming the input image into a list of features. (Overrides BaseFeatureExtractor&lt;TFeature&gt;.InnerTransform(UnmanagedImage).)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
ProcessImage(Bitmap) | Obsolete. Please use the Transform(Bitmap) method instead. (Inherited from BaseSparseFeatureExtractor&lt;TPoint&gt;.)
ProcessImage(BitmapData) | Obsolete. Please use the Transform(Bitmap) method instead. (Inherited from BaseSparseFeatureExtractor&lt;TPoint&gt;.)
ProcessImage(UnmanagedImage) | Obsolete. Please use the Transform(Bitmap) method instead. (Inherited from BaseSparseFeatureExtractor&lt;TPoint&gt;.)
ToString | Returns a string that represents the current object. (Inherited from Object.)
Transform(Bitmap) | Overloaded. Applies the transformation to an input, producing an associated output. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
Transform(UnmanagedImage) | Overloaded. Applies the transformation to an input, producing an associated output. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
Transform(Bitmap, IEnumerable&lt;TFeature&gt;) | Applies the transformation to a set of input vectors, producing an associated set of output vectors. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
Transform(UnmanagedImage, IEnumerable&lt;TFeature&gt;) | Applies the transformation to a set of input vectors, producing an associated set of output vectors. (Inherited from BaseFeatureExtractor&lt;TFeature&gt;.)
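Since the ProcessImage overloads above are marked obsolete, a hedged sketch of the recommended Transform-based usage, assuming a Bitmap named `image` in one of the supported pixel formats, might look like:

```csharp
// Sketch: detect SURF feature points with the non-obsolete Transform API.
// Assumes `image` is a Bitmap whose pixel format is in SupportedFormats.
var surf = new SpeededUpRobustFeaturesDetector();
var points = surf.Transform(image);

// Each detected point carries its position, scale, and orientation:
foreach (var p in points)
    Console.WriteLine("x={0}, y={1}", p.X, p.Y);
```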
Extension Methods

Name | Description
---|---
HasMethod | Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)
IsEqual | Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)
To(Type) | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
To&lt;T&gt; | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
Based on original implementation in the OpenSURF computer vision library by Christopher Evans (http://www.chrisevansdev.com). Used under the LGPL with permission of the original author.
Be aware that SURF is a patented algorithm. If you plan to use it in a commercial application, you may have to acquire a license from the patent holder.
The first example shows how to extract SURF descriptors from a standard test image:
```csharp
// Let's load an example image, such as Lena,
// from a standard dataset of example images:
var images = new TestImages(path: localPath);
Bitmap lena = images["lena.bmp"];

// Create a new SURF with the default parameter values:
var surf = new SpeededUpRobustFeaturesDetector(threshold: 0.0002f, octaves: 5, initial: 2);

// Use it to extract the SURF point descriptors from the Lena image:
List<SpeededUpRobustFeaturePoint> descriptors = surf.ProcessImage(lena);

// We can obtain the actual double[] descriptors using:
double[][] features = descriptors.Apply(d => d.Descriptor);

// Now those descriptors can be used to represent the image itself, such
// as for example, in the Bag-of-Visual-Words approach for classification.
```
The second example shows how to use SURF descriptors as part of a BagOfVisualWords (BoW) pipeline for image classification:
```csharp
// Ensure results are reproducible
Accord.Math.Random.Generator.Seed = 0;

// The Bag-of-Visual-Words model converts images of arbitrary
// size into fixed-length feature vectors. In this example, we
// will be setting the codebook size to 10. This means all feature
// vectors that will be generated will have the same length of 10.

// By default, the BoW object will use the sparse SURF as the
// feature extractor and K-means as the clustering algorithm.

// Create a new Bag-of-Visual-Words (BoW) model
var bow = BagOfVisualWords.Create(numberOfWords: 10);
// Note: a simple BoW model can also be created using
// var bow = new BagOfVisualWords(numberOfWords: 10);

// Get some training images
Bitmap[] images = GetImages();

// Compute the model
bow.Learn(images);

// After this point, we will be able to translate
// images into double[] feature vectors using
double[][] features = bow.Transform(images);

// We can also check some statistics about the dataset:
int numberOfImages = bow.Statistics.TotalNumberOfInstances; // 6

// Statistics about all the descriptors that have been extracted:
int totalDescriptors = bow.Statistics.TotalNumberOfDescriptors; // 4132
double totalMean = bow.Statistics.TotalNumberOfDescriptorsPerInstance.Mean; // 688.66666666666663
double totalVar = bow.Statistics.TotalNumberOfDescriptorsPerInstance.Variance; // 96745.866666666669
IntRange totalRange = bow.Statistics.TotalNumberOfDescriptorsPerInstanceRange; // [409, 1265]

// Statistics only about the descriptors that have been actually used:
int takenDescriptors = bow.Statistics.NumberOfDescriptorsTaken; // 4132
double takenMean = bow.Statistics.NumberOfDescriptorsTakenPerInstance.Mean; // 688.66666666666663
double takenVar = bow.Statistics.NumberOfDescriptorsTakenPerInstance.Variance; // 96745.866666666669
IntRange takenRange = bow.Statistics.NumberOfDescriptorsTakenPerInstanceRange; // [409, 1265]

// Now, the features can be used to train any classification
// algorithm as if they were the images themselves. For example,
// let's assume the first three images belong to a class and
// the second three to another class. We can train an SVM using
int[] labels = { -1, -1, -1, +1, +1, +1 };

// Create the SMO algorithm to learn a Linear kernel SVM
var teacher = new SequentialMinimalOptimization<Linear>()
{
    Complexity = 10000 // make a hard margin SVM
};

// Obtain a learned machine
var svm = teacher.Learn(features, labels);

// Use the machine to classify the features
bool[] output = svm.Decide(features);

// Compute the error between the expected and predicted labels
double error = new ZeroOneLoss(labels).Loss(output);
```