TwoSampleTTestPowerAnalysis Class
Namespace: Accord.Statistics.Testing.Power
```csharp
[SerializableAttribute]
public class TwoSampleTTestPowerAnalysis : BaseTwoSamplePowerAnalysis
```
The TwoSampleTTestPowerAnalysis type exposes the following members.
Constructors

Name | Description
---|---
TwoSampleTTestPowerAnalysis(TwoSampleHypothesis) | Creates a new TwoSampleTTestPowerAnalysis for the given hypothesis.
TwoSampleTTestPowerAnalysis(TwoSampleTTest) | Creates a new TwoSampleTTestPowerAnalysis from an already computed TwoSampleTTest.
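Both constructors produce the same kind of analysis; they differ only in whether the hypothesis is supplied directly or taken from a test that has already been computed. The following is a minimal sketch rather than an excerpt from the library documentation; the sample arrays are placeholder data:

```csharp
using Accord.Statistics.Testing;
using Accord.Statistics.Testing.Power;

// Placeholder data, used only to illustrate the two constructors:
double[] sampleA = { 5.0, 6.1, 7.2, 6.8, 5.9 };
double[] sampleB = { 4.1, 4.8, 5.2, 4.6, 5.0 };

// A priori: build the analysis directly from the hypothesis under study
var planned = new TwoSampleTTestPowerAnalysis(TwoSampleHypothesis.ValuesAreDifferent);

// A posteriori: build the analysis from a test that has already been run
var test = new TwoSampleTTest(sampleA, sampleB, assumeEqualVariances: false);
var observed = new TwoSampleTTestPowerAnalysis(test);
```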
Properties

Name | Description
---|---
Effect | Gets or sets the effect size of the test. (Inherited from BaseTwoSamplePowerAnalysis.)
Power | Gets or sets the power of the test, also known as 1 - Beta (one minus the Type II error rate). (Inherited from BaseTwoSamplePowerAnalysis.)
Samples1 | Gets or sets the number of observations in the first sample considered in the test. (Inherited from BaseTwoSamplePowerAnalysis.)
Samples2 | Gets or sets the number of observations in the second sample considered in the test. (Inherited from BaseTwoSamplePowerAnalysis.)
Size | Gets or sets the significance level for the test, also known as alpha. (Inherited from BaseTwoSamplePowerAnalysis.)
Tail | Gets the test type. (Inherited from BaseTwoSamplePowerAnalysis.)
TotalSamples | Gets the total number of observations in both samples considered in the test. (Inherited from BaseTwoSamplePowerAnalysis.)
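These properties work together: in a typical workflow two of them are fixed and one of the Compute methods listed below solves for the remaining one. A minimal sketch, using placeholder values rather than numbers computed by the library:

```csharp
// Assumed placeholder values; only the property names come from this page.
var analysis = new TwoSampleTTestPowerAnalysis(TwoSampleHypothesis.ValuesAreDifferent)
{
    Effect = 0.5,  // standardized effect size we hope to detect
    Size   = 0.05, // significance level (alpha)
    Power  = 0.80  // desired power (1 - Beta)
};

// Samples1 and Samples2 hold each group's observation count,
// TotalSamples is their combined total, and Tail reports the test type:
double total = analysis.TotalSamples;
var tail = analysis.Tail;
```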
Methods

Name | Description
---|---
Clone | Creates a new object that is a copy of the current instance. (Inherited from BaseTwoSamplePowerAnalysis.)
ComputeEffect | Computes the minimum detectable effect size for the test, considering the power given in Power, the number of samples in TotalSamples, and the significance level Size. (Inherited from BaseTwoSamplePowerAnalysis.)
ComputePower | Computes the power of the test. (Overrides BaseTwoSamplePowerAnalysis.ComputePower.)
ComputeSamples | Computes the recommended sample size for the test to attain the power indicated in Power, considering the values of Effect and Size. (Inherited from BaseTwoSamplePowerAnalysis.)
ComputeSize | Computes the minimum significance level for the test, considering the power given in Power, the number of samples in TotalSamples, and the effect size Effect. (Inherited from BaseTwoSamplePowerAnalysis.)
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetDiferentiableUnits(Double) | Gets the minimum difference, in the experiment's units, that the test is able to detect. (Inherited from BaseTwoSamplePowerAnalysis.)
GetDiferentiableUnits(Double, Double) | Gets the minimum difference, in the experiment's units, that the test is able to detect. (Inherited from BaseTwoSamplePowerAnalysis.)
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetSampleSize(Double, Double, Double, Double, Double, TwoSampleHypothesis) | Estimates the number of samples necessary to attain the required power level for the given effect size.
GetSampleSize(Double, Double, Double, Double, Double, Double, TwoSampleHypothesis) | Estimates the number of samples necessary to attain the required power level for the given effect size.
GetType | Gets the Type of the current instance. (Inherited from Object.)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
ToString | Converts the numeric power of this test to its equivalent string representation. (Inherited from BaseTwoSamplePowerAnalysis.)
ToString(String, IFormatProvider) | Converts the numeric power of this test to its equivalent string representation. (Inherited from BaseTwoSamplePowerAnalysis.)
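The Compute* methods all follow the fix-two, solve-for-the-third pattern described above. A minimal sketch under assumed placeholder values; none of the numbers below come from the library documentation, and the single-argument GetDiferentiableUnits call is assumed to take the experiment's standard deviation, as suggested by the Effect * sigma computation in the example further below:

```csharp
var analysis = new TwoSampleTTestPowerAnalysis(TwoSampleHypothesis.ValuesAreDifferent);

// Fix the desired power and the effect size, then solve for the sample size:
analysis.Power = 0.80;
analysis.Effect = 0.5;
analysis.ComputeSamples();
double perGroup = analysis.Samples1;  // recommended observations in the first group

// Or fix the sample sizes, then solve for the smallest detectable effect:
analysis.Samples1 = 20;
analysis.Samples2 = 20;
analysis.ComputeEffect();
double detectable = analysis.Effect;  // smallest standardized effect we could spot

// GetDiferentiableUnits converts that standardized effect back into the
// experiment's own units (here assuming a standard deviation of 2.0):
double units = analysis.GetDiferentiableUnits(2.0);
```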
Extension Methods

Name | Description
---|---
HasMethod | Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)
IsEqual | Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)
To(Type) | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
To&lt;T&gt;() | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
Examples

There are different ways in which a power analysis can be conducted.
```csharp
using System;
using Accord.Statistics;
using Accord.Statistics.Testing;
using Accord.Statistics.Testing.Power;

// Let's say we have two samples, and we would like to know whether those
// samples have the same mean. For this, we can perform a two sample T-Test:
double[] A = { 5.0, 6.0, 7.9, 6.95, 5.3, 10.0, 7.48, 9.4, 7.6, 8.0, 6.22 };
double[] B = { 5.0, 1.6, 5.75, 5.80, 2.9, 8.88, 4.56, 2.4, 5.0, 10.0 };

// Perform the test, assuming the samples have unequal variances
var test = new TwoSampleTTest(A, B, assumeEqualVariances: false);

double df = test.DegreesOfFreedom;   // d.f. = 14.351
double t  = test.Statistic;          // t    = 2.14
double p  = test.PValue;             // p    = 0.04999
bool significant = test.Significant; // true

// The test gave us an indication that the samples may
// indeed have come from different distributions (whose
// mean value is actually distinct from each other).

// Now, we would like to perform an _a posteriori_ analysis of the
// test. When doing an a posteriori analysis, we cannot change some
// characteristics of the test (because it has already been done),
// but we can measure some important features that may indicate
// whether the test is trustworthy or not.

// One of the first things to check is the test's power. A test's
// power is the probability of correctly rejecting the null hypothesis
// when it is actually false; equivalently, it is 1 minus the Type II
// (Beta) error rate. It is the counterpart of the significance level,
// which bounds the probability of rejecting the null hypothesis when
// it is actually true. Ideally, the power should be a high value:
double power = test.Analysis.Power; // 0.5376260

// Check how much effect we are trying to detect
double effect = test.Analysis.Effect; // 0.94566

// With this power, what is the minimal difference we can spot?
double sigma = Math.Sqrt(test.Variance);
double thres = test.Analysis.Effect * sigma; // 2.0700909090909

// This means that, using our test, the smallest difference that we
// could detect with some confidence would be around 2 units in the
// experiment's measurement scale. If we would like to call the samples
// different when they are less than 2 units apart, we would need to
// repeat our experiment differently.
```
Another way to create the power analysis is to pass the hypothesis test directly to the TwoSampleTTestPowerAnalysis constructor:
```csharp
// Create an a posteriori analysis of the experiment
var analysis = new TwoSampleTTestPowerAnalysis(test);

// When creating a power analysis, there are three things we can change.
// We can always freely configure two of them and then ask the analysis
// to give us the third. Those are:
double e = analysis.Effect;       // the test's minimum detectable effect size (0.94566)
double n = analysis.TotalSamples; // the number of samples in the test (21, i.e. 11 + 10)
double b = analysis.Power;        // the test's power, i.e. 1 minus the probability
                                  // of committing a Type II error (0.53)

// Let's say we would like to create a test with 80% power.
analysis.Power = 0.8;
analysis.ComputeEffect(); // what effect could we detect?

double detectableEffect = analysis.Effect; // we would detect a difference of 1.290514
```
However, to achieve this 80% power, we would need to redo our experiment more carefully. Since we are going to redo the experiment, we have more freedom over what we can change and what we cannot. To better address those points, we will create an a priori analysis of the experiment:
```csharp
// We would like to know how many samples we would need to gather in
// order to achieve an 80% power test which can detect an effect size
// of one standard deviation:
//
analysis = TwoSampleTTestPowerAnalysis.GetSampleSize
(
    variance1: A.Variance(),
    variance2: B.Variance(),
    delta: 1.0, // the minimum detectable difference we want
    power: 0.8  // the test power that we want
);

// How many samples would we need in order to see the effect we need?
int n1 = (int)Math.Ceiling(analysis.Samples1); // 77
int n2 = (int)Math.Ceiling(analysis.Samples2); // 77

// According to our power analysis, we would need at least 77
// observations in each sample in order to see the effect we
// need with the required 80% power.
```
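If, instead of gathering more observations, we were limited to a fixed budget per group, the same class can be used in the other direction to estimate the power we would actually reach. A minimal sketch under that assumption; the 30 observations per group are illustrative, and the resulting power is whatever ComputePower reports rather than a documented value:

```csharp
// Suppose the budget only allows 30 observations per group; fix the sample
// sizes and the effect size, then ask what power we would achieve:
var whatIf = new TwoSampleTTestPowerAnalysis(TwoSampleHypothesis.ValuesAreDifferent)
{
    Samples1 = 30,
    Samples2 = 30,
    Effect   = 1.0,  // a standardized effect of one standard deviation
    Size     = 0.05
};

whatIf.ComputePower();
double achievable = whatIf.Power; // the power attainable under those constraints
```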