Serial Correlation in Time Series Analysis | QuantStart
A Type I error occurs when a true null hypothesis is rejected. In quality control it is often described as the risk that consumers reject a good product or service, one that actually meets the standard stated by the null hypothesis.
Linear regression and correlation assume that the data points are independent of each other, meaning that the value of one data point does not depend on the value of any other data point. The most common violation of this assumption in regression and correlation is in time series data, where some Y variable has been measured at different times. For example, biologists have counted the number of moose on Isle Royale, a large island in Lake Superior, every year. Moose live a long time, so the number of moose in one year is not independent of the number of moose in the previous year; it is highly dependent on it. If the number of moose in one year is high, the number in the next year will probably be pretty high, and if the number of moose is low one year, the number will probably be low the next year as well. This kind of non-independence, or "autocorrelation," can give you a "significant" regression or correlation much more often than 5% of the time, even when the null hypothesis of no relationship between time and Y is true. If both X and Y are time series—for example, if you analyze the number of wolves and the number of moose on Isle Royale—you can also get a "significant" relationship between them much too often.
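A minimal simulation (not from the original text; it assumes Python with NumPy and SciPy) makes this inflation concrete: correlating two independent random walks, which are strongly autocorrelated, yields "significant" P values far more often than the nominal 5%.

```python
# Sketch: autocorrelation inflates the false-positive rate of a
# correlation test. Two INDEPENDENT random walks are correlated
# against each other; under the usual assumptions we would expect
# P < 0.05 about 5% of the time, but random walks trigger spurious
# "significant" results much more often.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_points = 1000, 50
false_positives = 0
for _ in range(n_trials):
    x = np.cumsum(rng.normal(size=n_points))  # random walk (autocorrelated)
    y = np.cumsum(rng.normal(size=n_points))  # a second, independent walk
    r, p = stats.pearsonr(x, y)
    if p < 0.05:
        false_positives += 1

print(false_positives / n_trials)  # typically far above the nominal 0.05
```

The walk length, trial count, and seed are arbitrary illustrative choices; the qualitative result (a false-positive rate several times the nominal level) does not depend on them.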
To illustrate how easy it is to fool yourself with time-series data, I tested the correlation between the number of moose on Isle Royale in the winter and the number of strikeouts thrown by major league baseball teams the following season, using data for 2004–2013. I did this separately for each baseball team, so there were 30 statistical tests. I'm pretty sure the null hypothesis is true (I can't think of anything that would affect both moose abundance in the winter and strikeouts the following summer), so with 30 baseball teams, you'd expect the P value to be less than 0.05 for 5% of the teams, or about one or two. Instead, the P value is significant for 7 teams, which means that if you were stupid enough to test the correlation of moose numbers and strikeouts by your favorite team, you'd have almost a 1-in-4 chance of convincing yourself there was a relationship between the two. Some of the correlations look pretty good: strikeout numbers by the Cleveland team and moose numbers have an r^2 of 0.70 and a P value of 0.002.
The task is to decide whether to accept the null hypothesis H0: μ = μ0, or to reject it in favor of the alternative hypothesis H1: μ is significantly different from μ0. The testing framework consists of computing the t-statistic: t = (x̄ − μ0) / (S / √n), where x̄ is the estimated mean and S² is the estimated variance based on n random observations.
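As a sketch of this one-sample t-test (assuming Python with SciPy; the sample values and claimed mean are made up for illustration):

```python
# One-sample t-test: H0: mu = mu0 against the two-sided alternative.
# The statistic is computed by hand and then checked against SciPy.
import numpy as np
from scipy import stats

mu0 = 5.0                            # claimed mean under H0 (illustrative)
sample = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.4, 4.7])

# t = (xbar - mu0) / (S / sqrt(n)), computed manually...
n = len(sample)
xbar = sample.mean()
s = sample.std(ddof=1)               # sample standard deviation S
t_manual = (xbar - mu0) / (s / np.sqrt(n))

# ...and via SciPy, which also returns the two-sided P value.
t_scipy, p_value = stats.ttest_1samp(sample, popmean=mu0)
print(t_manual, t_scipy, p_value)
```

The two t values agree, which confirms that `ttest_1samp` implements exactly the formula in the text.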
The test rejects the null hypothesis of no difference between the two populations if the difference between the two empirical distribution functions is "large". Prior to applying the KS test it is necessary to arrange each of the two sets of sample observations in a frequency table.
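A minimal sketch of this two-sample Kolmogorov–Smirnov test (assuming SciPy; the simulated samples are illustrative):

```python
# Two-sample KS test: the statistic D is the largest vertical distance
# between the two empirical distribution functions; a "large" D leads
# to rejection of H0 (same underlying distribution).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(loc=0.0, scale=1.0, size=200)
b = rng.normal(loc=1.0, scale=1.0, size=200)   # shifted distribution

d_stat, p_value = stats.ks_2samp(a, b)
print(d_stat, p_value)  # large D and small P: reject H0
```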
For example, we accept the alternative hypothesis H1 and reject the null H0 if an event is observed which is at least a times more likely under H1 than under H0.
In other words, the simplest correction is to move the cutoff point for the continuous distribution from the observed value of the discrete distribution to midway between that and the next value in the direction of the null hypothesis expectation.
We would need to test the null hypothesis that there is no correlation (H0: ρ = 0) between the two variables x and y.
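This test of H0: ρ = 0 can be sketched as follows (assuming SciPy; the data are made up for illustration):

```python
# Pearson correlation with a two-sided P value for H0: rho = 0.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 4.2, 4.0, 5.5, 6.1, 6.8, 8.3])

r, p_value = stats.pearsonr(x, y)   # r estimates rho; P tests rho = 0
print(r, p_value)
```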

In a statistical hypothesis test, the P value is the probability of observing a test statistic at least as extreme as the value actually observed, assuming that the null hypothesis is true.
The covariate methods provide the same full range of results provided by our earlier methods. That is, they provide: (a) a significance test (i.e., a test of whether we can reject the null hypothesis that the case's score, or score difference, is an observation from the scores, or score differences, in the control population); (b) point and interval estimates of the abnormality of the case's score, or score difference; and (c) point and interval estimates of the effect size for the difference between the case and controls.
Among three possible scenarios, the interesting case is testing the following null hypothesis based on a set of n random sample observations: H0: the variation equals the claimed value, against the alternative
H1: the variation is more than what is claimed, indicating that the quality is much lower than expected.
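A minimal sketch of this one-sided variance test (assuming SciPy; the sample and claimed variance are illustrative). Under H0 the statistic (n − 1)S²/σ0² follows a chi-square distribution with n − 1 degrees of freedom, and large values support H1.

```python
# One-sided chi-square test for a claimed variance sigma0^2.
import numpy as np
from scipy import stats

sigma0_sq = 1.0                     # claimed variance under H0 (illustrative)
sample = np.array([2.3, -1.8, 0.4, 3.1, -2.6, 1.9, -0.7, 2.8])

n = len(sample)
s_sq = sample.var(ddof=1)                       # sample variance S^2
chi2_stat = (n - 1) * s_sq / sigma0_sq
p_value = stats.chi2.sf(chi2_stat, df=n - 1)    # P(chi-square >= statistic)
print(chi2_stat, p_value)
```

Here the sample variance is far above the claimed value, so the test rejects H0 in favor of "more variation than claimed".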
Fortunately, numerous simulation studies have shown that regression and correlation are quite robust to deviations from normality; this means that even if one or both of the variables are nonnormal, the P value will be less than 0.05 about 5% of the time if the null hypothesis is true (Edgell and Noon 1984, and references therein). So in general, you can use linear regression/correlation without worrying about nonnormality.
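The kind of simulation those studies describe can be sketched like this (assuming NumPy and SciPy; sample size, skewed distribution, and trial count are illustrative choices): generate data under a true null with one strongly nonnormal variable, and check that the false-positive rate stays near 5%.

```python
# Robustness check: Pearson correlation with a skewed (exponential)
# variable, under a true null hypothesis (x and y independent).
# The fraction of trials with P < 0.05 should stay close to 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n_points = 2000, 30
false_positives = 0
for _ in range(n_trials):
    x = rng.exponential(size=n_points)   # nonnormal, strongly skewed
    y = rng.normal(size=n_points)        # independent of x, so H0 is true
    r, p = stats.pearsonr(x, y)
    if p < 0.05:
        false_positives += 1

print(false_positives / n_trials)  # close to the nominal 0.05
```

Contrast this with the random-walk case: nonnormality barely moves the false-positive rate, while non-independence inflates it drastically.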
Given that two populations have normal distributions, we wish to test the following null hypothesis regarding the equality of correlation coefficients: H0: ρ1 = ρ2, based on two observed correlation coefficients r1 and r2 obtained from two random samples of sizes n1 and n2, respectively, provided |r1| < 1 and |r2| < 1, and n1 and n2 are both greater than 3.
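The standard way to carry out this test is Fisher's z-transformation; a sketch (assuming SciPy, with illustrative values for r1, r2, n1, n2):

```python
# Test H0: rho1 = rho2 via Fisher's z-transformation.
# Requires |r| < 1 for both coefficients and both sample sizes > 3.
import math
from scipy import stats

r1, n1 = 0.70, 40    # first sample (illustrative values)
r2, n2 = 0.45, 50    # second sample (illustrative values)

z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher z-transform of r
se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
z_stat = (z1 - z2) / se
p_value = 2 * stats.norm.sf(abs(z_stat))     # two-sided P value
print(z_stat, p_value)
```

The n − 3 terms in the standard error are why both sample sizes must exceed 3.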
Since it's possible to think of multiple explanations for an association between two variables, does that mean you should cynically sneer "Correlation does not imply causation!" and dismiss any correlation studies of naturally occurring variation? No. For one thing, observing a correlation between two variables suggests that there's something interesting going on, something you may want to investigate further. For example, studies have shown a correlation between eating more fresh fruits and vegetables and lower blood pressure. It's possible that the correlation is because people with more money, who can afford fresh fruits and vegetables, have less stressful lives than poor people, and it's the difference in stress that affects blood pressure; it's also possible that people who are concerned about their health eat more fruits and vegetables and exercise more, and it's the exercise that affects blood pressure. But the correlation suggests that eating fruits and vegetables may reduce blood pressure. You'd want to test this hypothesis further: by looking for the correlation in samples of people with similar socioeconomic status and levels of exercise; by statistically controlling for possible confounding variables; by doing animal studies; or by giving human volunteers controlled diets with different amounts of fruits and vegetables. If your initial correlation study hadn't found an association of blood pressure with fruits and vegetables, you wouldn't have a reason to do these further studies. Correlation may not imply causation, but it tells you that something interesting is going on.