When you conduct a test of statistical significance, whether it is a t-test, a correlation, an ANOVA, or a regression, you are given a p-value in the output. Almost always, this p-value is for a two-tailed test.

If you are using a significance level of .05, a two-tailed test divides this value in half, meaning that .025 is in each tail of the distribution (see picture below).
Splitting the significance level across the two tails makes it more difficult to achieve statistical significance. However, it also means that you do not have to make a prediction about the direction of the effect. In other words, the effect can be either positive or negative and still be statistically significant.
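
To make the relationship concrete, here is a minimal sketch in Python using scipy (assuming scipy 1.6 or later is available; the group data are invented purely for illustration). It runs the same independent-samples t-test with a two-sided and then a one-sided alternative, showing that when the effect is in the predicted direction the one-tailed p-value is half of the two-tailed p-value.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical data: treatment group shifted slightly above control
    control = rng.normal(loc=0.0, scale=1.0, size=30)
    treatment = rng.normal(loc=0.5, scale=1.0, size=30)

    # Two-tailed test: the effect may be in either direction
    t_two, p_two = stats.ttest_ind(treatment, control, alternative="two-sided")

    # One-tailed test: the prediction is that treatment > control
    t_one, p_one = stats.ttest_ind(treatment, control, alternative="greater")

    print(f"two-tailed: t = {t_two:.3f}, p = {p_two:.4f}")
    print(f"one-tailed: t = {t_one:.3f}, p = {p_one:.4f}")  # half of p_two when t is positive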

There is much debate about whether a two-tailed test should still be used when the researcher has a prediction about the direction of the effect. For example, one might expect that a new experimental treatment is better than no treatment at all. Such a researcher would not expect the treatment to make people worse (although it is a possibility, which is part of the argument for always doing a two-tailed test).

One advantage of the one-tailed test is that it has more power than a two-tailed test. Because the entire alpha level is placed in one tail, the cutoff for significance is less extreme, so you are less likely to miss a real effect and the probability of a Type II error is reduced (assuming that you have correctly predicted the direction of the effect). The probability of a Type I error is the same because the same alpha level is used.
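
As an illustration of the power difference, the sketch below uses statsmodels to compare the power of a one-tailed and a two-tailed independent-samples t-test at the same alpha level. The effect size and sample size are hypothetical, chosen only to show the pattern.

    import statsmodels.stats.power as smp

    analysis = smp.TTestIndPower()

    # Hypothetical scenario: medium effect (d = 0.5), 50 participants per group, alpha = .05
    power_two = analysis.power(effect_size=0.5, nobs1=50, alpha=0.05, alternative="two-sided")
    power_one = analysis.power(effect_size=0.5, nobs1=50, alpha=0.05, alternative="larger")

    print(f"two-tailed power: {power_two:.3f}")  # roughly .70
    print(f"one-tailed power: {power_one:.3f}")  # roughly .80, so the Type II error rate is lower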

Each picture below represents a one-tailed test (but in opposite directions).
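
To accompany the pictures, the following sketch computes where the rejection region falls in each case, using the standard normal distribution for simplicity (with a t distribution the cutoffs are slightly more extreme).

    from scipy import stats

    alpha = 0.05

    # Two-tailed: alpha/2 in each tail
    lower, upper = stats.norm.ppf(alpha / 2), stats.norm.ppf(1 - alpha / 2)
    print(f"two-tailed cutoffs: {lower:.3f} and {upper:.3f}")      # about -1.960 and +1.960

    # One-tailed, predicting a positive effect: all of alpha in the upper tail
    print(f"upper-tail cutoff: {stats.norm.ppf(1 - alpha):.3f}")   # about +1.645

    # One-tailed, predicting a negative effect: all of alpha in the lower tail
    print(f"lower-tail cutoff: {stats.norm.ppf(alpha):.3f}")       # about -1.645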