Hypothesis Testing | A Step-by-Step Guide with Easy Examples

Published on November 8, 2019 by Rebecca Bevans. Revised on June 22, 2023.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

  • State your research hypothesis as a null hypothesis (H0) and an alternate hypothesis (Ha or H1).
  • Collect data in a way designed to test the hypothesis.
  • Perform an appropriate statistical test.
  • Decide whether to reject or fail to reject your null hypothesis.
  • Present the findings in your results and discussion section.

Though the specific details might vary, the procedure you will use when testing a hypothesis will always follow some version of these steps.

Table of contents

  • Step 1: State your null and alternate hypothesis
  • Step 2: Collect data
  • Step 3: Perform a statistical test
  • Step 4: Decide whether to reject or fail to reject your null hypothesis
  • Step 5: Present your findings
  • Other interesting articles
  • Frequently asked questions about hypothesis testing

After developing your initial research hypothesis (the prediction that you want to investigate), it is important to restate it as a null (H0) and alternate (Ha) hypothesis so that you can test it mathematically.

The alternate hypothesis is usually your initial hypothesis that predicts a relationship between variables. The null hypothesis is a prediction of no relationship between the variables you are interested in.

  • H0: Men are, on average, not taller than women. Ha: Men are, on average, taller than women.


For a statistical test to be valid, it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in.

There are a variety of statistical tests available, but they are all based on the comparison of within-group variance (how spread out the data is within a category) versus between-group variance (how different the categories are from one another).

If the between-group variance is large enough that there is little or no overlap between groups, then your statistical test will reflect that by showing a low p-value. This means it is unlikely that the differences between these groups came about by chance.

Alternatively, if there is high within-group variance and low between-group variance, then your statistical test will reflect that with a high p-value. This means it is likely that any difference you measure between groups is due to chance.

Your choice of statistical test will be based on the type of variables and the level of measurement of your collected data. For the height example, a statistical test comparing the two groups will give you:

  • an estimate of the difference in average height between the two groups.
  • a p-value showing how likely you are to see this difference if the null hypothesis of no difference is true.
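For instance, here is a minimal sketch of such a test in Python. The height data, the variable names, and the choice of a one-tailed two-sample t test are illustrative assumptions, not part of the original example (the `alternative` keyword also assumes a reasonably recent SciPy):

```python
from scipy import stats

# Hypothetical height samples in cm (illustrative numbers only)
men = [178, 182, 175, 180, 185, 177, 183, 179, 181, 176]
women = [165, 170, 162, 168, 172, 167, 164, 169, 171, 166]

# Estimate of the difference in average height between the two groups
mean_difference = sum(men) / len(men) - sum(women) / len(women)

# One-tailed two-sample t test: Ha is that men are, on average, taller
result = stats.ttest_ind(men, women, alternative="greater")

print(f"Estimated difference: {mean_difference:.1f} cm")
print(f"p-value: {result.pvalue:.4f}")
```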

Based on the outcome of your statistical test, you will have to decide whether to reject or fail to reject your null hypothesis.

In most cases you will use the p-value generated by your statistical test to guide your decision. And in most cases, your predetermined level of significance for rejecting the null hypothesis will be 0.05 – that is, when there is a less than 5% chance that you would see these results if the null hypothesis were true.

In some cases, researchers choose a more conservative level of significance, such as 0.01 (1%). This minimizes the risk of incorrectly rejecting the null hypothesis ( Type I error ).

The results of hypothesis testing will be presented in the results and discussion sections of your research paper, dissertation or thesis.

In the results section you should give a brief summary of the data and a summary of the results of your statistical test (for example, the estimated difference between group means and associated p-value). In the discussion, you can discuss whether your initial hypothesis was supported by your results or not.

In the formal language of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You will probably be asked to do this in your statistics assignments.

However, when presenting research results in academic papers we rarely talk this way. Instead, we go back to our alternate hypothesis (in this case, the hypothesis that men are on average taller than women) and state whether the result of our test did or did not support the alternate hypothesis.

If your null hypothesis was rejected, this result is interpreted as “supported the alternate hypothesis.”

These are superficial differences; you can see that they mean the same thing.

You might notice that we don’t say that we reject or fail to reject the alternate hypothesis. This is because hypothesis testing is not designed to prove or disprove anything. It is only designed to test whether a pattern we measure could have arisen spuriously, or by chance.

If we reject the null hypothesis based on our research (i.e., we find that it is unlikely that the pattern arose by chance), then we can say our test lends support to our hypothesis. But if the pattern does not pass our decision rule, meaning that it could have arisen by chance, then we say the test is inconsistent with our hypothesis.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

Statistics

  • Normal distribution
  • Descriptive statistics
  • Measures of central tendency
  • Correlation coefficient

Methodology

  • Cluster sampling
  • Stratified sampling
  • Types of interviews
  • Cohort study
  • Thematic analysis

Research bias

  • Implicit bias
  • Cognitive bias
  • Survivorship bias
  • Availability heuristic
  • Nonresponse bias
  • Regression to the mean

Frequently asked questions about hypothesis testing

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.



Hypothesis Testing: 4 Steps and Example


Hypothesis testing, sometimes called significance testing, is an act in statistics whereby an analyst tests an assumption regarding a population parameter. The methodology employed by the analyst depends on the nature of the data used and the reason for the analysis.

Hypothesis testing is used to assess the plausibility of a hypothesis by using sample data. Such data may come from a larger population or a data-generating process. The word "population" will be used for both of these cases in the following descriptions.

Key Takeaways

  • Hypothesis testing is used to assess the plausibility of a hypothesis by using sample data.
  • The test provides evidence concerning the plausibility of the hypothesis, given the data.
  • Statistical analysts test a hypothesis by measuring and examining a random sample of the population being analyzed.
  • The four steps of hypothesis testing include stating the hypotheses, formulating an analysis plan, analyzing the sample data, and analyzing the result.

How Hypothesis Testing Works

In hypothesis testing, an  analyst  tests a statistical sample, intending to provide evidence on the plausibility of the null hypothesis. Statistical analysts measure and examine a random sample of the population being analyzed. All analysts use a random population sample to test two different hypotheses: the null hypothesis and the alternative hypothesis.

The null hypothesis is usually a hypothesis of equality between population parameters; e.g., a null hypothesis may state that the population mean return is equal to zero. The alternative hypothesis is effectively the opposite of a null hypothesis. Thus, they are mutually exclusive , and only one can be true. However, one of the two hypotheses will always be true.

The null hypothesis is a statement about a population parameter, such as the population mean, that is assumed to be true.

4 Step Process

All hypothesis testing methods follow the same four-step process:

  • State the hypotheses.
  • Formulate an analysis plan, which outlines how the data will be evaluated.
  • Carry out the plan and analyze the sample data.
  • Analyze the results and either reject the null hypothesis, or state that the null hypothesis is plausible, given the data.

Example of Hypothesis Testing

If an individual wants to test that a penny has exactly a 50% chance of landing on heads, the null hypothesis would be that 50% is correct, and the alternative hypothesis would be that 50% is not correct. Mathematically, the null hypothesis is represented as Ho: P = 0.5. The alternative hypothesis is shown as "Ha" and is identical to the null hypothesis, except with the equal sign struck-through, meaning that it does not equal 50%.

A random sample of 100 coin flips is taken, and the null hypothesis is tested. If it is found that the 100 coin flips were distributed as 40 heads and 60 tails, the analyst would assume that a penny does not have a 50% chance of landing on heads and would reject the null hypothesis and accept the alternative hypothesis.

If there were 48 heads and 52 tails, then it is plausible that the coin could be fair and still produce such a result. In cases such as this where the null hypothesis is "accepted," the analyst states that the difference between the expected results (50 heads and 50 tails) and the observed results (48 heads and 52 tails) is "explainable by chance alone."
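As a rough sketch of how this might be checked in Python, the snippet below uses a normal-approximation z test for a proportion. The choice of test and the code are my own illustrative assumptions, not something specified in the article:

```python
import math
from scipy import stats

n, heads = 100, 40          # observed: 40 heads in 100 flips
p0 = 0.5                    # null hypothesis: P = 0.5
p_hat = heads / n

# z statistic for a one-sample proportion, using the null value for the standard error
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# Two-sided p-value from the standard normal distribution
p_value = 2 * stats.norm.cdf(-abs(z))

print(f"z = {z:.2f}, p-value = {p_value:.4f}")  # roughly z = -2.0, p ≈ 0.046
# Since p < 0.05, the analyst would reject the null hypothesis that the coin is fair.
```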

When Did Hypothesis Testing Begin?

Some statisticians attribute the first hypothesis tests to satirical writer John Arbuthnot in 1710, who studied male and female births in England after observing that in nearly every year, male births exceeded female births by a slight proportion. Arbuthnot calculated that the probability of this happening by chance was small, and therefore it was due to “divine providence.”

What Are the Benefits of Hypothesis Testing?

Hypothesis testing helps assess the accuracy of new ideas or theories by testing them against data. This allows researchers to determine whether the evidence supports their hypothesis, helping to avoid false claims and conclusions. Hypothesis testing also provides a framework for decision-making based on data rather than personal opinions or biases. By relying on statistical analysis, hypothesis testing helps to reduce the effects of chance and confounding variables, providing a robust framework for making informed conclusions.

What Are the Limitations of Hypothesis Testing?

Hypothesis testing relies exclusively on data and doesn’t provide a comprehensive understanding of the subject being studied. Additionally, the accuracy of the results depends on the quality of the available data and the statistical methods used. Inaccurate data or inappropriate hypothesis formulation may lead to incorrect conclusions or failed tests. Hypothesis testing can also lead to errors, such as analysts either accepting or rejecting a null hypothesis when they shouldn’t have. These errors may result in false conclusions or missed opportunities to identify significant patterns or relationships in the data.

Hypothesis testing refers to a statistical process that helps researchers determine the reliability of a study. By using a well-formulated hypothesis and set of statistical tests, individuals or businesses can make inferences about the population that they are studying and draw conclusions based on the data presented. All hypothesis testing methods have the same four-step process, which includes stating the hypotheses, formulating an analysis plan, analyzing the sample data, and analyzing the result.

Sage. " Introduction to Hypothesis Testing ," Page 4.

Elder Research. " Who Invented the Null Hypothesis? "

Formplus. " Hypothesis Testing: Definition, Uses, Limitations and Examples ."


Statistics By Jim

Making statistics intuitive

Statistical Hypothesis Testing Overview

By Jim Frost

In this blog post, I explain why you need to use statistical hypothesis testing and help you navigate the essential terminology. Hypothesis testing is a crucial procedure to perform when you want to make inferences about a population using a random sample. These inferences include estimating population properties such as the mean, differences between means, proportions, and the relationships between variables.

This post provides an overview of statistical hypothesis testing. If you need to perform hypothesis tests, consider getting my book, Hypothesis Testing: An Intuitive Guide .

Why You Should Perform Statistical Hypothesis Testing

[Figure: mean drug scores by group. Hypothesis testing determines whether the difference between the means is statistically significant.]

Hypothesis testing is a form of inferential statistics that allows us to draw conclusions about an entire population based on a representative sample. You gain tremendous benefits by working with a sample. In most cases, it is simply impossible to observe the entire population to understand its properties. The only alternative is to collect a random sample and then use statistics to analyze it.

While samples are much more practical and less expensive to work with, there are trade-offs. When you estimate the properties of a population from a sample, the sample statistics are unlikely to equal the actual population value exactly.  For instance, your sample mean is unlikely to equal the population mean. The difference between the sample statistic and the population value is the sample error.

Differences that researchers observe in samples might be due to sampling error rather than representing a true effect at the population level. If sampling error causes the observed difference, the next time someone performs the same experiment the results might be different. Hypothesis testing incorporates estimates of the sampling error to help you make the correct decision. Learn more about Sampling Error .

For example, if you are studying the proportion of defects produced by two manufacturing methods, any difference you observe between the two sample proportions might be sample error rather than a true difference. If the difference does not exist at the population level, you won’t obtain the benefits that you expect based on the sample statistics. That can be a costly mistake!

Let’s cover some basic hypothesis testing terms that you need to know.

Background information : Difference between Descriptive and Inferential Statistics and Populations, Parameters, and Samples in Inferential Statistics

Hypothesis Testing

Hypothesis testing is a statistical analysis that uses sample data to assess two mutually exclusive theories about the properties of a population. Statisticians call these theories the null hypothesis and the alternative hypothesis. A hypothesis test assesses your sample statistic and factors in an estimate of the sample error to determine which hypothesis the data support.

When you can reject the null hypothesis, the results are statistically significant, and your data support the theory that an effect exists at the population level.

The effect is the difference between the population value and the null hypothesis value. The effect is also known as population effect or the difference. For example, the mean difference between the health outcome for a treatment group and a control group is the effect.

Typically, you do not know the size of the actual effect. However, you can use a hypothesis test to help you determine whether an effect exists and to estimate its size. Hypothesis tests convert your sample effect into a test statistic, which it evaluates for statistical significance. Learn more about Test Statistics .

An effect can be statistically significant, but that doesn’t necessarily indicate that it is important in a real-world, practical sense. For more information, read my post about Statistical vs. Practical Significance .

Null Hypothesis

The null hypothesis is one of two mutually exclusive theories about the properties of the population in hypothesis testing. Typically, the null hypothesis states that there is no effect (i.e., the effect size equals zero). The null is often signified by H 0 .

In all hypothesis testing, the researchers are testing an effect of some sort. The effect can be the effectiveness of a new vaccination, the durability of a new product, the proportion of defects in a manufacturing process, and so on. There is some benefit or difference that the researchers hope to identify.

However, it’s possible that there is no effect or no difference between the experimental groups. In statistics, we call this lack of an effect the null hypothesis. Therefore, if you can reject the null, you can favor the alternative hypothesis, which states that the effect exists (doesn’t equal zero) at the population level.

You can think of the null as the default theory, which you can reject only when the evidence against it is sufficiently strong.

For example, in a 2-sample t-test, the null often states that the difference between the two means equals zero.

When you can reject the null hypothesis, your results are statistically significant. Learn more about Statistical Significance: Definition & Meaning .

Related post : Understanding the Null Hypothesis in More Detail

Alternative Hypothesis

The alternative hypothesis is the other theory about the properties of the population in hypothesis testing. Typically, the alternative hypothesis states that a population parameter does not equal the null hypothesis value. In other words, there is a non-zero effect. If your sample contains sufficient evidence, you can reject the null and favor the alternative hypothesis. The alternative is often identified with H 1 or H A .

For example, in a 2-sample t-test, the alternative often states that the difference between the two means does not equal zero.

You can specify either a one- or two-tailed alternative hypothesis:

If you perform a two-tailed hypothesis test, the alternative states that the population parameter does not equal the null value. For example, when the alternative hypothesis is H A : μ ≠ 0, the test can detect differences both greater than and less than the null value.

A one-tailed alternative has more power to detect an effect but it can test for a difference in only one direction. For example, H A : μ > 0 can only test for differences that are greater than zero.
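To illustrate the difference, here is a small sketch using SciPy's one-sample t test. The data are made up, and the `alternative` keyword assumes a reasonably recent SciPy version:

```python
from scipy import stats

# Hypothetical sample of differences; H0 is that the population mean equals 0
sample = [0.8, 1.5, -0.3, 2.1, 0.9, 1.2, 0.4, 1.8, -0.1, 1.1]

# Two-tailed test: HA is mu != 0, detects differences in either direction
two_tailed = stats.ttest_1samp(sample, popmean=0, alternative="two-sided")

# One-tailed test: HA is mu > 0, more power but only for positive differences
one_tailed = stats.ttest_1samp(sample, popmean=0, alternative="greater")

print(f"two-tailed p = {two_tailed.pvalue:.4f}")
print(f"one-tailed p = {one_tailed.pvalue:.4f}")  # half the two-tailed p for a positive effect
```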

Related posts : Understanding T-tests and One-Tailed and Two-Tailed Hypothesis Tests Explained

P-values

P-values are the probability that you would obtain the effect observed in your sample, or larger, if the null hypothesis is correct. In simpler terms, p-values tell you how strongly your sample data contradict the null. Lower p-values represent stronger evidence against the null. You use P-values in conjunction with the significance level to determine whether your data favor the null or alternative hypothesis.

Related post : Interpreting P-values Correctly

Significance Level (Alpha)

The significance level, denoted by alpha (α), is the probability of rejecting the null hypothesis when it is actually true.

For instance, a significance level of 0.05 signifies a 5% risk of deciding that an effect exists when it does not exist.

Use p-values and significance levels together to help you determine which hypothesis the data support. If the p-value is less than your significance level, you can reject the null and conclude that the effect is statistically significant. In other words, the evidence in your sample is strong enough to be able to reject the null hypothesis at the population level.

Related posts : Graphical Approach to Significance Levels and P-values and Conceptual Approach to Understanding Significance Levels

Types of Errors in Hypothesis Testing

Statistical hypothesis tests are not 100% accurate because they use a random sample to draw conclusions about entire populations. There are two types of errors related to drawing an incorrect conclusion.

  • False positives: You reject a null that is true. Statisticians call this a Type I error . The Type I error rate equals your significance level or alpha (α).
  • False negatives: You fail to reject a null that is false. Statisticians call this a Type II error. Generally, you do not know the Type II error rate. However, it is a larger risk when you have a small sample size , noisy data, or a small effect size. The type II error rate is also known as beta (β).

Statistical power is the probability that a hypothesis test correctly infers that a sample effect exists in the population. In other words, the test correctly rejects a false null hypothesis. Consequently, power is inversely related to a Type II error. Power = 1 – β. Learn more about Power in Statistics .
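As a quick illustration of that relationship, the sketch below uses statsmodels' power calculations for a two-sample t test to find the sample size needed for a target power. The effect size and targets are arbitrary example values, and the statsmodels package is assumed to be installed:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium standardized effect (d = 0.5)
# with alpha = 0.05 and power = 0.80 (so beta, the Type II error rate, is 0.20)
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64
```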

Related posts : Types of Errors in Hypothesis Testing and Estimating a Good Sample Size for Your Study Using Power Analysis

Which Type of Hypothesis Test is Right for You?

There are many different types of procedures you can use. The correct choice depends on your research goals and the data you collect. Do you need to understand the mean or the differences between means? Or, perhaps you need to assess proportions. You can even use hypothesis testing to determine whether the relationships between variables are statistically significant.

To choose the proper statistical procedure, you’ll need to assess your study objectives and collect the correct type of data . This background research is necessary before you begin a study.

Related Post : Hypothesis Tests for Continuous, Binary, and Count Data

Statistical tests are crucial when you want to use sample data to make conclusions about a population because these tests account for sample error. Using significance levels and p-values to determine when to reject the null hypothesis improves the probability that you will draw the correct conclusion.

To see an alternative approach to these traditional hypothesis testing methods, learn about bootstrapping in statistics !

If you want to see examples of hypothesis testing in action, I recommend the following posts that I have written:

  • How Effective Are Flu Shots? This example shows how you can use statistics to test proportions.
  • Fatality Rates in Star Trek . This example shows how to use hypothesis testing with categorical data.
  • Busting Myths About the Battle of the Sexes. A fun example based on a Mythbusters episode that assesses continuous data using several different tests.
  • Are Yawns Contagious? Another fun example inspired by a Mythbusters episode.


Reader Interactions


January 14, 2024 at 8:43 am

Hello professor Jim, how are you doing! Pls. What are the properties of a population and their examples? Thanks for your time and understanding.


January 14, 2024 at 12:57 pm

Please read my post about Populations vs. Samples for more information and examples.

Also, please note there is a search bar in the upper-right margin of my website. Use that to search for topics.


July 5, 2023 at 7:05 am

Hello, I have a question as I read your post. You say in p-values section

“P-values are the probability that you would obtain the effect observed in your sample, or larger, if the null hypothesis is correct. In simpler terms, p-values tell you how strongly your sample data contradict the null. Lower p-values represent stronger evidence against the null.”

But according to your definition of effect, the null states that an effect does not exist, correct? So what I assume you want to say is that “P-values are the probability that you would obtain the effect observed in your sample, or larger, if the null hypothesis is **incorrect**.”

July 6, 2023 at 5:18 am

Hi Shrinivas,

The correct definition of p-value is that it is a probability that exists in the context of a true null hypothesis. So, the quotation is correct in stating “if the null hypothesis is correct.”

Essentially, the p-value tells you the likelihood of your observed results (or more extreme) if the null hypothesis is true. It gives you an idea of whether your results are surprising or unusual if there is no effect.

Hence, with sufficiently low p-values, you reject the null hypothesis because it’s telling you that your sample results were unlikely to have occurred if there was no effect in the population.

I hope that helps make it more clear. If not, let me know I’ll attempt to clarify!


May 8, 2023 at 12:47 am

Thanks a lot Ny best regards

May 7, 2023 at 11:15 pm

Hi Jim Can you tell me something about size effect? Thanks

May 8, 2023 at 12:29 am

Here’s a post that I’ve written about Effect Sizes that will hopefully tell you what you need to know. Please read that. Then, if you have any more specific questions about effect sizes, please post them there. Thanks!


January 7, 2023 at 4:19 pm

Hi Jim, I have only read two pages so far but I am really amazed because in few paragraphs you made me clearly understand the concepts of months of courses I received in biostatistics! Thanks so much for this work you have done it helps a lot!

January 10, 2023 at 3:25 pm

Thanks so much!


June 17, 2021 at 1:45 pm

Can you help in the following question: Rocinante36 is priced at ₹7 lakh and has been designed to deliver a mileage of 22 km/litre and a top speed of 140 km/hr. Formulate the null and alternative hypotheses for mileage and top speed to check whether the new models are performing as per the desired design specifications.


April 19, 2021 at 1:51 pm

Its indeed great to read your work statistics.

I have a doubt regarding the one sample t-test. So as per your book on hypothesis testing with reference to page no 45, you have mentioned the difference between “the sample mean and the hypothesised mean is statistically significant”. So as per my understanding it should be quoted like “the difference between the population mean and the hypothesised mean is statistically significant”. The catch here is the hypothesised mean represents the sample mean.

Please help me understand this.

Regards Rajat

April 19, 2021 at 3:46 pm

Thanks for buying my book. I’m so glad it’s been helpful!

The test is performed on the sample but the results apply to the population. Hence, if the difference between the sample mean (observed in your study) and the hypothesized mean is statistically significant, that suggests that population does not equal the hypothesized mean.

For one sample tests, the hypothesized mean is not the sample mean. It is a mean that you want to use for the test value. It usually represents a value that is important to your research. In other words, it’s a value that you pick for some theoretical/practical reasons. You pick it because you want to determine whether the population mean is different from that particular value.

I hope that helps!


November 5, 2020 at 6:24 am

Jim, you are such a magnificent statistician/economist/econometrician/data scientist etc whatever profession. Your work inspires and simplifies the lives of so many researchers around the world. I truly admire you and your work. I will buy a copy of each book you have on statistics or econometrics. Keep doing the good work. Remain ever blessed

November 6, 2020 at 9:47 pm

Hi Renatus,

Thanks so much for you very kind comments. You made my day!! I’m so glad that my website has been helpful. And, thanks so much for supporting my books! 🙂


November 2, 2020 at 9:32 pm

Hi Jim, I hope you are aware of 2019 American Statistical Association’s official statement on Statistical Significance: https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913 In case you do not bother reading the full article, may I quote you the core message here: “We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term “statistically significant” entirely. Nor should variants such as “significantly different,” “p < 0.05,” and “nonsignificant” survive, whether expressed in words, by asterisks in a table, or in some other way."

With best wishes,

November 3, 2020 at 2:09 am

I’m definitely aware of the debate surrounding how to use p-values most effectively. However, I need to correct you on one point. The link you provide is NOT a statement by the American Statistical Association. It is an editorial by several authors.

There is considerable debate over this issue. There are problems with p-values. However, as the authors state themselves, much of the problem is over people’s mindsets about how to use p-values and their incorrect interpretations about what statistical significance does and does not mean.

If you were to read my website more thoroughly, you’d be aware that I share many of their concerns and I address them in multiple posts. One of the authors’ key points is the need to be thoughtful and conduct thoughtful research and analysis. I emphasize this aspect in multiple posts on this topic. I’ll ask you to read the following three because they all address some of the authors’ concerns and suggestions. But you might run across others to read as well.

Five Tips for Using P-values to Avoid Being Misled How to Interpret P-values Correctly P-values and the Reproducibility of Experimental Results


September 24, 2020 at 11:52 pm

HI Jim, i just want you to know that you made explanation for Statistics so simple! I should say lesser and fewer words that reduce the complexity. All the best! 🙂

September 25, 2020 at 1:03 am

Thanks, Rene! Your kind words mean a lot to me! I’m so glad it has been helpful!


September 23, 2020 at 2:21 am

Honestly, I never understood stats during my entire M.Ed course and was another nightmare for me. But how easily you have explained each concept, I have understood stats way beyond my imagination. Thank you so much for helping ignorant research scholars like us. Looking forward to get hardcopy of your book. Kindly tell is it available through flipkart?

September 24, 2020 at 11:14 pm

I’m so happy to hear that my website has been helpful!

I checked on flipkart and it appears like my books are not available there. I’m never exactly sure where they’re available due to the vagaries of different distribution channels. They are available on Amazon in India.

Introduction to Statistics: An Intuitive Guide (Amazon IN) Hypothesis Testing: An Intuitive Guide (Amazon IN)


July 26, 2020 at 11:57 am

Dear Jim I am a teacher from India . I don’t have any background in statistics, and still I should tell that in a single read I can follow your explanations . I take my entire biostatistics class for botany graduates with your explanations. Thanks a lot. May I know how I can avail your books in India

July 28, 2020 at 12:31 am

Right now my books are only available as ebooks from my website. However, soon I’ll have some exciting news about other ways to obtain it. Stay tuned! I’ll announce it on my email list. If you’re not already on it, you can sign up using the form that is in the right margin of my website.


June 22, 2020 at 2:02 pm

Also can you please let me if this book covers topics like EDA and principal component analysis?

June 22, 2020 at 2:07 pm

This book doesn’t cover principal components analysis. Although, I wouldn’t really classify that as a hypothesis test. In the future, I might write a multivariate analysis book that would cover this and others. But, that’s well down the road.

My Introduction to Statistics covers EDA. That’s the largely graphical look at your data that you often do prior to hypothesis testing. The Introduction book perfectly leads right into the Hypothesis Testing book.

June 22, 2020 at 1:45 pm

Thanks for the detailed explanation. It does clear my doubts. I saw that your book related to hypothesis testing has the topics that I am studying currently. I am looking forward to purchasing it.

Regards, Take Care

June 19, 2020 at 1:03 pm

For this particular article I did not understand a couple of statements and it would great if you could help: 1)”If sample error causes the observed difference, the next time someone performs the same experiment the results might be different.” 2)”If the difference does not exist at the population level, you won’t obtain the benefits that you expect based on the sample statistics.”

I discovered your articles by chance and now I keep coming back to read & understand statistical concepts. These articles are very informative & easy to digest. Thanks for the simplifying things.

June 20, 2020 at 9:53 pm

I’m so happy to hear that you’ve found my website to be helpful!

To answer your questions, keep in mind that a central tenant of inferential statistics is that the random sample that a study drew was only one of an infinite number of possible it could’ve drawn. Each random sample produces different results. Most results will cluster around the population value assuming they used good methodology. However, random sampling error always exists and makes it so that population estimates from a sample almost never exactly equal the correct population value.

So, imagine that we’re studying a medication and comparing the treatment and control groups. Suppose that the medicine is truly not effective and that the population difference between the treatment and control group is zero (i.e., no difference.) Despite the true difference being zero, most sample estimates will show some degree of either a positive or negative effect thanks to random sampling error. So, just because a study has an observed difference does not mean that a difference exists at the population level. So, on to your questions:

1. If the observed difference is just random error, then it makes sense that if you collected another random sample, the difference could change. It could change from negative to positive, positive to negative, more extreme, less extreme, etc. However, if the difference exists at the population level, most random samples drawn from the population will reflect that difference. If the medicine has an effect, most random samples will reflect that fact and not bounce around on both sides of zero as much.

2. This is closely related to the previous answer. If there is no difference at the population level, but say you approve the medicine because of the observed effects in a sample. Even though your random sample showed an effect (which was really random error), that effect doesn’t exist. So, when you start using it on a larger scale, people won’t benefit from the medicine. That’s why it’s important to separate out what is easily explained by random error versus what is not easily explained by it.

I think reading my post about how hypothesis tests work will help clarify this process. Also, in about 24 hours (as I write this), I’ll be releasing my new ebook about Hypothesis Testing!


May 29, 2020 at 5:23 am

Hi Jim, I really enjoy your blog. Can you please link me on your blog where you discuss about Subgroup analysis and how it is done? I need to use non parametric and parametric statistical methods for my work and also do subgroup analysis in order to identify potential groups of patients that may benefit more from using a treatment than other groups.

May 29, 2020 at 2:12 pm

Hi, I don’t have a specific article about subgroup analysis. However, subgroup analysis is just the dividing up of a larger sample into subgroups and then analyzing those subgroups separately. You can use the various analyses I write about on the subgroups.

Alternatively, you can include the subgroups in regression analysis as an indicator variable and include that variable as a main effect and an interaction effect to see how the relationships vary by subgroup without needing to subdivide your data. I write about that approach in my article about comparing regression lines . This approach is my preferred approach when possible.


April 19, 2020 at 7:58 am

sir is confidence interval is a part of estimation?


April 17, 2020 at 3:36 pm

Sir can u plz briefly explain alternatives of hypothesis testing? I m unable to find the answer

April 18, 2020 at 1:22 am

Assuming you want to draw conclusions about populations by using samples (i.e., inferential statistics ), you can use confidence intervals and bootstrap methods as alternatives to the traditional hypothesis testing methods.


March 9, 2020 at 10:01 pm

Hi JIm, could you please help with activities that can best teach concepts of hypothesis testing through simulation, Also, do you have any question set that would enhance students intuition why learning hypothesis testing as a topic in introductory statistics. Thanks.


March 5, 2020 at 3:48 pm

Hi Jim, I’m studying multiple hypothesis testing & was wondering if you had any material that would be relevant. I’m more trying to understand how testing multiple samples simultaneously affects your results & more on the Bonferroni Correction

March 5, 2020 at 4:05 pm

I write about multiple comparisons (aka post hoc tests) in the ANOVA context . I don’t talk about Bonferroni Corrections specifically but I cover related types of corrections. I’m not sure if that exactly addresses what you want to know but is probably the closest I have already written. I hope it helps!


January 14, 2020 at 9:03 pm

Thank you! Have a great day/evening.

January 13, 2020 at 7:10 pm

Any help would be greatly appreciated. What is the difference between The Hypothesis Test and The Statistical Test of Hypothesis?

January 14, 2020 at 11:02 am

They sound like the same thing to me. Unless this is specialized terminology for a particular field or the author was intending something specific, I’d guess they’re one and the same.


April 1, 2019 at 10:00 am

so these are the only two forms of Hypothesis used in statistical testing?

April 1, 2019 at 10:02 am

Are you referring to the null and alternative hypothesis? If so, yes, that’s those are the standard hypotheses in a statistical hypothesis test.

April 1, 2019 at 9:57 am

year very insightful post, thanks for the write up


October 27, 2018 at 11:09 pm

hi there, am upcoming statistician, out of all blogs that i have read, i have found this one more useful as long as my problem is concerned. thanks so much

October 27, 2018 at 11:14 pm

Hi Stano, you’re very welcome! Thanks for your kind words. They mean a lot! I’m happy to hear that my posts were able to help you. I’m sure you will be a fantastic statistician. Best of luck with your studies!


October 26, 2018 at 11:39 am

Dear Jim, thank you very much for your explanations! I have a question. Can I use t-test to compare two samples in case each of them have right bias?

October 26, 2018 at 12:00 pm

Hi Tetyana,

You’re very welcome!

The term “right bias” is not a standard term. Do you by chance mean right skewed distributions? In other words, if you plot the distribution for each group on a histogram they have longer right tails? These are not the symmetrical bell-shape curves of the normal distribution.

If that’s the case, yes you can as long as you exceed a specific sample size within each group. I include a table that contains these sample size requirements in my post about nonparametric vs parametric analyses .

Bias in statistics refers to cases where an estimate of a value is systematically higher or lower than the true value. If this is the case, you might be able to use t-tests, but you’d need to be sure to understand the nature of the bias so you would understand what the results are really indicating.

I hope this helps!


April 2, 2018 at 7:28 am

Simple and upto the point 👍 Thank you so much.

April 2, 2018 at 11:11 am

Hi Kalpana, thanks! And I’m glad it was helpful!


March 26, 2018 at 8:41 am

Am I correct if I say: Alpha – Probability of wrongly rejection of null hypothesis P-value – Probability of wrongly acceptance of null hypothesis

March 28, 2018 at 3:14 pm

You’re correct about alpha. Alpha is the probability of rejecting the null hypothesis when the null is true.

Unfortunately, your definition of the p-value is a bit off. The p-value has a fairly convoluted definition. It is the probability of obtaining the effect observed in a sample, or more extreme, if the null hypothesis is true. The p-value does NOT indicate the probability that either the null or alternative is true or false. Although, those are very common misinterpretations. To learn more, read my post about how to interpret p-values correctly .


March 2, 2018 at 6:10 pm

I recently started reading your blog and it is very helpful to understand each concept of statistical tests in easy way with some good examples. Also, I recommend to other people go through all these blogs which you posted. Specially for those people who have not statistical background and they are facing to many problems while studying statistical analysis.

Thank you for your such good blogs.

March 3, 2018 at 10:12 pm

Hi Amit, I’m so glad that my blog posts have been helpful for you! It means a lot to me that you took the time to write such a nice comment! Also, thanks for recommending by blog to others! I try really hard to write posts about statistics that are easy to understand.


January 17, 2018 at 7:03 am

I recently started reading your blog and I find it very interesting. I am learning statistics by my own, and I generally do many google search to understand the concepts. So this blog is quite helpful for me, as it have most of the content which I am looking for.

January 17, 2018 at 3:56 pm

Hi Shashank, thank you! And, I’m very glad to hear that my blog is helpful!


January 2, 2018 at 2:28 pm

thank u very much sir.

January 2, 2018 at 2:36 pm

You’re very welcome, Hiral!


November 21, 2017 at 12:43 pm

Thank u so much sir….your posts always helps me to be a #statistician

November 21, 2017 at 2:40 pm

Hi Sachin, you’re very welcome! I’m happy that you find my posts to be helpful!


November 19, 2017 at 8:22 pm

great post as usual, but it would be nice to see an example.

November 19, 2017 at 8:27 pm

Thank you! At the end of this post, I have links to four other posts that show examples of hypothesis tests in action. You’ll find what you’re looking for in those posts!



Hypothesis Testing


A hypothesis test is a statistical inference method used to test the significance of a proposed (hypothesized) relation between population statistics (parameters) and their corresponding sample estimators. In other words, hypothesis tests are used to determine whether there is enough evidence in a sample to support a hypothesis about the entire population.

The test considers two hypotheses: the null hypothesis , which is a statement meant to be tested, usually something like "there is no effect" with the intention of proving this false, and the alternate hypothesis , which is the statement meant to stand after the test is performed. The two hypotheses must be mutually exclusive ; moreover, in most applications, the two are complementary (one being the negation of the other). The test works by comparing the \(p\)-value to the level of significance (a chosen target). If the \(p\)-value is less than or equal to the level of significance, then the null hypothesis is rejected.

When analyzing data, only samples of a certain size may be practical to work with, and the underlying quantities often follow continuous or infinite distributions that cannot be observed in full. Hypothesis testing therefore offers a principled alternative to guessing which distribution, or which parameter values, the data follow.

Definitions and Methodology

Hypothesis Tests and Confidence Intervals

In statistical inference, properties (parameters) of a population are analyzed by sampling data sets. Given assumptions on the distribution, i.e. a statistical model of the data, certain hypotheses can be deduced from the known behavior of the model. These hypotheses must be tested against sampled data from the population.

The null hypothesis \((\)denoted \(H_0)\) is a statement that is assumed to be true. If the null hypothesis is rejected, then there is enough evidence (statistical significance) to accept the alternate hypothesis \((\)denoted \(H_1).\) Before doing any test for significance, both hypotheses must be clearly stated and non-conflicting, i.e. mutually exclusive, statements.

Rejecting the null hypothesis, given that it is true, is called a type I error and it is denoted \(\alpha\), which is also its probability of occurrence. Failing to reject the null hypothesis, given that it is false, is called a type II error and it is denoted \(\beta\), which is also its probability of occurrence. Also, \(\alpha\) is known as the significance level, and \(1-\beta\) is known as the power of the test.

|                | \(H_0\) is true  | \(H_0\) is false |
| Reject \(H_0\) | Type I error     | Correct decision |
| Reject \(H_1\) | Correct decision | Type II error    |

The test statistic is the standardized value computed from the sampled data, under the assumption that the null hypothesis is true, for a chosen particular test. These tests depend on the statistic to be studied and the assumed distribution it follows, e.g. the population mean following a normal distribution.

The \(p\)-value is the probability of observing a test statistic at least as extreme, in the direction of the alternate hypothesis, as the one computed, given that the null hypothesis is true. The critical value is the value of the assumed distribution of the test statistic such that the probability of making a type I error is small.
Methodology: Given an estimator \(\hat \theta\) of a population statistic \(\theta\), following a probability distribution \(P(T)\), computed from a sample \(\mathcal{S},\) and given a significance level \(\alpha:\)

  • Define \(H_0\) and \(H_1,\) and compute the test statistic \(t^*.\)
  • \(p\)-value approach (most prevalent): Find the \(p\)-value using \(t^*\) (right-tailed). If the \(p\)-value is at most \(\alpha,\) reject \(H_0\). Otherwise, reject \(H_1\).
  • Critical value approach: Find the critical value solving the equation \(P(T\geq t_\alpha)=\alpha\) (right-tailed). If \(t^*>t_\alpha\), reject \(H_0\). Otherwise, reject \(H_1\).

Note: Failing to reject \(H_0\) only means inability to accept \(H_1\); it does not mean to accept \(H_0\).
Assume a normally distributed population has recorded cholesterol levels with various statistics computed. From a sample of 100 subjects in the population, the sample mean was 214.12 mg/dL (milligrams per deciliter), with a sample standard deviation of 45.71 mg/dL. Perform a hypothesis test, with significance level 0.05, to test if there is enough evidence to conclude that the population mean is larger than 200 mg/dL.

We will perform a hypothesis test using the \(p\)-value approach with significance level \(\alpha=0.05:\)

  • Define \(H_0\): \(\mu=200\).
  • Define \(H_1\): \(\mu>200\).
  • Since our values are normally distributed, the test statistic is \(z^*=\frac{\bar X - \mu_0}{\frac{s}{\sqrt{n}}}=\frac{214.12 - 200}{\frac{45.71}{\sqrt{100}}}\approx 3.09\).
  • Using a standard normal distribution, we find that our \(p\)-value is approximately \(0.001\).
  • Since the \(p\)-value is at most \(\alpha=0.05,\) we reject \(H_0\).

Therefore, we can conclude that the test shows sufficient evidence to support the claim that \(\mu\) is larger than \(200\) mg/dL.
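A minimal Python check of the same calculation, using SciPy's normal distribution; this is just a numerical verification of the steps above, not part of the original solution:

```python
import math
from scipy import stats

x_bar, mu0, s, n = 214.12, 200, 45.71, 100

# Test statistic z* = (x_bar - mu0) / (s / sqrt(n))
z = (x_bar - mu0) / (s / math.sqrt(n))

# Right-tailed p-value from the standard normal distribution
p_value = stats.norm.sf(z)

print(f"z* = {z:.2f}, p-value = {p_value:.4f}")  # about z* = 3.09, p ≈ 0.001
```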

With a smaller sample size, the normal and \(t\)-distributions behave differently. Also, the next question calls for a two-tailed test instead.

Assume a population's cholesterol levels are recorded and various statistics are computed. From a sample of 25 subjects, the sample mean was 214.12 mg/dL (milligrams per deciliter), with a sample standard deviation of 45.71 mg/dL. Perform a hypothesis test, with significance level 0.05, to test if there is enough evidence to conclude that the population mean is not equal to 200 mg/dL.

We will perform a hypothesis test using the \(p\)-value approach with significance level \(\alpha=0.05\) and the \(t\)-distribution with 24 degrees of freedom:

  • Define \(H_0\): \(\mu=200\).
  • Define \(H_1\): \(\mu\neq 200\).
  • Using the \(t\)-distribution, the test statistic is \(t^*=\frac{\bar X - \mu_0}{\frac{s}{\sqrt{n}}}=\frac{214.12 - 200}{\frac{45.71}{\sqrt{25}}}\approx 1.54\).
  • Using a \(t\)-distribution with 24 degrees of freedom, we find that our \(p\)-value is approximately \(2(0.068)=0.136\). We have multiplied by two since this is a two-tailed argument, i.e. the mean can be smaller than or larger than.
  • Since the \(p\)-value is larger than \(\alpha=0.05,\) we fail to reject \(H_0\).

Therefore, the test does not show sufficient evidence to support the claim that \(\mu\) is not equal to \(200\) mg/dL.
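The same numerical check for the smaller sample, now with the \(t\)-distribution (again a verification sketch in Python, not part of the original solution):

```python
import math
from scipy import stats

x_bar, mu0, s, n = 214.12, 200, 45.71, 25

# Test statistic t* with n - 1 = 24 degrees of freedom
t_star = (x_bar - mu0) / (s / math.sqrt(n))

# Two-tailed p-value: double the right-tail probability
p_value = 2 * stats.t.sf(t_star, df=n - 1)

print(f"t* = {t_star:.2f}, p-value = {p_value:.3f}")  # about t* = 1.54, p ≈ 0.136
```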

The complement of the rejection on a two-tailed hypothesis test (with significance level \(\alpha\)) for a population parameter \(\theta\) is equivalent to finding a confidence interval \((\)with confidence level \(1-\alpha)\) for the population parameter \(\theta\). If the assumption on the parameter \(\theta\) falls inside the confidence interval, then the test has failed to reject the null hypothesis \((\)with \(p\)-value greater than \(\alpha).\) Otherwise, if \(\theta\) does not fall in the confidence interval, then the null hypothesis is rejected in favor of the alternate \((\)with \(p\)-value at most \(\alpha).\)

  • Statistics (Estimation)
  • Normal Distribution
  • Correlation
  • Confidence Intervals


Teach yourself statistics

What is Hypothesis Testing?

A statistical hypothesis is an assumption about a population parameter . This assumption may or may not be true. Hypothesis testing refers to the formal procedures used by statisticians to accept or reject statistical hypotheses.

Statistical Hypotheses

The best way to determine whether a statistical hypothesis is true would be to examine the entire population. Since that is often impractical, researchers typically examine a random sample from the population. If sample data are not consistent with the statistical hypothesis, the hypothesis is rejected.

There are two types of statistical hypotheses.

  • Null hypothesis . The null hypothesis, denoted by H o , is usually the hypothesis that sample observations result purely from chance.
  • Alternative hypothesis . The alternative hypothesis, denoted by H 1 or H a , is the hypothesis that sample observations are influenced by some non-random cause.

For example, suppose we wanted to determine whether a coin was fair and balanced. A null hypothesis might be that half the flips would result in Heads and half, in Tails. The alternative hypothesis might be that the number of Heads and Tails would be very different. Symbolically, these hypotheses would be expressed as

H o : P = 0.5
H a : P ≠ 0.5

Suppose we flipped the coin 50 times, resulting in 40 Heads and 10 Tails. Given this result, we would be inclined to reject the null hypothesis. We would conclude, based on the evidence, that the coin was probably not fair and balanced.
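For the curious, an exact binomial test in Python reaches the same conclusion. The particular function (available in SciPy 1.7 or later) and the 0.05 threshold are my own illustrative choices, not part of the original text:

```python
from scipy import stats

# 40 Heads in 50 flips, tested against Ho: P = 0.5 with Ha: P != 0.5
result = stats.binomtest(40, n=50, p=0.5, alternative="two-sided")

print(f"p-value = {result.pvalue:.6f}")
# The p-value is far below any common significance level (e.g., 0.05),
# so we would reject the null hypothesis that the coin is fair and balanced.
```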

Can We Accept the Null Hypothesis?

Some researchers say that a hypothesis test can have one of two outcomes: you accept the null hypothesis or you reject the null hypothesis. Many statisticians, however, take issue with the notion of "accepting the null hypothesis." Instead, they say: you reject the null hypothesis or you fail to reject the null hypothesis.

Why the distinction between "acceptance" and "failure to reject?" Acceptance implies that the null hypothesis is true. Failure to reject implies that the data are not sufficiently persuasive for us to prefer the alternative hypothesis over the null hypothesis.

Hypothesis Tests

Statisticians follow a formal process to determine whether to reject a null hypothesis, based on sample data. This process, called hypothesis testing , consists of four steps.

  • State the hypotheses. This involves stating the null and alternative hypotheses. The hypotheses are stated in such a way that they are mutually exclusive. That is, if one is true, the other must be false.
  • Formulate an analysis plan. The analysis plan describes how to use sample data to evaluate the null hypothesis. The evaluation often focuses around a single test statistic.
  • Analyze sample data. Find the value of the test statistic (mean score, proportion, t statistic, z-score, etc.) described in the analysis plan.
  • Interpret results. Apply the decision rule described in the analysis plan. If the value of the test statistic is unlikely, based on the null hypothesis, reject the null hypothesis.

Decision Errors

Two types of errors can result from a hypothesis test.

  • Type I error . A Type I error occurs when the researcher rejects a null hypothesis when it is true. The probability of committing a Type I error is called the significance level . This probability is also called alpha , and is often denoted by α.
  • Type II error . A Type II error occurs when the researcher fails to reject a null hypothesis that is false. The probability of committing a Type II error is called Beta , and is often denoted by β. The probability of not committing a Type II error is called the Power of the test.

Decision Rules

The analysis plan for a hypothesis test must include decision rules for rejecting the null hypothesis. In practice, statisticians describe these decision rules in two ways - with reference to a P-value or with reference to a region of acceptance.

  • P-value. The strength of evidence in support of a null hypothesis is measured by the P-value. Suppose the test statistic is equal to S. The P-value is the probability of observing a test statistic as extreme as S, assuming the null hypothesis is true. If the P-value is less than the significance level, we reject the null hypothesis.
  • Region of acceptance. The region of acceptance is a range of values. If the test statistic falls within the region of acceptance, the null hypothesis is not rejected. The region of acceptance is defined so that the chance of making a Type I error is equal to the significance level.

The set of values outside the region of acceptance is called the region of rejection. If the test statistic falls within the region of rejection, the null hypothesis is rejected. In such cases, we say that the hypothesis has been rejected at the α level of significance.

These approaches are equivalent. Some statistics texts use the P-value approach; others use the region of acceptance approach.

One-Tailed and Two-Tailed Tests

A test of a statistical hypothesis, where the region of rejection is on only one side of the sampling distribution , is called a one-tailed test . For example, suppose the null hypothesis states that the mean is less than or equal to 10. The alternative hypothesis would be that the mean is greater than 10. The region of rejection would consist of a range of numbers located on the right side of sampling distribution; that is, a set of numbers greater than 10.

A test of a statistical hypothesis, where the region of rejection is on both sides of the sampling distribution, is called a two-tailed test . For example, suppose the null hypothesis states that the mean is equal to 10. The alternative hypothesis would be that the mean is less than 10 or greater than 10. The region of rejection would consist of a range of numbers located on both sides of sampling distribution; that is, the region of rejection would consist partly of numbers that were less than 10 and partly of numbers that were greater than 10.
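Continuing the illustrative numbers from the sketch above, the following snippet shows how the same test statistic yields a one-tailed or a two-tailed p-value depending on how the rejection region is defined; scipy is used only for the normal tail areas.

```python
from scipy import stats
import numpy as np

# Same illustrative numbers as above: test statistic for H0: mu = 10 (or mu <= 10).
mu0, sigma, n, sample_mean = 10, 2.0, 40, 10.8
z = (sample_mean - mu0) / (sigma / np.sqrt(n))

# One-tailed (right tail): Ha: mu > 10, so only large positive z counts as evidence.
p_one_tailed = stats.norm.sf(z)

# Two-tailed: Ha: mu != 10, so large deviations in either direction count.
p_two_tailed = 2 * stats.norm.sf(abs(z))

print(f"one-tailed p = {p_one_tailed:.4f}, two-tailed p = {p_two_tailed:.4f}")
# For a symmetric distribution, the two-tailed p-value is twice the one-tailed value.
```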

Hypothesis Testing – A Deep Dive into Hypothesis Testing, The Backbone of Statistical Inference

  • September 21, 2023

Explore the intricacies of hypothesis testing, a cornerstone of statistical analysis. Dive into methods, interpretations, and applications for making data-driven decisions.

In this Blog post we will learn:

  • What is Hypothesis Testing?
  • Steps in Hypothesis Testing
    2.1. Set up Hypotheses: Null and Alternative
    2.2. Choose a Significance Level (α)
    2.3. Calculate a test statistic and P-Value
    2.4. Make a Decision
  • Example : Testing a new drug.
  • Example in python

1. What is Hypothesis Testing?

In simple terms, hypothesis testing is a method used to make decisions or inferences about population parameters based on sample data. Imagine being handed a die and asked if it's biased. By rolling it a few times and analyzing the outcomes, you'd be engaging in the essence of hypothesis testing.

Think of hypothesis testing as the scientific method of the statistics world. Suppose you hear claims like “This new drug works wonders!” or “Our new website design boosts sales.” How do you know if these statements hold water? Enter hypothesis testing.

2. Steps in Hypothesis Testing

  • Set up Hypotheses : Begin with a null hypothesis (H0) and an alternative hypothesis (Ha).
  • Choose a Significance Level (α) : Typically 0.05, this is the probability of rejecting the null hypothesis when it’s actually true. Think of it as the chance of accusing an innocent person.
  • Calculate Test statistic and P-Value : Gather evidence (data) and calculate a test statistic.
  • p-value : This is the probability of observing the data, given that the null hypothesis is true. A small p-value (typically ≤ 0.05) suggests the data is inconsistent with the null hypothesis.
  • Decision Rule : If the p-value is less than or equal to α, you reject the null hypothesis in favor of the alternative.

2.1. Set up Hypotheses: Null and Alternative

Before diving into testing, we must formulate hypotheses. The null hypothesis (H0) represents the default assumption, while the alternative hypothesis (H1) challenges it.

For instance, in drug testing, H0 : “The new drug is no better than the existing one,” H1 : “The new drug is superior .”

2.2. Choose a Significance Level (α)

You collect and analyze data to test H0 against H1. Based on your analysis, you decide whether to reject the null hypothesis in favor of the alternative, or fail to reject the null hypothesis.

The significance level, often denoted by α, represents the probability of rejecting the null hypothesis when it is actually true.

In other words, it’s the risk you’re willing to take of making a Type I error (false positive).

Type I Error (False Positive) :

  • Symbolized by the Greek letter alpha (α).
  • Occurs when you incorrectly reject a true null hypothesis . In other words, you conclude that there is an effect or difference when, in reality, there isn’t.
  • The probability of making a Type I error is denoted by the significance level of a test. Commonly, tests are conducted at the 0.05 significance level , which means there’s a 5% chance of making a Type I error .
  • Commonly used significance levels are 0.01, 0.05, and 0.10, but the choice depends on the context of the study and the level of risk one is willing to accept.

Example : If a drug is not effective (truth), but a clinical trial incorrectly concludes that it is effective (based on the sample data), then a Type I error has occurred.

Type II Error (False Negative) :

  • Symbolized by the Greek letter beta (β).
  • Occurs when you fail to reject a false null hypothesis . This means you conclude there is no effect or difference when, in reality, there is.
  • The probability of making a Type II error is denoted by β. The power of a test (1 – β) represents the probability of correctly rejecting a false null hypothesis.

Example : If a drug is effective (truth), but a clinical trial incorrectly concludes that it is not effective (based on the sample data), then a Type II error has occurred.

Balancing the Errors :

In practice, there’s a trade-off between Type I and Type II errors. Reducing the risk of one typically increases the risk of the other. For example, if you want to decrease the probability of a Type I error (by setting a lower significance level), you might increase the probability of a Type II error unless you compensate by collecting more data or making other adjustments.

It’s essential to understand the consequences of both types of errors in any given context. In some situations, a Type I error might be more severe, while in others, a Type II error might be of greater concern. This understanding guides researchers in designing their experiments and choosing appropriate significance levels.
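The trade-off can be made concrete with a short sketch. It assumes a one-sided z-test with a known population standard deviation and a hypothetical true mean under the alternative; all numbers are illustrative.

```python
from scipy import stats
import numpy as np

# Illustrative one-sided z-test: H0: mu = 100 vs Ha: mu > 100, sigma known.
mu0, mu_true, sigma, n = 100, 103, 15, 50   # mu_true is an assumed true mean under Ha
alpha = 0.05

# Reject H0 when the sample mean exceeds this cutoff.
cutoff = mu0 + stats.norm.ppf(1 - alpha) * sigma / np.sqrt(n)

# Type II error (beta): probability the sample mean stays below the cutoff
# even though the true mean is mu_true. Power is the complement.
beta = stats.norm.cdf(cutoff, loc=mu_true, scale=sigma / np.sqrt(n))
power = 1 - beta
print(f"cutoff = {cutoff:.2f}, beta = {beta:.3f}, power = {power:.3f}")

# Lowering alpha (say, to 0.01) pushes the cutoff higher and increases beta,
# unless the sample size n is increased to compensate.
```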

2.3. Calculate a test statistic and P-Value

Test statistic : A test statistic is a single number that helps us understand how far our sample data is from what we’d expect under a null hypothesis (a basic assumption we’re trying to test against). Generally, the larger the test statistic, the more evidence we have against our null hypothesis. It helps us decide whether the differences we observe in our data are due to random chance or if there’s an actual effect.

P-value : The P-value tells us how likely we would get our observed results (or something more extreme) if the null hypothesis were true. It's a value between 0 and 1.

  • A smaller P-value (typically below 0.05) means that the observation is rare under the null hypothesis, so we might reject the null hypothesis.
  • A larger P-value suggests that what we observed could easily happen by random chance, so we might not reject the null hypothesis.
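As a small illustration, the sketch below computes a test statistic and P-value for a made-up sample using a one-sample t-test from scipy.

```python
from scipy import stats

# Made-up sample: does the mean differ from a hypothesized value of 10?
sample = [10.8, 9.9, 11.2, 10.4, 10.9, 11.5, 9.7, 10.6]

t_stat, p_value = stats.ttest_1samp(sample, popmean=10)
print(f"test statistic t = {t_stat:.2f}")   # distance from H0 in standard-error units
print(f"p-value = {p_value:.3f}")           # chance of data this extreme if H0 were true
```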

2.4. Make a Decision

Relationship between α and the P-Value

When conducting a hypothesis test:

  • We first choose a significance level (α), which sets a threshold for making decisions.
  • We then calculate the p-value from our sample data and the test statistic.
  • Finally, we compare the p-value to our chosen α:

  • If p-value ≤ α: We reject the null hypothesis in favor of the alternative hypothesis. The result is said to be statistically significant.
  • If p-value > α: We fail to reject the null hypothesis. There isn't enough statistical evidence to support the alternative hypothesis.

3. Example : Testing a new drug.

Imagine we are investigating whether a new drug relieves headaches faster than a placebo.

Setting Up the Experiment : You gather 100 people who suffer from headaches. Half of them (50 people) are given the new drug (let's call this the 'Drug Group'), and the other half are given a sugar pill that doesn't contain any medication (the 'Placebo Group').

  • Set up Hypotheses : Before starting, you make a prediction:
  • Null Hypothesis (H0): The new drug has no effect. Any difference in healing time between the two groups is just due to random chance.
  • Alternative Hypothesis (H1): The new drug does have an effect. The difference in healing time between the two groups is significant and not just by chance.
  • Choose a Significance Level (α) : Typically 0.05, this is the probability of rejecting the null hypothesis when it's actually true.

Calculate Test statistic and P-Value : After the experiment, you analyze the data. The “test statistic” is a number that helps you understand the difference between the two groups in terms of standard units.

For instance, let’s say:

  • The average healing time in the Drug Group is 2 hours.
  • The average healing time in the Placebo Group is 3 hours.

The test statistic helps you understand how significant this 1-hour difference is. If the groups are large and the spread of healing times in each group is small, then this difference might be significant. But if there’s a huge variation in healing times, the 1-hour difference might not be so special.

Imagine the P-value as answering this question: “If the new drug had NO real effect, what’s the probability that I’d see a difference as extreme (or more extreme) as the one I found, just by random chance?”

For instance:

  • P-value of 0.01 means there’s a 1% chance that the observed difference (or a more extreme difference) would occur if the drug had no effect. That’s pretty rare, so we might consider the drug effective.
  • P-value of 0.5 means there’s a 50% chance you’d see this difference just by chance. That’s pretty high, so we might not be convinced the drug is doing much.
  • If the P-value is less than ($α$) 0.05: the results are “statistically significant,” and they might reject the null hypothesis , believing the new drug has an effect.
  • If the P-value is greater than ($α$) 0.05: the results are not statistically significant, and they don’t reject the null hypothesis , remaining unsure if the drug has a genuine effect.

4. Example in python

For simplicity, let’s say we’re using a t-test (common for comparing means). Let’s dive into Python:
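The code block appears to have been dropped from the original post, so here is a minimal reconstruction under the setup described above: 50 patients per group, healing times in hours, and an independent two-sample t-test. The simulated numbers are illustrative only, not real trial data.

```python
import numpy as np
from scipy import stats

np.random.seed(42)  # reproducible illustration

# Simulated healing times in hours (assumed values, not real trial data)
drug_group = np.random.normal(loc=2.0, scale=0.8, size=50)     # new drug
placebo_group = np.random.normal(loc=3.0, scale=0.9, size=50)  # sugar pill

# Independent two-sample t-test (Welch's version, which does not assume equal variances)
t_stat, p_value = stats.ttest_ind(drug_group, placebo_group, equal_var=False)

alpha = 0.05
print(f"t statistic = {t_stat:.2f}, p-value = {p_value:.4f}")
if p_value < alpha:
    print("The results are statistically significant! The drug seems to have an effect!")
else:
    print("Looks like the drug isn't as miraculous as we thought.")
```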

Making a Decision : If the p-value is less than 0.05, we'd say, "The results are statistically significant! The drug seems to have an effect!" If not, we'd say, "Looks like the drug isn't as miraculous as we thought."

5. Conclusion

Hypothesis testing is an indispensable tool in data science, allowing us to make data-driven decisions with confidence. By understanding its principles, conducting tests properly, and considering real-world applications, you can harness the power of hypothesis testing to unlock valuable insights from your data.

Hypothesis Testing: Understanding the Basics, Types, and Importance

Hypothesis testing is a statistical method used to determine whether a hypothesis about a population parameter is true or not. This technique helps researchers and decision-makers make informed decisions based on evidence rather than guesses. Hypothesis testing is an essential tool in scientific research, social sciences, and business analysis. In this article, we will delve deeper into the basics of hypothesis testing, types of hypotheses, significance level, p-values, and the importance of hypothesis testing.

  • Introduction
  • What is a hypothesis?
  • What is hypothesis testing?
  • Types of hypotheses: null hypothesis, alternative hypothesis
  • One-tailed and two-tailed tests
  • Significance level and p-values
  • Importance of hypothesis testing: avoiding Type I and Type II errors, making informed decisions, testing business strategies, A/B testing
  • Steps in hypothesis testing: formulating the null and alternative hypotheses, selecting the appropriate test, setting the level of significance, calculating the p-value, making a decision
  • Common misconceptions about hypothesis testing

A hypothesis is an assumption or a proposition made about a population parameter. It is a statement that can be tested and either supported or refuted. For example, a hypothesis could be that a new medication reduces the severity of symptoms in patients with a particular disease.

Hypothesis testing is a statistical method that helps to determine whether a hypothesis is true or not. It is a procedure that involves collecting and analyzing data to evaluate the probability of the null hypothesis being true. The null hypothesis is the hypothesis that there is no significant difference between a sample and the population.

In hypothesis testing, there are two types of hypotheses: null and alternative.

The null hypothesis, denoted by H0, is a statement of no effect, no relationship, or no difference between the sample and the population. It is assumed to be true until there is sufficient evidence to reject it. For example, the null hypothesis could be that there is no significant difference in the blood pressure of patients who received the medication and those who received a placebo.

The alternative hypothesis, denoted by H1, is a statement of an effect, relationship, or difference between the sample and the population. It is the opposite of the null hypothesis. For example, the alternative hypothesis could be that the medication reduces the blood pressure of patients compared to those who received a placebo.

There are two types of alternative hypotheses: one-tailed and two-tailed. A one-tailed test is used when there is a directional hypothesis. For example, the hypothesis could be that the medication reduces blood pressure. A two-tailed test is used when there is a non-directional hypothesis. For example, the hypothesis could be that there is a significant difference in blood pressure between patients who received the medication and those who received a placebo.

The significance level, denoted by α, is the probability of rejecting the null hypothesis when it is true. It is set at the beginning of the test, usually at 5% or 1%. The p-value is the probability of obtaining a test statistic as extreme as or more extreme than the observed one, assuming that the null hypothesis is true. If the p-value is less than the significance level, we reject the null hypothesis.

Importance of Hypothesis Testing

Hypothesis testing helps to avoid Type I and Type II errors. Type I error occurs when we reject the null hypothesis when it is actually true. Type II error occurs when we fail to reject the null hypothesis when it is actually false. By setting a significance level and calculating the p-value, we can control the probability of making these errors.

Hypothesis testing helps researchers and decision-makers make informed decisions based on evidence. For example, a medical researcher can use hypothesis testing to determine the effectiveness of a new drug. A business analyst can use hypothesis testing to evaluate the performance of a marketing campaign. By testing hypotheses, decision-makers can avoid making decisions based on guesses or assumptions.

Hypothesis testing is widely used in business analysis to test strategies and make data-driven decisions. For example, a business owner can use hypothesis testing to determine whether a new product will be profitable. By conducting A/B testing, businesses can compare the performance of two versions of a product and make data-driven decisions.

Examples of Hypothesis Testing

  • A/B testing is a popular technique used in online marketing and web design. It involves comparing two versions of a webpage or an advertisement to determine which one performs better. By conducting A/B testing, businesses can optimize their websites and advertisements to increase conversions and sales.

A t-test is used to compare the means of two samples. It is commonly used in medical research, social sciences, and business analysis. For example, a researcher can use a t-test to determine whether there is a significant difference in the cholesterol levels of patients who received a new drug and those who received a placebo.

Analysis of Variance (ANOVA) is a statistical technique used to compare the means of more than two samples. It is commonly used in medical research, social sciences, and business analysis. For example, a business owner can use ANOVA to determine whether there is a significant difference in the sales performance of three different stores.
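As a quick illustration of the store comparison, here is a sketch using scipy's one-way ANOVA; the monthly sales figures are assumed for the example.

```python
from scipy import stats

# Assumed monthly sales (in units) for three stores; the numbers are illustrative only.
store_a = [120, 135, 128, 140, 132, 125]
store_b = [115, 118, 122, 119, 121, 117]
store_c = [140, 145, 150, 138, 142, 147]

f_stat, p_value = stats.f_oneway(store_a, store_b, store_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests that at least one store's mean sales differ from the others.
```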

Steps in Hypothesis Testing

The first step in hypothesis testing is to formulate the null and alternative hypotheses. The null hypothesis is the hypothesis that there is no significant difference between the sample and the population, while the alternative hypothesis is the opposite.

The second step is to select the appropriate test based on the type of data and the research question. There are different types of tests for different types of data, such as t-test for continuous data and chi-square test for categorical data.

The third step is to set the level of significance, which is usually 5% or 1%. The significance level represents the probability of rejecting the null hypothesis when it is actually true.

The fourth step is to calculate the p-value, which represents the probability of obtaining a test statistic as extreme as or more extreme than the observed one, assuming that the null hypothesis is true.

The final step is to make a decision based on the p-value and the significance level. If the p-value is less than the significance level, we reject the null hypothesis. Otherwise, we fail to reject the null hypothesis.
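A minimal sketch of these steps for categorical data (an assumed A/B conversion table and a chi-square test of independence) might look like this.

```python
from scipy import stats

# Step 1: H0 -- conversion rate is independent of webpage version; H1 -- it is not.
# Step 2: the data are categorical counts, so a chi-square test of independence is chosen.
# Step 3: set the level of significance.
alpha = 0.05

# Assumed 2x2 table of counts (rows: version A / version B, columns: converted / not converted)
observed = [[45, 455],
            [70, 430]]

# Step 4: calculate the test statistic and p-value.
chi2, p_value, dof, expected = stats.chi2_contingency(observed)

# Step 5: make a decision.
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```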

There are several common misconceptions about hypothesis testing. One of the most common misconceptions is that rejecting the null hypothesis means that the alternative hypothesis is true. However, this is not necessarily the case. Rejecting the null hypothesis only means that there is evidence against it, but it does not prove that the alternative hypothesis is true. Another common misconception is that hypothesis testing can prove causality. However, hypothesis testing can only provide evidence for or against a hypothesis, and causality can only be inferred from a well-designed experiment.

Hypothesis testing is an important statistical technique used to test hypotheses and make informed decisions based on evidence. It helps to avoid Type I and Type II errors, and it is widely used in medical research, social sciences, and business analysis. By following the steps in hypothesis testing and avoiding common misconceptions, researchers and decision-makers can make data-driven decisions and avoid making decisions based on guesses or assumptions.

  • What is the difference between Type I and Type II errors in hypothesis testing?
  • Type I error occurs when we reject the null hypothesis when it is actually true, while Type II error occurs when we fail to reject the null hypothesis when it is actually false.
  • How do you select the appropriate test in hypothesis testing?
  • The appropriate test is selected based on the type of data and the research question. There are different types of tests for different types of data, such as t-test for continuous data and chi-square test for categorical data.
  • Can hypothesis testing prove causality?
  • No, hypothesis testing can only provide evidence for or against a hypothesis, and causality can only be inferred from a well-designed experiment.
  • Why is hypothesis testing important in business analysis?
  • Hypothesis testing is important in business analysis because it helps businesses make data-driven decisions and avoid making decisions based on guesses or assumptions. By testing hypotheses, businesses can evaluate the effectiveness of their strategies and optimize their performance.
  • What is A/B testing?
  • A/B testing is a technique that compares two versions of a webpage or an advertisement to determine which one performs better, helping businesses make data-driven decisions about their designs and campaigns.

Hypothesis Testing

Hypothesis testing is a tool for making statistical inferences about the population data. It is an analysis tool that tests assumptions and determines how likely something is within a given standard of accuracy. Hypothesis testing provides a way to verify whether the results of an experiment are valid.

A null hypothesis and an alternative hypothesis are set up before performing the hypothesis testing. This helps to arrive at a conclusion regarding the sample obtained from the population. In this article, we will learn more about hypothesis testing, its types, steps to perform the testing, and associated examples.

What is Hypothesis Testing in Statistics?

Hypothesis testing uses sample data from the population to draw useful conclusions regarding the population probability distribution . It tests an assumption made about the data using different types of hypothesis testing methodologies. The hypothesis testing results in either rejecting or not rejecting the null hypothesis.

Hypothesis Testing Definition

Hypothesis testing can be defined as a statistical tool that is used to identify if the results of an experiment are meaningful or not. It involves setting up a null hypothesis and an alternative hypothesis. These two hypotheses will always be mutually exclusive. This means that if the null hypothesis is true then the alternative hypothesis is false and vice versa. An example of hypothesis testing is setting up a test to check if a new medicine works on a disease in a more efficient manner.

Null Hypothesis

The null hypothesis is a concise mathematical statement that is used to indicate that there is no difference between two possibilities. In other words, there is no difference between certain characteristics of data. This hypothesis assumes that the outcomes of an experiment are based on chance alone. It is denoted as \(H_{0}\). Hypothesis testing is used to conclude if the null hypothesis can be rejected or not. Suppose an experiment is conducted to check if girls are shorter than boys at the age of 5. The null hypothesis will say that they are the same height.

Alternative Hypothesis

The alternative hypothesis is an alternative to the null hypothesis. It is used to show that the observations of an experiment are due to some real effect. It indicates that there is a statistical significance between two possible outcomes and can be denoted as \(H_{1}\) or \(H_{a}\). For the above-mentioned example, the alternative hypothesis would be that girls are shorter than boys at the age of 5.

Hypothesis Testing P Value

In hypothesis testing, the p value is used to indicate whether the results obtained after conducting a test are statistically significant or not. It also indicates the probability of making an error in rejecting or not rejecting the null hypothesis. This value is always a number between 0 and 1. The p value is compared to an alpha level, \(\alpha\), also called the significance level. The alpha level can be defined as the acceptable risk of incorrectly rejecting the null hypothesis. The alpha level is usually chosen between 1% and 5%.

Hypothesis Testing Critical region

All sets of values that lead to rejecting the null hypothesis lie in the critical region. Furthermore, the value that separates the critical region from the non-critical region is known as the critical value.

Hypothesis Testing Formula

Depending upon the type of data available and the size, different types of hypothesis testing are used to determine whether the null hypothesis can be rejected or not. The hypothesis testing formula for some important test statistics are given below:

  • z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\). \(\overline{x}\) is the sample mean, \(\mu\) is the population mean, \(\sigma\) is the population standard deviation and n is the size of the sample.
  • t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\). s is the sample standard deviation.
  • \(\chi ^{2} = \sum \frac{(O_{i}-E_{i})^{2}}{E_{i}}\). \(O_{i}\) is the observed value and \(E_{i}\) is the expected value.

We will learn more about these test statistics in the upcoming section.

Types of Hypothesis Testing

Selecting the correct test for performing hypothesis testing can be confusing. These tests are used to determine a test statistic on the basis of which the null hypothesis can either be rejected or not rejected. Some of the important tests used for hypothesis testing are given below.

Hypothesis Testing Z Test

A z test is a way of hypothesis testing that is used for a large sample size (n ≥ 30). It is used to determine whether there is a difference between the population mean and the sample mean when the population standard deviation is known. It can also be used to compare the mean of two samples. It is used to compute the z test statistic. The formulas are given as follows:

  • One sample: z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\).
  • Two samples: z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).

Hypothesis Testing t Test

The t test is another method of hypothesis testing that is used for a small sample size (n < 30). It is also used to compare the sample mean and population mean. However, the population standard deviation is not known. Instead, the sample standard deviation is known. The mean of two samples can also be compared using the t test.

  • One sample: t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\).
  • Two samples: t = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}\).

Hypothesis Testing Chi Square

The Chi square test is a hypothesis testing method that is used to check whether the variables in a population are independent or not. It is used when the test statistic is chi-squared distributed.
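As an illustration, the sketch below applies the chi-square formula above to an assumed set of die-roll counts (a goodness-of-fit test for a fair die), both by hand and with scipy.

```python
import numpy as np
from scipy import stats

# Assumed outcome counts from 60 rolls of a die; a fair die expects 10 of each face.
observed = np.array([8, 9, 12, 11, 6, 14])
expected = np.full(6, observed.sum() / 6)

# Chi-square statistic straight from the formula: sum of (O - E)^2 / E
chi2_manual = np.sum((observed - expected) ** 2 / expected)

# Same goodness-of-fit test via scipy; degrees of freedom = 6 - 1 = 5
chi2, p_value = stats.chisquare(observed, expected)
print(f"chi-square = {chi2:.2f} (manual: {chi2_manual:.2f}), p = {p_value:.3f}")
```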

One Tailed Hypothesis Testing

One tailed hypothesis testing is done when the rejection region is only in one direction. It can also be known as directional hypothesis testing because the effects can be tested in one direction only. This type of testing is further classified into the right tailed test and left tailed test.

Right Tailed Hypothesis Testing

The right tail test is also known as the upper tail test. This test is used to check whether the population parameter is greater than some value. The null and alternative hypotheses for this test are given as follows:

\(H_{0}\): The population parameter is ≤ some value

\(H_{1}\): The population parameter is > some value.

If the test statistic is greater than the critical value, the null hypothesis is rejected.

Right Tail Hypothesis Testing

Left Tailed Hypothesis Testing

The left tail test is also known as the lower tail test. It is used to check whether the population parameter is less than some value. The hypotheses for this hypothesis testing can be written as follows:

\(H_{0}\): The population parameter is ≥ some value

\(H_{1}\): The population parameter is < some value.

The null hypothesis is rejected if the test statistic has a value lesser than the critical value.

Left Tail Hypothesis Testing

Two Tailed Hypothesis Testing

In this hypothesis testing method, the critical region lies on both sides of the sampling distribution. It is also known as a non-directional hypothesis testing method. The two-tailed test is used to determine whether the population parameter is different from some hypothesized value. The hypotheses can be set up as follows:

\(H_{0}\): the population parameter = some value

\(H_{1}\): the population parameter ≠ some value

The null hypothesis is rejected if the test statistic falls in either rejection region, i.e., if its absolute value is greater than the critical value.

Two Tail Hypothesis Testing

Hypothesis Testing Steps

Hypothesis testing can be easily performed in five simple steps. The most important step is to correctly set up the hypotheses and identify the right method for hypothesis testing. The basic steps to perform hypothesis testing are as follows:

  • Step 1: Set up the null hypothesis by correctly identifying whether it is the left-tailed, right-tailed, or two-tailed hypothesis testing.
  • Step 2: Set up the alternative hypothesis.
  • Step 3: Choose the correct significance level, \(\alpha\), and find the critical value.
  • Step 4: Calculate the correct test statistic (z, t or \(\chi\)) and p-value.
  • Step 5: Compare the test statistic with the critical value or compare the p-value with \(\alpha\) to arrive at a conclusion. In other words, decide if the null hypothesis is to be rejected or not.

Hypothesis Testing Example

The best way to solve a problem on hypothesis testing is by applying the 5 steps mentioned in the previous section. Suppose a researcher claims that the mean weight of men is greater than 100 kg, with a population standard deviation of 15 kg. A sample of 30 men is chosen, with an average weight of 112.5 kg. Using hypothesis testing, check if there is enough evidence to support the researcher's claim. The confidence level is given as 95%.

Step 1: This is an example of a right-tailed test. Set up the null hypothesis as \(H_{0}\): \(\mu\) = 100.

Step 2: The alternative hypothesis is given by \(H_{1}\): \(\mu\) > 100.

Step 3: As this is a one-tailed test, \(\alpha\) = 100% - 95% = 5%. This can be used to determine the critical value.

1 - \(\alpha\) = 1 - 0.05 = 0.95

0.95 gives the required area under the curve. Now using a normal distribution table, the area 0.95 is at z = 1.645. A similar process can be followed for a t-test. The only additional requirement is to calculate the degrees of freedom given by n - 1.

Step 4: Calculate the z test statistic. This is because the sample size is 30. Furthermore, the sample and population means are known along with the standard deviation.

z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\).

\(\mu\) = 100, \(\overline{x}\) = 112.5, n = 30, \(\sigma\) = 15

z = \(\frac{112.5-100}{\frac{15}{\sqrt{30}}}\) = 4.56

Step 5: Conclusion. As 4.56 > 1.645 thus, the null hypothesis can be rejected.
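The same calculation can be checked with a few lines of Python; scipy is used only for the normal-distribution critical value and tail area.

```python
import numpy as np
from scipy import stats

# Values from the example above
mu0, x_bar, sigma, n, alpha = 100, 112.5, 15, 30, 0.05

z = (x_bar - mu0) / (sigma / np.sqrt(n))
z_crit = stats.norm.ppf(1 - alpha)   # right-tailed critical value, about 1.645
p_value = stats.norm.sf(z)           # right-tail area beyond z

print(f"z = {z:.2f}, critical value = {z_crit:.3f}, p = {p_value:.2e}")
print("Reject H0" if z > z_crit else "Fail to reject H0")
```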

Hypothesis Testing and Confidence Intervals

Confidence intervals form an important part of hypothesis testing. This is because the alpha level can be determined from a given confidence level. Suppose the confidence level is 95%. Subtract it from 100%: 100 - 95 = 5%, or 0.05. This is the alpha value for a one-tailed hypothesis test. To obtain the alpha value for a two-tailed hypothesis test, divide this value by 2: 0.05 / 2 = 0.025.

Important Notes on Hypothesis Testing

  • Hypothesis testing is a technique that is used to verify whether the results of an experiment are statistically significant.
  • It involves the setting up of a null hypothesis and an alternate hypothesis.
  • There are three types of tests that can be conducted under hypothesis testing - z test, t test, and chi square test.
  • Hypothesis testing can be classified as right tail, left tail, and two tail tests.

Examples on Hypothesis Testing

  • Example 1: The average weight of a dumbbell in a gym is 90 lbs. However, a physical trainer believes that the average weight might be higher. A random sample of 5 dumbbells has an average weight of 110 lbs and a standard deviation of 18 lbs. Using hypothesis testing, check if the physical trainer's claim can be supported at a 95% confidence level. Solution: As the sample size is less than 30, the t-test is used. \(H_{0}\): \(\mu\) = 90, \(H_{1}\): \(\mu\) > 90. \(\overline{x}\) = 110, \(\mu\) = 90, n = 5, s = 18, \(\alpha\) = 0.05. Using the t-distribution table, the critical value is 2.132. t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\) = 2.484. As 2.484 > 2.132, the null hypothesis is rejected. Answer: The average weight of the dumbbells may be greater than 90 lbs.
  • Example 2: The average score on a test is 80 with a standard deviation of 10. After a new teaching curriculum is introduced, it is believed that this score will change. On randomly testing the scores of 36 students, the mean was found to be 88. At a 0.05 significance level, is there any evidence to support this claim? Solution: This is an example of two-tailed hypothesis testing, so the z test is used. \(H_{0}\): \(\mu\) = 80, \(H_{1}\): \(\mu\) ≠ 80. \(\overline{x}\) = 88, \(\mu\) = 80, n = 36, \(\sigma\) = 10, \(\alpha\) = 0.05 / 2 = 0.025. The critical value using the normal distribution table is 1.96. z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\) = \(\frac{88-80}{\frac{10}{\sqrt{36}}}\) = 4.8. As 4.8 > 1.96, the null hypothesis is rejected. Answer: There is a difference in the scores after the new curriculum was introduced.
  • Example 3: The average score of a class is 90. However, a teacher believes that the average score might be lower. The scores of 6 students were randomly measured. The mean was 82 with a standard deviation of 18. At a 0.05 significance level, use hypothesis testing to check if this claim is true. Solution: The t test is used. \(H_{0}\): \(\mu\) = 90, \(H_{1}\): \(\mu\) < 90. \(\overline{x}\) = 82, \(\mu\) = 90, n = 6, s = 18. The critical value from the t table is -2.015. t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\) = \(\frac{82-90}{\frac{18}{\sqrt{6}}}\) = -1.088. As -1.088 > -2.015, we fail to reject the null hypothesis. Answer: There is not enough evidence to support the claim. (These calculations can be checked with the short Python sketch below.)
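For readers who prefer to verify the numbers programmatically, here is a small sketch that recomputes the test statistics and critical values with scipy.

```python
import numpy as np
from scipy import stats

def t_statistic(x_bar, mu, s, n):
    """One-sample t statistic: (x_bar - mu) / (s / sqrt(n))."""
    return (x_bar - mu) / (s / np.sqrt(n))

# Example 1: right-tailed t-test, n = 5, alpha = 0.05, so df = 4
print(t_statistic(110, 90, 18, 5), stats.t.ppf(0.95, df=4))    # ~2.48 vs critical ~2.132

# Example 2: two-tailed z-test, alpha / 2 = 0.025
print((88 - 80) / (10 / np.sqrt(36)), stats.norm.ppf(0.975))   # 4.8 vs critical ~1.96

# Example 3: left-tailed t-test, n = 6, so df = 5
print(t_statistic(82, 90, 18, 6), stats.t.ppf(0.05, df=5))     # ~-1.09 vs critical ~-2.015
```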

FAQs on Hypothesis Testing

What is Hypothesis Testing?

Hypothesis testing in statistics is a tool that is used to make inferences about the population data. It is also used to check if the results of an experiment are valid.

What is the z Test in Hypothesis Testing?

The z test in hypothesis testing is used to find the z test statistic for normally distributed data . The z test is used when the standard deviation of the population is known and the sample size is greater than or equal to 30.

What is the t Test in Hypothesis Testing?

The t test in hypothesis testing is used when the data follows a student t distribution . It is used when the sample size is less than 30 and standard deviation of the population is not known.

What is the formula for z test in Hypothesis Testing?

The formula for a one sample z test in hypothesis testing is z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\) and for two samples is z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).

What is the p Value in Hypothesis Testing?

The p value helps to determine if the test results are statistically significant or not. In hypothesis testing, the null hypothesis can either be rejected or not rejected based on the comparison between the p value and the alpha level.

What is One Tail Hypothesis Testing?

When the rejection region is only on one side of the distribution curve then it is known as one tail hypothesis testing. The right tail test and the left tail test are two types of directional hypothesis testing.

What is the Alpha Level in Two Tail Hypothesis Testing?

To get the alpha level in a two tail hypothesis testing divide \(\alpha\) by 2. This is done as there are two rejection regions in the curve.

What Is Hypothesis Testing in Statistics? Types and Examples

In today's data-driven world, decisions are based on data all the time. Hypotheses play a crucial role in that process, whether in making business decisions, in the health sector, academia, or in quality improvement. Without hypotheses and hypothesis tests, you risk drawing the wrong conclusions and making bad decisions. In this tutorial, you will look at hypothesis testing in statistics.

What Is Hypothesis Testing in Statistics?

Hypothesis Testing is a type of statistical analysis in which you put your assumptions about a population parameter to the test. It is used to estimate the relationship between two statistical variables.

Let's discuss a few examples of statistical hypotheses from real life:

  • A teacher assumes that 60% of his college's students come from lower-middle-class families.
  • A doctor believes that 3D (Diet, Dose, and Discipline) is 90% effective for diabetic patients.

Now that you know about hypothesis testing, look at the two types of hypothesis testing in statistics.

Hypothesis Testing Formula

Z = ( x̅ – μ0 ) / (σ /√n)

  • Here, x̅ is the sample mean,
  • μ0 is the population mean,
  • σ is the standard deviation,
  • n is the sample size.

How Hypothesis Testing Works

An analyst performs hypothesis testing on a statistical sample to present evidence of the plausibility of the null hypothesis. Measurements and analyses are conducted on a random sample of the population to test a theory. Analysts use a random population sample to test two hypotheses: the null and alternative hypotheses.

The null hypothesis is typically an equality hypothesis between population parameters; for example, a null hypothesis may claim that the population mean return equals zero. The alternate hypothesis is essentially the opposite of the null hypothesis (e.g., the population mean return is not equal to zero). As a result, they are mutually exclusive, and only one can be correct. One of the two possibilities, however, will always be correct.

Null Hypothesis and Alternative Hypothesis

The Null Hypothesis is the assumption that the event will not occur. A null hypothesis has no bearing on the study's outcome unless it is rejected.

H0 is the symbol for it, and it is pronounced H-naught.

The Alternate Hypothesis is the logical opposite of the null hypothesis. The acceptance of the alternative hypothesis follows the rejection of the null hypothesis. H1 is the symbol for it.

Let's understand this with an example.

A sanitizer manufacturer claims that its product kills 95 percent of germs on average. 

To put this company's claim to the test, create a null and alternate hypothesis.

H0 (Null Hypothesis): Average = 95%.

Alternative Hypothesis (H1): The average is less than 95%.

Another straightforward example to understand this concept is determining whether or not a coin is fair and balanced. The null hypothesis states that the probability of getting heads is equal to the probability of getting tails. In contrast, the alternative hypothesis states that the probabilities of heads and tails would be very different.

Hypothesis Testing Calculation With Examples

Let's consider a hypothesis test for the average height of women in the United States. Suppose our null hypothesis is that the average height is 5'4". We gather a sample of 100 women and determine that their average height is 5'5". The population standard deviation is 2 inches.

To calculate the z-score, we would use the following formula:

z = ( x̅ – μ0 ) / (σ /√n)

z = (5'5" - 5'4") / (2" / √100)

z = 0.5 / (0.045)

We will reject the null hypothesis as the z-score of 11.11 is very large and conclude that there is evidence to suggest that the average height of women in the US is greater than 5'4".
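Expressed in inches (5'4" = 64 in, 5'5" = 65 in), the calculation can be verified with a short sketch; scipy supplies the tail probability.

```python
import numpy as np
from scipy import stats

# Heights in inches: H0 mean = 64 (5'4"), sample mean = 65 (5'5"), sigma = 2, n = 100
mu0, x_bar, sigma, n = 64, 65, 2, 100

z = (x_bar - mu0) / (sigma / np.sqrt(n))
p_value = stats.norm.sf(z)   # right-tail p-value for "taller than 5'4''"
print(f"z = {z:.1f}, p = {p_value:.2e}")   # z = 5.0; p is far below 0.05
```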

Steps in Hypothesis Testing

Hypothesis testing is a statistical method to determine if there is enough evidence in a sample of data to infer that a certain condition is true for the entire population. Here’s a breakdown of the typical steps involved in hypothesis testing:

Formulate Hypotheses

  • Null Hypothesis (H0): This hypothesis states that there is no effect or difference, and it is the hypothesis you attempt to reject with your test.
  • Alternative Hypothesis (H1 or Ha): This hypothesis is what you might believe to be true or hope to prove true. It is usually considered the opposite of the null hypothesis.

Choose the Significance Level (α)

The significance level, often denoted by alpha (α), is the probability of rejecting the null hypothesis when it is true. Common choices for α are 0.05 (5%), 0.01 (1%), and 0.10 (10%).

Select the Appropriate Test

Choose a statistical test based on the type of data and the hypothesis. Common tests include t-tests, chi-square tests, ANOVA, and regression analysis. The selection depends on data type, distribution, sample size, and whether the hypothesis is one-tailed or two-tailed.

Collect Data

Gather the data that will be analyzed in the test. This data should be representative of the population to infer conclusions accurately.

Calculate the Test Statistic

Based on the collected data and the chosen test, calculate a test statistic that reflects how much the observed data deviates from the null hypothesis.

Determine the p-value

The p-value is the probability of observing test results at least as extreme as the results observed, assuming the null hypothesis is correct. It helps determine the strength of the evidence against the null hypothesis.

Make a Decision

Compare the p-value to the chosen significance level:

  • If the p-value ≤ α: Reject the null hypothesis, suggesting sufficient evidence in the data supports the alternative hypothesis.
  • If the p-value > α: Do not reject the null hypothesis, suggesting insufficient evidence to support the alternative hypothesis.

Report the Results

Present the findings from the hypothesis test, including the test statistic, p-value, and the conclusion about the hypotheses.

Perform Post-hoc Analysis (if necessary)

Depending on the results and the study design, further analysis may be needed to explore the data more deeply or to address multiple comparisons if several hypotheses were tested simultaneously.

Types of Hypothesis Testing

Z Test

To determine whether a discovery or relationship is statistically significant, hypothesis testing uses a z-test. It usually checks to see if two means are the same (the null hypothesis). Only when the population standard deviation is known and the sample size is 30 data points or more, can a z-test be applied.

T Test

A statistical test called a t-test is employed to compare the means of two groups. To determine whether two groups differ or if a procedure or treatment affects the population of interest, it is frequently used in hypothesis testing.

Chi-Square 

You utilize a Chi-square test for hypothesis testing concerning whether your data is as predicted. To determine if the expected and observed results are well-fitted, the Chi-square test analyzes the differences between categorical variables from a random sample. The test's fundamental premise is that the observed values in your data should be compared to the predicted values that would be present if the null hypothesis were true.

Hypothesis Testing and Confidence Intervals

Both confidence intervals and hypothesis tests are inferential techniques that depend on approximating the sample distribution. Data from a sample is used to estimate a population parameter using confidence intervals. Data from a sample is used in hypothesis testing to examine a given hypothesis. We must have a postulated parameter to conduct hypothesis testing.

Bootstrap distributions and randomization distributions are created using comparable simulation techniques. The observed sample statistic is the focal point of a bootstrap distribution, whereas the null hypothesis value is the focal point of a randomization distribution.

A confidence interval contains a range of feasible estimates of the population parameter. Two-tailed confidence intervals correspond directly to two-tailed hypothesis tests, and the two typically lead to the same conclusion. In other words, a hypothesis test at the 0.05 level will virtually always fail to reject the null hypothesis if the 95% confidence interval contains the hypothesized value, and it will nearly certainly reject the null hypothesis if the 95% confidence interval does not include the hypothesized parameter.
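A small sketch (assumed one-sample z setting with a known population standard deviation) illustrates this correspondence: the hypothesized mean falls outside the 95% confidence interval exactly when the two-tailed test rejects at the 0.05 level.

```python
import numpy as np
from scipy import stats

# Illustrative one-sample setting with a known population standard deviation
mu0, x_bar, sigma, n, alpha = 50, 52.1, 6, 40, 0.05
se = sigma / np.sqrt(n)

# 95% confidence interval for the population mean
z_crit = stats.norm.ppf(1 - alpha / 2)
ci = (x_bar - z_crit * se, x_bar + z_crit * se)

# Two-tailed z-test of H0: mu = mu0
z = (x_bar - mu0) / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p_value:.4f}")
# mu0 lies outside the CI exactly when the two-tailed test rejects at alpha = 0.05.
print("mu0 inside CI:", ci[0] <= mu0 <= ci[1], "| reject H0:", p_value < alpha)
```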

Simple and Composite Hypothesis Testing

Depending on the population distribution, you can classify the statistical hypothesis into two types.

Simple Hypothesis: A simple hypothesis specifies an exact value for the parameter.

Composite Hypothesis: A composite hypothesis specifies a range of values.

A company is claiming that their average sales for this quarter are 1000 units. This is an example of a simple hypothesis.

Suppose the company claims that the sales are in the range of 900 to 1000 units. Then this is a case of a composite hypothesis.

One-Tailed and Two-Tailed Hypothesis Testing

The One-Tailed test, also called a directional test, considers a critical region of data that would result in the null hypothesis being rejected if the test sample falls into it, inevitably meaning the acceptance of the alternate hypothesis.

In a one-tailed test, the critical distribution area is one-sided, meaning the test sample is either greater or lesser than a specific value.

In a Two-Tailed test, the critical distribution area is two-sided: the sample statistic is checked against both tails, that is, whether it is significantly greater than or significantly less than the hypothesized value.

If the sample statistic falls in either critical region, the null hypothesis is rejected and the alternate hypothesis is accepted.

Right Tailed Hypothesis Testing

If the larger than (>) sign appears in your hypothesis statement, you are using a right-tailed test, also known as an upper test. Or, to put it another way, the disparity is to the right. For instance, you can contrast the battery life before and after a change in production. Your hypothesis statements can be the following if you want to know if the battery life is longer than the original (let's say 90 hours):

  • The null hypothesis: battery life is unchanged or lower (H0: mean ≤ 90).
  • The alternative hypothesis: battery life has risen (H1: mean > 90).

The crucial point in this situation is that the alternate hypothesis (H1), not the null hypothesis, decides whether you get a right-tailed test.

Left Tailed Hypothesis Testing

Alternative hypotheses that assert the true value of a parameter is lower than the null hypothesis value are tested with a left-tailed test; they are indicated by the "less than" sign, "<".

Suppose H0: mean = 50 and H1: mean not equal to 50

According to the H1, the mean can be greater than or less than 50. This is an example of a Two-tailed test.

In a similar manner, if H0: mean >=50, then H1: mean <50

Here the mean is less than 50. It is called a One-tailed test.

Type 1 and Type 2 Error

A hypothesis test can result in two types of errors.

Type 1 Error: A Type-I error occurs when sample results reject the null hypothesis despite being true.

Type 2 Error: A Type-II error occurs when the null hypothesis is not rejected when it is false, unlike a Type-I error.

Suppose a teacher evaluates the examination paper to decide whether a student passes or fails.

H0: Student has passed

H1: Student has failed

Type I error will be the teacher failing the student [rejects H0] although the student scored the passing marks [H0 was true]. 

Type II error will be the case where the teacher passes the student [do not reject H0] although the student did not score the passing marks [H1 is true].

Limitations of Hypothesis Testing

Hypothesis testing has some limitations that researchers should be aware of:

  • It cannot prove or establish the truth: Hypothesis testing provides evidence to support or reject a hypothesis, but it cannot confirm the absolute truth of the research question.
  • Results are sample-specific: Hypothesis testing is based on analyzing a sample from a population, and the conclusions drawn are specific to that particular sample.
  • Possible errors: During hypothesis testing, there is a chance of committing type I error (rejecting a true null hypothesis) or type II error (failing to reject a false null hypothesis).
  • Assumptions and requirements: Different tests have specific assumptions and requirements that must be met to accurately interpret results.

After reading this tutorial, you should have a much better understanding of hypothesis testing, one of the most important concepts in the field of Data Science . The majority of hypotheses are based on speculation about observed behavior, natural phenomena, or established theories.

1. What is hypothesis testing in statistics with example?

Hypothesis testing is a statistical method used to determine if there is enough evidence in a sample of data to draw conclusions about a population. It involves formulating two competing hypotheses, the null hypothesis (H0) and the alternative hypothesis (Ha), and then collecting data to assess the evidence. An example: testing if a new drug improves patient recovery (Ha) compared to the standard treatment (H0) based on collected patient data.

2. What is H0 and H1 in statistics?

In statistics, H0 and H1 represent the null and alternative hypotheses. The null hypothesis, H0, is the default assumption that no effect or difference exists between groups or conditions. The alternative hypothesis, H1, is the competing claim suggesting an effect or a difference. Statistical tests determine whether to reject the null hypothesis in favor of the alternative hypothesis based on the data.

3. What is a simple hypothesis with an example?

A simple hypothesis is a specific statement predicting a single relationship between two variables. It posits a direct and uncomplicated outcome. For example, a simple hypothesis might state, "Increased sunlight exposure increases the growth rate of sunflowers." Here, the hypothesis suggests a direct relationship between the amount of sunlight (independent variable) and the growth rate of sunflowers (dependent variable), with no additional variables considered.

4. What are the 3 major types of hypothesis?

The three major types of hypotheses are:

  • Null Hypothesis (H0): Represents the default assumption, stating that there is no significant effect or relationship in the data.
  • Alternative Hypothesis (Ha): Contradicts the null hypothesis and proposes a specific effect or relationship that researchers want to investigate.
  • Nondirectional Hypothesis: An alternative hypothesis that doesn't specify the direction of the effect, leaving it open for both positive and negative possibilities.


StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


Hypothesis Testing, P Values, Confidence Intervals, and Significance

Jacob Shreffler; Martin R. Huecker.

Last Update: March 13, 2023.

  • Definition/Introduction

Medical providers often rely on evidence-based medicine to guide decision-making in practice. Often a research hypothesis is tested with results provided, typically with p values, confidence intervals, or both. Additionally, statistical or research significance is estimated or determined by the investigators. Unfortunately, healthcare providers may have different comfort levels in interpreting these findings, which may affect the adequate application of the data.

  • Issues of Concern

Without a foundational understanding of hypothesis testing, p values, confidence intervals, and the difference between statistical and clinical significance, healthcare providers may struggle to make clinical decisions without relying purely on the level of significance deemed important by the research investigators. Therefore, an overview of these concepts is provided to allow medical professionals to use their expertise to determine if results are reported sufficiently and if the study outcomes are clinically appropriate to be applied in healthcare practice.

Hypothesis Testing

Investigators conducting studies need research questions and hypotheses to guide analyses. Starting with broad research questions (RQs), investigators then identify a gap in current clinical practice or research. Any research problem or statement is grounded in a better understanding of relationships between two or more variables. For this article, we will use the following research question example:

Research Question: Is Drug 23 an effective treatment for Disease A?

Research questions do not directly imply specific guesses or predictions; we must formulate research hypotheses. A hypothesis is a predetermined declaration regarding the research question in which the investigator(s) makes a precise, educated guess about a study outcome. This is sometimes called the alternative hypothesis and ultimately allows the researcher to take a stance based on experience or insight from medical literature. An example of a hypothesis is below.

Research Hypothesis: Drug 23 will significantly reduce symptoms associated with Disease A compared to Drug 22.

The null hypothesis states that there is no statistical difference between groups based on the stated research hypothesis.

Researchers should be aware of journal recommendations when considering how to report p values, and manuscripts should remain internally consistent.

Regarding p values, as the number of individuals enrolled in a study (the sample size) increases, the likelihood of finding a statistically significant effect increases; with very large sample sizes, the p-value can be very low even when the differences in the reduction of symptoms for Disease A between Drug 23 and Drug 22 are small. The null hypothesis is deemed true until a study presents significant data to support rejecting it. Based on the results, the investigators will either reject the null hypothesis (if they find significant differences or associations) or fail to reject the null hypothesis (if they cannot provide evidence of significant differences or associations).

To test a hypothesis, researchers obtain data on a representative sample to determine whether to reject or fail to reject a null hypothesis. In most research studies, it is not feasible to obtain data for an entire population. Using a sampling procedure allows for statistical inference, though this involves a certain possibility of error. [1]  When determining whether to reject or fail to reject the null hypothesis, mistakes can be made: Type I and Type II errors. Though it is impossible to ensure that these errors have not occurred, researchers should limit the possibilities of these faults. [2]

Significance

Significance is a term to describe the substantive importance of medical research. Statistical significance is the likelihood of results due to chance. [3]  Healthcare providers should always delineate statistical significance from clinical significance, a common error when reviewing biomedical research. [4]  When conceptualizing findings reported as either significant or not significant, healthcare providers should not simply accept researchers' results or conclusions without considering the clinical significance. Healthcare professionals should consider the clinical importance of findings and understand both p values and confidence intervals so they do not have to rely on the researchers to determine the level of significance. [5]  One criterion often used to determine statistical significance is the utilization of p values.

P values are used in research to determine whether the sample estimate is significantly different from a hypothesized value. The p-value is the probability that the observed effect within the study would have occurred by chance if, in reality, there was no true effect. Conventionally, data yielding p<0.05 or p<0.01 are considered statistically significant. While some have argued that the 0.05 threshold should be lowered, it remains the most widely used cutoff. [6]  Note that a p-value alone does not tell us the size of the effect.

An example of findings reported with p values are below:

Statement: Drug 23 reduced patients' symptoms compared to Drug 22. Patients who received Drug 23 (n=100) were 2.1 times less likely than patients who received Drug 22 (n = 100) to experience symptoms of Disease A, p<0.05.

Statement: Individuals who were prescribed Drug 23 experienced fewer symptoms (M = 1.3, SD = 0.7) compared to individuals who were prescribed Drug 22 (M = 5.3, SD = 1.9). This finding was statistically significant, p = 0.02.

For either statement, if the threshold had been set at 0.05, the null hypothesis (that there was no relationship) should be rejected, and we should conclude significant differences. Noticeably, as can be seen in the two statements above, some researchers will report findings with < or > and others will provide an exact p-value (0.000001) but never zero [6] . When examining research, readers should understand how p values are reported. The best practice is to report all p values for all variables within a study design, rather than only providing p values for variables with significant findings. [7]  The inclusion of all p values provides evidence for study validity and limits suspicion for selective reporting/data mining.  

While researchers have historically used p values, experts who find p values problematic encourage the use of confidence intervals. [8] . P-values alone do not allow us to understand the size or the extent of the differences or associations. [3]  In March 2016, the American Statistical Association (ASA) released a statement on p values, noting that scientific decision-making and conclusions should not be based on a fixed p-value threshold (e.g., 0.05). They recommend focusing on the significance of results in the context of study design, quality of measurements, and validity of data. Ultimately, the ASA statement noted that in isolation, a p-value does not provide strong evidence. [9]

When conceptualizing clinical work, healthcare professionals should consider p values with a concurrent appraisal study design validity. For example, a p-value from a double-blinded randomized clinical trial (designed to minimize bias) should be weighted higher than one from a retrospective observational study [7] . The p-value debate has smoldered since the 1950s [10] , and replacement with confidence intervals has been suggested since the 1980s. [11]

Confidence Intervals

A confidence interval provides a range of values within which, with a given level of confidence (e.g., 95%), the true value of the population parameter is expected to lie. [12]  Most research uses a 95% CI, but investigators can set any level (e.g., 90% CI, 99% CI). [13]  A CI provides a range with the lower bound and upper bound limits of a difference or association that would be plausible for a population. [14]  Therefore, a 95% CI indicates that if a study were carried out 100 times, the computed intervals would contain the true value in about 95 of them. [15]  Confidence intervals provide more evidence regarding the precision of an estimate than p values do. [6]

In consideration of the similar research example provided above, one could make the following statement with 95% CI:

Statement: Individuals who were prescribed Drug 23 had no symptoms after three days, which was significantly faster than those prescribed Drug 22; the mean difference in days to recovery between the two groups was 4.2 days (95% CI: 1.9 – 7.8).

It is important to note that the width of the CI is affected by the standard error and the sample size; reducing a study sample number will result in less precision of the CI (increase the width). [14]  A larger width indicates a smaller sample size or a larger variability. [16]  A researcher would want to increase the precision of the CI. For example, a 95% CI of 1.43 – 1.47 is much more precise than the one provided in the example above. In research and clinical practice, CIs provide valuable information on whether the interval includes or excludes any clinically significant values. [14]

The null value for a CI is zero for differential comparisons and 1 for ratios; a CI that includes the null value indicates a statistically non-significant result. However, CIs provide more information than that. [15]  Consider this example: A hospital implements a new protocol that reduced wait time for patients in the emergency department by an average of 25 minutes (95% CI: -2.5 – 41 minutes). Because the range crosses zero, implementing this protocol in different populations could result in longer wait times; however, the range is much higher on the positive side. Thus, while the p-value used to detect statistical significance for this may result in "not significant" findings, individuals should examine this range, consider the study design, and weigh whether or not it is still worth piloting in their workplace.

Similarly to p-values, 95% CIs cannot control for researchers' errors (e.g., study bias or improper data analysis). [14]  In consideration of whether to report p-values or CIs, researchers should examine journal preferences. When in doubt, reporting both may be beneficial. [13]  An example is below:

Reporting both: Individuals who were prescribed Drug 23 had no symptoms after three days, which was significantly faster than those prescribed Drug 22, p = 0.009. The mean difference in days to recovery between the two groups was 4.2 days (95% CI: 1.9 – 7.8).
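To make the reporting example above concrete, here is a minimal sketch in Python of how a mean difference, p value, and 95% CI might be computed for two groups. The recovery-time distributions, sample sizes, and the use of a Welch t-test with a normal-approximation CI are assumptions for illustration, not the actual Drug 23/Drug 22 trial data.

```python
# Minimal sketch: mean difference, p value, and approximate 95% CI for two
# independent groups. Data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
drug_23 = rng.normal(loc=3.0, scale=1.5, size=100)   # hypothetical days to recovery
drug_22 = rng.normal(loc=7.2, scale=2.0, size=100)

diff = drug_22.mean() - drug_23.mean()
t_stat, p_value = stats.ttest_ind(drug_22, drug_23, equal_var=False)  # Welch's t-test

# Approximate 95% CI for the difference in means (normal approximation)
se = np.sqrt(drug_22.var(ddof=1) / len(drug_22) + drug_23.var(ddof=1) / len(drug_23))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"Mean difference: {diff:.1f} days, p = {p_value:.3g}, 95% CI: {ci_low:.1f} to {ci_high:.1f}")
```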

  • Clinical Significance

Recall that clinical significance and statistical significance are two different concepts. Healthcare providers should remember that a study with statistically significant differences and large sample size may be of no interest to clinicians, whereas a study with smaller sample size and statistically non-significant results could impact clinical practice. [14]  Additionally, as previously mentioned, a non-significant finding may reflect the study design itself rather than relationships between variables.

Healthcare providers using evidence-based medicine to inform practice should use clinical judgment to determine the practical importance of studies through careful evaluation of the design, sample size, power, likelihood of type I and type II errors, data analysis, and reporting of statistical findings (p values, 95% CI or both). [4]  Interestingly, some experts have called for "statistically significant" or "not significant" to be excluded from work as statistical significance never has and will never be equivalent to clinical significance. [17]

The decision on what is clinically significant can be challenging, depending on the providers' experience and especially the severity of the disease. Providers should use their knowledge and experiences to determine the meaningfulness of study results and make inferences based not only on significant or insignificant results by researchers but through their understanding of study limitations and practical implications.

  • Nursing, Allied Health, and Interprofessional Team Interventions

All physicians, nurses, pharmacists, and other healthcare professionals should strive to understand the concepts in this chapter. These individuals should maintain the ability to review and incorporate new literature for evidence-based and safe care. 


Disclosure: Jacob Shreffler declares no relevant financial relationships with ineligible companies.

Disclosure: Martin Huecker declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.

  • Cite this Page Shreffler J, Huecker MR. Hypothesis Testing, P Values, Confidence Intervals, and Significance. [Updated 2023 Mar 13]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


Hypothesis Testing: Definition, Uses, Limitations + Examples

Hypothesis testing is as old as the scientific method and is at the heart of the research process. 

Research exists to validate or disprove assumptions about various phenomena. The process of validation involves testing and it is in this context that we will explore hypothesis testing. 

What is a Hypothesis? 

A hypothesis is a calculated prediction or assumption about a population parameter based on limited evidence. The whole idea behind hypothesis formulation is testing—this means the researcher subjects his or her calculated assumption to a series of evaluations to know whether they are true or false. 

Typically, every research starts with a hypothesis—the investigator makes a claim and experiments to prove that this claim is true or false . For instance, if you predict that students who drink milk before class perform better than those who don’t, then this becomes a hypothesis that can be confirmed or refuted using an experiment.  


What are the Types of Hypotheses? 

1. Simple Hypothesis

Also known as a basic hypothesis, a simple hypothesis suggests that an independent variable is responsible for a corresponding dependent variable. In other words, an occurrence of the independent variable inevitably leads to an occurrence of the dependent variable. 

Typically, simple hypotheses are considered as generally true, and they establish a causal relationship between two variables. 

Examples of Simple Hypothesis  

  • Drinking soda and other sugary drinks can cause obesity. 
  • Smoking cigarettes daily leads to lung cancer.

2. Complex Hypothesis

A complex hypothesis is also known as a modal. It accounts for the causal relationship between two independent variables and the resulting dependent variables. This means that the combination of the independent variables leads to the occurrence of the dependent variables . 

Examples of Complex Hypotheses  

  • Adults who do not smoke and drink are less likely to develop liver-related conditions.
  • Global warming causes icebergs to melt which in turn causes major changes in weather patterns.

3. Null Hypothesis

As the name suggests, a null hypothesis is formed when a researcher suspects that there’s no relationship between the variables in an observation. In this case, the purpose of the research is to confirm or refute this assumption. 

Examples of Null Hypothesis

  • There is no significant change in a student’s performance whether they drink coffee or tea before classes. 
  • There’s no significant change in the growth of a plant if one uses distilled water only or vitamin-rich water. 

4. Alternative Hypothesis 

To disprove a null hypothesis, the researcher has to come up with an opposite assumption—this assumption is known as the alternative hypothesis. This means that if the null hypothesis says A is false, the alternative hypothesis assumes that A is true. 

An alternative hypothesis can be directional or non-directional depending on the direction of the difference. A directional alternative hypothesis specifies the direction of the tested relationship, stating that one variable is predicted to be larger or smaller than the null value while a non-directional hypothesis only validates the existence of a difference without stating its direction. 

Examples of Alternative Hypotheses  

  • Starting your day with a cup of tea instead of a cup of coffee can make you more alert in the morning. 
  • The growth of a plant improves significantly when it receives distilled water instead of vitamin-rich water. 

5. Logical Hypothesis

The logical hypothesis is one of the most common types of calculated assumptions in systematic investigations. It is an attempt to use reasoning to connect different pieces of a research problem and build a theory from limited evidence. In this case, the researcher uses whatever data is available to form a plausible assumption that can be tested. 

Examples of Logical Hypothesis

  • Waking up early helps you to have a more productive day. 
  • Beings from Mars would not be able to breathe the air in the atmosphere of the Earth. 

6. Empirical Hypothesis  

After forming a logical hypothesis, the next step is to create an empirical or working hypothesis. At this stage, your logical hypothesis undergoes systematic testing to prove or disprove the assumption. An empirical hypothesis is subject to several variables that can trigger changes and lead to specific outcomes. 

Examples of Empirical Testing 

  • People who eat more fish run faster than people who eat meat.
  • Women taking vitamin E grow hair faster than those taking vitamin K.

7. Statistical Hypothesis

When forming a statistical hypothesis, the researcher examines the portion of a population of interest and makes a calculated assumption based on the data from this sample. A statistical hypothesis is most common with systematic investigations involving a large target audience. Here, it’s impossible to collect responses from every member of the population so you have to depend on data from your sample and extrapolate the results to the wider population. 

Examples of Statistical Hypothesis  

  • 45% of students in Louisiana have middle-income parents. 
  • 80% of the UK’s population gets a divorce because of irreconcilable differences.

What is Hypothesis Testing? 

Hypothesis testing is an assessment method that allows researchers to determine the plausibility of a hypothesis. It involves testing an assumption about a specific population parameter to know whether it’s true or false. These population parameters include variance, standard deviation, and median. 

Typically, hypothesis testing starts with developing a null hypothesis and then performing several tests that support or reject the null hypothesis. The researcher uses test statistics to compare the association or relationship between two or more variables. 


Researchers also use hypothesis testing to calculate the coefficient of variation and determine if the regression relationship and the correlation coefficient are statistically significant.

How Hypothesis Testing Works

The basis of hypothesis testing is to examine and analyze the null hypothesis and alternative hypothesis to know which one is the most plausible assumption. Since both assumptions are mutually exclusive, only one can be true. In other words, the occurrence of a null hypothesis destroys the chances of the alternative coming to life, and vice-versa. 


What Are The Stages of Hypothesis Testing?  

To successfully confirm or refute an assumption, the researcher goes through five (5) stages of hypothesis testing; 

  • Determine the null hypothesis
  • Specify the alternative hypothesis
  • Set the significance level
  • Calculate the test statistics and corresponding P-value
  • Draw your conclusion
  • Determine the Null Hypothesis

Like we mentioned earlier, hypothesis testing starts with creating a null hypothesis which stands as an assumption that a certain statement is false or implausible. For example, the null hypothesis (H0) could suggest that different subgroups in the research population react to a variable in the same way. 

  • Specify the Alternative Hypothesis

Once you know the variables for the null hypothesis, the next step is to determine the alternative hypothesis. The alternative hypothesis counters the null assumption by suggesting the statement or assertion is true. Depending on the purpose of your research, the alternative hypothesis can be one-sided or two-sided. 

Using the example we established earlier, the alternative hypothesis may argue that the different sub-groups react differently to the same variable based on several internal and external factors. 

  • Set the Significance Level

Many researchers set the significance level at 5%. This means there is at most a 0.05 probability of accepting the alternative hypothesis even though the null hypothesis is actually true. 

Something to note here is that the smaller the significance level, the greater the burden of proof needed to reject the null hypothesis and support the alternative hypothesis.

  • Calculate the Test Statistics and Corresponding P-Value 

Test statistics in hypothesis testing allow you to compare different groups between variables while the p-value accounts for the probability of obtaining sample statistics if your null hypothesis is true. In this case, your test statistics can be the mean, median and similar parameters. 

If your p-value is 0.65, for example, it means that results at least as extreme as those observed would occur about 65 times in 100 by pure chance if the null hypothesis were true. In practice, the p-value is computed from the test statistic and its sampling distribution rather than from a single formula.
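To illustrate that last point, here is a minimal sketch of turning a test statistic into a p-value with SciPy. The t statistic and degrees of freedom are made-up values, not drawn from any example in this article.

```python
# Illustrative only: obtaining a p-value from an already computed test statistic.
from scipy import stats

t_stat = 1.8   # hypothetical t statistic from your sample
df = 24        # hypothetical degrees of freedom (n - 1)

# Two-sided p-value: probability of a result at least this extreme
# in either direction if the null hypothesis is true.
p_value = 2 * stats.t.sf(abs(t_stat), df)
print(f"p-value = {p_value:.3f}")
```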

  • Draw Your Conclusions

After conducting a series of tests, you should be able to support or refute the hypothesis based on the evidence from your sample data.  

Applications of Hypothesis Testing in Research

Hypothesis testing isn’t only confined to numbers and calculations; it also has several real-life applications in business, manufacturing, advertising, and medicine. 

In a factory or other manufacturing plants, hypothesis testing is an important part of quality and production control before the final products are approved and sent out to the consumer. 

During ideation and strategy development, C-level executives use hypothesis testing to evaluate their theories and assumptions before any form of implementation. For example, they could leverage hypothesis testing to determine whether or not some new advertising campaign, marketing technique, etc. causes increased sales. 

In addition, hypothesis testing is used during clinical trials to prove the efficacy of a drug or new medical method before its approval for widespread human usage. 

What is an Example of Hypothesis Testing?

An employer claims that her workers are of above-average intelligence. She takes a random sample of 20 of them and gets the following results: 

Mean IQ Scores: 110

Standard Deviation: 15 

Mean Population IQ: 100

Step 1: Using the value of the mean population IQ, we establish the null hypothesis as 100.

Step 2: State that the alternative hypothesis is greater than 100.

Step 3: State the alpha level as 0.05 or 5% 

Step 4: Find the rejection region area (given by your alpha level above) from the z-table. An area of .05 is equal to a z-score of 1.645.

Step 5: Calculate the test statistics using this formula

Z = (sample mean − population mean) ÷ (standard deviation ÷ √n)

Z = (110–100) ÷ (15÷√20) 

10 ÷ 3.35 = 2.99 

If the value of the test statistics is higher than the value of the rejection region, then you should reject the null hypothesis. If it is less, then you cannot reject the null. 

In this case, 2.99 > 1.645 so we reject the null. 
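The same arithmetic can be verified quickly in Python. This short sketch uses the numbers from the example above; SciPy is used only to look up the critical value, and small rounding differences (2.98 vs. 2.99) are expected.

```python
# Reproducing the worked example above (values taken from the text).
import math
from scipy import stats

sample_mean, pop_mean, sd, n = 110, 100, 15, 20
alpha = 0.05

z = (sample_mean - pop_mean) / (sd / math.sqrt(n))   # test statistic, about 2.98
z_critical = stats.norm.ppf(1 - alpha)               # one-tailed cutoff, about 1.645

print(f"z = {z:.2f}, critical value = {z_critical:.3f}")
if z > z_critical:
    print("Reject the null hypothesis: evidence of above-average IQ.")
else:
    print("Fail to reject the null hypothesis.")
```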

Importance/Benefits of Hypothesis Testing 

The most significant benefit of hypothesis testing is it allows you to evaluate the strength of your claim or assumption before implementing it in your data set. Also, hypothesis testing is the only valid method to prove that something “is or is not”. Other benefits include: 

  • Hypothesis testing provides a reliable framework for making any data decisions for your population of interest. 
  • It helps the researcher to successfully extrapolate data from the sample to the larger population. 
  • Hypothesis testing allows the researcher to determine whether the data from the sample is statistically significant. 
  • Hypothesis testing is one of the most important processes for measuring the validity and reliability of outcomes in any systematic investigation. 
  • It helps to provide links to the underlying theory and specific research questions.

Criticism and Limitations of Hypothesis Testing

Several limitations of hypothesis testing can affect the quality of data you get from this process. Some of these limitations include: 

  • The interpretation of a p-value for observation depends on the stopping rule and definition of multiple comparisons. This makes it difficult to calculate since the stopping rule is subject to numerous interpretations, plus “multiple comparisons” are unavoidably ambiguous. 
  • Conceptual issues often arise in hypothesis testing, especially if the researcher merges Fisher and Neyman-Pearson’s methods which are conceptually distinct. 
  • In an attempt to focus on the statistical significance of the data, the researcher might ignore the estimation and confirmation by repeated experiments.
  • Hypothesis testing can trigger publication bias, especially when it requires statistical significance as a criterion for publication.
  • When used to detect whether a difference exists between groups, hypothesis testing can trigger absurd assumptions that affect the reliability of your observation.


What Is Hypothesis Testing? An In-Depth Guide with Python Examples


Hypothesis testing allows us to make data-driven decisions by testing assertions about populations. It is the backbone behind scientific research, business analytics, financial modeling, and more.

This comprehensive guide aims to solidify your understanding with:

  • Explanations of key terminology and the overall hypothesis testing process
  • Python code examples for t-tests, z-tests, chi-squared, and other methods
  • Real-world examples spanning science, business, politics, and technology
  • A frank discussion around limitations and misapplications
  • Next steps to mastering practical statistics with Python

So let’s get comfortable with making statements, gathering evidence, and letting the data speak!

Fundamentals of Hypothesis Testing

Hypothesis testing is structured around making a claim in the form of competing hypotheses, gathering data, performing statistical tests, and making decisions about which hypothesis the evidence supports.

Here are some key terms about hypotheses and the testing process:

Null Hypothesis ($H_0$): The default statement about a population parameter. Generally asserts that there is no statistical significance between two data sets or that a sample parameter equals some claimed population parameter value. The statement being tested that is either rejected or supported.

Alternative Hypothesis ($H_1$): The statement that sample observations indicate statistically significant effect or difference from what the null hypothesis states. $H_1$ and $H_0$ are mutually exclusive, meaning if statistical tests support rejecting $H_0$, then you conclude $H_1$ has strong evidence.

Significance Level ($\alpha$): The probability of incorrectly rejecting a true null hypothesis, known as making a Type I error. Common significance levels are 0.10, 0.05, and 0.01 (corresponding to 90%, 95%, and 99% confidence). The lower the significance level, the stricter the criterion for rejecting $H_0$.

Test Statistic: Summary calculations of sample data including mean, proportion, correlation coefficient, etc. Used to determine statistical significance and improbability under $H_0$.

P-value: Probability of obtaining sample results at least as extreme as the test statistic, assuming $H_0$ is true. Small p-values indicate strong statistical evidence against the null hypothesis.

Type I Error: Incorrectly rejecting a true null hypothesis

Type II Error : Failing to reject a false null hypothesis

These terms set the stage for the overall process:

1. Make Hypotheses

Define the null ($H_0$) and alternative hypothesis ($H_1$).

2. Set Significance Level

Typical confidence levels are 90%, 95%, and 99%, corresponding to significance levels $\alpha$ of 0.10, 0.05, and 0.01. A lower $\alpha$ means a stricter burden of proof for rejecting $H_0$.

3. Collect Data

Gather sample and population data related to the hypotheses under examination.

4. Determine Test Statistic

Calculate the relevant test statistic (z-score, t-statistic, chi-squared, etc.), its degrees of freedom where applicable, and the corresponding p-value.

5. Compare to Significance Level

If the test statistic falls in the critical region for the chosen significance level (equivalently, if the p-value is below $\alpha$), reject $H_0$; otherwise, fail to reject $H_0$.

6. Draw Conclusions

Make determinations about hypotheses given the statistical evidence and context of the situation.

Now that you know the process and objectives, let’s apply this to some concrete examples.

Python Examples of Hypothesis Tests

We’ll demonstrate hypothesis testing using NumPy, SciPy, Pandas, and simulated data sets. Specifically, we’ll conduct and interpret:

  • Two sample t-tests
  • Paired t-tests
  • Chi-squared tests

These represent some of the most widely used methods for determining statistical significance between groups.

We’ll plot the data distributions to check normality assumptions where applicable, and determine whether evidence exists to reject the null hypotheses across several scenarios.

Two Sample T-Test with NumPy

Two sample t-tests determine whether the mean of a numerical variable differs significantly across two independent groups. It assumes observations follow approximate normal distributions within each group, but not that variances are equal.

Let’s test for differences in reported salaries at hypothetical Company X vs Company Y:

$H_0$ : Average reported salaries are equal at Company X and Company Y

$H_1$ : Average reported salaries differ between Company X and Company Y

First we’ll simulate salary samples for each company with NumPy’s random normal distributions, set a 95% confidence level, run the t-test, then interpret.
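The original code block did not survive in this copy of the article, so the following is a reconstructed sketch rather than the author's script. The salary distributions and sample sizes are assumptions, and the test itself is run with scipy.stats, so the output will differ from the t-statistic and p-value quoted below.

```python
# Sketch of the two-sample (Welch's) t-test described above.
# Salary distributions are simulated, so exact outputs will differ
# from the figures quoted in the surrounding text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
company_x = rng.normal(loc=95_000, scale=12_000, size=200)    # hypothetical salaries
company_y = rng.normal(loc=105_000, scale=15_000, size=200)

alpha = 0.05
t_stat, p_value = stats.ttest_ind(company_x, company_y, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
if p_value < alpha:
    print("Reject H0: average reported salaries differ between the companies.")
else:
    print("Fail to reject H0.")
```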

A t-statistic of 9.35 shows the difference between group means is more than nine standard errors. The very small p-value argues strongly against the salaries being equal across a randomly sampled population of employees.

Since the test returned a p-value lower than the significance level, we reject $H_0$, meaning evidence supports $H_1$ that average reported salaries differ between these hypothetical companies.

Paired T-Test with Pandas

While an independent groups t-test analyzes mean differences between distinct groups, a paired t-test looks for significant effects pre vs post some treatment within the same set of subjects. This helps isolate causal impacts by removing effects from confounding individual differences.

Let’s analyze Amazon purchase data to determine if spending increases during the holiday months of November and December.

$H_0$ : Average monthly spending is equal pre-holiday and during the holiday season

$H_1$ : Average monthly spending increases during the holiday season

We’ll import transaction data using Pandas, add seasonal categories, then run and interpret the paired t-test.
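The code for this step is also missing here, so this is a hedged reconstruction: the DataFrame, column names, and spending distributions are invented stand-ins for the real transaction data.

```python
# Sketch of the paired t-test described above, using a simulated DataFrame
# in place of real Amazon transaction data (column names are assumptions).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
customers = pd.DataFrame({
    "pre_holiday_spend": rng.normal(loc=180, scale=40, size=150),  # Jan-Oct monthly average
    "holiday_spend": rng.normal(loc=230, scale=55, size=150),      # Nov-Dec monthly average
})

t_stat, p_two_sided = stats.ttest_rel(customers["holiday_spend"],
                                      customers["pre_holiday_spend"])
# One-sided p-value for H1: spending increases during the holidays
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.3g}")
```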

Since the p-value is below the 0.05 significance level, we reject $H_0$. The output shows statistically significant evidence at 95% confidence that average spending increases during November-December relative to January-October.

Visualizing the monthly trend helps confirm the spike during the holiday months.

(Figure: holiday spending spike plot)

Single Sample Z-Test with NumPy

A single sample z-test allows testing whether a sample mean differs significantly from a population mean. It requires knowing the population standard deviation.

Let’s test if recently surveyed shoppers differ significantly in their reported ages from the overall customer base:

$H_0$ : Sample mean age equals population mean age of 39

$H_1$ : Sample mean age does not equal population mean of 39
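A reconstructed sketch of this test follows. The surveyed ages and the known population standard deviation are assumed values, so the exact z-score and p-value will differ from the run described below.

```python
# Sketch of the one-sample z-test described above. Sample ages and the
# population standard deviation are assumptions for illustration.
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample_ages = rng.normal(loc=36, scale=12, size=60)   # hypothetical survey responses
pop_mean, pop_sd = 39, 12                             # assumed known population parameters

z = (sample_ages.mean() - pop_mean) / (pop_sd / math.sqrt(len(sample_ages)))
p_value = 2 * stats.norm.sf(abs(z))                   # two-sided test

print(f"z = {z:.2f}, p = {p_value:.3g}")
```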

Here the absolute z-score over 2 and p-value under 0.05 indicates statistically significant evidence that recently surveyed shopper ages differ from the overall population parameter.

Chi-Squared Test with SciPy

Chi-squared tests help determine independence between categorical variables. The test statistic measures deviations between observed and expected outcome frequencies across groups to determine magnitude of relationship.

Let’s test if credit card application approvals are independent across income groups using simulated data:

$H_0$ : Credit card approvals are independent of income level

$H_1$ : Credit approvals and income level are related

Since the p-value is greater than the 0.05 significance level, we fail to reject $H_0$. There is not sufficient statistical evidence to conclude that credit card approval rates differ by income categories.

ANOVA with StatsModels

Analysis of variance (ANOVA) hypothesis tests determine if mean differences exist across more than two groups. ANOVA expands upon t-tests for multiple group comparisons.

Let’s test if average debt obligations vary depending on highest education level attained.

$H_0$ : Average debt obligations are equal across education levels

$H_1$ : Average debt obligations differ based on education level

We’ll simulate ordered education and debt data for visualization via box plots and then run ANOVA.
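The simulation and plotting code is missing from this copy, so the sketch below reconstructs only the ANOVA step with statsmodels, using assumed education groups and debt distributions; its F statistic will not match the 91.59 quoted below, and the box-plot code is omitted.

```python
# Sketch of the one-way ANOVA described above. Education groups and debt
# amounts are simulated, so results will differ from the quoted figures.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(3)
levels = ["high_school", "bachelors", "masters", "doctorate"]
means = [15_000, 30_000, 45_000, 60_000]          # assumed average debt per group

df = pd.DataFrame({
    "education": np.repeat(levels, 100),
    "debt": np.concatenate([rng.normal(m, 10_000, 100) for m in means]),
})

model = ols("debt ~ C(education)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)     # F statistic and p-value per factor
print(anova_table)
```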

The ANOVA output shows an F-statistic of 91.59 that along with a tiny p-value leads to rejecting $H_0$. We conclude there are statistically significant differences in average debt obligations based on highest degree attained.

The box plots visualize how these distributions and their means vary across the four education attainment groups.

Real World Hypothesis Testing

Hypothesis testing forms the backbone of data-driven decision making across science, research, business, public policy and more by allowing practitioners to draw statistically-validated conclusions.

Here is a sample of hypotheses commonly tested:

  • Ecommerce sites test if interface updates increase user conversions
  • Ridesharing platforms analyze if surge pricing reduces wait times
  • Subscription services assess if free trial length impacts customer retention
  • Manufacturers test if new production processes improve output yields

Pharmaceuticals

  • Drug companies test efficacy of developed compounds against placebo groups
  • Clinical researchers evaluate impacts of interventions on disease factors
  • Epidemiologists study if particular biomarkers differ in afflicted populations

Technology

  • Software engineers measure if algorithm optimizations improve runtime complexity
  • Autonomous vehicles assess whether new sensors reduce accident rates
  • Information security analyzes if software updates decrease vulnerability exploits

Politics & Social Sciences

  • Pollsters determine if candidate messaging influences voter preference
  • Sociologists analyze if income immobility changed across generations
  • Climate scientists examine anthropogenic factors contributing to extreme weather

This represents just a sample of the wide ranging real-world applications. Properly formulated hypotheses, statistical testing methodology, reproducible analysis, and unbiased interpretation helps ensure valid reliable findings.

However, hypothesis testing does still come with some limitations worth addressing.

Limitations and Misapplications

While hypothesis testing empowers huge breakthroughs across disciplines, the methodology does come with some inherent restrictions:

Over-reliance on p-values

P-values help benchmark statistical significance, but should not be over-interpreted. A large p-value does not necessarily mean the null hypothesis is 100% true for the entire population. And small p-values do not directly prove causality as confounding factors always exist.

Significance also does not indicate practical real-world effect size. Statistical power calculations should inform necessary sample sizes to detect desired effects.

Errors from Multiple Tests

Running many hypothesis tests by chance produces some false positives due to randomness. Analysts should account for this by adjusting significance levels, pre-registering testing plans, replicating findings, and relying more on meta-analyses.

Poor Experimental Design

Bad data, biased samples, unspecified variables, and lack of controls can completely undermine results. Findings can only be reasonably extended to populations reflected by the test samples.

Garbage in, garbage out definitely applies to statistical analysis!

Assumption Violations

Most common statistical tests make assumptions about normality, homogeneity of variance, independent samples, underlying variable relationships. Violating these premises invalidates reliability.

Transformations, bootstrapping, or non-parametric methods can help navigate issues for sound methodology.

Lack of Reproducibility

The replication crisis impacting scientific research highlights issues around lack of reproducibility, especially involving human participants and high complexity systems. Randomized controlled experiments with strong statistical power provide much more reliable evidence.

While hypothesis testing methodology is rigorously developed, applying concepts correctly proves challenging even among academics and experts!

Next Level Hypothesis Testing Mastery

We’ve covered core concepts, Python implementations, real-world use cases, and inherent limitations around hypothesis testing. What should you master next?

Parametric vs Non-parametric

Learn assumptions and application differences between parametric statistics like z-tests and t-tests that assume normal distributions versus non-parametric analogs like Wilcoxon signed-rank tests and Mann-Whitney U tests.

Effect Size and Power

Look beyond just p-values to determine practical effect magnitude using indexes like Cohen’s d. And ensure appropriate sample sizes to detect effects using prospective power analysis.

Alternatives to NHST

Evaluate Bayesian inference models and likelihood ratios that move beyond binary reject/fail-to-reject null hypothesis outcomes toward more integrated evidence.

Tiered Testing Framework

Construct reusable classes encapsulating data processing, visualizations, assumption checking, and statistical tests for maintainable analysis code.

Big Data Integration

Connect statistical analysis to big data pipelines pulling from databases, data lakes and APIs at scale. Productionize analytics.

I hope this end-to-end look at hypothesis testing methodology, Python programming demonstrations, real-world grounding, inherent restrictions, and next-level considerations provides a launchpad for practically applying core statistics.



What is Hypothesis Testing?

Hypothesis testing in statistics refers to analyzing an assumption about a population parameter. It is used to make an educated guess about an assumption using statistics. With the use of sample data, hypothesis testing makes an assumption about how true the assumption is for the entire population from where the sample is being taken.  

Any hypothetical statement we make may or may not be valid, and it is then our responsibility to provide evidence for its possibility. To approach any hypothesis, we follow these four simple steps that test its validity.

First, we formulate two hypothetical statements such that only one of them is true. By doing so, we can check the validity of our own hypothesis.

The next step is to formulate the statistical analysis to be followed based upon the data points.

Then we analyze the given data using our methodology.

The final step is to analyze the result and judge whether the null hypothesis will be rejected or is true.

Let’s look at several hypothesis testing examples:

It is observed that the average recovery time for a knee-surgery patient is 8 weeks. A physician believes that after successful knee surgery, if the patient goes for physical therapy twice a week rather than thrice a week, the recovery period will be longer. Conduct a hypothesis test for this statement. 

David is a ten-year-old who finishes a 25-yard freestyle in a mean time of 16.43 seconds. David’s father bought goggles for his son, believing that they would help him reduce his time. He then recorded a total of fifteen 25-yard freestyle swims for David, and the average time came out to be 16 seconds. Conduct a hypothesis test.

A tire company claims their A-segment of tires have a running life of 50,000 miles before they need to be replaced, and previous studies show a standard deviation of 8,000 miles. After surveying a total of 28 tires, the mean run time came to be 46,500 miles with a standard deviation of 9800 miles. Is the claim made by the tire company consistent with the given data? Conduct hypothesis testing. 
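For the third example, one possible worked sketch is shown below. It treats the 8,000-mile figure from previous studies as a known population standard deviation and applies a one-sample z-test; a t-test using the sample standard deviation of 9,800 miles would be an equally defensible choice.

```python
# One possible approach to the tire example: a one-sample z-test using the
# previously reported population standard deviation (8,000 miles).
import math
from scipy import stats

claimed_mean = 50_000
sample_mean, pop_sd, n = 46_500, 8_000, 28

z = (sample_mean - claimed_mean) / (pop_sd / math.sqrt(n))
p_value = 2 * stats.norm.sf(abs(z))        # two-sided: is the claim consistent with the data?

print(f"z = {z:.2f}, p = {p_value:.4f}")   # z ~ -2.31, p ~ 0.021 -> claim looks inconsistent
```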

All of the hypothesis testing examples are from real-life situations, which leads us to believe that hypothesis testing is a very practical topic indeed. It is an integral part of a researcher's study and is used in every research methodology in one way or another. 

Inferential statistics majorly deals with hypothesis testing. The research hypothesis states there is a relationship between the independent variable and dependent variable. Whereas the null hypothesis rejects this claim of any relationship between the two, our job as researchers or students is to check whether there is any relation between the two.  

Hypothesis Testing in Research Methodology

Now that we are clear about what hypothesis testing is? Let's look at the use of hypothesis testing in research methodology. Hypothesis testing is at the centre of research projects. 

What is Hypothesis Testing and Why is it Important in Research Methodology?

Often after formulating research statements, the validity of those statements need to be verified. Hypothesis testing offers a statistical approach to the researcher about the theoretical assumptions he/she made. It can be understood as quantitative results for a qualitative problem. 


Hypothesis testing provides various techniques to test the hypothesis statement depending upon the variable and the data points. It finds its use in almost every field of research while answering statements such as whether this new medicine will work, a new testing method is appropriate, or if the outcomes of a random experiment are probable or not.

Procedure of Hypothesis Testing

To find the validity of any statement, we have to strictly follow the stepwise procedure of hypothesis testing. After stating the initial hypothesis, we have to re-write them in the form of a null and alternate hypothesis. The alternate hypothesis predicts a relationship between the variables, whereas the null hypothesis predicts no relationship between the variables.

After writing them as H 0 (null hypothesis) and H a (Alternate hypothesis), only one of the statements can be true. For example, taking the hypothesis that, on average, men are taller than women, we write the statements as:

H 0 : On average, men are not taller than women.

H a : On average, men are taller than women. 

Our next aim is to collect sample data, what we call sampling, in a way so that we can test our hypothesis. Your data should come from the concerned population for which you want to make a hypothesis. 

What is the p value in hypothesis testing? The p-value is the probability of obtaining results as extreme as, or more extreme than, the observed results, assuming the null hypothesis is true.

You will obtain your p-value after choosing the hypothesis testing method, which will be the guiding factor in rejecting the hypothesis. Usually, the p-value cutoff for rejecting the null hypothesis is 0.05. So anything below that, you will reject the null hypothesis. 

A low p-value means that the between-group variance is large enough that there is almost no overlapping, and it is unlikely that these came about by chance. A high p-value suggests there is a high within-group variance and low between-group variance, and any difference in the measure is due to chance only.

What is statistical hypothesis testing?

During a statistical survey or research study, a hypothesis must be set and defined; this is called a statistical hypothesis. It is, in fact, an assumption about a population parameter, and it is by no means guaranteed to be correct. Hypothesis testing refers to the predetermined formal procedures used by statisticians to determine whether such hypotheses should be accepted or rejected. In other words, hypothesis testing is the process of choosing between hypotheses about a probability distribution based on observed data, and it is a fundamental and crucial topic in statistics. 

Why do I Need to Test it? Why not just prove an alternate one?

The quick answer is that you must as a scientist; it is part of the scientific process. Science employs a variety of methods to test or reject theories, ensuring that any new hypothesis is free of errors. One protection to ensure your research is not incorrect is to include both a null and an alternate hypothesis. The scientific community considers not incorporating the null hypothesis in your research to be poor practice. You are almost certainly setting yourself up for failure if you set out to prove another theory without first examining it. At the very least, your experiment will not be considered seriously.

Types of Hypothesis Testing

There are several types of hypothesis testing, and they are used based on the data provided. Depending on the sample size and the data given, we choose among different hypothesis testing methodologies. Here starts the use of hypothesis testing tools in research methodology.

Normality- This type of testing is used for normal distribution in a population sample. If the data points are grouped around the mean, the probability of them being above or below the mean is equally likely. Its shape resembles a bell curve that is equally distributed on either side of the mean.

T-test- This test is used when the sample size in a normally distributed population is comparatively small, and the standard deviation is unknown. Usually, if the sample size drops below 30, we use a T-test to find the confidence intervals of the population. 

Chi-Square Test- The Chi-Square test is used to test the population variance against the known or assumed value of the population variance. It is also a better choice to test the goodness of fit of a distribution of data. The two most common Chi-Square tests are the Chi-Square test of independence and the chi-square test of variance.

ANOVA- Analysis of Variance or ANOVA compares the data sets of two different populations or samples. It is similar in its use to the t-test or the Z-test, but it allows us to compare more than two sample means. ANOVA allows us to test the significance between an independent variable and a dependent variable, namely X and Y, respectively.

Z-test- It is a statistical measure to test that the means of two population samples are different when their variance is known. For a Z-test, the population is assumed to be normally distributed. A z-test is better suited in the case of large sample sizes greater than 30. This is due to the central limit theorem that as the sample size increases, the samples are considered to be distributed normally. 


FAQs on Hypothesis Testing

1. Mention the types of hypothesis Tests.

There are two types of a hypothesis tests:

Null Hypothesis: It is denoted as H₀.

Alternative Hypothesis: IT is denoted as H₁ or Hₐ.

2. What are the two errors that can be found while performing the null Hypothesis test?

While performing the null hypothesis test there is a possibility of occurring two types of errors,

Type-1: The type-1 error, denoted by (α), is also known as the significance level. It is the rejection of a true null hypothesis: an error of commission.

Type-2: The type-2 error is denoted by (β), and (1 − β) is known as the power of the test. It occurs when a false null hypothesis is not rejected: an error of omission. 

3. What is the p-value in hypothesis testing?

During hypothesis testing in statistics, the p-value indicates the probability of obtaining results at least as extreme as the observed results. A smaller p-value provides stronger evidence in favor of the alternative hypothesis. The p-value can also be read as the smallest level of significance at which the null hypothesis would be rejected. Often the p-value is obtained from statistical tables by calculating the deviation between the observed value and the chosen reference value. 

It may also be calculated by integrating the sampling distribution over the region at least as far from the reference value as the observed value, relative to the total area under the curve. The p-value therefore quantifies the evidence against the null hypothesis in hypothesis testing.

4. What is a null hypothesis?

The null hypothesis in statistics says that there is no significant difference between the populations or groups being compared. It serves as a conjecture proposing no difference, whereas the alternative hypothesis says there is a difference. When we perform hypothesis testing, we have to state the null hypothesis and the alternative hypothesis such that only one of them can be true. 

By determining the p-value, we calculate whether the null hypothesis is to be rejected or not. If the difference between groups is low, it is merely by chance, and the null hypothesis, which states that there is no difference among groups, is true. Therefore, we have no evidence to reject the null hypothesis.


Understanding Hypothesis Testing

Hypothesis testing involves formulating assumptions about population parameters based on sample statistics and rigorously evaluating these assumptions against empirical evidence. This article sheds light on the significance of hypothesis testing and the critical steps involved in the process.

What is Hypothesis Testing?

A hypothesis is an assumption or idea, specifically a statistical claim about an unknown population parameter. For example, a judge assumes a person is innocent and verifies this by reviewing evidence and hearing testimony before reaching a verdict.

Hypothesis testing is a statistical method used to make decisions from experimental data. It starts from an assumption we make about a population parameter and evaluates two mutually exclusive statements about the population to determine which statement is best supported by the sample data.

To test the validity of the claim or assumption about the population parameter:

  • A sample is drawn from the population and analyzed.
  • The results of the analysis are used to decide whether the claim is true or not.
Example: You claim that the average height in the class is 30, or that boys are taller than girls. These are assumptions, and we need a statistical way to test them: a mathematical procedure for deciding whether what we are assuming is true.

Defining Hypotheses

  • Null hypothesis (H₀): In statistics, the null hypothesis is a general statement or default position that there is no relationship between two measured cases or no difference among groups. In other words, it is the basic assumption made from knowledge of the problem. Example: A company's mean production is 50 units per day, i.e. H₀: μ = 50.
  • Alternative hypothesis (H₁): The alternative hypothesis is the hypothesis used in hypothesis testing that is contrary to the null hypothesis. Example: The company's mean production is not equal to 50 units per day, i.e. H₁: μ ≠ 50.

Key Terms of Hypothesis Testing

  • Level of significance: The degree of significance at which we accept or reject the null hypothesis. Since 100% certainty is impossible, we select a significance level, denoted α and usually set to 0.05 (5%), which means we require 95% confidence in the conclusion drawn from each sample.
  • P-value: The p-value, or calculated probability, is the probability of obtaining results at least as extreme as those observed when the null hypothesis (H₀) of the given problem is true. If the p-value is less than the chosen significance level, you reject the null hypothesis, i.e. the sample supports the alternative hypothesis.
  • Test Statistic: The test statistic is a numerical value calculated from sample data during a hypothesis test, used to determine whether to reject the null hypothesis. It is compared to a critical value or p-value to make decisions about the statistical significance of the observed results.
  • Critical value : The critical value in statistics is a threshold or cutoff point used to determine whether to reject the null hypothesis in a hypothesis test.
  • Degrees of freedom: Degrees of freedom describe the amount of freedom one has in estimating a parameter. They are related to the sample size and determine the shape of the relevant sampling distribution (for example, the t-distribution).
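
To make these terms concrete, the following sketch (with an assumed test statistic, using scipy.stats) shows how the significance level, critical value, and p-value relate for a two-tailed z-test:

from scipy import stats

alpha = 0.05        # level of significance
z_observed = 2.3    # assumed test statistic, for illustration only

critical_value = stats.norm.ppf(1 - alpha / 2)   # about 1.96 for a two-tailed test
p_value = 2 * stats.norm.sf(abs(z_observed))     # two-tailed p-value

print("critical value:", critical_value)
print("p-value:", p_value)
print("reject H0:", p_value <= alpha)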

Why do we use Hypothesis Testing?

Hypothesis testing is an important procedure in statistics. It evaluates two mutually exclusive statements about a population to determine which statement is better supported by the sample data. When we say that findings are statistically significant, it is hypothesis testing that justifies the claim.

One-Tailed and Two-Tailed Test

A one-tailed test focuses on one direction: either greater than or less than a specified value. We use a one-tailed test when there is a clear directional expectation based on prior knowledge or theory. The critical region is located on only one side of the distribution curve. If the sample statistic falls into this critical region, the null hypothesis is rejected in favor of the alternative hypothesis.

One-Tailed Test

There are two types of one-tailed test:

  • Left-Tailed (Left-Sided) Test: The alternative hypothesis asserts that the true parameter value is less than the value in the null hypothesis. Example: H₀: μ ≥ 50 and H₁: μ < 50.
  • Right-Tailed (Right-Sided) Test: The alternative hypothesis asserts that the true parameter value is greater than the value in the null hypothesis. Example: H₀: μ ≤ 50 and H₁: μ > 50.

Two-Tailed Test

A two-tailed test considers both directions, greater than and less than a specified value. We use a two-tailed test when there is no specific directional expectation and we want to detect any significant difference.

Example: H₀: μ = 50 and H₁: μ ≠ 50.

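The choice of tail changes how the p-value is computed from the same observed statistic. A small sketch (assumed z value, using scipy.stats):

from scipy import stats

z = 1.8  # assumed observed z statistic

p_right_tailed = stats.norm.sf(z)          # H1: mu > 50, P(Z >= z)
p_left_tailed = stats.norm.cdf(z)          # H1: mu < 50, P(Z <= z)
p_two_tailed = 2 * stats.norm.sf(abs(z))   # H1: mu != 50, P(|Z| >= |z|)

print(p_right_tailed, p_left_tailed, p_two_tailed)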

What are Type 1 and Type 2 errors in Hypothesis Testing?

In hypothesis testing, Type I and Type II errors are two possible errors that researchers can make when drawing conclusions about a population based on a sample of data. These errors are associated with the decisions made regarding the null hypothesis and the alternative hypothesis.

  • Type I error: We reject the null hypothesis although it is true. The Type I error rate is denoted by alpha (α).
  • Type II error: We fail to reject the null hypothesis although it is false. The Type II error rate is denoted by beta (β).


Decision                                Null Hypothesis is True           Null Hypothesis is False
Fail to Reject H₀ (Accept)              Correct Decision                  Type II Error (False Negative)
Reject H₀ (Accept the Alternative)      Type I Error (False Positive)     Correct Decision
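
A short simulation makes the two error rates tangible (a sketch with assumed means, sample size, and alpha): draw many samples under a true null and under a true alternative, and count how often a one-sample t-test rejects.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 2000

# Type I error: the null (mean = 0) is true, but the test rejects it anyway.
type1 = np.mean([
    stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue <= alpha
    for _ in range(trials)
])

# Type II error: the alternative (mean = 0.5) is true, but we fail to reject H0: mean = 0.
type2 = np.mean([
    stats.ttest_1samp(rng.normal(0.5, 1.0, n), 0.0).pvalue > alpha
    for _ in range(trials)
])

print("Estimated Type I error:", type1)    # close to alpha
print("Estimated Type II error:", type2)   # 1 minus the power at this effect size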

How does Hypothesis Testing work?

Step 1: Define Null and Alternative Hypotheses

State the null hypothesis (H₀), representing no effect, and the alternative hypothesis (H₁), suggesting an effect or difference.

We first identify the problem about which we want to make an assumption, keeping in mind that the null and alternative hypotheses must contradict one another; the methods below assume normally distributed data.

Step 2: Choose Significance Level

Select a significance level (α), typically 0.05, as the threshold for rejecting the null hypothesis. Fixing it before running the test keeps the decision rule honest: we decide how much evidence we require before seeing the data. Later, the p-value obtained from the data is compared against this significance level.

Step 3: Collect and Analyze Data

Gather relevant data through observation or experimentation. Analyze the data using appropriate statistical methods to obtain a test statistic.

Step 4: Calculate Test Statistic

In this step the data are evaluated and summarized into a single score chosen according to the characteristics of the data. The choice of the test statistic depends on the type of hypothesis test being conducted.

There are various hypothesis tests, each appropriate for a different goal. The statistic could come from a Z-test, Chi-square test, T-test, and so on.

  • Z-test: If the population mean and standard deviation are known, the Z-statistic is commonly used.
  • t-test: If the population standard deviation is unknown and the sample size is small, the t-test statistic is more appropriate.
  • Chi-square test: Used for categorical data or for testing independence in contingency tables.
  • F-test: Often used in analysis of variance (ANOVA) to compare variances or test the equality of means across multiple groups.

In the worked example below we have a small dataset, so a T-test is more appropriate for testing our hypothesis.

T-statistic is a measure of the difference between the means of two groups relative to the variability within each group. It is calculated as the difference between the sample means divided by the standard error of the difference. It is also known as the t-value or t-score.

Step 5: Compare the Test Statistic

In this stage, we decide whether to reject the null hypothesis or fail to reject it. There are two ways to make this decision.

Method A: Using Critical Values

Comparing the test statistic with the tabulated critical value, we have:

  • If |Test Statistic| > Critical Value: Reject the null hypothesis.
  • If |Test Statistic| ≤ Critical Value: Fail to reject the null hypothesis.

Note: Critical values are predetermined threshold values used to make a decision in hypothesis testing. To determine critical values, we typically refer to a statistical distribution table, such as normal distribution or t-distribution tables, chosen according to the test being used.
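
For example, the tabulated critical values can be obtained programmatically (a sketch assuming a two-tailed test at alpha = 0.05):

from scipy import stats

alpha = 0.05
z_critical = stats.norm.ppf(1 - alpha / 2)      # about 1.96 (two-tailed z-test)
t_critical = stats.t.ppf(1 - alpha / 2, df=9)   # about 2.26 (two-tailed t-test, 9 degrees of freedom)
print(z_critical, t_critical)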

Method B: Using P-values

We can also come to a conclusion using the p-value:

  • If the p-value is less than or equal to the significance level (p ≤ α), you reject the null hypothesis. This indicates that the observed results are unlikely to have occurred by chance alone, providing evidence in favor of the alternative hypothesis.
  • If the p-value is greater than the significance level (p > α), you fail to reject the null hypothesis. This suggests that the observed results are consistent with what would be expected under the null hypothesis.

Note: The p-value is the probability of obtaining a test statistic as extreme as, or more extreme than, the one observed in the sample, assuming the null hypothesis is true. To determine the p-value, we typically refer to a statistical distribution table, such as normal distribution or t-distribution tables, chosen according to the test being used.
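
Similarly, the p-value can be computed directly from the test statistic rather than read from a table (a sketch assuming a two-tailed t-test with 9 degrees of freedom, matching the drug example below):

from scipy import stats

t_stat, df = -9.0, 9
p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-tailed p-value
print(p_value)  # approximately 8.54e-06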

Step 6: Interpret the Results

Finally, we state the conclusion of the experiment using either Method A or Method B.

Calculating test statistic

To validate our hypothesis about a population parameter we use statistical functions. We use the z-score, p-value, and level of significance (alpha) to gather evidence about our hypothesis for normally distributed data.

1. Z-statistics:

Used when the population mean and standard deviation are known.

z = (x̄ − μ) / (σ / √n)

  • x̄ is the sample mean,
  • μ represents the population mean, 
  • σ is the standard deviation
  • and n is the size of the sample.
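
A direct translation of this formula into Python (assumed sample values and population parameters):

import numpy as np

sample = np.array([52, 49, 51, 53, 50, 48, 52, 51])  # assumed sample
mu, sigma = 50, 2                                     # assumed population mean and standard deviation

z = (sample.mean() - mu) / (sigma / np.sqrt(len(sample)))
print("z-statistic:", z)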

2. T-Statistics

The t-test is used when the sample size is small (n < 30) and the population standard deviation is unknown.

t-statistic calculation is given by:

t = (x̄ − μ) / (s / √n)

  • t = t-score,
  • x̄ = sample mean
  • μ = population mean,
  • s = standard deviation of the sample,
  • n = sample size
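
The same calculation in code, cross-checked against scipy.stats.ttest_1samp (assumed sample values):

import numpy as np
from scipy import stats

sample = np.array([48, 52, 49, 51, 47, 50, 53, 49, 52, 48])  # assumed sample, n < 30
mu = 50                                                       # hypothesized population mean

t_manual = (sample.mean() - mu) / (sample.std(ddof=1) / np.sqrt(len(sample)))
t_scipy, p_value = stats.ttest_1samp(sample, mu)
print(t_manual, t_scipy, p_value)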

3. Chi-Square Test

The Chi-Square test for independence is used for categorical (non-normally distributed) data:

χ² = Σᵢⱼ (Oᵢⱼ − Eᵢⱼ)² / Eᵢⱼ

  • Oᵢⱼ is the observed frequency in cell (i, j),
  • i, j are the row and column indices respectively,
  • Eᵢⱼ is the expected frequency in cell (i, j), calculated as (Row total × Column total) / Total observations.
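
The same statistic for a 2x2 contingency table, computed from the formula above and checked against scipy.stats.chi2_contingency (assumed counts):

import numpy as np
from scipy import stats

observed = np.array([[30, 20],
                     [20, 30]])  # assumed 2x2 contingency table

# Expected frequency of each cell: row total x column total / total observations
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals @ col_totals / observed.sum()

chi2_manual = ((observed - expected) ** 2 / expected).sum()
chi2, p, dof, _ = stats.chi2_contingency(observed, correction=False)
print(chi2_manual, chi2, p)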

Real life Examples of Hypothesis Testing

Let’s examine hypothesis testing using two real life situations,

Case A: Does a New Drug Affect Blood Pressure?

Imagine a pharmaceutical company has developed a new drug that they believe can effectively lower blood pressure in patients with hypertension. Before bringing the drug to market, they need to conduct a study to assess its impact on blood pressure.

  • Before Treatment: 120, 122, 118, 130, 125, 128, 115, 121, 123, 119
  • After Treatment: 115, 120, 112, 128, 122, 125, 110, 117, 119, 114

Step 1 : Define the Hypothesis

  • Null Hypothesis (H₀): The new drug has no effect on blood pressure.
  • Alternate Hypothesis (H₁): The new drug has an effect on blood pressure.

Step 2: Define the Significance level

Let's set the significance level at 0.05, meaning we will reject the null hypothesis if the evidence suggests less than a 5% chance of observing the results due to random variation alone.

Step 3 : Compute the test statistic

Using a paired T-test, analyze the data to obtain a test statistic and a p-value.

The test statistic (e.g., T-statistic) is calculated based on the differences between blood pressure measurements before and after treatment.

t = m/(s/√n)

  • m = mean of the differences, where dᵢ = X_after,i − X_before,i
  • s = standard deviation of the differences dᵢ
  • n = sample size

Then m = −3.9, s ≈ 1.37, and n = 10.

We calculate the T-statistic = −9 based on the formula for the paired t-test.

Step 4: Find the p-value

With the calculated t-statistic of −9 and degrees of freedom df = 9, you can find the p-value using statistical software or a t-distribution table.

Thus, p-value = 8.538051223166285e-06 (about 8.54 × 10⁻⁶).

Step 5: Result

  • If the p-value is less than or equal to 0.05, the researchers reject the null hypothesis.
  • If the p-value is greater than 0.05, they fail to reject the null hypothesis.

Conclusion: Since the p-value (8.538051223166285e-06) is less than the significance level (0.05), the researchers reject the null hypothesis. There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.

Python Implementation of Case A

Let’s create hypothesis testing with python, where we are testing whether a new drug affects blood pressure. For this example, we will use a paired T-test. We’ll use the scipy.stats library for the T-test.

SciPy is a scientific computing library in Python; its stats module provides the statistical tests used here.

We will implement our first real-life problem in Python:

import numpy as np
from scipy import stats

# Data
before_treatment = np.array([120, 122, 118, 130, 125, 128, 115, 121, 123, 119])
after_treatment = np.array([115, 120, 112, 128, 122, 125, 110, 117, 119, 114])

# Step 1: Null and Alternate Hypotheses
# Null Hypothesis: The new drug has no effect on blood pressure.
# Alternate Hypothesis: The new drug has an effect on blood pressure.
null_hypothesis = "The new drug has no effect on blood pressure."
alternate_hypothesis = "The new drug has an effect on blood pressure."

# Step 2: Significance Level
alpha = 0.05

# Step 3: Paired T-test
t_statistic, p_value = stats.ttest_rel(after_treatment, before_treatment)

# Step 4: Calculate T-statistic manually
m = np.mean(after_treatment - before_treatment)
s = np.std(after_treatment - before_treatment, ddof=1)  # using ddof=1 for sample standard deviation
n = len(before_treatment)
t_statistic_manual = m / (s / np.sqrt(n))

# Step 5: Decision
if p_value <= alpha:
    decision = "Reject"
else:
    decision = "Fail to reject"

# Conclusion
if decision == "Reject":
    conclusion = "There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different."
else:
    conclusion = "There is insufficient evidence to claim a significant difference in average blood pressure before and after treatment with the new drug."

# Display results
print("T-statistic (from scipy):", t_statistic)
print("P-value (from scipy):", p_value)
print("T-statistic (calculated manually):", t_statistic_manual)
print(f"Decision: {decision} the null hypothesis at alpha={alpha}.")
print("Conclusion:", conclusion)

T-statistic (from scipy): -9.0
P-value (from scipy): 8.538051223166285e-06
T-statistic (calculated manually): -9.0
Decision: Reject the null hypothesis at alpha=0.05.
Conclusion: There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.

In the above example, given the T-statistic of approximately -9 and an extremely small p-value, the results indicate a strong case to reject the null hypothesis at a significance level of 0.05. 

  • The results suggest that the new drug, treatment, or intervention has a significant effect on lowering blood pressure.
  • The negative T-statistic indicates that the mean blood pressure after treatment is significantly lower than the mean blood pressure before treatment.

Case B : Cholesterol level in a population

Data: A sample of 25 individuals is taken, and their cholesterol levels are measured.

Cholesterol Levels (mg/dL): 205, 198, 210, 190, 215, 205, 200, 192, 198, 205, 198, 202, 208, 200, 205, 198, 205, 210, 192, 205, 198, 205, 210, 192, 205.

Population Mean (claimed under the null hypothesis): 200 mg/dL

Population Standard Deviation (σ): 5 mg/dL (given for this problem)

Step 1: Define the Hypothesis

  • Null Hypothesis (H 0 ): The average cholesterol level in a population is 200 mg/dL.
  • Alternate Hypothesis (H 1 ): The average cholesterol level in a population is different from 200 mg/dL.

Step 2: Define the Significance Level. As the direction of deviation is not given, we assume a two-tailed test at a significance level of 0.05; based on the standard normal (z) table, the critical values are approximately -1.96 and 1.96.

Step 3: Compute the Test Statistic. The sample mean of the 25 measurements is 202.04 mg/dL, so the z formula gives Z = (202.04 − 200) / (5 / √25) = 2.04.

Step 4: Result

Since the absolute value of the test statistic (2.04) is greater than the critical value (1.96), we reject the null hypothesis and conclude that there is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.

Python Implementation of Case B

import math
import numpy as np
import scipy.stats as stats

# Given data
sample_data = np.array([205, 198, 210, 190, 215, 205, 200, 192, 198, 205, 198, 202, 208,
                        200, 205, 198, 205, 210, 192, 205, 198, 205, 210, 192, 205])
population_std_dev = 5
population_mean = 200
sample_size = len(sample_data)

# Step 1: Define the Hypotheses
# Null Hypothesis (H0): The average cholesterol level in a population is 200 mg/dL.
# Alternate Hypothesis (H1): The average cholesterol level in a population is different from 200 mg/dL.

# Step 2: Define the Significance Level
alpha = 0.05  # Two-tailed test

# Critical values for a significance level of 0.05 (two-tailed)
critical_value_left = stats.norm.ppf(alpha / 2)
critical_value_right = -critical_value_left

# Step 3: Compute the test statistic
sample_mean = sample_data.mean()
z_score = (sample_mean - population_mean) / (population_std_dev / math.sqrt(sample_size))

# Step 4: Result
# Check if the absolute value of the test statistic is greater than the critical values
if abs(z_score) > max(abs(critical_value_left), abs(critical_value_right)):
    print("Reject the null hypothesis.")
    print("There is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.")
else:
    print("Fail to reject the null hypothesis.")
    print("There is not enough evidence to conclude that the average cholesterol level in the population is different from 200 mg/dL.")

Reject the null hypothesis.
There is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.

Limitations of Hypothesis Testing

  • Although a useful technique, hypothesis testing does not offer a comprehensive grasp of the topic being studied. It concentrates on specific hypotheses and statistical significance without fully reflecting the complexity or the whole context of the phenomenon.
  • The accuracy of hypothesis testing results is contingent on the quality of available data and the appropriateness of statistical methods used. Inaccurate data or poorly formulated hypotheses can lead to incorrect conclusions.
  • Relying solely on hypothesis testing may cause analysts to overlook significant patterns or relationships in the data that are not captured by the specific hypotheses being tested. This limitation underscores the importance of complementing hypothesis testing with other analytical approaches.

Hypothesis testing stands as a cornerstone in statistical analysis, enabling data scientists to navigate uncertainties and draw credible inferences from sample data. By systematically defining null and alternative hypotheses, choosing significance levels, and leveraging statistical tests, researchers can assess the validity of their assumptions. The article also elucidates the critical distinction between Type I and Type II errors, providing a comprehensive understanding of the nuanced decision-making process inherent in hypothesis testing. The real-life example of testing a new drug’s effect on blood pressure using a paired T-test showcases the practical application of these principles, underscoring the importance of statistical rigor in data-driven decision-making.

Frequently Asked Questions (FAQs)

1. What are the 3 types of hypothesis tests?

There are three types of hypothesis tests: right-tailed, left-tailed, and two-tailed. Right-tailed tests assess if a parameter is greater, left-tailed if lesser. Two-tailed tests check for non-directional differences, greater or lesser.

2. What are the 4 components of hypothesis testing?

  • Null Hypothesis (H₀): No effect or difference exists.
  • Alternative Hypothesis (H₁): An effect or difference exists.
  • Significance Level (α): The risk of rejecting the null hypothesis when it is true (Type I error).
  • Test Statistic: A numerical value representing the observed evidence against the null hypothesis.

3. What is hypothesis testing in ML?

Statistical method to evaluate the performance and validity of machine learning models. Tests specific hypotheses about model behavior, like whether features influence predictions or if a model generalizes well to unseen data.

4. What is the difference between Pytest and Hypothesis in Python?

Pytest is a general-purpose testing framework for Python code, while Hypothesis is a property-based testing framework for Python that focuses on generating test cases from specified properties of the code.


Compressed Hypothesis Testing: To Mix or Not to Mix?

In this paper, we study the problem of determining k anomalous random variables that have different probability distributions from the remaining (n − k) random variables. Instead of sampling each individual random variable separately as in conventional hypothesis testing, we propose to perform hypothesis testing using mixed observations that are functions of multiple random variables. We characterize the error exponents for correctly identifying the k anomalous random variables under fixed time-invariant mixed observations, random time-varying mixed observations, and deterministic time-varying mixed observations. For our error exponent characterization, we introduce the notions of inner conditional Chernoff information and outer conditional Chernoff information. We demonstrate that mixed observations can strictly improve the error exponents of hypothesis testing over separate observations of individual random variables. We further characterize the optimal sensing vector maximizing the error exponents, which leads to explicit constructions of the optimal mixed observations in special cases of hypothesis testing for Gaussian random variables. These results show that mixed observations of random variables can reduce the number of required samples in hypothesis testing applications. In order to solve large-scale hypothesis testing problems, we also propose efficient algorithms: LASSO-based and message-passing-based hypothesis testing algorithms.


I Introduction

In many areas of science and engineering such as network tomography, cognitive radio, radar, and the Internet of Things (IoT), one needs to infer statistical information of signals of interest [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Statistical information of interest can be the means, the variances, or even the distributions of certain random variables. Obtaining such statistical information is essential in detecting anomalous behaviors of random signals. Especially, inferring distributions of random variables has many important applications, including quickest detection of potential hazards, detecting changes in statistical behaviors of random variables [12, 13, 14, 4, 15, 10], and detecting congested links with abnormal delay statistics in network tomography [9, 16, 17].

In this paper, we consider a multiple hypothesis testing problem with few compressed measurements, which has applications in anomaly detection. In particular, we consider n random variables, denoted by X_i, i ∈ S = {1, 2, ..., n}, out of which k (k ≪ n) random variables follow a probability distribution f_2(·) while the much larger set of remaining (n − k) random variables follow another probability distribution f_1(·). However, it is unknown which k random variables follow the distribution f_2(·). Our goal in this paper is to infer the subset of random variables that follow f_2(·). In our problem setup, this is equivalent to determining whether X_i follows the probability distribution f_1(·) or f_2(·) for each i. The system model of anomaly detection considered in this paper has appeared in various applications such as cognitive radio [10, 18, 19], quickest detection and search [20, 4, 21, 22, 23, 24, 25], and communication systems [26, 27, 28].

In order to infer the probability distribution of the n random variables, one conventional method is to obtain l separate samples for each random variable X_i and then use hypothesis testing techniques to determine whether X_i follows the probability distribution f_1(·) or f_2(·) for each i. To ensure correctly identifying the k anomalous random variables with high probability, at least Θ(n) samples are required for hypothesis testing with these samples involving only individual random variables. However, when the number of random variables n grows large, the requirement on sampling rates and sensing resources can easily become a burden in anomaly detection. For example, in a sensor network, if the fusion center aims to track anomalies in data generated by n chemical sensors, sending all the data samples of individual sensors to the fusion center will be energy-consuming and inefficient in the energy-limited sensor network. In this scenario, reducing the number of samples for inferring the probability distributions of the n random variables is desired in order to lessen the communication burden in the energy-limited sensor network. Additionally, in some applications introduced in [5, 8, 9] for the inference of link delay in networks, due to physical constraints, we are sometimes unable to directly obtain separate samples of individual random variables. These difficulties raise the question of whether we can perform hypothesis testing from a much smaller number of samples involving mixed observations.

In standard compressed sensing, one observes y = a^T μ + ε for some unknown deterministic sparse vector μ and additive noise ε; that is, the unknown vector x takes the same value in each measurement. Unlike standard compressed sensing, in compressed hypothesis testing, x is a vector of n random variables taking independent realizations across different measurements. As discussed below, our problem is relevant to multiple hypothesis testing such as in communication systems [26, 27, 28].

In addition to multiple hypothesis testing with communication constraints, related works on identifying anomalous random variables include the detection of an anomalous cluster in a network [32], Gaussian clustering [33], group testing [34, 35, 36, 37, 38, 39], and quickest detection [22, 23, 21, 40, 41]. Especially, in [22, 23, 21, 40], the authors optimized adaptive separate samplings of individual random variables and reduced the number of needed samples by utilizing the sparsity of anomalous random variables. However, the total number of observations is still at least Θ(n) for these methods [22, 23, 21, 40], since one is restricted to individually sample the n random variables. The major difference between the previous research [22, 23, 21, 40, 32, 33] and ours is that we consider compressed measurements instead of separate measurements of individual random variables. Additionally, group testing is different from our problem setting, since our sensing matrices and variables are general sensing matrices and vectors taking real-numbered values, while in group testing [34, 35, 36, 37, 38, 39], Bernoulli matrices are normally used. Moreover, in group testing, the unknown vector is often assumed to be deterministic across different measurements rather than taking independent realizations as in this paper.

The rest of the paper is organized as follows. Section II describes the mathematical model of the considered anomaly detection problem. In Section III-A, we investigate the hypothesis testing error performance using time-invariant mixed observations, propose corresponding hypothesis testing algorithms, and provide their performance analysis. Section III-B describes the case of random time-varying mixed observations for identifying the anomalous random variables, and we derive the error exponent of wrongly identifying the anomalous random variables. In Section III-C, we consider deterministic time-varying mixed observations for hypothesis testing, and derive a bound on the error probability. In Section IV, we consider the undersampling case, where the number of measurements is smaller than the number of random variables, to show the advantage of compressed hypothesis testing with mixed measurements over separate measurements. In Section V, we demonstrate, through examples of Gaussian random variables, that linear mixed observations can strictly improve the error exponent over separate sampling of each individual random variable. Section VI describes the optimal mixed measurements for Gaussian random variables maximizing the error exponent in hypothesis testing. Section VII introduces efficient algorithms to find abnormal random variables using mixed observations, for large values of n and k. In Section VIII, we demonstrate the effectiveness of our hypothesis testing methods with mixed measurements in various numerical experiments. Section IX provides the conclusion of this paper.

Notations: We denote a random variable and its realization by an uppercase letter and the corresponding lowercase letter, respectively. We use X_i to refer to the i-th element of the random vector X. We reserve calligraphic uppercase letters S and K for index sets, where S = {1, 2, ..., n} and K ⊆ S. We use superscripts to represent time indices; hence, x^j represents the realization of a random vector X at time j. We reserve the lowercase letters f and p for probability density functions (PDFs). We also denote the probability density function p_X(x) as p(x) or p_X for notational convenience. In this paper, log represents the logarithm with the natural number e as its base.

II Mathematical Models

(II.1)

where K ⊂ S = {1, 2, ..., n} is an unknown "support" index set, and |K| = k ≪ n. We take m mixed observations of the n random variables at m time indices. The measurement at time j is stated as

which is a function of the n random variables, where 1 ≤ j ≤ m. Note that the random variable X_i^j follows the probability distribution f_1(·) or f_2(·) depending on whether i ∈ K or not, which is the same distribution as the random variable X_i. Our goal in this paper is to determine K by identifying those k anomalous random variables with as few measurements as possible. We assume that the realizations at different time slots are mutually independent. Additionally, although our results can be extended to nonlinear observations, in this paper we specifically consider the case where the functions g^j(·) are linear, due to its simplicity and its wide range of applications, including network tomography [9] and cognitive radio [10]. Especially, the network tomography problem is a good example of the considered linear measurement model: the goal is to figure out congested links in a communication network by sending packets through probing paths composed of connected links, and the communication delay through a probing path is naturally a linear combination of the random variables representing the delays of that packet traveling through the corresponding links.

Throughout this paper, when the measurements through functions g^j are taken, the decoder knows the functions g^j. In particular, when the functions are linear, the decoder knows the coefficients of these linear functions, or the matrices A as discussed later in this paper.

When the functions g^j are linear, the j-th measurement is stated as follows:

(II.2)
(II.3)

We would like to design the sampling functions g^j(·) and the decision function φ(·) such that the probability

(II.4)

for an arbitrarily small ε > 0.

III Compressed Hypothesis Testing

In compressed hypothesis testing, we consider three different types of mixed observations, namely fixed time-invariant mixed measurements, random time-varying measurements, and deterministic time-varying measurements. Table I summarizes the definition of these types of measurements. For these different types of mixed observations, we characterize the number of measurements required to achieve a specified hypothesis testing error probability.

Table I. Types of mixed measurements:

  • Fixed time-invariant: the measurement function is the same at every time index.
  • Random time-varying: the measurement function is randomly generated from a distribution at each time index.
  • Deterministic time-varying: the measurement function changes with the time index but is predetermined.

III-A Fixed Time-Invariant Measurements

In this subsection, we focus on a simple case in which the sensing vectors are time-invariant across different time indices, i.e., a^1 = ... = a^m := a, where a ∈ R^(n×1). This simple case helps us illustrate the main idea, which will be generalized to more sophisticated schemes in later sections.

Y^1, Y^2, ..., Y^m follow probability distribution p_v

Y^1, Y^2, ..., Y^m follow probability distribution p_w

Theorem III.1.

(III.1)

is the Chernoff information between two probability distributions p_v and p_w.
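
The body of (III.1) did not survive extraction; for reference, the Chernoff information between two densities p_v and p_w is conventionally defined as (the paper's exact normalization may differ):

C(p_v, p_w) = -\min_{0 \le \lambda \le 1} \log \int p_v(y)^{\lambda}\, p_w(y)^{1-\lambda}\, dy .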

In Algorithm 2, for two probability distributions p_v and p_w, we choose the probability likelihood ratio threshold of the Neyman-Pearson test in such a way that the error probability decreases with the largest possible error exponent, namely the Chernoff information between p_v and p_w:

Overall, the smallest possible error exponent of making an error between any pair of probability distributions is

(III.2)

Without loss of generality, we assume that p_1 is the true probability distribution for the observation data Y = y. Since the error probability P_err in Neyman-Pearson testing scales as (and the exponent of the scaling is asymptotically tight [49])

where m is the number of measurements [49, Chapter 11.9]. By the union bound over the l − 1 possible pairs (p_1, p_w), the probability that p_1 is not correctly identified as the true probability distribution scales at most as l × 2^(−mE) := ε, where l = (n choose k). From the upper and lower bounds on the binomial coefficient (n choose k),
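
For reference, the standard bounds being invoked are

\left(\frac{n}{k}\right)^{k} \;\le\; \binom{n}{k} \;\le\; \left(\frac{e\,n}{k}\right)^{k},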

where e is the natural number and 1 ≤ k ≤ n. For the failure probability, we then have

Thus, for the number of measurements, we have

(III.3)

Therefore, Θ(k log(n) E^(−1)) samples, where E is introduced in (III.2), are enough for identifying the k anomalous random variables with high probability. ∎

Each random variable among the n random variables has the same probability of being an abnormal random variable. Thus, the possible locations of the k different random variables out of n follow a uniform prior distribution; namely, every hypothesis has the same prior probability of occurring. Algorithm 1 is based on maximum likelihood detection, which is known to provide the minimum error probability under a uniform prior [50]. Additionally, since the Likelihood Ratio Test (LRT) gives the same result as maximum likelihood estimation when the threshold value is one, Algorithm 2, which is an LRT algorithm, can provide the same result as Algorithm 1 with a properly chosen threshold value in the Neyman-Pearson test.

We also remark that the error exponent (Chernoff information) for the Neyman-Pearson test is tight, in the sense that a lower bound on the error probability for the pairwise Neyman-Pearson test scales with the same exponent.

If we are allowed to use time-varying sketching functions, we may need fewer samples. In the next subsection, we discuss the performance of time-varying mixed measurements for this problem.

III-B Random Time-Varying Measurements

We propose the maximum likelihood estimation method with random time-varying measurements over the (n choose k) hypotheses in Algorithm 3. For the purpose of analyzing the error probability of the maximum likelihood estimation, we further propose a hypothesis testing algorithm based on pairwise comparison in Algorithm 4. The number of samples required to find the abnormal random variables is stated in Theorem III.3. Before we introduce our theorem for hypothesis testing with random time-varying measurements, we introduce the Chernoff information between two conditional probability density functions, named the inner conditional Chernoff information, in Definition III.2.

  • p_{A,Y|H_v}(a, y | H_v)
  • p_{A,Y|H_w}(a, y | H_w)

Definition III.2 (Inner Conditional Chernoff Information).

(III.4)

With the definition of the inner conditional Chernoff information, we give our theorem on the sample complexity of our algorithms as follows.

Theorem III.3.

(III.5)
(III.6)

Here (III.5) is obtained. By further working on (III.5), we have

(III.7)
(III.8)

and the first equation holds for any realization vector a′ in the domain of a. We take the minimization over a′ in order to obtain the tightest lower bound on the inner conditional Chernoff information. Notice that, due to Hölder's inequality, for any probability density functions f(x) and g(x), we have

(III.9)

In conclusion, we obtain

(III.10)

Overall, the smallest possible error exponent between any pair of hypotheses is

(III.11)

Without loss of generality, we assume H_1 is the true hypothesis. Since the error probability P_err in Neyman-Pearson testing satisfies

(III.12)

by the union bound over the l − 1 possible pairs (H_1, H_w), where l = (n choose k), the probability that H_1 is not correctly identified as the true hypothesis is upper-bounded by l × 2^(−mE) in terms of scaling. Hence, as shown in the proof of Theorem III.1, m = Θ(k log(n) E^(−1)) samples, where E is introduced in (III.11), are enough for identifying the k anomalous random variables with high probability. ∎

III-C Deterministic Time-Varying Measurements

In this subsection, we consider mixed measurements that vary over time; however, each sensing vector is predetermined. Hence, a realized sensing vector a is used for exactly p(A = a)·m measurements (assuming p(A = a)·m is an integer). In contrast, in random time-varying measurements, each sensing vector A is drawn randomly, and thus the number of measurements taking realization a is random. We define the predetermined sensing vector at time j as a^j.

For deterministic time-varying measurements, we introduce the maximum likelihood estimation method among l = (n choose k) hypotheses in Algorithm 5. To analyze the error probability, we consider another hypothesis testing method based on pairwise comparison with deterministic time-varying measurements in Algorithm 6. Before introducing the sample complexity of hypothesis testing with deterministic time-varying measurements, we define the outer conditional Chernoff information between two probability density functions given hypotheses and a sensing vector in Definition III.4.

  • p_{Y|H_v,A}(y | H_v, a)
  • p_{Y|H_w,A}(y | H_w, a)

Definition III.4 (Outer Conditional Chernoff Information).

For λ ∈ [0, 1], two hypotheses H_v and H_w (1 ≤ v, w ≤ l), and a sensing vector A, define

(III.13)

With this definition, the following theorem describes the sample complexity of our algorithms for deterministic time-varying measurements.

Theorem III.5.

(III.14)

For readability, we place the proof of Theorem III.5 in Appendix X-A. It is noteworthy that, by Jensen's inequality, the outer conditional Chernoff information introduced in (III.13) is greater than or equal to the inner conditional Chernoff information introduced in (III.4).

IV Compressed Hypothesis Testing in the Regime of Undersampling

Compressed hypothesis testing is especially effective when the number of samples allowed is in the subsampling regime, where the number of samples is small, sometimes even smaller than the number of random variables. Now we give a lower bound on the error probability of determining the set of anomalous random variables when the number of individual samples is smaller than the number of random variables.

Theorem IV.1.

Consider n independent random variables, among which (n − k) random variables follow a known probability distribution f_1(·), while the other k random variables follow another known probability distribution f_2(·). Suppose that we take m < n samples of individual random variables (no mixing). Then the probability of misidentifying the k abnormal random variables is at least

Suppose that i (with i ≥ max(0, k + m − n) and i ≤ min(k, m)) of the random variables that follow the abnormal distribution f_2(·) are observed in these m samples. Then there are at least (n − m) random variables that are never sampled, and among them there are (k − i) random variables that follow the abnormal distribution f_2(·). Correctly determining the (k − i) random variables that follow the abnormal distribution f_2(·) happens with probability at most 1/(n−m choose k−i). So the probability of correctly identifying the k abnormal random variables is at most

This proves the lower bound on misidentifying the k abnormal random variables with separate measurements, where m < n. ∎

As we can see, if m > k and m ≪ n, the error probability can be very close to 1. In contrast, compressed hypothesis testing can potentially greatly lower the error probability even with m ≪ n samples. In fact, following the proof of Theorem III.5, we have the following results on the error probability for deterministic time-varying measurements.

Theorem IV.2.

If the outer conditional Chernoff information for the mixed measurements is large enough, the error probability can indeed be made close to 0 even if the number of measurements is smaller than the problem dimension n. In contrast, according to Theorem IV.1, for small m, the error probability of separate observations is lower bounded by a number close to 1, and it is impossible for separate observations to achieve low error probability at all. We remark that, compared with the earlier theorems, which deal mostly with the error exponent (where the number of samples m is large and goes to infinity), Theorem IV.2 is for the undersampling regime where the number of samples is smaller than the number of variables. Through various numerical experiments in Section VIII, we validate the observations above.

V Examples of Compressed Hypothesis Testing

In this section, we provide simple examples in which smaller error probability can be achieved in hypothesis testing through mixed observations than the traditional individual sampling approach, with the same number of measurements. Especially, we consider Gaussian probability distributions in our examples.

V-A Example 1: two Gaussian random variables

In this example, we consider n = 2 and k = 1. We group the two independent random variables X_1 and X_2 in a random vector [X_1, X_2]^T. Suppose that there are two hypotheses for the 2-dimensional random vector [X_1, X_2]^T, where X_1 and X_2 are independent:

H_1: X_1 ~ N(A, σ²) and X_2 ~ N(B, σ²),

H_2: X_1 ~ N(B, σ²) and X_2 ~ N(A, σ²).

Here $A$ and $B$ are two distinct constants, and $\sigma^2$ is the variance of the two Gaussian random variables. At each time index, only one observation is allowed, and the observation is restricted to a linear mixing of $X_1$ and $X_2$; namely, each observation takes the form $Y = a_1 X_1 + a_2 X_2$ for a sensing vector $[a_1, a_2]^T$.

We assume that the sensing vector $[a_1, a_2]^T$ does not change over time. Clearly, when $a_1 \neq 0$ and $a_2 = 0$, the sensing vector reduces to a separate observation of $X_1$; and when $a_1 = 0$ and $a_2 \neq 0$, it reduces to a separate observation of $X_2$. In these cases, the observation follows the distribution $\mathcal{N}(A, \sigma^2)$ under one hypothesis, and the distribution $\mathcal{N}(B, \sigma^2)$ under the other hypothesis. The Chernoff information between these two distributions is

(V.1)
$$C\big(\mathcal{N}(A,\sigma^2),\,\mathcal{N}(B,\sigma^2)\big)=\frac{(A-B)^2}{8\sigma^2}.$$

In contrast, with a mixed measurement $Y = a_1 X_1 + a_2 X_2$, the observation follows $\mathcal{N}(a_1 A + a_2 B,\,(a_1^2+a_2^2)\sigma^2)$ under $H_1$, and $\mathcal{N}(a_1 B + a_2 A,\,(a_1^2+a_2^2)\sigma^2)$ under $H_2$. The Chernoff information between these two distributions is given by

(V.2)
$$C=\frac{(a_1-a_2)^2(A-B)^2}{8(a_1^2+a_2^2)\sigma^2}\;\le\;\frac{(A-B)^2}{4\sigma^2},$$

where the upper bound is attained by taking the measurement vector $[a_1, a_2]^T = [a_1, -a_1]^T$ with $a_1 \neq 0$. Therefore, with mixed measurements, we can double the Chernoff information. This shows that linear mixed observations can offer a strict improvement in reducing the error probability in hypothesis testing by increasing the error exponent.
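As a quick numerical check of this doubling effect, the following sketch computes the Chernoff information $-\min_{0\le\lambda\le1}\log\int p^{\lambda}q^{1-\lambda}\,dx$ by brute force, once for a separate observation ($\mathcal{N}(A,\sigma^2)$ versus $\mathcal{N}(B,\sigma^2)$) and once for the mixed observation $Y = X_1 - X_2$. The particular values of $A$, $B$, and $\sigma$ are illustrative choices, not taken from this paper.

```python
# Sketch: numerically verify that the mixed measurement [1, -1] doubles the
# Chernoff information in Example 1. A, B, sigma are illustrative values.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def chernoff_information(m1, v1, m2, v2, grid=200):
    """Chernoff information: -min over lambda of log integral p1^lambda * p2^(1-lambda)."""
    def log_integral(lam):
        f = lambda x: norm.pdf(x, m1, np.sqrt(v1))**lam * norm.pdf(x, m2, np.sqrt(v2))**(1 - lam)
        return np.log(quad(f, -np.inf, np.inf)[0])
    lams = np.linspace(1e-3, 1 - 1e-3, grid)
    return -min(log_integral(lam) for lam in lams)

A, B, sigma = 1.0, 0.0, 1.0  # illustrative constants

# Separate observation of X1 (or X2): N(A, sigma^2) vs N(B, sigma^2)
c_sep = chernoff_information(A, sigma**2, B, sigma**2)

# Mixed observation Y = X1 - X2: N(A - B, 2 sigma^2) vs N(B - A, 2 sigma^2)
c_mix = chernoff_information(A - B, 2 * sigma**2, B - A, 2 * sigma**2)

print(f"separate: {c_sep:.4f}  closed form (A-B)^2/(8 sigma^2) = {(A - B)**2 / (8 * sigma**2):.4f}")
print(f"mixed   : {c_mix:.4f}  ratio mixed/separate = {c_mix / c_sep:.2f}")
```

For these values the printed ratio is 2, matching the factor-of-two improvement derived above.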

V-B Example 2: Gaussian random variables with different means

  • $H_1$: $[X_1, X_2, \ldots, X_n]$ follows the jointly Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)$,
  • $H_2$: $[X_1, X_2, \ldots, X_n]$ follows the jointly Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}_2, \boldsymbol{\Sigma}_2)$.

Here $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ are both $n \times n$ covariance matrices.

Suppose that at each time instant, only one observation is allowed, and the observation is restricted to a linear measurement through a time-invariant sensing vector $\boldsymbol{A} \in \mathbb{R}^{n \times 1}$; namely, each observation is $Y = \boldsymbol{A}^T [X_1, X_2, \ldots, X_n]^T$.

Under these conditions, the observation follows the distribution $\mathcal{N}(\boldsymbol{A}^T \boldsymbol{\mu}_1, \boldsymbol{A}^T \boldsymbol{\Sigma}_1 \boldsymbol{A})$ under hypothesis $H_1$, and the distribution $\mathcal{N}(\boldsymbol{A}^T \boldsymbol{\mu}_2, \boldsymbol{A}^T \boldsymbol{\Sigma}_2 \boldsymbol{A})$ under the other hypothesis $H_2$. We would like to choose a sensing vector $\boldsymbol{A}$ that maximizes the Chernoff information between the two possible univariate Gaussian distributions, namely

In fact, from [51], the Chernoff information between these two univariate Gaussian distributions is
$$C=\max_{0\le\lambda\le1}\left\{\frac{\lambda(1-\lambda)\big(\boldsymbol{A}^T(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)\big)^2}{2\big(\lambda\,\boldsymbol{A}^T\boldsymbol{\Sigma}_2\boldsymbol{A}+(1-\lambda)\,\boldsymbol{A}^T\boldsymbol{\Sigma}_1\boldsymbol{A}\big)}+\frac{1}{2}\log\frac{\lambda\,\boldsymbol{A}^T\boldsymbol{\Sigma}_2\boldsymbol{A}+(1-\lambda)\,\boldsymbol{A}^T\boldsymbol{\Sigma}_1\boldsymbol{A}}{\big(\boldsymbol{A}^T\boldsymbol{\Sigma}_1\boldsymbol{A}\big)^{1-\lambda}\big(\boldsymbol{A}^T\boldsymbol{\Sigma}_2\boldsymbol{A}\big)^{\lambda}}\right\}.$$

We first look at the special case where $\boldsymbol{\Sigma} = \boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2$. Under this condition, the maximum Chernoff information is given by
$$\max_{\boldsymbol{A}}\ \frac{\big(\boldsymbol{A}^T(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)\big)^2}{8\,\boldsymbol{A}^T\boldsymbol{\Sigma}\boldsymbol{A}}.$$

Taking $\boldsymbol{A}^{\prime} = \boldsymbol{\Sigma}^{\frac{1}{2}}\boldsymbol{A}$, this reduces to
$$\max_{\boldsymbol{A}^{\prime}}\ \frac{\big((\boldsymbol{A}^{\prime})^T\boldsymbol{\Sigma}^{-\frac{1}{2}}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)\big)^2}{8\,\|\boldsymbol{A}^{\prime}\|_2^2}.$$

From the Cauchy-Schwarz inequality, it is easy to see that the optimal $\lambda = \frac{1}{2}$, $\boldsymbol{A}^{\prime} = \boldsymbol{\Sigma}^{-\frac{1}{2}}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$, and $\boldsymbol{A} = \boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$. Under these conditions, the maximum Chernoff information is given by

(V.3)
$$\frac{1}{8}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)^T\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2).$$

Note that, in general, the optimal sensing vector $\boldsymbol{A} = \boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$ is not a separate observation of a single random variable, but rather a linear mixing of the $n$ random variables. Therefore, a mixed measurement can maximize the Chernoff information.
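To illustrate this, the sketch below (with arbitrarily chosen $\boldsymbol{\mu}_1$, $\boldsymbol{\mu}_2$, and $\boldsymbol{\Sigma}$, not taken from this paper) compares the Chernoff information $(\boldsymbol{A}^T(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2))^2/(8\,\boldsymbol{A}^T\boldsymbol{\Sigma}\boldsymbol{A})$ achieved by the best separate observation ($\boldsymbol{A} = \boldsymbol{e}_i$) with the value achieved by the mixed sensing vector $\boldsymbol{A} = \boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$, and checks the latter against the closed form (V.3).

```python
# Sketch: equal-covariance case of Example 2. Compare separate observations
# (standard basis vectors) with the mixed sensing vector A = Sigma^{-1}(mu1 - mu2).
# The means and covariance below are illustrative only.
import numpy as np

def projected_chernoff(A, mu1, mu2, Sigma):
    """Chernoff information of the scalar observation A^T x when Sigma1 = Sigma2 = Sigma."""
    diff = A @ (mu1 - mu2)
    return diff**2 / (8.0 * (A @ Sigma @ A))

rng = np.random.default_rng(0)
n = 5
mu1, mu2 = rng.normal(size=n), rng.normal(size=n)
M = rng.normal(size=(n, n))
Sigma = M @ M.T + n * np.eye(n)          # a well-conditioned covariance matrix

# Best separate observation: A = e_i
best_separate = max(projected_chernoff(np.eye(n)[i], mu1, mu2, Sigma) for i in range(n))

# Mixed observation: A = Sigma^{-1} (mu1 - mu2)
A_star = np.linalg.solve(Sigma, mu1 - mu2)
mixed = projected_chernoff(A_star, mu1, mu2, Sigma)

closed_form = (mu1 - mu2) @ np.linalg.solve(Sigma, mu1 - mu2) / 8.0   # Eq. (V.3)
print(f"best separate observation: {best_separate:.4f}")
print(f"mixed sensing vector     : {mixed:.4f}   closed form (V.3): {closed_form:.4f}")
```

Since (V.3) is exactly the Cauchy-Schwarz upper bound, the mixed value matches the closed form and is never smaller than any separate observation.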

V-C Example 3: Gaussian random variables with different variances

Let us now look at mixed observations for Gaussian random variables with different variances. Consider the same setting as in Example 2, except that we now examine another special case where $\boldsymbol{\mu} = \boldsymbol{\mu}_1 = \boldsymbol{\mu}_2$, and we study the optimal sensing vector under this scenario. The Chernoff information then becomes

(V.4)
$$C(\boldsymbol{A})=\max_{0\le\lambda\le1}\ \frac{1}{2}\log\frac{\lambda\,\boldsymbol{A}^T\boldsymbol{\Sigma}_2\boldsymbol{A}+(1-\lambda)\,\boldsymbol{A}^T\boldsymbol{\Sigma}_1\boldsymbol{A}}{\big(\boldsymbol{A}^T\boldsymbol{\Sigma}_1\boldsymbol{A}\big)^{1-\lambda}\big(\boldsymbol{A}^T\boldsymbol{\Sigma}_2\boldsymbol{A}\big)^{\lambda}}.$$

To find the optimal sensing vector $\boldsymbol{A}$, we solve the following optimization problem:

(V.5)

For a certain $\boldsymbol{A}$, we define

Note that $B \ge 1$. By symmetry between $\lambda$ and $1-\lambda$, maximizing the Chernoff information can always be reduced to

(V.6)
$$\max_{0\le\lambda\le1}\ \frac{1}{2}\log\frac{\lambda+(1-\lambda)B}{B^{1-\lambda}}.$$

The optimal $\lambda$, denoted by $\lambda^{\star}$, is obtained by setting the first-order derivative of the objective to zero, as follows:

By plugging $\lambda^{\star}$ into (V.6), we obtain the following optimization problem:

(V.7)

We note that the objective function is an increasing function of $B$ when $B \ge 1$, as proven in Lemma V.1.

Lemma V.1.

The optimal objective value of the following optimization problem

is an increasing function of $B \ge 1$.

For any fixed $\lambda \in [0,1]$, the quantity $\frac{\lambda+(1-\lambda)B}{B^{1-\lambda}}$ is an increasing function of $B \ge 1$. In fact, its derivative with respect to $B$ is
$$\frac{\partial}{\partial B}\left(\frac{\lambda+(1-\lambda)B}{B^{1-\lambda}}\right)=\lambda(1-\lambda)\,B^{\lambda-2}(B-1)\;\ge\;0\quad\text{for } B\ge 1.$$
Since this holds for every fixed $\lambda$, the maximum over $\lambda$ is also nondecreasing in $B$, and the conclusion of this lemma immediately follows. ∎
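As a quick numerical sanity check of Lemma V.1, assuming (as in the proof above) that the objective of (V.6) is $\max_{\lambda}\frac{1}{2}\log\frac{\lambda+(1-\lambda)B}{B^{1-\lambda}}$ with $B \ge 1$, the sketch below evaluates this maximum on a grid of $B$ values and confirms that it is nondecreasing.

```python
# Sketch: check numerically that g(B) = max over lambda of
# (1/2) log((lambda + (1 - lambda) * B) / B**(1 - lambda))
# is increasing for B >= 1 (Lemma V.1). Grid sizes are arbitrary.
import numpy as np

def g(B, num_lambda=2001):
    lams = np.linspace(0.0, 1.0, num_lambda)
    vals = 0.5 * np.log((lams + (1 - lams) * B) / B**(1 - lams))
    return float(vals.max())

Bs = np.linspace(1.0, 50.0, 500)
gs = np.array([g(B) for B in Bs])
print("monotone nondecreasing:", bool(np.all(np.diff(gs) >= -1e-12)))
print("g(1) =", g(1.0), " g(10) =", round(g(10.0), 4), " g(50) =", round(g(50.0), 4))
```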

This means that we need to maximize $B$ in order to maximize the Chernoff information. Hence, to find the optimal $\boldsymbol{A}$ maximizing $B$, we solve the following two optimization problems:

(V.8)
(V.9)

Then the maximum of the two optimal objective values equals the optimal value of maximizing $B$, and the corresponding $\boldsymbol{A}$ is the optimal sensing vector maximizing the Chernoff information. These two optimization problems are not convex programs; however, they still have zero duality gap by the S-procedure [52, Appendix B]. In fact, they are respectively equivalent to the following two semidefinite programming problems:

(V.10)
subject to
(V.11)
subject to

Thus, they can be efficiently solved via a generic optimization solver.
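As a lightweight alternative to solving the semidefinite programs (V.10) and (V.11), and assuming $B$ denotes the ratio of the projected variances $\boldsymbol{A}^T\boldsymbol{\Sigma}_1\boldsymbol{A}/\boldsymbol{A}^T\boldsymbol{\Sigma}_2\boldsymbol{A}$ taken so that $B \ge 1$, maximizing $B$ over $\boldsymbol{A}$ is a generalized Rayleigh-quotient problem and can be handled with a generalized eigendecomposition, as in the sketch below. The covariance matrices are illustrative choices, not the ones used in this paper.

```python
# Sketch: maximize B (the larger of the two projected-variance ratios) over
# sensing vectors A, assuming this ratio interpretation of B. The stationary
# values of A^T S1 A / A^T S2 A are the generalized eigenvalues of (S1, S2),
# so the extreme eigenvalues give the two candidates for B.
# Sigma1, Sigma2 below are illustrative examples.
import numpy as np
from scipy.linalg import eigh

Sigma1 = np.array([[2.0, 0.9],
                   [0.9, 1.0]])
Sigma2 = np.array([[1.0, -0.5],
                   [-0.5, 1.5]])

w, V = eigh(Sigma1, Sigma2)                  # generalized eigenvalues, ascending
candidates = {w[-1]: V[:, -1],               # maximizes A^T S1 A / A^T S2 A
              1.0 / w[0]: V[:, 0]}           # maximizes A^T S2 A / A^T S1 A
B_opt = max(candidates)
A_opt = candidates[B_opt]

ratio = (A_opt @ Sigma1 @ A_opt) / (A_opt @ Sigma2 @ A_opt)
print("optimal B:", round(B_opt, 4))
print("optimal sensing vector A (up to scale):", np.round(A_opt, 4))
print("achieved variance ratio or its reciprocal:", round(max(ratio, 1.0 / ratio), 4))
```

For non-diagonal $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ such as these, the optimizer is a genuinely mixed direction rather than a separate observation of a single variable.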

For example, when $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ are given as follows:

V-D Example 4: $k=1$ anomalous random variable among $n=7$ random variables

Let us introduce a specific example to aid understanding before considering more general examples of compressed hypothesis testing. In this example, six of the random variables follow the distribution $\mathcal{N}(0,1)$, and the remaining random variable follows the distribution $\mathcal{N}(0,\sigma^2)$, where $\sigma^2 > 1$. We assume that the random variables $X_1, X_2, \ldots, X_7$ are independent. Overall, there are seven hypotheses:

$H_1$: $(X_1, X_2, \ldots, X_7) \sim (\mathcal{N}(0,\sigma^2), \mathcal{N}(0,1), \ldots, \mathcal{N}(0,1))$,

$H_2$: $(X_1, X_2, \ldots, X_7) \sim (\mathcal{N}(0,1), \mathcal{N}(0,\sigma^2), \ldots, \mathcal{N}(0,1))$,

$\vdots$

$H_7$: $(X_1, X_2, \ldots, X_7) \sim (\mathcal{N}(0,1), \mathcal{N}(0,1), \ldots, \mathcal{N}(0,\sigma^2))$.

In this example, we will show that the Chernoff information with separate measurements is smaller than the Chernoff information with mixed measurements. We first calculate the Chernoff information between any two hypotheses under separate measurements. With separate measurements, under hypothesis $H_v$, the output distribution is $\mathcal{N}(0,\sigma^2)$ only when the random variable $X_v$ is observed; otherwise, the output follows $\mathcal{N}(0,1)$. Then, for any pair of hypotheses $H_v$ and $H_w$, when $X_v$ is observed, the output distributions are $\mathcal{N}(0,\sigma^2)$ and $\mathcal{N}(0,1)$, respectively; similarly, when $X_w$ is observed, the output distributions are $\mathcal{N}(0,1)$ and $\mathcal{N}(0,\sigma^2)$, respectively. For the separate measurements, the seven sensing vectors $\boldsymbol{a}_1^T$ to $\boldsymbol{a}_7^T$ for the seven hypotheses are predetermined as follows:

We take deterministic time-varying measurements using these seven sensing vectors.

The Chernoff information between any two hypotheses with separate measurements, e.g., $H_v = H_1$ and $H_w = H_2$, is calculated as follows:

(V.12)
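Under separate sampling, the only informative comparisons for the pair $(H_v, H_w)$ are between $\mathcal{N}(0,\sigma^2)$ and $\mathcal{N}(0,1)$, observed when $X_v$ or $X_w$ is measured. The sketch below evaluates this pairwise Chernoff information by direct numerical integration; it is only an ingredient of, not a substitute for, the expression in (V.12), and the values of $\sigma^2$ are arbitrary illustrations.

```python
# Sketch: Chernoff information between N(0, sigma^2) and N(0, 1), the informative
# comparison under separate sampling for a hypothesis pair. Computed by direct
# numerical integration; sigma^2 values are illustrative only.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def chernoff_information(v1, v2, grid=401):
    def log_integral(lam):
        f = lambda x: norm.pdf(x, 0.0, np.sqrt(v1))**lam * norm.pdf(x, 0.0, np.sqrt(v2))**(1 - lam)
        return np.log(quad(f, -np.inf, np.inf)[0])
    lams = np.linspace(1e-3, 1 - 1e-3, grid)
    return -min(log_integral(lam) for lam in lams)

for sigma2 in [2.0, 5.0, 10.0]:
    print(f"sigma^2 = {sigma2:5.1f}   C(N(0, sigma^2), N(0, 1)) = {chernoff_information(sigma2, 1.0):.4f}")
```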

Let us then calculate the Chernoff information between hypotheses $H_v$ and $H_w$ with mixed measurements. For the mixed measurements, we consider using the parity check matrix of the $(7,4)$ Hamming code as follows:

where at time index $j$, the sensing vector indexed by $i = (j \bmod 3) + 1$ is used. Thus, we use a total of three sensing vectors for the mixed measurements, applied repeatedly.
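To see how these three mixing rows separate the seven hypotheses, the sketch below uses one standard form of the $(7,4)$ Hamming parity-check matrix (the column ordering here is an assumption and may differ from the matrix used in the paper) and counts, for every pair $(H_v, H_w)$, how many of the three rows produce different output distributions, i.e., rows whose support contains exactly one of the positions $v$ and $w$.

```python
# Sketch: for each hypothesis pair (v, w), count how many of the three mixing
# rows of a (7,4) Hamming parity-check matrix distinguish the pair. A weight-4
# row distinguishes (v, w) iff it contains exactly one of positions v, w: only
# then do the output variances differ (sigma^2 + 3 versus 4). The column
# ordering below (binary representations of 1..7) is an assumption.
from itertools import combinations

H = [[(j >> r) & 1 for j in range(1, 8)] for r in range(3)]   # 3 x 7 parity-check matrix

for v, w in combinations(range(7), 2):
    distinguishing = sum(1 for row in H if row[v] != row[w])
    print(f"H{v + 1} vs H{w + 1}: {distinguishing} distinguishing row(s)")
```

Pairs with only a single distinguishing row yield the smallest outer Chernoff information; with the column ordering used in the paper, Lemma V.2 identifies $(H_1, H_4)$ and $(H_4, H_7)$ as such pairs, while a different column ordering moves the minimum to different pairs.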

For a pair of hypotheses $H_v$ and $H_w$, there are in total $21\ (=\binom{7}{2})$ cases to consider in the calculation of the outer Chernoff information. For that, we have the following lemma:

Lemma V.2.

Given the mixed measurements defined by the parity check matrix of the Hamming code, the outer Chernoff information between $H_1$ and $H_4$ (or between $H_4$ and $H_7$) is the minimum one among the $21\ (=\binom{7}{2})$ pairwise cases.

From the definition of the outer Chernoff information introduced in (III.5), we can calculate the outer Chernoff information as follows:

For example, for the pair of hypotheses $H_1$ and $H_2$, we have

For another pair of hypotheses, $H_1$ and $H_4$, we have

Among the 21 cases, the outer Chernoff information between $H_1$ and $H_4$ (or between $H_4$ and $H_7$) has only one remaining term in the calculation. Then, to complete the proof of this lemma, let us introduce the following lemma:

Lemma V.3.

(V.13)

where $j$ can be any number between $1$ and $m/2$.

From Lemma V.3, we can conclude that the outer Chernoff information between $H_1$ and $H_4$ (or between $H_4$ and $H_7$), which has only one remaining term, is the minimum one among the 21 cases. ∎

Then, let us calculate the outer Chernoff information between $H_v = H_1$ and $H_w = H_4$ as follows:

(V.14)

with $\sigma^2 + 3, 1, \ldots, 1$ on the diagonal.

When $\sigma^2 \gg 1$, for separate observations, we have

where $e$ is the base of the natural logarithm. To compare the Chernoff information between separate measurements and mixed measurements, let us subtract the two values in the logarithm domain and check whether the result is positive or negative. For large enough $\sigma$, the following condition holds:

Therefore, we can conclude that for large enough $\sigma$, the Chernoff information with mixed measurements, i.e., the outer Chernoff information, becomes greater than that with separate measurements. Fig. 1 shows the outer Chernoff information (denoted by OCI) and the Chernoff information with separate measurements (denoted by CI). From Fig. 1, it is clear that the Chernoff information with mixed measurements can be larger than the Chernoff information with separate measurements, and that the outer Chernoff information between $H_1$ and $H_4$ (or between $H_4$ and $H_7$) is the minimum one among the cases shown. For simplicity of the figure, we present the outer Chernoff information for only a few hypothesis pairs in Fig. 1.

[Fig. 1: outer Chernoff information (OCI) with mixed measurements and Chernoff information (CI) with separate measurements for several hypothesis pairs.]

Additionally, the inner Chernoff information between $H_v = H_1$ and $H_w = H_4$ can be calculated as follows:

By considering the first-order condition, we obtain the following critical point, which lies between 0 and 1; namely,
