NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.
Hypothesis testing, p values, confidence intervals, and significance.
Jacob Shreffler; Martin R. Huecker.
Last Update: March 13, 2023.
Medical providers often rely on evidence-based medicine to guide decision-making in practice. Often a research hypothesis is tested with results provided, typically with p values, confidence intervals, or both. Additionally, statistical or research significance is estimated or determined by the investigators. Unfortunately, healthcare providers may have different comfort levels in interpreting these findings, which may affect the adequate application of the data.
Without a foundational understanding of hypothesis testing, p values, confidence intervals, and the difference between statistical and clinical significance, healthcare providers may struggle to make clinical decisions without relying purely on the level of significance the research investigators deemed appropriate. Therefore, this overview of these concepts allows medical professionals to use their expertise to determine whether results are reported sufficiently and whether the study outcomes are clinically appropriate to apply in healthcare practice.
Hypothesis Testing
Investigators conducting studies need research questions and hypotheses to guide analyses. Starting with broad research questions (RQs), investigators then identify a gap in current clinical practice or research. Any research problem or statement is grounded in a better understanding of relationships between two or more variables. For this article, we will use the following research question example:
Research Question: Is Drug 23 an effective treatment for Disease A?
Research questions do not directly imply specific guesses or predictions; we must formulate research hypotheses. A hypothesis is a predetermined declaration regarding the research question in which the investigator(s) makes a precise, educated guess about a study outcome. This is sometimes called the alternative hypothesis and ultimately allows the researcher to take a stance based on experience or insight from medical literature. An example of a hypothesis is below.
Research Hypothesis: Drug 23 will significantly reduce symptoms associated with Disease A compared to Drug 22.
The null hypothesis states that there is no statistical difference between groups based on the stated research hypothesis.
Null Hypothesis: There is no statistically significant difference in the reduction of symptoms for Disease A between Drug 23 and Drug 22.

The null hypothesis is deemed true until a study presents significant data to support rejecting it. Based on the results, the investigators will either reject the null hypothesis (if they find significant differences or associations) or fail to reject it (if they cannot demonstrate significant differences or associations). Note that as the number of individuals enrolled in a study (the sample size) increases, the likelihood of finding a statistically significant effect also increases; with very large sample sizes, even clinically trivial differences can yield very low p values. Researchers should also be aware of journal recommendations when considering how to report p values, and manuscripts should remain internally consistent.
To test a hypothesis, researchers obtain data on a representative sample to determine whether to reject or fail to reject a null hypothesis. In most research studies, it is not feasible to obtain data for an entire population. Using a sampling procedure allows for statistical inference, though this involves a certain possibility of error. [1] When determining whether to reject or fail to reject the null hypothesis, mistakes can be made: Type I and Type II errors. Though it is impossible to ensure that these errors have not occurred, researchers should limit the possibilities of these faults. [2]
Significance
Significance is a term used to describe the substantive importance of medical research. Statistical significance is the likelihood that an observed result is due to something other than chance alone. [3] Healthcare providers should always delineate statistical significance from clinical significance, a common error when reviewing biomedical research. [4] When conceptualizing findings reported as either significant or not significant, healthcare providers should not simply accept researchers' results or conclusions without considering the clinical significance. Healthcare professionals should consider the clinical importance of findings and understand both p values and confidence intervals so they do not have to rely on the researchers to determine the level of significance. [5] One criterion often used to determine statistical significance is the utilization of p values.
P values are used in research to determine whether the sample estimate is significantly different from a hypothesized value. The p-value is the probability that the observed effect within the study would have occurred by chance if, in reality, there was no true effect. Conventionally, data yielding p<0.05 or p<0.01 are considered statistically significant. While some have debated that the 0.05 level should be lowered, it is still widely practiced. [6] Note, however, that a p-value alone does not tell us the size of the effect.

Examples of findings reported with p values are below:
Statement: Drug 23 reduced patients' symptoms compared to Drug 22. Patients who received Drug 23 (n=100) were 2.1 times less likely than patients who received Drug 22 (n = 100) to experience symptoms of Disease A, p<0.05.
Statement: Individuals who were prescribed Drug 23 experienced fewer symptoms (M = 1.3, SD = 0.7) compared to individuals who were prescribed Drug 22 (M = 5.3, SD = 1.9). This finding was statistically significant, p = 0.02.
For either statement, if the threshold had been set at 0.05, the null hypothesis (that there was no relationship) should be rejected, and we should conclude significant differences. Noticeably, as can be seen in the two statements above, some researchers will report findings with < or > and others will provide an exact p-value (0.000001) but never zero [6] . When examining research, readers should understand how p values are reported. The best practice is to report all p values for all variables within a study design, rather than only providing p values for variables with significant findings. [7] The inclusion of all p values provides evidence for study validity and limits suspicion for selective reporting/data mining.
While researchers have historically used p values, experts who find p values problematic encourage the use of confidence intervals. [8] P-values alone do not allow us to understand the size or the extent of the differences or associations. [3] In March 2016, the American Statistical Association (ASA) released a statement on p values, noting that scientific decision-making and conclusions should not be based on a fixed p-value threshold (e.g., 0.05). They recommend focusing on the significance of results in the context of study design, quality of measurements, and validity of data. Ultimately, the ASA statement noted that in isolation, a p-value does not provide strong evidence. [9]
When conceptualizing clinical work, healthcare professionals should consider p values with a concurrent appraisal of study design validity. For example, a p-value from a double-blinded randomized clinical trial (designed to minimize bias) should be weighted higher than one from a retrospective observational study. [7] The p-value debate has smoldered since the 1950s, [10] and replacement with confidence intervals has been suggested since the 1980s. [11]
Confidence Intervals
A confidence interval (CI) provides a range of values that, with a given level of confidence (e.g., 95%), contains the true value of the statistical parameter in the target population. [12] Most research uses a 95% CI, but investigators can set any level (e.g., 90% CI, 99% CI). [13] A CI provides a range with the lower and upper bound limits of a difference or association that would be plausible for a population. [14] Therefore, a 95% CI indicates that if a study were to be carried out 100 times, the range would contain the true value in 95 of them. [15] Confidence intervals provide more evidence regarding the precision of an estimate compared to p values. [6]
In consideration of the similar research example provided above, one could make the following statement with 95% CI:
Statement: Individuals who were prescribed Drug 23 had no symptoms after three days, which was significantly faster than those prescribed Drug 22; there was a mean difference in days to recovery of 4.2 days between the two groups (95% CI: 1.9 – 7.8).
It is important to note that the width of the CI is affected by the standard error and the sample size; reducing a study's sample size results in a less precise (wider) CI. [14] A larger width indicates a smaller sample size or larger variability. [16] A researcher would want to increase the precision of the CI. For example, a 95% CI of 1.43 – 1.47 is much more precise than the one provided in the example above. In research and clinical practice, CIs provide valuable information on whether the interval includes or excludes any clinically significant values. [14]
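The inverse relationship between sample size and CI width can be illustrated with a short simulation. This is a sketch added for illustration only: the mean of 4.2 echoes the example above, but the standard deviation and sample sizes are assumptions, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def ci_width(n, mu=4.2, sigma=3.0, level=0.95):
    """Draw a simulated sample of size n and return the width of the t-based CI for the mean."""
    sample = rng.normal(mu, sigma, size=n)
    sem = stats.sem(sample)  # standard error of the mean
    lo, hi = stats.t.interval(level, n - 1, loc=sample.mean(), scale=sem)
    return hi - lo

for n in (20, 100, 500):
    print(f"n={n:4d}  95% CI width ≈ {ci_width(n):.2f}")
```

The width shrinks roughly in proportion to 1/√n, mirroring the behavior of the standard error described above.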
Whether a CI contains the null value (zero for differences and 1 for ratios) is often used to judge significance, but CIs provide more information than that. [15] Consider this example: A hospital implements a new protocol that reduced wait time for patients in the emergency department by an average of 25 minutes (95% CI: -2.5 – 41 minutes). Because the range crosses zero, implementing this protocol in different populations could result in longer wait times; however, the range extends much further on the positive side. Thus, while the p-value used to detect statistical significance here may yield a "not significant" finding, individuals should examine this range, consider the study design, and weigh whether or not the protocol is still worth piloting in their workplace.
Similarly to p-values, 95% CIs cannot control for researchers' errors (e.g., study bias or improper data analysis). [14] In consideration of whether to report p-values or CIs, researchers should examine journal preferences. When in doubt, reporting both may be beneficial. [13] An example is below:
Reporting both: Individuals who were prescribed Drug 23 had no symptoms after three days, which was significantly faster than those prescribed Drug 22, p = 0.009. There was a mean difference in days to recovery of 4.2 days between the two groups (95% CI: 1.9 – 7.8).
Recall that clinical significance and statistical significance are two different concepts. Healthcare providers should remember that a study with statistically significant differences and large sample size may be of no interest to clinicians, whereas a study with smaller sample size and statistically non-significant results could impact clinical practice. [14] Additionally, as previously mentioned, a non-significant finding may reflect the study design itself rather than relationships between variables.
Healthcare providers using evidence-based medicine to inform practice should use clinical judgment to determine the practical importance of studies through careful evaluation of the design, sample size, power, likelihood of type I and type II errors, data analysis, and reporting of statistical findings (p values, 95% CI or both). [4] Interestingly, some experts have called for "statistically significant" or "not significant" to be excluded from work as statistical significance never has and will never be equivalent to clinical significance. [17]
The decision on what is clinically significant can be challenging, depending on the providers' experience and especially the severity of the disease. Providers should use their knowledge and experiences to determine the meaningfulness of study results and make inferences based not only on significant or insignificant results by researchers but through their understanding of study limitations and practical implications.
All physicians, nurses, pharmacists, and other healthcare professionals should strive to understand the concepts in this chapter. These individuals should maintain the ability to review and incorporate new literature for evidence-based and safe care.
Disclosure: Jacob Shreffler declares no relevant financial relationships with ineligible companies.
Disclosure: Martin Huecker declares no relevant financial relationships with ineligible companies.
This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.
Hypothesis testing is as old as the scientific method and is at the heart of the research process.
Research exists to validate or disprove assumptions about various phenomena. The process of validation involves testing and it is in this context that we will explore hypothesis testing.
A hypothesis is a calculated prediction or assumption about a population parameter based on limited evidence. The whole idea behind hypothesis formulation is testing—this means the researcher subjects their calculated assumption to a series of evaluations to determine whether it is true or false.
Typically, every research project starts with a hypothesis—the investigator makes a claim and experiments to prove that this claim is true or false. For instance, if you predict that students who drink milk before class perform better than those who don't, then this becomes a hypothesis that can be confirmed or refuted using an experiment.
1. Simple Hypothesis
Also known as a basic hypothesis, a simple hypothesis suggests that an independent variable is responsible for a corresponding dependent variable. In other words, an occurrence of the independent variable inevitably leads to an occurrence of the dependent variable.
Typically, simple hypotheses are assumed to be generally true, and they establish a causal relationship between two variables.
Examples of Simple Hypothesis
2. Complex Hypothesis

A complex hypothesis, also known as a modal hypothesis, accounts for the causal relationship between two or more independent variables and the resulting dependent variables. This means that a combination of independent variables leads to the occurrence of the dependent variables.
Examples of Complex Hypotheses
3. Null Hypothesis

As the name suggests, a null hypothesis is formed when a researcher suspects that there's no relationship between the variables in an observation. In this case, the purpose of the research is to confirm or refute this assumption.
Examples of Null Hypothesis
4. Alternative Hypothesis

To refute a null hypothesis, the researcher has to come up with an opposite assumption—this assumption is known as the alternative hypothesis. This means if the null hypothesis says that A is false, the alternative hypothesis assumes that A is true.
An alternative hypothesis can be directional or non-directional depending on the direction of the difference. A directional alternative hypothesis specifies the direction of the tested relationship, stating that one variable is predicted to be larger or smaller than the null value while a non-directional hypothesis only validates the existence of a difference without stating its direction.
Examples of Alternative Hypotheses
5. Logical Hypothesis

Logical hypotheses are some of the most common types of calculated assumptions in systematic investigations. A logical hypothesis is an attempt to use reasoning to connect different pieces of research and build a theory from limited evidence. In this case, the researcher uses any data available to form a plausible assumption that can be tested.
Examples of Logical Hypothesis
6. Empirical Hypothesis

After forming a logical hypothesis, the next step is to create an empirical or working hypothesis. At this stage, your logical hypothesis undergoes systematic testing to prove or disprove the assumption. An empirical hypothesis is subject to several variables that can trigger changes and lead to specific outcomes.
Examples of Empirical Testing
7. Statistical Hypothesis

When forming a statistical hypothesis, the researcher examines a portion of a population of interest and makes a calculated assumption based on the data from this sample. A statistical hypothesis is most common in systematic investigations involving a large target audience. Here, it's impossible to collect responses from every member of the population, so you have to depend on data from your sample and extrapolate the results to the wider population.
Examples of Statistical Hypothesis
Hypothesis testing is an assessment method that allows researchers to determine the plausibility of a hypothesis. It involves testing an assumption about a specific population parameter to know whether it’s true or false. These population parameters include variance, standard deviation, and median.
Typically, hypothesis testing starts with developing a null hypothesis and then performing several tests that support or reject the null hypothesis. The researcher uses test statistics to compare the association or relationship between two or more variables.
Researchers also use hypothesis testing to calculate the coefficient of variation and determine if the regression relationship and the correlation coefficient are statistically significant.
The basis of hypothesis testing is to examine and analyze the null hypothesis and alternative hypothesis to know which one is the more plausible assumption. Since both assumptions are mutually exclusive, only one can be true: if the null hypothesis holds, the alternative cannot, and vice versa.
To successfully confirm or refute an assumption, the researcher goes through five (5) stages of hypothesis testing;
As mentioned earlier, hypothesis testing starts with creating a null hypothesis, which stands as an assumption that a certain statement is false or implausible. For example, the null hypothesis (H0) could suggest that different subgroups in the research population react to a variable in the same way.
Once you know the variables for the null hypothesis, the next step is to determine the alternative hypothesis. The alternative hypothesis counters the null assumption by suggesting the statement or assertion is true. Depending on the purpose of your research, the alternative hypothesis can be one-sided or two-sided.
Using the example we established earlier, the alternative hypothesis may argue that the different sub-groups react differently to the same variable based on several internal and external factors.
Many researchers set the significance level at 5%, which is the allowance for rejecting the null hypothesis when it is actually true. This means there is a 0.05 probability of accepting the alternative hypothesis even though the null hypothesis holds.
Something to note here is that the smaller the significance level, the greater the burden of proof needed to reject the null hypothesis and support the alternative hypothesis.
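The meaning of that 5% allowance can be checked empirically: if the null hypothesis is true, roughly 5% of tests will still (wrongly) reject it. A minimal simulation sketch, using simulated data and a two-sample t-test (the group sizes and number of repetitions are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 2000

# Both groups are drawn from the SAME distribution, so the null hypothesis is true.
false_positives = 0
for _ in range(n_tests):
    a = rng.normal(0, 1, size=30)
    b = rng.normal(0, 1, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"False positive rate: {false_positives / n_tests:.3f}")  # close to alpha = 0.05
```

The observed rejection rate hovers near the chosen alpha, which is exactly the Type I error rate the significance level controls.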
Test statistics in hypothesis testing allow you to compare groups across variables, while the p-value gives the probability of obtaining sample statistics at least as extreme as yours if the null hypothesis is true. Common test statistics are based on parameters such as the mean and median.
If your p-value is 0.65, for example, it means that if the null hypothesis were true, results at least as extreme as yours would occur about 65 times in 100 by chance alone.
After conducting a series of tests, you should be able to support or refute the hypothesis based on the evidence and insights from your sample data.
Hypothesis testing isn’t only confined to numbers and calculations; it also has several real-life applications in business, manufacturing, advertising, and medicine.
In a factory or other manufacturing plants, hypothesis testing is an important part of quality and production control before the final products are approved and sent out to the consumer.
During ideation and strategy development, C-level executives use hypothesis testing to evaluate their theories and assumptions before any form of implementation. For example, they could leverage hypothesis testing to determine whether or not some new advertising campaign, marketing technique, etc. causes increased sales.
In addition, hypothesis testing is used during clinical trials to prove the efficacy of a drug or new medical method before its approval for widespread human usage.
An employer claims that her workers are of above-average intelligence. She takes a random sample of 20 of them and gets the following results:
Mean IQ Scores: 110
Standard Deviation: 15
Mean Population IQ: 100
Step 1: Using the value of the mean population IQ, we establish the null hypothesis as 100.
Step 2: State that the alternative hypothesis is greater than 100.
Step 3: State the alpha level as 0.05 or 5%
Step 4: Find the rejection region area (given by your alpha level above) from the z-table. An upper-tail area of 0.05 corresponds to a z-score of 1.645.
Step 5: Calculate the test statistics using this formula
Z = (110 − 100) ÷ (15 ÷ √20) = 10 ÷ 3.35 ≈ 2.99
If the value of the test statistics is higher than the value of the rejection region, then you should reject the null hypothesis. If it is less, then you cannot reject the null.
In this case, 2.99 > 1.645 so we reject the null.
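The worked example above can be reproduced in a few lines of Python, using SciPy for the normal distribution (the numbers are those from the example; the small rounding difference from the text's 2.99 comes from using √20 unrounded):

```python
from math import sqrt
from scipy.stats import norm

sample_mean, pop_mean = 110, 100
pop_sd, n = 15, 20
alpha = 0.05

z = (sample_mean - pop_mean) / (pop_sd / sqrt(n))  # test statistic
z_crit = norm.ppf(1 - alpha)                       # one-tailed critical value, ≈ 1.645
p_value = 1 - norm.cdf(z)                          # one-tailed p-value

print(f"z = {z:.2f}, critical value = {z_crit:.3f}, p = {p_value:.4f}")
if z > z_crit:
    print("Reject the null hypothesis")
```

Since z exceeds the critical value (and equivalently p < 0.05), we reach the same conclusion as the hand calculation: reject the null.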
The most significant benefit of hypothesis testing is that it allows you to evaluate the strength of your claim or assumption before implementing it in your data set. Also, hypothesis testing offers a rigorous, widely accepted framework for assessing whether an effect is or is not present. Other benefits include:
Several limitations of hypothesis testing can affect the quality of data you get from this process. Some of these limitations include:
Hypothesis testing allows us to make data-driven decisions by testing assertions about populations. It is the backbone behind scientific research, business analytics, financial modeling, and more.
This comprehensive guide aims to solidify your understanding with:
So let's get comfortable with making statements, gathering evidence, and letting the data speak!
Hypothesis testing is structured around making a claim in the form of competing hypotheses, gathering data, performing statistical tests, and making decisions about which hypothesis the evidence supports.
Here are some key terms about hypotheses and the testing process:
Null Hypothesis ($H_0$): The default statement about a population parameter. Generally asserts that there is no statistical significance between two data sets or that a sample parameter equals some claimed population parameter value. The statement being tested that is either rejected or supported.
Alternative Hypothesis ($H_1$): The statement that sample observations indicate statistically significant effect or difference from what the null hypothesis states. $H_1$ and $H_0$ are mutually exclusive, meaning if statistical tests support rejecting $H_0$, then you conclude $H_1$ has strong evidence.
Significance Level ($\alpha$): The probability of incorrectly rejecting a true null hypothesis, known as making a Type I error. Common significance levels are 0.10, 0.05, and 0.01, corresponding to 90%, 95%, and 99% confidence levels. The lower the significance level, the stricter the criteria for rejecting $H_0$.
Test Statistic: Summary calculations of sample data including mean, proportion, correlation coefficient, etc. Used to determine statistical significance and improbability under $H_0$.
P-value: Probability of obtaining sample results at least as extreme as the test statistic, assuming $H_0$ is true. Small p-values indicate strong statistical evidence against the null hypothesis.
Type I Error: Incorrectly rejecting a true null hypothesis
Type II Error : Failing to reject a false null hypothesis
These terms set the stage for the overall process:
1. Make Hypotheses
Define the null ($H_0$) and alternative hypothesis ($H_1$).
2. Set Significance Level
Typical confidence levels are 90%, 95%, and 99%, corresponding to significance levels of 0.10, 0.05, and 0.01. A lower significance level means a stricter burden of proof for rejecting $H_0$.
3. Collect Data
Gather sample and population data related to the hypotheses under examination.
4. Determine Test Statistic
Calculate the relevant test statistic (z-score, t-statistic, chi-squared, etc.) along with degrees of freedom, and derive the associated p-value.
5. Compare to Significance Level
If the test statistic falls in the critical region determined by the significance level (equivalently, if the p-value is below $\alpha$), reject $H_0$; otherwise, fail to reject $H_0$.
6. Draw Conclusions
Make determinations about hypotheses given the statistical evidence and context of the situation.
Now that you know the process and objectives, let’s apply this to some concrete examples.
We'll demonstrate hypothesis testing using NumPy, SciPy, Pandas, and simulated data sets. Specifically, we'll conduct and interpret:
These represent some of the most widely used methods for determining statistical significance between groups.
We'll plot the data distributions to check normality assumptions where applicable, and determine whether evidence exists to reject the null hypotheses across several scenarios.
A two-sample t-test determines whether the mean of a numerical variable differs significantly across two independent groups. The variant used here (Welch's t-test) assumes observations follow approximately normal distributions within each group, but not that the variances are equal.
Let's test for differences in reported salaries at hypothetical Company X vs Company Y:
$H_0$ : Average reported salaries are equal at Company X and Company Y
$H_1$ : Average reported salaries differ between Company X and Company Y
First we'll simulate salary samples for each company based on random normal distributions, set a 95% confidence level, run the t-test using SciPy, then interpret.
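The original code block was not preserved in this copy; below is a sketch of what such a simulation and test might look like. The salary parameters and sample sizes are assumptions, so the resulting t-statistic will not exactly match the 9.35 quoted in the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated annual salaries (parameters are assumed, for illustration only)
company_x = rng.normal(85_000, 9_000, size=150)
company_y = rng.normal(78_000, 9_500, size=150)

# Welch's t-test: does not assume equal variances between groups
t_stat, p_value = stats.ttest_ind(company_x, company_y, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")

if p_value < 0.05:
    print("Reject H0: average reported salaries differ")
```

Setting `equal_var=False` selects Welch's version of the test, consistent with the assumption noted above that variances need not be equal.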
The t-statistic of 9.35 shows the difference between group means is more than nine standard errors. The very small p-value leads us to reject the idea that salaries are equal across a randomly sampled population of employees.
Since the test returned a p-value lower than the significance level, we reject $H_0$, meaning evidence supports $H_1$ that average reported salaries differ between these hypothetical companies.
While an independent groups t-test analyzes mean differences between distinct groups, a paired t-test looks for significant effects pre vs post some treatment within the same set of subjects. This helps isolate causal impacts by removing effects from confounding individual differences.
Let's analyze Amazon purchase data to determine if spending increases during the holiday months of November and December.
$H_0$ : Average monthly spending is equal pre-holiday and during the holiday season
$H_1$ : Average monthly spending increases during the holiday season
We'll import transaction data using Pandas, add seasonal categories, then run and interpret the paired t-test.
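The transaction data and code are not included in this copy, so here is a sketch using simulated per-customer average monthly spending (all parameters are assumptions) with `scipy.stats.ttest_rel`:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
n_customers = 200

# Simulated average monthly spend per customer (assumed parameters)
pre_holiday = rng.normal(120, 30, size=n_customers)            # Jan-Oct
holiday = pre_holiday + rng.normal(25, 15, size=n_customers)   # Nov-Dec: same customers, higher spend

df = pd.DataFrame({"pre_holiday": pre_holiday, "holiday": holiday})

# Paired t-test: the same subjects are measured in both periods
t_stat, p_value = stats.ttest_rel(df["holiday"], df["pre_holiday"])
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

Pairing each customer's holiday spend with their own pre-holiday spend removes between-customer variation, which is the design advantage described above.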
Since the p-value is below the 0.05 significance level, we reject $H_0$. The output shows statistically significant evidence at 95% confidence that average spending increases during November-December relative to January-October.
Visualizing the monthly trend helps confirm the spike during the holiday months.
A single sample z-test allows testing whether a sample mean differs significantly from a population mean. It requires knowing the population standard deviation.
Let's test if recently surveyed shoppers differ significantly in their reported ages from the overall customer base:
$H_0$ : Sample mean age equals population mean age of 39
$H_1$ : Sample mean age does not equal population mean of 39
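A sketch of the calculation, using assumed sample summary statistics and an assumed known population standard deviation (a z-test requires the population standard deviation to be known):

```python
from math import sqrt
from scipy.stats import norm

pop_mean, pop_sd = 39, 12     # population parameters (sd of 12 is an assumption)
sample_mean, n = 42.1, 100    # summary statistics of the surveyed shoppers (assumed)

z = (sample_mean - pop_mean) / (pop_sd / sqrt(n))  # one-sample z statistic
p_value = 2 * (1 - norm.cdf(abs(z)))               # two-tailed p-value
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

With these assumed numbers, |z| exceeds 2 and p falls below 0.05, matching the interpretation that follows.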
Here the absolute z-score over 2 and the p-value under 0.05 indicate statistically significant evidence that recently surveyed shopper ages differ from the overall population parameter.
Chi-squared tests help determine independence between categorical variables. The test statistic measures deviations between observed and expected outcome frequencies across groups to gauge the magnitude of any relationship.
Let's test if credit card application approvals are independent across income groups using simulated data:
$H_0$ : Credit card approvals are independent of income level
$H_1$ : Credit approvals and income level are related
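The simulated table is not preserved in this copy; here is a sketch with an assumed contingency table whose approval proportions are nearly equal across income groups, run through `scipy.stats.chi2_contingency`:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Simulated contingency table: rows = income level, columns = (approved, denied).
# Counts are assumed for illustration; approval rates are ~30% in every row.
observed = np.array([
    [30,  70],   # low income
    [62, 138],   # middle income
    [45, 105],   # high income
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```

Because observed counts sit close to the expected counts under independence, the chi-squared statistic is small and the p-value is large, so we fail to reject the null.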
Since the p-value is greater than the 0.05 significance level, we fail to reject $H_0$. There is not sufficient statistical evidence to conclude that credit card approval rates differ by income categories.
Analysis of variance (ANOVA) hypothesis tests determine if mean differences exist across more than two groups. ANOVA expands upon t-tests for multiple group comparisons.
Let's test if average debt obligations vary depending on the highest education level attained.
$H_0$ : Average debt obligations are equal across education levels
$H_1$ : Average debt obligations differ based on education level
We'll simulate ordered education and debt data for visualization via box plots and then run ANOVA.
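The simulation and ANOVA code were not preserved here; below is a sketch with assumed debt parameters (the F-statistic will differ from the 91.59 quoted in the text), using `scipy.stats.f_oneway`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated student/consumer debt in $1,000s by highest education level (assumed parameters)
high_school = rng.normal(15, 8, size=100)
bachelors   = rng.normal(30, 12, size=100)
masters     = rng.normal(45, 15, size=100)
doctorate   = rng.normal(60, 20, size=100)

# One-way ANOVA across the four groups
f_stat, p_value = stats.f_oneway(high_school, bachelors, masters, doctorate)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}")
```

A large F-statistic with a tiny p-value indicates that between-group variation in mean debt far exceeds within-group variation, leading to rejection of $H_0$.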
The ANOVA output shows an F-statistic of 91.59 that along with a tiny p-value leads to rejecting $H_0$. We conclude there are statistically significant differences in average debt obligations based on highest degree attained.
The box plots visualize how these distributions and means vary across the four education attainment groups.
Hypothesis testing forms the backbone of data-driven decision making across science, research, business, public policy and more by allowing practitioners to draw statistically-validated conclusions.
Here is a sample of hypotheses commonly tested:
Pharmaceuticals
Politics & Social Sciences
This represents just a sample of the wide-ranging real-world applications. Properly formulated hypotheses, sound statistical testing methodology, reproducible analysis, and unbiased interpretation help ensure valid, reliable findings.
However, hypothesis testing does still come with some limitations worth addressing.
While hypothesis testing empowers huge breakthroughs across disciplines, the methodology does come with some inherent restrictions:
Over-reliance on p-values
P-values help benchmark statistical significance but should not be over-interpreted. A large p-value does not prove the null hypothesis is true for the entire population, and a small p-value does not directly prove causality, as confounding factors always exist.
Significance also does not indicate practical real-world effect size. Statistical power calculations should inform necessary sample sizes to detect desired effects.
Errors from Multiple Tests
Running many hypothesis tests produces some false positives by chance alone. Analysts should account for this by adjusting significance levels, pre-registering testing plans, replicating findings, and relying more on meta-analyses.
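As a concrete sketch, the Bonferroni adjustment (one common correction; the p-values here are invented for illustration) simply divides the significance level by the number of tests:

```python
import numpy as np

# Invented p-values from five hypothetical tests.
p_values = np.array([0.011, 0.02, 0.03, 0.04, 0.06])
alpha = 0.05

# Naive: compare each p-value to alpha directly.
naive_rejections = p_values < alpha  # four rejections

# Bonferroni correction: divide alpha by the number of tests,
# controlling the chance of any false positive across the family.
bonferroni_rejections = p_values < alpha / len(p_values)  # none survive

print(f"Naive rejections: {naive_rejections.sum()}")
print(f"Bonferroni rejections: {bonferroni_rejections.sum()}")
```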
Poor Experimental Design
Bad data, biased samples, unspecified variables, and lack of controls can completely undermine results. Findings can only be reasonably extended to populations reflected by the test samples.
Garbage in, garbage out definitely applies to statistical analysis!
Assumption Violations
Most common statistical tests make assumptions about normality, homogeneity of variance, sample independence, and underlying variable relationships. Violating these premises undermines the reliability of the results.
Transformations, bootstrapping, or non-parametric methods can help navigate issues for sound methodology.
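For instance, a percentile bootstrap (sketched below on synthetic skewed data; the exponential sample is an assumption for illustration) estimates a confidence interval for the mean without assuming normality:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic right-skewed sample where a normality assumption is doubtful.
sample = rng.exponential(scale=2.0, size=100)

# Resample with replacement many times, recording the statistic each time.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5_000)
])

# Percentile-based 95% confidence interval for the mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```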
Lack of Reproducibility
The replication crisis impacting scientific research highlights issues around lack of reproducibility, especially involving human participants and high complexity systems. Randomized controlled experiments with strong statistical power provide much more reliable evidence.
While hypothesis testing methodology is rigorously developed, applying concepts correctly proves challenging even among academics and experts!
We've covered core concepts, Python implementations, real-world use cases, and inherent limitations around hypothesis testing. What should you master next?
Parametric vs Non-parametric
Learn assumptions and application differences between parametric statistics like z-tests and t-tests that assume normal distributions versus non-parametric analogs like Wilcoxon signed-rank tests and Mann-Whitney U tests.
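A quick sketch contrasting the two families on synthetic skewed data (the exponential samples are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Skewed, non-normal samples (e.g. response times), purely illustrative.
group_a = rng.exponential(scale=1.0, size=40)
group_b = rng.exponential(scale=1.5, size=40)

# Parametric: the independent t-test leans on normality of the sampling distribution.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric analog: Mann-Whitney U compares ranks, no normality assumed.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test p = {t_p:.4f}, Mann-Whitney p = {u_p:.4f}")
```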
Effect Size and Power
Look beyond p-values to determine practical effect magnitude using indexes like Cohen's d, and ensure appropriate sample sizes to detect effects using prospective power analysis.
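A small sketch of Cohen's d for two independent samples; the data are made up, and the pooled-standard-deviation formula is the standard definition:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent samples using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Illustrative measurements from two groups.
x = np.array([5.1, 5.5, 4.9, 5.3, 5.0, 5.4])
y = np.array([4.6, 4.8, 4.5, 4.9, 4.7, 4.4])

print(f"Cohen's d: {cohens_d(x, y):.2f}")  # a large effect by Cohen's benchmarks
```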
Alternatives to NHST
Evaluate Bayesian inference models and likelihood ratios that move beyond binary reject/fail-to-reject null hypothesis outcomes toward more integrated evidence.
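As a taste of the Bayesian alternative, here is a minimal conjugate-prior sketch for a proportion (the counts and the uniform prior are assumptions for illustration):

```python
from scipy import stats

# Illustrative data: 60 successes in 100 trials; is the true rate above 0.5?
successes, trials = 60, 100

# With a uniform Beta(1, 1) prior, the posterior is Beta(1 + successes, 1 + failures).
posterior = stats.beta(1 + successes, 1 + trials - successes)

# Rather than a binary reject/fail-to-reject, report graded posterior evidence.
prob_above_half = 1 - posterior.cdf(0.5)
print(f"P(rate > 0.5 | data) = {prob_above_half:.3f}")
```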
Tiered Testing Framework
Construct reusable classes encapsulating data processing, visualizations, assumption checking, and statistical tests for maintainable analysis code.
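One possible shape for such a framework, sketched with hypothetical class and method names, bundling an assumption check with a parametric/non-parametric fallback:

```python
from dataclasses import dataclass

import numpy as np
from scipy import stats

@dataclass
class TwoSampleTest:
    """Illustrative wrapper pairing a normality screen with a two-sample test."""
    alpha: float = 0.05

    def looks_normal(self, x):
        # Shapiro-Wilk as a simple normality screen.
        return stats.shapiro(x).pvalue > self.alpha

    def run(self, x, y):
        if self.looks_normal(x) and self.looks_normal(y):
            stat, p = stats.ttest_ind(x, y)        # parametric path
            method = "t-test"
        else:
            stat, p = stats.mannwhitneyu(x, y)     # non-parametric fallback
            method = "Mann-Whitney U"
        return method, stat, p, p < self.alpha

rng = np.random.default_rng(1)
method, stat, p, reject = TwoSampleTest().run(
    rng.normal(0.0, 1.0, 30), rng.normal(1.0, 1.0, 30))
print(method, reject)
```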
Big Data Integration
Connect statistical analysis to big data pipelines pulling from databases, data lakes and APIs at scale. Productionize analytics.
I hope this end-to-end look at hypothesis testing methodology, Python programming demonstrations, real-world grounding, inherent restrictions, and next-level considerations provides a launchpad for practically applying core statistics!
Dr. Alex Mitchell is a dedicated coding instructor with a deep passion for teaching and a wealth of experience in computer science education. As a university professor, Dr. Mitchell has played a pivotal role in shaping the coding skills of countless students, helping them navigate the intricate world of programming languages and software development.
Beyond the classroom, Dr. Mitchell is an active contributor to the freeCodeCamp community, where he regularly shares his expertise through tutorials, code examples, and practical insights. His teaching repertoire includes a wide range of languages and frameworks, such as Python, JavaScript, Next.js, and React, which he presents in an accessible and engaging manner.
Dr. Mitchell’s approach to teaching blends academic rigor with real-world applications, ensuring that his students not only understand the theory but also how to apply it effectively. His commitment to education and his ability to simplify complex topics have made him a respected figure in both the university and online learning communities.
Hypothesis testing in statistics refers to analyzing an assumption about a population parameter. It is used to make an educated guess about an assumption using statistics. Using sample data, hypothesis testing assesses how plausible that assumption is for the entire population from which the sample is drawn.
Any hypothetical statement we make may or may not be valid, and it is then our responsibility to provide evidence for its possibility. To approach any hypothesis, we follow these four simple steps that test its validity.
First, we formulate two hypothetical statements such that only one of them is true. By doing so, we can check the validity of our own hypothesis.
The next step is to formulate the statistical analysis to be followed based upon the data points.
Then we analyze the given data using our methodology.
The final step is to analyze the result and judge whether to reject or fail to reject the null hypothesis.
It is observed that the average recovery time for a knee-surgery patient is 8 weeks. A physician believes that after successful knee surgery, if the patient attends physical therapy twice a week rather than thrice a week, the recovery period will be longer. Conduct a hypothesis test for this statement.
David is a ten-year-old who finishes a 25-yard freestyle in a mean time of 16.43 seconds. David's father bought goggles for his son, believing they would help him reduce his time. He then recorded a total of fifteen 25-yard freestyle swims for David, and the average time came out to be 16 seconds. Conduct a hypothesis test.
A tire company claims their A-segment of tires have a running life of 50,000 miles before they need to be replaced, and previous studies show a standard deviation of 8,000 miles. After surveying a total of 28 tires, the mean running life came out to be 46,500 miles with a standard deviation of 9,800 miles. Is the claim made by the tire company consistent with the given data? Conduct hypothesis testing.
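One possible sketch of a solution to the tire problem, treating the 8,000-mile figure from previous studies as the known population standard deviation (a z-test; a t-test using the sample standard deviation would be an equally defensible reading):

```python
import math
from scipy import stats

# Values taken from the problem statement.
mu_0 = 50_000    # claimed mean running life (null hypothesis)
sigma = 8_000    # population standard deviation from previous studies
x_bar = 46_500   # sample mean
n = 28           # sample size

z = (x_bar - mu_0) / (sigma / math.sqrt(n))
p_value = 2 * stats.norm.cdf(-abs(z))  # two-tailed p-value

print(f"z = {z:.2f}, p = {p_value:.4f}")
# z is roughly -2.31 with p below 0.05, so under this reading the sample
# is inconsistent with the company's claim at the 5% level.
```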
All of the hypothesis testing examples are from real-life situations, which leads us to believe that hypothesis testing is a very practical topic indeed. It is an integral part of a researcher's study and is used in every research methodology in one way or another.
Inferential statistics deals largely with hypothesis testing. The research hypothesis states that there is a relationship between the independent variable and the dependent variable, whereas the null hypothesis rejects any claim of a relationship between the two. Our job as researchers or students is to check whether there is any relation between the two.
Now that we are clear about what hypothesis testing is, let's look at the use of hypothesis testing in research methodology. Hypothesis testing is at the centre of research projects.
Often, after formulating research statements, the validity of those statements needs to be verified. Hypothesis testing offers the researcher a statistical approach to the theoretical assumptions he or she made. It can be understood as quantitative results for a qualitative problem.
Hypothesis testing provides various techniques to test the hypothesis statement depending upon the variable and the data points. It finds its use in almost every field of research while answering statements such as whether this new medicine will work, a new testing method is appropriate, or if the outcomes of a random experiment are probable or not.
To find the validity of any statement, we have to strictly follow the stepwise procedure of hypothesis testing. After stating the initial hypothesis, we have to re-write them in the form of a null and alternate hypothesis. The alternate hypothesis predicts a relationship between the variables, whereas the null hypothesis predicts no relationship between the variables.
After writing them as H₀ (null hypothesis) and Hₐ (alternate hypothesis), only one of the statements can be true. For example, taking the hypothesis that, on average, men are taller than women, we write the statements as:
H₀: On average, men are not taller than women.
Hₐ: On average, men are taller than women.
Our next aim is to collect sample data, what we call sampling, in such a way that we can test our hypothesis. Your data should come from the population about which you want to make the hypothesis.
What is the p-value in hypothesis testing? The p-value gives us the probability of obtaining results at least as extreme as the observed results.
You will obtain your p-value after choosing the hypothesis testing method, which will be the guiding factor in rejecting the hypothesis. Usually, the p-value cutoff for rejecting the null hypothesis is 0.05. So anything below that, you will reject the null hypothesis.
A low p-value means that the between-group variance is large relative to the within-group variance, with little overlap between groups, making it unlikely the difference arose by chance. A high p-value suggests high within-group variance and low between-group variance, so any difference in the measure is likely due to chance alone.
When forming conclusions through research, two sorts of errors are common. During a statistical survey or research study, a hypothesis must be set and defined; this is called a statistical hypothesis. It is, in fact, an assumption about a population parameter. However, this assumption is not always proven correct. Hypothesis testing refers to the predetermined formal procedures used by statisticians to determine whether hypotheses should be accepted or rejected. The process of selecting hypotheses for a given probability distribution based on observable data is known as hypothesis testing, and it is a fundamental and crucial topic in statistics.
The quick answer is that you must, as a scientist; it is part of the scientific process. Science employs a variety of methods to test or reject theories, ensuring that any new hypothesis is free of errors. One safeguard to ensure your research is not flawed is to include both a null and an alternate hypothesis. The scientific community considers it poor practice not to incorporate the null hypothesis in your research. You are almost certainly setting yourself up for failure if you set out to prove another theory without first examining the null; at the very least, your experiment will not be taken seriously.
There are several types of hypothesis testing, and they are used based on the data provided. Depending on the sample size and the data given, we choose among different hypothesis testing methodologies. Here starts the use of hypothesis testing tools in research methodology.
Normality- This type of testing assumes a normal distribution in the population sample. If the data points are grouped around the mean, values above and below the mean are equally likely. The distribution's shape resembles a bell curve that is symmetric on either side of the mean.
T-test- This test is used when the sample size in a normally distributed population is comparatively small, and the standard deviation is unknown. Usually, if the sample size drops below 30, we use a T-test to find the confidence intervals of the population.
Chi-Square Test- The Chi-Square test is used to test the population variance against the known or assumed value of the population variance. It is also a better choice to test the goodness of fit of a distribution of data. The two most common Chi-Square tests are the Chi-Square test of independence and the chi-square test of variance.
ANOVA- Analysis of Variance or ANOVA compares the data sets of two different populations or samples. It is similar in its use to the t-test or the Z-test, but it allows us to compare more than two sample means. ANOVA allows us to test the significance between an independent variable and a dependent variable, namely X and Y, respectively.
Z-test- It is a statistical measure to test whether the means of two population samples are different when their variance is known. For a Z-test, the population is assumed to be normally distributed. A z-test is better suited to large sample sizes, greater than 30. This is due to the central limit theorem: as the sample size increases, the sample means are approximately normally distributed.
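A brief sketch of the n < 30 rule of thumb with scipy's one-sample t-test; the measurements and the hypothesized mean of 50 are invented for illustration:

```python
import numpy as np
from scipy import stats

# Small sample (n = 12 < 30) with unknown population sigma: t-test territory.
# Illustrative measurements; we test H0: population mean = 50.
sample = np.array([52.1, 48.3, 51.5, 49.9, 53.2, 50.8,
                   47.6, 52.4, 51.1, 49.5, 50.9, 52.7])

t_stat, p_value = stats.ttest_1samp(sample, popmean=50)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```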
1. Mention the types of hypothesis Tests.
There are two types of hypotheses in a hypothesis test:
Null Hypothesis: It is denoted as H₀.
Alternative Hypothesis: It is denoted as H₁ or Hₐ.
2. What are the two errors that can be found while performing the null Hypothesis test?
While performing the null hypothesis test, two types of errors can occur:
Type-1: The type-1 error, denoted by α, is also known as the significance level. It is the rejection of a true null hypothesis, an error of commission.
Type-2: The type-2 error is denoted by β, and (1 − β) is known as the power of the test. It occurs when a false null hypothesis is not rejected, an error of omission.
3. What is the p-value in hypothesis testing?
During hypothesis testing in statistics, the p-value indicates the probability of obtaining a result at least as extreme as the observed results. A smaller p-value provides stronger evidence against the null hypothesis. The p-value serves as a rejection point, giving the smallest level of significance at which the null hypothesis is rejected. Often the p-value is calculated using p-value tables by computing the deviation between the observed value and a chosen reference value.
It may also be calculated mathematically by integrating the area under the curve that lies at least as far from the reference value as the observed value, relative to the total area of the curve. The p-value quantifies the evidence to reject the null hypothesis in hypothesis testing.
4. What is a null hypothesis?
The null hypothesis in statistics says that there is no meaningful difference within the population. It serves as a conjecture proposing no difference, whereas the alternate hypothesis says there is a difference. When we perform hypothesis testing, we state the null and alternative hypotheses such that only one of them can be true.
By determining the p-value, we calculate whether the null hypothesis is to be rejected or not. If the difference between groups is low, it is merely by chance, and the null hypothesis, which states that there is no difference among groups, is true. Therefore, we have no evidence to reject the null hypothesis.
Hypothesis testing involves formulating assumptions about population parameters based on sample statistics and rigorously evaluating these assumptions against empirical evidence. This article sheds light on the significance of hypothesis testing and the critical steps involved in the process.
A hypothesis is an assumption or idea, specifically a statistical claim about an unknown population parameter. For example, a judge assumes a person is innocent and verifies this by reviewing evidence and hearing testimony before reaching a verdict.
Hypothesis testing is a statistical method that is used to make a statistical decision using experimental data. Hypothesis testing is basically an assumption that we make about a population parameter. It evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data.
To test the validity of the claim or assumption about the population parameter:
Example: You say the average height in the class is 30, or that a boy is taller than a girl. These are assumptions, and we need a statistical way to prove or disprove them; we need a mathematical conclusion that whatever we are assuming is true.
Hypothesis testing is an important procedure in statistics. Hypothesis testing evaluates two mutually exclusive population statements to determine which statement is most supported by the sample data. When we say that findings are statistically significant, it is thanks to hypothesis testing.
One tailed test focuses on one direction, either greater than or less than a specified value. We use a one-tailed test when there is a clear directional expectation based on prior knowledge or theory. The critical region is located on only one side of the distribution curve. If the sample falls into this critical region, the null hypothesis is rejected in favor of the alternative hypothesis.
There are two types of one-tailed test:
A two-tailed test considers both directions, greater than and less than a specified value. We use a two-tailed test when there is no specific directional expectation and we want to detect any significant difference.
Example: $H_0: \mu = 50$ and $H_1: \mu \neq 50$
In hypothesis testing, Type I and Type II errors are two possible errors that researchers can make when drawing conclusions about a population based on a sample of data. These errors are associated with the decisions made regarding the null hypothesis and the alternative hypothesis.
| Decision | Null Hypothesis is True | Null Hypothesis is False |
|---|---|---|
| Accept Null Hypothesis | Correct Decision | Type II Error (False Negative) |
| Reject Null Hypothesis | Type I Error (False Positive) | Correct Decision |
Step 1: Define the null and alternative hypotheses.
State the null hypothesis ($H_0$), representing no effect, and the alternative hypothesis ($H_1$), suggesting an effect or difference.
We first identify the problem about which we want to make an assumption, keeping in mind that the two hypotheses should contradict one another, and assume normally distributed data.
Select a significance level ($\alpha$), typically 0.05, as the threshold for rejecting the null hypothesis. It lends validity to the hypothesis test by ensuring we demand sufficient evidence before making a claim. The significance level is fixed before running the test, and the resulting p-value is then compared against it.
Gather relevant data through observation or experimentation. Analyze the data using appropriate statistical methods to obtain a test statistic.
In this step, the data are evaluated and we compute a score based on the characteristics of the data. The choice of test statistic depends on the type of hypothesis test being conducted.
There are various hypothesis tests, each appropriate for a different goal. The test could be a Z-test, Chi-square test, T-test, and so on.
Since we have a smaller dataset, a t-test is more appropriate for testing our hypothesis.
T-statistic is a measure of the difference between the means of two groups relative to the variability within each group. It is calculated as the difference between the sample means divided by the standard error of the difference. It is also known as the t-value or t-score.
In this stage, we decide whether we should accept or reject the null hypothesis. There are two ways to make this decision.
Comparing the test statistic with the tabulated critical value:
Note: Critical values are predetermined threshold values used to make a decision in hypothesis testing. To determine critical values, we typically refer to a statistical distribution table, such as the normal distribution or t-distribution table, based on the test being used.
We can also come to a conclusion using the p-value.
Note: The p-value is the probability of obtaining a test statistic as extreme as, or more extreme than, the one observed in the sample, assuming the null hypothesis is true. To determine the p-value, we typically refer to a statistical distribution table, such as the normal distribution or t-distribution table, based on the test being used.
Finally, we conclude the experiment using either method: the critical value comparison or the p-value.
To validate our hypothesis about a population parameter, we use statistical functions. We use the z-score, p-value, and level of significance (alpha) to build evidence for our hypothesis when the data are normally distributed.
When the population mean and standard deviation are known:

$z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}}$
The t-test is used when n < 30; the t-statistic is given by:

$t = \frac{\bar{x} - \mu}{s/\sqrt{n}}$
The Chi-Square test for independence is used for categorical (non-normally distributed) data:

$\chi^2 = \sum \frac{(O_{ij} - E_{ij})^2}{E_{ij}}$
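The chi-square formula can be applied directly and cross-checked against scipy; the 2x2 table below is made-up illustrative data:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table of observed counts.
observed = np.array([[20, 30],
                     [25, 25]])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
grand_total = observed.sum()

# Expected counts under independence: E_ij = (row total_i * column total_j) / N
expected = row_totals @ col_totals / grand_total

# Direct application of chi^2 = sum over i,j of (O_ij - E_ij)^2 / E_ij
chi2_manual = ((observed - expected) ** 2 / expected).sum()

# Cross-check with scipy (correction=False matches the raw formula on 2x2 tables)
chi2_scipy, p_value, dof, _ = stats.chi2_contingency(observed, correction=False)
print(f"manual chi2 = {chi2_manual:.4f}, scipy chi2 = {chi2_scipy:.4f}")
```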
Let's examine hypothesis testing in two real-life situations.
Imagine a pharmaceutical company has developed a new drug that they believe can effectively lower blood pressure in patients with hypertension. Before bringing the drug to market, they need to conduct a study to assess its impact on blood pressure.
Let's set the significance level at 0.05, meaning we will reject the null hypothesis if the evidence suggests less than a 5% chance of observing the results due to random variation alone.
Using a paired t-test, analyze the data to obtain a test statistic and a p-value.
The test statistic (e.g., T-statistic) is calculated based on the differences between blood pressure measurements before and after treatment.
t = m / (s / √n)
Here, m = −3.9, s ≈ 1.37, and n = 10.
Plugging these into the paired t-test formula gives a t-statistic of −9.
With a t-statistic of −9 and degrees of freedom df = 9, we can find the p-value using statistical software or a t-distribution table:
p-value = 8.538051223166285e-06
Step 5: Result
Conclusion: Since the p-value (8.538051223166285e-06) is less than the significance level (0.05), the researchers reject the null hypothesis. There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.
Let's implement hypothesis testing in Python, testing whether a new drug affects blood pressure. For this example, we will use a paired t-test from the scipy.stats library.
SciPy is a scientific computing library in Python that is widely used for mathematical and statistical computations.
We will implement our first real-life problem in Python:
import numpy as np
from scipy import stats

# Data
before_treatment = np.array([120, 122, 118, 130, 125, 128, 115, 121, 123, 119])
after_treatment = np.array([115, 120, 112, 128, 122, 125, 110, 117, 119, 114])

# Step 1: Null and Alternate Hypotheses
# Null Hypothesis: The new drug has no effect on blood pressure.
# Alternate Hypothesis: The new drug has an effect on blood pressure.
null_hypothesis = "The new drug has no effect on blood pressure."
alternate_hypothesis = "The new drug has an effect on blood pressure."

# Step 2: Significance Level
alpha = 0.05

# Step 3: Paired T-test
t_statistic, p_value = stats.ttest_rel(after_treatment, before_treatment)

# Step 4: Calculate T-statistic manually
m = np.mean(after_treatment - before_treatment)
s = np.std(after_treatment - before_treatment, ddof=1)  # ddof=1 for sample standard deviation
n = len(before_treatment)
t_statistic_manual = m / (s / np.sqrt(n))

# Step 5: Decision
if p_value <= alpha:
    decision = "Reject"
else:
    decision = "Fail to reject"

# Conclusion
if decision == "Reject":
    conclusion = ("There is statistically significant evidence that the average blood "
                  "pressure before and after treatment with the new drug is different.")
else:
    conclusion = ("There is insufficient evidence to claim a significant difference in "
                  "average blood pressure before and after treatment with the new drug.")

# Display results
print("T-statistic (from scipy):", t_statistic)
print("P-value (from scipy):", p_value)
print("T-statistic (calculated manually):", t_statistic_manual)
print(f"Decision: {decision} the null hypothesis at alpha={alpha}.")
print("Conclusion:", conclusion)
T-statistic (from scipy): -9.0
P-value (from scipy): 8.538051223166285e-06
T-statistic (calculated manually): -9.0
Decision: Reject the null hypothesis at alpha=0.05.
Conclusion: There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.
In the above example, given the T-statistic of approximately -9 and an extremely small p-value, the results indicate a strong case to reject the null hypothesis at a significance level of 0.05.
Data: A sample of 25 individuals is taken, and their cholesterol levels are measured.
Cholesterol Levels (mg/dL): 205, 198, 210, 190, 215, 205, 200, 192, 198, 205, 198, 202, 208, 200, 205, 198, 205, 210, 192, 205, 198, 205, 210, 192, 205.
Population Mean (μ): 200 mg/dL
Population Standard Deviation (σ): 5 mg/dL (given for this problem)
As the direction of deviation is not given, we assume a two-tailed test. Based on a normal distribution table, the critical values for a significance level of 0.05 (two-tailed) are approximately −1.96 and 1.96.
The sample mean works out to 202.04 mg/dL, so the test statistic is $Z = \frac{202.04 - 200}{5/\sqrt{25}} = 2.04$.
Step 4: Result
Since the absolute value of the test statistic (2.04) is greater than the critical value (1.96), we reject the null hypothesis and conclude that there is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.
import math

import numpy as np
import scipy.stats as stats

# Given data
sample_data = np.array([
    205, 198, 210, 190, 215, 205, 200, 192, 198, 205,
    198, 202, 208, 200, 205, 198, 205, 210, 192, 205,
    198, 205, 210, 192, 205
])
population_std_dev = 5
population_mean = 200
sample_size = len(sample_data)

# Step 1: Define the Hypotheses
# Null Hypothesis (H0): The average cholesterol level in a population is 200 mg/dL.
# Alternate Hypothesis (H1): The average cholesterol level in a population is different from 200 mg/dL.

# Step 2: Define the Significance Level
alpha = 0.05  # Two-tailed test

# Critical values for a significance level of 0.05 (two-tailed)
critical_value_left = stats.norm.ppf(alpha / 2)
critical_value_right = -critical_value_left

# Step 3: Compute the test statistic
sample_mean = sample_data.mean()
z_score = (sample_mean - population_mean) / (population_std_dev / math.sqrt(sample_size))

# Step 4: Result
# Check if the absolute value of the test statistic exceeds the critical values
if abs(z_score) > max(abs(critical_value_left), abs(critical_value_right)):
    print("Reject the null hypothesis.")
    print("There is statistically significant evidence that the average cholesterol "
          "level in the population is different from 200 mg/dL.")
else:
    print("Fail to reject the null hypothesis.")
    print("There is not enough evidence to conclude that the average cholesterol "
          "level in the population is different from 200 mg/dL.")
Reject the null hypothesis. There is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.
Hypothesis testing stands as a cornerstone in statistical analysis, enabling data scientists to navigate uncertainties and draw credible inferences from sample data. By systematically defining null and alternative hypotheses, choosing significance levels, and leveraging statistical tests, researchers can assess the validity of their assumptions. The article also elucidates the critical distinction between Type I and Type II errors, providing a comprehensive understanding of the nuanced decision-making process inherent in hypothesis testing. The real-life example of testing a new drug’s effect on blood pressure using a paired T-test showcases the practical application of these principles, underscoring the importance of statistical rigor in data-driven decision-making.
1. What are the 3 types of hypothesis tests?
There are three types of hypothesis tests: right-tailed, left-tailed, and two-tailed. Right-tailed tests assess if a parameter is greater, left-tailed if lesser. Two-tailed tests check for non-directional differences, greater or lesser.
Null Hypothesis ($H_0$): No effect or difference exists.
Alternative Hypothesis ($H_1$): An effect or difference exists.
Significance Level ($\alpha$): Risk of rejecting the null hypothesis when it is true (Type I error).
Test Statistic: Numerical value representing the observed evidence against the null hypothesis.
Hypothesis testing in machine learning is a statistical method to evaluate the performance and validity of models. It tests specific hypotheses about model behavior, such as whether features influence predictions or whether a model generalizes well to unseen data.
Pytest is a general testing framework for Python code, while Hypothesis is a property-based testing framework for Python that focuses on generating test cases from specified properties of the code.
In this paper, we study the problem of determining $k$ anomalous random variables that have different probability distributions from the rest of the $(n-k)$ random variables. Instead of sampling each individual random variable separately as in conventional hypothesis testing, we propose to perform hypothesis testing using mixed observations that are functions of multiple random variables. We characterize the error exponents for correctly identifying the $k$ anomalous random variables under fixed time-invariant mixed observations, random time-varying mixed observations, and deterministic time-varying mixed observations. For our error exponent characterization, we introduce the notions of inner conditional Chernoff information and outer conditional Chernoff information. We demonstrated that mixed observations can strictly improve the error exponents of hypothesis testing over separate observations of individual random variables. We further characterize the optimal sensing vector maximizing the error exponents, which leads to explicit constructions of the optimal mixed observations in special cases of hypothesis testing for Gaussian random variables. These results show that mixed observations of random variables can reduce the number of required samples in hypothesis testing applications. In order to solve large-scale hypothesis testing problems, we also propose efficient algorithms: LASSO-based and message-passing-based hypothesis testing algorithms.
I. Introduction
In many areas of science and engineering such as network tomography, cognitive radio, radar, and the Internet of Things (IoT), one needs to infer statistical information about signals of interest [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Statistical information of interest can be the means, the variances, or even the distributions of certain random variables. Obtaining such statistical information is essential in detecting anomalous behaviors of random signals. In particular, inferring the distributions of random variables has many important applications, including quickest detection of potential hazards, detecting changes in the statistical behavior of random variables [12, 13, 14, 4, 15, 10], and detecting congested links with abnormal delay statistics in network tomography [9, 16, 17].
In this paper, we consider a multiple hypothesis testing problem with few compressed measurements, which has applications in anomaly detection. In particular, we consider $n$ random variables, denoted by $X_i$, $i \in \mathcal{S} = \{1, 2, \ldots, n\}$, out of which $k$ ($k \ll n$) random variables follow a probability distribution $f_2(\cdot)$, while the much larger set of remaining $(n-k)$ random variables follow another probability distribution $f_1(\cdot)$. However, it is unknown which $k$ random variables follow the distribution $f_2(\cdot)$. Our goal in this paper is to infer the subset of random variables that follow $f_2(\cdot)$. In our problem setup, this is equivalent to determining, for each $i$, whether $X_i$ follows the probability distribution $f_1(\cdot)$ or $f_2(\cdot)$. The anomaly detection model considered in this paper has appeared in various applications such as cognitive radio [10, 18, 19], quickest detection and search [20, 4, 21, 22, 23, 24, 25], and communication systems [26, 27, 28].
To infer the probability distributions of the $n$ random variables, one conventional method is to obtain $l$ separate samples of each random variable $X_i$ and then use hypothesis testing techniques to determine, for each $i$, whether $X_i$ follows the probability distribution $f_1(\cdot)$ or $f_2(\cdot)$. To correctly identify the $k$ anomalous random variables with high probability, at least $\Theta(n)$ samples are required when each sample involves only an individual random variable. However, when the number of random variables $n$ grows large, the requirements on sampling rates and sensing resources can easily become a burden in anomaly detection. For example, in a sensor network, if the fusion center aims to track anomalies in the data generated by $n$ chemical sensors, sending all the data samples of individual sensors to the fusion center is energy-consuming and inefficient in an energy-limited sensor network. In this scenario, reducing the number of samples needed to infer the probability distributions of the $n$ random variables is desirable in order to lessen the communication burden. Additionally, in some applications introduced in [5, 8, 9] for inferring link delays in networks, physical constraints sometimes prevent us from directly obtaining separate samples of individual random variables. These difficulties raise the question of whether we can perform hypothesis testing from a much smaller number of samples involving mixed observations.
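As a concrete illustration of this separate-sampling baseline, the sketch below runs an independent log-likelihood-ratio test on each variable's own $l$ samples, so the total sample count grows linearly in $n$. The distributions and problem sizes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def separate_sampling_test(samples, f1_logpdf, f2_logpdf):
    # samples: (n, l) array, one row of l independent samples per variable.
    # Declare variable i anomalous when its log-likelihood ratio favors f2.
    llr = f2_logpdf(samples).sum(axis=1) - f1_logpdf(samples).sum(axis=1)
    return np.flatnonzero(llr > 0)

# Illustrative setup: f1 = N(0,1), f2 = N(1,1); variables 3 and 7 anomalous.
n, l = 10, 200
x = rng.normal(0.0, 1.0, size=(n, l))
for i in (3, 7):
    x[i] = rng.normal(1.0, 1.0, size=l)

gauss_logpdf = lambda mu: (lambda s: -0.5 * (s - mu) ** 2 - 0.5 * np.log(2 * np.pi))
est = separate_sampling_test(x, gauss_logpdf(0.0), gauss_logpdf(1.0))
```

Here the test consumes $n \cdot l$ samples in total; the paper's point is that mixed observations can perform the same identification with far fewer measurements.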
In standard compressed sensing, the observation is $\boldsymbol{y} = \boldsymbol{a}^T \boldsymbol{\mu} + \boldsymbol{\epsilon}$ with some unknown deterministic sparse vector $\boldsymbol{\mu}$ and additive noise $\boldsymbol{\epsilon}$. So in standard compressed sensing, the unknown vector $\boldsymbol{x}$ takes the same value in each measurement. Unlike standard compressed sensing, in compressed hypothesis testing $\boldsymbol{x}$ is a vector of $n$ random variables taking independent realizations across different measurements. As discussed below, our problem is relevant to multiple hypothesis testing, for example in communication systems [26, 27, 28].
In addition to multiple hypothesis testing with communication constraints, related work on identifying anomalous random variables includes the detection of an anomalous cluster in a network [32], Gaussian clustering [33], group testing [34, 35, 36, 37, 38, 39], and quickest detection [22, 23, 21, 40, 41]. In particular, in [22, 23, 21, 40], the authors optimized adaptive separate sampling of individual random variables and reduced the number of samples needed by exploiting the sparsity of the anomalous random variables. However, the total number of observations is still at least $\Theta(n)$ for these methods [22, 23, 21, 40], since one is restricted to sampling the $n$ random variables individually. The major difference between the previous research [22, 23, 21, 40, 32, 33] and ours is that we consider compressed measurements instead of separate measurements of individual random variables. Group testing also differs from our problem setting: our sensing matrices and variables are general matrices and vectors taking real values, while group testing [34, 35, 36, 37, 38, 39] normally uses Bernoulli matrices. Moreover, in group testing the unknown vector is often assumed to be deterministic across different measurements, rather than taking independent realizations as in this paper.
The rest of the paper is organized as follows. Section II describes the mathematical model of the considered anomaly detection problem. In Section III-A, we investigate the hypothesis testing error performance of time-invariant mixed observations, propose corresponding hypothesis testing algorithms, and provide their performance analysis. Section III-B describes the case of random time-varying mixed observations, for which we derive the error exponent of wrongly identifying the anomalous random variables. In Section III-C, we consider deterministic time-varying mixed observations and derive a bound on the error probability. In Section IV, we consider the undersampling regime, where the number of measurements is smaller than the number of random variables, to show the advantage of compressed hypothesis testing with mixed measurements over separate measurements. In Section V, we demonstrate, through examples with Gaussian random variables, that linear mixed observations can strictly improve the error exponent over separate sampling of each individual random variable. Section VI describes the optimal mixed measurements for Gaussian random variables, maximizing the error exponent in hypothesis testing. Section VII introduces efficient algorithms for finding abnormal random variables from mixed observations when $n$ and $k$ are large. In Section VIII, we demonstrate the effectiveness of our hypothesis testing methods with mixed measurements in various numerical experiments. Section IX concludes the paper.
Notations: We denote a random variable and its realization by an uppercase letter and the corresponding lowercase letter, respectively. We use $X_i$ to refer to the $i$-th element of the random vector $\boldsymbol{X}$. We reserve the calligraphic uppercase letters $\mathcal{S}$ and $\mathcal{K}$ for index sets, where $\mathcal{S} = \{1, 2, \ldots, n\}$ and $\mathcal{K} \subseteq \mathcal{S}$. We use superscripts to represent time indices; hence $\boldsymbol{x}^j$ represents the realization of the random vector $\boldsymbol{X}$ at time $j$. We reserve the lowercase letters $f$ and $p$ for probability density functions (PDFs). For notational convenience, we also denote the probability density function $p_X(x)$ by $p(x)$ or $p_X$. In this paper, $\log$ denotes the logarithm with the natural number $e$ as its base.
(II.1)
where $\mathcal{K} \subset \mathcal{S} = \{1, 2, \cdots, n\}$ is an unknown "support" index set, and $|\mathcal{K}| = k \ll n$. We take $m$ mixed observations of the $n$ random variables at $m$ time indices. The measurement at time $j$ is stated as
which is a function of the $n$ random variables, where $1 \leq j \leq m$. Note that the random variable $X_i^j$ follows the probability distribution $f_1(\cdot)$ or $f_2(\cdot)$ depending on whether $i \in \mathcal{K}$ or not, i.e., the same distribution as the random variable $X_i$. Our goal in this paper is to determine $\mathcal{K}$ by identifying those $k$ anomalous random variables with as few measurements as possible. We assume that the realizations at different time slots are mutually independent. Additionally, although our results can be extended to nonlinear observations, in this paper we specifically consider linear functions $g^j(\cdot)$, due to their simplicity and their wide range of applications, including network tomography [9] and cognitive radio [10]. In particular, network tomography is a good example of the considered linear measurement model: the goal there is to identify congested links in a communication network by sending packets through probing paths composed of connected links. The communication delay along a probing path is naturally a linear combination of the random variables representing the delays of the packet traveling through the corresponding links.
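A minimal sketch of this linear measurement model, in a network-tomography flavor: each probe observes a path delay $y^j = \langle \boldsymbol{a}^j, \boldsymbol{x}^j \rangle$, with a fresh, independent realization of the link delays at every time index. The network size, delay distributions, and path-incidence rows below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# n link delays; link 2 is "congested" (larger mean delay).
n, m = 5, 8
means = np.full(n, 1.0)
means[2] = 3.0

A = rng.integers(0, 2, size=(m, n)).astype(float)  # 0/1 path-incidence rows a^j
X = rng.normal(means, 0.2, size=(m, n))            # independent delays x^j per probe
y = np.einsum('ij,ij->i', A, X)                    # y^j = <a^j, x^j>
```

Each entry of `y` is a mixed observation of the per-link delays, rather than a separate sample of any single link.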
Throughout this paper, when the measurements through the functions $g^j$ are taken, the decoder knows the functions $g^j$. In particular, when the functions are linear, the decoder knows their coefficients, i.e., the matrices $A$ discussed later in this paper.
When the functions $g^j$ are linear, the $j$-th measurement is stated as follows:
(II.2)
(II.3)
We would like to design the sampling functions $g^j(\cdot)$ and the decision function $\phi(\cdot)$ such that the probability
(II.4)
for an arbitrarily small $\epsilon > 0$.
In compressed hypothesis testing, we consider three different types of mixed observations: fixed time-invariant mixed measurements, random time-varying measurements, and deterministic time-varying measurements. Table I summarizes the definitions of these measurement types. For each type of mixed observation, we characterize the number of measurements required to achieve a specified hypothesis testing error probability.
Measurement type | Definition |
---|---|
Fixed time-invariant | The measurement function is the same at every time index. |
Random time-varying | The measurement function is randomly generated from a distribution at each time index. |
Deterministic time-varying | The measurement function is time-varying at each time index but predetermined. |
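The three measurement schedules in Table I can be sketched as follows; the particular sensing vectors below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 6

# Fixed time-invariant: one sensing vector reused at every time index.
a_fixed = rng.standard_normal(n)
fixed = [a_fixed] * m

# Random time-varying: an independent draw at each time index.
random_tv = [rng.standard_normal(n) for _ in range(m)]

# Deterministic time-varying: varies with the time index but is fully
# predetermined, e.g. cycling through a fixed pool of sensing vectors.
pool = [np.eye(n)[j] + 1.0 for j in range(n)]
det_tv = [pool[j % n] for j in range(m)]
```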
In this subsection, we focus on a simple case in which the sensing vectors are time-invariant across different time indices, i.e., $\boldsymbol{a}^1 = \cdots = \boldsymbol{a}^m := \boldsymbol{a}$, where $\boldsymbol{a} \in \mathbb{R}^{n \times 1}$. This simple case helps illustrate the main idea, which will be generalized to more sophisticated schemes in later sections.
$Y^1, Y^2, \ldots, Y^m$ follow the probability distribution $p_v$,
$Y^1, Y^2, \ldots, Y^m$ follow the probability distribution $p_w$.
$$C(p_v, p_w) = -\min_{0 \leq \lambda \leq 1} \log \int p_v^{\lambda}(y)\, p_w^{1-\lambda}(y)\, dy \qquad \text{(III.1)}$$
is the Chernoff information between the two probability distributions $p_v$ and $p_w$.
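The Chernoff information can be evaluated numerically by quadrature plus a grid search over $\lambda$. The sketch below assumes the standard definition $C(p_v, p_w) = -\min_{\lambda \in [0,1]} \log \int p_v^{\lambda}\, p_w^{1-\lambda}\, dy$ and checks it against the closed form $(A-B)^2/(8\sigma^2)$ for equal-variance Gaussians:

```python
import numpy as np

def chernoff_information(logp, logq, grid):
    # Quadrature on `grid` (which must cover the bulk of both densities)
    # and a grid search over lambda; log-sum-exp for numerical stability.
    lp, lq = logp(grid), logq(grid)
    dx = grid[1] - grid[0]
    best = np.inf
    for lam in np.linspace(0.0, 1.0, 501):
        w = lam * lp + (1.0 - lam) * lq
        mx = w.max()
        best = min(best, mx + np.log(np.exp(w - mx).sum() * dx))
    return -best

# Sanity check with A = 0, B = 1, sigma = 1: expect (A-B)^2/(8 sigma^2) = 0.125
gauss = lambda mu: (lambda x: -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi))
grid = np.linspace(-10.0, 11.0, 20001)
c = chernoff_information(gauss(0.0), gauss(1.0), grid)
```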
In Algorithm 2, for two probability distributions $p_v$ and $p_w$, we choose the likelihood ratio threshold of the Neyman-Pearson test so that the error probability decreases with the largest possible error exponent, namely the Chernoff information between $p_v$ and $p_w$:
Overall, the smallest possible error exponent of making an error between any pair of probability distributions is
$$E = \min_{v \neq w} C(p_v, p_w) \qquad \text{(III.2)}$$
Without loss of generality, we assume that $p_1$ is the true probability distribution for the observation data $\boldsymbol{Y} = \boldsymbol{y}$. The error probability $\mathbb{P}_{err}$ of the Neyman-Pearson test scales as $2^{-m C(p_1, p_w)}$, where $m$ is the number of measurements, and this exponent is asymptotically tight [49, Chapter 11.9]. By the union bound over the $l-1$ possible pairs $(p_1, p_w)$, the probability that $p_1$ is not correctly identified as the true probability distribution scales at most as $l \times 2^{-mE} := \epsilon$, where $l = \binom{n}{k}$. From the bounds $\left(\frac{n}{k}\right)^k \leq \binom{n}{k} \leq \left(\frac{en}{k}\right)^k$ on the binomial coefficient, where $e$ is the base of the natural logarithm and $1 \leq k \leq n$, the failure probability satisfies $\epsilon \leq \left(\frac{en}{k}\right)^k 2^{-mE}$. Thus, for the number of measurements, we have
$$m \geq \frac{k \log_2\!\left(\frac{en}{k}\right) + \log_2\!\left(\frac{1}{\epsilon}\right)}{E} \qquad \text{(III.3)}$$
Therefore, $\Theta(k \log(n) E^{-1})$ samples, where $E$ is introduced in (III.2), are enough for identifying the $k$ anomalous random variables with high probability. ∎
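Numerically, the required $m$ follows directly from the union-bound condition $\binom{n}{k} 2^{-mE} \leq \epsilon$; the problem sizes below are illustrative:

```python
import math

def samples_needed(n, k, E, eps):
    # Smallest m with C(n,k) * 2^(-m*E) <= eps, i.e.
    # m >= (log2 C(n,k) + log2(1/eps)) / E.
    l = math.comb(n, k)
    return math.ceil((math.log2(l) + math.log2(1.0 / eps)) / E)

m_small = samples_needed(n=1000, k=5, E=0.125, eps=1e-3)
m_large = samples_needed(n=100000, k=5, E=0.125, eps=1e-3)
# m grows only logarithmically in n, matching Theta(k log(n) / E)
```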
Each of the $n$ random variables has the same probability of being abnormal. Thus, the possible locations of the $k$ different random variables out of $n$ follow a uniform prior distribution; namely, every hypothesis has the same prior probability. Algorithm 1 is based on maximum likelihood detection, which is known to provide the minimum error probability under a uniform prior [50]. Additionally, since the Likelihood Ratio Test (LRT) gives the same result as maximum likelihood estimation when the threshold value is one, Algorithm 2, which is an LRT algorithm, can provide the same result as Algorithm 1 with a properly chosen threshold value in the Neyman-Pearson test.
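The ML/LRT equivalence is easy to check in a two-hypothesis toy example (illustrative Gaussian likelihoods, not the paper's algorithms): an LRT with threshold one decides exactly as the likelihood maximizer, since $p_1(y)/p_2(y) > 1$ iff $p_1(y) > p_2(y)$.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.0, size=50)  # observations, true mean 1 (illustrative)

def loglik(mu):
    # Gaussian log-likelihood up to a constant shared by both hypotheses
    return float(np.sum(-0.5 * (y - mu) ** 2))

# Maximum likelihood: pick the hypothesis with the larger likelihood.
ml_pick = 1 if loglik(1.0) > loglik(0.0) else 2
# LRT with threshold 1: decide hypothesis 1 iff the log-ratio exceeds log(1) = 0.
lrt_pick = 1 if (loglik(1.0) - loglik(0.0)) > np.log(1.0) else 2
```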
We also remark that the error exponent (Chernoff information) for the Neyman-Pearson test is tight, in the sense that a lower bound on the error probability of the pairwise Neyman-Pearson test scales with the same exponent.
If we are allowed to use time-varying sketching functions, we may need fewer samples. In the next subsection, we discuss the performance of time-varying mixed measurements for this problem.
We propose the maximum likelihood estimation method with random time-varying measurements over the $\binom{n}{k}$ hypotheses in Algorithm 3. To analyze the error probability of the maximum likelihood estimation, we further propose a hypothesis testing algorithm based on pairwise comparisons in Algorithm 4. The number of samples required to find the abnormal random variables is stated in Theorem III.3. Before introducing that theorem, we define the Chernoff information between two conditional probability density functions, which we call the inner conditional Chernoff information, in Definition III.2.
(III.4)
With the definition of the inner conditional Chernoff information, we give our theorem on the sample complexity of our algorithms as follows.
(III.5)
(III.6)
This gives (III.5). Working further on (III.5), we have
(III.7)
(III.8)
and the first equation holds for any realization vector $\boldsymbol{a}'$ in the domain of $\boldsymbol{a}$. We take the minimization over $\boldsymbol{a}'$ in order to obtain the tightest lower bound on the inner conditional Chernoff information. Notice that, by Hölder's inequality, for any probability density functions $f(x)$ and $g(x)$ we have
$$\int f^{\lambda}(x)\, g^{1-\lambda}(x)\, dx \leq \left(\int f(x)\, dx\right)^{\lambda}\!\left(\int g(x)\, dx\right)^{1-\lambda} = 1, \quad \lambda \in [0,1] \qquad \text{(III.9)}$$
In conclusion, we obtain
(III.10)
Overall, the smallest possible error exponent between any pair of hypotheses is
(III.11)
Without loss of generality, we assume $H_1$ is the true hypothesis. Since the error probability $\mathbb{P}_{err}$ of the Neyman-Pearson test satisfies
(III.12)
by the union bound over the $l-1$ possible pairs $(H_1, H_w)$, where $l = \binom{n}{k}$, the probability that $H_1$ is not correctly identified as the true hypothesis is upper-bounded by $l \times 2^{-mE}$ in terms of scaling. Hence, as shown in the proof of Theorem III.1, $m = \Theta(k \log(n) E^{-1})$ samples, where $E$ is introduced in (III.11), are enough for identifying the $k$ anomalous random variables with high probability. ∎
In this subsection, we consider mixed measurements that vary over time but whose sensing vectors are predetermined. Hence, a realized sensing vector $\boldsymbol{a}$ is used for exactly $p(\boldsymbol{A} = \boldsymbol{a})\,m$ measurements (assuming that the products $p(\boldsymbol{A} = \boldsymbol{a})\,m$ are integers). In contrast, with random time-varying measurements each sensing vector $\boldsymbol{A}$ is drawn randomly, so the number of measurements taking realization $\boldsymbol{a}$ is itself random. We denote the predetermined sensing vector at time $j$ by $\boldsymbol{a}^j$.
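The predetermined allocation can be sketched as follows; the sensing-vector labels and the design distribution $p(\boldsymbol{A} = \boldsymbol{a})$ below are hypothetical:

```python
# Use each realization a for exactly p(A = a) * m measurements,
# assuming the products p(A = a) * m are integers.
m = 12
p = {'a1': 0.5, 'a2': 0.25, 'a3': 0.25}  # hypothetical design distribution
schedule = [a for a, prob in p.items() for _ in range(round(prob * m))]
```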
For deterministic time-varying measurements, we introduce the maximum likelihood estimation method over the $l = \binom{n}{k}$ hypotheses in Algorithm 5. To analyze the error probability, we consider another hypothesis testing method based on pairwise comparisons with deterministic time-varying measurements in Algorithm 6. Before introducing the sample complexity of hypothesis testing with deterministic time-varying measurements, we define the outer conditional Chernoff information between two probability density functions, given the hypotheses and a sensing vector, in Definition III.4.
For $\lambda \in [0,1]$, two hypotheses $H_v$ and $H_w$ ($1 \leq v, w \leq l$), and a sensing vector $\boldsymbol{A}$, define
(III.13)
With this definition, the following theorem describes the sample complexity of our algorithms with deterministic time-varying measurements.
(III.14)
For readability, we place the proof of Theorem III.5 in Appendix X-A. It is noteworthy that, by Jensen's inequality, the outer conditional Chernoff information introduced in (III.13) is greater than or equal to the inner conditional Chernoff information introduced in (III.4).
Compressed hypothesis testing is especially effective in the subsampling regime, where the number of samples is small, sometimes even smaller than the number of random variables. We now give a lower bound on the error probability of determining the set of anomalous random variables when the number of individual samples is smaller than the number of random variables.
Consider $n$ independent random variables, among which $(n-k)$ follow a known probability distribution $f_1(\cdot)$, while the other $k$ follow another known probability distribution $f_2(\cdot)$. Suppose that we take $m < n$ samples of individual random variables (no mixing). Then the probability of misidentifying the $k$ abnormal random variables is at least
Suppose that $i$ random variables following the abnormal distribution $f_2(\cdot)$ are observed in these $m$ samples, where $\max(0, k+m-n) \leq i \leq \min(k, m)$. Then at least $(n-m)$ random variables are never sampled, and among them there are $(k-i)$ random variables that follow the abnormal distribution $f_2(\cdot)$. Correctly determining these $(k-i)$ random variables succeeds with probability at most $\frac{1}{\binom{n-m}{k-i}}$. So the probability of correctly identifying the $k$ abnormal random variables is at most
This proves the lower bound on the probability of misidentifying the $k$ abnormal random variables under separate measurements with $m < n$. ∎
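The success-probability bound in the proof above can be evaluated numerically; the problem sizes below are illustrative:

```python
from math import comb

def success_upper_bound(n, k, m):
    # Sum over i = number of anomalous variables that appear among the m
    # sampled ones (hypergeometric weight); the remaining (k - i) anomalies
    # hide among the (n - m) unsampled variables and can only be guessed.
    total = 0.0
    for i in range(max(0, k + m - n), min(k, m) + 1):
        p_i = comb(k, i) * comb(n - k, m - i) / comb(n, m)
        total += p_i / comb(n - m, k - i)
    return total

err_lb = 1.0 - success_upper_bound(n=100, k=3, m=30)  # error probability >= err_lb
```

With $n = 100$, $k = 3$, and only $m = 30$ unmixed samples, the resulting lower bound on the error probability is already close to one, consistent with the discussion that follows.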
As we can see, if $m > k$ and $m \ll n$, the error probability can be very close to $1$. In contrast, compressed hypothesis testing can greatly lower the error probability even with $m \ll n$ samples. In fact, following the proof of Theorem III.5, we have the following results on the error probability for deterministic time-varying measurements.
If the outer conditional Chernoff information of the mixed measurements is sufficiently large, the error probability can indeed be made close to 0 even when the number of measurements is smaller than the problem dimension $n$. In contrast, according to Theorem IV.1, for small $m$ the error probability of separate observations is lower-bounded by a number close to 1, so it is impossible for separate observations to achieve a low error probability. We remark that, compared with the earlier theorems, which deal mostly with the error exponent (where the number of samples $m$ is large and goes to infinity), Theorem IV.2 concerns the undersampling regime, where the number of samples is smaller than the number of variables. We validate these observations through various numerical experiments in Section VIII.
In this section, we provide simple examples in which, with the same number of measurements, hypothesis testing through mixed observations achieves a smaller error probability than the traditional individual sampling approach. In particular, we consider Gaussian probability distributions in our examples.
In this example, we consider $n = 2$ and $k = 1$. We group the two independent random variables $X_1$ and $X_2$ into a random vector $[X_1, X_2]^T$. Suppose that there are two hypotheses for this 2-dimensional random vector, where $X_1$ and $X_2$ are independent:
$H_1$: $X_1 \sim \mathcal{N}(A, \sigma^2)$ and $X_2 \sim \mathcal{N}(B, \sigma^2)$,
$H_2$: $X_1 \sim \mathcal{N}(B, \sigma^2)$ and $X_2 \sim \mathcal{N}(A, \sigma^2)$.
Here $A$ and $B$ are two distinct constants, and $\sigma^2$ is the variance of the two Gaussian random variables. At each time index, only one observation is allowed, and the observation is restricted to a linear mixing of $X_1$ and $X_2$. Namely,
We assume that the sensing vector $[a_1, a_2]^T$ does not change over time. Clearly, when $a_1 \neq 0$ and $a_2 = 0$, the sensing vector reduces to a separate observation of $X_1$; and when $a_1 = 0$ and $a_2 \neq 0$, it reduces to a separate observation of $X_2$. In these cases, the observation follows the distribution $\mathcal{N}(A, \sigma^2)$ under one hypothesis and $\mathcal{N}(B, \sigma^2)$ under the other. The Chernoff information between these two distributions is
\[ \frac{(A-B)^{2}}{8\sigma^{2}}. \tag{V.1} \]
In contrast, with a mixed measurement, the Chernoff information between the two induced distributions $\mathcal{N}(a_{1}A+a_{2}B,(a_{1}^{2}+a_{2}^{2})\sigma^{2})$ and $\mathcal{N}(a_{1}B+a_{2}A,(a_{1}^{2}+a_{2}^{2})\sigma^{2})$ is given by
\[ \frac{\bigl((a_{1}-a_{2})(A-B)\bigr)^{2}}{8(a_{1}^{2}+a_{2}^{2})\sigma^{2}} \;\leq\; \frac{(A-B)^{2}}{4\sigma^{2}}, \tag{V.2} \]
where equality is attained by taking the measurement vector $[a_1,a_2]^{T}=[a_1,-a_1]^{T}$. Therefore, with mixed measurements, we can double the Chernoff information. This shows that linear mixed observations can offer a strict improvement in hypothesis testing: they reduce the error probability by increasing the error exponent.
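As a numerical sanity check of the doubling claim, the sketch below uses the closed form $(m_1-m_2)^{2}/(8s^{2})$ for the Chernoff information between two equal-variance Gaussians. The specific values of $A$, $B$, and $\sigma^{2}$, and the assignment $X_1\sim\mathcal{N}(A,\sigma^{2})$, $X_2\sim\mathcal{N}(B,\sigma^{2})$ under the first hypothesis, are assumptions made for illustration.

```python
import numpy as np

def chernoff_equal_var(m1, m2, var):
    """Chernoff information between N(m1, var) and N(m2, var).
    For equal variances the optimum is at lambda = 1/2, giving
    C = (m1 - m2)^2 / (8 var)."""
    return (m1 - m2) ** 2 / (8.0 * var)

A, B, sigma2 = 1.0, 3.0, 0.5

# Separate observation of X1 (or X2): N(A, sigma2) vs N(B, sigma2).
c_sep = chernoff_equal_var(A, B, sigma2)

# Mixed observation with sensing vector [a1, a2] = [1, -1]:
# under one hypothesis, a1*X1 + a2*X2 ~ N(A - B, 2*sigma2);
# under the other it is N(B - A, 2*sigma2).
a = np.array([1.0, -1.0])
var_mix = np.sum(a ** 2) * sigma2
c_mix = chernoff_equal_var(a @ np.array([A, B]), a @ np.array([B, A]), var_mix)

print(c_mix / c_sep)  # -> 2.0: mixing doubles the error exponent
```

The ratio is exactly 2 for any choice of $A\neq B$ and $\sigma^{2}>0$, matching (V.1) and (V.2).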
Here $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ are both $n\times n$ covariance matrices.
Suppose that at each time instant only one observation is allowed, and the observation is restricted to a time-invariant sensing vector $\boldsymbol{A}\in\mathbb{R}^{n\times 1}$; namely,
Under these conditions, the observation follows the distribution $\mathcal{N}(\boldsymbol{A}^{T}\boldsymbol{\mu}_1,\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A})$ under hypothesis $H_1$, and the distribution $\mathcal{N}(\boldsymbol{A}^{T}\boldsymbol{\mu}_2,\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A})$ under the other hypothesis $H_2$. We would like to choose a sensing vector $\boldsymbol{A}$ that maximizes the Chernoff information between the two possible univariate Gaussian distributions, namely
In fact, from [51], the Chernoff information between these two distributions is
We first look at the special case $\boldsymbol{\Sigma}=\boldsymbol{\Sigma}_1=\boldsymbol{\Sigma}_2$. Under this condition, the maximum Chernoff information is given by
Taking $\boldsymbol{A}'=\boldsymbol{\Sigma}^{\frac{1}{2}}\boldsymbol{A}$, this reduces to
By the Cauchy-Schwarz inequality, the optimum is attained at $\lambda=\frac{1}{2}$ and $\boldsymbol{A}'=\boldsymbol{\Sigma}^{-\frac{1}{2}}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$, i.e., $\boldsymbol{A}=\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$. Under these conditions, the maximum Chernoff information is given by
\[ \frac{1}{8}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)^{T}\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2). \tag{V.3} \]
Note that, in general, $\boldsymbol{A}'=\boldsymbol{\Sigma}^{-\frac{1}{2}}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$ is not a separate observation of a single individual random variable, but rather a linear mixing of the $n$ random variables. Therefore, a mixed measurement can maximize the Chernoff information.
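A small numerical sketch can confirm that the mixing vector $\boldsymbol{A}=\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$ attains the closed-form maximum and that no separate (basis-vector) observation does better. The covariance matrix and means below are randomly generated, an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4
mu1 = rng.normal(size=n)
mu2 = rng.normal(size=n)
M = rng.normal(size=(n, n))
Sigma = M @ M.T + n * np.eye(n)  # shared covariance, positive definite

def chernoff(Av):
    """Chernoff information of the projected test: both projected
    distributions share the variance Av^T Sigma Av, so
    C = (Av^T (mu1 - mu2))^2 / (8 Av^T Sigma Av)."""
    d = Av @ (mu1 - mu2)
    return d ** 2 / (8.0 * (Av @ Sigma @ Av))

A_opt = np.linalg.solve(Sigma, mu1 - mu2)   # A = Sigma^{-1}(mu1 - mu2)
c_opt = chernoff(A_opt)
c_closed = (mu1 - mu2) @ np.linalg.solve(Sigma, mu1 - mu2) / 8.0

# The closed form matches, and no basis (separate) observation beats it.
print(np.isclose(c_opt, c_closed))
print(all(chernoff(e) <= c_opt + 1e-12 for e in np.eye(n)))
```

Both checks hold by the Cauchy-Schwarz argument above: $(\boldsymbol{A}^{T}\Delta)^{2}\le(\boldsymbol{A}^{T}\boldsymbol{\Sigma}\boldsymbol{A})(\Delta^{T}\boldsymbol{\Sigma}^{-1}\Delta)$ with $\Delta=\boldsymbol{\mu}_1-\boldsymbol{\mu}_2$.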
Let us now look at mixed observations for Gaussian random variables with different variances. Consider the same setting as in Example 2, except that we now look at another special case, $\boldsymbol{\mu}=\boldsymbol{\mu}_1=\boldsymbol{\mu}_2$, and study the optimal sensing vector under this scenario. The Chernoff information then becomes
\[ \max_{\lambda\in(0,1)}\;\frac{1}{2}\log\frac{\lambda\,\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A}+(1-\lambda)\,\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A}}{\left(\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A}\right)^{1-\lambda}\left(\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A}\right)^{\lambda}}. \tag{V.4} \]
To find the optimal sensing vector $\boldsymbol{A}$, we solve the following optimization problem:
\[ \max_{\boldsymbol{A}}\;\max_{\lambda\in(0,1)}\;\frac{1}{2}\log\frac{\lambda\,\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A}+(1-\lambda)\,\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A}}{\left(\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A}\right)^{1-\lambda}\left(\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A}\right)^{\lambda}}. \tag{V.5} \]
For a given $\boldsymbol{A}$, we define $B=\max\left\{\dfrac{\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A}}{\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A}},\;\dfrac{\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A}}{\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A}}\right\}$, the larger of the two variance ratios.
Note that $B\geq 1$. By the symmetry between $\lambda$ and $1-\lambda$, maximizing the Chernoff information can always be reduced to
\[ \max_{\lambda\in[0,1]}\;\frac{1}{2}\log\frac{\lambda+(1-\lambda)B}{B^{1-\lambda}}. \tag{V.6} \]
The optimal $\lambda$, denoted by $\lambda^{\star}$, is obtained by setting the first-order derivative to zero, which gives $\lambda^{\star}=\frac{B}{B-1}-\frac{1}{\log B}$.
Plugging $\lambda^{\star}$ into (V.6), we obtain the following optimization problem:
\[ \max_{B\geq 1}\;\frac{1}{2}\left(\log\frac{B-1}{\log B}+\frac{\log B}{B-1}-1\right). \tag{V.7} \]
We note that the objective function is increasing in $B$ for $B\geq 1$, as proven in Lemma V.1.
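The reduced one-dimensional problem is easy to probe numerically. The sketch below grid-searches the objective $\frac{1}{2}\log\frac{\lambda+(1-\lambda)B}{B^{1-\lambda}}$ over $\lambda$, compares the maximizer against the closed-form stationary point $\lambda^{\star}=\frac{B}{B-1}-\frac{1}{\log B}$ (derived here from the first-order condition, not quoted from the text), and checks the monotonicity in $B$ asserted by Lemma V.1.

```python
import numpy as np

def g(lmbda, B):
    # Reduced objective: (1/2) log((lambda + (1 - lambda) B) / B^(1 - lambda))
    return 0.5 * (np.log(lmbda + (1.0 - lmbda) * B) - (1.0 - lmbda) * np.log(B))

def lam_star(B):
    # Stationary point obtained by setting dg/dlambda = 0 (a derived formula)
    return B / (B - 1.0) - 1.0 / np.log(B)

lams = np.linspace(1e-6, 1 - 1e-6, 100001)
for B in (1.5, 4.0, 25.0):
    grid_best = lams[np.argmax(g(lams, B))]
    assert abs(grid_best - lam_star(B)) < 1e-3  # closed form matches grid search

# The optimal value is increasing in B (Lemma V.1): a larger variance
# mismatch between the hypotheses yields a larger error exponent.
Bs = np.linspace(1.001, 50.0, 200)
vals = [g(lam_star(B), B) for B in Bs]
assert np.all(np.diff(vals) > 0)
print("closed-form lambda* matches grid search; objective increases in B")
```

The objective is concave in $\lambda$ (its second derivative is negative), so the stationary point is the global maximizer.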
The optimal objective value of the following optimization problem
is an increasing function of $B$ for $B\geq 1$.
We show that $\frac{\lambda+(1-\lambda)B}{B^{1-\lambda}}$ is an increasing function of $B$ for $B\geq 1$. In fact, its derivative with respect to $B$ is $\lambda(1-\lambda)(B-1)B^{\lambda-2}$, which is nonnegative for every $\lambda\in[0,1]$ and $B\geq 1$.
Then the conclusion of this lemma immediately follows. ∎
This means that we need to maximize $B$ in order to maximize the Chernoff information. Hence, to find the optimal $\boldsymbol{A}$ maximizing $B$, we solve the following two optimization problems:
\[ \max_{\boldsymbol{A}}\;\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A}\quad\text{subject to}\quad\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A}=1, \tag{V.8} \]
\[ \max_{\boldsymbol{A}}\;\boldsymbol{A}^{T}\boldsymbol{\Sigma}_2\boldsymbol{A}\quad\text{subject to}\quad\boldsymbol{A}^{T}\boldsymbol{\Sigma}_1\boldsymbol{A}=1. \tag{V.9} \]
Then the larger of the two optimal objective values equals the optimal value of $B$, and the corresponding $\boldsymbol{A}$ is the optimal sensing vector maximizing the Chernoff information. These two optimization problems are not convex programs. However, they still have zero duality gap by the S-procedure [52, Appendix B]. In fact, they are respectively equivalent to the following two semidefinite programs:
\[ \max_{\boldsymbol{Z}\succeq 0}\;\operatorname{Tr}(\boldsymbol{\Sigma}_1\boldsymbol{Z})\quad\text{subject to}\quad\operatorname{Tr}(\boldsymbol{\Sigma}_2\boldsymbol{Z})=1, \tag{V.10} \]
\[ \max_{\boldsymbol{Z}\succeq 0}\;\operatorname{Tr}(\boldsymbol{\Sigma}_2\boldsymbol{Z})\quad\text{subject to}\quad\operatorname{Tr}(\boldsymbol{\Sigma}_1\boldsymbol{Z})=1, \tag{V.11} \]
where the positive semidefinite matrix variable $\boldsymbol{Z}$ replaces the rank-one matrix $\boldsymbol{A}\boldsymbol{A}^{T}$.
Thus, they can be efficiently solved via a generic optimization solver.
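As an alternative to a semidefinite solver, the underlying problem of maximizing the variance ratio is a generalized Rayleigh quotient, which can be solved directly by a symmetric eigenvalue computation after whitening. The sketch below (with randomly generated covariance matrices, an assumption for illustration) recovers the optimal $B$ this way:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M1 = rng.normal(size=(n, n)); Sigma1 = M1 @ M1.T + np.eye(n)
M2 = rng.normal(size=(n, n)); Sigma2 = M2 @ M2.T + np.eye(n)

def max_ratio(S_num, S_den):
    """Maximize (A^T S_num A) / (A^T S_den A) over A != 0.
    Whitening with L from the Cholesky factorization S_den = L L^T
    turns this into an ordinary symmetric eigenvalue problem."""
    L = np.linalg.cholesky(S_den)
    Linv = np.linalg.inv(L)
    C = Linv @ S_num @ Linv.T
    w, V = np.linalg.eigh(C)
    A = Linv.T @ V[:, -1]          # top eigenvector, mapped back
    return w[-1], A

val1, A1 = max_ratio(Sigma1, Sigma2)   # counterpart of (V.8)
val2, A2 = max_ratio(Sigma2, Sigma1)   # counterpart of (V.9)
B_opt = max(val1, val2)                 # the maximal variance ratio B

# Sanity check: the returned vector attains the reported ratio.
r = (A1 @ Sigma1 @ A1) / (A1 @ Sigma2 @ A1)
print(np.isclose(r, val1), B_opt >= 1.0)
```

Since the two ratios are reciprocals of the extreme generalized eigenvalues, at least one of the two optimal values is always at least $1$, consistent with $B\geq 1$.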
For example, suppose $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ are given as follows:
Let us introduce a specific example to aid understanding before considering more general examples of compressed hypothesis testing. In this example, six of the random variables follow the distribution $\mathcal{N}(0,1)$, and the remaining random variable follows the distribution $\mathcal{N}(0,\sigma^{2})$, where $\sigma^{2}>1$. We assume that the random variables $X_1, X_2, \ldots, X_7$ are mutually independent. Overall, there are seven hypotheses:
$H_1$: $(X_1, X_2, \ldots, X_7)\sim(\mathcal{N}(0,\sigma^{2}), \mathcal{N}(0,1), \ldots, \mathcal{N}(0,1))$,
$H_2$: $(X_1, X_2, \ldots, X_7)\sim(\mathcal{N}(0,1), \mathcal{N}(0,\sigma^{2}), \ldots, \mathcal{N}(0,1))$,
$\vdots$
$H_7$: $(X_1, X_2, \ldots, X_7)\sim(\mathcal{N}(0,1), \mathcal{N}(0,1), \ldots, \mathcal{N}(0,\sigma^{2}))$.
In this example, we will show that the Chernoff information with separate measurements is smaller than the Chernoff information with mixed measurements. We first calculate the Chernoff information between any two hypotheses under separate measurements. With separate measurements, under a hypothesis $H_v$, the output distribution is $\mathcal{N}(0,\sigma^{2})$ only when the random variable $X_v$ is observed; otherwise, the output follows $\mathcal{N}(0,1)$. Then, for any pair of hypotheses $H_v$ and $H_w$, when the random variable $X_v$ is observed, the output distributions are $\mathcal{N}(0,\sigma^{2})$ and $\mathcal{N}(0,1)$, respectively; similarly, when the random variable $X_w$ is observed, the output distributions are $\mathcal{N}(0,1)$ and $\mathcal{N}(0,\sigma^{2})$, respectively.
For the separate measurements, the seven sensing vectors $\boldsymbol{a}_1^{T}$ through $\boldsymbol{a}_7^{T}$ for the seven hypotheses are predetermined as the standard basis vectors, $\boldsymbol{a}_i^{T}=\boldsymbol{e}_i^{T}$, so that the $i$-th measurement observes $X_i$ alone:
We then take deterministic time-varying measurements by cycling through these seven sensing vectors.
The Chernoff information between any two hypotheses, e.g., $H_v=H_1$ and $H_w=H_2$, with separate measurements is calculated as follows:
(V.12)
Let us then calculate the Chernoff information between hypotheses $H_v$ and $H_w$ with mixed measurements. For the mixed measurements, we use the parity check matrix of the $(7,4)$ Hamming code as follows:
where at time index $j$ we use the sensing vector in row $i=(j\bmod 3)+1$. Thus, we cycle repeatedly through a total of three sensing vectors for the mixed measurements.
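To make the construction concrete, the sketch below builds one standard form of the $(7,4)$ Hamming parity check matrix (the paper's column ordering may differ) and tabulates the variance of each of the three mixed measurements under each hypothesis: any row whose support contains $X_v$ has variance $\sigma^{2}+3$ under $H_v$, and the distinct columns guarantee that the three measurements jointly distinguish all seven hypotheses.

```python
import numpy as np
from itertools import combinations

# One standard parity check matrix of the (7,4) Hamming code
# (an assumed form; the paper's column ordering may differ).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=float)

sigma2 = 10.0

# variance[v, r]: variance of the r-th mixed measurement H[r] @ x under
# hypothesis H_{v+1}, where X_{v+1} ~ N(0, sigma2) and the rest are N(0, 1).
variance = np.empty((7, 3))
for v in range(7):
    var_x = np.ones(7)
    var_x[v] = sigma2
    variance[v] = H ** 2 @ var_x   # independent variables: variances add

# Each row has four ones, so a row containing X_{v+1} has variance
# sigma2 + 3; otherwise the variance is 4.
assert set(np.unique(variance)) == {4.0, sigma2 + 3.0}

# Since all columns of H are distinct and nonzero, every pair of
# hypotheses is distinguished by at least one of the three measurements.
for v, w in combinations(range(7), 2):
    assert np.any(variance[v] != variance[w])
print("three mixed measurements suffice to separate all seven hypotheses")
```

This is the source of the $\sigma^{2}+3$ diagonal entry that appears in the covariance calculation below.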
For the pair of hypotheses $H_v$ and $H_w$, there are in total $21\,(=\binom{7}{2})$ cases in the calculation of the outer Chernoff information. For that, we have the following lemma:
Given the mixed measurements from the parity check matrix of the Hamming code, for a pair of hypotheses, the outer Chernoff information between $H_1$ and $H_4$ (or between $H_4$ and $H_7$) is the minimum among the $21\,(=\binom{7}{2})$ combination cases.
From the definition of the outer Chernoff information introduced in (III.5), we can calculate it as follows:
For example, for the pair of hypotheses $H_1$ and $H_2$, we have
For another pair of hypotheses, $H_1$ and $H_4$, we have
Among the 21 cases, the outer Chernoff information between $H_1$ and $H_4$ (or between $H_4$ and $H_7$) has only one remaining term in the calculation. To complete the proof of this lemma, we introduce the following lemma:
(V.13)
where $j$ can be any number between $1$ and $m/2$.
From Lemma V.3, we can conclude that the outer Chernoff information between $H_1$ and $H_4$ (or between $H_4$ and $H_7$), which has only one remaining term, is the minimum among the 21 cases. ∎
Let us then calculate the outer Chernoff information between $H_v=H_1$ and $H_w=H_4$ as follows:
(V.14)
where the diagonal entries of the covariance matrix are $\sigma^{2}+3, 1, \ldots, 1$.
When $\sigma^{2}\gg 1$, for separate observations, we have
where $e$ is the base of the natural logarithm. To compare the Chernoff information of separate and mixed measurements, we subtract the logarithmic values and check whether the result is positive or negative. For large enough $\sigma$, the following condition holds:
Therefore, we can conclude that for large enough $\sigma$, the Chernoff information with mixed measurements, i.e., the outer Chernoff information, exceeds that with separate measurements. Fig. 1 shows the outer Chernoff information, denoted by OCI, and the Chernoff information with separate measurements, denoted by CI. Fig. 1 clearly shows that the Chernoff information with mixed measurements can be larger than that with separate measurements, and that the outer Chernoff information between $H_1$ and $H_4$ (or between $H_4$ and $H_7$) is the minimum among all cases. For simplicity of the figure, we present the outer Chernoff information for only a few hypothesis pairs in Fig. 1.
Additionally, the inner Chernoff information between $H_v=H_1$ and $H_w=H_4$ can be calculated as follows:
Considering the first-order condition, we obtain the following critical point, which lies between $0$ and $1$; namely,
Unlike standard compressed sensing, in compressed hypothesis testing $\boldsymbol{x}$ is a vector of $n$ random variables taking independent realizations across different measurements. As discussed below, our problem is relevant to multiple hypothesis testing, such as in communication systems [26, 27, 28].