The 4 Types of Validity in Research Design (+3 More to Consider)

The conclusions you draw from your research (whether from analyzing surveys, focus groups, experimental design, or other research methods) are only useful if they’re valid.

How “true” are these results? How well do they represent the thing you’re actually trying to study? Validity is used to determine whether research measures what it intended to measure and to approximate the truthfulness of the results.

Unfortunately, researchers sometimes create their own definitions when it comes to what is considered valid.

  • In quantitative research, testing for validity and reliability is a given.
  • However, some qualitative researchers have gone so far as to suggest that validity does not apply to their research, even as they acknowledge the need for some qualifying checks or measures in their work.

This is wrong. Validity is always important – even if it’s harder to determine in qualitative research.

To disregard validity is to put the trustworthiness of your work, and others’ confidence in its results, in question. Even when qualitative measures are used in research, they need to be examined for reliability and validity in order to sustain the trustworthiness of the results.

What is validity in research?

Validity is how researchers talk about the extent to which results represent reality. Research methods, quantitative or qualitative, are methods of studying real phenomena – validity refers to how much of the phenomenon of interest they measure vs. how much “noise,” or unrelated information, is captured by the results.

Validity and reliability make the difference between “good” and “bad” research reports. Quality research depends on a commitment to testing and increasing the validity as well as the reliability of your research results.

Any research worth its salt is concerned with whether what is being measured is what is intended to be measured, and considers the ways in which observations are influenced by the circumstances in which they are made.

How we arrive at our conclusions plays an important role in addressing the broader substantive issues of any given study.

For this reason, we are going to look at various validity types that have been formulated as a part of legitimate research methodology.

Here are the 7 key types of validity in research:

  1. Face validity
  2. Content validity
  3. Construct validity
  4. Internal validity
  5. External validity
  6. Statistical conclusion validity
  7. Criterion-related validity

1. Face validity

Face validity is how valid your results seem at first glance, based on what they look like. It is the least scientific form of validity, as it is not quantified using statistical methods.

Face validity is not validity in the technical sense of the term. It is concerned only with whether a measure seems, on its face, to capture what we claim to measure.

Here we look at how valid a measure appears on the surface and make subjective judgments based on that.

For example,

  • Imagine you give a survey that appears valid to the respondents, and the questions were selected because they look valid to the administrator.
  • The administrator then asks a group of random people, untrained observers, whether the questions appear valid to them.

In research, it’s never enough to rely on face judgments alone – more quantifiable methods of validity are necessary in order to draw acceptable conclusions. Still, with many instruments of measurement to consider, face validity is useful for quickly favoring one approach over another.

Face validity should never be trusted on its own merits.

2. Content validity

Content validity is whether or not the measure used in the research covers all of the content in the underlying construct (the thing you are trying to measure).

This is also a subjective measure, but unlike face validity, we ask whether the content of a measure covers the full domain of the content. If a researcher wanted to measure introversion, they would have to first decide what constitutes a relevant domain of content for that trait.

Content validity is considered a subjective form of measurement because it still relies on people’s perceptions for measuring constructs that would otherwise be difficult to measure.

Where content validity distinguishes itself (and becomes useful) is through its use of experts in the field or individuals belonging to a target population. The assessment can also be made more objective through the use of rigorous statistical tests.

For example, you could have a content validity study that informs researchers how well the items used in a survey represent their content domain, how clear they are, and the extent to which they maintain the theoretical factor structure assessed by factor analysis.
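
As an illustration of how expert judgments can be quantified, here is a minimal Python sketch of Lawshe’s content validity ratio (CVR), one common statistic for this purpose. The item names and expert votes below are hypothetical, not from any real study.

```python
# A minimal, hypothetical sketch: Lawshe's content validity ratio (CVR) for survey items.
# Each expert votes whether an item is "essential" (1) or not (0);
# CVR = (n_essential - N/2) / (N/2), where N is the number of experts.

def content_validity_ratio(essential_votes):
    """Compute Lawshe's CVR from a list of 0/1 expert votes for one item."""
    n_experts = len(essential_votes)
    n_essential = sum(essential_votes)
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Hypothetical panel of 8 experts rating two introversion items.
ratings = {
    "prefers_small_groups": [1, 1, 1, 1, 1, 1, 0, 1],    # widely judged essential
    "enjoys_public_speaking": [0, 1, 0, 0, 1, 0, 0, 0],  # weakly tied to the construct
}

for item, votes in ratings.items():
    print(f"{item}: CVR = {content_validity_ratio(votes):+.2f}")
```

Items with a CVR near +1 are judged essential by nearly every expert; items near or below 0 are candidates for removal from the measure.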

3. Construct validity

A construct represents a collection of behaviors that are associated in a meaningful way to create an image or an idea invented for a research purpose. Construct validity is the degree to which your research measures the construct (as compared to things outside the construct).

Depression is a construct that represents a personality trait that manifests itself in behaviors such as oversleeping, loss of appetite, difficulty concentrating, etc.

The existence of a construct is manifested by observing the collection of related indicators. Any one sign may be associated with several constructs: a person with difficulty concentrating may have A.D.D. but not depression.

Construct validity is the degree to which inferences can be made from operationalizations (connecting concepts to observations) in your study to the constructs on which those operationalizations are based.  To establish construct validity you must first provide evidence that your data supports the theoretical structure.

You must also show that you control the operationalization of the construct, in other words, show that your theory has some correspondence with reality.

  • Convergent Validity – the degree to which an operation is similar to other operations it theoretically should be similar to (see the sketch after this list).
  • Discriminant Validity – the degree to which a scale differentiates between groups that should differ, and does not differentiate between groups that should not, based on theory or previous research.
  • Nomological Network – a representation of the constructs of interest in a study, their observable manifestations, and the interrelationships among and between these. According to Cronbach and Meehl, a nomological network has to be developed for a measure in order for it to have construct validity.
  • Multitrait-Multimethod Matrix – Campbell and Fiske’s six major considerations when examining construct validity. These include evaluations of convergent validity and discriminant validity; the others are the trait-method unit, multi-method/trait, truly different methodology, and trait characteristics.
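
To make convergent and discriminant validity concrete, here is a minimal Python sketch on simulated data. The two “depression scale” scores and the unrelated “height” variable are invented for illustration; in a real study you would use your actual measures.

```python
# A minimal, hypothetical sketch of convergent and discriminant checks on simulated data.
# Two depression scales should correlate strongly with each other (convergent) and
# only weakly with a theoretically unrelated variable such as height (discriminant).
import numpy as np

rng = np.random.default_rng(0)
n = 200

depression_a = rng.normal(size=n)                            # scores on scale A
depression_b = depression_a + rng.normal(scale=0.5, size=n)  # related scale B
height_cm = rng.normal(loc=170, scale=10, size=n)            # unrelated variable

corr = np.corrcoef([depression_a, depression_b, height_cm])
print("A vs B (convergent, expect high):", round(corr[0, 1], 2))
print("A vs height (discriminant, expect low):", round(corr[0, 2], 2))
```

A high correlation between the two scales supports convergent validity, while a near-zero correlation with the unrelated variable supports discriminant validity.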

4. Internal validity

Internal validity refers to the extent to which the independent variable can accurately be stated to produce the observed effect.

If the change in the dependent variable is due only to the independent variable(s), then internal validity is achieved. This is the degree to which an observed result can be attributed to the manipulation you made, rather than to something else.

Put another way, internal validity is how you can tell that your research “works” in a research setting. Within a given study, does the variable you change affect the variable you’re studying?
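
For instance, the simplest way to support such a causal claim is random assignment followed by a comparison of groups. The sketch below is a hypothetical, simulated example; the group sizes, means, and spread are assumptions for illustration only.

```python
# A minimal, hypothetical sketch: random assignment plus a group comparison.
# The group sizes, means, and spread below are assumptions, not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=50, scale=10, size=100)    # outcome without the intervention
treatment = rng.normal(loc=55, scale=10, size=100)  # outcome with the intervention

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Random assignment is what lets you attribute an observed difference to the intervention rather than to pre-existing differences between the groups.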

5. External validity

External validity refers to the extent to which the results of a study can be generalized beyond the sample – which is to say, the extent to which you can apply your findings to other people and settings.

Think of this as the degree to which a result can be generalized. How well do the research results apply to the rest of the world?

A laboratory setting (or other research setting) is a controlled environment with fewer variables. External validity refers to how well the results hold, even in the presence of all those other variables.

6. Statistical conclusion validity

Statistical conclusion validity is a determination of whether a relationship or co-variation exists between cause and effect variables.

This type of validity requires:

  • Ensuring adequate sampling procedures
  • Appropriate statistical tests
  • Reliable measurement procedures

This is the degree to which a conclusion is credible or believable.
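
As a minimal illustration, the Python sketch below tests whether a presumed cause and a presumed effect actually co-vary, using a Pearson correlation from scipy. The variables (study hours, exam score) and their simulated relationship are hypothetical.

```python
# A minimal, hypothetical sketch: does the presumed cause co-vary with the presumed effect?
# The variable names and the simulated relationship are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
study_hours = rng.uniform(0, 20, size=150)                           # presumed cause
exam_score = 60 + 1.5 * study_hours + rng.normal(scale=8, size=150)  # presumed effect

r, p_value = stats.pearsonr(study_hours, exam_score)
print(f"r = {r:.2f}, p = {p_value:.4g}")
```

A small p-value only tells you the covariation is unlikely to be chance; whether the sampling and measurement procedures behind the numbers were adequate is still part of statistical conclusion validity.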

7. Criterion-related validity

Criterion-related validity (also called instrumental validity) is a measure of the quality of your measurement methods.  The accuracy of a measure is demonstrated by comparing it with a measure that is already known to be valid.

In other words, your measure is valid to the extent that it correlates highly with other measures already established as valid by previous research.

For this to work, you must know that the criterion itself has been measured well. And be aware that appropriate criteria do not always exist.

What you are doing is checking the performance of your operationalization against criteria.

The criterion you use as a standard of judgment determines which of the following approaches you would take:

  • Predictive Validity – an operationalization’s ability to predict what it theoretically should be able to predict; the extent to which a measure predicts expected future outcomes.
  • Concurrent Validity – an operationalization’s ability to distinguish between groups it theoretically should be able to distinguish between. In practice, this is where a test correlates well with an already validated measure taken at the same time (see the sketch below).
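
As a simple illustration of the concurrent case, the Python sketch below correlates a hypothetical new questionnaire with an established, previously validated measure; the variable names and simulated values are assumptions for the example only.

```python
# A minimal, hypothetical sketch: correlate a new questionnaire with an already
# validated criterion measure. All names and simulated values are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
interview_score = rng.normal(size=120)                                 # validated criterion
new_questionnaire = interview_score + rng.normal(scale=0.7, size=120)  # new measure

r, p_value = stats.pearsonr(new_questionnaire, interview_score)
print(f"criterion validity coefficient: r = {r:.2f} (p = {p_value:.4g})")
```

The resulting correlation is reported as the criterion validity coefficient for the new measure: the higher it is, the more confident you can be that the new measure captures the same thing as the established one.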

When we look at validity in survey data we are asking whether the data represents what we think it should represent.

We depend on the respondent’s mindset and attitude in order to give us valid data.

In other words, we depend on them to answer all questions honestly and conscientiously. We also depend on whether they are able to answer the questions that we ask. When questions are asked that the respondent cannot comprehend, the data does not tell us what we think it does.