The Science

TL;DR: Why Our Assessment Actually Works

Most personality assessments make bold claims about accuracy, but we back ours up with rigorous scientific validation. Our methodology centers on factor analysis—the gold standard for verifying that psychological assessments actually measure what they claim to measure. Through Kaiser-Meyer-Olkin (KMO) tests and Bartlett's sphericity analysis, we've confirmed that our assessment has robust construct validity across all eight attributes.

What does this mean for you? When our assessment says someone is high in Influence or Precision, you can trust that result. Our latest validation study, involving over 400 participants, demonstrates strong test-retest reliability across all eight attributes—with many correlations in the exceptional range (above 0.90)—along with strong internal consistency. We don't just meet EEOC requirements for selection tools—we exceed them with comprehensive bias testing that shows minimal demographic impact across gender, ethnicity, and age groups.

The bottom line: This isn't just another personality test. It's a scientifically validated selection tool that measures eight distinct behavioral attributes, giving you confidence in your hiring and development decisions.

Now Let's Get Nerdy: The Deep Dive

Ready to peek behind the curtain? Here's what separates legitimate psychometric science from the assessment tools that just look scientific.

The Foundation: Factor Analysis

Factor analysis is like having a quality control inspector for psychological assessments. It examines whether the questions in our assessment actually cluster together in meaningful ways and whether they're measuring distinct, reliable constructs. Think of it as the difference between a legitimate medical test and a horoscope—one has statistical backing, the other just feels insightful.

Our latest factor analysis results tell a compelling story:

  • Kaiser-Meyer-Olkin (KMO) values ranging from 0.781 to 0.892 (anything above 0.7 is considered good; above 0.8 is excellent)
  • Bartlett's Test results showing significance at p < 0.001 across all eight attributes
  • Component matrices that demonstrate clean factor loadings without problematic cross-loadings
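For the statistically curious, Bartlett's test of sphericity can be computed directly from a correlation matrix. Here's a minimal Python sketch using numpy and scipy; the 2×2 correlation matrix and the `bartlett_sphericity` helper name are illustrative inventions, not our actual assessment data or code:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity: does the correlation matrix R
    (estimated from n observations) differ from the identity matrix?"""
    p = R.shape[0]
    # Test statistic: -((n - 1) - (2p + 5) / 6) * ln(det(R))
    statistic = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2  # degrees of freedom
    p_value = chi2.sf(statistic, df)
    return statistic, p_value

# Illustrative 2x2 correlation matrix (made-up numbers)
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])
stat, p = bartlett_sphericity(R, n=100)
```

A significant result (small p-value) means the items are correlated enough for factor analysis to be meaningful—that's what the p < 0.001 figure above refers to.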

Reliability: Does It Work Consistently?

A reliable assessment gives you the same results when someone takes it multiple times under similar conditions. We tested this with 200 participants who took the assessment twice with a 6-day interval, and all eight attributes passed with flying colors—many achieving correlations in the exceptional range (above 0.90).
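Test-retest reliability is simply the correlation between scores from the two administrations. A toy sketch in Python, with invented scores for a single attribute (the numbers are for illustration only):

```python
import numpy as np

# Hypothetical scores for one attribute at time 1 and time 2 (6 days apart)
t1 = np.array([52.0, 61.0, 48.0, 70.0, 55.0, 63.0])
t2 = np.array([50.0, 63.0, 47.0, 72.0, 56.0, 60.0])

# Test-retest reliability: Pearson correlation between the two sittings
r = np.corrcoef(t1, t2)[0, 1]
```

A coefficient above 0.90, as in this toy data, is the "exceptional range" described above.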

Internal Consistency (Cronbach's Alpha): All our attributes scored between 0.714 and 0.863, well above the 0.7 threshold for acceptable reliability. This means the questions within each attribute work together harmoniously to measure a single construct.
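Cronbach's alpha has a simple closed form: α = k/(k−1) × (1 − Σ item variances / variance of the total score). A self-contained sketch, with a made-up 4-respondent, 3-item score matrix for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 4 people answering 3 items on one attribute
scores = np.array([[3, 4, 3],
                   [4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 5]])
alpha = cronbach_alpha(scores)
```

When items move together (as in this toy data), alpha approaches 1; values above 0.7 are the conventional bar for acceptable reliability cited above.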

Discriminant Validity: Are We Measuring Different Things?

One of the biggest problems with many assessments is that they claim to measure distinct traits, but statistically, those traits are just measuring the same thing with different labels. Our correlation matrix shows moderate to low correlations between attributes, indicating that each measures something genuinely distinct.
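A simple way to check discriminant validity is to compute the full attribute-by-attribute correlation matrix and confirm the off-diagonal entries stay modest. A sketch with simulated attribute scores (generated independently here purely for illustration, so low correlations are expected by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated scores on three attributes for 200 people, drawn independently
scores = rng.normal(size=(200, 3))

# Attribute-by-attribute correlation matrix
corr = np.corrcoef(scores, rowvar=False)

# Off-diagonal entries are the inter-attribute correlations to inspect
off_diag = corr[~np.eye(3, dtype=bool)]
```

If two "distinct" attributes routinely correlate above roughly 0.7, they are likely one trait wearing two labels.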

Take pace/patience versus adaptability/routine preferences, for example. Many older assessment models incorrectly assume these are the same behavioral dimension—that someone who prefers a steady pace must also prefer routine, or that adaptable people are automatically fast-paced. Our validation studies reveal these are actually distinct behavioral traits that can exist independently. Someone can be highly patient and methodical while also being highly adaptable to change, or prefer a fast pace while also craving routine and structure.

This research has led us to innovations in behavioral psychology that challenge some outdated models still used by legacy assessment providers. We're not just measuring behavior—we're advancing the science of how behavioral traits actually relate to each other in real people.

EEOC Compliance: Fair and Unbiased

We conducted comprehensive bias testing across demographic groups using Cramér's V analysis (a statistical measure of association strength). Our results show:

  • Gender bias: Cramér's V values between 0.04 and 0.15 (anything below 0.3 is considered weak association—which is good, meaning minimal bias)
  • Ethnic bias: Values between 0.07 and 0.18
  • Statistical significance testing: Most attributes show no significant differences across demographic groups
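Cramér's V is derived from the familiar chi-squared statistic: V = sqrt(χ² / (n × (min(rows, cols) − 1))). A minimal sketch using scipy, with made-up contingency tables for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for an r x c contingency table
    (e.g., demographic group vs. assessment outcome category)."""
    table = np.asarray(table, dtype=float)
    chi2_stat = chi2_contingency(table, correction=False)[0]
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2_stat / (n * k))

# Perfect association (V = 1) vs. complete independence (V = 0)
v_strong = cramers_v([[10, 0], [0, 10]])
v_none = cramers_v([[5, 5], [5, 5]])
```

Values near 0 indicate the assessment outcome is unrelated to group membership, which is exactly what the sub-0.3 figures above demonstrate.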

This isn't just about legal compliance—it's about fairness. When you use our assessment, you're using a tool that evaluates people based on job-relevant traits, not demographic characteristics.

Continuous Improvement: Science Never Stops

Unlike many assessment providers who validated their tools once and called it done, we maintain an active research program. We continuously monitor assessment performance, incorporate new findings from behavioral psychology, and conduct ongoing validation studies to ensure our methodology remains at the cutting edge.

Our commitment to scientific rigor means your assessment results today are more accurate than they were last year, and they'll be even better next year.

The Bottom Line for Your Organization

This level of scientific validation translates directly into practical benefits:

  • Confident hiring decisions backed by reliable data
  • Reduced turnover through better role fit prediction
  • Enhanced team dynamics with validated personality insights
  • Legal defensibility with full EEOC compliance documentation
  • Ongoing improvement through continuous validation research

Ready to dive even deeper? Download our complete validation study below for the full statistical analysis, methodology details, and comprehensive results.
