Evaluation serves requirements
Every evaluation system is designed backwards. We pretend to discover merit, but we actually manufacture it to serve predetermined institutional requirements.
The evaluation doesn’t reveal what’s valuable. The evaluation creates what becomes valuable by defining the criteria for value recognition.
──── Requirements precede evaluation
Organizations don’t design evaluation systems to find the best candidates. They design evaluation systems to justify hiring the candidates that meet their operational requirements.
Tech companies need people who will work 80-hour weeks without burning out. They don’t evaluate for programming skill—they evaluate for psychological compliance with unsustainable work cultures.
Universities need students who will generate tuition revenue and boost ranking metrics. They don’t evaluate for intellectual potential—they evaluate for ability to navigate bureaucratic systems and pay full fees.
Corporations need employees who will maximize shareholder value. They don’t evaluate for innovation—they evaluate for alignment with existing profit structures.
The evaluation criteria get reverse-engineered from the institutional requirements.
──── Merit becomes circular
Merit gets defined as “whatever our evaluation system measures,” and our evaluation system measures “whatever we define as merit.”
Standardized tests measure test-taking ability, then claim test-taking ability represents intelligence. Intelligence gets redefined as whatever standardized tests measure.
Performance reviews measure compliance with managerial expectations, then claim this represents job performance. Job performance gets redefined as whatever managers expect.
Academic peer review measures conformity to disciplinary norms, then claims this represents scholarly quality. Scholarly quality gets redefined as whatever peer reviewers approve.
The circular logic becomes invisible because the evaluation system itself defines the terms of evaluation.
──── Exclusion by design
Evaluation systems are primarily exclusion mechanisms. They exist to systematically filter out people who don’t serve institutional requirements while appearing to select for merit.
Credit scores don’t measure financial responsibility—they measure integration into debt-based economic systems. People who avoid debt get lower scores than people who successfully manage debt loads.
Job interviews don’t measure work capability—they measure cultural fit with existing organizational hierarchies. People who challenge institutional assumptions get filtered out regardless of their competence.
College admissions don’t measure learning potential—they measure family economic status and cultural capital. The evaluation reproduces class structure while appearing meritocratic.
The evaluation system’s primary function is maintaining institutional requirements, not discovering merit.
──── Value construction through measurement
Evaluation systems don’t measure pre-existing value. They construct value by making certain qualities measurable and others invisible.
GDP doesn’t measure national well-being—it measures economic activity that generates quantifiable transactions. Unpaid care work becomes worthless because it’s unmeasurable within the GDP framework.
Citation metrics don’t measure scholarly impact—they measure academic network effects and publication strategies. Knowledge that doesn’t circulate through formal publication channels becomes academically worthless.
Customer satisfaction scores don’t measure actual satisfaction—they measure willingness to participate in feedback systems. Customer experiences that don’t translate into numerical ratings become invisible to management.
Whatever can’t be measured by the evaluation system simply doesn’t exist as value within that system.
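A toy ledger makes the blind spot concrete (the activities and prices below are invented for illustration): only activity that produces a priced transaction enters the total, so identical care work counts when it is paid and vanishes when it is not.

```python
# Hypothetical household ledger: only monetized activity enters the GDP-style
# total; unpaid care work does identical labor but never registers as value.
activities = [
    {"name": "restaurant meal",     "market_price": 40.0},
    {"name": "paid childcare",      "market_price": 120.0},
    {"name": "unpaid childcare",    "market_price": None},  # same work, no transaction
    {"name": "caring for a parent", "market_price": None},
]

measured_output = sum(a["market_price"] for a in activities if a["market_price"] is not None)
print(measured_output)  # 160.0 -- the unpaid care work contributes exactly nothing
```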
──── Institutional survival drives design
Evaluation systems evolve to ensure institutional survival, not to optimize for stated objectives.
Hospital quality metrics measure billing efficiency and liability reduction, not patient health outcomes. Hospitals optimize for metrics that ensure financial survival while claiming to optimize for patient care.
Police performance evaluations measure arrest counts and clearance rates, not community safety or justice. Police departments optimize for metrics that justify their budgets while claiming to optimize for public safety.
Teacher evaluations measure test score improvements and administrative compliance, not actual learning. Schools optimize for metrics that ensure continued funding while claiming to optimize for education.
The evaluation system serves the institution’s survival requirements, not its stated mission.
──── Gaming becomes optimization
Once people understand how evaluation systems work, they optimize for the evaluation rather than the underlying goal.
Students optimize for grade maximization rather than learning. They develop sophisticated strategies for exploiting evaluation criteria while avoiding actual education.
Researchers optimize for publication metrics rather than knowledge creation. They fragment findings across multiple papers and cite their own work to boost their evaluation scores.
Employees optimize for performance review criteria rather than productive work. They focus on visible activities that evaluators notice while avoiding valuable work that doesn’t register in evaluation systems.
The evaluation system becomes the actual goal, replacing whatever it was supposed to measure.
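A minimal sketch of that substitution, with made-up weights: an agent splits a fixed effort budget between the underlying goal (learning) and the measured proxy (test preparation), and picks whatever split maximizes the score.

```python
# Illustrative proxy-optimization sketch; all weights are invented.
# The evaluation rewards test preparation far more than learning, so a
# score-maximizing agent shifts all effort to the proxy.

def outcomes(learning_effort: float, prep_effort: float) -> tuple[float, float]:
    """Return (measured score, actual learning) for one effort split."""
    measured_score = 0.3 * learning_effort + 0.7 * prep_effort
    actual_learning = 1.0 * learning_effort + 0.1 * prep_effort
    return measured_score, actual_learning

splits = [i / 10 for i in range(11)]  # effort budget of 1.0, eleven candidate splits
best_prep = max(splits, key=lambda prep: outcomes(1 - prep, prep)[0])

print(best_prep)                           # 1.0 -> all effort goes to test prep
print(outcomes(1 - best_prep, best_prep))  # (0.7, 0.1): the score rises, learning collapses
```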
──── Evaluation technology amplification
Digital evaluation systems amplify the disconnect between measurement and value by scaling up arbitrary criteria.
Algorithmic hiring systems optimize for keyword matching and pattern recognition rather than human capability. They systematically exclude qualified candidates who don’t fit algorithmic assumptions about merit.
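A minimal sketch of that kind of filter (not any real vendor's product; the keyword list and résumé snippets are invented): merit becomes literal overlap with the institution's own vocabulary.

```python
# Hypothetical keyword-matching screener: candidates are ranked by how much
# their wording overlaps a fixed keyword list, not by what they can do.
REQUIRED_KEYWORDS = {"kubernetes", "microservices", "java", "agile"}

def keyword_score(resume_text: str) -> float:
    """Fraction of required keywords appearing verbatim in the résumé."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) / len(REQUIRED_KEYWORDS)

candidates = {
    "A": "Maintained java microservices on kubernetes in an agile team",
    "B": "Designed and ran a distributed container platform serving forty million users",
}

# Candidate B describes comparable work in different vocabulary and is dropped:
# the filter measures fit with its own assumptions, not capability.
shortlist = [name for name, text in candidates.items() if keyword_score(text) >= 0.5]
print(shortlist)  # ['A']
```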
Social media metrics optimize for engagement and virality rather than information quality. They amplify content that triggers emotional responses while suppressing thoughtful analysis.
Credit scoring algorithms optimize for prediction accuracy within existing economic structures rather than for individual financial capability. They perpetuate discriminatory patterns while appearing objective.
Technology makes evaluation systems more efficient at serving institutional requirements while making their bias less visible.
──── Professional evaluation classes
Entire professional classes exist to design and operate evaluation systems that serve institutional requirements.
HR consultants develop evaluation frameworks that protect organizations from liability while appearing to optimize for talent. Their expertise is in evaluation design, not in the domains they’re evaluating.
Testing companies create standardized assessments that generate revenue through repeated administration rather than through accurate measurement of capability. Their business model depends on evaluation complexity, not evaluation accuracy.
Management consultants design performance metrics that justify predetermined organizational changes while appearing to measure objective performance. Their value is in evaluation legitimacy, not evaluation validity.
These professional classes have vested interests in maintaining evaluation systems that serve institutional requirements rather than discovering actual merit.
──── Resistance through evaluation
Even resistance movements get co-opted through evaluation systems designed to serve institutional requirements.
Diversity metrics measure demographic representation rather than systemic change. Organizations optimize for visible diversity while maintaining discriminatory structures.
Sustainability reporting measures compliance with reporting standards rather than environmental impact. Companies optimize for evaluation criteria while continuing environmentally destructive practices.
Social impact assessments measure program activities rather than community benefit. Nonprofits optimize for fundable activities while avoiding systemic challenges to the problems they claim to address.
Evaluation systems transform resistance into manageable metrics that serve institutional stability.
──── Value hierarchies through requirements
Evaluation systems create value hierarchies by making certain requirements visible and others invisible.
Medical school admissions prioritize MCAT scores and GPA over empathy and community connection. The evaluation system produces doctors optimized for test-taking rather than patient care.
Judicial appointments prioritize legal credentials and political connections over justice and community understanding. The evaluation system produces judges optimized for institutional loyalty rather than fairness.
Corporate leadership prioritizes financial metrics and strategic planning over worker welfare and social responsibility. The evaluation system produces executives optimized for shareholder value rather than stakeholder benefit.
The requirements embedded in evaluation systems shape entire professional cultures and social hierarchies.
──── Alternative evaluation possibilities
Different evaluation systems would produce different value hierarchies and different social outcomes.
Community-defined merit would prioritize local knowledge and relationship-building over abstract credentials. Evaluation systems designed by communities would serve community requirements rather than institutional requirements.
Process-focused evaluation would prioritize learning and growth over achievement metrics. Evaluation systems focused on development would serve human potential rather than institutional selection needs.
Collaborative evaluation would prioritize collective benefit over individual performance. Evaluation systems designed for cooperation would serve social harmony rather than competitive advantage.
These alternatives remain largely unexplored because existing institutions design evaluation systems to serve their own requirements.
────────────────────────────────────────
Evaluation systems don’t discover merit—they manufacture it according to institutional specifications. They create the values they claim to measure by defining the requirements for value recognition.
Understanding this reveals why reform efforts focused on “better evaluation” often fail. The evaluation system isn’t broken—it’s working exactly as designed to serve institutional requirements rather than the values it claims to measure.
The question isn’t how to evaluate better, but who gets to define the requirements that evaluation systems serve.