Impact assessment serves funders
Impact measurement has become the dominant framework for evaluating social programs, but its primary function isn’t to improve outcomes for beneficiaries. It’s to maintain funder control over organizations and legitimize funding decisions that serve donor interests rather than community needs.
──── The measurement trap
Organizations seeking funding must conform to metrics designed by funders, not by the communities they serve. This creates a system where success is defined by what donors want to see measured, not by what communities actually need.
Theory of Change documents force organizations to predict social transformation in ways that satisfy funder logic rather than reflect complex social reality. The required causal chains oversimplify human behavior to fit funding timelines and measurement capabilities.
Logic models transform community organizing into linear processes that can be tracked and quantified. This framework eliminates the emergent, unpredictable aspects of social change that often represent the most meaningful transformations.
Impact assessment doesn’t measure impact. It measures compliance with funder expectations.
──── Metrics as control mechanisms
Impact metrics function as remote control systems that allow funders to shape organizational behavior without direct management oversight.
Predetermined indicators force organizations to focus on measurable outputs rather than meaningful outcomes. Groups working on systems change get pressured to demonstrate short-term individual improvements rather than long-term structural shifts.
Reporting requirements consume organizational resources that could be devoted to program activities. By some estimates, small nonprofits spend 20 to 40 percent of staff time on measurement and reporting rather than service delivery.
Data collection mandates transform service recipients into data points, often requiring invasive personal information sharing that damages trust between organizations and communities.
The measurement system ensures that organizations remain focused on what funders want to track rather than what communities need to change.
──── Professionalization of social change
Impact assessment requires specialized knowledge that excludes community members from evaluating their own programs. This creates a professional class of evaluators whose livelihood depends on maintaining complex measurement systems.
Evaluation consultants sell assessment services to organizations that lack internal capacity for funder-required measurements. This creates a secondary market that extracts resources from direct service provision.
Academic partnerships for evaluation legitimize programs through university affiliation while generating research opportunities for faculty. The academic benefit often exceeds any program improvement from the evaluation.
Certification programs for impact measurement create credentialing systems that exclude community knowledge in favor of technical expertise.
The professionalization ensures that evaluation serves professional interests rather than community needs.
──── Quantification bias
Impact assessment prioritizes quantifiable outcomes over meaningful but unmeasurable changes. This creates systematic bias toward interventions that produce easily tracked metrics rather than deep social transformation.
Individual behavior change gets valued over systems change because personal transformations can be measured while structural shifts resist quantification. Organizations get rewarded for changing people rather than changing systems.
Short-term outcomes receive more attention than long-term impacts because measurement timelines align with funding cycles rather than social change timeframes.
Standardized indicators across diverse contexts ignore local knowledge and community-defined success metrics in favor of funder-comparable data.
The bias toward quantification systematically undervalues the types of change that matter most to affected communities.
──── Competitive measurement
Impact assessment creates competition between organizations for limited funding based on metric performance rather than community relationships or actual effectiveness.
Ranking systems force organizations to demonstrate superiority over other groups rather than collaborate for community benefit. This undermines collective action in favor of individual organizational advancement.
Best practices identified through impact measurement get universalized across different contexts, ignoring local conditions and community preferences.
Evidence-based interventions prioritize programs with strong measurement capacity over community-led initiatives that resist evaluation. This systematically advantages well-resourced organizations over grassroots groups.
Competition through measurement serves funder interests in maintaining multiple service providers while undermining community solidarity.
──── Data extraction systems
Impact measurement creates data extraction relationships where communities provide information that benefits funders and researchers more than program participants.
Baseline data collection often duplicates social-service intake processes, requiring vulnerable people to repeatedly share personal information for different reporting systems.
Follow-up surveys track people after program completion, creating ongoing surveillance relationships that extend organizational reach into personal lives.
Aggregate reporting allows funders to claim credit for community improvements without acknowledging the actual sources of change or community agency in transformation.
The data flows upward to funders while communities rarely receive meaningful feedback about evaluation results or how their information gets used.
──── Legitimacy production
Impact assessment produces legitimacy for funding decisions that might otherwise appear arbitrary or self-serving. The technical complexity of measurement systems obscures the subjective value judgments embedded in evaluation frameworks.
Scientific language makes funder preferences appear objective and evidence-based rather than reflective of donor interests and assumptions about social change.
Randomized controlled trials import medical research methodology into social contexts where control groups may be unethical and randomization ignores community self-determination.
Cost-effectiveness analysis reduces complex social interventions to financial calculations that favor cheaper programs over transformative but resource-intensive approaches.
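To see the arithmetic at work, here is a minimal sketch (all program names, costs, and counts are hypothetical): dividing total cost by counted outcomes mechanically favors whichever program produces results the metric can count, regardless of depth or durability.

```python
# Hypothetical cost-per-outcome comparison. All figures are invented
# to illustrate the bias, not drawn from any real program.

programs = {
    # name: (total cost in dollars, outcomes the funder's metric counts)
    "job-readiness workshops": (50_000, 500),  # short-term, easily counted
    "tenant organizing":       (50_000, 12),   # structural wins resist counting
}

for name, (cost, counted) in programs.items():
    print(f"{name}: ${cost / counted:,.0f} per counted outcome")

# job-readiness workshops: $100 per counted outcome
# tenant organizing: $4,167 per counted outcome
#
# The ratio says nothing about how deep or durable the change is;
# it rewards only what the chosen metric can count.
```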
The scientific legitimacy allows funders to claim their decisions are based on evidence rather than personal preferences or political priorities.
──── Innovation theater
Impact measurement often functions as innovation theater that creates the appearance of learning and improvement while maintaining existing power structures and funding patterns.
Pilot programs with extensive evaluation requirements allow funders to appear cutting-edge while testing interventions on vulnerable populations before wider implementation.
Continuous improvement language suggests ongoing refinement while evaluation results rarely lead to fundamental program redesign or strategy shifts.
Learning communities create networks for sharing measurement practices rather than community organizing strategies or structural analysis.
The innovation focus diverts attention from structural inequality toward technical solutions that don’t threaten existing systems.
──── Community resistance strategies
Some organizations and communities have developed strategies to resist or subvert impact measurement requirements while maintaining funding relationships.
Dual reporting systems provide funders with required metrics while maintaining internal evaluation frameworks that serve community interests.
Participatory evaluation engages community members in defining success metrics and evaluation questions, though this often gets reduced to token involvement in funder-designed processes.
Narrative reporting supplements quantitative data with stories that convey change processes measurement cannot capture, though these stories often get marginalized in funding decisions.
Coalition advocacy challenges measurement requirements through collective action, though this risks funder retaliation against individual organizations.
──── Alternative evaluation frameworks
Community-controlled evaluation would prioritize community-defined success metrics over funder requirements:
Community indicators developed by affected populations would measure changes that matter to local residents rather than external stakeholders.
Participatory action research would engage communities in defining research questions, collecting data, and interpreting results for their own strategic planning.
Movement evaluation would assess contributions to broader social change rather than isolated program outcomes.
Systems thinking would examine how programs interact with broader structural forces rather than isolating intervention effects.
──── The accountability question
Impact assessment claims to ensure accountability, but primarily creates accountability to funders rather than communities. True accountability would prioritize community needs over donor preferences.
Upward accountability to funders gets prioritized over downward accountability to program participants and affected communities.
Financial stewardship receives more attention than program quality or community relationship building.
Compliance monitoring ensures adherence to funder requirements rather than responsiveness to changing community needs.
Real accountability would center community voices in evaluation design, implementation, and utilization rather than treating communities as data sources for funder reporting.
────────────────────────────────────────
Impact assessment has become a sophisticated system for maintaining funder control over social change organizations while creating the appearance of evidence-based decision-making and accountability.
The measurement systems don’t improve programs or increase community power. They discipline organizations to focus on funder priorities and legitimize funding decisions that often contradict community needs and self-determination.
Understanding impact assessment as a control mechanism rather than an improvement tool reveals how measurement systems can serve power rather than justice, even when wrapped in the language of effectiveness and accountability.
The question isn’t whether organizations should be accountable, but whether accountability should serve community empowerment or donor control.