Much of my work on monitoring and evaluation is with organisations that work on human rights and governance. It is harder for them to show they make a difference than it is for people who build roads or wells. Building a road is complicated, but the builders have reasonable control over the process, and at the end they can say, "we have built a 50-kilometre road". Human rights and governance work is more indirect: it is not the intervention of human rights organisation "X" that frees a prisoner - it is a prison guard, the last piece in a huge puzzle of actors and actions. It is not campaign "Y" that ends domestic violence in a woman's life - it is the woman herself, when she leaves an abusive relationship, or the abusive partner, when he stops battering, for reasons that are far beyond the reach of campaign "Y". And you can't blame, say, Amnesty International if the US government fails to close down the Guantánamo detention camp.
The highly commendable DFID Working Paper 38, Broadening the Range of Designs for Impact Evaluations (2012), uses the term "contributory causes" to designate initiatives that need many "helping factors" to reap success. Such causes cannot be adequately represented by log-frame-type "input > output > impact" cascades, which are premised on the assumption that a single strand of activities controlled by a small number of actors produces the desired change. Plenty of people have written about such complexity issues (try, for example, Ben Ramalingam's blog Aid on the Edge of Chaos). There is a growing consensus that it takes rich sets of methods and tools to make informed judgements as to how change happens in such complex set-ups. By the way, DFID Working Paper 38 shows a few interesting avenues in that direction.
But for some reason many evaluators still seem to believe that impact is simply achieved by setting goals "in terms of improving a specific number of people's lives, and then [basing the organisation's] decisions in project planning around those numerical targets". (This is a verbatim quote from a report by a reputable international consultancy firm, in its evaluation of a worldwide movement of rights activists.) I find that worrying. One cannot measure the impact of complex interventions by counting people, just as one cannot measure time with a yardstick. Forcing activists and their movements (often capable of generating ideas that are way beyond the average evaluator's imagination) into a single, linear pattern of thinking and working can be destructive. I doubt we will improve this world by admonishing social innovators to stop doing things that cannot be counted. Better to start by trying to understand what activists are doing and why they are doing it - for example, through participatory "sense-making" exercises or other forms of qualitative research - and then define together (i) whether it is really necessary to measure "impact" (rather than the more immediate changes the intervention may bring about), and (ii) if "impact" must be measured, how best to capture it over the years in a way that makes sense for the specific intervention. Everything else risks being extremely expensive (multi-year academic research) or tokenistic (i.e. an unjustifiable waste of resources).