Is Violent Crime Going Up or Down in America? Nobody Actually Knows, But the Debate Illustrates How Grant Proposal Needs Assessments Are Written
A new Grant Writing Confidential Post: Go to www.seliger.com to sign up for FREE WEEKLY GRANT ALERTS and click on BLOG to read more than 450 posts about grant writing at Grant Writing Confidential.
____________________________
One of our past posts described how to write proposal needs assessments. A spate of recent articles on the so-called Ferguson Effect provides a good example of how proficient grant writers can use selected data and modifying words to shape a needs assessment to support whatever the project concept is.
Last week Heather Mac Donald’s Wall Street Journal editorial “Trying to Hide the Rise of Violent Crime” claimed that violent crime is rising, due to “the Ferguson Effect,” but that “progressives and media allies” have launched a campaign to deny this reality. Right on cue, the New York Times ran a front-page “news” story telling grumpy New Yorkers that “Anxiety Aside, New York Sees Drop in Crime.” Both articles cite the same Brennan Center for Justice study, Crime in 2015: A Preliminary Analysis, to support their arguments.
This reminds me of the old joke about how different newspapers would report that the end of the world will happen tomorrow: the New York Times, “World Ends Tomorrow, Women and Minorities Hurt Most”; the Wall Street Journal, “World Ends Tomorrow, Markets Close Early”; and Sports Illustrated, “Series Cancelled, No World.” One can frame a set of “facts” differently, depending on one’s point of view and the argument being made.
Neither the NYT nor the WSJ writers actually know whether violent crime is going up or down in the short term. Over the past few decades, it is clear that crime has declined enormously, but it isn’t clear what causal mechanisms might be behind that decline.
Perhaps, like Schrödinger’s cat, which in the famous quantum mechanics thought experiment is alive and dead at the same time, crime is up and down at the same time, depending on who’s doing the observing and how they’re observing.
One of the challenges is that national crime data, as aggregated in the FBI’s Uniform Crime Reporting (UCR) system, is inherently questionable. First, police departments report these data voluntarily, and many crimes are subject to intentional or unintentional miscategorization (was it an assault or an aggravated assault?) or under- or over-reporting, depending on how local political winds are blowing (for one public example of this in action, see “NYPD wants to fix stats on stolen Citi Bikes,” which describes how stealing a Citi Bike counts as a felony because each one costs more than $1,000). A less-than-honorable police chief, usually in cahoots with local pols, can make “crime rates” go up or down. Then there is the problem of using averages, which leads to another old joke about the guy with his head in the oven and his feet in the freezer: on average, he felt fine.
But from your perspective as a grant writer, the important question isn’t whether crime rates are actually rising or falling, or whether “the Ferguson Effect” is driving them up. If residents of a given city/neighborhood feel vulnerable to perceived crime increases, the increases are “real to them” and can form the basis for a project concept for grant seeking. Plus, when data to prove the need is hard to come by, we sometimes ask our clients for anecdotes about the problem and add a little vignette to the needs assessment. A call to the local police department’s gang unit will always produce a great “end of the world” gang-issue quote from the sergeant in charge, while a call to the local hospital will usually yield a quote about an uptick in gunshot victims being treated, and so on. In proposals, anecdotes can sometimes substitute for data, although this is not optimal.
Within reason and the rather vague ethical boundaries of grant seeking and writing, a good grant writer can and should pick and choose among available data to construct the needs assessment argument for funding anything the agency/community sees a need for.
For example, if we were writing a proposal for an urban police department to get more funds for community policing, we would use crime rate data, whether up or down, to demonstrate the need for a new grant. If crime is trending down, we’d use the data to argue that the police department is doing a good job with community policing but needs more money to do an even better job, while also providing technical assistance to other departments. If the crime data is trending upward, we’d argue that there’s a crisis and that the grant must be made to save life and limb. If we were working for a nonprofit in the same city that wants grants for after-school enrichment for at-risk youth, we’d cherry-pick the crime data to argue that a nurturing after-school setting is necessary to keep those youth protected from the false allures of gangs, early risky sexual experimentation, and/or drugs.
Most grant needs assessments are written backwards. One starts with the premise for the project concept and structures the data and analysis to support the stated need. It may be hard for true believers and novice grant writers to accept, but grant writing is rarely a blue sky/visioning exercise. The funder really sets the parameters of the program. The client knows what they want the grant for. It’s the job of the grant writer to build the needs assessment by including, excluding, and/or obfuscating data. This approach works well, because most funders only know what the applicant tells them in the proposal. Some grant programs, like our old pals DOL’s YouthBuild and ED’s Talent Search, try to routinize needs assessments and confound rascally grant writers by mandating certain data sets. We’re too crafty, however, and can usually overcome such data requirements through the kind of word and data selections that Mac Donald cites in her article.
Here's the link to the post: https://bit.ly/1TnDQls