Toronto's Budget Survey Deeply Flawed

Since long before he was elected mayor, Rob Ford has championed the idea that Toronto was spending its way towards fiscal disaster. Believe that or not (and many don’t), Ford swept into office on a wave of anti-gravy promises, so it’s no surprise that he’s launched a massive review of City-run services—via a series of roundtable discussions and an extensive online survey—with the aim of determining which are Torontonians’ greatest priorities, and which might be suitable for spending cuts. That is, after all, what people voted for.
However, despite a lot of noise about broad public consultation, the review is not likely to generate much meaningful public input. For one thing, the roundtables have been crammed into a whirlwind two-week schedule, with participation capped by registration limits (there is one remaining session, taking place tomorrow night at 7 p.m. at the Scarborough Civic Centre). Moreover, the City has chosen to purchase a DIY survey tool rather than commissioning a qualified polling firm to design its questions properly.

The massive survey asks people to weigh in on which services Toronto should drop or contract out to close a $774-million budget gap. City Hall could have handed this critical part of the review to any of Toronto’s numerous research firms with strong track records in public affairs—a shortlist would include Ipsos Reid, Harris/Decima, Polaris, Environics, and Vision Critical. Instead, the City opted to purchase a simple tool that allowed it to design the survey itself, from a company called Qualtrics.
The resulting document, according to Glenys A. Babcock, a former VP at Ipsos Reid who now works as a consultant, is poorly designed and suffers from inherent political biases.

You Can Say Anything We Want

Do you want to tell City Hall that public transit is an important issue for Toronto as a whole? Well, you can’t. The survey lists seven broad issues and asks respondents to rate them by importance; transit is not among them. Your only option is to tick off “infrastructure,” which includes everything from water to roadways. Are affordable daycare, support for the elderly, or universal accessibility important to you? We can’t even guess which category those fall under. “Meeting the basic needs of vulnerable people” seemed likely, but later on in the survey it becomes clear that “vulnerable people” is a code word for “crime-prone youth in poor neighbourhoods.” A blank space after the question allows for write-ins, but it doesn’t let you rate issues by importance, and provides little to no basis for comparison.
“We have to ask why […] such an obviously lousy survey was sent out,” Babcock says. “This is about Rob Ford and accountability. Where is the accountability here?”
As she goes through the survey, Babcock’s frustration grows. She chalks up most of its flaws to inexperience, but some oddities make her suspicious. The online survey lets you click through almost every screen without ever answering a question, but you must provide your postal code. Already annoyed at the survey’s weak privacy policy, Babcock was less than thrilled by this. “I thought, ‘Isn’t that interesting—they want to know what ward I’m in.'”
Though it aims to sort respondents, to discern the different needs and opinions of various demographics, the survey’s categories seem illogical. You wouldn’t normally group 15-year-old high-schoolers with 24-year-old university grads who live and work on their own, would you? Or, to give another example, if you rode the TTC once over the past year, would you put yourself in the same ridership group as people who rely on it every day, or those who buy tickets for their children? If you were designing this survey, apparently, you would. “How are these the same people?” Babcock wants to know.

Garbage In, Garbage Out

When it offers more than a handful of possible responses, the survey goes overboard and sabotages itself. It asks for in-depth feedback on 35 different service categories, each of which is subdivided into “activities,” to create a laundry-list of decisions on what services the City should provide, farm out, cut, or improve. This single question from the online survey fills almost three printed pages. Included are such items as the police, the fire department, and Emergency Medical Services (all as separate entries), and a single entry encompassing all “arts, culture, and heritage programs” but another one for “city-run live theatres.” Garbage collection is there, and public health, and “funding and programs for vulnerable groups,” and the Toronto Zoo.
It’s a little surprising to see bedrock services like health and firefighting on this list at all—what would happen if everyone said that the City should drop them? Presumably, the City would ignore those responses and keep providing the services. So why pad out the list? With so many choices in front of them, many people would be reluctant to rate every service highly, which means that on an overcrowded list, some entries will get bumped down arbitrarily.
In Babcock’s view, the main consequence of such overcrowding will be a tendency for respondents to answer randomly, seeing a wall of options to get through, instead of a set of core services that need case-by-case evaluation.
“The results are likely to have an enormous random element to them and not provide meaningful input,” she says. Or, as she also put it: “garbage in, garbage out.”

More = Less

After deciding which of the services should be City-run, respondents are asked to choose only three from the large list for further discussion. At that stage, it’s not hard to see how things that matter a little bit to a lot of people will overshadow those that are crucial to a few.
Asked to choose between police services and “community-run heritage programs,” how many will voice their opinions on the latter? The survey, in other words, can push respondents towards thinking in terms of the bare minimum level of acceptable service. Is the outcome of such an exercise likely to be something most Torontonians will be happy with?
One of the most striking features of the survey: respondents are asked, for any given service, whether “maintaining the quality is more important” or “lowering the cost to the City is more important.” Think the service should be improved? There’s no check-box for that. It provides another misleading set of choices when it asks respondents how they would choose to pay for any cost increases—via increased property taxes, higher user fees, or a combination thereof. Conspicuously absent: the array of other revenue-generating tools the City has at its disposal, such as the now-cancelled Vehicle Registration Tax or the Land Transfer Tax Ford has promised (but cannot afford) to cut. In other words, the survey’s designers have picked from the full range of options the City could consider, and present only some of them to the public for deliberation.
There are other issues. The survey ascertains respondents’ nationality, but never mentions settlement or newcomer services. It asks whether they have received a “university diploma” instead of a degree. Some of these things would be funny, if they didn’t point to an unsettling lack of attention to detail in such an important document. Overall, the whole thing is a toxic blend of incompetence and self-assurance, delivered with a populist spin and a political agenda.

Ford has enjoyed a honeymoon of sorts, with his popularity buoyed by hopes that he will steer Toronto’s economy towards the right without causing much damage. The structure of this review makes it clear that these are fantasies indeed. The survey, in the guise of speaking to the public, does little more than steamroll over Toronto’s diversity of perspectives.
Although Babcock filled out the survey, she did so with a deepening sense of futility. As that feeling spreads, it will poison future attempts to connect City Hall with Toronto residents. Rather than squeezing savings from the budget using faulty analysis, those behind the review should ask themselves whether the City, in the long run, can afford to burn so much public trust over a manufactured panic and some poorly chosen questions.

CORRECTION: June 8, 2011 When we originally published this post we attributed the poor design of the survey to Qualtrics, the firm the City chose to help facilitate this part of the consultation process. However, Qualtrics sells do-it-yourself survey tools and did not itself design the survey; the company is therefore not accountable for the survey’s poor design. We have amended this article to reflect this, and send our apologies to Qualtrics.