Using Data to Build a Better Toronto




Other cities are making open data a priority—and Toronto needs to step up its game.

When the New York City Fire Department began investigating ways to save more lives, it turned to data. As part of a strategic undertaking, New York’s data team went to the front lines and talked to firefighters and building inspectors to understand which features were common to buildings that caught fire.

As a result of these discussions, the team identified about 60 factors (using data from several divisional data sources) and built an algorithm that would identify priority buildings to inspect. The inspection process went from somewhat random to informed, and the number of building fires has been reduced.
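New York’s approach can be sketched in miniature. In the sketch below, the factor names and weights are hypothetical stand-ins for the roughly 60 real factors the NYC team identified; the point is the mechanism, not the numbers.

```python
# Sketch of a building-inspection priority score, in the spirit of
# New York's approach. Factor names and weights are hypothetical.
WEIGHTS = {
    "building_age_years": 0.02,
    "open_violations": 0.5,
    "illegal_conversion_complaints": 0.8,
    "no_sprinklers": 1.2,
}

def risk_score(building):
    """Weighted sum of risk factors for one building record."""
    return sum(WEIGHTS[f] * building.get(f, 0) for f in WEIGHTS)

def prioritize(buildings):
    """Return buildings sorted from highest to lowest estimated risk."""
    return sorted(buildings, key=risk_score, reverse=True)

buildings = [
    {"id": "A", "building_age_years": 90, "open_violations": 3, "no_sprinklers": 1},
    {"id": "B", "building_age_years": 20, "open_violations": 0, "no_sprinklers": 0},
    {"id": "C", "building_age_years": 50, "open_violations": 6,
     "illegal_conversion_complaints": 2, "no_sprinklers": 1},
]

queue = prioritize(buildings)
print([b["id"] for b in queue])  # inspection order, riskiest first
```

A real system would learn the weights from historical outcomes rather than hand-code them, but even this toy version shows how an inspection queue goes from "somewhat random" to informed.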

A similar process would work wonders in Toronto. But that can’t be done without more access to data. Our city needs more open data—that is, data freely available to anyone and everyone.

The City of Toronto has not made data accessibility a priority. Yet access to open data is a first step toward helping our city function more efficiently.

The fire data project was not a one-off in New York. The City of New York has a strategic approach to using data in City business through the Mayor’s Office of Data Analytics.

“The Mayor’s Office of Data Analytics (MODA), the Department of Information Technology and Telecommunications (DOITT), and NYC Digital work together to collect, analyze, and share NYC Data, to create a better City supported by data-based decision making, and to promote public use of City data,” the site reads.


Such an approach would work in our city, too. Toronto’s government faces complex problems, in every division, that involve data from across the organization.

For one, Toronto has known safety and regulatory issues with rooming houses. As outlined by John Lorinc, using data to identify risky buildings, coupled with regulations that support safe shared living spaces without displacing residents, is both possible and urgently needed.

One way to address these problems is to create a small team that specializes in data, connecting it on a project-by-project basis with policy experts. The policy experts can identify urgent policy challenges and the related available data; the data experts can then propose data-driven approaches to inform solutions. It’s a relatively low-cost program with big potential impact.

City staff already do a lot to support residents within their current job descriptions. Adding open data publication to their workload when the return is uncertain is questionable, and asking them to open up data that was never collected, managed, or stored with publication in mind is inefficient.

But using data within the City to serve residents and building software systems that do the same makes a lot of sense. Making sure these systems and processes are built so they publish open data is a defensible way to get to the same outcome: opening data for public use. Open data then becomes a job of the City as a whole through its technology approach, not a piecemeal side-of-the-desk adventure that many staff would rightly challenge.
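What "open by design" could look like in practice: the same routine that records internal data also strips non-public fields and writes an open export, so publication is a by-product of the workflow rather than extra work. The field names below are hypothetical.

```python
import csv
import io

# Hypothetical record schema; "inspector_notes" is internal-only
# and must never reach the open export.
PUBLIC_FIELDS = ["permit_id", "address", "status"]

def export_open_data(records, stream):
    """Write only the public fields of each record as CSV,
    so an open data set falls out of the normal workflow."""
    writer = csv.DictWriter(stream, fieldnames=PUBLIC_FIELDS)
    writer.writeheader()
    for rec in records:
        writer.writerow({f: rec.get(f, "") for f in PUBLIC_FIELDS})

records = [
    {"permit_id": "BP-001", "address": "100 Queen St W",
     "status": "issued", "inspector_notes": "internal only"},
]

buf = io.StringIO()
export_open_data(records, buf)
print(buf.getvalue())
```

The design choice is that the allow-list of public fields lives in the system itself, reviewed once, instead of each release being a manual, per-request cleanup.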

As of publication, the City’s open data portal has 217 open data sets. The data available is a mixed bag: it includes data sets about cooling centre locations, the TTC, public consultation results from the casino debate, one-way streets, food banks, election results, and the lobbyist registry.

The City’s 2015 Open Government report [PDF] released last week identified the top five most popular data sets of 2015: on-street parking permit maps, 3D building massing, zoning boundaries, building permits, and a map of Toronto’s streets.


But one data set receiving little fanfare is the “Inventory of Applications.” It is the only data set that IT Strategic Planning and Architecture, Corporate I and T has released to the open data portal. This is unfortunate, given that “the Information and Technology Division acts as a city-wide co-ordination point for driving business improvements by assisting city divisions in re-designing business operations, translating business needs into information and technology solutions and implementing those solutions.” The data set is old, dating from 2012, but it is one of the few unfettered, uneditorialized glimpses available into how the City is using technology.

An updated version of this data set would begin to provide more insight into how the City is actually using data. Related information about the City’s IT strategy that could begin to flow from this data might include:

  • how the City plans to make open data by default a structural part of its IT policy (or how it doesn’t)
  • how it’s making open data publishing features a procurement requirement for government IT vendors (or isn’t)
  • how divisional data systems and users work together (or don’t)
  • how any Smart City work done with corporations that collect resident data also engages residents in the design of data management and privacy (or doesn’t)

At this stage in the open data movement, there needs to be strategic thinking about which open data sets will provide the most insight into government IT operations.

There are other areas to be strategic about as well. Aligning open data releases with public consultation topics and aligning open data releases with proposed policy motions would signal a true commitment to open government for engagement. It would feed open data into active, high-profile conversations and allow the civic tech community to use the data to help inform these discussions. This isn’t happening yet either.

But the first priority is that the City of Toronto use data more directly and frequently in its work—not as PR for its degree of openness and engagement but as a vital operational input. In doing so, it can also create systems that make the release of open data regular, easy, and intentional—by design.