That's a timely question, because I've just come from a meeting with my colleagues at DFATD, where we had a real brainstorming session on what the next phase will look like. Essentially, we've learned a lot of lessons over the past few years about what works and what doesn't. I'm going to share two examples that I think are coming to fruition right now and that will take us into the future.
The first example is a metrics portal that our network developed. The portal is an online, open-access data system that allows our partners to enter data on common indicators. We selected the 11 indicators that were published in a document by the Commission on Information and Accountability for Women's and Children's Health.
Here I'll step back for a moment. When the Every Woman Every Child global strategy was launched in 2010, Canada and Tanzania took the lead on creating the Commission on Information and Accountability. That commission defined 11 indicators that all countries and all partners were encouraged to measure and report on. Our network came into being a few years after that, and we created this portal so our partners could enter that data online. It was a pilot project: 22 partners entered data from 49 countries. We learned a lot about the challenges of data collection, reporting, and dissemination, but it gave us a really good template that will take us into the next phase of our efforts.
The other project I want to tell you about involves four of Canada's largest NGOs. I won't get all four names right, so I won't name them, but these NGOs worked with an academic institution, SickKids' global child health program, to collect and collate the data from their partner projects and then address research questions, so they could better explore what worked and what didn't. Again, it was a really challenging initiative. Some of the challenges arose because the consortium, as they call it, started after the projects had already rolled out, so they had to retrofit the evaluation, which is not the best way to do evaluation work. But in a short time frame they've been able to consolidate the work they're doing and create really useful outputs that will let us evaluate those programs more effectively than we could in the past. It's a really good example of university and NGO collaboration.