Sure, I'll follow up with a little more technical detail to address your second question.
Our projection is model-based: we have an estimated macro-model, and our key inputs are assumptions and outlooks for the U.S. economy and commodity prices. This contrasts with the Department of Finance's approach, which uses a survey of 15 or so private sector forecasters to prepare its outlook. One of the key weaknesses of that approach is that it doesn't necessarily ensure consistency in the forecast. You can have divergent views on, let's say, the exchange rate and commodity prices that the survey never has to reconcile, whereas we have to reconcile them within a single model. Some forecasters may not provide an outlook for certain variables at all. The survey itself simply doesn't ensure internal consistency the way a macroeconomic model does.
The last point I would make is that we did look at forecast performance and quality in a report last year. We found that, in terms of accuracy, at least for headline macroeconomic variables such as nominal GDP, we were in line with the survey-based outlooks from the Department of Finance. But one key difference was that our forecasts were less biased: when we did make an error, we weren't systematically over- or under-predicting the economy.
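To make the accuracy-versus-bias distinction concrete, here is a minimal sketch with hypothetical forecast errors (these numbers are illustrative only, not actual PBO or Department of Finance results). Bias is the mean error, which captures systematic over- or under-prediction; accuracy is measured here by root-mean-square error. Two forecasters can have identical accuracy while one is biased and the other is not:

```python
# Hypothetical forecast errors (actual minus predicted), for illustration only.
errors_a = [2.0, -2.0, 2.0, -2.0]  # errors centred on zero: unbiased
errors_b = [2.0, 2.0, 2.0, 2.0]    # always over-predicting: biased

def bias(errors):
    """Mean error: systematic over- or under-prediction."""
    return sum(errors) / len(errors)

def rmse(errors):
    """Root-mean-square error: a common measure of overall accuracy."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

print(bias(errors_a), rmse(errors_a))  # 0.0 2.0 -> unbiased
print(bias(errors_b), rmse(errors_b))  # 2.0 2.0 -> biased, same accuracy
```

Both series have the same RMSE, so a pure accuracy comparison would call them equally good; only the mean error reveals that the second forecaster consistently overshoots.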
How do you explain that? Why are our forecasts less biased than a survey-based result? As Mostafa said, maybe it's because we're not working for a chartered bank with incentives to promote a bullish outlook, for instance. We don't have that kind of sentiment in the background.