One of the challenges lies in which scientific fields get favoured and in the nature of publication across those fields. Observations made in some branches of science may be seen by only a small number of specialists. You can be immensely productive, yet the citation rates for those papers may be very limited.
I would give the example of systematics, which enables us to describe the biological diversity of planet Earth. Without capacity in this area, you are basically looking at everything without names for those organisms, without knowing how they evolved or what the future of their evolutionary pathways might look like. Systematists do not tend to be particularly highly cited. In a field more familiar to me personally, citation rates can be quite a lot higher.
The simplicity and reductive quality of those kinds of metrics prejudice our directions based on momentary popularity. I'll give an example from the 19th century. Charles Darwin devoted years to barnacles, yet he was hardly ever cited for that work for a long time afterwards. The discoveries he made have since changed the world in the most fundamental ways, but at the time nobody recognized this. Such overlooked lines of inquiry are ultimately the way you get to the jewels in the crown.
The idea that we should follow a simple counting process to estimate and measure, as though it were reliable, the value of science.... It's just a popularity contest. I say this as somebody with some experience of these kinds of metrics. If I were being perfectly selfish about it, I would be thrilled to see all of us just rely on the h-index, but the fact is that it reflects a whole bunch of things, only some of which actually have to do with the importance of discoveries I may have made. It may have to do with whether or not I'm social-networking effectively. It's reductive.
DORA exists so that we will be thoughtful about this, rather than simplistic.