Of course. The Directive on Automated Decision-Making explicitly recognizes that harms can be done to individuals or to communities, but when AIDA defines harm in proposed subsection 5(1), it refers repeatedly to individuals, whether for harm to property or for economic, physical and psychological harm.
The problem is that the harms AIDA is meant to address are often, by their nature, diffuse. Often they are harms to groups rather than to individuals. A good example is AI bias, which is covered in proposed subsection 5(2), not in subsection 5(1). If an automated system that allocates employment, for example, is biased, it is very difficult to know whether a particular individual got or lost a job because of that bias. It is much easier to see that the system may be biased against a certain group.
The same goes for representation issues in AI. An individual would have difficulty proving harm under the act, but the harm is very real for a particular group. The same is true of misinformation. The same is true of some types of systemic discrimination that may not be captured by the current definition of bias in the act.
What I find concerning is that by regulating a technology whose harms are more likely to fall on groups than on individuals under a harm definition that targets individuals specifically, we may be leaving out most of what we want to cover.