Yes. By technology being “neutral and future-proof”, we also mean avoiding definitions that are narrow and specific to current trends in artificial intelligence. For example, clause 5 of the bill defines “biased output”, but that definition focuses too heavily on the outputs that systems generate, when harms emerge throughout the AI life cycle. We should have definitions that cover the development, design and deployment of technologies, rather than focusing only on the output.
As a reminder, I would also like to say that the contexts in which we use technology, such as education, health care and government, don't really change, so we should be focusing on regulating those contexts as well. Prohibitions on systems that process biometric data are, in my opinion, another way to be technologically neutral and future-proof.