Privacy-preserving classification of compute workloads
Establish whether the high-level customer and workload telemetry already collected by cloud and data center providers can support reliable, privacy-preserving workload classification that detects (a) AI training runs above specified compute thresholds and (b) inference workloads involved in malicious cyber activity. Specify how such techniques must adapt as the hardware, software packages, and algorithms used in AI workloads change over time.
An open question is thus whether this data can be used to develop reliable workload classification techniques, for example, determining whether a training workload exceeds certain compute thresholds, or whether an inference workload involves malicious cyber activity. Such techniques would need to account for changes in the hardware, software packages, and specific algorithms used in AI workloads over time.
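To make the first classification task concrete, a minimal sketch follows. It assumes only coarse, privacy-preserving telemetry a provider plausibly already holds (accelerator count, job duration, average utilization) plus a public per-chip peak throughput figure, and flags runs whose estimated total training compute crosses a threshold. The field names, the utilization-based estimator, and the example numbers are all illustrative assumptions, not an established method; the 10^26 FLOP threshold echoes figures used in recent US compute-reporting rules.

```python
from dataclasses import dataclass

@dataclass
class WorkloadTelemetry:
    """Hypothetical high-level telemetry; no workload contents are inspected."""
    gpu_count: int              # accelerators allocated to the job
    duration_hours: float       # wall-clock duration of the run
    avg_utilization: float      # mean fraction of peak throughput, in [0, 1]
    peak_flops_per_gpu: float   # published per-chip peak throughput (FLOP/s)

def estimated_training_compute(t: WorkloadTelemetry) -> float:
    """Rough estimate of total compute (FLOP) from aggregate telemetry only."""
    seconds = t.duration_hours * 3600.0
    return t.gpu_count * seconds * t.avg_utilization * t.peak_flops_per_gpu

def exceeds_threshold(t: WorkloadTelemetry, threshold_flop: float = 1e26) -> bool:
    """Flag runs whose estimated compute crosses the reporting threshold."""
    return estimated_training_compute(t) >= threshold_flop

# Illustrative run: 10,000 H100-class GPUs (~1e15 FLOP/s peak) at 40%
# average utilization for 90 days.
run = WorkloadTelemetry(gpu_count=10_000, duration_hours=90 * 24,
                        avg_utilization=0.4, peak_flops_per_gpu=1e15)
```

Even this toy estimator shows why such techniques must track hardware and software change: the `peak_flops_per_gpu` constant and realistic utilization ranges shift with each accelerator generation and training-stack improvement, so thresholds calibrated against one generation would misclassify workloads on the next.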