Fooling Neural Networks for Motion Forecasting via Adversarial Attacks
Abstract: Human motion prediction remains an open problem that is critical for autonomous driving and safety applications. Despite substantial advances in this area, the widely studied topic of adversarial attacks has not been applied to multi-output regression models such as the GCN- and MLP-based architectures used in human motion prediction. This work narrows that gap through extensive quantitative and qualitative experiments on state-of-the-art architectures, analogous to the early work on adversarial attacks in image classification. The results show that these models are susceptible to attacks even at low perturbation levels. We also present experiments with 3D transformations that degrade model performance; in particular, most models are sensitive to simple rotations and translations, which do not alter joint distances. We conclude that, much like early CNN classifiers, motion forecasting models are vulnerable to small perturbations and simple 3D transformations.
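The 3D transformations mentioned in the abstract are rigid-body motions, which by construction preserve all inter-joint distances. A minimal sketch (assuming poses stored as an (N, 3) NumPy array; the 17-joint skeleton and the specific angles are illustrative, not the paper's actual attack code) verifying this invariance:

```python
import numpy as np

def rotate_z(theta):
    # Rotation matrix about the z-axis by angle theta (radians).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def pairwise_distances(joints):
    # joints: (N, 3) array of 3D joint positions.
    diff = joints[:, None, :] - joints[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
pose = rng.normal(size=(17, 3))          # hypothetical 17-joint skeleton

R = rotate_z(np.deg2rad(15.0))           # small rotation
t = np.array([0.1, -0.2, 0.05])          # small translation
transformed = pose @ R.T + t

# Rigid transforms leave every inter-joint distance unchanged, yet (per the
# paper's findings) can still degrade a forecasting model's predictions.
assert np.allclose(pairwise_distances(pose), pairwise_distances(transformed))
```

Because the skeleton's geometry is unchanged, any drop in forecasting accuracy under such a transform indicates sensitivity to the global reference frame rather than to the pose itself.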