Samyak Jain
Title · Cited by · Year
Efficient and effective augmentation strategy for adversarial training
S Addepalli, S Jain
Advances in Neural Information Processing Systems 35, 1488-1501, 2022
Cited by 39 · 2022
Scaling adversarial training to large perturbation bounds
S Addepalli, S Jain, G Sriramanan, R Venkatesh Babu
European Conference on Computer Vision, 301-316, 2022
Cited by 35* · 2022
Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks
S Jain, R Kirk, ES Lubana, RP Dick, H Tanaka, E Grefenstette, ...
arXiv preprint arXiv:2311.12786, 2023
Cited by 19 · 2023
Dart: Diversify-aggregate-repeat training improves generalization of neural networks
S Jain, S Addepalli, PK Sahu, P Dey, RV Babu
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 13 · 2023
Boosting adversarial robustness using feature level stochastic smoothing
S Addepalli, S Jain, G Sriramanan, RV Babu
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 9 · 2021
How does fine-tuning affect your model? Mechanistic analysis on procedural tasks
S Jain, R Kirk, ES Lubana, RP Dick, H Tanaka, T Rocktäschel, ...
R0-FoMo: Robustness of Few-shot and Zero-shot Learning in Large Foundation …, 2023
2023
Supplementary: DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks
S Jain, S Addepalli, PK Sahu, P Dey, RV Babu
Supplementary Material: Towards Achieving Adversarial Robustness Beyond Perceptual Limits
S Addepalli, S Jain, G Sriramanan, S Khare, RV Babu
Supplementary Material: Scaling Adversarial Training to Large Perturbation Bounds
S Addepalli, S Jain, G Sriramanan, RV Babu
Supplementary Material: Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
S Addepalli, S Jain, G Sriramanan, RV Babu