Daniel Soudry
Associate Professor
Verified email at technion.ac.il - Homepage
Cited by
Binarized neural networks
I Hubara, M Courbariaux, D Soudry, R El-Yaniv, Y Bengio
Advances in neural information processing systems 29, 2016
Quantized neural networks: Training neural networks with low precision weights and activations
I Hubara, M Courbariaux, D Soudry, R El-Yaniv, Y Bengio
The Journal of Machine Learning Research 18 (1), 6869-6898, 2017
Simultaneous denoising, deconvolution, and demixing of calcium imaging data
EA Pnevmatikakis, D Soudry, Y Gao, TA Machado, J Merel, D Pfau, ...
Neuron 89 (2), 285-299, 2016
Train longer, generalize better: closing the generalization gap in large batch training of neural networks
E Hoffer, I Hubara, D Soudry
arXiv preprint arXiv:1705.08741, 2017
The implicit bias of gradient descent on separable data
D Soudry, E Hoffer, MS Nacson, S Gunasekar, N Srebro
The Journal of Machine Learning Research 19 (1), 2822-2878, 2018
Memristor-based multilayer neural networks with online gradient descent training
D Soudry, D Di Castro, A Gal, A Kolodny, S Kvatinsky
IEEE transactions on neural networks and learning systems 26 (10), 2408-2421, 2015
Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights
D Soudry, I Hubara, R Meir
NIPS 1, 2, 2014
Implicit bias of gradient descent on linear convolutional networks
S Gunasekar, J Lee, D Soudry, N Srebro
arXiv preprint arXiv:1806.00468, 2018
No bad local minima: Data independent training error guarantees for multilayer neural networks
D Soudry, Y Carmon
arXiv preprint arXiv:1605.08361, 2016
Post training 4-bit quantization of convolutional networks for rapid-deployment
R Banner, Y Nahshan, D Soudry
Advances in Neural Information Processing Systems 32, 7950-7958, 2019
Characterizing implicit bias in terms of optimization geometry
S Gunasekar, J Lee, D Soudry, N Srebro
International Conference on Machine Learning, 1832-1841, 2018
Scalable methods for 8-bit training of neural networks
R Banner, I Hubara, E Hoffer, D Soudry
arXiv preprint arXiv:1805.11046, 2018
Norm matters: efficient and accurate normalization schemes in deep networks
E Hoffer, R Banner, I Golan, D Soudry
arXiv preprint arXiv:1803.01814, 2018
Extracting grid cell characteristics from place cell inputs using non-negative principal component analysis
Y Dordek, D Soudry, R Meir, D Derdikman
Elife 5, e10094, 2016
Kernel and rich regimes in overparametrized models
B Woodworth, S Gunasekar, JD Lee, E Moroshko, P Savarese, I Golan, ...
Conference on Learning Theory, 3635-3673, 2020
Exponentially vanishing sub-optimal local minima in multilayer neural networks
D Soudry, E Hoffer
International Conference on Learning Representations, workshop, 2017
Convergence of gradient descent on separable data
MS Nacson, J Lee, S Gunasekar, PHP Savarese, N Srebro, D Soudry
arXiv preprint arXiv:1803.01905, 2018
Augment your batch: Improving generalization through instance repetition
E Hoffer, T Ben-Nun, I Hubara, N Giladi, T Hoefler, D Soudry
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
How do infinite width bounded norm networks look in function space?
P Savarese, I Evron, D Soudry, N Srebro
Conference on Learning Theory, 2667-2690, 2019
Efficient "shotgun" inference of neural connectivity from highly sub-sampled activity data
D Soudry, S Keshri, P Stinson, M Oh, G Iyengar, L Paninski
PLoS computational biology 11 (10), e1004464, 2015