Artificial Intelligence (AI) has recently revolutionised various fields of science and has also started to pervade commercial applications in an unprecedented manner. Despite great successes, most of AI's enormous potential is still to be realised. The recent surge of AI can be attributed to advances in the machine learning field known as "Deep Learning", that is, large deeply-layered artificial neural networks (ANNs) trained by modern learning algorithms on massive datasets. At its core, Deep Learning discovers multiple levels of distributed representations of the input, with higher levels representing more abstract concepts. These representations have led to impressive successes across research areas; in particular, artificial neural networks have considerably improved performance in computer vision, speech recognition, and internet advertising.
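To make the layered-representation idea concrete, here is a minimal sketch of a deeply-layered ANN. It uses PyTorch purely as an illustrative assumption (layer sizes and the framework are not taken from the group's own work): each hidden layer transforms the previous layer's representation, so deeper layers can encode increasingly abstract features of the input.

```python
# Minimal sketch (assumption: PyTorch) of a deeply-layered ANN with
# several levels of learned representations.
import torch
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),   # level 1: low-level features
    nn.Linear(512, 256), nn.ReLU(),   # level 2: intermediate features
    nn.Linear(256, 128), nn.ReLU(),   # level 3: more abstract features
    nn.Linear(128, 10),               # output: task-level concepts (e.g. classes)
)

x = torch.randn(32, 784)              # a batch of 32 toy inputs
logits = deep_net(x)                  # forward pass through all representation levels
print(logits.shape)                   # torch.Size([32, 10])
```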
Sepp Hochreiter, who heads this research group, is considered a pioneer of Deep Learning for his discovery of the vanishing gradient problem and his invention of long short-term memory (LSTM) networks.
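As a back-of-the-envelope illustration of the vanishing gradient problem (not taken from the group's publications, and using hypothetical numbers), the sketch below shows how a per-step derivative factor below one shrinks the backpropagated gradient exponentially with depth; the LSTM's gated cell state was designed to counter exactly this failure mode.

```python
# Toy illustration of the vanishing gradient problem (illustrative values only).
# Backpropagation through a deep or recurrent network multiplies one derivative
# factor per layer / time step; if these factors are below 1, the overall
# gradient shrinks exponentially with depth.
factor = 0.9  # hypothetical per-step derivative magnitude (< 1)
for depth in (1, 10, 50, 100):
    print(f"depth {depth:3d}: gradient scaled by {factor ** depth:.2e}")
# The LSTM's gated cell state enforces a near-constant error flow,
# which is what lets gradients survive over long sequences.
```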
Recent publications in Deep Learning:
- Normalization is dead, long live normalization! In ICLR Blog Track, 2022.
- Few-Shot Learning by Dimensionality Reduction in Gradient Space. CoLLAs, 2022.
- Learning 3D Granular Flow Simulations. arXiv, 2021.
- Trusted Artificial Intelligence: Towards Certification of Machine Learning Applications. arXiv, 2021.
- MC-LSTM: Mass-Conserving LSTM. In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021.
- DeepRC: Immune Repertoire Classification with Attention-Based Deep Massive Multiple Instance Learning. bioRxiv, 2020.
- Cross-Domain Few-Shot Learning by Representation Fusion. arXiv preprint arXiv:2010.06498, 2020.
- First Order Generative Adversarial Networks. 2018.
- Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields. 2018.
- Self-Normalizing Neural Networks. NeurIPS, 2017.
- GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. 2017.