Technical Report: UTEP-CS-24-25

To appear in Proceedings of the 9th World Conference on Soft Computing, Baku, Azerbaijan, September 24-27, 2024.


Current deep learning techniques have led to spectacular results, but they still have limitations. One of them is that, in contrast to humans -- who can learn from a few examples and learn fast -- modern deep learning techniques require a large amount of data and take a long time to train. In this paper, we show that neural networks do have the potential to learn from a small number of examples -- and to learn fast. We speculate that the corresponding idea may already be implicitly implemented in Large Language Models, which may partially explain their (somewhat mysterious) success.