Publication Date

1-1-2025

Comments

Technical Report: UTEP-CS-25-1

Abstract

Large Language Models (LLMs) like ChatGPT have spectacular successes -- but they also have surprising failures that an average person with common sense could easily avoid. It is therefore desirable to incorporate imprecise ("fuzzy") common sense into LLMs. A natural question is: to what extent will this help? Doing so may let us avoid a few simple mistakes, but will it significantly improve the LLMs' performance? What portion of the gap between current LLMs and ideal, perfect AI-based agents can, in principle, be covered by using fuzzy techniques? The fact that few researchers working on LLMs (and on deep learning in general) try fuzzy methods suggests that most of these researchers do not believe that fuzzy techniques will significantly improve LLMs' performance. Contrary to this pessimistic viewpoint, our analysis shows that, potentially, fuzzy techniques can cover this entire gap -- or at least a significant portion of it. In this sense, indeed, all LLMs need to become perfect is fuzzy techniques.
