ScholarWorks@UTEP
Copyright (c) 2023 University of Texas at El Paso. All rights reserved.
https://scholarworks.utep.edu
Recent documents in ScholarWorks@UTEP
Thu, 08 Jun 2023 02:30:56 PDT

Borderplex Business Barometer, Volume 7, Number 5
https://scholarworks.utep.edu/border_region/168
Mon, 05 Jun 2023 15:04:48 PDT
Thomas M. Fullerton Jr. et al.

Paso del Norte Economic Indicator Review, May
https://scholarworks.utep.edu/hunt_techrep/38
Thu, 18 May 2023 13:48:59 PDT
Hunt Institute for Global Competitiveness

Borderplex Business Barometer, Volume 7, Number 4
https://scholarworks.utep.edu/border_region/167
Tue, 16 May 2023 08:15:46 PDT
Thomas M. Fullerton Jr. et al.

Mexico Consensus Economic Forecast, Volume 26, Number 2
https://scholarworks.utep.edu/border_region/166
Thu, 04 May 2023 14:21:53 PDT
Thomas M. Fullerton Jr. et al.

How People Make Decisions Based on Prior Experience: Formulas of Instance-Based Learning Theory (IBLT) Follow from Scale Invariance
https://scholarworks.utep.edu/cs_techrep/1802
Tue, 02 May 2023 07:42:16 PDT
To better understand human behavior, we need to understand how people make decisions -- how people select one of several possible actions. This selection is usually based on predicting the consequences of different actions, and these predictions are, in turn, based on past experience. For example, consequences that occurred more frequently in the past are viewed as more probable. However, this is not just about frequency: recent observations are usually given more weight than earlier ones. Researchers have discovered semi-empirical formulas that describe our predictions reasonably well; these formulas form the basis of Instance-Based Learning Theory (IBLT). In this paper, we show that these semi-empirical formulas can be derived from the natural idea of scale invariance.
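The "frequency plus recency" idea can be illustrated with a small sketch in the ACT-R/IBLT tradition. This is our illustration, not the paper's derivation: the power-law decay parameter d and the noise/temperature parameter tau are hypothetical choices, not values from the paper.

```python
import math

def activation(occurrence_times, now, d=0.5):
    """Power-law-decay activation: occurrences that are more
    frequent and more recent yield higher activation."""
    return math.log(sum((now - t) ** (-d) for t in occurrence_times))

def retrieval_probabilities(instances, now, d=0.5, tau=0.25):
    """Soft-max over activations: the predicted probability of
    each remembered outcome (recent ones get extra weight)."""
    acts = {k: activation(ts, now, d) for k, ts in instances.items()}
    z = sum(math.exp(a / tau) for a in acts.values())
    return {k: math.exp(a / tau) / z for k, a in acts.items()}

# "win" was observed three times, long ago; "loss" once, recently.
instances = {"win": [1.0, 2.0, 3.0], "loss": [9.0]}
probs = retrieval_probabilities(instances, now=10.0)
```

Here the single recent "loss" observation receives a probability much closer to that of the thrice-observed "win" than raw frequencies (3/4 vs 1/4) would suggest.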
Palvi Aggarwal et al.

Low-Probability High-Impact Events Are Even More Important Than It Is Usually Assumed
https://scholarworks.utep.edu/cs_techrep/1800
Tue, 02 May 2023 07:42:15 PDT
A large proportion of undesirable events -- earthquakes, floods, tornadoes -- occur in zones where these events are frequent. However, a significant number of such events occur in other zones, where they are rare. For example, while most major earthquakes occur in the vicinity of major faults, i.e., on the border between two tectonic plates, some strong earthquakes also occur inside plates. We want to mitigate all undesirable events, but our resources are limited. So, to allocate these resources, we need to decide which events are more important. For this decision, a natural idea is to use the product of the probability of the undesirable event and the possible damage caused by this event. A natural way to estimate the probability is to use the frequency of such events in the past. This works well for high-probability events like earthquakes in a seismic zone near a fault. However, for low-probability high-impact events, the frequency is small and, as a result, the actual probability may be very different from the observed frequency. In this paper, we show how to take this difference between frequency and probability into account. We also show that if we do take this difference into account, then low-probability high-impact events turn out to be even more important than is usually assumed.
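The frequency-vs-probability gap can be made concrete with two standard statistical tools -- the Laplace (add-one) estimate and the "rule of three" upper confidence bound. This is our own illustration of the general point, not the paper's specific method; the zone names and damage figures are made up.

```python
def laplace_estimate(k, n):
    """Laplace (add-one) estimate of a probability: stays positive
    even when k = 0 observed events, unlike the raw frequency k/n."""
    return (k + 1) / (n + 2)

def rule_of_three_upper(n):
    """~95% upper confidence bound on p when 0 events were
    observed in n independent trials: p <= 3/n."""
    return 3.0 / n

n = 100  # years of observation in each zone
# Zone A: frequent, moderate-damage events (30 in 100 years).
risk_A = (30 / n) * 1.0
# Zone B: no events observed, but 100x the damage if one occurs.
risk_B_freq = (0 / n) * 100.0              # naive frequency: risk looks zero
risk_B_bound = rule_of_three_upper(n) * 100.0  # bound-based: risk = 3.0
```

With the naive frequency, Zone B's risk is zero; with the confidence bound, it exceeds Zone A's risk (3.0 vs 0.3) -- the low-probability high-impact zone becomes the more important one, in line with the paper's thesis.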
Aaron Velasco et al.

What Do Goedel's Theorem and Arrow's Theorem Have in Common: A Possible Answer to Arrow's Question
https://scholarworks.utep.edu/cs_techrep/1801
Tue, 02 May 2023 07:42:15 PDT
Kenneth Arrow, the renowned author of the Impossibility Theorem that explains the difficulty of group decision making, noticed that there is some commonsense similarity between his result and Goedel's theorem about the incompleteness of axiomatic systems. Arrow asked whether it is possible to describe this similarity in more precise terms. In this paper, we take a first step towards this description. We show that in both cases, the impossibility result disappears if we take probabilities into account. Namely, we take into account that we can consider probabilistic situations, that we can make probabilistic conclusions, and that we can make probabilistic decisions (when we select different alternatives with different probabilities).
Miroslav Svitek et al.

Wormholes, Superfast Computations, and Selivanov's Theorem
https://scholarworks.utep.edu/cs_techrep/1799
Tue, 02 May 2023 07:42:14 PDT
While modern computers are fast, there are still many practical problems that require even faster computers. It turns out that, on the fundamental level, one of the main factors limiting computation speed is the fact that, according to modern physics, the speed of all processes is limited by the speed of light. The good news is that while the corresponding limitation is very severe in Euclidean geometry, it can be more relaxed in (at least some) non-Euclidean spaces -- and, according to modern physics, the physical space is not Euclidean. The deviations from Euclidean geometry are especially large on the micro-level, where quantum effects need to be taken into account. To analyze how we can speed up computations, it is desirable to reconstruct the actual distance values -- corresponding to all possible paths -- from the values that we actually measure, which correspond only to macro-paths and thus provide only an upper bound for the distance. In our previous papers -- including our joint paper with Victor Selivanov -- we provided an explicit formula for such a reconstruction. But for this formula to be useful, we need to analyze to what extent this reconstruction can be performed algorithmically. In this paper, we show that while in general no reconstruction algorithm is possible, an algorithm is possible if we impose a lower limit on the distances between steps in a path. So, hopefully, this can help to eventually come up with faster computations.
Olga Kosheleva et al.

People Prefer More Information About Uncertainty, But Perform Worse When Given This Information: An Explanation of the Paradoxical Phenomenon
https://scholarworks.utep.edu/cs_techrep/1798
Tue, 02 May 2023 07:42:13 PDT
In a recent experiment, decision makers were asked whether they would prefer to have more information about the corresponding situation. They confirmed this preference, and such information was provided to them. However, strangely, the decisions of those who received this information were worse than the decisions of the control group, which did not receive this information. In this paper, we provide an explanation for this paradoxical situation.
Jieqiong Zhao et al.

Integrity First, Service Before Self, and Excellence: Core Values of US Air Force Naturally Follow from Decision Theory
https://scholarworks.utep.edu/cs_techrep/1797
Tue, 02 May 2023 07:42:12 PDT
By analyzing data both from peace time and from war time, the US Air Force came up with three principles that determine success: integrity first, service before self, and excellence. We show that these three principles naturally follow from decision theory -- a theory that describes how a rational person should make decisions.
Martine Ceberio et al.

Conflict Situations Are Inevitable When There Are Many Participants: A Proof Based on the Analysis of Aumann-Shapley Value
https://scholarworks.utep.edu/cs_techrep/1796
Tue, 02 May 2023 07:42:10 PDT
When the collaboration of several people results in a business success, an important issue is how to fairly divide the gain between the participants. In principle, the solution to this problem has been known since the 1950s: natural fairness requirements lead to the so-called Shapley value. However, computing the Shapley value requires that we estimate, for each subset of the set of all participants, how much they would have gained if they worked together without the others. Such estimates are possible when we have a small group of participants, but for a big company with thousands of employees this is not realistic. To deal with such situations, Nobelists Aumann and Shapley came up with a natural continuous approximation to the Shapley value -- just as a continuous model of a solid body helps, since we cannot take into account all individual atoms. Specifically, they defined the Aumann-Shapley value as a limit of the Shapley values of discrete approximations: in some cases this limit exists, in others it does not. In this paper, we show that, in some reasonable sense, for almost all continuous situations the limit does not exist: we get different values depending on how we refine the discrete approximations. Our conclusion is that in such situations, since computing a fair division is not feasible, conflicts are inevitable.
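For readers unfamiliar with the Shapley value mentioned above, here is a minimal exact computation for a small group -- this also shows why it only scales to small groups, since it averages over all n! orderings. The toy characteristic function v is our own example, not from the paper.

```python
import math
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value: each player's average marginal
    contribution over all orderings of the players.
    Cost grows as n!, so this is feasible only for small n."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Toy symmetric game: any pair earns 100, the full trio earns 120.
def v(S):
    return {0: 0, 1: 0, 2: 100, 3: 120}[len(S)]

gain = shapley(["A", "B", "C"], v)  # fair split: ~40 each
```

By symmetry the three participants each get 120/3 = 40, and the shares always add up to the full gain v({A,B,C}) = 120 (the "efficiency" fairness requirement).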
Sofia Holguin et al.

Towards Decision Making Under Interval Uncertainty
https://scholarworks.utep.edu/cs_techrep/1795
Tue, 02 May 2023 07:42:09 PDT
In many real-life situations, we need to make a decision. In many cases, we know the optimal decision when we know the exact value of the corresponding quantity x. However, often, we do not know the exact value of this quantity; we only know bounds on the value x -- i.e., we know an interval containing x. In this case, we need to select a decision corresponding to some value from this interval. The selected value will, in general, be different from the actual (unknown) value of this quantity. As a result, the quality of our decision will be lower than in the perfect case when we know the value x. Which value should we select in this case? In this paper, we provide a decision-theory-based recommendation for this selection.
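The abstract does not state which value the paper recommends, so as a point of comparison here is the classical Hurwicz approach to selecting a point from an interval -- a standard decision-theory recipe, offered here only as background, not as the paper's result.

```python
def hurwicz_select(lo, hi, alpha=0.5):
    """Hurwicz criterion: select the point alpha*hi + (1-alpha)*lo
    from the interval [lo, hi], where alpha in [0, 1] is the
    decision maker's degree of optimism (alpha=0.5 gives the midpoint)."""
    assert lo <= hi and 0.0 <= alpha <= 1.0
    return alpha * hi + (1 - alpha) * lo

# Quantity known only to lie in [2, 6]; a moderately optimistic
# decision maker (alpha = 0.75) acts as if x = 5.
x = hurwicz_select(2.0, 6.0, alpha=0.75)
```

For alpha = 0.5 this reduces to the interval midpoint, which minimizes the worst-case distance to the unknown true value.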
Juan A. Lopez et al.

Foundations of Neural Networks Explain the Empirical Success of the "Surrogate" Approach to Ordinal Regression -- and Recommend What Next
https://scholarworks.utep.edu/cs_techrep/1794
Tue, 02 May 2023 07:42:08 PDT
Recently, a new efficient semi-heuristic statistical method -- called the Surrogate Approach -- has been proposed for dealing with regression problems. How can we explain this empirical success? And since this method is only an approximation to reality, what can we recommend if there is a need for a more accurate approximation? In this paper, we show that this empirical success can be explained by the same arguments that explain the empirical success of neural networks -- and that these arguments can also provide us with possibly more general techniques (that will hopefully lead to a more accurate approximation of real-life phenomena).
Salvador Robles et al.

Why Gliding Symmetry Used to Be Prevalent in Biology But Practically Disappeared
https://scholarworks.utep.edu/cs_techrep/1793
Tue, 02 May 2023 07:42:08 PDT
At present, many living creatures have symmetries; in particular, the left-right symmetry is ubiquitous. Interestingly, 600 million years ago, very few living creatures had the left-right symmetry: most of them had a gliding symmetry -- symmetry with respect to a shift along a line followed by reflection in this line. This symmetry is rarely seen in living creatures today. In this paper, we provide a physics-based geometric explanation for this symmetry change: we explain both why gliding symmetry used to be ubiquitous, and why at present it is rarely observed, while the left-right symmetry is prevalent.
Julio C. Urenda et al.

The World Is Cognizable: An Argument Based on Hoermander's Theorem
https://scholarworks.utep.edu/cs_techrep/1792
Tue, 02 May 2023 07:42:06 PDT
Is the world cognizable? Is it, in principle, possible to predict the future state of the world based on measurements and observations performed in a local area -- e.g., in the Solar system? In this paper, we use general physical principles and a mathematical theorem about partial differential equations to show that such prediction is, indeed, theoretically possible.
Miroslav Svitek et al.

Everything Is a Matter of Degree: The Main Idea Behind Fuzzy Logic Is Useful in Geosciences and in Authorship
https://scholarworks.utep.edu/cs_techrep/1791
Tue, 02 May 2023 07:42:05 PDT
This paper presents two applications of the general principle -- that everything is a matter of degree -- which underlies fuzzy techniques. The first -- qualitative -- application helps explain the fact that while most earthquakes occur close to faults (borders between tectonic plates or terranes), earthquakes have also been observed in areas that are far away from the known faults. The second -- more quantitative -- application is to the problem of which collaborators should be listed as authors and which should simply be thanked in the paper. We argue that the best answer to this question is to explicitly state the degree of authorship -- in contrast to the usual yes-no approach. We also show how to take into account that this degree can be estimated only with some uncertainty -- i.e., that we need to deal with interval-valued degrees.
Christian Servin et al.

Causality: Hypergraphs, Matter of Degree, Foundations of Cosmology
https://scholarworks.utep.edu/cs_techrep/1790
Tue, 02 May 2023 07:42:04 PDT
The notion of causality is very important in many application areas. Because of this importance, there are several formalizations of this notion in physics and in AI. Most of these definitions describe causality as a crisp ("yes"-"no") relation between two events or two processes -- cause and effect. However, such descriptions do not fully capture the intuitive idea of causality: first, several conditions often need to be present for an effect to occur, and, second, the effect is often a matter of degree. In this paper, we show how to modify the current description of causality so as to take both these phenomena into account -- in particular, by extending the notion of a directed acyclic graph to hypergraphs. As a somewhat unexpected side effect of our analysis, we get a natural explanation of why, in contrast to the space-time of Special Relativity -- in which the division into space and time depends on the observer -- in cosmological solutions there is a clear absolute separation between space and time.
Cliff Joslyn et al.

SUCCESS (Studying Underlying Characteristics of Computing and Engineering Student Success) Survey: Non-Cognitive and Affective Profiles in Engineering and Computing Students at UTEP (2018-2022)
https://scholarworks.utep.edu/cs_techrep/1789
Tue, 02 May 2023 07:42:03 PDT
Sanga Kim et al.

Interval-Valued and Set-Valued Extensions of Discrete Fuzzy Logics, Belnap Logic, and Color Optical Computing
https://scholarworks.utep.edu/cs_techrep/1788
Tue, 02 May 2023 07:42:01 PDT
It has recently been shown that in some applications, e.g., in ship navigation near a harbor, it is convenient to use combinations of basic colors -- red, green, and blue -- to represent different fuzzy degrees. In this paper, we provide a natural explanation for the efficiency of this empirical technique: namely, we show that it is reasonable to consider discrete fuzzy logics, that it is reasonable to consider their interval-valued and set-valued extensions, and that a set-valued extension of the three-valued logic is naturally equivalent to the use of color combinations.
Victor L. Timchenko et al.

Why Fractional Fuzzy
https://scholarworks.utep.edu/cs_techrep/1787
Tue, 02 May 2023 07:42:00 PDT
In many practical situations, control experts can only formulate their experience by using imprecise ("fuzzy") words from natural language. To incorporate this knowledge into automatic controllers, Lotfi Zadeh came up with a methodology that translates informal expert statements into a precise control strategy. This methodology -- and its subsequent modifications -- is known as fuzzy control. Fuzzy control often leads to reasonable control -- and we can get even better control results by tuning the resulting control strategy on the actual system. There are many parameters that can be changed during tuning, so tuning is usually rather time-consuming. Recently, it was empirically shown that in many cases, quite good results can be attained by using a special 1-parametric tuning procedure called fractional fuzzy inference -- we get up to 40% improvement just by selecting the proper value of a single parameter. In this paper, we provide a theoretical explanation of why fractional fuzzy inference works so well.
Mehran Mazandarani et al.