The lower order comes up unexpectedly (to me) in coevolutionary algorithms (what I studied in my PhD), but really in search and optimization more broadly.
Say you have a notion of "context", and a way of ordering these so that some contexts are larger, more expansive than, or "above" others. And let's say in each context, there is a set of things that are identifiable as "best". I'm being vague because you can instantiate this basic idea pretty broadly. For instance, maybe the contexts are states of information in a search algorithm and "best" refers to the possible solutions that seem best in each state of information; as you search, you change (increase) your state of information, and might change your mind about which possible solutions are the best ones. As another example, the contexts could be possible worlds and "best" refers to which propositions are true in each possible world; as you progress from one possible world to the next, you might change your mind about which propositions are true.
Anyway, with that simple setup you can associate to each thing the set of all contexts in which it appears best. This set could be empty, could be very large, or anything in between. Then the lower order shows up as a weak preference relationship among all the things: one thing is lower preference than another if, for each context in which it appears best, there's a larger or equal context in which the other thing seems best. Put differently, any time you think the first thing is best, there's a way to increase your context such that the other thing appears best. This is exactly the lower order between the sets of contexts in which each thing seems best. If the set of contexts in which one thing seems best is higher up the lower order than the set of contexts in which the other seems best, then the former thing is weakly preferred to the latter.
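The comparison above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the function and variable names are mine, not from any library), assuming contexts are finite sets of learned facts ordered by inclusion:

```python
def lower_order_leq(ctxs_a, ctxs_b, leq):
    """Lower order on sets of contexts: ctxs_a is below ctxs_b iff every
    context in ctxs_a sits at or below some context in ctxs_b."""
    return all(any(leq(a, b) for b in ctxs_b) for a in ctxs_a)

# Hypothetical contexts: frozensets of facts learned so far, ordered by inclusion.
subset = lambda a, b: a <= b

best_x = {frozenset(), frozenset({"fact1"})}                  # contexts where x seems best
best_y = {frozenset({"fact1"}), frozenset({"fact1", "fact2"})}  # contexts where y seems best

print(lower_order_leq(best_x, best_y, subset))  # True: weakly prefer y to x
print(lower_order_leq(best_y, best_x, subset))  # False: the preference is one-way here
```

Every context where x looks best can be extended (here, by adding facts) to one where y looks best, so y is weakly preferred; the reverse fails because no context in best_x contains "fact2".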
The intuition in a search setting is that contexts are states of information, a kind of compendium of what you've learned so far in your search. If x and y are possible solutions, and for every context (state of information) in which you think x is the best there is always a bigger context--i.e., one with more information--in which you think y is best instead, you ought to prefer y to x. The rationale is that any time you think x is best there's a way to learn a little more and change your mind to think y is best instead, which justifies preferring y to x.
Applied to modal logic, this notion corresponds to validity: if in every possible world where the proposition p is true there is an accessible world in which the proposition q is true, then "p implies possibly q" is true in every world (valid).
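The modal reading can be checked concretely on a finite Kripke frame. A minimal sketch, again with hypothetical names, where worlds, the accessibility relation, and the sets of worlds satisfying p and q are all given explicitly:

```python
def p_implies_possibly_q_valid(worlds, access, p_worlds, q_worlds):
    """Check that "p implies possibly q" holds at every world: wherever p is
    true, some accessible world must make q true."""
    return all(
        any(v in q_worlds for v in access.get(w, ()))
        for w in worlds
        if w in p_worlds
    )

worlds = {"w0", "w1", "w2"}
access = {"w0": {"w1"}, "w1": {"w2"}, "w2": set()}  # accessibility relation
p_worlds = {"w0", "w1"}  # worlds where p is true
q_worlds = {"w1", "w2"}  # worlds where q is true

print(p_implies_possibly_q_valid(worlds, access, p_worlds, q_worlds))  # True
```

Here every p-world can "see" a q-world, so the implication is valid on this frame; shrinking q_worlds to just {"w2"} would break it at w0, which can only see w1.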
The appearance of "possibly" is suggestive, I think, and accords with this being a weak preference. "Necessarily" would correspond to a strong preference, but I'd expect (in the sense of demand) a search process to follow such a preference directly.
#math #ComputerScience #search #CoevolutionaryAlgorithm #SolutionConcept #ModalLogic