Just thought I would share this post by Keith Ramsay on various methods of defining incredibly large numbers.  It's from way back in 2004, but it's very informative.

John Tapper: |What is the largest (finite) number ever written down?

I don't know.

Probably the contestants need to be divided into categories here: numbers given in decimal, numbers for which a recursive algorithm has been given, and numbers given by mathematical definitions. The "recursively given" category is perhaps the meatiest, but I have a few things to say about the (nonconstructive) "arbitrary definition" category.

Defining big numbers in any arbitrary way becomes a matter of defining strong languages. One greater than the largest number definable using a given number of symbols (a million, say) in a well-defined language ordinarily requires just over that many symbols to define in the language itself, but it's relatively easy to define in a slightly more expressive language, one which can define the original language.
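The Berry-paradox flavor of this can be made concrete with a toy sketch in Python (everything here is illustrative: the "object language" is just arithmetic expressions over the digits 1-9 and the operators +, *, ^, and Python plays the stronger metalanguage):

```python
# Toy "largest number definable in k symbols". The object language is
# arithmetic expressions over the digits 1-9 and the operators +, *, ^
# (purely illustrative); Python acts as the stronger metalanguage.
from itertools import product

ALPHABET = "123456789+*^"

def largest_definable(k):
    """Largest value of any well-formed expression of at most k symbols."""
    best = 0
    for length in range(1, k + 1):
        for symbols in product(ALPHABET, repeat=length):
            expr = "".join(symbols).replace("^", "**")
            try:
                value = eval(expr)  # toy setup: piggyback on Python's parser
            except Exception:       # ill-formed strings are simply skipped
                continue
            if isinstance(value, int) and value > best:
                best = value
    return best

print(largest_definable(3))  # 387420489, i.e. 9^9
```

Note the jump the paragraph describes: `largest_definable(1000000) + 1` names, in a few dozen symbols of the metalanguage, a number that the object language needs over a million symbols to reach.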

So we have a kind of loose hierarchy. Values of the busy beaver function are good for getting above the contestants in the recursively-exhibited number category. But the first-order language of arithmetic is a stronger means of defining numbers, so we can beat those by referring to the largest natural number N which is the unique natural number satisfying a formula P(n) with one free variable in the first-order language of the natural numbers with + and *, but with some carefully formulated bound on the complexity of the formula P.
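The busy beaver function is uncomputable in general, but for very small machines its values are known and a bounded brute-force search recovers them. A minimal sketch, assuming the standard convention that the halting transition also writes a symbol before stopping; for 2-state, 2-symbol machines the known value is Sigma(2) = 4:

```python
# Brute-force the 2-state busy beaver Sigma(2): over all 2-state, 2-symbol
# Turing machines, run each for a bounded number of steps and record the
# most 1s left on the tape by any machine that halts within the bound.
# (Sigma itself is uncomputable; the step cap makes this a lower-bound
# search that happens to be exact here, since every halting 2-state
# machine stops within 6 steps.)
from itertools import product

STATES = (0, 1)      # states A, B
HALT = -1
MOVES = (-1, 1)      # left, right
STEP_CAP = 50        # generous: the 2-state champion halts in 6 steps

def run(machine, cap):
    """machine maps (state, symbol) -> (write, move, next_state).
    Returns the number of 1s on the tape if it halts within cap steps, else None."""
    tape, pos, state = {}, 0, 0
    for _ in range(cap):
        if state == HALT:
            return sum(tape.values())
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        state = nxt
    return sum(tape.values()) if state == HALT else None

# 2 writes x 2 moves x 3 next-states = 12 choices per table entry
transitions = list(product((0, 1), MOVES, STATES + (HALT,)))
best = 0
for rules in product(transitions, repeat=4):
    machine = dict(zip(product(STATES, (0, 1)), rules))
    ones = run(machine, STEP_CAP)
    if ones is not None:
        best = max(best, ones)
print(best)  # Sigma(2) = 4
```

The same search blows up fast: already at 5 states the function's value is unknown, which is what makes busy beaver values dominate everything recursively exhibited.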

Second-order arithmetic is stronger than first-order arithmetic. Once, on sci.math, a "name a big number" contest was held, and I submitted an entry based on second-order arithmetic, although nobody made any reply to it. As far as I could tell, it was the biggest number named then.

The first-order language of set theory (for the cumulative hierarchy, the usual intended model of ZFC) is stronger still. At this point, one runs into the sticky problem that at least some (more) people would stop considering the number definitions to be well-defined. Some people would stop before we got here, but my impression is that a considerably larger group would want to stop here.

For instance, "the n for which aleph-n is the continuum, or 0 if the continuum is not aleph-n for any natural number n" might define a natural number bigger than all the ones mentioned so far! I mean "might" in the sense that for each natural number n>1, it's consistent with the usual axioms of set theory that the continuum is aleph-n. (It's also consistent that the continuum is greater than aleph-n for each natural number n.) If we defined a number to be the largest natural number definable in set theory using some substantial-sized expression, we would be taking a maximum of a set which implicitly includes that n, whereas there are plenty of people who are not so convinced that whether the continuum hypothesis (which says that the continuum is aleph-1) is true has a well-defined answer, which implies that this is not genuinely a definition.

If one "believes in" set theory, one can take things further, although this starts to get messier. One way to extend set theory is with truth-predicates. First-order set theory extended with a predicate T(n) which is true when n is the code number of a true sentence in set theory is a stronger language. (We could strengthen it a bit more by allowing parameters: T(n, x1,...,xn) defined to mean that n is the code of a predicate P such that P(x1,...,xn) holds.)

Once we've extended the language that way, the next natural step is to iterate the extension. We define a sequence of truth-predicates T0, T1, T2, ... where T0 is the truth predicate for first-order set theory, and for each n, T_{n+1} is the truth-predicate for set theory extended by T_n.
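The iteration can be sketched the same way: a level-(n+1) sentence may apply T_k only for k <= n, and the base level has no truth predicate at all. All names and the atom convention here are illustrative:

```python
# Iterated truth predicates as a toy: T_0 is the truth predicate for the
# base language, and T_{n+1} handles sentences that may themselves
# mention T_0, ..., T_n. A formula ("T", k, phi) asserts T_k(phi).
def true_at(level, phi):
    """Truth predicate for the level-th language in the tower."""
    op = phi[0]
    if op == "T":                       # ("T", k, sub_sentence)
        k = phi[1]
        if k >= level:
            raise ValueError("T_%d is not a symbol of the level-%d language" % (k, level))
        return true_at(k, phi[2])
    if op == "not":
        return not true_at(level, phi[1])
    if op == "eq":                      # atoms: equality of numerals
        return phi[1] == phi[2]
    raise ValueError(op)

# T_1 can talk about T_0-facts; the base language cannot mention any T.
s = ("T", 0, ("eq", 2, 2))              # "T_0('2 = 2')"
print(true_at(1, s))   # True: a legitimate level-1 sentence
# true_at(0, s) raises: the base language has no truth predicate
```

Each level strictly extends the one below it, which is what makes the tower a source of ever-stronger definitions.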

One doesn't need to restrict oneself to only natural numbers n. We can define T_alpha for ordinal notations alpha. T_omega for instance can be defined as the truth predicate for a language consisting of set theory augmented by the T_n for all natural numbers n. A little caution is needed because the encoding of formulas into natural numbers depends on our having a code for each of the symbols being added to the language. That's why I wrote "ordinal notations" instead of ordinals. For the language with {T_x : x < alpha} to be encoded as a natural number, we need essentially to have a countable enumeration of {x : x < alpha}.
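The remark about notations can be illustrated concretely: notations for ordinals below omega^omega are finite tuples of natural numbers (Cantor normal form coefficients), so each notation, and the comparison relation between them, can be coded into the natural numbers. A sketch (the coding function is one standard pairing trick, nothing canonical):

```python
# Notations for ordinals below omega^omega, as tuples of coefficients in
# Cantor normal form: (c_k, ..., c_0) denotes c_k*w^k + ... + c_0*w^0.
# Being finite tuples of naturals, notations can be coded as single
# natural numbers -- the point of using notations rather than ordinals.
def normalize(t):
    """Strip leading zero coefficients so each ordinal has one notation."""
    i = 0
    while i < len(t) - 1 and t[i] == 0:
        i += 1
    return t[i:]

def less(a, b):
    """Compare two notations as ordinals."""
    a, b = normalize(a), normalize(b)
    if len(a) != len(b):
        return len(a) < len(b)      # higher omega-power dominates
    return a < b                    # same length: lexicographic

def code(t):
    """Injective coding of a notation into a single natural number."""
    n = 0
    for c in t:
        n = (2 ** c) * (2 * n + 1)  # standard interleaving pairing trick
    return n

# omega = (1, 0); omega*2 + 3 = (2, 3); omega^2 = (1, 0, 0)
print(less((1, 0), (2, 3)), less((2, 3), (1, 0, 0)))  # True True
print(code((1, 0)))  # 5
```

For a notation system like this, {x : x < alpha} is enumerable by just enumerating the codes, which is exactly the condition needed to keep the extended language countable.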

On the other hand, there's no reason why formulas can't be taken to have ordinals embedded in them, so long as when all is said and done we have a language with an enumerated list of symbols. We could define T_x(n, x1,...,xn) where n is no longer a natural number, but a structure with T_y's in it, where y is simply an ordinal itself. I'm not sure whether this actually enables one to define a language stronger than what one can define using notations for countable ordinals.

I believe all of these methods of defining languages (and consequently big natural numbers) are trumped by extending set theory with a hierarchy of "superclasses" above the usual hierarchy of ordinary sets. The inductive definitions of these truth predicates fail to work in ordinary set theory because they require dealing with proper-class-sized constructions. One wants to define the relationship of a formula P(x1,...,xn) to an n-tuple of sets (a1,...,an) which says that P(a1,...,an) holds. As long as the number of variables and quantifications is fixed, one can define the relationship inside ordinary set theory. Each relationship can be represented as a single formula in that language. In Goedel-Bernays set theory, the formula can be considered to correspond to a proper class of (n+1)-tuples (P,a1,...,an). But in order to define the general relationship between arbitrary formulas P and substitutions into their free variables, one needs an inductive definition, which implicitly involves considering a _sequence_ of such relationships, which is an object of one higher rank than the proper classes in it.

But if we soup up the language by permitting references to proper classes and "superclasses" whose elements are proper classes, then the truth predicate for the original language of set theory can be defined. Likewise, iterated truth predicates can be defined. So it seems to me that this is a stronger method of making the language more expressive.

One can conceivably keep going in this same direction by adding more ranks beyond the point where the objects stop being sets. I remember once seeing a reference to an imagined hierarchy which has "as many" additional ranks as the original hierarchy has, i.e., one for each ordinal. If the class of all ordinals is denoted On, then the ranks of this realm are ordered like On+On.

Perhaps this allows us to define almost inconceivably large natural numbers! At this point, however, I have a harder time seeing how a language which refers to such a hierarchy should be interpreted, and whether it makes well-defined sense at all. If the hierarchy keeps going, then in what sense do the elements stop being sets when we get to the proper classes?

Looking out at the big natural numbers, we see on the horizon flickering images which look sort of like numbers definable only using the most tenuous of notions, but it's hard to say whether they are there, or just mirages.

Keith Ramsay

So we get methods for going beyond Rayo's number, using truth predicates and ultimately superclasses.  I'm not sure what he means by "there's no reason why formulas can't be taken to have ordinals embedded in them", but it looks intriguing. Here's a question:  how does LittlePeng9's idea of going to higher order logic compare with taking superclasses?  Are they equivalent?  How high can you go in higher-order logic anyway?