## Original Post

So I'm pretty new here, and to googology as a whole, but I'm just trying to learn the ropes while avoiding salad-number-type stuff. My goal is to come up with something of my own that is provably valid and then (hopefully) be able to prove where it falls in the current hierarchy. (I'm not a mathematician by any means; I lean toward statistics more than anything, so a lot of this is most definitely guesswork. Apologies in advance!)

Anyway, I have a Word document I'm formatting for this page so that others can critique it and I can learn how to write mathematical proofs, that sort of thing. (Below I've included it as a gallery, with links to high-res Imgur versions of the same photos.) Please, please, please tear it apart! I expect it to be bad, and I very, very much appreciate any criticism.

## Editing a Bit Thanks to the Suggestions of PsiCubed2

### Part 1

Consider the symbol "1" for the quantity one. The symbol itself is arbitrary: we could just as well say x = 1 or giraffe = 1.

So suppose we have a computer, and we tell it that a bit value of 1 is equal to a googolplex (10^10^100).

Now imagine a computer with a memory as large as the entire observable universe, so that every tiniest particle has an "on" (1) state and an "off" (0) state.

Let's say there are 10^82 such particles. If we set every single one of them to an "on" state (with on [1] representing a googolplex), then the value stored in our computer is 10^82 * 10^10^100.

So we write a simple program that changes all the values to "1". Some might already be "1" and others might be "0", so the program simply checks each bit, keeps it a "1" if it was a one, and changes it to "1" if it was a zero.
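To make this concrete, here's a tiny Python sketch of that bit-setting program and the exponent arithmetic behind the stored value. The name `set_all_on` and the 10-particle memory are illustrative stand-ins of mine, not part of the construction itself:

```python
# Toy version of the universe-sized computer. The real construction has
# 10^82 particles; here we use a 10-bit memory so the program actually runs.

def set_all_on(bits):
    """Every bit ends up 1: already-on bits stay, off bits are flipped."""
    return [1 for _ in bits]

memory = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
memory = set_all_on(memory)

# With every bit "on" and each 1 worth a googolplex (10^(10^100)), the
# stored value is 10^82 * 10^(10^100). The number itself is far too large
# to materialize, but its base-10 logarithm is simple exponent arithmetic:
# log10(10^82 * 10^(10^100)) = 82 + 10^100.
log10_value = 82 + 10**100
```

Working with the logarithm instead of the number itself is the only practical option here: the value has more digits than any computer could store.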

Kolmogorov complexity is the length of the shortest possible program that still gives us the expected output. It is uncomputable in general (no algorithm can calculate it for arbitrary outputs), but the value does exist for a given output in a given language.
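While Kolmogorov complexity can't be computed exactly, any real compressor gives a computable upper bound on it. A minimal sketch using Python's `zlib` (my choice of compressor; the post doesn't specify one):

```python
import zlib

# A highly regular string: a million "on" bits in a row.
highly_regular = b"1" * 1_000_000

# The compressed length is a computable upper bound on the (uncomputable)
# Kolmogorov complexity, up to a language-dependent constant.
compressed = zlib.compress(highly_regular, 9)

# The string shrinks drastically, because "print '1' a million times"
# is a very short description of it.
print(len(highly_regular), len(compressed))
```

This is exactly why the all-ones memory above compresses so well: the more regular the output, the shorter the program that reproduces it.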

So we use that idea to compress our program (and its output, 10^82 * 10^10^100); let's call this compression K(P(10^82 * 10^10^100)): K for Kolmogorov complexity and P for the program (with the program's result in the parentheses).

### Part 2

Now let's take another computer with 10^82 * 10^10^100 particles. In that computer, let's say that "1" stands for K(P(10^82 * 10^10^100)), the compressed program whose output is 10^82 * 10^10^100.

Thus, reading every particle as that output, we get (10^82 * 10^10^100) * K(P(10^82 * 10^10^100)), which decompresses to 10^82 * 10^10^100 * 10^82 * 10^10^100.

But we can compress that too, using Kolmogorov complexity! Then we get K(K(10^82 * 10^10^100) * 10^82 * 10^10^100).

We can even take another computer with 10^82 * 10^10^100 * 10^82 * 10^10^100 particles and use the Kolmogorov complexity to get K(K(K(10^82 * 10^10^100) * 10^82 * 10^10^100) * 10^82 * 10^10^100).

Let's just save ourselves some time and write this expansion as .

Like anything in googology, we take this to the logical extremes by doing

### Part 3

Okay, hopefully we've made it to this point unharmed. Essentially, we're just compressing the outputs of huge computers into short programs and feeding them into larger computers, recursively.

The next step is to take advantage of symbols, like we mentioned above with the giraffe example. We can set anything equal to anything; it's all essentially arbitrary as long as the rules hold. So basically, everything needs a unique identifier.

So how many unique identifiers are possible? How many symbols can we have? In human-readable language, the symbols "0", "z", and "micro-" have different meanings and different bit encodings, but to our computer, each symbol is just a unique combination of zeroes and ones.
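The count of distinct identifiers follows directly from the bit width: b bits give 2^b unique symbols. A quick sketch of the counting argument (the function name is mine):

```python
from itertools import product

def num_symbols(bits):
    """Each extra bit doubles the number of distinct identifiers."""
    return 2 ** bits

# Enumerate all 3-bit symbols explicitly to check the formula:
# 000, 001, 010, 011, 100, 101, 110, 111.
three_bit_symbols = ["".join(s) for s in product("01", repeat=3)]
```

So a memory of b particles can name 2^b distinct things, which is what makes the memory size itself the bottleneck on how many symbols we get.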

In fact, we just built a large number to be the memory for our new computer! The output of .

So the number of symbols we can have is (see here for an explanation).

Let's call this number ॐ₀ (om-null). Now this means we can do what we did in Part 2 again and again and again, until we run out of om-null symbols to represent the new numbers in our absolutely enormous computer.

Okay, so now let's find the Kolmogorov complexity of that, build an even bigger computer with a memory that size, set that compressed output/program equal to "1" in the new computer, and then repeat the whole process again.

Let's call this number dingir-null.

### Part 4

And here's where things get worse!

We're going to build bigger computers with dingir-null-sized memories. But we're going to build dingir-null of these computers (a lot of computers).

And then we're going to do everything we did above again: the Kolmogorov complexity, building bigger computers from that, reaching ॐ₁ (om-one) symbols, even reaching dingir-one.

It's all going to happen again, like a pretty bad Groundhog Day.

### Part 5

And yes, it continues. We can take a set of size dingir-one, each of whose elements is a set of size dingir-one containing dingir-one computers, and do the same thing to get om-two and dingir-two, then om-three and dingir-three... until...

We get to ॐ_ॐ. By extension, we could even get ॐ_ॐ_ॐ or ॐ_ॐ_ॐ_ॐ.

Let's represent the number of oms (ॐ) in a chain as *n* in the function OMNI(*n*). Thus, ॐ_ॐ is equal to OMNI(2).
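The values here are far beyond anything computable, so the best a program can do is build the notation itself. A sketch that constructs the subscript chain OMNI(n) counts (`om_chain` is my name, and treating the chain purely as a string is my simplification):

```python
def om_chain(n):
    """Build the n-om subscript chain as a string: n=1 -> "ॐ", n=2 -> "ॐ_ॐ"."""
    expr = "ॐ"
    for _ in range(n - 1):
        expr = "ॐ_" + expr  # subscript the chain built so far under another ॐ
    return expr
```

Under this reading, OMNI(n) is just "the number denoted by a chain of n oms", so the function only manipulates notation, never the (astronomically large) values.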

From here, continue from the OMNI(n) part of Part 3 in the original post above. (I'm still trying to figure out how to format it exactly, but I'm getting there!)