# Ackermann function


| Ackermann function | |
| --- | --- |
| Based on | Successor function |
| Growth rate | \(f_{\omega}(n)\) |

The **Ackermann function** \(A(x,y)\) is a recursive function originally invented by Wilhelm Ackermann, then revised and simplified first by Rózsa Péter and later by Raphael M. Robinson. Robinson's version, which is the one known today as the Ackermann function, is defined as follows:

- \(A(x,y) = y + 1\) if \(x = 0\),
- \(A(x,y) = A(x - 1,1)\) if \(y = 0\), or
- \(A(x,y) = A(x - 1,A(x,y - 1))\) otherwise.^{[1]}

The function grows at a rate similar to that of the much lesser-known Sudan function.

As an example, the full expansion of \(A(2,2)\) is given below:

\[ \begin{array}{cclclcl} A(2,2) &=& A(1,A(2,1))\\ &=& A(1,A(1,A(2,0)))&&\\ &=& A(1,A(1,A(1,1)))\\ &=& A(1,A(1,A(0,A(1,0))))\\ &=& A(1,A(1,A(0,A(0,1))))\\ &=& A(1,A(1,A(0,2)))\\&=& A(1, A(1, 3))\\ &=& A(1, A(0, A(1, 2)))\\ &=& A(1, A(0, A(0, A(1, 1))))\\ &=& A(1, A(0, A(0, A(0, A(1, 0)))))\\ &=& A(1, A(0, A(0, A(0, A(0, 1)))))\\ &=& A(1, A(0, A(0, A(0, 2))))\\ &=& A(1, A(0, A(0, 3)))\\ &=& A(1, A(0, 4))\\ &=& A(1, 5)\\ &=& A(0, A(1, 4))\\ &=& A(0, A(0, A(1, 3)))\\ &=& A(0, A(0, A(0, A(1, 2))))\\ &=& A(0, A(0, A(0, A(0, A(1, 1)))))\\ &=& A(0, A(0, A(0, A(0, A(0, A(1, 0))))))\\ &=& A(0, A(0, A(0, A(0, A(0, A(0, 1))))))\\ &=& A(0, A(0, A(0, A(0, A(0, 2)))))\\ &=& A(0, A(0, A(0, A(0, 3))))\\ &=& A(0, A(0, A(0, 4)))\\ &=& A(0, A(0, 5))\\ &=& A(0, 6)\\ &=& 7 \end{array} \]
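The three-case recursion can be traced by a direct implementation; a minimal sketch in Python (the name `ackermann` is ours), which reproduces the value \(A(2,2) = 7\) computed above:

```python
def ackermann(x, y):
    """Robinson's two-argument Ackermann function, by direct recursion."""
    if x == 0:
        return y + 1                              # A(0, y) = y + 1
    if y == 0:
        return ackermann(x - 1, 1)                # A(x, 0) = A(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))  # A(x, y) = A(x - 1, A(x, y - 1))

print(ackermann(2, 2))  # 7, matching the expansion above
print(ackermann(3, 3))  # 61
```

The naive recursion is hopeless beyond roughly \(x = 3\): the number of calls and the recursion depth grow with the output itself.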

Using Knuth's up-arrow notation, \(A(x,y) = 2\uparrow^{x-2}(y+3)-3\).
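The identity can be spot-checked for \(x \ge 2\), where the arrow count is nonnegative; a minimal sketch, taking \(\uparrow^{0}\) to mean ordinary multiplication:

```python
def arrow(a, n, b):
    """Knuth's a ↑^n b, with ↑^0 taken as ordinary multiplication."""
    if n == 0:
        return a * b
    result = 1  # a ↑^n 0 = 1 by convention
    for _ in range(b):
        result = arrow(a, n - 1, result)
    return result

def ackermann(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))

# Check A(x, y) = 2 ↑^(x-2) (y+3) - 3 on small arguments.
for x in range(2, 4):
    for y in range(4):
        assert ackermann(x, y) == arrow(2, x - 2, y + 3) - 3
```

With the usual extended conventions (\(\uparrow^{-1}\) as addition, \(\uparrow^{-2}\) as succession) the identity covers \(x = 0\) and \(x = 1\) as well.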

## Friedman's definition

The definition of the Ackermann function varies slightly between authors. For example, Harvey Friedman defines it as follows:^{[2]}

- \(A(x,y) = 1\) (if \(y=0\))
- \(A(x,y) = 2y\) (if \(x=1\))
- \(A(x,y) = A(x-1,A(x,y-1))\) (otherwise)

\(A(x,y)\) is equal to \(2 \uparrow^{x-1} y\).
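This version can likewise be spot-checked against up-arrow notation for \(y \ge 1\) (at \(x = 1\), \(y = 0\) the first rule gives \(1\) rather than \(2 \cdot 0\)); a sketch, again taking \(\uparrow^{0}\) as multiplication:

```python
def friedman(x, y):
    """Friedman's variant of the Ackermann function (rules applied in order)."""
    if y == 0:
        return 1
    if x == 1:
        return 2 * y
    return friedman(x - 1, friedman(x, y - 1))

def arrow(a, n, b):
    """Knuth's a ↑^n b, with ↑^0 taken as multiplication."""
    if n == 0:
        return a * b
    result = 1
    for _ in range(b):
        result = arrow(a, n - 1, result)
    return result

# Check A(x, y) = 2 ↑^(x-1) y on small arguments.
for x in range(1, 5):
    for y in range(1, 4):
        assert friedman(x, y) == arrow(2, x - 1, y)
```

For instance \(A(3,3) = 2 \uparrow\uparrow 3 = 16\) and \(A(4,3) = 2 \uparrow\uparrow\uparrow 3 = 65536\).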

The Ackermann function is related to the Ackermann numbers, as the two exhibit equivalent growth rates. "Ackermann function" often refers to the single-argument function \(A(n) = A(n, n)\), which is also known as the gag function.

The inverse of the single-argument Ackermann function, denoted \(\alpha(n)\), is called the **inverse Ackermann function**.^{[3]} Since the Ackermann function grows rapidly even for small inputs, the inverse Ackermann function grows extremely slowly. It appears in computational complexity theory, for example in the amortized analysis of the disjoint-set (union–find) data structure.

## Goucher's definition

In a blog post, A. P. Goucher proposed the following definition of the Ackermann function:^{[4]}

- \(f_1(n)=n+2\)
- \(f_{m+1}(n) = f_m^n(2)\)
- \(A(n) = f_n(n)\)
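This hierarchy can be evaluated directly for tiny arguments (already \(A(4) = f_3(f_3(f_3(f_3(2))))\) is far too large to compute); a minimal sketch:

```python
def f(m, n):
    """Goucher's hierarchy: f_1(n) = n + 2, and f_{m+1}(n) = f_m^n(2),
    i.e. f_m iterated n times starting from 2."""
    if m == 1:
        return n + 2
    x = 2
    for _ in range(n):
        x = f(m - 1, x)
    return x

def A(n):
    """Goucher's single-argument Ackermann function A(n) = f_n(n)."""
    return f(n, n)

print([A(n) for n in range(1, 4)])  # [3, 6, 30]
```

Here \(f_2(n) = 2n + 2\) and \(f_3(n) = 2^{n+2} - 2\), so e.g. \(A(3) = f_2(f_2(f_2(2))) = 30\).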

The post also describes yet another variant of the Ackermann function, related to the following problem:

Given a row of boxes with some number of coins in each, we may repeatedly pick a box n containing at least one coin and apply one of the following rules:

- Remove one coin from box n and add two coins to box n+1.
- Remove one coin from box n and exchange the (possibly empty) contents of boxes n+1 and n+2.

A play ends exactly when all boxes except the rightmost one are empty, since rule 1 applies whenever any other box still holds a coin. Starting from a row of n boxes with one coin in each, f(n) is the largest number of coins that can end up in the rightmost box. Computing exact values of this function can be tricky, but it is easy to check that \(f(1) = 1\), \(f(2) = 3\) and \(f(3) = 7\). Proving that \(f(4) = 28\) is slightly harder.
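Reading rule 2 as exchanging the contents of the two neighbouring boxes, the small values can be confirmed by exhaustive search over all plays (the names `coin_game` and `best` are ours); every move removes a coin from some box while only touching boxes to its right, so the state space is finite and the search terminates:

```python
from functools import lru_cache

def coin_game(n):
    """Largest final count in the rightmost box, starting from one coin per box."""
    @lru_cache(maxsize=None)
    def best(state):
        results = []
        for i in range(n - 1):
            if state[i] == 0:
                continue
            # Rule 1: remove one coin from box i, add two coins to box i+1.
            s = list(state)
            s[i] -= 1
            s[i + 1] += 2
            results.append(best(tuple(s)))
            # Rule 2: remove one coin from box i, exchange boxes i+1 and i+2.
            if i + 2 < n:
                t = list(state)
                t[i] -= 1
                t[i + 1], t[i + 2] = t[i + 2], t[i + 1]
                results.append(best(tuple(t)))
        # No move possible: all boxes but the last are empty; score the last box.
        return max(results) if results else state[-1]
    return best((1,) * n)

print([coin_game(n) for n in range(1, 5)])  # [1, 3, 7, 28]
```

Beyond n = 4 the search is hopeless, as the reachable counts themselves grow at Ackermann-like rates.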