Let \(a(1) = 1\).
Define \(a(n) = a(n-1) + n^k\), where \(k\) is the lowest positive integer such that \(a(n-1)+n^k\) has more decimal digits than \(a(n-1)\).

Is it always true that \(a(n)\) has \(n\) digits?
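For concreteness, here is a quick sketch of the recurrence in Python (the original thread used Haskell, but that code is only in an attached image; the function names here are my own):

```python
def num_digits(x):
    """Number of decimal digits of a positive integer."""
    return len(str(x))

def next_term(prev, n):
    """prev + n**k for the smallest k >= 1 that gains a digit."""
    k = 1
    while num_digits(prev + n**k) <= num_digits(prev):
        k += 1
    return prev + n**k

def first_terms(count):
    """The first `count` terms of the sequence, starting from a(1) = 1."""
    terms = [1]
    for n in range(2, count + 1):
        terms.append(next_term(terms[-1], n))
    return terms

print(first_terms(10))
# [1, 17, 260, 1284, 16909, 296845, 1120388, 17897604, 405318093, 1405318093]
```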


One question: why require \(k\) to be nonzero? Perhaps there could be a situation where just adding \(n^0 = 1\) increases the number of digits.

Thinking naively, if \(n = 100\) then the number of digits could easily increase by 2 in one step, since consecutive powers of \(n\) differ by a factor of 100.

Also... experimentally ...

I think that a(22) = 3113440232155803840515
which has 22 digits

a(23) = 144163479792818772766618

which has 24 digits

See image for the Haskell code...


3113440232155803840515 + 23 ^ 16 = 9246050647836802489476

3113440232155803840515 + 23 ^ 17 = 144163479792818772766618
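Checking that arithmetic directly (plain Python, using the \(a(22)\) value quoted above):

```python
a22 = 3113440232155803840515           # a(22), which has 22 digits

# k = 16 is not enough: the sum still has 22 digits...
assert len(str(a22 + 23**16)) == 22

# ...so k = 17 is forced, and the digit count jumps straight to 24.
a23 = a22 + 23**17
print(a23, len(str(a23)))
# 144163479792818772766618 24
```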

@christianp Also... I couldn't find the sequence on the OEIS!
1,17,260,1284,16909,296845,1120388,17897604,405318093,1405318093 ...

@kat @christianp
\(n = 25\) would have been another candidate for a counterexample.
(I tried to come up with bounds; there are quite a lot of cases where there's only one possible value of \(k\).)
