Let \(a(1) = 1\).
Define \(a(n) = a(n-1) + n^k\), where \(k\) is the lowest positive integer such that \(a(n-1)+n^k\) has more decimal digits than \(a(n-1)\).
Is it always true that \(a(n)\) has \(n\) digits?
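The recurrence is easy to sketch directly. A minimal version in Python (rather than the Haskell mentioned later in the thread, which is only in an image; the names `digits` and `a` are my own):

```python
def digits(m):
    """Number of decimal digits of a positive integer."""
    return len(str(m))

def a(n):
    """a(1) = 1; a(n) = a(n-1) + n**k, where k is the smallest
    positive integer making the digit count strictly increase."""
    term = 1
    for i in range(2, n + 1):
        k = 1
        while digits(term + i**k) <= digits(term):
            k += 1
        term += i**k
    return term
```

By construction the digit count strictly increases at every step, so \(a(n)\) has at least \(n\) digits; the question is whether it ever overshoots by gaining two or more digits at once.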
One question: why require \(k\) to be nonzero? There could be a situation where just adding 1 (i.e. \(k = 0\)) would increase the number of digits.
Thinking naively, if \(n = 100\) then the number of digits could easily increase by 2.
Also... experimentally ...
I think that a(22) = 3113440232155803840515
which has 22 digits
a(23) = 144163479792818772766618
which has 24 digits
See image for the Haskell code...
3113440232155803840515 + 23 ^ 16 = 9246050647836802489476
3113440232155803840515 + 23 ^ 17 = 144163479792818772766618
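Those two sums are easy to re-check with Python's arbitrary-precision integers (this just redoes the arithmetic above, not the original Haskell):

```python
a22 = 3113440232155803840515
assert len(str(a22)) == 22   # a(22) has 22 digits
s16 = a22 + 23**16           # k = 16: still 22 digits, no gain
s17 = a22 + 23**17           # k = 17: jumps straight to 24 digits
print(len(str(s16)), len(str(s17)))  # → 22 24
```

So the smallest usable \(k\) at \(n = 23\) skips from 22 digits directly to 24, and the answer to the original question is no.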
@christianp Also... I couldn't find the sequence on oeis!
@kat you're right, non-negative k would do