The "proof" that says 0.999... = 1 goes like this:
(1) x = 0.999...
(2) 10x = 9.999...
(3) 9x = 9 (subtracting (1) from (2))
(4) x = 1
(5) 0.999... = 1
In (1) it's stated that x equals 0.999... with an infinite number of 9s. When you multiply it by 10 in (2), you get 9.999..., also with an infinite number of 9s. But this can't be true: in (2) you must get 9.999... with one fewer 9 than in (1). If "infinite" were 3, for example, you would have x = 0.999 in (1) and 9.99 in (2). 9.99 - 0.999 isn't 9; it's 8.991. Divide that by 9 in (4) and you get 0.999, not 1.
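Here is a quick sketch of that 3-digit case in Python, using the standard decimal module so the arithmetic is exact rather than floating-point (the variable names are mine, not part of the argument):

from decimal import Decimal

# x truncated to exactly three 9s, as in the example above
x = Decimal("0.999")

# Multiplying by 10 shifts the digits; only two 9s remain after the point
ten_x = 10 * x            # 9.990

# The subtraction in step (3) no longer gives exactly 9
difference = ten_x - x    # 8.991

# Dividing by 9 returns the truncated x, not 1
print(difference / 9)     # prints 0.999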
By writing x = 0.999... in (1), you have already fixed the number of decimals, which is arbitrarily large. (2) can't have the same number of decimals; it must have (infinity - 1) decimals. For this proof to work, you have to add another decimal in (2). 9.999... - 0.999... will not come out to exactly 9 if the numbers of decimals aren't the same.
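The same check works for any finite number of 9s, not just three. A short sketch along the same lines (truncated_nines is a name I made up for this post):

from decimal import Decimal, getcontext

getcontext().prec = 60  # enough digits for the truncations below

def truncated_nines(n):
    # 0.999...9 with exactly n nines after the decimal point
    return Decimal(1) - Decimal(10) ** -n

# For every finite n, 10x - x falls short of 9 by exactly 9 * 10**-n,
# so dividing by 9 just gives back the truncated x, never 1.
for n in (3, 10, 50):
    x = truncated_nines(n)
    print(n, (10 * x - x) / 9 == x, 9 - (10 * x - x))

For n = 3 the shortfall is 0.009, matching the 8.991 case above; it shrinks as n grows but is never zero for any finite n.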
Just a thought.
Joakim