Sorry for starting a new thread, but SY suddenly closed
the thread where this was discussed and I was not allowed
to email millwood through the forum, thus leaving me no
other option.
Millwood said something about proving "1+1" and I commented
that this cannot be proved or disproved since it is not a
statement and suggested that maybe he meant proving
whether "1+1=2". To this he responded:
sure you can. both 1+1 and 1+2 come from a very old mathematical conjecture. and 1+2 has been proven (partially) but 1+1 remains.
I am puzzled what you mean by this. You can only prove or
disprove a logical proposition, and neither "1+1" nor "1+2"
is a logical proposition (claim, statement, whatever you
wish to call it). Could you please elaborate a bit on what you
mean, especially since you claimed that one of these is
partially proven. Assuming the usual algebraic system
(which is not an obvious assumption to make), you could just
as well say that it is not proven whether "2" but it is partially
proven that "3", which makes the problem clearer, I think.
You also said:
Christer said: Perhaps he meant
proving whether "1+1 = 2", which is or isn't true depending
on our choice of algebraic system.
This reminds me of cables discussions. Why would you like to choose the system for which it is not true? 😉
I don't want to start another thread here, but millwood also said:
millwood said:
Again, awareness is an acquired taste and not everyone is gifted with it.
As far as I understand, awareness is consciousness and knowledge, and this, unlike taste, is not biased. It is also not a gift, but something one gains by following the right path.
Christer:
1+1, 1+2 (and 1+3) are all different forms of a mathematical conjecture, the Goldbach conjecture (actually the strong Goldbach conjecture, as stated by Euler). Essentially, it states that any even number greater than 2 can be expressed as the sum of two prime numbers (thus "1+1").
In the 1970s, it was proven that sufficiently large even numbers can be expressed as the sum of a prime number and the product of no more than two prime numbers (thus "1+2");
1+1 remains unproven at this point.
That should be sufficient as a starter for this discussion.
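To make the statement concrete, here is a quick Python sketch (the helper names are invented for this illustration, and checking small cases obviously proves nothing about the conjecture in general):

def is_prime(n):
    # trial division, fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n):
    # return primes (p, q) with p + q == n, or None if no such pair exists
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 51, 2):            # every even number from 4 to 50
    print(n, "=", goldbach_pair(n))  # e.g. 10 = (3, 7)

Every even number tried so far has such a pair; what is missing is a proof that one always exists.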
You're right, 1 + 1 = 2 can't be proved any more than any other axiom. That's why it's an axiom and not a theorem. You can construct all kinds of arithmetical systems where this isn't true (one trivial example is binary, where 1 + 1 = 10).
BTW, I got a reference wrong in the other thread; treatment of these issues is in Courant and Robbins, "What Is Mathematics?".
<Not one to delve into soon-to-be-hugenormous threads but whatever>
Seems easy enough to me.. define a number line with counting numbers 0, 1, 2, 3, . . . (on to infinity, and also going to negative infinity left of zero if you want to be complete) and then we define addition as moving a point on the line a corresponding number of spaces to either side. (Add algebraically, e.g. 4 + (-2) = 2.) If it is a real object like a ruler and the numbers are evenly spaced, then it's simple to measure a length and lay it off on either side of the point, mark the new point, and there you have it.
But then again I get the feeling I missed the point. But I don't get the point to begin with; 1+1 is obvious enough. If you have two equal quantities (1 = 1), then adding them together will double the quantity (algebraically, x + x = 2x (adding common terms), and letting x=1, 1 + 1 = 2*1 = 2).
I think I still missed the point. But then I'm no philosopher, I tend not to argue the logic itself, because I know logic works.
Tim
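Tim's ruler picture, written out as a tiny Python sketch (the function name is invented here purely for illustration): addition as shifting a marked point along the integer line.

def move(point, steps):
    # move right if steps > 0, left if steps < 0, one unit at a time
    for _ in range(abs(steps)):
        point = point + 1 if steps > 0 else point - 1
    return point

print(move(1, 1))    # 1 + 1 -> 2
print(move(4, -2))   # 4 + (-2) -> 2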
1+1=2 is basically a definition. Goldbach's conjecture does not "prove" it. In fact, it has very little to do with "proving" 1+1=2.
You can't just define the numbers like that (referring to the "ruler" comment). It is more involved than that. Your "axiom" usually starts with the definition of a set, and the empty set.
1+1=10 is the same arithmetic, you're just redefining what the symbols mean.
1+1=0 (arithmetic mod 2) IS a different arithmetic system. Note that in this system, 2 = 0, which is not a coincidence.
Whatever. I dealt with this a long time ago.
Won
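Won's distinction, spelled out as a small Python sketch (nobody's official definition, just an illustration): binary "10" is only another spelling of two, while arithmetic mod 2 genuinely changes the system.

assert int("10", 2) == 2            # the binary numeral 10 denotes the number two
assert format(1 + 1, "b") == "10"   # 1+1 written out in base 2

assert (1 + 1) % 2 == 0             # mod 2, addition wraps around
assert 2 % 2 == 0                   # and 2 "is" 0 there, which is not a coincidence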
Heck??...what does all this thread have to do with DIYAUDIO???....shall we get into boolean algebra and hex code...or is it that someone has a chip on their shoulder that needs to be slapped off....all I can say is stick with the theme guys, we have some smart minds here and I can tell the ones who are fresh out of school (or are in!)...but rifling off this verbatim in THIS FORUM is fruitless because you're wasting the ink in your keyboard
DIRT®
Re: Re: Is "1+1" provable?
Peter Daniel said: This reminds me of cables discussions. Why would you like to choose the system for which it is not true? 😉
There are lots of situations. For instance, if you are a binary computer, then 1+1=10.
Millwood, 1+1 by itself cannot be proven; when you add the context that makes it the Goldbach conjecture, it is a completely different beast.
As Christer says, proving 1+1 is topologically equivalent to proving "spot".
dave
JOE DIRT® said: Heck??...what has all this thread have to do with DIYAUDIO>???
Well, this is the Everything Else forum, so implying that any post here is off-topic doesn't make sense.
A few thoughts on this from someone who has studied logic, math, and some related subjects.
This, btw, is just what I can deduce myself, and not necessarily something I was taught explicitly in school.
(btw, all of this was said at some point already, just not in as structured a way)
First of all, I assume you guys are talking about proving 1+1=2 (or 1+2=3 for that matter, etc).
You don't prove something like that. It's an atomic concept; I believe that (as previously stated) it's an axiom (although I do not know for sure whether you can consider it an axiom or not).
When you say 1+1 = 2, you are using elementary concepts taken to be true within the domain of what we define as algebra. It comes from the definition of the addition operation (which, btw, I do not know the 'official' definition of), but being an elementary concept we do not attempt to prove it.
We do use axioms (like 1+1=2) in order to prove further things, however. We use axioms to build theorems. Theorems, to deserve the name, have to be provable to be valid, or disprovable. This axiom, on the other hand, we know to be true a priori (at least I think so ..)
Also, someone mentioned that in binary 1+1=10. That is exactly the same as saying 1+1=2, the only difference being that binary '10' is 2 in decimal. It's just a different representation of the numbers.
In Roman numerals it's I+I=II, in hex it's 1+1=2, and in my own system (where A=1, B=2, C=3, etc.) it's A+A=B. It's all the same concept, represented differently.
This is all limited by what we define addition to be in the algebraic/arithmetical system. As someone pointed out, if you use modular arithmetic, say mod 2, then 1+1=0 (yes, it IS true). Or, if you want another illustration of that, take mod 1, where 1+1=1.
Now, you may be outraged at the fact that 1+1=0, but you have to remember that you are not dealing with traditional arithmetic; you are in fact dealing with what's called modular arithmetic (which is several courses' worth of studying, if not more .. but one was more than enough for me!)
Anyway, if anyone would like to offer a formal proof of 1+1=2, I'll congratulate them. However, that's not very likely 😉
(While you're at it, attempt to prove that "TRUE AND FALSE = FALSE" when dealing with boolean logic.)
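Two of those points are easy to play with in a few lines of Python (a throwaway sketch; the letters-for-numbers scheme and the function name are made up here just to echo the post):

def to_letters(n):
    # toy numeral system: 1 -> "A", 2 -> "B", 3 -> "C", ...
    return chr(ord("A") + n - 1)

print(to_letters(1 + 1))   # "B": the same fact as 1+1=2, merely respelled

# and the parting challenge, taken as a finite check rather than a proof:
# in two-valued logic AND is defined by its truth table, so just enumerate it
for a in (True, False):
    for b in (True, False):
        print(a, "AND", b, "=", a and b)

The second loop prints the whole truth table, including the line True AND False = False.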
Another way to say it is that 1+1 is the definition of 2 (where the operators = and + are assumed). Indeed, having only the concept of 1 and an increment/successor operator (+1), you can define from there the whole set of natural numbers (without 0) by iterative application of the successor. {+, =, 1, ...} are mere symbols, and their meaning depends on the formal system you are working in.
Prune said: Another way to say it is that 1+1 is the definition of 2 (where the operators = and + are assumed). Indeed, by having only the concepts of 1 and an increment/successor operator (+), then you can define from there the whole set of natural numbers (without 0).
That's just it though: if you choose to use something like arithmetic, you take certain things for granted because they form the foundation of arithmetic. For example, we take for granted what addition is because it's a fundamental part of arithmetic. It is defined for us; there is no need to prove it.
You don't prove a definition. You can't prove something like (and I'm probably going to think of a bad example, but ...) .. ok, maybe this one will work:
a line is a collection of points all lying in the same plane, etc etc (can't remember the exact definition of what a line is from grade 11 geometry!). Using that you can prove whether a certain geometric object is a line or not, but you can't prove what a line is. It's a definition.
Same with 1+1. We define what addition is, and through the definition of addition we know what 1+1 is. Actually, that makes me think.
That would mean that 1+1 IS provable because of the definition of addition (which, of course, I do not know the formal version of).
Alright, hello google, here we go:
First of all we deal with natural numbers. Peano has 5 postulates dealing with natural numbers, of which we are concerned with the following:
1) There is a natural number 0.
2) Every natural number a has a successor, denoted by a+1.
This defines the incrementation function for us (the successor of 1 is 2, the successor of 2 is 3, etc).
Now, addition (for natural numbers) is an operation
a+b=c
where a, b, and c are natural numbers.
There are 2 axioms that deal with addition:
1) a+0 = a
2) a+(b*) = (a+b)* (where b* is the successor of b, also denoted b+1)
Along with the axioms we have the rules of associativity and commutativity (which are provable, but I won't go into that!)
So, we take a=1 and b=0
we get:
a+(b*) = (a+b)*
1+(0*) = (1+0)*
left side first:
1+(0*) = 1+1 (0* is the successor of 0, which by definition is the number 1)
now we have 1+1 on the left side
right side:
(1+0)* = 1* = 2 (by axiom 1, 1+0 = 1, and the successor of 1 is 2)
There is your quickly assembled proof that 1+1=2
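The same derivation can be typed into Python as a sanity check (a toy encoding chosen here for illustration, not a formal proof system): represent a natural number by how many times the successor has been applied to zero, define addition by exactly the two axioms above, and see that 1+1 and 2 come out as the same object.

ZERO = ()

def succ(n):
    # the successor n*, encoded by wrapping n in another tuple
    return (n,)

def add(a, b):
    if b == ZERO:               # axiom 1: a + 0 = a
        return a
    return succ(add(a, b[0]))   # axiom 2: a + b* = (a + b)*

ONE = succ(ZERO)
TWO = succ(ONE)

print(add(ONE, ONE) == TWO)     # True: 1 + 1 = 2 under these definitions

Of course this only checks the computation; the proof itself is the short argument above.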
Re: Re: Is "1+1" provable?
millwood said: 1+1, 1+2 (and 1+3) are all different forms of a mathematical conjecture, Goldbach conjecture (actually strong Goldbach conjecture as presented by Euler). Essentially, it states that any even number can be expressed as the sum of two prime numbers. (thus 1+1).
OK, then I see what you mean, but you left out the necessary
context in your previous post. "1+1" and "1+2" are still not
something that can be proven, "any even number can be
expressed as the sum of two primes" is on the other hand
a logical statement and thus is either true or false. So what
you meant is whether this is proven for the special cases
of "1+1" and "1+2". Seems you got something wrong though,
since "1+2" is not an even number in usual algebra.
Re: Re: Is "1+1" provable?
Peter Daniel said: This reminds me of cables discussions. Why would you like to choose the system for which it is not true? 😉
Mathematics per se is not concerned with the real world, it is
just a tool that can or cannot be used for real-world applications,
so mathematicians often study the properties of mathematical
concepts without regard to practical usefulness.
However, algebras different from the standard one are useful
in many cases, especially in computer science and the digital
world. Consider for example an 8-bit digital system and for
simplicity encode only non-negative numbers. We then have a limited
number range from 0 to 255. That means for instance that
255+1 = 0 and not 256 as it would be in standard algebra.
For a one-bit system we have 0+0=0, 0+1=1+0=1 but
1+1=0, which also happens to describe the behaviour of
an XOR gate. What happens here is basically that we limit
the domain of numbers and redefine the "+" operator.
From a logical point of view, we may even redefine the meaning
of the symbols "1" and "2", but that is usually of lesser
practical use.
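Both examples are easy to check in Python (a quick sketch; the function name is made up for this illustration): unsigned 8-bit addition keeps results modulo 256, and one-bit addition matches the XOR truth table.

def add_u8(a, b):
    # addition in an unsigned 8-bit register: the result wraps around at 256
    return (a + b) % 256

print(add_u8(255, 1))   # 0, not 256

for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == a ^ b   # ^ is bitwise XOR
print((1 + 1) % 2)      # 0 in the one-bit system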
As for the claim/assumption by some people that 1+1=2 etc.
are given to us as axioms, that is not true. Yes, we could
choose to define such axioms, but since the natural numbers
are infinitely many, although only countably infinite, we would
have to write down an infinite number of such axioms (to
surprise some people even more, the number of such axioms
will be the square of the number of natural numbers, but yet
there are no more such axioms than there are natural numbers!!).
Hence, we have to go about doing it in a different way so we
get only a finite number of axioms. That is why the algebra
for natural numbers is based on the Peano axioms as I
mentioned in the previous thread. It seems elizard gave a
fair account of these in his googling results, so I don't think
I need to add anything more there.
Someone, Won I think, also mentioned defining the number
0 as the empty set. That is not necessary and not defined
by the Peano axioms. However, there are cases in computer
science, for instance, where we need to go beyond the size
of the set of natural numbers but still stay countable in a
pseudo sense. Then it is common to define 0 as the empty
set {}, 1 as the set of the empty set {{}}, 2 as the set of
0 and 1, i.e. {{},{{}}} etc. (if I remember this correctly).
Now, let N be the set representation of the size of the set of
natural numbers written in the above notation (which we cannot
write down, since it is infinitely long). Then we can step beyond
infinity, since we can write "infinity+1" as N together with N
itself as an extra member, i.e. N ∪ {N}. Note
though that we cannot write this out in full, since N is an
infinitely long string, so we must use a symbol, in this case N,
to denote this string. I am somewhat unsure of the details here,
though, since I haven't had any use of this since I took the logic
programming course almost 20 years ago.
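That set encoding of the first few numbers is easy to spell out in Python (a toy sketch using frozensets; the variable names are invented here). Each number is literally the set of all smaller numbers, and the successor of n is n together with {n}:

zero = frozenset()

def succ(n):
    # successor: n together with the set containing n itself
    return n | frozenset([n])

one = succ(zero)    # {{}},        i.e. {0}
two = succ(one)     # {{}, {{}}},  i.e. {0, 1}

print(one == frozenset([zero]))        # True
print(two == frozenset([zero, one]))   # True

The same successor rule is what gives "infinity+1" above: take the whole set N and add N itself as a member.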
Re: Re: Re: Is "1+1" provable?
"1+1" and "1+2" are both "notions", or expressions, of the Goldbach conjecture. They do not in any way, shape or form, mean anything in the arithematic sense of 1+1 = 2, or 1+2 = 3.
People who don't know goldbach conjecture may not recognize either, as this example demonstrated, but for those who are in the know, they know.
so maybe the way to go is to ask a mathematician who works on the number theory what it is and it should be self-explainatory.
Christer said:
OK, then I see what you mean, but you left out the necessary
context in your previous post. "1+1" and "1+2" are still not
something that can be proven, "any even number can be
expressed as the sum of two primes" is on the other hand
a logical statement and thus is either true or false. So what
you meant is whether this is proven for the special cases
of "1+1" and "1+2". Seems you got something wrong though,
since "1+2" is not an even number in usual algebra.
"1+1" and "1+2" are both "notions", or expressions, of the Goldbach conjecture. They do not in any way, shape or form, mean anything in the arithematic sense of 1+1 = 2, or 1+2 = 3.
People who don't know goldbach conjecture may not recognize either, as this example demonstrated, but for those who are in the know, they know.
so maybe the way to go is to ask a mathematician who works on the number theory what it is and it should be self-explainatory.
Re: Re: Re: Re: Is "1+1" provable?
millwood said:
"1+1" and "1+2" are both "notions", or expressions, of the Goldbach conjecture. They do not in any way, shape or form, mean anything in the arithematic sense of 1+1 = 2, or 1+2 = 3.
People who don't know goldbach conjecture may not recognize either, as this example demonstrated, but for those who are in the know, they know.
so maybe the way to go is to ask a mathematician who works on the number theory what it is and it should be self-explainatory.
Maybe those into number theory use it as a shorthand, but
without the context of number theory it doesn't denote anything
that can be proven, and we didn't have that context in the
other thread. It is a bit like if I were to mention that we still don't
know whether P=NP. Anybody with some knowledge of
theoretical computer science immediately knows what I mean,
while most others wouldn't have a clue what I am talking
about, even if I mentioned it was in the context of computer science
or even if I explained what it means.
Edit: I should add that I take back what I said about there being
something wrong since 1+2 isn't an even number. I seem to
have missed the second half of your first post. Hey, I had still
only had one cup of coffee when reading it. 🙂
I remember seeing what was supposed to be a proof of "1+1=2" somewhere; I am quite sure it was taken from "Principia Mathematica" by Russell & Whitehead.
Unfortunately I don't remember which axioms they were using.
Re: Re: Re: Is "1+1" provable?
Christer said:
Mathmematics per se is not concerned with the real world, it is
just a tool that can or cannot be used for real-world applications,
so mathematicians often study the properties of mathematical
concepts without regard to practical usefullness.
That again reminds me of the cables disputes. Most of the participants are not concerned with real-world applications, but try to discuss it theoretically without regard to practical usefulness.
I just couldn't resist it 😉