Discussion:
Solution to Alan Turing’s 1936 Halting Problem Version(4) [4 paradoxes rolled into one]
peteolcott
2018-11-23 15:08:49 UTC
So far no one besides Noam Chomsky understands that syntactically
correct expressions of language can be semantically incorrect.
https://en.wikipedia.org/wiki/Colorless_green_ideas_sleep_furiously
You clearly weren't paying attention in the 1970s when this was being debated in linguistics.
Go read up on one of the generative semanticists' favourite examples: "Spiro conjectures Ex-Lax" (from Morgan 1973).
Who here knows the criterion measure for semantically incorrect WFF?
Anyone who isn't too lazy or too stupid to learn how models work, i.e., not you.
We are finally at the point where it makes perfect sense that I totally
understand the one single aspect of model theory known as satisfiability.
Consider ∀x(∃y(P(x,y) & (Q(x) v R(y)))). Do you know how to rigorously specify what makes it satisfiable?
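One rigorous reading of the question: the formula is satisfiable iff some interpretation of P, Q, R over some domain makes it true. A brute-force check over a two-element domain can make that concrete (a sketch added for illustration, not part of the thread; the dict-based encoding of interpretations is an assumption):

```python
from itertools import product

DOMAIN = [0, 1]  # a tiny finite universe

def satisfies(P, Q, R):
    """True iff the interpretation (P, Q, R) makes
    forall x exists y (P(x,y) and (Q(x) or R(y))) come out true."""
    return all(
        any(P[(x, y)] and (Q[x] or R[y]) for y in DOMAIN)
        for x in DOMAIN
    )

def satisfiable():
    """Enumerate every interpretation of P, Q, R over DOMAIN."""
    pairs = list(product(DOMAIN, DOMAIN))
    for p_bits in product([False, True], repeat=len(pairs)):
        P = dict(zip(pairs, p_bits))
        for q_bits in product([False, True], repeat=len(DOMAIN)):
            Q = dict(zip(DOMAIN, q_bits))
            for r_bits in product([False, True], repeat=len(DOMAIN)):
                R = dict(zip(DOMAIN, r_bits))
                if satisfies(P, Q, R):
                    return True
    return False
```

Making P and Q true everywhere is one witnessing interpretation, so the search succeeds.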
Changing the subject does not help.
I didn't. I'm talking about satisfiability. Since you (obviously) can't answer the question, it's clear that you don't understand satisfiability.
I am not shooting to be all knowing about satisfiability.
You scored perfectly, then!
I only need to know enough about the satisfiability of
the second expression on this page to prove the point that
this page makes.
You won't. You can't. You are physically incapable of understanding what you need to understand.
EFQ
It would seem this way from the POV of conventional misconceptions.
The key misconception is that the conventional account of unsatisfiability
has never bothered to account for infinitely recursive structure.

Instead of realizing that an expression of language is erroneous
because it has an infinitely recursive structure: {logic, math,
and computer science} are thought to have fundamental limitations.

No one has ever understood what is going on with the Liar Paradox.
I can now show this so that Peter Percival totally understands:

LP := ~True(LP)
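Read operationally, that definition never bottoms out: evaluating True(LP) requires evaluating LP, which requires True(LP) again. A hypothetical Python sketch of that loop (True_ is an assumed stand-in for a truth predicate, not anyone's actual formalism):

```python
def True_(sentence):
    """Hypothetical truth predicate: evaluate the sentence."""
    return sentence()

def LP():
    """LP := ~True(LP) -- each call re-enters itself."""
    return not True_(LP)

# Evaluating LP never terminates; Python's recursion limit trips
# instead, which is the "infinitely recursive structure" made visible.
try:
    LP()
except RecursionError:
    print("infinitely recursive structure")
```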

What no one besides me understands is that the three expressions shown below
have exactly this same problem. It is documented on USENET that I have known
this for thirty years.

(1) x ∉ Pr ↔ x ∈ Tr

(2) G ↔ ~(F ⊢ G) (as defined below)
∃F ∈ Formal_Systems (∃G ∈ Language(F) (G ↔ ~(F ⊢ G)))

(3) H ([Ĥ], [Ĥ]) (as defined below)
Definition of Turing Machine H (state transition sequence)
H.q0 Wm W ⊢* H.qy // Wm is a TMD that would halt on its input W
H.q0 Wm W ⊢* H.qn // else

Definition of Turing Machine Ĥ (state transition sequence)
Ĥ.q0 Wm ⊢* Ĥ.qx Wm Wm ⊢* Ĥ.qy ∞
Ĥ.q0 Wm ⊢* Ĥ.qx Wm Wm ⊢* Ĥ.qn
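The two state-transition definitions above can be sketched in Python. H is assumed to be a total halt decider (no such function exists; that assumption is exactly what the construction refutes), and H_hat is the diagonal machine that copies its input and does the opposite of H's verdict:

```python
def H(machine, tape):
    """Hypothetical halt decider: should return True iff
    machine(tape) halts.  No correct total implementation exists;
    the placeholder verdict below stands in for H.qy / H.qn."""
    return False  # placeholder "does not halt" verdict

def H_hat(machine):
    """Diagonal machine: copies its input (Wm Wm), asks H,
    then does the opposite of H's verdict."""
    if H(machine, machine):
        while True:          # H said "halts" -> loop forever (qy oo)
            pass
    return "halted"          # H said "loops" -> halt (qn)

# With this placeholder, H_hat(H_hat) halts even though
# H(H_hat, H_hat) claimed it would not; flip the verdict and
# H_hat(H_hat) loops even though H claimed it halts.  H is wrong
# on the diagonal input either way.
```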

Copyright 2018 Pete Olcott
peteolcott
2018-11-27 20:27:48 UTC
To prove the point that this page makes I must show that conventional
understanding of unsatisfiability does not account for expressions of
language having infinitely recursive structure.
What do you mean by expressions of language having infinitely recursive structure?
(1) Liar Paradox
(2) 1931 GIT
(3) 1936 HP
(4) 1936 UT
Since you understand this: LP ↔df ~True(LP)
Do I?  Is that supposed to be your account of what the liar paradox is?
No it is the absolute truth about what the Liar Paradox is.
Consider this -
    This sentence is false.            (***)
Supposing that (***) is a declarative sentence, it ought to be true or false.  But if it is assumed to be false, it turns out to be true.  And if it assumed to be true, it turns out to be false.  *That* is the paradox.  Note that (***) isn't a definition,
so it can't be LP ↔df ~True(LP) which is a definition.  Note that (***) is self-referential, so it can't be LP ↔df ~True(LP) which isn't self-referential.
Thirty years?  Really?
I read a scientific journal article last night saying that the fact
that self-reference was explicitly left out of FOPL is the
reason why no one has ever understood this error before.
The error was inexpressible.
It is exactly this same sort of thing for all of them.
It cannot be understood to be exactly the same sort of thing for all of them.
G ↔df ~Provable(G)
So far as GIT is concerned, G isn't defined to be ~Provable(G).  How could it be?  That would be circular.  A theory T is defined.  G is a sentence in the language of that theory which is such that neither T|-G nor T|-~G.  That is to say that G is defined
in a certain way (that depends on T) and it is *proved* to have the stated property.  You don't get the proving business, do you?
H ↔df Halts(H)
Copyright 2018 Pete Olcott
This sentence is not true.
is precisely formalized by this: LP ↔df ~True(LP)

This sentence is not provable.
is precisely formalized by this: G ↔df ~Provable(G)

G is materially equivalent to a statement of its own unprovability.
is precisely formalized by this: G ↔ ~(F ⊢ G)
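Treated as a single propositional atom, each of the three schemas has the shape X ↔ ¬X, and no truth assignment satisfies that shape. A two-line check (a sketch added for illustration; it deliberately ignores the self-referential fine structure the thread is arguing about):

```python
def iff(a, b):
    """Material biconditional."""
    return a == b

# X <-> ~X under both possible assignments of X:
results = [iff(x, not x) for x in (False, True)]
print(results)  # -> [False, False]: the schema is unsatisfiable
```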

My key new discovery is that all three are logically equivalent:
(∀x ∈ Closed_WFF(F) ( Satisfiable(x) ∨ Falsifiable(x) ∨ ~Sentence(x) ))

Copyright 2018 Pete Olcott
peteolcott
2018-11-27 21:11:16 UTC
Post by peteolcott
This sentence is not true.
is precisely formalized by this: LP ↔df ~True(LP)
You could ask your WCExpert.
You already know that I am correct.
{This sentence} (in English) refers directly to itself.
The only way to specify direct self-reference is with a definition:

x := y means x is defined to be another name for y

Tarski got confused as hell over this, always conflating self-reference
with indirect reference of an expression to its name, rather than direct
reference of an expression to itself.

I had no respect for his Convention T at all because it only specifies
circular reasoning.

(1) "P" is true if, and only if, P.
For example,
(2) 'snow is white' is true if and only if snow is white.

"snow is white" is true ONLY because one of the properties associated
with the physical phenomenon of {snow} is color and this property has
the value of {white}.

"snow is white" is true because:
HasProperty(snow, color) & Snow.Color = {white}
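The HasProperty / Snow.Color idea can be sketched as a lookup against a tiny knowledge base. The entity, property, and value names are the thread's own; the dict layout and function names are assumptions made for the sketch:

```python
# Minimal knowledge base: entity -> {property: value}
kb = {
    "snow": {"color": "white", "state": "solid"},
}

def has_property(entity, prop):
    """HasProperty(entity, prop)"""
    return prop in kb.get(entity, {})

def is_true(entity, prop, value):
    """'snow is white' is true iff HasProperty(snow, color)
    and Snow.Color = white."""
    return has_property(entity, prop) and kb[entity][prop] == value

print(is_true("snow", "color", "white"))  # -> True
print(is_true("snow", "color", "green"))  # -> False
```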
