Welcome!


Hi Waterwizardm! I noticed your contributions and wanted to welcome you to the Wikipedia community. I hope you like it here and decide to stay.

As you get started, you may find this short tutorial helpful:

Learn more about editing

Alternatively, the contributing to Wikipedia page covers the same topics.

If you have any questions, we have a friendly space where experienced editors can help you here:

Get help at the Teahouse

If you are not sure where to help out, you can find a task here:

Volunteer at the Task Center

Please remember to sign your messages on talk pages by typing four tildes (~~~~); this will automatically insert your username and the date.
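
For example, a comment saved as "Looks good to me. ~~~~" would render as something like "Looks good to me. ExampleUser (talk) 12:00, 1 January 2020 (UTC)" (the username and timestamp here are purely illustrative).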

Happy editing! Andrew nyr (talk, contribs) 06:48, 18 May 2020 (UTC)

  Welcome to Wikipedia. We appreciate your contributions, but in one of your recent edits to P versus NP problem, it appears that you have added original research, which is against Wikipedia's policies. Original research refers to material—such as facts, allegations, ideas, and personal experiences—for which no reliable, published sources exist; it also encompasses combining published sources in a way to imply something that none of them explicitly say. Please be prepared to cite a reliable source for all of your contributions. You can have a look at the tutorial on citing sources. Thank you. —David Eppstein (talk) 07:09, 6 June 2020 (UTC)

P versus NP


I admire your enthusiasm, but I am afraid that you are missing several fundamentals needed to have a discussion about this topic. Wikipedia:Reference desk (science, mathematics, or computing) may be appropriate for clarifying any points of confusion you have about the fundamentals, but not for proposing a new idea. The sources you tried to link to do not help your case, because they do not make the exact argument you are trying to. You cannot synthesize your own argument from them. There are several concepts you are using incorrectly in your comment:

  1. You need to distinguish between space and time complexity. In fact, NP is contained in PSPACE so whatever the answer to P versus NP, we in principle never need more than polynomial space to solve even NP-complete problems.
  2. Polynomial time means the running time is bounded by a polynomial function of the input size, not that the algorithm itself is a polynomial function of its input. While some polynomial-time algorithms are indeed galactic algorithms, even if a galactic polynomial-time algorithm were found for an NP-complete problem, it would not be a sure bet that all polynomial-time algorithms for NP-complete problems are galactic. You are essentially asserting either that P does not equal NP, or that any polynomial-time algorithm for an NP-complete problem must be galactic, both of which you absolutely lack proof for.
  3. Please read the rigorous statement of the verifier-based definition of NP (complexity class). It has nothing to do with "knowledge of the answer can alter the external physical resource of the problem". Nothing. If P does not equal NP, then merely having a candidate answer for a problem instance does us no good, because we would need superpolynomially many runs of the verifier to find the answer, no different from if we didn't have the candidate answer. If we already had the answer, what was the point of trying to solve the problem in the first place?
  4. As I already said, but which you conveniently glossed over, "exponentially large" has to be measured accordingly, because pseudopolynomial time is not considered polynomial time (why is the dynamic programming algorithm for the knapsack problem not a polynomial-time algorithm? see the knapsack sketch after this list). As someone who has done work with computer databases before, I can unequivocally say that in the worst case (when we know next to nothing about the structure of the data), no algorithm to search such a database for a particular entry takes time less than linear in the number of entries. Your "counterexamples" are for naught, especially since a counterexample must refute a specific statement, and you have not even clarified what statement you are refuting.
  5. You keep talking about cryptographic hash functions, but you are asserting that they are all provably one-way functions, which would imply P does not equal NP, which in turn sounds like a claim to have solved the problem. All of the currently used cryptographic hash functions rely on the assumption that one-way functions exist and that the functions implemented are one-way. This is in fact an unproven assumption. Can you tell me why a maximally lossy function such as f(x) = 0 is not a one-way function, even though there are 2^n n-bit inputs to it? For a less contrived example, can you tell me why the function f(x) = x^2 is not one-way? Hint: see the next point.
  6. Having an input space of size exponential in the size of the input does not have as much to do as you think it does with the complexity of a problem. The decision problem "Given an m-bit number x, and an n-bit number y, is y the square of x?" is decidable in just two arithmetic operations (each taking polynomial time in m and n) even though there are 2^(m+n) inputs (see the squaring sketch after this list).
  7. The distinction between exponential and polynomial time matters for numerous real applications of computing, especially with Moore's law. If the best algorithm for a problem requires exponential runtime, then the size of the problem we can tackle grows only linearly at best with time. If however we have a polynomial-time algorithm that is not galactic, then the maximum size of solvable problems grows at least subexponentially. In particular, the efficiency of the Lucas-Lehmer primality test is the reason why Mersenne numbers can be tested and found prime in realistic time (see the Lucas-Lehmer sketch after this list). The largest Mersenne prime known, for example, has 82,589,933 bits. The Lucas-Lehmer test's runtime is bounded by a polynomial function of the number of bits, and therefore its runtime is on the order of maybe one quadrillion operations. That may seem like a lot, but modern computers can execute billions of these arithmetic operations a second, and the remainder of the gap can be easily overcome with distributed computing. If the best-known algorithm to test the primality of this number required exponential time in the number of bits, then the number of operations balloons to somewhere around 2^82589933, which is far larger than a googol (roughly 2^332) and therefore infeasible.
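
To make point 4 concrete, here is a minimal sketch (in Python; the function name and sample values are illustrative, not from this discussion) of the standard dynamic-programming knapsack algorithm. Its running time is on the order of n × capacity: polynomial in the numeric value of the capacity, but exponential in the number of bits needed to write that capacity down, which is exactly why it is called pseudopolynomial rather than polynomial.

    # Classic 0/1 knapsack dynamic program (illustrative sketch).
    # Runtime is O(n * capacity): polynomial in the numeric value of
    # `capacity` but exponential in its bit length, i.e. pseudopolynomial.
    def knapsack_max_value(weights, values, capacity):
        # best[c] = best total value achievable with total weight <= c
        best = [0] * (capacity + 1)
        for w, v in zip(weights, values):
            # Scan capacities downward so each item is used at most once.
            for c in range(capacity, w - 1, -1):
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    # knapsack_max_value([3, 5, 7], [10, 30, 25], 12) -> 55 (items 2 and 3)
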
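Similarly, a squaring sketch for points 5 and 6, assuming Python's built-in math.isqrt: deciding whether y is the square of x takes only a multiplication and a comparison despite the exponentially large input space, and squaring is not one-way because a preimage can be recovered efficiently.

    import math

    # Point 6: "is y the square of x?" needs one multiplication and one
    # comparison, even though there are 2^(m+n) possible (x, y) input pairs.
    def is_square_of(x, y):
        return x * x == y

    # Point 5: f(x) = x^2 is not one-way, because a preimage of any
    # square y is recovered in polynomial time by an integer square root.
    def invert_square(y):
        return math.isqrt(y)
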
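And for point 7, a minimal sketch of the Lucas-Lehmer test itself: about p modular squarings suffice to test M_p = 2^p − 1, so the cost is polynomial in the bit length p rather than exponential in it, which is what makes testing such enormous numbers feasible in practice.

    # Lucas-Lehmer primality test for the Mersenne number M_p = 2^p - 1
    # (p itself must be prime). About p modular squarings in total, so
    # the runtime is polynomial in the bit length p, not exponential.
    def lucas_lehmer(p):
        if p == 2:
            return True  # M_2 = 3 is prime
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # [p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)] -> [3, 5, 7, 13]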

I'm going to be brutally honest. You come off as pretending to understand computer science. A scientist like you should learn to recognize your own limitations, and in any case, use rigorous arguments. Your only realistic chance to change my mind is to get your reasoning published in a highly reputed peer-reviewed journal in computer science.--Jasper Deng (talk) 08:21, 12 June 2020 (UTC)Reply

First of all, let me correct a simple mistake. My talk was informal and I forgot to add 'except P' after 'NP', so you must reread 'NP' as 'true NP, excluding P'.
Now it's too late; the post has been locked down. But I think the context here is rather clear. — Preceding unsigned comment added by Waterwizardm (talk • contribs)
At this point, I don't even really care what corrections you want to make to the comment because it is not even wrong.--Jasper Deng (talk) 09:00, 12 June 2020 (UTC)

I think my use of 'exponential' is very accurate. When I wrote 'exponential', that quantity is exactly exponential in the other quantity I already (implicitly) referred to.

What "other quantity"? Please actually read my comment. Note that I am providing this explanation for your benefit on my own time; I do not have to do that. Please ask any questions you have at the reference desks as I pointed you to.--Jasper Deng (talk) 09:23, 12 June 2020 (UTC)Reply
Again, the paradigm you keep using is different from mine. I borrowed the semantics from applied physics, which is very trivial to me. 'Exponential' is literally exponential. No implied philosophical meanings.
Really funny how you claim to be a scientist and yet you are completely and utterly failing to satisfy the burden of discussing this topic with sufficient rigor, and really funny that you think you know better than Professor David Eppstein. I gave you a clear explanation of why your wording is inadequate above, including an explicit example of how these concepts apply in real computing. I'm unwilling to believe it is somehow "trivial" that the world's greatest computer scientists (whose papers are cited by the article) were wrong about this. But have fun deluding yourself with your not-even-wrong hand-waving. This will be my last reply to you; Wikipedia is not interested in your unsound original research.--Jasper Deng (talk) 09:48, 12 June 2020 (UTC)

Before answering anything, let me first clarify that, to simplify the point, 'data server' here stands for a randomly-accessible memory. Any randomly accessible memories, including (especially) recursively connected Ethernet hubs, are direct counterexamples.

This is a visualization of a constructive exponential resource. If you want a counterexample, this is the only hardware construction possible as far as I know.


The hide-and-seek resource is initially exponentially costly for an exhaustive search. After the first agent has successfully found the secret message (that is, the plain text for the hide-and-seek game), the second agent can verify it almost freely.[1][2] The access cost with the compressed (that is, the usual) 'URL' is always linear. It was too trivial for me to think of real examples; the original ideas are very common in physics. The recursively connected Ethernet hub network grows exponentially, and is shaped like a snowflake. Although it's exponential, the access cost is always linear.

Again, 'exponential' here means the literally exponential capacity relative to the URL length.

Similarly, when you want to find a lost file, the first agent needs an exponentially large 'ping resource'. But the second agent always verifies it for free[3] because he can reuse the resource of the first agent.[4] This resource-dividing game is always subadditive.[5]

Anti-collision is a strictly quantifiable[6] resource.[7] Secure signatures usually need exponential irreversibility and exponential anti-collision. Popular signature functions use incomplete anti-collision resources. You can recursively inject linear irreversibility for post-quantum exponentially secure signatures. For example, a recursive irreversible small prime generator (less than 1024-bit) is exponentially exhaustive to decrypt for the adversary. Each prime is protected by irreversible collapse of the original information. If you inject similarly irreversibility-protected random input (that is, the random input resource of the encrypted/hashed message blocks), (the linear fingerprint of) the pseudo-random walk with irreversibility injection (collapses) is exponentially undecryptable. Asymptotic bit-by-bit linear 'orthogonality' of p-adic prime ensembles asserts average difficulty.[8] The injected random text asserts exponential security.

Wikipedia is WP:NOTAFORUM, take this elsewhere. I don't care where, as long as you stop wasting our time. Your conceptions about how web servers work in practice are utterly and completely incorrect (data structures exist to obviate the need for brute-force searching over the entire set of values of input). Think of it: if web servers really did have to do a brute-force search over every possible URL, they would never be usable in practice, because even the set of 100-character URLs has cardinality vastly larger than a googol and therefore a brute-force search over them would never be practical–and yet, the very web server you are using right now does not need to do that! It's not "always linear". The meaning of a verifier as it pertains to P versus NP is absolutely not what you think it is. A verifier needs to take in a certificate for each candidate input; see for example primality certificate. The certificate need not be findable in polynomial time. A nondeterministic Turing machine by definition "gets lucky" by always finding an accepting computation path of the verifier, but in practice we cannot do that, and if our initial guess is wrong then we have gained nothing, thus rendering ridiculous anything you say about "reuse of resources". Memoization is the closest thing computers have to "reusing" resources, but where does the computer "remember" the result of something (read up about computer caches)? Your assertions about signature functions are equivalent to claiming that they are provably one-way, and as I said above, you lack proof for that. The primary reason why cryptographic hash functions work in practice is that we don't know of any way to efficiently do preimage, collision, or second-preimage attacks on them. The assumption is that such attacks don't exist, but P = NP would imply that is not true. If P versus NP "didn't matter", then why are you attempting to argue something that implies a solution to that problem? Also, if it were this easy, it would not have baffled decades of researchers who spent their whole lives on this.
You have absolutely no clue what you are talking about. None. Stop pretending to us that you do, and leave if you have no sources that present your exact arguments. Consider that perhaps, just perhaps, you are mistaken, and that you need to learn much more about how computers work before trying to tell experts on them what to do. Clearly, P versus NP matters enough for the greatest computer scientists to pursue and it matters for real-world applications.
If I see more of this advocacy of your ridiculous nonsense then I will request for you to be blocked from participation here on the grounds that you are WP:NOTHERE to improve the encyclopedia. Take a seat, and learn for once. If you try to propose this on Quora (e.g.) then you will almost certainly get the same response.--Jasper Deng (talk) 08:51, 15 June 2020 (UTC)
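
To make the earlier point about data structures concrete, here is a minimal (hypothetical) Python sketch: a hash table maps each known key directly to its value, so a lookup does expected constant work per request, and the astronomically large set of possible URLs is never enumerated.

    # Why a web server never brute-forces the space of possible URLs:
    # a hash table (here a Python dict) jumps straight to the stored entry.
    routes = {
        "/index.html": "<h1>Home</h1>",
        "/about.html": "<h1>About</h1>",
    }

    def serve(url):
        # Expected O(1) per lookup: one hash computation plus a bucket
        # probe, regardless of how many URLs are conceivable.
        return routes.get(url, "404 Not Found")

    # serve("/about.html") -> "<h1>About</h1>"
    # serve("/missing")    -> "404 Not Found"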

You are not familiar with some of the basic and established definitions, such as 'irreversibility quantification', which is trivially used by many scientists, and I am not familiar with some of your quotations, so the detailed discussions are useless. The paradigms are different.

If your theory can't endure the trivial subadditivity test, please throw it away; the relative gauges are broken. When you prove something from axioms, you must strictly conform to them.

I have many sources for this novel subject.

Without doubt, Spekkens is the most important pioneer of resource theory, which is now commonly used in mathematical physics.[9]

If you want to know what 'logical irreversibility' means here, see this paper.[10] It covers both theoretical and practical discussions. (Backward-forward speed asymmetry[11][12] is always atomic-irreversible,[13] not atomic-isomorphic, if you understand.)

Resource theory is a basic tool to analyse the fundamental consistency of mathematics and science.[14][15][16][17]

[18]


Wikipedia has articles about some of the basic notions I used. If you are curious about what 'knowledge' means, see Spekkens toy model. The basic idea is that physical states are just incomplete information.

My points are very simple.

  1. The resource model lacks strict subadditivity, which implies paradoxes.
  2. The resource model lacks the explicit notion of agents, which implies subjective frame inconsistency.
  3. You are trying to prove some abstract theory that can't be constructively visualized.

Logical irreversibility[19] and random-accessibility are closely related. They are two sides of the same coin.

References

Notice of noticeboard discussion


  There is currently a discussion at Wikipedia:Administrators' noticeboard regarding an issue with which you may have been involved. The thread is "Waterwizardm and NOTHERE". Thank you.--Jasper Deng (talk) 10:14, 18 June 2020 (UTC)

Notice about this account


Waterwizardm is a compromised account. His legitimate alternative account is User:aquahabitant. —Preceding undated comment added 04:45, 6 September 2020 (UTC)