Wikipedia:Reference desk/Computing

Welcome to the computing section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


August 7


Single versus Multiple Exit Points in a Function


When I was in school back in the 90s, we were taught that a function should have only one exit point. Do they still teach this? I'm asking because I'm coming across a lot of code when doing code reviews where the developer has multiple exit points and I'm wondering if I should ask them to change their code to have one exit point or let it slide. For example, I often see code like this:

        private static bool IsBeatle1(string name)
        {
            if (name == "Paul") 
            {
                return true;
            }
            if (name == "John")
            {
                return true;
            }
            if (name == "George")
            {
                return true;
            }
            if (name == "Ringo")
            {
                return true;
            }
            return false;
        }

Personally, this is how I would have written this code:

        static bool IsBeatle2(string name)
        {
            bool isBeatle = false;
            if (name == "Paul")
            {
                isBeatle = true;
            }
            else if (name == "John")
            {
                isBeatle = true;
            }
            else if (name == "George")
            {
                isBeatle = true;
            }
            else if (name == "Ringo")
            {
                isBeatle = true;
            }
            return isBeatle;
        }

So, my question is twofold:

  1. Do they still teach in school that a function should have a single exit point?
  2. When doing code reviews, should I ask the developer to rewrite their code to have one single exit point? Yes, I realize that this second question is a value judgement but I'm OK with hearing other people's opinions.

Pealarther (talk) 11:21, 7 August 2024 (UTC)Reply

If there was only one school with only one instructor, your answer could be "yes" or "no." However, there are millions of schools with millions of instructors. So, the only correct answer is "both." Yielding functions and scripting languages have changed what is considered optimal when writing functions. So, it comes down to what the function does, what language is being used, and what the instructor feels like teaching. 75.136.148.8 (talk) 11:49, 7 August 2024 (UTC)Reply
  • Many things taught in the '90s, and especially the '80s, are now realised to be unrealistic.
There is no reason why functions should only have one exit point. What's important is that some boundary exists somewhere where you can make absolute statements about what happens when crossing that boundary. Such boundaries are commonly functions, but it's broader than that too. In this case, we can define a contract, 'Leaving this function will always return a Boolean, set correctly for isBeatleness.' That's sufficient. To then mash that into this type of simplistic 'Only call return once, even within a trivial function' is pointless and wasteful.
You might also look at 'Pythonic' code, the idiomatic style of coding optimised for use in Python. This raises exceptions quite generously; see Python syntax and semantics#Exceptions. The boundary here is not the function itself but the scope of the try...except block. In Pythonic code, the exception handler that catches the exception might be a very long way away. Andy Dingley (talk) 13:58, 7 August 2024 (UTC)
Yes, it was accepted wisdom (at least in academic teaching of programming) in the 1980s, and Pascal (the main teaching language in a lot of academic settings) effectively enforced it (at least in the academic versions people taught - I rather think Turbo Pascal, which was always more pragmatic, did not enforce this). But it leads to some horrible patterns:
  1. checking that inputs and other preconditions are acceptable leads to deeply nested ifs, with the "core" of the function buried many levels deep.
  2. "result" accumulation - especially where "break" is also prohibited (with the same reasoning), where the function has "finished" its calculation, but has to set a result variable, which then trickles down to the eventual real return at the end of the function. This (and the break prohibition) leads to fragile "are we done yet" booleans.
So the restriction was an attempt to avoid bad code, but in doing so produced lots of different kinds of bad, unreadable, fragile code. So it's a daft restriction.
I've no idea what academics teach now, and frankly what universities (often in toy or abstract cases) do is seldom what industry does. So let's look at what industry does:
  • Code Complete reads "Minimize the number of returns in each routine. It's harder to understand a routine when, reading it at the bottom, you're unaware of the possibility that it returned somewhere above. For that reason, use returns judiciously—only when they improve readability."
  • Neither the C++ Core Guidelines nor Google's C++ style guide seems to say anything on the topic
  • Notable codebases like Chrome, the Linux Kernel, PDF.js, IdTech3, MySQL, LLVM, and gcc all frequently use multiple return paths.
That doesn't mean "just return willy-nilly wherever", as that can be as bad - Code Complete gives smart advice. But it's a bad rule, which won't produce better code in real circumstances, and will frequently produce worse code. "Write good code" can't be boiled down to such simple proscriptions. -- Finlay McWalter··–·Talk 14:13, 7 August 2024 (UTC)Reply
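As an illustration of the guard-clause style described above, here is a minimal C# sketch (not code from the thread or from any of the projects mentioned; the Average routine is just an example): precondition checks exit early at the top, so the normal path is not buried in nested ifs and the one "real" return stays easy to find.

using System;

class GuardClauseSketch
{
    // Hedged illustration of the early-return / guard-clause style:
    // the exceptional cases leave immediately, the normal path reads straight down.
    static double Average(int[] values)
    {
        if (values == null) throw new ArgumentNullException(nameof(values)); // guard: bad input
        if (values.Length == 0) return 0.0;                                   // guard: trivial case, early exit

        double sum = 0;
        foreach (int v in values) sum += v;
        return sum / values.Length; // the one exit for the normal path
    }

    static void Main()
    {
        Console.WriteLine(Average(new[] { 2, 4, 6 }));   // 4
        Console.WriteLine(Average(Array.Empty<int>()));  // 0
    }
}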
I tend to agree with the OP. However, the example of multiple exits he gives is not that bad because they are all right together. It would be worse practice to have four exits randomly spread out in a routine. Bubba73 You talkin' to me? 04:08, 8 August 2024 (UTC)Reply
The underlying rationale for the directive to have a single exit is to make it easier to ascertain that a required relationship between the arguments and the return value holds, as well as (for functions that may have side effects) that a required postcondition holds – possibly involving the return value. If the text of the body of a function fits on a single screen, forcing a single exit will usually make the code less readable. As long as it is easy to find all exits – much easier with on-line editors than with printouts of code on punch cards as was common before the eighties – the directive no longer fulfills a clear purpose.  --Lambiam 08:14, 8 August 2024 (UTC)Reply
  • Back when I was at school we didn't have computers and nobody taught software at all.  
I would view a long if...else... chain as generally evil, and would see the sequence of if...return tests as somewhat "better".
However, I don't do C#, so I would have written something in PHP like this:
function IsBeatle3(string $name): bool {
  return $name == "Paul" || $name == "John" || $name == "George" || $name == "Ringo";
}
This single statement highlights that there is one test and one exit point and that the function always returns a value of the same type.
For something heavier, or perhaps if there were more than two return values, I might have used a switch statement. Perhaps something like this:
function IsBeatle4(string $name): bool {
  switch($name) {
    case "Paul":
    case "John":
    case "George":
    case "Ringo":
      return true;

    default:
      return false;
  }
}
Here, the switch is doing the comparisons for you and using the switch statement highlights that all cases have been handled and there is no other way out. — GhostInTheMachine talk to me 17:58, 18 August 2024 (UTC)Reply
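A similar one-test, one-exit shape is possible in the C# of the original question (again only a sketch, not anyone's posted code), using a set membership test so that the data is separated from the control flow:

using System;
using System.Collections.Generic;

class BeatleLookupSketch
{
    // The names live in a set; the function body is a single expression with one exit.
    static readonly HashSet<string> Beatles =
        new HashSet<string> { "Paul", "John", "George", "Ringo" };

    static bool IsBeatle(string name) => Beatles.Contains(name);

    static void Main()
    {
        Console.WriteLine(IsBeatle("Ringo")); // True
        Console.WriteLine(IsBeatle("Pete"));  // False
    }
}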

How are one-time passwords secure?


To log into my Mailchimp account, I need a password plus a one-time code I either read off the Google Authenticator app on my Samsung tablet, or off the iCloud keychain. The two sources always give the same code, and to set them up, I had to enter a 16-letter code. My question is: how does any of this increase security? To get the one-time code, all a hacker needs is the 16-letter code used, and they're good to go. It just seems like a second password but more complicated. I thought the idea of one-time codes was that it would be something I know (password) and something I have (my tablet). But in fact the something I have is only useful because of the 16-letter code (something else I know). Amisom (talk) 15:48, 7 August 2024 (UTC)Reply

If you know the secret key (the code you started with), the current time, and the algorithm, you can produce the OTP key at any point in time. 75.136.148.8 (talk) 17:21, 7 August 2024 (UTC)Reply
Or indeed, as I said, all you need is the secret key and a widely available app like Google Authenticator. So my question is how and why that is more secure than a password alone. Amisom (talk) 17:23, 7 August 2024 (UTC)Reply
The issue is if your communication is being intercepted, someone is looking over your shoulder, or a bug in the browser state means the text you entered (which should be forgotten immediately) is retained in memory, and a wrongdoer can recover it later. If you were sending a shared secret (e.g. a password), now the enemy has your password. If all you enter is the OTP, which expires in a minute or two, the enemy has only seconds to use it. As the OTP is generated from the 80-bit shared secret with a one-way function (in this case, a cryptographic hash function), they can't reverse the OTP to recreate the 80-bit secret. The 80-bit shared secret key should not be your regular password, nor derived from it. Typically, when setting up a HOTP entry in Authenticator, the service (e.g. Mailchimp) generates an 80-bit random key and usually shows this on screen with a QR code (for Google Authenticator to read). After that, the 80-bit shared secret is never passed between the two parties. -- Finlay McWalter··–·Talk 18:02, 7 August 2024 (UTC)
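To illustrate what "generated from the shared secret with a one-way function" means in practice, here is a rough C# sketch of the standard TOTP computation (RFC 6238, built on RFC 4226's HOTP); the hard-coded key bytes are purely illustrative and not a real secret:

using System;
using System.Security.Cryptography;

class TotpSketch
{
    // Purely illustrative key bytes; a real service generates a random secret
    // and shows it once (often as a QR code) when you set up the authenticator.
    static readonly byte[] Secret = { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF, 0x13, 0x37 };

    static string Totp(byte[] key, long unixTime, int digits = 6, int stepSeconds = 30)
    {
        // The moving factor is the number of 30-second steps since the Unix epoch.
        long counter = unixTime / stepSeconds;
        byte[] counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian)
        {
            Array.Reverse(counterBytes); // RFC 4226 wants the counter big-endian
        }

        using var hmac = new HMACSHA1(key); // HMAC is the one-way function
        byte[] hash = hmac.ComputeHash(counterBytes);

        // "Dynamic truncation": take 4 bytes at an offset given by the low nibble of the last byte.
        int offset = hash[hash.Length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                   | (hash[offset + 1] << 16)
                   | (hash[offset + 2] << 8)
                   | hash[offset + 3];

        int code = binary % (int)Math.Pow(10, digits);
        return code.ToString(new string('0', digits)); // zero-padded, e.g. "042719"
    }

    static void Main()
    {
        long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
        // Any device that knows the secret and the current time computes the same code.
        Console.WriteLine(Totp(Secret, now));
    }
}

Both the authenticator app and the server run this same computation; only the shared key and the current time go in, and the six-digit code that comes out is useless one time step later.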
Technically, as the concern mentions, if I had thousands of computers all attempting to authenticate at the same time, I could have each one attempt thousands of possible OTP keys based on trying every possible original seed value used to set up the OTP. If one works, I can continue using it without the extra overhead of trying millions of combinations. But, even if only 16 hex values were used in the random initial seed, there would be over 1,000,000,000,000 possible values to try. As mentioned above, you can't intercept this as you can with a password. It is not transmitted anywhere. The user never types it into anything after setting up the OTP. But, the concern is not completely without merit. It is possible that someone could randomly pick out the original value used to set up the whole thing and then have their own copy of it to use. It comes down to the old analogy of you can spend billions to build a system to work out a person's OTP and hack into their bank account or you can spend $5 on a good hammer and force them to give you their phone so you can use it to log in easily. 75.136.148.8 (talk) 19:53, 7 August 2024 (UTC)Reply
There are 16^16 different hexadecimal strings of length 16, which is more than 1.8×10^19. This is a whole lot more than 1,000,000,000,000.  --Lambiam 21:51, 7 August 2024 (UTC)
The comment above mentions an 80-bit shared secret. Assuming 8 bits per character, that is 10 characters, not 16. Regardless, it is correct to state that it is not likely someone will brute-force an OTP secret easily. 12.116.29.106 (talk) 12:40, 8 August 2024 (UTC)
I was reacting to the comment immediately above my reaction, which stated,
"But, even if only 16 hex values were used in the random initial seed, there would be over 1,000,000,000,000 possible values to try."
If "16" was a typo for "10", the hexadecimal strings of length 10 are more than 1.2×1024 in number, still more than 1,000,000,000,000 by over a factor of 1,000,000,000,000. These days you can buy a 7168-core GPU with a clock speed of 2.4 GHz, so trying 1,000,000,000,000 values is not an obvious impossibility. Five-character random passwords are not safe against brute force.  --Lambiam 22:16, 8 August 2024 (UTC)Reply

August 8


A new test for comparing human intelligence with artificial intelligence, after the Turing test has apparently been broken.


1. Will AI discover Newton's law of universal gravitation (along with Newton's laws of motion), if all that we allow AI to know, is only what all physicists (including Copernicus and Kepler) had already known before Newton found his laws?

2. Will AI discover the Einstein field equations, if all that we allow AI to know, is only what all physicists had already known before Einstein found his field equations?

3. Will AI discover Gödel's incompleteness theorems, if all that we allow AI to know, is only what all mathematicians had already known before Gödel found his incompleteness theorems?

4. Will AI discover Cantor's theorem (along with ZF axioms), if all that we allow AI to know, is only what all mathematicians had already known before Cantor found his theorem?

5. Will AI discover the Pythagorean theorem (along with Euclidean axioms), if all that we allow AI to know, is only what all mathematicians had already known before Pythagoras found his theorem?

If the answer to these questions is negative (as I guess), then may re-discovering those theorems by any given intelligent system be suggested as a better sufficient condition for considering that system as having human intelligence, after the Turing test has apparently been broken?

HOTmag (talk) 18:08, 8 August 2024 (UTC)Reply

Most humans alive could not solve those tests, yet we consider them intelligent. Aren't those tests reductive? Isn't it like testing intelligence by chess playing? We consider some chess machines very good at chess, but not "intelligent". --Error (talk) 18:31, 8 August 2024 (UTC)Reply
According to my suggestion, the ability to solve the problems I've suggested, will not be considered as a necessary condition, but only as a sufficient condition. HOTmag (talk) 18:39, 8 August 2024 (UTC)Reply
It is impossible to test whether something will happen if it may never happen. The only possible decisive outcome is that it does happen, and then we can say in retrospect that it was going to happen. It does not make sense to expect the answer to be negative.  --Lambiam 21:36, 8 August 2024 (UTC)Reply
I propose that we ask the AI to find a testable theory that is consistent with both quantum mechanics and general relativity (in the sense that either emerges as a limit). This has two advantages. (1) We do not need to limit the AI's knowledge to some date in the past. (2) If it succeeds, we have something new, not something that was known already. Alternatively, ask it to solve one of the six remaining Millennium Prize Problems. Or all six + quantum gravity.  --Lambiam 21:49, 8 August 2024 (UTC)Reply
I suspect that any results from the test proposed in the first post would be impossible to verify. AI needs data to train on: lots of it. Where exactly would one find data on "what all physicists (including Copernicus and Kepler) had already known before Newton found his laws" in the necessary quantity, while ensuring that it wasn't 'contaminated' by later knowledge? AndyTheGrump (talk) 21:56, 8 August 2024 (UTC)
The same problem plagues the Pythagorean theorem, which most likely was discovered independently multiple times before Pythagoras lived (see Pythagorean theorem § History), while it is not known with any degree of certainty that it was known to Pythagoras himself. Euclid does not ascribe the theorem to anyone in his Elements (Book I, Proposition 47).[1]  --Lambiam 22:34, 8 August 2024 (UTC)
Where tests based on intellectual tasks fail, we might need to start relying on more physical tests.
Humans take less energy than AI to do the same intellectual tasks (I've mainly got inference in mind). While in the future it might not prove that I am a biological being, measuring my energy consumption to perform the same tasks and comparing it with that of an AI trained to do general tasks could be a way forward.
For bio-brains, training to do inference tasks is based on millions of years of evolution; the energy for training might be more than for LLMs, I don't know - but it is already spent and optimised for day-to-day efficiency. I think natural selection is an inefficient and wasteful way to train a system, but it has resulted in some very efficient inference machines.... Komonzia (talk) 04:31, 9 August 2024 (UTC)

I think all of you are missing my point. You are answering from a practical point of view, while I'm asking from a theoretical point of view, which may become practical in a thousand years, or may never become practical.

I'll try to be more clear now:

1. If we let our sophisticated software be aware of a given finite system of axioms, and then ask our software to prove a given theorem of that axiom system, I guess our sophisticated software will probably do it (regardless of the time needed to do it).

2. Now let's assume that X was the first (person in history) to discover and prove the Pythagorean theorem. As we know, it had happened long before Euclid phrased his well-known axioms of Euclidean geometry, but X had discovered and proved the Pythagorean theorem, whether by implicitly relying on the Euclidean axioms, or in any other way. Let's also assume, theoretically speaking, that we could collect all of the works in mathematics that had been written before X discovered and proved the Pythagorean theorem. Let's also assume, theoretically speaking, that we could let our AI software be aware only of this mathematical collection we are holding (i.e. not of any other mathematical info discovered later). Since it does not include the Euclidean axioms, then what will our AI software answer if we ask it whether the well-formed formula reflecting the Pythagorean theorem is necessarily true, for every "right triangle" - according to what the mathematicians who preceded X and who wrote those works meant by "right triangle"? Alternatively, I'm asking whether (under the conditions mentioned above about what the AI is allowed to know in advance), AI can discover the Pythagorean theorem, along with the Euclidean axioms.

3. Note that I'm asking these questions (and all of the other questions in my original post, about Newton and Einstein and Gödel and Cantor), from a theoretical point of view.

4. The task of turning this theoretical question into a practical question, is technical only. Maybe, in a hundred years (or a thousand years) we will have the historical collection I was talking about, so the theoretical question will become a practical one.

5. Anyways, if the answer to my question is negative (as I guess), then may this task of re-discovering those theorems by any given intelligent system be regarded as a better sufficient condition for considering that system as having human intelligence? Again, as of now I'm only asking this question from a theoretical viewpoint, bearing in mind that it may become practical in some years. HOTmag (talk) 08:49, 9 August 2024 (UTC)

We can give pointers to what scientists and philosophers have written about possible replacements or refinements of the Turing test, but this thread is turning into a request for opinions and debate, which is not what the Wikipedia Reference desk is for.  --Lambiam 10:58, 9 August 2024 (UTC)
My question is a yes/no question. HOTmag (talk) 11:01, 9 August 2024 (UTC)Reply
OK. Yes. At some point in time there will be a case where a computing device or system of some kind will discover some proof of something. No. That isn't going to happen today. What you are truly asking is for an opinion about when it will happen, but you haven't narrowed down your question to that point yet. It is an opinion request because nobody knows the future. The suggestion is to narrow your question to ask for references published on the topic, not for a request about what will happen in the future. 75.136.148.8 (talk) 16:06, 9 August 2024 (UTC)Reply
Again, it's not a question "about when it will happen". It's a yes-no question: "may this task of re-discovering those theorems by any given intelligent system be regarded as a better sufficient condition for considering that system as having human intelligence?". As I said, it's a yes-no question. Now you answer "Yes" (at the beginning of your answer). Ok, so if I ignore the rest of your answer, then I thank you for the beginning of your answer. HOTmag (talk) 16:23, 9 August 2024 (UTC)
Your extreme verbosity is getting in the way of your question. Now that you've simplified it to a single question and not a diatribe about AI math proofs, the answer is more obvious. Turing test is a test of mimicry, not a test of intelligence. So, replacing it with a different test to see if it is "better" does not really make sense. Computer programs (that could be called AI) have already taken axioms and produced proofs. They are not tests of intelligence either. They are tests of pattern matching. 75.136.148.8 (talk) 16:37, 9 August 2024 (UTC)Reply
People with brains discovered it. A computer with a simulated brain will be able to rediscover it. Not necessarily with Generative Pre-trained Transformer algorithms, because the current generation is only trained to deceive us into thinking it can do things involving language, conversation, etc. But if a computer can sufficiently simulate a brain, no, there is nothing stopping it from following the same process that a human has done, possibly better or more correctly. In my opinion, there is no deeper soul than that which our brains trick us into thinking we have.
Note: even if simulating a brain is not possible (with every chemical reaction and neuron, if that ends up being needed), then there is nothing theoretically stopping it from being capable of growing a brain and using that -- or utilizing an existing brain. See wetware computer. Komonzia (talk) 16:52, 9 August 2024 (UTC)Reply
Ask an AI to devise some much better test than the pitiful bunch above. NadVolum (talk) 18:19, 9 August 2024 (UTC)Reply
See automated theorem proving. Depending on the logic involved (in particular for propositional logic and first-order logic), we have programs that will, in the spherical cow abstraction, prove any valid theorem of a set of axioms. This also implies that we can, in theory (obvious pun not intended), enumerate all valid theorems of an axiomatisation, i.e. enumerate the theory (logic). This is just mechanical computation (though there is a lot of intelligence involved if we want to prove interesting theorems quickly). Finding an interesting set of axioms is a very different task, and, I would think, a much harder one. But see Automated Mathematician for a program that discovered mathematical concepts by experimentation. --Stephan Schulz (talk) 16:31, 18 August 2024 (UTC)Reply

The five human discoveries that the OP proposes as new "litmus tests" of the capability of a computer-based AI have in common that they are all special cases of more general work (e.g. Pythagoras' theorem is the law of cosines reduced to the case where an angle of the triangle is a right angle, so its cosine is zero) that subsequent human commentators regard as notably elegant, insightful and instructive, or that have later been shown to be historically important. Each discovery statement can be elicited by posing the appropriate leading question (such as "Is it true that...."). AI has not yet impressed us with any independent sense of elegance, insight, scholarship or historical contextualization, nor is AI encouraged to pose intelligent leading questions. AI therefore fails the OP's tests. If this question was an attempt to reduce investigative human thought to a sterile coded algorithm then that attempt also fails. Philvoids (talk) 22:54, 9 August 2024 (UTC)

There is one case I know of where AI was tasked with improving an algorithm heavily looked at by humans already, and did so: https://arstechnica.com/science/2023/06/googles-deepmind-develops-a-system-that-writes-efficient-algorithms/
That case is especially remarkable because it wasn't trained on code samples that would have led it to this solution. However as User:75.136.148.8 noted above, it's not necessarily a marker of intelligence - the method used to devise the optimization is the optimization equivalent of fuzz testing. Komonzia (talk) 08:55, 10 August 2024 (UTC)Reply
That is an interesting example, but the popular reporting is quite bad. The original Nature article is here. In particular, the approach was only used for sorting networks of small fixed size (sequences of 3-8 elements) and "variable length" algorithms with a maximum size that are essentially built by selecting the right fixed-size sorting network. This is useful and interesting, but it's very different from finding a fundamentally new sorting algorithm. --Stephan Schulz (talk) 16:40, 18 August 2024 (UTC)

August 9


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Should Similarweb be cited to report web traffic rankings on Wikipedia?


I added this to the Similarweb talk page, but I discovered it doesn't belong there & I believe the question is better posted here. The original question was posed on https://en.wikipedia.org/wiki/Talk:Similarweb#Should_Similarweb_be_cited_to_report_web_traffic_rankings_on_Wikipedia? & contains further discussion of the subject.

(I apologize if I've used the incorrect template. If so, please replace it with the appropriate one.)

This topic came up on Talk:GunBroker.com where I have a COI, and merits further discussion by the community at large, given the large number of pages that could be affected (to date, 166 pages). It is not my intention to engage in Wikipedia:Edit warring, but to work toward achieving consensus.

User:Lightoil stated on 4 May 2023 that "Similarweb may be used if it is considered a reliable source."

On 24 August 2023, User:Spintendo implemented a COI edit request to cite Similarweb web traffic data.

On 26 September 2023, User:Graywalls removed the cited data and maintains that "Similarweb.com is not really a data source. [...] Similarweb is just a data aggregation."

Graywalls and I have not been able to reach consensus on this matter, so it seems opening up the topic is warranted.

Should Similarweb be cited to report web traffic rankings on Wikipedia?

Similarweb is used to report rankings all over Wikipedia, most notably the entire List of most-visited websites page, which relies solely on Similarweb as the source.

There are at least 165 other Wikipedia pages (to date) relating to website traffic for entities like Facebook, Weather Underground (weather service), WebMD, and numerous international entities. Other notable pages using these metrics include List of most popular Android apps, List of employment websites (which sorts the data based on Similarweb traffic rank), and List of online video platforms, to name a few.

The question is whether or not Similarweb rankings are a valid source, as it is common practice to use them as an exclusive source on Wikipedia pages (as evidenced by the above links and articles). Since data from sources like Alexa Internet has been discontinued, I'm at a loss to find other secondary sources for website traffic data that could be used on any pages. I would welcome other reliable secondary sources if any could be provided. LoVeloDogs (talk) 21:03, 16 October 2023 (UTC)Reply

I think to start with it's best for someone to establish why a data aggregator cannot be used as a source on Wikipedia. Aggregation does not make data less reliable, it just means you're taking data from different places and putting it into one place. An ETL pipeline usually involves aggregation. That makes data more usable, normally, not less reliable. Komonzia (talk) 18:53, 9 August 2024 (UTC)Reply
In my opinion starting a discussion on the Reliable Sources Noticeboard would be best to settle the issue on whether Similarweb is a reliable source. Lightoil (talk) 20:13, 9 August 2024 (UTC)Reply
Indeed. The Reference desk is not the right venue for resolving issues concerning Wikipedia policy.  --Lambiam 20:47, 9 August 2024 (UTC)Reply
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

August 10


Printer Makes Power Supply Beep?


I have a Dell Inspiron 3910 running Windows 11. That isn't the issue, because I am not asking about it. I have it connected to a Schneider APC battery backup power supply. I also have a Canon ImageClass D570 all-in-one printer and copier. If I have the Canon printer plugged into one of the six battery-power sockets of the power supply, and I print a page, there is usually (not always, but usually) a beeping sound that I think is coming from the battery backup. What is causing this? Does it mean that it is draining power from the battery at a faster rate than is preferred? I also have two sockets on the power supply that are surge-protected but not on battery power. If I move the printer plug to one of these, it no longer beeps. However, if there is a transient loss of power, as happened a few times in the past two days during Debby, the printer hums when normal power resumes, because it is powering back on. The computer continued normal operation during these transient losses of power, and that is what the power supply is for.

What is causing the beeping? Is my hypothesis plausible? Is there any reason why I shouldn't reconnect the printer to surge protection only without battery backup? The beeping is annoying.

Robert McClenon (talk) 19:43, 10 August 2024 (UTC)Reply

Yes, laser printers typically draw a huge current when they get ready to print, to warm up the fuser. Move it to the non-battery side. Dicklyon (talk) 23:21, 10 August 2024 (UTC)Reply
Thank you for that explanation, User:Dicklyon. That answers one question and leaves another to be asked and answered. Obviously the initial draw of a laser printer heating the fuser is less than 15 amperes, and enough less than 15 amperes that a desktop computer can also be running normally while the laser printer is heating the fuser. So that would seem to mean that the power supply goes into alarm even when there is a current draw that both the battery and the line can handle. So why would the power supply vendor build in that alarm? Robert McClenon (talk) 01:51, 12 August 2024 (UTC)
This study found startup transients exceeding 50 A for several printers, giving inverters a hard time (like the one in your UPS). The beep probably means it's unable to keep the voltage up to nominal. Dicklyon (talk) 02:16, 12 August 2024 (UTC)Reply
Also, your manual has cautions on p. 5 that imply it will suck down a lot of current:

"When connecting power

  • Do not connect the machine to an uninterruptible power source.
  • If plugging this machine into an AC power outlet with multiple sockets, do not use the remaining sockets to connect other devices.
  • Do not connect the power cord into the auxiliary outlet on a computer."
(my bold). Dicklyon (talk) 02:24, 12 August 2024 (UTC)Reply
To add to the above - I also use a small UPS, but don't use printers.
The UPS is a Powercool PC-650VA - the documentation is available online in PDFs, but I won't bear the risk of linking a particular PDF file.
The manual has a large caution stating "NEVER connect a laser printer or a scanner to the UPS unit. This may cause damage to the unit."
Before, this caused me to research different types of UPS, curious how they work - there is one marketing video that explains pretty well when the different kinds of UPS rely on passthrough, battery, simulating AC, etc.
I'm unsure but the alarm might be due to the surge protection, not necessarily the battery - so connecting a normal surge protector might not be enough either. I say that because apparently the UPS is only meant to be relying on the battery when the input voltage is abnormally low. Komonzia (talk) 17:12, 13 August 2024 (UTC)
Good point. He did say it would not beep if plugged into the surge-protection-only side, but they may have different limits. The manual says a steady alarm beep means "The connected equipment exceeds the specified “maximum load” as defined in Specifications at the APC web site, www.apc.com. The alarm remains on until the overload is removed." So maybe a brief overload makes a brief alarm beep. Also, it can be disabled (along with all the other alarms); the Alarm Control factory default is Enable, but can be changed: "User can mute an ongoing alarm or disable all existing alarms permanently." Dicklyon (talk) 15:07, 14 August 2024 (UTC)Reply
Thank you again, User:Dicklyon. Thank you, User:Komonzia - I am leaving the printer for now plugged in to the surge protection. So I concede that I should have RTFM. Robert McClenon (talk) 17:10, 16 August 2024 (UTC)Reply

August 13


May there be any connection between the absence of a base system device driver on my laptop and its overheating, sometimes followed by shutting down?


HOTmag (talk) 09:43, 13 August 2024 (UTC)Reply

Yes. I'm simplifying but in the old days thermal protection was controlled directly from the BIOS. Nowadays it's done by ACPI. If the OEM driver is missing the fans won't kick in when they should and the hardware may shut itself down (in an uncontrolled manner) to protect itself. For details see: https://uefi.org/htmlspecs/ACPI_Spec_6_4_html/11_Thermal_Management/thermal-control.html#:~:text=6.-,Critical%20Shutdown,a%20predetermined%20time%20has%20passed. 41.23.55.195 (talk) 14:15, 13 August 2024 (UTC)Reply


August 15


Fast Fourier transform


On the website there's an example FFT algorithm structure which uses a decomposition into half-size FFTs. Does the top half-size FFT, which consists of x[0], x[2], x[4], x[6], pertain to switching circuit theory, which "is the mathematical study of the properties of networks of idealized switches"? Below the example FFT algorithm structure is "a discrete Fourier analysis of a sum of cosine waves at 10, 20, 30, 40 and 50 Hz." Afrazer123 (talk) 22:05, 15 August 2024 (UTC)

For context, this presumably refers to the first picture in the article Fast Fourier transform. And here's Switching circuit theory.  Card Zero  (talk) 03:34, 16 August 2024 (UTC)Reply
No, there is no relation between this FFT structure, which is about linear operators on real and complex numbers, and switching circuit theory, which is about digital logic operations (except that both can be characterized and diagrammed as graphs). In the FFT diagram, E(0), E(1), E(2), and E(3) are complex numbers that represent the DFT of the Even-numbered input numbers; similarly O(0)... for the DFT of the odd-numbered inputs. The Ws are complex "weights", numbers that are multiplied by the Os; those products are added to the unmultiplied Es that they are paired with, to make the output complex numbers that represent the DFT of the full input sequence of numbers. Dicklyon (talk) 15:49, 16 August 2024 (UTC)Reply
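To make the E, O and W notation concrete, here is a small recursive radix-2 decimation-in-time FFT sketch in C# (illustrative only, written for clarity rather than speed). E(k) is the DFT of the even-indexed samples, O(k) of the odd-indexed ones, and the weights W are the twiddle factors exp(-2*pi*i*k/N):

using System;
using System.Numerics;

class FftSketch
{
    // Minimal recursive radix-2 decimation-in-time FFT (input length must be a power of two).
    static Complex[] Fft(Complex[] x)
    {
        int n = x.Length;
        if (n == 1)
        {
            return new[] { x[0] };
        }

        var even = new Complex[n / 2];
        var odd = new Complex[n / 2];
        for (int i = 0; i < n / 2; i++)
        {
            even[i] = x[2 * i];
            odd[i] = x[2 * i + 1];
        }

        Complex[] e = Fft(even); // E(k), DFT of the even-indexed inputs
        Complex[] o = Fft(odd);  // O(k), DFT of the odd-indexed inputs

        var result = new Complex[n];
        for (int k = 0; k < n / 2; k++)
        {
            // The complex "weight" (twiddle factor) exp(-2*pi*i*k/N).
            Complex w = Complex.FromPolarCoordinates(1.0, -2.0 * Math.PI * k / n);
            result[k] = e[k] + w * o[k];         // first half of the outputs
            result[k + n / 2] = e[k] - w * o[k]; // second half reuses the same E and O
        }
        return result;
    }

    static void Main()
    {
        // Eight input samples, like x[0]..x[7] in the article's diagram.
        var x = new Complex[] { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0 };
        foreach (Complex value in Fft(x))
        {
            Console.WriteLine(value);
        }
    }
}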

August 18


Small Blu-ray player


Where can I get a Blu-ray player which (1) plugs into an HDMI port on a desktop PC, (2) is small enough to be placed on top of said desktop PC's tower unit without any problems, and (3) does not require deliberately circumventing DRM in order to watch Blu-ray disks? I've just now bought a compact Blu-ray player which plugs into a USB port, but discovered that it won't work unless I install AnyDVD (which I don't want to do, for obvious reasons!) Just to be clear, the issue is not cost, but size -- my computer is so old that it can only read DVDs and not Blu-rays, so I need an external Blu-ray player, but at the same time the space for its installation is severely limited, so I need one small enough to place on top of my PC or else I don't have any room left for it. As possible alternatives, would it be possible to (B) use a USB to HDMI adapter (is that even an actual thing?), or (C) when it comes time for my computer's next overhaul (which I wanted to do this winter, but didn't), replace its internal optical drive with one which can read Blu-rays while remaining backwards compatible with DVDs and CDs? 2601:646:8082:BA0:F993:EB64:4176:5F31 (talk) 03:12, 18 August 2024 (UTC)

So how big is the top of your PC? For instance, this Blu-ray player from Sony is 320 x 45 x 212 (mm). I think that will fit fairly neatly on top of a typical full or mid size tower case, overhanging by only half a centimeter left and right. You might clarify point (3). Why would your newly bought hardware refuse to work at all without third-party software to defeat digital restrictions? Is this to do with region locking, in fact? Is that Sony player likely to have the same problem? (I'm not fussy about resolution, so I've only ever owned regular DVDs, excuse me if I'm ignorant of any well-known facts.) I also wonder what software you're playing the media with, and whether, say, VLC media player or mpv (media player) is the actual solution here. Actually I just read in the mpv faq "It is a fact that playing DVDs/BDs gives a much better user experience when ripped to files", so perhaps the page Blu-ray ripper is helpful.  Card Zero  (talk) 07:00, 18 August 2024 (UTC)Reply
Region locking is most emphatically not an issue here, because the player was set for region 1 and all my Blu-ray disks are also region 1 (except for one or two which are all-region), and I tested it with a region-1 disk -- plus, after it refused to play, I used the CyberLink advisor tool for diagnostics, and it said specifically that the problem was HDCP, or more specifically it said that a USB port won't do for a Blu-ray player and I have to use an HDMI port or similar! (And yes, I didn't know about this either until it came up just the other day -- in fact, I had to look up what HDCP is and what it requires for compliance!) So, if the Sony player plugs into the HDMI port (my computer has a spare one), then there should be no problems of that sort! As for dimensions, they should be acceptable -- it would hang off by 3-4 inches on either side, but that's not a big deal! Thanks for the info! 2601:646:8082:BA0:F993:EB64:4176:5F31 (talk) 11:02, 18 August 2024 (UTC)
Oh, I mixed up width with depth, my apologies. It would fit even better if turned sideways ...
The HDCP article has this quote: "the main practical effect of HDCP has been to create one more way in which your electronics could fail to work properly with your TV." I wonder whether you might have better results connecting the player directly to the monitor, I don't see that involving the PC will bring much joy to the experience in this restrictive climate.  Card Zero  (talk) 11:15, 18 August 2024 (UTC)Reply
Did you actually buy a USB (standalone) Bluray player? I'm not convinced such things exist, since I'm not sure what the USB would be for. Some Bluray players may accept USB sticks but I don't see the purpose of connecting to a computer via USB. Cyberlink definitely would not be involved. It seems much more likely that you bought a USB Bluray reader or reader+writer i.e. an external optical drive with a USB interface. This is what you need to play Blurays on your computer and USB should not be a problem. The connection interface of the reader is largely irrelevant since the content is being decrypted on the computer not on the Bluray reader. However any commercial authorised software will require HDCP. This means your monitor needs to be connected to your computer with HDCP enabled (and sufficient version). HDMI is the best solution here since it's sort of designed for that sort of thing, so connecting your monitor to your computer via HDMI is the best solution. Failing that DisplayPort and DVI might work. You should check settings on both your monitor and GPU and ensure HDCP is enabled and if you can choose version, chose the highest version that works. Again, 4k is far more likely to be a problem. If for some reason your connecting your monitor to your computer via VGA, there's no way to have HDCP so it won't work. If it's an AIO or laptop, it might still need HDCP to be enabled internally to work. If you're using USB to connect your monitor to your computer, it will depend on how it's connected and possibly other stuff. I'm fairly sure it can work but it can be complicated. [2] IMO if at all possible try to just connect your monitor to your computer directly with HDMI. I suspect, but don't know, that this is what the Cyberlink advisor was suggesting. Connect your computer to your monitor via HDMI not via USB or DisplayPort. Of course if your monitor doesn't support HDCP, or doesn't support sufficient version, then unfortunately you're probably SoL if you want to use a simple authorised solution. Again unfortunately your only real option there is to get a monitor which supports HDCP of sufficient version. As said, 4k will be far more of a problem than traditional Blurays. But it's also possible that Cyberlink has been forced to require HDCP higher than a standalone Bluray player needs so it's possible you could use a standalone Bluray player even if you can't get playback working on your computer. Nil Einne (talk) 15:31, 19 August 2024 (UTC)Reply
BTW, be aware that even if you get it working, it might not work for newer titles without updates, and note this applies whether you use a computer solution or standalone player. See e.g. [3] Nil Einne (talk) 15:38, 19 August 2024 (UTC)Reply
I should also clarify that besides HDCP, I think there also need to be internal protections on the computer, probably TPM of some variety, since they're also supposed to stop the decrypted stream from being intercepted on the computer rather than just after it is output. Nil Einne (talk) 15:49, 19 August 2024 (UTC)
Unless your computer happens to have an HDMI capture card or capture device, I rather doubt a Bluray player exists that can connect to your computer's HDMI port. The HDMI port on your GPU (including any iGPU) is almost definitely only capable of acting as an output device; it cannot accept any input. (I mean most GPUs can't even do HDMI-CEC.) Even if you did happen to have an HDMI capture device like an Elgato or whatever, fair chance it still won't simply work. The Bluray player will almost definitely need HDCP enabled when it's playing a Bluray (perhaps you can disable it or it's not enabled for DVDs, not sure), and the capture device is unlikely to support HDCP. You could potentially add an HDCP stripper or HDMI splitter but that's adding a lot of work and you might need more than that. This seems way too complicated for what you want to do. Instead of trying to send the Bluray output to your computer, you should concentrate on sending it to your monitor. If your monitor has an HDMI port and it supports whatever version of HDCP your Bluray player requires, which I think for traditional Blurays isn't very high so it probably does, then you can likely just plug the Bluray player into your monitor and it will work. If you're already using the sole HDMI port on the monitor for your computer, the easiest solution would be just to unplug the HDMI from your computer and plug it into the Bluray player when you want to watch something. Alternatively you can get cheap and small HDMI switches from AliExpress and the like. Some fancy Bluray players may let you plug in an HDMI from a source device and act as a switch themselves, like an audio system etc, but these are likely to be expensive and I expect on the bulky side. (Not because they really need the space.) If you don't have an HDMI input on your monitor, you could try looking for a player with DisplayPort output, assuming this is what your monitor is using, but frankly I'm not sure they exist. You could try using some sort of active HDMI to DisplayPort converter, but especially with HDCP, I'm not sure how well this will work. Frankly, you might be better off just getting a monitor with HDMI input. If you don't have a standalone monitor but instead an AIO or laptop, frankly you're IMO better off just getting a small HDMI-capable monitor. The alternative is to get playback on the computer working. Trying to capture the HDMI output from a Bluray player on your computer is, to be blunt, a very daft solution however you spin it. Nil Einne (talk) 15:06, 19 August 2024 (UTC)


August 21
