Wikipedia:Reference desk/Archives/Computing/2013 December 20

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 20


How to determine which remaining paragraphs of Avicenna were written by Jagged 85?


Has someone written a program that could determine how much of Avicenna was written by User:Jagged 85?

As per Wikipedia talk:Requests for comment/Jagged 85/Cleanup, content written by that user needs to be checked and cleaned up. Because some of what he inserted is actually true, editors can't simply mass-delete his content; they need to actually check over his edits one by one, which makes the cleanup harder.

If a program could compare his revisions to the article's current text, maybe editors could zero in on the paragraphs that he wrote, then go to Wikipedia:RX and ask for the relevant sources so they can check them.

Thanks, WhisperToMe (talk) 02:11, 20 December 2013 (UTC)[reply]

Wikipedia:WikiBlame solves the problem another way, by showing who wrote which text. Graeme Bartlett (talk) 12:06, 21 December 2013 (UTC)[reply]
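For anyone who wants to script the comparison step themselves, here is a minimal JavaScript sketch. It assumes the current article's paragraphs and the paragraphs added in Jagged 85's revisions have already been fetched as strings (for example via WikiBlame or the MediaWiki API); the word-overlap score and the 0.8 threshold are crude stand-ins for a real diff, and the example data is hypothetical.

// Crude similarity: the fraction of words in paragraph a that also occur in paragraph b.
// A real tool would use a proper diff or text shingling; this is only a sketch.
function wordOverlap(a, b) {
	var wordsA = a.toLowerCase().split(/\W+/).filter(function (w) { return w.length > 0; });
	var wordsB = Object.create(null);
	b.toLowerCase().split(/\W+/).forEach(function (w) { wordsB[w] = true; });
	if (wordsA.length === 0) { return 0; }
	var hits = 0;
	for (var i = 0; i < wordsA.length; i++) {
		if (wordsB[wordsA[i]]) { hits++; }
	}
	return hits / wordsA.length;
}

// Return the current paragraphs that closely match something the user added.
// The 0.8 threshold is an arbitrary choice.
function flagSuspectParagraphs(currentParagraphs, userParagraphs) {
	return currentParagraphs.filter(function (p) {
		return userParagraphs.some(function (u) { return wordOverlap(p, u) > 0.8; });
	});
}

// Example with made-up snippets:
var flagged = flagSuspectParagraphs(
	["Avicenna was born around 980 near Bukhara.", "He wrote the Canon of Medicine."],
	["He wrote the Canon of Medicine and many other works."]);
console.log(flagged); // ["He wrote the Canon of Medicine."]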

Any way to tile an image you're linking to in the link?


Hello I'm wondering if it would be possible to tile an image I'm linking to. Let me explain: on my blog, I have a link to an image from a different site (my friend's deviantart account). The link leads directly to the image (not to the deviantart page) and displays the image when clicked against a blank background. My question is, is there anything I can add to the hyperlink that will make the image tiled when clicked on? I've never heard of editing hyperlinks for html effects, so I'm not sure if it's possible, but I figured I'd ask anyway. Thanks! 74.69.117.101 (talk) 02:57, 20 December 2013 (UTC)[reply]

If I understand what you want to do, the answer is probably no. If deviantart has a way for you to display the image 'tiled' (I presume you mean you want lots of copies of the image displayed in a tiled fashion), then all you have to do is change your link so that deviantart activates its image-tiling mode when the link is clicked (which is usually, but not always, possible, depending on how deviantart activates that mode). But if they don't, and I'm guessing they don't, then what you have to do is make a page hosted on your own server which specifies that the image should be opened and tiled. The trouble is this would require image hotlinking, which many sites don't like and take action to prevent. (Some ad-filtering and other security software may disable such hotlinking as well, thinking it's being done for unwanted reasons.) Nil Einne (talk) 03:25, 20 December 2013 (UTC)[reply]
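For the self-hosted route described above, a minimal sketch of such a page might look like this; the image URL is only a placeholder for your friend's image, and the hotlinking caveats above still apply.

<!DOCTYPE html>
<html>
<head>
<title>Tiled image</title>
<style type="text/css">
/* background-image repeats (tiles) by default, filling the whole page */
body { background-image: url("http://example.com/path/to/friends-image.png"); }
</style>
</head>
<body></body>
</html>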
A bit of javascript might do what you want. The following code changes every link with the "tilelink" class so that when you click the link it loads the image the link points to and then tiles the background of an element in the page you are currently on.--Salix alba (talk): 11:14, 20 December 2013 (UTC)[reply]
<!DOCTYPE html>
<html>
<head>
<title>Tile</title>

<script type="text/javascript">

// Attach the click handler to every link that has the "tilelink" class.
function setup() {
	var eles = document.getElementsByClassName("tilelink");
	for (var i = 0; i < eles.length; ++i) {
		eles[i].onclick = tile;
	}
}

// Stop the browser from following the link; instead, use the link's image
// as a repeating (tiled) background for the #target element.
function tile(event) {
	event.preventDefault();
	var ele = document.getElementById("target");
	var src = event.currentTarget.href; // currentTarget is the <a> the handler was attached to
	ele.style.backgroundImage = "url(" + src + ")";
}
</script>
<style type="text/css">
/* The lengths need units (px); without them the declarations are ignored in standards mode. */
#target {
  width: 400px;
  height: 400px;
  border: 1px solid black;
}
</style>
</head>
<body onLoad="setup();">
<a class="tilelink" href="http://upload.wikimedia.org/wikipedia/en/7/70/Example.png">example</a>
<a class="tilelink" href="http://upload.wikimedia.org/wikipedia/en/b/ba/1974_Iceland_1100_year_coin_%28reverse%29.jpg">coin</a>

<div id="target"> </div>
</body>
</html>

iPod Updates


How do I stop my iPod 4Gen from telling me to update apps which need an iPod 5Gen with iOS 7 on it for the update? My AppStore app now has an unsightly big red circle next to it with a large number, which is increasing daily, because many of the apps cannot be updated. Alternatively, is there a way to get iOS 7 onto a 4Gen? KägeTorä - (影虎) (TALK) 09:39, 20 December 2013 (UTC)[reply]

You could just turn off the 'badge notifications' for the App Store. I'm afraid I've already updated to iOS7, so I can't really remember how to do it on '6, and I can't find any instructions on 'tinterweb either, but if you poke around in the settings you should find a list of apps and what notifications are allowed for each one. Simply turn off the ones for the App Store to remove the red circle. - Cucumber Mike (talk) 11:44, 21 December 2013 (UTC)[reply]
Well, I thought of that, but that would mean I would never know which apps I CAN update on this architecture. What I really want is for the App Store to notify me when I can update, and not do anything when I can't. KägeTorä - (影虎) (TALK) 22:10, 21 December 2013 (UTC)[reply]

Are algorithms universal?


When analyzing the efficiency of an algorithm, can we assume that it would work on any kind of computer architecture (even something completely different from what we have right now)? Are algorithms something like 2 + 2, which should be valid anywhere? OsmanRF34 (talk) 12:56, 20 December 2013 (UTC)[reply]

What you're analyzing is the number of times a specific operation will run relative to input size. Any system that follows the algorithm will run those operations the same number of times. The simplest step to a "different" type of architecture is moving to a parallel system. In that case time can be saved by doing some of the operations at the same time, depending on how independent the operations are. The same tools and techniques apply, but you also need to understand how the new system affects the evaluation of the algorithm in order to do the analysis. The parallel system may do 5000 multiplications faster than the one-processor system because some are run in parallel, but the complexity in terms of operations performed is the same. Several laws were derived that define maximum gains in speedup from adding more cores and other similar relationships between one- and multi-processor systems; it seems reasonable to think that the same sort of work would be done with a novel architecture. Katie R (talk) 13:55, 20 December 2013 (UTC)[reply]
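One of the laws referred to above is Amdahl's law: if a fraction p of the work can be run in parallel on N processors and the rest is serial, the best possible speedup is 1/((1 - p) + p/N). A minimal sketch of the calculation, with arbitrary example values of p and N:

// Amdahl's law: speedup on N processors when a fraction p of the work is parallelizable.
function amdahlSpeedup(p, n) {
	return 1 / ((1 - p) + p / n);
}

// Example: work that is 90% parallelizable never speeds up more than 10x, however many cores you add.
console.log(amdahlSpeedup(0.9, 4));       // ~3.08
console.log(amdahlSpeedup(0.9, 1000000)); // ~10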
So, to paraphrase the above, "No". Also, if you consider embedded computers, then each system will be tweaked to favor one algorithm or another to accomplish a specific task, depending on the actual hardware. StuRat (talk) 14:08, 20 December 2013 (UTC)[reply]
Yes, but what about sorting something and then going through it? Wouldn't that always be more efficient than just going through it, no matter where? At least, can't we postulate the existence of some universal algorithms? OsmanRF34 (talk) 14:12, 20 December 2013 (UTC)[reply]
Well, consider that the optimal sort method depends on how much memory is available. It's possible to sort in-place, but such a sort is less efficient than one which uses lots of RAM. So, your ideal sorting method would vary depending on the hardware (as well as other factors). StuRat (talk) 17:59, 20 December 2013 (UTC)[reply]
Running a linear search will always take O(n) comparisons, and a binary search will always take O(log n). But without knowing the details of the new architecture, it is impossible to say how much time those operations take. Maybe the new architecture can run n comparisons simultaneously, in which case a linear search could be O(1) in time (being generous here - if you define the algorithm as "check each element to see if it matches, and return true if one does", then it could run on either system). A binary search would still be O(log n), because each check is dependent on the one before it. Katie R (talk) 16:32, 20 December 2013 (UTC)[reply]
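To make the operation-counting concrete, here is a small sketch that counts how many elements each search examines in a sorted array of 1024 numbers; the data is arbitrary, and the counts are a property of the algorithms themselves, not of the machine running them, even though the wall-clock time will differ.

// Count how many array elements a linear search examines before finding the key.
function linearSearchProbes(arr, key) {
	var probes = 0;
	for (var i = 0; i < arr.length; i++) {
		probes++;
		if (arr[i] === key) { break; }
	}
	return probes;                      // O(n) in the worst case
}

// Count how many array elements a binary search examines in a sorted array.
function binarySearchProbes(arr, key) {
	var lo = 0, hi = arr.length - 1, probes = 0;
	while (lo <= hi) {
		var mid = (lo + hi) >> 1;
		probes++;
		if (arr[mid] === key) { break; }
		if (arr[mid] < key) { lo = mid + 1; } else { hi = mid - 1; }
	}
	return probes;                      // O(log n) in the worst case
}

var sorted = [];
for (var i = 0; i < 1024; i++) { sorted.push(i); }
console.log(linearSearchProbes(sorted, 1023)); // 1024
console.log(binarySearchProbes(sorted, 1023)); // 11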
Computer science is the abstraction of mathematical principles for the purpose of computing. It studies algorithms using techniques that are independent of the specific design constraints of any machine.
Computer engineering is the application of computer science, electronic engineering, and related disciplines, to solve problems related to computers, using specific machines that we know how to design - usually, electronic digital computers implemented in VLSI integrated circuits.
When we study an algorithm, we are talking about a pure mathematical representation of a process. When we implement the algorithm, we have to place it in a form that can be understood by a machine - a programming language. Then we can study how machine limitations might impact the performance of the algorithm.
A few computer scientists spend time thinking about algorithms that are well-suited to run on machines that are very different from today's machines. But in general, a representation of an algorithm - as studied by a computer scientist - is already sufficiently abstract that it is not bound to a specific computer architecture. Nimur (talk) 16:48, 20 December 2013 (UTC)[reply]


Here's a great example - the infamous spaghetti sort. To a novice student of computer science, the spaghetti computer looks like a fantastic way to break the "big O" rules - the lower bound on the time complexity of a sorting algorithm. (We all learned that sorting a list takes n·log(n) time, and if you have a way to do better, you'd make a lot of smart people very happy). Spaghetti sort naively claims to run in O(1) - constant time. Jumping on the opportunity, the eager computer scientist and the NSA stop purchasing electronic computers, and start buying immense quantities of spaghetti, so that they can start breaking those pesky cryptographic hashing algorithms that everyone is always trying to break.
But the skilled computer scientist actually thinks about the algorithm, and recognizes that the plain-English description of the spaghetti computer has glossed over some very real, very important details. The spaghetti computer just "finds" the longest rod of spaghetti - which is done "by inspection." But the description forgot to mention the preparation time for the spaghetti - which is linear - and the search time - which is still on the order of n·log(n). Because the analogy was so convenient, and because an ordinary handful of spaghetti can be "sorted by inspection," the naive computer science student has confused "really fast" with "constant execution-time." That's a big error! It's tantamount to saying that if we just build our L1 cache large enough, then we can store the entire internet in it, compute any problem and retrieve any data in zero time! It's just a stupid error. The computer scientist actually has to think about why the algorithm takes time, by breaking the procedure into its most fundamental and atomic steps. If we assume that the "spaghetti computer" has constant run-time for a sorting algorithm, it implies that we don't need to compare - which is a flaw in the basic logic of the algorithm. (Another day, I might use the same approach to poke some holes in a lot of the claims made about the oft-lauded, infrequently-defined quantum computer.)
And finally, the skilled computer engineer starts looking at the problem even more critically. So, you want to sort a list, and you want to do it with spaghetti. How large is a rod of spaghetti? How much energy does it take to move around one spaghetti rod? The naive approach is to conflate "very little effort" with "zero effort." And the same goes for space: each spaghetti rod is very small: we can hold "a lot" of spaghetti in one handful; but if we sort n rods, we need n times as much space! What if n goes to 10^200? That spaghetti is gonna get pretty heavy and we're going to need a few billion billion trucks. This is all because the simplified description of the algorithm relies on an unfounded assumption: infinitesimally-small is conflated with zero-size. That's just a stupid mathematical error. It also takes very little energy to move around a few electrons, but your computer still requires energy. Each transistor is very small, but when we build one billion of them, the chip becomes quite large. Engineers have to count these things. A very small amount, multiplied by a very large number of repetitions, is no longer negligible - this is a theoretical underpinning of most of modern mathematics.
Today, when we look at all the possible things we might build a computer out of, we find that the smallest, lowest energy devices are electronic logic gates, manufactured using photolithography on simple semiconductor substrates. Yet, no matter how much we optimize the processes, and no matter how we finagle with the physical processes that represent our information, we still can't beat the algorithmic complexity. This is a mathematical fact. Nimur (talk) 19:22, 20 December 2013 (UTC)[reply]
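To spell out where the spaghetti computer's hidden work lives, here is a sketch that simulates "repeatedly pull out the tallest remaining rod", with the "inspection" written out as an explicit scan; the comparisons the analogy glosses over are still being performed (the rod lengths are arbitrary example data):

// Simulate spaghetti sort: repeatedly "inspect" the bunch and pull out the tallest rod.
// Written out explicitly, each inspection is a scan over the remaining rods,
// so the comparisons the analogy hides are still there.
function spaghettiSort(rods) {
	var remaining = rods.slice();
	var sorted = [];
	var comparisons = 0;
	while (remaining.length > 0) {
		var tallest = 0;
		for (var i = 1; i < remaining.length; i++) {   // the "inspection" step
			comparisons++;
			if (remaining[i] > remaining[tallest]) { tallest = i; }
		}
		sorted.push(remaining.splice(tallest, 1)[0]);
	}
	console.log(comparisons + " comparisons for " + rods.length + " rods");
	return sorted;   // tallest rod first
}

spaghettiSort([3, 1, 4, 1, 5, 9, 2, 6]);   // 28 comparisons for 8 rods, i.e. n*(n-1)/2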
What about quantum computers? Integer factorization is believed to be in the bounded-error quantum polynomial time complexity class but is suspected to be outside the polynomial time complexity class.--Salix alba (talk): 20:06, 20 December 2013 (UTC)[reply]
Personally, I think the term "quantum computer" is just poor word choice. Exactly what is "quantized"? Digital computers already quantize every quantity that they deal with - in time, space, voltage, current, everything - at the microscopic and at the macroscopic level. I'd bet money that your computer quantizes the images it displays to you; it quantizes the voltages and currents that flow through its transistors; it quantizes the information that it processes. So, which part of the computer you use today isn't already quantized? Or perhaps the terminology is wrong, and "quantum computer" as used in the popular press really means "probabilistically-correct computer whose information is stored using certain specific elementary physical properties of simple atomic-scale structures (only, never yielding quite as high a probability of correctness as the existing commercial computers that use different specific physical properties of more complex atomic-scale structures)"?
The word "quantum" is bandied about as if it has some sort of magic power. It is lumped together with aspects of atomic physics. It is implied to have mysterious characteristics. It is used as an incorrect surrogate to describe probabilistic models, whether they are quantized or not. But then, if you spend any serious amount of time studying either the atomic physics associated with quantum mechanics, or the fundamental mathematics associated with quantization and discretization of continuous quantities, you find most of the mystery evaporates. So, you've got a bunch more tools, but you're still solving the same old problems.
So, what about quantum computers? Nimur (talk) 20:31, 20 December 2013 (UTC)[reply]
I think what Salix alba wrote is perfectly reasonable. The fact that there are things that are efficiently computable in a quantum world but seem not to be in a classical world is surprising. Unlike the case of the spaghetti sort, there's no extra computation hidden in the setup or readout phases. Shor's factoring algorithm is classical with a quantum "subroutine", and both parts are included in the overall time analysis. -- BenRG (talk) 22:07, 23 December 2013 (UTC)[reply]
I'm not sure I understand your question, but the Church-Turing thesis may be relevant. It says that any general-purpose computing machine can simulate any other, so they can all run each other's algorithms (but not necessarily very efficiently). -- BenRG (talk) 22:07, 23 December 2013 (UTC)[reply]

Prolonging life of electronics


Which is better to prolong the life of electronics - keeping it always on or in standby or switching it off completely whenever it's not in use? Clover345 (talk) 21:37, 20 December 2013 (UTC)[reply]

It depends:
1) Risks from leaving it on include overheating. Here I'm not just talking about the critical overheating which causes an immediate shutdown, but long-term heat damage. Electronic devices which spend years close to the upper temperature limit will tend to break down.
2) Risks from restarting include a voltage spike which can cause damage, too.
So, how these risks are weighed against each other depends on the device and how you use it. Take light bulbs. Incandescent bulbs used a lot of electricity when left on, and got very hot, so were prone to long-term thermal damage (the filament would slowly sublimate). CFL bulbs, on the other hand, don't use much electricity and don't get all that hot, so the voltage spike when turning them on and off is more likely to damage them. Thus, while incandescent bulbs should be turned off when you leave the room, CFL's should probably be left on, if you plan to return soon. StuRat (talk) 07:33, 21 December 2013 (UTC)[reply]
This is a multifaceted issue. In the old days it was simple. Electronic circuitry used thermionic valves. The thermal cycle of being switched on and off caused stress on the metal-to-glass seal (metal and glass have different coefficients of expansion and contraction), leading to the ingress of air and failure. Also, soldering irons of that era had un-plated copper bits. Some of the copper dissolved into the solder, forming an alloy that would more readily 'work harden' and lead to a 'dry soldered joint'. Modern electronic circuits are a little different. The soldered joints are smaller, run at lower temperatures and are less prone to these issues. The IC packaging, likewise, is less prone to thermal cycling. Yet these days we expect 'consumer' electronics to run faultlessly for ten years or more, whereas a few decades ago we had to have the TV repair man in at least once a year.
So, to get to the gist of your question, StuRat mentions:
1) Risks from overheating. True, yet you get what you pay for. On a good quality board, the components should not be operating at their limits and thus should last for years. A good example of where this is not done: some routers (by some well-known companies) have used cheap Japanese capacitors in their switch-mode power supplies and so last just about the two years before the warranty runs out (good news – cheap to replace).
2) Risks from voltage spikes. Again, it depends on what you pay for. Well-designed electronics should cope with spikes.
So, if you are looking for general guidance, I would say leave things switched on (or in sleep mode) all the time. Buy one of the many power cost monitors and consider whether the few cents a day it costs you to leave the equipment on is worth it. Then look at what your system is doing. One of my computers was constantly reading/writing to a terabyte external drive – regardless of whether I was using it. OK, it might last three years at that usage, amortized to a few cents a day, but it turned out to be a bug. So always take time to occasionally look at the bigger picture too. Some things need to be powered down, like some external hard drives (and OK... someone has just shouted out over my shoulder not to forget to switch off your wife's Vibrator – which I assume is Android's latest release) (P.S. those in the room with me are quaffing down all the vintage Port (an expensive vintage at that) which was meant as a reward for Father Christmas).
Just think: if Father Christmas had licensed his intellectual property on how he can descend all chimneys at once, he could have sold the rights to quantum computing to Microsoft. Then he would be rich enough to give me that train set I asked him for when I was ten years old. In Britain, we were just coming out of the post-WW2 recession. It came with little signal posts and something that filled the water tender up, and a little lead-cast station master with a flag. There was also a signal box and some points (railroad switches), where I could arrange two trains to come together and crash! If it had come... but it didn't! So I spent all of that Christmas just sitting in front of the Christmas tree, cracking walnuts. If you're reading this, Father Christmas (and I know you are... my parents told me you know everything about me and whether I have been naughty or nice), all I am asking for (very humbly) is my very own Dublo train set. Yours truly...--Aspro (talk) 18:31, 21 December 2013 (UTC)[reply]

Another thing to consider if you run electronics continuously is the fans driving dust inside the cases. I realized this a long time ago, and since then I've always turned my computers off when I'm done. AboutFace_22 — Preceding unsigned comment added by AboutFace 22 (talkcontribs) 16:41, 21 December 2013 (UTC)[reply]

On the subject of dust: the case or tower (or whatever the box is called that houses your motherboard, power supply, etc.) can be opened. First, vacuum out all the fluff – not hard! Second (and as you vacuum), use a good-quality artist's brush. I don't know where in the world you are situated, but in the UK I would ask for something like a 'squirrel brush' (less than a dollar). Also purchase a can of gas duster. With brush and can, blow out all the fluff from the CPU heat sink (a big aluminum thing with fins on the motherboard) and other hard-to-reach places. Replace the cover. Off the top of my head, I can't estimate how long it would take you to do it, because when I first did it I already knew a little bit about where fluff would collect. However, use your common sense. You know what fluff looks like and that it can impede cooling. As long as you don't poke the board with anything sharp (I assume you use common sense and have already disconnected it from the mains/UPS before taking the cover off), then you will not do any harm (err, should I mention grounding here?). The reason I have gone to lengths over this is that other readers may be reading this who might not be able to buy a new computer. Yet they may know of someone who is throwing out a computer because it no longer works. Very often it is because (as you warn) it's fluffed up. Twenty minutes of de-fluffing and hey presto, you have a functional computer (an' OK, before any other smart alecks get a chance to say it – another forty minutes more to install Linux and you will have a FULLY functioning computer. Just wanted to get that bit in before anybody else did). --Aspro (talk) 19:17, 21 December 2013 (UTC)[reply]