Wikipedia:Reference desk/Archives/Computing/2009 April 6

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 6

Why do Pentium IIIs run at different speeds?

The article says the same versions run at different speeds. Is this speed built into the hardware of the CPU so that it cannot be changed? Are these different-speed Pentium IIIs truly different, or are they just the same hardware that has been 'crippled' by Intel to run at lower speeds as part of product differentiation? 78.147.28.51 (talk) 12:15, 6 April 2009 (UTC)[reply]

As time went on, the technology for producing P3 cores improved, so faster speeds became possible with less heat generated. Some CPUs are, as you say, 'crippled', but not (usually) to make more money: the CPUs are thoroughly tested at various speeds and are rated near the highest speed at which they'll reliably operate.
That said, some CPUs are crippled just to make money for the manufacturer. The current triple-core AMD chips, for example, can have the fourth core enabled on certain motherboards. AMD don't make true three-core chips; they make one-, two- and four-core chips, so a three-core chip is either a chip with a failed core (selling those as three-core parts increases the yield a great deal) or a chip with a working core simply disabled so AMD can target a lower price point.
You can overclock Pentium III CPUs fairly easily, but this will probably shorten the life of the chip and could break it completely, so unless you are sure of what you're doing, overclocking is not recommended. —Preceding unsigned comment added by Hideki.adam (talk · contribs) 12:47, 6 April 2009 (UTC)[reply]

Tips on buying a second-hand computer for Linux

The small computer-repair shops near me sell off old pre-owned, reconditioned computers at modest prices; I imagine they obtain them from trade-in deals. I'm tempted to buy one, as they are better than the even older computer I am currently using, and one may save me having to buy a new computer for several times the price in the future. The CPUs are typically 1.1 to 1.4 GHz or less, usually AMD, and they run XP. I would buy one with the intention of eventually installing Linux. I am not interested in games, but I would like to be able to run things such as the statistical language "R" and its GUIs. What should I look out for when buying one, with regard to being future-proof? Is there any problem with an AMD CPU and Linux, e.g. Ubuntu?

As an aside, I've had Puppy Linux recommended to me for old computers, but I am concerned there might not be much software available for it, although I understand a version of OpenOffice 3 will run on Puppy Linux. I've never used Linux before. 78.147.28.51 (talk) 12:29, 6 April 2009 (UTC)[reply]

You certainly won't have problems running Linux on slightly older AMD processors. Linux tends to run best on hardware that's been out there for a while, since it is a community effort and more people have had a chance to use it. Of course, you might run into problems with any oddball peripherals that come with the system (a Winmodem, for example, will rarely work under Linux). However, if they're being sold with Linux on them, you can be pretty sure they'll have already tested that hardware. If you want to use R on Ubuntu, I suggest installing the r-recommended package (apt-get install r-recommended), which installs the base and a bunch of useful CRAN additions. -- JSBillings 12:45, 6 April 2009 (UTC)[reply]
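A minimal sketch of that install plus a first smoke test on an Ubuntu of that era (package names as shipped then; check with apt-cache search r-base if in doubt):

    sudo apt-get update
    sudo apt-get install r-recommended         # r-base plus a set of recommended CRAN packages
    R --version                                # confirm R is installed and on the PATH
    echo 'summary(rnorm(100))' | R --no-save   # one-liner to prove the interpreter runs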
I believe they come with XP on them, not Linux. StuRat (talk) 13:51, 6 April 2009 (UTC)[reply]
It's worth mentioning that Linux does require a reasonable amount of RAM to run effectively. A fast P2/P3 will probably be adequate as far as CPU speed goes, but if you're running demanding X applications such as OpenOffice you should have at the absolute minimum 256 MB of RAM, and ideally 512 MB. Any P4/Athlon system is going to be well above the requirements for Linux, so perhaps get one of those. I'd recommend Ubuntu if you're new to Linux, or Debian if you're not; both have a powerful package management system, meaning you will rarely if ever have to compile anything yourself. (Incidentally, my old Linux internet server was a 64 MB P166. It didn't run X; it just handled my firewall, a webserver and a few other bits, but that should give you an idea of the sort of hardware you can run Linux on. It was replaced last year with an Athlon 2000, because the 72-pin SIMMs in the old P166 were needed for my Amiga ^^) But yes, short answer: anything P3 or greater with 256 MB or more of RAM should do what you want. —Preceding unsigned comment added by Hideki.adam (talk · contribs) 12:53, 6 April 2009 (UTC)[reply]
One suggestion is to create a Linux live CD and try it out in various used computers before agreeing to buy one. This, of course, requires that they allow you to do this test (call them and ask first). You need a fairly small version of Linux to fit on a single CD; Puppy Linux should fit, as will Damn Small Linux. Hopefully your current computer would be adequate to create the CD and maybe test it out. StuRat (talk) 13:51, 6 April 2009 (UTC)[reply]
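If the machine you burn from runs Linux, writing the downloaded image might look like the sketch below; the ISO filename is hypothetical, and the burner address should be confirmed with wodim --devices first (on Windows, any ISO-burning tool does the same job):

    md5sum puppy-4.1.2.iso                    # compare against the checksum on the download page
    wodim -v dev=/dev/cdrw puppy-4.1.2.iso    # write the live-CD image to a blank disc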

Thanks. For the best future-proofing, would it be a good rule of thumb simply to buy the computer with the fastest CPU? Would a faster CPU usually come with a motherboard that has a higher maximum memory capacity, please? The actual computer may not currently have the maximum memory installed, but as extra memory can be bought from eBay very cheaply, that is not a consideration when buying. Unfortunately, due to the low price, I do not think the retailers are willing to spend time setting the computer up to allow testing (actually only the tower is for sale), or even to allow looking inside the case, although they do offer a money-back guarantee. 78.147.28.51 (talk) 13:58, 6 April 2009 (UTC)[reply]

That's a shame. The ability to try something out before making the purchase decision is the main advantage of a brick-and-mortar store over an Internet purchase. StuRat (talk) 06:29, 7 April 2009 (UTC)[reply]

The computer that you describe sounds similar to the one I use now for Kubuntu. It's likely to be OK for the job. Note that there are (or were when I last looked) different versions of (K)ubuntu for Intel and AMD. -- Hoary (talk) 00:29, 13 April 2009 (UTC)[reply]

Excel rectangles in a chart?

Hi, I want to draw rectangles in a chart in Excel 2007. Say I have co-ordinates (x1,y1) = (0.1,5) and (x2,y2) = (2,6); I'd like to draw a rectangle with its bottom left corner at (x1,y1) and top right at (x2,y2). The tricky bit is that I'd like the rectangle to be filled in solid, ideally with the full range of Excel options (transparency etc.), but this isn't strictly necessary. Anyone got any ideas?

Cheers, LHMike (talk) 13:00, 6 April 2009 (UTC)[reply]

For the non-filled rectangle, you enter the series (x1,y1) (x1,y2) (x2,y2) (x2,y1) (x1,y1), which with your numbers is (0.1,5) (0.1,6) (2,6) (2,5) (0.1,5), and then make a scatter plot with lines showing and point markers hidden. For the filled rectangle, I don't know. Jørgen (talk) 13:10, 6 April 2009 (UTC)[reply]


You can insert an "autoshape" (Insert → Picture → AutoShapes) on top of the chart. This will allow you full control over the formatting of the rectangle, but it doesn't technically become part of the chart (meaning changing/moving/resizing the graph can be somewhat complicated). For a more "integrated" approach, you could try adding a new data series with one or more points and enable "data labels" for that series, then move the data label(s) into place and format them appropriately (controlling the size of the labels might prove difficult though). – 74  13:35, 6 April 2009 (UTC)[reply]
Basically the answer is that Excel cannot generate such things programmatically; you draw them in yourself. In that case you're better off not using Excel for such things, but a real vector editor. --140.247.242.83 (talk) 19:17, 6 April 2009 (UTC)[reply]
Sigh... thanks, all. I think the shapes answer is the way forward; it's just a shame it won't be linked to the real numerical data. Microsoft always come so close to getting things right, but then seem to fall just short. Like vector drawing in Word: almost, but not quite, good enough for technical drawings. LHMike (talk) 21:05, 6 April 2009 (UTC)[reply]

Blocking deep packet inspection?

Is there any way to block deep packet inspection? — Twas Now (talk · contribs · e-mail) 15:20, 6 April 2009 (UTC)[reply]

Yes: encryption. --Sean 16:58, 6 April 2009 (UTC)[reply]
Specifically protocol encryption. 121.72.192.28 (talk) 19:30, 6 April 2009 (UTC)[reply]

Thanks. — Twas Now (talk · contribs · e-mail) 01:56, 7 April 2009 (UTC)[reply]

Potentially some sort of steganography would also work. – 74  04:34, 7 April 2009 (UTC)[reply]

New, but related question: Is there a way to hide from your ISP which sites you are going to? HTTPS only works on some websites, not all. Proxies could be the answer, but they are slow. Tor is available, but I worry that someone else's traffic will exit from my IP and I'll get nailed for child porn because a jury can't understand what an onion router is. Taggart.BBS (talk) 06:42, 7 April 2009 (UTC)[reply]

You could also add extra IP headers, or perform IP fragmentation, either of which may confuse the inspection technology, though both will slow you down. To hide from your ISP, dial up to another ISP, preferably one in another country! Graeme Bartlett (talk) 06:56, 7 April 2009 (UTC)[reply]
HTTPS won't necessarily hide what websites you're going to: the ISP would still be able to detect which website you went to, just not the actual content of the pages, as that would be encrypted. You already mentioned Tor, which is one solution, but I think there may be some confusion: you don't need to make it use your computer as an "exit", and in fact unless you actually configure it to do so, it won't. Aside from that, you would need a proxy server to relay your requests on another external network, and then connect to that via either a VPN or an SSH tunnel (or, if using a VPN, just route the Internet traffic down the VPN too). ZX81 talk 04:10, 8 April 2009 (UTC)[reply]
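As a sketch of that SSH-tunnel approach (the hostname is a placeholder; you need a shell account on a remote server you trust):

    # open a local SOCKS proxy on port 1080, carried inside the encrypted SSH session
    ssh -N -D 1080 user@proxy.example.com
    # then point the browser at SOCKS host localhost, port 1080; in Firefox, also set
    # network.proxy.socks_remote_dns to true so DNS lookups don't leak to the ISP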
Also, you don't need to be an exit node to use Tor -- just set the exit policies to block everything. (Actually, you can even set Tor to "client-only" mode, but that isn't generally recommended.) --grawity 11:41, 8 April 2009 (UTC)[reply]

Backup question

My USB hard drive, which I had been using to make regular backups of the pictures I have taken with my digital camera (over 27,000 so far), just failed. The pictures are safe, as my internal hard drive is still working OK, but if it fails, at least some of them will be lost forever. This made me realise that the method I had been using (keeping a USB hard drive continuously connected to my computer and powered on, and making regular backups to it) is not reliable enough. A single mechanical failure of the drive and the backups have been for naught. How should I do it instead? I have considered some options:

The same way I have done it previously:

  • Pros: Very easy and fast.
  • Cons: Very prone to failure.

Using a USB hard drive, but only connecting it for backups/restores, instead of keeping it continuously connected:

  • Pros: Less prone to failure than the above.
  • Cons: I need to remember to connect/disconnect the drive.

Using DVD discs:

  • Pros: Optical media is far less prone to failure than magnetic media.
  • Cons: The pictures already take over 10 DVDs. The process of burning them all takes several hours.

Using magnetic tape:

  • Pros: Less prone to failure than hard drives, and easier than using DVD discs.
  • Cons: Tape drives and cartridges are very expensive.

Online backup:

  • Pros: Very easy and fast.
  • Cons: Requires constant payment. Risk of failure (albeit tiny) that is beyond my control.

What would you suggest? JIP | Talk 18:25, 6 April 2009 (UTC)[reply]

I rotate through a small set of inexpensive USB hard disks, kept up to date with rsync (a sample invocation is sketched after this post). Do a backup session like a ritual, at, say, the same time every week. Don't keep the drive connected the rest of the time. Keep at least one drive at a friend's or relative's house (so if you have a fire, that one survives). In addition, can I add some cons:
  • Online backup: really, it's not that fast if you've taken a bunch of photos - but as you can probably get it to work in the background, you probably don't care.
  • Magnetic tape: a big con is that if your machine is destroyed or unavailable (fire, flood, etc.), you need to buy another expensive tape drive, whereas the other media can be read on any machine.
  • DVD discs: limits on filenames (acceptable characters and path lengths) either make for imperfect backups or, with rubbish backup programs, demand endless babysitting during each backup session; people innocently exceed the really quite modest limits of Joliet all the time.
On both a price-per-MB and MB-per-unit-volume basis hard disks have this battle totally won, and you get speed and convenience to boot. 87.115.166.150 (talk) 19:15, 6 April 2009 (UTC)[reply]
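A sketch of that weekly rsync ritual, with hypothetical paths; run with --dry-run first until you trust it:

    # mirror the photo collection onto the external drive; -a preserves
    # timestamps and permissions, --delete drops files removed from the source
    rsync -av --delete --dry-run ~/Pictures/ /media/backup1/Pictures/
    rsync -av --delete ~/Pictures/ /media/backup1/Pictures/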
OP missed an important advantage in the "Pros" of a USB hard disk that is continuously on and attached: it will actually be used to make backups. Don't sell the human factor short! You'll sometimes be too tired at night to mess about with the cabling and software, etc. In any case, I'm not sure about your assertion that option #2 lengthens the life of your hard disk. Fifteen years ago I remember being told that the "brushes" in the hard disk motor would have a longer lifespan if you left the hard disk continuously on. Anyway, it doesn't seem to hurt the Google search engine rigs, which use consumer-grade hard disks. Tempshill (talk) 22:46, 6 April 2009 (UTC)[reply]
My biggest 'niggle' with permanently connected HD/flash-based backups is the possibility that a virus or other corruption could propagate into your backups, effectively rendering them useless. I would personally recommend a cloud-computing solution for small amounts of data, and in the future I fully expect this 'outsourcing' of backups to be commonplace. For large amounts of data, no one has mentioned Blu-ray discs - you could fit all your pictures on two of them. However, writing would still probably take ages, and as Tempshill says, actually getting round to doing it is so often the tricky bit. LHMike (talk) 22:56, 6 April 2009 (UTC)[reply]
My home backup plan is simple (to me). I wanted a backup server that would not go down if the power went out, would connect to my wireless network, and would be small enough to grab if I had to run out of the house. I got an old cheap laptop with the largest drive I could find and put console-only Linux on it (no need to waste disk space on a GUI). Now I can back up my computer to it, and my wife can back up hers as well. The worry is that all my backups are in my home. So I signed up for Amazon's S3 service, and I use s3sync (similar to rsync) to back up the laptop. It costs me about $3/month. Now I have an in-house backup if I lose a drive and need to rebuild right away, and an offsite backup in case I lose the house and need to start from scratch. -- kainaw 03:16, 7 April 2009 (UTC)[reply]
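s3sync's exact flags vary from version to version, so as a stand-in sketch, here is the equivalent offsite push using the s3cmd tool (the local path and bucket name are hypothetical):

    # upload new or changed files from the backup tree to the S3 bucket,
    # and remove from the bucket anything that was deleted locally
    s3cmd sync --delete-removed /backups/ s3://example-home-backups/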

I understand that on a price-per-MB and MB-per-unit-volume basis hard disks are the best option, and they're fast and convenient too, but all this is negated by what actually happened: the USB hard drive developed a mechanical failure. This not only made the drive useless; I feel it made the entire idea of using it useless. When the backup medium fails more often than the main medium it's supposed to back up, the backup doesn't actually help. It doesn't hurt either, but now I feel like I've wasted almost 100 €, and the end result is the same as if I'd never bought the drive in the first place. I figure that what I've learned from this, and what I should do in the future, is to use more than one USB hard drive (but only one at a time), and only turn it on for the backup (or eventual restore) process, otherwise keeping it turned off. What are your thoughts on this? JIP | Talk 22:26, 7 April 2009 (UTC)[reply]

As far as I know, a "USB hard drive" consists of an ordinary ATA or SATA HDD plus a USB adapter (which converts USB signals to ATA or SATA). There are two things that can break: the USB adapter, which contains no moving parts and so should in theory be durable, and the HDD, which is exactly the same kind of drive as an internal HDD and so has the same chance of failing (although if the external HDD is moved around a lot and receives a lot of shocks, that might increase the chance of a failure). If the data are important, then multiple HDDs are probably the best solution.
There are also USB-to-ATA/SATA adapters available without an HDD, so you might buy one such device plus separate ATA or SATA drives (this might be cheaper than multiple complete USB drives). Also, if you have access to multiple computers with large internal HDDs connected by a fast network, you could store backups there, though that carries an increased risk of someone else getting at the files, which might be unacceptable, and encryption can be hard to manage.
If these photos do not change much, it might be worth writing them to DVD; it would only have to be done once. -Yyy (talk) 07:06, 8 April 2009 (UTC)[reply]

External USB hard drives are cheap, so you could have two: one connected all the time, and the other backed up to just once a month or at whatever frequency seems most appropriate. All this in addition to whatever you're doing "offsite", though you could of course keep that second external drive somewhere else. If you're worried about malware, then do all of this with a computer that either is rarely connected to the net or uses an OS that's of no great appeal to the writers of malware. If you have a huge number of photos in a complex file structure, you might put them in their own partition (or logical drive); this would allow the whole lot to be backed up from time to time with Clonezilla or similar onto yet another hard drive. But with all those external hard disks around, make sure they're kept where no one could take them: if I had so much stuff available in so compact a form, I'd start to wonder about the risk of its helping in identity theft and the like. -- Hoary (talk) 08:52, 8 April 2009 (UTC)[reply]
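If you do give the photos their own partition, even plain dd can image it when a dedicated tool such as Clonezilla feels like overkill. The device names below are examples (double-check them), and the partition must be unmounted before imaging:

    sudo fdisk -l                      # identify the photo partition, e.g. /dev/sda3
    sudo umount /dev/sda3              # never image a mounted, changing filesystem
    sudo dd if=/dev/sda3 of=/media/backup2/photos.img bs=4M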

MNP protocol and the OSI model

I can't find it. At what level would the MNP protocol be in the OSI model? --62.57.9.8 (talk) 20:50, 6 April 2009 (UTC)[reply]

If you mean the Microcom Networking Protocol, it sits at layer 1. It happens in the modem hardware; the interface to the upper layers is a stream of characters. So although this kind of function is often performed at layer 2 or layer 4, here it is done at a lower level. Graeme Bartlett (talk) 21:23, 6 April 2009 (UTC)[reply]
Thanks, mate. --62.57.197.195 (talk) 10:51, 7 April 2009 (UTC)[reply]

Does a hard disk live longer if it's continuously on?

Ha ha, I had to ask this separately after mentioning it 2 questions ago. Does a consumer-grade hard disk last longer if you leave it on forever, or does part of it wear out prematurely if the hard disk is switched on and off daily? Tempshill (talk) 22:49, 6 April 2009 (UTC)[reply]

In all matters of hard drive reliability, I defer to the Google study: here (PDF). They studied an ungodly number of hard drives over years and years, so this is a highly statistically significant sample. Sadly, they hardly ever turn their drives off, because the drives stay busy 24/7. About the most they say is that "...for drives 3 years and older, higher power cycle counts can increase the absolute failure rate by over 2%", which suggests a small influence on reliability. However, they go on to speculate that the drives that get turned on and off most often at Google belong to the computers that are repaired most frequently, which suggests a self-fulfilling prophecy! So if there is an effect, it's probably very small. SteveBaker (talk) 02:34, 7 April 2009 (UTC)[reply]
There is also the realization that a drive will not fail under any reasonably normal circumstances while it is turned off. So, if the only risk of failure is while the drive is turned on, leaving it off indefinitely will greatly increase its lifespan. Of course, that means you can't use it. So this brings up the consideration of usage. My mother-in-law uses her computer about 2 hours a week. Should she leave it on all the time to increase the lifespan? I can't see why. I use my computer about 20 hours a day (not actively sitting at it - I do a lot of data mining for work). For me, leaving it on all the time is very similar to turning it off for the few hours a day that I'm not using it. Then, just to complicate the matter, there is a very common belief that electronics never fail while they are running, only when you turn them on. So, if you never turn the drive off, you never have to turn it on, and it will therefore never fail. Of course, it is a fallacy that electronics never break while turned on - especially drives. They don't spin at a constant speed all the time: they speed up, slow down, and sometimes stop (depending on the controller and OS). So the motor inside them goes through a lot of power-up and power-down cycles even when the drive is always on. When you aren't using the computer for anything, the drive will usually stop spinning (i.e. sleep mode). When you start using it again, the drive will spring back to life and potentially suffer the "power-up failure" that the "never turn it off" crowd fears. This leads me back to my first point: you need to consider how often you will actually be using the computer. That is a very large factor in the failure rate. -- kainaw 03:10, 7 April 2009 (UTC)[reply]