Wikipedia:Reference desk/Archives/Computing/2009 August 19

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 19

Difference between Torrent and BitTorrent

What's the difference between Torrent and BitTorrent? Do torrents refer to the .torrent files while BitTorrent refers to the file-sharing protocol itself? --BiT (talk) 03:35, 19 August 2009 (UTC)[reply]

I don't think either of these has a dictionary definition yet. What you are saying sounds fairly right, but I don't think it's incorrect to use them synonymously. Vespine (talk) 04:49, 19 August 2009 (UTC)[reply]
Is it maybe possible that the term "torrents" is used for the files themselves while "torrent" and "BitTorrent" may be used synonymously when referring to the protocol? The Wikipedia article doesn't mention this, so I was a bit confused after reading it. --BiT (talk) 05:18, 19 August 2009 (UTC)[reply]
BitTorrent can refer to a specific application (as well as a network application), the protocol, and a company. ···日本穣? · 投稿 · Talk to Nihonjoe 05:29, 19 August 2009 (UTC)[reply]
What about simply torrent? Are they completely synonymous? --BiT (talk) 06:13, 19 August 2009 (UTC)[reply]
A torrent file is used with BitTorrent software, just as a DOC file is used with Microsoft Word, or an HTML file is used with a Web browser. --FOo (talk) 06:38, 19 August 2009 (UTC)[reply]
BitTorrent refers to the company, protocol, and application. Bittorrent is a general term describing the system and community. A torrent is either the individual .torrent file, the files in the swarm that the .torrent file unites, or both concepts together. Mac Davis (talk) 17:22, 21 August 2009 (UTC)[reply]

Basic Java

I was wondering: if you create an object with the following code: classA B; B = new classB(); I know that the methods would be overridden, but I was wondering how much memory is allocated for object B. I thought the object declaration told the compiler to allocate space in the memory for a new object of classA, so does the memory allocated contain the methods in classB or classA or both? Since if both classes have a method randomMethod() and you call it, the classB one will override the classA method. 66.133.202.209 (talk) 03:49, 19 August 2009 (UTC)[reply]

Let's say 'A' has two fields that are 'int's. One 'int' is four bytes, so that makes eight. There is an object header which is another 8 bytes (I'm assuming a 32-bit Sun JDK). So an 'A' object is total 8+8 = 16 bytes. If 'B' adds another 'int' field, a 'B' object will be 16 + 4 = 20 bytes in size. Methods are not copied for each object, they exist once in memory. If a method overrides another method, there will be two methods, even if you allocate a million of each kind of object. The object header I mentioned above contains a pointer to the object's methods, so the runtime will know whether to call the original method or the overridden one (something like a virtual method table). 62.78.198.48 (talk) 05:29, 19 August 2009 (UTC)[reply]
I thought the object declaration told the compiler to allocate space in the memory for a new object of classA No. You have to recognize that classA B; declares a reference (called a pointer in some languages). It does not allocate space for any objects. --Spoon! (talk) 05:58, 19 August 2009 (UTC)[reply]
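To make the two answers above concrete, here is a minimal, self-contained sketch; the names ClassA, ClassB, and randomMethod() are the question's own hypotheticals:

```java
// ClassA defines a method; ClassB overrides it and adds a field.
class ClassA {
    int x, y;                        // instance fields live in each object
    String randomMethod() { return "A"; }
}

class ClassB extends ClassA {
    int z;                           // a ClassB object is slightly larger
    @Override
    String randomMethod() { return "B"; }
}

public class Demo {
    public static void main(String[] args) {
        ClassA b;                    // declares a reference only; no object yet
        b = new ClassB();            // allocates one ClassB object on the heap
        // Dynamic dispatch: the runtime type (ClassB) decides which method
        // body runs, even though the reference type is ClassA. The method
        // code itself exists once per class, not once per object.
        System.out.println(b.randomMethod());   // prints "B"
    }
}
```

The object itself carries only its fields plus a header pointing at its class's method table, which is why a million ClassB objects still share one copy of randomMethod().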

WD My Passport Essential

Hi all; I just bought a Western Digital My Passport Essential external hard drive [1] and I'm wondering if anyone knows what type of hard drive it is? Is it solid state like a USB flash drive, or is it magnetic? Thanks —Preceding unsigned comment added by 99.249.132.110 (talk) 05:12, 19 August 2009 (UTC)[reply]

Based on the MSRP, I would guess it's a standard platter drive, not solid state. A 500GB solid-state drive would likely cost well more than $150 USD. ···日本穣? · 投稿 · Talk to Nihonjoe 05:32, 19 August 2009 (UTC)[reply]
I should note that the specifications on the page you link to do not indicate which it is. ···日本穣? · 投稿 · Talk to Nihonjoe 05:33, 19 August 2009 (UTC)[reply]
Well a hard drive is by definition... a hard drive. If it were anything else they'd call it something else. --antilivedT | C | G 06:20, 19 August 2009 (UTC)[reply]
Yes, you're technically correct, but whether a drive is a platter system or uses solid-state memory, people will still call it a "hard drive". Marketing people aren't known for being all that tech savvy, and consumers are even less so when it comes to nitty-gritty details like that. All they care about is if it works; not so much about how it works. ···日本穣? · 投稿 · Talk to Nihonjoe 06:31, 19 August 2009 (UTC)[reply]
It is a laptop-style 2.5" SATA hard disk (the exact Western Digital model number shouldn't be hard to figure out) in a SATA to USB enclosure. 70.90.174.101 (talk) 09:13, 19 August 2009 (UTC)[reply]

Beating a Keylogger

The scenario is this: Your internet connection is disconnected. The library and any other publicly accessible locations where there are computers are closed. You have only one friend.

Today is the day that the Advent Calendar on Neopets is giving out free Super Magical Faerie Double-Bladed Scimitars and hence, it is imperative that you log onto your account and obtain one of these rarities. Your friend is willing to let you use his computer; however, it has a keylogger installed.

He wants to steal your account. He already has your account name, but not your password.

As you go to type your password, you type a random garble of letters, then using your mouse, you highlight a portion of the letters, then write over them. You do this continually for, say, 5 minutes until you have entered in a total of several hundred letters and numbers. Your password is formed from the leftover letters that you did not erase. Since your friend does not know how long your password is, nor which letters you've erased, nor which portion of the inputted letters/numbers you have erased, will this be an effective strategy to combat a Keylogger?

Thanks. Acceptable (talk) 07:19, 19 August 2009 (UTC)[reply]

Depends if there is a mouse and screen logger as well, or an internet or browser data logger. Graeme Bartlett (talk) 07:26, 19 August 2009 (UTC)[reply]
This is a pretty artificial situation, but I'll take it at face value and suppose that there is only a keylogger and nothing else. In that case you can certainly hide your password. For example, for every possible password character in some canonical order, type that character, copy it to the clipboard, then paste it back in the appropriate location(s) using mouse actions. -- BenRG (talk) 09:17, 19 August 2009 (UTC)[reply]
There is no way to prevent your "friend" from logging every input event (keyboard, mouse, screen) that goes through the computer so he can later replay everything you saw on your screen and everything you did. If Neopets lets you log in with (say) OpenID, you could enroll beforehand with an openid service that uses one-time passwords, or otherwise lets you lock out your account (say by attempting 3 logins with the wrong password) after you have ordered your scimitar. Higher security sites like Paypal will let you enroll a hardware authentication token (keychain gizmo with an LCD display showing a number that changes every 30 seconds, that you type in instead of a re-usable password) but these aren't widely deployed. Basically your best bet for this scenario is to carry your own computer that you've somehow assured yourself is free of loggers, and also be sure that Neopets uses an encrypted (TLS) socket to receive your password, so that your friend can't sniff it from the network connection. 70.90.174.101 (talk) 09:24, 19 August 2009 (UTC)[reply]
If your friend is not around, boot into safe mode, disable the keylogger, reboot. I know, boring. --98.217.14.211 (talk) 01:39, 20 August 2009 (UTC)[reply]
What you need is to find a website with a high turnover of text...the Wikipedia "Recent changes" page for example. Open one browser window there and the other to the NeoPets site. Use the mouse/keyboard to cut/paste individual characters from the Wikipedia page into the NeoPet's password area to spell out your password. Neither keylogger nor mouse logger can store what was on the web page at the time you did your cut/paste operation - so replaying your mouse moves and keystrokes won't work because the Wikipedia page now has different text on it. I suppose you should flush the browser cache at some point...just to be really sure. SteveBaker (talk) 02:47, 20 August 2009 (UTC)[reply]
Start > Programs > Accessories > System Tools > Character map? —Preceding unsigned comment added by 148.134.37.3 (talk) 15:03, 20 August 2009 (UTC)[reply]

Float

I need help with the float command. In a project external to wiki but based on wiki software, I need to float a table with other tables around it, but it doesn't work. Is there a solution? Thank you very much.--F.noceti (talk) 12:30, 19 August 2009 (UTC)[reply]

"float" in this context is the cascading style sheets float property. That's a powerful, but often difficult to use feature (when you have more than very basic requirements). I'd recommend you work through a css-float tutorial like this one. It's not specific to Mediawiki. -- Finlay McWalterTalk 12:33, 19 August 2009 (UTC)[reply]
You might also read Wikipedia:Reference desk/How to ask a software question to help us help you better. --Sean 15:26, 19 August 2009 (UTC)[reply]
Thank you very much. I write as a single person (with my real name) but I work with a team, and I think that we've found the right way to our target. Thank you very much for your help! --F.noceti (talk) 20:14, 19 August 2009 (UTC)[reply]

Light spreadsheet tool?

Is there some kind of 'light' spreadsheet tool, focusing solely on manipulating data structurally? I mean, I only use excel and openoffice for some really basic purposes: to clean up spreadsheets, convert between formats, and visually do text-to-columns conversion. So it'd be really cool if I have something fast, and memory sparing, to do just that.--Fangz (talk) 13:29, 19 August 2009 (UTC)[reply]

Visicalc was very light and can be downloaded for free. --80.176.225.249 (talk) 20:54, 19 August 2009 (UTC)[reply]
Perhaps something in the List of spreadsheet software would be what you are looking for. 89.241.32.157 (talk) 21:37, 20 August 2009 (UTC)[reply]
I would personally recommend Google Docs: it's fast and simple and won't consume too many resources.

custom url

In Firefox, I need a way to automatically change a specific url to another one when a page loads. So for example instead of wikipedia loading http://en.wikipedia.org/skins-1.5/chick/main.css it would load http://mysite.com/custom.css when I view the page. Any ideas how to do this? —Preceding unsigned comment added by 82.43.89.136 (talk) 13:47, 19 August 2009 (UTC)[reply]

I imagine this is the sort of thing a greasemonkey script could do, but it would work only for you. I guess there is a range of scripting methods to do this from the server. --Tagishsimon (talk) 13:57, 19 August 2009 (UTC)[reply]
I only need this for me. The problem is the website I wish to view is having trouble and its CSS files aren't loading. I have an offline copy of the CSS file; I just need a way to use it when I view the page. —Preceding unsigned comment added by 82.43.89.136 (talk) 14:03, 19 August 2009 (UTC)[reply]
Then greasemonkey is your friend. I've never used it so cannot offer advice on operation, but it sounds as though it'll be at the very simple end of GM's capabilities - it's just an on-the-fly string replacement in an HTML file. --Tagishsimon (talk) 14:05, 19 August 2009 (UTC)[reply]
Thanks, but I don't know how to put my .css file into greasemonkey. —Preceding unsigned comment added by 82.43.89.136 (talk) 14:27, 19 August 2009 (UTC)[reply]
I'd have thought that the CSS would stay on your drive, and you'd have a greasemonkey script amend the CSS references in the incoming HTML files to point at your CSS. Here's a manual for GM which might help to orient your thinking. --Tagishsimon (talk) 14:31, 19 August 2009 (UTC)[reply]
Here's a GM script that does what you want:
// ==UserScript==
// @name           ChangeCSS
// @namespace      http://greasemonkey-question.com/
// @include        http://the-site-you-want-to-modify/*
// ==/UserScript==

(function() {

    // The format is:
    //      "bad-URL"
    //          : "good-URL"
    var url_map = {
        "http://en.wikipedia.org/skins-1.5/chick/main.css"
            : "http://mysite.com/custom.css",

        "http://en.wikipedia.org/skins-1.5/chick/some-other.css"
            : "http://mysite.com/my-other-thing.css"
    };

    var links = document.getElementsByTagName('link');
    for (var i = 0; i < links.length; ++i)
    {
        var link = links[i];
        var new_url = url_map[link.href];
        if (new_url)
            link.href = new_url;
    }
})();
--Sean 16:03, 19 August 2009 (UTC)[reply]
You are my hero Sean! You always come along and help me with this sort of stuff! Once again, thank you :D —Preceding unsigned comment added by 82.43.89.136 (talk) 16:16, 19 August 2009 (UTC)[reply]
Thanks for the nice compliment, but I only do it because I'm supposed to be working. :) --Sean 16:37, 19 August 2009 (UTC)[reply]

Rotating PDF pages

Hi. Is there a free tool (preferably graphical) for Windows that can rotate PDFs? It only needs to do 90 degree rotations, typical use would be for portrait/landscape orientation of single page graphs. You would think this would be the most obvious task in the world, but Googling for "rotate pdf" brings up a crapload of rubbish and some expensive PDF editing software. On a second point, why don't the standard PDF readers (Adobe, Foxit etc.) support page rotation AND SAVING? The documents always open up in their original orientation even if you choose to "save as.../save a copy..." after rotating it. Regards, Zunaid 13:58, 19 August 2009 (UTC)[reply]

pdftk can rotate 90°... it's command-line but pretty easy to use.
As for why they don't support it... Adobe doesn't because they want you to buy professional. Foxit, I don't know. They ought to. I'm still waiting for the open source people to decide that a lightweight, free version of Adobe Professional is worth their while—something that would let you easily rotate pages, OCR them, move pages around in files, etc... the code to do all this is out there (e.g. pdftk), but nobody's put it into a useful GUI, as far as I know. --98.217.14.211 (talk) 14:31, 19 August 2009 (UTC)[reply]

Thanks! Got it and it's working perfectly, after a bit of tweaking and hassle. For anyone else who hates the command line and JUST wants to rotate pages (clockwise) from the comfort of Windows Explorer, follow these instructions:

  1. Download pdftk.
  2. Extract pdftk.exe to your favourite directory (let's call it "C:\Windows")
  3. Create pdftk.bat in the same folder as above with the following contents:
@echo off
pdftk.exe %1 cat 1-endR output poknmihytfx.pdf
move /Y poknmihytfx.pdf %1

In the above code, the poknmihytfx is simply a place-holder name since pdftk doesn't allow you to overwrite files (in the next line you move poknmihytfx over the original file anyway). The %1 refers to the input file (the PDF you want to rotate), cat is the pdftk command for catenate, 1-end means "select page 1 to the last page" and R means "rotate the page RIGHT i.e. clockwise".

Now comes the tricky bit:

  1. In Windows Explorer go to Tools --> Folder Options. Then click on the "File Types" tab.
  2. Find your PDF file type in the list, click on "Advanced", then click on "New".
  3. For "Action:" give it a descriptive name like "Rotate Clockwise".
  4. Under "Application used to perform action:" enter the following: C:\Windows\pdftk.bat %1, where C:\Windows refers to the folder you initially created pdftk.bat in.
  5. Click on OK all the way out. You should now be able to right-click on any PDF file and to rotate it clockwise from the pop-up menu.
  6. The above WILL NOT WORK if you are using Adobe Reader 9 as your default PDF viewer; it screws up the PDF file associations. I managed to get it working by deleting lots of PDF stuff out of the registry, but I see I've lost the PDF preview in the explorer window as a result. YMMV.

I'm leaving this open for now, would still like to know if there's a graphical tool out there. Zunaid 18:20, 19 August 2009 (UTC)[reply]

Why not just copy and paste everything (from the default pdf viewer) into another tool? Or capture the screen and paste into any number of graphical editors? Sandman30s (talk) 21:23, 20 August 2009 (UTC)[reply]
Because those approaches only work either on graphics-only PDFs or really on single-page PDFs. To try and rotate every page of a multi-page PDF in, say, Photoshop, is ridiculous, as it can only handle one page at a time, and then you still have to use some other tool to merge them together again. Editing PDFs in graphics editors is *NOT* really editing the PDF -- it's rasterizing a page of a PDF as a bitmap image, and then maybe converting that to a PDF. It is not a substitute for a program that can edit PDFs quickly and natively. (I find that the people who suggest it don't really deal with PDF manipulation on a regular basis... it is very, very impractical, which is clear if you try to do it more than once.) --98.217.14.211 (talk) 20:25, 23 August 2009 (UTC)[reply]

My forward button is no longer blue

The details of my computer are in a previous question I asked. This [2] is where the question was answered but the link to the question doesn't work now.

I clicked the back button many times since I've been doing a lot. The forward button just turned gray and wouldn't let me go back to where I was, so I had to just keep going back.Vchimpanzee · talk · contributions · 16:46, 19 August 2009 (UTC)[reply]

The "forward" button will turn gray and cease to function if you click anything on a web page other than the "back" and "forward" browser buttons. Tempshill (talk) 17:10, 19 August 2009 (UTC)[reply]
I know, but I'm pretty sure I didn't. Do you know how to find the previous, related question?Vchimpanzee · talk · contributions · 17:35, 19 August 2009 (UTC)[reply]
The two best ways are to search for "Vchimpanzee" using the search box at the top of this page, and, alternatively, to use Google and search for vchimpanzee reference desk site:en.wikipedia.org and see if that comes up with anything better (I see 104 results). I think your previous problem had been that the Back button had turned gray, which is a different problem (it occurs when a new page or tab is created, for example). Tempshill (talk) 18:06, 19 August 2009 (UTC)[reply]
There was a way to search for my questions on Wikipedia, but I made a mistake and ended up with a search engine that couldn't find the address starting with http, and then when I tried again, all it would give me was http://http://[url].
But the back button turned gray even when there was supposedly something to go back to. It's the same thing here and I'm trying to understand what happened.Vchimpanzee · talk · contributions · 19:18, 19 August 2009 (UTC)[reply]
It worked. Here is the information about my computer:[3]
Sorry, does that mean that the problem is now fixed? Tempshill (talk) 19:24, 19 August 2009 (UTC)[reply]
And here [4] is my older question on this subject.
No, I'm trying to figure out why what happened this morning happened.Vchimpanzee · talk · contributions · 19:25, 19 August 2009 (UTC)[reply]
I have to assume you clicked something within the web page (or possibly hit a key on the keyboard, which can count as a mouse click to the web designer). When working on Wikipedia or elsewhere, I try not to rely on the Forward button remaining viable; it is a fragile way to work. If you need specific pages to refer to, did you know you can create a new browser window by hitting Ctrl-N? Then you can copy and paste the URL from the first window to the second window, switch between the two windows as needed, and not rely on the Back and Forward browser buttons as much. Tempshill (talk) 21:05, 19 August 2009 (UTC)[reply]
Assuming you are using Internet Explorer then you could press Ctrl-H which will bring up the list of web pages viewed. Choose the "View by order visited today" option. Find the page you want and click on it. 89.241.32.157 (talk) 19:19, 20 August 2009 (UTC)[reply]

HP Notebook running Vista won't shutdown

My new HP Pavilion Notebook PC running Windows Vista Home Premium won't shutdown when on AC power. It simply restarts instead. I have found that it will shutdown normally only when on battery power. HP Tech Support seemed not to be aware of this problem, thought my OS was corrupted and had me do a complete Windows Restore. The symptom remained after the restore. Is there a fix for this? --Thomprod (talk) 16:59, 19 August 2009 (UTC)[reply]

How are you shutting down? Are you hitting the physical power button on the laptop, selecting the turn off button on the start menu, or are you specifically selecting "shut down" from the submenu by the turn off button? —Akrabbimtalk 18:20, 19 August 2009 (UTC)[reply]
I have tried the two latter options, both of which result in a restart rather than a shutdown. --Thomprod (talk) 18:24, 19 August 2009 (UTC)[reply]
Is this a brand new laptop, or has the problem just recently cropped up? System restore would only work if the restore point was set up before the problem emerged. A fresh reinstall may be necessary if that is the case. If that doesn't work, I would guess that the problem is in the hardware or bios or something, where a 'shut down' command is being interpreted as a 'restart' command. —Akrabbimtalk 18:31, 19 August 2009 (UTC)[reply]
This is a brand new laptop. --Thomprod (talk) 01:06, 20 August 2009 (UTC)[reply]
Simple Answer: Unplug the notebook before shutting down.
That's my current workaround. --Thomprod (talk) 01:06, 20 August 2009 (UTC)[reply]
Long and Complicated Answer: I have had this happen before on XP. Most likely, an HP-provided driver for some AC power management is causing Vista to crash on shutdown. If you had XP, I would tell you to go to Control Panel > System > Advanced and click on Settings under "Startup and Recovery", then uncheck the "Automatically Restart" box. However, I have no experience with Vista. Try googling "automatic restart on shutdown in Vista."  Buffered Input Output 22:27, 19 August 2009 (UTC)[reply]

Dell Startup Problem

  Resolved

I have a Dell Dimension 8250 desktop computer (it's ~5 years old), and when I turn it on it displays a black screen with a command prompt cursor in the upper lefthand corner, and that's all it can do anymore. No Dell or Windows XP startup screens. What's causing the problem, and what is the solution? All the diagnostic lights are green ("no problems"), and the manual doesn't address this problem. Tried putting in a Windows XP reinstall disk but the computer doesn't run it. Bonus question: what's a fair price to sell this computer if I can't get it to work? How about if I can? Thank you very much for any help you can give. ~EdGl 19:08, 19 August 2009 (UTC)[reply]

Try unplugging any USB devices you have and see if that makes any difference. Possibly it's a bad USB device, but I believe the Dell code for that is Yellow/Green/Yellow/Green which you're not seeing. As for selling it... no offence intended, but a 5 year old, non-working computer... I'd be surprised if you could even give it away :( ZX81 talk 19:55, 19 August 2009 (UTC)[reply]
Finally got Windows XP to reinstall, woohoo! Anyway, as for resale, I assumed at least some parts/accessories (e.g. monitor) would be worth something... ~EdGl 22:04, 19 August 2009 (UTC)[reply]
You could have tried selling it on Ebay (with an honest description of the problem) - I spent some time earlier this year trying to find a replacement screen for my 4.5-year-old Dell Inspiron 510m that way. Going price for a dead laptop with a working screen was about £40. 195.128.251.195 (talk) 22:22, 19 August 2009 (UTC)[reply]

C/C++: optimal use of fread() and fwrite()

I'm writing a program that splits huge files into suitable chunks, and a corresponding program that reassembles these. Doubtlessly, others have done this before, but there are a couple of tweaks that I want to include, and I think I'll spend less time writing my own than finding a program that does exactly what I would like it to do.

I first tried the naive approach, fputc and fgetc, thinking the compiler would take care of the buffering. The files are pretty large (40Gb and upwards). I was absolutely amazed at how long this took - ten to twenty times as long as a simple file copy to the same USB device. The program was compiled in release mode with Microsoft Visual C++ 6.0.

So obviously, fread and fwrite are the way to go. I want the program to be as fast as absolutely possible, and did some experimentation with fread and fwrite, with encouraging results.

fread and fwrite have the syntax:

size_t fread( void *buffer, size_t size, size_t count, FILE *stream );
size_t fwrite( const void *buffer, size_t size, size_t count, FILE *stream );

I will allocate the buffer using an std::vector<char>.

I am writing here to ask for advice on what values to select for size and count in each read and write operation, in order to achieve optimal results.

  • Does it matter whether I ask for one block of 1048576 bytes, 1048576 blocks of one byte or 1024 blocks of 1024 bytes, and if so, which choice is preferable?
  • I assume it's a good idea to avoid having the data being read swapped out to the hard disk before it is written to the device, so I suppose count * size should be smaller than the amount of available RAM. Correct?
  • Is there a simple way (with the MSC++ 6.0 compiler) to determine the amount of free RAM?
  • What would be sensible choices for count and size on a Windows XP PC with 512Mb RAM, 1Gb RAM and 2Gb RAM (assuming no other applications are running at the same time)?

Thanks, --NorwegianBlue talk 19:44, 19 August 2009 (UTC)[reply]

You might consider reading mmap(). Our article covers mmap and its Windows alternative, MapViewOfFile(). These methods will allow the compiler (rather the operating system's runtime library) to do the buffering for you. When you use low-level standard IO calls like fputc(), you are specifically requesting unbuffered reads and writes - so the compiler should not optimize those with a buffering scheme. In Java, the New IO (java.nio.*) package, (documentation) and its Channels methodology, allow you to do the same - with the VM overseeing the demand paging and swapping of buffers into and out of memory. I have yet to find a more efficient method for reading gigabyte- and terabyte- size files than Java NIO. (I attribute this to intelligent pre-buffering based on the VM's reasonably accurate assessment of when and where you will leap to in the file next). Nimur (talk) 20:17, 19 August 2009 (UTC)[reply]
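A minimal sketch of the memory-mapped approach Nimur describes, on the Java NIO side (the file name huge.bin and the 64 MB window are illustrative placeholders):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile("huge.bin", "r");
             FileChannel ch = f.getChannel()) {
            // Map up to 64 MB of the file into memory. The OS pages data
            // in on demand, so no explicit read loop or buffer management
            // is needed; for files larger than the window, remap as you go.
            long len = Math.min(ch.size(), 64L * 1024 * 1024);
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, len);
            long sum = 0;
            while (buf.hasRemaining()) sum += buf.get() & 0xFF;
            System.out.println("bytes mapped: " + len + ", checksum: " + sum);
        }
    }
}
```

FileChannel.map is the NIO counterpart of mmap()/MapViewOfFile(): the buffering decisions move from the application into the operating system's pager.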
[f]getc and [f]putc are buffered. I assume the speed problem comes from the constant checks to see whether the buffer is full/empty, combined with the inherent inefficiency of a byte-by-byte memcpy(). -- BenRG (talk) 20:58, 19 August 2009 (UTC)[reply]
My mistake - I was confusing the memory copy, which is single-byte-at-a-time. It seems BenRG is correct. Nimur (talk) 21:07, 19 August 2009 (UTC)[reply]
Hm - I can't find a good standard library call to determine the amount of available physical memory in C (and I don't even remember ever learning it!) In Java, you can call Runtime.getRuntime().freeMemory() - with the caveats that (a) this is an estimate, and (b) this is only the maximum memory allocatable to the JVM (not total free system memory). In C, the convention I have always used is to malloc() and check for NULL; if failed, wait-and-retry or exit. I don't know if it's good design methodology to try to allocate exactly as much memory as is reported available - so be sure to use some "margin of error". Nimur (talk) 20:30, 19 August 2009 (UTC)[reply]
In C#, or C++ .NET, you can use a PerformanceCounter - MSDN documentation and example in C#. Nimur (talk) 20:39, 19 August 2009 (UTC)[reply]
Ah - here's what you want - Win32 API GlobalMemoryStatusEx(). This is the most portable version (for Windows computers) and works at the lowest level of abstraction. Nimur (talk) 20:44, 19 August 2009 (UTC)[reply]
It seems like the standard way to get the current memory status on linux is to check the values in /proc/meminfo (which can be accessed like a file, although it is not a regular file). I'd be curious if some more expert linux systems guys have better insight - surely there's a system call? Nimur (talk) 20:55, 19 August 2009 (UTC)[reply]
You shouldn't be allocating huge buffers. The buffer just needs to be large enough that the system call overhead (some constant * file size / buffer size) is negligible. 64K should be more than large enough for that. It doesn't matter how you split it between count and size—the C library just multiplies them together. What's really important when copying between different devices is that the reading and writing happen in parallel. If they alternate it will cut your speed in half. On the read side, you want the OS to do readahead; on the write side, you want write-back caching, not write-through. There's probably nothing you can do about the write-side caching. Windows uses write-through caching on USB devices by default, because people have a tendency to yank them out as soon as Explorer says their files are copied. I seem to recall that reading large chunks can cause NT to disable readahead, so this is another reason to use small chunks (but maybe I'm thinking of Win9x). Annoyingly, there is a reason you might want to write large chunks: it will decrease fragmentation because NT will search for a large contiguous region of free space when you use large writes. You can avoid this problem by preallocating the file, but you only want to do that on NTFS: on FAT I think it will cause the whole file to be written twice (the first time with zeroes). I would stick to small chunks.
If you want to get fancy, the fastest way to do disk I/O in NT is overlapped I/O. The idea is that instead of the read/write function returning when it's done, it returns immediately and then later you get a completion callback. Between your request and the callback the OS owns your buffer and you can't touch it. The advantage is that when you have several requests pending the OS can schedule the I/O better because it knows what's coming; it doesn't have to guess. To use overlapped I/O, allocate two or three buffers (maybe a megabyte each?) and start read requests on them, then go into a wait state (using SleepEx). When you get a read completion callback you trigger the corresponding write; when you get a write completion callback you reassign the buffer to another part of the file and trigger a read. Everything is single-threaded, so you don't have to worry about synchronization. It's actually quite easy and it will perform optimally regardless of FAT/NTFS and caching and so on. The main problem is that it's NT-specific. -- BenRG (talk) 20:58, 19 August 2009 (UTC)[reply]
Java NIO also implements a platform-independent asynchronous IO: [5]. Nimur (talk) 21:03, 19 August 2009 (UTC)[reply]
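For illustration, the completion-callback style of overlapped I/O looks like this in Java's later AsynchronousFileChannel API (Java 7+, so newer than this thread; input.bin and the 1 MB buffer are placeholders):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CountDownLatch;

public class AsyncReadDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);
        AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                Paths.get("input.bin"), StandardOpenOption.READ);
        ByteBuffer buf = ByteBuffer.allocate(1024 * 1024);
        // The read call returns immediately; the OS owns the buffer until
        // the completion handler fires, just as with overlapped I/O on NT.
        ch.read(buf, 0, null, new CompletionHandler<Integer, Void>() {
            public void completed(Integer bytesRead, Void att) {
                System.out.println("read " + bytesRead + " bytes");
                done.countDown();
            }
            public void failed(Throwable exc, Void att) {
                exc.printStackTrace();
                done.countDown();
            }
        });
        done.await();   // don't exit before the callback runs
        ch.close();
    }
}
```

With several such reads in flight on different buffers, the OS can schedule them by disc position rather than request order.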
If I was committed to getting the best performance out of vanilla reads and writes, I would use a binary search to get the biggest chunk of memory malloc() would give me, and then do another binary search on the best buffer size (benchmark each). The second number is probably pretty stable on a given platform, so I'd cache the result in a file somewhere. --Sean 01:47, 20 August 2009 (UTC)[reply]
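The buffer-size benchmarking half of that idea might look like the following, sketched in Java rather than C for brevity, and as a simple power-of-two sweep rather than a binary search (file names are placeholders):

```java
import java.io.*;

public class BufferBench {
    // Copy src to dst using the given buffer size; returns elapsed nanoseconds.
    static long timedCopy(File src, File dst, int bufSize) throws IOException {
        long t0 = System.nanoTime();
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            byte[] buf = new byte[bufSize];
            int n;
            while ((n = in.read(buf)) > 0) out.write(buf, 0, n);
        }
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) throws IOException {
        File src = new File("test.bin"), dst = new File("copy.bin");
        // Benchmark a range of buffer sizes; the best one is fairly stable
        // on a given platform, so the result could be cached in a file.
        for (int size = 4 * 1024; size <= 4 * 1024 * 1024; size *= 4) {
            System.out.printf("%8d bytes: %.1f ms%n",
                    size, timedCopy(src, dst, size) / 1e6);
        }
    }
}
```

In practice, caching effects mean each size should be timed more than once, and on a cold cache, before trusting the numbers.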
Using large blocks and multiple asynchronous I/Os allows the system to do good disc scheduling. This means it may read or write the blocks in order of where they are on the disc rather than logically in the file; this cuts down seek time, which is a major component of file copy times. Dmcq (talk) 08:54, 20 August 2009 (UTC)[reply]
Thanks everyone for lots of good input! I think the OS-dependent simultaneous read-and-write calls will require too much work for this job (I will need to port this to Linux afterwards), and be risky, because removable media are involved. My reason for wanting to select as large a buffer as possible was exactly what Dmcq pointed out, but the buffer still ought to be smaller than the available RAM, no? I didn't understand Sean's suggestion for using malloc to estimate free RAM; I thought malloc used virtual memory, not necessarily physical RAM. --NorwegianBlue talk 12:11, 20 August 2009 (UTC)[reply]
Oops, you're right of course; too much time in kernel land.  :( --Sean 14:56, 20 August 2009 (UTC)[reply]
Asynchronous I/O is not risky. If the output device is configured for write-through caching then the writes won't complete until they are done. It's no different from an ordinary synchronous write. In fact, when you do a synchronous write on NT it just does an asynchronous write and then waits for it to complete before returning. But stdio will work fine if you want to stick to plain C. You may as well try benchmarking different buffer sizes as Sean suggested, but large buffers (more than a few megabytes) are a bad idea; they won't help performance and probably will hurt it. -- BenRG (talk) 19:04, 20 August 2009 (UTC)[reply]
Boost has a cross-platform asynchronous I/O library: boost asio. --Sean 14:58, 20 August 2009 (UTC)[reply]
Thanks again. Copying 40GB to a 64GB memory stick using a buffer size of 512MB took approximately 80 minutes (including calculation of MD5 sums of each chunk). The memory stick was FAT32, and empty, but the file I copied from (a file which holds a TrueCrypt volume, unmounted of course) was rather badly fragmented.
Regarding fragmentation, the defragger that comes with WinXP was unable to do anything about it. I defragmented the disk before allocating the 40GB file, but the program was happy as long as the individual files were contiguous. It's not like in the olden days, when the Norton Utilities defragmentation tool maximized contiguous free space. When I tried to defragment the disk after allocating the volume, it just gave up, even though 25% of the disk was still unused. I think I'll move the 40GB file to external media, fill the disk up with moderately large dummy files (4-8GB?), and try to defrag again.
Anyway, I strongly suspect that the limiting factor is the write speed of the memory stick. According to this review, writing 1.8GB took 8.45 min, which should correspond to 40GB taking 188 min, so I beat the test in the review by a factor of 2.3. Therefore, there's probably little to be gained in attempting further improvements. I liked the boost thing, though, I think I'll have a look into it just for the fun of it. --NorwegianBlue talk 00:51, 21 August 2009 (UTC)[reply]
Why are you using a 512MB buffer? I told you in both my responses that large buffers would hurt performance, and you ignored me. A buffer that large will disable all of the OS's caching mechanisms, leaving one device or the other idle virtually all of the time. If your faster device is k times faster than your slower device then performance with a huge buffer will be about k/(k+1) of optimum. For equal read and write speeds that's a 50% reduction. The Fudzilla review is obviously wrong about the speed as your result demonstrates. Searching the web I find quoted figures ranging from 8 to 17 MB/sec. You're getting 8.5 MB/sec.
There's no reason to use XP's bundled defragmenter. There are free alternatives that are better, like JkDefrag. Use one of them instead of trying to coerce XP's defragmenter into doing what you want. -- BenRG (talk) 10:30, 21 August 2009 (UTC)[reply]
Boost makes heavy use of generic programming, so gird your loins for the compilation error messages, which are sadly not going away anytime soon. --Sean 12:11, 21 August 2009 (UTC)[reply]
@BenRG: I didn't ignore you. My first test was with a 512MB buffer. You and Dmcq gave conflicting advice, and Dmcq's advice was closer to my prejudices than yours was, so I tried that first. An increase from 8.5 to 17 MB/s would be most welcome. I am planning to test the performance with both a smaller and a larger buffer (it turned out that the source PC had a lot more RAM than I thought). I'll be back with more results! And thanks a lot for making me aware of JkDefrag!
@Sean: Heh,heh, I know. I've used some of the boost libraries in previous projects. Painless on linux, except for the error messages. A bit more problematic on Windows, as I stubbornly insist on using an ancient compiler. --NorwegianBlue talk 13:45, 21 August 2009 (UTC)[reply]
Here's why you should be differently prejudiced. When you call fwrite it has to copy all of your data somewhere else before returning since you might modify the buffer as soon as it returns. It either has to physically store it on the device or copy it to an OS-owned buffer. But no OS is going to allocate 512MB of kernel memory to store your data. At most it will allocate a few megabytes, so most of your data will have to be written during the fwrite call. When calling fread you have the same problem in reverse. It has to fill the buffer before returning, either from cache or from the device. You only read the data once, so the only way anything will be in cache is from speculative readahead. But no OS is going to read ahead 512MB. At most it'll read ahead maybe 256K, so almost all of the physical reading will have to happen during the fread call. Since there will never be an fwrite and an fread in progress at the same time in a single-threaded application, one or the other device is sitting idle for the vast majority of the overall runtime. On the other hand, if you use a 64K buffer then writes will copy the data to kernel memory and return immediately and reads will copy the data from the readahead cache and return immediately. The actual reading and writing will happen in the background, simultaneously on both devices. There are three reasons you might want to write larger chunks: to reduce seek times (not an issue when copying between devices), to reduce fragmentation (not an issue on flash drives) and to avoid redundant filesystem metadata updates (possibly an issue on flash drives). If you use overlapped I/O you're no longer relying on kernel buffering, so you can use larger buffers (where larger means, I dunno, 16MB) and get the best of both worlds. 512MB is insane. There's no need to test it because there's no situation in heaven or earth where it would be a sensible choice. Try sizes between 64K and 1MB and go with whatever's fastest. 
-- BenRG (talk) 14:57, 21 August 2009 (UTC)[reply]
Thanks a million for spelling it out in crystal clear detail. I understand, and am convinced. I'll modify my program, and do the tweaking in the 16-256KB range instead of the 128MB-2048MB range. I'll do some benchmarking. I hope to get the time in the weekend, and hope to be able to be back with the results before this thread is archived. --NorwegianBlue talk 16:55, 21 August 2009 (UTC)[reply]
I realise you have your program working now, but the standard utility for cutting a large file up into convenient-sized chunks is split (Unix). There will surely be versions available for Windows.-gadfium 23:07, 21 August 2009 (UTC)[reply]
Thanks. As I stated at the beginning of the thread, the reason I do this at all is that there are a couple of tweaks that I would like to include (blockwise MD5 sums being the most important). Moreover, I don't want to type a bunch of parameters, just
                mysplit SOURCEFILE DESTFILE_NO_EXTENSION
                and
                myjoin SOURCEFILE_NO_EXTENSION DESTFILE
I know, of course, that avoiding command line parameters can be solved by writing a script/bat-file. I tried the MD5 checker I had available on the original 40GB file. It was insanely slow (I don't know exactly how slow, as I didn't have the patience to wait for it to finish, but we're talking MANY hours). I'm on a different computer now, so I can't check exactly which MD5 checker it was. Probably the MD5 checker will behave better on the smaller chunks than on the 40GB file. I have cygwin (which includes split) installed on my home computer, but not on the source computer for this project. I'll include split when I do the benchmarking. --NorwegianBlue talk 10:27, 22 August 2009 (UTC)[reply]
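The blockwise-checksum idea could look something like the sketch below. FNV-1a is used here only as a stand-in hash, since a full MD5 implementation is long; a real MD5 routine with the same interface would slot straight in. The function and file names are hypothetical:

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>

// FNV-1a, standing in for MD5: hashes a buffer to a 64-bit value.
uint64_t fnv1a(const unsigned char* data, size_t len)
{
    uint64_t h = 14695981039346656037ULL;   // FNV offset basis
    for (size_t i = 0; i < len; ++i)
    {
        h ^= data[i];
        h *= 1099511628211ULL;              // FNV prime
    }
    return h;
}

// Read a file in fixed-size chunks and return one checksum per chunk,
// so each chunk can be verified individually after copying.
std::vector<uint64_t> chunk_checksums(const char* fileName, size_t chunkSize)
{
    std::vector<uint64_t> sums;
    FILE* f = std::fopen(fileName, "rb");
    if (!f) return sums;
    std::vector<unsigned char> buf(chunkSize);
    size_t n;
    while ((n = std::fread(&buf[0], 1, chunkSize, f)) > 0)
    {
        sums.push_back(fnv1a(&buf[0], n));  // last chunk may be short
    }
    std::fclose(f);
    return sums;
}
```

Computing the per-chunk sums inside the copy loop itself, on the buffer already in memory, would avoid a second pass over the data entirely.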
You can set your I/O to work unbuffered and asynchronous and then you can use multiple large buffers. This is almost equivalent to doing memory map operations. Dmcq (talk) 11:26, 22 August 2009 (UTC)[reply]

Benchmarking

edit

I checked out the source code for Split (Unix) here. It uses a buffer size of MAXBSIZE for each write. Googling has given values for MAXBSIZE of 32K, 56K and 64K, in good agreement with BenRG's recommendations. The most time-critical part of this project is the transfer of data from the hard disk to the USB stick. Since the write speed of the USB stick is vastly slower than the read speed of the hard disk, I reckoned it would make sense to determine the ideal block size for achieving the fastest possible write speed to the USB stick, so I wrote a program for that purpose. I did this benchmarking on a rather old (2004-2005) AMD64 XP PC with 1GB of RAM. I killed every killable process in the task manager, stopped AVG, but kept the Visual C++ IDE alive while the program was running in a DOS box. The program wrote a 3GB file from a RAM buffer to the USB stick. In case anyone is interested, here is the program:

Click "show" (at the right side of this box) to read the program!
Please note that the program has been slightly "tidied up" before being posted, and that errors may have crept in!
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <vector>
#include <iostream>

typedef __int64 LongLong;
typedef unsigned char Byte;
typedef FILE* FILE_POINTER;
const FILE_POINTER NULL_FILE = 0;

const long SIXTY_FOUR_KILOBYTES              = 65536L;
const long TWOHUNDRED_FIFTY_SIX_KILOBYTES    = 262144L;
const long ONE_MEGABYTE                      = 1048576L;
const long TWO_MEGABYTES                     = 2097152L;
const long FOUR_MEGABYTES                    = 4194304L;
const long SIXTEEN_MEGABYTES                 = 16777216L;
const long SIXTY_FOUR_MEGABYTES              = 67108864L;
const long ONENUNDRED_TWENTY_EIGHT_MEGABYTES = 134217728L;
const long TWOHUNDRED_FIFTY_SIX_MEGABYTES    = 268435456L;
const long HALF_A_GIGABYTE                   = 536870912L;
const long ONE_GIGABYTE                      = 1073741824L;

// ---------------------------------------------------------------------------------

//#define BUFFER_SIZE         SIXTY_FOUR_KILOBYTES
//#define NUM_ITERATIONS      49152L

//#define BUFFER_SIZE         TWOHUNDRED_FIFTY_SIX_KILOBYTES
//#define NUM_ITERATIONS      12288 

//#define BUFFER_SIZE         ONE_MEGABYTE
//#define NUM_ITERATIONS      3072

//#define BUFFER_SIZE         TWO_MEGABYTES
//#define NUM_ITERATIONS      1536L

//#define BUFFER_SIZE         FOUR_MEGABYTES
//#define NUM_ITERATIONS      768L

//#define BUFFER_SIZE         SIXTEEN_MEGABYTES
//#define NUM_ITERATIONS      192L

//#define BUFFER_SIZE         SIXTY_FOUR_MEGABYTES
//#define NUM_ITERATIONS      48L

//#define BUFFER_SIZE         ONENUNDRED_TWENTY_EIGHT_MEGABYTES
//#define NUM_ITERATIONS      24L

//#define BUFFER_SIZE         TWOHUNDRED_FIFTY_SIX_MEGABYTES
//#define NUM_ITERATIONS      12L

//#define BUFFER_SIZE         HALF_A_GIGABYTE
//#define NUM_ITERATIONS      6L

#define BUFFER_SIZE         ONE_GIGABYTE
#define NUM_ITERATIONS      3L


// =================================================================================
//      Helper class: Self-closing file
// =================================================================================

class File
{
    FILE*       m_filePointer;
    std::string m_fileName;

public:
    File();
    ~File();
    void open_for_reading(const char* name);
    void open_for_writing(const char* name);
    int fileHandle();
    FILE* filePointer();
    const std::string& fileName();

// Disallowed:
    File(const File&); // Not implemented
    File& operator=(const File&); // Not implemented
};


File::File()
:   m_filePointer(NULL_FILE),
    m_fileName("<File has not been opened>")
{
}


File::~File()
{
    if (m_filePointer != NULL_FILE)
    {
        fclose(m_filePointer);
    }
};


void File::open_for_reading(const char* name)
{
    m_fileName = name;
    m_filePointer = fopen(name, "rb");
}


void File::open_for_writing(const char* name)
{
    m_fileName = name;
    m_filePointer = fopen(name, "wb");
}


FILE* File::filePointer()
{
    return m_filePointer;
}

// =================================================================================
//      file_exists()
// =================================================================================

bool file_exists(const std::string& name)
{
    File f;
    f.open_for_reading(name.c_str());
    return (f.filePointer() != NULL_FILE);
}


// =================================================================================
//      Write Three GB.
// =================================================================================

bool write_3_GigaBytes(const char* fileName, Byte* buffer)
{
    File f;
    f.open_for_writing(fileName);
    if (f.filePointer() == NULL_FILE)
    {
        std::cerr << "Error opening file: " << fileName << '\n';
        return false;
    }

    long j;
    for (j = 0; j < NUM_ITERATIONS; ++j)
    {
        int st = fwrite(buffer, BUFFER_SIZE, 1, f.filePointer());
        if (st != 1)
        {
            std::cerr << "Error writing block number " << j+1 << " to file: " << fileName << '\n';
            return false;
        }
    }


    return true;
}

// =================================================================================
//      Write Marker file
// =================================================================================

void write_dummy_file(const char* fn)
{
    File f;
    f.open_for_writing(fn);
}

// =================================================================================
//      Main
// =================================================================================

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        std::cerr << "Syntax: write_benchmarking FILENAME\n";
        exit(2);
    }
    if (file_exists(argv[1]))
    {
        std::cerr << "Error: file exists: " << argv[1] << '\n';
        exit(2);
    }
    const char* fn_start = "F:\\just.about.to.start.writing.3GB"; // Note: For convenience, I've hard-coded the drive letter of the USB stick here
    const char* fn_done = "F:\\finished.writing.3GB"; // Note: For convenience, I've hard-coded the drive letter of the USB stick here

    Byte* buffer = (Byte*) malloc(BUFFER_SIZE);
    if (buffer == 0)
    {
        std::cout << "Out of memory!\n";
        exit(2);
    }

    long i;
    for (i = 0; i < BUFFER_SIZE; ++i)
    {
        buffer[i] = (i & 0xff); // Whatever...
    }

    write_dummy_file(fn_start); // This file's time stamp is equal to the time when writing the big file starts.
    bool ok = write_3_GigaBytes(argv[1], buffer);
    if (! ok)
    {
        std::cerr << "Something went wrong when writing the 3 GB file " << argv[1] << '\n';
    }
    write_dummy_file(fn_done); // This file's time stamp is equal to the time when writing the big file finishes.
    if (buffer != 0)
    {
        free(buffer);
    }
    return 0;
}

I ran each combination of buffer size and number of iterations twice. The results were reproducible: repeat runs with the same parameters were virtually identical. Here are the results:

 64 KB buffer:   6.0 MB/sec
256 KB buffer:   8.2 MB/sec
  1 MB buffer:  10.1 MB/sec
  4 MB buffer:  10.5 MB/sec
 16 MB buffer:  11.0 MB/sec
 64 MB buffer:  10.8 MB/sec
128 MB buffer:  11.0 MB/sec
256 MB buffer:  10.8 MB/sec
512 MB buffer:  10.8 MB/sec
  1 GB buffer:   7.6 MB/sec

It seems as if the write speed, on this particular computer, levels out at a buffer size of about 16MB, but the difference between 1MB and 16MB is tiny, and it may well be that the disadvantages that BenRG has pointed out will make a choice of 1MB more sensible. The fact that performance doesn't drop when buffer sizes reach "insanely" high levels indicates to me that the OS may allocate more physical RAM than BenRG expected. I promised to compare performance to Split (Unix), but will not have time to do so before this thread is archived (benchmarking takes a lot of time). If it turns out that Split outperforms my tailor-made program, however, I will humbly be back with a post linking to this one, entitled "Reinventing the wheel", with the results. --NorwegianBlue talk 20:55, 23 August 2009 (UTC)[reply]

I've been checking out some internet forums such as this one. As our article also states, writing data to a flash drive implies that a large block of storage has to be erased first, typically 2MB on a large drive, as indicated in the external link. Hence, I conclude that 2MB is probably the ideal buffer size for the application I'm working on. Note also that performance starts to drop only at a buffer size of 1GB. To me, this indicates that no swapping to disk is going on when 512MB is malloc'ed (after all, WinXP can run acceptably on the remaining 512MB).
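If the 2MB erase-block figure is taken at face value, rounding the copy buffer up to a whole multiple of it is straightforward. This is a hypothetical helper; the actual erase-block size is drive-dependent:

```cpp
#include <cstddef>

// Round a requested buffer size up to a whole multiple of the flash
// erase block (2MB per the forum discussion above; drive-dependent),
// so each write covers whole erase blocks.
size_t round_to_erase_block(size_t requested,
                            size_t eraseBlock = 2 * 1024 * 1024)
{
    if (requested == 0) return eraseBlock;
    return ((requested + eraseBlock - 1) / eraseBlock) * eraseBlock;
}
```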
I also tested the benchmarking on a much faster computer with 4GB RAM today. It performed slightly worse than the figures I have quoted above (but I was unable to disable its anti-virus software). --NorwegianBlue talk 18:40, 24 August 2009 (UTC) 17:45, 24 August 2009 (UTC)[reply]

GPU Clock and Memory Clock

edit

Hello there, I want to know something about graphics card clocks. What is the difference between the GPU clock and the memory clock? Which one increases the temperature of the graphics card? In order to avoid overheating, which one should be decreased? The GPU clock, the memory clock, or both? (Please do not ask me to use a diagnostic tool to check overheating). Thank you--119.30.36.36 (talk) 20:25, 19 August 2009 (UTC)[reply]

Overclocking your GPU will cause the GPU to heat up more. Overclocking your graphics memory will cause the graphics RAM to heat up more. Since your RAM probably does not have a thermal shutdown safety, this is more likely to permanently damage your hardware. As far as which overclocking scheme will improve performance, it depends whether your graphics operations are memory- or compute-bound. This will depend very heavily on the specific game, tool, or application you are running; and the ugly nitty-gritty details like your current graphics card, GPU series and core model, current device driver; and CPU load and bus load. Nimur (talk) 20:41, 19 August 2009 (UTC)[reply]
  • I don't want to overclock. My GPU clock is 850MHz and memory clock is 975MHz. So which one is responsible for system overheating and freezes while playing games and browsing the net (I have the freezing problem in both)? You said "graphics operations are memory- or compute-bound". I don't understand it. Can you please elaborate on which applications are memory-bound and which are compute-bound? thanks--119.30.36.36 (talk) 21:06, 19 August 2009 (UTC)[reply]
What evidence is there that the graphics card is the cause of the problem? And why bother asking questions here if you refuse to use the obvious diagnostic tool to determine if it is? 87.113.69.234 (talk) 23:27, 19 August 2009 (UTC)[reply]
You haven't told us what graphics card you are using, so there is no way for us to know whether those clock speeds exceed the norm for those chips. You also haven't told us whether you have already tried to overclock the GPU or the GPU memory, or whether this is merely a concern of yours based on something you read. Tempshill (talk) 00:01, 20 August 2009 (UTC)[reply]
  • My graphics card is an XFX ATI Radeon 4890 1 GB. I have never tried overclocking the GPU or GPU memory. I have a freezing issue and tried ALL the necessary steps to prevent it (using diagnostic tools, blowing fan, changing chassis etc. etc). Unfortunately, I have come to know that the GPU clock or memory clock increases this heat issue, which causes freezing. My other devices are ok. There is no way to get the card replaced. If I reduce the GPU clock do I also need to reduce the memory clock? I am planning the following activities:
  • Reduce GPU Clock from 850 MHz to 800 MHz (5%)
  • Reduce Memory Clock from 975 MHz to 925 MHz (5%) (if required)

Should I only stick with GPU clock in order to get rid of overheating problem?--119.30.36.54 (talk) 15:12, 20 August 2009 (UTC)[reply]

Those are the stock frequencies according to this page. If your card can't run stably at stock frequencies you should go and exchange it for a new one. --antilivedT | C | G 20:09, 20 August 2009 (UTC)[reply]
One further question. How exactly are you certain that it is your system temperature that is causing the lockups? It's a clever idea to think of underclocking the graphics card in order to reduce the system temperature, but I have never heard of anyone doing this, and I would think it's far more likely that your unstable system is due to a software problem or some hardware problem that won't go away merely by lowering the temperature by a couple of degrees from stock. If you're certain that it's the temperature that's causing this, you'll need to run through with us all the steps you've tried so we don't lecture you about this sort of thing, sorry. Improved heat sink on the CPU? Improved fan on the heat sink of the CPU? Improved case fans? Tempshill (talk) 21:44, 20 August 2009 (UTC)[reply]
  • My card can run at stock frequency. But the problem is that sometimes it locks up during browsing (the mouse pointer and keyboard stop working for a while, then work, and again stop. Music crashes if played while browsing. All symptoms happen if the system runs for several hours). I use a USB modem.

Previously, I had an overheating problem. At that time the system froze within 5 to 10 minutes of playing a game. I took my card to the shop where I bought it. Everything went fine there, as they have an air conditioned room. So I changed the PC case and got a full tower case with better cooling, and it almost solved the system freezing while playing games. My room temperature sometimes rises and sometimes falls (since I live in a warm country), and the warm air easily enters the PC case. But I can't get rid of the "system locks up while browsing" problem.

One more thing: I can't get my graphics card exchanged unless I prove the problem to the shop's technician. I have to reproduce the problem I am facing at home, but I failed. They told me to disable the system restore point, but nothing happened. --119.30.36.40 (talk) 07:16, 21 August 2009 (UTC)[reply]

It sounds like the same problem you had before - which I thought was due to a serial device overheating - probably due to the overall heat levels in the machine. Would a portable air conditioner for the room be an option (should cost less than either your graphics card or cpu).
Still, it doesn't make much sense that your system is ok for games, and not for browsing - games should make it hotter. 83.100.250.79 (talk) 10:55, 21 August 2009 (UTC)
It might not be a heat-related hardware problem, but with a 4890 I really think you need to get the room temperature below 20C - it's not really a mass market part. (By the way, do you have another graphics card, or integrated graphics on the motherboard - if so, does the computer run ok with these?) 83.100.250.79 (talk) 11:07, 21 August 2009 (UTC)
Also find out which chip on the motherboard handles serial devices - make sure that the air flow to cool it is not being blocked by wires etc. 83.100.250.79 (talk) 11:23, 21 August 2009 (UTC)
  • I don't have much money to buy a portable air conditioner (since I invested all of my money in new hardware). I have another graphics card, but it is not supported by the PCI-E 2.0 motherboard; it is 5 years old. Is my USB modem incompatible with the motherboard? Is it also causing overheating? What is the least drastic option to stop this lockup problem? Reducing the GPU or memory clock?

The PC case has better ventilation, which provides sufficient air flow to the motherboard. Wires are routed away from the motherboard.--119.30.36.34 (talk) 13:34, 21 August 2009 (UTC)[reply]

Mac Apps on Windows

edit

I like Windows the best and I'm running it on my Mac with VirtualBox. But there are some Mac apps I like. How can I run Mac apps in Windows, like how one can run Windows software on a Mac with Parallels' Coherence? --Melab±1 20:31, 19 August 2009 (UTC)[reply]

You can't, basically. Apple doesn't allow emulation of Macs on non-Apple hardware. The closest you can do is a thing called Hackintosh... but it's not the easiest thing to set up, and not really what you are asking for (which isn't "how do I install OS X on a Windows machine"). --98.217.14.211 (talk) 01:24, 20 August 2009 (UTC)[reply]
I think that OSx86 (same link as Hackintosh) will run in VirtualBox and VMware and the like, but it's probably illegal. -- BenRG (talk) 10:38, 20 August 2009 (UTC)[reply]

Archive of some sort

edit

I might have posted something like this earlier, but I don't remember, and now I have more information.

The file is named main.cat and is used by a game. No tools are given to open the file and the game is pretty much nonexistent. When opened in a hex editor (HHD Hex Editor Neo Free), the first four characters come out to be "CAT1". This is not the same as a Security Catalog header (I checked) and HHD Hex Editor Neo identified it as a possible RAR archive. But neither WinRAR nor 7Zip could open it as an archive, and Game Extractor Basic couldn't do it either.

The characters near the end of main.cat correspond to files that are used by the game (like NAPALM.WAV, TITLE.XM, and so forth) with some offset bytes between them. I am assuming that the middle section of the file is some sort of compressed data (it's gibberish, btw) but I cannot identify the compression scheme.

I've tried to extract with several programs, with no luck. If anyone has knowledge of the "CAT1" header or knows what I'm rambling about, please don't hesitate to respond! ANY help at all is appreciated! Thanks!

 Buffered Input Output 22:20, 19 August 2009 (UTC)[reply]

I gather that you are trying to get some kind of data from the file. What is your objective here? What is the name of the game? Tempshill (talk) 00:02, 20 August 2009 (UTC)[reply]
If the game authors had wanted you to get at the data, they'd have made it easy. Since they didn't, they obviously don't intend for you to mess with this file - and almost certainly made it virtually impossible by writing the file with some kind of unpublished custom tool. SteveBaker (talk) 02:35, 20 August 2009 (UTC)[reply]
I don't know why you'd call it "virtually impossible"; you must know that people routinely reverse-engineer these formats for modding purposes. This one doesn't even sound very difficult, given that the filenames are in plaintext. If it does use compression there's a good chance it's zlib, since rolling your own data compression is a lot harder than rolling your own archive format. If it uses some other compression then you might have to disassemble the game executable to figure it out. -- BenRG (talk) 10:59, 20 August 2009 (UTC)[reply]

Whoops, I spoke too soon. The data wasn't compressed, but was more or less just stored. I could have probably done it a simpler way than I did; I hex-edited the file and extracted what I wanted (and backed up the original just in case). Yayz! Thanks for your help!  Buffered Input Output 00:40, 23 August 2009 (UTC)[reply]