Wikipedia:Reference desk/Archives/Computing/2016 June 8
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 8
Data transfer
This is probably a very basic question, but I don't know the answer. Is there a simple way to transfer files from one PC to another? Is there such a thing as a transfer cable or something? → Michael J Ⓣ Ⓒ Ⓜ 01:49, 8 June 2016 (UTC)
- A crossover UTP fly-lead can be used to connect the PCs back-to-back. Nowadays there should be no problem using a non-crossover cable, as Ethernet ports have crossover detection. Give one PC an IP of, say, 192.168.0.1, subnet 255.255.255.0, default gateway 192.168.0.1, and the other PC an IP of, say, 192.168.0.2, subnet 255.255.255.0, default gateway 192.168.0.2. Depending on your operating system and/or anti-virus, you will need to configure a few file-sharing/network options. You should choose the home/work/private network and NOT a public network. It is then trivial (using File Explorer) to navigate to the shared folder on the "other" PC. 196.213.35.146 (talk) 06:26, 8 June 2016 (UTC)
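To illustrate the back-to-back setup above: once the two PCs can ping each other, any network transfer mechanism will work, not just OS file sharing. Here is a minimal Python 3 sketch under the addressing from the post; the port number and filenames are arbitrary choices, not anything the post specifies.

```python
# --- receiver.py: run first, on the PC given 192.168.0.2 above ---
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 5000))            # 5000 is an arbitrary free port
srv.listen(1)
conn, _addr = srv.accept()
with open("received.bin", "wb") as f:
    while True:
        chunk = conn.recv(65536)       # read until the sender closes
        if not chunk:
            break
        f.write(chunk)
conn.close()
srv.close()

# --- sender.py: run second, on the other PC (192.168.0.1) ---
import socket

s = socket.create_connection(("192.168.0.2", 5000))
with open("file_to_send.bin", "rb") as f:
    s.sendfile(f)                      # stdlib bulk send, Python 3.5+
s.close()
```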
- If both PCs have a wireless facility, then you could, alternatively, set up a local wireless network. For just a few occasional files, I find it easier to copy to USB storage, then copy to the other machine, even though this involves two operations. Dbfirs 08:03, 8 June 2016 (UTC)
- It is also very easy to email yourself a file from one computer and then check your email on another computer. I use that to bypass all the security crap on my phone since it won't do plain USB mode anymore. 209.149.114.20 (talk) 14:49, 8 June 2016 (UTC)
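The email approach can also be scripted. A minimal sketch using Python's standard library; the SMTP host, port, address, and app password are placeholders to substitute with your provider's details.

```python
# "Email yourself the file" as a script. Everything below in
# example.com form is a placeholder -- substitute your provider's.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = msg["To"] = "you@example.com"
msg["Subject"] = "file transfer"
with open("notes.txt", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="octet-stream", filename="notes.txt")

with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
    server.login("you@example.com", "app-password")
    server.send_message(msg)
```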
- Depending on what price you want to pay, you could obtain a USB flash drive, or a USB external hard drive, and transfer files that way. -- 143.85.169.19 (talk) 15:01, 8 June 2016 (UTC)
- A simple UTP cable works fine in most cases, and is very cheap. You can connect the computers directly to each other, but it may be easier to connect them both to the same router and configure file sharing in your operating system. I don't know what operating system(s) you use, otherwise I would've posted a how-to. Many routers have a USB port you can attach an external hard drive or USB stick to. I also use an HP EX490 MediaSmart Server. See Network-attached storage. Dropbox and Google Drive also work. Dropbox can sync over LAN, which is a lot faster than syncing via the cloud, but that requires a LAN connection. The Quixotic Potato (talk) 15:12, 8 June 2016 (UTC)
- General advice: when helping people, try to avoid jargon. "UTP" here stands for "unshielded twisted pair", which is a term I've never heard anyone who isn't an electronics nerd use. Average people call this thing an "Ethernet cable", "network cable", or "Cat-5 cable". Anyway, to sum up, the convenient way is to set up network file sharing. Plugging "file sharing <your operating system>" into a search engine should give you plenty of information. Of course both systems have to be connected to the same LAN, which is why other respondents were telling you to use a "UTP cable". They mean hook up both computers to the same LAN. You can literally plug both systems into each other with one Ethernet cable and then manually assign IP addresses, but if you already have a LAN set up it's simpler to just use that. If you have a "router" or "network gateway", you have a LAN. This includes Wi-Fi. And yes, both systems can talk over Wi-Fi as well. Anyway if you have specific issues with doing this, give us the details and we can probably help. --71.110.8.102 (talk) 21:53, 8 June 2016 (UTC)
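One more ad-hoc option once both machines are on the same LAN, assuming Python 3 happens to be installed: serve a directory over HTTP and download from the other machine with a browser. The port number here is an arbitrary choice.

```python
# Serve the current directory over HTTP; from the other PC, browse to
# http://<this PC's LAN IP>:8000 and click the file to download it.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
```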
Weird Windows Media Player ripping behaviour?
I've been doing some spectral analysis of my music collection. Albeit very rarely, some ripped physical albums (320 kbps CBR setting) show cutoffs around 16.5 kHz or 18 kHz (especially old and live ones) when ripped with WMP. Why? WMP normally renders most other albums out to roughly 20.5 kHz (as it should). More bizarrely, I'm having trouble really noticing a sound quality difference: I ripped the same test tracks as WAV (lossless), and with Sennheiser HD 650s I believe I'm mostly getting a placebo effect, if anything. Matt714 (talk) 05:10, 8 June 2016 (UTC)
- On the last point, I recommend an ABX test, always a good idea when testing anything audio-quality related [1]. There's no point worrying about which is better until you're sure you can even hear a difference. Nil Einne (talk) 05:43, 8 June 2016 (UTC)
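For the curious, the bookkeeping behind an ABX test is simple enough to sketch. This harness does not play audio itself: play_clip() is a stub you would wire to a real player, and the trial count of 16 is an arbitrary choice.

```python
# Bare-bones ABX scoring harness: randomize which clip X secretly is,
# collect guesses, and report how unlikely the score is under guessing.
import random
from math import comb

def play_clip(name):
    print(f"(now playing clip {name})")  # stub: replace with real playback

def abx_trials(n=16):
    correct = 0
    for _ in range(n):
        x_is_a = random.random() < 0.5
        play_clip("A")
        play_clip("B")
        play_clip("X")                   # X is secretly either A or B
        guess = input("Is X the same as A or B? ").strip().upper()
        if guess == ("A" if x_is_a else "B"):
            correct += 1
    # one-sided binomial p-value: probability of scoring at least this
    # well by pure guessing
    p = sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n
    print(f"{correct}/{n} correct, p = {p:.4f} under pure guessing")

if __name__ == "__main__":
    abx_trials()
```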
- Bit rate, by itself, is only one metric: it is the simplest single-number representation of compression quality. It is possible to use the same algorithm and the same bit rate, yet yield different subjective or objective fidelity and quality when compared against the original input signal. Our article on MP3 has a section on quality, and that explains some of the mathematical details in simple language. WMP software uses a different method, described in our article; but fundamentally, it is still a lossy quantizing frequency-domain compression method, so the same general observations apply. The specifications for these file formats permit a lot of room for different subjective quality at the same bit rate, and even if your software user-interface doesn't show these details, your compressor or transcoder may be doing fancier heuristic signal analysis under the hood.
- If the programmers and designers of the software found a way to discard some of the frequency-content, and they knew (with high confidence) that you wouldn't notice a sound quality difference, they might have explicitly or implicitly implemented a filter to discard those unnecessary signal components, improving the compressibility.
- Human ears (and brains) are far worse at detecting perfect signal fidelity than the run-of-the-mill audiophile would like you to believe. The overwhelming majority of the perceptual signal can be represented using just a few hundred bits per second - sufficient to recognize the signal, at least. This isn't even "theoretical" - it's how your mobile phone audio works! Take a look at, e.g., GSM Half Rate. It's not great for listening to music, but it demonstrates that your brain can still make sense of sound when it's compressed to surprisingly low bit-rates. In other applications, bitrates are even lower. At the extreme end of the spectrum, consider MIDI control signals as a form of compression: using an average of just a few bytes per second, it is possible (in practice!) to encode enough information to faithfully recreate a complex song, if we are willing to put sufficient effort into creating the encoded stream.
- Nimur (talk) 16:16, 8 June 2016 (UTC)
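The MIDI bit-rate claim is easy to check with back-of-the-envelope arithmetic. The 4-notes-per-second melody density below is an illustrative assumption, not a figure from the post.

```python
# A raw MIDI note-on or note-off message is 3 bytes
# (status byte, key number, velocity).
notes_per_second = 4     # assumed density of a melodic line
bytes_per_note = 2 * 3   # one note-on plus one note-off
rate = notes_per_second * bytes_per_note
print(f"{rate} bytes/s = {rate * 8} bits/s")   # 24 bytes/s = 192 bits/s
```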
- ... but MIDI isn't a form of compression, unless you allow a Google search for a symphony performance to be an even more extreme form of "compression". Dbfirs 09:24, 11 June 2016 (UTC)
- Are you saying that when you rip an album to a lossy format (WMA? MP3?) at 320kbps, then frequency-analyze the result, you see a 16.5kHz or 18kHz cutoff, but when you rip the same album to WAV, and frequency-analyze that, you see higher frequencies? That would be very strange. If you haven't done that experiment, I'd guess that the albums (CD-Audio?) were mastered without the higher frequencies, and WMP is just faithfully preserving that. -- BenRG (talk) 17:37, 8 June 2016 (UTC)
Indeed, that's what I'm saying. I took a few screenshots that might help, all ripped from the same CD: iTunes 320kbps CBR mp3 (http://i.imgur.com/Uu0nGaW.jpg), Windows Media Player 320kbps CBR mp3 (http://i.imgur.com/FVg8gIe.jpg), and Windows Media Player WAV Lossless (http://i.imgur.com/uQ26HLx.jpg). Matt714 (talk) 20:02, 8 June 2016 (UTC)
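For anyone who wants to reproduce this kind of measurement without a spectrogram tool, here is a rough Python sketch. It assumes the track has already been decoded to 16-bit PCM WAV (e.g. with ffmpeg); the filename and the 60 dB threshold are placeholder choices.

```python
# Estimate the highest frequency with real content in a ripped track.
import wave
import numpy as np

with wave.open("track.wav", "rb") as w:
    rate = w.getframerate()
    frames = w.readframes(w.getnframes())
    data = np.frombuffer(frames, dtype=np.int16)  # assumes 16-bit samples
    if w.getnchannels() == 2:
        data = data[::2]                          # keep the left channel

spectrum = np.abs(np.fft.rfft(data.astype(np.float64)))
freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)

# call the cutoff the highest frequency still within 60 dB of the peak
floor = spectrum.max() / 10 ** (60 / 20)
cutoff = freqs[np.nonzero(spectrum > floor)[0][-1]]
print(f"estimated cutoff: {cutoff / 1000:.1f} kHz")
```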
- Sure - when you encoded, you used Windows Media Player - so you didn't have any control at all over the encode settings.
- If you were writing software to program the encoder behavior explicitly, you could have specified whether the encoder performs band truncation, and specified the minimum coded bandwidth, among many other configurable properties.
- By default, Windows Media Player uses encoder settings that were considered "appropriate for consumer use" (by the audio engineers who designed it). Most consumers don't spectrally analyze the output! You're well into "pro-user" territory now. You might want to consider using a more sophisticated media encoder, or writing your own using Microsoft's WMA software API. Here is Microsoft's tutorial, Encoding a WMA File.
- Alternatively - and this is a totally viable option - you can simply accept that you probably can't hear the difference, and that this band truncation behavior is improving the file sizes by some non-negligible amount. The exact nature of how the encoder algorithm knows when it may band-stop above 16 kHz might just be a detail that is best left to its implementer, as long as the audio sounds good. (A toy sketch of the band truncation effect follows this reply.)
- If you are very interested in the theory of audio codecs, Julius Orion Smith publishes four absolutely excellent full-length textbooks, available on his webpage at no cost: Mathematics of the Discrete Fourier Transform, Introduction to Digital Filters, Physical Audio Signal Processing, and Spectral Audio Signal Processing.
- This last book (...covering advanced applied material, to which the previous three books build) is a great overview of the general theory of modern audio compression (or at least, of the mathematics that is peripheral to compression theory, as it particularly pertains to audio signals). It is an excellent free online resource. If you like having it on your shelf, you can toss $46 to Julius and he'll have a very nice hardcopy printed and mailed to you.
- Introduction to Digital Audio Coding and Standards is a lot more cut-and-dried, gets into the details of individual CODEC specifications, and is generally a great reference - but it makes for a lot less recreational reading. It covers all the MPEG audio specifications, including MP3 and AAC (which are nearly identical to WMA in their principle of operation). If you really must know about WMA, you must refer to Microsoft's technical documentation: start at their overview page.
- Nimur (talk) 21:05, 8 June 2016 (UTC)
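To make the band truncation idea concrete, here is a toy sketch of the effect. White noise stands in for music, and a blunt FFT zeroing stands in for the encoder's filter; a real encoder uses proper filter banks and a psychoacoustic model.

```python
# Zero out everything above 16 kHz in the frequency domain and
# resynthesize: the result has nothing for a lossy encoder to spend
# bits on above the cut, which is the head-room it reclaims.
import numpy as np

rate = 44100
signal = np.random.randn(rate)                  # one second of "audio"

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
spectrum[freqs > 16000] = 0                     # discard content above 16 kHz
truncated = np.fft.irfft(spectrum, n=len(signal))
```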
- "Codec" isn't an acronym. No one capitalizes it. --71.110.8.102 (talk) 21:40, 8 June 2016 (UTC)
- Well that's not true, or at the very least, your statement is overly broad. Without engaging in pedantry over the exact nature of the definition of "acronym," I can still speak to the convention on capitalization, and the claim that "no one" does it. CODEC is a redirect to Codec, so evidently somebody on Wikipedia has capitalized it before. The word "CODEC" is an engineering neologism that usually refers to "Compressor / Decompressor", but in common parlance, it often means "anything peripherally related to the audio or (other) system."
- Intel sometimes capitalizes CODEC, and sometimes they do not. The AC97 CODEC is ... well, nobody who ever programmed one was ever very sure what it was, but it conforms to some kind of standard related to audio hardware. It contains many audio pieces, including some hardware for compression and decompression, and other hardware related to digital-analog audio interfaces.
- ALSA documentation capitalizes CODEC. ... Sometimes.
- AppleOnboardAudio ... sometimes capitalizes it.
- Forgive me for a little hubris, but I sometimes capitalize CODEC - and sometimes I do not - so irrespective of whether it is "correct" or "incorrect", I am in good company.
- Nimur (talk) 22:01, 8 June 2016 (UTC)
- The obvious thing to do is to preserve/indicate the portmanteau nature of the word with capitals, and write it as CoDec. Also works well for the analogous term MoDem. This spelling pattern, while less common, has the advantage of following the precedents set by ScarJo and related cases of smashing words together to make a name ;) SemanticMantis (talk) 14:26, 9 June 2016 (UTC)
- In the Unofficial Guide to Stanford book published many years ago, there was a fantastic rundown of our hackerish cant: we have a CoHo, a MemChu, a MemAud, a HooTow... one could make the case that MODEMs and CODECs are veritable shibboleths that evolved out of the very mouths of Silicon Valley's ancient pioneers... contrast this to the very East-Coast-style jargon rooted in the culture of a distinct social set of technology pioneers... I'd love to see a proper linguistic historian tear into that case study! Nimur (talk) 15:17, 9 June 2016 (UTC)
- Here it is - an intro to our Lingo. We tend to elide a lotta syllables! To quote the jargon file once again: "This is not ‘poor grammar’, as [they] are generally quite well aware of what they are doing when they distort the language. It is grammatical creativity, a form of playfulness. It is done not to impress but to amuse..." Our wacky-walking valley culture predated the invention of the silicon-based semiconductor computer by several decades, and to omit that cultural context from any encyclopedic history of technological progress would be a grave error! Nimur (talk) 16:36, 10 June 2016 (UTC)
- not related to anything, but I once recorded something from FM radio and could later clearly see the 19 kHz pilot as a spike in the spectrum tool in Audacity. I think I could even hear it when I concentrated real hard, but I'm not sure. Naturally, it was gone after converting the file to MP3 Asmrulz (talk) 01:31, 9 June 2016 (UTC)
- Although it seems to make no sense to low-pass filter the input at high bit rates, this thread says "The official Fraunhofer codec by default uses a 16kHz low pass filter for all bitrates", and Windows Media Player says that it uses Fraunhofer's encoder (or did as of version 10). I don't know whether this behavior can be disabled in WMP's settings. If you want an alternative ripper for Windows, try Exact Audio Copy and LAME. -- BenRG (talk) 02:57, 9 June 2016 (UTC)
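If you go the EAC + LAME route, the encode step can also be scripted. A sketch: -b 320 forces 320 kbps CBR and -q 0 selects the slowest/highest-quality analysis; --lowpass takes a frequency in kHz, but flag behavior varies between LAME versions, so check `lame --help` rather than trusting this sketch. Filenames are placeholders.

```python
# Invoke the LAME command-line encoder from a script.
import subprocess

subprocess.run(
    ["lame", "-b", "320", "-q", "0", "--lowpass", "20.5",
     "input.wav", "output.mp3"],
    check=True,
)
```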
- I think LAME may still cut even with high-bitrate CBR [2] [3] [4] [5] [6] [7], although at 320 kbps the cutoff is intended to be at a level that few humans can hear well. At these high levels, LAME will also significantly reduce the accuracy of encoding if needed [8]. I think the more confusing thing about the OP's statement is that it sounds like they're saying this only happens sometimes, not all the time. This sounds like some sort of psychoacoustic model, perhaps similar to LAME's Y switch but filtering instead of just using few bits. Whether it's because it feels it needs to use the bandwidth for the low frequencies (like LAME's Y switch seems to do), or perhaps has decided the high frequencies are mostly noise rather than intentional sounds, I don't know. Nil Einne (talk) 14:07, 9 June 2016 (UTC)
- This ought not be confusing. The technical documentation for WMA (which I linked earlier) explains how the band cut filtering is applied dynamically based on a quality factor, which is estimated by comparing the compressed/reconstructed waveform to the original waveform. This process can also be performed using a variable bit-rate encoder with multiple passes to aim for an average desired bitrate. In other words, it is expected behavior to see the filtering turning on or off multiple times within a single song, and/or between different songs, or to have the filter change its parameters. The same kind of strategy is used by smart programmers who design or implement other codecs besides WMA. Nimur (talk) 15:00, 9 June 2016 (UTC)
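Schematically, the quality-gated strategy described above looks something like the following. This is an illustration of the idea, not WMA's actual algorithm: the crude quantizer, the 30 dB target, and the frame handling are all invented for the sketch.

```python
# Encode a frame, measure how badly it reconstructs, and engage the
# band cut only when quality falls below a target.
import numpy as np

def lossy_encode(frame, step=512.0):
    return np.round(frame / step)        # stand-in for the codec core

def lossy_decode(code, step=512.0):
    return code * step

def snr_db(original, reconstructed):
    noise = original - reconstructed
    # small epsilon avoids log(0) when reconstruction is exact
    return 10 * np.log10(np.sum(original ** 2) / (np.sum(noise ** 2) + 1e-12))

def encode_frame(frame, rate=44100, target_db=30.0, cutoff_hz=16000):
    candidate = lossy_encode(frame)
    if snr_db(frame, lossy_decode(candidate)) < target_db:
        # quality shortfall: cut the band above 16 kHz and spend the
        # bits on the frequencies people actually hear
        spectrum = np.fft.rfft(frame)
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
        spectrum[freqs > cutoff_hz] = 0
        candidate = lossy_encode(np.fft.irfft(spectrum, n=len(frame)))
    return candidate
```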
- Not quite sure what you mean about "not be confusing". The reason I mentioned confusion is because BenRG suggested that "The official Fraunhofer codec by default uses a 16kHz low pass filter for all bitrates" and WMP used to use a variant, so this may be the reason for the behaviour the OP observed. But since the OP observed that the behaviour varied, it seems that the explanation can't be so simple. And remember, it's already been established that we're talking about MP3, not WMA. The info on WMA is interesting, but not surprising, and basically what I suggested as one possibility. It may be that the developers of the MP3 codec for whatever version of WMP the OP was using have implemented the same behaviour that is used in WMA. (Incidentally, is this actually required by the spec, or is it just a recommendation and the behaviour used by Microsoft's implementation of WMA?) It may be they implemented different behaviour. In the absence of info on precisely what is implemented in the version of WMP the OP was using, we still don't know precisely why there was the variance in the OP's case, which was one of my main points. So as far as I can see, there is still confusion or uncertainty over the cause of the OP's observations. Sure, we can come up with different possibilities, and I already gave two: one basically what you explained in more depth, another slightly different. But we still don't know which, if any, is the exact cause for the OP's case. Nil Einne (talk) 17:10, 9 June 2016 (UTC)
I'm using Windows Media Player 12, but I ripped most CDs in the Vista era. However, I don't believe this changes things much. For example, I recently tried ripping Iron Maiden - Iron Maiden (1980), and it stuck with a 16.5 kHz cutoff despite 320 kbps clearly being chosen. Matt714 (talk) 00:22, 10 June 2016 (UTC)
Just did some testing, and there is a marked difference between iTunes and Windows Media Player: https://i.imgur.com/yukJAol.jpg How this really translates into audio quality, I don't know. Full spectrogram: https://i.imgur.com/S6AjIca.jpg Matt714 (talk) 01:20, 10 June 2016 (UTC)