Wikipedia:Reference desk/Archives/Computing/2013 July 24
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
July 24
Intel's Tick-Tock and performance
Intel Tick-Tock says that a tick (e.g. Ivy Bridge) is a reduction in die size and a tock (e.g. Sandy Bridge) is a new microarchitecture. This says that Ivy Bridge should show about a 20% performance increase (per clock speed) over Sandy Bridge because of the larger number of transistors. Is that right? Do ticks or tocks represent the bigger performance gain? Bubba73 You talkin' to me? 04:32, 24 July 2013 (UTC)
- Whoops - I read it wrong. The 20% increase is in transistors, not performance. But still the question is about performance in ticks vs. tocks, e.g. Ivy Bridge over Sandy Bridge.
- My question is partially answered by Ivy Bridge (microarchitecture), but what can be said in general? Bubba73 You talkin' to me? 04:35, 24 July 2013 (UTC)
- You need to understand that increasing the transistor count generally increases the distance signals have to travel. Intel increases both performance and transistor count. Certain RISC architectures (e.g. MIPS) don't increase their transistor count.
- There is a simpler example: the human brain/eye can process 24 fps; a fly can process hundreds. 2A02:8422:1191:6E00:56E6:FCFF:FEDB:2BBA (talk) 11:24, 24 July 2013 (UTC)
- That is untrue. The human eye can easily tell the difference between a 50 and a 70 FPS animation. Zzubnik (talk) 12:45, 24 July 2013 (UTC)
- But generally more transistors means a faster CPU. Bubba73 You talkin' to me? 14:45, 24 July 2013 (UTC)
- Can you cite a well-known CISC architecture (other than x86-based) where more transistors mean more performance? 2A02:8422:1191:6E00:56E6:FCFF:FEDB:2BBA (talk) 16:55, 24 July 2013 (UTC)
To put it in more concrete terms, I was deciding between buying a computer with a Sandy Bridge i5-2320 and one with an Ivy Bridge i5-3330. (Last night I ordered the i5-3330 one.) Both run at 3GHz and have the same amount of L2 and L3 cache. The 2320 has one notch better TurboBoost with one or three cores running, and the same with two or four cores running. Despite that, might the 3330 outperform the 2320 by a small amount? Bubba73 You talkin' to me? 14:53, 24 July 2013 (UTC)
- I didn't look here yet. 2A02:8422:1191:6E00:56E6:FCFF:FEDB:2BBA (talk) 16:55, 24 July 2013 (UTC)
- I'm confused by your two questions. On a clock-for-clock basis, Ivy Bridge processors do outperform Sandy Bridge processors; this is pretty much a given and shouldn't be surprising, because Ivy Bridge is newer, with both the die shrink and the microarchitecture refinements expected from any 'tick'. There are a variety of reasons for it; the increase in transistor count is just one of them. This is distinct from whether you generally get a bigger performance increase with a 'tick' or a 'tock'. The comparison for the Sandy Bridge (a tock) is against Westmere (microarchitecture); compare that with, of course, Sandy Bridge vs Ivy Bridge (a tick). Alternatively, you can compare Sandy Bridge vs Ivy Bridge and Ivy Bridge vs Haswell (tock). Note that these sorts of comparisons are difficult and not necessarily that meaningful, as the per-clock performance increase varies, particularly with a major microarchitecture change, and also depends significantly on the application. While the days of NetBurst and clock speed being marketed extensively are long gone, with concentration now on other things like power efficiency, there may still be reasons why a company chooses a design that clocks higher despite lower performance per clock, so the more meaningful comparisons are things like cost per performance and a consideration of what matters to you, like power per performance. (In other words, while you don't see this so much with Intel any more, it's still possible for a new generation to have lower performance per clock. This doesn't mean the new generation is worse: if a 3 GHz part of the new generation is sold at the same price as a 2 GHz part of the older generation and the performance per clock is only 10% worse, the new generation would generally have better performance.) Of course, price depends on many factors unrelated to cost, and despite some inroads by ARM and GPGPU, and the presence of AMD and other competitors, Intel still has a lot of the server, desktop and high-end laptop market to itself. And for the end user, I'm not sure these comparisons actually matter anyway, unless perhaps you're deciding whether to wait. Why would you care that Haswell had, say, a 15% performance/clock or price or whatever increase over Ivy Bridge, which had a 20% performance increase over Sandy Bridge, which had a 30% performance increase over Westmere? (Completely made-up numbers.) Ultimately, when you're buying you have to look at what's available, consider the price and the various advantages like performance differences and power differences, and choose what best fits your needs. Nil Einne (talk) 04:33, 25 July 2013 (UTC)
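To make the parenthetical example above concrete (the figures below are hypothetical, just like the made-up percentages in the comment): overall throughput is roughly clock speed times per-clock performance, so a part that loses 10% per clock but clocks 50% higher still comes out well ahead. A minimal sketch in Python:

    # Hypothetical numbers only: effective performance ~ clock (GHz) * per-clock performance.
    old_clock_ghz, old_ipc = 2.0, 1.00   # older-generation part at a given price
    new_clock_ghz, new_ipc = 3.0, 0.90   # newer part: 10% worse per clock, but clocks higher
    old_perf = old_clock_ghz * old_ipc
    new_perf = new_clock_ghz * new_ipc
    print(f"relative speedup: {new_perf / old_perf:.2f}x")   # ~1.35x despite the lower per-clock figure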
- Thank you - I didn't really know that a tick was more than a die shrink. The other day I was deciding between an Ivy Bridge and a Sandy Bridge that have the same specs except some of the TurboBoost numbers are better on the Sandy. But I ordered the Ivy. A few percent better performance does make a difference to me on the things it will be used for. Bubba73 You talkin' to me? 15:13, 25 July 2013 (UTC)
memory performance
I'm going to be adding memory to a Windows desktop computer. I have three basic choices, at about the same price:
- DDR3-1333 PC3-10600 CL=9
- DDR3-1600 PC3-12800 CL=11
- DDR3-1333 PC3-10600 CL=7-7-7-24
Which should give the best performance? Bubba73 You talkin' to me? 05:24, 24 July 2013 (UTC)
- DDR3-1600 sounds faster, but can your motherboard and CPU support it? Graeme Bartlett (talk) 10:16, 24 July 2013 (UTC)
- I'm not sure about the speed of the motherboard. But Crucial says that all of these will work. So that probably means a 1333 MHz bus instead of 1600. In that case, I doubt that there would be any advantage to the 1600 one.
- Does CL=7-7-7-24 mean that the first three words read from memory would be relatively fast but the fourth one would be very slow? Bubba73 You talkin' to me? 14:09, 24 July 2013 (UTC)
- That question is answered at SDRAM latency. Bubba73 You talkin' to me? 16:19, 24 July 2013 (UTC)
- No, no; in your example the CL is only the first 7. The next two numbers are tRCD and tRP, and the large number is tRAS.
- You have to check your memory controller to see if 1600 is supported; if it is not, the most likely scenario is that the system runs the memory at 1333, possibly with slightly better latencies than the module's rated 1600 timings.
- For a Sandy Bridge or older configuration I guess your third option is easily the best, but with an Ivy Bridge or a socket 2011 system you could be better off with the 1600.
- Anyway, I have tried a lot (truly a lot) of different RAM configurations for my work, and except in benchmarks the difference is almost unnoticeable. Change your disk or CPU, though, and now we're talking.
- Iskánder Vigoa Pérez (talk) 00:17, 25 July 2013 (UTC)
- Yes, after figuring on it, there is probably not much difference. A CL=11 on 1600 is just about the same as CL=9 on 1333. Bubba73 You talkin' to me? 00:53, 25 July 2013 (UTC)
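For anyone checking that arithmetic: CAS latency is counted in memory-clock cycles, and the memory clock is half the DDR3 transfer rate, so the absolute first-word delay is roughly CL × 2000 / (transfer rate in MT/s) nanoseconds. A small sketch of the comparison in Python, using only the module speeds discussed above:

    # First-word (CAS) latency in nanoseconds for DDR3 modules:
    # latency_ns = CL cycles / memory clock (MHz) = CL * 2000 / transfer rate (MT/s)
    def cas_latency_ns(transfer_rate_mts, cl):
        return cl * 2000.0 / transfer_rate_mts

    for rate, cl in [(1333, 9), (1600, 11), (1333, 7)]:
        print(f"DDR3-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
    # DDR3-1333 CL9  -> ~13.50 ns
    # DDR3-1600 CL11 -> ~13.75 ns  (about the same as 1333 CL9)
    # DDR3-1333 CL7  -> ~10.50 ns  (the lowest absolute latency of the three)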
- Note that there is no perfect correlation in performance between lower-latency, lower-frequency RAM and higher-latency, higher-frequency RAM. However, I suspect that even on the Ivy Bridge the CL7 will generally be faster. That said, I agree that on both the Ivy Bridge and Sandy Bridge the difference is minimal. AFAIK this applies to most real-world (i.e. not synthetic) benchmarks in general, although there are cases which can gain an advantage, although 1333 CL9 sounds a little low. Amongst other things, both have a very good memory controller and the GPU is fairly weak. See also [1] [2] [3]. Note that this is not the case with the AMD Trinity. The Trinity memory controller seems to be a bit slow, and while there's still little advantage for the CPU in general, the GPU may often gain a noticeable advantage from faster memory in real-world benchmarks (whether a person will notice does depend on the individual and the app, but it's entirely plausible you will in some scenarios). Nil Einne (talk) 02:21, 25 July 2013 (UTC)
- I ordered 1333 CL=9 for an Ivy Bridge earlier today, but it hasn't shipped yet. The CL=7 was only $3-4 more per 8GB. Reading up on it, it seemed like it was more for overclocking, which I don't do. Some of the things I run are very CPU and memory intensive and GPU is of no concern. Bubba73 You talkin' to me? 03:06, 25 July 2013 (UTC)
- No; overclockers may sometimes go nuts over high-performance memory, but with the death of the FSB it's not generally needed for overclocking, except perhaps at the real borderline when you run out of dividers (although in cases like that memory speed is more important than latency, and you may easily be able to overclock the low-latency RAM to at least the same frequency). You may be more likely to have memory become a bottleneck if you're overclocking a lot, but in reality overclockers frequently need it as much or as little as non-overclockers (unless their sole goal is to show off some record or synthetic benchmark results). That said, as the results show (although they didn't really compare low-latency 1333), if you aren't using the GPU, only a small number of real-world CPU cases show performance increases large enough to be noticeable; in particular, the only real example there was WinRAR, although of course only your own or someone else's comparisons can tell you what's the case for your specific app. Nil Einne (talk) 04:51, 25 July 2013 (UTC)
- I'm running right now on a (4x4) 16 GB Mushkin Silverline memory kit, 1333 at 7-9-9-22.
- And a colleague with the same P67 board and the same 2600 CPU just put in a (4x8) 32 GB Corsair extreme-something (very expensive) 1866 kit running at 10-11-10-30, and there isn't anything faster that you can actually see or measure. I mean it literally: no better render times, no better viewport work, no better time to wake the system, no better time to fire up applications. The only thing that improved was work in Adobe AE, because double the RAM size means more room for RAM previews, and that has nothing to do with speed. But hey, he gets a very nice Win8 index! — Preceding unsigned comment added by Iskander HFC (talk • contribs) 05:07, 25 July 2013 (UTC)
- Thank you both. Bubba73 You talkin' to me? 13:32, 25 July 2013 (UTC)
- Note that increasing the amount of RAM will definitely increase speed in cases where you would otherwise run out of memory, and by amounts way beyond anything we're discussing here. And faster RAM will likely make a measurable difference with certain CPU applications, probably including WinRAR (although the tests which found this were for Ivy Bridge, not Sandy Bridge), even if probably not many. It will potentially make a measurable difference more commonly with the GPU, although again this is harder to say since I'm less sure of the memory scaling of the Sandy Bridge GPU (in addition, your 1333 RAM has better timings than are commonly tested). You've only mentioned four applications, and two of them (waking up the system and loading programs) are the sort of thing where it's unreasonable to expect RAM speed to make much difference, so it's not exactly surprising that you found no difference in your small subset of cases, but that doesn't change the fact that it will likely make a measurable difference in a small set of real-world applications. I presume your friend is actually overclocking the memory subsystem; from what I can tell the 2600 doesn't officially support memory faster than 1333, so the memory subsystem will need to be overclocked to run at 1866 (some boards may support this by default, and the lack of official support on the CPU/memory controller may not mean it lacks the appropriate defaults). Nil Einne (talk) 03:40, 26 July 2013 (UTC)
The board is also the same, an ASRock, and it seems to manage to set the memory that way by itself. I don't get into this overclocking stuff; I just don't get it. A curious thing: the timings of our RAM are not the same as those pictured on their respective boxes, nor those listed by the seller (Newegg). In my colleague's case I can't remember, but mine is supposed to run at 8-8-8-24 (by SPD) and is currently at 7-9-9-22. I had tried to set it manually, but the board just displays an error code on its LCD and then reboots with the same values. In our group we do a lot of heavy work: 3ds Max, Mudbox, Maya, Adobe Mocha, PS, AE, Premiere, ME, and an in-house tool for processing normal maps. Maybe none of that proves to be a challenge to RAM speed (and indeed, so far, it doesn't), but I just don't quite believe that compressing some files in WinRAR or other results in synthetic tests can give you a real notion of what improvement to expect when upgrading to faster memory, unless you do a lot of file compression or you want to participate in some OC showdown or something. — Preceding unsigned comment added by Iskander HFC (talk • contribs) 06:25, 26 July 2013 (UTC)
3d From 2d
For certain types of figures, it seems like, given an image from the front, back, and each side and the general dimensions of the object, you could "extract" other angles of the figure by blending the appropriate parts together. While, obviously, this wouldn't work for every object, and lighting may be a problem, is there anything out there that looks into this? Honestly, I'm just curious; if nothing else, I'd love any suggestions on what might be a good way of doing this, then I could just try it myself. Thanks:-)Phoenixia1177 (talk) 07:27, 24 July 2013 (UTC)
- 3D reconstruction from multiple images is the general topic, although the article leaves something to be desired. There is a lot of work on the topic and it is considered a hard computer vision problem. It does get a lot easier if you can label points so you don't have to worry about which points match with which.--Salix (talk): 09:18, 24 July 2013 (UTC)
- Yes, for example you can use a laser to scan points from 3 orthogonal directions, and use those to reconstruct a surface. Of course, it's not so good for reconstructing hollow spots on the inside. StuRat (talk) 05:03, 25 July 2013 (UTC)
- Thank you both:-) I never thought about the computer vision angle. What I was thinking of is something much more simple: I have a collection of 2d sprites, I'm not a good 3d artist, but I am good at math and programming; for my own use, I was toying with the idea of making 3d versions of these from the 2d (for a game I'm making, just for my own playing, so it doesn't need to be good).Phoenixia1177 (talk) 06:53, 25 July 2013 (UTC)
- If you have two orthogonal views of a curve, then constructing a 3D curve from it is relatively simple. Say one is in the XY plane and another in the XZ plane. Here's how you "marry" the curves, point by point:
Point   XY plane   XZ plane   3D point
=====   ========   ========   ========
  1       0,0        0,0       0,0,0
  2       1,2        1,3       1,2,3
  3       2,6        2,9       2,6,9
- Of course, actually running a curve through the sample points is a bit tricky, and a small error in the point locations can cause wild results. StuRat (talk) 07:21, 25 July 2013 (UTC)
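A minimal sketch of the point-by-point "marriage" described above, assuming the two projections are already sampled at matching x values (the helper name is made up for illustration, not from any library):

    # Combine a curve seen in the XY plane with the same curve seen in the XZ plane.
    # Assumes both projections are sampled at the same x positions, in the same order.
    def marry_projections(xy_points, xz_points):
        points_3d = []
        for (x1, y), (x2, z) in zip(xy_points, xz_points):
            if abs(x1 - x2) > 1e-9:
                raise ValueError("projections are not sampled at matching x values")
            points_3d.append((x1, y, z))
        return points_3d

    xy = [(0, 0), (1, 2), (2, 6)]   # curve as seen from the front (XY plane)
    xz = [(0, 0), (1, 3), (2, 9)]   # curve as seen from the side (XZ plane)
    print(marry_projections(xy, xz))  # [(0, 0, 0), (1, 2, 3), (2, 6, 9)]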
Excel 2010 startup
Yet another Excel 2010 / Windows 7 question! Is there a way of making it start up without opening a new worksheet? On older versions I would add a "/e" option to the target, but I appear to have a write-protected target that doesn't list the exe file anyway, so I can't add /e. -- SGBailey (talk) 08:44, 24 July 2013 (UTC)
- Answered my own question. I made a NEW shortcut to the actual .exe file (rather than whatever shortcut it is that Windows creates when you install Office). That did have a target and I could add /e to it. So I'll use that and delete all the auto-magic Office created shortcuts. Cheers. -- SGBailey (talk) 08:49, 24 July 2013 (UTC)
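For anyone finding this later, the resulting shortcut target ends up looking something like the line below. The path is only an example (it varies by Office version and install location); the /e switch mentioned above is the part that suppresses the blank workbook.

    "C:\Program Files\Microsoft Office\Office14\EXCEL.EXE" /e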
RECORDING A VIDEO FILE
How can I record a video file (e.g. BBC news or some animation showing a heart beat, etc.) from the internet to my computer? Is there any simple software? Thank you. 175.157.149.54 (talk) 09:34, 24 July 2013 (UTC)
- See Wikipedia:Reference_desk/Computing#Converting_from_an_ages-old_Video-for-Windows_codec_to_.22modern.22_h264. 2A02:8422:1191:6E00:56E6:FCFF:FEDB:2BBA (talk) 12:12, 24 July 2013 (UTC)
- If using Firefox, you can use the excellent addon VideoDownloadHelper - I love it. --Yellow1996 (talk) 16:19, 24 July 2013 (UTC)
unit step
Is the unit step an energy signal or a power signal? — Preceding unsigned comment added by 223.196.163.250 (talk) 15:10, 24 July 2013 (UTC)
Is the unit step signal an energy signal or a power signal? Mkrtwr (talk) 15:27, 24 July 2013 (UTC)
- It's a power signal. From [4]
- "The unit step signal is a Power signal. Since when we find the power it comes to 1/2 (i.e finite value). And when we find its energy, we got INFINITY. If a signal has energy as infinity and power as a finite non-zero value, then it is a power signal, not an energy signal." --Yellow1996 (talk) 16:49, 24 July 2013 (UTC)
When I use MS Word, I keep the calculator open, but
Anytime I come back to Word, it disappears, and then I need to annoyingly go and start it all over again from the taskbar. Is there any way to keep it there? Ben-Natan (talk) 16:03, 24 July 2013 (UTC)
- Without knowing what OS you are running, I can suggest you try this 3rd party program. I haven't used it myself, though. Hope this helps! --Yellow1996 (talk) 16:59, 24 July 2013 (UTC)
- Since no context was given, not much help can be offered. Of course, you can try alt-tabbing, using a 3rd-party app to keep things on top, not having the Word window maximized, etc. --Sigma 7 (talk) 15:27, 25 July 2013 (UTC)
Windows NT
What does NT mean? This applies to the whole world. Couldn't find the answer on Wikipedia, and I can't access other sites than YouTube, Wikipedia and Armor Games on my computer. Pubserv (talk) 18:49, 24 July 2013 (UTC)
- Third sentence of our Windows NT article: "NT" was expanded to "New Technology" for marketing purposes but no longer carries any specific meaning. 209.131.76.183 (talk) 18:54, 24 July 2013 (UTC)
- NT is a kernel that is used for POSIX, Win32/64 and WinCE. Each of those OSes has a different PE-based executable format. Learn how to code native executables if you are curious. 2A02:8422:1191:6E00:56E6:FCFF:FEDB:2BBA (talk) 20:39, 24 July 2013 (UTC)
- WinCE isn't based on NT. There is a POSIX subsystem for NT but it's rarely used. NT is the OS kernel used in modern versions of Microsoft Windows (Windows NT, 2000, XP, Vista, 7, 8), but 3.1, 95, 98, and ME all supported Win32 but weren't NT-based. The PE format is the same for all of these, except for the obvious changes needed for 64-bit executables. One of the PE fields is a subsystem ID, which is different for Win32, POSIX, NT native, and probably WinCE. -- BenRG (talk) 07:53, 26 July 2013 (UTC)
- The DOS stub produced for a 64-bit PE is still a regular DOS MZ executable (16-bit code inside a 64-bit binary). You can just take a PE file, rename it .exe, and run it under DOS. — Preceding unsigned comment added by 2A02:8422:1191:6E00:56E6:FCFF:FEDB:2BBA (talk) 19:38, 26 July 2013 (UTC)
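As a rough illustration of the subsystem field mentioned above, the sketch below reads it straight out of a PE file's optional header. The field offsets follow the published PE/COFF layout; the file path is only a placeholder and error handling is minimal:

    import struct

    # Read the Subsystem field from a PE executable's optional header.
    # Common values: 1 = native, 2 = Windows GUI, 3 = Windows console, 7 = POSIX, 9 = Windows CE GUI.
    def pe_subsystem(path):
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] != b"MZ":
            raise ValueError("not an MZ/PE file")
        e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]    # offset of the PE header
        if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
            raise ValueError("PE signature not found")
        opt_header = e_lfanew + 4 + 20                        # skip signature and COFF file header
        return struct.unpack_from("<H", data, opt_header + 68)[0]   # Subsystem field

    print(pe_subsystem(r"C:\Windows\notepad.exe"))   # placeholder path; typically prints 2 (GUI)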
does Processing (language) run on ARM?
I'd like to know whether I can run my Processing (the language) file on an embedded ARM system running Ubuntu (for the ARM architecture). Since it's just Java, I would expect the answer to be yes, but only 32-bit and 64-bit versions are listed. Will one of these be good, e.g. with the Marvell Armada 510? Thanks. 178.48.114.143 (talk) 19:03, 24 July 2013 (UTC)
- This discusses installing Processing on a Raspberry pi, which is ARM. It seems that the binary Linux packages for Processing include an x86 JRE, but once one removes that and makes sure the correct ARM openjdk-jre is used, it works. -- Finlay McWalterჷTalk 19:38, 24 July 2013 (UTC)
- Thanks! This is appreciated. 178.48.114.143 (talk) 00:19, 25 July 2013 (UTC)
Packet loss in Mumble
Mumble is a VoIP program used for gaming.
My friend sounds like he's underwater when he's using it. The person who is hosting the server we're on reports that he's got 30% packet loss, which would explain it. My friend's download speed is in excess of 100Mb/s. The server is fine; no-one else is having the problem. What else could be causing this/how might we fix it? — Preceding unsigned comment added by 5.69.78.94 (talk) 19:08, 24 July 2013 (UTC)
- I doubt that the speed is in excess of 100Mb/s -- do you mean 100Kb/s? Anyway, if you are losing packets, there is a problem with your network connection. Is this a wireless connection? Looie496 (talk) 19:15, 24 July 2013 (UTC)
- Verizon offers 500Mb/s packages, so 100Mb/s is not unheard of. And, what matters is not the download speed but the upload speed, since the data is getting lost from the gamer on its way to the server. RudolfRed (talk) 20:21, 24 July 2013 (UTC)
- In France, Numericable offers fiber connections up to 200M for individuals. They are symmetric with unlimited usage. 2A02:8422:1191:6E00:56E6:FCFF:FEDB:2BBA (talk) 21:03, 24 July 2013 (UTC)
- MTR (software) (and its Windows incarnation winMTR) can be useful for identifying whether you really have packet loss, and goes some way to finding where it's happening. -- Finlay McWalterჷTalk 20:32, 24 July 2013 (UTC)
- Thanks for all the responses. My friend appears to have lost all connection tonight, so I won't be able to get this information to him until...some time. But this will be useful later, since this problem has happened before. 5.69.78.94 (talk) 20:43, 24 July 2013 (UTC)
- [speedtest.net] gives (actual) speed, ping, jitter and packet loss measurements. It's worth checking if the 100Mbps is a real speed, or a theoretically possible advertised speed. MChesterMC (talk) 08:51, 25 July 2013 (UTC)
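For anyone who wants a quick scriptable sanity check of packet loss in addition to the tools above, a crude approach is to fire numbered UDP datagrams at an echo service you control and count how many come back. The sketch below assumes such an echo service exists; the host and port are placeholders, and this is not how Mumble or MTR measure loss:

    import socket, time

    # Crude packet-loss probe: send numbered UDP datagrams to an echo service and
    # count how many come back. HOST/PORT are placeholders for an echo server you run.
    HOST, PORT, COUNT, TIMEOUT = "example.com", 7, 100, 0.5

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    received = 0
    for seq in range(COUNT):
        sock.sendto(str(seq).encode(), (HOST, PORT))
        try:
            sock.recvfrom(1024)
            received += 1
        except socket.timeout:
            pass
        time.sleep(0.02)   # pace the probes slightly

    print(f"loss: {100.0 * (COUNT - received) / COUNT:.1f}%")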