Wikipedia:Reference desk/Archives/Computing/2006 December 18
Computing desk
< December 17 | << Nov | December | Jan >> | December 19 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
December 18
Converting .EXE to .SWF
We created an EXE video game (available on this site) and we want to submit it to many popular game sites. However, they will only accept games in Flash format (.SWF). Is there a way to convert it to a .SWF file without starting over?
Thanks, Magnaverse productions. BOTW 03:01, 18 December 2006 (UTC)
- I doubt it. Who created the .exe to begin with? And what did they use to create it? The exe would have started life as some sort of source code; your only chance is to get the source code and port it to Flash. I doubt that can be done automatically. At the least, it would probably require someone who is slightly familiar with both applications; at most, it may be impossible short of starting from scratch. Vespine 04:27, 18 December 2006 (UTC)
- The .EXE was created using a program called Game Maker Workshop. That software is available for free download at this other site. If you have questions about how the game works, you can download the game from the first link. BOTW 12:32, 18 December 2006 (UTC)
- To provide an SWF version, the game would originally have needed to be developed in Shockwave. With your current game, you can't get there from here. Your best bet would be, if it's practical, to re-do the game in SWF from the beginning, using your Game Maker Workshop executable as a model, reusing artwork, etc. There is no way to convert it as is. - CHAIRBOY (☎) 21:24, 18 December 2006 (UTC)
- Yeah. EXE and SWF are totally different. EXE is a natively-executable file, while SWF means it is being executed by a Shockwave interpreter, i.e. written in a very specific language. --140.247.249.64 18:49, 19 December 2006 (UTC)
- Executable files aren't necessarily native, they can be interpreted too, even in windows. The .NET framework is a prominent example --frothT C 23:00, 21 December 2006 (UTC)
font
What is the font used to display the (Asian/European) electrical underwriter symbol "CE"? 71.100.6.152 06:11, 18 December 2006 (UTC)
- See CE mark. It's not really a font, just an image. --h2g2bob 12:41, 18 December 2006 (UTC)
- Correction, it's both. Anchoress 12:46, 18 December 2006 (UTC)
- Is there actually a full set of letters in this font? (link please) 83.100.250.252 13:44, 18 December 2006 (UTC)
- Try http://www.myfonts.com/fonts/ef/expressa . —PurpleRAIN 20:14, 18 December 2006 (UTC)
- Cheers - I was actually looking at 'construction details of ..' and couldn't help wondering how the spacing for an O was worked out, perhaps even letting it overlap, e.g. in the case of "..OE..". 87.102.11.137 21:31, 18 December 2006 (UTC)
What is meant by Single Shared Platforming in IT?
Strike-through text —The preceding unsigned comment was added by 220.247.254.255 (talk) 08:28, 18 December 2006 (UTC).
- Typically, in sectors like finance, advances in technology came so fast that development began and continued on many different platforms. Now it is at the point where in most banks a single transaction likely needs to pass through several platforms before being processed, some of them legacy and some of them cutting edge. If it isn't obvious, this in itself causes many problems. Specialist knowledge, compatibility, multiple support teams, multiple points of failure, duplicated redundancy: they all cause issues in an infrastructure with many platforms. The idea of a Single Shared Platform is that there is literally one single shared platform that runs everything. Vespine 21:28, 18 December 2006 (UTC)
Audio analysis question
Some high school friends and I are trying to develop a free audio editor to do various live and recorded edits. We are discussing how to apply a digital equalization using the frequency domain. We realize there exist time-domain algorithms, but prefer to stick to the frequency domain, at least initially (while we try to learn enough background for the other). So, we are now able to do fast & discrete Fourier transforms, but we are unsure how to apply them. One constraint (which I am told is common and not avoidable) is that our transform algorithm only accepts inputs of length 2^n samples. The options we see are:
- Apply the transform to the entire track, adding 0's to the end of the waveform to make the number of samples right. Multiply by the equalization curve, and reconstruct the wave.
- Interpolate enough points into the track to make the number of samples correct, and apply the transform to the whole song. Multiply by the equalization curve, and reconstruct the wave.
- Cut the data into sections of 2^n samples, and apply the transform to each of those, adding either 0's or interpolated data to the last one. Then multiply each by the equalization curve, reconstruct, and stitch the waveforms (time domain) together in the original order.
If anyone has ideas, resources, references, or suggestions, they'd be much appreciated. Thanks, 48v 19:30, 18 December 2006 (UTC)
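Option 1 above might be sketched roughly as follows. This is a minimal illustration using numpy, not code from the original thread; `eq_whole_track` and the `eq_curve_fn` gain function are hypothetical names introduced for the sketch.

```python
import numpy as np

def next_pow2(n):
    # Smallest power of two >= n, to satisfy the 2^n length constraint.
    p = 1
    while p < n:
        p *= 2
    return p

def eq_whole_track(signal, eq_curve_fn):
    """Option 1: zero-pad the whole track to a power-of-two length,
    apply a gain curve in the frequency domain, and reconstruct.
    eq_curve_fn maps normalized frequencies (0..0.5) to gains."""
    n = len(signal)
    m = next_pow2(n)
    padded = np.zeros(m)
    padded[:n] = signal                    # zeros fill the rest
    spectrum = np.fft.rfft(padded)
    freqs = np.fft.rfftfreq(m)             # bin frequencies, 0..0.5
    spectrum *= eq_curve_fn(freqs)         # apply the EQ curve per bin
    out = np.fft.irfft(spectrum, m)
    return out[:n]                         # trim the padding back off
```

With a flat gain curve (all ones) this round-trips the signal exactly, which is a useful sanity check before trying real EQ curves.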
- Applying the transform to the whole track at once doesn't sound like a good idea, as you would then lose the time-domain variation. I've done similar things by slicing the data into small, 50% overlapping 2^n-sample pieces, then applying a Blackman window to each and padding with zeroes before the transform. (But I'm not sure how it would apply to this.) –mysid☎ 21:20, 18 December 2006 (UTC)
- Thanks for the response. I should have added, my understanding is that as long as our FFT includes very low frequencies, down to 1/(2*track length), we shouldn't lose the time-domain variation because we would be reconstructing only one cycle of the function. Am I incorrect on this point? 48v 21:27, 18 December 2006 (UTC)
- As far as I remember, FFT only outputs powers at different frequencies (from 0 to n/2) and phases, "averaged" from the whole sample. So if you feed a 3-minute piece into an FFT at once and try to reconstruct the original from it, all you get is a 3-minute constant chord. That's why it would have to be done in small time slices. –mysid☎ 21:37, 18 December 2006 (UTC)
- Since the lowest frequency a human can hear (20 Hz) works out to about 2200 samples per waveform at 44.1 kHz sampling, a 4096-sample window is plenty for human hearing. You'll also want to use progressively smaller sample sizes for higher frequencies (consider the number of samples per waveform as 1/2 the minimum for the FFT window size), and overlap your boundaries as mentioned above. Droud 03:27, 19 December 2006 (UTC)
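The slicing-and-windowing approach described in the two replies above might be sketched like this (a minimal numpy overlap-add sketch; `eq_blocks` and the per-bin `eq_gains` array are hypothetical names, and a Hann window is used instead of Blackman because at 50% overlap Hann windows sum to a constant, which makes reconstruction exact for a flat EQ):

```python
import numpy as np

def eq_blocks(signal, eq_gains, block=4096):
    """Option 3: process 50%-overlapping Hann-windowed blocks and
    overlap-add the results. eq_gains holds block//2 + 1 per-bin gains."""
    hop = block // 2
    # Periodic Hann window: consecutive windows at 50% overlap sum to 1.
    window = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(block) / block)
    n = len(signal)
    # Pad with hop zeros in front and enough at the back so every
    # input sample is covered by two overlapping windows.
    total = hop * int(np.ceil(n / hop)) + block
    padded = np.zeros(total)
    padded[hop:hop + n] = signal
    out = np.zeros(total)
    for start in range(0, total - block + 1, hop):
        chunk = padded[start:start + block] * window
        spectrum = np.fft.rfft(chunk)
        spectrum = spectrum * eq_gains          # apply per-bin gains
        out[start:start + block] += np.fft.irfft(spectrum, block)
    return out[hop:hop + n]                     # drop the padding
```

The windowing suppresses the spectral leakage you would get from cutting the track at arbitrary points, at the cost of processing each sample twice; a flat gain array reconstructs the input exactly, which checks the overlap-add bookkeeping.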
- According to the books I researched from, frequencies that people cannot hear (below about 15 Hz for those lucky enough to have really good ears) are used to basically describe the overall shape of the data, rather than a specific audible frequency. Essentially, it ought to modulate the audible data. 48v 08:55, 23 December 2006 (UTC)
You might try: The Scientist and Engineer's Guide to Digital Signal Processing, chapter 18. EricR 03:47, 19 December 2006 (UTC)
- Thanks. Much appreciated. 48v 08:55, 23 December 2006 (UTC)
This reminds me a little of gapless playback. You can use padding to increase the length, and sometimes you'll want to trim the padding. --Kjoonlee 14:34, 19 December 2006 (UTC)
- Thanks, I'll take a look at that as well. 48v 08:55, 23 December 2006 (UTC)
Xcode, Universal Binary, & Multiprocessing
Is it safe to say that all modern programs developed with Apple's Xcode are multi-processor/core aware? Therefore, are all Universal Binary applications multi-processor/core aware? Lastly, do all Universal Binary programs have to be created with Xcode? (Is it possible to make a UB app without Xcode?) --24.249.108.133 23:16, 18 December 2006 (UTC)
- As Xcode states, Xcode can build universal binaries which allow software to run on both PowerPC and Intel-based (x86) Macs, and can build 32- and 64-bit applications for both architectures. So this means it CAN build universal binaries, but only if the application is coded that way, of course; not every application compiled with Xcode is a universal binary. Also, on Universal binary it is stated that many software developers have provided universal binary updates for their products since the 2005 WWDC, but also that such updates are yet to be released for some high-profile applications, such as Adobe Creative Suite, Adobe After Effects and Microsoft Office 2004. So I reckon it is safe to say that not all modern programs developed with Xcode are multi-core aware. Aetherfukz 23:38, 18 December 2006 (UTC)