Wikipedia:Reference desk/Archives/Computing/2014 April 16
Computing desk
< April 15 | << Mar | April | May >> | April 17 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 16
iPad won't play songs from CDs
Hi, I've uploaded my CDs to iTunes on my computer, and they work fine. I've also played them on my iPad, and in the past, they have also worked fine. Lately, they have just refused to play on the iPad. I've tried turning the iPad on and off, and resyncing, but to no avail. The iPad doesn't say what's wrong. I downloaded a song for free on the internet, and tried to get it to play, but I don't know why that would affect any other songs. That song also works on the computer, but hasn't transferred to the iPad. Purchased songs are not affected, and a small few songs from CD are also not affected. Any idea what's going on? IBE (talk) 02:27, 16 April 2014 (UTC)
- What format are the files? Did you "rip" them to MP3 format from your CDs? (If not, then you could try that as an alternative.) Dbfirs 06:12, 16 April 2014 (UTC)
Well, it seems to have fixed itself with an update. Of course I knew that it would be a good idea to update the software, but somehow iTunes hasn't let me download the latest iOS for a few weeks. I was thinking it was maybe just blocked in China. So good news there.
Well, for what it's worth, if anyone should ever encounter the problem: they are .m4a. For example, "Intervention" by Arcade Fire has 3 files: (1) "04 Intervention" with a miniature semi-quaver logo on the left, and it is listed as an MPEG-4 Audio file, 8.42MB; (2) "04 Intervention.m4a.files" - folder with 0 bytes; and (3) "04 Intervention.m4a.smfmf" - size 8KB. The first two are in the folder C:\Users\IBE\Music\iTunes\iTunes Media\Music\Arcade Fire and the third is in the folder ...\Neon Bible, i.e. the same tree with an extra folder for the name of the album. IBE (talk) 11:22, 16 April 2014 (UTC)
Login/signup - terms
A crossover of computing and language: Is there consistency in English for terms meaning
- Create an account here!
- Get into your account here!
As far as I have encountered, the first activity is called
- register or sign up, the second
- login or sign in
But is this handled consistently in the English-speaking world? Or are there websites with different usage of those terms? --KnightMove (talk) 08:00, 16 April 2014 (UTC)
- Many sites have different text for those two things. Choose whatever is appropriate for your audience. 217.158.236.14 (talk) 08:14, 16 April 2014 (UTC)
- Ok, the article Login helps on one topic - is there one on the other (I failed to find one in Register). --KnightMove (talk) 08:28, 16 April 2014 (UTC)
- I would say your distinction is correct, and would be understood by any English speaker (certainly any computer-literate British English speaker). You might consider moving this to the Language Reference Desk. Rojomoke (talk) 12:03, 16 April 2014 (UTC)
- "Log on" or "sign on" are sometimes used instead of log in or sign in. Login is sometimes one word, etc. 70.36.142.114 (talk) 12:56, 16 April 2014 (UTC)
- You could check what popular web sites do - twitter, facebook, google, wikipedia, amazon etc. They have probably settled on terms that are understandable to a lot of people. If you are building a specific type of web site, check existing sites that have a similar style of registration (e.g. e-commerce sites may have preferred terms, or email sites might call it "create mailbox") - let them do the market research for you. 88.112.50.121 (talk) 15:48, 16 April 2014 (UTC)
- I've seen some web pages that don't initially make a distinction between signing up and logging in. They just ask for your username and password. If that username doesn't exist in their DB, they say "That username is not in our database, would you like to register using that username and password ?". StuRat (talk) 16:06, 16 April 2014 (UTC)
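- (A minimal shell sketch of that combined flow, for illustration only — the users.txt flat file and its one-username-per-line layout are assumptions, not how any real site stores accounts:)
- $ read -r username   # read the submitted username; checked against a hypothetical flat-file "DB" below
- $ grep -qxF "$username" users.txt && echo "Welcome back - please log in." || echo "That username is not in our database - would you like to register it?"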
- Don't fall for the common mistake that many sites make... "Log-in" (also "login", which isn't really a word...) is really a noun (like "registration", rather than "register"), and the verb is "log in". So a link that takes you to a page where you're able to log in could reasonably be labelled "log-in", but I'd avoid it. A button that actually logs you in should say "log in", not "log-in", and definitely not "login"!
- Having both "sign up" and "sign in" could be confusing. Wikipedia uses "create account" and "log in", which are clear, grammatical, and unambiguous; how about those? 14:07, 20 April 2014 (UTC) — Preceding unsigned comment added by Yeryry (talk • contribs)
- I don't have any direct advice on the current form of these terms, but I'll describe how the terms arose, which might influence your choice. As recently as the 1980s computers or computer terminals were scarce, especially on college campuses. It was common to have a log book listing all the hours of the day and night. One would reserve time on the computer or terminal by writing one's name in the log book, and then would actually sign and write the time one started and stopped using the equipment. People who reserved time in the wee hours of the night but didn't show up to use the equipment were severely frowned upon. Much of the equipment used in those days didn't have any automated logging software, so the paper log book was the only record of who used the equipment and how much. Jc3s5h (talk) 14:19, 20 April 2014 (UTC)
Raw IP logs
How common is it for internet services (not necessarily web sites) to log raw IP packets and retain the logs for any period of time? Are any places known to do this? Reason for asking: I'm wondering about the possibility of scanning old logs for signs of the Heartbleed attack. Normal application logs such as web server logs are not sufficient for this. 70.36.142.114 (talk) 12:54, 16 April 2014 (UTC)
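- (If such packet logs did exist, one way to scan them would be to filter for TLS heartbeat records — content type 24 per RFC 6520 — with a tool like tshark. A sketch, assuming a capture file named old_capture.pcap and a Wireshark version whose display field is still named ssl.record.content_type:)
- $ tshark -r old_capture.pcap -Y 'ssl.record.content_type == 24'   # list heartbeat records; unusually large ones are suspect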
- It is common for some types of software developers to use packet analyzers. It is less common for a production-level service to perform packet analysis; that level of profiling is more useful for development and debugging, while web-services can usually be analyzed at higher layers of abstraction. However, high-volume web services may still do packet-level studies to tune performance; or as part of a network security policy. Commercial support for packet capture software is available. Google's advice, from its developer page, "Network Capture Tools for API Developers", essentially suggests that packet-sniffing is useless when SSL is involved (because, if SSL were working correctly, interception of the packets provides very little useful data). It seems plausible that major web services take the same approach: it would be pointless to capture and store packet-level data (i.e. to log the transport layer in the OSI model), if the security system worked correctly. Barring the sinister, it would be hard for a service-provider to justify budgeting and maintaining storage space for a bunch of logs of impenetrable data. In retrospect, we now know that such data (if logged) might be vulnerable to the CVE-2014-0160 problem. A benevolent entity with those logs might be able to detect a history of intrusions; and a sinister entity with such logs might be able to decode the (previously-"secure") encrypted data by perpetrating new attacks to grab private keys.
- It has been speculated that highly capable entities - like the NSA - actually do have the budget and resources to log immense volumes of encrypted data. It has been shown that highly-capable entities may already be collecting such data - for example, here's EFF's brief on the NSA surveillance scandal from 2006. Even if the NSA could not decode such data when it is collected, it may justify its data-collection effort in case a future capability (say, a hypothetical software bug that was present but unknown during the collection) enables future decryption. Nimur (talk) 18:41, 16 April 2014 (UTC)
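- (For reference, packet-level capture of the kind described above is typically done with a tool such as tcpdump; a minimal sketch, where the interface name and the port-443 filter are assumptions for the example:)
- $ sudo tcpdump -i eth0 -w capture.pcap 'tcp port 443'   # write raw HTTPS packets to capture.pcap
- (The saved file can later be re-read with tcpdump -r capture.pcap or opened in a GUI analyzer such as Wireshark.)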
- Yes, that type of NSA data retention was used to great effect in the Venona project. But in this case, if someone was exploiting Heartbleed 2 years ago, the NSA would be one of the main suspects, and they'd never tell. It's certainly common for large web sites to log the contents of all incoming HTTP queries, for data mining purposes. The data volume of the raw IP packets would be comparable. On the other hand, logging them could be a questionable security practice, since they would contain unhashed user passwords. 70.36.142.114 (talk) 23:53, 16 April 2014 (UTC)
- I find it very unlikely that most ISPs log wholesale packet data (pcaps) as a matter of course. There are tremendous space issues, and in fact legal issues too, even for an ISP hiding behind a wall of TOSes. Even the NSA is selective about what they log and store. The amount of traffic coming through even a modest ISP will quickly overwhelm storage ability. I don't know why 70.36 thinks the "data volume of the raw IP packets would be comparable" to HTTP headers alone. Those headers would be included in the raw packet data, plus much much more.
- The best bet for finding early instances of heartbleed are honeypot servers, or extremely paranoid sysadmins who log at extended levels. Even then, if such a valuable vulnerability was known it's unlikely to have been used widely; instead, if it was used at all, it was probably targeted. Shadowjams (talk) 06:58, 17 April 2014 (UTC)
- I don't understand the tremendous space issues. When I said "size of the http queries" of course that includes the post data and not just the headers, though typical queries on most web sites are just GET. Plus there would be some packet headers etc., but all that compresses pretty well, so it's not that big a deal (consider that the HTTP data typically goes into analytics and indexing systems where its space consumption is enlarged rather than compressed). It's usually taken as par for the course that web sites log all their HTTP traffic, so why would the IP packets (except for passwords) bring on more legal issues? The NSA is in a different situation because the queries they log were not sent to THEIR servers, so their accessing the data at all is legally dubious to begin with. You raise a good point that it's possible some people have been running SSL honeypot servers for long periods. I may ask around about that. But, I think someone trying to exploit Heartbleed before the disclosure would have been at least somewhat selective about targets, so random honeypots might not get touched (they are of course getting scanned with Heartbleed probes all the time now). It would have to be a site with sensitive data that someone was trying to snag. 70.36.142.114 (talk) 23:34, 17 April 2014 (UTC)
Uniq Mismatch
I have a file, asdf2, with four identical lines, which does not get consistent results from the Linux uniq command. There is no carriage return in the file, only line feeds.
The file has four lines:
- $ cat asdf2 | wc -l
- 4
Uniq claims only the first two lines are identical:
- $ cat asdf2 | uniq -c | wc -l
- 3
Removing a special character makes all lines identical:
- $ cat asdf2 | sed 's/\xFE//g' | uniq -c | wc -l
- 1
This character is not in the file:
- $ cat asdf2 | sed 's/\x1C/@/g' | tr -dc '@'
Replacing with a different character makes all lines identical:
- $ cat asdf2 | sed 's/\xFE/\x1C/g' | uniq -c | wc -l
- 1
How can something like this happen? — Preceding unsigned comment added by 198.208.251.48 (talk) 15:57, 16 April 2014 (UTC)
- What is the result if you run a hex dump on the input file:
xxd asdf2
- That would help us understand if anything is unusual about the file. I suspect shenanigans related to unusual (or malformed) character encoding. Nimur (talk) 17:23, 16 April 2014 (UTC)
I have cut the file down as far as possible while still obtaining the same behavior. It seems to require about 120 characters per line in order to misbehave. The hex dump:
- 0000000: 506c 616e 6e65 6420 436f 7374 fe41 6374 Planned Cost.Act
- 0000010: 6976 6974 79fe 4163 7469 7669 7479 2047 ivity.Activity G
- 0000020: 726f 7570 fe41 6374 6976 6974 7920 4772 roup.Activity Gr
- 0000030: 6f75 7020 4944 fe41 6374 6976 6974 7920 oup ID.Activity
- 0000040: 4944 fe41 64fe 4164 2049 44fe 4164 2053 ID.Ad.Ad ID.Ad S
- 0000050: 7461 7475 73fe 4164 2054 7970 65fe 4164 tatus.Ad Type.Ad
- 0000060: 7665 7274 6973 6572 fe41 6476 6572 7469 vertiser.Adverti
- 0000070: 7365 7220 4772 6f75 70fe 4164 7665 7274 ser Group.Advert
- 0000080: 6973 0a50 6c61 6e6e 6564 2043 6f73 74fe is.Planned Cost.
- 0000090: 4163 7469 7669 7479 fe41 6374 6976 6974 Activity.Activit
- 00000a0: 7920 4772 6f75 70fe 4163 7469 7669 7479 y Group.Activity
- 00000b0: 2047 726f 7570 2049 44fe 4163 7469 7669 Group ID.Activi
- 00000c0: 7479 2049 44fe 4164 fe41 6420 4944 fe41 ty ID.Ad.Ad ID.A
- 00000d0: 6420 5374 6174 7573 fe41 6420 5479 7065 d Status.Ad Type
- 00000e0: fe41 6476 6572 7469 7365 72fe 4164 7665 .Advertiser.Adve
- 00000f0: 7274 6973 6572 2047 726f 7570 fe41 6476 rtiser Group.Adv
- 0000100: 6572 7469 730a 506c 616e 6e65 6420 436f ertis.Planned Co
- 0000110: 7374 fe41 6374 6976 6974 79fe 4163 7469 st.Activity.Acti
- 0000120: 7669 7479 2047 726f 7570 fe41 6374 6976 vity Group.Activ
- 0000130: 6974 7920 4772 6f75 7020 4944 fe41 6374 ity Group ID.Act
- 0000140: 6976 6974 7920 4944 fe41 64fe 4164 2049 ivity ID.Ad.Ad I
- 0000150: 44fe 4164 2053 7461 7475 73fe 4164 2054 D.Ad Status.Ad T
- 0000160: 7970 65fe 4164 7665 7274 6973 6572 fe41 ype.Advertiser.A
- 0000170: 6476 6572 7469 7365 7220 4772 6f75 70fe dvertiser Group.
- 0000180: 4164 7665 7274 6973 0a50 6c61 6e6e 6564 Advertis.Planned
- 0000190: 2043 6f73 74fe 4163 7469 7669 7479 fe41 Cost.Activity.A
- 00001a0: 6374 6976 6974 7920 4772 6f75 70fe 4163 ctivity Group.Ac
- 00001b0: 7469 7669 7479 2047 726f 7570 2049 44fe tivity Group ID.
- 00001c0: 4163 7469 7669 7479 2049 44fe 4164 fe41 Activity ID.Ad.A
- 00001d0: 6420 4944 fe41 6420 5374 6174 7573 fe41 d ID.Ad Status.A
- 00001e0: 6420 5479 7065 fe41 6476 6572 7469 7365 d Type.Advertise
- 00001f0: 72fe 4164 7665 7274 6973 6572 2047 726f r.Advertiser Gro
- 0000200: 7570 fe41 6476 6572 7469 730a up.Advertis.
In fact, pasting this directly into the command line seems to work as well:
- $ echo 'Planned CostþActivityþActivity GroupþActivity Group IDþActivity IDþAdþAd IDþAd StatusþAd TypeþAdvertiserþAdvertiser GroupþAdvertis
- Planned CostþActivityþActivity GroupþActivity Group IDþActivity IDþAdþAd IDþAd StatusþAd TypeþAdvertiserþAdvertiser GroupþAdvertis
- Planned CostþActivityþActivity GroupþActivity Group IDþActivity IDþAdþAd IDþAd StatusþAd TypeþAdvertiserþAdvertiser GroupþAdvertis
- Planned CostþActivityþActivity GroupþActivity Group IDþActivity IDþAdþAd IDþAd StatusþAd TypeþAdvertiserþAdvertiser GroupþAdvertis' | uniq -c | wc -l
- 3 — Preceding unsigned comment added by 198.208.251.48 (talk) 20:37, 16 April 2014 (UTC)
- There's an 0xFE character, which is probably meant as a broken vertical bar separator using an Extended ASCII variant. However, uniq is UTF-8 compliant; so, in tandem with the next character, this byte sequence forms a valid unicode UTF-8 multibyte encoded thorn ( þ ), among others. Can you replace your bar with a UTF-8 equivalent? There are a few that you can copy-paste out of our article. Nimur (talk) 21:04, 16 April 2014 (UTC)
- 0xFE is never valid in UTF-8 encoded text. (This could be the reason it was chosen as a record separator for this file.) 0xFE codes for þ in Latin-1 and related character sets such as Windows-1252. -- BenRG (talk) 23:56, 16 April 2014 (UTC)
- BenRG is correct, 0xFE is not a valid way to start an UTF-8 sequence. 0xFE could appear in UTF-16 or UTF-32, but it would need to be followed by a legal value. So, this particular sequence is malformed if interpreted as one of those encodings. Nimur (talk) 00:17, 17 April 2014 (UTC)
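- (A quick way to confirm that 0xFE can never appear in valid UTF-8, assuming iconv is installed and a shell whose printf supports \x escapes; the exact error wording varies by implementation:)
- $ printf '\xfe' | iconv -f UTF-8 -t UTF-8
- iconv: illegal input sequence at position 0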
It may be that uniq is combining the xFE byte with the following bytes to create a single character for the comparison (and I would like to know if there is a way to verify whether that is happening), but that does not seem like it would break the comparison and cause identical rows to compare as different. — Preceding unsigned comment added by 198.208.251.48 (talk) 21:42, 16 April 2014 (UTC)
- There is a way to verify! You can compile from source, and run uniq in the debugger. I'm running uniq on OS X, so I grabbed its source-code from text_cmds-87, compiled it, and ran in lldb. I can symbolically step through the comparison logic.
- uniq makes liberal use of wcscoll and getwc to do the heavy lifting. You might want to read C string handling; it's probable that when you copy/paste the text, your terminal emulator intelligently converts LF into CRLF, so copy-pasting the file-contents yields different behavior than reading the file.
- I should say: I cannot reproduce your problem, even when I intentionally use 0xFE instead of using a valid UTF-8 bar.
- It is very probable, because you are running Linux instead of OS X, that your uniq is based on the GNU coreutils source - available from gnu.org - so your program's logic may behave differently.
- Nimur (talk) 22:25, 16 April 2014 (UTC)
- I can't reproduce it on linux (uniq from coreutils 8.20) either, at least with the data from the hexdump. -- Finlay McWalterᚠTalk 22:41, 16 April 2014 (UTC)
- The problem is undoubtedly the locale settings. uniq --help actually mentions that "comparisons honor the rules specified by ‘LC_COLLATE’", probably because this bites a lot of people. (I know it's bitten me.) Original poster, type locale at the command line to see your locale settings, and try LC_COLLATE=C uniq instead of just uniq to tell uniq to use the C locale, which treats characters other than linefeed as uninterpreted bytes. (I haven't tried that, but it should work...) -- BenRG (talk) 23:40, 16 April 2014 (UTC)
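- (To illustrate, a hypothetical session; "1" is the count you'd expect if the locale is indeed the culprit. Note that if LC_ALL is set in your environment, it overrides LC_COLLATE, so you may need LC_ALL=C instead:)
- $ LC_COLLATE=C uniq -c asdf2 | wc -l
- 1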
Side comment on style
You never need to cat a single file into a pipe like that. Instead of
cat asdf2 | wc -l
you can just redirect the input of wc:
wc -l <asdf2
You can also write it this way if you prefer:
<asdf2 wc -l
Using cat would be appropriate if there are, or might be, several files and you want to treat the content as one file:
cat asdf1 asdf2 | wc -l
cat *sdf* | wc -l
cat "$@" | wc -l
Also, almost† all UNIX/Linux commands where you're likely to want to do this will accept one or more filenames as arguments, so you can write:
wc -l asdf2
wc -l *.foo
although with some commands this produces subtly different results; for example, with wc it causes the filename to appear in the output, and if more than one file is named, it gives the count for each file and then a total.
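For example (hypothetical files and counts, just to show the output format):
wc -l asdf1 asdf2
      4 asdf1
      4 asdf2
      8 total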
†The most important exception is probably tr; I often find myself writing something like tr x y file and then remembering it has to be tr x y <file.
--50.100.193.30 (talk) 21:50, 16 April 2014 (UTC)
- Thank you for this information :) — Preceding unsigned comment added by 198.208.251.48 (talk) 22:05, 16 April 2014 (UTC)
- Just a warning: these instructions conflate a feature of the shell (i.e. the bash shell) and the utility program. This makes a difference! As an example: if I run uniq in the debugger (and absent a shell), it behaves correctly. If I exec the exact same binary from bash using Terminal.app, I get "uniq: /private/dev_scratch/example.txt: Illegal byte sequence" ! So, in this case, your problem almost certainly stems from something that your shell is doing to munge the bytes before they actually arrive at the stdin for uniq! You might want to avoid `cat`ing non-UTF-8 sequences if your terminal is configured for UTF-8; or if your locale is "non-default," or so on. (Konsole and GNOME Terminal - the most popular terminal emulators on desktop Linux flavors - are a lot more intrusive than you might realize!) Nimur (talk) 22:52, 16 April 2014 (UTC)
- Shells and terminals do not munge byte streams (such as stdin redirected from a file). What is happening is that uniq itself is interpreting the bytes according to the locale. -- BenRG (talk) 23:40, 16 April 2014 (UTC)
- ...and the locale is set by the shell. Nimur (talk) 00:00, 17 April 2014 (UTC)
- The locale is controlled by environment variables and can be set by a shell script, the GUI launcher, the terminal emulator, or ssh, for example. It would typically already be set before the shell starts in a GUI environment. On Ubuntu it looks like it's normally set by pam_env(8). In any case, the problem definitely isn't "something that your shell is doing to munge the bytes before they actually arrive at the stdin for uniq", since shells don't do that regardless of locale settings. -- BenRG (talk) 05:18, 17 April 2014 (UTC)
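- (A minimal way to check which of these variables are set in your own session:)
- $ env | grep -E '^(LANG|LC_)'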
- I believe that the comments from "Just a warning" down to here are misplaced: they belong in response to the main part of this item, not the side comment. --50.100.193.30 (talk) 11:02, 17 April 2014 (UTC)
Drawing software
I recently upgraded my computer, as my laptop is in the process of dying slowly and painfully, and on this new computer I am running Ubuntu; whilst I have used Linux before, this is my first time relying almost solely on it for everything. As such, I need to find some sort of drawing software to download and install. Previously I have enjoyed using paint.net for most things, but it seems there is not an Ubuntu-friendly version of it, at least not that I can find. My alternative, then, is to seek some other Linux-compatible option that is as good as what I have been used to, though I am not sure where to start, as there are so many free drawing programs, and I have little idea what most of them are like. The only ones I have previously tried are Inkscape, which I wasn't too fond of and found complicated and awkward to use, and GIMP, which I quickly stopped using, finding it somewhat lacking in a few areas. Wondering if people can suggest any other alternatives. Doesn't need to be anything fancy, just a basic program that will let me import sketches, trace over them in different layers, colour and shade and so on. Some amongst the many options must be suitable, but I'd rather not download every single one to find out.
Thank you,
213.104.128.16 (talk) 22:45, 16 April 2014 (UTC)
- GIMP is definitely the gold standard of image manipulation on Linux. Tracing and layers sound exactly like the kind of thing it excels at. What was it you found lacking? Vespine (talk) 22:55, 16 April 2014 (UTC)
- Ibid: GIMP may require a steep learning curve but it pays dividends in the long term. A miserly $2.59 on (say) Beginning GIMP: From Novice to Professional and you'll run rings around anyone using Paint, Inkscape, ad nauseam. --Aspro (talk) 23:44, 16 April 2014 (UTC)
- As they say: no pain, no gain. Why remain mediocre all one's life? --Aspro (talk) 23:48, 16 April 2014 (UTC)
- Thinking about it: if you start off with MyPaint (which is maybe the Linux equivalent of what you seek), then proceed to GIMP, you can maybe achieve results like this: [1]. Sit back and enjoy for 20 mins. Mind you, I have never seen a little red-hatted vätte in broad daylight like this, but I guess that is just artistic license. More on MyPaint --Aspro (talk) 01:25, 17 April 2014 (UTC)
- The OP was using raster, so they may want to stick there, and there are many reasons why someone would use raster for a variety of things (even vector images may sometimes be edited in GIMP or some other raster program for final display). But you can't know if you are making the right choice if you don't properly understand the choice you are making, and the choice is definitely far more complicated than one format running rings around the other, or the learning curve, or any one program being definitely better than the other when they have very different goals. This may not matter if you are just dealing with stuff at a basic level like the OP is now, but it seems a poor idea, if they are going to really get into it, not to first understand what they are doing.
- In other words, if the OP is going to read a tutorial and they don't already know the difference, free reading online may be useful in the first instance, rather than spending $2.59 on something which may or may not cover such essential basics as the difference between vector graphics and raster graphics, and therefore the difference between a program primarily designed for dealing with one and a program primarily designed for dealing with the other, so they don't make the mistake of thinking GIMP is a general substitute or alternative to Inkscape. Notably, AFAIK, for many graphic design professionals the choice is not binary; they in fact learn both (or some other vector and raster program) and choose the best tool for whatever they plan to do. Some may even throw in a 3D tool like Blender and other stuff as appropriate. In fact, I'm guessing that for certain use cases, a professional who only knows one is more likely to generally be considered mediocre no matter how good they are at it.
- Nil Einne (talk) 06:42, 17 April 2014 (UTC)