Wikipedia:Reference desk/Archives/Computing/2016 January 23

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 23


Certifying that picture is not older than a certain time


How can someone certify that a picture is not older than the timestamp embedded in it? Is using tamperproof specialized hardware the only option? Notice that this is different from certifying that the picture already existed at time t. --Scicurious (talk) 15:38, 23 January 2016 (UTC)[reply]

You could incorporate unpredictable information in one of the free-form EXIF fields and then submit the picture to a timestamping service. The unpredictable information might be the numbers drawn in a famous lottery. Jc3s5h (talk) 16:18, 23 January 2016 (UTC)[reply]
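A minimal Python sketch of that suggestion (the file name and lottery string are placeholders, and the actual EXIF embedding and submission to a timestamping service are left out; only the hashing step is shown):
<syntaxhighlight lang="python">
import hashlib

def commitment(image_path, unpredictable_text):
    """Hash the picture together with public, unpredictable data
    (e.g. last night's lottery draw); the resulting digest is what
    would be sent to a trusted timestamping service."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    return hashlib.sha256(image_bytes + unpredictable_text.encode("utf-8")).hexdigest()

# Hypothetical usage:
# commitment("photo.jpg", "Lotto draw 2016-01-23: 04 11 23 32 40 45")
</syntaxhighlight>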
I am afraid that this won't work; the task at hand is impossible. You could still include the unpredictable information in the EXIF data years after the picture was taken and submit it anyway. The problem remains: there is no difference between an old bit and a new bit of information, and every bit in my machine can be changed by me at will. You could obviously take a picture of a current newspaper, in what is called authentication by newspaper or newspaper accreditation, but you would have to perform some digital forensic analysis on the picture to rule out a photoshopped image. --Llaanngg (talk) 16:32, 23 January 2016 (UTC)[reply]
The simple reason why the task is impossible is that anyone could always take a new picture of the old picture. --76.69.45.64 (talk) 23:18, 23 January 2016 (UTC)[reply]
  • If this is just a general poser, and you aren't looking only for a digital timestamp, you can date a picture of certain historical events, such as Obama being inaugurated, to no earlier than the event itself, but that's not very useful when you are dealing with generic items. μηδείς (talk) 02:53, 24 January 2016 (UTC)[reply]
Yeah, maybe - but also see Photo_manipulation. If I know that some event is likely to occur ("one of the N Republican candidates will formally accept the party's presidential nomination on a specific date" is something I know with pretty good certainty lies in the future), then I could certainly fake a set of pictures, one for each candidate, a solid month beforehand and present the appropriate photo as real some days after the actual event. Perhaps not in that exact case, but in many others it should be possible to defeat the "historical events" approach. Much depends on the context and how much effort would be spent on debunking my fake. SteveBaker (talk) 15:26, 25 January 2016 (UTC)[reply]
See Trusted timestamping, if it is a photo that you've taken recently. This will allow others to verify that the photograph existed at the time you had it notarised, but they can't verify how long it had existed before you notarised it. Including a newspaper, etc., as Llaanngg mentions, will give an earliest possible date. If the document is very sensitive, you could submit another document that describes the original and includes a Cryptographic hash of it. LongHairedFop (talk) 12:23, 24 January 2016 (UTC)[reply]
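A rough Python sketch of the "separate document plus hash" variant (file names and wording are placeholders; the manifest, rather than the sensitive original, is what would be notarised or timestamped):
<syntaxhighlight lang="python">
import hashlib

def write_manifest(original_path, description, manifest_path="manifest.txt"):
    """Describe the sensitive document and record its SHA-256 digest;
    only this manifest needs to be submitted for timestamping."""
    with open(original_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open(manifest_path, "w", encoding="utf-8") as out:
        out.write("Description: " + description + "\n")
        out.write("SHA-256: " + digest + "\n")
    return digest
</syntaxhighlight>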
It doesn't matter how clever your timestamping system is because you still don't know how old the photo already was at the moment it was timestamped. For that you'd need to embed the timestamping algorithm (and secure access to the Time Stamping Authority) into the camera itself. If timestamping were a separate activity (take the photo, transfer it to some other system, timestamp it), then you have no way to know that I didn't take a 100-year-old photo and timestamp it today. Even then, you can't guarantee that I didn't use the "trusted" camera to take a photo of an older photo and thereby imbue it with a more recent timestamp.
Timestamping can guarantee that something is no more recent than some date - but our OP wants no older than - and for that, you need something embedded in the camera that is at least as secure as Trusted Timestamping, and which somehow captures information that could not be inserted into the camera by other means (e.g., if the camera captured the body temperatures of the subjects or the distance of each pixel from the camera). That amounts to "tamperproof specialized hardware" - which our OP wishes to avoid.
So I think the answer is "No".
SteveBaker (talk) 15:26, 25 January 2016 (UTC)[reply]

Pseudocode: good enough for teaching, not good enough for programming


Could a compiler for some form of pseudocode be created? If not, why would it only be precise enough for teaching, but not good enough to be compiled into a program? --Llaanngg (talk) 16:34, 23 January 2016 (UTC)[reply]

It's an artificial intelligence problem. We don't know how to write a compiler that's as good as humans at filling in "obvious" gaps. -- BenRG (talk) 18:03, 23 January 2016 (UTC)[reply]
Could you cite a concrete example of pseudocode that would have an "obvious" gap for a human, but be a stumbling block for a compiler? --Llaanngg (talk) 18:25, 23 January 2016 (UTC)[reply]
Here's a random example from a problem I was just thinking about: given n "red" points and n "blue" points in general position in the plane, find a pairing of them such that the line segments between paired points don't intersect. An algorithm that works for this is "Pick an arbitrary pairing. While there are still intersecting line segments { pick a pair of intersecting segments and uncross them }." (This always terminates because uncrossing reduces the total length of the segments (triangle inequality), but that isn't part of the algorithm.) Turning this into a program requires a pretty good understanding of the problem statement and plane geometry. For example, you have to figure out what "uncross" means and that there's only one way to do it that preserves the red-blue pairing. You also need to choose an input and output encoding (how you provide the points to the program and how it reports the answer). Still, the pseudocode is useful because it contains the nonobvious core idea, and everything else is straightforward. -- BenRG (talk) 19:08, 23 January 2016 (UTC)[reply]
But I think many pseudocode algorithms are close enough to executable functions that you might as well just use Python as your "pseudocode". -- BenRG (talk) 19:10, 23 January 2016 (UTC)[reply]
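For what it's worth, here is one possible Python rendering of the pairing algorithm described above (a naive quadratic scan; the intersection test and the "swap the blue endpoints" step are exactly the details a human reader has to fill in, and the input encoding is assumed to be lists of (x, y) tuples):
<syntaxhighlight lang="python">
def cross(o, a, b):
    """Twice the signed area of triangle o-a-b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """Proper intersection test; sufficient for points in general position."""
    return (cross(p1, p2, q1) * cross(p1, p2, q2) < 0 and
            cross(q1, q2, p1) * cross(q1, q2, p2) < 0)

def noncrossing_matching(red, blue):
    """Pair red[i] with blue[match[i]] so that no two segments cross."""
    match = list(range(len(red)))      # pick an arbitrary pairing
    changed = True
    while changed:                     # terminates: total segment length strictly decreases
        changed = False
        for i in range(len(red)):
            for j in range(i + 1, len(red)):
                if segments_intersect(red[i], blue[match[i]], red[j], blue[match[j]]):
                    match[i], match[j] = match[j], match[i]   # "uncross" this pair
                    changed = True
    return match
</syntaxhighlight>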
The main issue is that pseudocode is generally not a formally defined language, hence the name. "Real" programming languages have formally defined grammar and syntax. Read something like the C standard to get an idea of how much goes into doing this. This is so, ideally, every statement that can possibly be written in the language has an unambiguous meaning that can be interpreted by computer programs (here I mean "interpreted" in the general sense; I'm not specifically referring to interpreted languages). A program written in C, for instance, will, ideally, always mean the exact same thing to any standard-compliant C compiler or interpreter. Contrast this with the state of machine translation: natural languages aren't well-defined, so there's tons of ambiguity, and consequently the programs we have at present often get things completely wrong. You could consider pseudocode a kind of "natural language for programming"; it's intended to convey general ideas to other humans. If you formally define the language you're using, it's no longer pseudocode; it's a programming language. --71.119.131.184 (talk) 06:38, 24 January 2016 (UTC)[reply]
I think there are two main problems: context and background knowledge.
  1. The meaning of pseudocode often depends on the context in which it is used. And unlike real code, this context is not limited to the program itself. This puts the problem into the area of natural language understanding, as BenRG and 71.119 point out.
  2. Pseudocode tends to be highly declarative and domain-specific. It makes statements like "now solve problem X", where it is not clear how this should be turned into a computational process. The reader is assumed to have background knowledge which allows them to do so.
See Buchberger's algorithm and Quickhull#Algorithm for some good examples; a rough sketch of the latter follows below.
Ruud 14:08, 25 January 2016 (UTC)[reply]
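To make point 2 concrete, here is a compact and unoptimised Python sketch of the Quickhull pseudocode (assuming the input is a list of (x, y) tuples in general position); note how the one-line declarative steps "find the point farthest from the line" and "keep the points on one side" have to be spelled out as coordinate arithmetic:
<syntaxhighlight lang="python">
def cross(o, a, b):
    """Twice the signed area of triangle o-a-b; > 0 means b lies left of the line o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def quickhull(points):
    """Convex hull of 2D points, following the usual Quickhull pseudocode."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]                 # "take the leftmost and rightmost points"

    def find_hull(subset, p, q):
        # Hull vertices among the points strictly left of the directed line p->q.
        if not subset:
            return []
        far = max(subset, key=lambda r: cross(p, q, r))   # "farthest point from the line"
        left_of_pf = [r for r in subset if cross(p, far, r) > 0]
        left_of_fq = [r for r in subset if cross(far, q, r) > 0]
        return find_hull(left_of_pf, p, far) + [far] + find_hull(left_of_fq, far, q)

    upper = [r for r in pts if cross(a, b, r) > 0]
    lower = [r for r in pts if cross(b, a, r) > 0]
    return [a] + find_hull(upper, a, b) + [b] + find_hull(lower, b, a)
</syntaxhighlight>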

Someone has probably said this, but pseudocode is for humans. If a computer could compile it, then it wouldn't be pseudocode. Bubba73 You talkin' to me? 04:40, 26 January 2016 (UTC)[reply]

Pseudocode always has the proper level of abstraction, and always the right library functionality to compactly represent the problem I'm talking about. That said, as BenRG suggests, nowadays I often use Python as "executable pseudocode" for first-year algorithms. --Stephan Schulz (talk) 13:45, 27 January 2016 (UTC)[reply]