Image as superresolution

Hello,

Does that image really display superresolution? As I understood it, super-resolution is when the imaging system can identify features smaller than the diameter (radius?) of the Airy disc used for imaging. I thought super-resolution required physical techniques such as time reversal for active signals or negative-refractive-index metamaterials, or else deconvolution techniques such as the Lucy-Richardson algorithm. It seems to me that the image is merely displaying supersampling or something akin to it, and thus doesn't belong here. Please comment, as I am somewhat unsure about this. User A1 07:39, 31 August 2007 (UTC)

Furthermore, papers by people such as John Pendry and others in this area may help us here. User A1 —Preceding unsigned comment added by User A1 (talkcontribs) 07:49, 31 August 2007 (UTC)

It increases the resolution beyond what the camera is capable of, so I would imagine it's superresolution in at least a loose sense. All I know about the algorithm is that it depends on knowledge of the camera model. (I got a free copy of the software by taking test images with my camera for them to process and put in their libraries.)

By all means upload something else if you know of a better example. — Omegatron 22:55, 31 August 2007 (UTC)

I read the IEEE paper in the article, and I understand where my confusion comes from. There are two types of super-resolution in the literature: one where the CCD is considered to be the limiting factor for resolution, in which case multiple images can be used to provide an increased-resolution estimate; the other is where you have diffraction-limited behaviour, which is a different kettle of fish entirely. I will try to improve my understanding here and see what I can do to improve the article. User A1 00:27, 1 September 2007 (UTC)

no "list of programs that do superresolution" or somthing?

Why isn't there a list of programs capable of doing superresolution (I don't know what to call this; "doing super resolution" doesn't sound right to me) linked from this article? --TiagoTiago (talk) 15:04, 15 July 2008 (UTC)

The EL is full of such cruft. Please don't add any more; WP is not a directory. --Adoniscik(t, c) 15:27, 15 July 2008 (UTC)
Examples are good. You know, like on 99% of the other articles on Wikipedia about software features. Also, I have no idea what the EL is. I Googled it, but there was nothing about lists of software. — Preceding unsigned comment added by 99.49.222.163 (talk) 05:00, 14 January 2012 (UTC)

Microscan

One of the industry-standard terms, which I've verified has no Wiki search entry cross-referencing, is "microscan". Many military, engineering, and patent documents on the web describe "superresolution" using the term microscan, along with the supporting algorithms of matrix-operation sub-pixel motion estimation. These are used in uncontrolled microscan applications, where absolute sensor position information is not present, to produce a microscan from the recovered pixel displacements, with Fourier inverse filtering of the known-displacement microscan frames when they are integrated into higher-resolution images, derived from the pixel-frequency point spread function definition. This applies especially to low-resolution military FLIR camera systems, which use resolution-enhancement applications like microscanning to allow small-payload IR cameras to resolve better with minimal weight and power increases. LoneRubberDragon (talk) 23:56, 17 August 2008 (UTC)

Perhaps a paragraph and links can be added to also point to this standard term, along with super-resolution. LoneRubberDragon (talk) 23:56, 17 August 2008 (UTC)

And you are correct that an optical system that is Nyquist-limited to the sensor cannot be used for microscanning. Performing real microscanning requires an optical system whose optical-transfer-function Nyquist limit exceeds the sensor's by the desired resolution factor. Without such an optics-to-sensor match, one can only perform morphological edge enhancements on an image; features finer than the baseband image limited by the optical transfer function will not be brought out by any microscan / superresolution algorithm. As the old saying goes, you can't enhance a blurry image, because you cannot create what isn't there (without implicit ground-truth subject-modeling information to artificially enhance the image being guessed at).
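
As a rough illustration of that optics-to-sensor matching requirement, here is a back-of-the-envelope sketch; the wavelength, f-number, and pixel pitch are assumed example values, not figures from any system mentioned here.

```python
# Minimal sketch: is a camera sensor-limited (microscanning can help) or
# optics-limited (it cannot)?  All numbers are hypothetical example values.

def airy_diameter_um(wavelength_um, f_number):
    """Diameter of the Airy disk (first null to first null) at the focal plane."""
    return 2.44 * wavelength_um * f_number

wavelength_um = 0.55   # green light, assumed
f_number = 4.0         # assumed optics
pixel_pitch_um = 9.0   # assumed sensor pixel pitch

spot = airy_diameter_um(wavelength_um, f_number)
# Incoherent optics cut off spatial frequencies above ~1/(wavelength * f_number);
# the sensor samples at 1/pixel_pitch, so compare optical cutoff to sensor Nyquist.
optical_cutoff = 1.0 / (wavelength_um * f_number)      # cycles/um
sensor_nyquist = 1.0 / (2.0 * pixel_pitch_um)          # cycles/um

print(f"Airy disk diameter: {spot:.2f} um, pixel pitch: {pixel_pitch_um} um")
if optical_cutoff > sensor_nyquist:
    print(f"Sensor-limited: about {optical_cutoff / sensor_nyquist:.1f}x of frequency "
          "headroom for microscan-style enhancement")
else:
    print("Optics-limited: microscanning cannot recover detail the lens never passed")
```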

http://en.wikipedia.org/wiki/Nyquist_limit

http://en.wikipedia.org/wiki/Optical_transfer_function

http://en.wikipedia.org/wiki/Point_spread_function

http://en.wikipedia.org/wiki/Airy_disk

LoneRubberDragon (talk) 23:56, 17 August 2008 (UTC)

Actually you can recover frequencies beyond those filtered out by the point spread function, usually at the cost of a noise increase, if you make statistical assumptions about the underlying process and attempt to recover those statistical parameters, such as in expectation-maximisation algorithms. So you can make a blurry image unblurry and have your frequencies too; it isn't just sharpening (nasty Fourier ringing). User A1 (talk) 07:34, 18 August 2008 (UTC)
QUOTE"Super-resolution (SR) are techniques that in some way enhance the resolution of an imaging system. There are different views as to what is considered an SR-technique: some consider only techniques that break the diffraction-limit of systems, while others also consider techniques that merely break the limit of the digital imaging sensor as SR."
True and false, mostly false, given the ambiguity of your language, *actually*, especially the "breaking the diffraction limit" claim, unless you are using morphological model inference of subject recognition below "normal" resolution limitations. Otherwise, the MTF drops quite rapidly for an Airy disk, Nyquist-limiting your reconstructions, and more so with a blurry image with Gaussian edges. And I grant you are right that sharp-edged blurs (disks) can recover *some* detail in some contrasty image-subject contexts, but not in others. Microscan has no limits to frequency reconstruction if the Airy disk is smaller than the sensor pixels by the ratio of sensor-resolution microscanning required. You can't statistically recover beyond diffraction-limited Nyquist data otherwise, with ANY SENSOR, for standard incoherent-light optics. The PSF's MTF drops faster than the rolloff of a first-order LRC network, thus Nyquist-limiting your attempts at reconstruction to handfuls of percent for standard incoherent-light optics; only giving tens of percent of superresolution worth counting over the official equations of resolution limit: Airy, Dawes, Sparrow. Please read the theory of diffraction-limited optics given an infinite-resolution sensor, as there are SOLID NYQUIST LIMITS to all superresolution as you describe, which you may not know. But microscanning for high-resolution optics, given a low-resolution sensor, IS what is significantly possible, by *actually* giving hundreds or thousands of percent resolution enhancement, up to the limits of the optics, with a low-resolution sensor, when taking tens to hundreds of images to be microscanned together. And perhaps "microscan" is a different, superior, older sensor-resolution-enhancement algorithm compared to a more basic superresolution article. LoneRubberDragon (talk) 23:56, 19 August 2008 (UTC)
Likewise, a proper Wiener deconvolution doesn't do the ad-hoc nasty Fourier ringing that your conception proposes, but simply reconstructs the frequencies attenuated by the sensor PSF from a microscan resolution enhancement. It is a mathematically sound root equation for controlling what is to be reconstructed, without unwarranted ringing. There are also Wiener adjustments possible to counteract the Poisson noise distribution of pixel intensity that can cause unwanted noise artifacts in a Wiener deconvolution from a low-SNR imager. LoneRubberDragon (talk) 23:56, 19 August 2008 (UTC)
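
For concreteness, here is a minimal sketch of the kind of frequency-domain Wiener deconvolution being described; it is an illustration only, not any of the contractor codes discussed below, and the blur kernel, test image, and noise-to-signal ratio nsr are assumed example values.

```python
# Hedged sketch of frequency-domain Wiener deconvolution: divide by the blur
# transfer function where it is strong, and back off where noise would dominate.
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """psf is the same shape as the image, centred; nsr is the noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter instead of a bare 1/H
    return np.real(np.fft.ifft2(W * G))

# Example with assumed synthetic data: a 3x3 box blur applied to a random image.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[31:34, 31:34] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf, nsr=1e-3)
```

The nsr term is what suppresses the ringing that plain inverse filtering produces near zeros of the transfer function; it can be tuned (or made frequency-dependent) to account for sensor noise.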

MICROSCAN

http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA390373 LoneRubberDragon (talk) 23:56, 19 August 2008 (UTC)

ENHANCEMENT LIMITATIONS

http://en.wikipedia.org/wiki/Dawes_limit

http://en.wikipedia.org/wiki/Rayleigh_criterion

http://en.wikipedia.org/wiki/Airy_disk

http://en.wikipedia.org/wiki/Airy_disk#Mathematical_details

http://en.wikipedia.org/wiki/Diffraction_limited

http://www.quantumfocus.com/publications/2005_Optical_and_Infrared_FA_Microscopy.pdf

http://en.wikipedia.org/wiki/Point_Spread_Function

http://support.svi.nl/wiki/NyquistCalculator

RECONSTRUCTION METHOD BASE

http://en.wikipedia.org/wiki/Wiener_deconvolution

http://en.wikipedia.org/wiki/Deconvolution

http://en.wikipedia.org/wiki/Poisson_distribution

http://en.wikipedia.org/wiki/Signal-to-noise_ratio

LoneRubberDragon (talk) 23:56, 19 August 2008 (UTC)

Lucy's original paper [1]; it's quite clever. User A1 (talk) 12:50, 20 August 2008 (UTC)

Yeah, that looks like a very good document, yet I will have to digest it more, especially given its age. At first glance it appears to have statistical limitations for low-SNR data sets, just as microscanning does for sensor noise. But, is it not for smoothed-morphology data-modeling constructions, with some possible applications for a sensor's PSF estimation step in microscanning algorithms? Am I grokking the paper right? LoneRubberDragon (talk) 10:47, 25 August 2008 (UTC)

If you scan through Irvine Sensors Corporation (Costa Mesa, CA) SBIR projects, there is a project on Microscan circa 1995-7, plus or minus a couple of years, which uses gradient-matrix, motion-based image displacement estimation for the proper sub-pixel multiple-frame motion displacement estimates that are required for making the displaced micro-binning image stack. And then there's an iterative process to estimate (if I recall right) the point spread function from the stacks of the low-resolution image sensor being microscanned, up to the optics' and sensor noise-figure limits, to perform the Wiener-type deconvolution in the presence of Poisson sensor illuminance noise. If I can find the original article and code work here, I'll post some equations from the core, but my paperwork is quite behind, arresting proper information flow. LoneRubberDragon (talk) 10:47, 25 August 2008 (UTC)

"But, is it not for smoothed-morphology data-modeling constructions, with some possible applications for a sensor's PSF estimation step in microscanning algorithms?" Richardson expanded (or rather explained Lucy a little) on the algorithm. It can be used to enhance frequencies that have been severely diminished due to point spread. The iteration scheme proposed by Lucy appears to converge, but whilst not an expert in the field, I have been unable to find any papers that say exactly *why* it works to the extent that I understand, only that as a maximum likelihood method it does seem to work, a more complex version of fixed point iteration if you will.
The idea is that if you have a unit impulse which has been spread by some PSF via convolution, you can actually (to some extent, and as you say, given sufficiently low noise) reverse the convolution integral, without the problems of simple Fourier division. Somewhere I have Richardson's paper, which is a hell of a lot easier to understand.
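
As a concrete illustration of that iteration, here is a minimal sketch of the standard Richardson-Lucy update (not code from either paper; the blur kernel and test image are assumed example data):

```python
# Hedged sketch of the Richardson-Lucy iteration: multiplicative maximum-
# likelihood updates that restore frequencies suppressed by a known PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=50, eps=1e-12):
    psf_mirror = psf[::-1, ::-1]                  # adjoint of the blur operator
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)       # observed over predicted
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Example with assumed synthetic data:
rng = np.random.default_rng(1)
truth = rng.random((64, 64))
w = np.hanning(7)
psf = np.outer(w, w)
psf /= psf.sum()
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Each pass re-blurs the current estimate, compares it with the observation, and pushes the correction back through the flipped PSF, which is where the maximum-likelihood / fixed-point flavour comes from.
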
Getting back to the point, which is discussion of super-resolution :). This wiki article has an unfortunate name, as it describes a field of possible mechanisms for increasing the resolution of an image. These seem to fall into two categories: firstly, as you say, sub-pixel estimation by morphological constraints; secondly, the use of statistical modelling of the sensor acquisition to try to estimate the underlying statistical parameters, and so estimate the unfiltered or noise-free signal from real data. Both of these are important, and in my highly non-expert opinion the morphological analysis method is the more active area of research. To be honest I don't actually know *lots* about this area; I have just used it once or twice. User A1 (talk) 12:18, 25 August 2008 (UTC)
Ah, Richardson *does* sound familiar, I believe coming later than the Lucy paper, at around 1991 plus or minus a couple of years. I had never seen the Lucy paper before for that work, so it's nice to see more of the historical foundations, as the Lucy paper wasn't *that* hard to read at first glance, though you mention its PSF power is stronger than the equations let on at first face. To reiterate (no pun?), the best I can describe, sans the Richardson(?) paper's equations on hand, is that, first, the frames of data are stacked on the super-resolution plane using the gradient-matrix-based motion-estimator operator to produce displacement estimates, appropriately placing the multiple frames of sensor impulses onto the SR/MS pixels of the superresolution plane; any holes in the superresolution plane left by the X frames, where no frame impulses were present, are then filled with nearest-neighbour frame impulse data, in order to fill the gaps from a non-systematic, non-raster microscanning platform (a generic microscanning algorithm for freely, slowly moving, non-rotating cameras). At that point comes the analysis of the frequency content of the whole image at the pixel scale, to *estimate*, only somewhat robustly against the noise margin, both the system PSF and the noise figure in the frequency domain, which are used to deconvolve the microscan/superresolution plane. The effects are quite dramatic, bringing out resolutions 2x and 4x over the sensor resolution (easily 200-400% resolution improvement, 4-16 times the pixels), given good optics and sensor SNR, for that ratio of superresolution/microscan. So Richardson seems to be the paper (et al.) that ISC based that step of the microscan system on, under M. Skow at ISC, for the gradient-based sensor-resolution motion estimate plus multi-frame impulse-stack deconvolution of the optic-sensor PSF system chain. LoneRubberDragon (talk) 14:47, 25 August 2008 (UTC)
Now to explain the mystery you mention: most of the resolution improvement comes from enhancing and restoring the PSF spread about the highest PSF gradients, which carry the most edge/phase information, the ring around the edges of the pixels' PSFs; that is where the image-to-pixel transfer function is maximized, on those salient high-frequency PSF features. Mainly low-frequency luminance information, and some of the morphological step-edge, gradient-pixel-crossing, DC-to-AC information, is contributed by the core of the PSF. I really do have to find that paper, too, which I have *somewhere* around here, so I can post that form of the equations.

Though I can add, I remember that in practice the methods of the system, even with 2-D FFTs, took a lot of computation power, and there is supposed to be an even better algorithm via ISC out there for producing microscanning more efficiently, probably using a one-time optic-sensor-pixel PSF estimate and conversion to a 2-D FIR filter, as ISC continued refining and altering the algorithms of Richardson(?) and the inter-frame motion estimators; that was developed under new management at ISC around 1998, plus or minus a couple of years. One method may be to process only the salient high-frequency ringing to deconvolve a HPF details plane, and combine that plane with the mass of data from the LPF of the superresolution impulse stacking. This would restore the high-frequency data and might use less computation load, as only the steepest gradient of the PSF grid is utilized, but I never tried that algorithm to validate the approach.

In addition, hierarchical binned-image-based motion estimates can be performed to extend the motion-based frame impulse stacking distances into super-pixel displacements, beyond the more limited sub-pixel-only gradient-based matrix displacement-estimates algorithm. Also, the image-based motion estimates can be quadrant-partitioned, or more thoroughly processed on an iterated polar-coordinate gradient-matrix operator plane, to handle additional freedoms of camera motion, rotation, and perspective (trapezoidal) displacements, for affine placement of the stacked frame impulses, as for an aerial sensor system flying over a ground target at an oblique-angle zoom shift, which I advanced, but which was only somewhat carried forward at that time.

One may also attempt a model-based annealing algorithm, where the superresolution model starts flat grey and is altered by steepest descent of the MS plane, through an estimate of the PSF, into the individual frames' pixels, with the error back-propagated from the original frames of data for the PSF and noise error figures, to converge the model to the properly placed PSF'ed frames; this is probably very close to ML/EM. I cannot recall whether Richardson's(?) method went from the model and converged to the frames, or used the impulse-stacked frames to reverse-convolve the best lowest-error model, or perhaps they are equivalent, just different in approach. But alas, it was over 10 years ago since I assisted with and advanced on that set of codes, so a lot of the details have degraded in memory, and the "durned" X papers get buried in time. I can't even offhand find a web SBIR or patent reference on the details that had been on the net up to 8 years ago, but which are not googleable today ... information degradation in wartime, I guess, and maybe the SBIR website still has those data searchable, as they catalogue past SBIRs. I'll look for the Richardson(?) paper tonight, but can't promise anything, as my technical papers are distributed at two sites now, among masses of boxes of data and books. LoneRubberDragon (talk) 14:47, 25 August 2008 (UTC)
Oh well, I must have had a cross-linked brain cell with Richardson extrapolation. The paper I have, one of several, is "Restoration of Aliased Video Sequences via a Maximum-Likelihood Approach", from 11 Feb. 1996, which puts phase 2 of the SBIR around 1997. The paper is by S. Cain, R. Hardy, and E. Armstrong, of the University of Dayton, Dept. of EE, under WPAFB contract. I believe they are just reiterating the earlier works as part of a spearhead DoD program. No Lucy is present in the references, nor Richardson. I cannot place any equations here, as it is marked for DoD-contractor distribution only, and I do not know the current status after 12 years, but I presume it may be requested from WL/AARI-3, WPAFB, OH 45433-7409. Also, the paper's algorithm estimates both the frame displacements and the system PSF simultaneously, whereas one can still calculate frame displacements separately from the PSF EM/ML section. If you play with the blocks you know of, you can generate any number of nearly equivalent convergences of the model image sans noise. LoneRubberDragon (talk) 15:07, 25 August 2008 (UTC)
Another paper here is "The Application of Wiener Filters to Microscan Imaging", from 15 Feb. 1996. The paper is by E. Armstrong, J. Bognar, B. Yasuda, and R. Hardie, of the University of Dayton, under WPAFB, contract AFC F33601-95-DJ010. It, likewise, is under DoD-contractor restrictions of distribution, circa 1996. Likewise, no Lucy nor Richardson in the references. A pity; many of the best algorithms come from astronomical image processing in the 1960s and 1970s. One reference potentially easily retrievable is E. A. Watson, R. A. Muse, F. P. Blommel, "Aliasing and Blurring in Microscanned Imagery", Proceedings of the SPIE, Volume 1689, 242-250 (1992). And I agree that the Wiener method, when applied as a uniform global image operator and not locally adaptive, does produce more ringing than a locally adaptive ML/EM iterative model-annealing system construction. LoneRubberDragon (talk) 19:27, 25 August 2008 (UTC)
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=220576 LoneRubberDragon (talk) 02:34, 26 August 2008 (UTC)
As regards the *other* definition lumped under superresolution / microscanning resolution enhancement: it is more of a subjective, morphological, model-based enhancement of images, which I mentioned earlier and you have differentiated. For example, a line in a low-resolution television scan image can be modeled by an assumed line model that explains the luminance data, which can be subtracted and replaced by the line model at any resolution. Another example is a solid form, where one scans across the edge of the shape to form a model assumed to be producing the low-resolution data. Another example: if one has, say, a herringbone pattern, or another known high-resolution pattern, it can be affine-distortion mapped to explain the low-resolution image data. All of these are subjective, multi-pixel-averaging shape formers with their own sets of subjective errors, but they allow one to, say, cheat in an HDTV broadcast, by using less data bandwidth to represent the bulk of the image sharpness data, while also using a low-resolution camera and image transmission, including with it a transmission of MPEG-4 shape-model information, all in place of a true HDTV broadcast signal. It is not true superresolution, but image modeling. WHY IT ISN'T true microscanning superresolution is that it has systematic, subjective representation errors of the high-resolution features, so IT IS NOT APPLICABLE OR RECOMMENDED for targeting systems or precision measurement devices and should be classified differently for safety or life-preserving purposes, but it is acceptable for "faking" high-resolution HDTV data transmissions, webcasts, and DVD players, as examples. LoneRubberDragon (talk) 11:12, 26 August 2008 (UTC)
If you have any questions on terminology or system steps, I can clarify some of the terminology and systems. LoneRubberDragon (talk) 11:12, 26 August 2008 (UTC)

Bad Citations

It wasn't a mistake; it was intentional. The current "reference" section is longer than the article, and is not being used to support statements in the article, as is policy at WP:CITE. If I wanted a list of super-resolution articles I would simply run a search at a journal search engine. Readers are in no way benefiting from having a dozen papers thrown in their face. User A1 (talk) 23:13, 20 November 2009 (UTC)

I think to the contrary that related works should not be removed without justification, as they are of real value to the reader in their selection and organisation. Saying "they could just search for them" is not an argument (such an argument would exclude essentially all content at Wikipedia). I do think they should be divided between "Related works" and "References" actually used to support article statements, which I've done. You seem to be treating the papers as some kind of advertising, which I'm sure wasn't the intention. I also don't understand your purpose in removing links to papers. Dcoetzee 00:50, 21 November 2009 (UTC)

"they are of real value to the reader in their selection and organisation": This whole article is essentially a list, and continually adding to a list of papers makes this more the case. The perfect article is supposedly engaging, self-contained, and not a list of information. Prose is key, and it needs to be good prose that explains the concept to the reader. Lists are not useful for explaining a topic, which is the key point of this article.

Saying "they could just search for them" is not an argument (such an argument would exclude essentially all content at Wikipedia One cannot simply search for encyclopaedic content, in fact I would say very few good articles could be searched for readily -- one would have to simultaneously span the "complex enough to contain information and not so complex that one needs to be an academic" gap -- a gap in knowledge that is very apparent, particularly if you start reading any of those papers. Secondly: there is no "organisation" to this list, not even alphabetical organisation.

"I do think they should be divided between "Related works" and "References" actually used to support article statements, which I've done": I would like to see a scientific paper that can be taken seriously which uses references in this manner. This is most appalling for a Wikipedia article and in direct opposition to the citation guidelines.

"You seem to be treating the papers as some kind of advertising, which I'm sure wasn't the intention. I also don't understand your purpose in removing links to papers.": This simply relates to my first point -- I see no value (and indeed a reduction in value) in simply citing literature ad nauseam. This topic needs explaining by someone who actually understands what the topic is about (which I do not, as evidenced by the above).

Simply put, I find it most disturbing that [1] there is not a single inline citation, and [2] the so-called "references" are longer than the text, which leads to the situation that I have no idea how super-resolution works, short of having to read academic papers, which only an academic or a student would do. Most people simply do not understand academic papers, as they almost always assume a thorough understanding of the topic already at hand, and a significant understanding of other principles involved (note how neither linear algebra, nor matrix nor vector calculus, nor cost function, etc., is linked in the text).

Apologies if I am a bit blunt, and I realise that I have written more on this talk page than on the article, but I feel unable to improve this article personally, and am watching it deteriorate. User A1 (talk) 08:58, 21 November 2009 (UTC)

Microscan II

For UserA1:

If the Irvine Sensors Corporation / Wright-Patterson Air Force Base systems math is a bit complex, try this basic method. First take a multiple-frame, slightly-moving-camera image sequence from a CCD sensor that is not Nyquist-limited by the optics, which is a prerequisite. Use a least-squares displacement search to stack the frames on top of each other across these "saccadic" movements, placing them within one pixel of displacement of the first frame. Then take X and Y differential gradient functions, dot-producted with eigen-estimates of the X and Y displacement based on the image context function, for each frame against the first image, in order to estimate each frame's sub-pixel displacement relative to the first frame. The sub-pixel motions of the stacked frames can then be used to interleave the low-resolution baseband images onto a high-resolution image map of 2, 3, or 4 times the resolution of the original CCD. Then you can fill in the blanks with interpolation for any remaining empty cells of the oversampled high-resolution image map. Then you can use an inverse point-spread-function estimate for the CCD pixels, to correct the point spread functions of the pixels using FFT image processing. Noise will be enhanced by performing this inverse-PSF operation, but it will approximate a microscan-algorithm resolution enhancement. If you want to improve performance against noise in the inverse pixel-PSF filter, try the Wiener method link that you called "useless". LoneRubberDragon (talk) 20:32, 15 January 2010 (UTC)
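
Here is a minimal sketch of that first, whole-pixel alignment step (an illustration only, not the ISC/WPAFB code; the frames and the search range are assumed synthetic examples):

```python
# Hedged sketch of the coarse least-squares displacement search: find the
# integer (dy, dx) shift that best aligns each frame to the first frame.
import numpy as np

def coarse_align(reference, frame, max_shift=4):
    """Return the integer (dy, dx) minimising the sum of squared differences."""
    best = (0, 0)
    best_err = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            err = np.sum((reference - shifted) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Example (assumed synthetic frames): frame 1 is frame 0 rolled by (-2, +1).
rng = np.random.default_rng(2)
frames = [rng.random((32, 32))]
frames.append(np.roll(np.roll(frames[0], -2, axis=0), 1, axis=1))
print(coarse_align(frames[0], frames[1]))   # expected (2, -1)
```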

If your camera's optics do not resolve more sharply than the CCD by the desired resolution-enhancement factor, the algorithm will simply produce a blurry image 2, 3, or 4 times the size of the original data, limited by the optical blur. Also, the more frames you use, the lower the noise effect and the better the convergence to a stable high-resolution answer. Most cameras' focus algorithms are fixed, and may only achieve near-CCD-resolution focus, thus preventing any microscanning superresolution capability. Some cameras, I've noticed, are not even focused to the resolution of the native CCD, producing blurry-image cross-pixel-talk before one even begins the algorithms. Also, given the undersampling effects of Bayer-pattern cameras that "cheat", one may not be able to perform superresolution without restricting the algorithm to monochromatic scenes. LoneRubberDragon (talk) 20:32, 15 January 2010 (UTC)

http://en.wikipedia.org/wiki/Crosstalk

http://en.wikipedia.org/wiki/Bayer_pattern

Additionally, once a high-resolution image is created, with the accompanying differential noise increase, a median filter on the high-resolution map is often useful to preserve morphology and smooth the noise enhancement. Also, once a high-resolution image has been derived, robust-rank morphological modeling can be performed on it to rotoscope the image forms at the higher resolution. LoneRubberDragon (talk) 13:57, 17 January 2010 (UTC)
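
For illustration, that smoothing step could look like the following, assuming SciPy is available and using a stand-in array for the reconstructed map:

```python
# Hedged example: a small median filter to knock down amplified noise while
# preserving edges; high_res is an assumed stand-in for the reconstructed SR map.
import numpy as np
from scipy.ndimage import median_filter

high_res = np.random.default_rng(3).random((128, 128))   # stand-in for the SR map
smoothed = median_filter(high_res, size=3)                # 3x3 rank/median smoothing
```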

Another solution you may try is maximum-entropy-type processing. Repeating much of the above: take the high-resolution map and converge each high-resolution pixel in an ROI (region of interest), so that the error between the multiple frames and the ROI of the high-resolution model image is minimized. It is like seismic analysis, where one models the earth with modifiable voxels that minimize the error against the multiple seismic responses of known seismometers. LoneRubberDragon (talk) 20:32, 15 January 2010 (UTC)
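
A minimal sketch of that converge-the-model idea, written as a plain least-squares gradient-descent fit rather than the maximum-entropy or ML/EM formulations discussed above; the forward model (sub-pixel shift followed by a 2x2 box average), the shifts, and the step size are all assumed example choices:

```python
# Hedged sketch: iteratively adjust a high-res model so that, shifted and
# downsampled, it reproduces each observed low-res frame.
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def downsample2(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    return np.kron(img, np.ones((2, 2))) / 4.0     # adjoint of the 2x2 box average

def fit_sr_model(frames, shifts, iterations=100, step=1.0):
    """shifts are (dy, dx) per frame in high-resolution pixels (assumed known)."""
    sr = np.kron(frames[0], np.ones((2, 2)))       # crude initial model
    for _ in range(iterations):
        grad = np.zeros_like(sr)
        for frame, (dy, dx) in zip(frames, shifts):
            predicted = downsample2(subpixel_shift(sr, (-dy, -dx), order=1, mode="nearest"))
            residual = predicted - frame
            grad += subpixel_shift(upsample2(residual), (dy, dx), order=1, mode="nearest")
        sr -= step * grad / len(frames)
    return sr

# Example with assumed synthetic data: four shifted, 2x-downsampled views of a scene.
rng = np.random.default_rng(4)
scene = rng.random((64, 64))
shifts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
frames = [downsample2(subpixel_shift(scene, (-dy, -dx), order=1, mode="nearest"))
          for dy, dx in shifts]
sr_estimate = fit_sr_model(frames, shifts)
```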

I should also add that increasing the resolution beyond 4x oversampling of the CCD's native resolution creates increasingly long convergence times for the high-resolution image-model analysis, due to the Poisson pixel-noise margins in the differential analysis and per-CCD-pixel irregularities, unless one uses longer integration times and a more adaptive model, such as entropy-maximization modeling or the example above. LoneRubberDragon (talk) 13:36, 17 January 2010 (UTC)

And if all you seek is to create animated models based on low-resolution CCD images, simply use contextual edge-detection algorithms and morphological-rotoscope modeling to create objects, such as polygons, that can be scaled arbitrarily with sharp edges at any resolution. But that is not resolution enhancement per se, merely digital-animation morphological rotoscoping. The intrinsic resolution of the models will still be limited by the CCD resolution, with fake edges that are off by the Nyquist model of the native CCD resolution. Also, inter-frame smoothing algorithms can be made to smooth the Nyquist-limited, low-resolution modeling "moiré effects" of the CCD and other inherently low-resolution methods. LoneRubberDragon (talk) 13:40, 17 January 2010 (UTC)

http://en.wikipedia.org/wiki/Rotoscope

http://en.wikipedia.org/wiki/Mathematical_morphology

http://en.wikipedia.org/wiki/Max_Fleischer

http://en.wikipedia.org/wiki/Gradients

http://en.wikipedia.org/wiki/Moire

Okay, okay, my unction tells me to help you here, in detail, with the hardest step, related to linear algebra and calculus. LoneRubberDragon (talk) 15:03, 17 January 2010 (UTC)

Take a stack of constant-exposure, constant-image-context frames with an unknown sub-pixel displacement, I[X,Y,Frame]. For each frame, calculate the image X and Y point gradients thus (shown for the X gradient): IGradX[X,Y,Frame] = -1*I[X-1,Y,Frame] + 0*I[X,Y,Frame] + 1*I[X+1,Y,Frame]. Or even take blurred gradients with an extended kernel: IGradXBlur[X,Y,Frame] = -1*I[X-1,Y-1,Frame] + 0*I[X,Y-1,Frame] + 1*I[X+1,Y-1,Frame] - 1*I[X-1,Y,Frame] + 0*I[X,Y,Frame] + 1*I[X+1,Y,Frame] - 1*I[X-1,Y+1,Frame] + 0*I[X,Y+1,Frame] + 1*I[X+1,Y+1,Frame]. LoneRubberDragon (talk) 15:03, 17 January 2010 (UTC)
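
Those stencils could be written, for illustration, as follows (assumed synthetic frames; correlate applies the kernel exactly as written, without flipping it):

```python
# Hedged sketch of the point gradient and the 3x3 "blurred" (Prewitt-style)
# gradient described above, applied to each frame of a stack I[Frame, Y, X].
import numpy as np
from scipy.ndimage import correlate

point_kernel_x = np.array([[-1.0, 0.0, 1.0]])           # IGradX stencil
blurred_kernel_x = np.array([[-1.0, 0.0, 1.0]] * 3)     # IGradXBlur stencil

def x_gradients(frame_stack, kernel=point_kernel_x):
    """frame_stack: array of shape (num_frames, height, width); X runs along the last axis."""
    return np.stack([correlate(f, kernel, mode="nearest") for f in frame_stack])

# Example with assumed random frames:
frames = np.random.default_rng(5).random((4, 32, 32))
grad_x = x_gradients(frames)                        # point gradient of each frame
grad_x_blur = x_gradients(frames, blurred_kernel_x) # blurred-kernel gradient
```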

Then take each frame gradient and dot-product it with the first frame as the reference, like so (shown for the X gradient): GradientDotX[Frame] = SUM(over X,Y || I[X,Y,1] * IGradX[X,Y,Frame]). LoneRubberDragon (talk) 15:03, 17 January 2010 (UTC)

Then you have both an X and a Y gradient dot product for each frame, relative to the first frame as the reference, appearing thus: (GradientDotX[Frame], GradientDotY[Frame]). Now here you have to play a little with the math in order to normalize the gradient dot products, because they carry a scale factor relative to the integral sum of the image brightness function, for a point gradient. To normalize the point-gradient dot products, divide each dot product, (GradientDotX[Frame], GradientDotY[Frame]), by the integral of the image, GradNormXY = SUM(over X,Y || I[X,Y,1]), so that an X and Y coordinate of the sub-pixel displacements can be estimated, yielding: (DisplacementX, DisplacementY) = (GradientDotX[Frame] / GradNormXY, GradientDotY[Frame] / GradNormXY). I forget the exact corrections when using blurred gradients ... likely divide by the absolute-value integral divided by two, like 1.0 for the point gradient and 1/3 for a blurred 3-by-3 gradient. That is up to you to include or exclude, or even to investigate. LoneRubberDragon (talk) 15:03, 17 January 2010 (UTC)
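
For comparison, here is a minimal sketch of a closely related normalisation: solving the 2x2 gradient normal equations (Lucas-Kanade style, dividing by the gradient energy) instead of by the image integral described above. The test image and shift are assumed example values, so treat this as a nearby variant rather than the exact recipe.

```python
# Hedged sketch: estimate the sub-pixel (dx, dy) of a frame relative to the
# reference by solving the 2x2 gradient normal equations.
import numpy as np
from scipy.ndimage import correlate, shift as subpixel_shift

def estimate_subpixel_shift(reference, frame):
    gx = correlate(reference, np.array([[-0.5, 0.0, 0.5]]), mode="nearest")      # dI/dx
    gy = correlate(reference, np.array([[-0.5], [0.0], [0.5]]), mode="nearest")  # dI/dy
    diff = frame - reference
    s = (slice(4, -4), slice(4, -4))        # ignore unreliable borders
    A = np.array([[np.sum(gx[s] ** 2),     np.sum(gx[s] * gy[s])],
                  [np.sum(gx[s] * gy[s]),  np.sum(gy[s] ** 2)]])
    b = -np.array([np.sum(gx[s] * diff[s]), np.sum(gy[s] * diff[s])])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy

# Example with an assumed smooth test image, shifted by (dy, dx) = (0.2, 0.3):
yy, xx = np.mgrid[0:64, 0:64]
reference = np.sin(xx / 6.0) + np.cos(yy / 9.0)
frame = subpixel_shift(reference, (0.2, 0.3), order=3, mode="nearest")
print(estimate_subpixel_shift(reference, frame))   # roughly (0.3, 0.2)
```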

At this point, once the gradients are normalized into sub-pixel displacements, the low-resolution images from the native CCD can be stacked into a high-resolution frame buffer according to the estimated displacements, integrating the multiple frames of saccadic motion on the high-resolution plane in raw pixel-intensity format. LoneRubberDragon (talk) 15:03, 17 January 2010 (UTC)
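
A minimal sketch of that stacking step (illustration only; the 2x factor, nearest-bin accumulation, and shift convention are assumed choices):

```python
# Hedged sketch: scatter each low-res frame onto a 2x high-resolution buffer at
# its estimated sub-pixel offset, then average the accumulated samples.
import numpy as np

def stack_frames(frames, shifts, factor=2):
    """frames: list of (h, w) arrays; shifts: per-frame (dy, dx) in low-res pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-res bin for each low-res sample of this frame.
        ys = np.clip(np.rint((np.arange(h)[:, None] + dy) * factor).astype(int), 0, h * factor - 1)
        xs = np.clip(np.rint((np.arange(w)[None, :] + dx) * factor).astype(int), 0, w * factor - 1)
        yy, xx = np.broadcast_arrays(ys, xs)
        np.add.at(acc, (yy, xx), frame)
        np.add.at(hits, (yy, xx), 1.0)
    # Empty cells stay zero; fill them afterwards by interpolation if desired.
    return np.divide(acc, hits, out=np.zeros_like(acc), where=hits > 0)

# Example with assumed data: four 16x16 frames at half-pixel offsets.
rng = np.random.default_rng(6)
frames = [rng.random((16, 16)) for _ in range(4)]
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
high_res = stack_frames(frames, shifts, factor=2)
```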


Is this down to earth enough to understand, now? You are correct, the article is quite small compared to the so-called Policy Error in posting useful help toward the topic of discussion. Perhaps, Heisenberg Uncertainty, by having such a small article, has allowed a portion of judgement to cross into the Policy Error Window of analysis. The resolution is too low, in your interpretation of the posts Good has led me to place here. And where is the fun for the student, if I simply give you the raw code, in this context of earth? Good led me to Euler and almost Runge-Kutta modeling of the solar system gravity equations in Middle School into ninth grade. If you stop complaining and ask more pointed questions, I may attempt a language translation into your frame of reference more accurately, given the bandwidth of yourself. LoneRubberDragon (talk) 20:32, 15 January 2010 (UTC)

Why 9 images for only 2x super-resolution?

The example image of super-resolution says:

The right half shows the image (at native resolution) that results when the software combines nine images together and performs 2x superresolution.

But would not 2x super-resolution require only 4 images? Or, to put it another way, is not the information content of 9 images equal to 3x (squared) that of 1 image? —Preceding unsigned comment added by 202.6.86.1 (talk) 23:03, 3 October 2010 (UTC)

Because some of the images will have repeated data. This repeated data is required so that the software can figure out how the images fit together. Since neither you nor the software knows the exact angle of the multiple images, each little bit helps.

I hope that makes sense. I wish the article could show you each of the nine shots, as then it would make more sense. Remember, each photo is laid on top of the other, not put together from pieces. — trlkly 12:47, 28 September 2011 (UTC)
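
One back-of-the-envelope way to see why roughly nine uncontrolled shots are reasonable for 2x (an illustration, not a claim about how this particular software chooses its frame count): 2x super-resolution needs samples at all four sub-pixel phase positions, and if the hand-held shifts are effectively random, the coupon-collector argument says you need about 8.3 frames on average to hit all four.

```python
# Hedged illustration: with uncontrolled (random) sub-pixel shifts, how many
# frames are needed on average to cover all 4 phase positions required for 2x?
from fractions import Fraction

phases = 4                                     # 2x2 sub-pixel positions for 2x SR
expected = phases * sum(Fraction(1, k) for k in range(1, phases + 1))
print(float(expected))                         # 8.33..., i.e. roughly nine frames
```

Four frames is only the noiseless, perfectly controlled minimum; the extra frames also supply the redundancy the software needs to register the images and average down noise.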

Change Entry from Super-resolution to Superresolution?

The unhyphenated form superresolution, used even at the beginning of the subject [1], seems now to have superseded the less-often-used super-resolution -- see Optics Info Base. I propose that the redirect be not from superresolution to super-resolution but the other way around. If the abbreviation SR is used in the rest of the entry, there should be no problem. Gwestheimer 16:36, 15 May 2012 (UTC)

  1. ^ McCutchen, C., J. Opt. Soc. Am. 57:1190, 1967

Trying for an entry that gives an overall layout of the subject and conveys the essential concepts, I have opted for a separate page entitled "Superresolution" rather than shoehorning the material into the current, more technically oriented page "Super-resolution". This involves undoing the redirect from "Superresolution" to "Super-resolution" and inserting into each of these pages a reference to the other under "See also".
Gwestheimer 17:34, 19 May 2012 (UTC) — Preceding unsigned comment added by Gwestheimer (talkcontribs)