Talk:GeForce 10 series

HBM or GDDR5?

I am slightly confused. The article starts off by saying that these chips "will" use HBM2, but then ends by saying that they use GDDR5 or GDDR5X. These are completely different memory technologies, no? Which is it? 184.170.93.22 (talk) 01:50, 13 July 2016 (UTC)

Article name

This should be called GeForce 1000 series, since it comes after the GeForce 900 series, not after the GeForce 9 series. --uKER (talk) 04:10, 7 May 2016 (UTC)

I would agree with this; unfortunately, Nvidia has decided to confuse everyone and actually name the series the 10 series. We can only hope that everyone else calls it the 1000 series regardless, and that Nvidia ends up changing its mind about the naming, but we can't really do anything for now given: http://www.geforce.com/hardware/10series/geforce-gtx-1080 Erik.Bjareholt (talk) 08:09, 7 May 2016 (UTC)
'10' is the prefix, so it's 10xx. The naming is correct: the 10xx series is the successor to the 9xx series. A 9xxx series already existed, but there was never a 10xxx series. Millzie95 (talk) 23:32, 8 May 2016 (UTC)
The 10-series designation is more user-friendly, since the vast majority of regular users don't remember the 9xxx series anyway. So it is a good idea to call the new series the 10 series, since it is more laconic and less confusing. TranslucentCloud (talk) 09:56, 16 May 2016 (UTC)
Yes, it's a logical progression: since the 900 series is nine-hundred, the 10 series is ten-hundred rather than one-thousand. So 1080 is pronounced ten-eighty rather than one-thousand-eighty, which makes more sense, as the naming system remains consistent. Millzie95 (talk) 20:39, 16 May 2016 (UTC)
@Millzie95: That makes sense, thanks for explaining! Erik.Bjareholt (talk) 19:55, 21 May 2016 (UTC)

References & syntax

References at the bottom of this page need tidying up, with consistent syntax and dates for each of the links; a caption or brief description would also help. Millzie95 (talk) 22:59, 8 May 2016 (UTC)

I also noticed there are multiple references linking to the same webpages; should we merge them into a single reference instead of having duplicates? References 2 and 5, for example. Millzie95 (talk) 07:18, 16 May 2016 (UTC)

Citations needed

The table in the Products section, in particular, seems to be speculating about some of the information. A lot of the information is cited, but not all of it, and Wikipedia is not a crystal ball: all information must be cited to a reliable source (not a speculative blog post). This article is actually pretty good; the authors have worked hard to list the information we know for sure. But there's still room for improvement. Without a reliable citation, information should be removed. Unfortunately, that includes much of the info for the 1070 and some of the info for the 1080. --Yamla (talk) 14:57, 13 May 2016 (UTC)

I have updated the products table with new information and added two references; most of the specs for the 1080 should be nailed down by now. Specs in the 1070 row may still change as new information becomes available. I tried to get the syntax as consistent as possible, but the links could use some work. Millzie95 (talk) 16:32, 13 May 2016 (UTC)
Thanks. If the specs for the 1070 are subject to change, they should be removed immediately and only reintroduced once we have a reliable source for them. --Yamla (talk) 16:39, 13 May 2016 (UTC)
I have attempted to clean up the table to remove uncited information. --Yamla (talk) 12:42, 15 May 2016 (UTC)
Hi, just to clear something up about my last post: I was referring to cells in the table marked with a "-". All values in my last edit were either confirmed specs, approximate values (~), or calculations based on them (~A × ~B = ~C). Could you please revert to the previous table (then possibly remove any ballpark values?), as some columns have gone missing and it's tricky to get everything aligned correctly. Millzie95 (talk) 14:17, 15 May 2016 (UTC)
I specifically searched the citations given and could not find the numbers specified. You say these are confirmed, and that's great. That certainly means we can replace the numbers, but only if we have a reliable source (not a speculative one). I understand the embargo lifts in two days, so if we can't find a reliable source today, we should be able to very shortly. --Yamla (talk) 14:30, 15 May 2016 (UTC)
We need better citations. I have made a new table with the columns re-added but cleared a few cells in the top row. The references are from Nvidia and GPU-Z and should show all the listed specs; tell me if I missed anything. Millzie95 (talk) 14:42, 15 May 2016 (UTC)
As previously noted, neither source seems to cite GP104-200 vs GP104-400. Neither source cites transistor count or die size. Neither source cites GDDR5 (vs 5X) in the 1070. It's critically important that we only display information for which we have a reliable citation (rather than a speculative source). Please immediately remove any of the information you added unless it is backed up by the citations. Note that I did make a mistake removing support for OpenGL 4.5 and Vulkan; both are indeed provided in the citation, so long as you click a link. I still can't find any claim of OpenCL in the cites, but you haven't reintroduced those. --Yamla (talk) 15:05, 15 May 2016 (UTC)
In the first link, click on specs, then view full specs. In the second link, scroll down and look at the cropped screenshot. Nvidia's own slides said the 1070 will use regular GDDR5 (not 5X), and GP104-400/GP104-200 are the variants of GP104 used in the 1080 and 1070. I haven't reintroduced the specs in the API section, but we can assume APIs supported by the 900 series will continue to be supported unless support is dropped for some reason or we have a source which tells us otherwise. Millzie95 (talk) 16:26, 15 May 2016 (UTC)

Approximate vs confirmed specs

Approximate spec values in the Products table should be marked with a tilde (~) rather than being calculated first and corrected later; this will help differentiate between confirmed and rough specs. Millzie95 (talk) 16:52, 13 May 2016 (UTC)

No. We don't allow original research. If something is not confirmed, it should be removed from the article. --Yamla (talk) 12:43, 15 May 2016 (UTC)
I wasn't referring to original research, rather to confirmed rough values we don't know exactly: ~6500 could be 6382 or 6616, but we know it's around 6500. Millzie95 (talk) 14:32, 15 May 2016 (UTC)

Founders Edition & MSRP

Should Founders Edition pricing be included alongside MSRP? Millzie95 (talk) 17:04, 14 May 2016 (UTC)

Sure it should. Done. TranslucentCloud (talk) 10:16, 16 May 2016 (UTC)
Thanks for your contribution TranslucentCloud. Millzie95 (talk) 20:30, 16 May 2016 (UTC)
Nvidia and its partners deceive buyers by setting an inflated price for the video card: $379 is a non-existent price for the GTX 1070. Spronkin1 (talk) 18:51, 4 July 2016 (UTC)

Article improvements

The article is coming along nicely: more specific detail has been added, the products table has been updated with the most recent information we have, the references have been improved, and the see also section has been expanded. Millzie95 (talk) 08:06, 15 May 2016 (UTC)

Single-precision performance of 1070

@210.187.221.194: is claiming [1] cites his/her claim that the 1070 has a single-precision processing power of 6500 GFLOPS. However, the article states, "Single precision performance is calculated as 2 times the number of shaders multiplied by the boost core clock speed." The article claims the 1070 has 1920 shader processors and the citation shows a boost clock of 1.6 GHz. 2 * 1920 * 1.6 is 6144, not the 6500 that the user is claiming. Perhaps the article is explaining how to do this calculation incorrectly? I'm concerned that this user is adding uncited and incorrect information and then launching personal attacks when challenged. --Yamla (talk) 16:22, 17 May 2016 (UTC)

http://images.nvidia.com/geforce-com/international/images/050616-geforce-dot-com/nvidia-geforce-gtx-1070/nvidia-geforce-gtx-1070-introducing-the-geforce-gtx-1070.png

Are you stupid or really unable to use your brain to find info? Nvidia says 6.5 TFLOPS right there on their website. — Preceding unsigned comment added by 210.187.221.194 (talk) 16:27, 17 May 2016 (UTC)

Enough with the personal attacks. Any more and you'll have to take a time-out. For other editors: our claim and the citation no longer match up. Which is correct, our claim about how you calculate single-precision performance, or the Nvidia website? --Yamla (talk) 16:30, 17 May 2016 (UTC)

I think the same can be said of the GTX 1080: 8228 is calculated from the base core clock (1.607 * 2 * 2560 = 8227.84), but it could be 8873 calculated from the boost clock (1.733 * 2 * 2560 = 8872.96). — Preceding unsigned comment added by 124.35.68.250 (talk) 09:43, 18 May 2016 (UTC)

The GTX 1070 does have 1920 CUDA cores, 120 TMUs and 64 ROPs; the difference comes from the way single-precision performance is calculated. We have a number from Nvidia of 6500 GFLOPS, but this is an approximate figure. GFLOPS can be calculated to the nearest whole number by multiplying the CUDA core count by 2, then multiplying that by the base clock or boost clock; both results are now listed in the products table, with the boost-clock figure in brackets. Millzie95 (talk) 12:12, 19 May 2016 (UTC)
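
For anyone checking these numbers, here is a quick sketch of the calculation described above (an illustration in Python, not a source; the 1506/1683 MHz clocks for the 1070 and 1607/1733 MHz clocks for the 1080 are Nvidia's official base/boost figures):

```python
# Peak FP32 throughput: each CUDA core does 2 FLOPs per clock (one fused
# multiply-add), so GFLOPS = 2 * cores * clock in GHz.
def fp32_gflops(cuda_cores, clock_mhz):
    return 2 * cuda_cores * clock_mhz / 1000

print(fp32_gflops(1920, 1506))  # GTX 1070 base:  ~5783 GFLOPS
print(fp32_gflops(1920, 1683))  # GTX 1070 boost: ~6463 GFLOPS (Nvidia's "6.5 TFLOPS")
print(fp32_gflops(2560, 1607))  # GTX 1080 base:  ~8228 GFLOPS
print(fp32_gflops(2560, 1733))  # GTX 1080 boost: ~8873 GFLOPS
```

Nvidia's marketing figure is simply the boost-clock result rounded, which also explains the 6144 vs 6500 confusion above: 6144 comes from plugging a rounded 1.6 GHz into the same formula.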

Reliable sources

210.187.223.163 (talk · contribs) removed external links in this edit, saying "Stop adding links that are not official Nvidia links". Actually, we prefer journalistic sources over vendor sources and should use them in preference; see WP:RS, specifically the "Vendor and e-commerce sources" section. I have not reverted the removal because I have not verified whether the removed sources are of sufficient journalistic integrity, or whether they just serve as spam in this article. Their removal may be entirely appropriate. --Yamla (talk) 19:26, 18 May 2016 (UTC)

I agree; we need a mix of official and journalistic sources. Claims from Nvidia's site might be coming straight from the horse's mouth, but we need third-party sources to verify the flow of information from them. Now I see that journalistic sources have been removed from the products table. New journalistic sources should be added now that we have the full specifications confirmed: not all specs are listed on Nvidia's website, so we can't 100% verify any of its claims, and some independent journalism would be appreciated. Millzie95 (talk) 12:19, 19 May 2016 (UTC)

Floating point performance calculations

I've noticed there's a bit of back and forth from editors and users as new information comes out on the official clock speeds of the known 10-series parts. Some are editing floating-point performance numbers to reflect (shaders * 2 * boost clock); some are instead following the proper (shaders * 2 * base clock) calculation. In some ways, both are valid: with parts like these having a minimum boost clock that is much higher than their base clock, their peak FLOPS can be significantly higher than their rated base FLOPS (both the GTX 1070 and GTX 1080 gain somewhere in the region of 600-700 GFLOPS at minimum turbo). However, the technically correct calculation is (shaders * 2 * base clock), and no previous article has displayed calculations using the boost clock of any particular card, such as the article for the GeForce 900 series. I personally feel that the boost-clock calculation is not only unnecessary but also inaccurate, as Nvidia's GPU Boost technology will allow cards to turbo well beyond the minimum rated boost. --Arbabender (talk) 01:41, 19 May 2016 (UTC)

I guess you're right about how to calculate performance. But first, you said, "Single precision performance is calculated as 2 times the number of shaders multiplied by the boost core clock speed," and "by the boost core" is clearly wrong; I think you should have written "by the base core clock". — Preceding unsigned comment added by Byunsangho (talk · contribs) 09:28, 19 May 2016 (UTC)

Thanks for your comments; both base and boost clocks for FP32, FP64 and FP16 are now included, with the boost-clock figures in brackets. Millzie95 (talk) 12:26, 19 May 2016 (UTC)
Related to this, I have been having a similar conversation over on the RX 400 series talk page, but in relation to the texture and pixel fill rates. In both the GeForce 10 series and AMD Radeon 400 series articles these values are calculated from the base clock. However, in reliable sources the value is higher and clearly calculated using the boost clock.[1] Shouldn't the boost value be included/used (and preferentially so, IMO), as it is published by a reliable source? Using the base clock appears to me to be original research if used to the exclusion of citable values. Dbsseven (talk) 22:12, 24 August 2016 (UTC)
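
To make the two fill-rate calculations concrete, here is a minimal sketch (GTX 1080 figures; the boost-clock results are the ones that match reviews such as the Tom's Hardware piece cited below):

```python
# Texture fill rate (GTexels/s) = TMUs * clock (GHz)
# Pixel fill rate (GPixels/s)   = ROPs * clock (GHz)
def fill_rates(tmus, rops, clock_mhz):
    ghz = clock_mhz / 1000
    return tmus * ghz, rops * ghz

# GTX 1080: 160 TMUs, 64 ROPs, 1607 MHz base, 1733 MHz boost
print(fill_rates(160, 64, 1607))  # base:  (~257.1 GT/s, ~102.8 GP/s)
print(fill_rates(160, 64, 1733))  # boost: (~277.3 GT/s, ~110.9 GP/s)
```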

References

  1. ^ Angelini, Chris (17 May 2016). "Nvidia GeForce GTX 1080 Pascal Review". Tom's Hardware. Retrieved 24 August 2016.

Whitepaper

Thanks, whoever added the whitepaper link; it's a good source and provides plenty of useful info. Millzie95 (talk) 14:45, 23 May 2016 (UTC)

Volta link

There is no Volta article, so the link just goes straight to the GeForce article. — Preceding unsigned comment added by 84.212.73.96 (talk) 10:26, 11 June 2016 (UTC)

Semi-protected edit request on 26 August 2016

The GTX 1060 pixel fill rate should be 32 px/clock, not 48. While it has 48 ROPs, each GPC can only handle 16 px/clock; this is the same reason the 1070 is listed lower than the 1080 in this article.

EDIT: So to be in line with the rules, it should be 48.2 GP/s under desktop and 44.9 GP/s for notebook. 74.66.132.81 (talk) 03:15, 26 August 2016 (UTC)
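
If the IP's reasoning is correct, the effective pixel fill rate is capped by raster throughput rather than raw ROP count. A sketch of that logic, assuming GP106's 2 GPCs, the 16 px/clock-per-GPC limit claimed above, and base clocks of 1506 MHz (desktop) and 1404 MHz (notebook), which reproduce the requested numbers:

```python
# Effective pixel fill rate is bounded by the rasterizers as well as the
# ROPs: assume each GPC can emit at most 16 pixels per clock.
def pixel_fillrate_gps(rops, gpcs, clock_mhz, px_per_gpc=16):
    px_per_clock = min(rops, gpcs * px_per_gpc)
    return px_per_clock * clock_mhz / 1000  # GPixels/s

# GTX 1060 (GP106): 48 ROPs but only 2 GPCs -> 32 px/clock
print(pixel_fillrate_gps(48, 2, 1506))  # desktop:  ~48.2 GP/s
print(pixel_fillrate_gps(48, 2, 1404))  # notebook: ~44.9 GP/s
```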

  Not done: please provide reliable sources that support the change you want made. Dat Guy (talk · contribs) 05:26, 29 August 2016 (UTC)

GTX 1050 new data

See https://www.techpowerup.com/gpudb/2875/geforce-gtx-1050 — Preceding unsigned comment added by 91.67.67.49 (talk) 15:56, 7 October 2016 (UTC)

Spec refs

I notice that the spec references keep being changed from second parties to Nvidia itself. I do not believe this is okay, as the Nvidia pages do not include full specs (core config, for one example), and Wikipedia's policy on sources encourages second and third parties. Anybody else have any thoughts? Dbsseven (talk) 16:58, 19 October 2016 (UTC)

GTX 1050 (Ti) is manufactured on 14 nm (not 16 nm)

http://www.anandtech.com/show/10768/nvidia-announces-geforce-gtx-1050-ti-gtx-1050
https://techreport.com/review/30822/nvidia-geforce-gtx-1050-and-gtx-1050-ti-graphics-cards-unveiled — Preceding unsigned comment added by Bouowmx (talk · contribs) 13:46, 21 October 2016 (UTC)

Direct3D support

The official information is now on Nvidia's support site: SLI is officially not supported (along with all the other implementations), and async compute is not supported. Time for an official "Direct3D support on Pascal" section on this page? — Preceding unsigned comment added by 84.80.157.6 (talk) 14:01, 12 November 2016 (UTC)

Titan Xp Base core clock

There's a vandal adding deliberately uncited information to the chart stating the Titan Xp's base clock is 1405. Now, that number is probably correct, but we need a citation for it. Nvidia's provided spec sheet shows the boost clock is 1582 MHz but doesn't list the base clock. Anyone have a citation which meets WP:RS that we can use here? --Yamla (talk) 00:21, 18 April 2017 (UTC)

I wouldn't even go so far as to say the 1405 number is "probably correct". The only reputable source I could find describing the base clock suggests it is 1481 MHz, but is clearly uncertain.[1] Calculating from the base-clock GFLOPS (from here[2]), I get 1562 MHz. Dbsseven (talk) 17:25, 18 April 2017 (UTC)
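For reference, that back-calculation is just the GFLOPS formula inverted; a sketch using the roughly 12,000 GFLOPS FP32 and 3840 CUDA cores from the Tom's Hardware piece:

```python
# Invert GFLOPS = 2 * cores * clock (GHz) to recover the clock.
def clock_mhz_from_gflops(gflops, cuda_cores):
    return gflops * 1000 / (2 * cuda_cores)

print(clock_mhz_from_gflops(12000, 3840))  # Titan Xp: ~1562.5 MHz
```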
This has now been cited (at 1480 MHz). Apparently the 1405 number was indeed wrong. --Yamla (talk) 11:07, 2 May 2017 (UTC)

https://www.techpowerup.com/gpudb/2948/titan-xp Did you 2 morons even look at the source you cited? It says "GPU Clock: 1405". It really shows that ignorant morons like both of you should not be allowed to edit at all. — Preceding unsigned comment added by 175.138.233.40 (talk) 15:27, 15 May 2017 (UTC)

No personal attacks. That's not what the page said when it was initially added to the article. The number, 1405, did not appear on that page. It was subsequently updated. --Yamla (talk) 16:00, 15 May 2017 (UTC)

References

  1. ^ Smith, Ryan (7 April 2017). "Nvidia Announces Nvidia Titan Xp Video Card". Anandtech. Retrieved 18 April 2017.
  2. ^ Broekhuijsen, Niels (6 April 2017). "Nvidia's New Titan Xp Graphics Card Unlocks The GP102's Full Potential". Tom's Hardware. Retrieved 18 April 2017.

GeForce GT 1030

The 1030 has recently been announced, with the specs on Nvidia's website. Apparently it's the last Pascal card, and it needs to be added to the table. — Preceding unsigned comment added by Hoolahoopz (talk · contribs) 02:09, 17 May 2017 (UTC)
^^ Not true. NVIDIA has plans for a 1070 Ti. — Preceding unsigned comment added by 2602:306:B826:6A30:4DB6:3DB5:770D:E476 (talk) 00:39, 20 October 2017 (UTC)

There are 3 of them now, all GP108 / MX150. [1] 68.101.121.218 (talk) 04:03, 12 September 2018 (UTC)

Driver links

@210.187.151.217: I see you have repeatedly added links to the current Nvidia drivers to this product-series article (and a number of other Nvidia GeForce product articles). I appreciate your effort to add more links to Wikipedia; however, these links are not encyclopedic and not in keeping with WP policy (see WP:NOTDIRECTORY and WP:LINKFARM). Multiple editors (Yamla, Denniss, and myself) have reverted your edits and provided comments explaining why this editing style is disruptive. If there is a specific, justifiable reason for these links, please discuss it here (or on the appropriate talk pages). The policy is edit-revert-discuss; repeated restoration of your edits without discussion is edit-warring. Continued disruptive editing will lead to referral to the administrators' noticeboard for edit-warring. Dbsseven (talk) 16:25, 10 January 2018 (UTC)

High-end products

@Cautilus: (and anyone else interested) I see there has been some back-and-forth on the categorization of the Titan X. It seems this and similar categorization issues could be resolved with better-defined categories. In the case of the Titan X, multiple RSs describe it as "prosumer" and targeted at compute rather than gaming markets.[2][3][4] Finding a better consensus, and perhaps adding or changing the infobox categories, might alleviate some of these issues. A related discussion is open on the graphics processing unit infobox talk page; please feel free to chime in there. Dbsseven (talk) 15:38, 5 June 2018 (UTC)

DisplayPort 1.4

Apparently, DisplayPort 1.3/1.4 support is disabled on Pascal cards by default; it's listed as "DisplayPort 1.2 Certified, DisplayPort 1.3/1.4 Ready" in the official specs.
Nvidia released a firmware update that enables DP 1.4 support: https://nvidia.custhelp.com/app/answers/detail/a_id/4674 — Preceding unsigned comment added by 212.178.20.226 (talk) 20:24, 7 June 2018 (UTC)

Successor

I reverted this change. The provided citation does not actually state that the Turing architecture is the successor to the GeForce 10 series, nor does it mention that the next round of GPUs will be called GeForce 20. There are a lot of rumours floating around about the next GPUs, but Wikipedia requires reliable citations, not predictions. There's a good chance the next GPUs will be called GeForce 20 and will be based on the Turing architecture, but that's not the point. WP:V. --Yamla (talk) 13:41, 14 August 2018 (UTC)

GeForce 1660 is an actual thing

Actually, both the 1660 and the 1660 Ti. Link 1, Link 2. Not sure where this is supposed to go. Is this part of the GeForce 10 series? They're Turing cards, so... GeForce 16? --uKER (talk) 08:15, 22 January 2019 (UTC)

Well, rumoured anyway. I think it best to wait until there are reliable sources for this; videocardz.com is notoriously unreliable, if memory serves. --Yamla (talk) 11:04, 22 January 2019 (UTC)

Mebibytes vs Megabytes

I see someone has touched up some of the GeForce series' articles, changing some of the units (apparently at random) in the tables' headers to refer to kibibytes, mebibytes and gibibytes instead of the usual kilobytes, megabytes and gigabytes. Personally, I see the change as something one might do out of novelty upon learning that such units exist; despite the terms being technically correct, using them seems quite pedantic, in that it introduces a technicality that not only goes against the way the whole world refers to these units of information but even causes the article to contradict all of its cited (official) sources, which employ the usual non-binary-based units. The ambiguity between the two possible values of the non-binary units is a known, expected fact, acknowledged in the lead of the Gigabyte article, probably among others. --uKER (talk) 14:13, 24 February 2019 (UTC)

I admit that maybe I was too attached to the pages whose tables I and a few other people had to make ourselves. The reason I changed megabytes to mebibytes is that Windows has always reported the space on a 2-terabyte hard drive as something like 1.8 terabytes: the capacity is labelled TB, but in the code that measures the space on your hard drive, Windows is actually measuring in TiB, a completely different way of measuring the space, and it fails to mention this. Because of that, I assumed the same applied to graphics cards, though from the looks of how much VRAM I have, either Nvidia uses the decimal system like Windows's labels, or Windows corrects it only for the VRAM readout so that it shows the correct amount in the binary system. In any case, I hope we can reach a good solution. Thanks TurboSonic (talk) 15:28, 24 February 2019 (UTC)
I understand your reasoning, and yes, you're technically right, but the whole world has come to use the regular units with their binary-based meaning, so introducing the technically accurate names does more harm than good at this point: it introduces an element that won't be familiar to the reader and will create discrepancies with the sources. Hope you understand. Cheers. --uKER (talk) 20:38, 24 February 2019 (UTC)
Oh, and about the drive capacity reported by Windows: I'd say the issue you're facing, more than the binary base of the units, is that the space reported by Windows is formatted space, which is total space minus the space taken up by the file tables; that is why you don't see any discrepancy in the VRAM value. The discrepancies you may encounter due to the "gibibyte" thing in everyday life are pretty harmless (24 units in 1000 is only 2.4%), so you can forget about it and you'll be fine. --uKER (talk) 20:43, 24 February 2019 (UTC)
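For anyone following along, the arithmetic behind the two points above is just a units conversion; a quick sketch:

```python
# Decimal (SI) prefixes vs binary (IEC) prefixes for the same byte count.
def tb_as_tib(tb):
    return tb * 10**12 / 2**40

print(tb_as_tib(2))    # ~1.82: a "2 TB" drive reads as ~1.8 TiB
print(1024 / 1000)     # 1.024: the 2.4% gap at the kilo level...
print(2**40 / 10**12)  # ~1.10: ...compounds to ~10% at the tera level
```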
I've thought about it for a bit, and I've decided that you're correct: it doesn't change much and it's a harmless change. While I won't go changing every single article that uses gibibytes vs gigabytes, I agree with you on that, so feel free to edit what you think is right for Wikipedia. After all, you look like a pretty good editor on Wikipedia. Thanks TurboSonic (talk) 01:18, 25 February 2019 (UTC)
Glad we could reach an agreement. Cheers! --uKER (talk) 04:23, 25 February 2019 (UTC)

Should There Be Separate Articles For Each GPU?

It might be more convenient for background info on a single card, compared to having to scroll past everything.

I really don't think so. This article isn't that long, and I don't see much need to break it up into 15 additional articles. --Yamla (talk) 00:15, 27 May 2019 (UTC)
No point. There isn't much to say about each individual one; they're basically the same thing with different clocks and some other minor differences. --uKER (talk) 01:39, 27 May 2019 (UTC)

Please document the MX250

The MX250 is available on laptops. Finding technical information is a challenge. — Preceding unsigned comment added by 68.109.194.172 (talk) 01:37, 7 September 2019 (UTC)

Broken/incorrect template in infobox?

As you can see from my recent edit, I temporarily replaced the release-date template with standard typing, as it still incorrectly read 7 years; consider it a temporary fix until the issue is solved completely. Hope you understand, and thank you. Aliy Dawut (talk) 06:44, 4 June 2024 (UTC)

1050 & 1050 Ti Mobile GP106 variant?

https://www.techpowerup.com/gpu-specs/geforce-gtx-1050-ti-mobile.c3013

Looking at some sites, there are 1050 and 1050 Ti Mobile parts that are apparently GP106; you can also see that Nvidia's GPU Decode Encode Matrix site lists two separate versions. This info needs confirmation. 86.33.28.43 (talk) 18:57, 12 August 2024 (UTC)