This level-5 vital article is rated C-class on Wikipedia's content assessment scale.
I do not think that floating-point units deal with trigonometric functions. Such functions are usually in a software library and use series expansions for evaluation. The evaluation may be done using an FPU. Some FPUs implement square root, I think! User:Rjstott
- Some FPUs do trigonometric functions. For example, the 80387 has FSIN, FCOS, FSINCOS, FPTAN and FPATAN instructions. --Zundark, 2002 Jan 30
- You are right about using series for evaluation and the square root, but the 68881/68882 FPUs, for instance, designed for the 680x0 family of processors, include cos, sin and atan in microcode. When the FPU functions were integrated in the 68040, though, the trigonometric functions were emulated. OprgaG
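For readers unfamiliar with the software approach described above, here is a minimal sketch of evaluating sin(x) by summing a truncated Taylor series. It is not taken from any particular library; production math libraries add argument reduction and use minimax polynomials rather than a plain Taylor series, but the "evaluate by series in software" idea is the same.

```c
#include <stdio.h>

/* Sketch only: approximate sin(x) for small |x| with a truncated Taylor series.
 * Real libm implementations reduce the argument and use minimax polynomials. */
static double sin_series(double x, int terms)
{
    double term = x;   /* first term of x - x^3/3! + x^5/5! - ... */
    double sum  = x;
    for (int n = 1; n < terms; n++) {
        /* each term is the previous one times -x^2 / ((2n)(2n+1)) */
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum  += term;
    }
    return sum;
}

int main(void)
{
    printf("sin(0.5) ~ %.15f\n", sin_series(0.5, 10));  /* ~0.479425538604203 */
    return 0;
}
```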
Unsorted text
When was it invented and by whom?
- I do not know if Konrad Zuse invented the mathematics of floating-point, but I believe he did the first implementation of floating-point, in the Z1 (computer). — Preceding unsigned comment added by Jeff.science (talk • contribs) 15:01, 26 April 2017 (UTC)
---
Why was this page deleted? There's no talk history discussing it and it doesn't show up in the deletion log. It does need some cleanup, but I'm just going to restore it. Nkedel 00:54, 6 March 2006 (UTC)
Should the second section be merged with coprocessor? Nkedel 02:03, 6 March 2006 (UTC)
---
The section on coprocessors is unnecessarily biased towards the x86. Many other chips until the early-to-mid 1990s had separate coprocessors, even if they were not explicitly mentioned in the literature. MIPS comes to mind, but this was also true of the VAX and, if memory serves, HPPA. I can rewrite it if necessary, but someone with more specific knowledge than I have would be better suited.
Examples of implementing SIMD using the FPU
The current comment that AMD64 is an example of a CPU architecture that uses the FPU to implement SIMD is technically correct, but may give the impression that this has only been done recently. At the risk of being x86-centric, the example should mention any processor that implemented MMX or 3DNow!, which date back to the Pentium MMX and the K6. Calyth 06:51, 8 September 2006 (UTC)
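To illustrate the register aliasing being referred to, here is a hedged sketch using the MMX intrinsics (assuming GCC or Clang targeting x86). The MM0-MM7 registers are mapped onto the low 64 bits of the x87 register stack, which is why _mm_empty() (the EMMS instruction) has to be issued before returning to ordinary x87 floating-point code:

```c
#include <mmintrin.h>   /* MMX intrinsics; x86 targets only */
#include <string.h>
#include <stdio.h>

int main(void)
{
    /* Each __m64 holds four packed 16-bit integers and lives in an MMX
     * register, which aliases the low 64 bits of an x87 FPU register. */
    __m64 a = _mm_set_pi16(1, 2, 3, 4);      /* elements listed high to low */
    __m64 b = _mm_set_pi16(10, 20, 30, 40);
    __m64 c = _mm_add_pi16(a, b);            /* four 16-bit additions in one op */

    short out[4];
    memcpy(out, &c, sizeof out);

    _mm_empty();  /* EMMS: release the FPU registers before any x87 FP code */

    printf("%d %d %d %d\n", out[3], out[2], out[1], out[0]);  /* 11 22 33 44 */
    return 0;
}
```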
illegal to export
There was a story (urban legend?) floating around my programming class, to the effect of: ... exporting floating-point chips to certain countries was illegal and violated the terms of CoCom ... CPU manufacturers specifically decided not to integrate FPUs on the CPU chip, but instead to have separate CPU chips (that they could sell to those countries) and FPU chips ... after the law changed, making it legal to sell FPUs to those countries, CPU manufacturers immediately stopped making separate FPU chips and immediately began making integrated CPU+FPU chips. ...
Any truth to this story? If so, please mention the facts in the article. --68.0.124.33 (talk) 04:23, 13 April 2008 (UTC)
- It seems unlikely. A more reasonable explanation is that growing transistor budgets made it practical to include the FPU on-chip, and the resulting performance gains made it attractive to do so.
- --PeterJeremy (talk) 21:23, 19 February 2010 (UTC)
- There is a small amount of truth to this. The US government places export control restrictions on computer components based upon their capability. Aggregate floating-point throughput is one of the parameters used to determine what components are restricted. The limits are quite generous and rarely affect widely-used computer components. A non-expert discussion of this topic can be observed in http://www.nytimes.com/1990/03/07/business/us-revising-guidelines-on-computer-exports.html, while the formal rules are released by the US Department of Commerce. See https://www.bis.doc.gov/index.php/documents/product-guidance/865-practioner-s-guide-to-adjusted-peak-performance/file and https://www.bis.doc.gov/index.php/documents/regulations-docs/federal-register-notices/federal-register-2014/1055-ccl4-3/file. Sometimes, the restrictions are specific to a particular institution, rather than an entire country (e.g. https://www.theregister.co.uk/2015/04/10/us_intel_china_ban/). Jeff.science (talk) 14:57, 26 April 2017 (UTC)
x86 Floating Point Processor
How about writing about how much the x86's floating point processor sucks in terms of accuracy? 141.225.144.225 (talk) 21:35, 24 May 2008 (UTC)
- I think that the x87 uses 80-bit FPU registers, storing 64-bit floating-point values in extended precision. Anyway, it is IEEE compliant, so there is no reason to believe that it is "inaccurate". All FPUs are inaccurate to some extent, limited by the width of their registers. You were not referring to the Pentium FDIV bug, were you? That only existed in the original Pentium and was due to a hardware bug, not the design. Anyway, such statements (that the x87 sucks) are not suitable for Wikipedia (they violate neutrality). Rilak (talk) 04:59, 25 May 2008 (UTC)
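As a concrete illustration of the extended-precision point: on compilers where long double maps to the x87 80-bit extended format (an assumption; this holds for GCC or Clang targeting x86, but not for every compiler), the extra significand bits are visible through <float.h>:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* On typical x86 GCC/Clang targets, long double is the x87 80-bit
     * extended format: 64 significand bits versus 53 for double. */
    printf("double      significand bits: %d\n", DBL_MANT_DIG);   /* 53 */
    printf("long double significand bits: %d\n", LDBL_MANT_DIG);  /* 64 on x87 */

    /* The extra bits show up as a much smaller machine epsilon. */
    printf("DBL_EPSILON  = %g\n",  DBL_EPSILON);    /* ~2.22e-16 */
    printf("LDBL_EPSILON = %Lg\n", LDBL_EPSILON);   /* ~1.08e-19 on x87 */
    return 0;
}
```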
More Logical Article Layout
The current article layout appears rather haphazard and concentrates on the x87 family, with very little discussion about earlier FPUs. I believe a more reasonable approach would be something like:
- Floating point library (as per the current section)
- Floating point as an option - Covering early mainframes and minis where floating point was part of the instruction set but the hardware was an option. Typically, the OS would emulate any missing instructions.
- Floating point as a peripheral - Early microprocessors had no provision for floating point and AMD 9511, Intel I8231 and Weitek boards were treated as peripherals.
- Floating point co-processors - I8087, M68881 and NS32081
- Integrated Floating point
This is a roughly historical perspective. From a hardware perspective, 2, 3 & 4 are similar, whilst from a software perspective, 2, 4 & 5 are similar. There was some overlap between 3 & 4 - the M68881 and NS32081 could be used as peripherals on earlier CPUs that did not support the co-processor interface, or on other vendors' CPUs.
--PeterJeremy (talk) 22:20, 19 February 2010 (UTC)
Citations Needed
Large portions of the article have no citations at all. Not sure where the info came from. Wqwt (talk) 00:46, 10 March 2018 (UTC)
CORDIC
CORDIC is commonly used for transcendental functions on handheld (and previously desk) calculators. Traditionally, these do digit-serial BCD arithmetic. Though the usual explanation of CORDIC is binary, there is a decimal version. Computers using hardware floating point with a multiplier commonly use polynomial expansions instead. Gah4 (talk) 06:57, 18 November 2023 (UTC)
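For readers who have not seen the algorithm, here is a minimal binary CORDIC sketch in rotation mode, computing sine and cosine of a small angle with only adds, shifts and a small arctangent table. It is an illustration under those assumptions, not anyone's calculator firmware, and for clarity it uses doubles and builds its table with the standard math library rather than fixed-point BCD:

```c
#include <math.h>
#include <stdio.h>

#define ITER 32   /* number of CORDIC micro-rotations */

/* Binary CORDIC, rotation mode: for |z| < pi/2, returns cos(z) and sin(z)
 * using only add/subtract, shifts (here via ldexp) and a table of atan(2^-i). */
static void cordic_sincos(double z, double *c, double *s)
{
    double atan_tab[ITER], k = 1.0;
    for (int i = 0; i < ITER; i++) {
        atan_tab[i] = atan(ldexp(1.0, -i));            /* atan(2^-i) */
        k *= 1.0 / sqrt(1.0 + ldexp(1.0, -2 * i));     /* gain K = prod cos(atan(2^-i)) */
    }

    double x = k, y = 0.0;   /* start at (K, 0) so the rotation gain cancels out */
    for (int i = 0; i < ITER; i++) {
        double d = (z >= 0.0) ? 1.0 : -1.0;            /* rotate the residual angle toward zero */
        double xn = x - d * ldexp(y, -i);              /* x - d*y*2^-i */
        double yn = y + d * ldexp(x, -i);              /* y + d*x*2^-i */
        x = xn;
        y = yn;
        z -= d * atan_tab[i];
    }
    *c = x;   /* ~cos of the original angle */
    *s = y;   /* ~sin of the original angle */
}

int main(void)
{
    double c, s;
    cordic_sincos(0.5, &c, &s);
    printf("cos(0.5) ~ %.9f  sin(0.5) ~ %.9f\n", c, s); /* 0.877582562  0.479425539 */
    return 0;
}
```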