This article is rated B-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The inappropriate link to Talk:PL above is because of a technical restriction.
Early discussions
I deleted the following, which is not true:
- The original IBM mainframes used the same symbol 'I' for both the letter 'I' and the number '1'. Thus, the way that it is commonly written and normally spelled are different.
IBM mainframes used separate symbols for the letter and the number. The 'I' in PL/I is a roman numeral. -- Simon J Kissane
- Thank you. That's what I thought, too, but wasn't sure enough about 3 decade old memories to make the change. --Buz Cory
- I did more research and I hope this clarifies what I meant to say:
I was trying to make a distinction between the character and the font. Unfortunately, I used the wrong symbols. It was the small "L" and the number "1" that used the identical font symbol (i.e., l and 1 look the same - more or less - even though I typed the "el" character first, and the "one" character second).
I was not intending to say that the characters were identical (i.e., had identical values in EBCDIC), merely that they looked the same. This was true for the Courier font ball used in IBM Selectrics, which were also used as operator terminals in some S/360 installations, and the font symbols may have been identical in the print cylinder in the 1403 printer. Because I don't have the original context - or don't know how to get my original contribution for the context, I'm not sure why I even brought up the character confusion thing... Cheers --17:29, 31 December 2006 (UTC)RSzoc
- this book was typeset on a 1403 with a TN (upper/lower case) print train. I believe that one and ell are different, but with the blurring of the ribbon, it might be hard to tell them apart. For oh and zero, the oh is more square, and the zero more round. Gah4 (talk) 22:15, 13 March 2019 (UTC)
- The characters were not the same, but we were taught to be vigilant both at writing code to be punched (on cards) and when testing and troubleshooting, for too easily overlooked mistypes of 0O and Il1. So on coding forms to be given to others to be punched, zeros (0) needed to be crossed, as here with the Courier font, and O not. I don't remember if there was some similar trick with lI1. Marjan Tomki SI (talk) 00:53, 5 January 2023 (UTC)
- Lower case L (l) and the digit One (1) were distinct but, as previously noted by Gah4, easy to confuse, but I don't recall any convention for them on coding forms. I remember a Tower of Babel for letter O and digit 0; while I was taught to bar the O and slash the 0, BTL had a chain that barred the digit rather than the letter. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:53, 5 January 2023 (UTC)
- When I learned it (many!) years ago, it was crossed ohs, and not zeros. Also, crossed Z's, so as not to confuse with 2. I still cross my capital Z, if I write one. Not usually ohs, unless it seems necessary. (As in writing programs or, more recently, confirmation codes.) Gah4 (talk) 14:19, 5 January 2023 (UTC)
- This one has a scan of a coding form (in green!). There is a place on top, where you show which graphics you use, and which punches go with it. Could have been interesting in the BCDIC days, when there were two punch codings for '-'. Gah4 (talk) 02:10, 6 January 2023 (UTC)
SABRE
The bit about SABRE is from personal experience. I was offered (and turned down) a position with the SABRE project ca 1965. Don't know if they stayed w/ PL/I or changed to a different language.
The (military/aerospace) project I was on at the time would have used PL/I if it had been usable early enough. We went through major contortions to do a real-time/multi-tasking system requiring dynamic allocation of records (structs to you C weenies) in FORTRAN IV. Further, FORTRAN required 512K of RAM to compile; the PL/I compiler would work in 128K. --Buz Cory
Evidently, the SABRE folks didn't get a really solid PL/I compiler for their platform until the early 1970s. They may have wanted to use PL/I earlier than that, but I can vouch that at least the portions of the system that American Airlines dealt with were almost entirely 7090 assembler before the early 1970s. (Working off my father's notes on this one; he wrote the compiler, so there's a chance his history is biased, of course.) --Cliff Biffle
Case insensitive
Not so sure about that "case insensitive" part. At that time virtually all work was done entirely in upper case due to hardware limitations. --Buz Cory
- I don't know about others, but most of OS/360 is upper case only. The JCL interpreter, and all the OS/360 compilers I know, are upper case only. It is usual for printers to map lower case to upper case, making this problem harder to find. Gah4 (talk) 02:36, 17 May 2017 (UTC)
- The Multics PL/I compiler was case-sensitive for identifiers; see page 3-3 of the Multics PL/I Reference Manual from 1976. I don't know whether keywords were recognized only in lower case or not; PL/I code for Multics was usually written in lower case.
- Not that the article currently says anything about case-sensitivity. Guy Harris (talk) 03:07, 17 May 2017 (UTC)
History
Nice job on the history. Some references to written documents (which I understand may not exist) would be nice. If it's personal experience, then say so. --drj
- ----
- The parts of the history that I wrote are personal experience. If one can find copies of Datamation from ca 1965, the arguments that I mentioned about not using PL/1 are in there. The part about NELIAC is from a book I read ca 1962 whose name and author I cannot recall.
- The part above about system requirements for FORTRAN and PL/1 is also personal recall. --Buz Cory
Most of the history stuff I added is purely from memory. I read it somewhere. I wasn't even alive when PL/I was being developed. -- Simon J Kissane
Retrospective
A couple of inaccuracies in the "Retrospective" section.
First, it states that the "user wouldn't know whether a statement was a declaration or an executable statement until one saw the period at the end." The truth is that PL/I uses (and has always used) the semi-colon as a statement terminator. COBOL is the language that used periods. Additionally, a declaration statement ALWAYS begins with the keyword "DECLARE" or "DCL", so it's easy to figure out what kind of statement it is. Also, "format free" languages whose statements could be spread over a number of lines before being terminated had been around since at least 1960/61, if not earlier. Even assembly language could span multiple lines if one put a "continuation" character in column 72 of the "card".
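To illustrate (a sketch of my own, not taken from the article): both kinds of statement end with a semicolon, and a declaration announces itself with DECLARE up front, so the two are never confused:
DECLARE (I, TOTAL) FIXED BINARY(15);  /* a declaration - begins with DECLARE, ends with ; */
TOTAL = 0;                            /* executable statements end with ; as well */
I = TOTAL + 1;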
Second, the article's statement that spaces were made unimportant "as in Fortran" (I paraphrase) is simply not true. In Fortran, spaces were truly irrelevant. Thus, one could write a "do statement" as follows:
DO I = 1, 10 or DOI=1,10
Both were considered to be the same (this is the Fortran of the late '60s and early '70s).
In PL/I spaces did matter. Taking the same example above:
DO I = 1 to 100; OR DOI=1 to 100;
are very different statements; I think the second one won't even compile.
Perhaps you are thinking of Keywords. Like all languages, PL/I has keywords (such as DECLARE, DO, IF, END, and so on), but, because PL/I had so many keywords, the language designers decided that none of them were to be RESERVED. Going to the Fortran example above, the word DO is a keyword AND is reserved, and cannot be used for any other purpose but in a DO statement. In PL/I, one can use DO in a do statement and also as a variable. The "headache" in writing a PL/I compiler was the requirement that it had to figure out from the context whether a word was being used as a keyword or not.
The example that I remember from back in the day was this. The following is a perfectly valid PL/I statement (because no keywords are reserved):
IF IF = EQUALS THEN THEN = ELSE; ELSE ELSE = IF;
Cheers RSzoc 01:02, 26 November 2006 (UTC)
Adding to our understanding of the name (I would rather PL/1, since that was intended), "A Genealogy of Computer Languages" by James Haddock says "In 1964, IBM was developing its System/360. Never happy with ALGOL, they wanted to have a dialect of their own as a system implementation language, but with the ability to handle COBOL style applications. The result was PL/1 (they also copyrighted the names PL/2 through PL/100 just in case)." Gc9580 (talk) 06:12, 21 March 2009 (UTC)
- In addition to blanks not being significant (in fixed form), Fortran also does not have reserved keywords. The language structure makes it less obvious, though. WATFIV restricts the seven-character sequence 'FORMAT(' to being only the beginning of an actual FORMAT statement, since that simplifies the parsing. Otherwise, FORMAT is a convenient variable (array) name for variable formats. Gah4 (talk) 22:06, 13 March 2019 (UTC)
- Also, the Fortran example above wouldn't apply until Fortran 90. You would need a statement label in there. Fortran 90 and later offer both fixed form (traditional) and a newer free form, where blanks are significant in the latter form. As for the above:
IF(IF.EQ.THEN) THEN
THEN=ELSE
ELSE
ELSE=IF
ENDIF
(I believe that is right, but didn't actually try it.) Gah4 (talk) 22:11, 13 March 2019 (UTC)
- A famous (?NASA) failure involved a FORTRAN statement resembling:
- DO 1800 J=1.60 /* the "point" should be a comma! and FORTRAN IV had no inline comments */
- As for WATFIV not having
- INTEGER FORMAT(250)
- ...
- FORMAT(I)=0
- or perhaps, better yet,
- FORMAT(I3)=0
- I don't know. Pi314m (talk) 18:06, 12 February 2020 (UTC)
- As for the first characters of a statement being 'FORMAT(', the WATFIV[1] manual mentions it on page 55. Gah4 (talk) 20:49, 12 February 2020 (UTC)
- The parsing problem with non-reserved keywords is overblown. I have found only a couple of cases (which I forget at the moment) where this is a problem which needs a bit of extra coding for the compiler to resolve. Peter Flass (talk) 21:43, 12 February 2020 (UTC)
- Slightly more complicated when blanks don't count, but the parsing methods for Fortran are well understood by now. As new features are added, someone has to verify that there aren't any ambiguities added. The H format descriptor complicated FORMAT statement parsing. Note, though, that Fortran 66 run-time formats have to be in arrays, and a convenient name for such an array is FORMAT. I believe I did that in Fortran 66 days. Gah4 (talk) 02:29, 13 February 2020 (UTC)
References
- ^ "WATFIV USER'S GUIDE" (PDF). www.jaymoseley.com. University of Waterloo. Retrieved 12 February 2020.
Year developed
Is 1964 the year PL/1 was first developed? Same as System/360? -- sabre23t 02:17, 19 Sep 2004 (UTC)
- I added to the article that PL/I was used at MIT in 1964, which was the oldest reference that I could find. Morris 23:02, Apr 12, 2005 (UTC)
- See Fernando J. Corbató#Further reading for the PL/I As a Tool for System Programming reference, which probably gives a lot of detail on this. Might we worth including that as a reference here too. Noel (talk) 18:55, 13 Apr 2005 (UTC)
The PL/I language specification was still being developed in 1964, and there was no PL/I compiler available from IBM when S/360 and OS/360 were released. Documentation for S/360 and OS/360 was available in 1964, but the first systems - at the lower end - did not begin shipping till 1965. My personal recollection is that IBM's production of system 360s didn't begin in high volume until 1968 or so. RSzoc 01:08, 26 November 2006 (UTC)
---
I was working on PL/I for IBM in England in 1964-1965, and I recall writing assembler language interrupt processing routines for the "Tape Operating System" TOS, that was available for the machine before OS/360, but of course not for release.
The 'F' in PL/I F refers to the fact that it was intended to run on a machine with only 64K (64 KiB in the ugly new abbreviation) of 'core', which was our random access memory. That's with an OS of 20K in there as well.
The '360' of System 360 was intended to imply completeness, as in a full circle of 360 degrees, which makes the same nonsense of S/370 and S/390 as "I support you 110%".
We had an apocryphal story that IBM UK was warned by the National Physical Laboratory that if they did not drop the NPL name, the real NPL would publish a standard or something called Infernal Bloody Mess, to be known by its initials.
As for the reluctance to adopt PL/I, there was a feeling at Hursley that part of the reluctance was the "Not Written Here" attitude of IBM USA.
DaveyHume (talk) 18:04, 9 March 2016 (UTC)
As for Multics and MIT, the web site http://www.multicians.org/pl1-raf.html states that "Multics PL/I" was based on an IBM language spec that was not available until 1968. Again, I believe the difference is the time difference between when specifications are made (e.g., 1965 in the case of Multics) and when a final software product containing that specification is actually produced. The above-referenced Web site states that the Multics PL/I compiler was developed in 18 months. If we take March 1968 as the starting point (when the PL/I language spec used by MIT was available) and add 18 months, we get September 1969 as when the Multics PL/I compiler was actually available. MPearl - what is your reference for the 1964 date?? RSzoc 01:08, 26 November 2006 (UTC)
As an aside, UNIX was named as a form of word play on Multics. Because the Multics OS was so big - many computer types felt at the time - Dennis Ritchie and Ken Thompson at Bell Labs named their OS Unix, implying small and lean.
Now, of course, all OSes are pretty much out of control, with nothing being small and lean, except for maybe whatever drives microwave ovens (that last sentence is an editorial comment by me - feel free to snip it out)... RSzoc 18:52, 18 November 2006 (UTC)
The first ship of PL/I F by IBM was definitely 1966. I have added the chronology of the 5 releases of PL/I as they show some interesting PL/I history. F had the advantage that the language manual was being written on the floor above the one John Nash's F compiler team inhabited in "B block" in Hursley. There was an advertised Digitek compiler that claimed to have shipped ahead of F - but the Multics team thought it was still vapourware, hence doing their own thing, PL/I. EPL has been quoted as 1964, but it cannot have been a PL/I at this stage. I suspect it was a special-purpose language initially and was adapted to PL/I as the PL/I Spec appeared in 1965/66.
—Preceding unsigned comment added by RogerofRomsey (talk • contribs) 16:06, 2 February 2010 (UTC)
The paragraphs on Multics and the year implemented are still just plain wrong. The text states and implies that the Multics PL/I (or EPL) compiler was used in 1964. However, this reference (http://www.multicians.org/pl1.html) shows that EPL was still not available as a working product even in 1966-1967. The other references for Multics refer to the IBM Language Specification document of 1968 as the relevant language definition. So my suggested timeline still holds, at least until we get a statement from someone who was there and can state: "We had a fully operational PL/I compiler in 196.... ". The other references cited in the section on Multics state things like "The compiler implemented most features except for ... input and output... ", which seems like a severe "to-do" item for making it usable... RSzoc (talk) 01:45, 18 July 2010 (UTC)
Subpage
Perhaps this isn't really important, but this article is actually a subpage: the subpage "I" of the article "PL". This obviously isn't the intention, but it seems to work alright. Deco 00:37, 17 Nov 2004 (UTC)
- I presume someone figured that out along the way. The Usenet newsgroup is named comp.lang.pl1, as slash isn't allowed in names. Gah4 (talk) 19:30, 11 February 2020 (UTC)
PL/I newsletter
Perhaps a link to the current (Jan 2005) PLI newsletter would be useful? [1] --ClemMcGann 00:50, 20 Mar 2005 (UTC)
PL/I and C
This page contains the claim (in the "External links" section) that "The C programming language was heavily modelled after PL/I". Alas, this is untrue. The C Programming Language (by Ritchie, Johnson, Lesk and Kernighan, 1977) says that "Many of its most important ideas stem from ... BCPL", and there is no mention of PL/I. As for BCPL, the BCPL Reference Manual (by Richards, Evans and Maybee, 1974) says that "BCPL is related to CPL", and again no mention of PL/I. (The CPL documents cited date from 1963/65, making it impossible for CPL to have been influenced by PL/I.) So I am going to remove that claim. Noel (talk) 22:20, 12 Apr 2005 (UTC)
- I see still that the article has a statement that PL/I influenced the development of C. I've never seen anything anywhere else that states such. Although it is possible that PL/I served as a good example to K&R of what not to do if you want to develop a compiler capable of running on a system with limited memory, influencing them that way. --64.238.49.65 (talk) 03:29, 7 February 2009 (UTC)
- Other than the comment syntax, I'm not sure what C took from PL/I. Guy Harris (talk) 19:03, 9 March 2016 (UTC)
- Well, and the use of semicolon as a statement terminator, and the names "static" and "auto"(matic) for storage classes. But a lot of C isn't PL/I derived, such as the variable declaration syntax and the compound statement bracketing. Guy Harris (talk) 19:44, 9 March 2016 (UTC)
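For what it's worth, a hedged sketch of my own showing the storage-class names in question on the PL/I side:
DECLARE COUNTER FIXED BINARY(31) STATIC INITIAL(0);  /* keeps its value across calls */
DECLARE SCRATCH FLOAT AUTOMATIC;                     /* a fresh instance on each entry */
C kept the keywords static and auto for much the same distinction, even though much of its declaration syntax came from elsewhere.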
- Kernighan and Plauger, in their superb "Elements of Programming Style", illustrate their principles with examples in both the Fortran and PL/I languages. They are of course far more widely applicable. This shows that K was fluent enough in PL/I.
- But the C language was designed in such a way as to be able to write a C compiler in its own language. Much as I prefer PL/I to Cobol and Fortran, I would run ten miles before I'd attempt to have a PL/I compiler written in PL/I. It has been said that "whereas Fortran has absurd restrictions, PL/I has absurd generalizations".DaveyHume (talk) 18:26, 9 March 2016 (UTC)
- The Iron Spring PL/I compiler and runtime are all written in PL/I, with a bit of assembler. This was ported from System/z to x86 with only a few lines of modification to the compiler (and, of course, a new code generator module). The Multics PL/I compiler was written entirely in PL/I. I would walk 100 miles before I'd have written it in C, but of course everyone has a favorite language. The nice thing about the absurd generalizations is that the compiler will usually take whatever you write, as long as it's syntactically correct, and do something sensible with it. Peter Flass (talk) 23:28, 9 March 2016 (UTC)
- Consider yourself lucky that you didn't work for General Electric. (And it ran on an operating system primarily written in PL/I.) Guy Harris (talk) 19:01, 9 March 2016 (UTC)
Lost text?
A large block of text was lost here during a spam/vandalism spree earlier this year. Assuming that this removal was an error, I have added it back, and done some copyediting to remove duplicate material that was added after that piece was lost; I also attempted to organize the material a bit better. Noel (talk) 19:40, 13 Apr 2005 (UTC)
IBM System/360
I know it's nitpicking, but IBM didn't refer to System/360 as the System/360. It was just System/360. My reference is a 1964 copy of the IBM Systems Journal where the entire architecture of the machine was described in Iverson notation. Shoaler 5 July 2005 17:22 (UTC)
"powerful" vs "ambitious"
I think I disagree with the revert done to my change of the adjective "powerful" to "ambitious." I appreciate the differences between PL/1 and C, but both are Turing-complete languages, and I think the use of the word "powerful" in this context is misleading; it gives a false impression, and is (in my opinion) a bit too cheerleading of a word to lead off an article with.
I prefer "ambitious" because it more accurately captures the scope of PL/1 as a featureful language, without reference to the perceived "power" of the language (which really, could mean anything).
Thoughts? I'm also open to other suggestions. As a test, I've changed powerful to "feature-rich", which I think also captures the true nature of the distinction.Nandesuka 15:27, 15 July 2005 (UTC)
- I like "feature-rich" (or maybe some other more grammatically correct adjective with the same meaning). I think that "powerful" is too vague with regards to a programming language. JYolkowski // talk 21:30, 15 July 2005 (UTC)
- The words “power” and “powerful” have long been associated with PLI. (Remember the Black Panther logo). You can achieve a great deal in a single PLI statement. Just as other languages can claim flexibility, portability, ease-of-use, easy-to-learn, and so on.
- A possible issue with “feature rich” is that it could refer to graphics or other features absent in PLI
- Perhaps “powerful” should be explained and an example given.
- (speaking of examples, I’m removing the hello world with the infinite loop)--ClemMcGann 23:27, 15 July 2005 (UTC)
Was PL/I the first commercial language to compile itself?
The article reads: PL/I was probably the first commercial language where the compiler was written in the language to be compiled.
- (moved posts from talk pages here) -- ClemMcGann 01:08, 6 August 2005 (UTC)
- I am of the opinion that Burroughs Algol was before PL/I (compiler written in its own language)
- (and Burroughs PL/I for the B6700 was written in Algol)
- --ClemMcGann 13:13, 5 August 2005 (UTC)
First off, thank you for getting back to me. I guess the easiest way to bring up concerns about factual information like that is to bring it up in the discussion pages, but the way you removed the line without a clear comment did make it impossible to know that you were doing so to dispute the timeline.
I suppose that some of this is my fault for confusing two languages. I could have corrected the error when I first saw it. I actually had a counter-example from NELIAC, which was first publicly announced in 1958, based on publications from the Naval Research Laboratory. What year was the compiler that you were referring to written?
- This was discussed last year on alt.folklore.computers; have a look at [2] and search for the string "Suspect Algol on Burroughs mainframes might have compiled itself first", or go to message 164 - regards --ClemMcGann 16:21, 5 August 2005 (UTC)
References
- Huskey, H. D., Halstead, M. H., and McArthur, R., "NELIAC - A Dialect of ALGOL", Communications of the ACM 3(8), August 1960
- Johnsen, R. L. Jr., "Implementation of NELIAC for the IBM 704 and IBM 709 computers", NEL Tech. Memo. No. 428, Sept. 1960.
- HOPL entry for NELIAC which is the source of the following quote:
- Significant in that it provided the first ever bootstrap implementation, and the standard reference for NELIAC (Halstead's book) was for many years the primary reference for such compilers.
- Ok, I just re-read what I wrote on PL/I. It's odd that you deleted what you did, since your delete did not argue the point specifically. That is, you removed one of three sentences in that paragraph, the only one that did not claim PL/I was the first commercial (and this is a key point) language designed this way.
- That said, I think the idea here is that BALGOL was a research language used in commercial settings, much like C was at first. PL/I was designed to be a commercial language. At least that's how I read it from the source I got that from. Now, of course, I can't find that source. Stupid me, I should have put it in the article, but not being a PL/I guy, I just assumed that it was common knowledge.
- Easy things first: do you dispute the dates involved? E.g. do you dispute that NELIAC came before BALGOL or that PL/I came (about 8 years) after both of them? If not, then I think we can move on to the question of whether PL/I can claim to be the first commercial bootstrapped language, no? -Harmil 22:57, 5 August 2005 (UTC)
- I only expressed a doubt, not a certainty.
- That doubt was based on comments made last year in alt.folklore.computers, It was a long thread.
- The claim “Suspect Algol on Burroughs mainframes might have compiled itself first and done it very well.” is here [3]
- A reply “Multics PL/I was written in PL/I, though it was bootstrapped via a temporary compiler (EPL) not written in PL/I.” [4]
- I do not dispute that “NELIAC came before BALGOL or that PL/I came (about 8 years) after both of them?”
- Actually I don't dispute anything, I just expressed a doubt.--ClemMcGann 01:08, 6 August 2005 (UTC)
- What means it "commercial", anyway? I would imagine that most assembly languages were compiling themselves before this, and it seems likely that some high-level languages were, too. —Preceding unsigned comment added by 173.13.229.185 (talk) 19:49, 11 June 2010 (UTC)
- fair point ClemMcGann (talk) 18:36, 12 June 2010 (UTC)
- In IBM terms, commercial would mean a language like COBOL, used for business purposes. Are there any COBOL compilers written in COBOL? 19:34, 11 February 2020 (UTC)
First commercial compiler
The first commercial version of the PL/I compiler by IBM was the (F) level compiler. In those days, IBM identified many of its software development/system applications with the letters E, F, G, and H. Each letter specified the minimum S/360 memory configuration needed to run that software. The memory configurations for the different letters were:
Level E: 32KB
Level F: 64KB
Level G: 128KB
Level H: 256KB
(Source: IBM System/360 Operating System Introduction Release 21, IBM Publication number GC28-6534-3, available from www.bitsavers.org).
Thus the PL/I (F) level compiler needed a minimum of 44KB to run (Source: IBM System/360 Operating System, PL/I (F) Compiler Program Logic Manual, IBM Publication Number Y18-6800-3, available from www.bitsavers.org).
For comparison purposes, when I run Microsoft's Outlook in Windows XP, it takes up about 14,604KB to sit there waiting for email to come in. This is about 331 times (33,100%) more memory than the first IBM PL/I compiler needed.
I also have the code for that first compiler (available for a small fee to cover the cost of media from cbttape.org), and it is most definitely in assembly language. The PL/I (F) compiler comprises about 270 source files and about 385,000 lines of IBM assembly language code. This figure does not include code for the libraries, built-in functions, I/O functions and all that.
Now perhaps the PL/I Optimizing Compiler was written in PL/I but I doubt it very much, because the PL/I (F) compiler had a reputation as being a real dog, and one would not use something with that reputation to write an arguably "better" compiler. I used both compilers "back in the day", and there was no way one could shoehorn a compiler of such a massive language as PL/I into 44KB using a high level language.
However, it has been documented that the PL/I "Checkout" compiler WAS written using the PL/I "Optimizing Compiler", the arguably much improved compiler that came after the PL/I (F) level.
The following quote from an article from 1978 summarizes the systems that PL/I was used to program:
PL/I, and a variant for systems programming, has been used successfully to program several large operating systems and compilers (notably MULTICS, OS/VS Release 2 Version 2, the PL/I Checkout compiler).
This is from The Early History and Characteristics of PL/I by George Radin, ACM SIGPLAN Notices, 13(8), 1978 RSzoc 18:34, 18 November 2006 (UTC)
- The PL/I optimizing compiler required the PL/I runtime library to execute. Of course, this was rarely a restriction, but it was possible under VM to locate the two on different minidisks, which could mean access to one and not the other. Robert A.West (Talk) 08:26, 30 December 2006 (UTC)
PL/I AREAS
Can we insert something about notable/novel features in PL/I, in particular I am curious about AREAS.
NevilleDNZ 21:50, 23 May 2007 (UTC)
- What's notable or novel about it? Seems to be similar to FORTRAN COMMON areas, and COBOL Segmentation, both of which preceded PL/I. T-bonham (talk) 05:01, 30 August 2008 (UTC)
- As I remember it, an area is an unstructured space in which based variables (usually structures) can be allocated. These are referenced through the use of an OFFSET variable defined with respect to the area. The offset is like a pointer, except safer because out-of-bounds errors can be diagnosed at run-time. The offset must always point within the area. I'll have to check my PL/I (F) Reference Guide when I get back to my office. CSProfBill (talk) 17:08, 13 August 2009 (UTC)
Also, AREAs can be read or written as a block, and the offsets will still be valid. Additionally they make it easy to clean up storage by just deleting the AREA, rather than having to delete lots of separate data items individually. Peter Flass (talk) 21:13, 20 August 2011 (UTC)
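A hedged sketch, from memory of the manuals of that era (names are my own and details vary between compilers), of how AREA, OFFSET and BASED fit together:
DECLARE POOL AREA(4096);
DECLARE HEAD OFFSET(POOL);
DECLARE P POINTER;
DECLARE 1 NODE BASED(P),
          2 NEXT OFFSET(POOL),
          2 VALUE FIXED BINARY(31);
ALLOCATE NODE IN(POOL) SET(P);   /* the allocation comes out of POOL, not system storage */
HEAD = OFFSET(P, POOL);          /* an offset is relative to POOL, so it stays valid if POOL is written out and read back */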
Other drawbacks
As an ordinary programmer who has read up on PL/1 after learning C and Pascal, I would like to suggest 3 more reasons that the language is not more widely accepted. (1) The requirement that a subprogram could be embedded ANYWHERE inside a parent routine put another burden on the compiler writer, who would potentially have to save off data on the parent routine and then restore it when the subprogram finished. The feature itself was overkill; Pascal (and later Ada) confined subprograms to the declaration area with no loss of power. (2) Although amazingly full of built-in data types, the language design completely missed out on the concept of the user-defined datatype, a very hot topic in the '70s. The only way that a programmer could declare two variables of the same complicated type was to use a clumsy "A LIKE B" mechanism, which would be a maintenance headache -- suppose B turned out to be obsolete during a later revision? (3) Weird rules such as writing TRUE as '1'B, when both COBOL and FORTRAN already had much clearer syntax. CharlesTheBold 03:28, 30 July 2007 (UTC)
- As a compiler writer, I can assure you that where a procedure is placed does not add complexity to compilation. The code just branches around the block, which is entered only when called, like a procedure placed anywhere else. Normally I don’t do this, but it’s particularly useful for adding a new feature, where the new code and required data declarations can be inserted and called where it’s used, thus having everything in one place while the new code is being developed, instead of having to skip around in the source to make changes. Peter Flass (talk) 13:41, 7 August 2024 (UTC)
- Why branch around it? Just put the code for the procedures at the end. Unless you're writing a one-pass compiler that should be trivial. In HLASM terms, put a unique LOCTR in front of the generated code for each procedure. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:00, 7 August 2024 (UTC)
- I think it means that the object code branches around it. As always, one should write for readability. I suspect that some small internal procedures might be nice. Big ones should go at the end, or be external. As internal procedures have access to variables in scope, it has to be done at least slightly carefully. It took a long time for Fortran to get internal procedures, and I believe even now, you can't nest them. It gets interesting when you pass one as an argument, and then call it from somewhere else. Gah4 (talk) 21:35, 7 August 2024 (UTC)
- The object code only needs to branch around the internal procedure if the compiler puts the object code for the internal procedure in the middle of the object code for the containing procedure, with a branch around the internal procedure's object code put in the code for the containing procedure. There's no inherent requirement for the compiler to do that; it could put the object code for the internal procedure after the object code for the containing procedure, as @Chatul said. Guy Harris (talk) 22:17, 7 August 2024 (UTC)
- Well, as someone who wrote PL/I programs in the 1970s, I can tell you that none of these were problems at the time.
- While PL/I can certainly be accused of overkill, this wasn't an egregious example, and certainly wasn't the biggest imposition the language made on the compiler writers. If you want to find the part that made them go nuts, you only need to look as far as the macro language.
- The PL/I "structured data" type is exactly a "user-defined datatype", and it predated the C
struct
and Pascalrecord
. This was a very common concept in the 1970s, and no serious language lacked it. PL/I even had C's*struct
, usingBASED(pointer)
, which was uncommon for a HLL of the time (of course, Assembler programmers used DSECTs for exactly that all the time). And nobody codedDECLARE A LIKE B;
unless B was a structure and existed solely for that purpose. - How hard is it to
DECLARE TRUE BIT INIT('1'B); DECLARE FALSE BIT INIT('0'B);
? And from a boolean perspective, C's "true is 1, false is anything else" concept is just wacky.
- RossPatterson 00:57, 31 July 2007 (UTC)
- Actually it was 0 is false, anything else is true. But you're right, it wasn't very clean.
- --64.238.49.65 (talk) 03:50, 7 February 2009 (UTC)
Thank you for the historical perspective. I was of course evaluating the language from the point of view of 2007. I can tell the language was a powerful tool in the 1970's when compared to COBOL and FORTRAN. I understood about PL/1 structures, my point being that PL/1 seemed to operate by cloning a sample structure rather than instantiating a template -- this is a theoretical point which probably didn't make much practical difference in writing programs. CharlesTheBold 03:08, 10 August 2007 (UTC)
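For readers who haven't seen it, a small sketch of my own of the LIKE mechanism under discussion:
DECLARE 1 CUSTOMER,                /* the "sample" structure that gets cloned */
          2 NAME CHARACTER(30),
          2 BALANCE FIXED DECIMAL(9,2);
DECLARE 1 BACKUP LIKE CUSTOMER;    /* BACKUP gets the same member layout as CUSTOMER */
If CUSTOMER is later revised or retired, every declaration written LIKE it is affected, which is exactly the maintenance worry raised above.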
- My opinion is that C and Pascal won out over PL/I largely because programmers weren't all that much more productive using PL/I versus C or Pascal. C and Pascal's big advantages were that they were quick to compile and compilable on machines with only around 64K or 128K bytes of memory. All three languages provided large productivity increases over FORTRAN (and COBOL, I suspect). —Preceding unsigned comment added by 64.238.49.65 (talk) 04:13, 7 February 2009 (UTC)
- Neither your nor my opinion matters for this article, since we're not citing sources, but I disagree. PL/I, FORTRAN, and COBOL were all compilable in 64KB of memory in those days - remember, we ran a bunch of programs in memory at the same time, and 1MB was considered a large-memory system. As noted above, IBM's "F Level" PL/I compiler ran in 64K, and it was a full-blown implementation of the language. And we defined programmer productivity differently at that time, when it was considered wasteful to compile a program that had syntax errors - programming wasn't largely typing at that time, most programs were still initially written by hand on paper.
- What C and Pascal benefited from wasn't that they were better, it was that Computer Science had a change of heart. In the early 70s, CompSci programs taught PL/I dialects to undergrads, XPL was the cutting edge in compiler research, and the only other contender was LISP (but for grad students and profs, not undergrads). Fast forward 10 years and almost every CompSci program had become a Unix fandom, rightly or wrongly, and had started teaching C instead of PL/I, in part because there was no Unix PL/I. Just as C++ and now Java have replaced C because of what the schools teach today, so C replaced PL/I (and PL/I replaced FORTRAN, at least outside the Engineering schools). But COBOL, like cockroaches and crabgrass, will be with us forever :-) RossPatterson (talk) 05:14, 7 February 2009 (UTC)
- The terrible "Year 2K threat" to ancient programs was caused by the two digit year embedded in so many COBOL programs, I'm not sure about Fortran. But most of it was the ignorance of management people.
- I myself, working for an office regulating gas pipelines, asked why a pig in a pipe would need to know the date. Pipelines transporting oil or gas can put in a sort of moving partition, called a pig, for various purposes. If it has electronic intelligence aboard, it should only need to know the time elapsed since it was launched. DaveyHume (talk) 19:23, 9 March 2016 (UTC)
I worked for IBM on the mathematical library subroutines for the PL/I library, when hardware floating point didn't have exponential and trigonometric functions. I think I keypunched cards from my own handwritten programs. It could cost you a day to submit a program to the machine, and have it come back with syntax errors. It is now more economical of a programmer's time to let the computer report them in less than a minute.
- But note that the compiler itself was written as loadable 4096-byte segments, in order to run code that consisted of megabytes of data. So the vast generality of the language tended to make it slow.
- Nevertheless, the team that devised both C and Unix was tighter and therefore more agile, quite probably smarter, than the IBM consortium that defined PL/I. I'm not quite sure that one could write an OS in Pascal, but Niklaus Wirth is certainly a genius.
- C and Pascal are better languages than PL/I, having the advantage also of being invented more recently. Unix is a better OS than anything earlier.
- Writing Perl or Python programs to run on a machine with gigabytes of RAM is dead easy. The machines are fast enough that you rarely have to care whether it's compiled or just interpreted. DaveyHume (talk) 19:07, 9 March 2016 (UTC)
Typo in examples
Is the semi-colon missing from the PUT SKIP line of the second example intentional? --MarkMLl (talk) 13:49, 25 December 2007 (UTC)
Reference to criticism
The 3rd paragraph, "PL/I is considered by some..." was flagged for fact. I edited in a reference that demonstrates that Edsger Dijkstra (at least) made that criticism. That's still not a particularly good reference because it is merely an example of a famous person making the claim, rather than something that states authoritatively that several people made that claim. It's just something that I had just come across and could easily reference. If someone has something better, please fix it again. EMan (talk) 19:21, 5 May 2009 (UTC)
Criticism of Criticism
As modified, the 3rd paragraph reference does not support the criticism to which it is attached. While I believe the statement is true, it is unsupported by a fact. In denial of the creation and existence of Ada, some people do think PL/I was a turning-point. In the reference (to Dijkstra's Turing Lecture), Dijkstra does lambaste PL/1 for its feature-full nature, but nowhere describes it as a turning-point. Furthermore, in the same article he also says that "Algol 60 is expensive to implement and dangerous to use" and that he sees "a great future for very systematic and very modest programming languages. When I say 'modest' I mean that, for instance, not only ALGOL 60's 'for clause', but even FORTRAN's 'DO loop' may find themselves thrown out as being too baroque." It is rather unfair and non-NPOV to throw such a pot-shot into the introduction of the article.
The trade-off between adding complexity to a language where a documented, common function is available to all and asking each developer to create or find a function implementation for themselves is a complex and contested discussion. Perhaps the view that Guy Steele ("Growing a Language") expresses about growth of language from primitives and embedding features in libraries rather than language represents the proper compromise. But the point is that if made, such a discussion should be confronted directly, later in the article where it can be presented in full form and not as a rhetorical comment without anything other than an appeal to authority (for which counter-authorities can be found as well). As far as I know there are NO actual studies to illustrate the value trade-offs. The statement should be removed and anyone wishing to discuss the issue in greater depth should please do so - preferably in a context in which it doesn't appear to apply to one programming language among many.
I'm not particularly defending PL/1 - I'm defending the substitution of rationalization for rhetoric when attaching unsubstantiated value judgments. CSProfBill (talk) 18:16, 13 August 2009 (UTC)
Hypothetical response to PL/1 by the programmers of that time, based on my experience
editOne thing is to add complexity, other to add verbosity. PL/1 added new features needed for modular design (COBOL is was poor in that aspect, Fortran was modular), more flexibility to represent data structures (Fortran was very poor in that sense, although a tricky use of format allows to represent records, but it lacks pointers, COBOL also lacks pointers although it had arrays, records and unions), PL/1 also had math operations, (Fortran was the king, COBOL has a very poor design, it lacks math operations, only restricted to the basic arithmetic operators using English, although the compute verb allowed formulas, but forget other computations like logarithms needed to compute interest rates but that was not considered business oriented).
Maybe PL/1 seemed too verbose to Fortran programmers. If one browse books on numerical methods of those years, the prevalent language is Fortran. However some books on Data Structures or other areas requiring a more elaborated data structures, had some PL/1 examples. I don't know any of such books that use COBOL, I wouldn't because it lacks pointers and recursion (one has to use arrays and indexes), although it has records and unions. (I knew about a rare brand registering program written in COBOL performing syntax analysis and phonetic comparison with registered brands.)
Fortran is used in some books, implementing trees with arrays and indexes, also implementing stacks with arrays to handle recursion. And those requiring more elaborated data representation, like the IA researchers, used Lisp, small, elegant and powerful despite of all the parenthesis.
The business programmers had COBOL, although PL/1 is less verbose than it, it has features like generic and entry that allow overloading, something that they rarely think about in a world predominated by batch processing of files. They just scanned records in files to add, delete, and change them according to other transactions file. Although unions allow to represent different kinds of records, is rare to see them in those batch programs (COBOL has unions). Why should them learn PL/1?
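For anyone unfamiliar with the GENERIC feature mentioned above, a rough sketch from memory (the WHEN-clause form of the later compilers; the names are invented and exact syntax varies by compiler and standard level):
DECLARE AVG_FIXED ENTRY(FIXED DECIMAL);
DECLARE AVG_FLOAT ENTRY(FLOAT);
DECLARE AVERAGE GENERIC (AVG_FIXED WHEN (FIXED), AVG_FLOAT WHEN (FLOAT));
DECLARE RATE FLOAT;
CALL AVERAGE(RATE);   /* resolved at compile time to AVG_FLOAT from RATE's attributes */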
- In many ways, Fortran is finally catching up to what PL/I did almost 60 years ago. For a long time, Fortran only had single precision COMPLEX. Fortran only recently got internal procedures, and I believe still not nested. PL/I is more consistent in its rules, partly because it was pretty much defined before the first compiler was written. The rules were not modified to make compiler writing easier. Fortran has the complication of initialization expressions and specification expressions, which I can never remember long enough to know when to use each one. Gah4 (talk) 21:45, 7 August 2024 (UTC)
Datatypes
Can someone add a table of the PL/I datatypes? I remember only a few (BINARY, DECIMAL, FLOAT, CHAR(n), POINTER [BASED], FILE, etc.). — Loadmaster (talk) 21:47, 22 October 2009 (UTC)
- It would be a big table - about 60 entries ClemMcGann (talk) 22:53, 22 October 2009 (UTC)
- I've added this, to the level of the Standard. The PL/I Optimizer had a few more, e.g. Normal/Abnormal, Event and Task. Haven't decided what to do about the latest IBM PL/I for z/OS - long after my time! RogerofRomsey (talk) 22:20, 21 January 2010 (UTC)
- Those are attributes. A data type is the combination of attributes that a variable can have, which is (more or less) exponential in the attributes. Numeric variables are either REAL or COMPLEX, either FIXED or FLOAT, either BINARY or DECIMAL, with precision from 1 to some maximum, and, for fixed-point, with a scale factor from some negative to some positive value. That should leave many thousands of actual data types. Strings can be CHAR or BIT, VARYING or not, and length from 1 to some maximum. Gah4 (talk) 21:53, 13 March 2019 (UTC)
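A few concrete combinations, as a hedged illustration of how the attributes multiply into types:
DECLARE PRICE FIXED DECIMAL(7,2);       /* real, fixed-point, decimal, precision (7,2) */
DECLARE WAVE FLOAT BINARY(21) COMPLEX;  /* complex, floating-point, binary */
DECLARE NAME CHARACTER(20) VARYING;     /* varying-length character string */
DECLARE FLAGS BIT(8);                   /* fixed-length bit string */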
Some of the history section needs clarification
Parts of the history section sound extremely suspicious when you add them together; for instance, it says that PL/I was developed in Europe, but later "PL/I was designed by a committee drawn from IBM programmers and users drawn from across the United States".
While it is possible that the language was "designed" in the USA with input only from the USA and then implemented in Europe, that sounds a) like a roundabout way of doing things b) very unlike IBM c) like it means that the implementers had no input on the language etc etc etc
There are other strange things going on on the page. reiknir (talk) 03:45, 26 November 2009 (UTC)
RogerofRomsey (talk) 18:29, 15 January 2010 (UTC)
- I was in the PL/I language group in those years. Compilers were done in UK and Germany. Language was initially in New York then transferred to Hursley in 1967, with the language group working on one floor of "B-Block" and the F Compiler and Library groups on the next floor. I am working with "fabulous flowers" preparing an edit of the history section.
- Appreciated reiknir (talk) 05:44, 13 February 2010 (UTC)
Replacing Sections - Retrospective (Design and implementation issues, Programmer preference issues, Improved features), Variables, Storage classes
The sections do not relate well to the structure of the articles on PL/I's fellow languages Algol, Cobol, and Fortran. There is excellent debating material, but it does not have the perspective of an Encyclopedia.
I have already rewritten the History section to deal with PL/I's failure to displace Fortran and Cobol and its being overtaken by C for systems work, and the goals section to say that PL/I was bound to be large and complex. I will be removing material from the subject Sections that has been dealt with in History and Goals. Some paragraphs that are significant issues of opinion will be moved to footnotes. Less contentious points will be retained under Implementation Issues, Programmer Issues, Special PL/I Topics.
I introduced a Usage section to hold my material on why PL/I did not displace Cobol and Fortran. RogerofRomsey (talk) 15:29, 29 January 2010 (UTC)
Comments welcomed on my talk RogerofRomsey (talk) 21:05, 26 January 2010 (UTC)
The language summary is keyed to the Standard to avoid contention. But there were important elements missed out in the standard, and significant developments have occurred since. To capture these I am adding an Evolution of the language section. —Preceding unsigned comment added by RogerofRomsey (talk • contribs) 15:25, 28 January 2010 (UTC)
The Variable names section is redundant - the point that there are no reserved words in PL/I has been made already, and this example doesn't add anything to that. I would expect most compilers to warn the user when IF, THEN, ELSE, and DO are used as identifiers. Let's not tempt people to write bad programs.
RogerofRomsey (talk) 16:20, 2 February 2010 (UTC)
Language Summary and C++
editThe "Language Summary " section cites SGML and C++ as Special Purpose languages. This seems to be not precise enough. SGML is a "language" of sorts - ok, it's part of the name - , but not a Computer Language in the same sense as Algol, Fortran, PL/I, JOVIAL, etc. It's more of a specification. C++ is (really) a general purpose language. I've NEVER heard of it being described as a special purpose language. If special purpose, then what is that special purpose? Other example languages should be chosen as special purpose examples, no? RSzoc (talk) 01:53, 18 July 2010 (UTC)
- Agreed, that sentence is a load of rubbish. I have just removed C++ and tagged it as OR. 205.228.108.58 (talk) 02:56, 20 July 2010 (UTC)
Criticism Section
Currently, though the article has lots of criticism, it has no section called that. Perhaps it got lost at some stage. Cobol and Fortran do have such a section. I am moving the current content - Implementation Issues and Language issues into the section. It will need thinning to reduce it to attributed criticisms, Dijkstra and the like. RogerofRomsey (talk) 17:01, 29 January 2010 (UTC)
Implementations Section
I have elevated the material to a section and divided it into a subsection per "major" compiler, a subsection on other subsets, and one on dialects. A major compiler is one that had a substantial impact on the development of the language and its general usage; interesting single customer compilers are not major, however interesting.
At this stage I am gathering data and descriptions - platform, year of release or announcement, subset/dialect level. I have added Micro Focus Open PL/I, CDC compilers, PL/I for OS/2, Stratus and Honeywell as Multics derivatives, IBM Series 1 PL/I.
RogerofRomsey (talk) 18:43, 1 February 2010 (UTC)
I have put in further substructuring to help make sense of the lists of compilers
- 5.5 Conversational teaching subset compilers
- 5.6 Other Mainframe and minicomputer compilers
- 5.7 PL/I compilers for Personal Computers and UNIX
- 5.8 Special purpose and system PL/I compilers
- 5.9 PL/I dialect compilers
RogerofRomsey (talk) 09:45, 5 February 2010 (UTC)
I have added a large section on the current (2010) IBM PL/I compiler - a beast with several names and supporting all IBM platforms including AIX, Linux and Z/OS. The section risks dominating the implementation section, but I see no alternative as it is the main IBM vehicle and there is so much new function in it. There is no Standard for the added functions, so the extensions have been placed in the Implementation section. It would help if the major extensions to PL/I made by Kednos and Liant/MicroFocus were included. Anyone have any material?
I may have made mistakes in the addition as I have no experience with the new product.
RogerofRomsey (talk) 17:58, 9 March 2010 (UTC)
RETURN from ON-units
I removed the following inline text by RogerofRomsey from the "'On' block error trapping" section of the article:
- /* Not true. ON-units cannot be terminated by RETURN statements for some good reasons. Would the writer please revise this recently added section. */.
I also adjusted the text a bit to reflect his point. The section still needs work, though. — (talk) 18:43, 19 April 2010 (UTC)
Thanks.RogerofRomsey (talk) 12:36, 20 April 2010 (UTC)
I have added a larger section covering more rationale and more of the language. Hope it includes what Loadmaster wanted to cover. RogerofRomsey (talk) 17:20, 20 April 2010 (UTC)
- I never thought about trying, but you should be able to GOTO the return statement of an enclosing procedure. Gah4 (talk) 21:49, 13 March 2019 (UTC)
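A hedged sketch of my own of the usual idiom (not from the article): the ON-unit is left by a GOTO into the enclosing block rather than by a RETURN:
ON ERROR BEGIN;
   PUT SKIP LIST ('RECOVERING');
   GOTO BAIL;            /* transfer out of the ON-unit... */
END;
/* ... normal processing ... */
BAIL: RETURN;            /* ...to a recovery/return point in the enclosing procedure */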
Why PL/I failed
I'm putting this here in the talk page because I don't know what else to do with it. It isn't really encyclopedic, but on the other hand isn't any less so than what is in the article under "Usage".
One significant reason that PL/I failed is sheer inefficiency. In particular (and I recognise this is a very very nerdy point) the implementation of the Static Back-Chain Pointer through register 4 in the Optimising Compiler added 14 machine instructions to each procedure call on the off-chance that someone would call a procedure with another procedure as a parameter - something that no relatively sane person in the commercial world would consider doing even by accident.
In effect (on one program I wrote way back when) this meant a program that in Cobol took two hours to run took 23.5 hours when translated to PL/I (environment specific obviously). So everybody wrote PL/I as if it didn't have procedure calls at all and it might as well have been Cobol.
That seems to me a fundamental design flaw in at least the compiler, if not in the language. And it scares me a bit that after nearly thirty years that register number and the reason for it is still seared on my brain. PL/I might have been a marvellous language (and PL/S was, since it did without this nonsense) but it failed by trying to do everything for everybody and not doing it particularly well for any of us.
But it is a bit of a moot point whether it was the language design or the implementation that was at fault. Blowed if I can find any reference for this other than my own experience though. Would welcome feedback from others. Phisheep (talk) 01:56, 1 March 2011 (UTC)
- I'm skeptical that 14 additional instructions per procedure call could increase run time by 1175%. Did you try compiling with some optimization compiler option, perhaps one that specifically disabled the extra procedure call overhead? In any case, you can't extrapolate one example, or even one implementation, to explain why PL/1 failed to retain its (or grow in) popularity as a programming language. — Loadmaster (talk) 04:07, 1 March 2011 (UTC)
- A lot of features in PL/I could slow programs down. Dynamic allocation, along with the indirect references needed to use it, is slower than the static allocation of Fortran (at the time). And yes, some slow downs were due to rarely used features that PL/I allowed for. While only procedures with the RECURSIVE attribute are allowed to be used recursively, as far as I know, compilers ignore this and generate the same code in both cases. Static save areas used by Fortran result in much faster subroutine linkage than the dynamic save areas of PL/I. I believe the save area and automatic variables are allocated together, so don't count them separately. By the time C came along, the extra work for auto variables and recursion weren't so much of a problem, as hardware was faster. Even so, a lot of work is still being done on speeding up procedure calls. (Such as keeping arguments in registers, when possible.) Gah4 (talk) 21:46, 13 March 2019 (UTC)
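As a hedged illustration of the attribute being discussed (my own sketch, not from the article):
FACT: PROCEDURE (N) RETURNS (FIXED BINARY(31)) RECURSIVE;
   DECLARE N FIXED BINARY(31);
   IF N <= 1 THEN RETURN (1);
   RETURN (N * FACT(N - 1));
END FACT;
Whether a compiler actually generates cheaper linkage when RECURSIVE is omitted is, as noted above, another matter.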
It's the customers' fault
Seems biased against the customers, using weasel words to describe their views and blaming them for the language's faults:
Programmer Issues
• "Many programmers were slow to move" // were they slow or did they decide not to move?
• "a perceived complexity" // so it's the programmers' fault, or is it complex?
• "Programmers were sharply divided ... with significant tension and even dislike between the groups" // it's the programmers again
• "Both COBOL and FORTRAN programmers ... were somewhat intimidated by the language and disinclined to adopt it." // more weasel words
• "jaundiced view of PL/I, and often an active dislike for the language." // ditto
QuentinUK (talk) 14:11, 14 June 2011 (UTC)
- Having been a programmer at the time, in a commercial shop, with a non-IBM machine, allow me to comment on the above points:
- There was no PL/I compiler available to a programmer until the first IBM/360 was installed. When it was installed, the programmers needed to get busy. Moreover, the management was reluctant to retrain their Cobol or Assembler programmers in an unknown language and face subsequent maintenance in more languages than seemed necessary.
- I always maintained that there was a usable subset of the language that was easier to learn than Cobol, of which I had experience. At least one experienced ex-Cobol programmer whom I recruited came back from his PL/I course and said "Now that's what I call a programming language!". However, I could (and did, when interviewing) fox experienced PL/I programmers by showing them a few lines of code that would not compile and ask them how to correct it. The answer was to delete one (syntactically valid) line. Experienced programmers also had problems understanding recursion, but then they didn't need to.
- A working group of technical and commercial programmers from the various divisions of my company agreed that the company as a whole should move towards PL/I as a common language as and when it was shown to fulfill its initial promise. Management was still held back by the first point above and it never happened.
- Many, but by no means all, of these guys had barely healed scars from learning their first programming language and were in no hurry to learn another.
- This is pretty much the same point as the previous one, though I suspect that there may have been a certain amount of NIH in the States, since the early compilers came from Europe.
- Much as I enthused about PL/I, even so far as to be castigated (anonymously, I'm glad to be able to say) on one occasion by Dijkstra, I have to accept that the comments reflect more or less how I felt at the time, once I realised what we had let ourselves in for. MikeSy (talk) 11:24, 20 August 2011 (UTC)
Online compiler links
Recent edits by 74.235.41.177 added links to PHP-based "online compilers" at VintageBigBlue.org. Are these valid links for these WP articles? I tested the COBOL compiler on a simple "Hello, world" program and all it did was return a blank page. — Loadmaster (talk) 20:17, 31 October 2011 (UTC)
Implementation issues - undeclared variables
Default attributes for undeclared variables were put into the language for compatibility with FORTRAN, which of course didn't declare most variables. Should this be added? Peter Flass (talk) 21:32, 3 February 2012 (UTC)
- Why not? - go ahead - Lugnad (talk) 10:20, 4 February 2012 (UTC)
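For anyone who has forgotten the rule, a tiny hedged sketch of what those defaults do (the precisions quoted are the PL/I (F) ones; other compilers may differ):
/* no DECLARE statements at all */
K = 0;              /* I-N rule carried over from FORTRAN: K defaults to FIXED BINARY (15,0) */
RATE = 2.5E0;       /* any other initial letter defaults to FLOAT DECIMAL (6)                */
K = K + 1;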
FORTRAN vs. Fortran
The original change wasn't mine, but I believe FORTRAN is correct: "All capitals are naturally used in many abbreviated forms such as NATO, FBI, etc.; see Acronyms and initialisms above." Peter Flass (talk) 19:03, 11 March 2012 (UTC)
- Fortran isn't an acronym. COBOL is. Every letter must stand for something. Most old computer usage was due only to lack of lower-case, and we don't replicate it. Even the current language standard now uses "Fortran" rather than "FORTRAN". Yworo (talk) 19:09, 11 March 2012 (UTC)
- Fortran was a contraction - FORmula TRANslation - not an acronym, but it was written in uppercase because of convention, not limitation. At the time, most proper nouns in computing were written in caps, even when publishing mixed-case academic papers about them (which was, after all, the primary discussion mode of the day). For example, the 1954 Backus et al. report entitled "Specifications for the IBM mathematical FORmula TRANslating system, FORTRAN.". And since the usage in this case is specifically FORTRAN IV, in support of the "PL/I" name, it probably should be shown in this article as such. RossPatterson (talk) 19:42, 11 March 2012 (UTC)
- A good rule of thumb is to see where the Wikipedia article is, it's at Fortran. I removed the comparison as unnecessary. In the Fortran article they choose to make an historical distinction using all-caps for the older versions, Fortran for the newer versions; however, this is explained in the lead of the article. In other articles, we should just use "Fortran", which is the current standard. Yworo (talk) 19:50, 11 March 2012 (UTC)
- Well, you learn something new every day. Fortran is indeed an acronym, in the proper, dictionary sense: a word from the first letters of each word in a series of words. Which doesn't change Yworo's valid point about using the modern "Fortran" form for the language in general. But references to the 1961 dialect should still be "FORTRAN IV" - even the FORTRAN IV section of the Fortran article reads that way. RossPatterson (talk) 10:41, 13 March 2012 (UTC)
- Still violates WP:ALLCAPS. Some definitions include this sort of acronym, some definitions don't. It makes sense to distinguish in the Fortran article, and it has an explanatory paragraph in the lead, but do you intend to put an explanatory footnote in every other article that uses the all-caps form? Because otherwise knowledgeable editors will probably continue to reduce to normal capitalization. The only reason FORTRAN IV was in all-caps was that computers of the time did not have lower-case at all! Yworo (talk) 19:49, 13 March 2012 (UTC)
Object orientation
This section is really not about Object orientation or related features. Should be renamed to something like "Data types". Proposals? — Preceding unsigned comment added by Towopedia (talk • contribs) 10:47, 29 June 2012 (UTC)
Very true. I shall alter it. FreeFlow99 (talk) 11:27, 11 February 2020 (UTC)
Positive? I'm not so sure
edit"On the positive side, full support for pointers to all data types (including pointers to structures), recursion, multitasking, string handling, and extensive built-in functions PL/I was indeed quite a leap forward compared to the programming languages of its time. However, these were not enough to convince a majority of programmers or shops to switch to PL/I."
This makes it sound like programmers would want to use these features. From today's perspective, with most programmers familiar with languages like C and its derivatives, that might seem reasonable. However, from the perspective of thirty or more years ago, when PL/I was competing with lots of emerging languages (some on the newfangled microcomputers), things like pointers were alien to most COBOL and Fortran (i.e. mainframe) programmers. Most PL/I I came across then was little different than Fortran in coding style - pointer use was all but forbidden in some shops (I was forbidden to use recursion until the '90s). The biggest problem I had with most programmers coming onto my PL/I team was getting them to learn and use pointers properly!
Rather than being positive, these things were central to the fear and loathing that most mainframe programmers had for PL/I. IMHO, of course.
99.245.248.91 (talk) 19:50, 26 February 2013 (UTC)
- I disagree, but that's also only my opinion. Certainly string functions were a big step beyond Fortran's support for "hollerith" data. Peter Flass (talk) 20:41, 26 February 2013 (UTC)
- I probably should have taken the last two out of the list. But pointers, recursion and multitasking were things they didn't want to deal with. 99.245.248.91 (talk) 23:12, 26 February 2013 (UTC)
- Could be. Most programmers came to PL/I from a COBOL or FORTRAN background; certainly recursion and multitasking, at least, would have been a stretch for them. Anyone with an ALGOL background would have "gotten" recursion. I'll have to look at PL/I for FORTRAN Programmers and PL/I for Commercial Programmers to see what they presented, but I think they tried to map FORTRAN and COBOL features to PL/I features, probably leaving out those things you mention. Peter Flass (talk) 00:08, 27 February 2013 (UTC)
Citation in first paragraph
I noticed the citation in the first paragraph ([1], which points to "Sturm, Eberhard (2009). The New PL/I. Vieweg+Teubner. ISBN 978-3-8348-0726-7.") doesn't make sense. The paragraph states the language is in active use as of 2011, but the reference was written in 2011. Does anyone with more knowledge about the language have a better reference? Namnatulco (talk) 08:48, 17 March 2013 (UTC)
Syntax highlighting
Since the switch from Geshi to Pygments for syntax highlighting (phab:T85794), support for PL/I (lang="pli") was unfortunately dropped, as can be seen with the plain text formatting on this page, PL/M, PL/0, IBM i Control Language, SabreTalk, and others such as Do while loop#PL/I, For loop, Record (computer science), Data descriptor, K-mer, Branch table, Stropping (syntax), Gap penalty, Stride of an array, Object composition, Karatsuba algorithm, While_loop, Subroutine and Goto. If you want PL/I syntax highlight support again, it will need to be added to Pygments. John Vandenberg (chat) 02:19, 12 July 2015 (UTC)
Standardization issues
Should the article discuss the name change from PL/1 to PL/I, which as I recall happened during the standardization process?
Wasn't the ANSI standard based on the Vienna Definition Language (VDL) definition of PL/I from IBM? Shmuel (Seymour J.) Metz Username:Chatul (talk) 19:50, 28 March 2017 (UTC)
Array expressions
Is PL/I the first compiled language with array expressions? Gah4 (talk) 02:43, 17 May 2017 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified 2 external links on PL/I. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Corrected formatting/usage for http://www-01.ibm.com/software/awdtools/pli
- Added {{dead link}} tag to http://www.kednos.com/
- Added {{dead link}} tag to http://www.bitsavers.org/pdf/ibm/360/pli/C28-6571-1_PL_I_Language_Specifications_Jul65.pdf IBM
- Added {{dead link}} tag to http://www.kednos.com/
- Added archive https://web.archive.org/web/20150721052307/https://vintagebigblue.org/Compilerator/PLIF/dosvsPLICompile.php to https://vintagebigblue.org/Compilerator/PLIF/dosvsPLICompile.php
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 21:50, 1 December 2017 (UTC)
- The dead ones aren't actually dead, the fixed formatting works, and the added archive doesn't appear to work, probably because it hands stuff to a backend mainframe that isn't there any more. Guy Harris (talk) 23:33, 1 December 2017 (UTC)
Formal Definition
The claim "It was the first, and possibly the only, programming language standard to be written as a semi-formal definition." seems hard to justify, unless you rule out Algol 68 on the grounds that it was formal rather than semi-formal ;-) Mhkay (talk) 15:45, 9 September 2018 (UTC)
- Since PL/I was before 1968, that seems to still qualify as first. The "possibly only" leaves some wiggle room. I don't know the ALGOL 68 details at all. As well as I know it, the PL/I language was fairly well defined before the first compiler was written. Was that also true for ALGOL 68? Were features removed from the (compiler independent) Language Specification later, or just slow in getting implemented? Gah4 (talk) 21:36, 13 March 2019 (UTC)
- At least Lisp, published in 1960, had both syntax and semantics formally specified. Algol had syntax formally specified, semantics semi-formal as far as I know. Did PL/1 have formal semantics specified? — Preceding unsigned comment added by 201.124.188.198 (talk) 11:02, 10 December 2019 (UTC)
PL/M
I believe that PL/M is more like PL/I, in most ways that matter, than it is like ALGOL, but not enough to qualify as a subset. As they say at the beginning of docudramas, based on a true story (but maybe many important details were changed). I suppose PL/360 could also claim some similarity, but also isn't a subset. Gah4 (talk) 21:32, 13 March 2019 (UTC)
performance
Regarding: Performance of compiled code competitive with that of Fortran (but this was not achieved)[citation needed]. Until Fortran 90, Fortran didn't allow recursion. Compilers tended to use static allocation for data, and at least IBM used static allocation for return addresses. PL/I used dynamic allocation for enough things that I don't think you could get away without it on each procedure call. If you were careful with data types and using STATIC variables, you might be close. But then you are competing against the optimization of the Fortran H compiler, which is very good. I doubt that there were enough properly controlled comparisons between them, though. Gah4 (talk) 19:51, 11 February 2020 (UTC)
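As an illustration of "careful with data types and using STATIC variables" (names invented; this is folklore-level advice from the period, not a measured comparison):
DCL I FIXED BIN(31,0) STATIC;        /* register-width binary integer            */
DCL X(1000) FLOAT BIN(21) STATIC;    /* roughly FORTRAN's REAL                   */
DCL COUNT FIXED DEC(7,0) STATIC;     /* packed decimal: mixing it into the loop  */
DO I = 1 TO 1000;                    /* below would force conversions each trip  */
   X(I) = I;
END;
COUNT = I;                           /* one binary-to-decimal conversion, outside the loop */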
orthogonality
Regarding: Orthogonality helps makes the language "large".[clarification needed] PL/I is good at allowing things if it made a tiny bit of sense to allow it. Fortran, on the other hand, seems to disallow things until it makes too much sense not to. One of my favorite is that PL/I allows COMPLEX and CHAR values in DO statements. (In the CHAR case, it does a string comparison.) The original language specification was mostly done before any compiler design was started, so it might not have been easy to know what features are easy and what are hard to implement. There are extra challenges related to multitasking, requiring in PL/I (F) a feature called pseudo-registers. (Task specific data.) Note also that there is not a restriction on nesting of internal procedures. I forget now if Fortran yet allows for nesting at all. And all for a compiler that is supposed to be able to run in 44K of core. Gah4 (talk) 20:02, 11 February 2020 (UTC)
- Pseudo-registers are also used for CONTROLLED data and other things. Peter Flass (talk) 21:16, 11 February 2020 (UTC)
- I remember them most for SYSPRINT, but yes CONTROLLED, too. After I wrote the above, I remembered that I was thinking that it makes the compiler large. In theory, orthogonality should make the description of the language smaller. Much of the later Fortran standards is explaining all the special cases when things do and don't work. I don't remember now how it was that I figured out in high school that you could use CHAR variables (now fixed above) in DO statements.
DO S=' 1' TO ' 100' BY '1';
Before that I tried DO IMAG(J)=1 TO 100;
which also works. Is PL/I the only language with fixed point complex data types? Gah4 (talk) 00:21, 12 February 2020 (UTC)
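Since the lack of any restriction on nesting internal procedures came up earlier in this section, here is a minimal sketch (names invented) of what that looks like:
OUTER: PROC OPTIONS (MAIN);
   DCL TOTAL FIXED BIN(31) INIT (0);
   CALL BUMP;
   BUMP: PROC;                  /* internal procedure, one level down          */
      TOTAL = TOTAL + 1;        /* can reference OUTER's automatic variable    */
      CALL DEEPER;
      DEEPER: PROC;             /* and the nesting can continue further still  */
         TOTAL = TOTAL + 1;
      END DEEPER;
   END BUMP;
END OUTER;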
CALL/OS
The article mentions some of the lineage of IBM compilers. I did always wonder about the CALL/OS compiler. It wasn't all of PL/I (F), but often enough I could use the (F) manual. Gah4 (talk) 01:47, 25 June 2020 (UTC)
Dialects!
The {{Infobox programming language}} has |dialects=PL/M, XPL, PL/P, PL/C, PL/S, PL/AS, PL/X, PL-6, PL/8, EPL, SL/1 and |influenced=CMS-2, SP/k, B, REXX, AS/400 Control Language, C.
B, C, CMS-2 and SL/1 look nothing like PL/I; PL/M, PL/P, PL/AS, PL/S, PL/X, PL-6 and PL/8 are strongly influenced by PL/I but different enough not to call them dialects; and SP/k is arguably a set of PL/I dialects rather than just influenced by it. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:31, 3 January 2021 (UTC)
- Agree, although I'm not sure PL/M should be there at all either. AFAIK it's more like ALGOL than PL/I. I think it's pretty well established that B, and hence C, derive mostly from BCPL, although K&R were both familiar with PL/I from Multics. Peter Flass (talk) 17:49, 3 January 2021 (UTC)
- PL/M syntax has a strong PL/I flavor; the one Algol influence that I'm aware of is the := assignment operator within expressions. However, it's missing too much for me to call it a dialect, e.g., array parameters, BIT, CHAR, POINTER. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:47, 3 January 2021 (UTC)
- The main influences PL/I had on C that I can think of are 1) semicolons at the ends of statements and 2) /* XXX */ comments. That's not necessarily worth noting as an influence. Guy Harris (talk) 21:06, 3 January 2021 (UTC)
- Semicolons can be argued to be from ALGOL, where presumably PL/I got them. How about static and auto storage declarators? C struct isn't all that much like PL/I structures, but maybe some ideas came through. The dot for structure qualifier? Gah4 (talk) 22:41, 3 January 2021 (UTC)
- Yes, semicolons came from ALGOL 60, and the way that C uses them is different from the way that PL/I does. I suspect that the dot came from PL/I; did BCPL and B use it? Shmuel (Seymour J.) Metz Username:Chatul (talk) 02:41, 4 January 2021 (UTC)
- Doesn't PL/I use semicolons as statement terminators rather than statement separators? What's different between the PL/I and C usage?
- BCPL and B were typeless and didn't have data structures (or a way to implement them other than, perhaps, arrays). the Sixth Edition UNIX version of the C manual doesn't mention PL/I at all. Ken Thompson's B manual does, but only indicating that B's comment syntax came from PL/I. Guy Harris (talk) 05:20, 4 January 2021 (UTC)
- Pascal uses semicolon as a statement separator, which mostly means there isn't one before the ELSE in an IF-THEN-ELSE statement. If you consider { as DO; and } as END; then it comes out about like PL/I. I believe PL/I got structures from COBOL, so it is possible that C did, too. Gah4 (talk) 08:00, 4 January 2021 (UTC)
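Before the longer sample below, here is a small sketch of the COBOL-style level-numbered structures and the dot qualification being discussed (names invented):
DCL 1 CUSTOMER,
      2 NAME    CHAR (30),
      2 ADDRESS,
        3 CITY  CHAR (20),
        3 ZIP   CHAR (5);
CUSTOMER.ADDRESS.CITY = 'POUGHKEEPSIE';  /* fully qualified reference              */
ZIP = '12601';                           /* partial or no qualification is allowed */
                                         /* as long as the name is unambiguous     */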
Some sample PL/I code (modified from an example in a Digital Research PL/I manual):
if i = 0 then
x = 0;
else
do;
x = foo(x);
if i = 0 then
y = 0;
else
if i = 1 then
y = mumble(x,1,1);
else
y = mumble(x,1,i-1);
return;
end;
with the C equivalent
if (i == 0)
x = 0;
else
{
x = foo(x);
if (i == 0)
y = 0;
else
if (i == 1)
y = mumble(x,1,1);
else
y = mumble(x,1,i-1);
return;
}
Both languages appear to be "semicolons are terminators" languages. The one semicolon-related difference is that do and end are statements in PL/I, but { and } aren't statements in C, so, while do and end are followed by semicolons in PL/I, { and } aren't followed by semicolons in C.
Pascal, however, is a "semicolons are separators" language.
However, none of them put a semicolon before the else. Guy Harris (talk) 09:31, 4 January 2021 (UTC)
- A block ({...} in C or BEGIN; ... END; in PL/I) is a statement; C does not require a terminating semicolon. Similarly, the C for statement does not require a terminating semicolon. In PL/I the semicolons are terminators but in C they are separators. Use an example with DO; ... END; versus {...} and that should be clearer.
- Yes, PL/I got structures from COBOL. Fortunately it left 77 and 88 behind ;-)
- With regard to ELSE, in PL/I ELSE and THEN are not reserved words; the terminating semicolon of the previous statement is what makes parsing possible. In PASCAL they are reserved. In ALGOL 60 the word delimiter 'BEGIN' and an identifier spelled BEGIN are distinct tokens. PL/I also differs in that it has THEN and ELSE statements rather than THEN and ELSE clauses. Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:25, 4 January 2021 (UTC)
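To illustrate the "not reserved words" point, the old chestnut; a PL/I compiler should accept it precisely because keywords are not reserved (nobody would write it on purpose):
DCL (IF, THEN, ELSE) FIXED BIN(15);
IF IF = THEN THEN ELSE = IF; ELSE IF = THEN;   /* variables named IF, THEN and ELSE are legal */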
- "in C they are separators" Clang thinks otherwise:
$ cat foo.c
int
foo(int i)
{
	i = i + 3;
	i = i + 17;
	return i
}
$ cc -c foo.c
foo.c:6:10: error: expected ';' after return statement
        return i
                ^
                ;
1 error generated.
- If semicolons were truly separators, they'd only be required between statements, not at the end of every statement, including the last statement of a block.
- The difference between PL/I and C is that, in PL/I, all statements end with a semicolon,[1] whereas in C, only a declaration, an expression-statement, or a jump-statement ends with a semicolon. Guy Harris (talk) 17:48, 4 January 2021 (UTC)
- I never did all that much Pascal programming, but I do remember some problems with semicolons. There isn't one before an ELSE, in the place that there is one for PL/I. I believe that there also isn't one after BEGIN, again where there is in PL/I. As far as C's for() and PL/I's DO, DO includes the nesting level that C doesn't. Never in writing C did I have a thought about needing a semicolon in the place where PL/I's DO needs one, but I did in Pascal's FOR statement. Gah4 (talk) 22:29, 4 January 2021 (UTC)
- To continue the PL/I vs. C example, compare. DO I=1 TO 10; to for(i=1;i<=10;i++) {, and my previous note that { works like DO;. In addition, there is the closing parenthesis in the for loop that PL/I doesn't have. Also, C if statements need parentheses where PL/I doesn't. One other interesting C feature is that you can have a trailing comma in an initializer list: int x[]={1, 2, 3,}; often very convenient but sometimes looks funny. Gah4 (talk) 00:57, 5 January 2021 (UTC)
- There are at least two things going on here:
- whether semicolons are statement separators or statement terminators;
- whether the "block start" and "block end" indicators are statements or not (a "block" may or may not include declarations, so it might just be, in ALGOL 60 terms, a "compound statement").
- In ALGOL 60, semicolons are separators, and the "block start" indicator begin and the "block end" indicator end are not statements (they're "statement brackets", to use the language of the "Revised Report on the Algorithmic Language Algol 60"). (Well, they can begin and end either a "compound statement" or a "block", depending on whether begin is followed by declarations or not, but, at least as I read Block (programming)#History, that distinction is historical.)
- In FORTRAN IV/FORTRAN 66, there are no semicolons, and the only blocks are DO loops; DO and END are statements.
- In COBOL of that era, there are no semicolons, and I'm not sure there are any blocks - were, for example, IF statements where there's more than one statement executed under the condition done with PERFORM?
- In PL/I, semicolons are terminators, and there are "groups" and "blocks". The "group start" indicator DO, the "block start" indicator BEGIN, and the "group-and-block end" indicator END are statements. I guess the difference is that a "group" has no name or allocation scope and a "block" does.
- In Pascal, things are, as far as I know, pretty much the same way they are in ALGOL 60 ("The programming language Pascal (Revised Report)" refers to begin and end as "symbols" that act as "statement brackets").
- In C, semicolons are terminators (so like PL/I), and the "block start" indicator { and the "block end" indicator } are not statements (so like ALGOL 60 and successors, except that the indicators aren't words :-)).
- A consequence of "semicolons are (statement) terminators" and "the "block start" indicator DO and the "block end" indicator END are statements" is that every block ends with a semicolon, because the block terminator, being a statement, ends with a semicolon. That's not the case in ALGOL 60 or PASCAL, because 1) semicolons are separators and 2) the block terminator isn't a statement, and that's not the case in C because even though 1) semicolons are terminators, 2) the block terminator isn't a statement (there's a semicolon at the end of the last statement of a block, but that semicolon comes before the closing }).
- So:
- "There isn't [a semicolon] before an ELSE, in the place that there is one for PL/I." The semicolon is there in PL/I because, between IF scalar-expression THEN and ELSE there's a "unit", which is either a group (which can be a single statement or a DO-group) or a block, both of which always end with a semicolon. It's not there in Pascal because a single statement doesn't end with a semicolon (as semicolons are separators, not terminators) and a BEGIN/END block doesn't end with a semicolon (as semicolons aren't terminators and as END isn't a statement in any case).
- "I believe that there also isn't one after BEGIN, again where there is in PL/I." BEGIN is a statement in PL/I, but not in Pascal.
- "As far as C's for() and PL/I's DO, DO includes the nesting level that C doesn't." I'm guessing what you mean here, but, if I'm guessing correctly: DO, in PL/I, isn't just a looping construct, it's also a compound-statement construct. In C, for() and while() are just looping constructs that iterate a statement; "a statement" here includes a block, but the block structuring is independent of the looping. For do ... while() in C, there's a block in the middle, after the do and before the while() - and, for more fun, there's a semicolon after while().
- (Having read various language documents, I find Pascal's and C's simple model of compound statements/blocks cleaner than PL/I's, because the "DO-group" seems to conflate iteration and compounding statements in a way whose primary benefit seems to be "it's more like the way FORTRAN IV does it so it won't frighten the horses as much", and cleaner than ALGOL 60's because it doesn't appear to introduce separate notions of "compound statements" and "blocks". I also find semicolon-as-terminator a bit cleaner, but that (as well as the previous statement) may in part or in whole reflect the fact that I've mostly been programming in C-adjacent languages for the past few decades.)
- I don't know why C has parentheses for the condition in if and while, and for the loop expressions in for; there's probably a rationale somewhere, unless Dennis Ritchie was the only one who remembered it. As for commas in initializers, "It's a separator!" "It's a terminator!" "It's a floor wax and a dessert topping!" For enums, though, a comma is only a separator. Consistency FTW! Guy Harris (talk) 02:38, 5 January 2021 (UTC)
- Utter nonsense. Semicolons in C are terminators, not separators like Pascal. All declarations and statements in C are required to be terminated with a semicolon. 67.241.240.42 (talk) 04:09, 9 July 2023 (UTC)
- This discussion is getting a bit long. COBOL uses periods (.) where other languages use ; - as a statement terminator. PERFORM performs a _paragraph_ or paragraphs, which can contain any statements, simple or compound. I think PL/I's handling of semicolons is cleaner, because there's never any doubt where they go, unlike ALGOL, etc. Peter Flass (talk) 03:17, 5 January 2021 (UTC)
- "COBOL uses periods (.) where other languages use ;" ...and doesn't use them for structure qualification (that's
IN
andOF
in COBOL, right?). - "PERFORM performs a _paragraph_ or paragraphs, which can contain any statements, simple or compound." So how is a compound statement indicated in COBOL? And was that in COBOL from Day One or was it added later?
- Also, C probably got the "shorthand for
(*a).b
" syntax,a->b
, from PL/I, too. Guy Harris (talk) 02:33, 6 January 2021 (UTC)- Compound statement in COBOL - i was thinking of IF THEN ELSE. I don’t think it had any structuring facilities, though it may now. Peter Flass (talk) 03:13, 6 January 2021 (UTC)
- OK, so . as structure qualifier separator didn't come from COBOL, so C didn't get it from there. Gah4 (talk) 03:22, 6 January 2021 (UTC)
- I sometimes like the C loop without nesting when one simple statement is needed. Many programming styles require the open/close brace no matter how simple, and often on their own line. In that case, I would prefer the PL/I style where it is included. Gah4 (talk) 03:22, 6 January 2021 (UTC)
- According to this IBM 7090 COBOL manual, the "statement-1" after
THEN
in anIF
statement can be more than one imperative statement, as can the "statement-2" afterELSE
, so I guess compound statements are defined by theIF
statement itself. That's COBOL 61, so that wasn't something new. - Another place where a compound statement shows up is in loop constructs, but
PERFORM
doesn't, as of that version of COBOL, support an "inline" perform - you have to point it at a named "procedure". Those appear to have shown up in COBOL-85. - They also show up is in case statements, but COBOL didn't appear to have a case statement until COBOL-85. (And C doesn't do them the way it does if statements and loops.) This page doesn't mention PL/I having them at all except as vendor extensions. Guy Harris (talk) 06:26, 6 January 2021 (UTC)
- COBOL didn't have groups as such, but the use of both comma and period as separators gave somewhat the same effect. As previously noted, COBOL eventually got more structured language constructs.
- PL/I didn't originally have a case statement. SHARE passed a requirement for a case statement, IBM rejected it and then IBM provided SELECT with the "optimizing" compiler. Shmuel (Seymour J.) Metz Username:Chatul (talk) 12:44, 6 January 2021 (UTC)
- "COBOL uses periods (.) where other languages use ;" ...and doesn't use them for structure qualification (that's
References
- ^ IBM System/360 Operating System PL/I Language Specifications (PDF). IBM. July 1966. p. 14. C28-6571-3. Retrieved 2021-01-04.
A statement, which is a string of characters, is always terminated by the special character, semicolon.
PL/I dialect compilers versus Special purpose and system PL/I compilers
It is not clear what is intended as the distinction between PL/I dialect compilers and Special purpose and system PL/I compilers. Certainly HAL/S, PL/MP, PL/S, PL/8 and SPL/I are special purpose; equally clearly, PL/MP, PL/S and PL/8 are system languages. Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:10, 25 February 2021 (UTC)
- @Chatul: Looking at it, it's confusing. XPL should be special-purpose (compiler writing), CIMS PL/I should probably be teaching subset. I'm not sure how to fix it. Peter Flass (talk) 16:02, 25 February 2021 (UTC)
- If the special purpose and system languages are languages that "overlap" PL/I, with some features PL/I doesn't have and without some features PL/I has, I'd say they could be considered dialects. Perhaps "Special purpose and system PL/I compilers" should be renamed "Special purpose and system PL/I dialect compilers", and treated as a subclass of "PL/I dialect compilers".
- And then there's the question of "compilers" vs. "languages" - perhaps for some of them the compiler introduced the dialect and nothing else implemented it, but for others it might make more sense to talk about them as languages rather than about the first compiler for them. Guy Harris (talk) 20:53, 25 February 2021 (UTC)
- IBM has a whole series of internal languages modeled after PL/I, but different. PL/M is too different to be a subset or dialect. PL/C is, as well as I know, meant to be a subset. PL/360 uses PL/I structure in a very different way. Most of the actual subsets or dialects are still named PL/I, but with some extra indication. (Or maybe they are just not yet finished.) Gah4 (talk) 22:41, 25 February 2021 (UTC)
- "IBM has a whole series of internal languages modeled after PL/I, but different." Would those be dialects, or just "PL/I-influenced" or whatever term would be applied to PL/M? If they're dialects, they'd probably be in the "Special purpose and system PL/I" group. Guy Harris (talk) 23:03, 25 February 2021 (UTC)
- The story I first knew about them, is that they generate assembly code, which is then released. (In the days when IBM did release assembly code.) I had the source to the Fortran (IHC) library in high school, and it was explained to me then, though as far as I know, that one is not generated that way. Mostly IBM seems not to let them leak out, so it is hard to say either way. Also, as well as I know, the need to keep those running helped keep the released PL/I compilers going, even if there was low demand. Gah4 (talk) 23:46, 25 February 2021 (UTC)
- @Gah4: BSL and PL/S[a] include embedded assembler code. The microfiche available to customers had assembly listings with the source code optionally inserted as comments; there was no standard as to what compiler options to use. Some of the distributed macros had both PL/S and assembler code. The PL.8 compiler generates an intermediate language (IL) based on the IBM 801 and the language is more strongly typed than PL/I. I don't know any details about PL/AS, PL/MP or PL/X. Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:50, 26 February 2021 (UTC)
- "IBM has a whole series of internal languages modeled after PL/I, but different." Would those be dialects, or just "PL/I-influenced" or whatever term would be applied to PL/M? If they're dialects, they'd probably be in the "Special purpose and system PL/I" group. Guy Harris (talk) 23:03, 25 February 2021 (UTC)
- @Guy Harris: I agree that "language" is more appropriate than "compiler". Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:50, 26 February 2021 (UTC)
- I pulled almost everything from "Special purpose and system PL/I compilers" and put them in "PL/I dialect compilers", as they appear to be languages that may be very PL/I-like but that aren't PL/I. The one remaining item is CIMS PL/I, and that's because I don't know what its purpose was, not having bought the article given as a reference.
- Perhaps "PL/I dialect compilers" should be made a level 2 heading, at the same level as "Implementations", and renamed "PL/I dialects" or "PL/I-derived languages" or "PL/I-influenced languages" or whatever's appropriate there. Guy Harris (talk) 05:28, 2 March 2021 (UTC)
- The CIMS compiler just seems like another mainframe PL/I, so I put it in that section, before the CDC PL/I. The paper about the CIMS compiler says "The research described in this paper was supported in part by Control Data Corporation Research Grant C08AA, and in part by United States Department of Energy Research Grant EY-76-C-02-3077."; I don't know whether CDC's sponsorship means that it ended up being the basis for CDC's compiler. Guy Harris (talk) 05:37, 2 March 2021 (UTC)
- I renamed "PL/I dialect compilers" to "PL/I dialects", and made it a level 2 heading. Guy Harris (talk) 06:37, 2 March 2021 (UTC)
- @Peter Flass: I would classify PL/I D as a subset. I would classify PL/C and SP/k as teaching dialects. The XPL language itself is just a generic PL/I dialect; the XPL distribution includes other tools that generate XPL source code.
- Does anybody know which compilers are ANSI compliant, and for which standard? Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:50, 26 February 2021 (UTC)
Notes
- ^ I'll use one name to refer to BSL, PL/S and PL/S II unless I am describing differences among them.
Assessment
Should this article be a C class? What improvements are required? I'm not up on procedures enough to look through the article and see what's missing. Peter Flass (talk) 16:10, 25 February 2021 (UTC)
Series/1 PL/I
The article mentions Series/1 twice; once with a reference and once with the text PL/G Subset for IBM Series/1 Mini Computer with Real Time extensions PL/I Language Reference GC34-0085-0 as a bullet item. I was going to correct the name to PL/I G subset when I realized that
- I don't know the correct name of the compiler
- I don't know whether the compiler implements the proposed real time extensions to subset G
- I couldn't find a copy of the manual.
If the text simply refers to the Series/1 PL/I[1] then I will revise it and include the citation. Can anybody clarify this? Thanks. Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:02, 28 February 2021 (UTC)
- Looks like it's "Series/1 PL/I" - http://bitsavers.org/pdf/ibm/series1/GC34-0084-0_PL_I_Introduction_Feb77.pdf I can't tell whether this incorporates the proposed ANSI standard extensions, appendix B has a comparison that makes it look like something borrowed from PL/I(F). Do you happen to have a link to, or a copy of, the ANSI proposal? I don't see that I have a copy, that would be the best way to tell. Thanks. Peter Flass (talk) 15:35, 28 February 2021 (UTC)
- I'm a bit confused. The manual[1] mentions ANSI X3.53-1976, not the subsequent G subset, so why "PL/G"? Appendix B is a comparison to ANSI PL/I and shows some things missing that PL/I F has. Also, the multitasking has things that PL/I F lacks.
- I'm also puzzled about the claim that it has list processing when Appendix B shows ALLOCATE is not supported. I'd really like a copy of the language reference. Shmuel (Seymour J.) Metz Username:Chatul (talk) 02:00, 2 March 2021 (UTC)
- I would too, but google doesn't turn anything up. Unless Al has a copy, we're out of luck, at least for now. Peter Flass (talk) 02:28, 2 March 2021 (UTC)
References
- ^ a b IBM Series/1 PL/I Introduction Program Numbers 5719-PL1 5719-PL3 (PDF) (First ed.). IBM. February 1977. GC34-0084-0.
Programming languages mentioned in the "Early history" section
Prior to this edit, the second sentence of PL/I § Early history said
Business users were moving from Autocoders via COMTRAN to COBOL, while scientific users programmed in General Interpretive Programme (GIP), Fortran, ALGOL, GEORGE, and others.
The edit in question removed the General Interpretive Programme from the list as being an "uncited programming language without notability (no wikipedia page, few Google results, etc.)".
It appears, from some Google search results and from Pilot ACE § Software, that the General Interpretive Programme was originally written for the Pilot ACE and rewritten for the English Electric DEUCE; was it available on other computers? The page for GEORGE doesn't mention any machine other than the DEUCE, either; was it available on other computers?
Autocoder refers to some IBM assembler languages, and COMTRAN refers to an IBM language that was one of the predecessors to COBOL; IBM may have been a bigger computer company than English Electric, but are those languages worth mentioning here, either?
The only languages mentioned there that don't appear to be specific to particular machines or particular vendors are COBOL, Fortran, and ALGOL (Fortran may have been developed by IBM, but it didn't remain IBM-only). How common were the ACE/DEUCE or even the IBM-only languages relative to COBOL, Fortran, and ALGOL? Should those sentences focus on the more commonly used languages? Guy Harris (talk) 07:22, 4 August 2021 (UTC)
- I think Autocoder was in fairly wide use, even if it was IBM-only. Peter Flass (talk) 10:13, 4 August 2021 (UTC)
- Would "assembly languages" or "assembly languages such as Autocoders" be less IBM-specific? And should COMTRAN be replaced by "FLOW-MATIC and COMTRAN" (possibly with other languages added)? Guy Harris (talk) 22:14, 4 August 2021 (UTC)
- Is it grammatically valid to pluralize a proper noun? How about "assembly languages such as 705 Autocoder"?
- I'd rather replace "COMTRAN" with "FLOW-MATIC, FACT and COMTRAN".
- Should the article mention specialized languages, e.g., AED[1], COMIT, IPL-V, JOVIAL, LISP, Simula, SNOBOL, TRAC?
- Would "assembly languages" or "assembly languages such as Autocoders" be less IBM-specific? And should COMTRAN be replaced by "FLOW-MATIC and COMTRAN" (possibly with other languages added)? Guy Harris (talk) 22:14, 4 August 2021 (UTC)
References
- ^ D. T. Ross; J. E. Ward (May 31, 1967). "Chapter IV THE AED-0 LANGUAGE AND COMPILER SYSTEM" (PDF). INVESTIGATIONS IN COMPUTER-AIDED DESIGN FOR NUMERICALLY CONTROLLED PRODUCTION - Final Technical Report - 1 December 1959 - 3 May 1967 (PDF) (Report). MIT. pp. 29–43. ESL-FR-351.
Pointers
Footnote 24 says the list-processing facilities were designed in 1966. The Wikipedia article on Harold (Bud) Lawson says he designed the facility in 1964. I believe the latter is correct and will change it unless proved wrong. Mdmi (talk) 22:59, 16 November 2021 (UTC)
- As well as I know it, it took some time for many language features to be added to PL/I (F), the main IBM compiler for some years. There is a manual that was written independent of any compiler, and then ones describing the language that each compiler revision supports. It might depend on which specific features you ask about. Gah4 (talk) 00:19, 17 November 2021 (UTC)
- OK, there is PL/I Language Specification which is the fourth edition (they start at 0), from 1966 which indicates that the CELL attribute is new. You can read the -0, -1, and -2 versions to see when each feature was added. (Not counting the time between when it was thought up and got into a published manual.) Gah4 (talk) 00:37, 17 November 2021 (UTC)
- The CELL attribute has nothing to do with pointers; it is what other languages call UNION. The relevant attributes are BASED and POINTER. I don't have, or can't find, the MPPL report, but I see that the NPL report[1] and the second edition[2] of the language specifications don't have BASED or POINTER but the fourth edition[3] does. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:36, 17 November 2021 (UTC)
- Interesting. I mentioned CELL, not because I thought it was needed for list processing (though it could be useful), but because it was listed as the major addition to the -3 version of the manual, on the second page. OK, but you don't have the -2 to compare, so that might be when it was added. I would think it would be mentioned if it was added in the -3 version. Gah4 (talk) 09:19, 18 November 2021 (UTC)
- IBM and SHARE issued reports under three names: NPL, MPPL and PL/I. I can only find the first on bitsavers. IBM printed multiple editions of the language specifications; I can only find -1 (Second ed.) and -3 (Fourth ed.) on bitsavers. That makes it difficult to say when BASED and POINTER entered the language. If anybody has copies of the missing documents, please scan them for bitsavers. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:38, 18 November 2021 (UTC)
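Since BASED and POINTER are the attributes at issue, here is a minimal sketch of the list-processing style they enable (names invented; syntax in the style of the later S/360 compilers rather than any particular edition of the specification):
DCL NULL BUILTIN;
DCL (P, HEAD) POINTER;
DCL 1 NODE BASED (P),
      2 VALUE FIXED BIN(31),
      2 NEXT  POINTER;
ALLOCATE NODE;                 /* ALLOCATE sets P to the new node's address    */
NODE.VALUE = 42;
NODE.NEXT = NULL;
HEAD = P;
DO WHILE (P ¬= NULL);          /* walk the list; ¬= is PL/I's not-equal        */
   PUT LIST (NODE.VALUE);
   P = NODE.NEXT;
END;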
References
- ^ NPL Technical Report (PDF) (Report). IBM. December 1964. 320-0908.
- ^ IBM System/360 Operating System PL/I: Language Specifications (PDF) (Second ed.). IBM. July 1965. C28-6571-1.
- ^ IBM System/360 Operating System PL/I Language Specifications (PDF) (Fourth ed.). IBM. July 1966. C28-6571-3.
Varying strings in Multics PL/I?
Talk:PL/I#Multics PL/I and derivatives claims EPL was a system programming language and a dialect of PL/I that had some capabilities absent in the original PL/I (like varying length strings).
with a citation of Griswold, Ralph (1978). "A history of the SNOBOL programming languages" (PDF). ACM SIGPLAN Notices. 13 (8). ACM: 275–308. doi:10.1145/960118.808393. ISSN 0362-1340. S2CID 5413577. Archived from the original (PDF) on 2019-03-02, an article that has no connection to either Multics or PL/I, and the claim conflicts with "Types of Data" (PDF), NPL Technical Report (PDF), IBM, December 1964, p. 14, 320-0908, String data may be either CHARACTER string or BIT string and may be declared to be either fixed or varying length. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 00:07, 30 December 2021 (UTC)
Need content about PL on SQL domain
PL/I was adapted as a domain-specific language in the SQL context. The main "PLs" today are PostgreSQL's PL/pgSQL and Oracle's PL/SQL, as variations of the SQL/PSM ISO standard. Krauss (talk) 09:53, 9 January 2022 (UTC)
threads and tasks
Recent changes mention (at least in the edit summary) a connection between threads and tasks. As well as I know, what IBM calls tasks is closer to what many (especially Unix users) call a process. That has the complication that multiprocessing is ambiguous. (What do Unix users call it when they have more than one processor?) An important distinction is the overhead for creating and terminating one. Gah4 (talk) 09:21, 9 May 2022 (UTC)
- Section 15 "MULTITASKING" of the PL/I F manual appears to indicate that you can create a new "task" with a CALL statement, and does so in a fashion that does not indicate that the new "task" shares no variables with the "task" making the CALL statement - for example, "SHARING DATA BETWEEN TASKS" says:
- It is the programmer's responsibility to ensure that two references to the same variable cannot be in effect at one instant. He can do so by including an appropriate WAIT statement at a suitable point in his source program to force temporary synchronization of the tasks involved. ...
- This does NOT sound like "separate processes" in the UN\*X/Windows/etc. sense; it sounds like separate threads within a given process. Section 17 "MULTITASKING" of the Checkout and Optimizing compiler manual says in the "PRIORITY Option" subsection that:
- When a number of tasks simultaneously require attention, a choice has to be made. Under the optimizing compiler, this choice is made by the operating system, based on the relative importance of the various tasks: a task that has a higher priority value than the others will receive attention first.
- which sounds as if a PL/I "task", at least with the Optimizing compiler, corresponds to some entity known to the operating system.
- Now, that's for OS/360, where everything ran in the same address space, so maybe the notion of a "process" in the UN*X/Windows sense, where each process has its own address space, doesn't quite work. I can't find any documentation on how PL/I "multitasking" works in MVS, where you do have Multiple Virtual (address) Spaces.
- It appears that the "create a new thread" operation in PL/I F, and at least in the S/360 Checkout and Optimizing compilers, was a CALL statement with a TASK option. The Enterprise PL/I for z/OS manual doesn't mention that option, and, instead, in its description of the "multithreading facility", describes an ATTACH statement that calls a procedure that "must be declared as having no parameters or as having exactly one BYVALUE POINTER parameter". Guy Harris (talk) 20:56, 9 May 2022 (UTC)
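A minimal sketch of that PL/I (F)-style form (names invented; TASK, EVENT and PRIORITY are the CALL options the F and Optimizing manuals describe, while Enterprise PL/I replaces this with ATTACH, as noted above):
DCL T TASK, DONE EVENT;
DCL WORKER ENTRY (FIXED BIN(31));
CALL WORKER (42) TASK(T) EVENT(DONE) PRIORITY(-1);  /* attach WORKER as a subtask        */
/* ... the attaching task keeps running here ...   */
WAIT (DONE);                                        /* block until the subtask completes */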
- Prior to Unix support, the PL/I multitasking for OS/360 and successors used the ATTACH macro.
- In OS/360, OS/VS1 and SVS, ATTACH creates a new task in the same partition or region, running with the same protection key as its parent. In MVS, ATTACH creates a new task in the same address space, making it analogous to a lightweight process (thread); all memory is shared between the tasks.
- I haven't checked whether the current PL/I for MVS uses ATTACH or UNIX processes and threads. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 23:48, 9 May 2022 (UTC)
- So it sounds as if, in OS/360 and successors, the term "task" doesn't carry any implication of existing in a separate address space (or pre-MVS equivalent, as per "in the same partition or region, running with the same protection key as its parent"), unlike the term "process" in the UN\*X/Windows sense, where a "process" has a (user-mode) address space assigned to it, and a "thread" is part of a set of one or more threads of control within a "process" and its address space.
- The OS/360 documentation speaks of "subtasks"; are there cases where a "subtask" of a given "task" can have an address space (or equivalent) separate from its parent "task"? If not, then "subtask" may be the equivalent of "thread". Guy Harris (talk) 00:47, 10 May 2022 (UTC)
- No, a subtask is always in the same address space as its parent.
- Also, in IBM's MVS OpenEdition implementation of Unix, now known as z/OS UNIX System Services, a newly spawned process might be in the same address space as its parent. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 12:10, 10 May 2022 (UTC)
- "No, a subtask is always in the same address space as its parent." I.e., as you said in the edit comment, subtasks are threads.
- Presumably "...in IBM's MVS OpenEdition implementation of Unix ... a newly spawned process might be in the same address space as its parent." because they have to implement
fork()
, although anything mappedMAP_PRIVATE
will be copy-on-write (as is now done on UN*Xes) because, to quote the current Single UNIX Specification for fork():- Memory mappings created in the parent shall be retained in the child process. MAP_PRIVATE mappings inherited from the parent shall also be MAP_PRIVATE mappings in the child, and any modifications to the data in these mappings made by the parent prior to calling fork() shall be visible to the child. Any modifications to the data in MAP_PRIVATE mappings made by the parent after fork() returns shall be visible only to the parent. Modifications to the data in MAP_PRIVATE mappings made by the child shall be visible only to the child."
- so that, for example, data and stack areas aren't shared. Guy Harris (talk) 17:44, 10 May 2022 (UTC)
OK, maybe I am thinking of something different. I tend to think of threads as low overhead, and tasks in the OS/360 sense as high overhead, closer to Unix processes. I am not so sure now what Unix says about address space, and especially for fork() vs. vfork(). With threads, you should be able to create a large number of them, where I suspect OS/360 won't let you create hundreds or thousands of tasks. I am also not sure about scheduling of tasks, processes, and threads. Gah4 (talk) 22:47, 10 May 2022 (UTC)
- More precisely, in most cases
When programs issue fork() or spawn(), the BPXAS PROC found in SYS1.PROCLIB is used to provide a new address space. For a fork(), the system copies one process, called the parent process , into a new process, called the child process.
- However, the behavior of the shell is controlled by two environment variables BPX_SHAREAS=YES and _BPX_SPAWN_SCRIPT, extended attributes and parameters of spawn
To allow the caller to control whether the spawned child process runs in a separate address space from the parent address space or in the same address space, the spawn service allows for the specification of the _BPX_SHAREAS environment variable.
- I hope that's not TMI. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 23:10, 10 May 2022 (UTC)
- There are three different threading models, with different amounts of overhead. Windows, and most if not all UN*Xes these days, use 1:1 threading. In 1:1 threading, creating a thread requires a kernel transition, so it's not extremely low overhead. However, processes carry more state than threads, so creating a thread should have lower overhead than creating a process; for example, you don't have to duplicate the address space, even if duplicating it doesn't involve processing every page other than marking some pages copy-on-write and responding to whatever write permission faults that result from that by doing the lazy copy.
- As noted,
fork()
duplicates the address space, so that subsequent changes to the address space in the parent don't affect the child, and vice versa. vfork()
isn't in the Single UNIX Specification; typically, it "loans" the parent's address space to the child, and temporarily blocks the parent process, with the expectation that the child process will do very little to the writable parts of the address space and will use one of theexec
calls to set up a new address space with a new program. Once it does that, the parent process is unblocked and can continue running, as if it were to modify the address space or any data in the address space, that won't affect the child. The macOSvfork()
man page has two paragraphs of warnings about stuff you should be careful not to do in the child after avfork()
; those warnings may apply to some other UN*Xes as well. Other UN*Xes might implementvfork()
without the address-space loan, relying on copy-on-write for protection; however, they may still block the parent until anexec
call is done, because the code might implicitly or even explicitly require that the parent not run until the child finishes the setup and theexec
. (I have some memory of a UN*X wherevfork()
originally was the same asfork()
, but some stuff usingvfork()
broke because the parent ran before theexec
call was done, so the blocking was added. That might have been SunOS 4.0, but it's been so long that I can't be sure.)- There's also
posix_spawn()
, which is similar to WindowsCreateProcess()
in that it creates a child process that is running a new program, and that has some file descriptor/signal/etc. modifications done, but by the kernel rather than by the child process before running the program. No address space copying or loaning need be done there. - OS/360 ran on machines that, were I to run a simulation of them on a low-end laptop under Hercules, would probably be significantly outperformed by the simulated machine. As such, it probably couldn't create lots of tasks or even subtasks, but there were limits on the number of processes that V6 or V7 UNIX could create even on a PDP-11/70 - 100 processes might have been either near the upper limit or above the upper limit. For present-day systems, ps -ef | wc -l shows 1454 processes running on my machine, and z/OS running even on a low-end mainframe could probably handle that many tasks, all with their own address spaces. A program based on this source code reported 1455 processes and 4402 threads running on my machine, so it's not tens or hundreds of thousands of threads there.
- I can't speak for the scheduling of tasks, but I think that scheduling is generally done on threads, not processes, in modern UN*Xes and Windows, although there may be some process scheduling done in the decision for which thread to schedule, based on the processes to which threads belong (e.g., so that some process with a ton of threads doesn't get to monopolize the CPU). Guy Harris (talk) 00:12, 11 May 2022 (UTC)
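As an aside on the task side of this: the older IBM compilers (PL/I F and OS PL/I) exposed OS/360 subtask attachment in the language itself, through the TASK/EVENT options of CALL. A rough sketch from memory, with made-up names, omitting the compiler/runtime multitasking options this needs (and later compilers dropped the facility, as far as I know):
    MAINTASK: PROCEDURE OPTIONS(MAIN);        /* multitasking also needs compiler/link options */
       DECLARE SIDE TASK;                     /* task variable for the subtask                 */
       DECLARE DONE EVENT;                    /* completed when the subtask ends               */
       CALL WORKER(42) TASK(SIDE) EVENT(DONE);  /* attach WORKER as an OS subtask              */
       /* ... the attaching task keeps running here, overlapped with WORKER ...                */
       WAIT(DONE);                            /* "join": wait for the subtask to finish        */
    WORKER: PROCEDURE(N);
       DECLARE N FIXED BINARY(31);
       PUT SKIP LIST('Subtask got', N);
    END WORKER;
    END MAINTASK;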
The article mentions event-driven programming. I suppose that isn't so far off, but isn't what I would have thought about. There are EVENT variables, corresponding to the ECB (Event Control Block) used by OS/360 and successors. That is mostly used for I/O, but also for task synchronization. For I/O, programs can used double (or more) buffering to keep the I/O system busy, and similarly with asynchronous I/O in PL/I. Gah4 (talk) 23:38, 10 July 2023 (UTC)
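For readers who haven't seen it, a minimal sketch of the overlap described above, using the EVENT option of READ plus a WAIT statement; the file and buffer names are made up, ENDFILE/error handling and ENVIRONMENT options are omitted, and, if I remember right, IBM PL/I requires the UNBUFFERED attribute for the EVENT option on RECORD files:
    OVERLAP: PROCEDURE OPTIONS(MAIN);
       DECLARE INFILE FILE RECORD INPUT UNBUFFERED;  /* EVENT needs an UNBUFFERED file */
       DECLARE (BUF1, BUF2) CHARACTER(80);
       DECLARE (E1, E2) EVENT;
       READ FILE(INFILE) INTO(BUF1) EVENT(E1);  /* start the first read               */
       WAIT(E1);                                /* need BUF1 before using it          */
       READ FILE(INFILE) INTO(BUF2) EVENT(E2);  /* start the next read ...            */
       /* ... and process BUF1 here while that read is still in flight ...            */
       WAIT(E2);                                /* now BUF2 is safe to use            */
       /* ... process BUF2 ...                                                        */
    END OVERLAP;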
- As you note, PL/I EVENTs seem to implement, at the language level, the OS/360 notion of events. I have the impression that, in that era, the notion, in programs written in higher-level languages, of directly requesting services from the operating system was less common than it is now, especially in operating systems largely written in assembler language. (In Multics, they could be invoked by procedure calls from PL/I and possibly other languages; I'm not sure how they were invoked in the Burroughs MCP.)
- That notion may have originated when OS/360 was mainly batch-oriented, so there were few if any events that were the result of a user request (operator request, maybe), so "events" were mostly things such as "this asynchronous I/O operation completed" or "this subtask finished" or "this subtask finished doing XXX" - i.e., they represented the completion of some operation being performed asynchronously, having been started earlier by the program. They're not the sort of events referred to when speaking of "event-driven programming", which are indications that some external agent, whether a human or another computer, made a request.
- IBM Tele-Processing(TM) was another matter. I don't know whether a program could say "when something arrives on this communication line, post this event" and then wait on one or more such events, but that would be event-driven programming (even though the article in question seems to talk more about user input events than what would, these days, be network events). If so, then "event-driven programming" could be implemented atop the event mechanism. However, that's "atop"; primitives in the style of WAIT and POST (which some other OSes have, with varying degrees of similarity and generality) would be hidden inside the event loop and in the code that delivers events, respectively.
- So, similarly, event-driven programming in the style described by its Wikipedia page might be implemented in PL/I atop WAIT statements and assignments to EVENT variables, but it wouldn't necessarily involve programmers directly using WAIT statements, for example (a rough sketch of such a loop follows this comment).
- And the full statement:
The goals for PL/I evolved during the early development of the language. Competitiveness with COBOL's record handling and report writing was required. The language's scope of usefulness grew to include system programming and event-driven programming.
- needs some clarification. Did "grew to include" mean general language features were added for those purposes? Or does it mean that existing language features were sufficient? Guy Harris (talk) 06:14, 11 July 2023 (UTC)
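To make the "atop" point above concrete, here is a rough, invented sketch of the kind of dispatch loop that could sit underneath. The event names are made up, and the EVENT variables are assumed to be ordinary (non-I/O) events that some other task completes via the COMPLETION pseudovariable when work arrives; IBM PL/I's WAIT accepts a count, so WAIT(list)(1) returns as soon as any one event in the list completes:
    /* Schematic fragment only: KEYBD_EV and LINE_EV are assumed to be completed  */
    /* by other tasks, via COMPLETION(...) = '1'B, when work arrives.             */
    DECLARE (KEYBD_EV, LINE_EV) EVENT;
    DECLARE SHUTDOWN BIT(1) INIT('0'B);
    DO WHILE(SHUTDOWN = '0'B);
       WAIT(KEYBD_EV, LINE_EV) (1);            /* wait until any ONE event completes */
       IF COMPLETION(KEYBD_EV) THEN DO;
          /* ... dispatch the "keyboard" event to its handler ...                    */
          COMPLETION(KEYBD_EV) = '0'B;         /* re-arm the event                   */
       END;
       IF COMPLETION(LINE_EV) THEN DO;
          /* ... dispatch the "line" event; a handler may set SHUTDOWN = '1'B ...    */
          COMPLETION(LINE_EV) = '0'B;
       END;
    END;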
- And to add more confusion, event-driven programming § Exception handlers in PL/I seems to be speaking of ON-units as event handlers; the ON mechanism seems more like a (less-structured) form of exception handling rather than anything like event-driven programming. Guy Harris (talk) 06:42, 11 July 2023 (UTC)
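For comparison with EVENT variables, this is roughly what an ON-unit looks like: it is established at run time and is entered when the named condition is raised during execution, which is why it reads like exception handling rather than like a loop dispatching external events. A minimal sketch of my own, not taken from the article:
    READSUM: PROCEDURE OPTIONS(MAIN);
       DECLARE SYSIN FILE STREAM INPUT;
       DECLARE (X, TOTAL) FLOAT DECIMAL(6);
       DECLARE EOF BIT(1) INIT('0'B);
       ON ENDFILE(SYSIN) EOF = '1'B;   /* ON-unit: runs when ENDFILE is raised on SYSIN */
       TOTAL = 0;
       GET FILE(SYSIN) LIST(X);
       DO WHILE(EOF = '0'B);
          TOTAL = TOTAL + X;
          GET FILE(SYSIN) LIST(X);
       END;
       PUT SKIP LIST('Total =', TOTAL);
    END READSUM;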
- I thought, as noted above, that it is related to EVENT variables, and named after the OS/360 ECB. In any case, we also need an article about the Event Control Block. There are some fun endless channel programs for handling things like terminal I/O. That, and the handling of the I/O interrupt could do it. Gah4 (talk) 21:53, 11 July 2023 (UTC)
I thought, as noted above, that it is related to EVENT variables, and named after the OS/360 ECB.
- What is the "it" here? If it's "event-driven programming", I suspect it's unrelated to PL/I EVENT variables or the OS/360 Event Control Block; I suspect, instead, that the notion of an "event" to which a computer system responds predates all of those terms. Guy Harris (talk) 23:12, 11 July 2023 (UTC)
- Yes it is supposed to be "event-driven programming", the title of the thread. But then again, threads often migrate to different subjects, so it isn't quite as obvious as it should be. Gah4 (talk) 00:39, 12 July 2023 (UTC)
- OK, so, as per my previous comment, I've seen nothing to lead me to believe that "event" in the term "event-driven programming" is at all either PL/I- or OS/360-derived. In fact, General Information Manual, IBM 7750 Programmed Transmission Control, describing a pre-S/360 programmable communication controller, says, on page 8, that
In real-time data processing, events are often independent, not only of the computer, but of each other.
- so it looks as if the notion of a real-time program responding to what are called "events" predates the use of "event" in PL/I and OS/360 - perhaps OS/360 chose the name because it was in general use, PL/I chose it either because OS/360 chose it or because it was in general use, and whoever coined the term "event-driven programming" chose it because it was in general use.
- In any case, I've asked for a citation for PL/I being used for "event-driven programming", so that we know what "event-driven programming" means here - is it event-driven programming in the sense described on that page, or does it just mean "there's a data type EVENT"? Guy Harris (talk) 02:39, 12 July 2023 (UTC)
- OK, so give someone a chance to find the citation for it. Sounds fine to me. Gah4 (talk) 05:30, 12 July 2023 (UTC)
ALGOL
Algol is one of the legs of the "three-legged stool" from which PL/I features were taken. This article contains lots of comparisons to COBOL and FORTRAN, some to C and Pascal, but (almost?) none to ALGOL. Presumably there are fewer programmers who know Algol these days; I certainly have only a nodding acquaintance. I'd like to see some discussion about Algol features adopted by PL/I, and maybe some that weren't adopted. Peter Flass (talk) 13:49, 7 August 2024 (UTC)
- Many low-level features came from Fortran and COBOL. But the idea of block structure, which should be more general, as far as I know did come from ALGOL. I suppose also the idea of recursive procedure calls, which again should have been more general, but likely came from ALGOL. Gah4 (talk) 22:09, 7 August 2024 (UTC)
- Several other things came from ALGOL 60, with some changes to the nomenclature.
- Arrays came from ALGOL 60; instead of forcing the lower bound to be a fixed value, PL/I allows the programmer to explicitly declare it.
- Use of paired keywords (BEGIN/END, DO/END, PROCEDURE/END) came from ALGOL 60.
- LABEL variables came from ALGOL 60.
- Nested IF came from ALGOL 60.
- Recursion came from ALGOL 60. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:01, 8 August 2024 (UTC)
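To make those items concrete, a small sketch of my own showing an array with an explicitly declared (negative) lower bound, a LABEL variable, a nested IF, DO/END and PROCEDURE/END pairs, and a recursive procedure:
    DEMO: PROCEDURE OPTIONS(MAIN);
       DECLARE TEMPS(-10:10) FIXED BINARY(31);  /* lower bound need not be 1 */
       DECLARE WHERE LABEL;                     /* LABEL variable            */
       DECLARE I FIXED BINARY(31);
       DO I = -10 TO 10;                        /* DO ... END pair           */
          TEMPS(I) = I;
       END;
       IF TEMPS(0) = 0 THEN
          IF TEMPS(10) > TEMPS(-10) THEN        /* nested IF                 */
             WHERE = POSCASE;
          ELSE
             WHERE = NEGCASE;
       ELSE
          WHERE = NEGCASE;
       GO TO WHERE;                             /* branch via the LABEL variable */
    POSCASE:
       PUT SKIP LIST('10! =', FACT(10));        /* ALGOL-style recursion     */
       GO TO FIN;
    NEGCASE:
       PUT SKIP LIST('unexpected');
    FIN:;
    FACT: PROCEDURE(N) RETURNS(FIXED BINARY(31)) RECURSIVE;
       DECLARE N FIXED BINARY(31);
       IF N <= 1 THEN RETURN(1);
       RETURN(N * FACT(N - 1));
    END FACT;
    END DEMO;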
- The problem is that those all seem so obvious now. I have a real hard time imagining the thinking behind a language designed in 1965. Now, there is one thing that PL/I has that ALGOL doesn't, which is the difference between BEGIN/END and DO/END.
- On the other hand, features from Fortran are at a finer level. Maybe micro in economics terms. And then changed in funny ways in PL/I.
- Since ALGOL didn't really have I/O, it isn't so easy to explain PL/I STREAM I/O, but it is different enough from Fortran to confuse Fortran programmers. (Stream I/O was finally added to Fortran somewhat recently.)
- And PICTURE from COBOL, but also structures. I suspect scientific programmers would have never thought about structures, if they didn't come from COBOL.
- And then there is the difference between statement terminators and statement separators, where PL/I differs from ALGOL.
- It seems that multicharacter variable names came from Fortran. Seems so obvious, but mathematicians still like one character variables.
- (But okay, both Roman and Greek alphabets.) Gah4 (talk) 23:36, 8 August 2024 (UTC)
Arrays came from ALGOL 60
Fortran had them as well, right? And earlier? Guy Harris (talk) 23:45, 8 August 2024 (UTC)
- Fortran first. maybe algol had adjustable arrays. I’m researching algol and Bob Beamer has an interesting list of terms, and presumably the associated concepts, that came from ALGOL. Peter Flass (talk) 00:27, 9 August 2024 (UTC)
- Both ALGOL 60 and PL/I fixed array extents on block entry; there was no equivalent to FLEX in ALGOL 68 and neither ALGOL 60 nor PL/I could change array extents dynamically. However, the array bounds could be computed from procedure parameters and variables in outer blocks. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:14, 9 August 2024 (UTC)
- There was a proposal for an ALGOL 60 I/O library, but I'm not aware of any influence on PL/I. PL/I record I/O seems to have a COBOL flavor while data and edit I/O seem to have a FORTRAN flavor.
- COBOL inherited PICTURE and structures from COMTRAN and FACT.
- All three precursor languages had multicharacter names.
- Fortran had arrays with a lower index bound of 1; in PL/I you can declare an array with an arbitrary lower bound, even negative. That came from ALGOL 60, not from FORTRAN. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:14, 9 August 2024 (UTC)
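To illustrate those last two points, a sketch of my own: an automatic array whose bounds are computed from a parameter when the block is entered (and whose lower bound isn't 1); after entry, the extents stay fixed for the life of the block. The names are invented, and the procedure is meant to be called with some positive N:
    STATS: PROCEDURE(N);
       DECLARE N FIXED BINARY(31);
       DECLARE WORK(-N:N) FLOAT BINARY;   /* extents computed from N at block entry */
       DECLARE I FIXED BINARY(31);
       DO I = -N TO N;
          WORK(I) = I;                    /* lower bound may be negative            */
       END;
       PUT SKIP LIST('Extent is', HBOUND(WORK,1) - LBOUND(WORK,1) + 1);
    END STATS;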
There is some recent discussion of Storage class, and mention of Fortran's named common. I was wondering about where to link. It seems that Storage class links to the page on C, and C's storage class. Seems to me that it deserves its own page, comparing and contrasting its use in different languages. Also, it should explain Fortran's named common in storage class terms. (So we can link to that one.) Gah4 (talk) 22:05, 7 August 2024 (UTC)
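For reference while that discussion continues, the PL/I storage classes in one sketch of my own (names invented); STATIC EXTERNAL is the piece that maps most directly onto Fortran's named COMMON, in that it names storage shared between separately compiled procedures:
    SCDEMO: PROCEDURE;
       DECLARE CALLS    FIXED BINARY(31) STATIC INITIAL(0);  /* one copy, lives for the run        */
       DECLARE SHARED   FIXED BINARY(31) STATIC EXTERNAL;    /* shared across compilations,        */
                                                             /* roughly Fortran named COMMON       */
       DECLARE TEMP     FLOAT BINARY AUTOMATIC;              /* allocated on each block entry      */
       DECLARE STACKTOP FIXED BINARY(31) CONTROLLED;         /* explicit ALLOCATE/FREE, stack-like */
       DECLARE P POINTER;
       DECLARE NODE FIXED BINARY(31) BASED(P);               /* overlays whatever P points at      */
       CALLS = CALLS + 1;
       TEMP = CALLS;
       ALLOCATE STACKTOP;                                    /* CONTROLLED: push a generation      */
       STACKTOP = CALLS;
       ALLOCATE NODE;                                        /* BASED: allocate storage, set P     */
       NODE = CALLS;
       FREE NODE;
       FREE STACKTOP;
       SHARED = CALLS;
    END SCDEMO;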
- Yes, the concept should have its own page. The term dates back at least to a December 1964 "NPL Technical Report"; I don't see it in this copy of the Algol 60 report, so it may have originated in
NPL/PL/I and been adopted by at least some other languages. Guy Harris (talk) 22:34, 7 August 2024 (UTC)
- Most of the google hits refer to storage class from SMS/HSM or similar external storage managers. The rest take you to C. Peter Flass (talk) 03:32, 8 August 2024 (UTC)
Most of the google hits refer to storage class from SMS/HSM or similar external storage managers.
If somebody wants to create a page for that concept, they can create "Storage class (external storage managers)", rename "Storage class" to "Storage class (programming languages)", modify all links to "Storage class" appropriately, and have "Storage class" be a disambiguation page; or they can rename "Storage class" to "Storage class (programming languages)", modify all links to "Storage class" appropriately, create "Storage class" as the page for external storage managers' storage classes, and give that page a hatnote pointing to "Storage class (programming languages)".
The rest take you to C.
I guess there's so much stuff about C storage classes that PL/I or other language storage classes show up only on page 94. That doesn't mean the current state of affairs is correct. Guy Harris (talk) 10:18, 8 August 2024 (UTC)
- I suppose, but it seems more complicated than necessary. We could just have External storage manager for that use, linked from External storage class. And then this page for the programming language case. Unless a lot of people are getting confused, that seems better to me. Gah4 (talk) 12:05, 8 August 2024 (UTC)
- The points I was trying to make are that neither 1) "Most of the google hits refer to storage class from SMS/HSM or similar external storage managers." nor 2) "The rest take you to C." argues against turning Storage class into a general article about storage classes in programming languages - if the issue "what about storage classes in external storage managers, which is the type of storage class that most Web search hits are about" pops up, there are several things we can do in response without removing the article about storage classes in programming languages, and "the rest take you to C" doesn't mean we shouldn't talk about storage classes in other languages, as that's just a form of recentism. Guy Harris (talk) 01:03, 9 August 2024 (UTC)
- I believe it is (external storage) class, and not external (storage class). So, I think that External storage manager is a fine name for the page. When someone gets to write that page. Gah4 (talk) 01:22, 9 August 2024 (UTC)