z/VM 50th - part 8
I took a 2 credit hr intro to fortran&computers, and within a year of taking the intro class, the univ. hires me fulltime, responsible for os/360. The univ. had a 709/1401 and was sold a 360/67 for tss/360; however, tss/360 never came to production fruition, so it ran (mostly) as a 360/65 with os/360. The univ. shut down the datacenter on weekends and I had it dedicated to myself all weekend, although 48hrs w/o sleep made Monday morning classes difficult. The student fortran jobs ran in under a second on the 709 (tape->tape); initially on OS/360, they ran over a minute. I install HASP and that cuts the time in half. I then redo the STAGE2 SYSGEN to carefully place datasets and PDS members, optimizing arm seek and PDS directory multitrack search and cutting it by another 2/3rds to 12.9secs. It never got better than the 709 until I install the Univ. of Waterloo WATFOR.
Three people from the science center came out to install CP67 (the 3rd installation, after CSC itself and MIT Lincoln Labs). I mostly played with it on weekends, rewriting lots of code: pathlengths, paging & scheduling algorithms (GLOBAL LRU page replacement and dynamic adaptive resource management), disk ordered seek queuing, chained page requests (optimized for transfers per revolution), TTY/ASCII terminal support, and a bunch of other stuff. The 2301 drum had been doing FIFO single transfers per I/O, around 70-80 4k transfers/sec; I got it up to 270 4k transfers/sec. (Most of the changes were picked up by CSC and incorporated into the distributed CP/67.) Old archived post with part of a 1968 SHARE user group presentation (mostly about the pathlength work).
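For illustration, a minimal sketch (in C, my own reconstruction, not the actual CP/67 channel-program code) of the chained-request idea: instead of issuing one FIFO transfer per I/O and paying rotational latency each time, queued 4k page requests are sorted by rotational slot and linked into a single chained I/O, so several transfers complete in each drum revolution. The slot field, struct layout, and function name are assumptions for the sketch.

    /* Sketch: order queued drum page requests by rotational slot and chain
     * them into one I/O, so multiple 4k transfers complete per revolution.
     * FIFO single-transfer I/O instead pays roughly half a revolution of
     * latency per page.  struct pg_req and its fields are hypothetical
     * illustration, not CP/67 data structures. */
    #include <stdlib.h>

    struct pg_req {
        unsigned slot;               /* rotational position of the 4k record */
        void *page;                  /* real-storage page to transfer        */
        struct pg_req *next;         /* chain pointer (would become chained CCWs) */
    };

    static int by_slot(const void *a, const void *b)
    {
        const struct pg_req *x = *(const struct pg_req *const *)a;
        const struct pg_req *y = *(const struct pg_req *const *)b;
        return (int)x->slot - (int)y->slot;
    }

    /* Link the pending requests in rotational order; real code would
     * translate this chain into chained CCWs for a single SIO. */
    struct pg_req *chain_by_rotation(struct pg_req **pending, size_t n)
    {
        if (n == 0)
            return NULL;
        qsort(pending, n, sizeof pending[0], by_slot);
        for (size_t i = 0; i + 1 < n; i++)
            pending[i]->next = pending[i + 1];
        pending[n - 1]->next = NULL;
        return pending[0];
    }

The effect is that the channel keeps transferring on nearly every slot that passes under the heads, instead of starting a new I/O (and waiting out the latency) for every single page.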
At about the same time, there were some academic publications about paging algorithms and working set controls.
Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think Renton was possibly the largest datacenter in the world: a couple hundred million in IBM 360 systems, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. There were lots of politics between the Renton datacenter director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room and install a 360/67 for me to play with when I'm not doing other stuff). When I graduate, I join the IBM science center (instead of staying at Boeing).
Jim Gray had departed IBM SJR for Tandem in the fall of 1980 (palming off a number of DBMS/RDBMS & System/R things on me). A year later, at the Dec81 ACM SIGOPS meeting, he asked me to help a Tandem co-worker get his Stanford PhD, which heavily involved "GLOBAL LRU" (the "local LRU" forces from the 60s academic work were heavily lobbying Stanford not to award a PhD for anything involving "GLOBAL LRU"). Jim knew I had detailed stats on the Cambridge/Grenoble global/local LRU comparison (showing global significantly outperformed local). In the early 70s, the IBM Grenoble Science Center had a 1mbyte 360/67 (155 4k pageable pages) running 35 CMS users; they had modified "standard" CP67 with a working set dispatcher and local LRU page replacement ... corresponding to the 60s academic papers. I was at Cambridge, which had a 768kbyte 360/67 (104 4k pageable pages, only 2/3rds the number Grenoble had) running 80 CMS users, with similar kinds of workloads, similar response, and similar throughput (but more than twice as many users), running the "standard" CP67 with the global LRU I had originally done as an undergraduate in the 60s. In addition to the Grenoble APR73 CACM article, I also had loads of detailed background performance data from Grenoble.
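To make the global-vs-local distinction concrete, here is a minimal sketch (in C, purely my illustration, not the CP/67 or Grenoble code) of clock-style replacement: the global variant sweeps one pool of pageable frames regardless of which user owns them, while the local variant only considers frames belonging to the faulting user. The frame count, owner/reference fields, and function names are all assumptions.

    /* Clock-style approximation of LRU, sketched two ways.
     * Global: scan the whole real-storage pool and steal the first
     * not-recently-referenced frame, whoever owns it.
     * Local: replacement is confined to the faulting user's own frames
     * (a real local policy would keep per-user lists and hands; filtering
     * by owner here is just to show the restriction).
     * All structures are hypothetical illustration. */

    #define NFRAMES 104              /* e.g. pageable 4k frames on a 768kbyte 360/67 */

    struct frame {
        int owner;                   /* user owning the page in this frame        */
        int referenced;              /* reference bit, reset as the hand passes   */
    };

    static struct frame frames[NFRAMES];
    static int hand;                 /* the clock hand */

    /* Global LRU: one clock hand over the entire pool. */
    int select_global(void)
    {
        for (;;) {
            struct frame *f = &frames[hand];
            hand = (hand + 1) % NFRAMES;
            if (f->referenced)
                f->referenced = 0;   /* recently used: give it another revolution */
            else
                return (int)(f - frames);   /* steal this frame */
        }
    }

    /* Local LRU: only the faulting user's frames are candidates. */
    int select_local(int user)
    {
        for (int scanned = 0; scanned < 2 * NFRAMES; scanned++) {
            struct frame *f = &frames[hand];
            hand = (hand + 1) % NFRAMES;
            if (f->owner != user)
                continue;            /* other users' frames are off limits */
            if (f->referenced)
                f->referenced = 0;
            else
                return (int)(f - frames);
        }
        return -1;                   /* user owns no stealable frame; allocation must grow */
    }

Roughly, the global policy lets real storage flow to whichever users currently need it, while fencing replacement per working set can leave pages idle in one user's allocation while another user is paging.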
IBM executives stepped in and blocked me from sending a response for nearly a year (I hoped it was part of the punishment for being blamed for online computer conferencing in the late 70s through the early 80s on the company internal network ... and not that they were meddling in the academic dispute). Part of the eventual response
recent related post mentioning online computer conferencing
some refs:
L. Belady, A Study of Replacement Algorithms for a Virtual Storage Computer, IBM Systems Journal, v5n2, 1966
L. Belady, The IBM History of Memory Management Technology, IBM Journal of R&D, v35n5
R. Carr, Virtual Memory Management, Stanford University, STAN-CS-81-873, 1981
R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981
P. Denning, Working Sets Past and Present, IEEE Trans. Softw. Eng., SE-6, Jan 1980
J. Rodriguez-Rosell, The Design, Implementation, and Evaluation of a Working Set Dispatcher, CACM, v16, Apr 1973
D. Hatfield and J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971
A little Jim Gray topic drift:

A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://bits.blogs.nytimes.com/2008/05/31/a-tribute-to-jim-gray-sometimes-nice-guys-do-finish-first/
"During the 1970s and '80s at I.B.M. and Tandem Computer, he helped lead the creation of modern database and transaction processing technologies that today underlie all electronic commerce and, more generally, the organization of digital information. Yet, for all of his impact on the world, Jim was both remarkably low-key and approachable. He was always willing to take time to explain technical concepts and offer independent perspective on various issues in the computer industry."

Tribute to Honor Jim Gray
https://web.archive.org/web/20080616153833/https://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html
"Gray is known for his groundbreaking work as a programmer, database expert and Microsoft engineer. Gray's work helped make possible such technologies as the cash machine, ecommerce, online ticketing, and deep databases like Google. In 1998, he received the ACM A.M. Turing Award, the most prestigious honor in computer science. He was appointed an IEEE Fellow in 1982, and also received the IEEE Charles Babbage Award."