It managed to rain this morning only as I was trying to get Tess out to the pasture. We both made it, though. The wet grass on her feet is a big reason I make a point of getting her out there.
Work was much as usual for the midweek day, so no point in belaboring that. (Get it?)
I did manage to squeak in a little more PI experimentation. I'm to the point now where jobs run for several minutes to nearly an hour, depending on the platform and how busy the machine is otherwise. I figured out some more foibles of various FORTRAN implementations, most notably the 8-bit Microsoft compiler for CP/M 2.2. In spite of documentation to the contrary, it appears that this compiler really implements integers only as 8-bit or 16-bit values. You can define 32-bit integer variables, but they will be treated as 16-bit without any warning or other correction. This of course makes things go bizarre when you try to create loops with counters larger than 32767. I'm not sure the double precision floating point numbers are really 64 bits either, though they should be; they may be silently narrowed the same way the integers are. Taking this into account, I was able to modify my code to get 7 decimal places of accuracy: 3.1415926 (truncated, not rounded, so maybe only 6.5 decimal places.) In order to loop more than 32767 times, I had to nest one loop inside another: an inner loop running 1000 times inside an outer loop running 100 times. I did not try for a million iterations because the CP/M emulation software was obviously overheating the CPU in the Linux machine as it ran.
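For the record, the nested-loop workaround looks roughly like this. It's a minimal sketch of the technique, not my actual program, and I'm using the slow textbook Leibniz series purely as a stand-in for the real summation; the point is only that no integer anywhere exceeds 32767 even though 100,000 terms get summed:

```fortran
C     SKETCH ONLY: 100 X 1000 = 100,000 TERMS OF THE LEIBNIZ
C     SERIES 4*(1 - 1/3 + 1/5 - ...) WITHOUT ANY INTEGER
C     COUNTER EXCEEDING THE 16-BIT LIMIT OF 32767.  THE TERM
C     STATE (DENOMINATOR AND SIGN) IS CARRIED IN DOUBLES, SO
C     NO LARGE INTEGER INDEX IS EVER NEEDED.
      PROGRAM NESTPI
      INTEGER I, J
      DOUBLE PRECISION SUM, DENOM, SGN
      SUM = 0.0D0
      DENOM = 1.0D0
      SGN = 1.0D0
      DO 20 I = 1, 100
         DO 10 J = 1, 1000
            SUM = SUM + SGN / DENOM
            DENOM = DENOM + 2.0D0
            SGN = -SGN
   10    CONTINUE
   20 CONTINUE
      WRITE (*, 100) 4.0D0 * SUM
  100 FORMAT (' PI APPROX: ', F12.8)
      END
```

With 100,000 terms the Leibniz series only manages about five correct decimals (its error shrinks roughly as 1/N), which is exactly why it makes such an unflattering benchmark; the nested counters are the part that carries over to the real method.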
Results for three other microcomputer environments, two improved over yesterday:
Linux with g77 v3.4.6 yields 3.1415926535 (truncated, not rounded, so 9.5 decimal places.) Much faster than any of the others; you press Enter and the result bounces right back at you.
MicroVAX 3000 (emulated by simh, as was CP/M) with OpenVMS 7.3 and the VAX Compaq FORTRAN v6.6 yields 3.141592653 (truncated, not rounded, so 8.5 decimal places.)
Alpha DS10 with OpenVMS 8.3 and HP Alpha Fortran v8.0.2 yields 3.141592654 (9 decimal places, rounded.) The Alpha should be able to do better, but I need to study the FORTRAN manuals. They're huge, about 1800 pages.
And I opened a new field of experimentation this morning with the FORTRAN G compiler, a FORTRAN IV implementation running on IBM's MVS 3.8 operating system on the Hercules emulated mainframe. This has not yet been optimized at all, so all I can say is that I got the program up and running to six decimal places, but the load the emulation placed on my desktop Linux box was pretty severe. That was 10 million iterations and took 14.5 minutes to run. The next order of magnitude should be possible, and IBM's library routines are no problem, but the job as presently written would run for 150 minutes (2.5 hours) of full-tilt CPU activity on the desktop supporting the emulation. I'm not sure I'm ready to try a continuous duty cycle that heavy with this five-year-old Dell.
no subject
Date: 2010-05-06 11:18 am (UTC)

You're right about advances in hardware speed and efficiency, of course. The language compilers, which are really what I started off about, have come a long way too. It's interesting that the open source g77 seems to outperform expensive commercial products and does it with very little fuss. I haven't even mentioned an MS-DOS version of FORTRAN that I have, from Lahey. That one is supposedly a complete FORTRAN 77 implementation to ANSI standards, and came with a thick manual. So far I haven't succeeded in getting it to compile my code, let alone run it. No error messages; the compiler just hangs there somewhere in space and never returns.
What started as a comparison of FORTRAN compilers and their performance has turned into a curiosity about PI algorithms, though. That's because one of the test examples I had for FORTRAN performance was supposedly a derivation for PI but did such a lousy job of it that I had to prove I could do better with less. XD My method easily surpasses the textbook benchmark (which admittedly is old, from the 1980s) but I'm sure there are better approaches that use the full breadth of precision on each new decimal place, rather than filling it with the ones already determined.
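For comparison, one classic way to get more out of each term, and I'm not claiming it's what I'll end up using, is Machin's 1706 identity pi/4 = 4*arctan(1/5) - arctan(1/239), where each arctangent is a fast-converging Taylor series. A rough FORTRAN 77-style sketch (my own illustration, names and all):

```fortran
C     MACHIN'S FORMULA: PI = 4*(4*ARCTAN(1/5) - ARCTAN(1/239)),
C     EACH ARCTANGENT EXPANDED AS ITS TAYLOR SERIES
C     ARCTAN(1/X) = 1/X - 1/(3*X**3) + 1/(5*X**5) - ...
      PROGRAM MACHIN
      DOUBLE PRECISION ATS, PI
      EXTERNAL ATS
      PI = 4.0D0 * (4.0D0*ATS(5.0D0) - ATS(239.0D0))
      WRITE (*, 100) PI
  100 FORMAT (' ', F18.15)
      END

      DOUBLE PRECISION FUNCTION ATS(X)
C     TAYLOR SERIES FOR ARCTAN(1/X), X > 1
      DOUBLE PRECISION X, TERM, SGN
      INTEGER K
      ATS = 0.0D0
      SGN = 1.0D0
      TERM = 1.0D0 / X
      DO 10 K = 1, 25
         ATS = ATS + SGN * TERM / DBLE(2*K - 1)
         TERM = TERM / (X*X)
         SGN = -SGN
   10 CONTINUE
      END
```

Twenty-five terms is overkill: the 1/239 series is done to machine precision in a handful of terms, and about a dozen terms of the 1/5 series already exhaust double precision. Compare that with the hundred thousand terms the textbook series needs for five decimals.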
In the process of figuring out the CP/M deficiencies, though, I added a write statement for debugging that scrolled the values as they were calculated. It's rather amusing to watch the flickering numbers gradually settle into place, one digit at a time, like a slot machine settling down to a final display.
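The effect is easy to reproduce on the Linux/g77 side, where 32-bit loop counters are safe. Something like this (again a sketch, with the Leibniz series standing in for my method, and the print-every-1000-iterations figure chosen arbitrarily) gives the same slot-machine display:

```fortran
C     SKETCH: PRINT THE RUNNING ESTIMATE EVERY 1000 TERMS AND
C     WATCH THE LEADING DIGITS LOCK INTO PLACE ONE BY ONE.
      PROGRAM SLOTPI
      INTEGER I
      DOUBLE PRECISION SUM, SGN, DENOM
      SUM = 0.0D0
      SGN = 1.0D0
      DENOM = 1.0D0
      DO 10 I = 1, 200000
         SUM = SUM + SGN / DENOM
         SGN = -SGN
         DENOM = DENOM + 2.0D0
         IF (MOD(I, 1000) .EQ. 0) WRITE (*, 100) 4.0D0 * SUM
   10 CONTINUE
  100 FORMAT (' ', F14.10)
      END
```

On a slow enough terminal the trailing digits flicker while the leading ones freeze, which is the whole show.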
no subject
Date: 2010-05-07 10:23 am (UTC)

FORTRAN still seems pretty popular for scientific calculations; a friend of mine used to munch through some hefty sets of aerosol particle data with it. I guess it should be well optimized for speed and accuracy, though the latter is a bit of an involved subject with all the binary representation stuff. We pondered using the GPUs on the new video cards; they're plenty faster since they're dedicated to basic vector math, and have gotten nice extensions through pixel shaders too. But the university funding got a bit too hairy for him, and he decided to move on to something different for a while, so we never got to try that. :-)