Sunday, July 19, 2009

Operating systems


Supercomputer operating systems, today most often variants of Linux,[1] are at least as complex as those for smaller machines. Historically, their user interfaces tended to be less developed, because OS developers had limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to optimal utilization of the machine's hardware). These computers, often priced at millions of dollars, sell into a very small market, so the R&D budget for the OS was often limited. The advent of Unix and Linux has allowed reuse of conventional desktop software and user interfaces.
Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as AMD and NVIDIA, which have been able to produce cheap, feature-rich, high-performance, and innovative products thanks to the vast number of consumers driving their R&D.
Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing Fortran compilers existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos) and today's Linux.
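
As an aside, here is a minimal sketch of the kind of loop those vectorizing and parallelizing compilers were built for. It is written in C rather than the era's Fortran, and the function name and the OpenMP hint are purely illustrative assumptions of mine, not anything taken from the systems mentioned above:

    /* Illustrative only: a SAXPY-style loop (y = a*x + y) whose iterations
       are independent, so a vectorizing compiler can replace the scalar
       loop body with vector (SIMD) instructions, much as the Cray-era
       Fortran compilers did for vector hardware. */
    #include <stddef.h>

    void saxpy(size_t n, float a, const float *x, float *y)
    {
        /* The pragma is an optional modern hint (OpenMP 4.0+); many
           compilers will auto-vectorize this loop at -O3 even without it. */
        #pragma omp simd
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

Compiled with optimization (e.g., cc -O3), a loop like this is turned into vector instructions on hardware that has them; the difference in the old days was that each vendor's compiler and OS did this in its own incompatible way.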
In the future, the highest-performance systems are likely to use a variant of Linux, but with incompatible system-unique features (especially for the highest-end systems at secure facilities).
