Monday, January 23, 2012

Running a Linux Machine Part 1

Running a Linux Machine, 1-21-2012

                My brother swears by this OS (I am not sure which distribution, but I believe it to be Debian) for his personal computers, and I've had to use Linux with a GUI for CS 2308, on the machines in the Derrick CS lab. But I've never installed or managed an open-source OS, and I know very little about what makes Linux unique.
                Unix was the product of research at Bell Labs in the late 1960s, picking up after Bell Labs withdrew from the earlier MULTICS project. When Unix first ran in 1969, written in assembly language, it was adopted on a handful of machines but received almost no publicity. After it was rewritten in C in 1973, it started to gain attention, and it became widely used in servers and workstations. Throughout the 1970s, Unix spread because it was simply the most convenient, powerful, and complete OS available; it was running on over 100,000 machines by 1984. Still, though, it was proprietary software.
                Linux is a free and open-source Unix-like operating system developed in the early 1990s; its breakthrough was that it targeted inexpensive IBM PC-compatible machines.
                The main design aspect uniting all versions of Linux is a common kernel. The kernel is the core of the OS; it mediates between the hardware and the applications the user wishes to run. Linux has a monolithic kernel, meaning that all OS services (scheduling, memory management, device drivers, file systems) run together in kernel space with full supervisor privileges. This approach gives good performance, because services call each other directly and reach the hardware without crossing protection boundaries. The downside is that the system's stability depends on every one of those components: a buggy driver can crash the whole system. However, loadable kernel modules let drivers and other code be loaded and unloaded by the OS at runtime, so only the code actually needed stays resident in kernel space, which improves flexibility and keeps the base kernel smaller. Although the monolithic kernel still has disadvantages, many developers argue it remains the right trade-off for system performance.
Fig. 1. Distinction between monolithic, microkernel, and hybrid kernel OSs.

                The opposite of a monolithic kernel, and much rarer in practice, is the microkernel, which keeps only a minimum of code in the kernel and runs the remaining services, such as drivers and file systems, as separate user-space processes. A compromise is the hybrid kernel, which is used by most PC operating systems, including modern Windows and Mac OS (Windows releases prior to 1995 were not really operating systems in their own right, just shells running on top of DOS). These hybrids are basically microkernels with some non-essential code moved from user space back into kernel space, in order to improve performance while keeping some of the improved stability of the microkernel paradigm. It's obviously a good fit for home computing, or else hybrid kernels wouldn't dominate the desktop.
                The strength of Linux is that it is low-cost (technically free) and easily customizable, making it a good fit for many applications. A majority of web servers (and virtually all supercomputers) worldwide run Linux. Desktop computers have lagged far behind in adopting Linux, principally because of its complexity for casual computer users, although this has begun to change in the past few years. There are literally hundreds of Linux distributions; Fedora, Debian, and Ubuntu are some of the most popular families.
                Ubuntu, Linux Mint, and PCLinuxOS are all good choices for new Linux users who don't want to learn all the complexities. If the only concern is stability and reliability, CentOS is a good choice. Fedora and Debian are closest to "middle of the road" for ease of use, functionality, and stability.
                Installation of a Linux distribution is similar to that of any proprietary OS, except that many distributions are available as free web downloads, in addition to very low-cost CDs. I am really not an expert on Linux by any means, but I think I would lean towards installing Debian because of how well-tested it is, resulting in stability and security. It also supports more architectures than any other Linux distro and has a huge variety of software packages available, with several graphical and command-line front-ends for managing them.
                Actually, even though I’ve never done a Linux installation and they have a reputation as “geeky” among the uninitiated, it looks like there’s an incredibly detailed set of instructions and guidelines for installing and setting up the system:
                http://www.debian.org/releases/stable/amd64/
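One practical step before running any installer is verifying the downloaded image against the checksums Debian publishes alongside its downloads. A minimal sketch, using a stand-in file since the real image is a large download (the file name is a placeholder):

```shell
# Stand-in for a downloaded installer image; a real one comes from debian.org
echo "placeholder image contents" > /tmp/debian-installer.iso

# Compute its SHA-256 hash and compare it by eye against the value listed
# in the SHA256SUMS file published next to the download
sha256sum /tmp/debian-installer.iso
```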
               
Other consulted sources:
http://brinch-hansen.net/papers/2001b.pdf - A history of operating systems
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf - Monolithic kernels vs. microkernels
http://distrowatch.com/ - Information on different Linux distributions

