Monday, January 30, 2012

Top Gear Series 18, Episode 1: My thoughts

Way to knock it out of the park, Clarkson, Hammond, and May. I haven't genuinely enjoyed a new TG episode this much since the Lancia special. The centerpiece of this episode is another road trip with three supercars, a bit like series 7, episode 3, where the cars were a Ford GT, a Zonda, and a Ferrari F430. Except this one feels a bit more profound. All of these cars look absolutely beautiful and all can top 200 mph, which came in handy when they were being hammered around Imola. It's not a playboy cruise like in series 7; it's actually using and abusing a handful of supercars.

The Lamborghini Aventador looks much better in Atomic Orange than it did in Rental Car White in series 17, episode 6, when it was first tested by Hammond. Clarkson gives this car the perfect treatment, not spoiling it with too many faux-Italian cliches. He is correct to praise the dramatic appearance. I was never happy with the front end of this car because of the pointless bulge right in the middle, which made it look like a body kit on an older Lamborghini, but in any color besides white it blends in well and you can focus on the beautifully sculpted sides and rear end. I have been warming up to this car very rapidly. Compared to 2001, when the Murcielago came out and I insisted the Diablo looked better for over a year afterward, the Aventador is growing on me much more quickly. Although it looks too high-tech on the inside to be really enjoyable, I think it's only a matter of time before I find it to be the prettiest Lambo ever made, which must make it about the best-looking Italian supercar ever. They have made the new 6.5L V12 sound just as good as the outgoing 6.2L V12, whose original design went back to the 1960s. Feast your ears on what may be the last-ever brand new V12 engine. It's so howlingly beautiful a noise, you just can't get enough.

The Noble M600. How far Noble has come! They were first shown on Top Gear back in the second-ever episode, in 2002. Back then their only car was the 2.5L Ford-engined M12, which looked like an ugly kit car and sounded like a mouth-breather. I respected the budget performance, but never wanted one. Noble now makes a premium supercar with 650 hp from a 4.4L twin-turbo V8 built by Yamaha (sorry, it's NOT British-made or British-designed). It still has a bit too much stupid turbo whistle, but it finally looks pretty darn hot, and Noble is still the coolest car company name around nowadays.

I've always respected them for sticking around, and I really would hate to see this company go under like nearly all British car companies. When the clutch shattered on Hammond's M600, I felt genuinely sorry for the poor little car, and felt REALLY sorry for Noble. How can a company that sells 50 cars a year keep its doors open when 300 million people just watched its only model break down on a TV show, even while the host is TRYING to like the car? According to Hammond, Noble got his phone call about the breakdown and immediately sent a driver with another M600 all the way from the factory in Leicester. I about wanted to cry when I heard that, because even though it was an extraordinary gesture of service, and something only a small, honest company would care to do, it's not enough to erase the vision of the spectacular, unscripted failure of a very important component on one of their cars.

The McLaren has been my personal favorite for a while, because it's simply the commonsense approach to supercars. McLaren is an actual racing firm, unlike the other two, and it shows in the top speed, which is "only" 205 mph. They have sacrificed top speed for agility, which is how real racing works, unless you've gone back in time to Group B or Can-Am. McLaren couldn't find an engine that matched their specs, so they built their own: a 3.8L V8 with two turbos. A V8 in a British supercar? Yes please. Consequently, the McLaren is the fastest of the three around the Top Gear track, and it is amazingly said to have the most comfortable ride of any supercar. It is also significantly cheaper than the others. That's a neat 1-2-3 punch. It has a fairly anonymous front end, but it has almost the same beautifully sculpted sides and rear as the Aventador; in profile, they look pretty darn similar. I do believe that the McLaren will age well and will be significantly more reliable than the other two cars.

Oh, and they do some fun things in the episode. The Lamborghini's V12 will give you eargasms. There's some racing around an Italian Grand Prix track. Clarkson surprises everyone by lapping Imola within 3 seconds of the Stig's benchmark in a Ferrari 458, posting a 1:59. The McLaren may have had the worst time of the three, a 2:06 in the hands of James May, but it was still a second faster than the Ferrari 458 around the TG test track, so I would chalk this up to May simply not being as fast a driver as Clarkson.

The scene in the garage really spoke to me. Talking to your car might be nutty if it's taken too seriously, as though the car were actually a person. But when it's done properly, as a man confiding his thoughts and aspirations to a machine, it's beautiful. Think of it as opening your heart to all the affection and love that the car's makers put into its design and construction.

This depends on the car. Some cars aren't made with love and they can never be loving, honest machines. They were simply made to sell and earn money, and the engineers and assembly personnel are not especially proud of them. Examples of this would include Kias and many old GM cars. If it's a complacently designed and poorly made car, it's like opening your heart to a serial abuser. But when Hammond gave the Noble a pep talk and said "I'd love us to win," he was speaking with the sincerity and emotion that come from an honest experience with a machine. If it's an honest car, it's like opening your heart to your true love. British cars, especially TVRs and Nobles, are almost invariably created with this kind of love. They didn't make much money for the company, but the company loved making them.

Not all was sweetness and light, however. I seem to remember an interview with a rapper which was extremely embarrassing. Clarkson was meaninglessly pandering to the youth by making will.i.am out to be a genius. I'm still washing my eyes out with bleach to clear the image of that gullwing Lancia Delta (from iamauto, Mr. i.am's car company). Equally embarrassing is Intel proclaiming "it is very important to attract the youth culture" as a rationale for hiring will.i.am as a "creative consultant." Beg your pardon, but I just don't see the link. I understand that Mr. i.am has many diverse interests and talents, but to proclaim him "the most inspiring person we've ever had on that sofa" seems like hyperbole. And a final note: like Alice Cooper (who is much cooler) before him, he required an automatic transmission. Why do all American ambassadors of culture lack the ability to shift gears manually?

Saturday, January 28, 2012

I have a very cute dog and a very poor camera

Well, strictly speaking the camera is more of a phone than a camera... and the dog is better than cute. He's awesome!



There was an elderly woman at the Walmart parking lot giving away puppies on or about February 20th, 2010. She had one more left when my then-girlfriend (now ex-girlfriend) and I went there to get groceries. I never, ever stop to talk to these kinds of people. But Aimee and I had just had a big argument and she was trying to break the ice and asked if I wanted to see the puppies. And I presumed that was because she was actually interested, so I said yes. I found out later that day it was because she thought I was interested, not the other way around.

So we parked close by and got out and looked. There was a little black dog with a wide face that made him look very Pit-ish. He was absolutely adorable but very small. She said he was the "runt" and he was the last one, and that if I took him home, he'd be a "friend for life". I suppose I fell in love with that dog. I remember saying "We can't leave him" and then we took him home. But I somehow felt like I was still being the rational one and Aimee was the one who was more interested in the dogs than me. Stupid me: she already had a dog, and I was the one whose cat had just run away. Poor Parker. I still maintain that her damn cat, Fraidy, killed him. Parker was the best cat I've ever seen, but he's still just a cat, which means NO loyalty, NO predictability, and NO genuine affection. And then this old lady entered the picture, about to set me on the path to dog-loving.

I have in the past felt like SHAMING this woman for giving away dogs without asking for even the slightest fee to deter animal cruelty to poor, defenseless puppies. But maybe she was wiser than I gave her credit for, and she was able to tell that I would take care of the dog and love him. If that is what she detected, she was right. And if that is the case (although I find it doubtful), I must give her tremendous credit for judging the character of the people she was giving the dogs to. If it is not the case, I must still praise her for enabling me to have this great dog in my life.

Raising him as a tiny puppy was hard. He was less than a month old with not enough teeth to eat solid food. As a puppy, he obviously wasn't housetrained yet, and peed and pooped almost every hour. He whined like a banshee all night long when we put him to bed in the bathroom; he was just too small to sleep with us, and we couldn't risk him making a mess unless he was in an area that we puppy-padded. I am very disappointed that I have no pictures of him from this age. I have seen cute puppies, but I've never seen one that looked so helpless and adorable as he did. God, it made my heart melt.

Bud loyally waits in his kennel when I am away at work, and he plays joyfully when I return. I do my best to give him as much exercise as possible, but with school and two jobs I am so tired it rarely is enough. Still he makes do with everything I give him. He has never been the brightest dog, but he obediently reacts to the most important commands: down, leave it, go outside, go to your room, and (in extreme cases, he reacts instantly) DROP IT. And he knows whether I'm about to give him a "treat" or a "bone" depending on which word I use. He still hasn't figured out how to "lay down" or "roll over" or anything more advanced, but frankly I'm not using his tricks as a way to pull chicks (I've got a great one already!), so I couldn't care less. He destroys toys VERY quickly, so there are few survivors: only the sturdiest rope bones and rubber Kong toys last more than a few weeks. Anything with stuffing and/or a squeaker inside, he will find a way to tear apart within 24 hours if left unsupervised. Tennis balls, even if made of diamond, are toast inside of an hour. Guaranteed. If there were a squeaker inside a toy woven from kevlar and asbestos, he would still find a way to gnaw the seams away and get inside. A frisbee, admittedly, will last forever, but only because my poor Buddy is too simple to figure out how to catch it or pick it up when it hits the ground. He tries to put a paw on it, but that keeps him from picking it up with his teeth. Net result: he ignores Frisbees; they're just a waste of time to him.

Buddy provides me with as much affection as a dog could possibly deliver. He is playful but never loses his discipline. He will follow my demeanor and console me if I'm sad. He will detect my energy and respond with his own outbursts. He knows when I'm just not feeling right, and those are the times when he unceasingly lies across my feet or licks my face. He was bad with accidents as a puppy (probably because I was very dense about how long his bladder could last), but now he is so obvious when he needs to go outside that even a dullard like me can figure it out.

In return all he asks for is a bowl of Beneful twice a day, and water three times a day (or two if I'm going to be gone all day, sorry Buddy). He first drank formula from a bottle, then a bowl, and then he moved on to Moist n Meaty before he settled into dry dog food. He had some choices at first, but I felt better giving him the name-brand expensive stuff, and among all the dog foods I tried, the one he consistently ate with the most enthusiasm was Beneful. So that's what he gets to eat all the time, varying the flavors between each big bag. I have never been compelled to skimp even the slightest bit on dog food; he deserves every dollar I spend and then some. He might go through one $30 bag of Beneful every six weeks, while his fat-ass master might spend $200 on food in a month. He gets exercise every night, but most of that consists of him running around in huge figure-8s in the yard while I lazily cheer him on. He tuckers himself out so that I don't have to do it myself. What an amazing dog.

He also has a beautiful, smooth coat (it has been complimented by everyone who has ever seen him) and very little "dog odor". In fact, he is generally so clean that I rarely have any objections to him sleeping in my bed with me, underneath my covers. He also somehow stopped growing at exactly the right size: 50 lb. Any heavier and his strong muscles would make him genuinely intimidating, but I don't want him smaller either; he's a man's dog, not a toy poodle. With his strength I believe he could certainly hold his own against other dogs, but I have conditioned him so strongly against violence that he simply will not fight with them. He gets scared and confused when they try to tackle him and claw at him. I guess this makes him a sissy (he has also been neutered, which probably helped), but the function of a dog is not to be a good fighter, especially when he's a Pit Bull mix and forced euthanasia can arrive within 20 minutes of a nosy neighbor's phone call to the police. He is built for survival in the human world, not the dog world. That's why he has no testicles and isn't allowed to play with dogs his age (older, slow dogs are fine).

He is loyal to his master(s), and when the lights go out, any whiff of an intruder awakens his doggy senses and he reacts appropriately with a growl or a bark. The alertness of my dog has enabled me to sleep soundly, confident that anyone who tried to take life or property from me would be found out before he ever got inside.

What more can I say? I love my dog.

Monday, January 23, 2012

Running a Linux Machine Part 1

Running a Linux Machine, 1-21-2012

                My brother swears by this OS (I am not sure which distribution, but I believe it to be Debian) for his personal computers, and I've had to use Linux with a GUI for CS 2308, on the machines in the Derrick CS lab. But I've never installed or managed an open-source OS, and I know very little about what makes Linux unique.
                Unix grew out of research at Bell Labs in the late 1960s, picking up after Bell Labs withdrew from the MULTICS project. When Unix appeared in 1969, written in assembly language, it was quickly adopted on a few machines but received almost no publicity. When it was rewritten in C in 1972, it started to gain attention, and after that it became widely used in servers and workstations. Throughout the 1970s, Unix spread because it was simply the most convenient, powerful, and complete OS available; it was running on over 100,000 machines by 1984. Still, though, it was proprietary software.
                Linux is a free and open-source Unix-like operating system developed in the early 1990s; its breakthrough was that it targeted ordinary IBM PC-compatible machines.
                The main design aspect uniting all versions of Linux is a common kernel. The kernel is the core of the OS; it mediates between hardware-level processes and the applications the user wishes to run. Linux has a monolithic kernel, meaning that all OS services (scheduling, memory management, drivers, file systems) run together in kernel space with full privileges. This improves performance, because kernel components can call each other and access hardware directly, but it also ties the system's stability to every one of those components: a buggy driver can crash the whole system. Loadable modules mitigate this by allowing code to be added to and removed from the running kernel as needed, which keeps the resident kernel smaller and easier to maintain while extending its performance and capabilities. Although the monolithic kernel still has disadvantages, some developers argue it remains necessary for the performance of a system.
Fig. 1. Distinction between monolithic, microkernel, and hybrid kernel OSs.

                The quite rare opposite of a monolithic kernel is a microkernel, which keeps only a minimum of code in the kernel and runs most OS services as separate processes in user space. A compromise is the hybrid kernel, which is used by most PC operating systems, including modern Windows and Mac OS (Windows prior to 1995 was not really an operating system, just a DOS shell). These hybrids are basically microkernels with some non-essential code moved from user space back into kernel space, trading a little of the microkernel's stability for better performance. It's obviously a good fit for home computing, or else it wouldn't be so widely used there.
                The strength of Linux is that it is low-cost (technically free) and easily customizable, making it a good fit for many applications. A majority of web servers (and virtually all supercomputers) worldwide run Linux. Desktop computers lagged far behind in adopting Linux, principally because of its complexity for casual computer users, although this has begun to change in the past few years. There are literally hundreds of Linux distributions; Fedora, Debian, and Ubuntu are some of the most popular families.
                Ubuntu, Linux Mint, and PCLinuxOS are all good choices for new Linux users who don’t want to learn all the complexities. If the only concern is stability and reliability, CentOS is a good choice. Fedora and Debian are closest to “middle of the road” for ease of use, functionality, and stability.
                Installation of a Linux distribution is similar to that of any proprietary OS, except that many of them are available as free web downloads, in addition to very low cost CDs. I am really not an expert on Linux by any means, but I think I would lean towards Debian installation because of how well-tested it is, resulting in stability and security. It also supports more architectures than any other Linux distro, and has a huge variety of software packages available. There are several graphical and CLI front-ends available.
                Actually, even though I’ve never done a Linux installation and they have a reputation as “geeky” among the uninitiated, it looks like there’s an incredibly detailed set of instructions and guidelines for installing and setting up the system:
                http://www.debian.org/releases/stable/amd64/
               
Other consulted sources:
http://brinch-hansen.net/papers/2001b.pdf      A history of operating systems
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf      Monolithic kernels vs. microkernels
http://distrowatch.com/      Information on different Linux distributions


Saturday, January 21, 2012

The best car in the world

Earlier this year, the internet was abuzz with postings about the new flagship model from Shelby Supercars (no relation to Carroll Shelby of Mustang fame). After some guessing about the name (Ultimate Aero II?), the company settled on a ridiculous name: Tuatara. It sounds better after you hear it spoken correctly (Twah-tah-rah, or Too-ah-tar-ah), and at least it's not as stupid as the Pagani Huayra or McLaren MP4-12C. I tried to dig up as much dirt as I could on the Tuatara, but there was nothing but renderings and a few company-sponsored videos with very little actual airtime of the completed vehicle.

Please, for God's sake, avoid the videos. Or mute them. You'll just hear Mr. Shelby, the company president, who looks like an ex-wrestler with a shiny bald head, tell you about how they chose the name Tuatara because it's a lizard that has the "fastest evolving DNA of any creature." It didn't make sense to me when I first heard it, and in retrospect, talking for 5 minutes about a name decision is kind of embarrassing when it's the weakest point on the whole car.

SSC Tuatara, the most beautiful car made today.

The last I heard, SSC visited the Dubai motor show with the Tuatara, and they so wowed the oil sheiks that Shelby came home with orders for 10 cars. Given that America's rich are selling off their private jets and are anxious to deny their placement in the 1%, I can almost guarantee that they haven't sold 10 of these domestically. Rich people fear the wrath of jealous, envious neighbors. Our grandfathers may have approvingly ogled Duesenbergs from the windows of their Fords, but today the average Joe would rather legislate away the right to buy supercars. A pity.



Perhaps this sheer rarity explains why we have yet to see anyone climb into it, and we don't even know what the interior looks like apart from the CG rendering. They claim the top speed is 275 mph, which would make it the fastest car in the world, beating out the Bugatti Veyron Super Sport. It also does 0-60 mph in 2.5 seconds with a 7-speed gearbox, both of which match the Veyron. Except that the Bugatti Veyron looks like a chubby hairless rat (it's so ugly I will spare your eyes and not post a picture), and the SSC looks like a fighter jet. It's even more over-the-top than the Zonda (which I do love), with nearly twice the power, and with a completely round glass canopy.


Ferrari P4/5 Competizione; front end terrible, rest good.
Pagani Zonda F, sex on wheels.

The SSC reminds me of the Ferrari P4/5, which I didn't really like because I thought the front end had an ugly, straight-edged gaping mouth. Ferrari has also not really captured my respect for some time now. The last time I really looked up to them, it was when I was 8 because of a Hot Wheels Ferrari F40. Then I got a Hot Wheels Lamborghini Diablo and never looked back.


My family has never been big on American cars. I think the only reason they bought Ford products in the old days is that Honda didn't exist yet. When the 1980s rolled around, we moved into Accords and Civics, and never looked back. Still, at least my dad had a soft spot for cheap Fords, and he owned both a Festiva and an Escort wagon. I think this tainted my view of American cars as poorly-made, cheap boxes on wheels. I grew up never conditioned to the idea of an American company producing a world-beating car. It wasn't until later that I acquired a taste for classic American cars. Perhaps in light of my family convictions, my prejudice is understandable. America just made ordinary, cheap cars for ordinary people, and imports dominated the market for the wealthy and tasteful.

Saleen S7: somehow it's just too ugly.
I am ashamed that I ever felt this way. How can any country remain great if it simply gives up in a field in which it was once dominant? Didn't we once make the most beautiful machines in the world? Didn't a 1965 Chevy Caprice once carry as much prestige and style as a person needed? How did we fall from the Cadillac Eldorado Brougham and Corvette Sting Ray to the point where the most expensive American cars are flaccid Town Cars and souped-up Mustangs? Yes, we still have the Corvette, and it has almost always been good, but it's not a supercar. It's at a very competitive price point, and it's a blast to look at and to drive (I actually have driven a C6 Vette! Had to keep it slow, but never mind that). But it's common enough that it won't be seriously considered by the fat cats of the world. There's the Saleen S7, which is in supercar performance territory, but it's just dull to look at. I tried making it a wallpaper and I felt like a brainless jingoist for doing it, because clearly an Italian supercar was prettier and I was just deluding myself otherwise.

SSC Ultimate Aero TT: The original record breaker. Way too ugly.
And when SSC first made cars, the Ultimate Aero definitely had a cool name, but to look at, it was even worse than the S7. It looked like a Chinese knock-off of the Lamborghini Diablo, with a fussy rear end and no unique touches whatsoever. It was so bland among supercars that I wasn't even thumping my chest when America recaptured the top speed record. The SSC Ultimate Aero TT in 2007 went up to 257 mph, beating the original Bugatti Veyron's 253 mph. If my memory serves, the Duesenberg SJ could top 130 mph, a figure no other production car matched until the Mercedes-Benz 300SL in 1954, and no American car could claim the top spot again until 2007. That's a gap of 53 years. It should have made front-page news: AMERICA BUILDS FASTEST CAR. But that said, I didn't want to own one. I will never have a poster of this car on my wall, because it just isn't sexy at all.


If the Tuatara doesn't make American cars sexy again to the world, nothing will. I see it and I'm dumbfounded by its beauty. The first moment I laid eyes on a digital picture of its gleaming, pristine white bodywork, I knew that I was hooked. I'm still hooked. I don't care if it breaks down every week, if it has squeaks and rattles that can cause deafness, and if the clutch is so heavy that you need a bionic leg to use it. This car makes an Aston Martin look trite and a Zonda look unimpressive; it's the first mechanical thing I've ever seen which is more beautiful than Audrey Hepburn. And it has rear-wheel drive with a V8. And it's American-made by a team of just 16 engineers. I haven't yet seen proof that this car is capable of 275 mph, but I'm betting that under the right conditions it can. Remember that SSC was the same size back in 2007 and they gave a bloody nose to VW, one of the world's largest car manufacturers, when they overtook the Veyron in top speed. If they could take a giant leap forward in exterior design, why couldn't they do the same for top speed?

If you buy a Tuatara you'll be coughing up $1.3 million, which sounds like a lot, but the Veyron SS is about $2 million. So it's still a David-and-Goliath story. I'm not saying it's better because it's cheaper, because value comparisons are ridiculous at this price range. But the Bugatti Veyron SS is a masterpiece, and it's still a bargain at $2 million. Now, if that's what a masterpiece is, then I don't have an English word to describe the brilliance of a tiny company in Washington state making a Veyron-beater for about two-thirds of the price. Making Volkswagen look stupid is worth the price of admission alone.

SSC will never get their car featured on Top Gear, Europeans will never give this car a fair shake, and millions of frustrated online pundits will call the car "vaporware" because of the lack of videos of the real deal. I'm sure that, despite the all-glass canopy, visibility isn't great. I'm sure that the 7-speed manual with 1350 hp is a bit of a handful, especially with no ABS and no four-wheel drive. But if you bought one of these, you'd be buying a piece of priceless art, as well as investing in America's future. We need to be able to build things like these, even if it's a totally impractical concept. To see a design this beautiful and a car so well-engineered is just inspiring to everyone who has ever had an interest in machines. I pray that someday I can just catch a glimpse of one of these beautiful cars. Just looking at the pictures makes me feel like we still are as great as we were in 1969, when we landed on the Moon, the dollar was invincible, Hollywood still made good movies, and Corvettes looked like sharks.

Multi-millionaires of the United States (or abroad), if you want to be perceived as an incredibly tasteful, interesting, forward-thinking, and exciting person, please buy American.

Friday, January 20, 2012

FPGA Capabilities and Concepts

FPGA Capabilities and Concept 1/20/2012

                I believe Dr. Aslan also wants to use his server’s NetFPGA capability for experimentation. In DSP, our prototyping board will be a Xilinx FPGA board.  In this report, I begin research about NetFPGA (beginning with an introduction to FPGA, although I am not highly familiar with it yet) and some about networking in general.
A general-purpose microprocessor has fixed hardware with an instruction set that is immutable and must be obeyed. By comparison, an FPGA (“field-programmable gate array”) is a type of integrated circuit whose dominant component is programmable logic and interconnect that the user can configure into the gates and circuits he needs. Although some fixed logic components may be present, the user defines most of the behavior using a hardware description language such as VHDL or Verilog. The fact that the behavior of the hardware itself can be reprogrammed is the compelling advantage of this architecture.
However, FPGAs are a recent phenomenon compared to CPUs or ASICs. The very first patents related to programmable logic gates were granted in 1985. By the early 90s the market was still very small, but by the end of the decade it was worth billions of dollars. Present-day technology allows for millions of programmable gates.
The NetFPGA architecture is a way of using an FPGA to build a networking device such as a router. Using NetFPGA to create an IP router is a common exercise at various academic institutions; approximately 2200 of these boards have been deployed. My starting point for this report was the set of tutorial videos published at NetFPGA.org, which I summarize below, along with some background on networking in general.
In 2009, I was employed for a few months in DSL tech support (for a local firm called teleNetwork) and we had one week of training, which I do remember being a bit frenetic. I am certain that I learned some information about the theory behind the internet and networking in general, but most of it has left me.
What is a router and how does it work? Routers are pieces of hardware that send and receive data packets between computer networks. A data packet is just a chunk of formatted data (as distinct from simple bit-by-bit transfer on point-to-point links). The data is sent as octets, with a header followed by a body: the header carries the forwarding information, while the body is the actual data.
Internet Protocol (IP) is the set of design principles for sending data packets across internetworks (collectively referred to as the internet). The Internet Protocol suite is abbreviated TCP/IP (Transmission Control Protocol/Internet Protocol) because it pairs IP with another protocol, TCP, which controls the flow of information sent via IP, requests retransmission of lost data, and puts segments back in order so that throughput is maximized.
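To make the header/body split concrete, here is a minimal Python sketch. The 12-byte header layout is an invented toy format for illustration, not real IP; the point is just that forwarding information travels in a fixed-size header ahead of the payload.

```python
import struct

# Toy header layout (NOT a real protocol): 4-byte source address,
# 4-byte destination address, 2-byte payload length, 2 bytes of padding.
TOY_HEADER = struct.Struct("!4s4sHxx")  # network byte order, 12 octets total

def build_packet(src: bytes, dst: bytes, body: bytes) -> bytes:
    """Prepend a fixed-size header (forwarding info) to the body (actual data)."""
    return TOY_HEADER.pack(src, dst, len(body)) + body

def split_packet(packet: bytes):
    """Recover the header fields and the body from a received packet."""
    src, dst, length = TOY_HEADER.unpack(packet[:TOY_HEADER.size])
    return src, dst, packet[TOY_HEADER.size:TOY_HEADER.size + length]

pkt = build_packet(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", b"hello")
print(split_packet(pkt))  # (b'\n\x00\x00\x01', b'\n\x00\x00\x02', b'hello')
```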
I used to work in DSL tech support and I vaguely remember some of the levels of distribution that are involved. The idea of the internet goes back to the ARPANET of the late 1960s and early 1970s, which was designed to connect existing small networks nationwide and to let the whole network survive even if huge portions of the country were destroyed in a nuclear war. Internet access was privatized in 1992, and by the turn of the millennium most Americans had access to the internet, either at their residence, at work, or in libraries, schools, or universities.
Nowadays the common home internet setup includes a DSL modem and a router, or an integrated unit called a gateway. The signal carrying the requested information is modulated onto the phone line, and the modem demodulates it back into digital data (and does the reverse for outgoing traffic). The purpose of a router is to distribute that connection to a network, which implies multiple machines.
NetFPGA is available in 1G and 10G speeds. The former has a standard PCI form factor and a quad-input gigabit Ethernet NIC.

Fig. 1. NetFPGA board

The makers of NetFPGA claim that because the datapath is entirely implemented in hardware, it is capable of sending back-to-back packets at full gigabit speed, with processing latency of “just a few clock cycles”. It also has enough onboard memory to allow full line rate buffering.
NetFPGA.org has a set of detailed tutorial videos which describe the product and the uses of it:
VIDEO 1: INTRODUCTION
There are three basic uses of the NetFPGA architecture
1.       Running the router kit to achieve hardware acceleration.
An unmodified Linux machine can use the “hardware accelerated Linux router,” or router kit, to achieve this. It runs a single program called RKD (Router Kit Daemon), which monitors the Linux routing table and lets the user modify routes using standard commands. If the routing software uses the NetFPGA interfaces for forwarding, hardware acceleration is provided without any further modification.

2.       Enhancing existing reference designs.
The reference designs provided include 1) a network interface card (NIC); 2) an Ethernet switch; 3) an IPv4 router; 4) the hardware-accelerated Linux router (discussed above); and 5) SCONE (Software Component of the NetFPGA), which uses a protocol called PW-OSPF to handle exceptions to the hardware forwarding path. The reference designs for the NetFPGA 1G board have a pipeline of Verilog modules which can be augmented through the NetFPGA driver. If the user wants, he could build a GUI this way to visualize what the NetFPGA hardware is doing.

3.       Building entirely new systems.
Using Verilog or VHDL to design, simulate, synthesize, and download to the board. Not all projects are an adequate fit for existing reference designs. This is more involved than just adding modules on reference designs.

                In addition to the provided material by the NetFPGA organization, there have been contributed designs by companies and universities, including an openflow switch, packet generator, zFilter router, and a DFA-based regular expression matching engine. There is also a wiki and forums on the website.
                The NetFPGA, as labeled by its manufacturers, has two defining features.

I.                    It is a line-rate platform. It can be used to process back-to-back packets, operate on packet headers, including switching, routing and firewall processing. It can also be used for content processing and intrusion prevention of packet payloads. 
II.                  It is open-source hardware. This is similar to open-source software in that all source code is available and has a BSD license. But it is considered more difficult than a software project because hardware components must meet timing and may have complex interfaces. All levels of design must receive adequate testing to ensure that they have consistently correct and repeatable results.

VIDEO 2: HARDWARE
                The NetFPGA board contains a Xilinx Virtex-II Pro 50 FPGA with 53,000 logic cells. It also contains block RAMs and two embedded PowerPC processors, enabling higher-level languages to be used as well.
                The board also has four onboard RJ-45 gigabit Ethernet ports.
                Onboard memory includes 4.5 MB SRAM (for storing forward table data) and 64 MB DDR2 DRAM (for packet buffering).
                The board has a standard PCI connector to interface with the host PC. There is also SATA connectivity, which might be used for connecting multiple boards.
                All reference designs were tested on 32-bit and 64-bit systems. In order to make one’s own designs, one needs Xilinx ISE and ModelSim. There are complete prebuilt systems (lacking the NetFPGA card) such as the NetFPGA “Cube” (desktop) and rackmount servers, suitable for high-density and high-performance deployments.

VIDEO 3: NETWORKING
                Because the NetFPGA uses Ethernet, it places packets within Ethernet frames. IPv4 headers have many fields, including the version field, the source and destination address fields, the TTL (time to live) field (a counter that prevents data from circulating endlessly), and the header checksum field (which makes sure the header is not corrupted).
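                As a sanity check on those fields, here is a short Python sketch (standard library only) that unpacks the fixed 20-byte IPv4 header. The sample bytes are made up for illustration, with the checksum left at zero:

```python
import struct
import socket

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (options, if any, follow it)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,              # should be 4 for IPv4
        "header_len": (ver_ihl & 0x0F) * 4,   # IHL field is in 32-bit words
        "total_length": total_len,
        "ttl": ttl,                           # time to live: hop counter
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "checksum": hex(checksum),            # covers the header only, not the body
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Made-up header: version 4, IHL 5, TTL 64, protocol TCP, 10.0.0.1 -> 10.0.0.2
sample = bytes.fromhex("4500 0034 abcd 4000 4006 0000 0a00 0001 0a00 0002")
print(parse_ipv4_header(sample))
```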
                A simplified view of the internet can consider just routers and hosts. The routers forward traffic between hosts. Between two hosts, one host creates a packet with the appropriate header defining where the data is to go (IP address) which is sent to the router connected to that host. The router will consult the forwarding table to find the best place to send it next. Each router will pass the packet along in the same way until it has reached its destination.
                IPv4 addresses have 32 bits, which allows for 4 billion unique addresses. The naïve approach is to simply create a forwarding table with billions of entries, and although this is possible with current memory density, it would make routers more expensive and it would make updating the table extremely costly and time-consuming.
                The actual method routers use is grouping. Blocks of IP addresses are assigned to hosts that are “close” to each other (in terms of the steps required to reach them), so a single forwarding entry can cover a whole block, and an address may still match several entries of different sizes. The forwarding table can be improved by sorting the entries from most specific to least specific; a linear search will then always find the most specific matching entry first, giving the best possible match.
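                A minimal Python sketch of that idea, using made-up prefixes and next hops: entries are kept sorted from most specific (longest prefix) to least specific, so a linear scan returns the longest prefix match first.

```python
import ipaddress

# Hypothetical forwarding table: (prefix, next hop). The /0 default route
# catches anything that no more specific block covers.
ENTRIES = [
    ("192.168.1.0/24", "port 1"),
    ("192.168.0.0/16", "port 2"),
    ("10.0.0.0/8",     "port 3"),
    ("0.0.0.0/0",      "port 0 (default)"),
]

# Sort from most specific to least specific (longest prefix first).
table = sorted(
    ((ipaddress.ip_network(p), hop) for p, hop in ENTRIES),
    key=lambda entry: entry[0].prefixlen,
    reverse=True,
)

def lookup(dst: str) -> str:
    """Linear scan: the first hit is automatically the longest prefix match."""
    addr = ipaddress.ip_address(dst)
    for net, hop in table:
        if addr in net:
            return hop
    raise ValueError("no route")  # unreachable here thanks to the default route

print(lookup("192.168.1.77"))   # port 1 (the most specific /24 wins)
print(lookup("192.168.200.5"))  # port 2
print(lookup("8.8.8.8"))        # port 0 (default)
```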
                Going beyond the forwarding table, there is a switching element to send data to the correct port. Further in line there is a queue which buffers the data to be sent. The routing protocol will then determine more closely the topology of the network, and find the shortest possible route to the destination. The routing protocol will update the forwarding table, and maintain a routing table, which is more detailed than the forwarding table. There is also manual control available by CLI in a “management” block.
                The elements responsible for forwarding traffic (forwarding table, switching component, queue) may be grouped together as the data plane, which handles every packet passing through the router and is implemented in hardware. The other elements can be grouped together as the control plane, which is much slower and more complex than the data plane and is implemented in software. In the NetFPGA, the control plane runs on the host computer and the data plane is implemented on the FPGA; instead of a forwarding table, switching component, and queue, those responsibilities are carried out by an “output port lookup”, an “input arbiter”, and “output queues”, respectively. NetFPGA includes two versions of the control plane: SCONE and the router kit (which reuses the standard Linux routing software rather than implementing its own control logic).
VIDEO 4: REFERENCE ROUTER
                The reference router uses FPGA hardware to achieve the functionality of the data plane. The control plane can be managed by SCONE and a Java GUI. The GUI is not strictly necessary but it makes understanding the routing table much easier.
In this example, the lecturer considers a setup of 5 computers with NetFPGA installed. They are not all connected to each other, but all of them are on the network. If one wants to stream video from one computer to another, the NetFPGA will use the router kit to stream the data along the shortest path. If a link on the shortest path is broken, the video will continue to play for a few seconds from buffered data, and then stop. SCONE will talk to the other computers on the network and recognize that the topology has changed. Each SCONE instance will update its routing to reflect the new topology, find the next shortest path available, and resume the streaming. When the broken link is reconnected, SCONE will update the routing table and resume streaming over the shortest path, this time without interrupting the video.

VIDEO 5: BUFFER SIZING
                The router reference design can be modified to experiment with buffer sizing. Buffers are needed in routers to handle congestion, absorb internal contention, and enable pipelining. Congestion buffers are the largest and most important. Congestion happens when packets arrive faster than the output link can carry them away. The buffer will hold as many as possible, in the order they arrived, and drop those that simply cannot fit. The “TCP sawtooth” refers to the drop in the TCP window size (the number of outstanding, unacknowledged packets) that occurs when a packet is lost. Buffers must be sized large enough to absorb the variations in the traffic arrival rate and ensure a constant departure rate equal to the output link capacity.
                Most commercially available equipment cannot modify buffer sizes as needed to experiment and come up with ideal sizing for the network demands.
                With NetFPGA you can adjust the buffer sizes, capture packet events, and rate-limit a link (the latter two require adding modules) to experiment with buffer sizing. In the demonstration, they add the rate-limiter and event-capture modules, then use the Advanced Router GUI script to activate them and log the packet drops. The generated output is the “waveform” of the TCP window size against time (as packets are received).
VIDEO 6: WHERE TO GET STARTED
                Mentions resources available online as well as a hands-on teaching session available biannually at Stanford or Cambridge, for prospective users to be trained on FPGA and perform a project.
PROJECT VIDEO 1: BUFFER SIZING IN INTERNET ROUTERS
                Review of the purposes of buffers. A common buffer size for a router with a 10G link is 1 million packets. The rule of thumb is that buffer size = RTT * C, where
RTT = average two-way delay between traffic sender and receiver
C = output link bandwidth
                Larger buffers will be good for reducing the number of packets dropped, but oversized buffers are undesirable because of the higher complexity, more queuing delay, cost, and power consumption associated with it.
                More than 90% of internet traffic is TCP, which has a closed-loop congestion control mechanism. TCP controls its transmission rate by modifying the “window size”. If there are more concurrent TCP flows, the required buffer becomes smaller: with enough flows, the buffer can be shrunk by a factor of the square root of the number of flows without hurting throughput. With 10,000 flows, the 1-million-packet requirement drops by a factor of √10,000 = 100, to about 10,000 packets.
                This could be reduced even more on fast backbone networks that are connected to lower-speed networks. The buffer size in this case could be lowered to the order of log(W), where W is the window size, and it does not vary with the number of flows. In this example, 20-50 packets would be enough for 90% throughput, with throughput rising to nearly 100% by the 10,000-packet level, as before.
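                To put rough numbers on those rules, here is a quick Python sketch. The link speed, round-trip time, and average packet size are my own assumed figures, not values from the video, but they reproduce the 1-million-packet and 10,000-packet ballparks quoted above.

```python
from math import sqrt

# Assumed figures: a 10 Gb/s link, a 250 ms average round-trip time,
# and packets of ~300 bytes on average (small packets dominate the counts).
C = 10e9            # output link bandwidth, bits per second
RTT = 0.250         # average round-trip time, seconds
PKT_BITS = 300 * 8  # assumed average packet size in bits

classic = RTT * C                      # rule of thumb: buffer = RTT * C
print(classic / 8 / 1e6, "MB")         # ~312.5 MB of buffer
print(classic / PKT_BITS, "packets")   # ~1.0 million packets

# With N concurrent TCP flows the requirement shrinks by sqrt(N).
for n_flows in (100, 10_000):
    reduced = classic / sqrt(n_flows)
    print(n_flows, "flows ->", round(reduced / PKT_BITS), "packets")
# 10,000 flows -> ~10,000 packets, matching the figure quoted above.
```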
                NetFPGA’s buffers can be changed with extreme precision (1 byte at a time, if desired). The lecturer goes on to demonstrate the event capture software by showing how one can manually tune the buffer size and the number of TCP flows over a network, and have the system automatically create data points for the 99% throughput rate for a given flow number and buffer size. In the case of their example, with 200 flows, just 20 packets of buffer space is required.
PROJECT VIDEO 2: OPENPIPES- HARDWARE SYSTEM DESIGN WITH OPENFLOW
                OpenPipes is a tool that will distribute complex designs among several subsystems, for example FPGAs and CPUs. OpenFlow is a tool that gives the user control over routing traffic through the network. A controller for the switches could be implemented in an FPGA, ASIC, or even CPU.
                OpenPipes can also assist with testing, by running a software implementation and a hardware implementation of the same design simultaneously and feeding both sets of results into a comparison module that verifies they are the same.
                In an example, the lecturer shows how OpenPipes allows the user to modify a running system by changing the flow tables inside the system. OpenPipes has a GUI that can display all the available hosts and switches on the network. From one location (say, Stanford) one could download hardware modules from the local host to the locations in Houston and LA. This software enables hardware on different physical machines to be utilized at the same time.
                In summary, the OpenPipes platform can perform the following functions:
1.       Partition a design across resources
2.       Modify a running system
3.       Utilize a mix of different resources
4.       Assist in the testing and verification process

                

Wind Turbine Research, Day 1

An initial look at braking systems and blade design on wind turbines
1/18/2012

Summary

                It is theoretically possible to power a small office using 3 small wind turbines and a storage battery. The battery should be fairly large to guard against long bouts of windless weather; at this time I think something like a 130 Ah, Group 30 Trojan deep cycle battery should be suitable, for under $200. As for braking, I haven’t yet seen an ideal braking solution, but comparing dynamic and frictional braking techniques, I think a dynamic braking resistor would be helpful. I have seen examples of software used to design ideal aerodynamic surfaces (like HyperSizer for very large turbines), but I have not yet come up with an idea for improving the blade design.

Problem

                A wind generator produces power whenever the wind blows, but demand does not necessarily line up with supply, so a storage battery is desirable to accumulate charge. The SolAir wind generator we are using can provide 12 V or 24 V. We have chosen to use a 12 V to 120 V inverter (producing AC from DC), so we should get a 12 V battery which is capable of being heavily charged and discharged.
                I found that the braking system on a wind turbine is crucial for all but the most trivial applications. Large wind turbines have rotors perhaps 50 m in diameter, and although the rotational velocity might be low (around 20 rpm), the tip speed at the edge of the blade can still be well over 100 mph. A small wind turbine (like the SolAir generator that we are using) has blades about 2 ft (61 cm) long, but its potential rotational velocity is much higher (the manufacturer says that in high winds it could exceed 1000 rpm, although this might be an exaggeration). Furthermore, the blade is made of aluminum and has a very flat edge. When this blade is at speed, it is very capable of cutting off children’s limbs or killing birds that fly into it.
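                A quick tip-speed check, as a sketch (the rotor sizes and rpm figures are the same rough numbers used above, so treat the results as order-of-magnitude only):

```python
from math import pi

def tip_speed_mph(diameter_m: float, rpm: float) -> float:
    """Tip speed = rotor circumference times revolutions per second."""
    metres_per_second = pi * diameter_m * rpm / 60.0
    return metres_per_second * 2.237  # convert m/s to mph

# Large utility turbine: ~50 m rotor at ~20 rpm.
print(round(tip_speed_mph(50, 20)))     # ~117 mph

# Small SolAir-class turbine: ~2 ft blades (~1.2 m rotor) at ~1000 rpm.
print(round(tip_speed_mph(1.2, 1000)))  # ~141 mph
```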
                Although the behavior of birds is hard to predict, it is desirable that even a small wind turbine have braking capability, even if it is mounted high above where children can reach it, if only so that the turbine can be serviced safely.
                Most large wind turbines achieve maximum efficiency at fairly high windspeeds; 33 mph (15 m/s) is common. Most wind power stations also stop generation above roughly 45 mph, so the blades don’t spin fast enough to damage the machinery or, in a battery-charging system like ours, overcharge the battery.
                Because 33 mph is a pretty stiff wind, it is probable that the wind turbine will operate mostly at less than max power output. The blade design should not be a weak point. It should catch as much productive wind as possible.

Theoretical Limits

                 The theoretical maximum efficiency of a wind turbine is 59.3%, as calculated by Albert Betz in 1919. That is, at most 59.3% of the kinetic energy of the wind passing through the rotor disc can be turned into kinetic energy of the spinning blades. Since the electric generator is very efficient (over 90%), the maximum real efficiency is still over 50%, which would be enormous. The finality of the Betz limit (59.3%) casts doubt on the company's claim that this device is capable of converting up to 70% of the wind’s energy to electricity, though perhaps that figure includes the energy provided by the supplemental photovoltaic cells.
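                For a feel of the numbers, here is a small Python sketch of the power available to our rotor. The blade length, air density, generator efficiency, and the 33 mph (15 m/s) windspeed are my own assumptions, not manufacturer data:

```python
from math import pi

RHO = 1.225            # air density, kg/m^3 (sea level)
BETZ_LIMIT = 0.593     # Betz's 1919 result: max fraction of wind KE capturable
GEN_EFFICIENCY = 0.90  # assumed electrical efficiency of the generator

def wind_power_w(radius_m: float, wind_m_s: float) -> float:
    """Kinetic power of the air column passing through the rotor disc."""
    area = pi * radius_m ** 2
    return 0.5 * RHO * area * wind_m_s ** 3

raw = wind_power_w(0.61, 15.0)          # ~2 ft blades, 15 m/s (33 mph) wind
best_case = raw * BETZ_LIMIT * GEN_EFFICIENCY
print(round(raw), round(best_case))     # ~2417 W in the wind, ~1290 W best case
# Real rotors capture well under the Betz limit, so the 800 W rating is
# plausible, while a 70% conversion claim for the turbine alone is not.
```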
                Unfortunately, the wind is constantly changing direction and magnitude, so it’s extremely difficult to find the actual kinetic energy present in a quantity of air. What is more often done is to list the capacity factor, a metric normally reserved for power plants, which describes what percentage of the nameplate power (say, of a 1000 MW power plant) is actually produced over a given period of time. For baseload power plants the percentage is generally above 90%, since the demand for their power is continuous and predictable. For wind power plants the capacity factor is generally much lower, around 25%, owing to the fickleness of the wind. So a wind generator rated at 2 MW could only be expected to produce in the neighborhood of 500 kW in sustained operation.
                If we take 25% as a reasonable benchmark, then three 800 W wind generators produce a net 600 W of continuous output. The demands of our office (one desktop computer, one server, fluorescent lighting, and perhaps other rarely-used testing equipment) will not be static, but will vary throughout the day. Theoretically, if the server and computer were both at max power (400 W apiece, say) they would be consuming more power than the wind generators could deliver. But unless engaged in heavy computing, they would not be using anywhere near that amount of power. For moments of peak demand, the battery would simply discharge slightly until demand once again was reduced. Assuming that the electric generator on the turbine and the inverter that produces AC power from the battery are both very efficient (>90%), the wind turbines would theoretically be able to power the office.
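                The arithmetic behind that budget, as a sketch; the individual office loads are my guesses, not measurements:

```python
TURBINES = 3
RATED_W = 800           # nameplate rating per turbine
CAPACITY_FACTOR = 0.25  # typical for wind, per the discussion above
INVERTER_EFF = 0.90     # assumed DC-to-AC conversion efficiency

average_supply = TURBINES * RATED_W * CAPACITY_FACTOR * INVERTER_EFF
print(average_supply, "W usable on average")   # 540 W after inverter losses

# Guessed office demand: desktop + server idling most of the day, plus lights.
typical_load = 150 + 200 + 80     # W
peak_load = 400 + 400 + 80        # W, both machines at full tilt
print(typical_load, peak_load)
# Typical demand (~430 W) fits under the average supply; short peaks (~880 W)
# draw down the battery and are repaid when demand drops again.
```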
                 
Possible Solutions

                Car batteries are generally “starting” batteries, with very high cranking amps but very low tolerance for discharge. They are a poor choice for storage and are suitable only when continuously kept at >80% charge.
                A better option is “deep cycle” batteries, which are designed to be discharged more completely. They do have fewer cranking amps than a starting battery, and are costlier, but they are a decent choice for energy storage. A popular measure of storage capacity is “reserve capacity”, which states how long the battery can sustain a certain current drain. A more general one is “amp-hours”, which states the product of current and time for which the battery can discharge. Deep cycle batteries with 25-50 Ah are available for less than $100. On balance, I think going for a bigger battery would be better than building up from smaller batteries if the need increases. A 130 Ah Group 30 deep cycle battery from a maker called Trojan is available for less than $200.
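                A rough check of how long that battery could carry the office on a calm day; the depth-of-discharge and load figures are my assumptions:

```python
BATTERY_AH = 130        # Trojan Group 30 deep cycle, nominal capacity
BATTERY_V = 12
USABLE_FRACTION = 0.5   # assume we avoid discharging a deep cycle below ~50%
INVERTER_EFF = 0.90     # assumed DC-to-AC conversion efficiency

usable_wh = BATTERY_AH * BATTERY_V * USABLE_FRACTION * INVERTER_EFF
print(round(usable_wh), "Wh usable")          # ~702 Wh

for load_w in (100, 250, 430):                # light, moderate, full office load
    print(load_w, "W ->", round(usable_wh / load_w, 1), "hours")
# Roughly 7 h at 100 W but under 2 h at a full 430 W office load, which is why
# long windless spells argue for the biggest battery we can justify.
```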
                There exist many methods of causing the blades to stop. They can broadly be defined as frictional braking and dynamic braking.
Physical braking is possible by mounting a disc (rotor) on the shaft so that it spins with the blades; friction surfaces pressed into it by hydraulic cylinders then stop the shaft, which is exactly how a car's brakes work. Low-cost, off-the-shelf small disc assemblies are available from many tool and hardware companies for under $100, but most are designed for trailers and are relatively heavy. The best option here would be mechanical disc brakes designed for very light vehicles such as power chairs and scooters. There are such calipers (with pads already in place) for about $25 online, and the rotors themselves are sold separately for as little as $36.50, meaning a complete braking solution could cost a bit over $60 per turbine. The provided mounting hardware might not be suitable for fitting to a wind turbine, since these parts were meant for wheels, but if needed we could fabricate a mount from bolts and plates available at any local hardware store for little cost. Although I have not yet done any calculations to that effect, I believe that if these little brakes can stop a scooter with a 150-lb occupant, they can stop an all-aluminum turbine.
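                To sanity-check that scooter-brake intuition, here is a rough kinetic-energy comparison. The blade mass, rotor model, and rider speed are pure assumptions on my part:

```python
from math import pi

# Model the rotor as three thin aluminium rods, ~0.61 m long, ~0.7 kg each.
BLADE_MASS = 0.7      # kg, assumed
BLADE_LEN = 0.61      # m
N_BLADES = 3
RPM = 1000            # worst-case overspeed quoted by the manufacturer

inertia = N_BLADES * (BLADE_MASS * BLADE_LEN ** 2) / 3.0   # rod about its end
omega = RPM * 2 * pi / 60.0                                # rad/s
rotor_ke = 0.5 * inertia * omega ** 2
print(round(rotor_ke), "J stored in the spinning rotor")    # ~1400 J

# A 150 lb (68 kg) scooter rider at a modest 15 mph (6.7 m/s):
rider_ke = 0.5 * 68 * 6.7 ** 2
print(round(rider_ke), "J in the scooter-rider case")       # ~1500 J
# Comparable energies, so a scooter-grade disc brake is not an absurd choice.
```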
                If it is found to be cheaper to use drum brakes than discs, these could also be mounted on the shaft in the same way, and achieve braking in the same way using a wheel cylinder. But unlike in passenger cars (where drum brakes are sometimes still used because they are cheaper), it seems that drums are not any cheaper. In my experience (I worked in a brake shop for four months), drum brakes are also significantly more difficult to service.
                I have some worries about using physical brakes, since any additional weight on the shaft could cause imbalance and would increase the rotating inertia, making it harder for light winds to spin the turbine up. Although we would mount the caliper on a stationary bracket, the disc would have to spin with the shaft, and the same issue would crop up for drum brakes as well. Another issue with all mechanical brakes is that they need to be physically engaged by a user, which might not be feasible from a distance. A hydraulic brake could act at a distance at the press of a button or pedal, but a hydraulic caliper would be heavier and more expensive still.
                Dynamic braking is based on electrical principles. Freak gusts can easily spin the blades faster than the generator can tolerate. To slow down the turbine when the battery is full, the power can be redirected to a dynamic braking resistor, which absorbs the electricity being produced by the kinetic energy of the blades and dissipates it as heat. This is a safe way to restrict speed and is widely used in the industry. In order to bring a small wind turbine to a sudden halt, it is possible to simply disconnect the battery and short the terminals of the generator, permitting no voltage between them and bringing the shaft to an abrupt halt. I have read mixed reports on the success of this: some say that it is perfectly safe for small wind turbines, but others say it is always damaging to the generator and should never be attempted. Microlog Technologies produces an “electric brake panel” that does exactly this kind of braking. Dynamic braking has been described as more reliable than frictional braking, which has moving parts that can fail and need to be replaced.

Some helpful links:
http://www.gokartsupply.com/discbr.htm      Disc brake assemblies
http://www.micromediaplus.com/microlog_wind_brake.html      Microlog Technologies: electric brake panel
http://www.windmission.dk/workshop/BonusTurbine.pdf      Very broad article on wind turbine operation and construction
http://www.reo.co.uk/files/dynamic_braking_resistors_02-08_engl.pdf      REO dynamic braking resistors



Getting back on track

I've looked back on all these blog postings and it occurs to me that, for a blog intended to focus on technology, I've really mostly just talked about cars and politics. Shame on me! Those things interest me during my free time, but as this new semester (and my last year of undergrad, hopefully) commences, I really need to stay on topic more than ever. Hopefully my blog can help me do this.

So most of the coming posts for the next few months will concern engineering, computers, electrical theory, and all that fun stuff. They will be tagged "Research"; every article with the label "Research" will simply be the product of my research for Dr. Aslan and my own passion for knowledge. I will post everything here. The projects on our plate are not totally specific yet, as he seems to have just given me a lab and free time (for which I will be employed by Texas State University), but I'm nailing down our two main goals as best I can.

1. Assembly and placement of three small 800W wind generators (with onboard photovoltaics) plus an inverter and power storage system. The inverter and wind generators are already bought and require little assembly. My unique design contribution will be the power storage system and braking system. He has said his ultimate goal is to take our lab off the grid and power it with wind alone. Pretty specific goal.

2. Implementation of a server with NetFPGA chip for performing networking experiments of some kind. Very unspecific goal at this time, but it should be fun when we get it running.

Saturday, January 14, 2012

IBM System/360

IBM PC, early 1980s
Today's youth (speaking with no intent to impugn their intelligence) can look as far back as the 1980s and say that they understand the history of that era. That was the decade of the PC's mass acceptance, which is heralded as the beginning of the computer age as most Americans experienced it.

But this is somewhat shortsighted. Computers were long in use before they became affordable to middle-class households. The first business computers were unveiled in the early 1950s. IBM was actually a latecomer with its 701 in 1952; Remington Rand's UNIVAC division had delivered the UNIVAC I to the US Census Bureau in 1951.  

The IBM 701 could be considered the first "mass-produced" computer, in that each machine was assembled with the intent of precise replication across units of the same type. Still, it could hardly be described as high-volume; Thomas Watson, Jr., then IBM's president, unveiled the computer at a business meeting of which he noted: "as a result of our trip, on which we expected to get orders for five machines, we came home with orders for 18." A total of 19 were installed, which pales in comparison to the 46 orders for the UNIVAC I. In either case, no general-purpose computer was selling in great numbers. Very large businesses could afford a system designed just for them, a process that financed the state of the art but did not trickle down to the medium-sized and smaller businesses which have always held the largest, if most decentralized, share of economic clout in the American economy.

IBM 701 console
The problem facing widespread acceptance of computers in business and government was that one size did not fit all. The needs of American Airlines (for whom IBM developed the groundbreaking "Sabre" reservation system in 1960, the foundations of which are still in use today) were not the same as those of a hypothetical Columbia Steel Pressings, Ltd. CSP, let us assume, is an energetic little company. Their foundry could use a computer to quickly calculate the coal-air mixtures that would produce the hottest flames at the lowest cost, but that was about it. With only 400 employees, they didn't need a computer to handle hours, payroll, or any other mundane tasks that could still be managed with an adding machine and a ledger.

Before System/360, CSP didn't have a truly desirable option. They might do some research and decide that a small-scale computer would cover their limited needs of the moment, but if the company suddenly found great success, they would not be able to offload many new tasks onto the small computer they had bought, which certainly cost more than $100,000. If the demand for computing escalated, they would need to spring for a machine of greater capability, which could cost millions. The initial investment was not only in the machinery but also in the programming, which in that era was far less portable and highly system-specific. That labor would be lost altogether if the original programs were not compatible with the new system, so virtually all companies preferred to avoid costly experiments in the computer field, and some were wary of even entering it in the first place.

Single 360 CPU in use at a Volkswagen office
Massive 360 (Model 91) used by NASA

System/360, unveiled in 1964, was a truly innovative scheme. We think of computers now as mutable, modifiable machines with easy-to-change modules and components, but prior to 1964 this concept had not actually been realized. The 360 made it real. When a small business bought a basic System/360 with a slow CPU and tiny onboard memory, it could always migrate upwards later with more peripherals, more memory modules, or even a new system altogether. If it bought a new system a few years down the road, the old programs remained intact and fully compatible. (In fact, some programs written for the System/360 architecture still run on present-day IBM System z mainframes.) The promise of no duplicated effort and cost was very alluring to many customers. Furthermore, IBM made the System/360 capable of emulating its earlier computers, so that existing customers could buy the new system for its technical capabilities and still execute some of the programs they had previously used.

While IBM's superb execution of the idea is noteworthy, I am more interested in the idea itself.

The S/360 created the concept of a modular machine before experts even perceived that a demand for such a market existed. For that reason, the 360 was a spectacular gamble and was labeled as such in the press. At a time when IBM was one of the world's largest companies, it was betting its public image, fortune, and marketability on the success of a single product. It was so far ahead of its time, and simultaneously so successful, that I cannot think of another machine that so quickly and painlessly pushed technological development forward.

The scale of Thomas Watson, Jr.'s gamble cannot be overstated. The project took three years and over $5 billion to develop. $5 billion in early-1960s dollars corresponds to nearly $38 billion today. In other words, the development of one computer system, by a single private company, cost nearly one-quarter as much as the entire Apollo program that placed the first humans on the Moon, a program with the benefit of enormous public funding and literally hundreds of design contracts. The mind of the average person in the 1960s could no sooner grasp $1 billion than people of today can grasp $1 trillion.


This gamble paid off extremely handsomely. The machine's introduction was met with more than a thousand orders in the first 30 days. Between 1960 and 1968 (with the S/360 arriving at the midpoint of that span), the market for mainframes exploded from $600 million per year to $7 billion per year. That is compound growth of roughly 36% annually! IBM's market share ballooned to over 70% of computer sales globally.
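That 36% figure is just the compound annual growth implied by those two endpoints; a quick Python check confirms it.

# Quick check of the growth claim: $0.6 billion/year in 1960 to $7 billion/year in 1968.
start, end, years = 0.6e9, 7e9, 8
cagr = (end / start) ** (1 / years) - 1
print(f"Compound annual growth: {cagr:.0%}")   # prints 36%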


Seldom has a company been in a more enviable position.


If we want to draw comparisons: General Motors' highest share of the American market stood at almost 60% at various points in the 1970s. The Ford Model T outclassed everything else for almost two decades, but when it was retired in 1927, only about half the cars in the world were Model T's, and Chevrolet outsold Ford that very year. Standard Oil may have achieved 90% control of the US oil market through somewhat extra-legal means in the 1890s, but it was not as dominant in other world markets. IBM's dominance was global, since no regional players in Europe, Japan, the UK, or the USSR had anything like what it offered. Protectionism in the computer industry, apart from the Cold War alliances (which kept some rather backward computer firms in the Soviet bloc profitable), has never been a major handicap for giants like IBM, and later Intel and Microsoft.

True innovation is so hard to come by, and it is usually found in the minds of those who haven't yet achieved great success. For a large, successful corporation to reinvent the business it already dominated, relying solely on the strength of its new product, was absolutely unprecedented. That is what IBM did; it wasn't just chasing good year-on-year profits and a tidy 90-day sales report. What Watson and his company did was significantly advance the state of the art while demonstrating IBM's creativity and engineering competence, strengths that would carry them forward for decades.


My unscientific opinion is that this spirit of creativity and engineering know-how is about all America has left as we careen into the 21st century with a battered economy. First Japan, then Korea, and now China have steadily taken over the manufacturing of technologically advanced parts and machines. However, the ideas, designs, and testing behind the world's most popular software and hardware continue to come from American companies that employ many domestic engineers and programmers. An American chip company might outsource every fabrication step, but unlike with baby toys, socket wrenches, and DIY furniture, the most expensive part of building chips is the design and testing, the NRE (non-recurring engineering) cost. And as far as American tech companies are concerned, most of that money is still spent in American offices, labs, and universities. That's happy news for me, since I hope to be joining the ranks of electrical engineers in a year or so. But I still worry about job security if we can't stay competitive.


Back on topic: Far be it from me to claim that the capabilities of these old dinosaurs are in any way competitive with modern machines. If you take a step back 40 years, you will have missed out on all those years of Moore's Law, and end up with very primitive machines indeed.  Still, they were extremely reliable and they didn't really adhere to the concept of "planned obsolescence." That's why I want an IBM Model M keyboard; they're built like nothing else.

It is worth noting that the System/360, and mainframe computers in general, were being maligned as "archaic" and described as being in steep decline by the 1980s. Some businesses tried to meet their computing needs with many cheaper PCs running in parallel. As computers became more powerful, computing demands did not necessarily keep pace, enabling firms to replace their "big iron" with smaller, cheaper machines. But where stability is the most important factor, mainframes remain essential to business. Sometimes the daunting purchase price was worth it for the excellent service and complete customer care that only IBM provided to its elite customers. It's a far cry from today's machines, which come with perhaps two pages of instructions in 43 languages and a rough English translation.


IBM's reputation in mainframes continued to generate some support for its smaller machines, even as its minicomputers and PCs came to be seen as overpriced and less innovative later in the 1980s. It was still occasionally remarked that "Nobody ever got fired for buying an IBM."


This is cold comfort for IBM, which has been hemorrhaging employees and profits since the early 1990s. Despite very good advertising and a strong public image, IBM is smaller and weaker than it was for most of the 20th century. And little physical evidence of the System/360 remains from IBM's glorious past: few machines survive, since storing and maintaining such large computers is expensive and the precious metals inside carry significant scrap value.

Still, in my mind, weighing the sheer influence, innovation, and success of the S/360, I would place it above any of the other pioneering mainframes, any Apple product, or even the original IBM PC. My opinion is that the IBM System/360 is the greatest computer of the 20th century, which must make it pretty close to the greatest technological achievement of all time.