Note for non-technical readers (not that I expect there to be many). The title of this post includes the word “hacker”. If you think that has anything to do with illegal acts or unethical behavior, you’ve fallen victim to what happens when the mainstream media latches on to a term they don’t understand. The definition of this word is far from negative. Within the geek community, the title “hacker” is the utmost compliment - something like Grand Master in the martial arts, or perhaps whatever title is given to an eminent artist. It describes both someone who is an expert in their field and, more generally, someone who enjoys seeking knowledge simply for the sake of knowledge - figuring out how things work, how to make them, and how to make them better. If you’re looking for a term that describes a criminal, “attacker”, “malicious user”, or “computer criminal” works fine. While I wouldn’t by any stretch consider myself a hacker in the super-genius-wizard sense of the term, I definitely do subscribe to the hacker ethic - the burning need to figure out how things work and make them better.

Thanks to the snow at the end of last week, and a long weekend, I actually got to do some reading that didn’t involve man pages or books strictly about software. I finally finished The Daemon, the Gnu, and the Penguin by Peter H. Salus, a wonderful book (with a great foreword by maddog Hall). I also finally got a chance to start reading The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary by Eric S. Raymond (ESR). I’m only up to page 50 or so, but it’s an equally good book, and I’ve been looking forward to reading it for years.

I’ve always been very interested in history (heck, I have a minor in it), and specifically the history of my other interests. When photography took up most of my time, I read every photo history book I could get my hands on (including many primary sources on now-archaic techniques). In the past few years, I’ve been amassing books on computing history (specifically ARPANET/the Internet and Unix/Linux/Free software) at a near-alarming rate.

Through all of my reading, two main things have struck me: the utterly amazing feats accomplished by previous generations, and how my own generation takes them for granted. I was born in 1987 which, I feel, makes me part of a very small group who were lucky enough to grow up during the real rise of the Internet. I remember playing simple games on my grandmother’s (business) 386DX long before I could read most of the words on the screen. But I also remember my father dialing in to an ISP (I honestly don’t remember which one) on a 9600 baud serial modem, and how unique that was at the time (at least among kids my age). By 13 or so, I had a 10BaseT network in my house, sharing a 56k dial-up connection between two computers. I feel that I’m part of a short historical period of kids who “grew up” with computers, used them in middle school, are perfectly at home with them, but still remember dial-up, the launch of Windows 98, and ordering Linux on CDs because you just couldn’t get it any other way (we were too young to have access to the resources of a college, and had only dial-up at home).

Anyway, on to my point…

As I read about those who came before me (and my generation), those who thought up such amazing ideas as Unix, the Internet, networking, and most of the software and protocols we have today, I realize how big their shoes are, and how difficult it will be for the next generation to fill them. Sure, we have Facebook, RSS feeds, Web 2.0, and smartphones, but will we be able to innovate on the level that our predecessors did? And then it strikes me how much we young aspiring hackers take for granted - how many aspects of today’s technology would have seemed impossible 20 years ago, yet we use them without a second thought.

The last generation of hackers and programmers were raised on software distribution tapes. Their idea of “open” was formed by what they were used to: a Cathedral development model, with periodic releases (production, perhaps beta, perhaps even less) and accompanying source code. In the pre-Internet days, they were bound by physical media - bound to the Cathedral development model, to a small, tight-knit group of sages determining when the world was ready to see the fruits of their labor.

The current generation - those of us just out of college or grad school, or even younger - think of Linux as the quintessential open source project. For those of us who came into computing when Linux was already around (I first ran Linux in 2001 when, at 14, I bought the newly released CD set of SuSE 7.3), Linux sets the bar. It’s what we were raised on (at least in terms of open source). Fixed releases - even with source - seem antiquated, pre-Internet, our fathers’ open source. To us, open means nightly builds, world-readable ticket/bug trackers, anonymous Git or SVN access, and RSS feeds of every commit. It means being able to see every line of code at every moment in time, even if we’ve never e-mailed one of the developers.

Even just a few years ago, the word “open” was used by vendors to mean almost anything: everything from software based on Linux, to software that included source (regardless of the license), to software that merely used documented (if patent-encumbered) protocols or formats. For the next generation - even the generation entering the workforce now - open means much more. It means transparency in development, in code, in documentation, in management.

Many times, I’ve found an “open” software project and searched its web site endlessly for links to Git or SVN or CVS, or for the (internal) bug tracker. Every time, I had to remind myself that the world - even many of the open source projects - is still far behind my expectations. Even Google’s Android Open Source Project only has code merged in periodically from the production (closed) tree, and it maintains a separate bug tracker. That’s far from my expectation, in which at most some parts of the tree would be unavailable on the Internet, and some classes of bugs filtered out from public view.

Nobody - not even Microsoft - can deny that the world is moving more and more to open source. It’s already the de facto standard on the Internet, and it’s moving onto the desktop more every day. As this happens, the expectations of what open means (increasingly, more transparent than just “open”) are rising as well. The software world - both proprietary and open source - will have to keep up. And, hopefully, as the generation raised on the Internet begins to fill the ranks of geeks in the workforce, we’ll see open source used more and more widely.

As a side note, I’d be very interested to see how open source use correlates with demographics. I know that Linux use (on student-owned computers) at most colleges is way above the global average, and the same goes for Firefox.


