Saturday, February 17, 2007
If you've ever bought music on iTunes, Walmart.com, or another legal music-downloading service, the files are protected by Digital Rights Management (DRM). Protected from you, the consumer. For example, you can probably only play your songs in the program you used to buy them. What if you want to move a song to an unsupported MP3 player, or to another of your computers? These are legal activities (provided you do not distribute the results to others, which is a violation of copyright), but the music companies want you to listen to music on their terms. Here's how to break the locks off your tunes.
Beginner's Method
1. Burn an audio CD with the protected audio tracks.
2. Rip that new audio CD to MP3s.
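(For what it's worth, if the ripping half happens on a Linux box, step 2 is a two-liner at the terminal. A rough sketch, assuming cdparanoia and the LAME encoder are installed:

    cdparanoia -B                                                  # rip every track to trackNN.cdda.wav
    for f in track*.wav; do lame -V2 "$f" "${f%.wav}.mp3"; done    # encode each ripped track to MP3

Any GUI ripper will do the same job; this is just the no-frills route.)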
Direct Way with Virtual CD-RW Software
There is a program called "NoteBurner" (http://www.noteburner.com) that performs both of the above steps directly.
The most important step is to set your default CD burner to "NoteBurn CD-RW"; the software does the rest of the work for you automatically. Compared to the method described below, it does everything within a single program. Refer to http://www.noteburner.com/howto.html on how to use it.
Image Burning Method
This method doesn't need a CD-R to burn on and might be a little faster. Another advantage is that you can probably burn more than 80 minutes of music at once (I never tested it, but I think it'll work). Many CD recording programs allow you to burn on a "virtual recorder", creating a CD Image file on your hard disk.
1. In Nero, do this by clicking "Recorder" > "Choose Recorder..." > "Image Recorder" and then creating a new CD as usual.
2. After clicking on "burn", you're asked where you want the file to be saved. Select a drive that has enough free space to save all the contents of the CD.
3. When Nero has finished, you need a virtual drive like CloneCD's "VirtualCloneDrive" or the virtual drive in "Alcohol 120%". You can get a 21-day trial version of VirtualCloneDrive at http://www.slysoft.com/en/download.html. A free alternative is Daemon Tools 3.47 or 4.00, both of which can be downloaded at http://www.daemon-tools.cc. Be careful, however: the latest version of Daemon Tools will install spyware unless you uncheck this "option". Microsoft also provides a free Virtual CD-ROM driver for Windows 2000 and XP at Microsoft.com.
4. A (simpler) alternative to a virtual drive is to use a good unzipping program such as Izarc (a free download from http://www.izarc.org), which will "unzip" the "ISO" or image file into regular audio files.
5. Right-click on your virtual drive and select "open image file..." or something similar - depending on which software you use. Then open the image file you created.
6. After loading your image file, rip the CD in the virtual drive as you would do with a normal CD.
Advanced Method Using Audacity for All Protected Audio
1. Open your recording program. It should be one that can save as an MP3. If you don't have a recording program you can download Audacity, which is cool and free, but if you already have another good recording program you can use that instead. (If you download Audacity, don't forget to grab the LAME encoder.)
2. Switch your sound-recording mode. Go to your system tray (in the lower-right corner of your screen, next to the clock) and double-click on Volume Control. Pull down the Options menu and click Properties. In the "Adjust volume for" box, press Recording, check all the boxes, and click OK. Your computer is probably set to record from the microphone; check the box under "Stereo Mix". You should only need to do this once.
3. Set up your recorder. Switch back to your music-recording program and create a new file. Make sure it's in the format you want; Audacity defaults to Mono mode, so if you're using that you'll need to go to Edit -> Preferences and change the Channels drop-down box to "2 (Stereo)".
4. Do it. Once your recorder is ready, press Record. Then switch to your audio source (whether it be iTunes, Windows Media Player, or another program) and press Play. Listen to the rapturous sound of your music being freed from DRM. When the song ends, press Stop, then switch back to your recording program and press Stop there.
5. Clean up. If you're going to be using a microphone with your computer, go back to Recording Control and switch the recording mode back to Microphone. Delete any unwanted sound or silence on either end of the waveform. Amplify if necessary. Save the project (in Audacity you'll want File -> Export as MP3) and close. You're done!
Very Advanced Digital-Only Lossless Method
1. Purchase and install Virtual Audio Cable (the demo adds "trial" clips to your sounds, so you'll need to purchase).
2. Set the playback device in your player software to the Virtual Audio Cable driver's input, and the recording device in your recording software to the Virtual Audio Cable driver's output.
3. Record using the Advanced Method above. The audio you play back and record through the Virtual Audio Cable will be a perfect digital signal, since it will never be converted to and from analog on your sound card.
4. If you have a Mac, you can use Audiohijack (it's a fully functional demo, but before purchase noise is overlaid on all hijackings longer than 10 minutes) to record any audio going through your computer. You would then follow the Advanced Method above.
Method Using Hymn for Songs Bought on iTunes
1. Use Hymn, an open source application for converting protected iTunes songs to unprotected MP3 files under fair use. Download and run it according to the directions provided on the site.
Tips
* If you don't need MP3 specifically (that is, your player will accept other formats), consider ripping to OGG instead, as it gives better sound at the same filesize and is completely free of any patents. Most rippers, as well as the Audacity method above, can handle this, and many players work with it nowadays too.
* This technique works for ripping music from any source. Music and dialogue from DVDs, streaming radio, game sound effects--absolutely anything your computer can play, you can record. If you've got a favorite song from one of your DVDs, try turning its audio into an MP3 and dropping it in your playlist!
* This technique can only be used to transcode songs in real-time. The alternative is to simply burn all your protected songs to a CD and then rip them back onto the computer in the format of your choice. That only works if you have extra CD-R's, though. Of course if you use a CD-RW you can keep it specifically to convert protected audio and rip to MP3.
* You'll need to make sure that your computer is silent during the transcoding process except for the music playing. If an IM or email notification pops up, for example, and makes a noise, that will go into the recording. If you're good, you can go back in afterwards and clean that sort of thing out, but it's simpler just to turn off all your noisemakers before you start transcoding.
* Obviously you need to be able to play the file for this to work. If someone sends you a DRM-protected file that you can't open, this process won't help you. You can send the link for this page to your friend, though, and have him or her de-DRM it for you!
* If you are using iTunes version 6 or later, Hymn will not be able to remove the DRM on purchased songs. The development team is currently trying to find a way around the DRM, but Hymn will only work with iTunes versions 5 or earlier. In addition, you cannot switch to an earlier version of iTunes, because once you authorize your account with iTunes 6, you can't use anything but iTunes 6.
Warnings
* Circumventing DRM may be illegal in and of itself within the United States -- regardless of ownership of the IP or intent after disabling the DRM method. Read up on the DMCA and then contact your congressman.
* Please don't use this technique for piracy. Transcoding a song for your own collection is fine. Making your entire collection available for the whole Internet to download is illegal.
Monday, February 12, 2007
(Linux kernel developers offer free support to struggling hardware manufacturers)
Customers are getting annoyed. They spent good money on the latest and greatest PC peripherals, only to find out that the hardware is only partially supported on their operating system of choice. Without the kernel drivers necessary to power them, some of the best features of the new toys are going unused.
Oh, and just to be clear: The OS we're talking about is Microsoft Windows.
Hardware vendors seem to be having a tough time getting up to speed with Windows Vista, the latest iteration of Microsoft's client OS. Drivers have yet to emerge for many products that have worked for years under XP, and those drivers that do exist are buggy or missing features.
Nvidia is just one example. For months, it has been selling high-end graphics cards with a label on the box that reads, "Windows Vista Ready." And yet, although a rudimentary graphics driver ships with the Vista install disc, many of the advanced features supplied by Nvidia's ForceWare software have yet to be implemented for the new OS. The downloadable drivers Nvidia makes available on its Web site add some functionality but are still beta software.
The situation is frustrating enough for some customers that they're ready to take action. A Web site suggesting a class action lawsuit against Nvidia has over 1,300 registered users as I write this, and its forums are filling up with tales of woe from customers who aren't getting the capabilities they were promised when they bought their video cards. A sister site is collecting user accounts of bugs in Nvidia's drivers.
Given how many other companies are similarly under-delivering on hardware drivers for Vista, it's enough to make you wonder why more vendors don't do more to support Linux. If writing drivers for Vista is really this much of a chore, getting open source drivers for Linux will seem trivial by comparison.
In January, the Linux kernel developers offered hardware manufacturers a straightforward proposition: Free driver development. All a vendor has to do is supply specifications to its products, and the community will do the work.
Of course, this is what has been going on in the Linux world all along, with or without the support of the vendors. Under this new program, however, the kernel maintainers are explicitly reaching out to manufacturers to encourage them to use the community as a resource.
The benefits for manufacturers are compelling. Not only do they not need to spend a dime on actual driver development, but any drivers produced will eventually be distributed with the stock Linux kernel and supported by the community. That includes the so-called enterprise Linux vendors, such as Novell and Red Hat.
What's more, a little hardware support under Linux goes a long way. For example, anyone who's impressed with Vista's "Aero Glass" user interface should check out the amazing eye candy that's possible with Beryl, a new UI layer under development for Linux. And Beryl's hardware requirements don't even approach what Vista demands. Why wouldn't vendors want to support an OS that gives users the most bang for their hardware buck?
Unfortunately, still far too few vendors choose to make their hardware specs available to open source developers. Instead of relying on the help and support of the Linux community, they offer up closed, binary-only drivers, developed in-house. Often, these drivers are only of beta quality or don't offer the full functionality that's available under Windows XP.
In other words, Vista users: The Linux community feels your pain. Maybe you'd care to check out the progress we've been making on this side of the fence?
Thursday, February 01, 2007
Microsoft is playing down the possibility that the speech recognition system in Windows Vista could be hijacked to delete files or perform other unauthorised actions.
Vista contains improved speech recognition technology, a factor that prompted security researchers to see whether MP3 files on hacker websites, or audio tracks distributed on P2P networks, could be used to issue spoken commands that take control of PCs running Vista.
Microsoft said the exploit is technically possible but unlikely to be much of a threat in practice. The attack scenario relies on the speech recognition feature being active (with a user's microphone and speakers switched on to receive commands) and on the user being away from his desk, so that the mischief takes place without anyone intervening. Many PCs are left on all the time, though, so hitting unattended PCs on, for example, the trading floor of a bank simply by targeting them at night might be possible.
A number of security researchers and Vista geeks have already tested the approach and were able to delete files and visit, albeit with considerable difficulty, arbitrary websites. But Microsoft says a number of additional factors make attacks based on the approach implausible, if not impossible.
"It is not possible through the use of voice commands to get the system to perform privileged functions such as creating a user without being prompted by UAC for Administrator credentials. The UAC prompt cannot be manipulated by voice commands by default. There are also additional barriers that would make an attack difficult including speaker and microphone placement, microphone feedback, and the clarity of the dictation," Adrian, a Microsoft security researcher wrote on Redmond's security response blog.
"While we are taking the reports seriously and investigating them accordingly I am confident in saying that there is little if any need to worry about the effects of this issue on your new Windows Vista installation," he added.
The SANS Institute's Internet Storm Centre (ISC) disputes Microsoft's assessment of the potential danger posed by the feature. "Downloading and executing a local privilege escalation is still eminently possible, you just need a suitable 0-day local privilege escalation for Vista. Indeed, any way to download and run arbitrary code as a valid user is never good news, this one just happens to be from the 'neat trick' pile," ISC duty staffer Arrigo Triulzi writes.
By John Leyden
Wednesday, January 24, 2007
Linspire announced that it plans to expand its CNR ("Click 'N Run") digital download and software management service to support multiple desktop Linux distributions beyond Linspire and Freespire, initially adding Debian, Fedora, OpenSUSE, and Ubuntu, using both .deb and .rpm packages. And, the standard CNR service will remain free.
CNR was developed by Linspire in 2002 to allow desktop Linux users to find, install, uninstall, manage, and update thousands of software programs on their Linspire-based Linux computers.
Previously available only for Linspire and Freespire desktop Linux users, the CNR Service will begin providing users of other desktop Linux distributions a free and easy way to access more than 20,000 desktop Linux products, packages and libraries, a Linspire spokesperson said.
Support for different Linux distributions will begin in the second quarter of 2007 via a new website, CNR.com. Debian, Fedora, OpenSUSE, and Ubuntu will be the first supported, with others planned to follow.
Even as the Linux desktop has made strong advances in usability and capabilities, the difficulties of finding, installing, and updating software -- with each distribution requiring its own installation process -- have remained one of the most commonly cited complaints among desktop Linux users. With more than five years of development behind it, Linspire CEO Kevin Carmony hopes that CNR will now standardize these tasks for the most popular Debian- and RPM-package-based distributions.
You can read more of the article and view screenshots of Linspire's CNR here
Tuesday, January 16, 2007
In a report on business deployments of open source software, published in full late last week, the Commission said that in "almost all cases" savings would be made by switching from proprietary to open-source software.
The findings come in stark contrast to assertions from Microsoft that Linux savings are a myth.
The Commission's work is based on detailed analysis of open-source projects in six European Union countries.
"Our findings show that, in almost all cases, a transition toward open source (produces) savings in the long-term cost of ownership," said the report, which was written by academics at the United Nations University in Maastricht, Netherlands.
Microsoft has attempted to persuade IT professionals and businesses that Windows can be cheaper than Linux through its Get the Facts campaign. Get the Facts cited examples where Microsoft's software had offered a cost advantage over open source software.
The EC report also issued encouragement for organizations considering the Open Office applications suite. "Open Office has all the functionalities that public offices need to create documents, spreadsheets and presentations," the report said. "Open Office is free and extremely stable." It added that people were as productive with Open Office as they were with proprietary software.
The report did list two notes of caution. First, it said, short term costs will be higher for organizations migrating, even partially, to open source, largely because of the initial cost of training. Second, some workers may feel undervalued if they are required to work with free software.
The EC, the executive arm of the European Union, has taken several strides towards encouraging the development of open-source software.
In October, it granted $3.88 million (3 million euros) toward a project, called SQO-OSS, to test the quality of open-source software. And just before that, the Commission extended its open-source Web portal, the Open Source Observatory and Repository, to develop interoperability between applications.
By Richard Thurston
Monday, January 15, 2007
You never quite wrap your head around how anti-consumer Microsoft's policies are until they bite you in the bum. Add in the customer antagonistic policies of its patsies, HP in this case, and vendors like Promise, and you have quite a recipe for pain. Guess what I did today?
It started out quite simply: a client needed to set up a small branch office, something I do almost every week. Four workstations and a repository for files, occasional backups, and a shared printer were all they would need, nothing special. Five HP 5100s, a printer, a Promise TX2300 with mirrored drives and a DVD-R were all I needed. That was the easy part.
Out came the anaemic 40GB drive from one HP, and in went the Promise controller and two WD 200GB SATA drives. The TX2300 was a snap to set up; the hardest part was rebooting 10 times until I caught that CTRL-F is the key to get into the card BIOS. A minute later, the RAID was built and it was time to restore the OS from the CDs. Two thumbs up to Promise here, it really could not be easier.
This is where the pain began. Microsoft has a policy where the vendors can't ship you a Windows CD so instead they have to send you a series of restore CDs. These option-free exercises in rookie programming mistakes are a shining example of what is wrong with the industry. HP, like the weak willed jellyfishes that they are, went along with this plan rather than stand up for the people paying them.
The problem? The #*&$ers at HP made it so the brain dead restore scripts would not see any hardware other than the parts they shipped, and it would not recognise the Promise controller. Fair enough, it isn't HP's duty to recognise everything, that would be well beyond anything I expected. You just press F6 and install the drivers manually, it gives you the standard Windows prompt there.
Looking past the problem of the machine not having a floppy (you can easily add one for the initial install), things got ugly quick. The problem? Those weasels at Captain Junior Spy Central disabled the F6 driver install on their restore CD! And there is no Windows CD with which you could do it manually; you either use theirs or supply your own copy.
If you have a copy of XP to use, guess what? The key that comes with the HP box is restricted to the version of Windows on the restore CD. Vanilla XP will not work, nor will any of the copies I have lying around. Your choice, use only HP hardware or buy a copy of XP. A big FU to MS and HP for this little ray of sunshine.
Money grubbing and brain dead tactics aside, I figured I could boot from the Promise CD and possibly manually format the drives and dump the install CDs to the HD. That trick will often work to get you by initial unrecognised drives. That is when I learned half of the problems with Promise, the CD it provides is not bootable and contains nothing resembling a tool. Sparse would be a step up from what it offers.
Biting back my fervent desire to throw this mess out of a window, get a gun, and go to Redmond, I put in the original HD and booted into it to see if there were any interesting tools to help my plight. I tried to install the drivers and noticed the second problem, the #$&#ing Promise CD doesn't have drivers on it! No, I am not kidding, they ship the card with a CD, but that CD has no drivers on it! Honestly.
If you click the install drivers option, it prompts you to put a disk in the (nonexistent) A: drive to make a driver disk. There is no option to unpack, no option to put it in any other location, you are just screwed. Manually browsing the CD comes up with the same programs the moronic installer offers you. A: drive or the highway. In this day and age, there is no excuse for not shipping a driver with hardware, Promise really screwed this up.
So, unable to transfer the install easily, unable to legally use a different CD of Windows with my legally purchased key, and unable to install the drivers with the one I had, I was left with only one option. The machine was put in place Saturday running Ubuntu. The owner of the chain was informed of it, why it was done, and what the ramifications, mainly stability and security, were.
Luckily, he is a smart man, and from this point on, Linux will be the OS of choice on all his servers; it is cheaper to buy, cheaper to install, and much more secure. Desktops are under evaluation, but Microsoft lost this chain for sure on the server side. If Microsoft doesn't think its brain dead policies are costing it money, I am proof positive that they are, and I am willing to bet I am far from alone.
By Charlie Demerjian
Friday, January 05, 2007
Linux Migration Made Simple
Running a Microsoft Windows NT server these days is a brave (or, perhaps, stupid) thing to do: Support for the product has finished, and as far as Microsoft is concerned, the product should be put in a rest home for retired software. Windows Server 2000 is also getting long in the tooth, and in a few years it too will reach the end of its support lifecycle and be looking for its rocking chair and slippers.
So if you work for one of the many organizations around the world still running NT and 2000, like it or not, you are soon going to have to migrate to another operating system.
There are many reasons to consider migrating some or all of your data center servers to Linux, and we won't go into them here. But if you do decide to go open source, some ways of going about it are better than others.
It may sound boring and trite, but the one thing which may dictate the success or failure of a whole migration project is the initial planning stage. Before you can embark on a migration (any migration), you must decide the scope of the project. Are you planning to migrate only the Windows NT file and print servers and domain controllers to Linux, for example, or do you plan in the longer term to move your entire IT infrastructure (including Web and application servers and user desktops) to Linux?
For the initial phase, it's vital to build up a clear picture of what servers you will be replacing, what tasks they currently perform, and how they will accomplish those tasks using Linux.
The answers to these questions, together with the skill sets of the current IT staff, will help determine which Linux distribution to adopt. If staff members already have an extensive knowledge of a particular server-focused Linux distribution, it will likely influence your choice. If not, you'll want to choose a distribution with appropriate vendor support.
The next step is to estimate an approximate cost and time scale for the planned migration. The best way to do this is to break down the migration into as many manageable tasks as you can, and estimate a time and cost for each of these. The more detail you go into when describing these tasks, the more accurate your estimates are likely to be.
Later, in the pilot phase, the estimates can be checked and updated with the generated data.
Migrating an NT file/print server to Samba on Linux should be fairly straightforward, and the potential to save money on Client Access Licenses (CALs) is high. "A properly configured Samba server is typically faster than a Windows NT or 2000 server, and clients will not be able to tell the difference," says Nick Lassonde, chief software architect at California-based Linux migration consultant Versora.
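To give a feel for the scale involved, a basic read/write share on the Samba side amounts to a handful of lines in smb.conf. The share name, path and group below are invented purely for illustration; run testparm afterwards to catch syntax mistakes before reloading Samba:

    [officefiles]
       comment = branch office file share (example only)
       path = /srv/officefiles
       read only = no
       valid users = @staff

Printer shares are similarly brief, and the [printers] section shipped in most distributions' default smb.conf will usually hand print jobs straight to CUPS.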
There are, however, a few pitfalls of which to be wary.
"The most common one comes from mapping security, as by default most Linux distributions only support POSIX security and not complete ACLs (access control lists). However, most modern file systems support ACLs, so this problem can be solved," he warns.
You'll probably also want to configure your file server to authenticate against a domain controller, and there are plugins to achieve this. "Familiarize yourself with Samba's Vampire command," advises Lassonde. "This allows for automated migration of users from an NT domain controller to Samba."
In other words, it sucks the brains out of an NT server--hence the name. "Samba works flawlessly as an NT4 Domain Controller, but while Samba 4 has come a long way as an Active Directory Domain Controller, it is still not quite stable enough for production."
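In practice that migration comes down to a pair of net commands once the Samba box is configured to act as a controller for the same domain. The PDC name and admin account below are placeholders, so treat this as a sketch rather than a recipe:

    net rpc join -S OLDNTPDC -U Administrator      # join the existing NT4 domain first
    net rpc vampire -S OLDNTPDC -U Administrator   # pull users, groups and machine accounts into Samba

Run it against a test setup first if you can, and check the Samba migration documentation for the smb.conf prerequisites.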
For Active Directory domains, it is possible to build Linux-based alternatives: IBM Software Group suggests a stack containing XAD (from PADL), LDAP, and Kerberos 5.0 running on Linux can serve as a viable alternative for Active Directory based Windows 200x domains, for example.
If your project involves migrating more of the data center to Linux, the next stage will probably be moving e-mail and messaging services from Microsoft Exchange to something like OpenXchange, which traditional Microsoft desktop clients can access, or IBM's Lotus Domino, which Outlook clients can also access.
Web and application server migration is much less straightforward. A number of questions must be asked:
* What server-side languages (e.g., ASP, ASP.NET, and PHP) are used on the server, and can these be used under Linux? If not, you will need to find a third-party solution or port the applications to Linux.
* What other machines do the servers connect to, and which should be migrated first? For example, do you migrate a database to Linux first, or leave it on Windows?
* What sort of security options are required? Will you need to set up SSL connections on the new server? Will user authentication be local, or do you authenticate intranet users against a domain?
The obvious choice is to move from Microsoft's IIS Web server to an Apache Web server (which claims 65 percent of the Web server market, according to Netcraft) and to Linux-based databases including DB2, Ingres, MySQL, Oracle, and PostgreSQL.
The hardest part of a Linux migration is migrating applications. If application migration is part of your project, it may be possible to use third-party solutions. Two examples of this are running ASP pages via Sun Java System Active Server Pages, or ASP.Net pages using Visual Mainwin, which provides a Windows library to which applications bind and run on Linux.
Tools that port applications from one environment to another are rarely worth adopting, says Lassonde. "Frequently the price of porting an application from one language to another will almost be the same as simply rewriting the application." He recommends leaving those applications on an IIS server for now, and then rewriting or porting them later in a more neutral language (such as Java) so future changes can be carried out more easily.
Whatever the scope of the migration project, before you start (especially if this is the first one you're attempting), bear in mind that however simple the project may seem, and however well prepared you think you are, it's almost guaranteed that problems will crop up.
Chances are, someone else will have had the same problems before. So at the very least, be sure to take full advantage of public support forums, and consider hiring an experienced migration consultant who--assuming he or she is any good--will have come across most of your problems before and will be able to suggest solutions to technical difficulties in minutes or hours that would otherwise take days or weeks to overcome.
By Paul Rubens
Friday, December 29, 2006
Wednesday, December 27, 2006
Switching to Linux is a huge step for those who still have some reservations about taking such a leap of faith. To make the transition as painless as possible, it often helps simply to make sure that the person switching to Linux has a clear comparison of which applications they will be using on their new distribution.
Choosing the Right Software. When selecting an application list, we figured that it might be helpful to seriously look at which applications the average home user might be using the most. With this in mind, here’s our breakdown in no particular order:
Firefox: A web browser that will make sure the user is not feeling totally overwhelmed and out of their element. We would avoid loading it down with tons of extensions, however, just keep it lean and clean.
Thunderbird: Considering the likelihood that said user has been using their ISP assigned e-mail with a program, such as Outlook Express, this should not feel like a tremendous leap. Thunderbird is a rock solid, trustworthy program that can help wean those reluctant relatives over to the Linux side of the fence in no time.
MSN/AIM/Yahoo Messenger(s): Let's face it, the kids love to "IM" each other with every free moment. Our prescription would not be to go with GAIM in most cases though. For those who don't always favor such a drastic change, keeping things simple might be the best bet here. Since all three instant messaging programs have options that can be installed in a Linux flavor, this allows you to offer the user a choice - GAIM for all or aMSN, Yahoo Linux and AIM Linux?
Scanner Software Made Easier: For the most part, we tend to jump onto the SANE bandwagon in hopes that our lack of driver resources will not prove to be too huge of a problem. However, if you have run into the trouble that we have with SANE back-ends, then maybe an alternative is in order: something like VueScan perhaps? This scanning program is both commercial and effective if you are having some struggles with SANE.
Beagle: Until your new convert becomes a little more accustomed to the file system that Linux provides, enabling them to locate their downloads and other goodies without too much frustration may be in order. Beagle allows for this and does it with amazing style. Much like "Google Desktop," Beagle can locate just about any file you might be looking for with the tap of a few keys.
amaroK: By this time, it may be worth it to your new Linux convert to enjoy their favorite tunes. amaroK is a fantastic music manager that is sure to amaze, thanks to its close work with MusicBrainz. It should be said that getting it to just "work" with your iPod, as so many before me have claimed it does, is not all that easy. Gtkpod is a wise bet, though, if you want to make iPod syncing a simple reality.
OpenOffice.org: In reality, most people ought to be using this as their primary office suite regardless of the OS they choose to run. However, even if the new user you happen to be working with is still a Microsoft Office user, OpenOffice.org is easy enough to make just about anyone comfortable rather quickly.
Did I Miss Something? Are you finding yourself coming unglued at the prospect of us missing an important application? Not to worry, this is just a starter list and not to be taken as the "only way" to go. See, what is so great about introducing folks to Linux is the ability to set them up with the applications that will best meet with "their" needs; not just the needs that we feel they might have.
What are you waiting for? Grab a LiveCD of your favorite distribution and start pounding the pavement. We mean, nothing says "party" like a Linux installation party, right? Just remember, Linux installation parties are only as geeky as the people that host them. So get out there, install your perfect setup on your friends' PCs and have fun!
Sunday, December 24, 2006
I can't urge you strongly enough to read the article entitled How Vista Lets Microsoft Lock Users In.
It details how Microsoft has built into Vista the "trusted computing" ability to lock down Office files via DRM such that no unauthorized document reader will be able to decrypt and read them. This is perhaps one of the biggest hidden weapons Microsoft has in its arsenal that could sabotage Linux and OpenOffice.org if Microsoft succeeds in its attempt to plug SUSE and all Novell's "interoperability" bonuses.
Think of this, if you will, as the Tivoization of Office files, only with malicious intent. Microsoft could, indeed, open up the document format completely and swear before God that it will never sue anyone for patent infringement. However, this does not prevent Microsoft from locking Office files in such a way that only Vista users can read them. No one else will be able to do so without the proper authorization, thus rendering the open format and Microsoft compatibility entirely meaningless -- unless, of course, someone agrees to pay Microsoft for the keys to unlock those files.
The lesson here should be obvious. The FOSS community must avoid - at all costs - the practice of adopting or integrating anything into FOSS that is owned or generated by Microsoft. Ximian, and now Novell, has made it a mission to recreate Microsoft technologies on Linux. I urge the community only to allow Novell to continue to do this at its own peril. It was a massive strategic blunder to attempt to recreate dotNet on Linux as Mono. Microsoft has implicitly, by attempting to make patent deals, acknowledged Linux as a genuine threat. That makes it so much greater a danger to adopt Microsoft practices, whether it involves integration of Office document formats, Excel VBA compatibility, or anything else.
This is an ironic twist, to say the least, after all the fuss Microsoft made over the viral nature of the GPL. Microsoft, through Novell, is attempting to infect open source with hooks it can use to profit from the success of Linux at the expense of Linux users' freedoms.
By Nicholas Petreley
Thursday, December 21, 2006
For an early Christmas this year, my ex-wife (the somewhat dear) purchased a week of personal training at the local health club for me (yes, we still get along). Although I am still in great shape since playing football 20 some-odd years ago and climbing the Sandia Mountains since 1989, except for the last five years or so, I decided it would be a good idea to go ahead and give it a try.
Called the club and made my reservation with a personal trainer named Cindy, who identified herself as a 26 yr old aerobics instructor and model for athletic clothing and swim wear. My Ex seemed pleased with my enthusiasm to get started, since I wouldn't be bothering her! The club encouraged me to keep a diary to chart my progress.
Last Monday: Started my day at 6:00am. Tough to get out of bed, since having a few cold ones while watching the game, but it was well worth it when I arrived at the health club to find Cindy waiting for me (OMG). She was something of a Greek goddess -- with blonde hair, dancing eyes and a dazzling smile.
Woo Hoo!!!!! I feel like I'm 28 again....
Cindy gave me a tour and showed me the machines. She took my pulse after 5 minutes on the treadmill. She was alarmed that my pulse was so fast, but I attributed it to standing next to her in her Lycra aerobics outfit (once again, OMG). I enjoyed watching the skillful way in which she conducted her aerobics class, it almost gave me a heart attack.
Very inspiring, Cindy was encouraging as I did my sit-ups, although my gut was already aching from holding it in the whole time she was around. This is going to be a FANTASTIC week!!
Last Tuesday: I drank a whole pot of coffee (and a few no-doze), but I finally made it out the door. Cindy made me lie on my back and push a heavy iron bar into the air, and then she put weights on it! My legs were a little wobbly on the treadmill, but I made the full mile. Cindy's rewarding smile made it all worthwhile. I feel GREAT!! It's a whole new life for me, I think I found my next future ex-wife.
Last Wednesday: The only way I can brush my teeth is by laying the toothbrush on the counter and moving my mouth back and forth over it. I believe I have a hernia in both pectorals. Driving was OK as long as I didn't try to steer or stop. I parked on top of a GEO in the club parking lot.
Cindy was impatient with me, insisting that my screams bothered other club members. Her voice is a little too perky for early in the morning and when she scolds, she gets this nasally whine that is VERY annoying, and you wish you had duck-tape.
My chest hurt when I got on the treadmill, so Cindy put me on the stair monster. Why the hell would anyone invent a machine to simulate an activity rendered obsolete by elevators? Cindy told me it would help me get in shape and enjoy life. She said some other shit too.
Last Thursday: Cindy was waiting for me with her vampire-like teeth exposed as her thin, cruel lips were pulled back in a full snarl. I couldn't help being a half an hour late; it took me that long to tie my shoes.
Cindy took me to work out with dumbbells. When she was not looking, I ran and hid in the men's room. She sent Andy in to find me, then, as punishment, put me on the rowing machine -- which I sank.
Last Friday: I hate that bitch Cindy more than any human being has ever hated any other human being in the history of the world. Stupid, skinny, anemic little cheerleader. If there were a part of my body I could move without unbearable pain, I would beat her with it.
Cindy wanted me to work on my triceps. I don't have any triceps. And if you don't want dents in the floor, don't hand me the *&%#(#&**!!@*@ barbells or anything that weighs more than a sandwich.
The treadmill flung me off and I landed on a health and nutrition teacher. Why couldn't it have been someone softer, like the drama coach or the choir director?
Last Saturday: Cindy left a message on my answering machine in her grating, shrilly voice wondering why I did not show up today. Just hearing her made me want to smash the machine with my Laptop. However, I lacked the strength to even use the TV remote and ended up catching eleven straight hours of the Weather Channel.
Last Sunday: I had the Church van pick me up for services so I could thank GOD that the week from Hell was over. I will also pray that next year, my ex-wife (the bitch) will choose a gift for me that is fun, like a root canal or a vasectomy.
Sunday, December 17, 2006
then you probably don't want to read the rest of this story. Just remember, we warned you.
The Software Freedom Law Center (SFLC), a non-profit organization that provides pro-bono legal services to protect and advance open-source software, filed a brief today with the U.S. Supreme Court in support of Microsoft's appeal of a software patent decision. Yes, Microsoft.
In the case of "Microsoft v. AT&T," the SFLC is asking the Supreme Court to decide against U.S. patents applying to software that is copied and distributed overseas. The Court of Appeals for the Federal Circuit, a specialized patent court known for allowing patents on software and business methods, originally decided in favor of AT&T. In that decision, the court said that U.S. software patents applied even if the violations happened outside the U.S.
Microsoft appealed the decision and the Supreme Court agreed to hear the case.
The SFLC explains in its brief that its unlikely championing of Microsoft's cause in this case is because the "SFLC has an interest in this matter because the decision of this Court will have a significant effect on the rights of the Free and Open Source Software developers and users."
In its brief, SFLC argues that software copied and distributed outside the U.S. cannot infringe U.S. patents. The brief also argues that the Federal Circuit's decisions declaring software to be patentable subject matter conflict with Supreme Court precedent, and thus should be overruled.
In a statement, SFLC Legal Director Daniel Ravicher said, "I expect many people will be surprised that the Software Freedom Law Center has filed a brief with the Supreme Court in support of Microsoft. In this specific case, Microsoft and SFLC are both supporting the position that U.S. software patents have no right to cover activity outside of the United States, especially in places that have specifically rejected software patents."
In Supreme Court decisions, the explanation for deciding a case is almost always more important than the outcome of the particular case at hand. In this case, the Court's decision will determine whether U.S. software patents can be used to restrict software development, distribution, and use throughout the rest of the world. While it's only a distant possibility, the SFLC hopes that a Supreme Court ruling might even find that software patents are illegal.
Eben Moglen, SFLC's executive director and well-known free software attorney in a statement, noted that "in contrast to the Federal Circuit, the Supreme Court has maintained limits on patentable subject matter throughout U.S. history. The Supreme Court has consistently ruled that algorithms and mathematics cannot be patented. Since software is expressed as mathematical algorithms, it should not be patentable."
Steven J. Vaughan-Nichols
Monday, December 04, 2006
Even as Microsoft executives tout Vista as the operating system of the future, operating systems of the past continue to plague it.
Last week in Iowa, attorneys once again took Microsoft to court over anti-trust charges associated with its Windows operating system. In addition to the age-old complaints about squeezing out competitors and price-fixing, there is a twist: this case alleges that by bolting together Windows and Internet Explorer, Microsoft produced software that gummed up people’s computers.
The case promises to be a textbook contrast between the ponderous nature of the legal world and the mercurial nature of technology. Thanks to long hours of arguments by lawyers on both sides, the entire first day of the case was filled by Polk County District Judge Scott Rosenberg, who read through 110 of the 120 pages of instructions given to the jury.
That’s just the beginning. The opening statement by Iowa attorney Roxanne Barton Conlin is expected to last three to four days. She plans to show the entire 10-hour deposition given by Gates in 1998 to attorneys for the U.S. Department of Justice and will introduce some of the 25 million pages of documents gathered from other actions against the company.
Unless the parties settle, Bill Gates and Steve Ballmer will be called to take the witness stand, possibly as early as January. Complaints about lack of choice and high prices have been the theme song of most of the legal complaints against Microsoft. The Iowa case also alleges that Microsoft’s software caused “drained memory, decreased speed and an increased incidence of security breaches and bugs” in its customers’ computers.
The plaintiff lawyers contend that Iowan customers of Microsoft are entitled to as much as $329 million in damages as compensation for Microsoft overcharges between May 1994 and June 2006. The lawyers are also seeking compensation for the time people have had to spend repairing security breaches--a figure that they put at a minimum of $50 million. “The illegal bolting of Internet Explorer to the Windows operating system created a larger ‘attack surface’” and made the operating system more vulnerable, asserts Richard Hagstrom, co-lead counsel for the plaintiffs. “The damages are based on what people need to do to protect themselves from security breaches.”
Although few consumers would disagree with the charge, it may be tough to prove that buggy software is an anti-trust violation. “They’re trying to hold us responsible or make us pay damages because someone out there is violating the law and writing viruses,” says Richard Wallis, associate general counsel for Microsoft. “We’re not writing any viruses, I can assure you.”
Since the U.S. government won a 2000 anti-trust decision against Microsoft, the company has fought a rash of more than 200 anti-trust class-action lawsuits throughout the U.S., starting with a California suit. All but two--this case in Iowa and another in Mississippi--have reached settlements or preliminary settlements.
Although the government--and Microsoft--spent millions of dollars to wage the anti-trust court battles, the direct benefit to consumers has been minuscule. All told, Microsoft has had to earmark $2.4 billion for settling these suits. In most states, consumers who purchased Microsoft software in the past are eligible for vouchers for modest refunds when they buy new computer hardware or software. The settlements range in value from $5 to $29 per purchase. Consumers in California (which reached the first settlement in January 2003) wrung the best deal out of Microsoft. An appeal by an independent California attorney held up the settlement, and California residents only began receiving their vouchers this past August--three and a half years after Microsoft reached a deal with the plaintiffs.
Only a portion of vouchers are likely to be redeemed. In California, eligible businesses and consumers have applied for approximately 30-40% of the allotted vouchers, according to Microsoft. The experience of other mail-in rebate programs suggests that only a portion of those vouchers will ultimately be cashed in. Low-income schools will eventually receive a portion of the money that is not redeemed, although the precise amount varies across settlements. Microsoft repockets the rest. By contrast, the Bill and Melinda Gates Foundation has donated more than $230 million to U.S. libraries since it began its program in the late 1990s--which happened to be the same time anti-trust concerns were cresting.
Most of the plaintiffs’ petition reads like a history lesson in Microsoft’s anti-trust woes. The case aims to follow well-trod legal ground, revisiting the damage Microsoft inflicted on long-extinct competitors, including Netscape Communications, Be, and Go, along with Novell's DR-DOS and IBM's OS/2. Among the witnesses who will testify are former Novell software developers and a former product manager for computer maker Acer.
“I think Microsoft is as strong as ever,” contends Hagstrom. He helped lead a class action case against Microsoft in Minnesota that spent six weeks at trial before the parties settled.
“All this fanfare about Vista--it seems like it’s just going to be ‘Windows XP.1,’” he contends. “They’ve had to pull back on a lot of the features they said they’d have. Where’s the innovation?”
Silicon Valley entrepreneurs point to the resurgence of Apple Computer, Linux, and Google's $150 billion-plus market cap as signs that consumers value new approaches to software.
Elizabeth Corcoran
Tuesday, November 28, 2006
Monday, November 27, 2006
LinuxBasics.org is an online community devoted to helping people learn to install and run Linux. It has just announced free Linux classes. "An Introduction to Linux Basics" aims to instill a basic understanding of Linux in beginners who want to know more about how the system works.
Advanced Linux users will also find an opportunity to dig deeper into areas they always wanted to know more about, or to fill gaps in their knowledge, according to the team.
The course's study guide will be an "LBook," an edited version of Introduction to Linux: A Hands on Guide by Machtelt Garrels, which is distributed under the GNU GPL, a free and open-source license.
Students will need to join the group's mailing list in order to participate in the course. The class, which will run for six months, opened last month and is available for anybody wishing to join. Students can learn at their own pace.
To join the mailing list click here: http://linuxbasics.org/cgi-bin/mailman/listinfo/qna/
To get started in the courses, click here: http://linuxbasics.org/course/start
LinuxBasics.org is Germany-based and was founded two years ago. In addition to the courses, the site also provides tutorials and links to other sites that offer information needed to install and use Linux. Also available are very "friendly" mailing lists for questions that arise when people start using Linux, and an IRC (Internet Relay Chat) channel.
They said: "This course is free (as in free beer). However, a goodwill contribution in the form of active participation, revisions, suggestions or ideas is appreciated." So what are you waiting for? Here's a chance to learn a different operating system that is virtually bug, virus, malware and spyware free.
After all, if it's good enough for NASA (being a rocket scientist is not required), it's good enough for us.
Thursday, November 23, 2006
Thursday, November 16, 2006
For a number of weeks now, I’ve been pondering exactly who chooses to migrate to Linux and perhaps even more importantly, why. Seriously, what is the motivating factor when it comes to making the move to a new OS? Generally speaking, it comes down to a need for a change.
Whether this stems from the need to try something new, or the fact that Vista is making people in Windows land very nervous, the fact remains that there is a relative flood of new users coming over to the Linux world hoping to find a more effective alternative to proprietary operating systems.
Linux Adoption, Powered by PC Power Users. We might like to think otherwise, but the "great migration" to Linux is generally being powered by advanced Windows users. These are people who are already comfortable enough with configuring their computers that the idea of opening up a shell prompt doesn't frighten them off easily.
This is not to say that beginners are not working off of Linux boxes themselves, mind you. But in the end, most of the migrants will be switching thanks in part to the free ISOs (CDs) that are available for download from various Linux distribution sites.
I personally believe that Windows users are fed up with the need to continuously upgrade their systems to work around proprietary OS problems. And now that Microsoft has all but shot itself in the foot with the prospect of any rogue application sending the affected PC into a "bluescreen," many end users need a break from this madness.
Making the Switch: Challenges. One hurdle that I’ve seen with a number of people working to make the switch to Linux is the understanding that Linux is, in fact, quite different from what they are used to. Because so much of the Linux world is composed of community efforts, the user interface and unusual hardware are not always as "plug-n-play" friendly as the migrant user might like.
Unfortunately, even to this day, I still see so many instances of forum posts where a recent "switcher" makes a plea for assistance, only to receive some short posting with a URL to another thread in it. You know something - that was one of my biggest pet peeves when I first tried Red Hat a number of years ago. And from what I’ve seen, it's still happening often enough even to this day.
But in fairness to those posting these short responses to Linux support forums, it’s reflective of a frustration that veteran users feel as beginners are not taking the time to look for the answers first. The solution is to utilize clear communication techniques by pointing users to sticky posts with an explanation of how the posting can help them. Everything considered, it makes for a fair compromise.
A Glimmer of Hope. Today, we have looked closely at who specifically is moving to Linux and why. Even though this may not seem too important to the future of Linux for advanced users, I’d beg to differ.
Some of these new Linux migrants could one day become a strong voice in the open source and Linux movement. The impressions they have today could very well shape the Linux distributions of tomorrow. I believe in my heart that it’s damaging to dismiss newer users who may not have a firm grasp on what it truly means to be a user of this fantastic operating system.
As long as we are able to maintain a balance between the user and the needs of the Linux community, I feel very strongly that it will indeed be the newcomer to Linux that decides the operating system's fate in the long run. Just look at the Ubuntu phenomenon - I rest my case.
by Matt Hartley
Monday, November 06, 2006
Everyone's worst nightmare: the normal comforting hum of your computer is disturbed by clicking, pranging, banging... It happens to everyone because it's inevitable (hard drives are mechanical; as surely as a car will break down, your hard drive will eventually fail). However, no matter how often you see it, you never quite get used to it happening, or to the heartache of all the files you lose forever because you were "just about to back it up, honestly". This is not a piece about how best to avoid data loss or how to protect against your hard drive dying.
I have had a few hard drives die, and each time it's a painful period where you're without a functional computer. I, like many others, find it impossible to survive without computers and the internet, and therefore the frustration of being forcibly cut off from the e-world is one I endeavour to avoid. This led me on a quest for information, during which it suddenly dawned on me - you can't stop your hard drive dying (everyone knows that), but you can have a solid backup plan in place, one that cannot die... So let me start from the beginning:
So, your hard drive shows signs of dying: you spend a few days running chkdsk, kneeling on the floor cuddling your rig and listening for faults, and then it dies. When you're done crying, you have to RMA the hard drive (if it's within warranty; if not, order another) and then wait for the new drive to arrive. Traditionally you would have to leave your computer alone and wander off, scared and confused, into the real world... Well, no longer: Live-CD environments are so good that you can comfortably survive without a working hard drive if need be, provided you have some way of saving your files.
A Solid Survival Pack:
1) An up to date Live-CD environment like the Knoppix live CD.
2) A secondary hard drive with a Fat32 partition OR a USB flash drive for saving your files
3) Backups of your work, documents, pictures, music etc for convenience.
If you have these basic provisions you can manage sufficiently. You have Firefox for web browsing (the Ubuntu live-CD environment picks up DSL and cable broadband without issue), Evolution for email, GAIM for messaging, OpenOffice for your office needs (word processing, spreadsheets etc), and GIMP for image manipulation and editing... More than sufficient to allow you to "get by". This coincides nicely with the wealth of applications that operate entirely on the web, removing the need to install some applications at all: web FTP clients, Meebo for messaging, and you can even edit images online. There are even Flash-based pseudo online operating systems that give you 1GB of storage - so to say the world ends when your hard drive dies is a gross exaggeration.
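Getting files on or off that FAT32 partition or USB stick from the live session is usually a couple of commands if the desktop doesn't mount it for you automatically. The device name and folder below are placeholders (check the output of dmesg or fdisk -l for the real device):

    sudo mkdir -p /mnt/rescue
    sudo mount -t vfat /dev/sda5 /mnt/rescue    # the FAT32 partition or flash drive
    cp -r ~/important-stuff /mnt/rescue/        # stash whatever you are working on
    sudo umount /mnt/rescue                     # make sure writes are flushed before you shut down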
The dark ages of being helpless before the god of hard drive failure are over; Live-CDs are the way to go in an emergency.
You can remain connected and in charge from a Live-CD environment in reasonable comfort - you can even continue work on your essays or reports while listening to music (streamed from the internet or otherwise). This is of course an ideal situation, unless the lack of dual screen support leaves your face twisted in a ball of rage, unable to operate on a single screen. Or if your computer doesn't matter to you and you can leave it for days on end without use (the very thought of it makes me shudder) then you may as well wait until you get a replacement hard drive. Otherwise, a Live-CD environment such as the ones offered by Ubuntu, Knoppix and Mepis are ideal for keeping you connected while your beloved drive is replaced.
Monday, October 30, 2006
Microsoft released on Monday free business-management software aimed at either the smallest of small businesses or at painfully late adopters.
Microsoft Office Accounting Express 2007 is for "starting businesses and home-based businesses that currently use pen and paper or spreadsheets" to run their operations, according to the company's online FAQ.
The software is available for free download at Microsoft's IdeaWins Web site. It resembles the Microsoft Outlook e-mail client and integrates with other Microsoft Office software.
Link: http://www.ideawins.com/faq.html#q15
Functions include creating invoices, quotes, receipts and customizable reports, as well as expense tracking, payroll and tax processing, credit reporting, online sales and monitoring employee time. Office Accounting Express 2007 users can also import data from Intuit QuickBooks, Microsoft Money and Microsoft Office Excel, and they can use Office Live to share information with an accountant.
Office Accounting Express 2007 also links to third parties that offer additional fee-based services, including ADP for payroll, eBay for online sales, Equifax for credit checks and PayPal for online payments.
The new software is another example of how Microsoft has been forced to change its business model as more software for small-business owners becomes freely available.
Office Accounting Express 2007 will also be included in the Small Business, Professional and Office Ultimate versions of Microsoft Office 2007.
Microsoft recommends that small businesses "with more complex needs such as inventory management, multicurrency invoicing, multiuser access and fixed asset management," use Office Accounting Professional 2007, which is set for release in early 2007 for $149.
Friday, October 20, 2006
I think that we are going to see one very serious trend here: Microsoft users giving the easy-to-use distros a very serious look. While some of them may cost money, there are still plenty of cool free options, too.
Microsoft's recent announcement that the long-anticipated new version of its Windows operating system, Vista, has been delayed into January 2007 leaves several questions hanging. Will Vista offer enough benefits to make it worth the cost of upgrading? Will those who hang on to the current version of Windows end up locked out of new software and peripherals? A recent report by research firm Gartner said that as many as half of all PCs will not be able to run many of Vista's most sophisticated features. Given that, is upgrading even a smart option?
But if the confusion over Windows is deepening, the opposite is true of Linux. Linux, of course, is the alternative to Windows that comes from the world of open-source software--meaning no company owns it, it's available for free, and it boasts a worldwide network of programmers constantly trying to perfect it. For the most part, Linux has been used by geeks (like me) who enjoy rolling up their sleeves and getting under the hood of their software. Even the mention of Linux is enough to pull most nontechie managers out of their comfort zone. But, on the flip side, that's exactly what excites others.
But the notion that Linux is a complicated, alien, experts-only operating system is no longer true. Not only is Linux becoming a mainstream product that can be considered a reasonable alternative to Windows for just about anyone, it's actually easy for nontechies to install. Linux looks pretty much like Windows these days, so you won't face a steep learning curve in putting it to work. And you'll probably never have to worry about a big-bang upgrade to a radically new version, because Linux gets updated routinely every six months or so, incrementally, at little or no cost.
To see which Linux distribution is right for you, check out the Linux Distribution Chooser at: http://www.zegeniestudios.net/ldc/
Saturday, October 14, 2006
Problem #6: Imitation vs. Convergence
An argument people often make when they find that Linux isn't the Windows clone they wanted is to insist that this is what Linux has been (or should have been) attempting to be since it was created, and that people who don't recognize this and help to make Linux more Windows-like are in the wrong. They draw on many arguments for this:
Linux has gone from Command-Line- to Graphics-based interfaces, a clear attempt to copy Windows
Nice theory, but false: The original X Window System was released in 1984, as the successor to the W windowing system ported to Unix in 1983. Windows 1.0 was released in 1985. Windows didn't really make it big until version 3, released in 1990 - by which time X had for years been at the X11 stage we use today. Linux itself was only started in 1991. So Linux didn't create a GUI to copy Windows: It simply made use of a GUI that existed long before Windows.
Windows 3 gave way to Windows 95 - a change to the UI on a scale that Microsoft has never equaled since. It had many new & innovative features: drag & drop functionality, taskbars, and so on. All of which have since been copied by Linux, of course.
Actually. . . no. All the above existed prior to Microsoft making use of them. NeXTSTEP in particular was a hugely advanced (for the time) GUI, and it predated Win95 significantly - version 1 was released in 1989, and the final version in 1995.
Okay, okay, so Microsoft didn't think up the individual features that we think of as the Windows Look-and-Feel. But it still created a Look-and-Feel, and Linux has been trying to imitate that ever since.
To debunk this, one must discuss the concept of convergent evolution. This is where two completely different and independent systems evolve over time to become very similar. It happens all the time in biology. For example, sharks and dolphins. Both are (typically) fish-eating marine organisms of about the same size. Both have dorsal fins, pectoral fins, tail fins, and similar, streamlined shapes.
However, sharks evolved from fish, while dolphins evolved from a land-based quadrupedal mammal of some sort. The reason they have very similar overall appearances is that they both evolved to be as efficient as possible at living within a marine environment. At no stage did pre-dolphins (the relative newcomers) look at sharks and think "Wow, look at those fins. They work really well. I'll try and evolve some myself!"
Similarly, it's perfectly true to look at early Linux desktops and see FVWM and TWM and a lot of other simplistic GUIs. And then look at modern Linux desktops, and see Gnome & KDE with their taskbars and menus and eye-candy. And yes, it's true to say that they're a lot more like Windows than they used to be.
But then, so is Windows: Windows 3.0 had no taskbar that I remember. And the Start menu? What Start menu?
Linux didn't have a desktop anything like modern Windows. Microsoft didn't either. Now they both do. What does this tell us?
It tells us that developers in both camps looked for ways of improving the GUI, and because there are only a limited number of solutions to a problem, they often used very similar methods. Similarity does not in any way prove or imply imitation. Remembering that will help you avoid straying into problem #6 territory.
Problem #7: That FOSS (Free and Open-Source Software) thing.
Oh, this causes problems. Not intrinsically: The software being free and open-source is a wonderful and immensely important part of the whole thing. But understanding just how different FOSS is from proprietary software can be too big an adjustment for some people to make.
I've already mentioned some instances of this: People thinking they can demand technical support and the like. But it goes far beyond that.
Microsoft's Mission Statement is "A computer on every desktop" - with the unspoken rider that each computer should be running Windows. Microsoft and Apple both sell operating systems, and both do their utmost to make sure their products get used by the largest number of people: They're businesses, out to make money.
And then there is FOSS. Which, even today, is almost entirely non-commercial.
Before you reach for your email client to tell me about Red Hat, Suse, Linspire and all: Yes, I know they "sell" Linux. I know they'd all love Linux to be adopted universally, especially their own flavor of it. But don't confuse the suppliers with the manufacturers. The Linux kernel was not created by a company, and is not maintained by people out to make a profit with it. The GNU tools were not created by a company, and are not maintained by people out to make a profit with them. The X11 windowing system. . . well, the most popular implementation is xorg right now, and the ".org" part should tell you all you need to know.
Desktop software: Well, you might be able to make a case for KDE being commercial, since it's Qt-based. But Gnome, Fluxbox, Enlightenment, etc. are all non-profit. There are people out to sell Linux, but they are very much the minority.
Increasing the number of end-users of proprietary software leads to a direct financial benefit to the company that makes it. This is simply not the case for FOSS: There is no direct benefit to any FOSS developer in increasing the userbase. Indirect benefits, yes: Personal pride; an increased potential for finding bugs; more likelihood of attracting new developers; possibly a chance of a good job offer; and so on.
But Linus (Linux) Torvalds doesn't make money from increased Linux usage. Richard Stallman doesn't get money from increased GNU usage. All those servers running OpenBSD and OpenSSH don't put a penny into the OpenBSD project's pockets. And so we come to the biggest problem of all when it comes to new users and Linux:
They find out they're not wanted.
New users come to Linux after spending their lives using an OS where the end-user's needs are paramount, and "user friendly" and "customer focus" are considered veritable Holy Grails. And they suddenly find themselves using an OS that still relies on 'man' files, the command-line, hand-edited configuration files, and Google. And when they complain, they don't get coddled or promised better things: They get bluntly shown the door.
That's an exaggeration, of course. But it is how a lot of potential Linux converts perceived things when they tried and failed to make the switch.
In an odd way, FOSS is actually a very selfish development method: People only work on what they want to work on, when they want to work on it. Most people don't see any need to make Linux more attractive to inexperienced end-users: It already does what they want it to do, why should they care if it doesn't work for other people?
FOSS has many parallels with the Internet itself: You don't pay the writer of a webpage/the software to download and read/install it. Ubiquitous broadband/User-friendly interfaces are of no great interest to somebody who already has broadband/knows how to use the software. Bloggers/developers don't need to have lots of readers/users to justify blogging/coding. There are lots of people making lots of money off it, but it's not by the old-fashioned "I own this and you have to pay me if you want some of it" method that most businesses are so enamored of; it's by providing services like tech-support/e-commerce.
Linux is not interested in market share. Linux does not have customers. Linux does not have shareholders, or a responsibility to the bottom line. Linux was not created to make money. Linux does not have the goal of being the most popular and widespread OS on the planet.
All the Linux community wants is to create a really good, fully-featured, free operating system. If that results in Linux becoming a hugely popular OS, then that's great. If that results in Linux having the most intuitive, user-friendly interface ever created, then that's great. If that results in Linux becoming the basis of a multi-billion dollar industry, then that's great.
It's great, but it's not the point. The point is to make Linux the best OS that the community is capable of making. Not for other people: For itself. The oh-so-common threats of "Linux will never take over the desktop unless it does such-and-such" are simply irrelevant: The Linux community isn't trying to take over the desktop. They really don't care if it gets good enough to make it onto your desktop, so long as it stays good enough to remain on theirs. The highly-vocal MS-haters, pro-Linux zealots, and money-making FOSS purveyors might be loud, but they're still minorities.
That's what the Linux community wants: an OS that can be installed by whoever really wants it. So if you're considering switching to Linux, first ask yourself what you really want.
If you want an OS that doesn't chauffeur you around, but hands you the keys, puts you in the driver's seat, and expects you to know what to do: Get Linux. You'll have to devote some time to learning how to use it, but once you've done so, you'll have an OS that you can make sit up and dance.
If you really just want Windows without the malware and security issues: Read up on good security practices (CISSP); install a good firewall, IDS, IPS, malware-detector, and anti-virus; replace IE with a more secure browser like Firefox; and keep yourself up-to-date with security updates. There are people out there (myself included) who've used Windows since the 3.1 days right through to XP without ever being infected with a virus or malware (although I have experienced system crashes): you can do it too. Don't get Linux: It will fail miserably at being what you want it to be.
If you really want the security and performance of a Unix-based OS but with a customer-focused attitude and a world-renowned interface: Buy an Apple Mac. OS X is great. But don't get Linux: It will not do what you want it to do.
It's not just about "Why should I want Linux?". It's also about "Why should Linux want me?"
Tuesday, October 10, 2006
Problem #4: Designed for the designer
In the car industry, you'll very rarely find that the person who designed the engine also designed the car interior: It calls for totally different skills. Nobody wants an engine that only looks like it can go fast, and nobody wants an interior that works superbly but is cramped and ugly. And in the same way, in the software industry, the user interface (UI) is not usually created by the people who wrote the software.
In the Linux world, however, this is not so much the case: Projects frequently start out as someone's toy. He or she does everything personally, and therefore the interface has no need of any kind of "user friendly" features: The user knows everything there is to know about the software and doesn't need help. Vi (a text editor) is a good example of software deliberately created for a user who already knows how it works: It's not unheard of for new users to reboot their computers because they couldn't figure out how else to get out of vi.
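For the record, escaping vi takes only a couple of keystrokes once you know the trick - no reboot required:
press Esc, then type  :q!  and hit Enter   - quit without saving
press Esc, then type  :wq  and hit Enter   - save the file and quit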
However, there is an important difference between a Free & Open-Source Software (FOSS) programmer and most commercial software writers: The software a FOSS programmer creates is software that he or she intends to use. So while the end result might not be as 'comfortable' for the novice user, they can draw some comfort from knowing that the software is designed by somebody who knows what the end-user's needs are: He too is an end-user. This is very different from commercial software writers, who are making software for other people to use: They are not knowledgeable end-users.
So while vi has an interface that is hideously and unfriendly to new users, it is still in use today because it is such a superb interface once you know how it works. Firefox (IE like) was created by people who regularly browse the Web. The Gimp (Photoshop like) was built by people who use it to manipulate graphics files. And so on.
So Linux interfaces are frequently a bit of a minefield for the novice: Despite its popularity, vi should never be considered by a new user who just wants to quickly make a few changes to a file; for that, use OpenOffice Writer (MS Word-like) or a simpler editor. And if you're using software early in its lifecycle, a polished, user-friendly interface is something you're likely to find only in the "To Do" list: Functionality comes first. Nobody designs a killer interface and then tries to add functionality bit by bit. They create functionality, and then improve the interface bit by bit.
So to avoid #4 issues: Look for software that's specifically aimed at being easy for new users to use, or accept that some software has a steeper learning curve than you're used to. To complain that vi isn't friendly enough for new users is to be laughed at for missing the point.
Problem #5: The myth of "user-friendly"
This is a big one. It's a very big term in the computing world, "user-friendly". It's even the name of a particularly good webcomic. But it's a bad term.
The basic concept is good: That software be designed with the needs of the user in mind. But it's always addressed as a single concept, which it isn't. If you spend your entire life processing text files, your ideal software will be fast and powerful, enabling you to do the maximum amount of work for the minimum amount of effort. Simple keyboard shortcuts and mouseless operation will be of vital importance.
But if you very rarely edit text files, and you just want to write an occasional letter, the last thing you want is to struggle with learning keyboard shortcuts. Well-organized menus and clear icons in tool bars will be your ideal.
Clearly, software designed around the needs of the first user will not be suitable for the second, and vice versa. So how can any software be called "user-friendly", if we all have different needs?
The simple answer: User-friendly is a misnomer (much as animal crackers are not crackers but cookies), and one that makes a complex situation seem simple.
What does "user-friendly" really mean? Well, in the context in which it is used, "user friendly" software means "Software that can be used to a reasonable level of competence by a user with no previous experience of the software." This has the unfortunate effect of making lousy-but-familiar interfaces fall into the category of "user-friendly".
Subproblem #5a: Familiar is friendly
So it is that in most "user-friendly" text editors & word processors, you Cut and Paste by using Ctrl-X and Ctrl-V. Totally unintuitive, but everybody's used to these combinations, so they count as a "friendly" combination.
So when somebody comes to vi and finds that it's "d" to cut, and "p" to paste, it's not considered friendly: It's not what anybody is used to.
Is it superior? Well, actually, yes.
With the Ctrl-X approach, how do you cut a word from the document you're currently in? (No using the mouse!) From the start of the word, Ctrl-Shift-Right to select the word. Then Ctrl-X to cut it.
The vi approach? dw deletes the word.
How about cutting five words with a Ctrl-X application? From the start of the words,
Ctrl-Shift-Right
Ctrl-Shift-Right
Ctrl-Shift-Right
Ctrl-Shift-Right
Ctrl-Shift-Right
Ctrl-X
And with vi?
d5w
The vi approach is far more versatile and actually more intuitive: "X" and "V" are not obvious or memorable "Cut" and "Paste" commands, whereas "dw" to delete a word and "p" to put it back is perfectly straightforward. But "X" and "V" are what we all know, so whilst vi is clearly superior, it's unfamiliar. Ergo, it is considered unfriendly. On no basis other than pure familiarity, a Windows-like interface seems friendly. And as we learned in problem #1, Linux is necessarily different to Windows. Inescapably, Linux always appears less "user-friendly" than Windows.
To avoid #5a problems, all you can really do is try and remember that "user-friendly" doesn't mean "What I'm used to": Try doing things your usual way, and if it doesn't work, try and work out what a total novice would do.
Subproblem #5b: Inefficient is friendly
This is a sad but inescapable fact. Paradoxically, the harder you make it to access an application's functionality, the friendlier it can seem to be.
This is because friendliness is added to an interface by using simple, visible 'clues' - the more, the better. After all, if a complete novice to computers is put in front of a WYSIWYG word processor and asked to make a bit of text bold, which is more likely:
* He'll guess that "Ctrl-B" is the usual standard
* He'll look for clues, and try clicking on the "Edit" menu.
Unsuccessful, he'll try the next likely one along the row of menus: "Format". The new menu has a "Font" option, which seems promising. And Hey! There's our "Bold" option. Success!
Next time you do any word processing, try doing every job via the menus: No shortcut keys, and no toolbar icons. Menus all the way. You'll find you slow to a crawl, as every task suddenly demands a multitude of keystrokes and mouse clicks.
Making software "user-friendly" in this fashion is like putting training wheels on a bicycle: It lets you get up & running immediately, without any skill or experience needed. It's perfect for a beginner. But nobody out there thinks that all bicycles should be sold with training wheels: If you were given such a bicycle today, I'll bet the first thing you'd do is remove them for being unnecessary encumbrances: Once you know how to ride a bike, training wheels are unnecessary.
And in the same way, a great deal of Linux software is designed without "training wheels" - it's designed for users who already have some basic skills in place. After all, nobody's a permanent novice: Ignorance is short-lived, and knowledge is forever. So the software is designed with the majority in mind.
This might seem an excuse: After all, MS Word has all the friendly menus, and it has toolbar buttons, and it has shortcut keys. . . Best of all worlds, surely? Friendly and efficient.
However, this has to be put into perspective: Firstly, the practicalities: having menus and toolbars and shortcuts and all would mean a lot of coding, and it's not like Linux developers all get paid for their time. Secondly, it still doesn't really take into account serious power-users: Very few professional wordsmiths use MS Word. Ever meet a coder who used MS Word? Compare that to how many use emacs & vi.
Why is this? Firstly, because some "friendly" behavior rules out efficient behavior: See the "Cut & Copy" example above. And secondly, because most of Word's functionality is buried in menus that you have to use: Only the most common functionality has those handy little buttons in toolbars at the top. The less-used functions that are still vital for serious users just take too long to access.
Something to bear in mind, however, is that "training wheels" are often available as "optional extras" for Linux software: They might not be obvious, but frequently they're available.
Take mplayer. You use it to play a video file by typing mplayer filename in a terminal. You fast-forward & rewind using the arrow keys and the Page Up & Page Down keys. This is not overly "user-friendly". However, if you instead type gmplayer filename, you'll get the graphical front end, with all its nice, friendly, familiar buttons.
Take ripping a CD to MP3 (or Ogg): Using the command-line, you need to use cdparanoia to rip the tracks from the disc. Then you need an encoder. . . It's a hassle, even if you know exactly how to use the packages (imho). So download & install something like Grip. This is an easy-to-use graphical front end that uses cdparanoia and encoders behind the scenes to make it really easy to rip CDs, and it even has CDDB support to name the files automatically for you.
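To give a feel for the "hassle", a bare-bones command-line rip might look something like this - a sketch only: the output filenames are just what cdparanoia typically produces, and you could substitute oggenc for lame if you prefer Ogg:
cdparanoia -B                       # rip every audio track on the CD to its own WAV file
lame track01.cdda.wav track01.mp3   # encode the first track to MP3
...and then repeat the encoding for every remaining track, which is exactly the tedium Grip automates.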
The same goes for ripping DVDs: The number of options to pass to transcode is a bit of a nightmare. But using dvd::rip to talk to transcode for you makes the whole thing a simple, GUI-based process which anybody can do.
So to avoid #5b issues: Remember that "training wheels" tend to be screwed-on extras in Linux, rather than being automatically supplied with the main product. And sometimes, "training wheels" just can't be part of the design.
Wednesday, October 04, 2006
Problem #3: Culture shock
Subproblem #3a: There is a culture
Windows users are more or less in a customer-supplier relationship: They pay for software, for warranties, for support, and so on. They expect software to have a certain level of usability. They are therefore used to having rights with their software: They have paid for technical support and have every right to demand that they receive it. They are also used to dealing with entities rather than people: Their contracts are with a company, not with a person.
Linux users are in more of a community. They don't have to buy the software, and they don't have to pay for technical support (although there are also fee-based options for high-end support). They download software for free and use instant messaging (IRC) and web-based forums to get help. They deal with people, not corporations.
A Windows user will not endear himself or herself by bringing his habitual attitudes over to Linux, to put it mildly.
The biggest cause of friction tends to be in the online interactions: A "3a" user new to Linux asks for help with a problem he or she is having. When he doesn't get that help at what he considers an acceptable rate, he starts complaining and demanding more help. Because that's what he's used to doing with paid-for technical support. The problem is that this isn't paid-for support (unless you choose to purchase Linux support from a third party). This is a bunch of volunteers who are willing to help people with problems out of the goodness of their hearts. The new user has no right to demand anything from them, any more than somebody collecting for charity can demand larger donations from contributors.
In much the same way, a Windows user is used to using commercial software. Companies don't release software until it's reliable, functional, and user-friendly enough. So this is what a Windows user tends to expect from software: It starts at version 1.0. Linux software, however, tends to get released almost as soon as it's written: It starts at version 0.1. This way, people who really need the functionality can get it ASAP; interested developers can get involved in helping improve the code; and the community as a whole stays aware of what's going on.
If a "3a" user runs into trouble with Linux, he or she will complain: The software hasn't met his or her standards, and he thinks he has a right to expect that standard. His mood won't be improved when he gets sarcastic replies like "I'd demand a refund if I were you"
So, to avoid problem #3a: Simply remember that you haven't paid the developer who wrote the software or the people online who provide the tech support. They don't owe you anything. Unless you decide to purchase Linux technicial support from an outside source.
Subproblem #3b: New vs. Old
Linux pretty much started out life as a hacker's hobby. It grew as it attracted more hobbyist hackers. It was quite some time before anybody but a geek stood a chance of getting a usable Linux installation working easily. Linux started out "by us geeks, for us geeks". And even today, the majority of established Linux users are self-confessed geeks; the super-geeks of today are like the rock-and-roll stars of the '80s (but with much more money) - and yes, they're also in the forums to help.
And that's a beautiful thing: If you've got a problem with hardware or software, having a large number of geeks (and sometimes the super-geeks) available to work on the solution is a definite plus.
But Linux has grown up quite a bit since its early days. There are distributions (distros) that almost anybody can install, even distros that live on CDs and detect all your hardware for you without any intervention. It has become attractive to non-hobbyist users who are interested in it simply because it's free of viruses, malware, spyware and badware, and it's cheap to upgrade too. It's not uncommon for there to be friction between the two camps. It's important to bear in mind, however, that there's no real malice on either side: It's lack of understanding that causes the problems.
Firstly, you get the hard-core geeks who still assume that everybody using Linux is a fellow geek. This means they expect a high level of knowledge, and often leads to accusations of arrogance, elitism, and rudeness. And in truth, sometimes that's what it is. But quite often, it's not: It's elitist to say "Everybody ought to know this". It's not elitist to say "Everybody knows this" - quite the opposite.
Secondly, you get the new users who're trying to make the switch after a lifetime of using commercial operating systems. These users are used to software that anybody can sit down & use, out-of-the-box.
The issues arise because group 1 is made up of people who enjoy being able to tear their operating system apart and rebuild it the way they want, while group 2 tends to be indifferent to the way the operating system works, so long as it does work.
A parallel situation that can emphasize the problems is Lego. Picture the following:
New: I wanted a new toy car, and everybody's raving about how great Lego cars can be. So I bought some Lego, but when I got home, I just had a load of bricks and cogs and stuff in the box. Where's my car??
Old: You have to build the car out of the bricks. That's the whole point of Lego.
New: What?? I don't know how to build a car. I'm not a mechanic. How am I supposed to know how to put it all together??
Old: There's a leaflet that came in the box. It tells you exactly how to put the bricks together to get a toy car. You don't need to know how, you just need to follow the instructions.
New: Okay, I found the instructions. It's going to take me hours! Why can't they just sell it as a toy car, instead of making you have to build it??
Old: Because not everybody wants to make a toy car with Lego. It can be made into anything we like. That's the whole point.
New: I still don't see why they can't supply it as a car so people who want a car have got one, and other people can take it apart if they want to. Anyway, I finally got it put together, but some bits come off occasionally. What do I do about this? Can I glue it?
Old: It's Lego. It's designed to come apart. That's the whole point.
New: But I don't want it to come apart. I just want a toy car!
Old: Then why on Earth did you buy a box of Lego??
It's clear to just about anybody that Lego is not really aimed at people who just want a toy car. You don't get conversations like the above in real life. The whole point of Lego is that you have fun building it and you can make anything you like with it. If you've no interest in building anything, Lego is not for you. This is quite obvious.
As far as the long-time Linux user is concerned, the same holds true for Linux: It's an open-source, fully-customizable set of software. That's the whole point. If you don't want to hack the components a bit, why bother to use it?
But there's been a lot of effort lately to make Linux more suitable for non-hackers - a situation not a million miles away from selling pre-assembled Lego kits - in order to make it appeal to a wider audience. Hence you get conversations that aren't far away from the ones above: Newcomers complain about the existence of what the established users consider to be fundamental features, and resent having to read a manual to get something working. But complaining that there are too many distros, or that software has too many configuration options, or that it doesn't work perfectly out-of-the-box, is like complaining that Lego can be made into too many models, and not liking the fact that it can be broken down into bricks and built into many other things.
So, to avoid problem #3b: Just remember that what Linux seems to be now is not what Linux was in the past. The largest and most necessary part of the Linux community, the geeks, hackers and the developers, like Linux because they can fit it together the way they like; they don't like it in spite of having to do all the assembly before they can use it.