Why you should not pay for extended warranty if you use Linux

Posted on Thursday, December 17, 2009 by Erlik

I read a rather sad story today. Apparently the Best Buy Geek Squad refused to service the machine of someone who had purchased an $80 extended warranty for his netbook, just because he had installed Ubuntu Linux. This story not only shows how far Best Buy's Geek Squad is from having anything even close to the technical knowledge of actual geeks, but also raises two other questions: are extended warranties worth it, and are Linux consumers properly protected in the US? Let's dig into these two rather important questions.

Extended warranties for netbooks: are they worth it?

Nowadays many electronics shops such as Best Buy will offer you a two or three year extended warranty if you pay them a little more money, usually around 20% of the price of the purchased item. In my opinion this is not worth the money for a netbook, especially if you are using Linux. First, the Best Buy accountants can do the math: if they ask about 20% of the price of the computer for the protection plan, the probability of the computer failing between the end of the "free" warranty and the end of the extended warranty must be lower than that, otherwise the plan would lose them money; the odds are against you from the start. Second, if you use Linux you are probably knowledgeable enough to fix software issues yourself and are protected from most virus damage, leaving only hardware faults to cover. Most hardware defects are likely to appear during the legal warranty anyway, so the extended one is not very useful. Finally, if you run the risk of being denied service because you use Linux or any piece of software the store owner does not like, it is simply not worth the hassle. If your netbook breaks after the legal warranty, you are probably better off buying a new one anyway. The only case where these extended warranties may make sense is if you purchase an expensive computer that you would have trouble replacing if it failed, or if you don't know anything about computers and expect to go back to the store for every little issue (and I don't even know whether that is covered).
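
To make the odds concrete, here is a quick back-of-the-envelope calculation. The numbers are purely illustrative and stand in for any netbook sold with a roughly 20% protection plan:

# Rough expected-value check for an extended warranty (all numbers are illustrative).
netbook_price = 400.0                    # hypothetical purchase price, in dollars
warranty_price = 0.20 * netbook_price    # the ~20% protection plan, i.e. $80 here
repair_cost = netbook_price              # assume a failure means replacing the machine

# The plan only breaks even for the buyer if the chance of a covered failure
# (after the free warranty ends, before the extended one does) exceeds this:
break_even_probability = warranty_price / repair_cost
print(f"Break-even failure probability: {break_even_probability:.0%}")
# Prints 20%. The store prices the plan so that the real failure rate in that
# window is lower than this (otherwise it would lose money on average), which
# means the expected value of the plan is negative for the buyer.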

Are Linux consumers correctly protected in the US?

What is more worrying to me is the idea that changing your OS to Linux could constitute an unauthorized modification of your computer. This would mean that the manufacturer is selling the software and hardware as "one unit". That is very worrying, because if that kind of bundling were accepted, consumers would actually lose the freedom to install and run the software they like on their own computers. Not only would that remove consumer choice from the equation when it comes to software (never a good thing), it would also create a virtual monopoly. If Microsoft and Corel signed a deal with Asus to have Windows and WinDVD as the only "authorized" software on their computers, anybody wanting to buy an Asus computer would have to use them to avoid losing their warranty, even if Linux and PowerDVD are far better. We would go from a situation where the best software sells to a situation where the cheapest or most common software sells. If this kind of situation starts to emerge, it is important that consumer laws are adapted to prevent that kind of bundling, as they are in other countries outside the US. A good example is France, where consumer law considers hardware and software to be two different items that can't be bundled, and forces OEMs to reimburse Windows at the consumer's request if it is not possible to purchase the computer "naked". Furthermore, the amount of money reimbursed as well as the procedure to follow must be published beforehand (usually the price list and reimbursement forms are available on the OEM's website). Add to that the fact that in Europe the minimum legal (aka "free") warranty on computers is two years, and you can see that consumer protection laws in the US are far from being the best in the world, especially for Linux users, and should be revised to protect the consumer better.


Why ChromeOS is a Smartbook OS

Posted on Monday, December 7, 2009 by Erlik

Now that Google's Linux-based ChromeOS has been fully revealed and is actually available for some machines, one thing is clear: ChromeOS is more of a Smartbook OS than a Netbook OS. Let's sum up what ChromeOS is about: it is the Chrome browser as an operating system. Remember a few years ago when people said that the browser would become the OS? That's exactly what Google did!

Like Smartbooks, ChromeOS is all about the web

ChromeOS is designed for one thing: letting you surf the web quickly and cheaply. It is designed to work with SSDs (and only SSDs) from the start. Since the objective is to get you online fast, local storage does not need to be abundant: it needs to be fast and cheap. Nowadays 4 or 8 GB of fast flash memory will be faster AND cheaper than almost any HDD, so Google chose to mandate flash memory. Software-wise, ChromeOS is little more than a Linux kernel, X, Clutter and the Chrome browser, which is probably the fastest route to starting a browser. So thanks to its minimalistic software stack and its focus on local storage speed over capacity, ChromeOS gets you online fast on inexpensive hardware. There are, however, some drawbacks.

Where ChromeOS fails

There is one huge drawback to this approach: if you can't get online, what you can do with the machine is severely limited. Of course the machine is not completely useless when offline: thanks to Google Gears you will still be able to write in Gmail or Google Docs, but that's pretty much it. That's where you see that this was not designed for netbooks, because netbooks are supposed to perform acceptably even when offline, while with ChromeOS offline use is an afterthought. Another difference is that a netbook can run some pretty heavy applications: the GIMP works fine on an Atom processor and playing local video is OK as long as it is not in HD. ChromeOS, on the other hand, relies on YouTube and lightweight online apps to do pretty much everything, meaning you will not get the same level of functionality as a netbook, even when online.

The future of mobile computing?

ChromeOS is not the future of mobile computing, but a part of it. The way I see it, mobile computing is branching into three main categories:

- Full laptops: These run mostly Windows (or in some cases Linux or OSX), have powerful processors, DVD drives and so on. They only run for about 3 hours on battery and weigh 4 pounds or more, but they have a lot of local storage and are functional even without an internet connection.

- Netbooks: These run Moblin, Ubuntu Netbook Remix or Windows Starter edition. They are lightweight multi-purpose computing devices that feature an Atom, Neo or CULV processor. Battery life goes up to 8 hours and local storage up to 250 GB. They can still work fairly well when not on the network.

- Smartbooks: These run ChromeOS or Windows CE. They are cheap single-purpose devices with one main function: getting you on the web. They are very portable and have exceptional battery life, but they have little local storage and are thus not very useful when disconnected for long periods of time (like when you travel).

The idea behind ChromeOS is really that consumers should have a full laptop or desktop as their main computer and purchase a ChromeOS device as a companion to use when on the road. This is close to the idea behind the original Eee PC 701. The problem is that in places where mobile bandwidth still sells at premium prices and access points are rare, ChromeOS devices may end up being either very expensive to keep connected or nearly useless as soon as users leave the range of their home wifi network. Add to that the fact that a lot of online video content (like Hulu) is only available in the US, and the usefulness of the machine as a source of multimedia is severely compromised on the international market. ChromeOS is a good idea in places that have the network infrastructure and online media content to support the model. Unfortunately this is not the case in most countries besides the US.


Linux Mint 8 is here

Posted on Monday, November 30, 2009 by Erlik

Just a small post to inform you that Linux Mint 8 (aka Helena) is here. Linux Mint is based on Ubuntu 9.10 (aka Karmic Koala), but includes DVD, Java and Flash support in the default installation, meaning that most users won't have to install any extra packages after the initial setup. For those who want the extras, a graphical software manager is available. The update system has been revamped: not only does it give you a rating for the impact of most updates, but you can now configure the system to ignore updates completely, as well as configure what information appears in the update manager. It is also easier to perform OEM installs, a good thing if you are installing Mint for someone else. As usual the Linux Mint theme is very polished and looks very elegant. New users can rely on an updated user guide in PDF format to help get them started. On the technical side, you get kernel 2.6.31 and GNOME 2.28, which include quite a few improvements of their own. You can download Linux Mint 8 from here!


Silverlight multi-platform support is falling apart.

Posted on Wednesday, November 25, 2009 by Erlik

I had previously pointed out that the lack of supported platforms was a serious problem for Silverlight, especially when compared to Flash. The root of the problem was that Moonlight, the Linux version of Silverlight, is usually at least one release behind the Windows and Mac versions of Silverlight. This caused confusion for developers, as it was not clear which features would work on Linux. Rather than working to fix the problem, it seems that Microsoft is making it worse by introducing Windows-only features in Silverlight 4.

The need for COM

One of the most widely used APIs in Windows is COM. With it you can access almost anything on a Windows machine, which makes it a very powerful tool for developers. The problem is that it is a technology that only exists on Windows and cannot easily be retrofitted onto OSX or Linux. This presented Microsoft with a choice: either give Silverlight developers access to COM, which would strongly increase the usefulness of Silverlight on Windows but fragment the Silverlight market even more, or don't, and try to unify the supported base to compete with Flash. They chose the first option.

Silverlight gives the multi-platform market to Flash

What I take from this decision is that Microsoft's objectives for Silverlight have changed. It looks like competing with Flash in the wider, multi-platform market is taking a back seat to the introduction of new functionality. What Microsoft is pushing is Silverlight as the default web-based development platform for Windows, with some limited compatibility with non-Windows platforms. This goes in the opposite direction from Adobe Flash, which seems to favor a consistent set of functionality and compatibility across all platforms. Flash is not only available on Windows, Mac and Linux, but also on the Wii, and an ARM version should soon be released for smartbooks. And that does not even cover Gnash, the open source version of Flash that is more or less to Flash what Moonlight is to Silverlight. In short, Microsoft is giving up the multi-platform market to Adobe.

The impact for the developers

With many Linux-based web devices built on ChromeOS in the works for next year and OSX market share on the rise, choosing Silverlight as a web development platform needs to be carefully considered. The developer needs to be fully aware that some Silverlight 4 functionality will not be available if cross-platform support is required (and on the web it almost always is). If Linux support must be assured the situation is even worse, as targeting anything above the Silverlight 2 feature level could break compatibility with Moonlight until late next year. This makes Adobe Flash the safer choice for web development.

Is Silverlight COM support useless?

There are however cases where the COM support in Silverlight 4 will be useful: enterprise development. If your company is a Windows shop, you can use Silverlight 4 to develop very powerful web applications that run straight from the company intranet. You need of course to be sure that the application will not have to be made available to external customers who may use other clients. In these "intranet" scenarios the addition of COM to Silverlight 4 is clearly a benefit, and it is indicative of Microsoft's will to reposition Silverlight as an "enterprise" technology as well as a "web" technology.


Has PC gaming lost its way?

Posted on Friday, November 20, 2009 by Erlik

Even if my Linux netbook is my main machine nowadays, there is one thing for which it can't replace my trusty Windows XP desktop: gaming. I have long been a fan of playing on the PC rather than on consoles because, in my opinion, a mouse is required for the type of games that I like (FPS, RTS, RPG) and there are many more quality games available on the PC in those genres, some of which are even free. The problem is that my recent PC gaming experience has been less than stellar, making me consider consoles more and more as an alternative. Here are some of the main issues with modern PC gaming:

It takes forever before I can play

I hadn't played Battlefield 2 for a few months, but 3 weeks ago I wanted to have a quick game online, only to find that no servers were available. After some investigation it turned out that a 1.5 GB patch had been released, and that I had to download it before I could play online. Granted, the patch adds many maps so it is not all negative, but there was no hope of a quick game: the soonest I would be able to play was the next day. I had a similar issue with Runes of Magic, a free MMORPG inspired by World of Warcraft. I had not played for some time, and when I wanted to have a look there again, it started with several hours of patch downloads and installation (and I am on a 6 Mbps ADSL connection). Although I do like that game publishers add new free content to their games, the update mechanism clearly needs improvement. Either the pending patch needs to be brought to my attention before I want to play the game (because when I want to play the game, I want to play the game, not patch it!), or the publisher needs to ensure that the game can start while the client is being updated, for example by keeping some servers compatible with older versions of the client.
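
To illustrate the first suggestion, here is a minimal sketch of a background patch checker that could run once a day and warn you before game night. Everything specific in it is hypothetical: the update feed URL, the JSON fields and the installed version number would all depend on the publisher actually exposing that information.

# Hypothetical background patch checker: poll the publisher's update feed once a day
# and warn the user that a large patch is pending, so the download can start early.
import json
import urllib.request

UPDATE_FEED = "https://example.com/game/latest_version.json"  # hypothetical endpoint
INSTALLED_VERSION = "1.41"  # would normally be read from the local install

def check_for_patch():
    with urllib.request.urlopen(UPDATE_FEED, timeout=10) as response:
        latest = json.load(response)
    if latest["version"] != INSTALLED_VERSION:
        size_gb = latest["size_bytes"] / 1e9
        print(f"Patch {latest['version']} is out ({size_gb:.1f} GB): "
              "start the download now so you can actually play tonight.")
    else:
        print("Game is up to date.")

if __name__ == "__main__":
    check_for_patch()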

Selling half-finished software.

Recently I started playing League of Legends, a very good free online RTS / RPG crossover that plays a bit like Demigod. The game officially came out of beta earlier this month. The game is great except for a few things: there are only 2 maps to play on (and one of those is still in beta); the rest of the maps have not been released yet. The in-game store that allows you to get "Runes" will only open on Monday 23 November, so that part of the game doesn't work yet. If this were a free game still in beta that would not be much of a problem, but not only has the game been officially released for a few weeks, the League of Legends Collector Pack, which costs about $30, is already selling on Amazon. It is clear that people are being asked to pay for a game that is only half finished, and that is simply not acceptable.

I need an internet connection to play a single player game

I recently purchased BioShock, which is an excellent single player game. I was surprised, however, to notice that playing the game requires an internet connection. It is not a problem for me, but there are still people without a broadband connection in many areas of the world. Why doesn't the publisher give the option of using either the internet or the DVD to prove that you actually own the game? It is not as if these "protections" will prevent pirates from copying the game anyway, so why cause a problem for those of your customers who don't have easy access to the internet?

It's not all bad

Now there are still a lot of advantages to PC gaming. As mentioned, a lot of games are free or cheap, extra content can easily be added, the mouse and keyboard interface is a must for some games and online play is often free. But publishers active in the PC gaming market should not fall asleep at the wheel: if the issues mentioned above are not fixed, PC gaming will cease to be the platform of choice for a lot of gamers.


Is Ubuntu broken?

Posted on Thursday, November 12, 2009 by Erlik

There seem to be quite a few concerns and complaints about recent Ubuntu releases. Are there really that many regressions and instabilities in the latest releases of Ubuntu? Probably! Should we accept that in a production OS? No, but there is something that many people tend to forget: the primary objective of these interim releases is not stability. I think that a lot of people tend to dismiss the Ubuntu release cycle, and for a good reason: that cycle is not a perfect solution. Let's look at the problem in detail:

Ubuntu 9.04 and 9.10 regressions

There is no denying that the two most recent releases of Ubuntu have been full of problems. The 9.04 release brought a lot of regressions and instabilities with the Intel video driver, which unfortunately is the most common graphics adapter in use. The 9.10 version seems to have its own share of problems, with a lot of people reporting trouble in the Ubuntu forums after upgrading. This certainly discredits Ubuntu as a consumer-ready OS, but the thing is that Ubuntu 9.04 and 9.10 do not aim to be consumer ready: they are merely a rehearsal for the next LTS version of Ubuntu!

Ubuntu's misunderstood release cycle

Let's look in more detail at the Ubuntu release cycle. Every two years we get a Long Term Support (or LTS) release. That release is supposed to be stable, consumer ready and widely used. Currently the LTS release is version 8.04 and there are very few issues with it as long as you install it on supported hardware. In addition, every 6 months you get an interim Ubuntu release. That release is not intended for mainstream users but rather for people who want (or need) the bleeding edge in Linux packages, drivers and kernel. Interim releases are not meant to be used for extended periods of time, so they have a short support cycle of only 18 months. An LTS release, on the other hand, is supported for a much more comfortable 3 years, and you can upgrade from LTS to LTS without ever having to touch an interim release.
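
As a quick illustration of those support windows, here is a small back-of-the-envelope script. The release dates are the official ones for each version, and the month counts are the desktop support periods described above; treat the resulting dates as estimates.

# Sketch of the Ubuntu desktop support windows discussed above.
from datetime import date

def add_months(d, months):
    # naive month arithmetic, good enough for an estimate (day clamped to 28)
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# release date, months of desktop support
releases = {
    "8.04 LTS": (date(2008, 4, 24), 36),   # LTS: 3 years on the desktop
    "9.04":     (date(2009, 4, 23), 18),   # interim: 18 months
    "9.10":     (date(2009, 10, 29), 18),
}

for name, (released, months) in releases.items():
    end = add_months(released, months)
    print(f"Ubuntu {name}: released {released}, desktop support ends around {end}")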

LTS releases are the true consumer Ubuntu

If you take the time to think about it, the message is clear: if you just want to use your Ubuntu computer without having to muck around too much with the OS, just install the LTS release and skip the interims! My MSI Wind running Ubuntu 8.04 is still working fine, but Ubuntu 9.10 would not work on it. Interim releases don't focus on stability and reliability, that's the job of the LTS release; they focus on new features. You are probably wondering why so many people install interim releases and then complain about stability? Well, it is partly ignorance, but also partly a far more sinister issue with LTS.

The problem of Long Term Support releases

There is a major problem with LTS though: if you just bought a brand new computer, chances are that some of the hardware won't work with Ubuntu 8.04. After all, the OS was released more than 18 months ago; in the meantime new hardware has appeared, and it was not possible at the time to include drivers for devices that did not even exist. As an example, I recently purchased an Acer Aspire One for my wife and wanted to replace Linpus Linux with a newer version of Ubuntu or Linux Mint. In the end I used Linux Mint 7 (based on Ubuntu 9.04) because there were too many driver issues with Ubuntu 8.04. It was easier to fix the problems with the Intel display driver in 9.04 than to sort out all the other issues with 8.04. Note that I won't upgrade the machine's OS anytime soon; I may reinstall when the next LTS release is available if it solves the few remaining issues.

The problem with interim releases

Interim releases have the opposite problem: they include bleeding-edge software and drivers, but these have not been tested by a large number of users, and as a result regressions and breakages are fairly common. Canonical started working on Ubuntu 9.10 six months ago, while Ubuntu 8.04 probably has two years' worth of troubleshooting and patching behind it. It is not difficult to guess which release will be the better one as far as stability is concerned. In 3 to 6 months Ubuntu 9.10 will be a lot better as the biggest issues are fixed by patches, but when that happens people will only talk about Ubuntu 10.04, and most of them will say that it is not as stable as 9.10. In my opinion it takes 3 to 6 months after the initial release for an Ubuntu version to be ready for mainstream users. The problem is that by that date most users consider it outdated.

So is Ubuntu broken?

I don't think so, at least not more than most other Linux distributions. The problem is that we have two kinds of Linux desktops, each with its own problems. On one side you have the sedate LTS releases that are stable and ready for the average user, but may be incompatible with newer hardware and software. On the other side you have the bleeding-edge interim releases with all their problems and breakneck 6-month release cycles. Most problems arise when someone wanting a long term solution (an LTS) is forced to use an interim release instead because of hardware compatibility. Is there a solution to this? Ubuntu could make LTS releases every year, reducing the problem. They could invest more in backporting drivers and applications to the current LTS (although this can be problematic since drivers are part of the kernel). Better driver support from hardware manufacturers could probably help too. In the end there is probably no perfect solution.


Desktop Linux needs salesmen!

Posted on Wednesday, November 4, 2009 by Erlik

Many Linux enthusiasts are despairing at the low uptake of desktop Linux and its poor availability in high street shops. This is especially frustrating because most of the people using desktop Linux consider it to be a superior solution to the Windows-based machines on offer (and it probably is). I think I have identified one of the causes of this problem though: desktop Linux needs salesmen!

To illustrate this principle I'll use the following anecdote from Rich Dad, Poor Dad:
One day a journalist was interviewing the author of that best-seller. The journalist, being a writer herself, asked the successful author what she should do to produce a best-seller like he did. Much to her surprise he told her: "You should take some sales training!" The journalist was shocked and said: "I want to be a writer, not a saleswoman, why would I lower myself by studying sales techniques?" The successful author took his book, turned it around and said: "Here it says that I am a best-selling author, not a best-writing author!"

Now let's transpose that to the world of operating systems: there are many talented developers and programmers working on desktop Linux, but there are very few talented salesmen working on selling desktop Linux. The result: desktop Linux doesn't sell! Of course, it sells to some people, the people "in the know", but it doesn't sell well to the mass market. It doesn't sell in high street shops because no one is selling desktop Linux to the big electronics retail chains. There is no advertising of desktop Linux, so there is no overwhelming demand for it, so the retailers won't stock Linux machines.

Let's try to see this from the point of view of the retailer. What he wants is to sell as many computers as possible. He can do this in two ways: either he sells a product that many people want, or he convinces people to buy what he has. Now predicting what people want is easy for heavily marketed items like iPods and iPhones, but it is much trickier for computers. When it comes to computer operating systems, a retailer is much more likely to stock something fairly generic and to convince his customers to purchase what he has, even if that is not the best product for that customer, or not exactly what that customer wants.

If we follow the reasoning above what desktop Linux needs is either:

- Salesmen who go "sell" desktop Linux to OEMs first, then to retailers and, to a lesser extent, consumers. This is the "top to bottom", sell-what-you-have approach. The problem is that you need a very efficient selling structure and organization to do that. Ubuntu had some success selling desktop Linux to Dell and Google seems to be gaining some traction with ChromeOS, but beyond that there is currently not much progress being made.

- A lot of very visible advertising to consumers, to generate a lot of consumer demand for desktop Linux. This is the "bottom to top", sell-what-the-customer-wants approach. The main problem is that this requires not only a good marketing organization but also a large advertising budget, things that desktop Linux lacks right now.

The fact is that there are many projects and organizations devoted to maintaining and improving Linux, and there are a few organizations devoted to the promotion of desktop Linux, but there are almost no organizations devoted to the sales and advertising of desktop Linux. I think that one of the reasons the Firefox browser is much more successful than desktop Linux is that the Mozilla Foundation invested much more time and energy in advertising and promoting Firefox as a product than most Linux distributions have. As long as the Linux distributions focused on the desktop do not put much more effort into their sales and advertising, desktop Linux will remain a "best writing" operating system rather than the "best selling" OS it deserves to be.


5 ways to connect to the internet while on holiday abroad

Posted on Friday, October 30, 2009 by Erlik

As I mentioned in an earlier post, I recently went on holiday abroad to the small Greek island of Kos. Since besides being a blogger I am also a stock investor, I occasionally need to connect to the internet to check my emails and stocks. Even on a small foreign island there are several internet connection options, but some are more impractical or expensive than others. Here is a review of the possibilities you will be presented with when trying to connect to the internet from abroad.

Wifi connection in the hotel room

Some foreign hotels now offer the option of a wifi connection in your own room for a "per day" fee. This is the easiest option if you have a laptop or netbook: you can surf the internet in the privacy of your own room and on your own computer. Additionally you do not have to move your computer, which can be a significant advantage if you have a traditional 15 or 17 inch laptop instead of a netbook. The downside is that unless your hotel also has a conference center or is very modern, that option is unlikely to be available.

Internet cafe in the hotel lobby

Most hotels that do not have wifi available do have a few computers in the lobby (or in a small room to the side) that you can rent. There are several disadvantages to this mode of internet connection: first, it is usually expensive (in my case more than $5 an hour); then, the computers don't look very well maintained, so the risk of viruses and keyloggers is high; the browser software may be in a foreign language; and finally, privacy is not very good if the computers are in the hotel lobby.

Independent internet cafe

In a tourist spot where internet connections in the hotels are not common, you are likely to find independent internet cafes around the hotels. These have two major advantages over the computers found in hotel lobbies: the price (less than $3 an hour in my case) and the state of the computers. Even abroad, most internet cafe owners are somewhat knowledgeable and will keep their machines free of viruses and other nastiness. Privacy and foreign language software can still be problems though.

Bar with a free Wifi connection

These will require you to look around a bit, but some bars do offer free Wifi access to their customers. If you have a netbook or iPhone this is a great solution, as it is essentially free (as long as you were going to purchase a drink anyway), and since you use your own equipment the risk of keyloggers is non-existent. The only problem is that it can get clumsy if you have a full size notebook rather than a netbook, as the tables are sometimes quite small.

Using your 3G connection abroad

Most foreign countries now have a 3G network that can support a mobile data connection, and most operators will offer mobile data roaming when their customers are on holiday abroad. This is however a very bad solution because of one main factor: cost! In Kos, for example, the data roaming cost for me is $4 per MEGAbyte. This means that anything besides checking subject lines in your webmail or the overview of your stock portfolio can quickly become VERY expensive, so this connection method has to be kept for emergencies only!
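
To give you an idea of how fast this adds up, here is a tiny back-of-the-envelope calculation. Only the $4/MB rate comes from my own situation; the page and clip sizes are rough guesses:

# Rough roaming cost estimate; sizes are guesses, the rate is the one quoted above.
cost_per_mb = 4.0          # dollars per megabyte while roaming in Kos
webmail_check_mb = 0.5     # a light, mostly-text webmail session
portfolio_page_mb = 1.0    # one stock portfolio overview page
video_clip_mb = 20.0       # a few minutes of online video, roughly

daily_light_use_mb = 2 * webmail_check_mb + portfolio_page_mb
print(f"Two mail checks and one portfolio look: ${daily_light_use_mb * cost_per_mb:.2f} per day")
print(f"Over a two week holiday:                ${14 * daily_light_use_mb * cost_per_mb:.2f}")
print(f"A single short video clip:              ${video_clip_mb * cost_per_mb:.2f}")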


Android begins to gain wider acceptance

Posted on Tuesday, October 20, 2009 by Erlik

There are many Linux-based mobile platforms available today: Maemo, Openmoko and Android. Of these, it is probably Android that has created the most noise in and out of open source circles. Until now, however, there have not been many successful devices running Google's mobile OS, and the ones that exist didn't present much of a challenge to the iPhone. Things are starting to change however.

HTC: finally some decent hardware.

The first manufacturer to release an Android-powered smartphone was HTC with the Dream G1. It was far from being a success: the operating system felt unfinished and the hardware was clunky. The lack of a virtual keyboard and of a headphone jack were unforgivable sins for something supposed to go head-to-head with the iPhone. Google and HTC quickly learned from their mistakes and released Android 1.5 with virtual keyboard support and the new HTC Hero with a standard 3.5 mm headphone jack. This better model was followed by the HTC Magic, a slightly cheaper version without the headphone jack. HTC's lineup is now completed by the HTC Tattoo, a cheaper model with Android 1.6 and a headphone jack, but with a lower resolution screen. This allows HTC to cover the full gamut of smartphones: an expensive all-rounder, an affordable surfing machine and a cheaper, music-oriented phone.

The carriers are interested

Up until now carrier enthusiasm for Android was tepid at best. That seems to have changed yesterday, when Verizon declared war on the iPhone in a commercial aired during the NFL football games. To my knowledge this is the first time a wireless carrier has made such a push for an open smartphone platform. There have been massive campaigns for closed platforms like the iPhone, but never for Linux-based systems. I am pretty sure that the people in Redmond and Cupertino are not happy right now!

Other manufacturers are joining HTC

While currently all Android phones are made by HTC, this is quickly changing. Motorola, one of the biggest global cellphone manufacturers, is fully committed to Android and is preparing to launch two models this year. The first one, the Cliq, should be available soon. The second model, codenamed "Sholes", is more of a prototype, but could still be released this year. Motorola actually dumped its own in-house Linux-based OS to join the Android cause. This is important because Motorola has a lot of experience with mass market phones and has already released very successful models like the Razr. We should also soon see Android-powered devices that are not phones, like the future "dual booting" Acer netbook.

Conclusion

With only 3 million devices sold and much criticism of its app store, Google's Android platform failed to make a major splash in its first year. I think that this was because Android was a new, untested and immature system. This year, however, the OS has matured tremendously, more devices have been released at attractive price points and carriers are finally getting on board. If 2008 was the year of the iPhone, 2010 will be the year of Android.


Why the international Kindle is an achievement.

Posted on Tuesday, October 13, 2009 by Erlik

Amazon announced earlier this week that the Kindle would finally be available for international orders, and with Whispernet included. That's right, for $279 you can now purchase a Kindle for use outside of the US. Non-US readers like me are probably inclined to say: about time! When you look at it in more detail, however, the challenges Amazon must have faced to bring the Kindle to the international market must have been huge.

International licensing rights: a nightmare.

Remember when Amazon had to pull 1984 from the Kindles of everyone who had purchased the book? That was because whoever sold them the rights to publish the book on the Kindle didn't have those rights in the first place. This shows how difficult it is for a company to manage copyrights when it has to "publish" a book in a new format like the Kindle's. When you go to the international market these problems are multiplied: not only is there the possibility that the rights to a single book are owned by different publishers in different countries, but the copyright "rules" may also be different. That means that for each single book Amazon may have had to sign several contracts with different publishers and on different terms.

Dealing with mobile providers

Filling the Kindle's virtual library with books is only half of the problem; the other half is delivering them. This means that Amazon needs a "roaming" contract with a sufficient number of international network providers to make Whispernet financially viable. Don't forget that in some countries mobile bandwidth is much more expensive than in the US. Negotiating something like Whispernet in places where mobile bandwidth is billed above $1 per megabyte must have been a real challenge! Admittedly, some "network heavy" features like the blogs will not be available to international users, and I suspect it is for that reason: the mobile bandwidth cost would make them too expensive in some countries.

Still, when looking at all the challenges that Amazon had to overcome to bring the Kindle to the international market you have to admire them for pulling it off.


European OS market share: the rise of OSX

Posted on Thursday, October 8, 2009 by Erlik

Most of you are probably aware of the huge market share gains made by Apple's OSX operating system in the US. Currently Apple practically owns the high end (above $1000) laptop market, and its US market share is well above 10%. One problem for Apple was that these gains were limited to the US market and didn't extend to the international market. That seems to have changed in the last 6 months however.

The problems of Apple in Europe

The second largest market for Apple after the US is clearly Europe, led by Germany, France, Spain and the UK. These 4 countries alone probably represent a market of 200 million people. Up to this year, however, the market share of OSX stayed between 3 and 4 percent in Europe, mostly because of high prices and low distribution. With the slide the dollar has experienced over the last 2 years, Apple products have ended up significantly more expensive in Europe than in the US. Here are a few examples: the cheapest MacBook available in Europe costs 949€, which translates to about $1300. The cheapest Mac Mini goes for 599€, or around $850! Add to this the fact that until last year Macs were not widely available at MediaMarkt (the European equivalent of Best Buy) and you have poor conditions for Mac adoption.

The rise of European Macs

Things have changed over the last year though: major electronics retailers are now selling Macs. The iPod craze allowed Apple to create a relationship with these distributors, so when consumers started to look for alternatives to the poorly accepted Windows Vista, these shops started pushing Macs. The result, according to the latest study from the AT institute, is that Apple managed to gain significant market share in Europe despite the high prices. Of the 4 countries analysed, only Spain still has an OSX share under 5%. This is understandable: Spain is the poorest of the 4 countries covered by the study, and Macs started from a much lower market share there because historically they were simply too expensive.

More than 5% market share is significant!

Why is that 5% market share significant? Because it means that OSX can be considered a mass market product in Europe, and not a US-only phenomenon. This opens the door for OSX to become a significant global player, at the expense of Microsoft Windows. In the last 6 months OSX market share gained close to 1%, Linux was either flat or progressed a little, and Windows fell. If we exclude Spain, Windows market share now hovers around 92%, an all-time low. This indicates a significant change of attitude from European consumers that does not bode well for Microsoft.


Why Microsoft won't fight Moblin

Posted on Monday, September 28, 2009 by Erlik

There have been quite a few Moblin-related announcements these last weeks: the release of the final version of Moblin 2.0, the Moblin Garage, and the preview release of Moblin 2.1. More interesting is the news from Microsoft's Silverlight team that they will develop Silverlight 3 for Moblin. Unlike Moonlight, which is a Novell-sponsored open source rewrite of Silverlight available for all Linux distributions, this looks like a binary-only package that will be developed directly by Microsoft and made available only for Moblin. Microsoft porting its technologies to Linux, WTF... Well, it doesn't look that far fetched once you think about it a little further.

Intel has a problem

For the past 20 years Microsoft and Intel have been the best of friends: Intel released more and more powerful chips and Microsoft released more and more powerful operating systems to use them. This worked well until a problem cropped up: Intel could not keep increasing processor frequency anymore. To get out of the problem Intel started putting several processor cores on one chip. This only worked to some extent in the consumer market, as most users don't benefit much from having more than 2 cores in their computers. Intel management quickly realized that if they wanted to continue selling CPUs to consumers they would have to sell more chips for less money.

The rise of the Atom

To reach that goal they created the Atom processor, a chip that would propel the netbook category to the forefront of personal computing and sell countless millions of devices. The chip could also be scaled to nettops and, in the future, smartphones, set-top boxes and consumer electronics. Intel is on the verge of attaining its goal: selling a lot of cheap devices with its processors inside. A problem appeared on the horizon however: Microsoft did not want to play ball!

The price of Windows

Most of these cheap new netbooks and nettops break the relationship that kept Microsoft and Intel happy for so many years: the chips can't support new advances in operating systems (like Windows Vista). Worse, because of the low price of the machines, Microsoft can't charge much for Windows on them, opening a market for Linux. Linux on netbooks is not much of a problem for Microsoft as long as the interface makes it clear that the netbook is a "device" and not a multi-purpose computer with a start menu and applications able to rival Windows. Once consumers started to install Windows XP on netbooks, and Linux vendors started to release distributions featuring the same interface and capabilities as a Windows computer, Microsoft had no choice but to enter the market with a heavily discounted version of Windows.

Moblin: the return to the computing device

Microsoft does not like the current situation: what they want is for the price and capabilities of netbooks to increase so that they can sell more expensive versions of Windows (such as Windows 7). What Intel wants is to continue selling more and more cheap chips, meaning that they want the price of netbooks to go down. For this they need an operating system that is not only cheap (or free) but also one that doesn't look like a traditional computer. Why? Because they don't want consumers to purchase these devices as replacements for their computers but in addition to their current desktops or laptops. Because of this, Moblin is designed with most of the capabilities of a full computer, but with an interface that is more suited to mobile use than desktop use.

What is in it for Microsoft?

This is actually a win for Microsoft too, as it clearly differentiates Moblin "devices" from Windows "all purpose" computers. Microsoft can continue to sell more expensive versions of Windows on more expensive computers with a traditional desktop interface without fearing too much competition from cheap Moblin-powered netbooks: these don't look like Windows computers and are clearly meant for a different purpose. When an OEM complains about the price of Windows 7, it can now be told: use Moblin on your line of cheap netbooks that are companion devices, and install an expensive version of Windows on higher end models that can replace a "full" computer. It is in Microsoft's interest to ensure that Moblin is a good platform for basic tasks like surfing the web (hence the Silverlight port) so that users don't install Windows in its place, as long as the most advanced computing tasks are more intuitively done in Windows.

Moblin vs Windows?

Moblin is a Linux that is very different from Windows: the emphasis is clearly on web-based applications, social networks, contacts and so on. It is halfway between a computer and a smartphone or PDA. It can of course run powerful Linux applications (otherwise users might replace it with Windows or a more desktop-like Linux distribution), but that is not the focus. Windows, on the other hand, is designed for desktop computing and powerful applications. The web takes a back seat to what is installed locally on the machine. Of course it can run web applications, just like Moblin can run local applications, but that is not the focus.

Conclusion

Moblin is the solution to Intel's problem: providing a free, lightweight and capable OS that lets them sell cheaper netbooks and devices. It also allows Microsoft to get out of the "bargain basement OS" market and to focus on a more expensive, higher end market with Windows 7. The differentiation between the two OSes is large enough to ensure that most people won't buy a Moblin device to replace their computer but to complement it. It suits Microsoft better if consumers purchase a Windows 7 desktop AND a Moblin netbook than if they purchase only a cheap Windows XP netbook.


I am back

Posted on Tuesday, September 22, 2009 by Erlik

My regular readers may have noticed that I didn't post for some time. There is a simple explanation: I went on holiday to Kos, a nice little Greek island. This made me realize all the challenges of keeping yourself connected when you are away from your home internet connection.

It also made me realize how helpful a netbook actually is in this kind of situation, as some bars offer free access to their Wifi access points if you bring your own computer. Internet cafes where they "rent" you a desktop for an hour or two cost a lot, on the other hand, so my netbook saved me quite a few bucks over the last 2 weeks!


How the web changed the way we shop: a keyboard retail story!

Posted on Wednesday, September 9, 2009 by Erlik

Recently I went into town to shop for a new computer keyboard. I could have bought one on the web, but I wanted the item before the weekend, so I could not wait for it to be delivered and decided to shop at retail. A computer keyboard is a fairly common item and I expected to find a suitable model quite easily, but I actually ended up visiting 7 different shops before I finally made a purchase. How did that happen? Ten years ago I would not have done that.

What happened was that I had done some research on the web for the best keyboards and the average retail prices of the models I might want to purchase. The first shop had a suitable model, but it was sold for a full 50% more than the average web retail price, so I walked out. The second shop I entered had very good prices, but only offered one model and it was out of stock, so I was out of luck. I then decided to try the shop around the corner that sold Macs, but they only sold Apple-branded keyboards. Apple keyboards are great, but I am not ready to pay the $50 retail price; it's only a keyboard, even if it is a stylish one. Mac accessory retail is obviously a lucrative business. The 4th shop was also flat out of stock on keyboards, and the next one happened to be closed that day.

When I entered the 6th shop my attention was drawn by a big sign: "sale today: buy two items and the second is half price". My hopes of a retail bargain were quickly squashed though: the shop only carried fairly high end wireless keyboard models that were overkill for my purpose, and I did not need anything else in the shop, so the wonderful retail offer was not so wonderful after all. Finally I purchased my keyboard on my way out of town at a large entertainment shop that is part of a local retail franchise. That shop had a wide assortment of keyboard models from Logitech and Microsoft, and the prices were close to the best deals I could find on the web. The Microsoft-branded keyboard I purchased cost me about $20 and is very pleasant and quiet to type on.


So how did the web change the way we shop at retail?

Ten years ago, when a consumer entered a shop, he or she usually had very little knowledge of what products existed and at what prices. Some compared the offers of 2, maybe 3 retailers, but that was it. The salesman could afford to sell equipment above the average price, or to carry only high end items. If the same situation had happened ten years ago I would have purchased either the overpriced keyboard from the first shop or the Apple keyboard from the third shop. Maybe I would have waited for the first shop to restock, but I doubt it.

Nowadays a lot of consumers do some product research on the web and compare prices and models on several sites before they go to the shop. When they push the store door they already know what category of product they want, how much they will pay for it, and probably which models and brands they would consider an acceptable retail purchase. The salesman's job is no longer to convince his clients to purchase the items he has in stock, but to have in stock what his clients have already decided they want to buy.

Web retail has also left the typical consumer spoiled for choice. With practically every model available on the web, consumers are much less willing to settle for a second choice or an unknown brand than before. If the model they want is not in stock they don't buy an alternative, they just go look somewhere else, confident in the knowledge that they can always buy the model they want on the web should "brick and mortar" retail fail to provide it.


Why Linux does not look like Windows

Posted on Wednesday, September 2, 2009 by Erlik

One interesting remark I have read in some comments is that Linux distributions are not successful because they don't look enough like Windows. Apparently if someone completely copied the interface of Windows and slapped it on top of Linux, Windows users would migrate in droves and Microsoft would go bankrupt. Well, not really. Let me explain.

We cannot plagiarize the Windows interface.

A lot of people agree that Microsoft copied the MacOS interface when creating Windows. Does Windows look exactly like MacOS? Absolutely not; if it did, you can bet that Apple's lawyers would quickly have sent cease and desist letters to Redmond. The same is true for Linux: if a distribution copied the Windows interface to the point that users could be confused into believing that the Linux distribution actually was Windows, that distribution would quickly be taken to court. Remember the story of Lindows? In that case it was only a name!

We should not copy the Windows interface.

There are two major reasons why Linux distributions should not blindly copy the Windows interface. First, because it is not the best interface for everybody. Most people switched to Linux for a reason, usually because they didn't like something about Windows. That may very well be the interface! Even if the Windows interface is very familiar to a lot of people, that does not make it the best interface there is!

The second reason is that Linux is different from Windows, so the interface should reflect that. For example, in Windows the "Add / Remove Programs" applet is not very important, as it is only used to remove programs. Many people may never bother with it and it is OK to bury it somewhere in the Control Panel. In Ubuntu the "add / remove programs" applet is much more important, as it is needed to install new applications and customize your computer to your purpose. As a result it should have a much more prominent place in the interface.

Delivering a familiar interface.

Some distributions like Linux Mint manage to deliver a very Windows-like interface while remaining true to Linux. The start menu, system tray and window switcher stay where they are in Windows, but the theme and colors are very different. This way new Linux users find their bearings easily, yet remain aware that they are not using Windows. The start menu has been customized so that the "add / remove programs" applet is much easier to reach, reflecting its bigger role on a Linux system.

The future

There is no doubt that the user interface is one of the most important parts of a desktop operating system, and it is one that has been somewhat neglected up to now. Desktop distributions like SUSE and Ubuntu are starting to change this by conducting usability studies and polishing the look of their desktops. Soon, perhaps, people will no longer want Linux to copy the Windows interface but the other way around.


Where have the light netbooks gone?

Posted on Thursday, August 27, 2009 by Erlik

Recently I went looking for a light netbook and got a nasty surprise: there are almost no netbooks under 1 kg anymore. If you want to travel light your options are severely limited. You can either purchase the uber-expensive Sony VAIO Lifestyle 8-inch netbook that weighs less than 700 grams, or the more reasonably priced but still expensive ASUS Eee PC Touch T91 at 960 grams. Beyond that, all the recent netbooks weigh more than 1 kg.

Where have all the light netbooks gone?

The original Eee PC 701 was well under the 1 kg mark, so what happened? Where have all the light netbooks gone? A better question would probably be "where have the actual netbooks gone?" In my opinion the hallmarks of a netbook are ultramobility, the presence of a real keyboard, and a 5 to 9 inch screen able to display most web pages without sideways scrolling.

Nowadays, however, most of the machines sold as "netbooks" have large 10 to 11 inch screens and weigh more than 1.1 kg; these are not ultramobile anymore. These are machines that a lot of people would use as their primary computing device, rather than something light that can easily be carried everywhere for a quick surf session. There is light at the end of the tunnel however:

The new Ultraportable netbooks: the smartbooks

Several manufacturers have presented prototypes of new ARM-based machines that are much closer to the original light netbook concept of 2 years ago. There is the Mobinnova Elan, which offers a battery life of 10 hours in a very light package: only 900 grams. If you plan a trip to Japan this autumn you may also be interested in the new Sharp NetWalker and its 5 inch 1024 x 600 touchscreen display.

For me the conclusion is clear: there are no reasonably priced light netbooks based on the x86 architecture anymore. If you want real ultramobility you will have to get an ARM based machine or invest a large amount of money in a Sony Vaio.


Does an upgrade to Windows 7 kill a netbook's battery life?

Posted on Wednesday, August 26, 2009 by Erlik

One of the most touted advantages of Windows 7 over Windows XP and Linux is improved battery life. This may be true for future netbooks optimized for Microsoft's new operating system, but don't expect any gains from installing Microsoft's latest release on your current netbook! Many people have actually reported reduced battery life on their netbooks after an upgrade to Windows 7.

I suspect that the much-touted improved battery life of Windows 7 has a lot to do with better communication between the OS and the hardware, especially the BIOS and the graphics chip. This would allow the OS to take better advantage of the hardware's energy saving features. The problem is that you only get these advantages if the BIOS and the hardware drivers are optimized for Windows 7. When this is not the case (as with current netbooks designed for Windows XP and Linux), Windows 7 actually delivers reduced battery life.

We should keep in mind that even if Windows 7 is better optimized than Vista, it still uses far more resources than Windows XP or Linux. That it consumes more power than those operating systems should not be a surprise. Some netbook manufacturers have understood this well and intend to stick with Windows XP until the release of the next generation of netbook processors from Intel.

To conclude, I would say that installing Windows 7 on current netbooks is a risky proposition: on some machines the newer OS will run fine, on some it will run but with reduced battery life, and on some it will slow to a crawl once you open a few applications. In my opinion it is not worth the trouble.


None too soon: Moonlight 2 finally reaches beta

Posted on Tuesday, August 18, 2009 by Erlik

Miguel de Icaza's team has released the first feature-complete beta of Moonlight 2.0.

I must say: it is about time! Silverlight 3.0 for Windows was released last month. Although I really admire the work of Miguel and his team, Linux is still the poor relation when it comes to Silverlight support.

My opinion is the following: if Microsoft wants to compete with Adobe Flash they need to offer at least the same level of service as Adobe. Since Adobe releases Flash runtimes simultaneously on Windows, Mac and Linux, the minimum that Microsoft needs to offer to be credible is the same simultaneous release schedule. This is obviously still not the case!

The only silver lining I see here is that unlike Flash, Moonlight is open source. This may allow the runtime to be easily ported to other computing platforms such as ARM. It is also possible to replace the video decoders provided by Microsoft with your own if you compile Moonlight yourself. This means that someone could create a version of Moonlight that takes advantage of video decoding acceleration APIs, like Nvidia's VDPAU.

That said, the delay between the Windows and Linux releases is still too much of an issue for me to accept Moonlight / Silverlight as a credible alternative to Flash. If Microsoft was serious about competing with Adobe these delays would not exist. As it is now, Silverlight looks more like an attempt by Microsoft to draw developers' attention away from Flash than a genuine effort to create a true multi-platform runtime.

I do think that if Microsoft really wanted to they could make a success out of Silverlight, but that would require them to stop favoring their own platforms and become truly agnostic: support all desktop and most mobile operating systems, as well as most consoles! Granted, they support the Mac, but Macs only compete with Windows in the high-end consumer computing segment. Where is Silverlight for the Wii browser? What about the PS3? What about Symbian and Android smartphones? What about the iPhone, are they even working on it? As long as Microsoft does not solve these issues Silverlight will stay an also-ran.


Reader question: buying a computer for online video

Posted on Friday, August 14, 2009 by Erlik

Kevin Asks:

I read your article on tech-no-media about streaming video from the internet.

I am in the market for a new computer. Streaming video from sites like NBC.com, ABC.com, and Fox.com will be done a lot.

Would there be a noticeable difference between these two systems?

Intel Pentium Dual Core E5200 with a X4500 HD video card

Intel Core 2 Duo E7400 with a NVidia GeForce 9300M GS video card.

What would the percentage difference be in video quality?

Answer:

From a processor point of view the difference should be minimal, as both should be able to decode 720p Flash video correctly as long as you don't run many CPU-intensive programs while watching the online video. Activities that don't use the CPU a lot, like web browsing or typing, should be OK, as these would happen on the second CPU core and thus not interfere with video decoding.

The computer with the Nvidia card could have an advantage in the future, as the next version of Flash should be able to use video acceleration on Nvidia cards. When that happens you should be able to watch even 1080p online video smoothly no matter how loaded your CPU is.

From a quality perspective the limit will probably be the high compression of the online video and your network bandwidth rather than your computer's ability to decode the video. Again, the Nvidia card may (nothing is certain) compensate better for artifacts in the future, but even if this is implemented the difference should be minimal.

So if you don't intend to multitask a lot when watching online video the cheaper computer should be OK. If you do intend to multitask heavily while watching video, pick the one with Nvidia graphics and the better processor.
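
If you want to check whether a machine you already own has enough CPU headroom for this kind of playback, one quick way is to watch per-core usage while a video is running. Here is a minimal sketch using the third-party psutil library (the sampling length is arbitrary); if one core stays pegged near 100% during playback, the CPU rather than the network is likely your bottleneck:

```python
#!/usr/bin/env python3
"""Print per-core CPU usage while you play a video in another window.

Requires the third-party psutil package (pip install psutil).
"""
import psutil

for _ in range(30):  # watch for roughly 30 seconds
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print("  ".join(f"core{i}: {p:5.1f}%" for i, p in enumerate(per_core)))
```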


Does CPU power matter anymore?

Posted on Tuesday, August 11, 2009 by Erlik

A few years back, when you wanted to buy a new computer the first thing you looked at was the CPU. This was because the CPU was the component that had the most impact on your computing experience. In recent years however this has changed, to the point that on some computing platforms (like the netbook market) the CPU has become a standardized, commoditized, unimportant item. Let's look at how this happened.

Why the CPU does not matter anymore

The main reason consumers wanted more and more powerful CPUs was simple: it allowed them to do more things faster. The problem is that in most cases the current bottleneck on computing productivity is not the processor anymore. The most CPU-intensive activities a typical user is likely to do are playing video and games. In both of these cases the largest part of the computation is not handled by the CPU anymore but by the GPU. The other thing these activities require is a constant flow of data: this is handled by the chipset and the storage devices. Most other computing activities actually require minimal CPU power.

A powerful CPU can be a disadvantage

Another issue is that a powerful CPU comes with a lot of disadvantages. There is of course the price, but also power consumption. This is especially true for mobile computers that are supposed to run on batteries. The recent rise of the netbook is a clear indication that consumers now favor portability and long battery life over processing power. The only progress the CPU can make here is by reducing power consumption, and if you look at ARM processors we are already pretty low on that front.

No help from the OS

In the past new operating systems required more and more powerful CPUs, but as of 2009 this is not the case anymore. Most modern Linux distributions will run perfectly on a 7 year old 1 GHz CPU. Even Windows 7 does not require a very powerful CPU to run well; what it does require is a huge amount of RAM and fast storage! This means that the next generation of Windows 7 based computers won't benefit much from an improved processor.

What to look for in a new computer in 2009 - 2010?

When shopping for a new computer the average consumer is now looking at many things beyond the CPU. In the portable market battery life, size and weight are important factors. If you wish to run Windows Vista or Windows 7 you should also look hard at the amount of memory: 2 GB is really the bare minimum. If you intend to play games and video the GPU should be of special interest. Finally, if you intend to handle large media files you should probably wait until next year and purchase a computer equipped with a super fast USB 3.0 bus to connect external hard disks. Another point to take into account is price: most consumers are more price sensitive these days, and decent computers can be had for well under $500, especially if Linux is enough for your computing needs. The CPU is not high (if at all) on the list.

Read more in the Hardware category

Image cc by pasukaru76


Net Applications Changes Methodology: Windows & Linux Market Share Rises

Posted on Thursday, August 6, 2009 by Erlik

I have always claimed that the widely used operating system market share statistics from Net Applications were not really accurate when it came to MacOS and Linux market share. In my opinion two factors prevented an accurate worldwide market share from being produced: Linux browsers potentially ignoring the counter and improper geographical distribution. The second problem has been fixed and it does impact the market share numbers significantly.

The problem was that a large portion of the website visitors being counted come from the USA and other English speaking countries. This means that the worldwide data was more representative of the USA than of the rest of the world. The trouble is that the OS market share is currently very different in the USA than in the rest of the world: the Mac OSX market share is much higher there, but the Windows and Linux market shares are lower. Now that the data has been adjusted, MacOSX market share has fallen from 10% to around 5%, and Linux market share has increased slightly to 1.05% (with a peak at 1.17% in May). This is much more consistent with the data provided by other firms such as XiTi Monitor, which recently placed the Linux market share at 1.2% and MacOSX at 4.6%.
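
To see why geographical weighting moves the numbers so much, here is a toy calculation. Every figure in it is invented purely for illustration (these are not Net Applications' real data); the point is only that re-weighting an American-heavy sample by each region's share of internet users shifts the worldwide totals:

```python
#!/usr/bin/env python3
"""Toy illustration of geographic weighting of OS market share.

Every number below is made up; it only demonstrates the mechanism.
"""

# Hypothetical OS shares measured from visitors in each region
measured = {
    "USA":           {"Windows": 0.85, "MacOS": 0.10, "Linux": 0.010},
    "Rest of world": {"Windows": 0.94, "MacOS": 0.04, "Linux": 0.012},
}

# Weight of each region in the raw sample vs. in the real internet population
sample_weights   = {"USA": 0.60, "Rest of world": 0.40}  # US-heavy sample
adjusted_weights = {"USA": 0.15, "Rest of world": 0.85}  # closer to reality

def worldwide_share(weights):
    totals = {}
    for region, shares in measured.items():
        for os_name, share in shares.items():
            totals[os_name] = totals.get(os_name, 0.0) + weights[region] * share
    return totals

for label, weights in (("Unadjusted", sample_weights), ("Adjusted", adjusted_weights)):
    shares = worldwide_share(weights)
    print(label, {os_name: f"{value:.1%}" for os_name, value in shares.items()})
```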

Astute readers will notice that the Linux market share numbers are still much lower than the ones provided by W3Counter, which place Linux around the 2% mark. The explanation advanced by some is that it is in fact Firefox's market share that is not correct. It is feared that some Firefox add-ons like Adblock and NoScript prevent a visitor from being counted. Since these are widely used by Firefox and Linux users, their market share would be underestimated, and this would account for the difference. Personally I would trust the numbers from W3Counter the most, but if you don't know what to do you can always try to estimate the Linux market share by Twibbon.

Read more in the Linux category


Open Source: many advantages beyond price

Posted on Wednesday, August 5, 2009 by Erlik

When people think about adopting an Open Source solution, the first factor that comes to mind is the price: it is usually cheaper than proprietary alternatives. What a lot of people fail to consider is that Open Source has many other advantages that can be much more important than the price factor. Let's have a look at a few of them:

No forced end of life

One of the most overlooked advantages of Open Source is that there is no real end of life for any project. If a driver is released as Open Source and is part of the Linux kernel, your hardware will probably work out of the box for as long as you care to use that piece of equipment. In the proprietary world it is common for hardware manufacturers not to release a decent driver for older hardware on newer operating systems, in order to drive sales of newer models. When Windows Vista was released, Creative Labs shipped a Vista driver that did not support all the features present in the XP driver for its older hardware, so consumers were forced to buy the newer models just to get on Vista the same features their old hardware had on XP. This could not have happened if the drivers were Open Source, as any developer would have been able to port the XP driver to Vista or to modify the Vista driver to support all of the old hardware's features. The same is true for software: even if the company that built your software does not support it anymore, as long as a developer is willing to maintain it you are fine, and if you really need that software nothing prevents you from hiring that developer.

True competition rather than lock in

One of the easiest ways for any software company to make long term money is software 'lock in'. The idea is to sell you a piece of software without telling you its inner workings or how to convert the files it produces to other formats. This means that the original vendor is the only one that can sell you upgrades or maintenance on that piece of software, since it is the only one that knows how it was built. That exclusivity often comes at a premium price, since the software vendor has virtually no competition for your business. With Open Source the inner workings of your software and of the files it produces are known, meaning that several companies can offer support and maintenance for it, as well as develop and sell compatible alternatives. This creates real competition, encourages innovation and brings prices down for the consumer.

Security transparency

Do you know if Windows is secure? Do you know if it has any back-doors? No, you don't; only Microsoft knows that. With closed source software you have no way to know whether the software was properly tested for security holes or whether unwanted code has been added to it. With Open Source everything is transparent: you know exactly what you are running and anybody can easily look for security vulnerabilities.

The right to fork

What do you have to say about the direction Windows has taken in recent years? Not much! If you do not like what Microsoft did with Vista and Windows 7, too bad, it's that or nothing. With Open Source software you can fork. This means that if you do not like the direction a piece of software is taking, you can always create your own version and push it in the direction you like. Of course this comes with some problems: it causes fragmentation and reduces the resources that can be invested in each fork, but often forking is actually not necessary. When the developers or maintainers of an Open Source project realize that a significant part of their users are unhappy with what they are doing and are ready to fork, they sometimes change their plans to make everyone happy. Sometimes forks also merge after some time, or the less popular fork dies. This means that users actually have much more control over the direction in which the Open Source software they use evolves than with closed source software.

These are only some of the advantages of Open Source. This is why I would always prefer to purchase hardware for which there is an Open Source driver, or applications that are Open Source. It is not only a question of price!


Tech-no-media most popular articles: July 2009

Posted on Monday, August 3, 2009 by Erlik

It is again the end of the month, the time for me to recap the 5 posts that were the most successful on Tech-no-media in July.

1) Glassbuntu: design a dark crystal Gnome theme for Ubuntu or Linux Mint

2) Microsoft reminds us that Windows is f*cking expensive

3) Linux is not an Operating System

4) The Free Netbooks are Coming

5) Linux Netbooks: 3 paths to a bright future

I must say that I am impressed by the result of the Glassbuntu article. It is the first post in a series on theming, and in light of this success I think the series has some big potential.

The other articles are mainly opinion pieces (which is normal for an opinion blog). Linux, operating systems and netbooks still seem to be popular topics, so expect some more articles on these. My article on ChromeOS just missed the top five, which surprised me somewhat given all the media noise that surrounded Google's OS and the number of positive votes it received on Reddit.

In August expect the next part of the series on theming Ubuntu and Mint and more of the same opinion articles. I also plan to continue the trend of publishing fewer "long articles" but improving their quality.

For more posts like this subscribe to Tech-no-Media (rss) or Follow me on Twitter / identi.ca


How to extend the life of a lithium ion laptop battery?

Posted on Wednesday, July 29, 2009 by Erlik

A sad fact about lithium ion laptop batteries is that they degrade over time. A laptop battery that lasted 4 hours 3 years ago probably won't last much more than 2 hours now. When this happens to a Nintendo DS battery that you can replace for $10 it is not a major problem, but with a lithium ion laptop or netbook battery that often costs more than $100 to replace it is a major inconvenience. Another problem is that a laptop's lithium ion battery often has a unique shape, so even finding the proper replacement battery can become a problem. All this means that it pays to take steps to maintain your laptop's lithium ion battery from the start.

Why does a lithium ion laptop battery degrade over time?

The reason is quite simple. A lithium ion battery works by moving lithium ions and electrons from one electrode of the battery to the other. The electrons move through your laptop's circuitry to power it, while the lithium ions move through a fluid (the electrolyte) to balance the electron movement. If the lithium ions can't move through the fluid then the electrons can't move through your computer either. The problem is that the fluid that allows the lithium ions to move from electrode to electrode degrades and dries out over time. As it becomes more and more difficult for the lithium ions to move, it also becomes more and more difficult for the battery to power your laptop.

What makes the fluid containing the lithium ions degrade?

The fluid containing the lithium ions will naturally degrade over time at room temperature, but several factors will accelerate the degradation. The first one is heat: the hotter the laptop battery, the faster the fluid dries out. The second factor is charge cycles: recharging your laptop battery generates significant heat in the battery and helps degrade the fluid. Leaving the battery constantly charged also has a negative effect on the life of the fluid.

What can I do to preserve the life of my laptop's lithium ion battery when purchasing it?

Always buy your battery or your computer from a source with a high volume of trade. If the battery or laptop you just purchased spent a year in the store's storage room, it is probably already degraded.

Do not buy a replacement battery years in advance unless you are convinced that you won't be able to buy the proper battery by the time you need it.

If you purchased a replacement lithium ion battery that you do not intend to use right away, leave it in its sealed packaging and store it in a dry place in your fridge; this will reduce the fluid degradation tremendously.

If you can purchase a laptop that uses a lithium ion polymer battery, choose that, as polymer batteries degrade more slowly. With these you can expect 5 years of useful battery life instead of 3 for a normal lithium ion battery.

How to protect my laptop's lithium ion battery when using it?


The first time you charge your laptop's lithium ion battery, let it charge overnight; then the first time you use it, let it discharge completely. This will ensure that your battery is properly calibrated.

If you will use the laptop on mains power for an extended period of time (several days or weeks), use the battery until it is about 60% charged, remove it from the laptop and store it in a cool, dry place such as a basement. If you can store the battery in a sealed package you can keep it in the fridge.

Once every two months or so, let the battery discharge fully to recalibrate it. At one point I had an IBM laptop that did this automatically.

Never leave your laptop or its lithium ion battery in a hot place like the trunk of your car during the summer!

Can I fix a degraded lithium ion battery?

If your laptop's lithium ion battery has already seriously degraded, you can try to charge and discharge it completely once or twice, as this sometimes helps to recalibrate it. If this doesn't fix the problem then the battery is permanently degraded and you will need to purchase a replacement.
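
Before buying that replacement it can be useful to check how much capacity the battery has actually lost. On Linux the kernel exposes both the design capacity and the current full-charge capacity in sysfs; here is a minimal sketch assuming the battery is /sys/class/power_supply/BAT0 (depending on the driver it provides either energy_full/energy_full_design or charge_full/charge_full_design files):

```python
#!/usr/bin/env python3
"""Report how worn a laptop's lithium ion battery is, using Linux sysfs.

Assumes the battery is /sys/class/power_supply/BAT0; adjust the path if
yours is named differently (BAT1, CMB0, ...).
"""
import os

BATTERY = "/sys/class/power_supply/BAT0"

def read_value(name):
    with open(os.path.join(BATTERY, name)) as f:
        return int(f.read().strip())

for full, design in (("energy_full", "energy_full_design"),
                     ("charge_full", "charge_full_design")):
    try:
        current, original = read_value(full), read_value(design)
    except FileNotFoundError:
        continue
    print(f"The battery now holds {current / original:.0%} of its design capacity")
    break
else:
    print("No capacity files found; check the battery path for your machine")
```
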
Image cc by playerx


Friday Fun: Linux OS Market Share by Twibbon

Posted on Friday, July 24, 2009 by Erlik

One of the big problems with Linux is that it is difficult to estimate its market share. Web metrics give vastly different numbers based on the methodology used: from 2.11% for W3Counter to 0.99% for Netstats. I thus propose a fun way to gauge the relative importance of operating systems: by Twibbon.

What the hell is a Twibbon?

A Twibbon is a little logo that you can add to your Twitter profile image to show your support for a cause, country, browser or operating system. You can have a look at my Twitter profile for an example. On the Twibbon website you can see how many Twitter users support the same cause as you, and thus conclude how important it is. Why only count Twitter users? Because Twitter is in with the hip people; anybody who is or will be somebody is on Twitter. These are the influencers, the people that matter! So without further ado let's see how the Twibbon numbers compare:

As of 23 July 2009 the Twibbon scores for each operating system are:

I'm Linux: 94 supporters
I'm a PC: 124 supporters
I'm a Mac: 476 supporters
I use iPhone: 16 supporters

As you can see, the most popular operating system is by far MacOSX, followed by Microsoft Windows and Linux. The iPhone does not yet have many supporters, but I expect this to grow. Just for fun, let's translate this into percentages and compare with the Netstat numbers:

Linux: 13% Twibbon market share, 1% Netstat market share
Windows: 17% Twibbon market share, 88% Netstat market share
MacOS: 67% Twibbon market share, 10% Netstat market share
iPhone: 2% Twibbon market share, 0.6% Netstat market share

This is absolutely not scientific of course, but it is indicative of something: despite its huge market share, Windows does not seem to have a lot of fans. If Netstats's numbers were actually correct (I doubt they are) and Windows was generating as much enthusiasm as MacOS, it should have 4189 supporters. It should have 8272 supporters if it had the same level of support as Linux.
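
For anyone who wants to check the arithmetic, those "expected supporters" figures simply scale each system's supporters-per-point-of-market-share ratio up to Windows' 88% Netstat share:

```python
#!/usr/bin/env python3
"""Reproduce the back-of-the-envelope 'expected supporters' numbers."""

twibbon_supporters = {"Linux": 94, "Windows": 124, "MacOS": 476}
netstat_share = {"Linux": 1.0, "MacOS": 10.0, "Windows": 88.0}  # percent, rounded

for other in ("MacOS", "Linux"):
    fans_per_point = twibbon_supporters[other] / netstat_share[other]
    expected = fans_per_point * netstat_share["Windows"]
    print(f"With {other}'s fan ratio, Windows would have {expected:.0f} supporters "
          f"(it actually has {twibbon_supporters['Windows']})")
```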

What this shows is that Windows is currently coasting on its past market share gains and on its dominance in the enterprise and the retail channel. If the young, hip Twitter generation of today had shaped the computing landscape, it would be vastly different from the one we know. Most of them don't seem to find Windows very interesting, but are willing to support MacOS or Linux.

In the end this is not much more than a bit of Friday fun, but maybe it is cause for a bit of worry at Microsoft's headquarters and a bit of rejoicing in Cupertino and in Linux land.


Miro 2.5 released

Posted on Thursday, July 23, 2009 by Erlik

This is just a quick post to announce the release of Miro 2.5! There are numerous improvements to this famous video aggregator, the most noticeable being that Miro now starts and updates feeds faster. This is probably the most important update, as on my netbook Miro took about 10 seconds to start and about a full minute to update my feeds! (I have a lot of feeds)

Another feature that netbook users will appreciate is the ability to download videos from YouTube for offline watching. This is a neat way to avoid the performance problems linked to the playback of full screen Flash videos on netbooks. Granted, there are Firefox extensions that do this too, but it is very practical to have the videos organized in the Miro library.

Another noteworthy feature is that Miro now has better support for audio feeds. Audio podcasts will now be present in the Miro guide, so they should be easier to find. Previously Miro was more of a video aggregator, but this update puts it on an equal footing with iTunes.

You can download Miro from here and have a look at the release notes here. Note that it looks like Miro's Ubuntu repositories are not yet updated. You can either build the application from source or wait for the update to hit the repositories.


The Linux Kernel and Open Source Drivers

Posted on Wednesday, July 22, 2009 by Erlik

There has been a lot of talk about the Linux kernel and Open Source drivers this week. Most of it was about Microsoft releasing drivers under the GPL v2 for inclusion in the Linux kernel. As pointed out by Steven J. Vaughan-Nichols, this was planned for a long time and will benefit Microsoft as much as (if not more than) Linux. The only important thing this shows is that Microsoft is ready to embrace the GPL if it serves its business interests.

Much more interesting is the discussion on Phoronix about the case of the new VIA Chrome 9 DRM (Direct Rendering Manager). The gist of the story is this: VIA has a binary 3D driver for its Chrome 9 IGP, but they want some of the code (the DRM) to be included in the Linux kernel. The DRM code is open source, but the driver itself is not. Now, without the driver the DRM is useless, meaning that if it is accepted the kernel would contain code whose only purpose would be to run VIA's binary driver.

This raises a lot of issues: how would this code be maintained? What if the kernel part of the code needs to evolve and updates to the driver are required? What about security? VIA could solve some of these issues by providing complete documentation of the binary Chrome 9 driver, but currently this documentation is not available: critical pieces are missing.

Not designed for Open Source

The problem is that VIA did not design their product with Open Source in mind. Instead of developing their own technology from scratch, they licensed another company's technology for use in their product. At the time they did not plan for open source drivers and agreed that the third party's code would have to remain secret. This probably did not matter much a few years ago, but since then the market share of Linux has grown a lot, and now VIA is stuck between a rock and a hard place.

If they can't release their driver as Open Source they can't include it in the kernel, but to open source their driver they need the permission of the company that owns the technology they licensed. That company probably has no interest in open sourcing its technology, so VIA is stuck. They could probably rewrite some of the driver themselves, but it would cost a lot of resources. Intel is experiencing the same issue with their Poulsbo (mobile) driver: they used third party technologies that they cannot currently release as open source. AMD encountered similar issues with their documentation efforts: some information licensed from third parties has to be cleared before it can be released.

Back to the kernel and Microsoft

This brings us back to the kernel and Microsoft. Any Open Source driver that is incorporated in the kernel enjoys several advantages in the Linux world: it works out of the box and it is maintained pretty much forever. This is why Intel's integrated graphics are so popular with Linux users. The chips do not offer great performance, but a full-featured open source Intel driver is shipped with all distributions, while the AMD and Nvidia Open Source drivers are currently much more limited. If you want full performance and all features with these graphics chips you need to install a binary driver that may or may not work with your specific distribution and hardware.

So Open Source drivers are better and will help sell hardware to the Linux community, but Open Source is not something that you can add as an afterthought. You need to ensure from the start that all the technology you intend to use can be released. This needs to be specified in contracts with all third-party technology providers and taken into consideration at all stages of product development. This requires some effort on the part of the hardware manufacturer. Now ask yourself this question: would Microsoft have released their virtualization drivers as Open Source if they could have been included in the kernel as binary drivers? Probably not! (Especially if, as some suggest, Microsoft had little choice.)

By requesting an open source driver (or the documentation to build one) as a prerequisite for the inclusion of the VIA DRM in the kernel, the Linux community not only ensures that the kernel remains portable and secure but also encourages device manufacturers to make their products "Open Source compatible" and to eliminate third party technology that can't be released. It is possible that rejecting VIA's DRM would cause some pain to current VIA users, but it would send a strong message to device manufacturers: plan Open Source support in your products and you will gain access to the kernel; remain closed and you will be at a disadvantage. This does not mean that all binary drivers are bad, sometimes there is no other way to make hardware work, but they should not expect to be treated as first-class citizens in the Linux world.
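
As a side note, if you are curious whether your own system currently relies on such a binary driver, the Linux kernel keeps track of this in its "tainted" flags. Here is a minimal sketch reading /proc/sys/kernel/tainted, where bit 0 is set as soon as a module with a non-GPL-compatible license has been loaded:

```python
#!/usr/bin/env python3
"""Check whether the running Linux kernel has loaded a proprietary module.

Reads /proc/sys/kernel/tainted; bit 0 (value 1) is set once a module
with a non-GPL-compatible license (for example a binary graphics
driver) has been loaded.
"""

with open("/proc/sys/kernel/tainted") as f:
    flags = int(f.read().strip())

if flags & 1:
    print("Kernel is tainted: a proprietary (binary) module has been loaded")
else:
    print("No proprietary modules detected in this kernel")
```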

For more posts like this subscribe to Tech-no-Media (rss) or Follow me on Twitter.

Image cc by Henrique Vicente
