Wednesday, November 09, 2011

Australia in High Dynamic Range

I've been really impressed with the work +Trey Ratcliff has been sharing on Google+, which led me to wonder if HDR might be a way to add some "punch" to my landscape photography. My recent holiday back to Oz offered the perfect excuse to experiment. I've put my best HDR photos so far into this Google+ album.

I've been working on my photography skills for a while now, but there are certain scenes that I've found particularly challenging. Dramatic sunsets, photos taken in bright sunlight (or towards the sun), or photos taken on dark, gray days have been a struggle to capture adequately on camera.

HDR seemed like a technique that might help me capture on film what I could see with my eyes, and what better place to practice than on a Western Australian beach at sunset?

We also spent some time in Ballarat, over in Victoria. In recent years rural Australia has been known for a crippling drought followed by devastating floods, but when we visited it was a velvet field punctuated by sapphires, with the dams and lakes all filled to capacity.

We were staying within the shadow of Mount Buninyong, which provided the perfect opportunity to experiment with some midday shots taken from a high vantage point.

Bird's eye views are always stunning in person, but I've had difficulty turning that view into interesting photos -- particularly as I seldom make it to these places at dusk or dawn when the natural light would be more favorable. My initial results were definitely encouraging.

Helpfully, my trusty Canon EOS 500D has an exposure bracketing option that lets me take three consecutive pictures using different exposures. Photoshop comes with an automation plugin that merges multiple exposures to produce HDR images.

I learned a few things from my experience so far. The first - somewhat obviously - is to look for scenes with an abundance of color depth. Rich greens offset by deep blues and grays look fantastic.

Somewhat less obvious is the effect that a hint of rich color can add to an otherwise monochromatic scene. HDR will add layers of depth to grey clouds and dark seas, so a small splash of red or green can produce dramatic results.

I also learned that taking portrait photos in HDR is much more difficult. Close-ups can be incredibly unflattering as skin tones are exaggerated and people start to creep into the uncanny valley.

It's also tricky to photograph scenes with movement. When you merge the images, slight differences are often shown up as artifacts or ghosting. A steady hand is a must (my best results used a tripod), and shooting toward the sun will minimize your exposure times. Looking at the images blown up on my 24" monitor, it's also clear that there are more annoying pixel artifacts, halos, and ghosting that I need to work on to improve the final effect.

Overall, I need to practice to get better results, but the progress so far is promising and HDR is definitely a tool I'll be adding to my amateur photography tool-belt.

[I've disabled comments here in favour of using Google+. Feel free to join the conversation over there.]

Tuesday, November 08, 2011

Memories in the White Space

A distinct melancholy accompanies me as I sort through the images and artifacts of my youth.

My wife and I left Australia almost 7 years ago. We lived in London and now the Bay Area, but for me home is still Perth. We're back this month—the first time in three and a half years—and I'm using the opportunity to free my parents of some of the detritus I left with them before taking off in 2005.

Our visit has been timed to coincide with the wedding of one of my very best friends. I've been friends with the groom and most of his side of the wedding since our first year at Duncraig Senior High. We were all members of the Academic Extension program (a particularly nefarious way to target those of us most likely to be on the fringes of high school social life and stigmatize us further by segregating us into separate classes.)

When we all get together for some quiet drinks the night before the wedding it's only a matter of minutes before my accent has slowed and thickened, and we're poking fun and chatting as though I'd never left. The same pattern repeats as we catch up with close friends I'm lucky to see every few years. We share a hug and a beer and talk about their new kids, houses, fiances, spouses, and business ventures with an easy comfort that makes it seem like only a week or two has passed since we last hung out.

Back at my parents' house, amongst the polyhedral dice, Star Trek VHS tapes, and school assignments are 10 A3 scrapbooks filled with photographs of me, my friends, and family from birth until I moved out at 21.

I grew up in the age before digital cameras and smartphones captured every moment (magic or otherwise) ready for posting to Facebook. As teenage boys, my friends and I were particularly adept at avoiding my mum's instamatic. As a result, flipping through the stacks of photo albums is a surreal experience. Christmases, birthdays, high school balls, and graduation ceremonies are all captured in full colour—but what strikes me most are the memories that live in the white space between the photos.

A thumb-obscured image doesn't capture the experience of all-night LAN parties spent playing Doom 2. A single photo of us playing pickup basketball (without the hoop in frame) is a faint reminder of the hours spent on court and the four broken arms collected between me and the aforementioned groom during games; plaster-cast testimony to our passion for the game.

A shot of me posing, awkward and gangly, in my inter-school sports uniform captures nothing about the day, but brings back the crowd of apathetic high-schoolers gathering around the high-jump mats, and the rush (and not a small amount of surprise) I felt as they genuinely cheered me on to jump my own height and break the school record.

There aren't any photos to commemorate the long nights spent playing AD&D, or the Friday nights we all spent at WesTrek watching boot-leg videos of each new episode of TNG, but the Player's Handbooks and mountains of Star Trek videos, books, and technical manuals bring back the memories all the same.

20 years. That's how long I've known some of my closest friends. Two thirds of my life. I'm a proper geek, so I don't find it easy to build these effortless friendships; the comfort of sliding back into them is tempered by the knowledge that it'll be years until I can next hang out with some of my best friends.

Email, Facebook status updates, and Google+ will help us stay in touch until the next time we voyage the 8,000 miles back home. When that happens there'll be more kids to meet, new houses to tour, and new businesses to hear about. We'll hug, share a few beers, and it'll be like we never left.

[I've disabled comments here in favour of using Google+. Feel free to join the conversation over there.]

Thursday, July 07, 2011

Obligatory Post Speculating on Google+

I love product launches. It's the perfect time to speculate with no inconvenient research or history to get in the way.

This goes for everything on this blog, but it's probably worth highlighting in this instance that these opinions are my own. They do not represent the thoughts and opinions of Google, the Google+ team, or anyone else who works at Google.

My history of speculating on products tends to be bullish on Google and cynical of social. I thought Android and Wave were going to change the world, and that Twitter was a waste of time.

Twitter with conversations

Despite my initial reservations I'm a big user of Twitter, but I find that most of my interaction there is effectively anonymous - I'm either reading things by interesting people I don't know, or sharing things I think are interesting with people I've never met.

I've found that half my use of Google+ works similarly - by posting publicly and creating a "My Stream" circle full of interesting folks who I don't know personally.

Where I think Google+ adds value is with threaded conversations. By attaching the conversation that emerges from each post, anonymity is reduced and the processes of sharing and reading are suddenly more social.

Facebook with sharing controls

I remember quite clearly the moment my use of Facebook went from regular to sporadic. My manager's passing comment on my most recent status update (something along the lines of "I'm so bored I'm considering setting myself on fire just to liven up my day") prompted this blog post.

I've always maintained a policy of only adding people I know and would recognize in person as Facebook friends. Nonetheless, when your extended family, school friends, and current / former work colleagues are all reading the same stream, and seeing the same pictures, the intersection of "appropriate material" rapidly tends towards zero.

Using circles to fragment my audience has been an elegant solution for me.

I've created the obvious circles like "friends", "family", and "Googlers", but I've found smaller ad-hoc circles particularly useful when socializing.


In the paleolithic age we used email to arrange social events and share the photos afterwards, but it never really worked.

Facebook is a good alternative, but it requires adding people you don't necessarily know to your "friends" list.

Being able to create an ad-hoc circle (or just add individual people to a post) makes it easy to work out the details for a 4th of July BBQ - and then post the photos - all in one place.

I've not spent a lot of time with Huddles or Hangouts yet, but they seem a natural extension. I can see using Huddle instead of SMS to let folks know you're running late, to get parking advice, or confirm the orders for a lunch-run. I've used similar products (most notably Beluga and GroupMe) to coordinate amongst a large group at conferences or events like MWC or Google I/O.

Social photo sharing without wanting to punch your screen

Photos are probably the reason most folks joined Facebook to begin with. They're also the reason many people hate Facebook.

I'll happily rave about the Google+ photo experience, which is awesome. It's easy to share photos with just a small group, or post your best amateur photography for the world to critique.

I don't want to be social at work

For all the good uses, let me highlight a couple that I don't see catching on.

I admit to having been a little skeptical of Google+ during the dogfooding stage. With 20/20 hindsight, I think a lot of that had to do with it being effectively a corporate social network. My email inbox is full enough as it is; I really don't need another stream to monitor in order to be involved in work conversations.

This is not a blog

You'll note that I haven't posted this directly on Google+.

I don't want to read your essay in my social stream, just give me an abstract and link to your blog. For added bonus points, make sure your blog links back to your Google+ profile.

In Conclusion

Twitter is entirely public and as a result my interactions there are regular but tend towards the impersonal. Facebook is limited to people I know so the interactions are more personal, but (increasingly) less frequent.

Google+ lets me choose which group of people I'm comfortable sharing something with to a degree that lets me have regular, personal conversations.

As many of you have no doubt noticed - that makes for an addictive combination.

Wednesday, July 06, 2011

London 2005-2011 in Photographs

I really like the photo sharing and viewing experience in Google+ so I decided to sort through my massive collection of "London" photographs and share some of my favorites.

Selecting and preparing photos to share has a way of focussing your attention and allowing you to really look at them critically. As I sifted through the thousands of photographs I'd taken in London it quickly became obvious that I've got some work to do before I'm competing with Romain Guy.

It was also clear that I had a couple of preferred sources of inspiration.

The Seasons

Grey skies and light rain don't make for great photos and an overcast Winter that starts in October and ends around April does little to provide inspiration.

London is blessed with real seasons though, and Autumn and Spring (however brief) are an entirely different matter. They offer some of the most amazing light and color for taking photos. And when it snows? London transforms briefly into a winter wonderland.

By 10am the skies cloud over and the snow turns to mush, so to take advantage you need to be out there at dawn. I worked in banking, so that was never a problem.

Each of the following thumbnails links to a gallery of my pictures of London in Winter, Spring, and Autumn respectively.

The Sights

London has some of the most easily recognized landmarks in the world. Because of the seemingly perpetually grey and overcast skies, lots of tourist snaps come out flat and dull. To get around that I've taken most of mine at night or very early in the morning.

Tuesday, June 28, 2011

A Deep Dive Into Location Part 2: Being Psychic and Staying Smooth

This is part two of A Deep Dive into Location. This post focuses on making your apps psychic and smooth using the Backup Manager, AsyncTask, Intent Services, the Cursor Loader, and Strict Mode.
The code snippets used are available as part of the Android Protips: A Deep Dive Into Location open source project. More pro tips can be found in my Android Pro Tips presentation from Google I/O. 
Being Psychic

You've just had to factory reset your device - never a good day - but yay! You've opted in to "backup my settings" and Android is happily downloading all your previously installed apps. Good times! You open your favourite app and... all your settings are gone.

Backup Shared Preferences to the Cloud using the Backup Manager

If you're not using the Backup Manager to preserve user preferences to the cloud I have a question for you: Why do you hate your users? The Backup Manager was added to Android in Froyo and it's about as trivial to implement as I can conceive.

All you need to do is extend BackupAgentHelper and create a new SharedPreferencesBackupHelper within its onCreate handler.

As shown in the PlacesBackupAgent, your Shared Preferences Backup Helper instance takes the name of your Shared Preference file, and you can specify the key for each of the preferences you want to back up. This should only be user-specified preferences - it's poor practice to back up instance or state variables.

public class PlacesBackupAgent extends BackupAgentHelper {
  @Override
  public void onCreate() {
    SharedPreferencesBackupHelper helper = new
      SharedPreferencesBackupHelper(this, PlacesConstants.SHARED_PREFERENCE_FILE);
    addHelper(PlacesConstants.SP_KEY_FOLLOW_LOCATION_CHANGES, helper);
  }
}

To add your Backup Agent to your application you need to add an android:backupAgent attribute to the Application tag in your manifest.

<application android:icon="@drawable/icon" android:label="@string/app_name"
             android:backupAgent="PlacesBackupAgent">

You also need to specify an API key (which you can obtain by registering with the Android Backup Service):

<meta-data android:name=""
           android:value="Your Key Goes Here" />

To trigger a backup you just tell the Backup Manager that the data being backed up has changed. I do this within the SharedPreferenceSaver classes, starting with the FroyoSharedPreferenceSaver.

public void savePreferences(Editor editor, boolean backup) {
  editor.commit();
  // Notify the Backup Manager that the backed-up data has changed
  // (backupManager is an instance of BackupManager).
  backupManager.dataChanged();
}

Being Smooth: Make everything asynchronous. No exceptions.

Android makes it easy for us to write apps that do nothing on the main thread but update the UI.

Using AsyncTask

In this example, taken from PlaceActivity, I'm creating and executing an AsyncTask class to look up the best previous known location. This isn't an operation that should be particularly expensive - but I don't care. It isn't directly updating the UI, so it has no business on the main application thread.

AsyncTask<Void, Void, Void> findLastLocationTask = new AsyncTask<Void, Void, Void>() {
  @Override
  protected Void doInBackground(Void... params) {
    // The best last known location lookup goes here.
    Location lastKnownLocation =

    updatePlaces(lastKnownLocation, PlacesConstants.DEFAULT_RADIUS, false);
    return null;
  }
};
You'll note that I'm not touching the UI during the operation or at its completion, so in this instance I could have used normal Thread operations to background it rather than use AsyncTask.

Using the IntentService

Intent Services implement a queued asynchronous worker Service. Intent Services encapsulate all the best practices for writing services; they're short lived, perform a single task, default to Start Not Sticky (where supported), and run asynchronously.

To add a new task to the queue you call startService, passing in an Intent that contains the data to act on. The Service will then run, executing onHandleIntent on each Intent in series until the queue is empty, at which point the Service kills itself.
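As a sketch of that pattern, queueing a place-details update is just a startService call from an Activity. The service class and extra key come from the snippets below; the placeId variable is a stand-in:

```java
// Queue an update by starting the Intent Service with the data to act on.
// placeId is a hypothetical variable holding the id of the place to refresh.
Intent updateIntent = new Intent(this, PlaceDetailsUpdateService.class);
updateIntent.putExtra(PlacesConstants.EXTRA_KEY_ID, placeId);
startService(updateIntent);
```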

I extended Intent Service for all my Service classes, PlacesUpdateService, PlaceDetailsUpdateService, PlaceCheckinService, and CheckinNotificationService.

Each implementation follows the same pattern, as shown in the PlacesUpdateService extract below.

protected void onHandleIntent(Intent intent) {
  String reference = intent.getStringExtra(PlacesConstants.EXTRA_KEY_REFERENCE);
  String id = intent.getStringExtra(PlacesConstants.EXTRA_KEY_ID);

  boolean forceCache = intent.getBooleanExtra(PlacesConstants.EXTRA_KEY_FORCEREFRESH, false);
  boolean doUpdate = id == null || forceCache;

  // Check the cache: skip the refresh if this place was updated recently enough.
  if (!doUpdate) {
    Uri uri = Uri.withAppendedPath(PlaceDetailsContentProvider.CONTENT_URI, id);
    Cursor cursor = contentResolver.query(uri, projection, null, null, null);

    try {
      doUpdate = true;
      if (cursor.moveToFirst()) {
        if (cursor.getLong(cursor.getColumnIndex(
              PlaceDetailsContentProvider.KEY_LAST_UPDATE_TIME)) >
            System.currentTimeMillis() - PlacesConstants.MAX_DETAILS_UPDATE_LATENCY)
          doUpdate = false;
      }
    } finally {
      cursor.close();
    }
  }

  if (doUpdate)
    refreshPlaceDetails(reference, forceCache);
}

Note that the queue is processed on a background thread, so I can query the Content Provider without having to spawn another background thread.

CursorLoaders are awesome. Use them.

Loaders are awesome; and thanks to the compatibility library, they're supported on every platform back to Android 1.6 - that’s about 98% of the current Android device install base.

Using CursorLoaders is a no-brainer. They take a difficult common task - obtaining a Cursor of results from a Content Provider - and implement, encapsulate, and hide all the bits that are easy to get wrong.

I've already fragmented and encapsulated my UI elements by creating three Fragments -- PlaceListFragment, PlaceDetailFragment, and CheckinFragment. Each of these Fragments accesses a Content Provider to obtain the data it displays.

The list of nearby places is handled within the PlaceListFragment, the relevant parts of which are shown below.

Note that it's entirely self contained; because the Fragment extends ListFragment the UI is already defined. Within onActivityCreated I define a Simple Cursor Adapter that specifies which Content Provider columns I want to display in my list (place name and my distance from it), and assign that Adapter to the underlying List View.

The final line initiates the Loader Manager.

public void onActivityCreated(Bundle savedInstanceState) {
  super.onActivityCreated(savedInstanceState);
  activity = (PlaceActivity)getActivity();

  // The layout and view id resources here are stand-ins for the project's own.
  adapter = new SimpleCursorAdapter(activity,
    android.R.layout.simple_list_item_2, null,
    new String[]
      {PlacesContentProvider.KEY_NAME, PlacesContentProvider.KEY_DISTANCE},
    new int[] {,}, 0);

  // Allocate the adapter to the List displayed within this fragment.
  setListAdapter(adapter);

  // Populate the adapter / list using a Cursor Loader.
  getLoaderManager().initLoader(0, null, this);
}

When the Loader is initiated we specify the parameters we would normally pass in to the Content Resolver when making a Content Provider query. Instead, we pass those parameters in to a new CursorLoader.

public Loader<Cursor> onCreateLoader(int id, Bundle args) {
  // The projection columns are stand-ins for the project's own.
  String[] projection = new String[] {PlacesContentProvider.KEY_ID,
    PlacesContentProvider.KEY_NAME, PlacesContentProvider.KEY_DISTANCE};

  return new CursorLoader(activity, PlacesContentProvider.CONTENT_URI,
    projection, null, null, null);
}

These callbacks are triggered when the Loader is created, when a load completes, and when the Loader is reset, respectively. When the Cursor has been returned, all we need to do is apply it to the Adapter we assigned to the List View and our UI will automatically update.

The Cursor Loader will trigger onLoadFinished whenever the underlying Cursor changes, so there's no need to register a separate Cursor Observer or manage the Cursor lifecycle yourself.

public void onLoadFinished(Loader<Cursor> loader, Cursor data) {
  adapter.swapCursor(data);
}

public void onLoaderReset(Loader<Cursor> loader) {
  adapter.swapCursor(null);
}
The PlaceDetailFragment is a little different; in this case we don't have an Adapter backed ListView to handle our UI updates. We initiate the Loader and define the Cursor parameters as we did in the Place List Fragment, but when the Loader has finished we need to extract the data and update the UI accordingly.

Note that onLoadFinished is not synchronized to the main application thread, so I'm extracting the Cursor values on the same thread as the Cursor was loaded, before posting a new Runnable to the UI thread that assigns those new values to the UI elements - in this case a series of Text Views.

public void onLoadFinished(Loader<Cursor> loader, Cursor data) {
  if (data.moveToFirst()) {
    // Extract each value on the Loader's thread. The column name constants
    // are stand-ins for the project's own.
    final String name = data.getString(
      data.getColumnIndex(PlaceDetailsContentProvider.KEY_NAME));
    final String phone = data.getString(
      data.getColumnIndex(PlaceDetailsContentProvider.KEY_PHONE));
    final String address = data.getString(
      data.getColumnIndex(PlaceDetailsContentProvider.KEY_ADDRESS));
    final String rating = data.getString(
      data.getColumnIndex(PlaceDetailsContentProvider.KEY_RATING));
    final String url = data.getString(
      data.getColumnIndex(PlaceDetailsContentProvider.KEY_URL));

    if (placeReference == null) {
      placeReference = data.getString(
        data.getColumnIndex(PlaceDetailsContentProvider.KEY_REFERENCE));
      updatePlace(placeReference, placeId, true);
    }

    // Post the new values to the UI thread. The handler and Text View
    // fields are stand-ins for the project's own. Runnable() {
      public void run() {
        nameTextView.setText(name);
        phoneTextView.setText(phone);
        addressTextView.setText(address);
        ratingTextView.setText(rating);
        urlTextView.setText(url);
      }
    });
  }
}

Using Strict Mode will prevent you from feeling stupid

Strict Mode is how you know you've successfully moved everything off the main thread. Strict Mode was introduced in Gingerbread but some additional options were added in Honeycomb. I defined an IStrictMode Interface that includes an enableStrictMode method that lets me use whichever options are available for a given platform.
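The interface itself is tiny; a sketch of what the IStrictMode Interface described above might look like:

```java
// Each platform-specific implementation (e.g. for Gingerbread or Honeycomb)
// enables whichever Strict Mode options its platform supports.
public interface IStrictMode {
  void enableStrictMode();
}
```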

Below is the enableStrictMode implementation within the LegacyStrictMode class for Gingerbread devices.

public void enableStrictMode() {
  StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
    .detectDiskReads()
    .detectDiskWrites()
    .detectNetwork()
    .penaltyDialog()
    .build());
}

The only thing I hate more than modal dialogs in apps is apps that freeze because a network read or disk write is blocking the UI thread. As a result I've enabled detection of network and disk reads/writes, reporting violations with a modal dialog.

I've applied Strict Mode detection to the entire app by extending the Application class to instantiate the appropriate IStrictMode implementation and enable Strict Mode. Note that it is only turned on in developer mode. Be sure to flick that switch in the constants file when you launch.

public class PlacesApplication extends Application {
  @Override
  public final void onCreate() {
    super.onCreate();

    if (PlacesConstants.DEVELOPER_MODE) {
      if (PlacesConstants.SUPPORTS_HONEYCOMB)
        new HoneycombStrictMode().enableStrictMode();
      else if (PlacesConstants.SUPPORTS_GINGERBREAD)
        new LegacyStrictMode().enableStrictMode();
    }
  }
}

Thursday, June 23, 2011

How to Build Location-Based Apps That Don't Suck

If I were forced to choose between a smartphone that could make / receive voice calls, and one with Google Maps - I would choose Maps without blinking.

Back in London, getting a reliable 3G connection is a challenge at the best of times - getting one while sat in most venues is about as likely as South West Trains running a good service. So it doesn't help when I go to view details for, check in to, or review a location and a lack of 3G signal thwarts my efforts.

Whether it's opening a FourSquare app to check in, or Qype / Zagat / Where to choose where to eat, or the London Cycle Hire Widget to find a Boris Bike - I always feel like a douche standing around for half a minute while my phone gets a GPS fix and downloads the nearest locations.

High latency and a lack of offline support in location-based mobile apps is a blight that must be cleansed.

Rather than (or indeed: after) shaking my fist at the sky in impudent rage, I wrote an open-source reference app that incorporates all of the tips, tricks, and cheats I know to reduce the time between opening an app and seeing an up-to-date list of nearby venues - as well as providing a reasonable level of offline support.

You can find out more in the associated deep-dive into location on the Android Developer Blog.

Android Protips: Location Best Practices

It should come as no surprise to learn that I've borrowed heavily from my Android Protips presentation from Google I/O, including (but not limited to): using Intents to receive location updates, using the Passive Location Provider, using Intents to passively receive location updates when your app isn't active, monitoring device state to vary refresh rate, toggling your manifest Receivers at runtime, and using the CursorLoader.

But Wait There's More!

The post on the Android Developer Blog focusses on freshness - I'll be posting another deep-dive into the code that examines how I've made the app psychic and smooth on this blog early next week. Stay tuned.

Wednesday, May 25, 2011

Answers to Unanswered Questions from the I/O Protips Q&A

There's never enough time for Q&A at the end of an I/O session - particularly when the session is immediately followed by lunch. In an effort to remedy this, here are the answers to most of the questions that were entered onto the Moderator page for my Android Protips talk.
    Eclipse is a wonderful development tool. However sometimes it is clunky. Generic error messages, mysteriously build problems solved by quitting & relaunching, etc. Do you ever get frustrated with Eclipse? Do you have any tips of working with Eclipse?
I do! Almost as much as I was frustrated by Visual Studio in a previous life spent writing C# Winform GUIs. I don't have any specific tips beyond the things that work with most IDEs: restart frequently, and make sure you're using the latest stable build along with the latest version of the ADT.
    Do you have any tips for working with SQLite databases through Eclipse? At the minute I rely on external tools to view data that is in the database on the phone to debug problems. This means manually copying it off the device. Any tips?
You can use the sqlite3 command line tool to examine SQLite databases on the phone. It's not built into Eclipse but might save you the extra work of pulling the database off the device first.
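For example (the package and database names here are placeholders), from a host machine with adb on the path:

```shell
# Open a shell on the device or emulator.
adb shell
# Then, inside the device shell, open the database in place:
sqlite3 /data/data/com.example.myapp/databases/mydatabase.db
# Inside the sqlite3 prompt: .tables lists the tables, .schema shows the DDL,
# ordinary SELECT statements query your data, and .exit quits.
```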
    Do the developers at Google use any hardware emulators to speed up development? If so, can you please recommend some? The soft emulator is too slow.
Unfortunately not. Generally we'll be working towards the release of the new platform on a given piece of hardware, so the internal teams will use that rather than an emulator where appropriate / applicable.
    Will there be a faster Android device emulator anytime soon?
Yes! Check out this session on Android Development Tools for a preview.
    Is there a suite of AVDs for Eclipse that emulate actual devices?
Some manufacturers make AVDs available for actual devices (I believe Samsung provide an AVD for the 7" Galaxy Tab). Generally speaking, no - there's no central repository or suite of all actual device AVDs.
    Will there soon be a legitimate way to replace the Android lockscreen (with a lockscreen application)?
Due to the security implications, I'm not aware of any plans to make the lock screen (or in-call screen) replaceable.
    You mentioned it's better not to lose your signing key. But how do you update your app when your certificate has expired?
For now, certificates used to sign apps launched in the Android Market need to expire after 22 October 2033. We'll have a solution for replacing these certificates in place well before 2033 :)
    What's the recommended way to implement a horizontally scrolling, virtualized list?
No simple answer here as it depends on the kind of data you're displaying, how long your list is, and what the best user experience would be. There are some good articles online (including this answer on Stack Overflow) that explain how to create a virtual list in a ListView, but you can use a similar technique within a Gallery or even a HorizontalScrollView to achieve a horizontal virtualized scrolling list.
    Are Shared Preferences the best way to store small pieces of data?
It depends on what kind of small data you're storing. Shared Preferences are the best way to store user preferences and Activity / Application state information.
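A minimal sketch of storing a user preference that way (the file and key names are made up for illustration):

```java
// Persist a small user preference to a named Shared Preferences file.
SharedPreferences prefs = getSharedPreferences("my_prefs", Context.MODE_PRIVATE);
Editor editor = prefs.edit();
editor.putBoolean("notifications_enabled", true);
editor.commit();
```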
    A view from one app needs to be updated by another app. Can't use the widget paradigm, is there any other way?
This depends on a number of factors. Are both apps written by you, or is one a third party? How dramatic are the changes? New layouts or changed text in a TextView?

Generally speaking, the best approach is likely to be a Broadcast Intent. You can package the data that will be used to update the View in the "other" app by including them as extras. The "other" app simply registers a Broadcast Receiver that listens for the Intent, extracts the data, and updates its view accordingly.
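A minimal sketch of that approach (the action string, extra key, and field names are all made up for illustration):

```java
// In the sending app: broadcast the new value for the other app's view.
Intent update = new Intent("com.example.ACTION_UPDATE_VIEW");
update.putExtra("com.example.EXTRA_NEW_TEXT", "Updated label");
sendBroadcast(update);

// In the receiving app, registered within the Activity that owns the view:
private BroadcastReceiver viewUpdateReceiver = new BroadcastReceiver() {
  @Override
  public void onReceive(Context context, Intent intent) {
    // Extract the extra and update the TextView accordingly.
    myTextView.setText(intent.getStringExtra("com.example.EXTRA_NEW_TEXT"));
  }
};

// Typically in onResume:
registerReceiver(viewUpdateReceiver,
                 new IntentFilter("com.example.ACTION_UPDATE_VIEW"));
```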
    How would you test/optimize the apps that are not meant for the Android Market?
The principle of using Analytics for tracking bugs and doing A/B testing works just as well internally as it would on apps that will launch in Market. The biggest difference is distribution. Given the ability to side-load apps onto most Android devices, I'd most likely set up an internal website that would host the APKs you want to distribute for the beta test.
    How can we know if a certain service is already running?
You can bind to a Service using bindService without starting the Service. The Service Connection you pass in to bindService will notify you using its onServiceConnected and onServiceDisconnected handlers when the Service starts and stops. You can use those callbacks to set a variable within your code to check if the Service is running at any given time.
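A sketch of that technique (the class and field names are mine, not from any particular project):

```java
// Tracks whether the Service is currently running.
private boolean serviceIsRunning = false;

private ServiceConnection serviceConnection = new ServiceConnection() {
  @Override
  public void onServiceConnected(ComponentName name, IBinder binder) {
    serviceIsRunning = true;
  }

  @Override
  public void onServiceDisconnected(ComponentName name) {
    serviceIsRunning = false;
  }
};

// Bind without starting the Service: passing 0 (rather than
// Context.BIND_AUTO_CREATE) means the Service won't be created
// just because we bound to it.
bindService(new Intent(this, MyService.class), serviceConnection, 0);
```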
    Is there any option to backup the default SharedPreferences via BackupManager? Do I have to use the packagename?
The default SharedPreferences file (the one returned by an Activity's getPreferences method) uses the Activity's class name as the preferences name.
    Is there a way for accessories to "push" an embedded app or service package to the device so that specialized services for certain types of accessories will be able to automatically add functionality to the device when connected?
As part of the process of starting the device in accessory mode, you send identifying string information to the device. This information allows the device to figure out an appropriate application for this accessory and also present the user with a URL if an appropriate application does not exist. It won't install the package for you, but it will prompt the user to download it.
    Just to make sure, now we can send requests to the devices in order to update the info, instead of having the refresh intervals?
That's right, you can use Cloud to Device Messaging to ping a device when it needs to perform an update.
    When will we have a UI Builder that is par to what we get for iPhone?!
Check out this session on Android Development Tools for a preview of some of the cool stuff the tools team have been working on.
    Can you describe your video and control hookup for your android tablet?
I wrote a blog post that describes the video and control hookup I used to do my presentation using a pair of Motorola Xooms.

Monday, May 23, 2011

My Attitude Towards Piracy of My Book

The short answer: I am against it.

Lest I be accused of bias, that goes for every book - not just the ones that result in a couple of bucks landing in my pocket.

The long answer

I was surprised recently when asked via email what my attitude was towards piracy of my book, and if an online donation might work as a "last resort" way for pirates to show their appreciation for my work.

For the record, I am a big supporter of making my book available in as many formats as possible. That's why I'm impressed that Wrox books are now available on Kindle, Google Books, Safari, and PDF eBooks. I get a royalty no matter where you buy it from, so if you want a copy, pick whichever store offers the cheapest price for the format you prefer.

I'm also against DRM - I believe that DRM does nothing to prevent piracy while annoying the folks who legitimately paid for the content - so I was also thrilled with Wrox's decision to make their eBooks DRM free.

Your book is expensive: Do you have a donate link anywhere to show my appreciation but save a few bucks?

I can't speak for anyone else, but I don't write my books for the money. The advance and royalties go some way to compensating for the significant time and effort it takes to get the books written - but for me at least, it's not going to make me rich or let me give up my day job.

More importantly I don't write in a vacuum. Books cost money to make. I'm not talking about the paper, printing, and transport costs, I'm talking about all the people who were involved in making my book the best it could be.

There are 17 people on the "Credits" page for Professional Android 2 Application Development. They are not vanity credits. As an example, the following folks are the ones on that list who I had direct, repeated email contact with over the course of writing the book. Apologies to those I left out - they are equally important to the process.
  • Scott Meyers (Acquisitions Editor) suggested I write a second edition and shepherded it through the process.
  • William Bridges (Project Editor) made sure I handed in chapters on something resembling a schedule (without him I'd still be working on the 1st edition), as well as dispensing invaluable advice on everything from book and chapter structure to clarity and semi-colon use.
  • Milan Shah (Technical Editor) reduced the number of bugs in my code.
  • Sadie Kleinman (Copy Editor) corrected my comma use, spelling errors, grammatical issues, and generally ensured I didn't embarrass myself.
  • Mary Beth Wakefield (Editorial Manager) kept it all together when things got chaotic.
  • Kyle Schlesinger (Proofreader) ensured nothing slipped past us during the many edits and revisions before it went to print.
  • Michael Trent (Cover Designer) gave us the awesome Terminator cover (image by Linda Bucklin).
  • Robert Swanson (Indexer) provided a way to find things without a photographic memory.
  • David Mayhew (Marketing) made sure it was available from wherever people wanted to buy it.
These folks are an absolutely essential part of the writing process, and they don't work for free. Nor should they. There is a really simple way to show your appreciation for all the people involved in writing a book. Buy it.

Sure, but I need to know what the book teaches for my job / class, but I don't have the money to buy it.

Good news! You don't have to! At the risk of hurting my own sales, you don't need to buy my book to learn how to develop Android apps. I mentioned this during my Android Protips talk: Google aren't trying to keep this information a secret. There's a huge amount of information available online, starting with the official Android developer documentation.
Many people find the structured, consistent, and guided form of a book to be a great way to learn new material. Others find books a more useful reference while working. If you're one of those people, and you want to be able to continue using books in such a way, buying them is the only way to help ensure that will happen.

What are your thoughts on 2nd hand books and libraries?

Bring them on! I've probably bought 1,000 books in my life - of which more than half are 2nd hand, and I've probably borrowed a few hundred from libraries too.

Electronic books introduce new challenges to the 2nd hand book market and libraries. I'm firmly on the side that says that the rights we have with paper books should be mirrored in the electronic publishing age. If you're done with a book you should be able to sell it, loan it, or give it away without restriction.

If everyone downloads pirated PDFs of books instead of buying them, publishers will stop publishing them.

Without a publisher many books won't get written or released. I don't have the time, money, skills, or inclination necessary to produce a book of sufficient quality on my own. I need the 17 people on the credits page (and several more besides) to publish each new book or revision.

Writing isn't my livelihood, so as an author the end of publishing would be a serious disappointment, but I'd just stop writing and get on with my day job.

As a reader? I can barely imagine a future so grim.

Monday, May 16, 2011

Android Protips: Where to Download the Slides and Code Snippets

For those of you who want to take a closer look at my Android Protips session for Google I/O, you can now enjoy the video, slides, and code snippets in whichever format you prefer.
One of the nice things about SlideShare is that it lets you embed slideshows into your blog post.

I plan to do a series of more blog posts that dig into some of the topics I cover in the presentation in more detail. Where do you guys think I should start?

Android Protips at I/O: The Session Video (and How I Presented It)

[Update 16/May: Reposted after Blogger outage]
[Update 2: Working links to the sessions slides are available from here]

Another Google I/O, another jam-packed Android session room. This year they nearly doubled the room capacity for the main Android track, making space for 1,000 seats. That still wasn't enough though - once again people were sitting on the floor and lining up to get in.

Android Protips: Advanced Topics for Expert Android Developers

After delivering my Android Best Practices for Beginners talk for the better part of last year, I was really excited to take things up a notch and deliver some genuinely advanced content. To push things one step further, I presented my session using a pair of Xoom tablets. More on that after the video.

The awesome video content was created for me by an old friend of mine (he's still young, but we've been friends since high school) pandamusk - thanks panda!

How did you do that?

There were a lot of questions on Twitter asking:
  1. What app did I use to do the presentation using an Android tablet
  2. How did I live tweet my own presentation in real time?
  3. How did I not re-tweet everything when the tablet rebooted?
I'm an engineer, so (of course) I took this as an excuse opportunity to write an app that does the first two (and come on, you think I didn't consider the case of having to restart? Please.)

How does it work?

One app, running on two tablets, both running Android 3.1 (with USB host mode support), connected via Bluetooth.

Tablet one was wired up with HDMI out, and a USB-connected clicker let me transition between slides. I added a "finger paint" View with a transparent background on top of the ImageView that displayed each slide, which let me do the real-time annotations.

A second device (out of sight on the lectern) showed me my "Speaker View": My speaker notes, the current / next slide preview, my pre-written live tweets, and a countdown timer.

The two devices were paired and connected over Bluetooth, with the speaker view tablet set up as a slave to the presentation device. Whenever the display tablet transitioned slides, it transmitted the current slide to the speaker view tablet. It works the other way around too, so I can transition slides on the speaker view and have the live view update accordingly.
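The sync messages between the two tablets can be as simple as the current slide index written to the Bluetooth socket's stream. The actual wire format used in the talk isn't described here, but a minimal sketch of the idea (using in-memory streams in place of the Bluetooth socket) might look like this:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class SlideSync {
    // Serialize a "go to slide N" message for the peer device.
    static byte[] encodeSlideChange(int slideIndex) {
        try {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            new DataOutputStream(buffer).writeInt(slideIndex);
            return buffer.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Deserialize the message on the receiving device.
    static int decodeSlideChange(byte[] message) {
        try {
            return new DataInputStream(new ByteArrayInputStream(message)).readInt();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] message = encodeSlideChange(42);
        // Prints: Peer should display slide 42
        System.out.println("Peer should display slide " + decodeSlideChange(message));
    }
}
```

Because either device can write a slide change, the same round trip works in both directions, which is what makes the speaker view able to drive the live view and vice versa.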

The tweeting happened on the speaker view tablet based on slide transitions (with a button for me to disable it if — for example — I had to restart half way through). I connected this one to a wired ethernet connection using a USB to ethernet dongle to avoid the notorious conference wifi syndrome.

I've got a bunch of ideas I'd like to incorporate (particularly around remote viewing), but ran out of time before I/O to get them implemented.

Can I Get the App? Can I See the Source?

Yes and yes. I need to make a few improvements before I release it on the Android Market, and I need to refactor and tidy the code before I open source it. In the meantime I'll do a couple more posts that go into more detail on how each of the components works. Stay tuned.

Tuesday, May 10, 2011

How to Get Your Android Protips

I/O is always a great couple of days, and after last year's jam-packed Android room they've doubled our capacity and given us space for 1,000 folks in the audience. As if that wasn't enough, we're also live streaming the Android (and Chrome) sessions for your viewing pleasure.

What Am I Presenting and How Can You Watch?

I'm presenting Android Protips: Advanced Topics for Expert Android Developers at 11:30am PDT in room 11 up on the top floor (next to the keynote room) for those of you lucky enough to be at I/O in person.

If you're not here, you can still watch my session live as it's going to be live streamed across the intertubes (Stay tuned: that page will update when I/O starts in a few short hours).

If you are planning to tune in, that's 7:30pm BST and an ungodly 4:30am on the East coast of Australia.

If you've got questions, and won't be in the audience, you can pose them (and vote for others) on the session's moderator page.

I'm also going to try live-tweeting my own presentation using an Android app I've been working on for presentations (more on that later).

Once I'm done, the recorded video and slides will be available on the Android Protips session page. I'll also post a link to the code snippets for your copy/paste pleasure.

If You Just Can't Get Enough (or You Want to Know How to Avoid Bumping in to Me)

I'm also co-hosting "Web Apps versus Native" on Wednesday afternoon with Michael Mahemoff. Should be a good way to wind down after a long couple of days. When I'm not on stage I'll be hanging out at the Android office hours, so be sure to stop by and say hi!

Wednesday, April 27, 2011

Using Twitter4J to Tweet in Android

So I'm working on a little project for Google I/O that requires, amongst other things, the ability to post status updates to Twitter from within an Android app. I asked about it on Twitter and a couple of people asked me to post the results (and associated code snippets) so here you go.

I was hoping for a small code snippet that would let me do that without needing any third-party libraries, but the feedback from the lazy web suggested that jumping through the hoops of implementing OAuth myself wasn't worth the effort.

The wisdom of crowds suggested Twitter4J as a simple alternative, and as the following code snippet shows, the simplest case is pleasantly simple to implement.

Twitter twitter = new TwitterFactory().getInstance();

// Register the app's consumer key / secret, then the user's access token.
twitter.setOAuthConsumer(consumer_token, consumer_secret);
AccessToken a = new AccessToken(oauth_token, oauth_token_secret);
twitter.setOAuthAccessToken(a);

// Post the status update.
twitter.updateStatus("If you're reading this on Twitter, it worked!");

In this instance I'm the only one who'll be using the app, so I'm dropping an auth token and auth token secret unique to my own Twitter login rather than going through the process required to obtain a user-specific auth token. If that matches your use-case you can grab those values by clicking "My Access Token" on the Twitter developer site after you've registered your app.

You can download Twitter4J for Android here. Then just add twitter4j-core-android-2.2.1.jar into your project as an external JAR.

Tuesday, April 12, 2011

I'm Saying Goodbye to London

Where to next?

Mountain View in sunny California!

Following this year's Google I/O, I'll be relocating to the home of the giant dessert sculpture garden, where I've been given the opportunity to take on the role of Tech Lead for the global Android Developer Relations team.

It's a chance for me to focus on some more strategic ideas and to work more closely with the core Android engineering team. It's a challenge I'm really looking forward to.

I've spent the last 6 years in London - the last 2 working here at Google - and it's been an amazing experience. I'll be leaving the Android developers of EMEA in the very capable hands of Nick Butcher, Richard Hyndman, and Robert Rhode. I'll still visit; I'm particularly looking forward to this year's round of Google Developer Days.

I'm still working at Google, and I'm still part of the Android team, so being based in Mountain View I'll have the opportunity to meet and work with some of our North American Android devs - so be sure to say hi if you're coming to Google I/O this year.

The move is still some months away, but in the meantime here are some of the things I will (and won't) miss about London, and what I'm looking forward to in California.

Things I'll Miss About London
  • World class theatre, restaurants, and concerts all at my door step.
  • Living in close proximity to the rest of Europe.
  • Proper bacon and real cheddar cheese.
  • Seasons (particularly Spring and Autumn).
  • The awesome Android developers I've worked with over the last 2 years.
  • Full English breakfasts.
Things I Won't Miss About London
  • Commuting for an hour every morning, and again every evening.
  • The Victoria line and South West Trains.
  • Hearing my neighbor snoring.
  • Driving in London.
  • Black pudding.
Things I'm Looking Forward to in California
  • Fruit that tastes like fruit.
  • Living on the West coast (the best coast).
  • Living in close proximity to an ocean.
  • Living in close proximity to the rest of the US.
  • Wide roads and cheap(er) petrol, er, gas.
  • American breakfasts.

Wednesday, March 30, 2011

My Failed Startup (or How I Nearly Became a 3D Animator)

Building awesome software and having the right business contacts is not sufficient to convince the latter to hand over money for the former.
There was a time in 2004/05 when, for several minutes, I thought my future lay not in compilers and debuggers, but in modelers and animation. As you can see, it turns out I don't have the skill, patience, or eye for detail required to transition from "enthusiast" to "someone who gets paid".

The journey that led me to consider adding "animator" to my resume is more interesting than my non-existent animation career. After 6 years writing oil & gas inspection software, I teamed up with a good friend and very smart guy - Big Stu - in a quixotic attempt to extract some serious coin from the bottomless money-pit that is the Western Australian oil & gas industry.

Step 1: Combine his electrical / mechanical engineering knowledge and business contacts with my ninja coding and amateur 3D animation skills to forge a killer app.
Step 2: ???
Step 3: Profit!

Here's our Pièce de résistance:

Our output wasn't Avatar, but the cinematic eye-candy was a side-effect of the tools we used rather than the goal.

We built a system to visualize and simulate anything in a sub-sea installation. Each scene was fully interactive, and the models were based on engineering diagrams and were perfectly accurate. The field layouts were created from sub-sea survey data to perfectly depict every twist and turn of the flow-lines and anchor chains. I've still never heard of a system that provides that level of detail and accuracy for sub sea environments.

Despite the power of the tool and the shiny eye-candy it produced, our venture never gained critical mass and eventually fizzled as Stu and I went our separate ways - Stu as the admiral of a veritable navy of ROVs, and me to London.

What follows is a look back at why a great idea generated zero profit.

Some companies are born of technology, some achieve technological greatness and some have technology thrust upon them

Technology is at the very heart of Google. Any chance to advance the technology from which it was born is seized upon as an opportunity for greater success. It's the philosophy behind our endeavor to "drive the web forward".

Like Google, the oil & gas industry has an absolute dependence on technology. It simply could not exist without an army of technologists creating oil-field prediction engines and well flow models.

Like super-heroes, you can learn a lot about industries from their origin stories

The Social Network and There Will Be Blood both include generous helpings of greed and betrayal, but while getting cheated out of half a billion dollars is a pretty bad day, it's still a significantly better outcome than having your head caved in.

Don't get me wrong, in the 6 years I worked in oil & gas I never once saw anyone beaten to death, so things have progressed significantly in the past hundred years or so. But in its soul oil & gas isn't about technology; it's a business of hard bastards drilling absurdly deep holes into the earth's crust, praying to f**k it doesn't explode, all in the hope of wringing a few more drops out of the bottom of a rapidly emptying cup.

I drink your milkshake. I drink it up.
(Value of a resource * quantity of resource extracted) - cost of extraction = profit
Finding more of a scarce resource makes it, by definition, less valuable. It's nearly impossible to increase the quality of a natural resource, so the best way to increase profits is to decrease the costs of pulling it out of the ground.

As resources become scarcer, the difficulty (and cost) of locating and extracting said resources increases. At this point your dependence on technology increases, and that technology costs money.

At Google, technology is the product. Our success has come from search and advertising, but technologies like Android, Chrome, and cloud computing offer an opportunity for more success.

In oil & gas, oil & gas is the product - and technology is simply a tool necessary to extract it. This is fundamental, and it affects the way technologists are regarded within each industry.

What they want is fancier tools - not a new cost center.

Stu and I quickly discovered that while there is a bottomless pit of cash, it is allocated almost exclusively to parts of the business that generate revenue.

They will happily pay stupid money for a shiny box that helps you find oil reserves, or one that lets you extract said nectar from Mother Earth. What they do not want to pay for is the salary of a dozen engineers (software or otherwise) who build shiny boxes.

The significant supporting industries - everything from field inspections and environmental surveys to intervention engineering and remedial work - are all just costs. Most have been outsourced and the associated budgets minimized, allocated, and fixed. Entire companies are created through such outsourcing, and their goal is to drive costs as far below the allocated budget as possible.

Our technology was about lowering costs - so we should have been golden, right?

The best technology in the world isn't valuable without a customer to sell it to.

The oil companies loved our technology. The shiny graphics are like catnip to executives, and our pitch was compelling:
Using this software, we can increase efficiency by shortening each job by up to 20%. We only charge 5% for using the software, giving you a net saving of 15%.
Big smiles and firm handshakes all 'round.
You need to go speak to Our Contractor. They should definitely be using this!
Next week we bring our roadshow to the Contractor. These guys wear coveralls for a living and recognize the smell of bullshit as it pulls up in the parking lot. They know the animated movies for the smoke and mirrors they are, but they're also engineers - so we switch our focus to the accuracy of our models. So far so good, until we get to the pitch:
Using this software, we can increase efficiency by shortening each job by up to 20%. We only charge 5% for using the software, giving you a net saving of 15%.
The smiles are gone and people are starting to fidget.

Offshore jobs tend to operate on a "daily rate". So our pitch translated into something like this:
Using this software we can shorten your billable days by up to 20% and increase your operational costs by 5%, giving you a net profit reduction of 25%.
Oil Companies outsource technology for a reason, and they're not going to force their contractors to use one particular piece of new, untried technology. The aforementioned contractors have every incentive they need not to use it.

The week-to-week possibility of Croesus wealth punctuated by imminent doom was good practice for later years.

That said, it was a great idea that would likely have succeeded given more time and effort. If I hadn't been in such a hurry to move to London, we could have worked the right deals to get the Oil Companies to convince their contractors to use our tools. It would have continued to be challenging, but it had a good chance. I believe it's inevitable that others will succeed where Stu and I left-off, but in startups - like much in life - timing is everything.

The lessons I learned about business, entrepreneurship, and opportunity have proven invaluable. The pressure and excitement of seemingly imminent success in parallel with equally imminent crushing failure was a brutal introduction to the Real World after years spent hiding in a dark corner of my own coding universe. It's strange, but seeing something you pour your heart and soul into fail can be a better incentive to strive than unmitigated success.

I never did pursue a career in animation. After taking half a year off to travel Europe, I settled in London with my passion for coding reignited. Some four years later, I found a job that offers the perfect mix of business development, technology evangelism, and hardcore coding that plays to my strengths. Oh, and did I mention we're hiring?

Thursday, March 17, 2011

Using the New Android Market Stats for Fun and Profit

Earlier this week the Android Market Publisher site was updated to include some cool new statistics for your apps. You can now see the user distribution of your app in terms of the countries, languages, operating system versions, and devices on which your apps are running.

Better still, you can compare your app's distribution in each of these categories with the overall distribution for all apps in the Market.

What does this mean?

There are two axes for gaining insight from these figures:
  • The distribution of languages, OS versions, devices, and countries of your app users.
  • The variance between your app and the overall (expected) distribution.
I looked at the statistics for my three most popular / successful apps: Earthquake, Animal Translator, and Gyro Compass, and have the following observations, conclusions, and action items.

Action Items and Conclusions
  • Create a Japanese and Spanish translation of Earthquake.
  • Translate Animal Translator into Japanese.
  • Modify culturally sensitive place names for Korean users.
  • Confirm Earthquake works on small-screen devices.
  • Drop platform support for Android 1.5 and 1.6 on Animal Translator.
  • For new apps, it may not be worth supporting Android 1.5 or 1.6.
  • For new apps, it's worth launching with localized language support for Korean and Japanese.
  • When promoting apps, be aware of time-zones.
  • Build tablet-targeted versions now to get first-mover advantage.
Observations: Location and Language
  • All my apps do disproportionately well in the UK. I'm based in London, so it's likely that my tweeting and blogging have driven more people in my time-zone to my apps.
  • The proportion of Japanese and Korean users is affected by how long the app has been around. The older apps have disproportionately more US users and fewer Japanese and Korean users, so for new apps it's worth building with Japan and Korea in mind at launch.
  • Japanese users account for 10% of Animal Translator users (double the normal distribution). They clearly like the concept, so a Japanese language version should help drive popularity.
  • Around 70% of Earthquake! users are from the US (expected distribution is 60%). This is likely due to a lot of Android users in the San Andreas fault cities of San Francisco and Los Angeles.
  • Earthquake has 1.5% South Korean users versus an average of 10%. Many South Korean users have complained about the USGS use of the name "Sea of Japan" which they believe should be "East Sea". This appears to have a direct impact on their usage of the app.
Observations: OS Versions
  • My apps show a trend where older apps have disproportionately more 1.5 / 1.6 users, and new apps have disproportionately fewer. This seems to suggest that owners of these older devices aren't downloading as many new apps. As a result, it might not be worth supporting 1.5 / 1.6 users for new apps.
  • Earthquake! already has 0.2% of users running Android 3.0. This suggests that building tablet-optimized versions now can give you first-mover advantage.
  • Only 8 people are running Animal Translator on a device running 1.6 or earlier, so I can probably drop support for < 2.0 in the next update.
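Dropping pre-2.0 support comes down to raising android:minSdkVersion in the manifest. A sketch (the package name here is a placeholder; API level 5 corresponds to Android 2.0):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.animaltranslator">
    <!-- API level 5 = Android 2.0; devices running 1.5 / 1.6
         will no longer see the app in Market. -->
    <uses-sdk android:minSdkVersion="5" />
</manifest>
```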
Observations: Devices
  • Based on the devices, there are no small-screen Earthquake! users. Does the app work on small screens?
  • The popularity of devices seems heavily affected by the country distribution of users. For my apps the HTC EVO 4G and Droid series of devices seem very popular in the US, with the HTC Desire and Samsung Galaxy S very popular in Korea and Europe.
What patterns did you see?

What observations and patterns did you find looking at your app statistics?

Friday, March 11, 2011

What Does a Magnitude 8.9 Earthquake Look Like?

Today Japan was struck by a massive 8.9 magnitude earthquake that caused major damage and loss of life. The quake hit off the coast, triggering a tsunami that struck towns along the northern coast of Japan and prompting tsunami alerts across the Pacific, reaching as far away as the West coast of the US and Australia.

Seismological details on the quake are available from the Japan Meteorological Agency and the US Geological Survey.

If you have friends or family in Japan, or you're in Japan and want to let people know you're ok, you can use the Google Person Finder.

Al Jazeera are streaming live coverage on YouTube.

In very real terms, these pictures from The Atlantic show exactly what an 8.9 magnitude earthquake looks like. My thoughts go out to everyone affected.

Some Visualizations

A magnitude of 8.9 is big. Very big.

The Richter scale is logarithmic: each whole-number increase in magnitude represents a tenfold increase in measured amplitude. In real terms, that makes today's earthquake in Japan nearly 400 times more powerful, in terms of measured amplitude, than the 6.3 magnitude quake that devastated Christchurch in New Zealand last month.
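Since each whole-number step on the scale is a factor of ten in amplitude, the amplitude ratio between two magnitudes is just 10 raised to their difference. A quick sketch of the arithmetic:

```java
// Compare two earthquake magnitudes on the logarithmic Richter scale.
public class RichterCompare {
    // Ratio of measured ground-motion amplitudes between two magnitudes.
    static double amplitudeRatio(double magA, double magB) {
        return Math.pow(10, magA - magB);
    }

    public static void main(String[] args) {
        // 10^(8.9 - 6.3) = 10^2.6, which is roughly 398.
        double ratio = amplitudeRatio(8.9, 6.3);
        System.out.printf("Japan vs Christchurch amplitude ratio: %.0f%n", ratio);
    }
}
```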
The following visualizations are screen captures from my Earthquake Android app. They attempt to give an idea of the scale (relative and otherwise) of the earthquake in Japan. They are approximate, using only the magnitude of the quake to determine the areas likely to be affected. Additional factors like depth of quake, fault lines, mountains, and landscape can significantly change the areas affected.
In the image below, the giant outer circle represents the area within which people were likely to have "felt something" during the quake. That's the better part of a hemisphere, reaching as far as Brisbane in Australia and touching Alaska.

By comparison, the smaller inner circle is the same "felt" area for a 7.1 magnitude aftershock.

This next image represents the area at significant risk of suffering from structural damage due to the quake. Again, the larger circle represents the initial 8.9 magnitude quake and covers around three quarters of Japan - an area that represents most of the US West coast.

For comparison, the smaller circle centered around the 7.1 marker is the damage radius for the smaller aftershock.

Just as striking are the aftershocks that inevitably follow such a significant quake. The following image shows only the aftershocks measuring above 6 ("Strong: Can be destructive in areas up to about 160 kilometers") on the Richter scale.

The overlapping areas all occurred within two hours of each other and give a sense of just how terrifying it must be for the people affected.

My hopes and best wishes go out to everyone affected by this catastrophe.

Monday, March 07, 2011

The Rise of the Tablet and the Inevitable Death of the Netbook

Last week I added a 10.1" Motorola Xoom to my gadget bag at the expense of a Netbook. As tablets grow in popularity I predict the Netbook's days are numbered.

Shortly after buying an Asus EeePC 701 in 2008 I described it to anyone who would listen as the best technology purchasing decision I'd ever made. It cost £200, was thin, light, and cheap. It booted Windows and loaded Office in under 10 seconds - only the paltry 800x600 display resolution was a legitimate cause of grief.

I used it to write most of my first book during my daily commute, and it was light and thin enough for me to throw in my bag for holidays or trips where I wasn't keen on bringing my laptop along (at the time my laptop was a 17" Vaio desktop replacement that weighed the better part of a metric fucktonne).

Light and with a full-day battery life, Netbooks were a cheap second computer for lightweight computing tasks and surfing the web.

A one trick pony

Growing up, my parents ran a typewriter sales and repair company. I watched as the typewriter was gradually (and then very quickly) replaced with the desktop computer. For a small while "word processors" were a popular alternative - significantly cheaper than computers, and with most of the features of Word Perfect 5.1.

As PCs, laptops, and printers became cheap and ubiquitous, word processors grew more fanciful in an effort to compete. They offered more fonts and options but at a higher price. Before long, like most one-trick ponies, the word processor was quietly covered with a sheet and put out of its misery.

I had a distinct feeling of déjà vu when, late last year, I went in search of a replacement for my EeePC in the hopes of finding something cheaper, lighter, and a little faster. Instead I was presented with devices that were:
  • More expensive.
  • Thicker and heavier.
  • Slower to load and lacking SSDs.
At the same time, my new Macbook weighs around two kilograms and is no more than an inch thick. I can use it to write my book, write my code, edit photos, and pretty much anything else I could want to do.

It's true that an MBP is a lot more expensive than a Netbook - and bulkier - but it's not as though you would buy a Netbook instead of a real laptop. It's an additional device, so it's actually competing with tablets or smartphones - and a smartphone / tablet combo has a lot more to offer.

Smartphones and tablets make Netbooks a quaint irrelevancy

Modern smartphones, led by iPhone and Android, have largely filled the niche of mobile web browsing.

The arrival of the tablet is the final nail in the coffin, with a 10" tablet neatly filling the gap between smartphone and laptop.

With bright, high resolution displays, tablets offer an unparalleled experience for watching video. Games designed for tablets are created specifically for portable hardware featuring touch screens and accelerometers. Similarly, the rich ecosystem of apps is optimized specifically for smartphone and tablet platforms.

Typing on a 10.1" touchscreen is certainly no worse than typing on an undersized Netbook keyboard. Walking into meetings these days I'm increasingly finding people have left their laptops behind and are instead bringing along their tablets.

If you need to type, bring a laptop. If weight is an issue, bring your tablet

Tablets provide an optimized experience for portability, mobility, and touch-based input with a rich selection of apps and games designed with their size and power in mind.

Laptops are cheaper, lighter, and more powerful than ever before. They offer a rich ecosystem of apps and provide the perfect platform where text input is required.

Netbooks can still provide a great platform for getting online, but so can laptops and tablets. Laptops may one day give way to tablets and smartphones entirely, and apps may move entirely online, but Netbooks - like word processors in the 80's - will inevitably fall victim to competitors that offer a more dynamic ecosystem of apps, games, and features at an increasingly comparable price.