Wednesday, August 25, 2010

What You Can Do With Your Modal Dialogs and Splash Screens (and the Horse They Rode In On)

Every time I'm presented with a splash screen, what I actually see is the app developer giving me the two fingered salute. The only thing more hostile is the use of a modal "loading" dialog: a UI metaphor used exclusively to save developers time at the expense of every single one of their users.
The apps which make me want to do something out of character start with a splash screen before segueing directly into a modal "loading" dialog. It's like they're spitting directly into my face. Why? Why do you hate me so much? I just want to use your app, I don't deserve this kind of hostility. No one does.

The 90s called, they'd like their UI metaphors back

Modal windows have always been a crutch. Windowed operating systems are by nature multithreaded, so to avoid apps "freezing" you execute time-consuming tasks on background threads. This introduces complexity, as the user can potentially interact with the UI between expected states.

The easiest solution for developers is to stick up a modal window that steals focus from the application and prevents users from interacting with the UI until the app completes the background task.

With great latency comes great responsibility

The ascent of the web as a platform largely eliminated these anachronisms. Page load times have a dramatic influence on how quickly people navigate away from a site. When page load latency is being measured in 10s or 100s of milliseconds, it's not surprising that splash screens and "loading" dialogs have all but disappeared.

The good news is that users have also gotten used to content being loaded dynamically. Most people won't bat an eyelid at a page which loads text first and updates the images afterwards. It's expected behaviour.

Users will spend the time it takes to get past your splash screen uninstalling your app

It's ironic that a platform that has even less allowance for delay has seen the reintroduction of the dreaded "loading" dialog. Sadly too many devs have gotten lazy. It's harder to develop a UI that works smoothly and intuitively while data is loading or processing is being done, so many don't bother.

Don't be the lazy guy - "it's hard" isn't a valid excuse. Mobile users are incredibly impatient: phones are used on the move and users constantly switch between applications. To be successful you need to make startup and transitions fast and seamless.
  • Asynchronous tasks. Make sure anything that could take longer than a small fraction of a second happens on a background thread (an AsyncTask can help). Use a Service to run tasks that should be completed even if the Activity is killed.
  • Branding fail. You want your app to seem an integral part of the device. Every time you show a splash screen you're reminding people that they're using an add-on -- an add-on they can replace with something less annoying.
  • Lazy loading. Loading data takes time, particularly if you need to download it first (especially over a mobile network). It's important to have something to show as quickly as possible, so take the browser approach by displaying what you can get quickly, then following up with slower items such as images. Load additional content as required when users start scrolling down your list, taking the same approach of text first, images later.
  • In-place updates. Don't clear all your data every time you pull an update from the server. Update, add, and remove items from your UI as new data becomes available. Same with updating existing layouts with images once they've been downloaded.
  • Pre-fetching. If you've got multiple tabs or even Activities within your app, there's no reason to wait until the user changes their selection to start loading data. Pre-fetch the first page of data so that it's ready when the user switches to it.
  • Save your state. Switching between apps should be seamless and instant. Save all your Activity state so your app can resume instantly.
  • Caching. There's no reason to download the same image multiple times. Likewise, it's often better to show out of date information and update it quickly than showing nothing at all.
  • Background loading. Use Services to perform updates and download data while your app isn't in the foreground. This can extend from small amounts of pre-fetching to regular updates, or complete offline support.
  • Visual "loading" elements. Use visual elements like progress bars to indicate that you're in the process of getting an update.
  • Disable unsupported actions. Some actions within your app might not make sense until all the data is loaded. Rather than block interactivity, disable actions that aren't possible.
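The asynchronous-loading and caching tips above boil down to one pattern: never block the UI thread, and reuse what you've already fetched. Here's a minimal plain-Java sketch of that pattern using java.util.concurrent; on Android you'd typically wrap the same logic in an AsyncTask or Service and post results back to the UI thread. The CachedLoader class and its method names are illustrative, not part of any Android API.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical CachedLoader: the underlying pattern AsyncTask wraps --
// do slow work off the main thread, and cache results so repeat
// requests return instantly (possibly slightly stale, but immediate).
class CachedLoader {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newFixedThreadPool(2);

    // Returns immediately; a cached result is served without blocking,
    // otherwise the fetch happens on a background thread.
    Future<String> load(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            return CompletableFuture.completedFuture(cached);
        }
        return executor.submit(() -> {
            String result = slowFetch(key);  // e.g. a network download
            cache.put(key, result);          // remember it for next time
            return result;
        });
    }

    // Convenience wrapper that blocks for the result (demo only --
    // on Android you'd update the UI from a callback instead).
    String loadBlocking(String key) {
        try {
            return load(key).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Stand-in for a slow network or disk operation.
    private String slowFetch(String key) throws InterruptedException {
        Thread.sleep(50);
        return "data:" + key;
    }

    void shutdown() {
        executor.shutdown();
    }
}
```

The first call for a key pays the fetch cost on a worker thread; every subsequent call for the same key returns from the cache without touching the network at all.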
To see good examples of presenting an app gracefully before its data arrives (and while it updates), check out the Gmail and News & Weather apps. Neither uses a splash screen or loading dialog, and both begin with no information. Once they have data, updates and changes are integrated into what's already being displayed.

If you're creating a fully immersive experience you may have good reason to use both splash screens and loading dialogs

There's always an exception. If you're building an immersive experience (like a game or turn-by-turn navigation), you don't want to start until the environment is fully constructed. Developers can't use many of the tricks above, like dynamic lazy loading, because users want to enjoy the complete experience right from the start. This is especially true of 3D environments like an FPS or Google Earth that rely on an OpenGL environment.

I remember playing California Games on a friend's C64. It was loaded via the tape drive, so if anyone quit the game we'd spend the 15 minutes it took to reload in a diverting game of "beat the moron". While it's important to ensure the environment is fully complete before allowing users to explore it, there are still a few tricks you can use to ensure the wait isn't quite as painful.
  • Cache any downloaded content.
  • Dynamically pre-load as much data as possible during gameplay.
  • Save and cache as much state data as possible to ensure players can resume from where they left off.
  • Try to provide useful information (hints? instructions? cut scenes?) in inter-level "loading" screens or splash screens.
  • Delay the longest pauses as long as possible. I should be able to start the game, change the settings, and navigate to "load saved game" without a significant wait.
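The first tip, caching downloaded content, can be as simple as writing each asset to disk keyed by name so the next launch skips the download entirely. A minimal sketch, where AssetCache is a hypothetical class and downloadAsset() stands in for your real network code:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical AssetCache: each downloaded asset is written to a cache
// directory; subsequent requests are served from disk, not the network.
class AssetCache {
    private final Path cacheDir;

    AssetCache(Path cacheDir) {
        try {
            this.cacheDir = Files.createDirectories(cacheDir);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    byte[] getAsset(String name) {
        try {
            Path cached = cacheDir.resolve(name);
            if (Files.exists(cached)) {
                return Files.readAllBytes(cached);  // cache hit: skip the download
            }
            byte[] data = downloadAsset(name);      // cache miss: fetch once
            Files.write(cached, data);
            return data;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Hypothetical stand-in for a real network download.
    private byte[] downloadAsset(String name) {
        return ("level-data-for-" + name).getBytes();
    }
}
```

On a C64 tape drive this would have saved us 15 minutes; on a mobile network it still saves your players seconds of staring at a loading screen.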
In conclusion

For most apps, splash screens and modal loading dialogs now belong in the 1990s with 9600 baud modems and pashminas. Embrace the now.

Tuesday, August 17, 2010

Beyond Mobiles: Android as a Universal Development Platform

Early next year GoogleTV will include the Android Market, meaning that it will be a compatible device. A GoogleTV is unlikely to include telephony support, or a camera, microphone, vibration, compass, GPS, or LED. And it likely won’t be the first Android device that isn’t a phone.

This suggests that there are some changes in store for the Android compatibility definition.

If you were an apps developer when the first desktop computer was released – what would you have built?

25 years ago, when I was writing code in my parents' living room, I held little expectation that anyone but me would ever use any of it. Sharing an app meant copying it onto a 5.25” floppy disk and biking it over to a friend's house where, CRC errors permitting, we’d run it.

There was also the issue of OS fragmentation. I had an IBM compatible PC-XT, my school housed a collection of BBC Micros and most of my friends had Amigas and C64s.

Over the next decade “IBM compatible” became “PC”, and the Internet provided an unprecedented distribution mechanism that let your apps span the globe in a heartbeat.

Right now there are more than 60 different compatible Android devices

Until recently the only way to get your mobile apps onto a phone was through a relationship with a carrier or device maker. Today, every day, 200,000 more people are activating Android compatible devices and searching the Market for your apps.

The open nature of Android means hardware manufacturers are using Android to power an increasingly diverse range of connected devices. At the same time, the Android Compatibility Program is expanding to make more of these devices compatible.

People talk a lot about the dangers of fragmentation

Where some might complain, I see a unique opportunity. Suddenly we’re presented with a blank slate on which to innovate, a rare opportunity to consider new ways for people to interact with devices with which they are already familiar.

If you, as a developer, want to avoid dealing with fragmentation it’s easily done. Pick a single device and develop only for that (I’d suggest the Motorola Droid). You’ll miss out on an order of magnitude of users, but you won’t have to make your apps resolution independent.

The same code will do the same thing on any compatible hardware

The Android Compatibility Program eliminates the real risks of hardware fragmentation. As a developer, all you need to do is make sure that your software doesn’t make assumptions about the underlying hardware.

So what are the assumptions you need to avoid?
  • Screen size, resolution, and aspect ratio.
  • The existence of particular hardware.
Screen size, resolution, and aspect ratio

Android developers have been accounting for different screen sizes since Android 1.6 and the release of the Verizon Droid and HTC Tattoo. There is an excellent writeup describing how to support multiple screen resolutions on the Android Developer Guide, aptly titled Supporting Multiple Screens.

At the risk of providing spoilers, it will tell you to:
  • Use density-independent pixels rather than hard-coding pixel values in your code.
  • Use layouts such as RelativeLayout that don’t assume screen sizes, aspect ratios, or resolutions.
  • Provide alternative layouts (if required) for small, normal, or large screens.
  • Provide alternative drawable assets for low, medium, or high resolution displays.
  • Use the emulator to test, test, and test.
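Put together, those rules look something like this in a layout file. This is a minimal sketch, not from the guide itself; the ids and sizes are illustrative:

```xml
<!-- A minimal sketch: dp/sp units and relative positioning instead of
     hard-coded pixel values. The ids and sizes are illustrative. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
  <ImageView
      android:id="@+id/thumbnail"
      android:layout_width="64dp"
      android:layout_height="64dp"
      android:layout_alignParentLeft="true" />
  <TextView
      android:id="@+id/title"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:layout_toRightOf="@id/thumbnail"
      android:textSize="18sp" />
</RelativeLayout>
```

Because nothing here names a pixel count, the same layout stretches sensibly from a Tattoo to a Droid to whatever screen comes next.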
The existence of particular hardware

When it comes to hardware dependencies, there are two questions you need to address. Does your app:
  • Need any specific hardware in order to function?
  • Use some hardware features if they’re available, even though they aren’t strictly necessary?
Apps like Layar aren’t very useful on a device without a camera and compass. If the answer to question one is yes, you need to create a manifest entry declaring the hardware features your app requires. The Android Market will then filter those apps out for devices that don’t have the required hardware.

  <uses-feature android:name="android.hardware.sensor.compass"/>
  <uses-feature android:name="android.hardware.camera"/>

As I described in Future Proofing Your Apps, the Market will make some aggressive guesses even if you don’t specify all the hardware you require, so if the answer to question two is yes, you need to tell the Market.

Specify optional hardware when you will use it if it’s available, but your app doesn’t depend on it.

Google Places uses cool compass arrows to show the direction of a place of interest. Even without the compass the app would still be useful on my TV, so it should declare the compass hardware as optional.

  <uses-feature android:name="android.hardware.sensor.compass"
                android:required="false" />

Within your app you will still need to find the code that uses the optional hardware and modify its behaviour accordingly. In this example, I’d want the app to simply hide the compass arrow if the compass isn’t available.

You can determine the availability of any hardware feature using the Package Manager and modify the UI or behaviour of your app accordingly.

  PackageManager pm = getPackageManager();
  boolean hasCompass = pm.hasSystemFeature(PackageManager.FEATURE_SENSOR_COMPASS);

To make your life easier, all compatible devices will maintain the APIs used to monitor and control all supported hardware; they simply won’t return useful results when the required hardware doesn’t exist.

Be a launch partner for future Android devices

As an Android Advocate I’m regularly asked by developers how they can have their app available as a launch partner for future devices. You can consider this blog post as my answer.

Monday, August 09, 2010

The Future of Mobile: Invisible, connected devices with infinite screens

The history of smartphones looks something like this: At the end of 2008 the very first Android handset was available on T-Mobile in the US. The iPhone has existed for 3 years. The very first Blackberry featuring push email came out in 2002.

From WAP and push email to iPhone in 5 years. From one iPhone to 60 different Android handsets in under 3 years. At that rate it's challenging to create a credible mobile roadmap that extends as far as 6 months - and the rate of change is increasing.

At the current rate, nearly anything is possible in 20 years

Lately a lot of people have asked me what I think is the future of mobile. Some people just want to know what device they should buy at Christmas, but others are looking for a 20 year outlook. 20 years! The first GSM network had barely launched 20 years ago! Prediction at that scale is destined for failure and embarrassment. But I won't let that stop me.

Bigger screens are better

Mobile devices are morphing. Tablets have been talked about for years, and the iPad and Kindle provide the kind of experience people have been waiting for. Browsing pictures, watching videos, and reading books work really well on a screen that size.

Still, I find the iPad heavy and bulky. The ultimate device would be the size and weight of my mobile but include a screen that could be unfolded or rolled out to provide a better display for watching movies and playing games.

Actually, the ultimate device would be entirely virtual. I’d put on my glasses (or contact lenses) and look at any surface to see an augmented version of reality. Anything from interactive holographs, to augmented reality, or a cinema screen that stretches across the horizon. Everyone could see their own version of reality on a screen the size of their visual field.
  • 1 year: High res screens, tablet devices, and HD output from mobiles.
  • 5 years: Flexible displays and built-in HD projectors.
  • 10 years: Transparent LCD patches that can be applied to regular glasses.
  • 20 years: Contact lenses that project a visual feed directly onto your retina.
Full keyboards are better. No keyboards is best

Keyboard designs (like that on the SE Mini Pro) continue to improve, as do on-screen keyboards with technologies like Swype.

The Nintendo Wii and Microsoft's Kinect suggest that gestures might largely take the place of keyboards and touch screens for some interactions. Better multi-touch and increasingly accurate voice input will make physical keyboards almost entirely redundant.

For those who want to write something longer than an email, gesture recognition (capable of tracking fingers), combined with eye-focus tracking will provide virtual full-size keyboards.

If we’re thinking long-term, we can look forward to research like this letting us control our devices using our minds.
  • 1 year: Wireless keyboards, voice input, and gestures.
  • 5 years: Larger multitouch screens, better gesture input, and flawless voice recognition.
  • 10 years: Full virtual keyboards and voice input eliminate physical keyboards entirely.
  • 20 years: Mind control.
Smaller devices that last longer

The Sony Ericsson X10 Mini is a ridiculous 83x50x16mm and weighs less than 100g. When screens stop being a primary consideration for device size, the devices will shrink dramatically.

That leaves the problem of the battery. Mobile processors will become more efficient, and fuel cells may help battery life in the short term, but ultimately we’ll be powering mobile devices using biology and ambient energy. Biokinetic and ambient energy will likely be the start, but the future suggests a move away from silicon and towards biological processors. The computer you inject is more likely to resemble a specialized virus than a tiny silicon chip.
  • 1 year: Lighter, thinner devices that last longer.
  • 5 years: Tiny devices powered by fuel cells.
  • 10 years: Devices small enough to embed into watches and jewellery that never need charging.
  • 20 years: You are the computer.
Connectivity will become ubiquitous

Cloud computing is already a reality. As even more of our data and processing is done in the cloud, continual and uninterrupted Internet connectivity will become increasingly critical.

The growth of smartphones in countries with a mobile data infrastructure to support them is nothing short of phenomenal. It's easy to forget that the real powerhouses of mobile phone use are developing countries - countries that don't have a reliable infrastructure for traditional "wired" Internet access. Citizens there are likely to access the Internet exclusively via their mobile phones.

Over the next decade we'll see carriers (and new challengers) aggressively rolling out faster, more reliable networks and technologies that cover larger areas across the globe.

At the same time, you'll be using your mobile to control your TV, monitor your fridge, and start your car.
  • 1 year: 3G/4G and WiFi cover most of the industrial world. Every mobile device comes with an unlimited (or high-cap) data plan. Mobiles start interacting with other consumer electronics and cars.
  • 5 years: 4G/5G and WiFi extend to cover the entire developing world.
  • 10 years: Whitespaces or similar technology means everyone everywhere is connected at all times.
  • 20 years: Connectivity is uninterrupted and ubiquitous. Losing connectivity is like losing power or running water.
What about calls!

Apparently some people use mobile phones to make and receive calls(!) As devices get smaller, keyboards become virtual, and screens move closer to your eyes, you'll need a separate piece of kit to sit near your mouth and ears. Bluetooth headsets will get smaller and more discreet, and people apparently talking to themselves in public will become no less creepy or annoying.

Infinite screens, invisible devices, always connected

In 2030 you'll think of smart-phones as quaint anachronisms that died out about 10 years ago, now that all computing is mobile. You’ll be constantly connected to the Internet by a virus that lives in your bloodstream. Contact lenses will provide a truly infinite screen, and you’ll interact with your augmented environment through a combination of mental commands, physical gestures, and voice input.

We'll take all this for granted and complain that we still don't have jetpacks or flying cars.