

Allowing applications to play nice(r) with each other: Handling remote control buttons

[This post is by Jean-Michel Trivi, an engineer working on the Android Media framework, whose T-shirt of the day reads “all your media buttons are belong to you”. — Tim Bray]

Many Android devices come with the Music application used to play audio files stored on the device. Some devices ship with a wired headset that features transport control buttons, so users can, for instance, conveniently pause and restart music playback directly from the headset.

But a user might use one application for music listening, and another for listening to podcasts, both of which should be controlled by the headset remote control.

If your media playback application creates a media playback service, just like Music, that responds to media button events, how will the user know where those events are going? To Music, or to your new application?

In this article, we’ll see how to handle this properly in Android 2.2. We’ll first see how to set up a receiver for ACTION_MEDIA_BUTTON intents. We’ll then describe how your application can appropriately become the preferred media button responder in Android 2.2. Since this feature relies on a new API, we’ll revisit the use of reflection to prepare your app to take advantage of Android 2.2, without restricting it to API level 8 (Android 2.2).

An example of the handling of media button intents

In our AndroidManifest.xml for this package we declare the class RemoteControlReceiver to receive MEDIA_BUTTON intents:

<receiver android:name="RemoteControlReceiver">
    <intent-filter>
        <action android:name="android.intent.action.MEDIA_BUTTON" />
    </intent-filter>
</receiver>

Our class to handle those intents can look something like this:

public class RemoteControlReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (Intent.ACTION_MEDIA_BUTTON.equals(intent.getAction())) {
            /* handle media button intent here by reading contents */
            /* of EXTRA_KEY_EVENT to know which key was pressed    */
        }
    }
}
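The intent carries the key press as a KeyEvent stored in the EXTRA_KEY_EVENT extra. As a minimal sketch (the key codes handled here are only an illustration, and the playback actions are left as comments), the body of onReceive() could read it like this:

            KeyEvent event = (KeyEvent) intent.getParcelableExtra(Intent.EXTRA_KEY_EVENT);
            if (event != null && event.getAction() == KeyEvent.ACTION_DOWN) {
                switch (event.getKeyCode()) {
                    case KeyEvent.KEYCODE_HEADSETHOOK:
                    case KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE:
                        // toggle playback, e.g. by sending a command to your playback service
                        break;
                    case KeyEvent.KEYCODE_MEDIA_NEXT:
                        // skip to the next track
                        break;
                }
            }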

In a media playback application, this is used to react to headset button presses when your activity doesn’t have focus. When it does have focus, we override the Activity.onKeyDown() or onKeyUp() methods so the user interface can trap the headset button-related events.
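For instance, a minimal sketch of such an override (what happens on a key press is left as a comment, since it depends on your application):

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        if (keyCode == KeyEvent.KEYCODE_HEADSETHOOK
                || keyCode == KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE) {
            // toggle playback here, for instance by calling into your playback service
            return true;  // the event was handled
        }
        return super.onKeyDown(keyCode, event);
    }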

However, this is problematic in the scenario we mentioned earlier. When the user presses “play”, what application should start playing? The Music application? The user’s preferred podcast application?

Becoming the “preferred” media button responder

In Android 2.2, we are introducing two new methods in android.media.AudioManager to declare your intention to become the “preferred” component to receive media button events: registerMediaButtonEventReceiver() and its counterpart, unregisterMediaButtonEventReceiver(). Once the registration call is placed, the designated component will exclusively receive the ACTION_MEDIA_BUTTON intent just as in the example above.

In the activity below, we create an instance of AudioManager with which we will register our component. We also create a ComponentName instance that references our intended media button event responder.

public class MyMediaPlaybackActivity extends Activity {
    private AudioManager mAudioManager;
    private ComponentName mRemoteControlResponder;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mAudioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
        mRemoteControlResponder = new ComponentName(getPackageName(),
                RemoteControlReceiver.class.getName());
    }

The system handles the media button registration requests in a “last one wins” manner. This means we need to select where it makes sense for the user to make this request. In a media playback application, proper uses of the registration are for instance:

  • when the UI is displayed: the user is interacting with that application, so (s)he expects it to be the one that will respond to the remote control,

  • when content starts playing (e.g. content finished downloading, or another application caused your service to play content)

Here, for instance, we perform the registration when our UI comes to the foreground:

    @Override
    public void onResume() {
        super.onResume();
        mAudioManager.registerMediaButtonEventReceiver(
                mRemoteControlResponder);
    }

If we had previously registered our receiver, registering it again pushes it up the stack; it doesn’t cause any duplicate registration.

Additionally, it may make sense to unregister your component so it is no longer called when your service or activity is destroyed (as illustrated below), or under conditions that are specific to your application. For instance, an application that reads the user’s appointments of the day aloud could unregister once it is done speaking the day’s calendar entries.

    @Override
    public void onDestroy() {
        super.onDestroy();
        mAudioManager.unregisterMediaButtonEventReceiver(
                mRemoteControlResponder);
    }

After “unregistering”, the previous component that requested to receive the media button intents will once again receive them.

Preparing your code for Android 2.2 without restricting it to Android 2.2

While you may appreciate the benefit this new API offers to the users, you might not want to restrict your application to devices that support this feature. Andy McFadden shows us how to use reflection to take advantage of features that are not available on all devices. Let’s use what we learned then to enable your application to use the new media button mechanism when it runs on devices that support this feature.

First we declare in our Activity the two new methods we have used previously for the registration mechanism:

    private static Method mRegisterMediaButtonEventReceiver;
    private static Method mUnregisterMediaButtonEventReceiver;

We then add a method that will use reflection on the android.media.AudioManager class to find the two methods when the feature is supported:

private static void initializeRemoteControlRegistrationMethods() {
    try {
        if (mRegisterMediaButtonEventReceiver == null) {
            mRegisterMediaButtonEventReceiver = AudioManager.class.getMethod(
                    "registerMediaButtonEventReceiver",
                    new Class[] { ComponentName.class });
        }
        if (mUnregisterMediaButtonEventReceiver == null) {
            mUnregisterMediaButtonEventReceiver = AudioManager.class.getMethod(
                    "unregisterMediaButtonEventReceiver",
                    new Class[] { ComponentName.class });
        }
        /* success, this device will take advantage of better remote */
        /* control event handling                                    */
    } catch (NoSuchMethodException nsme) {
        /* failure, still using the legacy behavior, but this app */
        /* is future-proof!                                        */
    }
}

The method fields will need to be initialized when our Activity class is loaded:

    static {
        initializeRemoteControlRegistrationMethods();
    }

We’re almost done. Our code will be easier to read and maintain if we wrap the use of our methods initialized through reflection in the two helpers below. Note the actual method invocation on our AudioManager instance inside each try block:

    private void registerRemoteControl() {
        try {
            if (mRegisterMediaButtonEventReceiver == null) {
                return;
            }
            mRegisterMediaButtonEventReceiver.invoke(mAudioManager,
                    mRemoteControlResponder);
        } catch (InvocationTargetException ite) {
            /* unpack original exception when possible */
            Throwable cause = ite.getCause();
            if (cause instanceof RuntimeException) {
                throw (RuntimeException) cause;
            } else if (cause instanceof Error) {
                throw (Error) cause;
            } else {
                /* unexpected checked exception; wrap and re-throw */
                throw new RuntimeException(ite);
            }
        } catch (IllegalAccessException ie) {
            Log.e("MyApp", "unexpected " + ie);
        }
    }

    private void unregisterRemoteControl() {
        try {
            if (mUnregisterMediaButtonEventReceiver == null) {
                return;
            }
            mUnregisterMediaButtonEventReceiver.invoke(mAudioManager,
                    mRemoteControlResponder);
        } catch (InvocationTargetException ite) {
            /* unpack original exception when possible */
            Throwable cause = ite.getCause();
            if (cause instanceof RuntimeException) {
                throw (RuntimeException) cause;
            } else if (cause instanceof Error) {
                throw (Error) cause;
            } else {
                /* unexpected checked exception; wrap and re-throw */
                throw new RuntimeException(ite);
            }
        } catch (IllegalAccessException ie) {
            System.err.println("unexpected " + ie);
        }
    }

We are now ready to use our two new methods, registerRemoteControl() and unregisterRemoteControl() in a project that runs on devices supporting API level 1, while still taking advantage of the features found in devices running Android 2.2.
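For completeness, here is one way these helpers could replace the direct calls shown earlier (a minimal sketch): the same Activity then runs unchanged on older devices, where the helpers simply do nothing.

    @Override
    public void onResume() {
        super.onResume();
        // on Android 2.2 this registers us as the preferred responder;
        // on older devices the reflection-backed helper is a no-op
        registerRemoteControl();
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        unregisterRemoteControl();
    }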

Android 1.5 at Google I/O

I admit, I've been talking big about Google I/O in my last few posts. But I'm entirely serious: Google I/O is going to be the Android developer event of the year, no doubt about it. I want to take a few minutes to explain why.

The most exciting aspect, to my mind, is the technical content. We have 9 sessions listed now on the Google I/O sessions site, and we're working on still more. (And that's not even including the fireside chat with the Android Core Technical Team.) I recently sat down with some of the speakers to discuss their topics, and found that this is very solid material. Here are some of the sessions I'm excited about.

My background is strictly in engineering, and I never had the chance in college to take any design courses. So one session I'll definitely be at is Chris Nesladek's "Pixel Perfect Code". He's going to start with the basics, and give us an overview of the theory of UI design, and then explain the principles that we use when designing the core Android UI. If you like the UI updates that you've seen in the Android 1.5 "Cupcake" user interface, then be at this session.

My particular team works intensively with developers to help them build and launch applications. Justin Mattson is going to share some of the hard-earned debugging and performance techniques that we've picked up in our work with partners. He's going to walk you through some actual, real-world apps on the Android Market and show you how we squeezed the bugs out of them.

Now, they told me to focus on only one or two sessions in this post, but forget that. I can't resist! I have to tell you about a couple more, like David Sparks' session on the media framework. One of the most common questions we get asked goes something like "dude, what is up with all these codecs? AAC? MP3? OGG? MPEG? H264?" David's going to answer that question, among many others, and explain how the media framework is designed and operates. Armed with this new understanding, you'll be able to make smarter choices as you design the media components of your own apps.

And last (for today), I want to mention Jeff Sharkey's "Coding for Life—Battery Life" session. A statement like "it's important to code efficiently on mobile devices" is deceptively simple. It turns out that what constitutes efficient code on, say, the desktop is sometimes woefully hard on battery life on mobile devices. What I've learned to tell developers is "everything you know is wrong." That's why I'm looking forward to Jeff's session. He's going to go through a whole basket of tips and tricks, backed up by some nice crunchy numbers.

And of course, these are just the technical sessions (and not even half of those.) We're also going to have quite a few folks representing some of our app developer and Open Handset Alliance partners at Google I/O, but I'll save those details for another post. I'm also looking forward to turning the tables, and giving some of you the floor. Besides the fireside chat where you can ask the Core Technical Team all the thorny technical questions you've been saving up, there's also a Lightning Talks session just for Android developers, and an Android Corner mixer area in the After-Hours Playground.

I'm also excited about a few surprises we've lined up... but I can't say anything about those, or they wouldn't be surprises, would they?

So, there you have it. Excitement! Drama! Surprises! It's like a movie trailer, but without the awesome voiceover. I hope it worked, and that you all are looking forward to Google I/O as much as I am. (By the way, I'm instructed to inform you that you can save a bit of coin by registering early. You might want to hurry though, since early registration ends May 1.)

Happy Coding!

Android 2.2 SDK refresh

As you may have noticed, the source code for Android 2.2, which we call Froyo, has been released.

The Android 2.2 SDK that was released at Google I/O contained a preview of the Froyo system image and today, we are releasing an update to bring it into sync with the system image pushed to Nexus One devices.

I encourage all developers to use the SDK manager to update to this version.

Android 1.5 is here!

I've got some good news today: the Android 1.5 SDK, release 1 is ready! Grab it from the download page.

For an overview of the new Android 1.5 features, see the 1.5 release notes page in our developer site.

I am also happy to let you know that our partners at HTC have made available new system images to upgrade your Android Dev Phone 1 (ADP1) to Android 1.5. This new version (which is only available for the ADP1) is based on the Cupcake branch from the Android Open Source Project and corresponds to the system image of the Android 1.5 SDK, release 1. If you have questions about the process of updating your device, you can ask the mailing list that we've set up.

I'd also like to note that Android developer phones like the ADP1 are intended for application development, rather than daily use. Additionally, they are operator-neutral and country-neutral, so they may not include certain features found on end-user devices.

Android 1.1 SDK, release 1 Now Available

Hello, developers! As you may have heard by now, users around the world have started to receive updates to their Android devices that provide new features and functionality. You may also have noticed that the new update reports as "Android 1.1". Applications written with the 1.0_r1 and 1.0_r2 SDKs will continue to work just fine on Android 1.1. But if you want to take advantage of the new APIs in 1.1, you'll need an updated SDK.

That's why I'm pleased to let you know that the Android 1.1 SDK, release 1 is now available. As you'll quickly see from the release notes, the actual API changes are quite minor, but useful. This new SDK has all the new APIs, as well as a new emulator image to let you test your applications. If you have a retail device running Android, contact your operator for the update schedule. An updated v1.1 system image for the Android Developer Phone 1 will be coming soon.

In addition to the new APIs, the emulator also contains improved ability to test localizations to the German language. Localizations for other languages will be added in future SDK releases.

You can download the updated SDK using the links above. Happy coding!

Android 2.2 and developer goodies

Today at Google I/O we announced that Android 2.2 is right around the corner. This is our seventh platform release since we launched Android 1.0 in September 2008. We wanted to highlight five areas in particular:

Performance & speed: The new Dalvik JIT compiler in Android 2.2 delivers a 2x-5x performance improvement in CPU-bound code over Android 2.1, according to various benchmarks.

New enterprise capabilities: We’ve added Exchange capabilities such as account auto-discovery and calendar sync. Device policy management APIs allow developers to write applications that can control security features of the device, such as remote wipe, minimum password requirements, and lockscreen timeout.

Faster, more powerful browser: We have brought the V8 JavaScript engine to the Android browser as part of 2.2. This has resulted in a 2-3X improvement in JavaScript performance vs. 2.1.

Rich set of new APIs and services: New data backup APIs enable apps to participate in data backup and restore, so an application's last data can be restored when it is installed on a new or reset device (a minimal backup-agent sketch follows this feature rundown). Apps can utilize Android Cloud to Device Messaging to enable mobile alert, send-to-phone, and two-way push sync functionality. Developers can now declare whether their app should be installed on internal memory or an SD card; they can also let the system automatically determine the install location. On the native side, a new API now gives access to Skia bitmaps.

Additions to Android Market: Android Market provides Android Application Error Reports, a new bug reporting feature, giving developers access to crash and freeze reports from users. Developers will be able to access these reports via their account on the Android Market publisher website.
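As an illustration of the data backup APIs mentioned above, here is a minimal sketch of a backup agent built on the helper classes introduced in Android 2.2. The preferences file name and backup key are hypothetical, and the agent would be referenced from your manifest through the android:backupAgent attribute:

import android.app.backup.BackupAgentHelper;
import android.app.backup.SharedPreferencesBackupHelper;

public class MyBackupAgent extends BackupAgentHelper {
    // hypothetical names used only for this sketch
    static final String PREFS_FILE = "user_settings";
    static final String PREFS_BACKUP_KEY = "prefs";

    @Override
    public void onCreate() {
        // back up (and restore) this app's shared preferences file
        addHelper(PREFS_BACKUP_KEY,
                new SharedPreferencesBackupHelper(this, PREFS_FILE));
    }
}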

For a complete list of everything we’ve included in Android 2.2, please see the platform highlights.

Developers can now download the Android 2.2 SDK and Android NDK, Revision 4 from the Android developer site.

Tools update

We are releasing new versions of the Android SDK Tools, Revision 6, the Eclipse plug-in ADT 0.9.7, and the Android NDK, Revision 4.

Android SDK Tools, Revision 6, Eclipse plug-in 0.9.7

These new versions include support for library projects that will help you share code and resources across several Android projects.

Android NDK, Revision 4

Workflow improvements

The new NDK brings a host of workflow improvements, from compilation to debugging. Starting with Android 2.2, the NDK enables debugging native code on production devices.

ARMv7 instruction set support

This release enables the generation of machine code for the ARMv7-A instruction set. Benefits include higher performance, as well as full use of the hardware FPU on devices that support it.

ARM Advanced SIMD (a.k.a. NEON) instruction support

The NEON instruction set extension can be used to perform parallel computations on integers and floating-point values. However, it is an optional CPU feature and will not be supported by all Android ARMv7-A based devices. The NDK includes a tiny library named "cpufeatures" that native code can use to test at runtime which features the device's CPU supports.

For more information, please see the release notes for the SDK Tools, ADT, and NDK.

As I said at the beginning, Android 2.2 will be here soon, and some devices will get the update in the coming weeks. I invite application developers to download the new SDK and tools and test their applications today.

Check out the video below to learn more about Android 2.2.

Android 2.1 SDK

Today, we are releasing the SDK component for Android 2.1, so that developers can take advantage of the new features introduced in Android 2.1. Please read the Android 2.1 release notes for more details. You can download the Android 2.1 component through the SDK Manager.

In addition to the new SDK, a new USB driver that supports Nexus One is also available today through the SDK Manager. The USB driver page contains more information.

An introduction to Text-To-Speech in Android

We've introduced a new feature in version 1.6 of the Android platform: Text-To-Speech (TTS). Also known as "speech synthesis", TTS enables your Android device to "speak" text of different languages.

Before we explain how to use the TTS API itself, let's first review a few aspects of the engine that will be important to your TTS-enabled application. We will then show how to make your Android application talk and how to configure the way it speaks.

Languages and resources

About the TTS resources

The TTS engine that ships with the Android platform supports a number of languages: English, French, German, Italian and Spanish. Also, depending on which side of the Atlantic you are on, American and British accents for English are both supported.

The TTS engine needs to know which language to speak, as a word like "Paris", for example, is pronounced differently in French and English. So the voice and dictionary are language-specific resources that need to be loaded before the engine can start to speak.

Although all Android-powered devices that support the TTS functionality ship with the engine, some devices have limited storage and may lack the language-specific resource files. If a user wants to install those resources, the TTS API enables an application to query the platform for the availability of language files and can initiate their download and installation. So upon creating your activity, a good first step is to check for the presence of the TTS resources with the corresponding intent:

Intent checkIntent = new Intent();
checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkIntent, MY_DATA_CHECK_CODE);

A successful check will be marked by a CHECK_VOICE_DATA_PASS result code, indicating that the device will be ready to speak once we create our android.speech.tts.TextToSpeech object. If not, we need to let the user know to install the data that's required for the device to become a multi-lingual talking machine! Downloading and installing the data is accomplished by firing off the ACTION_INSTALL_TTS_DATA intent, which will take the user to Android Market and let her/him initiate the download. Installation of the data will happen automatically once the download completes. Here is an example of what your implementation of onActivityResult() would look like:

private TextToSpeech mTts;

protected void onActivityResult(
        int requestCode, int resultCode, Intent data) {
    if (requestCode == MY_DATA_CHECK_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            // success, create the TTS instance
            mTts = new TextToSpeech(this, this);
        } else {
            // missing data, install it
            Intent installIntent = new Intent();
            installIntent.setAction(
                    TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installIntent);
        }
    }
}

In the constructor of the TextToSpeech instance we pass a reference to the Context to be used (here the current Activity), and to an OnInitListener (here our Activity as well). This listener enables our application to be notified when the Text-To-Speech engine is fully loaded, so we can start configuring it and using it.
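Assuming the Activity itself implements TextToSpeech.OnInitListener (as in the snippet above, where we pass this twice), a minimal sketch of that callback could look like this:

    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            // the engine is loaded; it is now safe to configure it and queue utterances
            mTts.setLanguage(Locale.US);
        } else {
            // initialization failed, for instance because no engine is installed
            Log.e("MyApp", "could not initialize TextToSpeech");
        }
    }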

Languages and Locale

At Google I/O, we showed an example of TTS where it was used to speak the result of a translation from and to one of the 5 languages the Android TTS engine currently supports. Loading a language is as simple as calling for instance:

mTts.setLanguage(Locale.US);

to load and set the language to English, as spoken in the country "US". A locale is the preferred way to specify a language because it accounts for the fact that the same language can vary from one country to another. To query whether a specific Locale is supported, you can use isLanguageAvailable(), which returns the level of support for the given Locale. For instance the calls:

mTts.isLanguageAvailable(Locale.UK)
mTts.isLanguageAvailable(Locale.FRANCE)
mTts.isLanguageAvailable(new Locale("spa", "ESP"))

will return TextToSpeech.LANG_COUNTRY_AVAILABLE to indicate that the language AND country as described by the Locale parameter are supported (and the data is correctly installed). But the calls:

mTts.isLanguageAvailable(Locale.CANADA_FRENCH)
mTts.isLanguageAvailable(new Locale("spa"))

will return TextToSpeech.LANG_AVAILABLE. In the first example, French is supported, but not the given country. And in the second, only the language was specified for the Locale, so that's what the match was made on.

Also note that besides the ACTION_CHECK_TTS_DATA intent to check the availability of the TTS data, you can also use isLanguageAvailable() once you have created your TextToSpeech instance, which will return TextToSpeech.LANG_MISSING_DATA if the required resources are not installed for the queried language.

Making the engine speak an Italian string while the engine is set to the French language will produce some pretty interesting results, but it will not exactly be something your user would understand. So try to match the language of your application's content and the language that you loaded in your TextToSpeech instance. Also, if you are using Locale.getDefault() to query the current Locale, make sure that at least the default language is supported.
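For instance, a minimal sketch of that check, which falls back to US English when the user's default Locale isn't fully supported (the fallback choice is just an illustration):

Locale defaultLocale = Locale.getDefault();
if (mTts.isLanguageAvailable(defaultLocale) >= TextToSpeech.LANG_AVAILABLE) {
    // the language (and possibly country) of the default Locale is supported
    mTts.setLanguage(defaultLocale);
} else {
    // fall back to a language we know the engine ships with
    mTts.setLanguage(Locale.US);
}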

Making your application speak

Now that our TextToSpeech instance is properly initialized and configured, we can start to make your application speak. The simplest way to do so is to use the speak() method. Let's iterate on the following example to make a talking alarm clock:

String myText1 = "Did you sleep well?";
String myText2 = "I hope so, because it's time to wake up.";
mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, null);
mTts.speak(myText2, TextToSpeech.QUEUE_ADD, null);

The TTS engine manages a global queue of all the entries to synthesize, which are also known as "utterances". Each TextToSpeech instance can manage its own queue in order to control which utterance will interrupt the current one and which one is simply queued. Here the first speak() request would interrupt whatever was currently being synthesized: the queue is flushed and the new utterance is queued, which places it at the head of the queue. The second utterance is queued and will be played after myText1 has completed.

Using optional parameters to change the playback stream type

On Android, each audio stream that is played is associated with one stream type, as defined in android.media.AudioManager. For a talking alarm clock, we would like our text to be played on the AudioManager.STREAM_ALARM stream type so that it respects the alarm settings the user has chosen on the device. The last parameter of the speak() method allows you to pass to the TTS engine optional parameters, specified as key/value pairs in a HashMap. Let's use that mechanism to change the stream type of our utterances:

HashMap<String, String> myHashAlarm = new HashMap<String, String>();
myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
        String.valueOf(AudioManager.STREAM_ALARM));
mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, myHashAlarm);
mTts.speak(myText2, TextToSpeech.QUEUE_ADD, myHashAlarm);

Using optional parameters for playback completion callbacks

Note that speak() calls are asynchronous, so they will return well before the text is done being synthesized and played by Android, regardless of the use of QUEUE_FLUSH or QUEUE_ADD. But you might need to know when a particular utterance is done playing. For instance you might want to start playing some annoying music after myText2 has finished synthesizing (remember, we're trying to wake up the user). We will again use an optional parameter, this time to tag our utterance as one we want to identify. We also need to make sure our activity implements the TextToSpeech.OnUtteranceCompletedListener interface:

mTts.setOnUtteranceCompletedListener(this);
myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
        String.valueOf(AudioManager.STREAM_ALARM));
mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, myHashAlarm);
myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID,
        "end of wakeup message ID");
// myHashAlarm now contains two optional parameters
mTts.speak(myText2, TextToSpeech.QUEUE_ADD, myHashAlarm);

And the Activity gets notified of the completion in the implementation of the listener:

public void onUtteranceCompleted(String uttId) {
    // compare the utterance ID with equals(), not ==
    if ("end of wakeup message ID".equals(uttId)) {
        playAnnoyingMusic();
    }
}

File rendering and playback

While the speak() method is used to make Android speak the text right away, there are cases where you would want the result of the synthesis to be recorded in an audio file instead. This would be the case if, for instance, there is text your application will speak often; you could avoid the synthesis CPU-overhead by rendering only once to a file, and then playing back that audio file whenever needed. Just like for speak(), you can use an optional utterance identifier to be notified on the completion of the synthesis to the file:

HashMap<String, String> myHashRender = new HashMap<String, String>();
String wakeUpText = "Are you up yet?";
String destFileName = "/sdcard/myAppCache/wakeUp.wav";
myHashRender.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, wakeUpText);
mTts.synthesizeToFile(wakeUpText, myHashRender, destFileName);

Once you are notified of the synthesis completion, you can play the output file just like any other audio resource with android.media.MediaPlayer.
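For instance, a minimal playback sketch using MediaPlayer, reusing the destFileName from the previous snippet (error handling kept to a bare minimum):

MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource(destFileName);
    player.setAudioStreamType(AudioManager.STREAM_ALARM);
    player.prepare();
    player.start();
} catch (IOException ioe) {
    // the file may be missing or unreadable
    Log.e("MyApp", "could not play back " + destFileName, ioe);
}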

But the TextToSpeech class offers other ways of associating audio resources with speech. So at this point we have a WAV file that contains the result of synthesizing our wakeUpText string in the previously selected language. We can tell our TTS instance to associate the contents of that string with an audio resource, which can be accessed through its path, or through the package it's in and its resource ID, using one of the two addSpeech() methods:

mTts.addSpeech(wakeUpText, destFileName);

This way any call to speak() for the same string content as wakeUpText will result in the playback of destFileName. If the file is missing, then speak will behave as if the audio file wasn't there, and will synthesize and play the given string. But you can also take advantage of that feature to provide an option to the user to customize how the wake-up prompt sounds, by recording their own version if they choose to. Regardless of where that audio file comes from, you can still use the same line in your Activity code to ask repeatedly "Are you up yet?":

mTts.speak(wakeUpText, TextToSpeech.QUEUE_ADD, myHashAlarm);

When not in use...

The text-to-speech functionality relies on a dedicated service shared across all applications that use that feature. When you are done using TTS, be a good citizen and let it know you won't be needing its services anymore by calling mTts.shutdown(), in your Activity onDestroy() method for instance.
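A minimal sketch of that cleanup, assuming mTts is the instance created earlier:

@Override
protected void onDestroy() {
    if (mTts != null) {
        mTts.stop();      // interrupt anything currently being spoken
        mTts.shutdown();  // release the resources used by the TTS engine
    }
    super.onDestroy();
}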

Conclusion

Android now talks, and so can your apps. Remember that in order for synthesized speech to be intelligible, you need to match the language you select to that of the text to synthesize. Text-to-speech can help you push your app in new directions. Whether you use TTS to help users with disabilities, to enable the use of your application while looking away from the screen, or simply to make it cool, we hope you'll enjoy this new feature.

Adjustment to Market Legals

Please note that we have updated the Android Market Developer Distribution Agreement (DDA). This is in preparation for some work we’re doing on introducing new payment options, which we think developers will like.

In the spirit of transparency, we wanted to highlight the changes:

  • In Section 13.1, “authorized carriers” have been added as an indemnified party.

  • Section 13.2 is new in its entirety, covering indemnity for payment processors for claims related to tax accrual.

These new terms apply immediately to anyone joining Android Market as a new publisher. Existing publishers have been notified of this change via email; they have up to 30 days to sign into the Android Market developer console to accept the new terms.

Android 1.6 SDK is here

I am happy to let you know that Android 1.6 SDK is available for download. Android 1.6, which is based on the donut branch from the Android Open Source Project, introduces a number of new features and technologies. With support for CDMA and additional screen sizes, your apps can be deployed on even more mobile networks and devices. You will have access to new technologies, including framework-level support for additional screen resolutions, like QVGA and WVGA, new telephony APIs to support CDMA, gesture APIs, a text-to-speech engine, and the ability to integrate with Quick Search Box. What's new in Android 1.6 provides a more complete overview of this platform update.

The Android 1.6 SDK requires a new version of Android Development Tools (ADT). The SDK also includes a new tool that enables you to download updates and additional components, such as new add-ons or platforms.

You can expect to see devices running Android 1.6 as early as October. As with previous platform updates, applications written for older versions of Android will continue to run on devices with Android 1.6. Please test your existing apps on the Android 1.6 SDK to make sure they run as expected.

Over the next several weeks, we will publish a series of blog posts to help you get ready for the new developer technologies in Android 1.6. The following topics, and more, will be covered: how to adapt your applications to support different screen sizes, integrating with Quick Search Box, building gestures into your apps, and using the text-to-speech engine.

If you are interested in seeing some highlights of Android 1.6, check out the video below.

Happy coding!