Kotlin: A Learning Process

By Xinye Ji

For those who aren’t in Android development circles, Kotlin has been on the rise since its unveiling in 2011. At Google I/O 2017, Kotlin was announced as an officially supported language for Android, and adoption of the language has soared in the last year. I recently decided to pick up the language and hope to implement the upcoming features in our products with Kotlin.

Obviously, switching to a new language always has its hurdles. Syntactically, things always take a while before you get into a groove, but that’s always a part of the expected learning curve.

One of the main benefits of Kotlin over Java is that Kotlin values making code more condensed. For developers who aren’t particularly fond of Java’s verbose nature, I think this is a welcome change. There aren’t any massive paradigm shifts, but many smaller changes add up to a more streamlined development experience.
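To make the contrast concrete, here is a small sketch (the class and values are my own invention, not from the original post): a Kotlin data class generates the constructor, equals(), hashCode(), toString(), and copy() that a comparable Java class would need written by hand.

```kotlin
// One line replaces the constructor, getters, equals(), hashCode(),
// and toString() a typical Java model class would spell out.
data class User(val name: String, val age: Int)

fun main() {
    val ada = User("Ada", 36)
    val older = ada.copy(age = 37)   // copy() comes for free
    val (name, age) = older          // so does destructuring
    println("$name is $age")
}
```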

But aside from these mostly aesthetic changes, Kotlin takes many design cues from Joshua Bloch’s Effective Java. For me, this had some mixed results. Some of my lazier programming practices cropped up, resulting in me rethinking and revisiting my thought process when implementing some features in my sandbox app.

Thankfully, because Kotlin is developed by JetBrains, the company behind the IntelliJ platform that Android Studio is built on, a lot of these things are pointed out by the IDE itself. For example, my biggest lazy habit was not considering mutability. In Kotlin, one has to declare variables as mutable or immutable with var or val, respectively. If Android Studio (or rather, IntelliJ) detects that something you have declared is never changed after you’ve created it, it’ll flag that line and suggest you change it to immutable. If I had to encapsulate a lot of Kotlin’s language design decisions, I’d say that it almost forces you to make better architectural decisions by employing an opt-out mentality rather than an opt-in one.
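As a quick sketch of the var/val distinction (the function here is my own toy example, not from the post):

```kotlin
// val declares a read-only reference; var declares a mutable one.
// IntelliJ flags any var that is never reassigned and suggests val.
fun increment(start: Int): Int {
    val step = 1       // never reassigned, so val is the right choice
    var total = start  // reassigned below, so var is required
    total += step
    return total
}

fun main() {
    val greeting = "Hello"
    // greeting = "Hi"  // would not compile: a val cannot be reassigned
    println("$greeting, ${increment(41)}")
}
```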

Overall, I think this makes the learning curve more difficult, as it can produce unexpected behaviors if you haven’t read all the documentation surrounding certain features in Kotlin. But in the long run, I think this will lead to higher levels of productivity.

On a more personal note, I don’t think I have any particular preference for Java or Kotlin yet. However, I’ve been working mostly with Java for the past few years, and there are still many things I’m learning about Kotlin as I play around with it, so one should consider that my familiarity with the two languages is definitely not equal. In general, I’d say that (at the very least) trying Kotlin is a worthwhile endeavor for any native Android developer.

Google I/O 2015 – A Developer’s Perspective

By Xinye Ji

This year’s Google I/O was rather anti-climactic in some ways. People were expecting a whole new refresh like Lollipop was last year, and a lot of people were expecting a new Nexus 5 (which may or may not have been previewed when introducing the new USB Type-C connector…). Instead, Google focused on support, stability, and efficiency.

So, in the eyes of many, maybe Google I/O wasn’t so exciting, as it was more of a ‘tock’ than a ‘tick’ kind of update. However, for both Google and Android developers, this update lets us breathe a sigh of relief. New letter versions (like L or, in this case, M) typically introduce more of the fragmentation issues that Android has been so infamously known for. This time around, while we see some features that are not supported in older versions of Android (such as the revamp of app permissions), the support and backwards compatibility of those features won’t completely break an app.

The Android M preview was also released with its developer counterpart, Android Studio 1.3, along with the 1.3 Gradle plugin.

The Good:

Android NDK

For those of you unaware, the Android NDK allows developers to work natively in C or C++, and Android Studio 1.3 now supports it directly. I’m personally quite excited about this, as it allows a much wider spectrum of developers to pick up Android development. I’m sure we’ll see some amazing libraries come out in the ensuing months, perhaps with more hardware and low-level control.

The Bad:

Additionally, during the “What’s New in Android Testing” presentation, many, many features were slated for “the next few weeks.” It’s unfortunate, but it seems some parts of the product were not ready in time for Google I/O.

The Ugly:

This build is… buggy, to say the least. The initial release of the 1.3 Canary build had some expected errors, like trouble switching to the M preview build and certain Gradle issues that came with that. But it also had other issues, such as telling you that an overriding method never implemented its superclass counterpart when it clearly did.

The issue has since been patched, and I understand this is a canary (very early beta) build, but come on, guys!

One thing is very clear for this year’s Google I/O, however.

A Focus on Just Making Better Apps.


One big thing is the Captures feature in Android Studio, which allows you to get CPU metrics for your connected device. I recall this used to be a giant cluster of irrelevant data. Now it has a more intuitive UI and well-detailed metrics that will help you hunt down CPU hangups and memory leaks.

Theme Editor:

If any of you have done Android development, you know one huge pain was setting up themes. This time around, there are new theme and layout editors. The theme editor has some amazing features: it helps you integrate material design into your app, and it removes a lot of the boilerplate you would otherwise need to generate.

Additionally, we have a revamped layout editor, which definitely seems new and improved. The demo at Google I/O didn’t generate a bunch of gibberish code, and the visual UI designer has piqued my interest. In fact, I suspect many developers may start using it rather than blindly typing into the XML file and hoping the UI looks as intended.

Sadly at the time of posting, this tool is not yet available on the preview.

Android Design Library:

Since I learned about material design, I’ve always wanted proper support for it. At Google I/O this year, the Android Design Library was released, which is basically everything I wanted from Google as far as implementing material design goes, with support all the way back to Android 2.1 (Eclair).


Testing:

There was a large chunk of support for testing during Google I/O, including UI testing, proper unit testing, and automation of those tests. A lot of the testing process is now more tightly integrated with Android Studio, and I, for one, am very excited to check these out in my own geeky way.

The one thing all of these tools have in common is that they are incredibly mundane to the end user but incredibly exciting to most developers. The updates will help us improve our development process to make better, more consistent, more reliable, and more powerful apps.

Google I/O 2015


By Ganpy

I may not be technically qualified to analyze all the new announcements made at this week’s Google I/O purely from a programmer/developer’s point of view, but as someone closely associated with the industry, I find myself qualified enough to comment generally on the overall outcome of the many things that came out of Moscone West.

The beauty of Google I/O is that it is a little more grounded and humble compared to, say, Apple’s WWDC. Not that I have a problem with the latter approach; it is, after all, a dog-and-pony marketing event, and you have to be on your A game. Google just likes to adopt a different strategy. With the Google umbrella casting a much larger shadow on a typical global citizen’s daily digital life than Apple’s, it is simply impressive that this strategy works for them, every year.

To me, what clearly stood out was how Android-centric Google I/O was this year. This also highlighted the significance of the times we live in: mobile, the Internet of Things (IoT), and all the new innovations around them.


Android M

Well. Google acknowledged that it is going to focus on ‘usability’ with Android M. It’s not that Android is unusable in its current form, but the general focus with L was on design, so with M, Google may just be polishing off the usability issues that came with L. Again, I am not a programmer, so I wouldn’t know how to technically break down the improvements. But, for example, the ability to change an app’s permissions on the fly is such a leap. Improvements like these will keep Android ahead of iOS when it comes to features and what one can do with them as a developer.

Android Watch

No one saw how quickly or how powerfully the Apple Watch would dominate the wearables market. With so many different Android-based watches to choose from, as opposed to a single device in the iOS world, Android wearables have sort of turned out to be secondary choices for many users looking for smartwatches. With some really cool new features announced this year, it is clear that Google will continue to show Apple how to get it right in smartwatches, and Apple wouldn’t mind playing the catch-up game as long as it can keep its market share this high.

Brillo and Weave

Internet of Things: just a few months ago, this was such a complicated phrase to explain. Now it seems like everyone talks about it. What a rapidly changing time we live in! We are definitely living through a phase of technological development where we invent more solutions to solve problems created by our previous innovations than completely new solutions. Brillo and Weave are good examples of this. Brillo is sort of like Android lite, an OS to be used by all the devices connected to your life’s IoT, while Weave is the new protocol those devices will follow to communicate with one another. I know that is putting it simply, but that’s exactly how they pitched it. Let’s see how rapidly they take over our lives. I am not going gaga over them yet.

Google Photos

Well. In some ways, all Google fans could slam my statement, but I, for one, feel this is one of the few areas where Google is playing catch-up with Apple. Nothing announced as new in Google Photos made me go “wow,” the lifelong free storage notwithstanding. I will be very curious to see how many people really feel that free storage is an incentive to switch to Google Photos. This may be another area where Google treads the dangerous privacy issue in a ‘greyish’ sort of manner, if you know what I mean.

Google Now

I will keep it short. Google Now is really how Google is trying to make all your frequently used apps redundant. As conflicting as that may feel, it is very impressive nevertheless. Google Now may soon be the only thing you need on your smartphone.

Cloud Test Lab

Another eye-opening announcement. As a team, we haven’t fully discussed how this could change our testing scenarios. It is well known that one thing has been haunting Android development labs all around the world: fragmentation. With something like this, Google could potentially eliminate fragmentation as a factor to consider while developing your next Android app. We have seen spurts of third-party service offerings along these lines and were interested in looking at those for our internal use. But if Google can provide something like this with all its blessings, then we don’t have to look far for this service in the future.


Expeditions

Expeditions was the biggest breakthrough announcement yesterday (for me). It may be hard for me to express why in a very eloquent way, but the classroom Expeditions demo awakened the student in me.

The GoPro rig, combined with the kind of VR editing options you have, YouTube as a new platform to actually play and watch VR videos, and a ridiculously simple device like a piece of cardboard… I think we are sitting on the next major technological innovation.

The GoPro rig may not be for everyone, as I expect the cost to run into a few thousand dollars, but the day is not far off when all of us can sit in the luxury of our couches and perhaps discuss the rate at which climate change is happening all around us, as we watch a live VR capture of another big ice shelf breaking away from the coast of Antarctica.

And that, dear readers, would be the ironic and bittersweet future we all can look forward to!

Android Wear – Welcome to the Future

By Xinye Ji

Whether or not you’ve been following Android Wear, I think we can all agree that the integration of technology into our everyday lives is becoming more and more apparent. These changes, however, are small and gradual, not sudden: people slowly make minor compromises to incorporate technology into their everyday lives. For example, we hear less and less about people clinging to their flip phones as we move through this decade, and more and more about luddites taking their first wobbly steps. It’s strange to think that a decade prior to the founding of Android in 2003, cell phones in general weren’t nearly as widespread as smartphones are today.

But like I said, the change is gradual, and Android Wear is a part of that change.

Diffusion of Innovation: Accepting Change

Everett Rogers wrote a book called Diffusion of Innovations, which outlines the pattern by which an innovation reaches critical mass. I’m not going to go into the specifics of what causes an innovation to be adopted by the general public, as that is not my area of expertise. However, there are typically five categories people fall into as a new technology assimilates into society: Innovators (2.5% of users), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%), and Laggards (16%).
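Rogers’ five categories can be read as cumulative adoption thresholds. A small illustrative sketch (the function name and the encoding are my own, not Rogers’):

```kotlin
// Maps a cumulative adoption fraction (0.0..1.0) to Rogers' category.
// Boundaries are cumulative: 2.5%, then +13.5% = 16%, +34% = 50%, +34% = 84%.
fun adopterCategory(cumulative: Double): String = when {
    cumulative <= 0.025 -> "Innovators"
    cumulative <= 0.16  -> "Early Adopters"
    cumulative <= 0.50  -> "Early Majority"
    cumulative <= 0.84  -> "Late Majority"
    else                -> "Laggards"
}

fun main() {
    // Someone in the first tenth of all eventual adopters sits in
    // the Early Adopter band, like today's Android Wear users.
    println(adopterCategory(0.10))
}
```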



As it stands, people using Android Wear currently would fall under the early adopter category.

Where it’s come from: Pebble the Predecessor

On April 11th, 2012, Pebble Technology started a Kickstarter campaign to create Pebble, a smartwatch that interacts with your phone. The Kickstarter blew past its $100,000 goal by over 10,000%, closing on May 18th with a grand total of $10,266,844.

I’d argue that this Kickstarter is what piqued the interest of Google and the hardware companies building Android devices.

Where it stands now: Early Adopters

On March 18th, 2014, Google announced Android Wear. Later, in June, the Samsung Gear Live and LG G Watch were launched at Google I/O. Right now, we are seeing innovators push new products to the early adopters. This is a pivotal time for wearable technology. Many wonder if smartwatches will be able to do what Google Glass has not: push us closer toward the future.

As of now, Android Wear does a handful of things. The concept is having phone information at a glance. A lot of this information is based on cards from Google Now, which, for the time being, can keep track of a lot of your personal data. For example, it can track stock prices, your emails, where you parked, or the traffic on your commute home from work, among many other things. Also, because of its integration with Google Now, any commands you dictate with “OK Google” can be applied to your watch: you can text a friend, send an email, find a contact, or even get navigation. Additionally, these smartwatches can display notifications from third-party applications on your phone.


Where it’s going

In all honesty, I believe that Android Wear’s success will be entirely in the hands of developers. Teams like ours at Cogent IBS will be the ones to make smartwatches a truly seamless experience. For example, the next iteration of DynaMeet could easily integrate Android Wear; we could set up a meeting without even pulling out our phones.

However, this technology could just as easily flop. Being at the stage where early adopters begin using your product can make or break it, as early adopters tend to have the greatest effect on new technologies. But if Pebble’s Kickstarter is any indicator of the demand for these devices, we’ll be welcoming the future much sooner than we think.

Google I/O Roundup

By Sean Kollipara


Sundar Pichai, Google I/O



Google held its annual I/O Developer Conference last week.  During the keynote, Google unveiled a preview of L—the next iteration of Android—as well as technology for a number of platforms other than the smartphone.  These included watches, TVs, and automobiles.  This post aims to delve into L and explore some of its features.  In addition, we’ll touch on the new areas of Android exposure, as well as a few other Google tidbits.

Android L

The most noticeable change that users will see in the Android L Preview is the user interface.  Google introduced a new set of design guidelines, aptly dubbed “Material Design,” to influence the user interfaces that it puts on its products across all platforms.  This is not just limited to Android, but to the UIs of Google’s web services, too, such as Gmail.  The idea is that it can be a single design paradigm that works responsively and intuitively across all form factors: phones, tablets, and computers.  Google is encouraging Android developers to use these guidelines when designing Android apps, and even released the Polymer project for using these same principles on the web.  All Google apps are currently undergoing a redesign to help introduce material design, and we were able to see a preview of the new Gmail app during the keynote.

The new “Material” theme introduces responsive graphics.  When a button or table row is pressed, radial gradients appear at the location of the press and move outward from that location.  The overscroll effect has been changed as well.  Instead of glowing when you reach the top or bottom of a scrollable area, a semi-transparent area now reaches out toward your finger from the edge of the scrollable area.  It is curved, with the apex of the curve being determined by the location of the user’s finger.

L introduces a feature that people have desired since the introduction of Google’s Chromecast: device screen casting.  This allows the user to mirror the device’s screen on any device that supports Google Cast.  Such a feature is ideal for sharing photos or videos on a TV screen during gatherings with family or friends.  Noted Android developer Koushik Dutta attempted to do this with his AllCast app—which can cast to Apple TV, Xbox, and Roku, as well—but Google removed the APIs that enabled him to do this shortly after Chromecast’s release.  As of now, the screencasting feature is limited to Chromecast because it is the only Google Cast device available on the market.  Android TV devices will have built-in Google Cast support when they become available to consumers.  I tried screencasting using my Nexus 7 (2013) and Chromecast, but the device was not able to find the Chromecast device for screen mirroring.  However, casting through an app like YouTube or Netflix works just fine.

With L, Google launched an initiative named Project Volta.  This project aims to improve Android’s battery usage.  The project starts with Battery Historian, an improved interface for viewing battery-usage statistics collected from the device.  Volta also introduces a job-scheduling framework, which apps can use to schedule specific tasks.  Developers qualify jobs with constraints describing how important a job is and when it needs to be executed.  The Android system then executes those tasks in a batch fashion with other apps’ tasks according to the specified constraints.  The idea is to group tasks’ execution together to minimize the battery consumption of background work.
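The batching idea can be sketched with a toy model (my own illustration, not the real Android scheduling API): jobs declare a constraint, and work sharing a constraint runs together, so the radio or CPU wakes up once per batch instead of once per job.

```kotlin
// Toy sketch of constraint-based batching; not the Android JobScheduler API.
data class Task(val name: String, val constraint: String)

// Group tasks by their declared constraint so each batch can run
// during a single wakeup (e.g. "charging" or "unmetered-network").
fun batchByConstraint(tasks: List<Task>): Map<String, List<String>> =
    tasks.groupBy({ it.constraint }, { it.name })

fun main() {
    val tasks = listOf(
        Task("sync-mail", "unmetered-network"),
        Task("upload-photos", "unmetered-network"),
        Task("trim-cache", "charging")
    )
    println(batchByConstraint(tasks))
}
```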

L also introduces the Android Runtime, or ART, as the default virtual machine.  A preview of ART was available on KitKat, but it had to be manually enabled.  With L, Dalvik—the previous virtual machine—is no longer available.  ART introduces ahead-of-time compilation and improved garbage collection.  This results in apps that run almost twice as fast, according to the presentation at the I/O keynote.  Best of all, with very limited exceptions, developers don’t need to do anything to take advantage of the improved runtime.

Some further highlights:

  • The “back,” “home,” and “recent apps” icons have been replaced with simple shapes: a left pointing triangle, a circle, and a square, respectively.  These remind me of the shapes that Sony uses on PlayStation controllers.
  • The Settings app’s UI has been re-worked: instead of being a full-on list, it now has two columns in an edge-to-edge, tile-like interface.  It also has a new icon, which looks remarkably better than the Touchwiz-esque icon introduced in KitKat.
  • The notification drawer has been redesigned for Material.  It has more condensed fonts, and the toggles are now accessed by extending the notification drawer further, rather than having it perform a 3D flip or swiping down with two fingers.
  • The recent apps screen now scrolls depthwise in a 3D fashion.  Apps can have multiple entries in the recents list.  For instance, if you have multiple tabs open in the browser, there will be an entry for each tab in the recents list.  This saves the step of first having to go into the browser before choosing which tab you wish to use.

L and the Enterprise

It’s no secret that Android has struggled to gain traction in the enterprise.  Apple has been dominating the enterprise, mostly because iOS has a far superior set of enterprise smartphone features.  Technologies like single-sign-on and per-app VPN are important to the enterprise environment, and Android has thus far been lacking in these areas.  However, with more and more companies switching to a bring-your-own-device (“BYOD”) model, it is imperative that any serious mobile platform have a rich feature set for the enterprise market.

With L, Google aims to put an end to Apple’s enterprise dominance.  Google has teamed up with Samsung to integrate the latter’s Knox technology directly into the Android operating system.  Knox is a platform for sandboxing the personal aspects of your phone from the work-related aspects.  The inclusion of this framework will allow enterprises to deploy and manage apps to Android devices while keeping their data and management separate from the rest of the data and apps on the user’s phone.

Android Wear

Google announced the Wear initiative in March, making clear that it intended to take Android into the wearable market.  Currently, the term wearable is most often associated with smartwatches, which aren’t anything new.  Samsung has already released two generations of smartwatches, and a number of other vendors have models on the market.  However, all of these offerings have come with limitations.  For instance, the Samsung models only work if you have one of a few select Samsung smartphones.  These types of restrictions have kept wearables to a niche market.

With the Android Wear program, Google intends to take smartwatches—and whatever other wearables are yet to arrive—into the mainstream by introducing a set of standardized APIs for the OS to integrate with wearables.  Two watches are already available for pre-order: the LG G Watch and the Samsung Gear Live.  The highly-anticipated, round-faced Motorola Moto 360 will be available later this summer.

Android TV

This is Google’s third foray into working on the biggest screen in the home.  After Google TV failed miserably, the next venture was the Chromecast HDMI dongle.  The Chromecast has been relatively successful, and now Google wants to bring a full-on operating system directly to the TV.  It will of course be Google Cast compatible, but it will also have a media selection interface that will allow the streaming of music or movies from the corresponding services in the Google Play Store.

Android Auto

The extension of mobile operating systems to the automobile seems like a natural progression, but one can’t help but think that Android Auto is a direct response to Apple’s CarPlay, announced at last year’s WWDC.  Android Auto will allow steering wheel controls and a touchscreen to control the phone.  It will provide shortcuts to commonly-used smartphone functions in the car, such as the dialer and navigation.  Most importantly, however, it will be heavily voice-centric, which is intended to reduce the risky behavior of texting while driving.

Google Glass

Glass was noticeably absent from the I/O keynote this year.  This could have something to do with the rounds of negative publicity that it has been receiving in recent months.  Privacy concerns have caused many people to frown upon “glassholes,” or people who use Glass in public.  In November of last year, a Glass user was kicked out of a restaurant in Seattle for wearing Glass during his visit.  A month prior to that, a woman in California was ticketed for driving with Glass, though that charge was dismissed this January.  People have also been detained and questioned by the Department of Homeland Security for wearing Glass in movie theaters, as they have been suspected of attempted piracy.  Most recently, Glass has been banned from movie theaters in the UK.  Glass has also become a symbol of the tech industry’s presence in San Francisco.  Well-paid tech workers are bringing wealth to San Francisco, causing housing prices to rise and many non-tech folks to lose their homes.

Despite all of this, Google has been pressing forward with Glass.  In May of this year, it opened sales of the device to the US public.  It also recently became available outside of the US.  And the day before I/O, it announced that new models will be shipping with 2 GB of RAM instead of 1 GB.


Cardboard

At the end of the keynote, Sundar Pichai informed every developer that they would receive a mysterious-looking cardboard box.  Given the dimensions of the box, some thought a new Nexus tablet would be inside.  As it turns out, the cardboard is a pre-cut kit for creating a virtual reality experience using the phone’s screen.  The goal is to keep it inexpensive.  It comes with an SDK and developer tools to help create apps for this experience.  There are also instructions online to build your own Cardboard VR set from extra cardboard that you have lying around the house.

Final Word

Google introduced a lot of new things at I/O, but they made it very clear that their approach to mobile was phone-first.  The core of the Android experience still relies on the smartphone, but it is augmented by an increasing number of other smart devices with which the phone can communicate to provide a fresh, new experience in familiar settings such as the car and the living room.

A Brief Overview of Android Fragments

By Sean Kollipara

Introduction: Activities

When learning to develop for the Android platform, one of the first components of the Android SDK that you will encounter is the Activity object.  An activity is the basic component that provides UI and functionality in an Android app.  It has a lifecycle and callbacks associated with the lifecycle events, and is contained within its own window.  Multiple activities are usually present in a single application.

For example, consider Cogent’s directory app, mPower mD.  When launching mD, the first thing you see is the login screen.  It is its own activity.  Upon successful login, the login activity closes, and the home activity appears.  The home activity contains tabs that enable the user to navigate through the app content.  This is the basic gist of an activity.

Meet Fragments

Android 3.0 introduced the concept of a fragment.  A fragment can be thought of as a reusable piece of an activity.  It can have its own UI, its own functionality, or both.  Like an activity, a fragment has its own lifecycle and a set of callbacks to respond to its lifecycle events.  Creating a fragment is easy: make a new class that extends the Fragment class.

public class MyFragment extends Fragment {
    // members and lifecycle callbacks go here
}



Fragment Lifecycle Callbacks

When creating your own fragment, there are a couple of lifecycle callbacks you’ll want to define at minimum.  The first is onCreate().  It is just like the method of the same name in an activity: it takes a Bundle parameter, and you use it to perform initialization, such as calling through to the superclass implementation.


@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // perform any other initialization here
}

Unlike an activity, a fragment does not have a setContentView() method.  Instead, it has a lifecycle callback named onCreateView().  This is the other callback that you’ll probably want to define.  This callback is used to inflate a view and perform other operations on it as necessary.  For example, you might want to assign some of the UI elements as members of the class if you need to perform operations on them during the fragment’s lifecycle.

public class MyFragment extends Fragment {

    public Button mButton;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // Inflate the fragment's layout and keep a reference to the button
        View myView = inflater.inflate(R.layout.myview, container, false);
        mButton = (Button) myView.findViewById(R.id.mybutton);
        return myView;
    }
}



Other lifecycle callbacks for fragments include onAttach(), onActivityCreated(), onStart(), onResume(), onPause(), onStop(), onDestroyView(), onDestroy(), and onDetach().  As you can see, many of these are similar to the lifecycle callbacks for an activity, though a few are specific to fragments only.

Why use fragments instead of activities?

Fragments are an ideal solution when you need reusable pieces of UI and functionality that go together.  For instance, you might have an app where the phone layout fills the screen with the main content and keeps the menu in a navigation drawer.  But, in order to make use of the increased screen real estate, your tablet layout shows the content on the screen with the menu as a sidebar on the left.  The menu, with its own UI and functionality, is a reusable component that can be utilized in both layouts.  The same is true for the content.  Thus, it is sensible to put each of these two pieces in its own fragment.

Why not create and swap views or view groups instead?

This is a good question because it suggests a viable alternative to using fragments.  The main reason to pick fragments over views and view groups is that fragments have a lifecycle and a back stack.  If either of these features is needed, use a fragment; otherwise, you’d have to roll your own lifecycle or back stack, and that would become a project in and of itself.

Special Types of Fragments

There are a few special types of fragments available for common UI paradigms.  In mobile apps, there are often master-detail design flows, where the master component is a list.  For this purpose, the Android SDK provides a special type of fragment called a ListFragment.  It inherits from Fragment and contains a ListView with the ID @android:id/list.  It can also be subclassed and customized to your liking, provided that the view contains a ListView element with the aforementioned ID.  For more information, see the Android development documentation for ListFragment.

Another special type of fragment is the PreferenceFragment.  It replaces the deprecated PreferenceActivity with a fragment hosted inside an activity.  The PreferenceFragment can be used in a manner just like the PreferenceActivity in order to load an XML definition of preferences into the UI so that the user can customize the app to their liking.

The Fragment Manager and Fragment Transactions

Android’s support library provides a special activity called FragmentActivity, which includes an object called a FragmentManager.  The fragment manager allows the developer to manage the fragments that appear in each of the fragment containers defined in a view.  Fragments are loaded, switched, and removed through the use of transactions.

Within the FragmentActivity, you can start a FragmentTransaction by grabbing the fragment manager and calling the beginTransaction() method:

FragmentTransaction txn = getFragmentManager().beginTransaction();

Note: if you are using ActionBarSherlock, you should use getSupportFragmentManager() instead.

Once you have a handle for a fragment transaction, you can call various functions to add, switch, hide, show, and remove fragments:

Fragment myFragment = new MyFragment();

Fragment myOtherFragment = new MyOtherFragment();

txn.add(R.id.my_fragment_container, myFragment, "my_fragment");

txn.replace(R.id.my_fragment_container, myOtherFragment);


Finally, to complete the transaction, call the commit() method:

txn.commit();

For a full rundown of the available methods in the fragment manager, see the FragmentTransaction documentation on the Android developer website.


Fragments are a bit confusing at first, but with some time spent researching and playing with code, it becomes easy to understand how they work.  Once you begin to incorporate them into your app, you can see the benefits of performance and reusability that they can bring to your Android app development.  For those who are new to fragments, I hope this post has helped to introduce you to the concept and provide a basic level of understanding as to their function and benefit.  Feel free to leave comments, suggestions, and questions in the comments section.

Happy coding!