
Pretotyping an Android App

posted Aug 16, 2012, 7:19 PM by Unknown user   [ updated Aug 17, 2012, 9:59 AM ]

Introduction

So I had to come up with three ideas to test before selecting the one and only I'll be working on during my training. And here we are, pretotyping the first one. Right now I could start a long speech about pretotypes, but the only thing you need to know (or remember) about them is the definition:


Pretotype:  Testing the initial appeal and actual usage of a potential new product by simulating its core experience with the smallest possible investment of time and money.


Description

Now, let me tell you about this project. The main idea is to develop an Android app that lets users take a series of pictures in order to build a single 360° picture, and then gives the user the possibility to share it with friends.


Motivation

The project was born from the following premise: "You can't capture your scenes in a single flat picture; they simply don't fit." So I thought it would be nice if we could make our pictures as spacious as we needed.


Identifying the problem

Following the pretotyping approach, the first thing to do was to understand what the core experience of the potential product was. So let's list some options.


  • Sharing my photos with friends. This is one of the features, but definitely not the core one. Testing that part alone won't prove anything about the innovation of my product.

  • Merging multiple photos into a single one. This is very important for the project to be functional in the future, but the user won’t even be aware of this process. Remember we’re testing usage.

  • The act of taking the pictures. We finally settled on this one because of the direct involvement the user has during the process. This is the first and most critical part of the project, since it's the part that requires the most user interaction.


Hypothesis

Now that we have identified the part of the project that has to be tested, let's define the variables to measure by making a further premise.


  • In order to increase the user's accuracy while taking photos, the app has to provide visual aids. These will help the user take the right photos naturally.


So we start from this idea and define the variables we're putting to the test. The accuracy of taking photos is defined by the position of the camera device (my phone, in this case) relative to its position when the last picture was taken. A right position is one that makes it possible to obtain continuous pictures that can be merged into a single one.


Variables:


  • Orientation sensor values, while the user makes a continuous photo capturing path.

  • Time a user is willing to spend in order to take a 360° photo.


Method

The test consisted of asking some users to use the pretotype app as if it were actually taking a series of photos automatically. In reality, the app was not taking photos, but recording the orientation sensor's values in a database.


The sample consisted of 33 people using the app in two rounds of the same test. In the first round, users were asked to follow a 360° path without any visual aid in the app's interface; the second round included the visual aid.


In order to obtain one 360° picture, 16 individual pictures are needed, distributed over the full 360° spectrum. Therefore, the space was divided into 16 sections.
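As a rough sketch of how those sections might be computed (the class and method names here are my own illustration, not the project's actual code), each compass azimuth maps to one of 16 slices of 22.5° each:

```java
// Hypothetical sketch: mapping a compass azimuth (in degrees) to one of
// the 16 sections the 360° capture path is divided into.
public class SectionMapper {
    static final int SECTIONS = 16;
    static final double SECTION_WIDTH = 360.0 / SECTIONS; // 22.5 degrees

    // Returns the section index (0..15) for a given azimuth.
    static int sectionFor(double azimuthDegrees) {
        // Normalize into [0, 360) to tolerate sensor values outside that range.
        double norm = ((azimuthDegrees % 360.0) + 360.0) % 360.0;
        return (int) (norm / SECTION_WIDTH);
    }

    public static void main(String[] args) {
        System.out.println(sectionFor(0.0));   // 0
        System.out.println(sectionFor(22.5));  // 1
        System.out.println(sectionFor(359.9)); // 15
    }
}
```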


The app samples the position the camera is pointing to, relative to magnetic north, 10 times per second. The current position is considered an error under the following conditions:


While following the path, if the user enters a section and then turns backwards, or enters a non-contiguous section, that action is considered an error in the path. In the end, we obtain the error rate by dividing the number of samples in which the position was wrong (or inaccurate) by the total number of samples taken.
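That error-rate rule could be sketched like this (a hypothetical illustration over a list of pre-computed section indices, not the app's actual code): a sample counts as an error whenever the path moves backwards or skips to a non-adjacent section.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the error-rate rule described above: while
// sweeping through the 16 sections, a sample counts as an error if it
// moves backwards or jumps over a section.
public class ErrorRate {
    static final int SECTIONS = 16;

    static double errorRate(List<Integer> sectionSamples) {
        int errors = 0;
        int last = sectionSamples.get(0);
        for (int i = 1; i < sectionSamples.size(); i++) {
            int cur = sectionSamples.get(i);
            if (cur == last) continue;              // staying put is fine
            int forward = (cur - last + SECTIONS) % SECTIONS;
            if (forward != 1) errors++;             // backwards, or a skip
            last = cur;
        }
        return (double) errors / sectionSamples.size();
    }

    public static void main(String[] args) {
        // 0 → 1 → 1 → 2 → 1: one backwards step among five samples.
        System.out.println(errorRate(Arrays.asList(0, 1, 1, 2, 1))); // 0.2
    }
}
```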


Data


 


Results

There were notable differences in users' behavior when the visual aid was added to the app. Users took more than twice as long to take the picture. That's a good thing, because it shows that users now understand it takes time to do it well.


Consequently, the average rotation speed per test was reduced. This is also good, because pictures need to be taken from a static position, so the slowdown suggests the users understood that.

The average error rate was also reduced, by around 29%, which supports the hypothesis.


Conclusion

User accuracy improved by 29% when the visual aid was added to the app.

You can check out the source project at:


http://code.google.com/p/android-sensors-ar/




