So I had to come up with three ideas to test before selecting the one I’ll be working on during my training. And here we are, pretotyping the first one. Right now I could start a long speech about pretotypes, but the only thing you need to know (or remember) about them is the definition:
Pretotype: Testing the initial appeal and actual usage of a potential new product by simulating its core experience with the smallest possible investment of time and money.
Now, let me tell you about this project. The main idea is to develop an Android app that lets users take a series of pictures in order to build a single 360° picture, and then share it with their friends.
The project was born from the following premise: “You can’t capture your scenes in a flat picture; they simply don’t fit in.” So I thought it would be nice if we could make our pictures as spacious as we needed.
Identifying the problem
Following the pretotyping approach, the first thing to do was to understand what the core experience of the potential product was. So let’s list some options.
Now that we have identified the part of the project that has to be tested, let’s define the variables to measure by stating some more premises.
So we start from this idea and define the variables we’re putting to the test. The accuracy of taking photos is defined by the position of the camera device (my phone, in this case) relative to its position when the last picture was taken. A right position is one that makes it possible to obtain continuous pictures that can be merged into a single one.
The test consisted of asking some users to use the pretotype app as if they were actually taking a series of photos automatically. In reality, the app was not taking photos, but recording the orientation sensor’s values in a database.
The sample consisted of 33 people using the app in two rounds of the same test. In the first round, users were asked to follow a 360° path without any visual aid in the app interface; in the second round, the visual aid was included.
In order to obtain one full picture, 16 individual pictures are needed, distributed over the 360° spectrum. Therefore, the space was divided into 16 sections.
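To make the section division concrete, here is a minimal sketch (not the project’s actual code) of how a compass azimuth reading could be mapped to one of the 16 sections:

```python
# Map an azimuth (degrees relative to magnetic north) to a section index.
# 360 degrees / 16 sections = 22.5 degrees per section.
SECTIONS = 16
SECTION_WIDTH = 360 / SECTIONS


def section_of(azimuth_deg):
    """Return the section index (0-15) for an azimuth in degrees."""
    return int(azimuth_deg % 360 // SECTION_WIDTH)


print(section_of(0))    # section 0
print(section_of(23))   # section 1
print(section_of(359))  # section 15
```

The modulo keeps readings like 360° or small negative values (which Android’s orientation sensor can produce around north) inside the 0–15 range.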
The app samples the position the camera is pointing to, relative to magnetic north, 10 times per second. The current position is considered an error under the following conditions:
While following the path, if the user enters a section and then turns backwards, or enters a non-continuous section, that action is considered a path error. In the end, we obtain the error rate by dividing the number of samples in which the position was wrong (i.e., not accurate) by the total number of samples taken.
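The error-rate logic above can be sketched as follows. This is an assumed simplification, not the project’s code: a sample counts as an error when its section goes backwards or jumps to a non-adjacent section instead of staying put or advancing by one.

```python
SECTIONS = 16


def error_rate(section_samples):
    """Fraction of samples whose section breaks the forward, continuous path."""
    errors = 0
    current = section_samples[0]
    for s in section_samples[1:]:
        if s == current or s == (current + 1) % SECTIONS:
            current = s  # staying in place or advancing one section is fine
        else:
            errors += 1  # backwards turn or non-continuous jump: path error
    return errors / len(section_samples)


# A clean forward sweep has no errors:
print(error_rate([0, 0, 1, 1, 2, 2, 3, 3]))    # 0.0
# One backwards step among 10 samples:
print(error_rate([0, 1, 2, 1, 3, 4, 5, 6, 7, 8]))  # 0.1
```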
There were notable differences in users’ behavior when the visual aid was added to the app. Users took more than twice as long to take the picture. That’s a good thing: it shows that users now understand that doing it well takes time.
Consequently, the average rotation speed per test was reduced. This is also good, because pictures need to be taken from a static position; the slower speed reflects that users understood this.
The average error rate was reduced as well, by around 29%, which supports the hypothesis.
User accuracy was improved by 29% when visual aid was added to the app.
You can check out the source project at: