The work documented in this post was sparked by receiving the first video tests from usertesting.com and other sources (by the way, Usertesting supplied valuable feedback at a very low price), and by watching, at about the same time, a video in which Amy Jo Kim of Shufflebrain gives a great and inspiring presentation on how successful social software uses game mechanics.
Teamwork 4 took shape after we studied several texts on usability, and even invented new techniques; see for example
But our internal speculations lacked the ideas that only an independent, fresh look can bring: we received a lot of positive feedback from new users of Teamwork 4, but the problem is that people who don't understand your software usually don't give you feedback at all. So we submitted Teamwork 4.1 to several testers who were first-time users. The testers' videos were full of surprises, and quite… painful to watch. Poor users!
Putting together the testers' feedback and the idea that positive feedback from the software is of prime importance, particularly in the early phases of usage, we designed a new release of Teamwork which we hope will be friendlier both for the first-time user and for the daily one. Here we document some of the differences. As this is a concrete example of usability evolution, I hope it can be of some interest to anyone working on web application usability in general.
The criteria we used to evolve the interface were also inspired by Amy Jo Kim's presentation, which shows how successful social software uses game mechanics, and which may inspire anyone designing any kind of application:
It is filled with interesting ideas and observations; what I find most interesting is not so much the extrapolation of the specifics of game mechanics, but the ways of engaging the user with feedback during their first steps in your application, and then guiding its evolution. It is clear from the talk and the Q&A part of the video that she has a broad culture of usage and behavior which is only partly expressed there.
Limits of game techniques
Looking at human beings as merely reacting to behavioral stimuli takes an extremely partial view, which has little explanatory mileage, but not zero.
It is true that games tap into primal response patterns; but that is also their limit, at least for games that use proximal metaphors (body movements) rather than reasoning, collecting, and quantifying. Collecting and quantifying mean entering strings and numbers, something Mario Bros.-like interfaces are very bad at; it's like using the iPhone keyboard for heavy data input. But collecting and quantifying are what matter most for a huge number of applications, and there the proximal metaphors simply won't help.
In playing games, the player is often happy to use "low level" skills; in planning work, not so much. That is also why simple stimulus-response-reinforce metaphors can be effective in gaming and advertising, but not elsewhere; we are (fortunately) not always that stupid. It is surely false that the most powerful way to manipulate human behavior is a variable response schedule: a good argument to a responsive crowd can do better than any behavioral proximal stimulus. But that would take us far afield, into the failures of behaviorism (this is old stuff from the '50s). Still, when the users of any software are taking their first steps, the response patterns of the application matter a lot.
On the Business of Software discussion group I recently saw a discussion about a tool for visually collecting bookmarks. The developers chose to build a desktop client before building a browser plugin (!); to me this is a clear mistake in adoption strategy, one that ignores the critical point of lowering the adoption barrier as much as possible for such a secondary tool. Seeing bookmark collecting as a "game played in the browser" makes it immediately clear that the separate-client idea is disastrous!
The point of this section is simply that gaming metaphors can help, but only in limited forms and cases.
The task given to the testers was this:
- enroll in the demo
- create a project
- assign yourself to it
- create a resource
- assign it to the project
- create a child task
- create some issues
- register some worklog
- search for a task
- create a to-do
Going through this, the testers met some difficulties, which we tried to overcome; we document all of this in this PDF:
(1.5 MB – lots of screen shots).
Some ideas in the PDF can be generalized; here are additional "behavioral reinforcement" tricks we put in place:
- whatever first steps the user completes, a percentage score is incremented
- the "user score" (which was already there) has been structured into "badges" (which in our case are balloons of different colors)
- each operator is associated with a color, so that, say, sticky notes coming from them are immediately distinguishable
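The score-and-badge mechanic above could be sketched roughly like this. The thresholds, badge names, and increment value are purely illustrative assumptions of mine, not Teamwork's actual values:

```python
# Hypothetical sketch of a percentage score with badge tiers.
# All thresholds and names are illustrative, not Teamwork's real ones.

BADGE_THRESHOLDS = [
    (0, "white balloon"),
    (25, "green balloon"),
    (50, "blue balloon"),
    (75, "gold balloon"),
]

def badge_for_score(score: int) -> str:
    """Return the highest badge whose threshold the score has reached."""
    badge = BADGE_THRESHOLDS[0][1]
    for threshold, name in BADGE_THRESHOLDS:
        if score >= threshold:
            badge = name
    return badge

def complete_first_step(score: int, increment: int = 10) -> int:
    """Each completed first-use step bumps the percentage score, capped at 100."""
    return min(100, score + increment)
```

The point of such a mapping is that every early action gives visible progress, so the first-time user is rewarded immediately rather than left wondering whether anything happened.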
N.B. The changes we refer to are not yet released (as of May 13th, 2009); the demo and the downloadable version are the ones from before these changes. The new release (Teamwork 4.2) will be available in a couple of weeks.