A few thoughts on “Using the Difficulty” in User Experience Design

Introduction

While looking through articles and examples of UI Dark Patterns, I stumbled across a paper titled “Use the Difficulty through ‘Schwierigkeit’: Antiusability as Value-driven Design”. I was intrigued by the title of the paper and by the essay style the author used.

On an initial read-through, the paper felt like a meandering essay that touched upon various aspects of the “Anti-usability/Schwierigkeit” school of thought. A couple more read-throughs later, I was able to understand the nuances in the points the author presents.

The author really exemplifies the point he’s trying to make with the words used in the first couple of sentences. I mean, wording such as “This aphorism also encapsulates the raison d’être…” has to have been a deliberate choice.

Definition and Origins of “Using the Difficulty”

Antiusability is defined as

“…a novel way of design that centers on the finely tuned integration of graduated difficulty into user interfaces to systems in a variety of contexts.”

It’s important to note that Antiusability is not the opposite of usability; it is in fact part of usability and user experience design. Lenarcic makes note of this by suggesting the use of the term “Schwierigkeit” (which means “difficulty” in German) as an alternative.

Lenarcic cites the example of Michael Caine, who asked the director of a play he was in how to deal with a chair that was on stage. The director told him to make use of the chair in a way that helped him express the nature of the scene (smashing the chair if it was a dramatic scene, or tripping over it if it was comedic). In this way the chair went from being an obstacle to an object that could be used productively.

Key Points

Here are the key points that Lenarcic goes over in his paper:

  • The usability of a device can modify a user’s behavior.
  • Difficulties can be used in such a way as to have a net positive effect.
  • Choreographing obstacles in a way that allows the users to regain the feeling of being in charge of the interaction process rather than being “madly addicted” to it.
  • “Calibrated difficulty in practical design to accentuate the greater good in system use”
  • How easy should things be? How difficult should things be to enable an end user to feel they have performed a useful task?
  • Regaining control over our lives: the “slow” movement, and moving away from hyper-efficiency as the end goal of every interaction
  • Exploring “viscosity” in user environments: affordances that offer resistance to local changes.

What follows is my attempt at summarizing and reflecting on some of the points presented in this paper. The paper was written in a meandering essay style and my thoughts tended to meander as I wrote this, though I have tried my best to create some coherent structure.

“Calibrated Difficulty”: Gamification and Homo Ludens

The author talks about “calibrated difficulty in practical design to accentuate the greater good”. My mind naturally went to levels of difficulty in video games. Games are a great example of difficulty being an important aspect of the user’s experience or indeed their enjoyment. Most games have different levels of difficulty to cater to different people’s preferences.

Today, gamification of interfaces has become a buzzword. “Gamification elements” have become synonymous with things you can tack onto an interface in order to make it more “delightful”: leaderboards, scores, achievements, and badges. I feel the benefits of adding such elements without proper thought are limited at best and questionable at worst.

I believe that this idea of using the difficulty has an application in truly gamifying a user experience. True gamification involves using aspects of the interaction itself, using the “Core Drives” of the user, as this article by Yu-kai Chou brilliantly describes. Imagine if you got a sense of accomplishment on completing a tedious task, rather than exasperation. (Something like, say, editing an image-laden document in Microsoft Word without messing up the layout.) Providing a challenge, timely positive feedback and competition can definitely act as motivational drivers, as described in this paper.

But gamification at its core still aims to improve the user experience with the goals of the user in mind. It is still about getting things done, by providing a sense of accomplishment to the user for doing “grunt work”. Lenarcic’s Schwierigkeit is more in line with William Gaver’s “Designing for Homo Ludens”. As Gaver describes, his notion of “playing” is different from “gaming”:

“Not only are these forms of ‘play’ fundamentally goal-oriented, but in striving for a defined outcome they impose rules about the right and wrong ways to go about things… Pursuing such an instrumental version of ‘fun’ does not help provide an alternative model for computing. On the contrary, it co-opts play into the same single minded, results-oriented, problem-fixated mindset that we have inherited from the workplace.”

Gaver goes on to provide examples of open-ended forms of engagement with no fixed goals, rules or outcomes. He says that scientific approaches to design should be complemented by open-ended and exploratory ones.

“It is difficult to conceive of a task analysis for goofing around, or to think of exploration as a problem to be solved, or to determine usability requirements for systems meant to spark new perceptions”

Regaining control over the experience: allowing the user to reflect on their actions

Another aspect of Lenarcic’s paper is about allowing the user to reflect and rest as they use the system, instead of having a singular goal of reducing the time on task and increasing efficiency. The more time you can save, the more work you will end up doing, as Landauer expressed in his book The Trouble with Computers.

He argues that allowing time to reflect on actions may help the user feel more in control of the system, as opposed to being “madly addicted” to the process, and that adopting a more mindful and “slow” approach could lead to more user satisfaction. Allowing users to reflect on their actions can also be used to improve learnability and understanding of the system.

A personal reflection on Usability, User Experience and “Delighting” the user

Usability discussions are often centered around ease of use, and when people talk about user experience the end goal is often “delighting” the user. The end goal of UX design and research is always creating an ideal experience for the user based on their needs and behaviors. Words like simple, easy, desirable, and efficient are the ones that get used, while words like difficult always carry a negative connotation.

This paper really got me thinking: is making something easier to use, more efficient and less time consuming really the only way to improve the user experience? Can the difficulties inherent in some experiences be leveraged to a positive end?

Of course, there’s a difference between making something usable and making a delightful user experience. I feel like Lenarcic’s idea of “using the difficulty” has a place in discussions about the latter. Some may equate the ideal experience with the most efficient, but reading Lenarcic’s essay made me go back and re-read ideas like Gaver’s Designing for Homo Ludens, and made me realize that there’s a lot more to user experience or human centered design than simply designing for efficiency.

Thinking back to my days in grad school, I remember the discussions with professors and peers about ways to “delight” the user. It almost always ended up being about how the interface reacted to the user: beautiful transitions, animations, innovative 404 screens, or other ways to provide information about system status, and so on.

There’s so much more to user experience and “delighting” the user than a focus on making things easier to use. Exploring, playing around with something with no end goal, some level of challenge: all of these are ways in which we may seek fulfillment. I’m glad I picked apart this paper despite my initial hesitation, because it made me go back and re-read so many things that I had forgotten about or was unable to appreciate because of the stresses of being a graduate student. This in itself is a case in point: I read all these academic papers with a goal-oriented mindset. I wanted to extract their meaning, write a summary, and discuss them in class, all with the goal of passing my class, with the incentive of good grades (or the fear of bad grades). My goals were set, and I developed methods to efficiently accomplish the tasks of summarizing and discussing papers. It’s only now that I have the time to reflect on these papers that I realize how important and thought-provoking they are.

This paper reminded me of my initial wonderment about the human condition. I feel like I had lost sight of the complexities of the things that make us tick, and it’s refreshing to remove the blinders of efficiency and ease of use to look at things like difficulty from a new perspective.


What the Indian Spice Box can tell us about optimal menu design

In my two years of living in the US and having to make Indian food for myself, I have learned the importance of the spice box. At first, cooking Indian food seems like a very daunting and labor-intensive task. The oil used for tempering has to be heated to the right temperature, and the spices have to be added in a particular order and in precise amounts. In this scenario, having a box that holds the right spices in one place is the ideal solution. As this article in the Boston Globe describes it:

“Timing is key in Indian cooking. Many recipes begin by heating oil first, then adding small amounts of spices in quick succession. The oil’s temperature has to be just right so mustard seeds pop, cumin seeds sizzle, and turmeric and red chile powders lose their raw edge without burning. The spice box is the most efficient and practical way of accessing the required spices easily: Open one lid and everything you need is right there.”

The spice box is a mainstay of every Indian kitchen. The design is quite simple: a steel or wooden container (generally round) with smaller containers inside it. This simple design has been in use for generations, and it’s easy to see why. In its relative simplicity, the spice box shows the importance of user-centered design.

The spice box holds the optimal amount of ingredients. The square or round box contains about seven inner containers, plus or minus two. Add more containers, and each individual container becomes too small, requiring frequent refilling. Make the containers larger, and the box holds too few spices, reducing its usefulness. The box was designed with the user and the aforementioned cooking process in mind, and that is one of the things that makes it such a great design.

The spice box is customizable. The user can add the spices of their choosing. Although some of the spices are common, there are certain differences in regional cooking styles, and the box allows the user to add the spices that they may require based on their preferences.

The spice box is easy to maintain. Replenishing ingredients is simple, and refills are not needed so frequently as to dissuade the user from using the box.

What I specifically like about the round variants of the spice box is that the shape both communicates and facilitates rotation: the user can move the inner containers around, or order them in a way that makes it easier to remember the order in which the ingredients are to be added. A square or rectangular box does not communicate or facilitate that as much, although I can see how even rows and columns can help the user remember the order in which spices are to be added.

What struck me about the timelessness of the spice box design is that in many ways it is a triumph of user-centered design. It is helpful to the user and makes the task at hand, in this case cooking, easier by reducing cognitive load. One does not have to explicitly remember the order of adding ingredients: a quick glance at the box and the way the ingredients are arranged acts as a memory cue. It increases the ease of cooking, and the user does not have to read a manual or memorize complex steps before using it.

Imagine if someone were to design the spice box in today’s age. I would imagine designers and engineers getting carried away with the prospects and potential affordances that modern technology brings. That would lead to “features” like a freshness indicator, a rotation device to hasten access to the spice you require, voice commands, recipe suggestions, and so on. It would have IoT connectivity options, a companion app, and perhaps even a crowdfunding campaign. I digress.

While designing a menu-based solution to a design problem, designers tend to get carried away with the design of the menu itself. The menu is meant to be a means to find the right tool or option to complete the task. It is thus imperative to understand the user’s workflow and identify potential breakdowns before adding design elements like micro-interactions. Of course, modern interfaces tend to have a multitude of features, and it is not always possible to keep menus simple. But that’s not the point. The spice box isn’t ubiquitous just because it’s simple; it’s ubiquitous because it was designed with the user’s needs and wants in mind.

The Ideal E-Reader User Interface

 

I recently got the new Kindle Paperwhite for a relative. Before I handed it over, I had the chance to use it for a few days. It uses a touch-based interface on an e-paper display; there are no buttons, and the interface is navigated entirely through the touch screen. After I handed the Kindle over, I also got a chance to go hands-on with an older model, which uses buttons for navigation. After using both, I felt I should juxtapose the two experiences and talk about what, in my opinion, would be the ideal user experience when it comes to interacting with e-readers.

Feedback and Reliability

I personally felt that the older Kindle’s button-based interface was easier to use and offered a better user experience. The reason is simple: physical buttons have certain affordances that a touch-screen-only system cannot provide. Affordances are aspects of an object’s design that suggest what you can do with it. This comes into play especially with the page-turn buttons mounted on the sides of the old Kindle. The buttons provide tactile feedback that is reassuring. Tapping on a touch screen does not provide such feedback, unless there is vibration or haptic feedback from the screen, which the new Kindle does not have. Another issue is that the touch screen is not as responsive as those on current smartphones or tablets. Tapping to go to a new page sometimes does not register, and at times pages are skipped. The buttons not only provide reassurance and feedback, they are also more reliable in this case, as the user can be sure they have gone exactly one page forward or back based on the button pressed.

Faster Interaction

Another issue with the touch-based interface on the new Kindle is that the navigation options are hidden by default in the “reading mode”. The top of the screen has to be tapped in order to access the navigation menu. In comparison, the old Kindle’s buttons are always out of the way of the screen and accessible at all times, so the button-based UI has one less step when it comes to navigation. For example, if users want to navigate to the home screen, on the old Kindle they just press the home button, while on the new Kindle they have to tap the top of the screen to make the navigation bar appear and then tap the home button. This may seem like just one more step, but it adds up quickly in terms of user frustration.

One-handed use

Side-mounted page-turn buttons on the old Kindle make it easier to use with one hand. Having to tap the screen with one hand while holding the device in the other hinders, and in many cases completely eliminates, one-handed usability. As this blog post says, one-handed usability leaves a free hand to eat popcorn or chips without leaving grease on the screen, for example.

On the flip side

Buttons do have disadvantages, though. They may add to the overall cost of the device. As physical objects, they are subject to wear and tear over time. They also add to the overall bulk of the device and may be undesirable for people who value sleekness of design over ease of use.

The best of both worlds

With the exclusively touch-based interface of the new Kindle, Amazon seems to have decided to go button-free for the future. But there is a way in which both approaches can coexist. Amazon could include Bluetooth connectivity in the next Kindle. This opens up the possibility of connecting keyboards and other devices to the Kindle, allowing users to use button-based navigation instead of the touch screen. A Bluetooth-enabled case that adds side-mounted page-turn buttons would be a great idea. Letting users choose their preferred way of interacting with the device would be the best pro-consumer way for Amazon to retain its sleek button-free design without un-solving a problem that the buttons had already solved.
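To make the idea a little more concrete, here is a minimal sketch of how such a Bluetooth page-turn accessory could behave. Everything in it is an assumption for illustration: the key codes, the Reader class and the handler are hypothetical, since Amazon exposes no such API, and the point is only that a dedicated button maps to exactly one deterministic page turn.

```python
# Hypothetical sketch: mapping Bluetooth HID key events from a page-turn
# case to reader navigation. Key codes and the Reader class are assumed
# for illustration only; this is not a real Kindle API.

NEXT_PAGE_KEY = 0x4B   # assumed key code for the case's "forward" button
PREV_PAGE_KEY = 0x4E   # assumed key code for the case's "back" button

class Reader:
    def __init__(self, total_pages):
        self.page = 1
        self.total_pages = total_pages

    def handle_key(self, key_code):
        # A button press moves exactly one page, unlike a tap that may be
        # missed or registered twice on a sluggish touch screen.
        if key_code == NEXT_PAGE_KEY and self.page < self.total_pages:
            self.page += 1
        elif key_code == PREV_PAGE_KEY and self.page > 1:
            self.page -= 1
        return self.page

reader = Reader(total_pages=320)
reader.handle_key(NEXT_PAGE_KEY)   # now on page 2, reliably
```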

 

Microwave Ovens, product design, and human factors

I recently got a new microwave oven in my apartment after the old unit gave up the ghost. The controls on that old unit were terrible, and the newer, “better” model has an even worse interface. I began reading up on this online, and I saw some very good points being raised in various discussions.

First, I came across the post “Why do most microwaves have such a terrible user interface?”. It does a good job of stating the problem. To summarize:

  • Most microwave ovens have too many buttons on them
  • These buttons have little to no tactile feedback
  • As a result, microwaves are difficult to use quickly

The blog post gives plenty of examples of good and bad user interfaces, which I suggest you look at. It seems to suggest that the old system of analog dials was much more elegant and simple. I also agree that a lot of the buttons on today’s microwave ovens are superfluous and that only a handful of functions are used frequently. The writers argue that rotary dials solve both problems: too many features confusing the user, and the lack of tactile feedback.


I only use 5 of the 25 buttons on this panel.

A valid counter-argument against having few buttons and/or dials is that while they make the interface simpler and more elegant, this may come at the cost of the product appearing too simple: “What if users don’t buy the product because they perceive it as not having as many features as the other microwave ovens out there?” That is, feature discoverability. Marketing teams and engineers come up with and implement all sorts of ideas to make their product seem unique, as they are all vying for the attention of prospective customers.

Another point to note is that there is often a difference between what users say they want and what they actually look for when buying a product. Sure, people may say they prefer simple and elegant user interfaces for their appliances, but when it comes to making a purchasing decision, they will surely compare the different features each product has to offer.

The argument that dials are better than buttons needs to be examined as well. Take the example of microwave popcorn. A microwave oven may have a dedicated popcorn button, or it may have just a rotary dial. Microwaves vary in power levels and specifications, and there is no set standard for how long your particular oven takes to heat a bag of popcorn properly without burning it. More often than not, this leaves the user with trial and error: one or two bags of popcorn may have to be sacrificed before the user learns how long their particular microwave takes.

So the popcorn button isn’t exactly reliable. But what about the dial? Suppose the user wants to heat the bag for 90 seconds, but the dial does not allow finer-grained time settings; that restricts user choice. The user is left performing extra operations, like constantly monitoring the elapsed time and stopping the oven manually at a particular instant, assuming a manual stop function is available.

There are clearly some important discussions that come up from this:

  • Simplicity of controls vs. discoverability of features
  • Simplicity vs. functionality

The blog post titled “A Lesson in Control Simplicity” has a great comment thread about these discussions.

To summarize, there seem to be too many steps involved in using a microwave oven these days, which makes one think that perhaps a simpler approach is what’s required. The buttons are flat and give no feedback whatsoever. A dial seems very simple, but it may oversimplify.


The Human-Machine interface. How much should the user do, and how much should the machine do? That’s the important consideration in every case.

Perhaps there’s a need to put more thought into it. Maybe we shouldn’t try to design a microwave oven interface that does absolutely everything for the user. To go back to the popcorn example, most people just listen for the popping to slow down and then turn off the oven. It’s not all bad to involve the user here. What if the user were just given the options to start, stop, and set a particular cooking time, with an “add 30 seconds” button for flexibility?
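As a minimal sketch of that pared-down control set, the model below keeps only the four actions named above. The class and method names are illustrative assumptions, not any real appliance firmware.

```python
# Minimal sketch of the pared-down microwave controls: set a time,
# add 30 seconds, start, stop. Names are hypothetical.

class MicrowaveControls:
    def __init__(self):
        self.remaining_seconds = 0
        self.running = False

    def set_time(self, seconds):
        self.remaining_seconds = seconds

    def add_30_seconds(self):
        # The one convenience feature: quick top-ups while the user
        # listens for the popping to slow down.
        self.remaining_seconds += 30

    def start(self):
        self.running = self.remaining_seconds > 0

    def stop(self):
        # Manual stop keeps the user, not a preset, in charge.
        self.running = False

oven = MicrowaveControls()
oven.add_30_seconds()   # 30 s
oven.add_30_seconds()   # 60 s
oven.start()
```

The point of the sketch is simply that four controls cover the popcorn scenario end to end, with the user supplying the judgment that the “popcorn button” pretends to automate.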

The microwave oven made me think quite a bit about the factors that influence modern interface design. There’s a marketing side, an engineering side, and a design side. Clearly, there needs to be a significant human factors side as well.


Peeple, Self Presentation and Redefining “Weak Ties”

Peeple is an app that’s been in the news recently. It would let people rate other people, publicly. There has been quite a bit of outrage about it on the internet, because of what it stands for and the potential for disastrous consequences for people and their reputations. Let’s peel back some of the layers and try to see the implications of this concept.

What is Peeple

Peeple, as I mentioned before, is an app that would let you post reviews and ratings of other people you know. You can post about others, and others can post about you. Just about anyone who knows you (your neighbor, a colleague, etc.) could simply give you a rating and write a review about you, the way you would review a business on Yelp.

You cannot opt out of this, meaning that if someone decides to post a review about you, it will be in the system. On the other hand, you would get 48 hours to contest any review you receive.

And the internet responded

There has been considerable backlash on the internet over this app idea. (Not to mention they stole the branding of another legitimate business.) When it comes to presentation of self, nobody wants other people to control it. Our self-image is something we are very conscious of, and we take immense care to maintain a particular public image. This image also changes based on the context and the group of people involved. There are a lot of dynamics involved in social communication.

Presentation of Self in the age of Social Media

These days, most of us have profiles on numerous social networking websites. We use them to connect and communicate with other people, but that is a secondary purpose. The primary reason for these profiles to exist is to “claim your name”: to project an image of oneself on the web via posts, communications, messages and so on. We connect with other people and affiliate with groups and other such entities as a statement of intent. On a surface level, it is a communication platform. But beyond that, it is a means of generating and projecting a certain image of yourself onto others.

To this end, we are often careful about what we post, what we “like”, what we share, and with whom. We delete or modify posts in order to keep a certain image intact. We carefully curate our profiles, to varying degrees. Some people take this more seriously than others, of course, but at some level this curation of social profiles takes place.

Weak ties and Networking

Another purpose of social media is to create and maintain “weak ties”. As the name suggests, these are acquaintances and contacts who are not close friends or family but are affiliated with you, often via other people: friends of friends, people you’ve met at social events, and so on, whom you may not really know a lot about but have heard of or met a few times. The “friend” metaphor on Facebook lost its significance a while ago in this regard; we “friend” so many people that it is generally more like an extended network. Even LinkedIn is a connection-based social network, which directly uses metaphors such as first-, second- and third-degree connections.

The significance of weak ties is that they are often very useful when it comes to gaining professional opportunities or being a part of social and cultural events, often even more so than strong ties. The more people you know, the easier it is for you to “get things done”, so to speak. That’s why there are so many networking events and meetups where people get acquainted with new people for professional or personal reasons.

Peeple as a threat to Weak Ties and Self Presentation

Of course, the concept of a people-rating app has obvious negative connotations. Most importantly, people who do not like you would be free to post negative reviews about you. People who are in competition with you might use it as a means to slander you. Personal attacks could gain an even more potent dimension.

As I mentioned before, people spend a lot of time maintaining and worrying about their self-image. The Peeple app would mean losing control over this deeply personal component of social engagement. There would be some who like the idea of things being thrown into chaos, and the added layer of tension that the proliferation of such apps would bring into society.

In the professional world, this may not seem to have a direct impact; however, it may come up in employee and candidate background checks.

Creating new weak ties could thus become very difficult if ideological or personal differences, which would not have mattered had they stayed undisclosed, are put out in the open. If your “character”, defined by a star rating, becomes public knowledge, it could lead to losing out on networking opportunities.

Peeple as an opportunity

Just as all of us have learned how to make social media work for us when it comes to presenting ourselves to the world, in time people could also find ways to leverage apps like this for their own benefit. Tacit agreements between people regarding reviews are one way. Using these apps to heap praise onto prospective employers or other groups in order to influence their decisions could also be possible. This app could therefore be assimilated into the pool of ways in which you project your own self-image. Today we curate social profiles to create a self-image; in a future where these apps exist, we might have to curate those profiles through other people. People who know how to influence others directly or indirectly could use tacit agreements or discussions to mitigate the negative effects of any “bad reviews”. For example, if someone posts a bad review about you, you could ask someone else to counter it by posting a good review, or by posting a counter-review on the other person’s profile. Or perhaps you would reply to the negative review with some context and leave the viewer of the profile to draw their own conclusions.

This is the side that the co-founders of the company would want us to see: a means of getting feedback from people you know, so that you can improve and be the “best person you can be”. I personally don’t buy it, because it is a pathetically simplistic solution to the complex topic of social interaction, which is inherently nuanced and contextual.

Of course, this could get very messy very fast. It has a “he-said-she-said” feel to it, like some kind of high-school drama. If apps like Peeple do get into the collective mind-space of society, there would have to be tacit agreements, as I mentioned before, not to use such applications. People could decide not to use the app, or to disregard any reviews left about them.

The idea of the Peeple app is inherently invasive. It could lead to the proliferation of gender biases, racial biases, and so on. It could lead to the creation of inequality: an “elite” class and a “lower” class separated by their star ratings. It goes against the very fabric of modern civilization, the fact that there are certain unspoken rules, often called the social contract. A part of me really hopes people don’t fall for this obviously terrible idea of reducing a person to a number, but another part of me is intrigued to see how society would adapt and react if it were ever to see the light of day.

Routines and the Quantified Self

What is the “Quantified Self”?

These days, quite a few of us try to capture minute details about our daily lives in a digital format. We keep track of the number of steps we have taken, the calories in the day’s meals, and so on. The aim is to track these things so that we may reflect, analyze and learn about what is going on with ourselves, and eventually improve over time. This has become much easier thanks to smart devices and wearable technology. Every one of us is generating tremendous amounts of data about ourselves every single day, and systems like the Nike FuelBand, the Fitbit, and even Apple’s and Google’s fitness-oriented application suites want to take advantage of this trend.

At the heart of this new “Quantified Self” movement are tiny, inconspicuous sensors embedded in various devices that help record and log surprisingly accurate and incredibly detailed information. These sensors, combined with ubiquitous computing that allows the numbers to be crunched and presented to users in an easy-to-understand format, and social networks that allow users to share and collaborate, form the core of the new “revolution” in health- and wellness-oriented experiences.

Although all of this is a great example of how the latest technology can be used for our benefit, the idea of the Quantified Self is not as new as one might think. We were keeping track of ourselves in various ways long before the advent of miniaturized biometric sensors and portable smart devices. Things like tracking spending, or stepping on a scale every morning, have been part of our lives for quite a while. What’s new is the increased appetite for self-knowledge, helped by the rich and detailed information that can now be recorded about ourselves.

Of course, there are still a few issues with the whole Quantified Self movement. One is keeping the user engaged: these systems currently require the user to monitor or observe the information constantly, daily or over time, which can lead to information overload or simply confuse the user. Another is keeping the user motivated and interested in the system. After a while, a lot of people tend to revert to their old ways because they get bored or lose motivation, and their fitness trackers end up in a desk drawer.

Routines

One of the things I realized as I read and researched human factors is the importance of routines in our daily lives. Certain things we do, certain actions we perform, are so familiar to us that we do not spend many attentional resources completing them. They become “routines”, and we continue to follow those routines until something unusual happens.

To make the Quantified Self systems mentioned above better, we need to understand how to design them better, and that is where an understanding of routines comes into the picture. If these systems become part of our routine, completely non-intrusive and without placing too many demands on our attention, they might just become better experiences.

Today’s solutions

Designers have tried to work around the issue of keeping users motivated in the case of fitness tracking. Gamification, or adding game-like interactive elements such as competition with others in your social network, trophies or achievements for reaching goals, or role-playing-game elements such as character creation and progression, have all been tried. The problem is that gamification lacks universal appeal: some people really like it, and others can’t be bothered with it.

Other attempts at helping users maintain motivation have involved actual monetary incentives, such as the “Pact” app, which lets you bet money on whether fitness goals will be met, or the “Pavlok”, a wearable device named after Pavlov’s conditioning experiments, which literally gives the wearer an electric shock if they do not complete the pre-decided goal.

I believe the solution lies in understanding how routines are created, maintained and modified. Creating a new routine or modifying an existing one is difficult compared to maintaining one, because changing habits takes conscious effort and attention. It takes a few cycles of the routine to fully internalize the changes, and if it is too difficult, the individual may revert to old habits. Superficial motivation like gamification may not provide enough incentive for the user to completely change their routine.

What I feel would be the ideal experience:

One of the key aspects of the quantified self is its focus on the individual. Self-improvement, and detailed information specific to the individual, are the key points of the whole experience. Using pre-set goals like “10,000 steps a day” thus runs counter to this point. If every person is different, then every person should have goals that match their own requirements and capacity. That is where biometric sensors fall short, and human intervention provides a more suitable solution. Sometimes it’s better to jog or run until you can feel your legs tiring out, for example, rather than simply stopping at 10,000 steps every time.

That is where I feel these systems need to improve: not only recording detailed information, but also helping create routines and helping you find your own way of making the best use of the sensor data. Information that can help you improve your fitness by showing you how much you can do, and what you should do to push your limits. The user would know when they have done enough, because they can feel it in their own bodies, without the need for a 3D avatar of themselves telling them they did a good job.
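As a minimal sketch of what “goals based on the individual” could look like in practice, the snippet below derives a personal daily step goal from the user’s own recent history instead of a fixed 10,000. The 10% nudge and the 14-day window are illustrative assumptions, not a validated training prescription.

```python
# Sketch: a personal step goal computed from the user's own history,
# rather than a one-size-fits-all 10,000-step target.

from statistics import median

def personal_step_goal(recent_daily_steps, nudge=0.10):
    """Suggest tomorrow's goal as a modest stretch over a typical day."""
    if not recent_daily_steps:
        return None  # not enough data; let the user decide
    typical = median(recent_daily_steps[-14:])  # last two weeks
    return round(typical * (1 + nudge))

# Example: a user who typically walks ~6,200 steps gets a ~6,800-step
# goal, not an arbitrary 10,000.
print(personal_step_goal([5800, 6400, 6100, 7000, 6200, 5900, 6500]))
```

The design choice here mirrors the argument above: the sensor data informs the suggestion, but the target is anchored to the individual, and the user remains free to stop when their own body tells them they have done enough.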

Ubiquitous Computing: A Reflective Essay

Introduction

This course, “Experience Design for Ubiquitous Computing”, has focused on both the social and the technical aspects of ubiquitous computing, and on how user experiences can be designed keeping all the myriad considerations in mind. We began the course by looking at what was to be the lynchpin of the rest of our journey: Mark Weiser’s vision of the ubiquitous computing future [1]. We are arguably two-thirds or so of the way there, and his vision has materialized in some ways, albeit not exactly as he had envisioned. Now I will attempt to lay out my own vision of the future, for the next few years and beyond.

Beyond the Western UbiComp Worldview

One of the key issues that was discussed time and time again was how Mark Weiser’s vision, and UbiComp literature in general, seemed to revolve around Western culture. Of course, this was addressed by Dourish and Bell in their book [2], but there weren’t any examples. I will attempt to explain how UbiComp technology and design affects the parts of the world not focused upon by the current literature.

A vision for UbiComp – Convergence of Current and Future Technology

Mark Weiser showed us his vision of the future in 1991 [1]. He envisioned multiple portable computing devices in various form factors, cheap enough that people would have many of them at hand and could trade them around like hall passes. One of the foundations of this vision is Moore’s law, which recently completed 50 years of existence. Added to that is the proliferation of big data: tremendous amounts of user-generated data being created, collected, and in some cases even harvested. There are also technologies at the fringes of UbiComp, like augmented and virtual reality. Allow me to show you my vision of the future, with all of these technologies taken into consideration.

Moore’s law continues to hold true, and scientists eventually find a means to miniaturize computing capabilities down to the nanoscale. These devices will drive the next generation of ubiquitous computing. Often referred to as “smart dust” [8], this concept has far-reaching applications. I can imagine smart dust being deployed in farmlands and agricultural fields, relaying soil nutrient and other such data to central government cloud services, from which farmers can get real-time updates about their soil conditions, whether they need fertilizer, and so on. This would mean farmers would not be required to learn complex computing systems.

This brings us to the future of location and context awareness [3, 5]. One of the major changes I see happening is the proliferation of augmented reality. I envision the use of this technology in a scenario that not many pay close attention to: social networking and social media. If you observe what social media giants like Facebook are doing these days, you will notice a heightened interest in big data and in augmented and virtual reality; Facebook’s acquisitions of Oculus and the messaging platform WhatsApp are proof of this. In my opinion, Facebook’s mission for the future is to permeate every aspect of an individual’s life. A person wakes up in the morning, his smart device by his side and a multitude of smart dust sensors scattered around the environment. Wearable devices tell him he should get something to eat, because his blood sugar is quite low. His sleep pattern has been erratic over the past few weeks due to an upcoming work deadline, and he can see this through a head-mounted display. Wherever he goes, the head-mounted display [4] provides up-to-date contextual data about his surroundings and his neighborhood, and allows him to take pictures simply by blinking. This technology brings an exponential increase in the amount of user-generated data on social networks, with some people allowing social networks to showcase every minute-by-minute detail of their lives, and Facebook providing the facilities to do so. Increases in computing power allow people to live-stream to hundreds or thousands of people at once through their phones or wearables, whether they are talking to family, attending a social gathering, or are simply entertainers who use this as a means to reach their followers and perhaps earn some revenue through online payment mechanisms.

One of the sectors to be influenced most by the proliferation of ubiquitous computing will be education. In ancient times, students would get individual attention from teachers; this kind of teaching was, of course, reserved for the upper echelons of society. After the industrial revolution, the modern metaphor of the classroom, with one teacher teaching tens if not hundreds of students, became the norm. The internet brought about a revolution called e-learning: people of all ages could now access eBooks and video lectures from around the world. However, I feel that in the future, the confluence of contextual awareness and an exponential increase in the data available to people will bring about the next revolution in education. Children these days have access to smart devices with internet connections, and they are able to search for things simply by typing queries into search engines. The rise of UbiComp-based design will create a new kind of education system: a personalized digital teacher. Just as Alexander the Great had a teacher and mentor in Aristotle, children will have at their disposal a digital teacher that teaches them based exactly on their needs, using data gathered through wearables, communication via voice and other input modalities, and various other means. Parents will have control over, and be able to keep track of, their child’s progress, and will know what their child is learning. Technology, if informed by research on child psychology, will be able to cater even to children with special needs through this new system. Today we see e-learning platforms like Lynda.com, but they are limited in their effectiveness, as they are not personalized for each individual student.

Of course, the usual question arises: “What about privacy? Will people allow technology to permeate their lives to this extent?” I believe so. As Langheinrich [6, 7] notes, about 60–70% of people fall under the category of privacy pragmatists. As technology continues to permeate our lives, and marketers continue to sell smart devices, wearables and services to consumers, it will create a level of dependency on these services that we would perhaps find hard to get out of. Just look at our increasing dependence on Google services, for example: most consumers and small enterprises use Google services for email, cloud storage and even collaborative documents. As this dependence increases, we will slowly allow more and more technology to permeate our lives, and we will become more accepting of it as well. Just look at how instant messaging has changed family dynamics. I frequently chat with my family on instant messaging platforms like WhatsApp, which recently added a calling feature; an immediate result was me getting calls from distant relatives, just because it was possible. This integration of various affordances into systems increases adoption and acceptance. It also means an increase in the “messiness” of the whole system: free-market competition means that cross-platform communication will probably never be as seamless as some people would like. This can be especially important if we move toward a vision of connected homes and the “internet of things”.

Another important aspect is the energy required to power all these devices. Battery technology has not advanced sufficiently, and techniques like energy scavenging [10] have not yet yielded significant improvements. This could prove to be a major stumbling block for the proliferation of UbiComp.

Speaking of stumbling blocks, one of my concerns is whether all the questions we have considered over the course of the semester will even be considered by the creators of UbiComp systems going forward. I have observed that many of the case studies have been post-rationalizations by researchers, looking at what went right and what went wrong. Will the major players in UbiComp consider the socio-technical challenges while creating new systems? In the ethnography discussion [9], Dourish and Bell show that introducing technology into different scenarios sometimes needs careful analysis. Sometimes you need to know when not to introduce technology, rather than how to introduce it into each and every new niche or domain.

Conclusion:

Ubiquitous computing seemed like a field that was myopic, in the sense that it was so heavy on Western influences. The key focus areas seemed to be sensors, person tracking, and connected environments like the smart home. However, the more I read into it, especially the two texts Ubiquitous Computing Fundamentals and Divining a Digital Future, the more I saw not only the technical but also the sociological considerations of this field. Being on the cutting edge of technology, UbiComp poses novel questions and concerns that are not apparent from a surface-level evaluation of the field. Designing systems for ubiquitous computing should therefore be, in essence, a multi-disciplinary undertaking.

References:

  1. Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94–104.
  2. Dourish, P. & Bell, G. (2011). Contextualizing ubiquitous computing. In P. Dourish & G. Bell, Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (pp. 9–43). Cambridge, MA: MIT Press.
  3. Estrin, D., Culler, D., Pister, K., & Sukhatme, G. (2002). Connecting the physical world with pervasive networks. IEEE Pervasive Computing, 1(1), 59–69.
  4. Starner, T. (2013, April–June). Project Glass: An extension of the self. IEEE Pervasive Computing, 12(2), 14–16.
  5. Dey, A. K. (2010). Context-aware computing. In J. Krumm (Ed.), Ubiquitous Computing Fundamentals (Chapter 8, pp. 321–352). Boca Raton, FL: Taylor & Francis/CRC Press.
  6. Langheinrich, M. (2010). Privacy in ubiquitous computing. In J. Krumm (Ed.), Ubiquitous Computing Fundamentals (Chapter 3, pp. 95–160). Boca Raton, FL: Taylor & Francis/CRC Press.
  7. Dourish, P. & Bell, G. (2011). Rethinking privacy. In P. Dourish & G. Bell, Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (pp. 137–160). Cambridge, MA: MIT Press.
  8. Warneke, B., Last, M., Liebowitz, B., & Pister, K. S. (2001). Smart dust: Communicating with a cubic-millimeter computer. Computer, 34(1), 44–51.
  9. Dourish, P. & Bell, G. (2011). A role for ethnography: Methodology and theory. In P. Dourish & G. Bell, Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (pp. 61–89). Cambridge, MA: MIT Press.
  10. Paradiso, J. A., & Starner, T. (2005). Energy scavenging for mobile and wireless electronics. IEEE Pervasive Computing, 4(1), 18–27.