Ubiquitous Computing – A Reflective Essay


This course, “Experience Design for Ubiquitous Computing,” focused on both the social and the technical aspects of ubiquitous computing, and on how user experiences can be designed with these myriad considerations in mind. We began the course by examining what would become the linchpin of the rest of our journey: Mark Weiser’s vision of the ubiquitous computing future [1]. We are arguably two thirds or so of the way there, and his vision has materialized in some form, albeit not exactly as he envisioned. In what follows, I will attempt to present my own vision of the future, for the next few years and beyond.

Beyond the Western UbiComp Worldview

One of the key issues discussed time and time again was how Mark Weiser’s vision, and UbiComp literature in general, seemed to revolve around Western culture. Dourish and Bell address this in their book [2], but concrete examples are scarce. I will attempt to explain how UbiComp technology and design affect the parts of the world that current literature does not focus on.

A vision for UbiComp – Convergence of Current and Future Technology

Mark Weiser showed us his vision of the future in 1991 [1]. He envisioned portable computing devices in various form factors, cheap enough that people would have many of them at hand and could trade them around like hall passes. One of the foundations of this vision is Moore’s law, which recently marked its fiftieth anniversary. Added to that is the proliferation of big data: tremendous amounts of user-generated data being created, collected, and in some cases even harvested. There are also technologies at the fringes of UbiComp, like augmented and virtual reality. Allow me to show you my vision of the future, with all these technologies taken into consideration.

Moore’s law continues to hold true, and scientists eventually find a means to miniaturize computing capabilities down to the nanometer scale. These devices will drive the next generation of ubiquitous computing. Often referred to as “smart dust” [8], this concept has far-reaching applications. I can imagine smart dust being deployed in farmlands and agricultural fields, relaying soil nutrient levels and other such data to central governmental cloud services, from which farmers can get real-time updates about their soil conditions, such as whether they need fertilizer. This would spare farmers from having to learn complex computing systems.

This brings us to the future of location and context awareness [3, 5]. One of the major changes I see happening is the proliferation of augmented reality. I envision this technology being used in an area that not many pay close attention to: social networking and social media. If you observe what social media giants like Facebook are doing these days, you will notice a heightened interest in big data and in augmented and virtual reality; Facebook’s acquisitions of Oculus and the messaging platform WhatsApp are proof of this. In my opinion, Facebook’s mission for the future is to permeate every aspect of an individual’s life.

A person wakes up in the morning, his smart device by his side, a multitude of smart dust sensors scattered around the environment. Wearable devices tell him he should get something to eat, because his blood sugar is quite low. His sleep pattern has been erratic over the past few weeks due to an upcoming work deadline, and he can see this through a head-mounted display. Wherever he goes, the head-mounted display [4] provides up-to-date contextual data about his surroundings and his neighborhood, and allows him to take pictures simply by blinking. This technology brings an exponential increase in the amount of user-generated data on social networks, with some people allowing social networks to showcase every minute-by-minute detail of their lives, and Facebook providing the facilities to do so. Increased computing power lets people live-stream to hundreds or thousands of others at once through their phones or wearables, whether they are talking to family, attending a social gathering, or simply entertaining personalities who use this as a means to reach their followers and perhaps gain revenue through online payment mechanisms.

One of the sectors that will be influenced greatly by the proliferation of ubiquitous computing is education. In ancient times, students received individual attention from teachers, though this kind of teaching was of course reserved for the upper echelons of society. After the industrial revolution, the modern metaphor of the classroom, with one teacher instructing tens if not hundreds of students, became the norm. The internet brought about a revolution called e-learning: people of all ages could access e-books and video lectures from around the world. However, I feel that the confluence of contextual awareness and an exponential increase in available data will bring about the next revolution in education. Children these days have access to smart devices with internet connections, and can search for things simply by typing queries into search engines. UbiComp-based design will create a new kind of education system: a personalized digital teacher. Just as Alexander the Great had a teacher and mentor in Aristotle, children will have at their disposal a digital teacher that teaches exactly according to each child’s needs, based on data gathered through wearables, voice and other input modalities, and various other means. Parents will have control over, and be able to keep track of, their child’s progress, and will know what their child is learning. Informed by research in child psychology, this new system could cater even to children with special needs. We already see e-learning platforms like Lynda.com, but they are limited in their effectiveness because they are not personalized for each individual student.

Of course, the usual question arises: “What about privacy? Will people allow technology to permeate their lives to this extent?” I believe so. As Langheinrich [6, 7] notes, about 60–70% of people fall under the category of privacy pragmatists. As technology continues to permeate our lives, and marketers continue to sell smart devices, wearables, and services to consumers, we will develop a level of dependency on these services that we would find hard to escape. Just look at our increasing dependence on Google, for example: most consumers and small enterprises use Google services for email, cloud storage, and even collaborative documents. As this dependence grows, we will slowly allow more and more technology into our lives, and become more accepting of it as well. Consider how instant messaging has changed family dynamics. I frequently chat with my family on instant messaging platforms like WhatsApp, which recently integrated a calling feature; an immediate result was my receiving calls from distant relatives, simply because it was now possible. This integration of various affordances into systems increases adoption and acceptance. It also increases the “messiness” of the whole system: free-market competition means that cross-platform communication will probably never be as seamless as some people would like. This becomes especially important if we move toward a vision of connected homes and the “internet of things.”

Another important aspect is the energy required to power all these devices. Battery technology has not advanced sufficiently, and techniques like energy scavenging [10] have not yet yielded significant improvements. This could prove to be a major stumbling block for the proliferation of UbiComp.

Speaking of stumbling blocks, one of my concerns is whether the questions we have considered over the course of the semester will even be considered by the creators of future UbiComp systems. Many of the case studies we read were post-hoc rationalizations by researchers, examining what went right and what went wrong. Will the major players in UbiComp weigh the socio-technical challenges while creating new systems? In their discussion of ethnography [9], Dourish and Bell show that introducing technology into a new setting demands careful analysis. Sometimes you need to know when not to introduce technology, rather than how to introduce it into each and every new niche or domain.


Ubiquitous computing initially seemed like a myopic field, heavily steeped in Western influences, with key focus areas such as sensors, person tracking, and connected environments like the smart home. However, the more I read, especially the two texts “Ubiquitous Computing Fundamentals” and “Divining a Digital Future,” the more I saw not only the technical but also the sociological considerations of the field. Being on the cutting edge of technology, UbiComp poses novel questions and concerns that are not apparent from a surface-level evaluation. Designing systems for ubiquitous computing should therefore be, in essence, a multi-disciplinary endeavor.


  1. Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94–104.
  2. Dourish, P., & Bell, G. (2011). Contextualizing ubiquitous computing. In Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (pp. 9–43). Cambridge, MA: MIT Press.
  3. Estrin, D., Culler, D., Pister, K., & Sukhatme, G. (2002). Connecting the physical world with pervasive networks. IEEE Pervasive Computing, 1(1), 59–69.
  4. Starner, T. (2013). Project Glass: An extension of the self. IEEE Pervasive Computing, 12(2), 14–16.
  5. Dey, A. K. (2010). Context-aware computing. In J. Krumm (Ed.), Ubiquitous Computing Fundamentals (pp. 321–352). Boca Raton, FL: Taylor & Francis/CRC Press.
  6. Langheinrich, M. (2010). Privacy in ubiquitous computing. In J. Krumm (Ed.), Ubiquitous Computing Fundamentals (pp. 95–160). Boca Raton, FL: Taylor & Francis/CRC Press.
  7. Dourish, P., & Bell, G. (2011). Rethinking privacy. In Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (pp. 137–160). Cambridge, MA: MIT Press.
  8. Warneke, B., Last, M., Liebowitz, B., & Pister, K. S. (2001). Smart dust: Communicating with a cubic-millimeter computer. Computer, 34(1), 44–51.
  9. Dourish, P., & Bell, G. (2011). A role for ethnography: Methodology and theory. In Divining a Digital Future: Mess and Mythology in Ubiquitous Computing (pp. 61–89). Cambridge, MA: MIT Press.
  10. Paradiso, J. A., & Starner, T. (2005). Energy scavenging for mobile and wireless electronics. IEEE Pervasive Computing, 4(1), 18–27.

FilmSite – Design by Contextual Inquiry


The goal of this project was to design a system that could augment filmmaking capabilities with the help of Unmanned Aerial Vehicles (UAVs).

Requirements Gathering:

Our project involved observing the setup, production, and post-production activities that take place during filmmaking projects. This included observing all activities related to videography, lighting and sound, and computer graphics, and the synchronization among these various aspects of a filmmaking project.

We observed the behaviors of the various people involved, their routines, and the procedures they followed. We conducted interviews to gather more information about issues encountered during filmmaking activities; the constraints people had to work within were of particular importance to us. The observed environments consisted of film sets, office spaces, and post-production facilities with computer workstations.

Conceptualization through contextual models

We took note of all the artifacts used and the methods involved in using them. Contextual notes were taken and diagrams were made, which were then consolidated. Based on the information gleaned from these diagrams, we envisioned designs and collaboratively created a storyboard.

Flow Model

Below is a representation of the coordination, communication, interaction, roles, and responsibilities of the film crew.




Sequence Model

The step-by-step process of film production is described below in the sequence model. Intents, triggers, activities, and breakdowns are discussed.



Physical Model

Below is a model representing the physical environment in which the work tasks are accomplished.


Artifact Model

The artifact model gave us insight into the inefficiencies of using heavy equipment that requires power outlets and manpower to move. It also suggested how we could use drones to make some of these tasks less physically tedious and more efficient.



Cultural Model

The cultural model reflects the close interaction among the film crew.


Affinity Diagram


Visioning and Storyboards

FilmSite is envisioned as an on-the-go visualization and film production tool that would allow directors, film crews, and post-production VFX designers to plan film scenes from any location, at any time inspiration hits, using a combination of real-world imagery and simple mock-ups. FilmSite will allow for scene and camera planning, and an instantaneous ability to share work through the application or by handing a smartphone to another person to view completed work.


The following storyboards illustrate scenarios of envisioned use:




User Environment Design



Low Fidelity Prototype







High Fidelity Prototype






Interactive Prototype



Key Strengths:

  • Users appreciated the high-level intentions of the design idea
  • The 3D perspective in the pre-visualization was considered helpful
  • Ability to restrict the view to specific camera choices was extremely useful for multiple shot planning
  • Mobile platform is convenient for use when ideas come to mind, or you want to show ideas to colleagues by handing your phone to them

Key areas of Improvement:

  • Improved definition between sections that cater to different subsets of production (pre-production, production, and post-production)
  • The Filming section is considered ambiguous, as the entire application is designed for the process of filming
  • The export of the 3D environment in pre-visualization needs more clarity as to what it does
  • Show different angles of the 3D environment so that perspectives can be seen within the prototype, better expressing its intentions

FULL REPORT: You can view the full report, which includes all of the detailed information, here: Full Report

Usability Evaluation – Craigslist


This is an expert usability evaluation I conducted as part of a team project for a class titled “Usability Evaluative Methods.” We conducted heuristic analyses and cognitive walkthroughs as part of the formative evaluation, after which we conducted a usability test.

The test was a comparative study of Craigslist and a competing website, Oodle.com, with five tasks per website. Half of the participants started with Craigslist and the other half with Oodle, counterbalancing order to control for practice effects. Participants answered questions in a semi-structured interview at the beginning of the test, and afterwards filled out a post-test questionnaire consisting of a modified System Usability Scale (SUS) and a card-sorting exercise that helped us glean information about their thought processes.
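As a rough illustration, the standard SUS scoring procedure can be sketched as follows (our modified scale may have differed slightly; the function name and structure here are my own):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and contribute (5 - response).
    The summed contributions are multiplied by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS uses exactly ten items")
    total = 0
    for item, response in enumerate(responses, start=1):
        total += (response - 1) if item % 2 == 1 else (5 - response)
    return total * 2.5
```

A respondent who strongly agrees with every positive item and strongly disagrees with every negative one would score a perfect 100.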

Formative Study – Heuristic Analysis and Cognitive Walkthrough

We conducted heuristic analyses and cognitive walkthroughs individually and combined our findings. Some of them were:

Craigslist Home page

  • Categories lack a clear order; the design is plain and dated.
  • Difficulty in changing location.
  • The search filters do not work.

Account page

  • Difficulty in finding the “create a post” option once logged in to account.
  • Difficulty in navigating away from the accounts page.

Search Results

  • Search filters do not work.
  • Search alert function not clearly explained.

Summative Study – Usability Testing

To diagnose areas of improvement, we tested Craigslist.org against a similar site, Oodle.com, and compared the findings with the usability issues identified in our expert review. Some of our expert review findings were confirmed by the user testing, and new issues were revealed as well. Two evaluators were present during each session: one facilitated the test while the other observed. A total of 8 user testing sessions were conducted.
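The alternating session order used to control for practice effects can be sketched like this (the helper name and structure are my own illustration, not part of our actual tooling):

```python
def counterbalance(participants):
    """Alternate the starting site so half the participants begin with
    Craigslist and half begin with Oodle, controlling for practice effects."""
    order = {}
    for index, participant in enumerate(participants):
        if index % 2 == 0:
            order[participant] = ("Craigslist", "Oodle")
        else:
            order[participant] = ("Oodle", "Craigslist")
    return order
```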

Task Descriptions

Each session included the following tasks for Craigslist.org:

  • Logging in to User Account
  • Post a Listing
  • Add an Image to a Posting
  • Search for an Apartment to Rent
  • Save a Search

Each session included the following tasks for Oodle.com:

  • Logging in to User Account
  • Post a Listing
  • Add an Image to a Posting
  • Search for an Apartment to Rent
  • Mark a Listing as Favorite


Summary of Findings

Here are a few graphs showing a summary of our findings:




General Recommendations:

Here are some recommendations derived from our card-sorting exercise:

  • Adding a text box where users can “Search by Location,” as used on several other classifieds websites.
  • Adding a notification about the waiting time for processing new listings. Currently, the website does not tell users that new postings take about 20 minutes before they can be viewed by other customers.
  • Adding the product’s distance from the user, showing how far they would have to travel to pick up their purchase. A few websites, together with Google Maps, already use this feature.

Other Recommendations:

  • Restrained social media integration
  • Clearly labeled icons

FULL REPORT: You can view the full report, which includes all of the detailed information, here: Full Report

My views on Google's new Material Design UI

Google introduced a UI refresh as part of the Android L developer preview at its recently concluded developer conference, Google I/O. A lot is being said about the new design language, labeled “Material Design,” and Google has provided extensive guidelines to help developers design their apps this way going forward. A very important aspect of this design is unity, as Google’s VP of design Matias Duarte says:

We wanted one consistent vision for mobile, desktop and beyond, something clear and simple that people would intuitively understand.

Unity is important for Google, as it will make it easier for users to access Google services across different devices. Google has certainly taken design cues from both Microsoft and Apple, but Material Design does not look like a patchwork of disjointed ideas; it seems cohesive and thoughtful.

It’s all about “Paper Craft”

Paper is the fundamental design paradigm of material design. Every pixel drawn by an application resides on a sheet of paper. A typical layout is composed of multiple sheets of paper. 

Toolbars and menus can be configured to look and feel like papers on a notepad.


Depth as Hierarchy, not Ornamentation

In previous versions of Android and iOS, an excessive amount of textures, gradients, and shading was used, which appeared overdone, disjointed, and ugly. iOS 7 saw a radical change, stripping away these superfluous graphics and giving rise to a “flat” UI paradigm without gradients or shading.

Instead of going to extremes as iOS did, Google has adopted a more subtle and nuanced approach. Material Design uses depth not as ornamentation, but as a way of communicating hierarchy and focusing users’ attention on a task. Shadows can be added to aid the perception of depth and to highlight objects.
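To make the idea concrete, here is a toy sketch of how an elevation value might be mapped to shadow parameters so that higher surfaces cast larger, softer, fainter shadows. The scaling factors are invented purely for illustration; they are not Google’s actual specification:

```python
def shadow_for_elevation(elevation_dp):
    """Map an element's elevation (in dp) to illustrative shadow parameters.

    Higher elevation -> larger blur, longer vertical offset, fainter shadow,
    which is what makes depth read as hierarchy rather than decoration.
    """
    if elevation_dp < 0:
        raise ValueError("elevation cannot be negative")
    return {
        "blur": elevation_dp * 2,                        # softer edges as the surface rises
        "y_offset": elevation_dp,                        # light from above pushes the shadow down
        "opacity": max(0.1, 0.3 - elevation_dp * 0.01),  # fades with height
    }
```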

While the “Flat UI” paradigm is all about taking things away (gradients, shadows, highlights, etc.), this new philosophy seems to be based on adding movement, animation, and color to spruce up the user experience.

Responses to Input

Until now, precious little was done to provide users positive feedback while interacting with a system or application. Material Design incorporates visual and motion cues in an attempt to engage the user, acknowledging input through animated effects that look refined rather than overdone.

Upon receiving an input, the system provides an instantaneous visual confirmation at the point of contact.

Use of Color

Android's Gmail app, before and after the new Material Design interface.



Taking a leaf out of the Windows Phone UI playbook, Material Design has a distinct focus on typography. The Roboto font, a mainstay on Android devices ever since Android 4.0 ICS, has been modified slightly: it is wider and rounder in an attempt to be more pleasing to the eye, especially since text is almost always white, juxtaposed against a vibrant background in the main title bar of applications.

Simplified Icons

The trend toward simpler icons, instead of gaudy, texture-rich ones, has been evident ever since Android ICS, and can also be seen in custom OEM skins like HTC Sense 6.

Each icon is now reduced to a minimal form, every idea edited to its essence. Even the navigation buttons have been reduced to geometric shapes. The designs ensure readability and clarity even at small sizes. Every icon uses geometric shapes, and a play on symmetry and consistency gives each icon a unique quality. Emphasis is laid upon consistency of icons for both mobile and desktop icons, and small details like rounded/sharp corners have been touched upon.

Focus on Imagery

The focus on visual content is also very obvious in the new Android L design. The image takes center stage, and designers are encouraged to use vibrant, bright imagery rather than stock photos. Vibrancy of images has always been part of the smartphone user experience: users prefer oversaturated images and vibrant colors in the photographs they take, and they like colors to “pop” rather than look natural. The popularity of AMOLED display technology, and display calibration by OEMs that favors oversaturated over true-to-life colors, supports this observation.

Just like the Windows Phone UI, Material Design relies on images that run right up to the edges of the containing area, without window borders. It’s all big, bold squares and rectangles rather than icons and windows.

The “Card” Concept extended


Google has been shifting to the “card” user interface: a rectangle or tile that contains a unique set of related information and typically serves as an entry point to more complex, detailed information. These cards or tiles have been part of the UI in Google Now and a host of other applications like Google+. The way these tiles update the user with live information is similar to Microsoft’s live tiles in the Windows Phone UI, for instance showing the details of your next appointment on the calendar tile. Cards provide the user with summarized, glanceable information, and will be used extensively in the future as the focus on wearable technology increases.
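The card pattern described above can be modeled as a small data structure: a glanceable summary surface that links through to richer detail. This is purely a hypothetical sketch; the names are mine, not Google’s API:

```python
from dataclasses import dataclass

@dataclass
class Card:
    """A hypothetical model of the card pattern: summarized, glanceable
    information that acts as an entry point to more detailed content."""
    title: str
    summary: str
    detail_uri: str  # where tapping the card takes the user

    def glance(self):
        # What the user sees without tapping through
        return f"{self.title}: {self.summary}"
```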

Moving Towards Consistency

Google’s new design language is a good refresh, and brings a lot to the table in terms of design. One of its most important aspects, however, is the depth, detail, and systematic nature of the documentation. After a long era of designers and developers creating Android experiences that often felt renegade or pieced together, Google has undoubtedly stepped up its efforts to standardize and improve the UI and UX across its app ecosystem. If adopted, Material Design will lend a much-needed consistency to that world.

Keeps up with current design trends

Google is trying to foster uniformity by getting ahead of all the screen sizes it now supports and providing some real structure. It seems Google really tried to set up a foolproof way to design for every screen size, from the desktop experience to Glass to the watch. The effort is obviously about controlling the experience. Instead of imposing a strict visual aesthetic, Google defined a set of principles that leave more freedom to individual designers, while still pushing its numerous apps in the same consistent direction.

In Conclusion…

Many will see Material as a further extension of the flat era of design, in the same way Windows 8 and iOS 7 use large areas of solid color and wide-open spaces with a focus on typography. I think it is more than that: the current design trends are the only sane way to support a wide range of display sizes, ratios, and pixel densities. The physics, animation, and layering effects are only now possible because the hardware allows them. The new design has elements that dynamically shrink and expand, adds more white space between elements, offers lots of animation, and provides a more 3D look emphasized by shadows and lighting effects. It is designed to put the emphasis on the most important content on a screen. Although these are just visual effects today, they could become handy in future years with 3D displays and the possibility of tactile touchscreens that physically raise portions of a display.

Maybe this is Google’s way of filling the void left by the demise of richly textured skeuomorphic designs? In any case, we can only hope it will add a little warmth and humanity to digital design and save us from a world where every app looks and behaves the same. Overall? I like it, and I’m glad it’s here, but I don’t find myself bowled over by any single component of the new system. It is a well-considered stride in a necessary direction, and a great effort in laying the groundwork for a very Google-driven future ecosystem.

The video below reveals how the Material design language works across all devices Google touches, from smartphones to Glass to wearables.

Ingenious Touchscreen UI for Cars

An Ingenious "Eyes-Free" Touch Based Interface for Cars

Cars these days have a lot of features built in to “enhance the driving experience”: radio sets, central controls, GPS navigation, CD players… and for every new feature added to the central console, there’s an ugly, unintuitive user interface. On one hand you might have knobs, buttons, and dials that can be used without looking; on the other, a touchscreen interface that’s a bit better to look at and less clunky, but requires you to take your eyes off the road. There is always a tradeoff between form and function, visual appeal versus ease of use.

Touchscreens today

Touchscreen Interfaces on cars these days, too similar to the button/knob paradigm that preceded it.


The touchscreen interfaces found in cars today are skeuomorphic. They adhere to the same layout, the same design language, and basically the same way of interacting as the preceding standard, buttons and knobs, changing only the input method: the touch screen. Skeuomorphism is not a bad thing in and of itself; resemblance to real-world objects helps people understand and learn, as seen in smartphone operating systems today, where iOS and Android use icons, text, and buttons to great effect.

However, the usage scenario here is very different. Smartphones can get away with skeuomorphism because the user looks at the display, and nowhere else, while operating it.

While driving a car, the driver’s attention needs to be on the road. Touchscreen interfaces, in their current form, cannot simply be ported over for use in automobiles. Virtual buttons and knobs offer no tactile feedback, and the user needs to search for the button every time.

A new solution

Designer Matthaeus Krenn has created a touch-based user interface that can be operated completely without having to look at it.

A car UI that departs from traditional skeuomorphism

Instead of buttons, icons, text, or menus, the interface is based on the number of fingers used to touch it, along with gestures like pinching and swiping. Dragging upwards with two fingers turns up the volume; dragging up with three changes the audio source. Four fingers control temperature; five control airflow. Each function has a unique sensitivity and can be triggered starting anywhere on the touch surface. Moving up or down with your fingers spread a bit wider offers an additional set of controls. All eight of these can be remapped to the driver’s preference.
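Krenn’s finger-count mapping can be sketched as a simple dispatcher (the control names and structure are my own illustration of the concept, not his implementation):

```python
# Hypothetical mapping of finger count to control, per Krenn's concept
CONTROLS = {2: "volume", 3: "audio_source", 4: "temperature", 5: "airflow"}

def interpret_gesture(finger_count, drag_dy):
    """Resolve a multi-finger vertical drag into (control, direction).

    drag_dy > 0 is an upward drag (increase); drag_dy < 0 decreases.
    Touches with an unmapped finger count, or no vertical motion, are ignored.
    """
    control = CONTROLS.get(finger_count)
    if control is None or drag_dy == 0:
        return None
    return (control, "up" if drag_dy > 0 else "down")
```

Because the mapping keys on finger count rather than on-screen position, the driver can trigger any control anywhere on the surface without looking.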

This new UI seems to be built from the ground up specifically for touch devices. However, it will take time and effort for users to train themselves and learn the new interface, something the designer himself admits needs to be addressed in future iterations. The application is currently available only for the iPad, and can be downloaded here.

In the future…

This focus on building a new touch interface from the ground up is a welcome change and a step in the right direction. New control methods can offer exciting advantages that were previously impossible, but they also come with their own set of challenges.

Augmented reality is another aspect of human–computer interaction that looks promising. A user interface combining touchscreen technology and augmented reality may very well be how we interact with our cars in the future. Critical information popping up on the windshield, or the heads-up displays found in sci-fi movies and video games, may not be a far-fetched prospect. The only issue is that augmented reality displays on car windshields may distract the driver, defeating the purpose of it all.

What do you think is the future of automobile interfaces? Let me know in the comments!

