Living labs

Blog Thirteen: Date
4/17/2019

What are Living labs?

Living labs are environments that are “configured to measure and record people’s everyday activities in a natural setting, such as the home” (Preece, Rogers & Sharp, 2015). They can be thought of as a combination of natural and laboratory environments.

Reflection

Living labs are most useful when a controlled environment is needed but data must be recorded from testers in their natural setting. “The concept of a living lab differs from a traditional view of a laboratory insofar as it is trying to be both natural and experimental, and where the goal is to bring the lab into the home” (Preece, Rogers & Sharp, 2015). Often the focus is on a prototype that is installed within the living lab. One example I thought of was a home security system. Though the actual security features of a home security system can be tested in a lab without real users, some aspects still require user testing. How easy would it be to set the alarm? How loud should the alarm be by default? What should it sound like? These questions and more could be answered by observing and recording user feedback. In a living lab with this alarm system prototype, a family could experience a scenario where someone tries to break into the living lab at night. From there, observers can record how the users react to the alarm system. “The dilemma is how artificial do you make the more natural setting; where does the balance lie in setting it up to enable the right level of control to conduct evaluation without losing the sense of it being natural?” (Preece, Rogers & Sharp, 2015). In the alarm system scenario, I would think the lab would need to resemble a realistic home that feels enclosed and comfortable to the users. It would need all the facets of a normal home.

Figure 11. Living Lab (Hinchliffe, 2019)

My Thought

“I believe that the level of immersion needed for living labs to maintain the necessary likeness of a natural setting is dependent on the scenario and the users being tested” (McMillan, 2019).

Reference List

  1. Hinchliffe, T. (2019). LivingLab: everis Opens Innovation Lab for Co-creation in Barcelona – Novobrief. Retrieved from https://novobrief.com/innovation-lab-barcelona/6179/
  2. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.

AgileUX

Blog Twelve: Date
4/10/2019

What is AgileUX?

AgileUX “is the collective label given to efforts that aim to resolve these concerns by integrating techniques and processes from interaction design and those from agile methods” (Preece, Rogers & Sharp, 2015). It means to iterate, adjust, and correct in sprints throughout the project to minimize errors in the final product.

Reflection

The process of AgileUX is to design, build, and iterate. With AgileUX, sprints are used with Agile software development methodologies. Agile can be useful for reducing the number of errors and unifying developers and designers (Preece, Rogers & Sharp, 2015). If I were developing a game title, I would use AgileUX to reduce as many potential bugs as possible. Too often have I seen newly released online games, or updates for existing games, that are filled to the brim with bugs and graphical errors. On the other side, it really shows how efficient a game company is when it can release multiple titles with little to no need for patches or updates to fix bugs or errors that should have been addressed prior to release. With Agile, the important thing is that the requirements are specified as early as possible. “Agile development emphasizes regular delivery of working software through evolutionary development, and the elaboration of requirements as implementation proceeds” (Preece, Rogers & Sharp, 2015). With big-name gaming companies, I imagine this is done often with games that are part of an ongoing series of big title releases. The company knows what needs to be done and can refer back to previous titles in the same series. Of course, agile iterations are needed as the requirements change. Changing requirements is not uncommon in any project, especially when players have ever-evolving interests.

Figure 10. Agile UX (Liu, 2019)

My Thought

“If implemented correctly, AgileUX can allow a company to deliver a more complete product to users with minimal user testing. At the same time, it is not a ‘one and done’ solution. It should be used alongside other techniques and methodologies, either within the same project or in other projects moving forward.” (McMillan, 2019).

Reference List

  1. Liu, D. (2019). Agile UX vs Lean UX: Don’t Force Yourself To Choose, Designer. Retrieved from https://uxplanet.org/agile-ux-vs-lean-ux-dont-force-yourself-to-choose-designer-61f8d60f4f7a
  2. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.

Prototyping

Blog Eleven: Date
4/06/2019

What is a Prototype?

A prototype is essentially “one manifestation of a design that allows stakeholders to interact with it and to explore its suitability” (Preece, Rogers & Sharp, 2015). Normally, a prototype focuses on one aspect of a product’s design while paying less attention to others.

Reflection

I’ve heard the word ‘prototype’ for years, ever since I was a child. In my head, a prototype was a rough first attempt at an invention. Though I wasn’t too far off, I used to think that a prototype had to be functional and a three-dimensional representation of the final product. In reality, a prototype can be anything from a drawing to a CGI model, or even a clay sculpture. Another thing I didn’t fully understand was the actual purpose of a prototype. “Prototypes are useful when discussing or evaluating ideas with stakeholders; they are a communication device among team members, and an effective way for designers to explore design ideas” (Preece, Rogers & Sharp, 2015). With that said, a prototype can be made throughout multiple stages of a project. It can be made to show the initial concept of the product, to generate more ideas for changing the current design, to conduct user testing, and so on. Being a person who loves to draw, I’ve created many prototypes in the form of sketches and storyboards. These low-fidelity prototypes were used by both me and my group members to plan out recordings and systems that we would develop. Another use of prototypes that I’ve come across is for patents. “If a product is new enough or unique enough, patents need to be considered. . . by having a working prototype, it is much easier to sit down with a patent attorney and see what design aspects may be patentable” (Upton, 2019). And so prototypes can serve their usefulness outside the design phases as well.

Figure 9. Paper prototype by Tonic3, Texas (“Prototyping for Design”, 2019)

My Thought

“Prototyping is an essential part of project design. Many innovative technological products that we have today started off as ideas that were later represented as prototypes.” (McMillan, 2019).

Reference List

  1. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.
  2. Prototyping for Design. (2019). Retrieved from https://www.signl.co.za/blog/2017/3/30/prototyping-for-design
  3. Upton, S. (2019). Four Key Uses of Prototyping. Retrieved from https://www.moldmakingtechnology.com/articles/why-is-prototyping-important

Hierarchical Task Analysis

Blog Ten: Date
3/28/2019

What is Hierarchical Task Analysis?

Hierarchical Task Analysis (HTA) is an analysis that “involves breaking a task down into subtasks and then into sub-subtasks and so on. These are then grouped together as plans that specify how the tasks might be performed in a real situation” (Preece, Rogers & Sharp, 2015). HTA primarily focuses on observable actions being performed.

Reflection

Any task that can be broken down into subtasks can be analyzed with HTA. After taking the time to think about my daily and weekly routines, I tried to come up with some potential examples of HTA. One example of a task that can be broken down is making an online purchase from a vendor such as Amazon. The process of purchasing an item from an online vendor can be broken down into five stages: locate the item, add the item to a shopping cart, enter payment details, enter address information, and complete the order. A plan can be made from these subtasks once they are broken down. One plan would be to have a new user go through all the listed tasks, since their information wouldn’t be in the vendor’s system. The other part of this plan would be to have existing users only carry out locating the item, adding the item to the cart, and confirming the purchase. This seems to be the case with most vendors I’ve purchased from multiple times, and it saves time in the long run. Of course, HTA may not be the best fit for all real tasks. “Real tasks are very complex, and task analysis does not scale very well . . . it cannot model tasks that are overlapping or in parallel, nor can it model interruptions” (Preece, Rogers & Sharp, 2015). But on the plus side, this analysis allows one to compare designs, reuse them, and understand them with ease. As one site puts it, “while creating a detailed hierarchical task analysis is time consuming, making each step explicit makes it less likely that you’ll ignore any of the knowledge a user requires. Plus, it may let you identify further opportunities for improving the user experience” (Hornsby, 2019).
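The purchase example above can be sketched as a small task tree with plans. This is only a rough illustration of how an HTA might be recorded in code; the `Task` structure and the plan notation are my own invention, not from the textbook:

```python
# Minimal sketch of an HTA: a goal broken into numbered subtasks,
# plus plans that say which subtasks apply in which situation.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list = field(default_factory=list)

# Goal 0: purchase an item from an online vendor
purchase = Task("0. Purchase item from online vendor", [
    Task("1. Locate item"),
    Task("2. Add item to shopping cart"),
    Task("3. Enter payment details"),
    Task("4. Enter address information"),
    Task("5. Complete the order"),
])

# Plans specify which subtasks are performed, and in what order,
# for a given real situation.
plans = {
    "Plan 0 (new user)": [1, 2, 3, 4, 5],
    "Plan 0 (returning user)": [1, 2, 5],  # payment/address already on file
}

def show(task, depth=0):
    """Print the task hierarchy with indentation."""
    print("  " * depth + task.name)
    for sub in task.subtasks:
        show(sub, depth + 1)

show(purchase)
for plan, steps in plans.items():
    print(f"{plan}: do {'-'.join(map(str, steps))}")
```

Writing the decomposition out this explicitly mirrors the point Hornsby makes: it is tedious, but it forces every step a user needs to be stated.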

Figure 8. HTA of ordering a book (Hornsby, 2019)

My Thought

“To me, Hierarchical Task Analysis is a simple way for new designers to understand users’ tasks. It can make for a good start to trying out other UX tools and methods.” (McMillan, 2019).

Reference List

  1. Hornsby, P. (2019). Hierarchical Task Analysis :: UXmatters. Retrieved from https://www.uxmatters.com/mt/archives/2010/02/hierarchical-task-analysis.php
  2. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.

User-Centered Approach

Blog Nine: Date
3/12/2019

What is User-Centered Approach?

A user-centered approach is when “users and their goals, not just technology, are the driving force behind product development” (Preece, Rogers & Sharp, 2015). The basis for a user-centered approach is three principles: early focus on users and tasks, empirical measurement, and iterative design.

Reflection

With user-centered design, the goal is to create a system that the user would want to use. When reading about this topic, I thought of game design in particular. A gamer wants one thing, and that is to have fun. Of course, there are multiple degrees of fun, and it can be achieved in a plethora of ways depending on the gamer in question. You have to understand the user of the product, what they will do with it, and why they enjoy doing what they do. “Users’ tasks and goals are the driving force behind the development” (Preece, Rogers & Sharp, 2015). In the case of a gamer, their goal in a game could be to beat a boss, to finish a storyline, or to acquire a unique or rare item of value in the game. One article emphasizes that “Designers who adopt the user-centred design philosophy in their daily work pay attention to the user’s goals, and strive to build products that help the user achieve them in an efficient, effective, and satisfactory manner” (Kumar, Herger & Dam, 2019). So if I were designing this game, I would want the bosses to be difficult, but neither unbeatable nor too easy. I would want the storyline, the content of the game, to be worth investing in and not be too short. Finally, I would make sure that every rare and unique item felt like an achievement to the player.

Figure 7. Player Design Chart (Mario Herger, 2019)

My Thought

“With the evolution of video games in particular, it is clear to me that the designers behind many popular titles have, in some way, utilized the user-centered approach to design. I believe this same approach could be effective for almost any interactive product being designed solely for the user” (McMillan, 2019).

Reference List

  1. Kumar, J., Herger, M., & Dam, R. (2019). User-centred Design in a Gamification Context. Retrieved from https://www.interaction-design.org/literature/article/user-centred-design-in-a-gamification-context
  2. Mario Herger, J. (2019). Chapter 2: Player Centered Design. Retrieved from https://www.interaction-design.org/literature/book/gamification-at-work-designing-engaging-business-software/chapter-2-58-player-centered-design
  3. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.

Critical Incident Technique

Blog Eight: Date
3/08/2019

What is the Critical Incident Technique?

The Critical Incident Technique means “to identify specific incidents that are significant, and then to focus on these and analyze them in detail, using the rest of the data collected as context to inform interpretation” (Preece, Rogers & Sharp, 2015). In other words, it is careful observation followed by the recording and reporting of the data and information that observers deem important.

Reflection

When gathering data, an observer will likely record a lot of data through video or audio recording. However, not all of the data collected is usually needed. For example, if I were to observe a user testing an application and noticed that they got confused or delayed their actions for a prolonged period of time, this could be considered a critical incident. The number of times this person blinks, exhales, or makes other little movements throughout this extended period of confusion would not be considered critical. Thus, that data would not need to be reported. Critical incidents “may be identified by the users during a retrospective discussion of a recent event, or by an observer either through studying video footage, or in real time” (Preece, Rogers & Sharp, 2015). Again, whether or not an event is considered a critical incident depends on the judgment of the observers and the activity in question. Critical incident analysis is effective for “improving a very infrequent but important task that otherwise might get ignored by a standard task analysis” (“Usability First – Usability Glossary – critical incident analysis | Usability First”, 2019).
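As a rough illustration of the filtering idea above, an observer working from a timestamped interaction log could flag unusually long pauses as candidate critical incidents. The event names, durations, and threshold here are entirely hypothetical:

```python
# Hypothetical interaction log: (action, seconds spent on it).
events = [
    ("open app", 2.1),
    ("find settings", 4.0),
    ("change password", 31.5),   # prolonged pause: possible confusion
    ("log out", 1.8),
]

# Anything above this duration is treated as a candidate critical
# incident; everything else (blinks, small movements) is ignored.
THRESHOLD_SECONDS = 15.0

critical = [(name, t) for name, t in events if t >= THRESHOLD_SECONDS]
print(critical)  # -> [('change password', 31.5)]
```

In practice the judgment stays with the observer, as the textbook notes; a filter like this only narrows down where to look in the footage.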

My Thought

“As we observe users using various methods and techniques, the goal behind the observation remains the same. With critical incident analysis we narrow down the observation towards what needs to be fixed or improved upon most” (McMillan, 2019).

Figure 6. Analysis (“What is qualitative analysis? Metrics, data and examples”, 2019)

Reference List

  1. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.
  2. Usability First – Usability Glossary – critical incident analysis | Usability First. (2019). Retrieved from http://www.usabilityfirst.com/glossary/critical-incident-analysis/
  3. What is qualitative analysis? Metrics, data and examples. (2019). Retrieved from https://www.rankmyapp.com/market/what-is-qualitative-analysis-metrics-data-and-examples/

Questionnaires

Blog Seven: Date
3/01/2019

What is a questionnaire?

A questionnaire is essentially an interview on paper, given that “they can have closed or open questions but they can be distributed to a larger number of participants” (Preece, Rogers & Sharp, 2015). Often questionnaires are handed out as paper forms or taken online, making them more convenient than in-person interviews.

Reflection

I’ve filled out my share of questionnaires over the years, and they often fall into one of two categories: short and painless, or long and aggravating. The latter usually leaves me giving half-hearted responses and information that may not be accurate. The factors that make a questionnaire aggravating for me are its overall length, complexity, and wording. To reduce the length of a questionnaire, it would be best to ask only for information that pertains specifically to the goal of the study. To reduce any confusion that comes from the questions themselves, answers can be listed in a range. This is also useful in cases where a question asks a user for information they may not feel comfortable giving out, such as how many partners they’ve had or their age. Checkboxes are also an appealing way to present choices to a participant. In general, “it is important that questions are specific; when possible, closed questions should be asked and a range of answers offered, including a ‘no opinion’ or ‘none of these’ option” (Preece, Rogers & Sharp, 2015).
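That guidance can be illustrated with a toy closed question offering ranged answers plus an opt-out choice. The question text and the ranges are invented for illustration:

```python
# A closed question with a range of answers and an opt-out option,
# following the textbook's advice on questionnaire design.
question = "How many hours per week do you play online games?"
choices = ["0", "1-5", "6-10", "11-20", "More than 20", "No opinion"]

def render(question, choices):
    """Render the question as a checkbox list, one choice per line."""
    lines = [question]
    lines += [f"  [ ] {c}" for c in choices]
    return "\n".join(lines)

print(render(question, choices))
```

Ranged choices like these also soften sensitive questions: a participant ticks a bracket rather than writing an exact number, and the "No opinion" option avoids forcing an answer.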

Figure 5. Questionnaire (“Poem: “Questionnaire,” by Wendell Berry”, 2019)

My Thought

“The information I give when filling out a questionnaire is often for something I’m interested in or involved with. That said, I always make sure to provide the most accurate information I can in hopes that it can provide the information needed to improve upon the product of interest.” (McMillan, 2019).

Reference List

  1. Poem: “Questionnaire,” by Wendell Berry. (2019). Retrieved from https://sacompassion.net/poem-questionnaire-by-wendell-berry/
  2. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.

NUI (Natural User Interface)

Blog Six: Date
2/18/2019

What is NUI?

A Natural User Interface is an interface that “enables people to interact with a computer in the same ways they interact with the physical world” (Preece, Rogers & Sharp, 2015). In other words, it allows a user to interact via facial and body gestures and voice recognition rather than through additional input devices.

Reflection

After reading about NUIs, Natural User Interfaces, I realized that I am already surrounded by them. My phone and my tablet are two examples of devices with Natural User Interfaces. Both have assistants with voice recognition and allow me to issue commands and make inquiries. They also have touch screens that react logically to the motion of finger swipes. And with the help of downloaded apps, both devices can use their cameras to recognize facial features and gestures that can serve a multitude of purposes depending on the app. Whether or not a NUI is actually ‘natural’ depends on how easy it is to learn, how complex the interface is, and whether speed or accuracy is needed (Preece, Rogers & Sharp, 2015). The key difference between a GUI and a NUI is that a NUI utilizes machine learning and responds to the user, rather than having a learning curve and being dependent on data input. One site describes a NUI as being “attuned to the user’s intent by design; the user interface is built after first understanding the reason why a user would interact with the system” (“From GUI to NUI/TUI — The Next Step”, 2019). Thinking of my smartphone’s voice recognition, it can be used for more than just telling me what the weather is like. It can carry out functions like updating my schedule, deleting events, and searching for things online. Overall, it is a hands-off experience. My reason for using it is that it provides a simpler and more efficient means of completing a task that I would otherwise have to do manually.

Figure 4. Interface Design: Natural User Interface Design (Исаева & profile, 2019)

My Thought

“Natural User Interfaces are just that, natural. They are more satisfying to interact with, and I do believe that they are significantly more efficient than Graphical User Interfaces.” (McMillan, 2019).

Reference List

  1. From GUI to NUI/TUI — The Next Step. (2019). Retrieved from https://www.cognizant.com/perspectives/from-gui-to-nui-tui-the-next-step
  2. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.
  3. Исаева, М., & profile, V. (2019). Natural User Interface Design. Retrieved from http://interfacedesignnandomo.blogspot.com/2017/03/natural-user-interface-design.html

Anthropomorphism

Blog Five: Date
2/10/2019

What is Anthropomorphism?

Anthropomorphism “is the propensity people have to attribute human qualities to animals and objects” (Preece, Rogers & Sharp, 2015). Examples include naming inanimate objects or illustrating animals with human clothes and personalities.

Reflection

Anthropomorphism has been used for many years in myths, children’s stories, cartoons, and advertisements. In fact, advertisers tend to capitalize on the phenomenon of anthropomorphism by creating “human-like and animal-like characters out of inanimate objects to promote their products” (Preece, Rogers & Sharp, 2015). Many anthropomorphic characters exist in advertisements because of their effect in selling a product or brand. The flashy cereal cartoon characters and the various other dancing, singing, and fast-talking creations of advertising are easily remembered, after all. Other examples include our ever-so-helpful pocket agents. Even smartphones are a source of anthropomorphic characters such as Siri, Alexa, Viv, Cortana, and more. One exception to this trend of humanized AI helpers is Google and its assistant. Personally, I asked my Google Pixel 3, “Hey Google, what’s your name?” It replied with “Google Assistant’s the name, helping you is my game,” followed by a smiley emoji. Some have taken notice of this deviation from the advertising trend: “The premise is Google Home is an appendage to the fundamental Google Search product and the aim is to create a design thesis that is similar to the Google search front page” (Roemmele, 2019). This writer goes on to say that Google’s choice to stay with the generic Google name is likely a poor one for its products. From what I can tell, having a name attached to these AI voices does add a bit of personality and character to the devices they are used on.

My Thought

“To say that having a cartoon tiger in a red ascot or a talking parrot is ineffective in selling sugary cereal to kids would be a lie. It’s clear that taking inanimate objects and animals and morphing them into human-like characters has a deep psychological effect on how we perceive those characters.” (McMillan, 2019).

Reference List

  1. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.
  2. Roemmele, B. (2019). Siri, Alexa, Viv, Google?: Why Google needs to give a name (and personality) to Google Home. Retrieved from https://medium.com/@brianroemmele/why-didnt-google-choose-a-human-like-name-for-google-home-like-amazon-did-for-echo-naming-it-alexa-87872bb220b2

Co-Presence

Blog Four: Date
2/3/2019

Co-Presence

Co-presence is when multiple shareable interfaces “have been developed to enable more than one person to use them at the same time” (Preece, Rogers & Sharp, 2015). It simulates being around others without actually being near them.

Reflection

Devices that allow for co-presence interaction are abundant in people’s lives. Smartboards, tablets, and game console accessories allow people to remotely interact with virtual objects at the same time as others. The purpose of this form of interaction in many products is to enhance collaboration between two or more people. Any collaborative activity requires both coordination and awareness among the members of a group. So the question is, how well do shareable interfaces support group coordination and awareness? One study examined the effectiveness of co-presence in 3D learning spaces. In virtual environments, “individuals experience different levels of co-presence depending on the psychological involvement of the individuals and the amount of realism of the avatars” (Hassell, Gayal, Limayem & Boughzala, 2011). The study ultimately showed that “co-presence did not have a significant effect on satisfaction” (Hassell, Gayal, Limayem & Boughzala, 2011), and there was no significant difference between face-to-face presence and co-presence in terms of learning effectiveness.

Figure 3. Co-presence interaction and virtual learning (Kevin Baker, 2019)

My Thought

“With virtual reality headsets and even full-body immersion suits, co-presence interaction is likely to become more popular. Though I don’t see it being used in place of traditional face-to-face interaction when it comes to education.” (McMillan, 2019).

Reference List

  1. Kevin Baker. (2019). Game-Based Learning Market Innovative and Trending Forecast 2023- RallyOn, BreakAway, Sava Transmedia, Lumos Labs [Image]. Retrieved from http://industrynewsdesk.com/2019/01/20/game-based-learning-market-innovative-and-trending-forecast-2023-rallyonbreakawaysava-transmedialumos-labslearningwaremak-technologiesvisual-purpleplaygen-comcorporate-gameware/
  2. Preece, J., Rogers, Y., & Sharp, H. (2015). Interaction Design: Beyond Human-Computer Interaction. West Sussex, United Kingdom: John Wiley & Sons, Ltd.
  3. Hassell, M., Gayal, S., Limayem, M., & Boughzala, I. (2011). Effects of Presence, Copresence, and Flow on Learning Outcomes in 3D Learning Spaces [Ebook] (pp. 3,5). Retrieved from https://files.eric.ed.gov/fulltext/EJ1056411.pdf