
Principles of Designing for Humans on edX

2) Principles of Designing for Humans

Retrieval Practice

This was another class where I attempted to use an idea similar to retrieval practice while studying the content. Essentially, I would watch a video all the way through and then try to write down anything that I remembered. I would then rewatch the video, stopping it to take notes. I found this helpful, but it did take more time to “get through” the material. Midway through the week, I switched to just stopping the video and taking notes, then picked the process up again in the following weeks. Here are some of my takeaways from the material.

Week 1: Visual Perception and Memory

The visual perception of the human brain is limited. Central vision covers only about five degrees, and it is the only part of the visual field in full focus. Peripheral vision is too blurry for reading, so the eyes must jump around the page. Humans scan pages in a series of eye fixations; the rapid jumps between fixations are called saccades (pronounced “sa-kads”). We don’t usually look at the whole page; we look at what is most important for completing a specific task. Just because something is on a page does not mean it will be read.

Based on research by Jakob Nielsen, the most common pattern of reading on the web is the F shape (for left-to-right languages). Some important principles to remember are:

  • Make important info and actions visible: if something is not visible, users can’t interact with it.
  • Leverage how users read: am I using these patterns?
  • When evaluating, ask yourself: Did they see it? Can they find it? Did they misunderstand what it meant?

Visual psychology suggests several stages of visual perception, moving from photons to images. The three stages are:

  1. Features
  2. Patterns
  3. Recognizable Objects

According to the course, the Gestalt School of Psychology studied the way pattern perception works and created a set of principles describing how the eye breaks up the visual field. These principles are:

  • Proximity (items near each other are seen as associated)
  • Closure and continuation (we fill in gaps with likely interpretations, and context helps: “bk” can be read as “back” in the sentence “be right back”)
  • Symmetry (symmetrical elements are seen as a single object)
  • Similarity (similar things are seen as part of a group)
  • Common area (items that share a common boundary are seen as a group)
  • Common fate (items that move together are seen as a group)

Memory

Moving from visual perception to memory requires attention, and then moving information from short-term to long-term memory. Our short-term memory is limited: according to Miller’s Law, the magic number is 7 ± 2 items (Miller, 1956). Cowan later reduced this estimate to 4 ± 1 (Cowan, 2010). Anything that isn’t retained is lost. Learning happens when we move something from short-term to long-term memory.

Short-term memory (and attention) is limited. Here are some UX principles based on this idea (a small code sketch follows the list):

  • Keep lists of options short: stay within the range of the magic numbers.
  • Give users tools for reducing options.
  • Make options easy to compare against each other.
  • Don’t expect users to remember information from one screen to the next.
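
As a toy illustration of keeping option lists short, here is a minimal Python sketch (my own, not from the course) that chunks a long menu into groups of at most four items, in line with Cowan's 4 ± 1 estimate; the menu labels are made up:

```python
def chunk_options(options, max_per_group=4):
    """Split a long list of options into short groups.

    Keeping each group to roughly four items stays within Cowan's
    4 +/- 1 estimate of short-term memory capacity.
    """
    return [options[i:i + max_per_group]
            for i in range(0, len(options), max_per_group)]


menu = ["Home", "Profile", "Messages", "Settings",
        "Billing", "Reports", "Help", "Logout"]

for group in chunk_options(menu):
    print(group)
# ['Home', 'Profile', 'Messages', 'Settings']
# ['Billing', 'Reports', 'Help', 'Logout']
```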

There are several ways to transfer something to long-term memory; these include association and repetition.

Principles

Learning will work better if the learner can fit the new material into a schema

  • Use metaphors: an example is the shopping cart on Amazon.
  • Leverage standards and consistency: with a familiar menu structure, what users learn in one system transfers to the next.
  • Avoid asking users to memorize information

Prefer recognition over recall

  • Search terms are something users have to recall and create on their own, but Google’s autocomplete turns this into recognition by suggesting terms as you type.
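
To make the recognition-over-recall point concrete, here is a minimal Python sketch (my own illustration, not Google's implementation) of prefix-based suggestions over a small, made-up set of previously seen search terms:

```python
def autocomplete(prefix, known_terms, limit=5):
    """Suggest previously seen terms that start with the typed prefix.

    Showing suggestions lets users recognize a term instead of having
    to recall and type it in full.
    """
    prefix = prefix.lower()
    matches = [term for term in known_terms if term.lower().startswith(prefix)]
    return sorted(matches)[:limit]


terms = ["usability testing", "usability heuristics", "user research",
         "gestalt principles", "gulf of execution"]

print(autocomplete("us", terms))
# ['usability heuristics', 'usability testing', 'user research']
```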

These ideas come from cognitive psychology. I really like the principles they provide, but I’m not sure we have all the answers about how the mind and senses work.

Week 2: Norman’s Model of Action, the Gulfs of Execution and Evaluation, and Design Principles

In Don Norman’s book The Design of Everyday Things, he describes a seven-stage model of the actions anyone takes to accomplish an outcome. The steps are:

  • Determine a goal
  • Choose a path
  • Act
  • Look at the world for feedback
  • Determine if you were successful
  • Act again
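
As a rough way to see this cycle, here is a small Python sketch (my own framing, not Norman's) that walks one loop of goal, action, feedback, and evaluation, using a made-up thermostat example:

```python
def act_toward_goal(world, goal, choose_action, max_steps=10):
    """A loose rendering of the action cycle: choose an action, act on
    the world, read the feedback, and check progress toward the goal."""
    for _ in range(max_steps):
        action = choose_action(world, goal)   # choose a path
        world = action(world)                 # act
        feedback = world                      # look at the world for feedback
        if feedback == goal:                  # determine if you were successful
            return world                      # goal reached
    return world                              # otherwise we kept acting until the step limit


def raise_temp(temp):
    return temp + 1


def lower_temp(temp):
    return temp - 1


def choose(temp, goal):
    # Naive policy: nudge the temperature one degree toward the goal.
    return raise_temp if temp < goal else lower_temp


print(act_toward_goal(world=18, goal=21, choose_action=choose))  # 21
```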

Elaborating on Norman’s model of action, the model can be divided into two parts: the actions the user takes to change the world, and the feedback the user gets from the world about those actions.

  • The gulf of execution is the gap between what the user wants to do and the actions the system makes available; the user has to figure out how to carry out the action in the world.
  • The gulf of evaluation is the gap between the signal the world or system provides back after that action and the user’s understanding of it: did this action get me closer to or further from my goal?

Discoverability is the way we bridge the gulfs of execution and evaluation.

  • Execution: users can figure out (discover) what actions are possible
  • Evaluation: users can discover whether the actions were successful

Principles supporting discoverability

  • Affordances: the feature of an object or environment that indicates the possibility of action.
  • Signifiers: signals that communicate what will happen if you take an action.
  • Feedback: users need to know that the system heard them and will do something about their action.
  • Constraints: limit the user to only certain actions.
  • Conceptual models: let users simulate future actions because they understand how the system is set up.
  • Consistency: what users learn in one place they can apply in another.
  • Metaphor: rapidly communicates an idea about a function.
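
Here is a small, hypothetical Python sketch (my own, not from the course or Norman) of how a few of these principles, namely signifiers, constraints, and feedback, might show up in a toy UI widget:

```python
class SubmitButton:
    """Toy widget illustrating a few discoverability principles."""

    def __init__(self, label="Submit order"):
        self.label = label       # signifier: tells the user what the action will do
        self.enabled = False     # constraint: the action is unavailable until the form is valid

    def validate(self, form):
        # Constraint: only a fully completed form unlocks the action.
        self.enabled = all(form.values())
        return self.enabled

    def click(self):
        if not self.enabled:
            # Feedback on a blocked action.
            return "Button is disabled: complete the form first."
        # Feedback that the system heard the user.
        return f"'{self.label}' clicked: order received."


form = {"name": "Ada", "email": ""}
button = SubmitButton()
button.validate(form)
print(button.click())   # Button is disabled: complete the form first.

form["email"] = "ada@example.com"
button.validate(form)
print(button.click())   # 'Submit order' clicked: order received.
```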

Week 3: Nielsen’s Heuristics for Design

There are lots of usability guidelines, but the downside of guidelines is that they are very detailed and very specific. For example, Usability.gov has around 300 guidelines. Created by Nielsen (1994), heuristic evaluations offer an alternative to formal usability testing and are “cheap, fast, and easy to use” (p. 24). Heuristic evaluations are a discount method for highlighting usability issues in a user interface so they can be dealt with as part of the iterative design process (Nielsen, 1994).

Jakob Nielsen’s 10 heuristics cover the most important areas to consider and are generalizable. His list is derived from a systematic review of usability problems, is intended to be small and complete, can be taught in a few hours, and is well supported by theories of perception and cognition. You can read them in 10 Heuristics for User Interface Design.

Week 4: Heuristic Evaluation and Report

Most heuristic evaluations include several segments. The initial segment involves training evaluators to understand Nielsen’s heuristics. The evaluators then conduct their analysis during the second segment. The analysis is followed by a debrief where the evaluators discuss their findings with each other. In the fourth and final segment, evaluators assign severity ratings to the usability problems (Nielsen, 1994, p. 38).
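
To keep track of findings during an evaluation like this, something along the lines of the following minimal Python sketch would work; the issue text, heuristic label, and ratings are made up for illustration, and I have assumed Nielsen's 0-4 severity scale:

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class UsabilityIssue:
    """One finding from a heuristic evaluation."""
    description: str
    heuristic: str                                 # which of Nielsen's heuristics it violates
    ratings: list = field(default_factory=list)    # severity ratings (0-4) from each evaluator

    def severity(self):
        # Aggregate the individual ratings, e.g. after the debrief segment.
        return round(mean(self.ratings), 1) if self.ratings else None


issue = UsabilityIssue(
    description="No confirmation after posting to the discussion board",
    heuristic="Visibility of system status",
)
issue.ratings.extend([3, 2, 3])    # ratings from three hypothetical evaluators
print(issue.heuristic, "- severity", issue.severity())
# Visibility of system status - severity 2.7
```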

The Designing for Humans course included most but not all of these segments. I was trained on Nielsen’s heuristics, conducted an analysis of the edX discussion board, and individually assigned severity ratings to the issues. A debriefing on the issues occurred asynchronously in the form of peer reviews of usability problems with other evaluators. The only one of Nielsen’s segments not included in this evaluation is a collaboration among a team of evaluators to agree on the issues together and then assign a severity score. 

You can read my report below.

Report Cover

What is Systemic-Functional Linguistics?

In Steinkuehler, C. A. (2006). Massively multiplayer online videogaming as participation in a Discourse. Mind, Culture & Activity, 13(1), 38-52, the author uses functional linguistics as a way of analyzing the utterances of one of the players of Lineage. But what is systemic functional linguistics, and where did she get the categories she uses, interpersonal and ideational semantics? The Systemic Functional Linguistics definition page provides a summary quote from this group: http://www.isfla.org/Systemics/definition.html

“A central notion is ‘stratification’, such that language is analysed in terms of four strata: Context, Semantics, Lexico-Grammar and Phonology-Graphology.

Context concerns

  • the Field (what is going on),
  • Tenor (the social roles and relationships between the participants),
  • the Mode (aspects of the channel of communication, e.g., monologic/dialogic, spoken/written, +/- visual-contact, etc.).

Systemic semantics includes what is usually called ‘pragmatics’. Semantics is divided into three components:

  • Ideational Semantics (the propositional content);
  • Interpersonal Semantics (concerned with speech-function, exchange structure, expression of attitude, etc.);
  • Textual Semantics (how the text is structured as a message, e.g., theme-structure, given/new, rhetorical structure etc.).

The Lexico-Grammar concerns the syntactic organisation of words into utterances. Even here, a functional approach is taken, involving analysis of the utterance in terms of roles such as Actor, Agent/Medium, Theme, Mood, etc. (See Halliday 1994 for full description).”
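
To keep these categories straight for my own analysis, here is a small Python sketch (my own organization of the quoted definitions, not anything from SFL or Steinkuehler) modeling the context and semantics strata as data structures, with a made-up game-chat example:

```python
from dataclasses import dataclass


@dataclass
class Context:
    """The context stratum, per the ISFLA summary."""
    field: str    # what is going on
    tenor: str    # social roles and relationships between the participants
    mode: str     # channel of communication (spoken/written, monologic/dialogic, ...)


@dataclass
class Semantics:
    """The semantics stratum, divided into three components."""
    ideational: str      # propositional content
    interpersonal: str   # speech function, exchange structure, attitude
    textual: str         # how the text is structured as a message


# Hypothetical tagging of a single in-game utterance.
context = Context(
    field="coordinating an in-game hunt",
    tenor="guild leader addressing guild members",
    mode="written, dialogic, no visual contact",
)
semantics = Semantics(
    ideational="proposes meeting at the dungeon entrance",
    interpersonal="gives a directive while maintaining solidarity",
    textual="theme is the meeting place; the time is the new information",
)
print(context.tenor, "|", semantics.interpersonal)
```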

Steinkuehler, C. A. (2006). Massively multiplayer online videogaming as participation in a Discourse. Mind, Culture & Activity, 13(1), 38-52.

Notes from reading:

Psychology has moved from the 1950s, when it considered only behaviorist concerns of stimulus and response, to including “symbolic processing theory,” which places a mind between the stimulus and the response.

Cognition has also changed to consider the context in which the mind works. “Despite the internal diversity, researchers working under these paradigms have shared a view of cognition as (inter)action in the social and material world.” (p. 2)

“Yet, work in functional linguistics demonstrates that all language-in-use functions not only as a vehicle for conveying information but also, and equally as important, as part and parcel of ongoing activities and as a means for enacting human relationships (Gee, 1999). To take a simple example, consider the statement “Mistakes were made” versus “I made mistakes.” In the first utterance, I am engaging in an “information-giving” activity that foregrounds the ideational and shrouds agency. In the second, I am engaging in an “apology-giving” activity that foregrounds my responsibility for whatever conundrum occurred and does repair work on my social relationships with whoever my audience may be.” (p. 2)

Steps in Gee’s Discourse Analysis in The ‘no problem’ Discourse model: Exploring an alternative way of researching student learning

One of the things I’m noticing from Gee’s work is that he does not lay out in one place the specific steps involved in doing a Discourse analysis. From Case, J. M., & Marshall, D. (2008). The ‘no problem’ Discourse model: Exploring an alternative way of researching student learning. International Journal of Educational Research, 47(3), 200-207. http://dx.doi.org/10.1016/j.ijer.2008.01.008
(http://www.sciencedirect.com/science/article/pii/S0883035508000402)
it appears these are the steps the authors took:

  1. Created a ‘broad’ transcription style (Gee, 2005), with limited coverage of detailed speech features such as pauses or tone. In analysing the transcripts we followed Gee’s approach of breaking the texts up into lines and stanzas in order to make the logic inherent in a text more apparent. The solid blocks of transcribed texts were taken apart so that each ‘line’ in a transcript consists of a unit of speech that ‘usually contains only one main piece of salient information’ (Gee, 2005, p. 125). These lines have then been grouped into sets of lines, which Gee refers to as ‘stanzas’, each stanza being a set of lines ‘devoted to a single topic, event, image, perspective or theme’ (Gee, 2005, p. 127). (A rough sketch of this step in code follows the list.)
  2. An initial content analysis across all the data resulted in the identification of a number of themes. Students identified this as a particularly ‘good’ course, and spoke highly of the lecturer and the way he interacted with students and ran the classes. Most students talked about the importance of working together with other students. Many had experienced failure in the past but were confident that they were coping well in this course. They noted some challenges, particularly with regard to a high workload across all courses, but felt that they had things under control.
  3. We then used these themes to begin to form what Gee terms ‘hypotheses’ about the Discourse model(s) in operation in this context. We tentatively identified a distinct Discourse model and then went back to the transcripts to check whether this model was discernable across a significant proportion of the data. A preliminary set of findings was presented to the participants in the study, and their responses indicated a strong sense of the ‘recognisable reality’ of the perspectives that the research offered. Thereafter, the characterization of the model was further refined by iterative analyses of the texts, until it was judged that sufficient coverage and convergence had been achieved.
  4. In this process, we were able to discern certain macrostructural features that seemed to be characteristic of the Discourse model. We also conducted a fine-grained analysis of the data, looking at how the model was represented linguistically, bearing in mind that the details of linguistic structure are a crucial element in establishing the validity of a discourse analysis. These two levels of analysis will now be presented in turn.
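
Step 1 is the most mechanical, so here is a minimal Python sketch (my own approximation, not Case and Marshall's actual procedure) of breaking a transcript into lines and then grouping consecutive lines into stanzas using topic labels assigned by hand; the transcript lines are invented:

```python
from itertools import groupby

# Each tuple pairs a hand-assigned topic with one 'line' of speech that
# carries a single salient piece of information (Gee, 2005).
transcript = [
    ("the course", "this course is really good"),
    ("the course", "the lecturer explains things clearly"),
    ("working together", "we always do the tutorials in a group"),
    ("working together", "if I get stuck someone else can usually help"),
    ("workload", "the workload across all the courses is heavy"),
]

# Group consecutive lines on the same topic into a 'stanza' (Gee, 2005).
stanzas = [(topic, [line for _, line in lines])
           for topic, lines in groupby(transcript, key=lambda pair: pair[0])]

for topic, lines in stanzas:
    print(f"Stanza - {topic}")
    for line in lines:
        print(f"  {line}")
```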

I’m following up with the book reading to see if I’m missing any pieces.

Facsimiles in higher resolution

Just found a nice website with higher resolution versions of the facsimiles posted by Nathan Richardson! At some point I would like to use Rhodes’ translation and commentary to overlay the facsimiles with their hieroglyphic transcriptions: http://home.comcast.net/~michael.rhodes/JosephSmithHypocephalus.pdf I will likely attempt this after I complete my dissertation. Until now, I unfortunately could not find larger versions of the facsimiles online. Thanks to Nathan Richardson for posting these.

http://nathanrichardson.com/2011/09/the-book-of-abraham-facsimiles-in-high-resolution/

I’ve added a link to his website for future reference.

 

Characteristics of Case Study Research According to Creswell

  • Identify a specific case (p. 98): e.g., special education teachers or speech-language pathologists in a rural district.
  • Intent of the case (p. 98): either intrinsic (something is generally interesting) or instrumental (to understand an issue, problem, or concern).
  • The point of a case study is to provide an in-depth understanding of the case, built from multiple data sources. In my case, I’m looking to understand how special educators’ educational philosophies impact their perceptions of activities in a PBL MUVE.
  • Data analysis: differs from case study to case study; multiple cases can be compared. In my study, I think each participant is a case.
  • The study needs a description of the case (p. 99), as well as themes, issues, or specific situations within the case to study.
  • Themes or issues are analyzed across cases for similarities or differences among the cases (p. 99).
  • I think mine is a collective case study (p. 99): I’m looking at multiple individual teachers to better understand the issue.

Creswell’s (2013) Qualitative Inquiry & Research Design

I’ve just finished reading Chapter 2, Philosophical Assumptions and Interpretive Frameworks. How I wish I could have read this book as part of my coursework in qualitative research. Oh well, better late than never! This chapter helped me to identify my own interpretive framework as a mix of social constructivism and pragmatism. It also helped me to clarify my views on ontology (the nature of reality), epistemology, and axiology, and how all of these tie to the methods in my research. As far as ontology goes, I agree with the idea that reality is a picture of multiple perspectives best seen through a variety of views, which links to my choice of conducting a case study. In terms of epistemology, I think the best way of arriving at knowledge is in the field, getting a picture of things as they are experienced by my participants from a variety of data sources. Axiology is a new term for me; it focuses on the role of values in the research, and it fits with my use of discourse analysis. I’m interested in social presence, OpenSimulator, and similar tools because I have experienced them myself and found them powerful in terms of my feeling of connection to others in virtual worlds and the feeling of self-direction and responsibility for learning in problem-based learning. These three assumptions tie to my use of case study and, via pragmatism, to multiple methods, including discourse analysis and the constant comparative method.

It looks to be a really helpful book for my dissertation. There is a good quote on page 19 from Huff (2009) about why philosophy is important: it helps formulate how problems, methods, and results are determined and what counts as good research. Page 20 has a good admission that educators are generally eclectic in their use of different ideas and theories.