Author Archives: Max Wilson

Designing for Learning


Max Wilson


E-Learning is already a large and booming industry. With a wealth of software options and providers in the individual, academic institution, and corporate training spaces, what are the core elements behind e-learning technology writ large? This paper approaches the question through an overview of learning norms and case studies of a basic language learning program, an academically oriented course delivery company, and a corporate skills training platform. Analysis reveals that each of these is a different iteration of a Learning Management System. Differentiated primarily by content, the unifying elements of content delivery and management through SCORM offer companies and users a plethora of options for the combinatorial design of applications and platforms tailored directly to customer needs.


“The Return on Investment promise of higher education is gone.”  Melissa Bradley, Adjunct Professor, McDonough School of Business, speaking at the McGowan Symposium on Business and Ethics at Duke University, November 9, 2018.

Traditional 4-year higher education programs are being scrutinized for their applicability in a world where workers need to continually learn and adapt to new challenges alongside the technological tools that are changing how we live, work, and play. High sticker prices are being met with skepticism when the compensation of the jobs they lead to pales in comparison to that of self-taught “basement” programmers. As concerns about affordable access to relevant education have grown, so have the availability of computers, the level of internet access, and the power and affordability of content hosting and delivery services. In 2012, two Stanford professors experimented with recording their classes and releasing the videos online. Shortly afterwards they left their teaching roles to start Coursera, an online platform with the explicit purpose of sharing the best university courses with the world at large.[1]

Coursera was revolutionary for the high-profile institutions and professors behind the courses being taught; the fundamentals of the service, however, were nothing groundbreaking. Distance learning is not a new concept, dating back to the mail-based correspondence programs of the nineteenth century.[2] Nor was computer-based learning new: Rosetta Stone was providing customers with boxed sets of CD-ROMs in 1992 that promised immersive language learning through the newest home appliance, the personal computer.

Over the past decade, those seeking to learn new skills for work or life have faced an ever-growing number of options, formats, and providers to choose from. Furthermore, these e-learning solutions for individuals represent just a sliver of a marketplace dominated by corporate buyers seeking to onboard new hires efficiently, offer employees cost-effective and convenient professional development, and upskill workers as new technologies continue to challenge existing skillsets.

From a business or social standpoint, these e-learning solutions simply repackage age-old content and teaching styles into a new medium for consumption. Since the final product is minimally differentiated from the service it replaces, are the multitude of platforms and systems in the marketplace truly different from one another? Below, I examine three examples of e-learning systems that, from a user perspective, constitute very different use cases. By deblackboxing these different software platforms I expect to find a very consistent set of modular technical building blocks that are fundamentally simple to develop, select from, and recombine in a combinatorial manner. I believe a thorough understanding of the core elements of these Learning Management Systems will empower me in my future work to help clients select their ideal providers and design for success in their workforce development programs.

Professional Relevance and Market Insight

Through gathering sources and researching the technical elements of these platforms, I found no true market leader in the Learning Management System space. Instead, I found a great many extremely similar platforms, making even more similar claims of valuable user outcomes, filled with the most buzz-worthy of terms: “artificial intelligence,” “machine learning,” and “innovative.” The preponderance of relatively equal market options, particularly for business-oriented products, tells me three things about the nature of LMS software:

  1. The fundamental technical modules that comprise an LMS are relatively simple, easy to build, and do not represent a significant hurdle to the development of a market ready LMS.
  2. LMS platforms meet basic customer needs adequately, but are largely not differentiated in the user experience they provide, otherwise one would be likely to rise to the top of the market.
  3. Companies using an LMS desire enough specifically tailored features that designing a universally applicable, Off-The-Shelf (OTS) LMS is very difficult.

In my post-graduate career as a Digital Strategy Change Management Consultant, I will be involved in the creation of content for client LMSs, the selection and implementation of externally produced LMSs, and the design of internally created LMSs. A thorough understanding of the modular elements of LMS software will help me improve upon the existing approaches to LMS software currently on the market and those being developed for proprietary use by e-learning companies or internal training departments.

Comparative Case Studies

Digital Artifacts, Learning Interfaces, and Cognition

The three e-learning software platforms I will address are simply the most recent iteration of educational methods and environments. Early human life was full of experiential lessons. Over time we developed language to communicate immediate threats and advance collective security through the sharing of lessons experienced by others in the community. Even as we were using language to communicate, we were also depicting our experiences in the form of cave paintings and other visual symbolic representations of the world around us. There is much debate around the purpose of these symbols, but a common theory holds that some were likely used to communicate important information about the world: which animals were of value or posed a threat, the timing of seasons, family lineages, etc. This early development of visual learning is a crucial precursor to modern e-learning.[3]

E-Learning rests on a foundation of teaching techniques that have their roots in the early communication of risks, history, and the lessons of nature. Teaching psychology singles out three distinct types of learning and learners: visual, tactile, and auditory. As research on learning styles has become more widely known, classroom teachers have worked to adapt their curricula to find a happy compromise among all three, ensuring every student receives some portion of learning in their best style.[4] Fundamental to learning through a digital interface is participation with the content. Icons and images support visual learning; the need to select answers on the screen or type out responses provides some degree of tactile engagement; and the videos, audio support, and use of “interaction sounds” – clicks, dings, etc. – engage auditory learners.[5]

In addition, the affordances of the digital medium, as defined by Janet Murray, seem to align well with providing learners on digital platforms elements suitable to each learning style. As we will see, each of the following platforms accesses and delivers (1) encyclopedic knowledge, requires (2) spatial interpretation of the icons, images, and elements used in the lesson, follows (3) procedures of learning familiar to a broad spectrum of users, and engages the user in (4) participatory interaction with the content in order to move through lessons.[6] Given the affordances inherent in the medium of these platforms, let us see how they differentiate themselves from one another from a user perspective, and, more importantly, how they do or do not differ within their separate black boxes.

Duolingo

As a great example of nothing being “new,” Duolingo is the modern, mobile, lightweight kid cousin of the pioneer of computer-based foreign language training, Rosetta Stone. Duolingo offers English-speaking users the ability to study any of 32 languages – including two fictional languages, Klingon and High Valyrian – and offers non-English-speaking users the ability to learn English, as well as a few select other languages depending on their native tongue. The process of learning is one of context-based vocabulary acquisition through repetition and translation. While the effectiveness of the platform for true language learning is highly debated, it is nonetheless a popular piece of mobile software for engaging in a form of language learning. So how does this software deliver lessons to users?

One hint at the inner workings of the app is how the structure of lessons is displayed and made available to users. Each lesson “group” is displayed as an element of a tree of groups. The tree hierarchy shows the evolution of the lessons from one group to the next, visually hinting at the progressive nature of the education method. Additionally, only the immediate next tier of lessons, along with all previously completed lesson groups, is available for users to work through. This provides a clear structure to the training program by preventing users from jumping ahead without first going through the introductory lessons. All told, this progressive hierarchy of lessons suggests a similar hierarchy on the backend of the program.

It is useful to think about what this sort of language training would look like in the analog world to understand how it is built and organized behind the scenes.

In a physical environment, the Duolingo lessons are most similar to successive sets of themed flash cards. Each lesson takes the user through a discrete set of new words and phrases, in a predefined order that slowly builds the user’s familiarity with new elements. As the user progresses, these elements are combined into more complicated elements that introduce new concepts. Even as the content evolves, the method of delivery remains the same. Users are presented with a phrase and several translation options to choose from, a phrase and a keyboard to fill in the missing elements, or simply an array of word tiles on which they must tap matching pairs of words and their English equivalents. All of these elements are simply digital versions of a deck of flashcards, supplemented by the fill-in-the-blank and translation exercises one would expect from an entry-level classroom language course.

Given this modularized lesson format, we can envision a rather simple artifact-based hierarchical database working behind the scenes. Each lesson group must have an associated set of digital artifacts, our flash cards, with individual lesson elements encoded into each. These lesson elements are queued up and presented to the user as they work through a lesson set. If a user makes a mistake and provides an incorrect answer, that individual lesson element is “reshuffled” into the remaining “cards” and presented to the user again before they complete the lesson.
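
The reshuffle behavior just described can be sketched in a few lines. This is an illustrative reconstruction, not Duolingo’s actual code; the card format and the random-reinsertion policy are assumptions.

```python
import random
from collections import deque

class LessonSession:
    """Illustrative sketch of a flash-card lesson queue: missed cards are
    reshuffled back into the remaining deck until every element has been
    answered correctly (card format and policy are assumptions)."""

    def __init__(self, cards):
        self.queue = deque(cards)   # the ordered "flash cards" for this lesson
        self.completed = []

    def run(self, respond):
        """Drain the queue, reinserting any card answered incorrectly."""
        while self.queue:
            card = self.queue.popleft()
            if respond(card) == card["answer"]:
                self.completed.append(card)
            else:
                # wrong answer: reshuffle the card into the remaining deck
                self.queue.insert(random.randint(0, len(self.queue)), card)
        return self.completed
```

A lesson completes only when every card has been answered correctly at least once, mirroring the in-app behavior.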

This retrieval system is a straightforward preprogrammed arrangement of elements, functioning similarly to pages in a book. From a content development standpoint, the biggest hurdle for Duolingo would have been the creation of the first set of language lessons: the ordering and presentation of the vocabulary and lesson elements. Once one language was complete, the sequential hierarchy of elements could be repurposed through roughly direct translation into each subsequent language offering. While limited in scope of content, this content retrieval and delivery mechanism (selection of content by the user, presentation through a mobile or desktop interface, and sequenced completion of a discrete set of procedural elements) is the fundamental system of processes behind all of the e-learning platforms reviewed here.

Duolingo also provides a simplified example of a more complex operation performed by our next two cases: the Learning Management System, or LMS. One of the affordances of providing content through a digital system that is not offered by traditional book-based individual learning is the ability to track and certify progress toward an objective in a dynamic manner. Users, as identified by their username and login credentials, become discrete tracked elements in the Duolingo LMS. As users complete lessons, a record of completion is made in their user accounts. These completion “notes” indicate to the system which lesson groups should remain open to the user the next time they log in, and serve as the keys users need to access the next set of content in the lessons. Duolingo takes this tracking a step further into the user experience, making it an element of its method of gamification.
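
The completion-record “keys” can be modeled as a prerequisite check over the lesson tree. The group names below are invented purely for illustration; this is an assumed design, not Duolingo’s.

```python
# Hypothetical lesson tree: each group maps to the groups that must be
# completed before it unlocks (group names are invented for illustration).
LESSON_TREE = {
    "basics_1": [],
    "basics_2": ["basics_1"],
    "phrases":  ["basics_2"],
}

def unlocked_groups(completed):
    """Return every group whose prerequisites all appear in the user's
    completion records -- already-finished groups stay open, and exactly
    the next tier of the tree becomes available."""
    return [group for group, prereqs in LESSON_TREE.items()
            if all(p in completed for p in prereqs)]
```

With an empty record only the root group is open; completing it unlocks the next tier while leaving the finished group replayable.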

When users sign up for the software, they indicate a desired amount of progress they would like to make in their language learning each day. Upon completion of each sub-lesson, they receive positive feedback in the form of a dial increasing towards their daily goal. Gamification is a fundamental element used by e-learning software to enhance stickiness with users, ideally increasing user engagement with the software and thereby improving outcomes. Duolingo not only incorporates this gamified element of daily achievement into the user experience while users are engaged in the app but also capitalizes on the user’s indicated commitment to remind them, via pop-up notification on their mobile device, to return to the app and complete their lessons for the day.
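
The daily-goal dial and the reminder trigger can be sketched together as a tiny piece of state. The point values and attribute names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DailyGoal:
    """Sketch of the daily-goal 'dial' (point values are illustrative)."""
    target_xp: int = 20
    earned_xp: int = 0

    def record_lesson(self, xp):
        """Add a completed sub-lesson's points; return the dial fill (0.0-1.0)."""
        self.earned_xp += xp
        return min(self.earned_xp / self.target_xp, 1.0)

    @property
    def goal_met(self):
        return self.earned_xp >= self.target_xp

    @property
    def should_remind(self):
        """Would drive the end-of-day push notification if the goal is unmet."""
        return not self.goal_met
```

The same record that fills the dial in-app doubles as the signal for the pop-up reminder, which is the sense in which tracking and gamification are one mechanism.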

Next, we will see how the fundamental LMS elements that make Duolingo successful provide the foundation for seemingly more in-depth e-learning solutions.

Coursera

While Duolingo gamifies language learning to keep users engaged, cutting out traditional classroom elements such as lectures and assignments, Coursera leans into the value of the classroom experience. With a stated purpose of bringing the best university classes to students at every stage and place in life, Coursera mimics the in-class learning experience, including videos of lectures, assignments, peer discussion, and exams. With over 150 university partners, Coursera’s content offerings are its source of differentiation amongst e-learning companies. However, while the content is interesting, it is not inherent to the technology behind Coursera; rather, it is a product of successful sales and marketing. At its core, Coursera is another example of a software product that is essentially a Learning Management System.

A Learning Management System is the interface between e-learning content developers/administrators and users that serves as a repository and distribution hub for the educational content offered. LMS software also tracks information about user progress through courses, completion of and performance on assignments and exams, and other details about user engagement, including time spent on the platform. While our next case examines a specific corporate-oriented LMS software product, Coursera offers an example of a custom LMS product.

To meet the specific needs of its stakeholders, Coursera has developed its own proprietary LMS, which comprises some of the most common technical modules seen in LMS software:

  1. A repository of course content, sectioned off by content developer, learning topic, and a variety of paywalls.
  2. A portal for the development of content by university partners, with the support of Coursera IT and content teams.
  3. A multimedia hosting and delivery service for efficient storage and consistent streaming of video content.
  4. A communication system for learners registered in the same courses to discuss course content and collaborate on group exercises.
  5. A learner progress-tracking system with a consistent testing mechanism.
  6. A gamification engine that introduces game-like elements into the learner experience to increase engagement and the stickiness of the platform.
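
Treating each numbered item above as an interchangeable component, the combinatorial nature of LMS design can be sketched as simple composition. The module names and methods below are assumptions for illustration, not Coursera’s actual architecture.

```python
class LMS:
    """Toy composition of LMS modules: the platform is little more than a
    bundle of interchangeable parts (names and methods are illustrative)."""

    def __init__(self, repository, tracker, **other_modules):
        self.repository = repository        # module 1: content repository
        self.tracker = tracker              # module 5: progress tracking
        self.other_modules = other_modules  # portal, media, forum, games...

    def deliver(self, user_id, course_id):
        """Fetch a unit of content and record that the user accessed it."""
        content = self.repository.fetch(course_id)
        self.tracker.record(user_id, course_id)
        return content
```

Because every module sits behind a narrow interface, swapping a repository or tracker implementation leaves the rest untouched, which is exactly why so many near-identical LMS products can exist.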

The value of using a proprietary LMS to manage the delivery of university content to learners is the creation of a consistent user experience across all courses. Through consistent supporting elements to the content, users develop a relationship and familiarity with the Coursera way of teaching. However, not all academic endeavors lend themselves to logically ordered learning and evaluation, making electronic delivery of diverse subjects challenging.

Coursera’s initial offerings were focused on math and the computer sciences, subjects well suited to evaluation in an empirical manner. Assignments and exams in these areas can be developed and administered as multiple-choice or fill-in questions with a limited number of answers (e.g., 2 × 4 = 8). However, higher education is not limited to STEM, so universities and students interested in Coursera understandably expected access to a more holistic range of courses. This demand posed a new problem for the Coursera model: how to grade the assignments of 30,000+ students per course when “Barack Obama,” “Obama,” and “the President” were all acceptable answers?
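
The limits of empirical grading show up quickly in even a naive short-answer autograder sketch (the normalization rules and answer list here are assumptions): it can handle capitalization and punctuation, but any unanticipated phrasing slips through.

```python
def autograde(response, acceptable):
    """Naive short-answer grader: normalize case, whitespace, and trailing
    punctuation, then check against a whitelist of acceptable answers."""
    normalized = " ".join(response.lower().split()).strip(".,!?")
    return normalized in acceptable

# Illustrative whitelist for the example question in the text.
ACCEPTABLE = {"barack obama", "obama", "the president"}
```

`autograde("Obama.", ACCEPTABLE)` passes, but a perfectly correct “President Obama” fails the whitelist, which is precisely the gap that pushed Coursera toward human graders.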

Instead of waiting for, or developing in-house, the machine-based language analysis required to assess open-ended writing prompts, Coursera turned to its vast array of students as a potential solution. Building on the platform’s learner communication system, it introduced a peer-to-peer grading system. Facing varying levels of user commitment, challenges in the consistent interpretation of rubrics, and often opaque user incentives, this peer grading element is an area where Coursera continues to iterate. More important for this analysis, however, is how the need to introduce a system for human grading highlights a clear constraint of e-learning.
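
A common mitigation for inconsistent rubric interpretation, sketched here with assumed parameters rather than Coursera’s actual policy, is to require several independent reviews and release a robust aggregate such as the median.

```python
from statistics import median

def peer_grade(scores, min_reviews=3):
    """Aggregate peer-assigned rubric scores. The median damps outliers
    from reviewers who interpret the rubric inconsistently; until enough
    reviews arrive, no grade is released (thresholds are illustrative)."""
    if len(scores) < min_reviews:
        return None
    return median(scores)
```

One strict grader among several reasonable ones barely moves the result, which is the design rationale for a median over a mean.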

Just as language software like Duolingo will likely always fall short in comparison to a real-world language immersion program, computer-based learning will continue to struggle to assess more than discrete, measurable answers. While this shortfall will certainly hamper the value of such technology for teaching complex transferable skills like advanced creative problem solving and human-centered design, the bulk of e-learning applications, exemplified by the next case, competently meet discrete customer requirements.

Lessonly

When most people hear the phrase “corporate training,” an image comes to mind of a bland hotel conference room, filled with rows and rows of chairs facing a drop-down screen featuring a text-heavy PowerPoint. Just as employees dread these boring training sessions, so do their employers, who have to shell out money for trainers, space, and supplies while giving up days of employee productivity. Employee onboarding and training comprise a large part of the e-learning market, and Lessonly is just one example of the software available to companies.

As an Off-the-Shelf LMS, Lessonly offers the same core capabilities as the Coursera LMS: easy, non-technical, drag-and-drop content creation; tracking of learner progress and performance; assessment and feedback; and content hosting. Fundamentally, Lessonly stores, organizes, and retrieves content for learners just like Duolingo and Coursera. Where it differs is in the requirement of its customers – the companies in this case – that it fit into the network of tools and systems they use to run their business.

Lessonly and other OTS LMS options on the market achieve the fit desired by companies through support for third-party plugins. Through plugins for common business-stack applications like Zenefits, Salesforce, and Slack, companies can integrate training with employees’ daily activities. Completion of programs can be tied to sales outcomes in Salesforce, managers can assign trainings to Slack teams to keep everyone up to speed on new practices and procedures, and progress along various career development pathways can feed into performance metrics in Zenefits, the human capital management platform.

Through integration with existing corporate technology stacks, Lessonly functions as just one more modular application within a larger bundle of tools used to manage company operations. What is interesting about Lessonly is not how it is unique but rather how it is the same as so many products on the market. While variants may be focused on managing traditional education environments, sales and customer service training, or presented as an alternative to traditional education entirely, they all provide the same content creation, management, and delivery elements described above. So how does such a universal set of features actually work?

Sharable Content Object Reference Model (SCORM)[7]

Behind all three of these platforms is content built according to SCORM. SCORM is a model that guides developers in the creation of units of learning content that can be easily shared across systems. “Sharable content objects” created according to this model can be interpreted by different operating systems and content delivery mechanisms, enabling users working through a variety of interfaces to interact with the centralized content stored in an LMS.

At a basic technical level, SCORM guides developers on how to package content for delivery and for Run-Time communication. The specifications around packaging enable the LMS to know which units of content should be accessed in response to user prompts, what type of content each unit is, the name needed for retrieval, etc. This is essentially the naming and identification scheme of the flash cards in the Duolingo context and of the videos and exercises in the Coursera and Lessonly cases. Run-Time communication feeds into the delivery and tracking features of the LMS. What prompts does the user receive while the content is running? What information is to be recorded as the user works through the system? How is the completion of the content to be handled in the user’s record in the LMS?
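
In practice a sharable content object talks to the LMS through a small JavaScript API (LMSInitialize, LMSGetValue, LMSSetValue, LMSCommit, LMSFinish) against the CMI data model. The sketch below mocks the LMS side of that exchange in Python purely to illustrate the tracking flow; the element names and string-boolean return values follow the SCORM 1.2 specification, while everything else is simplified.

```python
class ScormRuntime:
    """Mock of the LMS half of SCORM 1.2 run-time communication."""

    def __init__(self):
        # A sliver of the CMI data model, tracked per user per content object.
        self.cmi = {
            "cmi.core.lesson_status": "not attempted",
            "cmi.core.score.raw": "",
        }

    def lms_set_value(self, element, value):
        """Content reports progress; SCORM returns string booleans, not bools."""
        if element not in self.cmi:
            return "false"
        self.cmi[element] = value
        return "true"

    def lms_get_value(self, element):
        """Content reads back state, e.g. to resume where the user left off."""
        return self.cmi.get(element, "")
```

Because every compliant LMS exposes the same calls and data model, a unit of content authored once can report completion and scores to Duolingo-style, Coursera-style, or Lessonly-style systems alike.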

As long as content follows the SCORM model, it can be used, sold, and repurposed by companies, platforms, and users across any number of compliant platforms.

LMS and the Future of Learning

When set next to each other, the cases of Duolingo, Coursera, and Lessonly illustrate how the e-learning environment is notable more for its content than its technology. At a basic level, each of these applications takes traditional models of education and learning and simply repackages educational units for delivery as digital artifacts. Relying on SCORM, or one of a few other similar models for sharable content, each application is built around a relatively simple LMS responsible for handling the inputs of content developers and the interactive consumption of that content by learners.

Because SCORM is universally available guidance for unit creation, the differentiating elements of these platforms are not the underlying technology but rather the content itself and the interface through which makers and users interact with that content. Coursera’s commitment to university partners for the creation of the best content is similar to the battle between Netflix and Hulu over content aggregation and creation, rather than over other, less differentiable features. However, due to the complex and dynamic nature of learning, e-learning systems that are able to combine this simple LMS technology with tools for greater tailoring of content to individual user needs and environments potentially stand a chance of breaking away from the pack of basic offerings.

Already we see discrete software applications taking advantage of GPS and Augmented Reality technology to provide users with a significantly more tangible learning experience in museums and national parks. Similar to the Netflix algorithms that interpret user preferences to suggest content in line with viewer tastes, the data gleaned from the delivery of dynamic rather than static content, and the associated assessment measures, could be used to identify the optimum method of content delivery for each learner. With so many potential applications of the basic LMS skeleton in education and business, the question of creating a unique LMS or adopting an OTS version must be grounded in the end goal. Starting from an objective-setting analysis of the purpose of the LMS and the affordances desired for a particular use case, the basic building blocks of an e-learning platform are readily available to build on. The real challenge remains whether companies, institutions, and individuals can pause to think about what they really need from their learning programs, or whether they will jump at the newest and shiniest set of marketing pitches.

[1] Leber, Jessica. “The Technology of Massive Open Online Courses.” MIT Technology Review. Accessed December 13, 2018.

[2] “Correspondence Education | Britannica.Com.” Accessed December 14, 2018.

[3] Tattersall, Ian. “How We Came to Be Human.” Scientific American, 2006, 66–73.

[4] Dunn, Rita, Jeffrey S. Beaudry, and Angela Klavas. “Survey of Research on Learning Styles.” California Journal of Science Education II, no. 2-Spring, 2002 (n.d.): 75–98.

[5] There is much debate about whether the participatory nature of digital interfaces triggers the learning and memory centers of the brain as well as the tactile use of pencil and paper; for brevity’s sake, that is a debate we will need to put aside for another time.

[6] Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012. Selections from the Introduction and chapters 1-2.

[7] “SCORM Explained: In Depth Review of the SCORM ELearning Standard.” Accessed December 15, 2018.

Callebaut, Werner, and Diego Rasskin-Gutman, eds. Modularity: Understanding the Development and Evolution of Natural Complex Systems. The Vienna Series in Theoretical Biology. Cambridge, Mass: MIT Press, 2005.

Frey, Carl Benedikt, and Michael A. Osborne. “The Future of Employment: How Susceptible Are Jobs to Computerisation?” Technological Forecasting and Social Change 114 (January 2017): 254–80.

Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Professional Computing Series, 1994.

Norman, Donald A. “Cognitive Artifacts.” In Designing Interaction: Psychology at the Human-Computer Interface, edited by J. M. Carroll. Cambridge, Mass: Cambridge University Press, 1991.

Sabharwal, Arjun, ed. Digital Curation in the Digital Humanities: Preserving and Promoting Archival and Special Collections. Chandos Information Professional Series. Waltham: Chandos Publishing, 2015.

Tattersall, Ian. “An Evolutionary Framework for the Acquisition of Symbolic Cognition by Homo Sapiens.” Comparative Cognition & Behavior Reviews 3 (2008).

Learning Management Systems – Outline


Topic Inspiration: E-Learning has become a central tool for life-long learning and workforce development, and a replacement or supplement to traditional classroom academic learning.

Platforms to compare in the space:

  1. Duolingo – Freemium, mobile only language learning
  2. Coursera – Full academic and certificate oriented education platform / online “school”
  3. Lessonly – Workforce training platform

Underlying design: Learning Management Systems

Key Components:

  1. Hierarchical file management system – affordances and constraints of complex file hierarchies
  2. Gamification
  3. Testing and repetition

Topics of content motivate the design:

  1. Workforce training
    1. Shorter
    2. Targeted
    3. Certificate or skill authentication vs. Coursera – traditionally oriented

Focus on the learning user interaction with core course content, leaving out the admin interaction, education management aspects.

The affordances of the underlying technologies are implemented differently in each topic design case

  1. Mobile language app vs a complex linear algebra course



  1. Class reading on interactive software design – always on and responsive
  2. Icon display
  3. Hierarchy affordances

“New” Gmail Snooze Function


By Adriana Sensenbrenner, Max Wilson, and Banruo Xiao

Last April, Google released an overhaul of the web-accessed version of Gmail, dubbed creatively by the tech community “New Gmail.” With it, the team at Gmail introduced a plethora of features and functions to the market-dominant email client. One of these was the Snooze function, already made popular in the mobile version of Gmail and in Google’s market-testing platform, Inbox. While there was a lot of buzz about the Snooze function, as we have discovered throughout the course, this function is simply a combination of already existing aspects of the Google Applications and Gmail environment. The technical simplicity of the combination of modular elements and existing functions involved in the creation of the Snooze function is impressive. The function also highlights how the team at Gmail uses Human Centered Design in the creation of features and graphical interfaces that integrate seamlessly into the overall user experience, aligning with digital and “real world” norms and expectations.

Human Centered Design: The socio-technical interface of Snooze

The name of the function, Snooze, is aptly chosen for the action it executes. Snooze comes from the ubiquitous alarm clock function, both computer based and analog, that lets sleepers say, “Just 5 more minutes,” to their insistent alarm telling them to wake up. Alarm clocks, at their core, serve as time based notification devices. The Snooze button lets people delay reaction to that notification.

Over the past few decades, email notifications, whether in the form of mobile alerts or simply the appearance of yet one more unread headline in an inbox, have gone from exciting, à la the “You’ve Got Mail” era, to overwhelming. Many have devised productivity and triaging strategies to handle the onslaught of emails received, with varying levels of stickiness. One method that has remained popular comes from a 2007 Google Tech Talk by Merlin Mann on Inbox Zero, a strategy that challenges users to clear out their email inbox by the end of every day. Several early elements of Gmail’s design differentiated it as an email client better able than others to support users in this goal:

  1. Archive – users can remove emails from their inbox without deleting them, instead keeping them in a searchable archive.
  2. Tagging – users can facilitate the recall of messages from the archive based on user-created and applied tags.
  3. Filters – users can apply filters to their inbox that screen incoming emails based on keywords, sender, or other aspects, and have tags auto-applied and/or the email archived immediately upon arrival.

The Snooze function is yet one more tool Gmail has provided its users to facilitate achievement of Inbox Zero and the pursuit of other productivity strategies.

Just as the snooze button on an alarm clock resets the alarm to go off at a specified later time, Snooze in Gmail resets the arrival sequence of a particular email to a user-specified time. Users are empowered to forgo dealing with an email until a later time of their choosing, while also removing the email from their inbox, helping them achieve Inbox Zero for the given day. This lets users take advantage of all of the email receipt notification pathways for emails they deem important but cannot immediately respond to. In summary, the design of the Snooze function focuses on the organizational needs of users, hopefully reducing the stress and frustration of overwhelming inboxes and of forgetting to respond to important communications.

User-Interface Design

Gmail is renowned for having one of the “easiest” and most “user-friendly” progressive web apps. Google smoothly distills a great deal of information into simple, efficient parts to create a design almost anyone can use. Part of the major update Gmail rolled out a few months ago was the introduction of “smart features” to the right of an unread email in your inbox. It works like this: when you hover over an email without clicking on it, you’ll see the usual bevy of options, including archive, delete, and mark as read. Snooze appears right after “mark as read.” As TechCrunch explained, “Gmail gives you the option to resurface it later in the day, tomorrow, later this week, on the weekend or next week.” Interestingly, the snooze feature was previously available only in the mobile app; Google apparently wanted it to be more ubiquitous, and probably more popular, so it was implemented in the web app as well.

Second is the hover action element. The options appear as soon as you bring your mouse over any part of a read or unread email, requiring minimal to no effort from the user to find the icon. This only adds to Google’s knack for making everything easy for someone using its interface.

Third is where the features are located on the user’s screen. Gmail had already created icons for the first three features (archive, delete, and mark as read) using icons familiar from Google’s own platforms or from other websites. It is no surprise, then, that it chose a clock as the icon for snooze, playing on what we think of when someone says “snooze”: an alarm clock (analog, not digital). Something else I noticed is where the icon sits in the interface: right next to the timestamp of when the email was delivered. This is a strategic design choice on Gmail’s part, letting the user see when the email arrived while deciding what time, and which day, to snooze it to.

Technical Side of Snooze

As an interactive design that lets users organize their inbox, and even their time, the Snooze function works much like some other Gmail functions. The Archive function, for example, lets users hide certain emails from the inbox by moving and saving them to the All Mail folder. Snooze, meanwhile, is designed to keep all snoozed emails in the Snoozed folder. In fact, the way the Archive and Snooze functions organize emails is essentially the same as the generic “move an email to a specific folder” function.

Meanwhile, Snooze lets users schedule a specific time to deal with snoozed emails, which then reappear at the top of the inbox as if newly arrived. Some Google add-on functions similarly let users schedule a time slot for sending emails later. Their mechanics are much the same, whether moving an email from one folder to another or from an email address to the recipient: a scheduled job, such as a cron-style trigger set up with Google Apps Script, fires around the specified time. These jobs run in the background on Google’s servers, invisible to users, so even if someone turns off their computer, the email will still reach the right destination.
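The scheduling side of Snooze can be modeled as a priority queue of wake-up times: each snoozed email waits until its appointed moment, then reappears. This is a toy sketch of the concept, not Gmail’s real backend:

```python
import heapq

class SnoozeQueue:
    """Toy model of snoozed-email scheduling (not Gmail's actual design)."""
    def __init__(self):
        self._heap = []  # min-heap of (wake_time, email_id) pairs

    def snooze(self, email_id, wake_time):
        heapq.heappush(self._heap, (wake_time, email_id))

    def due(self, now):
        """Pop and return ids whose snooze has expired; these reappear in the inbox."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[1])
        return ready

q = SnoozeQueue()
q.snooze("msg-1", 100)   # wake times are arbitrary units for illustration
q.snooze("msg-2", 50)
print(q.due(60))         # only the earlier snooze has expired
```

A background process polling `due()` on the server side would resurface emails on schedule regardless of whether the user’s own device is online, which matches the behavior described above.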

Although Snooze is an existing, well-developed function, part of what makes it special is the way it links several modules together to make Gmail a more cohesive system. When users snooze an email, they can add a reminder at the same time; in the newer version of Gmail, the Reminder function lets users write notes while snoozing, and those reminders come back up alongside the snoozed emails and can be repeated. From a systems-design perspective, Snooze and Reminder are two modules working together to make the whole system more responsive to user needs.


What’s Behind the Inbox Zero Philosophy?

What Is Gmail Snooze? The New Feature Is SO Helpful For Easily Distracted People

Why We like Gmail’s Latest Redesign and Why You Should Too

How to Schedule Email Messages in Gmail for Sending Later

Snooze Emails and Reminders Until Later

Inbox Zero by Merlin Mann, Google Tech Talk 2007

How blackboxing inhibits design options


My inspiration this week comes from a comment made by my fiancee about her new work team’s homepage.

Her complaint had to do with the uninviting nature of a webpage meant to serve as a landing page for Global Specialty Markets, featuring picture after picture of white men. Displaying a white man’s headshot as the featured image for an article about a WaterAid project in Rwanda seemed at best tone-deaf to socio-cultural expectations, and a bit too in line with stereotypes of the insurance industry.

To her, a typical user of web-based interfaces, this looked like a pretty glaring failure on the part of the marketing department. The degree of tone-deafness from a presumably capable marketing and communications team made me wonder what obstacles might lie in their path to replacing these photos with more representative images.

The Website Architecture

Having been involved in the design of a new website featuring employee news articles at my last company, I remember learning about one of the key constraints imposed on us by our web design contractors. All articles were fundamentally linked to individual user accounts designated as Authors, and these author accounts were incorporated into the metadata of the published pieces. Visitors to the page could search for pieces by particular authors and would see each author’s name under the article heading, which served as a link to their author profile.

The Socio-Cultural History of News

To an outsider, this whole system seems very simple to work with, and it aligned well with what we expected based on our consumption of traditional print newspapers. Authors are always listed along with articles and inform our understanding of the article’s point of view; their history as an author can illuminate the article at hand. Since our employees were authors who could publish their articles through the website management portal, it made sense to tie their employee information, including their photos, to their user/author profiles. This photo and biographical data would also be included in the About Us section of our website.

Well Thought Out Affordances

Thankfully for my team, we made it abundantly clear to the builders of our website that we wanted control over the photos that appeared with the articles. What we learned once we started using the system, however, is that appropriately accessing, storing, and displaying photos is not as easy as we had imagined. My fiancee’s employer seems to have decided to simply display the author photos with their publications, avoiding some of the complexities of managing a website.

Media Storage, Attribution, and Compatibility

The headshots of the employees are tied directly to their author profiles, which are then automatically attached to their articles. The website engineers who designed this function simply take the corporate headshots, which the company fully licenses and stores in a repository of owned images, and resize each of them uniformly to fit the allocated dimensions of the article boxes in the website architecture. This manual resizing step only needs to be performed once per new photo. The resized photo is then duplicated from the company servers onto the servers that host the website, where it takes up space. When a visitor accesses the website, these photos are sent as a subset of the data packets that tell the visitor’s computer what to display at that particular address. The photos appear as intended, with perhaps slight alterations based on the size of the screen or window the viewer is using. This is all a relatively contained system. But what if the marketing team wanted to display a more appropriate image?

While the company may own a set of stock or company-produced photos, let’s imagine they receive the photos from a third party or find them online. Now each photo needs to be formatted and resized to match the receiving space within the website framework. This process is becoming easier as website management platforms improve their user-friendliness, but for a complex corporate website the task will likely fall to a member of the website management team. Given that, the likely process for each additional photo is:

  1. Marketing team selects photo based on internet search of usable photos.
  2. Image is downloaded as a group of data packets to the marketing employee’s computer from whichever website’s servers the image’s owner has chosen for storing and displaying the photo.
  3. The employee’s computer translates the data packets into a format understandable to the employee’s system.
  4. Image is transferred to company server by marketing employee, or is emailed to website IT team who then save it to the company server.
  5. The marketing employee also communicates the attribution requirements for the image (a step removed if the company owns the rights to the image).
  6. The IT team resizes the image and links it with the appropriate article, and publishes article to the website, hosted on yet another server.
  7. Viewer sees final product through their computer, including attribution information.
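Step 6, the resize itself, boils down to fitting the image’s dimensions into the allocated box while preserving aspect ratio. A minimal sketch of that calculation (the box dimensions are invented for illustration):

```python
def fit_within(width, height, box_w, box_h):
    """Scale image dimensions to fit inside a target box, preserving aspect ratio."""
    scale = min(box_w / width, box_h / height)  # shrink by the tighter constraint
    return round(width * scale), round(height * scale)

# A 3000x2000 headshot resized to fit a hypothetical 300x300 article box:
print(fit_within(3000, 2000, 300, 300))  # (300, 200)
```

Real website platforms wrap this arithmetic in an image-processing library and then crop or pad to the exact box, but the constraint is the same: the photo must be made to fit the space the architecture allocates, not the other way around.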

The Caveat

While all of this certainly adds more steps to the article publishing process than currently exist with the use of the author’s photo, that may be an acceptable cost for the benefits. However, what if, during the design of the website, the decision was made to always display whatever photo is tied to the author’s profile? If so, changing this behavior would require a redesign of that particular website feature, making this a much more significant demand on the IT process. Even if it is a simple change, there is always a risk with integrated website design that something else breaks in the process.

So What?

The design of the features of the larger internet provides us with many apparent affordances. We often feel that if we can imagine how something should look, implementing it should be as simple as drawing it out on a piece of paper. However, on top of the base-level rules that enable so much freedom, there are rules computer engineers place on websites, software, and hardware that limit this blank space of options. Just because I can imagine displaying a different image for the article header doesn’t mean I can snap my fingers and make that change, even if I had access. Because design inherently places limitations and parameters on a product’s affordances, it is important to think about the larger ramifications of those design choices. In addition, designing systems for modularity makes it easier to change systems and products after realizing the initial design was flawed.

The Art of Audio


I have always been an avid music fan. Growing up, I was surrounded by my father’s dabbling on a variety of world instruments, my friends’ live jazz shows, and the hundreds of CDs I was constantly swapping in and out of various CD players. I found that while I was not particularly inclined to play music, I was highly attuned to acoustics and sound quality. Toward the end of high school, as I became increasingly involved in studying film, I learned the basics of working with a semi-professional live production sound board. It wasn’t until many years later that I had an opportunity to put those skills to use.

Working at a DC restaurant with live music three nights a week, and a very low-budget setup, I found myself wanting the sound quality to be just a little bit better. During shifts I would tweak the settings on the sound board, and through a bit of trial and error, along with some helpful requests from bands, became comfortable adjusting the sound played by the musicians into what I felt was its most acoustically pleasing form. But while I was learning how to make physical adjustments to analog waveforms, and could understand at a basic level the physics of audio waves and how each dial altered them, I’d never really understood what went into recording audio for conversion into a shareable media format.

Given my love for live music, I frequently tried to record shows on my camera phone, but always found the recordings fell significantly short of what I remembered of the in-person listening experience. Then I received an opportunity to take my passion and basic knowledge to the next level. At one such live show, my live-production skills were noticed and led to an offer to produce future shows destined for full-scale production into music videos. In this new role I worked to get up to speed on the differences between live production and recording for remediation, learning a great deal about how audio engineers overcome the challenge of capturing live experiences and translating them into digestible media files, and how media must be tailored to meet the socio-cultural expectations of the viewer/listener.

Live Sound vs Recorded Sound

Concerts are immersive experiences. Listeners are surrounded by sound coming directly from speakers, but also reverberating off walls and being absorbed by soft surfaces and the bodies around them. The result is a very distinct “concert” sound that leaves the music a bit blurred for each listener. This blurriness is rooted in the physics of analog waveforms: sound travels at a finite speed, so direct and reflected waves reach each listener’s ears at slightly different times and smear together.

As the sound technician, how do I recreate that sound? These physical aspects of the live experience mean I can’t just take the audio from a singer’s microphone, sync it with camera footage, and expect listeners to be satisfied with what they hear. Whereas the concert attendee hears the singer’s voice from many different angles, the listener at home has only the usual two speakers on their device, hopefully in stereo to give just a hint of depth.

This means that to recreate a concert sound I have to place many more mics around the venue to capture the other versions of the singer’s voice formed by reverberations within the space. These multiple recordings must then be overlaid on top of the primary recording to give the listener the audio sensation of the live concert experience.

When tailoring to the socio-cultural expectations distorts reality

Of course, while I may place multiple microphones around the venue to capture the variations of a singer’s voice, I’m not recording the loud talkers standing at the bar behind the concertgoer. Listening to a live concert at home leaves out the very real sounds of the in-person experience. But sometimes, real-experience sound is not what we, as a society of consumers of experiences via visual interfaces, want, and nowhere is this more true than in sports audio.

The two classic examples of how the audio accompanying digital representations of sports differs greatly from the analog “live” experience are NASCAR and professional soccer. When watching NASCAR at home on a TV or computer, viewers hear not only the stereo roar of cars whizzing by but also the roar of the crowds cheering. In person, however, the sound of the cars is so loud that it drowns out even the friend sitting next to you, let alone the crowd. In professional soccer coverage, viewers hear a satisfying thump every time a player kicks the ball, cleanly syncing what they see with the sound they would expect if they were kicking a soccer ball themselves. Yet a moment’s thought makes clear that there is no way to hear the thump of a kick from the stands of a soccer stadium.

The ability to hear the crowds at a NASCAR race and the thump of a kicked ball are audio illusions demanded by viewers that would not be present for in-person attendees. Some are honestly achieved through well-placed microphones, such as those that track the ball on the field and capture the sound of the kick. Others, like the roar of the crowds at NASCAR races, are added in from other sources, since even the best microphones would struggle to separate the sound of the crowds from the sound of the cars. These are examples of the affordances offered to creators of audio/visual media, enabling them to render an experience for viewers that meets preset socio-cultural expectations of the live event.

Mars, Roman. “The Sound of Sports.” (Podcast) 99 Percent Invisible. 8/11/14.

Dreaming, Gaming, and Design


The link between gaming and computing has been around since the early days of innovation in the computing field. The reading this week made me sit back and really think about the statement that “Before the 1980s, no one in the computer industry imagined that there could be a major consumer or small business market for computers” (Irvine, 2018). This sentiment is something I have seen and heard many times over the years. Sure, breakthroughs in design had to occur for us to reach the ubiquity and power of the technology we use today. However, design is inherently not a practice one undertakes without a vision, goal, or at least a motivation in mind.

Thinking back to the first truly significant compression of semiotic media into a quickly accessible format, the creation of microfilm and microfiche comes to mind. While initially a hobbyist form of photographic art, the reduction of visual images into a small format that could be placed behind a lens for viewing was quickly recognized as a benefit by engineers, who frequently had to reference thousands of pages of engineering notes and specifications. In the 1920s and 1930s, the practice of preserving documents, and particularly newspapers, was institutionalized by the Library of Congress, the New York Times, and Harvard University, all of which began creating microfilm repositories of important documents and newspapers.

While Douglas Engelbart’s work to build a system that would let workers augment their intellect was truly impressive in what it achieved, the underlying concept of amplifying a single person’s power to access and benefit from information without leaving their desk had already been a goal for many years. The desire begat the design: access a great deal of information while sitting at a desk. This was true of the engineers, and later of the libraries that installed very familiar-looking readers:


(1) Ames Public Library, 1953. (2) 1956, Old Post Office building, microfilm reader, San Jose Public Library Collection

For me, the desire for individual access to information at a desk clearly laid the foundation for work to gradually improve the design of computers, from the room-sized early IBMs through the step-by-step reduction in size and gain in convenience. Early news pieces about microfilm in the 1930s talked about how one day we would all have microfilm readers the size of a wristwatch through which we could read the newspaper… which sounds like a clear goal for a certain well-known product today, in my opinion.

Another fuel for computer design and innovation, as I see it, is gaming. As mentioned, some of the earliest games were in fact simply demos of computing hardware and concepts. The first computer games, Bertie the Brain (left) and Nimrod (right), seen below, had little to do with gaming; they were demos of underlying computing technology on display at the Canadian National Exhibition in 1950 and the Festival of Britain in 1951, respectively.


While the companies behind them were set on demonstrating the power of the technology, spectators just wanted to play the games. Combine that with consumers’ focus on technology’s power to entertain, and the potential demand for electronic gaming in the home becomes clear. Early console systems like the Magnavox Odyssey and Atari brought computer-based technology into many households years before an affordable personal computer had been released. Soon the value of a computer for individual personal use beyond gaming was recognized, and companies worked quickly to design the hardware and software needed to support it. Productivity was the goal; the personal computer was the design solution to the problem.

Looking ahead at the obstacles placed in front of users who want to improve on the designs of just a few companies, I see the most hope in the gaming community. I am fairly confident that an analysis of all custom-built personal computers made this year would show they are overwhelmingly owned by gamers. These are the power users who demand more than what they can buy in a black-boxed product off the shelf. They are the ones still creating components to give themselves an edge. But it isn’t just about competition.

Many gaming companies have themselves come to accept their own limits in designing games. Most games on the popular Steam platform now have communities of “modders” who design and code mods, or patches, for games that have already been released. One of the most famous of these is the Long War mod for XCOM: Enemy Unknown by Firaxis Games: a free add-on so extensive that it turns the underlying experience into an entirely new game. With over 840,000 downloads to date, the mod has greatly improved the sales success of the base game. In response, when releasing the sequel, XCOM 2, Firaxis Games included built-in support for modding and modders, including access to base code and game algorithms.

While modding is still predominantly restricted to the world of gaming, I could see its success leading more apps and companies to open up their software to user modification. This of course brings security and exploitation concerns, but it could be an avenue for better products down the road.

Documentary: Alan Kay on the history of graphical interfaces:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles

Coding and scaling


Diving into Python with Codecademy was a great complement to Introduction to Computing by Evans. Having never really done any coding, except for formatting early blogs, I was surprised to see how immediately I could draw connections between coding and my work.

As a Technology Change Management consultant, I frequently work with software developers who are building or modifying systems, engaging with them through scrum meetings where they talk about the time it will take to complete this feature or that. I assumed that getting a bit more insight into coding would help me better understand those meetings and the challenges they were facing. While this is certainly true, I also found the concepts around process, efficiency, and replication at scale to be very applicable to the tasks I complete.

One of my primary functions is to interview my client’s employees about their work and draw out process maps for every aspect of their jobs. Some of these processes are very simple, almost perfectly linear with only one or two forks. Others are very complex, with multiple points requiring decision-making by one or more parties before the process can continue forward. To date I have looked for efficiencies and opportunities using just my intuition, trying to place myself in their shoes and think of the most efficient way to get the work done. While reading Evans’ introduction, I realized that designers of computers and software fundamentally ask the same two questions I do of the problems they face:

How much information can it process?

How fast can it process?

I see that if I dive further into the concepts of computing and coding, learning which fundamental coding challenges eat up time, I can apply the same concepts to job-function processes. Instead of simply removing stages, I can rework them entirely to facilitate increased speed and volume of work. Additionally, the ideal for any process I work on is the ability to scale it significantly while using the same number of human resources. Designing for future scalability is another skill I could develop through coding training.
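The two questions above show up at the smallest scale in code. As a rough illustration (absolute timings will vary by machine), checking membership in a plain list scans every element, while a set answers in roughly constant time; choosing the right structure is the coding equivalent of reworking a process rather than just trimming a step:

```python
import timeit

# An invented dataset: 50,000 employee names to search through.
names = [f"employee-{i}" for i in range(50_000)]
name_set = set(names)
target = "employee-49999"  # worst case for the list: it's at the very end

list_time = timeit.timeit(lambda: target in names, number=100)  # linear scan
set_time = timeit.timeit(lambda: target in name_set, number=100)  # hash lookup
print(f"list scan: {list_time:.4f}s, set lookup: {set_time:.6f}s")
```

The set wins by orders of magnitude, and the gap only widens as the data grows; this is the kind of restructuring-for-scale that applies just as well to human workflows.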

The Alien Emoji: Meaning and pace of change


In 2015, many iPhone users ignored – as usual – their phone’s demands to update their operating system. Not long after, those same users started receiving surprising messages from friends:

Users took their fears and confusion to the internet, “What does the alien in a box emoji mean????”

This fun bit of work by Apple programmers illustrates the disconnect between data transmission and meaning interpretation inherent in text messages. Behind the scenes, text message communications function in a way not dissimilar from the telegraph. On one end of the chain, a human party chooses from a set of pre-determined symbolic characters and arranges their selections through the interface in a manner meaningful to them, and ideally meaningful to the receiving party. Those symbols are encoded in a universally agreed-upon manner for transmission to the receiving party’s phone. The receiving phone decodes the representations of the symbols according to the “cipher” it maintains for such communication: the library of pre-determined symbolic characters its user has access to. Finally, the receiving party takes in the representation of those characters decoded by their phone and uses cultural and interpersonal context to interpret meaning from those characters.

So what about the alien in a box?

With no social or cultural norm to help the receiver understand the meaning of the symbol, and no decipherable context clues from which to deduce a meaning, the alien in the box confounded recipients. This inability to interpret meaning is akin to the experience of someone sitting at the receiving end of a telegraph line who does not know Morse code: the signals being received have no cipher by which to translate them and therefore carry no meaning for the recipient. That is exactly what happened in the case of the alien in a box:

The new operating system had introduced additional symbols into the set of symbols its users could choose from to build their messages. When the recipient’s cipher, its set of symbols, lacked these new options, the bits received for those symbols did not align with any existing symbols and were therefore represented by an “error” symbol.
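A toy model makes the mechanics concrete. Here each phone’s “cipher” is a table of codepoints it can render; anything outside the table falls back to an error glyph. The codepoints are real Unicode values, but the rendering table and glyph names are invented for illustration:

```python
# An older phone's symbol library, frozen before an OS update added new emoji.
OLD_PHONE_LIBRARY = {0x1F600: "grinning face", 0x2764: "heart"}
FALLBACK = "[alien in a box]"  # the error glyph shown for unknown codepoints

def render(codepoints, library):
    """Map received codepoints through the phone's cipher, with a fallback."""
    return [library.get(cp, FALLBACK) for cp in codepoints]

# A newer phone sends 0x1F914 ("thinking face"), introduced in a later
# Unicode release, so the old phone has no entry for it:
message = [0x2764, 0x1F914]
print(render(message, OLD_PHONE_LIBRARY))
# ['heart', '[alien in a box]']
```

The bits arrive intact; what is missing is an entry in the receiving cipher, which is exactly why updating the operating system (and with it the symbol library) made the aliens disappear.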

Not being able to understand their messages caused users a lot of anxiety. What were their friends trying to say to them? And more importantly, why couldn’t they send one back? This opens a much deeper discussion of the role of symbol-based communication in our daily lives and the ingrained desire not to be left behind when someone else has something new. Fundamentally, though, it illustrates how users of symbolic communication depend on the accuracy of their individual ciphers to translate messages. What their ciphers can’t do, however, is explain what those messages mean.

The Book as an artifact


Books, bound books in particular, have been a familiar design artifact dating back to the Roman Empire. They have differed little over the years from the form we see on our shelves today, largely because of the affordances and constraints inherent in their use. As an artifact, I look at books as having three arenas of interface: physical, visual, and conceptual (or interpretive).


The fundamental purpose of a bound book is to provide a means of organizing two-dimensional substrates on which our design systems can be applied to communicate information (a term used broadly here). This means the form of the book must facilitate a person’s access to that information, so the size and physical capabilities of a human are a limiting factor. On one end, a book may be small enough to hold open in one hand, but not so small that the symbolic artifacts used to communicate meaning no longer fit on its pages. On the other end of the spectrum, some use cases appropriately involve a stand to hold the book’s weight. While this allows a broad range of sizes, there is still an upper bound on book size, constrained by the reader’s need to see all of the information on the page in a relatively low-energy manner: a 10-foot-tall book might contain a great deal of information, but it would require a platform to read the top lines and a mechanism for turning the pages.

From a manufacturing perspective there are additional constraints based on the ease of writing or printing the text on pages that can be produced, and the longevity and production of the material used to bind the book.


The visual constraints of a book are a factor of the symbols it contains. Our modern text-based communication takes a linear format, either left to right or right to left, making rectangular pages, and hence books, an efficient medium. However, it is not hard to imagine a form of communication based on non-linear symbols that, when grouped together, take on an arced, rounded, or triangular shape. In such a circumstance, given appropriate production means, it would not be surprising to see a non-rectangular book.

One of the most readily noticeable aspects of a book has to do with the design of the text inside. Is it too big, too small, or just right? This differs by reader given factors including eyesight and general preferences often described as “comfort.” These decisions have guardrails in the form of number of pages in the book, amount of content desired, and limits of human capabilities.

Additional visual design elements include size of margins, distance between lines of text, and distance between paragraphs.


Another aspect of book design that is often overlooked until it becomes an issue is the organization of information. It is the author’s decision how to split the book into sections or chapters. The inclusion of a Table of Contents improves the reader’s ease of access to the information in the book. For more content-heavy books, an Index is invaluable, allowing readers to quickly locate key concepts and ideas. Both of these elements have been adopted in a fundamental way by digital reading interfaces, which offer search functions in place of an Index, and tabs and menus in place of a Table of Contents.


Side note: nice quick video on affordances and the ever frustrating “Norman doors.”

Tablets and re-mediation of communication


Communication, whether verbal or written, has never derived its value or meaning simply from the symbols employed. Rather, these auditory or visual symbols gain meaning when they are used to translate the thoughts or emotions of one individual for understanding and consumption by one or more others. In this manner they are the medium for the workings of the human mind.

Over the course of history symbols have been used for human expression through an ever-evolving series of media, from fixed-in-place cave paintings and stone obelisks, to transportable papyrus and printed books, and presently through keyboards and electronic signal-based formats. Yet, while the physical manifestation and method of consumption of the symbols have changed, their creation and understanding remain firmly rooted in the norms of social interaction.

Take as an example the uses of a modern tablet computer. Beyond the physical similarities to early surfaces used for visual communication, the rules that dictate how humans operate in real life translate to this seemingly independent product. First, let’s think about the interface. The touchscreen removes the artifactual keyboard interface in favor of a medium that ideally responds seamlessly to human intent. The swipe motion that clears the screen of the current document to reveal the core tools on the user’s desktop is analogous to an office desk: it echoes the basic human action of using a hand or arm to clear away space in front of us to work on something new.

The tablet is fundamentally one more tool in the family of stone tablets, papyrus, and typewriters for forming symbols with the goal of long-form communication. However, it is more frequently used for instant outgoing communication with other people via chat, email, audio calls, and video calls, and for incoming communication via all of these channels plus video and audio broadcasts and written news media. I would like to touch on two of these that seem more modern than they actually are in practice: video calls, and news broadcasts and alerts.

Video calling has been a dream of technologists since long before early sci-fi comics depicted such communication. In practice, while it is used more frequently, we have found that certain rules and social norms surround it. The times when I can or will answer a video call are more limited than the times I will answer a phone call or text. Looking back into earlier history, where a phone call or text would be analogous to leaving a note under the recipient’s door or passing a message through a neighbor, a video call out of the blue is akin to running up to someone in the street and yelling, “Hey! Hey! Will you stop what you’re doing and talk to me?” Just as there has always been a time and place for that sort of impromptu meeting, there is a time and place for video calls. The method of communication has changed, but the fundamental social context has not.

As we are bombarded by buzzing alerts and screen pop-ups from news agencies, we might be prone to gripe about the invasion of modern technology into our lives. But is this really a new feature? How is it different from the days of paperboys standing on street corners shouting, “Extra! Extra! Read all about it!”? We may have to deal with the misallocation of importance to certain news items, but fundamentally, if the news is relevant to us we appreciate the notification, and if it isn’t we are a little annoyed by the disturbance.

In summary, all of the functions of the technological artifacts we use today are re-mediations of age-old social behaviors and forms of communication. This blog post is the print-at-home pamphlet shared amongst a small group of thinkers. The medium has changed, but the social behaviors fundamentally have not.

Translation of Régis Debray, “Qu’est-ce que la médiologie?” (“What Is Mediology?”), Le Monde Diplomatique, August 1999, p. 32. Translated by Martin Irvine, Georgetown University.

Martin Irvine, “Understanding Sociotechnical Systems with Mediology and Actor Network Theory (with a De-Blackboxing Method)”