Author Archives: Randal Ellsworth

reMarkable: Digitization without Pixels

The “reMarkable” tablet purports to replicate the feel of drawing, reading, and writing on a paper surface – on a digital device. Click on the video below for a brief description from the company:

There is nothing novel about the concept behind this device. That the tablet is marketable at all suggests that something has been lost in previous attempts to replicate the experience of writing with pen, pencil, and paper. In his intro to this unit, Professor Irvine says that “we most commonly use digital media to simulate, emulate, or reproduce the experience of analog media, that is, representing symbolic forms that can be created and received by human senses and perceptual organs” (Irvine, Key Design Concepts for Interactive Interfaces and Digital Media).

What differentiates this tablet from one with a traditional pixel-based screen is simply how our input is converted into a legible mark on the screen, and how our eyes are able to see it. While the video above is a marketing video, it does provide some useful insight into how these differences play out, starting at the 1:10 mark. Whereas traditional screens visualize content by lighting up millions of pixels to replicate an image, this tablet draws electrically charged particles of synthetic ink to the screen surface, allowing natural light to reflect off of it.

Presumably, the input process simply indicates where on the screen that ink should appear. Whereas a normal pen releases ink from its tip, in this case the pen indicates on the surface where the device should attract ink from the other side.

I was not able to figure out exactly how the device converts that input into a digital format, though it clearly does: the device is capable of exporting drawings into common formats such as PDF and other image formats. That would be my question for the class and/or Professor Irvine.
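While I can’t confirm the device’s internals, a plausible mechanism is that the pen’s position is sampled as a stream of coordinates, and those samples are grouped into strokes that can then be serialized to a vector format for export. A minimal sketch of that last step, with all names and the screen dimensions illustrative rather than taken from reMarkable’s software:

```python
# Hypothetical sketch: a pen digitizer reports (x, y) samples; grouping
# them into strokes yields a vector representation that can be exported
# to formats like SVG or PDF. Everything here is illustrative.

def strokes_to_svg(strokes, width=1404, height=1872):
    """Serialize a list of strokes (each a list of (x, y) points)
    into a minimal SVG document."""
    paths = []
    for stroke in strokes:
        # "M x y L x y ..." draws a polyline through the samples
        cmds = "M " + " L ".join(f"{x} {y}" for x, y in stroke)
        paths.append(f'<path d="{cmds}" fill="none" stroke="black"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(paths) + "</svg>")

# Two short strokes, as a digitizer might report them
svg = strokes_to_svg([[(10, 10), (20, 15), (30, 30)], [(40, 40), (50, 45)]])
```

Because the strokes are stored as coordinates rather than pixels, rendering them as PDF or PNG at export time becomes a straightforward conversion.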



Lev Manovich, Software Takes Command, pp. 55-239; and Conclusion.

Martin Irvine, Key Design Concepts for Interactive Interfaces and Digital Media

“Appification” and Google Photos

“Appification” Cloud Storage

I recently started using Google Photos, a cloud storage service from Google that stores photos in “the cloud.” Whenever I save a photo to a monitored folder on my local hard drive, I see a notification at the top right of my screen indicating that it is being backed up to Google Photos, at which point I can access that image from any of my devices. To access these photos, I typically need to sign in to either the Google Photos web app in my browser or the standalone app on my mobile device.

Image Source: How Cloud Storage Works

The process of syncing photos to the cloud via Google Photos is similar to other cloud storage services in general, as seen in the image above. Through the Google Photos app installed on my computer or mobile device, I grant permission for the app to actively manage image files saved locally, and those images are sent over the web via HTTP protocols from my client device to a server located at one of Google’s data centers, where they are allocated and stored across a variety of servers.
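Setting aside Google’s actual protocol (which I can’t verify), a sync client of this kind typically decides what to transfer by comparing content fingerprints with the server, so that already-backed-up photos are not uploaded twice. A hypothetical sketch of that decision logic:

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Content fingerprint; sync services commonly use a hash like this
    to detect whether a file has already been backed up."""
    return hashlib.sha256(data).hexdigest()

def needs_upload(local_photos: dict, remote_digests: set) -> list:
    """Return the filenames whose content is not yet on the server."""
    return [name for name, data in local_photos.items()
            if file_digest(data) not in remote_digests]

# Illustrative data: two local photos, one already known to the server
local = {"beach.jpg": b"\xff\xd8...beach", "dog.jpg": b"\xff\xd8...dog"}
remote = {file_digest(b"\xff\xd8...dog")}   # server already has dog.jpg
to_send = needs_upload(local, remote)       # only beach.jpg needs syncing
```

The actual upload would then happen over HTTPS, with the server storing the bytes and recording the new fingerprint.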

What’s interesting is how the internet affords not only the ability to browse content this way, but also varying means of controlling that content through the growing use of apps such as Google Photos.

Addressing this extensibility aspect of internet protocols, Irvine says that “The Web architecture and framework of HTML5 now also supports the function fragments in mobile apps…. Apps and mobile devices allow channeling and fragmenting internet/web architecture for commerce, consumerism, and maintaining the focus and attention of users.”

At this point I have thousands of photos backed up, and it has made my life much easier: Google’s algorithms sort everything, tag it, and make it searchable. Google says I can upload an “unlimited” number of photos, offering seemingly infinite scalability for me to upload files over the course of my life. It offers anyone and everyone that option; in fact, Google wants anyone and everyone on the service. We can see this as an instance where Google uses a closed module to black-box user activity on the open web. Corporations such as Google can merge the affordances of a globally connected network, the internet, and funnel our activity and our content into closed services that they control.


Martin Irvine, “Introduction to the Web.”

Ron White, How Computers Work. 9th ed. Que Publishing, 2007. “How the World Wide Web Works.” (excerpts).

Cloud storage: What is it and how does it work? | How It Works Magazine. (n.d.). Retrieved December 15, 2017.

Janna Anderson, and Lee Rainie. “The Future of Apps and Web.” (Also pdf format.) Pew Research Center’s Internet & American Life Project, March 23, 2012.

Jonathan Zittrain, The Future of the Internet–And How to Stop It. New Haven, CT: Yale University Press, 2009. Excerpt from introduction and chap. 1. Entire book is available in a Creative Commons version on the author’s site.

The Affordances of Flash Cards: Interaction with Paper and Digital Variants

Flash cards have long been a common tactic for students learning lists of vocabulary words and concepts. They are a quick way to test knowledge: read one side, then flip the card to see how well you knew its contents. Students can test themselves on any kind of content that can be drawn on a card, whether in the form of text, images, and/or color coding. Cards can be laid out on the floor, sorted and re-sorted into different boxes, and they are even mobile, often stored in a small box or on a key ring. Grab a friend, and they can easily pull out the cards and quiz you on your knowledge.

These perceived and actual action possibilities are what designers call “affordances”. They are what drive people to make and use flashcards. Software designers have sought to make money by translating these affordances into a digital format, with varying degrees of success. In some ways flash cards make more sense in a digital environment, but in other ways certain affordances are not as immediately perceived.

Don Norman, a leading thinker on the concept of affordances, said that “…affordances are of little use if they are not visible to the users. Hence, the art of the designer is to ensure that the desired, relevant actions are readily perceivable” (Norman, 1999). In other words, if those affordances are not immediately apparent on the digital display, then the digital format for flash cards will not be very useful, even if many more applications and “usefulness” are baked into their design. “Strong visual cues” are what indicate the potential uses on the screen, such as “clickable buttons and tabs, draggable sliders,” things that suggest actions and effects (Kaptelinin, 2013).

We can see some of these visual cues specific to digital flashcards. Take, for example, Anki, one of the more powerful flashcard applications.

Within the above image, you can see three screens, captured from an iPhone where I have downloaded a flashcard list of Portuguese phrases. On the first screen, I can immediately see a “play” icon that lets me listen to the phrase being recited aloud. With a tap, the second screen reveals the answer, and I can indicate at the bottom whether I want that card repeated in 1 minute, 1 day, or 4 days. These intervals change automatically through continued study: based on repeated success or failure, the software’s algorithms space out each card to best help you learn the concepts. If I swipe the screen, I can modify the card or its location through a slider and a series of buttons.

In terms of actual affordances, Anki is superior to traditional flashcards. Within the confines of my phone and one screen, I can store tens of thousands of flashcards that are spaced out organically for frequent or sporadic practice. I can embed audio for practicing pronunciation, insert images, and even see statistics on how well I am progressing with certain terms or data sets.

In terms of perceived affordances, however, it is hard to see all of this value. It is not immediately obvious that tapping the card will flip it to the other side. The spaced repetition options offered at the bottom of the second screenshot are also not immediately obvious, despite an attempt to indicate accomplishment through a color code. (Red for “don’t know,” so give it back in a minute. Green for “so-so,” give it back in a day or so. Grey for “it’s solid,” bring it back in 4 days.)
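Anki’s scheduler is based on a spaced-repetition algorithm in the SM-2 family. The sketch below is a deliberately simplified toy version of the interval-growth idea described above, not Anki’s actual code; the numbers mirror the 1-minute / 1-day / 4-day buttons in the screenshots, and the growth factors are invented for illustration:

```python
def next_interval(current_days: float, grade: str, ease: float = 2.5) -> float:
    """Toy spaced-repetition step (SM-2-flavored, not Anki's real code):
    failed cards reset to a short interval; remembered cards get a
    longer gap, scaled by an 'ease' factor."""
    if grade == "again":             # red: didn't know it
        return 1 / 1440              # back in about a minute (in days)
    if grade == "good":              # green: knew it, with effort
        return max(1.0, current_days * 1.2)
    if grade == "easy":              # grey: solid
        return max(4.0, current_days * ease)
    raise ValueError(f"unknown grade: {grade}")
```

Repeated “easy” answers push a card further and further into the future, which is what lets tens of thousands of cards fit into a manageable daily review.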

I love Anki. It is a powerful application that does many of the things I used to use paper flashcards for. Many others, however, find it useless. I don’t blame them.


Kaptelinin, Victor. “Affordances.” The Encyclopedia of Human-Computer Interaction, 2nd Ed., 2013.

Donald A. Norman, “Affordance, Conventions, and Design.” Interactions 6, no. 3 (May 1999): 38-43.

Content Management or Virtual Learning Environment: A deeper look at Canvas LMS


Learning Management Systems are the subject of ongoing debate as to whether they function primarily as Virtual Learning Environments or as Learning Content Management Systems. This essay explores that debate by opening up and examining the architecture of one specific case: Canvas, by Instructure. Upon closer examination, Canvas seems most intuitively used for managing learning processes, albeit with extensibility incorporated for building deeper learning environments with extra effort and commitment. To arrive at this conclusion, the essay examines the affordances and conventions presented as part of a socio-technical view of the architecture, paying attention to the major components within a system that is both technical and human. In this way, the essay examines the cloud architecture, stakeholders, abstraction layers, and interoperability potential of LTI standards.


Higher Education Institutions today are tasked with the design, delivery, and administration of learning experiences across in-person and online domains. Students sign up to learn, and institutions seek to facilitate that learning in both the pedagogical and administrative sense. They do so through a variety of software tools and platforms. In their annual review of educational technologies and trends, the New Media Consortium and Educause Learning Initiative defined one of the common types of platforms used for this purpose:

“Learning Management Systems (LMS), also referred to as Virtual Learning Environments, comprise a category of software and web applications that enable the online delivery of course materials as well as the tracking and reporting of student participation. Viewed as a centralized location for the ephemera of learning experiences, LMS have long been adopted by colleges and universities worldwide to manage and administer online and blended courses” (New Media Consortium // Educause Learning Initiative, 2017).

“Canvas” is one such LMS. Created by the vendor company Instructure, Canvas has been adopted by a growing number of higher education institutions. As an aspiring learning designer and technologist, my goal in this essay is to open up some of the major components and layers that make the platform work in order to better understand it contextually, architecturally, and functionally; and to outline how Canvas’ design architecture lends itself especially well to the administrative management of learning processes, but requires extra effort on the part of institutions to successfully incorporate deeper learning in the virtual environment. The software affords learning experiences in service of students; however, its design features are often more conducive to management and administration in the service of institutional stakeholders.

Why is Canvas designed the way it is, versus some other way? I’ve used a few guiding heuristics to answer that question:

  1. Sociotechnical Systems approach: Technical systems and human stakeholders do not exist in a vacuum, especially when examining core mission-related processes such as how learning is managed and delivered by large institutions such as Universities.
  2. Affordances and Constraints of the Architecture: What action possibilities exist because of these architectures and designs? In particular, as it relates to pedagogical approaches and/or the management of the institution.

The Cloud Architecture creates scalability for IT processes and availability for users and stakeholders.  

Figure 1.1

Image Source: (Serrano et al., 2015)

Cloud architecture makes institutional processes scalable, available, and extensible, easing the burden on institutional stakeholders responsible for administering learning in blended and online environments.

The National Institute of Standards and Technology defines cloud computing as “…a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” (Grance & Mell, 2011).   

Observing Figure 1.1 above, the “client” is best understood as the end users: institutional administrators, faculty, or students. Canvas is typically implemented as part of a “private” cloud model, where a given university has access to its own instance, an individualized online infrastructure using the service. While Canvas is also public in the sense that Instructure runs instances open to all users, such as Canvas Network (“Canvas Network | Free online courses | MOOCs,” n.d.), the core business model revolves around offering a private cloud setup for a given institution.

It’s important to note that while Instructure interfaces with institutions to develop and offer the software, the infrastructure housing the data lives in data centers offered by other providers, such as Amazon Web Services, as seen below in Figure 1.2. The hosting entity offers its infrastructure to the vendor, Instructure in this case, and the vendor develops and manages its software platform for use by institutions and universities.

Figure 1.2

In practice, users can access the software, with corresponding shared resources, databases, and artefacts, from anywhere, as long as their digital device is connected to the internet. In this manner Canvas delivers its software as a service: the institution does not have to host any servers or computers locally on campus to make it work. It operates entirely through the internet connection. Computing infrastructure such as memory, storage, processing, and networking is paid for at scale with the rate of consumption, instead of buying hardware outright. This is distinct from physical hosting, where customers purchase a physical server in a data center or host servers locally.

Serrano et al. describe some of the organizational advantages of a cloud-based solution: “Besides the economic advantages from a cost perspective, the main competitive advantages are the flexibility and speed the cloud architecture can add to your IT environment. In particular, this kind of architecture can provide faster deployment of and access to IT resources, and fine-grain scalability.” (Serrano et al., 2015).

At its heart, cloud computing is about providing Information Technology resources and management at a distance. The most obvious affordance of this architecture is the ability to off-load the management of IT onto another organization so that institutional stakeholders can focus more on managing content and learners. Instructure continuously updates the software platform, and those updates trickle down to institutions, where beta testing can be handled in test environments, making repeated software transitions a much simpler process. With this infrastructure hosted and managed off-site, the solution also scales for universities: if there is a surge in new users for the platform, the model allows them to easily purchase new licenses from the vendor.

At the end of the day, if Instructure is facilitating all of the back-end infrastructure, institutions need much less overhead in the form of certified IT administrators specific to the platform. The focus can remain on funding the salaries of individuals who can focus on implementation, training, and the design and delivery of learning content.

Stakeholders, affordances of the software, and convention.

Figure 1.3

Stakeholders will default to ease-of-use and convention despite advanced features for learning included as extras.

Content Management is the Trend for Faculty and Students using Learning Management Systems

Key stakeholders typically interface with the Canvas platform as part of a socio-technical system. These stakeholders typically have other responsibilities as part of their job descriptions, but often fit into the following categories in relation to the Canvas platform:

  • The Cloud Hosting Platform: The organization hosting the software platform and its corresponding data and servers; in this example, Amazon Web Services.
  • The Vendor: Instructure is the company developing the software, relevant updates, and facilitation of back-end data.
  • The Administrator(s): Institutional and/or program level administrators paying for the service and responsible for the university organization.
  • The Faculty: The instructors facilitating blended and online environments for classes as end-users for the platform.
  • The Students: The learners taking classes as end-users and participants in the platform.
  • The Technologists: The staff members supporting regular implementation of updates, and training for faculty, students, and staff on a technical level.
  • The Learning Designers: The staff members who help design and develop online courses, online course components, and online programs.

Canvas implements many of the typical functions of the LMS and is often at the forefront of developing and implementing new features. That being said, it functions similarly to many of the other leading LMS providers in that users are primarily inclined to manage the learning process rather than catalyze learning for the individual student. There is data for this kind of use on a wider scale, albeit for how Learning Management Systems are being used generally. Having surveyed upwards of 17,000 faculty and 75,000 students, as well as evaluated data and metrics related to IT practice from more than 800 institutions, the Educause Center for Analysis and Research put out a report (Dahlstrom, Brooks, & Bichsel, 2014) with the following statistics:

  • 99% of participating institutions have an LMS in place
  • 85% of participating faculty use a Learning Management System
  • 56% of faculty reported using it every day
  • 74% of faculty say it is a useful tool to enhance teaching
  • 83% of surveyed students reported using the LMS
  • 56% of surveyed students said they used it in most or all of their courses
  • 41% of surveyed faculty said they used it to promote interaction outside the classroom

These numbers speak to widespread adoption rates across institutions that are using learning management systems. But adoption isn’t the same as impact, and that final statistic speaks to some important nuance in how the LMS is being used. Examining this same data set, Brown et al. speak to how these statistics showcase how the LMS is actually being used:

“Despite the high percentages of LMS adoption, relatively few instructors use its more advanced features: just 41 percent of surveyed faculty report using the LMS ‘to promote interaction outside the classroom’ … What is clear is that the LMS has been highly successful in enabling the administration of learning but less so in enabling learning itself. Tools such as the grade book and mechanisms for distributing materials (e.g., the syllabus) are invaluable for the management of a course, but these resources contribute only indirectly, at best, to learning success” (Brown, Dehoney, & Millichap, 2015).

In other words, users are defaulting to the most basic functions in order to manage and facilitate standard pedagogical practice. For the users who are most in touch with the LMS on the ground, the system is being used to manage and solicit content rather than to engage learners in a more collaborative virtual learning environment. These statistics speak to the LMS generally and are not specific to Canvas, but they are helpful to keep in mind as a heuristic when opening up the design of a specific case like Canvas, which does contain features, both built-in and available from third parties, that can help facilitate better learning practices and collaborative engagement.

Affordances vs Convention

This relationship is pertinent within the boundaries of a specific case like Canvas because of the disparity between what is offered and what is actually used in most learning management systems. End users act on what they see as the most useful and actionable functions, based on what they intuitively perceive as possible. Faculty will run courses in ways that feel most natural as extensions of their teaching process, even though the online environment offers very real differences in what is and is not possible. Canvas offers the option to incorporate advanced features for collaboration and deeper learning, but because that involves extra steps, it is likely that a majority of users will fall back on what is most natural: the content management, the pushing and absorbing of documents and multimedia described above.

Don Norman describes this disparity in terms of affordances and perceived affordances, or the inherent action possibilities and those that are seen as possible action possibilities because of convention. Speaking to the importance of distinguishing between these two concepts, he asks the reader to “Please don’t confuse affordances with perceived affordances. Don’t confuse affordances with conventions. Affordances reflect the possible relationships among actors and objects: they are properties of the world. Conventions, conversely, are arbitrary, artificial, and learned.” (Norman, 1999).

Canvas’ built environment offers both, however, most users will stick to what is basic, built-in, and obvious. In most cases, that correlates to managing procedures and processes simply because they are the least common denominator. This emphasis ultimately makes it less conducive for experimenting with new approaches for learning, barring a concerted and combined effort to integrate these approaches pedagogically and technically.

Using Accounts to Manage Learners & Artefacts

Figure 1.4

Accounts and Subaccounts are used to manage people and permissions within the system, differentiated by roles and permissions.

Instructure offers a variety of Canvas Guides for learning how the platform is organized; some of the main building blocks include accounts, sub-accounts, courses, and modules. The guides define them as follows:

The terms account and sub-account are organizational units within Canvas. Every instance of Canvas has the potential to contain a hierarchy of accounts and sub-accounts but starts out with just one account (referred to as the top-level account). Accounts include sub-accounts, courses, and sections, all of which can be added manually in Canvas, via the API, or via SIS imports (“What is the hierarchical structure for Canvas accounts? | Canvas Admin Guide | Canvas Guides (en),” n.d.). 

Accounts and sub-accounts comprise the main skeleton upon which the instance for an entire institution, program, or school is built out. They are separate from individual user accounts, which are what an individual person uses to log in to the platform and participate. The top-level account is usually defined by the largest overall organization using the instance, typically the university as a whole or an individual college or school that separately decides to use the LMS. Sub-accounts then account for the branching units of that organization, as shown above in Figure 1.4 and below in Figure 1.5.


Figure 1.5

Image Source: Canvas Guides 

As you continue down that chain, courses and modules exist as sub-units, typically housed within sub-accounts for departments, programs, or other tiers most affiliated with faculty and classes at the institution.
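That hierarchy can be modeled as a simple tree. The sketch below is a toy illustration of the nesting, not Instructure’s actual data model; the account names and course titles are invented:

```python
class Account:
    """Toy model of an LMS account hierarchy (not Instructure's schema):
    a top-level account containing nested sub-accounts and courses."""
    def __init__(self, name):
        self.name = name
        self.sub_accounts = []
        self.courses = []

    def add_sub_account(self, name):
        child = Account(name)
        self.sub_accounts.append(child)
        return child

    def all_courses(self):
        """Walk the tree: an admin at this level can see every course
        housed in this account or any sub-account below it."""
        found = list(self.courses)
        for sub in self.sub_accounts:
            found.extend(sub.all_courses())
        return found

# Invented example hierarchy
university = Account("Example University")        # top-level account
college = university.add_sub_account("Graduate School")
program = college.add_sub_account("Design Program")
college.courses.append("Orientation Seminar")
program.courses.append("Learning by Design 101")
```

Walking the tree from any node mirrors how administrative visibility flows downward through sub-accounts while a student’s view stays confined to individual courses.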


Figure 1.6

“Permissions” and “roles” are the designations given to individual user accounts that let them either participate in or modify accounts, sub-accounts, courses, sections, or even their own settings. Students typically have limited permissions, as their “role” limits them to participation in whichever courses and sections they are members of. Faculty and designers might have increased permissions to modify and build out the courses they are attached to, and administrators at differing levels can make changes to upper-level sub-accounts depending on their role at the institution.

Administrators will typically have permissions to modify or add to sub-accounts depending on which tier of the organization they are managing, as differentiated by the permissions they are given. Individual schools, departments, and programs will often have unique configurations of apps and integrations associated with the courses managed by their sub-account. This enables sub-accounts to manage the affairs of students and faculty unique to those subunits: courses designed around what those students are learning, or connections to other software platforms and databases corresponding to that unit.
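Role-based permissions like these are commonly implemented as a lookup from a role to its allowed actions. The role names and actions below are illustrative, not Canvas’s actual permission set:

```python
# Illustrative role/permission lookup -- not Canvas's actual model.
ROLE_PERMISSIONS = {
    "student": {"view_course", "submit_assignment", "post_discussion"},
    "teacher": {"view_course", "edit_course", "grade_submissions"},
    "sub_account_admin": {"view_course", "edit_course", "manage_sub_account"},
}

def can(role: str, action: str) -> bool:
    """Permission check: does this role allow this action?"""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Every click in the interface can then be gated by a check like `can(user_role, "edit_course")`, which is why students, faculty, and administrators see such different views of the same system.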

File management is an important part of the structure as well: each of these building blocks and each individual user has a designated amount of storage space for associated digital media. There are storage folders associated with students, faculty members, courses, and on up.

Abstraction layers designed for managing the complexity of the institution

So why take the time to lay out these building blocks for Canvas as an instance for the institution? By examining how these units exist in nested layers of abstraction, you begin to see how the software is built to manage the complexity of administering the learning process for the institution. Learning experiences for students only exist at the course layer and below.

Professor Martin Irvine, a faculty member at Georgetown University, describes this method of organization in terms of layering, abstraction, and black-boxing. Because learning in higher education is managed across multiple levels, both horizontally and vertically, a software platform that purports to manage that process needs to be designed so that it can account for varying degrees of permission and access across those layers. For that reason, “the details of complexity in a module or subsystem can be “hidden” (black-boxed) from the rest of the system with only “interfaces” (structures that create interconnections) to the module as needed by the system” (Irvine, 2017). From a student’s perspective, all they see is their list of courses, with the relevant information communicated to them through their view of the system.

In this manner, Canvas allows institutional leaders and stakeholders to manage student learning from the micro level on up to the organizational structure for courses and departments at higher tiers. It is a structure that is very good at managing and administering the learning process. But because student learning takes place at the course level and below, any assertion about the relative importance of management versus the quality of the learning experience requires a deeper look at the functionality of those units within the larger structure.

Courses and Modules define where a learning experience is either managed for utility, for learning, or both.

Modules organize the flow for learners and learning experiences within a course

Figure 1.7

Image Source: Canvas Guides


Modules are what give flow and direction to an online or blended course by grouping individual pages and assignments into a cohesive unit. The folks at Instructure define these modules as the organizational unit for courses, saying that:

“Modules allow instructors to organize content to help control the flow of the course. Modules are used to organize course content by weeks, units, or a different organizational structure. Modules essentially create a one-directional linear flow of what students should do in a course. Each module can contain files, discussions, assignments, quizzes, and other learning materials” (Canvas Doc Team, 2017).

The elements contained within these modules are the pieces that determine what kind of experience students will have, and Canvas natively offers some standard templates for designing the components of a module and course.
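The “one-directional linear flow” Instructure describes can be sketched as an ordered list of items that a student works through in sequence. The module contents below are invented for illustration:

```python
# Toy sketch of a module's one-directional flow (contents invented):
# a module is an ordered list of items; a student advances linearly.
module = {
    "name": "Week 1: Introductions",
    "items": [
        {"type": "page", "title": "Welcome"},
        {"type": "discussion", "title": "Introduce Yourself"},
        {"type": "quiz", "title": "Syllabus Check"},
    ],
}

def next_item(module, completed_titles):
    """Return the title of the first item the student hasn't finished,
    or None when the module is complete."""
    for item in module["items"]:
        if item["title"] not in completed_titles:
            return item["title"]
    return None
```

Ordering items this way is what lets an instructor “control the flow of the course,” as the Canvas documentation puts it: the next step is always unambiguous.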

Pages accommodate text, images, and video as the most direct method for delivering content. These media elements can be positioned and formatted using HTML, which can also include tags that guide screen readers for learners who cannot see the content. Discussions are essentially mini message boards centered on a given topic, where the conversation is limited to that board, specifically among users who are in the course and have permission to participate. Quizzes offer templates for assessment within the module, and so on.

All of these elements are standard, and none are particularly engaging. All of them do a good job of streamlining the learning and grading processes and containing them in a neatly wrapped experience inside the learning management system. The screen-reader tags even help make sure that content is accessible to users who need it. These qualities are excellent for administering a course but are not particularly inspiring for experimenting with pedagogical approaches outside of models that emphasize the delivery of content: explaining, demonstrating, then assessing through a combination of rich media, writing prompts, and quizzes.

But these modules do allow for flexibility and experimentation for those who put in the time to design for it, especially once they begin looking outside the LMS for additional tools. In conversations with 70 thought leaders in the LMS space, the New Media Consortium concluded that “Overall, a ‘Lego’ approach to LMS was recommended to empower both institutions and individuals with the flexibility to create bespoke learning environments that accommodate their unique requirements and needs” (New Media Consortium, 2017).

While modules and their standard elements do offer some flexibility for moving pieces around, Canvas also offers the ability to integrate third-party tools into modules, courses, and even higher-level building blocks such as sub-accounts. These outside “Lego” pieces are where Canvas gives more options for accommodating learners or, for some institutions, reinforcing the administrative strengths of the platform.

LTI is used for interoperability, allowing administrators, designers, and faculty to integrate third-party applications unique to their sub-account or course.


Figure 1.8

Image Source: imsglobal

LTI, which stands for Learning Tools Interoperability, is a means for Learning Management Systems such as Canvas to integrate third-party tools using agreed-upon standards. These standards allow software systems to connect with one another securely and then interact with the relevant digital resources and databases (whether learning objects, documents, or user and participant records), similar to an API, or Application Programming Interface.

In this case, as with other similar standards, there is an organization that helps facilitate agreement on how this interoperability can take place and through what kinds of protocols. From their website: 

“Learning Tools Interoperability is a standard developed by IMS Global Learning Consortium. LTI prescribes a way to integrate rich learning applications (often remotely hosted and provided through third-party services) with platforms like learning management systems (LMS), portals, learning object repositories or other educational environments managed locally or in the cloud. In LTI, these learning applications are called Tools, delivered by Tool Providers, and the LMS or platforms are called Tool Consumers” (“Learning Tools Interoperability | IMS Global Learning Consortium,” n.d.).
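The handshake between “Tool Consumer” and “Tool Provider” that the standard describes begins with a signed launch request. As a rough sketch, LTI 1.1 launches are signed with OAuth 1.0a HMAC-SHA1; the consumer key, secret, and URLs below are hypothetical placeholders, not real Canvas credentials:

```python
# Hedged sketch: signing an LTI 1.1 launch request with OAuth 1.0a
# (HMAC-SHA1). All keys, secrets, and URLs are hypothetical examples.
import base64
import hashlib
import hmac
import urllib.parse

def sign_lti_launch(url, params, consumer_secret):
    """Build the OAuth 1.0a HMAC-SHA1 signature for an LTI launch POST."""
    # 1. Percent-encode and sort all parameters into the signature base string.
    encoded = sorted(
        (urllib.parse.quote(k, safe=""), urllib.parse.quote(str(v), safe=""))
        for k, v in params.items()
    )
    param_string = "&".join(f"{k}={v}" for k, v in encoded)
    base_string = "&".join([
        "POST",
        urllib.parse.quote(url, safe=""),
        urllib.parse.quote(param_string, safe=""),
    ])
    # 2. Sign with the shared secret (the token secret is empty in LTI 1.1).
    key = f"{urllib.parse.quote(consumer_secret, safe='')}&".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "module-3-quiz",      # hypothetical
    "oauth_consumer_key": "canvas-demo-key",  # hypothetical
    "oauth_signature_method": "HMAC-SHA1",
}
signature = sign_lti_launch("https://tool.example.com/launch",
                            launch_params, "demo-secret")
```

Because both sides share the secret, the Tool Provider can recompute the same signature and trust that the launch really came from the LMS, which is what lets remotely hosted tools plug into locally managed courses.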


Figure 1.9

Image Source: Canvas Guides


LTI enables Canvas to act as an open-source “platform” where third-party vendors can sell or give away integrations for use with the LMS. In some cases, these integrations are standalone additions developed specifically to operate with the LMS; in other cases, they allow for synergy between the LMS and another platform, such as Google Drive or social media. The app store is a tool-rich environment where software developers can create, customize, test, and deploy new applications, but the LTI format also gives institutions the option to incorporate their own customized solutions. Institutions without the resources to build standalone integrations can mix and match those available in the app store. 

While LTI allows for many combinations, the limitation in Canvas’s interoperability lies in its limited support for other standards and channels, which essentially restricts the pool of integrations and use cases to the apps available in the Canvas app store. This is what makes it difficult to fully distinguish Canvas from the “Content Management System” (CMS) model, since those systems operate on similar cloud-based models that channel users into proprietary app stores. A CMS has users too, and often gives you the ability to organize and deliver content as well. What would differentiate Canvas further is support for additional standards that allow increased interoperability and synergy with existing and emerging e-learning formats: SCORM, Tin Can, and so on.


By opening up the basic layers and building blocks of the system, it becomes apparent that Canvas does its job well as a learning management system. It can also offer powerful and extensible options for creating unique learning experiences, though not without bucking convention and putting extra thought into design, implementation, and training.  

I’ll conclude by going back to the Horizon Report’s insights on how these adjustments might be taken into consideration for anyone seeking to implement or rethink their approach to Canvas as a Learning Environment, and not just as a Management System:

“The overarching goal of next-generation LMS is to shift the focus of these platforms from enabling administrative tasks to deepening the act of learning. Traditional LMS functions are still a part of the ecosystem, but reimagined incarnations deviate from the one-size-fits-all approach to accommodate the specific needs of all faculty and students.” (New Media Consortium // Educause Learning Initiative, 2017).

Canvas is a streamlined “one-size-fits-all” platform, but it achieves that in large part by enclosing its users within the platform. Institutional stakeholders are in a position to enhance the learning experience by consciously taking that aspect of the platform into account when seeking out integrations and when training stakeholders such as faculty and designers who can “open up” increased options for deeper learning.



Brown, M., Dehoney, J., & Millichap. (2015). What’s Next for the LMS? EDUCAUSE Review, 50(4). Retrieved from

Canvas Admin Guide | Canvas Guides (en). (n.d.). Retrieved December 15, 2017, from

Canvas Admin Tour. (n.d.). Retrieved December 15, 2017, from

Canvas Doc Team. (2017). What are Modules. Retrieved from

Canvas Network | Free online courses | MOOCs. (n.d.). Retrieved December 17, 2017, from

Cloud Computing Architecture: an overview. (2015, March 5). Retrieved December 16, 2017, from

Dahlstrom, E., Brooks, D. C., & Bichsel, J. (2014). The Current Ecosystem of Learning Management Systems in Higher Education: Student, Faculty and IT Perspectives. EDUCAUSE Center for Analysis and Research. Retrieved from

Grance, T., & Mell, P. (2011). The NIST Definition of Cloud Computing (No. NIST Special Publication 800-145). National Institute of Standards and Technology. Retrieved from

Instructure | Learning + Tech = Awesome. (n.d.). Retrieved December 16, 2017, from

Intro-Systems-and-Architectures.pdf. (n.d.). Retrieved December 16, 2017, from

Irvine, M. (2017). Intro-Modularity-Abstraction.pdf. Retrieved December 18, 2017, from

Learning management system. (2017, November 16). In Wikipedia. Retrieved from

Learning Tools Interoperability | IMS Global Learning Consortium. (n.d.). Retrieved December 18, 2017, from

Manovich, L. (2013). Software Takes Command (INT edition). New York ; London: Bloomsbury Academic.

New Media Consortium // Educause Learning Initiative. (2017). NMC Horizon Report > 2017 Higher Education Edition. Retrieved from

Norman, D. A. (1999). Affordance, Convention, and Design. Interactions, 6(3). Retrieved from

Norman, D. A. (2002). The Design of Everyday Things (Reprint edition). New York: Basic Books.

Serrano, N., Gallardo, G., & Hernantes, J. (2015). Infrastructure as a Service and Cloud Technologies. IEEE Software, 32(2), 30–36.

The Web: Extensible Design. (n.d.). Retrieved December 16, 2017, from

What are External Apps (LTI Tools)? | Canvas Community. (2017). Retrieved December 18, 2017, from

What is the hierarchical structure for Canvas accounts? | Canvas Admin Guide | Canvas Guides (en). (n.d.). Retrieved December 18, 2017, from



Innovations, Agreements, and the Internet

(Image Credit: geralt)

Most internet users are happy to understand it at a surface level, eager to accept it as a basic utility for other ends: profit, learning, sharing, etc. It becomes necessary to understand the design history of the internet on a deeper level, however, in order to design innovations based on its ongoing affordances. In their overview of “The internet as system and spirit”, Abelson et al. conclude that “The Internet is an object lesson in creative compromise producing competitive energy” (Abelson, Ledeen, & Lewis, 2008). I found this statement to be a helpful means of understanding the internet as a systematic set of ongoing interactions, both technical and social, rather than a “thing” in and of itself.

Webster’s dictionary defines “internet” as “an electronic communications network that connects computer networks and organizational computer facilities around the world”, citing a usage example as “doing research on the internet” (“Definition of INTERNET,” n.d.). While technically true, this definition fails to paint a full picture of what the internet really is and why it works. Earlier in their overview, Abelson et al. describe a broader convergence of interests that define the internet:

The Internet works not because anyone is in charge of the whole thing, but because these parties agree on what to expect as messages are passed from one to another. As the name suggests, the Internet is really a set of standards for interconnecting networks. The individual networks can behave as they wish, as long as they follow established conventions when they send bits out or bring bits in (Abelson, Ledeen, & Lewis, 2008).

Did you catch the difference in these definitions? If you stopped at the dictionary level, it seems to connote that the network is an organic and self-sustaining entity – a thing. The technology in and of itself does not make the internet work. Whether in the form of agreed-upon protocols or policies, the Internet works because of “standards” that govern mediations within and without the technology. Vint Cerf, David Clark, et al. note that “The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs as well as utilizing the community in an effective way to push the infrastructure forward.” (“A Brief History of the Internet | Internet Hall of Fame,” n.d.).  
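The “established conventions” Abelson et al. describe can be illustrated with a toy sketch of message encapsulation: each layer wraps the payload in agreed-upon fields, and any network that honors those fields can deliver the message without knowing anything else about its origin. The field names below are simplified placeholders, not the real IP or TCP header formats:

```python
# Hedged illustration of agreed-upon conventions: each layer wraps the
# message in its own header, and any receiver that follows the convention
# can unwrap it. Field names are simplified placeholders, not real headers.
def encapsulate(payload, src, dst):
    # Transport layer: which conversation the bytes belong to.
    transport = {"protocol": "TCP", "port": 80, "data": payload}
    # Network layer: where the packet is going, regardless of what it carries.
    network = {"protocol": "IP", "src": src, "dst": dst, "data": transport}
    return network

def deliver(packet):
    # A receiving network only needs the agreed-upon fields; it needs no
    # knowledge of who built the packet or what equipment they run.
    return packet["data"]["data"]

packet = encapsulate("GET /index.html", "192.0.2.1", "198.51.100.7")
message = deliver(packet)  # -> "GET /index.html"
```

The point of the sketch is social as much as technical: nothing enforces these conventions except the mutual benefit of following them, which is exactly why the internet is better understood as a set of agreements than as a thing.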

The internet has turned out the way it has because of correlating agreements, decisions, and incentives: a communicative network facilitated by agreed-upon standards. It does not stand on its own. These standards shape ongoing incentives based on continued agreement, leaving room for scalable experimentation. Anyone desiring to innovate in today’s global economy, then, cannot afford to sit out of that conversation.



A Brief History of the Internet | Internet Hall of Fame. (n.d.). Retrieved November 11, 2017, from

Abelson, H., Ledeen, K., & Lewis, H. (2008). Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion (1 edition). Upper Saddle River, NJ: Addison-Wesley Professional.

Definition of INTERNET. (n.d.). Retrieved November 11, 2017, from

Software as a Metamedium: Evolution is the wrong Analogy


Why have media-centric software applications and personal computing devices developed into what they are today versus some other way? What would the present look like had different decisions been made along the way? The full answer to these questions lies outside the scope of this post; however, we can peer into the thought processes of some of the key individuals to better understand them. Based on some of these examples, it becomes apparent that the collective direction of personal computing as a metamedium has developed from choices, not natural evolution. I appreciated Lev Manovich’s citation of Alan Kay on this topic: “The best way to predict the future is to invent it” (Manovich, n.d.).

Alan Kay and his team at XEROX PARC helped establish this direction by making the design choice to consolidate media development applications within a unified framework. Lev Manovich describes how this combination afforded a vision “in which the computer was turned into a personal machine for display, authoring and editing content in different media”, and how, “[b]y developing easy to use GUI-based software to create and edit familiar media types, Kay and others appear to have locked the computer into being a simulation machine for ‘old media’” (Manovich, n.d.). These decisions set the heading for where and why we use software as a metamedium today, particularly when thinking about our personal computers and mobile devices.

We have been using Apple computers and devices as our running case study for how to de-black-box technologies and open them up to see the interwoven stories of design choices and functions. The Apple 1 was one of the earliest machines to make our modern conception of the “personal” computer economical. But while the Apple 1 and its successors helped shift the industry toward this design concept, that wasn’t Steve Wozniak’s original thought process. In an interview with NPR, he commented on his mindset:

When I built this Apple 1… the first computer to say a computer should look like a typewriter – it should have a keyboard – and the output device is a TV set, it wasn’t really to show the world here is the direction it should go. It was to really show the people around me, to boast, to be clever, to get acknowledgment for having designed a very inexpensive computer. (“A Chat with Computing Pioneer Steve Wozniak,” n.d.)

One of the people “around him” was Steve Jobs, who helped monetize and scale the Apple 1 as a package of pre-existing ideas based on pre-existing decisions. The Apple 1 and its successors borrowed heavily from the culmination of design decisions by Alan Kay and his team, and Kay’s team was able to synthesize their concepts from preexisting tools for the technical mediation of media, which in turn stemmed from millennia of applied “old media”.


A Chat with Computing Pioneer Steve Wozniak. (n.d.). Retrieved November 1, 2017, from


Manovich, L. (n.d.). Software Takes Command (Vol. 5). New York: Bloomsbury.

Rawlinson, N. (n.d.). History of Apple: The Story of Steve Jobs and the company he founded. Retrieved November 1, 2017, from

Computational Thinking and Musical Composition

In his open-access book on computation, David Evans says that “Computing changes how we think about problems and how we understand the world.” It certainly has for me this week, but not in the way I expected. I was fascinated to see how computing and computational thinking have enabled research labs and enthusiasts to develop algorithms that compose music in the style of a given music genre.

Jeannette Wing, Professor of Computer Science at Columbia University, has consistently evangelized computational thinking as an essential skill across all domains, not just in the traditional domain most people associate with computer programming.

Computational thinking is a way humans solve problems; it is not trying to get humans to think like computers. Computers are dull and boring; humans are clever and imaginative. We humans make computers exciting. Equipped with computing devices, we use our cleverness to tackle problems we would not dare take on before the age of computing and build systems with functionality limited only by our imaginations (Wing 2006).

The possibilities for interesting applications are astounding in an era where we can set algorithms loose on challenges such as musical composition.

(Credit: Algorithmic Music Composer)

In this video, you can watch a computer generate an improvised jazz track. Watch as the two melodies stream along, one on bass and the other on guitar. But how would a computer know how to do that? In and of itself, it doesn’t. As Wing says, computers are dull, not imaginative in and of themselves. People empower computers to do imaginative things such as improvisational composition, and they do so by thinking through the problem computationally before and during the actual process.

Google’s course on computational thinking for educators outlines the process of solving problems computationally in more detail (Google 2017); an abbreviated list will serve our purposes here:

  • Decomposition – Breaking down data, processes, or problems into smaller, manageable parts
  • Pattern Recognition – Observing patterns, trends, and regularities in data
  • Abstraction – Identifying the general principles that generate these patterns
  • Algorithm Design – Developing the step by step instructions for solving this and similar problems
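The four steps above can be sketched in miniature. A very common approach to style imitation (one plausible way labs like those described below proceed, not necessarily their exact method) is a Markov chain: decompose melodies into notes, recognize which note tends to follow which, abstract those counts into transition rules, and design an algorithm that walks the rules to emit a new melody. The toy corpus here is hypothetical:

```python
# Hedged sketch of the four steps applied to melody generation: a
# first-order Markov chain learned from a toy (hypothetical) corpus.
import random

# Decomposition: treat each melody as a sequence of individual notes.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G", "E"],
    ["E", "G", "A", "G", "E", "C"],
]

# Pattern recognition + abstraction: tally which note follows which,
# turning concrete melodies into general transition rules.
transitions = {}
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(nxt)

# Algorithm design: step-by-step instructions for emitting a new melody
# by repeatedly sampling from the learned transitions.
def improvise(start, length, rng=random.Random(0)):
    note, melody = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions.get(note, [start]))
        melody.append(note)
    return melody

new_tune = improvise("C", 8)
```

Real systems use far richer representations (rhythm, harmony, long-range structure), but the division of labor is the same: the imagination lives in the modeling choices, while the computer just follows the algorithm.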

Following these steps, someone imaginative had to use computational thinking to break down the problem, amass a collection of jazz music to analyze, and then develop step-by-step instructions for the computer to find principles and patterns within that music and decide which ones to emulate in its own composition.

Another example comes from “Flow Machines”, a research group developing algorithms for musical composition.

Check out this video for an AI-generated melody in the style of the Beatles:

(Credit: Flow Machines)

Or this video for harmonies steeped in the style of Bach:

(Credit: Flow Machines)

In the case of the Beatles video, musicians are collaborating with the algorithms, adding the vocals to the AI’s melody. For the harmonies imitating Bach’s style of composition, the data is based on a database of sheet music. You can even try to guess the difference here. In any case, it has been fascinating to see how computational thinking opens up problem solving in traditionally right-brained areas such as music.


David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. Oct. 2011 edition. CreateSpace Independent Publishing Platform; Creative Commons Open Access:

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.

Google. Computational Thinking for Educators – – Unit 1 – Introducing Computational Thinking. Retrieved October 22, 2017, from

Tinder: Mapping of Digital Information and Meaning

I’ve never actually used Tinder, but the basic premise seemed perfect as a case study for how information as a technical signal can be compared and contrasted with the meaning-making that accompanies those signals being sent and received. For those not in the know, Tinder is an app designed for matching people who may or may not be strangers, based on the mutual understanding that they are already attracted to each other (or at least to their photos and profiles). But technically speaking, what is happening behind the scenes before that understanding can take place? What does a “swipe” mean in context? How do we know that? How can a digital process with only two signals, someone swiping left or right, result in contextual meaning?

In “The Information Paradox”, Denning and Bell resolve the ambiguity of this relationship, saying, “The association between a sign and its referent is new information” (Bell & Denning, 2012). So let’s take a look at how “swiping” acts as a sign for sending technical information, what its referent is for both users on a technical level, and how that association leads to “meaning making” for both of them.

On a procedural and technical level, the process toward this “matching” is pretty straightforward:

A user scrolls through profile photos of other users in their area, using a finger on the screen to shuffle them either to the left or the right of the screen.

  1. Tactile gestures on the screen filter interest in the people being viewed: “swiping” a photo to the left discards it, while “swiping” it to the right saves that profile’s information for future signaling and receiving of text communication. The same signals can be sent by touching the corresponding icons on the screen, an “x” to pass or a heart icon to like.
  2. These signals are encoded digitally and trigger electronic changes that label the user either as available to initiate a match and conversation, or as ruled out for matching and the initiation of digital contact.
  3. The people whose profiles are being examined do not receive any signal based on these gestures UNLESS they happen to have “swiped” the other user’s profile photo to the right as well.
  4. At that point, both users are notified by the app of a “match” at the same time, and a text conversation is initiated in case they want to get together.
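The matching logic the steps above describe fits in a few lines. This is a hedged simplification of the behavior, not Tinder’s actual implementation, and the data model (a dictionary of who has liked whom) is purely illustrative:

```python
# Hedged sketch of mutual-match logic: a simplified model, not Tinder's code.
def swipe(likes, user, target, direction):
    """Record a swipe; return True only if it completes a mutual 'match'."""
    if direction == "right":
        likes.setdefault(user, set()).add(target)
        # A match exists only when both users have swiped right on each other.
        return user in likes.get(target, set())
    # A left swipe is simply discarded; no signal reaches the other user.
    return False

likes = {}
first = swipe(likes, "alice", "bob", "right")    # no match yet: bob hasn't swiped
matched = swipe(likes, "bob", "alice", "right")  # now both have swiped right
```

Note how the asymmetry in the prose is preserved: Alice’s first right swipe produces no observable signal for Bob; only Bob’s reciprocal swipe turns two stored signals into one shared “match” event.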

How then do these people draw meaning? “Information is the difference that makes a difference” (Bateson, 2000). The difference between “left” and “right” is where meaning is drawn. Each user has become familiar with the procedural actions that lead to a “match” notification, and so they know that they both have “swiped right” as a show of potential attraction or interest. The absence of a “match” notification would have meant that the other user either hasn’t seen their picture, or has “swiped left” to signal a lack of interest.

Both users are simultaneous receivers of each other’s previous asynchronous signals of interest. In this context of ‘information as a design problem’, Tinder organizes digital information tied to the relationships between swipes and profile photos to “design and control patterns and quantities of electrical current (and radio waves) as signals that map onto human sign and symbol structures” (Irvine, n.d.). The difference in which database or bucket each photo is swiped toward is ultimately the starting point for a slew of potential meanings, and for the initiation of a more nuanced level of electronic signaling within the ensuing text conversation.  



Bateson, G. (2000). Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology (1 edition). Chicago: University of Chicago Press.
Bell, T., & Denning, P. (2012, December). The Information Paradox. American Scientist, (100).
Irvine, M. (n.d.). Irvine-Information-Theory-Intro-820.pdf. Retrieved October 18, 2017, from
Rocchi, P. (Ed.). (2011). Logic of Analog and Digital Machines (UK ed. edition). New York: Nova Science Publishers.

Opening up a Macbook: Storytelling through Context

A few years back, my 2012 MacBook Pro died, becoming nothing more than an occasional bookend on my shelf. Because the screen had large cracks, I tore it off, meaning to eventually pull out the hard drive and recover some of my lost photos. I always thought it might be interesting to open it up and learn more about how computers work; it turns out that doing so also opens up an understanding of how the world works.

By disaggregating the various components that make up my headless corpse of a laptop, I can get a contextual view of the components not only within its shell, but also place those components into context within industry, research, and policy. Irvine explains the systemic nature of the relationship between technology and the world in which we actively live and operate, stating that “Any view of ‘technology’ and ‘society/culture’ that begins with the assumption that these terms represent real separate domains is false and can only lead to useless false dichotomies and erroneous analyses….” (Irvine, n.d.)

With newfound confidence browsing iFixit trying to understand these concepts, I decided to open my old bisected machine up:

By examining each of these components on its own merits, and then thinking through how and why it works, the questions that arise lead inevitably to the required institutions, disciplines, and principles across a wide variety of domains, each with its own stories to tell. This image in particular is significant because it demonstrates a transition from a reliance on discs toward a future more reliant on the combination of larger hard drives and faster internet speeds.

(1) The CD/DVD writer is all but forgotten now, but less than ten years ago it was indispensable as a form of recordable media for students working, gaming, learning, or flirting with classmates (back when we burned custom playlists for people). The technology had to be researched and manufactured, standards had to be hashed out to make it the standard medium, and most computers had to be designed with the capacity to hold at least one.

(2) The Hard Disk Drive is basically a disc housed permanently inside that small box, versus the CD/DVD writer, where discs are added and removed on a regular basis. For the 2012 MacBook Pro, models began to be offered with the choice of either a Hard Disk Drive like this one or the incoming “solid-state” drive, an advancement in memory technology that significantly improved the speed of accessing memory, with an eye toward a future where disc drives would not be necessary. Apple was already designing for a future where it could mass-produce machines in which the space required by a CD/DVD writer could be better optimized for other components.

In their book, “A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems”, Vermaas et al. state that

“Technology is not just a collection of technological products, but it is also about how people, more in particular engineers, develop these products, about how people use them to meet their specific ends, and about how this all changes the world as we know it” (Vermaas, Kroes, Poel, & Franssen, 2011).

There are stories behind why these components were designed and combined the way they are, as artefacts and within their socio-institutional contexts. There are even more stories told by how the ensuing affordances and constraints played out in individual lifestyles. None of these stories are understood as well in isolation. Opening up technologies like my old MacBook offers a more interesting lens for understanding the systemic narrative as a whole.


Irvine, “Understanding Sociotechnical Systems with Mediology and Actor Network Theory (with a De-Blackboxing Method)” 

Vermaas, P., Kroes, P., Poel, I. van de, & Franssen, M. (2011). A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems. Morgan & Claypool.

Understanding “Panopto” through the Lens of Semiotic Affordances

Understanding Technologies as Tools to Solve Specific Problems

For the average consumer, technological services and products are rarely understood beyond how they fit into daily living and workflow. How ironic, then, that these basic questions related to “how” and “why” they are used are the fundamental starting points for understanding these technologies on a deeper level. Understanding which specific kinds of problems these technologies are designed to solve opens the mind to how they can be better used, repurposed, or abandoned in favor of a more relevant tool. As Cole notes, tools are “simply prior natural things reshaped for the sake of entering effectively into some type of behavior” (Cole, n.d.). The same principle applies to more sophisticated tools such as software. By understanding the design problem for that intended “behavior”, we understand the tool in context.

Cognitive and semiotic technologies, in particular, are best explored by “understanding what kind of technologies these are and how they should support all the functions and activities associated with symbolic cognition and expression” (Irvine, n.d.). For technologies in a learning context such as a university, consciously examining how a technology is used for cognition or expression, as well as tracing how similar problems and tasks were solved across time, is a starting point to understanding them now.

Let’s apply these lenses to one such tool many university faculty use for capturing and managing their lectures called “Panopto”.

Understanding “Panopto” by Contextualizing Its Semiotic Affordances

Panopto is a suite of software used primarily for recording lectures for digital or online consumption; it also contains organizational functions for managing and embedding recorded media as needed. The typical faculty member sees this software integrated into their learning management system, but it can also be accessed on the desktop or in the cloud. The technology initiates and captures recordings from a variety of sources, including activity on the computer screen, video from a webcam or external camera, slide presentations, or audio alone.

See the video I recorded below for a brief demonstration of what this looks like:

The tool is used for semiotic purposes. It mediates the transmission of cognitive learning objects, the recordings, from professor to online learner through several layers of representation. The video begins its life as a recording, makes its way into the cloud as part of Panopto’s web library, and is then disseminated as needed through the learning management system to the learner.

As can be seen, Panopto differentiates itself from other technology for the professor as a tool specifically meant to capture and communicate knowledge within a digital environment, lending itself especially to online lectures and blended learning. Historically, professors have used physical environments and physical cognitive artefacts to solve the same problem of capturing and delivering knowledge. Libraries have served as central hubs for preserving, managing, and accessing knowledge for thousands of years. Lecture halls and classrooms have had a similar lifespan. Panopto is software, but more than that, it is a tool for delivering knowledge, and a hub for managing it.

Don Norman touches on looking at cognitive technologies and artifacts through the lens of the problems they solve and how those problems have been addressed historically: “The evolution of artifacts over tens of thousands of years of usage and mutual dependence between human and artifact provides a fertile source of information about both” (Norman, 1991).

The nature of the problem itself is not so different. While the video navigation feels similar to YouTube for the student, Panopto is not a tool for capturing and sharing knowledge on a massive scale; it is meant specifically for contained classroom settings. By understanding what it isn’t, you understand what it is. Looking at Panopto as a cognitive technology solving semiotic problems, it becomes easier to gauge whether the tool itself is useful and intuitive for solving these problems, or not.


Cole, M. (n.d.). Cultural Psychology: A Once and Future Discipline. Retrieved from

Irvine, M. (n.d.). Introduction to Cognitive Artefacts and Semiotic Technologies. Retrieved September 26, 2017, from

Norman, D. A. (1991). Designing Interaction: Psychology at the Human-Computer Interface. Cambridge University Press. Retrieved from