
Assessing Student Work in Omeka/Neatline

Reading text and reading hypertext are two very different activities, and the difference has real implications for teachers faced with evaluating students’ digital projects. Digital projects generate nonlinear products, and assessing them presents challenges that the traditional essay does not.

Think about grading student term papers for a moment. The piece of work delivered by the student typically consists of one continuous, sequentially read document. Now think about the nature of a digital project you might assign to your students. Chances are the product you envision will consist of multiple interlinked webpages, with additional information such as images and multimedia nested in various constellations. There might be a home or start page, but it is usually up to the viewer to decide where to go next. How do you evaluate this type of student work so that the task is manageable and also takes these new dimensions into account?

This spring, I was part of a Digital Humanities team (Hannah Jacobs, Liz Milewicz, and Will Shaw) that collaborated with professor Alicia Jiménez of Duke University’s Classical Studies department to integrate a digital component into her Roman Spectacle course (CLST 354). Our team’s goal was to help Jiménez incorporate Omeka and Neatline into students’ coursework so that the students could better perceive the topographical and chronological evolution of “monumentalization” in Rome. Through my work with this course and on this project, I learned some vital lessons about assessing digital projects.

1. What exactly did each student contribute?

When using a shared platform, figuring out what each student contributed can be a big challenge. In the case of the Roman Spectacle course, Jiménez asked the eight enrolled students to make a digital map and timeline of monuments in the ancient city of Rome by adding and describing items in the content management platform Omeka and contributing to a Neatline class map and timeline exhibit drawing on these items. Specific monuments were assigned to each student, but some students were asked to create additional items and content (e.g., triumphal routes) to illuminate their assigned monument or feature.

By the end of the semester, the students together had added roughly 100 individual items, and the prospect of painstakingly tracking down in Omeka and Neatline the sum of what each student had added was daunting, especially since some of these contributions were deeply nested. Omeka and Neatline do not provide the kind of real-time collaborative editing environment that many of us have become accustomed to from using Google Drive: there is no version history, and attribution can be tricky to figure out at first, since the list in the Browse Items tab does not automatically include a column displaying the user who added each item.

[Screenshot: The Browse Items list does not include a column with the contributing user’s name.]
Lesson: Make sure a student’s work can be encapsulated or easily attributed.

Whatever platform you are exploring for use in your class, make sure there is some reliable way of keeping each student’s work encapsulated and attributable. Omeka can be rather unwieldy in this regard. However, it is possible to filter items by the user who added them using the advanced search function, a feature that escaped my attention for the longest time.

[Screenshot: Accessing the Advanced Search functions.]
[Screenshot: “Search By User” filters items by user.]
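
For a larger class, the same filter can be run outside the admin interface. The following is a minimal sketch, assuming the Omeka Classic REST API is enabled on the site and an API key has been issued; the site URL and key are hypothetical placeholders, and the owner and element_texts fields reflect my reading of the Omeka Classic API’s item representation.

```python
import requests

BASE_URL = "https://example.org/omeka/api"  # hypothetical Omeka site
API_KEY = "YOUR_API_KEY"                    # hypothetical API key

def items_by_owner():
    """Group (id, title) pairs of items by the user who added them."""
    by_owner = {}
    page = 1
    while True:
        # Page through /items; page size is governed by the site's settings.
        resp = requests.get(f"{BASE_URL}/items",
                            params={"key": API_KEY, "page": page})
        resp.raise_for_status()
        items = resp.json()
        if not items:
            break
        for item in items:
            owner_id = (item.get("owner") or {}).get("id", "unknown")
            # Use the first Dublin Core Title, if the item has one.
            title = next((et["text"] for et in item.get("element_texts", [])
                          if et["element"]["name"] == "Title"), "(untitled)")
            by_owner.setdefault(owner_id, []).append((item["id"], title))
        page += 1
    return by_owner

for owner_id, items in items_by_owner().items():
    print(f"User {owner_id}: {len(items)} item(s)")
    for item_id, title in items:
        print(f"  #{item_id}: {title}")
```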

Another solution, though not completely reliable, is to require that students manually attach their names to all items they create. In our class, the Digital Humanities team asked students to add their name in parentheses to the title of an item and, additionally, in the Contributor field for that item. (While the Creator field would be convenient for recording attribution, since it appears as a sortable column in the Browse Items list, it was not designed to contain this information.) Another option would be to ask students to gather all their items in an Omeka Collection labeled with their name, which would keep everything sorted by student from the beginning. However, there is still a risk that students will be haphazard about this and forget to add an item to their collection.
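
To catch the haphazard cases, a short script can flag items that are missing Contributor metadata before grading begins. The sketch below reuses the same hypothetical API setup as above; the Contributor and Title element names come from the Dublin Core set Omeka installs by default.

```python
import requests

BASE_URL = "https://example.org/omeka/api"  # hypothetical Omeka site
API_KEY = "YOUR_API_KEY"                    # hypothetical API key

def unattributed_items():
    """Yield (id, title) for items with no non-empty Contributor text."""
    page = 1
    while True:
        resp = requests.get(f"{BASE_URL}/items",
                            params={"key": API_KEY, "page": page})
        resp.raise_for_status()
        items = resp.json()
        if not items:
            break
        for item in items:
            texts = item.get("element_texts", [])
            if not any(et["element"]["name"] == "Contributor"
                       and et["text"].strip() for et in texts):
                title = next((et["text"] for et in texts
                              if et["element"]["name"] == "Title"),
                             "(untitled)")
                yield item["id"], title
        page += 1

for item_id, title in unattributed_items():
    print(f"Item #{item_id} lacks a Contributor: {title}")
```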

2. Am I evaluating all the student’s digital work, or just what’s visible in the final product?

A second important consideration when grading digital projects is whether you are only interested in evaluating the final project or whether you want to assess the “hidden” work. Just as with a research paper, much of the work of a digital project is not visible in the final version. While the public-facing website may draw on data stored on the content management platform, often it is not an exhaustive reflection of the information students recorded or the iterations of entry and editing that they went through. For the Roman Spectacle course, we used Omeka as the platform to manage the data and Neatline as the public-facing outlet to deliver the data. While Omeka and Neatline generally work very smoothly together, in some instances not all of the information users add to Omeka is also surfaced in the related Neatline exhibit. Metadata accompanying images, for example, is captured by the Omeka interface, but not surfaced in the Neatline entries. Because we were trying to teach students appropriate use of scholarly images in digital humanities work, it was important to look closely at all the image metadata in Omeka, not just what was visible in the Neatline exhibit.
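
Because the image metadata lives in Omeka rather than in the Neatline exhibit, it helps to pull it out for review in a single pass. Here is a minimal sketch, again assuming the hypothetical Omeka Classic API setup from above, and that the /files endpoint can be filtered by item, which is my reading of the Omeka Classic API documentation.

```python
import requests

BASE_URL = "https://example.org/omeka/api"  # hypothetical Omeka site
API_KEY = "YOUR_API_KEY"                    # hypothetical API key

def print_file_metadata(item_id):
    """Print every element text attached to an item's files."""
    resp = requests.get(f"{BASE_URL}/files",
                        params={"key": API_KEY, "item": item_id})
    resp.raise_for_status()
    for f in resp.json():
        print(f"File #{f['id']} ({f.get('original_filename', '?')}):")
        for et in f.get("element_texts", []):
            print(f"  {et['element']['name']}: {et['text']}")

print_file_metadata(42)  # hypothetical item ID
```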

Lesson: Assess other aspects of the digital project, not just the front end.

To make sure we are assessing students comprehensively, especially for digital projects in which students are learning new methods for researching and displaying content, we need to evaluate all aspects of their work, not just the final project. In an Omeka/Neatline ecosystem, this means evaluating the quality of data and metadata (Omeka) and considering how students presented this information in order to achieve the project’s objectives (Neatline). This bipartite approach is reflected in the grading rubric and grade calculation sheet the Digital Humanities team developed for the Omeka/Neatline section of the Roman Spectacle course.

3. How do I evaluate a student’s use of the digital medium?

Compared to the classic term paper, the possibilities for delivering and displaying information online are practically infinite. Web-based publications can include 3D models, audio files, interactive maps and timelines, and so on; site design and functionality can vary drastically as well. It is easy to imagine how this can quickly evolve into a bewilderingly protean array of information for an instructor to grade. This may be fine for a dedicated digital humanities course, where students are expected to develop their own projects from start to finish, but it can unnecessarily complicate grading when the digital component complements other course assignments that students must complete as well. That said, with any digital project there will be some amount of technical skill students must have or develop in order to complete the assignment. How do we take technical effort and stylistic decisions into account in evaluating the project?

Lesson: Provide students with a template or precise style guidelines.

Providing fairly precise style parameters can help channel students’ creative energies and make evaluation more straightforward and manageable for the instructor. This can take the form of a template or styling guidelines covering color schemes and other facets of formatting. The more constraints you impose, the more streamlined grading becomes, but also the more monotonous the results. The goal is to strike the right balance between uniformity and room for creativity. In the Roman Spectacle course, for example, the instructional team specified most of the basic styling parameters but gave students the opportunity to solve problems creatively, such as representing triumphal routes on a map using vector drawing tools.

Lesson: Develop a rubric that matches the structure of the software solution used.

Grading rubrics originally developed for analog student work must be redesigned if they are to measure students’ use of the digital medium effectively. Our team identified digital literacy and technical skill as facets of performance on digital projects that differ from conventional course assignments, and we incorporated these two categories into our rubric. Technical proficiency, however, was not our main focus while grading projects (only 20% of the mark). More important to us were content knowledge and digital literacy (80% of the mark). It was essential to us to see that the students had understood the content material and demonstrated that they could use the digital tools at their disposal effectively and creatively to communicate a message to viewers.
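
As a concrete illustration of how such a rubric translates into a final mark, here is a minimal sketch of the weighted calculation. The 20%/80% split is the one described above; the even division of the remaining 80% between content knowledge and digital literacy is my own assumption for the example, not the course’s actual breakdown.

```python
# Weighted rubric calculation: a sketch, not the course's actual sheet.
WEIGHTS = {
    "technical_proficiency": 0.20,  # stated above
    "content_knowledge": 0.40,      # assumed split of the remaining 80%
    "digital_literacy": 0.40,       # assumed split of the remaining 80%
}

def weighted_grade(scores):
    """Combine per-category rubric scores (0-100) into a final mark."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

# Example: strong content work, slightly weaker technical execution.
print(weighted_grade({
    "technical_proficiency": 85,
    "content_knowledge": 92,
    "digital_literacy": 88,
}))  # -> 89.0
```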

Lesson: Ask students to write a reflection piece that explains their design choices.

If students are also asked to keep a log of their decisions regarding data and metadata structure and website design, they can easily convert these notes into a reflection piece that illuminates the rationale behind their choices. Such a piece can also help the instructor gauge a student’s digital literacy.


Adrian High is a fourth-year Classical Studies PhD student at Duke University whose research centers on inscriptions and papyri that capture what everyday life was like in the ancient world. Currently, he is working on a rich archive of inscriptions from Delphi recording more than 1,000 slave manumissions (see digital visualization prototype). In the spring of 2017, he served as a Humanities Writ Large research assistant with Digital Scholarship Services.