The ICE Lab at Digital Humanities 2013

The ICE Lab recently presented in Lincoln, Nebraska at the Digital Humanities 2013 conference. This was hosted by the University of Nebraska-Lincoln from 16-19 July, 2013.

We presented two long papers at the conference:

The first, "The Advanced Identity Representation (AIR) Project: A Digital Humanities Approach to Social Identity Pedagogy," gave an in-depth overview of the activities at our lab and how our processes and applications relate to the digital humanities field. It was a chance to share both the vision and work processes of our lab, as well as to describe the theoretical framework underlying many of our projects, also touching on how we fulfill our pedagogical aims.

The second, "A Digital Humanities Approach to the Design of Gesture-Driven Interactive Narratives," covered the GeNIE System, its motivation, design, and implementation. As a case study, we also showcased Mimesis.

Of course, it was also a fantastic opportunity to listen to other interesting talks. The presentations spanned various disciplines such as literature studies, history, computer science, cognitive science, and human-computer interaction, all of them related to digital humanities in thoughtful and exciting ways.

The first presentation was "User Ethnographies: Informing Requirements Specifications for Ireland’s national, trusted digital repository," presented by Sharon Webb and John Keating from An Foras Feasa: Institute for Research in Irish Historical & Cultural Traditions. The presenters introduced the term Requirements Engineering (RE), the process and methodology underlying the construction of their digital repositories. RE specifies what the system or product should do, and ensures it is built upon authentic user requirements. They identified the various stakeholders and established their requirements through stakeholder engagement: stakeholder interviews covered current practices in "analogue" and digital archiving, and they also involved the content providers (as opposed to the access users). One point they emphasized was that they never explicitly asked stakeholders "What do you want?", but instead established their needs based on responses to other questions. Also interesting was the way their problem domain was defined from an HCI perspective: to build software that is "usable, useful, and used". They also mentioned long-term (digital) preservation efforts and potential problems.

Next was "The Digitized Divide: Mapping Access to Subscription-Based Digitized Resources," by Paul Matthew Gooding. This was an interesting presentation on mapping access to digital news resources across the United Kingdom. The author presented findings based on several hypotheses about what might affect access to digital news resources; for example, the physical presence of an educational institution might correspond to a larger number of individuals accessing digital news articles. In addition to showing where potential access points to these resources might be, he talked about how crucial it was to include demographic information in order to gain insight into where access to these resources appeared to be concentrated.

Another long-paper session included "Computer Identification of Movement in 2D and 3D Data," by Susan L. Wiesner, Bradford C. Bennett, Rommie L. Stalnaker, and Travis Simpson. Susan's presentation was on using 3D motion capture to learn dance moves/poses, and on identifying the occurrence of those moves when viewing a video (2D). The 3D motion capture system captured the skeletal structure and motion of the dancer, including views from the top and side. It was interesting how the training input was used to create a generalized model for identifying dance moves in unseen, 2D test samples (videos). They enlisted an expert dancer to map individual dance moves from the 3D data to labels, but the most interesting portion involved using conceptual metaphors to classify various dance moves into more abstract dance descriptors (representations), which turned out to be more useful when classifying unseen data. This is because the nuances of timing and motion make classifying individual dance moves a difficult task.

Another presentation, "Made to Make: Expanding Digital Humanities through Desktop Fabrication," by Jentery Sayers, Jeremy Boggs, Devon Elliott, and William J. Turkel, opened with a memorable quote by Neil Gershenfeld: "Personal fabrication will bring the programmability of digital worlds we've invented to the physical world we inhabit." The talk defined desktop fabrication and its goals, and covered several projects such as the "Kits for Cultural History," "Telescribe Kit," and "Flash Jewellery Kit" by the Maker Lab at the University of Victoria. The second part covered the Makerspace at the Scholars' Lab of the University of Virginia, including its history and ethos, while the third part covered the Lab for Humanistic Fabrication: Public History. The talk briefly covered the employed models and suggested references for understanding next steps and needs.

For the panel sessions, incoming ICE Lab research assistant Jason Lipshin gave a great talk on "Visualizing Centuries: Data Visualization and the Comedie-Francaise Registers Project," with co-authors Kurt Fendt (MIT HyperStudio director), Jeffrey Ravel, and the ICE Lab's very own Jia Zhang. He covered the Comedie-Francaise Registers Project (CFRP) archive and the exploratory research process employed, which involved aspects such as machine reading and combinatorial research. He also presented several visualization case studies, such as the Theater Mapping project and the development of a new browser tool using combinatorial and generative research.

Up next was "ChartEx: a project to extract information from the content of medieval charters and create a virtual workbench for historians to work with this information." The presentation was interesting because it really showcased a bridge between multiple disciplines, with the aim of improving the processes employed by historians and extending their capabilities in organizing digital charter collections. The tool developed, ChartEx, employs data mining and processing using techniques from machine learning, natural language processing, and HCI. It gives historians constructing urban historic topography more insight into the descriptions of property, the people who bought and owned it over history, and the relationships between those people. The deployed system consisted of several modules and capabilities, such as language processing, data mining, analysis of integrated documents, and a platform called the Researcher's Workbench.

The workbench was developed so that historians could manipulate and process the data themselves, allowing them to focus on the tasks that require their expert knowledge rather than the more manual ones (e.g., transcribing historical documents). It employed chronological and spatial sequencing for analyzing and constructing spatial relationships, enabling a historian to not just look at the data but to actually work with it. The team focused on understanding what historians do, not at the level of the big research questions, but at the level of the cognitive reasoning tasks they perform while working. They employed contextual inquiry around user tasks, and distilled a set of basic requirements grouped into three broad activities. The talk was very entertaining, particularly when they (jokingly) announced that their work had given birth to a new field called HCI: Historian-Computer Interaction.

Finally, another interesting talk was "Dyadic pulsations as a signature of sustainability in correspondence networks," by Michael Aeschbach, Pierre-Yves Brandt, and Frédéric Kaplan. This talk stood out because of the presented model for conversations in online discussion groups. One of the first observations motivating the model was that a sudden increase in response time often signals the end of a discussion group. Along these lines, the model uses dyadic pulsations to characterize communication between two users. A pulsation corresponds to the creation of a new communication dyad, which represents the first direct communication between two users (User A to User B). The first time User B replies to User A, a pulsation of type A (asymmetric) is produced; if User A then replies to User B again, a type M (mutual) pulsation is produced. Thus, a group that continuously integrates new members produces many dyadic pulsations.
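As a rough illustration (my own sketch, not the authors' implementation), the rules above can be read as a small state machine over a chronological log of (sender, recipient) messages; the function name and event labels are hypothetical:

```python
def dyadic_pulsations(messages):
    """Scan a chronological (sender, recipient) message log and emit
    pulsation events, following the description in the talk summary above:
      'dyad' - first direct message between two users (A -> B)
      'A'    - the first reply (B -> A), an asymmetric pulsation
      'M'    - the initiator replies again (A -> B), a mutual pulsation
    This is an illustrative sketch; the paper's exact formalism may differ.
    """
    state = {}    # unordered user pair -> (initiator, stage)
    events = []
    for sender, recipient in messages:
        pair = frozenset((sender, recipient))
        if pair not in state:
            # First direct communication between these two users.
            state[pair] = (sender, 1)
            events.append((sender, recipient, "dyad"))
        else:
            initiator, stage = state[pair]
            if stage == 1 and sender != initiator:
                # Recipient replies for the first time: asymmetric pulsation.
                state[pair] = (initiator, 2)
                events.append((sender, recipient, "A"))
            elif stage == 2 and sender == initiator:
                # Initiator replies back again: mutual pulsation.
                state[pair] = (initiator, 3)
                events.append((sender, recipient, "M"))
    return events
```

For example, the log `[("alice", "bob"), ("bob", "alice"), ("alice", "bob")]` yields a `dyad`, a type `A`, and a type `M` event in turn, while each message from a newcomer to the group adds a fresh `dyad` event.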

It was a great experience attending the Digital Humanities conference. It enabled ICE Lab students to learn more about the field, and gave insight into what current research in the field is about. There was a large variety of projects and research questions pertaining to digital humanities, and it was particularly interesting to see how some of the readings completed as part of Professor Harrell’s AIR class, and in the ICE Lab, have been applied to a wide range of different domains. We're looking forward to the next Digital Humanities conference in 2014 (which is going to be held in Switzerland!).

Written by: Chong-U Lim, PhD Student at the Imagination, Computation, and Expression Laboratory