I’m happy to announce the publication of a new journal article on badly designed charts in pension fund reports. Below is the abstract. If you have any questions or comments about it, feel free to email me at firstname.lastname@example.org.
My favorite data visualization is the one below, showing the errors in a published chart. There are several others, illustrating issues with non-zero vertical axes, area-versus-height confusion, and 3D charts.
The purpose of this study was to investigate how pension funds use charts in popular reports. Popular reports communicate a fund’s financial health to non-technical audiences, and often contain charts, tables, and other graphical elements. Do these graphics meet audiences’ information needs and align with chart best practices?
This study focused on the 60 retirement funds receiving a 2021 popular report award from the Government Finance Officers Association. The author analyzed each graphic’s topic and design.
Most funds presented key topics (such as funding rate and portfolio return), but they generally lacked helpful benchmarks or peer comparisons. Thirty percent of reports had one or more broken charts, where the visual elements did not match the underlying data, and 70% of reports contained at least one badly designed chart. These design flaws included non-zero (truncated) axes, hidden non-zero axes, and misleading 3D perspectives.
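To see why a truncated axis is misleading, consider how bar heights are drawn. A minimal sketch, using made-up funded-ratio values (82% and 86%) purely for illustration:

```python
def visual_ratio(values, axis_min):
    """Ratio of the tallest to the shortest bar as drawn.

    With a zero baseline this equals the true ratio of the data;
    with a truncated (non-zero) axis the difference is exaggerated.
    """
    heights = [v - axis_min for v in values]
    return max(heights) / min(heights)

# Hypothetical funded ratios for two consecutive years.
values = [82, 86]

print(visual_ratio(values, axis_min=0))   # zero baseline: ~1.05
print(visual_ratio(values, axis_min=80))  # axis truncated at 80: 3.0
```

A 5% difference in the data is drawn as a 3-to-1 difference in bar height, which is exactly the distortion the study flags.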
To the best of the author’s knowledge, this paper is the first to examine chart quality in pension fund popular reports.
This project measured the effectiveness of charts in SEC 10-K filings. Amazon Mechanical Turk workers and business students participated in the online experiment. The first half of the study asked participants to interpret five chart types rarely found in 10-K filings: combo charts, scatterplots, stacked bars, relative waterfall charts, and absolute waterfall charts. While participants were successful with combo charts, stacked bars, and absolute waterfalls, half were unable to interpret a scatterplot, and only a handful understood relative waterfalls. The second half of the experiment tested the effects of the three most common design flaws found in 10-K charts. Participants were influenced by each of the three flaws: line charts using a non-zero vertical axis, bar charts using an unlabeled non-zero vertical axis, and pie charts using a 3D perspective. Accounting students should be better trained to recognize deceptive chart designs, and companies should improve their 10-K charts.
Segmentation reduces learners’ cognitive load by inserting system-controlled pauses into instructional animations and video. However, many previous studies focus on conceptual knowledge and give users no control over the pacing of instruction. This two-part experiment attempted to validate segmentation in the context of procedural software instruction by applying it to an Excel conditional formatting tutorial. Learners assigned to segmented video showed neither improved knowledge transfer nor decreased cognitive load. Instead, learners using the videos successfully used the pause and rewind features to manage their own cognitive load. This study shows the importance of providing users with control over the pacing of instruction, and of testing educational theories when applying them in a new context.
With the 2020 COVID-19 pandemic, the Western AAA Meeting was pushed online. Below is a video of my presentation!
Below is an abstract highlighting the findings.
This presentation reports on the second year of results from an interactive Excel formula trainer. FormulaTrainer shows students how to write formulas through an adaptive browser-based system. The site walks students through basic math features, as well as summary, text, date, and conditional functions.
Analyzing student activity logs from three institutions shows that a major source of difficulty is brittle and inflexible learning. While most students can easily use an arithmetic operation or function in isolation, they struggle to combine them. For example, while most students can easily increase a number by a decimal, many struggle to decrease it by a percentage. Similarly, using an arithmetic operation inside of a ROUND function increases its difficulty.
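The combinations above can be made concrete. A small sketch with hypothetical numbers (the Excel equivalents are shown in comments; the specific values are made up for illustration):

```python
# Hypothetical examples of the formula patterns students must combine.
price = 200.0

# Easy in isolation: increase a number by a fixed decimal amount.
increased = price + 0.75             # like =A1+0.75

# Harder: decreasing by a percentage combines two operations.
discounted = price * (1 - 0.15)      # like =A1*(1-15%), not =A1-15%

# Harder still: arithmetic nested inside a function call.
taxed = round(price * 1.0825, 2)     # like =ROUND(A1*1.0825, 2)

print(increased, discounted, taxed)
```

Each step adds one more operation to hold in mind, which is where the activity logs show students breaking down.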
Overall, students struggle with critical thinking and problem solving. This session will show examples of concepts that students find difficult, mapped into testable Knowledge Components. Participants will leave with a better understanding of student struggles, and ideas on how to better teach Excel.
My poster titled “Mining Moodle: Extracting Assignment+Rubric Data from Moodle for AoL Purposes” will be presented at the annual AACSB Assurance of Learning Conference on March 18th. This conference brings together assessment officers from business schools around the world to discuss better approaches for measuring student learning. The poster shows how Moodle (an open-source course management system) stores assignment grading information. With the correct query, this data can be extracted into a file usable by Excel. The data provides insight into individual and aggregate student success rates. Overall, the poster helps faculty by providing a low-cost method for gathering student learning data.
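The query-to-Excel idea can be sketched in a few lines. This is a minimal stand-in, not the poster’s actual query: the toy `grades` table here only mimics the shape of the data, whereas a real extraction would join Moodle’s own assignment, rubric, and user tables.

```python
import csv
import io
import sqlite3

# Toy in-memory database standing in for Moodle's grading tables.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE grades (student TEXT, assignment TEXT, criterion TEXT, score REAL);
    INSERT INTO grades VALUES
        ('A. Student', 'Essay 1', 'Thesis',   3.0),
        ('A. Student', 'Essay 1', 'Evidence', 2.0),
        ('B. Student', 'Essay 1', 'Thesis',   4.0);
""")

# Pull rubric-level scores and write them to a CSV that Excel can open.
rows = db.execute(
    "SELECT student, assignment, criterion, score FROM grades ORDER BY student"
).fetchall()

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["student", "assignment", "criterion", "score"])
writer.writerows(rows)
print(out.getvalue())
```

Once the data is flat CSV, Excel pivot tables can compute individual and aggregate success rates with no further tooling.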
This experiment compared three forms of tutorials: text, video, and segmented video. It asked participants to follow along with a step-by-step Excel conditional formatting tutorial, and then to transfer this knowledge to a new problem. Participants assigned to the text tutorial were much slower than those assigned the video tutorials. This performance difference was mostly due to the difficulty text participants had in detecting and recovering from errors. Error detection and recovery may explain performance differences found in earlier studies comparing text and video tutorials. Participants reported no difference in cognitive load between the three instructional formats. However, those with low pre-existing Excel skills reported higher cognitive load and made more errors on the knowledge transfer task. This study also found that self-reported Excel competency is only weakly correlated with actual performance on an Excel assessment.
I’m happy to say that my article reviewing PPT files has been published. I used C# to automate the analysis of 30,000 PowerPoint files from a large academic publisher. It was a fun exercise in “big” data, involving a lot of files and a lot of data cleaning.
How Do Academic Disciplines Use PowerPoint?
This project analyzed PowerPoint files created by an academic publisher to supplement textbooks. An automated analysis of 30,263 files revealed clear differences across disciplines. Single-paradigm “hard” disciplines used less complex writing but had more words than multi-paradigm “soft” disciplines. The “hard” disciplines also used a greater number of small graphics and fewer large ones. Disciplines identified by students as being more effective users of PowerPoint used larger images and more complex sentences than disciplines identified as being less effective in this regard. This investigation suggests that PowerPoint best practices are not universal and that we need to account for disciplinary differences when creating presentation guidelines.
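The original pipeline was written in C#, but the core measurement is easy to sketch. A .pptx file is a zip archive of XML parts, and slide text lives in DrawingML `<a:t>` text runs; the function below (a Python sketch, not the published code) counts the words on one slide given its raw XML:

```python
from xml.etree import ElementTree as ET

# DrawingML namespace used for text runs (<a:t>) inside .pptx slide XML.
A = "{http://schemas.openxmlformats.org/drawingml/2006/main}"

def slide_word_count(slide_xml: str) -> int:
    """Count the words on one slide, given its raw XML (ppt/slides/slideN.xml).

    A .pptx is a zip archive; a full pipeline would read each slide
    part with zipfile.ZipFile before calling this function.
    """
    root = ET.fromstring(slide_xml)
    text = " ".join(t.text or "" for t in root.iter(f"{A}t"))
    return len(text.split())

# A tiny hand-written sample slide, trimmed to the elements that matter here.
sample = (
    '<p:sld xmlns:p="http://schemas.openxmlformats.org/presentationml/2006/main" '
    'xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main">'
    "<p:txBody><a:p><a:r><a:t>Disciplinary differences in slides</a:t></a:r></a:p></p:txBody>"
    "</p:sld>"
)
print(slide_word_count(sample))  # 4
```

Running a function like this over every slide of every deck is what makes per-discipline word counts tractable at the scale of 30,000 files.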
My latest poster presentation will be at IEEE VIS 2017 in Phoenix, Arizona, October 1–6.
Abstract— It is challenging to visualize the time component of eye-tracking data. Scanpaths can show where a single user looked, and in what order, but multiple users’ scanpaths can easily overwhelm viewers. This paper’s approach shows larger trends without hiding short duration fixations. Each user’s fixations are plotted in a separate space-time cube, where fixation x- & y-coordinates are plotted normally, but the z-axis is used to represent time. The fixations are joined by a line, which is color-coded when it intersects areas of interest (AOIs). The resulting cubes, one per user, are then placed into a 3-dimensional space side-by-side. The result can be viewed close up to see an individual user’s gaze, or zoomed out to see larger patterns. When viewed from above, the result looks similar to Sparklines. This design is demonstrated on the eye movements of users watching training videos. It is able to show patterns not visible through other techniques.
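The coordinate mapping described in the abstract can be sketched simply: screen x/y pass through unchanged, time becomes the z-coordinate, and each user’s cube is shifted sideways so the cubes sit side by side. The spacing constant and demo values below are illustrative assumptions, not the paper’s parameters:

```python
# Horizontal spacing between users' cubes (assumed units; x, y normalized to [0, 1]).
CUBE_WIDTH = 1.5

def to_space_time_points(fixations_by_user):
    """Map per-user (x, y, t) fixations into one shared 3D scene.

    fixations_by_user: list of per-user lists of (x, y, t) tuples,
    with t in seconds. Time is carried on the third (z) axis.
    """
    points = []
    for user_index, fixations in enumerate(fixations_by_user):
        offset = user_index * CUBE_WIDTH  # shift each user's cube sideways
        for x, y, t in fixations:
            points.append((x + offset, y, t))
    return points

demo = [[(0.25, 0.5, 0.0), (0.5, 0.25, 1.5)],  # user 0
        [(0.25, 0.75, 0.5)]]                   # user 1
print(to_space_time_points(demo))
```

Connecting each user’s points in time order with a line, color-coded by AOI, yields the per-user cubes the abstract describes; viewed from above, the z-axis collapses and the rows read like sparklines.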
I’ve been working on a better way to visualize data generated by my eye tracking camera. The typical approach smashes the data into a single image. However, I’m really interested in seeing how eye gaze moves over time, and aggregating loses exactly that temporal dimension.
The GIF above shows the tool I’m currently developing. It uses three.js to place the recorded video and slide image into a 3D space. I then display *every* person’s gaze as a point, and update it in real time. This also allows me to include pupil dilation, which is a key marker of cognitive load.
It still needs work, but it’s a pretty cool way of seeing how people reacted during the one-on-one experimental sessions.