Data Verification Study
The Data Verification Study began more than four years ago. In Spring 2016, we collected data from an impressive 90 participants.
At the start of the spring semester, we had seven administrators. During the spring, we added six new administrators, all of whom completed the five-week training. Having such a large team was beneficial, but made scheduling quite a challenge! We had to compare everyone’s available times to decide which person should use which room for the study, while trying to avoid conflicts and maximize the number of sessions we could advertise. After a few drafts, we had a schedule that allowed us to advertise up to 51 different time slots. This gave us maximum flexibility for both our administrators and our participants.
Unfortunately, we experienced a number of technical difficulties. Three of our computers kept rebooting, sometimes leaving us with no working computers on which to administer the study. This was inconvenient and frustrating: We had to cancel appointments, rearrange study schedules, and advertise fewer sessions. Eventually, we were able to restore all of the computers to full functionality.
We also had to change the browser we were using. The study was originally designed for Internet Explorer, but this browser is no longer being updated. Midway through the spring semester, the study videos stopped working, and we had to switch to Google Chrome. This caused some confusion among the administrators, since the instructions for running the study were written for Internet Explorer. Appointments had to be cancelled because of this problem, too. We rewrote the administration instructions for the new browser, and with the help of IT and our computer coordinator, we got the study running again.
April was a very successful month for data collection. It marked the end of the spring semester, and with many ill-prepared students scrambling to earn their study credits, we ended up collecting a lot of data. Our computers were running smoothly, which allowed us to advertise almost 20 sessions a week, and all of our time slots were being filled.
Even with all the technical difficulties, we were able to collect data from 90 participants this spring. That represents close to 180 hours of work that our administrators put into the DV study. We are so grateful for their dedication. With these 90 participants, we finally have 400 usable cases for this study, and data collection is now officially over.
As all of you who have worked on this study know, though, the process isn’t over when the participant leaves the room. The data still needed to be formatted and scored. Small disasters occurred because of transposed numbers and duplicate files. However, we were able to track down and correct each of these errors, and finish the last of the formatting and scoring in June.
We (Liz and Kim) are now working with Amber Stevens to write a journal article describing our results. We started by conducting a literature review and then classifying the 751 data checking errors that participants made during the study. Together, we raced against the clock to get the analyses complete before Liz heads off to graduate school this fall. It’s been a long journey, and we are excited to be able to share our results with the world.
Our key findings:
- Double entry results in half as many errors as the next best method. The Poka-Yoke Double Entry System was recently updated by Ernie Bedoy and is available on Kim’s website (https://faculty.unlv.edu/barchard/doubleentry/). A brief sketch of the double-entry idea appears after this list.
- The next best method is Solo Read Aloud (where you read the original data sheet out loud while looking back and forth to the computer screen). You should use this whenever you haven’t set up a double-entry system.
- Partner Read Aloud is the fastest, but is by far the least accurate. It should not be used.
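
For readers curious what double entry looks like in practice, here is a minimal sketch of the underlying idea: the data are typed twice, and any cell where the two entries disagree is flagged for review. This is only an illustration, not the Poka-Yoke Double Entry System itself; the file names and column handling are hypothetical.

```python
# Minimal sketch of the double-entry idea (illustrative only; not the
# Poka-Yoke Double Entry System). The same data sheets are typed twice
# into separate files, and any cell where the two entries disagree is
# flagged so the original sheet can be rechecked.
import csv

def load(path):
    """Read a CSV file into a list of row dictionaries."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

first = load("entry_first.csv")    # first typing of the data sheets (hypothetical file name)
second = load("entry_second.csv")  # independent second typing (hypothetical file name)

for row_number, (a, b) in enumerate(zip(first, second), start=1):
    for column in a:
        if a[column] != b.get(column):
            print(f"Row {row_number}, column '{column}': '{a[column]}' vs '{b.get(column)}'")
```

Flagging disagreements this way catches most typing mistakes, because it is unlikely that the same error will be made twice in the same cell.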
Elizabeth Ochoa has been a part of the Interactive Measurement Group since spring 2014. She graduated with a B.A. in psychology from UNLV in 2015. Starting fall 2016, she will be attending Colorado State University for a Masters in Public Health / Peace Corps Masters International program. She hopes to earn her PhD in Community Psychology or Public Health, so that she can work in academia, mentor students, and do research in violence prevention.
Kimberly A. Barchard is the Director of the Interactive Measurement Group. She is an Associate Professor in the Department of Psychology at UNLV. She joined UNLV in 2001 after obtaining her MA and PhD in Psychometrics. She works to empower students and colleagues to accomplish their personal and professional goals, particularly through the development of leadership, communication, and research skills.