Over the last several years, I have been very pleased to see an emphasis on program assessment within the NIRSA community. Because we have taken the time to perform intentional assessment, we have evidence of a positive correlation between students' use of their campus recreation facilities and higher GPAs and retention rates. We have been able to show that our student employees improve their transferable skills by working within our programs and facilities. We also have evidence that students gain leadership skills by participating in intramural and club sports. Additionally, we can use program assessment on a day-to-day basis to show us whether our intended goals for our programs and services are being met, and how we can tweak our programs for continuous improvement.
This is all good stuff. Truly.
Which makes it so much more disappointing when we, as individual members of NIRSA, don’t take the time to evaluate our own conference sessions.
Recently, a new professional and a graduate student from my department gave a presentation together at a regional conference. It was their first time presenting, and they worked hard to make their presentation engaging and relevant. They felt prepared, but when they saw that their presentation room had 100 chairs, they started feeling a bit nervous. Once they began, the chairs were full, with some attendees standing in the back of the room. They finished going through their information, and ended up fielding a number of questions from the audience. Attendees from our school who were in the audience all thought their presentation went well; however, we were looking forward to feedback from other NIRSA members who would not be quite as biased as our colleagues.
The team received eight evaluations, and only five of those included written comments. In a room with 100–120 attendees, just eight people provided feedback of any kind.
I have attended NIRSA conferences regularly over the last 15 years, going back to when session evaluations were still completed with pencil and paper. I have sat through some really great sessions (including one given by George Brown on Student Learning Outcomes that changed the direction of my career), and I have labored through some really awful presentations that seemed to have been thrown together at the last minute. Either way, I have tried to at least give a number rating for the presentations I have attended, and I have written comments when I could. That feels like the least I can do when someone has taken the time to put themselves out there and share information with the group.
It is fantastic that we take program evaluation so seriously. But shouldn’t we also take assessment seriously when it comes to our NIRSA conferences? Otherwise, how will we ever get better?