Tracking Individual Feds Over Time Could Improve Employee Viewpoint Survey
Academics look at the methodology of the annual survey gauging the attitudes of federal workers.
This story has been updated.
The annual Federal Employee Viewpoint Survey would be more valuable if it tracked individual respondents over time, gauging how their attitudes toward management and agency leadership changed from one year to the next, according to a new study.
A group of researchers from four universities assessed the strengths and weaknesses of the FEVS from an information-gathering and methodology standpoint. FEVS, which the Office of Personnel Management first administered in 2002, doesn’t clearly link individual respondents’ feedback over time, which would help researchers reduce bias in their causal estimates, the academics said.
“If tracing all the respondents from year to year is impossible, or undesirable, OPM could select a subsample of federal employees and track them over time to create a panel,” the study recommended. OPM could address privacy and confidentiality concerns “through a combination of randomly assigned identification codes” and “restricted data best practices” from agencies like the Education Department, the researchers said.
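The panel idea described above can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not anything OPM has specified: real employee identifiers are mapped to opaque, randomly assigned codes (the lookup table would stay with a restricted-data custodian), and each year's responses are then joined on the code so researchers see trajectories without seeing identities. The function names and data shapes here are hypothetical.

```python
import secrets

def assign_panel_codes(employee_ids):
    """Map real employee IDs to opaque, randomly assigned panel codes.
    In practice the ID-to-code table would be held only by a
    restricted-data custodian, never released to researchers."""
    codes = {}
    for emp in employee_ids:
        code = secrets.token_hex(8)
        while code in codes.values():  # avoid rare collisions (fine for a sketch)
            code = secrets.token_hex(8)
        codes[emp] = code
    return codes

def link_responses(codes, yearly_responses):
    """Join each survey year's responses on the panel code, producing
    per-respondent trajectories keyed by the anonymous code."""
    panel = {code: {} for code in codes.values()}
    for year, responses in yearly_responses.items():
        for emp, answer in responses.items():
            if emp in codes:
                panel[codes[emp]][year] = answer
    return panel
```

A researcher working with the linked file would see only, say, `{"a3f9…": {2014: 4, 2015: 5}}`, which is what makes year-over-year comparisons possible while keeping individual identities out of the analytic dataset.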
Linking respondents’ feedback over a period of time in what is known as a “longitudinal survey” also requires consistency in questions asked and sampling methods, the study said. “Unfortunately, there have been instances when OPM has added and deleted questions from one survey to the next, changed the wording of questions, or changed the response categories.” The study, conducted by academics from Indiana University, the University of Southern California, the University of Georgia and Haverford College, was published in March in the Public Administration Review, a professional journal in the field of public administration research, theory and practice.
OPM reviewed the academic paper earlier this month, calling it “thoughtful” and saying it “broadened” the agency’s perspective on its methodology. OPM noted that it redesigned its sampling strategy in 2013 to improve representation and enable HR managers to receive employee feedback. In addition to the annual FEVS, the agency has developed online tools to allow managers to do customized analyses of the data.
Kimya Lee, OPM’s technical expert on research and evaluation, responded to the March paper with her own paper, published in April in PAR. She called the idea of tracing individual respondents over time “a good suggestion that may be possible with some careful thought and planning,” offering possible statistical scenarios for how that could be accomplished. But she also said “extreme caution is needed as we discuss the possibility of linking survey results to other administrative data. Although it may further our research endeavors, it may be to the detriment of the survey program, specifically if employees feel that their responses are no longer confidential.”
The academics studied FEVS because so many researchers have used the survey in separate work exploring public management and governance. FEVS is a source of data “for empirical studies of public management and public organizations,” the publication said. Agencies, lawmakers and other stakeholders have increasingly looked to FEVS to gauge the morale, commitment, and job satisfaction of the federal workforce.
The researchers praised OPM for continuing to increase participation across the federal workforce and for including a broad range of questions exploring areas such as employee empowerment, diversity management, job satisfaction and leadership styles. But casting such a wide net has also limited the survey’s ability to dig deeper into any one topic, the study said.
“We noted that the wide range of phenomena captured by the FEVS is one of the strengths of the survey,” the study said. “However, it appears a trade-off was made between breadth and quality of measurement. Many concepts are measured in the FEVS using just one or a handful of items, making it difficult for researchers to show that their measurement approach captures all or most of the key dimensions that make up a concept’s content.”
The study recommended that OPM craft more tailored questions that reveal, for example, the specific actions agencies take to increase diversity and how greater diversity affects job satisfaction and respondents’ perceptions of work quality.
The academics also suggested that OPM refine the wording of some questions, avoiding “double-barreled” constructions that end up measuring two separate ideas simultaneously, like motivation and commitment. Using “strongly agree” and “strongly disagree” as response options in surveys also injects too much ambiguity, the publication said. “When a respondent is asked to provide his or her level of agreement with a statement such as, ‘Managers communicate the goals and priorities of the organization,’ is a ‘strongly agree’ response an indication that the respondent believes this behavior to be extremely important (intensity) or that the respondent is highly confident in his or her manager’s respective behavior in this area (strong position)?”
The study urged OPM, the Government Accountability Office and other stakeholders to put together a working group that includes researchers who can help improve the design and implementation of FEVS.
Lee said in her response that OPM was “committed” to working with the research community inside and outside government. “The utility of the survey grows at a seemingly exponential pace within the milieu of the federal government, and with this article and associated commentary, we hope to witness an equally expansive growth in academic capacity.”