
On the Legal Front: Government-Mandated Pay Reporting Is on the Horizon

Richard Tonowski

On January 29, 2016 (the seventh anniversary of President Obama’s signing of the Lilly Ledbetter Fair Pay Act), the U.S. Equal Employment Opportunity Commission (EEOC) announced the long-anticipated proposed regulations for pay data collection. Private-sector employers with 100 or more employees would complete an expanded EEO-1 annual workforce demographics report that would now include 12 pay categories. The EEO-1 has been around since 1966; the current rules for which employers are required to file the report are not changing. Pay data would be based on W-2 earnings; employers would also report total hours by pay band. EEOC is soliciting comments on how to handle hours for salaried employees, whose hours are generally not tracked. The first pay report would be due in September 2017; employers would report a year of pay data looking back from any pay period between July 1 and September 30 of the reporting year. The rule includes federal contractors and supersedes the Department of Labor regulation proposed in 2014; the two agencies are together on the EEOC plan. There is speculation that federal contractors with 50–99 employees, who currently file the EEO-1, might be included later. Comments were being taken until April 1.

The history of the rule goes back to the President’s National Equal Pay Enforcement Task Force, established in 2010 to promote interagency cooperation in fighting pay discrimination. One of the action items was for EEOC to contract with the National Research Council (NRC) for a report on methods to collect pay data. That report (NRC, 2013) raised questions about what the data would be used for, as well as making recommendations on collection details and confidentiality concerns. It also recommended a pilot program before full implementation. EEOC contracted with Sage Computing for such a study, using the EEO-1 as the collection instrument (Sage Computing, 2015). EEOC also held a 2-day meeting in 2012 to get input from various stakeholders.

The EEOC rule states that the purpose of the data is to “assess complaints of discrimination, focus investigations, and identify employers with existing pay disparities that might warrant further investigation.” To that end, the agencies will develop software so that investigators can conduct statistical analyses to compare an individual employer’s numbers with data aggregated by industry and geography.

It’s daunting to write this knowing that by the time it reaches its audience, other very good and probably more detailed commentary will be available. The comment period on the proposed rule will have closed, although a public hearing is likely still in the offing, and the discussion will not end there. I take the plunge because this topic represents a unique juncture of civil rights law, social concerns, and the application of science that should be of interest to the I-O profession.

The EEOC proposed rule comes at a time when the effectiveness of merit pay and the usefulness of the traditional annual performance evaluation are under new scrutiny by practitioners and researchers, and Big Data offers the opportunity for more detailed and objective (and potentially more intrusive) insight into job behavior and results.

Depending on the commentator, this initiative is substantive action dealing with an enduring problem (“a significant step forward to address pay inequality”), political posturing (“Obama’s new pay equality rule”), or a confused muddle (“well intended but not well thought out,” “You can’t compare oranges and apples in the same group . . . too many false positives and too many false negatives.”).

Five statements about the proposal can be put forth with relative certainty:

A new regulation is likely. Despite some questioning of EEOC’s authority, this is a Title VII record-keeping matter for which EEOC can issue regulations not just guidance. Plenty of effort and political capital is already invested in the project.

It will be a coarse measure. The proposed EEO-1 would include 10 occupational, 12 pay, 7 race/ethnic, and 2 gender categories. Even so, that gives little specific information for, say, the professionals category, which could contain any number of occupations and pay ranges. An example that has been mentioned involves physicians and nurses in healthcare. Both occupations are professional, but qualifications and pay differ, and there may be differences in relative representation by sex. The relative number of employees per job could also differ across organizations, so even within the same industry, comparison of employers could be limited. The practical barrier to getting finer detail is that even this proposal is being criticized as burdensome to those who do not have automated systems to capture and integrate pay data into their EEO-1 reports. Data collection is limited to employers who file EEO-1 reports and have at least 100 employees. That gets to larger employers but obviously not to everyone. Using the W-2 as the pay data source has the advantage of capturing all compensation for the employee, but that inclusiveness is also problematic. In any job category, pay may vary because of overtime, shift differentials, temporary assignments, or competency enhancement provisions such as pay for knowledge.

Employer burden concerns need to be aired. EEOC acknowledged this in its estimation of employer cost and workload and invited comments. The method used in the proposal assumes that the additional data for the reports can be assembled within electronic data systems. On average, this should take 6 hours and increase the employer’s cost by $160. Critics have been quick to complain that this is a major underestimate. In addition, W-2 data, normally produced for the calendar year, would need to be conformed to the EEO-1 reporting cycle ending on September 30.

Confidentiality is another concern. EEO-1 aggregated data are made available to researchers; individual reports are not released. There seems to be little problem with leaking of individual employers’ EEO-1 data currently, but the stakes might be higher with compensation data. The United Kingdom has its own pay disclosure proposal, apparently with the government intending to shame employers with gross male–female inequities (Cauterucci, 2016). Firms with 250 employees or more will submit descriptive statistics on male–female pay beginning in April 2017. The data will go on a searchable public website. The EEOC proposal does not have this disclosure; employers on this side of the Pond are concerned about any disclosure, even if released data do not name names. The identity both of the firm and of some employees could be inferred from the size, composition, and location of the workforce. The Freedom of Information Act, intended to foster transparency in government, might also promote transparency in supposedly confidential reports regardless of agency safeguards. Another concern is that conscientious employers will want to audit their compensation systems in advance of any reporting, but they will be wary of creating records for plaintiffs in subsequent EEO investigations.

Details for using the data need to be worked out. This is acknowledged in the proposal, which references the pilot study (Sage Computing, 2015) that explains some of the choices made for data collection, choices that of necessity impose limitations. The report has a detailed discussion of statistical analysis methods, although the focus seems to be on the three mentioned below. It is probably worth a read for anyone who is analyzing pay disparity. Much of the discussion of past research focused on economy-wide analyses; for the present purpose, two sets of synthetic data were constructed and analyzed. Ay, Holt, and Reardon (2016) appear to be the first to examine the proposed statistical methodology. Following the report, they focus on the Mann-Whitney U test, the Kruskal-Wallis test, and interval regression. The usual analogous methods of t-test, analysis of variance, and ordinary least squares regression are excluded because the data are categorized. Also, Mann-Whitney and Kruskal-Wallis are nonparametric tests that do not assume an underlying normal distribution. These methods may provide a foundation for identifying statistical outliers within an industry, which might then receive further investigative attention. As the authors note, the method alone does not set the criterion for what counts as an outlier. The proposal mentions statistical power and significance, and says the “process would include recognition of how sample sizes may influence results.” What that means in practice has yet to be defined. Small differences with small numbers generally mean no statistical significance.

That women make 79 cents to the male dollar is a sound bite that, although true, does not necessarily point to discrimination. That figure runs into various explanations.
A recent review of national data (Blau & Kahn, 2016) takes it as a raw figure (2010 data) but arrives at 92 cents after accounting for a set of covariates; however, that includes adjustments for occupation and industry as “explaining” the gap. Adjustments only for “human capital” factors make it 82 cents. There is a persistent, unexplained gap that could be due to discrimination. Any unlawful discrimination is too much. But expending appreciable resources to pursue a small discrepancy that might be smaller still when explained is not very exciting. On the other hand, it could be argued that it would be useful to know whether there are organizational outliers with extreme differences driving a smaller average.
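The role of occupation in “explaining” the gap can be made concrete with a toy composition example. All numbers below are invented for illustration, not taken from Blau & Kahn (2016): pay is equal within each job category, but because the sexes are distributed differently across categories, the raw female-to-male ratio falls well short of parity while the within-category ratios do not.

```python
# Hypothetical records of (sex, job category, salary); figures are invented
# to illustrate composition effects, not drawn from any real data set.
records = (
    [("M", "A", 100_000)] * 8 + [("M", "B", 50_000)] * 2 +
    [("F", "A", 100_000)] * 2 + [("F", "B", 50_000)] * 8
)

def mean_pay(recs, sex, category=None):
    """Mean salary for one sex, optionally restricted to one job category."""
    pays = [p for s, c, p in recs if s == sex and category in (None, c)]
    return sum(pays) / len(pays)

raw_ratio = mean_pay(records, "F") / mean_pay(records, "M")    # about 0.67
within = {c: mean_pay(records, "F", c) / mean_pay(records, "M", c)
          for c in ("A", "B")}                                  # 1.0 in both categories
```

Whether such an adjustment “explains” the gap or merely relocates it (if assignment to category is itself discriminatory) is exactly the contested point.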

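To make the mechanics concrete, here is a minimal pure-Python sketch of the Mann-Whitney U test applied to banded pay data. The band midpoints and head counts are invented for illustration; this is not EEOC’s software, which has not been published, and a production analysis would use a vetted statistical library.

```python
from collections import Counter
from math import sqrt

def mann_whitney_u(x, y):
    """U = number of (x_i, y_j) pairs with x_i > y_j, counting ties as 1/2."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def mw_z(x, y):
    """Normal approximation to U with a tie correction (no continuity correction)."""
    n1, n2 = len(x), len(y)
    n = n1 + n2
    mu = n1 * n2 / 2.0
    ties = Counter(list(x) + list(y))   # tied values reduce the variance of U
    tie_term = sum(t**3 - t for t in ties.values()) / (n * (n - 1))
    sigma = sqrt(n1 * n2 / 12.0 * ((n + 1) - tie_term))
    return (mann_whitney_u(x, y) - mu) / sigma

# Hypothetical EEO-1-style data: one pay-band midpoint per employee, by sex.
men = [30_000, 50_000, 50_000, 70_000]
women = [30_000, 30_000, 50_000]
u = mann_whitney_u(men, women)   # 9.0 of a possible 12 pair comparisons favor men
z = mw_z(men, women)             # roughly 1.15, short of conventional cutoffs
```

With samples this small the z statistic falls well short of significance, which illustrates the point above: small differences with small numbers generally mean no statistical significance.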
How to pursue those differences may not be simple. Probably the easiest salary system to track is one where jobs are narrowly defined, pay raises are either step increases at specified intervals or across-the-board adjustments, and performance evaluations (short of grounds for termination) have no impact on pay. That’s my situation, but it may not be commonplace, not even in the federal government. Pay for performance, variable pay, pay bands, and broad job classifications are more in vogue and by now are not new concepts. EEO enforcement can go awry when confronting these arrangements, as illustrated by two EEOC sex-based pay cases. Where a substantial part of compensation is variable and based in part on contribution to the firm, those on extended leave (maternity or other) may not make as much as those who stay around; that was a defense argument in Bloomberg. And although a group of attorneys shared the same classification, the courts in Port Authority of NY & NJ were not buying the argument that “a lawyer is a lawyer” when assignments as well as pay differed.

Another conceptual matter involves variation in performance. For labor-as-a-commodity, individual performance does not matter and one pay rate fits all. If there is no variation in output, then there is no reason for pay differentiation. But the research on selection utility casts doubt that this is the general case; more general is the rule of thumb that the standard deviation of performance, measured in money, is around 40% of salary (Schmidt & Hunter, 1998). Arguably, better performance should be better paid. But what if level of performance (or competence influencing performance) is related to EEO protected class? I-Os have been dealing with that in defending employment tests with adverse impact. Reviews such as McKay and McDaniel’s (2006) meta-analysis indicate racial differences in performance, but with complexity regarding types of data.
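The 40% rule of thumb plugs into the standard Brogden–Cronbach–Gleser utility estimate of the dollar gain from valid selection. A back-of-envelope sketch follows; the salary, validity, and selectee-quality figures are illustrative assumptions, not figures from this article.

```python
# Per-hire, per-year utility gain: validity * SD_y * mean z-score of those selected.
# All inputs below are illustrative assumptions.
salary = 60_000
sd_y = 0.40 * salary        # rule of thumb: SD of performance in dollars ~ 40% of salary
validity = 0.51             # e.g., the GMA validity reported by Schmidt & Hunter (1998)
mean_z_selected = 1.0       # average standardized predictor score of hires (assumed)

gain_per_hire = validity * sd_y * mean_z_selected   # about $12,240 per hire per year
```

Figures of this size are why variation in performance matters, and why performance-linked pay differences could quickly become large relative to the pay bands in the proposed report.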

Although some dismiss the very concept of the pay report as wrongheaded, the likelihood that it will be next year’s reality invites consideration of how to make the best scientific use of it. That is where SIOP, as well as other professional societies, ought to be claiming a role. Chiming in on the specifics of the proposed rule is the obvious thing. The Sage and NRC reports would seem natural starting points for constructive comments. What are the limitations of data usage in the proposal? What questions can, and cannot, be addressed by data collected under it? Are there enhancements that could be adopted relatively easily, given the competing interests of detail, cost, and confidentiality?

Beyond that are the larger issues underlying pay equity. These have all likely been addressed by one study or another, perhaps with different conclusions. But now is the time to bring this information together to define where there is generally accepted professional practice and where practice badly needs to be informed by research. This is a role for broad-based professional groups rather than for those pursuing critical but limited facets of pay-related issues. Performance evaluation, compensation systems, and statistical methodology are broad areas for consideration. How do I-Os and other stakeholders (folks such as EEOC investigators) know when these are effective and fair? There may be no one simple answer for each area. But a start needs to be made if management systems and EEO enforcement are to have a sane coexistence.

The next year or two could be very interesting regarding pay issues. Credit EEOC with scoring some early points just by making the proposal for the reporting rule. Law firm blogs and newsletters covering the rule are urging clients to examine their pay practices now (under attorney–client privilege), well in advance of September 2017. The irony, of course, is that the data might show little worth pursuing, because the egregious discrepancies (if any) have been fixed by the time a charge could be filed. But that would not be a bad thing.

 

And on a personal note, let me say that it was a privilege and a pleasure to have served my legal watch under our outgoing TIP Editor Morrie Mullins.

 

 

REFERENCES

Ay, G., Holt, R., & Reardon, E. (2016, February 10). Interpreting EEOC’s equal pay statistical tests. Law360. Retrieved from http://www.law360.com/articles/757291/interpreting-eeoc-s-equal-pay-statistical-tests.html

Blau, F. D., & Kahn, L. M. (2016). The gender wage gap: Extent, trends, and explanations. National Bureau of Economic Research Working Paper 21913. Retrieved from http://www.nber.org/papers/w21913

Cauterucci, C. (2016, February 16). The U.K. is set to publicly shame companies that pay women less than men. Slate Magazine. Retrieved from http://www.slate.com/blogs/xx_factor/2016/02/16/gender_wage_gap_in_the_u_k_to_be_addressed_with_public_shame_database.html

EEOC (2016, February 1). Agency information collection activities: Revision of the employer information report (EEO–1) and comment request. Federal Register, 81, 5113–5121. Retrieved from https://www.gpo.gov/fdsys/pkg/FR-2016-02-01/pdf/2016-01544.pdf

EEOC v. Bloomberg L.P., No. 07 Civ. 8383 (S.D.N.Y. 9/12/2013).

EEOC v. Port Authority of NY & NJ, No. 13–2705 (2nd Cir. 9/29/2014).

McKay, P. F., & McDaniel, M. A. (2006). A reexamination of black–white mean differences in work performance: More data, more moderators. Journal of Applied Psychology, 91, 538–554.

National Research Council. (2013). Collecting compensation data from employers. Washington, DC: The National Academies Press. Retrieved from http://www.nap.edu/catalog/13496/collecting-compensation-data-from-employers

Sage Computing. (2015). Final report: To conduct a pilot study for how compensation earning data could be collected from employers on EEOC’s survey collection systems (EEO1, EEO-4, and EEO5 survey reports) and develop burden cost estimates for both EEOC and respondents for each of EEOC surveys (EEO-1, EEO-4, and EEO-5). Retrieved from http://www.eeoc.gov/employers/eeo1survey/pay-pilot-study.pdf

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.
