6 Things Everyone Should Know About Mobile Employee Assessments

Posted by Matthew O'Connell, Ph.D.

For years, there have been serious concerns in the I/O community about unproctored testing. The fact is that unproctored testing is here to stay, and the advent of mobile devices has made it easier than ever for people to take tests anywhere, anytime. Is that a good thing? In some ways, yes: it gives more people than ever the opportunity to apply for jobs. At the same time, it raises logistical, psychometric, and even ethical concerns.

There are several key issues related to mobile testing:

  1. Measurement Equivalence

  2. Mean Differences

  3. Validity

  4. Demographic Differences

  5. Device Limitations

  6. Applicant Reactions

At this point, research on mobile device testing is limited, but what exists speaks to each of these issues. Based on existing research and practical experience, here’s what we know:

1) Measurement Equivalence

Basically, this relates to whether the psychometric properties of a test administered on a mobile device are similar to those of a test administered on a non-mobile device, e.g., a laptop or desktop. The vast majority of research, using large sample sizes, suggests that the psychometric properties, including factor structure and reliability, are similar between mobile and non-mobile devices when assessments are intentionally designed to be administered across devices. Essentially, these findings suggest that the same thing is being measured in mobile and non-mobile environments. This holds for both cognitive and non-cognitive measures.
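To make the reliability comparison concrete, here is a minimal sketch of how one might compare internal consistency (Cronbach's alpha) across device types. The item-level response data here is randomly generated for illustration only; a real analysis would use actual applicant responses from each device group.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical responses (rows = applicants, columns = 1-5 Likert items);
# random data is a placeholder, so alpha will be near zero here.
rng = np.random.default_rng(42)
mobile_responses = rng.integers(1, 6, size=(500, 20))
desktop_responses = rng.integers(1, 6, size=(500, 20))

print(f"Mobile alpha:  {cronbach_alpha(mobile_responses):.3f}")
print(f"Desktop alpha: {cronbach_alpha(desktop_responses):.3f}")
```

If the two alpha values (and, in a fuller analysis, the factor structures) are comparable, that supports the claim that the same construct is being measured on both device types.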

2) Mean Differences

This is an area where the results are mixed. There seem to be few, if any, effects on test performance for non-cognitive tests, such as personality scales. There is evidence, however, that people perform worse on cognitive ability tests as well as on more interactive simulations. The reason for these differences is not clear. Similar findings occur in unproctored vs. proctored settings where applicants take the assessments on PCs. One explanation is that in unproctored situations individuals are more likely to experience distractions. It may be that people taking tests on mobile devices are simply more distracted, and that, rather than the device itself, accounts for the majority of the score differences. We know from our research that people sometimes take tests on their smartphones in the back of a cab on the way to the airport. Those are not ideal test-taking conditions, to say the least. Interestingly, a number of studies have shown that pass rates for mobile vs. non-mobile test administrations are almost identical.
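As a rough illustration of how such mean differences are typically quantified, the sketch below computes a standardized mean difference (Cohen's d) and compares pass rates for two hypothetical score distributions. The score distributions and the cutoff are invented for the example.

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical cognitive test scores with a small mobile deficit
rng = np.random.default_rng(7)
mobile_scores = rng.normal(48, 10, 1000)
desktop_scores = rng.normal(50, 10, 1000)

cutoff = 40  # hypothetical passing score
print(f"Cohen's d: {cohens_d(mobile_scores, desktop_scores):.2f}")
print(f"Mobile pass rate:  {(mobile_scores >= cutoff).mean():.1%}")
print(f"Desktop pass rate: {(desktop_scores >= cutoff).mean():.1%}")
```

This also illustrates how a modest mean difference can coexist with nearly identical pass rates: when the cutoff sits well below both group means, a small shift in the mean moves relatively few applicants across the passing line.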

3) Validity

This is an area where the research is particularly sparse. The few studies that do exist indicate comparable levels of validity for tests taken on mobile vs. non-mobile devices. While research on this topic is in its infancy, early returns suggest that when a test is designed for mobile deployment, validity equivalence holds for personality and situational judgment measures. More research is needed, but the results are positive so far.
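Validity here typically means criterion-related validity: the correlation between test scores and a job performance criterion. A minimal sketch of checking validity equivalence across device groups, with simulated data standing in for real scores and performance ratings:

```python
import numpy as np

def validity(scores: np.ndarray, performance: np.ndarray) -> float:
    """Criterion-related validity: Pearson r between test score and criterion."""
    return np.corrcoef(scores, performance)[0, 1]

# Hypothetical model: test scores and performance both reflect true ability
rng = np.random.default_rng(3)
for device in ("mobile", "desktop"):
    ability = rng.normal(0, 1, 400)
    scores = ability + rng.normal(0, 0.8, 400)       # test = ability + error
    performance = ability + rng.normal(0, 1.2, 400)  # criterion = ability + error
    print(f"{device} validity: {validity(scores, performance):.2f}")
```

Comparable correlations across the two groups would be the kind of evidence the early studies report.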

4) Demographic Differences

African Americans, Hispanics, and females are more likely than white males to take a test on a mobile device. This may present challenges IF individuals who take tests on mobile devices, for one reason or another, tend to have lower scores. If individuals from protected groups are more likely to take tests on mobile devices, and are therefore likely to do worse on the test, then we may see increased adverse impact. Having said that, the majority of research in this area, even with very large sample sizes, indicates that differences in adverse impact are extremely rare when comparing mobile and non-mobile devices. Select International has researched this topic, and for assessments developed for mobile deployment, it is possible to build a test that does not select significantly different proportions of applicants across protected classes. When selecting an assessment vendor, it is important to ask whether they have researched scoring differences across protected classes on different devices.
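One standard screen for adverse impact is the EEOC four-fifths rule: compare each group's selection rate to the highest group's rate, and flag any ratio below 0.80. A small sketch, using hypothetical applicant and hire counts:

```python
# Hypothetical applicant and hire counts by group (illustration only)
applicants = {"Group A": 400, "Group B": 250}
hires = {"Group A": 120, "Group B": 55}

rates = {group: hires[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "potential adverse impact" if impact_ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({flag})")
```

Running this kind of check separately for mobile and non-mobile administrations is one way to verify a vendor's claim that device type does not introduce adverse impact.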

5) Device Limitations

The actual size of the device, and more specifically screen size, differs significantly between mobile devices. Larger devices such as tablets provide an environment much more similar to a PC than a smartphone does. Screen size becomes a particularly important variable when complex images or designs are displayed on the screen, as in some cognitive ability tests and almost all interactive simulations. In addition to screen size, bandwidth and internet speed come into play when tests are taken on mobile devices, because they vary greatly across users and devices. These factors affect how quickly images load and how quickly responses register, all of which may have an effect on test scores.

6) Applicant Reactions

This is actually a very interesting area because, on the surface, the findings appear contradictory. On the one hand, individuals view the opportunity to take tests in an unproctored environment and on mobile devices as a very positive thing, and they also perceive companies that allow mobile testing in a positive light. At the same time, applicants who take tests in an unproctored environment, whether on a mobile or non-mobile device, report that the experience was not as positive as taking the test in a proctored environment. In this case, it’s not so much mobile vs. non-mobile as it is the level of distraction. It’s important to ensure, to the extent possible, that the test was developed in a mobile-optimized manner. If you are not careful about choosing an assessment developed with mobile in mind, you may find that applicants react negatively to the experience.

Conclusion

Putting this all together leads us to a few conclusions and recommendations. The first is that unproctored mobile device testing is a growing and accelerating trend. In other words, it’s not going away. It’s important to improve the experience for applicants by designing mobile tests in an optimized manner: maximizing the use of screen space, limiting unnecessary buttons, and so on.

We also think it’s important to at least provide applicants with an opportunity to take tests in a more stable, non-mobile environment. That may take the form, for example, of a PC at a kiosk. Whether they take advantage of it is up to the applicant; the key is to provide choice and opportunity, which will clearly be perceived more favorably by applicants. Because our research suggests that people who take assessments on mobile devices may encounter more serious distractions in their test environment, it’s a best practice to instruct candidates to take control of their test environment and make sure they will be free of distractions during the assessment. That will at least reduce one factor that clearly inhibits optimal test performance.

The growth of mobile device testing poses a number of challenges but at the same time opens up a wide range of exciting opportunities for reaching non-traditional candidates, expanding the applicant pool and optimizing the testing experience for all candidates.


Tags:   I/O Psychology, employee assessments

Matthew O'Connell, Ph.D.

Matthew is the Co-Founder and Executive Vice President of Select International. For more than 20 years, he has been a driving force when it comes to designing, evaluating, and integrating selection tools into systems that meet the specific needs of Global 2000 organizations. He is the co-author of the bestselling business book Hiring Great People.
