
Research Article
Open access

Evaluating Non-response Rates and Bias in the American Time Use Survey

Yizhe Bai 1*
  • 1 University of Michigan    
  • *corresponding author baiyizhe@umich.edu
Published on 15 January 2024 | https://doi.org/10.54254/2753-7048/37/20240569
LNEP Vol.37
ISSN (Print): 2753-7056
ISSN (Online): 2753-7048
ISBN (Print): 978-1-83558-275-6
ISBN (Online): 978-1-83558-276-3

Abstract

This paper investigates the ongoing issue of non-response in the American Time Use Survey (ATUS) and its impact on the survey's reliability and representativeness. Despite targeted efforts, ATUS has experienced a consistent decrease in response rates, which was exacerbated during the COVID-19 pandemic. This decreasing response rate may link to potential non-response bias, which could distort the survey's depiction of time use patterns across the U.S. population. The study analyzes the complexities associated with non-response rates, emphasizing biases that might emerge from groups with weaker community ties. Challenging traditional assumptions, recent research indicates that high response rates do not automatically mitigate bias. This paper highlights the necessity for innovative survey methods and the adoption of new technologies, such as web-administered diaries and smartphone apps, to address these issues. The analysis underscores the importance of adapting survey strategies to contemporary societal and technological landscapes, aiming to enhance the accuracy, efficiency, and representativeness of ATUS data.

Keywords:

survey research, time use studies, response rates, nonresponse bias


1. Introduction

The American Time Use Survey (ATUS), a nationally representative survey, plays a crucial role in understanding how individuals in the United States allocate their time across activities such as work, leisure, and household chores. ATUS selects participants from the Current Population Survey (CPS) and collects detailed 24-hour activity data through interviews. These individual self-reports are coded into a three-tier activity system for analysis, providing insights into employment, domestic responsibilities, and leisure that are valuable for policymakers and researchers [1,2]. Despite its importance, ATUS faces a significant non-response challenge, which has persisted since the survey's inception and raises the risk of bias in its findings. Response rates have shown a decreasing trend from 2003 to 2022, and the issue was further amplified during the COVID-19 pandemic in 2020, leading to a notable decline in participation [3].

This paper aims to provide an in-depth analysis of the non-response issue in ATUS, evaluating the trends in response rates over the years and exploring the potential biases that such declining rates introduce. This research assesses scholarly perspectives on the factors contributing to non-response and the resulting biases. The paper focuses on proposing effective strategies and methodological adaptations to address these challenges, ensuring that ATUS continues to capture accurate and representative data about time use in the United States. Through this analysis, this research seeks to contribute to enhancing the reliability and utility of ATUS data collection, which is crucial for understanding societal trends and informing policy and economic decisions.

2. Dissecting the Non-response Rate Issue in ATUS

2.1. The Significance of Response Quality

The significance of response quality in surveys like the American Time Use Survey (ATUS) cannot be overstated, as it directly impacts the reliability and validity of the findings. A high response rate is generally perceived as indicative of high-quality data, reflecting a more comprehensive cross-section of the population and minimizing the risk of nonresponse bias. Conversely, a low response rate can raise concerns about the representativeness of the survey results and the potential introduction of biases, particularly if certain demographic or social groups are underrepresented.

In the ATUS's design phase, strategies like the designated day with postponement were implemented to enhance response rates, balancing data representativeness against survey costs. However, despite these efforts, response rates in 2003 fell short of the 70% target. Subsequent studies, particularly by Abraham, Maitland, and Bianchi in 2006, revealed the intricate nature of these challenges, pointing to social integration as a key factor in survey participation. This evolving understanding underscores the necessity for dynamic survey design and the importance of adapting methods to maintain data quality amidst shifting response patterns.

2.2. Designing ATUS: Strategies for Maximizing Response Rates

In the phase of ATUS design and development, a designated-day-with-postponement schedule was initially favored, whereby respondents were called on a specific day of the week and, if unreachable, on the same day the following week. Concerns that this might lead to low response rates prompted the exploration of substitution methods. For instance, research indicated that weekday time-use profiles were similar, opening the possibility of calling respondents on any weekday, an approach known as designated day with postponement and substitution. Concerns about potential bias from these schedules led to simulations, which revealed that while a convenient-day schedule introduced bias in estimates of time spent in various activities, the designated-day-with-postponement-and-substitution schedule generally did not, though it was less robust than the designated-day-with-postponement schedule without substitution [4].
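
To make the day-composition issue concrete, the toy simulation below contrasts a designated-day-with-postponement schedule with a convenient-day schedule. The contact probabilities and television-viewing hours are invented for illustration (people are assumed easier to reach, and to watch more TV, on weekends); they are not BLS or ATUS values, and the code is a sketch of the idea rather than the simulation described in [4].

```python
import random

random.seed(0)

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
# Illustrative values only: people are assumed easier to reach, and to watch
# more TV, on weekends. These are not ATUS or BLS figures.
CONTACT_PROB = {**{d: 0.4 for d in DAYS[:5]}, **{d: 0.8 for d in DAYS[5:]}}
TV_HOURS = {**{d: 2.0 for d in DAYS[:5]}, **{d: 4.0 for d in DAYS[5:]}}
TRUE_MEAN = sum(TV_HOURS.values()) / len(DAYS)  # about 2.57 hours per day

def designated_day_with_postponement(weeks=8):
    """Call only on a randomly assigned designated day, retrying the same
    weekday in later weeks; the diary day never changes."""
    day = random.choice(DAYS)
    for _ in range(weeks):
        if random.random() < CONTACT_PROB[day]:
            return TV_HOURS[day]
    return None  # never contacted: a nonrespondent

def convenient_day(weeks=8):
    """Call on any day and accept a diary for whichever day contact first
    succeeds, so easy-to-reach days end up overrepresented among diary days."""
    for _ in range(weeks):
        for day in random.sample(DAYS, len(DAYS)):
            if random.random() < CONTACT_PROB[day]:
                return TV_HOURS[day]
    return None

def estimate(schedule, n=50_000):
    values = [v for v in (schedule() for _ in range(n)) if v is not None]
    return sum(values) / len(values)

print(f"true daily mean:         {TRUE_MEAN:.2f} h")
print(f"designated-day estimate: {estimate(designated_day_with_postponement):.2f} h")
print(f"convenient-day estimate: {estimate(convenient_day):.2f} h")
```

Under these assumptions, the convenient-day schedule overrepresents weekend diary days and overstates average viewing time, while the designated-day schedule stays close to the true mean, mirroring the qualitative finding reported above.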

To set a response goal, the Bureau of Labor Statistics (BLS) aimed for a conservative 70% target response rate, informed by the experiences of similar surveys like Statistics Canada's. This target was set in conjunction with estimated Census production costs and BLS staff and research costs, leading to a recommended sample size of 21,000 completed interviews per year [4].

Several operational choices were made to maximize contact and response rates while managing costs, based on the 1997 pilot results. These included using priority mail for all respondents, as it was cost-effective and efficient in reaching respondents. Substitution, proactive appointment setting, and field visits were not implemented because of their cost implications and minimal impact on response rates. Instead, incentives were used only for households without phone numbers, a decision influenced by cost considerations and the aim of including underrepresented demographic groups in the sample [4].

Despite these efforts, the response rates in 2003 fell substantially below the 70% target, with households having a telephone number showing a 58% response rate and those without only 33%. To address this, the Census Bureau established a response rate investigation team to analyze calling and response patterns and improve response methods, and the BLS began examining the extent of nonresponse bias in ATUS estimates. A Census Bureau analysis of response and operational data in early 2004 showed that the main reason for refusal was survey fatigue: designated persons who had already participated in the CPS did not want to respond to yet another survey [3].

2.3. Analyzing Nonresponse in ATUS: Social Integration and Its Impact on Participation

In their 2006 study on nonresponse in the American Time Use Survey, Katharine G. Abraham, Aaron Maitland, and Suzanne M. Bianchi proposed two alternative hypotheses about the factors contributing to nonresponse [5]. The first hypothesis suggested that busier individuals are less likely to respond to the survey, potentially leading to an understatement of activities such as work in time-diary estimates. The second hypothesis posited that weaker community ties might decrease the likelihood of survey response, affecting estimates of activities such as volunteer work.

To evaluate these hypotheses, the authors considered evidence from previous research. Studies by Groves and Couper suggested that while households with individuals out of the labor force are easier to contact, being difficult to contact does not necessarily correlate with refusal to participate [6]. This finding challenges the first hypothesis that busier people are less likely to respond. A study of nonresponse in the Dutch Time Use Survey by Erik van Ingen, Ineke Stoop, and Koen Breedveld likewise focused on the impact of busyness: although respondents often claim that they do not cooperate because they are busy, this claim is not reflected in their actual time use, and people with empty schedules were no more likely to respond than those with full schedules [7]. In contrast, the diary study by Drago et al. found that teachers from a "high-stress" school were less likely to volunteer for a time diary study, hinting at the impact of stress or weaker social ties on survey participation [8]. Further supporting the second hypothesis, Pääkkönen's analysis of the Finnish Time Use Survey showed that people with weaker social ties, such as those living alone or with low levels of social activity, were less likely to respond [9].

The cumulative evidence from these studies suggests that while busyness may influence the likelihood of being contacted for a survey, it does not necessarily lead to refusal to participate. On the other hand, the strength of social integration appears to have a more direct impact on survey response rates, making the second hypothesis, that people with weaker community ties are less likely to participate in surveys, more plausible and supported by empirical evidence.

3. ATUS Response Rate and Potential Bias

3.1. Declining Trends in ATUS Response Rates Raise the Concerns of Response Bias

The American Time Use Survey (ATUS) calculates response rates using a formula endorsed by the American Association for Public Opinion Research (AAPOR): the rate is the number of complete interviews (C) divided by the sum of completes, refusals (R), noncontacts (NC), other non-interviews (O), and unknown eligibility cases (UE). This standard formula ensures that ATUS response rates are comparable across years and with other surveys. As illustrated in Table 1, ATUS response rates have experienced a downward trend, beginning at 57.8% in 2003 and declining to 35.8% in 2022 [3].

\( \text{response rate} = \frac{C}{C + R + NC + O + UE} \) (1)

where C denotes completes (complete or sufficient partial interviews), R refusals, NC noncontacts (uncompleted callbacks; never contacted), O other non-interviews (respondent absent, ill, or hospitalized; language barrier, etc.), and UE cases of unknown eligibility (incorrect phone number for the household, unconfirmed number, etc.).
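
A minimal sketch of equation (1) in code may help fix the bookkeeping. The disposition counts below are hypothetical, chosen only so the result lands near the 2003 figure of 57.8%; they are not actual ATUS case totals.

```python
from dataclasses import dataclass

@dataclass
class Dispositions:
    """Case dispositions entering the response rate in equation (1)."""
    completes: int            # C: complete or sufficient partial interviews
    refusals: int             # R
    noncontacts: int          # NC: uncompleted callbacks; never contacted
    other: int                # O: absent, ill, hospitalized, language barrier, etc.
    unknown_eligibility: int  # UE: incorrect or unconfirmed phone number, etc.

def response_rate(d: Dispositions) -> float:
    """Response rate = C / (C + R + NC + O + UE)."""
    denom = d.completes + d.refusals + d.noncontacts + d.other + d.unknown_eligibility
    return d.completes / denom

# Hypothetical counts, not actual ATUS case totals.
example = Dispositions(completes=5780, refusals=2200, noncontacts=1100,
                       other=420, unknown_eligibility=500)
print(f"response rate: {response_rate(example):.1%}")  # -> 57.8%
```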

Table 1: ATUS response rates by year [3]

Year    Response rate (percent)
2003    57.8
2004    57.3
2005    56.6
2006    55.1
2007    52.5
2008    54.6
2009    56.6
2010    56.9
2011    54.6
2012    53.2
2013    49.9
2014    51.0
2015    48.5
2016    46.8
2017    45.6
2018    43.0
2019    42.0
2020    39.2
2021    39.4
2022    35.8

The intuitive significance of maintaining high response rates in surveys is widely recognized: high response rates are generally seen as indicative of survey quality, and ATUS's initial 70% target reflected this expectation. However, the persistently declining response rates shown in Table 1 raise concerns. A high response rate is traditionally valued because it reduces the risk of nonresponse bias, in which the characteristics of nonrespondents systematically differ from those of respondents, potentially skewing survey estimates.
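
To put the pace of the decline in concrete terms, the short calculation below uses only the rates published in Table 1 to derive the total and average annual drop; these summary figures are computed here rather than taken from [3].

```python
# Table 1 response rates (percent) as published in the ATUS User's Guide [3].
rates = {
    2003: 57.8, 2004: 57.3, 2005: 56.6, 2006: 55.1, 2007: 52.5, 2008: 54.6,
    2009: 56.6, 2010: 56.9, 2011: 54.6, 2012: 53.2, 2013: 49.9, 2014: 51.0,
    2015: 48.5, 2016: 46.8, 2017: 45.6, 2018: 43.0, 2019: 42.0, 2020: 39.2,
    2021: 39.4, 2022: 35.8,
}

first_year, last_year = min(rates), max(rates)
total_drop = rates[first_year] - rates[last_year]
per_year = total_drop / (last_year - first_year)
print(f"Total decline {first_year}-{last_year}: {total_drop:.1f} percentage points")
print(f"Average decline: {per_year:.2f} percentage points per year")
```

Roughly one percentage point is lost per year on average, with the largest single-year drops occurring in 2013, 2020, and 2022.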

3.2. Nonresponse Bias Due to Weak Community Ties

The hypothesis that weaker community ties lead to lower survey participation holds significant implications for the ATUS. Individuals with fewer social connections, who may live alone, have limited social engagements, or move frequently, may be less inclined or able to participate in surveys and are therefore potentially underrepresented in the data, which can skew the ATUS findings [1].

The bias resulting from this underrepresentation is significant because it affects the accuracy and generalizability of the survey findings. If individuals with weaker social ties, who may have distinct time use patterns, are not adequately represented, the survey results could misrepresent the actual time use behaviors of the broader population. For instance, these individuals might engage in different amounts or types of work, leisure, or community activities compared to more socially integrated individuals. Their absence in survey responses could lead to skewed estimates in activities such as volunteering, socializing, or even personal care. In essence, this type of bias – often referred to as nonresponse bias – occurs when the respondents' characteristics and behaviors systematically differ from those of non-respondents. This difference can lead to erroneous conclusions about population-level behaviors and trends, thereby limiting the reliability and applicability of the survey findings to inform policy and research.
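
The mechanics of this bias can be made explicit with the standard decomposition of nonresponse bias in a mean: the bias equals the nonresponse rate times the difference between respondent and nonrespondent means. The sketch below uses invented volunteering figures purely for illustration; the 1.5 and 0.5 weekly hours and the 40% response rate are assumptions, not ATUS estimates.

```python
def nonresponse_bias(response_rate: float, mean_respondents: float,
                     mean_nonrespondents: float) -> float:
    """Bias of the respondent mean relative to the full-population mean:
    bias = (1 - response_rate) * (mean_respondents - mean_nonrespondents)."""
    return (1.0 - response_rate) * (mean_respondents - mean_nonrespondents)

# Hypothetical figures: respondents (more socially integrated) report 1.5 hours
# of weekly volunteering, nonrespondents with weaker community ties only 0.5.
bias = nonresponse_bias(response_rate=0.40,
                        mean_respondents=1.5,
                        mean_nonrespondents=0.5)
print(f"overstatement of weekly volunteering: {bias:.2f} hours")  # 0.60 hours
```

The example shows that the size of the bias depends on both the nonresponse rate and, critically, on how different nonrespondents are from respondents.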

3.3. Relevance of Recent Research Indicating a Weak Link between Non-Response Rates and Bias

The general assumption in survey methodology is that low response rates may lead to significant nonresponse bias. However, Robert Groves challenges this assumption, suggesting that striving for high response rates does not automatically prevent nonresponse bias. Groves argues that efforts to boost response rates might simply attract more respondents of the same type, failing to create a more representative sample, and he emphasizes focusing on survey design and the characteristics of the target population to effectively reduce nonresponse bias [6]. The Dutch Time Use Survey study further explores this topic in the context of time use surveys. Contrary to common beliefs, that study found that busy individuals, engaged in work, sports, or volunteer activities, are more likely to respond to surveys, suggesting that busyness does not inherently lead to nonresponse bias. It also notes that higher response rates do not automatically equate to less nonresponse bias, challenging the traditional view that higher response rates improve survey representativeness [7]. Furthermore, recent research by Richard Hendra and Aaron Hill aligns with these perspectives: they found little correlation between nonresponse bias and response rates. Their study warns that chasing high response rates can extend survey duration and introduce additional measurement issues, implying that this pursuit may be a costly and ineffective strategy. They propose more efficient methods for addressing nonresponse bias and suggest that lower response rates might still yield valid results if surveys are monitored effectively during data collection [10]. These insights collectively suggest a need to rethink response strategies in surveys like the ATUS: rather than focusing solely on achieving high response rates, there should be a critical assessment of survey design, population characteristics, and innovative data collection techniques.
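
Groves' point that recruiting "more of the same kind" of respondent leaves bias untouched can be illustrated with a small numeric scenario. All shares and hours below are hypothetical and continue the volunteering example from Section 3.2.

```python
# Hypothetical population: 70% socially integrated (1.5 h/week volunteering),
# 30% weakly integrated (0.5 h/week). All numbers are illustrative.
def respondent_mean(share_integrated: float) -> float:
    return share_integrated * 1.5 + (1.0 - share_integrated) * 0.5

true_mean = respondent_mean(0.70)  # full-population mean: 1.20 h

# Scenario A: 40% response rate; respondents are 85% integrated.
# Scenario B: extra follow-up lifts the rate to 60%, but the converted cases
# are again mostly integrated people, so respondents are still 85% integrated.
for label, share_integrated in [("A, response rate 40%", 0.85),
                                ("B, response rate 60%", 0.85)]:
    est = respondent_mean(share_integrated)
    print(f"Scenario {label}: estimate {est:.2f} h, bias {est - true_mean:+.2f} h")
```

Both scenarios yield the same overstatement, because the extra effort raised the response rate without changing who responds.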

4. ATUS Methodology and Response Strategies

4.1. Analysis of the ATUS's Methodological Approach over the Years

The American Time Use Survey (ATUS), since its introduction in 2003, has served as a foundational tool for capturing the daily rhythms of American lives. From the outset, ATUS has relied on interviewer-administered telephone interviews, a method that, while resource-intensive, supports a high level of respondent engagement and data accuracy. As technologies advance and societal behaviors shift, time-use researchers have increasingly explored web- and app-based collection modes in the hope of maintaining representative samples while accommodating the modern respondent's convenience.

The stability and reliability of response rates in surveys like the American Time Use Survey (ATUS) can be significantly influenced by the methods used for data collection. Traditionally, ATUS has employed the Computer Assisted Telephone Interview (CATI) system for conducting interviews. This method involves interviewers reading scripted texts and entering responses reported by respondents [11]. However, for the time-use diary, the core component of ATUS, a conversational interviewing approach is adopted. The effectiveness of these methods in maintaining stable response rates hinges on their adaptability to various circumstances.

These methodologies, however, proved vulnerable to disruption. During the COVID-19 pandemic, ATUS faced unprecedented challenges that impacted its data collection and, consequently, its response rates. In March 2020, the Census Bureau temporarily shut down its call and processing centers, halting ATUS data collection and the mailing of survey materials. This suspension, lasting from mid-March to mid-May, created a two-month gap in data collection. When operations resumed in May with reduced capacity, only a few interviewers were equipped to conduct surveys remotely, often without mailing materials to respondents. The response rate for ATUS in 2020 dropped to 39%, compared with 42% in 2019, reflecting the operational disruptions and the difficulty of contacting sample members during the closures [3]. These events underscore the vulnerability of traditional survey methods to unforeseen events like pandemics and the potential impact on survey response rates.

4.2. Exploring New Technologies for Time Diary Data Collection

The Census Bureau continued to rely on telephone interviewing for ATUS even during the pandemic. But how should ATUS respond to future disruptions in order to keep data collection running for as long as possible and prevent further declines in response rates? The study by Chatzitheochari et al. explored new technologies for time diary data collection, contrasting them with the traditional paper-and-pencil methods used in earlier time-use surveys [12]. This research, part of the UK Millennium Cohort Study, introduced two innovative instruments: a web-administered diary and a smartphone app. The study aimed to reduce respondent burden and administration costs and to improve data quality. Participants could choose between the new methods, with paper diaries offered as a backup. The pilot showed a clear preference for the technology-based instruments: the app was the most popular choice (41%), followed by the web-based instrument (28%) and the paper diary (20%). Analysis revealed that web diarists were most likely to produce good-quality diaries (97%), followed by app diarists (approximately 83%); paper diaries had a lower rate of good-quality submissions, partly due to the lack of interviewer follow-up in this study. The study suggested that new technologies could improve response rates and data quality without the need for an interviewer to verify entries, indicating that adopting new methods could lead to more efficient and accurate data collection in time-use studies.
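
As a rough way of combining the pilot's mode take-up and data-quality figures, the sketch below computes the overall share of good-quality diaries. The paper-diary quality rate used here (60%) is an assumed placeholder, since the study reports only that it was lower than for the web and app instruments.

```python
# Mode take-up and good-quality diary rates from the pilot described above;
# the paper-diary quality rate (0.60) is an assumed placeholder.
modes = {
    "app":   {"take_up": 0.41, "good_quality": 0.83},
    "web":   {"take_up": 0.28, "good_quality": 0.97},
    "paper": {"take_up": 0.20, "good_quality": 0.60},  # assumption
}

any_diary = sum(m["take_up"] for m in modes.values())
good_diary = sum(m["take_up"] * m["good_quality"] for m in modes.values())
print(f"share of sample returning any diary:      {any_diary:.0%}")
print(f"share returning a good-quality diary:     {good_diary:.0%}")
print(f"good-quality rate among returned diaries: {good_diary / any_diary:.0%}")
```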

Online surveys introduce strengths such as cost-effectiveness, rapid data collection, and the potential to access a diverse participant pool, but they also present weaknesses and biases, including challenges in achieving a representative sample, lower response rates that might affect the survey's accuracy, and limitations in the depth and reliability of the data collected [13]. The Internet has significantly influenced survey research; however, Internet coverage rates are still lower than telephone rates, creating a "digital divide." This disparity affects Internet use across demographic groups, with lower rates among older adults, less educated individuals, the poor, the unemployed, noncitizens, and minorities, and such disparities can introduce coverage bias in Web surveys [14]. New technologies and methodologies thus have the potential to improve data collection and to address biases inherent in traditional survey methods. However, in the face of declining response rates and the impact of emergencies, adapting survey design to modern technological and societal changes remains important for improving the accuracy and reliability of survey data.

5. Conclusion

The American Time Use Survey (ATUS) faces significant challenges due to declining response rates over the years, raising concerns about nonresponse bias and the potential for skewed findings, particularly in representing segments of the population with weaker community ties. At the same time, the relationship between declining response rates and the emergence of bias is not straightforward: contrary to initial assumptions, recent studies, including those by Groves, indicate that merely increasing response rates does not necessarily overcome these biases. Looking forward, the integration of new technologies, such as web-based diaries and smartphone applications, as highlighted by Chatzitheochari et al., appears to offer a viable path to enhancing response rates and improving data quality. By combining traditional methods with these innovative approaches, ATUS may stabilize its response rate and thereby ensure better representation in its data. However, these technologies must be implemented with an awareness of issues like the digital divide so that the sample remains representative.


References

[1]. Frazis, H., Stewart, J.: Where Does the Time Go? Concepts and Measurement in the American Time Use Survey. In: Hard-to-Measure Goods and Services: Essays in Honor of Zvi Griliches. pp. 73–97. University of Chicago Press (2007)

[2]. Hamermesh, D.S., Frazis, H., Stewart, J.: Data Watch: The American Time Use Survey. J. Econ. Perspect. 19, 221–232 (2005). https://doi.org/10.1257/0895330053148029

[3]. U.S. Bureau of Labor Statistics: American Time Use Survey User's Guide (2023)

[4]. Horrigan, M., Herz, D.: Planning, designing, and executing the BLS American Time-Use Survey. Monthly Labor Review (2004)

[5]. Abraham, K.G., Maitland, A., Bianchi, S.M.: Nonresponse in the American Time Use Survey: Who Is Missing from the Data and How Much Does It Matter? Public Opin. Q. 70, 676–703 (2006)

[6]. Groves, R.M.: Nonresponse Rates and Nonresponse Bias in Household Surveys. Public Opin. Q. 70, 646–675 (2006)

[7]. Ingen, E.V., Stoop, I., Breedveld, K.: Nonresponse in the Dutch Time Use Survey: Strategies for Response Enhancement and Bias Reduction. Field Methods. 21, 69–90 (2009). https://doi.org/10.1177/1525822X08323099

[8]. Drago, R., Caplan, R., Costanza, D., Brubaker, T., Cloud, D., Donohue, S., Harris, N., Riggs, T.: Time for Surveys: Do Busy People Complete Time Diaries? Loisir Société Soc. Leis. 21, 555–562 (1998). https://doi.org/10.1080/07053436.1998.10753670

[9]. Pääkkönen, H.: Are Busy People Under-or Over-represented in National Time Budget Surveys? Loisir Société Soc. Leis. 21, 573–582 (1998). https://doi.org/10.1080/07053436.1998.10753672

[10]. Hendra, R., Hill, A.: Rethinking Response Rates: New Evidence of Little Relationship Between Survey Response Rates and Nonresponse Bias. Evaluation Review (2019). https://journals.sagepub.com/doi/epub/10.1177/0193841X18807719

[11]. Data sources: Handbook of Methods: U.S. Bureau of Labor Statistics, https://www.bls.gov/opub/hom/atus/data.htm

[12]. Chatzitheochari, S., Fisher, K., Gilbert, E., Calderwood, L., Huskinson, T., Cleary, A., Gershuny, J.: Using New Technologies for Time Diary Data Collection: Instrument Design and Data Quality Findings from a Mixed-Mode Pilot Survey. Soc. Indic. Res. 137, 379–390 (2018). https://doi.org/10.1007/s11205-017-1569-5

[13]. Nayak, M., K A, N.: Strengths and Weakness of Online Surveys. 24, 31–38 (2019). https://doi.org/10.9790/0837-2405053138

[14]. Couper, M.P.: New Developments in Survey Data Collection. Annu. Rev. Sociol. 43, 121–145 (2017). https://doi.org/10.1146/annurev-soc-060116-053613


Cite this article

Bai,Y. (2024). Evaluating Non-response Rates and Bias in the American Time Use Survey. Lecture Notes in Education Psychology and Public Media,37,276-282.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2nd International Conference on Social Psychology and Humanity Studies

ISBN:978-1-83558-275-6(Print) / 978-1-83558-276-3(Online)
Editor:Kurt Buhring
Conference website: https://www.icsphs.org/
Conference date: 1 March 2024
Series: Lecture Notes in Education Psychology and Public Media
Volume number: Vol.37
ISSN:2753-7048(Print) / 2753-7056(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
