Archive for the 'Methodology' Category

Big changes coming to IRBs . . . . still time to comment

OHRP has released its notice of proposed rulemaking, which makes significant changes to the Common Rule.

Federal Register: Federal Policy for the Protection of Human Subjects

[Comment Link]
Comments are accepted until 12/07/2015 at 11:59 PM EST.

If you need to get up to speed, the National Academy of Sciences published a book in 2014 on the first release of proposed changes to the Common Rule. It is available on-line, as a PDF, or as a printed book.

Will this be an election issue in the US?

The Canadian election campaign period is much shorter than in the US. The Canadian election will take place on October 19, 2015, and campaigning started on August 2nd of this year.

Another difference from the US is the types of issues candidates are discussing – specifically, science policy and the long-form census. Will these be issues in the US? Doubtful, but let’s watch the debates and see.

Below is recent coverage in the Canadian press about the long-form census and science policy being issues, at least among the NDP and Liberals:

Reviving the Census Debate
Donovan Vincent | The Star
September 12, 2015
The Liberals and the NDP have said they want to bring back the long-form census the Conservatives killed in 2010. Could it become an election issue?

Researchers try to make science a federal election issue
Julie Ireton | CBC News
September 3, 2015

Here is a running list of organizations that were for or against the Harper government’s cancellation of the mandatory long-form census.

Here is previous coverage in this blog about Canada’s war on science and follies with their census.

Changes to NBER Working Paper E-Mails

Neil Irwin of the NYTimes Upshot writes about why the National Bureau of Economic Research decided to change the way working papers are presented in its weekly e-mail.

No editorial judgment goes into the sequence in which the working papers appear. It is random, based on the order in which the paper was submitted and in which the N.B.E.R. approval process was completed. In other words, there is no inherent reason to think that the first paper listed is more groundbreaking, important or interesting than the third or 17th.

But a lot more people read the first one listed. Showing up first in the email generated a 33 percent increase in the number of people who clicked on the working paper and a 29 percent increase in the number who downloaded it.

Police calls and blurry neighborhood boundaries

Here’s a great piece using a mix of administrative data (complaint calls to the police), on-line forums, spatial data, and traditional census data to see what happens in the transition zones across neighborhoods. The first link is to the easy-to-read version as reported in CityLab; the second is the original piece, with more details about the methodology.

When Racial Boundaries Are Blurry, Neighbors Take Complaints Straight to 311
Laura Bliss | CityLab
August 25, 2015
In NYC, calls about noise and blocked driveways are most frequent in zones between racially homogenous neighborhoods.

Contested Boundaries: Explaining Where Ethno-Racial Diversity Provokes Neighborhood Conflict
Joscha Legewie and Merlin Schaeffer | Presented at the American Sociological Association meetings
August 21, 2015

Using Probability in Criminal Sentencing

FlowingData extracts a statistics lesson on probability from a piece in FiveThirtyEight about risk assessment and criminal sentencing.

New Book From the Oxford Poverty & Human Development Initiative

The Oxford Poverty & Human Development Initiative published a book on Multidimensional Poverty Measurement & Analysis:

Multidimensional poverty measurement and analysis is evolving rapidly. Quite recently, a particular counting approach to multidimensional poverty measurement, developed by Sabina Alkire and James Foster, has created considerable interest. Notably, it has informed the publication of the Global Multidimensional Poverty Index (MPI) estimates in the Human Development Reports of the United Nations Development Programme since 2010, and the release of national poverty measures in Chile, Mexico, Colombia, Bhutan and the Philippines. The academic response has been similarly swift, with related articles published in both theoretical and applied journals.

The high and insistent demand for in-depth and precise accounts of multidimensional poverty measurement motivates this book, which is aimed at graduate students in quantitative social sciences, researchers of poverty measurement, and technical staff in governments and international agencies who create multidimensional poverty measures.

Draft chapters are available online.
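For readers who want the gist of the Alkire-Foster counting approach behind the MPI, here is a minimal sketch. The data, weights, and cutoff below are invented for illustration; the method itself is the dual-cutoff counting measure the book describes (a person is poor if their weighted deprivation score reaches a cutoff k, and the index is the headcount ratio times the average intensity among the poor).

```python
# Minimal sketch of the Alkire-Foster (AF) counting method.
# Illustrative only: the deprivation matrix, weights, and cutoff k are made up.

def af_mpi(deprivations, weights, k):
    """deprivations: one row per person; 1 = deprived in that indicator.
    weights: indicator weights summing to 1.  k: poverty cutoff on the weighted score."""
    n = len(deprivations)
    # Weighted deprivation score for each person.
    scores = [sum(w * d for w, d in zip(weights, row)) for row in deprivations]
    # A person is multidimensionally poor if their score reaches the cutoff k.
    poor = [s for s in scores if s >= k]
    H = len(poor) / n                              # headcount ratio
    A = sum(poor) / len(poor) if poor else 0.0     # average intensity among the poor
    return H * A                                   # adjusted headcount ratio M0

# Three people, three equally weighted indicators, cutoff k = 1/3.
people = [[1, 1, 0],   # deprived in two indicators
          [1, 0, 0],   # deprived in one indicator
          [0, 0, 0]]   # not deprived
print(af_mpi(people, [1/3, 1/3, 1/3], 1/3))
```

Here two of three people are poor (H = 2/3) with average intensity 1/2, giving M0 = 1/3. The draft chapters cover weighting, indicator choice, and decomposition in full.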

Poll Results and Response Rates

Scott Keeter, Pew Research Center’s director of survey research, discusses declining response rates and what it means for survey reliability.

Apple Research Kit: New Frontiers in Data Collection & Informed Consent

Apple ResearchKit allows researchers to develop an iPhone app, which interested respondents can download from the App Store. The respondent goes through an on-line consent form and then responds to questions, tasks (e.g., walking), etc. Some of the diagnostic tools are based on previously developed features from Apple HealthKit.

As of now, apps have been developed to collect data for research projects on asthma, cardiovascular disease, diabetes, Parkinson’s disease, and mind, body, and wellness after breast cancer, as well as for a population-based study of the LGBTQ population.

Here is a description of the informed consent process for these iPhone apps:
Participant-Centered Consent Toolkit

Listed below are a few press releases associated with the Pride Study – the population-based study of the LGBTQ population. Following those are some more general critiques of this way of gathering data. The post from the Verge is probably the most critical, raising issues of “on the internet no one knows you are a dog” and gaming of the consent process (lying about eligibility for the study). On the plus side, the participant pool will be easier to recruit and won’t be limited to those who live close to research hospitals. Here is an excerpt from Business Insider on the reaction to the app launch for the Stanford heart study:

It’s really incredible … in the first 24 hours of research kit we’ve had 11,000 people sign up for a study in cardiovascular disease through Stanford University’s app. And, to put that in perspective – Stanford has told us that it would have taken normally 50 medical centers an entire year to sign up that many participants. So, this is – research kit is an absolute game changer.

The participant pool is limited to iPhone users (no android version of these apps), although some will have a web interface (the Pride Study).

Launch of the Pride Study
UCSF Researchers Launch Landmark Study of LGBTQ Community Health
Jyoti Madhusoodanan | UCSF Press Release
June 25, 2015

A big LGBT health study is coming to the iPhone
Stephanie M. Lee | BuzzFeed
June 25, 2015

How The iPhone Is Powering A Massive LGBT Health Study
Kif Leswing | International Business Times
June 25, 2015

Critiques of the Apple ResearchKit
Apple’s new ResearchKit: ‘Ethics quagmire’ or medical research aid?
Arielle Duhaime-Ross | The Verge
March 10, 2015

In-Depth: Apple ResearchKit concerns, potential, analysis
March 9, 2015

What’s the Matter with Polling?

What is the Matter with Polling?
Cliff Zukin | New York Times
June 20, 2015

This article focuses on political polling – and predictions from political polls – but much of the content is relevant to other telephone-based opinion surveys, many of which are used by social scientists: the Survey of Consumers, Pew, Gallup, etc.

It covers (a) the move from landlines to cellphones; (b) the growing non-response rate; (c) costs; and (d) sample metrics, e.g., representativeness.

The decline in landline phones makes telephone surveys more expensive, since cell phones cannot be reached through automatic dialers. The landline vs. cellphone distribution comes from the National Health Interview Survey; here’s a recent summary of the data. The article summarizes it as follows: “About 10 years ago . . . about 6 percent of the public used only cellphones. The N.H.I.S. estimate for the first half of 2014 found that this had grown to 43 percent, with another 17 percent ‘mostly’ using cellphones. In other words, a landline-only sample conducted for the 2014 elections would miss about three-fifths of the American public, almost three times as many as it would have missed in 2008.”
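The “three-fifths” figure follows directly from the two N.H.I.S. shares quoted above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the N.H.I.S. figures quoted above:
# 43 percent cell-only plus 17 percent "mostly" cellphone in early 2014.
cell_only = 0.43
mostly_cell = 0.17
# Share of the public a landline-only sample would miss or badly under-reach.
missed_2014 = cell_only + mostly_cell
print(round(missed_2014, 2))  # 0.6, i.e., "about three-fifths" of the public
```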

The other issue for polling is the growing non-response rate.

When I first started doing telephone surveys in New Jersey in the late 1970s, we considered an 80 percent response rate acceptable, and even then we worried if the 20 percent we missed were different in attitudes and behaviors than the 80 percent we got. Enter answering machines and other technologies. By 1997, Pew’s response rate was 36 percent, and the decline has accelerated. By 2014 the response rate had fallen to 8 percent.

Non-response also makes surveys more expensive: more numbers must be called to find a respondent, many of them dialed by hand if the sample includes cellphones. And, most important, there is the question of how representative the resulting sample is. So far, surveys based on probability samples still seem to be representative, at least judging by comparisons of sample characteristics to gold-standard benchmarks like the American Community Survey (ACS). Participation in the ACS is mandatory, although for the last several years Republicans in the House have tried to remove this requirement. Canada did away with the mandatory requirement for its long-form census, with disastrous results. The following is a compilation of posts related to the mandatory response requirement in the US and Canada: [Older Posts]
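Checking a sample against an ACS-style benchmark usually means comparing marginal distributions and reweighting where they diverge. A minimal post-stratification sketch – the categories and target shares are invented for illustration:

```python
# Hypothetical post-stratification: reweight respondents so the sample's
# education distribution matches benchmark shares (ACS-style targets).
# All numbers here are invented for illustration.

sample = ["hs", "hs", "college", "college", "college", "college"]
benchmark = {"hs": 0.40, "college": 0.60}   # pretend population shares

n = len(sample)
counts = {g: sample.count(g) for g in benchmark}
# Weight for each group = population share / sample share.
weights = {g: benchmark[g] / (counts[g] / n) for g in benchmark}

# Weighted sample shares now match the benchmark.
weighted = {g: counts[g] * weights[g] / n for g in benchmark}
print(weighted)
```

Real survey weighting rakes over several margins at once (age, sex, education, region), but the underweight-the-overrepresented logic is the same.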

Measuring Race . . . Again

The following is a collection of news stories on how the Census Bureau is planning to collect data on race. It would be misleading to say that the Census Bureau will not collect data on race. Instead of asking separately about Hispanic origin and race, the Census Bureau is likely to ask about “categories” that describe the person.

And, a new category might be “Middle Eastern or North African.”

The Census Bureau collects data on all sorts of topics, but the Office of Management and Budget (OMB) makes the final call on how a concept is measured by the Federal Statistical System. Links to the Census Bureau’s submission to OMB and a report based on internal research follow a nice summary by Pew.

Census considers new approach to asking about race – by not using the term at all
D’Vera Cohn | Pew Research Center
June 18, 2015

2010 Census Race and Hispanic Origin Alternative Questionnaire Experiment
from the 2010 Census Program for Evaluations and Experiments
Feb 28, 2013

National Content Test
Submission for OMB Review | Federal Register
May 22, 2015