Interest and investment in big data infrastructure have created an urgent demand for statistical methodologies that extend to novel and complex structures, e.g., massive, distributed, or streaming data. The designers of these methodologies must consider both statistical and computational efficiency; thus, the theoretical framework under which these methodologies are developed and evaluated must reflect this trade-off. In this roundtable, we will discuss the role of computational efficiency in the context of statistical methodology for big data. Planned discussion topics include: (i) guidelines for algorithm design and evaluation; (ii) graduate training for statistical thinking with big data; and (iii) the role of statistics in data science. However, the discussion may deviate from these topics depending on the interests of the attendees.