Answers to True or False Exercises


Chapter 1

  1. False. Data can never perfectly capture reality.

  2. False. Human beings will always be subject to confirmation bias.

  3. True. Even if the data is collected using the same mechanism, the underlying real-world trends may have changed.

  4. False. The results of your analysis correspond to just one possible answer that you could have obtained. Rather than proof of your conclusion, your analysis provides evidence of your conclusion.

  5. True. There are many ways that you can evaluate the predictability of your results, including showing that they are in line with domain knowledge and that they reemerge in future data.

Chapter 2

  1. True. Every data science problem can be formulated in multiple different ways.

  2. False. It is also important for the analyst themselves to determine whether their results are predictable and stable, even if they will not be presenting them to an external audience.

  3. True. The DSLC is non-linear. Often you will learn things about your data in the later stages that will require you to backtrack to your cleaning or preprocessing steps or conduct further exploration.

  4. False. There is no single correct way to conduct an analysis; indeed, there are often many reasonable approaches (that’s not to say that there are no incorrect ways).

  5. False. While some of the stages are very important in every project, such as the data cleaning and the exploratory data analysis (EDA) stages, there are some stages, such as the prediction stage, that may not be relevant to all projects.

  6. True. It is very common for results to be affected by the judgment calls made early in the DSLC, including at the data cleaning stage.

  7. True. Since PCS evaluations involve stress-testing your results in new scenarios (such as applying them to future data), they can help identify when a finding is merely a reflection of the coincidental patterns among the specific data points used to generate it. In other words, PCS evaluations can help prevent data snooping.

  8. False. PCS uncertainty quantification is an extension of traditional statistical inference that takes into consideration uncertainty arising from judgment calls, rather than just from the data sampling mechanism (as in traditional statistical inference).

  9. False. Communication with domain experts is important throughout the entire DSLC.

Chapter 3

  1. True. Your dslc_documentation files can (and should) contain code.

  2. False. While it is OK to edit a copy of your original raw data file if you absolutely must make manual edits (and be sure to document any edits that you make), you should avoid editing the original file itself.

  3. False. Quite the opposite: you should regularly rerun all your code, including the code that cleans your data, to maximize the reproducibility potential of your results.

  4. False. This is subjective, but we strongly believe that neither language is universally better than the other for data science (although each language may be better than the other for some particular types of projects).

  5. False. There is a broad spectrum of types of reproducibility, and this definition encompasses just one of them.

  6. False. The reproducibility crisis is a problem that is affecting a range of scientific fields.

  7. True. This is an excellent way to increase the chance that your results are reproducible, at least in the context of ensuring that you get the same results every time that your code is run.

  8. False. It is possible that the other person did not correctly implement their reproduction, or that they simply arrived at a different answer because different people make different judgment calls.

  9. False. This is strong evidence that your results are “trustworthy,” but this doesn’t prove that your results are “correct.” Moreover, you may both be wrong!

Chapter 4

  1. False. Every dataset should at least be examined to determine if it needs to be cleaned.

  2. True. This minimizes the chance that you will accidentally corrupt the original data file.

  3. True. Clean data can contain missing values if they are properly formatted as missing (e.g., as NA values in R).

  4. True. Since the preprocessing steps that you will implement depend on the algorithm/analysis that you will conduct, missing values may or may not be a problem. In most (but not all) scenarios, however, you will end up removing or imputing missing values during preprocessing.

  5. False. Removing the offending rows is one way to handle missing values, but it is rarely the best way since it is likely to introduce bias into your data. Often, it is better to impute missing values or remove unimportant columns with large numbers of missing values.

  6. True. The set of valid values for a variable is indeed usually determined based on domain knowledge and how the data was collected.

  7. False. Just because a value is surprising doesn’t mean that it is incorrect or invalid. It does, however, warrant further exploration (e.g., by asking the person who collected the data how such a value might have arisen). If it is clear that the value is invalid or incorrect, then you may want to replace it with a missing value, but this should not be automatic.

  8. False. There are many reasonable ways to clean every dataset, and everyone will likely clean the same dataset slightly differently.

  9. True. This is because you don’t want to use information from the validation and test sets when cleaning your training data.

  10. False. Exploring your data is part of cleaning it.

  11. False. While we often find it helpful to keep the cleaning and preprocessing stages separate (so we can explore a clean version of the data that has not been modified to suit a particular algorithm), it is not necessary to do so if it feels overly complicated.

  12. False. You can instead add different arguments to your preprocessing function to create different versions of your data for algorithms with different formatting requirements (see the sketch below).
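
To illustrate answer 12, here is a minimal sketch (using hypothetical function and argument names, which do not come from the book) of a preprocessing function whose arguments control which version of the cleaned data is produced:

```r
# A minimal sketch (hypothetical names) of a preprocessing function whose
# arguments control which version of the cleaned data is produced.
preprocess_data <- function(data,
                            impute_missing = TRUE,
                            create_dummy_variables = FALSE) {
  if (impute_missing) {
    # impute missing numeric values using the column means
    numeric_cols <- sapply(data, is.numeric)
    data[numeric_cols] <- lapply(data[numeric_cols], function(x) {
      x[is.na(x)] <- mean(x, na.rm = TRUE)
      x
    })
  }
  if (create_dummy_variables) {
    # one-hot encode categorical variables for algorithms that require
    # exclusively numeric predictors
    data <- as.data.frame(model.matrix(~ . - 1, data = data))
  }
  data
}

# e.g., a version of the data suitable for an algorithm that needs dummy variables:
# ls_data <- preprocess_data(clean_data, create_dummy_variables = TRUE)
```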

Chapter 5

  1. True. You should always evaluate the predictability and stability of every explanatory finding, especially if you are going to present the findings. It is also a good idea to evaluate the predictability and stability of any exploratory findings that seem important.

  2. True. You will tend to make a lot of exploratory findings, but only those that are actually conveying important information that you want to communicate to an external audience need to be turned into explanatory figures.

  3. True. Exploratory (and explanatory) findings can include numeric summaries and tables.

  4. False. We recommend using color sparingly, primarily when it adds information. Color can also be used for purely aesthetic purposes, but its use should be kept as simple as possible.

  5. False. The correlation describes the tightness of the points around a line that can be at any angle.

  6. True. Correlation captures the strength only of the linear relationship between two variables (see the sketch below).

  7. False. Whether to use the mean or the median is context dependent. The mean will be more influenced by the outliers, which may be better in some scenarios, but not others.

  8. False. The amount of detail that you should present depends on your audience. A research paper may call for more details than a presentation slide or a general public-facing information pamphlet.

  9. True. Variables with a causal relationship will also be correlated (but not necessarily the other way around).

  10. False. Correlated variables will not necessarily be causally related. There may be another variable that simultaneously and independently affects both variables.
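
To illustrate answer 6, here is a minimal sketch (using simulated data, not data from the book) showing that correlation measures only the linear relationship between two variables:

```r
# Simulated example: y_quadratic is perfectly determined by x, but the
# relationship is quadratic, so the (linear) correlation is close to zero.
x <- seq(-3, 3, length.out = 200)
set.seed(1)
y_linear <- 2 * x + rnorm(200, sd = 0.5)   # a noisy linear relationship
y_quadratic <- x^2                         # a perfect, but nonlinear, relationship

cor(x, y_linear)      # close to 1: strong linear association
cor(x, y_quadratic)   # close to 0, despite y being fully determined by x
```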

Chapter 6

  1. False. SVD is a general technique used for a range of problems, one of which is principal component analysis.

  2. True. The fact that every matrix has an SVD is what makes it such a versatile technique.

  3. False. The principal component variable loading magnitudes are comparable only if the variables have the same scale (e.g., they have been SD-scaled).

  4. False. Even if all the values in the data are positive, principal component analysis can still yield negative variable loadings. This is evident from the examples shown in this chapter.

  5. False. While the first principal component typically contains the most information, there are many applications where you will want to consider multiple principal components beyond just the first one.

  6. True. If you clean or preprocess your data differently, you may end up with slightly different principal components.

  7. True. Principal component analysis can be applied only to datasets with numeric variables. If your data contains non-numeric variables, you can still apply principal component analysis if you convert the non-numeric variables to a numeric format.

  8. True. Principal component analysis is specifically designed to compute linear summaries.

  9. False. The reverse is true: the projected data points will exhibit greater variability when projected onto the first principal component than the second principal component.

  10. False. Principal component analysis does not require that the original variables have a Gaussian distribution, but you will often find more meaningful components when they do.

  11. True. This is exactly what a scree plot does.
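
To illustrate answers 3 and 11, here is a minimal sketch (using R’s built-in mtcars data, which is an assumption and does not appear in the book) that SD-scales the variables before computing the principal components and then produces a simple scree plot:

```r
# SD-scaling the variables (scale. = TRUE) makes the loading magnitudes
# comparable across variables (answer 3).
pca_result <- prcomp(mtcars, center = TRUE, scale. = TRUE)

# variable loadings for the first two principal components
pca_result$rotation[, 1:2]

# scree plot: proportion of variability explained by each component (answer 11)
variance_explained <- pca_result$sdev^2 / sum(pca_result$sdev^2)
plot(variance_explained, type = "b",
     xlab = "Principal component", ylab = "Proportion of variance explained")
```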

Chapter 7

  1. False. Clustering can be based on a range of different distance metrics, but the Euclidean distance metric is the most common.

  2. True. Since the initial cluster centers are chosen randomly, the K-means clustering results may differ each time the algorithm is run.

  3. False. The K-means algorithm only approximately minimizes the K-means loss function.

  4. False. Sometimes the K-means clusters will yield a higher silhouette score, but not always.

  5. True. One definition of a good cluster is indeed one that is tight and distinct.

  6. True. Even though the hierarchical clustering algorithm does not involve computing cluster centers, it is possible to compute the cluster centers of hierarchical clusters by computing the average value of each variable across all the observations in the cluster.

  7. False. The Rand Index does not require that the two sets of clusters being compared contain the same number of clusters apiece, but rather that they are based on the same set of observations.

  8. True. Since the Rand Index involves comparing the clustering of each pair of data points, it requires that the two sets of clusters are of the same set of data points.

  9. False. Since the WSS will generally decrease as the number of clusters \(K\) increases, it is not recommended for use when comparing sets of clusters that contain differing numbers of clusters (i.e., different values of \(K\)).

  10. True. The silhouette score, unlike the WSS, can be used to compare sets of clusters that contain differing numbers of clusters (i.e., different values of \(K\)); see the sketch below.

  11. True. Cross-validation (CV) is a widely used hyperparameter selection technique for a range of data science algorithms that goes beyond just clustering.

  12. False. Since it is still based on the training data, CV does not provide a reasonable approximation of the performance of an algorithm on external data (but it is a useful technique for hyperparameter selection).
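
To illustrate answer 10, here is a minimal sketch (using simulated data and the cluster package, both assumptions for illustration) that compares the average silhouette width of K-means clusterings with different values of \(K\):

```r
library(cluster)  # provides silhouette()

# simulated data with three underlying groups (an assumption for illustration)
set.seed(2)
sim_data <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
                  matrix(rnorm(100, mean = 4), ncol = 2),
                  matrix(rnorm(100, mean = 8), ncol = 2))
dist_matrix <- dist(sim_data)

# compare the average silhouette width across several choices of K
for (k in 2:5) {
  km <- kmeans(sim_data, centers = k, nstart = 10)
  sil <- silhouette(km$cluster, dist_matrix)
  cat("K =", k, " average silhouette width =", mean(sil[, "sil_width"]), "\n")
}
```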

Chapter 8

  1. True. There are often many ways to define a response variable (e.g., some response variables can be formatted as either a binary variable or a continuous variable).

  2. False. Labeled data is required for both training and evaluation. If you don’t know the true/observed response for the observations that you are using for evaluation, you cannot determine how accurate your predicted responses are.

  3. False. There are many scenarios where a random split may not be appropriate, such as when there is a time dependence in your data or when there are repeated observations from the same person/country/group (see the sketch below).

  4. True. A predictive relationship does not imply a causal relationship. It may be that your predictor variable tends to have large values whenever your response variable does, but this doesn’t mean that an increase in the predictor variable will cause an increase in the response variable.

  5. True. These are all different terms commonly used (both in this book and outside of it) to mean the same thing.

  6. False. Cluster labels do not necessarily correspond to real-world groups; rather, they capture groups of similar data points.

  7. False. While collecting more training data will often lead to improved predictive performance, this is not always the case.

  8. False. Collecting additional predictor variables will improve the predictive performance only if the new variables have information that is actually predictive of the response variable.
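
To illustrate answer 3, here is a minimal sketch (assuming a hypothetical data frame my_data with a date column; these names are not from the book) of a time-based training/validation/test split, which is often more appropriate than a random split when the data has a time dependence:

```r
# Assumes a data frame `my_data` with a `date` column (hypothetical names).
my_data <- my_data[order(my_data$date), ]
n <- nrow(my_data)

# time-based split: earliest 60% training, next 20% validation, final 20% test
train_data <- my_data[1:floor(0.6 * n), ]
val_data   <- my_data[(floor(0.6 * n) + 1):floor(0.8 * n), ]
test_data  <- my_data[(floor(0.8 * n) + 1):n, ]

# a purely random split, by contrast, would ignore the time ordering:
# train_index <- sample(1:n, size = floor(0.6 * n))
```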

Chapter 9

  1. False. Since the absolute loss function is not differentiable at its minimum, there is no explicit mathematical formula for computing the linear fit that minimizes it.

  2. True. Since the squared loss function is differentiable at its minimum, there is a computable formula that identifies the linear fit that minimizes it (see the formula below).

  3. False. You may use different loss functions for the processes of evaluation and training.

  4. False. While the LAD algorithm will usually produce more accurate predictions on the validation set than the LS algorithm when using the MAE predictive performance measure, this will not always be the case.

  5. True. Since the rMSE is the square root of the mean squared error (MSE), the rMSE and the MAE are on the same scale.

  6. True. Since MSE and MAE are measuring different things, one algorithm may yield predictions that are closer to the “true” response in terms of average squared error (MSE) but not in absolute error (MAE).

  7. True. If you have shown that the algorithm generates accurate predictions of data in a new context, then it is acceptable to use the algorithm in this new context.

  8. True. Since the algorithm was trained on the patterns and relationships that exist within the training data itself, its ability to generate accurate predictions for the training data is no guarantee that it will be able to do so for new data (because the new data might feature slightly different patterns and relationships).

  9. False. One example of a predictability assessment of a predictive algorithm is to compute the MAE for the predictions computed for the validation data (rather than the training data).
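
For reference (this formula is standard, but it is not part of the original answers), the explicit solution mentioned in answer 2 is the least squares coefficient vector that minimizes the squared loss, which can be written in matrix form as

\[
\hat{\boldsymbol{\beta}} = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{y},
\]

where \(\mathbf{X}\) is the matrix of predictor variables (including a column of 1s for the intercept term) and \(\mathbf{y}\) is the vector of observed responses. No analogous closed-form expression exists for the absolute loss (answer 1), which is why the LAD fit must instead be computed using iterative optimization techniques.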

Chapter 10

  1. False. This is true only if either (a) the original variables were on the same scale or have been standardized to a common scale before fitting the LS algorithm, or (b) the coefficients have been standardized (e.g., using the bootstrap) before making the comparison.

  2. True. The LS algorithm requires that all the predictor variables are numeric (binary variables are a type of numeric variable). Categorical variables thus need to be transformed to a numeric type (e.g., by creating one-hot encoded/dummy variables). Many algorithmic implementations, such as lm() in R, will do this conversion under the hood for you.

  3. True. Although correlation isn’t a particularly good measure of association for binary variables, this doesn’t mean that you can’t use it.

  4. False. Collecting additional predictor variables will improve the predictive performance only if the new variables contain information that is actually predictive of the response variable.

  5. False. While the LS algorithm only captures linear relationships between the variables that are provided, if you apply a non-linear transformation (such as a logarithmic transformation) to either the predictor or response variables, LS can be used to capture non-linear relationships between the predictor and response variables.

  6. True. While the preprocessing stage may seem fairly removed from the prediction stage of the DSLC, the judgment calls that you make during preprocessing can absolutely affect your predictions (which is why we assess their effect on our results using stability assessments).

  7. True. This is the very definition of overfitting.

  8. False. It depends on how you define the “best” regularization hyperparameter. For instance, sometimes it is “better” to choose a slightly larger value of the regularization hyperparameter (e.g., the largest value within 1 SE of the value that achieves the minimum CV error; see the sketch below).

  9. False. More regularization means that the coefficients are shrunk closer to 0 (away from the original LS coefficient values).

  10. False. Although the LS algorithm involves minimizing an L2 (squared) loss function, this doesn’t necessarily mean that L2 (ridge) regularization makes more sense than L1 (lasso) regularization.
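
To illustrate the 1-SE rule mentioned in answer 8, here is a minimal sketch assuming the glmnet package and an existing numeric predictor matrix x and response vector y (both assumptions for illustration):

```r
library(glmnet)

# cross-validated lasso (L1-regularized LS); x is a numeric predictor matrix
# and y is the response vector (both assumed to already exist)
set.seed(3)
cv_fit <- cv.glmnet(x, y, alpha = 1)

# the regularization hyperparameter value that minimizes the CV error
cv_fit$lambda.min

# the largest value whose CV error is within 1 SE of the minimum: a common,
# slightly more regularized alternative choice
cv_fit$lambda.1se
```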

Chapter 11

  1. True. We demonstrated in this chapter that LS can technically be used for binary prediction problems.

  2. False. Unlike the logistic regression predictions, the LS predictions for binary responses should not be interpreted as class probability predictions.

  3. True. One way to do this is to use a threshold value to convert a continuous response into a binary response.

  4. True. The logistic function can indeed be used to transform a linear combination into a predicted positive class probability (see the sketch below).

  5. False. Unfortunately, there is no closed-form solution to the logistic regression optimization problem. Moreover, it cannot be directly solved using LS because we don’t ever observe the true class probabilities.

  6. False. There are many stability analyses that we could conduct, including investigating whether the predictive performance changes across data and cleaning/preprocessing judgment call perturbations.

  7. False. Whether a high true positive rate (sensitivity) is more or less important than a high true negative rate (specificity) will depend on the particular problem.

  8. False. In such a case, the sensitivity of this algorithm evaluated on the validation set will be equal to the proportion of validation (not training) data points in the positive class.

  9. False. Even if the AUC for algorithm A is higher, there may still be individual threshold choices for which algorithm B performs better than algorithm A.

  10. True. This is particularly true for prediction problems with class imbalance.

  11. False. Gaussianity of the predictive features is not a requirement for the logistic regression algorithm; however, transformations that improve the symmetry and/or Gaussianity of a feature’s distribution may lead to better predictive performance.
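
To illustrate answers 4, 7, and 8, here is a minimal sketch assuming a training data frame train_data and a validation data frame val_data, each containing a binary response column named response (hypothetical names, not from the book):

```r
# fit a logistic regression; the logistic function converts the fitted linear
# combination into a predicted positive class probability (answer 4)
logistic_fit <- glm(response ~ ., data = train_data, family = "binomial")

# predicted probabilities for the validation set
val_prob <- predict(logistic_fit, newdata = val_data, type = "response")

# convert probabilities to binary class predictions using a threshold; whether
# sensitivity or specificity matters more depends on the problem (answer 7)
val_pred <- as.numeric(val_prob >= 0.5)

# validation set sensitivity (true positive rate) and specificity
sensitivity <- mean(val_pred[val_data$response == 1] == 1)
specificity <- mean(val_pred[val_data$response == 0] == 0)
```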

Chapter 12

  1. True. This is one of the biggest benefits of the decision tree.

  2. False. CART can be used to construct decision trees for predicting binary and continuous responses.

  3. True. It is certainly possible to improve predictive performance by tuning the hyperparameters, although, in our experience, this performance improvement is usually minor.

  4. False. In this chapter, we saw that the CART algorithm is substantially less stable to data perturbations relative to LS.

  5. False. The Gini impurity is used to train the CART algorithm using the training set, not to evaluate it using the validation set.

  6. False. Because the CART algorithm does not involve any random sampling of observations or predictive features when computing the splits, CART is not a random algorithm (it will produce the same fit every time it is trained on the same dataset).

  7. True. The RF algorithm is random because each tree is trained using a random sample of observations, and each split considers a random subset of predictive features.

  8. False. The RF feature importance measures do not require that you standardize the features.

  9. True. Since the permutation feature importance measure involves identifying how the predictive performance changes when you permute each feature separately (this idea is not specific to the RF algorithm), it could be used to compute a feature importance score for any predictive algorithm (see the sketch below).

  10. True. While log-transforming predictive features can change the individual split threshold values, it will not affect the prediction output of the CART algorithm.
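
To illustrate the idea behind answer 9, here is a minimal sketch of a generic permutation importance function, assuming a fitted model fit (of any type with a predict() method), a validation data frame val_data, and a response column y (all hypothetical names):

```r
# generic permutation importance: measure how much the validation set MAE
# worsens when a single feature is shuffled, breaking its relationship with
# the response
permutation_importance <- function(fit, val_data, feature, response = "y") {
  baseline_mae <- mean(abs(predict(fit, newdata = val_data) -
                             val_data[[response]]))
  permuted_data <- val_data
  permuted_data[[feature]] <- sample(permuted_data[[feature]])
  permuted_mae <- mean(abs(predict(fit, newdata = permuted_data) -
                             val_data[[response]]))
  # a larger increase in MAE indicates a more important feature
  permuted_mae - baseline_mae
}

# e.g., permutation_importance(rf_fit, val_data, feature = "age")
```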

Chapter 13

  1. False. This is the traditional approach. In the PCS framework, we instead recommend considering several versions of each algorithm trained on different versions of the cleaned/preprocessed data.

  2. False. An ensemble can be created using a range of different algorithms, so long as they are all designed to predict the same response.

  3. True. An ensemble combines multiple predictions for each data point into a single prediction output (see the sketch below).

  4. True. Since both the single prediction and ensemble predictions involve reporting a single response prediction for each data point, they can both be evaluated and compared using the standard performance measures. This is not true of the PPIs, however.

  5. False. As we have seen in this chapter, it is certainly possible to create an ensemble fit that performs worse than the single predictive fit with the highest validation set performance.

  6. True. Repeatedly using the test set to decide the final prediction (e.g., by comparing different options) means that the test set is no longer independent of the final predictions, and thus cannot be used to provide an independent assessment of predictive performance.

  7. False. The uncalibrated validation set coverage will rarely be 90 percent (unless we capture all possible sources of uncertainty in our perturbations—which is essentially impossible). Calibration is required to achieve a coverage closer to our goal of 90 percent.

  8. False. Unfortunately, for many projects, the intended users of your final fit will not have the technical skills required to make use of your fit if it can only be accessed with R or Python.

  9. True. If various alternative options for a particular judgment call all lead to similar predictive performance, then this judgment call clearly doesn’t have much influence on the predictive performance.

  10. True. Since we don’t want to include fits with poor performance in our ensemble or intervals, it is important to conduct predictability screening.
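
To illustrate answer 3, here is a minimal sketch assuming three already-trained fits (fit_ls, fit_rf, and fit_cart) and a validation data frame val_data with a continuous response column (all hypothetical names): an ensemble prediction can be formed by averaging the individual predictions for each data point.

```r
# predictions from several individual fits (each of which could be trained on
# a different cleaned/preprocessed version of the data, per answer 1)
pred_ls   <- predict(fit_ls, newdata = val_data)
pred_rf   <- predict(fit_rf, newdata = val_data)
pred_cart <- predict(fit_cart, newdata = val_data)

# a simple ensemble: average the predictions for each data point (answer 3)
pred_ensemble <- (pred_ls + pred_rf + pred_cart) / 3

# the ensemble still reports a single prediction per data point, so it can be
# evaluated using the standard performance measures (answer 4), e.g., the MAE:
mae_ensemble <- mean(abs(pred_ensemble - val_data$response))
```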