9,December,2023

My analysis of the “911 Daily Dispatch Count by Agency” dataset is a comprehensive and methodical exploration into the trends and patterns of emergency dispatches in the Boston area. The study delved deep into the data, which covered dispatch counts for major emergency services including the Boston Police Department (BPD), Boston Fire Department (BFD), and Emergency Medical Services (EMS).

One of the key findings from my analysis is the identification of distinct yearly and monthly trends in dispatch counts for each agency. The BPD, in particular, showed a significant increase in dispatches from 2010 to 2013, followed by a decline in 2014, suggesting shifts in community dynamics or operational strategies. In contrast, BFD and EMS displayed more consistent dispatch patterns over the years. This variation in trends between agencies underscores the diverse nature of emergencies they respond to and highlights the importance of tailored resource allocation and preparedness strategies.

My study also made insightful observations about potential seasonal variations, especially with higher dispatch counts for BPD in specific months like March and December. The application of time series analysis techniques, including the Augmented Dickey-Fuller (ADF) test and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test, revealed non-stationarity in the data, indicating the presence of underlying trends or cyclic behaviors. This aspect of the analysis is crucial for emergency services planning and resource management, as it aids in anticipating periods of high demand.
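
A minimal sketch of these two stationarity tests with statsmodels (a synthetic daily series stands in for the real dispatch counts, which come from the 911 dataset):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss

# Synthetic stand-in for the daily 'Total' dispatch counts (illustrative only)
idx = pd.date_range("2010-11-01", periods=1268, freq="D")
rng = np.random.default_rng(0)
total = pd.Series(1500 + np.cumsum(rng.normal(0, 20, len(idx))), index=idx)

adf_stat, adf_p, *_ = adfuller(total)                               # H0: unit root (non-stationary)
kpss_stat, kpss_p, *_ = kpss(total, regression="c", nlags="auto")   # H0: stationary

print(f"ADF:  stat={adf_stat:.2f}, p={adf_p:.3f}")
print(f"KPSS: stat={kpss_stat:.2f}, p={kpss_p:.3f}")
```

A high ADF p-value combined with a low KPSS p-value is the pattern that indicates non-stationarity and motivates the differencing step in the ARIMA model discussed below.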

Furthermore, the implementation and evaluation of an ARIMA (1,1,1) model provided a methodological approach to forecast future dispatch needs. While the model offered valuable insights, my analysis also pointed out the need for refinement, especially in addressing outliers and improving the distribution of residuals. The diagnostic plots from the ARIMA model were instrumental in identifying areas for improvement, emphasizing the model’s limitations and the necessity for additional data to enhance its predictive accuracy.

The comprehensive use of visual tools like correlograms and histograms not only facilitated a deeper understanding of the complex data patterns but also aided in the interpretation of the model’s diagnostics. These visualizations played a significant role in conveying intricate analytical findings in an intuitive manner, making them accessible for decision-making processes.

Overall, my analysis stands out as a robust and nuanced approach to understanding emergency dispatch trends, offering valuable insights that could significantly impact resource distribution, emergency response strategies, and policy-making in the realm of public safety and emergency management.

06,December,2023

The SARIMAX model summary provides a comprehensive overview of the time series analysis performed on the variable ‘Total’, with 1268 observations from November 1, 2010, to April 21, 2014. The model used is an ARIMA(1, 1, 1), a type of autoregressive integrated moving average model, which is often used for forecasting time series data.

The results show the following key statistics and diagnostics:

Log Likelihood: The model has a log likelihood of -8626.941, which measures the likelihood of the data given the model.
Akaike Information Criterion (AIC): With a value of 17259.882, the AIC suggests the relative quality of the model, with lower values indicating a better model.
Bayesian Information Criterion (BIC): The BIC value is 17275.315, another criterion for model selection, similar to AIC but with a higher penalty for models with more parameters.
Hannan-Quinn Information Criterion (HQIC): This is another measure for model selection, with a value of 17265.680 in this case.

The coefficients for the autoregressive term (ar.L1) and the moving average term (ma.L1) are 0.3624 and -0.9463, respectively, with the latter showing a strong negative effect. Both coefficients are highly significant, as indicated by their P-values of 0.000.

The variance of the model’s residuals (sigma2) is approximately 48,010, indicating the average squared deviation from the predicted value.

Other diagnostics include:

Ljung-Box Test: With a Q statistic of 4.61 and a P-value of 0.03, this test checks for remaining autocorrelation in the residuals; because the P-value falls below 0.05, it suggests some residual autocorrelation is still present and the fit leaves room for improvement.
Jarque-Bera Test: The JB statistic is 651.24 with a P-value of 0.00, indicating that the residuals may not follow a normal distribution, given the significant skewness and kurtosis.
Heteroskedasticity: The H statistic is 0.87 with a P-value of 0.16, suggesting that there is no significant heteroskedasticity in the model residuals.

The warning about the covariance matrix being calculated using the outer product of gradients suggests that the standard errors and confidence intervals might not be fully reliable.

Additionally, a future warning indicates that an argument used in the `pd.date_range` function is deprecated, suggesting that the code may need updating to maintain compatibility with future versions of pandas.

These notes provide a concise understanding of the SARIMAX model’s results and its implications for the accuracy and reliability of the time series forecasting.
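
A minimal sketch of how a summary like this can be produced with statsmodels (assuming `total` is a pandas Series of the daily 'Total' counts with a DatetimeIndex; this is illustrative, not the exact code behind the numbers above):

```python
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

model = ARIMA(total, order=(1, 1, 1))
result = model.fit()
print(result.summary())                  # log likelihood, AIC, BIC, HQIC, ar.L1, ma.L1, sigma2

# Ljung-Box test on the residuals at lag 1, matching the lag reported in the summary table
print(acorr_ljungbox(result.resid, lags=[1]))
```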

4, December, 2023

I worked on the following tests and plots:

  1. Standardized Residual Plot:

    • This plot displays the standardized residuals of the model over time.
    • Residuals are the differences between observed and predicted values.
    • Ideally, residuals should fluctuate randomly around zero, without any discernible pattern.
    • In our plot, we observed a fairly random scatter of residuals, although there are some instances of potential outliers.
  2. Histogram and Estimated Density Plot:
    • The histogram bins the standardized residuals to show their distribution.
    • An overlaid kernel density estimate (KDE) shows a smoothed version of this distribution.
    • A standard normal distribution (N(0,1)) is plotted for comparison.
    • A good-fitting model would have residuals that closely follow a normal distribution. The histogram should resemble the bell shape of the normal distribution curve.
  3. Normal Q-Q Plot:
    • The quantile-quantile plot compares the quantiles of the residuals with the quantiles of a normal distribution.
    • If the residuals are normally distributed, the points should fall approximately along the red line.
    • In the plot, the points largely follow the line, suggesting normality, but deviations at the ends may indicate heavier tails than the normal distribution.
  4. Correlogram (ACF Plot):
    • The correlogram, or autocorrelation function plot, shows the correlation of the residuals with themselves at different lags.
    • We look for correlations that are significantly different from zero at various lag intervals.
    • In a well-fitted model, we expect that there will be no significant autocorrelation in the residuals. Here, most autocorrelations are within the confidence band, indicating no significant autocorrelation.
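
These four panels can be produced in a single call on a fitted statsmodels results object; a minimal sketch (assuming `result` is the fitted ARIMA(1,1,1) results object from these notes):

```python
import matplotlib.pyplot as plt

# Standardized residuals, histogram + KDE vs. N(0,1), normal Q-Q plot, and correlogram
fig = result.plot_diagnostics(figsize=(12, 8))
plt.tight_layout()
plt.show()
```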

1, December, 2023

I attempted to fit an ARIMA model to the ‘Total’ column of the dataset using the `ARIMA` function from the `statsmodels` library.
– The order of the ARIMA model was set to (1, 1, 1), indicating a first-order autoregressive model with a first-order differencing and a first-order moving average component.

Encountered a Warning:
– Upon executing the model fitting code, a `ValueWarning` was generated by the `statsmodels` library.
– This warning indicated that the library had to infer the frequency of the time series data. It assumed a daily frequency, denoted as ‘D’.

Understanding the Warning:
– The warning was not an error but a notification. It suggested that the library wasn’t explicitly provided with the frequency of the data and had to make an assumption.
– If the dataset indeed represents daily observations and there are no gaps in the dates, this assumption by the library is appropriate.

Implications of the Warning:
– If the data genuinely has daily measurements and is consistent without missing days, the inference made by the library aligns with the dataset’s structure.
– However, if the dataset follows a different frequency or has irregular intervals, I would need to set the frequency explicitly to match the actual data pattern.
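
A minimal sketch of making the frequency explicit so the library does not have to infer it (the file and column names below are illustrative assumptions, not the exact ones used):

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical file/column names for illustration
df = pd.read_csv("911_daily_dispatch_counts.csv", parse_dates=["date"], index_col="date")
df = df.asfreq("D")                 # declare daily frequency; alternatively: df.index.freq = "D"

model = ARIMA(df["Total"], order=(1, 1, 1))
result = model.fit()                # no ValueWarning about an inferred frequency
```

If `asfreq("D")` introduces NaN rows, that is a sign the series has missing days that should be inspected before fitting.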

 

29, Nov,2023

This is a monthly aggregated summary of various economic and development indicators for the region. It includes data from January to March 2013. Key metrics include:

1. Logan Passengers and International Flights: This tracks the total number of passengers and international flights at Logan Airport. For instance, in January 2013, there were 2,019,662 passengers and 2,986 international flights.

2. Hotel Metrics: The hotel occupancy rate and average daily rate are provided. In January 2013, the occupancy rate was 57.2%, with an average daily rate of $158.93.

3. Employment and Labor Data: Includes total jobs, unemployment rate, and labor force participation rate. In January 2013, there were 322,957 jobs, with an unemployment rate of 6.6% and a labor force participation rate of 63.1%.

4. Real Estate and Development: Data on new housing construction, including pipeline unit counts, total development costs, square footage, and construction jobs. For example, in January, there were 329 pipeline units with a total development cost of $80,000,000 and 313,107 square feet leading to 228 construction jobs.

5. Foreclosure Data: Foreclosure petitions and deeds are listed. In January 2013, there were 44 foreclosure petitions and 11 foreclosure deeds.

6. Housing Market Indicators: Median housing price and housing sales volume. In January 2013, the median housing price was $380,000 with a sales volume of 405 units.

7. Construction Permits: The number of new housing construction permits and new affordable housing permits. In January 2013, 534 new housing construction permits and 134 new affordable housing permits were issued.

This comprehensive data set provides a detailed overview of various economic and development aspects of the region, useful for analysis and decision-making.

27, Nov,2023

In analyzing the ‘total_jobs’ time series from the economic indicators dataset, I conducted both the Augmented Dickey-Fuller (ADF) and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests to assess stationarity.

The ADF test produced a test statistic of -0.12 with a corresponding p-value of 0.95. The statistic did not exceed the critical values at any confidence level, so the null hypothesis of a unit root could not be rejected, indicating non-stationarity. The KPSS test yielded a test statistic of 1.46 with a p-value of 0.01, strongly rejecting its null hypothesis of stationarity and likewise pointing to non-stationarity. A cautionary note was also issued because the test statistic lies beyond the range of the p-value lookup table, so the actual p-value is likely even smaller than reported.

Taken together, the ADF and KPSS results agree that the ‘total_jobs’ series is non-stationary. A transformation of the data, most commonly first differencing, is likely necessary to achieve stationarity before drawing conclusive insights (see the sketch below). These findings should be considered in the context of the specific analysis and may warrant additional exploration to ensure the reliability of the results.
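
A minimal sketch of that follow-up step (assuming `econ` is the economic-indicators DataFrame with a 'total_jobs' column): first-difference the series and re-run both tests.

```python
from statsmodels.tsa.stattools import adfuller, kpss

diff = econ["total_jobs"].diff().dropna()   # first difference

adf_stat, adf_p, *_ = adfuller(diff)
kpss_stat, kpss_p, *_ = kpss(diff, regression="c", nlags="auto")

print(f"ADF after differencing:  stat={adf_stat:.2f}, p={adf_p:.3f}")    # small p -> stationary
print(f"KPSS after differencing: stat={kpss_stat:.2f}, p={kpss_p:.3f}")  # large p -> stationary
```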

20, November, 2023

In today’s class, we delved into a fascinating exploration of various time series models, each offering unique insights and capabilities in analyzing temporal data. The diverse set of models discussed included SARIMA (Seasonal Autoregressive Integrated Moving Average), VAR (Vector Autoregression), LSTM (Long Short-Term Memory), and ARIMA (Autoregressive Integrated Moving Average).

We began by exploring SARIMA, a sophisticated extension of the traditional ARIMA model that incorporates seasonality into its framework. SARIMA is particularly adept at handling data with recurring patterns and trends over time, making it a valuable tool for forecasting and understanding complex time series datasets.

Next, we turned our attention to VAR, a model that excels in capturing the dynamic interdependencies between multiple time series variables. VAR allows us to examine how changes in one variable impact others, providing a comprehensive view of the relationships within a system. This makes it an invaluable choice for scenarios where the interactions between different components are crucial for accurate modeling.

Our exploration continued with LSTM, a type of recurrent neural network designed to effectively capture long-term dependencies in sequential data. This model is particularly powerful in handling complex patterns and relationships within time series data, making it well-suited for tasks such as speech recognition, language modeling, and, of course, time series forecasting.

Lastly, we revisited the classic ARIMA model, which combines autoregression, differencing, and moving averages to analyze and predict time series data. ARIMA is a versatile and widely-used model that can be applied to a variety of temporal datasets, offering simplicity coupled with robust predictive capabilities.

Throughout the class, we emphasized the importance of selecting the right model based on the characteristics of the data at hand, considering factors such as seasonality, interdependencies, and the nature of long-term dependencies. As we navigated through these diverse models, we gained valuable insights into their strengths and applications, equipping ourselves with a richer understanding of time series analysis and forecasting techniques.

17, November,2023

I explored the “economic-indicators.csv” dataset to understand various aspects of the region’s economic landscape. Here’s a rundown of what I discovered:

I looked at the historical trends of hotel occupancy rates, trying to discern patterns or seasonal variations in the hospitality industry.

By calculating the average monthly passenger numbers at Logan Airport, I got a sense of the ebb and flow of travel, which speaks volumes about economic activity related to tourism and business.

The trend of new housing construction permits gave me insights into the region’s real estate development. It’s like watching the evolution of the area through the lens of construction permits.

I dove into the relationship between hotel occupancy rates and the average daily rates, unraveling the intricate dynamics that influence pricing strategies in the hotel industry.

Analyzing the seasonality of international flights at Logan Airport provided a glimpse into peak and off-peak travel periods, affecting various stakeholders like airlines and tourism authorities.

Calculating the average monthly new housing construction permits quantified the growth in the housing sector, a key indicator of economic health.

The trend of foreclosure deeds over time told a story about the financial health of the region’s residents and the stability of the local real estate market.

Examining the correlation between median housing prices and housing sales volume revealed insights into market dynamics, including supply and demand, affordability, and broader economic conditions.

In essence, each analysis contributed to understanding the region’s economic well-being and trends, painting a comprehensive picture of its economic landscape.
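
A minimal pandas sketch of a couple of these explorations (the column names below are assumptions about the CSV layout, not verified against the file):

```python
import pandas as pd

econ = pd.read_csv("economic-indicators.csv")

# Average monthly passengers at Logan Airport (hypothetical column name)
print(econ["logan_passengers"].mean())

# Relationship between hotel occupancy and average daily rate, and between
# median housing price and sales volume (hypothetical column names)
print(econ[["hotel_occup_rate", "hotel_avg_daily_rate"]].corr())
print(econ[["med_housing_price", "housing_sales_vol"]].corr())
```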

15,Nov,2023

Understanding the Essence of Time Series Data: Stationary vs. Non-Stationary

Introduction:
Time series data, a cornerstone in numerous analytical domains, can be broadly categorized into two fundamental types: stationary and non-stationary. This distinction plays a pivotal role in the efficacy of various time series analysis techniques.

Stationary Time Series:
A stationary time series is akin to a steady heartbeat – it exhibits consistent statistical properties over time. The mean, variance, and autocorrelation remain constant, unaffected by the temporal dimension. This stability simplifies the application of many analytical models.

Characteristics of Stationary Time Series:
1. Constant Mean and Variance:
– The average and spread of the data don’t fluctuate significantly across different time intervals.

2. Constant Autocorrelation:
– The correlation between the values of the series at different time points remains constant.

3. Absence of Seasonal Patterns:
– Seasonal trends or cycles are not discernible, making the data appear more uniform.

Non-Stationary Time Series:
Contrastingly, a non-stationary time series is akin to a turbulent river – it lacks a consistent pattern over time. Statistical properties evolve, making it a more complex analytical challenge. Non-stationarity often arises due to trends, seasonality, or abrupt changes in the underlying process.

Characteristics of Non-Stationary Time Series:
1. Changing Mean and Variance:
– The average and spread of the data exhibit noticeable fluctuations.

2. Time-Dependent Autocorrelation:
– Correlation between values changes over time, indicating a lack of temporal stability.

3. Presence of Trends or Seasonal Patterns:
– Trends, cycles, or seasonal variations are observable, introducing complexity to the analysis.

Identifying Stationarity:
The quest in time series analysis often begins with assessing stationarity. Tools like statistical tests, visualizations, and differencing techniques aid in making this determination.

1. Augmented Dickey-Fuller Test:
– A statistical test used to assess whether a time series is stationary based on the presence of a unit root.

2. Visual Inspection:
– Plots and charts can provide visual cues about the presence of trends or seasonality.

3. Differencing:
– Applying differencing to the data can help stabilize the mean and reveal whether the series becomes stationary.

Implications for Analysis:
The classification into stationary or non-stationary isn’t merely an academic exercise. It profoundly influences the choice of analytical tools and the interpretation of results.

1. Stationary Data:
– Easier application of traditional models like ARIMA.
– Assumption of constant statistical properties simplifies forecasting.

2. Non-Stationary Data:
– Requires more advanced models or pre-processing techniques.
– Trend removal and differencing might be necessary to render the data stationary.

In the realm of time series analysis, the classification of data as stationary or non-stationary serves as a compass, guiding analysts through the intricate landscapes of data dynamics. Understanding these distinctions lays the foundation for choosing the right analytical approach, ensuring robust and accurate insights into the temporal intricacies of the data at hand.
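
A minimal sketch that makes the contrast concrete: white noise is stationary, while its cumulative sum (a random walk) is not, and the ADF test reflects that.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
white_noise = rng.normal(size=500)       # stationary: constant mean and variance
random_walk = np.cumsum(white_noise)     # non-stationary: variance grows over time

print("white noise ADF p-value:", round(adfuller(white_noise)[1], 4))   # small -> reject unit root
print("random walk ADF p-value:", round(adfuller(random_walk)[1], 4))   # large -> unit root likely
```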

13,Nov,2023

Exploring the nuances of the Boston housing market unveils a rich tapestry of trends and patterns. The essence lies not just in static figures but in the ebb and flow of prices over time. Let’s embark on a journey into time series analysis, attempting to decode the temporal intricacies of Boston house prices.

Boston’s real estate market, a dynamic entity, deserves more than a mere snapshot. Time series analysis provides the lens to capture the evolving rhythm of housing prices, where each data point is a note in the melodic progression of the market.

Features at a Glance:
– Median Value: The heartbeat of the market, reflecting the pulse of homeownership.
– Crime Rates: A dynamic variable, influencing perceptions and, consequently, prices.
– Room Metrics: The spatial narrative, where the number of rooms echoes the dwelling’s stature.

Before diving into the depths of analysis, a visual overture is essential. Line charts become our score sheets, plotting the crescendos and diminuendos of median house prices over time. A glance may reveal patterns—undulating waves or perhaps a steady rise, each telling a story of market dynamics.

The first act in our analytical symphony involves discerning the tempo of our data—stationary or dancing to the rhythm of change. Stationarity, a subtle baseline, ensures the constancy of statistical properties over time.

Tools of Discernment:
– Dickey-Fuller’s Harmony: Statistical tests like the Augmented Dickey-Fuller unveil the presence or absence of the unit root, hinting at the stationary nature of our temporal narrative.
– Visual Cadence: Sometimes, the naked eye perceives what statistics may overlook. Visualizations, akin to a musical score, hint at trends and fluctuations.

For a moment, let’s embrace the non-stationary dancers in our dataset. Trends sway, and seasonal breezes influence the rise and fall of prices. Identifying these nuances becomes the essence of our analytical choreography.

Unveiling Trends:
– Changing Mean and Variance: Fluctuations in the average and spread of prices across different time intervals.
– Seasonal Pas de Deux: Patterns repeating at regular intervals, a dance between supply, demand, and the seasons.

Armed with an understanding of the temporal dynamics, our analytical ensemble takes the stage. Linear regression becomes our conductor, orchestrating the relationship between crime rates, room metrics, and the melodic median prices.

Key Movements:
– Feature Harmony: Crime rates, room metrics, and other features become instrumental in the predictive symphony.
– Conducting Predictions: The model’s crescendo—forecasting future median prices based on the rhythm of historical data.

In the Boston housing market, time series analysis isn’t just a retrospective; it’s a continuous composition. As new notes join the melody, the symphony evolves, demanding a dynamic interplay between past, present, and future.

In this journey through the temporal dimensions of Boston’s housing market, the analysis becomes not just a scholarly pursuit but a narrative, where each fluctuation and trend tells a chapter in the story of the city’s real estate rhythm.

10,Nov,2023

In my quest to understand data’s temporal symphony, I find myself immersed in the captivating world of time series analysis. It’s not just about numbers; it’s about unraveling the narrative woven through the fabric of time. Let’s embark on this journey together, exploring the intricacies and revelations hidden within the chronicles of data evolution.

Every data point carries a timestamp, a story waiting to be told. Time series analysis is the lens through which we decipher these stories, seeking patterns, trends, and the heartbeat of change. It’s not just about data points; it’s about the rhythm of the underlying narrative.

Behind the surface of raw data lies a symphony waiting to be heard. Each observation is a note, and the arrangement of these notes reveals the melody of the dataset. From stock prices to weather patterns, time series analysis unlocks the doors to understanding the dynamics of change over time.

In this personal exploration, one concept stands out – stationarity. It’s like finding the steady pulse in the chaos of time. A stationary time series carries a constancy, a predictability that simplifies the analytical journey.

My Tools of Discovery:
– The Augmented Dickey-Fuller test, a kind of compass, guiding me through the terrain of stationarity.
– Visualizations, my artistic canvas, where I observe the rise and fall of patterns like strokes on a painting.

Yet, not every dataset dances to the tune of stationarity. Some sway with the winds of change, and recognizing these dynamic movements becomes the art within the science of time series analysis.

The Dance of Trends:
– The undulating waves of changing means and variances, telling stories of evolving circumstances.
– The seasonal choreography, where patterns repeat like a familiar refrain, echoing the cyclical nature of our data.

Equipped with these insights, my analytical journey takes flight. It’s not just about applying models; it’s about conducting a personalized symphony of predictions.

Features as Characters:
– Each feature is a character, playing its part in the unfolding drama.
– Linear regression becomes my maestro, orchestrating the relationships between these characters and the ultimate crescendo – predicting future values.

In this voyage through time series analysis, the data becomes more than just a collection of numbers. It transforms into a personal story, a narrative of change, a melody of patterns. As I navigate the currents of time, the analysis is not just a scientific pursuit; it’s a personal exploration, a quest to decipher the language of temporal evolution. And in this ongoing journey, every new dataset is a fresh chapter, waiting to be explored, understood, and added to the personal anthology of my data-driven adventures.

08,Nov,2023

Decision trees are a method used in statistics, data mining, and machine learning to model the decisions and possible consequences, including chance event outcomes, resource costs, and utility. Here are some concise class notes on decision trees:

1. **Definition**: A decision tree is a flowchart-like tree structure where an internal node represents a feature (or attribute), a branch represents a decision rule, and each leaf node represents the outcome.

2. **Types of Decision Trees**:
– **Classification trees**: Used when the outcome is a discrete value. They classify a dataset.
– **Regression trees**: Used when the outcome is a continuous value, like predicting temperatures.

3. **Components**:
– **Root Node**: Represents the entire population or sample, which is further divided into two or more homogeneous sets.
– **Splitting**: Process of dividing a node into two or more sub-nodes based on certain conditions.
– **Decision Node**: Sub-node that splits into further sub-nodes.
– **Leaf/Terminal Node**: Nodes that do not split, representing a classification or decision.

4. **Algorithm**:
– Common algorithms include ID3, C4.5, CART (Classification and Regression Tree).
– These algorithms use different metrics (like Gini impurity, information gain, etc.) for choosing the split.

5. **Advantages**:
– Easy to understand and interpret.
– Requires little data preprocessing (no need for normalization, dummy variables).
– Can handle both numerical and categorical data.

6. **Disadvantages**:
– Prone to overfitting, especially with many features.
– Can be unstable because small variations in data might result in a completely different tree.
– Biased with imbalanced datasets.

7. **Applications**: Widely used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal. Also used in machine learning for classification and regression tasks.

8. **Important Considerations**:
– **Pruning**: Reducing the size of decision trees by removing parts that have little power to classify instances, to reduce overfitting.
– **Feature Selection**: Important in building an effective and efficient decision tree.
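
A minimal sketch of these ideas with scikit-learn (a toy dataset for illustration; limiting `max_depth` stands in for pruning):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gini impurity as the split criterion; restricting depth is a simple pre-pruning strategy
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("Test accuracy:", tree.score(X_test, y_test))
print("Feature importances:", tree.feature_importances_)   # informal feature-selection signal
```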

6,November,2023

Data Preprocessing and Balancing:

  • Filtered out classes with fewer than two samples to ensure that there are at least two examples for each category.
  • Used the Synthetic Minority Over-sampling Technique (SMOTE) to balance the dataset, which helps in addressing the problem of imbalanced classes by creating synthetic samples.
  • Ensured the dataset contains at least two classes with more than one sample each to proceed with SMOTE.
  • After balancing, the dataset was split again into a training set (to learn from) and a testing set (to evaluate the model).

Model Training:

  • Scaled the data to ensure that no feature disproportionately affects the model’s performance.
  • Employed a logistic regression model wrapped in a pipeline with data scaling for classification purposes.
  • Trained the logistic regression model on the balanced and scaled training data.

Model Evaluation and Results:

  • The model’s predictions were compared against the actual data in the test set to evaluate performance.
  • The accuracy obtained was approximately 45%, which indicates that the model correctly predicted the class nearly half of the time.
  • A confusion matrix was generated to show the model’s predictions in detail, highlighting where it got confused between different classes.
  • A classification report provided a breakdown of performance metrics for each class, including precision (correct predictions out of all predictions for a class), recall (correct predictions out of all actual instances of a class), and the F1-score (a harmonic mean of precision and recall).

Summary of Findings:

  • The overall accuracy suggests the model may not be highly effective for this particular dataset as it stands.
  • Some classes were predicted with high accuracy (classes with a large number of synthetic samples created by SMOTE), while others were not, which could indicate model overfitting to the resampled data or an inherent complexity in the dataset that makes some classes hard to distinguish.
  • The detailed results from the confusion matrix and classification report suggest that the model’s performance varies significantly across different classes, with some classes having higher precision and recall than others.
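
A minimal sketch of the workflow described above (synthetic imbalanced data stands in for the real feature matrix and labels; imbalanced-learn provides SMOTE). As in the notes, SMOTE is applied before the split, although resampling only the training fold is generally safer.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic imbalanced stand-in for the real dataset
X, y = make_classification(n_samples=500, n_classes=3, n_informative=5,
                           weights=[0.7, 0.2, 0.1], random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)          # balance the classes
X_train, X_test, y_train, y_test = train_test_split(
    X_res, y_res, random_state=0, stratify=y_res)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```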

03,Nov,2023

Data processing refers to the collection and manipulation of items of data to produce meaningful information. Here are concise notes on data processing:

1. **Definition**: Data processing is a series of operations on data, especially by a computer, to retrieve, transform, or classify information.

2. **Stages**:
– **Collection**: Gathering data from various sources.
– **Preparation**: Involves cleaning and organizing data into a usable and desired format.
– **Input**: The process of entering data into a data processing system.
– **Processing**: Execution of operations on data (sorting, classifying, calculating, interpreting, etc.).
– **Output**: Production of usable output in various formats (graphs, documents, tables, etc.).
– **Storage**: Saving data in some form for future use.

3. **Methods**:
– **Batch Processing**: Accumulating data and processing it in large batches.
– **Real-time Processing**: Immediate processing of data upon input.
– **Online Processing**: Processing done over the internet.
– **Distributed Processing**: Processing data across multiple computers or servers.

4. **Tools and Technologies**: Software such as databases, data warehousing tools, data mining applications, and big data processing frameworks (e.g., Hadoop, Spark).

5. **Importance**:
– Essential for data analysis, making informed decisions.
– Helps in transforming raw data into meaningful information.

6. **Challenges**:
– Data Quality: Ensuring accuracy, consistency, and reliability of data.
– Data Security: Protecting data from unauthorized access and data breaches.
– Handling Large Volumes: Efficiently processing large volumes of data (Big Data).

7. **Applications**: Used in various domains like business intelligence, finance, research, and more to facilitate data-driven decision-making.

8. **Trends and Future**: Increasing use of AI and machine learning in data processing for more advanced and automated analysis.

Data processing is an integral part of the modern information system and is crucial for extracting meaningful insights from raw data.

1,November,2023

Handling Anomalies and Missing Data in Datasets


1. Anomalies (Outliers):

Definition: Data points that differ significantly from other observations in the dataset.

Detection:

  • Visual Inspection: Scatter plots, Box plots.
  • Statistical Tests: Z-score, IQR.

Handling Techniques:

a. Deletion: Remove outlier data points.

  • Pros: Quick and simple.
  • Cons: May lose valuable information.

b. Transformation: Apply log or square root transformations to reduce variance.

c. Capping: Cap the outlier to a maximum/minimum value.

d. Imputation: Replace outliers with statistical measures such as mean, median, or mode.

e. Binning: Convert numerical variable into categorical bins.


2. Missing Data:

Types of Missingness:

a. MCAR (Missing Completely At Random): Missingness is not related to any other variable.

b. MAR (Missing At Random): Missingness is related to some other observed variable.

c. MNAR (Missing Not At Random): Missingness is related to the missing data itself.

Detection:

  • Use libraries like pandas (e.g., dataframe.isnull().sum()) or visualization tools like missingno.

Handling Techniques:

a. Listwise Deletion: Remove any row with a missing value.

  • Pros: Simple.
  • Cons: Risk of losing a lot of data.

b. Pairwise Deletion: Use available data for statistical analysis.

c. Mean/Median/Mode Imputation: Fill missing values with mean, median, or mode of the column.

  • Good for MCAR.

d. Forward/Backward Fill: Use the previous or next data point to fill missing values. Useful for time series data.

e. Model-Based Imputation: Use regression, KNN, or other models to estimate and impute missing values.

f. Multiple Imputation: Generate multiple predictions for every missing value.

g. Use Algorithms Robust to Missing Values: Some algorithms (like XGBoost) can handle missing values.


General Recommendations:

  1. Understand Your Data: Always explore and visualize your data before handling anomalies or missing values.
  2. Consider Data’s Context: Understand the potential real-world implications of removing or imputing data.
  3. Validate: After handling anomalies and missing values, validate the results using appropriate statistical tests or performance metrics.
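
A minimal pandas sketch of a few of the detection and handling techniques above (a tiny hypothetical DataFrame with an outlier and a missing value; names are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"value": [10.0, 12.0, 11.0, np.nan, 13.0, 250.0]})

print(df.isnull().sum())                        # count missing values per column

# IQR-based outlier detection and capping
q1, q3 = df["value"].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
df["value_capped"] = df["value"].clip(lower, upper)

# Simple imputations
df["value_mean_imputed"] = df["value"].fillna(df["value"].mean())   # mean imputation (MCAR)
df["value_ffill"] = df["value"].ffill()                             # forward fill (time series)
print(df)
```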

30, October,2023

ANOVA (Analysis of Variance)

ANOVA is a statistical method used to compare the means of three or more groups. It determines if there are any statistically significant differences between the means of multiple groups.

Assumptions of ANOVA:

  1. Independence: Each group’s observations are independent of the other groups. Typically, this is achieved by random sampling.
  2. Normality: The dependent variable should be approximately normally distributed for each group. This assumption can be checked using histograms, Q-Q plots, or statistical tests like the Shapiro-Wilk test.
  3. Homogeneity of Variance: The variances of the different groups should be roughly equal. Levene’s test is often used to check this assumption.
  4. Random Sampling: Each group’s observations should be randomly sampled from the population.
  5. Measurement Level: The dependent variable should be measured on an interval or ratio scale (i.e., continuous), while the independent variable should be categorical.
  6. Absence of Outliers: Outliers can influence the results of the ANOVA test. It’s essential to check for and appropriately handle outliers in each group.

Why use ANOVA for more than three groups?

When comparing the means of more than two groups, you might think of conducting multiple t-tests between each pair of groups. However, doing so increases the probability of committing a Type I error (falsely rejecting the null hypothesis). ANOVA is designed to compare multiple groups simultaneously, while controlling the Type I error rate.

Post-Hoc Tests:

If the ANOVA test is significant, it only tells you that there’s a difference in means somewhere among the groups, but it doesn’t specify where the difference lies. To pinpoint which groups differ from one another, post-hoc tests (like Tukey’s HSD or Bonferroni) are conducted.
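
A minimal sketch of a one-way ANOVA with SciPy (three synthetic groups for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(10, 2, 30)
group_b = rng.normal(11, 2, 30)
group_c = rng.normal(12, 2, 30)

# Check homogeneity of variances first (Levene's test), then run the one-way ANOVA
print("Levene:", stats.levene(group_a, group_b, group_c))
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")   # if significant, follow up with Tukey's HSD
```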

27,October,2023

Assumptions for the t-test:

  1. Normality
    • The data for each group should be approximately normally distributed.
    • This assumption can be checked using various methods, such as histograms, Q-Q plots, or statistical tests like the Shapiro-Wilk test.
  2. Homogeneity of Variances
    • The variances of the two groups should be equal.
    • This is especially important for the independent two-sample t-test.
    • Can be checked using Levene’s test.
  3. Independent Observations
    • The observations (or data points) in each group should be independent of each other.
    • This typically means that one observation in a group should not influence another observation.
  4. Random Sampling
    • Data should come from a random sample, ensuring that every individual has an equal chance of being included in the study.
  5. Scale of Measurement
    • The t-test is appropriate for continuous (interval or ratio) data.
    • The dependent variable should be continuous, while the independent variable should be categorical with two levels/groups.
  6. Absence of Outliers
    • Outliers can significantly affect the mean and standard deviation, which in turn can affect the t-test results.
    • It’s important to check for outliers and decide how to handle them before conducting the t-test.
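
A minimal sketch of checking these assumptions and running an independent two-sample t-test (synthetic groups for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_1 = rng.normal(50, 5, 40)
group_2 = rng.normal(53, 5, 40)

print("Shapiro group 1:", stats.shapiro(group_1))   # normality check
print("Shapiro group 2:", stats.shapiro(group_2))
print("Levene:", stats.levene(group_1, group_2))    # homogeneity of variances

t_stat, p_value = stats.ttest_ind(group_1, group_2, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```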

23,October,2023

Hierarchical Clustering

  • A clustering method that creates a tree of clusters. It’s useful if you want to understand hierarchical relationships between the clusters.
  • Steps:
    1. Treat each data point as a single cluster. Hence, if there are ‘N’ data points, we have ‘N’ clusters at the start.
    2. Merge the two closest clusters.
    3. Repeat step 2 until only one cluster remains.
  • Types of Hierarchical Clustering:
    • Agglomerative: This is a “bottom-up” approach. Initially, each point is considered a separate cluster, and then they are merged based on similarity.
    • Divisive: A “top-down” approach. Start with one cluster and divide it until each data point is a separate cluster.
  • Dendrogram: A tree-like diagram that showcases the arrangement of the clusters produced by hierarchical clustering.
  • Applications: Phylogenetic trees, sociological studies.
  • Discussion & Exercises:
  1. Compare and contrast K-means and Hierarchical Clustering.
  2. Explore various linkage methods in hierarchical clustering: Single, Complete, Average, and Ward.
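
A minimal sketch of agglomerative clustering and a dendrogram with SciPy (synthetic 2-D data for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])

Z = linkage(X, method="ward")      # try "single", "complete", or "average" for the exercise
dendrogram(Z)
plt.title("Hierarchical clustering dendrogram")
plt.show()
```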

20,October,2023

Introduction to Clustering & Unsupervised Learning

  •  Clustering is an unsupervised learning method that groups data points into clusters based on their similarity.
  • Unsupervised Learning: Unlike supervised learning, there’s no “label” or “answer” given. The model learns the structure from the data.

K-means Clustering

  • A clustering method that partitions a dataset into ‘k’ clusters, each represented by a centroid, and then allocates every data point to the nearest centroid while keeping the clusters as compact as possible.
  • Steps:
    1. Choose the number ‘k’ of clusters.
    2. Select random centroids for each cluster.
    3. Assign each data point to the nearest centroid.
    4. Recalculate the centroid for each cluster.
    5. Repeat steps 3-4 until there are no changes in the assigned clusters or a set number of iterations is reached.
  • Advantages:
    • Fast and efficient for large datasets.
    • Produces tighter clusters than hierarchical clustering.
  • Applications: Market segmentation, image compression, anomaly detection.
  • Discussion & Exercises:
  1. Differences between supervised and unsupervised learning.
  2. Explore the impact of the ‘k’ value in K-means (see the sketch below).
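
A minimal sketch of K-means with scikit-learn, including a quick look at how the inertia changes with ‘k’ (synthetic data with three blobs for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = np.vstack([
    rng.normal((0, 0), 1, (50, 2)),
    rng.normal((6, 0), 1, (50, 2)),
    rng.normal((0, 6), 1, (50, 2)),
])

for k in (2, 3, 4, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(f"k={k}: inertia={km.inertia_:.1f}")   # inertia falls as k grows; look for the 'elbow'
```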

18,October,2023

Continuing Formulation of Questions about the Data:

As continued from the previous session, the emphasis remains on formulating initial questions to ensure the data analysis remains directed and purposeful.

  • Data Re-evaluation:
    • After a preliminary analysis, it’s beneficial to revisit initial questions to refine or expand upon them based on new insights.
  • Integration with Logistic Regression:
    • How does the data lend itself to a logistic regression model?
    • Are there binary outcome variables that we can predict using our predictor variables?
    • How will we validate the performance of our logistic regression model?

16, October,2023

Logistic Regression (classification) :
Logistic Regression is a statistical method used for modeling the probability of a certain class or event existing. It is used when the dependent variable is binary (i.e., it has two possible outcomes).

  • Fundamentals:
    • While linear regression predicts a continuous output, logistic regression predicts the probability of an event occurring.
    • It uses the logistic function (S-shaped curve) to squeeze the output of a linear equation between 0 and 1.
  • Coefficients:
    • Each coefficient represents the change in the log odds of the output for a one-unit change in the predictor.
    • Positive coefficients increase the log odds of the response (and thus increase the probability), and negative coefficients decrease the log odds of the response (decreasing the probability).
    • The interpretation requires an understanding of log odds (logit function).
  • Applications:
    • Credit approval, medical diagnosis, and election prediction are some areas where logistic regression can be applied.
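
A minimal sketch of fitting a logistic regression and reading its coefficients as odds ratios (synthetic data for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 2))
logit = 1.2 * X[:, 0] - 0.7 * X[:, 1]
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)   # Bernoulli draws from a logistic model

clf = LogisticRegression().fit(X, y)
print("coefficients (log odds):", clf.coef_[0])
print("odds ratios:", np.exp(clf.coef_[0]))   # exp(b): multiplicative change in odds per unit increase
```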

Logistic Regression & Logistic Regression Details Pt1: Coefficients:

The StatQuest videos provide a visual and intuitive understanding of logistic regression.

  • Key Takeaways from the Videos:
    • The logistic function ensures output values are between 0 and 1, making it suitable for probability estimation.
    • The video discusses how to interpret the coefficients in logistic regression, with an emphasis on understanding the odds ratio.
    • It demystifies the math behind logistic regression, making it easier to grasp for those new to the concept.

13, October, 2023

An Introduction to the Permutation Test:

The Permutation Test, also known as a re-randomization test or exact test, is a non-parametric method for testing the null hypothesis that two different groups come from the same distribution. Instead of relying on a theoretical distribution (like the t-test which relies on the normal distribution), the permutation test creates its distribution from the data by calculating all possible outcomes from rearrangements (permutations) of the data.

  • Basic Steps:
    1. Combine all data from both groups into a single dataset.
    2. Repeatedly shuffle (permute) the combined data and then allocate the first ‘n’ items to the first group and the rest to the second group.
    3. For each shuffle, calculate the test statistic (e.g., difference in means).
    4. The p-value is then calculated as the proportion of shuffled permutations where the test statistic is more extreme than the observed test statistic from the original groups.
  • Advantages:
    • No assumptions about the underlying distribution of the data.
    • Can be applied to a wide range of test statistics and sample sizes.
  • Limitations:
    • Computationally intensive for datasets with large sample sizes since it requires evaluating all possible permutations.
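
A minimal sketch of the steps above with NumPy, using shuffling rather than full enumeration (two synthetic groups for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
group_a = rng.normal(10, 2, 30)
group_b = rng.normal(11, 2, 30)

observed = group_a.mean() - group_b.mean()
combined = np.concatenate([group_a, group_b])
n_a = len(group_a)

n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(combined)                                  # permute the pooled data
    diff = combined[:n_a].mean() - combined[n_a:].mean()   # reallocate into two groups
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm                                   # two-sided permutation p-value
print(f"observed diff = {observed:.3f}, p = {p_value:.4f}")
```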

Formulation of Initial Questions about the Data:

Before diving deep into any data analysis project, it’s imperative to formulate questions that guide the research and analysis process. These questions ensure the analysis remains focused and purposeful.

  • Purpose and Goals: Understanding the objectives of the analysis. What do we hope to achieve or conclude at the end of the process?
  • Data Understanding: What kind of data do we have? How is the data structured? What are the primary features and potential target variables?
  • Potential Patterns: Are there specific patterns, correlations, or trends we anticipate or are particularly interested in uncovering?
  • Challenges and Constraints: Are there limitations in the data? Do we anticipate any biases, missing values, or anomalies?
  • Stakeholder Considerations: Who is the target audience for the results? Are there specific questions or concerns from stakeholders that the analysis should address?
  • Potential Impact: How might the results of the analysis affect decision-making processes or future actions?

11, october,2023

In today’s class, we discussed various important aspects of dealing with data, particularly focusing on a dataset obtained from The Washington Post. Here are some key points:

Data Examination: We started by scrutinizing the data for discrepancies and irregularities. It’s essential to ensure data quality and integrity to avoid issues during analysis.

Handling Missing Data: Recognizing that the dataset may contain missing values, we explored methods for addressing this issue. Imputation methods, such as mean, median, or mode imputation, as well as more advanced techniques like regression imputation, were considered to fill in missing data points effectively.

Machine Learning Model: We deliberated on whether our objective should center on constructing a single machine learning model. Deciding on the approach is crucial and depends on the nature of the data and the goals of our analysis. It may be appropriate to build a single comprehensive model or multiple specialized models depending on the complexity and diversity of the data.

Data Classification: A significant question raised was whether we could classify the data based on attributes like police stations and fire stations. This implies the potential application of classification models, which can be an interesting avenue to explore for grouping and understanding the data based on specific criteria.

Professor’s Insights: Lastly, it was highlighted that the professor addressed various queries and doubts raised by students during the class session. This suggests a dynamic learning environment where students receive clarification and guidance on how to approach real-world data analysis challenges.

In summary, today’s class revolved around the data from The Washington Post, focusing on data cleaning, handling missing values, the approach to building machine learning models, data classification possibilities, and the valuable insights provided by the professor to foster a deeper understanding of the data analysis process.

6,October,2023

More on the Bootstrap:

Bootstrap, originating from the statistics field, refers to a method used to estimate the distribution of a statistic (like the mean or variance) by resampling with replacement from the data. It allows the estimation of the sampling distribution of almost any statistic. The primary advantage of Bootstrap is its ability to make inferences about complex statistical measures without making strong parametric assumptions.

  • Resampling with replacement: This means that in a dataset of ‘n’ values, every time a sample of ‘n’ values is drawn, any particular value might be selected multiple times.
  • Non-parametric Bootstrap: This involves straightforward resampling from the observed data.
  • Parametric Bootstrap: Assumes data comes from a known distribution and estimates the parameters.
  • Smoothed Bootstrap: Adds random noise to the resamples.
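
A minimal sketch of a non-parametric bootstrap of the mean with NumPy (synthetic data for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(100, 15, 200)          # observed sample

n_boot = 10_000
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()   # resample with replacement
    for _ in range(n_boot)
])

ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])     # 95% percentile interval
print(f"sample mean = {data.mean():.2f}, bootstrap 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```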

Discussed Project 1 Doubts:

During our discussion on Project 1, several uncertainties were clarified:

  • Scope & Requirements: We revisited the primary objectives of the project, ensuring all participants understood the expected deliverables and performance criteria.
  • Dataset Concerns: Some doubts were raised about data integrity, missing values, and the potential need for data transformation or normalization.
  • Implementation Details: Questions regarding certain algorithms, tools, and libraries to be used were addressed. We discussed possible pitfalls and alternative approaches if our primary strategies do not yield the desired results.
  • Timeline & Milestones: We reiterated the importance of adhering to the project timeline, ensuring that key milestones are met on schedule. Concerns related to resource allocation and task delegation were also addressed.

2, October 2023

In today’s class, the professor covered two important topics: the difference between “findings” and “results” in scientific research and an introduction to the concept of a capstone project. Here’s a brief note summarizing the key points discussed:

  1. Difference Between Findings and Results:
    • The class started with an insightful discussion on the distinction between “findings” and “results” in scientific research.
    • “Results” refer to the raw, objective data obtained from experiments or studies, presented in a clear and quantitative manner.
    • “Findings,” on the other hand, involve the interpretation and analysis of those results. This is where researchers draw conclusions, make connections, and discuss the implications of the data.
    • The professor highlighted that both “results” and “findings” play critical roles in scientific communication, offering a comprehensive understanding of the research process and its significance.
  2. Capstone Project Introduction:
    • The class then shifted focus to the concept of a capstone project, an exciting opportunity for students to apply their knowledge and skills to a real-world problem.
    • Students were provided with an overview of what a capstone project might entail, including the scope, objectives, and expected outcomes.
    • The professor emphasized that capstone projects often serve as a culmination of a student’s academic journey, allowing them to showcase their expertise and contribute to meaningful research or practical solutions.
  3. Voice Signaling Capstone Project:
    • We discussed a capstone project related to voice signaling, in which an application is developed to predict a patient’s health based on their voice.
    • This project sounds both intriguing and impactful, as it combines the fields of healthcare and technology. The ability to predict health conditions from voice data has the potential to revolutionize healthcare diagnostics.
    • Such projects reflect a commitment to making a meaningful contribution to the field and an enthusiasm for leveraging technology for the betterment of healthcare.

 

29, September,2023

I intend to estimate prediction error for a dataset that includes a binary variable (0 or 1). My plan is to employ a multinomial logistic regression model to gauge the probability of a 0 or 1 response based on various predictor variables. To rigorously evaluate the accuracy of this logistic model, I’m opting for k-fold cross-validation, with k falling within the range of 5 to 10. This approach will help ensure that the model’s performance is robust and not overly influenced by the specific data split.

  • Given the limited amount of data available, I’m also contemplating the use of a bootstrap procedure to create additional datasets. However, I’m currently uncertain about whether this is an appropriate strategy for my specific objectives. I plan to seek guidance from my instructors during class discussions to determine the suitability and best practices for implementing bootstrap resampling in this context. This will ensure that my approach to estimating prediction error is both valid and effective.

27,September,2023

 

I conducted a regression analysis on a dataset with 354 data points, aiming to predict a target variable ‘z’ using two predictor variables ‘x1’ and ‘y’ through a linear model:

Linear Model: z = b0 + b1x1 + b2y + e

My initial plan was to split the data into a training set and a test set to evaluate the model’s performance. However, given the limited amount of data, it didn’t seem practical to do so. Instead, I decided to employ k-fold cross-validation, specifically a 5-fold cross-validation, to assess the accuracy of the linear model.

Quadratic Equation: Additionally, I also explored the possibility of fitting a quadratic equation to the data to capture potential non-linear relationships:

Quadratic Model: z = b0 + b1x1 + b2y + b3x1^2 + b4x1y + b5y^2 + e

I planned to use cross-validation to compare the performance of the linear and quadratic models, which would help me determine whether a more complex model is warranted given the dataset.

Mean Square Error (MSE) vs. Model Complexity: To evaluate model complexity, I intended to compute the Mean Square Error (MSE) for both the linear and quadratic models across different levels of complexity. This would involve incrementally adding higher-order terms (e.g., quadratic terms) to the models and observing how MSE changes as complexity increases. The goal is to identify the model complexity that results in the lowest MSE, which signifies the best trade-off between bias and variance.

Example and Test Data with 5-Fold Cross-Validation: For the cross-validation process, I would randomly split the dataset into five equally-sized subsets. Then, I’d train and test the models five times, using each subset as the test set once while the remaining four subsets serve as the training data for each iteration. This process allows me to obtain five different MSE values for each model, which I can then average to get a more robust estimate of model performance.
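
A minimal sketch of this comparison with scikit-learn (synthetic x1, y, z values stand in for the real 354-point dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x1 = rng.normal(size=354)
y = rng.normal(size=354)
z = 1.0 + 2.0 * x1 - 0.5 * y + 0.3 * x1**2 + rng.normal(0, 1, 354)   # synthetic target
X = np.column_stack([x1, y])

linear = LinearRegression()
quadratic = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())

for name, model in [("linear", linear), ("quadratic", quadratic)]:
    mse = -cross_val_score(model, X, z, cv=5, scoring="neg_mean_squared_error")
    print(f"{name}: mean 5-fold CV MSE = {mse.mean():.3f}")
```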


25, September, 2023

Having conducted both bootstrap resampling and k-fold cross-validation on the provided data, I aimed to thoroughly evaluate the performance of various statistical models and gain valuable insights into their accuracy. Here’s a breakdown of the steps involved in this comprehensive analysis:

Step 1: Data Preparation In the initial phase of our analysis, I meticulously prepared the data. With three distinct datasets at hand – diabetic data, obesity data, and inactivity data – I thought it prudent to amalgamate them into a unified dataset. This consolidation was achieved by matching entries based on shared attributes such as YEAR, FIPS, and STATE. It was crucial to ensure that the data was meticulously cleaned and properly formatted before proceeding further.

Step 2: Data Exploration Data exploration played a pivotal role in gaining an intimate understanding of the variables at our disposal. I embarked on this journey by scrutinizing the data to uncover insights. I examined the distribution of variables, diligently checked for any instances of missing values, and meticulously identified potential outliers. This exploratory phase involved creating histograms and summary statistics for each variable, enabling me to grasp the data’s nuances.

Step 3: Model Selection Selecting an appropriate model was a crucial decision in our analysis. I opted for a combination of linear regression and multinomial logistic regression models, carefully considering which variables would serve as predictors and which ones would be designated as response variables. This choice was made after a thorough understanding of the dataset and research objectives.

Step 4: Bootstrap Resampling To assess the stability and variability of our model’s parameters, I employed bootstrap resampling. This technique entails repeatedly drawing random samples, with replacement, from the dataset to generate multiple subsamples. Subsequently, I applied our chosen model to each of these subsamples. This rigorous process provided insights into the robustness of our model’s parameters.

Step 5: K-Fold Cross-Validation K-fold cross-validation, a powerful technique, was the cornerstone of our model evaluation strategy. By dividing the dataset into K equally sized folds, I conducted a comprehensive assessment. The model was trained on K-1 folds while being tested on the remaining fold. This process was iterated K times, with each fold serving as the test set once. The primary objective was to evaluate the model’s performance across different subsets of the data. The choice of an appropriate K value, typically within the range of 5 to 10, was guided by the dataset’s size and computational resources.

Step 6: Model Evaluation With the K-fold cross-validation in place, I evaluated the model’s performance for each fold. Using relevant metrics, such as mean squared error for regression or accuracy for classification, I obtained K sets of performance metrics. This provided a comprehensive view of the model’s predictive capability across various data partitions.

Step 7: Interpretation of Results The results obtained from both bootstrap resampling and k-fold cross-validation were meticulously analyzed. Visual aids, such as histograms or box plots, were employed to depict the distribution of model performance metrics. These visualizations shed light on the stability and generalization performance of the model, offering valuable insights into its overall effectiveness.

Step 8: Conclusion and Discussion Concluding our analysis, I summarized the findings and insights derived from the extensive evaluation process. We discussed the practical applicability of the models and, importantly, acknowledged potential limitations, especially considering the relatively small dataset size. In cases where additional guidance was needed, consultation with instructors and domain experts was pursued.

It’s important to note that the actual implementation of these steps would involve code, which would vary depending on the choice of programming language and tools, such as Python with libraries like scikit-learn and matplotlib. The process encompassed data manipulation, model building, and rigorous performance evaluation, ultimately providing a robust assessment of our chosen models.

22,September,2023

The Distribution of Differences plot presents a histogram of the differences in means from the simulation, showing an approximately normal distribution. The z-score calculated for the observed difference (14.6858) is notably high, suggesting a significant difference. Magnitude of Sampling: this emphasizes the vast number of possible sample combinations from the data, highlighting the uniqueness of the observed result.

For the logistic regression analysis, I’m deliberating on the choice of ‘k’ for cross-validation: how should the appropriate value of ‘k’ be determined? Additionally, in the context of this analysis, should we consider using stratified sampling, and if so, how does it affect the modeling process?
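
As a sketch of what stratified sampling would look like in cross-validation (using the hypothetical binned response y_cat from the earlier model-selection sketch), StratifiedKFold keeps the class proportions roughly equal in every fold:

```python
# Stratified 5-fold sketch: each fold preserves the class proportions of
# the (assumed) binned response y_cat from the earlier sketch.
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
acc = cross_val_score(LogisticRegression(max_iter=1000),
                      X, y_cat, cv=skf, scoring="accuracy")
print("accuracy per fold:", acc)
```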

I’m also wondering about data quality: whether any preprocessing or cleaning was performed, and what assumptions were made in the t-test and the linear model, such as normality of the data. Finally, I have questions about the Monte Carlo simulation methodology, including the number of iterations and whether the random sampling process was implemented appropriately.

20,September,2023

The Monte Carlo procedure yields an estimated p-value, which is very close to the p-value obtained from the t-test. The distribution of differences in means from the Monte Carlo procedure is visualized with a histogram.

Placing the observed difference in means against the distribution of differences obtained through random sampling shows how extreme it is relative to chance, and the conclusion is that there is strong evidence to reject the null hypothesis (that there is no real difference in means) in favor of the alternative hypothesis (that there is a statistically significant difference in means).
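
A minimal sketch of this kind of Monte Carlo comparison; two synthetic samples stand in for the real before/after measurements from class, so only the procedure, not the numbers, is meaningful:

```python
# Monte Carlo sketch for a difference in means; synthetic stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.normal(130, 10, size=200)   # synthetic stand-in sample
after = rng.normal(145, 10, size=200)    # synthetic stand-in sample

observed = after.mean() - before.mean()
t_stat, p_t = stats.ttest_ind(after, before)

# Repeatedly shuffle the pooled values and recompute the difference in means.
pooled = np.concatenate([before, after])
n_after = len(after)
diffs = np.empty(10_000)
for i in range(diffs.size):
    rng.shuffle(pooled)
    diffs[i] = pooled[:n_after].mean() - pooled[n_after:].mean()

p_mc = np.mean(np.abs(diffs) >= abs(observed))
print(f"t-test p = {p_t:.3g}, Monte Carlo p = {p_mc:.3g}")
```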


Large Number of Possible Samples: This highlights the enormous number of possible combinations when randomly selecting samples from the data, emphasizing the complexity of exploring all potential samples. In summary, the professor demonstrates that there is a statistically significant difference in the mean sizes of crab shells before and after molting, based on both the t-test and the Monte Carlo analysis. This difference is observed in the data and is unlikely to have occurred by random chance. I left class with the following questions:

  1. Can the findings of this study be replicated by other researchers using the same dataset and analytical methods?
  2. Alternative Analytical Approaches: Are there alternative statistical tests or methodologies that could have been employed to analyze this dataset? Exploring alternative approaches can enhance the depth and comprehensiveness of data analysis, potentially providing additional insights or validating the results obtained through the chosen methods.

18,September,2023

y = b0 + b1X1 + b2X2 + b3X3 + E

In this equation:

  • y represents the dependent variable, the target you are trying to predict.
  • X1, X2, and X3 are the independent variables or predictors.
  • b0, b1, b2, and b3 are the coefficients, with b0 serving as the intercept.
  • E represents the error term, which accounts for the variability in y that cannot be explained by the predictors.
  1. Linear Relationship: This equation still assumes a linear relationship between the dependent variable (y) and the independent variables (X1, X2, and X3). Each coefficient (b1, b2, and b3) represents the change in y for a one-unit change in the corresponding predictor, assuming all other predictors remain constant.
  2. Overfitting: The risk of overfitting still applies in multiple linear regression, particularly if you have a high number of predictors relative to your sample size. Including too many predictors without enough data can lead to overfitting, just like in polynomial regression.
  3. Model Evaluation: To assess the performance of this multiple linear regression model, you can use techniques such as R-squared (coefficient of determination), p-values for the coefficients, and residual analysis to ensure the model’s validity.
  4. Regularization: In cases where you have many predictors or suspect multicollinearity (correlation between predictors), you may consider using regularization techniques like Ridge or Lasso regression to prevent overfitting and improve model generalization.
  5. Interpretation: Interpretation of coefficients (b1,b2, and b3) remains the same as in simple linear regression. Each coefficient tells you the effect of a one-unit change in the corresponding predictor on the dependent variable, holding other predictors constant.
  6. Assumptions: Like in simple linear regression, multiple linear regression assumes that the errors (E) are normally distributed, have constant variance (homoscedasticity), and are independent of each other (no autocorrelation).
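
To make the model above concrete, here is a hedged fitting sketch with statsmodels; synthetic data are used only so the example runs, and the summary output contains the R-squared, coefficient p-values, and residuals referred to in points 3 and 6:

```python
# Fitting sketch for y = b0 + b1*X1 + b2*X2 + b3*X3 + E with statsmodels;
# synthetic data are used only to make the example runnable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                                  # X1, X2, X3
y = 2.0 + 0.5 * X[:, 0] - 1.2 * X[:, 1] + 0.8 * X[:, 2] \
    + rng.normal(0, 1, n)                                    # E: error term

X_design = sm.add_constant(X)          # adds the column for the intercept b0
model = sm.OLS(y, X_design).fit()
print(model.summary())                 # R-squared, coefficient p-values, etc.
residuals = model.resid                # available for residual analysis
```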

15,September,2023

I got to know about the connection between p-values and the base-2 logarithm as a way to quantify the significance of results. By calculating -log2(p), where p is the p-value, we can relate the p-value’s magnitude to how surprising it would be to observe an event as extreme as the one we’ve encountered. This provides a framework for understanding the statistical significance of our findings. When dealing with weighted coins or other non-standard situations, our intuition becomes less reliable: we lack an intuitive grasp of what constitutes a rare event in these cases. Therefore, p-values are particularly valuable when our intuitive judgments may not apply, helping us objectively assess the significance of our observations.
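
A tiny sketch of the -log2(p) scale: each bit corresponds to halving the probability, so a p-value of 0.05 is roughly as surprising as about 4.3 heads in a row from a fair coin:

```python
# -log2(p) turns a p-value into "bits of surprise": p = 1/2**k gives k bits,
# roughly the surprise of seeing k heads in a row from a fair coin.
import math

for p in [0.5, 0.05, 0.01, 0.001]:
    print(f"p = {p:<6} -> -log2(p) = {-math.log2(p):.2f} bits")
```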

Overall, p-values serve as a crucial tool in quantifying the significance of observed outcomes and making objective decisions in various fields, even when intuitive judgments may not suffice.

13,September,2023

During today’s class the professor spoke about the p-value as a crucial tool in hypothesis testing that helps researchers assess the likelihood of observing their data if the null hypothesis is true. It aids in making informed decisions, quantifying evidence, and promoting scientific rigor, but it should be interpreted alongside effect size and considered within the broader context of research findings to draw meaningful conclusions. The p-value of 52.8% (0.528) is exceedingly high and far above the commonly used significance level of 0.05 (5%). Typically, in hypothesis testing, if the p-value is less than the chosen significance level (e.g., 0.05), we would reject the null hypothesis. In this case, with such a high p-value, the null hypothesis of no association between diabetes and inactivity is not rejected, indicating that the two variables are not significantly related in the analyzed data.

11,September,2023

This is my first Mth 522 post. In class I learned that examining residuals in linear models highlights the importance of assessing the reliability of statistical models used in analyzing health data; this can help researchers and analysts ensure the validity of their findings and make informed decisions based on the results. The residuals versus the predicted values from a linear model are plotted to assess heteroscedasticity, which indicates how reliable the linear model is. The heteroscedasticity in the linear model relating inactivity and diabetes suggests that this model may not be reliable, and it motivates alternative methods for testing heteroscedasticity when the residuals are not normally distributed.

Descriptive statistics such as the median, mean, standard deviation, skewness, and kurtosis are calculated for the inactivity data, and quantile-quantile plots are used to assess deviation from normality. The professor talked about kurtosis as a measure of the shape of the distribution of the inactivity data: kurtosis measures the heaviness of the tails and the peakedness of a distribution, a value of 3 corresponds to a normal distribution, and values below 3 indicate a flatter, less peaked distribution. The kurtosis of the inactivity data is about 2, somewhat lower than the value of 3 for a normal distribution, so kurtosis is one of the descriptive statistics that helps assess how far the inactivity data deviate from normality.
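
As a hedged sketch of these diagnostics (synthetic data stand in for the county inactivity and diabetes figures), the residual-versus-predicted plot, the quantile-quantile plot, and the skewness/kurtosis calculation might look like this:

```python
# Residual-diagnostics sketch for an inactivity-vs-diabetes linear model.
# Synthetic data stand in for the class dataset; the point is the plots
# and the skewness/kurtosis calculation, not the numbers themselves.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
inactivity = rng.normal(16, 3, 300)
diabetes = 2 + 0.3 * inactivity + rng.normal(0, 1 + 0.1 * inactivity)  # heteroscedastic noise

fit = sm.OLS(diabetes, sm.add_constant(inactivity)).fit()
resid, fitted = fit.resid, fit.fittedvalues

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.scatter(fitted, resid, s=10)               # a fan shape suggests heteroscedasticity
ax1.axhline(0, color="red")
ax1.set(xlabel="Predicted values", ylabel="Residuals")
sm.qqplot(resid, line="45", fit=True, ax=ax2)  # deviation from the line = non-normality
plt.tight_layout()
plt.show()

print("skewness:", stats.skew(inactivity))
print("kurtosis (Pearson, normal = 3):", stats.kurtosis(inactivity, fisher=False))
```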