Potential discrepancies in linear model diagnostic tests #795

Open
@krisawilson

Description

Hello,
I noticed some discrepancies between model diagnostic tests, specifically the test for heteroskedasticity (non-constant error variance). I have included the tests for normality and autocorrelation of residuals for completeness, but the heteroskedasticity result concerns me most because of the magnitude of the difference:

model <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars)
performance::check_heteroskedasticity(model)
#> Warning: Heteroscedasticity (non-constant error variance) detected (p = 0.042).
lmtest::bptest(model) # studentized
#> 
#>  studentized Breusch-Pagan test
#> 
#> data:  model
#> BP = 6.4424, df = 4, p-value = 0.1685
lmtest::bptest(model, studentize = FALSE)
#> 
#>  Breusch-Pagan test
#> 
#> data:  model
#> BP = 7.9496, df = 4, p-value = 0.09344
shapiro.test(model$residuals)
#> 
#>  Shapiro-Wilk normality test
#> 
#> data:  model$residuals
#> W = 0.95546, p-value = 0.2056
performance::check_normality(model)
#> OK: residuals appear as normally distributed (p = 0.230).
performance::check_autocorrelation(model)
#> OK: Residuals appear to be independent and not autocorrelated (p = 0.262).
lmtest::dwtest(model, alternative = "two.sided")
#> 
#>  Durbin-Watson test
#> 
#> data:  model
#> DW = 1.7786, p-value = 0.2846
#> alternative hypothesis: true autocorrelation is not 0

Created on 2025-02-19 with reprex v2.1.1
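A possible explanation for the gap, offered as a hedged guess rather than a confirmed diagnosis: the two functions may be running different auxiliary regressions. `lmtest::bptest()` by default regresses the (scaled) squared residuals on all model predictors, giving a chi-squared statistic with 4 df here, while `performance::check_heteroscedasticity()` may instead follow the `car::ncvTest()` convention of testing against the fitted values only (1 df), which is a different, more focused test and can easily produce a different p-value. The sketch below shows how to check this hypothesis on the same model:

```r
# Hedged sketch: compare the 1-df score test on fitted values
# (car::ncvTest) with a bptest() restricted to the same auxiliary
# regressor. If performance's p = 0.042 matches ncvTest(), the
# discrepancy is a difference in test specification, not a bug.
library(car)
library(lmtest)

model <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars)

# Score test of squared residuals against fitted values (1 df):
car::ncvTest(model)

# Non-studentized BP test using only the fitted values as the
# variance regressor; this should land close to ncvTest():
lmtest::bptest(model, varformula = ~ fitted(model),
               studentize = FALSE)
```

If the 1-df results agree with `performance`'s output, the three p-values are all internally consistent and the question becomes which default specification (all predictors vs. fitted values, studentized vs. not) is preferable for `check_heteroscedasticity()`.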
