It should be noted that the most frequent category, which is "Good" for all cases, yields a correct classification percentage of 70%.

The purpose of the chi-square (omnibus) test of model coefficients is to investigate whether the step from the constant-only model to the model containing the independent variables is justified. Adding a variable or variables is justified if the significance values are less than 0.05; conversely, if the step were to exclude variables from the model, it would be justified when the significance values are greater than 0.05. Since the sig. values here are less than 0.05, the null hypothesis can be rejected and the model is statistically significant.

The Nagelkerke R² and the Cox & Snell R² provide, for logistic regression, an analogue of the R² used in Ordinary Least Squares (OLS) regression. Nagelkerke's measure adjusts the Cox & Snell measure so that it ranges from 0 to 1. On the basis of the Nagelkerke R², only 36.8% of the variation in Credit Risk is explained by its predictor variables.

The Hosmer–Lemeshow goodness-of-fit test splits the observations into deciles based on their predicted probabilities and then computes a chi-square statistic from the observed and expected frequencies. Here the chi-square distribution with 8 degrees of freedom gives a p-value of 0.540, which indicates that the logistic model is a good fit. If the Hosmer–Lemeshow test yields a value of 0.05 or less, the predicted and observed values of the dependent variable differ significantly, and we reject the null hypothesis. If, on the other hand, the value is larger than 0.05, the null hypothesis of no difference is retained, meaning that the model's estimates fit the data at an acceptable level.
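The two fit measures discussed above can be sketched in a few lines. This is a minimal illustration rather than the SPSS implementation: the log-likelihood values and the simulated data are hypothetical, and the Hosmer–Lemeshow helper returns only the chi-square statistic, which would then be compared against a chi-square distribution with groups − 2 (here 8) degrees of freedom.

```python
import math
import random

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke's R^2: the Cox & Snell measure rescaled so that
    its maximum attainable value becomes 1."""
    r2_cs = 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)  # Cox & Snell
    max_cs = 1.0 - math.exp(2.0 * ll_null / n)              # its upper bound
    return r2_cs / max_cs

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow statistic: sort cases by predicted probability,
    split them into `groups` bins (deciles by default), and compare
    observed vs. expected event counts in each bin."""
    pairs = sorted(zip(p, y))
    n = len(pairs)
    chi2 = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        m = len(chunk)
        if m == 0:
            continue
        observed = sum(yi for _, yi in chunk)  # observed events in the bin
        expected = sum(pi for pi, _ in chunk)  # sum of fitted probabilities
        pbar = expected / m
        chi2 += (observed - expected) ** 2 / (m * pbar * (1.0 - pbar))
    return chi2

# Hypothetical log-likelihoods for a model fitted on n = 1000 cases
print(round(nagelkerke_r2(-610.9, -477.9, 1000), 3))

# Simulated, well-calibrated predictions for the Hosmer-Lemeshow statistic
random.seed(0)
p = [random.random() for _ in range(200)]
y = [1 if random.random() < pi else 0 for pi in p]
print(round(hosmer_lemeshow(y, p), 2))
```

Because the simulated probabilities are well calibrated, the resulting statistic should usually be unremarkable relative to a chi-square distribution with 8 degrees of freedom, mirroring the "good fit" verdict described above.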
In this case, the value is larger than 0.05, so the model fits the data at an acceptable level and explains the variance of the dependent variable to a significant degree.

Both the correct and the incorrect classifications of the model containing the constant as well as the independent variables are highlighted in the above table. It can be clearly observed that the rows reflect the observed values of Credit Risk as "Bad" and "Good", whereas the columns exhibit the predicted values of Credit Risk as "Bad" and "Good". Since the model applied is a binary logistic regression, the percentage-correct values of the rows "Bad" and "Good" are
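The layout of such a classification table can be reproduced with a short helper. This is an illustrative sketch with made-up observations and fitted probabilities, not the output of the actual model, and it assumes the conventional classification cutoff of 0.5.

```python
def classification_table(y_true, p_hat, cutoff=0.5):
    """Cross-tabulate observed vs. predicted outcomes at the given
    probability cutoff, as in a logistic regression classification table."""
    # Rows = observed category, columns = predicted category
    table = {obs: {"Bad": 0, "Good": 0} for obs in ("Bad", "Good")}
    for obs, p in zip(y_true, p_hat):
        table[obs]["Good" if p >= cutoff else "Bad"] += 1
    # Percentage correct within each observed row
    pct_correct = {obs: 100.0 * row[obs] / (row["Bad"] + row["Good"])
                   for obs, row in table.items()}
    total = sum(row["Bad"] + row["Good"] for row in table.values())
    overall = 100.0 * (table["Bad"]["Bad"] + table["Good"]["Good"]) / total
    return table, pct_correct, overall

# Hypothetical observed outcomes and fitted probabilities
y = ["Good", "Good", "Bad", "Good", "Bad", "Good"]
p = [0.8, 0.6, 0.3, 0.4, 0.7, 0.9]
tbl, pct, overall = classification_table(y, p)
print(pct, round(overall, 1))
```

The diagonal cells of the returned table hold the correctly classified cases, and the row-wise percentages correspond to the "Percentage Correct" column reported for "Bad" and "Good" in the SPSS output.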
Hosmer, D. W., and S. Lemeshow. 2000. Applied Logistic Regression, 2nd ed. New York: John Wiley and Sons.
Kleinbaum, D. G. 1994. Logistic Regression: A Self-Learning Text. New York: Springer-Verlag.
Jennings, D. E. 1986. Outliers and residual distributions in logistic regression. Journal of the American Statistical Association, 81: 987-990.
Norusis, M. 2004. SPSS 13.0 Statistical Procedures Companion. Upper Saddle River, N.J.: Prentice Hall, Inc.