Forum Replies Created

Viewing 1 - 4 of 4 posts
  • kikike

    Member
    November 4, 2020 at 1:17 pm

    Got it:) Many thanks!

  • kikike

    Member
    November 4, 2020 at 11:40 am

Appreciate the detailed information. Yes, that is the theory of k-fold cross-validation.

    I think my question confused you.

    My question is:

    1. We already have a model (we need a dataset to find our final model; let's say we use dataset A for this process).

    2. Then we need to assess the predictive performance of our model. We can use k-fold cross-validation to do that. During this process we still need a dataset; let's say we use dataset B.

    The question is: what is dataset A, and what is dataset B?

    For example, do A and B come from the same original dataset? Let's say 70% of the original dataset is used for A, and 30% for B? (A rough R sketch of the split I mean is below.)

    That is what I want to ask; I hope the question is clearer now.
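
    Here is that sketch; mtcars is just a stand-in for the original dataset, and dataA / dataB are made-up names:

    set.seed(1)
    n <- nrow(mtcars) # mtcars stands in for the original dataset
    idx <- sample(seq_len(n), size = round(0.7 * n)) # randomly pick 70% of the rows
    dataA <- mtcars[idx, ] # 70%: dataset A, used to find the final model
    dataB <- mtcars[-idx, ] # 30%: dataset B, used to assess predictive performance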

  • kikike

    Member
    November 3, 2020 at 10:11 pm

Many thanks. So for k-fold cross-validation, how do we split the data for model building and for assessing predictive performance?
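
    To make the question concrete, here is roughly how I picture the fold assignment (K = 10, with mtcars as a placeholder dataset); please correct me if this is not how the split works:

    set.seed(1)
    K <- 10
    n <- nrow(mtcars) # placeholder dataset
    folds <- sample(rep(1:K, length.out = n)) # randomly assign each row to one of K folds
    # for each fold k: fit the model on the rows where folds != k, then predict on the rows where folds == k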

  • kikike

    Member
    November 3, 2020 at 9:32 pm

Thanks for the quick reply :)

    So we use the training data to find the final model, and use the validation data to assess the predictive performance?

    I saw the following R code; is it used to assess the predictive performance?

library(boot) # cv.glm() comes from the boot package

    set.seed(1) # set the random number generator seed

    cv.error.10 <- cv.glm(validate, finalmodel, K = 10) # here, does "validate" mean the validation dataset?

    paste("The cv error for the final model is", signif(cv.error.10$delta[1], digits = 3))
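
    In case my snippet is too fragmentary, here is a self-contained version of what I tried, with mtcars standing in for my data and a made-up model formula so the cv.glm() call is runnable:

    library(boot) # cv.glm() lives in the boot package
    set.seed(1)
    finalmodel <- glm(mpg ~ wt + hp, data = mtcars) # placeholder model fitted on the whole dataset
    cv.error.10 <- cv.glm(mtcars, finalmodel, K = 10) # 10-fold CV re-fits the model inside each of the 10 splits
    signif(cv.error.10$delta[1], digits = 3) # delta[1] is the raw cross-validation estimate of prediction error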
