Digging into JMP once again, this time we'll look at neural networks as we attempt to build a model predicting body fat percentage in men. Data was collected from participants by measuring various aspects of their physical bodies. These variables include age, weight, height, and circumference of the body at the neck, chest, abdomen, and other areas. The dataset ultimately includes observations from 252 participants, which provide the basis for our models. We'll look at two neural networks, a basic and a complex version, and compare their predictions with our benchmark least-squares model. The dataset with scripts for the models below can be found on GitHub (.jmp file).

Using JMP, a number of options are available when building a neural network. The hidden layer, or "black box" layer, of the model can use 1–3 transformation functions to manipulate the data: TanH, Linear, and Gaussian. Each has its own strengths, so some models may benefit from certain functions more than others. For this study, we created both a "base" and a "complex" neural network. The base model requests 3 nodes in the first layer using only the TanH function. Our complex model uses 3 nodes each for the TanH, Linear, and Gaussian activation functions, and repeats that structure in a second layer.

The other specification we adjusted for our models was the number of tours. JMP and other software tend to grow neural networks to the point where they become too complex and start modeling random noise. To help prevent this, JMP applies a penalty function so the model stops growing once its predictions stop improving. For both our neural networks, we'll use 20 tours. Increasing the tours builds multiple candidate models in an effort to narrow down the best-fitting model averaged across each network.

*Variable importance in the complex neural network*

Now that we've determined the best model, we can try it on a brand new observation.
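The "tours" idea is easy to sketch outside of JMP: refit the same network from several random starting points and keep the fit that scores best on held-out data. Below is a minimal, hypothetical Python analog using scikit-learn's `MLPRegressor` (not JMP's actual algorithm); the data here is synthetic stand-in data, not the body-fat measurements, and the `alpha` penalty term loosely plays the role of JMP's penalty function.

```python
# Hypothetical sketch (not JMP): mimic "tours" by refitting the same
# network from 20 random starts and keeping the best validation fit.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the body-fat data: 252 rows, 4 predictors.
X = rng.normal(size=(252, 4))
y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(scale=0.1, size=252)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_score, best_model = -np.inf, None
for tour in range(20):  # 20 tours, as in the article
    model = MLPRegressor(
        hidden_layer_sizes=(3,),  # 3 nodes in one layer, like the base model
        activation="tanh",        # TanH transformation
        alpha=1e-3,               # penalty term to discourage overfitting
        max_iter=2000,
        random_state=tour,        # a fresh random start for each tour
    )
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)  # validation R^2
    if score > best_score:
        best_score, best_model = score, model

print(f"best validation R^2 over 20 tours: {best_score:.3f}")
```

Each tour only changes the random initialization, so differences in the final scores come from where the optimizer happened to start; keeping the best of many starts is what helps the averaged model converge on a good fit.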