This is the documentation for the DeepRL component.
You can read more details about DRL on the pages below.
Deep Reinforcement Learning Component
Workflow
In short, DRL trains a neural network by taking actions (initially at random), saving each experience into an experience buffer, and then drawing random samples from that buffer as the network's training data.
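The workflow above can be sketched as follows. This is a minimal illustration, not the component's actual implementation; the buffer size, transition fields, and batch size are assumptions for the example.

```python
import random
from collections import deque

# Hypothetical experience buffer with a fixed capacity (size is an assumption).
buffer = deque(maxlen=10_000)

# 1. Act (randomly while exploring) and save each experience into the buffer.
for step in range(100):
    state, action, reward, next_state = step, random.randint(0, 3), 1.0, step + 1
    buffer.append((state, action, reward, next_state))

# 2. Draw a random sample from the buffer to use as training data
#    for the neural network.
batch = random.sample(buffer, k=32)
```

Storing experiences and sampling them at random breaks the correlation between consecutive steps, which is why the buffer sits between acting and training.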
Exploration Rate
The exploration rate governs how often the agent chooses a random action instead of relying on its learned policy. The value ranges from 0 to 1, where 1 means 100% random actions and 0 means none.
A value of 20%-30% is recommended.
It is also recommended to start high and decrease the rate as learning progresses.
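As a sketch of how the exploration rate is typically applied (an epsilon-greedy action choice; the function name and Q-value list are assumptions for the example):

```python
import random

def choose_action(q_values, exploration_rate):
    """With probability `exploration_rate`, pick a random action (explore);
    otherwise pick the action with the highest learned value (exploit)."""
    if random.random() < exploration_rate:
        return random.randrange(len(q_values))                       # explore
    return max(range(len(q_values)), key=q_values.__getitem__)       # exploit

# With a 25% exploration rate, roughly 1 in 4 choices is random.
action = choose_action([0.1, 0.9, 0.3], exploration_rate=0.25)
```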
Linear Exploration Rate
Reduces the exploration rate as the model trains: after N experiences, the exploration rate reaches 0.
ExplorationRate = (1 - (NowExperience / Linear)) * OriginalExplorationRate;
Setting Linear to -1 keeps the exploration rate constant.
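The decay formula above can be expressed as a small function. This is a sketch only; the function name is an assumption, and the clamp to 0 reflects the statement that the rate becomes 0 after N experiences.

```python
def linear_exploration_rate(now_experience, linear, original_rate):
    """Linearly decay the exploration rate over `linear` experiences.
    `linear == -1` means a constant exploration rate (no decay)."""
    if linear == -1:
        return original_rate
    # Clamp at 0 so the rate stays 0 once `linear` experiences are reached.
    return max(0.0, 1 - now_experience / linear) * original_rate
```

For example, with Linear = 100 and an original rate of 0.3, the rate halves to 0.15 after 50 experiences and hits 0 at 100.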