RMSProp is a variation of gradient descent that maintains an individual learning rate for each parameter (similar to AdaGrad). This helps normalize parameter updates, since the gradients for some parameters may be much larger than for others. RMSProp does this by dividing the base learning rate by the square root of a running average of the squared gradients.
This running average decays exponentially, so recent gradients contribute more than older ones (unlike AdaGrad, which accumulates all past squared gradients and can shrink the learning rate toward zero). It's possible for the denominator to be zero or vanishingly small, so a small constant ε is added to it for numerical stability. This means that the parameter update would be re-written formally as

v_t = β v_{t−1} + (1 − β) g_t²
θ_{t+1} = θ_t − (η / (√v_t + ε)) g_t

where g_t is the gradient at step t, η is the base learning rate, β is the decay rate of the running average (typically 0.9), and ε is a small constant such as 10⁻⁸.
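The update above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production optimizer; the function name `rmsprop_update` and the hyperparameter defaults are chosen here for the example.

```python
import numpy as np

def rmsprop_update(theta, grad, avg_sq, lr=0.1, beta=0.9, eps=1e-8):
    """One RMSProp step; returns updated parameters and running average."""
    # Exponentially decaying average of the squared gradients (v_t)
    avg_sq = beta * avg_sq + (1 - beta) * grad ** 2
    # Divide the base learning rate elementwise by the root of the average;
    # eps keeps the denominator away from zero
    theta = theta - lr * grad / (np.sqrt(avg_sq) + eps)
    return theta, avg_sq

# Toy example: minimize f(theta) = theta^2, whose gradient is 2 * theta
theta = np.array([5.0])
avg_sq = np.zeros_like(theta)
for _ in range(500):
    grad = 2 * theta
    theta, avg_sq = rmsprop_update(theta, grad, avg_sq)
```

Note that because the effective step size is roughly lr * g / √(E[g²]), the parameter moves at a nearly constant rate regardless of the raw gradient magnitude, which is exactly the normalization described above.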