Traffic signal control (TSC) is a challenging problem in managing an urban transportation system. A fixed-time TSC is easy to implement but performs poorly on measures such as flow rate, waiting time, and traffic density. The situation worsens when vehicle arrival rates change periodically over time, as is common in most cities. We propose an adaptive reinforcement learning (RL) approach to TSC under varying vehicle arrival rates. Our objectives are to increase the average flow rate, reduce the average waiting time, and mitigate the wasted-green-time problem by considering the vehicle densities of both the current lane and the downstream directions. Experiments were conducted in Simulation of Urban MObility (SUMO) under three traffic layouts and various vehicle arrival rates. Relative to the other algorithms tested, the proposed method not only reduced the average traffic density, waiting time, and queue length, but also increased the average flow rate and average speed.
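To make the idea of density-driven RL signal control concrete, the following is a minimal, self-contained sketch of tabular Q-learning on a hypothetical single intersection with two competing approaches. It is an illustration of the general technique only, not the paper's algorithm or its SUMO setup: the arrival probabilities, discretization bins, and reward (negative total queue length, a proxy for waiting time and density) are all assumptions chosen for this toy example.

```python
import random
from collections import defaultdict

# Hypothetical toy model: one intersection, two approaches (NS, EW).
# State = discretized queue lengths; action = which approach gets green.
# All constants below are illustrative assumptions, not from the paper.

random.seed(0)

ARRIVAL = {"NS": 0.6, "EW": 0.3}  # assumed per-step arrival probabilities
DEPART = 2                        # vehicles released per green step
ACTIONS = ["NS", "EW"]

def discretize(q):
    # Bucket a queue length into one of 5 bins to keep the state space small.
    return min(q // 3, 4)

def step(queues, action):
    # Stochastic arrivals on both approaches.
    for d in queues:
        if random.random() < ARRIVAL[d]:
            queues[d] += 1
    # Departures only on the approach that currently has green.
    queues[action] = max(0, queues[action] - DEPART)
    # Reward penalizes total queued vehicles (proxy for delay/density).
    return -sum(queues.values())

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1
queues = {"NS": 0, "EW": 0}
state = (0, 0)
for t in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = step(queues, action)
    nxt = (discretize(queues["NS"]), discretize(queues["EW"]))
    # Standard Q-learning update.
    best = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best - Q[(state, action)])
    state = nxt
```

The paper's method additionally conditions on downstream densities (to avoid giving green to a lane whose outlet is congested); in a sketch like this, that would amount to extending the state tuple with discretized downstream occupancy.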