In this work, we test the influence of several levels of latency, corresponding to communication and processing delays, on traffic wave dissipation control. The approach uses Connected and Automated Vehicles (CAVs) that are controlled in simulation by reinforcement learning and non-reinforcement-learning controllers, and compares their performance with a pure human-driving scenario that has no control latency. We measure performance with respect to average traffic speed (a measure of traffic mobility), traffic speed standard deviation (a measure of traffic smoothness), and the percentage of compliance with a custom-designed safety monitor (a measure of traffic safety). The work shows that reinforcement-learned controllers exhibit almost no performance deterioration at latencies of 1 s or less. Non-reinforcement-learning controllers, which are not designed with latency in mind, deteriorate rapidly under any unexpected latency, demonstrating that the motivating problem requires a solution that is robust to latency. The paper discusses the training and reward-function modifications required to account for latency within the framework, and how the results may enable deployment on high-latency networks such as cellular (mobile-phone) networks, without requiring a 5G deployment.
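As an illustration only, the sketch below shows one way the three evaluation metrics named above could be computed from simulated speed traces. The array shapes, the function name `evaluate_episode`, and the synthetic data are assumptions for demonstration; the paper's actual safety monitor logic is not reproduced here and is treated as an external boolean signal.

```python
import numpy as np

def evaluate_episode(speeds, monitor_ok):
    """Summarize one simulation episode.

    speeds     : (timesteps, vehicles) array of vehicle speeds [m/s]
    monitor_ok : (timesteps,) boolean array, True where the safety
                 monitor reports compliance (monitor logic assumed external)
    """
    avg_speed = speeds.mean()               # traffic mobility
    speed_std = speeds.std()                # traffic smoothness (lower is smoother)
    compliance = monitor_ok.mean() * 100.0  # traffic safety, percent of timesteps
    return avg_speed, speed_std, compliance

# Illustrative usage with synthetic data (not results from the paper)
rng = np.random.default_rng(0)
speeds = rng.normal(loc=25.0, scale=3.0, size=(3000, 20))  # hypothetical traces
monitor_ok = rng.random(3000) > 0.02                       # hypothetical monitor output
print(evaluate_episode(speeds, monitor_ok))
```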