Understanding Read Lag values

Read Lag is reported only for targets whose input streams are the output stream of a CDC reader. It is calculated by subtracting the timestamp in the CDC event from the current Striim system time when the event is written to the target.
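A minimal sketch of this calculation is shown below in Java. The variable eventTimestampMillis stands in for the timestamp carried in the CDC event; the name is illustrative and is not part of the Striim API.

```java
// Minimal sketch of how a Read Lag value of this kind is derived (illustrative only).
// eventTimestampMillis stands in for the commit/operation timestamp in the CDC event.
public final class ReadLagExample {
    static long readLagMillis(long eventTimestampMillis) {
        // Lag = Striim system time when the event is written to the target,
        //       minus the timestamp in the CDC event.
        return System.currentTimeMillis() - eventTimestampMillis;
    }

    public static void main(String[] args) {
        long eventTimestampMillis = System.currentTimeMillis() - 100; // event created 100 ms ago
        System.out.println("Read Lag (ms): " + readLagMillis(eventTimestampMillis)); // ~100
    }
}
```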

If both systems use Network Time Protocol (NTP) to set their system time from internet time servers, the system times should be synchronized within a few milliseconds of each other. In that case, Read Lag should accurately indicate how long after the database generated an event it was processed by Striim. Read Lag is reported in milliseconds, so a value of 100 would indicate a tenth of a second lag. 

If Read Lag is large but not increasing, that indicates latency between the two systems. If it is increasing, that indicates that the Striim server is not processing events fast enough to keep up with the database activity.

If the system times are not synchronized, Read Lag could be a very large positive or negative number due to the difference in system times. In this case, you cannot use Read Lag to estimate latency between the systems, but you can compare values over time to see if the lag is increasing.
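The sketch below illustrates that idea: with unsynchronized clocks the absolute value is dominated by a constant clock offset, but the trend across successive samples is still meaningful. The sample values and tolerance threshold are assumptions for illustration, not values produced by Striim.

```java
import java.util.List;

// Sketch: comparing Read Lag samples over time when clocks are not synchronized.
// The constant clock offset cancels out when you look only at the trend.
public final class ReadLagTrend {
    // Returns true if the last sample is noticeably larger than the first,
    // i.e. the lag is growing regardless of any fixed clock offset.
    static boolean isLagIncreasing(List<Long> lagSamplesMillis, long toleranceMillis) {
        long first = lagSamplesMillis.get(0);
        long last = lagSamplesMillis.get(lagSamplesMillis.size() - 1);
        return last - first > toleranceMillis;
    }

    public static void main(String[] args) {
        // Hypothetical samples taken a minute apart; the large constant component is clock skew.
        List<Long> samples = List.of(3_600_120L, 3_600_450L, 3_601_900L, 3_604_700L);
        System.out.println("Lag increasing: " + isLagIncreasing(samples, 1_000)); // true
    }
}
```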