How do I find the ingest timestamp on Tier 1 and Tier 2?

We work on a large System Platform solution (100k+ tags) where delays and interruptions are normal. Store-and-forward mechanisms are in place and we receive late data from time to time.

The system has two Tier 1 historians replicating to a common Tier 2 historian.

What is the best way to get the timestamp for when a data point was ingested/received/stored into Tier 1 and when it was replicated (received) on Tier 2?

  • The system only maintains a timestamp from the source, not an ingestion timestamp. In cases where the source doesn't supply a timestamp, the system applies a timestamp as close as possible to the source. For example, if a Communications Driver reads the timestamp from the PLC/RTU, that time flows through unchanged across multiple levels of Replication; if the PLC/RTU sends the value without a timestamp, the value is stamped with "now" when it reaches the Communications Driver, and that timestamp is preserved from then on.

    If you mainly want to measure replication latency for the "happy case" (everything connected and streaming data), the only mechanism I can think of is to repeatedly poll a replicated tag with a known source timestamp (e.g. SysTimeSec) on the Tier 2 system with a query like this:

    select DateTime, TagName, Value, QueryTime = getdate(), Latency = datediff(millisecond, DateTime, getdate())
    from Live where TagName like 'MyTier1.SysTimeSec'

    In my tests, "Latency" was consistently under 3000 milliseconds for a tag that updates every second (so closer to 2000 milliseconds of actual latency), in a system with no network latency, plenty of resources, and very low replicated data rates. You may see lower latency at higher data rates: replication buffers are sent either when full or on a timer, and at low data rates values end up waiting for the timer.
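    To turn the one-off probe into a repeated measurement, you can wrap it in a simple T-SQL loop and summarize the samples afterwards. This is a minimal sketch, assuming it is run in SQL Server Management Studio against the Historian Runtime database; 'MyTier1.SysTimeSec' is a placeholder for your replicated tag, and the sample count and delay are arbitrary choices:

    -- Collect 60 latency samples, one per second, then summarize.
    declare @samples table (QueryTime datetime, Latency int)
    declare @i int = 0
    while @i < 60
    begin
        insert into @samples (QueryTime, Latency)
        select getdate(), datediff(millisecond, DateTime, getdate())
        from Live where TagName like 'MyTier1.SysTimeSec'

        waitfor delay '00:00:01'   -- pause one second between samples
        set @i = @i + 1
    end
    select MinLatency = min(Latency),
           MaxLatency = max(Latency),
           AvgLatency = avg(Latency)
    from @samples

    The min/max/average spread gives a rough feel for how much of the latency is the buffer timer versus steady overhead.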
