Digital Trends and Trend Storage on Data Change

Hello,

We trend a lot of Digital tags, set to the 2-byte storage method, but it would be much better to store them as a single bit. We also use the Wonderware Historian with the connector utility; all of our tags are configured as Analog, with the 8-byte Trends stored as doubles and the 2-byte Trends stored as integers. We have to set the engineering units to T/F and the range to 0 to 1 to help sort them.

We run very fast trending (100-250 ms), and it uses a lot of storage; we typically save about 13 months of data per site. I thought it might be possible to use the trigger in the Trend tag configuration to store a sample only when the Variable tag data has changed. Has anybody done this? If so, what were your methods, workarounds, and pitfalls?
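For discussion's sake, the store-on-change idea can be sketched outside any particular SCADA package. This is a minimal Python illustration (not Citect Cicode, and not any vendor API) of the filtering rule being asked about: emit a digital sample only when its value differs from the last stored one, which is what a change-trigger on the Trend tag would effectively do.

```python
from typing import Iterable, Iterator, Tuple

# A sample is (timestamp_seconds, value); for a digital tag the value is 0 or 1.
Sample = Tuple[float, int]

def changes_only(samples: Iterable[Sample]) -> Iterator[Sample]:
    """Yield only samples whose value differs from the previously stored one.

    The first sample is always emitted so the series has a starting point.
    """
    last = None
    for ts, value in samples:
        if last is None or value != last:
            yield ts, value
            last = value

# A 100 ms digital trend that mostly sits still: 5 raw samples, 3 transitions kept.
raw = [(0.0, 0), (0.1, 0), (0.2, 1), (0.3, 1), (0.4, 0)]
stored = list(changes_only(raw))
```

One pitfall this makes visible: a client querying a fixed time range must be able to hold the last value forward across the gaps, since "no sample" now means "unchanged" rather than "no data".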

Thanks,

Chris

  • Olivier,

    Thanks for the detailed reply.

    Yes, when you shut down the connector, it saves the last known datetime of each Trend tag sample it was able to push to the Historian. Once the connector is started back up, it checks for new Trend tags and their historize flag and picks up where it left off. It is also tolerant of our WAN connection being poor at times: if it loses the connection to the Historian, it goes into store-and-forward mode, and upon reconnecting it forwards the buffered data on.

    The backfill feature I am requesting is one where the user can manually initiate a backfill. We originally set up our Trends to save 13 months of data (700 GB-900 GB) per server, per cluster. When we installed our central Wonderware Historian, the past 13 months of data was very difficult, if not impossible, to backfill. We tried using Eric Black's CiCode function, which creates fast-load .csv files from the trending data, but it was too slow, the huge .csv files had to be manually transferred across our network to the Historian, and you needed a lot of HDD space available on each Citect server in the first place. So it would have been really nice to have the connector doing that for us. The biggest danger I see with this feature is that if someone starts a backfill over a range where data samples already exist, you'll end up with double samples. There would need to be some method built in to prevent this, either by skipping timestamps that already have samples or by marking the existing Historian data as bad before writing the new samples. Maybe even a user selection as to how to handle it.
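The double-sample danger described above comes down to a duplicate check before writing. As a rough sketch (plain Python, not the connector's actual behavior or any Historian API), a backfill could compare each candidate sample's timestamp against the timestamps already stored for that tag and write only the ones that are missing:

```python
from bisect import bisect_left
from typing import List, Tuple

# A sample is (timestamp_seconds, value).
Sample = Tuple[float, float]

def dedup_backfill(existing_ts: List[float], new_samples: List[Sample]) -> List[Sample]:
    """Return only the new samples whose timestamps are not already stored.

    existing_ts must be sorted ascending, as historian data would be;
    bisect keeps each lookup at O(log n) instead of scanning the whole list.
    """
    out = []
    for ts, value in new_samples:
        i = bisect_left(existing_ts, ts)
        if i >= len(existing_ts) or existing_ts[i] != ts:
            out.append((ts, value))
    return out
```

The alternative mentioned above, marking the existing range as bad and rewriting it, avoids the per-sample comparison but rewrites data that was already good, so a user-selectable policy seems reasonable.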

    I appreciate your help in better understanding the deadbands on the Trends. We'll probably keep them undefined for now, consider rolling back the amount of on-server Trend data we save to help with HDD space, and rely on the Historian for the rest.
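For anyone following the deadband side of this thread, the basic rule being weighed is simple enough to show in a few lines. This is an illustrative Python sketch of a plain value deadband (not Citect's or the Historian's implementation): a sample is stored only when it moves more than the deadband away from the last value that was actually stored.

```python
from typing import Iterable, Iterator, Tuple

# A sample is (timestamp_seconds, value).
Sample = Tuple[float, float]

def deadband_filter(samples: Iterable[Sample], deadband: float) -> Iterator[Sample]:
    """Yield a sample only when it differs from the last *stored* value
    by more than `deadband`. Comparing against the last stored value
    (rather than the previous raw sample) prevents a slow drift from
    being filtered out entirely.
    """
    last = None
    for ts, value in samples:
        if last is None or abs(value - last) > deadband:
            yield ts, value
            last = value
```

Leaving the deadband undefined, as above, keeps every sample; a nonzero deadband trades storage for a bounded error of up to the deadband per stored value.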

    Regards,

    Chris