Data transfer between two Historians?

Hello,

Is there a fast way to transfer partial data between two Historians?

The detailed situation is as follows:

There are two systems running System Platform, an old one and a new one.

The new system was originally created to connect new hardware and store its data in the associated Historian.

Now the hardware that was connected to the old system has to be migrated to the new system step by step, including the historized data. That means we cannot simply copy the historical data blocks, because the new system already has its own historical data blocks.

Furthermore, the tagnames in the old system and in the new system are not the same, because different naming conventions were used. That means we have to transfer data from, for example, tagnameA in the old system to tagnameB in the new system.

Currently we are using a stored procedure that transfers the data over a linked server.

This is the main part of the dynamic SQL we are using (simplified here; OLDHIST stands for our linked server to the old Historian, and the tag and time parameters come from a mapping table):
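
    -- Excerpt from the stored procedure; @OldTag, @NewTag, @StartTime and
    -- @EndTime are procedure parameters (one call per mapped tag).
    -- INSQL is the local Historian's OLE DB linked server; OLDHIST is a
    -- placeholder name for the linked server to the old Historian.
    DECLARE @Sql nvarchar(max);

    SET @Sql = N'
        INSERT INTO INSQL.Runtime.dbo.History
            (DateTime, TagName, Value, QualityDetail)
        SELECT h.DateTime, ''' + @NewTag + N''', h.Value, h.QualityDetail
        FROM OLDHIST.Runtime.dbo.History h
        WHERE h.TagName = ''' + @OldTag + N'''
          AND h.DateTime >= ''' + CONVERT(nvarchar(23), @StartTime, 121) + N'''
          AND h.DateTime <  ''' + CONVERT(nvarchar(23), @EndTime, 121) + N'''
          AND h.wwRetrievalMode = ''Delta''';

    EXEC sp_executesql @Sql;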

The problem is that the transfer is very slow, and we have quite a large amount of data to transfer.

Is there any faster way to transfer the data between the two Historians?

We would really appreciate any help. Thank you.

Best regards,

Benjamin

  • This approach can work, but it will be slow. There is no built-in feature for this kind of transfer/merge, but a faster solution is to copy the non-overlapping blocks over and then update the configuration on the new system so it recognizes the data under the new names. The details of handling the name change vary somewhat depending on the old and new Historian versions, but if both are Historian 2017 or newer:

    1. Run the "aahTagRename.exe" utility on the old system to change the names to match the new one.
    2. Use the Configuration Export on the old system to export all the Tag definitions and Tag History to a CSV file
    3. Import the CSV on the new system

    Then you can use an approach like the one you outlined above for the overlapping blocks that still have gaps (e.g. the 12-hour gap in an overlapping "daily" block when the new system started up at noon).
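
    For the gap fill, a time-bounded copy along these lines would do it (a sketch only; OLDHIST is a placeholder linked server name, the dates are just the noon-start example, and the tag names already match after the rename/import):

        -- Copy only the missing window of the overlapping block: from the
        -- start of the block up to when the new system began historizing
        -- at noon. No name mapping needed after the rename.
        INSERT INTO INSQL.Runtime.dbo.History
            (DateTime, TagName, Value, QualityDetail)
        SELECT h.DateTime, h.TagName, h.Value, h.QualityDetail
        FROM OLDHIST.Runtime.dbo.History h
        WHERE h.TagName = 'TagnameB'
          AND h.DateTime >= '2019-06-01 00:00:00'
          AND h.DateTime <  '2019-06-01 12:00:00'
          AND h.wwRetrievalMode = 'Delta';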

  • Hi Elliot,

    Are there any plans for a fast way to transfer data between two Historians?

    Our customer again wants to transfer a large amount of data between two Historians, and the approach with non-overlapping blocks does not work this time because both Historians have been running in parallel for a long time.

    Would using the Historian SDK be faster than our solution with the dynamic SQL mentioned above?

    Best regards

    Benjamin

  • No current plans to develop a productized way to do this.

    Regardless of the mechanism (SQL, SDK, etc.), structure the transfer to span all tags across smaller time periods and iterate over those periods. For example, if you have 100 tags, query all tags for one day at a time and then advance to the next day (rather than querying all data for a single tag and iterating over the list of tags). This is much more efficient for storage on the target system.
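
    As a rough T-SQL sketch of that shape (OLDHIST and dbo.TagMap are placeholders for a linked server to the source Historian and a local old-name/new-name mapping table):

        -- Walk the transfer window one day at a time, copying all
        -- mapped tags for that day before advancing to the next day.
        DECLARE @Start datetime = '2018-01-01';
        DECLARE @End   datetime = '2019-01-01';
        DECLARE @Day   datetime = @Start;

        WHILE @Day < @End
        BEGIN
            INSERT INTO INSQL.Runtime.dbo.History
                (DateTime, TagName, Value, QualityDetail)
            SELECT h.DateTime, m.NewTagName, h.Value, h.QualityDetail
            FROM OLDHIST.Runtime.dbo.History h
            JOIN dbo.TagMap m ON m.OldTagName = h.TagName  -- all tags at once
            WHERE h.DateTime >= @Day
              AND h.DateTime <  DATEADD(day, 1, @Day)
              AND h.wwRetrievalMode = 'Delta';

            SET @Day = DATEADD(day, 1, @Day);
        END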

    "Streamed" operations are also much more efficient (20x?) to store than "non-streamed" ones. "Streamed" operations are possible when appending new values with newer timestamps, which will normally be the case when you have tags from the source which have not also been stored in the target. Using SQL, send "streamed" values by including "wwVersion" in the list of columns and use "realtime" as the string value inserted. Using the SDK, use the "AddValue()" method.

    It won't have nearly as much of an impact as the above, but the SDK will also be more efficient than SQL. The most significant benefit will be in retrieval, since SQL Server tends to throttle large result sets (over ~10K rows) coming from the "INSQL" OLEDB provider and using the SDK bypasses that. It makes much less difference with smaller result sets, but the "export all data" case will almost always mean a large one. 
