Write to SQL

Hello,

With two redundant servers, what is the best way to write values to an SQL database? Which server should write to SQL (only the primary, or both), or do I need a script to find out which one is the primary?
Is it complicated?

Regards

  • Hi,

    One approach I have seen is to use the Report Server to host the code that writes values to the SQL database. This handles the Primary/Standby complexity, so only one server will run it at a time. If the Primary is shut down, the Standby will start executing the Cicode. There are a number of TechNotes that describe this (TN25540).

    Also for SQL code examples, have a look at the Cicode functions SQLOpen, SQLConnect. The Example project also has some code and pages that illustrate how to use them.
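    To illustrate, here is a minimal, untested Cicode sketch of such a write. The DSN name "PlantData", the table "TagLog", and the tag "Pump1_Speed" are all example names, and GETDATE() assumes Microsoft SQL Server; check the SQLConnect/SQLExec entries in the Cicode reference and the Example project for the exact usage:

    ```cicode
    FUNCTION
    WriteValueToSQL()
        INT hSQL;
        INT iErr;
        STRING sValue;

        // Connect via ODBC; "PlantData" is an example DSN name
        hSQL = SQLConnect("DSN=PlantData;UID=user;PWD=password");
        IF hSQL = -1 THEN
            Message("SQL Write", "Connect failed: " + SQLErrMsg(), 48);
            RETURN;
        END

        // Read an example tag and insert its value with a timestamp
        sValue = TagRead("Pump1_Speed");
        iErr = SQLExec(hSQL, "INSERT INTO TagLog (TagName, TagValue, LogTime) VALUES ('Pump1_Speed', " + sValue + ", GETDATE())");
        IF iErr <> 0 THEN
            Message("SQL Write", "Insert failed: " + SQLErrMsg(), 48);
        END

        SQLEnd(hSQL);
        SQLDisconnect(hSQL);
    END
    ```

    If a function like this is triggered by a report or event hosted on the Report Server, only the currently active server will execute it, which gives you the redundancy behaviour described above without writing your own primary-detection script.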

  • Thank you, Olivier.

    "Also for SQL code examples, have a look at the Cicode functions SQLOpen, SQLConnect. The Example project also has some code and pages that illustrate how to use them."

    My problem isn't writing to SQL; it's how to manage that with redundant servers.

    Best Regards

  • Another suggestion is to use AVEVA Historian or the AVEVA PI System. AVEVA provides a Store and Forward connector to both of these purpose-built time-series historians, which handles the redundancy for you out of the box, as it leverages the Citect/Plant SCADA Client API to push data redundantly into the history blocks.

    Also, just for some background: with all the power of Structured Query Language (SQL) to query data, some may claim that relational databases are just as good at retrieving time-series data as they are transactional data. It is certainly true that SQL gives great flexibility, but it is based on some fundamental assumptions that don't apply to time-series data:
    a) there is no inherent order in the data records (in fact, time-series data is ordered by time),
    b) all the data is explicitly stored (in fact, most historian data only represents samples from a continuum of the real data),
    c) all data is of equal significance.

    These differences are significant. For example, if an instrument reports a value timestamped at "7:59:58.603" and a user queries a relational database for the value at "8:00:00.000", no data will be returned, since there is no record stored for precisely that time; the database does not recognize that time is a continuum. Similarly, if a temperature was "21.0 C" and two minutes later was "23.0 C", the database has no inherent ability to infer that halfway between these samples the temperature was approximately "22.0 C".

    In historian applications, it is rarely steady-state operation that is most significant. If the only way for a client application to find exceptions is to query all of the data for a measurement, it will place a heavy load on the overall system: server, network, and client. In contrast, historians generally have means of filtering out insignificant data (based on comparing sequential records) to radically reduce the volume of data that must be delivered to client applications.

    Also, relational databases are designed to accumulate massive amounts of data. However, as the amount of data grows, so do query execution times, the size of backups, and numerous other routine operations. To alleviate this performance problem of ever-growing tables, database administrators must routinely purge data from the database, rebuild indices, and so on. In any database that protects transactional integrity, this purge operation must suspend normal database updates; that's a problem for historian applications running 24 x 7 x 365. Even making the purge operation itself tolerable requires minimizing the amount of data maintained in the database. True historians, on the other hand, are designed both to handle the rapid growth in data and to provide simple means of taking subsets of the data offline and online.

    Also :) - in a plant historian, a stored data point not only has an associated value and timestamp, it also has an indication of the data quality. Storing a data point from an instrument outside of the instrument's normal operating range, for example, will cause a specific series of quality indicators to be stored with the value. These indicators aren't simply separate columns in the database, but an inherent property of the sample. They can be retrieved, considered in calculations, and used to alert operations or engineering personnel to a potential anomaly. When summarizing the values (for example, calculating an average temperature over the last hour), a historian must be able to reflect this data quality in calculation results, optionally filter out suspect data, and be able to extrapolate when data is missing or deemed invalid. If these real-world aberrations aren't handled correctly, the resulting reports, business system integration, and decision making will be incorrectly skewed. Relational databases alone don't provide these capabilities.

    With a 50:1 compression ratio, extremely fast retrieval rates, and built-in redundancy, it may be worthwhile having a look at the power of a true time-series historian, which AVEVA specializes in. Hope this helps.