

bulk sync from SQL to persistent TCP


  • hi
    my SQL data source receives the last 4 hours of data (one sample every 15 minutes), arriving every 4 hours.
    I'd like to publish this data to a persistent TCP data source.
    how do I configure the data sources and the publishers so that I get all the data?
    thank you


  • Hi maya

    On the receiving side (data source) you just need to make sure you are listening on the same port as the publisher and that the compression, checksum, and encryption settings match.

    If you want all the history synced to the data source, just set 'sync date history from' to the date you want and start the sync.


  • thank you Craig for the quick reply.

    the data is successfully synced up to yesterday using:

    1. publisher's cron pattern = daily 0 0 1 * * ?
    2. publisher's history synchronization = 1 day
    3. point's logging type = All data

    however I'd like to sync the data as it arrives.
    while I know that at 8:00 I'll receive the data samples (at 15-minute intervals) covering 4:00 to 8:00,
    I was not able to configure the system to sync this data at 8:05, or at any other hour on the same day.

    I've tried:
    changing the publisher's cron pattern to 0 0 0/2 ? * * * together with a history synchronization of 6 hours
    changing the point's logging type to intervals of 15 min / 4 hours
    however none of that seemed to work.

    I appreciate your help
    maya


  • The logging type on the persistent TCP data source should be "all data" for all your points. They will have this setting by default, so don't change them. It would be helpful to know what result you are getting.
    I would try a cron pattern of 0 0 0/2 ? * * * or 0 0 0/4 ? * * *
    and change "Synchronize history prior to" to 1 millisecond.
    I believe this setting is what is causing the problems.
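For anyone unfamiliar with the pattern format: these appear to be Quartz-style cron expressions with fields (second, minute, hour, day-of-month, month, day-of-week, year), so `0 0 0/4 ? * * *` fires at minute 0 of hours 0, 4, 8, and so on. A minimal Python sketch of the `start/step` hour field (illustrative only, not Mango's actual scheduler):

```python
# Illustrative sketch: which hours a Quartz-style "start/step" hour field
# matches, e.g. "0/4" in the pattern "0 0 0/4 ? * * *".
def hours_for_step(start, step):
    """Hours of the day matched by an hour field written as 'start/step'."""
    return [h for h in range(24) if h >= start and (h - start) % step == 0]

print(hours_for_step(0, 4))  # [0, 4, 8, 12, 16, 20]
print(hours_for_step(0, 2))  # every two hours: 0, 2, 4, ..., 22
```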


  • Hi Maya,

    Are you publishing / recording real-time data? If your publisher's update event is "Logged" and you are transmitting / saving real-time data, the sync should only matter when data got missed (which would require exhausting the publisher's queue and discarding from it). The rest of the time, the data should be transmitted as it is acquired.

    I would hesitate to set "Synchronize history prior to" to 1 millisecond. It may be fine in this case, where the timing of the arriving file and the sync cron pattern are definitely going to avoid each other. But every time the sync runs, it behaves as though all the data in the period being sync'ed has already been written to the database, which is not a safe assumption when the 'prior to' window is only 1 millisecond, since that moment becomes the new time the point is considered sync'ed unto.


  • Hi Craig and Phil

    1. setting "Synchronize history prior to" to 1 millisecond did solve the problem - I can now see the data in the master Mango as close to "real time" as possible (thank you Craig)
    2. (Phil) "Are you publishing / recording real time data?" - well no, it's historical data of the last 4 hours
    3. (Phil) "I would hesitate ... which is not a safe assumption" - what should I be worried about? what might happen?

  • 1. Real time doesn't necessarily mean now; it just means that when a point event (Updated, Changed, or Logged) occurs, that value is included in the queue of values to be published. The receiver can then decide whether to save those values or merely update the points' runtimes with the newest values. The settings exist on both ends.

    2. If the sync runs very quickly, perhaps sending all values in a period in a single packet (one would have to increase the minimum overlap, which is good to do if the receiver is using NoSQL; otherwise it must be 0), and a new value had just been sent to the database controller on the publishing side, then the sync'ed-unto time could conceivably end up after that point value, which was still hiding in the database queue. In that case it would never be sent in a subsequent sync.
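The race can be sketched in a few lines of Python. `Publisher`, `synced_until`, and `db_queue` are made-up names for illustration; the point is only that a value still waiting in the write queue can end up older than the sync'ed-unto marker when the 'prior to' window is tiny:

```python
from dataclasses import dataclass, field

@dataclass
class Publisher:
    """Toy model of the publishing side (names invented for illustration)."""
    synced_until: float = 0.0                       # time the point is considered sync'ed unto
    db_queue: list = field(default_factory=list)    # timestamps awaiting the database write

    def run_sync(self, now, prior_to):
        # The sync assumes everything before (now - prior_to) is already in
        # the database and advances the marker -- the unsafe assumption.
        self.synced_until = now - prior_to

    def flush_queue(self):
        # Values whose timestamps now fall behind synced_until will never be
        # picked up by a later sync.
        missed = [t for t in self.db_queue if t < self.synced_until]
        self.db_queue.clear()
        return missed

p = Publisher()
p.db_queue.append(100.0)                  # value at t=100 queued, not yet written
p.run_sync(now=100.002, prior_to=0.001)   # 1 ms 'prior to' window
print(p.flush_queue())                    # [100.0] -- permanently missed
```

With a generous 'prior to' window (minutes rather than 1 ms), the cutoff stays safely behind any value that might still be sitting in the write queue.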


  • thank you phil

    1. the data is real-time data; it is read from the device side every 4 hrs and looks like:
      12:00 - 5.42
      12:15 - 9.24
      12:30 - 4.98
      and so on.
      saving those values, or even just updating the points' runtimes with the newest values on the server side, is exactly what I need

    2. I guess I encountered it, because with the latest settings ("prior to" of 1 millisecond) I do see gaps of 4 hrs in the data here and there.

    I would very much appreciate your recommendation for the settings I need in this case.
    thank you
    M


  • Try referring to this. I know it relates to meta points, but due to a similar issue with updating in 'real time', a bigger cache might solve this issue...
    https://forum.infiniteautomation.com/topic/4272/inconsistent-values-between-mqtt-and-meta-points/10


  • maya,

    My advice was to set the update event to "Logged" on the publisher and to transmit real-time data. On the receiver, save the real-time data. Use a "sync point values before" setting like 10 minutes; if you need to make the sync run every fifteen minutes, that's probably fine, but you should get all the data from the real-time transmission.

    I would not bother with Fox's suggestion. I would not use a sync-before time of 1 millisecond; just run the sync more often.
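As a rough sketch of that recommendation (all names and timings invented for illustration): real-time transmission delivers most values as they are logged, and a periodic sync with a modest "sync before" window backfills whatever was dropped in transit:

```python
# Toy timeline: values logged every 15 minutes, one lost in transit,
# recovered by a periodic history sync with a 10-minute lookback window.
def simulate(logged, dropped, sync_times, sync_before):
    received = {t for t in logged if t not in dropped}    # real-time path
    synced_until = 0.0
    for s in sync_times:                                  # periodic sync path
        cutoff = s - sync_before                          # "sync before" window
        received |= {t for t in logged if synced_until <= t < cutoff}
        synced_until = cutoff
    return sorted(received)

logged = [0, 15, 30, 45, 60]     # minutes; one sample per 15 min
dropped = {30}                   # one value lost on the real-time path
syncs = [20, 35, 50, 65, 80]     # sync cron fires every 15 min
print(simulate(logged, dropped, syncs, sync_before=10))   # [0, 15, 30, 45, 60]
```

The dropped value at t=30 is picked up by the sync at t=50, whose cutoff (40) has moved past it; with no syncs at all, it would stay missing.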