Using Persistent TCP points in script calculations
-
Hey hey, I have a question about how, or if, I could or should use the synchronized historical data that the persistent TCP connection offers.
My scenario currently is as follows.
On site I have a Mango ES which reads a totalized pulse count from some meters. I log that data locally and also calculate the difference since the last reading, so that essentially I have a raw count and a one-minute interval value. Both are logged at one-minute intervals, and I used to use an HTTP publisher to push both of these values to a cloud-based Mango. In the cloud-based Mango I have some Meta points, and when a new value is pushed a script runs which applies a pulse weight to my interval reading, leaving me with a nice one-minute consumption reading.
Using the HTTP publisher I had to manually create all the HTTP receiver points, so the Persistent TCP connection offers a really nice solution around that step. Right now I am using the Persistent TCP connection just like the HTTP publisher, where it only pushes the current readings without any history sync. As mentioned, I use the updating of the raw and interval points as a trigger for my script, which is why I have implemented it this way.
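For illustration, the interval-and-pulse-weight calculation described above could be sketched like this in plain JavaScript (not a Mango meta script; the pulse weight and sample counts are made-up values):

```javascript
// Sketch of the described calculation. Hypothetical values throughout.
var pulseWeight = 0.25; // e.g. kWh per pulse (assumed value)

// Totalized pulse counts sampled once a minute
var rawCounts = [1000, 1004, 1012, 1015];

// Interval value: difference since the previous reading
function intervalValues(counts) {
  var out = [];
  for (var i = 1; i < counts.length; i += 1) {
    out.push(counts[i] - counts[i - 1]);
  }
  return out;
}

// Consumption: pulse weight applied to each interval reading
function consumption(counts, weight) {
  return intervalValues(counts).map(function (d) { return d * weight; });
}

console.log(intervalValues(rawCounts));          // [4, 8, 3]
console.log(consumption(rawCounts, pulseWeight)); // [1, 2, 0.75]
```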
However, curiosity has got the better of me so now I'm curious if I can accomplish the same thing somehow but using the historical sync rather than just pushing current data? Is there a way that I can, for example, sync my historical data from the ES every X mins and on the cloud receiver side have my script trigger to calculate the values for my meta points? The way I picture this is, for example, every 15 mins my ES would push all the historical data up to the cloud. Upon arrival at the cloud the script would trigger and calculate all the new values for the meta points but with the same timestamp as the historical data, essentially back filling history for the meta points.
I know this sounds a bit insane. I'm also considering whether I should do all the calculations at the ES, i.e. put the pulse weights in the ES, and just push that data up. The only thing I don't like about that is that if, and it is an IF, we ever started implementing this solution outside our area, I would love to have the ability to ship out a prepackaged box where someone only needs to set the IP and wire stuff in. If the ES in the field sends me just raw data, I can control all the pulse weights and calculations myself from the cloud.
I think that what I've come to right now is pretty good for what I need, but I just wanted to ask in case there is some dead simple way that I don't know about. I also really like the idea of pushing all the historical data, it just makes sense. The only tricky part is that I want to calculate stuff based on that data.
Much appreciated as always.
-
Hi psysak,
Typically I encourage people to have all their meta points on the same Mango as the data they're running over.
However, you can set them up on the receiving end and you'll only have to do a little work to get it how you want it to be. The two main problems with having the meta points at the receiving end are:
- If you process a later datum, then data prior to it will be considered a backdate and will not generate the point events a meta point can use, which means the meta point won't run for the history-sync data if you have it saving real-time data, and,
- The expression
variableName.value
will go through the cache at the time the script runs, rather than at the time of the valueTime that caused it to run. So, if 1000 new values arrive at the same time, in proper time order as the newest samples, they would generate 1000 runs of the meta point, but the .value in the meta point for all 1000 may refer to the latest value among them (as the values are saved asynchronously from being processed by their consumers). To get around this, you have to use
p.pointValueAt(CONTEXT.getTimestamp()).value
in place of
p.value
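To picture why the timestamp-based lookup matters, here is a self-contained sketch in plain JavaScript (not Mango's API; the history data is made up). The toy `valueAt()` plays the role that `p.pointValueAt(CONTEXT.getTimestamp())` plays in a real meta script:

```javascript
// Toy model of a point's history to illustrate the problem above.
var history = [
  { time: 1000, value: 10 },
  { time: 2000, value: 20 },
  { time: 3000, value: 30 }
];

// ".value" in a meta script reads the latest cached value
function latestValue(hist) {
  return hist[hist.length - 1].value;
}

// A timestamp-based lookup returns the value at or before a given time
function valueAt(hist, ts) {
  var result = null;
  for (var i = 0; i < hist.length; i += 1) {
    if (hist[i].time <= ts) result = hist[i].value;
  }
  return result;
}

// If all three samples arrive in one burst, a run triggered by the
// sample at t=1000 would still see 30 via latestValue(), but the
// correct contextual value is 10.
console.log(latestValue(history));  // 30
console.log(valueAt(history, 1000)); // 10
```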
So you're probably looking for a configuration like,
- Meta points run on UPDATE context events.
- Publisher transmits real time data, data source does not save it.
- Pretty aggressive history syncs (every 5 or 10 minutes, for instance).
Your Meta points should only update from a sync, but you would be able to see the other points appear to update while only having history up to the previously completed sync.
-
WHOA!!! I'm going to setup one of my sites like this right now and see what happens, thank you Phil!
-
Nice!
I forgot to mention the statistics functions past() / prev() will also use the runtime as the relative point, so again you'll want to replace something like,
var pastFiveMinStats = p.past(MINUTE, 5);
with
// 300000 ms == 5 min
var pastFiveMinStats = p.getStats(CONTEXT.getTimestamp() - 300000, CONTEXT.getTimestamp());
It's a little bit of a pain to mimic the prev() function, but it can be done if necessary..
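For what it's worth, the window arithmetic behind mimicking past()/prev() against the context timestamp could be sketched like this (plain JavaScript; the helper names are hypothetical, and in a Mango meta script the computed bounds would be passed to p.getStats(from, to)):

```javascript
// Window arithmetic for mimicking past()/prev() relative to the
// triggering value's timestamp instead of the script's runtime.
// Helper names are made up for illustration.
var MINUTE_MS = 60 * 1000;

// past(MINUTE, n) relative to a context timestamp ts: the window ending at ts
function pastWindow(ts, periods, periodMs) {
  return { from: ts - periods * periodMs, to: ts };
}

// prev(MINUTE, n): the window immediately before the past() window
function prevWindow(ts, periods, periodMs) {
  return { from: ts - 2 * periods * periodMs, to: ts - periods * periodMs };
}

var ts = 1600000000000; // example context timestamp
console.log(pastWindow(ts, 5, MINUTE_MS)); // window covering the 5 min up to ts
console.log(prevWindow(ts, 5, MINUTE_MS)); // the 5 min window before that
```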
-
Also it looks like I misspoke. A meta point set to LOGGED events will run on backdates, too. So, you may rather use the UPDATE event, but both should work the same in this situation.
-
Just trying this now. Do you mean pointValueAt? getPointValueAt says it's not a function
-
Phil, that seems to be working perfectly!
-
Ah yes I did mean pointValueAt, I will edit the post.
Glad it's working!
-
Hey @phildunlap question with this. I just finished setting this up and now I'm curious about something.
I've set up the publisher to sync history every 15 mins and to push live data. As you stated, on the receiver side I get the history every 15 mins and the "cached" live updates. When the live updates arrive, every minute or so, this triggers my meta script to run and calculate a new value for the meta point. So in this scenario, for example, I have my published point on the receiver side with history which is 10 mins old but cached live data up to "now". In parallel my meta point, which has been calculated on each live update, has history and values up to "now" as well. Does that make sense?
So.. what happens now when my history is synced over from the publisher? Is the meta point now going to receive 15 mins worth of history in its parent point and calculate another set of data based on each historical record? In essence creating two identical entries in the history?
Up until just now I had the publisher not pushing live data, it was just pushing historical every 15mins and when all that data would arrive the meta point would run and backfill.
Curious situation, I'll let it run for a bit and see what happens but I thought you might have insight.
-
Interesting. Ya so I have the meta point logging on interval so of course it is logging... I have to change that to something else for this to work properly
-
If you set the Meta point to LOGGED events and the received point to log all data, you could have it calculating from the live updates to the received point. It's just not as efficient if you expect spotty connections in the publisher, but it does work.
-
Hey Phil, just wanted to report back. I've decided that for now I really don't need the live data, so I have just switched to periodic syncs of historical data. As far as I can tell it's working brilliantly; every 15 mins I get a push of data and upon receipt my calculation runs and backfills all the data for the META point. This is just about perfect, and I'm confident that if I needed to do live data I can. Although TBH, if I needed "live" data, I may just bump the history sync to 5 mins and do it that way. The beauty of this as well is that on the Mango in the field I have it set up to log on interval once a minute, which results in perfect timestamps at minute:00.
-
Nice!
If I were to try to solve the live data issue, I would almost certainly try running the meta point at the edge and then publishing the result. But, as we discussed setting your publisher's update event to LOGGED and your meta points context to LOGGED, you can have the live values update the meta point on the receiver's end without issue, without having to consider the logging type of the source point on the publisher's end.
-
Hey Phil, I'm getting weird errors and behaviour from this approach. The errors in the system look like this;
Example, at 11:04
'MORGUARD': Script error in point "MORGUARD - EO00001SE_CBCKWH": Script returned null. Ignoring result.
'MORGUARD': Script error in point "MORGUARD - EO00001SE_CBCKWH": TypeError: Cannot get property "value" of null in at line number 32
Looking at that point, which is the META calculated point, the values in the history are as follows;
222.000 kW·h 11:06:00
216.000 kW·h 11:05:00
228.000 kW·h 10:59:00
270.000 kW·h 10:58:00
However, the source point which triggers this, the sync'd history point, is as follows;
36.00 11:05:00
37.00 11:04:00
37.00 11:03:00
37.00 11:02:00
37.00 11:01:00
38.00 11:00:00
38.00 10:59:00
45.00 10:58:00
52.00 10:57:00
52.00 10:56:00
So the source point has the expected values and it's even calling the script, but upon running, the script, which is performing a
p.pointValueAt(CONTEXT.getTimestamp()).value
can't find those values for some reason.
Any ideas?
Edit: Weird, it seems to happen to only the one site and it's always with the oldest data. So, for example, the sync is every 15 mins, and the data which seems to have issues is always the oldest in the sync. So at 11:15 the problems are with 11:00, 11:01, 11:02 etc. There's about 16 points worth of data which gets sync'd over; it's almost like the oldest values fall out of the end of some queue but the script is still triggered to run.
I'm wondering if instead of doing this the way that I am, when data is pushed quickly run through and calculate the values, I should just be triggering a script which goes to the source point and just manually calculates the values for the meta point. Kind of just say every 15 mins go and calculate the values for the latest 15 mins worth of data.
-
I would wager it's a timing issue. You can see in your NoSQL settings that there is a 5000ms (default) delay before writing backdated point data. This is to queue it up for a more efficient insert. Alas this is one of the reasons to do Meta points at the edge Mango.
You may see resolution lowering that NoSQL setting. Potentially also putting
RuntimeManager.sleep(5000);
in the Meta script would circumvent that (but not an execution delay of 5s, as execution delays cancel one another and the point would only run once, 5s from the last update).
Edit: Probably I led you astray in encouraging getting the live updates into the same point, and you were better off doing the aggressive syncs for this point (you could always set it up as its own publisher for just that point and sync every minute) and not recording the live data.
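To picture the race, here is a toy simulation in plain JavaScript (not Mango's NoSQL API; all names here are made up) of backdated writes being queued for a delayed batch insert while a reader looks for them immediately:

```javascript
// Toy model of the timing issue: backdated writes sit in a pending
// queue until a delayed batch insert, so a script reading the history
// right away can see null (hence the "Cannot get property 'value' of
// null" errors).
function Store() {
  this.committed = {}; // timestamp -> value, visible to readers
  this.pending = [];   // backdated writes awaiting the batch insert
}
Store.prototype.writeBackdated = function (ts, value) {
  this.pending.push({ ts: ts, value: value }); // not yet readable
};
Store.prototype.flush = function () { // runs after the write delay
  for (var i = 0; i < this.pending.length; i += 1) {
    this.committed[this.pending[i].ts] = this.pending[i].value;
  }
  this.pending = [];
};
Store.prototype.valueAt = function (ts) {
  return this.committed.hasOwnProperty(ts) ? this.committed[ts] : null;
};

var store = new Store();
store.writeBackdated(1000, 37);
console.log(store.valueAt(1000)); // null: the write has not landed yet
store.flush();                    // after the (default 5000 ms) delay
console.log(store.valueAt(1000)); // 37
```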
-
K that sounds fair. I'm seriously at the point where I may do it all at the edge and then just push, it would be a lot cleaner.
Out of curiosity, I'm just playing around with a script which would manually go back and backfill the history for the meta point, ie the script just returns an array of data values since it last ran and then attempts to backfill the historical for the meta. I don't think I'll end up doing this but curious how that would be accomplished. If I return say 15 values, one for each minute, and step through the calculation until the end of the array will the meta point just get one value at the time of script run or does it somehow understand that I am trying to insert values into history? IF that makes sense.
-
You know what, nevermind :) I'm going to do my calculations in the edge from now on.
-
With the set() function becoming available in all scripting environments, that's the pathway.
So, one could have a script like this:
if (p.time > my.time + 15000) { // if the source point is 15 seconds ahead or more
    var values = p.pointValuesSince(my.time);
    for (var k = 0; k < values.length; k += 1) {
        // need to be specific about the data type, as pointValuesSince
        // returns an array of DataValue objects
        my.set(values[k].doubleValue, values[k].time);
    }
}
return p.value;
One needs to be mindful of logging types in using the set function. Only logging type 'ALL' on the point being set (the meta point) will work for backdates. Scripting data sources have the setting 'Saves historical' to avoid logging issues when using its set function, but there is no such option in Meta points.
-
You're awesome man, thanks for all the help :) I'll play around with that as well now that I have it
-
@psysak said in Using Persistent TCP points in script calculations:
You know what, nevermind :) I'm going to do my calculations in the edge from now on.
Ha!
Well, for posterity, the easiest setup for a meta point across a persistent TCP connection is to disable real time publishing (or just not save it), have frequent data syncs, and set the meta point to run on LOGGED events. The meta point can only be expected to update in that setup at each sync. But, computing things at the lowest level that has all the information is often a good distributed computing model.
You're awesome man, thanks for all the help :) I'll play around with that as well now that I have it
Certainly :D