Mango/tomcat resources - locking up
-
hi, was messing with some scripts earlier today and put in a while loop which killed the server: 500 MB of RAM and near 100% CPU...
seeing as I need Mango running to make the changes, removing the CPU-looping 'while' stuff was very difficult...
I managed it, but earlier tonight I added a page with a trend and 3 curves, and something else has happened that has the server working flat out...
I don't know exactly what it is yet, but in cases like these, is there another way to resolve the issue,
like disabling data sources while the system is in an offline state, etc.?
thanks
Neil
-
I think I've found the problem, but I'm not sure why...
meta data': Script error in point "store10a cooler1 temp status": sun.org.mozilla.javascript.internal.WrappedException: Wrapped org.springframework.dao.ConcurrencyFailureException: PreparedStatementCallback; SQL [select pv.dataType, pv.pointValue, pva.textPointValueShort, pva.textPointValueLong, pv.ts, pva.sourceType, pva.sourceId from pointValues pv left join pointValueAnnotations pva on pv.id = pva.pointValueId where pv.dataPointId=? and pv.ts=?]; A lock could not be obtained within the time requested; nested exception is java.sql.SQLTransactionRollbackException: A lock could not be obtained within the time requested (#1) in at line number 1 in at line number 1
and this is my script...

var pv1 = temp.ago(SECOND, 10);     // temperature 10 seconds ago
var pv = temp.value;                // current temperature
var hi_lim = sp.value + hi.value;   // setpoint plus high offset
var lo_lim = sp.value + lo.value;   // setpoint plus low offset
if ((pv >= hi_lim) && (pv1 >= hi_lim)) value = 1;
else if ((pv <= lo_lim) && (pv1 <= lo_lim)) value = -1;
else value = 0;
return value;

it did work fine... but I've got two of these, and within each meta point the declared context variables have identical names... does that present a problem? I assumed the vars were unique to each meta data point?
thanks
Neil
-
seems the problem is a red herring... I've got a report that started around the time of the problem and it's locked up "in progress".
looks like I need to abort it, but it's unclear how to do that :-/
-
There's no "safe mode" in Mango, at least not yet. (Interesting idea, though.) The way stuff like that is tracked down during development is with a debugger, so if you can set up a dev environment, that would be the way to go.
Reports can't be canceled (because the SQL queries they run can't be canceled), but they should never take very long unless your database is massive. Any other clues yet?
-
yeah, "safe mode" is a good idea... I had a right job getting in to stop the data source...
I've looked through the logs, and here's my read on it:
while an hourly report was running, something happened that caused it to fail/lock up...
it now has a lock on the data points contained in the report template...
my script within the meta data source is trying to access historical data with the x.ago() function, but the logs say it can't due to SQL locking or something similar...
in the reports area it still thinks the hourly schedule is running... maybe it is, but it's not processing anything and has locked up...
and I can't clear it... so right now it looks like I'll have to create a new project and import/export the tags...
my test project isn't massive, but I'd be worried about this happening if it were :-/
-
Try shutting everything off (data sources, publishers, maybe even points), and gradually starting things up to see if you can narrow down the culprit.
-
as far as I can tell, it doesn't like the script part with...
temp.ago(SECOND, 10)
I put it in on its own and it can evaluate the value OK, but when I make it a context update it sends Tomcat to high CPU
-
If the script has to go to the database to get the value, things could slow down. If your database is big, you should consider using MySQL instead of Derby, since queries like that are quite a bit faster.
But probably the simplest thing to do is to increase the cache size for the point. On the data point edit page, look for the "Default cache size" field on the left. Your script asks for the value from 10 seconds ago, so if you're reading every second, say, increase the cache size to 10, or, to be on the safe side, maybe 15.
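Reading the advice above as a rule of thumb (this is my paraphrase, not an official Mango formula): the cache needs at least enough entries to cover the lookback window at the polling rate, plus a margin, so `x.ago()` never has to fall through to the database.

```javascript
// Rough cache sizing for a script that looks back in time.
// lookbackSeconds: furthest x.ago() lookback used by the script.
// pollSeconds: how often the point receives a new value.
// margin: extra entries as a safety buffer (value is a judgment call).
function suggestedCacheSize(lookbackSeconds, pollSeconds, margin) {
  return Math.ceil(lookbackSeconds / pollSeconds) + margin;
}

// temp.ago(SECOND, 10) with a 1 s poll and a margin of 5 entries:
console.log(suggestedCacheSize(10, 1, 5)); // 15
```

With the thread's eventual setting of 30, the margin is simply larger, trading a little memory for more headroom against slow polls.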
-
thanks for that, I've increased the cache to 30 and will see how that goes. is the MySQL functionality documented?
-