Possibility of a Mango cluster
hussam last edited by
I have a running SCADA system based on Mango (to be exact, Mango + ScadaBR). It has 27 thousand points and more than 60 Modbus serial data sources. The serial COM ports are virtual COM ports created with Ethernet Connector, each listening on a TCP port for connections coming in over GPRS, 3G, 4G, and Ethernet.
I use a MySQL database, and the data source update period is 5 minutes. Every day generates more than 7 million point values, so I had to partition the pointValues table on the ts column; by now the table holds 1.5 billion values.
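For reference, the partitioning I did looks roughly like this. This is a hedged sketch, not the exact DDL: it assumes ts is an epoch-milliseconds BIGINT (as in Mango's schema), and the partition names and boundaries are illustrative. Monthly RANGE partitions make dropping old data a fast metadata operation instead of a huge DELETE.

```sql
-- Illustrative sketch: monthly RANGE partitions on the ts column
-- (epoch milliseconds). Boundary values are examples only.
ALTER TABLE pointValues
PARTITION BY RANGE (ts) (
    PARTITION p2014_01 VALUES LESS THAN (1391212800000),  -- < 2014-02-01 UTC
    PARTITION p2014_02 VALUES LESS THAN (1393632000000),  -- < 2014-03-01 UTC
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);
```

Note that MySQL requires the partitioning column to be part of every unique key on the table, so the primary key may need adjusting first.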
The data is growing day by day, so we have an idea: build a Mango cluster. If every server hosts 10 thousand points, then 10 servers give us 100 thousand points. The cluster would have one unified web front end; we would copy the session between the 10 servers and use HAProxy to route requests to different servers according to some rule (for example, a URL parameter). The end user would not notice any difference.
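The routing rule could be sketched in HAProxy like this. This is only an illustration of the idea: the `site` URL parameter, backend names, and addresses are all made up; session replication between Mango instances would still have to be solved separately.

```
frontend mango_front
    bind *:80
    # Route by a hypothetical "site" URL parameter
    acl is_site_a url_param(site) -m str a
    acl is_site_b url_param(site) -m str b
    use_backend mango_a if is_site_a
    use_backend mango_b if is_site_b
    default_backend mango_a

backend mango_a
    server s1 192.168.1.11:8080 check

backend mango_b
    server s2 192.168.1.12:8080 check
```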
JoelHaggar last edited by
Interesting that you bring this up, as we just had a technical discussion around this yesterday. It is indeed possible but would be a fairly large development effort. We don't have plans to implement it unless we have a serious customer that can cover the development cost, but it would open up a lot of interesting possibilities, including multi-node redundant failover.
In the meantime, you would benefit greatly from using the Mango Automation Enterprise license with the NoSQL database. We have other clients with many billions of records in a single database and 10,000 new samples coming in every few seconds. This runs on an Amazon server and performs very well.
hussam last edited by
I am trying the HAProxy solution; if it succeeds, I will share how it works.
jeremyh last edited by
Clustering Mango for HA (high availability) and failover would be a great feature and very attractive to enterprise customers I think.
Personally, I think the greater value would be in redundancy rather than load sharing (as it seems Mango itself can scale very well anyway).
Some network monitoring systems (PRTG?) implement this so that one node/instance does the monitoring and syncs its data to a secondary instance; if the primary goes down, the secondary instantly detects this and takes over polling. When the primary comes back, the data is resynced.
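The failover detection described above boils down to a heartbeat check. Here is a minimal sketch of that logic (not Mango or PRTG code; the function names, timeout, and intervals are arbitrary choices for illustration):

```python
import time

def should_take_over(last_heartbeat: float, now: float,
                     timeout: float = 15.0) -> bool:
    """Secondary decides to take over when the primary's last
    heartbeat is older than the timeout."""
    return (now - last_heartbeat) > timeout

def secondary_loop(get_last_heartbeat, start_polling,
                   check_interval: float = 5.0, timeout: float = 15.0):
    """Hypothetical secondary-node loop: watch the primary's
    heartbeat and start polling data sources if it goes silent."""
    while True:
        if should_take_over(get_last_heartbeat(), time.time(), timeout):
            start_polling()  # take over data collection
            return
        time.sleep(check_interval)
```

Resyncing history back to the recovered primary is the harder part and is not shown here.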