
Optimizing Mango for Limited Memory Systems

  • Greetings, all:

    We're running Mango on embedded machines with 1 GB of RAM and 16 GB of storage. I wonder if someone more knowledgeable could throw some advice up here about the best way to optimize systems with limited RAM and disk.

    To wit: I understand that purge settings keep disk usage under control, but I'm not sure what to do about memory.

    We're running ~300-point systems with 5-second polling on most points, nearly all of which is getting sent up via the persistent TCP publisher. I'm not quite sure how to minimize Mango's memory footprint. On a freshly running instance on live data, top shows, of 1001.5 MB total: 35.4 free, 317.4 used, and 648.6 buff/cache.

    I was browsing the forum here and have seen many threads about memory-leak fixes and the like. So: any advice on keeping Mango happy with limited memory and disk resources?


  • Hi Greg,

    The most important thing is to pick an explicit memory allocation that is appropriate to the machine but as large as possible. On your machine, you could probably allocate 650 or 700 MB to Mango before the operating system starts waking up the OOM killer to take it back.
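    As a sketch of what that allocation might look like (the script name and location are assumptions; Mango 3 installs typically pick up JVM options from a script under `Mango/bin/ext-enabled/`, so check your own install's startup scripts), the heap cap is set with the standard JVM `-Xmx` flag:

    ```shell
    # Hypothetical Mango/bin/ext-enabled/memory.sh
    # -Xmx caps the Java heap; setting -Xms to the same value
    # pre-allocates it, so the heap never grows into memory the
    # OS has already committed elsewhere.
    JAVA_OPTS="$JAVA_OPTS -Xms650m -Xmx650m"
    ```

    Note that the heap is not the whole process: thread stacks, metaspace, and direct buffers sit on top of `-Xmx`, which is why you leave a few hundred MB of headroom on a 1 GB machine rather than giving the heap everything.
    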

    There were some memory leaks in the past, true, but there's nothing you need to do to prepare for bugs that have already been fixed.

    There are other things, but they are sometimes subtle and certainly numerous. For instance, the persistent publisher's "minimum overlap when synchronizing blocks of data" setting can speed up synchronizations significantly (the receiver must be using NoSQL, though), which frees the memory in use there for other purposes sooner.

    Or, scheduling a bunch of meta points on the same cron pattern with a statistics call ( .past(DAY), for instance) becomes a bad idea sooner when memory is scarce. The same could be said for a bunch of reports scheduled for the same instant.

    Usually for memory I find it's easier to unwind a problem once it occurs than to presage everywhere it could go wrong. That way, if there is no problem, there is no unwinding necessary.

    Finally, users can cause massive memory usage in short order with certain large requests (this should have improved in 3.6.3, which allows much larger REST requests with much smaller memory footprints, and 3.6.4 should improve it some more), so access control is an important variable as well.