

Excessive nightly 3am server slow-downs

  • I have an event detector that raises an alarm if the time stamp register value obtained from a data source does not change for more than one minute (I read that register every second, over Ethernet). Shortly after 3am almost every night, the following occurs:

    1. The 'time stamp unchanged' alarm is raised.
    2. CPU idle goes down.
    3. CPU I/O wait goes up.
    4. 'Points to write' goes way up.
    5. Disk block writes go up, but disk block reads go way up.
    6. Medium Priority Work Items shoots up from zero into the thousands.


    1. What are these Medium Priority Work Items, and why is the peak number so different each night?
    2. How can I minimize the load or the quantity of that 3am work?
    3. Can Mango be modified so the Medium Priority Work Items are assigned a priority low enough not to interfere with data source reading?
    4. Can the Medium Priority Work Items be staggered so they're not all submitted at 3am?
    5. Can Mango be modified so that data source reads run on a higher-priority thread?


  • The nightly 'Medium Priority Work Items' that cause the 'data source time value unchanged for 1 minute' alarms imply a slowdown in the 1-second polling of the Modbus data source. The problem isn't just the alarm; it's the data dropout that the alarm implies. This appears to be triggered every night by the automatic data purge:

    INFO  2013-05-04 03:05:00,001 (com.serotonin.m2m2.rt.maint.DataPurge.executeImpl:60) - Data purge started  
    INFO  2013-05-04 03:08:31,832 (com.serotonin.m2m2.rt.maint.DataPurge.executeImpl:70) - Data purge ended, 2001716 point samples deleted  

    That's roughly two million point samples deleted in about three and a half minutes (around 9,500 rows per second), which lines up with the disk I/O spike. A simple solution might be to run the data purge at a low (or lower) priority, or to wrap it with ionice so that it does not saturate disk I/O. When could something like this be implemented?
    I have roughly 300 data points and am running Mango core 2.0.6.