Is it normal to have 90-100% CPU usage on a dual-core Xeon server (4 GB RAM), even when scanning just one system at a time? Whenever there are a few systems in the scanning queue, the CPU usage for the lansweeperservice sits at 100%, even if I set the server to scan one system at a time. If the queue is empty and I force-scan one system, usage only peaks at around 80% for a few moments, with no sustained high usage. When the queue is full, it remains at 100% until the queue is empty, which normally takes a few hours for only around 500 systems.
During that time, the web interface loads slowly on the first access in the morning, but after that initial hit it speeds up for all IT staff and runs fairly well, even while the CPU is at 100%.
The biggest problem with the slowness is that we run other network and maintenance utilities on that same server, and they run very slowly during the morning hours, so we'd like to reduce the CPU usage of that service somehow. Right now, there isn't much difference in CPU usage between scanning 3 systems at a time and 100 at a time; it's just slow either way. The only difference is that the latter zips through all the systems quite fast, whereas scanning 1-3 at a time takes half a day. We've worked around it with patience thus far, but we were wondering if there is a solution we could try, or perhaps an issue with our implementation that causes this.
We are running everything over a gigabit network, on Server 2008 R2 64-bit. We have no IP scans set up either.
Thanks!