| To: | performancecopilot/pcp <pcp@xxxxxxxxxxxxxxxxxx> |
|---|---|
| Subject: | Re: [performancecopilot/pcp] pmwebd impossibly slow when using grafana with 300 archives (#117) |
| From: | "Frank Ch. Eigler" <notifications@xxxxxxxxxx> |
| Date: | Tue, 04 Oct 2016 05:39:05 -0700 |
|
So about 2 GB of data per server per day, times seven days, times 300 servers: roughly 4200 GB of data on disk. Wow. Even a single day's data won't fit into your machine's RAM, so any scanning would have to rely on libpcp optimally using the archive .index files to seek to just the parts being requested by the client (pmwebd/grafana). I don't know whether PCP developers have much experience with such RAM-starved configurations.

This is not to say it's hopeless. I'd start with a highly constrained grafana query (substituting PMWEBD and HOSTNAME). It represents something like the best case: one archive file, a small time slice from the end. If that works, try additional &target= clauses, or gradually relax the host wildcard (so as to select more hosts).
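As a rough sketch of what "highly constrained" means here: one host's archive, one metric, a short time window off the end. The port (44323), the /graphite/render path, and the metric name below are assumptions based on pmwebd's Graphite-emulation API, not the exact URL from the original message; HOSTNAME and PMWEBD are placeholders to substitute.

```python
# Build a best-case pmwebd/Graphite render query: a single target with no
# wildcards, a small "from" window, JSON output. Relax one constraint at a
# time (add &target= clauses, widen the host pattern) once this is fast.
from urllib.parse import urlencode

def constrained_query(pmwebd="PMWEBD:44323", host="HOSTNAME",
                      metric="kernel.all.load", window="-10min"):
    """Return a render URL limited to one archive and a short time slice."""
    params = urlencode({
        "target": f"{host}.{metric}",  # single target, no host wildcard yet
        "from": window,                # small slice from the end of the data
        "format": "json",
    })
    return f"http://{pmwebd}/graphite/render?{params}"

print(constrained_query())
```

Timing this one request (e.g. with curl or time) gives a baseline before adding more targets or relaxing the host pattern.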