| To: | performancecopilot/pcp <pcp@xxxxxxxxxxxxxxxxxx> |
|---|---|
| Subject: | Re: [performancecopilot/pcp] pmwebd impossibly slow when using grafana with 300 archives (#117) |
| From: | "Frank Ch. Eigler" <notifications@xxxxxxxxxx> |
| Date: | Tue, 04 Oct 2016 05:20:01 -0700 |
|
300 servers are stretching the practical limits of pmwebd's current approach to searching archives, especially if the archives are too large to fit into RAM. If the active set of archives (those pmwebd needs to read, plus those that something else, e.g. pmmgr/pmlogger, is still writing) does not fit into RAM, then I/O will start to dominate everything, as you are noticing.

Can you offer some stats about your archives?

- How far back do they go?
- How large are the currently-written-to ones?
- How much RAM do you have?
- How many separate archive files exist?
- Are any of them compressed (via the pmlogger service's pmlogger_daily, as in *YYYYMMDD.0.xz)?
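If it helps, something like the following sketch could collect those stats in one pass. It assumes the archives live under the common default pmlogger directory (`/var/log/pcp/pmlogger`) and that GNU find is available; adjust `ARCHIVE_DIR` for your layout.

```shell
#!/bin/sh
# Rough archive census -- ARCHIVE_DIR is an assumption, not a pmwebd setting.
ARCHIVE_DIR=${ARCHIVE_DIR:-/var/log/pcp/pmlogger}

# Total on-disk size of all archives.
du -sh "$ARCHIVE_DIR"

# Number of separate archives (each archive has one .meta file).
find "$ARCHIVE_DIR" -name '*.meta' | wc -l

# How many data volumes have been compressed by pmlogger_daily.
find "$ARCHIVE_DIR" -name '*.xz' | wc -l

# Oldest archive, i.e. how far back the data goes (GNU find -printf).
find "$ARCHIVE_DIR" -name '*.meta' -printf '%T+ %p\n' | sort | head -1

# Available RAM, for comparison against the active archive set.
free -m
```

Posting the output of those commands (or equivalents) would make it much easier to judge whether the working set fits in the page cache.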