From owner-pcp@oss.sgi.com Sat Jun 3 16:48:28 2000 Received: by oss.sgi.com id ; Sat, 3 Jun 2000 16:48:18 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:1857 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Sat, 3 Jun 2000 16:47:50 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via SMTP id RAA07418 for ; Sat, 3 Jun 2000 17:51:38 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA27105; Sun, 4 Jun 2000 10:44:13 +1000 Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (980427.SGI.8.8.8/980728.SGI.AUTOCF) via ESMTP id KAA22155; Sun, 4 Jun 2000 10:44:08 +1000 (EST) Date: Sun, 4 Jun 2000 10:44:08 +1000 From: Ken McDonell To: kjw@engr.sgi.com cc: ptg_pcp@larry.melbourne.sgi.com, sgisat@corp.sgi.com, sgi.engr.pcp@cthulhu.engr.sgi.com, pcp@oss.sgi.com Subject: PCP Temperature monitor Message-ID: MIME-Version: 1.0 Content-Type: MULTIPART/MIXED; BOUNDARY="-2045888623-480505077-960079448=:4817" Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. Send mail to mime@docserver.cac.washington.edu for more info.

---2045888623-480505077-960079448=:4817 Content-Type: TEXT/PLAIN; charset=US-ASCII

Kevin Wang gave me (thanks) a temperature sensor that consists of a standard DB-9 serial port connector, a length of telephone cable, some small parts from Dallas Semiconductor Corporation (DS2480 and DS1280 I think from the source, but I cannot find these documented at their website http://www.dalsemi.com, so maybe there are newer or different parts ... Kevin?), and some open source from Dallas to drive this gizmo.

42 minutes later I had a working PMDA (I think this is the fastest one yet) exporting the temperature in celsius and fahrenheit. The attached data (ascii below and a pmchart snapshot in the attachment) shows the temperature changes from the ambient in my hotel room, then holding the sensor in my hot little hand, then stuffing the sensor in the freezer of the refrigerator, then wandering about in the morning sun in northern California with my laptop under my arm ....

Once I clarify some details of how these gizmos are multiplexed on a single serial port, I'll merge the PMDA code into the base PCP source for the open source release and IRIX. Now PCP can help all those folks with wine cellars, machine rooms and fancy air conditioning controls at home ... 8^)>

pmdumptext -a 1GCiiI.bozo-pc -w 19 -i -t 20 roomtemp
Sun Jun 4 21:28:40        ?        ?
Sun Jun 4 21:29:00    19.84    67.72
Sun Jun 4 21:29:20    19.90    67.84
Sun Jun 4 21:29:40    28.56    82.49
Sun Jun 4 21:30:00    31.40    88.10
Sun Jun 4 21:30:20    32.38    90.22
Sun Jun 4 21:30:40    32.66    90.75
Sun Jun 4 21:31:00    31.43    90.40
Sun Jun 4 21:31:20    14.70    60.14
Sun Jun 4 21:31:40     4.15    40.57
Sun Jun 4 21:32:00    -2.76    29.54
Sun Jun 4 21:32:20    -6.26    19.39
Sun Jun 4 21:32:40   -0.01K    13.91
Sun Jun 4 21:33:00   -0.01K    10.19
Sun Jun 4 21:33:20   -0.01K     9.52
Sun Jun 4 21:33:40    -6.13    18.12
Sun Jun 4 21:34:00    10.88    50.29
Sun Jun 4 21:34:20    14.06    56.98
Sun Jun 4 21:34:40    17.20    62.74
Sun Jun 4 21:35:00    18.90    65.81
Sun Jun 4 21:35:20    20.31    68.56
Sun Jun 4 21:35:40    21.67    70.76
Sun Jun 4 21:36:00    22.27    71.89
Sun Jun 4 21:36:20    22.95    73.34

---2045888623-480505077-960079448=:4817 Content-Type: IMAGE/GIF; name="snap.gif" Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: pmchart for temperature Content-Disposition: attachment; filename="snap.gif"
[base64-encoded GIF attachment omitted: pmchart snapshot of the temperature data]
---2045888623-480505077-960079448=:4817--
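As a rough illustration of the conversion side of such a PMDA (the metric names and the read_ds1820() helper below are placeholders invented here, not taken from the actual PMDA source; only the Celsius-to-Fahrenheit arithmetic is standard):

/*
 * Minimal sketch only: read_ds1820() stands in for the Dallas 1-wire
 * sample code that returns a reading in degrees Celsius.  The two
 * "roomtemp" names mirror the two values the PMDA exports, but are
 * assumptions, not the real metric names.
 */
#include <stdio.h>

/* stub for the 1-wire read; a real version would drive the DS2480
 * serial adapter and the DS1820 sensor */
static double read_ds1820(void)
{
    return 19.84;    /* ambient hotel-room reading from the data above */
}

int main(void)
{
    double celsius = read_ds1820();
    double fahrenheit = celsius * 9.0 / 5.0 + 32.0;    /* F = C * 9/5 + 32 */

    printf("roomtemp.celsius    %.2f\n", celsius);
    printf("roomtemp.fahrenheit %.2f\n", fahrenheit);
    return 0;
}

Running this with the stubbed reading prints 19.84 and 67.71, matching the first sample in the table above (to within rounding).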
From owner-pcp@oss.sgi.com Mon Jun 5 07:51:40 2000 Received: by oss.sgi.com id ; Mon, 5 Jun 2000 07:51:30 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:46596 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Mon, 5 Jun 2000 07:18:24 -0700 Received: from puma.engr.sgi.com (puma.engr.sgi.com [130.62.52.217]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via ESMTP id DAA07598 for ; Mon, 5 Jun 2000 03:08:03 -0700 (PDT) mail_from (kjw@puma.engr.sgi.com) Received: (from kjw@localhost) by puma.engr.sgi.com (SGI-8.9.3/8.9.3) id DAA61566; Mon, 5 Jun 2000 03:01:49 -0700 (PDT) Date: Mon, 5 Jun 2000 03:01:49 -0700 (PDT) From: Kevin Wang Message-Id: <200006051001.DAA61566@puma.engr.sgi.com> To: kjw@cthulhu.engr.sgi.com, Ken McDonell Subject: Re: PCP Temperature monitor Cc: pcp@oss.sgi.com, sgi.engr.pcp@cthulhu.engr.sgi.com, sgisat@corp.sgi.com, ptg_pcp@larry.melbourne.sgi.com Sender: owner-pcp@oss.sgi.com Precedence:
bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

Some technical background on the Dallas Semiconductor 1-wire-network (Microlan) that is discussed below. It's called 1-wire (trademark!), even though it's physically two wires. Go figure; marketing! Of the two wires, one is ground, and the other is both power and data. This does impose some weird constraints on the 1-wire-network system, but it has shown itself to me to be quite robust with the use of standard rj-11 phone connectors.

From: Ken McDonell
>Kevin Wang gave me (thanks) a temperature sensor that consists of a
>standard DB-9 serial port connector, a length of telephone cable, some
>small parts from Dallas Semiconductor Corporation (DS2480 and DS1280 I
>think from the source, but I cannot find these documented at their
>website http://www.dalsemi.com, so maybe there are newer or different
>parts ... Kevin?), and some open source from Dallas to drive this
>gizmo.

Yes, those are the correct part #'s. Here's the general type of message that I fax to order parts:

>Fax To: 972-371-3715
>
>From: Kevin Wang
>Please bill to: (visa|mastercard) 0000-1111-2222 exp 00/00/2000
>Please ship via: UPS
>Please ship to:
> Kevin Wang
> 1111 Street Ave
> Some City, CA 90000
>
>Items Requested:
>
>10 DS1820, PR-35 package - temp sensor accurate to 0.5C
>1 DS9097U-S09 - universal 1-wire com port adapter

The DS9097 is the serial dongle, with the chip inside being the DS2480. I can't find my receipt from my last shipment, but I remember that the DS9097's were $10-$15, and the sensors themselves were around $2-4 USD each. My original shipment of 2x DS9097 serial port dongles and 4x DS1820 sensors cost $47.42 USD (includes shipping cost).

From: Ken McDonell
>Kevin, do you expect to have many/several of these sensors wired up
>in parallel from the same serial port? And how many in the limit?

Yes, I definitely expect more than one sensor to be on a given 1-wire network at a time. I am running three sensors on one 1-wire-network at home, and plan on eventually increasing that to ~20. At work, I have one in my office, and eventually would like to have one sensor per machine, which would make it average 6 at any given time.

I vaguely remember that the limit on quantity of sensors is something like 100. I can't find the appropriate spec sheet right now, but I do remember that it was a very large number. The real limitation is electrical. It varies depending on cable capacitance, length and quantity of sensors. I have tested two sensors at the end of 400' (~120m) of cheap phone cable, and it worked. I plan on using RJ-11 telephone cables everywhere to run these sensors, since it's cheap, light, and flexible cable and I can use existing telephone "Y" splitters to add more sensors.

Which reminds me, I brought a second sensor and the appropriate RJ-11 connectors to give to you on Friday, but totally forgot to communicate this. How can I get this second sensor to you, so that you can properly test multi-sensor 1-wire-networks? This sensor has only about 19 inches (~0.5m) of cable attached to it. Could I, should I just drop it into an envelope and mail it?

>Do you ever expect to use more than 1 serial port? I am guessing that
>cable lengths or other MicroLAN limitations would force you to a second
>serial port eventually.

Realistically, just for temp sensors, no. But then again, I don't plan on having that many sensors on the 1-wire-network, maybe 20 tops. If there is a high 1-wire-network load, it can be solved by adding a third wire for power.
Actual power consumption is extremely low, and could be done with some batteries, although a wall wart would be more than enough.

>Do you know where I can find
>(a) a description of the parts you purchased and wired together, and

Included above. Note that those are raw parts, and one still needs to connect the rj-11 phone jack to the sensor itself, which looks like a plain transistor. I can write something up (and include various part #'s) so that anyone with a wire stripper and a screwdriver can put something together.

>(b) a MicroLAN programmer's guide ... I am most interested in
> - when I need to acquire and release the MicroLAN ... I am doing
> it for every fetch at the moment, but could I just acquire it
> once at the start of the PMDA? And if so, who would I screw
> up and how?

http://www.ibutton.com/tmex/

The way I run it at home is to probe the network only once - at program startup, which tends to be once every reboot. It does get confused when a sensor drops off the network, but a restart makes the error messages go away.

> - there is a 1 second quiescence time before sampling the
> reply from each sensor ... this will mean the PMCD timeouts
> will have to be potentially very large or I need a caching
> PMDA if the sensor count is going to be large.

http://www.dalsemi.com/datasheets/pdfindex.html

Actually, that's ~1 second in which the sensor is calculating the temperature! The spec sheets say that this calculation should take no more than 0.5 seconds, but once you add in transfer time and other overhead, it comes out to ~1 sec. Also, that's per sensor. Only one sensor may be running temperature calculations at a time, because with only two wires to do power and signal, it needs to tie the bus up high to have enough power to do the calculation itself!

If you go with the model of probing the 1-wire-network at startup time, and holding the serial port the whole time, I don't see a problem with doing a caching type pmda where everything is continuously queried for temperatures, and assuming 100 sensors at 1 second per datapoint, that's an update time of 100 seconds. Not great, but reasonable. If someone does need more timely temperature readings, one need only purchase more serial dongles and run more 1-wire-networks separate from each other.

Hm, one could make some sort of combined pmda, where one pmda can and does probe multiple serial ports, and hides this fact. The 64-bit serial number per temp sensor makes them totally unique from each other. No need to identify which 1-wire-network is which. Realistically, I don't think we need to support more than (qty 1) 1-wire-network per system. The pmda's might get unusually messy trying to separate things out by network and by temp sensor id#. I see this as a quick and easy fix for systems with no accessible environmental sensors.

>Do you want either the /var/pcp/pmdas/roomtemp contents for Linux, and/or
>the source code for the PMDA?

Yes, I'd love to have this running under pcp & linux! Binaries would be easiest, but anything I need to do is acceptable.
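A minimal sketch of the caching scheme described above, assuming hypothetical probe_network() and read_sensor() helpers in place of the real Dallas library calls; this is not the actual PMDA code, just the shape of the probe-once, refresh-in-a-loop design:

/*
 * Probe the 1-wire network once at startup, then refresh one sensor at
 * a time (each conversion takes ~1 second) and answer fetch requests
 * from the cache.  probe_network() and read_sensor() are stand-ins for
 * the Dallas library calls, not real function names.
 */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

#define MAX_SENSORS 100

struct sensor {
    uint64_t rom_id;     /* unique 64-bit serial number per DS1820 */
    double   celsius;    /* last cached reading */
};

static struct sensor cache[MAX_SENSORS];
static int nsensors;

/* hypothetical: enumerate ROM ids on the bus, return how many we found */
static int probe_network(struct sensor *s, int max) { s[0].rom_id = 0x10abcdef; return 1; }
/* hypothetical: trigger a conversion and read it back (takes ~1 second) */
static double read_sensor(uint64_t rom_id) { return 21.5; }

int main(void)
{
    nsensors = probe_network(cache, MAX_SENSORS);    /* probe once at startup */

    for (;;) {
        /* with N sensors this pass takes roughly N seconds, which is
         * where the ~100 second update time for 100 sensors comes from */
        for (int i = 0; i < nsensors; i++) {
            cache[i].celsius = read_sensor(cache[i].rom_id);
            sleep(1);    /* one conversion at a time; the bus powers the sensor */
        }
        /* a fetch callback would simply return cache[i].celsius here,
         * so PMCD never waits on the 1-wire bus */
    }
    return 0;
}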
- Kevin

From owner-pcp@oss.sgi.com Mon Jun 5 07:51:44 2000 Received: by oss.sgi.com id ; Mon, 5 Jun 2000 07:51:31 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:42366 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Mon, 5 Jun 2000 06:56:32 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via SMTP id JAA07521 for ; Sun, 4 Jun 2000 09:24:05 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id CAA29880; Mon, 5 Jun 2000 02:16:39 +1000 Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (980427.SGI.8.8.8/980728.SGI.AUTOCF) via ESMTP id CAA34673; Mon, 5 Jun 2000 02:16:36 +1000 (EST) Date: Mon, 5 Jun 2000 02:16:36 +1000 From: Ken McDonell To: Steve Daniels cc: pcp@oss.sgi.com Subject: Re: PMIE and processes In-Reply-To: <39237666.91FA2ED4@denver.sgi.com> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

On Wed, 17 May 2000, Steve Daniels wrote:
> Ken,
> I think this is a simple question that you can answer off the top
> of your head. I am trying to write a pmie rule that will notify the
> admin team at XYZ Corp when a process gets out of hand with regard to
> memory consumption. Periodically, they have a couple of processes that
> consume the entire memory on the machine and we would like to catch
> them before the machine starts to swap a great deal. So, I am testing on
> an O2 and don't understand why these two rules don't seem to work:
>
> delta = 1 min;
> memoryhog =
> some_inst
> ( proc.memory.virtual.dat > 32 Mbyte )
> -> syslog 2 min "%i is consuming %v memory";
>
> memoryholder =
> some_inst
> ( proc.memory.virtual.bss > 32 Mbyte )
> -> syslog 2 min "%i is holding %v uninitialized memory";
>
> I use memclaim to grab 40 Mbytes of memory and check with
> pmval -i proc.memory.virtual.bss to verify that
> memoryholder should be satisfied, which it does show that
> proc.memory.virtual.bss = 40 Mbytes, but pmie never fires.

Steve, firstly sincere apologies ... I was away and then swamped when I got back.

In Irix, pmie is never going to be able to do this ... it is not a pmie problem but rather a proc PMDA issue ... right from the outset, I decided (and it has been argued that this was a mistake) that fetching metrics for _all_ the processes on a regular basis was not likely to be helpful, and certainly could be expensive. So pmie, like any other PCP client, can fetch metrics for selected processes, but not for _all_ processes ... to see what happens, try

$ pminfo -f proc.memory.virtual.dat

and compare with

$ pminfo -F proc.memory.virtual.dat

In the Linux implementation this restriction was relaxed, but fewer metrics are available from the "proc" group. In fact, in Linux the comparable rule might be

some_inst proc.memory.size > 4 Mbyte -> print "bingo:" " [%i] %v";

And starting and stopping your friendly web browser seems to prove it actually works ...

bash$ pmie -t 30

> Further, pmie -d shows that all the instances including memclaim
> are in the evaluation test. So, what am I missing? Should I be using
> the hotproc PMDA to do this?

Ah, now this is a bit tricky ... the -d option to pmie is pretty strange in that it does not use the regular fetch scheduling path.
A more accurate check of the non-debug behaviour would be using -v and -D, as in:

masala 9% pmie -v -Dfetch < /tmp/steve.pmie
pmFetch returns ...
pmResult dump from 0x100784e0 timestamp: 960134418.114805 09:00:18.114 numpmid: 2
  3.5.3 (proc.memory.virtual.dat): Explicit instance identifier(s) required
  3.5.5 (proc.memory.virtual.bss): Explicit instance identifier(s) required
memoryhog: ?
memoryholder: ?
pmFetch returns ...
pmResult dump from 0x100784e0 timestamp: 960134428.122448 09:00:28.122 numpmid: 2
  3.5.3 (proc.memory.virtual.dat): Explicit instance identifier(s) required
  3.5.5 (proc.memory.virtual.bss): Explicit instance identifier(s) required
memoryhog: ?
memoryholder: ?

Note the warnings/errors from the pmFetch, and the expression value is ? (not true or false) because there are no values to be used in the evaluation.

The right tool here (for Irix) is indeed hotproc ... I configured it thusly ...

masala 5# pminfo -f hotproc.nprocs
hotproc.nprocs
    value 4
masala 6# pminfo -f hotproc.control
hotproc.control.refresh
    value 60
hotproc.control.config
    value "(virtualsize > 32768.000000)"
hotproc.control.config_gen
    value 2

And then rewrote your rules to use hotproc.* in lieu of proc.*, and ...

masala 16% pmie -v < /tmp/steve.pmie
memoryhog: true
memoryholder: false
masala 17% tail -1 /var/adm/SYSLOG
Jun 4 09:09:29 5D:masala pcp-pmie[19354]: 0000001280 /usr/bin/X11/xdm is consuming 34852864 memory0000001840 vmail is consuming 35123200 memory

and pmem confirms that vmail and xdm are the only two candidates on this system.

From owner-pcp@oss.sgi.com Mon Jun 5 08:01:29 2000 Received: by oss.sgi.com id ; Mon, 5 Jun 2000 08:01:19 -0700 Received: from ppp0.ocs.com.au ([203.34.97.3]:42251 "HELO mail.ocs.com.au") by oss.sgi.com with SMTP id ; Mon, 5 Jun 2000 08:01:09 -0700 Received: (qmail 15866 invoked by uid 502); 5 Jun 2000 10:34:20 -0000 Received: (qmail 15824 invoked from network); 5 Jun 2000 10:34:11 -0000 Received: from ocs3.ocs-net (192.168.255.3) by mail.ocs.com.au with SMTP; 5 Jun 2000 10:34:11 -0000 X-Mailer: exmh version 2.1.1 10/15/1999 From: Keith Owens To: Kevin Wang cc: kjw@cthulhu.engr.sgi.com, Ken McDonell , pcp@oss.sgi.com, sgi.engr.pcp@cthulhu.engr.sgi.com, sgisat@corp.sgi.com, ptg_pcp@larry.melbourne.sgi.com Subject: Re: PCP Temperature monitor In-reply-to: Your message of "Mon, 05 Jun 2000 03:01:49 MST." <200006051001.DAA61566@puma.engr.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Date: Mon, 05 Jun 2000 20:34:07 +1000 Message-ID: <3179.960201247@ocs3.ocs-net> Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

On Mon, 5 Jun 2000 03:01:49 -0700 (PDT), Kevin Wang wrote:
>Some technical background on the Dallas Semiconductor 1-wire-network
>(Microlan) that is discussed below.
>
>It's called 1-wire (trademark!), even though it's physically two
>wires. Go figure; marketing!

Historically the ground was not counted. Devices that had separate power, ground and data were called 2 wire devices.
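To make the "Explicit instance identifier(s) required" point from Ken's message above concrete: on IRIX a PCP client has to narrow the instance profile before fetching proc metrics. A rough sketch using the standard libpcp profile calls; the pid is a placeholder and error handling is omitted, so treat it as the shape of the idea rather than production code:

/*
 * Select one process instance of the proc indom before fetching.
 * The pid below is a placeholder; a real client would discover it
 * (e.g. via pmGetInDom) or take it from the user.
 */
#include <pcp/pmapi.h>

int main(void)
{
    char     *names[] = { "proc.memory.virtual.dat" };
    pmID      pmid;
    pmDesc    desc;
    pmResult *rp;
    int       pid = 1280;    /* placeholder: the process of interest */

    pmNewContext(PM_CONTEXT_HOST, "localhost");
    pmLookupName(1, names, &pmid);
    pmLookupDesc(pmid, &desc);

    pmDelProfile(desc.indom, 0, NULL);    /* start with an empty profile ... */
    pmAddProfile(desc.indom, 1, &pid);    /* ... then ask for just this pid */

    pmFetch(1, &pmid, &rp);    /* values now come back for that pid only */
    pmFreeResult(rp);
    return 0;
}

Without the pmDelProfile/pmAddProfile step the IRIX proc PMDA returns the "Explicit instance identifier(s) required" error seen in the pmie -Dfetch trace above.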
From owner-pcp@oss.sgi.com Thu Jun 8 08:39:25 2000 Received: by oss.sgi.com id ; Thu, 8 Jun 2000 08:39:14 -0700 Received: from tah14.cesnet.cz ([194.108.115.182]:54537 "EHLO arthur.plbohnice.cz") by oss.sgi.com with ESMTP id ; Thu, 8 Jun 2000 08:38:58 -0700 Received: (from lemming@localhost) by arthur.plbohnice.cz (8.10.1/8.10.1) id e58DPvd01312; Thu, 8 Jun 2000 13:25:57 GMT Message-ID: <20000608152557.63227@arthur.plbohnice.cz> Date: Thu, 8 Jun 2000 15:25:57 +0200 From: Michal Kara To: pcp@oss.sgi.com Subject: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailer: Mutt 0.88e Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

Hello!

A new PCPMON version is available. I have just added archive support and I would like to get some feedback on the solution used. The new version can be downloaded from http://k332.feld.cvut.cz/~lemming/projects/pcpmon-1.2.95.tar.gz

Please refer to the README for instructions on how to "turn on" the archive mode.

There are two questions, if anyone has something to say about them:

- I have thought of a "search" feature in the archive mode - you would enter a condition like "cpuUsage > 90" (supposing you have defined value cpuUsage and that it is in percents) and you would be able to jump to the next/previous time when the condition was satisfied (with some threshold etc.). Do you think it would be useful?

- Another feature for the archive mode is that you would be able to specify a "time offset" for an archive. For example, you would be able to make values recorded on May the second appear as if they had been recorded on May the first. Then you would be able to show two graphs on one screen - one for May the second and one for May the first - to compare the values (of, say, CPU load).

Let me know what you think about the usefulness of these features.

Michal Kara

From owner-pcp@oss.sgi.com Thu Jun 8 08:39:35 2000 Received: by oss.sgi.com id ; Thu, 8 Jun 2000 08:39:25 -0700 Received: from tah14.cesnet.cz ([194.108.115.182]:54537 "EHLO arthur.plbohnice.cz") by oss.sgi.com with ESMTP id ; Thu, 8 Jun 2000 08:39:16 -0700 Received: (from lemming@localhost) by arthur.plbohnice.cz (8.10.1/8.10.1) id e58D56a32421; Thu, 8 Jun 2000 13:05:06 GMT Message-ID: <20000608150505.23607@arthur.plbohnice.cz> Date: Thu, 8 Jun 2000 15:05:05 +0200 From: Michal Kara To: pcp@oss.sgi.com Subject: Archive interpolation mode question Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailer: Mutt 0.88e Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

Hello!

I have added archive mode to PCPMON. It works fine, but when I use the "interpolation" mode, PCP refuses to fetch the first two (or three?) metrics from the archive. I guess it is because the interpolation algorithm needs a few previous values, I just want to be sure.

Thanks,
Michal Kara

P.S.: If it is as I think, it would be nice to leave a note in the pmSetMode(3) manpage.
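For context, a small sketch of what interpolated archive replay looks like from the client side, using standard libpcp calls. The archive name and metric are placeholders (borrowed from later messages in this thread), and most error checking is omitted; the point is that with PM_MODE_INTERP each value is interpolated from samples either side of the requested time, so requests too close to the start of the log may come back with no values:

/* hedged sketch: open an archive, step through it in interpolation mode */
#include <stdio.h>
#include <pcp/pmapi.h>

int main(void)
{
    char       *names[] = { "kernel.all.cpu.idle" };   /* placeholder metric */
    pmID        pmid;
    pmLogLabel  label;
    pmResult   *rp;
    int         i;

    pmNewContext(PM_CONTEXT_ARCHIVE, "20000605.10.41");   /* placeholder archive */
    pmLookupName(1, names, &pmid);

    /* start at the archive label timestamp, stepping 20 seconds per fetch;
     * delta for PM_MODE_INTERP is in milliseconds */
    pmGetArchiveLabel(&label);
    pmSetMode(PM_MODE_INTERP, &label.ll_start, 20000);

    for (i = 0; i < 5; i++) {
        if (pmFetch(1, &pmid, &rp) < 0)
            break;
        /* near the very start of the log there may be no sample on the
         * "early" side of the interpolation, so numval can be 0 here */
        printf("fetch %d: numval = %d\n", i, rp->vset[0]->numval);
        pmFreeResult(rp);
    }
    return 0;
}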
From owner-pcp@oss.sgi.com Thu Jun 8 16:45:19 2000 Received: by oss.sgi.com id ; Thu, 8 Jun 2000 16:44:59 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:33034 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Thu, 8 Jun 2000 16:44:52 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via SMTP id QAA00281 for ; Thu, 8 Jun 2000 16:49:46 -0700 (PDT) mail_from (nathans@wobbly.melbourne.sgi.com) Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA02632 for <@larry.melbourne.sgi.com:pcp@oss.sgi.com>; Fri, 9 Jun 2000 09:43:34 +1000 Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (980427.SGI.8.8.8/980728.SGI.AUTOCF) id JAA12399 for pcp@oss.sgi.com; Fri, 9 Jun 2000 09:43:33 +1000 (EST) From: "Nathan Scott" Message-Id: <10006090943.ZM12121@wobbly.melbourne.sgi.com> Date: Fri, 9 Jun 2000 09:43:32 -0500 In-Reply-To: Michal Kara "New PCPMON 1.2.95 - archive mode added" (Jun 9, 1:39am) References: <20000608152557.63227@arthur.plbohnice.cz> X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail) To: pcp@oss.sgi.com Subject: Re: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

hi,

On Jun 9, 1:39am, Michal Kara wrote:
> Subject: New PCPMON 1.2.95 - archive mode added
> ...
> New PCPMON version is available. I have just added archive support
> ...

good stuff.

> Please refer to the README for instructions on how to "turn on" the archive
> mode.
>
> There are two questions, if anyone has something to say about them:
>
> - I have thought of a "search" feature in the archive mode - you would enter
> a condition like "cpuUsage > 90" (supposing you have defined value cpuUsage and
> that it is in percents) and you would be able to jump to next/previous time
> when the condition was satisfied (with some threshold etc.). Do you think it
> would be useful?
>

yup, that would be useful. this sounds a lot like what pmie(1) does, so you may want to grab some ideas from that tool (pmie is a very complex & powerful tool).

> - Another feature for the archive mode is that you would be able to specify
> "time offset" for archive. For example, you would be able to make values
> recorded on May the second appear as if they have been recorded on May the
> first. Then you would be able to show two graphs on one screen - one for May
> the second and one for May the first to compare the values (of, say CPU load).
>

sounds a bit confusing the way you've said it ... wouldn't having two pcpmon windows side-by-side (running over the same archive, just at different offsets) allow one to make this sort of comparison? (without additional code in pcpmon?)

cheers.
-- Nathan

From owner-pcp@oss.sgi.com Thu Jun 8 17:41:20 2000 Received: by oss.sgi.com id ; Thu, 8 Jun 2000 17:41:10 -0700 Received: from deliverator.sgi.com ([204.94.214.10]:56670 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Thu, 8 Jun 2000 17:40:51 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via SMTP id RAA19378 for ; Thu, 8 Jun 2000 17:35:49 -0700 (PDT) mail_from (nathans@wobbly.melbourne.sgi.com) Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id KAA03048 for <@larry.melbourne.sgi.com:pcp@oss.sgi.com>; Fri, 9 Jun 2000 10:38:13 +1000 Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (980427.SGI.8.8.8/980728.SGI.AUTOCF) id KAA12745 for pcp@oss.sgi.com; Fri, 9 Jun 2000 10:38:11 +1000 (EST) From: "Nathan Scott" Message-Id: <10006091038.ZM12744@wobbly.melbourne.sgi.com> Date: Fri, 9 Jun 2000 10:38:09 -0500 In-Reply-To: Michal Kara "Archive interpolation mode question" (Jun 9, 1:41am) References: <20000608150505.23607@arthur.plbohnice.cz> X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail) To: pcp@oss.sgi.com Subject: Re: Archive interpolation mode question Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

hi,

On Jun 9, 1:41am, Michal Kara wrote:
> Subject: Archive interpolation mode question
> Hello!
>
> I have added archive mode to PCPMON. It works fine, but when I use the
> "interpolation" mode, PCP refuses to fetch first two (or three?) metrics

(i think you mean the first 2/3 values for metrics, not the first 2/3 metrics?)

> from the archive. I guess it is because the interpolation algorithm needs few
> previous values, I just want to be sure.
>

pfft - which bit of the code in libpcp/src/interp.c didn't you understand? (all of it? join the club!!) 8-)

this is truly complex stuff ... Ken may have it in him to offer the real truth on this stuff when he gets back from his trip (cos he wrote this code, as no one here will ever let him forget ;) but these are just some of the factors off the top of my head (i'm sure there's heaps more, like counter wrap) which affect archive mode, with interpolation mode switched on:

- the semantics of the metric (counter/discrete/instant) - counters need to be handled very differently cos we're rate converting them

- the "delta" argument to pmSetMode ... and where the values fall (timestamps) in the archive wrt where each delta ends as we step through the timespan covered by the archive. you can see how counters become complex here cos we need to pick two values just before the start and just before the end of the delta (i think!?!) in order to rate convert 'em, whereas for the other types we need to get the value closest to the end of the delta (hmm .., i'm not 100% on how this calculation is done, but conceptually it should be something like the above ... another scheme would be to average the values between the start and end of the deltas ... just not sure exactly what we do here)

- the "when" argument to pmSetMode ... whereabouts (timewise) in the archive we're starting (before the first value, just after it, right on it, or somewhere else entirely, for example)

- whether we're going forwards or backwards in the archive

- ...and probably a few other things too.
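The counter case mentioned in the list above reduces to simple arithmetic: a counter is reported as a rate over each delta, using one value from either side of the interval. A tiny sketch with made-up numbers, just to show the calculation rather than the actual interp.c implementation:

/* rate conversion of a counter between two bracketing samples */
#include <stdio.h>

int main(void)
{
    /* two raw counter samples (value, timestamp in seconds) that
     * bracket one reporting interval; the numbers are invented */
    double v1 = 123456.0, t1 = 10.0;
    double v2 = 123956.0, t2 = 30.0;

    double rate = (v2 - v1) / (t2 - t1);    /* units per second over the delta */

    printf("rate = %.2f units/second\n", rate);    /* 500 / 20 = 25.00 */
    return 0;
}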
> Thanks,
> Michal Kara
>
> P.S.: If it is as I think, it would be nice to leave a note in pmSetMode(3)
> manpage.

this could easily be the subject of a lengthy white paper, i'm sure.

cheers.
-- Nathan

From owner-pcp@oss.sgi.com Thu Jun 8 23:32:03 2000 Received: by oss.sgi.com id ; Thu, 8 Jun 2000 23:31:53 -0700 Received: from tah14.cesnet.cz ([194.108.115.182]:58897 "EHLO arthur.plbohnice.cz") by oss.sgi.com with ESMTP id ; Thu, 8 Jun 2000 23:31:35 -0700 Received: (from lemming@localhost) by arthur.plbohnice.cz (8.10.1/8.10.1) id e596VG331256; Fri, 9 Jun 2000 06:31:16 GMT Message-ID: <20000609083116.06482@arthur.plbohnice.cz> Date: Fri, 9 Jun 2000 08:31:16 +0200 From: Michal Kara To: pcp@oss.sgi.com Subject: Re: New PCPMON 1.2.95 - archive mode added References: <20000608152557.63227@arthur.plbohnice.cz> <10006090943.ZM12121@wobbly.melbourne.sgi.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailer: Mutt 0.88e In-Reply-To: <10006090943.ZM12121@wobbly.melbourne.sgi.com>; from Nathan Scott on Fri, Jun 09, 2000 at 09:43:32AM -0500 Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

> > - I have thought of a "search" feature in the archive mode - you would enter
> > a condition like "cpuUsage > 90" (supposing you have defined value cpuUsage and
> > that it is in percents) and you would be able to jump to next/previous time
> > when the condition was satisfied (with some threshold etc.). Do you think it
> > would be useful?
> >
>
> yup, that would be useful. this sounds a lot like what pmie(1) does,
> so you may want to grab some ideas from that tool (pmie is a very
> complex & powerful tool).
>

Yes, it would be partially like PMIE. I am quite clear on how it would work - you would enter a condition and the regions which satisfy the condition would be (somehow) highlighted. You would be able to jump to the next/previous region (probably with some time threshold).

> > - Another feature for the archive mode is that you would be able to specify
> > "time offset" for archive. For example, you would be able to make values
> > recorded on May the second appear as if they have been recorded on May the
> > first. Then you would be able to show two graphs on one screen - one for May
> > the second and one for May the first to compare the values (of, say CPU load).
> >
>
> sounds a bit confusing the way you've said it ... wouldn't having two
> pcpmon windows side-by-side (running over the same archive, just at
> different offsets) allow one to make this sort of comparison?
> (without additional code in pcpmon?)
>

Yes, it would allow you the comparison. The question is how useful it would be to have these graphs in one window instead of two windows. This feature would not require too much coding, so if it helps I am ready to include it.

Thanks for the feedback

Michal Kara

From owner-pcp@oss.sgi.com Mon Jun 12 15:13:00 2000 Received: by oss.sgi.com id ; Mon, 12 Jun 2000 15:12:50 -0700 Received: from [195.76.64.14] ([195.76.64.14]:34570 "EHLO rosalia.crtvg.es") by oss.sgi.com with ESMTP id ; Mon, 12 Jun 2000 15:12:35 -0700 Received: from 1wzvMgmaJ (unverified [216.214.106.136]) by rosalia.crtvg.es (Rockliffe SMTPRA 3.3.0) with SMTP id ; Tue, 13 Jun 2000 01:12:29 +0100 DATE: 12 Jun 00 4:13:48 PM FROM: 2X66t75OD@public1.bta.net.cn Message-ID: <0Q8jh09SzsW0ly36Wqt> TO: gfhrt3465436EHRE@oss.sgi.com SUBJECT: DST-Pocket MirrorDrive, as requested. Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

THE PMD IS FINALLY HERE !!
http://www.dst3.com/email.htm

From owner-pcp@oss.sgi.com Thu Jun 15 09:13:19 2000 Received: by oss.sgi.com id ; Thu, 15 Jun 2000 09:13:09 -0700 Received: from cpk-mail-relay1.bbnplanet.com ([192.239.16.198]:1444 "HELO vienna1-mail-relay1.bbnplanet.com") by oss.sgi.com with SMTP id ; Thu, 15 Jun 2000 09:12:59 -0700 Received: from atlantic3-cp.atlanticos.com (atlantic3-cp.atlanticos.com [199.120.242.66]) by vienna1-mail-relay1.bbnplanet.com (Postfix) with SMTP id EAFDB50508 for ; Thu, 15 Jun 2000 16:12:53 +0000 (GMT) Received: by AtlanticMutual.com(Lotus SMTP MTA v4.6.6 (890.1 7-16-1999)) id 852568FF.005901A1 ; Thu, 15 Jun 2000 12:12:11 -0400 X-Lotus-FromDomain: ATLANTIC COMPANIES From: Cameron_C_Caffee@AtlanticMutual.com To: pcp@oss.sgi.com Message-ID: <852568FF.00590029.00@AtlanticMutual.com> Date: Thu, 15 Jun 2000 12:12:49 -0400 Subject: Re: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-type: text/plain; charset=us-ascii Content-Disposition: inline Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

Location: Roanoke Department: Network Support

Making an attempt to utilize the archive mode ....

/var/log/pcp/pmlogger/swampy.atlanticos.com contains :
20000605.10.41.0
20000605.10.41.index
20000605.10.41.meta

Commands tried :

pcpmon -a localhost=/var/log/pcp/pmlogger/swampy.atlanticos.com/20000605.10.41 cpu.cfg
produces error message : Cannot lookup metric 'localhost:kernel.all.cpu.idle' ... (good date range)

pcpmon -a swampy=/var/log/pcp/pmlogger/swampy.atlanticos.com/20000605.10.41 cpu.cfg
produces error message : Alias for 'localhost' not defined (archive mode) (bogus date range)

What am I missing here ? Thanks !

Cameron
RH 6.2 pcp 2.1.7 pcpmon 1.2.95

From owner-pcp@oss.sgi.com Thu Jun 15 09:51:09 2000 Received: by oss.sgi.com id ; Thu, 15 Jun 2000 09:51:00 -0700 Received: from roadrunner.neo.lrun.com ([204.210.223.8]:421 "EHLO roadrunner.neo.lrun.com") by oss.sgi.com with ESMTP id ; Thu, 15 Jun 2000 09:50:48 -0700 Received: from silverfields.com ([24.93.252.114]) by roadrunner.neo.lrun.com (Post.Office MTA v3.5.3 release 223 ID# 0-53939U80000L80000S0V35) with ESMTP id com for ; Thu, 15 Jun 2000 12:45:03 -0400 Message-ID: <39490804.5844DDBF@silverfields.com> Date: Thu, 15 Jun 2000 12:44:52 -0400 From: Timothy Reaves Organization: Silverfields X-Mailer: Mozilla 4.72 [en] (X11; U; Linux 2.2.14 i686) X-Accept-Language: en MIME-Version: 1.0 To: "pcp@oss.sgi.com" Subject: archive Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

I'm trying to figure out how to use the archive mode. I would like to gather statistics over a week long period, then view them. Unfortunately, I can't seem to figure out how to do this. Could someone help me out? Point me to the correct man page or such? Thanks.
From owner-pcp@oss.sgi.com Thu Jun 15 18:19:36 2000 Received: by oss.sgi.com id ; Thu, 15 Jun 2000 18:19:26 -0700 Received: from deliverator.sgi.com ([204.94.214.10]:57710 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Thu, 15 Jun 2000 18:19:12 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via SMTP id SAA10945 for ; Thu, 15 Jun 2000 18:14:13 -0700 (PDT) mail_from (markgw@sgi.com) Received: from sandpit.melbourne.sgi.com (sandpit.melbourne.sgi.com [134.14.55.132]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id LAA18967; Fri, 16 Jun 2000 11:17:51 +1000 Date: Fri, 16 Jun 2000 11:17:50 +1000 (EST) From: Mark Goodwin X-Sender: markgw@sandpit.melbourne.sgi.com To: pcp@oss.sgi.com cc: sgi.engr.pcp@engr.sgi.com, linux-perf@www-klinik.uni-mainz.de, beowulf@beowulf.gsfc.nasa.gov Subject: [ANNOUNCE] PCP-2.1.7-2 now available Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

SGI is pleased to announce that the new version of Performance Co-Pilot (PCP) open source (version 2.1.7-2) is now available for download from http://oss.sgi.com/projects/pcp/download

There are binary RPMS for ia32 and ia64, the source RPM and tar.gz files. The source is also known to build and work for Linux-ppc and Linux-alpha.

The PCP homepage is at http://oss.sgi.com/projects/pcp and you can join the PCP mailing list via http://oss.sgi.com/projects/pcp/mail.html

Changes since the last public release (2.1.4) include :- Adjustments to tolerate SuSE's location of the magic file (different than Redhat's) and the lack of chkconfig on SuSE, migration of all __clone use to pthreads to improve portability (especially to IA64), support for RAID disk stats and devfs-style SCSI disk names, new XFS metrics extracted from /proc/fs/xfs/stat, NFS (version 3) metrics, use of -Wall in CFLAGS, and numerous bug fixes.

To use the new XFS metrics, obviously you need a kernel that supports XFS - see http://oss.sgi.com/projects/xfs/cvs_download.html or join the XFS mailing list via http://oss.sgi.com/projects/xfs/mail.html

In addition, there is a new PCP monitoring tool available, "PCPMON", from Michal Kara http://freshmeat.net/appindex/2000/05/15/958381663.html and a new PCP agent for MySQL databases, also from Michal.

SGI would be delighted to hear from anyone wanting to contribute to the PCP project (especially new monitoring tools), and will provide technical assistance getting your project off the ground.
thanks
-- Mark Goodwin, SGI Engineering

From owner-pcp@oss.sgi.com Thu Jun 15 23:08:39 2000 Received: by oss.sgi.com id ; Thu, 15 Jun 2000 23:08:19 -0700 Received: from tah14.cesnet.cz ([194.108.115.182]:33293 "EHLO arthur.plbohnice.cz") by oss.sgi.com with ESMTP id ; Thu, 15 Jun 2000 23:08:16 -0700 Received: (from lemming@localhost) by arthur.plbohnice.cz (8.10.1/8.10.1) id e5G673r02532; Fri, 16 Jun 2000 06:07:03 GMT Message-ID: <20000616080655.56902@arthur.plbohnice.cz> Date: Fri, 16 Jun 2000 08:06:55 +0200 From: Michal Kara To: pcp@oss.sgi.com Subject: Re: New PCPMON 1.2.95 - archive mode added References: <852568FF.00590029.00@AtlanticMutual.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailer: Mutt 0.88e In-Reply-To: <852568FF.00590029.00@AtlanticMutual.com>; from Cameron_C_Caffee@AtlanticMutual.com on Thu, Jun 15, 2000 at 12:12:49PM -0400 Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

> pcpmon -a localhost=/var/log/pcp/pmlogger/swampy.atlanticos.com/20000605.10.41
> cpu.cfg
>
> produces error message : Cannot lookup metric 'localhost:kernel.all.cpu.idle'
> ... (good date range)
>

Is that metric in the archive? Try

pminfo -a /var/log/pcp/pmlogger/swampy.atlanticos.com/20000605.10.41 kernel.all.cpu.idle

> pcpmon -a swampy=/var/log/pcp/pmlogger/swampy.atlanticos.com/20000605.10.41
> cpu.cfg
>
> produces error message : Alias for 'localhost' not defined (archive mode)
> (bogus date range)

You have defined that you want some metrics (kernel.all.cpu.idle) from localhost, but you didn't define in which archive the stored metrics for localhost could be found. The first command defined an alias for localhost, so it got further :)

So it seems you only need to instruct PCP to store kernel.all.cpu.idle into the archive and it will be OK; the invocation was OK. Please write when you need further help.

Michal Kara

From owner-pcp@oss.sgi.com Fri Jun 16 05:01:10 2000 Received: by oss.sgi.com id ; Fri, 16 Jun 2000 05:01:00 -0700 Received: from cambridge1-smrly3.gtei.net ([199.94.215.250]:33783 "HELO cambridge1-smrly3.gtei.net") by oss.sgi.com with SMTP id ; Fri, 16 Jun 2000 05:00:40 -0700 Received: from atlantic3-cp.atlanticos.com (atlantic3-cp.atlanticos.com [199.120.242.66]) by cambridge1-smrly3.gtei.net (Postfix) with SMTP id 151E844F2 for ; Fri, 16 Jun 2000 12:00:39 +0000 (GMT) Received: by AtlanticMutual.com(Lotus SMTP MTA v4.6.6 (890.1 7-16-1999)) id 85256900.0041E87C ; Fri, 16 Jun 2000 07:59:53 -0400 X-Lotus-FromDomain: ATLANTIC COMPANIES From: Cameron_C_Caffee@AtlanticMutual.com To: pcp@oss.sgi.com Message-ID: <85256900.0041E5E1.00@AtlanticMutual.com> Date: Fri, 16 Jun 2000 07:58:39 -0400 Subject: Re: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-type: text/plain; charset=us-ascii Content-Disposition: inline Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

> Is that metric in the archive?

Ah ha ! - (I'm showing off my "newbie" status in re pcp) one needs to configure pmlogger as to which metrics to log !
For the benefit of others who haven't found the params, on RH :

/var/pcp/config/pmlogger/control       # reference your config in place of config.default
/var/pcp/config/pmlogger/config.mine   # configure desired metrics here - ref output of pminfo
/etc/rc.d/init.d/pcp stop              # restart pcp to take effect
/etc/rc.d/init.d/pcp start

> You have defined that you want some metrics (kernel.all.cpu.idle) from localhost,
> but you didn't define in which archive the stored metrics for localhost could be found.

I gather when pcp logs data from the host it's running on, those metrics are stored as node "localhost". Any way to offer pcp data from localhost as its node name (e.g. "swampy") ?

Thanks for the assist!

Cameron

From owner-pcp@oss.sgi.com Fri Jun 16 05:17:00 2000 Received: by oss.sgi.com id ; Fri, 16 Jun 2000 05:16:50 -0700 Received: from cpk-mail-relay1.bbnplanet.com ([192.239.16.198]:11763 "HELO vienna1-mail-relay1.bbnplanet.com") by oss.sgi.com with SMTP id ; Fri, 16 Jun 2000 05:16:27 -0700 Received: from atlantic3-cp.atlanticos.com (atlantic3-cp.atlanticos.com [199.120.242.66]) by vienna1-mail-relay1.bbnplanet.com (Postfix) with SMTP id DB1BF503B9 for ; Fri, 16 Jun 2000 12:16:08 +0000 (GMT) Received: by AtlanticMutual.com(Lotus SMTP MTA v4.6.6 (890.1 7-16-1999)) id 85256900.0043532E ; Fri, 16 Jun 2000 08:15:22 -0400 X-Lotus-FromDomain: ATLANTIC COMPANIES From: Cameron_C_Caffee@AtlanticMutual.com To: pcp@oss.sgi.com Message-ID: <85256900.00435218.00@AtlanticMutual.com> Date: Fri, 16 Jun 2000 08:15:01 -0400 Subject: Re: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-type: text/plain; charset=us-ascii Content-Disposition: inline Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

Location: Roanoke Department: Network Support

I'm really excited to see an archive feature in PCPMON !

When I have done performance analysis and capacity planning for other operating systems (OpenVMS, MVS), the typical use for archived data is to develop trend analysis graphs. I would typically be looking at the utilization trends of key system resources (cpu, memory, disk i/o, network i/o) over selected periods of time (daily, weekly, monthly). I would also focus on those hours of the day that are important to my user community ("prime shift"). Products I have used supported a variable sampling interval and a graphing capability that allowed me to specify the date range and hours per day to present.

Can PCPMON produce a graphical representation of selected hours from multiple days in its present form (e.g. average cpu utilization this week between the hours of 8 am and 5 pm) ?

Is it a direction of PCPMON to provide a similar capability, or should I be looking at applying generic plotting tools to the raw archives ?

Is a hard copy option for the graphs displayed planned ?

Many thanks for the effort in developing PCPMON thus far. It's a great real-time tool as-is !
Cameron

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I Cameron Caffee, Sr Systems Manager          Atlantic Mutual Companies    I
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I Internet : Cameron_C_Caffee@AtlanticMutual.Com  I Snail Mail :           I
I                                                 +++++++++++++++++++++++++I
I FAX   : (540) 772-4198                          I 1325 Electric Road, SW I
I Voice : (540) 772-4071                          I Roanoke, VA 24018      I
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

From owner-pcp@oss.sgi.com Fri Jun 16 05:39:09 2000 Received: by oss.sgi.com id ; Fri, 16 Jun 2000 05:39:00 -0700 Received: from tah14.cesnet.cz ([194.108.115.182]:49672 "EHLO arthur.plbohnice.cz") by oss.sgi.com with ESMTP id ; Fri, 16 Jun 2000 05:38:46 -0700 Received: (from lemming@localhost) by arthur.plbohnice.cz (8.10.1/8.10.1) id e5GCcVJ27282; Fri, 16 Jun 2000 12:38:31 GMT Message-ID: <20000616143831.39757@arthur.plbohnice.cz> Date: Fri, 16 Jun 2000 14:38:31 +0200 From: Michal Kara To: pcp@oss.sgi.com Subject: Re: New PCPMON 1.2.95 - archive mode added References: <85256900.00435218.00@AtlanticMutual.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailer: Mutt 0.88e In-Reply-To: <85256900.00435218.00@AtlanticMutual.com>; from Cameron_C_Caffee@AtlanticMutual.com on Fri, Jun 16, 2000 at 08:15:01AM -0400 Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

> I'm really excited to see an archive feature in PCPMON !

:)

> Can PCPMON produce a graphical representation of selected hours from multiple
> days in its present form (e.g. average cpu utilization this week between the
> hours of 8 am and 5 pm) ?
>

In the announcement I asked whether to add the possibility of applying a time offset to the archive, so that data from multiple days (e.g.) can be in the graph at the same time point. From your question I see it is necessary :)

When I add it, you will be able to run:

pcpmon -a day1=... -a day2=... -s -1d -a day3=... -s -2d

Then, if you create yourself a config which contains the expression:

(day1:kernel.all.cpu.load+day2:kernel.all.cpu.load+day3:kernel.all.cpu.load)/3

this would print the average CPU load from the three specified days.

> Is it a direction of PCPMON to provide a similar capability or should I be
> looking at applying generic plotting tools to the raw archives ?
>
> Is a hard copy option for the graphs displayed planned ?

It is currently possible to grab the window (e.g., with xv) and then print it (hard-copy). Maybe I will add the possibility to export the graph data, either as text-format values or maybe even as PostScript.

What I am doing right now is adding the possibility of entering an expression and highlighting the specified parts of the graph (i.e., when CPU load was >10 etc.).
Michal Kara

From owner-pcp@oss.sgi.com Fri Jun 16 06:09:00 2000 Received: by oss.sgi.com id ; Fri, 16 Jun 2000 06:08:50 -0700 Received: from cpk-mail-relay1.bbnplanet.com ([192.239.16.198]:1440 "HELO vienna1-mail-relay1.bbnplanet.com") by oss.sgi.com with SMTP id ; Fri, 16 Jun 2000 06:08:26 -0700 Received: from atlantic3-cp.atlanticos.com (atlantic3-cp.atlanticos.com [199.120.242.66]) by vienna1-mail-relay1.bbnplanet.com (Postfix) with SMTP id 0800550DE1 for ; Fri, 16 Jun 2000 13:08:21 +0000 (GMT) Received: by AtlanticMutual.com(Lotus SMTP MTA v4.6.6 (890.1 7-16-1999)) id 85256900.00481C75 ; Fri, 16 Jun 2000 09:07:38 -0400 X-Lotus-FromDomain: ATLANTIC COMPANIES From: Cameron_C_Caffee@AtlanticMutual.com To: pcp@oss.sgi.com Message-ID: <85256900.00481AD5.00@AtlanticMutual.com> Date: Fri, 16 Jun 2000 09:06:48 -0400 Subject: Re: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-type: text/plain; charset=us-ascii Content-Disposition: inline Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

Location: Roanoke Department: Network Support

>When I add it, you will be able to run:
>pcpmon -a day1=... -a day2=... -s -1d -a day3=... -s -2d

Consider a syntax that will support wildcard expressions for the archive files (e.g. /var/log/pcp/pmlogger/{node}/2000*) and optionally several directories to search. Given that the archives contain date/time stamp info, one could envision a PCPMON syntax that allows the user to specify a date/time range (e.g. -beg 2000/06/01:09:00 -end 2000/06/30:17:00 -day 08:00-17:00). Perhaps the ability to handle compressed formats (gzip, compress) would be helpful since pmlogger_daily does this.

>Maybe I will add possibility to export the graph data, either
>as text-format values or maybe even as PostScript.

Consider the following forms of export :
(1) comma-delimited text file of the data points used to produce the graph
(2) .jpg graphic for posting to a web site of performance data
(3) postscript for hardcopy printing

Concurrent with export of the graphing results, consider a "batch-mode" of operation where graphs can be produced and their results exported without interactive use of PCPMON. This would support periodic trend analysis via scheduled jobs.

Cameron

From owner-pcp@oss.sgi.com Fri Jun 16 10:57:34 2000 Received: by oss.sgi.com id ; Fri, 16 Jun 2000 10:57:24 -0700 Received: from roadrunner.neo.lrun.com ([204.210.223.8]:33935 "EHLO roadrunner.neo.lrun.com") by oss.sgi.com with ESMTP id ; Fri, 16 Jun 2000 10:57:06 -0700 Received: from silverfields.com ([24.93.252.114]) by roadrunner.neo.lrun.com (Post.Office MTA v3.5.3 release 223 ID# 0-53939U80000L80000S0V35) with ESMTP id com for ; Fri, 16 Jun 2000 13:56:58 -0400 Message-ID: <394A6A5F.82EB7827@silverfields.com> Date: Fri, 16 Jun 2000 13:56:48 -0400 From: Timothy Reaves Organization: Silverfields X-Mailer: Mozilla 4.72 [en] (X11; U; Linux 2.2.14 i686) X-Accept-Language: en MIME-Version: 1.0 To: "pcp@oss.sgi.com" Subject: pmie init file Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing

What is this file for?
From owner-pcp@oss.sgi.com Fri Jun 16 16:21:17 2000 Received: by oss.sgi.com id ; Fri, 16 Jun 2000 16:21:07 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:36141 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Fri, 16 Jun 2000 16:20:46 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via SMTP id QAA07493 for ; Fri, 16 Jun 2000 16:25:50 -0700 (PDT) mail_from (nathans@wobbly.melbourne.sgi.com) Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA25750 for <@larry.melbourne.sgi.com:pcp@oss.sgi.com>; Sat, 17 Jun 2000 09:19:28 +1000 Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (980427.SGI.8.8.8/980728.SGI.AUTOCF) id JAA02862 for pcp@oss.sgi.com; Sat, 17 Jun 2000 09:19:28 +1000 (EST) From: "Nathan Scott" Message-Id: <10006170919.ZM2857@wobbly.melbourne.sgi.com> Date: Sat, 17 Jun 2000 09:19:26 -0500 In-Reply-To: Cameron_C_Caffee@atlanticmutual.com "Re: New PCPMON 1.2.95 - archive mode added" (Jun 16, 10:01pm) References: <85256900.0041E5E1.00@AtlanticMutual.com> X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail) To: pcp@oss.sgi.com Subject: Re: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing hi, On Jun 16, 10:01pm, Cameron_C_Caffee@atlanticmutual.com wrote: > Subject: Re: New PCPMON 1.2.95 - archive mode added > ... > > on RH : > this'll be the same for all distro's. > /var/pcp/config/pmlogger/control # reference your config in place > of config.default > > /var/pcp/config/pmlogger/config.mine # configure desired metrics here - > ref output of pminfo > > /etc/rc.d/init.d/pcp stop # restart pcp to take effect > /etc/rc.d/init.d/pcp start > > > > You have defined that you want some metrics (kernel.all.cpu.idle) from > localhost, > >but you didn't defined in which archive the stored metrics for localhost could > be found. > > I gather when pcp logs data from the host its running on, those metrics are > stored as node "localhost". > Any way to offer pcp data from localhost as its node name (e.g. "swampy") ? > if you use the LOCALHOSTNAME syntax in your customised /var/pcp/config/pmlogger/control file, then pmlogger_check uses this little bit of shell to figure out what that hostname equates to: # determine real name for localhost _lhnm=`which hostname 1>/dev/null && hostname` LOCALHOSTNAME=${_lhnm:-localhost} so, you can either: - use "swampy" explicitly in your pmlogger control file; or - figure out why the hostname command on the "_lhnm=" line above isn't giving the name you're expecting cheers. 
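For illustration, a minimal sketch of those two options ("swampy" is just the example name from this thread, and the paths follow the default Linux layout):

    # what will LOCALHOSTNAME be replaced with on this system?
    $ which hostname 1>/dev/null && hostname
    swampy

    # or spell the name out in /var/pcp/config/pmlogger/control,
    # mirroring the existing LOCALHOSTNAME line:
    swampy  y  n  /var/log/pcp/pmlogger/swampy  -c config.default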
-- Nathan From owner-pcp@oss.sgi.com Fri Jun 16 16:39:17 2000 Received: by oss.sgi.com id ; Fri, 16 Jun 2000 16:39:07 -0700 Received: from deliverator.sgi.com ([204.94.214.10]:13137 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Fri, 16 Jun 2000 16:39:00 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via SMTP id QAA25984 for ; Fri, 16 Jun 2000 16:34:02 -0700 (PDT) mail_from (nathans@wobbly.melbourne.sgi.com) Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA25810 for <@larry.melbourne.sgi.com:pcp@oss.sgi.com>; Sat, 17 Jun 2000 09:36:27 +1000 Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (980427.SGI.8.8.8/980728.SGI.AUTOCF) id JAA02592 for pcp@oss.sgi.com; Sat, 17 Jun 2000 09:36:26 +1000 (EST) From: "Nathan Scott" Message-Id: <10006170936.ZM2881@wobbly.melbourne.sgi.com> Date: Sat, 17 Jun 2000 09:36:25 -0500 In-Reply-To: Michal Kara "Re: New PCPMON 1.2.95 - archive mode added" (Jun 16, 10:39pm) References: <85256900.00435218.00@AtlanticMutual.com> <20000616143831.39757@arthur.plbohnice.cz> X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail) To: pcp@oss.sgi.com Subject: Re: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing Hi Michal, On Jun 16, 10:39pm, Michal Kara wrote: > Subject: Re: New PCPMON 1.2.95 - archive mode added > > ... > > Can PCPMON produce a graphical representation of selected hours from multiple > > days in its present form (e.g. average cpu utilization this week between the > > hours of 8 am and 5 pm) ? > > > > In the announcement I have asked whether to add possibility of applying time > offset to the archive so that data from multiple days (e.g.) can be in the graph > on the same time point. From your question I see it is necessary :) > > When I add it, you will be able to run: > pcpmon -a day1=... -a day2=... -s -1d -a day3=... -s -2d > fyi - there is some code in libpcp for parsing time windows ... see pmParseTimeWindow(3). most of the tools in the base pcp rpm will use this routine when parsing their -S and -T options, as described in the PCPIntro(1) man page in the `TIME WINDOW SPECIFICATION' section. it might save you some coding effort here. cheers. 
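For anyone who hasn't met those options yet, a minimal sketch of the command-line form the same parsing supports (the archive name here is hypothetical):

    # replay just the 9am-5pm window from an archive
    $ pmval -a 20000616 -S '@ Fri Jun 16 09:00:00 2000' \
            -T '@ Fri Jun 16 17:00:00 2000' kernel.all.load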
-- Nathan From owner-pcp@oss.sgi.com Sun Jun 18 16:24:59 2000 Received: by oss.sgi.com id ; Sun, 18 Jun 2000 16:24:49 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:8821 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Sun, 18 Jun 2000 16:24:30 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via SMTP id QAA01209 for ; Sun, 18 Jun 2000 16:29:37 -0700 (PDT) mail_from (nathans@wobbly.melbourne.sgi.com) Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id JAA05302; Mon, 19 Jun 2000 09:23:14 +1000 Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (980427.SGI.8.8.8/980728.SGI.AUTOCF) id JAA05341; Mon, 19 Jun 2000 09:23:09 +1000 (EST) From: "Nathan Scott" Message-Id: <10006190923.ZM5324@wobbly.melbourne.sgi.com> Date: Mon, 19 Jun 2000 09:23:08 -0500 In-Reply-To: Timothy Reaves "archive" (Jun 16, 2:51am) References: <39490804.5844DDBF@silverfields.com> X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail) To: Timothy Reaves Subject: Re: archive Cc: "pcp@oss.sgi.com" Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing On Jun 16, 2:51am, Timothy Reaves wrote: > Subject: archive > I'm trying to figure out how to use the archive mode. I would like > to gather statistics over a week long period, then view them. > Unfortunately, I can't seem to figure out how to do this. > > Could someone help me out? Point me to the correct man page or > such? use pmlogger(1) to create archives (takes a configuration file and a slew of options re when to stop, default logging interval, etc.) then use the -a option to whichever tool you wish to view the data with (see the man pages for the individual tools). cheers. -- Nathan From owner-pcp@oss.sgi.com Mon Jun 19 06:16:12 2000 Received: by oss.sgi.com id ; Mon, 19 Jun 2000 06:16:02 -0700 Received: from thebrain.fnal.gov ([131.225.80.75]:24842 "EHLO thebrain.fnal.gov") by oss.sgi.com with ESMTP id ; Mon, 19 Jun 2000 06:15:46 -0700 Received: from fnal.gov (localhost.localdomain [127.0.0.1]) by thebrain.fnal.gov (8.9.3/8.9.3) with ESMTP id IAA12381; Mon, 19 Jun 2000 08:15:47 -0500 Message-ID: <394E1D03.9D388A31@fnal.gov> Date: Mon, 19 Jun 2000 08:15:47 -0500 From: Troy Dawson X-Mailer: Mozilla 4.72 [en] (X11; U; Linux 2.2.14-5.0smp i686) X-Accept-Language: en MIME-Version: 1.0 To: PCP Mailing List Subject: pmlogger for multiple remote machines Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing Hello, I'm trying to set up pmlogger to work so that I have one central logging machine for a couple hundred remote machines all using pcp. I've looked through all the man pages and the online documentation. It explains how to do it, the only problem is that when it gives any examples, it only gives it for one remote machine, which is a bit frustrating, because going from one to two, there could be alot of different things to change. Basically my problem is this, all I'm getting logged is my primary machine, and not the remote machines. I can get access to the remote machines through other means (pminfo) but the logger doesn't seem to want to. 
This is what my files look like ---------------------------------- /var/pcp/config/pmlogger/control ---------------------------------- *comments removed* fncdf1 n y /export/farms/pcplogger/cdf/fncdf1 -c /var/pcp/config/pmlogger/config.farmworker fncdf2 n y /export/farms/pcplogger/cdf/fncdf2 -c /var/pcp/config/pmlogger/config.farmworker ...*bunches more*... fncdf47 n n /export/farms/pcplogger/cdf/fncdf47 -c /var/pcp/config/pmlogger/config.farmworker fncdf48 n n /export/farms/pcplogger/cdf/fncdf48 -c /var/pcp/config/pmlogger/config.farmworker ...*bunches more*... LOCALHOSTNAME y n /var/log/pcp/pmlogger/LOCALHOSTNAME -c config.default --------------------------------------- /var/pcp/config/pmlogger/config.default --------------------------------------- *comments removed* log mandatory on once { hinv.ncpu hinv.ndisk } log mandatory on 1 hour { kernel.all.load [ "15 minute" ] filesys.full } [access] disallow * : all except enquire; allow localhost : mandatory, advisory; --------------------------------------- /var/pcp/config/pmlogger/config.farmworker --------------------------------------- *comments removed* log mandatory on once { kernel.uname hinv mem.physmem pmcd.numagents pmcd.numclients pmcd.version pmcd.agent pmcd.pmlogger } log mandatory on 1 hour { kernel.all.load [ "15 minute" ] kernel.all.cpu proc.nprocs filesys.full } [access] disallow * : all; allow localhost : mandatory, advisory; ----------------------------------------------------- Does anyone see what I'm doing wrong. I've tried doing both n and y for the secondary logger. The logger does create the directories (/export/farms/pmlogger/cdf/fncdfxx) but it doesn't every put anything in them. Thanks Troy -- __________________________________________________ Troy Dawson dawson@fnal.gov (630)840-6468 Fermilab ComputingDivision/OSS CSS Group __________________________________________________ From owner-pcp@oss.sgi.com Mon Jun 19 12:11:35 2000 Received: by oss.sgi.com id ; Mon, 19 Jun 2000 12:11:25 -0700 Received: from deliverator.sgi.com ([204.94.214.10]:15139 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Mon, 19 Jun 2000 12:11:09 -0700 Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via ESMTP id MAA01019 for ; Mon, 19 Jun 2000 12:06:11 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id FAA21778; Tue, 20 Jun 2000 05:08:38 +1000 (EST) X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs Date: Tue, 20 Jun 2000 05:08:38 +1000 From: Ken McDonell To: Troy Dawson cc: PCP Mailing List Subject: Re: pmlogger for multiple remote machines In-Reply-To: <394E1D03.9D388A31@fnal.gov> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing On Mon, 19 Jun 2000, Troy Dawson wrote: > Hello, > I'm trying to set up pmlogger to work so that I have one central logging > machine for a couple hundred remote machines all using pcp. ... Most excellent. This is one of the pmlogger deployment options that I personally favour. > ... I've looked > through all the man pages and the online documentation. 
It explains how to do > it, the only problem is that when it gives any examples, it only gives it for > one remote machine, which is a bit frustrating, because going from one to two, > there could be alot of different things to change. Good point. I'll add this to the TODO list for the next rev of the PCP User and Admin Guide. For those that have not found this document yet, try this long URL ... http://techpubs.sgi.com/library/dynaweb_bin/ebt-bin/0620/nph-infosrch.cgi/infosrchtpl/SGI_Admin/PCP_UAG/@InfoSearch__BookTextView/11377?DwebQuery=PCP The changes will be small, though. At the end of 7.3.2 add this To create archive logs on the local host for performance metrics collected from multiple remote hosts, repeat steps 1. to 5. above for each host. > Basically my problem is this, all I'm getting logged is my primary machine, > and not the remote machines. I can get access to the remote machines through > other means (pminfo) but the logger doesn't seem to want to. > > This is what my files look like > ---------------------------------- > /var/pcp/config/pmlogger/control > ---------------------------------- > *comments removed* > fncdf1 n y /export/farms/pcplogger/cdf/fncdf1 -c > /var/pcp/config/pmlogger/config.farmworker > fncdf2 n y /export/farms/pcplogger/cdf/fncdf2 -c > /var/pcp/config/pmlogger/config.farmworker > ...*bunches more*... > fncdf47 n n /export/farms/pcplogger/cdf/fncdf47 -c > /var/pcp/config/pmlogger/config.farmworker > fncdf48 n n /export/farms/pcplogger/cdf/fncdf48 -c > /var/pcp/config/pmlogger/config.farmworker > ...*bunches more*... > LOCALHOSTNAME y n /var/log/pcp/pmlogger/LOCALHOSTNAME -c > config.default Do you need pmsocks for fncdf1 and fncdf2? Otherwise this looks OK to me. > --------------------------------------- > /var/pcp/config/pmlogger/config.default > ... > /var/pcp/config/pmlogger/config.farmworker These both seems OK. > Does anyone see what I'm doing wrong. I've tried doing > both n and y for the secondary logger. The logger does > create the directories (/export/farms/pmlogger/cdf/fncdfxx) > but it doesn't every put anything in them. You definitely need "n" in the second field of the control file for each remote host. 
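For reference, the fields in each control file line are host, primary-logger flag, pmsocks flag, directory for the archives, and then the remaining pmlogger arguments, so a corrected remote-host entry looks like this (paths as in Troy's setup):

    #host    primary  pmsocks  directory                            args
    fncdf1   n        n        /export/farms/pcplogger/cdf/fncdf1   -c /var/pcp/config/pmlogger/config.farmworker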
With everything setup, can you please send me the output from running the commands # pmlogger_check -V as root, and then $ pminfo -f pmcd.pmlogger $ ls -laR /export/farms/pcplogger/cdf From owner-pcp@oss.sgi.com Mon Jun 19 13:23:05 2000 Received: by oss.sgi.com id ; Mon, 19 Jun 2000 13:22:46 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:61801 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Mon, 19 Jun 2000 13:22:36 -0700 Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via ESMTP id NAA01327 for ; Mon, 19 Jun 2000 13:27:43 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id GAA23271; Tue, 20 Jun 2000 06:18:48 +1000 (EST) X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs Date: Tue, 20 Jun 2000 06:18:48 +1000 From: Ken McDonell To: Timothy Reaves cc: "pcp@oss.sgi.com" Subject: Re: archive In-Reply-To: <39490804.5844DDBF@silverfields.com> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing I think Nathan answered this one already, but you should use pmlogger to create the archive. See pmlogger(1), or chapter 7 of the PCP User and Admin guide from the techpubs.sgi.com website. Also, take a look in /var/pcp/config/pmlogger for some sample pmlogger configuration files. If you're looking to create logs over a long period, consider setting up a managed pmlogger instance, see pmlogger_daily(1), and then merging the logs from several days together with pmlogextract(1). On Thu, 15 Jun 2000, Timothy Reaves wrote: > I'm trying to figure out how to use the archive mode. I would like > to gather statistics over a week long period, then view them. > Unfortunately, I can't seem to figure out how to do this. > > Could someone help me out? Point me to the correct man page or > such? > > Thanks. > From owner-pcp@oss.sgi.com Mon Jun 19 13:27:35 2000 Received: by oss.sgi.com id ; Mon, 19 Jun 2000 13:27:16 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:4203 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Mon, 19 Jun 2000 13:27:09 -0700 Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via ESMTP id NAA03527 for ; Mon, 19 Jun 2000 13:32:17 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id GAA23400; Tue, 20 Jun 2000 06:23:23 +1000 (EST) X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs Date: Tue, 20 Jun 2000 06:23:23 +1000 From: Ken McDonell To: Timothy Reaves cc: "pcp@oss.sgi.com" Subject: Re: pmie init file In-Reply-To: <394A6A5F.82EB7827@silverfields.com> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing On Fri, 16 Jun 2000, Timothy Reaves wrote: > What is this file for? I assume you're referring to /etc/rc.d/init.d/pmie on a RH-style installation. This script basically launches pmie_check(1) at system reboot. pmie_check(1) manages a collection of zero or more pmie(1) instances to perform automated performance monitoring. 
To get this working you'll need to choose/make some inference rules for pmie, and then tell pmie_check about them via /var/pcp/config/pmie/control. From owner-pcp@oss.sgi.com Mon Jun 19 13:34:06 2000 Received: by oss.sgi.com id ; Mon, 19 Jun 2000 13:33:46 -0700 Received: from heffalump.fnal.gov ([131.225.9.20]:57575 "EHLO fnal.gov") by oss.sgi.com with ESMTP id ; Mon, 19 Jun 2000 13:33:43 -0700 Received: from thebrain.fnal.gov ([131.225.80.75]) by smtp.fnal.gov (PMDF V6.0-24 #44770) with ESMTP id <0FWF00A6X53MPO@smtp.fnal.gov> for pcp@oss.sgi.com; Mon, 19 Jun 2000 15:33:22 -0500 (CDT) Received: from fnal.gov (localhost.localdomain [127.0.0.1]) by thebrain.fnal.gov (8.10.2/8.10.2) with ESMTP id e5JKXLW13463; Mon, 19 Jun 2000 15:33:21 -0500 Date: Mon, 19 Jun 2000 15:33:21 -0500 From: Troy Dawson Subject: Re: pmlogger for multiple remote machines To: Ken McDonell Cc: PCP Mailing List Message-id: <394E8391.2BB602BB@fnal.gov> MIME-version: 1.0 X-Mailer: Mozilla 4.72 [en] (X11; U; Linux 2.2.14-5.0smp i686) Content-type: text/plain; charset=us-ascii Content-transfer-encoding: 7bit X-Accept-Language: en References: Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing > > With everything setup, can you please send me the output from running > the commands > > # pmlogger_check -V > > as root, and then > > $ pminfo -f pmcd.pmlogger > $ ls -laR /export/farms/pcplogger/cdf I have edited the control file so that everything but the primary looks like fncdf1 n n /export/farms/pcplogger/cdf/fncdf1 -c /var/pcp/config/pmlogger/config.farmworker One further thing is that fnsfo, which you'll see below, was actually what was on top of the list before. But I moved it and some other systems down to the bottom. So it looks like only the very first machine in the control file is getting everything properly configured. Is there something at the end of the line that I should be putting (like a /) that will let pmlogger know that this is another system? [root@pinky /root]# /usr/share/pcp/bin/pmlogger_check -V Restarting pmlogger for host "fncdf1" ... [process 31554] .. done Latest folio created for 20000619.14.48 [root@pinky /root]# pminfo -f pmcd.pmlogger pmcd.pmlogger.host inst [10846 or "10846"] value "pinky.fnal.gov" inst [15745 or "15745"] value "pinky.fnal.gov" inst [31554 or "31554"] value "pinky.fnal.gov" inst [0 or "primary"] value "pinky.fnal.gov" pmcd.pmlogger.port inst [10846 or "10846"] value 4330 inst [15745 or "15745"] value 4331 inst [31554 or "31554"] value 4332 inst [0 or "primary"] value 4330 pmcd.pmlogger.archive inst [10846 or "10846"] value "/var/log/pcp/pmlogger/pinky.fnal.gov/20000619.00.10" inst [15745 or "15745"] value "/export/farms/pcplogger/server/fnsfo/20000619.14.39" inst [31554 or "31554"] value "/export/farms/pcplogger/cdf/fncdf1/20000619.14.48" inst [0 or "primary"] value "/var/log/pcp/pmlogger/pinky.fnal.gov/20000619.00.10" pmcd.pmlogger.pmcd_host inst [10846 or "10846"] value "pinky.fnal.gov" inst [15745 or "15745"] value "fnsfo.fnal.gov" inst [31554 or "31554"] value "fncdf1.fnal.gov" inst [0 or "primary"] value "pinky.fnal.gov" [root@pinky /root]# [root@pinky /root]# ls -laR /export/farms/pcplogger/cdf /export/farms/pcplogger/cdf: total 200 drwxr-xr-x 50 root root 4096 Mar 15 13:26 . drwxr-xr-x 8 root root 4096 Jun 7 12:10 .. 
drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf1 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf10 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf11 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf12 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf13 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf14 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf15 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf16 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf17 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf18 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf19 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf2 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf20 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf21 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf22 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf23 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf24 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf25 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf26 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf27 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf28 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf29 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf3 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf30 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf31 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf32 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf33 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf34 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf35 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf36 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf37 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf38 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf39 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf4 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf40 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf41 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf42 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf43 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf44 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf45 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf46 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf47 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf48 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf5 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf6 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf7 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf8 drwxr-xr-x 2 root root 4096 Jun 19 14:49 fncdf9 /export/farms/pcplogger/cdf/fncdf1: total 32 drwxr-xr-x 2 root root 4096 Jun 19 14:49 . drwxr-xr-x 50 root root 4096 Mar 15 13:26 .. -rw-r--r-- 1 root root 776 Jun 19 14:48 20000619.14.48.0 -rw-r--r-- 1 root root 172 Jun 19 14:48 20000619.14.48.index -rw-r--r-- 1 root root 1351 Jun 19 14:48 20000619.14.48.meta -rw-r--r-- 1 root root 221 Jun 19 14:49 Latest -rw-r--r-- 1 root root 150 Jun 19 14:48 pmlogger.log -rw-r--r-- 1 root root 178 Apr 4 09:25 pmlogger.log.prior /export/farms/pcplogger/cdf/fncdf10: total 8 drwxr-xr-x 2 root root 4096 Jun 19 14:49 . drwxr-xr-x 50 root root 4096 Mar 15 13:26 .. /export/farms/pcplogger/cdf/fncdf11: total 8 drwxr-xr-x 2 root root 4096 Jun 19 14:49 . drwxr-xr-x 50 root root 4096 Mar 15 13:26 .. ***************************** ***bunches of spam removed*** ***************************** /export/farms/pcplogger/cdf/fncdf9: total 8 drwxr-xr-x 2 root root 4096 Jun 19 14:49 . drwxr-xr-x 50 root root 4096 Mar 15 13:26 .. 
[root@pinky /root]# -- __________________________________________________ Troy Dawson dawson@fnal.gov (630)840-6468 Fermilab ComputingDivision/OSS CSS Group __________________________________________________ From owner-pcp@oss.sgi.com Mon Jun 19 15:37:46 2000 Received: by oss.sgi.com id ; Mon, 19 Jun 2000 15:37:27 -0700 Received: from deliverator.sgi.com ([204.94.214.10]:33090 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Mon, 19 Jun 2000 15:37:18 -0700 Received: from larry.melbourne.sgi.com (larry.melbourne.sgi.com [134.14.52.130]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via SMTP id PAA15324 for ; Mon, 19 Jun 2000 15:32:21 -0700 (PDT) mail_from (nathans@wobbly.melbourne.sgi.com) Received: from wobbly.melbourne.sgi.com (wobbly.melbourne.sgi.com [134.14.55.135]) by larry.melbourne.sgi.com (950413.SGI.8.6.12/950213.SGI.AUTOCF) via ESMTP id IAA12711; Tue, 20 Jun 2000 08:36:01 +1000 Received: (from nathans@localhost) by wobbly.melbourne.sgi.com (980427.SGI.8.8.8/980728.SGI.AUTOCF) id IAA07657; Tue, 20 Jun 2000 08:36:00 +1000 (EST) From: "Nathan Scott" Message-Id: <10006200835.ZM7645@wobbly.melbourne.sgi.com> Date: Tue, 20 Jun 2000 08:35:59 -0500 In-Reply-To: Ken McDonell "Re: pmlogger for multiple remote machines" (Jun 20, 5:12am) References: X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail) To: Troy Dawson Subject: Re: pmlogger for multiple remote machines Cc: pcp@oss.sgi.com Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing hi, On Jun 20, 5:12am, Ken McDonell wrote: > Subject: Re: pmlogger for multiple remote machines > > ... > > This is what my files look like > > ---------------------------------- > > /var/pcp/config/pmlogger/control > > ---------------------------------- > > *comments removed* > > fncdf1 n y /export/farms/pcplogger/cdf/fncdf1 -c > > /var/pcp/config/pmlogger/config.farmworker > > fncdf2 n y /export/farms/pcplogger/cdf/fncdf2 -c > > /var/pcp/config/pmlogger/config.farmworker > > ...*bunches more*... > > fncdf47 n n /export/farms/pcplogger/cdf/fncdf47 -c > > /var/pcp/config/pmlogger/config.farmworker > > fncdf48 n n /export/farms/pcplogger/cdf/fncdf48 -c > > /var/pcp/config/pmlogger/config.farmworker > > ...*bunches more*... > > LOCALHOSTNAME y n /var/log/pcp/pmlogger/LOCALHOSTNAME -c > > config.default > ... > With everything setup, can you please send me the output from running > the commands > > # pmlogger_check -V > > as root, and then > > $ pminfo -f pmcd.pmlogger > $ ls -laR /export/farms/pcplogger/cdf you'll also probably want to have a look in each of your pmlogger log files - /var/log/pcp/pmlogger//pmlogger.log ... anything interesting there? cheers. 
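A quick way to scan for those across the whole farm, using the directories from Troy's control file:

    # any pmlogger.log files created for the remote hosts?
    $ ls -l /export/farms/pcplogger/cdf/*/pmlogger.log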
-- Nathan From owner-pcp@oss.sgi.com Mon Jun 19 15:43:47 2000 Received: by oss.sgi.com id ; Mon, 19 Jun 2000 15:43:27 -0700 Received: from deliverator.sgi.com ([204.94.214.10]:12359 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Mon, 19 Jun 2000 15:43:13 -0700 Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via ESMTP id PAA16837 for ; Mon, 19 Jun 2000 15:38:16 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id IAA25649; Tue, 20 Jun 2000 08:40:42 +1000 (EST) X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs Date: Tue, 20 Jun 2000 08:40:42 +1000 From: Ken McDonell To: Nathan Scott cc: Troy Dawson , pcp@oss.sgi.com Subject: Re: pmlogger for multiple remote machines In-Reply-To: <10006200835.ZM7645@wobbly.melbourne.sgi.com> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing On Tue, 20 Jun 2000, Nathan Scott wrote: > ... > you'll also probably want to have a look in each of your pmlogger > log files - /var/log/pcp/pmlogger//pmlogger.log ... > anything interesting there? There are _no_ pmlogger.log files for the failed instances, so we're not even getting that far. I'm working on this with Troy, and when there is an explanation and a fix, I'll post to the wider audience. From owner-pcp@oss.sgi.com Tue Jun 20 03:13:45 2000 Received: by oss.sgi.com id ; Tue, 20 Jun 2000 03:13:35 -0700 Received: from tah14.cesnet.cz ([194.108.115.182]:22284 "EHLO arthur.plbohnice.cz") by oss.sgi.com with ESMTP id ; Tue, 20 Jun 2000 03:13:24 -0700 Received: (from lemming@localhost) by arthur.plbohnice.cz (8.10.1/8.10.1) id e5KADT012596; Tue, 20 Jun 2000 10:13:29 GMT Message-ID: <20000620121327.53912@arthur.plbohnice.cz> Date: Tue, 20 Jun 2000 12:13:27 +0200 From: Michal Kara To: pcp@oss.sgi.com Subject: Re: New PCPMON 1.2.95 - archive mode added References: <85256900.00481AD5.00@AtlanticMutual.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailer: Mutt 0.88e In-Reply-To: <85256900.00481AD5.00@AtlanticMutual.com>; from Cameron_C_Caffee@AtlanticMutual.com on Fri, Jun 16, 2000 at 09:06:48AM -0400 Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing > >When I add it, you will be able to run: > >pcpmon -a day1=... -a day2=... -s -1d -a day3=... -s -2d > > Consider a syntax that will support wildcard expressions for the archive files > (e.g. /var/log/pcp/pmlogger/{node}/2000*) and optionally several directories to > search. > This wouldn't be easy to do in PCPMON (if you mean concatenating more archives). > Given that the archives contain date/time stamp info, one could envision a > PCPMON syntax that allows the user to specify a date/time range (e.g. -beg > 2000/06/01:09:00 -end 2000/06/30:17:00 -day 08:00-17:00). > This may be an interesting option, to limit times... I will think about this. > Perhaps the ability to handle compressed formats (gzip,compress) would be > helpful since pmlogger_daily does this. > Isn't this the responsibility of the PCP library? > Concurrent with export of the graphing results, consider a "batch-mode" of > operation where graphs can be produced and their results exported without > interactive use of PCPMON. This would support periodic trend analysis via > scheduled jobs. > Sure.
Michal Kara From owner-pcp@oss.sgi.com Tue Jun 20 03:43:14 2000 Received: by oss.sgi.com id ; Tue, 20 Jun 2000 03:43:04 -0700 Received: from pneumatic-tube.sgi.com ([204.94.214.22]:35145 "EHLO pneumatic-tube.sgi.com") by oss.sgi.com with ESMTP id ; Tue, 20 Jun 2000 03:42:49 -0700 Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by pneumatic-tube.sgi.com (980327.SGI.8.8.8-aspam/980310.SGI-aspam) via ESMTP id DAA03595 for ; Tue, 20 Jun 2000 03:47:57 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id UAA41390; Tue, 20 Jun 2000 20:40:15 +1000 (EST) X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs Date: Tue, 20 Jun 2000 20:40:14 +1000 From: Ken McDonell To: Michal Kara cc: pcp@oss.sgi.com Subject: Re: New PCPMON 1.2.95 - archive mode added In-Reply-To: <20000620121327.53912@arthur.plbohnice.cz> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing On Tue, 20 Jun 2000, Michal Kara wrote: > > >When I add it, you will be able to run: > > >pcpmon -a day1=... -a day2=... -s -1d -a day3=... -s -2d > > > > Consider a syntax that will support wildcard expressions for the archive files > > (e.g. /var/log/pcp/pmlogger/{node}/2000*) and optionally several directories to > > search. > > > > This wouldn't be easy to do in PCPMON (if you mean concateating more > archives). Assuming the archives are all collected from the same host ... The semantics of the archives (and in particular the meta data therein) make it difficult for a tool to process chronologically adjacent (or even overlapping) archives. Rather it is better to stitch the archives together with pmlogextract(1) and then point the tool at the single resultant archive. If the archives are collected from different hosts then you have a different set of curly issues. > > Given that the archives contain date/time stamp info, one could envision a > > PCPMON syntax that allows the user to specify a date/time range (e.g. -beg > > 2000/06/01:09:00 -end 2000/06/30:17:00 -day 08:00-17:00). > > > This may be interesting option, to limit times... I will thing about > this. Check out the various options under TIME WINDOW SPECIFICATION in the pcpIntro(1) man page. Most of the common perversions for setting time windows are supported there, and there is PCP library support for parsing options in these formats. > > Perhaps the ability to handle compressed formats (gzip,compress) would be > > helpful since pmlogger_daily does this. > > > Isn't this responsibility of PCP library? Don't go there. If/when you understand interp.c in libpcp, you know this is a bad plan. PCP archives may be accessed more or less at random, and read forwards or backwards and any combination of the above. So uncompress on the fly is not possible. If you uncompress when the archive is opened you have to worry about - disk space - recompressing - cleanup on exit on balance I decided these are all tasks better handled by humans via the shell. 
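As a concrete sketch of the stitch-it-in-the-shell approach (the archive base names are hypothetical, and the gunzip step assumes pmlogger_daily has compressed the data volumes):

    $ gunzip 20000619.0.gz 20000620.0.gz      # restore the data volumes if needed
    $ pmlogextract 20000619 20000620 twodays  # merge two daily archives into one
    $ pmval -a twodays kernel.all.load        # then point any tool at the result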
From owner-pcp@oss.sgi.com Tue Jun 20 05:26:05 2000 Received: by oss.sgi.com id ; Tue, 20 Jun 2000 05:25:55 -0700 Received: from cpk-mail-relay1.bbnplanet.com ([192.239.16.198]:24707 "HELO vienna1-mail-relay1.bbnplanet.com") by oss.sgi.com with SMTP id ; Tue, 20 Jun 2000 05:25:42 -0700 Received: from atlantic3-cp.atlanticos.com (atlantic3-cp.atlanticos.com [199.120.242.66]) by vienna1-mail-relay1.bbnplanet.com (Postfix) with SMTP id C47C74FFA8; Tue, 20 Jun 2000 12:25:37 +0000 (GMT) Received: by AtlanticMutual.com(Lotus SMTP MTA v4.6.6 (890.1 7-16-1999)) id 85256904.00443053 ; Tue, 20 Jun 2000 08:24:48 -0400 X-Lotus-FromDomain: ATLANTIC COMPANIES From: Cameron_C_Caffee@AtlanticMutual.com To: Ken McDonell Cc: pcp@oss.sgi.com Message-ID: <85256904.00442D20.00@AtlanticMutual.com> Date: Tue, 20 Jun 2000 08:25:24 -0400 Subject: Re: New PCPMON 1.2.95 - archive mode added Mime-Version: 1.0 Content-type: text/plain; charset=us-ascii Content-Disposition: inline Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing Location: Roanoke Department: Network Support Per Ken McDowell : >Assuming the archives are all collected from the same host ... > > The semantics of the archives (and in particular the meta data > therein) make it difficult for a tool to process chronologically > adjacent (or even overlapping) archives. Rather it is better to > stitch the archives together with pmlogextract(1) and then point the > tool at the single resultant archive. > >If the archives are collected from different hosts then you have a >different set of curly issues. Regarding the nature of the archives making the processing of spanned archives difficult ... That's too bad. Other performance products are designed to provide for a "log-once - read-many" approach which does not require any re-processing of logs in order to select a particular range of dates/times for a particular host computer. After reviewing the man page for pmlogextract, I can agree that several logs for a given host can be re-processed to create a single archive file for analysis. The utility also offers an opportunity for data reduction through selection of a sub-set of metrics for inclusion in the output archive. Obviously, I'd prefer to avoid this type of re-processing to obtain the data desired when the archive file names and the content of the archives already communicate the information necessary to support the desired date/time selection criteria. Regarding the question of multiple nodes ... I agree that it is a significant design consideration. However, it may not be too early for the project to start thinking about a design that will facilitate multi-node reporting. When one considers the evolving use of clustered machines, the reporting requirement for those environments is to reflect the over-all performance and capacity measurements for the cluster as a whole. If PCP is to be useful in those environments, it will have to accommodate this requirement. BTW: Does pmlogextract support a wild-carded input file specification ? Thanks ! 
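On the data-reduction point, my understanding (an untested sketch, please check pmlogextract(1)) is that the metric subset is named in a small config file:

    $ cat cpu.only                 # hypothetical config listing the metrics to keep
    kernel.all.cpu
    kernel.all.load
    $ pmlogextract -c cpu.only 20000619 20000619-cpu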
Cameron From owner-pcp@oss.sgi.com Tue Jun 20 15:48:00 2000 Received: by oss.sgi.com id ; Tue, 20 Jun 2000 15:47:50 -0700 Received: from deliverator.sgi.com ([204.94.214.10]:8026 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Tue, 20 Jun 2000 15:47:31 -0700 Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via ESMTP id PAA00758 for ; Tue, 20 Jun 2000 15:42:32 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id IAA55841 for ; Wed, 21 Jun 2000 08:46:15 +1000 (EST) X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs Date: Wed, 21 Jun 2000 08:46:14 +1000 From: Ken McDonell To: pcp@oss.sgi.com Subject: Re: pmlogger for multiple remote machines (fwd) Message-ID: MIME-Version: 1.0 Content-Type: MULTIPART/Mixed; BOUNDARY="-2045888623-1917072337-961531085=:40493" Content-ID: Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing This message is in MIME format. The first part should be readable text, while the remaining parts are likely unreadable without MIME-aware tools. Send mail to mime@docserver.cac.washington.edu for more info. ---2045888623-1917072337-961531085=:40493 Content-Type: TEXT/PLAIN; CHARSET=US-ASCII Content-ID: Troy advises that this new pmlogger_check works. So if you are having the same problem, please use this one until the next spin of the pcp rpms. ---------- Forwarded message ---------- Date: Wed, 21 Jun 2000 05:58:05 +1000 From: Ken McDonell To: Troy Dawson Subject: Re: pmlogger for multiple remote machines On Tue, 20 Jun 2000, Troy Dawson wrote: > Hi Ken, Found it. It was a bug in the translation from Irix to Linux, and then the QA test to exercise this has not yet been ported from Irix to Linux. Attached is a new pmlogger_check, please extract and install in /usr/share/pcp/bin/pmlogger_check. Please let me know if this works, so I can post to the mail alias on oss.sgi.com. Thanks for your patience ... I hope we can meet on Aug 15 when I visit Fermilab. 
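For anyone else applying the fix, installation amounts to something like this (the target path is from Ken's note; keeping a backup copy is just my own suggestion):

    # cp /usr/share/pcp/bin/pmlogger_check /usr/share/pcp/bin/pmlogger_check.orig
    # cp pmlogger_check /usr/share/pcp/bin/pmlogger_check
    # chmod 755 /usr/share/pcp/bin/pmlogger_check
    # /usr/share/pcp/bin/pmlogger_check -V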
---2045888623-1917072337-961531085=:40493 Content-Type: TEXT/PLAIN; CHARSET=US-ASCII; NAME=pmlogger_check Content-Transfer-Encoding: BASE64 Content-ID: Content-Description: Content-Disposition: ATTACHMENT; FILENAME=pmlogger_check
[base64-encoded attachment omitted: the replacement pmlogger_check shell script described in the message above, to be installed as /usr/share/pcp/bin/pmlogger_check]
---2045888623-1917072337-961531085=:40493-- From owner-pcp@oss.sgi.com Sun Jun 25 23:20:02 2000 Received: by oss.sgi.com id ; Sun, 25 Jun 2000 23:19:53 -0700 Received: from deliverator.sgi.com ([204.94.214.10]:19510 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Sun, 25 Jun 2000 23:19:15 -0700 Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via ESMTP id XAA08140 for ; Sun, 25 Jun 2000 23:13:45 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com) Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id QAA93973; Mon, 26 Jun 2000 16:17:26 +1000 (EST) X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs Date: Mon, 26 Jun 2000 16:17:26 +1000 From: Ken McDonell Reply-To: kenmcd@melbourne.sgi.com To: Cameron_C_Caffee@AtlanticMutual.com cc: pcp@oss.sgi.com Subject: Re: PCPMON - kudos & encouragement In-Reply-To: <852568C0.00487733.00@AtlanticMutual.com> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: owner-pcp@oss.sgi.com Precedence: bulk Return-Path: X-Orcpt: rfc822;pcp-outgoing I know this is very late, but I'm just responding to the philosophical issues, rather than the technical ones ... 8^)> On Thu, 13 Apr 2000 Cameron_C_Caffee@AtlanticMutual.com wrote: > ... > (3) Axis units - the user could provide a units label for each axis. The PCP metadata already provides all the information a tool needs to construct the units label automagically. > Archive analysis : > > ... There's > nothing wrong with having one tool for real time monitoring and another for > archive analysis since the goals for each are a bit different. The former has > the character of an "alert" or "alarm" while the latter is normally oriented > toward historical trending (e.g. CPU Utilization trend for last 6 months). I think this line of argument takes you down a rat hole. 
In the PCP APIs, and in particular the way the archive library support is
implemented in libpcp, there is a very determined effort to make retrospective
and real-time sources of performance metrics semantically equivalent. The
rationale is that many tools, and more importantly the users of the tools, are
best served by operating on the abstraction of a series of observations over
time.

The alert or alarm function is just as useful in historical data as it is in
real-time data, but for different purposes. We use alarming in real-time for
operational management; we use alarming against archive data for analysis,
exception reporting and alarm tuning.

One of the strong points of the successful PCP monitoring tools, including
pmie in the open source release, is that they _do_ operate on both real-time
and historical data. And in fact one of the most severe criticisms of the
pmgadgets tool (a 2-D visible alarm constructor) is that it cannot replay from
PCP archives.

From owner-pcp@oss.sgi.com Sun Jun 25 23:21:22 2000
Received: by oss.sgi.com id ; Sun, 25 Jun 2000 23:21:12 -0700
Received: from deliverator.sgi.com ([204.94.214.10]:13367 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Sun, 25 Jun 2000 23:20:50 -0700
Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via ESMTP id XAA08404 for ; Sun, 25 Jun 2000 23:15:21 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com)
Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id QAA94163; Mon, 26 Jun 2000 16:19:06 +1000 (EST)
X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs
Date: Mon, 26 Jun 2000 16:19:06 +1000
From: Ken McDonell
Reply-To: kenmcd@melbourne.sgi.com
To: Cameron_C_Caffee@AtlanticMutual.com
cc: pcp@oss.sgi.com
Subject: Re: New PCPMON 1.2.95 - archive mode added
In-Reply-To: <85256904.00442D20.00@AtlanticMutual.com>
Message-ID: 
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-pcp@oss.sgi.com
Precedence: bulk
Return-Path: 
X-Orcpt: rfc822;pcp-outgoing

On Tue, 20 Jun 2000 Cameron_C_Caffee@AtlanticMutual.com wrote:
> ...
> Regarding the nature of the archives making the processing of spanned archives
> difficult ...
>
> That's too bad. Other performance products are designed to provide for a
> "log-once - read-many" approach which does not require any re-processing of logs
> in order to select a particular range of dates/times for a particular host
> computer. After reviewing the man page for pmlogextract, I can agree that
> several logs for a given host can be re-processed to create a single archive
> file for analysis. The utility also offers an opportunity for data reduction
> through selection of a sub-set of metrics for inclusion in the output archive.
> Obviously, I'd prefer to avoid this type of re-processing to obtain the data
> desired when the archive file names and the content of the archives already
> communicate the information necessary to support the desired date/time selection
> criteria.

I think this is an operational issue to a large extent. If your normal mode of
processing involves archives spanning long durations, then a simple
combination of pmlogger_check, pmlogger_daily, cron and pmlogextract will
allow you to stitch together logs of any desired duration (see the sketch
below). But there are lots of sites where a more useful operational model is a
collection of archives each spanning one day.
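For concreteness, here is a minimal sketch of the stitching approach. The
directory, hostname and daily archive basenames are hypothetical examples of
what pmlogger_daily might leave behind for one host, and the -c option for
selecting a subset of metrics is an assumption based on the pmlogextract man
page, so check pmlogextract(1) for the exact usage.

    # stitch a week's worth of daily archives into a single weekly archive;
    # pmlogextract takes one or more input archive basenames followed by a
    # single output archive basename
    cd /var/adm/pcplog/somehost
    mkdir -p weekly
    pmlogextract 20000619 20000620 20000621 20000622 20000623 \
                 20000624 20000625 weekly/20000619

    # data reduction at the same time: extract only the metrics named in a
    # (hypothetical) configuration file, one metric name per line
    pmlogextract -c cpu.metrics 20000619 20000620 weekly/cpu-only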
pmlogger_daily is biased towards the latter situation, based solely on
arguments of simplicity, i.e. it is easier to construct a weekly archive from
a set of daily archives than to chop a weekly archive into a set of daily
archives.

> Regarding the question of multiple nodes ...
>
> I agree that it is a significant design consideration. However, it may not be
> too early for the project to start thinking about a design that will facilitate
> multi-node reporting. When one considers the evolving use of clustered machines,
> the reporting requirement for those environments is to reflect the over-all
> performance and capacity measurements for the cluster as a whole. If PCP is to
> be useful in those environments, it will have to accommodate this requirement.

As a historical aside, PCP is not "early in the project" ... many of the
architectural and key design decisions were made 7 years ago.

I think there may be a misunderstanding here. In my previous posting, I was
trying to say that the issues for multiple archives from different hosts are
different to the issues for multiple archives from a single host.

There is nothing in the PCP approach that is _not_ designed for monitoring
multiple hosts ... quite the contrary, the whole client-server architecture is
biased towards arbitrary combinations of monitors and systems being monitored.
The same client can monitor stats from multiple hosts (or multiple archives)
concurrently. We routinely see one system acting as the pmlogger farm for
multiple hosts, and monitoring tools watching multiple hosts concurrently.
pmie includes logical predicates that extend rules to accommodate multiple
hosts (see the some_hosts, all_hosts and N%_host aggregate operators). We have
PCP PMDAs that span multiple nodes in a cluster (although I admit none of
these have escaped into the open source release as yet).

So, I think PCP is _really_ well placed to operate in environments with lots
of hosts.

> BTW: Does pmlogextract support a wild-carded input file specification ?

No, at first blush I'd say that is a function for the shell.

From owner-pcp@oss.sgi.com Sun Jun 25 23:22:22 2000
Received: by oss.sgi.com id ; Sun, 25 Jun 2000 23:22:02 -0700
Received: from deliverator.sgi.com ([204.94.214.10]:21559 "EHLO deliverator.sgi.com") by oss.sgi.com with ESMTP id ; Sun, 25 Jun 2000 23:21:39 -0700
Received: from rattle.melbourne.sgi.com (rattle.melbourne.sgi.com [134.14.55.145]) by deliverator.sgi.com (980309.SGI.8.8.8-aspam-6.2/980310.SGI-aspam) via ESMTP id XAA08501 for ; Sun, 25 Jun 2000 23:16:09 -0700 (PDT) mail_from (kenmcd@melbourne.sgi.com)
Received: from localhost (kenmcd@localhost) by rattle.melbourne.sgi.com (SGI-8.9.3/8.9.3) with ESMTP id QAA94132; Mon, 26 Jun 2000 16:19:54 +1000 (EST)
X-Authentication-Warning: rattle.melbourne.sgi.com: kenmcd owned process doing -bs
Date: Mon, 26 Jun 2000 16:19:54 +1000
From: Ken McDonell
Reply-To: kenmcd@melbourne.sgi.com
To: Michal Kara
cc: pcp@oss.sgi.com
Subject: Re: New PCPMON 1.2.95 - archive mode added
In-Reply-To: <20000616143831.39757@arthur.plbohnice.cz>
Message-ID: 
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-pcp@oss.sgi.com
Precedence: bulk
Return-Path: 
X-Orcpt: rfc822;pcp-outgoing

On Fri, 16 Jun 2000, Michal Kara wrote:
> ...
> When I add it, you will be able to run:
> pcpmon -a day1=... -a day2=... -s -1d -a day3=... -s -2d
>
> Then, if you create yourself config which will contain expression:
> (day1:kernel.all.cpu.load+day2:kernel.all.cpu.load+day3:kernel.all.cpu.load)/3
>
> This would print average CPU load from the three specified days.

You may wish to review pmlogsummary. For all sorts of statistical summaries
from archives, we've found this to be a generally useful tool.

I think one needs to be careful about overloading PCPMON ... our experience
has been that a number of smaller, focussed tools are more effective than the
Swiss army knife style of all-singing, all-dancing tool. This is especially
true when one moves from interactive monitoring to the broader field of
performance management.

> What I am doing right now is to add possibility of entering the expression
> and highlight specified parts of the graph (i.e., when CPU load was >10 etc.).

Another alternative may be to use pmie to filter the archive (pmie already has
very powerful rule evaluation features), customize the "notification" of the
rules to produce timestamped events when the rules are true, then have your
tool read these notifications and translate them into visual alarms on the
graph window.
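As a rough illustration of that last suggestion (a minimal sketch only: the
rule, the rule file name, the archive name and the evaluation interval are all
hypothetical, and the exact rule and action syntax should be checked against
pmie(1)), a rule file such as highload.pmie might contain a single rule that
fires whenever the 1 minute load average exceeds 10:

    // hypothetical rule: flag intervals where the 1 minute load average > 10
    kernel.all.load #'1 minute' > 10
        -> print "1 minute load average above 10";

Replaying a (hypothetical) archive through that rule, re-evaluating every 20
seconds of archive time, would then look something like:

    pmie -a 20000619 -t 20sec highload.pmie

and each time the rule is true, the print action should write a timestamped
line that another tool could read and turn into a visual alarm on the graph
window.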