On Wed, 2013-06-19 at 09:34 +1000, Dave Chinner wrote:
Hi Dave,
> On Tue, Jun 18, 2013 at 05:59:59PM -0500, Chandra Seetharaman wrote:
> > Hello All,
> >
> > A couple of weeks back we had a discussion in the XFS meeting about
> > collecting xfstests results. I volunteered to collect xfstests results
> > from different architectures and upload them to XFS.org.
> >
> > I can run the tests and get results for x86_64 and ppc64. If anyone can
> > run the tests on other architectures and provide me the results, I will
> > filter them and upload them to XFS.org.
>
> How are you going to filter and display them on xfs.org? Should the
> scripts to do this be part of xfstests?
I wasn't thinking of any elaborate filtering across all the results
submitted for every kernel/xfsprogs variant.
My thinking was very simple:
- get test results based on publicly available git tree commit IDs
- show the commit ID and the failures seen on each arch.
This would help anyone who runs xfstests and sees a failure with newer
code: they would know whether it is a regression or an already-known
failure. A rough sketch of what I have in mind is below.
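
For example (just a sketch in Python; the results.json name and record
layout are my assumptions, not anything we have agreed on):

import json

# Hypothetical layout: one record per (kernel commit, arch), listing the
# xfstests that failed on that run, e.g.
#   [{"commit": "a1b2c3d", "arch": "ppc64", "failures": ["generic/075"]}, ...]

def is_known_failure(results_path, commit, arch, test):
    """Return True if `test` already fails for this commit/arch in the
    published results, i.e. the new failure is not a regression."""
    with open(results_path) as f:
        records = json.load(f)
    for rec in records:
        if rec["commit"] == commit and rec["arch"] == arch:
            return test in rec["failures"]
    return False
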
But it looks like more elaborate work is already in progress. I will
sync up with Ben and Phil to see how I can help.
>
> FWIW, without a database of results that users can use to filter the
> test results themselves, it will become unmanageable very quickly...
>
> BTW, from my notes from the 2012 LSFMM XFs get-together, there are
> these line items related to exactly this:
>
> ----
> - Public repository of test results so we can better track failures
> - Look into resurrecting old ASG xfstests results
>   repository and web iquery interface (Ben)
> - host on oss.sgi.com.
> - script to run xfstests and produce publishable output (Ben)
> ----
>
> Ben, did you ever start to look into this?
>
> > Here is what I think would be valuable to provide along with the results
> > (others, please feel free to add to the list to make the results more
> > useful):
> > - Architecture of the system
>
> - base distro (e.g. /etc/release).
>
> > - Configuration - memory size and number of procs
>
> I think that all the info that we ask people to report in bug
> reports would be a good start....
>
> > - Filesystem sizes
>
> More useful is the MKFS_OPTIONS and MOUNT_OPTIONS used to run the
> tests, as that tells us how much non-default test coverage we are
> getting. i.e. default testing or something different.
>
> > - Commit ID of the kernel
>
> Not useful for kernels built with local, non-public changes, which
> is generally 100% of the kernels and userspace packages I test
> with.
>
> > - which git tree (XFS git tree or Linus's)
> > - xfsprogs version (or commit ID if from the git tree)
>
> Same as for the kernel - base version is probably all that is useful
> here.
>
> You'd probably also want to capture the console output indicating
> test runtimes and why certain tests weren't run.
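
All of that could be gathered automatically when the tests are run.
Roughly what I have in mind, as a Python sketch (the field names and the
use of /etc/os-release are my assumptions, not an agreed schema):

import os
import platform

def gather_run_metadata(mkfs_options, mount_options, kernel_commit,
                        xfsprogs_version):
    """Collect the per-run information to publish with xfstests results."""
    meta = {
        "arch": platform.machine(),          # e.g. x86_64, ppc64
        "kernel": platform.release(),        # base kernel version
        "kernel_commit": kernel_commit,      # only meaningful for public trees
        "xfsprogs": xfsprogs_version,
        "nr_cpus": os.cpu_count(),
        "mkfs_options": mkfs_options,        # shows non-default test coverage
        "mount_options": mount_options,
    }
    # Base distro, as reported by /etc/os-release.
    try:
        with open("/etc/os-release") as f:
            fields = dict(line.rstrip().split("=", 1) for line in f if "=" in line)
        meta["distro"] = fields.get("PRETTY_NAME", "unknown").strip('"')
    except OSError:
        meta["distro"] = "unknown"
    # Memory size, from the MemTotal line of /proc/meminfo.
    try:
        with open("/proc/meminfo") as f:
            meta["memtotal_kb"] = int(f.readline().split()[1])
    except OSError:
        meta["memtotal_kb"] = None
    return meta

The resulting dict could then be dumped into the results directory next
to the test output.
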
>
> If you create a pristine $RESULTS_DIR and output all the information
> you want to gather into it, then it will be trivial for users to
> send information onwards. Providing a command line parameter that
> generates a unique results directory and then packages the results
> up into a tarball would be a great start. We'd then have a single
> file that can be sent to a central point with all the test results
> available. We could even do all the filtering/processing before
> upload.
>
> IOWs, the idea behind $RESULTS_DIR is to make this sort of scripted
> test result gathering simple to do....
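
That sounds like the right approach. Something along these lines for the
packaging step would be easy to script (again just a Python sketch; the
tarball naming is my assumption, and none of this is an existing
xfstests option):

import os
import tarfile
import time

def package_results(results_dir, out_dir="."):
    """Pack a pristine results directory into a uniquely named tarball
    that a tester can send to the central collection point."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    name = "xfstests-results-%s-%s" % (os.uname().machine, stamp)
    tarball = os.path.join(out_dir, name + ".tar.gz")
    with tarfile.open(tarball, "w:gz") as tar:
        tar.add(results_dir, arcname=name)
    return tarball
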
>
> Cheers,
>
> Dave.