Current-Users archive


Re: current build failure automated messages



    Date:        Tue, 19 Jul 2016 12:40:34 +0300
    From:        Andreas Gustafsson <gson%gson.org@localhost>
    Message-ID:  <22413.62866.53313.424303%guava.gson.org@localhost>


  | You don't have to screen scrape the HTML reports - you can get the
  | underlying data by anonymous rsync, as described in
  | 
  |   https://mail-index.netbsd.org/current-users/2015/10/18/msg028217.html

Thanks - I haven't seen that message (yet) - I have a kind of bizarre
way of reading NetBSD e-mail.  I cherry pick current messages, reading any
where the subject looks interesting, and then I have two "current" pointers
marking where I am really reading.  One is back in early 2014 somewhere, and
is where I had got to in reading (more or less) everything - until a busy
period in 2013 (I think) made me just start the cherry picking of current
messages and only very occasionally move the real read pointer forwards
(though it has moved 5 or 6 months - probably more - since then).  Then
early this year I established a second read pointer and went back to
reading all the messages again, just leaving a big gap in the messages I
had read.  Then of course that one slipped behind as well - it is currently
up to the start of May, and I'm back cherry picking again...  That one I am
trying to catch up, but the gap between April 2014 (1st read pointer) and
Jan 2016 (where I started reading again) is not closing very fast - every
now and again when I'm bored I go and process a couple of hundred messages
from back then, but it will take a while before I reach Oct 2015.

  | With those, finding out if the
  | latest build succeeded is a one-liner in sh, for example:

Yes, just finding the success/fail of the build, even using the HTML
version, is trivial - but I also wanted the commit messages to be available.
I am not going to use them often, but a few times I have seen a build
failure, looked into it, and failed to work out how it should be fixed.
But then, obviously, it gets fixed by someone who knows how - by looking
at how it was fixed, hopefully I can learn more.   Some old dogs actually
appreciate new tricks...   For that I need to know what happened between
the last failed build and when it started working again.
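
For what it's worth, once the source dates of the last broken build and
the first fixed one are known, something like the following (untested, and
not what my script actually does - the dates and the src/sys path are just
examples) would pull the relevant commit messages straight out of CVS:

    # sketch only: list the commit messages between two CVS dates;
    # the date range and the "src/sys" path are placeholders
    export CVS_RSH=ssh
    cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot rlog -N -S \
        -d "2016-07-18<2016-07-19" src/sys 2>/dev/null | less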

I will take a look at what is there and see if processing those logs
would be easier than the current processing of the HTML (which is not
really very difficult.)
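
For reference, fetching the raw data would presumably be something like
the following (the host and module name here are placeholders - the real
ones are in the message referenced above):

    # sketch only: mirror the raw build results by anonymous rsync;
    # SERVER and MODULE are stand-ins, not the real names
    rsync -az --delete rsync://SERVER/MODULE/ ~/netbsd-build-results/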

  | The Python code that generates the existing HTML reports and email
  | notifications is also available if you want it.

Not really.   Python is on my "I hope I never have to go there" list...

  | A new monthly report page is created when there is a build result to
  | report from building sources with a CVS source date in that month.

OK, thanks.

  | Since the internal date storage format of CVS has a "month" field, in
  | principle the above definition is complete without introducing the
  | concept of a time zone.

Not directly in the report generating code, but it is there in the way
CVS works, and it is always UTC (on a unix type system, anyway).
I will adapt my script.   By adopting CVS dates for this, you're effectively
adopting UTC for the dates in the file names - which is as it should be.
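
In practice that just means computing the month in UTC when the script
constructs the report file name, e.g. (exact name format aside):

    # derive the report month in UTC, matching CVS's notion of "now"
    month=$(date -u +%Y-%m)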

  | so a commit made after 0:00 UTC on the 1st will trigger the
  | creation of a new report page once it has been built.

That's fine - a failure to fetch the log, if the script requests it before
it is created, will (should, and almost does) just cause the script to
assume that there is no status change - which must be true: if the status
did not change (to fixed or to broken) after the last commit of the
previous month, and there has been no commit this month, then it is still
in the same state.

I realise that at the moment my script doesn't quite handle this properly,
but I will fix that - the file fetching part of it is the easy part...
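
Roughly, the fix amounts to something like this (a sketch only - the URL,
file names, and the "parsing" are placeholders, not the real script):

    # placeholders throughout; the real script differs
    report_url="http://SERVER/path/$(date -u +%Y-%m).html"
    statefile=$HOME/.build-status
    tmpfile=$(mktemp)

    if ftp -o "$tmpfile" "$report_url" 2>/dev/null; then
        # report exists: extract the latest status (placeholder parsing)
        status=$(grep -c FAILED "$tmpfile")
        echo "$status" > "$statefile"
    else
        # report not created yet (or fetch failed): the status cannot
        # have changed, so keep whatever we recorded last time
        status=$(cat "$statefile")
    fi
    rm -f "$tmpfile"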

  | Again, I think it would be better to use the underlying data than
  | to screen scrape HTML reports that were never intended for machine
  | parsing.

Understood.   But for now, what I have works, so at least until the
generated HTML changes, I'm happy...

  | If I end up adding the "build fixed" notifications to the TNF test
  | server, it will be a reimplementation in Python anyway, sharing code
  | with the existing build failure notifications.

Sure, with the raw data, and knowledge of how to use it (and much of the
code already existing), that would be a much better way.

kre

ps: if anyone has any actual interest in the script I posted, let me know,
and I will either send it (via private e-mail) after it is fixed, or
send it to the list again if there are many requests.


