On 27.09.2019 20:50, Rhialto wrote:
> On Fri 27 Sep 2019 at 15:53:47 +0200, Kamil Rytarowski wrote:
>> On 27.09.2019 10:19, Rhialto wrote:
>>> On Thu 26 Sep 2019 at 01:15:23 +0200, Kamil Rytarowski wrote:
>>>> Is this patch correct?
>>>>
>>>> http://netbsd.org/~kamil/patch-00168-kern_rndq.c-avoid-overflow.txt
>>>>
>>>> This code will map the corner case into defined semantics, treating
>>>> delta INT32_MIN as INT32_MIN+1.
>>>
>>> I don't know if it is important, but it makes the "normalized" delta
>>> value INT32_MAX more likely than it probably should be, since it can
>>> result from the "raw" delta values INT32_MIN, INT32_MIN+1, and
>>> INT32_MAX.
>>>
>>> Similarly, 0 is underrepresented in the output, since it can only
>>> result from 0.
>>>
>>> Every other positive result x can come from both x and -x as inputs.
>>>
>>> -Olaf.
>>>
>>
>> If you think that it should be represented as 0, I can go for it.
>>
> I have been pondering it for a while and it seems more complicated than
> I first thought... it seems that the actual value of delta isn't even
> important, but what rnd_delta_estimate() makes of it. That calculates a
> derivative and a second derivative. So one would want to avoid creating
> discontinuities in the time sequence of delta values, since that would
> likely lead to mis-estimations (or at least big changes compared to
> what it calculates now).
>
> But if the actual magnitude of delta isn't even important (due to the
> differentials), then one could get rid of all the negative delta values
> by simply always using delta-INT32_MIN (a.k.a. delta+INT32_MAX+1).
> That maps the range [INT32_MIN, INT32_MAX] to [0, UINT32_MAX].
>
> Too bad that rnd_delta_estimate() doesn't accept unsigned integers but
> only signed ones. (Why? A delta between two n-bit integers needs n+1
> bits, so one would expect it to take int64s...)
>
> To correct for that, maybe (delta-INT_MIN)/2 would be a solution (if
> calculated without causing overflows).
>
> The different calculations throw away information in different ways:
> the current code, which essentially takes abs(delta), sees no entropy
> in the delta sequence 1,-1,1,-1,... But the (delta-INT_MIN)/2 method
> sees nothing in 1,2,1,2,...
>
> So my final thought is that it is perhaps best to adjust
> rnd_delta_estimate() to take delta in an int64_t or uint32_t. After
> all, rnd_delta_t already contains uint64_ts for the first and second
> order differentials. With 32-bit samples and 33-bit deltas, 64 bits are
> certainly enough there. Then (delta-INT_MIN) can simply be used.
>
> -Olaf.

Thank you for the analysis. Please prepare a patch and commit.

Please add in the message:

Reported-by: syzbot+68c37d09c833f8ec1341%syzkaller.appspotmail.com@localhost