About integers and negative values

I bet there's a good reason for the observation that INT of -PI returns -4 whereas putting -PI into an integer variable returns -3.
A%=-PI:PRINT ;A%;" ";INT(-PI)
-3 -4

But what is that reason? (Just curious!)

(Naturally, -PI is just standing in for any negative non-integer value.)

Comments

  • BigEd wrote: »
    But what is that reason? (Just curious!)
    If you're familiar with C library functions, the reason is that INT() corresponds exactly to the C floor() function, whereas assigning to an integer variable is exactly equivalent to the C trunc() function.

    Put another way, INT() rounds to the next integer towards minus infinity, whilst assigning to an integer variable rounds to the next integer towards zero. It's helpful to have both options available.

    INT() is invaluable because it allows you to round to the nearest integer using INT(x + 0.5). Truncation, i.e. simply discarding the fractional part, is, however, the more natural (and faster) operation for the CPU.
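
    For illustration, here is a minimal C sketch of the two behaviours (floor() and trunc() are standard <math.h> functions; compile with -lm if needed):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x = -3.14159265;                 /* standing in for -PI */
        printf("%g %g\n", floor(x), trunc(x));  /* -4 -3: like INT(-PI) vs A%=-PI */
        printf("%g\n", floor(2.7 + 0.5));       /* 3: round to nearest via INT(x + 0.5) */
        return 0;
    }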
  • Thanks... so, noting that this behaviour seems to have been the same since the beginning, does it come naturally from the BBC's guidance, might it just have fallen out of the implementation, or would Sophie (and indeed you, for the Z80 version) have made a deliberate decision? This might be unanswerable, of course!
  • BigEd wrote: »
    does this come naturally from the BBC's guidance
    The behaviour of INT() is standardised across (nearly) all BASICs, including Microsoft BASICs; the only dialect I know of which gets it wrong is Liberty BASIC. So given that the BBC specified that BBC BASIC should behave like Microsoft BASIC unless there was a very good reason not to, the behaviour was predetermined.

    Assignment to an integer variable wasn't covered by the BBC specification, not least because it didn't go into that sort of detail (it was Acorn's idea that there should be integer variables at all). But in every language I know, including K&R C (informally specified in 1978), converting a floating-point value to an integer uses truncation.

    So any behaviour other than what we have would have been extremely surprising.
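
    To make the truncation concrete, a minimal C sketch (the cast is the standard C conversion; nothing here is specific to any one compiler):

    #include <stdio.h>

    int main(void)
    {
        int a = (int)-3.14159265;   /* conversion to integer truncates towards zero */
        int b = (int) 3.14159265;
        printf("%d %d\n", a, b);    /* -3 3 */
        return 0;
    }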
  • Thanks!
  • Truncation, i.e. simply discarding the fractional part, is, however, the more natural (and faster) operation for the CPU.
    I will add that to some extent this depends on the floating-point format used, but 'sign & magnitude' is by far the most common format, and in that case truncation is what you get if you discard the fractional part.
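
    As a sketch of why: in IEEE 754 doubles (a sign-and-magnitude format), clearing the fractional mantissa bits truncates towards zero for either sign. The helper below is purely illustrative (no NaN or infinity handling):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Truncate towards zero by clearing the fractional mantissa bits.
       Because the format is sign-and-magnitude, the same masking works
       for positive and negative values alike. */
    static double trunc_by_bits(double x)
    {
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);
        int e = (int)((bits >> 52) & 0x7FF) - 1023;  /* unbiased exponent */
        if (e < 0)   return x < 0 ? -0.0 : 0.0;      /* |x| < 1: all fraction */
        if (e >= 52) return x;                       /* already an integer */
        bits &= ~((UINT64_C(1) << (52 - e)) - 1);    /* discard the fraction */
        memcpy(&x, &bits, sizeof bits);
        return x;
    }

    int main(void)
    {
        printf("%g %g\n", trunc_by_bits(-3.14159265), trunc_by_bits(3.14159265));
        /* -3 3: the same result as C's trunc() */
        return 0;
    }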
  • So given that the BBC specified that BBC BASIC behave like Microsoft BASIC, unless there was a very good reason not to, it was predetermined.
    Although off-topic it would perhaps be appropriate to mention the counter-example of two respects in which BBC BASIC departed drastically from Microsoft BASIC, despite the BBC's specification: LOG() which is the natural (Napierian) logarithm in MS BASICs but is to base-10 in BBC BASIC, and graphics coordinates which traditionally increased downwards (with the origin at the top-left) but in BBC BASIC increase upwards (with the origin at the bottom-left).

    I have no personal recollection now, but I imagine these must have been the subject of some intense discussions! In both cases BBC BASIC's approach is more in keeping with mathematical conventions: in maths the natural log is usually written ln, and Cartesian coordinates have the origin at the bottom left. Interestingly, PostScript (which already existed then) also uses Cartesian coordinates.

    I don't know whether it was Acorn or some of the other people advising the BBC (e.g. from MEP or Oundle School) who pressed for adopting the more mathematical conventions.
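
    For reference, the two logarithm conventions side by side in C (BBC BASIC's LOG corresponds to log10() and its LN to log(); MS BASIC's LOG corresponds to log()):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        printf("%g %g\n", log(100.0), log10(100.0));  /* 4.60517 2 */
        return 0;
    }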