Class:  AppMath::R
In:     rnum.rb
Parent: Numeric
Class of real numbers which implements arbitrary precision arithmetic together with the standard elementary transcendental functions.
Implementation is based on classes BigDecimal and module BigMath created by Shigeo Kobayashi.
The term ‘precision’ is used here in one of its technical meanings: the number of decimal digits used to represent the ‘significand’ of a number. Recall that each real number can be represented uniquely as
sign * significand * 10^exponent
where
sign = +/-
significand = 0.d_1 d_2 d_3 ..., where the d_1, d_2, ... are decimal digits in 0,1,2,3,4,5,6,7,8,9, but d_1 != 0
exponent is an integer number
For any natural number n, a real number ‘of precision n’ is one for which the digits d_i vanish for all i > n.
Notice that specifying the precision defines an infinite subset of the rational numbers. The infinity is due to the infinity of choices for the exponent. In the class BigDecimal, there is the restriction that the exponent be a Fixnum (and not a Bignum) so that there is a well-defined large but finite number of possible exponents in BigDecimal and so there is a well-defined finite set of BigDecimals for any specified value of the precision.
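To illustrate the contrast with the per-operation approach, here is a minimal sketch using the standard library classes mentioned above; the digit count 50 is illustrative:

```ruby
require 'bigdecimal'
require 'bigdecimal/math'

# In BigDecimal, precision is an argument of each operation:
root = BigDecimal("2").sqrt(50)   # square root of 2 to about 50 significant digits
pi   = BigMath.PI(50)             # pi to about 50 significant digits

puts root.to_s
puts pi.to_s
```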
Unlike class BigDecimal, the present class R (recall the mathematical standard symbols N, Z, Q, R, C, H for natural, integer, rational, real, complex, quaternionic numbers) treats precision not as a parameter of the arithmetic operations but as a class variable: The computational representation of the real number world as a whole depends on the precision parameter. Class R makes it easy to write programs in a ‘precision independent style’: With a single statement, e. g.
R.prec = 100
one selects the precision of all numbers (here 100 decimal places), and all arithmetic operations and transcendental functions automatically take the required precision into account.
Since arbitrary precision arithmetic is considerably slower than Float arithmetic even at modest precision (for a singular value decomposition of a 20 × 20 matrix, I found computation at precision 17 to be 3.6 times slower than with Float; precision 100 led to a computation time of 5.39 s, which was 2.7 times longer than precision 17), it is desirable to allow switching to Float arithmetic with the same syntax:
R.prec = 0
and
R.prec = "float"
both have this effect. In order for this to work, one has to introduce real numbers into the program by a simple specific syntax relying on R: Instead of
x = 1.456e-6; y = 7.7; z = 1000.0; n = 128
an R-based program may use the conventional method ‘new’ and write
x = R.new("1.456E-6"); y = R.new(7.7); z = R.new(1000); n = 128
Alternatively, we may use the class method c (‘c’ for ‘converter’) and write (with or without argument brackets)
x = R.c 1.456e-6; y = R.c(7.7); z = R.c 1000; n = 128
Written in this form, the variables x, y, z refer to R objects when the precision is set to a positive integer, and to Float objects as a consequence of the statement
R.prec = "float"
Assume that the program continues as
u = (x.sin + y.exp) * z
This works for R objects since R implements as methods all the functions which module Math provides as module functions. When, however, the statement R.prec = "float" is in action, the variables x and y refer to Float objects. Then the terms x.sin and y.exp are not defined, unless we prevent the problem via
require 'float_ext'
which adds to the interface of Float all the methods of R, so that any program that makes sense for R also makes sense for Float. (This is a harmless change of Float that breaks no code which relied on the unmodified class Float. If Float has been modified already for some reason, one has to make sure that this extension does not conflict with the previous one.) Instead of setting the precision for the whole program, we may vary it for direct comparison with code like this:
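The idea behind such an extension can be sketched as follows. This is a hypothetical, minimal stand-in for the actual float_ext.rb (which covers all methods of R): the elementary functions of module Math are forwarded to Float as instance methods.

```ruby
# Hypothetical sketch of a float_ext-style extension: make x.sin, x.exp,
# etc. work for Float just as they do for R.
class Float
  def sin; Math.sin(self); end
  def cos; Math.cos(self); end
  def exp; Math.exp(self); end
  def log; Math.log(self); end
end

x = 1.456e-6
y = 7.7
u = (x.sin + y.exp) * 1000.0   # same expression as in the text above
puts u
```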
for i in 0..8
  R.prec = 20*i
  # some computation, e.g.
  t1 = Time.now
  x = [R.c2, R.i2, R.i(0.33333), R.ran(137), R.tob(i), R.pi]
  x.each{ |x|
    y = x.sin**2 + x.cos**2 - 1
    puts "x = #{x.round(6)}, y = #{y}"
  }
  t2 = Time.now
  puts "computation time was #{t2 - t1} for precision #{R.prec}"
end
Here all precision independent creation modes for real numbers are exemplified:
R.c2          equivalent to the already known R.c(2), hence 2
R.i2          inverse of 2, i.e. 0.5
R.i(0.33333)  inverse of 0.33333, i.e. approximately 3
R.ran(137)    sine-floor random generator invoked for argument 137
R.tob(i)      test object generator invoked for argument i
R.pi          the number pi = 3.14159... (also R.e = 2.71828...)
Since the method ‘coerce’, which determines the meaning of binary operator expressions, is defined appropriately in R, one may for an R object r write 1/r instead of R.c1/r, and r + 1 instead of r + R.c1. This allows us to avoid the appearance of R in all formulas except those where real variables are initialized, as in the part
x = [R.c2, R.i2, R.i(0.33333),R.ran(137), R.tob(3), R.pi]
of our example computation.
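The coerce mechanism itself can be illustrated with a small stand-alone class. MyNum is a hypothetical stand-in for R, not part of the library:

```ruby
# Minimal sketch of Ruby's coerce protocol. When Integer#+ does not know
# how to handle the right operand, it calls right.coerce(left) and retries
# the operation with the returned pair.
class MyNum
  attr_reader :v
  def initialize(v); @v = v.to_f; end
  def +(other)
    other = MyNum.new(other) unless other.is_a?(MyNum)
    MyNum.new(@v + other.v)
  end
  def coerce(other)
    [MyNum.new(other), self]   # makes 1 + r work like MyNum + MyNum
  end
end

r = MyNum.new(2.5)
puts (r + 1).v   # 3.5
puts (1 + r).v   # 3.5, via coerce
```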
Class R is truly consistent with Ruby's numerical concepts: R is derived from Ruby's class Numeric, so that R, Float, Fixnum, and Bignum have a common superclass. Notice that class BigDecimal cannot have this property since most of its functions need the precision parameter as input. Further, R implements as member functions all functions which are defined in Ruby's module Math and thus form the core of Ruby's mathematics. It would therefore be natural to integrate class R into the BigMath module. It would also be natural to include float_ext.rb, since this would enable the ‘precision-independent coding style’ sketched above within all number-oriented Ruby projects. One would then have a framework in which valuable algorithms such as singular value decomposition of matrices could be coded with permanent validity. I managed to transform real matrix SVD code from ‘Numerical Recipes in C’ by Press et al. to precision-independent Ruby code, but have not succeeded so far with the same task for complex matrices. Inspecting existing sources for "Algorithm 358" shows how far we are from presenting algorithms of typical complexity in a form that is ready for use at arbitrary precision. In order for this task to make sense, one has to have complex numbers, vectors, and matrices in precision-independent Ruby style. As far as I can see, the many mathematical tool boxes provided by the Ruby community don't contain the tools which are needed here. The Ruby language makes it pure fun, however, to build these tools according to one's needs, and I did so, with the results available from www.ulrichmutze.de. I think that it is extremely important to have an agreed framework for coding mathematical and physical algorithms. Only then can one hope that people will start to add their work and use the work of others. So far, I have collected all the algorithms that I ever had the opportunity to use in my CPM class system, which is written in C++, using a programming style that I call C+-.
This is presented on my website.
Having now carried a tiny part of the CPM system over (not at all verbatim) to Ruby, I expect that a Ruby version of the whole system would be much smaller and more compact. I'm quite sure that I'll lack the enthusiasm (and even the stamina) to create such an extended algorithm collection in Ruby myself. Would it not be great if every algorithm (e.g. eigenvectors/values of matrices, FFT, minimum and root finding) that finds its way into some Ruby library were written in a way that allows it to be executed at arbitrary precision, even if Float precision is the preferred option? The present class R, and also the other classes to be collected in the present module AppMath, should add to the experience which is needed to come up with a working method to achieve this goal.
We know from the beginning that in BigDecimal, and hence in R, the exponents of numbers are restricted to the size which can be represented by a Fixnum. Although computations in physics and engineering come nowhere near this limit, schematically applied mathematical functions easily transcend it:
R.prec = 60
a = R.c 2.2
for i in 0...9
  a = a.exp
  puts "a = " + a.to_s
end
yields
a = 0.902501349943412092647177716688866402972021659669817926079804E1
a = 0.830832663077249493655084378868900432568369546441921929731276E4
a = 0.182141787499134800567191386180195980368655533517234407096707E3609
a = 0.0
a = 0.1E1
a = 0.271828182845904523536028747135266249775724709369995957496696E1
a = 0.151542622414792641897604302726299119055285485368561397691404E2
a = 0.381427910476022059220921959409820357102394053622666607552533E7
a = 0.233150439900719546228968991101213766633201742896351361434871E1656521
where only the first three results are correct; the following ones are determined by the overflow behavior of Fixnum.
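For comparison, Float has no such wrap-around behavior; under the same iteration it saturates at Infinity. A quick sketch:

```ruby
# With Floats, repeated exponentiation overflows to Infinity after a few
# steps (Math.exp overflows for arguments above roughly 709):
a = 2.2
9.times { a = Math.exp(a) }
puts a   # Infinity
```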
Motivation and rationale
Experimentation with class R provided many experiences which sharpened my long-standing diffuse ideas concerning a coding framework for mathematics.
In mathematics we have ‘two worlds’: the discrete one and the ‘continuous’ one. For the first world, Ruby is fit without modifications. Its automatic switch from Fixnum representation of integer numbers to a Bignum representation eliminates all limitations to the coding of discrete mathematics problems for which a coding approach is reasonable from a logical point of view.
The ‘continuous world’ is the one for which the real numbers lay the ground. The prime example of this world is ‘real analysis’ for which the concept of convergence is considered central. When a computationally oriented scientist describes to a pure mathematician his experience that all mathematically and physically relevant structures seem to have natural codable counterparts, this pure mathematician will probably admit that this holds for the trivial part of the story, but he will probably insist that all deeper questions, those concerning convergence, closedness, and completeness are a priori outside the scope of numerical methods.
According to my understanding, this mathematician‘s point of view is misleading. It is, however, suggested by what mathematicians actually do. For them, it is natural, when having to work with a number the square of which is known to be 2, to ‘construct’ this number as a limit of objects (e.g. finite decimal fractions) with the intention to use this ‘exact solution of the equation x^2 = 2’ in further constructions and deductions within the framework of real analysis.
For a computationally oriented scientist an alternative view is more natural and more promising: do everything (i.e. solving x^2 = 2, and the further constructions and deductions mentioned above) with finite precision and consider this ‘whole thing’ (and not only x) as a function of this precision. When we consider the behavior of the ‘whole thing’ for growing precision, we have a single limit (if we need to consider a limit at all), and the question never arises whether two limits are allowed to be interchanged. Such questions are typically not easy, and a major part of the technical skills of mathematicians is devoted to them. I'm quite sure that I spent more than a year of my life struggling with such questions.
To have a realistic example, let us consider that the ‘whole thing’, mentioned above, is the task to produce all tables, figures, diagrams, and animations that will go into a presentation or publication. Then it is natural to consider a large structured Ruby program as the means to perform this task. (My experience is restricted to C++ programs instead of Ruby programs for this situation.) Let us assume that all data that normally would be represented as Float objects, now are represented as R objects. As discussed already, this means that e.g. instead of
x = 2.0
we had to write
x = R.c(2.0)
or
x = R.c2
or we had to add to the statement
x = 2.0
the conversion statement
x = R.c(x)
No modification or conversion would be needed for the integer numbers! Now consider having an initial statement
R.prec = "float"
in the main program flow. By executing the program we get curves in diagrams, moving particles in animations, etc. If some of these curves are more jagged than expected, or some table values turn out to be NaN or infinite, we may try
R.prec = 40
and
R.prec = 80 ...
If the results stabilize somewhere, practitioners are sure that ‘the result is now independent of numerical errors’. Of course, the mathematician's objection that behavior for a few finite values says nothing about the limit also applies here. But we are in a very comfortable position to cope with this objection: since we deal with the solution of a task, we are allowed to assume that we know the experience-based conventions and ‘best practices’ concerning solutions of tasks in the problem field under consideration. So the judgement whether the behavior of the solution as a function of precision supports a specific conclusion has a firm basis when the context is taken into account. It is an illusory hope that some abstract framework such as ‘Mathematics’ could replace the guidance provided by problem-field related knowledge and experience.
Let me finally describe an experience which suggested to me the concept of precision as a parameter of the whole task (project) instead of a parameter of individual arithmetic operations:
In an industrial project I had to assess the influence of lens aberrations on the efficiency of coupling laser light into the core of an optical fiber. Although the situation is clearly a wave-optical one, the idea was to test whether a ray-tracing simulation would reproduce the few available measured data. In case of success, one would rely on ray tracing for inexpensive optimization and tolerance analysis. At that time, all computation in my company was done in FORTRAN, and floating point numbers were represented by 4 bytes (just as float in C). The simulation did not really reproduce the measured curve, but it wiggled around it in a remarkably symmetrical manner. Just at this time our FORTRAN compiler was upgraded to provide 8-byte numbers (corresponding to C's double). What I had suspected turned out to be true: the ray-tracing simulation now reproduced the measurements with magical precision. Congratulations to the optical lab for having got such highly consistent data! Soon I turned to programming in C and enjoyed many advantages over FORTRAN, but one paradise was lost: there was no longer a meaningful comparison between 4-byte and 8-byte precision, since the available C compiler worked with 8 bytes internally in both cases. When later we got the type long double in addition, this was a disappointment, since it was identical to double for the MS compiler and only 10 bytes (?) for GNU. So my desire was to have a C++ compiler which implemented, and consistently used,
float:       4 bytes
double:      8 bytes
long double: 16 bytes
I then would write all my programs with a floating point type named R which gets its real meaning in a flexible manner by a statement like
typedef long double R;
I was rather sure for a long time that I would never meet a practical situation in which a simulation would show objectionable numerical errors when done with 16-byte numbers. I had to learn, however, that this was a silly idea: Let us consider a system of polyspherical elastic particles (the shape of each particle is a union of overlapping spheres) which are placed in a mirror-symmetrical container, and let the initial positions (placements) and velocities be arranged symmetrically (with respect to the same mirror). Then the exact motion can be seen to preserve this symmetry. However, each simulation will lose this symmetry after a few particle-to-particle collisions. (The particles start to rotate after collisions, which influences further collisions much more than for mono-spherical particles.) Increasing the number of bytes per number will only allow a few more symmetrical collisions, and the computation time needed to see these will increase. Of course, this deviation from symmetry is not objectionable from a ‘real world’ point of view. It teaches the positive fact that the tolerances for making the particles, and for placing and boosting them, are unrealistically tight for realizing a symmetrical motion of the system. This shows that there are computational tasks in which not all computed system properties will stabilize with increasing computational precision. With class R, such computational phenomena are within the scope of Ruby programming!
Preferred command line for documentation:
rdoc -a -N -S --op doc_r r.rb
g [R]
Number generator which yields a Float for R.prec == 0 and an R otherwise. This is the recommended method for introducing real numbers in programs since, when used consistently, the whole program can be switched from Float precision to arbitrary precision by a single R.prec= statement. Usage:
x = R.c("3.45E-45"); y = R.c 2; z = R.c(1.25e13); u = R.c "40.1e-1"
# File rnum.rb, line 488
def R.c(aNumber)
  if @@prec < 1 # yield a Float
    nf = aNumber.to_f
    if nf.class == Float
      nf
    else
      nff = aNumber.to_s.to_f
      if nff.class == Float
        nff
      else
        fail "Unable to define a Float from the argument"
      end
    end
  else # yield a R with the correct number of digits
    R.new(aNumber)
  end
end
The constant 0.
# File rnum.rb, line 530
def R.c0
  return 0.0 if @@prec < 1
  R.new
end
The constant 1.
# File rnum.rb, line 535
def R.c1
  return 1.0 if @@prec < 1
  R.new("1.0")
end
The constant 10.
# File rnum.rb, line 580
def R.c10
  return 10.0 if @@prec < 1
  R.new("10.0")
end
The constant 2.
# File rnum.rb, line 540
def R.c2
  return 2.0 if @@prec < 1
  R.new("2.0")
end
The constant 3.
# File rnum.rb, line 545
def R.c3
  return 3.0 if @@prec < 1
  R.new("3.0")
end
The constant 4.
# File rnum.rb, line 550
def R.c4
  return 4.0 if @@prec < 1
  R.new("4.0")
end
The constant 5.
# File rnum.rb, line 555
def R.c5
  return 5.0 if @@prec < 1
  R.new("5.0")
end
The constant 6.
# File rnum.rb, line 560
def R.c6
  return 6.0 if @@prec < 1
  R.new("6.0")
end
The constant 7.
# File rnum.rb, line 565
def R.c7
  return 7.0 if @@prec < 1
  R.new("7.0")
end
The constant 8.
# File rnum.rb, line 570
def R.c8
  return 8.0 if @@prec < 1
  R.new("8.0")
end
The constant 9.
# File rnum.rb, line 575
def R.c9
  return 9.0 if @@prec < 1
  R.new("9.0")
end
The constant 1/10.
# File rnum.rb, line 637
def R.i10
  return 0.1 if @@prec < 1
  R.new("0.1")
end
The constant 1/2 (inverse 2)
# File rnum.rb, line 607
def R.i2
  return 0.5 if @@prec < 1
  R.new("0.5")
end
The constant 1/4.
# File rnum.rb, line 616
def R.i4
  return 0.25 if @@prec < 1
  R.new("0.25")
end
The constant 1/5.
# File rnum.rb, line 621
def R.i5
  return 0.2 if @@prec < 1
  R.new("0.2")
end
The constant 1/8.
# File rnum.rb, line 630
def R.i8
  return 0.125 if @@prec < 1
  R.new("0.125")
end
The constant NaN, not a number.
# File rnum.rb, line 642
def R.nan
  return 0.0/0.0 if @@prec < 1
  R.new(BigDecimal("NaN"))
end
The function works only for @@prec > 0; otherwise it fails. The argument is assumed to be either a BigDecimal, or any object x for which x.to_s determines a BigDecimal bd via bd = BigDecimal(x.to_s). This allows one to input number-related character strings, numerical literals, and numerical variables in a variety of styles:
R.prec = 40
i = 10000000
j = 123456789
a = R.new(i * j)
b = R.new(1e-23)
c = R.new("-117.07e99")
R.prec = 100 # same as R.prec = 1e2
d = R.new(1e66)
e = R.new("1E666")
The most universal method for generating large numbers is the one from String objects, as in the definitions of the variables c and e above. According to the design of BigDecimal, the exponent (given by the digits after the ‘E’) has to be representable by a Fixnum.
# File rnum.rb, line 444
def initialize(aStringOrNumber="0.0")
  fail "R.new makes sense only for @@prec > 0" unless @@prec > 0
  if aStringOrNumber.kind_of?(BigDecimal)
    @g = aStringOrNumber # take the results from BigDecimal
                         # arithmetic directly; this speeds programs up
  else
    x = BigDecimal(aStringOrNumber.to_s)
    fail "Unable to define a BigDecimal from the argument" unless
      x.kind_of?(BigDecimal)
    @g = x
  end
end
The constant 1.
# File rnum.rb, line 515
def R.one
  return 1.0 if @@prec < 1
  R.new("1.0")
end
The constant pi=3.14159…
# File rnum.rb, line 586
def R.pi
  return Math::PI if @@prec < 1
  R.new(@@pi)
end
Returns the number of digits. Usage:
n = R.prec
# File rnum.rb, line 413
def R.prec
  return @@prec
end
Setting precision, i. e. the number of digits.
If anInteger.to_i is an integer > 0, this number will be set as the class variable @@prec. All other input will set @@prec to 0. Usage:
R.prec = 100
or
R.prec = "float"
A class state @@prec == 0 lets all generating methods create Float objects instead of R objects. Calling methods of R objects is undefined in this class state. This behavior is reasonable since all real numbers will be created as Floats for this setting of precision, so that all method calls will automatically be invoked by Float objects instead of R objects. For this to work one needs
require 'float_ext'
and to use R.c instead of R.new.
# File rnum.rb, line 392
def R.prec=(anInteger)
  if anInteger.kind_of?(Numeric)
    ai = anInteger.to_i # for robustness
    ai = 0 unless ai.class == Fixnum
  else
    ai = 0
  end
  @@prec = ai > 0 ? ai : 0
  if @@prec > 0
    @@pi = BigMath::PI(@@prec)
    @@e = BigMath::E(@@prec)
    ln5 = BigMath.log(BigDecimal("5"),@@prec)
    ln2 = BigMath.log(BigDecimal("2"),@@prec)
    @@ln10 = ln5.add(ln2,@@prec)
    @@lge = BigDecimal("1").div(@@ln10,@@prec)
  end
end
Random value (sine-floor random generator).
Chaotic function from the integers into the unit interval [0,1] of R
Argument condition: R.new(anInteger) defined
Although the main intent is to use the function for integer arguments, this is not essential.
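The Float branch of the generator amounts to this self-contained sketch: magnify sin(n) by a large factor and keep the fractional part, giving a chaotic map from the integers into [0, 1).

```ruby
# Sine-floor idea in plain Float arithmetic (mirrors the @@prec < 1 case):
def sine_floor_ran(n)
  y = 1e6 * Math.sin(n)
  y - y.floor   # fractional part, always in [0, 1)
end

puts sine_floor_ran(137)
```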
# File rnum.rb, line 676
def R.ran(anInteger)
  if @@prec < 1 # no further usage of R
    x = anInteger
    y = 1e6 * Math::sin(x)
    return y - y.floor
  else
    shift = 6
    fac = R.one.ldexp(shift)
    R.add_prec(shift)
    x = R.c anInteger
    y = fac * x.sin
    res = (y - y.floor).round(@@prec - shift)
    R.add_prec(-shift)
    res
  end
end
The constant 10.
# File rnum.rb, line 525
def R.ten
  return 10.0 if @@prec < 1
  R.new("10.0")
end
Consistency test for class R. This is intended to keep the class consistent despite modifications. The first argument influences the numbers which are selected for the test. Returned is a sum of numbers, each of which should be numerical noise, so the result has to be << 1 if the test is to indicate success. For instance, on my system
R.prec = 100; R.test(137)
produces

The error sum is 0.1654782936431420775338085739363521532600906524358495733897874347055126769599726793415224324363846749E-29 .
Computation time was 0.853 seconds.
# File rnum.rb, line 1269
def R.test(n0, verbose = false)
  puts "Doing R.test(#{n0}, #{verbose}) for R.prec = #{@@prec}:"
  puts "*************************************************"
  require 'float_ext'
  t1 = Time.now
  small = R.prec <= 0
  s = R.c0
  puts "class of s is " + s.class.to_s
  i = n0
  a = R.tob(i)
  i += 1
  b = R.tob(i)
  i += 1
  c = R.tob(i)
  i += 1

  r = 2 + a
  l = R.c2 + a
  ds = r.dis(l)
  puts "coerce 2 + a: ds = " + ds.to_s if verbose
  s += ds

  r = a + 1.234
  l = a + R.c(1.234)
  ds = r.dis(l)
  puts "coerce a + float: ds = " + ds.to_s if verbose
  s += ds

  ai = a.round
  bool_val = ai.integer?
  ds = bool_val ? 0 : 1
  puts "rounding gives integer type: ds = " + ds.to_s if verbose
  s += ds

  diff = (a - ai).abs
  bool_val = diff <= 0.5
  ds = bool_val ? 0 : 1
  puts "rounding is accurate: ds = " + ds.to_s if verbose
  s += ds

  r = (a + b) * c
  l = a * c + b * c
  ds = r.dis(l)
  puts "Distributive law for +: ds = " + ds.to_s if verbose
  s += ds

  r = (a - b) * c
  l = a * c - b * c
  ds = r.dis(l)
  puts "Distributive law for -: ds = " + ds.to_s if verbose
  s += ds

  r = (a * b) * c
  l = b * (c * a)
  ds = r.dis(l)
  puts "Multiplication: ds = " + ds.to_s if verbose
  s += ds

  a = R.tob(i)
  i += 1
  b = R.tob(i)
  i += 1
  c = R.tob(i)
  i += 1

  r = (a * b) / c
  l = (a / c) * b
  ds = r.dis(l)
  puts "Division: ds = " + ds.to_s if verbose
  s += ds

  x = R.c0/R.c0
  y = x.nan?
  ds = y ? 0 : 1
  puts "0/0: ds = " + ds.to_s if verbose
  s += ds

  r = R.c1
  l = a * a.inv
  ds = r.dis(l)
  puts "inv: ds = " + ds.to_s if verbose
  s += ds

  r = 1/a
  l = a.inv
  ds = r.dis(l)
  puts "inv and 1/x: ds = " + ds.to_s if verbose
  s += ds

  r = b
  l = -(-b)
  ds = r.dis(l)
  puts "Unary minus is idempotent: ds = " + ds.to_s if verbose
  s += ds

  x = -a
  y = x + a
  r = y
  l = R.c0
  ds = r.dis(l)
  puts "Unary -: ds = " + ds.to_s if verbose
  s += ds

  l = a.sin * b.cos + a.cos * b.sin
  r = (a + b).sin
  ds = r.dis(l)
  puts "Addition theorem for sin: ds = " + ds.to_s if verbose
  s += ds

  l = a.sin ** 2 + a.cos ** 2
  r = R.c1
  ds = r.dis(l)
  puts "sin^2 + cos^2: ds = " + ds.to_s if verbose
  s += ds

  x = a.sin
  y = a.cos
  l = x.hypot(y)
  r = R.c1
  ds = r.dis(l)
  puts "hypot: ds = " + ds.to_s if verbose
  s += ds

  phi = (R.ran(i) - R.i2) * R.pi * 2
  x = phi.cos
  y = phi.sin
  r = phi
  l = x.arg(y)
  ds = r.dis(l)
  puts "arg: ds = " + ds.to_s if verbose
  s += ds

  l = a.exp * b.exp
  r = (a + b).exp
  ds = r.dis(l)
  puts "Addition theorem for exp: ds = " + ds.to_s if verbose
  s += ds

  l = b
  r = b.exp.log
  ds = r.dis(l)
  puts "exp and log: ds = " + ds.to_s if verbose
  s += ds

  x = c.abs
  l = x
  r = x.log.exp
  ds = r.dis(l)
  puts "log and exp: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i)
  l = a.sin
  r = l.asin.sin
  ds = r.dis(l)
  puts "asin and sin: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i)
  l = a.cos
  r = l.acos.cos
  ds = r.dis(l)
  puts "acos and cos: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i)
  l = a.tan
  r = l.atan.tan
  ds = r.dis(l)
  puts "atan and tan: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i)
  l = a.cot
  r = l.acot.cot
  ds = r.dis(l)
  puts "acot and cot: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i,true) # smaller version, in order
                    # not to overload function exp
  l = a.sinh
  r = l.asinh.sinh
  ds = r.dis(l)
  puts "asinh and sinh: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i,true)
  l = a.cosh
  r = l.acosh.cosh
  ds = r.dis(l)
  puts "acosh and cosh: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i,true)
  l = a.tanh
  r = l.atanh.tanh
  ds = r.dis(l)
  puts "atanh and tanh: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i,true)
  l = a.coth
  r = l.acoth.coth
  ds = r.dis(l)
  puts "acoth and coth: ds = " + ds.to_s if verbose
  s += ds

  i += 1
  a = R.tob(i,true)
  i += 1
  b = R.tob(i,true)
  i += 1
  c = R.tob(i,true)
  i += 1

  ap = a.abs
  l = (ap ** b) ** c
  r = ap ** (b * c)
  ds = r.dis(l)
  puts "general power: ds = " + ds.to_s if verbose
  s += ds

  l = (ap ** b) * (ap ** c)
  r = ap ** (b + c)
  ds = r.dis(l)
  puts "general power, addition theorem: ds = " + ds.to_s if verbose
  s += ds

  x = (a.abs + b.abs + c.abs)
  l = x.sqrt
  r = x ** 0.5
  ds = r.dis(l)
  puts "square root as power: ds = " + ds.to_s if verbose
  s += ds

  bi = i % 11 - 6
  ci = i % 7 - 3
  bi = 7 if bi.zero?
  ci = 3 if ci.zero?
  # avoid trivial 0
  l = (a ** bi) ** ci
  r = a ** (bi * ci)
  puts "bi = " + bi.to_s + " ci = " + ci.to_s if verbose
  ds = r.dis(l)
  puts "integer power: ds = " + ds.to_s if verbose
  s += ds

  r = b
  l = b.clone
  ds = r.dis(l)
  puts "cloning: ds = " + ds.to_s if verbose
  s += ds
  ds = (l == r ? 0.0 : 1.0)
  puts "cloning and ==: ds = " + ds.to_s if verbose

  x = a
  y = b
  p, q = x.divmod(y)
  l = x
  r = y * p + q
  ds = r.dis(l)
  puts "divmod 1: ds = " + ds.to_s if verbose
  s += ds

  x = a
  y = -b
  p, q = x.divmod(y)
  l = x
  r = y * p + q
  ds = r.dis(l)
  puts "divmod 2: ds = " + ds.to_s if verbose
  s += ds

  x = b
  y = a
  p, q = x.divmod(y)
  l = x
  r = y * p + q
  ds = r.dis(l)
  puts "divmod 3: ds = " + ds.to_s if verbose
  s += ds

  x = b
  y = -a
  p, q = x.divmod(y)
  l = x
  r = y * p + q
  ds = r.dis(l)
  puts "divmod 4: ds = " + ds.to_s if verbose
  s += ds

  x, y = a.frexp
  l = a
  r = x.ldexp(y)
  ds = r.dis(l)
  puts "frexp and ldexp: ds = " + ds.to_s if verbose
  s += ds

  x1 = R.c 1100000
  x2 = R.c "1100000 with comment which will be ignored"
  x3 = R.c(1200000.12)
  x4 = R.c("1200000.12")
  x5 = R.c("34567.89001953125e2")
  x6 = R.c("345.6789001953125E4")

  l = x1
  r = x2
  ds = r.dis(l)
  puts "input 1: ds = " + ds.to_s if verbose
  s += ds

  l = x3
  r = x4
  ds = r.dis(l)
  puts "input 2: ds = " + ds.to_s if verbose
  s += ds

  l = x5
  r = x6
  ds = r.dis(l)
  puts "input 3: ds = " + ds.to_s if verbose
  s += ds

  x = R.c9
  h = R.c(1e-6)
  fp = (x + h).erf
  fm = (x - h).erf
  l = (fp - fm)/(h * 2)
  r = (-x*x).exp*R.c2/R.pi.sqrt
  ds = r.dis(l)
  puts "erf derivative : ds = " + ds.to_s if verbose
  s += ds

  t2 = Time.now
  puts "class of s is " + s.class.to_s + " ."
  puts "The error sum s is " + s.to_s + " ."
  puts "It should be close to 0."
  puts "Computation time was #{t2-t1} seconds."
  s
end
Test object.
Needed for automatic tests of arithmetic relations. Intended to give numbers which rapidly change sign and order of magnitude when the argument grows regularly, e.g. as in 1, 2, 3, ... . However, suitability as a random generator is not the focus. If the second argument is ‘true’, the result is multiplied by a number << 1 in order to prevent the result from overloading the exponential function.
# File rnum.rb, line 703
def R.tob(anInteger, small = false)
  small_a = small || @@prec <= 0
  ai = anInteger.to_i
  mag_num1 = 7 # 'magic numbers'
  mag_num2 = 11
  mag_num3 = 17
  mag_num4 = small_a ? 0.0423 : 1.7501
  mag_num5 = small_a ? 0.13 : 0.65432
  r1 = ai % mag_num1
  r2 = ai % mag_num2
  r3 = ai % mag_num3
  y = (-r1 - r2 + r3) * mag_num4 + mag_num5
  if @@prec < 1
    res = Math::exp(y) * (ran(ai) - 0.5)
  else
    res = R.new(y).exp * (ran(ai) - R.i2)
  end
end
The constant 2.
# File rnum.rb, line 520
def R.two
  return 2.0 if @@prec < 1
  R.new("2.0")
end
The constant 0.
# File rnum.rb, line 510
def R.zero
  return 0.0 if @@prec < 1
  R.new
end
Returns the a-th power of self, if a is an integer argument, and the a-th power of self.abs for real, non-integer a. We put 0**a = 0 for all non-integer a.
# File rnum.rb, line 817
def **(a)
  return R.nan if nan?
  if a.class == Fixnum || a.class == Bignum
    return R.new(@g.power(a))
  end
  a = R.new(a) if a.class != R
  return R.nan if a.nan? # the 'return' was missing here, making the line a no-op
  x = abs
  if x.zero?
    R.c0
  else
    (x.log * a).exp
  end
end
Returns the square of the absolute value of self.
# File rnum.rb, line 884
def abs2; self * self; end
Inverse cosine.
# File rnum.rb, line 998
def acos
  x = self
  y = (R.c1 - x * x).sqrt
  x.arg(y)
end
Inverse hyperbolic cosine.
# File rnum.rb, line 1169
def acosh
  ((self * self - R.c1).sqrt + self).abs.log
end
Inverse hyperbolic cotangent.
# File rnum.rb, line 1179
def acoth
  ((self + R.c1)/(self - R.c1)).abs.log * R.i2
end
Argument, i.e. polar angle phi of point (x=self,y), -pi < phi <= pi.
This is the basic tool for defining asin, acos, atan, acot. Notice x.arg(y) == y.atan2(x) with function atan2 to be defined next.
# File rnum.rb, line 974
def arg(y)
  a = (self*self + y*y).sqrt
  res = R.c2 * (y/(a + self)).atan_aux
  res -= R.pi * 2 if res > R.pi
  res
end
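The half-angle identity behind arg can be checked against the familiar two-argument arctangent in plain Float arithmetic; a minimal sketch (the name arg_float is made up for this illustration):

```ruby
# Float sketch of the half-angle formula used by R#arg:
# arg(x, y) = 2 * atan(y / (sqrt(x*x + y*y) + x)), which equals atan2(y, x)
# everywhere except on the negative x-axis, where the denominator vanishes.
def arg_float(x, y)
  r = Math.sqrt(x * x + y * y)
  2.0 * Math.atan(y / (r + x))
end
```

For example, arg_float(1.0, 1.0) agrees with Math.atan2(1.0, 1.0), i.e. pi/4.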
Inverse sine.
# File rnum.rb, line 991
def asin
  y = self
  x = (R.c1 - y * y).sqrt
  x.arg(y)
end
Inverse hyperbolic sine.
# File rnum.rb, line 1164
def asinh
  ((self * self + R.c1).sqrt + self).log
end
Inverse tangent.
# File rnum.rb, line 1005
def atan
  cosine = (R.c1 + self * self).sqrt.inv
  sine = cosine * self
  cosine.arg(sine)
end
Inverse hyperbolic tangent.
# File rnum.rb, line 1174
def atanh
  ((R.c1 + self)/(R.c1 - self)).abs.log * R.i2
end
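The logarithmic closed forms used by asinh, acosh, and atanh above can be cross-checked in Float arithmetic against Ruby's built-in inverse hyperbolic functions; a small sketch (the lambda names are made up here):

```ruby
# Log formulas behind the inverse hyperbolic functions, in Float form:
asinh_log = ->(x) { Math.log(x + Math.sqrt(x * x + 1.0)) }
acosh_log = ->(x) { Math.log(x + Math.sqrt(x * x - 1.0)) }
atanh_log = ->(x) { 0.5 * Math.log((1.0 + x) / (1.0 - x)) }
```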
Supports the unified treatment of real and complex numbers.
# File rnum.rb, line 783
def complex?; false; end
(Complex) conjugation, no effect on real numbers. Supports the unified treatment of real and complex numbers.
# File rnum.rb, line 739
def conj; self; end
Cosine.
# File rnum.rb, line 1076
def cos
  piHalf = R.pi * R.new("0.5")
  (self + piHalf).sin
end
Hyperbolic cotangent.
# File rnum.rb, line 1129
def coth
  s = self.exp - (-self).exp
  c = self.exp + (-self).exp
  c/s
end
Returns a kind of relative distance between self and aR. The return value varies from 0 to 1, where 1 means maximum dissimilarity of the arguments. Such a function is needed for testing the validity of arithmetic laws, which, due to numerical noise, should not be expected to be fulfilled exactly.
# File rnum.rb, line 892
def dis(aR)
  aR = R.aa(aR)
  a = self.abs
  b = aR.abs
  d = (self - aR).abs
  s = a + b
  return R.c0 if s.zero?
  d1 = d/s
  d < d1 ? d : d1
end
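In Float terms the rule is: take the smaller of the absolute distance |a−b| and the relative distance |a−b|/(|a|+|b|); a minimal sketch (dis_float is a name invented here):

```ruby
# Float sketch of R#dis. Since |a-b|/(|a|+|b|) <= 1 always holds,
# the result lies in [0, 1], with 1 meaning maximum dissimilarity.
def dis_float(a, b)
  d = (a - b).abs
  s = a.abs + b.abs
  return 0.0 if s.zero?
  [d, d / s].min
end
```

For numbers of opposite sign, e.g. 1.0 and -1.0, the relative distance reaches its maximum 1.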
Returns the error function evaluated at x=self.
# File rnum.rb, line 1232
def erf
  if self < R.c0
    x = abs
    sig = - R.c1
  else
    x = self
    sig = R.c1
  end
  if x > R.c9
    return (R.c1 - x.erfc_asy) * sig
  else
    return x.erf_ps * sig
  end
end
Returns a two-component array containing the normalized fraction (an R) and the exponent (a Fixnum) of self. The exponent refers always to radix 10.
# File rnum.rb, line 650
def frexp # member functions will be called only in situations in
          # which @@prec > 0 is guaranteed
  n = @g.exponent
  fac = BigDecimal("10").power(-n)
  prel = [@g.mult(fac, @@prec), n]
  [R.new(prel[0]), prel[1]]
end
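The same radix-10 decomposition can be sketched for Floats (frexp10 and ldexp10 are names made up for this illustration):

```ruby
# Decompose x as fraction * 10**exponent with 0.1 <= |fraction| < 1
# for nonzero x, mirroring R#frexp with radix 10.
def frexp10(x)
  return [0.0, 0] if x.zero?
  e = Math.log10(x.abs).floor + 1
  [x / 10.0**e, e]
end

# The inverse operation: 'load' the decimal exponent onto the fraction.
def ldexp10(f, e)
  f * 10.0**e
end
```

For example, frexp10(1234.5) yields approximately [0.12345, 4], and ldexp10 reassembles the original number.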
Returns the hypotenuse of a right-angled triangle with sides x = self and y.
# File rnum.rb, line 988
def hypot(y); (self * self + y * y).sqrt; end
Returns the inverse 1/self.
# File rnum.rb, line 839
def inv
  return R.nan if nan?
  R.new(BigDecimal("1").div(@g, @@prec))
end
Adds the argument to the exponent of self. If self is a normalized fraction (and thus has exponent 0) the resulting number has an exponent given by the argument. This is the situation to which the function's name ‘load exponent’ refers.
# File rnum.rb, line 663
def ldexp(anInteger)
  ai = anInteger.to_i
  self * (R.c10 ** ai)
end
Natural logarithm.
# File rnum.rb, line 1156
def log
  return R.nan if nan?
  r1 = self.log10
  r2 = R.e.log10
  r1/r2
end
Logarithm to base 10.
Makes use of the known decimal exponent to reduce the computation to a genuinely small argument.
# File rnum.rb, line 1140
def log10
  return R.nan if nan?
  return R.nan if @g.infinite? || @g.nan?
  sp = @g.split
  exponent = sp[3]
  s = "0." + sp[1]
  x = BigDecimal(s)
  if x.zero?
    fail "zero argument of log"
  end
  x = BigMath.log(x, @@prec)
  y = BigDecimal(exponent.to_s)
  R.new(x) * R.new(@@lge) + R.new(y)
end
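The reduction at work is log10(x) = log10(m) + e for x = m * 10**e with 0.1 <= m < 1, so the logarithm series only ever sees a small argument. A Float sketch of the identity (log10_reduced is an invented name; it uses Math.log10 merely to find the decimal exponent, which the class reads off the BigDecimal representation directly):

```ruby
def log10_reduced(x)
  e = Math.log10(x).floor + 1       # decimal exponent: x = m * 10**e
  m = x / 10.0**e                   # significand in [0.1, 1)
  Math.log(m) / Math.log(10.0) + e  # natural log of a small argument, rescaled
end
```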
Returns ‘true’ if self is ‘not a number’ (NaN).
# File rnum.rb, line 769
def nan?; @g.nan?; end
Prints the value together with a label.
# File rnum.rb, line 913
def prn(name)
  puts "#{name} = " + to_s
end
The pseudo inverse is always defined: the pseudo inverse of 0 is 0.
# File rnum.rb, line 845
def pseudo_inv
  return R.c0 if zero?
  inv
end
Supports the unified treatment of real and complex numbers.
# File rnum.rb, line 780
def real?; true; end
If the method gets no argument we return the ‘nearest integer’: For the return value res we have
res.int? == true
and
(self - res).abs <= 0.5
For an integer argument n we return a real number whose significand has at most n digits.
# File rnum.rb, line 858
def round(*arg)
  n = arg.size
  case n
  when 0
    (self + 0.5).floor.to_i # here we ask for an integer output
                            # notice that R#round maps to R
  when 1
    m = arg[0].to_i
    x = frexp
    y = x[0].ldexp(m)
    (y + 0.5).floor.ldexp(x[1] - m)
  else
    fail "needs 0 or 1 arguments"
  end
end
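The one-argument branch can be mimicked for Floats: shift the significand by m decimal places, round to an integer, and shift back. A sketch under that assumption (round_sig is an invented name):

```ruby
# Round x to m significant decimal digits via a radix-10 frexp/ldexp pair.
def round_sig(x, m)
  return 0.0 if x.zero?
  e = Math.log10(x.abs).floor + 1 # x = f * 10**e with 0.1 <= |f| < 1
  f = x / 10.0**e
  (f * 10**m).round * 10.0**(e - m)
end
```

For example, round_sig(1234.5, 2) keeps two significant digits, giving 1200.0.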
Sine.
This reduces the computation of the sine of an arbitrary angle to the computation of the sine or cosine of an angle less than pi/4, and thus less than 1. This speeds up the convergence of the sine and cosine power series.
# File rnum.rb, line 1024
def sin
  return R.nan if @g.infinite? || @g.nan?
  sign = 1
  if @g >= BigDecimal("0")
    x = @g
  else
    x = -@g
    sign = -1
  end
  twoPi = @@pi * BigDecimal("2")
  piHalf = @@pi * BigDecimal("0.5")
  piQuart = @@pi * BigDecimal("0.25")
  res = x.divmod(twoPi)
  phi = res[1]
  res2 = phi.divmod(piHalf)
  qn = res2[0]    # number of quadrant 0,1,2,3
  alpha = res2[1] # angle in quadrant
  if qn == 0
    # first quadrant
    if alpha <= piQuart
      r = R.new(BigMath.sin(alpha, @@prec))
    else
      r = R.new(BigMath.cos(piHalf - alpha, @@prec))
    end
  elsif qn == 1
    # second quadrant
    if alpha <= piQuart
      r = R.new(BigMath.cos(alpha, @@prec))
    else
      r = R.new(BigMath.sin(piHalf - alpha, @@prec))
    end
  elsif qn == 2
    # third quadrant
    if alpha <= piQuart
      r = -R.new(BigMath.sin(alpha, @@prec))
    else
      r = -R.new(BigMath.cos(piHalf - alpha, @@prec))
    end
  elsif qn == 3
    # fourth quadrant
    if alpha <= piQuart
      r = -R.new(BigMath.cos(alpha, @@prec))
    else
      r = -R.new(BigMath.sin(piHalf - alpha, @@prec))
    end
  else
    puts "error in R.sin, not assumed to happen"
  end
  sign == -1 ? -r : r
end
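The same quadrant reduction can be traced in Float arithmetic (sin_reduced is a name made up for this sketch):

```ruby
# Reduce the angle mod 2*pi, pick the quadrant, and evaluate sin or cos
# of an argument that is at most pi/4, as in R#sin above.
def sin_reduced(x)
  sign = x < 0 ? -1.0 : 1.0
  half = Math::PI / 2
  quart = Math::PI / 4
  phi = x.abs % (2 * Math::PI)
  qn, alpha = phi.divmod(half) # quadrant number and in-quadrant angle
  r = case qn.to_i
      when 0 then alpha <= quart ?  Math.sin(alpha) :  Math.cos(half - alpha)
      when 1 then alpha <= quart ?  Math.cos(alpha) :  Math.sin(half - alpha)
      when 2 then alpha <= quart ? -Math.sin(alpha) : -Math.cos(half - alpha)
      else        alpha <= quart ? -Math.cos(alpha) : -Math.sin(half - alpha)
      end
  sign * r
end
```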
Returns the square root of self.
# File rnum.rb, line 875
def sqrt
  return R.nan if nan?
  R.new(@g.sqrt(@@prec))
end
Hyperbolic tangent.
# File rnum.rb, line 1122
def tanh
  s = self.exp - (-self).exp
  c = self.exp + (-self).exp
  s/c
end
Returns the unit element which belongs to the same class as self.
# File rnum.rb, line 836
def to_1; R.c1; end
Auxiliary version of arcsin (not public).
Definition in terms of a power series. The convergence becomes very bad when self.abs becomes close to 1. The present function will actually be used for defining atan.
# File rnum.rb, line 934
def asin_aux
  return R.nan if @g.infinite? || @g.nan?
  one = BigDecimal("1")
  raise ArgumentError, "@g.abs must be <= 1.0" if @g.abs > one
  n = @@prec + BigDecimal.double_fig
  y = @g
  d = y
  t = @g
  n1 = one
  n2 = BigDecimal("2")
  n3 = BigDecimal("3")
  x2 = @g.mult(@g, n)
  while d.nonzero? && ((m = n - (y.exponent - d.exponent).abs) > 0)
    m = BigDecimal.double_fig if m < BigDecimal.double_fig
    t = t.mult(x2, n) * n1/n2
    d = t.div(n3, m)
    y += d
    n1 += 2
    n2 += 2
    n3 += 2
  end
  R.new(y)
end
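The series being summed is asin(x) = x + x³/6 + 3x⁵/40 + …, i.e. the terms x^(2n+1)·(2n−1)!!/((2n)!!·(2n+1)). A Float sketch using the same recurrence, with Float precision replacing the @@prec bookkeeping (asin_series is an invented name):

```ruby
def asin_series(x)
  x2 = x * x
  t = x # carries x**(2n+1) * (2n-1)!! / (2n)!!
  y = x
  n1, n2, n3 = 1.0, 2.0, 3.0
  loop do
    t *= x2 * n1 / n2
    d = t / n3
    break if d.abs < 1e-17
    y += d
    n1 += 2.0
    n2 += 2.0
    n3 += 2.0
  end
  y
end
```

As the documentation notes, convergence degrades badly as |x| approaches 1; the class therefore feeds this series only arguments well below 1.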
Auxiliary version of arctan (not public).
# File rnum.rb, line 959
def atan_aux
  a = @g.abs
  if a < BigDecimal("1.618")
    x = self / (R.c1 + self*self).sqrt # x.abs < 1/1.618 = 0.618
    y = x.asin_aux
  else
    rec = BigDecimal("1").div(@g, @@prec) # rec.abs < 1/1.618 = 0.618
    y = R.pi * R.i2 - R.new(BigMath.atan(rec, @@prec))
  end
end
Redefining coerce from Numeric. This allows writing 1 + R.new(137) instead of R.new(137) + 1 or R.new(137) + R.c1.
# File rnum.rb, line 746
def coerce(a)
  [R.new(a), self]
end
Returns the value obtained from evaluating the power series of erf at x = self. Here it is assumed that x < 9, which is sufficient since for larger values we have an asymptotic formula. If we had to use larger x‘s, the ‘magic number’ 33 in the code would have to be increased, which would slow down the computation.
# File rnum.rb, line 1208
def erf_ps
  return R.nan if @g.infinite? || @g.nan?
  x = self
  n = @@prec + BigDecimal.double_fig
  x2 = -x * x
  y = R.c1
  i = R.c0
  b = R.c1
  c = R.c1
  a = R.c0
  n_add = 33
  R.add_prec(n_add)
  while a.frexp[1] > y.frexp[1] - n
    i += R.c1
    b *= x2/i
    c += R.c2
    a = b/c
    y += a
  end
  R.add_prec(-n_add)
  res = y * R.c2 * x / R.pi.sqrt
end
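The underlying series is erf(x) = (2x/√π) · Σ_{n≥0} (−x²)ⁿ/(n!·(2n+1)). A Float sketch with the same update pattern (erf_series is a name made up here; Float precision replaces the @@prec bookkeeping):

```ruby
def erf_series(x)
  x2 = -x * x
  b = 1.0   # carries (-x*x)**n / n!
  c = 1.0   # carries 2n + 1
  sum = 1.0
  n = 0
  loop do
    n += 1
    b *= x2 / n
    c += 2.0
    a = b / c
    break if a.abs < 1e-18
    sum += a
  end
  2.0 * x * sum / Math.sqrt(Math::PI)
end
```

Since the series is odd in x, the sketch works for negative arguments as well.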
Returns the value of erfc for large arguments according to the asymptotic formula 7.1.23 on p. 298 of Abramowitz and Stegun. From the sum over m = 1, 2, … we take the terms up to m = 5.
# File rnum.rb, line 1186
def erfc_asy
  z = self * self
  z2 = z * R.c2
  yp = z2.clone
  s = R.c1 - R.c1/yp
  yp *= z2
  s += R.c3/yp
  yp *= z2
  s -= R.new("15")/yp
  yp *= z2
  s += R.new("105")/yp
  yp *= z2
  s -= R.new("945")/yp
  (-z).exp * s/(self * R.pi.sqrt) # formula 7.1.23 carries a factor 1/x;
                                  # the division by self was missing here
end
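In Float arithmetic the same truncated series reads as follows (erfc_asy_float is an invented name; the coefficients 1, 3, 15, 105, 945 are the double factorials (2m−1)!!, and for x around 5 the truncation error is already below 1e-5 relative):

```ruby
# erfc(x) ~ exp(-x*x) / (x * sqrt(pi)) *
#           (1 - 1/(2x^2) + 3/(2x^2)^2 - 15/(2x^2)^3 + 105/(2x^2)^4 - 945/(2x^2)^5)
def erfc_asy_float(x)
  z2 = 2.0 * x * x
  s = 1.0
  yp = z2
  [1.0, -3.0, 15.0, -105.0, 945.0].each do |c|
    s -= c / yp # alternating signs enter through the coefficient list
    yp *= z2
  end
  Math.exp(-x * x) * s / (x * Math.sqrt(Math::PI))
end
```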
Power of 10: exp10(x) = 10**x. Auxiliary function for the implementation of exp.
# File rnum.rb, line 1095
def exp10
  return R.nan if @g.infinite? || @g.nan?
  n = @g.floor
  f = @g - n
  x = BigMath.exp(f.mult(@@ln10, @@prec), @@prec)
  newExp = n.to_i # newExp has to be a Fixnum in order that
  # the next line defines a BigDecimal. This looks like a missed
  # opportunity: numbers that accept Bignums as exponents should
  # be not much more difficult to implement.
  p10 = BigDecimal("1.0E" + newExp.to_s) # a power of 10, generated on the
  # level of symbols, not by computation
  y = x.mult(p10, @@prec)
  R.new(y)
end
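The reduction is 10**x = 10**n · exp(f·ln 10) with n = floor(x) and f = x − n in [0, 1), so the exponential function only ever sees a small argument; a Float sketch (exp10_float is an invented name):

```ruby
def exp10_float(x)
  n = x.floor # integer part, handled exactly as a power of 10
  f = x - n   # fractional part in [0, 1), a small exp argument
  Math.exp(f * Math.log(10.0)) * 10.0**n
end
```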