New features in MathPartner 2021

We introduce new features of the MathPartner service that have recently become available to users. We highlight the functions for calculating both the arithmetic-geometric mean and the geometric-harmonic mean. They allow calculating the complete elliptic integrals of the first kind. These are useful for solving many physics problems; for example, one can calculate the period of a simple pendulum. Next, one can calculate the modified arithmetic-geometric mean proposed by Semjon Adlaj. Consequently, one can calculate the complete elliptic integrals of the second kind as well as the circumference of an ellipse. Furthermore, one can also calculate the Sylvester matrices of the first and the second kind. Thus, with a few lines of code, one can calculate the resultant of two polynomials as well as the discriminant of a binary form. Some new matrix functions have also been added. Today the list of matrix functions includes the transpose, adjugate, conjugate, inverse, generalized inverse, and pseudo inverse of a matrix, the matrix determinant, the kernel, the echelon form, the characteristic polynomial, the Bruhat decomposition, the triangular LSU decomposition (an exact block-recursive LU decomposition), the block-recursive QR decomposition, and the singular value decomposition. In addition, two block-recursive functions have been implemented for calculating the Cholesky decomposition of symmetric positive-definite matrices: one for sparse matrices with the standard multiplication algorithm and another for dense matrices with multiplication according to the Winograd-Strassen algorithm. Linear programming problems can be solved too. So, the MathPartner service has become more capable and convenient. It is freely available at http://mathpar.ukma.edu.ua/ as well as at http://mathpar.com.


INTRODUCTION
The MathPartner service is useful at school, at university, and at work [15,16]. It can help you solve problems in mathematical analysis, algebra, geometry, physics, and more. You can operate with functions and functional matrices and obtain exact numerical and analytical solutions, as well as solutions in which the numerical coefficients have a required accuracy. Today it is available at http://mathpar.ukma.edu.ua/ as well as at http://mathpar.com/.
We present some new features and improvements. In particular, you can calculate the arithmetic-geometric mean and its modification, and with them the complete elliptic integrals of the first and the second kind [1,2]. Thus, you can calculate the circumference of an ellipse as well as the period of a pendulum. Another proposed application is the computation of properties of packings [13,20,23]. One can also calculate the Sylvester matrix [3,4] as well as the resultant of two polynomials. Some new matrix functions have been implemented too [8,14,17,18].

SIX MEANS AND THE COMPLETE ELLIPTIC INTEGRALS
Given two non-negative numbers x and y, one can define their arithmetic, geometric, and harmonic means as (x + y)/2, √(xy), and 2xy/(x + y), respectively. Moreover, AGM(x, y) denotes the arithmetic-geometric mean of x and y. It was defined by Johann Carl Friedrich Gauss at the end of the 18th century. GHM(x, y) denotes the geometric-harmonic mean of x and y. At last, MAGM(x, y) denotes the modified arithmetic-geometric mean of x and y. It was defined by Semjon Adlaj [1,2]. Every mean is a symmetric homogeneous function of its two variables x and y. In contrast to the well-known means, AGM(x, y), GHM(x, y), and MAGM(x, y) are calculated iteratively.
The arithmetic-geometric mean AGM(x, y) is equal to the common limit of the sequences x_n and y_n, where x_0 = x, y_0 = y, x_{n+1} = (x_n + y_n)/2, and y_{n+1} = √(x_n y_n). In the same way, the geometric-harmonic mean GHM(x, y) is equal to the common limit of the sequences x_n and y_n, where x_0 = x, y_0 = y, x_{n+1} = √(x_n y_n), and y_{n+1} = 2 x_n y_n/(x_n + y_n). Note that AGM(x, y) · GHM(x, y) = xy.
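Both iterations are straightforward to implement for positive x and y. The following minimal Python sketch (ours, not the MathPartner implementation) runs both and checks the identity AGM(x, y) · GHM(x, y) = xy:

```python
import math

def agm(x, y, tol=1e-15):
    """Iterate arithmetic and geometric means until they agree."""
    while abs(x - y) > tol * max(x, y):
        x, y = (x + y) / 2, math.sqrt(x * y)
    return (x + y) / 2

def ghm(x, y, tol=1e-15):
    """Iterate geometric and harmonic means until they agree."""
    while abs(x - y) > tol * max(x, y):
        x, y = math.sqrt(x * y), 2 * x * y / (x + y)
    return (x + y) / 2

# The product of the two means recovers x*y:
print(agm(2.0, 8.0) * ghm(2.0, 8.0))  # approximately 16.0
```

Both iterations converge quadratically, so only a handful of steps are needed at double precision.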
These means are applicable, in particular, to calculating the complete elliptic integrals of the first and second kind. Let us use the parameter 0 ≤ k ≤ 1.
The complete elliptic integral of the first kind K(k) is defined as
K(k) = ∫_0^{π/2} dθ/√(1 − k² sin²θ).
It can be computed in terms of the arithmetic-geometric mean:
K(k) = π/(2 AGM(1, √(1 − k²))).
On the other hand, for k < 1, it can be computed in terms of the geometric-harmonic mean:
K(k) = (π/2) GHM(1, 1/√(1 − k²)).
The complete elliptic integral of the second kind E(k) is defined as
E(k) = ∫_0^{π/2} √(1 − k² sin²θ) dθ.
It can be computed in terms of the modified arithmetic-geometric mean:
E(k) = π MAGM(1, 1 − k²)/(2 AGM(1, √(1 − k²))).
The circumference of an ellipse is equal to
C = 2π MAGM(a², b²)/AGM(a, b),
where the semi-major and semi-minor axes are denoted a and b. Let a point mass be suspended from a pivot with a massless cord. The length of the pendulum is denoted by L. It swings under gravitational acceleration g = 9.80665 m/s². The maximum angle that the pendulum swings away from the vertical, called the amplitude, is denoted by θ_0. One can find the period T of the pendulum using the arithmetic-geometric mean:
T = 2π √(L/g)/AGM(1, cos(θ_0/2)).
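The AGM-based formulas are easy to check numerically. Below is a Python sketch; the function names (elliptic_k, pendulum_period, etc.) are ours, not MathPartner's. MAGM is computed with Adlaj's three-term iteration x_{n+1} = (x_n + y_n)/2, y_{n+1} = z_n + r, z_{n+1} = z_n − r with r = √((x_n − z_n)(y_n − z_n)) and z_0 = 0:

```python
import math

def agm(x, y, tol=1e-15):
    """Arithmetic-geometric mean of two positive numbers."""
    while abs(x - y) > tol * max(x, y):
        x, y = (x + y) / 2, math.sqrt(x * y)
    return (x + y) / 2

def magm(x, y, tol=1e-15):
    """Modified arithmetic-geometric mean (Adlaj's iteration)."""
    z = 0.0
    while abs(x - y) > tol * max(abs(x), abs(y)):
        r = math.sqrt((x - z) * (y - z))
        x, y, z = (x + y) / 2, z + r, z - r
    return (x + y) / 2

def elliptic_k(k):
    """Complete elliptic integral of the first kind K(k), 0 <= k < 1."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

def elliptic_e(k):
    """Complete elliptic integral of the second kind E(k), 0 <= k < 1."""
    return math.pi * magm(1.0, 1.0 - k * k) / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

def ellipse_circumference(a, b):
    """Circumference of an ellipse with semi-axes a and b."""
    return 2 * math.pi * magm(a * a, b * b) / agm(a, b)

def pendulum_period(length, theta0, g=9.80665):
    """Exact period of a simple pendulum with amplitude theta0 (radians)."""
    return 2 * math.pi * math.sqrt(length / g) / agm(1.0, math.cos(theta0 / 2))

print(ellipse_circumference(1.0, 1.0))  # 2*pi for a circle of radius 1
```

Quick sanity checks: for k = 0 both integrals reduce to π/2, for a = b the circumference reduces to 2πa, and for θ_0 = 0 the period reduces to the familiar 2π√(L/g).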

THE SYLVESTER MATRICES, THE RESULTANT, AND THE DISCRIMINANT
Let us consider two univariate polynomials f(x) and g(x), where deg(f) = n, deg(g) = m, and m ≤ n hold. James Joseph Sylvester introduced two matrices associated with f(x) and g(x).
Please refer to [3,4]. More precisely, there are two different Sylvester matrices associated with two univariate polynomials. Let us denote f(x) = f_n x^n + ⋯ + f_1 x + f_0 and g(x) = g_m x^m + ⋯ + g_1 x + g_0. The Sylvester matrix of the first kind was introduced in 1840 [21]. It is an (n+m) × (n+m) matrix, and its determinant is called the resultant of f and g. For example, if f = x³ + px + q and g = 3x² + p, then the Sylvester matrix of the first kind is equal to
1 0 p q 0
0 1 0 p q
3 0 p 0 0
0 3 0 p 0
0 0 3 0 p
and its determinant equals 4p³ + 27q², i.e., it is the opposite of the discriminant of f. The Sylvester matrix of the second kind was introduced in 1853 as an improvement of the Sturm theory [22]. It is a (2n) × (2n) matrix, where n ≥ m. The first row contains the coefficients f_n, ..., f_0 followed by n − 1 zeros; the second row contains n − m zeros, then the coefficients g_m, ..., g_0, followed by n − 1 zeros. The next pair of rows is the first pair, shifted one column to the right; the first elements in the two rows are zero. The remaining rows are obtained the same way. For example, if f = x³ + px + q and g = 3x² + p, then the Sylvester matrix of the second kind is equal to
1 0 p q 0 0
0 3 0 p 0 0
0 1 0 p q 0
0 0 3 0 p 0
0 0 1 0 p q
0 0 0 3 0 p
Of course, if the resultant vanishes, then the determinant of the Sylvester matrix of the second kind vanishes too. The Sylvester matrix of the first kind can be calculated in MathPartner by a ternary function called sylvester(•, •, 0), where the third argument is equal to zero. In the same way, the Sylvester matrix of the second kind can be calculated by sylvester(•, •, 1), where the third argument is not equal to zero. The first and the second arguments are univariate polynomials, for example, f(x) and g(x). The variable must be the last one in the list of variables. For example, if the polynomials over the ring of integers depend on parameters p and q, then the declaration in MathPartner can be SPACE = Z[p,q,x].
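For numeric coefficients, the first-kind construction and the resultant can be sketched in a few lines of Python (the helper names sylvester_matrix and det_bareiss are ours; coefficient lists are given with the highest degree first):

```python
def sylvester_matrix(f, g):
    """Sylvester matrix of the first kind: m shifted copies of f
    stacked over n shifted copies of g, coefficients highest degree first."""
    n, m = len(f) - 1, len(g) - 1
    size = n + m
    rows = [[0] * i + f + [0] * (size - n - 1 - i) for i in range(m)]
    rows += [[0] * i + g + [0] * (size - m - 1 - i) for i in range(n)]
    return rows

def det_bareiss(mat):
    """Fraction-free determinant (Bareiss algorithm) for integer matrices."""
    m = [row[:] for row in mat]
    n, sign, prev = len(m), 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:  # pivot search with a row swap
            for i in range(k + 1, n):
                if m[i][k] != 0:
                    m[k], m[i], sign = m[i], m[k], -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[-1][-1]

# Resultant of f = x^3 + p x + q and g = 3 x^2 + p at p = q = 1:
print(det_bareiss(sylvester_matrix([1, 0, 1, 1], [3, 0, 1])))  # 31
```

At p = q = 1 the determinant is 4·1³ + 27·1² = 31, matching the formula 4p³ + 27q² in the text.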
The resultant of two univariate polynomials can be calculated as resultant(f, g). The variable must be the last one in the list of variables. The discriminant can then be calculated immediately from the resultant of f and its derivative.

SYSTEMS OF ALGEBRAIC EQUATIONS
Let us show an application of the resultant of two univariate polynomials. For this purpose, we consider a system of two polynomial equations in two variables and eliminate a variable. Of course, variable elimination can also be done by computing a Gröbner basis, so there exists another way to solve a system of algebraic equations. Unfortunately, the Gröbner basis approach is sometimes very complicated; the approach based on the resultant is often more effective. Let us consider a system of two equations that define a circle and an ellipse. In this case, solutions to the system correspond to intersection points of the circle and the ellipse.
Next, let us show the corresponding program in MathPartner. The Gröbner basis of a polynomial ideal can be computed thanks to Bruno Buchberger; his algorithm is implemented as groebnerB(). The same basis can be calculated using a matrix algorithm that is similar to the F4 algorithm; it is implemented as groebner(). The ordering is reverse lexicographical. Note that function names should begin with the symbol \. Moreover, one can solve a system of inequalities in one variable. In one example the output is the interval (1, 4); in the next example the output is the empty set ∅.
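As an illustration of elimination by resultants, take a hypothetical pair of curves (the paper's own system is not reproduced here): the circle x² + y² = 4 and the ellipse x²/4 + y² = 1. The resultant in x, viewed as a polynomial in y, vanishes exactly at the y-coordinates of intersection points. A self-contained Python sketch (helper names ours):

```python
def sylvester_matrix(f, g):
    """Sylvester matrix of the first kind; coefficients highest degree first."""
    n, m = len(f) - 1, len(g) - 1
    size = n + m
    rows = [[0] * i + f + [0] * (size - n - 1 - i) for i in range(m)]
    rows += [[0] * i + g + [0] * (size - m - 1 - i) for i in range(n)]
    return rows

def det_bareiss(mat):
    """Fraction-free integer determinant (Bareiss algorithm)."""
    m = [row[:] for row in mat]
    n, sign, prev = len(m), 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:
            for i in range(k + 1, n):
                if m[i][k] != 0:
                    m[k], m[i], sign = m[i], m[k], -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[-1][-1]

def res_at(y):
    """Resultant in x of the circle and the ellipse, evaluated at integer y."""
    circle = [1, 0, y * y - 4]       # x^2 + (y^2 - 4) = 0
    ellipse = [1, 0, 4 * y * y - 4]  # x^2 + (4y^2 - 4) = 0
    return det_bareiss(sylvester_matrix(circle, ellipse))

print(res_at(0), res_at(1))  # 0 9 : the curves meet only where y = 0
```

Here the resultant equals 9y⁴, so y = 0 is the only candidate; substituting back gives the intersection points (±2, 0), where the circle and the ellipse touch.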

THE GREATEST COMMON DIVISOR OF TWO POLYNOMIALS
In this section we shall consider polynomials over either the field of rational numbers or the ring of integers. The problem of calculating the greatest common divisor of two polynomials is important for symbolic computations, in particular, over a finite extension of the field of rational numbers [5,7,10]. Unfortunately, the bit complexity of the Euclidean algorithm is exponential.

There exists a polynomial upper bound on the number of arithmetic operations, but the size of a product of integers at intermediate steps can be very large. For a discussion of the computational complexity of powers of integers, refer to [11].
A modified algorithm based on subresultant remainders was proposed by J. J. Sylvester [22] and later improved by Walter Habicht [9] and Alkiviadis Akritas [3].
The main result was obtained by Brown. He found a way to compute the subresultant polynomial remainder sequence (PRS) without using matrix reduction. He proposed to modify the Euclidean algorithm, reducing all coefficients by common factors so that they coincide with the subresultant PRS [6]. This algorithm is applied in MathPartner to compute the GCD of two polynomials. This approach was further developed in [4].
To calculate the greatest common divisor one can run GCD(f, g); for example, SPACE = Z[x]; \GCD(9*x, 6*x+6); The output is equal to 3. To calculate the Bézout coefficients one can run extendedGCD(f, g).
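Brown's subresultant PRS algorithm is more refined than what fits here; the sketch below (ours) implements the simpler primitive-PRS variant of the same idea over Z: pseudo-division keeps everything in integers, and the content (the gcd of the coefficients) is removed at each step to control coefficient growth. Coefficient lists are highest degree first:

```python
from math import gcd
from functools import reduce

def strip(p):
    """Drop leading zero coefficients."""
    while p and p[0] == 0:
        p = p[1:]
    return p

def primitive(p):
    """Divide out the content of a nonzero integer polynomial."""
    c = reduce(gcd, (abs(a) for a in p))
    return [a // c for a in p]

def prem(f, g):
    """Pseudo-remainder of f by g over Z (no fractions ever appear)."""
    f = f[:]
    while len(f) >= len(g):
        lead = f[0]
        f = [g[0] * a for a in f]          # scale so the leading terms cancel
        for i in range(len(g)):
            f[i] -= lead * g[i]
        f = strip(f[1:])
        if not f:
            break
    return f

def poly_gcd(f, g):
    """GCD of two integer polynomials, primitive-PRS style."""
    cont = gcd(reduce(gcd, (abs(a) for a in f)),
               reduce(gcd, (abs(a) for a in g)))
    f, g = primitive(strip(f)), primitive(strip(g))
    while g:
        f, g = g, prem(f, g)
        if g:
            g = primitive(g)
    if f[0] < 0:                            # normalize the sign
        f = [-a for a in f]
    return [cont * a for a in f]

print(poly_gcd([9, 0], [6, 6]))  # [3], i.e. GCD(9x, 6x+6) = 3
```

The example reproduces the MathPartner output above: the primitive parts x and x + 1 are coprime, so the answer is the content gcd(9, 6) = 3.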
The least common multiple of the same pair of polynomials can be calculated too; the output is equal to 18x² + 18x.

MATRIX FUNCTIONS
Today the list of matrix functions includes the transpose, adjugate, conjugate, inverse, generalized inverse, and pseudo inverse of a matrix, the matrix determinant, the kernel, the matrix echelon form, the characteristic polynomial, the Bruhat decomposition, the triangular LSU decomposition, which is an exact block-recursive LU decomposition, the QR block recursive decomposition, and the singular value decomposition. In addition, two block-recursive functions have been implemented for calculating the Cholesky decomposition of symmetric positive-definite matrices: one function for sparse matrices with the standard multiplication algorithm and another function for dense matrices with multiplication according to the Winograd-Strassen algorithm. Linear programming problems can be solved too.
For a given matrix A, the pseudo inverse of A is a matrix A^- satisfying both equalities A A^- A = A and A^- A A^- = A^-. Furthermore, the Moore-Penrose generalized inverse A^+ satisfies four equalities: A A^+ A = A, A^+ A A^+ = A^+, (A^+ A)^T = A^+ A, and (A A^+)^T = A A^+. If A is a square non-degenerate matrix, then the three inverses of A coincide, i.e., A^{-1} = A^- = A^+. If an n × m matrix A can be decomposed as A = BC, where B is an n × k matrix, C is a k × m matrix, and rank(A) = rank(B) = rank(C) = k, then A^+ = C^T (C C^T)^{-1} (B^T B)^{-1} B^T. This idea was expressed by Vera Nikolaevna Kublanovskaya [12]. About big matrices, refer to [19]. To calculate the characteristic polynomial of a matrix A, you should work over the ring of polynomials in some new variable and run charPolynom(A). For example, let us run the commands SPACE = Z[x]; M = [[1, 2], [3, 5]]; f = \charPolynom(M); The output is equal to f = x² − 6x − 1.
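MathPartner's internal method for charPolynom is not described here; as an illustration, the same characteristic polynomial can be reproduced with the classical Faddeev-LeVerrier recurrence, sketched in Python with exact rational arithmetic (the function name char_poly is ours):

```python
from fractions import Fraction

def char_poly(a):
    """Monic characteristic polynomial coefficients, highest degree first,
    via the Faddeev-LeVerrier recurrence; exact over the rationals."""
    n = len(a)
    a = [[Fraction(x) for x in row] for row in a]
    ident = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    m = ident                                   # M_0 = I
    for k in range(1, n + 1):
        am = [[sum(a[i][t] * m[t][j] for t in range(n)) for j in range(n)]
              for i in range(n)]                # A * M_{k-1}
        c = -sum(am[i][i] for i in range(n)) / k  # c_k = -trace/k
        coeffs.append(c)
        m = [[am[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]                 # M_k = A*M_{k-1} + c_k*I
    return [int(c) for c in coeffs]

print(char_poly([[1, 2], [3, 5]]))  # [1, -6, -1], i.e. x^2 - 6x - 1
```

For the example matrix, the trace is 6 and the determinant is −1, so the result x² − 6x − 1 agrees with the MathPartner output above.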
Let us take a closer look at some types of decomposition.

The Bruhat decomposition
To calculate the Bruhat decomposition of a matrix A one can run BruhatDecomposition(A). The result consists of three matrices [V, D, U], where both V and U are upper-triangular matrices, and D is a permutation matrix multiplied by the inverse of a diagonal matrix [14]. If all entries of the matrix A are elements of a commutative domain R, then all entries of the matrices V, D^{-1}, and U belong to the same domain R. Let us consider a 2 × 2 matrix over Z. The output consists of three matrices; an entry of the middle matrix D is not an integer, but the inverse matrix D^{-1} has integer entries.

The LSU decomposition
The LSU decomposition of a matrix A can be calculated by means of the command LSU(A). The result consists of three matrices [L, S, U], where L is a lower-triangular matrix, U is an upper-triangular matrix, and S is a permutation matrix multiplied by the inverse of a diagonal matrix. If all entries of the matrix A belong to a commutative domain R, then all entries of the matrices L, S^{-1}, and U belong to the same domain R; refer to [18]. Let us consider an example, where M is a 2 × 2 matrix.

M = [[1, 2], [3, 1]]
Let us run the command \LSU(M). The output consists of three matrices. Entries of the middle matrix S are rational, while all entries of the matrices L, S^{-1}, and U belong to Z.

The QR block recursive decomposition
Let us consider a 2^k × 2^k matrix A over the field of reals. The QR decomposition of A can be calculated by means of the command QR(A). Note that if the order is not equal to 2^k for any integer k, then the algorithm does not work, because it is based on block recursion [17]. Let us consider an example, where M is a 2 × 2 matrix.
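For comparison, a QR decomposition of a small matrix can be obtained with the textbook Gram-Schmidt process; the sketch below (ours, on a hypothetical 2 × 2 example) is not the block-recursive algorithm of [17]:

```python
import math

def qr_gram_schmidt(a):
    """Classical Gram-Schmidt QR for a square matrix with linearly
    independent columns; returns (Q, R) with A = Q R, R upper triangular."""
    n = len(a)
    cols = [[a[i][j] for i in range(n)] for j in range(n)]  # column views
    q_cols = []
    r = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i, q in enumerate(q_cols):
            r[i][j] = sum(q[t] * cols[j][t] for t in range(n))
            v = [v[t] - r[i][j] * q[t] for t in range(n)]   # remove projection
        r[j][j] = math.sqrt(sum(x * x for x in v))
        q_cols.append([x / r[j][j] for x in v])
    q = [[q_cols[j][i] for j in range(n)] for i in range(n)]
    return q, r

q, r = qr_gram_schmidt([[1.0, 2.0], [3.0, 1.0]])
print(r[0][0])  # sqrt(10), the norm of the first column
```

The columns of Q are orthonormal and Q·R reproduces the input matrix up to rounding.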

The singular value decomposition
To calculate the singular value decomposition (SVD) of a matrix A, one can run SVD(A). As a result, three matrices [U, D, V] will be calculated. The matrices U and V are unitary, the matrix D is diagonal, and A = U D V holds. Let us consider an example, where M is a 2 × 2 matrix.

The Cholesky decomposition
In general, the Cholesky decomposition is a decomposition of a Hermitian positive-definite matrix into the product of a lower-triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions. It was discovered by André-Louis Cholesky for real symmetric matrices [8]. Here we suppose that all matrices are real. So, every real symmetric positive-definite matrix is equal to the product L L^T, where L is a lower-triangular matrix.
The Cholesky decomposition can be calculated for a symmetric and positive-definite matrix A by means of the command cholesky(A). The result consists of two lower-triangular matrices L and S such that A = L L^T and S L = I. Let us consider an example, where M is a 2 × 2 matrix.
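A sketch of the classical (non-blocked) Cholesky algorithm in Python, on an assumed example A = [[4, 2], [2, 3]]; MathPartner's implementation is block-recursive, which this sketch does not attempt to reproduce:

```python
import math

def cholesky(a):
    """Cholesky factor L (lower triangular) with A = L L^T,
    for a real symmetric positive-definite matrix A."""
    n = len(a)
    lower = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(lower[i][k] * lower[j][k] for k in range(j))
            if i == j:
                lower[i][j] = math.sqrt(a[i][i] - s)      # diagonal entry
            else:
                lower[i][j] = (a[i][j] - s) / lower[j][j]  # below the diagonal
    return lower

lfac = cholesky([[4.0, 2.0], [2.0, 3.0]])
print(lfac)  # [[2.0, 0.0], [1.0, 1.414...]]
```

Multiplying the factor by its transpose recovers the input matrix, which is the standard correctness check.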

MODULAR ARITHMETIC
The current version of the MathPartner service supports operations over a finite field Z/pZ, where p is a prime number. One should use either SPACE = Zp[] or SPACE = Zp32[]. The prime number p is equal to the constant MOD or MOD32, respectively. In the second case, p satisfies the inequality p < 2^31, and the default value is 268435399. For example, working over the field Z/5Z one can run SPACE = Zp32[x]; MOD32 = 5; \GCD(x+2, x-3); The output is equal to x − 3 because −3 ≡ 2 (mod 5). On the other hand, the same example over Z/7Z leads to another answer: SPACE = Zp32[x]; MOD32 = 7; \GCD(x+2, x-3); The output is equal to 1.
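Both outputs can be reproduced with the Euclidean algorithm over Z/pZ; here is a Python sketch (helper names ours; coefficient lists highest degree first, p assumed prime so inverses come from Fermat's little theorem):

```python
def poly_mod(f, g, p):
    """Remainder of f divided by g, coefficients in Z/pZ."""
    f = [c % p for c in f]
    g = [c % p for c in g]
    inv = pow(g[0], p - 2, p)          # inverse of the leading coefficient
    while len(f) >= len(g):
        c = f[0] * inv % p
        for i in range(len(g)):
            f[i] = (f[i] - c * g[i]) % p
        f = f[1:]                       # leading term is now zero
        while f and f[0] == 0:
            f = f[1:]
    return f

def poly_gcd_mod(f, g, p):
    """Monic GCD of two polynomials over the field Z/pZ (p prime)."""
    f = [c % p for c in f]
    g = [c % p for c in g]
    while any(g):
        f, g = g, poly_mod(f, g, p)
    inv = pow(f[0], p - 2, p)
    return [c * inv % p for c in f]

print(poly_gcd_mod([1, 2], [1, -3], 5))  # [1, 2] : x + 2, i.e. x - 3 mod 5
print(poly_gcd_mod([1, 2], [1, -3], 7))  # [1]    : the polynomials are coprime
```

Modulo 5 the two inputs are the same polynomial, so the GCD is x + 2 (equivalently x − 3); modulo 7 they are distinct linear polynomials, so the GCD is 1.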
All functions using only rational operations on input data can be calculated over finite fields.
In particular, for two polynomials over Z/pZ one can calculate the greatest common divisor GCD() as well as Sylvester matrix sylvester().One can calculate the Gröbner basis of an ideal in a polynomial ring using either groebner() or groebnerB().One can also calculate the determinant det(), echelon form toEchelonForm(), characteristic polynomial charPolynom(), Bruhat decomposition BruhatDecomposition(), LDU decomposition LDU(), and LDUWMdet() of a matrix.

CONCLUSION
Now the MathPartner service has become even better and allows us to solve new problems in geometry and physics. In particular, new functions allow one to calculate the period of a simple pendulum as well as the circumference of an ellipse in terms of the arithmetic-geometric and the modified arithmetic-geometric means. The resultant of two univariate polynomials is a basic tool of computer algebra because it allows solving systems of polynomial equations. Matrix functions are also widely used to solve applied problems.
The reader is recommended to calculate examples of the considered quantities using the MathPartner service.These exercises will help you remember and understand the computer algebra methods better.On the other hand, new algorithms can be implemented by the user through the branch and loop operators.Moreover, the MathPartner service opens up the possibility of distance learning.
SPACE = Z[a, b, c, x]; f = a*x^2+b*x+c; \discriminant(f); The output is equal to b² − 4ac. There exists another way to calculate the discriminant of the univariate polynomial x² + bx + c, where b and c are parameters: SPACE = Z[b, c, x]; f = x^2+b*x+c; -\det(\sylvester(f, \D(f, x), 0)); The output is equal to b² − 4c. Of course, \D(f, x) calculates the first derivative of f.

For a given matrix A, one can calculate:
• The transpose: transpose(A) or A^T;
• The conjugate: conjugate(A) or A^*;
• The matrix echelon form: toEchelonForm(A);
• The kernel: kernel(A);
• The rank: rank(A);
• The determinant: det(A);
• The inverse: inverse(A) or A^{-1};
• The adjugate: adjoint(A) or A^⋆;
• The Moore-Penrose generalized inverse: genInverse(A) or A^+;
• The pseudo inverse: pseudoInverse(A);
• The closure: closure(A) or A^×.
The closure of a matrix A is equal to the sum of matrices I + A + A² + A³ + ⋯. For the classical algebras it is equivalent to (I − A)^{-1}.
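The defining series of the closure can be checked on a small example. The sketch below (ours) sums powers of a nilpotent matrix with exact fractions, so the series terminates and equals (I - A)^{-1}:

```python
from fractions import Fraction

def mat_mul(a, b):
    """Product of two square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def closure(a, terms=50):
    """Partial sum I + A + A^2 + ... ; in the classical case this
    converges to (I - A)^{-1} when the powers of A vanish or shrink."""
    n = len(a)
    ident = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    total, power = ident, ident
    for _ in range(terms):
        power = mat_mul(power, a)
        total = [[total[i][j] + power[i][j] for j in range(n)]
                 for i in range(n)]
    return total

a = [[Fraction(0), Fraction(1, 2)], [Fraction(0), Fraction(0)]]
print(closure(a))  # I + A, since A^2 = 0; equals (I - A)^{-1}
```

For this nilpotent A, every power beyond the first is zero, so the closure is exactly I + A, and one can verify directly that (I − A)(I + A) = I.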

Both the first and the third matrices are triangular matrices over Z. The middle matrix S has a rational entry, but the inverse matrix is defined over Z. To calculate the LSU decomposition of A together with the decomposition of the pseudo inverse A^- = W S M, one can run the command LSUWMdet(A). The result consists of five matrices and the determinant of the largest non-degenerate corner block, [L, S, U, W, M, det], where L and U are lower- and upper-triangular matrices, S is a truncated weighted permutation matrix, and S M and W S are lower- and upper-triangular matrices. Moreover, A = LSU and A^- = W S M. If the entries of the matrix A belong to a commutative domain, then all matrices, except for S, also belong to this domain. Let us run the commands SPACE = Z[]; M = [[1, 2], [3, 1]]; \LSUWMdet(M); Of course, three of these matrices coincide with the three matrices in the previous example. Next, let us consider a matrix over the ring Z[x, y] and run the commands SPACE = Z[x, y]; M = [[y, x], [x, y]]; \LSU(M); The output consists of three matrices. To compute the singular value decomposition, let us run the commands SPACE = R64[]; FLOATPOS = 3; M = [[2, 3], [1, 0]]; \SVD(M); Finally, for large dense matrices, whose size is greater than or equal to 128 × 128, one can use the fast algorithm cholesky(A, 1), which multiplies blocks by the Winograd-Strassen algorithm.