Let $D$ be a positive integer and let $p$ be an odd prime with $p\nmid D$; denote by $N(D,p)$ the number of positive integer solutions $(x,n)$ of the equation $x^2-D=p^n$. In this paper, using a result on the rational approximation of quadratic irrationals due to M. Bauer and M. A. Bennett [Applications of the hypergeometric method to the generalized Ramanujan-Nagell equation, Ramanujan J. 6 (2002), 209–270], we give a better upper bound for $N(D,p)$. We also prove that if the equation $U^2-DV^2=-1$ has integer solutions $(U,V)$, the least solution $(u_1,v_1)$ of the equation $u^2-pv^2=1$ satisfies $p\nmid v_1$, and $D>C(p)$, where $C(p)$ is an effectively computable constant depending only on $p$, then the equation $x^2-D=p^n$ has at most two positive integer solutions $(x,n)$. In particular, $C(3)=10^7$.
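For illustration (a routine verification, not part of the original abstract): for $p=3$ the hypothesis on the Pell equation can be checked directly, since
\[
u^2-3v^2=1 \quad\text{has least solution}\quad (u_1,v_1)=(2,1), \qquad 3\nmid v_1,
\]
so for every $D>C(3)=10^7$ with $3\nmid D$ for which $U^2-DV^2=-1$ is solvable, the equation $x^2-D=3^n$ has at most two positive integer solutions $(x,n)$.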
The objective of this paper is to obtain a sharp upper bound on the second Hankel determinant $|a_2a_4-a_3^2|$ for functions $f$ in the class $RT(\alpha)$ ($0\le\alpha<1$) of functions whose derivative has a positive real part of order $\alpha$. Further, an upper bound on the nonlinear functional $|t_2t_4-t_3^2|$ (also called the second Hankel functional) is determined for the inverse function of $f$ when $f$ belongs to the same class, using Toeplitz determinants.
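For context (standard notation that the abstract takes for granted): for $f(z)=z+\sum_{n=2}^{\infty}a_nz^n$ analytic in the unit disc, the second Hankel determinant is
\[
H_2(2)=\begin{vmatrix} a_2 & a_3 \\ a_3 & a_4 \end{vmatrix}=a_2a_4-a_3^2,
\]
and $t_2,t_3,t_4$ are the corresponding coefficients of the inverse function $f^{-1}(w)=w+\sum_{n=2}^{\infty}t_nw^n$.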
Max-min algebra and its various aspects have been intensively studied by many authors \cite{Baccelli,Green79} because of its applicability to various areas, such as fuzzy systems, knowledge management and others. The binary operations of addition and multiplication of real numbers used in classical linear algebra are replaced in max-min algebra by the operations of maximum and minimum. We consider two-sided systems of max-min linear equations $A\otimes x = B\otimes x$ with given coefficient matrices $A$ and $B$. We present a polynomial method for finding maximal solutions of such systems, also in the case when only solutions with prescribed lower and upper bounds are sought.
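As a minimal illustration of the algebra involved (a sketch under our own conventions; the function names and the tiny instance are invented here, and this is not the paper's polynomial method): the max-min product is $(A\otimes x)_i=\max_j\min(a_{ij},x_j)$, and whether a candidate vector solves the two-sided system can be checked directly:
\begin{verbatim}
# Max-min product and a direct solution check for A (x) x = B (x) x.
# maxmin_prod / is_solution are illustrative names, not from the paper.
def maxmin_prod(A, x):
    # (A (x) x)_i = max_j min(A[i][j], x[j])
    return [max(min(a, xj) for a, xj in zip(row, x)) for row in A]

def is_solution(A, B, x):
    return maxmin_prod(A, x) == maxmin_prod(B, x)

A = [[3, 7], [5, 2]]
B = [[4, 7], [5, 1]]
x = [10, 10]   # the greatest vector of the interval [0, 10]^2
print(maxmin_prod(A, x), maxmin_prod(B, x), is_solution(A, B, x))
# -> [7, 5] [7, 5] True
\end{verbatim}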
Determining the number of nodes in the hidden layers of a neural network is a fundamental and challenging problem. Various efforts have been made to study the relations between the approximation ability and the number of hidden nodes of some specific neural networks, such as single-hidden-layer and two-hidden-layer feedforward neural networks with specific or conditional activation functions. However, for arbitrary feedforward neural networks there are few theoretical results on such issues. This paper gives an upper bound on the number of nodes in each hidden layer for the most general feedforward neural networks, namely multilayer perceptrons (MLPs), from an algebraic point of view. First, we put forward the method of expansion linear spaces to investigate the algebraic structure and properties of the outputs of MLPs. Then it is proved that, given $k$ distinct training samples, for any MLP with $k$ nodes in each hidden layer, if a certain optimization problem has solutions, the approximation error remains invariant when nodes are added to the hidden layers. Furthermore, it is shown that for any MLP whose activation function for the output layer is bounded on $\mathbb{R}$, at most $k$ hidden nodes in each hidden layer are needed to learn $k$ training samples.
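A quick numerical illustration of the final claim (a sketch under our own assumptions, not the paper's construction: hidden weights are fixed at random and only the output layer is solved, in the style of random-feature networks):
\begin{verbatim}
# k random samples are typically fit exactly by one hidden layer with k
# nodes: fix random hidden weights, then solve the output layer as a
# k x k linear system. Illustration only; not the paper's proof method.
import numpy as np

rng = np.random.default_rng(0)
k = 10                               # number of training samples
X = rng.normal(size=(k, 3))          # k samples, 3 input features
y = rng.normal(size=k)               # k target values

W = rng.normal(size=(3, k))          # random input-to-hidden weights
b = rng.normal(size=k)               # random hidden biases
H = np.tanh(X @ W + b)               # k x k hidden-layer output matrix

beta = np.linalg.solve(H, y)         # output weights with H @ beta = y
print(np.max(np.abs(H @ beta - y)))  # ~0: all k samples interpolated
\end{verbatim}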