Analytic one-dimensional maps and two-dimensional ordinary differential equations can robustly simulate Turing machines

In this paper, we analyze the problem of finding the minimum dimension $n$ such that a closed-form analytic map/ordinary differential equation can simulate a Turing machine over $\mathbb{R}^{n}$ in a way that is robust to perturbations. We show that one-dimensional closed-form analytic maps are sufficient to robustly simulate Turing machines, but that the minimum dimension for closed-form analytic ordinary differential equations to robustly simulate Turing machines is two, under some reasonable assumptions. We also show that any Turing machine can be simulated by a two-dimensional $C^{\infty}$ ordinary differential equation on the compact sphere $\mathbb{S}^{2}$.


Introduction
As is well known (see e.g. [Sip12]), a Turing machine is a mathematical model of computation that formalizes the notion of algorithm/computation over discrete structures such as the set of positive integers N or the integers Z. In practice, Turing machines are computationally equivalent to a standard digital computer. Furthermore, following the work of Turing and others, it is also known that some problems, such as the Halting Problem or Hilbert's 10th problem, are noncomputable, i.e. there is no Turing machine (no algorithm) that solves those problems [Tur37], [Mat93]. This remarkable result shows that some problems are algorithmically unsolvable. Examples of other nontrivial behavior regarding Turing machines include, for instance, the existence of universal Turing machines, which can simulate the computation of any other Turing machine (see [Tur37] or e.g. [Sip12]), or the existence of self-reproducing Turing machines, which output their own description [Mos10].
Although the results above are classically considered over discrete structures (e.g. N), they can often be studied over continuous spaces such as R^n. The idea is to simulate the computation of a Turing machine with a continuous map/flow. If a continuous system is able to simulate any Turing machine (or, equivalently, a universal Turing machine), then this system is usually referred to as Turing universal. A consequence of the noncomputability of the Halting Problem is that the long-term behavior of Turing universal systems is highly complex (in a manner distinct from behaviors considered e.g. in chaos theory) and has some characteristics which are not computable (see e.g. [Moo91]). However, in applications it is often desirable that Turing universal systems be relatively simple and mathematically well behaved, so that they can be used in meaningful situations. For this reason, one might be interested in Turing universal systems having properties such as low dimensionality, reasonable smoothness, or robustness to perturbations.
In this paper, we investigate the problem of determining the lowest dimension n such that analytic maps/ODEs defined on R^n can robustly simulate Turing machines.
It is well known that piecewise affine and other types of maps and ODEs can simulate Turing machines on R^n; see e.g. [Moo91], [KCG94], [Bra95], [Koi96], [Bou99], [KP05], [BC08], [BP21]. However, some authors have argued (see e.g. [MO98], [KM99], [AB01], [BGH13]) that, when focusing on more physically realistic systems simulating Turing machines, one might desire additional attributes such as robustness to noise (it is known that the addition of noise to some classes of systems reduces their computational power, see e.g. [MO98], [AB01]) or smoothness of the dynamics, since most classical physical systems are expressed with smooth (actually analytic) functions. In [KM99] the authors showed that closed-form analytic maps (i.e. those analytic functions which can be expressed in terms of elementary functions such as polynomials, trigonometric functions, exponential and logarithmic functions, their compositions, etc.) are capable of simulating Turing machines with exponential slowdown in dimension one, or in real time in dimension ≥ 2. In [GCB08] it was shown that, under a certain notion of robustness (see Theorems 1 and 3 below), the class of closed-form analytic maps on R^3, as well as the class of ODEs defined with analytic closed-form functions on R^6, can robustly simulate Turing machines.
In the present paper, we show that one-dimensional closed-form analytic maps can robustly simulate Turing machines on R (Theorem 9) in real time (i.e. without the exponential slowdown of [KM99]; the simulation in [KM99] is not robust to perturbations either) and that two-dimensional closed-form analytic ODEs can also robustly simulate Turing machines (Theorem 11), both in the sense of [GCB08]. We also show that, under certain reasonable assumptions, no one-dimensional autonomous analytic ODE can simulate (robustly or not) a universal Turing machine.
Similarly to what is done in [CMPS21], we show that there is a C^∞ ODE which can simulate Turing machines over the compact set S^n = {x ∈ R^{n+1} : ‖x‖ = 1} for n ≥ 2. The difference between our result and that of [CMPS21] is that, although the flow of [CMPS21] is mathematically simpler (it is polynomial), Turing universality is only achieved over S^n for n ≥ 17. The authors then use this polynomial flow to construct an (Euler) partial differential equation which is Turing universal. We also note that in [CMPSP21] the authors proved that there are Turing complete (stationary Euler) fluid flows on a Riemannian 3-dimensional sphere. The difference between the latter result and the one presented in this paper is that we use ODEs instead of partial differential equations, and our results are for S^2 instead of S^3.
The outline of the paper is as follows. In Section 2 we review the construction presented in [GCB08] to create analytic maps on R^3 which can robustly simulate Turing machines. In Section 3 we present some auxiliary functions. Building on these results, we show in Section 4 that one-dimensional analytic maps can robustly simulate Turing machines. By iterating these maps with ODEs, we are able to show in Section 5 that two-dimensional ODEs can robustly simulate Turing machines. In Section 6, we construct a C^∞ ODE that can simulate Turing machines over the compact set S^2. Finally, in Section 7 we show that, under reasonable hypotheses, no one-dimensional analytic ODE can simulate a universal Turing machine.

Simulating Turing machines in dimension three
In this section, we review several results from [GCB08] which are useful for proving our main results.
We first recall some basic notions from computability theory (see e.g. [Sip12]). Given a finite set Σ (the alphabet), a word over Σ is a finite sequence w = (w_1, ..., w_k) ∈ Σ^k for some k ∈ N_0 (k is the length of the word), where N_0 is the set of all non-negative integers. Note that there is a special sequence, represented by ε, which denotes the word of length 0. As usual, for notational simplicity, we will denote the word w = (w_1, ..., w_k) simply as w = w_1...w_k. The set of all words over Σ is denoted by Σ*. We also recall that a Turing machine is a discrete dynamical system defined by the iteration of a map, although it is usually viewed as a finite-state machine since this approach is often more convenient. More specifically, let Σ be an alphabet, take some symbol B ∉ Σ, usually known as the blank symbol, and let Q be a finite set known as the set of states, with some special elements q_0, q_h ∈ Q, called the initial state and the final state, respectively. Then a Turing machine M is given by a map F_M : Σ* × Σ* × Q → Σ* × Σ* × Q which works as follows when viewed as a machine. The machine has a bi-infinite tape, divided into cells, and a head which is associated to some state of Q. Given some (u, v, q) ∈ Σ* × Σ* × Q (the configuration of the Turing machine), where u = u_1 u_2 ... u_n and v = v_1 v_2 ... v_p, the tape contents of the Turing machine at this configuration are ...BB u_1 u_2 ... u_n v_1 v_2 ... v_p BB... (with blank symbols on all remaining cells), the head is placed over the cell containing v_1, and the associated state is q. In this case the Turing machine is also said to be reading the symbol v_1. Then, depending only on the value of the current state and of the symbol being read by the head, the machine simultaneously (i) updates its state, (ii) updates the symbol being read by the head, and (iii) either moves the head one cell to the right, one cell to the left, or keeps the head in the same position.
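As an illustration of the configuration-based view just described (not code from the paper), a Turing machine step can be sketched as a map on configurations; the concrete machine, its transition table, and all names below are hypothetical examples.

```python
# Sketch of a Turing machine viewed as a map on configurations (u, v, q):
# u is the tape content to the left of the head, v starts at the head,
# q is the current state.  The transition table `delta` maps
# (state, symbol read) to (new state, symbol written, move in {-1, 0, +1}).
B = "B"  # blank symbol

def step(u, v, q, delta):
    head = v[0] if v else B          # an empty right half means we read a blank
    q2, written, move = delta[(q, head)]
    v = (written,) + v[1:]           # update the symbol under the head
    if move == 1:                    # move head one cell to the right
        return u + (v[0],), v[1:], q2
    if move == -1:                   # move head one cell to the left
        if not u:
            u = (B,)                 # pull in a blank cell from the left
        return u[:-1], (u[-1],) + v, q2
    return u, v, q2                  # move == 0: head stays in place

# Example machine: erase 1s moving right until a blank is found, then halt.
delta = {("q0", "1"): ("q0", B, 1), ("q0", B): ("qh", B, 0)}
cfg = ((), ("1", "1"), "q0")
while cfg[2] != "qh":
    cfg = step(*cfg, delta)
```

Iterating `step` until the final state q_h is reached is exactly the discrete dynamical system view of the machine.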
A Turing machine M computes a function f : Σ* → Σ* as follows. Given a word w, it starts its computation in the initial configuration (w, ε, q_0), i.e. in the initial configuration the state is the initial state and the tape contains the input w only. Then M proceeds with the computation until it reaches the halting state q_h. In this case we say that the Turing machine has halted with configuration (u_h, v_h, q_h) ∈ Σ* × Σ* × Q, and its output will be u_h, i.e. u_h = f(w). If M does not halt with input w, then f(w) is undefined.
Given some Turing machine M as described above, let k = 1 + #Σ and take an injective map γ : Σ ∪ {B} → {0, 1, 2, ..., k − 1} with γ(B) = 0. Let (u, v, q) be the current configuration of M, and let us further assume that M has m states, represented by the numbers 1, ..., m, and that if M reaches a halting configuration, then it moves to the same configuration (i.e. F_M(u_h, v_h, q_h) = (u_h, v_h, q_h)). Take

y_1 = γ(v_1) + γ(v_2)k + ... + γ(v_p)k^{p−1},  y_2 = γ(u_n) + γ(u_{n−1})k + ... + γ(u_1)k^{n−1},   (2)

and suppose that q is the state associated to the current configuration. Then (y_1, y_2, q) ∈ N^3 encodes unambiguously the current configuration of M. Under these assumptions, the transition function of M can be encoded as a function f̃_M : N^3 → N^3. In [GCB08] it was shown that f̃_M can be extended to a function f_M : R^3 → R^3 which has the following properties: (i) it is capable of simulating M in the presence of perturbations; (ii) the function f_M is analytic, and each of its components can be expressed using only the following terms: variables, polynomial-time computable constants (see Remark 2 for a definition), +, −, ×, sin, cos, arctan. The precise statement of this result is given below, where ‖f‖ = sup_{x ∈ A} ‖f(x)‖ for a function f : A ⊆ R^l → R^j, ‖y‖ = max_{1 ≤ i ≤ j} |y_i| for y = (y_1, ..., y_j) ∈ R^j, and f^[k] denotes the kth iterate of the function f : A → A, defined by f^[0] = id and f^[k+1] = f ∘ f^[k]: Theorem 1 ([GCB08, p.
333]) Let ψ : N^3 → N^3 be the transition function of a Turing machine M under the encoding described above, and let 0 < δ < ε < 1/2. Then ψ admits a globally analytic closed-form extension f_M : R^3 → R^3 such that the expression of each component of f_M can be written using only the following terms: variables, polynomial-time computable constants, +, −, ×, sin, cos, arctan. Moreover, f_M is robust to perturbations in the following sense: for all f̄ such that ‖f̄ − f_M‖ ≤ δ, for all j ∈ N, and for all x̄_0 ∈ R^3 satisfying ‖x̄_0 − x_0‖ ≤ ε, where x_0 ∈ N^3 represents a configuration according to the encoding described above, one has ‖f̄^[j](x̄_0) − ψ^[j](x_0)‖ ≤ ε. We note that the proof of this theorem is constructive and that f_M can be obtained explicitly. A continuous-time version of Theorem 1 was also proved in [GCB08].
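To make the coding concrete, the following sketch computes (y_1, y_2) from the tape halves of a configuration. The base-k digit order (head symbol of the right half as least significant digit; left half read from the head outwards) is an assumption about the intended coding (2), following the usual convention of [GCB08].

```python
# Sketch of the coding (2): gamma maps tape symbols injectively into
# {0, ..., k-1} with gamma(B) = 0.  Digit order is an assumption here.
def encode(u, v, gamma, k):
    y1 = sum(gamma[s] * k ** i for i, s in enumerate(v))            # v1, v2, ...
    y2 = sum(gamma[s] * k ** i for i, s in enumerate(reversed(u)))  # un, u(n-1), ...
    return y1, y2

gamma = {"B": 0, "1": 1}   # k = 1 + #Sigma = 2 for the input alphabet {1}
y1, y2 = encode(("1", "1"), ("1", "B", "1"), gamma, 2)
# y1 = 1 + 0*2 + 1*4 = 5 and y2 = 1 + 1*2 = 3
```

Together with the state q, the triple (y1, y2, q) is the integer point of N^3 on which the extension f_M of Theorem 1 acts.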
Remark 2 We note that we can define computable real constants and computable real functions using the approach of computable analysis. These notions can be presented in several equivalent but different ways. For example, according to the approach presented in [Ko91] (see also [BHW08]), a number c ∈ R is computable if there is a Turing machine M that, on input n ∈ N, outputs (in finite time) a rational q_n with the property that |q_n − c| ≤ 2^{−n}. If the Turing machine M runs in polynomial time (in n), then we say that c is computable in polynomial time. Similarly, a function f : R → R is computable if there is an oracle Turing machine M that computes f(x) in the sense that, given as input n ∈ N and any oracle ϕ : N → Q satisfying |ϕ(k) − x| ≤ 2^{−k} for all k ∈ N, the machine M outputs a rational r with |r − f(x)| ≤ 2^{−n}. (It is known that a computable analytic function and its derivative are both computable.) These notions can be generalized to R^n in a straightforward manner.
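As an illustration of the definition in Remark 2 (not code from the paper), the sketch below produces rational approximations q_n of e = Σ 1/i! with |q_n − e| ≤ 2^{−n}, which witnesses that e is computable in the above sense.

```python
# e = sum(1/i!) is computable: return a rational q_n with |q_n - e| <= 2^-n.
from fractions import Fraction

def approx_e(n):
    # Sum 1/i! while the tail bound 2/i! still exceeds 2^-n; on exit the
    # neglected tail sum_{j >= i} 1/j! is at most 2/i! <= 2^-n.
    q, fact, i = Fraction(0), 1, 0
    while Fraction(2, fact) > Fraction(1, 2 ** n):
        q += Fraction(1, fact)
        i += 1
        fact *= i
    return q
```

Since the number of terms needed grows only polynomially in n, this also illustrates polynomial-time computability of a constant.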

Some useful auxiliary functions
In this section we present several functions and results which are needed in subsequent sections. The function Ψ presented in the next lemma can be seen as a generalization of the error-correcting function l_2 from [GCB08, Lemma 9]. More specifically, Ψ can be viewed as a function that improves the accuracy of approximations within distance ≤ 1/5 of an integer (the function l_2 of [GCB08, Lemma 9] has a similar property, but only works for the integers 0 and 1), where the correction factor is bounded by e^{−y}, and y > 0 is the second argument of Ψ.
Proof. Let Ψ : R → R be defined by Ψ(x) = x − (1/(2π)) arcsin(sin(2πx)). We note that, since sin(2πx) has period 1, we have Ψ(x) = k for all x ∈ [k − 1/4, k + 1/4] and all k ∈ Z. However, although it is continuous, the function Ψ is not analytic, since it is well known that if an analytic function is constant on a non-empty interval, e.g. [3/4, 5/4], then it must be constant everywhere on the real line R, which is not the case for Ψ. The problem is that, although the composition of analytic functions yields again an analytic function, the derivative of arcsin y is not defined when y = −1 or y = 1, and thus Ψ is not analytic when x = k − 1/4 or x = k + 1/4 for some k ∈ Z. Note, however, that arcsin is analytic on (−1, 1). Hence, we can multiply sin(2πx) by a value 1 − e^{−y} (or 1 − e^{−y−2}, which will be more convenient later on), which is slightly less than 1, to ensure that the resulting function Ψ is analytic, since in this way we guarantee that −1 < sin(2πx)(1 − e^{−y}) < 1 for any x ∈ R and y ≥ 0. Next we notice that, by the mean value theorem, the effect of this damping factor on the arcsin term can be bounded. Let us now take g(x) = 7x − sin(2πx). We note that g(0) = 0 and that g′(x) = 7 − 2π cos(2πx) > 0. Hence we conclude that g is strictly increasing on [0, 1/5], which implies 7|x| ≥ |sin(2πx)| when x ∈ [−1/5, 1/5]. This yields the desired estimate for all x within distance 1/5 of an arbitrary k ∈ Z. We now present another error-correcting function σ : R → R, which was first presented in [GCB08, Proposition 5]. This function is a uniform contraction around integers. Unlike Ψ, one cannot prescribe the amount of error reduction around each integer with a single application of the map σ. On the other hand, its use is not restricted to a 1/5-neighborhood of integers, and it can be used on larger neighborhoods. This last property will be handy later on.
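The error-correcting behavior of Ψ can be checked numerically. The sketch below uses the damping factor 1 − e^{−y−2} (one of the two choices mentioned in the proof); the function and tolerances are illustrative.

```python
import math

def Psi(x, y):
    # The analytic error-correcting map built in the proof above: the damping
    # factor 1 - exp(-y-2) keeps the argument of arcsin inside (-1, 1).
    s = (1 - math.exp(-y - 2)) * math.sin(2 * math.pi * x)
    return x - math.asin(s) / (2 * math.pi)

# Near an integer, one application shrinks the error sharply, and the
# reduction improves as the second argument y grows; integers themselves are
# fixed (up to floating-point noise) since sin(2*pi*k) = 0.
err = abs(Psi(3.12, 10) - 3)   # much smaller than the input error 0.12
```

Note how prescribing y prescribes the amount of error reduction, which is exactly the property that σ below lacks.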
It is well known that there are bijective functions from N^2 to N. An example (see e.g. [Odi89, p. 27]) is the dovetailing pairing map I : N^2 → N defined by the formula I(x, y) = (x + y)(x + y + 1)/2 + y. Using this map we can obtain, for each k ≥ 2, a bijective map I_k : N^k → N (e.g. I_2 = I and I_{k+1}(x_1, ..., x_{k+1}) = I(I_k(x_1, ..., x_k), x_{k+1})). We now show that the maps I_k can be extended to R^k robustly around the integers. Since each I_k is a (multivariate) polynomial, to achieve this objective we have to analyze how the error is propagated through the application of a polynomial map. The following lemma is from [BGP12], and can be viewed as an extension of a similar result proved in [GCB08, Lemma 11] for the case of monomials. For multivariate polynomials, the multi-index notation is used for compactness as follows: a monomial x_1^{α_1} ⋯ x_k^{α_k} is represented by x^α, where x = (x_1, ..., x_k), α = (α_1, ..., α_k), |α| = α_1 + ... + α_k is the degree of the monomial, and the degree of a (multivariate) polynomial is the maximum degree of all the monomials which appear in its expression.
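The dovetailing idea can be checked directly. The Cantor-style variant below is assumed for illustration (the formula in [Odi89] may order the arguments differently); it enumerates N^2 along the finite anti-diagonals x + y = const.

```python
def I2(x, y):
    # A Cantor-style dovetailing pairing: anti-diagonal x + y = s starts at
    # the triangular number s(s+1)/2 and y indexes the position within it.
    return (x + y) * (x + y + 1) // 2 + y

def I3(x, y, z):
    # Higher-arity bijections are obtained by composition, as in the text.
    return I2(I2(x, y), z)

# Bijectivity on an initial segment: the 465 pairs with x + y < 30 are
# mapped exactly onto {0, ..., 464}.
seen = {I2(x, y) for x in range(30) for y in range(30 - x)}
```

Since I2 is a polynomial of degree 2, the error-propagation lemma below applies to it directly.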
Lemma 6 ([BGP12, Lemma 4]) Let P : R^k → R be a multivariate polynomial of degree d, and let x, y ∈ R^k be such that ‖x‖, ‖y‖ ≤ M for some M ≥ 0. Then

|P(x) − P(y)| ≤ d max(1, M)^{d−1} ΣP ‖x − y‖,

where ΣP denotes the sum of the absolute values of the coefficients of P.
Now we are ready to state the result that shows the existence of robust analytic extensions of I_k for each k.
Proposition 7 For each k ∈ N, k ≥ 2, there exists an analytic function Υ_k : R^k → R with the following properties: Proof. We start with the case k = 2. Since I_2 is given by a polynomial expression, it is clear that the function I_2 is well defined for all x_1, x_2 ∈ R. In the remainder of this proof, we assume that I_2 is defined over R^2. Since I_2 is a polynomial of degree 2, by Lemma 6 we conclude that the corresponding Lipschitz-type estimate holds, which implies property 1. For property 2, let us assume that ‖x − y‖ ≤ ε ≤ 1/5 for some y ∈ N^k. Using Lemma 4 and the inequality e^{−l} < 1/l for all l ≥ 1, we obtain the following estimate, where ε = ‖x − y‖ ≤ 1/5:
Recall that, on R^2, ‖·‖_2 denotes the Euclidean norm while ‖·‖ denotes the maximum norm, and that ‖x − y‖ ≤ 1/5. It then follows from this inequality, (5), and (6) that property 2 holds. For the case where k > 2, the result is obtained inductively by setting Υ_{k+1}(x_1, ..., x_{k+1}) = Υ_2(Υ_k(x_1, ..., x_k), x_{k+1}). Property 1 is immediate; property 2 follows from a similar estimate. As shown above, I_k : N^k → N provides a bijection between N^k and N that can be robustly extended to an analytic function Υ_k : R^k → R. We now show that the inverse function of I_k can also be robustly extended to an analytic function from R to R^k. We write J_{k,i} : N → N (1 ≤ i ≤ k) for the components of the inverse of I_k, i.e. I_k(J_{k,1}(x), ..., J_{k,k}(x)) = x for all x ∈ N.

Proposition 8 For each k ∈ N, k ≥ 2, and for each 1 ≤ i ≤ k, there exists an analytic function Ω_{k,i} : R → R with the following property: for any x ∈ R satisfying |x − n| ≤ 1/5, where n ∈ N, one has |Ω_{k,i}(x) − J_{k,i}(n)| ≤ 1/5.

Proof. First we prove the result when k = 2. Let us assume that i = 1 (the case where i = 2 is similar). Then, by definition (see [Odi89, p. 27]), we have the following pseudo-algorithm to compute J_{2,1}, given some input x ∈ N:
1. For all i = 0, 1, ..., x
2.   For all j = 0, 1, ..., x
3.     If I_2(i, j) = x, then output i
4.   Next j
5. Next i
This algorithm always stops with the correct result. Hence J_{2,1} can be computed by a one-tape Turing machine M. Furthermore, using well-known techniques, we can assume that M has the following properties: (i) the tape alphabet of the Turing machine is {B, 1}, where B denotes the blank symbol; (ii) the input alphabet is {1}; (iii) each input z ∈ N and the respective output of the computation are represented in unary, i.e. by a sequence of z 1's; and (iv) J_{2,1} is computed by M in time P(n) = P(x), where P is a polynomial which can be explicitly obtained and which is assumed to be an increasing function. Regarding condition (i), we notice that there are universal Turing machines which only use the alphabet {B, 1}, and hence we do not lose computational power with respect to Turing machines using more symbols. For example, if we have a Turing machine M_1 whose tape alphabet has k > 3 symbols (including the blank symbol B), then we can create a Turing machine M_2 with tape alphabet {B, 0, 1} which simulates M_1 by taking some fixed l ∈ N satisfying l ≥ log_2(k) such that each symbol of M_1 is represented by a distinct string (block) of {0, 1}* of length l, with the exception of the blank symbol of M_1, which is represented by a block of l blank symbols in M_2. In turn, M_2 can be simulated by a Turing machine M_3 with tape alphabet {B, 1} by coding each symbol of M_2 as a string of length 2 in M_3, e.g. by coding 0, 1, and B as 1B, 11, and BB, respectively. Regarding condition (iv), we note that the expression of P depends on the exact implementation details of M, but we prefer to omit the exact descriptions of M and of P for brevity (these can be obtained as usual, although the procedure is a bit tedious and hence not of much interest for this proof).
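The brute-force search above can be sketched directly; the pairing variant is assumed as before, for illustration only.

```python
def I2(x, y):
    # A Cantor-style dovetailing pairing (assumed variant, for illustration).
    return (x + y) * (x + y + 1) // 2 + y

def J21(x):
    # Brute-force inversion as in the pseudo-algorithm above: since
    # I2(i, j) >= i and I2(i, j) >= j, searching i, j in {0, ..., x}
    # always finds the (unique) preimage.
    for i in range(x + 1):
        for j in range(x + 1):
            if I2(i, j) == x:
                return i
```

The O(x^2) running time of this search is what makes the polynomial time bound P in condition (iv) plausible.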
Let g_M : R^7 → R^6 be the function given by Theorem 3 such that the ODE y′ = g_M(t, y) simulates M (taking ε = 1/5 in that theorem), and let η < 1/2 be the associated constant such that (3) holds. (The value of δ in Theorem 3 is not really relevant; one may simply take δ = 1/5.) Let l ∈ N be chosen such that σ^[l](η) ≤ 1/5 (see Lemma 5). Given an input x ∈ N for the Turing machine M, let us assume that this input is encoded in unary (i.e. x is represented by a sequence of x 1's) when processed by M. We can then transform this unary coding of x into another integer value ϕ_1(x) via the coding (2), where γ(B) = 0 and γ(1) = 1, which can then be used to create an initial condition for y′ = g_M(t, y) such that this IVP simulates M with input x. Note that although x ∈ N and ϕ_1(x) ∈ N, we do not necessarily have x = ϕ_1(x). Since initially the tape will be empty, with the exception of the input, and M will be in its initial state, which we assume to be the state 1 (we can assume, without loss of generality, that the states of M correspond to the elements of {1, 2,
..., m}), then the initial configuration of M will be coded as (ϕ_1(x), 0, 1). Let Φ(t, ϕ_1(x)) denote the solution of y′ = g_M(t, y) with the initial condition associated to the configuration (ϕ_1(x), 0, 1). Note that Φ is analytic and that M computes J_{2,1}. We will use these facts to create the function Ω_{2,1}, defined as ϕ_2 ∘ Φ(P(x + 1), ϕ_1(x)) for some analytic functions ϕ_1, ϕ_2 yet to be defined, which essentially translate the value of x ∈ N into the coding (2) of its unary representation (case of ϕ_1) and, reciprocally, convert the coding of the unary representation back to the number encoded by this representation (note again that x ∈ N may not be equal to the number encoding the symbolic representation (unary, binary, etc.) of x given by (2)). Since |x − n| ≤ 1/5 and P is assumed to be increasing, it follows that x + 1 > n ≥ 0, and we get that Φ(P(x + 1), ϕ_1(x)) will return the coding of the output of M with the input encoding the number n ∈ N. (Note that the relation (3) is in general valid only on intervals of the format [j, j + 1/2] with j ∈ N; but since we have assumed that the image of a halting configuration is itself, it follows from the results of [GCB08] that (3) remains valid for all times [j + 1/2, j + 1] after the Turing machine has halted. See also Remark 15.) We now only have to define the functions ϕ_1 and ϕ_2. Let us first turn our attention to ϕ_1. Note that, given some n ∈ N, the number 2^n − 1 represents n in unary when using the coding (2) (taking k = 2, since γ(B) = 0 by definition, and by taking γ(1) = 1). Hence it makes sense to take ϕ_1(n) = 2^n − 1. However, we cannot take ϕ_1(x) to be 2^x − 1, because in that case we cannot ensure that |x − n| ≤ 1/5 implies |ϕ_1(x) − ϕ_1(n)| ≤ 1/5. To avoid this problem, we improve the accuracy of x using the function Ψ from Lemma 4, obtaining an improved estimate x̃ satisfying |2^x̃ − 2^n| ≤ 1/5. We now determine the accuracy improvement needed to achieve this objective. Note
that the exponential function 2^x is strictly increasing, and thus, by the mean value theorem, the error |2^x − 2^n| can be bounded in terms of |x − n|. We now proceed with a similar reasoning for ϕ_2. We first note that if n ∈ N codes the exact output of M according to (2), and thus represents in unary some number i ∈ N, then we will have n = 2^i − 1, as we have already seen. This implies that i = log_2(n + 1). Now we have to analyze again the effect of replacing n by some real value x satisfying |x − n| ≤ 1/5. By the mean value theorem (note also that n ≥ 0 since n ∈ N), to ensure that |x − n| ≤ 1/5 implies |log_2(x + 1) − log_2(n + 1)| ≤ 1/5, it is enough to take ϕ_2(x) = log_2(Ψ(x, 2) + 1) (using Lemma 4) or ϕ_2(x) = log_2(σ(x) + 1) (using Lemma 5 and noting, as mentioned in [GCB08, Remark 6], that we can take λ_{1/4} = 0.4π − 1 ≈ 0.2566371).
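The correspondence n ↔ 2^n − 1 between a number and the code (2) of its unary representation can be checked directly; the base-2 digit coding below rests on the same assumption about (2) as before (γ(B) = 0, γ(1) = 1).

```python
import math

def code_unary(n, k=2):
    # Code (2) of the word 1^n over the alphabet {B, 1} with gamma(B) = 0 and
    # gamma(1) = 1: the n base-2 digits equal to 1 sum to 2^n - 1.
    return sum(k ** i for i in range(n))

def decode_unary(c):
    # The inverse direction used for phi_2: n = log2(c + 1).
    return round(math.log2(c + 1))
```

This is precisely why ϕ_1(n) = 2^n − 1 and ϕ_2(x) ≈ log_2(x + 1) are the natural (analytic) choices.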
Proceeding similarly for the case of Ω_{2,2}, we use a Turing machine M′ computing J_{2,2}, which is similar to M, with the difference that in Step 3 of the pseudo-algorithm above we take "If I_2(i, j) = x, then output j". The results for k > 2 follow inductively.

Analytic one-dimensional maps robustly simulate Turing machines
We now present one of the main results of this paper. Let M be a one-tape Turing machine and let (y_1, y_2, q) ∈ N^3 be the encoding of a configuration as given in Section 2 and (2). In what follows, each configuration (y_1, y_2, q) is encoded in the single value I_3(y_1, y_2, q) ∈ N, where I_3 is the pairing function of Section 3. Thus we can consider that, under this new encoding, the transition function of a Turing machine is a map ψ : N → N.
Theorem 9 Let ψ : N → N be the transition function of a Turing machine M, under the encoding described above, and let 0 ≤ δ < 1/5. Then there is an analytic function g_M : R → R that robustly simulates M in the following sense: for all g such that ‖g − g_M‖ ≤ δ and for all x̄_0 ∈ R satisfying |x̄_0 − x_0| ≤ 1/5, where x_0 ∈ N represents some configuration, one has, for all j ∈ N,

|g^[j](x̄_0) − ψ^[j](x_0)| ≤ 1/5.   (7)

Proof. Most of the work needed to prove this theorem was already done in Section 3. Let us first define a function ḡ_M : R → R that robustly simulates M in the weaker sense that only the input can be perturbed, and not ḡ_M itself. More specifically, let us define a function ḡ_M : R → R with the following property: for all x̄_0 ∈ R satisfying |x̄_0 − x_0| ≤ 1/5, where x_0 ∈ N represents some configuration, one has, for all j ∈ N,

|ḡ_M^[j](x̄_0) − ψ^[j](x_0)| ≤ 1/5.   (8)

To achieve this purpose, let f_M be the corresponding 3-dimensional map simulating M obtained via Theorem 1 with ε = 1/5. Then we take ḡ_M(x) = Υ_3(f_M(Ω_{3,1}(x), Ω_{3,2}(x), Ω_{3,3}(x))). It then follows from Theorem 1, Proposition 7, and Proposition 8 that property (8) is satisfied. We now only have to take care of the perturbations to ḡ_M. Let σ be the function defined in Lemma 5, let j ∈ N be some integer such that 0 < λ_{1/4}^j/5 < 1/5 − δ, and take g_M = σ^[j] ∘ ḡ_M. Then, by property (8) and Lemma 5, if x̄_0 ∈ R satisfies |x̄_0 − x_0| ≤ 1/5, where x_0 ∈ N represents some configuration, one has |g_M(x̄_0) − ψ(x_0)| ≤ 1/5 − δ. By using this last inequality and by iterating g and ψ, we conclude that property (7) holds.
Remark 10 In the statement of Theorem 9, we could have picked some fixed ε > 0 satisfying δ < ε ≤ 1/5 and, instead of assuming that |x̄_0 − x_0| ≤ 1/5, we could have assumed that |x̄_0 − x_0| ≤ ε and required that |g^[j](x̄_0) − ψ^[j](x_0)| ≤ ε in condition (7). To see this, it would be enough to compose g_M with σ^[l], where σ is given by Lemma 5 and l ∈ N is such that λ_{1/4}^l/5 ≤ ε − δ.

Analytic two-dimensional ODEs can robustly simulate Turing machines
In this section we construct an analytic two-dimensional ODE that robustly simulates Turing machines in the sense of Theorem 3. To prove this result, we simulate the iteration of the one-dimensional analytic function provided by Theorem 9 using a two-dimensional analytic ODE. It then follows from Theorem 9 that this ODE simulates a Turing machine. The approach is similar to that used in [GCB08]. More precisely, the following theorem is proved in this section.
Theorem 11 Let ψ : N → N be the transition function of a Turing machine M, under the encoding described in Section 4, and let 0 ≤ δ < 2/5. Then there exist:
• η > 0 satisfying η < 2/5 < 1/2, which can be computed from δ; and
• an analytic function g_M : R^3 → R^2 such that the ODE z′ = g_M(t, z) robustly simulates M in the following sense: for all g satisfying ‖g − g_M‖ ≤ δ < 2/5 and for all x_0 ∈ N which encodes a configuration according to the encoding described above, if x̄_0, ȳ_0 ∈ R satisfy the conditions |x̄_0 − x_0| ≤ 1/5 and |ȳ_0 − x_0| ≤ 1/5, then the solution z(t) of the IVP z′ = g(t, z), z(0) = (x̄_0, ȳ_0) satisfies, for all j ∈ N_0 and for all t ∈ [j, j + 1/2],

|z_2(t) − ψ^[j](x_0)| ≤ 1/5.

The remainder of this section is devoted to the proof of Theorem 11. We first present the main ideas of [GCB08] for simulating, with an analytic ODE, the iteration of a map defined over the integers which admits a robust analytic real extension. We begin with a construction that uses an analytic ODE to approximate a value b ∈ R in a finite (given) amount of time with some (given) accuracy. This construction will be needed to derive the ODE simulating the iteration of the map. Consider the following basic ODE (the targeting equation),

y′ = c(b − y)^3 φ(t),   (9)

which was already studied in [Bra05], [CMC00], [GCB08, Section 7]. The ODE (9) can be easily solved by separating variables, which gives rise to the following result.
Lemma 12 ([GCB08]) Consider a point b ∈ R (the target), some γ > 0 (the targeting error), time instants t_0 (the departure time) and t_1 (the arrival time), with t_1 > t_0, and a function φ : R → R with the property that φ(t) ≥ 0 for all t ≥ t_0 and ∫_{t_0}^{t_1} φ(t)dt > 0. Then the IVP defined by (9) (the targeting equation) with the initial condition y(t_0) = y_0 and with

c ≥ (2γ^2 ∫_{t_0}^{t_1} φ(t)dt)^{−1}   (10)

has the property that |y(t) − b| < γ for all t ≥ t_1, independently of the initial condition y_0 ∈ R.
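The targeting behavior of Lemma 12 can be checked numerically. The sketch below assumes the cubic form y′ = cφ(t)(b − y)^3 for the basic ODE (9), as in [GCB08]; the parameter values are illustrative.

```python
def target(b, y0, c, phi, t0=0.0, t1=1.0, h=1e-4):
    # Euler integration of the targeting equation y' = c * phi(t) * (b - y)^3.
    y, t = y0, t0
    while t < t1:
        y += h * c * phi(t) * (b - y) ** 3
        t += h
    return y

# With phi = 1 on [0, 1], separation of variables gives
# |y(1) - b| <= (2c)^(-1/2) independently of y0, so c = 100 guarantees a
# targeting error below gamma = 0.1, matching the condition of Lemma 12.
y_end = target(b=2.0, y0=-5.0, c=100.0, phi=lambda t: 1.0)
```

Changing y0 changes the trajectory but not the guarantee at t_1, which is the key point used below.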
However, since we wish the ODE simulating Turing machines to be robust to perturbations, we have to analyze a perturbed version of (9).
Proceeding along the lines of the argument presented in [GCB08], we now show how the map given by Theorem 9 can be iterated by a 2-dimensional ODE.Although our objective is to obtain an analytic function g M defining an ODE z = g M (t, z) which simulates a given Turing machine M , in a first step we iterate the map given by Theorem 9 by using a non-analytic ODE.
Following the approach provided in [Cam02, p. 37], let θ : R → R be the C^∞ function defined by θ(x) = 0 for x ≤ 0 and θ(x) = e^{−1/x} for x > 0. Next we define a C^∞ function v : R → R with the property that v(x) = n whenever x ∈ [n, n + 1/2]. We now get the following lemma.
Note that the function r can be seen as a function that returns the nearest integer to a real number lying in a 1/4-vicinity of an integer.
We now consider the ODE

z_1′ = c(f(r(z_2)) − z_1)^3 θ(sin 2πt),
z_2′ = c(r(z_1) − z_2)^3 θ(−sin 2πt),   (14)

where f : R → R is an extension of the function f : N → N, z_1(0) = z_2(0) = x_0 ∈ N, and c is a constant yet to be defined. We will next show that (14) iterates the map f near integers. Its behavior is depicted in Fig. 1 when iterating the exponential function 2^x. Suppose that t ∈ [0, 1/2]. Then z_2′(t) = 0, and thus z_2(t) = x_0 and r(z_2) = x_0. In this manner, the first equation of (14) behaves like the targeting equation (9), where b = f(r(z_2)) = f(x_0), t_0 = 0, t_1 = 1/2, and φ(t) = θ(sin 2πt). Since sin(2πt) ≥ √2/2 for t ∈ [1/8, 3/8], this implies that ∫_0^{1/2} θ(sin 2πt)dt ≥ (1/4)e^{−√2}. Therefore, due to Lemma 12 and (10), if we take c ≥ 2e^{√2}/γ^2, the first equation of (14) becomes a targeting equation on the time interval [0, 1/2] associated to a targeting error γ. In particular, if we pick γ = 1/5, then we can pick c = 206 such that (10) holds, and thus |z_1(t) − f(x_0)| ≤ 1/5 for all t ∈ [1/2, 1]. On the time interval [1/2, 1], the roles of z_1 and z_2 are switched: we will have z_1′(t) = 0, which implies that z_1(t) = z_1(1/2) for all t ∈ [1/2, 1]. We thus conclude that r(z_1(t)) = f(x_0) for t ∈ [1/2, 1], and therefore the second equation of (14) behaves like the targeting equation (9) with b = r(z_1(t)) = f(x_0), t_0 = 1/2, t_1 = 1, φ(t) = θ(−sin 2πt), and c = 206. Again, using Lemma 12 and arguments similar to those of the previous case, we conclude that |z_2(1) − f(x_0)| ≤ 1/5. In the following time interval [1, 3/2], the cycle repeats itself: we have z_2′(t) = 0 and thus f(r(z_2)) = f(f(x_0)) = f^[2](x_0). Using a similar reasoning, we conclude that for all k ∈ N_0 we have |z_2(t) − f^[k](x_0)| ≤ 1/5 for all t ∈ [k, k + 1/2]. We have thus shown how to iterate (the extension of) a discrete function f : N → N with an ODE. Nonetheless, the ODE (14) is still not analytic as required. Remark that if the ODE were analytic, then z_1′ and z_2′ could not vanish on half-unit intervals, since it is well known that if an analytic function (z_1′ and z_2′ in our case) takes the value zero on a non-empty interval, then this function has to be identically equal to 0 on all its domain. Therefore, instead of requiring that z_1′ and z_2′ take the value 0
in alternating half-unit intervals, we require that these functions take values very close to zero. Since the map g_M given by Theorem 9 is robust to perturbations on its input, this will ensure that the whole simulation of g_M with a two-dimensional ODE can still be performed, even if z_1 and z_2 are not strictly constant on the half-intervals [k + 1/2, k + 1] and [k, k + 1/2], respectively. However, we still have to ensure that |z_1′(t)| and |z_2′(t)| are sufficiently small on the half-unit intervals of interest, to guarantee that the iteration can be carried out faithfully. To better understand how this can be achieved, we have to analyze, with the help of Lemma 13, the effects of introducing perturbations in (14), since now the "targets" will be slightly perturbed.
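The alternating ("flip-flop") behavior of (14) can be simulated numerically. The sketch below assumes θ(x) = e^{−1/x} for x > 0 (the C^∞ bump of Section 5) and uses round() as a stand-in for the analytic rounding function r; the iterated map f and all constants are illustrative.

```python
import math

def theta(s):
    # C-infinity bump: 0 for s <= 0 and exp(-1/s) for s > 0 (assumed form).
    return 0.0 if s <= 0 else math.exp(-1.0 / s)

def simulate(f, x0, T, c=206.0, h=1e-4):
    # Euler integration of the alternating system (14); round() stands in
    # for the analytic rounding function r, and f is the iterated map.
    z1 = z2 = float(x0)
    t = 0.0
    while t < T:
        s = math.sin(2 * math.pi * t)
        z1 += h * c * (f(round(z2)) - z1) ** 3 * theta(s)   # active on [k, k+1/2]
        z2 += h * c * (round(z1) - z2) ** 3 * theta(-s)     # active on [k+1/2, k+1]
        t += h
    return z1, z2

# Iterating f(n) = n + 1 from x0 = 0: on [j, j + 1/2], z2 stays within the
# targeting error of f^[j](x0) = j.
_, z2 = simulate(lambda n: n + 1, 0, T=2.25)
```

While z2 holds the current iterate, z1 is driven toward the next one, and vice versa on the complementary half-intervals; this is the mechanism the analytic construction below must preserve.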
Proceeding as in [GCB08], the non-analytic function θ(sin 2πt) in the first equation of (14) is replaced by an analytic periodic function of period 1 which is close to zero when t ∈ [1/2, 1]. As shown in [GCB08], this can be done by considering a function s which ranges between 0 and 1 on [0, 1/2] (and, in particular, between 4/5 and 1 when t ∈ [0.17, 0.33]), and between −1/8 and 0 on the time interval [1/2, 1]. Taking the analytic function φ : R^2 → [0, 1] defined by φ(t, y) = Ψ(s(t), y), we conclude that ∫_0^{1/2} φ(t, y)dt > 4/5 × (0.33 − 0.17) = 0.128 > 0 (assuming that y ≥ 5) and |φ(t, y)| < e^{−y}/8 for all t ∈ [1/2, 1] (i.e. y allows us to provide an error bound for z_1(t) on the time interval [1/2, 1]). Since φ has period 1 in t, the same bounds hold on every subsequent unit interval. We can now proceed with the main construction that simulates the iteration of the map given by Theorem 9 with an analytic ODE. Take γ > 0 to be a value such that 2γ + δ/2 ≤ 1/5, and let g_M be the map given by Theorem 9 (using as the value for δ in the statement of Theorem 9 the value η/2 < 1/5, where η = (γ + δ)/2 + 1/5 < 2/5). Consider the ODE z′ = h_M(t, z) given by (16), with initial conditions z_1(0) = x̄_0 and z_2(0) = ȳ_0, where x̄_0, ȳ_0 ∈ R are approximations of some initial configuration x_0, and where c_1, c_2 are constants associated to a targeting error of value γ (i.e. they are chosen like the constant c in (10); noting that ∫_0^{1/2} φ(t, y)dt > 0.128, we can take c_1 = c_2 = 4γ^{−2}). Note that, since φ satisfies the assumptions on the function φ in Lemma 13, the same will happen to φ_1 and φ_2. The ODE (16) simulates the Turing machine M similarly to the ODE (14), by iterating g_M. However, (16) has a more complicated expression, since it has to deal with the fact that z_1′(t) and z_2′(t) are not exactly zero on half-unit intervals.
Since we want the ODE z' = h_M(t, z) to robustly simulate M, let us assume that the right-hand side of the equations in (16) can be subject to an error of absolute value not exceeding δ. To start the analysis of the behavior of (16), let us first consider the time interval [0, 1/2]. Since |x|^3 ≤ x^4 + 1 for all x ∈ R, the term involving φ_2 is sufficiently small on this interval. This implies, together with the assumption that z_2' in (17) is perturbed by an amount not exceeding δ, that |z_2'(t)| ≤ γ + δ for all t ∈ [0, 1/2], which in turn implies that |z_2(t) − z_2(0)| ≤ (γ + δ)/2 for all t ∈ [0, 1/2]. Since the initial condition x̄_0 satisfies |x̄_0 − x_0| ≤ 1/5, where x_0 ∈ N, we conclude, due to Theorem 9 and since σ^{[l]}(η) ≤ γ and η > 1/5, that the behavior of z_1 is given by Lemma 13. On the next half-unit interval [1/2, 1] the roles of z_1 and z_2 are switched as before, and one concludes, using arguments similar to those used on the time interval [0, 1/2], a corresponding inequality which together with (18) yields the desired bound. This analysis can be repeated on subsequent time intervals, and thus, for all j ∈ N, if t ∈ [j, j + 1/2] then |z_2(t) − ψ^{[j]}(x_0)| ≤ 1/5. This concludes the proof.
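The targeting mechanism underlying (16) can be illustrated numerically. The sketch below (Python) uses simplified stand-ins, not the exact objects from the proof: a clock function phi that is positive on [0, 1/2] and zero on [1/2, 1] in place of φ, a fixed target T standing in for g_M applied to the current value, and the coefficient c = 4γ⁻² mentioned above. Integrating z' = c φ(t)(T − z)³ over one half-unit interval drives z into a γ-neighborhood of the target, which is the one-step behavior the proof requires.

```python
import math

gamma = 0.05            # targeting error
c = 4 / gamma**2        # coefficient c_1 = c_2 = 4 * gamma**(-2)
T = 3.0                 # hypothetical target (stand-in for g_M of the current value)
z = 2.0                 # current value, at distance 1 from the target

def phi(t):
    # stand-in clock: positive on [0, 1/2], identically zero on [1/2, 1]
    return max(0.0, math.sin(2 * math.pi * t))

# forward-Euler integration of z' = c * phi(t) * (T - z)**3 on [0, 1/2]
dt = 1e-5
for i in range(50000):
    z += dt * c * phi(i * dt) * (T - z) ** 3

# after half a period, z is within gamma of the target
assert abs(z - T) < gamma
```

The cubic nonlinearity makes the correction strong when z is far from the target and negligible once z is close, which is what makes this kind of targeting ODE robust to small perturbations of its right-hand side.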
Remark 15 From the proof of Theorem 11, we conclude that if the Turing machine halts after n_0 steps with configuration c_h, and if we assume that ψ(c_h) = c_h, then the corresponding bound holds for any n ∈ N satisfying n ≥ n_0 + 1. Furthermore, on the half-unit time interval [n, n + 1/2], assuming that δ = 0 in Theorem 11, we conclude from (16) that z_2(t) starts its trajectory from a value 1/5-near to c_h and will monotonically converge to a value σ^{[l]}(z_1(t)) which, although it may change with time, will never leave a small neighborhood of c_h.

Simulating Turing machines over compact sets
So far all our simulations of Turing machines are carried out over the noncompact set R^n. In this section we show that it is possible to simulate Turing machines with ODEs on the two-dimensional compact sphere S^2 = {x ∈ R^3 : ‖x‖ = 1}. The technique used in the construction is based on a similar result proved in [CMPS21]. It is shown in [CMPS21] that there is a computable polynomial vector field on the sphere S^n = {x ∈ R^{n+1} : ‖x‖ = 1} simulating a universal Turing machine, where n ≥ 17 is arbitrary; in other words, the polynomial vector field is Turing complete. The underlying idea of the construction presented in [CMPS21] is to map a vector field defined on R^n to S^n using the stereographic projection, and to remove the singularity at the north pole by using a suitable reparametrization of the polynomial vector field. We show in this section that the dimension can be lowered from n ≥ 17 to n = 2 at the cost of using a non-polynomial C^∞ vector field. The notion introduced in the following definition will be used throughout this section.
For example, g(t, x) = x sin(2πt) is bounded by a constant on t and polynomially bounded on x, and h(t) = sin(2πt) is bounded by a constant on t (to simplify notation, in this latter case, where h is not polynomially bounded on any other variable, we simply say that h is bounded by a constant). Functions which are bounded by a constant include sin, cos, and arctan.
The next result shows when a Turing universal vector field can be defined on S n .
Theorem 17 Let x' = f(t, x) be a C^1-computable ODE simulating a Turing machine M, where f : R^{n+1} → R^n is a C^∞ function with the property that both f and all its partial derivatives are bounded by a constant on t and polynomially bounded on the variables x_1, ..., x_n, the argument of f being (t, x_1, ..., x_n). Then from f one can compute a C^∞ vector field F defined on S^n that also simulates M.
Proof. We recall that the (inverse) stereographic projection ϕ : R^n → S^n ⊆ R^{n+1} is given by ϕ(x) = ((‖x‖^2 − 1)/(‖x‖^2 + 1), 2x_1/(‖x‖^2 + 1), ..., 2x_n/(‖x‖^2 + 1)). Then we can write the vector field defined by f on R^n as Σ_{i=1}^n f_i ∂/∂x_i. We recall that if g : M → N is a C^1 map between two manifolds M = R^n and N = R^k, then for each p ∈ M the map g induces a linear map g_* : T_pM → T_{g(p)}N from the tangent space T_pM of M at p to the tangent space of N at g(p).
We also recall that g_* is represented, in coordinates, by the Jacobian matrix (∂g_i/∂x_j) of g. In the case of the map ϕ, taking (y_0, y_1, ..., y_n) as coordinates for R^{n+1}, we obtain an explicit expression for ϕ_*(f) (note that the variable t in the expression of f can be seen as a fixed parameter), where each f_i is evaluated at (t, ϕ^{−1}(y_0, y_1, ..., y_n)) = (t, y_1/(1 − y_0), ..., y_n/(1 − y_0)). This implies that ϕ_*(f) is a vector field of class C^∞, except at the north pole y_NP = (1, 0, ..., 0) of S^n, where it is not defined. Let us now consider the reparametrized ODE (23), where K : R^n → R is such that K(x) > 0 for every x ∈ R^n and τ(0) = 0, x(0) = x_0. Since τ'(t) = K(x(t)) > 0, τ is strictly increasing and thus admits an inverse τ^{−1}. Next we note that if x̃ = x ∘ τ, where x is a solution of (21) and τ(0) = 0, then x̃ satisfies the last n equations of (23). It is not difficult to see that if (τ, x̃) is a solution of (23), then x̃ is also a solution of (24). Since the solution of the ODE (23) is unique by the Picard–Lindelöf theorem, we conclude that x̃(t) = x(τ(t)). Furthermore, since τ'(t) > 0 for any t ∈ R, any solution curve of (21) with initial condition x(0) = x_0 also provides a solution curve for the last n components of the solution of (23) with initial condition τ(0) = 0, x(0) = x_0, up to a time reparametrization, and vice versa. In particular, the solution curves of x' = e^{−2/(1+r^2)} f(t, x), where r = ‖x‖, and of (21) are the same, up to the time reparametrization τ given by the ODE τ' = e^{−2/(1+r^2)} > 0.
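The projection ϕ and its inverse can be checked numerically. The sketch below (plain Python, with the north pole at (1, 0, ..., 0) as in the text) verifies that ϕ maps R² onto the unit sphere S² ⊆ R³, that y ↦ (y_1/(1 − y_0), ..., y_n/(1 − y_0)) inverts it, and that the pushforward of a sample vector field f (an arbitrary smooth field chosen here for illustration), approximated by a finite difference, is tangent to the sphere.

```python
import math

def stereo(x):
    # inverse stereographic projection R^n -> S^n \ {north pole}
    r2 = sum(xi * xi for xi in x)
    return [(r2 - 1) / (r2 + 1)] + [2 * xi / (r2 + 1) for xi in x]

def stereo_inv(y):
    # inverse map: x_i = y_i / (1 - y_0)
    return [yi / (1 - y[0]) for yi in y[1:]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [0.3, -1.2]                       # a point of R^2
y = stereo(x)                         # its image on S^2

# y lies on the unit sphere and the round trip recovers x
assert abs(dot(y, y) - 1.0) < 1e-12
assert all(abs(a - b) < 1e-12 for a, b in zip(stereo_inv(y), x))

# pushforward of a sample field f via a finite difference:
# phi_*(f)(y) ~ (phi(x + eps*f(x)) - phi(x)) / eps,
# which must be (approximately) tangent to the sphere at y
f = [math.sin(x[0]), x[1] ** 2]       # arbitrary smooth field on R^2
eps = 1e-7
y_shift = stereo([xi + eps * fi for xi, fi in zip(x, f)])
push = [(a - b) / eps for a, b in zip(y_shift, y)]
assert abs(dot(y, push)) < 1e-5
```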
Note that the right-hand side of (25) is formed by C^1-computable functions, which means that the solution (τ, x̃) to (25) is also computable, since the solution to a C^1-computable system is itself computable [GZB09], [CG08]. Hence, when simulating Turing machines, if the result of the nth step of the computation of the Turing machine being simulated by (21) can be read on the time interval [a_n, b_n], then the result of the nth step when the simulation is performed by (25) can be read on the time interval [τ^{−1}(a_n), τ^{−1}(b_n)]. Note that τ^{−1} can be computed from τ and hence from f. Indeed, we know that the derivative of τ^{−1} is given by (τ^{−1})'(t) = 1/K(x(t)).
Hence τ^{−1} can be obtained as the solution of the initial-value problem (IVP) defined by (τ^{−1})'(t) = 1/K(x(t)), τ^{−1}(0) = 0. Since the right-hand side of the ODE defining this IVP is computable from x and hence from f, we conclude that τ^{−1} is computable from f [GZB09], [CG08]. In particular, if f is computable, then so is τ^{−1}.
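The equivalence of (21) and the reparametrized system, up to the time change τ, can be verified numerically. The sketch below (plain Python, fixed-step RK4) uses hypothetical choices f(x) = sin x + 3/2 (an autonomous stand-in for the field, so the paper's time dependence is not needed for the illustration) and K(x) = 1/(1 + x²) > 0: it integrates the pair τ' = K(x̃), x̃' = K(x̃) f(x̃), then checks that x̃(t) agrees with the solution of x' = f(x) evaluated at time τ(t).

```python
import math

def rk4(rhs, y, t_end, n):
    # fixed-step classical Runge-Kutta for autonomous systems
    dt = t_end / n
    y = list(y)
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs([yi + 0.5 * dt * k for yi, k in zip(y, k1)])
        k3 = rhs([yi + 0.5 * dt * k for yi, k in zip(y, k2)])
        k4 = rhs([yi + dt * k for yi, k in zip(y, k3)])
        y = [yi + dt / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
    return y

f = lambda x: math.sin(x) + 1.5       # hypothetical field (nonvanishing)
K = lambda x: 1.0 / (1.0 + x * x)     # positive reparametrizing factor

x0 = 0.2
# reparametrized system: tau' = K(x), x' = K(x) f(x)
pair = lambda y: [K(y[1]), K(y[1]) * f(y[1])]
tau_end, x_rep = rk4(pair, [0.0, x0], 2.0, 4000)

# original system x' = f(x), integrated up to time tau(2)
x_orig = rk4(lambda y: [f(y[0])], [x0], tau_end, 4000)[0]

# same trajectory, up to the time reparametrization
assert abs(x_rep - x_orig) < 1e-6
```

Since K > 0, τ is strictly increasing, so the two systems trace out the same solution curves; only the speed along them changes, which is exactly what the check confirms.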
We now compute ϕ_*(h), where h is given by (25) (we can again assume that t is a fixed parameter for h). From (22) we conclude (note that |y_i| ≤ 1) that ϕ_*(h)(y) → 0 as y approaches the north pole. Similarly, writing the partial derivatives of ϕ_*(h) in terms of functions s_i, it follows from (22) and from the assumption that all the partial derivatives of f are polynomially bounded on x_1, ..., x_n that each partial derivative of s_i is bounded by some polynomial p evaluated at (y_1/(1 − y_0), ..., y_n/(1 − y_0)), which implies that these partial derivatives also tend to 0 as y approaches the north pole. Therefore we can extend ϕ_*(h)(y) to a C^∞ vector field g defined on the entire sphere S^n by setting the value of g and of its partial derivatives to 0 at the north pole y_NP = (1, 0, ..., 0) of S^n. We have thus defined a C^∞ vector field g on the entire sphere S^n, and g is Turing universal.

With the help of the above theorem, one might hope to simply use the vector field g_M of Theorem 11 as the vector field f in Theorem 17 to prove that there is a Turing universal vector field on S^2. However, there are several problems with this approach, listed below: (1) the approach requires that g_M(t, x_1, x_2) and all of its partial derivatives be bounded by a constant on t and polynomially bounded on x_1 and x_2. A major problem in this respect is that g_M uses in its expression the function Ψ defined in Lemma 4, which does not necessarily have polynomially bounded derivatives, because the function arcsin is used in the definition of Ψ and the derivative of arcsin is not polynomially bounded as its argument approaches −1 or 1. (2) The expression of g_M relies on the expression of the function ḡ_M given by Theorem 3. Thus one must show that ḡ_M(t, x_1, x_2, x_3, y_1, y_2, y_3) is polynomially bounded on x_1, x_2, x_3, y_1, y_2, y_3.
(3) The argument of the functions Ω_{k,i} : R → R from Proposition 8 is provided as the initial condition of an ODE. It is not straightforward to analyze the dependence of Ω_{k,i} and of its derivatives on this argument.
In the following we present solutions to the listed problems. First we note that in Theorem 17 the vector field is no longer required to be analytic (it only has to be C^∞). Therefore we can substitute the function Ψ defined in Lemma 4 by the function r : R → R defined in Lemma 14. We recall that the function r has the property that r(x) = n whenever x ∈ [n − 1/4, n + 1/4], for all integers n. Thus, if we take Ψ(x, y) = r(x), the properties stated for Ψ in Lemma 4 remain true. Therefore we can replace Ψ with r when defining the vector field g_M of Theorem 11. In this case, the properties stated in Theorem 11 remain true, with g_M being a C^∞ function rather than an analytic function. However, we gain the advantage that r and its derivatives are polynomially bounded, as we now show. Indeed, it follows from its definition that |r(x)| ≤ |x| + 1 ≤ x^2 + 2 (recall that |x| ≤ x^2 + 1). For the derivatives of θ, we note that if a(x) = e^{−1/x}, then it is readily seen by induction that for each n ∈ N_0 there is a polynomial P_n such that a^{(n)}(x) = P_n(1/x)e^{−1/x}, with a^{(0)}(x) = a(x) (and thus P_0 = 1). This implies that lim_{x→0^+} a^{(n)}(x) = 0 and lim_{x→+∞} a^{(n)}(x) = K_n, where K_n is the constant term of P_n. Therefore, by definition of the limit, there exist b_n, ε_n > 0 such that |a^{(n)}(x)| ≤ 1 whenever x ∈ (0, ε_n] and |a^{(n)}(x)| ≤ b_n for all x ∈ (0, +∞), which implies that θ in (12), as well as its derivatives, is polynomially bounded on [0, +∞). Consequently, the function v from (13), as well as its derivatives, is polynomially bounded, by Lemmas 18 and 19 to be presented in a moment. Lemma 14 together with Lemmas 18 and 19 then implies that r, as well as its derivatives, is polynomially bounded. Some notation is in order for the statements of the next two lemmas. Given a multi-index α = (α_1, ..., α_n) ∈ N_0^n, we write ∂^α = ∂^{|α|}/(∂x_1^{α_1} ··· ∂x_n^{α_n}), where ∂^0 is the identity operator.

Lemma 18 Suppose that f, g : R^n → R are C^∞ functions which are bounded by a constant on the variables x_1, ..., x_i and bounded by a polynomial on the variables x_{i+1}, ..., x_n, as well as all their partial derivatives. Then:

1. The C^∞ function f + g is bounded by a constant on the variables x_1, ..., x_i and bounded by a polynomial on the variables x_{i+1}, ..., x_n, as well as all its partial derivatives.

2. The C^∞ function f × g is bounded by a constant on the variables x_1, ..., x_i and bounded by a polynomial on the variables x_{i+1}, ..., x_n, as well as all its partial derivatives.
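The induction behind the claim a^{(n)}(x) = P_n(1/x)e^{−1/x} can be made concrete: differentiating yields the recurrence P_{n+1}(u) = u^2(P_n(u) − P_n'(u)) with P_0 = 1. The Python sketch below builds the first few P_n from this recurrence, checks each resulting derivative against a finite difference, and confirms that a^{(n)} vanishes as x → 0^+, as used above.

```python
import math

def dpoly(p):
    # derivative of a polynomial given by ascending coefficients
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def psub(p, q):
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def peval(p, u):
    return sum(c * u ** i for i, c in enumerate(p))

# P_0 = 1;  P_{n+1}(u) = u^2 (P_n(u) - P_n'(u))
P = [[1.0]]
for _ in range(4):
    P.append([0.0, 0.0] + psub(P[-1], dpoly(P[-1])))

def a_deriv(n, x):
    # a^{(n)}(x) = P_n(1/x) e^{-1/x} for x > 0
    return peval(P[n], 1.0 / x) * math.exp(-1.0 / x)

# each a^{(n+1)} matches a central finite difference of a^{(n)}
h, x = 1e-5, 0.7
for n in range(4):
    fd = (a_deriv(n, x + h) - a_deriv(n, x - h)) / (2 * h)
    exact = a_deriv(n + 1, x)
    assert abs(fd - exact) < 1e-5 * (1 + abs(exact))

# a^{(n)}(x) -> 0 as x -> 0+, so a extends to a C^infinity
# function vanishing with all derivatives at 0
assert all(abs(a_deriv(n, 1e-3)) < 1e-100 for n in range(5))
```

For instance, P_1(u) = u^2, recovering the familiar a'(x) = e^{−1/x}/x^2.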
Proof. Let α = (α_1, ..., α_n) be a multi-index. For point 1, we note that ∂^α(f + g) = ∂^α f + ∂^α g, and thus the result follows immediately from the assumption. For the product f × g, the claim follows directly from the general Leibniz rule for multivariate functions (see e.g. [CS96, Proof of Lemma 2.6]): ∂^α(f × g) = Σ_{β ≤ α} (α choose β)(∂^β f)(∂^{α−β} g).

Lemma 19 Suppose that f : R^j → R^n and g : R^k → R^j are C^∞ functions with the following properties: 1. f and its partial derivatives are polynomially bounded on its arguments z_1, ..., z_j; 2. g and its partial derivatives are bounded by a constant on the variables x_1, ..., x_i and bounded by a polynomial on the variables x_{i+1}, ..., x_k.
Then f • g is a C ∞ function with the property that f • g as well as all its partial derivatives are bounded by a constant on the variables x 1 , . . ., x i and by a polynomial on the variables x i+1 , . . ., x k .
Proof. To prove this lemma, we will use a multivariate version of the Faà di Bruno formula, which allows us to compute the higher-order partial derivatives of the composition of the multivariate functions f and g. Some notation is in order first. Let α = (α_1, ..., α_k) ∈ N_0^k be a multi-index. Following the approach of [Ma09], let us assume that χ^0 = 1 regardless of whether χ is a number or a differential operator. We say that a multi-index α can be decomposed into s parts p_1, ..., p_s ∈ N_0^k with multiplicities m_1, ..., m_s ∈ N_0^j if the decomposition α = |m_1| p_1 + ... + |m_s| p_s holds and all parts are different. In this case the total multiplicity is defined as m = m_1 + ... + m_s. The list (s, p, m) is called a decomposition of α. Then, assuming that z = f ∘ g(x) and y = g(x), the Faà di Bruno formula expresses ∂^α z as a sum, over the set D of all decompositions of α, of products of partial derivatives of f evaluated at y with partial derivatives of g evaluated at x. From this latter formula and the hypotheses on f and g we conclude that f ∘ g is a C^∞ function with the property that f ∘ g, as well as all its partial derivatives, is bounded by a constant on the variables x_1, ..., x_i and bounded by a polynomial on the variables x_{i+1}, ..., x_k.

Since the function f_M of Theorem 1 can be written using only the following terms: variables, polynomial-time computable constants, +, −, ×, sin, cos, arctan, we conclude from Lemmas 18 and 19 that f_M, as well as all its partial derivatives, is bounded by a polynomial. Furthermore, since the function g_M from Theorem 9 is obtained using the functions Υ_3, f_M, Ω_{3,1}, Ω_{3,2}, Ω_{3,3}, σ, again by Lemmas 18 and 19 it suffices to show that Υ_3, Ω_{3,1}, Ω_{3,2}, Ω_{3,3}, σ, as well as their partial derivatives, are (or can be made) bounded by a polynomial. The case of the function σ of Lemma 5 is immediate from Lemmas 18 and 19. The case of Υ_3 can be treated by replacing Ψ(x, y) in the expression of Υ_3 given in the proof of Proposition 7 by the C^∞ function r, as explained above. It then follows immediately that Υ_3 maintains its properties given by Proposition 7, except analyticity (Υ_3 will only be C^∞), while Lemmas 18 and 19 imply that Υ_3 and its partial derivatives are polynomially bounded.
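The closure claim for f_M can be illustrated on a small example. If p is a polynomial, then h = sin ∘ p has h' = p'·(cos ∘ p) and h'' = p''·(cos ∘ p) − (p')²·(sin ∘ p), so |h'| ≤ |p'| and |h''| ≤ |p''| + (p')², polynomial bounds of exactly the kind produced by Lemmas 18 and 19. The Python sketch below checks these bounds, and the analytic derivatives against finite differences, on a grid, for the hypothetical choice p(x) = x³ − 2x.

```python
import math

p   = lambda x: x ** 3 - 2 * x        # sample polynomial
dp  = lambda x: 3 * x ** 2 - 2
ddp = lambda x: 6 * x

h   = lambda x: math.sin(p(x))        # composition sin(p(x))
dh  = lambda x: dp(x) * math.cos(p(x))
ddh = lambda x: ddp(x) * math.cos(p(x)) - dp(x) ** 2 * math.sin(p(x))

eps = 1e-6
for i in range(-50, 51):
    x = i / 10.0                      # grid on [-5, 5]
    # analytic derivative agrees with a central finite difference
    fd = (h(x + eps) - h(x - eps)) / (2 * eps)
    assert abs(fd - dh(x)) < 1e-4 * (1 + abs(dh(x)))
    # polynomial bounds on the derivatives of the composition
    assert abs(dh(x)) <= abs(dp(x)) + 1e-12
    assert abs(ddh(x)) <= abs(ddp(x)) + dp(x) ** 2 + 1e-12
```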
The situation for Ω_{3,1}, Ω_{3,2}, Ω_{3,3} is more subtle, as the arguments of these functions are passed as initial conditions of an ODE. We will construct new C^∞ functions Ω̄_{3,1}, Ω̄_{3,2}, Ω̄_{3,3} such that these new functions maintain the useful properties of Ω_{3,1}, Ω_{3,2}, Ω_{3,3} on the one hand and, on the other hand, are, together with their partial derivatives, polynomially bounded. Once this is done, the old functions Ω_{3,1}, Ω_{3,2}, Ω_{3,3} can be replaced by the new functions Ω̄_{3,1}, Ω̄_{3,2}, Ω̄_{3,3} in the expression of g_M in the proof of Theorem 9. The function g_M, as well as its partial derivatives, is then polynomially bounded on all variables except the time t. But since the time variable only appears inside the functions φ_1 and φ_2 defined by (17), in the form of sin(2πt) as defined in (15), and sin(2πt) and its derivatives are obviously bounded by a constant, it follows that the theorem below holds.
Theorem 20 Let ψ : N → N be the transition function of a Turing machine M, under the encoding described in Section 4. Then there exist 0 < η < 2/5 and a C^∞ function g_M : R^3 → R^2 such that the ODE z' = g_M(t, z) simulates M in the following sense: for all x_0 ∈ N which encodes a configuration according to the encoding described above, if x̄_0, ȳ_0 ∈ R satisfy the conditions |x̄_0 − x_0| ≤ 1/5 and |ȳ_0 − x_0| ≤ 1/5, then the solution z(t) of z' = g_M(t, z), z(0) = (x̄_0, ȳ_0) satisfies |z_2(t) − ψ^{[j]}(x_0)| ≤ 1/5 for all j ∈ N_0 and for all t ∈ [j, j + 1/2], where z(t) ≡ (z_1(t), z_2(t)) ∈ R^2. Furthermore, g_M(t, z_1, z_2) and its partial derivatives are polynomially bounded on z_1 and z_2 and bounded by a constant on t.
Theorem 21 Let M be a Turing machine. Then one can compute, from the function g_M given by Theorem 20, a C^∞ vector field F defined on S^2 which also simulates M.
Proof. Immediate from Theorems 17 and 20.

It remains to show that the new C^∞ functions Ω̄_{3,1}, Ω̄_{3,2}, Ω̄_{3,3} can be constructed so that they retain the useful properties of Ω_{3,1}, Ω_{3,2}, Ω_{3,3}, while these new functions and their partial derivatives are polynomially bounded. We begin with a preliminary lemma.
Lemma 22 Let f : R^{n+1} → R^n be a C^∞ function such that f and its partial derivatives are polynomially bounded. Suppose that x_f : R → R is the first coordinate of a solution x of the IVP x' = f(t, x), x(0) = x_0. If x is polynomially bounded, then all the derivatives of x_f are polynomially bounded.
Proof. We show the result by induction on the order of the derivative, by showing that x^{(k)} = f_k(t, x(t)), where f_k is a C^∞ function which is polynomially bounded, as well as all its partial derivatives. We then conclude by Lemma 19 that x^{(k)} is polynomially bounded, as well as all its partial derivatives. The base case is trivial. Let us now assume that x^{(k)} = f_k(t, x(t)), where f_k is polynomially bounded as well as all its partial derivatives. Then, differentiating componentwise, x^{(k+1)}_l = ∂f_{k,l}/∂t (t, x(t)) + Σ_{i=1}^n ∂f_{k,l}/∂x_i (t, x(t)) f_i(t, x(t)). By taking f_{k+1} = (f_{k+1,1}, ..., f_{k+1,n}) with these components, we conclude that x^{(k+1)} = f_{k+1}(t, x(t)), and by Lemmas 18 and 19 we conclude that f_{k+1} is polynomially bounded as well as all its partial derivatives, thus showing the result.
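The recurrence in the proof can be checked numerically in a scalar example. With the hypothetical right-hand side f(t, x) = sin t + cos x (polynomially bounded, with polynomially bounded derivatives), the proof gives f_1 = f and f_2 = ∂f/∂t + (∂f/∂x)·f, so x''(t) = f_2(t, x(t)) along any solution. The Python sketch below integrates the IVP with a Runge–Kutta scheme and compares a finite difference of x' with f_2.

```python
import math

f  = lambda t, x: math.sin(t) + math.cos(x)            # f_1 = f
f2 = lambda t, x: math.cos(t) - math.sin(x) * f(t, x)  # f_2 = df/dt + (df/dx) f

def solve(t_end, x0=0.0, n=20000):
    # classical RK4 for x' = f(t, x), x(0) = x0
    dt, t, x = t_end / n, 0.0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + dt / 2, x + dt / 2 * k1)
        k3 = f(t + dt / 2, x + dt / 2 * k2)
        k4 = f(t + dt, x + dt * k3)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

t, h = 1.0, 1e-4
xm, xc, xp = solve(t - h), solve(t), solve(t + h)
# x''(t) ~ (x'(t+h) - x'(t-h)) / (2h), with x'(t) = f(t, x(t))
fd = (f(t + h, xp) - f(t - h, xm)) / (2 * h)
assert abs(fd - f2(t, xc)) < 1e-6
```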
From here we see that the graphs of J_{2,1} and J_{2,2}, provided in Figures 2 and 3, respectively, have certain regularities which will be exploited to obtain Ω̄_{2,1} and Ω̄_{2,2}. Let us start with the case of Ω̄_{2,1}. We first analyze the behavior of J_{2,1}.
Let us suppose that the argument z of J_{2,1}(z) codes a pair (0, n) at the start of the diagonal with sum n. Then J_{2,1}(z) = 0, J_{2,1}(z + 1) = 1, ..., J_{2,1}(z + n) = n, J_{2,1}(z + n + 1) = 0, ... Thus, to simulate J_{2,1}, we need to track the sum s of the diagonal and increase it by one when J_{2,1}(z) reaches the value of s. On the next value z + 1, we will have that J_{2,1}(z + 1) takes the value 0 and, each time its argument z increases by one, J_{2,1}(z) also increases by one, until it reaches the (new) value of the sum of the diagonal and the cycle repeats itself. We will simulate this behavior with ODEs. Before showing how this can be done, consider the auxiliary function ξ : R → R defined from θ and a suitable normalization constant c_ξ. It is not difficult to see that θ(−(x − 1/4)(x − 3/4)) > 0 when 1/4 < x < 3/4. Hence we have that ξ(x) = 0 whenever x ≤ 1/4, 0 < ξ(x) < 1 when 1/4 < x < 3/4, and ξ(x) = 1 when x ≥ 3/4. Furthermore, ξ is C^∞ and ξ and all its derivatives are polynomially bounded due to Lemmas 18 and 19.
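A C^∞ step with exactly the properties stated for ξ can be written down explicitly. The Python sketch below uses a standard variant built from θ(x) = e^{−1/x} for x > 0 (and 0 otherwise): ξ(x) = θ(x − 1/4)/(θ(x − 1/4) + θ(3/4 − x)). This is not necessarily the normalization used in the paper (whose constant c_ξ is not reproduced here), but it satisfies the same defining properties: ξ = 0 below 1/4, ξ = 1 above 3/4, and ξ strictly between 0 and 1 in between.

```python
import math

def theta(x):
    # C-infinity ingredient: e^{-1/x} for x > 0, 0 otherwise
    return math.exp(-1.0 / x) if x > 0 else 0.0

def xi(x):
    # smooth step: 0 for x <= 1/4, 1 for x >= 3/4, monotone in between
    a, b = theta(x - 0.25), theta(0.75 - x)
    return a / (a + b)

assert xi(0.1) == 0.0 and xi(0.25) == 0.0
assert xi(0.75) == 1.0 and xi(0.9) == 1.0
assert 0.0 < xi(0.5) < 1.0
assert abs(xi(0.5) - 0.5) < 1e-12          # symmetric around 1/2

# monotone nondecreasing on a grid
vals = [xi(i / 1000.0) for i in range(1001)]
assert all(v2 >= v1 for v1, v2 in zip(vals, vals[1:]))
```

The denominator never vanishes: for every x at least one of θ(x − 1/4), θ(3/4 − x) is strictly positive, which is why the quotient is globally C^∞.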
Let us now present the ODE (27) which will define Ω̄_{2,1}. It uses the variables x_1, s_1 together with "memory" variables x_2, s_2, which hold the values of x_1 and s_1 on appropriate half-unit intervals, so that the memorized values of x_2 and s_2 can be used when updating x_1 and s_1. We note that x_1 must be increased by one unit from its previous value (stored in x_2) until it reaches the value of the sum of the diagonal, which is stored in s_2. At that moment we will have ξ(r(s_2) − r(x_2)) = 0 in the equation for x_1 (if the value of x_2 is less than the value of the sum of the diagonal stored in s_2, then ξ(r(s_2) − r(x_2)) = 1 and x_1 is incremented by one) and x_1 will be reset to the value 0, starting the cycle again. The analysis for s_1 is similar: its value will be essentially constant as long as ξ(r(x_2) + 1 − r(s_2)) = 0, which happens when r(s_2) − 1 ≥ r(x_2). When r(x_2) = r(s_2), we will have ξ(r(x_2) + 1 − r(s_2)) = 1 and s_1 will be incremented by one from its previous value. We can see that the ODE (27) behaves as desired and that we can take Ω̄_{2,1}(t) = x_2(t). The right-hand sides of (27), as well as their derivatives, are polynomially bounded, and (x_1, x_2, s_1, s_2) is also polynomially bounded, which implies by Lemma 22 that Ω̄_{2,1} and all its derivatives are polynomially bounded. The case of J_{2,2} is similar. Let us suppose that the argument z of J_{2,2}(z) codes a pair (0, n) at the start of the diagonal with sum n. Then J_{2,2}(z) = n, J_{2,2}(z + 1) = n − 1, ..., J_{2,2}(z + n) = 0, J_{2,2}(z + n + 1) = n + 1, J_{2,2}(z + n + 2) = n, ... Thus, to simulate J_{2,2}, we again need to track the sum s of the diagonal, but now we need to increase it by one when J_{2,2}(z) reaches the value 0.
On the next value z + 1, we will have that J_{2,2}(z + 1) takes the value s + 1 and, each time its argument z increases by one, J_{2,2}(z) decreases by one, until it reaches 0 and the cycle repeats itself. This behavior can be simulated in a way similar to (27) by the ODE (28). An analysis of (28), similar to that of (27), allows us to conclude that we can take Ω̄_{2,2}(t) = x_2(t) in (28). This concludes the proof of Theorem 20.
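The behavior of J_{2,1} and J_{2,2} just described is exactly the inverse of the standard pairing of N² along diagonals, and the diagonal-tracking scheme implemented continuously by (27) and (28) can be prototyped discretely. The Python sketch below generates the pairs (J_{2,1}(z), J_{2,2}(z)) both by the tracking rule from the text (increment until the diagonal sum is reached, then reset) and by a closed form using triangular numbers, and checks that the two agree.

```python
def pairs_by_tracking(count):
    # follow the text: x plays the role of J_{2,1}, s tracks the diagonal
    # sum, and J_{2,2} = s - x; when x reaches s, reset x, increment s
    out, x, s = [], 0, 0
    for _ in range(count):
        out.append((x, s - x))
        if x == s:
            x, s = 0, s + 1
        else:
            x += 1
    return out

def pair_closed_form(z):
    # the diagonal sum s is the largest s with s(s+1)/2 <= z
    s = 0
    while (s + 1) * (s + 2) // 2 <= z:
        s += 1
    k = z - s * (s + 1) // 2
    return (k, s - k)

N = 200
tracked = pairs_by_tracking(N)
assert tracked == [pair_closed_form(z) for z in range(N)]

# matches the values listed in the text: at the start of the diagonal
# with sum n, J_{2,2} takes the value n and then decreases to 0
assert tracked[:6] == [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```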
7 Can one-dimensional ODEs simulate Turing machines?
As we have seen in the previous section, analytic two-dimensional ODEs can robustly simulate Turing machines.But what about one-dimensional ODEs?
In this section we show that no one-dimensional autonomous ODE can simulate a universal Turing machine under some reasonable conditions. First let us give a more precise meaning to the notion of an ODE simulating a Turing machine. Let M be a Turing machine. Since ODEs are defined on R^k, to simulate the Turing machine M with an ODE we first need to encode each configuration of M as a point of R^k. However, since the coding of a configuration might not be unique, as happens in the previous sections, we map each configuration to a set of possible encodings of that configuration. Hence we consider a map χ which maps configurations of M into non-empty subsets of R^k. Then, given a configuration c_M, any point of χ(c_M) is assumed to represent the configuration c_M. In this manner we can consider the case where c_M is represented by a single point of R^k (when χ(c_M) is a singleton) or where c_M is represented by several points of R^k. For example, in Theorem 9 we have assumed that any point in a 1/5-vicinity of I_3(y_1, y_2, q), where y_1 and y_2 are given by (2), represents the configuration c_M which is encoded by I_3(y_1, y_2, q), i.e. χ(c_M) consists of all points at distance at most 1/5 from I_3(y_1, y_2, q).
Note that it makes sense to assume that if c_M and c'_M are distinct configurations, then χ(c_M) ∩ χ(c'_M) = ∅. However, this assumption may be too weak since, even if χ(c_M) ∩ χ(c'_M) = ∅, nothing prevents e.g. that χ(c_M) and χ(c'_M) are fractal (e.g. Cantor-like) sets which are intermingled and thus very hard to separate in practice. To avoid such undesirable instances, we impose a natural separation condition on χ, so that χ(c_M) and χ(c'_M) are separated by disjoint open subsets of R^k. More precisely, let {c_i}_{i∈N} denote all configurations of a given Turing machine M (recall that a Turing machine has at most countably many configurations). Then we assume that there are two computable maps a : N → Q^k, r : N → Q such that for all i ∈ N, χ(c_i) ⊆ B(a(i), r(i)) = {x ∈ R^k : ‖x − a(i)‖ ≤ r(i)} and, moreover, if i ≠ j then B(a(i), r(i)) ∩ B(a(j), r(j)) = ∅. Now we say that the ODE y' = f(y) (29), where f : R^k → R^k, simulates a Turing machine with the coding χ if, given an arbitrary configuration c_0 of M and some point y_0 ∈ χ(c_0), the solution y of (29) with initial condition y(0) = y_0 satisfies y(n) ∈ χ(ψ^{[n]}(c_0)) for all n ∈ N, where ψ is the transition function of M. In the following, we show that no one-dimensional ODE can simulate a universal Turing machine under this separation condition.
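The separation condition can be made concrete with the encoding used earlier in the paper: configurations encoded as natural numbers i, with χ(c_i) the set of points at distance at most 1/5 from i. The Python sketch below (exact rational arithmetic, with the hypothetical choices a(i) = i and r(i) = 1/5, in dimension k = 1) checks that the balls B(a(i), r(i)) are pairwise disjoint and that membership of a rational point in a ball is decidable by a finite comparison.

```python
from fractions import Fraction

a = lambda i: Fraction(i)        # computable center of the i-th ball
r = lambda i: Fraction(1, 5)     # computable radius

def in_ball(x, i):
    # decidable membership test for rational x (k = 1 here)
    return abs(x - a(i)) <= r(i)

# pairwise disjointness: centers are >= 1 apart, radii sum to 2/5 < 1
for i in range(50):
    for j in range(i + 1, 50):
        assert abs(a(i) - a(j)) > r(i) + r(j)

# a point representing configuration c_3 lies in B(a(3), r(3)) and nowhere else
x = Fraction(3) + Fraction(1, 7)
assert in_ball(x, 3)
assert not any(in_ball(x, i) for i in range(50) if i != 3)
```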
Theorem 23 Let M be a universal Turing machine.Then no ODE y = f (y) can simulate M in the sense explained above, where f : R → R is a computable function with only isolated zeros.
Proof. Let M be a universal Turing machine. We may assume that it has only one halting state and that it cleans its tape immediately before halting. This implies that M has only one halting configuration c_k, k ∈ N; moreover, the problem of deciding whether M halts on input w, w ∈ Σ* = {0, 1}*, is undecidable. Let us assume, by contradiction, that there is an ODE (29) which simulates M, where f : R → R is a computable function. Then f must admit a zero in B_k = B(a(k), r(k)) = [a_k − r_k, a_k + r_k], where a_k = a(k) and r_k = r(k). Assume, for the sake of contradiction, that this is not the case. Then, since computable functions are continuous, either f(x) < 0 for all x ∈ B_k or f(x) > 0 for all x ∈ B_k. Moreover, since B_k is compact, it follows that min_{x∈B_k} |f(x)| = δ > 0. As a result, any solution starting in B_k must leave it in time ≤ 2r_k/δ and never return to B_k afterwards (note that a solution of (29) is a continuous function which must move continuously along the real line). But this is impossible because ψ^{[n]}(c_k) = c_k for all n ∈ N and (29) simulates M. Hence B_k must contain at least one zero x_k, which is computable because f is computable and the zeros of f are isolated (it is well known that isolated zeros of computable functions are computable; see e.g. [BHW08, Theorem 7.8]).
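The fact used at the end of the argument, that an isolated zero of a computable function is computable, is effectively witnessed by bisection: once a sign change brackets the zero, each halving of the interval yields one more bit of its expansion. The Python sketch below illustrates this with the hypothetical choice f(x) = x³ − 2, approximating its zero to any requested precision.

```python
def bisect_zero(f, lo, hi, prec):
    # assumes f(lo) < 0 <= f(hi); returns a point within prec of the zero
    while hi - lo > prec:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f = lambda x: x ** 3 - 2
root = bisect_zero(f, 1.0, 2.0, 1e-12)
assert abs(root - 2 ** (1 / 3)) < 1e-10
```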
Let w be some input with the property that M halts on w, and suppose that the initial configuration associated to w is c_{i_w}. Then c_{i_w} ≠ c_k (note that in a universal Turing machine the initial state cannot be a halting state). Hence, if y_0 ∈ χ(c_{i_w}), then y_0 ∉ B_k. Let us assume, without loss of generality, that y_0 < a_k − r_k. We note that the solution of the IVP (29) with y(0) = y_0 must reach B_k because M halts on input w. This implies that f(x) > 0 for all x ∈ [y_0, a_k − r_k). There are two cases to be considered: either f(x) > 0 for all x ∈ (−∞, a_k − r_k), in which case we take I := (−∞, a_k − r_k], or f has a zero in (−∞, y_0); in the latter case, let x̄ be the largest such zero (which is computable, as above) and take I := (x̄, a_k − r_k]. If there is a word w such that M halts on input w with the property that there is some z ∈ χ(c_{i_w}) satisfying a_k < z, we repeat the above procedure on the half-line [a_k, +∞), obtaining I := I ∪ [a_k + r_k, +∞) provided that f(x) < 0 for all x ∈ (a_k + r_k, +∞), or I := I ∪ [a_k + r_k, x̄) provided that f(x̄) = 0 for some x̄ ∈ (a_k + r_k, +∞), where x̄ is obtained similarly as in the previous case.
From the arguments above, we conclude that M halts on word w iff χ(c_{i_w}) ⊆ I. We will use this fact to show that the halting problem is decidable, a contradiction. Let us assume, without loss of generality, that I = (x̄_1, x̄_2) (the cases where one or both extremities of I are unbounded are dealt with similarly). Suppose first that for all i ∈ N, {x̄_1, x̄_2} ∩ B_i = ∅. Then, to decide whether M halts on input w, proceed as follows. Given the initial configuration c_{i_w} associated to input w, test whether a_{i_w} ∈ I by testing whether x̄_1 < a_{i_w} < x̄_2. Notice that, since {x̄_1, x̄_2} ∩ B_{i_w} = ∅, this test can be done in finite time. If the test succeeds, then accept w; otherwise reject it.
Let us now suppose that x̄_1 ∈ B_{j_1} and x̄_2 ∈ B_{j_2} (the cases where (i) x̄_1 ∈ B_{j_1} and for all i ∈ N, x̄_2 ∉ B_i, or (ii) x̄_2 ∈ B_{j_2} and for all i ∈ N, x̄_1 ∉ B_i, can be treated similarly). We can then "wire" into the above algorithm for deciding the halting problem the correct answers for the two special cases where i_w = j_1 or i_w = j_2. More specifically, given the initial configuration c_{i_w} associated to input w, test whether i_w = j_1. If the test i_w = j_1 succeeds, then accept (respectively, reject) if M halts (respectively, does not halt) starting from configuration c_{j_1}. Otherwise, test whether i_w = j_2. If the test i_w = j_2 succeeds, then accept (respectively, reject) if M halts (respectively, does not halt) starting from configuration c_{j_2}. If both tests fail, test whether a_{i_w} ∈ I by testing whether x̄_1 < a_{i_w} < x̄_2. Notice that in this case we must have {x̄_1, x̄_2} ∩ B_{i_w} = ∅, and thus this test can be done in finite time. If the test succeeds, then accept w; otherwise reject it.
In other words, if the ODE (29) simulates M , then we can decide the halting problem, a contradiction.

Figure 1: Iterating the exponential function 2^x with an ODE.

Figure 2: Graph of the function J_{2,1}. Since the function is discrete, the image consists only of the blue points (the red line is given as a visualization helper).
Figure 3: Graph of the function J_{2,2}.