1 Introduction and background

This study deals with the complementary influence of different theoretical approaches on the research process in mathematics education. Different theoretical frameworks might provide different insights into the description of processes that accompany the emergence of new mathematical knowledge structures. In mathematics education, we might need different types of theories (Boero et al. 2002). Sometimes data from empirical research are difficult to discuss and interpret within a single theoretical frame (Arzarello and Olivero 2006). Therefore, the question of how to deal with the diversity of theories in mathematics education is of interest (see, for example, English et al. 2005; Bikner-Ahsbahs and Prediger 2006).

Discussions on theoretical perspectives in mathematics education were initiated at the Fourth Congress of the European Society for Research in Mathematics Education, CERME 4, and continued at CERME 5. The participants compared, contrasted and combined in a coherent way different theoretical frameworks currently used in mathematics education, with the eventual aim of networking between theories (Artigue, Bartolini-Bussi, Dreyfus, Gray and Prediger 2006; Arzarello, Bosch, Lenfant and Prediger 2008). For example, Kidron, Lenfant, Bikner-Ahsbahs, Artigue and Dreyfus (2008) compare and contrast how social interactions are taken into account in different theoretical frameworks. In the present study, the direction is different: the subject matter is not how to deal with the diversity of theories in mathematics education. Rather, the focus on theories and the search for complementarities is justified by real research concerns, concerns that relate to the conceptualization of the notion of limit. Indeed, this paper reflects many years of my own research on the conceptualization of the notion of limit, and the focus on different theories reflects the evolution of this research. Like some other researchers, I have been involved in research concerning students’ understanding of mathematical concepts that relate to the conceptualization of the continuous, such as the real numbers and the notion of limit. There is general agreement in the literature about the difficulties experienced by students in learning some of these key concepts in analysis, especially those, like the limit concept, that are related to infinite processes (see, for example, Courant and Robbins 1941; Sierpinska 1985; Davis and Vinner 1986; Cottrill et al. 1996; Cornu 1991; Tall 1992).

The process-object theory was my first theoretical background. The lenses offered by this framework helped me to understand the cognitive difficulties that accompany the learning of the limit concept. These lenses highlight students’ dynamic process view in relation to concepts such as limit and infinite sums, and they were influential in my research. Later, Gray and Tall (1994) introduced the notion of procept, referring to the manner in which learners cope with symbols representing both mathematical processes and mathematical concepts. In previous studies concerning the way students conceive rational and irrational numbers, infinite decimals were viewed as potentially infinite processes rather than as number concepts. Monaghan (1986) observed that students’ mental images of both repeating and non-repeating decimals often represent “improper numbers which go on for ever”. Kidron and Vinner (1983) observed that the infinite decimal is conceived as one of its finite approximations (“three digits after the decimal point are sufficient, otherwise it is not practical”), or as a dynamic creature caught in an unending (a potentially infinite) process: at each stage, the precision is improved by one more digit after the decimal point. Research using the process-object approach, and later the notion of procept, helped me to understand the cognitive difficulties that are inherent to the epistemological nature of the mathematics domain. It helped me, essentially, to understand that students view the limit concept as a potentially infinite process. Like other mathematics education researchers, I tried again and again different directions to help students overcome the conceptual difficulties that accompany the understanding of the limit concept. In the last decade, the use of technology, especially some sophisticated Computer Algebra Systems (CAS), has offered new means in the effort to overcome some of these conceptual difficulties. The next step was an effort to answer an important question: how to use the technology, and in particular the discrete-continuous interplay that the technology makes possible, to enhance students’ mathematical thinking processes. In my prior research, the discrete-continuous interplay was first used to develop visual intuitions that support the formal statement of the theory. In that research, the procept theory was a relevant framework for considering the use of technology. See, for example, Kidron (2003a), which deals specifically with the conceptual understanding of the convergence process obtained by approximating a function by means of polynomials.

In these prior studies, I realized that even when students did succeed in visualizing formal definitions, old conceptions about infinite sums and limits reappeared. This persistence of old conceptions led me to a different, specific usage of the discrete-continuous interplay, which will be described in this paper as the substance of the empirical study. Only then will I be in a position to explain why the substantive research problem, and the issues arising during the research process, justified the decision to consider two additional theoretical lenses, namely the instrumentation theory and the model of Abstraction in Context (AiC).

Meanwhile, I will mention that concerns about students’ cognition led me towards this research study: the ways towards, and the obstacles to, the conceptualization of limits are the core of the empirical study. I highlight my focus on cognition because this focus will serve as a common point among the three frames. The procept theory deals with the cognitive difficulties that accompany the understanding of some mathematical domains. In the description of the empirical study, we will observe that the instrumental approach is also used as a cognitive frame, to understand the complexity of the “instrumentation process” (i.e., how the tool becomes an effective instrument of mathematical thinking for the learner). Finally, the new lenses offered by the model of abstraction in context will be used essentially to address students’ cognitive processes, especially the inner details of students’ construction of knowledge. Concerns about students’ cognition serve as a common point among the three frames, but these concerns are expressed in different ways in the different frameworks. In the following sections, I will sharpen the discussion by highlighting the differences between the frames, aiming towards a basis for complementarity.

In the next section, I describe the essence of the three different theoretical frameworks. In Section 3, I describe the substance of the empirical study and the role of the procept theory in the design part. In Section 4, I first explain why the two additional frameworks were required and then analyze their specific influence on the research process. Section 5 is devoted to the additional insights offered by each framework to the two others. The concluding section emphasizes the complementary role of the three theoretical approaches.

2 The theoretical frameworks

2.1 The procept theory

The theory focuses on the relationship between mathematical processes, objects and symbols that dually evoke both mathematical processes and mathematical concepts. The notion of procept is present throughout a large portion of mathematics. Gray and Tall (2002) focus on the abstractive processes occurring in constructing procepts in arithmetic, algebra and symbolic calculus and how different types of symbol (whole numbers, fractions, algebraic expressions, (infinite) decimals, limits) give rise to distinct problems of concept construction and re-construction. Thus, each new form of procept has its own cognitive difficulties. Gray and Tall (2002) believe that knowledge of these specific difficulties and awareness of the underlying cognitive structures is essential to help a wider spectrum of students to succeed in the longer-term process of successive abstractions. For the limit procept, the accent is on the dual nature of the specific mathematical concept. The procept theory is used as a cognitive frame but we may also discern its strong epistemological basis: processes and objects are mathematical constructs as well as cognitive entities.

2.2 Instrumentation theory

Researchers have reflected on issues of ‘instrumentation’ and the dialectics between conceptual and technical work in mathematics—see for example Artigue (2002), Guin and Trouche (1999), Lagrange (1999), Trouche (2004). The instrumentation theory is well described in Monaghan (2007). In the present paper, I view the instrumental approach as a specific approach built upon the instrumentation theory developed by Vérillon and Rabardel (1995) in cognitive ergonomics. The “instrument” is differentiated from the object, material or symbolic, on which it is based and for which the term “artifact” is used. An instrument is a mixed entity, in part an artifact, and in part cognitive schemes which make it an instrument. For a given individual, the artifact becomes an instrument through a process, called instrumental genesis (Artigue 2002). Trouche (2005) emphasizes the subject-tool dialectic. The term instrumentation is used to denote how the tool shapes the actions of the tool-using subject and instrumentalisation is used to denote the ways the subject shapes the tool. Artigue adds that it is necessary to identify the new potentials offered by instrumented work, but she also stresses the importance of identifying the constraints induced by the instrument. Instrumentation theory also deals with the unexpected complexity of instrumental genesis. Students’ cognitive processes of construction of knowledge arise during the complex process of instrumental genesis in which they transform the artifact into an instrument that they integrate with their activities.

2.3 The AiC model of Abstraction in Context

Hershkowitz, Schwarz and Dreyfus (2001) have proposed a model of dynamically nested epistemic actions for processes of abstraction in context (AiC). The model is based on three epistemic actions: recognizing, building-with and constructing. Recognizing a familiar mathematical construct occurs when a student realizes that the construct is inherent in a given mathematical situation. Building-with consists of combining existing artifacts in order to meet a goal such as solving a problem or justifying a statement. Taking into consideration the consolidation phase, which will be described below, the authors of the theory denote the model’s epistemic actions as RBC + C. The first C, constructing, is the main step of abstraction. It consists of assembling knowledge artifacts to produce a new mental construct with which students become acquainted. When we analyze constructing, R (recognizing) and B (building-with) refer to previous constructs. C (constructing) necessarily refers to a new construct, and therefore to a construct that is different from all those to which R and B refer. The natural order is R, then B (possibly with further R), usually with many Rs and Bs; C is the global process comprising all (or most) of these Rs and Bs together (sometimes C does not start at the beginning of the activity). There is usually no single action or utterance that can be referred to as C, but sometimes there is a single action or utterance which shows that C has taken place; such an action or utterance can then be considered the place where the C-action terminated.

The second C in RBC + C denotes the consolidation of the abstract entity through repeated recognition of the new construct and building-with it in further activities. Hershkowitz (2004) pointed out that knowledge might be constructed that remains available only for a short while; at a later stage, the student may not recognize it as an already existing construct—no consolidation of this short-term construction has occurred. A mental construct that has not been consolidated is likely to be fragile. Tabach, Hershkowitz and Schwarz (2006) point out the necessity of the consolidation of constructs. Dreyfus and Tsamir (2004) identify modes of thinking that take place in the course of consolidation. Monaghan and Ozmantar (2006a) note that the fragility of new constructions makes students reluctant to use them to counter challenges. They also note that in the course of consolidation, students begin to resist challenges by establishing interconnections between the new constructions and established mathematical knowledge and by reasoning with these constructions. In the analysis of consolidating, recognizing, building-with or constructing actions, cognitive aspects of learning mathematics are the focus of the model. The nested epistemic actions model of abstraction in context has the purpose of providing a fine-grained analysis of the processes by which abstract mathematical knowledge constructs emerge in learners.

3 The empirical study: the procept theory and the design part of the study

In the following, I describe the substance of the empirical study. A preliminary and partial description can be found in Kidron (2003b).

3.1 The procept theory as the theoretical background for the research study

Previous research has used the procept theory to describe the cognitive difficulties that accompany the learning of the limit concept. These studies, and others described in this section, highlight students’ dynamic process view in relation to concepts such as limit and infinite sums, and they were influential in defining the aim of the empirical study. Tall (2000) theorized that the concept of limit is accompanied by cognitive difficulties because it conflicts with students’ previous experience of symbols as procepts. In arithmetic, symbols have built-in computational processes to ‘give an answer’. In school algebra, the symbols are algebraic expressions that are potential procepts; the operation (of evaluation) can only be carried out when the variables are given numerical values. Tall stressed that in the calculus the situation changes with limit processes, which are potentially infinite and so give rise to a limit concept approached by an infinite process that usually has no finite procedure of computation. In the case of the limit concept, a procept may exist which has both a process (tending to a limit) and a concept (of limit), yet there is no procedure to compute the desired result. Tall (1992) emphasized that the ideas of limits and infinity, which are often considered together, relate to different and conflicting paradigms. He illustrated this argument by analyzing students’ answer that 1 + 1/2 + 1/4 + 1/8 + … is 2 − 1/∞ ‘because there is no end to the sum of segments’ (Fischbein, Tirosh and Hess 1979). Tall explained that here it is the potential infinity of the limiting process that leads to the belief that any property common to all terms of a sequence also holds for the limit. In this case, the suggested limit is typical of all the terms: just less than 2.
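To spell out the reasoning behind such an answer (a standard computation added here for clarity, not part of the original data), note that every partial sum of the series falls short of 2 by exactly its last term:

$$s_n = 1 + \frac{1}{2} + \frac{1}{4} + \dots + \frac{1}{2^n} = 2 - \frac{1}{2^n} < 2 \quad \text{for every finite } n,$$

so transferring the property “just less than 2”, common to all partial sums, onto the limit produces an answer such as 2 − 1/∞, whereas in fact \(\lim_{n \to \infty} s_n = 2\).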

I summarize what we learned from this previous research:

  • The students viewed the limit concept as a potentially infinite process.

  • The students expressed their belief that any property common to all terms of a sequence also holds for the limit.

This natural way in which the limit concept is viewed might be an obstacle to the conceptual understanding of the limit notion in the definition of the derivative function f′(x) as \(\lim_{h \to 0} \left( f(x + h) - f(x) \right)/h\). For example, we may cite students’ difficulties in conceiving the concept of instantaneous velocity as the limit of average velocities. These difficulties are well described in Schneider (1992). Some of them might be related to the view of the infinite as a potential infinite.

The derivative might be viewed as a potentially infinite process of \(\left( f(x + h) - f(x) \right)/h\) approaching f′(x) for decreasing h. As a result of the belief that any property common to all terms of a sequence also holds for the limit, the limit might be viewed as an element of the potentially infinite process. In other words, \(\lim_{\Delta x \to 0} \Delta y/\Delta x\) might be conceived as Δy/Δx for a small Δx. How small? If we choose Δx = 0.016 instead of 0.017, what will be the difference? There is a belief that gradual causes have gradual effects and that small changes in a cause should produce small changes in its effect (Stewart 2001, p. 148). This intuition might explain the misconception that a change of, say, 0.001 in Δx will not produce a big change in its effect.
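For a smooth function this intuition is in fact nearly harmless, which is part of what makes it so robust. As an illustrative computation (my own, not taken from the study), consider f(x) = x²:

$$\frac{\Delta y}{\Delta x} = \frac{(x + \Delta x)^2 - x^2}{\Delta x} = 2x + \Delta x,$$

so replacing Δx = 0.017 by Δx = 0.016 changes the quotient by exactly 0.001, while no finite choice of Δx ever yields the limit 2x itself: the limit is a new entity, not a term of the process.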

The cognitive difficulties relating to the understanding of the definition of the derivative as a limit are reflected in the historical evolution of the concept. Kleiner (2001) suggested that before introducing rigorous definitions we have to demonstrate the need for higher standards of rigor and that this could be done by introducing counterexamples to plausible and widely held notions.

3.2 The substance of the empirical study

Finding such a counterexample, one that demonstrates that one cannot replace the limit \(\lim_{\Delta x \to 0} \Delta y/\Delta x\) by Δy/Δx for Δx very small, was crucial to my research focus. Such a counterexample demonstrates that the passage to the limit leads to a new entity and that, therefore, omitting the limit significantly changes the nature of the concept. It demonstrates that the limit cannot be viewed as an element of the potentially infinite process. In other words, such a counterexample might help to overcome the cognitive difficulties that were characterized through the lenses of the procept theory.

Such counterexamples exist in the field of dynamical systems. A dynamical system is any process that evolves in time. In a dynamical process that changes with time, time is a continuous variable. The mathematical model is a differential equation dy/dt = y′ = f(t, y), and we encounter again the derivative \(y' = \lim_{\Delta t \to 0} \Delta y/\Delta t\). When a numerical method is used to solve the differential equation, the variable “time” is discretized. My aim was that the students would realize that in some differential equations the passage to a discrete time model might totally change the nature of the solution. I also aimed to help students realize that gradual causes do not necessarily have gradual effects, and that a difference of 0.001 in Δt might produce a significant effect. In the following counterexample (the logistic equation), the analytical solution obtained by means of continuous calculus is totally different from the numerical solution obtained by means of discrete numerical methods. The essential point is that in the analytical solution the students use the concept of the derivative as a limit \(\lim_{\Delta x \to 0} \Delta y/\Delta x\), whereas in the discrete approximation by means of the numerical method they omit the limit and use Δy/Δx for small Δx. The aim of the empirical study is to analyze the effect of this specific discrete-continuous interplay on the students’ conceptual understanding of the limit in the definition of the derivative.
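For reference (a standard computation, stated here for concreteness rather than taken from the study), the logistic equation that serves as the counterexample has a closed-form analytical solution:

$$\frac{dy}{dt} = r\,y\,(1 - y), \quad y(0) = y_0 > 0 \quad \Longrightarrow \quad y(t) = \frac{y_0}{y_0 + (1 - y_0)\,e^{-rt}},$$

which tends monotonically to the equilibrium value 1 as t → ∞ for every positive initial value. It is against this tame continuous behavior that the discrete solutions are contrasted below.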

I aimed to examine the students’ reactions when they realized that the approximate solution of the logistic equation obtained by means of discrete numerical methods is so different from the analytical solution. My aim was that the students would reach the conclusion that passing to limits—or passing to discrete approximations—may change the nature of a problem significantly (Peitgen, Jurgens and Saupe 1992, pp. 295–301).

3.3 The procept theory and the design of the learning experiment: do gradual causes have gradual effects?

Previous procept-related research influenced the specific discrete-continuous interplay that motivated the design of the learning experiment.

First-year college students in a differential equations course (N = 60) participated in the research. The exercise sessions were held in PC laboratories equipped with MatLab and Mathematica.

The students were given the following task: a point \((t_0, y_0)\) and the derivative of the function, dy/dt = f(t, y), are given; plot the function y(t). The students were asked to find the next point \((t_1, y_1)\) by means of \((y_1 - y_0)/(t_1 - t_0) = f(t_0, y_0)\). As t increases by the small constant step \(t_1 - t_0 = \Delta t\), the students realized that they were moving along the tangent line in the direction of the slope \(f(t_0, y_0)\). The students generalized and wrote the algorithm for Euler’s method: \(y_{n+1} = y_n + \Delta t\,f(t_n, y_n)\). When asked how to approach the solution more closely, they proposed to choose a smaller step Δt.
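For readers who wish to experiment, the following is a minimal sketch of the algorithm the students wrote. It is given in Python for self-containedness; the students themselves worked in MatLab and Mathematica, so this code is illustrative, not the classroom material.

```python
def euler(f, t0, y0, dt, n_steps):
    """Euler's method for dy/dt = f(t, y), y(t0) = y0:
    at each step, move along the tangent line with slope f(t_n, y_n)."""
    t, y = t0, y0
    points = [(t, y)]
    for _ in range(n_steps):
        y = y + dt * f(t, y)   # y_{n+1} = y_n + dt * f(t_n, y_n)
        t = t + dt
        points.append((t, y))
    return points

# Example: dy/dt = y with y(0) = 1; the exact solution at t = 1 is e = 2.71828...
approx = euler(lambda t, y: y, t0=0.0, y0=1.0, dt=0.1, n_steps=10)
print(approx[-1])  # roughly (1.0, 2.5937...): a smaller dt improves the estimate
```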

The logistic equation dy/dt = r y(t)(1 − y(t)), y(0) = y_0, was introduced as a model for the dynamics of the growth of a population. The solution of this differential equation is different from the numerical solution of the discrete model equation

$$\left\{ \begin{array}{l} p(t + \Delta t) = p(t) + \Delta t \cdot r \cdot p(t)\left( 1 - p(t) \right) \\ p(0) = p_0 \end{array} \right.$$

in which the numerical solution substantially depends on the choice of the step size Δt. If we compare the numerical scheme \(p(t + \Delta t) = p(t) + \Delta t \cdot r \cdot p(t)\left( 1 - p(t) \right)\) with the original discrete Verhulst growth law \(p_{n+1} = p_n + r \cdot p_n \left( 1 - p_n \right)\), we note that these formulas coincide for Δt = 1. In other words:

The growth law for an arbitrary time step Δt, as described in the learning experience, reduces to the original Verhulst model if the expression Δt·r is replaced by the parameter r. An analytical solution exists for all values of the parameter r. The numerical solution, however, is totally different for different values of Δt, as we can see in the graphical representations of Euler’s numerical solution of the logistic equation with r = 18 and y(0) = 1.3 (see Fig. 1). In the first plot, the sequence representing the numerical solution has the same asymptotic behavior as the analytical solution, which tends to 1. In the second, third and fourth plots, the process is a periodic oscillation between two, four and eight levels, respectively. In the fourth plot, the points were not joined, so that this period doubling would be clearer. In the fifth and sixth plots, the logistic mapping becomes chaotic. We slightly decrease Δt in the seventh plot: for the first 40 iterations the logistic map appears chaotic, and then period 3 appears. As we increase Δt very gradually we get period 6 in the eighth plot and period 12 in the ninth plot: the belief that gradual causes have gradual effects is false!

Fig. 1 Graphical representations of Euler’s numerical solution of the logistic equation
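The qualitative changes shown in Fig. 1 are easy to reproduce. The following sketch (again in Python for illustration; the step sizes are chosen here to land in the convergent, period-2, period-4 and chaotic regimes, and are not necessarily the exact values used in the course) iterates the discrete logistic model with r = 18 and y(0) = 1.3:

```python
def euler_logistic(r, y0, dt, n_steps):
    """Iterate p <- p + dt*r*p*(1-p), i.e. Euler's method for the logistic equation."""
    p = y0
    orbit = [p]
    for _ in range(n_steps):
        p = p + dt * r * p * (1.0 - p)
        orbit.append(p)
    return orbit

r, y0 = 18.0, 1.3
for dt in [0.100, 0.125, 0.140, 0.160]:   # a = r*dt = 1.8, 2.25, 2.52, 2.88
    tail = euler_logistic(r, y0, dt, 400)[-8:]   # last 8 iterates
    print(f"dt = {dt}: {[round(p, 3) for p in tail]}")
# a = 1.8 : the orbit settles on 1, like the analytical solution;
# a = 2.25: periodic oscillation between two levels;
# a = 2.52: period four;
# a = 2.88: no periodicity is visible - the iteration has become chaotic.
```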

The phenomenon illustrated in Fig. 1 can be explained mathematically. If we consider sequences defined by \(p_{n+1} = f(p_n)\) with \(f : x \mapsto x + ax(1 - x)\), then 1 is a fixed point of f [i.e., f(1) = 1], which is attractive for 0 < a < 2 and repelling for a > 2, where \(a = r \cdot \Delta t\). For the first graph a < 2, and for the other graphs a > 2; the behavior of the sequences illustrated in Fig. 1 can then easily be understood. If we consider the iterated functions generated by f, we can link the number of attractive fixed points of these functions with the observations in the different plots.
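The attractivity claim follows from the standard derivative test for fixed points of an iteration (a routine verification, supplied here for completeness):

$$f(x) = x + ax(1 - x), \qquad f'(x) = 1 + a(1 - 2x), \qquad f'(1) = 1 - a.$$

A fixed point p of f is attractive when \(|f'(p)| < 1\) and repelling when \(|f'(p)| > 1\); here \(|f'(1)| = |1 - a| < 1\) exactly when 0 < a < 2, which, with \(a = r \cdot \Delta t\), is why only step sizes with \(\Delta t < 2/r\) reproduce the convergence of the analytical solution to 1.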

Students’ reactions were observed by means of questionnaires and interviews. In this learning experience, I aimed to develop students’ understanding of the need for the formal definition of the derivative as a limit. Only a small percentage (19%) of the students wrote explicitly, in their answers to the questionnaires, that the source of the error resides in the fact that in the numerical method \(\lim_{\Delta x \to 0} \Delta y/\Delta x\) is replaced by Δy/Δx for Δx very small. The other students looked at Euler’s algorithm and, even those who had previously defined the derivative correctly as a limit, did not succeed in identifying the source of error in the algorithm. On the other hand, students did notice that the error accumulates (22% wrote this explicitly), but they did not mention the source of the error in the numerical method. Moreover, 23% of the students’ answers included discrete-continuous considerations, with a well-developed qualitative approach to differential equations, adequate to explain why there is an error but inadequate to give a formal account of how the discrete method employed the derivative concept.

I observed that the students’ awareness of the limitations of the discrete methods did not guarantee their awareness of the fact that the numerical method does not use the derivative as a limit. The more detailed analysis in the next section will show how these observations led to the need for additional frameworks.

4 The two additional frameworks

4.1 The need for additional frameworks

In my previous research, in which I used Mathematica’s dynamic graphical capabilities to help students visualize the dynamic process of convergence, the procept theory was an adequate framework for considering the use of technology. Animations were used to represent the infinite process of the different approximating polynomials approaching a given function. The animation creates the impression of completing an ongoing, infinite process—the potentially infinite process of the different Taylor polynomials which “settle” on the function (Kidron 2003a).

4.1.1 What was different in the present study that justifies the need for the additional lenses of the instrumental approach?

Observing students’ difficulties with the potential and actual infinite led me to consider the possibilities offered by the discrete-continuous interplay. Indeed, the juxtaposition of actual and potential infinite has a counterpart in the continuous/discrete approach in computation. In the discrete-continuous interplay described in my previous studies, an effort was made to conceal the discrete nature of the computations: animation was used to create the impression of a continuous movement towards the limit function. The present empirical study is different. Instead of concealing the discrete nature of the computations, I consider a different specific use of the discrete-continuous interplay, one that highlights the discrete nature of the computations. My aim is to highlight the difference between the analytical-continuous solution and the solution of the same problem by means of discrete numerical methods. I am interested in the situation of conflict that results from contrasting the two approaches, and in the influence of these situations of conflict on students’ conceptual thinking processes. My hypothesis is that students’ analysis of the source of error is crucial to making them aware that in the continuous method the derivative was defined as a limit, in contrast to Euler’s numerical method, in which the derivative was a term of the sequence Δy/Δx for a “small” Δx. Moreover, the students were requested to differentiate between errors due to the constraints of the computer and errors due to the algorithm used in the numerical method.

In order to be able to perform the error analysis, the students had to experiment with different situations using discrete and continuous methods, to compare the solutions obtained by continuous methods (with symbolic computation commands) with those obtained by numerical methods, and to reflect on the algorithm used in the numerical method. Experimenting with different situations while using the numerical method, the students had to differentiate situations in which the numerical method works beautifully from situations in which it fails. In particular, in the case of the logistic equation, I wanted the students to become aware that something is wrong with the numerical method by means of theoretical results (a qualitative analysis) that help identify when the numerical method fails—for example, when the numerical solutions cross the equilibrium solutions. The investigation of this specific usage of the discrete-continuous interplay in the research process was made possible by attacking the logistic equation in three different registers: the analytic, the numeric and the qualitative. The qualitative register offers insight into the behavior of solutions; it involves graphics as well as a description of the long-term behavior of the solution. The specific discrete-continuous interplay was possible only by working in a flexible way in the three registers. And precisely when the computer permits students to experiment with a result obtained in a previous register (for example, the qualitative) in a new setting (for example, the numeric), it becomes an effective instrument of mathematical thinking for the learner. Moreover, obstacles which students encounter during this instrumentation process offer opportunities for learning, and what might be taken simply as ‘technical’ difficulties often have a wider ‘conceptual’ aspect (Guin, Ruthven and Trouche 2005). Therefore, in analyzing students’ cognitive processes of construction of knowledge in their error analysis of the discrete and continuous methods, I had to reverse the order of analysis that I had used in my previous research with the procept lenses: I had to analyze first the processes in which the computer becomes an effective instrument of mathematical thinking for the learner, and then to analyze the students’ construction of knowledge in terms of, and within, these processes. To attain this aim I needed a framework which distinguishes clearly between the artifact and the instrument. Moreover, I needed a framework which illuminates how the artifact becomes an instrument—a mixed entity, in part artifact and in part the cognitive schemes which make it an instrument. Quoting Vérillon and Rabardel (1995): “the instrument does not exist in itself, it becomes an instrument when the subject has been able to appropriate it for himself and has integrated it with his activity”. My focus was on cognition, and I aimed to analyze the students’ instrumental genesis—how the artifact becomes a psychological construct, an instrument.

4.1.2 Why did I decide to select the model of abstraction in context?

My starting theoretical approach, the procept theory, permits an epistemological analysis of the content under consideration. I realized the ensuing cognitive challenges offered by the mathematics, but I also realized the need for a systematic investigation of the knowledge construction. Even if the expected construct had not been reached, I was interested in the evolution of the process of construction and its connections with aspects of the context. Therefore, I realized the need for a fine-grained analysis of the cognitive processes that arise within the students’ instrumental genesis. These were lengthy processes, and I was interested in the inner details of the students’ construction of knowledge. I decided to analyze the knowledge construction of the individual, focusing on process aspects of the construction of knowledge constructs rather than on outcomes (Hershkowitz 2004). By analyzing each student’s answers to the different questionnaires, I aimed to examine the evolution of the student’s thinking before and after being confronted with the counterexample.

4.2 The influence of the two additional frameworks

I will present excerpts in relation to the influence of each of these two theories on the research study. Each theory has a different focus. Therefore, in order to make clearer the lenses offered by each theory, I chose appropriate excerpts from the data set of students’ answers to the questionnaires and interviews. The excerpts, although different, demonstrate that the mathematical meanings that are the source of the error in the numerical method might be indistinguishable at first glance: other effects might draw the students’ attention, and as a consequence the knowledge construction of the individual might be a lengthy process.

4.2.1 The complexity of the instrumentation process

In analyzing the influence of the procept theory, the emphasis is on the cognitive challenges that characterize the conceptualization of the limit. In analyzing the influence of the instrumentation theory, the emphasis is on the way we take advantage of the specific discrete-continuous interplay.

We can profit from the new potentials offered by instrumented work, for example by means of the discretization processes. But Artigue (2002) warns us that the learner needs more specific knowledge about the way the artifact implements these discretization processes. Thus, it is important to be aware of the complexity of the instrumentation process when we come to analyze the students’ reactions in the research study. By means of error analysis, I planned to help the students to better understand the continuous methods and the concept of limit. But when working with a Computer Algebra System there are other, unexpected effects that are directly linked with the “instrument” and the way it influences the students’ thinking. In addition to the error due to the discretization process, there are other sources of error that are directly related to the artifact. For example, an error could be the result of round-off in the computations, which becomes more pronounced because of the “cumulative effect” of the iterative numerical method. The students’ attention might be distracted by the round-off error, especially if they encountered this kind of error in previous experience with the computer. This happened to a student, Meira, who attributed the error to the round-off effect:

Meira I remember from an exercise in the calculus course that the solution with Matlab was 0 but the solution using the symbolic form was 0.5. When we tried to understand why this happened we realized that MatLab computes only 15 digits after the decimal point.

In the instrumentation approach, instrumental mediation is seen as an essential component of the learning process. In the specific research study, this approach indicates the need to help students analyze the different sources of error and differentiate between an error due to mathematical meanings, like the fact that the limit was omitted in Euler’s algorithm, and an error due to the characteristics of the artifact.
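To make the two sources of error concrete, the following sketch contrasts them on a simple equation (the equation dy/dt = y is an illustrative choice of mine, not a task from the study; the students worked in MatLab, whose doubles carry roughly the same 16 significant digits as Python floats):

```python
import math

def euler_final(f, t0, y0, dt, n_steps):
    """Final Euler value after n_steps of size dt."""
    t, y = t0, y0
    for _ in range(n_steps):
        y, t = y + dt * f(t, y), t + dt
    return y

# dy/dt = y, y(0) = 1: the exact value at t = 1 is e.
for n in (10, 100, 1000):
    err = abs(euler_final(lambda t, y: y, 0.0, 1.0, 1.0 / n, n) - math.e)
    print(f"n = {n:5d}  discretization error = {err:.6f}")
# The error shrinks roughly like 1/n: it comes from the algorithm itself
# (the limit in the derivative replaced by a finite difference quotient).
# Round-off only enters near machine precision (about 2.2e-16 per operation),
# far below the discretization error at these step sizes.
```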

4.2.2 The influence of the two frames, the model of abstraction in context and the instrumentation theory, on the research study

In the following, I will analyze the answers of a student named Nurit to the questionnaires, in terms of both the instrumental approach and the RBC + C epistemic actions, where the accent is on the second C, the consolidation process. We will follow Nurit’s construction of Euler’s numerical method, within which the consolidation of the derivative concept as a limit appeared.

Nurit’s construction of the error analysis is an instrumentation process. The instrumental approach permits an analysis of how the computer became an effective instrument of mathematical thinking for Nurit. The fine-grained analysis offered by the model of abstraction in context helps, essentially, to discern the consolidation process which took place within her instrumentation process.

To the question “In Euler’s method, if we attribute a very small value to the step Δt, for example, 0.02, can we be sure that we have a good approximation to the solution?”, Nurit answered that the smaller Δt, the better the approximation. At this first stage, Nurit identified the limit as a process and realized that a small Δt may not be small enough.

Nurit was also asked to express her opinion about the following statement:

“If in Euler’s method, using a step size Δt = 0.017 we get a solution very far from the correct solution, then a step size Δt = 0.016 will not produce a big improvement, maybe some digits after the decimal point and no more”.

This question was presented to the student before the introduction of the logistic equation. Nurit’s first reaction was:

It seems to me that if with Δt = 0.017 we didn’t get a good solution, then Δt = 0.016 will not produce a big improvement.

Then she changed her mind:

That was my first impression but a second look at the expression for \(y_{k+1}\) in Euler’s algorithm led me to the conclusion that the method is iterative, that is, on \(y_k\) we apply the algorithm in order to find \(y_{k+1}\) etc. etc. and after many iterations even a slightly smaller step size will produce a big improvement.

In this second stage, Nurit overcame the intuition that gradual causes have gradual effects by reflecting on the accumulating effect in the numerical solution. After being introduced to the logistic equation with its two different solutions, the analytical and the numerical, Nurit was asked to characterize the source of the error in Euler’s method. At that point, she remembered an exercise on the sensitivity of some differential equations to initial conditions and realized that in the continuous approach, too, small changes in a cause can produce large changes in its effect:

This week we worked an exercise that demonstrates that a change in the initial condition of a differential equation might cause a large change in the solution. When we choose an initial value y(0) = 1 + ɛ, the solution curve tends to ∞ when x→∞, but when the initial value was y(0) = 1 − ɛ, the solution curve tends to −∞. All this happened because of the term \(e^x\) in the solution. Therefore, in an expression like \(ce^x\) with x→∞, it is very significant whether c is positive or negative. Maybe the small error made in Euler’s method induced big changes in the graph of the solution curve in our case also.
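One differential equation consistent with Nurit’s description (the actual classroom exercise is not reported in the data, so this reconstruction is an assumption made for concreteness) is

$$\frac{dy}{dx} = y - 1 \quad \Longrightarrow \quad y(x) = 1 + c\,e^{x}, \qquad c = y(0) - 1,$$

so for y(0) = 1 + ɛ the coefficient c = ɛ > 0 and the solution tends to ∞, while for y(0) = 1 − ɛ we get c = −ɛ < 0 and the solution tends to −∞: the sign of an arbitrarily small coefficient decides the long-term behavior.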

The influence of the interaction with the computer on cognition is strong: even when the computer is turned off, the mental image of the solution tending to infinity because of a small change in the initial value accompanies Nurit’s thinking. In this third stage, Nurit realized that small changes in a cause can produce large changes in its effect even without the accumulating effect, and not only in iterative processes. But at that stage, the reason for the small change in the cause, in the case of the numerical solution of the logistic equation, was not yet clear to Nurit. Then Nurit observed:

The graph of the approximated curve obtained with Euler’s method crosses the equilibrium solution many times and this is another problem that we identify in Euler’s method.

Nurit is aware that something is wrong with the numerical method by means of a qualitative analysis that helps identify when the numerical method fails. She works in a flexible way in the qualitative and numerical registers.

She added:

In Euler’s method, the numerical approach consists of finding points \(y_k\) which approximate the values of the function.

Trying to identify the source of the error, Nurit’s first reaction was that the error was due to the round-off effect and the fact that the error accumulates. Then she changed her mind:

The source of the error in Euler’s method is the way the derivative is defined, \((y_{k+1} - y_k)/\Delta t = f(t_k, y_k)\), and by means of this definition we find \(y_{k+1}\) in Euler’s algorithm. But we know that this definition is not precise. We have to add the condition that Δt→0, so we will know that we are not dealing with the secant to the graph of the function but with the slope of the tangent. Because of the numerical method, Δt was chosen as a number, a small number: Δt = 0.1, Δt = 0.12…but not small enough, and in fact the derivative is defined for Δt→0.

Nurit added the following figure (Fig. 2):

In the following, we demonstrate the way Nurit consolidated her knowledge about the definition of the derivative by means of interconnections with existing knowledge (the sensitivity of some differential equations) and intuitive ideas (gradual causes have gradual effects). Her process of construction led her to differentiate between the error due to mathematical meanings, namely the fact that the limit was omitted in Euler’s algorithm, and the error due to meanings specific to the computer, like the round-off error. Nurit consolidated her conceptual understanding of the limit concept in the definition of the derivative in this situation. She did so within her constructing of the error analysis in applying Euler’s method to solve the logistic equation. Analyzing her lengthy process of error analysis, we distinguish different phases. In the later phases, her attention is no longer distracted by the accumulating effect of the numerical method, nor by the round-off effect induced by the computer. She is ready to seek ‘the reason for a small change in a cause’ not only in errors specific to the instrument. This led her to look for an error due to mathematical meanings, to analyze how \(y_{k+1}\) is obtained in Euler’s algorithm and, as a consequence, to observe “the way the derivative is defined” in Euler’s method. At the end of the consolidation process, Nurit is confident in her reconstruction of the limit concept and is also resistant to challenges:

Now, it could be that there is also a round-off error in the numerical method, but a round-off error by itself could not have such a big influence on the graph of the solution that we would have a periodic oscillation between two levels instead of a solution that tends to 1. The error is due to the way the derivative is defined in the numerical method.

In the course of consolidation, Nurit begins to resist challenges by establishing interconnections between the new construction and established mathematical knowledge and by reasoning with this construction (in the words of Monaghan and Ozmantar 2006a). We recall that in the established mathematical knowledge the instrument had an important role: for example, in the knowledge about the sensitivity of some differential equations, the mental image (of the solution tending to infinity because of a small change in the initial value) is present even when the computer is turned off. We also recall that confidence is one of the psychological constructs that Dreyfus and Tsamir (2004) associated with the progressive consolidation of an abstraction. We note, too, that Nurit’s construction is observed by means of verbalization in increasingly precise language, which is one of the characteristics of the consolidation phase (Hershkowitz et al. 2001).

Observing Nurit’s lengthy process of reconstruction of the definition of the derivative as a limit, I was surprised to notice that before the learning experience, when asked to define the derivative concept, Nurit had written a correct definition of the derivative at \(x_0\) as \(\lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}\). She also wrote that \(f'(x_0)\) is the slope of the tangent to the graph of the function at the point \(x_0\). She added that the limit of the sequence of the slopes of the secants is the slope of the tangent, and that this happens for h→0, as we come closer to the point \(x_0\). A question arises: if Nurit had previously defined the derivative correctly as a limit, why did she need such a long process to identify the source of error in Euler’s method? How can we explain her lengthy process of consolidation of the definition of the derivative as a limit? We may assume that her knowledge of the definition of the derivative as a limit was fragile and needed to be consolidated. Even so, new questions arise: Why was her knowledge so fragile? What are the characteristics of the process of consolidation in this case? It seems that the figural model she had for the derivative was the geometrical representation of the concept, and that she was influenced by this figural model. At the time when Nurit answered the questionnaire, the geometrical representation appeared to be the only active representation she had of the derivative concept. After the learning experience, in which Nurit accomplished her construction of the error analysis in applying Euler’s method to solve the logistic equation, she confronted her figural model of the derivative with the formal definition.

In her drawing (Fig. 2), the limit curve, the tangent, is drawn thicker than the other elements of the sequence, the secants, and is thus differentiated from them.

Fig. 2 The tangent is differentiated from the other elements of the sequence, the secants

The way Nurit confronted her figural model of the derivative with the formal definition was an important stage towards her understanding that the formal definition is representation independent (Dreyfus 1991). The opportunity to understand the need for a formal definition, and to link her figural model to the formal definition of the derivative as a limit, enabled Nurit to consolidate her conceptual understanding of the definition of the derivative as a limit. In fact, this was the first of two phases of Nurit’s consolidation of the definition of the derivative as a limit.

Two years later, I interviewed Nurit and analyzed again her construction of the error analysis in applying Euler’s method to solve the logistic equation.

This time, she did not need a lengthy process of consolidation of the definition of the derivative as a limit.

Nurit The error in the numerical method is in the way we use the derivative. We consider the slope at a given point as if it is the derivative in the entire segment and this is not correct since the derivative changes. It is a good approximation if we take a small Δt but it seems to be not good enough.

Nurit correctly defined the derivative and linked the formal definition with the geometrical representation of the secants tending to the limit curve, the tangent. But now, as a third-year student majoring in physics, Nurit had used the derivative notion in other physics courses. This is well expressed in the following excerpt:

Nurit Today, I see in the derivative \(\lim_{\Delta t \to 0} \frac{f(t + \Delta t) - f(t)}{\Delta t}\) more than the figure of the secants tending to the limit curve, the tangent. The symbol \(\lim_{\Delta t \to 0} \frac{f(t + \Delta t) - f(t)}{\Delta t}\) has for me, now, a larger significance, a physical meaning: changes in volume and pressure, changes in tension and time.

We note that in her second phase of consolidation, Nurit succeeded in integrating different representations of the concept of derivative as a limit.

5 The different frameworks: mutual analytic benefits and additional insights

The three theoretical approaches deal with cognitive processes, each with a different emphasis. In my research, using the procept theory as a cognitive frame, the emphasis was on epistemology—the cognitive difficulties that are inherent to the epistemological nature of the mathematics domain; using the instrumentation theory, the emphasis was on the instrumental mediation and its influence on the learner’s cognitive processes; and using the AiC model of Abstraction in Context, the emphasis was on the inner details of the learners’ nested epistemic actions: the AiC model proposes an adequate framework for considering processes that are fundamentally cognitive while taking into account the context in which these processes occur.

These three kinds of emphasis, although different, are all strongly linked with cognition and were crucial to my research. Using the three theoretical perspectives, I was interested in the insights each framework can offer to the two others. Moreover, the specific aspects of one framework can be viewed in terms of the others, and this re-viewing might bring mutual analytic benefits. Investigating these mutual analytic benefits is the core of this section. In Fig. 3, we see that although each theoretical approach has its own focus, it is also related to the main aspects of the other frames.

Fig. 3 Each approach—its own focus

5.1 Epistemology and cognition

The procept approach helped to identify which constructs should be constructed or consolidated. It offers an a priori analysis, which is an important additional insight offered to the AiC model. Identifying the constructs that should be constructed is a first stage. In the second stage, tasks should be designed. Dreyfus and Tsamir (2004) and Monaghan and Ozmantar (2006a) discussed the importance of task design in relation to consolidating a construction. The design of tasks with potential for abstraction benefits from the influence of the instrumentation theory. For example, the instrumentation framework points out the importance of identifying the constraints induced by the instrument (Artigue 2002). Being aware of these constraints might help students towards the differentiation between error due to mathematical meanings and error due to the instrument. Indeed, in the specific research study, the students’ ability to make this differentiation was crucial in enabling the consolidation of mathematical meanings. The interaction with existing knowledge also enabled the consolidation of the derivative as a limit. This was possible thanks to the different schemes of instrumented action (the sensitivity to initial conditions of the ordinary differential equation: the picture that remains in students’ minds even when the computer is turned off).

5.2 Tools and cognition

We deal here with the question of how the instrumentation process interacts with the two other frames. For AiC, the technology is considered part of the context. The context contains, for example, the given mathematical domain, learners’ personal histories, the tasks learners are presented with, the computer environment and the social context. The model describes processes of abstraction in their specific context. In the investigation of the learner’s cognitive processes with the AiC model or the instrumentation approach, there is an essential difference in the order of analysis, which also influences its essence: the instrumentation approach analyzes the process in which the artifact changes into an instrument, and the epistemic actions analysis is carried out in terms of, and within, this process. AiC analyzes the role of the computer environment at a second stage, as a part of the context, in order to identify what in the RBC processes might have been influenced by the interaction with the computer. This observation explains the different ways the two frameworks deal with the notion of artifact. The instrumentation approach makes the distinction between the artifact and the instrument clear from the beginning—a distinction which is essential for the error analysis.

We have emphasized the differences between the two frames, but there are also important common features. Both theoretical frameworks focus on epistemic actions. Monaghan and Ozmantar (2006b) demonstrate, in so many words, that in some instrumented action schemes we might recognize the recognizing, building-with and constructing epistemic actions of the AiC model, and that this model is an activity-theoretic (Leont’ev 1981) model of abstraction. Indeed, in Hershkowitz et al. (2001) we read that the model of abstraction in context is inspired by Davydov’s (1990) ideas and that some of the tenets of Davydov’s theory are similar to principles of activity theory. In the instrumentation approach, too, abstractions arise in practical activity by mentally transforming the features and potentialities of tools. AiC draws the researcher’s attention to the fact that even if the expected construct has not been reached, the evolution of the process of construction is important in itself. It gives an indication of how to enhance the instrumentation process. For example, in the research study described in this paper, the instrumentation process might be enhanced with theoretical discourses and activities in order to develop schemes that will help students in their error analysis.

In the following, I analyze how the instrumentation process interacts with the procept framework in relation to the specific topics that relate to the conceptualization of the continuous. The a priori analysis done within the procept perspective influenced the design of the learning experience. The decision for the specific discrete-continuous interplay, and the way learners might take advantage of it, was directly influenced by what could be offered by means of the instrumental mediation. Indeed, the instrumental mediation was an essential component of the learning process. The schemes of use which were introduced in the classroom experience rendered the mediation of the technology concrete. All the students worked in the PC lab in the exercise lessons. They visualized differential equations and their solutions in geometric ways (slope fields, graphs of solutions) and moved between these geometric representations and their analytic representations. Numerical techniques were used to investigate the behavior of solutions of differential equations. The students performed computations both in the discrete and in the continuous world. The teacher emphasized the role of parameters in many examples. The students changed the values of parameters very, very slightly, and the computer was used to address the manner in which the behavior of solutions changes as these parameters are varied. As a consequence, meaningful mental schemes were built. The students worked through some exercises in which the inaccuracies of “large step” Euler’s method were discussed. The graphics offered by the computer in the case of the numerical solution led to the understanding that, in some numerical solutions, “the differences are so significant: it could not be a result of round off. The difference is not a small difference but a transition above and under the equilibrium solution” (in the students’ words). I observed these opportunities offered by the instrumentation process, but at the same time I realized the constraints induced by the instrument. In fact, these constraints were a source of new opportunities for the students: the awareness of possible round-off errors due to the computer, together with the previous observation that a very small change in a cause can produce a very large change in its effect also in the continuous world, encouraged the students to enter more deeply into the world of error analysis.

5.3 Learners’ epistemic actions and cognition

In the framework of the present research, the analysis of the learner’s epistemic actions is the main focus of AiC. This framework offers a systematic means of investigating the learners’ cognitive processes that accompany the emergence of new constructs. The identification of constructs enables researchers to see the details of the constructing process. Even though the focus of the model is cognition, it is a model of abstraction in context, and both the epistemological nature of the mathematics domain and the technology are parts of the context. The RBC + C analysis first investigates the learner’s epistemic actions; it then analyzes the role of the contextual influences on those epistemic actions.

6 Concluding remarks: the complementary role of the three theories

The procept theory helped me to prepare the design of the learning experiment and was also used in analyzing some of the students’ reactions. Students’ expressions like “the smaller Δt, the better the approximation” reflect the way the limit is conceptualized as a potentially infinite process. We observe the complementary role of the theories in the analysis of the students’ reactions. The lenses offered by AiC facilitated my understanding that it was within the process of construction of Euler’s method that the consolidation of the definition of the derivative as a limit took place. But I had to analyze the way students achieved this process of construction in the light of the instrumentation theory: the roles of AiC and the instrumentation theory intertwined.

Theory-based mathematics education must be rooted in mathematical context (Harel 2006). Therefore, I will conclude with the question: what is specific to the subject of the research study that demands the contribution of more than one theoretical approach in the research process? In trying to answer this question, we may observe the dual character of the limit as a process and as a concept (the procept theory), but we may also consider the discrete-continuous interplay that is the basis of the definition of the derivative, as expressed in Berlinski (1995, p. 168):

In making possible the definition of the derivative, the concept of a limit unifies in a fragile and unlikely synthesis two diverse aspects of experience, the discrete and the continuous. It is the genius of the calculus to reconcile in the definition of a limit and the definition of the derivative these conflicting aspects of experience.

In the research study, we used the discrete-continuous interplay to help students conceptualize the notion of limit. To achieve this goal, the students were asked to compare continuous and approximate discrete methods of solving the same problem. As we saw in the previous sections, the investigation of this specific usage of the discrete-continuous interplay in the research process benefits from the contribution of the instrumentation theory in order to investigate the students’ error analysis.

The students had learned the notions of limit and derivative in the calculus course. In the research study, with its specific usage of the discrete-continuous interplay, in some cases, as for Nurit, the student had to reconstruct the limit concept by recognizing it in the different context of a course in differential equations. This kind of reconstruction is part of the consolidation phase in the model of abstraction in context.

This specific research study demanded the contribution of more than one theoretical approach to the research process, and the search for complementarities was justified by real research concerns. Whether and how one can take advantage of the combination of different theories in other research processes in mathematics education is an important issue for further investigation. Learning processes are very complex, with many aspects interacting and influencing the process. Different theoretical perspectives offer different ways of approaching learning processes and of taking into account environmental conditions and influences on these processes.

In trying to answer the question of whether the theories can complement each other, we have pointed out potential benefits between different perspectives. Identifying differences that can serve as a basis for complementarities between different perspectives is rewarding in the sense that it permits a better understanding of the potential of each frame, but also of its limits. We pointed out potential benefits, but we should also point to the difficulties that may arise in the process of linking theories. Indeed, there might be disagreements between the underlying assumptions of different theories. Having become aware of the substantial difficulties involved in any attempt to connect theories, we raise the question of what can (and what cannot) be the aims of such an effort.

Each approach has its own focus. Therefore, the important question is how to retain the specificity of each theoretical framework, with its basic assumptions, and at the same time profit from combining the different theoretical lenses. This paper is a first attempt in this direction.