Optica Publishing Group

Nonlinear ray tracing in focused fields, part 1. Calculation of 3D focused wavefields: tutorial

Open Access

Abstract

In this three-part paper series, we develop a method to trace the lines of flux through a three-dimensional wavefield by following a direction that is governed by the derivative of the phase at each point, a process that is best described as flux tracing but which we interchangeably name “nonlinear ray tracing.” In this first part, we provide a tutorial on the high-speed calculation of three-dimensional complex wavefields, which is a necessary precursor to flux tracing. The basis of this calculation is the angular spectrum method, a well-known numerical algorithm that can be used to efficiently and accurately calculate diffracted fields for numerical apertures $\lt 0.7$. It is known that this approach yields predictions identical to those of the first Rayleigh–Sommerfeld solution. We employ the angular spectrum method to develop two algorithms that generate the 3D complex wavefield in the region of focus of a lens. The first algorithm is based on the thin lens approximation, and the second is based on the concept of an ideal lens, which can be modeled using an optical Fourier transform. We review how these algorithms can be used to calculate focused laser beams with ${{\rm TEM}_{00}}$ and ${{\rm TEM}_{01}}$ laser profiles. The three-dimensional sampling requirements of the focused field are explained in detail, and expressions for the computational and memory efficiency of the two algorithms are developed. These two algorithms generate the 3D scaffold for the flux tracing method developed in the second paper, and in the third paper we highlight the application of the method to understanding monochromatic lens aberration. Taken independently of the second and third papers, this first paper serves as a practical tutorial for anyone seeking to compute focused fields in three dimensions.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

The objective of this three-part paper series is to develop a computationally efficient method that can accurately trace the flux lines in the focal region of a lens and to explore the application of this method for understanding lens aberrations and laser spatial modes. The flux lines, which we interchangeably call “nonlinear rays,” are discussed in much more detail in the second paper. Briefly, wave optics and geometrical ray optics can be connected using the eikonal function [1], which is related to the derivative of the phase of the wavefront. By following paths that are guided by the derivative of the phase of the wavefield in three dimensions, these flux lines can be traced. Indeed, these nonlinear rays and the classical rays are one and the same if the wavefront is slowly varying. A physical interpretation of the flux lines and their relationship to classical ray optics is provided in the next paper, as is a detailed discussion of the method used to trace the lines. A necessary framework for this method is the availability of the samples of the converged wavefield in the focal region of the lens over a three-dimensional grid. The computation of this 3D sample distribution in an accurate and timely manner is the subject of this first paper in the three-part series.

More specifically, in the first paper, we focus on numerical methods that can accurately calculate a propagating complex wavefield over a three-dimensional volume, and the associated sampling requirements and computational efficiencies are considered in some detail. We limit this study to focusing fields, for which the most rapid change of flux can be expected. Two distinct numerical algorithms are developed for the case of a thin lens and an ideal lens, respectively, both of which iteratively apply the angular spectrum method over a range of distances, albeit in different manners. In the second paper, these numerical algorithms form the basis of a nonlinear ray tracing algorithm, whereby the 3D wavefield is sampled in a uniform grid in the focal region of the lens, and the “ray” is guided by the derivative of the phase, which is calculated at a sequence of points travelling through this grid. Interestingly, the nonlinear rays appear to propagate around regions of low intensity. In the third paper, the flux tracing algorithm is applied to convergent lenses that contain aberration, several of which cannot readily be interpreted in terms of linear geometrical ray optics but can be interpreted more easily in terms of the nonlinear rays. In summary, the concept of flux and flux-tracing is explored in the second paper, and its application to aberration theory is highlighted in the third paper; in this first paper, we focus only on calculating a focusing wavefield over a uniform 3D grid of sampling points, which is a necessary precursor to flux-tracing.

At the heart of this paper is the concept of numerical propagation. Numerical propagation algorithms that can accurately calculate diffraction patterns at variable distances within a homogeneous isotropic medium are an active area of research, with far-reaching applications including in the areas of quantitative phase microscopy [2], computer generated holography [3], confocal microscopy [4], computation of the point spread function in lens design and lithography [5,6], and digital holography [7]. Solutions based on paraxial scalar optics include various discretizations of the Fresnel and linear canonical transforms [8], which continue to find application in quantitative phase imaging. However, algorithms that are founded on nonparaxial scalar optics are, in general, preferable since they offer superior performance in terms of accuracy and can be numerically implemented at least as efficiently as paraxial-based algorithms.

These algorithms include the angular spectrum method (ASM) [9,10] and the Rayleigh–Sommerfeld convolution (RSC) method [11–13], which can both provide highly accurate solutions for numerical apertures up to at least 0.6 [5,14]; for higher numerical apertures, algorithms based on the more intricate vectorial model are required [6]. Indeed, the first Rayleigh–Sommerfeld diffraction integral and the propagation of the angular spectrum can be shown to provide identical solutions [10]. However, their discrete counterparts, the RSC and ASM, have significantly different sampling considerations [11,15–17]. In this paper, we employ the ASM as the numerical propagation algorithm of choice, making use of recent advances in sampling theory to optimize the computational and memory efficiency of the ASM for focused fields [18,19]. The choice of the ASM to calculate the diffraction pattern in the focal region of a lens is based on several factors. First, methods that are based on the paraxial (Fresnel) model will not be sufficiently accurate for the numerical apertures considered in these papers. The ASM and RSC methods are computationally fast algorithms that can calculate the field with high accuracy up to a numerical aperture of 0.6–0.7, but the ASM performs better for the spatial and sampling requirements of focused fields resulting from wide lens apertures, as explained in Refs. [11,18]. For an excellent review of the various propagation algorithms and their frequency and sampling considerations, we refer the reader to Ref. [20].

In this tutorial we cover several detailed topics:

  • We develop two algorithms for the calculation of the diffraction pattern at the focal point of a lens over a 3D grid of sampling points. The first algorithm, which relates to the case of the thin lens approximation, iteratively applies the ASM, where the lens function is first “pre-aliased” [18,19] in order to improve the computational efficiency. The second algorithm, which relates to the ideal lens, involves the use of a single discrete Fourier transform (DFT) to “propagate” from the front focal plane of the lens to the back focal plane, followed by an iterative application of the ASM.
  • We extend these algorithms to include different laser spatial modes that are focused by the lens. In the third paper we will further extend the algorithms to include different lens aberrations.
  • We provide for the first time a rigorous proof, using fundamental sampling theory, of the superposition-tile method introduced by Kanka [19] and further developed by Kelly [18], which involves adapting the input to the ASM in order to increase its computational efficiency. This involves pre-aliasing the input, whereby we break the input into smaller “tiles” that are superimposed together, and the result is input to the ASM.
  • We examine in detail the computational efficiency of the ASM with and without pre-aliasing for the first time; we find that pre-aliasing only improves the computational efficiency if the ASM is to be applied iteratively over several distances for the same input, as is the case for the 3D sampling method developed in this paper for flux tracing.
  • The Rayleigh–Sommerfeld thin lens function is introduced for the first time, which was found to produce superior focusing when compared to spherical and commercial aspheric lens functions, albeit only for a single point.

The breakdown of this paper is as follows: In Section 2, the ASM is described, and the associated sampling conditions are examined. Particular attention is given to the superposition-tile method [18], which is renamed the “pre-aliasing” method in this paper; this is accompanied by a rigorous proof of the underlying sampling theory in Supplement 1. In Section 3, the two algorithms are defined to calculate the 3D diffraction pattern around the focal point of a thin lens and an ideal lens, respectively. In Sections 4 and 5, the algorithms are extended to account for the spatial mode of the laser. This is followed by a brief discussion in Section 6, which will lead into the second paper.

2. ANGULAR SPECTRUM METHOD

A. Propagation of the Angular Spectrum

Propagation of the angular spectrum [10] is described as follows:

$${u_z}({x_z},{y_z}) = {{\rm{FT}}^{- 1}}\big\{{{\rm{FT}}\big\{{{u_0}({x_0},{y_0})} \big\}{H_z}({f_x},{f_y})} \big\},$$
where the transfer function ${H_z}$ is defined as follows:
$$\begin{split}&{{H_z}({f_x},{f_y})}\\& = {\left\{{\begin{array}{*{20}{l}}{\exp \!\left({j2\pi z\sqrt {\frac{1}{{{\lambda ^2}}} - f_x^2 - f_y^2}} \right),\quad {\rm{for}}\;\,\frac{1}{{{\lambda ^2}}} \ge f_x^2 + f_y^2}&\\[6pt]{\exp \!\left({- 2\pi z\sqrt {f_x^2 + f_y^2 - \frac{1}{{{\lambda ^2}}}}} \right),\quad {\rm{for}}\;\,\frac{1}{{{\lambda ^2}}} \le f_x^2 + f_y^2}&\end{array}} \right.},\end{split}$$
and the Fourier transform and inverse Fourier transforms are defined as
$$\begin{split}{U({f_x},{f_y})}& = {{\rm{FT}}\{u(x,y)\} ({f_x},{f_y}) }\\[-4pt]&={ \int_{- \infty}^{+ \infty} \int_{- \infty}^{+ \infty} u(x,y)\exp [{- j2\pi (x{f_x} + y{f_y})} ]{\rm d}x{\rm d}y},\\[-4pt]{u(x,y)}& ={ {{\rm{FT}}^{- {{1}}}}\{U({f_x},{f_y})\} (x,y) }\\[-4pt]&= {\int_{- \infty}^{+ \infty} \int_{- \infty}^{+ \infty} U({f_x},{f_y})\exp [{j2\pi (x{f_x} + y{f_y})} ]{\rm d}{f_x}{\rm d}{f_y}}\end{split},$$
and where ${u_0}({x_0},{y_0})$ is the initial wavefield defined in a planar coordinate system $({x_0},{y_0})$; ${u_z}({x_z},{y_z})$ is the diffracted wavefield in coordinate system $({x_z},{y_z})$ following propagation distance $z$. ${\rm{FT}}$ and ${{\rm{FT}}^{- 1}}$ are the operators for the forward and inverse Fourier transforms. In Eq. (2), the transfer function ${H_z}$ has an infinite support in $({f_x},{f_y})$, decaying toward zero for frequencies beyond the circle $1/{\lambda ^2} = f_x^2 + f_y^2$. However, it should be noted that the support of ${H_z}$ that needs to be considered in the propagation of the angular spectrum defined in Eq. (1) is typically limited by the support of the signal, which we define as $(\Delta {f_x},\Delta {f_y})$. This can be interpreted in terms of the Nyquist frequencies $(\pm \Delta {f_x}/2, \pm \Delta {f_y}/2)$.
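As a concrete illustration of Eq. (2), the transfer function can be evaluated on a frequency grid as follows. This is a minimal NumPy sketch (the function name and arguments are our own choice, not from the paper), splitting the grid into propagating and evanescent components:

```python
import numpy as np

def transfer_function(fx, fy, z, wavelength):
    """Angular spectrum transfer function H_z of Eq. (2).

    fx, fy : equal-shape arrays of spatial frequencies (cycles per
    unit length). Frequencies inside the circle
    fx^2 + fy^2 <= 1/wavelength^2 acquire a propagation phase;
    frequencies outside it are evanescent and decay with distance z.
    """
    fsq = np.asarray(fx) ** 2 + np.asarray(fy) ** 2
    inv_lam_sq = 1.0 / wavelength**2
    H = np.empty(fsq.shape, dtype=complex)
    prop = fsq <= inv_lam_sq
    H[prop] = np.exp(1j * 2 * np.pi * z * np.sqrt(inv_lam_sq - fsq[prop]))
    H[~prop] = np.exp(-2 * np.pi * z * np.sqrt(fsq[~prop] - inv_lam_sq))
    return H
```

Note that $|{H_z}| = 1$ for all propagating components, so propagation through a lossless medium conserves energy in this band.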

B. Angular Spectrum Method

The angular spectrum method (ASM) can be used to calculate the samples of the diffracted wavefield, ${u_z}$, using the samples of ${u_0}$. We stipulate that the two continuous wavefields have spatial support $(\Delta {x_0},\Delta {y_0})$ and $(\Delta {x_z},\Delta {y_z})$ in space and both have spatial frequency support of $(\Delta {f_x},\Delta {f_y})$, which is a consequence of the properties of the propagation of the angular spectrum as described in Section 2.A. We note that the spatial supports at distance $0$ and at $z$ can be significantly different for the case of a converging or diverging wavefield. It is well known that a function cannot be truly bounded with finite support in both space and spatial frequency; however, as discussed in Supplement 1, this can be said to be approximately true in many cases. Both continuous functions ${u_0}$ and ${u_z}$ can be recovered (via Shannon interpolation) from their discrete samples, if uniform sampling intervals $({T_x},{T_y})$ have been applied where the Nyquist condition is satisfied, i.e., ${T_x} \le 1/\Delta {f_x}$ and ${T_y} \le 1/\Delta {f_y}$.

The ASM is based on a direct “discretization” of Eqs. (1) and (2) and is implemented using the DFT in place of the Fourier transform operator in Eq. (1) as follows:

$$\begin{split}{u_z}\!\left({{n_x}{T_x},{n_y}{T_y}} \right) &= {{\rm{DFT}}^{- 1}}\left\{{{\rm{DFT}}\left\{{{u_0}\!\left({{n_x}{T_x},{n_y}{T_y}} \right)} \right\}}\right.\\[-4pt]&\quad\times\left.{{H_z}\!\left({{m_x}{T_{{f_x}}},{m_y}{T_{{f_y}}}} \right)} \right\},\end{split}$$
where ${\rm{DFT}}$ and ${{\rm{DFT}}^{- 1}}$ are the operators for the forward and inverse discrete Fourier transform. In summary, the specific steps involved in implementing the ASM are as follows:
  1. First ${u_0}({x_0},{y_0})$ is sampled to produce the discrete function ${u_0}({{n_x}{T_x},{n_y}{T_y}})$ for ${n_x} = - {N_x}/2 \to {N_x}/2 - 1$ and ${n_y} = - {N_y}/2 \to {N_y}/2 - 1$, where these samples span the full support: $\Delta {x_0} = {N_x}{T_x}$ and $\Delta {y_0} = {N_y}{T_y}$.
  2. Second, these ${N_x} \times {N_y}$ samples are input to a DFT to produce the discrete samples of the Fourier transform ${U_0}({f_x},{f_y})$ denoted as ${U_0}({{m_x}{T_{{f_x}}},{m_y}{T_{{f_y}}}})$ for ${m_x} = - {N_x}/2 \to {N_x}/2 - 1$ and ${m_y} = - {N_y}/2 \to {N_y}/2 - 1$, where ${T_{{f_x}}} = 1/{T_x}{N_x}$ and ${T_{{f_y}}} = 1/{T_y}{N_y}$.
  3. These discrete samples are multiplied by the discrete samples ${H_z}({{m_x}{T_{{f_x}}},{m_y}{T_{{f_y}}}})$ over the same range of ${m_x},{m_y}$.
  4. Finally, an inverse DFT is applied to the resultant samples to produce the samples of the propagated function ${u_z}({{n_x}{T_x},{n_y}{T_y}})$ over the same range of ${n_x},{n_y}$ as the input.

We refer the reader to Supplement 1 for a definition of the DFT as well as a detailed derivation of the sampling constraints of the ASM.
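To make the four steps concrete, the following NumPy sketch implements Eq. (4) for a square grid (the names are our own; `np.fft.fftfreq` generates the DFT frequencies ${m_x}{T_{f_x}}$ in the ordering expected by `fft2`, so no explicit frequency shifting is required):

```python
import numpy as np

def asm_propagate(u0, T, z, wavelength):
    """Propagate sampled field u0 (N x N, sampling interval T) through
    a distance z using the angular spectrum method, Eq. (4)."""
    N = u0.shape[0]
    f = np.fft.fftfreq(N, d=T)                     # frequencies m * T_f
    fx, fy = np.meshgrid(f, f)
    fsq = fx**2 + fy**2
    inv_lam_sq = 1.0 / wavelength**2
    # Transfer function of Eq. (2); np.maximum guards each sqrt argument
    # so both branches evaluate without NaNs before np.where selects.
    H = np.where(
        fsq <= inv_lam_sq,
        np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(inv_lam_sq - fsq, 0.0))),
        np.exp(-2 * np.pi * z * np.sqrt(np.maximum(fsq - inv_lam_sq, 0.0))),
    )
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

Because $|{H_z}| = 1$ over the propagating band, the total energy of the sampled field is conserved whenever no evanescent components are present, which provides a convenient sanity check on any implementation.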

C. Pre-aliasing for Convergent Wavefields

Recently, there have been several contributions on the numerical calculation of diffraction fields converging at the focal point of a lens [11,17,18]. Of particular interest in this paper is the method developed by Kanka et al. [19], and further investigated in Ref. [18], which exploits redundancy for the case of a small spatial support in the output field for convergent focusing fields; in such cases the input to the ASM can be broken up into smaller tiles, which are superimposed, and this smaller matrix serves as input to the ASM without any loss in accuracy. In this paper we refer to this method as “pre-aliasing,” and a rigorous proof of its validity is provided in Supplement 1 using fundamental sampling theory. The method is summarized below.

Taking the definition of the ASM above, the discrete signals ${u_0}({{n_x},{n_y}})$ and ${u_z}({{n_x},{n_y}})$, which are input and output to the ASM, must have the same support, $({\Delta {x_z},\Delta {y_z}}) = ({\Delta {x_0},\Delta {y_0}}) = ({{N_x}{T_x},{N_y}{T_y}})$. However, for convergent wavefields, the propagated wavefield will have significantly smaller support: $({\Delta {x_z},\Delta {y_z}}) \ll ({\Delta {x_0},\Delta {y_0}})$. In such a case, a smaller number of samples ${M_x} \times {M_y}$ is required to fully represent the continuous function ${u_z}$, where $({\Delta {x_z},\Delta {y_z}}) = ({{M_x}{T_x},{M_y}{T_y}})$. It is possible to generate a discrete input function in Step 1 that will produce this reduced set of samples. In short, Step 1 in the algorithm described in Section 2.B is replaced with the following:

  1. As before, ${u_0}({x_0},{y_0})$ is sampled to produce the discrete function ${u_0}({{n_x}{T_x},{n_y}{T_y}})$ for ${n_x} = - {N_x}/2 \to {N_x}/2 - 1$ and ${n_y} = - {N_y}/2 \to {N_y}/2 - 1$, where again $\Delta {x_0} = {N_x}{T_x}$ and $\Delta {y_0} = {N_y}{T_y}$. However, in this case, an aliased version of this function, ${u^\prime _0}({{n_x},{n_y}})$, is generated to serve as input to Step 2:
    $$\begin{split} {{{{u}^\prime_{0}}}}\!\left( {{n}_{x}}{{T}_{x}},{{n}_{y}}{{T}_{y}} \right)&=\sum\limits_{{{k}_{x}}=-\frac{{{P}_{x}}+{{\epsilon }_{x}}}{2}}^{\frac{{{P}_{x}}+{{\epsilon }_{x}}}{2}}\sum\limits_{{{k}_{y}}=-\frac{{{P}_{y}}+{{\epsilon }_{y}}}{2}}^{\frac{{{P}_{y}}+{{\epsilon }_{y}}}{2}}\\[-4pt] &\quad\times{{u}_{0}}\!\left( {{n}_{x}}{{T}_{x}}+{{k}_{x}}{{M}_{x}}{{T}_{x}},{{n}_{y}}{{T}_{y}}+{{k}_{y}}{{M}_{y}}{{T}_{y}} \right) \\[-4pt] \forall {{n}_{x}}&=-{{M}_{x}}/2\to {{M}_{x}}/2-1 \\[-4pt] \forall {{n}_{y}} &=-{{M}_{y}}/2\to {{M}_{y}}/2-1 ,\end{split}$$
    where the integers ${P_x}$ and ${P_y}$ are the largest integers that satisfy the following relationship:
    $$\begin{array}{*{20}{l}}{{P_x} \le \frac{{{N_x}}}{{{M_x}}}},\\[4pt]{{P_y} \le \frac{{{N_y}}}{{{M_y}}}},\end{array}$$
    and $\epsilon_i$ has a value of 1 if ${P_i}$ is odd and 0 if it is even. Steps 2–4 are implemented as described in Section 2.B, where ${M_x}$ and ${M_y}$ are substituted for ${N_x}$ and ${N_y}$. Even though this function is clearly aliased, it can serve as an input to Steps 2–4 to produce an accurate calculation of the samples of ${u_z}$ such that the continuous function can be interpolated from these samples.
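The pre-aliasing operation of Eq. (5) amounts to folding the large input grid into a single tile by summing shifted copies. The sketch below simplifies the bookkeeping: it uses zero-based indexing rather than the centered indices of Eq. (5) and assumes ${N_x}$ is an exact multiple of ${M_x}$ (so the $\epsilon$ terms of Eq. (6) are not needed). The key property, proved rigorously in Supplement 1, is that the DFT of the folded array equals a subsampling of the DFT of the original:

```python
import numpy as np

def prealias(u0, M):
    """Fold an N x N array into an M x M 'pre-aliased' array by
    superimposing tiles spaced M samples apart (a simplified form of
    Eq. (5)); assumes N is an integer multiple of M."""
    N = u0.shape[0]
    P = N // M
    # reshape exposes the P x P grid of M x M tiles; summing over the
    # tile axes superimposes all shifted copies of u0.
    return u0.reshape(P, M, P, M).sum(axis=(0, 2))
```

Aliasing in space corresponds to subsampling in frequency: `fft2(prealias(u0, M))` reproduces every $(N/M)$-th sample of `fft2(u0)`, which is exactly the reduced output grid needed for a tightly focused field.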

It should be noted that pre-aliasing in the manner described above can be most easily interpreted in terms of a phase-space description, as described in Chapter 10 of Ref. [21], whereby the signal support in both space and frequency can be visualized in phase space, and the effect of propagation is a shearing of this support.

D. Sampling Conditions in Three Dimensions for the ASM

Taking the definition for the propagation of the angular spectrum in Section 2.A, the required sampling conditions can be defined in terms of Nyquist and Shannon sampling theory [22]. First, the input function ${u_0}$ must be Nyquist sampled such that the sampling intervals ${T_x} \le | {1/\Delta {f_x}} |$ and ${T_y} \le | {1/\Delta {f_y}} |$. However, if the support $({\Delta {f_x},\Delta {f_y}})$ is greater than the support of ${H_z}$ defined in Eq. (2), which has radius $1/\lambda$, then this latter support will constrain the sampling intervals. More explicitly, the values for ${T_x}$ and ${T_y}$ should be chosen to satisfy the constraint:

$$\begin{array}{*{20}{l}}{{T_{x,y}} \le \max \!\left({\frac{\lambda}{2},\frac{1}{{\Delta {f_{x,y}}}}} \right)}\end{array}.$$

In general, $1/\Delta {f_{x,y}}$ will be the larger of the two values. In such a case the spatial frequency support of ${u_z}$ will be the same as that of ${u_0}$; therefore, the same sampling intervals in $x$ and $y$ are applicable in both planes. This case is illustrated in Fig. 1(a). We note that there have been several contributions in the literature that relax the Nyquist constraint for the propagated wavefield, or indeed for any chirped wavefield, which can be related to the shape of the signal’s phase space distribution [21,23,24]. These generalized sampling theorems can tolerate the aliasing of the function in the Fourier domain caused by breaking of the Nyquist condition. However, the ASM cannot generally be applied to such a sub-Nyquist sampled function due to the multiplication in Eq. (4); see Ref. [11] for an excellent analysis of this subject.
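A small helper makes the constraint of Eq. (7) explicit; this is our own illustrative wrapper, where `delta_f` stands for the signal bandwidth $\Delta {f_{x,y}}$ in cycles per unit length:

```python
def max_lateral_sampling_interval(wavelength, delta_f):
    """Largest admissible T_{x,y} per Eq. (7): the effective bandwidth
    is the smaller of the signal bandwidth and the 2/wavelength extent
    of H_z, so the admissible sampling interval is the larger of the
    two candidate intervals."""
    return max(wavelength / 2.0, 1.0 / delta_f)
```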

Fig. 1. Illustration of the bandwidth of the three-dimensional function ${u_z}({x_z},{y_z})$. (a) The bandwidth in $x$ and $y$ is in general determined by the bandwidth of the initial function ${u_0}$, and (b) the bandwidth in the $z$-direction is determined by the size of the surface area on the Ewald sphere over which the function exists.

The previous paragraph deals with the sampling conditions for the input/output functions of the ASM in the $x$- and $y$-dimensions. However, if a three-dimensional wavefield were to be sampled, for example, by repeated application of the ASM to ${u_0}$ over a sequence of different distances, ${n_z}{T_z}$, where ${n_z}$ is an integer, then the sampling interval, ${T_z}$, also needs to be considered. Similar to the case of sampling in $x$ and $y$, the sampling condition in $z$ relates to the correct reconstruction (using Shannon interpolation) of the continuous field ${u_z}(x,y)$ from a uniform sampling of the field over $z$ using a sampling interval ${T_z}$. In order to define the Nyquist condition for ${T_z}$ we must first investigate the frequency distribution of ${u_z}(x,y)$ in the $z$ dimension.

Taking the Fourier transform over $z$ of the continuous function ${u_z}({x_z},{y_z})$ as defined by Eq. (1), it is straightforward to deduce that the three-dimensional Fourier transform of ${u_z}$ exists only over a surface defined by the Dirac delta function $\delta ({{f_z} - \sqrt {1/{\lambda ^2} - f_x^2 - f_y^2}})$, which relates to the well-known Ewald sphere [1], as illustrated in Fig. 1(b). Therefore, the maximum frequency in the $z$-direction is given by ${f_{z,\max}} = 1/\lambda$, which occurs at ${f_x} = {f_y} = 0$; the minimum frequency in the $z$-direction occurs at ${f_{x,y}} = \Delta {f_{x,y}}/2$ for the general case, as illustrated in Fig. 1(b). From this illustration it can be seen that the minimum spatial frequency in ${f_z}$ is given by

$$\begin{array}{*{20}{l}}{{f_{z,\min}} = \sqrt {\frac{1}{{{\lambda ^2}}} - {{\left({\frac{{\Delta {f_x}}}{2}} \right)}^2} - {{\left({\frac{{\Delta {f_y}}}{2}} \right)}^2}}}\end{array}.$$

Defining the maximum admissible sampling interval ${T_z}$ requires further consideration. As illustrated in Fig. 1(b), it can be seen that in the general case, where ${(\Delta {f_x}/2)^2} + {(\Delta {f_y}/2)^2} \lt 1/{\lambda ^2}$, the three-dimensional Fourier transform of ${u_z}({x_z},{y_z})$ does not exist for the center band of ${f_z}$, and two separate bands exist on each side with a width defined as $\Delta {f_z}/2$. For bandpass signals such as this, it is sufficient to apply a sampling interval of $1/\Delta {f_z}$ even though the support in ${f_z}$ is distributed over two separate bands. Therefore, the sampling interval in the $z$-direction is given by

Fig. 2. Computational and memory savings provided by pre-aliasing the input to the ASM to calculate the 3D distribution of a convergent wavefield where the input has a large number of samples, ${N_x} = {N_y} = 10{,}000$. (a) Computational and memory saving for the 2D case only for a range of values ${P_x}$ and (b) the same result over a continuous range of ${M_x}$. (c) and (d) show the same set of results for the 3D case where ${N_z} = 1000$.

$$\begin{array}{*{20}{l}}{{T_z}}&{\le \frac{1}{{\Delta {f_z}}} = \frac{1}{{2({{f_{z,\max}} - {f_{z,\min}}} )}}}\end{array}.$$

Therefore, if the ASM is used to calculate the samples of a wavefield in three dimensions, the sampling intervals ${T_x}$ and ${T_y}$ should meet the constraint set out in Eq. (7), and the sampling interval ${T_z}$ should meet the constraint set out in Eq. (9), such that the continuous three-dimensional function ${u_z}({x_z},{y_z})$ can be recovered via Shannon interpolation, i.e., by convolution with a three-dimensional sinc function as follows:

$${u_z}({x_z},{y_z}) = \sum\limits_{{n_z} = \frac{{- {N_z}}}{2}}^{\frac{{{N_z}}}{2} - 1} \sum\limits_{{n_x} = \frac{{- {N_x}}}{2}}^{\frac{{{N_x}}}{2} - 1} \sum\limits_{{n_y} = \frac{{- {N_y}}}{2}}^{\frac{{{N_y}}}{2} - 1} {u_{{n_z}{T_z}}}\!\left({{n_x}{T_x},{n_y}{T_y}} \right) \times {\rm{sinc}}\frac{{\pi \!\left({{x_z} - {n_x}{T_x}} \right)}}{{{T_x}}}{\rm{sinc}}\frac{{\pi \!\left({{y_z} - {n_y}{T_y}} \right)}}{{{T_y}}}{\rm{sinc}}\frac{{\pi \!\left({z - {n_z}{T_z}} \right)}}{{{T_z}}}.$$
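Combining Eqs. (8) and (9), the axial Nyquist interval can be computed directly; the sketch below (our own naming) takes the lateral bandwidths as inputs:

```python
import numpy as np

def max_axial_sampling_interval(wavelength, delta_fx, delta_fy):
    """Largest admissible T_z from Eqs. (8) and (9): the support in f_z
    spans two bands between f_z,min and f_z,max on the Ewald sphere,
    each of width (f_z,max - f_z,min)."""
    fz_max = 1.0 / wavelength
    fz_min = np.sqrt(1.0 / wavelength**2
                     - (delta_fx / 2.0)**2 - (delta_fy / 2.0)**2)
    return 1.0 / (2.0 * (fz_max - fz_min))
```

For a weakly focused field (small $\Delta {f_{x,y}}$), ${f_{z,\min}}$ approaches ${f_{z,\max}}$ and ${T_z}$ can be very large; as the lateral bandwidth grows, ${T_z}$ shrinks toward $\lambda /2$.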

E. Computational Efficiency of the ASM

The pre-aliasing step for convergent fields described in Section 2.C requires the traditional Nyquist condition to be satisfied. The redundancy that is exploited relates to the small spatial support of ${u_z}$, which places a relaxed sampling condition on its DFT, which in turn translates to pre-aliasing of ${u_0}$. The pre-aliasing step is not required for convergent fields; it is always possible to avoid this step, but pre-aliasing brings benefits in computational and memory efficiency. For example, numerical propagation from a lens with a large radius and high numerical aperture to its focal plane would necessitate an input field with a small sampling interval applied over a large area; the total number of samples for such a discrete function may be onerous in terms of temporary computer memory. However, following the reduction in the size of the input matrix to ${u^\prime _0}$ defined in Eq. (5), the requirement for memory is significantly relaxed. Although the memory saving afforded by this technique is straightforward to calculate and has been highlighted by other authors [18], there has been little discussion of the numerical efficiency of the technique to date.

It would be naive to assume that the pre-aliasing step translates into a computational saving that matches the saving in memory; the saving in computational load, while still significant, is much smaller. The DFT is typically calculated using the fast Fourier transform (FFT) algorithm [22], which requires a number of operations on the order of $O({{M_x}{M_y}\mathop {\log}\nolimits_2 ({{M_x}{M_y}})})$ for a radix-2 FFT. Thus, if $({{M_x},{M_y}}) \ll ({{N_x},{N_y}})$, it can be expected that there will be a significant saving in terms of computational time for Steps 2–4 defined in Section 2.C. However, this is partially offset by the additional computational load introduced with the implementation of the pre-aliasing step in the revised Step 1; the greater the saving in Steps 2–4, the greater the additional load in Step 1 due to the larger values of ${P_x}$ and ${P_y}$ in Eq. (6). On the other hand, if the purpose is to apply the ASM repeatedly to the same input ${u_0}$ in order to calculate the field over a long sequence of different distances, then far greater computational savings can be made. For this case, Step 1 (the pre-aliasing step) is implemented only once, while Steps 2–4 are repeated iteratively.

Fig. 3. Illustration of the two algorithms developed in this paper. (a) For the case of thin lens, the input to the algorithm is ${u_0}({x_0},{y_0})$, which is the product of the complex thin lens transmission function $t$ shown in gray, the laser profile $A$ shown in blue, and the lens pupil function of radius $R$ shown in red. The ASM is the key component in the algorithm and is iteratively applied to generate a 3D matrix of samples that represent the focused wavefield over a volume of interest. (b) The algorithm comprises five steps, which are detailed in the text. Note that Step 3 is a “pre-aliasing” step whereby multiple copies of the input are shifted and added. This step is applied only once and serves to reduce the number of samples fed as input to the iteratively applied ASM. (c) For the case of ideal lens, the input to the algorithm is ${U_0}({x^\prime _0},{y^\prime _0})$, which is the product of the laser profile $A$ and the lens pupil function of radius $R$. A first DFT operation calculates the focused plane, and following this the ASM is applied iteratively to generate a 3D matrix of samples that represent the focused wavefield over a volume of interest around the focused plane. (d) The ideal lens algorithm comprises six steps, which are detailed in the text.

The computational efficiency of the ASM with and without the pre-aliasing step can be quantitatively compared. With pre-aliasing, Step 1 will require a number of operations on the order of $O({P_x}{P_y}{M_x}{M_y})$. Therefore, the total number of operations without pre-aliasing is given by $O({N_x}{N_y}\mathop {\log}\nolimits_2 ({{N_x}{N_y}}))$, and with pre-aliasing this becomes $O({P_x}{P_y}{M_x}{M_y} + {M_x}{M_y}\mathop {\log}\nolimits_2 ({{M_x}{M_y}}))$. When the ASM is applied to the same input repeatedly in order to calculate the 3D diffraction distribution, the comparison of these efficiencies changes dramatically; the total number of operations without pre-aliasing is given by $O({N_z}{N_x}{N_y}\mathop {\log}\nolimits_2 ({{N_x}{N_y}}))$, and with pre-aliasing this becomes $O({P_x}{P_y}{M_x}{M_y} + {N_z}{M_x}{M_y}\mathop {\log}\nolimits_2 ({{M_x}{M_y}}))$. To illustrate the practical importance of the pre-aliasing step, we examine the case where the input ${u_0}$ is sampled with ${N_x} = {N_y} = 10{,}000$ samples. Taking ${M_x} = {M_y} = 50,100,200,500,1000,2000,5000$ (i.e., ${P_x} = {P_y} = 200,100,50,20,10,5,2$), the memory saving and the computational saving are shown in Fig. 2(a); for the first five cases the computational saving is approximately the same at ${\approx} \times 22 {-} 26$. The memory saving, which is plotted on a $\mathop {\log}\nolimits_{10}$ scale, is significantly higher and varies over a much wider range, from 40,000 for ${P_x} = 200$ down to 4 for ${P_x} = 2$. These savings are plotted over a continuous range of ${M_x}$ in Fig. 2(b). For the three-dimensional case, significantly greater computational savings are made, while the memory saving remains the same. Taking the same parameters as above and now taking ${N_z} = 1000$, the savings are illustrated in Fig. 2(c). In this case the computational saving ranges from $\times 20{,}000$ for ${P_x} = 200$ to $\times 4$ for ${P_x} = 2$.
These savings are plotted over a continuous range of ${M_x}$ in Fig. 2(d).
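The operation counts above are easy to tabulate. This sketch (our own helper functions, not from the paper) reproduces the order-of-magnitude comparison for the 3D case, assuming square grids with $P = N/M$:

```python
import numpy as np

def ops_without_prealias(N, Nz):
    """Nz applications of the ASM on the full N x N grid."""
    return Nz * N * N * np.log2(N * N)

def ops_with_prealias(N, M, Nz):
    """One fold of cost (P^2)(M^2) = N^2, followed by Nz FFT-based
    propagations on the reduced M x M grid."""
    P = N // M
    return P * P * M * M + Nz * M * M * np.log2(M * M)

# Parameters from Fig. 2(c): N = 10,000, Nz = 1000, M = 50 (P = 200).
saving = ops_without_prealias(10_000, 1000) / ops_with_prealias(10_000, 50, 1000)
```

With these parameters the computational saving is on the order of ×20,000, in line with Fig. 2(c), while the memory saving is simply ${P^2} = 40{,}000$.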

3. NUMERICAL CALCULATION OF FOCUSED WAVEFIELDS

In this section, we outline two methods for calculating the complex amplitude of a focused wavefield in a three-dimensional volume, which are illustrated in Fig. 3. Both methods use the ASM over a sequence of distances in order to generate a point cloud of complex values over a uniform three-dimensional grid. The first method is based on the thin lens approximation, whereby the lens can be described as a simple two-dimensional (2-D) phase only function. The second method is based on modeling the effect of the lens as a simple pupil function in the Fourier domain for the case of coherent imaging.

A. Thin Lens Algorithm

The thin lens approximation [10] is commonly used within the paraxial approximation to model a lens as a simple 2-D phase transformation, where the phase relates to the spatially variant delay imparted by the lens due to a varying thickness and/or refractive index. This approximation assumes that the position of a ray incident on the lens in $(x,y)$ is identical to the position of the ray as it leaves the lens; this requires negligible propagation of the ray within the lens. The phase transformation of the lens is given by

$$t(x,y) = \exp \left[{jk\phi (x,y)} \right]P\!\left({\frac{x}{R},\frac{y}{R}} \right),$$
where the wavenumber $k = 2\pi /\lambda$, and $\lambda$ is the wavelength of the light; $\phi (x,y)$ represents the delay imparted by the lens at the position $(x,y)$ in terms of a wave cycle. The aperture of the lens is defined in terms of the aperture radius, $R$, and the pupil function $P(x,y)$, which is unity inside a circle of unit radius and zero elsewhere:
$$\begin{array}{*{20}{l}}{P(x,y)}&={ 1\quad \forall \left({{x^2} + {y^2}} \right) \le 1},\\[5pt]{P(x,y)}&={ 0\quad \forall \left({{x^2} + {y^2}} \right) \gt 1}.\end{array}$$

Assuming an incident plane wave of complex amplitude $A(x,y)$ is input to the plane of the lens, the resultant wavefield immediately after the lens is given by

$${u_0}({x_0},{y_0}) = A({x_0},{y_0})t({x_0},{y_0}).$$

The wavefield ${u_0}$ will converge at the focal point of the lens, i.e., at a distance $\approx f$, where $f$ is the focal length of the lens as illustrated in Fig. 3(a). The corresponding algorithm to calculate the three-dimensional complex field in the region of this convergence is illustrated in Fig. 3(b) as a sequence of five steps, which are further defined below.

  • 1. The first step is to define the wavelength $\lambda$ and the continuous function ${u_0}({x_0},{y_0})$; the user selects a lens phase delay function $\phi ({x_0},{y_0})$ as well as an input laser mode, $A({x_0},{y_0})$, the radius of aperture, $R$, and the focal length, $f$. Therefore, the input function is defined as ${u_0}({x_0},{y_0}) =\def\LDeqbreak{} A({x_0},{y_0})P({\frac{{{x_0}}}{R},\frac{{{y_0}}}{R}})\exp [{jk\phi ({x_0},{y_0})}]$. The choice of $A$ will be discussed in the next section. We note that in the third paper an additional additive phase term will be included in this step, which relates to lens aberration.
  • 2. The second step is to define the size of the region of interest in terms of the sampling intervals, $({{T_x},{T_y},{T_z}})$, and the number of samples to use in this region $({{M_x},{M_y},{M_z}})$. As a maximum, the values of ${T_x}$ and ${T_y}$ should satisfy the Nyquist condition for ${u_0}({x_0},{y_0})$; however, the user may opt for much smaller values than those imposed by the Nyquist limit. The value for ${T_z}$ should also be selected to be sufficiently small to avoid aliasing in the $z$ direction; this will depend on the convergence rate of the wavefield. The values for $({{M_x},{M_y},{M_z}})$ should be chosen to be large enough to fully capture the support of converged light in the region of interest. The support can be estimated using geometrical optics.
  • 3. The third step is to sample the input ${u_0}({x_0},{y_0})$ with sampling intervals $({{T_x},{T_y}})$ and to pre-alias in order to generate the discrete function ${u_0^\prime}({{n_x},{n_y}})$ made up of ${M_x}{M_y}$ samples as defined in Eq. (5). In total, the number of samples of ${u_0}$ that must be considered is given by ${N_x}{N_y}$, where these are the smallest integers that satisfy ${N_x} \ge 2R/{T_x}$ and ${N_y} \ge 2R/{T_y}$. However, it is not necessary to compute all of these values and store them in memory; rather, ${P_x}{P_y}$ smaller batches of size ${M_x}{M_y}$ can be computed and superimposed as defined in Eqs. (5) and (6). Some zero-padding of ${u_0}$ may be necessary if ${P_x}{P_y}{M_x}{M_y} \gt {N_x}{N_y}$; in this case ${u_0}$ is padded with zeros so that ${N_x}{N_y}$ increases to ${P_x}{P_y}{M_x}{M_y}$.
  • 4. The fourth step is to apply the ASM to ${u_0^\prime}$ repeatedly over a sequence of ${M_z}$ distances given by ${z_i} = f + {n_z}{T_z}$, where the integer ${n_z}$ takes the values ${n_z} = - {M_z}/2 \to {M_z}/2 - 1$, and to store the resulting complex values in a three-dimensional matrix ${u_f}({{n_x},{n_y},{n_z}})$ of size $({{M_x}{M_y}{M_z}})$. It should be noted that for large values of ${M_z}$ the 3D matrix may present difficulties for storing in memory. In some cases, it may not be necessary to store the entire matrix. For example, in order to calculate the center slice for $y = 0$ (i.e., the $xz$-plane), one needs only to store a 2D matrix for which the ${n_y}$ value is fixed at zero, and only a single 1D vector needs to be stored after each iteration of the ASM. This approach is used to generate several results in this paper. In other cases where the full 3D volume is required and temporary memory is limited, completed portions of the matrix can be intermittently stored to permanent memory or copied over; this approach is taken in the second paper.
  • 5. The fifth step is optional. This involves cropping a region of the 3-D matrix and upsampling in one or more dimensions. This may be applied if it is desirable to zoom in on a small area of interest. This can be implemented by calculating the 3D DFT of the cropped matrix using the FFT algorithm, zero-padding the result as desired, and applying an inverse DFT, or by any other appropriate method of interpolation.
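Step 4, together with the memory-saving center-slice variant described above, can be sketched as follows. This is an illustrative sketch, assuming a square grid; the function names and the simple discarding of evanescent components are our choices, not prescribed by the text.

```python
import numpy as np

def asm_propagate(u0, wavelength, T, z):
    """Propagate a sampled field u0 (square grid, sampling interval T) over a
    distance z with the angular spectrum method. Evanescent components are
    simply discarded in this sketch."""
    M = u0.shape[0]
    fx = np.fft.fftfreq(M, d=T)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2        # (1/lambda)^2 - fx^2 - fy^2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

def center_slice(u0, wavelength, T, Tz, Mz, f):
    """Store only the y = 0 row at each distance to build the xz-plane through
    the focus, as suggested in Step 4 of the text."""
    M = u0.shape[0]
    rows = []
    for nz in range(-Mz // 2, Mz // 2):
        uz = asm_propagate(u0, wavelength, T, f + nz * Tz)
        rows.append(uz[:, M // 2])                   # single 1D vector per plane
    return np.stack(rows, axis=1)                    # Mx-by-Mz xz slice
```

A uniform input field only exercises the DC component of the transfer function, which makes this sketch easy to sanity-check: the propagated magnitude should remain unity everywhere.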

Some initial results from this algorithm are provided in Fig. 4. Here, the three-dimensional field is calculated for three different thin lens functions; for all cases, “top-hat” illumination is assumed ($A = 1$) such that ${u_0} = t$, and for all cases the aperture of the lens was given by $R = 2.5$ mm, and the wavelength of light was defined to be $\lambda = 1064$ nm. Following the approach taken by Goodman [10], the first lens is defined as follows:

$${t_1}(x,y) = \exp \left[{jk\!\left({{\Delta _0} + \Delta (x,y)(n - 1)} \right)} \right]P\!\left({\frac{x}{R},\frac{y}{R}} \right),$$
where ${\Delta _0} = {\Delta _{01}} + {\Delta _{02}} + {\Delta _{03}}$ is the maximum thickness of the lens, $\Delta (x,y)$ is the thickness at coordinate $(x,y)$, and $n = 1.515$ is the refractive index of the material (BK7). The thickness function is defined to be

Fig. 4. The intensity profiles of convergent light from four different lenses with the same numerical aperture and with the same top-hat uniform illumination, $A = 1$. For the case of the thin lens defined in Eq. (14), (a1) and (a2) are slices through the computed 3D volume for $y = 0$ (i.e., the $xz$-plane) and $z = f - 49\;{\rm{\unicode{x00B5}{\rm m}}}$, which was found to be the $xy$-plane with the brightest spot. Also shown in (a2) is the inset image of the profile of this spot. In (b1) and (b2) the same figures are shown for the thin lens with purely quadratic phase given in Eq. (17). The profiles for these two cases are almost identical owing to the high accuracy of the approximation of obtaining ${t_2}$ from ${t_1}$. For the case of the third thin lens defined in Eq. (18), (c1) and (c2) are slices through the volume for $y = 0$ and $z = f$; for this case the light converges to a much smaller spot due to the profile of the lens function. Finally, the same slices are shown in (d1) and (d2) for the case of the ideal lens. Remarkably, the intensity profiles obtained using the RS lens and the ideal lens are very similar even though these were computed using different algorithms and definitions. Note that the scale used for images (c1), (c2), (d1), and (d2) is 0.05 times that used for the other four images due to the significantly higher intensity values; (e1) shows a comparison of the amplitude profile over the line indicated in (c3). The result from the thin lens algorithm (TLA) is plotted with the corresponding result using direct numerical integration, which is described in the text. The difference in amplitude is shown on a log scale in (e2); equivalent results are shown for the ideal lens case in (e3) and (e4).


$$\begin{split}\Delta (x,y) &= {\Delta _0} - {R_1}\\&\quad\times\left({1 - \sqrt {1 - \frac{{{x^2} + {y^2}}}{{{R_1}^2}}}} \right) + {R_2}\!\left({1 - \sqrt {1 - \frac{{{x^2} + {y^2}}}{{{R_2}^2}}}} \right),\end{split}$$
where ${R_1} = 3\;{\rm{mm}}$ and ${R_2} = - 4\;{\rm{mm}}$ are the radii of curvature of the two lens surfaces. For such a lens, the focal length can be approximated by
$$\frac{1}{f} = (n - 1)\!\left({\frac{1}{{{R_1}}} - \frac{1}{{{R_2}}}} \right),$$
which for the values of ${R_1}$ and ${R_2}$ selected here is $f = 3.328$ mm; these values were selected to produce a numerical aperture of approximately ${\rm NA} = 0.6$. For further details on this thin lens definition we refer the reader to Chapter 5 in Ref. [10]. The parameters selected in Step 2 of the algorithm are as follows: ${N_x} = {N_y} = 50{,}000$, ${T_x} = {T_y} = {T_z} = 0.1\;{\rm{\unicode{x00B5}{\rm m}}}$, ${M_x} = {M_y} = {M_z} = 2000$. To save memory only (i) the center slice corresponding to $y = 0$ was stored producing an $xz$ image of ${M_x} \times {M_z}$ samples, and (ii) one $xy$ image was stored at a single value of ${n_z}$ containing ${M_x} \times {M_y}$ samples. Figure 4(a1) shows a cropped region of the $xz$ image with $\times 10$ interpolation in both dimensions, and Fig. 4(a2) shows the $xy$ image at distance $z = f - 49\;{\rm{\unicode{x00B5}{\rm m}}}$ also with $\times 10$ interpolation in both dimensions. The profile of the focused spot is shown in the inset image. The same set of results is shown in Figs. 4(b1) and 4(b2) for a second thin lens function with a simpler quadratic variation in phase:
$$\begin{array}{*{20}{l}}{{t_2}(x,y) = \exp \!\left[{- \frac{{j\pi}}{{\lambda f}}\!\left({{x^2} + {y^2}} \right)} \right]P\!\left({\frac{x}{R},\frac{y}{R}} \right)}\end{array}.$$

We note that ${t_2}$ can be obtained from ${t_1}$ based on the paraxial approximation. The same values for $f$ and $R$ are used as for ${t_1}$ in the simulation. The results for ${t_1}$ and ${t_2}$ are almost identical, highlighting the validity of the paraxial approximation in modeling this thin lens.
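As a quick arithmetic check, the quoted focal length follows directly from the lensmaker's equation above with the stated surface curvatures:

```python
# Evaluate 1/f = (n - 1)(1/R1 - 1/R2) for the stated values (BK7, n = 1.515).
n, R1, R2 = 1.515, 3e-3, -4e-3
f = 1.0 / ((n - 1) * (1.0 / R1 - 1.0 / R2))
# f is approximately 3.329e-3 m, consistent with the quoted f = 3.328 mm.
```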

It is interesting to note that the thin lens algorithm predicts that these lenses produce their bright spots at a distance that is slightly defocused. The quadratic phase lens function that defines these lenses is essentially a paraxial model, by which we mean that such a phase function will produce accurate focusing when the NA is small (resulting from a small lens aperture and/or a long focal length). However, a lens with an NA of 0.6 cannot be considered paraxial. If we had employed the Fresnel transform (or rather a numerical equivalent), which is of course itself based on the paraxial approximation, then this lens would indeed produce the brightest spot at the focal plane as expected. However, here we use the ASM, which can be related to the RS diffraction integral and is more accurate in modeling light propagation at a numerical aperture of 0.6. Therefore, we find that the quadratic phase lens is imperfect for an NA of 0.6 and produces the brightest spot at a distance that is slightly defocused.

A third thin-lens function was investigated, where

$$\begin{array}{*{20}{l}}{{t_{\text{RS}}}(x,y)}&={ \exp [- jk{r_f}(x,y)]P\!\left({\frac{x}{R},\frac{y}{R}} \right)},\\[4pt]{{r_f}(x,y)}&={ \sqrt {{x^2} + {y^2} + {f^2}}}.\end{array}$$

This definition is borrowed from the Rayleigh–Sommerfeld (RS) diffraction integral, which relates a field ${u_0}({x_0},{y_0})$ to the diffracted field at distance $z$ as follows:

$$\begin{split}{u_z}({x_z},{y_z}) &= - \frac{1}{{2\pi}} \int_{- \infty}^\infty \int_{- \infty}^\infty {u_0}({x_0},{y_0})\frac{\partial}{{\partial z}}\\&\quad\times\left[{\frac{{\exp (jk{r_z}({x_z} - {x_0},{y_z} - {y_0}))}}{{{r_z}({x_z} - {x_0},{y_z} - {y_0})}}} \right]{\rm d}{x_0}{\rm d}{y_0}.\end{split}$$

This integral can be rewritten as a convolution as follows:

$$\begin{split}{{u_z}({x_z},{y_z})}& = {{u_0}({x_0},{y_0}) * {h_z}({x_0},{y_0})},\\{{h_z}({x_0},{y_0}) }&= {- z\frac{{\exp \left[{jk{r_z}({x_0},{y_0})} \right]}}{{2\pi r_z^2({x_0},{y_0})}}\left[{jk - \frac{1}{{{r_z}({x_0},{y_0})}}} \right].}\end{split}$$

The complex term in square brackets has an approximately constant phase angle of $\pi /2$. Thus, a point source of light, represented by a Dirac delta function (see Supplement 1), will produce a diffraction pattern at distance $f$ with a phase function given by the conjugate of ${t_{\text{RS}}}$. Such a lens function, which we refer to as a Rayleigh–Sommerfeld (RS) lens going forward, can therefore be expected to produce a highly focused spot after propagating a distance $z = f$. Selecting $f = 3.3\;{\rm{mm}}$ and choosing all of the same sampling parameters as for the previous two cases, the resulting intensity is shown in Figs. 4(c1) and 4(c2). The spot in this case is significantly more tightly focused and as a consequence is more intense than for the previous cases, and it is scaled $\times 0.05$ for comparison. The wide range of intensity allows only a small region around the focus to be visualized using a linear scale; hence, an inset image shows the logarithm of the intensity to reveal the light pattern in the region of the focus in more detail. The inset image in Fig. 4(c2) shows the profile of the focused spot, which is significantly narrower and $\times 20$ more intense than the corresponding profiles shown in Figs. 4(a2) and 4(b2).
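The lens transmittance functions compared in this section can be sketched compactly. This is a minimal NumPy sketch of Eqs. (12), (17), and (18); the function names are illustrative.

```python
import numpy as np

def pupil(x, y, R):
    """Circular pupil of radius R [cf. Eq. (12) after scaling]."""
    return np.where(x**2 + y**2 <= R**2, 1.0, 0.0)

def t_quadratic(x, y, wavelength, f, R):
    """Quadratic-phase thin lens t2 [Eq. (17)]."""
    return np.exp(-1j * np.pi * (x**2 + y**2) / (wavelength * f)) * pupil(x, y, R)

def t_rs(x, y, wavelength, f, R):
    """Rayleigh-Sommerfeld lens t_RS [Eq. (18)]."""
    k = 2 * np.pi / wavelength
    rf = np.sqrt(x**2 + y**2 + f**2)
    return np.exp(-1j * k * rf) * pupil(x, y, R)
```

Either function, sampled on a grid and multiplied by the illumination $A$, yields the input ${u_0}$ for the thin lens algorithm above.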

In order to evaluate the accuracy of the thin lens algorithm a comparison is given with direct numerical integration in Figs. 4(e1) and 4(e2). Direct integration is implemented using a Gauss–Kronrod quadrature algorithm [25], which has previously been applied to evaluate the angular spectrum method [11]. This algorithm is used to numerically calculate the Rayleigh–Sommerfeld diffraction integral using the RS lens as the input function, and it produces a value of the complex amplitude of the propagated wavefield at a single point in space. While this algorithm is comparatively slow for the large number of points that describe our RS lens, we can be reasonably confident that it has high accuracy, and we use it as a standard for evaluating the accuracy of the thin lens algorithm and, later, of the ideal lens algorithm. The profile of the amplitude over a line close to the focal plane [indicated in Fig. 4(c3)] is calculated using both methods; the results are shown to be in close agreement in Fig. 4(e1), and the difference in amplitude is given in Fig. 4(e2). Although this difference is relatively small, it is important to point out that small errors in calculating the complex wavefield could potentially lead to erroneous results when tracing the eikonal in the two papers that follow. Nevertheless, this comparison provides a validation that the thin lens algorithm is operating as expected within a small margin of error.

B. Ideal Lens Algorithm

Assuming an incident plane wave of complex amplitude $A(x,y)$, we define the effect of an ideal lens with focal length $f$ and numerical aperture NA in terms of the optical Fourier transform [10]:

$$\begin{split}{{u_0}({x_0},{y_0})}&={ {\rm{OFT}}\{{U_0}({{x^\prime_0}},{{y^\prime_0}})\} ({x_0},{y_0})}\\&={ \int_{- \infty}^{+ \infty} \int_{- \infty}^{+ \infty} {U_0}({{x^\prime_0}},{{y^\prime_0}})}\\&\quad\times{\exp \left[{- \frac{{j2\pi}}{{\lambda f}}\!\left({{x_0}{{x^\prime_0}} + {y_0}{{y^\prime_0}}} \right)} \right]{\rm d}{{x^\prime_0}}{\rm d}{{y^\prime_0}},}\end{split}$$
where OFT is the operator for the optical Fourier transform mapping between spatial coordinates $({x^\prime _0},{y^\prime _0})$ and $({x_0},{y_0})$ at the focal planes of the lens and where the input function is given by
$$\begin{array}{*{20}{l}}{{U_0}\!\left({{{x^\prime_0}},{{y^\prime_0}}} \right)}&={ A\!\left({{{x^\prime_0}},{{y^\prime_0}}} \right)P\!\left({\frac{{{{x^\prime_0}}}}{{r\lambda f}},\frac{{{{y^\prime_0}}}}{{r\lambda f}}} \right)}\end{array}.$$
Here, ${u_0}$ is the wavefield that is produced at the focal length of the lens. The pupil function of the lens, $P$ [defined in Eq. (12)], has radius $r\lambda f$, where $r$ is defined in terms of the numerical aperture as follows:
$$\begin{array}{*{20}{l}}{r = \frac{{\rm NA}}{\lambda}}\end{array}.$$

Setting $A = 1$ for the case of an ideal infinite plane wave, the result is in the form of a Bessel function of the first kind [10]: $\pi {r^2}{J_1}(2\pi r\rho)/(\pi r\rho)$, where $\rho$ is the radial coordinate. The intensity of this pattern is known as the Airy pattern, and the width of the central lobe is given by $1.22\lambda /{\rm NA}$, which is the well-known limit of optical resolution for coherent imaging. Therefore, two ideal microscope objectives with the same ${\rm NA}$ but different focal lengths, $f$, will produce identical Airy patterns at their respective focal planes for the same uniform plane wave illumination but will have different physical pupil radii given by $R = f{\rm NA}$. Since the pupil plane of the microscope objective is typically close to the back aperture of the objective, we can take this radius to be equivalent to the physical radius of the lens back aperture. The wavefield ${u_0}$ will converge at the focal point of the lens, and we proceed to define an algorithm to calculate the three-dimensional volume of light in this region. This algorithm is illustrated in Fig. 3 as a sequence of six steps, which are further defined below.

  • 1. The first step is to define the wavelength $\lambda$, the focal length $f$, and numerical aperture NA of the lens. This will in turn define the input aperture to be given by a pupil function of radius $R = f{\rm NA}$; the user also selects an input laser mode $A({x^\prime _0},{y^\prime _0})$. Therefore, the input function is defined: ${U_0}({x^\prime _0},{y^\prime _0}) = A({x^\prime _0},{y^\prime _0})P({{{x^\prime_0}}/R,{{y^\prime_0}}/R})$. The choice of $A$ will be discussed in the next section. We note that in the third paper an additional additive phase term will be included in this step, which relates to lens aberration.
  • 2. The second step is to define the size of the region of interest (around the focal point of the lens) in terms of the sampling intervals $({{T_x},{T_y},{T_z}})$ as well as the number of samples to use in this region $({{M_x},{M_y},{M_z}})$. Since ${u_0}({x_0},{y_0})$ will be defined by ${M_x} \times {M_y}$ samples with uniform sampling intervals of ${T_x}$ and ${T_y}$, the optical Fourier transform (OFT) ${U_0}({x^\prime _0},{y^\prime _0})$, when calculated using the DFT, will also be represented by ${M_x} \times {M_y}$ samples but with sampling intervals given by ${T^\prime _x} = \lambda f/({{M_x}{T_x}})$ and ${T^\prime _y} = \lambda f/({{M_y}{T_y}})$. As for the previous algorithm, the sampling intervals $({{T_x},{T_y},{T_z}})$ must satisfy the Nyquist criterion in all three dimensions. In this case, however, given that the total number of samples must span the full support of the lens aperture, it is a little simpler to define the condition for the number of samples that are required: ${M_x}{T^\prime _x} \ge R$ and ${M_y}{T^\prime _y} \ge R$, which provides a lower bound for the values ${M_x}$ and ${M_y}$.
  • 3. The third step is to sample the continuous input function ${U_0}({{{x^\prime_0}},{{y^\prime_0}}})$ (this is the field passing through the back aperture of the MO) with sampling intervals $({T^\prime _x},{T^\prime _y})$ over an extent of $({{M_x}{{T^\prime_x}},{M_y}{{T^\prime_y}}})$ such that ${M_x} \times {M_y}$ samples are obtained of the form ${U_0}({{m_x}{{T^\prime_x}},{m_y}{{T^\prime_y}}})$, where ${m_x} =\def\LDeqbreak{} - {M_x}/2 \to {M_x}/2 - 1$ and ${m_y} = - {M_y}/2 \to {M_y}/2 - 1$.
  • 4. The DFT of ${U_0}$ is calculated to produce the discrete function ${u_0}({{n_x}{T_x},{n_y}{T_y}})$ over an extent $({{M_x}{T_x},{M_y}{T_y}})$. This is the in-focus image of the laser spot and will serve as input to the ASM algorithm in order to calculate the complex amplitude in the planes before and after focus using positive and negative propagation distances.
  • 5. Similar to the previous algorithm, the fifth step is to apply the ASM to ${u_0}$ repeatedly over a sequence of ${M_z}$ distances given by ${z_i} = {n_z}{T_z}$, where the integer ${n_z}$ takes the values: ${n_z} = - {M_z}/2 \to {M_z}/2 - 1$, and to store the resulting complex values in a three-dimensional matrix ${u_f}({{n_x},{n_y},{n_z}})$ of size $({{M_x} \times {M_y} \times {M_z}})$. Similar difficulties may present in terms of storage as for the previous algorithm.
  • 6. As for the previous case the final step is optional and involves cropping a region of the 3-D matrix and upsampling in one or more dimensions as desired.
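Steps 2–4 above can be sketched as follows, using deliberately smaller, illustrative sampling parameters than those quoted later in the text:

```python
import numpy as np

# Minimal sketch of Steps 2-4 of the ideal lens algorithm (illustrative sizes).
wavelength, f, NA = 1.064e-6, 3.3e-3, 0.6
M, T = 512, 0.1e-6                           # focal-plane samples and interval
Tp = wavelength * f / (M * T)                # pupil-plane interval T' (Step 2)
coords = (np.arange(M) - M // 2) * Tp
Xp, Yp = np.meshgrid(coords, coords, indexing="ij")
R = f * NA                                   # back-aperture radius (Step 1)
U0 = np.where(Xp**2 + Yp**2 <= R**2, 1.0 + 0j, 0.0)   # top-hat A = 1 times pupil
# Step 4: DFT of the pupil field gives the in-focus spot (Airy pattern),
# with ifftshift/fftshift placing the optical axis at the array center.
u0 = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U0)))
```

The array `u0` is then the input to the ASM loop of Step 5; note that the condition ${M_x}{T^\prime _x} \ge R$ from Step 2 is comfortably satisfied by these values.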

An initial result from this algorithm is shown in Figs. 4(d1)–4(d3). Here, for Step 1 the following selections were made: As for the previous cases, “top-hat” illumination was used, i.e., $A(x,y) = 1$, and no aberration is included, such that ${U_0}({x^\prime _0},{y^\prime _0}) = P({{{x^\prime_0}}/f{\rm NA},{{y^\prime_0}}/f{\rm NA}})$, where $f$ and NA were chosen to be $3.3\;{\rm{mm}}$ and $0.6$, respectively, resulting in an aperture size of $R = 2.5\;{\rm{mm}}$. As for the previous cases, the wavelength of light was defined to be $\lambda = 1064\;{\rm{nm}}$. The parameters selected in Step 2 of the algorithm are as follows: ${M_x} = {M_y} = {M_z} = 2000$, ${T_x} = {T_y} = {T_z} = 0.1\;{\rm{\unicode{x00B5}{\rm m}}}$. As for the previous examples, to save memory only one $xz$-slice and one $xy$ image were stored; cropped areas of both are shown in Figs. 4(d1) and 4(d2) with $\times 10$ interpolation. The profile of the focused spot is shown in the inset image in Fig. 4(d1). The spot in this case is similar to that for the case of the RS lens. The magnified image in Fig. 4(d3) shows the logarithm of the intensity of the $xz$-image, in which it can be seen that there are striking similarities to the same image for the thin (RS) lens.

As for the previous case, we evaluate the accuracy of the ideal lens algorithm by comparison with direct numerical integration using a Gauss–Kronrod quadrature algorithm. In this case the algorithm is used to numerically calculate the Rayleigh–Sommerfeld diffraction integral using a Bessel function as input, which is taken to be the analytical result of the first part of the ideal lens algorithm (i.e., the DFT of a circ function). The profile of the amplitude over a line close to the focal plane [indicated in Fig. 4(d3)] is calculated using both methods, and the results are shown to be in close agreement in Fig. 4(e3); the difference in amplitude is given in Fig. 4(e4). As for the previous case we can state that this comparison provides a validation that the ideal lens algorithm is operating as expected within a small margin of error, with the caveat that even small errors could lead to slightly inaccurate tracing of the eikonal in the papers that follow.

4. INCLUDING THE LASER SPATIAL MODE

In the discussion thus far, top-hat illumination has been assumed, for which $A = 1$, and the laser power, $P$, is given no consideration. In practice, the laser will have a particular spatial mode that defines its complex spatial profile and a total power, both of which should be taken into account. Many applications employ lasers with a ${{\rm TEM}_{00}}$ or ${{\rm TEM}_{01}}$ mode, which are characterized by a Gaussian or, more generally, by a Laguerre–Gaussian (LG) distribution. While top-hat illumination would produce the tightest laser spot, most applications require maximal use of the laser power, and as such the laser cannot be widely expanded to achieve a uniform top-hat profile across the lens aperture. Instead, it is common for the laser to be expanded such that the lens aperture apodizes the Gaussian/LG distribution at a particular radius in the distribution, which is deemed to provide the most acceptable compromise between the loss in laser power and the enlargement of the spot size. Examples that apply this approach include optical trapping, confocal microscopy, and laser cutting. We examine two commonly used laser spatial modes here: ${{\rm TEM}_{00}}$ and ${{\rm TEM}_{01}}$. The former is the most common laser mode; in its simplest form, for which the laser is assumed to have negligible expansion, the electric field is defined by a Gaussian distribution as follows:

$${A_{{{\rm TEM}_{00}}}}(x,y) = {\pi ^{- 1/4}}\sqrt {\frac{P}{\sigma}} \exp \!\left({- \frac{{{x^2} + {y^2}}}{{2{\sigma ^2}}}} \right),$$
where $\sigma$ is the standard deviation, which relates to the width of the Gaussian function. The intensity of this field is given by
$$|{A_{{{\rm TEM}_{00}}}}{|^2} = \frac{P}{{\sqrt {2\pi} \sigma ^\prime}}\exp \!\left({- \frac{{{x^2} + {y^2}}}{{2{{\sigma ^\prime}^2}}}} \right),$$
where the standard deviations are related as follows: $\sigma = \sqrt 2 \sigma ^\prime $. The most commonly chosen cutoff point for the laser at the back aperture of a microscope objective is the $1/{e^2}$ width, which is the beam radius at which the intensity has dropped to $1/{e^2}$ or approximately $14\%$ of its maximum value (by the same definition, this is the $1/e$ width for amplitude); see Chapter 17 in Ref. [26] for further details. This radius is equal to $2\sigma ^\prime $ or equivalently $\sqrt 2 \sigma$. For the RS lens that was simulated in the previous section (for which $f = 3.3$ mm and $R = 2.5$ mm), a Gaussian profile was applied at the input, i.e., ${u_0}({x_0},{y_0}) = {t_{\text{RS}}}({x_0},{y_0}){A_{{{\rm TEM}_{00}}}}({x_0},{y_0})$, with $\sigma$ set equal to $R/\sqrt 2$, $R$ being the radius of the lens. The parameters of the simulation using the thin lens algorithm are the same as those defined in Section 3.A. The intensity and phase distributions of the field in the $xz$-plane are shown in Figs. 5(a1) and 5(b1) over a size of $20\;{\rm{\unicode{x00B5}{\rm m}}} \times 20\;{\rm{\unicode{x00B5}{\rm m}}}$. The intensity and phase in the $xy$-plane over an area of $10\;{\rm{\unicode{x00B5}{\rm m}}} \times 10\;{\rm{\unicode{x00B5}{\rm m}}}$ are shown at five different distances in Figs. 5(c1) and 5(e1) with profiles of these distributions given in Figs. 5(d1) and 5(f1).
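Equation (24) can be implemented directly, and the consistency between Eqs. (24) and (25) under $\sigma = \sqrt 2 \sigma ^\prime$ can be checked numerically. This is a sketch; the function name is illustrative.

```python
import numpy as np

def tem00(x, y, P, sigma):
    """Gaussian TEM00 field amplitude as defined in Eq. (24)."""
    return (np.pi ** -0.25) * np.sqrt(P / sigma) \
        * np.exp(-(x**2 + y**2) / (2 * sigma**2))
```

Squaring this amplitude and substituting $\sigma ^\prime = \sigma /\sqrt 2$ reproduces the intensity expression of Eq. (25) point by point.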

Fig. 5. Phase and intensity of the focused spot for the case of the Rayleigh–Sommerfeld lens defined in Eq. (18) for the case of both Gaussian and ${{\rm TEM}_{01}}$ laser spatial modes. All images were calculated using the thin lens algorithm; (a1) and (b1) are the intensity and phase of the center $xz$ slice for $y = 0$ for the case of a Gaussian beam; Visualization 1 shows a video of the $xz$ intensity/phase slice as a function of different values of $y$. (c1) shows the intensity of the $xy$-plane at five different distances indicated in the lower left of the figure; (d1) shows the profile of the focused spot in the five planes. (e1) and (f1) show the corresponding phase image and profiles. Visualization 2 shows a video of the $xy$ intensity/phase slice as a function of different values of $z$. A similar set of results is shown in (a2)–(f2) for the case of a ${{\rm TEM}_{01}}$ laser mode. The characteristic doughnut shape of the focused spot is seen to expand with defocus, and the spiral phase pattern is seen to twist. Remarkably, the $xz$-slice of the phase distribution is identical to that for the Gaussian case, except for a constant phase shift on the right side of the image. Visualization 3 and Visualization 4 show videos of the variation in the $xz$- and $xy$-planes as for the Laguerre–Gaussian case. The same results for the ideal lens calculated using the ideal lens algorithm are provided in Visualization 5, Visualization 6, Visualization 7, and Visualization 8.


The Gaussian distribution defined in Eq. (24) is ideal and does not take into account expansion of the laser beam or the change in the phase distribution as a function of propagation. The ${{\rm TEM}_{00}}$ laser mode is more accurately described as a special case of the more general Laguerre–Gaussian mode. The amplitude distribution of the Laguerre–Gaussian mode is uniquely defined by the beam waist width $w(0)$ and the Rayleigh range, ${z_R}$, as well as integers $l$ (the azimuthal index) and $p$ (the number of radial nodes), as follows:

  

$$\begin{split}{A_{{{\rm LG}_{\text{pl}}}}}(x,y)&= \sqrt {\frac{{2\,p!}}{{\pi (p + \left| l \right|)!}}} \frac{1}{{w(z)}}{{\left[{\frac{{\sqrt {2({x^2} + {y^2})}}}{{w(z)}}} \right]}^{\left| l \right|}}\exp \left[{\frac{{- ({x^2} + {y^2})}}{{{w^2}(z)}}} \right]L_p^{|l|}\left[{\frac{{2({x^2} + {y^2})}}{{{w^2}(z)}}} \right]\\&\quad\times\exp \left[{jl\mathop {\tan}\nolimits^{- 1} \!\left({\frac{y}{x}} \right)} \right]\exp \left[{\frac{{jkz({x^2} + {y^2})}}{{2({z^2} + z_R^2)}}} \right]\exp \left[{- j(2p + \left| l \right| + 1)\mathop {\tan}\nolimits^{- 1} \!\left({\frac{z}{{{z_R}}}} \right)} \right],\end{split}$$
where the $1/e$ width of the Gaussian term is given by
$$w(z) = w(0)\sqrt {\frac{{{z^2} + z_R^2}}{{z_R^2}}} .$$

The argument of the last exponential term in Eq. (26) is the Gouy phase. The term $L_p^{|l|}$ is the associated Laguerre polynomial obtained from the Laguerre polynomials:

$$L_p^{|l|}[a] = {{(- 1)}^{\left| l \right|}}\frac{{{d^{\left| l \right|}}}}{{d{a^{\left| l \right|}}}}{L_{p + \left| l \right|}}[a].$$

We note that for $l = p = 0$ and for the case of negligible beam expansion ${z_R} \to \infty$, the above expression reduces to the form of the Gaussian mode defined in Eq. (24), where $w(0) = \sqrt 2 \sigma$. In order to normalize the amplitude ${A_{{{\rm LG}_{\text{pl}}}}}$ such that the integral of the intensity distribution is equal to $P$, the laser power, the amplitude distribution can be scaled by a parameter $\alpha$, where

$$\alpha = \frac{{\sqrt P}}{{\sqrt {\iint {{\left| {{A_{{{\rm LG}_{\text{pl}}}}}(x,y)} \right|}^2}{\rm d}x{\rm d}y}}}.$$
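For $p = 0$, the associated Laguerre polynomial $L_0^{|l|}$ is identically 1, so Eq. (26) can be sketched compactly. This is an illustrative implementation; the function name is an assumption, and the normalization by $\alpha$ of Eq. (29) would be applied separately.

```python
import numpy as np
from math import factorial

def lg_mode_p0(x, y, z, l, w0, zR, wavelength):
    """Laguerre-Gaussian mode of Eq. (26) restricted to p = 0, for which
    L_0^{|l|} = 1. Illustrative sketch; normalization by alpha is separate."""
    k = 2 * np.pi / wavelength
    r2 = x**2 + y**2
    w = w0 * np.sqrt((z**2 + zR**2) / zR**2)         # beam width, Eq. (27)
    amp = (np.sqrt(2 / (np.pi * factorial(abs(l)))) / w
           * (np.sqrt(2 * r2) / w) ** abs(l)
           * np.exp(-r2 / w**2))
    gouy = (abs(l) + 1) * np.arctan2(z, zR)          # Gouy phase for p = 0
    phase = (l * np.arctan2(y, x)                    # azimuthal (vortex) phase
             + k * z * r2 / (2 * (z**2 + zR**2))     # wavefront curvature
             - gouy)
    return amp * np.exp(1j * phase)
```

For $l = 1$ the on-axis amplitude vanishes, giving the doughnut profile discussed below, while for $l = 0$ and ${z_R} \to \infty$ the expression reduces to the Gaussian of Eq. (24).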

Once again, using the same RS lens, an LG profile was applied at the input, i.e., ${u_0}({x_0},{y_0}) = {t_{\text{RS}}}({x_0},{y_0})\alpha {A_{{{\rm LG}_{01}}}}({x_0},{y_0})$, where $w(0)$ was set equal to $R$ and ${z_R} = w{(0)^2}k/2$. Once again the parameters of the simulation using the thin lens algorithm are the same as those defined in Section 3.A. The intensity and phase distributions of the field in the $xz$-plane are shown in Figs. 5(a2) and 5(b2) over a size of $20\;{\rm{\unicode{x00B5}{\rm m}}} \times 20\;{\rm{\unicode{x00B5}{\rm m}}}$, and a clear discontinuity in the phase is observed at $({x_z},{y_z}) = (0,0)$. The intensity and phase in the $xy$-plane over an area of $10\;{\rm{\unicode{x00B5}{\rm m}}} \times 10\;{\rm{\unicode{x00B5}{\rm m}}}$ are shown at five different distances in Figs. 5(c2) and 5(e2) with profiles of these distributions given in Figs. 5(d2) and 5(f2). The spiral phase shape of the distribution is clearly evident and appears to rotate with propagation distance. The video files cited in the Fig. 5 caption also enable visualization of this rotation.

5. DIFFRACTIVE OPTICAL ELEMENTS

An interesting question arises when we consider diffractive optical elements (DOEs) that are designed as lenses: Is such a thin lens function applicable for the thin lens algorithm? If we are dealing with a diffractive optical element that contains diffraction orders, such as a binary phase plate or a spatial light modulator (SLM), then in order to fully capture the propagating wavefield we would need to sample this input function with a sufficiently fine sampling interval to encompass the spatial frequency support of all of the diffraction orders, up to the point at which their energy is small enough to be considered negligible. This densely sampled function could then be input to a numerical propagation, but the algorithm would have to be chosen such that it produces an output field over the true spatial and spatial frequency supports of such a field. Certainly the ASM would be a poor choice, since the output support never expands beyond that of the input; zero-padding or a chirp transform would be necessary.

There is a short-cut one could consider, though, that may validate the use of the thin lens algorithm. If one were confident that in the region of interest, for example, the focal plane of the DOE (assuming it is a focusing phase function), the diffraction orders were well separated with negligible overlapping energy, then the thin lens algorithm (or ASM with pre-aliasing) becomes applicable for each order in isolation. This assumption is likely to be valid if the DOE is well pixelated. Taking the SLM as our example, and presuming the central order is the only order of real interest, the approach might be as follows. Taking one sample value to represent the phase for each pixel (i.e., sampling interval equals pixel pitch), perform a DFT on this 2D discrete function. This DFT can then be multiplied by a 2D “sinc” function (since the SLM can be described by the convolution of our samples with a pixel “rect” function). Performing an inverse DFT will produce a discrete function that can be used as input to the thin lens algorithm. Here we have isolated only the zero order for propagation, in the knowledge that it will indeed be isolated in the focal region.
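The short-cut just described can be sketched as follows. This is an illustrative sketch assuming one sample per pixel and a configurable pixel fill factor; the function name and interface are assumptions.

```python
import numpy as np

def slm_zero_order(phase_slm, fill_factor=1.0):
    """Sketch of the short-cut in the text: one sample per SLM pixel, a DFT,
    multiplication by the pixel sinc envelope (the transform of the pixel
    rect function), and an inverse DFT. The result can then serve as input
    to the thin lens algorithm with the zero order isolated."""
    u = np.exp(1j * phase_slm)
    U = np.fft.fft2(u)
    fx = np.fft.fftfreq(phase_slm.shape[0])      # cycles per pixel pitch
    fy = np.fft.fftfreq(phase_slm.shape[1])
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    # np.sinc is the normalized sinc, sin(pi x)/(pi x), matching the
    # Fourier transform of a unit rect.
    envelope = np.sinc(fill_factor * FX) * np.sinc(fill_factor * FY)
    return np.fft.ifft2(U * envelope)
```

A flat (zero) phase pattern passes through unchanged, since only the DC term is present and the envelope is unity there.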

6. INTERPRETATION OF PHASE

When calculating diffraction patterns via the two algorithms presented in this paper, or by any other method, interpreting the phase at points where the amplitude of the wave is negligible presents a unique challenge. The phase of the wavefield at a given point is a critical parameter that describes the position within the wave’s oscillatory cycle at that specific point in space and time, and it is crucial for determining interference. However, the physical significance of the phase becomes less pronounced at points where the amplitude, or the wave’s strength, is close to zero. In these regions of negligible amplitude, the phase can become mathematically indeterminate or ill-defined. This indeterminacy arises because the phase is derived from the argument of a complex number representing the wave; when the modulus of this complex number (the amplitude) approaches zero, small fluctuations or uncertainties in the wavefield, caused by small errors inherent to the numerical method, imperfect sampling conditions, or machine precision, can lead to disproportionately large and undefined variations in the phase value.

In practical calculations, it is common to encounter zones where the amplitude is very small. Physically, these areas with negligible amplitude often correspond to points of destructive interference or locations outside the main propagation path of the wave, where its influence is minimal. It is clear in Fig. 5 (and Fig. 2 in Supplement 1), for example, that the amplitude of the field is very low away from the center, and yet the phase values in this region appear to be characterized by an organized pattern. From a computational perspective, one might argue that handling these regions of negligible amplitude requires careful consideration, since these phase values must be poorly defined based on the arguments above. We would argue the opposite: these phase values can be ignored, since these low-amplitude regions are less significant in terms of the wave’s physical impact. The phase value in these weak-amplitude regions is less critical to the overall behavior of the wave, particularly regarding observable quantities like intensity, which is primarily dependent on the amplitude. For regions where the amplitude falls below a certain threshold, the phase might be considered undefined or treated as less reliable, and we should not be overly concerned with its accuracy. Therefore, we make no attempt to examine the eikonal in these areas in subsequent papers. Indeed, we will see in those papers that the eikonal seeks to avoid these areas, always bending away from them towards areas of higher amplitude, which is of course in agreement with the concept of flux.
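The sensitivity of the phase to small perturbations in low-amplitude regions is easy to demonstrate numerically. The following Python sketch uses an illustrative Gaussian-spot surrogate for a focused field (not one of the fields computed in this paper); tiny additive noise stands in for numerical error, and the amplitude thresholds defining "strong" and "weak" regions are arbitrary:

```python
import numpy as np

# Surrogate focused field: Gaussian amplitude with a smooth quadratic phase.
x = np.linspace(-20, 20, 401)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
u = np.exp(-r**2 / 8) * np.exp(1j * 0.3 * r**2)

# Perturb with tiny complex noise, standing in for numerical error.
rng = np.random.default_rng(0)
noise = 1e-6 * (rng.standard_normal(u.shape) + 1j * rng.standard_normal(u.shape))

# Phase change caused by the perturbation at every point.
phase_err = np.abs(np.angle((u + noise) * np.conj(u)))

# Masks: amplitude well above the noise floor vs. comparable to it.
strong = np.abs(u) > 1e-1
weak = np.abs(u) < 1e-7
```

In the strong region the phase shifts by only a tiny fraction of a radian, while in the weak region it becomes essentially random, which motivates simply masking (ignoring) the phase wherever the amplitude falls below a chosen threshold.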

7. CONCLUSION

In this paper, we have developed two algorithms for the numerical calculation of the three-dimensional complex amplitude of coherent wavefields that are focused by a lens. The first algorithm deals with the case of the thin lens approximation, while the second deals with an ideal lens system. In both cases, the angular spectrum method is applied in a loop in order to generate a stack of complex images over a regular grid around the focal region of the lens. However, the manner in which the ASM is applied differs considerably between the two cases. For the thin lens, the ASM is applied directly to the lens transmission function, but the difference in spatial support between the input plane (i.e., the lens aperture) and the output plane (the focused spot) requires a “pre-aliasing” step, recently proposed by other authors [11,19] and further developed in this paper, which, although numerically intensive, needs only to be applied once before the ASM is applied iteratively. For the ideal lens case, a single discrete Fourier transform is used to transform the circular pupil function (in the Fourier domain) of the lens system to the focal plane of the lens, and then the ASM is applied iteratively to generate the stack of complex images immediately before and after this plane. A detailed list of steps is provided for both algorithms in Section 3 that includes the selection of the key lens properties as well as the optimal sampling conditions over the three spatial dimensions.
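The core of both algorithms, the looped ASM, can be sketched in a few lines of Python. This is a minimal illustration of the technique, not the released code of Ref. [28]: the DFT of the input is computed once and reused for every plane, and evanescent components decay automatically because the axial frequency becomes complex beyond the $1/\lambda$ cutoff:

```python
import numpy as np

def asm_stack(u0, wavelength, T, z_values):
    """Angular spectrum propagation of u0 (square grid, sampling interval T)
    to each distance in z_values, returning a 3D stack of complex planes.
    The DFT of the input is computed once and reused per plane."""
    N = u0.shape[0]
    fx = np.fft.fftfreq(N, d=T)
    FX, FY = np.meshgrid(fx, fx)
    # Axial spatial frequency; complex sqrt makes evanescent waves decay.
    fz = np.sqrt((1.0 / wavelength**2 - FX**2 - FY**2).astype(complex))
    U0 = np.fft.fft2(u0)                 # computed once, reused in the loop
    stack = np.empty((len(z_values), N, N), dtype=complex)
    for i, z in enumerate(z_values):
        H = np.exp(2j * np.pi * z * fz)  # ASM transfer function for this z
        stack[i] = np.fft.ifft2(U0 * H)
    return stack
```

A convenient sanity check: for a unit-amplitude plane wave the stack simply accumulates the on-axis phase $2\pi z/\lambda$ while the amplitude stays uniform.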

The thin lens functions that were initially selected for analysis included the case of the spherical lens for which the transmission function is directly related to the thickness of the glass. This was compared with the three-dimensional focus of an equivalent lens defined with a simple quadratic phase, and the results were found to be the same, as expected. A third case was also investigated whereby a transmission function defined by the kernel of the Rayleigh–Sommerfeld diffraction integral was proposed. This “lens” was simulated to produce a focused spot that was much smaller than the equivalent results for the spherical lenses. Indeed, the results were shown to be equivalent to those from an ideal lens with similar numerical aperture using the second algorithm.

In Section 4, both algorithms were augmented to account for the laser spatial mode for the case when the input to the lens is a temporally coherent laser beam. In particular, ${{\rm TEM}_{00}}$ (Gaussian) and ${{\rm TEM}_{01}}$ (Laguerre–Gaussian) modes were investigated. Furthermore, in the third paper both algorithms will be further augmented to account for aberrations in the lens, and in both cases the well-known Zernike polynomials will be used to describe the various forms of lens aberration, albeit in different ways for the two algorithms.
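For reference, both input profiles can be sampled directly on the lens grid before multiplication by the lens transmission or pupil function. The Python sketch below uses standard waist-plane formulas with power normalizations chosen for illustration; they may differ by constant factors from the conventions of Section 4:

```python
import numpy as np

def tem00(X, Y, sigma, P=1.0):
    """TEM00 (Gaussian) amplitude profile, normalized so that the
    integrated intensity equals P (illustrative convention)."""
    return np.sqrt(P / (np.pi * sigma**2)) * np.exp(-(X**2 + Y**2) / (2 * sigma**2))

def lg01(X, Y, w):
    """Laguerre-Gaussian p=0, l=1 profile at the waist: a doughnut
    amplitude carrying a spiral exp(j*phi) phase; unit power."""
    r2 = X**2 + Y**2
    phi = np.arctan2(Y, X)
    return (np.sqrt(2.0 / np.pi) / w) * (np.sqrt(2.0 * r2) / w) \
        * np.exp(-r2 / w**2) * np.exp(1j * phi)

# Sample both modes on a common grid.
x = np.linspace(-5.0, 5.0, 801)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
g = tem00(X, Y, sigma=1.0, P=2.0)
lg = lg01(X, Y, w=1.0)
```

The on-axis zero of the ${{\rm TEM}_{01}}$ mode (the doughnut null) and the chosen power normalizations can both be verified by direct numerical integration over the grid.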

The computational time for the two algorithms presented in this paper is an important consideration. Disregarding the time-costly initial pre-aliasing step in the thin lens algorithm, which is implemented once and can be stored in memory, both propagation algorithms essentially comprise a single element-wise matrix multiplication and a single DFT operation, both of which can be implemented using parallelized SIMD instructions on a graphics processing unit (GPU) [27]. This process involves transferring the input from the CPU to the GPU over a PCI bus (this can be ignored since it happens only once), implementing the GPU operations, and transferring the result back to the CPU. Using an Intel i7-9700K CPU and an Nvidia GeForce GTX 1070 GPU, we were able to implement this overall process in 20 ms for a single propagation of size $2000 \times 2000$ samples. Calculating a volume of $2000 \times 2000 \times 2000$ samples takes 40 s for both algorithms; this is the time taken to calculate each of the volumes used in the results across the three papers. Further speed-up could be achieved using higher-performance GPUs. The code to implement the two algorithms is provided in Code 1, Ref. [28].

This paper is a useful reference for those who wish to simulate focused light over three dimensions with lenses that have numerical aperture ${\lt}0.6$ and can therefore be modeled using the paraxial approximation. It is particularly useful for those wishing to simulate laser beams focused by lenses with aberrations. Although this paper is useful in a broader context, it is the first part in a three-part series; the second part relates to calculating and visualizing the flux of focused light fields, which has been investigated by several authors as an extension of the concept of geometrical ray optics. Calculating the flux involves calculating the derivative of the phase of the wavefield as it propagates. A necessary first step, as described in this paper, is to calculate the complex amplitude of the light over three dimensions with sufficiently fine-grained sampling intervals such that the flux rays can be tracked accurately. The third part highlights the power of the method to provide insights into lens aberrations.

Funding

Irish Research eLibrary; Science Foundation Ireland (15/CDA/3667, 16/RI/3399, 19/FFP/7025).

Acknowledgment

The authors would like to acknowledge the support of Science Foundation Ireland. Open access funding provided by Irish Research eLibrary. This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant numbers 15/CDA/3667 and 19/FFP/7025.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper have been produced using the code provided in Code 1, Ref. [28].

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Elsevier, 2013).

2. M. Mir, B. Bhaduri, R. Wang, et al., “Quantitative phase imaging,” Prog. Opt. 57, 133–217 (2012).

3. Y. Zhao, L. Cao, H. Zhang, et al., “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23, 25440–25449 (2015).

4. T. Wilson, C. Sheppard, and K. Löschke, Theory and Practice of Scanning Optical Microscopy (1985).

5. J. J. Stamnes, Waves in Focal Regions: Propagation, Diffraction and Focusing of Light, Sound and Water Waves (Routledge, 2017).

6. J. J. Braat, S. van Haver, A. J. Janssen, et al., “Assessment of optical systems by means of point-spread functions,” Prog. Opt. 51, 349–468 (2008).

7. U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33, 179–181 (1994).

8. B. M. Hennelly and J. T. Sheridan, “Generalizing, optimizing, and inventing numerical algorithms for the fractional Fourier, Fresnel, and linear canonical transforms,” J. Opt. Soc. Am. A 22, 917–927 (2005).

9. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995).

10. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

11. M. Hillenbrand, D. P. Kelly, and S. Sinzinger, “Numerical solution of nonparaxial scalar diffraction integrals for focused fields,” J. Opt. Soc. Am. A 31, 1832–1841 (2014).

12. N. Delen and B. Hooker, “Free-space beam propagation between arbitrarily oriented planes based on full diffraction theory: a fast Fourier transform approach,” J. Opt. Soc. Am. A 15, 857–867 (1998).

13. F. Shen and A. Wang, “Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula,” Appl. Opt. 45, 1102–1110 (2006).

14. S. Van Haver and A. Janssen, “Advanced analytic treatment and efficient computation of the diffraction integrals in the extended Nijboer-Zernike theory,” J. Eur. Opt. Soc. Rap. Publ. 8, 13044 (2013).

15. L. Onural, “Exact analysis of the effects of sampling of the scalar diffraction field,” J. Opt. Soc. Am. A 24, 359–367 (2007).

16. P. Picart and P. Tankam, “Analysis and adaptation of convolution algorithms to reconstruct extended objects in digital holography,” Appl. Opt. 52, A240–A253 (2013).

17. T. Kozacki, K. Falaggis, and M. Kujawinska, “Computation of diffracted fields for the case of high numerical aperture using the angular spectrum method,” Appl. Opt. 51, 7080–7088 (2012).

18. M. Hillenbrand, A. Hoffmann, D. P. Kelly, et al., “Fast nonparaxial scalar focal field calculations,” J. Opt. Soc. Am. A 31, 1206–1214 (2014).

19. M. Kanka, R. Riesenberg, and H. Kreuzer, “Reconstruction of high-resolution holographic microscopic images,” Opt. Lett. 34, 1162–1164 (2009).

20. W. Zhang, H. Zhang, and G. Jin, “Frequency sampling strategy for numerical diffraction calculations,” Opt. Express 28, 39916–39932 (2020).

21. M. Testorf, B. Hennelly, and J. Ojeda-Castañeda, Phase-space Optics: Fundamentals and Applications (McGraw-Hill Education, 2010).

22. R. N. Bracewell, The Fourier Transform and Its Applications (McGraw-Hill, 1986).

23. W. Zhang, H. Zhang, C. J. Sheppard, et al., “Analysis of numerical diffraction calculation methods: from the perspective of phase space optics and the sampling theorem,” J. Opt. Soc. Am. A 37, 1748–1766 (2020).

24. J. J. Healy, B. M. Hennelly, and J. T. Sheridan, “Additional sampling criterion for the linear canonical transform,” Opt. Lett. 33, 2599–2601 (2008).

25. L. F. Shampine, “MATLAB program for quadrature in 2D,” Appl. Math. Comput. 202, 266–274 (2008).

26. A. E. Siegman, Lasers (University Science Books, 1986).

27. L. Ahrenberg, A. J. Page, B. M. Hennelly, et al., “Using commodity graphics hardware for real-time digital hologram view-reconstruction,” J. Disp. Technol. 5, 111–119 (2009).

28. Q. Yu and B. M. Hennelly, “Code for ‘Nonlinear ray tracing in focused fields, part 1. Calculation of 3D focused wavefields: tutorial’,” figshare, 2024, https://doi.org/10.6084/m9.figshare.25065944.

Supplementary Material (10)

Code 1: This code implements the two algorithms described in the paper.
Supplement 1: Derivation of the sampling theorem for the angular spectrum method from first principles.
Visualization 1: This video shows the $xz$ intensity/phase slice as a function of different values of $y$ with the Gaussian laser spatial mode and algorithm 1.
Visualization 2: This video shows the $xy$ intensity/phase slice as a function of different values of $z$ with the Gaussian laser spatial mode and algorithm 1.
Visualization 3: This video shows the $xz$ intensity/phase slice as a function of different values of $y$ with the Laguerre–Gaussian laser spatial mode and algorithm 1.
Visualization 4: This video shows the $xy$ intensity/phase slice as a function of different values of $z$ with the Laguerre–Gaussian laser spatial mode and algorithm 1.
Visualization 5: This video shows the $xy$ intensity/phase slice as a function of different values of $z$ with the Gaussian laser spatial mode and algorithm 2.
Visualization 6: This video shows the $xz$ intensity/phase slice as a function of different values of $y$ with the Laguerre–Gaussian laser spatial mode and algorithm 2.
Visualization 7: This video shows the $xz$ intensity/phase slice as a function of different values of $y$ with the Laguerre–Gaussian laser spatial mode and algorithm 2.
Visualization 8: This video shows the $xz$ intensity/phase slice as a function of different values of $y$ with the Laguerre–Gaussian laser spatial mode and algorithm 2.


Figures (5)

Fig. 1.
Fig. 1. Illustration of the bandwidth of the three-dimensional function ${u_z}({x_z},{y_z})$. (a) The bandwidth in $x$ and $y$ is in general determined by the bandwidth of the initial function ${u_0}$, and (b) the bandwidth in the $z$-direction is determined by the size of the surface area on the Ewald sphere over which the function exists.
Fig. 2.
Fig. 2. Computational and memory savings provided by pre-aliasing the input to the ASM to calculate the 3D distribution of a convergent wavefield where the input has a large number of samples, ${N_x} = {N_y} = 10{,}000$. (a) Computational and memory saving for the 2D case only for a range of values ${P_x}$ and (b) the same result over a continuous range of ${M_x}$. (c) and (d) show the same set of results for the 3D case where ${N_z} = 1000$.
Fig. 3.
Fig. 3. Illustration of the two algorithms developed in this paper. (a) For the case of thin lens, the input to the algorithm is ${u_0}({x_0},{y_0})$, which is the product of the complex thin lens transmission function $t$ shown in gray, the laser profile $A$ shown in blue, and the lens pupil function of radius $R$ shown in red. The ASM is the key component in the algorithm and is iteratively applied to generate a 3D matrix of samples that represent the focused wavefield over a volume of interest. (b) The algorithm comprises five steps, which are detailed in the text. Note that Step 3 is a “pre-aliasing” step whereby multiple copies of the input are shifted and added. This step is applied only once and serves to reduce the number of samples fed as input to the iteratively applied ASM. (c) For the case of ideal lens, the input to the algorithm is ${U_0}({x^\prime _0},{y^\prime _0})$, which is the product of the laser profile $A$ and the lens pupil function of radius $R$. A first DFT operation calculates the focused plane, and following this the ASM is applied iteratively to generate a 3D matrix of samples that represent the focused wavefield over a volume of interest around the focused plane. (d) The ideal lens algorithm comprises six steps, which are detailed in the text.
Fig. 4.
Fig. 4. The intensity profiles of convergent light from four different lenses with the same numerical aperture and with the same top-hat uniform illumination, $A = 1$. For the case of the thin lens defined in Eq. (14), (a1) and (a2) are slices through the computed 3D volume for $y = 0$ (i.e., the $xz$-plane) and $z = f - 49\;{\rm{\unicode{x00B5}{\rm m}}}$, which was found to be the $xy$-plane with the brightest spot. Also shown in (a2) is the inset image of the profile of this spot. In (b1) and (b2) the same figures are shown for the thin lens with purely quadratic phase given in Eq. (17). The profiles for these two cases are almost identical owing to the high accuracy of the approximation of obtaining ${t_2}$ from ${t_1}$. For the case of the third thin lens defined in Eq. (18), (c1) and (c2) are slices through the volume for $y = 0$ and $z = f$; for this case the light converges to a much smaller spot due to the profile of the lens function. Finally, the same slices are shown in (d1) and (d2) for the case of the ideal lens. Remarkably, the intensity profiles obtained using the RS lens and the ideal lens are very similar even though these were computed using different algorithms and definitions. Note that the scale used for images (c1), (c2), (d1), and (d2) is 0.05 times that used for the other four images due to the significantly higher intensity values; (e1) shows a comparison of the amplitude profile over the line indicated in (c3). The result from the thin lens algorithm (TLA) is plotted with the corresponding result using direct numerical integration, which is described in the text. The difference in amplitude is shown on a log scale in (e2); equivalent results are shown for the ideal lens case in (e3) and (e4).
Fig. 5.
Fig. 5. Phase and intensity of the focused spot for the case of the Rayleigh–Sommerfeld lens defined in Eq. (18) for both Gaussian and ${{\rm TEM}_{01}}$ laser spatial modes. All images were calculated using the thin lens algorithm; (a1) and (b1) are the intensity and phase of the central $xz$-slice for $y = 0$ for the case of a Gaussian beam; Visualization 1 shows a video of the $xz$ intensity/phase slice as a function of different values of $y$. (c1) shows the intensity of the $xy$-plane at five different distances indicated in the lower left of the figure; (d1) shows the profile of the focused spot in the five planes. (e1) and (e2) show the corresponding phase image and profiles. Visualization 2 shows a video of the $xy$ intensity/phase slice as a function of different values of $z$. A similar set of results is shown in (a2)–(f2) for the case of a ${{\rm TEM}_{01}}$ laser mode. The characteristic doughnut shape of the focused spot is seen to expand with defocus, and the spiral phase pattern is seen to twist. Remarkably, the $xz$-slice of the phase distribution is identical to that for the Gaussian case, except for a constant phase shift on the right side of the image. Visualization 3 and Visualization 4 show videos of the variation in the $xz$- and $xy$-planes for the Laguerre–Gaussian case. The same results for the ideal lens calculated using the ideal lens algorithm are provided in Visualization 5, Visualization 6, Visualization 7, and Visualization 8.

Equations (29)


(1) $u_z(x_z,y_z)={\rm FT}^{-1}\{{\rm FT}\{u_0(x_0,y_0)\}\,H_z(f_x,f_y)\}$
(2) $H_z(f_x,f_y)=\begin{cases}\exp\!\left(j2\pi z\sqrt{\frac{1}{\lambda^2}-f_x^2-f_y^2}\right),&{\rm for}\ \frac{1}{\lambda^2}\ge f_x^2+f_y^2\\ \exp\!\left(-2\pi z\sqrt{f_x^2+f_y^2-\frac{1}{\lambda^2}}\right),&{\rm for}\ \frac{1}{\lambda^2}\lt f_x^2+f_y^2\end{cases}$
(3) $U(f_x,f_y)={\rm FT}\{u(x,y)\}(f_x,f_y)=\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}u(x,y)\exp[-j2\pi(xf_x+yf_y)]\,{\rm d}x\,{\rm d}y,\qquad u(x,y)={\rm FT}^{-1}\{U(f_x,f_y)\}(x,y)=\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}U(f_x,f_y)\exp[j2\pi(xf_x+yf_y)]\,{\rm d}f_x\,{\rm d}f_y$
(4) $u_z(n_xT_x,n_yT_y)={\rm DFT}^{-1}\{{\rm DFT}\{u_0(n_xT_x,n_yT_y)\}\times H_z(m_xT_{f_x},m_yT_{f_y})\}$
(5) $\tilde{u}_0(n_xT_x,n_yT_y)=\sum_{k_x=-\frac{P_x+\epsilon_x}{2}}^{\frac{P_x+\epsilon_x}{2}}\sum_{k_y=-\frac{P_y+\epsilon_y}{2}}^{\frac{P_y+\epsilon_y}{2}}u_0(n_xT_x+k_xM_xT_x,\,n_yT_y+k_yM_yT_y),\qquad n_x=-\frac{M_x}{2},\ldots,\frac{M_x}{2}-1,\quad n_y=-\frac{M_y}{2},\ldots,\frac{M_y}{2}-1$
(6) $P_x=\frac{N_x}{M_x},\qquad P_y=\frac{N_y}{M_y}$
(7) $T_{x,y}\le\max\!\left(\frac{\lambda}{2},\,\frac{1}{\Delta f_{x,y}}\right)$
(8) $f_{z,\min}=\sqrt{\frac{1}{\lambda^2}-\left(\frac{\Delta f_x}{2}\right)^2-\left(\frac{\Delta f_y}{2}\right)^2}$
(9) $T_z\le\frac{1}{\Delta f_z}=\frac{1}{2(f_{z,\max}-f_{z,\min})}$
(10) $u_z(x_z,y_z)=\sum_{n_z=-\frac{N_z}{2}}^{\frac{N_z}{2}-1}\sum_{n_x=-\frac{N_x}{2}}^{\frac{N_x}{2}-1}\sum_{n_y=-\frac{N_y}{2}}^{\frac{N_y}{2}-1}u_{n_zT_z}(n_xT_x,n_yT_y)\,{\rm sinc}\frac{\pi(x_z-n_xT_x)}{T_x}\,{\rm sinc}\frac{\pi(y_z-n_yT_y)}{T_y}\,{\rm sinc}\frac{\pi(z-n_zT_z)}{T_z}$
(11) $t(x,y)=\exp[jk\phi(x,y)]\,P\!\left(\frac{x}{R},\frac{y}{R}\right)$
(12) $P(x,y)=1$ for $(x^2+y^2)\le 1$; $P(x,y)=0$ for $(x^2+y^2)>1$
(13) $u_0(x_0,y_0)=A(x_0,y_0)\,t(x_0,y_0)$
(14) $t_1(x,y)=\exp\{jk[\Delta_0+\Delta(x,y)(n-1)]\}\,P\!\left(\frac{x}{R},\frac{y}{R}\right)$
(15) $\Delta(x,y)=\Delta_0-R_1\left(1-\sqrt{1-\frac{x^2+y^2}{R_1^2}}\right)+R_2\left(1-\sqrt{1-\frac{x^2+y^2}{R_2^2}}\right)$
(16) $\frac{1}{f}=(n-1)\left(\frac{1}{R_1}-\frac{1}{R_2}\right)$
(17) $t_2(x,y)=\exp\!\left[-\frac{j\pi}{\lambda f}(x^2+y^2)\right]P\!\left(\frac{x}{R},\frac{y}{R}\right)$
(18) $t_{\rm RS}(x,y)=\exp[-jk\,r_f(x,y)]\,P\!\left(\frac{x}{R},\frac{y}{R}\right),\qquad r_f(x,y)=\sqrt{x^2+y^2+f^2}$
(19) $u_z(x_z,y_z)=\frac{1}{2\pi}\iint u_0(x_0,y_0)\,\frac{\partial}{\partial z}\!\left[\frac{\exp(jk\,r_z(x_z-x_0,y_z-y_0))}{r_z(x_z-x_0,y_z-y_0)}\right]{\rm d}x_0\,{\rm d}y_0$
(20) $u_z(x_z,y_z)=u_0(x_0,y_0)\ast h_z(x_0,y_0),\qquad h_z(x_0,y_0)=\frac{z\exp[jk\,r_z(x_0,y_0)]}{2\pi r_z^2(x_0,y_0)}\left[jk-\frac{1}{r_z(x_0,y_0)}\right]$
(21) $u_0(x_0,y_0)={\rm OFT}\{U_0(x_0',y_0')\}(x_0,y_0)=\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty}U_0(x_0',y_0')\exp\!\left[-\frac{j2\pi}{\lambda f}(x_0x_0'+y_0y_0')\right]{\rm d}x_0'\,{\rm d}y_0'$
(22) $U_0(x_0',y_0')=A(x_0',y_0')\,P\!\left(\frac{x_0'}{r\lambda f},\frac{y_0'}{r\lambda f}\right)$
(23) $r=\frac{{\rm NA}}{\lambda}$
(24) $A_{{\rm TEM}_{00}}(x,y)=\pi^{-1/4}\sqrt{\frac{P}{\sigma}}\exp\!\left(-\frac{x^2+y^2}{2\sigma^2}\right)$
(25) $|A_{{\rm TEM}_{00}}|^2=\frac{P}{\sqrt{\pi}\,\sigma}\exp\!\left(-\frac{x^2+y^2}{\sigma^2}\right)$
(26) $A_{{\rm LG}_{pl}}(x,y)=\sqrt{\frac{2p!}{\pi(p+|l|)!}}\,\frac{1}{w(z)}\left[\frac{2(x^2+y^2)}{w^2(z)}\right]^{|l|/2}\exp\!\left[-\frac{x^2+y^2}{w^2(z)}\right]L_p^{|l|}\!\left[\frac{2(x^2+y^2)}{w^2(z)}\right]\exp\!\left[jl\tan^{-1}\!\left(\frac{y}{x}\right)\right]\exp\!\left[\frac{jkz(x^2+y^2)}{2(z^2+z_R^2)}\right]\exp\!\left[-j(2p+|l|+1)\tan^{-1}\!\left(\frac{z}{z_R}\right)\right]$
(27) $w(z)=w(0)\sqrt{\frac{z^2+z_R^2}{z_R^2}}$
(28) $L_p^{|l|}[a]=(-1)^{|l|}\frac{{\rm d}^{|l|}}{{\rm d}a^{|l|}}L_{p+|l|}[a]$
(29) $\alpha=\frac{P}{\iint|A_{{\rm LG}_{pl}}(x,y)|^2\,{\rm d}x\,{\rm d}y}$