Introduction

A key aspect of formulation and process development of biologics is the prevention of particle formation. Protein particles can occur at all manufacturing steps due to external stress, formulation components, or process-related factors that impair colloidal protein stability [1,2,3,4,5,6,7]. Protein aggregation, which can be reversible or irreversible, is driven by various mechanisms, including self-association of native monomers, particle formation from partially unfolded intermediates, or chemical degradation and cross-linking [8,9,10]. This may produce protein particles of different sizes, ranging from oligomers (< 0.1 µm) through submicron (0.1 – 1.0 µm) and subvisible (1.0 – 100 µm) to visible (> 100 µm) particles [11, 12]. Such particulate impurities in biopharmaceuticals are a major concern as they can affect the safety and efficacy of the final drug product [2, 3, 13,14,15,16]. Therefore, it is essential to understand, monitor, and minimize proteinaceous particle formation. Despite several existing and validated methods to analyze particles by size and concentration [7, 17,18,19,20,21], no single analytical technique currently covers the whole size range. Especially for submicron particles, there is an analytical size gap around 1 µm, and methodological requirements proposed by the United States Pharmacopeia (USP) are lacking. Nevertheless, the Food and Drug Administration (FDA) has issued nonbinding recommendations for risk-based profiling together with the characterization of particles down to a size of 0.1 µm [22]. Thus, particle analysis and the continuous evaluation of new methods are crucial in pharmaceutical development to meet regulatory requirements and to ensure drug product quality and patient safety [7, 23,24,25].

During the last decade, nanoparticle tracking analysis (NTA) has been applied for particle analysis in the submicron range in various applications such as nanoplastics, vesicles, or protein particles [26,27,28,29,30,31]. NTA simultaneously tracks the Brownian motion of individual particles in the focus area by monitoring the light they scatter upon illumination. This motion represents the random movement of particles in liquids. In the sample chamber, particles are illuminated by a focused, monochromatic laser beam. The light scattered by each particle is captured by a high-sensitivity complementary metal-oxide-semiconductor (CMOS) camera via a 20-fold magnification microscope lens [32]. In addition, the software automatically calculates the viscosity for each measurement by assuming an aqueous, water-like solution; the temperature is measured and taken into account for each sample. Each scattering centre in the captured video frames is individually tracked, and the diffusion coefficient (D) is calculated from the two-dimensional trajectories (x, y) over time (t) using Eq. (1). Since all other variables in the Stokes-Einstein equation (see Eq. (2); kB = Boltzmann’s constant, T = temperature, η = dynamic viscosity, d = particle diameter) are known, the diameter can be determined [33]. Consequently, the particle size is derived from the diffusional properties of the scattering centre, i.e., the diffusion coefficient of the particle. The calculated size reflects that of an equivalent spherical particle with the same hydrodynamic properties as the hydrated analyte particle and is thus denoted the hydrodynamic size [34, 35]. Additionally, since individual particles are tracked in a known volume, the particle concentration can be calculated for each size class, yielding a size distribution.

$$\frac{\overline{(x,y)^2}}{4t}=D$$
(1)
$$D= \frac{{k}_{B}T}{3\pi \eta d}$$
(2)
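As an illustration of the two equations above, the following minimal Python sketch estimates the diffusion coefficient from a two-dimensional track via Eq. (1) and converts it into a hydrodynamic diameter via Eq. (2). This is a simplified illustration, not the instrument's actual tracking code; the frame interval, temperature, and viscosity defaults are assumptions chosen for the example.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K


def diffusion_coefficient(track, dt):
    """Estimate D (m^2/s) from a 2D track via Eq. (1): D = <(x, y)^2> / (4 t).

    track: (N, 2) array of x, y positions in metres; dt: frame interval in s.
    The mean squared displacement is taken over single-frame steps (t = dt).
    """
    disp = np.diff(track, axis=0)            # per-frame displacements
    msd = np.mean(np.sum(disp ** 2, axis=1))  # mean squared displacement
    return msd / (4.0 * dt)


def hydrodynamic_diameter(D, T=298.15, eta=0.89e-3):
    """Invert the Stokes-Einstein relation, Eq. (2): d = kB*T / (3*pi*eta*D).

    Defaults assume water at 25 deg C (eta ~ 0.89 mPa*s).
    """
    return KB * T / (3.0 * np.pi * eta * D)
```

For a 100 nm particle in water at 25 °C, Eq. (2) predicts a diffusion coefficient of roughly 4.9 µm²/s; a sufficiently long simulated track recovers the diameter from the displacements alone.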

Improvements in digital camera technology, video frame processing, and particle analysis algorithms have significantly expanded the possibilities for tracking individual particles [36].

The experimental procedure of NTA is divided into a capturing step and a subsequent analysis of the captured video frames [26, 37]. In our experience, each step is strongly influenced by (i) the sample preparation and (ii) several instrumental parameters such as the camera level (CL), camera focus, and detection threshold (DT), all of which must be set manually by the operator for each measurement (refer to Table 1 for a comprehensive overview of instrumental parameters). Adjustments to these video capturing and analysis parameters can considerably affect the results, as demonstrated in a previous study [26]. Newer versions of the instrument software (NTA 3.20) allow for automated optimization of the camera focus and CL. However, this feature is primarily useful for monodisperse samples [27, 38]; for polydisperse or unknown samples, the software struggles to automatically identify appropriate settings [26, 37]. Consequently, it is crucial for the operator handling the NTA to be well-trained and experienced, as emphasized in several publications [26, 37, 39].

Table 1 Overview of NTA software settings

It is therefore necessary to gather as much information as possible about the influence of the experimental and instrumental settings on the measurement result, particularly for proteinaceous samples. This understanding is a prerequisite for developing a robust, operator-independent assay to analyze samples from process or formulation development, where the composition is unknown. In this study, we present a comprehensive NTA analysis of well-defined samples "representing" development samples. Polystyrene (PS) beads were used due to their higher colloidal stability compared to protein particle samples [40]. They are available in a size range from 50 to 1000 nm with known particle size and solid fraction of the bead suspension, which allows the particle concentration to be calculated. We used monomodal PS bead samples to investigate key experimental parameters such as the particle concentration, the measurement temperature, and the influence of the neutral density (ND) filter on determining the concentration and size of PS particles. In addition to monomodal samples, we investigated multimodal samples that are more representative of development samples. We used bimodal and trimodal mixtures of PS bead samples to assess the resolution power of the investigated NTA NS300 setup for discriminating PS beads of varying size. This study offers practical insights into the capabilities and limitations of using NTA for submicron particle analysis in the biopharmaceutical community.

Materials and Methods

Materials

National Institute of Standards and Technology (NIST)-certified PS particle standards of the 3000-series and 5000-series were purchased from Fisher Scientific (Schwerte, Germany). Table 2 summarizes the various sizes of the PS beads, along with the manufacturer’s specification. Particle sizes were confirmed by dynamic light scattering (DLS) (Malvern Zetasizer, Malvern, UK) as described elsewhere [9].

Table 2 Polystyrene particle size standards provided by Thermo Fisher and confirmed by control measurements using dynamic light scattering (DLS)

Method: Nanoparticle Tracking Analysis (NTA)

A NanoSight NS300 (Malvern Instruments, Malvern, UK) equipped with a high-sensitivity scientific CMOS camera and a 405 nm laser (violet) was used for NTA measurements. The NS300 features two optional neutral density (ND) filters that reduce the light intensity to either 10 % or 1 %. This option is useful when the scattering intensity is too high to obtain a reliable measurement result. In addition to measuring particle sizes ranging from 10 to 2000 nm [32], the NTA device can also determine the particle concentration. The manufacturer specifies an optimal particle concentration range of 10⁷ to 10⁹ total particles/mL [32], with a working range covering 10⁶ to 10⁹ total particles/mL. The “concentration upgrade feature” provided by the manufacturer was used to increase the reproducibility of concentration measurements, thereby minimizing the user dependency of the results [41].

Since the particle motion is tracked in aqueous media, the hydrodynamic diameter, which is derived from the diffusion coefficient, is larger than the actual sample size measured in the dry state. To reduce this effect, a 10 mM sodium chloride (NaCl) solution was employed as dilution medium [33, 35]. The dilution medium was filtered three times through a 0.2 µm polyvinylidene fluoride (PVDF) filter prior to use and measured as a negative control sample in all experiments to ensure contamination-free dilution and particle-free control measurements. The dilution factor required to reach the particle concentration working range was assessed using the manufacturer’s data for particle size (confirmed by DLS), solid fraction of the particle suspension (1 %), and particle density (1.05 g·mL⁻¹). This allowed the particle concentration for each size class of PS beads to be calculated.
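The calculation described above, converting the solid fraction and particle density of the bead stock into a number concentration, can be sketched as follows. This is an illustrative reconstruction, not the authors' actual script; the function name and unit choices are ours.

```python
import math


def stock_concentration(d_m, solid_fraction=0.01, rho=1.05e6):
    """Number concentration (particles/mL) of a PS bead stock suspension.

    d_m: bead diameter in metres; solid_fraction: grams of solids per mL
    (1 % w/v as stated by the manufacturer); rho: particle density in g/m^3
    (1.05 g/mL = 1.05e6 g/m^3).
    """
    volume = math.pi / 6.0 * d_m ** 3  # volume of one sphere, m^3
    mass = rho * volume                # mass of one particle, g
    return solid_fraction / mass       # particles per mL of stock
```

For 600 nm beads this gives a stock of roughly 8 × 10¹⁰ particles/mL, so a dilution factor on the order of a few hundred brings the sample into the working range.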

Before each measurement, the flow cell was rinsed with several milliliters of dilution medium to avoid cross-contamination. A sample volume of 1 mL was placed in a silicone oil-free 1 mL syringe (Plastipak BD) and inserted into the NanoSight Syringe Pump (Malvern Instruments, Malvern, UK). Sample injection was controlled by the NTA software 3.20 (Malvern Instruments, Malvern, UK). The software’s scripting function enabled multiple automatic video captures of a single sample with different instrument parameter settings, such as temperature, capture, or analysis settings.

During the capture step of the NTA measurement, the “slider shutter” and “gain” settings were set automatically by the software. The CL and camera focus were adjusted manually, depending on the respective experiment. During the analysis step, the parameters “blur”, “maximum jump distance” (which is based on the “minimum expected particle size”), and “minimum track length” were set automatically, while only the DT was adjusted manually. For monomodal samples, the “finite track length adjustment” (FTLA) method of the NTA software was used to optimize the results. Small particles in particular diffuse in and out of the camera focus plane and are thus only visible for a limited time, which results in a short observed trajectory covering only a small number of video frames. This leads to inaccuracies and a broadening of the size distribution. The FTLA algorithm compensates for this effect and renders a narrower distribution of the raw data [29]. However, this algorithm is not suitable for polydisperse systems, which are commonly found in protein samples [29, 42]. Therefore, the smoothed data fitting mode was used for polymodal samples [42].

Based on Zhou et al. (2015) [43] and our prior experience with this instrument, we established the following criteria for reliable measurements. We aimed for at least 100 valid tracks for the PS bead samples to obtain statistically relevant results. Furthermore, we monitored the live results view, which displays the size distribution during the ongoing analysis of the video. We considered the results valid when the size distribution did not undergo major changes during the last third of the video analysis. For reproducible particle sizing, corresponding size peaks of triplicate measurements should not deviate by more than 5 % from each other. Unless otherwise noted, all results discussed in this study fulfill these criteria. Particle concentration ratios of multimodal samples were calculated by peak integration using OriginPro 2019.
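The track-count and peak-reproducibility criteria above can be expressed as a simple acceptance check. The sketch below is illustrative; the function name and the interpretation of the 5 % criterion as deviation from the triplicate mean are our assumptions.

```python
def measurement_valid(valid_tracks, peak_sizes_nm, min_tracks=100, max_dev=0.05):
    """Acceptance check for a triplicate NTA run following the criteria in
    the text: every replicate needs at least `min_tracks` valid tracks, and
    the replicate size peaks may not deviate by more than `max_dev` (5 %).
    Here the deviation is measured against the triplicate mean (assumption).
    """
    if any(n < min_tracks for n in valid_tracks):
        return False
    mean_peak = sum(peak_sizes_nm) / len(peak_sizes_nm)
    return all(abs(p - mean_peak) / mean_peak <= max_dev for p in peak_sizes_nm)
```

A run with 110 – 130 tracks per replicate and peaks within a few nanometres of each other passes; too few tracks or a 100 nm outlier peak fails the check.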

Results and Discussion

Impact of Particle Concentration on Sizing and Counting

To assess the effect of particle concentration on sizing, a serial dilution experiment was performed within the optimal concentration range specified by the manufacturer. Starting with an initial concentration of 4 × 10⁸ particles/mL for a 600 nm PS bead sample, a twofold serial dilution in 10 mM NaCl was performed. Each dilution was measured in triplicates, with 215 s capturing time at 25 °C using CL08, DT20, and the 10 % ND filter. The long capturing time was chosen to guarantee statistically relevant data even at lower particle concentrations. The measurements from this experiment met the criterion of at least 100 valid tracks per run. As shown in Fig. 1A, an increase in particle concentration resulted in a decrease of the measured particle size for the 600 nm PS beads used. However, the measured deviation of up to 60 nm within our experiment’s concentration range (2.5 × 10⁷ to 4.0 × 10⁸ particles/mL) falls within a 10 % error range. Despite this, the deviation from the actual particle size follows a clear trend. The highest accuracy (0.68 % deviation from the nominal size of 600 nm) was observed at the center of the optimal concentration range, at 1 × 10⁸ particles/mL. As the concentration moves away from this center point, inaccuracies rise. Extrapolating the experimental data to the boundaries of the optimal concentration range would further amplify the error. Consequently, particles should be measured at the optimal concentration for particle sizing, which is approximately 10⁸ particles/mL for the 600 nm PS beads shown in Fig. 1A. Due to size-dependent scattering intensities and their subsequent effects on the measurements, this optimal concentration may vary for different particle sizes. Therefore, various dilutions should be evaluated for different particle size classes to compare the sizing accuracy.

Fig. 1
figure 1

A Impact of particle concentration on sizing and B dilution linearity of a 600 nm PS bead dilution series. A A twofold serial dilution of a 600 nm PS bead suspension was prepared in six steps and samples were analyzed by NTA for 215 s with the 405 nm laser using the neutral density filter 10 %, CL08, and DT20. The measured particle concentration is plotted semi-logarithmically against the measured particle size (green). B The experimental particle concentration is shown in red, and the theoretical particle concentration is shown in black. All experiments were performed in triplicates

Furthermore, we assessed the dilution linearity and reproducibility of particle concentration measurements in a dilution series experiment. The theoretical and measured concentrations were plotted against the dilution steps (Fig. 1B). The largest difference between theoretical and measured concentration was observed at the highest particle concentration, with a deviation of 49 % (8.4 × 10⁸ vs 4.3 × 10⁸ ± 3.7 × 10⁷ particles/mL). This discrepancy decreased with each dilution step until both particle concentrations nearly aligned, with a deviation of only 5 % after the fifth dilution step (2.6 × 10⁷ vs 2.5 × 10⁷ ± 6.6 × 10⁵ particles/mL). As illustrated in Fig. 1B, concentration measurements are more precise for particle concentrations of approximately 1 × 10⁸ particles/mL or lower, with deviations of less than 15 %.

Impact of Temperature on Particle Size Determination

The diffusion velocity is significantly influenced by the sample temperature as described by the Stokes-Einstein equation (see Eq. (2)). The NTA software compensates for temperature shifts in aqueous sample matrices by calculating the viscosity based on the temperature, thereby eliminating the temperature dependency of the sizing measurements. However, in preliminary tests, we noticed that particle sizes varied based on the temperature at which the experiment was conducted. To further evaluate this, we tested various PS particle standards at different temperatures and compared the determined particle sizes.

We analyzed PS particle sizes from 100 to 600 nm in a temperature range between 21 °C and 29 °C. All particle suspensions (Thermo Fisher 3100A, 5020A, 3300A, 3400A, 3600A, see Table 2) were diluted to a concentration of 4.0 × 10⁸ ± 1.7 × 10⁸ particles/mL in 10 mM NaCl to maintain an aqueous environment. Each sample was measured in triplicates using different CL and DT parameters optimized for the investigated particle size (see Fig. 2).

Fig. 2
figure 2

Impact of the temperature on particle size measurements. The percentage deviation of measured particle mode sizes from expected sizes is plotted for temperatures of 21 °C (yellow), 23 °C (blue), 25 °C (pink), 27 °C (green) and 29 °C (red) along with sizes of 100 nm A, 200 nm B, 300 nm C, 400 nm D, and 600 nm E, PS beads. The particles were diluted to a concentration of 4 × 10⁸ ± 1.7 × 10⁸ particles/mL and each size was measured using its own set of parameters (right box). Assuming aqueous suspensions, the temperature-dependent viscosity values for water were automatically applied by the NTA software. Triplicate 60 s recordings were made with a 30 s equilibration period prior to each measurement

Figure 2 illustrates the deviations observed for the five temperature settings. With increasing temperature, there is a trend towards larger measured particle sizes for all particle standards examined. A size difference of up to 8 % was observed between the minimum (21 °C) and maximum (29 °C) temperatures tested. A similar result was reported by Kestens et al. (2017), who also observed increasing measured particle sizes with rising temperature and defined an optimal corridor of 24 to 26 °C for their experiments [44]. This effect may result from the setup of the laser module, which includes the laser and its optical elements, a Peltier element, and a temperature probe beneath a glass plate (optical flat). Consequently, there is a minor spatial separation between the temperature probe and the actual sample. When measurements are conducted away from room temperature, this separation can cause a slight discrepancy between the probe’s read-out temperature and the true sample temperature, so that the software corrects the viscosity with an inaccurate temperature value. If the measurement temperature is set above room temperature (approximately 25 °C), the read-out temperature is slightly higher than the actual sample temperature. The software then applies a marginally lower viscosity than is actually present in the sample; the particles diffuse more slowly than this viscosity predicts and therefore appear larger (positive deviations in Fig. 2). Conversely, if the measurement temperature is set below ambient temperature, the read-out temperature is slightly lower than the actual sample temperature. The software then applies a marginally higher viscosity; the particles diffuse faster than predicted and appear smaller (negative deviations in Fig. 2).

In this experiment, we showed that the determined particle size depends on the measurement temperature, probably due to a difference between the sample and probe temperatures, which may also be influenced by the initial sample temperature and the laboratory conditions if the equilibration time is insufficient. In our experiments, an equilibration time of 30 s was used. A one-point validation with 100 nm PS beads was performed prior to the measurements. When conducting measurements at non-ambient temperature, it is advisable to calibrate the system at the same temperature as the intended measurement and to assess the impact of an increased equilibration time.
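The size bias discussed above can be reproduced numerically from Eq. (2). The sketch below is illustrative only: it assumes the probe/sample temperature-mismatch hypothesis described in the text and uses a common empirical approximation for the viscosity of water, which is not taken from this study.

```python
def water_viscosity(T_kelvin):
    """Viscosity of water in Pa*s via a common Vogel-type empirical fit
    (eta = 2.414e-5 * 10^(247.8 / (T - 140)), T in kelvin), accurate to a
    few percent over typical laboratory temperatures."""
    return 2.414e-5 * 10.0 ** (247.8 / (T_kelvin - 140.0))


def apparent_size_ratio(T_sample, T_probe):
    """Ratio of apparent to true hydrodynamic diameter when the software
    corrects Eq. (2) with the probe temperature T_probe although the
    particles actually diffuse at T_sample (both in kelvin)."""
    return (T_probe * water_viscosity(T_sample)) / (T_sample * water_viscosity(T_probe))
```

Under these assumptions, a probe read-out only 1 K above the true sample temperature near 28 – 29 °C inflates the apparent diameter by roughly 2 – 3 %, in the same direction as the positive deviations in Fig. 2.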

Impact of Neutral Density (ND) Filter on Particle Size

The photon energy of a laser beam is inversely proportional to its wavelength. In this study, we employed a violet laser module with a 405 nm wavelength, which is at the lower end of the visible light spectrum. The scattered light intensity (I) depends on the particle diameter (d) following the relationship I ∝ d⁶, as well as on the optical properties of the scattering particle and the surrounding medium. To optimize the measurements for particles in the lower nanometer size range, we selected the laser module with the shortest wavelength [45].
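The d⁶ dependence explains why larger particles dominate the scattering signal in mixtures. A one-line sketch, assuming the Rayleigh-regime scaling stated above (an approximation for the larger beads used in this study, which approach the laser wavelength):

```python
def relative_scattering(d_large, d_small):
    """Relative scattered intensity of two particles of the same material
    under the I proportional to d^6 scaling (strictly valid only for
    particles well below the laser wavelength; used here as a rough guide)."""
    return (d_large / d_small) ** 6
```

For example, under this scaling a 600 nm bead scatters on the order of 64 times more light than a 300 nm bead, which is consistent with the suppression of smaller populations observed in the bimodal measurements below.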

Investigation of Monomodal Systems

In this NTA setup, larger particles exhibit a strong scattering intensity, resulting in overexposed videos. To enable video evaluation, two parameters can be adjusted to reduce exposure. One option is to decrease the CL, which lowers the camera’s sensitivity. Alternatively, a neutral density (ND) filter can be introduced to reduce the incident scattered light intensity on the camera. In the subsequent experiment, we evaluated these two options for sizing 800 nm PS beads at a concentration of 2 × 108 particles/mL using the 405 nm laser.

The first measurement was captured without an ND filter at a CL of 02. For the measurements with an interposed ND filter, the CL was adjusted accordingly to achieve sufficient signal intensity (CL06 for ND = 10 % and CL11 for ND = 1 %). Measurements were conducted in triplicates for 90 s each. Similar video characteristics were observed across all parameter settings (see Fig. 3A): the NTA software identified particle centers along with small scattering effects surrounding them. Without an ND filter, the determined particle size was 36 % lower than expected (see Fig. 3B). The above-mentioned scattering effects interfered with the video analysis, resulting in a broadened peak and high standard errors (see Fig. 3B, orange curve). Utilizing ND filters mitigated this issue, resulting in narrower size distribution peaks and reduced sizing deviations (13 % for ND = 10 % and 8 % for ND = 1 %).

Fig. 3
figure 3

Influence of neutral density (ND) filter on particle sizing. A Video frames without ND filter (left), with ND = 10 % (middle), and ND = 1 % (right). B 800 nm PS beads were diluted to ~ 2 × 10⁸ particles/mL and captured in triplicates in 90 s videos using the 405 nm laser at 25 °C. The same sample was measured without ND filter (orange), and with ND filter at 10 % (blue), and 1 % (green) light transmittance. The CL was adjusted accordingly (see black box). The colored areas around the mean line represent the standard error calculated from the triplicate measurements

Hence, while the use of ND filters can enhance the precision and accuracy of NTA measurements, they can also significantly affect the results. Similar to parameters such as CL and DT previously analyzed by Gross et al. (2016), the application of these filters, particularly for analyzing unknown or polydisperse samples, necessitates an experienced user [26].

Investigation of Bimodal Systems

Unlike the previously tested monomodal systems, protein samples tend to be polydisperse with unknown particle concentration and size distribution. An ideal particle size characterization tool should be able to detect the entire size distribution, which may comprise multiple size populations over a broad range. The aim of this investigation was to elucidate the limits of the NS300 NTA system with defined polymodal samples and the impact of instrumental setting parameters on the results. First, we examined a bimodal system composed of 300 nm and 600 nm PS beads at defined concentration ratios (see Fig. 4, labeled spike 1 to spike 4) with various parameter settings: CLs of 05, 07, and 09; a DT of 20; and three captures of 60 s each. Both the monomodal 300 nm and 600 nm PS bead samples were first analyzed individually using the same NTA settings and dilution factors in order to calculate the theoretical concentration ratios of the mixtures. With CL05 and DT20, the system could not detect any 300 nm particles (see Fig. 4B), presumably due to insufficient scattering of these particles at CL05. With CL07 and CL09, the 300 nm particles were detected, but with a significant difference in particle concentration: CL09 resulted in approximately three times more detected scattering particles than CL07. For the 600 nm bead sample, the CL had a smaller impact on the measurements, with less than two times higher particle concentrations for CL09 compared to CL07 and CL05. This can be attributed to the scattering intensity scaling with the sixth power of the particle diameter. Furthermore, the monomodal 600 nm sample (see Fig. 4A) exhibits a scattering interference, leading to falsely detected particles in the 20 to 400 nm range. Therefore, this size range was excluded from the subsequent calculation of the theoretical bimodal mixture ratios.

Fig. 4
figure 4

Effect of the camera level on the size distribution of bimodal PS samples. The particle size distributions of A monomodal 600 nm PS beads, B monomodal 300 nm PS beads, and C to F mixtures (indicated as “spike”) with different PS bead ratios are shown for CL05 (blue), CL07 (orange), and CL09 (green), respectively. The 300 nm particle concentration decreased fivefold in between each spike from C to F, while the 600 nm PS particle concentration remained constant. The 600/300 nm particle concentration ratios were calculated by peak integration or labelled as “n.a.” (not applicable) when not possible

The particle concentration ratios of the bimodal samples were calculated by peak integration using OriginPro 2019. However, for spike 3 (see Fig. 4E, only CL07) and spike 4 (see Fig. 4F), a reliable peak separation, and thus peak integration, was not possible. Overall, increased accuracy was observed for CL09 compared to CL07: CL07 overestimated the 600 nm particle concentration by approximately a factor of 4, while CL09 only overestimated it by a factor of around 1.5 (spike 1 and spike 2) or even underestimated it by roughly a factor of 0.6 (spike 3). With increasing difference in the concentration ratios (spike 3 and spike 4), the scattering interference of the 600 nm particles renders the signal of the 300 nm particles so noisy that an accurate calculation of the concentration ratio is not possible.
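The authors performed the peak integration in OriginPro; an equivalent calculation can be sketched in Python as follows. The fixed split point between the two populations is our assumption for illustration — in practice the integration bounds would be chosen per peak.

```python
import numpy as np


def _trapz(y, x):
    """Trapezoidal integration, written out explicitly to stay
    independent of version-specific numpy function names."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))


def peak_ratio(sizes_nm, conc, split_nm=450.0):
    """Concentration ratio (large/small) of two populations in a bimodal
    size distribution, obtained by integrating the number distribution on
    either side of a fixed split point (a simplification of the peak
    integration performed in OriginPro)."""
    sizes = np.asarray(sizes_nm, dtype=float)
    conc = np.asarray(conc, dtype=float)
    lo = sizes <= split_nm
    hi = sizes > split_nm
    small = _trapz(conc[lo], sizes[lo])
    large = _trapz(conc[hi], sizes[hi])
    return large / small
```

Applied to a synthetic distribution with two well-separated Gaussian peaks at 300 nm and 600 nm, the function recovers the known area ratio; for overlapping or noisy peaks such as spike 3 and spike 4, any fixed split point fails, mirroring the limitation described above.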

This experiment highlights the challenges of using NTA to analyze samples containing multiple particle sizes, each of which requires different settings. As demonstrated in Fig. 4, a particle size population can be suppressed entirely, depending on the experimental settings. For unknown samples that may contain various particle size populations, it is therefore very difficult to generate trustworthy results. Furthermore, we observed a peak shift towards smaller sizes (up to 100 nm) when increasing the CL. Compared to tracking with DT30 (data not shown), DT20 exhibited higher peak intensities in all measurements due to the higher number of tracked particles. Despite that, no significant difference in the determined concentration ratios was noticeable.

Investigation of Trimodal Systems

Next, we investigated a trimodal PS bead suspension and the impact of various system parameters, such as CL and DT, as well as analysis algorithms. We analyzed a trimodal mixture of 80 nm, 400 nm, and 600 nm PS beads with a particle concentration ratio of 9:2:1 and a total concentration of 2 × 10⁹ particles/mL. The suspension was captured at CL04, 06, and 09 with a video length of 215 s and analyzed with DTs ranging from 04 to 30. In addition, we used both the “smoothed” and the FTLA algorithm for raw data processing (see Fig. 5). In a previous study, a similar trimodal system (50 nm, 400 nm, and 600 nm PS beads at a 9:2:1 concentration ratio) was tested on a different NTA system, the NS500 instrument from Malvern [26]. To compare both studies, we used exactly the same NTA setting parameters as provided by the manufacturer’s software. Despite the different NTA system and major software changes (not further specified by the manufacturer), the size distribution obtained with the NS300 using the “smoothed” algorithm (see Fig. 5A) was comparable to the previous results of the NS500 [26].

Fig. 5
figure 5

Size distributions of a trimodal PS bead system with sizes of 80 nm, 400 nm, and 600 nm (indicated by the black arrows) in a concentration ratio of 9:2:1. Raw data analysis was performed using both a “smoothed” fit (A) and the FTLA algorithm (B), and each sample was measured with different parameters (see black boxes), a capture time of 215 s, and an equilibration time of 180 s at 20 °C

Using the NS300 NTA system and the smoothed algorithm, only one combination of parameters (CL06 and DT14) detected three distinct populations at the expected sizes: a main peak around 400 nm, a second peak around 80 nm, and a third population around 550 nm (see Fig. 5A, bold green line). However, choosing these settings would not be obvious without knowing what to expect, which is impossible for unknown samples. For instance, three other sets of parameters (CL04 and DT04, CL04 and DT08, CL09 and DT30) resulted in small 80 nm peaks and main peaks at around 400 nm, but only shoulders of the main peaks around 550 nm. Interestingly, most settings exhibited interfering noise in the size range between 100 and 250 nm, which makes it difficult to detect the 80 nm particle population. Without prior knowledge of the defined trimodal system, the 80 nm peaks would have been neglected for most of the tested settings due to the low scattering of small particles.

Using the FTLA algorithm (see Fig. 5B) narrows the peaks compared to the “smoothed” fitting algorithm, resulting in a main particle population at around 400 nm and a distinct second population around 580 nm, where the “smoothed” fitting algorithm only showed a shoulder. Although these two populations are baseline-separated in all distribution profiles, none of the evaluated settings could detect the expected trimodal test system. Additionally, the detected noise introduced peak shifts and led to the appearance of additional “ghost” peaks in the size range of 100 to 250 nm. Consequently, the algorithm is beneficial for narrowing and separating obvious populations in polymodal systems. However, noisy results or polydisperse samples may lead to incorrect size populations, making the FTLA algorithm unsuitable for samples with unknown composition, as confirmed by previous studies [29, 42]. Alternative software, such as ImageJ with the NanoTrackJ plugin, was tested only for monomodal samples and did not show any improvement in data analysis compared to the manufacturer-provided NTA software [46]. For evaluating polydisperse systems like protein samples, smoothed raw data fitting is recommended.

In our second set of experiments with trimodal samples, the 600 nm PS particles were replaced by 800 nm PS particles. The CL was adjusted accordingly, while the other measurement settings remained unchanged (see Fig. 6). In contrast to the previous trimodal sample composed of 80/400/600 nm particles, none of the tested NTA settings could detect all three populations of the 80/400/800 nm trimodal sample. Instead, two of the settings (CL02 and DT10, CL02 and DT15) showed a single broad peak at around 450 nm. Most settings revealed a main peak around 300 to 400 nm with a shoulder at around 600 nm, indicating a second population, and a third peak population around 100 nm accompanied by interfering noise.

Fig. 6
figure 6

Size distribution of a trimodal PS bead system with sizes of 80 nm, 400 nm, and 800 nm (indicated by the black arrows) at a particle concentration ratio of 9:2:1. The same sample was measured with different parameters (black box), a capture time of 215 s, and an equilibration time of 180 s at 20 °C. The raw data analysis was processed with a smoothed fit

Detecting small particles with NTA in the presence of larger particles in a polymodal sample is highly challenging, even at significantly higher concentrations of the small particles. By employing the FTLA algorithm with known particle sizes and concentrations, it is possible to manipulate the results in such a way as to obtain baseline-separated peaks with narrow size distributions. But as mentioned before, this algorithm is not suitable for analyzing unknown samples.

Conclusion

In this study, we evaluated the performance of the NS300 NTA instrument for the analysis of monomodal and polymodal particle samples. We evaluated the influence of (i) particle concentration, (ii) measurement temperature, and (iii) various instrument settings such as camera level (CL), detection threshold (DT), neutral density filter (ND filter), and various raw data analysis algorithms. Polystyrene (PS) beads of known size and particle concentration were used.

Our analysis of monomodal samples revealed concentration-dependent sizing results for PS particle standards, with a trend towards increasing measured sizes at decreasing concentrations. For accurate sizing of 600 nm particles, concentrations of approximately 1 × 10⁸ particles/mL are suitable. We also found that an increased measurement temperature increases the measured particle size. Therefore, all measurements should be performed at a consistent temperature. When analyzing larger particles, such as 800 nm PS beads, the use of the ND filter can improve precision, reducing the deviation from the nominal size to 8 % compared to 36 % without the filter. However, this requires extensive user experience with NTA measurements, as it can massively influence the results, making it difficult to apply to samples of unknown composition (see Fig. 3). In summary, size class–dependent parameters are required for optimal monomodal PS bead measurements.

As a result, the system is of limited use for polymodal or aggregated samples, such as formulations containing protein particles. Due to the varying sensitivity of the particle size classes at different system settings, finding a suitable setup is challenging. Our evaluation of a 300/600 nm bimodal particle system showed that a CL of 09 gave more accurate particle concentration ratios, deviating from the theoretical ratios by factors of 0.6 – 1.5. Lower CLs underestimated the 300 nm PS particles, either deviating by a factor of about 4 or failing to detect them at all. For the 80/400/600 nm trimodal system at a ratio of 9:2:1, only a single combination of system settings (CL06 and DT14) was able to detect three peaks, but the resulting concentration of smaller particles was largely underestimated. Applying the FTLA algorithm instead of the “smoothed” raw data fitting introduced more false-positive noise peaks; therefore, this algorithm should not be applied to polymodal samples. When the size difference within the trimodal system was further increased (80/400/800 nm at a ratio of 9:2:1), no parameter setting was able to detect all three particle size classes. In conclusion, the application of NTA for the analysis of polydisperse samples is challenging and highly dependent on the measurement settings. Therefore, operators should be aware of trends in the recorded data caused by these settings.

It is therefore helpful to always present the parameters used together with the NTA results to ensure transparency and reproducibility of the experimental data. An experienced operator is required to perform NTA measurements, and a critical evaluation of the measured size distributions and particle concentrations, taking into account the system settings, is necessary to interpret the results.

Technological advances can help to improve data quality, simplify analysis, and reduce the dependence of results on user and measurement settings. Recently, systems using multiple wavelengths simultaneously have become available, improving the analysis of multimodal particle samples [47]. In addition, a successor to the NS300, the NanoSight Pro, entered the market with some upgrades such as interchangeable lasers, fluorescence detection, and improved analysis software [48]. It remains to be seen whether these advances can overcome the known limitations of NTA and make biopharmaceutical particle analysis more reliable.