Introduction

The approach of drug delivery using nanoparticulate systems has shown tremendous potential for the efficient delivery of various synthetic and naturally derived therapeutic moieties to the target site of action in the body [1]. Drug products with nanoparticle formulations are developed with the intention of (a) minimizing undesired consequences and side effects of pharmaceutical compounds by directing them to the precise site of action, (b) lowering the dose by altering pharmacokinetics, (c) reducing dosing frequency by controlling drug release, and (d) improving shelf life by enhancing stability. This ultimately leads to improved safety, effectiveness, and patient adherence, a longer drug shelf life, and potentially lower healthcare costs [2]. Characterization of these formulations with orthogonal techniques, especially confirmation of the nanostructure, stability evaluation, and physiological mobility, remains a crucial aspect of drug delivery. Demonstrating the safety, efficacy, and batch-to-batch reproducibility of nanomedicines is a recognized challenge in clinical studies, and the same factors are taken into account when developing generic nanomaterials (nano-similars). In this review, we point out the importance of studying the differences in physicochemical features that biological systems can recognize, and further discuss how these differences can lead to different in vivo outcomes for nanoformulations and possible effects on their behavior and safety [3]. Micelles, liposomes, dendrimers, nanoemulsions, and polymeric nanoparticles are a few of the nano-formulations that are becoming increasingly popular in the pharmaceutical sector for improved drug formulation and delivery. Nano-formulations are usually in the size range of 10 to 100 nm, with the drug dispersed in, enclosed by, or bonded to the drug carrier [4,5,6,7,8,9]. The water solubility of drugs can be enhanced, and the drug release rate controlled, using polymeric nanoparticles such as nano-spheres and nano-capsules [10,11,12,13,14]. Micelles are 10–100 nm amphiphilic assemblies with a core–shell structure; the inner hydrophobic core and outer hydrophilic corona permit extended drug circulation in biological systems [15,16,17,18,19]. Liposomes are highly stable lipid bilayer vesicles, made of either synthetic or natural phospholipids, that can enclose pharmaceutical drugs in an aqueous phase within a spherical closed vesicle [20]. They are commonly employed as carriers for vaccines, steroids, and genetic materials [21]. Nanoemulsions, with droplet sizes in the range of 20–200 nm, are very resistant to sedimentation and also feature size-dependent properties such as optical transparency and enhanced shelf stability against gravity-driven creaming [22,23,24].

With growing advancements in pharmaceutical research, nanocarriers are being used as efficient colloidal drug delivery systems with particle sizes in the submicron range, typically < 500 nm. These systems can be characterized on the basis of size and morphology using scanning electron microscopy (SEM), transmission electron microscopy (TEM), and atomic force microscopy (AFM), whereas dynamic light scattering (DLS) is employed for determination of particle size and size distribution [25]. The properties and potential uses of the NPs may be affected by their size, size distribution, and the organic ligands present on the particle surfaces. Following synthesis, the crystal structure and molecular composition of the NPs are also thoroughly studied. Nanoparticle toxicity can result from particle size, surface area, shape, composition, and surface properties [2]. Variability and inaccurate interpretation of the association between nanoparticle properties and therapeutic action are among the primary causes of unsatisfactory development outcomes. Therefore, the optimum utilization of advanced modern characterization techniques, together with a sufficient understanding of their principles and operation, is critical for the efficient development and characterization of nanoformulations. There are typically two elementary approaches to determining particle size. The first involves direct examination of the particles with microscopic techniques that measure the multidimensional size of particle images. The second, used by the techniques described in this article, exploits the link between the size and behavior of a particle. This frequently relies on the notion of an equivalent spherical size: a size-dependent particle property is related to the diameter of a sphere that would exhibit the same property, and such equivalent spherical diameters are used to characterize asymmetrical particles. The DLS technique is an example of this approach: it measures the dynamic fluctuation of the intensity of scattered light, which serves as the basis for calculating the average particle size [26]. Moreover, by comparing the results from different instruments, it is possible to obtain additional information about the particle system. This review will emphasize sophisticated techniques for the characterization of nanoparticle-based formulations during pharmaceutical research (Table 1).

Table 1 Summary of current advanced techniques for nanoparticle size characterization presented in this article

Challenges in measuring size of nanoparticles in formulation

Drugs with poor aqueous solubility usually show unpredictable bioavailability and poor oral absorption, thereby affecting the route of delivery of drug products. Thus, higher doses are administered to compensate for the slow dissolution rate, which is a risky approach due to potential undesirable side effects. Producing pure pharmaceutical particles in smaller sizes can help with this problem, since smaller particles have a higher surface-to-volume ratio than the raw material, which makes them dissolve more quickly and gives them higher saturation solubility [116].

Given the effect of particle size on efficacy and stability, it is of utmost importance to accurately measure the size of nano-formulations. Differences in the properties of nanoparticles in terms of size, shape, surface charge, and surface hydrophobicity can lead to varied in vivo pharmacokinetic profiles, biodistributions, and therapeutic effectiveness. These considerations apply if the nanosystems are not meant to dissolve right away. If the particles are taken up by cells as solid particles, particle shape may affect the extent of uptake, intracellular distribution, cytotoxicity, and the kinetics and mechanisms of cellular absorption. For example, it was reported that the therapeutic effects of anticancer drugs were enhanced when non-spherical particles were utilized [117]. The characteristics of the nanoparticles induce different membrane folding strengths during endocytosis, which impacts cell uptake.

Apart from this, nanoparticles are inherently unstable and often tend to aggregate; the smaller the particle size, the greater the aggregation tendency. Similarly, more polydisperse samples, i.e., those with greater size variation, are less stable. For polydisperse samples, settling of the larger particles can produce inaccurate particle size information, or alternatively, the smallest particles may dissolve during sample dilution. It is crucial to take into account the goal of the measurement before selecting a suitable technique, and sampling should be representative of the whole batch. Generally, size measurement techniques are classified as imaging and non-imaging techniques. Non-imaging techniques include static light scattering (SLS) and DLS, whereas imaging techniques include SEM, TEM, and AFM [54].

Electron microscopy and atomic force microscopy are direct methods used for nano-pharmaceutical formulations, and both are appropriate for the nanometer size range. However, the number of particles required to obtain statistically significant results rises from a few hundred for narrow size distributions to around 10⁶ for wide ones. These methods also require cumbersome sample preparation, such as immobilization and freezing, which may alter the sample characteristics. Hence, complementary quality evaluations are necessary when the aim of the experiment is to decide whether a batch or lot can be released. In automated image analysis, particle identification and the measured particle size are affected by parameter settings such as the signal threshold and pixel counting [27].

Atomic force microscopy (AFM)

AFM belongs to the family of scanning probe microscopes (SPM), whose first member, the scanning tunneling microscope (STM), was introduced by Binnig and Rohrer in 1982. STM uses the tunneling current across a narrow gap between a probe and an electrically conducting sample. As the probe is scanned across the sample, electrons tunnel across the forbidden energy gap, thereby yielding a topographical image of the sample [28]. Only conducting specimens are suitable for this technique, and detailed study of thick samples of biological origin and of insulating materials such as polymers is not possible. To overcome this limitation, AFM was introduced in 1986; it measures the forces exerted on a cantilever to which the tip is mounted. The technique has since advanced into a wide array of derivative processes and new imaging methods [29].

AFM uses an atomic scale probe tip for scanning and provides ultra-high resolution of particle size measurement and morphological features such as shape [30, 31]. The modes of operation of an AFM are based on the extent and nature of the probe-surface interaction [32]. In contact mode, when the probe (tip + cantilever) comes in contact with the sample, a repulsive force (the set-point) causes a small deflection of the cantilever that is subsequently detected by the photodetector. Tapping Mode® (intermittent contact mode) is a trademark of Bruker, where the cantilever’s oscillation takes place at its resonant frequency. This prevents the exertion of possibly damaging lateral effects on the sample. The measurement is carried out when the tip comes in contact with the specimen, and for the most part, the tip is safely retracted while the scanner moves the sample. This mode is suitable for the imaging of biological samples such as DNA that are delicate in nature, and analysis can also be carried out in physiological fluids. The non-contact mode (NC-AFM) was developed particularly for use in a vacuum with pressures below 1 × 10−10 mbar, where the tip is typically kept at a distance of about 20–150 nm above the surface of the sample. This technology uses the cantilever deflection as a drive signal through a self-driven oscillator and ensures that the cantilever instantly responds to changes in the resonance frequency [33].

Visualization of topographical features of biological structures at nanometer to angstrom resolution is possible, the technique being useful both in air and in liquid [27, 34]. Fibrils and oligomers suspended in buffer solutions can be studied without causing any physiological damage, while at the same time monitoring temporal changes in their structure, e.g., growth of single oligomers. For direct methods such as AFM, the number of particles needed to obtain a statistically valid result ranges from 10² for narrow size distributions to 10⁷ for wide size distributions. High-resolution AFM has been extensively used for the imaging of hydrated proteins and the characterization of the lowest molecular weight oligomers. While low molecular weight oligomers are relatively compact and have significant order, high molecular weight oligomers vary in lateral dimensions and heights in the ranges of 15–25 nm and 2–8 nm, respectively [35]. Direct visualization of single molecules allows identification of the oligomeric state of a protein and its morphological properties, and a protein quantity as low as 1 ng is sufficient for AFM analysis. Visualization of recombinant hemagglutinins obtained from different strains of influenza viruses was performed at neutral and acidic pH, revealing pH-dependent heterogeneity of particle size. Recombinant hemagglutinins present as large oligomers may have impaired functional activity. High-resolution AFM provides distribution histograms of height and volume for multiple such samples that assist in the estimation of oligomer size. Additionally, the oligomeric state of hemagglutinins relates to their functional activity; the hemagglutinins with the lowest oligomer sizes were found to induce hemagglutination of human red blood cells [36].

The transient presence of oligomers usually complicates their structural analysis. Using time-lapse HS-AFM, imaging and characterization of isolated photochemically cross-linked amyloid β-protein trimers, pentamers, and heptamers were carried out, and their dynamics in solution were studied at nanometer resolution on a millisecond timescale. Several hundred time-lapse imaging frames were analyzed to characterize the dynamics of oligomers over a period of 80 s, revealing that Aβ42 oligomers are inherently dynamic systems capable of moving over nanometer-scale distances [37].

AFM is used to monitor the in vitro assembly mechanism and structures of amyloid fibrils. It is important that the specimen be attached to a solid support (such as mica) for convenient measurement. Usually, the sample is prepared in buffer and adsorbed onto a suitable substrate. The peptide is initially dissolved in 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP) to reduce the extent of aggregation and improve the detection of early oligomeric intermediates. HFIP stock solutions are first diluted with 10 mM Tris–HCl at pH 7.4 and then adsorbed onto mica substrates, as HFIP increases the rate of fibril assembly. Oligomeric structures, usually formed after 60 s, are monitored using single-particle analysis software. This set of experiments revealed a two-step process in the development of fibrils from oligomers, thereby enabling the study of the morphology of the various forms involved in the process [34]. In a similar study [38], HS-AFM was employed to visualize the yeast prion determinant domain (Sup35NM) and the formation of its oligomers and fibrils at sub-second and sub-molecular resolution. Initially, monomers were observed in the images, followed by oligomers in the size range of 1.7–4 nm. This analysis provided direct proof of the relationship between oligomers and fibrils and the eventual structural conversion to cross-β structures. Real-time visualization of fibril elongation also enables researchers to elucidate the mechanism of fibril growth in much more detail.

Because of their large size and complexity, determining the oligomeric state of membrane proteins in a lipid bilayer is generally challenging. The oligomeric states of microbial rhodopsins have been studied by X-ray crystallography, gel filtration chromatography, cryo-electron microscopy, and circular dichroism (CD) spectroscopy. However, the determined stoichiometries of the oligomeric structures have sometimes been conflicting, owing to differences in analytical techniques and experimental conditions [39,40,41,42,43,44,45,46,47,48,49]. HS-AFM analysis revealed that eubacterial rhodopsins exist in the lipid membrane predominantly as pentamers, whereas archaeal rhodopsins exist as trimers [50].

Morphologically diverse species of oligomers have been studied by cryo-electron microscopy (cryo-EM), but the heterogeneity of oligomers during protein aggregation limits the application of EM. Hence, AFM comes into the picture to tackle this issue without significant sacrifice of the sensitivity and spatial resolution offered by cryo-EM. Structural characterization of individual α-synuclein oligomers was carried out by combining AFM with infrared spectroscopy (AFM-IR), where the process of α-synuclein aggregation was closely observed as it transitioned from the oligomeric state to filaments, protofibrils, and fibrils [51]. No sample preparation procedure was needed before measurement. Additionally, a relationship could be established between structure and morphology at the single-oligomer level, and high structural heterogeneity within α-synuclein oligomers was identified. IR was employed to analyze and quantify the distribution of parallel and antiparallel β-sheet structures [51].

The interaction of toxic misfolded protein oligomers with the cellular membrane has been found to underlie toxicity in the process of amyloid formation. Oligomeric forms with different structural features and cytotoxicities interact differently with ordered and disordered lipid phases, allowing identification of the components responsible for toxicity and of the role of the membrane in mediating oligomer toxicity. AFM can monitor such interactions at the sub-microscopic level with unparalleled resolution, discriminating between the ordered and disordered lipid phases in the bilayer on the basis of image height distributions. The toxicity of oligomers of the N-terminal domain of the E. coli HypF protein (HypF-N) has been found to depend on both the oligomer structure and the physicochemical properties of the lipid membrane [52]. Toxicity mechanisms that may link oligomer formation of human islet amyloid polypeptide (hIAPP) to exacerbation of cytotoxicity, potentially contributing to the development of type-2 diabetes, were uncovered using AFM and TEM studies [53].

Transmission electron microscopy (TEM)

In TEM analysis, electrons penetrate a thin specimen and the transmitted beam is imaged using appropriate lenses. Electron guns are used to generate the electron beam; the characteristics of different gun materials are given in Table 2. TEM can produce images of a thin sample at 10³–10⁶ times magnification. Electron diffraction patterns can also be generated, which are helpful for gaining knowledge of the characteristics of crystalline specimens. To facilitate electron transmission, samples to be studied with TEM must be ultra-thin (below 100 nm), necessitating significantly longer sample preparation than AFM, where samples in the millimeter range provide satisfactory measurements [55]. The sample may be mounted onto support grids and fixed with a negative stain, such as phosphotungstic acid or uranyl acetate, or embedded in plastic. In the case of cryo-TEM, the sample is exposed to liquid nitrogen after inclusion in vitreous ice [56]. Bio-polymeric samples must be repeatedly stained with heavy metals to achieve adequate contrast for identification [30]. TEM can be used to obtain particle size, shape, and morphological information [54].

Table 2 Characteristics of material to be considered when selecting the type of electron gun to be used during TEM analysis

Liquid–liquid phase separation, or coacervation, of intrinsically disordered proteins is involved in intra- and extracellular biological processes, and the assembly of dense coacervate microdroplets from concentrated oligomers of collapsed proteins has been studied thoroughly using cryo-TEM. Owing to the stringent sample preparation required, it is not possible to visualize dynamic phase transitions by cryo-TEM. Previously, the application of TEM could not be extended to the analysis of proteins, as these are readily degraded on exposure to heat, which may lead to sample damage. Contrasting agents are usually used for better imaging, but they may convert proteins into a non-native state, thereby hindering dynamic observations. Newer TEM systems are equipped with liquid cells and direct electron detectors that allow observation of crystal nucleation, polymerization-induced micelle formation, and aggregation processes [57]. In another study, structures of oligomers in water were studied using single-molecule atomic-resolution time-resolved electron microscopy (SMART-EM) in combination with DLS. Dimeric complexes of daptomycin in water were observed by TEM for as long as 100 s, with only small structural variations. The dynamic motions of individual oligomers were recorded by SMART-EM, giving an estimate of the size distribution of the oligomers along their longitudinal direction together with their radius of gyration. Tetramers were detected as the largest oligomers forming in a homogeneous solution [58, 59]. The application of TEM can be extended to the detection of 5–50-nm oligomers that are prone to cellular uptake and propagation and can induce aggregation. In one such study [60], full-length Tau oligomers were identified as heterogeneous globular assemblies, while elongated fibrils were found to be absent. Accumulation of granular Tau oligomers is more likely to occur in the Alzheimer’s disease brain; hence, it is crucial to monitor these components. The structure was observed with HR-TEM, where the oligomers and Tau fibrils were spotted onto carbon-coated copper grids and analyzed.

The use of TEM along with ion mobility-mass spectrometry coupled with infrared spectroscopy (IR) enables researchers to study the kinetics and oligomeric structural evolution that occur during fibril formation [61, 62]. Negative staining used for TEM sample preparation of a hexapeptide derived from hIAPP revealed the formation of amyloid fibrils after incubation for 2 days in ammonium acetate buffer, which allowed the tendency of early oligomers to form β-sheets to be studied. TEM operated at 100–120 kV has been used for morphological characterization of aggregates [63]. Apart from this, oligomeric steroidal formulations have been evaluated using TEM to understand the internal construction of nanocarriers [64]. Nanocarriers prepared using a steroid dimer exhibited a spherical shape under the transmission electron microscope, whereas clusters of rods and fibers were observed for dapsone-loaded bile acid-enriched nanocarriers.

Scanning transmission electron microscopy (STEM) combines both SEM and TEM, where a transmission image is obtained using a scanning technique. A TEM system can be modified into STEM by attaching a system that scans a focused beam across the sample, whereas a SEM system can be operated in STEM mode by attaching a transmission stage and a detector. In recent years, dedicated STEM systems have been used as multi-signal analytical microscopes with high spatial resolution. Although these systems can image heavy single atoms, their use is limited as they are relatively expensive [65]. Detectors used in STEM include bright-field, annular dark-field (ADF), and high-angle annular dark-field detectors. Differences between STEM and TEM are summarized in Table 3. The image produced by STEM is based on the number of electrons elastically scattered by the constituent atoms of the macromolecule.

Table 3 Comparison of STEM and TEM techniques with respect to their working principle, type of image production, resolution capacity, and applications

Core applications of STEM in the biological field have been mass determination and the characterization of proteins and macromolecular assemblies using the ADF detector. Mass measurement is performed offline using custom software, PCMass. ADF imaging is also useful for visualizing ultrasmall, functionalized heavy-atom nanoparticles used to label specific amino acid sequences in protein assemblies. Elemental mapping in isolated macromolecular assemblies is performed by acquiring electron energy loss spectra (EELS). A recent application of STEM has been electron tomography, which involves the use of an on-axis bright-field detector and enables three-dimensional reconstruction of micrometer-thick sections of cells [66]. STEM has been frequently used to delineate assembly mechanisms and to investigate amyloid assembly pathways, fibril polymorphisms, and the structural properties of amyloid aggregates. Notably, STEM is the only method able to directly measure the mass-per-length (MPL) of individual filaments.

STEM analysis requires only a limited amount of material (about 50 µl at 50 to 300 µg/ml) and also provides excellent statistics, as large numbers of particles can be sorted by shape during mass measurement [67]. Image intensity in STEM can be integrated over an isolated particle or a known length of filament and converted to a molecular weight, since the intensity is directly proportional to the local mass per unit area in that region of the specimen. A resolution of 2 to 4 nm can be achieved in the projected mass distribution. Negative staining with uranyl acetate or methylamine vanadate gives better visualization in the case of freeze-dried samples. A general sample preparation procedure by the wet film method for STEM mass measurement is given in Fig. 1.

Fig. 1 General sample preparation technique by the wet film method for STEM mass measurement

Proteins with molecular weights below 30 kDa are too small to give acceptable mass measurements unless they are dense and compact. tRNA, nucleic acids, and double-stranded DNA are visible in STEM, while bacteria, cells, and organelles yield minimal information as they are opaque. Another aspect to consider is the physical purity of the sample, since everything present, including contaminants, is observed in STEM analysis. Hence, it is preferable to perform a final purification using an appropriate sizing column to remove impurities or particulates prior to STEM analysis. Additionally, mass mapping is efficient when there is a clean background between isolated particles; the optimum specimen concentration is the one that gives a good distribution of particles on the grid, which is also convenient for data analysis [67].
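
Since the integrated ADF intensity is proportional to the local mass per unit area, a filament's mass-per-length can be obtained by comparing its background-subtracted, length-normalized intensity with that of an internal calibration standard of known MPL (tobacco mosaic virus, at roughly 131 kDa/nm, is commonly used). The minimal sketch below illustrates this calibration-based reduction with made-up numbers; it is an assumption-laden illustration, not the PCMass implementation.

```python
# Minimal sketch of a calibration-based STEM mass-per-length estimate: the
# background-subtracted ADF intensity integrated over a known filament length
# is compared with the same quantity for an internal standard of known
# mass-per-length (tobacco mosaic virus, ~131 kDa/nm, is commonly used).
# Illustrative numbers only; this is not the PCMass implementation.
def mass_per_length(intensity_sum, background_per_px, n_pixels, length_nm,
                    cal_intensity_per_nm, cal_mpl_kda_per_nm=131.0):
    """Return mass-per-length in kDa/nm for one filament segment."""
    net = intensity_sum - background_per_px * n_pixels   # background subtraction
    net_per_nm = net / length_nm                         # length normalization
    return cal_mpl_kda_per_nm * net_per_nm / cal_intensity_per_nm

# Hypothetical numbers for a 60 nm segment of an amyloid fibril
mpl = mass_per_length(intensity_sum=9.1e4, background_per_px=12.0,
                      n_pixels=3200, length_nm=60.0,
                      cal_intensity_per_nm=2.1e3)
print(f"mass per length ≈ {mpl:.0f} kDa/nm")
```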

Dynamic light scattering (DLS)

DLS, also known as “quasi-elastic light scattering” or “photon correlation spectroscopy,” is a well-known non-invasive technique used to determine the molecular size and size distribution of particles in a liquid phase. This technique is typically applied in the submicron range and, thanks to recent advancements in technology, can achieve measurements below 1 nm. It finds widespread application in analyzing various complex injectable substances such as suspensions containing NPs, proteins, and colloids. In this method, a diluted sample of the particle suspension is exposed to coherent light from a laser. By observing the time-dependent fluctuations in the amount of scattered light caused by the Brownian movement of the particles, one can calculate the translational diffusion coefficient. This coefficient is related to the particle’s hydrodynamic radius through the Stokes–Einstein equation [68,69,70]

$$D=\frac{{k}_{\text{b}}T}{6\pi \eta {R}_{h}}$$
(1)

where

kb:

Boltzmann constant [m² kg s⁻² K⁻¹]

T:

absolute temperature [K]

η:

viscosity of the medium [Pa·s]

Rh:

the hydrodynamic radius of the particle [m]

Brownian movement causes the distances between particles, or scattering centers, to vary over time, leading to constructive and destructive interference in the scattered light. This can be used to determine the velocity of the scattering centers, i.e., the translational diffusion coefficient, by observing changes in the scattered light intensity over time. Larger particles show slower fluctuations in the scattered light, while smaller, faster-moving particles produce more rapid fluctuations. These fluctuations in scattered light over time can then be converted into values for the diffusion coefficient and particle radius (as depicted in Fig. 2). The DLS device’s correlator compares the intensity at different time separations and generates an autocorrelation function G(t), defined by the equation below (Fig. 3) [71].

$$G\left(t\right)\equiv \langle I\left({t}_{0}\right)I\left({t}_{0}+t\right)\rangle$$
(2)

where

Fig. 2 Correlation functions of larger and smaller particles in DLS (dynamic light scattering). a Typical intensity fluctuation of larger particles. b Typical correlogram from a sample containing large particles, in which the correlation of the signal decays more slowly. c Typical intensity fluctuation of smaller particles. d Typical correlogram from a sample containing small particles, in which the correlation of the signal decays more rapidly

Fig. 3 Schematic diagram showing the fluctuation in the intensity of scattered light as a function of time in DLS (dynamic light scattering). If the intensity of the signal at time = t is compared to the intensity a very small time later (t + δt), there will be a strong relationship or correlation between the intensities of the two signals. If the signal at time t, derived from a random process such as Brownian motion, is compared to the signal at t + 2δt, there will still be a reasonable correlation between the two signals

I(t0):

count of photons reaching a sensor at time t0

I(t0 + t):

count after a time interval t

Photon counting occurs at a rate dependent on the amount of light reaching the detector. Analyzing the accumulated data allows determination of the autocorrelation function and, from it, the diffusion coefficient, particularly when the intensity decay follows a simple exponential pattern [72].
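
To make this data reduction concrete, the following minimal sketch fits a single-exponential decay to a simulated intensity autocorrelation trace and converts the decay rate to a hydrodynamic radius via Eq. (1). The laser wavelength, detection angle, solvent properties, and the synthetic data itself are illustrative assumptions standing in for real instrument output, and real analysis software uses more elaborate fitting schemes.

```python
# Minimal sketch: estimating a hydrodynamic radius from a DLS autocorrelation
# decay, assuming a monodisperse sample and a single-exponential g1(tau).
# Instrument parameters (wavelength, angle, refractive index) are illustrative.
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 298.15                 # absolute temperature, K
eta = 0.89e-3              # viscosity of water at 25 degC, Pa*s
wavelength = 633e-9        # laser wavelength in vacuum, m (assumed)
n = 1.33                   # refractive index of the dispersant (assumed)
theta = np.deg2rad(173)    # detection angle (assumed backscatter geometry)

# Scattering vector magnitude q
q = 4 * np.pi * n * np.sin(theta / 2) / wavelength

# Synthetic correlation data standing in for instrument output:
# g2(tau) - 1 = beta * exp(-2 * Gamma * tau)  (Siegert relation)
tau = np.logspace(-6, -1, 200)                       # lag times, s
D_true = 4.0e-12                                     # m^2/s, ~60 nm radius
g2_minus_1 = 0.8 * np.exp(-2 * D_true * q**2 * tau)
g2_minus_1 += np.random.normal(0, 1e-3, tau.size)    # measurement noise

def model(tau, beta, Gamma):
    return beta * np.exp(-2 * Gamma * tau)

(beta_fit, Gamma_fit), _ = curve_fit(model, tau, g2_minus_1, p0=(0.8, 1e4))

D_fit = Gamma_fit / q**2                 # translational diffusion coefficient
Rh = kB * T / (6 * np.pi * eta * D_fit)  # Stokes-Einstein (Eq. 1)
print(f"D = {D_fit:.3e} m^2/s, hydrodynamic radius = {Rh*1e9:.1f} nm")
```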

It is important to apply DLS to drug particle dispersions that are sufficiently diluted. Dilution of a concentrated drug particle dispersion poses challenges, as it could alter structural features, including the dissociation of complex particles; therefore, ensuring that the particles observed in a DLS experiment retain their native structures is crucial. Even a small amount of large aggregates can have pronounced effects [73]. Typically, the sample needs to be diluted prior to measurement, regardless of whether it is in dry form, a colored solution, or a turbid liquid. However, this dilution step can affect the measured particle size in different ways. For very fast-dissolving drug nanocrystals, dissolution may occur during dilution before measurement, leading to higher measured particle size values. To minimize particle dissolution during dilution, it is recommended to dilute with a saturated solution of the drug [54]. Depolarized DLS was developed to better understand the nature of NPs in complex biological systems, overcoming some limitations of polarized DLS. This technique demonstrated that unwanted signals from light scattered by proteins unrelated to the particles could be eliminated, resulting in an excellent signal-to-noise ratio [74].

Several precautions should be taken during sample preparation and data processing, as detailed in the ISO standard. These crucial features include [75, 76]:

I. The sample’s scattering intensity (expressed as the count rate) should ideally be ten times higher than that of the solvent, which enhances the signal-to-noise ratio. The scattering intensity is influenced by particle size and concentration.

II. To avoid interference from dust particles or other contaminants, the dispersion medium should be filtered through 0.2-µm filters for weakly scattering suspensions. A rule of thumb is to use a filter pore size three times larger than the largest particle size to be measured; a 5-µm filter is often suitable for DLS.

III. When analyzing DLS data using the Stokes–Einstein equation, it is assumed that interparticle interactions do not affect particle diffusion. However, in heavily charge-stabilized dispersions, particle diffusion is affected by electrostatic interactions. Varying the electrolyte concentration (e.g., NaCl) can help mitigate particle interaction effects, and experimentally determining the optimal electrolyte concentration is recommended to prevent excessive coagulation.

IV. The amplitude of the correlation function should be at least 80% of the optimal value, as determined by a calibration method. Decreases in amplitude can be caused by low scattering efficiency, widening of the detector aperture, nonergodicity of the sample, and other factors.

V. To ensure repeatability, at least six distinct measurements should be conducted for each sample. The counting time for each run can be adjusted to obtain a smooth correlation function. Using multiple measurements lasting around 60 s is advisable to minimize errors caused by dust particles. The standard deviation of the average count rate should be less than 5% for acceptable results. Outliers can be removed using conventional statistical techniques.

VI. For narrowly polydisperse samples (PdI less than 0.1), the average intensity-weighted particle diameter and the polydispersity index (PdI) are provided via cumulant analysis (a minimal sketch of such a cumulant fit is given below). The standard deviation of the average diameter should be less than 5% of its value. Multimodal analysis techniques can be employed for PdI greater than 0.1, examining the data for multiple peaks in the distribution; for unimodal distributions, values derived via cumulants remain valid. Non-negative least squares (NNLS), CONTIN, and similar methods can provide an intensity-weighted size distribution, aiding in identifying the type of polydispersity and the particle size distribution within each peak. For multimodal distributions, particles associated with different peaks should be separated using appropriate techniques (e.g., size exclusion chromatography, ultracentrifugation) and re-examined separately for accurate size estimation.
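
As referenced in point VI, the sketch below illustrates a basic second-order cumulant analysis applied to simulated correlation data. The scattering vector, coherence factor, and particle sizes are assumed values, and commercial software implements more elaborate, instrument-specific versions of this fit.

```python
# Minimal sketch of a second-order cumulant analysis of DLS data, as referred
# to in point VI above. The scattering vector, coherence factor beta, and the
# synthetic correlation data are illustrative stand-ins for instrument output.
import numpy as np

kB, T, eta = 1.380649e-23, 298.15, 0.89e-3   # J/K, K, Pa*s (water, 25 degC)
q = 2.64e7                                    # scattering vector magnitude, 1/m

def cumulant_fit(tau, g2_minus_1, beta=0.8):
    """Return Z-average diameter (m) and PdI from a second-order cumulant fit."""
    g1 = np.sqrt(np.clip(g2_minus_1 / beta, 1e-12, None))   # Siegert relation
    # ln g1(tau) ~ -Gamma*tau + (mu2/2)*tau^2 ; weight strong-signal points
    a, b, _ = np.polyfit(tau, np.log(g1), 2, w=g1)
    Gamma, mu2 = -b, 2 * a
    D_z = Gamma / q**2                        # mean diffusion coefficient, m^2/s
    d_z = kB * T / (3 * np.pi * eta * D_z)    # Z-average (intensity-mean) diameter
    return d_z, mu2 / Gamma**2                # PdI = mu2 / Gamma^2

# Synthetic g2-1 for a mildly polydisperse sample (two species, ~90 and ~160 nm)
tau = np.linspace(1e-6, 4e-4, 400)
G1 = (kB * T / (3 * np.pi * eta * 90e-9)) * q**2     # decay rate, 90 nm species
G2 = (kB * T / (3 * np.pi * eta * 160e-9)) * q**2    # decay rate, 160 nm species
g2m1 = 0.8 * (0.6 * np.exp(-G1 * tau) + 0.4 * np.exp(-G2 * tau)) ** 2

d_z, pdi = cumulant_fit(tau, g2m1)
print(f"Z-average diameter = {d_z*1e9:.0f} nm, PdI = {pdi:.3f}")
```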

Badasyan and colleagues have proposed optimal methods for employing DLS to determine the sizes of polysilane nanoparticles (NPs) in a solution. Initially, the solution should be diluted as far as possible while still exceeding the DLS detection threshold. A comparison between the optical signals of the first and second samples, with the latter having twice the concentration of the former, helps establish whether the sample was within the dilute regime. If the signal doubles, it confirms the dilute state; otherwise, the solution may not be dilute and its quality must be improved, perhaps by adjusting temperature or solvent. Once these conditions are met, DLS measurements should be conducted as an alternative to size exclusion chromatography (SEC). The procedure involves a 5-min measurement followed by a 2-h settling period at 50.1 °C to disperse aggregates. Subsequently, DLS measurements are executed at 50 °C over a 3-min duration using the “90Plus/BI-MAS DLS” apparatus, with data analysis conducted through the 9KPSDW software. The results, derived from the mean of five trials, indicate that DLS provides more dependable outcomes than SEC for diluted solutions of globules, based on the polysilane experiment findings. This reliability stems from the fact that the sole potential factor causing divergence is the absence of molecule interaction during DLS measurements, in contrast to their interaction with the SEC column gel [77].

The impact and tissue penetration capabilities of lipid NPs have been shown to rely on their particle size, as discussed in [78, 79]. In a study by X. Jia et al. (2021), a convenient and rapid approach to assess the size-dependent distribution of RNA payloads in real time was introduced. This method utilizes information from multi-angle light scattering detectors, ultraviolet light, and refractive index measurements after SEC separates the lipid NPs. The experimental setup involved an Agilent 1200 HPLC system with a UV/vis photodiode array detector, termed “SEC-multi-angle light scattering (MALS)-UV,” coupled with an “Optilab T-rEX” dRI detector and a “DAWN HELEOS II” MALS detector. To ensure robust measurement, the MALS detector employed 20% laser power, enabling effective detection of strongly scattering lipid NPs. For online analysis of the hydrodynamic radius (Rh), an optical fiber connected a “DynaPro NanoStar” dynamic light scattering detector to the 16th angle of the MALS detector. Data acquisition and analysis were facilitated by the ASTRA software, version 6.1.7.15. Real-time Rh estimation was also achieved by disconnecting the optical fiber from the DAWN MALS detector and employing the aforementioned DynaPro NanoStar instrument; sample aliquots were transferred into 4-µL disposable NanoStar microcuvettes, which were then positioned within the DLS detector. These evaluations were conducted using a solvent with a refractive index of 1.333 at a constant temperature of 25 °C.

Malm and Corbett (2019) introduced an innovative approach to enhance the accuracy and precision of particle diameter data obtained through DLS analysis. Their technique involves capturing and processing DLS data in a novel manner, resulting in reduced noise artifacts within the correlation function. By performing separate correlations over multiple brief sub-measurements, the fitting and inversion processes yield more consistent outcomes with decreased variability. Furthermore, this method generates sets of data that comprehensively depict both the steady-state and transient components of the samples, achieving heightened precision and accuracy. This is achieved by individually analyzing each sub-measurement and applying an outlier detection strategy to the resulting polydispersity index (PdI). Unlike conventional preset size range rejection methods, this statistical approach identifies sample-specific transients without the need for prior knowledge about the sample’s characteristics, whether it is polydisperse, multimodal, or monomodal. As a result, the resulting particle size distributions offer a more representative portrayal of the sample. Notably, this advanced technique significantly reduces measurement time, allowing repeat measurements to be completed within 24 s, a mere one-fifth of the time required by some previously employed methods. This efficiency is particularly notable for well-prepared and stable samples. Additionally, the algorithm optimizes measurement durations by collecting data until sample statistics are sufficiently captured for characterizing samples with minimal dispersion. The algorithm’s effectiveness is demonstrated through various standards, including proteins and polystyrene latex, encompassing polydisperse mixtures. Crucially, this algorithm’s applicability extends beyond DLS measurements, encompassing other properties inferred from DLS analysis, such as microrheology, while adhering to similar principles as SLS. While methods like high-performance liquid chromatography (HPLC) combined with DLS offer enhanced resolution and separation of size elements in dispersed systems, batch DLS often proves superior for characterization due to its non-invasive nature, measurement simplicity, and minimal volume requirements. The novel adaptive correlation technique effectively addresses one of the main criticisms associated with this method [80].
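
The conceptual sketch below illustrates the general idea of combining many brief sub-measurements and statistically rejecting transient-dominated ones. The rejection rule used here (median ± 3 scaled MAD on the PdI) is an illustrative choice and should not be read as the published adaptive-correlation algorithm.

```python
# Conceptual sketch of sub-measurement averaging with statistical outlier
# rejection, in the spirit of the adaptive-correlation approach described
# above; the rejection rule (median +/- 3*MAD on PdI) is an illustrative
# choice, not the published algorithm.
import numpy as np

def combine_submeasurements(sizes_nm, pdis):
    """sizes_nm, pdis: per-sub-measurement Z-average sizes and PdI values."""
    sizes_nm, pdis = np.asarray(sizes_nm), np.asarray(pdis)
    med = np.median(pdis)
    mad = np.median(np.abs(pdis - med)) + 1e-12
    steady = np.abs(pdis - med) < 3 * 1.4826 * mad   # flag transient-dominated runs
    return sizes_nm[steady].mean(), sizes_nm[~steady], steady.sum()

# Ten brief sub-measurements; two are perturbed by a transient dust particle
sizes = [98, 101, 99, 100, 102, 140, 97, 100, 135, 99]
pdis  = [0.05, 0.06, 0.05, 0.07, 0.06, 0.35, 0.05, 0.06, 0.30, 0.05]
mean_size, rejected, n_kept = combine_submeasurements(sizes, pdis)
print(f"steady-state size = {mean_size:.1f} nm from {n_kept} sub-measurements; "
      f"rejected sizes: {rejected}")
```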

The study conducted by Zheng et al. (2016) highlights the significant variation in the hydrodynamic diameters of gold nanoparticles (AuNPs) based on nanoparticle concentration and laser source intensity. This variability was demonstrated through DLS investigations involving AuNPs of diverse concentrations and sizes. Measurement challenges arise from either an excess or inadequate dispersion of light from the test suspension. Notably, at specific concentrations, repeated scattering can lead to notable inaccuracies in determining the average hydrodynamic size of the AuNPs. Fundamental principles of light scattering provide a straightforward means of identifying or predicting concentrations where multiple scattering becomes problematic for particles of differing sizes. The prevention of multiple scattering necessitates using concentrations lower than those provoking such effects. The insights and techniques presented in the study extend beyond AuNPs, offering applicability to the examination and analysis of other nanoparticle materials. For instance, silver nanoparticles exhibit more potent light-scattering properties than even gold nanoparticles. Considering the anticipated multi-scattering effects on silver nanoparticles, careful consideration is crucial when conducting DLS investigations and selecting optimal concentrations for accurate measurements. Beyond spherical metallic nanoparticles, diverse variations like nanorods, cubes, stars, and core–shell particles have been developed and documented alongside spherical nanoparticles. All these nanoparticle materials warrant comprehensive DLS examinations, given the substantial light scattering capabilities often seen in metallic NPs. DLS’s intensity-weighted average diameter values yield exceptional advantages in identifying nanoparticle aggregates due to the substantial size-dependent nature of the scattered light, which increases proportionally to the sixth power of the particle’s diameter. Researchers can effectively leverage this adaptable and broadly applicable approach by understanding both its inherent limitations and unique benefits. This understanding will facilitate its further expansion into unconventional fields [81].
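
A simple numerical illustration of this sixth-power weighting shows why a trace amount of aggregates dominates the intensity-weighted result. The particle sizes and number fractions below are arbitrary, and Mie corrections for larger particles are ignored in this sketch.

```python
# Illustrative calculation of number- vs intensity-weighted mean diameters,
# using the Rayleigh d^6 weighting noted above (Mie corrections for larger
# particles are ignored in this sketch).
import numpy as np

diameters_nm = np.array([50.0, 300.0])       # primary particles and aggregates
number_frac = np.array([0.999, 0.001])       # only 0.1% of particles aggregated

weights_intensity = number_frac * diameters_nm**6
weights_intensity /= weights_intensity.sum()

d_number = np.sum(number_frac * diameters_nm)
d_intensity = np.sum(weights_intensity * diameters_nm)
print(f"number-weighted mean:    {d_number:6.1f} nm")
print(f"intensity-weighted mean: {d_intensity:6.1f} nm")
# A 0.1% number fraction of 300 nm aggregates dominates the scattered
# intensity, which is why DLS is so sensitive to aggregation.
```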

Differential centrifugation sedimentation (DCS)

DCS is a technique used to analyze distinct particle size distributions; it is also referred to as disc centrifuge photosedimentometry (DCP) or analytical disc centrifugation [82]. It is particularly useful for characterizing particle sizes in the coatings industry, dealing with latexes, pigments, and emulsions [83]. DCS assesses the particle size distribution over a range of diameters. It involves two main components: continuous light extinction monitoring at a specific point and centrifugal separation of different NP populations. The spinning disc contains a sucrose density gradient that increases radially. This arrangement causes particles to separate based on their density and size. To prevent the sample from behaving like a higher-density liquid, the gradient compensates for the apparent density variation within the sample; this setup is crucial to ensure that particles settle according to their sedimentation coefficients. Near the densest end of the gradient, a transmitted beam interacts with particles possessing specific sedimentation coefficients. Light scattering from these particles attenuates the transmitted beam, enabling particle quantification (Fig. 4). During an experimental run, the equipment measures the time it takes for NPs to travel through the gradient to the detector beam. A modified Stokes’ law is then employed to calculate the NP diameter. However, for an accurate determination of the “actual” diameter of coated NPs, a basic core–shell model that accounts for the particle’s layered composition must be used; this distinguishes the actual size from the “apparent” size obtained directly from DCS analysis. Furthermore, DCS is capable of shedding light on surface functionalization with stabilizing ligands and on the overall colloidal stability after interactions with biomolecules [84]. One of the major advantages of DCS is its ability to isolate and precisely identify small fractions of nanoparticle populations within complex colloidal samples, which can be quite challenging to achieve with other methods [85,86,87]. Nevertheless, it is worth noting that the size of nanoparticles might change during sedimentation, especially if they are coated with biomolecules or polymers, which could alter their density and size. Despite this limitation, the field of colloid science has extensively embraced the versatile DCS approach for determining particle sizes, even at scales down to a few nanometers [88, 89]. For guidance on improving sample preparation for DCS techniques, refer to the step-by-step recommendations illustrated in Fig. 5 [90].

Fig. 4 Schematic representation of differential centrifugal sedimentation (DCS): particles are thrown out radially to the disc periphery and detected by the change in light intensity

Fig. 5 Stepwise recommendations for improved sample preparation for differential centrifugal sedimentation techniques

The core–shell model calculations

DCS measures the time it takes for NPs to sediment through a fluid with a linear density gradient inside a hollow disc rotating at constant speed and reach the detection point. To convert the measured sedimentation time into a particle diameter, the DCS software applies the equation below

$$t=\frac{C}{\left({\rho }_{\text{eff}}-{\rho }_{\text{fl}}\right){d}^{2}}$$
(3)

where

ρeff:

effective density of the NPs

ρfl:

fluid density

C:

calibration constant

d:

particle diameter

Before each evaluation, the value of the particle density (ρeff) is assumed. The density of single-component NPs, such as an exposed metallic core, is generally known, enabling precise size estimation. In cases of hybrid nanoparticles, the density is often determined using well-known densities of the component materials. For NPs with multiple layers of different elements, ρeff must be calculated to determine the total diameter [91, 92]. The inclusion of a ligand in the particle’s shell reduces the overall particle density (ρeff), as the ligand’s density is typically lower than that of the particle core element.

$${\rho }_{\text{eff}}=\frac{{d}_{c}^{3}{\rho }_{c}+\left[{\left({d}_{c}+2{t}_{s1}\right)}^{3}-{d}_{c}^{3}\right]{\rho }_{s1}}{{\left({d}_{c}+2{t}_{s1}\right)}^{3}}$$
(4)

where dc is the core diameter, ts1 the thickness of the (single) shell, ρc the core density, and ρs1 the shell density.

One common method to determine ρeff is the core–shell model, which posits that hybrid NPs consist of a core with a known density and distinct, well-defined shells, each with its own thickness and density. When a core and a single shell are present, ρeff is determined by calculating the combined mass of the NP, which depends on the density and thickness of each layer, and dividing it by the total particle volume. Equation (4) above is used to calculate ρeff in this case [85, 93].
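
A minimal numerical sketch of Eqs. (3) and (4) is given below for a hypothetical gold core carrying a single low-density shell; the densities, dimensions, and calibration constant are illustrative only.

```python
# Minimal sketch of the core-shell effective-density calculation (Eq. 4) and
# its effect on the DCS sedimentation time (Eq. 3). Densities and thicknesses
# are illustrative; C is an arbitrary calibration constant.
def rho_eff_core_shell(d_core_nm, rho_core, t_shell_nm, rho_shell):
    """Effective density (g/cm^3) of a spherical core with one uniform shell."""
    d_tot = d_core_nm + 2 * t_shell_nm
    core_mass = d_core_nm**3 * rho_core
    shell_mass = (d_tot**3 - d_core_nm**3) * rho_shell
    return (core_mass + shell_mass) / d_tot**3

def sedimentation_time(d_nm, rho_eff, rho_fluid=1.05, C=1.0):
    """Relative sedimentation time, t = C / ((rho_eff - rho_fluid) * d^2)."""
    return C / ((rho_eff - rho_fluid) * d_nm**2)

# 40 nm gold core (19.3 g/cm^3) with a 5 nm protein/ligand shell (~1.3 g/cm^3)
rho = rho_eff_core_shell(40, 19.3, 5, 1.3)
d_total = 40 + 2 * 5
t_coated = sedimentation_time(d_total, rho)
t_bare = sedimentation_time(40, 19.3)
print(f"effective density = {rho:.2f} g/cm^3")
print(f"coated particle sediments {t_coated / t_bare:.1f}x slower than bare core")
```

In this illustrative example the low-density coating roughly halves the effective density, so the coated particle sediments more slowly than the bare core; this is the origin of the difference between the “apparent” and “actual” DCS sizes discussed above.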

Kamiti et al. determined the size distribution and particle density of colloidal silica particles concurrently using an analytical disc centrifuge with spin fluids of various densities. They demonstrated that adding a calibrated IR thermometer improved the measurement performance of the DCP technique by accounting for the temperature-dependent viscosity and density of the spin fluid. Absolute DCP measurements of size and density offer advantages over standard DCP measurements, as they eliminate the need to calibrate the instrument with nanoparticles of known density and the need to assume or estimate an effective particle density. By comparing these findings to diameters derived from silica particle size standards measured by TEM, the accuracy of diameter estimation by absolute DCP was verified for silica particles with nominal diameters between 250 and 700 nm. Effective densities of the silica particles, evaluated by absolute DCP, ranged from 2.02 to 2.34 g/cm³, suggesting significant porosity. This approach can be extended to various particle forms, including core–shell particles, nanoparticle mixtures with different compositions, and colloids with irregular shapes and structures [82].

DCS has also been applied to quantitatively analyze the molecular-level binding of bovine serum albumin (BSA) on aqueous gold colloids. The method uses mass to differentiate between colloidal particles of equal density: as proteins attach to the particles, their mass and effective density change, leading to altered sedimentation times, and the amount of the adsorbed layer can be determined with a simple methodology. Notably, DCS is capable of detecting both physisorbed (soft corona) and chemisorbed (hard corona) proteins, setting it apart from some other methods that typically focus on one type of corona. This capacity to probe the soft corona is valuable, particularly in contexts where additional proteins are present, making it relevant to nanoparticle applications in medicine. Unlike other techniques that examine soft and hard coronas, such as analytical ultracentrifugation, fluorescence correlation spectroscopy, or scattering correlation spectroscopy, DCS offers a simple methodology that does not require complex arrangements, mathematical deconvolution of raw data, or specialized knowledge, and it overcomes limitations found in fluorescence correlation spectroscopy (FCS) and plasmon scattering correlation spectroscopy (PSCS), making it a versatile tool for studying nanoparticles [94, 95]. DCS has also demonstrated its ability to capture corona dynamics on a sub-minute timescale. Overall, the core–shell model calculations in DCS provide valuable insights into nanoparticle characteristics, enabling accurate and detailed analyses across various applications [85].

Nanoparticle tracking analysis (NTA)

Nanoparticle tracking analysis (NTA) is a relatively new technique, developed almost two decades ago; its commercialization, however, required fast computers capable of processing intensive video recordings in a timely manner. It is one of the few techniques that visualizes individual particles, using a charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) camera to analyze particle size [96]. The technique is used to analyze size, concentration, zeta potential, and fluorescence, and it is designed to measure particles ranging from 10 nm to 1 µm [96,97,98].

Principle

NTA uses both the Brownian motion and the light scattering of particles in a liquid suspension to determine particle size (Fig. 6). The Brownian motion of the particles is captured by NTA as a video sequence: a laser beam is passed through the sample cell, where it strikes the particles and is scattered. This scattered light is recorded by the mounted camera at a specific frame rate as a video file of particles moving under Brownian motion (Fig. 6) [96].

Fig. 6 Schematic representation of the principle and working of nanoparticle tracking analysis (NTA), including the monitoring screen visualization

Input parameters for NTA instrument

The major parameters that can be modified (optimized) in the NTA instrument by the analyst as required are as follows.

Time of analysis

Reproducibility of NTA results has been a concern. The outcome depends on how promptly the sample is analyzed: a prominent effect is observed on the determined particle concentration, whereas little or no effect is observed on the measured average particle size. The particle concentration decreases when the analysis is delayed. Incorporating a proper cleaning procedure between analyses minimizes this type of error [97, 99,100,101,102,103].

Flow rate of analysis

An arbitrary flow rate is set while operating the instrument. A low flow rate during analysis creates laminar flow, which eliminates drift so that only Brownian motion is taken into consideration. However, for particles that require a higher flow rate to move (i.e., large particles), drift becomes inevitable. Since NTA deals with a very small sample volume (on the order of picoliters), care must be taken when diluting from milliliters down to this scale in order to replicate exactly the same dilution conditions; a minute deviation in dilution can have a significant impact on the outcome of the analysis. Based on various published experimental studies, a low flow rate is recommended for polydisperse samples in order to distinguish between two size distributions. Low flow rates give particles an appropriate time to pass through the analysis window and be analyzed, whereas at higher flow rates the results are biased towards larger particles [97, 99,100,101].

MINexps (MaxJump Distance) setting

This setting establishes the threshold limit, i.e., an assumption about the maximum displacement a particle can undergo between two successive video frames. The threshold helps identify whether the particle observed in two successive frames is the same particle: if it is found within the threshold distance it is considered the same particle, and if not, it is considered a different particle. In older software versions, the threshold limit had to be set manually; in updated versions, it can be set automatically, with the option of manual adjustment still available. Various published experimental studies suggest using the automatic setting, as it yields results closer to the true values [97, 99,100,101,102].
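
The expected root-mean-square frame-to-frame displacement of a particle of a given size can be estimated from the diffusion relations given later in Eqs. (5b) and (6), which provides a simple consistency check on a manually chosen threshold. The frame rate, temperature, and viscosity in the sketch below are assumed values.

```python
# Sanity-check sketch for a manually chosen maximum-jump threshold: expected
# RMS frame-to-frame displacement of a particle of given size at a given
# frame rate (2D tracking, Stokes-Einstein diffusion). Values are assumed.
import math

kB, T, eta = 1.380649e-23, 298.15, 0.89e-3   # J/K, K, Pa*s (water, 25 degC)

def rms_jump_um(diameter_nm, frame_rate_hz):
    D = kB * T / (3 * math.pi * eta * diameter_nm * 1e-9)  # m^2/s
    msd = 4 * D * (1.0 / frame_rate_hz)                    # 2D MSD per frame, m^2
    return math.sqrt(msd) * 1e6                            # micrometres

for d in (30, 100, 400):
    print(f"{d:>4} nm particle at 30 fps: RMS jump ≈ {rms_jump_um(d, 30):.2f} µm")
```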

Blur setting

This setting has little or no effect on the measured particle size distribution. Its main role is to enhance the signal and reduce noise by blurring the background. Various studies suggest that operating with the automatic setting works well for analysis [97, 99,100,101, 103].

Based on the above findings, it can be inferred that minimizing the time before analysis, carrying out an appropriate cleaning procedure after each analysis, operating at a low flow rate, and using the automatic MINexps and blur settings will yield results close to the actual particle size.

Sample preparation

Either water or an organic solvent (generally 70% ethanol) is used as the liquid medium for dispersing the particles. Particles dispersed in a liquid medium move randomly in all directions; this random movement is due to diffusion and is expressed by the diffusion constant (D). The migration of a given particle in any direction is driven by energy transfer from the surrounding liquid molecules. The theory for determining the diffusion constant was developed by Albert Einstein [98, 101].

The movement of each particle from frame to frame is recorded and quantified as the mean square displacement ⟨x²⟩. Depending on the number of dimensions in which the displacement is tracked (one, two, or three), the diffusion constant can be calculated from the mean square displacement as:

$$\text{One-dimensional mean square displacement}:\;D=\frac{\langle {x}^{2}\rangle }{2t}$$
(5a)
$$\text{Two-dimensional mean square displacement}:\;D=\frac{\langle {x}^{2}+{y}^{2}\rangle }{4t}$$
(5b)
$$\text{Three-dimensional mean square displacement}:\;D=\frac{\langle {x}^{2}+{y}^{2}+{z}^{2}\rangle }{6t}$$
(5c)

With the help of the Stokes–Einstein relationship, the particle diameter d can be calculated as a function of the diffusion coefficient D at a temperature T and viscosity η of the liquid (kB = Boltzmann’s constant) [96, 101].

$$D=\frac{{k}_{B}T}{3\pi \eta d}$$
(6)

NTA records the displacement of particles in two dimensions, so the Stokes–Einstein relationship is combined with the two-dimensional mean square displacement to obtain the particle diameter [104].

$$d=\frac{{k}_{B}T}{3\pi \eta D}=\frac{{k}_{B}T}{3\pi \eta }\cdot \frac{4t}{\langle {x}^{2}+{y}^{2}\rangle }=\frac{4{k}_{B}Tt}{3\pi \eta \langle {x}^{2}+{y}^{2}\rangle }$$
(7)
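
Putting Eqs. (5b)–(7) together, the per-particle size calculation reduces to a few lines. In the sketch below, a simulated two-dimensional random walk stands in for the instrument's tracking output, and the frame rate, temperature, and viscosity are assumed values.

```python
# Minimal sketch of the per-particle NTA size calculation (Eqs. 5b-7):
# 2D mean square displacement from a tracked trajectory, then diameter via
# the Stokes-Einstein relation. A simulated random walk stands in for the
# instrument's tracking output; temperature and viscosity are assumed.
import numpy as np

kB, T, eta = 1.380649e-23, 298.15, 0.89e-3   # J/K, K, Pa*s
frame_rate = 30.0                             # frames per second (assumed)
dt = 1.0 / frame_rate

def diameter_from_track(x_m, y_m, dt):
    """Hydrodynamic diameter (m) from per-frame displacements of one particle."""
    dx, dy = np.diff(x_m), np.diff(y_m)
    msd = np.mean(dx**2 + dy**2)          # <x^2 + y^2> per time step (Eq. 5b)
    D = msd / (4 * dt)                    # diffusion coefficient
    return kB * T / (3 * np.pi * eta * D) # Stokes-Einstein, d = kB*T/(3*pi*eta*D)

# Simulate a 200 nm particle tracked for 90 frames (3 s of video)
d_true = 200e-9
D_true = kB * T / (3 * np.pi * eta * d_true)
steps = np.random.normal(0, np.sqrt(2 * D_true * dt), size=(90, 2))
x, y = np.cumsum(steps[:, 0]), np.cumsum(steps[:, 1])

print(f"estimated diameter: {diameter_from_track(x, y, dt)*1e9:.0f} nm")
```

Because each trajectory is short, the per-particle estimate scatters around the true value; NTA builds its size distribution from many such single-particle estimates.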

Comparison of NTA and DLS

DLS is a widely used and accepted technique when it comes to the characterization of particles. Comparison of results generated by NTA with DLS will give assurance over the reliability of results. Many such experiments were conducted on comparison of these two techniques. Data from various experiments such as dilution study, number ratio-based dilution study, spiking study, monodisperse study, and polydisperse study that are crucial for particle characterization were compared for both techniques [99, 105].

The results of volume-based and number-based dilution studies reveal whether a technique can display the size distribution even when the proportion of particles of one particular size is small compared to another. The spiking study provides information about the ability of the technique to differentiate when a known polydisperse sample is spiked into the analysis sample. Monodisperse and polydisperse studies provide information about the ability of the technique to provide size distributions for mono- and polydisperse samples. The findings of all comparison studies show that NTA and DLS analyze mono- and polydisperse samples with a similar level of accuracy; however, NTA’s results were less reproducible than DLS’s when the same trial was repeated. NTA is superior to DLS when it comes to resolving size distributions and can resolve even narrow size differences within a polydisperse sample. NTA, being a single-particle analysis technique, can detect contamination such as dust present in a sample even in minute quantities, while DLS, as a bulk analysis technique, fails to detect it. DLS can be operated with concentrated samples, whereas NTA requires dilute samples. Apart from that, DLS is a user-friendly technique and requires less time for analysis compared to NTA [99, 105].

Some advanced applications

Protein aggregation studies

Protein aggregation is a recurring issue in any biological formulation that includes a therapeutic protein. Aggregation not only causes failure to deliver the therapeutic action but can also provoke an immunogenic response, since aggregates are treated by the body as foreign particles. Aggregation within a formulation can arise from several factors, including high protein concentrations, combining multiple therapeutic proteins, and improper storage conditions such as unfavorable temperatures. Because of its high resolving capability, NTA has a clear advantage over DLS and many other particle characterization techniques in this setting: several published studies demonstrate its ability to reveal the polydisperse size distribution that develops when protein aggregation occurs in a formulation. This capability can be used during biologics development to optimize protein concentration and to design suitable storage conditions [101, 106].

Detection and characterization of sub-micron particles involved in various cellular signaling processes

Exosomes and membrane vesicles are submicron particles involved in various cellular signaling processes. These signaling particles carry cargo such as DNA, proteins, lipopolysaccharides, and phospholipids, and thereby convey genetic information. Their role is to inform other cells of the needs of the body, including the synthesis of specific proteins. Microorganisms also produce such signaling molecules in order to adapt to adverse environmental conditions, using them to acquire mutations. These inherent properties have drawn researchers to explore this concept as a route to novel therapeutics such as vaccines; for such development, characterization of these submicron particles is an essential prerequisite, and experiments have been reported in which NTA was used for this purpose [107, 108].

Characterization of various cellular interactions within submicron particles

Various disease conditions are caused by dysregulation of cellular events such as protein–protein and DNA–protein complexation. Such events generally produce morphological changes in the associated submicron particles, including changes in particle size. Characterizing these submicron particles therefore helps in monitoring disease conditions, and several studies have demonstrated successful characterization of submicron particles involved in this type of cellular event [109].

Tunable resistive pulse sensing (TRPS)

Tunable resistive pulse sensing (TRPS) is a single-particle analysis technique that measures particles in the nanometer to micrometer size range. It is used to analyze the size (and size distribution), charge (zeta potential), and concentration of particles, and is designed for measuring particles from approximately 40 nm to 1 µm [110].

Principle

The TRPS technique works on a principle similar to that of the Coulter counter, but at the nanoscale (Fig. 7). The movement of a particle in the sample through the nanopore is driven by hydrodynamic pressure together with electrophoretic and electroosmotic effects. As it passes through the nanopore, the particle displaces the conductive solution, transiently increasing the resistance of the pore; the resulting decrease in current is recorded by the instrument [110].

Fig. 7 Schematic representation of the principle and working of tunable resistive pulse sensing (TRPS), including extraction of information from an individual signal

Inference from the signal recorded by the instrument

Each signal recorded by the TRPS instrument corresponds to a single particle passing through the pore, and each signal must be analyzed individually to extract detailed information. The information that can be extracted from each signal is shown in Fig. 7 [111]:

  i. The height of the signal determines the size of the particle.

  ii. The duration, i.e., the time taken by the particle to pass through the pore, determines the charge of the particle.

  iii. The rate at which particles pass through the pore determines the concentration of the sample.

For a given pore size, the height, duration, and rate of the signal correspond to the blockade magnitude, blockade duration, and blockade frequency, respectively.

  i. When a particle passes through the nanopore, the current decreases; this is known as a blockade event. The magnitude of the blockade (the height of the signal), extrapolated to the baseline, is directly proportional to the volume of the particle and thus indirectly related to the particle size: a larger particle displaces a larger volume of electrolyte solution and produces a blockade event of larger magnitude.

  ii. The time taken by the particle to traverse the nanopore is called the blockade duration; it reflects how fast the particle moves through the pore and indirectly relates to the charge on the particle.

  iii. The number of blockade events occurring per unit time is called the blockade frequency; it is directly proportional to the concentration of particles in the sample (a numerical sketch based on items i and iii follows this list).
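
Building on the proportionalities described in items i and iii above, the following Python sketch illustrates how a calibration run with particles of known diameter and concentration could, in principle, convert blockade magnitudes and event rates into sample diameters and a sample concentration. The function name, the numerical values, and the assumption of identical pore, voltage, and pressure settings for calibration and sample runs are illustrative assumptions, not instrument-specific procedures.

```python
import numpy as np

def trps_calibrate_and_size(sample_blockades_na, cal_blockades_na, cal_diameter_nm,
                            sample_rate_hz, cal_rate_hz, cal_conc_per_ml):
    """Estimate sample particle diameters and concentration from TRPS blockade data.

    Assumes, as described above, that blockade magnitude is proportional to particle
    volume and blockade frequency is proportional to particle concentration, with
    calibration particles of known diameter and concentration measured under
    identical pore, voltage, and pressure settings (hypothetical example values).
    """
    # Volume ratio -> diameter: d_sample = d_cal * (dI_sample / <dI_cal>)^(1/3)
    mean_cal = np.mean(cal_blockades_na)
    diameters_nm = cal_diameter_nm * (np.asarray(sample_blockades_na) / mean_cal) ** (1.0 / 3.0)

    # Blockade (event) rate ratio -> concentration
    conc_per_ml = cal_conc_per_ml * sample_rate_hz / cal_rate_hz
    return diameters_nm, conc_per_ml

# Hypothetical blockade magnitudes (nA) and event rates for calibration and sample runs
cal = [0.80, 0.82, 0.79, 0.81]            # 200-nm calibration beads
sample = [0.10, 0.11, 0.65, 0.09, 0.64]   # mixed sample
d_nm, conc = trps_calibrate_and_size(sample, cal, 200.0, sample_rate_hz=12.0,
                                     cal_rate_hz=20.0, cal_conc_per_ml=1e9)
print(np.round(d_nm), f"{conc:.2e} particles/mL")
```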

Input parameter for TRPS instrument

The major parameters that the user (analyst) can modify in the TRPS instrument as needed are as follows [111, 112]:

Stretch

The stretch parameter controls the size of the nanopore; in other words, it sets the dynamic range of the nanopore, i.e., the range of particle sizes that can pass through the pore at a given time. Increasing the stretch enlarges the nanopore and decreases the size of the blockade. The stretch also determines the baseline current: an increase in stretch leads to an increase in baseline current, since more ions can flow through the pore.

Voltage

The voltage parameter controls the magnitude of the blockade (the signal-to-noise ratio); in other words, it amplifies low-magnitude signals and provides confidence in the measurement. Voltage also determines the baseline current: the higher the voltage, the greater the baseline current. If a particle of the same size is measured at low and at high voltage, the magnitude of the signal recorded by the instrument improves significantly at the higher voltage.

Pressure

The pressure parameter controls the hydrodynamic flow of particles through the nanopore; in other words, it affects the rate at which a particle travels through the pore. Increasing the applied pressure increases the rate of travel through the nanopore and decreases the blockade duration. The applied pressure can also be used as a balancing parameter against the electrokinetic transport mechanism, allowing high-precision charge measurement, and measurements at multiple pressures are used for accurate concentration determination.

Based on these observations, the two cases described below suggest how the settings can be combined to improve the signal and obtain more accurate analytical results.

Case 1

In brief, in case 1 the stretch parameter (i.e., the pore size) should be adjusted based on the signal obtained. Ideally, the stretch should be of optimum size: neither so large that signals from smaller particles are missed, nor so small that large particles clog the pore. An optimum stretch value, combined with sufficient voltage, yields a result closer to the true particle size (Fig. 8) [113, 114].

Fig. 8 Flow diagram for adjusting the stretch and voltage parameters to improve the signal obtained from tunable resistive pulse sensing (TRPS)

Case 2

In brief, in case 2 the pressure parameter should be adjusted based on the signal obtained. Ideally, the applied pressure should be of optimum value: if the pressure is too high, a larger number of particles pass through the pore and signals from smaller particles are missed, whereas if the pressure is too low, the analysis time increases. An optimum value should therefore be used to obtain good results (Fig. 9) [112, 115].

Fig. 9 Flow diagram for adjusting the pressure parameter to improve the signal in tunable resistive pulse sensing (TRPS)
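
For readers who prefer the tuning logic of Figs. 8 and 9 in compact form, the short Python sketch below restates it as a decision routine. The condition flags, their ordering, and the suggested actions are assumptions made for illustration and do not correspond to any vendor software parameters.

```python
def suggest_trps_adjustment(signal_too_small: bool, pore_clogging: bool,
                            small_particles_missed: bool, analysis_too_slow: bool) -> str:
    """Heuristic parameter adjustment following the logic of Figs. 8 and 9.

    The condition flags are illustrative assumptions; in practice an analyst
    would judge them from the recorded blockade trace.
    """
    # Case 1: stretch/voltage adjustments (Fig. 8)
    if pore_clogging:
        return "increase stretch (enlarge pore) so large particles pass through"
    if small_particles_missed:
        return "decrease stretch (or pressure) so small-particle blockades are resolved"
    if signal_too_small:
        return "increase voltage to raise blockade magnitude (better S/N)"
    # Case 2: pressure adjustment (Fig. 9)
    if analysis_too_slow:
        return "increase pressure to raise the blockade event rate"
    return "settings near optimum; record multiple-pressure data for concentration"

print(suggest_trps_adjustment(signal_too_small=True, pore_clogging=False,
                              small_particles_missed=False, analysis_too_slow=False))
```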

Conclusion

Physical stability and efficacy of nanoformulations depend largely on their particle size, solubility, and modified surface characteristics. Hence, imaging and non-imaging characterization techniques play complementary roles in the overall formulation development and testing of pharmaceuticals. In this article, we have discussed multiple available techniques with respect to their advantages, limitations, and applications in the measurement of NPs during pharmaceutical research. Every analytical method has certain shortcomings; it is therefore crucial to evaluate the size range of the NPs and the practical applicability of the method before analysis. Although non-imaging techniques are rapid and suitable for process-control purposes, sample heterogeneity may generate ambiguous results, and confirmatory analysis by an imaging technique such as SEM or TEM is recommended. As a general perspective on working range, DLS and AFM are suitable for size characterization of micro- and nanomaterials, whereas TRPS and NTA share a similar working range of approximately 10 nm to 1 µm; TEM is particularly useful for analysis of materials in the sub-nanometer range. Scientific advancement is a continuous process, and persistent efforts are needed to overcome the various challenges presented during the characterization of nanoformulations. Research and development are in progress to improve and validate evaluation methods for ensuring the quality of NP measurements. In conclusion, efficient nanoformulation development and delivery can be achieved, and biological toxicity minimized, by extracting the best possible surface information using a suitable measurement technique.