Sunday 25 February 2024

R function conf_int_Accuracy

 

R function


conf_int_Accuracy <- function(GMTobs, s, nDays = 2, nAnalysts = 2, nPlates = 2,
                              nReplicates = 3, FoldDilution = 2, Threshold = 1,
                              alpha = 0.05) {
  # INPUT:
  # GMTobs: vector of observed geometric means at each fold dilution
  # s: vector of the standard deviations of the log-transformed replicates
  # nDays: number of experimental days (default = 2)
  # nAnalysts: number of analysts performing the tests in validation (default = 2)
  # nPlates: number of plates used per analyst per day (default = 2)
  # nReplicates: number of measurements that each analyst performs per plate
  #              per day (default = 3)
  # FoldDilution: step of the serial dilution (default = 2)
  # Threshold: critical threshold of the log-difference between the observed
  #            and the true mean (default = 1)
  # alpha: significance level (default = 0.05)
  #
  # OUTPUT:
  # matrix of the Relative Accuracy calculated at each fold dilution and its
  # confidence interval; in addition, a plot of Relative Accuracy vs. fold
  # dilution (log2 units)

  # calculate the expected geometric mean vector from the neat sample
  GMTexp <- round(GMTobs[1] * FoldDilution^seq(0, -(length(GMTobs) - 1), -1), 0)

  # Relative Accuracy (or recovery) on the log scale
  RA <- log(GMTobs / GMTexp)

  # total number of measurements per dilution level
  N <- nDays * nAnalysts * nPlates * nReplicates
  n1 <- n2 <- N

  # standard deviations
  s1 <- s                                  # observed SDs of the log-transformed replicates
  fd <- round(FoldDilution^-seq(0, length(GMTobs) - 1), 4)  # dilution levels
  s2 <- rep(s1[1], length(GMTobs))         # SD attributed to the expected titers

  # pooled variance
  sp2 <- ((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2)

  # standard error of the log-difference
  se <- sqrt(sp2 / n1 + sp2 / n2)

  # t quantile
  tstat <- qt(1 - alpha / 2, n1 + n2 - 2)

  # confidence interval on the log scale
  lower <- RA - tstat * se
  upper <- RA + tstat * se

  # critical acceptance thresholds
  theta.low <- FoldDilution^-Threshold
  theta.up <- FoldDilution^Threshold

  # exponentiate to get back to the original scale
  RA <- exp(RA)
  lower <- exp(lower)
  upper <- exp(upper)

  # plot RA versus fold dilution
  plot(log2(fd), RA, type = "o", pch = 16, lty = 2,
       xlab = "fold dilution (log2)", ylab = "Relative Accuracy", ylim = c(0, 3))
  lines(log2(fd), lower, col = "blue", lty = 4)
  lines(log2(fd), upper, col = "blue", lty = 4)
  grid()

  # return RA and its confidence interval (column names assume the default alpha = 0.05)
  return(cbind(Dilution = fd, Accuracy = RA,
               "CI95%.lower" = lower, "CI95%.upper" = upper))
}

 

This function calculates the Relative Accuracy (RA) and its confidence interval at each fold dilution and plots RA versus fold dilution (log2 units). The function takes the following inputs:

  • GMTobs: vector of observed geometric means at each fold dilution
  • s: vector of the standard deviations of the log-transformed replicates
  • nDays: number of experimental days (default = 2)
  • nAnalysts: number of analysts performing the tests in validation (default = 2)
  • nPlates: number of plates used per analyst per day (default = 2)
  • nReplicates: number of measurements that each analyst performs per plate per day (default = 3)
  • FoldDilution: step of the serial dilution (default = 2)
  • Threshold: critical threshold of the log-difference between the observed and the true mean (default=1)
  • alpha: significance level (default=0.05)

The function returns a matrix containing, for each fold dilution, the dilution level, the RA, and the lower and upper bounds of its confidence interval. The confidence interval is calculated using the t-distribution with n1+n2-2 degrees of freedom at significance level alpha. The function also plots RA versus fold dilution (log2 units) together with the confidence bands.






Quantifying a bioassay: confidence interval of the accuracy

A bioassay validation is a set of procedures that ensure the precision and accuracy of the test results. Precision refers to the proximity of results to each other, and accuracy is the proximity of measurement results to the true value. Here we focus on the accuracy and on building a confidence interval around it.

 

Let's design the validation experiment with three possible sources of variability, day, analyst, and plate, having nd, na, and np levels, respectively. Each analyst performs nm measurements per plate per day on each sample. The samples are serially diluted by a factor of two, resulting in dl dilution levels from 1/1 to 1/(2^(dl-1)).

So, the total number of measurements, N, at each dilution level will be N = nd*na*np*nm; with the defaults of the function above (nd = na = np = 2, nm = 3), N = 24.

 

To calculate the accuracy, A, we can follow the FDA recommendation [1] of comparing the mean of the measured values with the "true" value, and define the accuracy as the ratio between the observed geometric mean of the measured titers (GMTobs) and the expected geometric mean of the titers (GMTexp), that is:


A = GMTobs/GMTexp


The GMTexp at the i-th dilution level equals the GMTobs of the neat sample multiplied by the i-th dilution, i.e., divided by the cumulative dilution factor 2^(i-1).

Of course, the accuracy of the neat (i.e., undiluted) sample is equal to 1, because at the first dilution level GMTexp = GMTobs.

To enhance the quality of a validation process, it is recommended to include in the reports not only the point estimates of the accuracy but also their confidence intervals. This provides more information about the precision and variability of the method. In the section below, we can see the accuracy estimates and their confidence intervals calculated with the custom R function conf_int_Accuracy.


Calculating the confidence interval of the accuracy is beneficial in bioassay validation because it provides a measure of the precision of the assay results. The confidence interval is a range of values that is likely to contain the true value of the parameter being estimated, with a certain degree of confidence. In other words, it quantifies the uncertainty associated with the assay results. By comparing the confidence interval against target validation acceptance criteria, one can determine whether the assay is fit for its intended purpose. The USP <1033> recommends comparing confidence intervals against target validation acceptance criteria in a bioassay validation exercise [2]. The European Medicines Agency (EMA) guideline for bioanalytical method validation also recommends calculating the confidence interval as part of the validation process [3]. 


Example 


Let's consider a design of experiment consisting of twelve dilution levels in a two-fold serial dilution (from 1:1 to 1:2048), with N measurements of titers for each dilution level.


In the vector GMTobs, we collect the twelve row-wise geometric means, each calculated across the N titers of one dilution level:

GMTobs <- c(11494, 6089, 2560, 1208, 640, 293, 135, 78, 39, 17, 10, 5)

Let s be the vector of the standard deviations calculated on the log2-transformed titers:


s <- c(0.114600161, 0.133153354, 0.125538186, 0.084989754, 0, 0.101697554, 0.133153354, 0.061447491, 0.061447491, 0.133153354, 0, 0)



After running the conf_int_Accuracy function


conf_int_Accuracy(GMTobs, s)



we get these results:

Dilution Accuracy CI95%.lower CI95%.upper
1.0000 1.0000000 0.9355777 1.0688583
0.5000 1.0595093 0.9857258 1.1388157
0.2500 0.8907446 0.8306566 0.9551793
0.1250 0.8406402 0.7927762 0.8913941
0.0625 0.8913649 0.8503661 0.9343404
0.0312 0.8161560 0.7663595 0.8691881
0.0156 0.7500000 0.6977705 0.8061390
0.0078 0.8666667 0.8215771 0.9142309
0.0039 0.8666667 0.8215771 0.9142309
0.0020 0.7727273 0.7189150 0.8305675
0.0010 0.9090909 0.8672767 0.9529211
0.0005 0.8333333 0.7950037 0.8735110




The graph can be completed by adding the limits of acceptability (the theta.low and theta.up values computed inside the function). In doing so, we get an immediate visual flag of any out-of-bounds points, as in the sketch below.
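As a minimal sketch (assuming the plot produced by conf_int_Accuracy is still the active graphics device), the acceptance limits can be overlaid with abline(); with the defaults FoldDilution = 2 and Threshold = 1, they are theta.low = 0.5 and theta.up = 2:

res <- conf_int_Accuracy(GMTobs, s)     # draws the RA plot as a side effect
abline(h = 2^-1, col = "red", lty = 3)  # theta.low = FoldDilution^-Threshold = 0.5
abline(h = 2^1,  col = "red", lty = 3)  # theta.up  = FoldDilution^Threshold  = 2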

References

[1] Accuracy and Precision in Bioanalysis: Review of Case Studies. https://www.fda.gov/media/135131/download.

[2] Statistical Assessments of Bioassay Validation Acceptance Criteria. BioProcess International (bioprocessintl.com).

[3] Advanced Topics in Calculating and Using Confidence Intervals for Model Validation (uah.edu)

 





Tuesday 11 July 2023

A Standardized Microneutralization Assay for RSV Subtypes A and B: A Tool for Diagnosis and Vaccine Development

Respiratory Syncytial Virus (RSV) is a common, contagious virus that causes infections of the respiratory tract. It is a negative-sense, single-stranded RNA virus that belongs to the genus Orthopneumovirus, subfamily Pneumovirinae, and family Paramyxoviridae. It comprises two major antigenic subgroups, RSV/A and RSV/B, which include multiple genotypes with high diversity in the attachment (G) glycoprotein. RSV is the single most common cause of respiratory hospitalization in infants, and it can also cause severe illness in older adults, people with heart and lung disease, and anyone with a weakened immune system.

The diagnosis of RSV infection can be done by detecting viral antigens, nucleic acids, or antibodies in clinical specimens. Antigen detection methods include immunofluorescence assay (IFA), enzyme immunoassay (EIA) and rapid antigen tests. Nucleic acid detection methods include reverse transcription polymerase chain reaction (RT-PCR), real-time PCR and multiplex PCR. Antibody detection methods include enzyme-linked immunosorbent assay (ELISA), indirect fluorescent antibody assay (IFA) and neutralization assay.

The serology of RSV is important for understanding the epidemiology, immunity, and vaccine efficacy of RSV infection. The most widely used serological marker of RSV infection is the neutralizing antibody, which reflects the functional capacity of serum to inhibit viral infectivity in cells. Neutralizing antibody activity is correlated with protection against RSV disease in animal models and humans.


Microneutralization Assay for RSV

 

RSV-specific neutralizing antibody activity is a correlate of immune protection and a useful marker for evaluating vaccine candidates, performing seroprevalence studies, and detecting infection. The microneutralization assay is a method that measures the functional capacity of serum (or other fluids) to neutralize virus infectivity in cells. It involves incubating serial dilutions of serum with a known amount of RSV in cell culture plates, followed by measuring viral replication after a certain period. Viral replication can be measured by different methods, such as plaque counting, immunostaining, or quantitative PCR. The microneutralization assay is more sensitive and specific than ELISA for detecting RSV antibodies and can distinguish between RSV subgroups and genotypes. However, it is also more labor-intensive and time-consuming than ELISA and requires standardized protocols and reagents.

To ensure the reliability and reproducibility of the microneutralization assay for RSV, it is essential to validate the assay using standardized protocols and quality control measures. Some of the aspects that need to be validated are:

 

- The source and quality of RSV strains used for the assay, which should represent the diversity and prevalence of RSV subgroups and genotypes in the population.

- The cell line and culture conditions used for the assay, which should support optimal viral growth and infection without cytotoxicity or interference.

- The serum dilution range and incubation time used for the assay, which should cover the expected range of neutralizing antibody titers and allow sufficient time for viral replication.

- The method of viral replication measurement used for the assay, which should be accurate, precise, and consistent across different operators and batches.

- The criteria for defining positive and negative controls, cut-off values, and neutralization titers used for the assay, which should be based on statistical analysis and biological relevance.


A recent study (Bonifazi et al., 2023) found that a micro-neutralization assay is a reliable and standardized method for assessing the neutralizing activity of serum samples against respiratory syncytial virus (RSV). The method is based on a recombinant RSV expressing a reporter gene, which enables fast and accurate measurement of virus infection in cell culture. The paper describes the development and optimization of the micro-neutralization assay using clinical samples from RSV-infected patients and vaccinated volunteers, and demonstrates that the assay has several benefits over the plaque reduction neutralization test (PRNT), such as higher sensitivity, specificity, reproducibility, and throughput. It also shows that the micro-neutralization assay can differentiate between RSV subtypes A and B and can measure neutralizing antibodies in various sample types, such as serum, plasma, and nasal washes. Emphasis is placed on the significance and applications of the micro-neutralization assay for RSV research and vaccine development, on its possible use to assess the immunogenicity and efficacy of RSV vaccines, and on monitoring the circulation and diversity of RSV strains in different populations.

 

Other studies have attempted to validate the microneutralization assay for RSV using different protocols and parameters. For example:

- Zielinska et al. (2005) developed an improved microneutralization assay using an image analyzer for automated plaque counting, which reduced the labor and variability of manual counting. They validated their assay using a reference serum and two control sera from different age groups.

- Piedra et al. (2016) described a standard protocol for performing the microneutralization assay using complement-mediated enhancement of neutralization, which increased the sensitivity and specificity of the assay. They validated their assay using sera from infants and adults vaccinated with different RSV vaccine candidates.

- Varada et al. (2013) proposed a quantitative PCR-based microneutralization assay, which measured the viral RNA levels instead of the viral protein levels. They validated their assay using a pooled human immunoglobulin reference standard and sera from children with RSV infection.

 

These studies demonstrate the feasibility and utility of validating the microneutralization assay for RSV using different approaches and criteria. However, there is still a need for a more comprehensive and standardized validation of the assay across different laboratories and settings, especially for the evaluation of RSV vaccines and immunotherapies.

 

Conclusion

 

The microneutralization assay is a valuable tool for measuring the neutralizing antibody activity of serum against RSV. It can provide information on the epidemiology, immunity, and vaccine efficacy of RSV infection. However, the assay requires careful validation and quality control to ensure its reliability and reproducibility. Further research and collaboration are needed to establish a universal and robust microneutralization assay for RSV that can be applied to various clinical and research purposes.

 

References

 

Bonifazi, C., Trombetta, C. M., Barneschi, I., Latanza, S., Leopoldi, S., Benincasa, L., Leonardi, M., Semplici, C., Piu, P., Marchi, S., Montomoli, E., & Manenti, A. (2023). Establishment and validation of a high-throughput micro-neutralization assay for respiratory syncytial virus (subtypes A and B). Journal of Medical Virology, 95(7), e28923. https://doi.org/10.1002/jmv.28923

Zielinska, E., Liu, D., Wu, HY. et al. Development of an improved microneutralization assay for respiratory syncytial virus by automated plaque counting using imaging analysis. Virol J 2, 84 (2005). https://doi.org/10.1186/1743-422X-2-84

Piedra PA, Hause AM, Aideyan L. Respiratory Syncytial Virus (RSV): Neutralizing Antibody, a Correlate of Immune Protection. Methods Mol Biol. 2016;1442:77-91. doi: 10.1007/978-1-4939-3687-8_7. PMID: 27464689.

Varada, J.C., Teferedegne, B., Crim, R.L. et al. A neutralization assay for respiratory syncytial virus using a quantitative PCR-based endpoint assessment. Virol J 10, 195 (2013). https://doi.org/10.1186/1743-422X-10-195



Thursday 18 May 2023

Owen's Function: A Simple Solution to Complex Problems

Definition

 

If you are interested in statistics or probability, you may have encountered Owen's function, a special function that arises in various applications involving bivariate normal distributions. Let's explain what Owen's function is, how it is defined and notated, and why it is useful. Owen's function is defined as follows:

For any real numbers h and a, Owen's function T(h, a) is given by

$$T(h,a) = \frac{1}{2\pi}\int_0^a \frac{\exp\!\left(-\tfrac{1}{2}h^2(1+x^2)\right)}{1+x^2}\,dx.$$

For h ≥ 0, Owen's function can be interpreted as the probability of the event (X > h and 0 < Y < aX), where X and Y are independent standard normal random variables. This means that it can be used to calculate the area under the bivariate normal density in a certain region: it is the probability that (X, Y) lies in a wedge-shaped region bounded by the lines x = h, y = 0, and y = ax.
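A quick Monte Carlo sanity check of this interpretation (a sketch; it assumes the OwenQ R package, introduced in the Software section below, is installed):

# Estimate P(X > h, 0 < Y < a*X) by simulation and compare with OwenT(h, a)
set.seed(1)
n <- 1e6
x <- rnorm(n)
y <- rnorm(n)
h <- 0.5
a <- 2

mean(x > h & y > 0 & y < a * x)   # Monte Carlo estimate
OwenQ::OwenT(h, a)                # exact value; should agree to ~3 decimals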

Owen's function has many applications in statistics and probability. For instance, it appears in the calculation of the cumulative distribution function of the noncentral t-distribution, the power of certain statistical tests, the multivariate normal tail probabilities, and the multivariate normal orthant probabilities. It also has connections to copulas, elliptical distributions, and directional statistics.

Background



Owen's function is a special function that arises in the context of multivariate normal integrals. It was introduced in 1956 by Donald Bruce Owen, who was interested in the problem of testing the equality of two normal means when the variances are unknown and unequal. Owen derived an expression for the power function of this test, which involved a double integral that could not be evaluated in closed form. He then proposed a numerical approximation for this integral, based on a series expansion of a function that he called T(h, a), where h and a are real parameters.

Owen's function has many interesting properties and applications, such as:

- It is even in h and odd in a: $T(-h, a) = T(h, a)$ and $T(h, -a) = -T(h, a)$.
- It is related to the standard normal cumulative distribution function $\Phi(z)$ by $T(h, \infty) = \tfrac{1}{2}\Phi(-|h|)$ and $T(0, a) = \frac{\arctan(a)}{2\pi}$.
- For $a \ge 0$ it satisfies the complementary relation $T(h, a) + T(ah, 1/a) = \tfrac{1}{2}\left[\Phi(h) + \Phi(ah)\right] - \Phi(h)\,\Phi(ah)$.
- It can be used to compute the cumulative distribution function of a bivariate normal distribution, and hence the probability of a rectangular region.
- It can be used to compute the tail probabilities of certain quadratic forms in normal variables.

Applications


One of the most useful applications of Owen's function is in computing bivariate normal probabilities. In this section, I will explain how Owen's function can be used to calculate the probability that two correlated normal variables fall below given thresholds. Let X and Y be standard normal variables with correlation coefficient ρ, and let h and k be two constants. Owen (1956) showed that the joint probability P(X < h, Y < k) can be written with two evaluations of the T-function:

$$P(X < h,\, Y < k) = \tfrac{1}{2}\left[\Phi(h) + \Phi(k)\right] - T\!\left(h, \frac{k - \rho h}{h\sqrt{1-\rho^2}}\right) - T\!\left(k, \frac{h - \rho k}{k\sqrt{1-\rho^2}}\right) - \delta,$$

where Φ is the standard normal cumulative distribution function and δ equals 0 if hk > 0 and 1/2 otherwise (with a limiting form when h or k is zero). This formula allows us to compute bivariate normal probabilities without using numerical integration or tables.
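As a minimal sketch of this formula (assuming the OwenQ package for the T-function and, for cross-checking, the mvtnorm package; pbinorm is our own hypothetical helper, restricted to h ≠ 0 and k ≠ 0):

library(OwenQ)

pbinorm <- function(h, k, rho) {
  # Owen's (1956) decomposition of the standard bivariate normal CDF
  ah <- (k - rho * h) / (h * sqrt(1 - rho^2))
  ak <- (h - rho * k) / (k * sqrt(1 - rho^2))
  delta <- if (h * k > 0) 0 else 0.5   # boundary cases h = 0 or k = 0 need the
                                       # limiting form T(0, a) = atan(a) / (2*pi)
  0.5 * (pnorm(h) + pnorm(k)) - OwenT(h, ah) - OwenT(k, ak) - delta
}

# Cross-check against numerical integration: P(X < 1, Y < 0.5) with rho = 0.3
pbinorm(1, 0.5, 0.3)
mvtnorm::pmvnorm(upper = c(1, 0.5), corr = matrix(c(1, 0.3, 0.3, 1), 2))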

For more than two variables there is no comparably simple closed form in terms of Owen's function. Multivariate normal probabilities P(X1 < A1, X2 < A2, ..., Xn < An) are instead evaluated with the numerical integration methods developed by Genz (1992), which are implemented, for example, in the mvtnorm R package.

Owen's function is therefore a key building block for calculating bivariate normal probabilities, and, through numerical methods, multivariate ones, in various fields of statistics and applied mathematics. Some examples of its applications include testing hypotheses, evaluating integrals, estimating parameters, and simulating random vectors.



Software



Owen's function is used to compute the bivariate normal distribution function and related quantities, including the distribution function of a skew-normal variate. In R, Owen's function can be evaluated with the OwenQ package, which provides OwenT(h, a) for the Owen T-function and the OwenQ1 and OwenQ2 functions for the first and second Owen Q-functions. You can install the package using the following command:

install.packages("OwenQ")

Here is an example of how to use the OwenT(h, a) function:

library(OwenQ)
OwenT(1.5, 2)

This will evaluate the Owen T-function with h = 1.5 and a = 2. Besides the OwenQ package, the sn package (for the skew-normal distribution) ships its own implementation of the T-function, T.Owen(h, a), and the mvtnorm package handles bivariate and multivariate normal probabilities more generally; the CompQuadForm package addresses the related problem of tail probabilities of quadratic forms in normal variables. Matlab does not provide Owen's T-function in its Statistics and Machine Learning Toolbox, although user-contributed implementations are available.
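The known identities from the Background section give a quick way to check the implementation (a sketch, again assuming the OwenQ package is installed):

library(OwenQ)

# T(0, a) = atan(a) / (2*pi); with a = 1 this equals 1/8
OwenT(0, 1)           # 0.125
atan(1) / (2 * pi)    # 0.125

# T is even in h and odd in a
OwenT(-1.5, 2) - OwenT(1.5, 2)    # ~0
OwenT(1.5, -2) + OwenT(1.5, 2)    # ~0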


References



Here is a list of recommended reading to learn more about the topic:

1. Owen, D. B. (1956). Tables for computing bivariate normal probabilities. The Annals of Mathematical Statistics, 27(4), 1075-1090.

2. Owen, D. B. (1980). A table of normal integrals. Communications in Statistics - Simulation and Computation, 9(4), 389-419.

3. Owen, D. B., & Rabinowitz, M. (1983). A handbook of the Owen function and related functions. CRC Press.

4. Owen, D. B., & Zhou, Y. (1990). Safe computation of probability integrals of the multivariate normal and multivariate t distributions. Statistics & Probability Letters, 9(4), 307-311.

5. Genz, A. (1992). Numerical computation of multivariate normal probabilities. Journal of Computational and Graphical Statistics, 1(2), 141-149.

6. Genz, A., & Bretz, F. (2002). Methods for the computation of multivariate t-probabilities. Journal of Computational and Graphical Statistics, 11(4), 950-971.

7. Genz, A., & Bretz, F. (2009). Computation of multivariate normal and t probabilities. Springer Science & Business Media.

8. Genz, A., Bretz, F., Miwa, T., Mi, X., Leisch, F., Scheipl, F., & Hothorn, T. (2020). mvtnorm: Multivariate Normal and t Distributions. R package version 1.1-1.

9. Phadia, E. G. (2010). A survey of the theory of hypergeometric functions of several variables. In Hypergeometric functions on domains of positivity, Jack polynomials, and applications (pp. 25-53). American Mathematical Society.

10. Phadia, E. G., & Srivastava, H. M. (2012). Some generalizations and applications of the Owen function T(h; a). Integral Transforms and Special Functions, 23(8), 575-588.

11. Srivastava, H. M., & Daoust, M.-C. (1991). Some families of the multivariable H-functions with applications to probability distributions. Journal of Statistical Planning and Inference, 29(1), 11-26.

12. Srivastava, H. M., & Gupta, K. C. (1982). The H-functions of one and two variables with applications. South Asian Publishers.

13. Srivastava, H. M., & Karlsson, P. W. (1985). Multiple Gaussian hypergeometric series (Vol. 49). Ellis Horwood Limited.

14. Srivastava, H. M., Choi, J., Agarwal, P., & Jain, S. K. (Eds.) (2018). Advances in special functions and orthogonal polynomials: Proceedings of the International Conference on Special Functions, IIT Delhi, December 19-23, 2016. Springer Singapore.

15. NIST Digital Library of Mathematical Functions (DLMF). https://dlmf.nist.gov/

16. Wolfram MathWorld. http://mathworld.wolfram.com/

17. Wolfram Functions Site. http://functions.wolfram.com/

18. Abramowitz, M., & Stegun, I. A. (Eds.) (1964). Handbook of mathematical functions with formulas, graphs, and mathematical tables. Dover Publications.

19. Gradshteyn, I. S., & Ryzhik, I. M. (Eds.) (2007). Table of integrals, series, and products. Elsevier.

20. Prudnikov, A. P., Brychkov, Y. A., & Marichev, O. I. (Eds.) (1998). Integrals and series. CRC Press.

21. Erdélyi, A. (Ed.) (1953). Higher transcendental functions. McGraw-Hill.


Wednesday 3 May 2023

Serial Dilutions: A Common Technique for Biochemistry and Pharmacology

Introduction

Serial dilutions are a technique used to create a series of solutions with decreasing concentrations of a substance. They are important in biochemistry and pharmacology because they allow researchers to measure the effects of different doses of a substance on biological systems.

 

To perform a serial dilution, one starts with a stock solution of known concentration and transfers a fixed amount of it to a new container. Then, one adds a solvent (usually water or buffer) to the new container until it reaches the desired volume. This creates a diluted solution with a lower concentration than the stock solution. The process can be repeated with the diluted solution as the starting point, creating a further diluted solution, and so on.

 

The concentration of each solution in a serial dilution can be calculated by using the formula C1V1 = C2V2, where C1 and C2 are the concentrations of the initial and final solutions, respectively, and V1 and V2 are their volumes. For example, if one transfers 1 mL of a 10 mM stock solution to a new container and adds 9 mL of solvent, the resulting solution will have a concentration of 1 mM (10 mM x 1 mL = 1 mM x 10 mL).
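As a quick illustration, the C1V1 = C2V2 relation can be coded directly in R (a minimal sketch; dilute_conc is just a hypothetical helper name):

# Concentration after one dilution step: C2 = C1 * V1 / V2,
# where V2 is the final volume (transferred volume plus added solvent)
dilute_conc <- function(C1, V1, V2) C1 * V1 / V2

# 1 mL of a 10 mM stock brought to a final volume of 10 mL gives 1 mM
dilute_conc(C1 = 10, V1 = 1, V2 = 10)   # 1 (mM)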

 

Serial dilutions are useful for studying the effects of different concentrations of a substance on biological systems, such as enzymes, cells, tissues, or organisms. By using serial dilutions, one can test a range of doses and observe how they affect the activity, growth, survival, or response of the system. This can help determine the optimal dose, the threshold dose, or the toxic dose of a substance.

 

Serial dilutions are also essential for performing assays that measure the amount or activity of a substance in a sample. For example, in an enzyme-linked immunosorbent assay (ELISA), serial dilutions are used to create a standard curve that relates the concentration of an antigen to its optical density. By comparing the optical density of an unknown sample to the standard curve, one can estimate its concentration.

 

Serial dilutions are therefore an important technique in biochemistry and pharmacology that enable researchers to explore the properties and effects of various substances on biological systems.

 

Basic principles of serial dilutions

 

Serial dilutions are a common technique in experimental sciences, especially in biology and medicine, to create solutions with a desired concentration of a substance or a cell type.

Serial dilution is the process of diluting a sample step by step with a constant dilution factor. For example, if we want to make a ten-fold serial dilution of a solution, we can take 1 ml of the original solution and add it to 9 ml of a diluent (such as water or saline) and mix well. This will give us a new solution that is 10 times less concentrated than the original one. We can repeat this process with the new solution to get another 10-fold dilution, and so on.

 

The dilution factor is the ratio of the final volume to the initial volume of the solution. For a ten-fold serial dilution, the dilution factor is 10 for each step. We can also calculate the total dilution factor for the entire series by multiplying the individual dilution factors. For example, if we make four 10-fold serial dilutions, the total dilution factor will be 10 x 10 x 10 x 10 = 10,000.
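In R (a small sketch), the cumulative dilution factor of such a series is just the cumulative product of the step factors:

# Four successive ten-fold dilutions
step_factors <- rep(10, 4)

# Total dilution factor after each step: 10, 100, 1000, 10000
cumprod(step_factors)

# Concentration at each step, starting from a 1 M stock solution
1 / cumprod(step_factors)   # 0.1, 0.01, 0.001, 0.0001 M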

 

Serial dilutions are useful for several reasons. First, they allow us to create solutions with very low concentrations that would be difficult to measure or pipette otherwise. For instance, if we want to make a solution with a concentration of 0.0001 M (or 0.1 mM) from a 1 M solution, we will need to pipette 0.0001 ml of the original solution, which is very impractical and inaccurate. However, by making a series of ten-fold serial dilutions, we can easily achieve this concentration.

 

Second, they allow us to estimate the concentration of cells or organisms in a sample by counting the number of colonies that grow on agar plates after inoculating them with different dilutions. For example, if we have a bacterial culture and we want to know how many bacteria are in it, we can make serial dilutions of the culture and spread a known volume (such as 0.1 ml) of each dilution on an agar plate. After incubating the plates for a suitable time, we can count the number of colonies that appear on each plate. The number of colonies is proportional to the number of bacteria in the inoculum, and we can use the total dilution factor to calculate the concentration of bacteria in the original culture.
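For example (a minimal sketch with hypothetical numbers), the concentration of the original culture follows from the colony count, the plated volume, and the total dilution factor:

# CFU/mL of the original culture = colonies / (plated volume in mL * dilution)
colonies <- 42          # colonies counted on one plate (hypothetical)
plated_volume <- 0.1    # mL spread on the plate
dilution <- 1e-6        # total dilution of the plated sample (six ten-fold steps)

colonies / (plated_volume * dilution)   # 4.2e8 CFU/mL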

 

Third, they allow us to create concentration curves with a logarithmic scale for experiments that involve measuring the response of a system to different concentrations of an analyte (such as an enzyme or an antibody). For example, if we want to measure how an enzyme reacts with different concentrations of a substrate, we can make serial dilutions of the substrate and add them to a fixed amount of enzyme in separate tubes. Then, we can measure the amount of product formed by the enzyme-substrate reaction in each tube. By plotting the product concentration versus the substrate concentration on a logarithmic scale, we can obtain a curve that shows how the enzyme activity changes with different substrate concentrations.
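Sketching this in R (all numbers hypothetical; a simple Michaelis-Menten response is assumed purely for illustration):

# Two-fold serial dilutions of a substrate, starting at 10 mM
substrate <- 10 / 2^(0:7)   # mM

# Hypothetical enzyme response (Michaelis-Menten with Vmax = 1, Km = 1 mM)
rate <- substrate / (1 + substrate)

# Concentration-response curve on a logarithmic x-axis
plot(substrate, rate, log = "x", type = "o", pch = 16,
     xlab = "substrate (mM, log scale)", ylab = "reaction rate")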

 

Briefly, serial dilutions are a simple and effective way to create solutions with different concentrations of a substance or a cell type for various purposes. They involve diluting a sample step by step with a constant dilution factor and calculating the total dilution factor for the entire series. Serial dilutions are widely used in biology and medicine for estimating cell counts, preparing cultures from single cells, titrating antibodies, and generating concentration curves.

 

Applications of serial dilutions

 

Serial dilutions have various applications in biochemistry and pharmacology, such as:

 

  • Drug discovery: Serial dilutions can be used to test the effects of different doses of a potential drug on a biological target, such as a cell, an enzyme, or a receptor. By measuring the response of the target to different concentrations of the drug, researchers can determine the optimal dose, the potency, and the safety margin of the drug.
  • Enzyme assays: Serial dilutions can be used to measure the activity of an enzyme by adding a substrate that changes color or fluorescence when it is catalyzed by the enzyme. By varying the concentration of the enzyme in different solutions, researchers can calculate the rate of the reaction, the maximum velocity, and the affinity of the enzyme for the substrate.
  • Protein quantification: Serial dilutions can be used to estimate the amount of protein in a sample by using a standard curve. A standard curve is a plot of absorbance versus concentration of a known protein that has been diluted in a series of solutions. By measuring the absorbance of the unknown protein sample and comparing it to the standard curve, researchers can infer its concentration (see the sketch after this list).
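A minimal sketch of that standard-curve workflow in R (all values hypothetical):

# Standard curve: absorbance measured for known protein concentrations
std_conc <- c(0.125, 0.25, 0.5, 1, 2)        # mg/mL (hypothetical standards)
std_abs  <- c(0.11, 0.21, 0.40, 0.78, 1.52)  # measured absorbance (hypothetical)

# Fit a linear standard curve: absorbance ~ concentration
fit <- lm(std_abs ~ std_conc)

# Inverse prediction: estimate the concentration of an unknown sample
unknown_abs <- 0.55
(unknown_abs - coef(fit)[1]) / coef(fit)[2]   # estimated concentration, mg/mL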

 

Techniques for performing serial dilutions

 

There are different techniques used to perform serial dilutions, such as manual pipetting, automated liquid handling, and microfluidics. Each technique has its own advantages and disadvantages, depending on the accuracy, speed, and cost required for the experiment.

 

Manual pipetting is the simplest and most widely used technique for serial dilutions. It involves using a pipette to transfer a fixed volume of solution from one container to another and then adding a diluent to achieve the desired concentration. Manual pipetting is easy to perform and requires minimal equipment, but it can be prone to human errors and contamination. It can also be time-consuming and tedious for large numbers of samples or high dilution factors.

 

Automated liquid handling is a technique that uses a robotic device to perform serial dilutions. It can handle multiple samples simultaneously and accurately, reducing human errors and contamination. Automated liquid handling can also save time and labour for complex or high-throughput experiments. However, automated liquid handling can be expensive to purchase and maintain, and it may require specialized software and training to operate.

 

Microfluidics is a technique that uses microscale channels and devices to manipulate small volumes of fluids. It can perform serial dilutions by mixing different streams of fluids in precise ratios, using valves, pumps, or electric fields. Microfluidics can achieve high accuracy and precision for serial dilutions, as well as rapid mixing and reaction times. Microfluidics can also integrate multiple functions on a single chip, such as detection and analysis. However, microfluidics can be challenging to design and fabricate, and it may require sophisticated equipment and expertise to use.

 

Factors affecting accuracy and precision

 

The accuracy and precision of serial dilutions are important for obtaining reliable and reproducible results in various applications, such as viable bacterial counts, standard curves, and enzyme assays. However, there are several factors that can affect the accuracy and precision of serial dilutions, such as pipetting errors, evaporation, and contamination.

 

Pipetting errors are deviations from the nominal volume of the pipette due to human or mechanical factors. Pipetting errors can be classified into two types: systematic errors and random errors. Systematic errors are consistent deviations from the true value that result from calibration or technique errors. For example, using a pipette that is not properly calibrated or adjusted for temperature and pressure can cause systematic errors. Random errors are unpredictable deviations from the true value that result from variability or noise in the measurement process. For example, air bubbles, droplet formation, or inconsistent pipetting speed can cause random errors.

 

Evaporation is the loss of solvent due to vaporization during the dilution process. Evaporation can affect the accuracy and precision of serial dilutions by changing the concentration of the solute in the solution. Evaporation can be influenced by factors such as temperature, humidity, air flow, and surface area of the container. To minimize evaporation, it is recommended to use closed containers, avoid high temperatures and low humidity, and reduce the exposure time of the solution to air.

 

Contamination is the introduction of unwanted substances or microorganisms into the solution during the dilution process. Contamination can affect the accuracy and precision of serial dilutions by altering the composition or activity of the solute in the solution. Contamination can be caused by factors such as improper sterilization, cross-contamination, or environmental exposure. To prevent contamination, it is advised to use sterile equipment and materials, avoid contact between different solutions or pipette tips, and work in a clean and controlled environment.

 

Troubleshooting common problems

 

We have seen that serial dilutions are a useful technique to reduce the concentration of a solution or a sample in a controlled and stepwise manner. However, some common problems can affect the accuracy and reliability of serial dilutions, such as pipetting errors or contamination. Here are some tips to avoid or minimize these problems:

 

  1. Use calibrated pipettes and check them regularly for accuracy and precision. Pipetting errors can result from improper technique, air bubbles, leaks, or damaged tips. Follow the manufacturer's instructions for pipetting and use the appropriate tips for each pipette.
  2. Use sterile and disposable pipette tips for each transfer of solution or sample. This will prevent cross-contamination and ensure consistent volume delivery. Do not reuse the tips or touch them with your hands or other objects.
  3. Use fresh and sterile diluents for each dilution step. Do not reuse or mix diluents from different sources or batches. Store the diluents at the recommended temperature and conditions and check them for signs of contamination or degradation before use.
  4. Label each tube or container clearly with the dilution factor and the sample name or number. Use a consistent and logical labeling system to avoid confusion and errors. Keep track of the order and number of dilutions performed.
  5. Mix each tube or container thoroughly after adding the solution or sample. This will ensure homogeneity and uniformity of the diluted solution or sample. Use gentle swirling, vortexing, or inversion to mix the contents without introducing air bubbles or splashing.
  6. Transfer a small, measured volume of each diluted solution or sample to a plate or well for further analysis. Use sterile and disposable pipettes or micropipettes for this step. Avoid touching the plate or well with the pipette tip and dispense the volume carefully and slowly.
  7. Follow good laboratory practices and safety guidelines when performing serial dilutions. Wear gloves, goggles, and a lab coat to protect yourself and your samples from contamination. Work in a clean and organized area with minimal distractions. Dispose of used materials properly and sanitize your work area after completing the experiment.


Conclusions

 

In this post, we introduced the serial dilution technique, which is widely used in biochemistry and pharmacology. The following key points summarize its main characteristics:

- Serial dilutions allow for accurate and precise measurement of small concentrations of substances, such as enzymes, hormones, drugs, or toxins.

- Serial dilutions reduce the risk of errors or contamination that may occur when handling or transferring small volumes of solutions.

- Serial dilutions enable the creation of standard curves or calibration curves that can be used to determine the unknown concentration of a substance in a sample.

- Serial dilutions facilitate the comparison of different samples or experiments by ensuring that they are tested under the same conditions and with the same units of measurement.

- Serial dilutions are essential for performing assays or tests that rely on the interaction between a substance and a specific receptor or indicator, such as enzyme-linked immunosorbent assay (ELISA), colorimetric assay, or fluorescence assay.

 

Serial dilutions are an essential skill for biochemists and pharmacologists who work with substances that have different concentrations and effects. By mastering this technique, we can prepare solutions that are suitable for our experiments and measurements, obtain accurate and precise amounts of substances that are hard to measure directly, and create concentration curves that can reveal important information about the properties and behavior of substances.


