1.1 An Introduction to Statistical Quality Control and a Brief Review of Literature:

The word “quality” is used by many people in different contexts, and the quality of a service or product is recognized as very important by companies. However, it is difficult to frame a definition that differentiates products and services of poor quality from products and services of high quality. Shewhart (1931) states that there are two aspects of quality. First, there is an objective concept of quality, resulting in quantitatively measurable physical characteristics, which is independent of the second, subjective, aspect of quality. He recognized that the subjective side of quality is commercially interesting, but that it is necessary to establish standards of quality in a quantitative manner. Deming (1982) holds that quality can be defined only in terms of the agent, while Juran and Gryna (1988) define quality as “fitness for use.” In recent years, the importance of quality has become increasingly apparent. Stiffer competition, complex environmental and safety regulations, and abruptly changing economic conditions have been key factors in tightening plant product quality requirements.

… used a synthetic control charting approach, Muhammad and Riaz (2006) used a probability weighted moments based approach, He and Grigoryan (2006) used a double sampling approach, and Riaz and Saghir (2007) used a Gini mean difference based approach. Riaz (2008a) proposed a process variance chart and claimed its superiority over the well-known … chart, the cause-selecting chart and the regression-adjusted control charts.

1.2 An Approach to Acceptance Sampling and a Brief Review of Literature:

It has become typical to work with suppliers to improve their process performance by using statistical process control (SPC) together with designed experiments. Attention turns to acceptance sampling when the required inspection is destructive, or when 100% inspection is not feasible because of cost or time; in such cases an acceptance sampling plan is created to define how many items must be sampled in order to verify the lot. Juran and Gryna (1988) defined acceptance sampling as an inspection procedure applied in SQC: it is a method of measuring random samples from populations, called “lots”, of materials or products against predetermined standards. Acceptance sampling is a part of operations management and of service quality supervision, and it is very beneficial for industrial and business purposes as it helps in the decision-making process. Sampling plans are hypothesis tests regarding product that has been submitted for appraisal and subsequent acceptance or rejection.
The products may be grouped into lots or may be single pieces from a continuous operation. A random sample is selected and may be checked for various characteristics, and accepting or rejecting a lot is analogous to not rejecting or rejecting the null hypothesis in a hypothesis test. Acceptance methods are used both for attributes and for variables. Attribute sampling is a simple statistical method that uses representative samples to assess the traits of a large body of data; the decision is based on the number of defectives in the lot. In variable sampling plans, one or more samples of items are drawn from a given lot, the measurement of a quality characteristic on each sampled item is recorded, and the decision to accept or reject the lot is made as a function of such measurements. Variable sampling plans are used in situations where the quality characteristic of a sampled item is measurable on a continuous scale and the functional form of its probability distribution is assumed to be known. A variable sampling plan is advantageous over an attribute sampling plan in the sense that it generates more information from each item inspected and requires a smaller sample while providing the same protection as an attribute sampling plan.

Studies relating to sampling plans when the assumptions of normality and independence of the quality characteristic fail, when the functional form of the underlying distribution deviates from the normal, or when the form of the distribution is not known are found in the literature of acceptance sampling. Some of the early works on variable sampling include Lieberman and Resnikoff (1955), Schilling (1982), Owen (1966, 1967) and Hamaker (1979). Several authors have tried to design sampling plans for the case where the assumptions of normality and independence are not fulfilled. Srivastava (1961) studied variable sampling inspection for non-normal samples. Das and Mitra (1964) examined the effect of non-normality on plans for sampling inspection by variables. Geetha and Vijayaraghavan (2011) studied the selection of single sampling plans by variables based on the Logistic distribution, and Geetha and Vijayaraghavan (2013) examined the procedure for the selection of single sampling plans by variables based on Pareto distributions. For non-normal distributions, the design of unknown-sigma plans is much more complicated. Takagi (1972) attempted to provide a solution to such problems and proposed a methodology for determining the parameters of variable sampling plans under non-normal populations by introducing an expansion factor in terms of measures of skewness and kurtosis.

The performance of an acceptance sampling plan is assessed through the operating characteristic (OC) curve. The OC curve plots the probability of accepting the lot against the actual product fraction defective, and thereby displays the discriminating power of the sampling plan. Pukar et al. (2011) designed an OC curve for an acceptance sampling plan to minimize the consumer’s risk. Khandwawala (2012) constructed OC curves for acceptance sampling plans using MATLAB software.
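
To make this concrete, the sketch below computes the acceptance probability behind such an OC curve for a hypothetical single sampling plan by attributes (sample size n = 50, acceptance number c = 2, with a binomial model for the number of defectives in the sample); the plan parameters are illustrative assumptions and not those used by the cited authors.

```python
# Minimal sketch of the OC curve of a hypothetical single sampling plan by
# attributes: inspect n items and accept the lot if at most c defectives are found.
from math import comb

def prob_accept(p, n=50, c=2):
    """Binomial probability of accepting a lot whose fraction defective is p."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Tabulate a few points of the OC curve; the acceptance probability falls as
# the true fraction defective rises, which is the discriminating power noted above.
for p in (0.01, 0.02, 0.05, 0.10, 0.15):
    print(f"p = {p:.2f}  Pa = {prob_accept(p):.3f}")
```

Plotting Pa against a fine grid of fraction-defective values traces out the OC curve itself.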

1.3 A Brief Literature Review on Economic Acceptance Sampling Plans:

Economically
designed plans guarantee the lowest cost, but they typically have poor statistical performance because they ignore statistical properties. The Type I error rate of an economic design may be too high for many situations and will cause a large number of false alarms; this has in fact become a major limitation of the economic design. A statistically designed sampling plan is a structured method in which the Type I error probability and the power are fixed at desired levels. Statistically designed plans may yield high power and a low Type I error rate, but they may cost more than economic designs. Saniga (1989) was the first to introduce economic-statistical design to combine the benefits of both pure statistical and pure economic designs while minimizing their weaknesses.
The objective of both economic and economic-statistical designs is to minimize the expected total cost per unit via a non-linear constrained optimization; the main difference between the two is that economic-statistical designs are subject to constraints on the Type I error rate and the power. Various authors have studied sampling plans from an economic viewpoint. Among the works on economic designs, Wetherill and Chiu (1975) compiled a thorough bibliography of papers dealing with acceptance sampling schemes, with emphasis on the economic aspects. Champernowne (1953) considered the problem of deriving sequential sampling plans that minimize the sum of decision and inspection costs, using the Beta distribution as the prior distribution of lot quality; his plans are based on the critical fraction defective, p0, at which decision costs are zero. Pukar et al. (2011) designed an OC curve for an acceptance sampling plan to minimize the consumer’s risk. Farrel and Chhoker (2010) developed economically optimal acceptance sampling plans in a two-stage supply chain and tried to minimize the producer’s and consumer’s total quality cost while satisfying both the producer’s and the consumer’s quality and risk requirements. Vispute and Singh (2014) examined the economic effect of a variable sampling plan for autocorrelated data. Narayanan and Rajarathinam (2013) provided a procedure for the selection of single sampling plans by variables for Pareto distributions.
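
As a rough numerical illustration of this constrained optimization, the sketch below searches over the parameters (n, c) of a single sampling plan by attributes, keeps only plans whose Type I error and power satisfy assumed statistical constraints, and then selects the cheapest plan under a purely hypothetical cost model; the quality levels, risk limits and costs are placeholders and do not come from the cited papers.

```python
# Minimal sketch of an economic-statistical design: minimize an assumed cost
# subject to constraints on the Type I error rate (at the AQL) and the power
# (at the LTPD). All numerical settings below are hypothetical.
from math import comb

def pa(p, n, c):
    """Probability of acceptance under a binomial OC model."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

AQL, LTPD = 0.01, 0.08                       # assumed producer's and consumer's quality levels
ALPHA_MAX, POWER_MIN = 0.05, 0.90            # statistical constraints
INSPECT_COST, ACCEPT_BAD_COST = 1.0, 500.0   # hypothetical unit costs

best = None
for n in range(10, 201):
    for c in range(0, 6):
        alpha = 1 - pa(AQL, n, c)    # Type I error: rejecting a lot of acceptable quality
        power = 1 - pa(LTPD, n, c)   # power: rejecting a lot of rejectable quality
        if alpha > ALPHA_MAX or power < POWER_MIN:
            continue                 # statistical constraints not met
        cost = INSPECT_COST * n + ACCEPT_BAD_COST * pa(LTPD, n, c)
        if best is None or cost < best[0]:
            best = (cost, n, c, alpha, power)

if best is None:
    print("no plan satisfies the statistical constraints")
else:
    cost, n, c, alpha, power = best
    print(f"n = {n}, c = {c}, alpha = {alpha:.3f}, power = {power:.3f}, expected cost = {cost:.1f}")
```

Dropping the two constraints recovers a purely economic design, while dropping the cost term and fixing alpha and the power recovers a purely statistical design.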

1.4 An Approach to Correlation and a Brief Review of Literature:

Correlation means
association – more precisely it is a measure of the extent to which two
variables are related. If an increase in one variable tends to be associated
with an increase in the other, this is known as a positive correlation. A correlational study determines whether or not two variables are correlated, that is, whether an increase or decrease in one variable corresponds to an increase or decrease in the other. Correlational research is also known as associational research: relationships among two or more variables are studied without any attempt to influence them, and the possibility of a relationship between the variables is investigated. One purpose of correlational research is to determine the degree to which a relationship exists between two or more variables. Guo and
Manatunga (2007) proposed to estimate the concordance correlation coefficient
(CCC) non-parametrically through the bivariate survival function. They proved
the presented estimator of the CCC to be strongly consistent and asymptotically
normal, with a consistent bootstrap variance estimator. Additionally, they developed a non-parametric estimator for the time-dependent agreement coefficient, which has the same asymptotic properties as the estimator of the CCC.
Earlier, Liu et al. (2005) also worked on the CCC: they studied inter-rater agreement in measurements of time to event, which are usually not observed with perfect consistency between raters. Expressed as a function of the first two moments of the rating measures, the CCC can be estimated from data subject to censoring using a likelihood-based estimation method, under the assumptions of random censoring and parametric distribution models for the ratings of time to event.
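
By way of illustration, the sample CCC for fully observed (uncensored) paired ratings can be computed directly from the first two moments, as sketched below; the nonparametric censored-data estimator of Guo and Manatunga (2007) and the likelihood-based approach of Liu et al. (2005) are considerably more involved, and the data here are purely hypothetical.

```python
# Minimal sketch of Lin's sample concordance correlation coefficient (CCC) for
# complete paired ratings; unlike the Pearson correlation, the CCC also
# penalizes departures of the paired ratings from the 45-degree line.
import numpy as np

def ccc(x, y):
    """Sample CCC: 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rater1 = np.array([4.1, 5.0, 6.3, 7.2, 8.4])   # hypothetical times to event, rater 1
rater2 = np.array([4.4, 4.8, 6.1, 7.9, 8.0])   # hypothetical times to event, rater 2
print(f"Pearson r = {np.corrcoef(rater1, rater2)[0, 1]:.3f}")
print(f"CCC       = {ccc(rater1, rater2):.3f}")
```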

In traditional quality control charts, fixed sampling interval (FSI) schemes are used, where the time between samples is constant. More efficient methods, called variable sampling interval (VSI) schemes, have been developed in which the next observation is taken sooner than usual if there is an indication that the process is operating off the target value. Another traditional assumption behind most statistical process control charts is that the sequential observations are independent. However, there are many situations in which the sequential observations should not be treated as independent; rather, a time series model, in particular the first-order autoregressive (AR(1)) model, is appropriate. Baik (1991) used a Markov chain representation to study the properties of the FSI and VSI Shewhart X control charts and showed that, if the process variance is properly estimated and traditional control limits are used in the FSI control chart, the detection time is shorter when the consecutive observations are negatively correlated than when they are positively correlated. If they are positively correlated, the false alarm rate decreases as the correlation between consecutive observations increases; correspondingly, the detection time increases as the correlation increases. In VSI control charts with traditional control limits, if the process mean is near the target, the average time to signal (ATS) and the average number of samples to signal (ANSS) tend to decrease as the correlation increases until the correlation becomes rather moderate; then, for more highly correlated data, the ATS and ANSS tend to increase as the correlation increases. Even under the AR(1) process, the VSI chart is more efficient than the FSI chart in terms of ATS; in contrast, the VSI chart is less efficient than the FSI chart in terms of ANSS. The inefficiency in terms of ATS tends to increase, and the efficiency in terms of ANSS tends to decrease, as the correlation between the consecutive observations becomes stronger.
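
As a small numerical illustration of the FSI case, the sketch below simulates individual observations from an AR(1) process, applies a Shewhart chart with traditional 3-sigma limits based on the marginal standard deviation, and estimates by Monte Carlo the average run length needed to signal a fixed mean shift for negative, zero and positive autocorrelation; the shift size, the autoregressive parameters and the limits are illustrative assumptions rather than the settings studied by Baik (1991).

```python
# Minimal Monte Carlo sketch: run length of an FSI Shewhart chart for
# individual observations when the data follow an AR(1) model.
import numpy as np

rng = np.random.default_rng(0)

def run_length(phi, shift, sigma=1.0, max_n=10_000):
    """Number of samples until the shifted AR(1) process exceeds 3-sigma limits."""
    marginal_sd = sigma / np.sqrt(1.0 - phi**2)   # stationary sd of the AR(1) process
    x = 0.0
    for t in range(1, max_n + 1):
        x = phi * x + rng.normal(0.0, sigma)      # AR(1) recursion about the target
        if abs(x + shift) > 3.0 * marginal_sd:    # traditional control limits
            return t
    return max_n

for phi in (-0.5, 0.0, 0.5):
    arls = [run_length(phi, shift=1.0) for _ in range(2000)]
    print(f"phi = {phi:+.1f}  estimated average run length to signal = {np.mean(arls):.1f}")
```

Repeating the experiment over a grid of autocorrelation values gives a rough empirical picture of how detection time changes with the strength and sign of the correlation.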