AccountingQueen

(3)

$16/per page/Negotiable

About AccountingQueen

Levels Taught:
Elementary, Middle School, High School, College, University, PhD

Expertise:
Accounting, Algebra, Applied Sciences, Architecture and Design, Art & Design, Biology, Business & Finance, Calculus, Chemistry, Communications, Computer Science, Economics, Engineering, English, Environmental Science, Essay Writing, Film, Foreign Languages, Geography, Geology, Geometry, Health & Medical, History, HR Management, Information Systems, Law, Literature, Management, Marketing, Math, Numerical Analysis, Philosophy, Physics, Precalculus, Political Science, Psychology, Programming, Science, Social Science, Statistics
Teaching Since: Jul 2017
Last Sign In: 264 weeks, 2 days ago
Questions Answered: 5502
Tutorials Posted: 5501

Education

  • MBA, Graduate Psychology, PhD in HRM
    Strayer, Phoenix, University of California
    Feb 1999 - Mar 2006

Experience

  • PR Manager
    LSGH LLC
    Apr-2003 - Apr-2007

Category > Statistics Posted 13 Aug 2017 My Price 10.00

Mr. Tuttle, postings: 2.

o    The only time in this course where we round UP

posted by Jerome Tuttle , Jul 27, 2017, 7:28 AM

The formula 7-2 on page 333 for how large a sample size n should be to estimate the population proportion p is the result of starting with the margin of error formula E = z_α/2 * √(p_hat * q_hat / n) and solving for n.  If the calculated value of n is not a whole number, round n UP to the next LARGER whole number.  This is a rare instance of not rounding to the closest number.

     When calculating how large a sample size n should be, if p_hat is known then use it; if p_hat is not known, use 0.5. The choice of .5 is not arbitrary, and it is not because if something either will or will not happen then its probability must be .5 (which I hope you recognize is incorrect from our probability discussion on socks in my drawer). 

     What do you think is the reason for choosing .5 in this circumstance?
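The sample-size calculation from formula 7-2 can be checked numerically. A minimal Python sketch using only the standard library (NormalDist supplies the critical value); the margin of error E = 0.03 and the 85% prior estimate are made-up inputs for illustration:

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_proportion(E, confidence=0.95, p_hat=None):
    """Formula 7-2: n = z_{alpha/2}^2 * p_hat * q_hat / E^2, rounded UP."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # critical value z_{alpha/2}
    p = 0.5 if p_hat is None else p_hat   # use 0.5 when p_hat is unknown
    n = z**2 * p * (1 - p) / E**2
    return ceil(n)                        # round UP to the next larger whole number

print(sample_size_for_proportion(0.03))              # no prior estimate -> 1068
print(sample_size_for_proportion(0.03, p_hat=0.85))  # prior estimate 85% -> 545
```

Note that the unknown-p_hat case always demands the larger sample, which hints at why 0.5 is the safe default.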

o    An unbiased estimator

posted by Jerome Tuttle , Jul 27, 2017, 7:13 AM

An unbiased estimator is a statistic whose sampling distribution has a mean equal to the population parameter it is estimating. Here it is the estimator - the choice of the sample statistic like mean or median - that is biased or unbiased, in contrast to a sample or an experimenter that is biased or unbiased.

     Suppose we want to estimate the mean of the population µ by taking samples.  Suppose we ask, when we take samples, wouldn't it be easier to use as our estimator the range of the sample (maximum minus minimum; see page 96), rather than the sample mean?  But we need to know whether the range is unbiased or not.

     If we were to take every possible sample, calculate each sample's range, and then calculate the mean of those ranges, we would have a number; this is the mean of the sample ranges.

     We are trying to estimate the population mean, which we don't know.  But if we did know it, we could check whether it equals the mean of the sample ranges taken over every possible sample.  If it did, the range would be an unbiased estimator.  But in a numerical example we would discover the two are not equal, so the range is biased.

     Not surprisingly, the sample mean is an unbiased estimator of the population mean µ, and the sample proportion p hat is an unbiased estimator of the population proportion p.

     When we say an estimator is unbiased, this says if we took every possible sample, the mean of the sampling distribution would equal the population parameter.  This does not say that if we take one sample, that sample statistic will equal the population parameter; it would have to be a lucky sample for this to happen.  

     The sample variance s², whose denominator is n-1, is an unbiased estimator of the population variance σ², and this is why we use n-1 instead of n.  The sample standard deviation s is a biased estimator of σ, but the bias is small.
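The "take every possible sample" check described above can be carried out on a toy population. A minimal sketch (the three-element population and the sample size of 2 are made-up for illustration):

```python
from itertools import product
from statistics import mean

population = [1, 2, 3]      # a tiny population so we can list EVERY sample
mu = mean(population)       # population mean = 2

# All 9 samples of size 2, drawn with replacement
samples = list(product(population, repeat=2))

mean_of_sample_means  = mean(mean(s) for s in samples)
mean_of_sample_ranges = mean(max(s) - min(s) for s in samples)

print(mean_of_sample_means)   # equals mu, so the sample mean is unbiased
print(mean_of_sample_ranges)  # 8/9, not mu, so the range is a biased estimator
```

The mean of all the sample means lands exactly on µ, while the mean of all the sample ranges does not, which is the whole distinction.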

o    Intro to confidence intervals

posted by Jerome Tuttle , Jul 27, 2017, 7:18 AM

In chapter 7 we start with our sample statistic, which we denote p hat (p with a caret, ^, on top) for proportions, or x bar (x with a horizontal line on top) for means.  We ask how good an estimate it is of the population parameter.  To do this we use a probability distribution like the Normal to put a range, or confidence interval, around our estimate that is likely to contain the population parameter.

     In the example on pages 323 and 331, 1007 adults were randomly selected, and 85% said they know what Twitter is. So p hat = 85% is the sample proportion.  We are trying to measure the true population proportion p.  Note that p is some constant number - we just don't know what it really is.

     Based on our one sample with p hat = 85%, the calculations give us a 95% confidence interval of .828 < p < .872. 

     How would you explain the sentence above in your own words? (Note there is a correct and an incorrect way to do this.)
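The Twitter interval quoted above can be reproduced in a few lines of Python, using only the standard library (NormalDist supplies the z critical value):

```python
from math import sqrt
from statistics import NormalDist

n, p_hat = 1007, 0.85              # 1007 adults sampled; 85% know what Twitter is
q_hat = 1 - p_hat
z = NormalDist().inv_cdf(0.975)    # z_{alpha/2} for 95% confidence, about 1.96

E = z * sqrt(p_hat * q_hat / n)    # margin of error
lo, hi = p_hat - E, p_hat + E
print(round(lo, 3), round(hi, 3))  # 0.828 0.872, matching the text
```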

o    Intro to margin of error

posted by Jerome Tuttle , Jul 27, 2017, 7:23 AM

The margin of error, E, for a confidence interval with probability 1 - α, equals the product of the critical value and the standard deviation.  This is the maximum likely difference, with probability 1 - α, between the observed sample statistic and the true value of the population parameter.

 

<See art5>

 

     The confidence interval is the sample statistic plus or minus E, that is p_hat ± E, or x_bar ± E.

     For a sample proportion, E = z_α/2 * √(p_hat * q_hat / n).

     For a sample mean with sigma unknown, E = t_α/2 * s / √n.

     For a sample mean with sigma known, E = z_α/2 * σ / √n.

 

     Note the similarities among the three formulas: E = (critical value from a probability distribution) * (SD of the sample statistic).  Recall that the SD of a sample statistic has a √n in the denominator.  (We have not yet discussed t.)

     Note that these formulas depend on the size of the sample n, but not the size of the population N.  Let me try to give an example why this is so.  Suppose you are sampling corn kernels from a harvest by thrusting a scoop into some corn.  The scoop doesn't know if it is within a small bag of corn or an entire truckload.  If the corn is well mixed so the scoop selects a random sample (and this is the key assumption), then the variability of the result depends only on the size of the scoop.

     Can you buy into the previous paragraph?  What do you think?
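The margin-of-error formulas can be compared side by side. A sketch of the two z-based cases using only the standard library (the σ = 15 and n = 100 inputs for the known-sigma case are made-up; the t-based case needs a t table or a package such as scipy, since the standard library has no t distribution):

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)     # critical value z_{alpha/2} for 95% confidence

# Proportion: E = z_{alpha/2} * sqrt(p_hat * q_hat / n)
p_hat, n = 0.85, 1007               # the Twitter example from the earlier post
E_prop = z * sqrt(p_hat * (1 - p_hat) / n)

# Mean with sigma known: E = z_{alpha/2} * sigma / sqrt(n)
sigma, n2 = 15, 100                 # hypothetical population SD and sample size
E_mean = z * sigma / sqrt(n2)

# Mean with sigma unknown would be t_{alpha/2} * s / sqrt(n); the t critical
# value is not in the standard library (scipy.stats.t.ppf provides it).

print(round(E_prop, 3))   # 0.022
print(round(E_mean, 2))   # 2.94
```

In both cases, note that only the sample size n appears, never the population size N, which is exactly the point of the corn-scoop analogy.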

 

Answers

(3)
Status NEW Posted 13 Aug 2017 05:08 PM My Price 10.00
