The World’s Largest Sharp Brain Virtual Experts Marketplace, Just a Click Away
Levels Taught:
Elementary, Middle School, High School, College, University, PhD
| Teaching Since: | May 2017 |
| Last Sign In: | 398 Weeks, 5 Days Ago |
| Questions Answered: | 66690 |
| Tutorials Posted: | 66688 |
MCS, PhD
Argosy University / Phoenix University
Nov-2005 - Oct-2011
Professor
Phoenix University
Oct-2001 - Nov-2016
Suppose we have a sequential (ordered) file of 100,000 records, where each record is 240 bytes. Assume a block size B = 2400 bytes, an average seek time s = 16 ms, an average rotational delay rd = 8.3 ms, and a block transfer time btt = 0.8 ms. Suppose we want to make X independent random record reads from the file. We could make X random block reads, or we could perform one exhaustive read of the entire file looking for those X records. The question is to decide when it is more efficient to perform one exhaustive read of the entire file than to perform X individual random reads. That is, for what value of X does an exhaustive read of the file become more efficient than X random reads? Develop this as a function of X.
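A minimal sketch of how the comparison could be set up, assuming the usual textbook disk-access cost model in which one random block read costs s + rd + btt and one exhaustive sequential scan costs s + rd + b * btt (a single seek and rotational delay followed by b consecutive block transfers, ignoring double buffering). The variable names mirror the question's symbols; everything else is illustrative:

```python
# Sketch of the cost comparison under the assumed cost model:
# one random block read costs s + rd + btt, and one exhaustive sequential
# scan costs s + rd + b * btt (one seek, then b consecutive block transfers).

RECORDS = 100_000      # records in the file
R = 240                # record size in bytes
B = 2400               # block size in bytes
s = 16.0               # average seek time, ms
rd = 8.3               # average rotational delay, ms
btt = 0.8              # block transfer time, ms

bfr = B // R           # blocking factor: 2400 / 240 = 10 records per block
b = RECORDS // bfr     # number of blocks: 100,000 / 10 = 10,000

def random_reads_cost(x: int) -> float:
    """Cost in ms of x independent random block reads."""
    return x * (s + rd + btt)

def exhaustive_read_cost() -> float:
    """Cost in ms of one sequential scan of all b blocks."""
    return s + rd + b * btt

# Find the smallest X for which the exhaustive scan becomes cheaper.
x = 1
while random_reads_cost(x) <= exhaustive_read_cost():
    x += 1
print(b, exhaustive_read_cost(), x)   # 10000  8024.3  320
```

Under these assumptions the file occupies b = 10,000 blocks, the exhaustive scan costs 16 + 8.3 + 10,000 * 0.8 = 8,024.3 ms, and each random read costs 25.1 ms, so the crossover condition is X * 25.1 > 8,024.3, i.e. roughly X >= 320 random reads.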
Hello Sir/Madam, thank you for using our website and for the acquisition of my posted solution. Please ping me on chat (I am online) or inbox me a message and I will