The translation lookaside buffer (TLB) is a special hardware cache that speeds up accesses to a process's page table. Although a process can theoretically access its full virtual address space, most programs touch only a small subset of it, and the TLB exploits this locality: if all of the pages a program is actively using can be kept in the TLB, then memory accesses will be very fast.
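The idea above can be sketched in a few lines of code. This is a minimal illustration, not real hardware behavior: the TLB is modeled as a dictionary from virtual page number (VPN) to physical frame number (PFN), and the entries and sizes are made up for the example.

```python
# Minimal sketch of a TLB lookup. The TLB is modeled as a dict from
# virtual page number (VPN) to physical frame number (PFN); the
# entries below are hypothetical.

PAGE_SIZE = 1024     # bytes per page, so the offset uses 10 bits
OFFSET_BITS = 10

tlb = {3: 7, 12: 1}  # illustrative VPN -> PFN entries

def translate(vaddr):
    """Return (physical address, hit?) for a virtual address."""
    vpn = vaddr >> OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    if vpn in tlb:                        # TLB hit: fast path
        return (tlb[vpn] << OFFSET_BITS) | offset, True
    # TLB miss: a real system would walk the page table here,
    # then refill the TLB with the new mapping.
    return None, False

paddr, hit = translate((3 << OFFSET_BITS) | 5)
print(paddr, hit)  # VPN 3 maps to PFN 7, so paddr = 7*1024 + 5 = 7173
```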
In this studio, you will:
Please complete the required exercises below, as well as any optional enrichment exercises that you wish to complete.
As you work through these exercises, please record your answers in a text file. When finished, submit your work via the git repository.
Make sure that the name of each person who worked on these exercises is listed in the first answer, and make sure you number each of your responses so it is easy to match your responses with each exercise.
For the following exercises, consider the following page table. Suppose a page size of 1024 bytes:
Furthermore, suppose this machine has a TLB with the following contents:
Would an access to virtual address 00110011110011 cause a TLB hit?
Would an access to virtual address 00010011001110 cause a TLB hit?
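To answer questions like these, split each address into its page number and offset. The addresses in these exercises are 14-bit binary strings; with 1024-byte pages the low 10 bits are the page offset, so the top 4 bits form the virtual page number that is looked up in the TLB. A small helper makes the split explicit:

```python
# Split a 14-bit binary address string into (VPN, offset), assuming
# 1024-byte pages (10 offset bits) as stated in the exercises.

OFFSET_BITS = 10

def split(vaddr_bits):
    vpn = int(vaddr_bits[:-OFFSET_BITS], 2)
    offset = int(vaddr_bits[-OFFSET_BITS:], 2)
    return vpn, offset

print(split("00110011110011"))  # -> (3, 243): check VPN 3 in the TLB
```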
Class 0 - not referenced, not modified
Class 1 - not referenced, modified
Class 2 - referenced, not modified
Class 3 - referenced, modified
Under the NRU policy, which page in the page table would be evicted next?
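The four classes above order pages by how costly they are to evict: class = 2 x (referenced bit) + (modified bit), and NRU evicts a page from the lowest-numbered nonempty class. A small sketch, with a made-up page list for illustration:

```python
# NRU classification as defined above: class = 2*R + M, and the
# victim comes from the lowest-numbered nonempty class. The pages
# below are hypothetical, not the exercise's page table.

def nru_class(referenced, modified):
    return 2 * int(referenced) + int(modified)

def pick_victim(pages):
    """pages: list of (name, referenced, modified) tuples."""
    return min(pages, key=lambda p: nru_class(p[1], p[2]))

pages = [("A", True, True), ("B", False, True), ("C", True, False)]
print(pick_victim(pages)[0])  # B is class 1, the lowest class present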
00110010010000 - read
00100000111011 - read
00110000101110 - read
00110000010101 - write
00010001000010 - write
01100000100001 - read
01010010011100 - read
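One way to track the effect of this sequence is to replay it, updating each page's referenced and modified bits as you go. The sketch below assumes (as is conventional) that a read sets a page's referenced bit and a write sets both bits, with all bits initially clear; the exercise's page table may start from different values.

```python
# Replay the access sequence above, assuming reads set the referenced
# bit and writes set both referenced and modified, starting from all
# bits clear. Addresses are 14 bits with a 10-bit offset.

OFFSET_BITS = 10
accesses = [
    ("00110010010000", "read"),
    ("00100000111011", "read"),
    ("00110000101110", "read"),
    ("00110000010101", "write"),
    ("00010001000010", "write"),
    ("01100000100001", "read"),
    ("01010010011100", "read"),
]

bits = {}  # vpn -> [referenced, modified]
for addr, op in accesses:
    vpn = int(addr[:-OFFSET_BITS], 2)
    r_m = bits.setdefault(vpn, [False, False])
    r_m[0] = True
    if op == "write":
        r_m[1] = True

for vpn, (r, m) in sorted(bits.items()):
    print(f"VPN {vpn}: R={r} M={m}")
```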
In your first design you hit 90% of the time and miss 10% of the time; that is, you have 10,000 TLB misses and 90,000 TLB hits. How many cycles does your program spend on TLB misses? How many cycles does it spend on TLB hits? How many total cycles does your program spend on memory accesses?
In your next design, you're able to reduce the TLB miss time from 30 cycles to 20 cycles. Given the same program as above, how many total cycles does the program spend on memory accesses? What is the average number of cycles per access?
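The accounting for both questions follows the same formula: total cycles = hits x hit cost + misses x miss cost, and the average is the total divided by the number of accesses. The miss costs of 30 and 20 cycles come from the text; the hit cost below is only a placeholder, since the exercise supplies its own value.

```python
# Generic cycle accounting for the two questions above. hit_cost is a
# placeholder assumption; substitute the value given in the studio.

def memory_cycles(hits, misses, hit_cost, miss_cost):
    hit_cycles = hits * hit_cost
    miss_cycles = misses * miss_cost
    total = hit_cycles + miss_cycles
    avg = total / (hits + misses)
    return hit_cycles, miss_cycles, total, avg

# First design: 90,000 hits, 10,000 misses at 30 cycles per miss.
print(memory_cycles(90_000, 10_000, hit_cost=1, miss_cost=30))
# Improved design: miss time reduced to 20 cycles.
print(memory_cycles(90_000, 10_000, hit_cost=1, miss_cost=20))
```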