Abstract

An important feature of embedded processor-based design for systems-on-chip is that the CPU core is decoupled from the on-chip memory, and it is the system designer's responsibility to instantiate an appropriate on-chip memory configuration in the design. Traditionally, cache configuration for commercial microprocessors has been determined by conducting experiments on benchmark examples. Since general-purpose microprocessors in a PC or workstation environment must execute a large variety of application software, a common and effective means of improving performance has been to increase the cache size to the extent allowed by silicon area constraints. This strategy is acceptable because, in the absence of advance knowledge of the application programs that will execute on the processor, it is fair to assume that a larger cache will, in most cases, improve performance: more instructions and data can be held in local memory, leading to a possibly greater degree of reuse. In the embedded processor domain, however, there is often only a single application, and it is possible to conduct a more thorough analysis of that application to determine the best memory configuration. When coupled with an aggressive compiler that exploits knowledge of this memory architecture, the impact on the overall design is significant. For instance, if an analysis of the application reveals that the data cache hit ratio is unlikely to improve for cache sizes larger than 1 KByte, this information can be used to allocate expensive on-chip silicon area to other hardware resources instead of an unnecessarily large cache.
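The kind of analysis alluded to here can be sketched as a simple trace-driven cache simulation. The following C fragment is a minimal sketch, not taken from the chapter: the address trace, the 16-byte line size, and the 1 KB working set are illustrative assumptions. It sweeps a direct-mapped data cache over a range of sizes and reports the hit ratio, exposing the size beyond which a larger cache no longer helps.

#include <stdio.h>
#include <stdlib.h>

#define LINE_BYTES 16                    /* assumed cache line size: 4 words */

/* Hit ratio of a direct-mapped cache of cache_bytes capacity over a trace
 * of byte addresses. */
static double hit_ratio(const unsigned long *trace, size_t n, size_t cache_bytes)
{
    size_t num_lines = cache_bytes / LINE_BYTES;
    unsigned long *tags = malloc(num_lines * sizeof *tags);
    char *valid = calloc(num_lines, 1);
    size_t hits = 0;

    for (size_t i = 0; i < n; i++) {
        unsigned long block = trace[i] / LINE_BYTES;
        size_t line = block % num_lines;          /* direct-mapped index */
        if (valid[line] && tags[line] == block) {
            hits++;                               /* hit: line already resident */
        } else {
            tags[line] = block;                   /* miss: fill the line */
            valid[line] = 1;
        }
    }
    free(tags);
    free(valid);
    return (double)hits / (double)n;
}

int main(void)
{
    /* Illustrative trace: repeated word-by-word walks over a 1 KB working set. */
    enum { N = 4096 };
    unsigned long trace[N];
    for (size_t i = 0; i < N; i++)
        trace[i] = (4 * i) % 1024;

    for (size_t size = 256; size <= 8192; size *= 2)
        printf("cache %5zu B: hit ratio %.3f\n", size, hit_ratio(trace, N, size));
    return 0;
}

For this illustrative trace the hit ratio saturates at 1 KB (roughly 0.75 below that size, roughly 0.98 at 1 KB and above), which is exactly the kind of evidence the chapter uses to argue for reallocating silicon area away from an oversized cache.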


Notes

  1. If the region had six words, it could span three lines. For example, we could have A[i] mapping to line j; A[i+1] ... A[i+4] mapping to line (j+1); and A[i+5] to line (j+2) (see the sketch below).

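A minimal sketch of the mapping described in this note; the 4-word line size and the base word offset are assumptions chosen to reproduce the three-line case.

#include <stdio.h>

#define WORDS_PER_LINE 4          /* assumed: 4 words per cache line */

int main(void)
{
    unsigned base_word = 7;       /* assume A[i] falls on the last word of its line */

    /* A six-word region A[i]..A[i+5] starting at that offset touches three lines. */
    for (unsigned k = 0; k < 6; k++)
        printf("A[i+%u] -> line %u\n", k, (base_word + k) / WORDS_PER_LINE);

    /* Prints lines 1, 2, 2, 2, 2, 3: A[i] on line j, A[i+1]..A[i+4] on line j+1,
     * and A[i+5] on line j+2. */
    return 0;
}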

Copyright information

© 1999 Springer Science+Business Media New York

About this chapter

Cite this chapter

Panda, P.R., Dutt, N., Nicolau, A. (1999). Memory Architecture Exploration. In: Memory Issues in Embedded Systems-on-Chip. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5107-2_6

  • DOI: https://doi.org/10.1007/978-1-4615-5107-2_6

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7323-0

  • Online ISBN: 978-1-4615-5107-2

  • eBook Packages: Springer Book Archive
