Exploiting On-Chip Memories In Linux Applications


  1. Exploiting On-Chip Memories In Linux Applications Will Newton, Imagination Technologies

  2. What's wrong with SDRAM? [Bar chart: access latency in cycles, comparing an L1 cache hit to an L1 cache miss]

  3. 64 cycles is optimistic:
     - RAM clock often slower than the core
     - SoC fabric and arbiter delays
     - SDRAM controller bursting delays
     - TLB miss stalls

  4. It's not just latency:
     - Memory bus bandwidth
     - Bus contention can affect other cores
     - Memory bus power consumption
     - Non-deterministic if you're doing RT

  5. What solutions are available? [Block diagram: META core with I and D caches, core code and data RAMs, ROM, MMU and write combiner, connected over the system bus and memory arbiter to internal memory, internal peripherals and SDRAM]

  6. Example META SoC:
     - Hardware multi-threaded DSP core
     - L1 cache - 16k code, 16k data
     - Core memory - 64k code, 64k data
     - Internal memory - 384k general purpose

  7. Example META SoC [Bar chart: access latency in cycles for L1 cache, core memory, internal memory and SDRAM]

  8. Using core memories:
     - Ideally we would like usage to be transparent
     - Fixed addresses make this difficult

  9. Core memory: Executables
     - Linker script allows placement of sections (fuller sketch below):
         #define __section(S)  __attribute__((__section__(#S)))
         #define __core_text   __section(.core_text)
         #define __core_data   __section(.core_data)
         static int __core_data mydata;
         int __core_text myfunction(int a);
     - elf_map overridden in the kernel
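A minimal compilable sketch of how these macros might be used in a source file (GCC attribute syntax; hot_counter and hot_function are illustrative names, and actually landing the sections in core RAM still depends on matching linker script entries and the overridden elf_map):

    #include <stdio.h>

    #define __section(S)  __attribute__((__section__(#S)))
    #define __core_text   __section(.core_text)
    #define __core_data   __section(.core_data)

    /* Frequently accessed state kept in core data RAM. */
    static int __core_data hot_counter;

    /* Hot inner function kept in core code RAM. */
    int __core_text hot_function(int a)
    {
        hot_counter += a;
        return hot_counter;
    }

    int main(void)
    {
        printf("%d\n", hot_function(42));
        /* Check section placement with: objdump -h a.out */
        return 0;
    }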

  10. Core memory: Shared libraries
     - Cannot mix core and MMU in one object
     - Whole shared object can be placed in core
     - Only useful for small objects

  11. Core memory: Dynamic allocation
     - System call API to allocate and free
     - Can replace specific malloc/free calls
     - Allows kernel to reserve areas

  12. Core memory: In practice
     - Not easy to get big speedups
     - Cache manages small, frequently accessed items well
     - Beware long branches
     - Improved Tremor decode speed by 11%

  13. Using internal memory
     - Linux supports CPU-less NUMA nodes
     - numactl
     - set_mempolicy(2)
     - mbind(2)

  14. Internal memory: numactl
     - Tool to set the NUMA policy of an application:
         numactl --preferred=1 ls
     - Does not build easily with uClibc
     - Too coarse-grained for many situations

  15. Internal memory: set_mempolicy(2)
     - Sets the memory policy of the current process (usage sketch below):
         int set_mempolicy(int mode, unsigned long *nodemask,
                           unsigned long maxnode);
     - Does not move existing pages
     - Memory policy can be set multiple times
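A sketch of using set_mempolicy(2) to prefer allocations from a CPU-less node, assuming here that the internal memory is exposed as node 1 (numaif.h comes from the libnuma headers; link with -lnuma):

    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        unsigned long nodemask = 1UL << 1;      /* prefer node 1 (assumed) */

        /* Affects future page allocations only; existing pages stay put. */
        if (set_mempolicy(MPOL_PREFERRED, &nodemask,
                          sizeof(nodemask) * 8) != 0) {
            perror("set_mempolicy");
            return 1;
        }

        /* Pages faulted in for this buffer come from node 1 if possible. */
        char *buf = malloc(64 * 1024);
        memset(buf, 0, 64 * 1024);

        /* The policy can be changed again later, e.g. back to default. */
        set_mempolicy(MPOL_DEFAULT, NULL, 0);

        free(buf);
        return 0;
    }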

  16. Internal memory: mbind(2)
     - Sets the memory policy for an address range (usage sketch below):
         int mbind(void *addr, unsigned long len, int mode,
                   unsigned long *nodemask, unsigned long maxnode,
                   unsigned flags);
     - Overrides policy set by set_mempolicy(2)
     - Capable of moving pages between nodes
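A similar sketch for mbind(2): bind one page-aligned mapping to node 1 and ask for any already-faulted pages to be migrated with MPOL_MF_MOVE (node number and sizes are placeholders):

    #include <numaif.h>
    #include <sys/mman.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 16 * (size_t)sysconf(_SC_PAGESIZE);
        unsigned long nodemask = 1UL << 1;      /* node 1 only (assumed) */

        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Overrides any process-wide set_mempolicy() for this range. */
        if (mbind(buf, len, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, MPOL_MF_MOVE) != 0) {
            perror("mbind");
            return 1;
        }

        /* buf now faults in (or migrates) from node 1's internal memory. */
        munmap(buf, len);
        return 0;
    }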

  17. Internal memory: In practice
     - No nice way to implement a malloc_from_node(); a page-granular
       approximation is sketched below
     - Moving pages can be costly, so mbind(2) should be used with precision
     - Improved Tremor decode speed by 8%
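One such approximation is an mmap(2)-plus-mbind(2) helper; the alloc_on_node name is hypothetical, and the per-page, per-syscall overhead is exactly why it is a poor substitute for malloc:

    #include <numaif.h>
    #include <sys/mman.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Hypothetical page-granular stand-in for a malloc_from_node().
     * Every allocation costs at least one page and a system call, so it
     * only pays off for large, long-lived buffers. */
    void *alloc_on_node(size_t size, int node)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t len = (size + page - 1) & ~(page - 1);  /* round up to pages */
        unsigned long nodemask = 1UL << node;

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;

        /* Bind before first touch so no page migration is ever needed. */
        if (mbind(p, len, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0) != 0) {
            munmap(p, len);
            return NULL;
        }
        return p;
    }

The caller also has to pair each allocation with a munmap of the rounded-up length, which is another reason this does not slot in cleanly behind malloc/free.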

  18. Finding hotspots
     - Code profiling (gprof, oprofile, perf)
     - Cache profiling (oprofile, perf)
     - Emulator
     - Simulator

  19. Where's the code?
     - Source code for released products: http://www.pure.com/gpl

  20. Questions? will.newton@imgtec.com
