Computer Memory

Brought by: Brilliant

Overview

This course was created in collaboration with Kenji Ejima and Kristian Takvam, senior members of Brilliant's software engineering team.

How is memory managed inside a running program? How does the OS manage memory when multiple programs run at once? What memory-related features does the CPU provide?
This course will guide you through memory management, layer by layer, so that you can answer these questions and write efficient programs.

Syllabus

  • Introduction to Memory: What memory is, and how to represent it.
    • Retrieving Data from a Computer: Computers work by storing data both quickly and slowly, both persistently and temporarily.
    • Binary, Decimal, and Hexadecimal: These three numeric bases from discrete math are key to understanding how computers really work (see the first sketch after this syllabus).
    • Linear Memory Model: Discover how every computer program really sees the computer's memory when you get under the hood.
    • Memory Layout: The linear memory model allows large pieces of data to be stored in byte-sized fragments (see the byte-layout sketch after this syllabus).
  • Memory of Programs: Segments of memory.
    • Compiling Source Files to Executables: The programs you run are made up of bytes in memory, just like everything else.
    • Memory Segments: A running program uses four different parts of the computer's memory for different purposes (see the segments sketch after this syllabus).
    • The Code and Static Segments: Some parts of memory hold information that the program shouldn't be allowed to change.
    • The Stack Segment: The stack allows multiple functions to work together.
    • The Heap Segment: The heap is where memory for data structures can be easily allocated and freely shared.
  • Virtual Memory: How the OS manages memory.
    • Processes: A computer can hold many programs, each of which thinks it's using the same memory. How can this be?
    • Virtual and Physical Memory: Memory, as seen by every running computer program on your computer, is an elaborate virtual reality.
    • Memory Pages: The computer's actual memory is managed in large chunks, called "pages" (see the page-size sketch after this syllabus).
    • MMU and the OS: Your computer's processor and operating system work together to give each program its own memory.
  • Techniques for Performance: OS features for efficient memory management.
    • Page Cache: The page cache keeps information from the hard drive in RAM for faster access.
    • Memory Mapping: A technique that helps the operating system access only small parts of big files (see the mmap sketch after this syllabus).
    • Demand Load: Even after you think you've opened a file, the operating system may be lying to you.
    • Page Sharing and Copy-on-Write: Change is hard, and operating systems speed your computer up by assuming you'll avoid changes (see the fork sketch after this syllabus).
  • Shared Libraries: Linking and loading shared libraries to effectively reuse code.
    • Libraries: Programs do not exist in isolation. When they run, they need to be connected to libraries.
    • Relocation: After the compiler is done with your program, there is still missing information.
    • Position Independent Code: An advanced technique can avoid the need for relocations.
    • Procedure Linkage Table: See how the linkage information can be stored indirectly, without slowing programs down.
  • Caching: A fundamental technique to improve performance.
    • Caching Overview: Caching is a fundamental technique for speeding up computer systems.
    • Details of Caching: Computer processors use a few different techniques for deciding what to cache.
    • Practical Uses of Caching: The same caching techniques apply to your computer and to the global internet.
    • DRAM, SRAM, and CPU Caches: Caching always needs to consider tradeoffs between cost and performance (see the traversal sketch after this syllabus).
    • Computer Memory Architecture: All modern processors use a similar pattern of caches.
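
Code Sketches

The short C programs below illustrate a few of the ideas the syllabus touches on. They are minimal sketches, not course material; file names, sizes, and printed addresses are assumptions that will vary by system.

The first sketch prints one byte in all three numeric bases, to show that decimal, hexadecimal, and binary are just different ways of writing the same bit pattern.

    #include <stdio.h>

    int main(void) {
        unsigned char value = 0xB4;     /* one bit pattern, three notations */
        printf("decimal:     %u\n", value);
        printf("hexadecimal: 0x%02X\n", value);
        printf("binary:      ");
        for (int bit = 7; bit >= 0; bit--)          /* walk the bits, high to low */
            putchar(((value >> bit) & 1) ? '1' : '0');
        putchar('\n');
        return 0;
    }

Running it prints 180, 0xB4, and 10110100: the same value in each base.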
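
The byte-layout sketch shows the linear memory model at work: a four-byte integer is stored as four one-byte fragments at consecutive addresses. The printed byte order depends on the machine; the final comment assumes a little-endian CPU such as x86-64.

    #include <stdio.h>

    int main(void) {
        unsigned int value = 0x11223344;
        unsigned char *bytes = (unsigned char *)&value;  /* view the int byte by byte */
        for (int i = 0; i < 4; i++)
            printf("%p holds 0x%02X\n", (void *)(bytes + i), bytes[i]);
        /* On a little-endian machine: 0x44, 0x33, 0x22, 0x11, lowest address first. */
        return 0;
    }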
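
The segments sketch prints one address from each of the four regions described in the Memory of Programs chapter. The exact addresses are assumptions; on systems with address-space layout randomization they change on every run, but the four values land in four distinct regions.

    #include <stdio.h>
    #include <stdlib.h>

    int static_var = 42;                           /* lives in the static segment */

    int main(void) {
        int stack_var = 1;                         /* lives in the stack segment */
        int *heap_ptr = malloc(sizeof *heap_ptr);  /* lives in the heap segment */

        printf("code:   %p\n", (void *)main);      /* main's instructions: code segment;
                                                      the cast is a common POSIX-ism */
        printf("static: %p\n", (void *)&static_var);
        printf("stack:  %p\n", (void *)&stack_var);
        printf("heap:   %p\n", (void *)heap_ptr);

        free(heap_ptr);
        return 0;
    }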
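
The page-size sketch asks the OS how big its pages are. On POSIX systems this is a one-line query; 4096 bytes is the usual answer on x86-64 Linux, though that figure is an assumption, not a guarantee.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long page_size = sysconf(_SC_PAGESIZE);  /* ask the OS for its page size */
        printf("page size: %ld bytes\n", page_size);
        return 0;
    }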
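
The mmap sketch shows memory mapping: the program treats a file as ordinary memory, and the OS pages in only the parts that are touched. It is POSIX-only, and the path /etc/hostname is an assumption; any readable, non-empty file works.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) return 1;

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) return 1;

        /* Map the file into the address space; no read() calls needed. */
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) return 1;

        printf("first byte of the file: %c\n", data[0]);  /* touching it pages it in */

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }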
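
The fork sketch illustrates copy-on-write: right after fork(), parent and child share the same physical pages, and a page is copied only when one process writes to it. The copying itself is invisible from C; what the program does show is that the child's write never reaches the parent. POSIX-only.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char buffer[64] = "shared until someone writes";

        if (fork() == 0) {                               /* child process */
            strcpy(buffer, "child's private copy");      /* write triggers the page copy */
            printf("child:  %s\n", buffer);
            return 0;
        }
        wait(NULL);                            /* parent: wait for the child to finish */
        printf("parent: %s\n", buffer);        /* the parent's page was never touched */
        return 0;
    }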
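
Finally, the traversal sketch shows that CPU caches reward programs which follow memory's linear layout. Summing a matrix row by row touches consecutive bytes and mostly hits the cache; summing column by column jumps far ahead on every step and mostly misses. The 4096-by-4096 size is an assumption; the row-major pass is typically several times faster.

    #include <stdio.h>
    #include <time.h>

    #define N 4096
    static int matrix[N][N];           /* 64 MiB, far larger than any CPU cache */

    static double sum_seconds(int by_rows) {
        clock_t start = clock();
        long long sum = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += by_rows ? matrix[i][j] : matrix[j][i];
        volatile long long sink = sum; /* keep the compiler from deleting the loop */
        (void)sink;
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void) {
        printf("row-major:    %.3f s (cache-friendly)\n", sum_seconds(1));
        printf("column-major: %.3f s (cache-hostile)\n", sum_seconds(0));
        return 0;
    }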
Course Details

  • Brilliant
  • Paid
  • English
  • Certificate Not Available
  • Available at any time
  • Beginner
  • N/A