Computer Abstractions and Technology
Eight Great Ideas in Computer Architecture
- Design for Moore’s Law: Moore’s observation/prediction that the number of transistors on a chip doubles roughly every two years, driving the cost per transistor down. Since designs take years to finish, architects must anticipate where the technology will be when the design ships, not where it is now.
- Use Abstraction to Simplify Design: Use abstractions to characterize the design at different levels of representation; lower-level details are hidden to offer a simpler, easier-to-understand view at the higher level.
- Make the Common Case Fast: Making the most probable or common case fast will tend to have a bigger impact on performance than optimizing a rare case. (Common sense to be honest)
- Performance via Parallelism: Get more performance by computing operations in parallel
- Performance via Pipelining: We can improve throughput by having different parts of the CPU work on different tasks simultaneously, instead of dedicating the whole CPU to one task and waiting for it to finish before starting the next.
- Performance via Prediction: It is often better to ask for forgiveness than permission. Predict and start working instead of waiting to know for sure, assuming that the mechanism to recover from a misprediction is not too expensive and the prediction is reasonably accurate.
- Hierarchy of Memories: We want memory to be fast, large, and cheap, but no single technology gives all three at once. To solve that problem we keep the fastest, smallest, and most expensive memory at the top of the hierarchy and the slowest, largest, and cheapest at the bottom.
- Dependability via Redundancy: We make systems dependable by including redundant components to take over when a failure occurs and to help detect failures.
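The "make the common case fast" idea can be quantified with Amdahl's Law, which the book develops later in the chapter; a minimal sketch (the fractions and speedup factors below are made-up illustrative numbers):

```python
def speedup(fraction_affected, factor):
    """Amdahl's Law: overall speedup when `fraction_affected`
    of the execution time is sped up by `factor`."""
    return 1.0 / ((1.0 - fraction_affected) + fraction_affected / factor)

# Optimizing a common case (80% of time) by 2x beats
# optimizing a rare case (10% of time) by 10x.
common = speedup(0.80, 2)    # ~1.67x overall
rare   = speedup(0.10, 10)   # ~1.10x overall
```

This is why the common case wins: the rare case simply does not occupy enough of the execution time for even a large local speedup to matter.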
High-level language to Language of Hardware
- High-level language → Compiler → Assembly language → Assembler → Binary machine language
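A toy illustration of that chain (all names and encodings here are hypothetical, not a real toolchain or ISA): a one-statement "compiler" lowers an addition into a made-up assembly instruction, and an "assembler" encodes it as bits:

```python
def compile_stmt(stmt):
    # "Compiler": lower a statement like `x1 = x2 + x3`
    # into a toy assembly instruction.
    dest, expr = [s.strip() for s in stmt.split("=")]
    a, b = [s.strip() for s in expr.split("+")]
    return f"add {dest}, {a}, {b}"

# Hypothetical register numbering for the toy machine.
REGS = {"x0": 0, "x1": 1, "x2": 2, "x3": 3}

def assemble(asm):
    # "Assembler": encode the toy instruction as a bit string
    # (made-up format: 4-bit opcode, then 4 bits per register).
    op, rest = asm.split(" ", 1)
    rd, rs1, rs2 = [r.strip() for r in rest.split(",")]
    fields = [0b0001, REGS[rd], REGS[rs1], REGS[rs2]]
    return "".join(f"{f:04b}" for f in fields)

asm = compile_stmt("x1 = x2 + x3")   # -> "add x1, x2, x3"
bits = assemble(asm)                 # -> "0001000100100011"
```

Each stage takes the previous stage's output as input, which is exactly the layering the pipeline above describes: the hardware only ever sees the final binary.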
Under the Covers
- The five classic components of a computer are input, output, memory, datapath, and control. The last two are sometimes combined and called the processor.
Todo:
- Read more about LCDs or displays in general and how they work
- How are images displayed like a raster refresh buffer, frame buffer, etc.
- The datapath performs the arithmetic operations, and the control tells the datapath, memory, and I/O devices what to do according to the instructions of the program.
- DRAM (Dynamic Random Access Memory) - Memory built as an integrated circuit; it provides random access to any location. Access times are around 50 ns.
- Cache Memory - Acts as a small, fast buffer for the DRAM memory. Cache is built using a different memory technology, SRAM (Static Random Access Memory), which is faster but less dense and hence more expensive.
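The payoff of putting a small SRAM cache in front of DRAM can be sketched with the usual average memory access time formula (the hit time and hit rate below are illustrative assumptions, not figures from the text):

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: every access pays the cache
    hit time, and the missing fraction also pays the DRAM trip."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Assumed numbers: 1 ns SRAM cache hit, 50 ns DRAM access,
# and 95% of accesses hitting in the cache.
print(amat(1.0, 0.05, 50.0))  # 3.5 ns on average, vs ~50 ns with no cache
```

Even a modest hit rate makes most accesses look as fast as SRAM, which is the whole point of the memory hierarchy idea above.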
Todo:
- The progression of memory from HDD to SSD’s and what makes SSD’s so fast
- Difference between Cache, SRAM, DRAM and, how are they actually made or read from
- One of the most important abstractions is the interface between hardware and lowest-level software → Instruction Set Architecture(ISA) or simply the Architecture of a computer.
- The combination of the basic instruction set and the operating system interface provided for application programmers is called the Application Binary Interface (ABI).
- Nonvolatile memory: Things like DRAM retain data only while they are receiving power, making them volatile. We need a form of memory that retains data even in the absence of power and can be used to store programs between runs.
- Flash Memory: Nonvolatile semiconductor memory. It is slower than DRAM but much cheaper per bit.