Lecture 26 -- Security at the architecture level
================================================
May 26, 2005, Bill Mark

What is "security"?
-------------------
- control over capabilities used by software
- secrecy/privacy (of data) -- prevent 'read' access
- tamperproofing -- prevent 'write' access
  . tamper detection (of data; of communication channels; of processing elements)
  . tamper resistance
- authentication -- is the person/software/hardware what I think it is?
  (i.e. can I trust it?)
- non-repudiation/signatures

Security is a system problem
----------------------------
- processor hardware
- I/O hardware
- system software (e.g. O/S kernel)
- application software
- network
- user (client)
- other user (server/business/etc.)

Trust
-----
- Trust = will behave as expected
- whom/what do I trust?
- for what actions?

Principles of system design for security
----------------------------------------
- Minimize the number of different entities that must be trusted
- Make each of them as simple as possible
  (guards against bugs and unexpected attacks; facilitates formal verification)

Entities in a modern system -- a giant swamp of stuff
-----------------------------------------------------
- One or more users
- Paths to the user:
  . input devices (keyboard, mouse)
  . output devices (monitor)
- CPU
- Chipset and memories
- BIOS (bootstrap)
- DRAM -- temporary storage
- Disk -- permanent storage
- Operating system kernel
  [which in turn comes from Microsoft, or lots of places if open source]
- Operating system device drivers [from device manufacturers]
  . disk, network, keyboard, video, USB flash, PDA dock, digital camera, ...
- "Installed" application software
  . email, browser, browser plugins, a/v players, word processor, PDF reader, compiler, ...
  . usually must be installed as Administrator on Windows systems!
- "Web" software
  . Java / JavaScript / ActiveX from web sites

Some trust scenarios
--------------------
- User types a password without it being intercepted by other software
- I display output to the user without it being intercepted by other software
  . assuming the user is cooperating
  . assuming the user is an adversary ('content protection')
- User data (e.g. financial records, email) cannot be read by other software
- An operating system upgrade is installed without being intercepted by other software
- Anonymous money is 'stored' in the hardware
- Uniqueness of hardware (software piracy protection)

More fundamentally, what might I want to trust?
-----------------------------------------------
- Hardware
- Software
- Storage
- I/O path to user

Classical approach
------------------
- Rings of trust
  . most trusted
  . medium trust
  . no trust
- Hardware cooperates with the 'most trusted' software to enforce these
- E.g.
  . Processes are tagged with their trust level
  . VM pages are marked as to who can read or write them
    (extra bits in the page table -- see the C sketch after this section)
  . I/O instructions (e.g. "out") are restricted to certain trust levels
  . Code executing at a lower trust level is preempted by hardware
    (interrupt, trap) to run code at a higher trust level
  . Secure transitions between trust levels
  . A TRAP enters a higher trust level
  . A user process can have execute-only or read-only access to pages
    that are read/write at higher trust levels
  . Can have multiple instances of the lower trust levels
    (e.g. different users, protected from each other)
- Most modern HW/operating systems:
  . Two trust levels -- "user" and "kernel"
  . Not enough when device drivers come from other entities
  . Not enough when the O/S kernel is incredibly complex
  . Particularly if we care about data security, where violations might not
    be obviously detectable the way a crash is
- New tweaks
  . "no execute" bit on pages (AMD64) -- better protection against buffer overruns
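
To make the page-protection mechanism concrete, here is a minimal C sketch of
the check the MMU performs on every access under the classical scheme.  The
entry layout, field names, and ring numbering below are invented for
illustration (they do not match any real architecture's page-table format),
but the logic is the one described above: each page carries extra bits saying
which trust levels may read, write, or execute it, plus an AMD64-style
"no execute" bit, and real hardware traps to the most-trusted level instead of
returning false.

    /* Hypothetical page-table entry with per-page protection bits. */
    #include <stdbool.h>
    #include <stdio.h>

    enum ring { RING_KERNEL = 0, RING_DRIVER = 1, RING_USER = 3 };  /* lower = more trusted */
    enum access { ACC_READ, ACC_WRITE, ACC_EXEC };

    struct pte {
        unsigned frame;          /* physical frame number               */
        unsigned min_read_ring;  /* least-trusted ring allowed to read  */
        unsigned min_write_ring; /* least-trusted ring allowed to write */
        bool     no_execute;     /* AMD64-style NX bit                  */
    };

    /* The comparison the hardware makes; a violation would raise a trap
     * into the most-trusted level rather than return false. */
    static bool access_allowed(const struct pte *p, enum ring cur, enum access a)
    {
        switch (a) {
        case ACC_READ:  return cur <= p->min_read_ring;
        case ACC_WRITE: return cur <= p->min_write_ring;
        case ACC_EXEC:  return !p->no_execute && cur <= p->min_read_ring;
        }
        return false;
    }

    int main(void)
    {
        /* Kernel data: readable/writable only in ring 0, never executable. */
        struct pte kernel_data = { 42, RING_KERNEL, RING_KERNEL, true };
        /* Shared code: user may read/execute it, only the kernel may write it. */
        struct pte shared_code = { 43, RING_USER, RING_KERNEL, false };

        printf("user read of kernel data:  %s\n",
               access_allowed(&kernel_data, RING_USER, ACC_READ) ? "ok" : "trap");
        printf("user exec of shared code:  %s\n",
               access_allowed(&shared_code, RING_USER, ACC_EXEC) ? "ok" : "trap");
        printf("user write of shared code: %s\n",
               access_allowed(&shared_code, RING_USER, ACC_WRITE) ? "ok" : "trap");
        return 0;
    }

The same comparison also covers the "execute-only or read-only at user level,
read/write at a higher trust level" case from the list above: shared_code is
readable and executable in ring 3 but writable only in ring 0.
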
Next-generation approaches
--------------------------
Microsoft = NGSCB (Next-Generation Secure Computing Base)
Intel = LaGrande Technology

Components:
* "Smartcard" hardware within the processor (and possibly every other chip
  in the system!)
  - small secure storage: a protected private key -- for authentication and
    decryption of 'sealed' data
  - encryption and decryption engine
  - non-interruptible code execution for 'secure' code
* Small, highly trusted software -- the 'security kernel'
  - the "nub" in Microsoft terminology
  - cooperates with, and executes on, the security processor
* Security 'processes' ("NCA") -- software running in trust boxes
  - each piece of NCA software runs in its own box
* Secure hardware path to the keyboard, mouse, and display

Overall goals:
- Protection of trust domains and rights enforcement ("strong process isolation")
- Sealed storage (sketched in C below)
  . encrypt, then send to disk
  . decrypt and put in memory only accessible to signed code
- Secure path to/from the user
  . trusted I/O devices
  . take control of the machine away from an untrusted operating system
- HW/SW authentication
  . verify a unique machine ID
  . verify that the O/S software is of a certain version
  . etc.
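
The 'sealed storage' goal above is mostly about key management, so here is a
toy C sketch of that part only.  The idea: data is encrypted before it goes to
the untrusted disk under a key derived from a secret that never leaves the
security hardware plus a hash of the requesting program's code, so only the
same code on the same machine can unseal it.  Everything here is a stand-in --
the constant "machine secret", an FNV hash as the 'code identity', and an XOR
keystream in place of the hardware encryption engine -- the point is only how
the key is bound to machine + code.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Secret held inside the security processor; software never reads it directly. */
    static const uint64_t MACHINE_SECRET = 0x5ec0de5ec0de5ecdULL;

    /* Stand-in for a cryptographic hash of the calling program ("code identity"). */
    static uint64_t code_identity(const char *program_image)
    {
        uint64_t h = 1469598103934665603ULL;              /* FNV-1a, illustration only */
        for (; *program_image; program_image++)
            h = (h ^ (uint8_t)*program_image) * 1099511628211ULL;
        return h;
    }

    /* Key derivation: binds the sealed blob to this machine AND this program. */
    static uint64_t seal_key(uint64_t code_id)
    {
        return MACHINE_SECRET ^ (code_id * 0x9e3779b97f4a7c15ULL);
    }

    /* Toy keystream cipher standing in for the hardware encryption engine;
     * sealing and unsealing are the same operation with the same key. */
    static void xcrypt(unsigned char *buf, size_t len, uint64_t key)
    {
        for (size_t i = 0; i < len; i++) {
            key = key * 6364136223846793005ULL + 1442695040888963407ULL;
            buf[i] ^= (unsigned char)(key >> 56);
        }
    }

    static void show_hex(const char *label, const unsigned char *buf, size_t n)
    {
        printf("%-22s", label);
        for (size_t i = 0; i < n; i++) printf("%02x", buf[i]);
        printf("...\n");
    }

    int main(void)
    {
        char blob[32] = "account balance: 1234.56";
        size_t len = strlen(blob);

        /* The "email client" seals its data before it goes to the untrusted disk. */
        uint64_t k_owner = seal_key(code_identity("email-client-v1"));
        xcrypt((unsigned char *)blob, len, k_owner);
        show_hex("on disk (sealed):", (unsigned char *)blob, 8);

        /* Different code on the same machine derives a different key,
         * so its unseal attempt yields garbage rather than the plaintext. */
        char copy[32];
        memcpy(copy, blob, sizeof copy);
        xcrypt((unsigned char *)copy, len, seal_key(code_identity("rogue-app")));
        show_hex("rogue unseal attempt:", (unsigned char *)copy, 8);

        /* The sealing program recovers its data. */
        xcrypt((unsigned char *)blob, len, k_owner);
        printf("%-22s%s\n", "owner unseal:", blob);
        return 0;
    }

One consequence of binding the key to a hash of the code is that any change to
the program changes its identity, so a real design would need an explicit
story for software upgrades and for migrating sealed data between machines.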