CS 345 Distributed Systems — Spring 2022

ANNOUNCEMENTS

Remember to check this (and Canvas) regularly!

STAFF

Instructor

Fabián E. Bustamante
Seely Mudd #3905
fabianb@cs…

TAs

Rashna Kumar
Seely Mudd
RashnaKumar2024@u…

PMs

Vishwani Sati
Sebastian Perez-Delgado

LOCATION AND TIME

Lectures: Tuesdays and Thursdays, 11:00AM-12:20PM | Tech L361 (map)

Professor Office Hours: By appointment

TA Office Hours:

  • Wednesday 1-2PM | Mudd 3532
  • Thursday 4-5PM | Mudd 3534
  • Friday 3-4PM | Mudd 3532

TA/Recitation Sessions: TBD

Take-home Final: Due Wed. June 8, 2022 at 8PM CDT

CATALOG DESCRIPTION

Basic principles behind distributed systems (collections of independent components that appear to users as a single coherent system) and main paradigms used to organize them.

COURSE PREREQUISITES

DISABILITY

In compliance with Section 504 of the 1973 Rehabilitation Act and the Americans with Disabilities Act, Northwestern University is committed to providing equal access to all programming. Students with disabilities seeking accommodations are encouraged to contact the office of Services for Students with Disabilities (SSD) at +1 847 467-5530 or ssd@northwestern.edu. SSD is located in the basement of Scott Hall. Additionally, I am available to discuss disability-related needs during office hours or by appointment.

OVERVIEW

Distributed systems are collections of networked computers that coordinate their actions by exchanging messages. Most of the computing systems you interact with every day (e.g. email, the Web, Google, Skype, Facebook, …) are in fact distributed, for reasons ranging from fault tolerance and performance to the inherently geographic nature of their requirements.

In this course, we will discuss some of the basic principles behind distributed systems as well as common approaches and techniques used to build them. We illustrate these ideas through case studies of widely used or seminal systems.

SOME OF THE TOPICS COVERED

  • Networking and Communication
  • Physical and Logical Clocks
  • Coordination in Distributed Systems
  • Distributed storage and file systems
  • Name services
  • Global state and transactions
  • Replication and consistency
  • Consensus
  • Fault tolerance
  • Security and privacy

COMMUNICATION CHANNELS

There are a number of communication channels set up for this class:

  • We will use the course website and associated Canvas site to post announcements related to the course. You should check this regularly for schedule changes, clarifications and corrections to assignments, and other course-related announcements.
  • We will use Campuswire for class discussion. TAs and I will check Campuswire frequently and answer unresolved questions, but you’re also encouraged to collaborate with each other and answer each other’s questions.
  • There is always email for questions that would be inappropriate to post on the discussion board. When using email to contact the staff, please start your subject line with “CS345: helpful-comment” to ensure a prompt response.

COURSE ORGANIZATION

The course is organized as a series of lectures and paper discussions, four projects, homework assignments, and a take-home final.

  • Lectures and discussions – A set of lectures on the core of the material.
  • Readings – Textbook and paper readings in preparation for (not as a substitute for) the lectures.
  • Homework assignments – A set of assignments meant as reading enforcers.
  • Projects – Four programming projects to give you a better understanding of the subject matter and experience with the Go programming language.
  • A take-home final.

GRADING

I use a criterion-referenced method to assign your grade; in other words, your grade will be based on how well you do relative to predetermined performance levels, rather than in comparison with the rest of the class. Thus, if a test has 100 possible points, anyone with a score of 90 or greater will get a grade in the A range (90-92: A-), those with scores of 80 or greater a grade in the B range (80-82: B-), those with scores of 70 or greater a C, and so on. Notice that this means that if everyone works hard and scores 93 or above, everyone gets an A.

Total scores (between 0 and 100) will be determined, roughly, as follows:

  • Homework assignments 20%
  • Class participation 15%
  • Projects 45%
  • Take-home final 20%
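
To make the arithmetic concrete, here is a small hypothetical sketch in Go (the course's project language) of how the weights and cutoffs above combine. The function names and sample scores are made up for illustration; this is not an official grade calculator.

// Hypothetical sketch of the grading scheme above, not an official
// grade calculator. Cutoffs follow the text: >=93 A, 90-92 A-,
// 83-89 B, 80-82 B-, 70-79 C, "and so on".
package main

import "fmt"

// weightedTotal combines the four components using the listed weights.
func weightedTotal(homework, participation, projects, final float64) float64 {
	return 0.20*homework + 0.15*participation + 0.45*projects + 0.20*final
}

// letterGrade applies the criterion-referenced cutoffs from the syllabus.
func letterGrade(total float64) string {
	switch {
	case total >= 93:
		return "A"
	case total >= 90:
		return "A-"
	case total >= 83:
		return "B"
	case total >= 80:
		return "B-"
	case total >= 70:
		return "C"
	default:
		return "below C" // the syllabus continues "and so on"
	}
}

func main() {
	// Sample component scores, each out of 100 (made up for illustration).
	total := weightedTotal(95, 100, 88, 91)
	fmt.Printf("total %.1f -> %s\n", total, letterGrade(total)) // total 91.8 -> A-
}

Note that under these weights, project work matters most: it carries almost half of the total.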

POLICIES

Late policy:

Unless otherwise indicated, homework assignments and projects are due by midnight on their due date. If you hand in an assignment late, we will take off 10% for each day (or portion thereof) it is late. Assignments that are three or more days late receive no credit.
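
As an illustration (a hypothetical sketch, not an official tool), the policy amounts to the following computation; the deadline and submission times below are made up.

// Hypothetical sketch of the late policy above: 10% off per day (or
// portion thereof) late; three or more days late means no credit.
package main

import (
	"fmt"
	"math"
	"time"
)

// adjustedScore applies the late penalty to a raw score.
func adjustedScore(score float64, deadline, submitted time.Time) float64 {
	if !submitted.After(deadline) {
		return score // on time
	}
	// "Or portion thereof": every started day counts as a full day.
	daysLate := int(math.Ceil(submitted.Sub(deadline).Hours() / 24))
	if daysLate >= 3 {
		return 0 // three or more days late: no credit
	}
	return score * (1 - 0.10*float64(daysLate))
}

func main() {
	deadline := time.Date(2022, time.April, 15, 0, 0, 0, 0, time.Local) // made-up due date (midnight)
	submitted := deadline.Add(30 * time.Hour)                           // one day and a portion -> 2 days late
	fmt.Println(adjustedScore(90, deadline, submitted))                 // 72
}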

Cheating vs. Collaboration:

Collaboration is a really good thing and we encourage it. On the other hand, cheating is considered a very serious offense. When in doubt, remember that it’s OK to meet with colleagues, study for exams together, and discuss assignments with them. However, what you turn in must be your own (or for group projects, your group’s own) work. Copying code, solution sets, etc. from other people or any other sources is strictly prohibited.

For projects, we do code walkthroughs with randomly selected groups and the staff. The idea is simple: if we pick your group, you and your teammates will meet with the TA and/or instructor and walk them through your code, answering any questions they may have. To get full credit you must be able to carry the walkthrough, showing that you understand your code.

Note that our random sampling is with replacement, i.e., your group may be selected multiple times.

The following is our intended calendar, with topics, slides (as they become available), and reference material. Note that “MSAT3 #” refers to chapters/sections of M. van Steen and A. Tanenbaum, Distributed Systems, 3rd ed., 2017. Papers, except when tagged as [ref], may be the subject of homework assignments or final questions. All papers are available in Canvas (the “Reading” folder in the “Files” section); some links in the calendar point to those files.

Week Date Topics and Reading
1 03/29 (Northwestern’s “Monday”)
03/31 Introduction

Reading:

  • MSAT3 1.1, 1.2
  • Google’s Introduction to Distributed System Design
    [Local PDF]
  • J. Dean and S. Ghemawat, MapReduce: Simplified Data Processing on Large Clusters. Proc. of OSDI, 2004
    [Local PDF]
2 04/05 Networking

Reading:

  • MSAT3 4.1
04/07 Communication and Organization

Reading:

  • MSAT3 2.3, 4.2
3 04/12 Physical and Logical Clocks

Reading:

  • MSAT3 6.1, 6.2
  • L. Lamport. Time, Clocks, and the Ordering of Events in a Distributed System. Communications of the ACM, July 1978, pages 558-564.
    [Local PDF]
04/14 Global State

Reading:

  • K. M. Chandy and L. Lamport. Distributed Snapshots: Determining Global States of Distributed Systems. ACM Trans. Comput. Syst., 3(1):63-75, 1985.
    [PDF]
4 04/19 Coordination

Reading:

  • MSAT3 6.3-6.4
  • P. Hunt et al., ZooKeeper: Wait-free coordination for Internet-scale systems. Proc. of USENIX ATC, 2010.
    [Local PDF]
04/21 Failure and Failure Detection

Reading:

  • MSAT3 8.1
  • J. Leners et al., Detecting failures in distributed systems with the FALCON spy network. Proc. of SOSP, 2011.
    [Local PDF]
5 04/26 Consistency and Replication

Reading:

  • D. Scales et al., The Design of a Practical System for Fault-Tolerant Virtual Machines, ACM SIGOPS OSR, December 2010.
    [Local PDF]
04/28 Eventual Consistency

Reading:

  • D. Terry et al., Managing Update Conflicts in Bayou, a Weakly Connected Replicated Storage System. Proc. of SOSP, 1995.
    [Local PDF]
6 05/03 Overlay Networks

Reading:

  • MSAT3 5.2
  • I. Stoica et al., Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications. Proc. of SIGCOMM, 2001.
    [Local PDF]
05/05 Scaling Out Key-Value Stores

Reading:

  • G. DeCandia et al., Dynamo: Amazon’s Highly Available Key-value Store. Proc. of SOSP, 2007.
    [Online]
7 05/10 The Consensus Problem and the Impossibility of Consensus

Reading:

  • M. Fischer, N. Lynch, and M. Paterson, Impossibility of Distributed Consensus with One Faulty Process. Journal of the ACM, 32(2), April 1985.
    [Local PDF]
05/12 Consensus

Reading:

  • D. Ongaro and J. Ousterhout, In Search of an Understandable Consensus Algorithm. Proc. of USENIX ATC, 2014 (Extended version).
    [Local PDF]
8 05/17 Byzantine Fault Tolerance

Reading:

  • M. Castro and B. Liskov. Practical Byzantine Fault Tolerance. Proc. of OSDI, 1999.
    [Local PDF]
05/19 Distributed File Systems

Reading:

  • S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google File System. Proc. of SOSP, 2003.
    [Local PDF]
9 05/24 Content Distribution Networks

Reading:

  • F. Chen et al., End-User Mapping: Next Generation Request Routing for Content Delivery. Proc. of SIGCOMM, 2015.
    [Local PDF]
05/26 Video Streaming

Reading:

  • F. Yan et al., Learning in situ: a randomized experiment in video streaming. Proc. of NSDI, 2020.
    [PDF]
10 05/31 Distributed Transactions

Reading:

  • J. Corbett et al., Spanner: Google’s Globally-Distributed Database. Proc. of OSDI, 2012.
    [Local PDF]
06/02 Distributed Ledgers

Reading:

  • E. Androulaki et al., Hyperledger Fabric: a distributed operating system for permissioned blockchains. Proc. of EuroSys, 2018.
    [Local PDF]
* 06/08 Take-home final (due June 8th, 11:59PM CDT).

ASSIGNMENTS

There are four team-based projects, a set of basic homework assignments (mostly meant as reading enforcers), and a take-home final.

We will post all assignments on the course's Canvas site.

PROJECTS

There will be four projects, including a MapReduce library and a replicated state machine protocol. Projects are to be done in teams of 2-3 students (teams of one are not allowed).

All projects will be done in Go, a language originally created within Google but now a fully open-source project. Go is garbage collected and has built-in coroutines (called goroutines) and channels, which makes it well suited to building distributed systems. Its standard library is already quite comprehensive; for a taste, take a look at the net and rpc packages, which the short sketch below puts together.
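
As a first taste of these features, here is a minimal, self-contained sketch (the Echo service, its Upper method, and the port are made up for illustration, not part of any course project): it serves a toy RPC from a goroutine and issues several concurrent calls whose results are collected over a channel.

// Minimal sketch combining goroutines, channels, and net/rpc.
// The Echo service and port are illustrative only.
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
	"strings"
)

// Echo is a toy service. net/rpc methods must have two exported (or
// builtin) argument types, a pointer second argument, and return error.
type Echo struct{}

// Upper returns the upper-cased argument.
func (Echo) Upper(msg string, reply *string) error {
	*reply = strings.ToUpper(msg)
	return nil
}

func main() {
	// Server side: register the service and accept connections
	// in a background goroutine.
	if err := rpc.Register(Echo{}); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:4321")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(ln)

	// Client side: one goroutine per request; results flow back
	// over a buffered channel.
	msgs := []string{"hello", "go", "rpc"}
	results := make(chan string, len(msgs))
	for _, m := range msgs {
		go func(m string) {
			client, err := rpc.Dial("tcp", "127.0.0.1:4321")
			if err != nil {
				log.Fatal(err)
			}
			defer client.Close()
			var reply string
			if err := client.Call("Echo.Upper", m, &reply); err != nil {
				log.Fatal(err)
			}
			results <- reply
		}(m)
	}
	for range msgs {
		fmt.Println(<-results) // HELLO, GO, RPC (in arrival order)
	}
}

The same building blocks scale naturally to the kind of node-to-node communication and in-process coordination the projects call for.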

CALENDAR OF ASSIGNMENTS

MATERIALS

Papers/Textbooks

(Textbooks are for reference only)

Useful Go links