Tsinghua Cloud Computing Courseware: Distributed Computing
Mass Data Processing Technology on Large Scale Clusters
Summer 2007, Tsinghua University
All course material (slides, labs, etc.) is licensed under the Creative Commons Attribution 2.5 License. Many thanks to Aaron Kimball & Sierra Michels-Slettvet for their original version.

Introduction to Distributed Systems
Parallelization & Synchronization
Fundamentals of Networking
Remote Procedure Calls (RPC)
Transaction Processing Systems
Computer Speedup
Moore’s Law: “The density of transistors on a chip doubles every 18 months, for the same cost” (1965)
Image: Tom’s Hardware
Why slow down here? How, then, do we improve performance?

Scope of Problems
What can you do with 1 computer?
What can you do with 100 computers?
What can you do with an entire data center?

Distributed Problems
Rendering multiple frames of high-quality animation
Image: DreamWorks Animation

Distributed Problems
Simulating several hundred or thousand characters
Happy Feet © Kingdom Feature Productions; Lord of the Rings © New Line Cinema

Distributed Problems
Indexing the web (Google)
Simulating an Internet-sized network for networking experiments (PlanetLab)
Speeding up content delivery (Akamai)
What is the key attribute that all these examples have in common?
PlanetLab
PlanetLab is a global research network that supports the development of new network services.
PlanetLab currently consists of 809 nodes at 401 sites.

CDN - Akamai

Parallel vs. Distributed
Parallel computing can mean:
Vector processing of data (SIMD)
Multiple CPUs in a single computer (MIMD)
Distributed computing is multiple CPUs across many computers (MIMD)
A Brief History… 1975-85
Parallel computing was favored in the early years
Primarily vector-based at first
Gradually more thread-based parallelism was introduced
Cray 2 supercomputer (Wikipedia)

A Brief History… 1985-95
“Massively parallel architectures” start rising in prominence
Message Passing Interface (MPI) and other libraries developed
Bandwidth was a big problem

A Brief History… 1995-Today
Cluster/grid architecture increasingly dominant
Special node machines eschewed in favor of COTS technologies
Web-wide cluster software
Companies like Google take this to the extreme (10,000-node clusters)

Top 500, Architecture
Parallelization Idea
Parallelization is “easy” if processing can be cleanly split into n units:
[Diagram: the problem is partitioned into work units w1, w2, w3]

Parallelization Idea (2)
Spawn worker threads:
[Diagram: work units w1, w2, w3 are each handed to a worker thread]
In a parallel computation, we would like to have as many threads as we have processors, e.g., a four-processor computer would be able to run four threads at the same time.

Parallelization Idea (3)
Workers process data:
[Diagram: thread w1, thread w2, thread w3 each process their own work unit]

Parallelization Idea (4)
Report results
[Diagram: thread w1, thread w2, thread w3 report their results back to the master]
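A minimal sketch of this partition/spawn/process/report pattern in C with POSIX threads; the data set, chunk sizes, and names such as work_unit, worker, and NWORKERS are illustrative assumptions rather than code from the course.

/* Sketch: partition an array into one chunk per worker, spawn a thread
 * per chunk, let each worker process its chunk, then aggregate results. */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define N 1000

struct work_unit {
    const int *data;   /* start of this worker's chunk          */
    int        len;    /* number of elements in the chunk       */
    long       result; /* partial result reported back to main  */
};

static void *worker(void *arg) {
    struct work_unit *wu = arg;
    long sum = 0;
    for (int i = 0; i < wu->len; i++)
        sum += wu->data[i];
    wu->result = sum;          /* "report results" back to the master */
    return NULL;
}

int main(void) {
    int data[N];
    for (int i = 0; i < N; i++) data[i] = i;

    pthread_t threads[NWORKERS];
    struct work_unit units[NWORKERS];
    int chunk = N / NWORKERS;

    /* Partition the problem into w1..wn and spawn one thread per unit */
    for (int w = 0; w < NWORKERS; w++) {
        units[w].data = data + w * chunk;
        units[w].len  = (w == NWORKERS - 1) ? N - w * chunk : chunk;
        pthread_create(&threads[w], NULL, worker, &units[w]);
    }

    /* Wait for all workers to finish, then aggregate their results */
    long total = 0;
    for (int w = 0; w < NWORKERS; w++) {
        pthread_join(threads[w], NULL);
        total += units[w].result;
    }
    printf("total = %ld\n", total);
    return 0;
}

Compile with -pthread; one thread per processor, as the slide suggests, is the usual default.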
Parallelization Pitfalls
But this model is too simple!
How do we assign work units to worker threads?
What if we have more work units than threads?
How do we aggregate the results at the end?
How do we know all the workers have finished?
What if the work cannot be divided into completely separate tasks?
What is the common theme of all of these problems?

Parallelization Pitfalls (2)
Each of these problems represents a point at which multiple threads must communicate with one another, or access a shared resource.
Golden rule: Any memory that can be used by multiple threads must have an associated synchronization system!
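As one illustration of the golden rule, a common way to assign work units to worker threads is a shared queue where every access goes through a lock. The sketch below uses a pthread mutex; the queue layout and function names are assumptions for illustration, not code from the course.

/* Sketch: a shared work queue in which every access is protected by a
 * mutex, so threads can safely pull and push work units. */
#include <pthread.h>

#define MAX_UNITS 128

struct work_queue {
    int units[MAX_UNITS];      /* pending work unit ids                */
    int count;                 /* how many are still waiting           */
    pthread_mutex_t lock;      /* guards every access to this struct   */
};

void queue_init(struct work_queue *q) {
    q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
}

/* Master thread: add a work unit (the "queue full" case is ignored here). */
void queue_push(struct work_queue *q, int unit) {
    pthread_mutex_lock(&q->lock);
    if (q->count < MAX_UNITS)
        q->units[q->count++] = unit;
    pthread_mutex_unlock(&q->lock);
}

/* Worker threads: pull a unit, or get -1 if nothing is left to do. */
int queue_pop(struct work_queue *q) {
    pthread_mutex_lock(&q->lock);
    int unit = (q->count > 0) ? q->units[--q->count] : -1;
    pthread_mutex_unlock(&q->lock);
    return unit;
}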
What’s Wrong With This?
Thread 1: void foo() { x++; y = x; }
Thread 2: void bar() { y++; x++; }
If the initial state is y = 0, x = 6, what happens after these threads finish running?
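A runnable version of the example, assuming x and y are ordinary shared global ints and using POSIX threads; the thread setup code is an illustrative addition, not part of the slides.

/* Sketch: the two threads from the slide run with no synchronization,
 * so the final values of x and y depend on how the scheduler
 * interleaves them. */
#include <pthread.h>
#include <stdio.h>

int x = 6, y = 0;          /* shared, unsynchronized state from the slide */

void *foo(void *arg) { x++; y = x; return NULL; }
void *bar(void *arg) { y++; x++; return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, foo, NULL);
    pthread_create(&t2, NULL, bar, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Depending on the interleaving, y ends up 7 or 8; and because each
     * ++ is really a load/increment/store (as the next slide shows), an
     * unlucky interleaving can even lose one of the increments to x. */
    printf("x = %d, y = %d\n", x, y);
    return 0;
}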
Multithreaded = Unpredictability
When we run a multithreaded program, we don’t know what order threads run in, nor do we know when they will interrupt one another.
Many things that look like “one step” operations actually take several steps under the hood:
Thread 1: void foo() { eax = mem[x]; inc eax; mem[x] = eax; ebx = mem[x]; mem[y] = ebx; }
Thread 2: void bar() { eax = mem[y]; inc eax; mem[y] = eax; eax = mem[x]; inc eax; mem[x] = eax; }
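One possible interleaving of those expanded instruction sequences, starting from x = 6, y = 0, shows how an update can be lost. This trace is illustrative only; each thread has its own copy of the eax register.

Thread 1: eax = mem[x]    ; eax1 = 6
Thread 2: eax = mem[y]    ; eax2 = 0
Thread 2: inc eax         ; eax2 = 1
Thread 2: mem[y] = eax    ; y = 1
Thread 2: eax = mem[x]    ; eax2 = 6   (reads x before Thread 1 has stored)
Thread 2: inc eax         ; eax2 = 7
Thread 2: mem[x] = eax    ; x = 7
Thread 1: inc eax         ; eax1 = 7   (still based on the stale value 6)
Thread 1: mem[x] = eax    ; x = 7      (Thread 2's increment is overwritten)
Thread 1: ebx = mem[x]    ; ebx1 = 7
Thread 1: mem[y] = ebx    ; y = 7
Final state: x = 7, y = 7 — one of the two x++ operations has vanished.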
Multithreaded = Unpredictability
This applies to more than just integers:
Pulling work units from a queue
Reporting work back to the master unit
Telling another thread that it can begin the “next phase” of processing
… All require synchronization!

Synchronization Primitives
A synchronization primitive is a special shared variable that guarantees that it can only be accessed atomically.
Hardware support guarantees that operations on synchronization primitives only ever take one step.
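As a concrete illustration of hardware-supported one-step operations, C11's <stdatomic.h> exposes primitives such as atomic_fetch_add that compile down to a single atomic instruction (in 2007 one would have used compiler builtins such as GCC's __sync_fetch_and_add; the idea is the same). The counter name below is an illustrative assumption.

/* Sketch: an atomic counter updated from many threads without locks.
 * atomic_fetch_add performs the read-modify-write as one indivisible step. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_int counter = 0;    /* a shared variable with atomic access */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&counter, 1);   /* atomic: no updates are lost */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", atomic_load(&counter));  /* always 200000 */
    return 0;
}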
Semaphores
A semaphore is a flag that can be raised or lowered in one step
Semaphores were flags that railroad engineers would use when entering a shared track
[Diagram: the semaphore flag in its Set and Reset positions]
Only one side of the semaphore can ever be red! (Can both be green?)

Semaphores
set() and reset() can be thought of as lock() and unlock()
Calls to lock() when the semaphore is already locked cause the thread to block.
Pitfalls: Must “bind” semaphores to particular objects; must remember to unlock correctly
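A minimal sketch of using a semaphore as lock()/unlock() around the earlier foo/bar example, here with the POSIX semaphore API: sem_wait blocks if the semaphore is already "locked", exactly as described above. The variable names and thread setup are illustrative assumptions.

/* Sketch: a binary semaphore used as lock()/unlock() so that the updates
 * to the shared x and y from the earlier example can no longer interleave. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

int x = 6, y = 0;
sem_t lock;                       /* binary semaphore bound to x and y */

void *foo(void *arg) {
    sem_wait(&lock);              /* lock(): blocks if already locked  */
    x++;
    y = x;
    sem_post(&lock);              /* unlock(): don't forget this!      */
    return NULL;
}

void *bar(void *arg) {
    sem_wait(&lock);
    y++;
    x++;
    sem_post(&lock);
    return NULL;
}

int main(void) {
    sem_init(&lock, 0, 1);        /* initial value 1 = "unlocked"      */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, foo, NULL);
    pthread_create(&t2, NULL, bar, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("x = %d, y = %d\n", x, y);   /* now always prints x = 8, y = 8 */
    return 0;
}

Note that every sem_wait must be paired with a sem_post on every path out of the critical section, which is exactly the "must remember to unlock correctly" pitfall above.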