Concurrent computing

{{Programming paradigms}}
 
'''Concurrent computing''' is a form of [[computation|computing]] in which several computations are executed during overlapping time periods, that is, [[Concurrency (computer science)|concurrently]], rather than sequentially (with each computation finishing before the next one begins). This is a property of a system—this may be an individual [[computer program|program]], a [[computer]], or a [[computer network|network]]—and there is a separate execution point or "thread of control" for each computation ("process"). A ''concurrent system'' is one where a computation can advance without waiting for all other computations to complete.<ref>''Operating System Concepts'' 9th edition, Abraham Silberschatz. "Chapter 4: Threads"</ref>
 
As a [[programming paradigm]], concurrent computing is a form of [[modular programming]], namely [[decomposition (computer science)|factoring]] an overall computation into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include [[Edsger Dijkstra]], [[Per Brinch Hansen]], and [[C.A.R. Hoare]].
 
== Introduction ==
{{See also|Parallel computing}}
{{multiple issues|section=yes|
{{refimprove section|date=December 2016}}
{{original research section|date=December 2016}}
}}
The concept of concurrent computing is frequently confused with the related but distinct concept of [[parallel computing]],<ref name=waza>[[Rob Pike|Pike, Rob]] (2012-01-11). "Concurrency is not Parallelism". ''Waza conference'', 11 January 2012. Retrieved from http://talks.golang.org/2012/waza.slide (slides) and http://vimeo.com/49718712 (video).</ref><ref>{{cite web
|url=https://wiki.haskell.org/Parallelism_vs._Concurrency
|title=Parallelism vs. Concurrency
|work=Haskell Wiki
}}</ref> although both can be described as "multiple processes executing ''during the same period of time''". In parallel computing, execution occurs at the same physical instant: for example, on separate [[central processing unit|processors]] of a [[multi-processor]] machine, with the goal of speeding up computations—parallel computing is impossible on a ([[Multi-core processor|one-core]]) single processor, as only one computation can occur at any instant (during any single clock cycle).{{efn|This is discounting parallelism internal to a processor core, such as pipelining or vectorized instructions. A one-core, one-processor ''machine'' may be capable of some parallelism, such as with a [[coprocessor]], but the processor alone is not.}} By contrast, concurrent computing consists of process ''lifetimes'' overlapping, but execution need not happen at the same instant. The goal here is to model processes in the outside world that happen concurrently, such as multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.<ref>{{cite book |first=Fred B. |last=Schneider |title=On Concurrent Programming |publisher=Springer |isbn=9780387949420}}</ref>{{rp|1}}
 
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via [[time-sharing]] slices: only one process runs at a time, and if it does not complete during its time slice, it is ''paused'', another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.{{citation needed|date=December 2016}}
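
A minimal Go sketch of such interleaving on one core follows; the task names and the explicit <code>runtime.Gosched()</code> yield points are illustrative assumptions, not taken from any cited source:

<syntaxhighlight lang="go">
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Limit the Go scheduler to a single logical processor, so the two
	// goroutines below cannot run in parallel; they can only interleave.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	task := func(name string) {
		defer wg.Done()
		for i := 1; i <= 3; i++ {
			fmt.Printf("%s: step %d\n", name, i)
			runtime.Gosched() // yield, letting the scheduler resume the other task
		}
	}

	wg.Add(2)
	go task("T1")
	go task("T2")
	wg.Wait() // both tasks are "in progress" at once, but never execute at the same instant
}
</syntaxhighlight>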
 
Concurrent computations ''may'' be executed in parallel,<ref name=waza/><ref name="benari2006">{{cite book|last=Ben-Ari|first=Mordechai|title=Principles of Concurrent and Distributed Programming|publisher=Addison-Wesley|year=2006|edition=2nd|isbn=978-0-321-31283-9}}</ref> for example, by assigning each process to a separate processor or processor core, or [[Distributed computing|distributing]] a computation across a network. In general, however, the languages, tools, and techniques for parallel programming might not be suitable for concurrent programming, and vice versa.{{citation needed|date=December 2016}}
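
Continuing the hypothetical sketch above, the same concurrent structure becomes a candidate for true parallelism simply by letting the scheduler use every available core; the CPU-bound loop below is an invented stand-in for real work:

<syntaxhighlight lang="go">
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Allow one OS-level execution context per core (the default since Go 1.5,
	// shown here for emphasis), so goroutines may run on separate cores.
	runtime.GOMAXPROCS(runtime.NumCPU())

	var wg sync.WaitGroup
	for _, name := range []string{"T1", "T2"} {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			sum := 0
			for i := 0; i < 10_000_000; i++ { // CPU-bound work that can proceed simultaneously
				sum += i
			}
			fmt.Println(n, "finished with sum", sum)
		}(name)
	}
	wg.Wait()
}
</syntaxhighlight>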
 
The exact timing of when tasks in a concurrent system are executed depends on the [[Schedule (computer science)|scheduling]], and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:{{citation needed|date=December 2016}}
 
* T1 may be executed and finished before T2 or ''vice versa'' (serial ''and'' sequential)
* T1 and T2 may be executed alternately (serial ''and'' concurrent)
* T1 and T2 may be executed simultaneously at the same instant of time (parallel ''and'' concurrent)
 
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, ''concurrent/sequential'' and ''parallel/serial'' are used as opposing pairs.{{sfn|Patterson|Hennessy|2013|p=503}} A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a ''serial schedule''. A set of tasks that can be scheduled serially is ''[[Serializability|serializable]]'', which simplifies [[concurrency control]].{{citation needed|date=December 2016}}
 
== Tools ==
Tools that support concurrent computing may be built into individual programming languages (as language-level concurrency features), provided through libraries, or implemented at the operating-system level.
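
As one concrete instance of language-level support (the squaring producer here is a hypothetical example), Go provides the <code>go</code> statement for starting a concurrent task and channels as a built-in tool for communicating with it:

<syntaxhighlight lang="go">
package main

import "fmt"

func main() {
	results := make(chan int)

	go func() { // the "go" statement starts a concurrently executing producer
		for i := 1; i <= 3; i++ {
			results <- i * i
		}
		close(results)
	}()

	for v := range results { // the main task consumes values as they arrive
		fmt.Println(v)
	}
}
</syntaxhighlight>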
 
== Notes ==
{{notelist}}

== References ==
{{reflist}}

[[Category:Operating system technology]]